As Julia Galef said to me after Singularity Summit 2011, many academics (Hofstadter, Pinker, Dawkins, etc.) agree that intelligence explosion is quite plausible but nevertheless consider themselves “skeptics” of Singularity talk because they associate the Singularity with “timelines optimism” (e.g. Kurzweil’s prediction that the Singularity will occur in 2045). Thus, the Machine Intelligence Research Institute and other organizations may gain credibility by distancing themselves from timelines optimism.
On the other hand, timelines matter for decision making. If you think intelligence explosion is extremely unlikely before 2200, then it’s probably best to focus safety research on more near-term transformative technologies like synthetic biology and nanotechnology. But if intelligence explosion could plausibly arrive before 2070, then humanity should redirect significant resources toward AI safety research.
Of course, whatever your timelines, it remains the case that, as someone (Jaan Tallinn?) recently said, it’s insane for humanity to spend less than .00001% of world GDP (on the order of $7 million, given a world GDP of roughly $70 trillion) on the mitigation of risks to its own survival.