Update: Facing the Intelligence Explosion has a new home at FacingTheSingularity.com.
I’ve learned more in the past year than in the previous 5 years of my life combined. (Largely, because I got really good at independent learning.)
Also, I joined the Machine Intelligence Research Institute and discovered an enormous treasure trove of knowledge and analysis that hadn’t been written down or well-organized anywhere.
I’ve been trying to share what I’ve learned in a long series of brief, carefully written, and well-referenced articles, but such articles take a long time to write. It’s much easier to write long, rambling, unreferenced articles.
This new series, ‘Facing the Intelligence Explosion,’ is my new attempt to rush through explaining as much material as possible. I won’t edit, I won’t hunt down references, and I won’t try to be brief. I’ll just write, quickly.
The subject matter won’t appeal to everyone. The unedited writing certainly won’t. Nevertheless I suspect what I cover will be interesting and informative to many. Besides, this is my personal blog; I’ll write what I want, here.
I’ll begin with my personal background. It will help to know who I am and where I’m coming from. That information is some evidence about how you should respond to the other things I say.
I’ve told my personal story before, so here’s the quick version: I was raised a devoted evangelical Christian in Minnesota. I attended a Christian school, enthusiastically read pop-theology in my spare time, played in the worship band, went on short-term missionary trips around the world, experienced what I believed to be the Holy Spirit many times — I was a true believer.
Around age 22 I wanted nothing more than to be like Jesus to a lost and hurting world. Thus, I had to study the Historical Jesus, to figure out what “being like Jesus” meant. That’s when I learned all kinds of disturbing things about the Bible and the origins of Christianity. I wanted to protect my faith, so I read more and more “sophisticated” works of Christian apologetics. I also read a few books by atheists, so I could be “fair” in my investigation. The short story is that the atheists won, and I lost my faith in God completely.
This was horrifying at first, because my relationships, my morality, my sense of purpose, my life plans, and my picture of the world were all grounded in Christianity. Gradually, I built up a new worldview based on the mainstream scientific understanding of the world, an approach called “naturalism.” I launched Common Sense Atheism in November 2008 (at age 23) to explain atheism and naturalism to others, because I was unhappy with the lack of philosophical and rational seriousness displayed by the atheist blogosphere at that time.
My blog became one of the most popular atheism blogs rather quickly. I enjoyed translating the results of professional philosophy of religion for the mass public, and I enjoyed speaking with experts in the field for my podcast Conversations from the Pale Blue Dot.
Morality was always an important topic for me, and eventually I was writing more about morality than I was about philosophy of religion. When I launched my blog I was an error theorist, which is one way of being a moral anti-realist. Shortly after launching Common Sense Atheism, I encountered the work of Alonzo Fyfe, who presented the first theory of naturalistic moral realism that didn’t smell like bullshit to me. I came to accept and then promote his theory of “desire utilitarianism” (aka “desirism”) on my blog.
My moral views have continued to evolve, and my latest statement of moral theory is summarized here. I’ve come to think that moral language is so confused and contentious that we may want to abandon moral terms altogether, and I try to avoid classifying myself as either “moral realist” or “moral anti-realist” because, as always, whether I’m one or the other depends on what is meant by those terms, and people use the terms in a variety of ways. Whether anti-realism or naturalistic moral realism is “true” depends to some degree on one’s attitude toward the use of moral language, as Richard Joyce argues in his paper Metaethical Pluralism, and as I argue in Pluralistic Moral Reductionism. (I still think desirism is one particularly useful way to talk about morality, much better than many others, and it fits within the framework of pluralistic moral reductionism.)
I was also always interested in rationality, at least since my deconversion, during which I discovered that I could easily be strongly confident of things that were total nonsense. How could the human brain be so incredibly misled? Obviously, I wasn’t Aristotle’s “rational animal.” Instead, I was Gazzaniga’s rationalizing animal. Critical thinking was a major focus of Common Sense Atheism, and I spent as much time criticizing poor thinking in atheists as I did criticizing poor thinking in theists.
My interest in rationality inevitably led me (in mid-2010, I think) to the largest and best treasure trove of articles on the mainstream cognitive science of rationality: Less Wrong. It was there that I first encountered the idea of intelligence explosion and the need for Friendly AI, though I had encountered the mainstream machine ethics literature back in June 2009.
I tell the story of my first encounter with the famous paragraph from I.J. Good on intelligence explosion here. In short:
Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed, I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.
I spent a week looking for counterarguments, to check whether I was missing something, and then accepted intelligence explosion as likely (so long as scientific progress continued). And though I hadn’t read Eliezer on the complexity of value, I had read David Hume and Joshua Greene. So I already understood that an arbitrary artificial intelligence would almost certainly not share our values.
My response to this discovery was immediate and transforming:
I put my other projects on hold and spent the next month reading almost everything Eliezer had written. I also found articles by Bostrom and Omohundro. I began writing articles for Less Wrong and learning from the community. I applied to Machine Intelligence Research Institute’s Visiting Fellows program and was accepted. I quit my job in L.A., moved to Berkeley, worked my ass off, got hired, and started collecting research related to rationality and intelligence explosion.
As my friend Will Newsome once said, “Luke seems to have two copies of the ‘Take Ideas Seriously’ gene.”
Of course, what some people laud as “taking ideas seriously,” others see as an innate tendency toward fanaticism. Here’s a comment I could imagine someone making:
I’m not surprised. Luke grew up believing that he was on a cosmic mission to save humanity before the world ended with the arrival of a super-powerful being (the return of Christ). He lost his faith and with it, his sense of epic purpose. His fear of nihilism made him susceptible to seduction by something that felt like moral realism (desirism), and his need for an epic purpose made him susceptible to seduction by Singularitarianism.
One response I could make to this would be to say that this is just “psychologizing,” and doesn’t address the state of the evidence for the claims I now defend concerning intelligence explosion. That’s true, but again: Plausible facts about my psychology do provide some Bayesian evidence about how you should respond to the words I’m writing in this series.
Another response I could make would be to explain why I don’t think this is quite what happened, though elements of it are certainly true. (For example, I don’t recall feeling that the return of Christ was imminent or that I was on a cosmic mission to save every last soul, though as an evangelical Christian I was theologically committed to those positions. But it’s certainly the case that I am drawn to “epic” things, like the rock band Muse and the movie Avatar.) But I don’t want to make this post even more about my personal psychology.
A third response would be to appeal to social proof. Some Common Sense Atheism readers have followed my writing closely enough to develop a strong respect for my serious commitment to rationality and to changing my mind when I’m wrong. When I started writing about Singularity issues, they thought, “Well, I used to think the Singularity stuff was pretty kooky, but if Luke is taking it seriously then maybe there’s more to it than I’m realizing,” and they followed me to Less Wrong (where I was now posting regularly). I’ll also mention that a significant causal factor in my being made Executive Director of the Machine Intelligence Research Institute after so little time with the organization was that its staff could see that I was seriously devoted to rationality and debiasing, to saying “oops” and changing my mind in response to argument, and to acting on decision theory rather than habit and emotion as often as I could.
In surveying my possible responses to the “fanaticism” criticism above, I’ve already put up something of a defense. But that’s about as far as I’ll take it. I want people to take what I say with a solid serving of salt. I am, after all, only human. Hopefully my readers will take into account not only my humanity but also the force of the arguments and evidence I will later supply concerning the arrival of machine superintelligence.