Derek Parfit ended Reasons and Persons in this way:
I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
(1) Peace.
(2) A nuclear war that kills 99% of the world’s existing population.
(3) A nuclear war that kills 100%.
(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.
…The Earth will remain inhabitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilized human history.
The difference between (2) and (3) may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second.
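To see how stark Parfit’s “day” analogy is, here is a quick back-of-the-envelope check (my own illustration; the 5,000-year figure is a stand-in for “a few thousand years” of civilized history):

```python
# Back-of-the-envelope check of Parfit's "day" analogy.
# Assumption (mine, not Parfit's): "a few thousand years" of civilization = 5,000 years.
civilization_years = 5_000
habitable_years = 1_000_000_000   # "at least another billion years"
seconds_per_day = 24 * 60 * 60    # 86,400

fraction_elapsed = civilization_years / habitable_years
print(f"Fraction of civilized history elapsed so far: {fraction_elapsed:.6f}")
# -> 0.000005, i.e. five millionths of the possible whole

print(f"Scaled to a day: {fraction_elapsed * seconds_per_day:.2f} seconds")
# -> about 0.43 seconds: indeed "only a fraction of a second"
```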
Many share the view that total human extinction would be far worse than the loss of even 99.999% of all humans, since in the latter case enough humans would survive to eventually repopulate the Earth and perhaps the galaxy.
Moreover, total human extinction is, perhaps for the first time in recorded history, quite plausible. Experts who have voiced this concern include Martin Rees (Our Final Hour), John Leslie (The End of the World), and others (Global Catastrophic Risks, from which most of the data below comes).
Perhaps we ought to take a moment of our time to assess these risks.
What disasters could cause total human extinction?
Well. The heat death of the universe definitely will. But that’s a long way off. Let’s be more precise.
What disasters could cause total human extinction in the next 200 years?
There is another reason for framing the question this way. If we can survive the next 200 years, we may by then be able to upload our minds into computers, make copies, and send them off in probes to thousands or millions of destinations in the galaxy.
Thus, it may be that these next 200 years are the most critical: when total human extinction is most plausible. This may even be the most critical time (for intelligent life) in the history of the galaxy. We haven’t received signals – or probes – from any other intelligent life in the galaxy (the Fermi paradox). Maybe they never existed because intelligent life so rarely evolves, or maybe all other advanced civilizations destroyed themselves before they got to the stage of uploading themselves to computers and sending out masses of probes.
So: what could cause total human extinction in the next 200 years?
Non-Prospects for Total Human Extinction
A volcanic super-eruption (VEI 8) happens on average every 50,000 years, the most recent being the Oruanui eruption in New Zealand 25,000 years ago. But even if a super-eruption occurred, humans are now such an adaptive species that even the largest super-eruption in Earth’s history would probably not kill off all humans. There would probably be hundreds of thousands – or millions – of us left to repopulate the globe.
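As a rough sanity check on how much the 200-year window limits this risk, here is a minimal sketch, assuming (my simplification, not a claim from the sources above) that super-eruptions arrive as a Poisson process with the quoted mean interval of one per 50,000 years:

```python
import math

# Sketch: probability of at least one VEI-8 super-eruption in the next 200
# years, assuming a Poisson process with a mean interval of 50,000 years.
mean_interval_years = 50_000
window_years = 200

rate = 1 / mean_interval_years                       # eruptions per year
p_at_least_one = 1 - math.exp(-rate * window_years)  # P(N >= 1)
print(f"P(>= 1 super-eruption in {window_years} years) ~ {p_at_least_one:.2%}")
# -> roughly 0.4% -- and, per the argument above, even that event would
#    probably not kill everyone.
```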
Solar flares are never large enough to destroy our atmosphere, and thus cannot wipe out our species.
A nearby supernova explosion might strip the Earth of its ozone layer, allowing penetration of UV rays. This would wipe out most of our species, but we are adaptable enough that we would (barely) survive underground. Likewise, a gamma ray burst aimed directly at Earth would kill most of us, but would spare some of those who happen to be underground at the time.
The Earth’s magnetic field nearly disappears every few hundred thousand years as it reverses itself. But these reversals do not correlate with mass extinction events, and won’t wipe out humanity.
Climate change won’t happen quickly enough to kill more than a couple billion people in the next two centuries, even at worst.
A natural pandemic of the worst kind in Earth’s history would kill, at most, a couple billion people.
Nuclear war, even with future nuclear weapons more powerful than today’s, would fail to kill at least the few dozen people who happen to be deep underground in, for example, mines, data storage facilities, and military bunkers.
Extremely Unlikely to Cause Total Human Extinction
Impact by a large asteroid or comet (> 25 km in diameter) could wipe out the human species entirely. (A 10–15 km asteroid impact in Mexico seems to have been what killed the dinosaurs.) If the rock is in a near-Earth orbit, we’ll have decades or centuries of advance warning, but a comet or “dark Damocloid” wouldn’t give us enough warning to react at all. Luckily, such events are far rarer than volcanic super-eruptions, and extremely unlikely to occur in the next 200 years.
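For a sense of scale, the same Poisson-style sketch from above can be applied here. The 100-million-year mean interval below is a placeholder of my own for “far rarer than super-eruptions,” not a figure from the text:

```python
import math

# Illustrative only: probability of an extinction-scale (> 25 km) impact in
# the next 200 years, using a placeholder mean interval of one per 100
# million years (my assumption, chosen to show how small the number gets).
mean_interval_years = 100_000_000
window_years = 200

p_at_least_one = 1 - math.exp(-window_years / mean_interval_years)
print(f"P(>= 1 such impact in {window_years} years) ~ {p_at_least_one:.6%}")
# -> about 0.0002%, consistent with "extremely unlikely"
```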
Biowarfare is pretty likely as cheap technology makes biology programmable, like a computer. A globally lethal airborne virus is unlikely to reach the most remote persons, including those deep underground. But a precisely engineered bioweapon could penetrate some strongholds protected even from, say, nuclear warfare, and thus I’ve graduated this risk to “extremely unlikely to cause total human extinction.”
Runaway physics experiments could, say, create a black hole that would swallow the Earth. Though worries about the Large Hadron Collider are unfounded, the general point remains that physicists are often creating forces and particles and environments that have never existed before on Earth, and playing with physical laws and dimensions we do not understand. The risk of runaway physics experiments could be nil, or it could be somewhat high (a century from now, when our biggest physics experiments are even bigger). There is no cause for immediate concern, but because we have almost no idea what the likelihood of such disasters is, I’m graduating this disaster to “extremely unlikely to cause total human extinction.”
Merely “Unlikely” to Cause Total Human Extinction
Runaway nanotechnology could self-replicate beyond human control, taking up all resources (of a certain kind) in sight – perhaps filling the sky and even digging into the Earth and oceans. However, such intelligent self-replication may rely on another risk that is broader in scope, namely: unfriendly machine superintelligence.
Unfriendly machine superintelligence could wipe out humanity by converting all matter and energy in the nearby solar system into parts for achieving whatever goals were programmed into it (solving certain math problems or colonizing the galaxy or whatever). The path to such god-like powers for AI is this: Once an AI becomes as smart as we are at designing AI, it will be able to recursively improve its own intelligence, quickly becoming so much smarter and more powerful than us that no human resistance would be capable of stopping it. Intelligence is the most powerful thing in the universe, and a superintelligent machine could easily destroy humanity unless it is programmed to preserve it.
I didn’t use to think so, but I’ve become persuaded that unfriendly machine superintelligence is the most plausible cause of total human extinction. If this is correct, then the single largest impact you can have with your charity dollars is to give all of them toward ensuring we develop artificial superintelligence that is friendly to human goals. Giving to stop global warming looks like a drop in a puddle in comparison.
Still, it is important to keep in mind that some risks that won’t cause total human extinction, but could still cause hundreds of millions of deaths, may be more likely to occur than unfriendly machine superintelligence. Biowarfare and climate change are examples; luckily, there are things we can do about them, too.
I must also admit that I am not an expert on human extinction risks. I invite correction. I don’t offer this post as a scholarly game plan, but as an invitation for more people to engage in researching this badly neglected subject: the possibility of total human extinction.