Playing God with Humanity’s Future

by Luke Muehlhauser on March 10, 2011 in Ethics,Guest Post

Today’s post on ethics is written by Alonzo Fyfe of Atheist Ethicist. (Keep in mind that questions of applied ethics are complicated and I do not necessarily agree with Fyfe’s moral calculations.)


So, it seems Luke is moving on from telling us that God does not exist to playing God. Now, he wants to create intelligent (machine) life and, to top it off, he wants to be the Law Giver that provides this life with its own moral code.

Such arrogance – Luke wants to play God. It’s no wonder he became an atheist. He doesn’t like the idea of a being greater than himself.

(Get used to it, Luke. This is exactly how some people are going to describe your project.)

The standard objection to these attitudes can actually be given in desirist terms. The reason we have to condemn this level of arrogance is that some humility is necessary to prevent the destruction of the whole human race.

We explore space, or we do genetic research, and we risk bringing back or creating a pathogen that destroys humanity. Or we unwittingly tell an alien race that we exist, so they come here to destroy us before we are a threat.

A new particle accelerator sucks the Earth into a manufactured black hole.

An intelligent race of machines decides that it is in its interests to terminate the human race.

It is hard to predict the consequences of our actions.

Some people deal with these possibilities by saying that we are the creation of an omnipotent and benevolent God who will protect us from such a fate. However, an atheist would have trouble acquiring that level of security. I look at what the astronomers show us and I see a universe that does not care whether humans live or die. From potential asteroid impacts to gamma-ray bursts to the eruption of nearby galaxies, from the collision of suns to black holes, to the deaths of stars, I see threats to life everywhere.

In fact, I suspect that the odds of intelligent life are not only good, but that there is, or has been, a species that reached our level of development only to learn of its imminent extinction by forces it could not control or avoid. There are, in space, dead planets that hold the ruins of lost civilizations.

I imagine an alien race coming to Earth and, while weathering and geological effects have destroyed all signs of humanity on the planet, they find a thousand dead satellites in high earth orbit and an array of machines on the lunar surface.

The possibility of human extinction is very real, and we have many and strong reasons to promote attitudes that will not hasten us to that fate.

But, you know, living in a primitive state, powerless to understand or manipulate its environment, did not save the dinosaurs. A hundred years ago, humans could have been destroyed by such a natural disaster. Right now, scientists are telling us we have nothing to fear on that front. But if we did, we could avoid the fate of the dinosaurs.

We could choose to remain ignorant of the universe in which we live, but that does not guarantee our safety. Knowing about the threats that exist and having options that will allow us to avoid those threats is the better option.

Some people may worry that intelligent machines will be the tool that destroys us. It may well be that the intelligent machine is what saves us – working out solutions to problems that are much too complicated for us to grasp. They will tell us about the rogue planet threatening our existence, design the vaccine that will prevent a new plague, set up the planetary defense system, and design, build, and navigate the space habitats that future generations will live in.

There will be a time when all life on earth, and its descendants, will owe their survival to humanity – or to whatever intelligent life follows us. We will make it possible for them to live under conditions in which the earth becomes a lifeless cinder – or would have become one, had nature proceeded on its own course.

And the time may well come when our survival – or that of our descendants – will depend on the actions of beings more intelligent than ourselves. They may well be the beings that we create.

It is quite possible that the development of an artificial intelligence may lead to the destruction of the human race. But we may also need to consider the possibility that not inventing such an intelligence might be the more dangerous option.

- Alonzo Fyfe

P.S. from Luke: Alonzo’s last paragraph is also argued by Yudkowsky and Bostrom.


Comments

Alexander Kruel March 10, 2011 at 6:38 am

Or we neither try to invent Gods nor prevent Gods but become Gods ourselves by merging with our technology.

Reginald Selkirk March 10, 2011 at 6:40 am

I think it’s time for God to stop playing God.

Louis March 10, 2011 at 6:49 am

Alexander, the creation of super friendly AI means just that. Or am I mistaken? I don’t think I would stay long with this exact substrate as a medium to exist if you had an AI that could “tell us about the rogue planet threatening our existence, design the vaccine that will prevent a new plague, set up the planetary defense system, and design, build, and navigate the space habitats that future generations will live in.”

Actually, on the point that "we may also need to consider the possibility that not inventing such an intelligence might be the more dangerous option": Luke, hasn’t Nick Bostrom said that we need AGI before we have nanotech in order to control it safely?

Alexander Kruel March 10, 2011 at 7:07 am

Louis, if you assume that there will be some kind of very rapid development or a single quantum leap that leads to the creation of superhuman intelligence, then the route I mentioned might be impossible to take. But if it is a somewhat more modest development, then I am all for the option of trying to uplift humans rather than putting ever more abstract intelligence into independent systems. I don’t think that anyone can be reasonably sure that there will be rapid (explosive) progress towards artificial general intelligence. I know that your take on the matter is to be cautious just in case, because of the high risk associated with the possibility. That is fine; it should be taken seriously, and research to mitigate such a risk should be supported. But if we are cautious for too long, we’ll miss out on the opportunity to rule ourselves. I’d rather be a God than be ruled by one; I assign vastly more utility to such a scenario than to being an ape under the control of a singleton super-intelligence. Freedom is my choice, even if it means that the outcome is suboptimal or that we’ll fail completely.

Louis March 10, 2011 at 7:46 am

Alexander Kruel: I’d rather be a God than be ruled by one… Freedom is my choice, even if it means that the outcome is suboptimal or that we’ll fail completely.

Good point.

MarkD March 10, 2011 at 1:50 pm

Brief plug for my novel, Teleology. Also available on Kindle, Nook, and at iBookstore. Here’s the synopsis:

Teleology opens where it ends: two million years in the future. Humanity has uploaded itself into an interplanetary computer where the boredom of immortality, omniscience, and boundless powers of creation is only limited by the shadowy world of memories from before the transformation. Then the narrative carries the reader back, recreating history as twin brothers–Mikey and Harry–find disparate trajectories through deep religious faith and the life of reason and science.

When Harry joins an extremist cult and plots the bombing of Mikey’s research facility, Mikey’s wife is spared in the initial explosion but dies from the lingering effects of the attack. Yet the results of Mikey’s research eclipse expectations and understanding as a new intelligence arises to transform human life by eliminating privation and want. Trapped by violence and circumstance, the twins finally reconcile in a holographic stasis while the world erupts in war around them. Soon there are no differences or conflicts left at all—only creative acts by human agency, arising in a chain of causation.

Teleology combines a New Atheist critique of society’s complacency over religious extremism with science fiction elements that challenge our sense of identity and purpose, all seen through the entwined fates of Narcissus and Goldmund protagonists. Playful literary references, linguistic jokes and experimental side trips into the belief systems of the evolved creatures help move the narrative forward, and we watch a rapidly unfolding technological future from the perspective of the twins and self-mortifying priests in a parallel artificial world. Everyone and everything is trying to understand and fulfill its purpose.

Brian March 11, 2011 at 12:42 am

Is Fyfe under some sort of artificial pressure to publish on this site at a certain rate?

Colin March 11, 2011 at 2:23 am

Brian: Is Fyfe under some sort of artificial pressure to publish on this site at a certain rate?

Did you draw this conclusion from the inordinate number of typos – many of them difficult to unravel?

Luke Muehlhauser March 11, 2011 at 7:54 am

No, Fyfe’s under no pressure. Did I miss some typos here? I’m trying to catch them…

Jeff H March 11, 2011 at 3:26 pm

There were quite a few typos in this. Here are the ones I found:

“this life with it’s own moral code.” (“it’s” should be “its”)
“from the collusion of suns to black holes” (“collusion” should be “collision”, unless the suns are holding secret meetings and conspiring against us :) )
“the odds if intelligent life” (“if” should be “of”)
“that there is or gas been a species” (“gas” should be “has”)
“powerless to understand or manipulate it’s environment” (“it’s” should be “its”)
“A hundred years ago hum and could” (“hum and” should be “humans”)
“if nature proceeds on it’s own course” (again, “it’s” should be “its”)

JNester March 12, 2011 at 4:54 pm

Jeff H: There were quite a few typos in this.

hehehe is that what a person with good desires would do?
