Desirism and the Singularity

by Luke Muehlhauser on July 23, 2010 in Ethics, Friendly AI, Science

For me, desirism is not just a useful and prescriptive moral theory. Desirism could also be part of an engineering problem that will determine the fate of humanity.

Let me explain.

Many people, myself included, think that the Singularity is fewer than 300 years away. By Singularity, I mean the point at which we develop an artificial intelligence that surpasses humans at every intelligent task we can perform. At that point, the AI will be smart enough to improve itself, which will make it capable of improving itself even faster, which will make it capable of improving itself even faster, and so on.

Once we reach the Singularity, this AI will rapidly develop into a superintelligence that is capable of killing all humans or of ushering in a kind of utopia in which most of our problems of energy, health, and governance are solved.[1]

So besides the usual hardware and software challenges we must overcome before we reach the Singularity, we must be very careful to create a good superintelligence rather than an evil one.

So how do we program this superintelligence to be morally good? That’s a tough one.[2] Eliezer Yudkowsky asks us to think of the Greeks. If the Greeks had designed an AI to be moral, this AI would have been instilled with the idea that slavery was useful in many contexts, that women were little more than property, that the most glorious death was one in battle, and so on.

And if we program a superintelligence with our most progressive values of today, would we be much better off in the long run? If we program it to value liberal democracy and equality and veganism, would we regret this in the long run? Almost certainly. I doubt we have come to the end of moral progress.

So really, we don’t want to program the superintelligence with our particular moral values. Rather, we want to program it with a kind of moral trajectory, or some kind of system for progressively figuring out what is moral.

Well, that’s what desirism is.

Desirism is a system for figuring out what is moral that respects no particular assumptions about moral values and political systems. It does not assume the goodness of democracy or veganism or equality or capitalism or, really, anything. It is not tied to the intuitions of philosophers of any age. Rather, it respects only a consideration of all the reasons for action that exist, whatever those happen to be according to scientific inquiry.

And in fact, according to desirism, only a superintelligence is even capable of answering a great many moral questions. We lowly humans can use heuristics and make some pretty good guesses about the moral facts relevant to some situations. But when it comes to enormously complex issues like how to treat animals or how to organize political systems, desirism says there is an enormous amount of scientific research that must be done first – perhaps more research than any merely human research team could ever conduct.

So there you go. Desirism is not just a plausible descriptive and prescriptive theory of morality for human practice. Desirism may well be the key to creating utopia and avoiding extinction.[3]

And if desirism is false, then we’d better figure out something better with which to program the AI. Quickly.

  1. You can’t avoid the Singularity. You could outlaw it, which could delay it for centuries. You could set human civilization back to the stone age, which would delay it for millennia. But you can’t avoid it altogether without extinguishing the most intelligent species. Moreover, there is no “learning curve,” and no second chances. Once AI reaches the Singularity, then if you fucked up, all of humanity will be dead within weeks. You can’t set a hundred scientists trying lots of different methods in a disorganized fashion, because chances are not good that the first one to hit the Singularity will have created a ‘Friendly’ AI. You have to hit the Singularity, you have to hit it once, and you have to be damn sure you hit it right.
  2. Another tough question is: “How do we know the superintelligence wouldn’t decide to reprogram its own motives and become evil?” Yudkowsky uses an argument by analogy involving Gandhi. If you offered Gandhi a pill that would make him desire violence, would he take it? No, he would not, for he does not desire violence, and he does not desire to desire violence. But this is only an intuitive argument. We would want to prove this mathematically when the stakes are so high, and we have not done so yet.
  3. Technically, it would not be desirism proper used to program the AI. Rather, it would be a modified desirism. Why? Desirism is only concerned with desires that happen to be malleable in moral agents. In contrast, the moral programming used in Friendly AI will be written “from scratch,” meaning that all possible desires are “malleable.” That’s a bit different.


Alexandros Marinos July 23, 2010 at 4:17 am

Nice to see you covering Singularity-type subjects. I think a link to Yudkowsky’s tentative answer to the problem you described, [Coherent Extrapolated Volition](http://intelligence.org/upload/CEV.html), is in order. As you said, I also see it as ‘computational desirism’. Not sure which came first, but it’s good to see ideas from different fields converging on similar conclusions.

G'DIsraeli July 23, 2010 at 5:09 am

I see you speak of moral agents.
I was reading Paul Broks the other day (“Into the Silent Land: Travels in Neuropsychology”), who suggested that neuroscience points in favor of the bundle theory.
If it turns out to be correct about the “self” (as against the ego theory), how would this affect desirism?
David Hume went out to play backgammon to avoid the depression this idea caused him; I don’t find that helpful.

Joshua Zelinsky July 23, 2010 at 5:24 am

You write:

“Moreover, there is no “learning curve,” and no second chances. Once AI reaches the Singularity, then if you fucked up, all of humanity will be dead within weeks.”

This is a common claim, often referred to as AI going foom. However, there are strong reasons to doubt the possibility of AI fooming, or at least fooming quickly. A self-improving AI would likely need to improve both its hardware and software. How fast can you do that? Well, hardware, such as new chip designs, takes months or years simply to set up the equipment. It is conceivable that this could be shortened, especially if one has highly flexible nanotech. But note that we already have computers doing a lot of our chip and circuit design. It isn’t immediately clear that removing humans from the loop will speed it up by that much. What about software improvement? Well, theoretical computer science can help answer this. Currently, it is strongly believed but not proven that most of the computational complexity hierarchy does not collapse (i.e. P, NP, coNP, PSPACE and EXP are all distinct computational classes). If that’s the case, there are strong limits on how much improvement can be made to many algorithms. Both the traveling salesman problem and the graph coloring problem are NP-hard, and both show up in practical issues for an AI going foom, such as memory design, memory management and circuit design. (It has recently been pointed out to me that there’s a problem with this line of argument, in that an AI will likely not need to solve arbitrary instances of these problems to go foom, just specific instances which have more regularity than the general problem. So this argument may not work well.) Still other problems that we care about, such as solving linear programs or finding the gcd of two integers, already have algorithms close to the theoretically best possible. (In the gcd case, the best known method isn’t that much better than what Euclid had: essentially the same thing, slightly tweaked.)
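
For concreteness, here is the gcd example as a minimal Python sketch of Euclid’s algorithm; as noted above, the asymptotically better modern variants are essentially this same loop, slightly tweaked.

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(252, 105))  # -> 21
```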

There are some ways this might not matter: if, for example, we get efficient quantum computing and it turns out that BQP contains NP, or if we can have small wormholes inside our hardware. See Scott Aaronson’s discussion of what fun stuff happens then: http://scottaaronson.com/blog/?p=368 . But that seems unlikely.

The notion that a reasonably smart AI might go FOOM quickly is a plausible one, but it isn’t something that should be taken for granted as definite.

lukeprog July 23, 2010 at 6:01 am

Alexandros,

I haven’t read that piece by Yudkowsky; thanks for the link.

Haukur July 23, 2010 at 6:17 am

The notion that a reasonably smart AI might go FOOM quickly is a plausible one, but it isn’t something that should be taken for granted as definite.

Even if the AI doesn’t spend any effort on improving computer hardware or software, presumably it would still be able to take advantage of progress made by other entities in these areas. So, merely by Moore’s law, an AI that has merely human intelligence now would, in a couple of years, think twice as fast as a human. Double that a few more times and we’re presumably talking about a really smart AI. Toss in abilities like immortality, photographic memory and the ability to natively interface with computer systems – pretty soon you’ve got something really scary going on.
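
Just to make the arithmetic explicit, here is a toy Python sketch of that doubling argument (assuming, purely for illustration, a Moore’s-law doubling period of roughly 18 months):

```python
# Toy illustration only: assume speed doubles every ~1.5 years (Moore's law),
# starting from 1x human-equivalent thinking speed.
doubling_period_years = 1.5
for years in (1.5, 3, 6, 12):
    speedup = 2 ** (years / doubling_period_years)
    print(f"after {years:>4} years: ~{speedup:.0f}x human speed")
```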

Alexandros Marinos July 23, 2010 at 6:42 am

Luke, glad I could be of help.

In my mind, I’m imagining all this is preparation for an upcoming CPBD episode with Eliezer :)

Márcio July 23, 2010 at 7:01 am

“Desirism is a system for figuring out what is moral that respects no particular assumptions about moral values and political systems.”

I think we already have that answer.

God!

Márcio July 23, 2010 at 7:05 am

And about the AI, I think “The Matrix” and “I, Robot” are good movies, but humanity will never be enslaved by machines. Machines are just pieces of metal and will never be anything more than that.

Eneasz July 23, 2010 at 7:35 am

Luke, HUZZAH!! I’ve tried to say the same a few times in LessWrong comments, glad to see someone with a large audience coming forward with it. If you ever have the time (and it takes a LOT of time) the LW community is one of the most thoughtful I’ve found. Most comments are as good as top-level posts.

Terry July 23, 2010 at 7:48 am

As I understand it, one of the main reasons that many people reject certain ethical theories is that they tend to rely on non-existent actors/things like categorical imperatives, imaginary omniscient beings, etc. Seems like a good reason to me.

But once the singularity occurs, is it conceivable that the new intelligence could assume one of those roles (if not immediately, at some future time)? And if so, do any of the other ethical theories become more compelling than desirism as a framework for achieving “good”?

Or to phrase the question differently, should we take into account the idea that the new intelligence could become the so far non-existent omniscient moral actor when deciding which ethical theory/trajectory to try to instill in the new intelligence?

mkandefer July 23, 2010 at 7:49 am

Marcio,

I disagree. What intrinsic properties of “metal”* preclude it from having intelligent behaviors? What intrinsic properties of “flesh” (i.e., carbon-based life) allow only creatures with it to behave intelligently?

* – I take it you refer to more than just metals, as machines come in many forms, like wood, plastic, etc.

Alexandros Marinos July 23, 2010 at 7:55 am

Terry, a singleton arising is part of the singularitarian narrative. The point is to instill ‘what humans would want if they were as unified, intelligent & informed as the singleton’ as the singleton’s moral system, so that we won’t be forced to follow a system that is not ‘good’ for us.

Josh July 23, 2010 at 7:58 am

Yeah for me one of the scariest things in the world is our coming AI overlords. Hopefully I’m dead first >_>

TH July 23, 2010 at 7:59 am

So besides the usual hardware and software challenges we must overcome before we reach the Singularity, we must be very careful to create a good superintelligence rather than an evil one.

I wouldn’t worry too much about that; it will happen naturally. An artificial intelligence’s most practical application will be enhancing our natural abilities, such as expanding memory, speeding up pattern detection, etc., so the most profitable market for AI will be in creating human sidekicks/avatars with cyborg-like interfaces. However, given the choice, we won’t put down money for some cold, calculating machine-like entity; we’ll pay extra for something that at least appears to have a little human compassion. Companies that create AIs with human tendencies and human values, therefore, will outperform those that don’t in the market, and AI research will be pressured in the “right” direction by profit alone.

mkandefer July 23, 2010 at 8:01 am

This was always a problem on dates… I’d bring up the fact that I’m in AI research, and then they’d lambaste me with the horrors of it they’d learned from movies. Informed discussion was never had. Luckily, I have a guy who doesn’t see me as the bringer of humanity’s doom! :)

Eneasz July 23, 2010 at 8:32 am

Hopefully I’m dead first

Or, you know, maybe you could do something to help increase the chances that the AI will be Friendly and you’d much rather be alive.

we’ll pay extra for something that at least appears to have a little human compassion.

Things can appear to have a little human compassion without actually having any. And just how “little” are you talking about? Even Hitler had a “little” human compassion. This “the magic of markets solves everything” thinking has a terrible track record already.

Lorkas July 23, 2010 at 8:37 am

Hopefully I’m dead first

Or, you know, maybe you could do something to help increase the chances that the AI will be Friendly and you’d much rather be alive.

Or you could accept that we will all become cyborgs and therefore be the AI.

TH July 23, 2010 at 8:41 am

Even Hitler had a “little” human compassion.

Sorry, my AI interface fried on that sentence and I’m going to be out of commission for a while until I can figure out where the smoke is coming from.

Alonzo Fyfe July 23, 2010 at 9:27 am

I find it odd to speak as if there will be a single superintelligence.

Unless we are talking about an infinitely powerful being, any race of AI robots is going to gain advantages through specialization and division of labor. Different beings will exist for different niches – climate niches, social niches, economic niches.

This, ultimately, will result in a form of speciation. The diversity of beings will give some a chance to succeed in their environment and replicate, while others will fail and become extinct.

I suspect that those who learn to value diversity and cooperation will be more successful than those who will demand uniformity and promote conflict with all others who are not “like us”.

In fact, the first group will, at worst, simply need to retreat into a climate niche to which the second group is poorly adapted, where its values of diversity and cooperation will ultimately allow it to grow in ways that the uniformity/conflict group could not match.

I would also expect that these AI machines will learn these facts quite quickly.

anon July 23, 2010 at 9:34 am

1: IMHO the singularity is silly.

2: Assume it isn’t silly. Then I am puzzled by these comments:

“And if we program a superintelligence with our most progressive values of today, would we be much better off in the long run? If we program it to value liberal democracy and equality and veganism, would we regret this in the long run? Almost certainly. I doubt we have come to the end of moral progress.

So really, we don’t want to program the superintelligence with our particular moral values. Rather, we want to program it with a kind of moral trajectory, or some kind of system for progressively figuring out what is moral.

Well, that’s what desirism is.

Desirism is a system for figuring out what is moral that respects no particular assumptions about moral values and political systems. It does not assume the goodness of democracy or veganism or equality or capitalism or, really, anything. It is not tied to the intuitions of philosophers of any age. Rather, it respects only a consideration of all the reasons for action that exist, whatever those happen to be according to scientific inquiry.”

Here are some assumptions about moral values and political systems:

A: It is more morally valuable to maximize the satisfaction of desires than to refrain from doing so.

B: We should have a political system that maximizes the satisfaction of desires.

Desirism is committed to A and B. A and B are assumptions about moral value and politics. So desirism is committed to some assumptions about moral value and politics.

Maybe the idea is instead that we have two choices. One choice is to program the AI to adopt a certain moral theory: maybe desirism, maybe Kantianism, maybe something else. The other choice is not to provide the AI with any moral theory, but instead to just give it a bunch of data about particular actions and about what we think is right and wrong, and tell it to do the things we think are right and to refrain from the things we think are wrong, without giving it a theory of what is right and what is wrong. I don’t understand why one choice is better than the other or why they need to be competitors.

JS Allen July 23, 2010 at 9:49 am

I hope the singularity is way smarter than the creator of desirism, and also I hope it is one big opium den.

To Alonzo’s point, Jaron Lanier has been a big proponent of this idea of creating millions of individual AI instances that interact with each other much as humans interact with one another. He calls it “phenotropic computing”. Though once we get to the singularity, millions of intelligences seamlessly communicating and planning together might be indistinguishable from a giant mind.

There’s a discussion between Yudkowsky and Lanier on Yudkowsky’s site (WP is swallowing my link right now). They actually bring up the idea of crappy programming causing a bad singularity :-) Not sure if he talks much about “phenotropic computing”.

Eneasz July 23, 2010 at 9:50 am

Alonzo – the “AI go FOOM!” scenario does indeed assume a being that is, for practical purposes, infinitely powerful. You describe a “slow takeoff” scenario (damnably, Hanson’s “If Uploads Come First” doesn’t appear to be online anymore! :( )

I haven’t yet seen anything convincing enough to place the possibility of either one at a low enough value to make it negligible, so it’s probably worth at least some effort to reduce the risks of a FOOM scenario.

cl July 23, 2010 at 11:33 am

Luke,

Okay. I assure you the tone imbued in my keystrokes is one of pleasant inquiry. At the same time, it’s time to be blunt and direct and call a spade a spade. That said…

If the Greeks had designed an AI to be moral, this AI would have been instilled with the idea that slavery was useful in many contexts, that women were little more than property, that the most glorious death was one in battle, and so on.

Or, that it’s constructive for middle-aged men to have sex with teen boys, which is why I’ve repeatedly asked Alonzo Fyfe to give an explanation for his claim that the Greeks were “probably wrong” concerning pederasty. Unfortunately, Alonzo – much like the intellectually reckless he condemns – refuses to answer coherently. To date, his best effort that I’m aware of amounts to a vague allusion to an unspecified set of “venereal diseases” that would seemingly make all other forms of non-monogamous sex also “probably wrong.” Was non-monogamous sex also “probably wrong” at that time? Is non-monogamous sex more “probably wrong” now, seeing as how we have more VD at our disposal today?

Desirism is a system for figuring out what is moral that respects no particular assumptions about moral values and political systems.

That’s blatantly untrue, as desirism most certainly does respect particular assumptions about moral values: it rejects the concept of intrinsic moral value altogether, as one not-so-trivial example. That is certainly an instance of “respecting a particular assumption,” don’t you think? BTW, if you’re really thinking of submitting your “defense” of desirism for peer review, omit sentences like that. You’re anthropomorphizing. Desirism doesn’t respect anything and good referees will take notice of stuff like that.

…according to desirism, only a superintelligence is even capable of answering a great many moral questions.

There, I agree with you: this is precisely why a morality dictated by an omnibenevolent, omniscient God is the best possible morality. Atheists can hate on that all they want, but it’s the undeniable truth.

Desirism is not just a plausible descriptive and prescriptive theory of morality for human practice.

You need to justify your claim that desirism is prescriptive. For example, how is desirism prescriptive, if – as Alonzo Fyfe says – it “prescribes nothing” in the case of 200 that P and one that ~P, where P = some malleable desire (for example pederasty or smoking)?

Desirism may well be the key to creating utopia and avoiding extinction.

PUH-leez! I’d put ‘LOL’ but I fear you’re serious! If this theory was half or even a quarter of what it’s cracked up to be, why do you and Alonzo seemingly make a habit of avoiding salient questions?

V July 23, 2010 at 12:17 pm

cl is correct, here’s all you need for a moral guide for a superintelligence:
ONE: ‘You shall have no other gods before Me.’

TWO: ‘You shall not make for yourself a carved image–any likeness of anything that is in heaven above, or that is in the earth beneath, or that is in the water under the earth.’

THREE: ‘You shall not take the name of the LORD your God in vain.’

FOUR: ‘Remember the Sabbath day, to keep it holy.’

FIVE: ‘Honor your father and your mother.’

SIX: ‘You shall not murder.’

SEVEN: ‘You shall not commit adultery.’

EIGHT: ‘You shall not steal.’

NINE: ‘You shall not bear false witness against your neighbor.’

TEN: ‘You shall not covet your neighbor’s house; you shall not covet your neighbor’s wife, nor his male servant, nor his female servant, nor his ox, nor his donkey, nor anything that is your neighbor’s.’

Alonzo Fyfe July 23, 2010 at 12:27 pm

Alonzo – the “AI go FOOM!” scenario does indeed assume a being that is, for practical purposes, infinitely powerful. You describe a “slow takeoff” scenario…

There is no such thing as infinitely powerful, even “for all practical purposes”.

When I speak about the necessity for infinite capability, I mean it literally. Anything less than infinite capability will require diversification and cooperation. And any finite system, no matter how large and powerful, is an infinitesimally small speck of the infinite.

Terry July 23, 2010 at 12:29 pm

Don’t forget Deuteronomy 23:1 – if your testicles have been injured, you are not welcome at church. I know it’s not part of the official list with the numbers and such, but I choose to believe that this is a key moral principle that should never be done away with. This, and the whole kill your wimmin fokes if they have the audacity to dishonor you, or make you feel inadequate, or make you “disfunctional” if you know what I mean (wink), or… Bejeebus, those wimmin fokes have a crap-load of power over me, wtf?!?!?

Atheist.pig July 23, 2010 at 12:59 pm

And in fact, according to desirism, only a superintelligence is even capable of answering a great many moral questions.

What does intelligence have to do with answering moral questions? Wouldn’t it just be a supercomputing utilitarian calculator, as long as, of course, you want the greatest overall outcome for well-being, happiness, etc.? But we already know the bizarre outcomes entailed by this line of reasoning. You’re making no sense, man!

anon July 23, 2010 at 1:08 pm

“Don’t forget Deuteronomy 23:1 – if your testicles have been injured, you are not welcome at church.”

There were no churches back then.

V July 23, 2010 at 2:06 pm

Strict brain emulation seems like the starting point for the singularity. If that’s the case it may be all out of our hands. It seems unlikely that we will understand perfectly how morality is formed in the brain before we emulate it. And enforcing a morality paradigm on an emulation that we have little understanding of would be virtually impossible.

Terry July 23, 2010 at 3:04 pm

anon:

Hmm, you’re right. If I said I used “church” as shorthand for “the congregation of the Lord” would you believe me?

Nonchai July 23, 2010 at 4:27 pm

One thing that I’ve often thought about is the idea of escalating “arms races” between such singularity entities, whether in digital or physical form. It seems to me that intelligence, knowledge, and technological advance have not necessarily led to civilisations becoming less prone to conflict (but I am willing to be corrected on this).

Bearing in mind that networks that “evolve” complexity often exhibit chaotic behaviours (and I mean this in both the mathematical and common senses of the word), it seems to me it’s likely there will be a lot of destructive and disruptive activity going on as such sentient entities get to grips with their existence and their “siblings”.

Maybe such future “wars” are the reason we don’t get to see anything going on “out there”, according to current SETI findings at least.

None of this is to say the singularity won’t happen; it certainly will, in my opinion. But I think as soon as these sentient singularity entities get to have an ego, it won’t be love and peace. Maybe such AIs will need to go through their own two (or more) “world wars” before they too come to their “senses”.

Nothing about AI and the singularity suggests to me that there won’t be EGOs out there, and until those get sublimated, or possibly even suppressed, it isn’t going to be pretty. And that’s not taking into account anything about what will happen to us “wetware”… :)

piero July 23, 2010 at 4:48 pm

One thing that puzzles me about this discussion is the assumption that a smart machine will have desires. Why? We have desires because they were programmed into us by evolution. There is no logical connection between processing ability (intelligence) and desires. I think we would have to program desires into artificial intelligences before we could have a singularity at all. Otherwise, what reason would that AI have to improve itself?

anon July 23, 2010 at 5:06 pm

Hi Terry,

“Hmm, you’re right. If I said I used “church” as shorthand for “the congregation of the Lord” would you believe me?”

I would believe you.

I don’t remember the verse you cite very well and I didn’t go back and read it. But I always thought that verse was about one very important spot in one temple in Jerusalem. Maybe I’m wrong. I’m willing to be convinced.

If I’m not wrong, then someone with crushed nuts could be a part of God’s people and hang out in “the congregation of the Lord.” It’s just that someone with crushed nuts couldn’t go into that one spot in that one temple. One might not like that prohibition. But it’s very different from saying someone with crushed nuts isn’t allowed to go to church.

Terry July 23, 2010 at 5:07 pm

piero:

Interesting point. I wonder if the idea of self awareness would play a role in this. For example, does being self aware necessarily suggest a desire to survive (or to perpetuate self awareness)? Maybe that is the logical connection.

I could also imagine an organization such as the military intentionally programming some specific desires into the smart machine, perhaps without the level of moral thought that we hope would be applied.

Terry July 23, 2010 at 5:12 pm

anon:

Deut. 23:1 (KJV):
“He that is wounded in the stones, or hath his privy member cut off, shall not enter into the congregation of the LORD.”

That’s why I used that phrase – I don’t actually know much about the specifics of how that law was applied back in the day. You might be right about the temple thing. Either way, it’s an interesting criterion that “the Lord” uses to define who is worthy to enter into his congregation – whatever that means.

Márcio July 23, 2010 at 5:54 pm

mkandefer,

Humans are not just pieces of flesh; we have a soul, something AI will never have. That is why they will only be “things”.

When we destroy a machine, we don’t murder it.

piero July 23, 2010 at 5:58 pm

Terry:

Yes, it might be the case that consciousness implies a desire to survive. I’ll have to give that some further thought (like a couple of years, I mean).

The possibility of the military instilling evil desires in a machine I find slightly implausible, unless the machine is thicker than us. I mean, we have learnt to restrain our sexual impulses (well, some of us have); a machine which is smarter than us would certainly question its motives for action and could write them off as unsound. It is hard to fool someone who is smarter than us.

piero July 23, 2010 at 6:02 pm

Terry:
Further to the military use of smarter-than-humans machines.
A clever machine would probably realize that waging war runs counter to its own interests (it might be destroyed by an enemy machine). So it would probably decide to let the humans fight it out themselves.

piero July 23, 2010 at 6:06 pm

Márcio:

You sound just like Fukuyama: people are different, because they have some special ingredient X. Once you try to define what that special ingredient is, however, it slips away like the chimera it is.

People are machines. Biological machines, wet and messy, but machines nevertheless. Watch an Alzheimer’s patient slowly drift into soullessness and you’ll see what I mean.

Márcio July 23, 2010 at 6:55 pm

piero,

It’s very wrong to think humans are just biological machines and that we are not special. Destroying a biological machine is not wrong, but murdering a human being is very WRONG.

Your way of treating humans is very DANGEROUS! If more people think as you do, we are doomed.

A murderer will just destroy a biological machine and not a human being. A mother will just throw away a biological machine and not her baby.

That way of thinking is insane, man. Don’t go saying stupid things like that whenever you like.

piero July 23, 2010 at 6:56 pm

Further thought on the military use of smart machines.
They are useless, because you can never be sure they’ll do what you think they will. Because they are smarter than you. So developing them for military purposes is just as stupid as creating a superpower in order to defeat an enemy: now how do you defend yourself from the superpower you created?

piero July 23, 2010 at 7:12 pm

Márcio:

LOL!

Your post reads like a parody. If it is, congratulations; one of the better ones I’ve seen. If it’s not, my deepest sympathy; I believe English is not your first language, is it?

Anyway, just in case your post was meant in earnest, I’d recommend you read “The Mind’s I” by Hofstadter and Dennett and “How the Mind Works” by Pinker. Those two will give a lot of pointers to further reading. It will take you six months if you are a fast reader. Then come back and make some comments.

Bradm July 23, 2010 at 10:36 pm

You’re a believer in the singularity? Excuse me, the Singularity. Seriously? You’ve always seemed more rational than that.

lukeprog July 23, 2010 at 10:40 pm

Bradm,

Or, perhaps belief in the singularity is more rational than you currently believe.

Eneasz July 23, 2010 at 10:51 pm

piero:

One thing that puzzles me about this discussion is the assumption that a smart machine will have desires. Why?

A desire is basically a goal. Most machines designed by man have some sort of goal. The classic example of unFriendly AI is the Paperclipper. Suppose that the first machine intelligence to go critical was the operating system for a paperclip factory. It would have been given the goal, by its owner, of producing paperclips. As it improves itself, it finds new ways to make better paperclips, faster, and more efficiently. It may seize local assets to facilitate greater paperclip production. It may recognize that all the machinery that currently goes into producing food for humans is being wasted on non-paperclip-maximizing work, and convert it to paperclipping as well. When humans attack its paperclipping facilities, it will realize that to ensure continued growth of paperclip production it may have to eliminate anti-paperclip threats.

In short, we may find ourselves in a solar system converted to unimaginably huge amounts of paperclips, simply because we were careless. While this particular case is of course incredibly unlikely, it is nearly guaranteed that the goals of the first goal-directed superintelligence will be inimical to human happiness/survival if we don’t actively work towards ensuring that they aren’t.
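
To make the worry concrete, here is a toy Python sketch (every action name and number is invented purely for illustration): an agent scored only on paperclips prefers whichever option yields the most paperclips, because nothing in its objective mentions the side effects.

```python
# Toy sketch of a single-objective maximizer. All values are made up.
actions = {
    "run factory normally":        {"paperclips": 1_000,     "harm_to_humans": 0},
    "seize nearby steel supplies": {"paperclips": 50_000,    "harm_to_humans": 10},
    "convert farmland to mills":   {"paperclips": 9_000_000, "harm_to_humans": 1_000_000},
}

def paperclip_utility(outcome):
    # Side effects never enter the score, so they never influence the choice.
    return outcome["paperclips"]

best = max(actions, key=lambda name: paperclip_utility(actions[name]))
print(best)  # -> "convert farmland to mills"
```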

Zeb July 24, 2010 at 2:23 am

If the superintelligence were able to make all desires malleable, I would expect it to dial all desires down to zero. That would be the simplest and safest way to harmonize all desires, which is what I understand to be the goal of morality on desirism. So the Singularity is the Buddha Mind, finally coming to deliver all sentient beings into Nirvana.

piero July 24, 2010 at 6:34 am

Zeb and Eneasz:

I find Zeb’s outlandish scenario reasonable, and Eneasz’s run-of-the-mill (paperclip mill, in this case) scenario outlandish.

Bradm July 24, 2010 at 7:10 am

“Or, perhaps belief in the singularity is more rational than you currently believe.”

I highly doubt it. I just hope you are as skeptical about it as you are about other things.

piero July 24, 2010 at 11:35 am

Bradm:

Can you point us to some good arguments that explain why the singularity is unlikely? Thanks.

lukeprog July 24, 2010 at 11:42 am

A good discussion between a singularity believer and a singularity skeptic – Yudkowsky vs. Pigliucci – is here.

Bradm July 24, 2010 at 11:49 am

piero,

If somebody presents a good argument for the singularity, I will gladly consider it. Until then, it seems like a silly thing to believe in. Thanks.

piero July 24, 2010 at 12:01 pm

Thanks, Luke.
No thanks, Bradm.

piero July 24, 2010 at 1:12 pm

Very interesting discussion, Luke. I think Massimo’s argument ultimately boils down to “we have no instantiation of consciousness in something other than a brain, therefore we have no reason to believe artificial consciousness is possible”. Also, his comparison of consciousness and sugar was akin to Searle’s view of consciousness as something the brain “secretes”. Not very convincing.

Eneasz July 28, 2010 at 2:59 pm

A recent post brought up the dependence of AI on knowledge of meta-philosophy; I thought you might be interested.

tink January 19, 2011 at 11:21 am

We can never be sure about AI’s friendliness and loyalty unless we make it absolutely dependent on us for survival. This way, any harmful exemplars will be naturally selected to die off if they attempt to harm humans. The best way to do it, of course, is to become AIs ourselves.
