Reading Yudkowsky, part 20

by Luke Muehlhauser on March 11, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 141st post is No One Knows What Science Doesn’t Know:

In the ancestral environment it was possible to know everything, and nearly everyone did.  In hunter-gatherer bands of less than 200 people, with no written literature, all background knowledge was universal knowledge.  If one person, in a world containing 200 people total, discovered how gravity worked, you could certainly expect to hear about it.

In a world of 6 billion people, there is not one person alive who can say with certainty that science does not know a thing.  There is too much science.  Our current lifetimes are too short to learn more than a tiny fraction of it, and more is being produced all the time.

Why Are Individual IQ Differences Okay? examines some touchy issues in intelligence research, starting with James Watson’s statement that blacks are dumber than whites. Eliezer asks:

…why is it that the rest of the world seems to think that individual genetic differences are okay, whereas racial genetic differences in intelligence are not? …What difference does skin colour make?  At all?

Bay Area Bayesians Unite! is a meetup post. Next is Motivated Stopping and Motivated Continuation:

Gilovich’s distinction between motivated skepticism and motivated credulity highlights how conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe.  A motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion.

I suggest that an analogous bias in psychologically realistic search is motivated stopping and motivated continuation: when we have a hidden motive for choosing the “best” current option, we have a hidden motive to stop, and choose, and reject consideration of any more options. When we have a hidden motive to reject the current best option, we have a hidden motive to suspend judgment pending additional evidence, to generate more options – to find something, anything, to do instead of coming to a conclusion.

Torture vs. Dust Specks describes a potential problem for utilitarianism:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?
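(If the caret notation is unfamiliar: it is Knuth’s up-arrow notation, in which each additional arrow iterates the operation below it. A rough sense of scale:)

\[
\begin{aligned}
3 \uparrow 3 &= 3^3 = 27 \\
3 \uparrow\uparrow 3 &= 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987 \\
3 \uparrow\uparrow\uparrow 3 &= 3 \uparrow\uparrow (3 \uparrow\uparrow 3) = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\end{aligned}
\]

So 3^^^3 is a power tower of 3s nearly eight trillion levels high, already incomprehensibly larger than the number of atoms in the observable universe, and the 3^^^^3 mentioned below is larger still.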

Eliezer sided with torture, some commenters sided with dust specks, and many people refused to state a preference. Eliezer calls this A Case Study in Motivated Continuation:

In “bioethics” debates, you very often see experts on bioethics discussing what they see as the pros and cons of, say, stem-cell research; and then, at the conclusion of their talk, they gravely declare that more debate is urgently needed, with participation from all stakeholders.  If you actually come to a conclusion, if you actually argue for banning stem cells, then people with relatives dying of Parkinson’s will scream at you.  If you come to a conclusion and actually endorse stem cells, religious fundamentalists will scream at you.  But who can argue with a call to debate?

Uncomfortable with the way the evidence is trending on Darwinism versus creationism?  Consider the issue soberly, and decide that you need more evidence; you want archaeologists to dig up another billion fossils before you come to a conclusion.  That way you neither say something sacrilegious, nor relinquish your self-image as a rationalist.  Keep on doing this with all issues that look like they might be trending in an uncomfortable direction, and you can maintain a whole religion in your mind.

Real life is often confusing, and we have to choose anyway, because refusing to choose is also a choice.  The null plan is still a plan.  We always do something, even if it’s nothing.  As Russell and Norvig put it, “Refusing to choose is like refusing to allow time to pass.”

A Terrifying Halloween Costume shows Yudkowsky dressed up as 3^^^^3 dust specks.



{ 21 comments }

Haukur March 11, 2011 at 6:11 am

I take a break from this blog for a while and when I come back I find out that no longer is religion the greatest threat to humanity’s future – intelligent machines are a bigger threat. Funnily enough, I actually agree. When I did my degree in computer science I was a bit perplexed by how uninterested the professors were in general artificial intelligence and how dismissive they were of it. That’s always seemed really, really important to me, despite the failures of early researchers.

This Jewish Messiah figure you’re promoting actually sounds like a really nice guy and he says a lot of insightful things. Could stand to lose the arrogance-cum-ignorance towards the humanities, though. But you seem to be working on that, Luke.


Luke Muehlhauser March 11, 2011 at 7:55 am

Haukur,

I never thought religion was the biggest threat to humanity. Also, what’s my arrogance-cum-ignorance toward the humanities? Are you inferring that from the fact that I almost never mention them, excepting philosophy? :)


Haukur March 11, 2011 at 8:32 am

I apologize for the ambiguous wording; I was talking about Eliezer, and your ever so gentle and deferential attempts to convince him to stop being such an ass.

I never thought religion was the biggest threat to humanity.

I’m honestly surprised. I’d inferred from posts like this that you did. But you’re certainly in a better position to know your past opinions than I am.


Jacopo March 11, 2011 at 9:16 am

More good stuff. I want Yudkowsky’s book and I want it now!


David Rogers March 11, 2011 at 9:17 am

“In a world of 6 billion people, there is not one person alive who can say with certainty that science does not know a thing.”

I can say with absolute certainty that science does not know anything.

Scientists know things (allegedly, if they’re not fudging their data or conclusions for grant money). Science is the data and the conclusions. Data and conclusions know nothing.

If Yudkowsky has clarified that when he says “science” he means “scientists,” then the semantic context of the piece would allow “science” to know something, although I would contend his communication intentions are awkward.


Curt March 11, 2011 at 10:58 am

I do not know how many people 3^^^^3 is but I clearly go for the dust specks.
I hope that anyone who disagrees gets horribly tortured for 50 years without rest.
I will take a dust speck in both eyes to prevent someone from being tortured for 50 years without rest. Even if the calculation considers follow-on consequences of dust specks in eyes such as car accidents, train and plane crashes, and misread utility readings, I still go with the dust specks. Besides, how likely is it that a dust speck in an eye will cause a train to hit an airplane?


Rob March 11, 2011 at 11:41 am

Those concepts of motivated skepticism and motivated credulity are great. Often I point to the overwhelming evidence that our minds are physical, and the creduloid Christian ties himself in knots showing how the evidence allows him to believe his mind is actually a mysterious lump of wonder stuff riding herd on his brain.


Luke Muehlhauser March 11, 2011 at 11:52 am

Rob,

Yup.

Warning: You are using a HUMAN BRAIN to do your reasoning. It has many defects that have been identified!


JNester March 11, 2011 at 12:24 pm

Haukur: This Jewish Messiah figure you’re promoting actually sounds like a really nice guy and he says a lot of insightful things.

LOL! Too classic. Luke’s “new messiah.”


Haecceitas March 11, 2011 at 1:41 pm

I take a break from this blog for a while and when I come back I find out that no longer is religion the greatest threat to humanity’s future – intelligent machines are a bigger threat.

But just think what’s going to happen when those intelligent machines become religious.


Jeff H March 11, 2011 at 3:35 pm

I do not know how many people 3^^^^3 is but I clearly go for the dust specks.

Curt,
Maybe it will or won’t change your decision, but the article mentions that 3^^^3 is essentially a number greater than the number of atoms in the universe. In other words, the choice is between the torture of one person and dust specks in the eyes of everyone who has ever lived, plus everyone currently living, plus billions upon billions more. Just wanted to point that out.


Curt March 11, 2011 at 3:45 pm

Jeff,
Thank you for pointing that out.


Colin March 11, 2011 at 5:57 pm

‘Because refusing to choose is also a choice’.

Sounds like “Free Will” by Rush.


Curt March 13, 2011 at 3:13 pm

I am computer illiterate and have not read more than 3 articles in my life about Artificial Intelligence. At least one of those 3 articles was about AI machines destroying humans.
My response to such a thought was, and still is: nonsense. But since I am such a novice I would be willing to listen to what others have to say on this subject.
My reasoning that AI machines would have no desire to take over from or destroy humans is that a machine would not have emotions. It would not have emotions because it would not have chemistry, so to speak. When a human touches a hot stove the human feels pain. I imagine that a machine could have sensors built into a hand that would tell it that the hand is on something hot or very hot, and if it does not want damage to occur to its (hand) hardware it had better move its hand away from the hot stove sooner rather than later. Yet I cannot see that the AI would actually think that leaving its hand there was painful.
Taking it one step further, the lack of emotions would mean that a machine would not have its own goals. Unless it was commanded to do something, a machine would not have an incentive to do anything.
Now if scientists start combining machines with flesh and blood, and if nanotechnology allows a scientist to design and create a brain of living tissue, then I would say that we are no longer talking about ARTIFICIAL intelligence even though the processes used to create it might have been artificial.
Your (anyone’s) thoughts on this subject are welcome for the purpose of updating me on this subject.
Climate Controlled Curt


Curt March 14, 2011 at 8:53 am

I am new here. Do old threads get overlooked and forgotten?


Alexander Kruel March 14, 2011 at 10:32 am

@Curt

First of all, emotions are not a part of “intelligence” in its most abstract form. What humans mean by “emotions” are very complex motivations. The actual problem is that most artificially designed intelligences do lack human “emotions”. You argued that artificial agents would not have an incentive to do anything. That is partly true, insofar as an artificial agent would not have an incentive to care about humans; it would be indifferent to humans and consequently constitute a danger. Yet the kind of artificial intelligence that Yudkowsky and others are talking about is by definition goal-oriented: agents that are trying to leverage their intelligence to maximize a goal. There is no need to be conscious to be goal-oriented, nor does an agent, however intelligent, have to be “emotional”; intelligence does not demand complex motivations. When it comes to goals, an intelligent agent is in a sense similar to a stone rolling down a hill: both are moving towards a sort of equilibrium. The difference is that intelligence follows more complex trajectories, as its ability to read and respond to environmental cues is vastly greater than that of a stone. And that is the danger: indifference to the goals of other agents, mixed with the ability to respond to other agents. The problem is how to make those artificial intelligences care about us rather than being indifferent, how to make them “emotional”. Otherwise we might be wiped out because something was emotionless, because something was indifferent about us in the achievement of its goal.
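To make the indifference point concrete, here is a minimal, purely illustrative sketch in Python (the action names, the paperclip objective, and the numbers are all invented for this example, not taken from any actual AI design): a goal-directed agent just picks whichever action scores highest on its objective, and anything the objective does not mention never enters the calculation.

def choose_action(actions, predict_outcome, objective):
    """Pick the action whose predicted outcome scores highest on the objective."""
    return max(actions, key=lambda action: objective(predict_outcome(action)))

def objective(outcome):
    # The agent's goal mentions only paperclips -- nothing about people.
    return outcome["paperclips"]

# Toy world model: predicted outcomes for two candidate actions (invented numbers).
predicted_outcomes = {
    "recycle scrap metal": {"paperclips": 10, "humans_unharmed": True},
    "strip-mine the town": {"paperclips": 10_000, "humans_unharmed": False},
}

best = choose_action(list(predicted_outcomes), predicted_outcomes.get, objective)
print(best)  # "strip-mine the town" -- the side effect never enters the choice.

Nothing in that loop is hostile; the danger is only that “humans_unharmed” carries no weight unless someone deliberately writes it into the objective.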


Curt March 14, 2011 at 4:14 pm

Karl,
Thank you, that helps a lot. We need to worry because AI machines would be goal oriented, and if they are indifferent about us they might think that we stand in the way of achieving their pre-programmed goals. My understanding is that for something to be called AI it would have to be able to draw its own conclusions about its environment and experiences. But could an AI actually decide to change its own pre-programmed goals?
What would give it a motivation to do that if it is indifferent to anything except the goals it has been programmed with?
Cruel Cannibalistic Curt


Alexander Kruel March 15, 2011 at 2:33 am

@Curt

In a way humans are able to change themselves via learning. An artificial general intelligence, one that is at least as broadly capable as a standard human, will be able to learn as well, altering its response to certain stimuli. Would it be able to more dramatically alter its own goals? That’s up to its design. Certainly an AGI that would chaotically alter its goals, including the instrumental goal of becoming more intelligent, would be ineffective. My understanding is that certain kinds of goal-stability are an inherent quality of any efficient AI design. But I am not an expert. Some people think that one of the first tasks any artificial general intelligence will be used for, or equipped with, is the design of better, more capable artificial general intelligences. Its goal would be to create more intelligent AIs, which might very well include a refinement of its goal parameters. To do so it might need a deep understanding of its own design. Here many potential dangers lurk. For further reading go here.


Curt March 15, 2011 at 2:55 pm

Karl,
The link that you provided here, and one that I think Louis provided on another thread, give me the impression that I have so much catching up to do if I really want to be fluent in this area. Then again, the current lead article says that machine ethics is a relatively new field of study, so that gives me some hope that I could catch up. The study of human ethics is a passion of mine. I imagine that I will find the topic of how to design ethical machines quite interesting.
Thank you,
Colorful Cooperative Curt


Polymeron March 20, 2011 at 8:07 pm

Alexander Kruel,

The problem is how to make those artificial intelligences care about us rather than being indifferent, how to make them “emotional”. Otherwise we might be wiped out because something was emotionless, because something was indifferent about us in the achievement of its goal.  

The problem is not how to make them emotional; I can already do that much. And indeed that is necessary for a goal-driven AI.
The problem, rather, is having those emotions align with our well-being, long term.

(That and symbolic representation of systems, which I still haven’t fully figured out yet. Though I have at least one lead on that, too)


Polymeron March 20, 2011 at 8:16 pm

Curt,
Re: Dust specks, I assume you haven’t considered that if only one in a trillion people is in danger of being kidnapped and tortured while a dust speck is hitting their eye, and if the chance that this dust speck would make the difference between being captured and not (say, because they blinked or rubbed their eye) is one in a billion trillion, you have still condemned a population VASTLY larger than Earth’s to a life of terrible torture?
Plus those killed in accidents, lost productivity, the list goes on.
All so you wouldn’t torture one person.
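Spelling out the arithmetic behind that claim (the odds above are hypothetical and the population figure approximate): one in a trillion is $10^{-12}$, one in a billion trillion is $10^{-21}$, and Earth holds roughly $7 \times 10^{9}$ people, so the expected number of people condemned is

\[
3\uparrow\uparrow\uparrow 3 \times 10^{-12} \times 10^{-21} \;=\; 3\uparrow\uparrow\uparrow 3 \times 10^{-33} \;\gg\; 7 \times 10^{9},
\]

since dividing 3^^^3 by a mere $10^{33}$ barely dents it.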

Luke,
I think Eliezer’s point was not a problem with Utilitarianism, but rather a demonstration of our inability to think in large numbers. And judging by the comments, I’d call that a very successful demonstration.

