Reading Yudkowsky, part 19

by Luke Muehlhauser on March 7, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 128th post is The “Outside the Box” Box:

Whenever someone exhorts you to “think outside the box”, they usually, for your convenience, point out exactly where “outside the box” is located.  Isn’t it funny how nonconformists all dress the same…

…The problem with originality is that you actually have to think in order to attain it, instead of letting your brain complete the pattern.  There is no conveniently labeled “Outside the Box” to which you can immediately run off.  There’s an almost Zen-like quality to it – like the way you can’t teach satori in words because satori is the experience of words failing you.  The more you try to follow the Zen Master’s instructions in words, the further you are from attaining an empty mind.

Original Seeing quotes a story from Robert Pirsig.

How to Seem (and Be) Deep recommends simply using cached Deep Wisdom that people are unfamiliar with. Everyone is familiar with the standard Deep Wisdom about death: “Death gives life meaning,” or whatever. They will nod and maybe even call this wise, but they will not learn anything or be surprised.

But most people are not familiar with a transhumanist’s cached Deep Wisdom about death: “Death is a pointless tragedy that people rationalize.”

I suspect this is one reason Eastern philosophy seems deep to Westerners – it has a nonstandard but coherent cache for Deep Wisdom.  Symmetrically, in works of Japanese fiction, one sometimes finds Christians depicted as repositories of deep wisdom and/or mystical secrets.  (And sometimes not.)

If I recall correctly, an economist once remarked that popular audiences are so unfamiliar with standard economics that, when he was called upon to make a television appearance, he just needed to repeat back Econ 101 in order to sound like a brilliantly original thinker.

Also crucial was that my listeners could see immediately that my reply made sense.  They might or might not have agreed with the thought, but it was not a complete non-sequitur unto them.  I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know.  If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener’s current mental state.  That’s just the way it is.

When talking about the science behind topics regularly treated in science fiction – topics like AI or cryonics – people often commit The Logical Fallacy of Generalizing from Fictional Evidence.

Hold Off on Proposing Solutions quotes from Robyn Dawes’ Rational Choice in an Uncertain World:

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem.  Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested.  Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.”

This is very good advice, and the post gives more details on how to do it with groups that you lead.

“Can’t Say No” Spending links to several results where spending more on health care or development aid has basically no beneficial effect. Why do we do it? Apparently because it feels shitty to say “No.”

Next, Eliezer writes Congratulations to Paris Hilton, on the news that she had reportedly signed up to be cryogenically frozen:

Anyone not signed up for cryonics has now lost the right to make fun of Paris Hilton, because no matter what else she does wrong, and what else you do right, all of it together can’t outweigh the life consequences of that one little decision.

Except that, as it turns out, Paris later denied signing up to be cryogenically frozen.

Pascal’s Mugging: Tiny Probabilities of Vast Utilities examines a variant of Pascal’s Wager with finite but astronomically vast stakes, and offers no resolution:

I don’t feel I have a satisfactory resolution as yet, so I’m throwing it open to any analytic philosophers who might happen to read Overcoming Bias.
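
To feel the force of the problem, here is a toy expected-utility calculation (my own illustration, with made-up numbers, not anything from Eliezer’s post): however small a probability you assign to the mugger’s claim, he can simply name a payoff large enough to swamp it.

    # A toy sketch (my numbers, not Eliezer's) of why a naive expected-utility
    # maximizer is vulnerable to Pascal's Mugging: the mugger can always claim
    # a payoff that grows faster than any probability penalty you assign.

    def expected_utility(prob, payoff):
        return prob * payoff

    claimed_payoff = 10.0 ** 100    # stand-in for 3^^^^3, which is vastly larger
    prob_claim_true = 10.0 ** -50   # an absurdly skeptical prior
    cost_of_paying = 5.0            # the five dollars the mugger demands

    # The naive comparison says to pay: 10^-50 * 10^100 = 10^50, dwarfing 5.
    print(expected_utility(prob_claim_true, claimed_payoff) > cost_of_paying)  # True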

Illusion of Transparency: Why No One Understands You is highly relevant to all writers:

We always know what we mean by our words, and so we expect others to know it too.  Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant.  It’s hard to empathize with someone who must interpret blindly, guided only by the words.

…two days before Germany’s attack on Poland, Chamberlain sent a letter intended to make it clear that Britain would fight if any invasion occurred.  The letter, phrased in polite diplomatese, was heard by Hitler as conciliatory – and the tanks rolled.

Eliezer summarizes half a dozen studies that illuminate the problem in more detail.

Self-Anchoring describes how we are unable to fully see things from another’s point of view, even in very basic ways:

We can put our feet in other minds’ shoes, but we keep our own socks on.

Expecting Short Inferential Distances explains much of miscommunication with reference to our evolutionary past:

In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else.  When you discover a new oasis, you don’t have to explain to your fellow tribe members what an oasis is, or why it’s a good idea to drink water, or how to walk.  Only you know where the oasis lies; this is private knowledge.  But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge.  When you explain things in an ancestral environment, you almost never have to explain your concepts.  At most you have to explain one new concept, not two or more simultaneously.

In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.

In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot.  You’re not likely to think, “Hey, maybe this guy has well-supported background knowledge that no one in my band has even heard of,” because it was a reliable invariant of the ancestral environment that this didn’t happen.

Conversely, if you say something blatantly obvious and the other person doesn’t see it, they’re the idiot, or they’re being deliberately obstinate to annoy you.

And to top it off, if someone says something with no obvious support and expects you to believe it – acting all indignant when you don’t – then they must be crazy.

Recognizing the problem leads to a solution:

A biologist, speaking to a physicist, can justify evolution by saying it is “the simplest explanation”.  But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase “simplest explanation” with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones.  To someone else, “But it’s the simplest explanation!” may sound like an interesting but hardly knockdown argument; it doesn’t feel like all that powerful a tool for comprehending office politics or fixing a broken car.  Obviously the biologist is infatuated with his own ideas, too arrogant to be open to alternative explanations which sound just as plausible.  (If it sounds plausible to me, it should sound plausible to any sane member of my band.)

And from the biologist’s perspective, he can understand how evolution might sound a little odd at first – but when someone rejects evolution even after the biologist explains that it’s the simplest explanation, well, it’s clear that nonscientists are just idiots and there’s no point in talking to them.

A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.

Further advice comes in Explainers Shoot High, Aim Low:

A few years ago, an eminent scientist once told me how he’d written an explanation of his field aimed at a much lower technical level than usual.  He had thought it would be useful to academics outside the field, or even reporters.  This ended up being one of his most popular papers within his field, cited more often than anything else he’d written.

The lesson was not that his fellow scientists were stupid, but that we tend to enormously underestimate the effort required to properly explain things.

Eliezer’s own greatest success in this vein may be his Intuitive Explanation of Bayes’ Theorem, and we get its (delightful) origin story in Double Illusion of Transparency.
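
For a taste of what that essay teaches, here is its famous opening puzzle worked as a quick calculation (my sketch, using the essay’s own numbers): 1% of women screened have breast cancer, mammography catches 80% of cancers, and it false-alarms on 9.6% of healthy patients. How likely is cancer given a positive result?

    # Bayes' theorem applied to the mammography puzzle from the Intuitive
    # Explanation of Bayes' Theorem.

    prior = 0.01            # P(cancer)
    sensitivity = 0.80      # P(positive | cancer)
    false_positive = 0.096  # P(positive | no cancer)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive

    print(round(posterior, 3))  # ~0.078 -- far lower than most people guess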



melior March 9, 2011 at 9:46 am

The Robert M. Pirsig story is actually a novel, and if you haven’t read it yet I highly recommend it. It’s about a man whose philosophical idea is so profound that, as he continues to wrestle with it, it pushes him to the edge of what might from the outside be characterized as a “breakdown”. Yet, told engagingly in the first person, the story makes clear that this description is far from complete.

No, I won’t tell you the idea here. There’s so much more to enjoy about the novel than that overarching thread that I hesitate to say any more for fear of stealing the joy of your firsthand experience.
Zen and the Art of Motorcycle Maintenance


Polymeron March 10, 2011 at 3:17 am

I am immensely frustrated by the Pascal’s Mugging problem. My first attempts at solving it failed, and I haven’t the time to sit around all day just on that one problem. But it certainly is an extremely important issue for any AI using utility calculations.

I’m still keeping it at the back of my mind, though. I think I may have a few promising leads on it.

