Reading Yudkowsky, part 9

by Luke Muehlhauser on January 12, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to improve their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

Yudkowsky’s 53rd post is a very important one: Making Beliefs Pay Rent (in Anticipated Experiences).

He begins with the famous parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”

The two people think they disagree, but they do not actually disagree in terms of anticipated experiences. If they left a tape recorder in the forest near a tree they both knew was about to fall, they would both expect to hear the crash when they played back the tape. And if they had a machine that could scan all brains for auditory processing, they would both expect to find no auditory processing related to the falling of that tree (since no humans were within hearing range when it fell).

So it sounds like they’re disagreeing, but their anticipated experiences do not differ.

This kind of “fake disagreement” happens in moral philosophy, and my own analogy for calling it out concerns two visitors to an art museum who come upon Marcel Duchamp’s Fountain. Luke proclaims, “Wow, that is great art.” Ashley retorts: “Are you crazy? That’s not great art!”

After much pointless arguing, they agree to each provide a definition for what they mean by “great art.” Luke says that by “great art” he means “something that was hugely influential on future artists.” And Ashley says that by “great art” she means “something that is beautiful and pleasing to most of its observers.”

So it sounded like they were disagreeing, but their anticipated experiences don't differ at all. Luke's sentence "Duchamp's Fountain is great art" predicts that they should be able to find many artists discussing Duchamp's Fountain and how it influenced their own work, and Ashley agrees with that prediction. Ashley's sentence "Duchamp's Fountain is not great art" predicts that if they interviewed most casual museum visitors, those visitors would say they didn't much care for the piece. And Luke agrees with that prediction, too.

So they weren't disagreeing, and in fact they were probably both correct. Their initial confusion came from the fact that they didn't make their beliefs pay rent. They didn't make them pay rent in terms of anticipated experiences.

Unfortunately, “to anticipate sensory experiences as precisely as possible, we must process beliefs that are not [by themselves] anticipations of sensory experience.”

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

…suppose your postmodern English professor teaches you that the famous writer Wulky Wilkinsen is actually a “post-utopian”. What does this mean you should expect from his books? Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all. But you had better remember the propositional assertion that “Wulky Wilkinsen” has the “post-utopian” attribute, so you can regurgitate it on the upcoming quiz…

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict – or better yet, prohibit.  Do you believe that phlogiston is the cause of fire?  Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a post-utopian? Then what do you expect to see because of that? No, not “colonial alienation”; what experience will happen to you?

It is even better to ask: what experience must not happen to you?  Do you believe that elan vital explains the mysterious aliveness of living beings?  Then what does this belief not allow to happen – what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you.  It floats…

Above all, don’t ask what to believe – ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

Yudkowsky’s 54th post is Belief in Belief:

Depending on how your childhood went, you may remember a time period when you first began to doubt Santa Claus’s existence, but you still believed that you were supposed to believe in Santa Claus, so you tried to deny the doubts. As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. What does it mean to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green? The statement is confusing; it’s not even clear what it would mean to believe it – what exactly would be believed, if you believed. You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green. Dennett calls this “belief in belief”.

And here things become complicated, as human minds are wont to do – I think even Dennett oversimplifies how this psychology works in practice. For one thing, if you believe in belief, you cannot admit to yourself that you only believe in belief, because it is virtuous to believe, not to believe in belief, and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, “I don’t believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it” – not unless they are unusually capable of acknowledging their own lack of virtue. People don’t believe in belief in belief, they just believe in belief.

Yudkowsky uses this notion of “belief in belief” to explain the believer in the invisible dragon (in Carl Sagan’s “dragon in the garage” parable).

Bayesian Judo opens:

You can have some fun with people whose [anticipated experiences] get out of sync with what they [think] they believe.

Yudkowsky tells the story of a conversation he had at a dinner party:

[A man] said: “I don’t believe Artificial Intelligence is possible because only God can make a soul.”

At this point I must have been divinely inspired, because I instantly responded: “You mean if I can make an Artificial Intelligence, it proves your religion is false?”

He said, “What?”

I said, “Well, if your religion predicts that I can’t possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false…”

There was a pause, as [he] realized he had just made his hypothesis vulnerable to falsification, and then he said, “Well, I didn’t mean that you couldn’t make an intelligence, just that it couldn’t be emotional in the same way we are.”

I said, “So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong.”

He said, “Well, um, I guess we may have to agree to disagree on this.”

I said: “No, we can’t, actually. There’s a theorem of rationality called Aumann’s Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.”

We went back and forth on this briefly. Finally, he said, “Well, I guess I was really trying to say that I don’t think you can make something eternal.”

I said, “Well, I don’t think so either! I’m glad we were able to reach agreement on this, as Aumann’s Agreement Theorem requires.”  I stretched out my hand, and he shook it, and then he wandered away.

A woman who had stood nearby, listening to the conversation, said to me gravely, “That was beautiful.”

See? Skilled Bayesians get the ladies.

There is your motivation to struggle through An Intuitive Explanation of Bayes’ Theorem.
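If you do struggle through it, the core move is a single arithmetic step. Here is a minimal sketch of a Bayesian update; the numbers are the mammography example that essay works through (1% prevalence, an 80% true-positive rate, and a 9.6% false-positive rate), and the function name is just my own illustrative choice:

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    # Total probability of seeing the evidence at all:
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# 1% of women screened have breast cancer; the test catches 80% of
# cancers but also fires on 9.6% of healthy patients. Given a
# positive test, what is the probability of cancer?
p = posterior(prior=0.01, p_evidence_given_h=0.80,
              p_evidence_given_not_h=0.096)
print(round(p, 3))  # ~0.078 -- about 7.8%, not 80%
```

The surprise – a positive result on an "80% accurate" test only raises the probability to about 8% – is exactly the kind of concrete anticipated experience these posts are asking beliefs to generate.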

In Professing and Cheering, Yudkowsky calls on Dennett again. Perhaps, Dennett suggests, what sociologists of religion are sometimes studying is not so much religious belief as religious profession:

Suppose an alien anthropologist studied a group of postmodernist English students who all seemingly believed that Wulky Wilkinsen was a post-utopian author. The appropriate question may not be “Why do the students all believe this strange belief?” but “Why do they all write this strange sentence on quizzes?” Even if a sentence is essentially meaningless, you can still know when you are supposed to chant the response aloud.

But maybe sometimes what appears to be religious profession isn’t even that. Maybe sometimes it’s just religious cheering, like “Go Blues!”

So, we might call beliefs that control our anticipated experiences “proper beliefs.” Those are the good ones. And then there are various types of “improper beliefs,” beliefs that fail to control our anticipated experiences. These might include “belief in belief” and also “professing” and “cheering,” which are often represented as proper beliefs but probably aren’t.

Of course, proper beliefs can still be just plain wrong, as in the mother who expects prayer to heal her baby.

Another type of improper belief is what Yudkowsky calls Belief as Attire:

On the other hand, it is very easy for a human being to genuinely, passionately, gut-level belong to a group, to cheer for their favorite sports team… Identifying with a tribe is a very strong emotional force.  People will die for it.  And once you get people to identify with a tribe, the beliefs which are attire of that tribe will be spoken with the full passion of belonging to that tribe.



Joseph January 12, 2011 at 5:27 am

Belief as Attire, or mob psychology, names the same phenomenon as the rally-around-the-flag effect. What was great about the Renaissance is that people started to think as individuals rather than as just another cog in the machine. But we still have a ways to go on that front.


Luke Muehlhauser January 12, 2011 at 6:52 am




Paul Wright January 12, 2011 at 10:47 am

I like Yudkowsky’s take on Sagan’s invisible dragon: Sagan’s original point is about falsifiability, but Yudkowsky turns it into a story about what’s going on with the dragon-believer who pre-emptively knows how tests for the dragon will fail (so on some level “knows” the dragon is not there) while avowing belief in the dragon. I’ve had interesting conversations about healing-prayer experiments where Christians have tied themselves in knots to avoid saying their belief allows them to anticipate anything at all. This seems a classic case of what Yudkowsky is talking about.

Speaking of Dennett, Simon Blackburn covers similar ground in the early part of his Truth book, which I’d recommend.


James H. January 12, 2011 at 9:39 pm

See? Skilled Bayesians get the ladies.

Even my essentially non-existent understanding of Bayes’ Theorem can anticipate measurable increases in traffic to your explanatory article based on the posting of this acute observation.

Now someone just needs to write Bayes and Nights: Every Single Man’s Favorite New Theorem. :)


Luke Muehlhauser January 12, 2011 at 10:28 pm

James H.,



BenSix January 13, 2011 at 4:00 am

See? Skilled Bayesians get the ladies.

There is your motivation to struggle through An Intuitive Explanation of Bayes’ Theorem.

And poor Bayesians get cheated on.

Another type of improper belief is what Yudkowsky calls Belief as Attire

Boo! Different shirt!

Anyway, before my links get even more frivolous, these are useful posts…


Luke Muehlhauser January 13, 2011 at 12:23 pm


Lol, good links!

