Reading Yudkowsky, part 14

by Luke Muehlhauser on February 13, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to improve their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 82nd post is Positive Bias: Look Into the Dark, where he notes that many of us test a hypothesis by looking for data that fits it, rather than looking for data that would disconfirm it. This is subtly different from confirmation bias, where we are motivated to end up where we started.
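
The post's running example is Peter Wason's 2-4-6 task, in which subjects must guess a hidden rule governing triples of numbers. Here is a minimal sketch in Python (the rule functions are my own illustration, not from the post) of why purely positive tests can never separate the guess from the truth:

    # Wason's 2-4-6 task: the hidden rule is broader than the typical guess.
    def hidden_rule(triple):
        """The experimenter's actual rule: any strictly ascending triple."""
        a, b, c = triple
        return a < b < c

    def guessed_rule(triple):
        """A typical subject's hypothesis: numbers increasing by two."""
        a, b, c = triple
        return b == a + 2 and c == b + 2

    # Positive tests, chosen to fit the guess: both rules answer "yes",
    # so these trials provide no evidence that separates the hypotheses.
    for triple in [(2, 4, 6), (8, 10, 12), (100, 102, 104)]:
        assert hidden_rule(triple) and guessed_rule(triple)

    # A negative test, one the guess predicts should fail. The hidden rule
    # accepts it, refuting the guess -- the "look into the dark".
    probe = (1, 2, 3)
    print(guessed_rule(probe), hidden_rule(probe))  # False True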

In Say Not “Complexity”, Yudkowsky distinguishes some good and bad uses of the term “complexity”:

Complexity is not a useless concept.  It has mathematical definitions attached to it, such as Kolmogorov complexity, and Vapnik-Chervonenkis complexity.  Even on an intuitive level, complexity is often worth thinking about – you have to judge the complexity of a hypothesis and decide if it’s “too complicated” given the supporting evidence, or look at a design and try to make it simpler.

But concepts are not useful or useless of themselves.  Only usages are correct or incorrect.  In the step [my apprentice] Marcello was trying to take in the dance, he was trying to explain something for free, get something for nothing…

Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic” – as in, “X magically does Y” – to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say “magic”, than “complexity” or “emergence”; the latter words create an illusion of understanding.  Wiser to say “magic”, and leave yourself a placeholder, a reminder of work you will have to do later.

As it happens, I’ve done that myself, without thinking about why. When I’m describing to a fellow technician how some new computer program for financial management works, I’ll sometimes slip in “And then it magically does X” to indicate that I don’t actually understand that part of my own explanation.
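
Since the quote name-checks Kolmogorov complexity: it is uncomputable in general, but compressed length is a standard rough proxy for it, and a few lines of Python (my own illustration, using only the standard library) give the flavor of judging one description "more complex" than another:

    import random
    import zlib

    def description_length(s: str) -> int:
        """Crude upper bound on Kolmogorov complexity: compressed size in bytes."""
        return len(zlib.compress(s.encode("utf-8")))

    random.seed(0)
    regular = "ab" * 500  # generated by a very short rule
    noisy = "".join(random.choice("ab") for _ in range(1000))  # no short rule

    print(description_length(regular))  # small: the repetition compresses away
    print(description_length(noisy))    # much larger: little structure to exploit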

In My Wild and Reckless Youth, Eliezer vaguely discusses some mistakes he made when he was merely scientific-minded and not yet Bayes-minded:

As a Traditional Rationalist, the young Eliezer was careful to ensure that his Mysterious Answer made a bold prediction of future experience.  Namely, I expected future neurologists to discover that neurons were exploiting quantum gravity, a la Sir Roger Penrose.  This required neurons to maintain a certain degree of quantum coherence, which was something you could look for, and find or not find.  Either you observe that or you don’t, right?

But my hypothesis made no retrospective predictions.  According to Traditional Science, retrospective predictions don’t count – so why bother making them?  To a Bayesian, on the other hand, if a hypothesis does not today have a favorable likelihood ratio over “I don’t know”, it raises the question of why you today believe anything more complicated than “I don’t know”.  But I knew not the Way of Bayes, so I was not thinking about likelihood ratios or focusing probability density.  I had Made a Falsifiable Prediction; was this not the Law?

As a Traditional Rationalist, the young Eliezer was careful not to believe in magic, mysticism, carbon chauvinism, or anything of that sort.  I proudly professed of my Mysterious Answer, “It is just physics like all the rest of physics!”  As if you could save magic from being a cognitive isomorph of magic, by calling it quantum gravity.  But I knew not the Way of Bayes, and did not see the level on which my idea was isomorphic to magic.  I gave my allegiance to physics, but this did not save me; what does probability theory know of allegiances?  I avoided everything that Traditional Rationality told me was forbidden, but what was left was still magic.

Beyond a doubt, my allegiance to Traditional Rationality helped me get out of the hole I dug myself into.  If I hadn’t been a Traditional Rationalist, I would have been completely screwed.  But Traditional Rationality still wasn’t enough to get it right. It just led me into different mistakes than the ones it had explicitly forbidden.

When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves “rationalists” do not rule the world.  You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.
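
The "favorable likelihood ratio" here is just Bayes' theorem in odds form: posterior odds equal prior odds times P(evidence | hypothesis) / P(evidence | alternative). A minimal sketch, with made-up numbers:

    def posterior_odds(prior_odds, p_e_given_h, p_e_given_alt):
        """Bayes' theorem in odds form: posterior = prior * likelihood ratio."""
        return prior_odds * (p_e_given_h / p_e_given_alt)

    prior = 0.1  # 1:10 odds in favor of the hypothesis

    # Evidence twice as likely under the hypothesis as under "I don't know":
    print(posterior_odds(prior, 0.8, 0.4))  # 0.2 -- belief strengthened

    # A "prediction" any observation would satisfy (likelihood ratio of 1):
    print(posterior_odds(prior, 0.5, 0.5))  # 0.1 -- no update at all

A falsifiable prediction that is no more probable under your hypothesis than under "I don't know" moves the odds not at all, which is exactly the young Eliezer's mistake.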

The thought continues in Failing to Learn from History:

I thought the lesson of history was that astrologers and alchemists and vitalists had an innate character flaw, a tendency toward mysterianism, which led them to come up with mysterious explanations for non-mysterious subjects.  But surely, if a phenomenon really was very weird, a weird explanation might be in order?

It was only afterward, when I began to see the mundane structure inside the mystery, that I realized whose shoes I was standing in.  Only then did I realize how reasonable vitalism had seemed at the time, how surprising and embarrassing had been the universe’s reply of, “Life is mundane, and does not need a weird explanation.”

We read history but we don’t live it, we don’t experience it.  If only I had personally postulated astrological mysteries and then discovered Newtonian mechanics, postulated alchemical mysteries and then discovered chemistry, postulated vitalistic mysteries and then discovered biology.  I would have thought of my Mysterious Answer and said to myself:  No way am I falling for that again.

One way to compensate is to imagine yourself living through everything you know from history – the rise and fall of Rome, the shift from Aristotle to Galileo, the postulation of phlogiston, and so on. This is one way of Making History Available to your current hypotheses.

In Stranger than History, Yudkowsky makes the “stranger than fiction” point:

…suppose it were the year 1901, and you had to choose between believing those statements I have just offered, and believing statements like the following:

  • There is an absolute speed limit on how fast two objects can seem to be traveling relative to each other, which is exactly 670616629.2 miles per hour.  If you hop on board a train going almost this fast and fire a gun out the window, the fundamental units of length change around, so it looks to you like the bullet is speeding ahead of you, but other people see something different.  Oh, and time changes around too.
  • In the future, there will be a superconnected global network of billions of adding machines, each one of which has more power than all pre-1901 adding machines put together.  One of the primary uses of this network will be to transport moving pictures of lesbian sex by pretending they are made out of numbers.
  • Your grandchildren will think it is not just foolish, but evil, to say that someone should not be President of the United States because she is black.
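
(An aside on the first bullet: that figure is just the speed of light converted to miles per hour, which is easy to check from the exact SI definitions; the quote's last decimal appears to come from rounding an intermediate value.)

    # Speed of light in vacuum, converted to miles per hour.
    C_METERS_PER_SECOND = 299_792_458  # exact, by definition of the metre
    METERS_PER_MILE = 1609.344         # exact, by definition of the mile

    mps = C_METERS_PER_SECOND / METERS_PER_MILE  # ~186282.397 miles per second
    print(mps * 3600)                            # ~670616629.38 miles per hour
    # Rounding to 186282.397 mi/s first, then multiplying by 3600,
    # reproduces the quote's 670616629.2.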

In Explain / Worship / Ignore, Yudkowsky notes that when somebody asks “Why does it rain?” there are at least three possible responses: Explain the phenomenon, Worship the mystery, or Ignore the question:

…each time you select Explain, the best-case scenario is that you get an explanation, such as “sky spirits”.  But then this explanation itself is subject to the same dilemma – Explain, Worship, or Ignore?  Each time you hit Explain, science grinds for a while, returns an explanation, and then another dialog box pops up.  As good rationalists, we feel duty-bound to keep hitting Explain, but it seems like a road that has no end.

You hit Explain for life, and get chemistry; you hit Explain for chemistry, and get atoms; you hit Explain for atoms, and get electrons and nuclei; you hit Explain for nuclei, and get quantum chromodynamics and quarks; you hit Explain for how the quarks got there, and get back the Big Bang…

We can hit Explain for the Big Bang, and wait while science grinds through its process, and maybe someday it will return a perfectly good explanation.  But then that will just bring up another dialog box.  So, if we continue long enough, we must come to a special dialog box, a new option, an Explanation That Needs No Explanation, a place where the chain ends – and this, maybe, is the only explanation worth knowing.

There – I just hit Worship.

Never forget that there are many more ways to worship something than lighting candles around an altar.

If I’d said, “Huh, that does seem paradoxical.  I wonder how the apparent paradox is resolved?” then I would have hit Explain, which does sometimes take a while to produce an answer.

And if the whole issue seems to you unimportant, or irrelevant, or if you’d rather put off thinking about it until tomorrow, then you have hit Ignore.


{ 5 comments }

Scott February 13, 2011 at 11:34 am

Honestly, I’ve always felt that history is one of the most useful of the liberal arts (could it even be a social science? I digress): the ability to understand what has already happened and apply those lessons. Even something as rigorous as Bayes’s principles relies on understanding the past. Why ignore such a trove of information?


TK February 13, 2011 at 3:37 pm

Luke, have you ever read a story called “The Metamorphosis of Prime Intellect”? Short version: a computer gains the ability to alter its own makeup, then subsequently all of reality, but has Asimov’s three laws of robotics built in. Hilarity/chaos/boredom ensues. http://www.kuro5hin.org/prime-intellect/


Dustin February 15, 2011 at 12:45 pm

I’ve been reading Eliezer Yudkowsky’s site because of this post. So I just want to thank you for directing me to it =)


JS February 15, 2011 at 8:05 pm

Is there a place where “traditional rationality” and “Bayesian rationality” are clearly explained and contrasted? I have to admit that I find EY to be frustratingly circuitous and verbose.


Louise February 17, 2011 at 6:09 am

Do you know of the upcoming debate between William Lane Craig and Lawrence Krauss at North Carolina State University on March 30? And there is another one between Craig and Sam Harris at Notre Dame on April 7.


