Reading Yudkowsky, part 42

by Luke Muehlhauser on June 4, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 350th post is When Science Can’t Help:

Once upon a time, a younger Eliezer had a stupid theory.  Let’s say that Eliezer18’s stupid theory was that consciousness was caused by closed timelike curves hiding in quantum gravity.  This isn’t the whole story, not even close, but it will do for a start.

And there came a point where I looked back, and realized:

  1. I had carefully followed everything I’d been told was Traditionally Rational, in the course of going astray.  For example, I’d been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.
  2. Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, “Oh, well, I guess my theory was wrong.”

From Science’s perspective, that is how things are supposed to work – happy fun for everyone.  You admitted your error!  Good for you!  Isn’t that what Science is all about?

But what if I didn’t want to waste ten years?

Well… Science didn’t have much to say about that. How could Science say which theory was right, in advance of the experimental test?  Science doesn’t care where your theory comes from – it just says, “Go test it.”

This is the great strength of Science, and also its great weakness.

The problem is, sometimes scientific questions have no experimental test available any time soon:

…sometimes a question will have very large, very definite experimental consequences in your future – but you can’t easily test it experimentally right now – and yet there is a strong rational argument.

Quantum mechanics is one example Yudkowsky has already explained.

Evolutionary psychology is another example of a case where rationality has to take over from science.  While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally.  But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable – and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally.  Science would render a verdict of “not proven” on individual parts of a connected theoretical mesh that is experimentally productive as a whole.  We’d need a new kind of verdict for that, something like “indirectly supported”.

Or what about cryonics?

Cryonics is an archetypal example of an extremely important issue (150,000 people die per day) that will have huge consequences in the foreseeable future, but doesn’t offer definite unmistakable experimental evidence that we can get right now.

So do you say, “I don’t believe in cryonics because it hasn’t been experimentally proven, and you shouldn’t believe in things that haven’t been experimentally proven?”

Well, from a Bayesian perspective, that’s incorrect.  Absence of evidence is evidence of absence only to the degree that we could reasonably expect the evidence to appear.  If someone is trumpeting that snake oil cures cancer, you can reasonably expect that, if the snake oil was actually curing cancer, some scientist would be performing a controlled study to verify it – that, at the least, doctors would be reporting case studies of amazing recoveries – and so the absence of this evidence is strong evidence of absence.  But “gaps in the fossil record” are not strong evidence against evolution; fossils form only rarely, and even if an intermediate species did in fact exist, you cannot expect with high probability that Nature will obligingly fossilize it and that the fossil will be discovered.

Reviving a cryonically frozen mammal is just not something you’d expect to be able to do with modern technology, even if future nanotechnologies could in fact perform a successful revival.  That’s how I see Bayes seeing it.
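Yudkowsky’s point here is quantitative, so a worked update may make it concrete. Below is a minimal sketch in Python (mine, not his, with made-up probabilities) of why the snake-oil and fossil cases come out differently: how much the absence of evidence lowers your posterior depends on how strongly you expected the evidence to appear in the first place.

```python
# A minimal sketch (illustrative numbers only) of "absence of evidence is
# evidence of absence" -- but only to the degree the evidence was expected.

def posterior_given_no_evidence(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | no evidence), via Bayes' theorem on the complement of the evidence."""
    p_no_e_if_true = 1 - p_evidence_if_true
    p_no_e_if_false = 1 - p_evidence_if_false
    numerator = prior * p_no_e_if_true
    return numerator / (numerator + (1 - prior) * p_no_e_if_false)

# Snake oil: if it really cured cancer, we'd almost surely have seen studies
# or case reports by now, so their absence is strong evidence of absence.
print(posterior_given_no_evidence(0.50, 0.99, 0.05))  # ~0.010: posterior collapses

# Fossil gap: even if the intermediate species existed, fossilization and
# discovery are rare, so a missing fossil barely moves the prior.
print(posterior_given_no_evidence(0.50, 0.02, 0.01))  # ~0.497: almost no update
```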

Next is the post Science Isn’t Strict Enough:

Science began as a rebellion against grand philosophical schemas and armchair reasoning.  So Science doesn’t include a rule as to what kinds of hypotheses you are and aren’t allowed to test; that is left up to the individual scientist.  Trying to guess that a priori would require some kind of grand philosophical schema, and reasoning in advance of the evidence.  As a social ideal, Science doesn’t judge you as a bad person for coming up with heretical hypotheses; honest experiments, and acceptance of the results, is virtue unto a scientist…

So that’s all that Science really asks of you – the ability to accept reality when you’re beat over the head with it.  It’s not much, but it’s enough to sustain a scientific culture.

Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment.  If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer?  7.5%.  You cannot say, “I believe she doesn’t have breast cancer, because the experiment isn’t definite enough.”  You cannot say, “I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate.”  7.5% is the rational estimate given this evidence, not 7.4% or 7.6%.  The laws of probability are laws.
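That 7.5% is a single application of Bayes’ theorem, and it’s worth seeing the arithmetic spelled out. Here is a minimal sketch using exactly the numbers from the quote:

```python
# Bayes' theorem on the mammography problem quoted above:
# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)

p_cancer = 0.01           # 1% of routinely screened women have breast cancer
p_pos_if_cancer = 0.80    # 80% true-positive rate
p_pos_if_healthy = 0.10   # 10% false-positive rate

# Total probability of a positive mammography, across sick and healthy women.
p_positive = p_cancer * p_pos_if_cancer + (1 - p_cancer) * p_pos_if_healthy

p_cancer_given_positive = p_cancer * p_pos_if_cancer / p_positive
print(f"{p_cancer_given_positive:.3%}")  # 7.477% -- i.e., about 7.5%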

So why isn’t everybody practicing the Way of Bayes?

…around the time I realized my big mistake, I had also been studying Kahneman and Tversky and Jaynes.  I was learning a new Way, stricter than Science.  A Way that could criticize my folly, in a way that Science never could.  A Way that could have told me, what Science would never have said in advance:  “You picked the wrong hypothesis to test, dunderhead.”

But the Way of Bayes is also much harder to use than Science.  It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.

In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.

But if you try to use Bayes even qualitatively – if you try to do the thing that Science doesn’t trust you to do, and reason rationally in the absence of overwhelming evidence – it is like math, in that a single error in a hundred steps can carry you anywhere.  It demands lightness, evenness, precision, perfectionism.

Do Scientists Already Know This Stuff? examines what practicing scientists could learn by reading Less Wrong.


Taranu June 4, 2011 at 9:02 am

The post entitled Do Scientists Already Know This Stuff reminds me of a talk I had with a former classmate. He said that medication doesn’t help cure diseases. Faith that one will heal is enough. Those who heal do so because they have enough faith, while those who don’t heal do so because they don’t have enough. I told him I wasn’t qualified to deal with such claims and suggested that he go to the medical community to explain this to them (maybe he will start a paradigm shift).
I am open to suggestions as to what else I could have told him.
