AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Overcoming Bias (now moved to Less Wrong) are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.
I suspect some of my readers want to improve their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.
Yudkowsky’s first post is The Martial Art of Rationality:
Our minds respond less readily to [training] than our hands. Muscles are evolutionarily ancient subjects of neural control, while cognitive reflectivity is a comparatively more recent innovation. We shouldn’t be surprised that muscles are easier to use than brains. But it is not wise to neglect the latter training because it is more difficult. It is not by bigger muscles that the human species rose to prominence upon Earth.
He notes that in just the past few decades, we’ve learned a ton about human rationality. Much of this comes from the study of our cognitive biases in behavioral economics and experimental psychology. Other results have come from the Bayesian systematization of probability theory. More still has come from evolutionary psychology and from social psychology. Right now is the best time ever to be somebody who cares about becoming more rational.
These fields give us new focusing lenses through which to view the landscape of our own minds. With their aid, we may be able to see more clearly the muscles of our brains, the fingers of thought as they move.
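The “Bayesian systematization of probability theory” mentioned above boils down to Bayes’ theorem: update your prior belief in a hypothesis by the strength of the evidence. A minimal sketch, with made-up numbers (a 1% base rate, a 90% true-positive rate, and a 9% false-positive rate are illustrative assumptions, not anything from the posts):

```python
def bayes_posterior(prior, likelihood, false_positive_rate):
    """P(H | E) via Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of seeing the evidence, from either hypothesis.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: rare condition (1% prior), fairly accurate test.
posterior = bayes_posterior(prior=0.01, likelihood=0.9, false_positive_rate=0.09)
print(round(posterior, 3))  # roughly 0.092: strong evidence, still unlikely
```

Even a 90%-accurate positive result leaves the hypothesis at under 10%, because the prior was so low. That counterintuitive gap between “accuracy of the test” and “probability given the test” is exactly the kind of result the heuristics-and-biases literature documents people getting wrong.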
Next is Why Truth? Yudkowsky suggests some reasons to pursue truth:
- To achieve a goal, such as building a video chat technology or curing AIDS or investing wisely for retirement.
- Morality, perhaps: some knowledge can benefit society.
Much of getting at the truth has to do with overcoming biases. What’s a bias, again? A bias is “a certain kind of obstacle to our goal of obtaining truth.”
What kind of obstacle, specifically? Before we answer that we must remember that error is the norm, not the exception:
As the proverb goes, “There are forty kinds of lunacy but only one kind of common sense.” The truth is a narrow target, a small region of configuration space to hit. “She loves me, she loves me not” may be a binary question, but E=mc^2 is a tiny dot in the space of all equations, like a winning lottery ticket in the space of all lottery tickets. Error is not an exceptional condition; it is success which is a priori so improbable that it requires an explanation.
Okay, so what is a bias?
Perhaps we will find that we can’t really give any explanation better than pointing to a few extensional examples, and hoping the listener understands. If you are a scientist just beginning to investigate fire, it might be a lot wiser to point to a campfire and say “Fire is that orangey-bright hot stuff over there,” rather than saying “I define fire as an alchemical transmutation of substances which releases phlogiston.” …you should not ignore something just because you can’t define it.
With all that said, we seem to label as “biases” those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery.
Biases are obstacles to truth not caused by the cost of information, or by limited computing power, or by adopted beliefs or moral values, or by brain damage. Instead, “biases arise from machinery that is humanly universal.”
The next post is The Proper Use of Humility:
It is widely recognized that good science requires some kind of humility. What sort of humility is more controversial.
Consider the creationist who says: “But who can really know whether evolution is correct? It is just a theory. You should be more humble and open-minded.” Is this humility?
After surveying some examples of “good” and “bad” humility, Yudkowsky writes:
The vast majority of appeals that I witness to “rationalist’s humility” are excuses to shrug. The one who buys a lottery ticket, saying, “But you can’t know that I’ll lose.” The one who disbelieves in evolution, saying, “But you can’t prove to me that it’s true.” The one who refuses to confront a difficult-looking problem, saying, “It’s probably too hard to solve.” The problem is motivated skepticism aka disconfirmation bias – more heavily scrutinizing assertions that we don’t want to believe. Humility, in its most commonly misunderstood form, is a fully general excuse not to believe something; since, after all, you can’t be sure. Beware of fully general excuses!
To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty.
He then tackles a very hot topic in epistemology, The Modesty Argument:
When you disagree with someone, even after talking over your reasons, the Modesty Argument claims that you should each adjust your probability estimates toward the other’s, and keep doing this until you agree. The Modesty Argument is inspired by Aumann’s Agreement Theorem, a very famous and oft-generalized result which shows that genuine Bayesians literally cannot agree to disagree; if genuine Bayesians have common knowledge of their individual probability estimates, they must all have the same probability estimate.
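The Modesty Argument’s mechanical prescription (not Aumann’s theorem itself, which is about ideal Bayesians sharing common knowledge) can be sketched as iterated averaging. This is a toy illustration with made-up estimates and an assumed adjustment weight:

```python
def modesty_update(p_a, p_b, rounds=10, weight=0.5):
    """Repeatedly move each person's probability estimate partway
    toward the other's, as the Modesty Argument prescribes.
    weight=0.5 means each party meets the other halfway each round."""
    for _ in range(rounds):
        # Tuple assignment: both updates use the pre-round values.
        p_a, p_b = (p_a + weight * (p_b - p_a),
                    p_b + weight * (p_a - p_b))
    return p_a, p_b

# Two hypothetical disputants, one at 90% and one at 20%:
a, b = modesty_update(0.9, 0.2)
print(round(a, 3), round(b, 3))  # both converge to the average, 0.55
```

Note what the procedure guarantees: consistency (the estimates end up equal), not accuracy. That distinction is precisely the target of Yudkowsky’s objection below.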
Yudkowsky disagrees with the Modesty Argument:
If I have five different accurate maps of a city, they will all be consistent with each other. Some philosophers, inspired by this, have held that “rationality” consists of having beliefs that are consistent among themselves. But, although accuracy necessarily implies consistency, consistency does not necessarily imply accuracy. If I sit in my living room with the curtains drawn, and make up five maps that are consistent with each other, but I don’t actually walk around the city and make lines on paper that correspond to what I see, then my maps will be consistent but not accurate. When genuine Bayesians agree in their probability estimates, it’s not because they’re trying to be consistent – Aumann’s Agreement Theorem doesn’t invoke any explicit drive on the Bayesians’ part to be consistent. That’s what makes AAT surprising! Bayesians only try to be accurate; in the course of seeking to be accurate, they end up consistent. The Modesty Argument, that we can end up accurate in the course of seeking to be consistent, does not necessarily follow.
He gives other arguments, too.
Because I say “I don’t know” pretty often, and because I’m tempted to say it’s a morally good thing to do so in many circumstances, Yudkowsky’s post “I don’t know” is the earliest of his posts to really smack me upside the head and perhaps change my mind. The post records an online chat he had with somebody else (person X):
X: it still seems that saying “i don’t know” in some situations is better than giving your best guess…
Eliezer: in real life, you have to choose, and bet, at some betting odds
Eliezer: i.e., people who want to say “I don’t know” for cryonics still have to sign up or not sign up, and they’ll probably do the latter
Eliezer: “I don’t know” is usually just a screen that people think is defensible and unarguable before they go on to do whatever they feel like, and it’s usually the wrong thing because they refused to admit to themselves what their guess was, or examine their justifications, or even realize that they’re guessing
X: how many apples are in a tree outside?
X: i’ve never seen it and neither have you
Eliezer: 10 to 1000
Eliezer: if you offer to bet me a million dollars against one dollar that the tree outside has fewer than 20 apples, when neither of us have seen it, I will take your bet…
Eliezer: therefore I have assigned a nonzero and significant probability to apples < 20 whether I admit it or not
Eliezer: the first thing to decide is, are you trying to accomplish something for yourself (like not getting in trouble) or are you trying to improve someone else’s picture of reality
Eliezer: “I don’t know” is often a good way of not getting in trouble
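The betting logic in the exchange above can be made concrete with a quick expected-value calculation. The 5% figure below is a made-up stand-in for whatever implicit estimate one assigns to “fewer than 20 apples”; the stakes are the ones from the chat:

```python
def expected_value(p_win, win_amount, lose_amount):
    """Expected value of a bet: win win_amount with probability p_win,
    otherwise lose lose_amount."""
    return p_win * win_amount - (1 - p_win) * lose_amount

# Eliezer takes the side "20 or more apples": he risks $1 to win $1,000,000.
# Suppose his implicit estimate is P(apples < 20) = 0.05 (a made-up number):
p_fewer_than_20 = 0.05
ev = expected_value(1 - p_fewer_than_20, 1_000_000, 1)
print(ev > 0)  # True: at these stakes, the bet is clearly worth taking
```

The point is that whichever side of the bet you take (or refuse), your choice implies a probability estimate, whether or not you ever said a number out loud. “I don’t know” doesn’t exempt you from betting; you bet by acting.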
It’s interesting to see how Yudkowsky stubbornly defends the claim that “I don’t know” is never really a good answer. I’m not sure he’s right, but he presents a fair case.
It is odd that in the end, Person X asks him: “What about people who don’t know about ignorance priors?” (That is, 99.9% of all people living today, I would bet.)
Yudkowsky responds: “Sometimes you just can’t save people from themselves.”
Rather than take that view, I wonder if it’s more helpful to say that sometimes “I don’t know” is a good answer.