Update: Facing the Intelligence Explosion has a new home at FacingTheSingularity.com.
Before I talk about machine superintelligence, I need to talk about rationality, because my understanding of rationality shapes the way I see everything, and it is the main reason I take the problems of machine superintelligence seriously.
If I could say only one thing to the ‘atheist’ and ‘skeptic’ communities, it would be this:
Skepticism and critical thinking teach us important lessons: Extraordinary claims require extraordinary evidence. Correlation does not imply causation. Don’t take authority too seriously. Claims should be specific and falsifiable. Remember to apply Occam’s razor. Beware logical fallacies. Be open-minded, but not gullible. Etc.
But this is only the beginning. In writings on skepticism and critical thinking, these guidelines are only loosely specified, and they are not mathematically grounded in a well-justified normative theory. Instead, they are a grab-bag of vague but generally useful rules of thumb. They provide a great entry point to rational thought, but there is much more. For 40 years there has been a mainstream cognitive science of rationality, with detailed models of how our thinking goes wrong and well-justified mathematical theories of what it means for a thinking process to be “wrong.” This is what we might call the science and mathematics of Technical Rationality. It takes more effort to learn and practice than entry-level skepticism does, but it is powerful. It can improve your life and help you to think more clearly about the world’s toughest problems.
So what is this mainstream cognitive science of rationality? You will find it described in every university textbook on thinking and decision making. For example:
- Baron, Thinking and Deciding
- Hastie & Dawes, Rational Choice in an Uncertain World
- Bazerman & Moore, Judgment in Managerial Decision Making
- Plous, The Psychology of Judgment and Decision Making
- Gilboa, Making Better Decisions
You will also find pieces of it in the recent popular-level books on human irrationality, for example:
- Ariely, Predictably Irrational
- Kahneman, Thinking, Fast and Slow
- Thaler & Sunstein, Nudge
- Tavris & Aronson, Mistakes Were Made (But Not by Me)
And you will, of course, find it in the academic journals. Here are links to the Google Scholar results for just a few of the field’s common terms:
- “heuristics and biases”
- “affect heuristic”
- “myside bias”
- “base rate fallacy”
- “framing effects”
- “availability bias”
- “conjunction fallacy”
There are two parts to Technical Rationality: normative and descriptive.
The normative part describes the laws of thought and action. Basically: logic, probability theory, and decision theory. Logic and probability theory describe how you should reason if you want to maximize your chances of acquiring true beliefs. Decision theory describes how you should act if you want to maximize your chances of acquiring what you want. Of course, these are not physical laws but normative laws. You can break these laws if you choose, and people often do. But if you break the laws of logic or probability theory you decrease your chances of arriving at true beliefs, and if you break the laws of decision theory then you decrease your chances of achieving your goals.
The descriptive part describes not how we should reason and act, but how we usually do reason and act. The descriptive program includes research on how humans think and decide. It also includes a catalogue of common ways in which we violate the laws of thought and action from logic, probability theory, and decision theory. A cognitive bias is a particular way of violating logic, probability theory, or decision theory. That’s how “bias” is defined (see, e.g., Thinking and Deciding or Rationality and the Reflective Mind, each of which has a table of common biases and which part of logic, probability theory, or decision theory is violated by each of them).
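To make this concrete, here is a minimal sketch (in Python, with made-up numbers) of one such violation: the conjunction fallacy. Probability theory requires that a conjunction can never be more probable than either of its conjuncts, yet in Tversky and Kahneman’s famous “Linda problem,” many people judge “bank teller and feminist” to be more probable than “bank teller” alone.

```python
# The conjunction rule: for any events A and B, P(A and B) <= P(A).
# The probabilities below are invented purely for illustration.

p_bank_teller = 0.05           # P(Linda is a bank teller), assumed
p_feminist_given_teller = 0.6  # P(feminist | bank teller), assumed

# By the product rule, the probability of the conjunction:
p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

# The law: the conjunction can never exceed either conjunct.
assert p_teller_and_feminist <= p_bank_teller

# Judging "teller and feminist" as MORE probable than "teller"
# is therefore a violation of probability theory -- a bias.
print(round(p_bank_teller, 3))           # 0.05
print(round(p_teller_and_feminist, 3))   # 0.03
```

Whatever numbers you plug in, the product rule guarantees the conjunction cannot win; that is what makes the fallacy a violation of a normative law rather than a mere difference of opinion.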
Cognitive scientists also distinguish two domains of rationality: epistemic and instrumental.
Epistemic rationality concerns forming true beliefs, or: having in your head an accurate map of the territory out there in the world. (Epistemic rationality is governed by the laws of logic and probability theory.)
Instrumental rationality concerns achieving your goals, or: maximizing your chances of getting what you want. Or, more formally: maximizing your “expected utility.” Also known as “winning.” (Instrumental rationality is governed by the laws of decision theory.)
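“Maximizing expected utility” has a simple mechanical core: weight each possible outcome’s utility by its probability, sum, and pick the action with the highest total. A minimal sketch, with actions, probabilities, and utilities all invented for illustration:

```python
# Expected-utility maximization: choose the action whose
# probability-weighted sum of outcome utilities is largest.
# All numbers below are made up for illustration.

def expected_utility(action):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in action["outcomes"])

actions = [
    # each outcome is a (probability, utility) pair
    {"name": "take the safe job", "outcomes": [(1.0, 50)]},
    {"name": "found a startup",   "outcomes": [(0.2, 500), (0.8, 0)]},
]

# EU(safe job) = 1.0 * 50            = 50
# EU(startup)  = 0.2 * 500 + 0.8 * 0 = 100
best = max(actions, key=expected_utility)
print(best["name"])  # found a startup
```

Decision theory says nothing about *what* to value; the utilities encode your goals, and the theory only tells you which action best serves them given your beliefs about the probabilities.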
In a sense, instrumental rationality takes priority, because the point of forming true beliefs is to help you achieve your goals, and sometimes spending too much time on epistemic rationality is not instrumentally rational. For example, I know some people who would be more likely to achieve their goals if they spent less time studying rationality and more time, say, developing their social skills.
Still, it can be useful to talk about epistemic and instrumental rationality separately. Just know that when I talk about epistemic rationality, I’m talking about following the laws of logic and probability theory, and that when I talk about instrumental rationality, I’m talking about following the laws of decision theory.
So: I’ve explained Technical Rationality and given lots of social proof for its legitimacy. But what is the justification for the normative laws and descriptive claims of Technical Rationality, and how does one become skilled in Technical Rationality? And why should a mastery of Technical Rationality lead you to take the problems of machine superintelligence seriously? I’ll begin to answer these questions in the next post.