AI researcher Eliezer Yudkowsky is something of an expert on human rationality, and on teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I'm reading all of them, chronologically.
I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.
Eliezer then points out that Rationality is Systematized Winning, followed by Incremental Progress and the Valley, Extenuating Circumstances, Whining-Based Communities, and Mandatory Secret Identities.
Eliezer considers the controversy over whether it helps to send aid to Africa or not. Other short posts include Real-Life Anthropic Weirdness, Newcomb’s Problem Standard Positions, Of Lies and Black Swan Blowups, Rationality Quotes April 2009, Great Books of Failure, This Didn’t Have to Happen, Special Status Needs Special Support, Willpower Hax #487: Execute by Default, Open-Mindedness: the video, and What is Wrong With Our Thoughts.
Beware Other-Optimizing warns:
I’ve noticed a serious problem in which aspiring rationalists vastly overestimate their ability to optimize other people’s lives.
That Crisis Thing Seems Pretty Useful revisits how to create a crisis of faith.
Considering the community again: Bayesians vs. Barbarians.
Of Gender and Rationality tackles the problem: Why are rationalists overwhelmingly male?
In My Way, Eliezer explains that people are different, and the path to rationality cannot be the same for everyone.
In The Sin of Underconfidence, Eliezer encourages rationalists to be more confident.
Practical Advice Backed by Deep Theories admonishes:
But practical advice really, really does become a lot more powerful when it’s backed up by concrete experimental results, causal accounts that are actually true, and math validly interpreted.
But that doesn’t mean Yudkowsky stopped posting to Less Wrong. Next up is This Failing Earth, which actually ends on a note of hope:
It may be that in the fractiles of the human Everett branches, we live in a failing Earth – but it’s not failed until someone messes up the first AI. I find that a highly motivating thought. Your mileage may vary.