AI researcher Eliezer Yudkowsky is something of an expert on human rationality, and on teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.
I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.
His 516th post is Dark Side Epistemology:
I have previously spoken of the notion that, the truth being entangled, lies are contagious. If you pick up a pebble from the driveway, and tell a geologist that you found it on a beach – well, do you know what a geologist knows about rocks? I don’t. But I can suspect that a water-worn pebble wouldn’t look like a droplet of frozen lava from a volcanic eruption. Do you know where the pebble in your driveway really came from? Things bear the marks of their places in a lawful universe; in that web, a lie is out of place.
…The “how to think” memes floating around, the cached thoughts of Deep Wisdom – some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.
But sometimes, ethics can save you, as Eliezer argues in Protected From Myself.
Which Parts Are “Me”? concludes:
Somewhere at the end of this, I think, is a mastery of techniques that are Zenlike but not Zen, so that you have full passion in the parts of yourself that you identify with, and distance from the pieces of your brain that you reject; and a complex layered personality with a stable inner core, without smoothing out those highs or lows of life that you accept as appropriate to the event.
And if not, then screw it, let’s hack the brain so that it works that way. I have no confidence in my ability to judge how human nature should change, and would sooner leave it up to a more powerful mind in the same metamoral reference frame. But if I had to guess, I think that’s the right thing to do.
Inner Goodness discusses people who are evil and know it. Then comes a news post. Expected Creative Surprises, Belief in Intelligence, Aiming at the Target, and Measuring Optimization Power explain one of the problems of AI. This thread continues in later posts, but I don’t have the interest to read it.
Mundane Magic offers a fun game:
As you may recall from some months earlier, I think that part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe – a universe containing no ontologically basic mental things such as souls or magic – and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.
There’s an old trick for combating dukkha where you make a list of things you’re grateful for, like a roof over your head.
So why not make a list of abilities you have that would be amazingly cool if they were magic, or if only a few chosen individuals had them?
Finally, there is Today’s Inspirational Tale:
At a Foresight Gathering some years ago, a Congressman was in attendance, and he spoke to us and said the following:
“Everyone in this room who’s signed up for cryonics, raise your hand.”
Many hands went up.
“Now everyone who knows the name of your representative in the House, raise your hand.”
Fewer hands went up.
“And you wonder why you don’t have any political influence.”
Rationalists would likewise do well to keep this lesson in mind.