Reading Yudkowsky, part 59

by Luke Muehlhauser on July 27, 2011 in Eliezer Yudkowsky, Resources, Reviews

AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at Less Wrong are a treasure trove for those who want to improve their own rationality. As such, I’m reading all of them, chronologically.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 579th post – What I Think, If Not Why – is a nice summary of Yudkowsky’s positions on AI, without all the long and difficult justifications for those positions:

• A well-designed mind should be much more efficient than a human, capable of doing more with less sensory data and fewer computing operations.  It is not infinitely efficient and does not use zero data…

• An AI that reaches a certain point in its own development becomes able to (sustainably, strongly) improve itself.  At this point, recursive cascades slam over many internal growth curves to near the limits of their current hardware, and the AI undergoes a vast increase in capability…

• The default case of FOOM is an unFriendly AI, built by researchers with shallow insights…

• The desired case of FOOM is a Friendly AI, built using deep insight, so that the AI never makes any changes to itself that potentially change its internal values; all such changes are guaranteed using strong techniques that allow for a billion sequential self-modifications without losing the guarantee.  The guarantee is written over the AI’s internal search criterion for actions, rather than external consequences.

• The good guys do not write an AI which values a bag of things that the programmers think are good ideas, like libertarianism or socialism or making people happy or whatever…

• Actually setting up a Friendly AI’s values is an extremely meta operation, less “make the AI want to make people happy” and more like “superpose the possible reflective equilibria of the whole human species, and output new code that overwrites the current AI and has the most coherent support within that superposition”…

• Intelligence is mostly about architecture, or “knowledge” along the lines of knowing to look for causal structure (Bayes-net type stuff) in the environment; this kind of knowledge will usually be expressed procedurally as well as declaratively. Architecture is mostly about deep insights…

You Only Live Twice digs back into the depths of cryonics, AI, and so on. After another Bloggingheads video comes For the People Who Are Still Alive, which contains an interesting potential ethical consequence of Everettian quantum theory:

It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.

And that’s why, when there is research to be done, I do it not just for all the future babies who will be born – but, yes, for the people who already exist in our local region, who are already our responsibility.

For the good of all of us, except the ones who are dead.

Not Taking Over the World responds to an accusation that Yudkowsky is trying to take over the world, which continues in Visualizing Eutopia.

Perhaps realizing that he has lost a lot of readers by talking endlessly about cryonics, AI, and the Singularity, Yudkowsky then writes Prolegomena to a Theory of Fun:

Raise the topic of cryonics, uploading, or just medically extended lifespan/healthspan, and some bioconservative neo-Luddite is bound to ask, in portentous tones:

“But what will people do all day?”

They don’t try to actually answer the question.  That is not a bioethicist’s role, in the scheme of things.  They’re just there to collect credit for the Deep Wisdom of asking the question.  It’s enough to imply that the question is unanswerable, and therefore, we should all drop dead.

That doesn’t mean it’s a bad question.

It’s not an easy question to answer, either.  The primary experimental result in hedonic psychology – the study of happiness – is that people don’t know what makes them happy.

Some transhumanists recommend we invent drugs of infinite happiness stimulation:

In the transhumanist lexicon, “orgasmium” refers to simplified brains that are just pleasure centers experiencing huge amounts of stimulation – a happiness counter containing a large number, plus whatever the minimum surrounding framework to experience it.  You can imagine a whole galaxy tiled with orgasmium.  Would this be a good thing?

And the vertigo-inducing thought is this – if you would prefer not to become orgasmium, then why should you?

The conclusion?

Fun is okay.  It’s allowed.  It doesn’t get any better than fun.

And then we can turn our attention to the question of what is fun, and how to have it.


Robert July 28, 2011 at 8:07 pm

Luke, have you signed up for cryonics?
