AI researcher Eliezer Yudkowsky is something of an expert at human rationality, and at teaching it to others. His hundreds of posts at *Less Wrong* are a treasure trove for those who want to improve their own rationality. As such, I’m reading *all of them, chronologically*.

I suspect some of my readers want to “level up” their rationality, too. So I’m keeping a diary of my Yudkowsky reading. Feel free to follow along.

His 239th post is *The Intuitions Behind Utilitarianism*:

I see the project of morality as a project of renormalizing intuition. We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles.

Delete all the intuitions, and you aren’t left with an ideal philosopher of perfect emptiness, you’re left with a rock.

Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren’t left with an ideal philosopher of perfect spontaneity and genuineness, you’re left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies.

“Intuition”, as a term of art, is not a curse word when it comes to morality – there is *nothing* else to argue from. Even modus ponens is an “intuition” in this sense – it’s just that modus ponens still seems like a good idea after being formalized, reflected on, extrapolated out to see if it has sensible consequences, etcetera.

I have written before about intuitions in moral philosophy, and until I have a clearer picture of what Yudkowsky means by intuition here, I’ll have to withhold comment.

Anyway, Yudkowsky continues:

After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out. This does not quite *define* moral progress, but it is how we experience moral progress.

As part of my experienced moral progress, I’ve drawn a conceptual separation between questions of type *Where should we go?* and questions of type *How should we get there?* …

The question of where a road goes – where it leads – you can answer by traveling the road and finding out. If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner.

…

I don’t say that morality should always be simple. I’ve already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up. I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination. And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize – that the valuation of this one event is more complex than I know.

But that’s for *one event*. When it comes to multiplying by quantities and probabilities, complication is to be avoided – at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.”

Where music is concerned, I care about the journey.

When lives are at stake, I shut up and multiply…

It is more important that lives be saved, than that we conform to any particular ritual in saving them. And the *optimal* path to that destination is governed by laws that are simple, because they are math.

And that’s why I’m a utilitarian – at least when I am doing something that is overwhelmingly more important than my own feelings about it – which is most of the time, because there are not many utilitarians, and many things left undone.
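“Shut up and multiply,” applied to lives, is just expected-value arithmetic. A minimal sketch, with made-up numbers (mine, not Yudkowsky’s):

```python
# Two hypothetical interventions (illustrative numbers only):
#   A: saves 400 lives with certainty.
#   B: saves 500 lives with probability 0.9, and none otherwise.
ev_a = 1.0 * 400  # expected lives saved by A
ev_b = 0.9 * 500  # expected lives saved by B

# Intuition often favors the certain option A, but multiplying out
# the expectations favors B: 450 expected lives versus 400.
print(ev_a, ev_b)                   # 400.0 450.0
print("B" if ev_b > ev_a else "A")  # B
```

The whole point of the meta-principle is that, once lives are at stake, this multiplication settles the question regardless of how the certain option *feels*.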

I look forward to reading Eliezer’s thoughts on meta-ethics.

*Trust in Bayes* opens:

In Trust in Math, I presented an algebraic proof that 1 = 2, which turned out to be – surprise surprise – flawed. Trusting that algebra, *correctly used*, will not carry you to an absurd result, is not a matter of *blind* faith. When we see apparent evidence against algebra’s trustworthiness, we should also take into account the massive evidence *favoring* algebra which we have previously encountered. We should take into account our past experience of *seeming* contradictions which turned out to be themselves flawed. Based on our inductive faith that we may likely have a similar experience in the future, we look for a flaw in the contrary evidence.

This seems like a dangerous way to think, and it *is* dangerous, as I noted in “Trust in Math”. But, faced with a proof that 2 = 1, I can’t convince myself that it’s genuinely reasonable to think any other way.
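For readers who haven’t seen such a thing, here is the standard algebraic “proof” that 1 = 2 – my reconstruction of the usual version, which I assume is essentially the one from Trust in Math:

```latex
\begin{align*}
a &= b && \text{assumption} \\
a^2 &= ab && \text{multiply both sides by } a \\
a^2 - b^2 &= ab - b^2 && \text{subtract } b^2 \\
(a+b)(a-b) &= b(a-b) && \text{factor} \\
a+b &= b && \text{divide both sides by } a-b \\
2b &= b && \text{since } a = b \\
2 &= 1 && \text{divide both sides by } b
\end{align*}
```

The “something illegal” hides in the fifth step: since $a = b$, dividing by $a - b$ is a division by zero.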

The post goes on to reject some supposed refutations of Bayesian math. The conclusion is:

The meta-moral is that Bayesian probability theory and decision theory are math: the formalism provably follows from axioms, and the formalism provably obeys those axioms. When someone shows you a purported paradox of probability theory or decision theory, don’t shrug and say, “Well, I guess 2 = 1 in that case” or “Haha, look how dumb Bayesians are” or “The Art failed me… guess I’ll resort to irrationality.” Look for the division by zero; or the infinity that is assumed rather than being constructed as the limit of a finite operation; or the use of different implicit background knowledge in different parts of the calculation; or the improper prior that is not treated as the limit of a series of proper priors… something illegal.
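As a concrete instance of “the infinity that is assumed rather than being constructed as the limit of a finite operation,” consider the St. Petersburg game – my example, not one Yudkowsky names here:

```python
# St. Petersburg game: flip a fair coin until the first head; if the
# first head lands on flip k (probability 2**-k), the payoff is 2**k.
# The "infinite expected value" only appears if infinity is assumed
# outright; every finite truncation at n flips has expected value n.
def truncated_ev(n_flips):
    return sum((2 ** -k) * (2 ** k) for k in range(1, n_flips + 1))

print(truncated_ev(10))   # 10.0
print(truncated_ev(100))  # 100.0
```

Constructed as a limit of finite games, the expectation grows without bound but is never actually infinite at any stage – which is exactly the distinction the quoted checklist asks us to keep track of.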

Trust Bayes. Bayes has earned it.

Previous post: Meta-ethics: Cornell Realism

Next post: Meta-ethics: Railton’s Moral Reductionism (part 1)


Small world: I heard MIRI mentioned in connection with a new startup at a professional dinner in San Jose last week. I think some MIRI alums are on staff, but I couldn’t quite hear over the Il Fornaio clatter.

MarkD

Modus Ponens is an intuition?

Huh?!?

I think it is intuitive, but it is far more than a mere intuition. It’s not a valid form of inference merely because experience has shown it to be so… right?

Maybe I am missing something here.

-Rufus

Rufus

The problem, as I see it, is not with a “purported paradox of probability theory” as Eliezer says, but with the paradoxes that spontaneously arise from the application of the instrument of probability theory to areas beyond its competence, for example, in an attempted general theory of knowledge. Probability theory cannot, by definition I would be bold enough to say, give an account of its own epistemological status. It is not enough just to point to the closed circle of “formalism” and “axioms”: a “theory of knowledge” is by its very nature at least second-order.

stag

And then he says “trust Bayes”. Well, it depends on the things to which I apply the maths. If you have already decided in advance that probability theory is universally applicable, then by all means, carry on – but you are proceeding on a trust that amounts to blind faith. As for me, it seems utterly unreasonable to think that Bayes, by inventing an instrument for calculating probabilities, has “earned” the right to determine the extent of its application.

stag