I’ve finally come to admit that I probably won’t continue to record the Morality in the Real World podcast, which was intended to explain the moral theory of desirism. People ask me if I still “believe in” desirism, so let me explain my current thinking. First, a few reminders:
- Desirism never posited anything more than the standard, reductionistic, scientific picture of the world.
- Given that most uses of moral terms refer to things that don’t exist (categorical imperatives, divine commands, etc.), my MitRW co-host Alonzo Fyfe several years ago proposed a set of “reforming definitions” for moral terms intended to (1) capture something similar to what most people had meant when using moral terms, but (2) capture a set of processes that actually exist. This is standard practice in moral philosophy: see Rawls, Brandt, Railton, etc.
- Most moral theories treat acts as the primary objects of moral evaluation, but Alonzo’s reforming definitions made motives (“desires”) the primary objects of moral evaluation, à la Adams (1976).
- Alonzo’s reforming definitions construed (non-moral) “value” as a relation between desires and states of affairs, such that a state of affairs has value just in case it is desired.
- The existence of a desire is a state of affairs, and according to desirism desires are the primary objects of moral evaluation. A desire is “morally” good, on the desirist view, if it tends to fulfill other desires. This phrase “tends to fulfill” needs quite a bit of fleshing out, which is what we started to do in our podcast. An important point is that this claim does not require that desire fulfillment have any “intrinsic” value: see A Harmony of Desires.
- Whether you want to call this a theory of “moral realism” or “anti-realism” depends on your attitude toward the meaning of those terms: see Pluralistic Moral Reductionism and Joyce (2011).
Why did I stop talking about morality in the language of desirism?
I once wrote a two-part post called ‘The Greatest Objection to Desirism’. I said that the most common objections to desirism were no good, and that the greatest objection to desirism I knew of was one that nobody (except Alonzo) ever mentioned: the possibility that desires do not exist.
“Desire” in desirism was always a metaphor for “whatever a completed neuroscience tells us about the thing that is sort of like the thing we currently call ‘desire.’” My studies in the neuroscience of human motivation and in agent theory in AI have strengthened my view that something close enough to “desire” exists to support a notion of “value,” even though, in another sense, human motivation works quite differently than the folk theory of desire claims. Overall, I’ve shifted away from finding it useful to talk about human “desires” except when I’m speaking casually.
But the larger reason I’ve stopped talking about morality in the language of desirism is that I’m tempted not to use moral terms at all. Moral language is thoroughly confused, corrupted, and strongly motivated, and I’m more tempted than ever to abandon the entire language and start with a new one.
Another, more personal reason I’ve stopped talking about desirism is that while desirism remains (in my opinion) one of the best sets of definitions for moral terms for talking about ordinary human interactions — saying things like “Bob stole the truck and that was wrong” — I’ve shifted my focus to what I think is a much larger problem looming over us: intelligence explosion. And I don’t find desirism to be the most useful language when talking about the value problem in the context of intelligence explosion.
Do I think desirism’s factual claims are true? Yes, more or less, though neuroscience continues to refine our notion of what the naive term “desire” might refer to.
Do I think desirism’s proposed reforming definitions are useful? Yes, more useful than every other set of proposed reforming definitions I’ve come across for use in discussing ordinary human moral judgments.
Will I continue to use desirist language on a regular basis? Probably not, because (1) moral language itself is not that appealing to me anymore in serious discussion, and (2) desirism is not the most useful language for discussing the value problems I’m focused on, concerning intelligence explosion.