My Latest Thoughts on Desirism

by Luke Muehlhauser on December 19, 2011 in Ethics

I’ve finally come to admit that I probably won’t continue to record the Morality in the Real World podcast, which was intended to explain the moral theory of desirism. People ask me if I still “believe in” desirism, so let me explain my current thinking. First, a few reminders:

  1. Desirism never posited anything more than the standard, reductionistic, scientific picture of the world.
  2. Given that most uses of moral terms refer to things that don’t exist (categorical imperatives, divine commands, etc.), my MitRW co-host Alonzo Fyfe several years ago proposed a set of “reforming definitions” for moral terms intended to (1) capture something similar to what most people had meant when using moral terms, but (2) capture a set of processes that actually exist. This is standard practice in moral philosophy: see Rawls, Brandt, Railton, etc.
  3. Most moral theories treat acts as the primary objects of moral evaluation, but Alonzo’s reforming definitions made motives (“desires”) the primary objects of moral evaluation, à la Adams (1976).
  4. Alonzo’s reforming definitions construed (non-moral) “value” as a relation between desires and states of affairs, such that a state of affairs has value just in case it is desired.
  5. The existence of a desire is a state of affairs, and according to desirism desires are the primary objects of moral evaluation. A desire is “morally” good, on the desirist view, if it tends to fulfill other desires. This phrase “tends to fulfill” needs quite a bit of fleshing out, which is what we started to do in our podcast. An important point is that this claim does not require that desire fulfillment have any “intrinsic” value: see A Harmony of Desires.
  6. Whether you want to call this a theory of “moral realism” or “anti-realism” depends on your attitude toward the meaning of those terms: see Pluralistic Moral Reductionism and Joyce (2011).

Why did I stop talking about morality in the language of desirism?

I once wrote a two-part post called ‘The Greatest Objection to Desirism’. I said that the most common objections to desirism were no good, and that the greatest objection to desirism I knew of was one that nobody (except Alonzo) ever mentioned: the possibility that desires do not exist.

“Desire” in desirism was always a metaphor for “whatever a completed neuroscience tells us about the thing that is sort of like the thing we currently call ‘desire’.” My studies in the neuroscience of human motivation and in agent theory in AI have reinforced my view that something close enough to “desire” exists to support a notion of “value,” though in another sense human motivation works quite differently from what the folk theory of desire claims. Overall, I’ve shifted away from finding it useful to talk about human “desires” when I’m not talking casually.

But the larger reason I’ve stopped talking about morality in the language of desirism is that I’m tempted to not use moral terms at all. Moral language is thoroughly confused and corrupted and strongly motivated, and I’m more tempted than ever to abandon the entire language and start with a new one.

Another, more personal reason I’ve stopped talking about desirism is that while desirism remains (in my opinion) one of the best sets of definitions for moral terms for talking about ordinary human interactions — saying things like “Bob stole the truck and that was wrong” — I’ve shifted my focus to what I think is a much larger problem looming over us: intelligence explosion. And I don’t find desirism to be the most useful language when talking about the value problem in the context of intelligence explosion.

So:

Do I think desirism’s factual claims are true? Yes, more or less, though neuroscience continues to refine our notion of what the naive term “desire” might refer to.

Do I think desirism’s proposed reforming definitions are useful? Yes, more useful than any other set of proposed reforming definitions I’ve come across for discussing ordinary human moral judgments.

Will I continue to use desirist language on a regular basis? Probably not, because (1) moral language itself is not that appealing to me anymore in serious discussion, and (2) desirism is not the most useful language for discussing the value problems I’m focused on, concerning intelligence explosion.

{ 55 comments }

Bear December 19, 2011 at 11:58 am

Kudos, Luke.

Good wrap-up of your current and past stance on Desirism. I appreciate it whenever you make these update posts.

BTW, is that Naturalism website still a project you want to complete?

Thanks.

Anon December 19, 2011 at 1:14 pm

Hi Luke,

Can you say a little more about your current views on how to think about our current values/desires/etc., with regard to preserving them during an intelligence explosion?

I had hoped that with your arrival at MIRI, there would be renewed focus on the “friendly” aspect of Friendly AI, and less on the technicalities of reasoning about artificial agents. It seems the latter has been the main focus recently, and I fear that too much work in that area will only accelerate the production of artificial agents and not so much our ability to ensure beneficial outcomes. Or are you more or less happy with the proposed notions like Coherent Extrapolated Volition? Doesn’t that need much more consideration and fleshing out, and shouldn’t that be a core priority before investigating implementation details about decision making with self-improvements?

Anon December 19, 2011 at 1:18 pm

Another way to phrase my objection: it seems that MIRI focuses on the mathematical/formal aspects, which seem rather trivial compared to the philosophical/psychological/sociological/economic aspects of Friendly AI.

Ivan December 19, 2011 at 1:38 pm

When you say that moral language “is thoroughly confused and corrupted and strongly motivated,” do you mean that moral language depends upon religion, or in other words, “most uses of moral terms refer to things that don’t exist”? Or do you mean something more? It seems to me that moral language can be perfectly coherent in itself, and just runs into that problem of what does not, in fact, exist.

antiplastic December 19, 2011 at 1:42 pm

Congratulations on vanquishing the air bubble in the metaphysical wallpaper! No longer will those troublesome “moral terms” be a problem in Yud-speak.

But wait! It turns out that simply pushing down an air bubble causes the identical problem to recur a few inches away. You still have to decide what to do, and what to value. You still write posts larded with normative claims about how “important” it is to navigate the robopocalypse “wisely”, and go on just as before with admonitions and exhortations about what the best form of life is. Simply changing the labels you put on a condition does not alter the underlying condition, any more than calling someone “differently abled” will mean their wheelchair will vanish. The ineliminably noncognitive, action-guiding character of moral decisions – ones that Fyfism lamely attempts to define out of existence – is still front and center in our human experience, no matter how much you try to “taboo” them.

Tony Hoffman December 19, 2011 at 6:37 pm

The Desirism posts and podcasts were not my favorites, but I found the discussion to be incredibly useful in helping me understand my own moral reasoning. Whatever shortcomings you and others may find, there is very little (if anything) that I can think I agree with in Desirism in those ways that you and Alonzo have most recently explicated. For me, the notion that a moral basis should be grounded on desires (rather than a rational calculation) seems both sound and unassailable.

On another note:

AP: “The ineliminably noncognitive, action-guiding character of moral decisions – ones that Fyfism lamely attempts to define out of existence – is still front and center in our human experience, no matter how much you try to “taboo” them.”

I don’t get this criticism. Admittedly I have not been a devoted follower of the Desirism series here, but I don’t know what you mean by this–how is it that Fyfism (as you call it) tries to define the “noncognitive, action-guiding character of moral decisions” out of existence?

Stephen R. Diamond December 19, 2011 at 9:49 pm

“Given that most uses of moral terms refer to things that don’t exist (categorical imperatives, divine commands, etc.), my MitRW co-host Alonzo Fyfe several years ago proposed a set of “reforming definitions” for moral terms intended to (1) capture something similar to what most people had meant when using moral terms, but (2) capture a set of processes that actually exist. This is standard practice in moral philosophy: see Rawls, Brandt, Railton, etc.”

I don’t know if it’s accurate to say Rawls et al. were engaged in reforming definitions, but whoever first had the idea, it’s a bad one. The point of reforming definitions is to get rid of inessential conceptual baggage, whereas in your reformation, you abstract away from the moral core of the concept.

I don’t think you understand morality’s function: it doesn’t provide a window on universal human values. But if it did, you would still face the problem of explaining why these values, which happen to be universal, are what they _ought_ to be.

Lack of concern with the actual (rational) function of explicit morality has been a weakness of moral anti-realism. I begin to address this weakness in “Why do what you ‘ought’?—A habit theory of explicit morality” — http://tinyurl.com/6whzghm

Luke Muehlhauser December 20, 2011 at 12:41 am

Anon,

Your questions will be more or less addressed as I continue to expand this page.

Luke Muehlhauser December 20, 2011 at 12:42 am

antiplastic,

Normativity doesn’t go away, no. It’s just “moral” talk that might be unhelpful.

Silver Bullet December 20, 2011 at 7:38 am

Thanks, Luke.

Just a few days ago, I was wondering where you now stood on desirism!

Best of the Season,
SB

Yair December 20, 2011 at 8:56 am

I shall try to write a Bayesian response, in the hope that this will reach you best at this stage, Luke.

What you seem to be doing is searching for a categorization scheme. This is vital for finite-resources reasoning, as we cannot consider all categorization schemes, and Bayesian reasoning (especially in infinite domains) will yield different results under different categorizations (e.g. for f(x) = x^2, a distribution that is uniform, and hence “simplest,” over x won’t be uniform over f(x)).
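
For concreteness, here is a minimal sketch of that reparameterization point; the uniform prior on [0, 1] and the Python framing are merely illustrative assumptions of mine, not anything taken from desirism itself:

```python
# A rough sketch: a prior that is "equal" (uniform) over x is not uniform
# over f(x) = x^2, so the choice of parameterization/categorization
# changes the probabilities you end up assigning.
import random

random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(100_000)]  # uniform over x
ys = [x ** 2 for x in xs]                                 # the same samples, viewed as f(x)

def frac_below(values, cutoff):
    """Fraction of sampled values below `cutoff`."""
    return sum(v < cutoff for v in values) / len(values)

print(frac_below(xs, 0.5))  # ~0.50: half the mass below 0.5 under the x categorization
print(frac_below(ys, 0.5))  # ~0.71 (= sqrt(0.5)): under the f(x) categorization
```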

Some categorization schemes are ideal in that they optimize the probability assessment of facts about the world. What you seem to claim, however, is that Desirism is optimal (or at least fairly good) for a particular utility – the discussion of naturalistic morality. I would argue against that.

First, as you note, it is “standard practice” to focus on things that actually exist – at least in naturalistic ethics. So this is not an advantage of desirism.

Second, as you note, desirism is hardly unique in declining to make “acts” the primary objects of evaluation. I’d suggest that The Greatest Philosopher That Ever Lived (and I am referring, of course, to the illustrious David Hume) had the right idea when he intentionally kept the precise meaning of his terms vague (see his Principles of Morals), referring roughly to acts, moral character, laws, customs, decisions and so on – all originating from, concurring with, or furthering the “moral” emotions, which he argued on game-theoretic, evolutionary, and empirical grounds (such as existed at his time) corresponded to our altruistic emotions, or basic desires, chiefly Empathy.

Third, the chief point of departure for desirism, the one that does make it unique, is that it denotes as “good” those desires that tend to further other desires. I would note that no argument has been raised, in this or other posts, for why this definition serves our Utility, nor indeed has our Utility been clearly stated.

Finally, I now argue (as I have before) that our Utility is determined by Normal Human Nature, if there is such a thing, as humans would, (virtually) generally, under ideal conditions hold to (something at least very close to) the Extrapolated Human Coherent Desires. I would furthermore add that all research into human and other biological variability indicates that while “generally” this would be the case on average, it is extremely unlikely that all humans will share the same set of extrapolated volitions. Taking this into account, it is far more likely that there is a range of Normal sets of desires, which are broadly tolerant of each other, than that there is a single set of Human (extrapolated coherent) desires.

Given that each of our utilities is in this range, game theory suggests our extrapolated desires at equilibrium would be a liberal core of this Normal set. It is this liberal core that forms the Utility that any sufficiently rational discussion of our desires attempts to further.

My point is that Desirism does not advance this core. The fact that desires tend to further other desires does not imply that these are also Human desires that will be part of this core. While the two demands are not in opposition, and indeed in many cases both “good” sets will reflect generic liberal features due to game-theory, at bottom the only utility we should care about, our own extrapolated, coherent, equilibrium-social utility, is not identical with “desires that tend to further other desires”. It is not even solely restricted to evaluating desires, as it is more likely to be pluralistic in that respect.

So… how am I at channeling Eliezer? Perhaps this time, the message will sink through.

Yair

joseph December 20, 2011 at 9:53 am

Thank you, Luke.
I am a little curious: when you reviewed the debate between William Lane Craig and Shelly Kagan, you said something to the effect that you would have argued for desirism if it were you. Do you stand by this, or would you now start on a different tack?

antiplastic December 20, 2011 at 12:42 pm

@Tony Hoffman AF and LM are (in theory, not in practice) externalists, a position I regard as at least as hazardous as relativism or nihilism. According to this theory, it would be a perfectly ordinary and everyday thing for someone to say “I believe, fully and sincerely, that the Iraq War was a moral abomination, and anyone who supported it should feel horribly guilty at minimum, and at maximum should be condemned in the court of public opinion and possibly even subject to punishment. However, I love the Iraq war, continue to support it enthusiastically and sincerely, and feel no guilt about my beliefs and actions in this regard.”

On this view, questions like the one asked in the most recent podcast, “what should I do in questions of moral uncertainty” simply do not compute: there is no correlation whatsoever between having moral belief and acting according to it, or having a reason to act according to it. The notion of being a “conscientious objector” and not serving in the military because you feel a war is unjust is gibberish, no more understandable than saying you are a “geographic objector” and you refuse to fight in a war on the grounds that the war takes place primarily on land.

Danny December 20, 2011 at 1:41 pm

antiplastic,
Normativity doesn’t go away, no. It’s just “moral” talk that might be unhelpful.

Antiplastic = corrected.
+ 1

antiplastic December 20, 2011 at 2:11 pm

Normativity doesn’t go away, no. It’s just “moral” talk that might be unhelpful.

If you are going to keep insisting on externalism, then by definition normativity goes away.

I think you and I and everyone understand just fine what someone means when they say they cannot in good conscience support a candidate because of his views on abortion. I think if you really had the courage of your convictions that Fyfism represents an accurate conceptual analysis of morality then you would have no problem making moral claims with the expectation that you would be understood (or you would admit to being an error theorist, depending on which of Janus’s faces that doctrine is presenting at any given moment in an argument). I think this post represents a tactical retreat and an implicit realization that Fyfism is unworkable as a theory and creates more needless confusion than it purports to solve.

But as your continued use of normative language (“important”, “wise”, “best” etc.) shows, the air bubble hasn’t gone away. Even if you somehow (why?) scrubbed all your writings of talk like this, it remains that your AI projects revolve centrally around goals like advocating, exhorting, and criticising other people’s behaviors and beliefs according to grand moral criteria (and there’s nothing wrong with that! doing that sort of thing is, literally, the meaning of life!). As long as those ineliminably emotive, noncognitive functions are being performed when you do this, externalism and by extension Fyfism remains an utter non-starter.

Tony Hoffman December 20, 2011 at 2:47 pm

@Antiplastic, it looks like you think that Luke (and Alonzo) are being inconsistent. But it also appears that you may be misrepresenting their position — I can’t imagine either of them taking such an obviously inconsistent position. Can you provide two real instances (quotes) of the kind of inconsistency you’re talking about that also fairly represent Alonzo and Luke’s explication of Desirism?

Also, from my (weak) understanding of Desirism, this seems fairly easy to refute — AP: “On this view, questions like the one asked in the most recent podcast, “what should I do in questions of moral uncertainty” simply do not compute: there is no correlation whatsoever between having moral belief and acting according to it, or having a reason to act according to it.” — if you accept that you have desires and that you live in society with other beings that have desires, then you should act in a way that increases the satisfaction of good desires. But I probably just don’t understand your objection.

Btw, I meant to write originally that “Whatever shortcomings [Luke] and others may find, there is very little (if anything) that I can think I DISagree with in Desirism in those ways that [Luke] and Alonzo have most recently explicated.” Without that, whatever I thought I was writing originally does indeed seem pretty inconsistent.

Stephen R. Diamond December 20, 2011 at 4:48 pm

Normativity doesn’t go away, no. It’s just “moral” talk that might be unhelpful.

You’re _renaming_ morality as “normativity.” Normativity _must_ disappear when you try to restate morality naturalistically; as you must know, nothing in nature corresponds to a moral obligation.

Smuggling moralism into naturalism—so you can say, “What we _ought_ to do to is fulfill human desires”—isn’t just conceptually wrong. It is practically misdirecting in its grandiose ambition to construct a calculus of desire [regardless of whether it's Luke's, Yair's or their mentors']. You have no metric for comparing desires across individuals; and appetitive and aversive desires are incommensurable within an individual. The big “moral” questions involve the distribution of resources, and the burning question has always been, “Whose desires shall society gratify?”

You like to think of these questions as “moral” because it isn’t pleasant to think they have to do with power relations, without *any* ultimate rationalistic appeal. But if you offer solutions that pretend to express a moral calculus, you help hide the stakes.

Tony Hoffman December 21, 2011 at 7:28 am

@Stephen D, as I understand it Desirism is an attempt to describe the underpinnings of morality, after which questions of normativity come into play. I think of it as an atomic description of the physical world preceding explanations of how those atomic particles interact; you can’t explain the physical world without first describing the atomic world. So, if we’re going to discuss normativity under Desirism, one has to first agree that Desirism accurately describes the things that normativity proscribes and prescribes.

I agree with you entirely about the problems of metrics — I think that the problem of comparing desires is more significant to me than the question of whether or not desires exist. But I can’t say for certain that such a calculus would be impossible to attain, and I see no reason to assert that a moral system underpinned by desires could not eventually form a rational and workable morality that includes normativity. It may be unimaginable and distant, but I don’t know why it should be considered impossible.

Stephen R. Diamond December 21, 2011 at 6:05 pm

“So, if we’re going to discuss normativity under Desirism, one has to first agree that Desirism accurately describes the things that normativity proscribes and prescribes. ”

So this version of desirism is an attempt to model the (nonnormative) content of the judgments people ordinarily receive as normative? Then the relevant question becomes whether there’s any reason to think morality—even if you limit the scope to our society’s morality—has universal content. I’m not sure why anyone would be tempted to believe in a monolithic conventional morality. After all, the big disputes about morality concern what’s actually good in the moral sense.

Some aspects of desirism are decidedly strange: for instance, the notion that the desires of dead people go into the calculus. (If the common law expressed popular morality, it might be noted that the common law included no right to pass on wealth—which bears on whether dead people’s wishes are honored. Inheritance, at least in the United States, was a product of statute.) So, if there’s no argument that desirism _really_ is “morally true” (somehow), it’s hard to see why anyone would embrace some of the strange tenets.

mahume December 21, 2011 at 6:50 pm

From reading this blog since roughly a few months after it started, you’ve switched interests an amazing number of times. I mean, think of all the huge projects and the inevitable prospects you thought they had, only to find, three months later, you’re talking about something off on another trail (you were an expert on the Kalam argument, going to “map” the entire dialectic, then an expert on meta-ethics with your own theory, then an expert on the singularity, just to name some big ones). What’s annoying, though, is that every time now you write as if you’ve somehow mastered it, risen to an expert and peered over the literature, when in reality you spend far less time and focus on topics than any given graduate student does in a semester survey course. I guess it would be fine if you weren’t so presumptuous, but then I guess you’d have fewer people reading your blog and fewer comments on your posts.

Tony Hoffman December 22, 2011 at 8:51 am

Stephen D: “So this version of desirism is an attempt to model the (nonnormative) content of the judgments people ordinarily receive as normative?”

I think that all versions of Desirism that I know of are based on desires. What one does based on the premise that desires exist introduces the question of normativity.

Stephen D: “Then the relevant question becomes whether there’s any reason to think morality—even if you limit the scope to our society’s morality—has universal content.”

Well, no. I think that once one accepts that desires exist, then one could proceed to answer normative questions based on the understanding that desires are the content of morality.

Stephen D: “I’m not sure why anyone would be tempted to believe in a monolithic conventional morality. After all, the big disputes about morality concern what’s actually good in the moral sense.”

I believe that Desirism is considered flexible concerning how to organize its calculus and the ability of behavior to influence the kind and quantity of desires. So this would indicate to me that Desirism does not and should not lead to a monolithic conventional morality.

antiplastic December 22, 2011 at 11:47 am

I can’t imagine either of them taking such an obviously inconsistent position. Can you provide two real instances (quotes) of the kind of inconsistency you’re talking about that also fairly represent Alonzo and Luke’s explication of Desirism?

You won’t find any disagreement from either of them about being externalists. A quick google found an extended series of exchanges in this old thread, but you should just ask them and they’ll own up to it. See also AF’s classic “desirism has nothing to say to an agent at the moment of decision”.

Any time you see an externalist telling people they ought to do something or refrain from doing something (and this is not merely informing them of a more efficient way to do what they already wanted to do), they are being inconsistent.

[I]f you accept that you have desires and that you live in society with other beings that have desires, then you should act in a way that increases the satisfaction of good desires.

Non sequitur, but you’ve put your finger right on where the Primal Equivocation of Fyfism occurs.

I (pragmatically) should act so as to maximize the fulfillment of my own desires, period. It so happens that in some circumstances, this (pragmatically) requires me to fulfill other people’s desires, but in other circumstances it (pragmatically) requires me to thwart them in some of the most vulgar ways imaginable.

The moral life presents itself to us at its most forceful when what practical reason demands of us comes apart from what moral obligation demands of us. Philosophy going back to the parable of Gyges has been asking, “in what sense, if any, can I be morally compelled to do something when I am pragmatically compelled to do the opposite?”

Fyfism is constructed to bamboozle the buyer by making it sound as if there are such things as moral considerations which take normative precedence over raw hedonic satisfaction of one’s own desires. But, being externalists (and not being up-front about it, burying it in the small print), they are actually error theorists about this kind of moral obligation. They (claim, or “claim to claim”) that “you shouldn’t murder people then dissolve their bodies in vats of acid so you won’t get caught” has no normative force, no necessary connection between believing it and acting according to it, while “you are pragmatically obligated to dissolve the body so you won’t get caught” retains its normative force.

You will never, never, never ever get Fyfists pinned down on this. When they are talking to someone who doesn’t know they are externalists, they will go on and on like normal English-speakers about what is wise, what is important, what is the best form of life, why religion is evil, why rationalism is noble etc. But when someone begins to smell a metaphysical rat in the semantic cheese, suddenly, in Necker-cube fashion, the theory switches back into externalist mode, disavowing that their exhortations were at all meant to change anyone’s behavior or convey any normative content.

Stephen December 22, 2011 at 11:50 am

From reading this blog since roughly a few months after it started, you’ve switched interests an amazing number of times. I mean, think of all the huge projects and the inevitable prospects you thought they had, only to find, three months later, you’re talking about something off on another trail (you were an expert on the Kalam argument, going to “map” the entire dialectic, then an expert on meta-ethics with your own theory, then an expert on the singularity, just to name some big ones). What’s annoying, though, is that every time now you write as if you’ve somehow mastered it, risen to an expert and peered over the literature, when in reality you spend far less time and focus on topics than any given graduate student does in a semester survey course. I guess it would be fine if you weren’t so presumptuous, but then I guess you’d have fewer people reading your blog and fewer comments on your posts.

There’s a lot here that’s insufferably dictatorial and anti-rationalist. The worst practice is periodically locking discussants out of the discussion without any notice to other participants or even a heads up to the victim of this obscurantist practice of disappearing discussants.

Stephen R. Diamond December 22, 2011 at 12:11 pm

[I]f you accept that you have desires and that you live in society with other beings that have desires, then you should act in a way that increases the satisfaction of good desires.

Non sequitur, but you’ve put your finger right on where the Primal Equivocation of Fyfism occurs.

Exactly.

Tony Hoffman December 22, 2011 at 1:07 pm

Stephen R., when I wrote, “[I]f you accept that you have desires and that you live in society with other beings that have desires, then you should act in a way that increases the satisfaction of good desires.” that was in response to antiplastic’s assertion that (under Desirism) “there is no correlation whatsoever between having moral belief and acting according to that belief.”

But if you accept that desires exist, then don’t you accept that we have reasons to act (called desires)? In other words, it seems to me that if you accept that desires exist, you accept that we have reasons to act (called desires). It sounds as if you are asking for a reason other than desires themselves, but I think that’s an incorrect expectation of what Desirism puts forth; under Desirism, the premise is that desires in and of themselves are the reasons for action.

antiplastic December 22, 2011 at 2:58 pm

But if you accept that desires exist, then don’t you accept that we have reasons to act (called desires)? In other words, it seems to me that if you accept that desires exist, you accept that we have reasons to act (called desires).

And, freeze frame!

See what just happened there? The original quote was “you should act in a way that increases the satisfaction of good desires.” (where good means the Fyfist analysis of what constitutes the moral good, viz. fulfilling desires regardless of whether they’re yours or not), but then you flip back to asserting what no one in their right mind has ever denied, that we have a reason to fulfill our own desires.

“You want some money, therefore You have a reason to steal it.”

“I want some money, therefore You have a reason to let me steal it from you.”

See the difference? See the utter lack of sequitur-ness?

Zeb December 22, 2011 at 4:50 pm

The worst practice is periodically locking discussants out of the discussion without any notice to other participants or even a heads up to the victim of this obscurantist practice of disappearing discussants.

What are you referring to?

Tony Hoffman December 22, 2011 at 5:56 pm

Antiplastic: “You want some money, therefore You have a reason to steal it. / I want some money, therefore You have a reason to let me steal it from you. / See the difference? See the utter lack of sequitur-ness?”

I see the difference between your first two sentences. But the two can be easily related. If, as I have been saying, you accept that desires exist (yours and others), then morality under Desirism is the description of these kinds of desires, their effects, and a determination of whether or not those desires are to be pre- or proscribed.

Here’s a way of looking at it: if you alone exist, Desirism would only be about the first sentence in your example above. (Leaving aside questions of whether or not one could steal without another agent.) But seeing as how we have more than one desire, and others also exist with many desires, and that all of these desires (our own and others) “compete” with actions that affect the satisfaction of other desires, it makes sense that a system that accounts for all desires (including my desire to not have you steal from me) should follow; for instance, I desire as many of my desires to be fulfilled as possible, but I understand that if everyone acted on the same principle I would realize far less satisfaction than if I worked within a system that accounted for both competition and cooperation. I don’t think that this should be a difficult (theoretical) pill to swallow. I think, as I have mentioned earlier, that the devil is in the details, but I can’t say that the concept is inconsistent or incoherent; I actually struggle understanding why so many people (like yourself) are so confounded by the mere idea of Desirism.

antiplastic December 22, 2011 at 9:30 pm

I see the difference between your first two sentences.

Then you are streets ahead of proponents of Fyfism, who don’t.

They want you to believe they’ve found a conceptual alchemy which transmutes into the gold of “you have a reason” the lead of “someone has a reason” and most notoriously “people generally have a reason”. That is the entirety of their pseudo-calculus: “generally people should”, therefore you should.

Can we all admit that this central dogma rests on a non-sequitur?

If, as I have been saying, you accept that desires exist (yours and others), then morality under Desirism is the description of these kinds of desires,

ok…

their effects,

still ok…

and a determination of whether or not those desires are to be pre- or proscribed.

… aaaand freeze!

It’s been established that AF and LM are externalists. Therefore by definition their “moral” claims are not prescriptions! They are mere normatively neutral catalogs of certain contingent facts that decent people may or may not have any obligation to attend to. But they adopt a style of speaking which obscures this fact, effectively bamboozling people into thinking they’re offering genuine prescriptions when they are only giving descriptions.

But seeing as how we have more than one desire, and others also exist with many desires, and that all of these desires (our own and others) “compete” with actions that affect the satisfaction of other desires, it makes sense that a system that accounts for all desires (including my desire to not have you steal from me) should follow;

“Accounts for” is a slippery term. No one who has ever interacted with other people fails to “account for” the fact that people have different hopes and desires. But this is pure description, devoid of normative content insofar as other desires don’t factor into what you need to do to accomplish what you want.

for instance, I desire as many of my desires to be fulfilled as possible, but I understand that if everyone acted on the same principle I would realize far less satisfaction than if I worked within a system that accounted for both competition and cooperation.

But it is not up to you whether “everyone” keeps off the grass. It is up to you whether you keep off the grass. If everyone took a shortcut across the grass, it would suck because there would be no grass. From this it does not follow that you have a reason not to take a shortcut.

I actually struggle understanding why so many people (like yourself) are so confounded by the mere idea of Desirism.

I’m not confounded in the sense of not understanding it. I will even go out on a limb and say that based on my interactions on this blog I understand its logical contours better than a good many of its proponents. It is not merely that it is false (although as an Expressivist I already know it is false). I am first and foremost exercised by the fact that it is propped up by atrociously bad arguments (I have an allergy to bad arguments), and running a close second is my innate dislike for the quasi-religious, cult-like character it seems to engender in its followers. This overlaps with Mahume’s observations above; I’ve tried to express these concerns in my comments on this post.

Tony Hoffman December 23, 2011 at 6:14 am

AP: “But this is pure description, devoid of normative content insofar as other desires don’t factor into what you need to do to accomplish what you want.”

This seems obviously false. I have a desire to lose weight. I have a desire to eat a donut. How can proposing a system that organizes and navigates these two somewhat competing desires be considered a non-sequitur of that description?

You say that Luke and Alonzo are externalists, but I do not understand why externalism should be considered incompatible with moral normativity. I asked for quotes that demonstrate Alonzo and Luke’s inconsistency along the lines of the scenario you described. It seems to me that many of the most vociferous critics of Desirism have trouble explaining what it is that actually makes them so angry. And I find this odd, because in my experience one of the easiest things to do is expose exactly those things that make an argument a bad one.

Stephen R. Diamond December 23, 2011 at 12:20 pm

TH:But seeing as how we have more than one desire, and others also exist with many desires, and that all of these desires (our own and others) “compete” with actions that affect the satisfaction of other desires, it makes sense that a system that accounts for all desires (including my desire to not have you steal from me) should follow;

AP: “Accounts for” is a slippery term. No one who has ever interacted with other people fails to “account for” the fact that people have different hopes and desires. But this is pure description, devoid of normative content insofar as other desires don’t factor into what you need to do to accomplish what you want.

TH: This seems obviously false. I have a desire to lose weight. I have a desire to eat a donut. How can proposing a system that organizes and navigates these two somewhat competing desires be considered a non-sequitur of that description?

How can a statement “be considered” or not be considered a non-sequitur of another statement? It either is or isn’t. It’s a matter of logic, not a matter of opinion. A system that imposes preferences on a description *is* a non-sequitur relative to that description. It doesn’t follow. That doesn’t mean it’s contradictory, just that the argument from the description to the normative organizing scheme doesn’t follow. But you’re making an argument that pretends it follows. You won’t even admit your argument is a non-sequitur, in the process forgetting what the very term non-sequitur means.

Why do people become so irritated with defenses of utilitarianism (of which desirism is only a minor variant)? I can only speak for myself. For the same reason I become even more irritated with defenders of “Objectivist” morality (Ayn Randism). One of the worst inheritances from religion is moralistic baggage. Yet some atheists, intent on proving they are more moral than thou, end up more committed to moralism and are more irrational in its defense than are religious fools.

Stephen R. Diamond December 23, 2011 at 1:08 pm

The worst practice is periodically locking discussants out of the discussion without any notice to other participants or even a heads up to the victim of this obscurantist practice of disappearing discussants.

What are you referring to?

Periodically, the site doesn’t accept my comment. It simply doesn’t appear, but when I try to repost it, I get the message that it’s a duplicate. This only happens when I’ve commented relatively frequently. Apparently, a person or the software decides I’ve posted too much in direct response to another poster or to a given thread—either pattern occurs. I’ve had the same experience at Robin Hanson’s blog.

Tony Hoffman December 23, 2011 at 1:51 pm

Stephen D: “How can a statement “be considered” or not be considered a non-sequitur of another statement? It either is or isn’t. It’s a matter of logic, not a matter of opinion.”

No, I think a non sequitur can also be a matter of opinion. For instance, “My house is on fire. I need help.” In common usage we could break those up with a paragraph, period, or semicolon, depending on to what extent the second statement follows from the first. People can be found who disagree to what extent the two statements are related, and to what extent the second follows from the first. This goes on all the time.

Stephen D: “A system that imposes preferences on a description *is* a non-sequitur relative to that description.”

Yeah, I’m not sure that it’s that cut and dry. I think I’ve heard a description of Desirism include something along the line that “desires are the only reason we have for actions.” This seems to me like a description of a state of affairs, but I think that premise leads naturally to the question of what, if anything, we do about the relationship of desires and actions, knowing what we know about desires and actions. I can’t imagine why this relationship would be seen as something that doesn’t follow.

Perhaps you could make an argument that demonstrated how it is that I’m wrong.

Reginald Selkirk December 24, 2011 at 4:03 pm

Happy winter solstice holiday of your choosing.

YWHW Lives December 25, 2011 at 5:29 pm

Merry Christmas, Luke!

May you and your loved ones enjoy this gracious Holiday.

May God’s mercy flow upon you and may you rejoin your brethren in Heaven.

Amen.

Heuristics December 26, 2011 at 6:44 am

“the greatest objection to desirism I knew of was one that nobody (except Alonzo) ever mentioned: the possibility that desires do not exist.”

I mentioned it. It is widely known to be the biggest obstacle to atheistic value-realist moral talk, so I applied it to desirism, since desirism must also be realist about desires, and mentioned it on this blog. The existence of desires is, however, just the first difficulty. The second difficulty is the interaction problem of how desires interact with physics, the third how desires could evolve, the fourth why desires feel the way they do, and the fifth why desires would be anything more special than a fart in the air.

Now, going, as some do, the eliminativist-materialist route of applying the word “desire” to a mechanical/mathematical/physical description of the brain’s neurons is cheating, since that is not what is meant when people use the word (they are referring to an emotion, not a movement description). Doing what is often done in neuroscience, pretending to give a mechanical/mathematical/physical description while in actuality smuggling in teleologically loaded language such as symbols and goal-directedness, is just plain insulting to naturalism, for such terms are not mathematical, they are not mechanical, and they are not physical.

Stephen R. Diamond December 26, 2011 at 4:32 pm

Stephen D: “How can a statement “be considered” or not be considered a non-sequitur of another statement? It either is or isn’t. It’s a matter of logic, not a matter of opinion.”

No, I think a non sequitur can also be a matter of opinion. For instance, “My house is on fire. I need help.” In common usage we could break those up with a paragraph, period, or semicolon, depending on to what extent the second statement follows from the first. People can be found who disagree to what extent the two statements are related, and to what extent the second follows from the first. This goes on all the time.

The inference from the house being on fire to the need for help is elliptical. There are unmentioned premises; they’re unmentioned only because they’re too trivial to mention, particularly during an emergency. [E.g. I want to avoid my house's burning down and getting help will prevent it.] Alternatively, the speaker might not have intended a logical inference but only a bit of practical reasoning. And that’s the only sense in which there can be legitimate question about whether a non sequitur has been committed: whether a logical inference is intended. But here, you’ve never denied intending a logical inference.

Stephen D: “A system that imposes preferences on a description *is* a non-sequitur relative to that description.”

Yeah, I’m not sure that it’s that cut and dry. I think I’ve heard a description of Desirism include something along the line that “desires are the only reason we have for actions.” This seems to me like a description of a state of affairs, but I think that premise leads naturally to the question of what, if anything, we do about the relationship of desires and actions, knowing what we know about desires and actions. I can’t imagine why this relationship would be seen as something that doesn’t follow.
Perhaps you could make an argument that demonstrated how it is that I’m wrong.

There _is_ at least one way information can “lead naturally” to other information besides analytic entailment, and that’s, of course, induction. If you’re saying the entailment isn’t deductive, I think it’s up to you to say what you claim it is. “Leads naturally” is too vague to refute. (I guess that’s why you’re not getting the crisp arguments you expect.)

You say that desires being the only reason for action “leads naturally” to “the question of what we do” about that relationship. Apart from this claim’s vagueness (especially around who the “we” is), it doesn’t bear on _answering_ the putative question. The fact that ethics is inherently about desires doesn’t say anything about which desires should be satisfied. The answer on offer favors maximizing satisfaction. Nothing in positing that desires alone are reasons for action implies by any form of inference—certainly not by logical entailment—that desire satisfaction should be maximized or that all desires should be counted equal regardless of their source or object. It doesn’t imply that pursuing any combination of desire satisfactions is what we “should” do.

As to knock-down arguments against desirism, I’d advert to the arguments against all forms of moral realism: mainly, Moore’s open question argument. (http://tinyurl.com/7dcbt7y) Perhaps the easiest way to bring the basically simple point home is with a question. Suppose you’ve proved that the correct moral standard is to maximize the satisfaction of the greatest number of (intensity-weighted) desires for the greatest number of people. So what? Why _should_ this conclusion lead me to do anything differently than if I believed the good consisted in building many golden calves? What proclivities should this conclusion induce in me? In other words, why should this belief about what I “should” do create any *desire* in me to act accordingly: to seek not just my original desires but a *new* desire to maximize everyone’s satisfaction?

antiplastic December 28, 2011 at 12:41 pm

This seems obviously false. I have a desire to lose weight. I have a desire to eat a donut. How can proposing a system that organizes and navigates these two somewhat competing desires be considered a non-sequitur of that description?

It wouldn’t be. But that is not what we are talking about because that is not what Fyfism is. It is the claim that everyone’s desires count as reasons for everyone else, which is flatly false absent some irreducibly nondescriptive claim that we should take them into account.

Do you notice how every time you present some fact about reasons as being “obviously” true, it is some innocuous fact about organizing one’s own desires that no one ever has disputed, and how every time I give an example of a moral claim it involves subordinating them to others’ desires?

I’ve long suspected that one of the reasons this equivocation remains just absolutely invisible to Fyfists is that they rely on the (very understandable and very admirable and very probably true) suppressed deontic premise that it’s just really nice to try to make as many people as happy as possible, so when they see a pseudo-calculus which purports to supply a way of ascertaining what that would mean, they project their own normative values onto the system and see it as complete, without bothering to notice that they’ve smuggled in an “ought”.

You say that Luke and Alonzo are externalists, but I do not understand why externalism should be considered incompatible with moral normativity.

I’ve explained this when I said

According to [externalism], it would be a perfectly ordinary and everyday thing for someone to say “I believe, fully and sincerely, that the Iraq War was a moral abomination, and anyone who supported it should feel horribly guilty at minimum, and at maximum should be condemned in the court of public opinion and possibly even subject to punishment. However, I love the Iraq war, continue to support it enthusiastically and sincerely, and feel no guilt about my beliefs and actions in this regard.”

On this view, questions like the one asked in the most recent podcast, “what should I do in questions of moral uncertainty” simply do not compute: there is no correlation whatsoever between having moral belief and acting according to it, or having a reason to act according to it.

And then when I said “any time you see an externalist telling people they ought to do something or refrain from doing something (and this is not merely informing them of a more efficient way to do what they already wanted to do), they are being inconsistent.”

You snipped that out and never addressed it. It is a somewhat technical term, so it’s no sin if maybe you don’t understand what it means, but in that case you should really just ask.

If I believe I ought to do something, whether this is a practical ought or a moral ought, there is a normative component to my belief in virtue of which I am necessarily motivated to at least some degree to act according to it; I have a reason to do so. Conversely, if I believe you ought to do something I believe that you have a reason to act accordingly. According to Fyfism, however, I am capable of telling you you ought to do something even if you don’t have a reason to, and I am capable of believing that I ought to do something without this having any effect on my motivations. I regard this as a prima facie refutation of the theory.

Consider the utopian pipe-dream of the SI, so-called “friendly AI”. Luke wants to make sure that our robo-overlords get a good moral education, so to speak, to ensure the righteousness and justice of their thousand-year reign. According to externalism, a robot with 100% true moral beliefs can be expected to behave indifferently with respect to them — unless you add the irreducibly normative command that it act so as to fulfill them. This is what is called a “performative contradiction”; the externalists’ actions do not align with what they claim to believe.

It seems to me that many of the most vociferous critics of Desirism have trouble explaining what it is that actually makes them so angry.

Seriously uncool and uncalled for. I have expanded, at length and in detail, on the philosophical and moral reasons for my objections. If you don’t find my explanations convincing, fine; but please drop the “you can’t give any explanations” tack.

antiplastic December 28, 2011 at 12:57 pm

A note on eliminativism:

I’m pretty sure I raised the issue explicitly in a question long before that double-post went up, but implicitly it comes up whenever anyone points out the in-principle impossibility of a calculus of desires. If you’re willing to go through the old IIDB/FRDB archives, you can see people pointing this problem out to AF years ago, to no avail.

To claim (accurately) that preferences and aversions are in-principle incommensurable is to deny them a place in one’s scientific ontology, which is a kind of eliminativism, even though one might very well keep them for instrumentalist reasons. It is to say that there is no one quantifiable thing called “a desire to eat fruit” that can be weighted quantifiably against some one thing like “a desire not to live in New Jersey” in the way two vectors of Force can be summed in physics to produce a determinate prediction about future motion.

Adito December 28, 2011 at 6:55 pm

Nice to see a substantial discussion here again.

Antiplastic,

Sorry to jump in here but this might give me a chance to understand Desirism a little better. Your objection seems to center around the idea that descriptions of desires and the world cannot be prescriptive. This seems false. If I have the desire to go north and turning left and walking will lead me north then a statement such as “turn left and walk” is prescriptive to me. There is no arbitrary preference being projected by someone making this prescriptive statement. You said the following –

I am capable of telling you you ought to do something even if you don’t have a reason to

I don’t think that’s how desirism would frame the issue. A desirist would say they have reason to promote a desire to act in a certain way in another person, but there’s no particular reason for this action to be attractive to the person in question at this point in time. If the desirist succeeds in promoting that desire in the other person, then this person will have a normative reason to act such that the new desire is fulfilled.

I am capable of believing that I ought to do something without this having any effect on my motivations.

It’s not clear to me that this is the case. To a desirist the belief that I ought to do something is exactly the same as saying I have a motivation to do something.

Am I misunderstanding your points?

Richard Wein December 29, 2011 at 5:44 am

Luke: “Desirism never posited anything more than the standard, reductionistic, scientific picture of the world.”

Well, that’s not quite true. You have also posited something about the meanings of moral terms. While you’ve sometimes given the impression of feeling free to make up any meanings for moral terms that you like, at more thoughtful times you have acknowledged that there are some limits to what moral terms can be used to mean. Otherwise you could be in the position of someone who claims there is life on the Moon, and attempts to justify that claim by defining “life” to mean “rock”. Or you could define “moral” to mean “green”, and claim that grass is moral.

In your LW post (Pluralistic Moral Reductionism) you make some explicit claims about the pre-existing (pre-theoretical) meanings of moral terms. However, you give no evidence whatsoever in support of these controversial and counterintuitive claims. And I say that you’ve got the meanings of moral terms completely wrong (as I think would most philosophers). If you have indeed got the meanings of moral terms completely wrong, then your “reforming” definition is useless, and you are in a similar position to the life-on-the-Moon claimant that I mentioned.

One of the things you seem to overlook is that claims about the meanings of words are themselves factual claims that need to be supported by evidence.

Luke: “I once wrote a two-part post called ‘The Greatest Objection to Desirism’. I said that the most common objections to desirism were no good, and that the greatest objection to desirism I knew of was one that nobody (except Alonzo) ever mentioned: the possibility that desires do not exist.”

The greatest objection to desirism (as to any variety of definitional moral naturalism) is that you’ve got the meaning of moral terms wrong. I think your failure to even see this as a serious objection reveals your deep confusion about meanings.

Luke: “But the larger reason I’ve stopped talking about morality in the language of desirism is that I’m tempted to not use moral terms at all. Moral language is thoroughly confused and corrupted and strongly motivated, and I’m more tempted than ever to abandon the entire language and start with a new one.”

Well, that sounds like a step in the right direction. You always used to say that you could make your desirist claims in different (non-moral) terms. But you ignored all challenges to do so. But it sounds to me as if you are confused about whether your new language will be a moral one. If your aim is to produce a different moral language, then it’s self-defeating. Every moral language will have the same problems. If your aim is to produce a non-moral language, then you must drop the claim that you’re doing metaethics. If you say you’re doing metaethics then you’re implying that your language is a moral one, and people will tend to interpret your new terms as moral terms. They will then acquire moral meaning for those people.

As I see it, definitional moral naturalists are conflating meanings. They are using moral terms both in their pre-existing sense and in the sense that the moral naturalist defines. This conflation of meanings leads to fallacies of equivocation and general confusion. And if you’re not careful your new terms will end up having similarly conflated meanings.

Richard Wein December 29, 2011 at 6:08 am

P.S. I forgot to mention an important point. One result of your conflation of meanings is that you are positing something “more than the standard, reductionistic, scientific picture of the world,” and not only something about meanings.

Consider again my life-on-the-Moon example. If the claimant is conflating his defined meaning of “life” (=rock) with the standard meaning (=organisms), he may be committing a fallacy of equivocation. He may think that he’s proven that there really is life (standard sense) on the Moon. In other words, he may not just have adopted a peculiar way of saying that there’s rock on the Moon. He may really be saying that there’s life on the Moon.

Now that conflation is pretty far-fetched. It’s hard to imagine someone being that confused about the meaning of “life”. But people really do get confused about the meaning of moral terms, and the analogous conflation for moral terms is quite plausible. I say that definitional moral naturalists are making such an error, and so (like the life-on-the-Moon man) really are claiming something more than the standard scientific picture of the world.

If you were strictly using moral terms in accordance with your definition, then it would be true that you are claiming nothing more than the standard scientific picture of the world (plus claims about meanings). But I say that you’re conflating your defined meaning with the standard meaning of moral terms, and so claiming something more. If you were simply making descriptive factual claims about desires you could do so much more clearly by avoiding moral language. What motivates you to use moral language is the fact that moral language still has its usual normative, prescriptive, judgemental meaning for you, not just the descriptive meaning you’ve defined. And there’s a danger that any new terms you define will also have that meaning for you.

Richard Wein December 29, 2011 at 6:20 am

P.P.S. “Philosophy is a battle against the bewitchment of our intelligence by means of our language.” (Wittgenstein)

Tony Hoffman December 29, 2011 at 8:49 am

AP: “Do you notice how every time you present some fact about reasons as being “obviously” true, it is some innocuous fact about organizing one’s own desires that no one ever has disputed, and how every time I give an example of a moral claim it involves subordinating them to others’ desires?”

The point I was trying to make was that the acknowledgement that different and competing desires exist (whether one’s own, others’, or a combination of the two) introduces the question of what can be done so that more of my desires are satisfied. Why? Because describing a desire is also about describing what I want, which is another way of saying what I should do – and what I want to do most is satisfy most of my desires. And since I am not a solipsist, and I believe in other minds, I don’t think it’s outlandish of me to consider the description of desires as inclusive of their prescriptive and proscriptive content. In other words, it seems to me that a part of describing a desire could very well include not just what satisfies that desire, but also what impact that satisfaction has on other desires (mine and others).

Obviously, I am not a moral philosopher, nor am I trying to portray myself as an adequate defender of Desirism. My point is that, from my naïve vantage point, it is not clear to me how it is that Desirism is ill-fated in the same way that others (like yourself) are declaring all systems of moral realism to be.

AP: “According to Fyfism, however, I am capable of telling you you ought to do something even if you don’t have a reason to, and I am capable of believing that I ought to do something without this having any effect on my motivations. I regard this as a prima facie refutation of the theory.”

Right, and either I don’t understand this or maybe you and I have a different understanding of Desirism – can you maybe provide a reference or a quote from a proponent of Desirism on which you are basing your representation above?

AP “Consider the utopian pipe-dream of the SI, so called “friendly AI”. Luke wants to make sure that our robo-overlords get a good moral education, so to speak, to insure the righteousness and justice of their thousand-year reign. According to externalism, a robot with 100% true moral beliefs can be expected to behave indifferently with respect to them — unless you add the irreducibly normative command that it act so as to fulfill them. This is what is called a “performative contradiction”; the externalists’ actions do not align with what they claim to believe.”

Yeah, I think I can see the problem here – I think you’re showing how Desirism and AI introduce the problem of where the first desire comes from (as well as what it is). I would suggest that the first, axiomatic desire is to maximize the fulfillment of one’s own desires. Questions of normativity, navigating one’s actions so as to achieve this, and interacting with other beings capable of having desires, could lead naturally to a set of individual and societal prescriptions and proscriptions that balance all these desires.

AP: “Seriously uncool and uncalled for. I have expanded, at length and in detail, on the philosophical and moral reasons for my objections. If you don’t find my explanations convincing, fine; but please drop the “you can’t give any explanations” tack.”

Yeah, I think you took this as more of an insult than I intended, and you’re right that I should have explained that the problem could very well be my own poor understanding. Judging from the number of critics of Desirism, it’s probable that I just haven’t found a way of absorbing the criticism yet. I was trying to suggest to you (and others) that, if your criticism is indeed valid, it’s also possible that I was sincerely still having trouble grasping that fact, and that some other analogies or arguments might work better.

antiplastic December 29, 2011 at 12:02 pm

Your objection seems to center around the idea that descriptions of desires and the world cannot be prescriptive.

Close, but not quite. Descriptions of desires and their fulfillment conditions can be prescriptive, if and only if the agent has those desires. Fyfism is a continual equivocation between being “merely” a description of desires including but not limited to one’s own, and the normative claim that fulfilling desires other than one’s own is what one has a moral obligation to do.

You will never pin them down on this. It has some sort of psychological cloaking device that renders the equivocation utterly invisible to their sensors.

This seems false. If I have the desire to go north and turning left and walking will lead me north then a statement such as “turn left and walk” is prescriptive to me. There is no arbitrary preference being projected by someone making this prescriptive statement.

As I said above, this is the innocuous, trivially true observation that Fyfism tries to take credit for, as though no one had previously realized that finding out what to do to get what you want is a useful thing.

“I am capable of telling you you ought to do something even if you don’t have a reason to”

I don’t think that’s how desirism would frame the issue. A desirist would say they have reason to promote a desire to act in a certain way in another person but there’s no particular reason for this action to be attractive to the person in question at this point in time.

That one often has a reason, based on one’s own desires, to change the outlooks of others, is no philosophical profundity, but rather a banality disputed by no one at any time.

But if I say to you “you should change your attitude about this,” and there is no corresponding desire in you at the time, then I have uttered a falsehood if my only intent was to relate desires and their satisfiers. However, it is a commonplace to say (correctly and truly) that someone has a moral obligation to do something (“morally, you should X”) when they lack the requisite desire.

Therefore, morality is not and cannot be the same thing as a mere cataloging of what fulfills what desires. Their truth conditions are different. The Fyfists at this point start playing a game of Three-card Monte with definitions; while one can’t be sure in advance which specific card they will turn over in any given argument, you can be sure it won’t be the one where they own up to being error theorists (you can’t very well have a blog called “The Atheist Ethicist” where every post is “there are no moral truths”) or the one where they admit that their own pseudo-moral definitions entail that calling something morally reprehensible is not intended to have any normative force on its targets.

If the desirist succeeds in his prompting of desire in another person then this person will have a normative reason to act such that the new desire is fulfilled.

Changing other people’s attitudes just is what moral discourse is all about, and is just what by definition an externalist can’t be doing when he makes moral claims! As a matter of practical reason, “you should give more to charity” is false if you lack the requisite desires, and true when you do. But as a matter of morality, it is just as true regardless of how you feel about it; it was true even when you lacked the requisite desires. Therefore, morality cannot simply be a recitation of “how to get what you want”. The truth conditions are clearly different.

“I am capable of believing that I ought to do something without this having any effect on my motivations.”

It’s not clear to me that this is the case. To a desirist the belief that I ought to do something is exactly the same as saying I have a motivation to do something.

No, you are correct. It is not the case. That is my point. The externalist says it is the case. The externalist is wrong. AF and LM are externalists. Therefore they are wrong. Case closed. Any analysis of morality that jettisons the ineliminably action-guiding character of moral truths is, as RW points out, like calling rocks “life” and concluding there is life on the moon: it is not a definition or even a “reforming definition”, it is a pseudo-definition, a way of assigning purely arbitrary verbal tokens to things in a deliberately obscurantist fashion.

But you are incorrect about what Fyfists say about believing one (morally) ought to do something. According to them, one (morally) ought to do something when the entire universe of desires other than your own “generally, in sum, more often than not” contains more of other people’s desires which would be fulfilled by that action. And it is quite obvious that one is capable of believing some normatively neutral fact about poll results of other people without having any particular emotive reaction to them. According to the Fyfist reckoning, it will be a commonplace for people to correctly believe they “ought” (in the scare quotes sense of adding up the universe of desires) to do something when in fact it would be the very height of irrationality and self-sabotage to do so. And then the game of hide the card starts all over again.
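
For illustration, here is a minimal sketch of that reckoning as I’m reading it (Python, with invented names like ought_by_tally; this is not Fyfe’s or Luke’s own formulation): the “ought” verdict is just a tally over desires other than the agent’s own, a normatively neutral fact that carries no motivation with it.

    # A sketch of the "adding up the universe of desires" reading described above
    # (assumed names; not anyone's official formulation). The verdict is a plain
    # descriptive fact about other people's desires, with no motivational content.

    def ought_by_tally(action_effects, my_desires):
        """action_effects maps each desire to +1 (fulfilled) or -1 (thwarted).
        Returns True when, summing over desires other than the agent's own,
        the action fulfills more than it thwarts."""
        others = {d: v for d, v in action_effects.items() if d not in my_desires}
        return sum(others.values()) > 0

    effects = {
        "alice_wants_quiet": +1,
        "bob_wants_party": -1,
        "carol_wants_sleep": +1,
        "me_wants_party": -1,
    }
    # Prints True -- yet nothing about this verdict gives the agent a motive to act on it.
    print(ought_by_tally(effects, my_desires={"me_wants_party"}))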

Nester December 29, 2011 at 3:11 pm

“Moral language is thoroughly confused and corrupted”

Maybe because people like you and Fyfe make moral shit up and then act like you’re only talking about things that “actually exist”??

Stephen R. Diamond December 29, 2011 at 3:53 pm

“And since I am not a solipsist, and I believe in other minds, I don’t think it’s outlandish of me to consider the description of desires as inclusive of their prescriptive and proscriptive content. In other words, it seems to me that a part of describing a desire could very well include not just what satisfies that desire, but also what impact that satisfaction has on other desires (mine and others).”

The impact that satisfaction has on other desires is part of what you would normally consider in evaluating the desire itself. But that’s not your actual claim; it’s that you should consider the effect a universal rule that moralizes about satisfaction would have on your satisfaction.

The root error seems to be the same one that leads to other altruistic fallacies, notably the voter’s paradox. Why vote, when the expected utility is minuscule and the sacrifice of time significant? (My answer: political participation may be inherently pleasurable for some.) The naive answer to the paradox is, “Well, what would happen if everyone thought that way?” You assume (feel free to reject this diagnosis) the categorical imperative: I must act according to a rule that I advocate everyone follow. That’s why it seems so natural to take other people’s beliefs into account; the moral world consists of obligations that both you and they must obey.

“Obviously, I am not a moral philosopher, nor am I trying to portray myself as an adequate defender of Desirism. My point is that, from my naïve vantage point, it is not clear to me how it is that Desirism is ill-fated in the same way that others (like yourself) are declaring all systems of moral realism to be.”

I don’t get that AP opposes moral realism. He opposes moral naturalism, which is something else. He seems to take the contradictory anti-realist implications of desirism as a reductio of desirism. He’s a deontologist [giving due weight to his comments that the desirist seeks to model “true” ontological moral judgments]. (I’m an error theorist/antirealist.)

Stephen R. Diamond December 29, 2011 at 6:09 pm

Since the discussion has focused on the anti-realist implications of desirist tenets, we should look at what Luke has cited on the question:

“So, does all this mean that we can embrace moral realism, or does it doom us to moral anti-realism? Again, it depends on what you mean by ‘realism’ and ‘anti-realism’.

“In a sense, pluralistic moral reductionism can be considered a robust form of moral ‘realism’, in the same way that pluralistic sound reductionism is a robust form of sound realism. “Yes, there really is sound, and we can locate it in reality — either as vibrations in the air or as mental auditory experiences, however you are using the term.” In the same way: “Yes, there really is morality, and we can locate it in reality — either as a set of facts about the well-being of conscious creatures, or as a set of facts about what an ideally rational and perfectly informed agent would prefer, or as some other set of natural facts.”

“But in another sense, pluralistic moral reductionism is ‘anti-realist’. It suggests that there is no One True Theory of Morality. (We use moral terms in a variety of ways, and some of those ways refer to different sets of natural facts.) And as a reductionist approach to morality, it might also leave no room for moral theories which say there are universally binding moral rules for which the universe (e.g. via a God) will hold us accountable.

“What matters are the facts, not whether labels like ‘realism’ or ‘anti-realism’ apply to ‘morality’.”

Note the clause, “… does it DOOM us to moral anti-realism.” Those wondering why moralist theories have such a grip should consider that we’re talking mostly about first-generation atheists who can live without god but not without morality.

antiplastic December 30, 2011 at 12:41 pm

@TH

And since I am not a solipsist, and I believe in other minds, I don’t think it’s outlandish of me to consider the description of desires as inclusive of their prescriptive and proscriptive content. In other words, it seems to me that a part of describing a desire could very well include not just what satisfies that desire, but also what impact that satisfaction has on other desires (mine and others).

That is all part of the de-scription. But morality is inherently pre-scriptive; in the realm of practical rationality prescriptivity is relativised to the people who actually have the desires. That is the unbridgeable chasm between the two kinds of reasoning.

Right, and either I don’t understand this or maybe you and I have a different understanding of Desirism – can you maybe provide a reference or a quote from a proponent of Desirism on which you are basing your representation above?

I linked to the earlier thread, but should have drawn your attention to the exchanges between LH, Penneyworth, and myself. Ctrl-f “internalism” there and you will see Luke admitting that Fyfism does not and cannot accommodate this central feature (“I don’t see any way for desirism to accommodate motivational internalism.”)

Yeah, I think I can see the problem here – I think you’re showing how Desirism and AI introduce the problem of where the first desire comes from (as well as what it is). I would suggest that the first, axiomatic desire is to maximize the fulfillment of one’s own desires. Questions of normativity, navigating one’s actions so as to achieve this, and interacting with other beings capable of having desires, could lead naturally to a set of individual and societal prescriptions and proscriptions that balance all these desires.

Not quite.

Consider programming a simple robot with a light sensor. This sensor will give the robot accurate “beliefs” about where the light is brighter and where it is darker. Now, you can program a robot to move towards where it thinks the light is brighter, or you can program it to move away from the light — it depends entirely on what you are trying to get the robot to do.

So in this simple illustration, we have Light-desiring-bot and Light-hating-bot.

If you set them loose in the same room with the same light sources, their sensors will give them accurate “beliefs” about where the light is, but their actions will be very different, because you’ve programmed them (commanded them, prescribed for them) differently.

So the upshot of this is that a belief which is merely descriptive (as externalists claim moral beliefs are) is not sufficient to dictate behavior – there must be an additional element in the programming saying “go towards this” or “steer clear of this”. So even if Luke programs his AI with 100% true moral beliefs, according to his own theory this will not cause the robot to act morally!
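
To make the point concrete, here is a minimal sketch in Python of the toy example above (invented names like light_gradient and make_bot; nobody’s actual code): both bots compute the same descriptive “belief” from the same sensor, and only the goal they were programmed with turns that belief into opposite behavior.

    # A toy sketch (assumed/invented names): both bots get the same descriptive
    # "belief" from the same sensor; only the goal term they were programmed
    # with determines what they do with it.

    def light_gradient(position, light_source=10.0):
        """The robot's 'belief': +1.0 if the light is brighter ahead, -1.0 if behind."""
        return 1.0 if light_source > position else -1.0

    def make_bot(goal):
        """goal = +1 for Light-desiring-bot, -1 for Light-hating-bot (the prescriptive part)."""
        def step(position):
            belief = light_gradient(position)   # purely descriptive
            return position + goal * belief     # behavior = belief combined with the programmed goal
        return step

    light_seeker = make_bot(goal=+1)
    light_hater = make_bot(goal=-1)

    print(light_seeker(0.0))  # 1.0  -- moves toward the light
    print(light_hater(0.0))   # -1.0 -- same belief, opposite move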

Of course, I don’t think for one minute that Luke doesn’t really mean to program the robots to behave morally, or that anyone who speaks English doesn’t understand “friendly AI” to be about getting the right kind of behavior out of them. But this is because he is inconsistent; he doesn’t really believe what he says he believes. He is “in the grip of a theory” and hasn’t fully worked through what the practical implications of Fyfist slogans would actually be. To pull up another famous Wittgenstein quote, “philosophy is the assemblage of reminders for a particular purpose.” Moral naturalists from time to time simply need to be reminded of what they really believe, and to compare this with what the theory says they should believe.

(Sorry about snapping out earlier. I get grouchy when I try to quit caffeine.)

antiplastic December 30, 2011 at 12:51 pm

@Stephen

I don’t get that AP opposes moral realism. He opposes moral naturalism, which is something else. He seems to take the contradictory anti-realist implications of desirism as a reductio of desirism. He’s a deontologist [giving due weight to his comments that the desirist seeks to model “true” ontological moral judgments]. (I’m an error theorist/antirealist.)

I’m an expressivist. I don’t have any problems talking about real moral truths and genuine obligations which don’t depend on you wanting to have them. But I object to the metaphysical realist angle which tries to pretend that the authority of morality rests in some transhuman “objective” domain.

Good catch on the “doom us to anti-realism” thing. Moral realism is a classic example of what the Yuddites love to point out as “motivated reasoning”. Admitting that we are only responsible to ourselves, and not “the will of god” or “the dictates of reason” or “natural law” is seen as “doom”, an apocalypse to be avoided at all costs, so by hook or by crook we must find some way of talking which makes it sound like we are merely conduits for some free-standing, externally given moral order in the universe.

Oh, and also, a lot of the fyfist arguments are just plain silly, so there’s that too.

Tony Hoffman December 30, 2011 at 7:06 pm

AP: “So the upshot of this is that a belief which is merely descriptive (as externalists claim moral beliefs are) is not sufficient to dictate behavior – there must be an additional element in the programming saying “go towards this” or “steer clear of this”. So even if Luke programs his AI with 100% true moral beliefs, according to his own theory this will not cause the robot to act morally!”

Well, here’s my problem with your analysis, then: I think that meta-ethical issues (what should we prescribe and proscribe) are possibly meaningless questions. The fact, in a reduction of desirism, comes from the wanting. So, in a world with light-seeking and light-avoiding robots, there are no facts but the difference in those desires, and no morality but that which can be accommodated given these facts. This seems like the starting point upon which a morality is built, as opposed to vice versa.

Of course, I agree that this doesn’t help Luke and programmers of AI, who have to concern themselves with meta-ethical issues. So to the extent that Desirism’s proponents declare that it offers answers to those questions, I agree that it does not. I suppose that my confusion stems from the extent to which I understood Desirism to be a description of a moral system for any given society (from which the facts about desires are already known) as opposed to a prescription for any hypothetical society (for which desires are to be assigned), at which point I think it fails (with both a sense of dignity and honesty).

antiplastic December 31, 2011 at 10:52 am

Instead of “meaningless” I would say that most meta-ethics is disguised first-order ethics. The realization that the universe (as in my toy robot example) is nothing more than a congeries of meaningless sequences of cause and effect, devoid of moral structure, is an important result of metaethical philosophizing, but it is just the beginning. There remains the unavoidable existential question of what to do with one’s life in light of this fact.

We are, as Sartre so memorably put it, “condemned to be free,” by which he meant we have infinite moral freedom and therefore infinite moral responsibility, with no possibility of escaping the necessity of personal choice and taking refuge in some externally given order like “the will of god” or “the dictates of reason”. For some people, the attitude they adopt is despair, nihilism. For myself, I take to a sort of Stoic optimism and faith in the creative power of the human spirit. I think the attitude of the nihilist, the error theorist, the relativist is wrong, but by this I only mean it is the wrong sort of reaction to have to the death of God, not that they are “wrong” in the sense of misdescribing the world; existential attitudes are not forced or entailed by facts. I think existential despair is an understandable, quite human reaction, and I view philosophy — in the broadest possible sense of the term — as fundamentally a therapeutic enterprise aimed at treating psychological conditions of this kind.

But in order to have the conversation about what to do with ourselves in an indifferent universe, we first all have to admit that the universe is indifferent. This is why I take such an interest in arguing for antirealism. I am always dismayed and disappointed when the Moral Argument comes up in formal debates, because the nontheist either tries to take a cop-out and argue for objective morality, or goes straight for nihilistic despair. That’s not how I would handle it.

Tony Hoffman January 2, 2012 at 4:37 pm

AP, I find myself in total agreement with your last post. I have nothing to add, except that I appreciate the time you took to explain yourself further.

Evan January 10, 2012 at 10:16 pm

I’m pretty late to this, and it looks like the conversation has ended, but I have to respond because of the frustration I feel at the way the anti-realist arguments here completely miss the point.

The is-ought problem is totally, completely orthogonal to morality. Hume knew this when he discovered it, yet for some bizarre reason people still use the is-ought problem to make arguments for moral anti-realism. The blog post that Stephen R. Diamond linked to was basically a giant rehash of this view: it claimed that since there are no universally compelling arguments that will work on any conceivable mind no matter what, all moral statements are wrong. That’s like saying that all moral statements are wrong because unicorns don’t exist. Unicorns have exactly as much to do with morality as universally compelling arguments do.

Hume, who discovered the is-ought gap, knew moral statements were true in spite of it, because he knew that humans have excellent reasons to behave morally. In fact, his friend Adam Smith wrote a book about those reasons: The Theory of Moral Sentiments. Since humans already have moral sentiments that make them want to be moral, it is totally unnecessary for moral philosophers to deal with the is-ought gap. Instead all they need do is develop a way to most efficiently follow the dictates of our consciences. So far Fyfe’s desirism has struck me as one of the most effective of these philosophies.

So what is morality? It’s simply a shorthand term for desires that make the world a better place. It’s the sentiments that make us want to help others achieve their desires, be happy, and not harm others. And moral philosophy (when it doesn’t go nuts doing crazy and irrelevant things like trying to bridge the is-ought gap) is simply the human race’s attempt to systematize our feelings of right and wrong so that we can do good more efficiently. Ethical naturalism is correct: moral statements like “that is wrong” are just shorthand for objective claims like “that frustrates desires.”

The view that Diamond espouses, that the is-ought problem negates morality, seems to be a symptom of people who are infected with too much bad philosophy. I’ve spoken with many of my friends without a philosophical background, and they found the idea that morality is false because you can’t come up with an argument convincing enough to persuade someone without a conscience to be utterly bizarre, to the point of being nonsensical.

There are no universally compelling arguments, and that’s a good thing. If there were, those arguments might compel us to act immorally. For the best dramatization of this I recommend reading Final Crisis. In that book Darkseid, an evil alien warlord, develops a universally compelling argument, the Anti-Life Equation, that bridges the is-ought gap and proves that he is the one true lord of the universe. Then he conquers Earth by posting it online. The various heroes and authorities of Earth don’t rejoice that morality has finally been solved. They instead fight Darkseid and try to avoid being exposed to his argument. Even though Darkseid has a universally compelling argument that everyone should submit to his totalitarian dominion, the heroes of Earth keep trying to do the right thing anyway.

Stephen R. Diamond February 2, 2012 at 3:38 pm

Evan’s summary is entirely inaccurate. See the whole argument and continue the discussion about morality’s unreality and the function of moral principles at http://tinyurl.com/7advgq5
