Natural Value

by Luke Muehlhauser on December 29, 2009 in Ethics, Guest Post


The ethical theory I currently defend is desirism. But I mostly write about moral theory, so I rarely discuss the implications of desirism for everyday moral questions about global warming, free speech, politics, and so on. Today’s guest post applies desirism to one such everyday moral question. It is written by desirism’s first defender, Alonzo Fyfe of Atheist Ethicist. (Keep in mind that questions of applied ethics are complicated and I do not necessarily agree with Fyfe’s moral calculations.)


Environmentalist thinking is plagued by a significant and largely unchallenged flaw: the idea that what is natural has intrinsic value. On this view, any human interaction with nature renders it ‘unnatural’, which then destroys its natural value.

Against this we can start with the proposition that there is no intrinsic value. Value exists in the form of relationships between states of affairs and desires. That which tends to fulfill desires is good, and that which tends to thwart desires is bad.

Because intrinsic value does not exist, it is not possible to perform any action that destroys intrinsic value. No human action, from the construction of a dam to the clear-cutting of a forest, has had the slightest effect on that which has intrinsic value. The most and the least that any such act has ever accomplished, whether in creating value or in destroying value, is to alter the relationships between states of affairs and desires.

In this respect, it is important to note that both outcomes are possible. Where human activities create a state of affairs that tends to fulfill desires, humans have improved on nature. In adding iodine to salt and vitamins to breakfast cereals, we have improved on nature. We remove causes of disease through pasteurization. We remove the inconvenience of seeds from food.

All of this is unnatural, but it enhances the only type of value that is real – relationships between states of affairs and desires.

Appealing to intrinsic value when making claims about nature often involves pulling the same type of stunt that we sometimes see when people refer to God or scripture.

What is often happening when a person refers to God or scripture in moral arguments is that the agent has started off with his own desires. There are certain things that the agent likes or does not like. However, it is not a very effective argument to say, “You must stop that action because I do not like it.” So, instead of saying, “I do not like it when people do such things,” the theist comes up with the claim, “God does not like it when people do such things and I am the voice of God.”

What is often happening when a person refers to the intrinsic value of nature in moral arguments is that the agent has started off with his own desires. There are certain things that the agent likes or does not like – that the agent wants preserved or unaltered. However, it is not a very effective argument to say, “You must not alter that piece of nature because I would not like that.” So, instead of saying, “I do not like it when people alter nature in such a way,” the environmentalist comes up with the claim, “Altering nature in such a way destroys that which has intrinsic value.”

Actually, no, it does not.

It may alter some state of affairs in nature in such a way that the desires of certain individuals are thwarted. In other words, the change might be something that some set of environmentalists does not like. However, nothing of intrinsic value has ever been destroyed.

In some cases, we cannot even honestly claim that the desires of the environmentalists are thwarted.

Again, an analogy to religion is useful.

There may well be people who have a genuine desire to serve God. Such a person might also believe that God is opposed to homosexual relationships. She then concludes that the work she puts into opposing homosexual relationships has value because it is serving God.

However, this agent has never spent a single moment of her life serving God. We do not need to worry about thwarting this agent’s desire to serve God, because such a desire can never be fulfilled. Nothing she or any of us can do will ever make the statement, “She has served God,” true. She may well believe that certain states have value because they count as serving God. However, her beliefs are mistaken.

Similarly, there are environmentalists who put a great deal of effort into preserving what they think of as having intrinsic value in nature. However, these people have never spent a single moment in their lives realizing something that has intrinsic value. They may believe that they have preserved and protected something of intrinsic merit. However, in fact, nothing they have done or could ever do can ever make the proposition, “I have preserved that which has intrinsic value,” true.

There is no divine value. There is no intrinsic value. The only value that exists is in the form of relationships between states of affairs and desires.

Desire utilitarianism proposes that we can evaluate desires according to their disposition to fulfill or thwart other desires. The same can be said about the desire to preserve and protect that which is natural. If this desire tends to fulfill other desires, it is good. If it tends to thwart other desires, it is bad. Ultimately, the good and the bad in nature are found in those qualities that fulfill or thwart good desires (desires that tend to fulfill other desires).
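
To make the structure of that evaluative claim concrete, here is a toy sketch. The desires, relations, and weights below are invented purely for illustration and are not part of Fyfe’s own account; the only rule being pictured is that a desire scores positively to the extent it tends to fulfill other desires and negatively to the extent it tends to thwart them.

    # Toy sketch only: invented desires and weights, illustrating the rule that a
    # desire is good insofar as it tends to fulfill other desires and bad insofar
    # as it tends to thwart them.
    from typing import Dict

    # tendency[d][other] > 0 means desire d tends to fulfill `other`;
    # a negative value means it tends to thwart it.
    tendency: Dict[str, Dict[str, float]] = {
        "keep nature untouched":      {"shelter": -0.4, "health": -0.2, "recreation": 0.5},
        "fortify food with vitamins": {"health": 0.8, "longevity": 0.6},
        "clear-cut the forest":       {"shelter": 0.3, "recreation": -0.6, "health": -0.2},
    }

    def evaluate(desire: str) -> float:
        """Net tendency of a desire to fulfill (+) or thwart (-) other desires."""
        return sum(tendency[desire].values())

    for d in tendency:
        score = evaluate(d)
        verdict = "good" if score > 0 else "bad" if score < 0 else "neutral"
        print(f"{d}: {score:+.1f} ({verdict})")

The numbers carry no authority; the sketch only displays what “tends to fulfill or thwart other desires” asks us to weigh, and the comments below press on whether such weights could ever actually be measured.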

When it comes to altering nature, there are a great many things we can do that will tend to fulfill desires. We build shelter to keep out the cold, inoculate ourselves against disease, and grow enough food to eat and fortify it with vitamins and mineral supplements. We protect ourselves from natural disasters such as fires, floods, earthquakes, and tsunamis. We do all of these things because what is natural is often not as good at fulfilling desires as what we have molded in some way to fulfill them better.

These interests – in better health, longer life, protection from injury and other forms of harm, freedom from pain, and the like – are interests that we have reason to promote in others, and that they have reason to promote in us.

This is not to say that nature cannot be destroyed. Nature is destroyed when it is changed in ways that a person with good desires (desires that tend to fulfill other desires) would not like. If one cannot make the case that a person with good desires would dislike a change, then one has not made the case that anything of value has been lost. One does not make this determination by looking at what the agent himself does or does not like.

- Alonzo Fyfe


{ 135 comments }

Marco December 29, 2009 at 8:47 am

Keep in mind that questions of applied ethics are complicated and I do not necessarily agree with Fyfe’s moral calculations

Isn’t that the major objection to desirism?
Alonzo himself says (or so I remember) that the only challenge for desirism he sees is the question of whether we can determine what desires actually are.
So for instance, are a lot of desires not cultural and group-dependent, and are they not changing in response to current events all the time?
It seems difficult to do calculations on moving objects; it seems even more difficult if those objects appear and disappear rapidly in response to our environment.

So, take this example: There is no intrinsic value in nature itself.
First, I think this has been a consensus all along. Nobody values trees for being trees. We value them because they take up CO2 and emit oxygen for us to breathe. Greenpeace’s quest against whale hunting is/was successful because we think whales are valuers themselves.

Second, moral dilemmas are only dilemmas when they involve conflicting interests. Desirism doesn’t really give a protocol for handling these situations, it seems, other than changing people’s desires so that they desire things that tend to fulfill other people’s desires.

There is a question of how realistic desirism is in practical situations that I wonder about. “Very complicated” you say, maybe too complicated to make it work? I need to see more practical examples to make up my mind I guess.


antiplastic December 29, 2009 at 9:36 am

“Keep in mind that questions of applied ethics are complicated and I do not necessarily agree with Fyfe’s moral calculations.”

Could someone possibly point me to one of these “calculations”?

I’ve followed this blog off and on for quite some time, and I have never seen even an attempt at a quantifiable prediction on a first-order moral issue. But admittedly I don’t read every post, so it would be great if someone could point me in that direction.


Hylomorphic December 29, 2009 at 10:49 am

Why does Fyfe not calculate the desires of at least the more complex mammals? It seems clear to me bears and coyotes, and at least dolphins and gorillas, have desires. My family’s dogs seem to, and their desires (in Fyfe’s terminology) function as reasons for action. I cannot think why they should be left out of the calculus.

Of course, as I’m sure I’ve said earlier, I don’t believe that desire utilitarianism is an adequate moral theory. To borrow the language of Charles Taylor, it does not provide us with an adequate, livable account of our own immediate sense that strong judgments do refer to something beyond our own (at least, our individual) desires.


Penneyworth December 29, 2009 at 2:00 pm

I don’t think any calculations have been attempted. To do so would require access to knowledge of everyone’s desires and a coherent theory of how their strengths are measured.

Here is an interesting test for a moral theory that I pose to Luke, Alonzo, and actually any moral realist: Name one moral fact that your theory has provided that goes directly against your own personal opinion. In other words, in what instance has it ever been of any use to accept a moral theory rather than accepting that moral speak is merely the expression of one’s opinion?

For example, a divine command theorist might say he personally feels that it is wrong to execute homosexuals, but thanks to his holy-writ, he can be corrected on this issue.

When has desire utilitarianism helped you take a moral stance that you are personally averse to?


Mark December 29, 2009 at 2:34 pm

What is the point of this article? How does it benefit the advancement of science, social order, or humankind in general whatsoever?

It doesn’t. It just takes up space, like the majority of the mindless babble going 24-7 now on our TVs.

So, instead of saying, “I do not like it when people do such things,” the theist comes up with the claim, “God does not like it when people do such things and I am the voice of God.”

The average Christian does not go around claiming to be the voice of God (as Fyfe fantasizes here). He or she is only evangelizing the Word of God. Microsoft has evangelists too. They go around and evangelize products such as Windows 7. Are they claiming to be the voice of Bill Gates? No. They are promoting the features and benefits of a Microsoft product.

However, this agent has never spent a single moment of her life serving God. We do not need to worry about thwarting this agent’s desire to serve God, because such a desire can never be fulfilled. Nothing she or any of us can do will ever make the statement, “She has served God,” true. She may well believe that certain states have value because they count as serving God. However, her beliefs are mistaken.

If God said ‘my law is X and if you follow law X and defend it you are serving me’ and the ‘agent’ (otherwise known as ‘human being ENDOWED with moral conscience and drawn to truth’–no matter how inconvenient that moniker is to your continuum) does just that, the ‘agent’ has served God. All the circular rhetoric in the world will not erase that fact.

How about discussing how ‘desirism’ (aka stimulus impulses that follow a mystical law no one can explain) can lead to a better society? Let’s hear the solution to your Godless proposition.


Eneasz December 29, 2009 at 2:59 pm

Why does Fyfe not calculate the desires of at least the more complex mammals? It seems clear to me bears and coyotes, and at least dolphins and gorillas, have desires.

These other animals have almost no ability to shape our desires. As such they cannot really partake in morality. We may have reasons to promote a desire to protect helpless beings (including animals), or to have an aversion to causing pain (which can include animals), but – until a community of animals can substantially affect our desires – their actual desires simply won’t be calculated as a matter of fact. See also here for Alonzo’s words.

If God said

If God said anything at all then it’d be evidence he exists, and thus it would be possible to serve him. Until he does say something all we have to go on are the words of other men, and God’s actual wishes (and existence) can’t be reliably known. And it is, of course, impossible to serve a non-existent thing.


Haukur December 29, 2009 at 4:18 pm

Hylomorphic: Why does Fyfe not calculate the desires of at least the more complex mammals?

It’s interesting that Fyfe is posting on the blog of the guy who won’t subscribe to humanism because it has speciesism built into the name.


Kip December 29, 2009 at 9:13 pm

Name one moral fact that your theory [Desirism] has provided that goes directly against your own personal opinion.

According to Desirism, there is only one correct answer to whether something is moral or not. Yet, obviously, people can have different opinions on it (they can be right or wrong). This fact alone should suffice to answer the heart of your question. But, I’ll continue.

So, you want me to tell you something that I think is right, but that I know is wrong (or vice versa)? Hmm… for some reason, I can’t come up with anything. It’s not easy to be in such a state of cognitive dissonance as would be necessary to come up with such an example.

I can tell you some things that I’m not sure of, but I imagine that won’t satisfy you. I could tell you some things that I’ve changed my mind on, but since Luke has done this a lot, obviously that won’t get you to stop asking the question either.

Or maybe you just want something like this:

If someone broke into my house, and killed my dogs, but was not going to kill me, I’d probably kill them anyway, and justify it later by saying that I felt my life was threatened. My feelings for my dogs would probably overtake my rational self at that moment, even though I know (as I type this) that I should not have a desire to kill someone that killed my dogs, unless it is self-defense.

Meh, you probably wanted something impossible, though, huh?


lukeprog December 29, 2009 at 10:04 pm

Penneyworth,

Are you asking for an example of when I believed or felt that x had a certain moral value, but under desirism x has a different moral value?


lukeprog December 29, 2009 at 10:05 pm

I’m not sure what you guys are talking about. Desirism does account for the desires of non-human creatures.


Hylomorphic December 30, 2009 at 12:43 am

Eneasz:

In the first place, I cannot see why animals’ inability to shape our desires makes it impossible for them to partake in morality. The poverty- and famine-stricken people of some parts of Africa and Asia have an inability to shape the desires of America or China. That doesn’t mean they cannot partake in morality.

In the second place, I see at least an apparent contradiction between the post you linked me to and the post we’re commenting on. Why, in talking about environmentalism, does he not make at least some reference to our reasons to protect animals? I suppose it might be there implicitly, but given its relevance to the topic, it certainly appears that Fyfe is not considering it at all.

Luke:

Yes, I realize that desire utilitarianism, applied consistently, accounts for the desires of animals. I simply think Fyfe is being inconsistent in this post by ignoring them.


CrazyAgnostic December 30, 2009 at 1:00 am

You are giving too much value to religion, sir. What on God’s green Earth does Environmentalism have to do with the fable of Christ???


Robert Gressis December 30, 2009 at 1:05 am

What is Fyfe’s account of the phenomenology of the reason-giving force of desires? I.e., does he say, “since I have a desire to eat meat, I therefore have a reason to eat meat”? Or does he say, “having a desire to eat meat amounts to seeing the eating of meat as worth doing”?

I don’t know that I ever see the bare fact that I have a desire to do something as giving me a reason to do anything. So, there are many things I want to do, but I don’t want to do them because they would satisfy desires; rather, I want to do them because they seem to me to be good to do.

It’s incredibly rare for the fact that I desire to do X to appear to me to give me a reason to do X. I suppose if I had a persistent desire for something that didn’t appear to me to be good, but that wouldn’t go away, then I would satisfy the desire, just in order to shut the desire up.

Anyway, this has all been covered by Warren Quinn, T. M. Scanlon, and Stephen Darwall. I just wondered what Fyfe thought.


Penneyworth December 30, 2009 at 6:59 am

Kip:

“you probably wanted something impossible, though, huh”

No, I gave a perfectly clear example of what an answer would look like (the divine command theorist).

“…even though I know (as I type this) that I should not have a desire to kill someone that…”

Was it a moral theory that tells you not to kill the guy once the moment of passion has passed, or your rational mind? If the moral theory did it, then are you saying that your personal feelings and opinions still urge you to go hunt down the dog-killer, but your chosen moral theory keeps you from doing so?


Penneyworth December 30, 2009 at 7:15 am

Luke:

Close enough, yes. Although a present tense version would make a stronger case for the usefulness of desirism.

In other words, “an example of a current belief that x has a certain moral value, but under desirism x has a different moral value” rather than “an example of when I believed or felt that x had a certain moral value, but under desirism x has a different moral value.”


lukeprog December 30, 2009 at 7:28 am

Penneyworth,

Well, since I currently subscribe to desirism, I’ve modified my moral beliefs to reflect my best guess about the deliverances of desirism. So there wouldn’t be a discrepancy between my moral beliefs and the applied ethics of desirism: whenever I come to think that something is right or wrong on desirism, my belief about the morality of that thing changes to align with desirism’s analysis of that thing.


Haukur December 30, 2009 at 7:33 am

Hylomorphic: I simply think Fyfe is being inconsistent in this post by ignoring them.

Well, his Higher Beings post has this weird little tidbit:

“The antelope does not run from the lion because he is afraid of being eaten and killed. The antelope runs from the lion because he is afraid of lions.”

Not only a sophist but an antelope-mind-reader as well :)


Kip December 30, 2009 at 7:58 am

Hylomorphic: In the first place, I cannot see why animals’ inability to shape our desires makes it impossible for them to partake in morality.

The practice of morality is concerned with shaping desires in others to form a sort of “harmonicity of desires”. If someone has no way of doing that, then they are severely limited in practicing this human social system we call “morality”.


Penneyworth December 30, 2009 at 7:58 am

Luke:

Can you give an example?


Kip December 30, 2009 at 8:13 am

Penneyworth: Was it a moral theory that tells you not to kill the guy once the moment of passion is passed, or your rational mind? If the moral theory did it, then are you saying that your personal feelings and opinions still urge you to go hunt down the dog-killer, but your chosen moral theory keeps you from doing so?

1) I didn’t “hunt down the dog-killer”. He came into my house, killed my dogs, and so I killed him (while he was in my house). It’s a clear case of me easily being able to argue “self defense”, since only I will know that the dog killer was not also trying to kill me.

2) My rational mind tells me that Desirism is correct. You are now asking me whether it is Desirism that is correct, or my rational mind that is correct? Seriously? What is the point of your line of questioning?


Kip December 30, 2009 at 8:14 am

Penneyworth: Luke:Can you give an example, again?  

FYP.


Penneyworth December 30, 2009 at 8:40 am

Kip, if I ask Sarah Palin to give an example of a newspaper she reads, and she says “All of ‘em,” should I be satisfied with the example given? sheesh

The point of my line of questioning is: when is a moral theory useful if it happens to only make moral assertions that coincide perfectly with one’s personal feelings? What does one lose if moral speak is just the expressing of one’s opinion?

Maybe this will make it clearer: if all the data was in, and desirism “calculated” that all humans over 60 should be euthanized, and you strongly disagreed with this result, would you be willing to claim that it is a moral truth, or would you be forced to abandon desirism?


Kip December 30, 2009 at 8:46 am

Penneyworth: Kip, if I ask Sarah Palin to give an example of a newspaper she reads, and she says “All of ‘em,” should I be satisfied with the example given? sheesh

Luke has given you specific examples of where he has changed his mind on moral issues. You keep asking the question, though. So, that’s annoying.


Kip December 30, 2009 at 8:49 am

Penneyworth: The point of my line of questioning is: when is a moral theory useful if it happens to only make moral assertions that coincide perfectly with one’s personal feelings? What does one lose if moral speak is just the expressing of one’s opinion?

Clearly, it doesn’t. Abstract & specific examples have been given.


Kip December 30, 2009 at 8:52 am

Penneyworth: Maybe this will make it clearer: if all the data was in, and desirism “calculated” that all humans over 60 should be euthanized, and you strongly disagreed with this result, would you be willing to claim that it is a moral truth, or would you be forced to abandon desirism?

I would claim that it was a moral truth.


Penneyworth December 30, 2009 at 9:22 am

Kip:
I would claim that it was a moral truth.  

For your consistency I take my hat off to you.
For your failure to see why this puts you in the same boat as the divine command theorist, I put it back on again.


Kip December 30, 2009 at 9:49 am

Penneyworth:
For your consistency I take my hat off to you.
For your failure to see why this puts you in the same boat as the divine command theorist, I put it back on again.  

On the contrary, I look at the evidence and the facts, and form my beliefs based on those. You seem to have your opinions set in stone, and refuse to look at new evidence.


lukeprog December 30, 2009 at 10:40 am

Penneyworth,

An example of how my moral beliefs about certain issues in applied ethics had to change when I (tentatively) accepted desirism as a probable account of morality?


Eneasz December 30, 2009 at 10:43 am

if all the data was in, and desirism “calculated” that all humans over 60 should be euthanized, and you strongly disagreed with this result, would you be willing to claim that it is a moral truth, or would you be forced to abandon desirism?

To me this is akin to asking “If all the data was in, and science ‘proved’ that you are living in The Matrix, would you be willing to claim that is the truth, or would you be forced to abandon science?”

It’s a pointless question. It’s asking someone to pretend that something which is false is actually true, and then asking them if they would believe that true thing. If it was true in the fantasy world then obviously it should be believed as true in that world. The fact that it’s false in the real world doesn’t apply and so the question has no bearing on anything.


Robert Gressis December 30, 2009 at 11:26 am

Given that desirism is desire utilitarianism, and given that Luke has claimed that his beliefs align with the deliverances of desirism, I think we can imagine a number of cases where Luke has changed his mind from what he believed before. To wit:

If there were a utility monster the strength of whose desires for eating us outweighed the strength of everyone else’s desires (combined) for not being eaten, then Luke would say we should feed the utility monster.

If Luke was the sheriff of a town, and someone was innocent of a crime, and Luke could exonerate him to the court, but the public wouldn’t accept the exoneration, and would riot and cause a number of innocent people to die, then Luke would keep the evidence to himself and let the person be executed.

If Luke had a crystal ball that showed that I was going to lead a life where significantly more of my desires were going to go unsatisfied than satisfied, and I saw this crystal ball, and I nonetheless wanted to live despite the fact that on desirism I should die, Luke would be obligated to kill me, and he would therefore want to.

At any rate, I imagine Luke’s answers to the above would be as I gave them, because I ran into two desirists in grad school who quite happily said yes to these things.

So, there you have three examples where I imagine Luke changed his mind because of his acceptance of desirism.


Penneyworth December 30, 2009 at 11:42 am

Luke,

Yes, that’s what I’m asking.


Penneyworth December 30, 2009 at 12:01 pm

Eneasz:
To me this is akin to asking “If all the data was in, and science ‘proved’ that you are living in The Matrix, would you be willing to claim that is the truth, or would you be forced to abandon science?” It’s a pointless question…

Your point is well taken. Now hear me out: If all the data was in and the conclusion is that desirism does in fact calculate something like my euthanasia example, my argument would not be to ignore the results. My argument would be that desirism is not morality. I would have every right to my opinion that euthanasia is immoral despite the calculations of Mr. Fyfe’s theory. This in no way says that desirism is not internally consistent. For all I know, it is internally consistent, and it can have a cookie for that.

Consider this: if all the data is in, and it turns out that there is a god, and he has in fact dictated that it is a moral obligation to stone homosexuals, I still have every right to say “fuck you god! No matter what you say, I think it’s immoral to kill homosexuals. They are helping overpopulation. Don’t knock anal sex until you’ve tried it..” etc etc.

So you see, a moral theory gains nothing by being “true,” and denying its validity is nothing like denying evidence as in your matrix/abandoning-science analogy.


Eneasz December 30, 2009 at 12:02 pm

Robert – I like your sheriff example; that is an actual moral dilemma (where a person is in a situation where all possible actions will thwart a strong desire that a good agent would have).

The other two require specific-omniscience so I find them unrealistically inapplicable.

who quite happily said yes

I think it would also be helpful if you distinguished between “happily because they found the answer to a hypothetical that was computationally tricky” and “happily would carry out the actions without intense moral agony”.


Penneyworth December 30, 2009 at 12:11 pm

Robert Gressis,

I appreciate your parodies, but it’s only going to make Luke and Alonzo say “ur talking about a different type of utilitarianism lol.”

I’m still waiting for Luke to speak for himself on this.

Would like to hear some examples from Alonzo as well.


Robert Gressis December 30, 2009 at 12:21 pm

Eneasz,

Arguably, the utility monster case is a real world example. Just replace “utility monster” with “humans” and “humans” with “animals.” It certainly appears to be most people’s view that eating animal flesh is permissible; where the desirist goes wrong is that he finds it obligatory.

As for the “happily said yes”, my desirist friends didn’t find the cases computationally tricky. I think they thought they wouldn’t have any moral agony over killing me or feeding the utility monster, though I think they’re wrong about that. I think what made them happy (and here I’m psycho-analyzing without a degree) is the counterintuitiveness of their conclusions, which made them feel smarter than the run-of-the-mill person, as well as more courageous than the run-of-the-mill philosopher. To me, it made them appear as if they were somewhere on the autistic spectrum, suffering from a mania to find a theory that would answer, not only all pressing moral questions, but all possible questions about any practical situation.

Of course, since I’m a Kantian, I may arguably suffer from the same mania.

Anyway, if you want more realistic examples, there’s Bernard Williams’s example of Jim and the Indians. (Jim is a reporter in a South American country with a corrupt government; he stumbles upon a clearing of twenty Indians (it was 1973 when Williams presented this example) who are being held at gunpoint by two representatives of the corrupt government. Upon seeing you, the head honcho says, “Yankee, I have a proposition for you. Either you kill one of these Indians, or my man here kills all twenty. The choice is yours.”) From the desirist’s point of view, the choice is easy: kill one of the Indians. Williams thinks this is a failing of the desirist point of view; while perhaps killing the Indian is the right thing to do, it should not come off as an easy choice–it should be one over which a well-adjusted person agonizes.

If you don’t like this example, though, because the answer is too obvious, we can tweak it. The head honcho can ask you to kill two Indians in order to save the remaining eighteen; or three to save the remaining seventeen; … or nineteen to save the remaining one. At some point it should be a hard decision, but even at the point where you’re asked to kill nineteen of the twenty, the desirist thinks the choice is an easy one.

Then you have the demandingness objection to desirism, for on desirism, we have an obligation to give all we have to the world’s poor until we’re just above the poverty line ourselves, which some (i.e., all) people think is unrealistically demanding.


Kip December 30, 2009 at 12:27 pm

Penneyworth: Robert Gressis,I appreciate your parodies, but it’s only going to make Luke and Alonzo say “ur talking about a different type of utilitarianism lol.”

Indeed. In fact, that’s the main argument for dropping the “utilitarian” part of the name. Robert Gressis needs to read quite a bit more about Desirism if he wishes to try to argue for or against it.

On the other hand, you do the same thing, here:

Penneyworth: Consider this: if all the data is in, and it turns out that there is a god, and he has in fact dictated that it is a moral obligation to stone homosexuals, I still have every right to say “fuck you god!

Moral obligations are not dictated by a person, or a theory. I answered your previous question tersely (the one you tipped your hat to me on) in order to not be ambiguous. In fact, a full answer would require that you have a good understanding of exactly what it means when you talk about Desirism doing “calculations” to determine if something is moral. You either don’t have that understanding, or are being purposefully obtuse.


Kip December 30, 2009 at 12:30 pm

Robert Gressis: where the desirist goes wrong is that he finds it obligatory

This is wrong. Please read this: http://commonsenseatheism.com/?p=2982


Robert Gressis December 30, 2009 at 12:31 pm

Penneyworth: Robert Gressis, I appreciate your parodies, but it’s only going to make Luke and Alonzo say “ur talking about a different type of utilitarianism lol.” I’m still waiting for Luke to speak for himself on this. Would like to hear some examples from Alonzo as well.

Penneyworth,

I doubt that Alonzo would respond that way, because if he is really a desire-utilitarian with a Ph.D. in philosophy (I know he has the Ph.D., I just don’t know he’s a desire-utilitarian as opposed to some new view), he’s heard all these “parodies” (counterexamples, I would call them) before (I don’t know whether Luke has, because I don’t know how much of the normative ethics literature he’s read). They’re standard fare in the literature on utilitarianism.

Generally, desire-utilitarians give one of two responses: (1) Yes, I would bite the bullet in all these cases, but that’s only because they’re so rare that our intuitions don’t really exist for them yet. If they happened, we would have desirist intuitions.

(2) No, I deny that desirism commits me to agreeing to any of your proposals, because you haven’t taken into account desires X, Y, and Z.


Eneasz December 30, 2009 at 12:38 pm

Penneyworth:
If all the data was in and the conclusion is that desirism does in fact calculate something like my euthanasia example, my argument would not be to ignore the results. My argument would be that desirism is not morality. I would have every right to my opinion that euthanasia is immoral despite the calculations of Mr. Fyfe’s theory.

I’m going to assume that we are accepting that – in this single example only – the data (and the conclusion reached by desirism), are accurate and correct. That being: post-60 euthanasia leads to more good (or less harm) than no euthanasia in our hypothetical world. It would have to be a huge amount of good to outweigh death, but such things do happen sometimes, so let’s grant that it’s true in this world.

This means that any good person, any person interested in doing more good than harm in their lives, would embrace both the euthanasia of others at 60, and his own euthanasia at that age. And he would promote this desire in his children and peers. Only someone who was so selfish and cruel that s/he’d be willing to impose vast suffering on others for a few more years of life would do otherwise. Such a person should be shunned like we’d shun a rapist or murderer.

And if you were to reject post-60 euthanasia as immoral, you would be clinging to some definition of morality that states you should take an action that does more harm than good and leaves the world in worse shape than if you’d died of a heart attack. What sort of “morality” makes the world worse?

Consider this: if all the data is in, and it turns out that there is a god, and he has in fact dictated that it is a moral obligation to stone homosexuals

This would be a case of god being wrong about a moral fact. We’d have an obligation to resist, and to reshape his desires in a way that would reduce suffering, as much as is possible.


lukeprog December 30, 2009 at 12:43 pm

Just so everyone knows, it’s very hard for me to answer the normative and applied ethical questions because I just haven’t studied them very much. Right now I’m still focused on understanding meta-ethics – what Bernard Williams lamented was “doing ethics without talking about morality at all,” or something like that.

However, most such objections have the form of “Desirism seems to indicate X, but I have strong intuitions that X is false, therefore desirism is false.” My reply is, “Whether or not desirism actually indicates X – and many times it does not, usually because someone has confused desirism for act-utilitarianism – your strong moral intuitions provide no warrant for thinking that X is false. You have not evolved a mental faculty capable of more-or-less directly perceiving moral values in the world. Therefore your objection is no threat to the truth of desirism.”


Jeff H December 30, 2009 at 1:11 pm

Robert,

You seem to be confusing desirism with preference utilitarianism. I don’t advocate either one, but they are slightly different. I just tried to write out how the two differ from each other, but my current headache is keeping me from being able to think straight. But no matter – in addition to the link that Kip gave you, check out this page and look especially for the stuff about preference utilitarianism. It’s very unlikely that you’ve met people that are desirists, since it’s a fairly new and unknown theory as of yet. I kind of like it; it’s sort of an elegant mix of preference utilitarianism, virtue ethics, rule utilitarianism, and maybe a dash of the categorical imperative in there. I still think it’s wrong, but it’s at least elegantly wrong :P

In regard to the point someone brought up about animal desires and why these weren’t mentioned in the article: I don’t think Alonzo was being inconsistent by not bringing it up. He was dealing with the idea that nature is intrinsically valuable. He would claim not that we should protect the environment because it has intrinsic value, but rather that we should protect it because doing so fulfills the desires of other humans and animals. It provides a home, resources, etc. for all who live on it. I think animal desires fit pretty easily into that. But that wasn’t the focus of his article, so it makes sense that he wouldn’t necessarily mention it.


Penneyworth December 30, 2009 at 1:14 pm

Luke,

Since you have not yet learned how to apply your ethical theory, you have no other tools for determining right from wrong apart from your own intuition. You are in the same boat as the rest of us.

Some churchgoers think their pastor’s interpretation of holy-writ represents real moral values because he has access to god. It’s plain to see that he does not. No one does.

Isn’t it also plain to see that no one has access to all the information needed to make a desirism “calculation?” When Alonzo says he has produced an “account” (what you call a calculation) of desirism, should we take him more seriously than we do when he uses his antelope mind reading powers? Has he even posited a theory of what it means to measure the strengths of desires?

But really, we are way ahead of ourselves, because these moral values that you think intuition does not successfully perceive: they probably do not exist any more than yahweh exists. In fact, the concept of yahweh is far, far more coherent.


Eneasz December 30, 2009 at 1:23 pm

Robert Gressis: Eneasz, Arguably, the utility monster case is a real world example. Just replace “utility monster” with “humans” and “humans” with “animals.”

Ah. That’s a strange twist. Are you saying that we fulfill more/stronger desires from eating an animal than that animal has thwarted by dying early? Or that this is the argument used by utilitarians who wish to eat meat?

It certainly appears to be most people’s view that eating animal flesh is permissible; where the desirist goes wrong is that he finds it obligatory.

I’m not sure where you got obligatory from.

while perhaps killing the Indian is the right thing to do, it should not come off as an easy choice–it should be one over which a well-adjusted person agonizes.

I believe you’ll find desirism agrees with you on that point, as demonstrated in the earlier link about moral agony.

Then you have the demandingness objection to desirism, for on desirism, we have an obligation to give all we have to the world’s poor until we’re just above the poverty line ourselves, which some (i.e., all) people think is unrealistically demanding.

Again, I think you’ll find desirism does not make this claim.


Penneyworth December 30, 2009 at 1:31 pm

Eneasz,

Thank you for the careful, clear response. Much of what you said I agree with. To me, the key question you brought up is: what kind of morality makes the world a worse place? Well, perhaps the kind that says this: I refuse to kill my grandma so that five people that I will never meet will suffer less. Those five people have their problems. I have mine. You can’t coerce me to kill granny. I prefer a free world to a world where everyone is rationed the average amount of fulfilled desires. On top of this, I have every reason to believe that a free world would end up with a higher average level of happiness (see Human Action by Mises).


Kip December 30, 2009 at 1:36 pm

Penneyworth: Isn’t it also plain to see that no one has access to all the information needed to make a desirism “calculation?”

So? No one has access to all the information needed to predict the weather, either. But, do we 1) come up with better tools to make better empirical observations, and scientific models to make better predictions, and faster computers to handle the complex calculations or 2) let Penneyworth use her intuition to tell us if it’s going to rain or not?

When Alonzo says he has produced an “account” (what you call a calculation) of desirism, should we take him more seriously than we do when he uses his antelope mind reading powers?

I disagree with Alonzo on his calculations quite often. It usually comes down to an empirical question, where more information is needed. My best guess is different than his best guess, so we disagree. One of us is wrong, but both of us are willing to change our minds when the evidence comes in.


Kip December 30, 2009 at 1:49 pm

Penneyworth: I prefer a free world to a world where everyone is rationed the average amount of fulfilled desires. On top of this, I have every reason to believe that a free world would end up with a higher average level of happiness

Alonzo has written quite a lot about freedom. It makes sense that, since you know your desires and the desires of those close to you better than I do (and vice versa), more desires will be fulfilled if we afford each other as much freedom as possible to fulfill our own desires.

Alonzo put it something like: “You are the most informed and least corruptible agent that can fulfill your desires.” Putting someone else (or another agency) in charge of fulfilling your desires will certainly result in fewer of your desires being fulfilled.


Eneasz December 30, 2009 at 1:56 pm

Penneyworth: Isn’t it also plain to see that no one has access to all the information needed to make a desirism “calculation?”

Actually I find this is a common misuse of desirism (and perhaps all moral theories). One comes to the conclusion that with perfectly calibrated desire-detectors we can get a precise answer (a “calculus”, if you will) on whether something is good or bad, and how much so. A precise number. Even if such desire-detectors existed it would still be impossible to make such a calculation. We live in an uncertain world with many unknowns.

Desirism is more about finding what desires generally tend to fulfill/thwart other desires, and promoting/reducing them. As such, a love of honesty and liberty and an aversion to killing tend to fulfill, and love of unrestrained power tends to thwart. Evaluating desires as ones that we generally have reasons to make stronger or weaker is the main thrust of desirism. If anyone tries to say they’ve made a precise calculation on exactly how good/bad a specific thing is they are misapplying desirism.

should we take him more seriously than we do when he uses his antelope mind reading powers?

I know this isn’t meant to be taken entirely seriously, but observation has shown that animals without predators do not fear predators when they are introduced into the environment (and the population is quickly decimated). This suggests that they do not have the ability to determine what causes death and thus fear death, and have instead evolved to fear particular stimuli that reliably led to death for their ancestors (such as the smell of a lion).

Has he even posited a theory of what it means to measure the strengths of desires?

Not particularly. It is asserted that we can get an approximation of a person’s desires (and their relative strengths) by looking at the person’s actions. Similar to how courts estimate a defendant’s desires. Direct measurement seems impossible, and I personally feel it’s a weakness of the theory (although an unavoidable one). Any neuroscientist who can help in this avenue would be greatly appreciated! (seriously)

But really, we are way ahead of ourselves, because these moral values that you think intuition does not successfully perceive: they probably do not exist any more than yahweh exists. In fact, the concept of yahweh is far, far more coherent.

In desirism, moral values are the name given to “the relationship between a state of affairs and desires”. Seeing as there obviously is a state of affairs, and there are desires, and there seems to be a relationship between them, how is it coherent to say that they are non-existent?


Eneasz December 30, 2009 at 2:08 pm

Penneyworth: I refuse to kill my grandma so that five people that I will never meet will suffer less. Those five people have their problems. I have mine. You can’t coerce me to kill granny. I prefer a free world to a world where everyone is rationed the average amount of fulfilled desires. On top of this, I have every reason to believe that a free world would end up with a higher average level of happiness (see Human Action by Mises)

Actually, AFAIK, desirism agrees with you on every point. :)


Robert Gressis December 30, 2009 at 2:10 pm

Shoot. Now I’ll actually have to read about desirism instead of using my knowledge of preference-utilitarianism to criticize it.


Penneyworth December 30, 2009 at 2:25 pm

Kip,

A few problems with your weather analogy:
-The weather exists. Objective moral values most likely do not, even if people ever managed to agree on what the term means.
-The science of meteorology has shown real progress in predicting the weather. No moral theory has shown any progress because there is no way to tell if one moral theory’s calculation was “wrong” whereas some other moral theory’s calculation was “correct.”
-Meteorology involves actual calculations whereas espousing a moral account consists of a person expressing his opinion and then projecting this opinion onto a moral theory by saying “trust me, I have all the needed information, I did the correct calculations” or “trust me, I have access to god.”
-We can readily see that meteorology does deal with real weather. We have no idea which theory deals with “the real morality,” if it exists.

You said “both of us are willing to change our minds when the evidence comes in.”

You mean when all the information about all desires and their strengths comes in? Until then I believe you and Alonzo are in the same boat as the rest of us. Moral speak expresses our opinions, and there’s nothing wrong with that. That’s how it’s been for millennia before the advent of desirism.


Penneyworth December 30, 2009 at 2:41 pm

Eneasz: In desirism, moral values are the name given to “the relationship between a state of affairs and desires”. Seeing as there obviously is a state of affairs, and there are desires, and there seems to be a relationship between them, how is it coherent to say that they are non-existent?

I stand corrected. With that definition, objective morals are coherent and real. The only problem that remains is: what about the vast majority of people who do not accept this new definition of the word morality? All theists, and many agnostics like myself, will say morality is a whole different ballgame.


Penneyworth December 30, 2009 at 2:47 pm

Robert Gressis: Shoot. Now I’ll actually have to read about desirism instead of using my knowledge of preference-utilitarianism to criticize it.  

You’re in for a treat. Alonzo’s exploration of morality through his Dungeons & Dragons “campaigns” is priceless. In no other ethics literature will you find a quite serious inspection of The Sifians of Gat and The Meelarians of Laurella!


Eneasz December 30, 2009 at 3:29 pm

Penneyworth:
I stand corrected. With that definition, objective morals are coherent and real. The only problem that remains is: what about the vast majority of people who do not accept this new definition of the word morality? All theists, and many agnostics like myself, will say morality is whole different ballgame.

Ah. Well, yes. I guess that’s the problem with using five-dollar words when nickel-words will suffice. Sorry about that.

When people say “I don’t like this!” they are (if we get fancy) basically saying that the current state of affairs is thwarting their desires. When they say “X is morally bad” they are saying “X will thwart, or tends to thwart, a great many/strong desires”. When they say “Y is morally good” they are saying “Y will fulfill, or tends to fulfill, many/strong desires”.

If you try swapping out the normal terms with the desirism equivalent I think you’ll find it’s a fairly accurate presentation of how people do use the term “morality”.

That’s also why at times people can say “I know that Z is bad, but I like it”. It seems incoherent at first, but it’s a recognition that Z tends to thwart desires generally and should be reduced, even though it’s fulfilling the current desires of the speaker.


lukeprog December 30, 2009 at 5:37 pm

Penneyworth,

No, I deny the use of my ‘moral intuitions’; that’s why I wrote the post ‘Living Without a Moral Code.’


lukeprog December 30, 2009 at 5:40 pm

Robert,

LOL.

I’ll give you a potential out. Just say, “Come back to me about desirism when you have some peer-reviewed literature to point me to.” That doesn’t seem too unfair to me.


Kip December 30, 2009 at 5:48 pm

Penneyworth: I stand corrected. With that definition, objective morals are coherent and real.

The part you were just corrected on is integral to Desirism. You have been lambasting something of which you don’t even understand the basics. Don’t you see a problem with that? Maybe you and Robert Gressis can read up a bit, then come back with some well-informed questions or objections? That would seem to be the intelligent thing to do, anyway.


Kip December 30, 2009 at 5:53 pm

Penneyworth: All theists, and many agnostics like myself, will say morality is a whole different ballgame.

What exactly would that be? Something that is real, perchance?


lukeprog December 30, 2009 at 5:58 pm

Kip,

Thanks for all your comments here. I just don’t have time to respond to all this!


ayer December 30, 2009 at 6:09 pm

Penneyworth: I stand corrected. With that definition, objective morals are coherent and real.

I disagree that morality is “objective” under that definition, because it does not provide that there are actions that are morally wrong independent of the mental states of all persons and in all possible worlds–e.g., under objective morality as traditionally conceived, torturing another person purely for fun and entertainment is objectively and necessarily wrong in all possible worlds, regardless of the desires or mental states of any persons in those worlds. I don’t think desirism agrees, and thus does not provide objective moral values.


Eneasz December 30, 2009 at 6:19 pm

ayer:
I disagree that morality is “objective” under that definition, because it does not provide that there are actions that are morally wrong independent of the mental states of all persons and in all possible worlds

You want a morality that exists independent of thinking creatures? That seems strange to me. There is no morality at all in an asteroid crashing into a moon, or a photon exciting an electron. Morality is inherently something that thinking/feeling beings do.

–e.g., under objective morality as traditionally conceived, torturing another person purely for fun and entertainment is objectively and necessarily wrong in all possible worlds, regardless of the desires or mental states of any persons in those worlds. I don’t think desirism agrees, and thus does not provide objective moral values.

I must protest! Torture, by definition, inflicts suffering upon the tortured. It is grossly desire-thwarting. You may argue that it’s possible for someone to desire to be tortured, but if it’s desired then it isn’t really torture, is it? If a nice massage counts as torture then we’re really stretching the definition of the word past its breaking point.


ayer December 30, 2009 at 6:58 pm

Eneasz: You may argue that it’s possible for someone to desire to be tortured, but if it’s desired then it isn’t really torture, is it?

Yes, if someone were a masochist who requested the torturer to slowly peel his skin off, it would still be wrong for the torturer to do so. If a Jew in Nazi Germany agreed with Nazi ideology that he deserved extermination, it would still be wrong for the Nazi to exterminate him, etc., etc., etc. This is what is meant by “morally wrong independent of the mental states involved.”


Josh December 30, 2009 at 8:37 pm

“You want a morality that exists independent of thinking creatures? That seems strange to me. There is no morality at all in an asteroid crashing into a moon, or a photon exciting an electron. Morality is inherently something that thinking/feeling beings do.”

I completely agree. I used to agree with Ayer that a theory of morality would somehow need to be independent of mental states, but that seems completely incoherent.

As Eneasz stated, morality inherently requires mental states to even make sense at all. In my opinion, that’s the one minor strength that desirism has: it makes reference to things that actually exist (at least in some fashion), namely mental states, in order to define morality. This puts moral thought on a concrete footing.

It seems that Ayer’s argument for the traditional conception of objective morality is that “this is what we’ve meant for a long time!” But I think it’s relatively easy to show that such a conception of morality makes no damned sense (see free will for a similar story). Is there any other reason we should think objective morality (if it exists) should be independent of mental states?


Robert Gressis December 31, 2009 at 1:41 am

Come back to me about desirism when you have some peer-reviewed literature to point me to.

That felt good.


Penneyworth December 31, 2009 at 6:01 am

lukeprog: Penneyworth, No, I deny the use of my ‘moral intuitions’; that’s why I wrote the post ‘Living Without a Moral Code.’

Nonsense. You stated above that you realize you do not yet have access to the calculations from your chosen moral code. Yet you still make moral decisions today. You simply have no alternative to using your own moral intuitions. Superficially projecting these opinions on desirism by saying “this is probably what will thwart the least desires” adds no validity to your decision, if you even do that.

You can write posts about denying your moral intuition as much as you like, but you currently have no alternative.


Penneyworth December 31, 2009 at 6:13 am

Kip:
The part you were just corrected on is integral to Desirism. You have been lambasting something of which you don’t even understand the basics. Don’t you see a problem with that? Maybe you and Robert Gressis can read up a bit, then come back with some well-informed questions or objections? That would seem to be the intelligent thing to do, anyway.

I read Alonzo’s desire utilitarianism book a while ago, and this fact was certainly not crystal clear. In fact, I recall that he makes an argument for moral realism in general. He then says that as strange as it is that moral realism is true, we are forced to accept it and then search for a way to uncover what implications it entails. Nowhere does he simply redefine the word morality (if I’m mistaken, link me to it). If this were the case, Luke would have told me that any of those many many many times I asked over the past year.

I am only now accepting this redefinition for the sake of conversation with Eneasz.


Penneyworth December 31, 2009 at 6:19 am

Kip:
What exactly would that be? Something that is real, perchance?

Any number of things. For people like ayer and Vox Day, morality means “what yahweh wants lol.” For me, it’s just expressions of one’s opinions. Concepts of morality are very diverse.


Penneyworth December 31, 2009 at 6:23 am

ayer:
I disagree that morality is “objective” under that definition…

Yeah, all I should have said is that such a definition is coherent.

Penneyworth December 31, 2009 at 6:41 am

Eneasz:
…If you try swapping out the normal terms with the desirism equivalent I think you’ll find it’s a fairly accurate presentation of how people do use the term “morality”…

I agree that this is true. Utilitarianism does do a better job at modelling typical moral speak than, say, divine command theory. In fact, as long as people know your definition of morality, statements like “x is morally correct” are perfectly clear, and one could disagree by saying “my data suggests that x actually thwarts more desires.” Thank you for making the best case I’ve yet heard for desirism. However, I do not think your version is the same as Alonzo’s version which seeks to model that mystical “real” morality that is lurking out there somewhere.

I still reject utilitarianism wholeheartedly.

Actually, Eneasz, I’d love to hear your defense of desirism with regard to the 1000 sadists. -Please don’t refer me to one of Luke’s “to be added later” faq’s :)

ayer December 31, 2009 at 8:13 am

Eneasz: You want a morality that exists independent of thinking creatures? That seems strange to me. There is no morality at all in an asteroid crashing into a moon, or a photon exciting an electron. Morality is inherently something that thinking/feeling beings do.

That’s somewhat of a distortion of what I said. Of course a possible world must have thinking creatures capable of being moral agents before that objective morality can be acted upon. “Independent of mental states” means that in any possible world where thinking creatures exist, objective moral values apply (e.g., the one providing that it is always and everywhere wrong to torture a person purely for fun and entertainment, to exterminate a racial minority, etc.). Even if the mental state of every creature in that world held that it was morally right to torture, to exterminate a racial minority, etc., it would still be objectively wrong. (Just as if every creature in that possible world for some reason believed 2 + 2 =5, they would still be objectively wrong).

Antiplastic December 31, 2009 at 8:27 am

Kip:

I disagree with Alonzo on his calculations quite often. It usually comes down to an empirical question, where more information is needed.

I may have missed a reply from Luke or AF somewhere in the last few dozen posts, but perhaps you can point me in the direction of even a single “calculation” under this “theory” that has ever been performed to answer a normative question?

It seems to me that utilitarians generally and Fyfists specifically use “calculation” as a prestige-term, not a technical one. They want to lend their moral attitudes an aura of authority by appropriating the language of science and its cultural cachet. So when I hear people say they disagree with another’s “calculations” — as though they were two white-coated scientists in the lab dispassionately reviewing a computer printout — when no one seems to be able to supply even a single quantifiable example, I mentally translate “calculations” into “arguments”, and it seems to capture the substance of the claim without the excess verbiage.

Like when someone is a talented public speaker, and evangelicals say they admire his “gift of The Spirit”.

Well, OK, but I don’t find it helpful to try to elevate the authority of someone’s arguments you find persuasive by saying they really come from something transhuman, like God or Science. I think you’ve misunderstood the force of Penneyworth’s question, which hinges on internalism (another subject I’ve asked about before, gotten contradictory answers from different posters, and eventually was told “this will be addressed in future editions of the FAQ”). If believing the results of a dispassionate calculation is a necessary and sufficient condition for holding a moral belief, then persistent feelings of moral outrage in the face of cognitive acceptance of “calculations” would not be *impossible*; they would be *commonplace*. On the other hand, if (as I suspect, and as I suspect PW suspects) all this talk about “gifts of the spirit” and “talented calculations” is really just hifalutin post-hoc affirmation of things, then it shouldn’t be surprising at all that there is a one-to-one correspondence between what one feels is right and what one insists the holy ghost or Mr. Fyfe declares is right.

But again, maybe there have been lots of actual calculations of real-world situations that I’ve missed. That’s why I’m asking.

Kip December 31, 2009 at 9:17 am

Penneyworth:
Any number of things. For people like ayer and Vox Day, morality means “what yahweh wants lol.” For me, it’s just expressions of one’s opinions. Concepts of morality are very diverse.  

So, for ayer & Vox Day, they aren’t talking about something real.

As for you, my first thought was: why should anyone care what your opinion is? But, maybe we should back up, and have you define “opinion” first.

Kip December 31, 2009 at 9:23 am

Penneyworth: However, I do not think your version is the same as Alonzo’s version which seeks to model that mystical “real” morality that is lurking out there somewhere.

I think everything Eneasz has said has been consistent with Desirism as presented by Alonzo Fyfe. As far as a “mystical morality” is concerned, that doesn’t exist. We can either drop all moral-speak (what you tend to want to do), or, as Alonzo has defended, use the moral language that exists, but change our understanding of the underlying reality. The same way we did for terms like “atoms” and “sunsets”. Moral language is powerful, and very useful. People use it all the time. To get rid of it, and try to replace it with something else, would have less utility (I think) compared to just tweaking some of the definitions to be in line with reality.

Kip December 31, 2009 at 9:43 am

Antiplastic: But again, maybe there have been lots of actual calculations of real-world situations that I’ve missed. That’s why I’m asking.

I do see that “calculation” might give that appearance, so perhaps the word should be changed. However, in everyday parlance, I use the word in phrases such as “calculated risk”, when precise numbers are not even within my grasp. In other words, I’m just estimating… making educated guesses.

In this case, though, the estimates are also vague because as someone else pointed out, we have no units of “desire strength”.

Alonzo has likened this to the time before thermometers were invented. We could tell some things were hotter than others, and even talk about “degrees of hotness” (e.g. “icy cold”, “lukewarm”, “scalding hot”, etc.). We could compare temperatures using our senses, even though we didn’t have a precise measuring device.

Will we be able to have a measuring device and units of measurement for strengths of desires? I think so. As neuroscience progresses, and the brain is mapped, and computers are able to better interface with the brain, we will probably see some sort of “desire strength” measurement. It may be vastly different than what we currently think of, though. It could even be the case that desires have properties that make Desirism false (e.g. perhaps desires are not malleable).

Alonzo has been pretty up-front about the fact that this is the major weakness in the theory. We need to understand a lot more about desires, intentions, beliefs, and the brain in general.

The theory is falsifiable. This is a very good thing.

lukeprog December 31, 2009 at 9:48 am

Calculation isn’t so bad a word. We do Bayesian calculations all the time without having specific units or numbers to plug in to the equation.

Penneyworth December 31, 2009 at 9:50 am

Kip:
So, for ayer & Vox Day, they aren’t talking about something real. As for you, my first thought was: why should anyone care what your opinion is? But, maybe we should back up, and have you define “opinion” first.

You are free to not value my opinion just as I am free to value yours.

As far as defining the word opinion, I would suggest using a dictionary. I would do this for you, but see, between reading this blog, pretending to do my day job, and enduring intense sessions of red-faced vein-popping welt-producing aneurysm-inducing masturbation, I just can’t seem to find the time.

Penneyworth December 31, 2009 at 9:57 am

Antiplastic,

Thank you. I agree completely and I think you expressed the points I wanted to make much much better. The more you post, the more I will sit back and spectate. For those who tire of my antics, engage antiplastic’s arguments!

ayer December 31, 2009 at 10:00 am

Kip: as Alonzo has defended, use the moral language that exists, but change our understanding of the underlying reality. The same way we did for terms like “atoms”, and “sunsets”. Moral language is powerful, and very useful. People use it all the time. To get rid of it, and try to replace it with something else, would have less utility (I think) compared to just tweaking some of the definitions to be in line with reality.

I’m sorry, but that tactic is fundamentally dishonest and Orwellian. It reminds me of “Newspeak” (war is peace, freedom is slavery, ignorance is strength, etc.)

Robert Gressis December 31, 2009 at 10:48 am

“It seems that Ayer’s argument for the traditional conception of objective morality is that “this is what we’ve meant for a long time!” But I think it’s relatively easy to show that such a conception of morality makes no damned sense (see free will for a similar story). Is there any other reason we should think objective morality (if it exists) should be independent of mental states?”

Well, look, any moral theory worth its salt is going to have to cohere with at least some of our intuitions about morality, right? A theory that says we are morally obligated to maximize pain and minimize pleasure would be a non-starter, for the simple reason that it doesn’t cohere with any of our fundamental intuitions about how we’re supposed to act. In other words, “this is what we’ve meant for a long time!” is pretty important for any moral theory to capture.

That said, one conception of morality that only indirectly makes reference to our mental states is a simple version of classical natural law theory. On that theory, morality is all about freely living up to certain standards. These standards are often things we can identify based on a thing’s function or nature, and what function or nature a thing has is independent of what anyone (except, perhaps, God) thinks. Let me give some examples:

(1) In a geometry class, a teacher draws a circle on the board. It’s not a perfect circle, though, because nothing any of us draws is ever a perfect circle–a real perfect circle is an abstract object or exists merely in concept.
(2) To use an example from Edward Feser, found in his The Last Superstition, imagine you discover a squirrel that, instead of enjoying nuts and frolicking in trees, lies spread eagle in the street and eats pork. Such a squirrel is defective–it’s acting against its proper function.
(3) Finally, we get to people: people have natures and functions too, and we act immorally when we purposefully go against those functions. To take an example of my own (one that, from what I remember, is a real-world case), there are some people out there who apparently don’t feel “complete” unless they have one or more of their limbs amputated. I feel there is something grotesque and immoral about this, but on a non-natural-law account, it’s a bit harder to explain what’s wrong with it.

At any rate, our natures are independent of what anyone thinks, and the fact that we can act against our natures is independent of what anyone thinks, and the fact that wrongness consists in intentionally acting against our natures is also true independent of what anyone thinks.

(Similarly, on desire-utilitarianism, the fact that we ought to maximize the satisfaction of as many desires as possible is independent of what anyone thinks.)

Jeff H December 31, 2009 at 10:53 am

Penneyworth: As far as defining the word opinion, I would suggest using a dictionary. I would do this for you, but see, between reading this blog, pretending to do my day job, and enduring intense sessions of red-faced vein-popping welt-producing aneurysm-inducing masturbation, I just can’t seem to find the time.  

*cough, choke* LOL. Well hey, guys, at least he’s honest :P

Eneasz December 31, 2009 at 1:54 pm

ACK! So much to reply to, but much less time today. I’ll attempt to be brief.

ayer

ayer: if someone were a masochist who requested the torturer to slowly peel his skin off, it would still be wrong for the torturer to do so.

In the real world, of course. I can get into reasons if you wish, but it will be a few paragraphs and it seems like you already agree. However, if I understand correctly, your objection is that there should be no possible world where this might not be wrong. To me this just sounds like a lack of imagination. You’ve never read sci-fi?

Penneyworth

Actually, Eneasz, I’d love to hear your defense of desirism with regard to the 1000 sadists.

The core of the defense is simply “Pain is bad.” The rest goes from there. However I’ve noticed that when presented with questions like this, people don’t want a detailed explanation of why pain is bad, they are challenging you to give them an argument that could be presented to the sadists which would convince them not to torture the victim. I reject this challenge as impossible. In the 1000 sadists situation, the victim WILL be tortured, and there is nothing that can stop that. They have their sadistic desires, and the power to act on them, and an argument cannot change desires, it can only change beliefs. Might as well ask me to present an argument one could give to a flat tire to convince it to change places with the spare in the trunk. Won’t happen.

If I could only interact with the sadists via formal arguments then I would lie. I’d say something like “God’ll mess you up HARDCORE if you even touch that person!” and hope that they don’t have access to the truth of the matter. It’s a lie, but it’d be worth it if it could save the victim. Not that I advocate this as a good principle in the real world, but in this hypothetical one – yeah.

antiplastic

It seems to me that utilitarians generally and Fyfists specifically use “calculation” as a prestige-term, not a technical one

I think I already did my best to answer this a bit earlier in the comments. Basically I agree that the term “calculation” is a poor term to use, for the reasons you’ve given. Kip’s reply (temperature analogy) is a more effective way of saying what I was trying to get across. Thanks Kip! :)

Ayer

I’m sorry, but that tactic is fundamentally dishonest and Orwellian. It reminds me of “Newspeak” (war is peace, freedom is slavery, ignorance is strength, etc.)

No, it’s natural language growth. Do you still use the word “planet” meaning “wandering star”? It is neither dishonest nor Orwellian to slightly modify a word’s meaning so that it better reflects its intended meaning (“those five specific shiny things we see in the sky that appear to be moving”, and then later “and other things that are structurally very similar”).

To be dishonest and Orwellian one would have to redefine a word in a way that is in opposition to its intended meaning. (“Clean Air Act”)

Robert Gressis

At any rate, our natures are independent of what anyone thinks, and the fact that we can act against our natures is independent of what anyone thinks, and the fact that wrongness consists in intentionally acting against our natures is also true independent of what anyone thinks.

I’m curious, would you consider it easier or harder to estimate what one’s nature is as opposed to estimating what one’s desires are? And does everyone have a unique nature, or do all humans have the same nature, or is there some breakdown based on gender/age/temperament/ethnic group/etc?

(Similarly, on desire-utilitarianism, the fact that we ought to maximize the satisfaction of as many desires as possible is independent of what anyone thinks.)

I know we’re not actually arguing this, but this isn’t strictly accurate. That would be desire-fulfillment act-utilitarianism. Desirism would say that we have reasons to maximize the satisfaction of as many desires as possible because of what everyone thinks.

ayer December 31, 2009 at 2:07 pm

Eneasz: However, if I understand correctly, your objection is that there should be no possible world where this might not be wrong. To me this just sounds like a lack of imagination. You’ve never read sci-fi?

Under desirism, moral values are malleable in different possible worlds. That is why it does not provide for objective moral values, and why it cannot say that torture for fun is objectively wrong.

Eneasz: It is neither dishonest nor Orwellian to slightly modify a word’s meaning so that it better reflects its intended meaning

The problem is that desirism’s definition of morality is not its intended meaning. It has simply adopted the word to define an entirely different phenomenon.

Robert Gressis December 31, 2009 at 2:12 pm

@ Eneasz,

“I’m curious, would you consider it easier or harder to estimate what one’s nature is as opposed to estimating what one’s desires are? And does everyone have a unique nature, or do all humans have the same nature, or is there some breakdown based on gender/age/temperament/ethnic group/etc?”

It really depends on the case. I know from personal experience that one can have subconscious desires and beliefs, and these can be very hard to excavate. I think when it comes to natures, sometimes they’re fairly easy to determine and sometimes quite hard. Just what the function of a whole person is, is quite the puzzle, but figuring out the function of the heart is easier.

As for human natures, I suspect that everyone has the same nature, but that there are certain male/female differences, as you would expect given our different procreative roles. I’m quite skeptical that there are different natures for, say, blacks and whites, as I’m quite skeptical that “being a member of the black or white race” is a natural kind.

Incidentally, and in case I haven’t been clear about this from usage, I take it that a being’s nature ontologically determines what its functions are, though we may have to move epistemically from the functions of a thing’s parts to figuring out its nature.

“I know we’re not actually arguing this, but this isn’t strictly accurate. That would be desire-fulfillment act-utilitarianism. Desirism would say that we have reasons to maximize the satisfaction of as many desires as possible because of what everyone thinks.”

Oh, I didn’t mean desire-utilitarianism to be the same as desirism. As you may have noticed above, I have been informed that desirism isn’t desire-utilitarianism, or at least preference/desire-satisfaction act-utilitarianism. (Is it preference-satisfaction rule-utilitarianism? I assume not.) I was just using desire-utilitarianism as an example of a theory that may have to invoke at least one moral fact. (Singer’s version doesn’t, though, because he’s a non-cognitivist–I think he’s a norm-expressivist non-cognitivist, to be more precise, so he merely accepts as a norm the proposition that satisfying more desires is better than satisfying fewer.)

lukeprog December 31, 2009 at 2:32 pm

Robert,

The original name for what I call desirism is “desire utilitarianism.” But it should not be confused with any form of preference utilitarianism or act utilitarianism. It is usually confused with “desire-fulfillment act utilitarianism.”

Kip January 1, 2010 at 5:49 pm

ayer:
Under desirism, moral values are malleable in different possible worlds. That is why it does not provide for objective moral values, and why it cannot say that torture for fun is objectively wrong.

I recommend reading this for what I think is a good way to consider the “objective” vs. “subjective” divide: http://instruct.westvalley.edu/lafave/subjective_objective.html

The problem is that desirism’s definition of morality is not its intended meaning. It has simply adopted the word to define an entirely different phenomenon.

What would the intended meaning be? Whose intention?

lukeprog January 1, 2010 at 8:18 pm

Kip,

We wait for the neuroscientists…

Kip January 1, 2010 at 10:47 pm

lukeprog: Calculation isn’t so bad a word. We do Bayesian calculations all the time without having specific units or numbers to plug in to the equation.  

From what I know of Bayesian probabilities, this is not the case — you do need a specific number. Now, the number you put in (the “prior”) can be a “best guess”, an “estimate”, but if it’s way off, then your end result will also be off.
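
To make that concrete, here is a toy sketch in Python (the bayes_update function and every number in it are mine, made up purely for illustration): Bayes’ rule runs happily on rough estimates, but it still needs some number to chew on, and if the guess is far off, so is the answer.

# A toy Bayesian update fed with guessed inputs (nothing here is measured).
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) from rough, estimated inputs."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

print(bayes_update(0.1, 0.8, 0.2))  # a guessed prior of 0.1 gives about 0.31
print(bayes_update(0.5, 0.8, 0.2))  # a very different guess gives 0.80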

“Units” are a requirement for measurement. So, if you plan on doing any sort of precise measurement (which is a requirement for gathering scientific data), then you need some “units”.

But, I agree that “calculation” can be used in a non-scientific, non-mathematical sense, which would not require units nor precise numbers. It just means “carefully thought out”. I do think, though, that the calculations might one day be more mathematical, along the lines of economic or meteorological calculations. We’ll need some “units” and some precise measurements for that, though. :-)

Eneasz January 2, 2010 at 11:46 am

ayer:
Under desirism, moral values are malleable in different possible worlds. That is why it does not provide for objective moral values, and why it cannot say that torture for fun is objectively wrong.

It can say that torture for fun is objectively wrong in the real world. I’m a bit confused now, because I got the impression you supported objective moral values, but now it looks like you don’t believe objective moral values can exist? There is no such thing as a system of morality that can say something is wrong in every conceivable hypothetical world.

The problem is that desirism’s definition of morality is not its intended meaning. It has simply adopted the word to define an entirely different phenomenon.

Isn’t the intended meaning of morality “That which we should/shouldn’t do”?

ayer January 2, 2010 at 12:36 pm

Eneasz: There is no such thing as a system of morality that can say something is wrong in every conceivable hypothetical world.

Why could there not be a moral theory that says “a sentient being torturing another sentient being purely for fun and entertainment is wrong in every possible world”? I don’t see the problem with that; in fact, it would seem that the burden would be on the person arguing that such behavior could ever be morally good.

Eneasz: Isn’t the intended meaning of morality “That which we should/shouldn’t do”?

No, that is too vague. That phraseology could be used in the context of “what should we do to construct a tasty apple pie,” which is not a use of the term should in a “moral” sense. The moral sense would be “what are we morally obligated to do or not do regardless of our or anyone else’s subjective mental state”?

Antiplastic January 2, 2010 at 1:17 pm

It seems we’re pretty much agreed that “calculation” is at best an exceedingly misleading term to use. Sure, in some sense every time our brains help us navigate around the couch we are “doing calculations” at some level, but it’s a bit pompous to walk up a flight of stairs and congratulate oneself on one’s superior math skills.

The attempted analogy with temperature is clever, but I think it will have to fail on the Unit Problem instead of the Degree problem. I am an eliminativist about propositional attitude psychology: I flatly doubt that there is any such thing as a desire which will be fungible across cases. ‘Desire’ in this sense is like ‘event’ or ‘object’. If I buy a 500-page book, do I own one object, or 500 thin ones? If I go to a party, have I attended one event, or several thousand smaller ones? If I want to take my friend out to dinner, is this one desire, or is this wanting actually every conscious and subconscious reason I have for doing this, etc.?

Sure, you can meaningfully say that Bill Gates owns more objects than I do. But can you really settle on a unit of ‘object’ that’s going to apply in all circumstances, in such a way that “maximizing the number of objects” is meaningful without qualification?

Eneasz January 2, 2010 at 1:19 pm

ayer:
Why could there not be a moral theory that says “a sentient being torturing another sentient being purely for fun and entertainment is wrong in every possible world”? I don’t see the problem with that; in fact, it would seem that the burden would be on the person arguing that such behavior could ever be morally good.

The problem with that is that’s an impoverished moral theory. It says nothing about killing in self-defense, spreading lies, helping someone who’s drowning, or anything else. It’s a single line about a single act. You need to introduce some general principles that make morality applicable to a wide range of situations, and then you need to explain why those general principles are correct and not simply the whim of the author.

No, that is too vague. That phraseology could be used in the context of “what should we do to construct a tasty apple pie,” which is not a use of the term should in a “moral” sense. The moral sense would be “what are we morally obligated to do or not do regardless of our or anyone else’s subjective mental state”?

Point taken, that was too simple. However, your alternative seems half-backwards. Something “regardless of anyone’s mental state” would have to exist even without ANY mental states in the universe, and thus would have to be a universal constant like gravity. You are arguing for intrinsic “goodness” and “badness”, which are just as imaginary as gods and smurfs.

I’d say a more correct statement would be “what are we morally obligated to do or not do, taking into account everybody’s mental state.”

Antiplastic -

In my understanding, you are correct. This is the weakness I spoke of earlier. I know of no way (currently) to remedy it. My only reply is that this is the best theory of all the available alternatives, and seems to most closely describe the observations made in real life (as far as I’ve seen). All theories could use improvement, and this is a particularly young theory. Help is appreciated. :)

Richard January 3, 2010 at 7:18 am

Hi. I just read this thread and couldn’t resist posting.

As a long-time critic of desirism, I was interested to see how Luke would respond to Penneyworth’s request for an example of desirism giving him a different moral position to the one he held previously. I was disappointed that, although Luke claimed such cases existed, he didn’t provide us with one. Or did I miss it?

I think Penneyworth was on the mark in his (her?) criticisms of desirism, but eventually let Eneasz off the hook too easily. Eneasz wrote:

When people say “I don’t like this!” they are (if we get fancy) basically saying that the current state of affairs is thwarting their desires. When they say “X is morally bad” they are saying “X will thwart, or tends to thwart, a great many/strong desires”. When they say “Y is morally good” they are saying “Y will fulfill, or tends to fulfill, many/strong desires”.

If you try swapping out the normal terms with the desirism equivalent I think you’ll find it’s a fairly accurate presentation of how people do use the term “morality”.

I think Penneyworth was mistaken in accepting this claim. Many people would not accept that when they say “X is morally bad” they mean “X will thwart, or tends to thwart, a great many/strong desires”. Many theists would probably say they mean “X is contrary to God’s moral laws”. Utilitarians might say they mean “X is not such as to maximise the sum of human happiness”. The latter of these may seem similar to desirism’s moral calculus, but it’s sufficiently different for desirists to insist that their position is distinct from utilitarianism.

The more fundamental problem here is that Eneasz is taking one particular moral calculus (a way of evaluating what is morally bad) and making this his definition of “morally bad”. But it’s not a genuine definition. If it really was the definition of “morally bad”, then anyone who proposes a different moral calculus would not be making a factual error: they would be committing a contradiction in terms. It becomes impossible to argue over the correct moral calculus, because any calculus other than that of desirism is by definition not moral. One cannot even claim that desirism’s moral calculus is the correct one, because that claim becomes a tautology.

This is analogous to defining “the Moon” as “an object with diameter 2159 miles”. Whether that figure is correct or not, this is not a genuine definition, but a claim being treated as a definition. With that definition it becomes impossible to argue that the Moon has some other diameter, because as soon as you substitute a different diameter you are no longer talking about the Moon.

Eneasz’s “definition” allows us to label some actions as “morally bad”. But it gives us no reason to attribute any significance to this label. Why should we care whether rape is “morally bad” in Eneasz’s sense of the word?

Eneasz isn’t alone in committing this fallacy. I’ve seen it from Luke too. This is a type of fallacy I more often associate with religious apologists, who argue that God has certain attributes (being necessary, simple, the first cause, etc) by definition, when in reality these are claims about God, not part of the definition of God.

Richard January 3, 2010 at 7:40 am

P.S. Quoting Alonzo Fyfe above…

Desire utilitarianism proposes that we can evaluate desires according to their disposition to fulfill or thwart other desires. The same can be said about the desire to preserve and protect that which is natural. If this desire tends to fulfill other desires, it is good. If it tends to thwart other desires, it is bad. Ultimately, the good and the bad in nature is found in those qualities that fulfills or thwarts good desires (desires that tend to fulfill other desires).

Here Alonzo uses the language of a claim (“desire utilitarianism proposes”), not a definition. Yet Eneasz gives much the same formula as a definition. It cannot be both.

lukeprog January 3, 2010 at 8:03 am

Richard,

I’ve provided many examples in the past. One obvious one is my moral beliefs about animals. I did not use to think animals had much moral worth. Desirism forces me to accept that they have great moral worth, and that huge parts of how I live my life are almost certainly immoral.

lukeprog January 3, 2010 at 8:05 am

Richard,

Roughly, desirism proposes the above definition of morality. It has a serious advantage over other definitions of morality in that it refers to things that actually exist while still retaining consistency with much use of moral language.

Antiplastic January 3, 2010 at 12:49 pm

lukeprog: Richard, I’ve provided many examples in the past. One obvious one is my moral beliefs about animals. I did not use to think animals had much moral worth. Desirism forces me to accept that they have great moral worth, and that huge parts of how I live my life are almost certainly immoral.

There’s a lot to unpack here. In ascending order of importance:

1) This flatly contradicts Eneasz’s earlier assertion that under Fyfism they have negligible moral worth.
2) This also is in pretty direct tension with your previous claims to be agnostic about virtually every moral issue pending the outcome of the right sort of calculation.
3) Most importantly, this is not an example of the thing PW was asking for. S/he was asking not merely for an example of a changed mind in light of a newly adopted moral outlook. Remember this is an attack on the (apparent) motivational externalism of the theory: what we’re looking for is an example of where your mind was changed but your heart was not.

How does your new moral belief about animals feel? Isn’t it something a little like this: “While I feel awful to think of all the animal suffering I cause and I know in my heart and mind it is wrong, I find it difficult or impractical to make the changes in my life necessary to avert such suffering. But I want to — I want to at least try, and even though I may not succeed I’m committing myself to trying, and I’ll always feel at least slightly guilty to the extent I fall short.” As a vegetarian who has tried and failed to go Vegan, this is sort of what my own internal monologue is like.

But what PW is asking for is something like this: “With my preference-o-meters and advanced scientificolgical calcumulations, I’ve concluded that current agricultural uses of animals are ‘wrong’. However, I hate animals. Hanging cats by their tails and lighting them on fire is fucking hilarious. I have no desire whatsoever to alter my behavior to reduce their suffering, and I regard the entreaties to alter my behavior and attitudes from those wusses in PETA with indifference bordering on contempt.”

If the externalist variant of DU is correct (and remember, I’ve gotten contradictory answers from different supporters about whether it entails externalism, so I may be addressing the wrong person), then the phenomenology of my latter example should be a commonplace in your moral reasoning. I’ll bet dollars to donuts that pretheoretically you (and all humans) identified the moral stances of yourself and others according to whether the beliefs entailed at least some emotive and behavioral force. And so when AF makes a moral argument for a position you pretheoretically agreed with anyway, you don’t notice a disconnect. But if “believing the result of calculations” constitutes a necessary and sufficient condition for holding a moral belief, then every time you change your mind to bring it into line with DU, there is no reason whatsoever to expect your heart to go along with it.

So to the extent that DU entails any changes in your first-order moral beliefs, we would expect almost all of these examples to involve a disconnect between the cognitive and noncognitive aspects. At least some of the things you thought were morally reprehensible but now “know” to be not just permissible but morally obligatory (and vice versa) should exhibit the psychological character of my second example — it would be shocking if they didn’t.

In the absence of this kind of example, it’s at least a fair tentative hypothesis that either a) there is an internal connection between believing the results of these calculations and having the appropriate new attitudes which several people have tried and failed to get the theory’s proponents to spell out, or b) DU is largely an exercise in dressing up one’s (perfectly respectable) pretheoretical preferences in quasi-scientific verbiage, or both.

lukeprog January 3, 2010 at 1:03 pm

Antiplastic,

Desirism counts up all desires, including non-human ones.

You’re right, I try not to proclaim too much in applied ethics. That’s one reason I’m not vegetarian yet: I haven’t had time to look at the applied ethics. But I greatly suspect it will point towards veganism.

As for where my mind was changed but my heart was not? My veganism example stands. My mind is basically changed there, but I still don’t feel bad about eating bacon. I try to make myself feel bad about it, but it’s going to take some work.

Our internal monologues about veganism sound more or less the same.

I don’t see any way for desirism to accommodate motivational internalism. The reason my heart may slowly change to catch up with my mind’s beliefs about morality is because my heart very passionately wants to be MORAL, whatever that happens to be. So yes, you probably will have to turn to someone else with different desires to find someone whose mind conflicts with his heart on moral issues (in the long run).

Does that make more sense?

I have no rebuttal to your points about desirism not being properly spelled out. I don’t think it is. We need to write a great deal of peer-reviewed analytic philosophy on many subjects relevant to desirism before it can be spelled out as clearly as I would like.

I’m working on it, but it might be 20 years. For now I want to give desirism more exposure because I think it has some merit, but I don’t expect people to stay awake at night refuting it when it hasn’t even been defended in a single peer-reviewed work yet!

Eneasz January 3, 2010 at 5:41 pm

Antiplastic:
1) This flatly contradicts Eneasz’s earlier assertion that under Fyfism they have negligible moral worth.

Hullo! Interesting use of the word “Fyfism”… you are denigrating desirism, obviously, but why? Do you feel it is creating a cult-like mentality among its supporters?

In clarification of my position, I must first point out that I suffer from a position similar to the one you described. Human desires are not the only desires in our world. The desires of many animals are probably equivalent to those of a newborn human infant. I strongly suspect that I should not be eating animals at all.

However this belief has little motivational power behind it. My motivation/desire to eat meat is much stronger, because I find it more delicious than anything else, ever. Animals have almost no ability to shape my desires and thus I am unmoved. Humans arguing on the side of animals are making some progress (I wouldn’t eat horse because of the social stigma it carries in America, even tho I don’t personally care), but not nearly enough. And frankly, this is what desirism states will be the case. Sometimes what you know is right is opposed to what you have the desires to do, and the desires win. Thus the importance of giving people good desires.

Of course this could just be me rationalizing away what I believe to continue doing what may be bad. I’m also often comforted by the sight of my friend’s cat (attempting to) hunt prey, and gleefully eating the meat provided to it by its owners. This makes my decision seem not so bad. At least my food isn’t eaten alive as it twitches in the dirt. And yet this is no argument at all; it’s obviously rationalization.

Perhaps with time this will change. I used to be vehemently anti-religion. It was Alonzo’s frequent posts about atheist bigotry against theists that first changed my mind on this, and eventually my desires as well (with help from The Slacktivist). So it’s happened before. /shrug

Bebok January 3, 2010 at 7:58 pm

lukeprog: You’re right, I try not to proclaim too much in applied ethics. That’s one reason I’m not vegetarian yet: I haven’t had time to look at the applied ethics. But I greatly suspect it will point towards veganism.

Here: http://www.uta.edu/philosophy/faculty/burgess-jackson/Engel,%20The%20Immorality%20of%20Eating%20Meat%20(2000).pdf is an essay arguing that to reject eating animal products as immoral you need no sophisticated moral theory, just several commonsense beliefs. Thought it might interest you.

Richard Wein January 4, 2010 at 3:45 am

Luke,

Your reply to my point about Eneasz’s “definition” of a moral term was utterly unresponsive. Why bother to reply if you’re simply going to ignore what I wrote?

On the subject of your example of a change of a mind about a moral value…thanks for replying. I won’t pursue that any further, as I’ve remembered my earlier conclusion that it’s pointless to proceed without genuine and unequivocal definitions of moral terms.

lukeprog January 4, 2010 at 7:11 am

Richard,

Where did I not reply to you sufficiently? I don’t see that you have any other comments on this post.

Penneyworth January 4, 2010 at 8:41 am

Luke, thanks for giving the animal example. Why didn’t you just say that the first two times I asked?

You later reaffirmed desirism’s lack of ability to actually spell out (calculate) any claims, so I am left with no reason to think that desirism was anything more than a sort of talisman onto which you are projecting the source of this change of moral stance about animals.

Please consider this: I know, and I’m certain that you know, people who have changed radically upon “finding Jesus.” Prior to conversion, this person is a violent foul-mouthed hateful drug-dealer, and after the conversion, they are the type of person who is continually on the verge of bursting out into tears from the overwhelming sense of love for god and their fellow man. And the change actually persists.

The number of psychological factors involved in this is mind-boggling, but do we have any real link between believing in ancient stories of god-men and this psychological and moral transformation? In the same way, do you have any real link between desirism and your (much less dramatic) moral transformation? Isn’t it possible that in your strong desire to reject what you call moral intuition, you have clung to something that feels as though it fills the void, and accompanying this sense of epiphany, your sense of what is moral has been changed? Shouldn’t you seriously consider the possibility that your new moral stances are simply based on new knowledge and experience, and statements like “I now believe X is moral since it appears to fulfill more desires than it thwarts lol” are merely affirmations of your current opinion about X?

That mystical real morality lurking out there that you hope desirism best describes – what if it does not exist? Wouldn’t appeal to its authority via desirism be something repulsive to you?

Penneyworth January 4, 2010 at 8:51 am

Richard, I was only letting Eneasz off the hook in that I think the moral claims that he derives from imagining the totality of all desires will not often grossly disagree with the consensus of what is called moral. Don’t forget that I strongly suspect that he will readily accept those consensus opinions and then claim they were derived from desirism.

Penneyworth January 4, 2010 at 8:56 am

Eneasz,

To engage the 1000 sadists problem, you need to show how desirism leads to torture still being wrong within the given setting. The futility of changing the sadists’ minds is irrelevant.

Richard Wein January 4, 2010 at 10:47 am

Luke,

Where did I not reply to you sufficiently? I don’t see you have any other comments on this post.

I made two posts under the name “Richard”. For my third post (and this one) I added my surname “Wein”, so you would realize I’m the same Richard who’s argued with you before. (Hello!) Sorry if that caused any confusion.

With regard to Eneasz’s “definition”, I was actually looking for a response from Eneasz, not you. But, if you too think that that’s a valid definition, then by all means respond on your own behalf.

Richard Wein January 4, 2010 at 11:22 am

Penneyworth,

Richard, I was only letting Eneasz off the hook in that I think the moral claims that he derives from imagining the totality of all desires will not often grossly disagree with the consensus of what is called moral. Don’t forget that I strongly suspect that he will readily accept those consensus opinions and then claim they were derived from desirism.

I was making a point about your response to Eneasz’s definition, not to his moral calculus. I realize now that I misrepresented you when I wrote above that you accepted his claim about how people use moral terms. In fact, you correctly pointed out that Eneasz’s definition was one that the vast majority of people would not accept. So I apologise for that. You did, however, describe his definition as “coherent”, which I think was a mistake. As I argued above, his so-called “definition” is nothing of the sort, but is actually a claim (of a moral calculus) masquerading as a definition.

Anyway, on reflection it was unfair of me to describe this as “letting him off the hook”. You merely continued to reel him in on the same hook, while I prefer to catch him on a different hook. ;)

Eneasz January 4, 2010 at 12:54 pm

Penneyworth -

To engage the 1000 sadists problem, you need to show how desirism leads to torture still being wrong within the given setting. The futility of changing the sadists’ minds is irrelevant.

Well, like I said, it’s basically because Pain Is Bad. I can expound on that if you like, but it doesn’t seem like something worth debating.

Richard-

Eneasz’s “definition” allows us to label some actions as “morally bad”. But it gives us no reason to attribute any significance to this label. Why should we care whether rape is “morally bad” in Eneasz’s sense of the word?

Honestly, I don’t much care about what words are used. If you object to my use of “morally bad” then I’ll switch to any term you’re comfortable with. I can use “stinky” if you prefer.

You shouldn’t really care whether rape is “stinky” in my sense of the word. My definition of any word has no impact on rape. What you should care about is the fact that the potential rape-victim (and those who care about her/him) have many/strong reasons(*) to prevent you from raping. Thus they promise to do unpleasant(**) things to you if you rape. And they encourage all people to form an aversion to rape, a hatred of rapists, so that even if they could get away with rape they would not want to.

I’m not trying to impose a definition and then argue from definition. The relationship between these desires, and states-of-affairs, exists no matter what you choose to call it (or perhaps doesn’t exist, if the observations/reasoning above are flawed). The actual word used is irrelevant to the facts of the matter. “Morally bad” was used because it most closely matches the common usage of the term, but if you’d like you can call it “stinky” or “green” or whatever you like.

* – those reasons being the desires that are thwarted when one is raped. I’m assuming we both already accept that desires are the only reasons-for-action that exist?
** – these unpleasant things being things that will thwart your desires. Taking your money or liberty or life for example, or simply causing you pain.

lukeprog January 4, 2010 at 1:35 pm

Penneyworth,

No time now. You’ll have to wait until I have time to write more about desirism.

Richard Wein January 5, 2010 at 4:37 am

Hello Eneasz. Thanks for replying.

Honestly, I don’t much care about what words are used. If you object to my use of “morally bad” then I’ll switch to any term you’re comfortable with. I can use “stinky” if you prefer.

If you were using the term “morally bad” to mean morally bad (in something like its normal sense) then it makes no sense to switch to another term. What’s the point of using the word “stinky” to mean morally bad?

If you were using the term “morally bad” as an arbitrary token, with no relation to its normal meaning, then you can equally well use any other term. But then desirism’s calculus is not a moral calculus. It’s just a calculus for assigning arbitrary tokens to things.

If you really don’t care which term you use, then you should be willing to forever stop using moral terms in the context of desirism, and replace them with smell-related terms (like “stinky”). But of course you aren’t willing to do that. You want to be able to say that desirism is a moral theory, not a theory of smell, so you will keep coming back to using moral terms.

Luke says that he passionately wants to be moral. Do you think he would be equally happy saying that he passionately wants not to be stinky?

I’m not trying to impose a definition and then argue from definition.

That’s the problem. You’re trying to make an argument about what is moral (or stinky) before you’ve decided what you mean by the word “moral” (or “stinky”).

Let’s say I want to have lots of money. Since I know that the word “rich” means “having lots of money”, I say “I want to be rich”. I could equally well say “I want to have lots of money”, but “I want to be rich” is a convenient, shorter way of saying the same thing.

But when pre-desirist Luke (or Eneasz) says “I want to be moral”, he’s doing something very different. He doesn’t yet know what he means by “moral”. He just has a desire to be able to apply the word “moral” to himself. So he goes looking for a definition of “moral” that will let him do that. And since he can’t find a genuine definition, he bamboozles himself with a deceptive pseudo-definition.

The relationship between these desires, and states-of-affairs, exists no matter what you choose to call it (or perhaps doesn’t exist, if the observations/reasoning above are flawed). The actual word used is irrelevant to the facts of the matter.

Sure, the facts about desires and their relationships (etc) remain the same, and we can describe those facts using any words we like as long as we understand what those words mean. But moral terms play no useful role in describing those facts. There seem to be three elements to your position:

1. A factual account of desires and their relationships (etc).

2. The claim that something which thwarts many desires is morally bad. (This is a simplified version for the sake of brevity.)

3. The definition that “morally bad” means that which thwarts many desires.

Given this definition, the “claim” just says that something which thwarts many desires thwarts many desires. It’s not a genuine claim, because it doesn’t tell us anything. It’s a tautology. And, yes, for the purposes of this tautology you can substitute any term you like in place of “morally bad”, because “morally bad” is just a place-holder, an arbitrary token, that falls out when you substitute the definition into the claim.
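
Spelled out schematically (my own shorthand, introduced purely for illustration), write T(x) for “x tends to thwart many desires” and B(x) for “x is morally bad”. Then:

(3) Definition: B(x) =def T(x)
(2) Claim: T(x) → B(x)
Substituting (3) into (2): T(x) → T(x)

The last line is a logical truth; it tells us nothing about the world.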

“Morally bad” was used because it most closely matches the common usage of the term, but if you’d like you can call it “stinky” or “green” or whatever you like.

If the actual word used is irrelevant (as you said above), why are you attempting to justify your use of the term “morally bad” on the grounds that it most closely matches the common usage of the term?

Eneasz January 5, 2010 at 10:46 am

(sorry about the formatting, quoteblock wasn’t working for some reason)

I consider arguing about definitions to be the biggest waste of time and energy in philosophy. It accomplishes nothing while real work remains to be done. It’s like arguing whether the definition of moon is “an object with diameter 2159 miles” or “the largest object orbiting Earth” when any fool can go out at night, point at it, and say “That thing. Now let’s get back to figuring out how to land on it.”

I’ll briefly engage the last post, but after that I’m done with definition arguments. Feel free to have the final word.

What’s the point of using the word “stinky” to mean morally bad?

To demonstrate that the sounds/symbols used to describe what we’re evaluating don’t particularly matter. The map is not the territory.

If you were using the term “morally bad” as an arbitrary token, with no relation to its normal meaning, then you can equally well use any other term.

This is true of all words.

you should be willing to forever stop using moral terms in the context of desirism [...] But of course you aren’t willing to do that. You want to be able to say that desirism is a moral theory, not a theory of smell, so you will keep coming back to using moral terms.

Because those words are already established and mean basically the same thing. How annoyed would you be if you’re working on a moon-lander and people keep insisting you refer to the moon as ‘the cheese’ because they have a quibble about the exact definition of that big thing in the sky? Ultimately the sounds/symbols used to represent that big thing don’t matter, but there’s no point in deliberately introducing confusion.

When someone asks “Is this the morally right thing to do?” they aren’t asking themselves “What does this act smell like?”. They want to know what a good person would do in this situation, or what will bring about the best consequences for all, or something similar. And these are the questions that moral theories are striving to answer. Thus the use of moral language by moral theories. It isn’t as complicated as you’re making it out to be.

Luke says that he passionately wants to be moral. Do you think he would be equally happy saying that he passionately wants not to be stinky?

If this was another world where “stinky” meant what “moral” does, then yes. He doesn’t want to be a series of sounds/symbols. He wants to be that which they map to. What he wants to be doesn’t change just because he may speak a different language.

But when pre-desirist Luke (or Eneasz) says “I want to be moral”, he’s doing something very different. He doesn’t yet know what he means by “moral”. He just has a desire to be able to apply the word “moral” to himself. So he goes looking for a definition of “moral” that will let him do that. And since he can’t find a genuine definition, he bamboozles himself with a deceptive pseudo-definition.

Have it your way then. I’ve bamboozled myself with a deceptive pseudo-definition. Do whatever you want to with this knowledge. In the meantime, I’ll continue trying to discern how states of affairs impact desires, how desires motivate intentional actions, how certain desires can be molded to reduce conflict and increase harmony, and which those may be. This may not be the study of “morality”, but if it leaves the world a better place I’ll be happy just the same.

Given this definition, the “claim” just says that something which thwarts many desires thwarts many desires.

And the moon is the moon. But the map is not the territory, and I’m more interested in the territory than this obsession with the map.

Richard Wein January 6, 2010 at 1:32 am

Eneasz. You’ve evaded the issues and failed to respond to my arguments. You seem entirely unconcerned that your so-called “moral” calculus is a mere tautology. When I demonstrated that this was so, you merely replied with another tautology (“the moon is the moon”), as though that is somehow a rational response. I was ready to clarify my arguments further, but you say you won’t discuss the subject any more.

You, like Luke, have such a massive mental block on this subject that talking to you is like talking to a brick wall. Luke is clearly a very rational guy in general, and I’m sure that one day the mental block will be penetrated and he’ll see the fallacy of his position. When he does, he’ll probably kick himself for not taking seriously enough the attempts to show him his fallacy, and for spending so much time propagating nonsense. Perhaps you will too.

Richard Wein January 6, 2010 at 2:35 am

P.S. To be fair to Luke, although he has persisted with equivocal pseudo-definitions of moral terms, he has never responded with such nonsense as “we could use any other term instead” and “definitions don’t matter”, as Eneasz and Alonzo Fyfe have.

lukeprog January 6, 2010 at 6:56 am

Richard,

I know I haven’t had time to present the case for desirism in a very compelling way. Hopefully one day I will have time.

Richard Wein January 6, 2010 at 10:46 am

It’s a bit late now, but I’ve thought of a nice response to Eneasz’s claim that he’s willing to replace “morally bad” with any other term I like. OK then, replace “morally bad” with “morally good”. That will reverse all desirism’s claims.

Richard Wein January 6, 2010 at 10:53 am

Luke,

Doesn’t the fact that you’ve failed to make a compelling case after spending thousands of hours on the subject suggest to you that there may be no compelling case? Don’t you think it would be better to come up with a compelling case before propagating it further?

The reason you can’t make a compelling case is because desirism’s moral calculus is tautologous. It’s not even false.

Kip January 6, 2010 at 2:55 pm

Richard Wein: Please define “moral” & “morality”. Thank you.

Eneasz January 6, 2010 at 5:56 pm

Richard Wein: OK then, replace “morally bad” with “morally good”. That will reverse all desirism’s claims.  

http://www.youtube.com/watch?v=FiFKS62wQAA

(I figured it’d be best to end on a humorous and yet still relevant point :) )

Richard Wein January 6, 2010 at 6:11 pm

Hi Kip. I don’t think there’s much problem with the words “morality”, or with “moral” when it means “pertaining to morality”. The dictionary definitions will probably suffice. The problem comes with terms of moral judgement, like “right”, “wrong”, “bad”, “good”, “should”, “obligation”, and “moral” in the judgemental sense.

I don’t need to define these moral terms for purposes of this argument, since I’m only criticising desirism’s use of moral terms and not using them on my own behalf. However, since you ask, I say that moral terms cannot be defined in the sense of giving an alternative way of saying them that isn’t another moral term. We can only describe how people use them. Roughly speaking, people use them as if they were genuine descriptive terms, while in reality they are merely attaching these terms to things of which they approve or disapprove.

Richard Wein January 6, 2010 at 6:16 pm

This “palindromic sketch” is good too ;):

http://www.youtube.com/watch?v=YwWI1aHpzy0

Antiplastic January 6, 2010 at 9:24 pm

Richard Wein: It’s a bit late now, but I’ve thought of a nice response to Eneasz’s claim that he’s willing to replace “morally bad” with any other term I like. OK then, replace “morally bad” with “morally good”. That will reverse all desirism’s claims.  

That’s a rather damning point indeed.

When Ben Franklin was working out his initial vocabulary for electricity, he coined the terms “positive” and “negative”, and hypothesised that electricity is a kind of fluid that flows from positive to negative. Today we know that inasmuch as electricity can be said to flow, it moves in the opposite direction. But nothing turns on this matter of arbitrary notation; the theory works just fine in clearly marking out the phenomena which were initially only referred to ostensively.

Luke, are you really okay with opening your biography with “All my life, I’ve passionately wanted to be immoral and commit all kinds of atrocities”? Or isn’t it rather that we all already have an intuitive grasp of what these words basically point to, even if more detailed analyses might help us refine them?

Richard Wein January 6, 2010 at 11:47 pm

Thanks for the support, Antiplastic, but my last post on this subject was actually missing the point, and I think yours is too for the same reason. I think we all agree that changing the term without changing the referent (the thing the term refers to) doesn’t affect the validity of a claim or argument. So replacing “morally bad” with “morally good” wouldn’t matter if everyone concerned was aware that the new term referred to the same old referent.

A much better response to Eneasz’s “I could use any other term” red herring would have been to simply say: sure, but so what? Changing the term without changing the referent doesn’t change an argument. A tautology is still a tautology whatever term you use.

Eneasz isn’t really interested in changing the term. He actually keeps on using the same term. He is just engaging in a ploy to evade defending his definition. In fact, he never actually explained why “I could use any other term” was supposed to save him from having to defend his definition. But obviously it doesn’t. If it did, he could use any definition he liked, and still claim he was talking about the same referent! Unfortunately his ploy not only served its purpose of distracting his own attention from the real issue, but partly distracted me too.

Richard Wein January 8, 2010 at 12:04 am

I thought I’d have one last attempt at getting through to Luke and Eneasz by use of a simple analogy. Their argument is analogous to this…

E&L: God exists.
R: What do you mean by “God”?
E&L: By “God” I mean “that which exists”.
R: But that’s not what people actually mean by “God” (as shown by these examples). More fundamentally, it makes your original claim a mere tautology.

Eneasz continues…
E: I don’t care which word we use. I’ll use “stinky” instead of “God” if you like.
R: I don’t want you to use another word. I want you to address my argument.
E: Discussing definitions is a waste of time. Goodbye.

The only sensible response to an accusation of tautology (besides agreeing with it) is to show that your claim and your definition are not the same. But Luke and Eneasz refuse to even try to do that. They just won’t address the argument.

Richard Wein January 8, 2010 at 12:34 am

P.S. And here are their actual claim and definition, slightly reworded to make the distinction clearer…

Claim: A desire that tends to thwart more desires than it fulfills is morally bad.
Definition: “morally bad” (as applied to a desire) means “tends to thwart more desires than it fulfills”.

lukeprog January 8, 2010 at 7:29 am

Richard,

At this point I’m really lost as to what your objection is, but you are fully welcome to reject desirism until I can write some more substantive defenses of it.

Richard Wein January 8, 2010 at 8:59 am

Luke, my objection was that your central claim is a mere tautology. Let me try to make it even clearer.

If you define “God” to mean “that which exists”, then the claim “God exists” is meaningless. It’s just saying “that which exists exists”. Are you with me so far?

Well, you’re making the same kind of mistake. You claim that a morally bad desire is one that tends to thwart more desires than it fulfills. But then you also make this your definition of a morally bad desire. To make this clearer, I’ll rewrite it in a way that is more clearly a claim, and then in a way that is more clearly a definition…

Claim: A desire that tends to thwart more desires than it fulfills is morally bad.
Definition: “morally bad” (as applied to a desire) means “tends to thwart more desires than it fulfills”.

The claim is meaningless for the same reason as in the “God exists” case. Your claim just says “a desire that tends to thwart more desires than it fulfills…tends to thwart more desires than it fulfills”.
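
If a concrete sketch helps, here is the same point in code. This is only an illustration of the logical structure, not anything you have written: the predicate names and the sample “desires” are mine.

def thwarts_more_than_it_fulfills(desire):
    # Placeholder for the empirical question about a desire's tendencies.
    return desire["thwarted"] > desire["fulfilled"]

def morally_bad(desire):
    # Your stated definition of "morally bad" (as applied to a desire):
    # nothing is added beyond the predicate above.
    return thwarts_more_than_it_fulfills(desire)

# The "claim" therefore holds for every possible input, because it only
# ever compares the predicate with itself:
sample_desires = [{"thwarted": 3, "fulfilled": 1}, {"thwarted": 0, "fulfilled": 2}]
assert all(morally_bad(d) == thwarts_more_than_it_fulfills(d) for d in sample_desires)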

I suspect the reason you have difficulty seeing this is that you have a strong intuition about what “morally bad” means, and when you read your claim you are mentally interpreting it in the light of your intuitive sense of “morally bad” instead of applying your stated definition.

The only way to refute my argument is to deny you’re making that claim or deny you’re using that definition.

lukeprog January 8, 2010 at 9:46 am

Richard Wein,

Desirism offers some “reforming definitions” for moral terms à la Richard Brandt (1979), just as physicists offered a reforming definition for the word “atom” that did not entail indivisibility, but still was pretty close to what we had always meant by the word “atom.”

Kip January 8, 2010 at 11:41 am

Richard Wein: We can only describe how people use them. Roughly speaking, people use them as if they were genuine descriptive terms, while in reality they are merely attaching these terms to things of which they approve or disapprove. 

Okay. So, something is “morally right” if it is something that is “approved of”? Very well. And what does it mean when someone “approves of” something?

Kip January 8, 2010 at 12:12 pm

Richard Wein: Claim: A desire that tends to thwart more desires than it fulfills is morally bad.
Definition: “morally bad” (as applied to a desire) means “tends to thwart more desires than it fulfills”.
The claim is meaningless for the same reason as in the “God exists” case. Your claim just says “a desire that tends to thwart more desires than it fulfills…tends to thwart more desires than it fulfills”.

You’re confused. You’ve simply stated the definition in two separate ways.

In this case, the “claim” should be that a specific desire would be considered “morally bad” because it fits the definition.

Richard Wein January 9, 2010 at 1:56 am

Luke,

Once again you didn’t address my argument. Would you kindly give me straight answers to the following simple questions about your position.

Do you make the following claim?
Claim: A desire that tends to thwart more desires than it fulfills is morally bad.

Do you use the following definition?
Definition: “morally bad” (as applied to a desire) means “tends to thwart more desires than it fulfills”.

I hope you will agree that a person rationally defending a position should be willing to answer reasonable questions aimed at clarifying it. It’s not as though I’m asking for a lengthy explanation. Two simple instances of “yes”, “no” or “that’s close to it” will do.

lukeprog January 9, 2010 at 2:06 am

Richard,

Claim: A desire that tends to thwart more desires than it fulfills is morally bad.

Yes, though it’s not really a “claim” but a proposed reforming definition.

Definition: “morally bad” (as applied to a desire) means “tends to thwart more desires than it fulfills”.

Yes.

Richard Wein January 9, 2010 at 1:40 pm

Luke, now you’ve clarified that, I’d like to have one more go at persuading you. I know you probably don’t want to spend any more time reading me. But please just read this one post, think about it carefully, and, if you’re not impressed, I won’t bother you again. I won’t respond further unless you ask me to.

lukeprog: Claim: A desire that tends to thwart more desires than it fulfills is morally bad.
Yes, though it’s not really a “claim” but a proposed reforming definition.

This is a little unclear, so before continuing I’m going to check that you understand that I’m making the distinction between
(a) a moral calculus (a claim about how to work out which things are morally bad); and
(b) a definition of the meaning of “morally bad”.

I’ll assume that you understand this distinction and why it’s important not to confuse the two. And then I’ll take your reply as meaning “I intend this statement as a definition, and not as a moral calculus”.

Now, my assertion is that, despite your denial, this actually is your moral calculus, and you’re just giving it as a definition because (a) you don’t have a genuine definition, and (b) it saves you having to justify your moral calculus, as the calculus follows directly (tautologically) from your “definition”.

As evidence that it’s really a moral calculus and not a definition, I give the following:
A. The content of the statement makes sense as a moral calculus, and it’s clear that, as a moral calculus, it’s one you would approve of.
B. The words in which desirists typically couch this statement make it look like a moral calculus. Though most cases are ambiguous and can be interpreted as definitions if you so wish, I can give one example which is unambiguously a moral calculus.
C. As a definition it entirely fails to reflect how moral terms are actually used.

C is the decisive factor, because if it fails to be a reasonable definition, then it’s useless for your purposes regardless of whether it’s actually a moral calculus. On the other hand, if it’s a reasonable definition, then the fact that it looks like a moral calculus too is just an interesting coincidence. But I hope the evidence of A and B will help you to take C more seriously.

I don’t have anything to add in support of A. I think it’s self-evident. In support of B I’ll just give the unambiguous example I mentioned. This is taken from Alonzo’s article in your OP:

Desire utilitarianism proposes that we can evaluate desires according to their disposition to fulfill or thwart other desires. The same can be said about the desire to preserve and protect that which is natural. If this desire tends to fulfill other desires, it is good. If it tends to thwart other desires, it is bad. Ultimately, the good and the bad in nature is found in those qualities that fulfills or thwarts good desires (desires that tend to fulfill other desires).

This passage specifically mentions “evaluat[ing] desires according to their disposition to fulfill or thwart other desires”, and this evaluation is to establish whether they are good or bad. This makes it unambiguously a statement of a moral calculus. It cannot be interpreted as a definition.

Finally to C, whether your alleged definition is a reasonable one. I hope you will agree that it’s vital for you to justify your definition. After all, if no justification were necessary, then anyone could call their moral calculus a definition, and slip that moral calculus in by the back door. Moreover, since you’re not presenting a moral calculus for justification, the justification you give for your definition is the only justification underlying your claim to have justified moral facts. As such, it should be considered central to your case. The insistence by Alonzo and Eneasz that they don’t need to justify their definition is patently absurd.

I see that your new FAQ has a section (3.03) that appears to be aimed at doing this. I say it misses the point and fails. I’ll be happy to address it if you ask me to. But for now I don’t need to, as I’m going to show the converse, i.e. that your definition is unreasonable, because it fails to reflect a central feature of how people use moral terms.

The central feature of moral terminology which your definition fails to reflect is prescriptivity. Last year you referred me to a post of Alonzo’s where he wrote:

At the root of it, if somebody wants to assert that there is some sort of objective moral truth (as I do), they had better be prepared to defend moral claims as being both (at the same time) descriptive and prescriptive. Insofar as it is a moral claim, it must be prescriptive.

Good point. But claims based on your definition are not prescriptive. For example, I think you’ve said you are trying to reduce your desire to eat meat because the desire to eat meat is morally bad. Applying your definition, this is equivalent to claiming: “the desire to eat meat tends to thwart more desires than it fulfills”. But this is a purely descriptive factual claim about desires. There’s no prescriptive element. It doesn’t tell you to change your desires.

It’s not surprising that your definition fails to capture what “morally bad” really means, because you’ve just taken a moral calculus and called it a definition.

Kip January 9, 2010 at 3:55 pm

Richard Wein: But this is a purely descriptive factual claim about desires. There’s no prescriptive element. It doesn’t tell you to change your desires.

Based on all the other baffling things you’ve said, and other things you’ve failed to address, I’m not convinced you even know what you mean by “prescriptive”.

But, I’ll respond anyway.

The prescriptive aspect comes from the fact that what someone should do is that which fulfills the desires in question. That’s what “should” means. And that’s what it means to “prescribe” something.

All your questions have been answered by Alonzo Fyfe, if you care to read what he’s written:

http://alonzofyfe.com/article_du.shtml
http://alonzofyfe.com/desire_utilitarianism.shtml
http://www.lulu.com/content/505269

lukeprog January 9, 2010 at 4:30 pm

Richard Wein,

It’s not that I’m unimpressed by your posts, nor that your objections are illegitimate. The problem is that I don’t have time to explain myself right now. I’m getting there, but very slowly. I have a lot more work to do.

Richard Wein January 10, 2010 at 11:28 am

I know I said I wouldn’t post again, but I did something a bit silly and confusing in my last post, and being a perfectionist I can’t resist putting it right. I ended up trying to prove to Luke that his claim (“a desire that tends to thwart more desires than it fulfills is morally bad”) is a moral calculus. But that’s so obvious he probably wouldn’t deny it, providing he understood what I meant. So please ignore those bits of my last post. I should have just made the following points:

– Luke has made his definition (the meaning of “morally bad”) the same as his moral calculus (a claim of how to evaluate which desires are morally bad), which makes his moral calculus a meaningless tautology.

– Such a definition logically cannot be valid. And, not surprisingly, when we inspect it, we see that it fails to reflect the element of prescriptivity which is central to moral claims.

Kip January 10, 2010 at 6:11 pm

You have no idea what you’re talking about. Read some stuff, then come back.

Richard Wein January 10, 2010 at 11:53 pm

I’m afraid I still may not have made myself 100% clear to those who are confused about tautologies. The following should make my meaning unmistakable.

Luke: I claim that a desire that tends to thwart more desires than it fulfills is morally bad.
Richard: What do you mean by “morally bad”?
Luke: By “morally bad” I mean “tends to thwart more desires than it fulfills”.
Richard: Then you’re just claiming that a desire that tends to thwart more desires than it fulfills…tends to thwart more desires than it fulfills.

This is analogous to…

Luke: I claim that an object less dense than water is buoyant.
Richard: What do you mean by “buoyant”?
Luke: By “buoyant” I mean “less dense than water”.
Richard: Then you’re just claiming that an object less dense than water is less dense than water.

“Less dense than water” is not the meaning of “buoyant”. An object is buoyant because it’s less dense than water. Similarly, “tends to thwart more desires than it fulfills” is not the meaning of “morally bad”. A desire is morally bad because it tends to thwart more desires than it fulfills (according to desirists).
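
To make the contrast concrete, here is a rough sketch (the names and numbers are purely illustrative). Give “buoyant” an independent meaning, namely that the upthrust on a fully submerged object exceeds its weight, and the claim “an object less dense than water is buoyant” becomes something that has to be derived from physics rather than something true by stipulation:

G = 9.81                 # gravitational acceleration, m/s^2
WATER_DENSITY = 1000.0   # density of water, kg/m^3 (roughly)

def less_dense_than_water(density):
    return density < WATER_DENSITY

def buoyant(density, volume=1.0):
    # Independent characterisation: a fully submerged object floats when the
    # upthrust (Archimedes' principle) exceeds its weight.
    weight = density * volume * G
    upthrust = WATER_DENSITY * volume * G
    return upthrust > weight

# The equivalence holds, but as a derived result, not as a stipulation:
assert all(buoyant(d) == less_dense_than_water(d) for d in (500.0, 999.0, 1200.0))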
