Desirism: More Questions Answered (part 3)

by Luke Muehlhauser on August 20, 2009 in Ethics

I’ve answered four more questions on my Desirism F.A.Q. Here they are:

{6.08} What’s wrong with virtue ethics?

Virtue ethics says that right acts are those done out of a virtuous character. But what is virtuous? A virtue is a habit or quality that allows something to fulfill its purpose. The virtues of an axe are sharpness and durability. The virtues of a hunting dog are a sensitive nose, stealth, and obedience.

But notice that the virtues of a hunting dog are assigned by someone outside the dog: its master. So are the virtues for an axe. An axe’s purpose is to chop wood only because we gave it that purpose. If a hurricane lodged a sharp, slender piece of rock into the end of a short stick, that stick-and-stone would have no intrinsic purpose. It would only have a purpose if somebody came along and decided to use it to chop wood, or pound in a tent peg, or prop up a stool, or set a trap to catch a mouse.

If humans have a purpose, it must either be intrinsic to humans or else assigned from the outside, for example by God. The second option fails because God does not exist. The first option fails because intrinsic purpose does not exist: nobody has ever shown me evidence that intrinsic purpose exists. (And if you think evolution provides intrinsic purpose, please see question {6.10}.)

Actually, desirism can be considered a kind of virtue ethics, but with no commitment to intrinsic purpose or externally-assigned purpose, and a foundation for virtue in consequentialism. See here.

{6.09} Isn’t happiness the sole good?

Many people believe that:

  1. All action is aimed at the agent’s happiness.
  2. Therefore, happiness is the sole good.

Setting aside the legitimacy of the inference from (1) to (2) for now, let me explain why (1) is false.

Not all action is aimed at the agent’s happiness.

Let’s do a thought experiment. A mad scientist has captured you and a close friend you care about deeply. He gives you two options, and you must choose one:

  1. Your friend will have his memory erased, but will be set free in good health. He will be given reason to believe that you are safe and happy. Your memory will also be erased, and you’ll be given reason to believe that your friend is being tortured endlessly. You will hear his screams. But you will be fed and cared for and given as much freedom as possible.
  2. Your friend will be tortured endlessly on a remote island. Your memory will be erased, and you will be given reason to believe your friend is healthy and happy. You will be fed and cared for and given as much freedom as possible.

Many people, perhaps most people, will choose Option 1 – even though this does not maximize their own happiness. (There are also other examples, ones we encounter in real life.) Thus, it is false to say that people always aim toward their own happiness.

Happiness is not the only project that people aim at. We have other projects.

But it is true that we always aim at what we desire. We may desire our own happiness, but we sometimes desire other things as well. BDI theory, the most widely-accepted model of intentional action, claims that:

Belief + desire -> intention -> intentional action

Happiness theories claim something else, which is false:

Belief + desire that “I am happy” -> intention -> intentional action

Happiness is not the only thing we desire, and it is not the only thing that motivates us toward intentional action.
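The contrast between the two models can be made concrete with a toy sketch. This is purely illustrative: the `Desire` class and `choose_intention` function are my own stand-ins, not part of any formal BDI implementation. The point it captures is that an agent acts on its strongest desire, and nothing in the BDI chain requires that desire’s object to be the agent’s own happiness.

```python
# Toy sketch of the BDI chain: belief + desire -> intention -> intentional action.
# Illustrative only: "Desire" and "choose_intention" are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Desire:
    proposition: str   # desires are propositional attitudes: attitudes toward propositions
    strength: float

def choose_intention(desires):
    # The agent forms an intention from its strongest desire, whatever its object.
    # Nothing here requires that object to be "I am happy".
    return max(desires, key=lambda d: d.strength).proposition

# The mad-scientist case: the desire about the friend outweighs the desire for happiness.
agent_desires = [
    Desire("I am happy", 0.6),
    Desire("my friend is not being tortured", 0.9),
]
print(choose_intention(agent_desires))  # -> my friend is not being tortured
```

On this sketch, choosing Option 1 is perfectly intelligible: the agent’s strongest desire is about the friend’s welfare, not the agent’s own happiness.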

Also see: one, two, three, four.

{6.10} Isn’t morality just an evolved sentiment? What’s wrong with evolutionary ethics?

It’s probably true that many of our feelings about what is right and wrong (tasteful and distasteful) have evolved.

But how can we leap from “I get a feeling of distaste when I think about rape” to the claim “Rape is morally wrong”? A mere feeling justifies no such conclusion.

Some say that evolution programmed us with moral opinions in favor of cooperation and altruism because these enable the formation of a functional society.

That may be true, but there is no valid inference from “X is an evolved trait” to “X is morally good.” If men had evolved, like male lions, to kill our step-children upon taking a new mate, would this make killing step-children moral? Or, consider this: It seems we have evolved a disposition to seek dominance over others whenever possible. Does this make the pursuit (and use) of dominance over others moral? These are invalid inferences.

Another problem is that not all our moral sentiments are evolved. Many are socially programmed. Slavery went from near-universal acceptance to wide disdain in the space of three generations. That is not the work of gradual evolution.

Evolutionary ethics also suffers from a modified Euthyphro dilemma. The evolutionary ethicist says, “What is good is that which is loved by our genes.”

To which I respond with a paraphrase of Socrates: “Is it good because it is loved by our genes, or is it loved by our genes because it is good?” Fyfe explains the dilemma:

If it is good because it is loved by our genes, then anything that comes to be loved by the genes can become good. If humans, like lions, had a disposition to slaughter their step children, or to behead their mates and eat them, or to attack neighboring tribes and tear their members to bits (all of which occurs in the natural kingdom), then these things would be good. We could not brag that humans evolved a disposition to be moral because morality would then be whatever humans evolved a disposition to do.

If, instead, it is loved by our genes because it is good, then we have not yet answered the question of what goodness is. Unfortunately, an account of goodness is a prerequisite to making and defending this theory of value. How can we demonstrate (or how can we attempt to falsify) the thesis that what is good is loved by our genes if we have no account of what goodness is that is independent of what is loved by our genes?

{3.07} But why should I accept your definition of morality?

Definitions are not important. Language is an invention. Changing our words does not change what is true of the world.

Consider the propositions at the core of desirism:

(1) Desires exist.

(2) Desires are the only reasons for action that exist.

(3) Desires are propositional attitudes.

(4) People seek to realize states of affairs in which the propositions that are the objects of their desires are true.

(5) People act to realize states of affairs in which the propositions that are the objects of their desires are true, given their beliefs – meaning that false or incomplete beliefs may thwart their desires.

(6) Some desires are malleable.

(7) Desires can, to different degrees, tend to fulfill or thwart other desires. That is, they can contribute to making true the propositions that are the objects of other desires, or contribute to preventing the realization of those propositions.

(8) To the degree that a malleable desire tends to fulfill other desires, to that degree people generally have reason to promote or encourage the formation and strength of that desire. To the degree that a malleable desire tends to thwart other desires, to that degree people generally have reason to inhibit or discourage the formation and strength of that desire.

(9) The tools for promoting or inhibiting desires include praise, condemnation, reward, and punishment.
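Propositions (7)–(9) amount to a simple decision procedure, which can be sketched as a toy model. The numeric “strengths” and function names below are my own hypothetical illustrations; desirism itself assigns no such numbers.

```python
# Toy model of propositions (7)-(9): score a malleable desire by its net
# tendency to fulfill (+) or thwart (-) other desires, then decide whether
# people generally have reason to promote or inhibit it.
# The numeric strengths and example values are hypothetical illustrations.

def net_tendency(effects):
    """effects: list of (direction, strength) pairs, where direction is
    +1 if the desire tends to fulfill that other desire, -1 if it thwarts it."""
    return sum(direction * strength for direction, strength in effects)

def recommendation(score):
    if score > 0:
        return "promote (praise, reward)"             # proposition (8), first half
    if score < 0:
        return "inhibit (condemnation, punishment)"   # proposition (8), second half
    return "no general reason either way"

# Stipulated example: an aversion to deception tends to fulfill most other desires.
score = net_tendency([(+1, 0.8), (+1, 0.7), (-1, 0.2)])
print(recommendation(score))  # -> promote (praise, reward)
```

The returned strings correspond to the social tools of proposition (9): praise and reward for desire-fulfilling desires, condemnation and punishment for desire-thwarting ones.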

These are true propositions, and they make no reference to morality, or even “value.” If you decide that the word “morality” refers to intrinsic value or God’s commands or categorical imperatives or imaginary social contracts, then you are welcome to use the word “morality” in that way, but you will be talking about things that do not exist.

Value terms are about reasons for action, and moral value terms are usually spoken of in the context of a universal consideration of reasons for action (not just prudential reasons for action). Desirism is a theory about reasons for action, and especially about a universal consideration of reasons for action. That’s why I think it makes sense to consider it a theory about morality.

Also, desirism accounts for many common features of moral theory, which is another reason to consider it a theory about morality. See question {3.04}.

Also see: one, two, three.



{ 41 comments… read them below or add one }

Chuck August 20, 2009 at 8:53 pm

I would just point out there are other tools for promoting or inhibiting desires. There is a whole school of thought that says one need not (and should not) resort to praise, condemnation, reward, or punishment in raising children. If it’s better for our kids, then why not society at large?


lukeprog August 20, 2009 at 10:09 pm

Chuck,

Desirism’s focus on praise, condemnation, reward, or punishment does indeed need some defense. Also, I’m pretty sure I’ll end up disagreeing with Fyfe on this very subject, but I’m not sure yet.


Kevin August 21, 2009 at 6:33 am

Luke writes:

All action is aimed at the agent’s happiness.
Therefore, happiness is the sole good.

Ignoring the legitimacy of inference from (1) to (2). . . [and]
there is no valid inference from “X is an evolved trait” to “X is morally good.”
 
True, so how can there be a valid inference from “person P desires A” to “A is morally good” or “person P ought to do A”?  Moreover, how can there be a valid inference from the various, though sometimes overlapping, desires of various persons to any universal reasons for action?  How do you get from “persons P, Q, R, etc. desire A” to “A is universally good or right”?
 
Finally, isn’t Desirism prudential, in spite of your rejection of this?  Is it not a means to efficiently allow each individual to maximize her desires, rather than a universal moral code?
 
To sum up the random questions, while I like Desirism, I can’t help thinking you’ve made an invalid move from the desires of individuals to universal reasons for action, or universal duties.  Since we’re trying to avoid non-existent entities like intrinsic value, categorical imperatives, etc., shouldn’t we lump universal reasons for action or universal duties in this category?  In reality, there are only my desires, yours, his, and so on.  Unless I desire to see the desires of others fulfilled, how do their desires hold any moral sway over me?
 


lukeprog August 21, 2009 at 6:42 am

Kevin,

There is no valid inference from “person P desires A” to “A is morally good.”

Also, there are no universal reasons for action. Nor are there categorical imperatives.

“How do their desires hold any moral sway over me?” This is an excellent and common question, and will be added to the FAQ when I have time.


Chuck August 21, 2009 at 7:31 am

It occurs to me. How much of the confusion surrounding desirism/desire utilitarianism (how it differs from preference utilitarianism) might be solved if there was just a good wikipedia entry for it?


Taranu August 21, 2009 at 11:47 am

Luke, I was listening to Shelly Kagan’s course on death, and in the last lecture he touches on utilitarianism and deontology.  In the case of utilitarianism he says the following: Imagine that there are 5 patients in a hospital who are going to die of organ failure (one of liver failure, one of heart, two of kidney, one of lungs). A healthy guy walks into the hospital for a medical exam and the doctor realizes he’s suitable to be an organ donor for all 5 patients. The doctor faces a choice: he can finish that guy’s exam and let him go or he could kill him and save the other 5 patients. Well, Kagan focuses a bit on the details, but the bottom line is that since utilitarianism has a consequences-only approach to morality it is better to kill that guy and save the 5.
So my question is how does Desirism deal with this?
 
PS: he gives this example starting from 17:20


Kip August 21, 2009 at 12:04 pm

Taranu: Luke, I was listening to Shelly Kagan’s course on death, and in the last lecture he touches on utilitarianism and deontology.  In the case of utilitarianism he says the following: Imagine that there are 5 patients in a hospital who are going to die of organ failure (one of liver failure, one of heart, two of kidney, one of lungs). A healthy guy walks into the hospital for a medical exam and the doctor realizes he’s suitable to be an organ donor for all 5 patients. The doctor faces a choice: he can finish that guy’s exam and let him go or he could kill him and save the other 5 patients. Well, Kagan focuses a bit on the details, but the bottom line is that since utilitarianism has a consequences-only approach to morality it is better to kill that guy and save the 5. So my question is how does Desirism deal with this?   PS: he gives this example starting from 17:20

Funny, I used this exact example in a presentation I recently gave on Desire Utilitarianism.  DU would say that 1) in order for the doctor to kill the innocent patient he would have to lack an aversion to killing innocent people, and 2) because desires are persistent, the doctor would be more inclined to kill innocent people in other circumstances also.  In general, the desire to kill innocent people is something that we all have many and strong reasons to promote a strong aversion against.  Therefore, we condemn (and punish) any doctor who does this — even to save the lives of 5 other people.  It is “morally prohibited”.


Chuck August 21, 2009 at 12:04 pm

I think desirism would say it is better to promote a strong aversion to killing in the general population. This aversion would be such that the doctor wouldn’t even consider this a choice.


Chuck August 21, 2009 at 12:05 pm

Doh! You beat me to it.


Kip August 21, 2009 at 12:08 pm

Kevin: In reality, there are only my desires, yours, his, and so on. Unless I desire to see the desires of others fulfilled, how do their desire hold any moral sway over me?

“Moral sway”?  You probably just mean “sway”, right?  The answer to that is that we use our social tools (praise, condemnation, reward, punishment) to mold the desires of people.  For those people who are not swayed by the use of these tools, they are usually labeled “sociopaths” and many end up locked up somewhere to keep them from hurting other people.


Kip August 21, 2009 at 12:15 pm

Chuck: It occurs to me. How much of the confusion surrounding desirism/desire utilitarianism (how it differs from preference utilitarianism) might be solved if there was just a good wikipedia entry for it?

I definitely agree.  We need this.  And it might be good to have a separate “wiki” just for Desirism.  It would be nice if there were a central place, with all the up-to-date information neatly organized that people could reference to get details on the theory.


Kip August 21, 2009 at 12:18 pm

Chuck: I would just point out there are other tools for promoting or inhibiting desires. There is a whole school of thought that says one need not (and should not) resort to praise, condemnation, reward, or punishment in raising children. If it’s better for our kids, then why not society at large?

What school of thought is this?  I’d like to see what better methods for molding desires you are advocating.


Chuck August 21, 2009 at 2:44 pm

Anything by Alphie Kohn. For example, Unconditional Parenting


Chuck August 21, 2009 at 2:45 pm

Oops. Make that, Alfie Kohn.


lukeprog August 21, 2009 at 4:35 pm

Chuck: How much of the confusion surrounding desirism/desire utilitarianism (how it differs from preference utilitarianism) might be solved if there was just a good wikipedia entry for it?

There was one attempt, but it was deleted for not being ‘notable’ enough, in June 2007 I believe. If any papers supporting it had been published in scholarly journals, I think it would have stayed up.


Kevin August 21, 2009 at 5:13 pm

Kip: “Moral sway”?  You probably just mean “sway”, right?  The answer to that is that we use our social tools (praise, condemnation, reward, punishment) to mold the desires of people.  For those people who are not swayed by the use of these tools, they are usually labeled “sociopaths” and many end up locked up somewhere to keep them from hurting other people.

No, I meant “moral sway,” since other types of rules or influences can affect a person’s decisions and responsibilities.
 
You’ve addressed the practical aspects of applying this or any moral theory, but the issue I’m getting at is what justification is there for my obligation to take the desires of others into consideration.  Perhaps it boils down to potential or actual punitive measures (e.g., if I don’t respect the desires of others, my own desires are less likely to be fulfilled) and enlightened self-interest.  If so, then Desirism seems little different from most common sense morality, such as the Golden Rule.


EvanT August 22, 2009 at 4:47 am

Hi Luke,
I’m not sure your “happiness” examples in 6.09 are entirely convincing. Your logic suggests that people do not take action to prevent their happiness from being compromised by a paradigm shift. Lemme explain:

For visualization purposes let me simplify things and assign a +1 value to actions that increase happiness in the scenarios and -1 to actions that decrease happiness. It’s an oversimplification, I know, but bear with me.

SCENARIO #1
+1 Your friend lives comfortably
+1 You live comfortably
-1 You have to live with your friend’s screams
(so far we have a total of +1, BUT…)
+1 Someone spills the beans and you learn the truth

The paradigm shift produces a total of +2
(+1 for you alone if you ignore the friend’s happiness, but I don’t think you can, since the friend’s happiness (and lack thereof) is used to produce happiness for you as well)

SCENARIO #2
-1 Your friend suffers
+1 You don’t
Total of 0
-1 Paradigm shift. You learn your friend is suffering.

The paradigm shift produces a -1.

So you see that the first scenario has better future prospects, and since these scenarios involve humans, paradigm shifts are plausible enough to be taken into account when considering options.
————
But you didn’t pick these examples light-heartedly, did you? (sneaky, sneaky). Lemme attempt to adjust for God, Heaven and Hell as per the mainstream notions.

SCENARIO #1
+0 Friend suffers in Hell (-1), but deserves it (+1)
+1 You are happy
+0 You hear his screams(-1) but he deserves to suffer (+1)
(this assumes modern notions. Older notions of hell and heaven would count this clause and the first one as a source of happiness for the saved)
+0 No possibility for a paradigm shift

Total happiness of +1 (+3 for the medieval theology)

SCENARIO #2
+0 Friend suffers (-1) but deserves it (+1)
+1 You are blissfully happy
+0 No possibility for a paradigm shift

Total happiness of +1 (+2 for medieval theology)

Scenario #2 is simpler. Better for modern sensitivities.
Scenario #1 is better for medieval tastes.

I know I’ve presented things TOO simplified, but I’d like to hear your opinions on this… pythagorean take :P


Antiplastic August 22, 2009 at 6:20 am

In general, the desire to kill innocent people is something that we all have many and strong reasons to promote a strong aversion against.  Therefore, we condemn (and punish) any doctor who does this — even to save the lives of 5 other people.

 
Wait, huh?
In order to break the window to rescue a kitten from a burning building, you need to “lack an aversion to smashing windows” and since in general, the desire to smash windows is something that we all have many and strong reasons to promote a strong aversion against, we condemn (and punish) any fireman who does this?


Kip August 22, 2009 at 6:21 am

lukeprog: There was one attempt, but it was deleted for not being ‘notable’ enough, in June 2007 I believe. If any papers supporting it had been published to scholarly journals, I think it would stay up.

Weird.  There is one for Preference Utilitarianism: http://en.wikipedia.org/wiki/Preference_utilitarianism. I agree that publishing something would be good.


lukeprog August 22, 2009 at 6:33 am

Evan,

You’re not considering my scenario. In the scenario I presented in my thought experiment, there is no discovery or “paradigm shift.”


Kip August 22, 2009 at 6:34 am

Chuck: Anything by Alphie Kohn. For example, Unconditional Parenting

I put that on my Amazon wishlist — so if I ever have kids, I’ll read it.  Right now, though, my thought is that there is definitely a place for praise & condemnation, reward & punishment in rearing children.  Those terms are general enough so as to allow for intelligent application geared toward the specific situation.  I was given reward ($) for making good grades when I was a kid — and I think that was a very good thing.  It gave me an incentive to become a good student, which provided me the tools to do a lot of other good things.


lukeprog August 22, 2009 at 6:34 am

Preference utilitarianism is a very popular theory, one endorsed by dozens of published academics. So it will have no trouble keeping a Wikipedia page.


Kip August 22, 2009 at 6:41 am

Kevin: Perhaps it boils down to potential or actual punitive measures (e.g., if I don’t respect the desires of others, my own desires are less likely to be fulfilled) and enlightened self-interest.  If so, then Desirism seems little different from most common sense morality, such as the Golden Rule.

It’s not just potential punitive measures, it’s actually changing people’s desires, which lead them to actually want to do different things.  We use the law to use people’s current desires to shape their behavior (for fear of punitive measures).  We use social tools to change their desires which will lead to change in behaviors (not out of fear of punitive measures).  I don’t say “thank you” to the stranger at the grocery store because I fear that she might hurt me if I don’t.  I say it because it makes me feel good to think that it might make her feel good if I do.  It’s a win-win.
The “golden rule” is not a moral theory, it’s just a pretty good “rule of thumb”, and it definitely fits with Desirism.  It has problems, though, which Alonzo Fyfe has written about.


Kip August 22, 2009 at 6:42 am

lukeprog: Preference utilitarianism is a very popular theory, one endorsed by dozens of published academics. So it will have no trouble keeping a Wikipedia page.

But it doesn’t reference any of them.


Kip August 22, 2009 at 6:48 am

Antiplastic:   Wait, huh? In order to break the window to rescue a kitten from a burning building, you need to “lack an aversion to smashing windows” and since in general, the desire to smash windows is something that we all have many and strong reasons to promote a strong aversion against, we condemn (and punish) any fireman who does this?

No we don’t.  We recognize that smashing windows is not something worth having a strong aversion against.


EvanT August 22, 2009 at 8:11 am

Jeez, Luke.
I know you didn’t include it. But your argument is “Many people, perhaps most people, will choose Option 1 – even though this does not maximize their own happiness.” This is an appeal to public consensus. I’m just arguing that no ordinary person would limit his thinking to a strict “thought experiment” like that.
But besides this, even without the paradigm shift, the numbers still come up on my side.
SCENARIO #1
+1 Your friend lives comfortably
+1 You live comfortably
-1 You have to live with your friend’s screams
Total of +1

SCENARIO #2
-1 Your friend suffers
+1 You don’t
Total of 0
In any case, I’m arguing against the appropriateness of the example in this instance (too theoretical to be meaningful). And people DO tend to fall back on notions of decency and assumed selflessness on scenarios that are TOO hypothetical. The grandma’s ranch might be more appropriate. BUT, I still think that you should ponder a bit more on my “assumed fact” that people WILL consider a benign or malign paradigm shift when considering which course of action would have a more felicitous outcome.
You should also define happiness a bit better. I mean, does your definition include the epicurean notions of hedone or ataraxia, for instance?


Kevin August 22, 2009 at 8:23 am

Kip: It’s not just potential punitive measures, it’s actually changing people’s desires, which lead them to actually want to do different things.  . . . I don’t say “thank you” to the stranger at the grocery store because I fear that she might hurt me if I don’t.  I say it because it makes me feel good to think that it might make her feel good if I do.  It’s a win-win.

Agreed.  Punitive measures are not the only means of changing behavior.  But why change behavior at all?  And where does my obligation to change my behaviors or those of others come from?  Because it benefits me as well as others?  If so, is that not some version of the Social Contract?


Chuck August 22, 2009 at 11:03 am

Kip:  I was given reward ($) for making good grades when I was a kid — and I think that was a very good thing.  It gave me an incentive to become a good student, which provided me the tools to do a lot of other good things.

It sounds right, doesn’t it. Unfortunately, research says otherwise. Read the book.


Antiplastic August 22, 2009 at 2:11 pm

Kip: No we don’t.  We recognize that smashing windows is not something worth having a strong aversion against.

Say again? If some neighborhood punks are smashing your windows, you’re indifferent? If I see that sort of thing, I usually call the cops, but maybe that’s just me.
Look, the point here is that there are things which most people agree are ceteris paribus bad, but if life only ever presented us with immaculately desirable or undesirable choices, there wouldn’t be any need for morality, would there? *In general* we don’t like it when people smash other people’s windows, or take bonesaws to others’ ribcages and tinker with their hearts, but when a doctor is performing surgery or a fireman is rescuing a baby we applaud these things because the consequences *outweigh* the badness. That’s the whole point of consequentialism.
And that’s the whole point of (alleged) counterexamples like the Organ Donor — to show that consequentialists lack the courage of their convictions. It’s no reply at all to simply reassert that the bad outcome (smashing windows, killing one innocent) is bad. *Of course* people “have reason to have an aversion” to people randomly going around killing innocent people. The point is to explain why, in consequentialist terms, the one-dead-guy outcome is worse than the five-dead-guy outcome. Or wouldn’t you agree that people “have reason to have an aversion” to five people dying who could easily have been saved?


Justin Martyr August 22, 2009 at 4:52 pm

Value terms are about reasons for action, and moral value terms are usually spoken of in the context of a universal consideration of reasons for action (not just prudential reasons for action). Desirism is a theory about reasons for action, and especially about a universal consideration of reasons for action. That’s why I think it makes sense to consider it a theory about morality.

 
Ok, let’s suppose that Sam is one of the 1,000 sadists. Why should Sam set aside his desires for the desires of others? I don’t think Sam would want to sacrifice for the sake of a tautology.


Antiplastic August 22, 2009 at 7:25 pm

“To the degree that a malleable desire tends to fulfill other desires, to that degree people generally have reason to promote or encourage the formation and strength of that desire.”

 
“People generally have reason”. There’s your problem.
People *actually* have reason to promote desires whose satisfaction *actually* fulfills their desires. “Generally” people shouldn’t break other people’s windows. But “generally” people should rescue people from danger. And these generalities conflict when someone is trapped in a burning building, when you have to do things like this. You can’t just talk about “fulfilling desires” as though their fulfilments never conflict.


lukeprog August 22, 2009 at 8:56 pm

Justin Martyr: Ok, let’s suppose that Sam is one of the 1,000 sadists. Why should Sam set aside his desires for the desires of others? I don’t think Sam would want to sacrifice for the sake of a tautology.

Yup! This is a great question and will be added to the FAQ later.


lukeprog August 22, 2009 at 8:57 pm

Antiplastic: You can’t just talk about “fulfilling desires” as though their fulfilments never conflict.

Desirism knows this. The whole point of desirism is to change people’s malleable desires such that they do NOT conflict.


Chuck August 22, 2009 at 9:58 pm

Antiplastic:   “Generally” people shouldn’t break other people’s windows. But “generally” people should rescue people from danger. And these generalities conflict when someone is trapped in a burning building, when you have to do things like this. You can’t just talk about “fulfilling desires” as though their fulfilments never conflict.

“An aversion to deceiving others (which is generally a good aversion to promote among people who recognize the costs of living in a society of deceivers) may come into conflict with the desire to save innocent lives when the Nazis come looking for the Jews that one has hid in the attic. Yet, where the desire to save innocent life is stronger the home owner will lie to the Nazis.” -A.F.


Taranu August 22, 2009 at 11:33 pm

Luke, how would desirism deal with the following three issues:
 
1. I’m not sure if I heard this in the movie God on Trial or in some other place. A Jew in Nazi Germany has two healthy children, and a Nazi officer comes to him and tells him to choose one of them. The one he chooses will be spared and the other one will be killed. What should he do? As I remember it, the Jew couldn’t choose either of them and they were both killed. What does DU have to say?
 
2. I’m sure you are familiar with the trolley paradox. Here is a version of it: A trolley is running out of control down a track. In its path are 5 people who have been tied to the track. Fortunately, you can flip a switch, which will lead the trolley down a different track, thus saving the lives of the 5. However, there is a single person tied to that track, and it happens to be your only son. Should you flip the switch or not? What does DU have to say?
 
3.  This third issue is about incest. Let’s say a brother and his sister fall in love with each other and they engage in sexual intercourse. What does DU have to say about this, and does it make a difference whether or not the sister becomes pregnant with a genetically deformed child as a result of their union?
 


lukeprog August 23, 2009 at 7:20 am

Taranu,

#1 is “Sophie’s Choice.” I’ll respond to these in later additions to the FAQ, thanks…


Kip August 23, 2009 at 7:20 am

Antiplastic: Say again? If some neighborhood punks are smashing your windows, you’re indifferent? If I see that sort of thing, I usually call the cops, but maybe that’s just me.

Destroying someone else’s property without their consent is something we have many reasons to promote having a relatively strong aversion against.  Your “smashing windows” example was too specific.  Of course, protecting lives is something we have more and stronger reasons to promote having a relatively stronger desire for.  So, if a fireman has to choose between saving someone’s life while destroying property, or not destroying property but allowing the person to die, then he will choose the stronger desire: to save the life.
Is this rocket science?


Kip August 23, 2009 at 7:22 am

Antiplastic: *Of course* people “have reason to have an aversion” to people randomly going around killing innocent people. The point is to explain why, in consequentialist terms, the one-dead-guy outcome is worse than the five-dead-guy outcome.

Yeah, I did that.


Antiplastic August 23, 2009 at 9:35 am

lukeprog: Desirism knows this. The whole point of desirism is to change people’s malleable desires such that they do NOT conflict.

Then you don’t have a psychological problem or even a social problem so much as you have a problem with the fundamental structure of reality. We all want cars that get infinite miles to the gallon and can haul an infinite amount of cargo, but in reality we can’t have this, and so we have to make tradeoffs.  And every tradeoff by definition involves arbitrating between conflicting desires.
But this is a bit of a distraction from the main point of the post you were responding to: it simply does not follow from the fact that people “generally” have reason for X that any specific person has a reason for X, and it would not follow even from the fact that all people “generally” have a reason for X that they have a reason for X in any specific instance. “Generally” chlorine is a toxic gas, but this is of exactly zero relevance to whether NaCl is a toxic gas.


Yair August 23, 2009 at 2:05 pm

Antiplastic: But this is a bit of a distraction from the main point of the post you were responding to: it simply does not follow from the fact that people “generally” have reason for X that any specific person has a reason for X, and it would not follow even from the fact that all people “generally” have a reason for X that they have a reason for X in any specific instance. “Generally” chlorine is a toxic gas, but this is of exactly zero relevance to whether NaCl is a toxic gas.

Precisely so.


lukeprog August 23, 2009 at 3:47 pm

Yair and Antiplastic,

Of course, this objection will be answered in a future addition to the FAQ.

