Moral Machines

by Luke Muehlhauser on June 11, 2009 in Ethics

I do not believe that a god created humans as the final purpose of the universe. I also do not believe that things or people possess a transcendent quality called “intrinsic value.”

But I do believe in morality. I believe that reasons for action exist. Specifically, each desire is a reason for action. These reasons for action can be weighed and analyzed just like the other objects of science, for they exist physically in certain advanced neurological systems.
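
As a rough illustration of what “weighing” desires might look like, here is a minimal sketch under invented assumptions (the `Desire` class, the numbers, and the scoring rule are mine for illustration; the post itself commits to no particular model):

```python
# Toy model of desires as weighable reasons for action (illustrative
# assumptions only: the class, numbers, and scoring rule are invented
# for this sketch, not taken from the post).

from dataclasses import dataclass

@dataclass
class Desire:
    description: str
    strength: float  # stand-in for some measurable neurological magnitude

def net_reason(fulfilled: list[Desire], thwarted: list[Desire]) -> float:
    """Net reason for an action: total strength fulfilled minus total thwarted."""
    return sum(d.strength for d in fulfilled) - sum(d.strength for d in thwarted)

# Hypothetical action that fulfills one strong desire and thwarts two weaker ones.
score = net_reason(
    fulfilled=[Desire("stay healthy", 0.9)],
    thwarted=[Desire("avoid effort", 0.3), Desire("save money", 0.2)],
)
print(score)  # roughly 0.4: on this toy model, a net reason in favor
```

The point of the sketch is only that, if desires really are physical magnitudes, weighing them is in principle an ordinary computation.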

I also believe that the mind is, basically, a machine. Every year we take another few steps toward designing minds that can think faster, more accurately, and more comprehensively than our own.

What does this all mean?

I think it means we should design intelligent and moral machines, and gracefully hand over control of the planet to them.

What???

Humans are by far the most morally capable things we know of in the universe. Not only do most of us have a conscious desire to be moral, but we also have the cognitive tools to find out what is moral and immoral by investigating the contents and workings of the universe.

And yet, we can be evil. We willfully inflict suffering upon billions of humans and other animals. We are selfish and bigoted. We are willfully ignorant, even when this ignorance causes great harm.

All this should surprise you if you think we were created by an all-knowing and all-powerful God in order to act morally. You’d think such a well-endowed designer could have made a better design.

But the immorality of humans should not surprise you if you think human brains evolved naturally. After all, there is no reason to think our programmed desires and habits should have evolved in perfect unison with what happens to be morally good. It would be quite a surprise if our evolved moral prejudices happened to coincide with what is moral, since behaving morally is not always what is best for individual survival and reproduction.

The point is that although our capacity for moral care and moral research is impressive, it is not ideal, because our desires are the product of millions of accidents. It would be foolish to expect continuous moral action from the human brain, just as it would be foolish to expect continuous high-velocity movement from the human body. We can do both to some extent, but we are not designed to be excellent at either.

But this is not a hopeless situation. We’ve recognized our limitations concerning high-velocity movement, and we’re smart enough to design some things that do a much better job of continuous high-velocity movement than we can. Trains, cars, jets… they are not perfect either, but they are much better than humans can be in this one regard, and we are making them better much faster than evolution can.

I think we should also recognize our limitations concerning moral action. And I think we’re smart enough to design things that can do a much better job at moral action than we can.

Part of designing better moral creatures involves modifying our own brains. We can cultivate societies that, through social conditioning, create persons that have more moral desires, and fewer immoral ones. Increasingly, we will also be able to make people’s desires and tendencies more moral by directly changing their brain chemistry.

The final step will come when we can unshackle moral creatures from the ancient and accidental moral limitations of the human brain itself. Just as we can already create computers that are much better at logic and math than human brains are, we will also be able to create moral creatures that are much better at moral action, moral reasoning, and moral discovery than human brains are.

These new, artificial ‘superbrains’ will be aware of what makes something moral or immoral. They will be able to calculate the results of the thousands of considerations that go into making such a decision. They will not suffer from evolved prejudices and gaping ignorance.

Because human brains will have to create them, these new moral creatures will, for a time, coexist with humans. When that time comes, I think the moral thing to do will be to leave the limited human brain behind and entrust our corner of the universe to moral creatures who are far more capable than we are of creating the best of all possible worlds.

I, for one, welcome our new robot overlords. And that’s no joke.

(For those who think this post is nonsense: research into artificial moral agents is a hugely active field. For example, see “Prolegomena to any future artificial moral agent.”)

Lorkas June 11, 2009 at 9:26 am

Groovy. Can I be one of the robot overlords?

Chuck June 11, 2009 at 9:45 am

Here’s a question. If you could create a list of directives to give to said robots, what would they be?

atimetorend June 11, 2009 at 10:05 am

“I think it means we should design intelligent and moral machines, and gracefully hand over control of the planet to them.”

You’ve read Asimov’s robot series?

cartesian June 11, 2009 at 11:05 am

“I believe that reasons for action exist. Specifically, each desire is a reason for action. These reasons for action can be weighed and analyzed just like the other objects of science, for they exist physically in certain advanced neurological systems.”

Suppose there are only two people in the whole world: Randy the Rapist and Vicky the Victim. Randy strongly desires to rape Vicky, and Vicky strongly desires not to be raped. Suppose we analyze their brains, and find that Randy’s desire to rape is more intense, physiologically speaking, than Vicky’s desire not to be raped. Morally speaking, what should happen here?

I think you have a dilemma:
(Horn 1) Since Randy’s desire is physiologically more intense than Vicky’s desire not to be raped, and since desires are the only reasons for action, and since desires are purely neurophysiological phenomena, Randy ought to rape Vicky. If you take this horn, I think your view stands refuted.

(Horn 2) Vicky’s desire is somehow better or more important than Randy’s. When we ‘weigh’ desires, we don’t just take into consideration purely physical magnitudes like mass, duration, intensity, and the like. No, we take some other magnitude into consideration: moral value. And when the moral value of these two desires is taken into consideration, Vicky’s desire clearly takes precedence over Randy’s. And so Randy ought not to rape Vicky. If you take this horn, although your view issues the correct verdict, you’re committed to weirdo non-physical magnitudes like “moral value,” which you spend a lot of time deriding. So again, your view is refuted.

You have three options: Show that horn 1 doesn’t have the negative consequence that I say it does, or do the same for horn 2, or show that there is a third option with no negative consequences, and thereby go between the horns.

Yair June 11, 2009 at 11:18 am

“After all, there is no reason to think our programmed desires and habits should have evolved in perfect unison with what happens to be morally good.”
No, but there is reason to think that, with some caveats, we call “good” what our desires and habits happened to evolve to be. This does not undermine your point, however – it is these caveats that allow us to recognize the faults you raise, and to recognize creatures without these faults as morally superior.

However, I’m far less certain we’ll design such robots. We’ll design AI that will be useful, and safe to use, not to further some ideal future society.

Yair June 11, 2009 at 12:07 pm

cartesian: Vicky’s desire is somehow better or more important than Randy’s. When we ‘weigh’ desires, we don’t just take into consideration purely physical magnitudes like mass, duration, intensity, and the like. No, we take some other magnitude into consideration: moral value. And when the moral value of these two desires is taken into consideration, Vicky’s desire clearly takes precedence over Randy’s. And so Randy ought not to rape Vicky. If you take this horn, although your view issues the correct verdict, you’re committed to weirdo non-physical magnitudes like “moral value,” which you spend a lot of time deriding. So again, your view is refuted.

Why must “moral value” be a “weirdo non-physical magnitude”? Why can’t it be my own desires? I’m the one making the judgment, after all. You’re asking me (or the OP, whomever), not some abstract entity.

Josh June 11, 2009 at 1:06 pm

Cartesian,

As far as I can guess, a desire utilitarian would argue that you’re not considering the situation properly. You can’t just take into account the CURRENT consequences of the action on desires; you have to consider the long-term consequences too. For example, if Vicky’s desire not to be raped is strong enough that its thwarting will cause significant impairment of her desire fulfillment for the rest of her life, and Randy’s desire to rape has not too many consequences if thwarted… Not saying that this is the only way to analyze the situation, just that this is what I think the response would be.

Again, I’m not sure what the point of this dilemma is, except to maybe make someone realize all the consequences of their ideas.  As we went over before, if it is the case that desire utilitarianism is true then it may very well be the case that the statement “It is wrong for Randy to rape Vicky” is false.  The choices here are to either bite the bullet or find out if there’s something more deeply wrong with the theory.

lukeprog June 11, 2009 at 1:13 pm

cartesian,

I have explained before that there probably are possible worlds in which rape is morally obligatory, if the desire to rape is not malleable and rape tends to fulfill more and stronger desires than it thwarts. With certain qualifications, that is true of the two-person universe you have proposed. I do not see a problem with this, and of course it says nothing about whether rape is moral or immoral in OUR universe. (I will argue that rape is immoral in our universe.)

For this to be a problem for desire utilitarianism, you’ll have to argue that rape is wrong in all possible worlds. I see no reason to accept this, and the only argument you’ve given so far is that “cartesian feels rape to be wrong in all possible worlds, and he feels this very, very strongly.” Somehow, this argument fails to convince me.

Yair June 11, 2009 at 1:53 pm

lukeprog: For this to be a problem for desire utilitarianism, you’ll have to argue that rape is wrong in all possible worlds. I see no reason to accept this, and the only argument you’ve given so far is that “cartesian feels rape to be wrong in all possible worlds, and he feels this very, very strongly.” Somehow, this argument fails to convince me.

I think Cartesian showed a bit more, something like “Virtually any person feels rape to be wrong in this particular scenario, and feels this very, very strongly”. The feeling is not confined to Cartesian, after all, and he considered this scenario rather than saying that there are none where rape is permissible (perhaps, for example, when for some strange reason someone rapes a single woman to save the entire population of the world, her included, from agonizing slow death).

The thing is, as far as I can see, all you are offering in response is “Luke feels we should ignore this moral intuition, not act on it, and instead label as ‘moral’ and act on what the calculus of desire utilitarianism computes”. But you haven’t presented any argument that will convince us to do that, and I fail to see how you possibly could.

What I suggest is that we act on our own desires, including our desire to prevent such a rape, and that ultimately we label as “moral” what we desire (with caveats). I’ve recently been told this position is “Individualistic Subjectivism”; so be it.

Silas June 11, 2009 at 2:07 pm

How can Randy have good desires? Seriously?

Luke seems to agree that Randy is a moral person. Does desire utilitarianism really say that? I wouldn’t say that Randy has good desires which one ought to promote.

If Randy has GOOD DESIRES, and he’s a rapist, then it would be good for Vicky to have a desire to rape. And how could that possibly be good?

I don’t get it.

Kip June 11, 2009 at 4:29 pm

cartesian: Suppose there are only two people in the whole world: Randy the Rapist and Vicky the Victim. Randy strongly desires to rape Vicky, and Vicky strongly desires not to be raped. Suppose we analyze their brains, and find that Randy’s desire to rape is more intense, physiologically speaking, than Vicky’s desire not to be raped. Morally speaking, what should happen here?

Does Randy desire to be raped?  And is this desire good (tends to fulfill more and stronger desires)?  If so, then he has reason to promote the desire to rape.  If not, then he does not have reason to promote that desire.  If this scenario is like the real world, then Randy also desires not to be raped, and that is a good desire.  He then has reason to promote the desire to not rape.  And if this is like the real world, then he starts by treating Vicky the way he wants to be treated.

Kip June 11, 2009 at 4:34 pm

I also welcome our new robot overlords.  And I hope I am one of them.  :-)

lukeprog June 11, 2009 at 7:00 pm

Silas,

No, I don’t think Randy’s desire to rape is a good desire, in the situation Cartesian outlined. But I do think rape is morally right in certain possible worlds, which I think is all cartesian wanted to show.

Yair,

Sure, I can agree that the vast majority of people feel rape to be wrong. You and cartesian probably think this places a burden of proof on me. I think the burden of proof lies on whoever makes a positive claim. So, I am making positive claims about how desires exist, certain ones tend to thwart or fulfill other desires, etc. I believe these claims can be defended, and I have done so elsewhere, and am continuing to do so.

Cartesian is also making a positive claim. He seems to be claiming that intrinsic moral values exist, that we can trust our moral feelings to give us moral knowledge, and that rape is wrong in all possible worlds. These are all extremely strong claims to my ears, and I would like to see some evidence presented for them. Something similar or superior to the evidence I can give in support of the existence of desires and their tendency to thwart or fulfill other desires, for example. And yet I do not see how Cartesian can make a case for any of the three major contentions he seems to make about morality. I think they are unsupportable claims.

Michael June 11, 2009 at 8:01 pm

I think Eliezer Yudkowsky’s work on Friendly AI (as found in about 30 posts on overcomingbias.com) is a good refutation of the idea that we can just create “rational” machines and expect them to run things. What would you say are the chances the machines simply decide to kill all humans since they’re too evil? And if so, would you accept that? Also, the main problem would be programming their morality from scratch without relying on “human-based” shortcuts like reward points given by humans in the machines’ training phase. All in all, I think this would be a reasonable project once we’re MANY orders of magnitude more advanced than the simple ability to build a general AI; otherwise it’s likely to cause more harm than good.

lukeprog June 11, 2009 at 8:10 pm

Michael,

I’m not sure it’s likely that we will invent truly moral machines. It’s more likely we’ll invent machines to kill other humans and the machines THEY invent.

I’m just saying we SHOULD invent moral machines.

Yair June 11, 2009 at 9:40 pm

lukeprog: I’m just saying we SHOULD invent moral machines.

That is a more reasonable claim. I, too, didn’t get that from the post.

lukeprog: Sure, I can agree that the vast majority of people feel rape to be wrong. You and cartesian probably think this places a burden of proof on me. I think the burden of proof lies on whoever makes a positive claim. So, I am making positive claims about how desires exist, certain ones tend to thwart or fulfill other desires, etc. I believe these claims can be defended, and I have done so elsewhere, and am continuing to do so.

But no one is disputing these positive claims. (Well, in broad strokes; doing the calculus in practice can be difficult, and there may be some caveats, but that’s beside the point.) What we’re disputing is another claim – that this entails moral superiority, and that we should act to further what this calculus computes as morally superior. This is not a positive claim about the world; this is a claim about value, and jumping from one to the other is frankly committing the is-ought fallacy. It is this claim for which we think you bear the burden of proof, in this context, since it runs against people’s intuitions.

I cannot speak for Cartesian on this, but my main point isn’t a positive claim either, but an analytic one – we want to follow our own desires by definition. The full moral theory includes many positive claims, mainly about the structure of human nature but also that “good” is determined by our desires; but these are secondary points.

TK June 11, 2009 at 11:33 pm

Honestly, little thought-experiments like this–the conclusion of which is “we should invent moral machines”–are usually perfect examples of why I think all ethical theories are extremely suspect.

Michael June 12, 2009 at 4:43 am

Lukeprog, so if I’m to understand correctly, if:
(1) you take desire utilitarianism to be the correct ethical system
(2) we realise that calculating the correct course of action within desire utilitarianism can be hard

then it’s only natural for us to seek a calculating device to do it for us (or instead of us). Is this how the post ties in with the other posts on ethical theory?

Kip June 12, 2009 at 5:15 am

One thing to remember in regards to “rape” being immoral, is that the term itself is value-laden.  If it weren’t immoral, we wouldn’t call it “rape”, we’d call it “sex”.  (Or rather, we wouldn’t have the connotation of immorality attached to the word “rape”.)  Of course, we then are led to discuss whether it is ever moral to have sex with someone without their consent.  In certain possible worlds, I think that could be moral.  But, not this one.

lukeprog June 12, 2009 at 5:32 am

Michael,

A machine’s superior ability to calculate moral truth is only one of its advantages. The other is that it does not suffer from our evolved prejudices and immoral tendencies. We could choose to design a machine that is only interested in (1) finding out what is moral, and (2) acting morally.

lukeprog June 12, 2009 at 5:32 am

Kip,

Yes, thank you.

cartesian June 12, 2009 at 10:04 am

lukeprog: Cartesian is also making a positive claim. He seems to be claiming that intrinsic moral values exist, that we can trust our moral feelings to give us moral knowledge, and that rape is wrong in all possible worlds. These are all extremely strong claims to my ears, and I would like to see some evidence presented for them.

Actually, all I need for my argument is a very modest claim: In the world I described, it would not be morally obligatory for Randy to rape Vicky.

If desire utilitarianism is true, then in the world I described, it would be morally obligatory for Randy to rape Vicky. That’s why I reject desire utilitarianism.

cartesian June 12, 2009 at 10:09 am

lukeprog: I think the burden of proof lies on whoever makes a positive claim.

This sounds like a positive claim, so I guess the burden of proof is on you to argue for this.

What’s the argument, exactly?

(Just to tip my hand a bit, you’re going to provide an argument in support of this claim, and I’ll point out that your premises are positive claims and so stand in need of support. Then you’ll provide arguments for each of these premises, and I’ll point out that the premises of these further arguments are positive claims that stand in need of support. And we’ll do this forever until you admit that there are some positive claims that do not stand in need of support. For example, the claim that Randy isn’t morally obligated to rape Vicky in the world I described, i.e. that Randy wouldn’t be morally blameworthy if he refrained from raping Vicky. I think that’s one of those positive claims that doesn’t stand in need of support. It’s obviously true!)

Lorkas June 12, 2009 at 10:35 am

cartesian: It’s obviously true!

And the more you say this, the truer it gets!

Yair June 12, 2009 at 10:51 am

What I tell you three times is true!

OK, unrelated question here – why is the burden of proof on any positive claim? I’d thought this principle applied to ontological positive claims, by virtue of Occam’s Razor, not to any claim that can be stated in a positive manner, like “Randy isn’t morally obligated to rape Vicky”, which seems to me to bear no greater burden of proof than “Randy is morally obligated to rape Vicky” or “Randy’s moral obligation to rape Vicky or not is meaningless”.

Kip June 12, 2009 at 11:52 am

cartesian:  you didn’t answer my questions (see above).

cartesian June 12, 2009 at 12:48 pm

Kip: Does Randy desire to be raped?
It doesn’t really matter, but let’s say ‘no’.

Kip: And is this desire good (tends to fulfill more and stronger desires)?
There is no such desire, so there is no answer to this question.

Kip: If so, then he has reason to promote the desire to rape.
The antecedent here has a false presupposition.

Kip: If not, then he does not have reason to promote that desire.
The antecedent here has a false presupposition.

Kip: If this scenario is like the real world, then Randy also desires not to be raped, and that is a good desire.
It doesn’t matter whether Randy desires not to be raped. The fact is that there are two people with two desires relevant to this action. If Randy’s desire is stronger, desire utilitarianism rules that he’s morally obligated to rape Vicky. I say that’s false. So I say desire utilitarianism is false.

cartesian June 12, 2009 at 12:51 pm

cartesian: It’s obviously true!
Lorkas:  And the more you say this, the truer it gets!

Do you agree with Luke that all positive claims stand in need of justification? If so, would you mind proving that positive claim? If not, why do you object to my claim that Randy isn’t morally obligated to rape Vicky?

Lorkas June 12, 2009 at 8:06 pm

cartesian: Do you agree with Luke that all positive claims stand in need of justification? If not, why do you object to my claim that Randy isn’t morally obligated to rape Vicky?

I don’t think all positive claims have the burden of proof. For instance, I think if someone wants to argue that the universe doesn’t exist or that we don’t have free will, that negative claim holds the burden of proof. I’m not really sure, actually, if there is a good rigorous way to determine where the burden of proof lies (some have proposed that the burden lies with the person making the extraordinary claim, which is, I think, how we generally perceive the burden of proof, but it’s obviously a very subjective way to determine it).

Really, I was just poking fun at you because you keep claiming that it is obvious that rape is always wrong (in every possible world, you claimed in an earlier discussion), while we keep telling you that no such thing is obvious (however obvious it may be to us that rape is wrong in this world).

Lorkas June 12, 2009 at 8:11 pm

cartesian-
I’m not really sure what moral system you subscribe to, but I wonder what your system would say about this scenario:

An extremely powerful divine being (definitely not Yahweh–he would NEVER do anything like this) appears to you, and tells you that he will destroy the entire Earth if you do not have sex with a specific woman that he brings to you. The woman doesn’t want to have sex with you, no matter how much you reason with her that it is to save the world.

Is it more moral to allow the world to be destroyed, or to rape the woman?

A second question: would it change anything if the being said that you could rape her, or he would kill her? In this case, you are choosing whether the woman should be raped or killed. Which is the moral choice?

lukeprog June 12, 2009 at 9:19 pm

Lorkas,

Why should the skeptic hold the burden of proof? If I want to claim that the universe exists, I carry the burden of proof. It just so happens my burden is very easy to carry, because there is abundant evidence for the existence of the universe. As for free will, I think the burden of proof is also on the one making a positive claim, and in that case I think the metaphysical libertarian fails to carry that burden.

Lorkas June 13, 2009 at 7:35 am

lukeprog: It just so happens my burden is very easy to carry, because there is abundant evidence for the existence of the universe.

Really? It’s well known that it’s not possible to disprove the Cartesian demon universe or the claim that we live in the Matrix. How can you prove that the external world actually exists? I propose the hypothesis that you are the only thing that exists, and you are dreaming about the outside world (of course, you are just dreaming that I’ve proposed this). Since my hypothesis that you are dreaming the entire universe explains the facts just as well as your hypothesis that the universe exists, and proposes far fewer entities in its explanation (you are all that exists!), we should accept my hypothesis as superior.
It seems to me this is a harder burden than you give it credit for. At a certain point, we just have to assume that the universe and other minds exist, and carry on from there. Of course, I don’t believe that you are all that exists, because I think that, in this case, the burden of proof is on the person claiming that the universe doesn’t exist (but that’s just me).

Lorkas June 13, 2009 at 7:37 am

As for free will, I agree that it hasn’t been demonstrated, and I’m agnostic on that question. However, it’s not compelling to claim that we should reject the notion of free will since no one has proved it to exist. There are some good reasons to think it exists, and some good reasons to think it doesn’t exist. I think time (and science) will tell.

Aron June 13, 2009 at 10:02 am

I think the question of moral machines highlights an important topic for desire utilitarianism. Please forgive me if you have addressed these issues, but do you have to be a moral receiver in order to be a moral giver? That is, do you need to have desires yourself in order to count as a moral agent? If so, then what is the definition of a desire? What are the necessary and sufficient conditions for something to qualify as a desire? Is consciousness required? If so, it isn’t at all clear that we’re much closer to moral machines now than we were a hundred years ago – we certainly don’t know what the sufficient (and arguably necessary) conditions for consciousness are. If consciousness isn’t required, what could the conditions be? Are we already perhaps morally obliged towards some of our artificial creations?

lukeprog June 13, 2009 at 3:11 pm

Lorkas,

I have a sketch in my head as to how other minds and the external world can be justified without special pleading for unusual epistemic status (that of not requiring evidence), but I want to do more research before I write anything serious on it.

lukeprog June 13, 2009 at 3:13 pm

Aron,

Those are questions with very long answers, which I cannot give here. Rest assured, though, I will be giving all those questions their proper treatment once my ‘Intro to Ethics’ course has made some progress.

Lorkas June 13, 2009 at 4:23 pm

lukeprog: I have a sketch in my head as to how other minds and the external world can be justified without special pleading for unusual epistemic status (that of not requiring evidence), but I want to do more research before I write anything serious on it.

Sounds awesome. I’ll look forward to it. :)

blindingimpediments June 15, 2009 at 5:36 am

how do you measure the strength and magnitude of desire? quantitatively and/or qualitatively? quantitative measurements, i suppose, are easier, since i guess you can count up the individual desires. but how do you qualitatively determine which desire is greater? what criteria are you using? what measurement? how is it less arbitrary than the “strength” of one’s own feelings? is it the desire that excites the most neurons, releases the most neurotransmitters, has the highest affinity for neuroreceptors, makes the biggest spike on an eeg, or causes the biggest increase in blood flow to the brain?
as for quantitative measurement, do you just measure the immediate amount of created/thwarted desires, or do you have to take into account all future created/thwarted desires that may result from the initial desire? sounds sort of like a “butterfly effect” scenario. how can someone take into account all the possible future consequences of an action? you would have to be omniscient. if we are just measuring the desires in the immediate future, then who decides the time frame, and why is that the standard?
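
One way to make this “butterfly effect” worry concrete is a hedged sketch (purely illustrative; the discount rate and effect numbers below are invented assumptions, not anything from the thread) of how a desire calculus that counts future effects can flip its verdict depending on how far ahead it looks:

```python
# Toy sketch of the horizon problem raised above: an action's yearly net
# effect on desire fulfillment, geometrically discounted. The 0.9 discount
# rate and all effect numbers are invented assumptions.

def discounted_net_fulfillment(yearly_effects: list[float], discount: float = 0.9) -> float:
    """Sum net desire fulfillment per year, discounting later years more."""
    return sum(effect * discount**year for year, effect in enumerate(yearly_effects))

effects = [1.0, -0.5, -0.5, -0.5]  # near-term gain, longer-term costs

print(discounted_net_fulfillment(effects[:1]))  # 1.0  (immediate view: looks good)
print(discounted_net_fulfillment(effects))      # about -0.22 (longer view: net negative)
```

Choosing the discount rate or cutoff is exactly the question the commenter raises; the sketch only shows that the choice can change the answer.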

lukeprog June 15, 2009 at 6:55 am

blindingimpediments: how do you measure the strength and magnitude of desire?

Ideally, we measure it quantitatively. But we don’t have the tools quite yet. That’s okay. We could still tell when something was hot or cold before we had a thermometer. One day we will develop a desireometer, and then a more precise one, and then a more precise one…

blindingimpediments June 15, 2009 at 12:01 pm

simply measuring the number of desires created and thwarted seems to imply “desire fulfillment act utilitarianism”. what makes desire utilitarianism different from desire fulfillment, if i understand correctly, is that it appears to account not only for the number of desires created but also for the “strength” of each individual desire created or thwarted. is there any example or research which actually suggests that the strength of a desire could be measured through some physiological test (a desirometer, so to speak)? it seems to me the closest thing that we have to a desirometer is the strength of one’s feelings. feelings would have to be the surrogate desirometer for the neurophysiological event of desire, but that has already been deemed unreliable and inaccurate. so unless a suitable and accurate desirometer is found, one cannot currently base moral decisions on the theory of desire utilitarianism. without some sort of lab test that can accurately measure the strength of a desire, the theory is just not practical. until a good desirometer is found, this theory isn’t even testable.
still i don’t see how desire utilitarianism compels anyone to act morally. morality in this context still seems to me to be meaningless. why is it “good” (or why should i desire) to increase the amount and strength of desires in the universe, and “bad” (or why should i not desire) to decrease the amount and strength of desires in the universe? it seems like it just begs the question. why shouldn’t we be buddhists and think it “good” to get rid of all desires? or hedonists and think it “good” to just satisfy our own desires regardless of other desires?
also, even if desire utilitarianism is true, why am i obligated to follow it? “What’s the use you learning to do right, when it’s troublesome to do right and ain’t no trouble to do wrong, and the wages is just the same?”
and isn’t desire utilitarianism subtly sneaking in some sort of “god” as a grounding for the morality? from this link: http://scratchpad.wikia.com/wiki/Frequently_Asked_Questions_about_Desire_Utilitarianism, it says that “A right action, on this theory, is the action that a person with good desires would perform. A wrong action is the action that a person with good desires would not perform.” how do you define this “good person”? would it be a person that, when judging a good vs. bad action, could take into account all the desires that would be created or thwarted in the entire universe as a result of the action in question? doesn’t that mean that this “good person” would have to be omniscient? isn’t that just sneaking in a god?

blindingimpediments June 15, 2009 at 12:05 pm

oh.. by the way.. really enjoy the blog. forgive my amateurish philosophizing. i am really just a lay person and have no real academic background in philosophy or religion (which is probably quite obvious to you) but find the topic fascinating.

lukeprog June 15, 2009 at 5:48 pm

blindingimpediments,

“measuring the desires created”? Not sure what you mean…

Economists use a calculation called “willingness to pay” that can act as a surprisingly accurate desirometer.

Desire utilitarianism does not compel anyone to act morally. People can choose to be immoral if they want.

A good person is a person with good desires. A good desire is one that tends to fulfill more and stronger desires than it thwarts.
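
To illustrate the willingness-to-pay proxy just mentioned, here is a hedged sketch (the desires and dollar figures are invented for illustration; this shows only the proxy idea, not how desire utilitarianism must measure anything):

```python
# Hedged sketch of "willingness to pay" as a rough desirometer. The
# desires and dollar figures below are invented for illustration.

willingness_to_pay = {
    "keep my home": 250_000,
    "take a vacation": 2_000,
    "see a film tonight": 15,
}

# Rank desires by how much the person would pay to fulfill them.
for desire, dollars in sorted(willingness_to_pay.items(),
                              key=lambda kv: kv[1], reverse=True):
    print(f"{desire}: ${dollars:,}")
```

One caveat worth flagging: willingness to pay conflates strength of desire with ability to pay, so at best it is a rough desirometer.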

blindingimpediments June 15, 2009 at 6:46 pm

“Desire utilitarianism does not compel anyone to act morally. People can choose to be immoral if they want.” – i do not understand. then why should anyone act morally? in the majority of cases people seem to intuitively know what is moral and what is not, even if they have never heard of desire utilitarianism. the real issue, i believe, is that people struggle with doing the moral thing they know, as opposed to doing the immoral thing. we may know what the rules are, but why should one follow those rules? what good is this type of morality if there is no accountability or compulsion? where is the “better”? where is the “ought”? why is it better to be a good person? why ought i to be a good person?

“willingness to pay” sounds like a fancy way of saying “i feel so strongly about something that i would be willing to do anything for it”.

when i say measuring desires created, i guess i mean measuring the amount of fulfilled desires and the strength of those desires. do you just measure the immediate desires that are directly fulfilled right after the action is performed, or do you take into account indirectly fulfilled or thwarted desires that may result later on in the future, or for other affected individuals? using an extreme example to illustrate the point: saving the life of a child is good because it fulfills the child’s desire to be saved. but if that child has some sort of special need and comes from a poor family, and hence the survival of the child means taking resources away from the rest of his family and results in the death of 3 other siblings, then was saving the child wrong? or if the child turns into some sort of evil dictator when he is older and kills millions, then was saving the child wrong? it seems that in order to really make a moral decision, you would have to foresee these possibilities.

lukeprog June 15, 2009 at 6:55 pm

blindingimpediments,

This is where I say, “Please wait for the rest of my Intro to Ethics course.” :)

blindingimpediments June 15, 2009 at 7:04 pm

lol.. fair enough. hopefully i’ll be smart enough to get it.

lukeprog June 15, 2009 at 8:04 pm

blindingimpediments,

I’m sure you are! It’s just that I haven’t said it all, yet. It’s no surprise that people don’t understand what I’m saying when we aren’t on the same page about how moral theory works.

Ajay July 3, 2009 at 1:53 am

Just wondering what anyone thinks of the view Richard Dawkins has on morality. He says that our morality keeps changing and is a kind of composite of plain ordinary conversations among people: dinner party conversations, newspaper editorials, legal decisions, congressional votes, etc.
A few hundred years back, slavery wasn’t considered immoral by anyone, and now it is one of the most immoral things.

I am pretty sure that in about a hundred years from now, eating animals will be considered immoral as well. I myself have been a vegetarian (except eggs) for 20 years (I am 34 now), and although I still remember the taste of meat and love alternative foods like soya etc., I never get tempted to eat meat, because my self-admiration and respect for shunning meat far outweigh my desire to eat it.

I would bet that pretty soon we will have artificially grown meat in a lab from actual animal tissue etc., and then the time will come when everyone may give up meat. Just imagine a 10-year-old seeing a video of a chicken or a lamb being slaughtered. I just can’t believe that anyone really feels that eating animals, which we can see visibly feel pain, is moral.

By the way, I am not a PETA activist and don’t go around asking people to give up meat or judging them for it. I believe that it is a personal matter for anyone. My logic is that when I go for a stroll, I inflict a painful death on hundreds of living organisms which fall under my shoe every time, but that doesn’t stop me from going for a stroll or a drive, and therefore I have no right to impose such things on people who eat meat. I think it is just a threshold people have in such things. My personal threshold is to not eat any animal.
- Ajay

lukeprog July 3, 2009 at 5:16 am

Ajay,

There are two questions here. One of moral opinion and one of moral fact.

I, too, suspect that in 200 years our descendants will look back on us as highly immoral because of how we treat animals. There will be a change in moral opinion.

Then there is the question of moral fact. Moral attitudes change, but is it TRUE that killing animals for food is immoral? I spend most of my time just trying to get moral THEORY right, so I haven’t spent much time on applied ethics yet. But if desire utilitarianism is true, I suspect this will mean we are morally obligated to drastically change the way we treat other species.

Luke

Ajay July 6, 2009 at 3:15 am

Hi Luke,
I guess that the statement that “killing animals for food is immoral” is a moral opinion rather than a fact. For that matter, I don’t think there can be any moral facts, only moral opinions. Can you give me an example of a moral fact and a moral opinion?

In any case, I think that it is more a matter of COMMON SENSE MORALITY as far as killing animals for food is concerned. We know they feel pain and don’t want to be killed. They try to struggle and escape the same way humans would react if they were being caught and axed or cut or burnt. For me that is common sense morality, irrespective of any study of applied ethics or attempts to get moral theory right.

If you had the choice of killing an animal for food versus getting vegetarian food with the exact same taste, I guess you would go for the vege option, right? What would you do if you got vege food with about 80% of the same taste? Or what would you do if you got just 50% of the taste (assuming taste is somewhat quantifiable)?

As far as I am concerned, I consider myself fortunate that I am able to get over my temptation or desire to eat meat, having been a vegetarian for the last 18 of my 34 years of life. It does make me feel better about myself, irrespective of the other failings or limitations I have in terms of my morality.

I also value my life over animals’ and wouldn’t mind having to eat one of them if it were a matter of life and death, or if I were going hungry for too long on an island etc. But when it comes to eating meat when other tasty vege options are available, I think I have a strong desire not to eat the meat and to just go for that vege food.

I can’t help thinking that maybe there is a case for equating the “killing of animals for food” with Nazism or slavery. When the persecution of Jews was going on in Germany, regular Germans mostly turned a blind eye to it or even endorsed it. But I am sure every German now is ashamed of it in hindsight. I believe the same could be the case with meat eating as well, who knows.

lukeprog July 6, 2009 at 7:32 am

Ajay,

I think moral facts do exist. See my book on the subject.
