Morality in the Real World 09: How to Measure Desires

by Luke Muehlhauser on November 16, 2010 in Ethics,Podcast

In episode 09 of Morality in the Real World, Alonzo Fyfe and I explain how you can measure desires.

Download Episode 09

You can also listen to this podcast at archive.org, or subscribe in iTunes or out of iTunes. See the full list of episodes here.

Every five episodes we answer audience questions, so please do post your questions and objections below. Make sure your questions address the topics of this episode only. If we plan to address the subject of your question in a later episode, we will not answer it in the next Q & A episode. You can also leave your question in audio and we will play it back during the Q & A episode and respond to it: call 413-723-0175 and press 1 to leave a voicemail.

Transcript of episode 09:

LUKE: Alonzo, I just had a brilliant idea.

ALONZO: Uh-oh. Now I’m worried.

LUKE: Hear me out. I was thinking, since we called this podcast “Morality in the Real World,” why don’t we start talking about morality in the real world rather than the desires of fictional characters such as Alph, Betty, and Scrooge. Hmmm? Brilliant or super-brilliant?

ALONZO: We have been talking about morality in the real world. We have been talking about things in the real world that are true. For example we said that desires provide people with reasons to mold the desires of others.

LUKE: Yeah, but are we just talking about how many angels can dance on the head of a pin? I can just hear people complaining that none of this can be applied to a world with billions of people with all of their desires, and animals with all their desires.

They’re thinking: “You’ll never get from the simple world of Alph and Betty to something applicable to the world in which we make our day to day decisions.”

ALONZO: Okay, let’s not make things more complicated than they need to be. We don’t need absolute precision to get useful results.

I mean, astronomers have to deal with the fact that every object in the whole universe influences the motion of every other object. They can’t even know all of the things that exist in the universe, let alone measure their gravitational influence on every other thing. You don’t see them throwing their hands up in frustration over the inability to calculate the motion of objects through space.

LUKE: Right. Astronomers look at the most important influences and set aside the rest as trivial.

ALONZO: Well, we can do the same thing relating desires to other desires – to determine if a desire tends to fulfill or thwart other desires. For example, we know that people generally have a great many desires that can’t be fulfilled if the agent dies. I suspect that I’d find it very difficult to continue this podcast if I were to find myself suddenly dead.

LUKE: Our conversations would tend to be a bit one-sided after that.

ALONZO: So, desires that tend to result in getting people killed are likely to rank high as desires that people generally have many and strong reasons to inhibit or weaken. We may not be able to calculate the precise degree to which people have reason to inhibit such desires, but we can know that the value is pretty high.

That’s like computing the orbit of the earth around the sun and recognizing that we have no reason to worry about the influence of some undiscovered planet orbiting around some dim star on the far side of the Andromeda Galaxy.

LUKE: But, come on, Alonzo, how are we supposed to add my desire for coffee to your desire for Diet Dr. Pepper and make some value calculation?

ALONZO: You might as well ask, “How are you supposed to add your desire for coffee to my aversion to the pain of being slowly burned alive over a bed of hot coals?”

See, the example you gave me is one in which two desires are so much alike that it’s hard to see a difference between them. However, the fact that it’s hard to see the difference in some cases – like the case you used – doesn’t imply that it’s impossible to see differences in more obvious cases.

Let’s say your main goal in life is to minimize desire-thwarting. You have two controls, one in each hand. If you release the left control, nuclear warheads will detonate in ten major cities around the world. If you release the right control, then a random stranger in Libby, Montana will experience a mild electrical shock for 5 seconds.

Now, we move these controls further and further apart so you’re going to have to let go of one of them eventually in order to hang on to the other one. Here they go. They’re moving apart. You have five seconds to decide which to drop. Remember, you have one goal: to minimize the total amount of desire-thwarting.

LUKE: Well, I would hold onto the control that will prevent the ten nuclear bombs from going off. Obviously.

ALONZO: You don’t have any problem comparing the different desires of different agents and determining which option would thwart the most and the strongest desires.

LUKE: Well, okay, but what if I want the bombs to go off? What if I want to cause as much desire-thwarting as possible?

ALONZO: If somebody wants to cause as much destruction as possible, he still knows with almost perfect certainty which option will fulfill that desire. The claim that he cannot know – or be very certain – which option would cause the most harm (the most destruction) is absurd. Of course he knows.

It’s the same with height. In some cases it’s difficult to tell which of two people is taller. You can’t point to an example of two people nearly the same height and conclude that it’s impossible ever to know when one person is taller than another.

LUKE: But at least we have a way of measuring height. How do you measure the strength of a desire? What are you going to use for a desire-o-meter?

ALONZO: Okay, that much is true. We don’t have a clear idea of what a desire-o-meter would look like. And that’s a problem. However, it’s not the first time people have faced that kind of a problem.

LUKE: Well, sure. I suppose you’re right. Two thousand years ago, people didn’t know how to measure temperature. And sometimes it was really hard to tell if one thing was warmer than another. They would have had a hard time imagining the thermometer, but that didn’t prove that temperature couldn’t be measured.

ALONZO: Temperature is a good example for another reason. Think about how they actually started measuring temperature.

LUKE: By grabbing two things and judging which felt hotter, I guess.

ALONZO: That’s good, but I was thinking about the invention of the first thermometers. The first thermometers did not actually measure temperature.

LUKE: Yeah, I guess the early thermometers measured the volume of a liquid, because people noticed that most liquids expand when they get warmer, and so they measured how much space a liquid took up in order to get some estimate of its temperature.

ALONZO: They used a proxy for temperature – something that wasn’t temperature itself but something that temperature affected.

LUKE: Okay, so maybe we can measure something by looking at what it affects. What do desires affect?

ALONZO: Choices. Desires affect choices. We already saw that when we looked at Betty’s desire to fulfill Alph’s desire and compared it to Betty’s desire to scatter stones. Those two different desires had two different effects on how Betty would act in situations where Alph died. Betty with the desire to fulfill Alph’s desires quit scattering stones when Alph died, while Betty with the desire to scatter stones continued to scatter stones when Alph died.

LUKE: That’s not entirely reliable, though. Two people with the same desires, but different beliefs, can make different choices. If we are both thirsty, and I believe that the water in this pitcher has been poisoned, but you believe it’s good clean water, you would drink it, and I would not. But that doesn’t mean our desires are different.

ALONZO: Of course. We have to go to some effort to make sure we are measuring the same thing. This is also true when we measure temperatures. What liquid do we have in the thermometer? How much is there? What is the diameter of the tube the liquid is expanding into?

Unfortunately, that’s how things are in the real world. We just have to look at the background conditions that apply when we take a measurement.

Now, consider your desire for coffee and my aversion to being roasted alive over a hot flame. Let’s take a person and give him your desire for coffee, and my aversion to being slowly roasted. Now, let’s give him a choice. Let’s give him two options.

Option 1: His coffee desire is fulfilled. He has a nice warm cup of coffee, but he is being roasted over an open fire.

LUKE: Uh-oh.

ALONZO: Option 2: His desire not to be roasted over an open flame is fulfilled, but he does not have any coffee.

Given these options, which do you think this agent will choose?

LUKE: He’d probably say: “I’ll skip the coffee, thank you very much, and avoid being roasted over an open flame.”

ALONZO: Are you sure you don’t want any coffee? Our coffee comes with a free 24 hours on a metal grill above a bed of hot coals.

LUKE: I’m thinking he’s still going to decline that option.

ALONZO: So, yeah, we can approximately weigh your desire for coffee against my aversion to being roasted on an open fire. Somebody concerned with fulfilling desires can well know that my desire not to be roasted over an open flame is stronger than your desire to have some coffee, and if given an option can reasonably be expected to choose the option that prevents me from being roasted over an open flame.
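Here is a minimal sketch of that idea in code (not from the podcast; the names and numbers are invented for illustration): model the agent as picking whichever option fulfills the most and strongest of its desires, and let an observer who only sees the choices rank the desires, the way liquid volume served as a proxy for temperature.

```python
# Minimal illustration: inferring relative desire strength from forced choices.
# The strengths below are made-up assumptions, hidden from the "observer".
desires = {
    "drink coffee": 2.0,
    "avoid being roasted over hot coals": 1000.0,
}

def choose(options):
    """Pick the option whose fulfilled desires have the greatest total strength."""
    return max(options, key=lambda fulfilled: sum(desires[d] for d in fulfilled))

option_1 = ("drink coffee",)                        # coffee, but roasted over coals
option_2 = ("avoid being roasted over hot coals",)  # no coffee, not roasted

print("Agent picks:", choose([option_1, option_2]))

# The observer never sees the numbers above, only the repeated choices.
# From choices like this one, the observer can still rank the aversion to
# being roasted above the desire for coffee, even without a "desire-o-meter".
```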

LUKE: So what you’re saying is that whatever this imaginary agent would choose, that’s the right thing to do, right?

ALONZO: You’re testing me. No, that’s wrong. We are not saying anything about what people should do. We are refuting the claim that, when it comes to deciding what people should do, we lack the ability to compare the desires of different agents. I was providing a way in which, here and now, we can compare the desires of different agents in the real world.

LUKE: Okay. But remember what I was talking about earlier. People are going to want to know how to get to conclusions about what should be done in the real world. Ultimately, that’s what this podcast is supposed to be about, right?

ALONZO: Of course. We’ll get there. But that’s going to require knowing how to look at desires in large populations. We have looked at cases in which we can make obvious comparisons. We have looked at a technique for making somewhat more precise comparisons. Now let me give you another trick for looking at desires in large populations. For this, we’ll need to go back to Alph and Betty and their distant planet.

LUKE: Alph is gathering stones and Betty is scattering stones.

ALONZO: That’s them! But let’s make this a bit more complex. Let’s say there are 487,000 Alph-clones, and each of them just wants to gather stones. And there are 73,000 Betty-clones, and each of them just wants to scatter stones.

LUKE: “Alph and Betty: Attack of the Clones!”

ALONZO: See, and you were worried that people might not see this as relevant to morality in the real world.

LUKE: Silly me.

ALONZO: Well, we’re in this deep. We have this world with Alph and Betty clones. Now, we’re going to plop another person on this planet. Which should it be, another Alph clone, or a Betty clone? Can you look at all of these desires and tell me whether these people have more reason to summon another Alph or another Betty?

LUKE: Well, since there are more Alph clones than Betty clones, I guess I would suggest a Betty clone to even out the numbers.

ALONZO: But you have to make some assumptions to get that answer.

LUKE: Well, I suppose. I assume that each Betty clone can scatter stones as quickly as each Alph clone can gather stones.

ALONZO: That makes it too simple. Let’s take away that assumption. Some Alphs and Betties are more efficient than others. Our cloning process has some bugs in the software. Some of these clones end up being really tiny, so it takes 10 of those Alphs working together to gather one stone. They hoist it up, put it on a wagon, pull the wagon to the pile, unload it, then take their wagon and go out and get another stone. Other Alphs and Betties are big, strong, and efficient. They can pick up stones one-handed and either hurl them out into the field to scatter them or hurl them into the pile to gather them.

LUKE: Incoming! Heads up, little ones!

ALONZO: They’ll be careful.

LUKE: They’d better.

ALONZO: So, now, can you tell me whether this population has more and stronger reasons to bring in a new Betty clone or a new Alph clone?

LUKE: Um… well, I guess you could just look at the piles of stones. If the piles are getting bigger over time, that would argue in favor of bringing in a stone scatterer. And if the piles are getting smaller over time, then that would argue for bringing in another stone gatherer.

ALONZO: Right. There’s no complex math involved. If the piles are getting smaller, then the stone scatterers will soon face a point where they will not be able to scatter stones. They have a reason to call for this new person to be a stone gatherer. The gatherers, however, have no reason to call for bringing in another scatterer. They’re too busy gathering stones, and they see their stone-gathering possibilities going on into the indefinite future.

LUKE: Hold on. That went by a little fast. You’re saying that there is no reason to choose a scatterer? And that’s because the piles are getting smaller, so none of the gatherers are facing a prospect of thwarted desires, so they have no reason to call for bringing yet another scatterer into this world?

ALONZO: That’s exactly right. Which means that in spite of the numbers of people, the lack of information on how efficient people are, and knowing nothing about how badly people want to gather or scatter stones, we can use the size of the piles as a proxy to measure the reasons to bring another stone scatterer or stone gatherer into the world.

LUKE: Well, okay, in a world of Alph and Betty clones. But can you tell me how this can possibly be useful in the real world?

ALONZO: Okay, here’s a taste of how this type of information can be useful in the real world. Think of something in the real world where the piles are getting smaller and smaller and, if they disappear entirely, we face a lot of thwarted desires.

LUKE:  You mean something like food, clean water, or oil.

ALONZO: Yeah. Something along those lines.

LUKE: Okay.

ALONZO: Let’s go with oil. For the sake of this argument we are not going to get into a debate over whether global supplies are actually shrinking. We only need to look at the fact that, if they shrink, desire-thwartings will result.

LUKE: Most definitely.

ALONZO: One of the ways that we can reduce the desire-thwartings of diminishing stockpiles is by giving people an aversion to those things that are causing the stockpiles to diminish. What if we could give people a pill so that they simply do not want some of the things that involve drawing down the stockpiles of oil? Then, there would be fewer thwartings of desires caused by a diminishing supply.

LUKE: I can see two effects. There would be fewer desire-thwartings because there would be fewer desires to thwart. Obviously, to the degree that people didn’t want things that required consuming oil, then to that degree the lack of oil wouldn’t bother them.

And, another way that desire-thwartings would diminish is that people would not be drawing down the stockpiles so quickly, so the desires that require having oil around would continue to be fulfilled for a longer time.

ALONZO: If we could give people an aversion to using large energy-guzzling vehicles to run to the bank three blocks away, then, at least in that context, the price of gas would not bother them because they wouldn’t need gas. Meanwhile, that would leave more gas for fulfilling desires that we can’t change so easily.

LUKE: So, you’re saying that, in the real world, we should give people an aversion to driving around in big gas-guzzling vehicles.

ALONZO: No. Let’s phrase it this way. In the real world, if the quantity of oil diminishes, a lot of people will have many and strong reasons to promote an aversion to driving around in large gas-guzzling vehicles. That seems true.

LUKE: That’s not the same thing as what I said.

ALONZO: Well, maybe it is, maybe it isn’t. That’s a question for a future episode.

LUKE: Well, before we go on to that future episode, let’s see if people have any questions about our last 4 episodes. If you want to ask a question about what we’ve discussed so far, you can leave that question in the comments on the website, or you can call 413-723-0175 and leave your question in the voicemail, and we may play it back on the air and respond to it.

ALONZO: What was that number again? I wasn’t ready.

LUKE: Alonzo, you don’t have to do that. This is a podcast, not a radio show. People can just jump back 10 seconds if they need the number.

ALONZO: Okay.

LUKE: Alright, so send in your questions, and we’ll see you next time!

Audio clips

(in order of appearance)

  • “Hour Five” from Somnium by Robert Rich
  • “Whipping the Horse’s Eyes” from Feast of Wire by Calexico
  • “Fluidscape” by Kevin MacLeod*
  • “Pizzicato” from Sylvia by Leo Delibes / Andrew Mogrelia
  • “Moonlight Sonata” from Makara by E.S. Posthumous

* marks royalty-free music. With copyrighted music, we use only short clips and hope this qualifies as Fair Use. Fair Use is defined in the courts, but please note that we make no profit from this podcast, and we hope to bring profit to the copyright owners by linking listeners to somewhere they can purchase the music. If you are a copyright owner and have a complaint, please contact us and we will respond immediately. The text and the recordings of Luke and Alonzo for this podcast are licensed with Creative Commons license Attribution-Noncommercial-Share Alike 3.0, which means you are welcome to republish or remix this work as long as you (1) cite the original source, and (2) share your remix using the same license, and (3) do not use it for commercial purposes.


Comments (59)

MichaelPJ November 16, 2010 at 4:28 am

ALONZO: So, desires that tend to result in getting people killed are likely to rank high as desires that people generally have many and strong reasons to inhibit or weaken. We may not be able to calculate the precise degree to which people have reason to inhibit such desires, but we can know that the value is pretty high.

This bit sent my alarm bells ringing!

Desires that are likely to get me killed are desires that I have reasons to inhibit, but desires that get other people killed are irrelevant to me, unless I happen to care about other people.

For example, living in a first-world country where I am unlikely ever to have to fight in a war or somesuch, the death toll involved in wars does not give me a reason to try and inhibit them through government pressure etc.

I’m keeping a close eye on that “people generally”!


Yair November 16, 2010 at 5:20 am

MichaelPJ – Yes, but so far they have been careful enough. We’ll see later how this turns out.

Alonzo et Luke –

I’m entirely unhappy with the vague “measurement” being done here. It amounts to no more than vague intuition – the very thing you said you’re against. Where is the science? Where are the precise definitions of desires, how to measure them in the lab, and how to measure them in the real world? I’m not at all convinced that your intuitive methods would really work in real-world situations where ethical thinking is really needed. How in hell am I going to determine the effect of instituting Medicare on a federal level on thwarting and fulfilling the entire gamut of human desires across all of the USA? Those are mightily complicated issues. I don’t have clear intuitions on this – I don’t even know the facts about what the various relevant desires are! Your intuitive method seems far too limited to function adequately in real-world dilemmas, and doesn’t offer scientific validation of its results. I don’t expect your theory to have all the answers, but I at least expect a direction about how we may go about discovering the answers.

A real-world science that measures people’s preferences with regard to choice is economics. One of the things economists will tell you is that people are irrational. This means that deducing the strength of their desires from their choices is not at all a trivial matter, if it even has real meaning. They would also tell you that economic effects (such as the buildup of stone piles) can be due to inequality in economic power or the costs of achieving contrasting goals rather than to desire. And as I recall, you made it a point to talk about primary desires, desires-as-ends, whereas now you appear to be talking about desires-as-such, both as ends and means, primary and secondary alike.

In short, deducing the primary desires of real humans in the real world is an enormously difficult task, and determining what will further or curtail them “in general” is an even more difficult one.

I don’t think any of that is a serious objection to your theory – it’s just a short empirical lacuna that needs to be filled later. You don’t need to be able to measure desires to be able to talk about them as if you could, and see where that lands you. But it would benefit your theory if you shored up the “measurement problem” in a more serious, scientific-minded, manner.


Josh Nankivel November 16, 2010 at 5:22 am

I’m confused about weighing the desires of myself, against the desires of a few, against the desires of many.

In the 10 nuclear warheads vs 5 seconds of electric shock thought experiment, if the 10 nuclear warheads explode out in space where no one will be harmed, you’d go that way. Perhaps I’m leaping ahead to morality before I should, but this approach seems to leave any group that is in a minority at risk of being oppressed because in the sum total, more desires have been fulfilled.

Love the show, very well produced and I can’t wait for the next one!


Zeb November 16, 2010 at 6:09 am

My question for the next podcast is, “What counts as an action?” I ask this because I am skeptical of the claim that desires are the only reasons for action that exist.

You have said that automatic body processes like breathing are not actions, but what about instinctive actions, like putting your hand up when someone throws something at you, or trained actions like when a cop pulls the trigger when he sees a suspect reach for his pocket? What about actions that happen in “flow,” like when a linebacker jukes left to avoid a tackle, or a jazz musician chooses a certain note during a solo? And what about the Zen master who would claim to have eliminated desire, but still does things? While you could construct a narrative with desires and beliefs to explain all of those actions, what makes you think that at the moment of choice desires and beliefs are physically present? If they are present, in what physical form?

Finally, outside the human realm, what counts as an action? The barking of a dog? The dance of a bee? Etc. Where is the cut-off? (Just conceptually, empirically speaking.)


JS Allen November 16, 2010 at 6:59 am

It seems like Alonzo dodged both questions entirely:

Q. How do you quantify desires in the real world?
A. If it’s a choice between pie heaven and getting tortured for eternity, it’s easy.

Q. How do you know the micro situation applies to macro?
A. First assume that there is a pile of rocks…

I can’t imagine any economist who gave answers like these being taken seriously. And “utility” in economics is a heck of a lot easier to quantify.


Luke Muehlhauser November 16, 2010 at 7:07 am

JS Allen,

This episode originally went into ‘willingness to pay’ and all that, but in the end we only had time to answer more basic questions, for example to rebut the idea that because we can’t make precise measurements now, therefore precise measurements are impossible. Some responses in this thread say things like “You haven’t explained exactly what a desire is or how it can be quantified!” But remember, that was true of temperature as well, and it didn’t mean temperature could never be measured, and it didn’t mean we couldn’t already tell when some things were hotter or colder.


JS Allen November 16, 2010 at 7:10 am

@Zeb – I think Alonzo is drawing the line at “intentional” actions. This leads to circularity, since “desire” and “belief” are essential to the common definition of “intentional”.

To answer your next question, it’s a complete mystery why intentional action was chosen as the cutoff point.

He is also smuggling in “choice” without bothering to give a coherent definition. I don’t believe that classical BDI requires “choice”, but he’s smuggling it in to create a way to stack-rank desires, apparently. In the past, he has been seemingly inconsistent about “choice”. At one point, he said that all moral questions are matters of choice; but then at another he said that choice doesn’t impact the morality of a question.


josef johann November 16, 2010 at 8:07 am

Yair,

Your intuitive method seems far too limited to function adequately in real-world dilemmas

Just like there are going to be questions of economics, and questions about temperature, and questions about quantum mechanics of spectacular complexity, there will be questions about the weights and interrelations of desires of spectacular complexity.

But such questions don’t decide the much more fundamental issue of whether, as a fact of the matter, desires are real in the first place.

Imagine if I said “Gee, the financial crisis was really complicated. I can’t tell whether the collapse of Lehman Brothers or the Fed’s IOR policy did more to cause the financial crisis. So I’m skeptical that money really has anything to do with it.”

That would be ridiculous and myopic. A real argument wouldn’t consist in gestures toward complicated things that are difficult to understand, because that’s going to be true regardless, and it fails to separate true theories from false ones.

And that said, there still are every-day interactions where we do have a credible understanding, which we base on our access to our own first-person experience and the supposition that other people are like us (i.e. feel pain in cases where we feel pain, need food in cases where we would need food). And no one doubts the validity of this save in philosophical exercises.

So we can’t cash in this long-distance fuzziness as if it suddenly means we don’t know how to act in everyday cases: should I put a leash on my rottweiler? Should I make sure this food is safe for my friend with a peanut allergy?

And that said, you can even do science armed with nothing more than that first-person experience. Take the case of English physicist Henry Cavendish: he had no scientific instruments with which to precisely measure electric current. So he subjected himself to bodily shocks to determine the strengths of currents, and discovered many laws about electricity. He may even have been the original discoverer of Ohm’s law.

Which is to say, there are indeed clear cut, everyday cases where the best understanding of our own first-person experiences is that they describe facts in a world that hold true for other people, too. That I can’t precisely estimate the desires of an entire country does not mean I can’t approximately estimate the desires of a handful of people in cases in my everyday life.


Garren November 16, 2010 at 10:12 am

I’ve no objection to the podcast based on the impracticalities of desire measurement.


Yair November 16, 2010 at 11:07 am

josef johann,

Which is to say, there are indeed clear cut, everyday cases where the best understanding of our own first-person experiences is that they describe facts in a world that hold true for other people, too. That I can’t precisely estimate the desires of an entire country does not mean I can’t approximately estimate the desires of a handful of people in cases in my everyday life.

I don’t dispute that. However, there are also cases where this just doesn’t cut it. These become more prevalent and serious in politics and setting large-scale policies, where millions of lives are involved in all their rich complexities. And those are very important cases.

Again, this problem is not severe. But as it is, the above treatment is not enough. A more scientific, precise, objective, and yet practical approach would ultimately be needed – but that’s a hurdle for the application of the theory in practice in the far future, not for the basic theory itself.


Jeff H November 16, 2010 at 11:39 am

I’ve got a question for the next podcast.

If desires are factors that lead to choices, does that not mean that a sleeping person has no desires? Or if you include hypothetical or potential desires, does that not mean that, in principle, desires will never be fully measurable, even if one measures brain states?


JS Allen November 16, 2010 at 12:01 pm

Some responses in this thread say things like “You haven’t explained exactly what a desire is or how it can be quantified!” But remember, that was true of temperature as well, and it didn’t mean temperature could never be measured, and it didn’t mean we couldn’t already tell when some things were hotter or colder.

Demonstrating that temperature can be measured is very far from explicating a coherent quantification scheme for desires. A coherent quantification scheme requires, at a minimum, a comparison operator and some sort of schema. I can imagine a proposal that would be coherent, but thus far it appears that Alonzo hasn’t even tried. Maybe I’ll throw him a bone and propose one later — not that it would help the overall theory.

And, of course, the issue of quantification at macro scale was completely ignored; probably because Alonzo knows that it won’t work.

In this context, it’s interesting that you mention temperature, since it was Edward Lorenz’s simulation of temperature readings from meteorological stations that led to his insight about the “butterfly effect” and the challenges of making macro predictions in complex systems. Desirism’s approach to desire-engineering is even more naive than the pre-Lorenz approach to temperature engineering.

Of course, desires (whatever they might be) are decidedly not like temperatures. Benoit Mandelbrot began his prodigious career by looking at volatility of pork futures, which would presumably be something like Alonzo’s parable of rock-pickers outlined in your podcast. Since then, we’ve learned a tremendous amount about large-scale behavior in complex systems of human interaction. We know that these systems exhibit severe volatility, usually exhibit Power law distributions, and suffer from “winner take all” effects. Is that the sort of “ethical” system Alonzo envisions?

It’s true that there are many unknowns. But, given what we do know, it’s simply irresponsible to be promoting a theory which assumes some mechanical extrapolation of some unquantifiable folk-theoretic construct. When has any theory like that ever worked?


josef johann November 16, 2010 at 12:48 pm

JS Allen,

Of course, desires (whatever they might be) are decidedly not like temperatures.

All that matters is that they are like enough to temperatures so far as is necessary for the analogy to hold. We have first person access to the experience of temperature, and can fuzzily gauge what it is and tell the difference between different temperatures.

Can we not measure the strength of our own desires, whatever they are? Can we not, in virtue of a shared human condition, make credible estimations as to the desires of others in a wide variety of cases?


josef johann November 16, 2010 at 1:00 pm

Yair,

However, there are also cases where this just doesn’t cut it.

Once we’ve conceded the principle, we’ve conceded the principle. Successive applications of a principle we’ve agreed to be true ought to be no more troubling than the first application of it. It just necessarily follows as a possibility once we’ve granted that it’s true even in the simplest cases. Unless you are skeptical of regularity in nature, or subscribe to some form of irreducible emergentism. Maybe you do! Then I think the onus is on you to justify your introduction of new emergent (or irregular or whatever) behavior.

If we can’t see how to apply desire calculus in complicated circumstances, I’m not convinced that it’s a genuine problem for the theory rather than a failure of imagination on our part.


Tshepang Lekhonkhobe November 16, 2010 at 1:59 pm

I enjoyed this episode much. It’s very simple and elegant.

I find it unfair for others to demand ‘precise measurements’, especially after you guys mention the temperature thing. Thanks so much guys.


Luke Muehlhauser November 16, 2010 at 3:42 pm

Thanks, Tshepang.


Steven November 16, 2010 at 4:44 pm

Very interesting episode. I suppose my biggest problem is that temperature can be measured outside of human perceptions, but morality, it seems, is still restricted to whatever desires conscious beings may have, which seems to indicate that morals aren’t some “real natural property” of the world, but rather the by-product of whatever desires a conscious entity happens to have. Am I interpreting this wrong?

The other two points that struck me were:

1. What I’m wondering though is, what if we could give people a pill so that they won’t value their lives anymore and will have the desire to kill themselves? And, although I understand that we aren’t up to making moral calculations, if somebody thought it plausible to convince many people to kill themselves, would that be acceptable?

2. This is the most interesting one: What about the desire to commit suicide?


Steven November 16, 2010 at 4:48 pm

For my first paragraph, I should add that whereas temperature affects living and non-living things alike, morality seems to only affect beings with desires. While this may seem trivial and obvious, the greater implication is that there ultimately can’t be any real reference outside our own preconceptions (which form our beliefs and, by extension, our desires), so, unlike temperature, we may never find a real way of finding “real” morals.


dan November 16, 2010 at 6:15 pm

Luke,

Wouldn’t it be easier to put your faith in Christianity and accept its moral way of living, instead of having to eke out a new system that requires just as much faith, if not more? Thanks


PDH November 16, 2010 at 6:23 pm

dan, if the Christian God does not exist then that is not an option. We can’t just pick what we want the truth to be.

I think Luke would disagree with your claim that Desirism requires as much faith as Christianity. What do you mean by ‘faith?’


JS Allen November 16, 2010 at 8:13 pm

Can we not measure the strength of our own desires, whatever they are? Can we not, in virtue of a shared human condition, make credible estimations as to the desires of others in a wide variety of cases?

We certainly think we can. But the empirical evidence shows that self-reports of desires and intentions are notoriously unreliable. If anything, it seems that the folk-theory concepts of “desire” and “intention” are mostly useful to communicate post-hoc rationalization of our behavior to others.

Alonzo is on the right track when he talks about measuring what people actually do. We can create probabilistic models of how people will behave in particular situations, and we can even stack-rank their propensities to do one thing versus another. We don’t need to drag in concepts like “desire” to accomplish this, though, and in fact such concepts as “desire” and “choice” just muddy the waters. Experimental evidence suggests that our brains involve multiple competing subsystems when building up to an action, and the “winner” is determined by all sorts of extrinsic factors — mood, level of sleep, priming, and even magnetic pulses :-) It is only after the winner has been selected that a person concocts a rationalization about how that was the outcome she “desired”.

So yes, on a per-agent basis we can roughly stack-rank propensities for behavior. We can even create profiles of what an “average” agent would do.

But this is still a long way from being a coherent quantification scheme that can operate at even the micro level. The whole reason that Alonzo talks about quantity and intensity of “desires” is so that he can prescribe ranking; which is kind of a moot point on a per-agent basis, since you infer the intensity by observing behavior. The quantification is only interesting in cross-agent comparisons, or when considering combinations of desires. For these, you can’t just observe behavior — you need to select some quantification scheme that allows you to compare.

Until some quantification scheme is proposed, it’s impossible to evaluate desirism. We can’t just hand-wave and say that the end result will be the same regardless of what quantification scheme is chosen — that’s just false. These choices are fundamental to the model, and different quantification schemes will lead to entirely different results. And of course, if you actually try to use desirism to decide a real-world moral problem, you need to have a consistent way of quantifying. Otherwise, and I suspect this is the case with Alonzo’s columns, you just end up basing your judgment on vague intuitions and then hand-waving to make your decision appear to fit into desirism’s framework.
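As a small illustration of why the choice of quantification scheme matters (the numbers below are invented, not taken from the comment), the same thwarted desires scored by two different aggregation rules can favor opposite options:

```python
# Illustration: two aggregation schemes, same data, opposite verdicts.
# Strengths of the desires each option would thwart (invented numbers).
option_a_thwarts = [9]        # one agent's very strong desire
option_b_thwarts = [4, 4, 4]  # three agents' moderate desires

def sum_scheme(strengths):
    """Score an option by the total strength of the desires it thwarts."""
    return sum(strengths)

def max_scheme(strengths):
    """Score an option by the single strongest desire it thwarts."""
    return max(strengths)

print("sum scheme: A =", sum_scheme(option_a_thwarts), " B =", sum_scheme(option_b_thwarts))
print("max scheme: A =", max_scheme(option_a_thwarts), " B =", max_scheme(option_b_thwarts))

# Under the sum scheme, option A thwarts less (9 < 12); under the max scheme,
# option B thwarts less (4 < 9). Which scheme to adopt is exactly the
# unspecified modeling choice the comment is pointing at.
```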


Yair November 17, 2010 at 12:40 am

josef johann,

If we can’t see how to apply desire calculus in complicated circumstances, I’m not convinced that it’s a genuine problem for the theory rather than a failure of imagination on our part.

I agree. I’m not clear precisely what it is we’re disagreeing on, if anything. All I’m saying is to apply the desire calculus we need to have a reasonable way to do so – if our way only works reliably in some cases, that’s a problem. It is not an insurmountable problem, at least in principle – it’s just a problem of lack of imagination and ingenuity on our part on how to extend the calculus of desires further.

JS Allen,

If anything, it seems that the folk-theory concepts of “desire” and “intention” are mostly useful to communicate post-hoc rationalization of our behavior to others … Experimental evidence suggests that our brains involve multiple competing subsystems when building up to an action, and the “winner” is determined by all sorts of extrinsic factors — mood, level of sleep, priming, and even magnetic pulses :-) It is only after the winner has been selected that a person concocts a rationalization about how that was the outcome she “desired”.

I think you are somewhat overstating the case against “desires”. The above evidence shows that humans are composed of different parts, with their own “desires” or parts thereof – not that there are no desires. People still act to reach certain goals, based on (among other things) their beliefs and reasoning. They still have various motivations brewing inside them, that are still driving them towards certain end-goals (rather than just direct actions), through a process that can be broadly construed as “reasoning” (albeit not a perfectly rational one), which relies on certain “beliefs” (which are different for different parts of the person and under different conditions) – the picture is more complicated than the naive folk concept, but it still constitutes “desires” in practice as a good effective theory for the most part.

If I’m mistaken, please refer me to the major articles showing that.

Until some quantification scheme is proposed, it’s impossible to evaluate desirism. We can’t just hand-wave and say that the end result will be the same regardless of what quantification scheme is chosen — that’s just false.

Well, we can tentatively assume the results of the accurate measurements won’t be substantially different from the intuitive judgment in “clear” cases. That allows us to proceed to evaluate other aspects of the theory. It would, after all, be rather pointless to develop a detailed measurement theory only to discover later that it isn’t needed – that desirism is flawed for some other reason regardless, so it was all a waste of time.

If we find that desirism passes muster in other respects, we can then return to this assumption and check it. But it appears to me the above assumption is a plausible one. It is akin to saying we can estimate length by counting steps, and worry about more accurate and objective measures only if we discover that determining the length actually matters for something.

if you actually try to use desirism to decide a real-world moral problem, you need to have a consistent way of quantifying.

Very true. However, I’m willing to let this pass as a “quibble” – I want to get to the heart of the theory. So far, there is nothing to use, there is no moral prescription at all.


josef johann November 17, 2010 at 6:18 am

Yair,

If I had read your last comment more carefully, I would have seen this sentence:

a hurdle for the application of the theory in practice in the far future, not for the basic theory itself.

And I would have realized I do not disagree with you.


Alonzo Fyfe November 17, 2010 at 7:15 am

MichaelPJ

Desires that are likely to get me killed are desires that I have reasons to inhibit, but desires that get other people killed are irrelevant to me, unless I happen to care about other people.

Except the desires that get other people killed are likely to be the same desires that get you killed. At the very least, their existence puts you at risk. Furthermore, you need the cooperation of others to inhibit desires that would get you killed, and they could use your cooperation to inhibit desires that get them killed. So, an overall project of inhibiting desires that tend to get people killed results.

For example, living in a first-world country where I am unlikely ever to have to fight in a war or somesuch, the death toll involved in wars does not give me a reason to try and inhibit them through government pressure etc.

First, there’s the prospect that first-world countries are built on an aversion to war – so, without a strong aversion to war, you would not be living in a first-world country, and you have many and strong reasons to promote what turns out to be a pre-requisite for the possibility of first-world countries.

Second, people who do suffer the desire-thwartings of war have reason to cause in you an aversion to war. And, to the degree that they succeed, to that degree you do have a reason to try to inhibit wars. Wars affect you not because you may be killed in one, but because others who may be killed have had reason to cause you to have an aversion to their being killed.

Third, an aversion to war is a next-door-neighbor to an aversion to killing. And people in first-world countries still have a risk of being killed (murdered), and promoting an indifference to war risks promoting an indifference to killing.

However, if after all of this it is truly the case that, for you, an overall aversion to war does not contribute to a state that fulfills any of your desires – then you have no reason to promote an aversion to war. Fine.

That does not change the fact that people generally have many and strong reasons to promote an aversion to war and, correspondingly, have many and strong reasons to condemn those who are indifferent to war. You may not have an aversion to war – but they sure have many and strong reasons to use social forces to give you one.

I’m keeping a close eye on that “people generally”!

Please do. They’re tricksy.


Alonzo Fyfe November 17, 2010 at 7:25 am

Yair

How in hell am I going to determine the effect of instituting Medicare on a federal level on thwarting and fulfilling the entire gamut of human desires across all of the USA? Those are mightily complicated issues. I don’t have clear intuitions on this – I don’t even know the facts about what the various relevant desires are!

Well, according to desirism, those are mighty complicated issues. We don’t even know the facts about what the various relevant desires are. Nor do we have an easy answer to the question of what states of affairs will result and how they would relate to those desires.

These ARE the facts of the matter and desirism says that these ARE the facts of the matter.

This is the world we live in and in which we have to make decisions. Desirism SAYS that some questions are hard to answer, and if anybody comes along and says that they have an “easy formula” that gives easy answers to these questions – well, those people are wrong. The difficulty in knowing the desires and which states of affairs will result means that these types of questions have no easy answers.

There are some things in science that are just as complex. Scientists do not have a general solution to the three-body problem. If three or more bodies of approximately the same size are in orbit around a mutual center of gravity, there is no simple formula for the motions of those objects; they can only be approximated. The math is just too complex.

Desirism still tells us which types of information are relevant – it tells us what to look for. It tells us what arguments are good arguments and what arguments are bad arguments. It tells us not to believe the person who claims to have an easy answer, because easy answers rely on reasons for action that do not exist.

When it comes to this type of question, desirism is not going to say, “This has an easy answer and the easy answer is P.” Desirism is going to say, “This is a really hard question and we are going to have to do a lot of research to try to figure out the right answer – but, at least, I can point you to the types of questions that we need to answer. It’ll take a lot of hard work, but that’s the direction we have to travel.”

And that seems to be the truth of the matter.


Alonzo Fyfe November 17, 2010 at 8:22 am

JS Allen

Can we not measure the strength of our own desires, whatever they are? Can we not, in virtue of a shared human condition, make credible estimations as to the desires of others in a wide variety of cases?

Actually, and I think interestingly, research suggests that we have no direct knowledge of our own desires and that we “theorize” our own desires the same way we theorize the desires of others – and are just as prone to mistakes.

There is a reason for this. There are no evolutionary reasons for an ability to sense one’s own desires. However, there are many evolutionary reasons for the ability to accurately predict and explain the behavior of others. So, we have evolved an ability to explain and predict the behavior of others, and we apply those methods to ourselves.

Furthermore, this method of explaining and predicting the behavior of others also gives us insight as to how we can alter the dispositions to behave in others (by altering the environment in ways that alter the desires of others so as to alter the states that others will tend to realize or avoid realizing). Again, this is a useful tool.

Ultimately, we may well come up with a better way for explaining and predicting intentional behavior – and better tools for influencing the behavior of others. That method might not make any reference to beliefs and desires. However, until we have such a method, we have no option but to use the best method we currently have (which is one that employs beliefs and desires).

In fact, Luke (I think) is strongly sympathetic to the claim that a better way will be discovered that makes no use of beliefs and desires while I hold that such a ‘better’ system is likely but not extremely likely.

This is akin to noticing that Newton’s theories fail in certain circumstances and that a better theory is required. This did not give people reason to throw out Newton’s theories entirely – not if they still provided useful conclusions in many day-to-day situations. In fact, Newtonian physics is still used in engineering because, though it has its problems, it yields answers that are good enough in many real-world cases.


JS Allen November 17, 2010 at 9:40 am

It is akin to saying we can estimate length by counting steps, and worry about more accurate and objective measures only if we discover that determining the length actually matters for something. … However, I’m willing to let this pass as a “quibble” – I want to get to the heart of the theory.

 
I already agreed that it’s possible to roughly stack-rank “desires” on a per-agent basis. Technically, that is too charitable — empirical evidence shows that “desires” are not commutative. That is, people often prefer A to B and B to C, but prefer C to A. It happens all the time. This would seem to complicate efforts to quantify desires on a per-agent basis. However, like you, I’m willing to let that pass as a quibble.

However, we still don’t know how to compare cumulative quantities of desires within the same agent, let alone in multi-agent scenarios. We don’t even have any reliable intuitions about this. Since this sort of quantification is absolutely essential to desirism, I maintain that desirism is incoherent until this is defined.

I’ve professionally built models of complex systems in a variety of domains. Over the years, you learn that there are certain things that make an immense difference in the outcome of certain types of model, and which cannot be left unspecified. This is one of those things.

So far, there is nothing to use, there is no moral prescription at all.

If you’re saying that desirism is an interesting idea that should be further specified and explored, then I’m with you. The problem is that Alonzo is promoting this as a coherent moral system, and is using it to make real-world moral proclamations, despite these very large (and probably fatal) flaws.

Earlier this year, Mitchel Heisman also published a 1904-page “ethical theory” which was praised by many as being brilliant and erudite. Like Alonzo, he refused to submit his ethical theory to peer-review. Like Alonzo, he scorned “ivory tower” thinking and preferred to paint in broad brush-strokes rather than investigate the holes in his theory. And, like Alonzo’s supporters, he argued that it was urgent to take action before confirming the finer details of his theory. Like Alonzo’s theory, Heisman’s theory was a sprawling work of hundreds of contradictory pages that took years to write, but which could have been more clearly communicated in 5 short pages.

Here is a quote from Mihaly Csikszentmihalyi’s Flow:

Laypersons with an ax to grind sometimes turn to pseudoscience to advance their interests, and often their efforts are almost indistinguishable from those of intrinsically motivated amateurs. An interest in the history of ethnic origins, for instance, can become easily perverted into a search for proofs of one’s own superiority over members of other groups. The Nazi movement in Germany turned to anthropology, history, anatomy, language, biology, and philosophy and concocted from them its theory of Aryan racial supremacy. Professional scholars were also caught up in this dubious enterprise, but it was inspired by amateurs, and the rules by which it was played belonged to politics, not science. Soviet biology was set back a generation when the authorities decided to apply the rules of communist ideology to growing corn, instead of following experimental evidence. Lysenko’s ideas about how grains planted in a cold climate would grow more hardy, and produce even hardier progeny, sounded good to the layperson, especially within the context of Leninist dogma. Unfortunately the ways of politics and the ways of corn are not always the same, and Lysenko’s efforts culminated in decades of hunger.

Inventing your own ethical theory might be a fun hobby. Go for it. But if you want to start promoting it as a way to order people’s lives, you need to get it straight. When there are glaring faults, and the pieces don’t cohere together, you’re not allowed to wave your hands and say “Trust me; I got the basic contours right.” Atheists, of all people, shouldn’t tolerate such behavior.


Yair November 17, 2010 at 10:20 am

JS Allen,

If you’re saying that desirism is an interesting idea that should be further specified and explored, then I’m with you.

I don’t read the other desirism-related stuff here for the most part. This series is supposed to be the up-to-date and most clear presentation of desirism, so I’m focusing on that. So far, this series hasn’t progressed to making prescriptions, normative claims, or even defining “morality”. I’ll decide what to make of it as a moral theory when it is explained as such.

I can make guesses, but… I rather not. I’ll wait for the clearest presentation of the ideas that I can get.

However, we still don’t know how to compare cumulative quantities of desires within the same agent, let alone in multi-agent scenarios. We don’t even have any reliable intuitions about this. Since this sort of quantification is absolutely essential to desirism, I maintain that desirism is incoherent until this is defined.

What you seem to be saying is that we can’t really sum up desire intensity within an agent. That would indeed be a problem. I’m not aware of such findings, however. It seems rather counter-intuitive to me, as it naively appears that different factors do “add up” in making an intentional decision. Decision theory would also require such summation, and despite their irrationalities people do often abide by it or something close to it – implying that summation does occur. My guess is therefore that summing up intensities (probably non-linearly) would be a good effective theory for the most part. Again, if there is literature to the contrary – I’m not aware of it.

Comparing desires between agents is another kettle of fish, but again the naive feeling is that intensities can be gauged, and I am not aware of evidence to the contrary.

As long as desires can be summed in this manner, as an effective psychological theory under some constraints, then the desire-based description that desirism relies on is fine within these constraints.


Alonzo Fyfe November 17, 2010 at 12:34 pm

JS Allen

[E]mpirical evidence shows that “desires” are not commutative. That is, people often prefer A to B and B to C, but prefer C to A. It happens all the time. This would seem to complicate efforts to quantify desires on a per-agent basis. However, like you, I’m willing to let that pass as a quibble.

It sounds to me as if you are talking about the transitive property.

If A>B and B>C then A>C.

The Stanford Encyclopedia of Philosophy has a detailed discussion of preference transitivity. It provides some counter-examples, but falls far short of making any claim that preference transitivity fails in a way that would create a serious problem to the theory we have here.

Many of their counter-examples rely on cases where there are imperceptibly small differences between adjacent members but major differences across the whole set.

For example, consider 1000 cups of coffee, each a little sweeter than the next. An individual might not be able to notice a difference (or express a preference) between cups 167 and 168. However, he has a clear preference for cup 167 over cup 600. It violates the principle of transitivity to claim that a string of indifferences can yield a preference.

However, this type of objection would create the same difficulty for the transitivity of length. We can create a string of objects where each is a tad longer than the last – but the differences are too small to notice. This could be interpreted as yielding the result that a string of “equal lengths” can yield a “greater length”.

The problem comes from making the invalid inference, “If we cannot tell the difference, then there is no difference.”

To answer this problem, I would argue that it makes more sense to hold that we cannot always tell, in cases of close similarities, which option fulfills the most and strongest of our desires.
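A small sketch of that point (the sweetness values and the detection threshold below are invented): a chain of differences that are each too small to detect can add up to a clearly detectable difference, without the underlying quantity being intransitive at all. The apparent intransitivity comes from coarse measurement.

```python
# Sketch of the 1000-cups case: each cup is a tiny bit sweeter than the last,
# below the just-noticeable difference (JND), yet distant cups differ clearly.
JND = 0.5                                    # assumed smallest detectable difference
sweetness = [i * 0.01 for i in range(1000)]  # cup i has sweetness 0.01 * i

def reported_preference(a, b):
    """What the taster reports: indifference when the difference is below the
    JND; otherwise a preference for the less sweet cup (an assumed taste,
    chosen to match the direction of the example above)."""
    if abs(sweetness[a] - sweetness[b]) < JND:
        return "indifferent"
    return f"prefers cup {a if sweetness[a] < sweetness[b] else b}"

print(reported_preference(167, 168))  # indifferent: the difference is only 0.01
print(reported_preference(167, 600))  # prefers cup 167: the difference is 4.33

# The reports look intransitive (a long string of "indifferent" judgments
# adding up to a clear preference), but the underlying sweetness scale is
# perfectly transitive; "we cannot tell the difference" does not imply
# "there is no difference".
```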

In the second type of case, I could not figure out how to generate a second example that made any sense.

It was described as follows:

Consider an agent choosing between three boxes of Christmas ornaments (Schumm 1987). Each box contains three balls, coloured red, blue and green, respectively; they are represented by the vectors 〈R1,G1,B1〉, 〈R2,G2,B2〉, and 〈R3,G3,B3〉. The agent strictly prefers box 1 to box 2, since they contain (to her) equally attractive blue and green balls, but the red ball of box 1 is more attractive than that of box 2. She prefers box 2 to box 3, since they are equal but for the green ball of box 2, which is more attractive than that of box 3. And finally, she prefers box 3 to box 1, since they are equal but for the blue ball of box 3, which is more attractive than that of box 1. The described situation yields a preference cycle, which contradicts transitivity of strict preference.

Well, the agent prefers Red 1 to Red 2, Green 2 to Green 3, and Blue 3 to Blue 1.

But is Red 3 equal to Red 1 or to Red 2? The example seems to require both R3 = R2 and R3 = R1, even though R1 > R2.

If you assume violations of transitivity, then expect violations of transitivity in the results.

Now, my first suspicion is that there is something I do not understand in this example – and I would need somebody to explain it to me.
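For what it is worth, one way to reconstruct the ornament-box example so that it actually produces a cycle is to treat the “equally attractive” judgments as below-threshold indifferences rather than genuine equalities – which is exactly the assumption being questioned here. The attractiveness scores and the threshold in the sketch below are invented for illustration.

```python
# The ornament-box case, read so that "equally attractive" means "difference
# below a perception threshold".  Scores and threshold are invented.

THRESHOLD = 0.3    # smaller differences are judged "equally attractive"

boxes = {          # (red, green, blue) attractiveness of each box's balls
    1: (1.0, 0.8, 0.6),
    2: (0.6, 1.0, 0.8),
    3: (0.8, 0.6, 1.0),
}

def prefers(a, b):
    """Pairwise choice: decide by whichever colour shows a noticeable difference."""
    for x, y in zip(boxes[a], boxes[b]):
        if abs(x - y) >= THRESHOLD:
            return a if x > y else b
    return None    # genuinely indifferent

print(prefers(1, 2), prefers(2, 3), prefers(3, 1))   # 1 2 3 -> a strict preference cycle
# The cycle arises only because "looks equal" (within THRESHOLD) is treated as
# genuine equality, which is the inference objected to above.
```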

One of the things I could not find is any claim that “empirical evidence shows that ‘desires’ are not [transitive].”

However, I thank you for prompting me to take a look at that.


cl November 17, 2010 at 1:10 pm

Alert! Alert! Somebody forgot to properly escape a BOLD tag!!

Overall, this episode was a step in a better direction, and I don’t have anything to add that isn’t covered in my response. Though I still lament the fact that it took nine episodes to get to the “real-world” part, I’m glad you guys are finally starting to get away from Pandora and Alph and Betty and all that stuff. BTW, how many of these podcasts are planned? Is there a set number? On the one hand, I can see the advantages of not conforming to a pre-determined number of episodes. On the other hand, not knowing how many are planned means you can keep saying “we’ll address that in another episode” indefinitely!

JS Allen / Yair,

However, like you, I’m willing to let that pass as a quibble. [JS Allen, to Yair]

I think the imprecision of the tools can be legitimately called a quibble at this point in the podcast, as I agree with you both that desirism hasn’t even offered anything coherent – as JS Allen put it – nor has it offered anything to use – as Yair put it. Although, if Luke and Alonzo actually do demonstrate the coherency of the theory, I think the importance of precision would escalate beyond what we might call a quibble.

JS Allen,

…empirical evidence shows that “desires” are not commutative. That is, people often prefer A to B and B to C, but prefer C to A. It happens all the time. This would seem to complicate efforts to quantify desires on a per-agent basis.

True. Just today, a commenter on my blog framed this problem in the context of the Borda-Condorcet controversy. His links were interesting and I recommend them.

The problem is that Alonzo is promoting this as a coherent moral system, and is using it to make real-world moral proclamations, despite these very large (and probably fatal) flaws.

YES, YES and YES again. Hallelujah to that. I find this immensely frustrating, and in direct contradiction of Luke and Alonzo’s stated affinity for justified claims. For example, with nothing more than a variant of “people generally have reasons to promote an aversion to smoking,” Alonzo condemns smoking as an “irrational desire.” (1) Or, Alonzo states that, “Unless there is some sort of medical condition at work, the parent of an obese child is an abusive parent by that fact alone.” (2) Alonzo condemns television sitcoms, reality shows, and spectator sports as “a waste of time” and asserts without justification that “we” would be “better off” if we “were to condemn them.” (3)

Worse, hardly anybody else seems to think this is the equivalent of arguing with a 2×4.

Earlier this year, Mitchell Heisman also published a 1904-page “ethical theory” which was praised by many as being brilliant and erudite. Like Alonzo, he refused to submit his ethical theory to peer-review. Like Alonzo, he scorned “ivory tower” thinking and preferred to paint in broad brush-strokes rather than investigate the holes in his theory. And, like Alonzo’s supporters, he argued that it was urgent to take action before confirming the finer details of his theory. Like Alonzo’s theory, Heisman’s theory was a sprawling work of hundreds of contradictory pages that took years to write, but which could have been more clearly communicated in 5 short pages. … Inventing your own ethical theory might be a fun hobby. Go for it. But if you want to start promoting it as a way to order people’s lives, you need to get it straight. When there are glaring faults, and the pieces don’t cohere, you’re not allowed to wave your hands and say “Trust me; I got the basic contours right.” Atheists, of all people, shouldn’t tolerate such behavior.

Wow. You are on fire today. Right on, JS Allen, right on.

1. Alonzo Fyfe, Irrational Desires, February 26, 2010, Atheist Ethicist; 2. Alonzo Fyfe, Gluttony and Superlust, August 25, 2010, CSA; 3. Alonzo Fyfe, Trivial Hobbies, June 17, 2010, Atheist Ethicist


JS Allen November 17, 2010 at 2:20 pm

@Alonzo – Yes, I mean “transitive”. And I did say that it’s a minor quibble, and is perhaps the smallest of the potential problems with desirism. It’s the sort of quibble that could become important at macro scale, though. If it were simply a matter of random perturbation in similar preferences, it would wash out, but it doesn’t appear to be that way.

I think it’s a red herring to frame the per-agent quantification question as a question about “precision”. The coffee cup example is useless, IMO. The colored balls example is more to the point, and we see this all the time with more complex choices like cell phones, cars, etc. There are a few different plausible explanations for this intransitivity presented in the literature — different framing effects; effects of priming; and so on. But like I said, I think it’s a side show that would only be important (if at all) when you try to model it at macro scale.

At the micro scale, we still need to explain how “desires” (or action-preferences, or action-potentials) are quantified cumulatively on a per-agent basis, and also cross-agent. Once we’ve done that, it becomes possible to evaluate the larger theory and do some models.

FWIW, I think your explanation about why people commonly use folk-theory words like “desire” is plausible enough for now. But that’s orthogonal to your use of the word “desire” in your moral theory. Unless your moral theory endorses people “mind reading” other people, I don’t see how it’s anything but counterproductive to use folk-psychology mind-reading language. You’ve repeatedly stated (wisely, I think) that we need to judge propensity to action off of probabilistic empirical measures of actual behavior — not mind reading. If so, we don’t need the concept of a “desire”, because we’re not talking about folk-theory desires. We’re talking about probabilistic propensities to display preferences. We don’t need the ontological model, because we already have the math. Adding an unnecessary ontological bucket in the middle is a recipe for compounding error.
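For illustration only, here is one minimal reading of “probabilistic propensities to display preferences”: estimate, from observed choices alone, how often an agent picks one option over another. The options, trial data, and smoothing prior are all hypothetical; this is a sketch of the general idea, not anyone’s actual method.

```python
# One reading of "probabilistic propensities to display preferences": estimate,
# from observed choices alone, how likely the agent is to pick one option over
# another.  Options, trial data, and the prior are hypothetical.

from collections import Counter
from itertools import combinations

observed_choices = [                 # (chosen, rejected) pairs from imagined trials
    ("coffee", "tea"), ("coffee", "tea"), ("tea", "coffee"),
    ("coffee", "water"), ("water", "tea"),
]

wins = Counter(observed_choices)

def propensity(a, b, prior=1.0):
    """Estimated P(agent picks a over b), with a small Laplace-style prior."""
    w_ab, w_ba = wins[(a, b)], wins[(b, a)]
    return (w_ab + prior) / (w_ab + w_ba + 2 * prior)

for a, b in combinations(["coffee", "tea", "water"], 2):
    print(f"P({a} over {b}) = {propensity(a, b):.2f}")
```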


JS Allen November 17, 2010 at 3:16 pm

BTW, I would advise against using things like the “three body problem” or “temperature” as analogies for desirism. I’m going to give you the benefit of the doubt, and assume that you’re just uneducated about complex systems. But these analogies are obviously awful, and open you up to accusations of sophistry.

As I’ve already mentioned, Edward Lorenz’s results were on a whole different level from the 3-body problem. You need to understand exactly how the 3-body problem differs from Lorenz’s scenario, and then decide which scenario is most accurate in describing desirism. This is science from 1969, so there is no reason to get it wrong. Next, you could look at the work that Mandelbrot has done with biological systems, and honestly assess any similarities to desirism. Then, look at complex interconnected systems that include human actors, and see what sort of models are most apt for describing them — and try to honestly assess which models are likely to be the best analogues for desirism. Hint: the most appropriate models will look nothing like a simple 3-body problem.

The only thing appealing about the physics analogies is that they give a superficial appearance of plausibility and help maintain the wishful thinking.


Alonzo Fyfe November 17, 2010 at 4:58 pm

JS Allen

BTW, I would advise against using things like the “three body problem” or “temperature” as analogies for desirism.

I wouldn’t dream of it.

I use the three body problem and temperature to play a role in logic known as “disproof by counter-example.” If somebody claims that A implies B, it is a legitimate response in logic to respond by giving examples of “A and not-B”.

So, if somebody infers that a current inability to assign numbers to a phenomenon implies that it is impossible to measure, a counter-example of an inability to assign numbers to something that eventually came to be measured is legitimate.

When somebody claims that the inability to reach easy answers to complex questions implies that a system is not scientific, it is legitimate to respond that you can find a great many examples in science of phenomena that do not lend themselves to easy answers.


Josh Nankivel November 17, 2010 at 5:11 pm

“So, if somebody infers that a current inability to assign numbers to a phenomenon implies that it is impossible to measure, a counter-example of an inability to assign numbers to something that eventually came to be measured is legitimate.”

Yes, and Sam Harris uses this method in the same way in “The Moral Landscape” – even though I may not agree with many of the points Harris makes, he uses disproof by counter example to great effect to illustrate that morality can be studied scientifically, even if we can’t quantify it precisely.


JS Allen November 17, 2010 at 6:27 pm

I wouldn’t dream of it.

Actually, you did it. Quoting you verbatim:

There are some things in science that are just as complex. Scientists do not have a solution to the three-body problem.

The fact that you would call the three-body problem “just as complex” as desirism, shows that you are painfully ignorant of complexity theory. There is no comparison.

You are also failing to understand the criticism. The criticism isn’t that we don’t know precisely how a system like desirism might play out. The criticism is that we have every reason to believe it will behave quite like analogous well-studied complex systems. If it does behave as other analogous systems suggest, desirism’s prescriptions about “moral behavior” are likely to be just as enlightened as telling people to control the weather by flapping their arms. And of course, a good deal more destructive — people doing pointless rain dances might get tired legs, while people moralizing one another can do a great deal of actual damage (something I’d expect you to be extra sensitive towards, given your background).

So, if somebody infers that a current inability to assign numbers to a phenomenon implies that it is impossible to measure

You’re either failing to understand the quantification criticism, or deliberately misrepresenting it. You need to coherently define what it is that you’re quantifying before you can talk about maximizing its “number” and “intensity”. You’ve done that, sort-of, on a per-agent basis — you’ve proposed that we maximize “action preference” and explained how we can measure that on a per-agent basis (and, inexplicably, insisted on calling this thing a “desire”). You haven’t even offered up a coherent definition of what that would mean in cross-agent scenarios, or when quantifying cumulative bundles of action-preferences on a per-agent basis.

BTW, being able to do it on a per-agent basis is useless for desirism, since this quantification collapses into tautology — you’ve simply measured empirically what people do, and then prescribed that they do what they do. Pretty pointless.

As far as I can tell, you haven’t even tried to present a coherent cumulative or cross-agent definition of something that would be relevant to desirism, and which might conceivably be quantifiable at some point in the future.


Alonzo Fyfe November 17, 2010 at 9:42 pm

JS Allen

You seem to be putting a lot of energy into criticizing a theory called “desirism” that is substantially your own invention. In doing so, you appear to be making a lot of claims about what I will say – about, for example, the “dangerous” prescriptions that I would make. However, since I have a pretty good idea of what I will say and the implications that I will draw, I have some ability to judge the sense of your warning that I will soon be making prescriptions that, if followed, would spell the end of the human race – because I fail to understand the complexity of the situation.

Of course, one of the problems with this type of complexity is that all options create risk, including the option of doing nothing. The inference some draw, “There is risk no matter what you do so do nothing,” is, itself, dangerous.

So, the real question to ask, then, would be, “Do you have something better?” Can you offer people a set of options that will improve their chances of preventing their children from being raped, their friends from being murdered, and that which they need to survive from being taken or destroyed?

Are you offering anything constructive? Or are you just complaining that, “It’s really difficult?”

I ask this in part because I appreciate the difficulty. I have not spoken about a great many issues that a lot of people claim to know the answers to – precisely because desirism tells me that it would be very difficult to determine the right answer, and I do not have the time or resources to do all of the work necessary to figure out that answer.

Desirism tells me that, in a lot of cases, I have to say, “I don’t know”.

However, even in those cases, I can make suggestions that at least will clear out a lot of the deadwood. For example, people involved in that discussion who claim that intrinsic value properties contain reasons to favor a particular result are wrong. Those who claim that God prefers a particular option are making false claims. And even, in many cases, “Anybody who says that the answer to this question is easy is not only mistaken, but irresponsibly mistaken.”

Hmmm. Sounds like something that you have been saying. Imagine, a theory of morality that says that some problems are very complex, and it would be irresponsible for anybody to claim that they have an easy answer.

Are you really disagreeing with what I write? Or are you disagreeing with a theory of your own construction that you have opted to assign to me so that you can have something to criticize?


Josh Nankivel November 18, 2010 at 4:27 am

JS Allen,

“The fact that you would call the three-body problem “just as complex” as desirism, shows that you are painfully ignorant of complexity theory. There is no comparison.”

Please explain yourself. This is an assertion that I, as a reader of your comment, find confusing. How do you compare the complexity of the three-body problem with human morality and reasons for action? (note: I am ‘painfully ignorant’ of complexity theory and its status as a valid and tested theory of science)

“we have every reason to believe it will behave quite like analogous well-studied complex systems. If it does behave as other analogous systems suggest, desirism’s prescriptions about “moral behavior” are likely to be just as enlightened as telling people to control the weather by flapping their arms.”

What are these ‘analogous well-studied complex systems’ – can you be more specific and make your case? To me, you seem to be asserting things without much justification or explanation. Do you want to teach, or chastise?


JS Allen November 18, 2010 at 6:03 am

What are these ‘analogous well-studied complex systems’ – can you be more specific and make your case?

I’ve given several pointers to popular introductions throughout this comments thread. You could try reading Mandelbrot’s “Fractal Geometry of Nature” or better, “Fractal View of Financial Turbulence”. Then try reading Taleb’s “Fooled by Randomness”. Read up on “Power laws” (I gave the link). Also, read Popper’s “Open Society and Its Enemies”.


Zeb November 18, 2010 at 6:22 am

Wow Alonzo, I think you have just fulfilled JS Allen’s accusation of sophistry made in that other thread. Instead of continuing to grapple with his criticisms of your theory, you accuse him of willfully creating a strawman of your arguments just to have something to criticize. You have in fact strawmanned him. While I am becoming more and more convinced that it is the correct moral theory, I have found all of JS Allen’s criticisms to be completely on point and well articulated. His comments in this and the sophistry thread are, as far as I can tell, a comprehensive catalog of exactly what desirists need to deal with to shore up their theory. I don’t know what if any moral theory JS Allen subscribes to, but he could answer, “Do you have something better?” with, “The version of desirism that deals honestly with these criticisms.” You should be thanking him for uncovering the weaknesses that will allow you to perfect your possible discovery of the answer to an ancient philosophical question.


JS Allen November 18, 2010 at 6:43 am

So, the real question to ask, then, would be, “Do you have something better?” Can you offer people a set of options that will improve their chances of preventing their children from being raped, their friends from being murdered, and that which they need to survive from being taken or destroyed?

Now we get to the crux of the matter. You’re not proposing a philosophical “moral theory” at all. That would imply something that is coherently documented and submitted to peer review.

In fact, you’re proposing a radical new monolithic social system that you can “offer people” to solve all of their deepest physical and economic security needs. If that doesn’t send shivers down people’s spines, they haven’t read their history books. This is why Luke’s quote here was so jarring:

We cannot stand by and wait for a completed neurobiology before we try to live morally. Morality cannot wait. As in every domain of science, we do the best we can with what we know now, and we accept that our theories will probably have to be revised in light of new evidence in the future.

There are so many problems with this statement, and with your own statement above, that it’s hard to know where to begin.

For starters, you are both saying that it was impossible to live morally before desirism came along. I want people to pause and think about that statement. Do you really believe that? Heck, in your most recent comment you flat-out claim that unnecessary rapes and starvation will occur if people don’t adopt desirism. I wish you would plaster your quote above in the header of your site, so people know what they’re getting into.

Second, since you’re pitching your system as a way for humanity to reduce violence and starvation, one would expect you to have some passing knowledge of the history of economics and political science, which represent mankind’s past attempts to solve these problems. You would also need to convincingly explain why your system will be better than the alternatives at accomplishing these goals. Before Luke can be the Lenin to your Marx, the both of you have a tremendous amount of work to do.

I find it revealing that, when asked to define the most basic terminology about your system, you dodge the questions yet again. Luke’s response in this thread was:

in the end we only had time to answer more basic questions

You spent years writing up your system, and people have been asking you to provide a coherent definition of desirism for years, and your excuse is that you “ran out of time”?


Luke Muehlhauser November 18, 2010 at 6:44 am

Zeb,

Which of JS Allen’s criticisms do you find most compelling? It’s hard for me to know what to engage. For example, the claim that we haven’t given explicit definitions for desire or choice is of course correct. We’re getting there. It takes time.


Alonzo Fyfe November 18, 2010 at 7:38 am

JS Allen

For starters, you are both saying that it was impossible to live morally before desirism came along. I want people to pause and think about that statement.

False.

I hold that desirism best explains and predicts much of morality that has existed. In a separate post, I recently applied desirism to understand the concepts of negligence and excuse. Certainly, I am not about to claim that it was impossible for negligence and excuses to exist before desirism came along. And yet desirism, better than any other theory, explains the major moral elements of negligence and excuse – and it provides an account of when accusations are justified and excuses are valid – that are already a part of our common moral practices.

I hold that desirism would allow us to improve the applications of these concepts by allowing us to clear some of the garbage away from these common elements of negligence and excuse, but I would hardly argue that they did not exist, or could not exist in the form they currently have, in the absence of desirism.

I hold that moral systems are possible among animals – those that have desires that can be affected by the behavior of others. Like all other tools, animals would be able to use the tool of morality more efficiently if they understood more about it, but it is not the case that they cannot have a morality because they cannot have a concept of “desirism”.

Heck, in your most recent comment you flat-out claim that unnecessary rapes and starvation will occur if people don’t adopt desirism.

You make it sound as if I am issuing a threat.

I was responding to the inference that “the existence of risk and uncertainty entails that not acting is better than acting.” My point was that for a response to be useful (say, at reducing rapes and murders), it must be a response that reduces risk and uncertainty – a statement that merely acknowledges its existence implies nothing about what to do.

I would stand by that statement. I do not see anything that can be said against it. But, since you needed an interpretation that you could find reason to criticize, I guess you found one.

You would also need to convincingly explain why your system will be better than the alternatives at accomplishing these goals.

Because it does not make references to reasons for action that do not exist.

It describes some of the things we can know about the reasons for action that do exist. However, once again, if you are accusing me of claiming (or assuming) that I have perfect knowledge of these facts and relationships and that there is no more work to be done on this issue because I have done it all . . . your accusations are baseless. I will do what I can do with the resources at my disposal, admit that there is much more work to be done, and hope that others with knowledge in fields I do not have the time to acquire might find some merit in seeing where they can go with what I have provided.

You spent years writing up your system, and people have been asking you to provide a coherent definition of desirism for years, and your excuse is that you “ran out of time”?

Desirism is a theory that holds that desires are the primary object of moral evaluation and that the evaluations of actions are secondary and derived from the primary evaluation of desires.

Is that not enough detail for you? You want me to go into more detail and explain what I mean by a “desire” and “moral evaluation” and “actions” and the methods by which the evaluations of actions are linked to evaluations of desires, and say something about HOW desires are to be evaluated?

Well, answering all of those questions takes time. In fact, we could probably spend several podcast episodes covering each part of this definition. At any point in the process of “defining desirism” one can make the accurate criticism that “you still have not said everything that can be said about desirism, so your definition is incomplete.”

Nope, it’s not. It will never be complete.


Alonzo Fyfe November 18, 2010 at 8:20 am

Zeb

I wish to start with this:

You should be thanking him for uncovering the weaknesses that will allow you to perfect your possible discovery of the answer to an ancient philosophical question.

There is something about a person presenting criticisms in an arrogant and condescending manner that tends to provide an obstacle to a response of, “Thank you. Can I have some more?”

Instead of continuing to grapple with his criticisms of your theory, you accuse him of willfully creating a strawman of your arguments just to have something to criticize.

Some of what JS Allen says are things that do not count as criticism. He says, “A lot more work needs to be done in these areas over here.” And, in many cases, he is right. The theory would benefit from a lot more work done in those directions. In many cases, my response has to be, “I don’t have time to do that. I need to spend a great deal of time making a living and paying a mortgage.”

To which, JS Allen’s response may be, “Then how dare you suggest that you can provide perfect answers to all the world’s problems!”

But that is not something that I have ever claimed. I claim that, in this area over here, where I have the time to work, what I offer is better than many of the contemporary alternatives – subjectivism, act utilitarianism, deontology, divine command theory. However, at the same time, I say, “I do not have the time to take these ideas into those whole realms over there.”

There’s the straw man – the assertion that I either am making or will make claims that are far outside of the area that I have time to study, and then raising objections to the claims I have not made and, as far as I can tell, will not make because I recognize the limits placed on me by my own resources.

His comments in this and the sophistry thread are, as far as I can tell, a comprehensive catalog of exactly what desirists need to deal with to shore up their theory.

In all aspects, perfectly true. Have you read anything that I have written to the effect that there is not a list of things that could be done to shore up the theory, or that some of the things that JS Allen has listed are not on the list?

Yep, there are things that need to be done to shore up the theory. And I would love to be independently wealthy so I could spend my days doing those things.

But, the realm of human knowledge is vast and no person can possibly know more than a small fraction of it. That’s a fact that we all have to live with. I will contribute what I can.

[H]e could answer, “Do you have something better?” with, “The version of desirism that deals honestly with these criticisms.”

Can you identify a place where I have dealt dishonestly with his criticism?

In many places, my response has been, “that’s fine as far as it goes, but it is not a reason to reject the theory. That inference would be invalid.” Is that not true?


JS Allen November 18, 2010 at 9:06 am

Alonzo: So you’re saying that, as long as we accept your completely unsubstantiated claim that desirism is the “best” model of human morality available, and accept your refusal to coherently define it, then you’re willing to work through the “hard problems” with us?

Sorry; that’s not humility. That’s a cult.

Until you seriously consider the possibility that desirism could be a very bad model, and show a commitment to due diligence investigating the flaws, then your pretense at open-mindedness falls flat. Why do no experts agree with your claim that desirism is the “best”? Why is there no empirical evidence to support your claim that it’s the “best”? You can’t just hand-wave and mumble “details, details” when someone mounts a criticism of desirism.

And as long as you keep equivocating and redefining terms, you are in no position to demand that people trust you to work on the “hard problems”. Every time someone points out an incoherence in your theory, you retreat to your insipid talking points.

So far, you’ve continued to insist that desirism is the best available theory in the whole wide world, and you are its dictator who will define it however you choose, whenever you choose. And you expect us to credit you with epistemological humility?


Alonzo Fyfe November 18, 2010 at 9:51 am

JS Allen

So you’re saying that, as long as we accept your completely unsubstantiated claim that desirism is the “best” model of human morality available, and accept your refusal to coherently define it, then you’re willing to work through the “hard problems” with us?

I think it is quite clear that your main intent is to distort everything I write in such a way that you may create an interpretation that you can attack.

The evidence can be found here and in the comment above if any want substantiation of that claim.


JS Allen November 18, 2010 at 10:20 am

Lest anyone accuse me of dodging Alonzo’s “tu quoque”, allow me to explain a little bit about how I evaluate “moral systems”.

1) In “Open Society and Its Enemies”, Karl Popper persuasively argued that one of the greatest causes of human suffering is people who think their “-ism” is the best, and who think it should be applied globally. Fascism, Marxism, Desirism. When some would-be messiah proposes a new “-ism” which promises people that it will “prevent their children from being raped, their friends from being murdered, and that which they need to survive from being taken or destroyed”, my moral sensors go on red-alert.

In 1997, already ten years into his campaign to warn people, George Soros explained how Popper’s cautions applied to Capitalism. Time and again, history has proven Popper correct, and people like Mandelbrot have provided the math to confirm Popper’s insights — he is considered one of the greatest Philosophers of Science for a reason.

Popper has a number of bits of advice about how to engineer social systems that prevent the problems of “-isms”. Systems that follow Popper’s advice will tend to get my support.

2) Alonzo claims to want to reduce rape and starvation, and offers up his system as a technique of achieving these goals. Rather than promoting an “-ism” or a “system”, though, I think it’s far more effective to focus on specific empirical trials and iterate on results, exactly the way science works. Sendhil Mullainathan and Esther Duflo, among others, have been pioneering efforts to reduce human suffering through empirical trials — succeeding where centuries of abstract “-isms” have failed. Note the stark contrast between Alonzo’s approach and Mullainathan’s approach. Alonzo demands that you adopt his terminology, ontology, and over-arching system before even interacting with the real world. In Alonzo’s system, the “-ism” is the important thing — if only you have faith in his “-ism”, then everything will work itself out eventually.

When it comes to solving real-world problems like violence and hunger, we should have zero tolerance for such superstitious rubbish. Just focus on the problems you want to solve; cut out the ontological B.S. in the middle; conduct trials; collect data; and iterate.


josef johann November 18, 2010 at 10:42 am

In Luke’s desirism FAQ, there is this question and response which I like:

Q. Isn’t it a bit arrogant to claim that desirism is the best moral theory ever devised?

A. Maybe. But wouldn’t it be weirder if I defended a moral theory despite thinking it was inferior to another theory? Obviously, I defend desirism because it’s the best theory of morality I know of. Otherwise, I would be defending a different moral theory.

Conflating any -ism with absolutism is a tired canard that fails to interact substantially with anything. It applies with equal force (that is, with equal irrelevance) to any possible belief. Seeing the tentative character of Alonzo’s own posts, even in this comment thread, about the likelihood of desirism, I can’t escape concluding that JS Allen is being greatly uncharitable, and projecting.


josef johann November 18, 2010 at 10:45 am

Alonzo: So you’re saying that, as long as we accept your completely unsubstantiated claim that desirism is the “best” model of human morality available, and accept your refusal to coherently define it, then you’re willing to work through the “hard problems” with us? Sorry; that’s not humility. That’s a cult. Until you seriously consider the possibility that desirism could be a very bad model, and show a commitment to due diligence investigating the flaws, then your pretense at open-mindedness falls flat. Why do no experts agree with your claim that desirism is the “best”? Why is there no empirical evidence to support your claim that it’s the “best”? You can’t just hand-wave and mumble “details, details” when someone mounts a criticism of desirism. And as long as you keep equivocating and redefining terms, you are in no position to demand that people trust you to work on the “hard problems”. Every time someone points out an incoherence in your theory, you retreat to your insipid talking points. So far, you’ve continued to insist that desirism is the best available theory in the whole wide world, and you are its dictator who will define it however you choose, whenever you choose. And you expect us to credit you with epistemological humility?

…and apparently the all-or-nothing apocalyptic threat that holds the cult together is… that Alonzo will refrain from working on “the hard problems”?


Tony Hoffman November 18, 2010 at 10:52 am

JS Allen, I have to say that your criticisms of Desirism both lack coherence and appear to be the product of a conspiracy-inspired mind. I believe your criticism is incoherent because I remain confused as to whether your argument against desirism is based on its description of the real world or on the outcome you foresee were it to be put into effect. Until you are able to more effectively distinguish these separate concepts I have too much trouble following your reasoning here and in other threads. To be clear, is desirism incoherent for you because you are certain that desires, or something like desires, do not exist, or is it incoherent for you because it would lead to bad outcomes? (I think the first is worth exploring, but I think the second has nothing to do with coherence.)

Regarding Alonzo’s evil plans, on what do you base your fear that Alonzo has any more power than the force of his arguments? If his arguments are sophistry, then you should be able to easily demonstrate the fallaciousness of his reasoning based on his most recent presentations. I think you have not grasped two (obvious) facts: desirism, if it is correct, is a description of reality; and Alonzo does not appear to have any great powers, outside the force of his arguments, to demand that we operate as slaves to a theory called desirism.


JS Allen November 18, 2010 at 11:08 am

@josef — That Q&A from the FAQ is simply a “tu quoque”. It’s like responding to someone who says, “Stop slapping your wife”, by saying “How do you propose that we beat her?”. Neither Luke nor Alonzo has demonstrated that we need any comprehensive “moral theory” at all. As I explained, it’s these comprehensive all-encompassing theories which are the source of much human suffering — and we have clearly better approaches. And if you think this is only about totalitarian systems, I guess you haven’t read the Soros article I linked.

Of course, it’s far from the only fatal flaw in desirism. Even if we decided to argue about “-isms”, desirism would be a particularly bad one. The analogy of telling people to control the weather by flapping their arms is apt.

@Tony Hoffman — I have no idea what you are saying.


Alonzo Fyfe November 18, 2010 at 11:11 am

1) In “Open Society and Its Enemies”, Karl Popper persuasively argued that one of the greatest causes of human suffering is people who think their “-ism” is the best, and who think it should be applied globally.

So, Popperism cannot possibly be the best and cannot possibly be applied globally.

Fascism, Marxism, Desirism. When some would-be messiah proposes a new “-ism” which promises people that it will “prevent their children from being raped, their friends from being murdered, and that which they need to survive from being taken or destroyed”, my moral sensors go on red-alert.

And then those ‘sensors’ determine how you are going to interpret their arguments so that they will end up fitting inside of the box that you have designed for them.

You do not look at what I have actually written. Instead, you have put on these Popperesque eyeglasses that filter everything you read so that it fits within your preconceived notions – and you can’t see what is actually there.

There is nothing in any of this that desirism would object to. In fact, if it is the case that dogmatic devotion to -isms tends to create states of affairs that result in the thwartings of desires, then desirism states that there exist many and strong reasons to promote an aversion to dogmatic adherence to -isms. It would then classify such adherence as a moral wrong, one worthy of condemnation.

You claim to be able to know that dogmatic adherence to an ‘-ism’ is a trait that creates a great many desire-thwartings, and that we can know this and thus write posts that propose a universal condemnation of devotion to ‘-isms’ – even though you assert that the universe is far too complex for anybody to make such a claim, and that universal claims (e.g., that devotion to ‘-isms’ is to be universally condemned) must never be made.

In 1997, already ten years into his campaign to warn people, George Soros explained how Popper’s cautions applied to Capitalism. Time and again, history has proven Popper correct, and people like Mandelbrot have provided the math to confirm Popper’s insights — he is considered one of the greatest Philosophers of Science for a reason.

Now we know who your Messiah is, and I see you are just as defensive of your religion – your ‘ism’ – as the people you condemn.

Popper has a number of bits of advice about how to engineer social systems that prevent the problems of “-isms”. Systems that follow Popper’s advice will tend to get my support.

I am interested in what those social systems are.

If they work as you claim they do – that is, if desires to follow those prescriptions actually help in the fulfillment of other desires, and if an aversion to violating those prescriptions will prevent the thwarting of desires, then desirism would claim that people then have many and strong reasons to promote desires to follow those prescriptions and aversions to violating them.

However, to understand your theory, I need to know what your definition of a “problem” is, and how you determine whether or not a particular state of affairs is to be classified as a “problem”. And what is it about a state of affairs with these qualities that makes it “worth preventing”? Without this, how can you possibly know or justify your claim that what you are doing is “preventing problems”?

2) Alonzo claims to want to reduce rape and starvation, and offers up his system as a technique of achieving these goals. Rather than promoting an “-ism” or a “system”, though, I think it’s far more effective to focus on specific empirical trials and iterate on results, exactly the way science works.

Oh. I get it.

Scientism.

Alonzo demands that you adopt his terminology….

False.

I hold that language is an invention. You can adopt whatever terminology you want. The results would be no different than choosing to use French rather than English. My claims would have to be translated into your language, but it is not the case that the truth value of a proposition depends on the language in which it is uttered. If the truth value changes as the proposition is being translated, then the translation is not accurate.

…ontology…

There is not an utterance to be made that does not depend on a set of claims about ontology . . . including any claims that you have made about the dangers of ‘-isms’. Are we not required to accept your terminology as to what counts as an ‘-ism’ and your ontology with respect to what counts as a ‘problem’ and your prescriptions that what you define as a ‘problem’ are things that ‘ought to be avoided’?

In Alonzo’s system, the “-ism” is the important thing — if only you have faith in his “-ism”, then everything will work itself out eventually.

See the point above where I wrote about how your ‘ism’ blinds you to creating interpretations of what other people say so that you can fit it into the box you have built for it. There is nothing in what I have written that fits this description.

When it comes to solving real-world problems like violence and hunger, we should have zero tolerance for such superstitious rubbish.

Zero tolerance.

This sounds somewhat dogmatic to me. It sounds like . . . I don’t know . . . like something that a Fascist or a Marxist or somebody particularly devoted to a particular ‘-ism’ might say about things that their ‘-ism’ points to and identifies as ‘the enemy’.


Tony Hoffman November 18, 2010 at 11:19 am

JS Allen: @Tony Hoffman — I have no idea what you are saying.

Perhaps you need to get a handle on what it is you are saying first. I think that might be a better starting point for you. Baby steps.


josef johann November 18, 2010 at 11:36 am

TH,

I concur. I’m still not clear on whether JSA thinks “macro” calculations are fundamentally different in kind from micro ones, or specifically what type of complexity argument he is trying to attribute to Alonzo, or where he got the idea that Alonzo was originally talking about anything other than the difficulty of calculation in general.

All this because of an analogy to temperature! The point of the analogy is to refute “micro and macro are different therefore desirism is wrong,” which JSA has repeatedly misinterpreted as if the analogy itself were intended as a full-scale solution to problems of calculating complex desires. Or as if frankly admitting to calculational difficulty were a totalistic refusal to acknowledge complexity.

Instead of digging into this, we get… an apocalyptic story about how paragraphs of text on a blog are irresponsible and threatening society?


JS Allen November 18, 2010 at 11:36 am

@Alonzo — You’ve been making bold claims that your crackpot “moral theory” would reduce rape and starvation. I’ve given you several specific reasons why your system would likely dramatically increase the level of human suffering. Your retort is:

1) Relying on empirical trials is “scientism”
2) The alternatives are really just part of “desirism”, since desirism encompasses all
3) Rejecting “-isms” is an “-ism”, so neener-neener

You have completely descended into sophistry. You’re like a bad parody of theist arguments against atheism.


Tony Hoffman November 18, 2010 at 12:25 pm

JS Allen: “You have completely descended into sophistry. You’re like a bad parody of theist arguments against atheism.”

My goodness. Why do psychological projection and theism so often go hand in hand? Why are so many theists devoid of a sense of irony?

JS Allen, you could raise some excellent points, but there’s another side of you that keeps on stepping on the one who is intellectually interesting. You’d be a lot more fun to talk to if that second guy took a seat.


JS Allen November 18, 2010 at 2:05 pm

Instead of digging into this, we get… an apocalyptic story about how paragraphs of text on a blog are irresponsible and threatening society?

That’s a fair criticism, and I didn’t mean to be all “chicken little”. I’m just trying to make a point about how we should approach claims that promise to deliver us from all of our deepest fears about physical and economic security. Alonzo claimed that his theory would reduce rape and starvation — he actually DID that! There is a certain level of intellectual hygiene that should apply in these cases, and we shouldn’t be giving anyone a free pass just because he’s an atheist.

I don’t really think anyone is about to adopt desirism in the short term and use it for real-world decisions. Then again, I didn’t think anyone would actually be stupid enough to vote for the tea-partiers, and I was honestly shocked on election day. I also didn’t think anyone would make a drastic life-or-death decision based off of an incomplete and contradictory atheist “moral framework”, but Mitchell Heisman proved me wrong.

As for “digging in”, the micro to macro transition is absolutely not a simple matter of “calculational difficulty”. That’s why I reject the comparison to “three-body problem”. I gave a very brief overview of why, in the 11/16-12:01 comment, and pointers to some resources in the 11/18-6:03 comment. The Soros article briefly touches on some reasons for the disconnect as well. Taleb is working on a book about “anti-fragility” that also makes similar points.

Desirism is almost certainly not a useful reduction of how people make moral decisions, though that remains to be seen. But even if it were, it would not make sense to say that we can best influence macro phenomena like violence and hunger by manipulating individual agents’ desires. That would be sort of like saying that we can best influence the weather by flapping our arms.

You’d be a lot more fun to talk to if that second guy took a seat.

OK, fine. Second guy is sitting…


Yair November 18, 2010 at 2:58 pm

OK, fine. Second guy is sitting… 

Good.

You raise some very good points, but your style is extremely off-putting. I beseech you to cease all personal attacks and all psychological analysis – no matter how obviously justified; that kind of talk just doesn’t do your case any good.

On the three body problem, I’d add that the difficulty there is not AT ALL a matter of calculation – it is a matter of analysis. There is no known way to solve the problem analytically. I don’t know how to even begin to compare that kind of difficulty to the difficulties related to complexity that you bring up. I think both aspects are plenty difficult, thank you.

As for your main point – not every “ism” is an Evil-Ism. I frankly don’t see why you’d think desirism is one, as opposed to, oh, any other moral theory. Presumably you are not opposed to the study of morality entirely? We need to have some way to decide reducing suffering is a good idea if we’re gonna empirically test how to do that most effectively. We need to have some way to decide we want to set up an open society.

I suggest cooling down and letting the proponents of desirism make the case for it. You’ve raised some very good points that we’ll need to get back to if it turns out the theory merits it – it’s time to move on and see how the rest unfolds.


Zeb November 18, 2010 at 3:54 pm

Zeb, Which of JS Allen’s criticisms do you find most compelling? It’s hard for me to know what to engage. For example, the claim that we haven’t given explicit definitions for desire or choice is of course correct. We’re getting there. It takes time.

First, I think it is fine that you haven’t gotten there yet, as long as you do get there. I excitedly await. Just acknowledge and commit to engaging JS Allen’s valid criticisms eventually, and refrain from talking or acting like those criticisms are irrelevant.

But definition of terms is a big one; what exactly is a desire, an action, what does it mean for a desire to be “greater”, or even what does “more desires” mean? Without these in place, I agree that desirism, while promising, remains incoherent as yet.

Another is that the argument has yet to be made that desirism will make the world better in any way. Even if it is a perfectly true meta-ethical theory, widespread acceptance and attempted application of desirism could hypothetically be bad on desirism. But Alonzo clearly and strongly asserts that desirism will help improve things. With no intrinsic value on truth, correctness, or having-a-moral-theory, we need a very strong argument for desirism’s moral value. Otherwise all you can say is “If you want a true moral theory, here it is. But it might make you a bad person and the world a worse place, for all we know.”

I am interested in JS Allen’s skepticism towards the ontology of desirism. We don’t see desires, beliefs, or the intentionality of intentional actions. We just see events. What if it is true that BDI (belief-desire-intention) is a post-hoc rationalization of our own actions, and a biologically/socially evolved heuristic for predicting other people’s actions? What if beliefs, desires, and intentionality are nothing more than useful narrative constructs that don’t refer to things that actually exist? Why bother with adhering to the ontological existence of these things when we can just measure what we’re really interested in (whatever that might be), and discover scientifically what sorts of actions and programs etc lead to the results we want? As JS Allen said, “We don’t need the ontological model, because we already have the math.” I’m just saying these are good questions that need to be answered if desirism is going to stand.

Finally, JS Allen’s point about complexity and chaos is important. Moral activity in general, and application of desirism in particular, may be no more effective than rain dancing, or it might be as catastrophic as the proverbial butterfly causing a hurricane. Or it might work just like Alonzo hopes it will. Isn’t it important to figure that out? Desirism, all these criticisms notwithstanding, looks pretty good on paper to me. So do Marxism AND Austrian economics, paradoxically. But I must accept that real world macro-economics is way more complicated than either of those systems admits, and therefore their prescriptions fail, even though they may be based on airtight reasoning from complete and coherent theories based on obviously true principles. Since we’re not in a situation where we obviously need a drastic change in moral thinking to avert total disaster, I think JS Allen’s conservatism regarding advocating any such change when we have no idea what effect that change might actually have is valid. Doing nothing, when things are going pretty ok, might be a wiser and more moral choice than doing “the best thing we can think of.”

That said, and to be fair, I think JS Allen has gone overboard in his last few comments (cults and all that). But on the whole his comments have been great, and if he (or more likely you guys) would tease out the essential points in them, I think they would be very useful to your project. Please do answer them eventually.


cl November 19, 2010 at 6:24 pm

Here’s a question that perhaps you can answer in a future episode: where are your empirical grounds for using desire fulfillment as the arbiter of moral goodness? It seems to me this assumption has never been successfully justified, and it seems to me that justifying this assumption should be the first place to start. What if we’re misguided in using desire fulfillment as a criterion of moral goodness?

**********************

As far as the thread, I purposely stayed out of this one. Perusing it impartially, it seems that right around his comment November 17, 2010 at 9:42 pm, Alonzo goes on the defensive against JS Allen. Personally, I think Alonzo took a turn for the worse when he said,

So, the real question to ask, then, would be, “Do you have something better?” Can you offer people a set of options that will improve their chances of preventing their children from being raped, their friends from being murdered, and that which they need to survive from being taken or destroyed?

Are you offering anything constructive? Or are you just complaining that, “It’s really difficult?”

In everyday discourse, those might be legitimate questions to ask. However, in the context of defending one’s own theory, those are inappropriate questions to ask. Desirism’s alleged coherency is the issue here, and any “real” question to ask needs to pursue that line of questioning. In comment November 18, 2010 at 6:22 am, Zeb writes:

Wow Alonzo, I think you have just fulfilled JS Allen’s accusation of sophistry made in that other thread. Instead of continuing to grapple with his criticisms of your theory, you accuse him of willfully creating a strawman of your arguments just to have something to criticize. You have in fact strawmanned him. While I am becoming more and more convinced that it is the correct moral theory, I have found all of JS Allen’s criticisms to be completely on point and well articulated.

While I won’t say I agree with all of JS Allen’s criticisms, I do agree with many of them, as previously stated in the thread. I also agree with what Zeb seems to imply: that Alonzo’s accusations are unfounded and don’t serve to advance the discussion.

That said, I would also have to side with Alonzo in a few instances here. For example, I also disagreed with JS Allen’s claim that Alonzo was – either implicitly or explicitly – claiming it was impossible to live morally before desirism came along. I think Alonzo was correct to call that out as false.

However, I think JS Allen was entirely on point in criticizing Alonzo’s “putting the cart before the horse”, if you will. Alonzo uses this hitherto untested, non-peer-reviewed theory to make some pretty dangerous claims IMHO, and of them any lover of freedom should be wary.

Alonzo replies to Zeb thus:

Can you identify a place where I have dealt dishonestly with [JS Allen's] criticism?

I can – although to be perfectly clear I cannot discern between dishonesty and negligence given the evidence at hand. Alonzo says to JS Allen:

I think it is quite clear that your main intent is to distort everything I write in such a way that you may create an interpretation that you can attack.

Even if we grant that JS Allen misinterpreted one or more things Alonzo has written, that still would not be sufficient to support the claim Alonzo makes, which is an instance of the slippery slope fallacy. Also, I think Alonzo’s penchant for castigating persistent dissenters with accusations and attacks on their moral character is rather unprofessional and off-putting. I’d much rather hear objections addressed than character attacked.

