CPBD 070: Ron Mallon – Experimental Ethics

by Luke Muehlhauser on October 3, 2010 in Ethics, Podcast

(Listen to other episodes of Conversations from the Pale Blue Dot here.)

Today I interview philosopher Ron Mallon about experimental ethics.

Download CPBD episode 070 with Ron Mallon. Total time is 55:21.


Note: in addition to the regular blog feed, there is also a podcast-only feed. You can also subscribe on iTunes.

Transcript

Transcript prepared by CastingWords and paid for by Silver Bullet. If you’d like to get a transcript made for other past or future episodes, please contact me.

LUKE: Dr. Ron Mallon is an Associate Professor of Philosophy at the University of Utah and has written many papers on ethics, experimental philosophy, and other topics. Ron, welcome to the show.

RON: Thank you very much and thank you for having me.

LUKE: Ron, let me first ask you about experimental philosophy in ethics. What is experimental philosophy and then what kinds of questions could it help philosophers investigate in the field of ethics?

RON: Experimental philosophy is a name given to a broad movement in philosophy that involves philosophers and psychologists using mostly the techniques of experimental psychology, cognitive psychology, and social psychology to address questions that have been traditionally of interest to philosophers. And historically, it’s an outgrowth of a broad twentieth century trend towards trying to use everything but the kitchen sink to try to gain traction in answering philosophical questions.

Philosophical questions are notoriously very difficult and philosophers throughout history made all kinds of empirical and theoretical assumptions in an attempt to see if they could make progress on them. And in the twentieth century people began to use considerations from a broad range of fields, for example, from the study of language, from the study of psychology, from the study of math and game theory and computation, and then eventually computational simulations, to try to answer various questions that philosophers have found interesting. And experimental philosophy is really just a continuation of that, where philosophers and psychologists are in quite direct ways trying to address philosophical questions using experimental techniques.

This movement is relatively young, I think – people date it from around 2000 or so, but you might date it before that. We’ve already used these kinds of techniques to address a range of questions in the field of ethics: folk judgments about free will and about when a person is acting freely and responsibly; the nature of moral judgment, and whether moral judgment is produced mostly by reasoning about an answer or by emotion; and then broader meta-ethical questions about what the nature of moral facts or moral reality is.

LUKE: And some of that last stuff that you mentioned is often considered in the field of moral psychology. What do you think are some of the most intriguing discoveries in moral psychology so far?

RON: Moral psychology really has two somewhat different meanings. One of them is a traditional study of something like the structure of moral agency and the nature of moral action. And then the one that’s more salient in experimental philosophy is the empirical study of moral psychology – what the nature of moral judgment is, and so forth.

I think some of the intriguing discoveries here especially concern the trends towards emphasizing the automatic, surprising, and emotional influences on moral judgment and moral behavior.

So, here are three examples of discoveries. One of them is perhaps the most famous discovery in all of experimental philosophy. Joshua Knobe of Yale has shown that foreseen but unwanted side effects are judged to be intentional if they are judged to be morally bad, but not if they’re judged to be morally good.

And that seems surprising, because it seems like whether something is intentional or not should depend on the mental state of the agent and not on the moral valence of the action. And there’s been a lot of discussion about how to explain this, and even about whether the way I just described it is the right way to describe it.

A second example is work by Joshua Greene at Harvard University, who was trained as a philosopher at Princeton and now works as a psychologist at Harvard. He’s shown that what philosophers have called deontological moral judgment – judgment that seems to override considerations of the greatest good for the greatest number – seems to emerge from emotion. And this has been surprising in part, I think, because the philosopher Immanuel Kant – who’s the source of much modern deontological theorizing – seems to decry the role of emotion in the production of judgment.

And then finally, I think psychologists and philosophers like John Bargh at Yale and John Doris at Washington University in St. Louis have emphasized the effects of apparently irrelevant factors on the automatic processing of our judgments and behaviors. So, when it turns out that very subtle changes in one’s environment – for example, whether one has been exposed to a list of words that prime one to think of the elderly or older people, words like “Florida” and “sunshine” – when it turns out that mere exposure to such words can lead you to walk more slowly, then it starts to seem like our behavior is actually largely governed by factors that are outside of our control. And this challenges our conception of ourselves as kind of rational, reasons-responsive beings.

LUKE: And I wonder if you could give an example of the research that’s been done to show each of these things in the field of moral psychology. What’s a specific example that seems to be leading to these conclusions?

RON: OK, well Joshua Knobe gave people vignettes in Washington Square Park. And the vignette is just a short story. It’s an example of what philosophers call a thought experiment. And the vignette Knobe used was something like this. The vice president of a company comes to the president and says, “We’ve got a new plan that we’d like to implement. It will increase profits but it will also harm the environment.” And the president says “I don’t care about harming the environment. All I care about is increasing profits. Let’s implement the program.” They implement the program, and sure enough profits increase, but the environment was harmed. And then subjects are asked whether the president harmed the environment intentionally, and in that case, overwhelmingly subjects say, “Yes, the president harmed the environment intentionally.”

However, if you use the same exact vignette and you replace the word “harm” with “help,” you get the opposite judgment. So, in this opposite vignette, the vice president says, “We’ve got a new plan. It will increase profits and it will help the environment.” And the president says “I don’t care at all about helping the environment. All I care about is increasing profits,” and you ask subjects if the president intentionally helped the environment, they say overwhelmingly, “no, he did not.”

LUKE: Interesting.

RON: And so that’s really surprising, because it looks like that’s a case where the moral valence of the side effect, which was unwanted by the president, seems to affect whether or not people judge it to be intentional.

And that’s surprising, as I’ve said before, because you might have thought that whether something is intentional depends on the mental state of the person.

LUKE: And then what about Joshua Greene and then some of the other people like John Doris?

RON: Well, I gave one example of John Doris and people like John Bargh, a psychologist at Yale. There’s a host of experiments that show the effects of subtle manipulations on people’s behaviors. And the crucial thing for people like Doris is that it looks like these are manipulations that we wouldn’t endorse reflectively.

You can really understand the connection if you think about examples where someone’s judgments or behaviors are affected by how tired they are, or how racially prejudiced they are, or how disgusted they are, for example.

And these are kinds of effects on one’s behavior that one doesn’t endorse when one finds out about them. And so finding out that the way you act depends in some way on the words you were just primed by, seems to undermine this sense that you’re kind of a reason-responsive person.

LUKE: More specific to morality, that’s the type of study where we find that people’s moral judgments are significantly altered when they smell freshly baked bread versus a nasty smell or something. Right?

RON: Right, right. That’s right. There’s been a kind of run by cognitive and social psychologists on fart spray, which is a commercially available novelty product that smells like flatulence. They might, for example, spray a room with it and then bring subjects in a little while later. The subjects may not even consciously notice the smell by the time they come in, and nonetheless it might change their moral judgments.

LUKE: And then what about Joshua Greene?

RON: Joshua Greene published a paper when he was still at Princeton, working in Jonathan Cohen’s lab, in which he gave subjects famous philosophical thought experiments called trolley problems.

So, in a classic trolley problem, sometimes called “bystander,” you’re walking in the countryside alongside a rail track. You look up the track and see that there is a runaway trolley. And you look down the track the other way and you see that there are five people standing on the track, and there’s a steep hill there where the rail is, so you can see they can’t get off the track.

You fortunately are standing by a switch, and there is a fork in the track. So, you look down the other fork of the track and you see there’s only one person there.

And so one question is whether it’s permissible, or even morally required, for you to flip the switch, causing the runaway trolley to go onto the side track, which would save the lives of the five people on the main track but kill the person on the side track.

And the traditional philosophical judgment about that case has been that it is permissible to flip the switch, saving the five and killing the one.

But then you change the case and make it a case called “footbridge.” It’s a similar case, but now you are walking on a footbridge that goes over a trolley track. You look again up the track and you see a runaway trolley, and you look down the track and see there are five people standing on the track, and you realize that they will die if the trolley continues.

Fortunately, as you’re standing on the footbridge, you see that there is a large person wearing a backpack, leaning over the edge of the rail of the footbridge, and that if you just go up to that person and push them over the side, they’ll fall down onto the track and stop the trolley.

And most people – traditionally in the philosophical literature, and this has been subsequently confirmed by empirical work – think it’s impermissible to push the person off the footbridge, to walk up and push them, using them to stop the trolley.

And this creates a kind of philosophical puzzle because these two cases look to be very much alike. In both cases, you are sacrificing one to save five. But, it looks like there are a number of differences between the cases.

For example, in the footbridge case you’re using the one person as a means to stopping the trolley, whereas in the other case the person dies only as a side effect of the means you use, which is diverting the trolley.

So, some people have thought that that’s the salient difference between the cases that makes them morally different.

So, the reason people have thought it’s wrong to use a person as a means is that it violates the autonomy of the person – that people have a certain kind of autonomy from which they derive, for example, the basic rights that we tend to recognize in American society.

LUKE: Right.

RON: And it stems from their character as kind of a rational being.

So, what Joshua Greene did was give subjects these kinds of trolley cases in an fMRI machine. And what he showed was that when subjects judge that pushing the person off of the footbridge to stop the trolley is impermissible, they in fact show emotional activation. And it seems to be this emotional activation that’s driving their judgment of the impermissibility of pushing the person off the footbridge.

LUKE: Whereas there’s less emotional activation when they’re thinking about the case of pulling the lever?

RON: Yes. And so there’s something very emotionally activating about the very idea of walking up to someone, putting your hands on them and pushing them off this bridge which isn’t the case about flipping the switch even though this person will die.

And Greene suggests that this emotional activation is what gives rise to these judgments about the inviolability of the person.

LUKE: Now, my understanding is that, some people have interpreted this work as saying that the deontological or Kantian or rights-based view of morality really springs from our primate emotions, whereas more utilitarian judgments are activating the calculating parts of our brain and might be less subject to emotional influence.

RON: Right, and that view that you just expressed is called the dual process account of human cognition. And that’s Greene’s own view of what he’s shown in the experiment: that the human mind has at least two fundamentally different ways of thinking about problems. One involves calculation and perhaps reasoning – especially, in this case, utilitarian calculation. And the other involves emotions and other automatic and intuitive processes that operate outside of our conscious reasoning, but give rise to these very powerful and strongly held judgments about what is and isn’t permissible to do.

LUKE: Well, yeah, those are some really fascinating cases of moral psychology. I really enjoy that field because it’s very exciting right now, but it can also be humbling, in that the more we discover about ourselves, the more the myth of the rational, fully-intentional agent evaporates.

RON: It’s certainly being challenged. And certainly, what we’re finding out in the field is that learning more about the sources of our moral judgment leads us to ask, over and over again: is this the kind of source of moral judgment that I ought to think leads me to some kind of apprehension of moral truth?

Or is this the kind of source of moral judgment which, when I realize what it is, turns out to be some kind of evolutionary switch that’s being flipped on in me, for example? Is that the kind of cause I can endorse when I realize what it’s about, what its function is? And oftentimes I think we’ll find the answer is “no.”

LUKE: Yeah, right now the trend in research seems to be really challenging the view that we have moral intuitions that track the truth about moral facts – like we have some kind of special faculty in our brains that can detect moral facts or know them.

Instead, it rather looks like these are simply moral prejudices, evolved moral prejudices, and that is what is producing these moral judgments, rather than some kind of reliable faculty in the brain for detecting oughtness in the universe.

RON: I think that’s absolutely the right way to express the concern that the research raises, though I don’t think we’re ready to draw a skeptical conclusion yet. I mean one way to think about it is that lots of judgments can be given causal stories.

For example, judgments about what a good movie is or what a good meal is can be given causal stories that are rooted partly in our evolutionary history and partly in our cultural background. And yet, at the end of the day, does anyone really want to endorse the story that there’s no difference between a good meal and a bad meal, or a good movie and a bad movie?

And while some philosophers might be tempted by that view, I think as we live our lives we’re not at all tempted by the view that there’s no difference between a good meal and a bad meal.

So, it could be the case that when we find out about these sources that some process of reflection and selection will allow us to preserve something like the traditional idea that we apprehend moral truth.

And at the other extreme is the idea that fundamentally we’re going to have to decide that this whole way of apprehending the world is rooted in error and we’ve got to find somewhere else to stand, some other way of going forward.

And I suspect that the answer in the morality case is going to have to be somewhere in the middle, as I suspect it is in the case of aesthetic judgments, like what are the differences between a good movie and a bad movie or good meal and bad meal.

LUKE: Right. And you talked about how the inference from these findings in moral psychology to the conclusion that there’s no difference between right and wrong would be a bit hasty. But, what about the inference from these findings in moral psychology to the conclusion that our moral intuitions aren’t to be trusted as the source of knowing what is the difference between right and wrong?

For example, we don’t expect physicists to consult their intuitions in order to find out what’s true about the physical facts of the world and yet it’s an extremely successful knowledge enterprise. And so, maybe what moral psychology tells us is that we don’t have certain types of epistemic access to moral facts that we thought we had, and we really have to go with more reliable means to figure out the moral facts if there are any.

RON: I like that way of putting the project. Experimental philosophers sometimes distinguish something called the negative program of experimental philosophy from the positive program. And the negative program of experimental philosophy is just attacking the idea that our intuitive judgments about the world are a way of getting at facts that we ought to trust.

And this is playing out in moral psychology in just the way you suggest: people think that once we understand that our intuitions come from a variety of sources and have a variety of functions – and sometimes are as base as prejudice: racial prejudice, age prejudice, and various kinds of emotional activation – once we understand the sources of our moral judgment, then we won’t be able to trust our intuitions anymore. And I, in my own work, am very sympathetic to that view.

The only remaining question though, is exactly how do we justify an alternative view in a domain like morality or again in a domain like aesthetics? What does it mean to get rid of our intuitions and have some independent access to the domain in question? And it’s somewhat hard to get a grip on that idea.

Joshua Greene has thought that the right response in morality, once you understand that there are all these disparate influences on moral judgment, is to recognize that there really isn’t such a thing as moral fact, and that instead we should be something like consequentialists and endorse the idea that what we should do and what we should judge stems from the greatest good for the greatest number.

LUKE: Wait, you’re saying that Josh basically says there maybe aren’t moral facts, but that we ought to do greatest-good-for-the-greatest-number types of things. Isn’t that a moral fact?

RON: He thinks that when you understand that the moral domain is shot through and through with what he thinks of as errors, then you will be tempted by the idea that there are no moral facts, and then you’ll look for some other kind of basis for judgment and action.

And he thinks that the right kind of basis is a kind of consequentialist basis. And the point I was making was simply this: whether or not you want to call that consequentialist basis “morality,” that consequentialist basis is itself rooted in the way our brain processes problems, and the idea that we should endorse or embrace it is itself rooted in deeply held judgments we have – that the greatest good for the greatest number is a good thing. So whether or not you want to call that “morality,” you’re still in the realm of using as a decision guide judgments which are the products of your evolutionarily produced brain and your deeply held intuitions.

So, there’s some sense in which we can try to make decisions about which of our intuitions we want to hold onto and which ones we want to get rid of. But, at the end of the day, it’s unclear how we can in the moral realm, at least, simply abandon the use of our evolutionarily produced brain and abandon our capacity to make judgments about what we hold more strongly and less strongly.

LUKE: Well Ron, what do you think about this, just because it happens to be the approach that I currently defend: given this data from moral psychology, and also a Sharon Street type of Darwinian skepticism about moral intuitions, it seems to me like one sensible approach would be to seek a set of definitions for moral terms that captures moral practice as best as possible. That won’t be perfect, because people use moral terms in such a wide variety of ways – it’s more like terms about art or terms about love than terms about electrons, which are used very uniformly.

But once we find a set of moral definitions that fits pretty well with moral practice, and also happens to refer to things that exist – rather than, say, divine commands or something – then, once you’ve got that set of definitions, you can simply do empirical research on moral conclusions. And there’s no mystery about how it is that we know moral facts. We don’t have to refer to our evolved moral prejudices, what we’ve been calling intuitions, and we can just do morality as basically a science.

RON: Well, I mean, there are some specific questions about whether characterizing definitions is the right way to go. But I think that the big issue here is that what you’re describing is a descriptive project. That is, what we’re going to do is understand the moral concepts people use and the way those moral concepts play a role in their lives, and we’re going to give our best kind of characterization of that use. And then we’re going to use that characterization to go out and gain more evidence and more data, and we’re going to produce an even better description of how these concepts and ideas function in people’s everyday lives. And that’s a great project for experimental philosophy – and indeed, barring what are perhaps somewhat parochial disputes, that’s in a way a description of what experimental philosophers in moral psychology are up to.

But there’s a remaining question, which is how you get from the description of what people do to the question of what right and wrong really are. Because there’s a famous argument from G.E. Moore called the open question argument, and without doing it too much injustice, I hope, we could put the idea like this: whatever the natural facts are about people, and however they use terms like “right” and “good,” we can still ask – it still remains an open question – “But is it really good? Is it really right?”

LUKE: So, you could say doing such and such an action would produce the greatest good for the greatest number. But it seems like you could still ask, “But is that good?”

RON: Exactly. Exactly. Now, one might ask whether there’s this same gap in other domains, like the aesthetic. So, you have a meal and it’s well-prepared and everyone enjoys it and it’s judged to be the finest meal anyone there has eaten. Does it make sense to still ask, “But was that meal a good meal?” These people judged it to be a good meal, but was it really a good meal? Or ask that of the domain of the funny. Someone tells a joke, everyone laughs. And we can ask, “But was that joke really funny?” And sometimes there is a gap, right? If we find out that a joke appealed to some prejudice that we, on reflection, don’t agree with, then we might say, “I laughed, but the joke wasn’t really funny.”

LUKE: Huh.

RON: And so there is an open question even about funniness judgments and other kinds of judgments, aesthetic judgments. There still is an open question. But one might wonder how open it is compared to the moral domain.

LUKE: Well getting back to moral psychology, John Rawls suggested that moral psychologists could investigate the possibility of a kind of Chomskyan, innate moral endowment in our brains. What was he talking about?

RON: Noam Chomsky revolutionized the field of linguistics in part by suggesting that human infants are born with innate knowledge of human languages, what he called, “universal grammar.” So, the idea was that having this universal grammar in an infant’s brain is what allowed any human infant raised in any natural language environment in the world to acquire that natural language.

So, part of what Chomsky is up to is making something called a poverty of the stimulus argument. And the basic idea of this argument is that while children easily acquire a natural language from exposure to it, they in fact lack the kind of feedback and information – for example, they lack correction from their parents about certain kinds of errors – that would support a traditional learning explanation of their knowledge of the natural language they acquire.

That is, you’d expect, if children were just listening to their environment and picking up on the language they hear, that they would make certain kinds of errors that they do not in fact make. It’s called the “poverty of the stimulus” argument because the environment is impoverished with regard to the information that the children in fact somehow have as they acquire natural language. And so the only alternative is that they have the knowledge innately. This kind of poverty of the stimulus argument has a very old pedigree in western philosophy. You might trace it all the way back to Plato.

But, in any case, Chomsky’s arguments have been a model for empirical investigation across a range of other domains of human knowledge in cognitive psychology, and in that way it’s not surprising that John Rawls would make the suggestion that it might apply to morality. This suggestion, I think, really wasn’t taken very seriously until the 1990s and the early part of this decade, when people like Susan Dwyer, Gilbert Harman, John Mikhail, and Marc Hauser sought to defend this view and say that as with language, so with morality: human infants have some sort of innate moral grammar, which is of course articulated in different ways in different cultures, and leads to, as it were, natural moral systems which differ from one another in their acquired structures, but underneath these systems is a kind of universal backbone of moral theory.

LUKE: Is the idea there that, because we have this innate moral endowment, every culture places moral condemnation on certain types of killing, but the types of killing that we outlaw differ from society to society? Or, if this theory were true, how would we expect it to manifest itself?

RON: So, you might expect there to be certain kinds of moral rules or principles that you find exhibited across the moral systems that we see, respect for certain kinds of individuality might be one, respect for community might be another, prohibitions against certain kinds of harm, prohibitions against certain kinds of sexual behavior. You might find all kinds of different fundamental moral principles that are exhibited at least in skeletal form in these different moral systems.

And then, of course, the trick is if you want to claim that some principle is exhibited across moral systems, you have to explain the apparent variation in these systems. And one way of doing that, you might say, is that these fundamental moral principles get ranked differently in different systems for example. Or, some of them get turned on and some of them get turned off in the course of a child’s moral development.

So, you might think, in defense of the ranking idea, that every culture respects individual rights, but in some cultures other things are respected a lot more, and so they look like they don’t respect individual rights – but in fact they do, it’s just a low-ranked item instead of a high-ranked item.

LUKE: And what do you think the evidence suggests so far with regard to this Chomskyan moral grammar theory?

RON: I think that that view is really intriguing and that it’s a good research program. But, I think that the evidence for the view is somewhat thin at the moment. Let me just give one example of the kind of evidence, then I’ll follow up on the discussion we were just having.

One sort of evidence for the idea that there is a kind of moral instinct comes from the cross-cultural presentation of moral dilemmas of the sort that philosophers have used – so, for example, cases like the trolley case that we just discussed. It turns out that if you give trolley-case-like moral dilemmas across cultures, you can find agreement among very different cultures on some of the fundamental judgments that we find in trolley cases. So, that looks like it should be evidence for the idea of a universal innate moral grammar.

But, you have to think about the character of these thought experiments. When philosophers design their thought experiments, we do so with a preconception about what is relevant and what is irrelevant. For example, when philosophers first started thinking about cases like the trolley cases, we didn’t start with cases where, say, the people on the track were related to you – so the dilemma would be something like: you have five strangers on the one track, but if you divert the trolley, it will be your brother on the side track. And we didn’t start with cases where there is a contrast between people of different races or groups – so there are five people who are members of your race on the main track, but a member of a different race on the side track.

There was a good reason why we didn’t design the cases in that way. The reason – and this is very evident in the way John Rawls uses thought experiments – is that we judge those factors to be morally irrelevant. We judge it to be morally irrelevant whether the people are related to you. We judge the races of the people on the track to be morally irrelevant. So, we don’t include them.

We don’t include them because they’re morally irrelevant, but also because we worry they might actually influence people’s judgments. We’ve already made a judgment about what counts as moral relevance and moral irrelevance.

And so when you give this thought experiment to people across cultures, you’ve already excluded some of the content, some of the philosophical content that they might want to disagree about.

So, for example, if you gave moral dilemmas that involved relationships to family members or relationships to communities across cultures, then you might expect to find just the opposite result: that people differ profoundly in the way they judge you should treat your family members vis-a-vis your non-family members, or your in-group members vis-a-vis your out-group members. But that’s not the way philosophical thought experiments have been designed.

LUKE: Now, in a way, it seems like it would be very unsurprising if we found that we had a kind of innate universal moral grammar because we share the same evolutionary history.

And so, there is a very common story about how our moral practice evolved from the biological utility of reciprocal altruism and other types of behavior, and then of course was shaped by culture. But for certain fundamental moral behaviors and judgment-making processes, it seems like there would be a plausible evolutionary story, and if we do have a universal moral grammar, then evolution could account for it, just as in the case of universal grammar in language.

RON: Well, I am a strong believer in the idea that it is reasonable to say there is such a thing as human nature, that human beings share a profound number of similarities based on our common evolutionary history, and that fundamentally human beings across the earth are alike.

So, once we’ve said that, and supposing that’s true, it’s certainly an important research program to look at the extent to which human moral cognition is identical across cultures.

There is another question that comes up specifically in this case, which is: is the right way to describe this moral cognition as some sort of universal grammar – that is, as a system of inner knowledge which is present at birth and which is turned off or turned on or differently ranked or something as people grow older – or is the right way to describe human moral nature something quite different from that?

For example, that we come into the world with a lot of disparate cognitive capacities, and that in American culture we identify some of these capacities, or the outputs of these capacities, as moral. But maybe even this idea that there is a moral domain, as opposed to a more general kind of thinking about the world in terms of norms, is itself a culturally local product.

So then, if that were right – if everyone across the world employed norms, but only in the kind of western tradition or something like that do we distinguish specifically moral norms – then is the right way to describe that capacity to say we have an innate moral grammar, or is the right way to describe it to say we have innate capacities which in our culture manifest as a system of morality, but might well have manifested differently in a different culture?

LUKE: Interesting. Now, Ron, you’ve written a book chapter on the evolution of our moral concepts and behavior, and you presented three different versions of the claim that our moral concepts and behavior have evolved. What are those three different views and what do you think of each of them?

RON: So, this was a chapter that Edouard Machery at the University of Pittsburgh and I decided to write. We were exploring more generally what all these claims about the evolution of morality mean. There are lots of claims that people make and they are supposed to have sometimes quite surprising philosophical consequences.

So, we distinguish three different claims. And the first claim is just that certain components of moral psychology that are at play in morality – for example, the possession of moral emotions like sympathy, or empathy, or righteous anger – the idea that these components of moral psychology are products of evolution.

We think this claim is really pretty uncontroversial; it has a lot to recommend it. For example, there seem to be homologues of moral emotions in nonhuman animals, in nonhuman primates. And it seems like the right explanation for these homologues is that they were shared by our common ancestors.

However, when we thought about this, we were sort of skeptical that anything philosophically interesting could follow from just the claim that some components of human moral psychology that are at play in morality are products of evolution.

The second, somewhat more substantial claim is one I alluded to a moment ago, which is the claim that normative cognition – that is, reasoning about the world and about one’s own and others’ actions, employing norms about what’s good and bad, or permissible and impermissible – is a product of evolution.

And we thought it’s again plausible to think that normative cognition is not just a product of evolution but an adaptation – that it’s a product of evolution that specifically aided our ancestors in survival and so was selected for.

For example, the possession of norms seems to be culturally universal. There don’t seem to be any cultures in the history of humankind that don’t have some norms for reasoning about what you should do and shouldn’t do, and what’s permissible and impermissible.

And these normative capacities look like they emerge early in development, as you might expect if they were innate. And formal models by people like Boyd and Richerson suggest that normative cognition could have been selected for. So, we’ve got these formal models that suggest how, possibly, normative cognition could have come about.

But, again, and this is what I was alluding to earlier, it’s not clear that normative cognition is identical to moral cognition. There are a lot of different kinds of norms. There are norms about how you set a table, there are norms about who you can have sex with, there are norms about how to sit in a movie theater and watch a movie, there are norms about killing and punishing killers.

And these all seem like very different kinds of norms. And we distinguish some of them as moral norms in our culture, but it’s not at all clear that this reflects something about the cognitive architecture that evolution selected, rather than a kind of distinction that we’ve made because of culture.

So, although it seems plausible to think that normative cognition evolved, it’s not at all clear that some distinctively moral form of normative cognition evolved. And it’s unclear whether you can get anything philosophically significant from the idea that normative cognition evolved.

Here’s another example. Certain religious groups, even here in the United States, have norms against consumption of certain kinds of beverages. For example, members of the LDS church tend not to consume alcohol and they tend not to consume coffee. And other Christian groups might not consume alcohol.

And you might ask, is this some moral norm or is it a conventional norm? Right? A conventional norm is a norm like we drive on the right side of the road. And of course that’s a real norm and you should do it, but it’s not reflective of moral reality, it’s a cooperative agreement.

And my own sense is that even within US culture and within these religious groups, there’s actually disagreement about whether the norm not to drink coffee, for example, or not to drink alcohol is a moral norm or just a different kind of norm, like a prudential norm, like you should save money for retirement. It’s that kind of norm.

So, the mere fact that we identify a norm in another culture, even a norm that looks like the same norm that we have here, doesn’t by itself tell us whether that norm is a moral norm there.

LUKE: And then what is the third claim about evolution and morality that you examine in your book chapter?

RON: Well, the third claim was just the claim that morality, understood as a very specific type of moral cognition, is itself a product of evolution. And we argue that there is really little to suggest that this is true; this is an interesting claim philosophically, but the evidence for it is somewhat thin.

The reasons are, for one thing, that moral cognition seems to extend quite broadly to include all kinds of different norms – for example, some food norms, or norms about observing religious holidays and practices, or norms governing sex, or norms that relate individuals to groups (like whether you would be willing to die for your country or for some other group), norms governing revenge and punishment, and so on.

So, one thing is that the mere diversity of this group of behaviors that we group together as morality suggests that there’s going to be difficulty in producing an argument for why moral cognition is some sort of distinctive thing that could have been a product of evolution.

And another kind of thing – and this is the thing I was just talking about – is that it looks like norms vary very, very broadly across cultures, but even where they’re shared, they might not always be understood as moral norms.

And then, if they are not understood as moral norms, then it’s not clear that we should think that morality, as a distinctive form of cognition, is a product of evolution.

The third claim, I think, is philosophically interesting for reasons that we were discussing earlier on – it plays a role in arguments about whether, if morality is a product of human evolution, we ought to give it any [inaudible 0:49:24].

And so I think that this idea that morality understood as a distinctive form of cognition is a product of evolution, does have a kind of role to play in those philosophical arguments. But, I think there’s not a lot of evidence for it at this time.

LUKE: What do you think moral psychology and experimental philosophy in ethics have to say about normative moral facts, about what we ought to do? Does the research fit better with some meta-ethical theories than with others, or better with some normative theories than with others? Or does moral psychology debunk our pretensions to know moral facts? What do you think about that?

RON: I think that empirical work in moral philosophy and moral psychology moves debates forward by allowing us to frame our moral theories using empirical assumptions that are hopefully true, or at least, hopefully, not known to be false.

And in that way it focuses our research and attention. But I don’t think that this research, at this time, resolves traditional meta-ethical questions.

I do think, specifically, that as we find out more about the sources of our moral judgments, we’ll find out that lots of them have explanations that we won’t endorse as relevant to morality. And so we will want to engage in a rather more radical process of rejecting certain kinds of evidence and certain kinds of considerations from our moral deliberation than we currently do.

So, if it shows anything, I think moral psychology is going to undermine what you might call naive moral realism, which is just the view that your everyday moral sentiments allow you to apprehend moral reality without a lot of trouble.

And I suspect that once we engage in this process of deliberation about the sources of our moral judgment, we’re going to find more and more kinds of cognition that we have serious questions about.

Now, we already engage in this kind of deliberation. For example, if someone points out to us that some judgment we made may have been influenced by ageism, or sexism, or racism, we recognize that as a reason to revisit the judgment and perhaps withdraw it. Because if we accept that the explanatory story is true, then we realize that the judgment is invalid – that it came from the wrong kind of source to give it warrant.

But, I suspect that as time goes on and we understand more about ourselves, that kind of story is going to be more and more common, and that our level of revision is going to be greater and greater.

LUKE: We’re going to have to think more about morality rather than trust our inner feelings.

RON: Yes.

LUKE: Well, Ron, it’s been a pleasure speaking with you. Thanks for coming on the show.

RON: Thank you very much for having me.
