CPBD 078: Liane Young – How to Change Someone’s Moral Judgment with Magnets

by Luke Muehlhauser on November 14, 2010 in Ethics,Podcast

(Listen to other episodes of Conversations from the Pale Blue Dot here.)

Today I interview brain scientist Liane Young about how we make moral judgments, and the implications of this data for moral philosophy.

Download CPBD episode 078 with Liane Young. Total time is 50:36.

Liane Young links:

Links for things we discussed:

Note: in addition to the regular blog feed, there is also a podcast-only feed. You can also subscribe on iTunes.

Transcript

Transcript prepared by CastingWords and paid for by Silver Bullet. If you’d like to get a transcript made for other past or future episodes, please contact me.

LUKE: So, Liane, as an amateur philosopher, I know that the best way to make progress on traditional philosophical questions is to hand them over to the scientists as quickly as possible. And that’s what’s been happening in the study of morality for the past decade or so, and it’s been very exciting. Could you maybe bring us up to speed on what psychologists and neuroscientists have been learning about morality in the past few years?

For example, we might start here: when we make moral judgments, are they mostly rational calculated judgments or are they more emotional judgments?

LIANE: That’s a great question. And that’s kind of where moral psychology started – or what I’ve been calling contemporary moral psychology – about a decade ago. And a lot of this research was inspired by problems in philosophy, as you say. And namely one problem in particular that’s been known as the trolley problem. So, philosophers have been dealing with this problem for decades upon decades. And in this problem the question is whether it’s permissible to harm one person to save many.

And so in this particular problem, there are two versions. In one case, a trolley is headed toward a group of five people, and the trolley will surely hit those five people unless you turn the trolley away from them and onto one person instead. And as most philosophers have discovered and as most ordinary people judge, it’s permissible to turn the trolley away from five and onto one. It’s a numbers game and quite straightforward.

But when the problem is morphed into another version, where the question becomes whether it’s permissible to push a man off a footbridge so that his body stops the trolley from hitting the five people further down the track, then most people recoil and judge that the action of pushing the man onto the train tracks is highly impermissible, morally forbidden, and just simply wrong to do.

And so, that’s been known as the trolley problem – the problem being that there’s this inconsistent pair of moral intuitions, moral judgments, that people seem to have. And so, moral psychologists try to figure out why it is that people have these different intuitions. And so, Josh Greene, a psychologist at Harvard, was among the first to address this problem using empirical tools.

And the ultimate kind of suggestion that came out of that research was that there are some kinds of intuitions that we have that are rooted in cognitive control processes that get us to what I call the numbers game: five versus one, save the greater number of people.

And then another kind of moral judgment is rooted in more automatic, intuitive, affect-based responses, and that’s the response that tells you: don’t harm the one person, don’t push the man off the bridge, particularly in the case when the harm is up close and personal, as Josh would say, and particularly emotionally salient. And so, Josh’s suggestion, and that of many others, has been that this dilemma arises out of competition between psychological processes that are rooted in the brain.

So, we feel this conflict between different psychological systems. And out of that comes this age-old philosophical problem, the trolley problem.
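As a rough illustration of the dual-process picture described here, the following Python toy shows how a controlled “numbers” signal and an automatic aversion to up-close, personal harm could pull in opposite directions in the footbridge case but not in the switch case. The signal strengths and the comparison rule are entirely made up; this is not Greene’s actual model, just a sketch of the competition idea:

```python
def dual_process_verdict(lives_saved_by_acting, personal_force):
    """Toy dual-process model: a controlled, outcome-counting signal competes
    with an automatic emotional aversion to up-close, personal harm.
    All numbers are illustrative only."""
    numbers_signal = lives_saved_by_acting             # controlled cognitive process
    emotional_signal = 5.0 if personal_force else 1.0  # automatic affective response
    return "permissible" if numbers_signal > emotional_signal else "impermissible"

# Switch case: acting saves a net of four lives, no personal force involved.
print(dual_process_verdict(lives_saved_by_acting=4, personal_force=False))  # permissible

# Footbridge case: same net lives saved, but the harm is up close and personal.
print(dual_process_verdict(lives_saved_by_acting=4, personal_force=True))   # impermissible
```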

LUKE: Yeah. And so, that might show that moral judgment is really this matter of different systems in the brain talking to one another and trying to decide between them what our moral judgment should be, whereas up until quite recently, I think most people assumed that moral judgment was this one function of some brain feature, or just this kind of unified decision-making process. Is that right?

LIANE: Yeah. That’s a really interesting point. So about a decade ago, again, when moral neuroscientists took up their tools to study how we make moral judgments, it seemed like we were all after this same kind of thing, which was: where is the moral faculty? Where is the part of the brain that does morality, or where is this unified center? And this assumption was apparent in a lot of the paradigms that were used in functional neuroimaging studies.

So, what neuroimagers would do would be to present subjects with either moral scenarios, which subjects had to deliver moral judgments for, or nonmoral scenarios, such as sentences that were ungrammatical or featured simply conventional violations, which subjects had to read and respond to as well. And what they would do to try to figure out where the moral brain was, was run a subtraction.

So, the brain activity for moral stuff minus the brain activity for nonmoral stuff. And then the hope was that when you ran that subtraction, what you would get would be all the stuff that is specifically moral in the brain.
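To make the logic of that subtraction concrete, here is a minimal Python sketch of a moral-minus-nonmoral contrast on simulated data. Everything in it – the array names, the numbers, the crude threshold – is hypothetical and only stands in for the per-voxel estimates and group-level statistics a real fMRI analysis would use:

```python
import numpy as np

# Hypothetical per-voxel activation estimates for one subject, averaged over
# trials in each condition (purely simulated data).
rng = np.random.default_rng(0)
n_voxels = 10_000
beta_moral = rng.normal(loc=0.5, scale=1.0, size=n_voxels)     # moral scenarios
beta_nonmoral = rng.normal(loc=0.3, scale=1.0, size=n_voxels)  # matched nonmoral scenarios

# The subtraction: voxels more active for moral than nonmoral stimuli are the
# candidates for "specifically moral" processing.
contrast = beta_moral - beta_nonmoral

# A crude threshold standing in for the usual statistical test across subjects.
candidate_voxels = np.where(contrast > 1.0)[0]
print(f"{candidate_voxels.size} voxels survive the moral-minus-nonmoral contrast")
```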

And researchers tried really hard to kind of control out all the possible confounding factors. So, you might think that moral stuff would also depend on brain regions that are important for social reasoning, reasoning about people.

Or, like we talked about before, emotional processing. And so, they made sure that in their stimuli the moral scenarios would contain the same number of people as the nonmoral stimuli, and that the moral and nonmoral stimuli would be similarly emotionally salient or not emotionally salient. But in fact, what ended up happening in a lot of these studies is that they found greater social activity and emotional activity in the brain for moral judgments versus nonmoral judgments. It really seems like the field started moving in a different direction, where people seemed less interested in figuring out where this so-called moral module was in the brain.

And people started moving to figure out what the contributions of these other processes are. How does emotion contribute to moral judgment? How does social reasoning contribute to moral judgment, and so on. So, that’s where a lot of the work in psychology and neuroscience has focused in the past few years.

LUKE: Yeah. And so, recently, this new picture of moral judgments being dependent on several different systems in the brain – that’s really making our picture of moral judgment more complex. And I think that’s bolstered by another set of experiments having to do with whether or not we’re holding a warm cup of coffee for two seconds affects how we make a moral judgment, or whether or not we can smell freshly baked bread from where we are has a big effect on our moral judgments. That also seems to be complicating our understanding of how we make moral judgments. Is that right?

LIANE: Yeah. It’s interesting. So on the one hand, you could say that the fact that there are all of these different influences on moral judgments makes morality seem really complicated. On the other hand, it also makes morality seem kind of simple and stupid. The fact that our moral judgments and moral behaviors are influenced by the smell of the room and whether we’re holding warm coffee.

LUKE: Yeah.

LIANE: The fact that all of those influences that are clearly not specific to morality influence how we feel about the person that we’re with. If we feel warm, maybe we feel more warmly towards that person, or we make a less harsh moral judgment, for instance. And I think that this relates to a lot of the traditional social psychology work that’s been done on morality even before the contemporary moral psychologists came onto the scene. So, we’ve known for a while now that, at least, our moral behavior is influenced by all sorts of things. It goes all the way back to the Milgram experiments, where people have been shown to do really evil things under certain situational pressures – under the pressure of authority, or pressures of conformity, and so on.

Ordinary people participating in an experiment at Yale University would follow the orders of an authority, in the form of an experimenter at Yale, to shock another innocent participant at increasingly higher levels of intensity until they thought that what they were doing was shocking another person to death.

And so before these kinds of experiments, we had very different assumptions about human nature, the extent to which we have control over our behavior, and the extent to which our moral character dictates our behavior. And so, I think it’ll be interesting to see where moral psychology goes from there and whether we’ll move back to studying moral behavior and figure all of that out.

LUKE: Yeah, and a recent wrinkle in the picture is that you’ve discovered that firing a magnetic pulse at a particular part of the brain, just behind the right ear, can affect our moral judgments. How does that work?

LIANE: Yeah. It’s pretty crazy. In this particular study, what we did was use a method called transcranial magnetic stimulation, or TMS for short, and what TMS allows people to do is modulate brain activity. And so, depending on the parameters, researchers have used TMS to both enhance and also suppress brain activity in specific regions with pretty good precision: about a half to a full centimeter.

And so, what we used TMS to do was target a particular part of the brain that has been known to help us process other people’s mental states. So, a lot of this research has been done by Rebecca Saxe, who has studied the role of this brain region, the right temporoparietal junction – like you said, right above and behind the right ear – in nonmoral contexts, for how we predict and interpret other people’s behavior.

And so, you can imagine that mental state information – like information about what people are thinking, what people are intending, what people want – is really important for how we evaluate people. And so, this comes out particularly in cases when people do things that they don’t mean to do like cause accidents unintentionally or in cases where people try to do things that are mean and harmful, but fail to do them because they have false beliefs about the situation.

And so, two of the examples that we used in this TMS study were cases of accidents and failed attempts. So in one case of an accident, a person would try to put sugar into somebody’s coffee but it turned out to be poison and so they accidentally poisoned their friend.

So, that would be a case of an accident where you had a good intention but caused a bad outcome. Then a case of an attempt would be one where you tried to poison somebody – you thought you were putting poison into their coffee, but it turned out to be regular sugar. And so, you failed to do that.

And so for the study, we gave our participants a whole bunch of different kinds of scenarios just like these ones and the hypothesis was that when we disrupted activity in this region we would get participants to judge moral scenarios based more on the outcome rather than the mental state information.

LUKE: Yeah, because that part of our brain that pays attention to or thinks about what other people are thinking and what their intentions are has been disrupted so that part of the brain can’t communicate as well or can’t function as well?

LIANE: It’s still a work in progress as to what exactly TMS is doing, and surely TMS to one region might have effects not only in that one region, but also on other brain regions to which that region is connected, and on the whole neural network and system.

But the kind of working hypothesis here is that when the RTPJ is disrupted, that region is less able to process information about intention – that we’re introducing noise into the neural firing, into that representation.

And so, the representation of the mental state – in the case of an accident, that would be a false belief or a good intention – is weakened, or subjects become less confident in that representation. So, moral judgments end up being based on other factors that typically matter, but maybe matter a little bit less in relation to how important intent is.
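Here is a minimal Python sketch of that working hypothesis – judged wrongness as a weighted blend of an intent-based signal and an outcome-based signal, where noise in the intent representation lowers confidence in it and shifts the weighting toward outcome. The weights, noise levels, and scenario numbers are invented for illustration; this is not the model or analysis from the actual study:

```python
import numpy as np

def judged_wrongness(intent_bad, outcome_bad, intent_noise_sd, rng):
    """Toy model: wrongness is a confidence-weighted blend of an intent signal
    and an outcome signal. Noise added to the intent representation (loosely
    standing in for TMS to the RTPJ) reduces its weight, so judgments lean
    more heavily on outcome. All quantities are made up."""
    noisy_intent = intent_bad + rng.normal(0.0, intent_noise_sd)
    intent_weight = 1.0 / (1.0 + intent_noise_sd)   # confidence falls as noise grows
    outcome_weight = 1.0 - intent_weight
    return intent_weight * noisy_intent + outcome_weight * outcome_bad

rng = np.random.default_rng(42)

# Failed attempt: bad intent (1.0), harmless outcome (0.0).
# Accident: innocent intent (0.0), bad outcome (1.0).
for label, intent_bad, outcome_bad in [("failed attempt", 1.0, 0.0),
                                       ("accident", 0.0, 1.0)]:
    baseline = judged_wrongness(intent_bad, outcome_bad, intent_noise_sd=0.0, rng=rng)
    disrupted = judged_wrongness(intent_bad, outcome_bad, intent_noise_sd=1.0, rng=rng)
    print(f"{label}: intact = {baseline:.2f}, disrupted = {disrupted:.2f}")
```

On average, the disrupted judgments in this toy condemn failed attempts less and accidents more – the same direction of shift described in the interview, though the magnitudes here mean nothing.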

LUKE: So when you did this study, you either did or didn’t disrupt that brain region that we use to think about other people’s intentions. When you did disrupt it, subjects’ moral judgments turned out to be more outcome-focused, because they weren’t taking into consideration the intentions the person might have had. When you didn’t disrupt it, the intentions they thought the person might have had made a lot bigger difference in their moral judgments.

LIANE: Exactly. So, typically intent matters a whole lot for our moral judgments.

This shows up not only in our ordinary judgments in our everyday interactions, but it is also codified in the law – the difference between murder and manslaughter, where the action and outcome could be identical in those two cases, but what really matters a whole lot to judges and jurors and ordinary people is that one was intentional and one was accidental.

LUKE: Right.

LIANE: We base a lot of our decisions on this kind of information: who to be friends with, who to stay friends with, who to avoid in the future, who to punish, who to reward. And so, typically our moral judgments depend a whole lot on this kind of information, but also to some extent on what actually happened.

So, in the case of an accident where there is a good intent but a bad outcome, intent typically matters a whole lot and outcome matters a little bit also.

LIANE: And what happened in our study was that when we disrupted activity in this brain region, we saw that moral judgments depended more on outcome and less on intent than you would typically see in ordinary judgment.

LUKE: In one way, it’s astonishing that something as simple as a magnetic pulse could alter our moral judgment so profoundly and yet on the other hand this is exactly what we would expect if moral judgment is a function of the brain. It’s something that happens in the brain.

LIANE: Exactly. I think that this is probably an assumption that most moral psychologists and neuroscientists have coming into the field, that eventually we will find all of morality rooted in our psychology, rooted in the brain in our neural processes and along the way we will just be figuring out the details, where things are in the brain and what kind of cognitive processes support different aspects of our moral judgment.

It’s one thing to realize that in the abstract and another to kind of see it happen in an actual experiment…

LUKE: Yeah.

LIANE: …which I find to be pretty compelling.

LUKE: Now Liane, in addition to conducting these kinds of experiments, you’ve also written about the philosophical implications of this kind of work in moral psychology. So, let’s talk about that. It seems to me that if this is how moral judgment works – that it’s greatly affected by good smells or warm cups of coffee or magnetic pulses or all kinds of different things, in this very messy interaction of many different cognitive systems – it seems very unlikely that that messy a process would be successfully tracking some kind of steady, objective moral truth that’s written into the fabric of the universe. So, is that where you think the science might be leading, or do you have a different perspective? Maybe it’s just too early to make that kind of judgment about the relation between our moral judgments and any kind of stable moral truth that might exist?

LIANE: It’s really hard to say. I think that one obvious possibility is that our psychology is really complicated because morality is really complicated. Whether or not there is some fact of the matter about morality, whether or not there are such things as stable moral truths – whatever those moral truths are, they could be extremely complicated, just like the world that we live in. And so, it’s possible that whatever cognitive systems are responsible for tracking those truths have to be just as complicated.

So, I’m not sure that the messiness or the braininess of our moral cognitive processes suggests to us that morality as a fact of the matter doesn’t exist per se. I do think though, that all this research suggests that morality is rooted in our psychology and in our brains.

What I can’t get around – and it’s hard to figure out how to deal with this professionally – is my intuition, and I think a lot of people share this intuition, that there are some things that are actually morally right and morally wrong, at the same time knowing that there are lots of good reasons for why I might think that, including evolutionary adaptedness. So just to put this into concrete terms, we all have this intuition, or at least many of us have the intuition, that there is something worse about murder than manslaughter.

LUKE: Yeah.

LIANE: Namely, that one is intentional and one is accidental.

LUKE: Yeah.

LIANE: And yet if you ask me why I think that intentional murder is worse than accidental manslaughter, I don’t think I’d be able to tell you why I think intent matters. That just happens to be the sort of normative bottom line for me. As a scientist, I don’t think I’d be able to give you any sort of rational account for why intent matters, normatively speaking.

Now, this relates to some of the early work of Jonathan Haidt, a psychologist at UVA and one of the founding fathers of contemporary moral psychology. What he showed us were some really astonishing findings about what he calls moral dumbfounding. Moral dumbfounding is this phenomenon where people have really confident and robust moral judgments about all sorts of things. But, when you push them on why they think the things that they think, they can’t give you any kind of coherent, rational answer.

So, the example that he worked with was a case of brother/sister incest, where he’d interview, presumably, college undergraduates at UVA about a case where a brother and a sister decide to sleep together. Then he’d ask his subjects: what’s wrong with this picture? Do you find that this is wrong?

I’m sure 99 percent of the people say that they find incest to be wrong, and then John asks them: why is that? Some say, well, they could have babies and those babies could have genetic defects.

John reminds them: actually, they used two forms of birth control, so no babies. Then subjects say, well, it could lead to psychological harm, particularly if they told their families and their families got upset about it. John reminds them: actually, they kept it a secret, and it was a one-time deal, and they never did it again.

And so, subjects get increasingly frustrated and dumbfounded by the fact that they still continue to think that what the brother and sister did is wrong, even if they can’t come up with any justification for why they think that is. And so, John took this case to be informative about how we make all sorts of moral judgments, namely that we have these sorts of gut intuitions that are rooted in emotion that we can’t justify.

The fact that we’re dumbfounded is good evidence that these intuitions are shaky and unreliable and don’t track any real sort of moral truth. They’re essentially these emotional biases.

I think that’s a good picture of what’s happening in that case. But I also think that, for better or for worse, that happens across the board for all kinds of moral judgments, including the moral judgments that we would stand by even in a court of law, for instance – the fact that we think intentional harm is worse than accidental harm, or killing worse than letting die, for instance.

And so, there are all these sorts of normative philosophical distinctions that ordinary people and psychologists and even philosophers can’t necessarily justify in terms other than those offered by the distinction itself.

I don’t know whether finding these distinctions in the brain or not being able to come up with further normative proof is good cause for throwing them out as biases or any sort of unstable psychological pattern.

It’s a tough question, because I don’t know whether it’s a question that science will be able to answer, or one that philosophy necessarily will be able to answer. I think it’s the kind of question that we all deal with in our heads, just trying to sort through what sorts of moral bottom lines I have, independent of whether I can offer further justification for them.

LUKE: You mentioned earlier the evolutionary story that has been developing about why we make the moral judgments that we do and why we have the moral intuitions that we do. And especially if, as we’re finding, moral judgment is something that happens as a result of the particular way that our brains are wired – which includes that holding a warm cup of coffee is going to change our moral judgments – well, the brain is something that evolved. So, it looks like moral judgment is this thing that evolved.

A problem that a lot of people have pressed, called the Darwinian dilemma for morality, is that if there is some kind of objective truth about morality that’s independent of how humans happened to evolve, it would seem like an extraordinary coincidence if the nature of our evolution just happened to be such that our brains worked to give somewhat correct moral judgments, right?

Either there would have to be some kind of causal link where moral facts in the world are causing our intuitions, or we would have to have happened to evolve a brain chemistry that makes correct moral judgments. That just seems very implausible. What do you think about all that?

LIANE: No doubt. I think that putting on my scientist hat again and taking off my ordinary person hat, I absolutely agree. I think that a lot of this points to the possibility that morality as we all understand it doesn’t in fact exist.

I guess what I mean by that is that if we take morality to be something along the lines of what is actually, factually right and wrong, the way two plus two equals four – what guides us to find the people who are actually good people – there are all sorts of good evolutionary debunking accounts that give us other reasons for why we feel the way that we do.

So in evolutionary terms, morality seems to be something that helps us figure out who the right social partners are, who to mate with, who to trust, who not to trust, who to punish, who to reward and so on. All of those functions don’t really line up with the ordinary concept of what morality is supposed to be.

So again, I think morality is really complicated. I think people across different cultures and surely different individuals have different ideas about the concept of morality and what things actually are right and are wrong.

But for all of those different kinds of definitions, I think that what those definitions of morality don’t include and can’t include is that morality is just a tool that happens to be adaptive for finding the right social partners, for figuring out how to regulate other people’s behavior, how to deal with people and so on.

So, I think it was Steve Pinker probably who at one point said “just because we discover that a mother treats her child in a certain way because it’s evolutionarily adaptive for her to do so, doesn’t reduce the moral significance of that particular behavior.”

But I’m not sure what I think about that. I think that for morality, there might be deeper implications. That if morality is just an adaptive tool, then we’re deeply mistaken in some sense about what morality actually is.

LUKE: Yeah. What occurs to me now is that even if it turns out that, well, morality is really just this adaptive tool, I think we’re going to find that the morality we evolved on the African savanna is very often not going to be a very good tool in the modern world.

For example, it might have been adaptive to have this sort of inner racist that we evolved for reasons of sticking to our tribe and not trusting people who are significantly different than us and that kind of thing. That’s really going to hurt you in a world where you need to trade with Asia, and you need to live in a multicultural society and that kind of thing.

There’ve been a lot of experiments showing that a lot of us are implicit racists even if we have decided not to be racist. We still make judgments of other races that are negative in kind of very sneaky ways. There might be a lot of things like that where the values and the way of moral processing that we originally evolved over millions and millions of years is maybe not so useful even just as a tool in the modern world.

LIANE: Absolutely. I think that what you’re getting at is how psychology could potentially be useful and enlightening for people in actually getting us away from these implicit racial or gender biases that none of us – or at least most of us – would want to have.

I think that if only moral psychology could also help us move out of that outmoded psychology and into an enlightened morality, that would be really ideal. I also do think that it’s interesting, though, that we all have the intuition – and you might call this a part of meta-ethics – that having these sorts of biases is wrong and we want to move out of those biases.

This relates to certain kinds of morality – about justice, interests, rights, and so on. These are all moral intuitions also. I think it’ll be really interesting to see how we make these sorts of moral decisions and what the psychology of our meta-ethics is.

How can we be confident in our meta-ethical views when we discover that those meta-ethical views are rooted in our evolved brains and our psychology, too? So, like you, I share those same intuitions and I wonder when we’ll start studying the psychology of our meta-ethics as well.

LUKE: Well, you’ve written some on that already. You have introduced this topic and said, “Hey, let’s talk about this.” You’ve written that the longstanding theories of morality – like utilitarianism, Kantian ethics, virtue-based ethics – and also some of the longstanding moral dilemmas, about whether it’s right to sacrifice one individual’s rights for the greater good, or how people could be morally responsible without having contra-causal free will powers – that those theories and those dilemmas may be rooted in human psychology. Could you expand on that? What did you mean by that?

LIANE: You and I started talking about the trolley problem. I think that’s one great example of how a moral theory or different moral intuitions could be rooted in different psychological systems where one intuition, namely, that it’s good to save more people than fewer people may be rooted in parts of the brain that do ordinary calculations. And these are what folks have called the controlled cognitive processes.

And then, there might be another intuition that tells us, “Don’t hurt another person” and that might be rooted in lower level automatic emotion based processes. And sometimes, particularly in the case of moral dilemmas, those two intuitions, those two outputs of psychological processes are at odds and that’s what gives rise to a particular kind of dilemma.

And of course, usually a dilemma emerges, at least in the philosophical domain, when different competing philosophical theories give rise to different answers. And the psychological suggestion is that one particular theory – one that says, “Don’t harm other individuals. Don’t use other people as a means to an end,” a deontological theory – might be rooted in the parts of the brain that respond to the salient emotional harms to other people, whereas a utilitarian theory, “go with the numbers,” might be rooted in a so-called numbers part of the brain.

And so, while a philosopher might say that a dilemma arises out of different competing normative theories, a psychologist might say that, in fact, a dilemma is truly psychological – that actually each of these theories is rooted in different brain systems or different cognitive systems, and then what you’ll end up with is a dilemma also. And this is a lot of the work that a psychologist at Harvard, Fiery Cushman, has been developing.

LUKE: Well, and the part of the brain that you knocked out with a magnet to do your studies – that might contribute a great deal to a virtue-based type of ethics, where we’re focusing on: what are people’s intentions? What is their inner character? Regardless of what the results of their actions are, or whether people have rights, or that kind of thing.

And the picture I’m hearing from you that you’re suggesting at least is that our moral intuitions or our gut feeling about what’s right and wrong, our judgments seem to be this interplay of all these different systems in the brain. And which meta-ethical or normative ethical theory you think is correct, utilitarianism or virtues ethics, or whatever, might depend a lot on which part of your brain is winning, so to speak, in the moral judgments.

LIANE: Right. That’s totally fascinating. We’ve certainly found individual differences in brain activity and that these individual differences in brain activity are, in fact, correlated with moral judgments on just these kinds of scenarios.

So, in one study that I ran with Rebecca Saxe, we found that people who tended to have higher activity in the right TPJ, the part of the brain that processes mental state information, tended to be more forgiving of accidental harm. And how we interpreted that result was that people who had higher activity in this brain region are more able, or more likely or willing, to think hard about the fact that somebody who causes an accident didn’t mean to do it.

And so, they’re more likely to let that accidental harmer off the hook, whereas other people who have particularly low activity in that brain region are more likely to hold the person responsible for causing the accident – presumably because they’re focusing more on the outcome than on the fact that the person didn’t mean to do it.
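As a concrete illustration of that kind of individual-differences analysis, here is a small Python sketch correlating simulated per-subject RTPJ activity with blame ratings for accidental harms. The variable names, numbers, and effect size are all hypothetical – this is not the study’s data or analysis pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects = 20

# Hypothetical per-subject summary of RTPJ activity (e.g., a mean beta value).
rtpj_activity = rng.normal(loc=1.0, scale=0.3, size=n_subjects)

# Simulate the described pattern: higher RTPJ activity goes with lower blame
# for accidental harm, plus some noise.
blame_for_accidents = 5.0 - 2.0 * rtpj_activity + rng.normal(0.0, 0.5, size=n_subjects)

r, p = stats.pearsonr(rtpj_activity, blame_for_accidents)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # expect a negative correlation
```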

And this is particularly interesting as we’re starting to look at different populations too. So, there’s been a lot of work done, not by me, but by developmental psychologists, on moral judgment in young children. And children are a particular population that doesn’t seem to reason as much about intentions as mature adults do.

So, children are also more likely to hold people responsible for accidents, to blame people for causing harm even if they didn’t mean to do it. And so Piaget, a developmental psychologist, was the first to come up with this theory, and what he did was give children two scenarios where intention and outcome were at odds.

So in one case, a little boy broke five teacups accidentally. He just swiped them off the counter purely by accident; he was trying to help his mom clean up. And another boy took his mother’s teacup, just one of them, and smashed it on the floor intentionally.

And young children are asked, “Which boy is more naughty?” And children around five, six, seven tend to say that the boy who broke more teacups was more naughty, even though he did it purely by accident and had good intentions.

And so, children seem to be making these sorts of outcome-based moral judgments. And it turns out the RTPJ actually has a different pattern in young children and takes a long time to mature over development. So, it’s possible that there’s a connection between what’s going on in the brain particularly in this region, and how young children make moral judgments.

LUKE: And here is something that I would love to discover. I don’t know if this has been studied yet, but I wonder if people who tend to be more calculating, logical, that kind of thing, also tend to be more utilitarian.

And maybe, like you say, the people who have more activity, or quicker activity, or stronger activity, or whatever, in that region behind the right ear – I wonder if those people are more inclined towards a different type of normative view.

You talked about how the people with more activity in the RTPJ – you already said that they’re more likely to excuse accidents – but I wonder if it even influences which normative view they find more plausible. And I wonder if the same thing is true with the apparent relation between this very calculating approach to life and utilitarian normative theory.

LIANE: Yeah. That’s totally interesting. You had mentioned before the potential relationship between RTPJ and virtue ethics, and that’s something that I hadn’t really thought too much about before. Actually I think that there’s probably a difference even within the kinds of internal states that matter to people.

So on the one hand, character and virtue seem to matter a whole lot. And of course, this hinges to some extent on whether character traits exist as philosophers like Gil Harman and John Doris have debated. And certainly, this has been debated in the psychological literature as well.

So, there’s a difference between those sorts of stable personality traits, on the one hand, and what I’ve been describing in relation to the RTPJ, which has to do with these transient mental states – like what you’re thinking at the time of your action, and what your belief happens to be.

But I wonder about the relationship between those two kinds of internal states – thoughts and beliefs and desires, on the one hand – and stable personality traits on the other, and whether, if you were to divide people up into two groups, you would find on the one hand virtue ethicists and people who are more inclined to reason based on transient thoughts, beliefs, and desires, and on the other hand utilitarians who are more interested in pure outcomes.

Now, one hint of this, which is purely anecdotal, just from talking to utilitarian philosophers, comes again from the trolley problem, where one of the theories that emerged out of the philosophical literature on the trolley problem had to do with intention, and this was the difference between intended harms and merely foreseen harms.

So, in the case where you are pushing the man off the bridge so that his body serves as a trolley stopper, you’re using that man as a necessary means to an end and that’s considered to be intended harm by deontological philosophers. Whereas in the other case, when you are simply turning a trolley away from five people and onto one person, you don’t require that the one person be on the sidetrack in order to accomplish your end of saving the five people. His death is simply a foreseen side effect, but not intended in any robust way.

So, utilitarian philosophers tend to say, “Well, that distinction is just bunk.” It’s a post-hoc rationalization for what is actually a difference in the emotional salience of the harm that you’re doing in one case and the other, because what really matters, and what everybody knows, is that in both cases you know that you’re going to be causing the death of one person and saving the lives of five people – and that is, after all, what matters: what you’re doing and the fact that you know what you’re doing. And so, I think what is interesting about that kind of utilitarian explanation is that it also depends on the extent to which you know what you’re doing.

I wonder about the relationship between that subtle difference between intended and foreseen harm – where the utilitarian points out, “You know what you’re doing in both cases” – and this kind of coarser distinction that, as I said before, comes out in the law and all sorts of places, namely between intentional and accidental harms. It would be interesting to push utilitarians on the extent to which that kind of mental state information actually matters.

LUKE: Now if some picture like this turns out to be true about the psychological basis and the brain basis for what turns out to be the philosophical theory of morality that each of us finds most plausible, how do you think philosophers should respond to that data if that turns out to be true?

LIANE: Well, it’s interesting, because most of the philosophers I talk with and interact with are among the people who are generating the data. So, I’ve had the good fortune of getting to know a whole bunch of empirical philosophers who do just the same kinds of experiments that psychologists and neuroscientists do. I think that what a philosopher could do is think hard about these kinds of psychological solutions to the problems. I think that what philosophers end up doing, and are still doing today, is articulating a lot of the problems that end up getting solved, at least to some extent, in psychological terms.

So, when I say solved, I don’t mean that philosophers pose something like the trolley problem and then scientists come up with the answer, but I think that what philosophy and science do well together is constrain each other’s programs and theories. Something that scientists can do – and we have kind of been talking about this with respect to virtue ethics and situationism – is point out that philosophers might have some theories that are based on a certain set of assumptions that are actually empirical assumptions, for instance, the assumption that something like virtue or character exists. Then a psychologist might say, well, actually, those factual assumptions about human behavior are mistaken.

LUKE: Right.

LIANE: In fact, situations account for a lot more of people’s behavior than you might think. So, a virtue ethicist would then deal with that problem and presumably amend the normative theories accordingly.

So, it’s interesting. One case of this that I witnessed just last week was at a workshop hosted by this organization called Culture and the Mind, over in England. It brought together a whole bunch of anthropologists, psychologists, neuroscientists, economists, and of course, philosophers. One of the philosophers at the workshop gave a talk on guilt. This was Gil Harman, a philosopher at Princeton.

I had heard him talk about guilt before, and his normative account, specifically, was that guilt is not required for morality – that a moral person need not feel guilty when they do something wrong, which is a very provocative normative theory. What was funny was that when he was introducing this idea, he said that what he started with was just his own introspective experience, which was that he doesn’t feel guilt and he doesn’t think of himself as an immoral person.

And of course, we all laughed and he was chuckling too. But actually a lot of philosophy happens that way.

And probably a lot of psychology too, where you start with an intuition that is based on your own experience and then you have to test it out. What Gil did was a lot of philosophy and a lot of thinking and analysis about these claims in relation to other normative theories about the role of guilt and how guilt ought to matter for morality. What Gil ended up doing at this workshop was giving a talk with a graduate student, also in philosophy, Cory Malley, who brought to Gil’s attention all of this psychological literature suggesting that guilt is incredibly adaptive – how guilt motivates people to behave better the next time around, either consciously or unconsciously.

And that actually resulted in this kind of neat compromise, where Gil discovered that what he was conceptualizing as guilt was actually known as shame in the psychological literature. And so, it was really neat to see how this psychological distinction between guilt and shame was actually quite helpful in resolving these debates in philosophy, and also how a philosopher could be moved – and of course, Gil is very empirically minded, so he is doing a lot of the generation of these ideas for empirical work. It was just neat to see that it mattered to the normative theory how guilt could be adaptive psychologically, functionally, and so on in everyday life.

LUKE: Yeah. It’s really exciting to see that kind of interaction between neuroscientists and psychologists and philosophers and anthropologists and economists. It’s really perhaps a golden age of this type of research in that way.

LIANE: Totally. I absolutely agree. I think that what is really neat is that philosophers are generating a lot of the problems that psychologists become interested in. And then, in turn, I think philosophers are becoming increasingly sympathetic to the empirical work, trying to figure out how to make everything work and how to build what actually is the case in the real world into their normative theories.

LUKE: Well, Liane, about this interaction between science and philosophy, I began with this comment that was very deferential to science about how the quickest way to make progress on philosophical problems is to hand them over to the scientists, if possible. That’s a very common view actually even among philosophers. That’s basically what naturalism is. The idea is that science doesn’t need to justify itself really: science works. This is how we get to the moon, this is how we defeat diseases, all that kind of thing.

But some would say that philosophy really does need to justify itself. After all, philosophers are still struggling to answer the same questions that Plato raised 2400 years ago. So, as someone who has a foot in both worlds – in science and philosophy – I’d love to hear your answer to the question, “What good is philosophy?”

LIANE: Well, I think philosophy does a lot of good. I often wonder whether moral psychology would exist as a field if not for moral philosophy, or just philosophy in general. Then again, it is a funny question to ask myself, because I can’t imagine moral philosophy not existing; I think that since the beginning of time we have been interested in questions about right and wrong, questions about free will, the problem of evil, and so on.

And so, I think that moral philosophy arose out of ordinary people with ordinary questions about life and morality. I think that what philosophy has done, and is continuing to do, is articulate those important questions and the various theoretical answers to those questions: how we ought to behave and how we ought to think.

What science can work with philosophy on is which of those answers are just impossible answers, and which are better answers, which are worse answers. So, philosophy could tell us what a particular solution to a problem is by reasoning through it. Psychologists can say either we do that or we don’t do that, and maybe lend a hand in helping us get to the right sort of solution that a philosopher would endorse. That’s the hope of something like moral psychology: to actually make us better people.

I think philosophers continue to advise us on that and help us figure out what those aims are. Psychology can help us along the way and figure out possibly how we can get there.

LUKE: Well, Liane, it has been a pleasure speaking with you. Thanks for coming on the show.

LIANE: Thanks so much, Luke.

Comments

Charles November 14, 2010 at 9:38 am

I think the Problem of the Trolley Problem comes down to certainty. When I pull the lever, I am 100 percent sure it’s going to work. But when it comes to the fat man, I just can’t accept that pushing someone off a bridge is going to stop it. I think there is a high probability that six people would die.

Kyle Key November 14, 2010 at 11:30 am

@Charles: I don’t think so…the lever could do nothing, the track switching mechanism could be broken, etc. But those variables are controlled for in the thought experiment…you’re to assume that both options (throwing the lever and pushing the person) will work infallibly.

Charles November 14, 2010 at 3:52 pm

I can’t assume that. The most I can do is put myself in that situation and ask myself what I would do.

Jeff H November 14, 2010 at 4:27 pm

Charles, that’s the point of it being hypothetical. This thought experiment assumes perfect knowledge of the outcomes of both choices. It also assumes that you only have two options. By saying “Well, I can’t assume that,” you’re breaking the whole hypothetical scenario. It’s true that it’s never so clean-cut in the real world, but the point of a thought experiment is to provide a simplified example from which to produce general principles.

Hermes November 14, 2010 at 5:07 pm

Luke, a technical question/problem about CSA…

I’ve noticed that if I include a link to other pages on CSA, my post doesn’t appear (immediately?).

(Then again, I did post two messages within a minute or two of each other, so maybe it’s a spam filter?)

Patrick November 14, 2010 at 5:15 pm

Jeff H: It’s not an unreasonable response to a hypothetical that seeks to plumb moral intuitions to reply that you are simply unable to take the givens of the hypothetical as truly given, and that therefore your response to the hypothetical is not supplying the information the experimenter thinks he is receiving.

mojo.rhythm November 14, 2010 at 6:56 pm

I really enjoyed this interview.

Great job Luke!

Luke Muehlhauser November 15, 2010 at 8:22 am

Hermes,

I assume it gets caught in the Akismet spam filter. Please let me know when that happens. Alas, I cannot configure Akismet to, for example, always allow posts from people who have posted 5+ times.
