CPBD 090: Toby Ord – Dealing with Moral Uncertainty

by Luke Muehlhauser on December 18, 2011 in Ethics, Podcast

(Listen to other episodes of Conversations from the Pale Blue Dot here.)

Today I interview philosopher Toby Ord. We discuss the problem of moral uncertainty: What do you do if you aren’t sure which moral theory is correct?

Download CPBD episode 090 with Toby Ord. Total time is 41:33.

Toby Ord links:

Links for things we discussed:

Recorded January 2011. The member count for Giving What We Can has since grown much larger: it now stands at 177.

Note: in addition to the regular blog feed, there is also a podcast-only feed. You can also subscribe on iTunes.

Transcript:

LUKE: Dr. Toby Ord is a Postdoctoral Fellow at the University of Oxford in the Philosophy Department and the founder of Giving What We Can, a charitable organization focused on giving to the most cost-effective charities in the world. Toby, welcome to the show.

TOBY: Thanks for having me on.

LUKE: Toby, much of your work concerns consequentialism. I’d like to start by asking you to explain for us what consequentialism is and what some of the major arguments against it have been.

TOBY: Okay. Well, there are three main types of ethical theory that philosophers discuss. One of these is consequentialism, which is a theory that’s fundamentally interested in making the world as good a place as possible and thinks that that’s what ethics is fundamentally about. Another type of theory is one called deontology, which says that ethics is fundamentally about obeying moral rules. Don’t lie, don’t steal, don’t kill, things like that.

Another type of ethical theory is called virtue ethics and says that ethics is fundamentally about being a person of good character — being noble, brave, and generous, having those virtues. These are three broad traditions.

A little bit more about consequentialism itself. The idea is that ethics is about making the world as good a place as possible. That’s a bit vague. You can flesh that out in different ways. There are heaps of different consequentialist theories.

The most famous of these is called utilitarianism, which is the theory that what we should do is act so as to make the world as happy a place as possible, to maximize the sum of happiness among all entities that can be happy. What you could do is you could have other types of consequentialist theory, although utilitarianism is the most famous.

For example, you could say, “Well, we don’t want to maximize the sum. We want to maximize the average.” Or you could say, “We want to maximize a weighted sum in some manner.” Or you could say that, “For each individual, what matters isn’t merely happiness, but it’s something else.” Maybe it’s a preference satisfaction, or maybe it’s some list of features that make a life good.

You could have heaps of modifications like that. You could even say that, “When an act of injustice is performed, that makes the world worse,” so you take that into account. You could have heaps of different theories like this, although utilitarianism is certainly the most important of the consequentialist theories.

That’s what consequentialism is, and it’s contrasted with these other approaches. Some arguments against consequentialism are really just arguments for the other types of theory. For example, people might just think, “Well, consequentialism isn’t very plausible, because ethics is fundamentally about hard and fast rules that you cannot ever break.” Or that, “Ethics is fundamentally about being a person of good character.” They’re positive arguments for the other approaches.

Whereas, some arguments against it are explicit attacks. A good example of that is that they say, “On consequentialist theories you’re allowed to do anything, no matter how abhorrent seeming if it leads to good enough consequences.” They’re attacking it because it lets you do bad things in order to get good outcomes. That’s one form of attack.

The other big form of attack is that it doesn’t allow you to be partial to your friends or your family or to yourself. Instead, it’s generally conceived of as a class of impartial theories where… there’s a large demand on you. If it turns out that you can help other people out more cheaply than you can help yourself, then maybe we have to give a lot of our money away in order to do this, or a lot of our time or something else. These are a couple of ways in which people attack it.

LUKE: Now, Toby, you’ve done some theoretical work which attempts to address some of the complaints against consequentialism, for example, in your bachelor’s thesis, “Consequentialism and Decision Procedures.” Could you explain what the theoretical moves you’ve made are?

TOBY: Yeah, sure. In my [bachelor's] thesis and also in my doctoral thesis, which is an extension of that work, I’ve looked at basically a different conception of consequentialism, or a different way of understanding consequentialism, which broadens its scope compared to how it’s normally construed.

On the normal accounts that you hear about consequentialism… For example, if you go into a philosophy course about it, they’ll tell you that it’s about acts. Then they’ll say that the fundamental principle of consequentialism is something like this: “An act is right if, and only if, it leads to an outcome which is better than all other outcomes.” That’s what they’ll say, and they’ll focus really on acts.

Whereas, I think that you shouldn’t just focus on acts. You should also assess everything else. Rules and character traits or motives can be assessed according to the same principle. The right set of rules is the set of rules that leads to the best outcome, and so on. If you do this, you can get some kind of a partial unification of consequentialism with its main rivals, deontology and virtue ethics. Because you can talk about rules in a similar way to how deontology does, and you can talk about virtues and character traits in similar ways to virtue ethics.

To see how this works, it’s easiest to start with an old objection that some people make to consequentialism, which is a bad objection. But in order to overcome it, you have to think it through, so it’s quite useful. The objection is that they say, “Oh, maybe consequentialism is self-defeating, because if we all behaved like consequentialists, then we’d produce a worse outcome than if we didn’t. Or even, perhaps, if just one person behaved like a consequentialist, that person would produce a worse outcome than if they behaved in some other way.” Consequentialism would then be self-defeating.

There’s some kind of a thing that has to be answered there, but any consequentialist worth their salt would say, “Well, if it turns out that, empirically, believing this theory or acting on this theory would lead to worse outcomes, then so much the worse for believing in this theory or acting on this theory. We should endorse whichever theory it is that leads to the best outcomes. We’re not slavish adherents to this idea. We’re only interested in it insomuch as it leads to good outcomes.” That would be the kind of approach.

In my thesis I looked at this and tried to work out how you can spell this out, and what kind of a theory can really deal with this. There are quite a few aspects that are quite technical and maybe less interesting, but the basic idea really is to assess everything in terms of its consequences.

For example, suppose we’re considering what’s the best moral code or professional code for doctors. Is it the code where they treat the patient in front of them as all-important and try to maximize the benefit for that person? Or is it a utilitarian type of naively calculating approach, where they try to add up the benefits and costs of every action? I think it’s quite plausible that using the former rather than the latter actually leads to better outcomes, because more people will actually go to the doctor.

The idea, though, is that you’re looking, in that case, at a code of conduct, and you’re assessing it in consequentialist terms in a similar way to how you could assess individual acts in consequentialist terms, and you could even assess character traits. Would it be good if I were a very generous person? Well, what would the outcomes be? It lets us see why rules, character traits, and other things are really important to focus on when we are thinking about ethics, without having to forgo a focus on consequences at the same time.

One of the things that I point out in my thesis is that there’s a lot of evidence that the early utilitarians, like Mill and Sidgwick and possibly Bentham, actually thought like this all along. There’s just been a bit of confusion, and in fact we have forgotten that they made a whole lot of comments like this. In the 20th century, people who care about consequentialism got very fixated on acts, but I think if you look before that, you’ll find that there is less of that fixation.

So that’s some kind of approach. It could be seen by people like me as an attempt to unify consequentialism with the best ideas in deontology and virtue ethics. Or it could be seen by my opponents as an attempt to swallow up these other theories within consequentialism, depending on whether they look favorably on the project.

LUKE: Well, Toby, I have a series of posts on my blog called “Living without a Moral Code,” and I describe my predicament as being that I’m really motivated to make the world a better place. I really want to do what’s right (maybe it’s my religious upbringing, or something, that makes morality so important to me), but the problem is that I don’t know what it is that would make the world a better place, because I’m not very confident about any particular theory of meta-ethics or normative ethics. Of course, philosophers haven’t come to any consensus on those issues either.

How could someone like myself deal with that kind of fundamental moral uncertainty where it’s not just a matter of not being sure which charity will do the best in helping people or something like that, but not being sure about fundamental moral principles at all, but wanting to be moral? How can we deal with that kind of moral uncertainty?

TOBY: Well, that’s a very big question, and it’s one of the things that I’m quite interested in. Interestingly, there’s been very little focus on that type of question in modern academic ethics, at least analytic ethics, despite it being a universal predicament.

I mean, I’m not sure what the best moral principles are either, and no one is, or if they are sure, then they shouldn’t be: they don’t have enough justification to be sure. We’re all in situations where we should give at least some small credence to a whole lot of different ethical theories when we think about these things.

Because we’re all in this situation, it’s particularly interesting to try to work out how to deal with it. One approach might be to say, “Look at your theories, find the theory in which you have the highest credence, the one you support the most, and just do whatever it says.”

But on reflection you can see that can’t be right, because it might be that your credence is split almost evenly between 100 different theories. One of them has slightly more, and that one says to do a certain thing, but all the other theories say that it would be gravely irresponsible to do that thing and that you shouldn’t do it. It seems like you shouldn’t, in that case, just promote one of them to be believed as if it’s true.

You might think, given that kind of case, “Well, what we need to do instead is not to think of it at the level of theory choice, but to think about the acts, one at a time.” In that particular case, if you chose the act that has the highest probability of being morally permissible, then you would do the other act.

It would get you out of trouble on that one, but it would run into its own problems. There might be a case where you are unsure between two different theories and you think that one of them is just slightly more probable than the other, so that the act suggested by the first one, the slightly more probable one, has the highest chance of being permissible if they conflict.

In that case, it might be that the other theory says there’s much more at stake. It says that, no, there is a huge difference between doing something that’s absolutely, crazily wrong and doing something that’s really virtuous, whereas the theory you’re more confident in says that there’s not much at stake. In that case it seems like you should hedge your bets and worry about the one that says there’s more at stake.

Then you might think, “Well, OK, how are we supposed to deal with this?” Maybe it’s something more like the way that we deal with empirical uncertainty. What we do in those cases is we try to maximize the expected benefit. We’ve got this idea from decision theory of expected benefit, where you multiply the probabilities by how good things would be in each case in order to work out what you should do.
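Note: as a rough illustration of the expected-value approach Toby describes, here is a minimal sketch in Python. The acts, credences, and value numbers are invented for the example, and the calculation assumes that the different theories’ values can be put on one common scale, which is exactly the difficulty discussed next.

```python
# A minimal sketch of "maximize expected moral value" under moral uncertainty.
# All numbers are made up for illustration, and the calculation assumes the
# theories' values sit on one common scale, which is the contested step.

credences = {"theory_A": 0.6, "theory_B": 0.4}  # how likely you think each theory is

# Hypothetical value each theory assigns to each available act.
values = {
    "act_1": {"theory_A": 10, "theory_B": -100},
    "act_2": {"theory_A": -1, "theory_B": 0},
}

def expected_moral_value(act):
    # Probability-weighted value across theories, as in ordinary expected utility.
    return sum(credences[t] * values[act][t] for t in credences)

for act in values:
    print(act, expected_moral_value(act))  # act_1: -34.0, act_2: -0.6

# act_2 wins: the less probable theory's higher stakes dominate the decision.
print("choose:", max(values, key=expected_moral_value))
```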

This is, I think, a more plausible approach, but it turns out that it runs into a whole lot of trouble as well, because it’s very difficult to do comparisons of value between the different theories. Suppose one of the theories is something like utilitarianism and says we should maximize happiness. Let’s suppose that we could do something such as killing 10 people to save the lives of 11 people. Let us suppose there are no other side effects of this action. Utilitarianism says that’s good. Your other theory, let’s say Kantianism, says that’s really wrong to do that.

How are you supposed to make a decision there? How confident would you have to be in utilitarianism before you should do that? It seems like we need some way of comparing how important utilitarianism thinks the benefit of one life is to how important Kantianism thinks…

LUKE: Yeah, it’s hard to see how you could put those two theories on the same scale of value.

TOBY: Yeah. That’s exactly right. That’s a big challenge and there are a couple of different approaches to dealing with that, which are quite theoretical, but one of them is to try to work out a version where you never have to do any explicit comparisons of the values between the theories. You just look at the structure of each theory and you use some clever tricks to try to find examples where they think there’s the same structure between several cases and try to use those tricks in order to rank the theories.

I think that can’t really be done, and you have to actually crack open the theories and look inside them at what they’re saying in order to get the right decision. It’s really interesting theoretical stuff. It’s unfortunately at a very early stage of development, and there are only a few philosophers looking at this. It’s really only in the last 15 years or so that it’s started to become a topic in its own right.

I don’t have all that much advice as to how to deal with this practically now. Although you can see that, in some cases, these things start to come up. For example, the philosopher Peter Singer has an argument to the effect that we have an obligation to donate a large portion of our money to help fight global poverty.

His argument is that it’s analogous to a situation where we see a child drowning and the death is easily preventable, except that we would ruin our suit, say, by wading into the muddy water. Most people would think that a person would act wrongly if they just walked away in that case. Yet, we tend not to think that when someone could donate an amount of money equivalent to buying a nice suit and save someone’s life, but they buy the suit instead. We tend not to judge them as harshly.

His argument is that there’s an inconsistency in our judgments there, and that the considered judgment really is that they’re both equally wrong, rather than that it’s now permissible to leave children drowning in ponds. It’s debated a bit in the literature as to whether he’s right about that.

You might then think, “Well, hang on a second. If I’ve got some level of credence that it’s really wrong to let this happen, then maybe I should be donating more money than I’m donating, in order to hedge my bets. Even if I think that there’s a 90 percent chance that he’s wrong and that it’s purely optional to donate money to charity, there’s still this serious possibility that, as a lot of philosophers would say, I’m doing something really wrong. Maybe I should be taking that into account when I’m choosing my action.”

You can see how it could start to influence some things that people think. We’re often very tempted to just say, “Oh, well, as it happens, I think Peter Singer is wrong about that, so I’ll just go ahead and go my own way.” But I think that once you start to really look at how you have to deal with moral uncertainty, you can see that it’s not enough to just say, “Oh, he’s probably wrong.” You would need to be really quite sure.

Similarly, if you were firing a gun through the bushes, you wouldn’t want to be just fairly convinced that there was no one in the bush. You would actually want to, and it would be your responsibility to, do quite a lot of checking to get the probability that there was someone there down very small. I think that there’s a similar situation here.

Hopefully, that’s some practical help about how you might weigh these things.

LUKE: Toby, I think all those ways of dealing with moral uncertainty are way too complicated and I’m just going to become a divine command theorist.

TOBY: [laughs]

LUKE: Well, so you don’t have any answers yet?

TOBY: Not really. It’s a really difficult question, but at least this idea, I think, of moral hedging is quite important. Actually, it’s quite nice. It’s related to an idea of moral trade, which you could have as well.

Instead of just one person who is uncertain about something, imagine two people who have different moral preferences, who might actually be able to get greater benefits if they did some kind of a swap. Suppose one of them cares more strongly about issue A and the other one cares more strongly about issue B. They might be able to swap votes on the matter in order to get some mutually beneficial outcome.

There are various examples of this, but here’s an example that I know has happened. A friend of mine was in New Zealand and was interested in voting for one party; the election was a two-horse race. Her friend was going to vote for the other party, so they both decided to stay home instead of going to vote.

You could have slightly more extreme examples of that where one person was going to donate, let’s say, $100 to a gun control charity and another person was going to donate $100 to a gun use charity. I’m not sure what they’re called, like the NRA. They could both decide instead to donate the $200 to Oxfam or some other group where they both agreed [on the good] it was doing.

It would seem that that’s a type of morally win/win trade, and it’s quite interesting to think about those cases. I think there’s quite a lot of interaction between them and moral uncertainty.
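Note: here is a small, purely hypothetical sketch of why a moral trade like this can be win/win. The dollar amounts and the per-dollar valuations each donor assigns to the causes are invented for the example.

```python
# Hypothetical per-dollar moral value each donor assigns to each cause
# (positive = good by their lights, negative = actively bad by their lights).
valuations = {
    "alice": {"gun_control": 1.0, "gun_rights": -1.0, "oxfam": 0.6},
    "bob":   {"gun_control": -1.0, "gun_rights": 1.0, "oxfam": 0.6},
}

def value_to(person, donations):
    # Total value of a set of donations, judged by one person's valuations.
    return sum(valuations[person][cause] * amount for cause, amount in donations.items())

no_trade   = {"gun_control": 100, "gun_rights": 100}  # each gives $100 to their own cause
with_trade = {"oxfam": 200}                           # both redirect to a shared cause

for person in valuations:
    print(person, value_to(person, no_trade), "->", value_to(person, with_trade))
    # Without the trade each sees a net 0 (the donations cancel out by their lights);
    # with the trade, each sees +120 by their own lights.
```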

LUKE: Well, even if you don’t have the answers yet, I’m glad that you are at least working on those issues of moral uncertainty.

TOBY: Yeah. Some of the cases can be really huge. I’ve got some philosophical examples.

I guess the poverty cases are fairly practical. There are similar issues about vegetarianism. If you hold some credence that it’s wrong to eat meat, and you don’t think there’s much chance that it’s wrong to refrain from eating meat, then that could influence how you should decide, even if you think there’s less than a 50 percent chance that eating meat is wrong.

Similarly, there are some big issues when it comes to evaluating global catastrophes. If it’s something like climate change, perhaps you think that this could cause human extinction or maybe some other event could cause human extinction.

Human extinction is at least as bad as 6.8 billion deaths, but a lot of people think it’s even worse, that all of the future people who could live should count morally. Maybe they think that there’s going to be not just billions of them, but hundreds of billions or trillions. They might think that that component makes this thousands of times more important than we might naively think. Alternative moral theories say that those future people don’t count at all.

You can get some kind of radical uncertainty then about how important it is to avoid this kind of catastrophe. Is it billions of times more important than saving one person’s life or is it trillions of times more important than saving one person’s life? This kind of radical disagreement.

You might think that doesn’t matter, because once it’s billions of times more important, it’s important enough, but it does end up mattering if the chances of these things are very small, or if the level of risk you can mitigate is small, say you can only change the probability by one in a million or something like that.

LUKE: Well, while we’re working on those issues of moral uncertainty, we might still want to work on actually answering these fundamental moral questions, so that we’re not so uncertain about which normative theory would be right, or that kind of thing. How does one go about deciding between the different normative ethical theories that are on offer, different versions of consequentialism, contractarianism, deontology, virtue ethics?

TOBY: Yeah, that’s a good question. It certainly makes sense. Even if you have a theory of moral uncertainty about how to act when you don’t know the answer to some philosophical question, you can see that there’s still a large value of information. Just as we might have a theory which says, “If you don’t know what the card in someone’s hand is when you’re playing poker, the right thing for you to do at the moment is to, let’s say, fold.”

But it could mean that there’s a huge value of information if you can find out something about what card they were holding. That kind of principle still makes sense here, that even better than just acting under uncertainty is removing your uncertainty or lowering your uncertainty, and then you make a more sensible act.

A lot of people in philosophy departments have been trying to argue their cases between these different types of moral theory for quite a while. Although, if you look at the history of philosophy, while it goes back more than 2,000 years, there were very few people doing it in the past and a lot more now. If you look at all the words that have been written about philosophy, I don’t know when the median word was written, but it’s probably something like 1970 rather than the year 1000.

Are these words all of equal quality? Well, the answer is probably no. Also, have there been diminishing marginal returns? Is it the case that we’ve worked out a lot of the basic stuff, and now, if someone wants to make progress, they’re doing it in a more constrained area? I think that’s probably true. So you can’t just do it by numbers; I was just pointing that out.

I think that there has been a bit of progress made on some of these questions. Certainly the arguments pile up, even if people haven’t come to firm conclusions about what type of theories are better. The body of sensible argument has been increasing. There’s also, as always in philosophy, a body of stupid arguments on some of these topics, but setting that aside, the amount of sensible argument has also been increasing.

In the case of consequentialist theories, I think that there is a lot of evidence in favor of them at the expense of other theories. Although in some ways, like with my work, it’s not even at the expense of them. It’s possible that in a certain sense the deontological theories and virtue ethics theories are true. While in another sense, I think a deeper sense, the consequentialist theories can be true at the same time.

This can be true: if it is true that we ought to follow a certain set of rules, the set of rules that we ought to follow is the one that’s chosen based on its consequences. If we ought to have a certain character, the character that we should have is chosen based on those consequences. We ought to act in certain ways, and the acts should be chosen based on their consequences.

It’s possible to have an underlying theory that fits those things, but as to how in practice to do it, that’s difficult to say. The obvious answer would be, I think, enroll in a philosophy course and so on. As to how much more certain you’ll be when you come out of the course, I’m not sure. It’s just quite tricky to do.

LUKE: Well, one of the more common ways of arguing between these different normative theories is to argue about whether the conclusions of a particular theory agree with our moral intuitions about the matter.

For example, if happiness utilitarianism tells us that we should kill an innocent person to harvest their organs to save the lives of five other people, but then our moral intuitions tell us that this would be highly immoral to kill this person, this innocent person, then that somehow counts as evidence against happiness utilitarianism.

Why trust our moral intuitions in the first place? Aren’t they just evolved moral prejudices? Do philosophers think that we have a morality module in the brain that can detect moral facts and that’s why our intuitions provide evidence for or against particular moral conclusions? What’s the thinking there?

TOBY: That’s a good question, and definitely I don’t have a solid answer to it. Although I should say, this is in some ways a virtue of philosophers: they often know when they don’t know something. Whereas if you asked people on the street, they might just assume that they have answers to all of those questions, even though those answers are probably unjustified.

I think our intuitions do count as evidence in favor of certain moral positions. However, I think it’s fallible evidence, in the same way as, suppose we want to know the relationship between the height at which you drop a ball and the length of time it takes to hit the ground. We know that there’s a parabolic aspect here, but we might not be perfectly accurate with the stopwatch we use to measure the times. It turns out that there’s noise in the data.

If we naively just try to connect all of the dots on the chart we’re making, we would get this very strange-looking line that was roughly parabolic, but had heaps of squiggles on it. I think that that would be a mistake. What we should do in that case is to use Occam’s razor and think that simplicity of your theory is a virtue. In doing so, that kind of approach helps to eliminate the noise when you’ve got fallible measurements.
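Note: Toby’s curve-fitting analogy can be made concrete with a short sketch using made-up noisy measurements of the drop-height/fall-time relationship (t = sqrt(2h/g)), contrasting a connect-the-dots fit with the simple model that Occam’s razor favors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up experiment: the true fall time for a drop from height h is t = sqrt(2h/g).
g = 9.8
heights = np.linspace(1.0, 20.0, 8)                            # drop heights in meters
measured = np.sqrt(2 * heights / g) + rng.normal(0, 0.05, 8)   # noisy stopwatch readings

# "Connect the dots": a degree-7 curve threads through every noisy point,
# squiggles and all: the over-fitted, implausible "theory."
overfit = np.polynomial.Polynomial.fit(heights, measured, deg=7)

# Occam's razor: the one-parameter model t = c * sqrt(h), fit by least squares.
c = np.sum(np.sqrt(heights) * measured) / np.sum(heights)
print("fitted c:", round(c, 4), "| true sqrt(2/g):", round(np.sqrt(2 / g), 4))
```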

Similarly, one way that we could try to do moral theory is to say that the truth about ethics is just the combination of my intuitions about every single scenario. You just add them all together. In that case we’ll come up with a moral theory that, at least for me, has no strikes against it, because it matches my intuition in every single case.

But it’s a very implausible theory. We know that over time, the moral intuitions of a society change. We would think that if people in the past had used a theory like this, they would have gotten very stupid answers. We also know that intuitions differ between different people. This seems to indicate that, assuming there’s something worth tracking at all, which I think there is, we have some kind of a noisy measurement of it.

We can’t just use data fit as the only criterion. I would try to use something like fit to the data plus some notion of theoretical simplicity, just like in science.

Not everyone agrees with that. In fact, that is actually an interesting topic. Your question touches on several different things: normative ethics, the epistemology of ethics (how do we come to know moral facts?), some questions about moral psychology and evolutionary psychology, and also some questions about meta-ethics.

LUKE: Well, Toby, you’re not just a moral theorist, but a moral activist, I guess we might say, in that you’re the founder of the organization Giving What We Can. I spoke about, or I interviewed someone about Giving What We Can in an earlier interview, Nick Beckstead, who launched a chapter of your organization at Rutgers University, but I’d like to ask you how you came to found the organization and what successes you’ve had so far?

TOBY: Sure. Well, yeah, it was something like seven years ago or six years ago, I was writing an essay as part of my graduate studies here at Oxford. The essay was on the topic of, ought we always to forgo a luxury if that would allow us to save someone’s life?

At first you might think, “Yeah, well, obviously we should. That’s a pretty simple essay topic.” But what they were getting at was these cases of global poverty, and whether we are perhaps always in a situation like this, where we can forgo a luxury to enable us to save someone’s life, in which case maybe we would have to forgo all luxuries in our lives. It turns from something that seems quite trivial into something that seems very challenging for us.

I was looking at this and reading arguments by Peter Singer and others about this. I’d always had some sympathy for this view because of a utilitarian leaning. It really made me think hard about it, and to think about it practically. Could I live like this? How should I live? I decided to try to work out what I could achieve in my life if I really wanted to. That’s quite a large project, and I certainly haven’t finished working it out, but I’ve done some sketching, and one thing to think about is that there are different areas of your life you can achieve things in.

For example, there’s what you can do through your work. There are also the different things that you can achieve through your friends and family, things that you can do through volunteering, and, of course, there are things that you can do by donating money. I felt that the donating-money part was central to what I was thinking about at the time, and it’s also the part that is easiest to quantify.

So, how much money could I donate over the course of my life? I worked out that, as a UK academic, I should be able to earn about 1.5 million pounds over my life. If I kept a similar living standard to what I had at the time as a grad student, I would be able to give away about a million of the 1.5 million, which is quite a lot. It’s quite a nice thing to think about when you’re young: what’s the most I could do if I really wanted to?

I should point out that if you’re an academic in the U.S., you make more than that. The same in Australia. It seems, actually, like the UK is a bad country for an academic, as far as donating money goes.

I then thought about this and thought, what could I do about it? And so, I became really interested in cost effectiveness. A colleague of mine, Gaverick Matheny, sent me a link to a fantastic report done by the Disease Control Priorities Project, where they did a meta-analysis of a whole lot of papers that had been published on the cost effectiveness of different ways of treating various health problems.

And so, it wasn’t just a case of looking at the health problem like AIDS as one big block, but instead, they would say, “Well, there are heaps of different ways to treat or prevent AIDS. So, let’s look at approaches of, let’s say, treating one of the illnesses that you get if you’ve got AIDS and your immune system is lowered and it allows extra illnesses in.”

There’s an illness called Kaposi’s sarcoma, and they found that treating it produces about one of what they call a “quality-adjusted life year.” That’s the equivalent of a year of life at full health, an extra year of life at full health.

The idea is that it could actually be 10 extra years of life at 10 percent of quality of health, or two extra years of life at half quality of health. Or maybe it’s just taking 10 existing years of life and improving them by 10 percent. The idea is that it’s something which is worth the same amount as an extra year of life at full health. They call that a QALY, a quality-adjusted life year.
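Note: the QALY arithmetic here is simply years multiplied by a quality-of-health weight, with full health counting as 1.0. A tiny illustrative sketch (the helper function is hypothetical, not from any particular source):

```python
# QALYs = years of life gained (or improved) multiplied by the quality weight of
# those years, where 1.0 is full health. Each of Toby's examples comes to 1 QALY.
def qalys(years, quality_weight):
    return years * quality_weight

print(qalys(1, 1.0))   # one extra year at full health          -> 1.0
print(qalys(10, 0.1))  # ten extra years at 10% of full health  -> 1.0
print(qalys(2, 0.5))   # two extra years at half of full health -> 1.0
print(qalys(10, 0.1))  # ten existing years improved by 10%     -> 1.0
```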

It’s a utilitarian type of concept. In this book, they wanted to measure things in these terms and ask: if you’re funding different programs and different interventions, how many of these quality-adjusted life years can you produce for a given amount of money?

And so, if you have $30,000 and you want to treat Kaposi’s sarcoma, you can produce about one year of quality-adjusted life. Whereas, if you instead take different approaches, for example, if you instead use antiretroviral drugs to fight HIV itself, you can produce about 15 years of life for the same amount of money.

If you go for education for high-risk groups, you can get up to about 100 years of life. And then, there are some other approaches. That’s not the best that we know of in terms of HIV/AIDS. If you look at other areas of health, you can get all the way up to about 10,000 years of life at full health for the same amount of money.

Just looking at HIV/AIDS interventions, they span about three orders of magnitude in total from the least cost effective to the most cost effective. The most cost effective does 1,000 times as much good as the least. If you’re willing to go into other areas, you can do up to 10,000 times as much good.
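Note: putting the approximate figures from the interview side by side gives a rough sense of the cost-per-QALY spread. These are just the round numbers Toby quotes, so treat the output as a back-of-the-envelope illustration rather than current cost-effectiveness data.

```python
# Approximate QALYs produced per $30,000, using the round figures from the interview.
budget = 30_000
qalys_per_budget = {
    "treating Kaposi's sarcoma": 1,
    "antiretroviral therapy for HIV": 15,
    "education for high-risk groups": 100,
    "best interventions in other health areas": 10_000,
}

for intervention, q in qalys_per_budget.items():
    print(f"{intervention}: ~${budget / q:,.0f} per QALY")

# Ratio of the best to the worst on this list: the most cost-effective options
# can do thousands of times as much good per dollar.
best, worst = max(qalys_per_budget.values()), min(qalys_per_budget.values())
print(best // worst, "times as much good for the same money")
```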

So this really made me very interested, for two reasons. One reason was that it meant that I could do a huge amount with my money. If I took it seriously, by giving away about 1.5 million dollars (U.S.) over my life, which is a million pounds, I would be able to produce something like 400,000 years of life at full health, equivalent, which is amazing.

Caring about where you give it is also incredibly important. For the average American, it’s, I think, quite possible to give 10 times as much as they would normally give, and to give it to organizations which are at least 10 times as effective. If someone did both of those things, it would have 100 times the impact.

I decided to think really seriously about this idea of giving more and giving more effectively, and trying to do, really, the best you can with the donations. From this, I decided to set up a society of people who are taking it seriously and who are willing to donate a significant portion of their income over the rest of their life to the places they thought could most cost-effectively fight causes or effects of poverty in the developing world.

I chose that focus area to narrow it down. I set it up about a year ago. Now we have 80 members, and together the members of the organization have pledged about 15 million U.S. dollars over the course of their careers. That’s enough money, using the figures I gave before, to produce about seven million quality-adjusted life years.

To put that into perspective: seven million years, if it was all lived in a row, is enough time to take us back to the split between Homo and Pan. The common ancestor of humans and chimpanzees lived about seven million years ago. It’s quite a mind-boggling amount of benefit that could be created by a relatively small group of people.

LUKE: You know, Toby, what I love about Giving What We Can is that it places the emphasis on doing as much objective good as possible. The problem with a lot of charity is that we’re really just giving to the charity and we’re purchasing warm fuzzy feelings about what we’re doing. And so, we’ll give to the charity that gives us warm fuzzies, rather than doing the research and figuring out what actually does the most good, in terms of, you know, quality-adjusted life years, or something like that.

And so, you know, this is just such a great thing to put the emphasis on: “You know what, if you’re going to give to charity, maybe you should give to making the world a better place, rather than purchasing warm fuzzies. If you want warm fuzzies, get them somewhere else. Let’s come together and make the world a better place.”

TOBY: Yeah, well, exactly. As one of my friends puts it, there’s a lot of people who want to make the world a better place, but there aren’t that many people who want to make the world as good a place as possible. You know, there are people who want to make a difference, but not to make as much positive difference as possible.

LUKE: Well, Toby, it’s been a pleasure speaking with you. Thanks for coming on the show.

TOBY: OK, yeah. Thanks very much for having me.


Comments:

Carl Shulman December 18, 2011 at 6:02 pm

The current Giving What We Can membership is 177, by the way. When was this recorded?

http://www.givingwhatwecan.org/about-us/our-members.php


josefjohann December 18, 2011 at 6:13 pm

Glad to see the podcast back. Can’t wait to hear the Russell Blackford one.


Jahed December 18, 2011 at 9:25 pm

I love the way you started the interview, by asking Toby about the major arguments against consequentialism. Good stuff.


Bob December 19, 2011 at 11:35 am

Homo and Pan should be capitalized.


Silver Bullet December 20, 2011 at 7:40 am

At last! Something good to listen to on the treadmill!

SB

