Meta-ethics: Railton’s Moral Reductionism (part 2)

by Luke Muehlhauser on April 22, 2011 in Ethics

In an earlier post, I explained Peter Railton’s moral reductionism by summarizing some of chapter 9 in Miller’s Introduction to Contemporary Metaethics. Now, I consider objections to Railton’s theory.

Objections

David Wiggins has attempted to attack Railton’s moral reductionism with a new variation of Moore’s Open Question Argument, but this attack fails (see Miller’s chapter). Instead, I’ll consider some objections (raised by David Sobel) to the ‘full-information’ analysis of non-moral goodness upon which Railton’s moral reductionism depends.

Let’s review Railton’s ‘full-information’ analysis of non-moral goodness. Using Railton’s reforming definition of non-moral goodness,

Y is non-morally good for person X if Y is what X+ (an ideally rational and fully informed version of person X) would want the non-idealized X to want in X’s circumstances.

For example, consider Lonnie, who feels miserable and lethargic in a foreign country. He comes to desire milk, but unbeknownst to him, his lethargy is due to dehydration, and drinking milk will only make the problem worse. Of course, Lonnie+ would want Lonnie to desire water rather than milk, and so water, but not milk, is non-morally good for Lonnie.
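To make the shape of that definition concrete, here is a toy sketch in Python. Everything in it (the Agent class, the idealize step, the endorsement rule) is my own illustrative invention, not Railton’s formulation; it only mirrors the structure of the definition and the Lonnie example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Agent:
    desires: List[str]        # what the agent actually wants
    information: List[str]    # facts the agent knows
    circumstances: str        # the situation the agent is in

def idealize(x: Agent, all_facts: List[str]) -> Agent:
    """Return X+: the same agent, but fully informed and (by stipulation)
    ideally rational. How the idealization is supposed to work is exactly
    what Sobel's objections press on."""
    return Agent(desires=x.desires, information=all_facts,
                 circumstances=x.circumstances)

def would_want_x_to_want(x_plus: Agent, x: Agent, y: str) -> bool:
    """Stand-in for X+'s counterfactual judgment when placed in X's
    circumstances. Toy rule: X+ endorses y only if no known fact says
    y worsens X's situation."""
    return all(fact != f"{y} worsens {x.circumstances}"
               for fact in x_plus.information)

def is_non_morally_good(y: str, x: Agent, all_facts: List[str]) -> bool:
    """Y is non-morally good for X iff X+ would want X to want Y
    in X's circumstances."""
    return would_want_x_to_want(idealize(x, all_facts), x, y)

# Lonnie's case: he wants milk, but milk worsens his dehydration.
lonnie = Agent(desires=["milk"], information=[], circumstances="dehydration")
facts = ["milk worsens dehydration", "water relieves dehydration"]
print(is_non_morally_good("milk", lonnie, facts))   # False
print(is_non_morally_good("water", lonnie, facts))  # True
```

The philosophical work is hidden inside would_want_x_to_want, which here is just a stub; Sobel’s objections below are precisely about whether any such idealization step can be spelled out.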

One problem with this account, says Sobel, is that first-hand experience is often required to understand what it is like to lead a particular kind of life.

To see how this poses a problem for Railton’s account of non-moral goodness, consider how person X might acquire the information required to become the idealized X+ and thus have certain desires about what X should want. One way is to acquire this information serially, Sobel suggests:

Our ideal self is expected to achieve full information by acquiring firsthand knowledge of what one of the lives we could live would be like, retaining this knowledge, and moving on to experience the next kind of life we could lead.

The worry is that Lonnie might not take Lonnie+’s advice seriously because Lonnie+’s knowledge of alternative lives has corrupted Lonnie+’s view of the type of life Lonnie is living. Thus, perhaps Railton’s full-information account of non-moral goodness can’t play the normative role he wants it to.

If that is confusing, Sobel’s example of “an Amish person who does not know what other options society holds for her” helps:

The experience of such a person could differ significantly from the experience of the same person who did have knowledge of many other options that society offers… to be able to claim that one knows what it is like to lead such a life one must experience what it would be like to be in those shoes (explicitly not what it would be like to be in those shoes with the accumulated knowledge of what it would be like to have lived a multitude of alternative sorts of lives). Attempting to give the idealized agent direct experience with what it would be like to be such an Amish person, while this agent has the knowledge of what it would be like to live many significantly different sorts of lives, will in many cases be impossible.

So the serial method of becoming X+ faces problems. Sobel suggests that another method may be the amnesia method:

The agent must have an experience of what some life would be like, then forget this and be ready to learn what some other life would be like without the latter process being affected by its position in the series. Then at the end of the learning and forgetting process we would have to remove (serially or all at once) each instance of amnesia while, on some views, adding factual information and immunity to blunders of instrumental rationality somewhere along the way.
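For readers who think better in code, here is a toy sketch (again, my own illustration, not Sobel’s) of the difference between the two routes to X+. It makes concrete why ordering matters on the serial route: what each life “teaches” depends on what the agent already carries from earlier lives, whereas the amnesia route makes each lesson order-independent and pushes the difficulty into the final recombination step.

```python
from typing import Dict, List

def experience(life: str, memory: List[str]) -> str:
    # Toy model: what a life teaches is coloured by what is already remembered.
    return f"what {life} is like, given prior knowledge of {memory or 'nothing'}"

def serial_method(lives: List[str]) -> Dict[str, str]:
    """Experience each life in turn, retaining everything. Later lives are
    experienced through the lens of earlier ones -- Sobel's worry."""
    memory: List[str] = []
    lessons: Dict[str, str] = {}
    for life in lives:
        lessons[life] = experience(life, memory)
        memory.append(life)
    return lessons

def amnesia_method(lives: List[str]) -> Dict[str, str]:
    """Experience each life 'fresh', forget it, then restore all the memories
    at the end. Each lesson is order-independent, but the final recombination
    step (glossed here) is where Sobel locates the new problems."""
    return {life: experience(life, []) for life in lives}

lives = ["naive Amish life", "Silicon Valley IT life"]
print(serial_method(lives))
print(amnesia_method(lives))
```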

Sobel suggests four problems with this method as well, though Miller believes they can all be overcome. (See Miller’s chapter.)

There are other proposed problems with full-information accounts of non-moral goodness, but I will not consider them here. Those who study the Friendly AI problem may notice the relevance of these objections to Yudkowsky’s proposal for Coherent Extrapolated Volition.

Comments

Garren April 22, 2011 at 7:12 am

Sobel’s criticism is made possible by the use of an idealized self which may have developed a different set of fundamental desires. Maybe the idealized self is in love with a person the real self has not met.

That reminds me, Railton was the one who wrote that interesting paper on moral ends, love of a person, and alienation. May well be relevant to the AI ethics thing.

http://google.com/search?q=railton+alienation+consequentialism

lavalamp April 22, 2011 at 8:20 am

Re: the example about the Amish person, can you explain why he thinks that’s a bug? It sounds like a feature to me…

Nisan April 22, 2011 at 8:44 am

Sobel’s Amish example was not clear to me at first. It turns out he’s talking about a naive Amish person becoming Amish-person+ by serially experiencing at least two possible future lives, one in which she retains her naive lifestyle and one in which, say, she works at an IT startup in Silicon Valley. Sobel’s point is that this won’t work if Amish-person+ experiences the Silicon Valley life *first*, because afterward she’ll be unable to experience what it’s like to live the rest of her days in Amish naivety.

The obvious solution is for Amish-person+ to experience the naive Amish life first. But Sobel points out that one can’t order lives by naivety; there are multiple dimensions of naivety. So then Sobel moves on to the amnesia method.

lavalamp April 22, 2011 at 10:24 am

@Nisan, thanks, I didn’t realize that’s what was meant by “serial”, as it so obviously doesn’t work. In my head I figured all the various lives resulting from all choices are lived out in parallel and then the resulting mind states are recombined afterwards, which I suppose amounts to the same thing as the amnesia method.

Eliezer Yudkowsky April 22, 2011 at 1:11 pm

These are not *objections*. This is the sort of detail work that would have to be done to actually design a CEV. It’s primitive in some ways (I think that the original CEV doc, by asking about the ordering of operations like “knew more” and “thought faster”, was already operating on lower-level granularity than this), presumably because these philosophers still aren’t thinking in terms of running an Artificial Intelligence. But it’s the right sort of thinking nonetheless.

I think I’d actually like to see them treat it as a pragmatic problem of building an actual AI and try to describe the normative form then, if anyone could possibly explain to them why this is important.

cl April 22, 2011 at 4:28 pm

One problem with this account, says Sobel, is that in many cases, first-hand experience is often required to understand what it is like to lead that kind of life.

Well yeah, that’s the small problem. The BIG problem is that none of us have access to “an ideally rational and fully informed version of person X,” unless of course–you guessed it–we have something like an omniscient Being that can communicate with us. So at the very least, the atheist seems forced to concede that IF an omniscient, omnibenevolent God exists, THEN that God is the best possible source of moral knowledge. This seems to flow perfectly from all that you’ve offered from Railton thus far.

I mean, what does Railton’s theory really accomplish if the locus is purely hypothetical? IOW, how can we put it to practice? It’s not like we can ask questions of this ideally rational and fully informed version of ourselves.

Luke Muehlhauser April 22, 2011 at 6:59 pm

Eliezer,

Yes, well, that’s what I want to do (after my metaethics sequence). I’m not going to wait for philosophers to cover this issue correctly, or for use in FAI design.

Brian April 25, 2011 at 4:21 am

“Y is non-morally good for person X if Y is what X+ (an ideally rational and fully informed version of person X) would want the non-idealized X to want in X’s circumstances.”

I don’t understand why Y is limited to being what X+ would want X to want, when X+ has to be theoretically achievable from X through a set of circumstances, arguments, etc.

What is lost if X+ is replaced with Z, where Z wants to fulfill X’s desires (at every meta-level), where Z differs from X in that Z is omniscient as to consequences of actions? Why insist on a link between X and the ideal wanter (X+ or Z) where X can theoretically become X+ only according to a set of transformations limited by what X approves of?

And if you’re going to tell me, “X must be transformable into X+ subject only to limitations of what X+ approves of, not what foolish X approves of”, I know of an X+ who sees no important difference between having reached his current philosophical positions by having X ground up into individual molecules and assembled into a fully made X+, rather than being reasoned from being X into being X+.

Well, maybe I don’t personally know such an X+. But I *do* know of an X who sees little difference between persuading him (sufficiently far) away from certain details of his current metaphysical outlook and torturing him away from it.

Depending on always having the logical possibility of being able to go from X to X+ to find Y seems hopeless to me…and unnecessary.

Bill Snedden April 25, 2011 at 8:15 am

@cl:

So at the very least, the atheist seems forced to concede that IF an omniscient, omnibenevolent God exists, THEN that God is the best possible source of moral knowledge. This seems to flow perfectly from all that you’ve offered from Railton thus far.

Hmmm…the minimal acceptable definition of “omniscient” is something like “knowing everything that can be known”, so by definition IF an omniscient being exists then of course it would have perfect moral knowledge. Not much of a concession to admit something that necessarily must be true by definition…

Of course it means little. “Perfect moral knowledge” doesn’t establish authority. Neither does “omnibenevolence”. “God” can certainly pull the weight of ontological grounding of values, but as for deontology or epistemology, not so much…

I mean, what does Railton’s theory really accomplish if the locus is purely hypothetical? IOW, how can we put it to practice? It’s not like we can ask questions of this ideally rational and fully informed version of ourselves.

It seems to me that if we admit that such a locus can serve as an ontological foundation, then we can explore the normative implications via metaphysics/induction. It’s not perfect, but then nothing in epistemology is or ever can be.

cl April 25, 2011 at 4:36 pm

Bill Snedden,

Of course it means little.

In your opinion, perhaps. In mine, it means a lot.

It seems to me that if we admit that such a locus can serve as an ontological foundation, then we can explore the normative implications via metaphysics/induction.

How so?

Bill Snedden April 26, 2011 at 5:43 am

@cl:

In your opinion, perhaps. In mine, it means a lot.

Well, you can’t get from “god exists” to “his commands create moral obligations” without a good deal of additional argumentation, even should one grant the “tri-omnis”. So we can’t ground normativity deontologically without some additional work. And just admitting that an omniscient, omnibenevolent being would be a source of perfect moral knowledge doesn’t necessarily mean a lot either: we have some of the same epistemic issues that we have with the “ideally rational and fully informed version of ourselves” (which actually sounds a bit like a theistic deity, if you ask me).

IOW, “omniscient & omnibenevolent” cannot in and of themselves ground normativity, so I don’t think granting them means very much in that context.

How so?

Ummm…by looking and seeing? Empirical investigation? Philosophical inquiry? If we posit this “ideally rational and fully informed version” of a human being as the grounding of value, we should be able to investigate, scientifically (induction) and philosophically (metaphysics), what it would mean. IOW, what does it mean to be a rational moral agent (this “ideally rational and fully informed version”) and how should such creatures live their lives? Any such inquiry is bound to be fraught with epistemic issues, but that’s going to be the case with any inquiry into these types of “big” questions.

cl April 27, 2011 at 9:41 am

IOW, “omniscient & omnibenevolent” cannot in and of themselves ground normativity…

Why not? What can?

Ummm…by looking and seeing? Empirical investigation? Philosophical inquiry?

It seems you think I’m stupid, but I assure you I ask the question in earnest: How so? How might we perform “empirical investigation” or “philosophical inquiry” on a hypothetical, fully informed, fully rational version of ourselves? Won’t our investigations be necessarily restricted to our non-fully informed, non-fully rational capacities?

Any such inquiry is bound to be fraught with epistemic issues, but that’s going to be the case with any inquiry into these types of “big” questions.

There, I agree.
