In an earlier post, I explained Peter Railton’s moral reductionism by summarizing some of chapter 9 in Miller’s Introduction to Contemporary Metaethics. Now, I consider objections to Railton’s theory.
David Wiggins has attempted to attack Railton’s moral reductionism with a new variation of Moore’s Open Question Argument, but this attack fails (see Miller’s chapter). Instead, I’ll consider some objections (raised by David Sobel) to the ‘full-information’ analysis of non-moral goodness upon which Railton’s moral reductionism depends.
Let’s review Railton’s ‘full-information’ analysis of non-moral goodness. His reforming definition runs:

Y is non-morally good for person X if and only if X+ (an ideally rational and fully informed version of X) would want the non-idealized X to want Y in X’s circumstances.
For example, consider Lonnie, who feels miserable and lethargic in a foreign country. He comes to desire milk, but unbeknownst to Lonnie his lethargy is due to the fact that he is dehydrated, and drinking milk will only make the problem worse. Of course, Lonnie+ would want Lonnie to desire water rather than milk, and so water but not milk is non-morally good for Lonnie.
One problem with this account, says Sobel, is that first-hand experience of a kind of life is often required to understand what it is like to lead that kind of life.
To see how this poses a problem for Railton’s account of non-moral goodness, consider how person X might acquire the information required to become the idealized X+ and thus have certain desires about what X should want. One way is to acquire this information serially, Sobel suggests:
Our ideal self is expected to achieve full information by acquiring firsthand knowledge of what one of the lives we could live would be like, retaining this knowledge, and moving on to experience the next kind of life we could lead.
The worry is that Lonnie might not take Lonnie+’s advice seriously because Lonnie+’s knowledge of alternative lives has corrupted Lonnie+’s view of the type of life Lonnie is living. Thus, perhaps Railton’s full-information account of non-moral goodness can’t play the normative role he wants it to.
This may sound abstract, so consider Sobel’s example of “an Amish person who does not know what other options society holds for her”:
The experience of such a person could differ significantly from the experience of the same person who did have knowledge of many other options that society offers… to be able to claim that one knows what it is like to lead such a life one must experience what it would be like to be in those shoes (explicitly not what it would be like to be in those shoes with the accumulated knowledge of what it would be like to have lived a multitude of alternative sorts of lives). Attempting to give the idealized agent direct experience with what it would be like to be such an Amish person, while this agent has the knowledge of what it would be like to live many significantly different sorts of lives, will in many cases be impossible.
So the serial method of becoming X+ faces problems. Sobel suggests an alternative: the amnesia method:
The agent must have an experience of what some life would be like, then forget this and be ready to learn what some other life would be like without the latter process being affected by its position in the series. Then at the end of the learning and forgetting process we would have to remove (serially or all at once) each instance of amnesia while, on some views, adding factual information and immunity to blunders of instrumental rationality somewhere along the way.
Sobel raises four problems with the amnesia method, though Miller believes all of them can be overcome. (See Miller’s chapter.)
There are other proposed problems with full-information accounts of non-moral goodness, but I will not consider them here. Those who study the Friendly AI problem may notice the relevance of these objections to Yudkowsky’s proposal for Coherent Extrapolated Volition.