Machine Ethics is the Future

by Luke Muehlhauser on March 4, 2011 in Ethics,Machine Ethics,Resources

For centuries, moral philosophy was principally concerned with a particular species of Earth-based primate: Homo sapiens.

Recently, moral philosophy has spent more time considering non-human Earth-based animals.

Now, morality is beginning to consider the problem of moral agency in minds in general. The vast majority of possible mind designs will be implemented in machines, not in biological organisms whose mind designs are restricted by evolutionary accident and excruciatingly slow reproduction rates.

How can we design machine minds to play nice with other minds, machine or not? What kinds of minds have moral rights or can be morally responsible, and to what degree? These are the central problems of the field of machine ethics.

Within the next few centuries, machine minds will be more diverse, more plentiful, and more powerful than animal and human minds. Machine ethics is the future of ethics.

Here’s a smattering of the literature on machine ethics:

My central research project is the problem of how to design artificial moral agents (AMAs) so that they act ethically. This is the problem of AMA design. I am particularly interested in how to ensure that a superintelligent artificial moral agent (SAMA) acts ethically. SAMA design is the subject of my book Ethics and Superintelligence.


mojo.rhythm March 4, 2011 at 10:35 pm

This blog definitely seems to be taking a shift to more exciting domains of inquiry. All of a sudden the name Common Sense Atheism doesn’t seem to fit! Mind you, if the website name changes to reflect your paradigm of research you will have to give up that bad ass Stephen Roberts quote at the top of the page, and we can’t have that. ;)


Esteban R. (Formerly Steven R.) March 4, 2011 at 10:59 pm

This blog definitely seems to be taking a shift to more exciting domains of inquiry. All of a sudden the name Common Sense Atheism doesn’t seem to fit! Mind you, if the website name changes to reflect your paradigm of research you will have to give up that bad ass Stephen Roberts quote at the top of the page, and we can’t have that. ;)

Luke actually addressed this change in focus here: http://commonsenseatheism.com/?p=14502, and you aren’t the first to be supportive of it.
—-

Luke:

Are all of these publications beginner-friendly? Or are some written by and for academics of AI and moral philosophy?


DaVead March 4, 2011 at 11:15 pm

Luke, I’m sure you’ll say more in your book, but what do you think are the realistic prospects of solving the problems of machine ethics such that SAMAs will not be the end of humanity?


Luke Muehlhauser March 4, 2011 at 11:23 pm

DaVead,

My current hunch is that if superintelligence arrives, we are almost certainly fucked. There are a trillion ways to get that problem wrong, and very few ways to get that right, and we don’t yet know how to get it right. Moreover, if superintelligence only arrives once, then we only have one shot to program it correctly the first time. But the people with all the power and the money to develop superintelligence will probably not be moral saints with the best interests of humanity in mind.

However, it’s not clear that we should simply delay superintelligence by whatever means necessary. There are many other impending global catastrophes, and developing safe superintelligence may be the only way to avoid them.


Nao March 5, 2011 at 5:21 am

Check out the Ethical Robot:
http://www.youtube.com/watch?v=ZLdvCDFriTQ


Reginald Selkirk March 5, 2011 at 8:29 am

How can we design machine minds to play nice with other minds, machine or not? What kinds of minds have moral rights or can be morally responsible, and to what degree?

Hey no problem. As I understand it, we just have to build their brains out of positronic circuits, and the ethics comes along with it.


Esteban R. (Formerly Steven R.) March 5, 2011 at 11:58 am

My comment from last night got blocked :\

Anyway, Luke, are these papers beginner-friendly, or are some only for academics of AI and moral philosophy?


Luke Muehlhauser March 5, 2011 at 2:51 pm

‘Why Machine Ethics’ is probably the most beginner-friendly.


Furcas March 5, 2011 at 10:17 pm

Hi Luke,

I discovered your blog a few weeks ago and I have to say, reading your posts from the most recent to the oldest is a strange experience. I started out very impressed, but the more I read the less impressed I am.

:-)


Luke Muehlhauser March 5, 2011 at 11:44 pm

Furcas,

I shall take that as a clever compliment. :)


Sean Welsh March 10, 2011 at 11:38 pm

Machine ethics is a fascinating topic. I am thinking of working on a Master’s in the area. Your book on Ethics and Superintelligence looks interesting. Have downloaded it and will let you know what I think.


Paul Torek April 5, 2011 at 6:43 pm

Luke, which zero to three sources would you recommend as barking up the right trees?


Luke Muehlhauser April 6, 2011 at 1:33 pm

Paul Torek,

On machine ethics for superintelligence, ‘Coherent Extrapolated Volition’ is in the right direction, but badly needs to be rewritten.

Machine ethics for narrow applications is much easier. ‘Governing Lethal Behavior in Autonomous Robots’ is the right direction in machine ethics for near-future battlefield robots (which only need to adhere to the Laws of War and Rules of Engagement, without solving ethical theory first).

I don’t see much value in all the others, yet.


Thom Blake April 21, 2011 at 1:52 pm

Incidentally, I agree with your assessment of the literature. Arkin’s book seems promising for the near future. Still, I can’t recommend Moral Machines highly enough, even though, as kind of a survey, its age might become a problem quickly.

