For centuries, moral philosophy was principally concerned with a particular species of Earth-based primate: Homo sapiens.
Recently, moral philosophy has spent more time considering non-human Earth-based animals.
Now, moral philosophy is beginning to consider the problem of moral agency in minds in general. The vast majority of possible mind designs will be implemented in machines, not in biological organisms whose mind designs are restricted by evolutionary accident and excruciatingly slow reproduction rates.
How can we design machine minds to play nice with other minds, machine or not? What kinds of minds have moral rights or can be morally responsible, and to what degree? These are the central problems of the field of machine ethics.
Within the next few centuries, machine minds will be more diverse, more plentiful, and more powerful than animal and human minds. Machine ethics is the future of ethics.
Here’s a smattering of the literature on machine ethics:
- Allen et al., “Prolegomena to any future artificial moral agent” (2000)
- Yudkowsky, “Coherent Extrapolated Volition” (2004)
- Arkoudas & Bringsjord, “Toward ethical robots via mechanized deontic logic” (2005)
- Cloos, “The Utilibot Project: An Autonomous Mobile Robot Based on Utilitarianism” (2005)
- McLaren, “Lessons in machine ethics from the perspective of two computational models of ethical reasoning” (2005)
- Allen et al., “Why Machine Ethics?” (2006)
- Powers, “Prospects for a Kantian Machine” (2006)
- Ganascia, “Ethical System Formalization using Non-Monotonic Logics” (2007)
- Wallach, “Implementing moral decision-making faculties in computers and robots” (2008)
- Wallach & Allen, Moral Machines: Teaching Robots Right from Wrong (2009)
- Sullins, “Artificial moral agency in technoethics” (2009)
- Wallach, “Robot minds and human ethics: the need for a comprehensive model of moral decision making” (2010)
My central research project is the problem of how to design artificial moral agents (AMAs) so that they act ethically. This is the problem of AMA design. I am particularly interested in how to ensure that a superintelligent artificial moral agent (SAMA) acts ethically. SAMA design is the subject of my book Ethics and Superintelligence.
{ 14 comments }
This blog definitely seems to be taking a shift to more exciting domains of inquiry. All of a sudden the name Common Sense Atheism doesn’t seem to fit! Mind you, if the website name changes to reflect your paradigm of research, you will have to give up that badass Stephen Roberts quote at the top of the page, and we can’t have that. ;)
mojo.rhythm
Luke actually addressed this change in focus here: http://commonsenseatheism.com/?p=14502, and you aren’t the first to be supportive of it.
----
Luke:
Are all of these publications beginner-friendly? Or are some written for academics of AI and moral philosophy?
Esteban R. (Formerly Steven R.)
Luke, I’m sure you’ll say more in your book, but what do you think are the realistic prospects of solving the problems of machine ethics such that SAMAs will not be the end of humanity?
DaVead
DaVead,
My current hunch is that if superintelligence arrives, we are almost certainly fucked. There are a trillion ways to get that problem wrong and very few ways to get it right, and we don’t yet know how to get it right. Moreover, if superintelligence arrives only once, then we have only one shot to program it correctly.
However, it’s not clear that we can simply delay superintelligence by whatever means necessary. There are many other impending global catastrophes, and developing safe superintelligence may be the only way to avoid them.
Luke Muehlhauser
Check out the Ethical Robot:
http://www.youtube.com/watch?v=ZLdvCDFriTQ
Nao
How can we design machine minds to play nice with other minds, machine or not? What kinds of minds have moral rights or can be morally responsible, and to what degree?
Hey, no problem. As I understand it, we just have to build their brains out of positronic circuits, and the ethics comes along with it.
Reginald Selkirk
My comment from last night got blocked :\
Anyway, Luke, are these papers beginner-friendly, or are some for academics of AI and moral philosophy only?
Esteban R. (Formerly Steven R.)
‘Why Machine Ethics?’ is probably the most beginner-friendly.
Luke Muehlhauser
Hi Luke,
I discovered your blog a few weeks ago and I have to say, reading your posts from the most recent to the oldest is a strange experience. I started out very impressed, but the more I read, the less impressed I am.
:-)
Furcas
Furcas,
I shall take that as a clever compliment. :)
Luke Muehlhauser
Machine ethics is a fascinating topic. I am thinking of working on a Master’s in the area. Your book Ethics and Superintelligence looks interesting. I’ve downloaded it and will let you know what I think.
Sean Welsh
Luke, which zero to three sources would you recommend as barking up the right trees?
Paul Torek
Paul Torek,
On machine ethics for superintelligence, ‘Coherent Extrapolated Volition’ is in the right direction, but badly needs to be rewritten.
Machine ethics for narrow applications is much easier. Arkin’s ‘Governing Lethal Behavior in Autonomous Robots’ is the right direction in machine ethics for near-future battlefield robots (which only need to adhere to the Laws of War and Rules of Engagement, without solving ethical theory first).
I don’t yet see much value in the others.
Luke Muehlhauser
Incidentally, I agree with your assessment of the literature. Arkin’s book seems promising for the near future. And I can’t recommend Moral Machines enough, though since it’s kind of a survey, its age might become a problem quickly.
Thom Blake