Who’s Who in Machine Ethics

by Luke Muehlhauser on March 15, 2011 in Ethics,Machine Ethics

The field of machine ethics studies how to design machines so that they behave ethically.

It’s a relatively new field, and is called by many names: “machine ethics” (Anderson and Anderson 2006; McLaren 2005; Powers 2005; Honarvar and Ghasem-Aghaee 2009), “machine morality” (Wallach et al. 2008), “artificial morality” (Danielson 1992; Floridi 1999), “computational ethics” (Allen 2002), “computational metaethics” (Lokhorst 2011), “friendly AI” (Yudkowsky 2001b), and “robo-ethics” or “robot ethics” (Capurro et al. 2006; Sawyer 2007).

Sometimes the questions “Can a machine be a genuine moral agent?” and “Can a machine be morally responsible?” are considered part of this field, but on this page I am concerned only with those who research the question of how to implement a moral code in a machine.

On this page, I list some of the leading researchers investigating that question. The list is far from complete, and will be updated as time passes. I’ve skipped people who have only one or two articles on how to implement ethics in a machine (for example, Gert-Jan Lokhorst).

Colin Allen [Indiana University]
Allen was lead author of “Prolegomena to any Future Artificial Moral Agent” (2000), which introduced the term artificial moral agent and the Moral Turing Test, and laid out the basic prospects for a variety of approaches to machine ethics: deontological, utilitarian, virtue-based, associative-learning, evolutionary, and emotion-based. He expanded on this work in several papers, and especially in his book with Wendell Wallach, Moral Machines.

Ronald Arkin [Georgia Institute of Technology]
Funded by the Pentagon, Arkin produced Governing Lethal Behavior in Autonomous Systems, which outlines a system for ensuring that the Laws of War and the Rules of Engagement are followed by battlefield robots.
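Arkin’s architecture is often described as an “ethical governor”: a filter that sits between the planner and the actuators and suppresses any proposed action that violates an encoded constraint. As a rough illustration only (the labels, fields, and rules below are invented for this sketch, not taken from Arkin’s system), such a filter might look like:

```python
# Toy sketch of a constraint-based "ethical governor": every proposed
# action is checked against encoded rules before it may be executed.
# All field names and rules here are invented for illustration.

FORBIDDEN_TARGETS = {"civilian", "medic", "surrendering combatant"}

def permitted(action):
    """Allow a lethal action only against an authorized combatant target."""
    if action["lethal"]:
        return (action["target"] not in FORBIDDEN_TARGETS
                and action["authorized"])
    return True  # non-lethal actions pass through unchanged

def govern(proposed):
    """Suppress any proposed action that violates a constraint."""
    return [a for a in proposed if permitted(a)]

proposed = [
    {"name": "engage",  "lethal": True,  "target": "combatant", "authorized": True},
    {"name": "engage",  "lethal": True,  "target": "civilian",  "authorized": True},
    {"name": "observe", "lethal": False, "target": "civilian",  "authorized": False},
]
print([a["name"] for a in govern(proposed)])  # the civilian engagement is suppressed
```

The point of the design is that the constraints are checked independently of whatever objective the planner is optimizing, so a mission goal can never override an encoded prohibition.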

Selmer Bringsjord [Rensselaer Polytechnic Institute]
As director of the Rensselaer AI & Reasoning Laboratory, Bringsjord and his colleagues have published dozens of papers on logic-based AI systems and machine ethics. In 2006 he gave a detailed plan for an ethical robot using deontic logic. In recent years he has outlined a plan for machine ethics using Piagetian category theory and even divine command ethics.

Michael and Susan Leigh Anderson [Michael: University of Hartford. Susan: University of Connecticut]
The couple almost always work as a pair. “Machine ethics” is their favored term, and it is their voluminous work on the subject that caused “machine ethics” to predominate over other terms for the field: see especially the special issue of IEEE Intelligent Systems on machine ethics they edited (volume 21, issue 4), and their edited volume Machine Ethics for Cambridge University Press. Their first ethical program was Jeremy, a hedonic act utilitarian. But the Andersons quickly moved on to an approach that uses W.D. Ross’ prima facie duties approach to morality, using case-based learning to resolve conflicts between duties. This was the moral system used in MedEthEx, a program that gives moral advice concerning available medical treatments, and in EthEl, an actual moral agent that uses the Andersons’ moral program to make action decisions. A quick summary of their work is available in “Robot Be Good” (2010) for Scientific American.
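A hedonic act utilitarian like Jeremy scores each available action by the total net pleasure it is expected to produce across everyone affected, then recommends the highest-scoring action. A minimal sketch of that kind of calculation (the concrete numbers and field names are invented for illustration, not drawn from Jeremy’s actual code):

```python
# Minimal sketch of a hedonic act-utilitarian decision procedure in the
# spirit of the Andersons' Jeremy: score each action by summed expected
# net pleasure over all affected persons, then pick the best.
# The concrete numbers and field names are invented for illustration.

def expected_utility(action):
    """Sum intensity * duration * probability over every affected person."""
    return sum(e["intensity"] * e["duration"] * e["probability"]
               for e in action["effects"])

def best_action(actions):
    return max(actions, key=expected_utility)

actions = [
    {"name": "treat", "effects": [
        {"intensity": 2,  "duration": 1.0, "probability": 0.9},   # patient benefits
        {"intensity": -1, "duration": 0.5, "probability": 1.0}]}, # side effects
    {"name": "wait", "effects": [
        {"intensity": -1, "duration": 1.0, "probability": 0.8}]}, # condition worsens
]
print(best_action(actions)["name"])  # "treat" scores 1.3 vs. -0.8 for "wait"
```

The Andersons’ later prima facie duties approach replaces this single utility sum with several potentially conflicting duty scores, with case-based learning supplying the rule for trading them off.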

Peter Danielson [University of British Columbia]
Danielson produced the earliest major study in machine ethics, Artificial Morality: Virtuous Robots for Virtual Games. His work has continued to focus on ethical behavior by machines in contexts of play.

James Gips [Boston College]
Gips’ “Towards the Ethical Robot” (1995) was one of the first works in machine ethics. Gips has since suggested that building an ethical robot should be named a Grand Challenge.

J. Storrs Hall [independent]
Hall wrote “Ethics for Machines” in 2000, and followed it up with Beyond AI (2007), the first full-length book on machine ethics since Danielson’s very early study.

James Moor [Dartmouth College]
Moor wrote of “computable ethics” in 1995, and his paper “The Nature, Importance, and Difficulty of Machine Ethics” (2006) contains a widely-used distinction between ethical impact agents, implicit ethical agents, explicit ethical agents, and full ethical agents.

Thomas Powers [University of Delaware]
Powers has written several papers in machine ethics, most centrally “Prospects for a Kantian Machine” (2006).

Wendell Wallach [Yale University]
Wallach is a frequent co-author with Colin Allen, for example on Moral Machines. He is probably the best source of survey articles on machine ethics. Recently, he has worked (with Allen) toward implementing machine ethics in Stan Franklin’s artificial general intelligence project, LIDA. See “A Conceptual and Computational Model of Moral Decision-Making in Humans and Artificial Agents” (2010).

Eliezer Yudkowsky [Machine Intelligence Research Institute]
Yudkowsky was the first to write in detail about the challenges of designing “Friendly AI” that would work not just in narrow-purpose robots but in superintelligent machines that could determine the future of the human race: see Creating Friendly AI (2001). He proposes we design a “seed AI” to figure out the “coherent extrapolated volition” of mankind: “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.” The seed AI would then program the first superintelligent machine to maximize the coherent extrapolated volition of mankind.


Comments

Pablo Stafforini March 15, 2011 at 4:02 pm

You missed Nick Bostrom and his recent paper “The Ethics of Artificial Intelligence”, co-authored with Eliezer Yudkowsky.

Luke Muehlhauser March 15, 2011 at 5:01 pm

Yeah, I’ll keep adding people as they keep publishing. I think Bostrom is actually working on a book on the subject.

