Machine Ethics for America’s Robot Army

by Luke Muehlhauser on March 16, 2011 in Ethics, Machine Ethics

Above is the thrilling and scary documentary Remote Control War, first broadcast in February 2011.

Highlights:

  • The Predator drones in Afghanistan are remote-controlled from Nevada. Operators give them directions like “fly there” and “bomb that.”
  • America has more than 7,000 unmanned aerial vehicles, and more than 12,000 unmanned ground vehicles, in operation.
  • Forty-three other countries are building or buying military robots.
  • Congress has declared that by 2025, a third of America’s ground systems must be unmanned (robotic).
  • Currently, there is one operator per robot. Many robots are autonomous in movement – the Predator drones fly themselves in changing weather conditions – but no robot today makes lethal decisions on its own.
  • That won’t be the case for long. Having humans in the loop slows things down, and it’s expensive. Soon, robots will be making lethal decisions on their own. Contrary to experts’ predictions, the U.S. military claims there will “always be a human in the loop.” But it also paid Ronald Arkin for several years to figure out how to make fully autonomous robots follow the Laws of War and the Rules of Engagement. Arkin published his report, Governing Lethal Behavior in Autonomous Robots, in 2009.
  • By 2030, the U.S. Air Force plans to have swarms of bird-sized flying robots that operate semi-autonomously for weeks at a time.

Also see the bestselling book Wired for War and the Popular Mechanics article America’s Robot Army.

Here’s the point:

Semi-autonomous battlefield robots are already in use. Fully autonomous battlefield robots are at most a few decades away.

When fully autonomous warbots are deployed, they will need to be programmed with a machine ethics: a program to ensure ethical behavior. So we need to start figuring this stuff out now. As Colin Allen said, we don’t want to get to the point where we say, “Oops, we should have started thinking this through 20 years ago.”
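
To make that concrete, here is a minimal sketch, assuming a toy Python model of a targeting decision, of what “a program to ensure ethical behavior” could look like at its crudest: a few hard-coded rules of engagement that must all pass before the machine may fire, loosely in the spirit of the constraint-checking Arkin proposes. Every class, field, and threshold below is hypothetical and purely illustrative; a real system would be enormously more complicated.

```python
# Toy sketch of an "explicit" machine-ethics check: hard-coded rules of
# engagement evaluated before any lethal action. All names, fields, and
# thresholds are hypothetical, invented here for illustration only.

from dataclasses import dataclass

@dataclass
class Target:
    is_combatant: bool                 # classified as a combatant?
    classification_confidence: float   # 0.0 to 1.0
    near_protected_site: bool          # e.g. hospital, school, place of worship
    expected_collateral: int           # estimated civilian casualties

def may_engage(target: Target) -> bool:
    """Return True only if every encoded constraint is satisfied.
    The point: the constraints live in the machine, not in a human operator."""
    if not target.is_combatant:
        return False                   # discrimination: never target noncombatants
    if target.classification_confidence < 0.95:
        return False                   # uncertainty: withhold fire when unsure
    if target.near_protected_site:
        return False                   # protected sites are off-limits
    if target.expected_collateral > 0:
        return False                   # proportionality, crudely: no expected civilian harm
    return True

# Example: an uncertain classification means the robot must hold fire.
print(may_engage(Target(True, 0.80, False, 0)))  # False
```

Even a toy like this makes the problem visible: each rule and threshold is an ethical judgment somebody has to encode in advance, which is exactly why the thinking needs to start now rather than after the robots are deployed.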

All my talk about machine ethics for superintelligent machines has alienated some readers, because superintelligence is perhaps a century away, and some readers think it will never come. But the above points should persuade anyone that while machine ethics is the future, we also need machine ethics today.



Martin Freedman March 16, 2011 at 4:09 am

Luke,

When you started on AI and ethics (this was my original background, and what got me into desirism in the first place), I wanted to point out that in machine ethics it is drones, smart missiles, etc. that are the real – or at least current – issue, but I was too busy to do so. No surprise you got there pretty quickly anyway.

Hope you have a great time in Berkeley.


Brian March 16, 2011 at 4:56 am

“America has more than 7,000 unmanned arial vehicles…”
Seems prudent. Helvetica ones might not fire.


Alexander Kruel March 16, 2011 at 5:44 am

Fully autonomous battlefield robots are at most a few decades away.

Good luck. This is always presented as something definite when it should be a possibility to take into account. Humans have been able to land on the moon, yet space colonies have never been “at most a few decades away“.


Brian March 16, 2011 at 7:14 am

Alex I’m not even sure why someone would think fully autonomous battlefield robots desirable. Perhaps Luke means robots that autonomously decide whether or not to kill? If so, he should say so. However, we have probably not reached the point where a machine is better than a machine-human team at chess or Jeopardy, and many people can beat the best computers at the game of Go. Even after autonomous robots are possible, we will have a very long way to go until they are optimal, such that organizations that care about winning will predominantly choose them.


Bill Maher March 16, 2011 at 7:43 am

Will they let gay robots join?


Luke Muehlhauser March 16, 2011 at 7:54 am

Thanks, Brian. Fixed.


Alexander Kruel March 16, 2011 at 7:55 am

Alex I’m not even sure why someone would think fully autonomous battlefield robots desirable. Perhaps Luke means robots that autonomously decide whether or not to kill?

I hope he means robots that make decisions complex enough to take internal ethical considerations into account. If we mean some autonomous machine gun that is able to target and shoot heat sources, then we’re not talking about ‘machine ethics’, since any ethical considerations are external to the machine. Either you employ such a machine and risk that it kills uninvolved humans, or you don’t. You might build in some kind of safeguard, but that has nothing to do with ‘machine ethics’. Ethical behavior internal to the machine only comes into play once its scope of action becomes more than merely complicated, namely complex. That is, ‘machine ethics’ is adequate terminology if the machine is a decision maker, an agent that a human might choose to employ but whose instrumental decisions are unpredictable. In the case of a heat-sensing machine gun, its scope of action is foreseeable for a human employer, so the burden of ethical considerations is with the human. In the case of an artificial military agent, the initial decision to employ it will be the only ethical consideration that a human commander makes directly, all further decisions being unpredictable and therefore necessarily internal to the artificial agent, handed down only indirectly by its human creators in the form of some sort of ethics module. I just don’t see how much sense it makes to refer to ‘ethics’ when one is talking about machines below a certain threshold of sophistication. Is a rag doll equipped with ‘emotions’ if it smiles? Seems misleading…


Luke Muehlhauser March 16, 2011 at 7:55 am

Alexander,

The technological and financial difference between moon landing and space colonies is vastly larger than the technological and financial difference between current warfare robots and the warfare robots I discuss in this post.


Alexander Kruel March 16, 2011 at 8:17 am

Alexander, The technological and financial difference between moon landing and space colonies is vastly larger than the technological and financial difference between current warfare robots and the warfare robots I discuss in this post.

I don’t agree in the case of fully autonomous battlefield robots. IBM’s Watson and the like are the equivalent of moon rockets: more sophisticated fireworks. Yet fully autonomous battlefield robots are more like space colonization. Many people are just excited by some superficially fast progress. All the progress, however, is being outweighed by the difficulty of further understanding and improvement. Take DNA sequencing: the progress is breathtaking, but the complexity and sheer amount of data outweigh it, and that progress will slow down before it can be leveraged for problem solving and for making sense of the new data. I don’t see any particularly good reason to believe that we’ll be able to solve problem-solving itself by brute force. It is a possibility, nothing more.


Luke Muehlhauser March 16, 2011 at 8:51 am

Alexander,

What are you thinking of as fully autonomous? By fully autonomous I just mean that the robots will make lethal decisions on their own, not that they possess human-level intelligence.


Alexander Kruel March 16, 2011 at 9:09 am

Alexander, What are you thinking of as fully autonomous? By fully autonomous I just mean that the robots will make lethal decisions on their own, not that they possess human-level intelligence.

Is a thermostat a fully autonomous decision maker? Is a missile that goes off course and self-destructs acting ethically, or an achievement in the field of ‘machine ethics’?

Here is what Wikipedia (http://en.wikipedia.org/wiki/Autonomy) has to say about ‘Autonomy’:

Autonomy (Ancient Greek: αὐτονομία autonomia from αὐτόνομος autonomos from αὐτο- auto- “self” + νόμος nomos, “law” “one who gives oneself their own law”) is a concept found in moral, political, and bioethical philosophy. Within these contexts, it refers to the capacity of a rational individual to make an informed, un-coerced decision. In moral and political philosophy, autonomy is often used as the basis for determining moral responsibility for one’s actions.

The possible achievement of such artificial agents is closer to space colonization than moon landing.


Curt March 16, 2011 at 9:11 am

How is success in this endeavor supposed to be measured?
I have a possible suggestion for temporary success.
That everyone working on creating the intelligence software that would be used for military purposes be crucified.
That would be a simple case of killing one to save thousands. I do not think that it is at all likely that such technology would be used for the benefit of mankind. It would simply be a force multiplier for the reptiles in human bodies that currently rule the planet.
I do not hold out much hope of preventing this train of insanity. So this would be how I would measure success in the long term. If the technology reaches a point that machines are making decisions not only about who should live and who should die but who should or should not be arrested, then clearly an outcome in which machines destroy humanity would be a well-deserved successful outcome. On an individual basis each human death might not be deserved. But on a collective basis it would be more than deserved.
Codependent Cannibal Curt


Curt March 16, 2011 at 9:14 am

By the way, is there really such a thing as “machine” ethics or is there just ethics?


Luke Muehlhauser March 16, 2011 at 9:19 am

Alexander,

In my case, I’m just talking about what Jim Moor (2006) called ‘explicit ethical agents’, such that the rules for making ethical decisions (e.g. about when to shoot and not shoot) are coded into the machine rather than handled by a human operator. Whether such a machine is “morally responsible” is not what I’m talking about.

What’s your bet on when such agents will exist on the battlefield, after watching the above documentary?


Luke Muehlhauser March 16, 2011 at 9:20 am

Curt,

Machine ethics is a subfield of ethics having to do with the implementation of programmed ethical behavior in machines.


Alexander Kruel March 16, 2011 at 9:21 am

By the way, is there really such a thing as “machine” ethics or is there just ethics?

I have never read up on it but my guess is that ‘machine ethics’ or ‘friendly AI’ refers to the mathematically strict formalization of ethics as an imperative (or the necessity to derive it).


Alexander Kruel March 16, 2011 at 9:32 am

Alexander, In my case, I’m just talking about what Jim Moor (2006) called ‘explicit ethical agents’, such that the rules for making ethical decisions (e.g. about when to shoot and not shoot) are coded into the machine rather than handled by a human operator. Whether such a machine is “morally responsible” is not what I’m talking about. What’s your bet on when such agents will exist on the battlefield, after watching the above documentary?

I’m sorry, I don’t feel like watching a 45 min. documentary right now. When to shoot and not to shoot is what I have actually been talking about. Progress will be made in the detection of aggressive behavior, facial recognition and the like, yet I believe such an AI will be closer to a thermostat than an ‘explicit ethical agent’. As long as the behavior of the systems being employed is in principle predictable, the ethical burden is with the being that employed the system and is not an inherent quality of the system itself. There is a difference between a human being employing a killer drone equipped with advanced pattern recognition and giving birth to a human being that might grow up to become a mass murderer. If the system you are going to employ is vastly more complex than some automatic machine gun, then one should think about the implementation of ethical behavior. In any other case we are talking about safeguards. Sorry for nitpicking such semantics, but I feel that the talk about ‘machine ethics’ is misleading people to think that they could somehow hand over ethical considerations to crude automata.


Brian March 16, 2011 at 10:39 am

“IBM’s Watson and the like are the equivalent of moon rockets: more sophisticated fireworks. Yet fully autonomous battlefield robots are more like space colonization.”

It would be easy to reprogram Watson so that Watson*+Jennings would be much better at Jeopardy than either. Jennings can also do many things Watson can’t – in fact, Watson can only do one thing.

A fully autonomous robot division being better than an all human one is an *extremely* long ways away. A fully autonomous robot division being better than a hybrid one is *incomprehensibly* far away.

Many concerns are specific to one or the other. The remote pilot:UAV ratio decreasing? Happens every day. The UAV going entire missions in which it kills people without human interaction? Practically tomorrow. The UAVs going every mission without a protocol to immediately allow remote human supervision, when the machine algorithms believe it likely a human could better perform a task or make a decision? Not for hundreds of years.

Every GI has to be not only a military robot, but also a Watson. What’s the name of Mickey Mouse’s girlfriend?

Truly autonomous AI is a military parlor trick until it is superior to humans in basically every way. I imagine civilian general AI will be developed before the military develops general AI, because to win the best approach for the military is to integrate robots and computers with humans and let every component do the things they are best at.

Some people are concerned about a rogue, mutinous computer army. These people needn’t worry. Some people are concerned that a computer will make life and death decisions based only on heuristics programmed into it, without human authorization. This *is* the future. Some people are worried UAVs will be sent out that don’t need further authorization to kill yet won’t refer to humans when making decisions humans are obviously better at making, such as whether or not to blow up an inhabited building during a standoff when the UAV’s computer judges there is time for deliberation. This is a weird fear. The human superiority that fuels the objection is precisely why it won’t happen.

It reminds me of some arguments against utilitarianism:
Al: Under utilitarianism, you necessarily have to kill healthy people for their organs to keep others alive. That would obviously be a *terrible* society for *everybody* because of injustice and constant fear.
Bill: Umm…if it’s obviously terrible for everybody, maybe that’s why it wouldn’t have to be implemented?
Al: No. To maximize happiness, you have to do it.


JS Allen March 16, 2011 at 12:43 pm

We already have machines that make decisions to kill; they are called land mines. And there already exist a class of weapons that behave more autonomously than the typical land mine; automatically directing lethal force in the direction of perceived motion, and so on.

Historically, war technology has been considered useful if it effectively kills every living creature within a specified radius. Carpet bombs, mortars, mines, automatic machine gun turrets, etc. So a mountain-climbing robot with heat-seeking ability to hunt down people in caves would be ethically no different than a carpet bomb used on a village in the grassland. I imagine that war commanders will consider “intelligent” war robots to be ethical and useful, just so long as their radius of destruction can be contained by the commanders. In other words, no complicated “ethics” artificial intelligence will be pursued (and may even be regarded with mistrust).


Curt March 16, 2011 at 1:17 pm

The example of a guided missile blowing itself up when it is off course is a good example of A (as in one) success in military machine programming. Yet to me it seems absurd that anyone working for the US military industrial complex can be trusted to ethically program machines. The US MIC is after all obviously the most powerful force for evil in the world today. Qaddafi may be a threat to 7 million Libyans. Khamenei may be a threat to 70 million Iranians. But the US MIC and the political leadership that support it are a threat to 7 billion people on the planet, including the 300 million Americans. No other country or military on the planet is a threat to the United States. That any computer programmer would place his services at the disposal of such a continuing criminal enterprise points to a complete lack of ethical standards in the grand scheme of things. That such a programmer would write a program for a guided missile to blow itself up would mean that they are simply immoral killers and not Nazi immoral killers.
Obviously if we want ethical machine programming it would have to be written by ethical programmers.
Therefore I would propose that before anyone starts writing machine programs they draw ethical lines in the sand that they will not cross under threat of imprisonment. (I would of course understand about crossing any line when one is under the threat of torture.)
Covert Camahdian Curt


Curt March 16, 2011 at 1:38 pm

Another thing that makes me very uneasy with giving robots autonomy is the lack of accountability.
The world already has a big enough problem with this, as the world’s powerful politicians and CEOs are never held accountable for their misdeeds. Theoretically, though, things could change. If such a powerful person fell into the wrong hands, such as a group of a reborn Red Army Faction operating out of Wiesbaden or Cologne, and this group was angry enough with their prisoner, they could at least theoretically decide to torture them.
But if a machine makes a terrible decision, what can one do to dissuade other machines from doing the same thing? Hahahahaha, that is a joke, but maybe you do not get it so I might need to spell it out.
What would humans do to severely punish the machine? Take its screws out slowly and watch as it screams in pain? Of course some party pooper will say, “No silly, you just rewrite its program.” But what if the machine controls its own programming?
I am not sure that a machine ever could control its own programming, but if it could, it would be a whole new level of unaccountability. And that seems to be the goal of some computer programmers. If that level is ever reached, I would hope that all the people at that time have the sense to keep separate the ability of a machine to come to a conclusion and its ability to implement a decision. Yet based on the history of people that I have seen up to now on my short visit to this planet, I would be willing to bet that they are tempted to cross the line out of what some would call MILITARY NECESSITY.
Crimson Cardinal Curt


Garren March 16, 2011 at 2:15 pm

@Curt
“Another thing that makes me very uneasy with giving Robots autonomy is the lack of accountablity.”

Could always go the old fashioned route as in the video game Portal: “Well done, android. The Enrichment Center once again reminds you that Android Hell is a real place where you will be sent at the first sign of defiance.”


Curt March 16, 2011 at 3:57 pm

Sorry, Garren, I do not have a clue as to what you are talking about. I have only played a video game once or twice in my life, and that was a racing game.
I play Pasuer, Rummy, Hookem, Hearts, Spades, Uno, Sequence, Backgammon, and when I am with one of my Uncles I play Cribbage.
Classic Cardflaying Curt


MarkD March 16, 2011 at 8:34 pm

I’ve discussed this topic with representatives of various military commands over the past several years. There is a general belief that autonomous fire capability will never be allowed because the risk of harm to civilians/structures/friendly troops/attitudes towards the US/etc. outweighs any conceivable benefit from supporting such a capacity. The most likely path for measured autonomy is under specific circumstances, like fly-home behavior for drones that have lost communications. NASA has funded more effort in this domain because they face physical limits with their communications channels. We trust NASA will never need autonomous trigger-pulling.


Luke Muehlhauser March 16, 2011 at 9:53 pm

MarkD,

I hope those representatives of military commands are right, but I doubt it.


MarkD March 16, 2011 at 10:13 pm

Fair enough. An interesting question is under what circumstances would such a weapon system be deployed? Nuclear strike policy is relevant here: retaliatory civilian strikes are generally considered immoral because they don’t prevent first strikes, for instance, thus undermining mutually assured destruction or even less MAD strategies like deterrence.


cl March 17, 2011 at 10:43 am

How sad. “It’s too costly to have humans kill other humans. Let’s have robots kill other humans to save money!”

