Finding good news in times of war is tough. So when I read an article in the Economist titled “Droning on: How to build ethical understanding into pilotless war planes,” detailing a new technology in development at the Georgia Institute of Technology’s School of Interactive Computing, I was excited.

The technology reviewed in the article, which the Georgia Tech researchers call Ethical Architecture, would be active in pilotless drone planes over battlefield conditions. These drones are operated remotely by specialists safely away from danger, sometimes as far removed from the field as Nevada. They are in many ways an improvement over traditionally piloted planes, with one important area of concern: their use has been implicated in missile attacks that have killed innocent people. The Economist cites the tragedy of 23 June 2009 in Makeen, Pakistan, where a missile strike carried out by US soldiers using at least two drone planes killed 80 people.

I phrased that last sentence in a very specific way for a reason. In common parlance, an article might simply say “drone attack kills civilians” and the reader gets a sense of “oh, one of those tiny little robot planes killed people.” This sentiment must be carefully avoided when writing on these crimes. (I use ‘crimes’ very deliberately. ‘Event’ or ‘tragedy’ does not fit what happened on 23 June 2009. An event is a catered meal, and a tragedy can be a lightning strike or a flood.)

On that date, mourners had gathered in a funeral procession and prayer gathering for two local men who had been killed earlier in the day by remote-controlled drones. US intelligence wrongly placed Baitullah Mahsud, leader of Tehrik-i-Taliban Pakistan (TTP), at the funeral, and that was reason enough for US soldiers to approve the firing of at least three missiles into a crowd of praying women and children. Mahsud was later killed in August 2009 by a missile fired from a remotely operated drone.

Context matters, and the larger picture here gives us a good idea of why the Ethical Architecture program for drones is important. Think of it: in this case, operators somewhere at their flight consoles, far away from the field, have been given intelligence that a high-ranking baddie is at the funeral. Multiple missiles are fired into a crowd in hopes of killing him. This action, I submit, is unethical. I invite arguments otherwise, but my conscience is sure. No AI here, no ‘robots run amok’. Just decisions made by many military authorities.

That is just one situation. There are others where the operator may not have a clear field of vision through their camera feed, or may be unable to keep a solid understanding of nearby buildings or civilians they could jeopardize by firing. This situation is probably common. These operators are called to function at very high levels of stress and attention, and in the heat of a split-second decision they may neglect to ‘see the bigger picture’.
This is where Ethical Architecture comes in.

As the Economist describes:
“The drone would initially be programmed to understand the effects of the blast of the weapon it is armed with. It would also be linked to both the Global Positioning System (which tells it where on the Earth’s surface the target is) and the Pentagon’s Global Information Grid, a vast database that contains, among many other things, the locations of buildings in military theatres and what is known about their current use.”

The drone would ‘learn’ in the sense that it could collect data from the blast to compare with its previous estimation. If it was wrong, it would update its understanding for the next situation.
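To make that loop concrete, here is a minimal sketch of what such a predict-observe-update cycle might look like. The class, field, and parameter names are my own hypothetical illustration, not anything taken from the actual Georgia Tech system.

```python
# A minimal sketch of the predict/observe/update loop described above.
# Class, field, and parameter names are hypothetical illustrations,
# not from the actual Georgia Tech system.

class BlastModel:
    def __init__(self, estimated_radius_m: float):
        # Current estimate of the weapon's effective blast radius, in metres.
        self.estimated_radius_m = estimated_radius_m

    def predict_radius(self) -> float:
        # What the drone expects the next blast to do.
        return self.estimated_radius_m

    def update(self, observed_radius_m: float, learning_rate: float = 0.5) -> None:
        # Compare the observed blast with the prediction and nudge the
        # estimate toward what actually happened.
        error = observed_radius_m - self.estimated_radius_m
        self.estimated_radius_m += learning_rate * error


model = BlastModel(estimated_radius_m=20.0)
predicted = model.predict_radius()        # 20.0
model.update(observed_radius_m=26.0)      # post-blast sensor reading (hypothetical)
print(predicted, model.estimated_radius_m)  # 20.0 23.0
```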

Now the cool thing: if an operator began a missile launch protocol, the drone could put on the brakes. If it saw hazards to nearby holy sites, or that a blast might damage a civilian building, it could act as a ‘safety’ and require a second human opinion. Of course, its measures could be turned off by the operator.
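Here is a similarly hedged sketch of that ‘safety’ behaviour: before a launch proceeds, check whether the predicted blast area overlaps any protected site and, if so, hold fire until a second human signs off, unless the operator overrides. Again, the function and data structure names are hypothetical, not from the real system.

```python
# A minimal sketch of the launch 'safety' described above: hold fire if the
# predicted blast area overlaps a protected site, unless a second human has
# signed off or the operator overrides. All names are hypothetical.

from dataclasses import dataclass
from math import hypot
from typing import List


@dataclass
class Site:
    name: str
    x_m: float        # position relative to the aim point, in metres
    y_m: float
    protected: bool   # e.g. mosque, hospital, civilian housing


def launch_permitted(blast_radius_m: float, nearby_sites: List[Site],
                     second_opinion: bool = False,
                     operator_override: bool = False) -> bool:
    """Return True if the launch may proceed under these rules."""
    if operator_override:
        # The Economist notes the operator can switch the safety off.
        return True
    at_risk = [s for s in nearby_sites
               if s.protected and hypot(s.x_m, s.y_m) <= blast_radius_m]
    if at_risk:
        # Hazard detected: require a second human opinion before firing.
        return second_opinion
    return True


sites = [Site("mosque", 15.0, 5.0, protected=True)]
print(launch_permitted(20.0, sites))                       # False: launch held
print(launch_permitted(20.0, sites, second_opinion=True))  # True: second sign-off given
```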

From the Economist we know that the drone is not ‘learning ethics’. Ethics is a human endeavor, as of right now, and it is the developers who are ethical. It is the people around the world who will not abide a single non-combatant death who are ethical. It is the desire to minimize violence and destruction and to respect cultures’ holy sites that is ethical. The drone itself just has a ‘wider lens’ than the operator. It can know the battlefield better and alert when the ‘trigger may be pulled’ too hastily.

Many contemporary firearms have safety switches of some fashion to prevent firing until the shooter is absolutely certain. This is just a multimillion-dollar update.

So let me now turn to Inglis-Arkell’s article:
Right off the bat, her title is wrong and misspelled: “A New Program Teaches Ethics to Robot Soliders [sic]”.
Are drones soldiers? That’s a bit of a semantics game. Are they like K-9 dogs, who are (rightly) given full credit as ‘police officers’? Or are they tools? I suggest the latter.
Is anyone teaching a robot ethics? Not really. Ethics involves cultural values, desire, relationship, accountability. This robot assesses buildings and geography and basically flashes a red light when estimated munitions damage would threaten areas outside the target zone.
It makes for a catchy banner, Inglis-Arkell, but you or your editor is stretching.

She starts off the article by talking about how the armed forces are seeking to create an unmanned front line, and asks:
“How do we do this while avoiding an Asimovian situation where our robots go crazy? And is that even possible?”
Disregard the sloppy writing that leaves it unclear whether the second question is asking if “crazy robots” are possible or if avoiding them is.
The problem is that ‘unmanned’, right now and for the near future, means “remotely controlled”. The issue at hand, remote drones and their improvement through Ethical Architecture, is the important one. Whether non-human intelligences will someday be involved in US military action is an interesting subject, but Inglis-Arkell is conflating two issues to the disservice of the one at hand.

I also feel there is a reductive quality to the way she speaks of ethics. She writes:
“The drone would then compare and contrast the expected consequences of its action with the actual consequences. If they didn’t match, it would then adjust its own behavior. The drone would learn ethics, just the way we do.”
Many people operate their lives on more than outcomes. I realize that there is a kind of interior checking that can occur, where intention, behavior, and desired versus actual outcomes are balanced out by an individual and called ‘ethics’, but I personally feel it is important to emphasize the relational and social dimension. A person must be vulnerable to be ethical. They must have ‘stock’ in others’ feelings and beliefs. Ethics requires explanation or excuse for one’s behavior. These are currently out of reach of the drone planes.

So I’m left feeling that Inglis-Arkell simply misunderstood the Economist. It may be that she saw the project was called Ethical Architecture and assumed that, rather than the project being born of an ethical desire to lessen innocent casualties on the battlefield, drone planes were now social creatures prone to regret, responsibility, and diachronic and conflicting agendas.

She quotes Noel Sharkey of the International Committee for Robot Arms Control:
“You could train it all you want, give it all the ethical rules in the world. If the input to it isn’t correct, it’s no good whatsoever, humans can be held accountable, machines can’t.”
As we saw above in the funeral bombing, humans in our military services (or the mercenary contractors that get too little attention) are not held all that accountable if they can approve missiles being fired into crowds without condemnation. This quote also muddies the issue and is a distraction. We know that a machine cannot be held accountable. No argument. It is the designers and operators that we should be concerned with.

Until we have human-level AI, we must keep our focus where it deserves to be: on the human element.
And as we hold each drone operator and their superiors to the highest accountability, so must we pressure our government leaders to demilitarize and demand of them an emphasis on non-violent strategies in world relations.

Inglis-Arkell closes her article by saying she is doubtful that human-level AI will ever occur. It will.
I just hope that when our newly created equals arrive we can present to them a world without war to enjoy with us.

Here’s Inglis-Arkell’s article:
http://io9.com/5510275/a-new-program-teaches-ethics-to-robot-soliders?skyline=true&s=i

The original Economist article:
http://www.economist.com/science-technology/displaystory.cfm?story_id=15814399

Here’s the homepage of Ronald C. Arkin of Georgia Tech, designer of Ethical Architecture:
http://www.cc.gatech.edu/aimosaic/faculty/arkin/
