Robots on the Radio: interviews with Arkin, Asaro, and Armstrong on warbots

In the first of a two-part program broadcast in England, Dr. Noel Sharkey interviews Dr. Ron Arkin, Dr. Peter Asaro, and me on his Sound of Science program.  Stream or download the interview from England here.  (Note: the audio begins with two minutes of station promotion.)  The interview series examines the ethical issues of using military robots permitted to apply lethal force on their own terms, the Laws of War and international law on discrimination, and the role of these robots in war. 

This first episode includes:

  • Dr. Ron Arkin, Regents' Professor, College of Computing, Georgia Tech, on some of the dangers facing us in the near future with robots that decide whom to kill. Professor Arkin discusses his work on developing an artificial conscience for a robot and some of the difficult ethical decisions that both soldiers and robots must make in war.
  • Dr. Peter Asaro, the exciting young philosopher from Rutgers University in New Jersey. Peter talks about a range of issues concerning the dangers of using autonomous robot weapons. He cautions us about the sci-fi future the military seems to be heading toward and how a robot army could take over a city. Interestingly, he makes the provocative claim that one of the first uses of insurgent tactics was by the early Americans against the British redcoats.
  • Matt Armstrong, an independent analyst specializing in public diplomacy and strategic communication, working in California. Matt writes a well-known blog called MountainRunner. On the program he discusses the "hearts and minds" issue, a term he dislikes, and the problems with having a robot as the "strategic corporal" of the future.

My segment begins around the 44-minute mark.  I don't want to comment in depth on the interviews now, but briefly, my views on the subject are grounded in public diplomacy, counterinsurgency doctrine, and civil-military relations. More specifically, I am looking at the informational effect of these systems; the need to build trust and demonstrate commitment among local populations; and the impact of the commodification of violence, and its reduced cost, on Congressional oversight and Executive decision-making, among other considerations (see more here).

This was my very first radio interview.  As such, there were a couple of significant points I didn't get to, but hopefully the essential points were captured.  Listening to my interview again, there are a few words and phrases I will avoid next time (does FM 3-24 really have "passages"??), along with other changes I would make.  Live and learn. 

The second episode will include interviews with Rear Admiral Chris Parry, Richard Moyes from Landmine Action, and military robotics specialists from NATO, the German and Swedish armies, and the French Defense Ministry.

An important correction: I unintentionally demoted Lt. Colonel Chris Hughes when I referred to him as a Captain in the Najaf example.

Also, a clarification from Ron after listening to my interview:

For future reference, though, I'd like to point out that I have never advocated that robots be used as prison guards. I only use Abu Ghraib as an illustration of the propensity for ethical violations by human beings. A system capable of independently monitoring human performance would be helpful, I'd suspect - but I agree completely that humans should not be removed.
I further advocate, as you do, that robots should *never* fully replace the presence of soldiers, but rather serve as organic assets beside them for very specialized missions such as room clearing, countersniper, and others as pointed out in my scenarios. These are also not intended (at least in my work) where active civilian populations are present, but only for full-out war (declared). The systems I am working on are for the next conflict (not the current one) whatever that may be - and also for the so-called "Army after next".

As Noel and Ron said, the more we talk about this in the open, the smarter we'll be in the deployment of robots. 

Your comments are appreciated. 

See also: