Implementation of ethical controls on robots

Dr. Ron Arkin, who runs the Mobile Robot Lab at the Georgia Institute of Technology, has released his report "Governing Lethal Behavior".

This article provides the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement. It is based upon extensions to existing deliberative/reactive autonomous robotic architectures, and includes recommendations for (1) post facto suppression of unethical behavior, (2) behavioral design that incorporates ethical constraints from the onset, (3) the use of affective functions as an adaptive component in the event of unethical action, and (4) a mechanism in support of identifying and advising operators regarding the ultimate responsibility for the deployment of such a system.
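To make the abstract's architecture a little more concrete, here is a minimal sketch, in Python, of what recommendation (1), post facto suppression of an unethical action by an "ethical governor" that checks hard constraints before a proposed lethal action executes, might look like. This is my own illustration of the idea, not Arkin's implementation; every name and threshold below is assumed for the example.

    # A minimal sketch, not Arkin's design: a governor that suppresses a proposed
    # lethal action unless every constraint (stand-ins for Laws of War / Rules of
    # Engagement checks) permits it. All names and values are illustrative.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class ProposedAction:
        lethal: bool
        target_is_combatant: bool
        collateral_estimate: float  # assumed 0.0-1.0 score from perception

    @dataclass
    class EthicalGovernor:
        # Each constraint returns True if the action is permitted under that rule.
        constraints: List[Callable[[ProposedAction], bool]] = field(default_factory=list)

        def evaluate(self, action: ProposedAction) -> bool:
            """Allow the action only if every constraint permits it."""
            return all(rule(action) for rule in self.constraints)

    # Illustrative stand-ins for discrimination and proportionality constraints.
    def discrimination(action: ProposedAction) -> bool:
        return (not action.lethal) or action.target_is_combatant

    def proportionality(action: ProposedAction) -> bool:
        return (not action.lethal) or action.collateral_estimate < 0.2

    governor = EthicalGovernor(constraints=[discrimination, proportionality])
    strike = ProposedAction(lethal=True, target_is_combatant=True, collateral_estimate=0.5)
    print(governor.evaluate(strike))  # False: the proportionality check suppresses the action

The report's other recommendations (ethical constraints built in from the outset, affective adaptation after an unethical act, and a responsibility advisor for operators) would sit around and above a gate like this one.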

Ron and I have discussed his report and have agreed to disagree. I encourage you to read the report if you're at all interested in this stuff.

Our disagreement mostly hinges on his mid-19th to mid-20th Century view of war, i.e., Lawfare, or industrial-age warfare based on rules of war. To Ron, justifications matter and there is time for them to be discussed. To me, perceptions matter more than facts, and an engagement model based on the Laws of War might actually create too permissive an environment. The era of Lawfare has passed.

To me, the fungibility of force decreases as the asymmetry of perception management increases (some might call that "controlling the narrative").

For more on our points of disagreement, see this post.

If you participated in my survey on the subject, which I will be expanding, you'll see I take a very different approach, one based on 21st Century Struggles for Minds and Wills, where facts matter little, if at all, and perceptions can turn the tide.