Of note to those following developments in autonomous lethal robots should be an article published this summer in the Columbia Science and Technology Law Review, entitled “International Governance of Autonomous Lethal Robots.” It is co-authored by a bevy of individuals calling themselves the Autonomous Robotics Thrust Group (ARTG) of the Consortium on Emerging Technologies, Military Operations and National Security (CETMONS), a collection of ethics and technology experts from various North American universities. According to the article:
“A variety of never-before-anticipated, complex legal, ethical and political issues have been created – issues in need of prompt attention and action… The recent controversy over unmanned aerial vehicles (UAVs) that are nevertheless human-controlled… demonstrates the importance of anticipating and trying to address in a proactive manner the concerns about the next generation of such weapons – autonomous, lethal robotics. While there is much room for debate about what substantive policies and restrictions (if any) should apply to LARs, there is broad agreement that now is the time to discuss those issues.”
This is only the most recent call for international policy attention to one of the most game-changing developments in military technology and military norms in history. In that, the ARTG joins other emerging networks of professionals bound together by the causal belief that nations will have interests in pursuing fully autonomous weapons and the normative belief that such developments should be subject to ethical regulation in advance – a precautionary principle, as it were. The International Committee on Robot Arms Control (ICRAC), for example, issued a statement in Berlin last year:
“Given the rapid pace of development of armed tele-operated and autonomous robotic systems, we call upon the international community to commence a discussion about the pressing dangers that these systems pose to peace and international security and to civilians, who continue to suffer most in armed conflict.”
At the same time, I see significant differences between the ICRAC statement and the argument in the ARTG article.
First, whereas ICRAC is concerned about the ability of weaponized robots to follow the basic rules of the laws of war, ARTG suggests that “it may be anticipated that in the future autonomous robots may be able to perform better than humans [with respect to adherence to the existing laws of war].” This is not surprising, since one of the ARTG authors is Ronald Arkin, who is pioneering designs for such an ethical soldier and has written an important book on the topic.
Second, whereas ICRAC has floated, as part of a menu of options, prohibitions on some or all uses of autonomous robots, the ARTG authors argue that “it remains an open question whether the differences between LAR and existing military technology are significant enough to bar the former’s use,” and moreover appear to assume such prohibitions would not, at any rate, check the deployment of such weapons: “the trend is clear: autonomous robots will ultimately be deployed in the conduct of warfare.” ICRAC’s position is far more optimistic about the potential of norm-building efforts to forestall that outcome, and far more pessimistic about the normative value of the weapons.
In short, both ARTG and ICRAC would appear to constitute examples of epistemic communities:
“networks of professionals with recognized knowledge and skill in a particular issue-area, sharing a set of beliefs, which provide a value-based foundation for the actions of members.”
But do these groups constitute nodes in a single epistemic network due to the shared causal and principled beliefs that the weaponization of robots is proceeding apace and that proactive governance over these developments is now a necessary public good?
Or do they constitute separate, competing epistemic communities operating in the same policy space with very different visions about what that governance should look like? If the latter, do they indeed constitute counter-communities, similar to the counter-campaigns Cliff Bob is documenting in the NGO sector?
Analytically, is there a standard for making this determination as an empirical matter, or is it simply a matter of how one black-boxes the emergent norm under study? If I understand the “norm” in question as a precautionary principle in favor of some preliminary ethical discussion about autonomous weapons systems (AWS), then both these groups have a shared agenda, whatever their different viewpoints on the ethics involved. If I focus on what they argue the outcome should be, my interpretation is that they represent different agendas (that may be true within each group as well, of course; in any community there will be differences of opinion over outcome, process or strategy).
I put this question forth largely as a bleg, since I am not an expert in the epistemic communities literature and yet probably need to become one as I develop this particular case study for my book. Has anyone developed a typology that I would find useful? Other thoughts or useful literature you can point me to?