Tag: robot warriors

Field Reports

I spent last week doing “field research” – that is, participant-observation in one of the several communities of practice whose work I’m following as part of my new book project on global norm development. In this case, the norm in question is governance over developments in lethal autonomous robotics, and the community of practice is individuals loosely associated with the Consortium on Emerging Technologies, Military Operations and National Security. CETMONS is an epistemic network composed of six ethics centers whose Autonomous Weapons Thrust Group collaboratively published an important paper on the subject last year and whose members regularly get together in subsets to debate legal and ethical questions relating to emerging military tech. This particular event was sponsored by the Lincoln Center for Applied Ethics, which heads CETMONS, and was held at the Chautauqua Institution in New York.

There among the sailboats (once a game-changing military technology themselves), smart minds from law, philosophy and engineering debated trends in cyber-warfare, military robotics, non-lethal weaponry and human augmentation. Chatham House rules apply, so I can’t and won’t attribute comments to anyone in particular, and my own human subjects procedures prevent me from doing more than reporting in the broadest strokes – in my research, foreign policy writing or blog posts – about the discussions that took place. Nor does my research methodology allow me to say what I personally think on the specific issue of whether autonomous lethal weapons should be banned entirely (the position taken by the International Committee on Robot Arms Control and Article 36.org), simply regulated somehow (which seems to be the open question on the CETMONS-AWTG agenda), or promoted as a form of uber-humanitarian warfare (the position put forward by Ronald Arkin).*


However, Chatham House rules do allow me to speak in generalities about what I took away from the event, and my methodology allows me to ruminate on what I’m learning as I observe new norms percolating in ways that don’t bleed too far into advocacy for one side or the other. I can also dance with some of the policy debates adjacent to the specific norm I’m studying. And I can play with the wider questions regarding law, armed conflict and emerging technologies that arise in contexts like this. 

My posts this week will likely be of one variety or another.
____________
*Not, at least, until my case study is completed. For now, regarding that debate itself, I’m “observing” rather than staking out prescriptive positions. My “participation” – in meetings like these or in the blogosphere or anywhere else these issues are discussed – is limited to posing questions, playing devil’s advocate, writing empirically about the nature of ethical argument in this area, exploring empirical arguments underlying ethical claims on both sides of that debate, clarifying the applicable law as a set of social facts, and reporting objectively on various advocacy efforts.

BREAKING NEWS — Charli Carpenter is a Machine!

THE CANARD

“All the fake news that’s fit to print.”

— Amherst

The academic and foreign policy worlds were rocked today by the news that Charli Carpenter — prolific academic, policy wonk, and mom — is in fact a robot. She was taken captive this morning in a rare joint operation by the FBI, the CIA, and NASA.

Friends were shocked, but not necessarily surprised. Dan Nexon, a professor at Georgetown University, said, “We always joked that Charli was a machine. She writes like a book a week. And good ones, too. Not the usual schlock we turn out.” He added, “She was always so good with technology. And she really likes science fiction. We all hoped she was just a nerd though. I guess we were fooling ourselves. I feel so betrayed.”

Indeed it was this ferocious work ethic, combined with Carpenter’s interest in robotic warfare, that first set off alarm bells at the CIA. Carpenter blogged frequently about issues of robotic warfare at the Duck of Minerva, monitored by the Company as a barometer of academic opinion on issues of international relations. It is believed that Carpenter’s research on whether there was an emerging norm against robotic warfare was either a probe of the level of human resistance that would accompany the revelation of her race of machines, or an elaborate ruse to gain access to policy-making circles so that she could collect intelligence about the state of machine-led warfare in the U.S. in an effort to prepare for an eventual takeover of the world.

The ‘human’ Carpenter

However, it was only after the FBI began monitoring the droid professor that they began to suspect that she was a machine. An anonymous source told this paper: “There were the academic writings, then all the policy work, the grant writing and management. She never missed her son’s soccer games though. And she is so pretty too. It was just too much. Her makers made a mistake by not giving her any weaknesses.” He added, “And you would give a female robot a boy’s name, wouldn’t you? It was just too obvious.” Surveillance revealed that Carpenter never slept.

Carpenter is currently being held in an undisclosed location thought to be somewhere near her home in Amherst. Our anonymous source said, “She can’t do much harm there. There are more dairy cows than people.” Previously used methods of enhanced interrogation are, of course, proving fruitless on the robot. It is not known where she came from or her precise instructions.

Carpenter’s true “self”

The revelation replaces the previous rumor among academics that Carpenter was actually an alien from the series Battlestar Galactica that she loves so much. That appears to have been just a hobby for the robot. Carpenter also seems to have developed a taste for American fantasy literature. Our CIA source said, “It makes you think more about the boundary between man and machine. Where does one begin and the other end? Still, it is our job to protect American security. There is no room in this country for relentlessly hard-working academic robots raising well-adjusted families, no matter who it turns out they work for.”

Those were the days


Yesterday as I drove to work, two stories on NPR caught my attention because of how completely out of touch the interviewees sounded about their particular fields. These are highly trained people, performing what used to be important – if not vital – services, and well rewarded professionally for their accomplishments. And yet, listening to them talk about the importance of preserving the culture, practices, and institutional arrangements that enabled their professions, their claims rang so hollow, so 20th century, that I was struck that they would even say such things on the radio.

The culprits? Bankers and Fighter Pilots. The Bankers were all upset about the “strings attached” to the TARP bail-out money they had received. Of particular concern were the limits on executive pay and how these were going to cause a talent drain in the financial sector. All I could think was how tone-deaf the bankers sounded – while some of these guys may have had talent, it was a talent for destruction, not necessarily talent that you want to keep around. And have they tried looking for jobs lately? There are quite literally thousands of finance professionals out of work, ready to step into the jobs these supposed talents are vacating.

The Fighter Pilots were not quite as egregious, but still sounded like relics of a day gone by. Morning Edition has a nice two-part story (yesterday and today) about fighter pilots and the changing fighter pilot culture. I’m not quite going to give the full Farley here, but listening to these guys, who sound as if they stepped off the set of Top Gun and into the story, you wonder if they are living in a bygone era (yes, I know one is AF and the other USN, but half of the first NPR segment is all about Top Gun – check it out, they even have the great music).

So why is there such an emphasis on training fighter pilots?

“None of us, I think, can really say with certainty who it is that we may end up having to fight next or what their capabilities are or what weapons systems they’ll have,” [Lt. Col. Dan “Digger” Hawkins, the deputy commander at Red Flag] says. “And so that’s why we keep our skills honed with exercises like Red Flag — so that we can be ready to defend the country at a moment’s notice against whoever it is who may try to attack us.”

No one who is currently training at Red Flag has ever been in a dogfight, but the training they receive is what Hawkins calls “very realistic dogfights.”

“As far as actual live combat, I’ll believe that some of the last air-to-air kills that the U.S. Air Force had was in Bosnia back in the 1990s.”

That was before these students were even pilots.

It sounds like such a valiant culture, much like the Pony Express was a valiant way to deliver cross-country mail in its day. For the past SIX years, the US has been engaged in two wars – actual ongoing combat operations, in one case against a real enemy that had actually attacked the United States – and fighter pilots have had no place to operationalize all that wonderful training at Red Flag. Instead, they have been pushed aside by robots. These days, drones are the US weapon of choice in fighting Al Qaeda:

Pentagon officials say the remotely piloted planes, which can beam back live video for up to 22 hours, have done more than any other weapons system to track down insurgents and save American lives in Iraq and Afghanistan.

The planes have become one of the military’s favorite weapons despite many shortcomings resulting from the rush to get them into the field.

There is a near-insatiable demand for more Predators and Reapers, but the “pilots” don’t want to actually fly them. It’s just not the same – pulling 9Gs versus sitting in a small room playing video games – they say.

Bankers and Fighter Pilots. Heroes of the ’80s and ’90s. Sounding like relics of a bygone era. It would be cute if it weren’t so darn expensive to maintain the institutions that facilitate their cultures.

Frackin’ Toasters

In the mailbox today, I found my pre-ordered copy of Peter Singer’s new book Wired for War: The Robotics Revolution and Conflict in the 21st Century. NPR had an interview with Singer yesterday, which gives you a good sense of his argument and some of the fascinating and frightening changes coming down the pipeline in military affairs.

I was excited to sink my teeth into this before the semester gets started, since I’m eager to update my curriculum on battlefield robots, and since I’ll be blogging about the book next month in an upcoming symposium at Complex Terrain Lab. I’ll save most of my substantive remarks for that forum, and for such time as I’ve actually read the entire book. But based on the first two pages, I have two quick initial reactions:

1) From the very first three sentences, Singer does not disappoint:

“Because they’re frakin’ cool. That’s a short answer to why someone would spend four years researching and writing a book on new technologies and war. The long answer is a bit more complicated.”

I love it – you don’t get a better hook or prose more engaging than that.

2) However, I must take issue with a certain assertion in Singer’s very first (and otherwise fascinating) endnote (p. 439), on the etymology of the word “frak”:

“Frak is a made-up expletive that originated in the computer science research world. It then made its way into video gaming, ultimately becoming the title of a game designed for the BBC Micro and Commodore 64 in the early 1980s. The main character, a caveman called Trogg, would say ‘Frak!’ in a little speech bubble whenever he was ‘killed.’ It soon spread into science fiction, appearing in such games as Cyberpunk 2020 and the Warhammer 40,000 novels. It crossed over into the mainstream most explicitly in the new 2003 reboot of the 1970s TV series Battlestar Galactica. That the characters in the updated version of the TV show cursed, albeit with a made-up word, was part of the grimier, more serious feel of the show.”

In fact, however, the word was used (ok, maybe not quite as frequently) in the earlier show as well – albeit spelled “frack.” According to Battlestar WikiBlog:

“”Frak” is derived from the Original Series expletive, “frack,” a term used in character dialogue far less often (or “colorfully”) than its counterpart in the Re-imagined Series. The Re-imagined Series’s production team said they felt that “frack” should be a four-letter word, hence “frak”. The term “frack” was obviously used in dialogue in the Original Series to comply with FCC and other broadcast decency standards because the FCC has jurisdiction over the content of broadcast TV.”

See also here… I don’t generally encourage using Wikipedia as a primary source (take heed, ye Polsci 121 students), but in this case I can’t think of a better place to get a sense of the popular understanding of a made-up word’s etymology.

That aside, I look forward to reading and commenting on the rest. Good stuff.

UPDATE (11:22pm). Having put the kids to bed, am now on p. 14 – if this isn’t a good reason to go buy this book, what is? Singer writes:

“[This] book makes many allusions to popular culture, not something you normally find in a research work on war, politics, or science. Some references are obvious and some are not (and thus the first reader to send a complete list of them to me at www.pwsinger.com will receive a signed copy of the book and a Burger King Transformers collectible).”

How frakking cool is that?

Robot Soldiers v. Autonomous Weapons: Why It Matters

I have a post up right now at Complex Terrain Lab about developments in the area of autonomous weaponry as a response to asymmetric security environments. While fully autonomous weapons are some distance away, a number of researchers and bloggers argue that these trends in military technology have significant moral implications for implementing the laws of war.

In particular, such writers question whether machines can be designed to make ethical targeting decisions; how responsibility for mistakes is to be allocated and punished; and whether the ability to wage war without risking soldiers’ lives will remove incentives for peaceful conflict resolution.

On one side are those who oppose any weapons whose targeting systems don’t include a man (or woman) “in the loop” and who indeed call for a global code of conduct regarding such weapons: it was even reported earlier this year that autonomous weapons could be the next target of transnational advocacy networks on the basis of their ethical implications.

On the other side of the debate are roboticists like those at Georgia Tech’s Mobile Robot Lab, who argue that machines can one day be superior to human soldiers at complying with the rules of war. After all, they will never panic, succumb to “scenario-fulfillment bias,” or act out of hatred or revenge.

Earlier this year, Kenneth Anderson took this debate to a level of greater nuance by asking, at Opinio Juris, how one might program a “robot soldier” to mimic the ideal human soldier. He asks not whether it is likely that a robot could improve upon a human soldier’s ethical performance in war but rather:

Is the ideal autonomous battlefield robot one that makes decisions as the ideal ethical soldier would? Is that the right model in the first place? What the robot question poses by implication, however, is what, if any, is the value of either robots or human soldiers set against the lives of civilians. This question arises from a simple point – a robot is a machine, and does not have the moral worth of a human being, including a human soldier or a civilian, at least not unless and until we finally move into Asimov-territory. Should a robot attach any value to itself, to its own self preservation, at the cost of civilian collateral damage? How much, and does that differ from the value that a human soldier has?

I won’t respond directly to Anderson’s point about military necessity, with which I agree, or to his broader questions about asymmetric warfare, which are covered at CTLab. Instead, I want to highlight some implications that framing these weapons as analogous to soldiers has for potential norm development in this area. As I see it, a precautionary principle against autonomous weapons, if indeed one is warranted, depends a great deal on whether we accept the construction of autonomous weapons as “robot soldiers” or whether they remain conceptualized as merely a category of “weapon.”

This difference is crucial because the status of soldiers in international law is quite different from the status of weapons. Article 36 of Additional Protocol 1 requires states to “determine whether a new weapon or method of warfare is compatible with international law” – that is, with the principles of discrimination and proportionality. If a weapon cannot by its very nature discriminate between civilians and combatants, or if its effects cannot be controlled after it is deployed, it does not meet the criteria for new weapons under international law. Adopting this perspective would put the burden of proof on the designers of such weapons and give norm entrepreneurs like Noel Sharkey or Robert Sparrow a framework they can use to argue that such robots likely could not make the kinds of difficult judgments necessary in asymmetric warfare to follow existing international law.

But if robots are ever imagined to be analogous to soldiers, then the requirements would be different. Soldiers must only endeavor to discriminate between civilians and combatants and use weapons capable of discriminating. They need not actually do so perfectly, and in fact it is common to argue nowadays that it is almost impossible to do so in many conflict environments. In such cases, the principles of military necessity and proportionality trade off against discrimination. And the fact that soldiers cannot necessarily be “controlled” once they’re deployed doesn’t militate against their use, as it does with uncontrollable weapons like earlier generations of anti-personnel landmines. In such a framework, the argument that robots might sometimes make mistakes doesn’t mean their development itself would necessarily be unethical. All that designers would then most likely need to demonstrate is that their machines are likely to improve upon human ability.

In other words, framing matters.

The Year’s Under-reported Stories

Foreign Policy has released its annual “Top Ten Stories You Missed in 2007.” Among the contenders:

1. The Cyberwars Have Begun. However, see Myriam Dunn Cavelty’s article “Cyberterrorism: Looming Threat or Phantom Menace” in the inaugural issue of the Journal of Information Technology and Politics.

2. US Navy is in Iraq for the Long Haul. And a good thing too, if trade in the Arabian Gulf is to be protected from the emerging threat of piracy, which is on the rise off the coast of Iraq and is already thriving in other areas of the world characterized by state failure. See the International Analyst Network for more.

3. Rifts Within Al-Qaeda Widening. But is this really news? The movement has always been less monolithic than it has been portrayed by the West.

4. And my favorite, we have evidently “entered” the era of robot warriors. According to FP:

“Although militaries have used robots for everything from minesweeping to defusing bombs, the new “special weapons observation remote reconnaissance direct action system”–or SWORDS–is different. For one, it’s packing heat: an M249 machine gun, to be exact. It can fire on a target from more than 3,000 feet away. So far, three of these $250,000 robots have been deployed to Iraq to conduct dangerous ground operations that would otherwise put soldiers’ lives at risk.”

Well, now we’re ready to crush the rebels! On to Planet Hoth!
