
Theatre and Cyber Security

By now I am sure many of you have seen the news that Sony has indefinitely postponed/canceled the theatrical release of The Interview under threat from hackers apparently connected to the regime in North Korea. It is not clear whether the threat was explicitly against moviegoers or against the companies screening the film, or whether the assault would be virtual or physical in form (although the Obama Administration has suggested the theatre threat was overblown and has criticized Sony for withholding the film). What is clear is that the cancellation costs Sony tens of millions of dollars in lost production and promotion costs and has established a precedent that digital assaults can produce real-world costs and behavioral changes.

Quite striking is the shift in construction of the Sony issue as a threat. Previous breaches of corporate information technology (IT) security have hardly prompted the kind of national security discourses the Sony case has generated. Indeed, the earlier disclosure of sensitive emails from the Sony IT breach did not result in discussions of national threat. Certainly, the more international and public elements of the situation suggest greater basis for making a national security claim. And yet, the appearances are deceptive. The Obama Administration specifically downplayed the possible threat to cinemas, with the Department of Homeland Security indicating there was no credible threat to cinemas or theatregoers. The cancellation of the film is certainly costly, but most of the cost is borne by Sony (to the tune of tens of millions of dollars). To that end, the IT breach is not any different from other corporate IT breaches where customer information has been compromised. The North Korean element is certainly substantive, but not altogether unique.

What the shift in discourse reveals is the socially constructed nature of threat. The public costs of the Sony IT breach are economically smaller than in other breaches, and the linkage to an external state is not unique to the Sony case. So materially, there is little that obviously qualifies the Sony IT breach as a national security issue, much less something that calls for US government retaliation. The discursive shift regarding the national security 'threat' posed by the Sony incident highlights the utility of securitization theory for thinking about the issue of cyber security. Specifically, securitization theory directs our attention to how political actors are seeking to reconstruct the Sony IT breach in ways that justify extraordinary measures, in this case the US government risking conflict escalation with an isolated, reactive, and militarized regime in North Korea on behalf of a private economic/corporate entity. Notably, since the cancellation of the film, discourses have highlighted core elements of American political identity, specifically the right to freedom of expression, as the basis of the security claim. This discursive shift suggests a societal boundary with respect to information technology issues in the United States between a private concern (the Sony breach before the film's cancellation) and a public security matter.

Securitization also draws our attention to the political effects of security, and as a consequence to the costs of security. Who benefits from or is empowered by treating IT issues as security issues? What consequences arise from making IT security a national security matter? How can the state possibly mandate security measures for an issue that interweaves throughout the economy? What kinds of instabilities are created by involving states as security actors in the cyber realm, with its strong potential of militarization? Certainly weak states will seek to take advantage of the asymmetric opportunities of global information technology, but for the most powerful and developed states in the system, the question of where responsibility and countermeasures should lie, and whether they should lie with the state at all, remains an open one. Specifically, in past nonsecuritized (from the standpoint of the state) IT breaches, the responsibility and the cost were assumed to lie with the victimized corporation. Securitization shifts that responsibility and cost to the state.

I have long been a skeptic of the concept of cyber security as such, and for me securitization theory opens up an analytical space for critically interrogating the concept of cyber security, the process by which information technology issues are transformed into security, as well as the political and social effects of terming information technology as security.

 

**Thanks to Dave McCourt for helpful comments on this post!

 


Science of Santa

The following is a guest post by Tenacity Murdie, age 12.  

Dear Readers,

Every year on Christmas Eve, Santa, a fat and happy man, takes off in a sleigh full of presents to go deliver gifts to the good boys and girls. We spend millions of dollars in preparation for Santa, but is this reasonable, or are we just throwing our money down the drain? Although many think that it is possible for Santa to travel the world in less than 31 hours (not 24 since we have time zones)[2] and successfully deliver presents to millions of children without violating any laws of physics, it’s just not possible. If Santa were to do this, he would be breaking multiple laws of physics. Some examples are: there are no known reindeer species that can fly, the actual Santa (St. Nick) is long dead, and, most importantly, there is not enough time for Santa to get to all the houses in one day. Let me explain.

Continue reading

Citizens, Beasts or Gods?

Keeping up with the current engagement with artificial intelligence (AI) is a full-time task. Today in the New York Times, two lead articles (here and here) in the technology section were about AI, and another discussed the future of robotics, AI, and the workforce. As I blogged last week, the coming economic future of robotics and AI is going to have to contend with some very weighty considerations that are making our society more and more economically, socially and racially divided.

Today, however, I'd like to think about how a society might view an AI, particularly one that is generally intelligent and economically productive. To aid in this exercise I am employing one of my favorite and most helpful philosophers: Aristotle. For Aristotle, man is the only animal that possesses logos: the ability to use speech and reason. While other philosophers have challenged this conclusion, let's just take Aristotle at his word.

Logos is what defines a human as a human, and because of Aristotle’s teleology, the use of logos is what makes a human a good human (Ethics, 1095a). Moreover, Aristotle also holds that man is by nature a political animal (Ethics 1097b, Politics, 1253a3). What he means by this is that man cannot live in isolation, and cannot be self-sufficient in isolation, but must live amongst other humans. The polis for him provides all that is necessary and makes life “desirable and deficient in nothing” (Ethics, 1097b).   If one lives outside of the polis, then he is doing so against his nature. As Aristotle explains, “anyone who lacks the capacity to share in community, or has not the need to because of his [own] self-sufficiency, is no part of the city and as a result is either a beast or a god” (Politics, 1253a29). In short, there are three classes of persons in Aristotle’s thinking: citizens, beasts or gods.

Citizens share in community, and according to his writings on friendship, they require a bond of some sort to hold them together (Ethics, 1156a). This bond, or philia, is something shared in common. Beasts, or those animals incapable of logos, cannot by definition be part of a polis, for they lack the requisite capacities to engage in deliberative speech and judgment.   Gods, for Aristotle, also do not require the polis for they are self-sufficient alone. Divine immortals have no need of others.

Yet, if we believe that AI is (nearly) upon us, or at least is worth commissioning a 100 year study to measure and evaluate its impact on the human condition, we have before us a new problem, one that Aristotle's work helps to illuminate. We potentially have an entity that would possess logos but fail to be a citizen, a beast, or a god.

What kind of entity is it? A generally artificially intelligent being would be capable of speech of some sort (that is, communication), it could understand others' speech (in either voice recognition or text), it would be capable of learning (potentially at very rapid speeds), and, depending upon its use or function, it could be very economically productive for the person or entity that owns it. In fact, if we were to rely on Aristotle, this entity looks more akin to a slave. Though even this understanding is incomplete, for his argument is that the master and slave are mutually beneficial in their relationship, and that a slave is nothing more than "a tool for the purpose of life." Of course nothing in a relationship between an AI and its "owners" would make the relationship "beneficial" for the AI, unless one viewed it as possible to give an AI a teleological value structure that defines "benefit" as that which is good for its owner.

If we took this view, however, we would be granting that an AI will never really understand us humans. From an Aristotelian perspective, what this means is that we would create machines that are generally intelligent and give them some sort of end value, but we would not "share" anything in common with the AI. We would not have "friendship"; we would have no common bond. Why does this matter, the skeptic asks?

It matters for the simple reason that if we create a generally intelligent AI, one that can learn, evolve, and potentially act on and in the world, and it has no philia with us humans, then we cannot understand it and it cannot understand us. So what, the objection goes. As long as it is doing what it is programmed to do, all the better for us.

I think this line of reasoning misses something fundamental about creating an AI. We desire to create an AI that is helpful or useful to us, but if it doesn't understand us, and we fail to see that it is completely nonhuman and will not think or reason like a human, it might be "rational" but not "reasonable." We would embark on creating a "friendly AI" that has no understanding of what "friend" means, and that holds nothing in common with us to form a friendship. The perverse effects of this would be astounding.
I will leave you with one example, and one of the ongoing problems of ethics. Utilitarians view ethics as a moral framework that states that one must maximize some sort of nonmoral good (like happiness or well-being) for one's action to be considered moral. Deontologists claim that no amount of maximization will justify the violation of an innocent person's rights. When faced with a situation where one must decide which values to uphold, ethicists hotly debate which moral framework to adopt. If an AI programmer says to herself, "well, utilitarianism often yields perverse outcomes, I think I will program a deontological AI," then the Kantian AI will adhere to a strict deontic structure. So much so, that the AI programmer finds herself in another quagmire: the "fiat justitia ruat caelum" problem (let justice be done though the heavens fall), where the rational begins to look very unreasonable. Both moral theories inscribe and inform different values in our societies, but both are full of their own problems. There is no consensus on which framework to take, and there is no consensus on the meanings of the values within each framework.

My point here is that Aristotle rightly understood that humans need each other, and that we must educate and habituate people to good moral habits. Politics is the domain where we discuss, deliberate, and act on those moral precepts, and it is what makes us uniquely human. Creating an artificial intelligence that looks or reasons nothing like a human carries with it the worry that we have created a beast or a god, or something altogether different. We must tread carefully on this new road, fiat justitia.

What's Happening with Climate Negotiations in Lima?

The annual climate negotiations are wrapping up in Lima, Peru tonight or, more likely, tomorrow. Negotiators are working through the night in overtime as they seek to hammer out a blueprint that will serve as the negotiating template for next year. This is the meeting before next year's big meeting in Paris, when expectations are high for a new climate agreement that will establish the targets and actions countries are willing to take for the 2020 period and beyond. So, what's going on? What's at stake? What are the key outcomes? Points of dissensus and consensus? Should we be optimistic or pessimistic? Here is one live negotiation tracker, and here is what I believe is the text that the chair proposed just as I went to bed.

Continue reading

What Does the Rise of AI have to do with Ferguson and Eric Garner?

One might think that the future of artificial intelligence (AI) and the recent spate of police brutality against African American males, particularly Michael Brown and Eric Garner, are remotely related if they are related at all. However, I want to press us to look at these two seemingly unrelated practices (racial discrimination and technological progress) and ask what the trajectory of both portends. I am increasingly concerned about the future of AI, and about the unintended consequences that increased reliance on it will yield. Today, I'd like to focus on just one of those unintended outcomes: increased racial divide, poverty and discrimination.

Recently, Stephen Hawking, Elon Musk, and Nick Bostrom have argued that the future of humanity is at stake if we create artificial general intelligence (AGI), for an AGI has a great probability of surpassing human-level general intelligence and becoming a superintelligence (ASI). Without careful concern for how such AIs are programmed, that is, for their values and their goals, an ASI may begin down a path that knows no bounds. I am not particularly concerned with ASI here. This is an important discussion, but it is one for another day. Today, I'm concerned with the AI in the near to middle term that will inevitably be utilized to take over rather low-skill and low-paying jobs. This AI is thought by some to mark the beginning of "the second machine age" that will usher in prosperity and increased human welfare. I have argued before that any increase in AI will have a gendered impact on job loss and creation. I would like to extend that today to concerns over race.

Today the New York Times reported that 30 million Americans are currently unemployed, and that the percentage of unemployed men has tripled since 1960. The article also reported that 85% of unemployed men polled do not possess bachelor's degrees, and 34% have a criminal background. In another article, the Times also broke down unemployment rates nationally, looking at the geographic distribution of male unemployment. In places like Arizona and New Mexico, for instance, large swaths of land have unemployment rates of 40% or more. Yet if one examines the data more closely, one sees an immediate overlay of unemployment rates on the same tracts of land that are designated tribal areas and reservations, i.e., nonwhite areas.

Moreover, if one looks at the data reported by Pew, the gap between white and minority household wealth continues to grow. Pew reports that in 2013 the median net worth of white households was $141,000, while the median net worth of black households was $11,000: nearly a thirteenfold difference. Minority households, the data says, are more likely to be poor. Indeed, they are likely to be very poor, for the poverty level in the US for a one-person household is $11,670. Given that Pew also reported that 58% of unemployed women reported taking care of children 18 and younger in their home, there is a strong probability that these households contain more than one person. Add to these facts regarding poverty and unemployment an underlying racial discrimination in the criminal justice system, and one can see where this is going.

Whites occupy better jobs, have better access to education, have far greater net household incomes, and are far less likely to experience crime. In fact, The Sentencing Project reports that in 2008 blacks were 78% more likely to be victims of burglary and 133% more likely to experience other types of theft. Compare this to the 2012 statistics that blacks are also 66% more likely to be victims of sexual assault and over six times more likely than whites to be victims of homicide. Minorities are also more often seen to be the perpetrators of crime, and as one study shows, police officers are quicker to shoot at armed black suspects than white ones.

Thus what we see from a very quick and rough look at various types of data is that poverty, education, crime and the justice system are all racially divided. How does AI affect this? Well, the arguments for AI, and for increasingly relying on AI to generate jobs and produce more wealth and prosperity, are premised on this racist (and gendered) division of labor. As Brynjolfsson and McAfee argue, the jobs that are going to "disappear" are the "dull" ones that computers are good at automating, while jobs that require dexterity and little education, like housecleaning, are likely to stay. Good news for all those very wealthy (and male) maids.

In the interim, Brynjolfsson and McAfee suggest, there will be a readjustment of the types of jobs in the future economy. We will need more people educated in information technologies and in more creative ways of being entrepreneurs in this new knowledge economy. They note that education is key to success in the new AI future, and that parents ought to look to different types of schools that encourage creativity and freethinking, such as Montessori schools.

Yet given the data that we have now about the growing disparity between white and minority incomes, the availability of quality education in poor areas, and the underlying discriminatory attitudes towards minorities, in what future will these already troubled souls rise to levels of prosperity when the "new" economy automatically shuts them out? How could a household with $11,000 annual income afford over $16,000 a year in Montessori tuition? Trickle-down economics just doesn't cut it. Instead, this new vision of an AI economy will reaffirm what Charles Mills calls "the racial contract," and further subjugate and discriminate against nonwhites (and especially nonwhite women).

If the future looks anything like what Brynjolfsson and McAfee portend, then those who control AI will be those who own, and thus benefit from, lower costs of production through the mechanization and automation of labor. Wealth will accumulate in these hands, and unless one has skills that either support the tech industry or create new tech, one is unlikely to find employment in anything other than unskilled but dexterous labor. Given the statistics that we have today, it is more likely that this wealth will continue to accumulate in primarily white hands. Poverty and crime will continue to be visited upon the most vulnerable, and often nonwhite, members of society, and the racial discrimination that pervades the justice system, and with it tragedies like that of Eric Garner, will continue. Unless there is a serious conversation about the extent to which the economy, the education system and the justice system perpetuate and exacerbate this condition, AI will only make these moral failings more acute.

Torture as War Victory: 'Zero Dark Thirty' and the torture reports

This post is the first of our "Throwback Thursday" series, where we re-publish an earlier post on a topic that is currently in the news, or is receiving renewed attention or debate. This original post was published February 23, 2013 (right before the Oscars), but the main arguments about the utility of and rationale behind torture expressed in the movie may be worth revisiting given the recent release of the CIA's "torture report."

“This is what winning looks like”

I have to confess, I was late to watch "Zero Dark Thirty" (ZDT). I read a handful of reviews and blogs about the movie, had arguments with friends about its message, and even wrote it off completely, all weeks before I bothered to watch it. I wasn't interested in watching another American war movie, nor was I keen to see the lengthy torture scenes I had read about in the reviews. I figured I already knew exactly what the content was (are there ever any real surprises in American war movies? and didn't we all know how this story ended anyway?) and that there was really nothing left to say. BUT, I think there is something left to say about the film.

First, let's all be honest: most of us walked away from this movie saying to ourselves "did I miss something?" What about the film deserved all the Oscar hype, debate, and acclaim? By most standards, this was a classic, boring American war movie. In this case, the lack of plot and acting skill is made up for with violent torture scenes rather than expensive battle scenes. There is no emotional journey, no big moral dilemma that the characters are going through (I'll get to torture soon), little plot twist (again, we all know how it ends after all), and no unique or interesting characters (don't get me started on Jessica Chastain; what exactly about her stone-faced performance warrants an Oscar? Perhaps she deserves an award for 'most consistent blank expression'). So what gives? Is this just another "King's Speech"? Meaning, is this just another big movie that people talk about and get behind, but no one actually can put their finger on what was remotely interesting about it (never mind what was destructive about it)?

So I'm calling it. Not only was this movie soulless, boring and poorly made, everyone seemed to miss the message (and it is easy enough to do). The real question about ZDT is not whether or not it is condoning torture. Continue reading

The Danger In "Leading From the Front"


The conventional wisdom about the gradual U.S. ramp-up of the military campaign against ISIS is just that: all too conventional. Blistering criticisms from the Right (that the ramp-up was too slow, and that the President is to blame for leaving Iraq too soon) have both proved hollow. They have been fading as the U.S. and its allies have been successfully degrading ISIS. During the last two months of their successful election campaign, Congressional candidates essentially dropped this criticism from their attack ads and stump speeches. But the notion that the U.S. displayed weakness in the gradual roll-out of its anti-ISIS operation persists.

However, there was and still is a danger that the U.S. ramped up too soon. One of the primary strategic problems of the last five austerity-addled years has been the sizable reduction in defense spending by a series of western allies (although the capabilities reductions, which matter more, have been much smaller and in some cases augmented). As important as maintaining capabilities is, there is also the necessity of strategy, which includes the willingness to use force if necessary. The U.S. attempt to "lead from the middle," which involves allies sharing security burdens, could be impeded if allies interpret the U.S. taking the lead against ISIS as "leading from the front." The danger is that this could result in a new round of allies reducing their spending and/or capabilities, which would be a serious setback to American national security interests.

Continue reading

Theory as thought

Recently a friend and colleague wrote me to say:

 

“The SS piece is actually really useful to me as a model for dealing with Political Science post paradigm wars.”

 

Which prompts me (as if academics ever need such a prompt) to revisit an issue I raised almost a year ago: the role of theory in policymaking. In that long ago post, I mentioned that Patrick James and I had an article under review that addressed the relationship between theory and policy from a fairly novel perspective, and I am happy to say that article—entitled “Theory as Thought: Britain and German Unification”—came out earlier this year in Security Studies.

 

In the piece, we derive inspiration from analytic eclecticism in an effort to develop a more nuanced and useful understanding of how theory interacts with the real world. In pursuit of that agenda, we make a simple but potentially controversial claim: rather than represent objective descriptions/explanations of the world, theories of international relations represent different modes of thinking about the world. These different modes are intersubjective structures and discourses that enable shared efforts to understand and explain the world. Thus, theories are actually shared logics embedded in society that enable policymakers to make sense of the world. As such, IR scholars are embedded within and develop their theories from broader currents of social meaning-making.

 

To make the argument work, we distil the core operative logics underlying realism, liberal institutionalism, and constructivism. Rather than derive explanatory building blocks from theories and apply them to empirical sources, we analyze policy-makers' modes of thought to investigate whether they contain patterns of IR theory. We realize that doing so is part of the controversial nature of the article, as scholars operating within these traditions may reject the simplifications we undertake, or, in the case of constructivism, the claim that it has enough coherence to have a unifying logic. We spend some time justifying these decisions in the article, so I leave it to readers to look there for our defense.

 

After establishing the logics, we apply them to our case study—British policymaking toward German unification. We find that, contrary to claims that these theories only explain the international system, they actually represent modes of thought that shape how actors see the world. Moreover, all three logics play a critical role in the British policymaking process, interweaving to produce a complex constructed social reality. The logic of realism clearly played an important role in shaping the perceptions of top British leadership, particularly Thatcher, of German unification as a problem. This foundational assessment played a crucial role in shaping how the British understood the events of 1989 and 1990. But it did not play an important role in how the British responded to the process of German unification. By turning to NATO, the CSCE, and the EU to integrate an expanded and quasi-hegemonic Germany within the existing network of institutions, the logic of neoliberal institutionalism played a critical role in how British policymakers constructed their policy response. Why did the logic of neoliberal institutionalism prevail over the logic of realism in directing British policy? Here the power of the logic of constructivism is evident, particularly the role of identity and rhetorical entrapment. These logics constrained British policymakers to cooperative policy options.

 

A range of implications arises from our argument, and we spend considerable time in the conclusion discussing them, so I present only a couple of highlights here. One implication is that no theory of international relations is consistently applicable across space and time. Rather, the applicability of theory to events depends on the particular mix of theoretical logics in a particular time and space. These logics, like other socially constructed systems of meaning and relation (e.g. identity), may come to be sedimented (in strategic culture, for example) and thus relatively stable over the short to medium term. But scholars would be well served to problematize which theoretical logics constitute the dominant discourses and narratives in the times and places they are interested in studying.

 

Another implication addresses the divide between material and ideational approaches to IR. Material versus ideational analysis emerges as what Brecher calls a "flawed dichotomy." Regardless of the approach under consideration, it is not possible to comprehend how policymakers understood German transformation without both. The most convincing account is one that recognizes the contributions of multiple paradigms, with their intertwining logics, to understanding complex international events. For such reasons, frameworks ranging from streamlined realism to more intricate constructivism should be regarded as complementary rather than competitive in resolving the mysteries of IR.

 

A final implication regards the separation between theory and reality, and the gap between academics and policymakers. If we are right about the basis of theory, that means that theoretical development corresponds with changes in the world and how state leaders and societies come to terms with those changes. But the influence is not unidirectional. Theories also shape the world, providing systems of meaning that are taken up and integrated into shared logics. Thus, at a fundamental level there is no gap between academics and policymakers even if on a day-to-day basis such a gap seems yawning.

 

Theory is thought, both in the minds of scholars as well as actors in the ‘real’ world. Incorporating that simple observation into research on international relations holds the potential of greater illumination—from theory development to analytical veracity to bridging the gap between IR scholars and practitioners.

Friday Nerd Blogging: Star Wes Awakens

The folks here are big, big fans of Star Wars, so we were most happy this week with the new teaser. Many parodies have/will ensue. Here is the Wes Anderson take:

Continue reading

Clans

Rutgers University law professor (and blogger) Mark S. Weiner has been awarded the 2015 Grawemeyer Award for Ideas Improving World Order for ideas set forth in his 2013 book, The Rule of the Clan: What an Ancient Form of Social Organization Reveals About the Future of Individual Freedom. The award includes a $100,000 cash prize and is administered by the University of Louisville.

The book makes a fairly complicated argument about clans, identity groups, liberal democracy, states, and national security. The press release ostensibly explains the highly readable book’s main argument:  Continue reading

The Perils of a M/W/F Class

Greetings, fellow Duck readers.  I realize I've been MIA this semester: DGS duties and ISA-Midwest stuff took too much of my non-research time.  Another factor in my absence, however: a Monday/Wednesday/Friday teaching schedule. And, it sucked.[1]  Like large-tornado-near-my-hometown sucked.  Today marks the last Friday class of the semester, thank god.[2]  Even though I should be getting back to research this morning, I wanted to write a little bit about why I think 50-minute, three-day-a-week classes should be banned in our discipline.

Continue reading

Not Surprised is Not Good Enough: what soldier atrocities in Iraq and Afghanistan can teach us about Ferguson

By some strange twist of fate I happened to watch The Kill Team, a documentary about the infamous US platoon that intentionally murdered innocent Afghan men while on tour. When, in 2010, the military charged five members of the platoon, the case drew international attention due to the graphic nature of the killings, evidence that the men mutilated the bodies and kept parts as trophies, and indications that the killings were part of a wider trend of 'faking' combat situations in order for soldiers to 'get a kill.' While the premeditated killing of Afghan civilians appears completely disconnected from the Ferguson grand jury's decision not to indict Darren Wilson for the murder of unarmed Michael Brown, there are several common threads that deserve unraveling. Rather than characterise 'Ferguson' as 'simply' a case of police brutality, or localised racism, or isolated misconduct, such a comparison opens up space for counter-narratives. In particular, the comparison A) highlights the systemic nature of racist, militarized, and patriarchal violence across multiple institutions, including the police and the military; B) addresses the sanctioned killing of non-white men and women as a consistent feature of the national narrative; and C) indicates the desperate need both to demonise a racialised other and to measure individual and national masculinity in terms of the control and suppression of this demonised other.

So, with that pleasant list out of the way, here are three ways that civilian deaths in Afghanistan and Iraq are similar to the murders of innocent young African-American men.

1. Creating a dark and dehumanized enemy.
Whether it is at home in the US or overseas in Iraq and Afghanistan, there is ample evidence of a generalised trend for police, soldiers, and the public to hold deeply racist views about the people they are meant to be protecting.


2015 OAIS Blogging Awards: Call for Nominations!

Duckies

It’s that time of year folks. We are now receiving nominations for the third annual Online Achievement in International Studies (OAIS) Blogging Awards — aka the Duckie Awards.

We are asking Duck readers to reflect back over the past year to consider the best blogging contributions to the field of International Studies and to submit nominations for the awards. Post your nominations in the comments thread or drop us a note at duckofminerva2015 @ gmail.com. We will later ask readers to vote for the three finalists in each category. Last year’s winners have generously agreed to judge the finalists and select the 2015 winners.

Also, once again we are thrilled that, with the support of SAGE, Duck of Minerva will be co-hosting the third annual IR Blogging Awards and Reception at the ISA Annual Conference in New Orleans. The reception is scheduled for the evening of Thursday, February 19, 2015. Charli is again coordinating the program for the Awards ceremony, and we’ll have details on the program soon.

At this point, we need Duck readers to submit nominations — we’ll ask you all to vote on the finalists in January. Here are the rules and nomination and judging procedures for the 2015 awards: Continue reading

World AIDS Day 2014: Five Data Points, Four News Items, and Three Films

Today is World AIDS Day, an annual day of remembrance and reflection on the global AIDS crisis, observed since 1988. Overshadowed this year by the Ebola outbreak in West Africa, the epidemic has receded from news coverage in recent years, but we should keep in mind that this problem is not over. Here are 5 facts and 4 news items about the state of the current epidemic to keep in mind, and 3 recent films – How to Survive a Plague, Fire in the Blood, and Dallas Buyers Club – which help bring some context to understanding grassroots mobilization in the U.S. and internationally to combat the AIDS epidemic.

Ebola is Not Over

The Ebola crisis isn’t over. In the absence of new infections in the United States, Americans have moved on to other preoccupations (Ferguson, anyone?), but the problem hasn’t gone away even if Google searches have plunged. There has been some positive news out of Liberia, where the rate of new infections has declined from about 80 new cases per day to 20 to 30, but the news from Sierra Leone suggests the problem is far from under control, with the end of the rainy season potentially making transit easier and facilitating the further spread of the virus. More troubling still is the new hot spot of infections in Mali – eight confirmed cases in all, seven of them related to a single Guinean imam who died after being diagnosed in Mali and whose body was not handled as a deceased Ebola patient’s should be.

Meaningful or Meaningless Control

In May of 2014, states parties to the United Nations Convention on Certain Conventional Weapons (CCW) first considered the issue of banning lethal autonomous weapons. Before the start of the informal expert meetings, Article 36 circulated a memorandum on the concept of “meaningful human control.” The document attempted to frame the discussion around the varying degrees of control over increasingly automated (and potentially autonomous) weapons systems in contemporary combat. In particular, Article 36 posed the question of what the appropriate balance of control ought to be over a weapons system that can operate independently of an operator in a defined geographical area for a particular period of time. Article 36 does not define “meaningful control,” but rather seeks to generate discussion about how much control ought to be present, what “meaningful” entails, and how computer programming can enable or inhibit human control. The states parties at the CCW agreed that this terminology was crucial and that no weapons system lacking meaningful human control ought to be deployed. The Duck’s Charli Carpenter has written about this as well, here.

Last month, in October, the United Nations Institute for Disarmament Research (UNIDIR) held a conference on the concept of meaningful human control. Earlier this month, states again convened in Geneva at another CCW meeting and agreed to consider the matter further in April of 2015. Moreover, other civil society groups are now beginning to think about what this approach entails. It appears, then, that this concept has become a rallying point in the debate over autonomous weapons. Yet while we have a common term to agree on, we are not clear on what exactly “control” requires, what proxies we could use to make control more efficacious (such as geographic or time limits), or what “meaningful” would look like.

Today, I had an engaging discussion with colleagues about a “semi-autonomous” weapon: Lockheed Martin’s Long-Range Anti-Ship Missile (LRASM). One colleague claimed that this missile is in fact an autonomous weapon, as it selects and engages a target. Another colleague, however, claimed that it is not an autonomous weapon because a human being preselects the targets before the weapon is launched. Both of my colleagues are correct. Yet how can this be so?

The weapon does select and engage a target after it is launched, and the particular strength of the LRASM is that it can navigate in denied environments where other weapons cannot. It can change course when necessary, and when it finds its way to its preselected targets, it selects among them based upon an undisclosed identification mechanism (probably similar to the image recognition used in other precision-guided munitions). The LRASM is unique in its navigation and target-cuing capabilities, as well as in its ability to coordinate with other launched LRASMs. The question of whether it is an autonomous weapon, then, is really a question about meaningful human control.

Is it a question about “control” once the missile reaches its target destination and then “decides” which ship among the convoy it will attack? Or is it a question about the selection of the grid or space that the enemy convoy occupies? At what point is the decision about “control” made?

I cannot fully answer this question here. However, I can raise two potential avenues for the way forward. One is to consider human control not as a dichotomy (there is either a human being deliberating at every juncture and pulling a trigger, or there is not), but as an escalatory ladder. That is, we start with the targeting process, from the commander all the way down to a targeteer or weaponeer, and examine how decisions to use lethal force are made and on what basis. This would at least allow us to understand the different domains (air, land, sea) we are working within, the types of targets likely to be found, and the desired goals to be achieved. It would also allow examination of when particular weapons systems enter the discussion. For if we understand what types of decisions, drawing on various (perhaps automated) types of information, are made along this ladder, then we can determine whether particular weapons are appropriate or not. We might even glean which types of weapons are always out of bounds.

Second, if this control ladder is too onerous a task, or perhaps too formulaic and liable to induce a perverse incentive to create weapons right up to a particular line of automation, then perhaps the best way to think about what “meaningful human control” entails is not its presence but its absence. In other words, what would “meaningless” human control look like? Perhaps it is better to define the concept negatively, by what it is not, rather than by what it is. We have examples of this already, particularly in US policy regarding covert action. The 1991 Intelligence Authorization Act defines covert action very vaguely, and then defines in more concrete terms what it is not (e.g. intelligence gathering, traditional or routine military or diplomatic operations, etc.). Clear cases of “meaningless” control, then, would be launching a weapons system without any consideration of the targets, the likely consequences, and the presence of civilian objects or persons, or launching a weapon that patrols perpetually. This is of course cold comfort to those who want to ban autonomous weapons outright: banning a weapon requires a positive, not a negative, definition.

States would have to settle the question of whether any targets on a grid are fair game, or whether only pre-identified targets – and not targets of opportunity – are fair game. It may also require states to become transparent about how such targets are confirmed, or how large a grid one is allowed to use. For if a search area ends up looking like the entire Pacific Ocean, that pesky question about “meaningful” raises its head again.

Friday Nerd Blogging: Twitter Much?

As a very frequent tweeter, I could only watch this SNL sketch/dance number (it didn’t make it to the show, just to dress rehearsal) with just a hint of shame:


Privacy, Secrecy & War: Emperor Rogers and the Failure of NSA Reform

On November 3, the head of Britain’s Government Communications Headquarters (GCHQ) published an opinion piece in the Financial Times arguing that technology companies such as Twitter, Facebook, and WhatsApp (and, by implication, Google and Apple) ought to cooperate with governments to a greater extent to combat terrorism. When tech companies further encrypt their devices or software, as Apple has recently done with the iPhone 6 or as WhatsApp has accomplished with its software, GCHQ chief Robert Hannigan argues, this is tantamount to aiding and abetting terrorists. GCHQ is the sister agency of the US’s National Security Agency (NSA); both are charged with signals intelligence and information assurance.

Interestingly, Hannigan’s opinion piece came only weeks before the US Senate voted on whether to limit the NSA’s ability to conduct bulk collection of telephony metadata and to reform other aspects of the NSA’s activities. Two days ago, this bill, known as the “USA Freedom Act,” failed to pass by two votes. While Hannigan stressed that companies ought to be more open to complying with governments’ requests to hand over data, the failure of the USA Freedom Act strengthened at least the US government’s position to continue its mass surveillance of foreign and US citizens. It remains to be seen how the tech giants will react.

The bill also sought, amongst other things, to make transparent the number of government requests to tech companies, to force the NSA to seek a court order from the Foreign Intelligence Surveillance Court (FISC) before querying the (telecom-held) data, and to require the NSA to list the “specific selection term” to be used when searching the data. Moreover, the bill would have mandated an amicus curiae, or “friend of the court,” in the FISC to offer arguments against government requests for searches, data collection, and the like, which the court currently lacks. Many of these reforms were welcomed by tech companies like Google and Apple and were also suggested in a 2013 report for the White House on NSA and intelligence reform.

Many of the disagreements over the bill fell along two lines: that the bill hamstrung the US’s ability to “fight terrorists,” and that it failed to go far enough in protecting the civil liberties of US citizens. The latter objection arose because the bill would have reauthorized Section 215 of the PATRIOT Act (set to expire in 2015) until 2017. Section 215 permits government agents, such as the FBI and the NSA, to compel third parties to hand over business records and any other “tangible things” whenever the government requests them in pursuance of an “authorized investigation” against international terrorism or clandestine intelligence activities. In particular, Section 215 merely requires the government to present specific facts supporting a “reasonable suspicion” that the person under investigation is in fact an agent of a foreign power or a terrorist. It does not require a showing of probable cause, only a general test of reasonableness, and this concept of reasonableness is stretched to quite a limit. The Democratic support for the bill came most strongly from Senator Dianne Feinstein (D-Calif.), who is reported to have said, “I do not want to end the program [215 bulk collection],” so “I’m prepared to make the compromise, which is that the metadata will be kept by the telecoms.”

Where, then, does the failure of this bill leave us? In two places, actually. First, it permits the NSA to carry on with the status quo. Edward Snowden’s revelations of mass surveillance appear to have fallen off the American people’s radar and, with them, permitted Congress to punt on the issue until its next session. Moreover, given that the next session will have a Republican-dominated House and Senate, there is a high probability that any bill passed will either reaffirm the status quo (i.e. reauthorize Section 215) or potentially strengthen the NSA’s ability to collect data.

Second, this state of affairs will undoubtedly strengthen the position of Emperor Mike Rogers. Admiral Mike Rogers recently replaced General Keith Alexander as the head of both the NSA and US Cyber Command (Cybercom). I refer to the post holder as “Emperor” not merely because of the vast array of power in the hands of the head of NSA/Cybercom, but also because such an alliance is antithetical to a transparent and vibrant democracy that believes in separating its intelligence-gathering and war-making functions. (For more on former Emperor Alexander’s conflicts of interest and misdeeds, see here.)

The US Code separates the authorities and roles for intelligence gathering (Title 50) from those for US military operations (Title 10). In other words, it was once believed that intelligence and military operations were separate but complementary in function, and were also limited by different sets of rules and regulations. These range from mundane reporting requirements to more obvious rules about the permissibility of engaging in violent activities. However, with the creation of the NSA/Cybercom Emperor, we have married Title 10 and Title 50 in a rather incestuous way. While it is certainly true that Cybercom and the NSA are both in charge of signals intelligence, Cybercom is actively tasked with offensive cyber operations. This means there is a serious risk of conflicts of interest between the NSA and Cybercom, as well as a latent identity crisis for the Emperor. If one is constantly swapping a Title 10 hat for a Title 50 hat, viewing operations now as military operations and now as intelligence gathering, the two will eventually merge. That both post holders have been high-ranking military officers means that the character of NSA/Cybercom will most likely be more militaristic, but with the potential for the post holder to issue ex post justifications for various “operations” as intelligence gathering under Title 50, and thus subject them to less transparent oversight and reporting.

One might think this fear mongering, but I think not. Suppose, for example, that the Emperor deems it necessary to engage in an offensive cyber operation that might, say, alter the financial transactions or statements of a target, and that part of this operation requires the US’s role to remain secret. This operation would be tantamount to a covert action as defined under Section 413b(e) of Title 50. Covert actions have a tumultuous history, but suffice it to say, the President can order them directly, and they have rather limited reporting requirements to Congress. What, however, would be the difference if the same action were ordered by Admiral Rogers in the course of an offensive cyber operation? The same operation, the same person giving the order, but the difference in international and domestic legal regulation is drastic. How could one possibly limit any ex post justification for secrecy if something were to come to light or if harm were inflicted? The answer is that there is no way to do so within the current system, because the post holder is simultaneously a military commander and an intelligence authority.

That the Senate has refused to pass even a watered-down version of NSA reform only further strengthens this position. The NSA is free to collect bulk telephony metadata and, moreover, to hold that data for up to five years. It can also query the data without a court order, and it is not compelled to make transparent any of its requests to telecom companies. Furthermore, one of the largest reforms necessary – separating the functions of the NSA and Cybercom – continues to go unaddressed. The Emperor, it would seem, is still free to do as he desires.

Friday Nerd Blogging: Princess Bride FTW

Check out this set of tweets tying together feminism and The Princess Bride. My guess is that if you check out #feministprincessbride you will find many more. The movie keeps on giving.

Some Book Publishing Tips

Yesterday, I was part of a panel at Carleton organized to provide other profs/students with suggestions about how to get their stuff published in book form.  The Canadian process is different from the American process, so I spent my ten minutes on the lessons I learned from my experiences with American publishers.

What did I say?


© 2015 Duck of Minerva

Theme by Anders NorenUp ↑