Tag: technology

Autonomous Weapons and Incentives for Oppression

Much of the present debate over autonomous weapons systems (AWS) focuses on their use in war. On one side, scholars argue that AWS will make war more inhumane (Asaro, 2012), that the decision to kill must be a human being’s choice (Sharkey, 2010), or that they will make war more likely because conflict will be less costly to wage with them (Sparrow, 2009). On the other side, scholars argue that AWS will make war more humane, as the weapons will be better at upholding the principles of distinction and proportionality (Müller and Simpson, 2014), as well as providing greater force protection (Arkin, 2009). I would, however, like to look at a different dimension: authoritarian regimes’ use of AWS for internal oppression and political survival.


Strategic Surprise? Or The Foreseeable Future

When the Soviets launched Sputnik in 1957, the US was taken off guard.  Seriously off guard.  While Eisenhower didn’t think the pointy satellite was a major strategic threat, the public perception was that it was.  The Soviets could launch rockets into space, and if they could do that, they could easily launch nuclear missiles at the US.  So, aside from a damaged US ego about losing the “space race,” the strategic landscape shifted quickly and the “missile gap” fear was born.

The US’s “strategic surprise” and the subsequent public backlash caused the US to embark on a variety of science and technology ventures to ensure that it would never face such surprise again. One new agency, the Advanced Research Projects Agency (ARPA), was tasked with generating strategic surprise – and guarding against it. While ARPA became DARPA (the Defense Advanced Research Projects Agency) in the 1970s, its mission did not change.

DARPA has been, and still is, the main source of major technological advancement for US defense, and we would do well to remember its primary mission: to prevent strategic surprise. Why, one might ask, is this important to students of international affairs? Because technology has always been one of the major variables (sometimes ignored) that affect relations between international players. Who has what, what their capabilities are, whether they can translate those capacities into power, if they can reduce uncertainty and the “fog and friction” of war, whether they can predict future events, if they can understand their adversaries, and on and on the questions go. But at base, we utilize science and technology to pursue our national interests and answer these questions.

In my last post, I brought attention to the DoD’s new “Third Offset Strategy.” This strategy, I explained, is based on the assumption that scientific achievement and the creation of new weapons and systems will allow the US to maintain superiority and never fall victim to strategic surprise (again). Like the first and second offsets, the third seeks to leverage advancements in physics, computer science, robotics, artificial intelligence, and electrical and mechanical engineering to “kick the crap” out of any potential adversary.

Yet, aside from noting these requirements, what, exactly, would the US need to do to “offset” the threats from Russia, China, various actors in the Middle East, terrorists (at home and abroad), and any unforeseen threats or “unknown unknowns”? I think I have a general idea, and if I am even partially correct, we need to have a public discussion about this now.


Stumbling Through Foreign Policy – Not History

Last week Joe Scarborough of Politico raised the question of why US foreign policy in the Middle East is in “disarray.” Citing all of the turmoil of the past 14 years, he posits that both Obama’s and Bush’s decisions for the region are driven by “blind ideology [rather] than sound reason.” Scarborough wonders what historians will say about these policies in the future, but what he fails to realize is that observers of foreign policy and strategic studies need not wait for the future to explain the decisions of the past two sitting presidents. The strategic considerations that shaped not merely US foreign policy, but also US grand strategy, reach back farther than Bush’s first term in office.

Understanding why George W. Bush (Bush 43) engaged US forces in Iraq is a complex task, one that many academics would say requires at least a foray into operational code analysis of his decision making (Renshon, 2008). That is certainly true, but it too would be insufficient to explain the current strategic setting faced by the US, because it would ignore the Gulf War of 1991. What is more, understanding that war requires reaching back to the early 1980s and the US Cold War AirLand Battle strategy. Thus, for us to really answer Scarborough’s question about current US foreign policy, we must look back over 30 years to the beginnings of the Reagan administration.


U.S. Options Limited Due to Will and Not Lack of Drones

Today, Kate Brannen’s piece in Foreign Policy sent mixed messages with regard to the US-led coalition fighting the Islamic State (IS). She reports that the US is balancing demands “for intelligence, surveillance, and reconnaissance (ISR) assets across Iraq and Syria with keeping an eye on Afghanistan.” The implication, reinforced by the title of her piece, is that if the US just had more “drones” over Syria, it would be able to fight IS more adeptly. The problem, however, is that her argument is not only misleading, it is also dismissive of the Arab allies’ human intelligence contributions.

While Brannen is right to note that the US has many of its unmanned assets in Afghanistan and that this will certainly change with the upcoming troop drawdown there, it is not at all clear why moving those assets to Syria will yield any better advantage against IS. Remotely piloted aircraft (RPA) are only useful in permissive air environments, that is, environments where one’s air assets will not face obstructions or attacks. The US’s recent drone operations abroad have taken place almost entirely in permissive environments, and as such, it has been able to fly ISR missions – and combat ones as well – without interference from an adversary. The fight against IS, however, is not a permissive environment. It may range from non-permissive to hostile, depending upon the area and the capabilities of IS at the time. We know that IS has air defense capabilities, and these may interfere with operations. What is more, we also know that RPAs are highly vulnerable to air defense systems and are inappropriate for hostile and contested airspaces. NATO recently published a report outlining this vulnerability in detail. Thus before we claim that more “drones” will help the fight against IS, we ought to look very carefully at their operational appropriateness.

A secondary, but equally important, point in Brannen’s argument concerns the export of unmanned technology. She writes,

“According to the senior Defense Department official, members of the coalition against the Islamic State are making small contributions in terms of ISR capabilities, but it’s going to take time to get them more fully integrated. U.S. export policy is partly to blame for the limits on coalition members when it comes to airborne surveillance, Scharre said. ‘The U.S. has been very reluctant to export its unmanned aircraft, even with close allies.’ ‘There are countries we will export the Joint Strike Fighter to, but that we will not sell an armed Reaper to,’ [Scharre] said.”

The shift from discussing ISR capabilities to the export of armed unmanned systems may go unnoticed by many, but it is a very important point. We might bemoan the fact that the US’s Arab partners are making “small [ISR] contributions” to the fight against IS, but providing them with unarmed, let alone armed, unmanned platforms may not fix the situation. As I noted above, such platforms may be shot down if flown in inappropriate circumstances. Moreover, if the US wants to remain dominant in the unmanned systems arena, then it will want to be very selective about exporting the technology. Drone proliferation is already occurring, with the majority of the world’s countries in possession of some type of unmanned system. While those states may not possess medium- or high-altitude armed systems, there is worry that it is only a matter of time until they do. Arming the Kurds with Global Hawks or Reapers, for example, will not fix this situation, and may only upset an already delicate balance between the allies.

Proliferation and technological superiority remain a constant concern for the US, which is why, taken in conjunction with the known limitations of existing unmanned platforms, there has not been a rush to either export drones or move the remaining fleet in Afghanistan to Syria and Iraq. IS is a different enemy than the Taliban in Afghanistan or the “terrorists” in Yemen, Pakistan, or Somalia. IS possesses US military hardware; its fighters are battle-hardened, have a will to fight and die, and are capable of tactical and operational strategizing. Engagement with them will require forces up close and on the ground, and supporting that kind of fighting from the air is better done with close air support. Thus it is telling that the US is sending in Apache helicopters to aid the fight but not moving more drones.

ISR is of course a necessity. No one denies this. However, to claim that it can only be achieved from 60,000 feet is misleading. ISR comes from a range of sources, from human ones to satellite images. Implying that our Arab allies are merely contributing a “small amount” to ISR dismisses their well-placed intelligence capabilities. Jordan, for example, can provide better on-the-ground assessment than the US can, as the US lacks the will to put “boots on the ground” to gather those sources. Such claims also send a message to these states that their efforts and lives are not enough, when in fact the US is relying just as heavily on their boots as they are relying on our ISR.

 

Podcasting Killed the Lecturing Star

The first video ever played on MTV, back when MTV played music videos most of the time, was the one-hit wonder “Video Killed the Radio Star” by The Buggles. A lament about how new technology ended the career of a singer who was well-adapted to the production standards and genre constraints of an earlier era, the song recounts an irreversible process:

In my mind and in my car
We can’t rewind we’ve gone too far
Pictures came and broke your heart
Put the blame on VTR

Maybe this rings a faint bell for some of you. In any case, for a quick refresher, you can watch the whole thing here.

The great irony of MTV using this to launch an entirely new avenue for experiencing music (music videos weren’t new in 1981, but the idea of a basic cable channel that showed basically nothing but such videos was quite new) is that it took The Buggles’ tragic tale and drew from it, at least by implication, a silver lining: the end of the radio era was the condition of possibility for the video era, and the experience of music was thereby enhanced and transformed. Radio stars might die, but music would survive and thrive.

As I read the discussion thread that unfolded underneath my brief pedagogical query from a few weeks ago, and kept composing replies in my head that I couldn’t make the time for amidst the chaos of the opening week of the semester (and no, APSA had nothing to do with it, since I don’t go to APSA these days…but that’s material for another post entirely), I kept coming back to the thought that there was something of the sentiment of this song in many of the replies, and something of MTV’s ironic deployment of the song in my reaction. I would submit that podcasting has killed the lecturing star already, although news of that death has yet to reach all corners of the academy. Large live lecturing, like churning one’s own butter or properly loading a flintlock musket, is a historical curiosity, perhaps something one might expect to see in museums or at Renaissance Festivals being practiced as a hobby, but not in the heart of a university. But this death of the lecturer is also an opportunity for teaching, much as MTV was an opportunity for music — not wholly positive, not wholly negative, but different. And ignoring that difference, which we can keep doing in the academy for a while because of our tenuous-but-still-extant-in-many-quarters isolation from broader socioeconomic trends, is not a strategy for continuing to educate the students who keep filling up our classrooms and our campuses.

Pedagogical query

Happy first day of Fall classes, at least at my university. A question for discussion:

Is there any value whatsoever to a live lecture delivered in front of large numbers of students, given that podcasting is now sufficiently easy and ubiquitous that anyone with a laptop or a smartphone (or a digital voice recorder or camcorder) or access to those devices via a campus IT services department can do it?


Krugman’s (Probably) Wrong about Apple and Network Externalities

Paul Krugman has an op-ed in today’s New York Times in which he likens the rise and decline of technology companies to Ibn Khaldun’s account of the rise and decline of dynasties: success breeds complacency and soon the barbarians are running the show. This happened, he argues, to Microsoft, which once upon a time dominated the computer industry thanks to network externalities:

The odd thing was that nobody seemed to like Microsoft’s products. By all accounts, Apple computers were better than PCs using Windows as their operating system. Yet the vast majority of desktop and laptop computers ran Windows. Why?

The answer, basically, is that everyone used Windows because everyone used Windows. If you had a Windows PC and wanted help, you could ask the guy in the next cubicle, or the tech people downstairs, and have a very good chance of getting the answer you needed. Software was designed to run on PCs; peripheral devices were designed to work with PCs.

This state of affairs bred complacency and Microsoft failed to anticipate the shift to mobile devices. Now Apple risks the same fate.

Anyway, the funny thing is that Apple’s position in mobile devices now bears a strong resemblance to Microsoft’s former position in operating systems. True, Apple produces high-quality products. But they are, by most accounts, little if any better than those of rivals, while selling at premium prices.

So why do people buy them? Network externalities: lots of other people use iWhatevers, there are more apps for iOS than for other systems, so Apple becomes the safe and easy choice. Meet the new boss, same as the old boss.
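Krugman’s lock-in mechanism is easy to make concrete. Below is a minimal sketch – my own illustration, not anything from the op-ed – of an Arthur-style increasing-returns process, in which each new adopter usually copies the installed base and only occasionally chooses independently. All parameter values are arbitrary.

```python
import random

def simulate_adoption(n_adopters=10_000, herding=0.95, seed=0):
    """Toy model of network externalities (increasing returns to adoption).

    Each arriving adopter copies the current majority platform with
    probability `herding` ("everyone uses Windows because everyone uses
    Windows"); otherwise they choose independently at random. Parameter
    values are illustrative only.
    """
    rng = random.Random(seed)
    counts = {"A": 1, "B": 1}  # seed each platform with one early user
    for _ in range(n_adopters):
        if rng.random() < herding:
            choice = max(counts, key=counts.get)  # follow the installed base
        else:
            choice = rng.choice(["A", "B"])       # choose on independent grounds
        counts[choice] += 1
    return counts

if __name__ == "__main__":
    for seed in range(3):
        print(seed, simulate_adoption(seed=seed))
```

Run it with a few different seeds and one platform almost always captures nearly the whole market; which one wins is decided by early, essentially random luck rather than by which product is “better.”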


Dear LaTeX: It’s not you, it’s me

Dear LaTeX,

You look so pretty.  In grad school, all the cool kids were using you.  You know, the kids that had backgrounds in differential calculus and ran R even when they didn’t have to? Those kids.  I wanted to be like them and have groundbreaking papers.  So, instead of working on my arguments and methods, I downloaded you and set out to write my dissertation with your wonderful program.  I mean, if the paper looks like it was written by someone who has their stuff together, it must be a well-done paper, right?


Emerging Technologies, Material and Social

[Video: “Recording Casualties and the Protection of Civilians,” from the Oxford Research Group (ORG) on Vimeo.]

As the lone social scientist in a room of lawyers, philosophers, and technicians last week, I was struck by a couple of things. One was the disconnect between descriptive and normative ethics, or rather questions of is versus ought. Everyone was speaking about norms and rules, but whereas the lawyers treated existing norms and rules as social facts, the philosophers treated them as questions of ethics that could and should be altered if necessary on their ethical merit. Another was the disconnect between material and social technologies. Engineers in the room seemed especially likely to assume that material technology evolves independently of social facts like laws, ethical debates, or architectures of governance, though they disagreed about whether this was for better or worse.
I suspect, to the contrary, that there is an important relationship between all three that bears closer investigation. To give an example, an important thread seemed to unite the discussion despite inevitable interdisciplinary tensions: that both material technologies (like weaponry or cyber-architecture) and social technologies (like international laws) should evolve or change to suit the demands of human security. That is, the protection of vulnerable non-combatants should be a priority in considerations of the value of these technologies. Even those arguing for the value of lethal autonomous robots made the case on these terms, rather than national security grounds alone.

Yet it bears pointing out (as I think the video above does quite well) how difficult that very factor is to measure.
How do we know whether a particular rule or law or practice has a net benefit to vulnerable civilians? How does one test the hypothesis, for example, that autonomous weapons can improve on human soldiers’ track record of civilian protection, without a clear baseline of what that track record is? Knowing requires both material technologies (like databases, forensics, and recording equipment) and social technologies (like interviewing skills and political will).

And make no mistake: the baselines are far from clear, because our social technologies of casualty counting are lacking. Nothing in the existing rules of war requires record-keeping of war casualties, efforts to do so are patchy and non-comparable, and the results are data that map poorly onto the legal obligations of parties to armed conflict. Hence, an important emerging social technology would be an effort to standardize casualty reporting worldwide. Indeed such social technologies are already under development, as the presentation from the Oxford Research Group exemplifies. Such a governance architecture would be a logical complement to emerging material technologies whose raison d’être is predicated on improving baseline compliance with the laws of war. In fact, without them I wonder whether the debate about the effects of material technologies on war law compliance can really proceed in the realm of descriptive ethics, or must remain purely in the realm of the philosophical.
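To see why standardization matters, it helps to notice that it is, at bottom, a data-schema problem. The sketch below is entirely my own illustration; the field names are hypothetical and drawn neither from ORG’s work nor from any existing standard. The point is simply that shared, comparable fields are what would let today’s patchy recording efforts be aggregated into a usable baseline.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CasualtyRecord:
    """Hypothetical standardized casualty record (illustrative only)."""
    incident_date: date
    location: str                     # admin region or coordinates
    status: str                       # "civilian" or "combatant"
    cause: str                        # e.g. "airstrike", "IED", "small arms"
    recorded_by: str                  # documenting organization
    source_type: str                  # e.g. "hospital", "eyewitness", "media"
    count: int = 1                    # individuals covered by this record
    confidence: Optional[str] = None  # documentation quality, if graded

# With a shared schema, records from different monitors become comparable:
records = [
    CasualtyRecord(date(2012, 1, 5), "Province X", "civilian",
                   "airstrike", "Monitor A", "hospital"),
    CasualtyRecord(date(2012, 1, 5), "Province X", "civilian",
                   "airstrike", "Monitor B", "eyewitness", count=2),
]
civilian_total = sum(r.count for r in records if r.status == "civilian")
print(civilian_total)  # 3: the kind of baseline against which claims could be tested
```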

Anyway. These kinds of “emerging social technologies,” or what scholars like me might call “norm-building efforts,” received relatively little consideration at the workshop, which was focused primarily on the relationship of emerging material technologies (robotics, cyberspace, non-lethals, human augmentation) to existing governance architecture (e.g., the law of armed conflict). But I think – and will probably write more on this question presently – that an important question is how emerging material technologies can expose gaps and irregularities in social technologies of governance, catalyze shifts in norms, and (possibly) strengthen enforcement of and adherence to those norms themselves if put to good use.

Cyber Nerd Blogging: Neuroscience, Conflict and Security

Antoine Bousquet has a fascinating post at The Disorder of Things on developments in neuroscience and how they are being used by militaries to 1) enhance their own soldiers and 2) degrade the abilities of their opponents. The post is in response to a report by The Royal Society on Neuroscience, Conflict and Security, which outlines these developments, speculates on where they are headed, and considers their ethical implications.

As Bousquet notes, it’s some pretty hairy stuff:

Yet perhaps the most potentially consequential developments will be found in the area of neural interfacing and its efforts to bring the human nervous system and computing machines under a single informational architecture. The report’s authors note here the benefits that accrue from this research to the disabled in terms of improvements to the range of physical and social interactions available to them through a variety of neurally controlled prosthetic extensions. While this is indeed the case, there is a particular irony to the fact that the war mutilated (which the Afghan and Iraq conflicts have produced in abundance – according to one estimate, over 180,000 US veterans from these conflicts are on disability benefits) have become one of the main testing grounds for technologies that may in the future do much more than restore lost capabilities. Among one of the most striking suggestions is that:

electrode arrays implanted in the nervous system could provide a connection between the nervous system of an able-bodied individual and a specific hardware or software system. Since the human brain can process images, such as targets, much faster than the subject is consciously aware, a neurally interfaced weapons systems could provide significant advantages over other system control methods in terms of speed and accuracy. (p.40)

In other words, human brains may be harnessed within fire control systems to perform cognitive tasks before these even become conscious to them. Aside from the huge ethical and legal issues that it would raise, one cannot but observe that under such a scheme the functional distinction between human operator and machine seems to collapse entirely with the evaporation of any pretense of individual volition.

Noting scientific developments aimed at altering the sensory perception of enemies on the battlefield, Bousquet concludes: “The holy grail of military neuroscience is therefore nothing less than the ability to directly hack into and reprogram a target’s perceptions and beliefs, doing away even with the need for kinetic force. So that when neural warfare does truly arrive, we may not even know it.”

A couple of thoughts:

First, The Royal Society report is interesting for its inclusion of a relatively decent overview of the law that would apply to such weapons. Ken Anderson at Lawfare disagrees, suggesting that “The legal and ethical issues are of course legion and barely explored.” However, considering that the report is relatively brief, the legal and ethical section does proportionally take up a large chunk of it. In addition, the report includes no fewer than four recommendations for improving the Chemical Weapons Convention and Biological Weapons Convention regimes. Interestingly, the authors do not suggest any improvements for the law of war/IHL as opposed to arms control. I find this surprising to a certain extent. While there are principles that always apply to ALL weaponry (distinction, proportionality, and necessity – and, of course, the prohibition of unnecessary suffering), I would argue that neuro-non-lethal weapons are a definite grey area. (As The Royal Society report notes, altering someone’s sensory perception has radical implications for notions of responsibility in the prosecution of war crimes.)

Second, Bousquet’s last point is interesting in that it reflects the constant quest over the last century and a half to develop weapons that would end the need for kinetic force. I’m presently reading P.D. Smith’s Doomsday Men, a social history of the application of science to warfare and weapons of mass destruction, which traces the development of and logic behind weapons that were supposed to be so terrible that they could never be used – or, if used, would be so terrible as to inspire an end to warfare. This was the case for chemical/gas weapons and eventually the atomic bomb – the thought among many of their creators being that mere possession of such weapons would be enough to stop countries from fighting one another altogether, because the consequences would be so terrible.

As Smith demonstrates in his book, such a theory of non-use of weapons was a frequent theme of the science fiction literature of the time, particularly that of HG Wells:

The United States of America entered World War I under the slogan of ‘the war to end all wars’. Never has idealism been so badly used. From Hollis Godfrey’s The Man Who Ended War (1908) to H.G. Wells’s The World Set Free (1914), the idea of fighting a final battle to win universal peace had gripped readers in Europe and America. Wells’s novel even introduced the phrase ‘war that will end war’.
Once again, science played a vital role in these stories. A new figure emerged in pre-war fiction – the saviour scientist, a Promethean genius who uses his scientific knowledge to save his country and banish war forever. It is the ultimate victory for Science and Progress…

As James writes, these works of science fiction promoted the idea that “through revolutionary science and the actions of an idealistic scientist, war could be made a thing of the past.” In some works a terrible war is required to win the peace through science, but it is clear that, in the view of many of these pre-war “science romance” novels (which would go on to inspire many of the future atomic scientists working on the nuclear bomb), superweapons could stop war.

Should we then read neuro-weapons in this light – as part of the constant scientific quest to develop weapons which will end the need to fight?

Data, data on the wall

OK, OK, OK, I know life is short and some of us need to get a life (I’m not in Seattle by the way), but this is a really cool app from Uppsala:

From the iTunes description: “Data on 300 armed conflicts, more than 200 summaries of peace agreements, data on casualties etc, without having access to the Internet.”

I actually do think this is really cool and I can see real benefits to having this type of data at one’s fingertips, but I do wonder how these dramatic changes in the ease of access to select types of data and data summaries (even from reputable places like Uppsala) will alter research strategies, research teaching methods, the research capabilities of future scholars, and ultimately research output.

With Kate’s excellent post directly below in mind, I wonder: is this a good thing or not so good? Thoughts?

Oh, one other thing, any interest in an app for Duck?

If it won’t put me in the middle of the Baku Congress of the Peoples of the East, then I say “Meh.”


Lego Antikythera Machine and musings prompted thereby

Massively cool.


And here’s the earlier Lego Difference Engine:

Anyway, the juxtaposition of these two computers intersects (oddly enough) with one of the themes in the Steampunk debates I alluded to earlier. Steampunk extrapolates from the real (and imaginary) technology of the Victorian era. Cosma Shalizi identifies that period (i.e., the Industrial Revolution) as the true “singularity,” prompting Patrick Nielsen Hayden to remark:

I hope Shalizi will forgive my quoting his entire post, but it seems to me to have resonance with certain recent arguments over steampunk. It might even hint at why SF (and fantasy!) keep returning to the “long nineteenth century” like a dog to its bone.

I’m also reminded of this, from one of Nietzsche’s books of aphorisms: “The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw.”*

I’m led to wonder why more isn’t done with extrapolations of Roman technology. As Bryan Ward-Perkins reminds us in his excellent book, productivity in the Roman Empire was pretty robust – and likely significantly higher than what Europe would see for the centuries following its decline and fall. Findings such as the Antikythera Mechanism demonstrate rather advanced technical and scientific skills. I suspect that the later Roman Empire, let alone various periods of Chinese history, might be worth mining for an alternative technological imaginary.**

*I should note that one of the best discussions consistent with Shalizi’s argument remains that found in Stephen Kern’s The Culture of Time and Space, 1880-1918.

**Beyond the issue of SF potential, the lack of a Roman-era “industrial revolution” is a chronically under-theorized issue in comparative-historical sociology.

Mobile Duck: Wireless for War?


I am going to try to keep this short, because the function to split the page is not available in this browser …what browser, you ask? Safari for iPad, I’ll tell you.

I’ve decided to make this my first iPad post in part because I was itching to try it, but also because it seemed fitting for its subject matter…a talk that Peter W. Singer gave at ISA-West in Los Angeles on September 25 on his book, Wired for War. So I thought I would write about Wired for War wirelessly. Funny, right? Maybe I should keep my day job. Maybe.

Ok, my thought about this talk and the book is relatively straightforward, but perhaps still important. Singer started his talk with a commander’s letter “home” to a “dead” robot’s company, thanking the manufacturer for sparing the military the need to write a letter “home” to a soldier’s mother …as if a mother’s grief were the true tragedy of a soldier’s death. Elsewhere in the talk, Singer noted that many people who oppose the use of robotics in the US military, or criticize it from an enemy or victim perspective, attack the masculinity of its users. They argue that the use of robots is cowardly, and that ‘real men’ face and fight their enemies.

Singer’s analysis, of course, did not highlight the gendered dimensions of these discourses. Still, as important work in this field like Lauren Wilcox’s has demonstrated, this is not the first time that gender discourses have been key to debates about the use of new technologies in war. While, in Singer’s terms, whether ‘we’ are ‘wired’ for war or not seems to matter, being ‘equipped’ at whatever technological level seems to include meeting standards of masculinity.

America’s Spy-Roads


I was proud to get home from a five-week, 18-state trip without a single speeding ticket. Then I opened my mail to find a stern letter from the Arizona Department of Public Safety, with this photo and a citation for going 6 miles over the speed limit on the interstate:


My first response was to feel a little freaked. Clearly the robot menace has moved from the battlefield to our highways: a modest revolution in roadside camera technology has occurred since the last time I was on a cross-country road trip, with potential implications for privacy and civil liberties.

My second was to really admire the AZ system and wonder why it’s not more widely used, as it began to sink in how extremely effective a deterrent this experience would be the next time I traveled through Arizona. We exceeded the posted speed limit numerous times on the trip (only on empty, straight roads in good driving conditions, of course) but were never caught by any law enforcement officer. But this spybot caught me and asked me to pay up in a professional, timely manner, and I’ll do so and be more cautious when driving in Arizona.

Now, don’t get me wrong. I’m not a fan of overly regulated roads, largely because the weight of social science says that the more rules and road signs drivers have to follow, the less they rely on their own judgment, and the more fatalities result. The US, for example, has 36% more traffic fatalities per capita than Britain, where the rules are simpler and more flexible.

[John Staddon, a professor of brain science and psychology at Duke University, published the long article “Distracting Miss Daisy” cited above in the Atlantic last summer. He criticizes the US traffic enforcement system for training drivers to slavishly follow signs rather than pay attention to traffic conditions:

“A particularly vexing aspect of the U.S. policy is that speed limits seem to be enforced more when speeding is safe. As a colleague once pointed out, “An empty highway on a sunny day? You’re dead meat!” A more systematic effort to train drivers to ignore road conditions can hardly be imagined. By training drivers to drive according to the signs rather than their judgment in great conditions, the American system also subtly encourages them to rely on the signs rather than judgment in poor conditions, when merely following the signs would be dangerous.”]

Nonetheless, having speed limits unenforced is probably worse than not having them at all. And an automated system is far more effective (and cost-effective) than the occasional run-in with an officer. Roads are regulated spaces, so I’m not sure the civil liberties argument applies. I can live with a ticket from Big Brother when I go 6 miles over the limit (and the heads-up of seeing myself with my eyes on my passenger instead of the road) in exchange for knowing that the other speeders – including those who actually pose a risk to motorists like me – are also being given an incentive, both economic and normative, to slow down.

Not all agree; Arizona’s cameras have been the subject of criticism and even civil disobedience, and a bill was introduced earlier this year to ban them. Only Maryland has initiated a similar system; while 25 states have cameras at traffic lights, few have followed Arizona’s lead and placed spy cameras on freeways. Thoughts?

ABM in the Social Sciences

While I am in the throes of designing and implementing an agent-based modeling approach to study how democracies react to extreme external shocks, I wanted to take a brief break from coding and writing to highlight two very interesting pieces in the current issue of Nature that address ABM directly. The first, “Economics: Meltdown modelling,” discusses how advanced agent-based models might be able to help predict future economic crashes—complete with a vignette where a futuristic ABM prevents a collapse. The problem, as the article asserts, is that ABM is often rejected by mainstream economists.

Many [economists] argue that agent-based models haven’t had the same level of testing…agent-based model of a market with many diverse players and a rich structure may contain many variable parameters. So even if its output matches reality, it’s not always clear if this is because of careful tuning of those parameters, or because the model succeeds in capturing realistic system dynamics. That leads many economists and social scientists to wonder whether any such model can be trusted. But agent-based enthusiasts counter that conventional economic models also contain many tunable parameters and are therefore subject to the same criticism.

This aversion to ABM is persistent throughout the social sciences, which creates an odd dynamic where ABM enthusiasts must often spend a great deal of time justifying the method before research can even begin. What’s baffling about this situation, however, is that ABM is just a tool; useful for some research questions, but ultimately an imperfect device—just as nearly all other research methods in the social sciences are imperfect. This is precisely the sentiment of the authors of the second article, an op-ed entitled “The economy needs agent-based modelling.” In discussing the current state of the art in analytical economic models, the authors note:

The best models they have are of two types, both with fatal flaws. Type one is econometric: empirical statistical models that are fitted to past data. These successfully forecast a few quarters ahead as long as things stay more or less the same, but fail in the face of great change. Type two goes by the name of ‘dynamic stochastic general equilibrium’. These models assume a perfect world, and by their very nature rule out crises of the type we are experiencing now…As a result, economic policy-makers are basing their decisions on common sense, and on anecdotal analogies to previous crises such as Japan’s ‘lost decade’ or the Great Depression. The leaders of the world are flying the economy by the seat of their pants.

Why, then, is ABM treated as being particularly fallible? As a user and developer I have pondered this many times. I believe the primary issue for many critics is the notion of “creating a universe for experimentation,” i.e., the belief that an ABM can account for all of the complexity. The easy response to such a critique is simple: no one believes that. My first exposure to ABM was through zero-intelligence agents, and I was struck by how such simple models could predict the dynamics of real markets (so much so that I thought I might name a blog after them someday). Quality ABMs focus on a narrow set of agent attributes, and attempt to glean the maximum insight from these simple mechanics. For a more philosophical response I will paraphrase the great econometrician Neal Beck in saying that “all of statistics is a sub-field of theology.” That is, with any model we presume to know the “real truth,” but accept the inherent error and still attempt to build knowledge from the analysis. ABMs are no different; these models simply leverage a different technology and analytical framework to produce conclusions.
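For readers who have not encountered them, zero-intelligence agents come from Gode and Sunder’s double-auction experiments. Below is a minimal sketch of the idea – my own stripped-down rendering, with arbitrary parameters and midpoint pricing standing in for the actual order-book rules. The traders have no strategy at all, only a constraint against trading at a loss, yet transaction prices still behave in a strikingly market-like way.

```python
import random

def zi_double_auction(rounds=500, max_price=200, n_traders=50, seed=1):
    """Zero-intelligence-constrained (ZI-C) traders in a double auction.

    Buyers bid uniformly at random below their private valuations; sellers
    ask uniformly at random above their private costs. That budget
    constraint is the agents' ONLY discipline. Parameters are arbitrary,
    and trading at the bid-ask midpoint simplifies Gode & Sunder's setup.
    """
    rng = random.Random(seed)
    values = [rng.uniform(0, max_price) for _ in range(n_traders)]  # demand side
    costs = [rng.uniform(0, max_price) for _ in range(n_traders)]   # supply side
    prices = []
    for _ in range(rounds):
        value = rng.choice(values)           # a random buyer arrives...
        cost = rng.choice(costs)             # ...and meets a random seller
        bid = rng.uniform(0, value)          # never bid above valuation
        ask = rng.uniform(cost, max_price)   # never ask below cost
        if bid >= ask:                       # quotes cross: a trade occurs
            prices.append((bid + ask) / 2)
    return prices

if __name__ == "__main__":
    prices = zi_double_auction()
    if prices:
        print(f"{len(prices)} trades, mean price {sum(prices) / len(prices):.1f}")
```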

I welcome both critics and supporters of ABM to make the case for and against its use. It should be noted, however, that those railing against new technology often become victims of their own shortsightedness.

Photo: Nature

iPod Touch Bleg

I just purchased a 16GB iPod Touch. It’s a fun new toy, and a more than ample replacement for my overflowing 2GB Nano. I also added the new operating system. One of the benefits of a blog is the ability to crowd-source stuff like this, so I ask loyal Duck readers with a Touch or iPhone whether they have any suggestions on how to get the most out of my new device. Any favorite uses, applications that are must-gets, or other cool things that can be done besides the standard music, photos, and the like?

More ISA reflections: Technology, IR, and the study of IR

To continue the theme of ISA follow-up, I wanted to mix in a few observations about the way the massive technical shift of stuff like Web 2.0 seems to be changing what we study, how we study it, and how we conceive of what it means to study what we study. Of these, our professional norms of what it means to study IR and how we ought to do so feel the most lagging.

I attended several panels on discourse analysis. One panel focused on the study of images as discourse and featured two innovative graduate student papers investigating the discourse of photographs of Abu Ghraib and Guantanamo Bay. The two papers revealed just how powerful these images have been worldwide, shaping understandings of the US occupation of Iraq and the War on Terrorism. Gitmo, in part, has become such a powerful international symbol because of the images the world has seen of prisoners there. As a field, we have historically focused on discourse as text, privileging the primary discourses of speeches and archival records. As a discipline, we ask researchers to publish papers and present without access to LCD displays. The presenter of the Gitmo paper managed to put up some color overheads, which made her presentation significantly more effective. And my question to them was: why are you writing a paper about pictures?

It would seem to me that there is room in the field for us to innovate beyond the 10,000-word journal article and engage the Web and digital media. James Der Derian, who was discussant on one of these panels, is doing some remarkable work with documentary film. The two papers on images would be so much more powerful as multi-media enterprises, but the field has no way to recognize that. And ISA has no way to present that on a panel.

I was at another panel on diplomacy (also with Der Derian…). Of note there was the way in which the military, especially in the US, is taking over traditional diplomacy. Counter-insurgency operations only serve to magnify this trend. And yet, I asked, why is it that the pragmatism of the military is willing to embrace these new forms of diplomacy while diplomacy itself looks much as it did 30 years ago? Of all the agencies within the US government, the Pentagon is far and away the most innovative in using information technology resources. Imagine the State Department embedding journalists in the Six-Party Talks. Imagine the State Department’s public diplomacy program with the resources of the Pentagon’s information operations. Imagine the State Department with a website filled with cool photos like any of the .mil sites. Imagine a first-person interactive negotiating game on the State Department’s website (like the Army’s first-person shooter games).

Information technology is changing the stuff that we study. Information technology is changing the way we conduct our craft. And yet, some institutions seem slow to catch up. Alas, our own profession seems to be one of them.

For crying out loud, how hard would it be for ISA to just buy some wireless access for everyone already!!!

And, for crying out loud, how hard would it be to get a truly transformational diplomacy?

Frackin’ Toasters

In the mailbox today, I found my pre-ordered copy of Peter Singer‘s new book Wired for War: The Robotics Revolution and Conflict in the 21st Century. NPR had an interview with Singer yesterday, which gives you a good sense of his argument and some of the fascinating and frightening changes coming down the pipeline in military affairs.

I was excited to sink my teeth into this before the semester gets started, since I’m eager to update my curriculum on battlefield robots, and since I’ll be blogging in an upcoming symposium at Complex Terrain Lab on the book next month. I’ll save most of my substantive remarks for that forum, and for such time as I’ve actually read the entire book. But based on the first two pages, I have two quick initial reactions:

1) From the very first three sentences, Singer does not disappoint:

“Because they’re frakin’ cool. That’s a short answer to why someone would spend four years researching and writing a book on new technologies and war. The long answer is a bit more complicated.”

I love it – you don’t get a better hook or prose more engaging than that.

2) However I must take issue with a certain assertion in Singer’s very first (and otherwise fascinating) endnote (p. 439), on the etymology of the word “frak”:

“Frak is a made-up expletive that originated in the computer science research world. It then made its way into video gaming, ultimately becoming the title of a game designed for the BBC Micro and Commodore 64 in the early 1980s. The main character, a caveman called Trogg, would say ‘Frak!’ in a little speech bubble whenever he was ‘killed.’ It soon spread into science fiction, appearing in such games as Cyberpunk 2020 and the Warhammer 40,000 novels. It crossed over into the mainstream most explicitly in the new 2003 reboot of the 1970s TV series Battlestar Galactica. That the characters in the updated version of the TV show cursed, albeit with a made-up word, was part of the grimier, more serious feel of the show.”

In fact, however, the word was used (ok, maybe not quite as frequently) in the earlier show as well – albeit spelled “frack.” According to Battlestar WikiBlog:

“”Frak” is derived from the Original Series expletive, “frack,” a term used in character dialogue far less often (or “colorfully”) than its counterpart in the Re-imagined Series. The Re-imagined Series’s production team said they felt that “frack” should be a four-letter word, hence “frak”. The term “frack” was obviously used in dialogue in the Original Series to comply with FCC and other broadcast decency standards because the FCC has jurisdiction over the content of broadcast TV.”

See also here… I don’t generally encourage using Wikipedia as a primary source (take heed ye Polsci 121 students) but in this case I can’t think of a better place to get a sense of the popular understanding of a made-up word’s etymology.

That aside, I look forward to reading and commenting on the rest. Good stuff.

UPDATE (11:22pm). Having put the kids to bed, am now on p. 14 – if this isn’t a good reason to go buy this book, what is? Singer writes:

“[This] book makes many allusions to popular culture, not something you normally find in a research work on war, politics, or science. Some references are obvious and some are not (and thus the first reader to send a complete list of them to me at www.pwsinger.com will receive a signed copy of the book and a Burger King Transformers collectible).”

How frakking cool is that?
