Tag: digital methods

Bad Metaphors: Journals, Articles, and Pages

As a graduate student at an urban university, I envy this RA’s large office.

New technologies adapt terms from older tools, even when they’re curiously inappropriate. Consider “dashboard,” which we now use to refer to an at-a-glance display of critical information (as in Google Dashboard) or to the control panel of an automobile. Originally, however, a dashboard was (per Wikipedia) “a barrier of wood or leather fixed at the front of a horse-drawn carriage or sleigh to protect the driver from mud or other debris ‘dashed’ (thrown) up by the wheels and horses’ hooves.”

Clearly, that term outlasted its original meaning.

Nobody cares, of course, if words find new meanings, even to the point of rendering the original definition archaic. But the analogies that shape our technologies are subject to the same dynamics. Consider, for instance, my current work habits. My workflow for reading and notetaking involves highlighting PDFs in iAnnotate, syncing the files to my desktop via Dropbox, and finally recalling them on my monitor when I write in LaTeX.

Every stage of this process is inefficient and governed by obsolete metaphors.


Gender, Violence and Digital Emergence

One of the most unsettling findings of our media and radicalisation research was the way in which the suffering of certain individual women is turned into a cause by radical Islamic groups, leading to violence committed by men in those women’s names. The availability of digital media, combined with a certain doctrinal entrepreneurialism among those using religion to justify political violence, has resulted in the widespread dissemination of amateur video clips depicting a specific woman’s plight and calling for reprisals. If you want to understand the link between online propaganda and offline action, it appears that representations of women’s bodies and their “honour” are often central. My project colleagues and I document two such cases in a research article published this week.

Dua Khalil Aswad, an Iraqi teenage girl of the Yazidi faith, was stoned to death on 7 April 2007 by a Yazidi mob of dozens of men, mostly her relatives, for eloping and spending the night with a Muslim man. Her death was recorded on a mobile cameraphone by a bystander and circulated on the internet. It was eventually picked up by NGOs and international media, where the killing was framed in terms of human rights abuses. However, the clip was also identified by so-called ‘mujahideen’ in Iraq, namely Al-Qaeda in Iraq and affiliated groups. They claimed Dua was killed because she had converted to Islam. They argued that her killing demonstrated how non-Islamic faiths violate human rights (they know how to call upon human rights discourse too), and that this warranted the mujahideen bringing their own kind of justice to Dua’s killers. Between April and September 2007 a series of high-profile retaliatory attacks saw the individual and collective killing of hundreds of Yazidis and the wounding and displacement of many more. One of the jihadist groups involved in these attacks, Ansar Al-Sunna, posted a video justifying their violence. Dua’s death was woven into a longer strategic narrative perpetuated by jihadists concerning a war between Islam and other faiths.

Three years later, in 2010, we found considerable religious tension in Egypt and the Arab world stemming from several cases of young Coptic Christian women in Egypt who had allegedly converted to Islam and been forced by the Coptic Church, with the aid of the Mubarak regime’s security forces, to return to Christianity. The alleged plight of these women became the subject of media debates, street demonstrations and protests by Muslims and counter-efforts by Copts in Egypt, inflammatory editorials, online speculation, and finally, violence against innocent people. One of the most prominent episodes occurred in July 2010. Camilia Shehata, a Coptic Christian woman in Egypt, disappeared and allegedly converted to Islam. She then returned under the shelter of the Coptic Church and released various videos to explain her case. Her story was amplified by Christian and Muslim groups alike, but subsequent attacks in her name occurred in Iraq rather than Egypt. Al-Qaeda in Iraq took hostages in a Baghdad church in October 2010 and announced on YouTube:

Through the directions of the Ministry of War of the Islamic State of Iraq, and in defence of our weak and oppressed, imprisoned Muslim sisters in the Muslim land of Egypt, and after detailed choices and planning, a small group of jealous Mujahideen, beloved servants of Allah, launched an offensive against a filthy center of Shirk [the Church] which Christians in Iraq have for so long taken as a place from which to wage their war and plot against Islam. By Allah’s Grace, we were able to capture those who had gathered there and take control over all entrances.

The Mujahideen of the Islamic State of Iraq give the Christian Church of Egypt 48 hours to clarify the condition of our Muslim sisters imprisoned in the churches of Egypt, and to free them all without exception, and that they announce this through the media which must reach the Mujahideen within the given time period.

The Iraqi government chose to attack the hostage-takers rather than negotiate. The hostage-takers detonated their suicide bombs in the church and 53 people died.

These events confirm one thing we know: terrorist groups can derive asymmetrical benefit from digital media, since content from individual lives and incidents can be rapidly reframed to bolster longstanding narratives such as the notion of a clash between Islam and other religions. But what struck us as particularly significant was the degree of contingency involved. The line from the initial acts to the eventual victims, and the way in which events are incorporated into others’ narratives, seem chaotic, escaping the control of the initial actors. The economy of exchange through media is irregular: digital footage may emerge today, in a year, or never, and it may emerge anywhere, to anyone. The concept of agency becomes complicated: the span of things done ‘by’ Al-Qaeda is beyond its control. Is distributed agency something new, only made possible by digital connectivity, or have social and religious movements always depended upon – and hoped for – a degree of contingent taking-up of their cause?

While we cannot know why the Yazidi man with a digital camera recorded the stoning of Dua (or why he recorded others recording it with their cameras), the increasing recording of everyday life certainly produces more material for political and religious exploitation. As we have seen, this allowed Al-Qaeda to instantly reframe a woman’s life as a “sister’s” life to shame men into action. If the killing of Neda Soltan during the Iranian election protests in 2009 represented one face of today’s mix of gender, violence and digital emergence, the cases of Dua and Camilia show another.

Cross-posted from the journal Global Policy

Challenges to Qualitative Research in the Age of Big Data

Technically, “because I didn’t have observational data.” Working with experimental data requires only calculating means and reading a table. Also, this may be the most condescending comic strip about statistics ever produced.

The excellent Silbey at the Edge of the American West is stunned by the torrents of data that future historians will be able to deal with. He predicts that the petabytes of data being captured by government organizations such as the Air Force will be a major boon for historians of the future —

(and I can’t be the only person who says “Of the future!” in a sort of breathless “better-living-through-chemistry” voice)

— but also predicts that this torrent of data means that it will take vastly longer for historians to sort through the historical record.

He is wrong. It means precisely the opposite. It means that history is on the verge of becoming a quantified academic discipline, for two reasons. The first is that statistics is, very literally, the art of discerning patterns within data. The second is that the history academics practice in the coming age of Big Data will not be the same discipline that contemporary historians are creating.

The sensations Silbey is feeling have already been captured by an earlier historian, Henry Adams, who wrote of his visit to the Great Exposition of Paris:

He [Adams] cared little about his experiments and less about his statesmen, who seemed to him quite as ignorant as himself and, as a rule, no more honest; but he insisted on a relation of sequence. And if he could not reach it by one method, he would try as many methods as science knew. Satisfied that the sequence of men led to nothing and that the sequence of their society could lead no further, while the mere sequence of time was artificial, and the sequence of thought was chaos, he turned at last to the sequence of force; and thus it happened that, after ten years’ pursuit, he found himself lying in the Gallery of Machines at the Great Exposition of 1900, his historical neck broken by the sudden irruption of forces totally new.

Because it is strictly impossible for the human brain to cope with large amounts of data, in the age of big data we will have to turn to the tools we’ve devised to solve exactly that problem. And those tools are statistics.

It will not be human brains that directly run through each of the petabytes of data the US Air Force collects. It will be statistical software routines. And the historical record that the modal historian of the future confronts will be one mediated by statistical distributions, simply because such distributions allow historians to confront data that arrives in vast torrents with tools appropriate to the problem.
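
To make that concrete, here is a minimal sketch in Python of what such mediation looks like in practice. The file name and column are hypothetical stand-ins, not any actual Air Force dataset; the point is that the routine streams over records and keeps only a distribution.

    # A toy "statistical software routine": stream a large archive and
    # reduce it to a distribution of events per year. The file name and
    # column are invented for illustration.
    import csv
    from collections import Counter

    counts = Counter()
    with open("air_force_records.csv", newline="") as f:
        for row in csv.DictReader(f):    # stream rows; never hold the archive in memory
            counts[row["year"]] += 1     # keep only the distribution, not the records

    total = sum(counts.values())
    for year in sorted(counts):
        print(year, counts[year], f"{counts[year] / total:.1%}")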

Onset of menarche plotted against years for Norway. In all seriousness, this is the sort of data that should be analyzed by historians but which many are content to abandon to the economists by default. Yet learning how to analyze demographic data is not all that hard, and the returns are immense. And no amount of reading documents, without quantifying them, could produce this sort of information.
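
How low is the barrier? As a hedged illustration (the numbers below are invented, not the Norwegian series from the figure), fitting a trend to a demographic series of this kind takes a few lines of Python:

    # Toy demographic analysis: fit a linear trend to a time series.
    # These numbers are invented for illustration, not the actual Norwegian data.
    import statistics  # linear_regression requires Python 3.10+

    years = [1860, 1880, 1900, 1920, 1940, 1960]
    age_at_menarche = [15.6, 15.1, 14.6, 14.1, 13.7, 13.3]

    slope, intercept = statistics.linear_regression(years, age_at_menarche)
    print(f"trend: {slope * 10:+.2f} years of age per decade")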

This will, in one sense, be a real gift to scholarship. Although I’m not an expert in Hitler historiography, for instance, I would place a very real bet with the universe that the statistical analysis in King et al. (2008), “Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler,” tells us something very real and important about why Hitler came to power that simply cannot be deduced from the documentary record alone. The same could be said for an example closer to (my) home, Chay and Munshi (2011), “Slavery’s Legacy: Black Mobilization in the Antebellum South,” which identifies previously unexplored channels for how variations in slavery affected the post-war ability of blacks to mobilize politically.

In a certain sense, then, what I’m describing is a return of one facet of the Annales school on steroids. You want an exploration of the daily rhythms of life? Then you want quantification. Plain and simple.

By this point, most readers of the Duck have probably reached the limits of their tolerance for such statistical imperialism. And since I am a member in good standing of the qualitative and multi-method research section of APSA (which I know is probably not much better for many Duck readers!), who has, moreover, just returned from spending weeks looking in archives, let me say that I do not think the elimination of narrativist approaches is desirable or possible. First, without qualitative knowledge, quantitative approaches are hopelessly naive. Second, there are some problems that can only practically be investigated with qualitative data.

But if narrativist approaches will not be eliminated, they may nevertheless lose large swathes of their habitat as the invasive species of Big Data historians emerges. Social history should be fundamentally transformed; so too should mass-level political history, or what’s left of it, since the availability of public opinion data, convincing theories of voter choice, and cheap analysis mean that investigating the courses of campaigns using documents alone is pretty much professional malpractice.

The dilemma for historians is no different from the challenge that qualitative researchers in other fields have faced for some time. The first symptom, I predict, will be the retronym-ing of “qualitative” historians, in much the same way that the emergence of mobile phones created the retronym “landline.” The next symptom will be that academic conferences will in fact be dominated by the pedantic jerks who only want to talk about the benefits of different approaches to handling heteroscedasticity. But the wrong reaction to these and other pains would be a kneejerk refusal to consider the benefits of quantitative methods.

Semantic polling: the next foreign policy tool

George Gallup – what have you started?

The traditional methods by which a state learns what overseas publics are thinking are changing. Instead of relying on your embassy staff’s alertness, your spies’ intelligence and the word of dissidents, we’re reaching the point where foreign policymakers can constantly monitor public opinion in other countries in real time. The digitization of social life around the world – uneven, yes, but spreading – leaves ever-more traces of communications to be mined, analysed and acted upon. In a paper that Nick Anstead and I presented in Iceland this week, we called this ‘semantic polling’, and we considered the ethical, political and practical questions it raises.

Semantic polling refers to the use of algorithms and natural language processing to “read” vast datasets of public commentary harvested from the internet, which can be disaggregated, analysed in close to real time, and then used to inform policy. It can give a general representation of public opinion, or very granular representations of the opinion and behaviour of specific groups and networks. Multi-lingual processing across different media platforms is now possible. Companies already provide this service to pollsters and parties in domestic campaigns, and NGOs make use of it for disaster response monitoring. Given how public diplomacy has adopted many techniques of the permanent campaign, it will be no surprise to see semantic polling become part of the foreign policy toolkit.
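
In spirit, a semantic poll reduces a stream of public posts to an aggregate opinion signal. The Python sketch below is a toy version: the lexicon, posts and group labels are all invented, and real vendors use trained multi-lingual models rather than word lists.

    # Toy semantic poll: score harvested posts, then aggregate opinion by group.
    # The lexicon and posts are invented; real systems use trained NLP models.
    from collections import defaultdict

    POSITIVE = {"support", "hope", "good"}
    NEGATIVE = {"oppose", "fear", "bad"}

    def score(text):
        words = set(text.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    posts = [
        ("region_a", "I support the new policy and it gives me hope"),
        ("region_b", "people here fear it and many oppose it"),
    ]

    by_group = defaultdict(list)
    for group, text in posts:
        by_group[group].append(score(text))

    for group, scores in sorted(by_group.items()):
        print(group, sum(scores) / len(scores))   # mean sentiment per group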


The semantic web is the standardization of protocols so that everything on the web becomes machine-readable. This means semantic polling is about more than reading social media data. In principle, our shopping, driving, social media, geolocation and other data are all searchable and analysable. It is only a matter of computing power and the integration of data streams before this method can profile down to the individual behavioural level. This also enables predictive engagement: if Amazon thinks it knows what you want, then a state, with access to more data streams, might use semantic polling and think it knows who will support an uprising and who will not.
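
To see why the integration of data streams is the crux, consider a deliberately crude sketch. Every stream, identifier and rule here is invented, and a real system would use trained models over far richer data, but even joining two toy streams on a shared ID supports individual-level inference:

    # Crude illustration of profiling via joined data streams. All data and
    # the "prediction" rule are invented; this is not any real system.
    purchases = {"user_1": ["vpn subscription"], "user_2": ["cookbook"]}
    posts = {"user_1": ["meet at the square tomorrow"], "user_2": ["great recipe"]}

    def flagged(user_id):
        # join the streams on a shared identifier, then apply a hand-written rule
        signals = purchases.get(user_id, []) + posts.get(user_id, [])
        return any("vpn" in s or "square" in s for s in signals)

    for user_id in sorted(purchases):
        print(user_id, flagged(user_id))
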
Ethically, do people around the world know their tweets, public Facebook data and comments on news blogs are being used to build a picture of their opinion? How should journalists report on this when it happens? Politically, how will states and NGOs use semantic polling before, during and after crises and interventions? Is it predictive, valid and reliable? Will semantic polling’s real-time nature further intensify the pressures on policymakers, since the performance, credibility and legitimacy of their policies can be visualized as they are enacted? Will publics resist and find ways to circumvent it? And given that it is barely regulated at the domestic level, how could or should it be policed in international affairs?
When we thought of this paper it seemed a little like an exercise in science fiction, but interviews with the companies, pollsters and social scientists driving this have convinced us it is developing quickly. Our political science audience in Iceland seemed positive about it – semantic polling offers relatively cheap, unobtrusive and ‘natural’ data that might provide insights into human behaviour that existing methods cannot give. Perhaps a good first step would be for people around the world to understand how semantic polling works, so they can decide what they think about it, since it is their lives that are being monitored.

7/7 five years on: Conflicting memories make an official record difficult

Aldgate station plan, London Underground

A month into the official inquest into the ‘7/7’ London bombings of July 2005, it is clear that the governmental imperative to arrive at a clear, authoritative and final account of what happened on the day might prove impossible to satisfy because of the unreliability of human memory. This was an event in which cameraphone footage from the scene reached the BBC within 20 minutes of the first of four explosions, and iconic images and memorial rituals were in place within days and weeks. Yet it took police four months to take witness statements, and it has taken five years for witnesses to testify in court. It is no wonder that discrepancies emerge. As with 9/11, there are significant differences between sweeping media- and politically-driven narratives of national mourning and the local, particular perspectives of those involved.

An official record would offer some certainty to survivors and grieving relatives, and allow for an objective assessment of how well the emergency services performed. The inquest must be comprehensive and include as many voices as can offer salient information; it must be precise; and it must offer consensus and closure. At a symposium, ‘Conflicts of Memory’, at the University of Nottingham last week, my regular co-author Andrew Hoskins, who has been following the inquest, talked about the inconsistencies emerging between individuals’ testimonies and even within individuals’ own accounts. One ambulance worker said he had drawn a diagram of where bodies were in a carriage on the day of 7/7; he now can’t remember where he drew the diagram or even whether it was someone else who drew it for him.
We can see this for ourselves: witnesses’ transcripts and the evidence in court are available online, the kind of transparency our new media ecology makes so easy. For instance, we can compare witness testimonies with visual representations of what they had seen. Survivors must now try to reconcile what they thought had happened with all of the conflicting verbal and pictorial versions being put before the court.
For Hoskins, it is only by following how, over a long period, events become stretched and extended through complex relations and layers of objects, people and rituals that we can see how consensual memories may be formed. This is not dissimilar to Latour’s argument that law (and science) are merely a set of mediations which enough people can agree to go along with for pragmatic reasons. The result, as with the 7/7 inquest so far, is imperfect. Would it be better for the inquest to settle on a definitive set of technical drawings and edit out inconsistent testimonies in order to reach an official record? This might upset survivors who feel the memory they genuinely hold, and which they have lived with for over five years, has been crossed out as a mistake.
Alternatively, the British state could allow a loose plurality of often-ambiguous accounts to stand together. There would be costs. But with the testimonies, diagrams and other evidence archived and publicly available online, the state could turn it over to the public to make connections and draw conclusions themselves. Inclusive but never definitive: judgement 2.0?
Cross-posted from: https://newpolcom.rhul.ac.uk/npcu-blog/

YouTube and Politics Part 4: Digital Methods

One more bit of coolness from the YouTube conference before I move on to other things. Richard Rogers of the University of Amsterdam gave the keynote here in Amherst this past week. It included this very intriguing video produced by Govcom.org.
