Tag: information technology

The Self-Fulfilling Prophecy of High Tech War

 

In the fall of 2014, then-Defense Secretary Chuck Hagel announced his plan to maintain US superiority against rising powers (i.e., Russia and China). His claim was that the US cannot afford to lose its technological edge – and thus its superiority – against a modernizing Russia and a rapidly militarizing China. To ensure this edge, he called for a "Third Offset Strategy."


Google Scholar Dystopia

Guest Post by Deborah Avant

Anyone remember that 1980s Terry Gilliam film Brazil – a futuristic dystopia where overreliance on bizarre and mostly broken machines led to equally bizarre social maladies? Well, I’m having a Brazil moment.

According to Google Scholar, I am no longer the author of my 2005 book. If you search my name, the book has disappeared. If you search the title, a (very well done) review article comes up, but no book. The book – and my relation to it – has disappeared even when you search subject terms.

According to an article in the Academic Newswire, this is a common occurrence. And a quick email to six of my blogger friends revealed at least one similar recent incident.

The clincher? While the Google Scholar team was quick to get back to me, apologized, and promised to "keep this example in mind as [they] update the indexing algorithms," they could not fix it, given that "our system is entirely automated" (!). I'm annoyed – and ever more so as I realize how many times in a day I check Google Scholar. It is free, it is easy, and there is no need to log in. Precisely because of that ease, many programs also rely on its results. So do many students.

According to librarians, we shouldn't. A nice page on Northwestern Library's website reminds us that we should use more than one source – for gathering information about citations as well as for general research (read it here). It also explains the strengths and weaknesses of each.

None, though, are as easy as Google Scholar. We need another free and easy tool and maybe one that takes advantage of “all the good metadata generously offered to them by scholarly publishers and indexing/abstracting services” mentioned in the Academic Newswire article.

Zuckerberg – what about Facebook Scholar?

Kindle Fraud

A large and growing problem, apparently. Via Jim MacDonald, who brings greater publicity to the plight of S.K.S. Perry, a self-published author who discovers that his book is already being sold by someone else on Amazon… and finds that Amazon doesn’t seem to give a damn.


Many of our readers are academics who, I assume, have discovered — or will soon discover if they use the right search terms — that their own books are available in the form of illegal downloads. This suggests that repackaging as someone else's e-book, if it hasn't happened already, is in many a recent academic book's near future.

Of course, those of us who don’t write about Zombies or produce trade books make little money from royalties anyway. Still, I find the whole thing disturbing on a personal level, let alone on the grounds of general ethics. These trends also bode ill for the already reduced world of academic publishing.

#Opensocietyfail

Glenn Greenwald reports on the case of Birgitta Jonsdottir, the Icelandic MP and former Wikileaks volunteer. The U.S. Department of Justice has subpoenaed Jonsdottir’s Twitter records, as well as the records from many other users of the service, from November 2009 onward on the grounds that the department believes that the records may be used in a criminal investigation.

What is newsworthy about this is not that the U.S. DoJ continues to investigate what the American government must, by definition, regard as a violation of its sovereign prerogative to release classified information. Rather, it is that Twitter requested the federal court order be unsealed to allow the affected users to object to the government’s investigation, which had hitherto been kept secret.

Twitter’s actions allow us to further refine Charli’s thoughts on the recent Foreign Affairs article by Clay Shirky. In particular, this should remind us that the U.S. can’t rely on the public sphere to always advance its state interests, and that there are real dangers to relying on a “civil society” that is principally constituted by private corporate actors in order to advance democratization.

As Shirky notes, the U.S. has partially embraced freedom, Net neutrality, and everything else cyberrific about the Web because of what it perceives as the instrumental value of those attributes. Famously, the State Department under Secretary Clinton has embraced Twitter as a tool of public diplomacy. During Iran's summer protests in 2009, the State Department even apparently used Twitter as part of a soft-power exercise in attempted regime change. Alerted that the site was about to be taken offline for maintenance, Clinton aides worked to keep the site online during the protests. (The New York Times account suggests that a pair of twentysomethings did this on their own; one wonders, of course, if this isn't a rewriting of history to account for the fact that the "Twitter revolution" was in almost every respect a giant fail whale.)

The irony that the same technologies have now become the enabling conditions for the dissemination of Wikileaks, a minor-league public diplomacy embarrassment that has also posed acute risks to specific individuals who may be named or falsely accused of espionage by unfriendly governments, is so obvious as to need no exposition. (Despite the observations from astute critics that Wikileaks, like all organizations, requires resources and access, as well as some measure of societal legitimacy, to proceed with its endeavors, we shouldn’t overestimate how high the barriers to entry are–especially for entrepreneurs who may be carrying less baggage than Assange.)

Shirky recognizes many of these arguments and elaborates a more nuanced argument about why the United States should support an information infrastructure that will help democratize the world's remaining and rather astoundingly resilient authoritarian states. His contention is that eventually repressive states will face a tradeoff between allowing open communications, which facilitate trade and economic growth, and choking off dissent, which requires the state to be able to throttle open communications (in both senses of the word: moderating their speed and strangling them).

Yet Shirky overstates this dilemma. He recognizes that samizdat and Xeroxes and fax machines and text messaging and Twitter–each generation, it seems, brings its own new revolutionary technology–have only sometimes contributed to democratizing outcomes. Yet he argues that in the long run, open communication leads to open societies. Consider the printing press and the postal service, he says. The former facilitated the Protestant Reformation and the latter the American Revolution. Quod erat demonstrandum.

Shirky’s optimistic technological determinism rests on questionable historical inferences. To paraphrase Zhou Enlai, it is too soon to tell what will be the consequences of the movable type printing press–and, a fortiori, of the Movable Type blogging software. After all, the most searching and expansive dictatorships in human history grew and matured in the twentieth century, and were every bit as enabled by twentieth-century technologies as were their democratic counterparts. (Imagine a Nazi Germany without the airplane or a Soviet Union without the telegraph, to say nothing of North Korea today without nuclear weapons.)

It is true that such regimes invest huge amounts of resources into censorship. Consider Internet censorship in Beijing. But it’s not at all clear that Beijing is trying to restrain the development of an ideal speech situation that will lead the Chinese people to rise up and demand Habermasian democracy. Rather, many accounts suggest that the CCP is more worried about the development of more nationalist and anti-corruption movements–neither of which, to say the least, is pro-democratic. Nor does the example of the USSR and of Eastern Europe offer much hope. Had Gorbachev never become General Secretary, the Soviet Union might well have been able to persist for generations longer as a decrepit, wasting regime that was nonetheless able to mobilize sufficient physical repressive power to sustain itself. In fact, it might well have turned out looking something rather like the government that Putin built, with fewer BMWs and more MiGs.

The relationship between the U.S. government and Twitter similarly demonstrates that the outcomes of the public sphere and the state's interests are not always congruent. Twitter's request to unseal the subpoena has led to some adverse publicity for the DoJ this weekend. And the State Department will of course have to spend some time soothing the hurt feelings of the Icelandic government, though in the long term all sides understand that the Icelanders, like the Melians, will have to give in.

The real damage to the Twitterites’ hopes for techno-democratization, however, lies in the fact that the Justice Department’s request is perfectly reasonable and justifiable by all legal standards. Twitter can’t refuse, so they protest by publicizing the request. Yet publicity in this case is simply precedent-setting, and it is a precedent that countries with Freedom House scores lower than America’s will happily cite. For repressive regimes, the benefit is clear.

A chilling effect will set in among the citizens of freer countries, as well. Just the rumor that the federal government would refuse to hire graduate students who read Wikileaks cables, as well as the more concrete instructions to federal employees and contractors not to read the material, has–in my direct, personal experience–led academics and grad students to shy away from discussing or reading such materials.

Just as important, we should remember that Google, Twitter, and Facebook are not communications technologies in the same sense that the printing press was. They are companies that require vast resources to operate and can function only with the permission of a host government. In an open society, they will promote openness. In a closed society, there is no guarantee they will do so. As always, economics and technology are important to determining political outcomes, but politics is primary.

Could Simple Automated Tools Help Wikileaks Protect Its Afghan Sources?

Julian Assange has a problem. When pressed by human rights organizations to redact any current or future published documents because they too feared the effects on Afghan civilians, he reportedly replied that he had no time to do so and “issued a tart challenge for the human rights organizations themselves to help with ‘the massive task of removing names from thousands of documents.'”

Leaving aside his alleged claims about the moral responsibility of human rights groups for his own errors, the charitable way to think about his reaction is that Assange wants to do the right thing but simply doesn’t have the capacity. Indeed, in a recent tweet he implored his followers to suggest ideas:

Need $700k for our next harm-minimization review… What to do?

Fair enough. Here’s an idea: how about using information technology?

As my husband (the Household Chief Technology Officer) pointed out over coffee this morning, what Assange is essentially in possession of is a large quantity of text data. There are many qualitative data analysis applications that allow users to easily sift through such data in search of specific discursive properties – I use one myself when I analyze interviews, focus groups or web content. Named entity recognition software easily allows users to identify all names or places in large quantities of text. Open-source variants like AFNER are available.

Corporations and governments already controversially use such tools for data-mining, to search for connections between names and places in large quantities of text. Could they not be equally leveraged in the service of privacy and confidentiality? How hard or costly would it really be to use such tools to identify and then automatically redact all names in a body of text, or to have a human being (or team of beings with a clear-cut coding scheme) go through the entire dataset with a few keystrokes and choose what should be removed or blacked out?
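For illustration of what such a pass might look like – a sketch only, using the open-source Python library spaCy as a stand-in for AFNER or any other named-entity recognizer; the sample sentence and the choice of entity labels are invented – something like this would flag and black out person and place names:

# Minimal sketch of NER-based redaction. Assumes spaCy and its small English
# model are installed (pip install spacy; python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

def redact_names(text):
    """Replace person and location names with a [REDACTED] placeholder."""
    doc = nlp(text)
    redacted = text
    # Walk the entities back-to-front so earlier character offsets stay valid.
    for ent in reversed(doc.ents):
        if ent.label_ in {"PERSON", "GPE", "LOC", "FAC"}:
            redacted = redacted[:ent.start_char] + "[REDACTED]" + redacted[ent.end_char:]
    return redacted

# Hypothetical example line:
print(redact_names("Local elder Abdul Qadir met patrol officers near Gardez on Tuesday."))

A pass like this would over- and under-redact at the margins, of course, which is exactly where a human coding team with a clear-cut scheme would come in.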

For me, it would be hard, unless someone handed me a software package that already blended these elements. But that’s primarily because I’m not a computer programmer. Julian Assange is.

Questions for readers: if you understand software design and available OTS or open-source applications better than I do, how far-fetched is it to solve Wikileaks’ redaction problem in this way? Am I being daftly optimistic here? Or, do you have other ideas in response to Mr. Assange’s query? Comment away.

[cross-posted at Lawyers, Guns and Money]

Digital Burqa

A few days ago, Charli pondered “whether or not the Internet and social media empowers civil society or instead simply offers states new tools of repression and governance.” And she provided a link to an excellent video about Iranian bloggers. I haven’t been able to get the question or the video out of my head. This is not my topic/area of research, but I will offer a few tentative thoughts to see if it will spark some discussion…

What color is your burqa?

If we were to visualize the Internet, would we not see a vast social space populated by individuals (men and women) wearing burqas, niqabs, chadors, and hijabs? Even in social networks, how many people interact without securing a measure of (an admittedly illusory) “privacy”? Almost all of those who comment on this blog, for example, wear digital burqas, except the listed contributors who are hijabed. (For we are all aware of the Nietzschean dictum that to talk much about oneself is also a way to conceal oneself.)

(What fascinates me is that so many wear digital burqas voluntarily, particularly in societies which are nominally non-authoritarian. From whence does this fear of the gaze of others originate in supposedly free societies? But, I digress…)

If you ask individuals in authoritarian or non-authoritarian contexts why they inhabit these personal panopticons, they would probably tell you that their burqa gives them mobility in the public sphere while avoiding the gaze/persistent memory of undesirable others and perhaps the state. Their burqa also enables a measure of subversion and license (as does the actual burqa and niqab even in conservative societies.)

Repression is understood in this context as the lifting of the digital veil by the state and/or the incarceration of authors.

The real question for me is not why an authoritarian state occasionally seeks to lift the veil on suspected dissidents (all states do this), but why a strong authoritarian state tolerates this potentially subversive social space at all. Technophiles will say that the state has no choice in this digital age, but this argument is not convincing when one is dealing with strong, capable states. After all, how many blogs emanate from Pyongyang? Not many (if any), I suspect. States can attempt (and more or less succeed) to block the technology wholesale; the more challenging task is to permit the technology but to censor/filter particular servers. So why take on this more difficult challenge in governing?

The Spider and the Web

There is often an assumption in debates about social networks in authoritarian countries that civil society is antecedent to the state. However, outside of the Anglo-American tradition civil society is certainly not an autonomous historical development. (Even within the Anglo-American tradition it is doubtful that civil society today is logically antecedent, since the state shapes every element of civil society through public policies.) Late-developing states have consistently sought to create bourgeois civil society in a hothouse in order to catch up to the early industrializers. To borrow an evocative metaphor from Bruce Cumings' work on the developmental state: the spider builds the web; there are no webs without spiders.

The challenge for late-industrializing states has traditionally been to create a bourgeoisie which can achieve hegemony over the existing social classes without fomenting a violent reactionary revolution.

I do not know enough about Iran since its (reactionary? alter-modern?) revolution to say why its state permits this potential site of resistance. However, I do think it is worth asking the question. My hunch (and it is only that) is that the state hopes to create a particular modern bourgeoisie with “Iranian characteristics” (on the Chinese model) while exposing and expunging the secular, cosmopolitan, counter-revolutionary bourgeoisie. In Hegelian fashion, the state projects its role as restoring a threatened organic unity.

It is unclear to me whether the young bloggers/tweeters of Iran have established hegemony within their society. Internet penetration in Iran has grown dramatically in recent years and it is well above the regional average. However, the bloggers’/tweeters’ frequent appeals in English to a global audience cast some doubts in my mind. But again, I do not know enough and hope others will correct me. Perhaps, when the authoritarian state has stamped out real threats to its survival, it occasionally lets the reformers de-legitimize themselves by appealing to the international community in the language of the global hegemon. As the Iranian state frequently expresses concerns about foreign subversion, this seems like a plausible scenario.

In one conversation I had with an Iranian blogger (who ironically used Chinese software to acquire his/her chador), s/he rejected the notion that their struggles against the state were assisted by the US State Department’s efforts to buttress Twitter. Of course, the core issue is whether American assistance/intervention is perceived as marginal by the majority of the Iranian population.

Coming Soon… Ataque de Pánico


Uruguayan producer Fede Alvarez uploads a short four-and-a-half-minute film of giant robots destroying Montevideo to YouTube and now has a $30 million contract to develop a feature-length film from it. Alvarez says it cost him $300 to produce — call me a skeptic on that point — but it is cool how quickly information technologies open space for new voices…

Flash the message…ten red balloons

DARPA recently ran a balloon challenge. In case you missed it, DARPA placed ten 8-foot red balloons around the country on Friday and challenged groups to be the first to identify the correct locations of all of them. The winning group received $40k.

Here’s how DARPA defined the challenge:

a competition that will explore the roles the Internet and social networking play in the timely communication, wide-area team-building, and urgent mobilization required to solve broad-scope, time-critical problems. The challenge is to be the first to submit the locations of 10 moored, 8-foot, red, weather balloons at 10 fixed locations in the continental United States. The balloons will be in readily accessible locations and visible from nearby roads.

The MIT team built a website to create an expansive social-networking structure with financial incentives to participate (the payout arithmetic is sketched after the quote):

Have all your friends sign up using your personalized invitation. If anyone you invite, or anyone they invite, or anyone they invite (…and so on) win money, then so will you!

We’re giving $2000 per balloon to the first person to send us the correct coordinates, but that’s not all — we’re also giving $1000 to the person who invited them. Then we’re giving $500 to whoever invited the inviter, and $250 to whoever invited them, and so on…
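To make the arithmetic of this recursive incentive concrete, here is a minimal sketch in Python (the names in the invitation chain are hypothetical; the halving-per-level payout follows the quote above):

# Recursive incentive scheme described on the MIT balloon-challenge site:
# the finder of a balloon gets $2000, and each person up the invitation
# chain receives half of what the person below them received.
def payouts(chain, balloon_prize=2000.0):
    """chain lists names from the finder up through successive inviters."""
    rewards = {}
    amount = balloon_prize
    for person in chain:
        rewards[person] = amount
        amount /= 2  # each inviter gets half of the next person's reward
    return rewards

# Hypothetical invitation chain, finder first:
print(payouts(["finder", "inviter", "inviters_inviter", "great_inviter"]))
# {'finder': 2000.0, 'inviter': 1000.0, 'inviters_inviter': 500.0, 'great_inviter': 250.0}

Summed over an arbitrarily long chain, the payouts converge to $4000 per balloon, which is presumably how the team could make the offer without taking on unbounded liability.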

DARPA said the point of the exercise was to observe strategies for social networking and to determine the reliability and credibility of social networking information. More than 4,000 groups participated, and it took the MIT group only nine hours to identify the correct locations of all ten balloons.

The results demonstrate that credible monitoring and reporting do flow through social networks — and the right mix of technology, organization, and incentives can produce impressively quick results. But, I’m curious what you all think this means? DARPA representatives said this was a bit of a fishing exercise and that they weren’t sure exactly what they were looking for or where the data would lead them — but I wonder how you all would interpret this event and what DARPA might conclude from it?

Sting Operations

Maureen Dowd's op-ed "Stung by the Perfect Sting" rattled some cages in the blogosphere this week: Laura McKenna called her a whiner, implying the column was really about her own bad blogger press; Tim Burke claimed she is dissing bloggers by failing to reference our own grand debates over anonymity; and Dan Drezner, being Dan Drezner, accused Dowd of comparing bloggers to muggers. The column seems to have been widely interpreted as a slam against the new media.

I was sorry that none of these posts engaged the actual story in the article, which had almost nothing to do with the blogosphere per se. Part of this is Dowd's fault: her argument was poorly executed and buried under asinine introspection (we bloggers would never exhibit careless narcissism). But look past the fluff and at issue is an important and (yes, Tim) timely legal question raised by not one but two rulings just this month: Should a person's right to anonymous speech shield him/her against defamation suits?*

Anonymous speech is protected by the First Amendment. But defamation is not. So what recourse does a plaintiff have when slandered anonymously? At Digital Media Lawyer, David Johnson explains the "chicken and egg" problem this way:

If trial proves that the speaker is liable for defamation, then his anonymity was not entitled to First Amendment protection and should be disclosed. If trial proves that the speaker is not liable for defamation, then his anonymity was entitled to First Amendment [protection] and should not be disclosed. However, disclosure of a speaker’s identity is generally required for a court to determine whether his words were defamatory. In other words, you have to disclose his identity to determine whether his identity should be disclosed.

One way around this is the “summary judgment standard” set out in Doe v. Cahill, a 2005 Delaware ruling on whether or not Patrick Cahill, a City Councilman, could obtain the identity of anonymous blogger John Doe for the purposes of a libel suit. Daniel Solove explained the summary judgment standard in a blog post in that year:

In this case, Cahill was a public figure, and to prevail in a defamation lawsuit, he had to prove that (1) Doe made a defamatory statement (damaging to Cahill’s reputation); (2) the statement was concerning Cahill; (3) the statement was published (disseminated to others); (4) others would understand the statement to be defamatory; (5) the statement was false; and (6) Doe made the statement with actual malice (he either knew it was false or acted in reckless disregard of the truth).

Solove criticizes the New York ruling for using a looser standard in the case referenced by Dowd. The plaintiff Liskula Cohen, arguably also a public figure, had been vilified on an anonymous blog as "skankiest in NYC" and was only required to show her case had merit to convince the court to order that Google reveal the blogger's identity. But even if they had used the Doe v. Cahill standard it is hard to see how they would not have ruled in Cohen's favor. The only hangup may have been the requirement that the plaintiff demonstrate a defendant's "malice," but this would seem rather an unfair hurdle when a defendant's identity is unknown. Hence the chicken-and-egg dilemma.

Did the court make the right choice? Should a person’s right to anonymous speech (generally, not just in the blogosphere) protect them against defamation suits if filing the suit essentially requires knowledge of the defendant’s identity?

Dowd's key argument is: No. She, however, is talking not only about defamation but also about various pernicious forms of cyber-bullying and hate speech as well. (She is also not, of course, opposing anonymous or pseudonymous deliberative argument à la The Federalist Papers; it is a straw man to claim that she has "conflat[ed] and tar[red] all anonymous commentary because some act rudely on the Internet" when in fact she carefully distinguishes constructive pseudonymity from mere character assassination.)

On this, I'm with Dowd. I am an advocate of pseudonymous (and to some extent anonymous) blogging, but I am against mindless slanderous invective for its own sake. It cheapens political deliberation, distracts us from the issues, and sets a bad example for our children. As a commenter wrote over at Copyrights and Campaigns:

“Having read the Federalist Papers, I don’t recall Publius defaming as ‘skanks and hos’ those who disagreed with the adoption of the Constitution.”

My fellow political bloggers are correct to point out that this behavior is also not representative of most anonymous bloggers or commenters. But that’s precisely the reason to agree with Dowd and with the court’s decision. Ultimately, “Anonymous Blogger” Rosemary Port’s defense rested on the claim that no one takes the blogosphere seriously as a source of facts. According to the ruling:

“The Blogger argues that even if the words [‘skank’ and ‘ho’] are capable of a defamatory meaning, ‘the context here negates any impression that a verifiable factual assertion was intended,’ since blogs ‘have evolved as the modern day soapbox for one’s personal opinions,’ by ‘providing an excessively popular medium not only for conveying ideas, but also for mere venting purposes, affording the less outspoken a protected forum for voicing gripes, leveling invective and ranting about anything at all.'”

To the extent that this perception is true (that is, to the extent that bloggers get tarred in the public eye as mindless opinion-spouters) it’s not because of people like Dowd, but because of people like Port who abuse their anonymity to defame others – an act that is in fact not protected by the First Amendment – and then claim this as some kind of moral high ground.

________________________
*The case raises other interesting questions as well. For example: what is defamation? The court found that allegations of sexual promiscuity count, and I would grudgingly agree, though you could have a whole feminist debate about what that signifies. I also think you could argue, though Cohen did not, that this was not simply defamation but a kind of hate speech – in fact, had the blogger turned out to be male, I think we’d be hearing precisely such claims of misogyny – interesting double standard. Also, Rosemary Port has now sued Google for complying with the court’s order – hard to imagine that she has a case, since Google’s terms of use state it will hand over information if required to do so by the government, but as Solove points out perhaps Google was negligent in failing to go to bat for her? Worth watching to see.

The “Neda” Effect in Sri Lanka


Yesterday Channel 4 in the UK aired the above video, allegedly recorded on a mobile phone and smuggled out of the country by human rights activists, apparently of Sinhalese soldiers massacring Tamil noncombatants earlier this year. The Sri Lankan government (naturally) argues it is a fabrication. Human Rights Watch’s James Ross says “there is no way to tell if the footage is genuine,” but argues that the release of the film underscores the need for an “objective” inquiry into atrocities – by both sides – during the conflict.

I agree with Human Rights Watch in general – that whatever the validity of the film, truth-letting is politically necessary in order to move the country beyond two and a half decades of armed struggle.

But I’m not so satisfied with Ross’s claim that we can’t know if the film itself is valid, since ultimately footage like this will increasingly matter, in Sri Lanka and elsewhere, as post-conflict justice is pursued through courts.

Anyway, are there really no standards of evidence emerging for user-generated video such as this? Channel 4 in the UK has described the measures it took to authenticate the film before airing it, including qualitative comparisons to similar footage from the Bosnian war. It's interesting to think about what kind of authentication could hold up in a hypothetical war crimes court.

Would it not be useful to know more, for example, about how Channel 4 acquired the video? How it made its way from the soldier who shot it to the human rights activist who passed it along to the journalist? One can imagine a number of legitimate scenarios; one can imagine others. Answers to these questions can be found, and they have a bearing on the credibility of the film. Retracing that chain to the original cell phone could lead to additional facts of the case – a skill already in use by cyber forensic researchers in domestic contexts. And relevant evidentiary standards must be under development by US law enforcement agencies, since cell phone video is increasingly being used to investigate criminals and agents of the state alike.
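One small building block of such a standard – offered here only as an illustrative sketch, not a description of any court's actual practice – is recording a cryptographic fingerprint of the file at each hand-off, so that later copies can be checked for silent alteration. In Python, with a hypothetical file name:

# Compute a SHA-256 fingerprint of a video file so each custodian in the
# chain (activist, broadcaster, court) can verify they hold identical bytes.
import hashlib

def fingerprint(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: any edit to the footage, however small, changes the digest.
# print(fingerprint("mobile_footage.3gp"))

Fingerprints say nothing about whether the original recording was staged, of course; they only establish that what the court sees is byte-for-byte what the activist handed over.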

Not being a cyber forensics expert, I don’t claim to know what these standards are or offer suggestions as to how to view this particular video artifact. But such solutions should be devised, as claiming “one can never know for certain” will ultimately be self-defeating for the human rights community, feeding into the denials of abusive governments. The “Neda effect” – the use of cell-phone video to capture and make visible acts of brutality – has the potential to shift the balance of power between governments and citizens, but also the potential for abuse and misdirection. Human rights organizations should be taking the lead in figuring out how institutions of international justice can leverage such technology while mitigating its side-effects, rather than shrugging it off altogether.

Regime adaptation and anti-regime collective action

Mark Beissinger, in a fantastic article entitled “Structure and Example in Modular Political Phenomena: The Diffusion of Bulldozer/Rose/Orange/Tulip Revolutions” (abstract), develops an account of what he terms “modular revolutions”:

In the study of collective action, the notion of modularity has often been applied to the borrowing of mobilizational frames, repertoires, or modes of contention across cases. The revolutions that have materialized among the post-communist states since 2000 are examples of a modular phenomenon in this sense, with prior successful examples affecting the materialization of subsequent cases. Each successful democratic revolution has produced an experience that has been consciously borrowed by others, spread by NGOs, and emulated by local social movements, forming the contours of a model. With each iteration the model has altered somewhat as it confronts the reality of local circumstances. But its basic elements have revolved around six features:

1) the use of stolen elections as the occasion for massive mobilizations against pseudo-democratic regimes;
2) foreign support for the development of local democratic movements;
3) the organization of radical youth movements using unconventional protest tactics prior to the election in order to undermine the regime’s popularity and will to repress and to prepare for a final showdown;
4) a united opposition established in part through foreign prodding;
5) external diplomatic pressure and unusually large electoral monitoring; and
6) massive mobilization upon the announcement of fraudulent electoral results and the use of non-violent resistance tactics taken directly from the work of Gene Sharp, the guru of non-violent resistance in the West.

Beissinger also contends that not only do anti-regime movements learn–and derive inspiration–from past revolutions, but that regimes learn as well; in fact, they take proactive steps to disrupt the processes that lead to successful “color revolutions.”

Regimes have adapted by preventing adequate election monitoring, particularly by western organizations such as the OSCE; in consequence, there's no independent authority around to declare elections fraudulent. They've gone after independent media and otherwise attempted to limit the ability of regime opponents to coordinate with one another or get their message to the broader public. And so on and so forth. (We've even blogged about this kind of thing a bit in the context of Russia's last national election.)

Beissinger’s conclusion on this front is pessimistic for the success of future “color revolutions.” Regime adaptation, he argues, will outpace the strategies and tactics of democratic (or, at least, anti-regime) movements.

If this all sounds familiar, that's because we're seeing a stunning example of such adaptation in Iran: access cut to social networking technology and websites (including, possibly, Tehran Bureau), cell phone communications cut, and a media blackout that extends, apparently, to jamming BBC reports, shutting down foreign media bureaus, and throwing out foreign journalists. The regime has deployed a massive security presence in Tehran (and presumably in other major cities); some of its forces are roving the streets on motorcycles in an attempt to quickly, and brutally, crack down on unrest.

In at least one respect, the true facts about the Iranian election–which we are unlikely to ever know–are secondary to a basic fact: we’re seeing a vivid example not only of regime adaptation to a particular “revolutionary” process, but also strong evidence–at least so far–that modern communications technologies have failed to tip the balance when it comes to “networks” against “the state” to the degree that many, many scholars, pundits, and social theorists have claimed.

Which, oddly enough, is what my recent book concludes is a “lesson” of the Reformations Era for the present period.

YouTube Politics Part 2

Max Harper, who piloted the concept of the Blueprint for Change videos for President Obama’s 2008 campaign, provided a point-by-point playbook today for how the Obama campaign used Web 2.0 to win the election.

At first, I found myself wondering how he could speak so candidly about it. But then again, Harper and everyone in the room understood one key feature of the political revolution he was describing: that because of the dynamic relationship between information technology and politics, every single thing he told us about campaign strategy and Web 2.0 would be out of date anyway by 2012.
