Category: Technology

Trolling Me Softly

While the Russia probe is expanding to include naïve 36-year-old Harvard graduates, pundits all over the world have been worried about elections in other countries. The massive WikiLeaks dump (pun intended) on Emmanuel Macron’s campaign in France did not work, so the next troublesome case seems to be Germany (the UK is fine; they are already leaving the EU).

Continue reading

Morality matters: What the United debacle, the Pepsi ad, and Bill O’Reilly have in common

Social media was abuzz last week with three big missteps by major corporations. Pepsi unveiled a failed television advertisement intended to pay homage to the social protest movement in the U.S. but that instead trivialized the protests and appropriated their imagery for financial gain; the New York Times revealed allegations of sexual harassment against Fox host Bill O’Reilly, and that the host and Fox had paid out nearly $13 million to five women in exchange for their silence; and United Airlines dragged a boarded passenger, David Dao, off a plane in order to let its staff catch a flight to Louisville. It is tempting to think that the moral outrage expressed on social media was a fleeting fit of slacktivism with little purpose. But it is more than that. Continue reading

If You Post It, They Will Come

I know most of you are busy watching the all-too-real reality horror show of the 45th administration, but there has been some interesting news coming out of Russia (sorry, no meteorites or Putin’s nipples). On Sunday, somehow almost 90,000 people went out on the streets in 87 cities all over Russia to protest against corruption. The unsanctioned demonstrations were met with a brutal police crackdown, with around 1,000 protesters arrested in Moscow alone. To make matters worse, none of the TV channels reported the disturbances (apart from Russia Today, to be fair). Channel One spent about an hour on “news of the week,” castigating Ukraine and ISIS and discussing the London terrorist attack, the sale of Alaska to America, and Rockefeller’s life, among other things. Nothing to see here, move on.

What happened?

Continue reading

Ethical Robots on the Battlefield?

Every day it seems we hear more about the advancements of artificial intelligence (AI), the amazing progress in robotics, and the need for greater technological improvements in defense to “offset” potential adversaries. When all three of these arguments get put together, there appears to be some sort of magic alchemy that results in wildly fallacious, and I would say pernicious, claims about the future of war. Much of this has to do, ultimately, with a misunderstanding of the limitations of technology as well as an underestimation of human capacities. The prompt for this round of techno-optimism debunking is yet another specious claim about how robotic soldiers will be “more ethical” and thus “not commit rape […] on the battlefield.”

There are actually three lines of thought here that need unpacking. The first involves the capabilities of AI in relation to “judgment.” As the philosopher quoted above contends, “I don’t think it would take that much for robot soldiers to be more ethical. They can make judgements more quickly, they’re not fearful like human beings and fear often leads people making less than optional decisions, morally speaking [sic].” This sentiment about speed and human emotion (or the lack thereof) has underpinned much of the debate about autonomous weapons for the last decade (if not more). Dr. Hemmingsen’s views are not original. However, such views are not grounded in reality.

Continue reading

The Value Alignment Problem’s Problem

Having recently attended a workshop and conference on beneficial artificial intelligence (AI), I found that one of the overriding concerns is how to design beneficial AI. To do this, the AI needs to be aligned with human values; this requirement is known, following Stuart Russell, as the “Value Alignment Problem.” It is a “problem” in the sense that, given the way one has to specify a value function to a machine, however one creates an AI, it may maximize that value to the detriment of other socially useful or even noninstrumental values.
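
The mechanics are easy to caricature in code. Here is a minimal sketch in Python (the objective, the values, and the numbers are mine and purely illustrative, not anything presented at the workshop): an optimizer handed a single value function will happily drive any value left out of that function to zero.

```python
# Toy illustration of value misspecification: the machine optimizes only the
# objective it is given, so anything we care about but did not encode is free
# to be traded away. All quantities here are hypothetical.

def specified_objective(effort: float) -> float:
    """What we told the machine to maximize: units of output."""
    return 10 * effort

def unspecified_value(effort: float) -> float:
    """A value we care about but never encoded (rest, safety, leisure...)."""
    return max(0.0, 100 - effort ** 2)

# A naive optimizer: pick the effort level that maximizes the stated objective.
candidates = [e / 10 for e in range(201)]  # effort levels 0.0 .. 20.0
best = max(candidates, key=specified_objective)

print(f"chosen effort:       {best}")
print(f"specified objective: {specified_objective(best):.1f}")
print(f"unspecified value:   {unspecified_value(best):.1f}")  # driven to zero
```

The optimizer is not malicious; it simply has no reason to preserve a value that never appears in its objective, and that indifference is the heart of the alignment worry.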

Continue reading

Algorithmic Bias: How the Clinton Campaign May Have Lost the Presidency or Why You Should Care

This post is a co-authored piece:

Heather M. Roff, Jamie Winterton and Nadya Bliss of Arizona State’s Global Security Initiative

We’ve recently been informed that the Clinton campaign relied heavily on an automated decision aid to inform senior campaign leaders about likely scenarios in the election. This algorithm, known as “Ada,” was a key component, if not “the” component, in how senior staffers formulated campaign strategy. Unfortunately, we know little about the algorithm itself. We do not know all of the data that was used in the various simulations that it ran, or what its programming looked like. Nevertheless, we can be fairly sure that demographic information, prior voting behavior, prior election results, and the like were among the variables, as these are stock inputs for any social scientist studying voting behavior. What is more interesting, however, is that we are fairly sure there were other, less straightforward variables that ultimately left the campaign unable to see the potential losses in states like Wisconsin and Michigan, and the near loss of Minnesota.

But to see why “Ada” didn’t live up to her namesake (Ada, Countess of Lovelace, the progenitor of computing) is to delve into what an algorithm is, what it does, and how humans interact with its findings. This is an important point for those of us trying to understand not merely what happened this election, but also how increasing reliance on algorithms like Ada can fundamentally shift our politics and blind us to the limitations of big data. Let us begin, then, at the beginning.
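
To make the worry concrete, here is a minimal sketch in Python of the kind of Monte Carlo election simulator a decision aid like Ada reportedly ran. We do not know Ada’s actual code; every state, margin, and threshold below is a hypothetical of ours. What it illustrates: if the model treats each state’s polling error as independent, then a systematic miss shared across similar states, the kind of correlated bias widely blamed for the 2016 surprises in the upper Midwest, never dents its confidence.

```python
# A toy election forecaster. Each state maps to (hypothetical polling lead in
# percentage points, electoral votes). None of these figures are real inputs.
import random

states = {"WI": (6.5, 10), "MI": (4.0, 16), "MN": (5.0, 10), "PA": (2.0, 20)}

def win_probability(n_sims: int, shared_error_sd: float) -> float:
    """Fraction of simulated elections won, given a possible shared polling miss."""
    wins = 0
    for _ in range(n_sims):
        shared = random.gauss(0, shared_error_sd)  # one miss hitting every state
        ev = sum(votes for lead, votes in states.values()
                 if lead + shared + random.gauss(0, 2.0) > 0)  # per-state noise
        if ev >= 29:  # hypothetical winning threshold for this toy map
            wins += 1
    return wins / n_sims

random.seed(0)
print(f"independent state errors only:  {win_probability(100_000, 0.0):.1%}")
print(f"allowing a shared polling miss: {win_probability(100_000, 4.0):.1%}")
```

In this toy, the first forecast is near-certain of victory while the second drops sharply, even though the polls fed into both are identical; whether a real aid like Ada modeled such correlated error is precisely the kind of thing we cannot know without seeing its code.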

Continue reading

Analogies in War: Marine Mammal Systems and Autonomous Weapons

Last week I was able to host and facilitate a multi-stakeholder meeting of governments, industry, and academia to discuss the notions of “meaningful human control” and “appropriate human judgment” as they pertain to the development, deployment, and use of autonomous weapons systems (AWS). These two concepts presently dominate the discussion over whether to regulate or ban AWS, but neither is fully endorsed internationally, despite work from governments, academia, and NGOs. On one side, many prefer the notion of “control”; on the other, “judgment.”

Yet what has become apparent from many of these discussions, my workshop included, is that there is a need for an appropriate analogy to help policy makers understand the complexities of autonomous systems and how humans may still exert control over them. While some argue that there is no analogy to AWS, and that thinking in this manner is unhelpful, I disagree. There is one unique example that can help us understand the nuance of AWS, as well as how meaningful human control places limits on their use: marine mammal systems.

Continue reading

© 2017 Duck of Minerva
