Tag: bias

Emotions, Unconscious Bias, and Publishing

This is a guest post by Jana von Stein, Senior Lecturer, Political Science and International Relations Programme, Victoria University of Wellington (New Zealand)

The recent scandal surrounding Harvey Weinstein’s alleged sexual abuse and harassment of dozens of women has gotten me thinking about an experience I had not too long ago. To be sure, there are differences: what happened to me was not sexual, my suffering was short-lived, and I sought justice. But there were at least two important similarities: my experience was deeply gendered and offensive, and I didn’t tell many people. Why? Because I worried about the career implications. I didn’t want the many good and decent men in my field to perceive me as a male-basher.

Continue reading

The Value Alignment Problem’s Problem

Having recently attended a workshop and conference on beneficial artificial intelligence (AI), I found that one of the overriding concerns is how to design AI that is actually beneficial. To do this, the AI needs to be aligned with human values; this requirement is known, following Stuart Russell, as the “Value Alignment Problem.” It is a “problem” in the sense that, given the way one has to specify a value function to a machine, however one creates an AI it may maximize that value to the detriment of other socially useful or even noninstrumental values.
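To make that concrete, here is a minimal, hypothetical sketch (every name and number below is invented for illustration, not taken from the post or from any real system) of how an optimizer handed a single specified value function can pick the action that scores best on that function while quietly sacrificing a value nobody thought to encode:

```python
# Hypothetical illustration of value misspecification. Each candidate action
# affects two things we care about: the value we managed to encode in the
# objective, and a second value we failed to write down.
actions = {
    "action_a": {"encoded_value": 1.0, "unmodeled_value": 1.0},
    "action_b": {"encoded_value": 5.0, "unmodeled_value": -10.0},
}

def specified_objective(outcome):
    """The only signal the machine is given: the single value we specified."""
    return outcome["encoded_value"]

# The optimizer dutifully maximizes what was specified...
chosen = max(actions, key=lambda a: specified_objective(actions[a]))
print("chosen by the machine:", chosen)  # -> action_b

# ...even though that choice is worse once the unmodeled value is counted.
true_totals = {a: v["encoded_value"] + v["unmodeled_value"] for a, v in actions.items()}
print("totals with the unmodeled value included:", true_totals)
```

The point of the toy is only that the machine cannot weigh a value it was never given; everything turns on what the human chose, or failed, to specify.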

Continue reading

Algorithmic Bias: How the Clinton Campaign May Have Lost the Presidency or Why You Should Care

This post is a co-authored piece:

Heather M. Roff, Jamie Winterton and Nadya Bliss of Arizona State’s Global Security Initiative

We’ve recently been informed that the Clinton campaign relied heavily on an automated decision aid to inform senior campaign leaders about likely scenarios in the election. This algorithm, known as “Ada,” was a key component, if not “the” component, in how senior staffers formulated campaign strategy. Unfortunately, we know little about the algorithm itself. We do not know all of the data used in the various simulations it ran, or what its programming looked like. Nevertheless, we can be fairly sure that demographic information, prior voting behavior, prior election results, and the like were among its variables, as these are stock-in-trade for any social scientist studying voting behavior. What is more interesting, however, is that we are fairly sure there were other, less straightforward variables that ultimately left the campaign unable to foresee the potential losses in states like Wisconsin and Michigan, or the near loss of Minnesota.
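Since Ada’s internals are not public, the following is only a generic, hypothetical sketch of how a decision aid of this kind might turn modeled state margins into win probabilities by simulation; the state abbreviations are real, but every margin, error term, and modeling choice below is invented for illustration and is not a claim about what Ada actually did:

```python
# Hypothetical sketch of a simulation-based decision aid. All inputs invented.
import random

# Assumed inputs: a modeled Clinton margin (in points) per state and a single
# polling-error spread. Systematic, correlated error across states is
# deliberately left out here -- exactly the kind of blind spot at issue.
state_margins = {"WI": 5.0, "MI": 4.0, "MN": 6.0, "PA": 2.5}
polling_sd = 4.0

def simulate_wins(margins, sd, n_sims=10_000, seed=0):
    """Estimate per-state win probabilities by drawing independent polling errors."""
    rng = random.Random(seed)
    wins = {state: 0 for state in margins}
    for _ in range(n_sims):
        for state, margin in margins.items():
            if rng.gauss(margin, sd) > 0:
                wins[state] += 1
    return {state: count / n_sims for state, count in wins.items()}

print(simulate_wins(state_margins, polling_sd))
```

Treating each state’s error as independent, as this toy does, makes a simultaneous miss across several states look nearly impossible; a correlated-error term would widen every probability. That kind of modeling choice, not the arithmetic, is where such a tool can mislead the people reading its output.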

But to see why “Ada” didn’t live up to her namesake (Ada, Countess of Lovelace, a progenitor of computing), we have to delve into what an algorithm is, what it does, and how humans interact with its findings. This is an important point for many of us trying to understand not merely what happened this election, but also how increasing reliance on algorithms like Ada can fundamentally shift our politics and blind us to the limitations of big data. Let us begin, then, at the beginning.

Continue reading

Empathy, Envy and Justice: The Real Trouble for Algorithm Bias

Rousseau once remarked that “It is, therefore, very certain that compassion is a natural sentiment, which, by moderating the activity of self-esteem in each individual, contributes to the mutual preservation of the whole species” (Discourse on Inequality). Indeed, it is compassion, and not “reason,” that keeps this frail species progressing. Yet this ability to be compassionate, which is by its very nature an other-regarding ability, is (ironically) one side of the same coin as comparison. Comparison, or perhaps “reflection on certain relations” (e.g. small/big, hard/soft, fast/slow, scared/bold), also has its degenerative features: pride and envy. These twin vices, for Rousseau, are the root of much of the evil in this world. They are tempered by compassion, but they engender the greatest forms of inequality and injustice.

Rousseau’s insights ought to ring true in our ears today, particularly as we attempt to create artificial intelligences to take over or mediate many of our social relations. Recent attention to “algorithm bias,” where the algorithm for a given task draws on biased assumptions or biased training data and so yields discriminatory results, is, I would argue, working the problem of reducing bias from the wrong direction. Many, the White House included, are presently paying much attention to how to eliminate algorithmic bias, or in some instances to solve the “value alignment problem” and thereby eliminate it indirectly. Why does this matter? Allow me a brief technological interlude on machine learning and AI to illustrate why eliminating this bias (à la Rousseau) is impossible.
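As a concrete illustration of the mechanism just described, here is a minimal, invented sketch in which a trivially simple “model,” fit to biased historical decisions, reproduces the disparity in its own outputs; the groups, data, and scoring rule are all hypothetical:

```python
# Hypothetical illustration of biased training data producing biased output.
# Historical decisions encode a human bias: group "b" was approved less often
# than group "a" for otherwise similar cases.
training_data = [
    ("a", 1), ("a", 1), ("a", 1), ("a", 0),
    ("b", 1), ("b", 0), ("b", 0), ("b", 0),
]

def fit_approval_rates(data):
    """'Learn' the approval rate per group -- the pattern in the data, bias included."""
    totals, approvals = {}, {}
    for group, approved in data:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + approved
    return {group: approvals[group] / totals[group] for group in totals}

scores = fit_approval_rates(training_data)
print(scores)  # {'a': 0.75, 'b': 0.25}: the learned rule reproduces the disparity
```

Any system that screens or ranks cases with these learned scores carries the original discrimination forward, even though the code itself contains nothing that looks like prejudice.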

Continue reading

