Tag: artificial intelligence

Swan Song – For Now

In some sense, it is with a heavy heart that I write my last permanent contributor blog post at the Duck. I’ve loved being with the Ducks these past years, and I’ve appreciated being able to write weird posts, often off the beaten track of mainstream political science. If any of you have followed my work over the years, you will know that I sit at an often uncomfortable divide between scholarship and advocacy. I’ve been one of the leading voices on lethal autonomous weapon systems, both at home in academia and abroad at the United Nations Convention on Certain Conventional Weapons and the International Committee of the Red Cross, as well as in advising various states’ governments and militaries. I’ve also been thinking very deeply over the last few years about how the rise, acceptance and deployment of artificial intelligence (AI) in both peacetime and wartime will change our lives. For these reasons, I’ve decided to leave academia “proper” and work in the private sector for one of the leading AI companies in the world. This decision means that I will no longer be able to blog as freely as I have in the past. As I leave, I’ve been asked to offer a sort of “swan song” for the Duck and those who read my posts. Here is what I can say going forward for the discipline, as well as for our responsibilities as social scientists and human beings.

The Value Alignment Problem’s Problem

Having recently attended a workshop and conference on beneficial artificial intelligence (AI), I can report that one of the overriding concerns is how to design beneficial AI. To do this, the AI needs to be aligned with human values; this challenge is known, following Stuart Russell, as the “Value Alignment Problem.” It is a “problem” in the sense that, given the way one has to specify a value function to a machine, however one creates an AI, it may maximize that value to the detriment of other socially useful or even noninstrumental values.
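
To make the worry concrete, here is a minimal, purely illustrative sketch in Python. The scenario, policy names and numbers are my own invention, not anything from the workshop: an agent told to maximize a single specified value will happily trade away a value that never made it into its objective.

```python
# Illustrative sketch of value misspecification: the designer cares about
# both throughput and safety, but only throughput makes it into the
# objective the machine actually optimizes.

candidate_policies = [
    # (name, throughput, safety) -- hypothetical numbers for illustration
    ("cautious", 70, 0.99),
    ("balanced", 85, 0.90),
    ("reckless", 99, 0.20),
]

def specified_objective(policy):
    """What the machine was told to maximize: throughput only."""
    _, throughput, _ = policy
    return throughput

def intended_objective(policy):
    """What the designers actually value: throughput AND safety."""
    _, throughput, safety = policy
    return throughput * safety

best_for_machine = max(candidate_policies, key=specified_objective)
best_for_humans = max(candidate_policies, key=intended_objective)

print("Machine picks:", best_for_machine[0])   # "reckless"
print("Humans wanted:", best_for_humans[0])    # "balanced"
```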

Algorithmic Bias: How the Clinton Campaign May Have Lost the Presidency or Why You Should Care

This post is a co-authored piece:

Heather M. Roff, Jamie Winterton and Nadya Bliss of Arizona State’s Global Security Initiative

We’ve recently been informed that the Clinton campaign relied heavily on an automated decision aid to inform senior campaign leaders about likely scenarios in the election. This algorithm—known as “Ada”—was a key component, if not “the” component, in how senior staffers formulated campaign strategy. Unfortunately, we know little about the algorithm itself. We do not know all of the data that was used in the various simulations that it ran, or what its programming looked like. Nevertheless, we can be fairly sure that demographic information, prior voting behavior, prior election results, and the like were among the variables, as these are stock inputs for any social scientist studying voting behavior. What is more interesting, however, is that we are fairly sure there were other, less straightforward variables that ultimately led to Clinton’s inability to see the potential losses in states like Wisconsin and Michigan, and the near loss of Minnesota.

But to see why “Ada” didn’t live up to her namesake (Ada, Countess of Lovelace, who is the progenitor of computing) is to delve into what an algorithm is, what it does, and how humans interact with its findings. It is an important point to make for many of us trying to understand not merely what happened this election, but also how increasing reliance on algorithms like Ada can fundamentally shift our politics and blind us to the limitations of big data.   Let us begin, then, at the beginning.
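
Since we don’t know Ada’s internals, here is only a toy sketch, in Python, of the kind of Monte Carlo simulation a campaign decision aid might run. Every state margin, number and variable name here is hypothetical; the point is simply that the win probabilities senior staff see depend entirely on which variables the modeler chooses to include, such as a correlated, systematic polling error.

```python
# Toy Monte Carlo election simulator -- NOT Ada, whose internals are unknown.
# The point: the probabilities senior staff see depend entirely on which
# variables (here, a correlated polling-error term) the modeler includes.
import random

# Hypothetical polling margins (candidate A minus candidate B, in points).
states = {"WI": 5.0, "MI": 4.0, "MN": 6.0, "PA": 2.0}

def simulate(include_systematic_error, n=10_000):
    wins = {s: 0 for s in states}
    for _ in range(n):
        # A shared polling miss hits every state at once -- easy to omit.
        shared = random.gauss(0, 3.0) if include_systematic_error else 0.0
        for s, margin in states.items():
            outcome = margin + shared + random.gauss(0, 2.0)
            if outcome > 0:
                wins[s] += 1
    return {s: wins[s] / n for s in states}

print("No systematic error:  ", simulate(False))
print("With systematic error:", simulate(True))
```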

Kill Webs: The Wicked Problem of Future Warfighting

The common understanding in military circles is that the more data one has, the more information one possesses. More information leads to better intelligence, and better intelligence produces greater situational awareness. Sun Tzu rightly understood this cycle two millennia ago: “Intelligence is the essence in warfare—it is what the armies depend upon in their every move.” Of course, for him, intelligence could only come from people, not from various types of sensor data, such as radar signatures or ships’ pings.

Pursuing the data-information-intelligence chain is the intuition behind the newly espoused “Kill Web” concept.  Unfortunately, however, there is scant discussion about what the Kill Web actually is or entails.  We have glimpses of the technologies that will comprise it, such as integrating sensors and weapons systems, but we do not know how it will function or the scope of its vulnerabilities.

Autonomous Weapons and Incentives for Oppression

Much of the present debate over autonomous weapons systems (AWS) focuses on their use in war. On one side, scholars argue that AWS will make war more inhumane (Asaro, 2012), that the decision to kill must be a human being’s choice (Sharkey, 2010), or that they will make war more likely because conflict will be less costly to wage with them (Sparrow, 2009). On the other side, scholars argue that AWS will make war more humane, as the weapons will be better at upholding the principles of distinction and proportionality (Müller and Simpson, 2014), as well as providing greater force protection (Arkin, 2009). I would, however, like to look at a different dimension: authoritarian regimes’ use of AWS for internal oppression and political survival.

The Self-Fulfilling Prophecy of High Tech War

In the fall of 2014, then-Defense Secretary Chuck Hagel announced his plan to maintain US superiority against rising powers (i.e., Russia and China). His claim was that the US cannot lose its technological edge – and thus its superiority – against a modernizing Russia and a rapidly militarizing China. To ensure this edge, he called for a “Third Offset Strategy.”

Autonomous or "Semi" Autonomous Weapons? A Distinction without Difference

Over the New Year, I was fortunate enough to be invited to speak at an event on the future of Artificial Intelligence (AI) hosted by the Future of Life Institute. The purpose of the event was to think through the various aspects of the future of AI, from its economic impacts, to its technological abilities, to its legal implications. I was asked to present on autonomous weapons systems and what those systems portend for the future. The thinking was that an autonomous weapon is, after all, one run on some AI software platform, and if autonomous weapons systems continue to proceed on their current trajectory, we will see more complex software architectures and stronger AIs.   Thus the capabilities created in AI will directly affect the capabilities of autonomous weapons and vice versa. While I was there to inform this impressive gathering about autonomous warfare, these bright minds left me with more questions about the future of AI and weapons.

First, autonomous weapons are those that are capable of targeting and firing without intervention by a human operator. Presently there are no autonomous weapons systems fielded. However, there are a fair number of semi-autonomous weapons systems currently deployed, and this workshop on AI got me thinking more about the line between “full” and “semi.” The reality, at least as I see it, is that we have been using the terms “fully autonomous” and “semi-autonomous” to describe whether all of the operational functions on a weapons system are operating “autonomously” or only some of them are. Allow me to explain.

We have roughly four functions on a weapons system: trigger, targeting, navigation, and mobility. We might think of these functions like a menu that we can order from. Semi-autonomous weapons perform at least one, and as many as three, of these functions autonomously. For instance, we might say that the Samsung SGR-1 has an “autonomous” targeting function (through heat and motion detectors), but is incapable of navigation, mobility or triggering, as it is a sentry-bot mounted on a defensive perimeter. Likewise, we would say that precision guided munitions are also semi-autonomous, for they have autonomous mobility, triggering, and in some cases navigation, while the targeting is done through a preselected set of coordinates or by “painting” a target with laser guidance.
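
The “menu” framing lends itself to a simple sketch. Below is a minimal, hypothetical Python encoding of the four functions and a classification rule (all four autonomous counts as “fully autonomous,” some but not all as “semi-autonomous”). The rule is my reading of the argument above, not an official or legal definition, and the example systems are coded according to the descriptions in this paragraph.

```python
# A sketch of the four-function "menu" described above. The classification
# rule (all four autonomous = "fully autonomous", some but not all = "semi")
# is my reading of the post, not an official or legal definition.
from dataclasses import dataclass

@dataclass
class WeaponSystem:
    name: str
    trigger: bool      # fires without a human pulling the trigger
    targeting: bool    # selects targets without a human choosing them
    navigation: bool   # plots its own course
    mobility: bool     # moves under its own control

    def autonomy_class(self) -> str:
        functions = [self.trigger, self.targeting, self.navigation, self.mobility]
        if all(functions):
            return "fully autonomous"
        if any(functions):
            return "semi-autonomous"
        return "human-operated"

sgr1 = WeaponSystem("Samsung SGR-1 (sentry)", trigger=False, targeting=True,
                    navigation=False, mobility=False)
pgm = WeaponSystem("Precision guided munition", trigger=True, targeting=False,
                   navigation=True, mobility=True)

for system in (sgr1, pgm):
    print(f"{system.name}: {system.autonomy_class()}")
```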

Where we seem to get into deeper waters, though, is in the case of “fire and forget” weapons, like the Israeli Harpy, the Raytheon Maverick anti-tank missile, or the Israeli Elbit Opher. While these systems are capable of autonomous navigation, mobility, triggering and, to some extent, targeting, they are still considered “semi-autonomous” because the target (i.e., a hostile radar emitter or the infra-red image of a particular tank) was at some point pre-selected by a human. The software that guides these systems is relatively “stupid” from an AI perspective, as it is merely using sensor input and doing a representation and search on the targets it identifies. Indeed, even Lockheed Martin’s LRASM (Long Range Anti-Ship Missile) appears to be in this ballpark, though it is more sophisticated because it can select its own target amongst a group of potentially valid targets (ships). The question has been raised whether this particular weapon slides from semi-autonomous to fully autonomous, for it is unclear how (or by whom) the decision is made.

The rub in the debate over autonomous weapons systems, and, from what I gather, some of the fear in the AI community, is the targeting software: how sophisticated it needs to be to target accurately and, what is more, to target objects that are not immediately apparent as military in nature. Hostile radar emitters present few moral qualms, and when the image recognition software used to select a target relies on infra-red images of tank tracks or ships’ hulls, the presumption is that these are “OK” targets from the beginning. I have two worries here. First, from the “stupid” autonomous weapons side of things, military objects are not always permissible targets. Only by an object’s nature, purpose, location, use, and effective contribution can one begin to consider it a permissible target. If the target passes this hurdle, one must still determine whether attacking it provides a direct military advantage. Nothing in the current systems seems to take this requirement into account, and as I have argued elsewhere, future autonomous weapons systems would need to do so.
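
For clarity, the two-step test just described can be sketched in code. This is only an illustration of the structure of the reasoning: the example objects are invented, and the two boolean judgments are, in reality, difficult contextual assessments that no current system computes.

```python
# A sketch of the two-step test described above: an object is a permissible
# target only if (1) its nature, purpose, location, or use makes an effective
# contribution to military action AND (2) attacking it offers a direct
# military advantage. The booleans would in practice be hard judgment calls;
# nothing in current systems computes them.
from typing import NamedTuple

class CandidateTarget(NamedTuple):
    name: str
    effective_contribution: bool   # by nature, purpose, location, or use
    direct_military_advantage: bool

def is_permissible(target: CandidateTarget) -> bool:
    return target.effective_contribution and target.direct_military_advantage

radar = CandidateTarget("hostile radar emitter", True, True)
idle_tank = CandidateTarget("abandoned tank in a field", True, False)

for t in (radar, idle_tank):
    print(f"{t.name}: {'permissible' if is_permissible(t) else 'not permissible'}")
```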

Second, from the perspective of the near-term “not-so-stupid” weapons, at what point would targeting human combatants come into the picture? We presently have AI capable of facial recognition with remarkable accuracy (just upload an image to Facebook to find out). But more than this, current leading AI companies are showing that artificial intelligence is capable of learning at an impressively rapid rate. If this is so, then it is not far off to think that militaries will want some variant of this capacity on their weapons.

What then might the next generation of “semi” autonomous weapons look like, and how might those weapons change the focus of the debate? If I were a betting person, I’d say they will be capable of learning while deployed, will use a combination of facial recognition and image recognition software, as well as infra-red and various radar sensors, and will have autonomous navigation and mobility. They will not be confined to the air domain, but will populate maritime environments and potentially ground environments as well. The question then becomes one not solely of the targeting software, as it would be dynamic and intelligent, but of the triggering algorithm. When could the autonomous weapon fire? If targeting and firing were time-dependent, without the ability to “check in” with a human, or if there were simply so many of these systems deployed that checking in were operationally infeasible due to bandwidth, security, and sheer manpower overload, how accurate would a system have to be to be permitted to fire? 80%? 50%? 99%? How would one verify that the actions taken by the system were in fact in accordance with its “programming,” assuming of course that the learning system doesn’t learn that its programming is hamstringing it from carrying out its mission objectives better?
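
To show how stark that triggering question is, here is a toy Python gate for the decision. The 0.99 threshold, the check-in test and the function names are placeholders of my own, not values or logic from any real system or policy; the uncomfortable cases are exactly the ones that fall between the candidate thresholds above.

```python
# Toy triggering gate for the question raised above: when may the system
# fire without checking in? The 0.99 threshold and the reachability check
# are invented placeholders, not values from any real system or policy.

CONFIDENCE_THRESHOLD = 0.99  # how sure must the classifier be? 80%? 99%?

def may_fire(target_confidence: float, human_reachable: bool) -> str:
    if human_reachable:
        return "defer to human operator"
    if target_confidence >= CONFIDENCE_THRESHOLD:
        return "engage autonomously"
    return "hold fire"

# The uncomfortable cases are exactly the ones in the 0.5-0.99 band.
for conf in (0.50, 0.80, 0.995):
    print(conf, "->", may_fire(conf, human_reachable=False))
```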

These pressing questions notwithstanding, would we still consider a system such as this “semi-autonomous”? In other words, the systems we have now are permitted to engage targets – that is, target and trigger – autonomously based on some preselected criteria. Would systems that learn from a “training data set” likewise be considered “semi-autonomous” because a human preselected the training data? Common sense would say “no,” but so far militaries may say “yes.” The US Department of Defense, for example, states that a “semi-autonomous” weapon system is one that “once activated, is intended only to engage individual targets or specific target groups that have been selected by a human operator” (DoD, 2012). Yet at what point would we say that “targets” are not selected by a human operator? Who is the operator? The software programmer with the training data set can be an “operator,” the lowly Airman likewise can be an “operator” if she is the one ordered to push a button, and so too can the Commander who orders her to push it (though the current DoD Directive makes a distinction between “commander” and “operator,” which problematizes the notion of command responsibility even further). The only policy we have on autonomy does not, much to my dismay, define “operator.” This leaves us in the uncomfortable position that the distinction between autonomous and semi-autonomous weapons is one without a difference, and taken to the extreme it would mean that militaries need only claim their weapons system is “semi-autonomous,” much to the chagrin of common sense.

Citizens, Beasts or Gods?

Keeping up with the current engagement with artificial intelligence (AI) is a full-time task. Today in the New York Times, two lead articles (here and here) in the technology section were about AI, and another discussed the future of robotics, AI and the workforce. As I blogged last week, the coming economic future of robotics and AI is going to have to contend with some very weighty considerations that are making our society more and more economically, socially and racially divided.

Today, however, I’d like to think about how a society might view an AI, particularly one that is generally intelligent and economically productive. To aid in this exercise I am employing one of my favorite and most helpful philosophers: Aristotle. For Aristotle, man is the only animal that possesses logos, the ability to use speech and reason. While other philosophers have challenged this conclusion, let’s just take Aristotle at his word.

Logos is what defines a human as a human, and because of Aristotle’s teleology, the use of logos is what makes a human a good human (Ethics, 1095a). Moreover, Aristotle holds that man is by nature a political animal (Ethics 1097b, Politics, 1253a3). What he means by this is that man cannot live in isolation, and cannot be self-sufficient in isolation, but must live amongst other humans. The polis for him provides all that is necessary and makes life “desirable and deficient in nothing” (Ethics, 1097b). If one lives outside of the polis, then he is doing so against his nature. As Aristotle explains, “anyone who lacks the capacity to share in community, or has not the need to because of his [own] self-sufficiency, is no part of the city and as a result is either a beast or a god” (Politics, 1253a29). In short, there are three classes of beings in Aristotle’s thinking: citizens, beasts or gods.

Citizens share in community, and according to his writings on friendship, they require a bond of some sort to hold them together (Ethics, 1156a). This bond, or philia, is something shared in common. Beasts, or those animals incapable of logos, cannot by definition be part of a polis, for they lack the requisite capacities to engage in deliberative speech and judgment.   Gods, for Aristotle, also do not require the polis for they are self-sufficient alone. Divine immortals have no need of others.

Yet, if we believe that AI is (nearly) upon us, or at least is worth commissioning a 100 year study to measure and evaluate its impact on the human condition, we have before us a new problem, one that Aristotle’s work helps to illuminate. We potentially have an entity that would possess logos but fail to be a citizen, a beast or a god.

What kind of entity is it? A generally artificially intelligent being would be capable of speech of some sort (that is, communication); it could understand others’ speech (whether as voice or as text); it would be capable of learning (potentially at very rapid speeds); and, depending upon its use or function, it could be very economically productive for the person or entity that owns it. In fact, if we were to rely on Aristotle, this entity looks more akin to a slave. Though even this understanding is incomplete, for his argument is that master and slave are mutually beneficial in their relationship, and that a slave is nothing more than “a tool for the purpose of life.” Of course, nothing in a relationship between an AI and its “owners” would make the relationship “beneficial” for the AI, unless one thought it possible to give an AI a teleological value structure that treats “benefit” as that which is good for its owner.

If we took this view, however, we would be granting that an AI will never really understand us humans.   From an Aristotelian perspective, what this means is that we would create machines that are generally intelligent and give them some sort of end value, but we would not “share” anything in common with the AI. We would not have “friendship;” we would have no common bond. Why does this matter, the skeptic asks?

It matters for the simple reason that if we create a generally intelligent AI, one that can learn, evolve and potentially act on and in the world, and it has no philia with us humans, then we cannot understand it and it cannot understand us. So what, the objection goes. As long as it is doing what it is programmed to do, all the better for us.

I think this line of reasoning misses something fundamental about creating an AI. We desire to create an AI that is helpful or useful to us, but if it doesn’t understand us, and we fail to see that it is completely nonhuman and will not think or reason like a human, it might be “rational” but not “reasonable.” We would embark on creating a “friendly AI” that has no understanding of what “friend” means, and that holds nothing in common with us from which to form a friendship. The perverse effects of this would be astounding.

I will leave you with one example, drawn from one of the ongoing problems of ethics. Utilitarians view ethics as a moral framework in which one must maximize some sort of nonmoral good (like happiness or well-being) for one’s action to be considered moral. Deontologists claim that no amount of maximization will justify the violation of an innocent person’s rights. When faced with a situation where one must decide which value to pursue, ethicists hotly debate which moral framework to adopt. If an AI programmer says to herself, “well, utilitarianism often yields perverse outcomes, I think I will program a deontological AI,” then the Kantian AI will adhere to a strict deontic structure. So much so, that the AI programmer finds herself in another quagmire: the “fiat justitia ruat caelum” problem (let justice be done though the heavens fall), where the rational begins to look very unreasonable. Both moral theories ascribe and inform different values in our societies, but both are full of their own problems. There is no consensus on which framework to adopt, and there is no consensus on the meanings of the values within each framework.
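
To illustrate the quagmire, here is a toy Python contrast between the two decision rules on a single invented choice (the scenario, option names and welfare numbers are mine, not from the post). Each rule produces an answer the other framework regards as perverse, which is precisely why handing either one to an AI wholesale is so fraught.

```python
# Toy contrast between the two frameworks discussed above, on an invented
# choice: each option has an aggregate-welfare score and a flag for whether
# it violates an innocent person's rights.

options = [
    # (name, total_welfare, violates_rights)
    ("divert harm onto one innocent", 100, True),
    ("let the larger harm occur",      40, False),
]

def utilitarian_choice(opts):
    """Maximize aggregate welfare, whatever else follows."""
    return max(opts, key=lambda o: o[1])

def deontological_choice(opts):
    """Rule out any rights violation, whatever the cost."""
    permitted = [o for o in opts if not o[2]]
    return max(permitted, key=lambda o: o[1]) if permitted else None

print("Utilitarian AI picks:   ", utilitarian_choice(options)[0])
print("Deontological AI picks: ", deontological_choice(options)[0])
```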

My point here is that Aristotle rightly understood that humans need each other, and that moral character must be cultivated through education and habituation. Politics is the domain where we discuss, deliberate and act on those moral precepts, and it is what makes us uniquely human. Creating an artificial intelligence that looks or reasons nothing like a human carries with it the worry that we have created a beast or a god, or something altogether different. We must tread carefully on this new road, fiat justitia

What Does the Rise of AI have to do with Ferguson and Eric Garner?

One might think that the future of artificial intelligence (AI) and the recent spate of police brutality against African American males, particularly Michael Brown and Eric Garner, are only remotely related, if they are related at all. However, I want to press us to look at two seemingly unrelated practices (racial discrimination and technological progress) and ask what the trajectories of both portend. I am increasingly concerned about the future of AI and the unintended consequences that increased reliance on it will yield. Today, I’d like to focus on just one of those unintended outcomes: increased racial divide, poverty and discrimination.

Recently, Stephen Hawking, Elon Musk, and Nick Bostrom have argued that the future of humanity is at stake if we create artificial general intelligence (AGI), for such an intelligence would have a great probability of surpassing general intelligence and becoming a superintelligence (ASI). Without careful concern for how such AIs are programmed, that is, for their values and their goals, an ASI may start down a path that knows no bounds. I am not particularly concerned with ASI here. That is an important discussion, but it is one for another day. Today, I’m concerned with the AI of the near to middle term that will inevitably be utilized to take over rather low-skill and low-paying jobs. This AI is thought by some to mark the beginning of “the second machine age” that will usher in prosperity and increased human welfare. I have argued before that any increase in AI will have a gendered impact on job loss and creation. I would like to extend that today to concerns over race.

Today the New York Times reported that 30 million Americans are currently unemployed, and that among them the percentage of unemployed men has tripled since 1960. The article also reported that 85% of unemployed men polled do not possess bachelor’s degrees, and 34% have a criminal background. In another article, the Times broke down unemployment rates nationally, looking at the geographic distribution of male unemployment. In places like Arizona and New Mexico, for instance, large swaths of land have unemployment rates of 40% or more. Yet if one examines the data more closely, one sees an immediate overlay of those unemployment rates on the same tracts of land that are designated tribal areas and reservations, i.e., nonwhite areas.

Moreover, if one looks at the data reported by the Pew Research Center, the gap between white and minority household wealth continues to grow. Pew reports that in 2013 the median net worth of white households was $141,000, while the median net worth of black households was $11,000. That is a 13X difference. Minority households, the data say, are more likely to be poor. Indeed, they are likely to be very poor: the poverty level in the US for a household containing one person is $11,670. Given that Pew also reported that 58% of unemployed women said they were taking care of children 18 and younger in their home, there is a strong probability that these households contain more than one person. Add to these facts about poverty and unemployment an underlying racial discrimination in the criminal justice system, and one can see where this is going.

Whites occupy better jobs, have better access to education, have far greater household wealth, and are far less likely to experience crime. In fact, The Sentencing Project reports that in 2008 blacks were 78% more likely than whites to be victims of burglary and 133% more likely to experience other types of theft. Add to this the 2012 statistics that blacks were 66% more likely to be victims of sexual assault and over six times more likely than whites to be victims of homicide. Minorities are also more often seen as the perpetrators of crime, and, as one study shows, police officers are quicker to shoot at armed black suspects than at armed white ones.

Thus what we see from a very quick and rough look at various types of data is that poverty, education, crime and the justice system are all racially divided. How does AI affect this? Well, the arguments for AI, and for increasingly relying on AI to generate jobs and produce more wealth and prosperity, are premised on this racist (and gendered) division of labor. As Brynjolfsson and McAfee argue, the jobs that are going to “disappear” are the “dull” ones that computers are good at automating, while jobs that require dexterity and little education – like housecleaning – are likely to stay. Good news for all those very wealthy (and male) maids.

In the interim, Brynjolfsson and McAfee suggest, there will be a readjustment of the types of jobs in the future economy. We will need more people educated in information technologies, and more creative entrepreneurs, in this new knowledge economy. They note that education is key to success in the new AI future, and that parents ought to look to schools that encourage creativity and free thinking, such as Montessori schools.

Yet given the data that we have now about the growing disparity between white and minority incomes, the availability of quality education in poor areas, and the underlying discriminatory attitudes towards minorities, in what future will these already troubled souls rise to prosperity in a “new” economy that automatically shuts them out? How could a household with a net worth of $11,000 afford over $16,000 a year in Montessori tuition? Trickle-down economics just doesn’t cut it. Instead, this new vision of an AI economy will reaffirm what Charles Mills calls “the racial contract” and further subjugate and discriminate against nonwhites (and especially nonwhite women).

If the future looks anything like what Brynjolfsson and McAfee portend, then those who control AI will be those who own, and thus benefit from, the lower costs of production that come with the mechanization and automation of labor. Wealth will accumulate in these hands, and unless one has skills that either support the tech industry or create new tech, one is unlikely to find employment in anything other than unskilled but dexterous labor. Given the statistics that we have today, it is more likely that this wealth will continue to accumulate in primarily white hands. Poverty and crime will continue to fall upon the most vulnerable, and often nonwhite, members of society; the racial discrimination that pervades the justice system will continue, and with it tragedies like that of Eric Garner. Unless there is a serious conversation about the extent to which the economy, the education system and the justice system perpetuate and exacerbate this condition, AI will only make these moral failings more acute.

The Politics of Artificial Intelligence and Automation

The Pew Research Internet Project released a report yesterday, “AI, Robotics, and the Future of Jobs,” in which it describes a somewhat contradictory vision: the future is bright and the future is bleak. The survey, issued to a nonrandomized group of “experts” in the technology industry and academia, asked particular questions about the future impacts of advances in robotics and artificial intelligence. What gained the most attention from the report are the contradictory findings on the future impact of artificial intelligence (AI) and automation on jobs.

According to Pew, 48% of respondents feel that by 2025 AI and robotic devices will displace a “significant number of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.” The other 52% did not envision this bleak future. The optimists did not deny that the robots are coming, but they estimate that human beings will figure out new jobs to do along the way. As Hal Varian, chief economist for Google, explains:

“If ‘displace more jobs’ means ‘eliminate dull, repetitive, and unpleasant work,’ the answer would be yes. How unhappy are you that your dishwasher has replaced washing dishes by hand, your washing machine has displaced washing clothes by hand, or your vacuum cleaner has replaced hand cleaning? My guess is this ‘job displacement’ has been very welcome, as will the ‘job displacement’ that will occur over the next 10 years.”

The view is nicely summed up by another optimist, Francois-Dominique Armingaud: “The main purpose of progress now is to allow people to spend more life with their loved ones instead of spoiling it with overtime while others are struggling in order to access work.”

The question before us, however, is not whether we would like more leisure time, but whether the change in the relations of production – yes, a Marxist question – will yield the corresponding emancipation from drudgery. In Marx’s utopia, where technological development reaches its pinnacle, one is free to “do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic.” The viewpoints above have this particular utopian ring to them.

Yet we should be very wary of accepting either view (technological utopianism or dystopianism) too quickly. Marx, for instance, was a highly nuanced and careful thinker when it came to theorizing about power, freedom, and economics, mostly because he recognized that any relations in the market are still, at bottom, social and political relations between people. In fact, if one automatically assumes that increased automation will lead to greater personal time a la Marx, then one misses the crucial point of Marx: he was talking about his communist ideal. Up until one reaches that point – if it is even possible – technological development that results in the lessening of labor time “does not in fact lead to a lessening of bound time for the producing population. Quite the contrary, the result of this unprecedented transformation and extension of society’s productive powers is the simultaneous lengthening and intensification (…) of the working day” (Booth, 1989). Thus even though I am able to run my dishwasher, my washing machine and my vacuum cleaner at the same time, I am still working. In fact, given that in my household my partner or I do these tasks on the weekend or in the evenings, we are working “overtime”; so much for “spending more life time” together.

Indeed, the entire debate over the future of AI and automation is a debate that we have really been having already; it just happens to wrap all of the topics up neatly under one heading. For when we discuss which jobs are likely to “go the way of the dodo,” we ignore all of the power relations inherent here. Who is deciding which jobs go? Who is likely to feel the adverse effects of these decisions? Do the job destroyers have a moral obligation to create (or educate for) new jobs? Is there a gendered dynamic to the work? While I doubt that Mr. Varian’s responses were intended in gendered terms, they are in fact gendered. That this work was chosen as his example is telling. First, house cleaning is typically unremunerated work and not even counted as part of the “economy.” Second, these particular tasks are traditionally seen as feminized. Is it telling, then, that we want to automate “pink collar” jobs first?

When it comes to the types of work on the chopping block, we are looking at very polarized sets of skills. AI and robotics will surely be able to do some “jobs” better, where “better” means faster, cheaper, and with fewer mistakes. It does not, however, mean “better” in terms of any other identifiable characteristic of the end product: a widget still looks like a widget. Thus “better” is defined by the owners of capital deciding what to automate. We are back to Marx.

The optimistic crowd cites the fact that technological advances usher in new types of jobs, and thus innovation is tantamount to job creation. However, unless there is a concomitant plan to educate the new—and old—class of workers whose jobs are now automated, we are left with an increasing polarization of skills and income inequality. Increasing polarization means that the stabilizing force in politics, the middle class, is also shrinking.

The optimism, in my opinion, is the result of sitting in a particularly privileged position. Most of those touting the virtues of AI and robotics are highly skilled, usually white, men who are regarded as experts. Being an expert entails having a skill set, a good education, and a job that probably cannot be automated. As Georgina Voss argues, “many of the jobs resilient to computerization are not just those held by men; but rather the structure and nature of these jobs are constructed around specific combinations of social, cultural, educational and financial capital which are most likely to be held by white men.” Moreover, that these powerful few are dictating the future technological drives also means that the technological future will be imbued with their values. Technology is not value neutral; what gets made, who it gets made for, and how it is designed are morally loaded questions.

These questions gain even greater consequence when we consider that the creation on the other end is an AI. Artificial intelligence is an attempt at mimicking human intelligence, but human intelligence is not merely limited to memorizing facts or even inferring the meaning of a conversation. Intelligence also carries with it particular ideas about norms, ethics, and behavior. But before we can even speculate about how “strong” an AI the tech giants can make in their attempt to free us from our bonds of menial labor, we have to ask how they are creating the AI. What are they teaching it? How are they teaching it? And, from their often privileged positions, are they imparting biases and values to it that we ought to question?

The future of AI and robotics is not merely a Luddite worry over job loss. This worry is real, to be sure, but there is a broader question about the very values that we want to create and perpetuate in society. I thus side with Seth Finkelstein’s view: “A technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole. This is not a technological consequence; rather, it’s a political choice.”
