The Pew Research Internet Project released a report yesterday, “AI, Robotics, and the Future of Jobs,” which describes a somewhat contradictory vision: the future is bright and the future is bleak. The survey, administered to a nonrandomized group of “experts” in the technology industry and academia, asked questions about the future impacts of advances in robotics and artificial intelligence. What has gained the most attention are the report’s contradictory findings on the effects of artificial intelligence (AI) and automation on jobs.
According to Pew, 48% of respondents feel that by 2025 AI and robotic devices will displace a “significant number of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.” The other 52% did not envision this bleak future. The optimists did not deny that the robots are coming, but they expect that human beings will figure out new jobs to do along the way. As Hal Varian, chief economist for Google, explains:
“If ‘displace more jobs’ means ‘eliminate dull, repetitive, and unpleasant work,’ the answer would be yes. How unhappy are you that your dishwasher has replaced washing dishes by hand, your washing machine has displaced washing clothes by hand, or your vacuum cleaner has replaced hand cleaning? My guess is this ‘job displacement’ has been very welcome, as will the ‘job displacement’ that will occur over the next 10 years.”
The view is nicely summed up by another optimist, Francois-Dominique Armingaud: “The main purpose of progress now is to allow people to spend more life with their loved ones instead of spoiling it with overtime while others are struggling in order to access work.”
The question before us, however, is not whether we would like more leisure time, but whether the change in the relations of production – yes, a Marxist question – will yield the corresponding emancipation from drudgery. In Marx’s utopia, where technological development reaches its pinnacle, one is free to “do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic.” The viewpoints above have this particular utopian ring to them.
Yet we should be very wary of accepting either view (technological utopianism or dystopianism) too quickly, mostly because any relations in the market are still, at bottom, social and political relations between people. Marx, for instance, was a highly nuanced and careful thinker when it came to theorizing about power, freedom, and economics. In fact, if one automatically assumes that increased automation will lead to greater personal time à la Marx, then one misses the crucial point of Marx: he was talking about his communist ideal. Up until one reaches that point – if it is even possible – technological development that results in the lessening of labor time “does not in fact lead to a lessening of bound time for the producing population. Quite the contrary, the result of this unprecedented transformation and extension of society’s productive powers is the simultaneous lengthening and intensification (…) of the working day” (Booth, 1989). Thus even though I am able to run my dishwasher, my washing machine, and my vacuum cleaner at the same time, I am still working. In fact, the reality that in my household my partner or I do these tasks on the weekend or in the evenings means that we are working “overtime”; so much for “spending more life time” together.
Indeed, the entire debate over the future of AI and automation is a debate we have really been having already; it just happens to wrap all of the topics neatly under one heading. For when we discuss which jobs are likely to “go the way of the dodo,” we are ignoring all of the power relations inherent here. Who is deciding which jobs go? Who is likely to feel the adverse effects of these decisions? Do the job destroyers have a moral obligation to create (or educate for) new jobs? Is there a gendered dynamic to the work? While I doubt that Mr. Varian’s responses were intended in gendered terms, they are in fact gendered. That this work was chosen as his example is telling. First, house cleaning is typically unremunerated work and is not even counted as part of the “economy.” Second, these particular tasks are traditionally feminized. Is it telling, then, that we want to automate “pink-collar” jobs first?
When it comes to the types of work on the chopping block, we are looking at very polarized sets of skills. AI and robotics will surely be able to do some “jobs” better – that is, if “better” means faster, cheaper, and with fewer mistakes. It does not, however, mean “better” in terms of any other identifiable characteristic of the end product: a widget still looks like a widget. Thus “better” is defined by the owners of capital who decide what to automate. We are back to Marx.
The optimistic crowd cites the fact that technological advances usher in new types of jobs, and that innovation is therefore tantamount to job creation. However, unless there is a concomitant plan to educate the new—and old—class of workers whose jobs are now automated, we are left with an increasing polarization of skills and increasing income inequality. And increasing polarization means that the stabilizing force in politics, the middle class, is shrinking.
The optimism, in my opinion, is the result of sitting in a particularly privileged position. Most of those touting the virtues of AI and robotics are highly skilled experts, usually white men. Being an expert entails having a skill set, a good education, and a job that probably cannot be automated. As Georgina Voss argues, “many of the jobs resilient to computerization are not just those held by men; but rather the structure and nature of these jobs are constructed around specific combinations of social, cultural, educational and financial capital which are most likely to be held by white men.” Moreover, the fact that these powerful few are dictating future technological development also means that the technological future will be imbued with their values. Technology is not value neutral; what gets made, who it gets made for, and how it is designed are morally loaded questions.
These questions gain even greater consequence when we consider that the creation on the other end is an AI. Artificial intelligence is an attempt at mimicking human intelligence, but human intelligence is not merely limited to memorizing facts or even inferring the meaning of a conversation. Intelligence also carries with it particular ideas about norms, ethics, and behavior. But before we can even speculate about how “strong” an AI the tech giants can make in their attempt at freeing us from our bonds of menial labor, we have to ask how they are creating the AI. What are they teaching it? How are they teaching it? And are they, from their often privileged positions, imparting biases and values to it that we ought to question?
The future of AI and robotics is not merely a Luddite worry over job loss. This worry is real, to be sure, but there is a broader question about the very values that we want to create and perpetuate in society. I thus side with Seth Finkelstein’s view: “A technological advance by itself can either be positive or negative for jobs, depending on the social structure as a whole. This is not a technological consequence; rather, it’s a political choice.”
The last profession to fall will be the oldest. When robots take over prostitution, then that’ll be it for any human ever getting any job again.