Rousseau once remarked that “It is, therefore, very certain that compassion is a natural sentiment, which, by moderating the activity of self-esteem in each individual, contributes to the mutual preservation of the whole species” (Discourse on Inequality). Indeed, it is compassion, and not “reason,” that keeps this frail species progressing. Yet this capacity for compassion, which is by its very nature other-regarding, is (ironically) the other side of the same coin: comparison. Comparison, or “reflection on certain relations” (e.g. small/big; hard/soft; fast/slow; scared/bold), also produces the degenerative passions of pride and envy. These twin vices are, for Rousseau, the root of much of the evil in this world. They are tempered by compassion, but they engender the greatest forms of inequality and injustice.
Rousseau’s insights ought to ring true in our ears today, particularly as we attempt to create artificial intelligences to take over or mediate many of our social relations. Recent attention to “algorithmic bias,” where the algorithm for a given task draws on biased assumptions or biased training data and so yields discriminatory results, is, I would argue, working the problem of reducing bias from the wrong direction. Many, the White House included, are presently paying much attention to how to eliminate algorithmic bias, or in some instances to how to solve the “value alignment problem” and thereby eliminate it indirectly. Why does this matter? Allow me a brief technological interlude on machine learning and AI to illustrate why eliminating this bias is (a la Rousseau) impossible.
There are two features that many of our present-day AIs rely on heavily: representation and search. Representation is the ability of the system to construct (represent) a “state-space”: all the potential nodes of a problem or the “knowledge needed,” the permitted moves within the system, the constraints, the initial state, and the desired goal state. The algorithm then searches the state-space for the answer, goal, or solution. There are many more complicated techniques and qualifiers here, but for present purposes we can think in terms of these two basic things. Thus when you search for “CEO” on Google, there needs to be some set of information, rules, constraints, etc. (the state-space), and an algorithm searching that space to give you a set of potential answers.
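To make this concrete, here is a minimal sketch of representation and search. The problem, the moves, and the constraint below are invented for illustration (this is a toy, not anything a real search engine runs): the state-space is the set of numbers reachable under two permitted moves, and a breadth-first search hunts through that space for a path from the initial state to the goal state.

```python
from collections import deque

def successors(state):
    """The permitted moves: from any state we may add 1 or double."""
    return [state + 1, state * 2]

def search(initial, goal, limit=100):
    """Breadth-first search through the state-space for the goal state."""
    frontier = deque([(initial, [initial])])   # states left to explore
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path                        # the solution: a path of moves
        for nxt in successors(state):
            if nxt not in visited and nxt <= limit:   # a constraint on the space
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                                # goal unreachable in this space

print(search(2, 11))   # e.g. [2, 4, 5, 10, 11]
```

Notice that everything the searcher can ever find is already fixed by how the space was represented; change the moves or the constraint and the set of possible answers changes with it.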
Machine “learning” is really the ability of the algorithm to figure out the correct answer itself, as opposed to a human pre-selecting a group of potential answers. The learning system is fed an enormous amount of data, some of it hand-coded to “train” the AI on, but once the training is complete, the system can continue to take inputs (new data points) into its existing state-space and update it to formulate new representations. “Deep learning” of the kind we often hear about (such as beating humans at Go) is an AI technique that uses neural networks stacked in successive layers. Deep learning is notoriously difficult to unpack, that is, to figure out why an AI spit out a particular answer, and it is often described as a “black box.” While this is true, there are ways to unpack the weights of particular neurons in the net and then see, or at least estimate, why the AI is delivering a certain output.
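A toy illustration of both points, at a far smaller scale than any deep network (the data and features here are invented): a two-weight model “learns” from hand-labeled examples by gradient descent, keeps classifying new inputs afterward, and, because it is tiny, we can read its weights directly to estimate why it answers as it does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hand-labeled training data: 200 examples, two features, one binary label.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(float)   # feature 0 matters most

# "Training": logistic regression fit by gradient descent.
w = np.zeros(2)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))        # current predictions
    w -= 0.1 * X.T @ (p - y) / len(y)   # nudge weights toward fewer errors

# Once trained, the system keeps taking in new data points...
new_point = np.array([1.5, -0.3])
print("prediction:", 1 / (1 + np.exp(-new_point @ w)))

# ...and we can unpack the weights to estimate why it answered as it did.
print("learned weights:", w)            # the weight on feature 0 dominates
```

In a deep net the same inspection is vastly harder, since the “weights” are spread across millions of neurons in many layers, but the principle is the same.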
So where do algorithmic bias and Rousseau come into play? I hope you might begin to see. First is the problem of the data. In other words, we have a representation problem on our hands. Whatever “knowledge” (data, facts, images, words, phrases, and/or the correlations between them) gets fed into an AI teaches the AI to “give us” particular answers when we search for them. I explore this a bit in a recent paper, but the take-home is this: we live in a biased world already. This is because, like all good constructivists, I think that we cannot separate ourselves from our structures or our structures from us. Thus the data, whether chosen by the programmer or gathered without hand-coding for unsupervised learning, still comes from our world with all of its messiness, bias, innuendo, inequality, and injustice. So even if scholars, media, governments, corporations, and professional bodies come together to formulate policy on AI bias, they cannot eliminate it, because the AI is designed to work for us in our world. In short, it is aimed at humans for human-centered tasks, so finding “non-biased” data is impossible. We can attempt to mitigate biased effects and systemic injustice, but we cannot remove them from our data entirely.
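To see the mechanism, consider a deliberately simple sketch with entirely invented data. Even when the sensitive group attribute is excluded from the training features, a model fit to historically biased decisions recovers that bias through an innocuous-looking proxy feature that happens to correlate with group membership:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, n)               # a social group: 0 or 1
skill = rng.normal(size=n)                  # what we *want* to reward
proxy = group + 0.3 * rng.normal(size=n)    # e.g. a zip-code-like feature

# Historical decisions favored group 0, independently of skill.
label = (skill - group + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Train on [skill, proxy] only; the group attribute itself is withheld.
X = np.column_stack([skill, proxy])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - label) / n

print("weights [skill, proxy]:", w)
# The proxy receives a large negative weight: the model has learned the
# historical disfavoring of group 1 from supposedly "neutral" data.
```

The data here were never labeled by group at training time; the bias rode in on the correlations our world supplies.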
This leads to the second, more complex and subtle point: if we want to create an AI that is better at identifying bias than we are, thereby, I would say, solving the value alignment problem, we need to create a social AI. Presently many deep learning techniques rely on reinforcement learning. That is, they take a behaviorist, B.F. Skinnerian approach to training a system: write an algorithm, give it certain goals and constraints, and then let it figure out what to do from a reinforcement signal it receives when it gets something correct. Yet this approach tends to produce highly negative results (much like act utilitarianism) when applied in social settings. Thus the AI needs to be “raised” in a multi-agent, social structure. To do this successfully, however, we need to return to Rousseau’s insights about compassion. Regardless of what kind of intelligent agent you have, human or artificial, if you want it to succeed in a social system it needs compassion.
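Here is that reinforcement loop in miniature, as a single-agent Q-learning sketch (the corridor, actions, and reward are all invented for illustration). Note what is missing: the agent optimizes its reward signal with no representation of any other agent at all, which is precisely why such systems need a social structure to be “raised” in.

```python
import random

STATES, ACTIONS = 5, [0, 1]   # a 5-state corridor; actions: move left or right
GOAL = 4                      # reinforcement arrives only at the last state

Q = {(s, a): 0.0 for s in range(STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise exploit current value estimates.
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = max(0, s - 1) if a == 0 else min(STATES - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0    # the reinforcement signal
        # Update the estimate toward reward plus discounted future value.
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy: "go right" in every state, and nothing more.
print([max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(STATES - 1)])
```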
But compassion is itself premised on a capacity for self-awareness and reflexivity. It is grounded in the ability of an agent to put itself in the condition of another agent and reason (or feel) about what that would be like. This is the same cognitive capability that makes pride, vanity, and envy possible. For if I can compare myself to you, I can do so beneficently or maleficently. This capacity for comparison is, according to Rousseau, the root of inequality and injustice; all other structures are built upon it. Thus if he is correct, we cannot create bias-free algorithms (or AIs), because we will have recreated the very social dynamics that drive bias in the first place.