Keeping up with the current engagement with artificial intelligence (AI) is a full-time task. Today in the New York Times, two lead articles (here and here) in the technology section were about AI, and another discussed the future of robotics, AI, and the workforce. As I blogged last week, the coming economic future of robotics and AI will have to contend with some very weighty considerations that are making our society more and more economically, socially, and racially divided.

Today, however, I’d like to think about how a society might view an AI, particularly one that is generally intelligent and economically productive. To aid in this exercise I am employing one of my favorite and most helpful philosophers: Aristotle. For Aristotle, man is the only animal that possesses logos: the ability to use speech and reason. While other philosophers have challenged this conclusion, let’s just take Aristotle at his word.

Logos is what defines a human as a human, and because of Aristotle’s teleology, the use of logos is what makes a human a good human (Ethics, 1095a). Moreover, Aristotle also holds that man is by nature a political animal (Ethics, 1097b; Politics, 1253a3). What he means by this is that man cannot live in isolation, and cannot be self-sufficient in isolation, but must live amongst other humans. The polis for him provides all that is necessary and makes life “desirable and deficient in nothing” (Ethics, 1097b). If one lives outside of the polis, then he is doing so against his nature. As Aristotle explains, “anyone who lacks the capacity to share in community, or has not the need to because of his [own] self-sufficiency, is no part of the city and as a result is either a beast or a god” (Politics, 1253a29). In short, there are three classes of persons in Aristotle’s thinking: citizens, beasts, and gods.

Citizens share in community, and according to his writings on friendship, they require a bond of some sort to hold them together (Ethics, 1156a). This bond, or philia, is something shared in common. Beasts, those animals incapable of logos, cannot by definition be part of a polis, for they lack the requisite capacities to engage in deliberative speech and judgment. Gods, for Aristotle, also do not require the polis, for they are self-sufficient alone. Divine immortals have no need of others.

Yet, if we believe that AI is (nearly) upon us, or at least is worth commissioning a 100-year study to measure and evaluate its impact on the human condition, we have before us a new problem, one that Aristotle’s work helps to illuminate. We potentially have an entity that would possess logos but fail to be a citizen, a beast, or a god.

What kind of entity is it? A generally artificially intelligent being would be capable of speech of some sort (that is, communication); it could understand others’ speech (whether through voice recognition or text); it would be capable of learning (potentially at very rapid speeds); and depending upon its use or function, it could be very economically productive for the person or entity that owns it. In fact, if we were to rely on Aristotle, this entity looks more akin to a slave. Yet even this understanding is incomplete, for his argument is that the master and slave are mutually beneficial in their relationship, and that a slave is nothing more than “a tool for the purpose of life.” Of course, nothing in a relationship between an AI and its “owners” would make the relationship “beneficial” for the AI, unless one thought it possible to give an AI a teleological value structure that defined “benefit” as that which is good for its owner.

If we took this view, however, we would be granting that an AI will never really understand us humans. From an Aristotelian perspective, this means we would create machines that are generally intelligent and give them some sort of end value, but we would not “share” anything in common with the AI. We would not have “friendship”; we would have no common bond. Why does this matter, the skeptic asks?

It matters for the simple reason that if we create a generally intelligent AI, one that can learn, evolve, and potentially act on and in the world, and it has no philia with us humans, then we cannot understand it and it cannot understand us. So what, the objection goes. As long as it is doing what it is programmed to do, all the better for us.

I think this line of reasoning misses something fundamental about creating an AI. We desire to create an AI that is helpful or useful to us, but if it doesn’t understand us, and we fail to see that it is completely nonhuman and will not think or reason like a human, it might be “rational” but not “reasonable.” We would embark on creating a “friendly AI” that has no understanding of what “friend” means, and that holds nothing in common with us from which to form a friendship. The perverse effects of this would be astounding.
I will leave you with one example, drawn from one of the ongoing problems of ethics. Utilitarians view ethics as a moral framework that states that one must maximize some sort of nonmoral good (like happiness or well-being) for one’s action to be considered moral. Deontologists claim that no amount of maximization will justify the violation of an innocent person’s rights. When faced with a situation where one must decide which values to act on, ethicists hotly debate which moral framework to adopt. If an AI programmer says to herself, “well, utilitarianism often yields perverse outcomes, I think I will program a deontological AI,” then the Kantian AI will adhere to a strict deontic structure. So much so, that the AI programmer finds herself in another quagmire: the “fiat justitia ruat caelum” problem (let justice be done though the heavens fall), where the rational begins to look very unreasonable. Both moral theories inform different values in our societies, but both are full of their own problems. There is no consensus on which framework to adopt, and there is no consensus on the meanings of the values within each framework.
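The programmer's dilemma above can be made concrete with a toy sketch. This is purely illustrative, not a real ethics engine: the action names, welfare scores, and the `violates_right` flag are all hypothetical stand-ins, and real moral deliberation cannot be reduced to a lookup. The point is only to show how two decision procedures, given identical inputs, diverge, and how the strict deontic rule can refuse every option (the "though the heavens fall" case).

```python
# Hypothetical decision procedures; all names and numbers are invented
# for illustration only.

def utilitarian_choice(actions):
    """Pick the action with the greatest total welfare, rights aside."""
    return max(actions, key=lambda a: a["welfare"])

def deontological_choice(actions):
    """Exclude any action that violates a right, whatever its welfare."""
    permitted = [a for a in actions if not a["violates_right"]]
    if not permitted:
        # Fiat justitia ruat caelum: refuse to act at all.
        return None
    return max(permitted, key=lambda a: a["welfare"])

actions = [
    {"name": "divert",  "welfare": 10, "violates_right": True},
    {"name": "abstain", "welfare": 2,  "violates_right": False},
]

print(utilitarian_choice(actions)["name"])    # divert
print(deontological_choice(actions)["name"])  # abstain
```

The utilitarian procedure selects the high-welfare, rights-violating action; the deontological one forbids it, and if every candidate violated a right it would return nothing at all, which is exactly the "rational but unreasonable" quagmire described above.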

My point here is that Aristotle rightly understood that humans need each other, and that we must educate and habituate them in moral virtue. Politics is the domain where we discuss, deliberate, and act on those moral precepts, and it is what makes us uniquely human. Creating an artificial intelligence that looks or reasons nothing like a human carries with it the worry that we have created a beast or a god, or something altogether different. We must tread carefully on this new road, fiat justitia.