Keeping up with the current engagement with artificial intelligence (AI) is a full-time task. Today in the New York Times, two lead articles (here and here) in the technology section were about AI, and another discussed the future of robotics, AI, and the workforce. As I blogged last week, the coming economic future of robotics and AI is going to have to contend with some very weighty considerations that are making our society more and more economically, socially, and racially divided.
Today, however, I’d like to think about how a society might view an AI, particularly one that is generally intelligent and economically productive. To aid in this exercise I am employing one of my favorite and most helpful philosophers: Aristotle. For Aristotle, man is the only animal that possesses logos: the ability to use speech and reason. While other philosophers have challenged this conclusion, let’s just take Aristotle at his word.
Logos is what defines a human as a human, and because of Aristotle’s teleology, the use of logos is what makes a human a good human (Ethics, 1095a). Moreover, Aristotle also holds that man is by nature a political animal (Ethics 1097b, Politics, 1253a3). What he means by this is that man cannot live in isolation, and cannot be self-sufficient in isolation, but must live amongst other humans. The polis for him provides all that is necessary and makes life “desirable and deficient in nothing” (Ethics, 1097b). If one lives outside of the polis, then he is doing so against his nature. As Aristotle explains, “anyone who lacks the capacity to share in community, or has not the need to because of his [own] self-sufficiency, is no part of the city and as a result is either a beast or a god” (Politics, 1253a29). In short, there are three classes of persons in Aristotle’s thinking: citizens, beasts or gods.
Citizens share in community, and according to his writings on friendship, they require a bond of some sort to hold them together (Ethics, 1156a). This bond, or philia, is something shared in common. Beasts, or those animals incapable of logos, cannot by definition be part of a polis, for they lack the requisite capacities to engage in deliberative speech and judgment. Gods, for Aristotle, also do not require the polis for they are self-sufficient alone. Divine immortals have no need of others.
Yet, if we believe that AI is (nearly) upon us, or at least is worth commissioning a 100-year study to measure and evaluate its impact on the human condition, we have before us a new problem, one that Aristotle’s work helps to illuminate. We potentially have an entity that would possess logos but fail to be a citizen, a beast, or a god.
What kind of entity is it? A generally artificially intelligent being would be capable of speech of some sort (that is, communication); it could understand others’ speech (whether through voice recognition or text); it would be capable of learning (potentially at very rapid speeds); and, depending upon its use or function, it could be very economically productive for the person or entity that owns it. In fact, if we were to rely on Aristotle, this entity looks more akin to a slave. Though even this understanding is incomplete, for his argument is that the master and slave are mutually beneficial in their relationship, and that a slave is nothing more than “a tool for the purpose of life.” Of course nothing in a relationship between an AI and its “owners” would make the relationship “beneficial” for the AI, unless one thought it possible to give an AI a teleological value structure that placed “benefit” as that which is good for its owner.
If we took this view, however, we would be granting that an AI will never really understand us humans. From an Aristotelian perspective, what this means is that we would create machines that are generally intelligent and give them some sort of end value, but we would not “share” anything in common with the AI. We would not have “friendship;” we would have no common bond. Why does this matter, the skeptic asks?
It matters for the simple reason that if we create a generally intelligent AI, one that can learn, evolve, and potentially act on and in the world, and it has no philia with us humans, then we cannot understand it and it cannot understand us. So what, the objection goes. As long as it is doing what it is programmed to do, all the better for us.
I think this line of reasoning misses something fundamental about creating an AI. We desire to create an AI that is helpful or useful to us, but if it doesn’t understand us, and we fail to see that it is completely nonhuman and will not think or reason like a human, it might be “rational” but not “reasonable.” We would embark on creating a “friendly AI” that has no understanding of what “friend” means, nor holds anything in common with us to form a friendship. The perverse effects of this would be astounding.
I will leave you with one example, and one of the ongoing problems of ethics. Utilitarians view ethics as a moral framework in which one must maximize some sort of nonmoral good (like happiness or well-being) for one’s action to be considered moral. Deontologists claim that no amount of maximization will justify the violation of an innocent person’s rights. When faced with a situation where one must decide which values to act on, ethicists hotly debate which moral framework to adopt. If an AI programmer says to herself, “well, utilitarianism often yields perverse outcomes, I think I will program a deontological AI,” then the Kantian AI will adhere to a strict deontic structure. So much so that the AI programmer finds herself in another quagmire: the “fiat justitia ruat caelum” problem (let justice be done though the heavens fall), where the rational begins to look very unreasonable. Both moral theories inform different values in our societies, but both are full of their own problems. There is no consensus on which framework to take, and there is no consensus on the meanings of the values within each framework.
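To make the programmer’s dilemma concrete, here is a minimal sketch, in Python, of the two decision rules she is choosing between. The Action class, its utility score, and its violates_rights flag are hypothetical stand-ins for illustration, not anyone’s actual system: the utilitarian chooser simply maximizes the nonmoral good, while the deontological chooser refuses every rights-violating option, even when that refusal is costly.

```python
# A minimal sketch of the two decision rules; all names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    utility: float          # some nonmoral good to be maximized (e.g., aggregate well-being)
    violates_rights: bool   # whether the action wrongs an innocent person

def utilitarian_choice(actions):
    # Maximize the nonmoral good, regardless of whose rights are trampled.
    return max(actions, key=lambda a: a.utility)

def deontological_choice(actions):
    # Refuse every rights-violating action, even if the heavens fall.
    permissible = [a for a in actions if not a.violates_rights]
    if not permissible:
        raise RuntimeError("No permissible action: fiat justitia ruat caelum")
    return max(permissible, key=lambda a: a.utility)

options = [
    Action("divert the harm onto one innocent", utility=100.0, violates_rights=True),
    Action("let the larger harm occur", utility=5.0, violates_rights=False),
]

print(utilitarian_choice(options).name)    # picks the rights-violating option
print(deontological_choice(options).name)  # refuses it, whatever the cost
```

Neither rule resolves the debate; the sketch only shows how starkly the same situation is decided once one framework or the other is hard-coded.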
My point here is that Aristotle rightly understood that humans need each other, and that we must educate and habituate them into moral habits. Politics is the domain where we discuss, deliberate, and act on those moral precepts, and it is what makes us uniquely human. Creating an artificial intelligence that looks or reasons nothing like a human carries with it the worry that we have created a beast or a god, or something altogether different. We must tread carefully on this new road, fiat justitia…
Recently, my wife and I introduced our teenage kids to Blade Runner (my son had already read “Do Androids Dream of Electric Sheep?”), and the movie got me thinking about creating artificial intelligence from a different point of view. In Blade Runner, androids are indistinguishable from humans, except that they have no emotions. They are intelligent, and created specifically to be slaves. In the movie, they are clearly conscious. OK, that’s sci-fi. But it is worth asking: if we continue to increase and improve AI, will we, at some point in the future, create a conscious entity, even one that is not bio-based and looks nothing like us? Since at this time we have no idea where consciousness comes from, can we predict it? Will we know once it has been created? And, assuming we can create conscious entities, how will we treat them? What rights will they have?
Heuristic learning machines that can distinguish between their “bodies” and other bodies may in fact resemble humans more than highly programmed machines that cannot learn, but only execute sophisticated instructions. Add to these learning bodies a set of sensors that detect harm or potential harm (heat, mechanical damage, memory loss, and power loss, for starters) and they gain the potential to react to the environment in ways that may appear emotional, even if not in the human sense.
Now we have a type of intelligence (a learning machine that senses and adapts to conditions of harm) that may be at least beastly, if not human. This type of intelligence, if its learning mechanisms are robust and its learning duration long enough, may evolve over time to form cooperative exchanges with other entities, perhaps even learn the initial stages of trust (sharing resources against agreed-upon exchange pacts). This intelligent machine isn’t human (we still have those emotional bits to parse), but it’s certainly more animated than IBM’s Watson (which is a dumb machine with sophisticated coding and a large database).
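As a thought experiment only, here is a minimal sketch of what such a “beastly” intelligence might look like in code: a hypothetical agent with heat, mechanical, memory-loss, and power-loss sensors that learns, by simple trial and error, to prefer actions its sensors report as less harmful. The sensor names, actions, and learning rule are assumptions for illustration, not a description of any real machine.

```python
# A toy harm-avoiding learner; every name here is hypothetical.
import random

HARM_SENSORS = ["heat", "mechanical", "memory_loss", "power_loss"]

class LearningAgent:
    def __init__(self, actions):
        self.actions = actions
        # Learned estimate of how harmful each action tends to be.
        self.harm_estimate = {a: 0.0 for a in actions}

    def sense_harm(self, readings):
        # Collapse the sensor readings (0.0 = safe, 1.0 = damaging) into one signal.
        return max(readings.get(s, 0.0) for s in HARM_SENSORS)

    def choose(self):
        # Prefer the action currently believed to be least harmful,
        # with a little exploration so learning can continue.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return min(self.actions, key=lambda a: self.harm_estimate[a])

    def learn(self, action, readings, rate=0.2):
        # Nudge the estimate toward what the sensors just reported.
        harm = self.sense_harm(readings)
        self.harm_estimate[action] += rate * (harm - self.harm_estimate[action])

agent = LearningAgent(["touch_stove", "recharge", "idle"])
for _ in range(50):
    action = agent.choose()
    # Hypothetical environment: touching the stove produces a heat-harm reading.
    readings = {"heat": 1.0} if action == "touch_stove" else {}
    agent.learn(action, readings)

print(agent.harm_estimate)  # the stove action ends up with the highest harm estimate
```

Nothing here is emotion or trust, of course; the point is only that harm sensing plus learning already yields behavior that looks adaptive rather than merely scripted.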
Imagine now that some of these learning machines become employed (or purchased for deployment). One set becomes domestic apparatuses in our homes (let’s put aside the idea of chauffeuring for now). They can sense harm and seek to avoid or adapt to it, share resources with other entities, and execute human-designed tasks. A learning machine like this, in a home with sufficient resources and humans who understood its capabilities, would likely become more beloved than the most charming pet. Yet it would potentially be far more capable than any existing pet, if we allowed the learning heuristic to continue to grow. So this machine would adapt to its home, and the humans there would adapt to it and develop feelings for it (if this seems doubtful, listen to the CEO of iRobot talk about the feelings owners have for their very stupid Roomba vacuums).
You may see where this is going. How far toward a fully developed, biological human does a learning machine have to go before rights, autonomy, freedom, and self-actualization kick in? I would argue that some limited form of these features would be advocated by some long before human-level intelligence became apparent (the parallel, of course, being animal rights advocates and philosophers).
So we train the AIs on the virtue ethics canon. Any AI worth its silicon should be able to handle ethical protocols that are non-complete and non-hierarchical. :)