There are a host of moral problems surrounding intelligent technology. Perhaps the two main ones are 1) the economic problem (automation and job loss) and 2) the existential risk problem (wayward technology destroying all humans). Both are real and deserve serious attention.
But the coolest problem is about the moral standing of machines. To simplify, we can call it the ‘moral machine’ problem: at what point is a piece of technology a member of the moral community? When does a robot have rights? And how will we know when/that it does? Can we ever think of a machine as a person or moral agent? In a 2012 book, David Gunkel calls it the ‘machine question’.
People find this topic exciting. Yet something confuses me.
On standard moral assumptions, there is no ‘moral machine’ problem at all. There is nothing to discuss.
If the question is “when does a piece of technology deserve moral consideration?” the answer, given how ethics typically goes, is simple: “when it possesses the trait(s) necessary for moral consideration.” Or simpler: “when it looks like us in the relevant ways.” This is sometimes called the ‘relevant properties’ approach. With that, the job of the moral philosopher is done.
Of course, it is disputed within moral theory what the coveted traits are (or whether looking for traits is the right strategy). The usual candidates are sentience, rationality, autonomy, consciousness, linguistic ability, and possession of interests. But we can be quite confident that, whatever the traits are, if some entity possesses them, it has some type of moral standing.
What this tells us is that the ‘moral machine’ problem is actually one of two possible problems:
1. What are the traits necessary for moral consideration or status?
2. When does a piece of technology possess the trait(s)?
In the case of 1, we are doing standard ethical and metaethical theory. The discussion is not about technology per se. In the case of 2, we aren’t doing ethics at all. We’ve answered the moral question. Now we simply have a technical or empirical problem. Once we resolve 1 and 2, and assuming that technological development continues along current trajectories, we will face a third, social problem:
3. How do we get people to acknowledge the moral standing of machines?
Here the job typically falls to socially conscious activists. Following their lead, the rest of us must work to overcome our biases. We know the answers to 1 and 2, but our resistance to welcoming marginalized others fully into the moral community is a pernicious bias. A social justice movement (or an expansion of the unified social justice movement) is necessary.
As an example of the three problems, consider Peter Singer’s landmark book Animal Liberation. Setting aside the truth of its main claims, the book serves as a paradigm case and a useful analogy for how people typically think about the moral standing of technology.
- Moral. Singer does not supply anything new at the level of moral theory. His avowed allegiance is to Bentham and utilitarianism, so he already knows what traits are necessary for moral consideration. Beyond some extrapolation or generalization, no innovation is needed.
- Technical. We all know, according to Singer, that the animals we use for food and testing can suffer. It is common sense. Singer does argue for it in places, but mainly for completeness. Regardless, as we saw, this isn’t itself an issue of moral theory. The topic belongs more to animal biology or perhaps philosophy of mind.
- Social. Here is Singer’s main concern. The book is a call to action. We ought to liberate animals! The term ‘speciesism’ is a perfect instance of Singer’s aim to get people to acknowledge the moral standing of nonhuman animals and act in a way consistent with that fact. While he did not invent the term, he consciously invoked it to place the problem of animal suffering in a social justice context.
When we call animal liberation a moment or movement of moral progress, we aren’t referring to progress in moral theory, at least not in any substantive way. We mean that it is social progress through the realization of a persistent moral failure. Anyone, philosopher or not, can be the person who calls attention to it. No extensive expertise in ethics is required.
We tend to assume that future issues, like the moral status of machines, will share the same structure. We are witnessing the “expanding circle of moral concern”: gradually more and more entities are folded into the moral community. On this model, moral progress is less a matter of theory and more a process of technical understanding and, primarily, overcoming biases.
If the circle is going to expand to encompass machines, then, just as with nonhuman animals, there is no deep moral problem. There are primarily technical/empirical and social/prejudice problems. There is little for moral theorists, as moral theorists, to do.
So when we talk about the ‘moral machine’ problem, which problem are we talking about?
To step back, the framing of my discussion assumes a particular approach to ethics and to questions about the prospective inclusion of entities in the moral community. At times, there is a recognition that the prior exclusion of some entities from the moral community should prompt a reexamination of the dominant moral theories that either led to or justified the exclusion. The moral machine problem that demands more attention is whether such a reexamination, a return to basics, is warranted now. Without it, we risk a coming oppression.