There is a host of moral problems surrounding intelligent technology. Perhaps the two main ones are 1) the economic problem (automation and job loss) and 2) the existential risk problem (wayward technology destroying all humans). Both are real and deserve serious attention.
But the coolest problem is about the moral standing of machines. It is all the rage. To simplify, we can call it the ‘moral machine’ problem: at which point is a piece of technology a member of the moral community? When does a robot have rights? And how will we know when/that they do? Can we ever think of a machine as a person or moral agent? (Dave Gunkel calls it the ‘machine question’.)
People find this topic very exciting. Yet something confuses me.
On standard moral assumptions, there is no ‘moral machine’ problem at all. There is nothing to discuss.
If the question is “when does a piece of technology deserve moral consideration?” the answer is simple: “when it possesses the trait(s) necessary for moral consideration.” Or simpler: “when it looks like us in the relevant ways.” With that, the job of the moral philosopher is done. No mystery. Pack up the books and go home. No op-ed necessary.
Of course, moral theorists dispute what the coveted traits are (and whether looking for traits is the right strategy at all). The usual candidates are sentience, rationality, autonomy, consciousness, linguistic ability, and possession of interests. But we can be quite confident that, whatever the traits turn out to be, if some entity possesses them, it has some type of moral standing.
What this tells us is that the ‘moral machine’ problem is actually one of two possible problems:
1. What are the traits necessary for moral consideration?
2. When does a piece of technology possess the trait(s)?
In the case of 1, we are doing standard ethical and metaethical theory. The discussion is not about technology.
In the case of 2, we aren’t doing ethics at all. We’ve answered the moral question and moved on. Now we simply have a technical or empirical problem on our hands.
Once we resolve 1 and 2, and assuming that technological development continues along current trajectories, we will face a third, social problem:
3. How do we get people to acknowledge the moral standing of machines?
As an example of the three problems, we can look at Peter Singer’s landmark book *Animal Liberation*. It serves as the paradigm and a useful analogy for how people typically think about the moral standing of technology.
- Moral. Singer supplies nothing new at the level of moral theory. His avowed allegiance is to Bentham and utilitarianism, so he already knows which traits are necessary for moral consideration. No innovation needed.
- Technical. We all know that the animals we use for food and testing can suffer. It is common sense. Singer does argue for it in places, but mainly for completeness. Regardless, as we saw, this isn’t itself an issue of moral theory.
- Social. Here is Singer’s main concern. The book is a call to action. We ought to liberate animals! The term ‘speciesism’ is a perfect instance of Singer’s aim to get people to acknowledge the moral standing of nonhuman animals and act in a way consistent with that fact.
When we call animal liberation a moment or movement of moral progress, we aren’t referring to progress in moral theory. We mean that it is social progress achieved by recognizing a persistent moral failure. Anyone, philosopher or not, can be the one who calls attention to it. No expertise in ethics is required.
We tend to assume that future issues, like the moral standing of machines, will share the same structure. We are witnessing the “expanding circle of moral concern”: gradually more and more entities are folded into the moral community. On this model, moral progress is less a matter of theory and more a process of technical understanding and, primarily, overcoming biases.
If the circle is going to expand to encompass machines, then, just like with nonhuman animals, there is no moral problem. There are only technical/empirical and social/prejudice problems. There is nothing for moral theorists, as moral theorists, to do.
So when we talk about the ‘moral machine’ problem, what are we talking about?
However we answer, the conversation must confront some of our most foundational assumptions about ethics. In my view, there is a moral machine problem, but to recognize and formulate it, we must go back to the basics.