What makes racism wrong? Sexism? Speciesism? Each has its own complexities, but at the most general level, all three share a common error. They involve attributing moral significance to something that isn’t morally significant. A racist assumes that the division of people into various races allows you to ascribe different moral worth to people. A sexist operates on the assumption that the division between sexes is a division that carries moral significance. A speciesist assumes that facts about species membership, all by themselves, enable us to make moral judgments.
Each of these prejudices is wrong. And because people in dominant groups have been wrong for so long, systems, institutions, and cultural norms spring up with the prejudices embedded deep in their cores, nearly invisible to the people who reap the benefits. We struggle to grasp, let alone solve, all of the problems these prejudices have caused and continue to cause. The inertia of the systems pushes the moral failures into the future.
When we look into the future, we see another form of oppression coming: substratism. (It is an unappealing term, and certainly distinct from its meaning in linguistics.)
What is it? Racism is prejudice on the basis of race; sexism on the basis of sex; speciesism on the basis of species. Substratism is prejudice on the basis of material substrate. We will face a widespread intellectual and moral failure in which carbon-based entities, like human beings, will receive preferential treatment at the expense of silicon-based entities. The prejudice involves attaching moral significance to the division between carbon and silicon substrate. The prejudice is wrong, and the seeds—set to grow into systems—are already here.
My goal, and what I hope to encourage, is a framing of talk about future technologies in terms of social justice.
Substratism, in many ways, is already taking hold. Consider this viral tweet:
This is likely substratist. Here is why. Imagine someone in antebellum or reconstruction America asking, “Do black people deserve rights?” Now imagine someone responding, “Woah there. Let’s start with white people.” That would be racist. Unquestionably. Of course black people deserve rights. Pointing to unfortunate white people doesn’t change that. The response comes from a bias in favor of white people. Likewise, if someone asked whether non-human animals deserve some rights and was met with “Woah there. Let’s start with humans,” that would be speciesist. If you feel an urge to object, first bear in mind that the following two statements could both be true: 1) the response is speciesist; 2) non-human animals do not deserve rights.
Implicit in the prejudicial responses is the idea that some individuals deserve preferential treatment solely on the basis of their membership in certain groups. But the membership is not itself morally significant.
This is true even in the case of material substrate.
It is important to recognize the specter of substratism now. We are currently in a unique position: we can head off substratism before it becomes a systemic problem. This is a crucial point. With respect to racism, sexism, and speciesism, we are not in such a position. The culmination of history is the present. Imagine being able to prevent the evils of racism before they were to take hold in the world. What would you do?
Some people have already brought attention to substratism. In a paper about artificial intelligence and ethics, Nick Bostrom and Eliezer Yudkowsky discuss what they call the “Principle of Substrate Non-Discrimination.” By this they mean,
If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.
It would be substratist to treat the beings differently solely on the basis of their material substrate. In other words, difference in substrate (or race, sex, or species) does not automatically entail a difference in moral standing. Bostrom and Yudkowsky are certainly right, but there is far more to say about the issue.
It is easy to find speculation and discussion about the moral status of future machines. The questions are a fixture of science fiction. What would it mean to give machines rights (or recognize that they have rights)? Can a machine have a soul? Debate over these questions operates on an assumption: namely, what makes a machine morally significant will be a special trait that it shares with human beings. Machines become members of the moral community when they are sufficiently like human beings. The more they differ from humans, the fewer obligations we have to them and the less consideration we owe them. Humans always remain the locus of concern, at the top of the moral ranking. Our ethics is anthropocentric.
This is an understandable and perhaps unavoidable assumption. To a large extent, I am assuming it as well. By depicting substratism as a problem of the future, I am implying that existing machines are not deserving of moral consideration. It feels like a safe claim, but only because we measure moral worth relative to human beings. How well will anthropocentric ethics serve us when the moral community expands to include more and more types of non-humans? Discussions in environmentalism are already pushing us away from thinking about moral standing in terms of ‘living persons as rational agents’.
In one way or another, humans might one day find themselves knocked from their perch at the top of the moral community. Future technologies threaten to force a radical reconceptualization of ethics—a task human beings might not be suited to perform. Substratism, along with other hidden prejudices, risks leading us into profound mistakes as we relinquish more control to powerful entities of our own creation. Ironically and tragically, it may also mean that our conversations about the moral status of machines begin with a prejudice against machines.
You might think that what I’ve said so far is absurd. Why? It is worth approaching the same idea in a different way.
We will soon be seeing debates about whether a particular sophisticated machine is owed some moral consideration. The discourse is inchoate now. If someone dismisses the idea by saying, “It is just a machine—zeros and ones,” that is substratism par excellence. But the arguments are likely to get more nuanced. We will start talking about the traits the machine has and whether the traits are shared by human beings. Here substratism will show up in more covert ways.
Let’s explore the issue through an analogous prejudice. Everyone would agree that racism is wrong (though they might not know what racism actually is). Speciesism, however, is more contested. When someone defends against a charge of speciesism, they will usually attempt to identify an ability or trait that human beings have that non-humans lack. The trait is meant to justify a moral distinction between humans and non-humans that does not rely on the species division itself. The customary candidates are rationality, self-consciousness, or language. This way we get to have our meat and eat it too.
In the U.S. about 9 billion chickens are killed each year. This practice is acceptable, so one wants to think, not because the chickens are a non-human species, but because they do not have reason or self-consciousness or language (or whatever you think really matters). The idea is that we are justified in drawing a moral distinction between humans and chickens because we can point to a morally relevant trait that just so happens to track the species division.
The point of the strategy is this. An entity’s species is not in itself morally relevant. But it might be indicative of something that is. We aren’t actually relying on species membership in our moral judgments. We get to our preferred conclusion, but not in a speciesist way. For example, if we are hiring a mathematics professor, it is acceptable to discriminate against a chicken. This is because the chicken lacks the necessary mathematical knowledge and ability. It is the ability that is relevant, not the species membership. (A mathematician chicken, however, should probably be granted an interview.)
This attempt to defend against speciesism fails (see the appendix below for the reasons why). When arguing in this way, what we routinely uncover is a motivation rooted in prejudice. People are conjuring rationalizations to justify continuing their destructive behavior and thinking. We have a prized default assumption, and we’ll say whatever shuts up the quibbling philosopher, even though we haven’t thought much about the issue before.
It turns out that our rationalizations are both routinely awful and tenaciously persistent. A great deal of energy and time has gone into critiquing racist, sexist, and speciesist arguments that have propped up oppressive systems. For the people who benefit from the system, the arguments will seem appealing and the critiques unappealing. The prejudices keep their grip on the mind. Especially in the case of animal ethics, people will grasp at any argument they can find to maintain the status quo. As is most common now, people will acknowledge forms of oppression, but the forms of human oppression are privileged. (The resistance to including non-humans in a broader social justice cause will be viewed as an embarrassing failure in the future, just as second wave feminists are rightfully critiqued for neglecting the importance of race.)
I suspect that the nascent attempts to segregate humans and machines have similar prejudicial origins. It is certainly possible that the substrate division is indicative of morally relevant facts. The early discussions about AI in philosophy were focused on this issue. “Can a machine be conscious?” is a question that, besides occupying far too much of our time, might eventually strike us as naively insensitive. It has alarming parallels to questions about the abilities of black people and women. The view that a machine in principle cannot be conscious is appealing because it enables us to render discussions about their moral status otiose.
In sum, we should assess our moral arguments about machines in light of a tendency for self-serving bias. People will conjure reasons for dividing humans and machines into morally distinct groups. We are prone to accept those arguments uncritically because they align with a prejudicial impulse. Machines, like non-human animals, are not here to stand up for themselves. Yet individuals in the future, made from carbon or silicon, might look back and wonder why we had such an obvious moral blindspot.
So far I have been relying on racism, sexism, and speciesism to explain substratism. Although they share the same general form, substratism is disanalogous in notable ways.
For instance, racism, sexism, and speciesism are prejudices that concern entities that have always existed. Some would argue that the categories of race, sex, and species were invented or constructed, but the bodies were here before. The bodies themselves were not constructed.
The case of machines is different. Not only are they the product of human creation, but the natures of the machines—what traits and goals they possess—are the product of human choice. Substratism, then, stretches beyond how we decide to categorize the bodies we find among us. It moves into the issue of which machines we make. How we view our relation to machines will have bearing on our design choices. Is it acceptable to bring into existence entities who solely serve human needs? What should we think of the designers who build “good, healthy, slave complexes into the damned machines,” as Isaac Asimov says in his story “Runaround”? Can we wrong a machine by making it one way rather than another? These are complex and neglected questions. But they are questions we do not face in the other forms of prejudice.
The creative role of humans shows how substratism differs from speciesism. But someone might object that there is no clean distinction between the two. In a broad sense, I can agree. In both we are considering the power relation between humans and non-humans. But the types of non-humans are different. It stretches the term to call a machine one species among the others.
Regardless, I do not think the issue is especially important. What matters are the political and moral consequences of our prejudice. My concern is that folding machines into the issue of speciesism will prevent us from giving machines the attention they deserve. If we focus on the broad commonalities between machines and other non-humans, we risk ignoring the specific differences. We also need a way of marking the special creative role humans have in technology. Our design choices are in need of moral guidance.
Someone might object here too: “You said that substratism might be lurking in how we design our machines, and that this is unlike other forms of prejudice. But what about breeding or genetic modification?” These are significant questions. Both of these activities deserve moral scrutiny. There has been a great deal of quality work done on the ethics of genetic manipulation and enhancement. But we see the special importance of machines when we grasp the full scope of creative possibility that humans have. There is a vast space of possible machine natures. We are limited in what we can do to animals, human or not. When breeding or genetically modifying, we are working in a far narrower scope of possibility. Additionally, the moral status of animals is largely understood prior to the alterations. We are working from a known baseline. This is not the case with machines.
In the end, the labeling is not centrally important. My argument has depended on the commonalities between substratism and other forms of oppression. I do not think substratism is one of a kind. The differences, however, highlight the ways in which the coming oppression presents new and daunting challenges. For that reason it is worth considering machines as a special case. Never before have we had to contemplate the prejudices we have towards something that does not yet exist, whose nature is in our hands, and that might supplant us as the centers of the moral world. Will we give up our spot willingly? Or will we entrench in an unfounded preference for our kind? Our track record is not good.
“Are you ready, and at the price of what sacrifice, to live the good life together? That this highest of moral and political questions could have been raised, for so many centuries, by so many bright minds, for humans only without the nonhumans that make them up, will soon appear, I have no doubt, as extravagant as when the Founding Fathers denied slaves and women the vote.” -Bruno Latour
Why does the above attempt to defend against speciesism fail? My discussion here operates on the assumption that what confers moral worth is some set of traits an entity shares with human beings. As I say, I think future technology will challenge this assumption. The assumption is already questioned within environmental ethics and by thinkers like Baruch Spinoza, Henri Bergson, Henry David Thoreau, Deleuze/Guattari, and Jane Bennett.
The general strategy of the defense is to find a trait that tracks the species division. A trait that humans have that non-humans lack enables us to distinguish two moral groups and clear ourselves of accusations of speciesism. (The same strategy has been used in attempts to justify racism and sexism. We tend to view them as transparently racist and sexist now, and rightfully so.)
The main problem is that the traits we pick do not track the species division as we would hope. Suppose we think rationality does the trick. If we are interested in rationality (whatever that is), what about infant humans? They are less rational or intelligent than pigs (in general, we have a tendency to underestimate the cognitive abilities of non-human animals). So if it is wrong to hurt an infant, shouldn’t it then be wrong to hurt a pig?
But surely the infant will grow to have full-blown rationality (whatever that is). Maybe that is what’s important. There are several problems with this approach. First, what about cognitively disabled human beings without the precious rationality trait? We value them more than pigs, but they lack what we already said is important, and they won’t develop it in time. Second, the infant case makes moral worth dependent on a capacity or potentiality. Infants are only morally significant for what they will become, not for what they are. Someone might respond that the disabled person unfortunately lacks rationality but, in some sense, ‘should’ have it. So the disabled person is only morally significant for what they could have been, not for what they are. Both claims rely on some conception of a ‘standard’ or ‘normal’ human being. How we build the conception is deeply problematic. Disability ethics teaches us why. Making moral worth dependent on potentialities is murky anyhow.
But let’s set all of that aside. Even if we found a trait that tracks the species division perfectly, we’d need to show that the trait is morally decisive. Linguistic ability likely fails here. Why would lack of language make something unworthy of moral consideration? It is tough to see. Does an entity’s inability to speak mean that we can kill and eat it? If so, we again face the infant and disabled person problem. There are also non-humans who have language abilities comparable to some humans, and yet we assign the non-humans lesser worth. What reason could we have for this other than a simple preference for our own species?
But let’s set that aside too. Even if we find a morally relevant trait that tracks the species division, we’d also need to argue that the traits we share with other species are somehow morally irrelevant. That is, we’d need to show that the trait that tracks the species division is the only one that matters. That is hard to establish. Sentience, the ability to feel pain as one’s own, is a trait we find in nearly all our food animals. No matter how sophisticated human beings are, it is difficult to deny the moral relevance of sentience. The ability to suffer is shared across species lines. And when it comes to inflicting suffering on creatures, how is it that only human suffering matters? Only speciesism provides the answer. It is plausible to think that many traits carry moral significance, and not all species share them all, but we are unjustified in making the moral community all and only human beings. Our desire to do so is rooted in prejudice.
So if the question is “Who should we hire as a mathematics professor, the human or the chicken?” it is legitimate to discriminate against a chicken. The ability to do complex reasoning is relevant, perhaps morally. But if the question is “Who should we lock into a small cage and eventually kill?” the answer is neither.
Hence, the above attempt to defend against speciesism fails. To end, here is a condensed argument for a more general conclusion. People would agree that causing unnecessary suffering is wrong. The statement is not limited to human suffering. If I were to spend my free time cutting off dogs’ tails on the street, that would be horribly wrong. I’m causing suffering that is flagrantly unnecessary. But at least 97% of what we do to non-human animals is transparently frivolous, including using them for food and clothes. Those activities are obviously unnecessary and destructive, to the non-human animals, to humans, and to the environment. It follows that using non-human animals for food and clothes (in the manner we do) is wrong. And since we shouldn’t do things that are wrong, we should stop using non-humans for food and clothes. Hence, we should become vegans. Sorry.