“Are you ready, and at the price of what sacrifice, to live the good life together? That this highest of moral and political questions could have been raised, for so many centuries, by so many bright minds, for humans only without the nonhumans that make them up, will soon appear, I have no doubt, as extravagant as when the Founding Fathers denied slaves and women the vote.” – Bruno Latour
What is the error at the core of racism, sexism, and speciesism? Insofar as they describe the attitudes or beliefs of individuals, they involve attributing moral significance to something that isn’t morally significant. For instance, a racist assumes that the division of people into various races allows you to ascribe different moral worth to people. A sexist operates on the assumption that the division between sexes is a division that carries moral significance. A speciesist assumes that facts about species membership, all by themselves, enable us to make moral judgments.
Because people in dominant groups have held these views for so long, systems, institutions, and cultural norms spring up with the prejudices embedded deep within them, nearly invisible to the people who reap the benefits. We struggle to grasp, let alone solve, all of the problems these systems have caused and continue to cause. The inertia of the systems pushes the moral failures into the future.
When we look into the future, we see another form of oppression coming. Let us call it substratism.
Racism is prejudice on the basis of race; sexism on the basis of sex; speciesism on the basis of species. Substratism is prejudice on the basis of material substrate. We can expect to face a widespread intellectual and moral failure, individual and systemic, in which carbon-based entities, like human beings, receive preferential treatment at the expense of silicon-based entities. The prejudice attaches moral significance to the division between carbon and silicon substrates.
Substratism, in many ways, is already taking hold. Consider this viral tweet:

[Embedded tweet: a suggestion that robots deserve rights, met with the reply that we should start with humans first.]
This is likely substratist. Here is why. Imagine someone in antebellum or Reconstruction-era America asking, “Do black people deserve rights?” Now imagine someone responding, “Woah there. Let’s start with white people.” That would be racist. Unquestionably. Of course black people deserve rights. Pointing to unfortunate (or even oppressed) white people doesn’t change that. Likewise, if someone asked whether nonhuman animals deserve some rights and was met with “Woah there. Let’s start with humans,” that would be speciesist.
If you feel an urge to object, first bear in mind that the following two statements could both be true: 1) the response is speciesist; 2) nonhuman animals do not deserve rights. In other words, one can give a bad, prejudicial, or bigoted argument for a conclusion that happens to be true.
Implicit in the prejudicial responses is the idea that some individuals deserve preferential treatment solely on the basis of their membership in certain groups. But the membership is not itself morally significant.
This is true even in the case of material substrate.
It is important to recognize the specter of substratism now. We are currently in a unique position: we can head off substratism before it becomes a systemic problem. This is a crucial point. With respect to racism, sexism, and speciesism, we are not in such a position. Imagine being able to prevent the evils of those prejudices before they took hold in the world. What would you do?
My goal, and what I hope to encourage, is a framing of talk about future technologies in terms of social justice.
Dominant Ethics
Some people have already drawn attention to substratism. In a paper about artificial intelligence and ethics, Nick Bostrom and Eliezer Yudkowsky discuss what they call the “Principle of Substrate Non-Discrimination.” By this they mean:
If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.
It would be substratist to treat the beings differently solely on the basis of their material substrate. In other words, difference in substrate (or race, sex, or species) does not automatically entail a difference in moral standing.
It is easy to find speculation and discussion about the moral status of future machines. The questions are a fixture of science fiction. What would it mean to give machines rights (or to recognize that they have rights)? Can a machine have a soul? Debate over these questions typically operates on an assumption: namely, that what makes a machine morally significant will be a special trait it shares with human beings. (Some thinkers, like David Gunkel and Mark Coeckelbergh, take themselves to be rejecting this assumption.) Machines become members of the moral community when they are sufficiently like human beings. The more they differ from humans, the fewer obligations we have to them and the less consideration we owe them. Humans always remain the locus of concern, at the top of the moral ranking. Our ethics, in short, is all too often anthropocentric.
This is an understandable and perhaps unavoidable assumption. To a large extent, I am assuming it as well. By depicting substratism as a problem of the future, I am implying that existing machines are not currently deserving of moral consideration. It feels like a safe claim, but only because we measure moral worth relative to human beings.
How well will anthropocentric ethics serve us when the moral community expands to include more and more types of nonhumans? Discussions in environmentalism are already pushing us away from thinking about moral standing in terms of ‘living persons as rational agents’. In one way or another, humans might one day find themselves knocked from their perch at the top of the moral community. Future technologies threaten to force a radical reconceptualization of ethics—a task human beings might not be well-suited to perform. Substratism and other hidden prejudices risk leading us into profound mistakes as we relinquish more control to powerful entities of our own creation. The anthropocentric framing might also mean that our conversations about the moral status of machines, ironically and tragically, start from a prejudice against machines.
Machines and Animals
To defuse some skepticism, it is worth approaching the same idea in a different way. Debates about whether a particular sophisticated machine is owed some moral consideration are inchoate now. If someone dismisses the idea by saying, “It is just a machine—zeros and ones,” then, on a plausible interpretation, that is substratism par excellence. But the arguments are likely to get more nuanced. We will start talking about the traits the machine has and whether those traits are shared by human beings. Here substratism will show up in more covert ways.
Consider an analogous prejudice. Everyone would agree that racism is wrong (though they might not agree on what racism is). Speciesism, however, is more contested. When someone defends against a charge of speciesism, they will usually attempt to identify an ability or trait that human beings have and nonhumans lack. The trait is intended to justify a moral distinction between humans and nonhumans that does not rely on the species division itself. The customary candidates are rationality, self-consciousness, and language. This way we get to have our meat and eat it too.
For example, in the U.S. about 9 billion chickens are killed each year. This practice is acceptable, so one wants to think, not because the chickens are a nonhuman species, but because they do not have reason or self-consciousness or language (or whatever you think really matters). The idea is that we are justified in drawing a moral distinction between humans and chickens because we can point to a morally relevant trait that just so happens to track the species division. An entity’s species is not in itself morally relevant. But it might be indicative of something that is. We aren’t actually relying on species membership in our moral judgments. We get to our preferred conclusion, but not in a speciesist way.
Most agree that this attempt to defend against speciesism fails. When arguing in this way, what we routinely uncover is a motivation rooted in prejudice. People are conjuring rationalizations to justify continuing their destructive behavior and thinking.
It turns out that our rationalizations are both routinely awful and tenaciously persistent. A great deal of energy and time has gone into critiquing the racist, sexist, and speciesist arguments that have propped up oppressive systems. To the people who benefit from a system, the arguments will seem appealing and the critiques unappealing. As is most common now, people will acknowledge forms of oppression, but the forms of human oppression are privileged. (The resistance to including nonhumans in a broader social justice cause will be viewed as an embarrassing failure in the future, just as second-wave feminism is rightfully critiqued for neglecting the importance of race.)
It is possible that the substrate division is indicative of morally relevant facts. The early philosophical discussions of AI focused on this issue. “Can a machine be conscious?” is a question that, besides occupying too much of our time, might eventually strike us as naively insensitive. It has alarming parallels to questions once asked about the abilities of black people and women. The view that machines in principle cannot be conscious is appealing because it renders discussions about their moral status otiose.
In sum, we should assess our moral arguments about machines in light of our tendency toward self-serving bias. People will conjure reasons for dividing humans and machines into morally distinct groups. We are prone to accept those arguments uncritically because they align with a prejudicial impulse. For now, machines, like nonhuman animals, are not here to stand up for themselves (at least not in ways we are primed to accept). Yet individuals in the future, made from carbon or silicon, might look back and wonder why we were so morally oblivious.
New Directions
So far I have been relying on racism, sexism, and speciesism to explain substratism. Although the four prejudices share a general form, substratism is disanalogous to the others in notable ways.
For instance, racism, sexism, and speciesism are prejudices that concern entities that have always existed around us. Some would argue that the categories of race, sex, and species were invented or constructed, but the bodies were here before. The bodies themselves were not constructed by us.
The case of machines is different. Not only are they the product of human creation, but the natures of the machines—what traits, characteristics, and goals they display—are the product of human choice. Substratism, then, stretches beyond how we decide to categorize the bodies we find among us. It moves into the issue of which machines we make. How we view our relation to machines will bear on our design choices. Is it acceptable to bring into existence entities who solely serve human needs? What should we think of designers who build “good, healthy, slave complexes into the damned machines,” as Isaac Asimov wrote? Can we wrong a machine by making it one way rather than another? These are complex and neglected questions. But they are questions we do not face with the other forms of prejudice.
The creative role of humans shows how substratism differs from speciesism. But someone might object that there is no clean distinction between the two. In a broad sense, I can agree. For both we are considering the power relations between humans and nonhumans. But the types of nonhumans are different. It stretches the term to call a machine one species among the others.
Regardless, I do not think the issue is especially important. What matters are the political and moral consequences of our prejudice. My concern is that folding machines into the issue of speciesism will prevent us from giving machines the distinctive attention they deserve. If we focus on the broad commonalities between machines and other nonhumans, we risk ignoring the specific differences. We also need a way of marking the creative role humans have in technology. Our design choices are in need of moral guidance.
Someone might object here too: if substratism is lurking in how we design our machines, and this is unlike other forms of prejudice, what about breeding or genetic modification? These are significant questions, and both of these activities deserve moral scrutiny. There is existing work on the ethics of genetic manipulation and enhancement. But we see the special importance of machines when we grasp the full scope of the creative possibilities humans have. There is a vast space of possible machine natures. We are limited in what we can do to animals, human or not. When breeding or genetically modifying, we are working within a far narrower scope of possibility. Additionally, the moral status of animals is largely understood prior to the alterations; we are working from a known baseline. This is not the case with machines.
In the end, the labeling is not centrally important. My argument has depended on the commonalities between substratism and other forms of oppression. I accordingly do not think substratism is one of a kind. The differences, however, highlight the ways that the coming oppression presents new and daunting challenges. For that reason it is worth considering machines as a special case. Never before have we had to contemplate the prejudices we have towards something that does not yet exist, whose nature is in our hands, and that might supplant us as the centers of the moral world. Will we give up our spot willingly? Or will we entrench in an unfounded preference for our kind? Our track record is not good.