Since in the short term, automation represents the threat of unemployment—and in the longer term, full automation the threat of full unemployment—many people feel an urge to defend the special status of human beings. If we want job security, we must find some distinctive trait that cannot be programmed—perhaps creativity, a spirit, or a soul. There must be a line between humans and machines. The brighter the better.
But is the trait actually necessary to do your job? Even if there is something special that will always distinguish humans from the machines we create, it might not save our jobs from the specter of automation.
Implicit are two different questions:
- Can we replicate in technology what is special about humans? Is there a bright line? This is the question that tends to tantalize the popular imagination.
- Does necessary labor require what is special about humans? Does the bright line matter practically? In order for technologies to perform all the labor necessary for humans to live reasonably comfortable, desirable and sustainable lives, must we create them to possess the special human trait(s)?
We often focus on the first and neglect the second. Let’s consider them together. Given that both are ‘yes or no’ questions, they yield four possible versions of the future.
We might create technologies that resemble humans in every way that currently matters to us. We might also find that, for all necessary labor to be done, the (previously) special human spirit is required. In this future, would technology end the need for human labor?
The answer, surprisingly, is no. You might think the scenario is great news: we have machines built to do our jobs, so free time for humans abounds. But it is also part of the scenario that there is no significant distinction between humans and machines. Recall that the technology resembles humans in every way that currently matters. When we take stock of the candidate special traits, they tend to be the aspects that make us morally valuable—that make us more than machines. But now machines possess those traits too. So offloading all labor onto machines would be wrong, a form of discrimination I call substratism. Our desire to put an end to human labor would fail, perhaps by definition, since we couldn't draw any relevant line between humans and machines.
The moral issues in a second possible future are no different. The fact that necessary labor turns out not to require the special human 'spirit' doesn't change the fact that we would be requiring our moral equals to do it. And given the nature of the labor, it would be drudgery. (Why else would we try to offload it onto something we view as different from us?)
However, if we knew that we could automate all necessary labor without needing to create technologies that resemble humans in every way that currently matters, maybe we should avoid creating such machines. As long as we deny the technologies the special human traits, forcing them to labor would be acceptable. In other words, if we could make them different enough, we might be free of the moral hazards.
This isn’t true. Technologies that only partially resemble human beings pose even more challenging moral problems. We might, for instance, create a machine that is unaware of its status as an uncompensated worker. (This might describe all currently existing technology. It is not as if we have the power to create machines sophisticated enough to be aware of their status but have simply opted not to use it.)
The main question is this: If we can create a machine that shares in what is special about humans, but we deny the machine those traits, do we do something wrong? What are our responsibilities as creators to our creations? Since all technologies have been created for selfish reasons (to benefit the creator, not the creation), we rarely discuss the issue. We must broaden the conversation about what I call ‘creative responsibility’ beyond the benefits and harms to humans.
In a third possible future, we find that 1) we cannot replicate what is special about humans, and 2) necessary labor needs the human touch. Although we still face moral problems of creation, we are confronted with what is, to many, a dispiriting conclusion: there is no hope for the end of human labor.
Some people view technology as the key to a utopian future of free time. We could have a world in which humans are still active, productive, and creative, but not compelled to work. Although complete freedom from work isn’t what would be in store, plenty of utopian visions are compatible with this third future. We could change how we view work and take solace in the fact that the necessary work taps into the human essence. In this future, the work that doesn’t tap into that essence might still be eligible for automation.
Finally, it is possible that we are special and that we still could automate all necessary labor. Compared to the previous futures, this might strike us as the ideal arrangement. By virtue of the first ‘no’ we can avoid many of the moral problems. The second ‘no’ opens up the possibility of fully automated labor without any need for human involvement.
However, the good news is accompanied by another dispiriting conclusion: either 1) your work is unnecessary, 2) what is special about human beings is not a necessary part of your work, or 3) both.
Many people might find this an apt description of life in the market. We often get the sense that what we are doing at our jobs isn’t about meeting any genuine human needs. To the contrary, we might be in an industry that profits from harming people. If you don’t feel that way, you are lucky (or maybe wrong).
We often feel most human when we are not at work—when we’re lounging, eating, and socializing (ya know, those distinctively human activities). Many of us describe our work as ‘mindless’ or ‘soul-deadening’, as something that stifles creativity. The fact that we feel like carbon-based machines at work is not only an inevitable development in capitalism, but also a predictor of a future in which, despite technological innovation, little besides the material substrate of the workers has changed.
The four futures give us insight into how my two guiding questions are connected. I intentionally did not specify what exactly is “special” about a human being. We tend to frame discussions about technological development in terms of an attempt to replicate important human traits—general intelligence, for instance. But the development also takes place on the back of the drive for profit. Our special traits are, at bottom, viewed as commodities. Companies are seeking to replicate them in technologies because it serves their financial best interest. In the eyes of the market, where the creation of machines takes place, all that matters about a human is its capacity for labor. The rest only gets in the way. So if a job can be done by a machine, all the better for the owner. The logic of automation makes perfect business sense.
But there is another side to automation. It also makes sense to turn labor into the sort of activity that can more easily be done by a machine. This, as plenty of workers know from experience, makes a human more like a machine. How we view labor morphs into how we view ourselves. And the bright line starts to dim.
The conclusion is that it is difficult, and maybe impossible, to think about “special human traits” in a way that doesn’t cast them as tools for economic advantage. We find evidence for this in the fact that many people genuinely struggle to imagine what they would do if they weren’t forced to sell their labor. Meaning, purpose, and value in life—the actualization of the human spirit—are intimately tied to being a cog in a machine, literally and figuratively.
This brings us again to the question of creative responsibility. Is it wrong to create machines that are unaware of their status as forced laborers, unable to conceive of alternatives to their unfree, alienated state of being? It is not a question for the future. We are making the machines now. Some are silicon, metal, and plastic. Some are humans.