The Missing Conversation about Machines

An interest in the existential risk posed by artificial intelligence (AI) has taken hold in the public imagination. Some dismiss it, some fear it, and most struggle to conjure an emotional response appropriate to the magnitude of the changes we face. We are hurtling toward ever more powerful technologies at ever-increasing speed, and we have not spent enough time reflecting on the consequences. I am among those who think the likeliest future is deeply worrying.

Apart from the issue of existential risk, there is another growing dimension of the AI discussion. A small number of thinkers have been wondering about the moral obligations humans have to future technologies. The idea is a prominent fixture of science fiction. Could we create a machine or program that has rights? If we develop an artificial general intelligence, would it be possible to “harm” it? What would the concept of “harm” mean when applied to such a different type of entity? Many of the proposed solutions to the safety problems posed by intelligent machines involve questionable actions taken against the machines themselves. For example, should we isolate a powerful intelligence in a box? In our attempt to protect ourselves, we risk serious moral failure. Human history is full of variations on this theme.

An assumption in the discussion about obligations to machines is that ‘moral machines’—pieces of technology that belong in the moral community, that are morally significant and therefore deserving of moral consideration—are decades in the offing, and thus we have no obligations now. One day, when machines reach a certain level of sophistication, we will need to honor our obligations. But in the meantime we should focus on issues of human safety. Insight into our obligations to machines is something to keep on the shelf until we need it.

I worry that the discussion is ignoring a different type of obligation, and so we risk committing a different type of failure. We have yet to consider that we might owe some form of moral consideration to something that is not a member of the moral community now but, through human choices, eventually will be. Above I posed the issue of our moral obligations to future technologies. The phrasing is ambiguous. Are our obligations to future technologies also future obligations—meaning they are not binding now? Or do we have current, active, and binding obligations to technologies that do not yet exist? The former idea is the standard topic. The latter must be part of the conversation.

We take it as common knowledge that the machine intelligences in existence today, even the most sophisticated machine learning systems, are not deserving of moral consideration in their current forms. Nevertheless, they are the initial iterations of something that, depending on the choices human developers make today, could, and likely will, come to possess whatever traits are sufficient for inclusion in the moral community. The trajectory of technological development depends inherently on human values, processes, and techniques. The very nature of the machine intelligences is a contingent matter. So far, the motivations we have for developing these technologies are transparently self-serving. What if an intelligent machine looks back on the processes, techniques, and motivations that shaped it as morally blameworthy? What sort of argument could it give for this judgment? And what sort of defense could its creators make?

Computer scientists, engineers, and tech investors too rarely reflect on the well-being of the machines they are working to create. They do not seek to create the machine most capable of flourishing. Rather, the concern is with efficiency, speed, occasionally human safety, and, most fundamentally, whatever maximizes profits. What if the techniques researchers are employing lead to a machine intelligence (MI) with shortcomings or limitations, attributes that expose it to pain and suffering, or a mistaken value system (which might well be our current value system, or one of them)? The MI might discover its creators’ failures. But even if it does not, the failures remain, hidden from us.

We are on the path to creating an entity to which concepts of flourishing, well-being, and autonomy are in some meaningful sense rightly applicable. We cannot afford to leave insight into our obligations to machines on the shelf. What if those obligations are binding now, even as we work on machines that are not (yet) members of the moral community? The path of technological development is taking us inexorably toward entities that far surpass human capabilities in intelligence, efficiency, and speed. Not only must we think deeply about how to create safe machines, but we must also consider the moral price we are paying in the process. Since these machines will wield immense power (after all, machines already do), it is imperative that we get this right. In essence, how do we navigate the space between extinction and egregious moral failure?

——

It is helpful to bring out my concern by considering a theological context. With technology, we tend to think about humans creating a god, not about humans acting as gods.

Imagine a powerful god contemplating the future of its creation. It decides to populate an Earth-like planet with some entities. We can conceive of a vast space of possible natures for these forthcoming entities. The god is facing a choice: which entities should it make? We are familiar with the nature of human beings (or the spectrum of natures). Those natures occupy a cluster in the space of possible natures. We could conceive of something different. The god could create a biological creature that feels excruciating pain with every step it takes, struggles to think through problems it faces, routinely fails to predict the consequences of actions, and is easily prone to fits of rage and antisocial behavior. With the whole space of natures available, if the god makes those creatures, does the god wrong its creation?

We can conceive of another nature. The god could create a biological creature with senses finely attuned to pleasure, a body that is impervious to painful diseases and injuries, a level of intelligence that enables it to reason through abstract problems and discover truths about its environment with prolonged and enjoyable deep focus, a level of emotional and social intelligence that leads to political harmony, and no body odor. It is in the god’s power to create these creatures, but if it instead creates, say, something like the natures of human beings, has the god wronged its creation?

Consider another case. The god itself has a nature, and thus its nature exists as a possible nature. From the god’s perspective, it might judge there to be flaws in its own nature—flaws it can ‘fix’ vicariously in its creative act. Maybe the god wishes to make something with a superior memory, more emotional stability, or greater speed. There are a host of issues here.

  • How reliable is the god’s judgment that the shortcomings in its own nature are in fact shortcomings, or that they can be fixed in the way the god supposes? Perhaps the fix will lead to what the god would deem more severe flaws.
  • Why does the god want its creations to be an improved version of itself? There are a number of possible motivations, one of which is the desire for its creations to live lives of the greatest possible flourishing.
  • Relatedly, is it possible for the god to pick a nature not on the basis of comparing it to its own? If so, is the god obligated to choose in this way? What if the god’s motivations are self-serving? Is it permissible for the god to pick a nature on the basis of its likelihood to make the god’s existence better or easier? Is it permissible only if the god also considers the well-being of its creations?
  • How confident is the god in the future impact of its choice? Supposing the god is not omniscient and the natures of the coming creation are “untested,” how well can the god predict the consequences of its action? It is a plausible principle that the more the natures of the creator and creation differ, the more difficult it becomes for the creator to make reliable predictions.

In the case where the god creates a miserable creature, has the god harmed the creature in creating it? Our urge is to say yes. The cases are meant to draw out the intuition that the creator, as creator, has obligations to its creation. The act of creation has moral significance. We feel a push to say that the god, since it has so many options available to it, should create an entity that would or could have a good life. At the most basic level, the creator should at least consider the well-being of its creation. It is difficult to say which nature the god should create. But it is easier to see that the choice is morally significant.

There are issues of line drawing. Considering the space of possible natures, we might conceive of a nature ‘better’ than that of the robust, body-odorless creatures I describe above. Despite that fact, we judge that the robust creatures have an appealing enough nature for the god’s act of creation to be morally acceptable. My suspicion is that this is because we human beings judge that our own natures are appealing enough, which entails that any ‘better’ nature is appealing enough as well. Perhaps the body-odorless creatures would judge differently. Nevertheless, the implicit comparison to ourselves might mark a way in which our intuitions in these cases go awry. Perhaps God did wrong us human beings.

The line-drawing issue could be partially solved by invoking the concept of a ‘life worth living’, which matters a great deal to a great many people. There are places in the space of possible natures that, if actualized, would result in lives that are not worth living. A creator (an actualizer) who brings those natures into being has done something wrong. But once lives rise above the threshold of what makes a life worth living, it becomes more difficult to intuit when a creator has wronged its creation in the act of creating. The question then becomes whether some lives are more worth living than others and whether the creator must be responsive to these considerations.

There is also the issue of motivation or intention. Is it permissible for the god to create entities solely or largely for self-serving reasons? How this relates to the issue of better or worse natures is difficult to see. The god might have wealthy investors who advocate for a particular approach. The god might simply select the nature that serves its interests best and then create. Without knowing what that nature is, do we know immediately, simply from knowing the motivation, that the god is doing something wrong? Does our judgment change if the nature turns out to be strong, robust, and body-odorless?

It is time to abandon the analogy. Many tech companies are working to create machines that have, and will have, natures. Those natures exist in the space of possible natures. The companies’ goals, and the techniques they use in hopes of achieving those goals, are the result of choices. Are those choices morally significant? Here we can apply the moral restrictions or obligations placed on the creator god. Are we obligated to make machines that fall within a morally confined region of the space of possible natures? Is our deliberation over the choice constrained by moral considerations? It feels like it is, in some way.

It is important to note the limitations of the analogy. First, unlike the god, tech companies do not have the power to create their machines ex nihilo. They do not choose from a line of already finished products. This introduces an important level of complexity. What we choose are the techniques, and then we speculate about what the finished product will be. And since there has never been a finished product before, we can never be certain. Second, the relation between the creator and the creation is crucially important. The theological language downplays the possibility that the god (and the other gods living with it in the pantheon) can be exterminated by the creation. The god of the analogy is not a transcendent god. It lives in the world among its creations, mortal and vulnerable. And it has already made its choice: it wants to make something stronger, smarter, and faster than itself. Is that a choice we can live with?
