Creative Responsibility: a Call for Papers

This post is a bit unusual. It outlines and motivates a potential research project. Usually philosophers keep such thoughts to themselves. They are the key to professional identity and prestigious employment. But I believe in the importance of the project and, for entrenched political and institutional reasons, cannot turn my full attention to it (yet). So I call on you for help. As you will soon discover, the move from a personal, private, individualistic mode of philosophical research into a collective, communal, holistic mode is part and parcel of the very project I propose to you.

“Philosophy with a deadline”

Philosophical discussions about ethics and technology have recently taken on a tone of urgency. And for good reason. As new technologies touch more aspects of our lives, often in destabilizing and destructive ways, it becomes more difficult to understand the world, our place in it, and which future we should build.

The standard problems in philosophy are now never far from technology.

The standard procedure is to filter the pressing topics through established philosophical paradigms. What do the dominant ethical, political, and metaphysical theories tell us when applied to technology?

The issue that exemplifies the urgency is Artificial Intelligence. Here the problems are more complex, large-scale, and impactful. According to many thinkers, AI will challenge human beings’ place on top of various hierarchies. Strong AI is described as “our final invention.” Once it is here, it will be better at inventing than human beings. (I.J. Good made this argument in 1965. Nick Bostrom strengthens it in Superintelligence.) And we are likely to have only one chance at creating such an AI. There will be no shutting it off and trying again.

These considerations provide a new perspective on automation. The human beings who theorize about machine labor tend to think themselves immune to the trend. When we consider scales of technological development as broad as the potential intelligence explosion, we see that they are mistaken. There are no tasks, intellectual or otherwise, that are in principle reserved for human beings. The fact that machines routinely show superhuman competence in tasks once theorized to be uniquely human should at least instill in us a default sense of humility.

It is therefore not implausible to claim that the types of technological development on the horizon will touch academic philosophy at its core. I mean this in two ways:

  1. The emergence of intelligent technology is likely to overthrow traditional paradigms in philosophy. Conventional concepts and theories will be forced to change, perhaps radically.
  2. The best philosophers might no longer be human beings.

(1 might, but need not, be true because of 2. The same or similar consequences might obtain if the development of intelligent technology focuses instead on enhancement or cyborgification.)

The claim is not that this will happen, but instead that it is likely enough to happen to demand serious attention. There is a significant chance that we are doing “philosophy with a deadline,” as Bostrom says. For many thinkers, concern about powerful forms of AI is the result of a precautionary principle, not beliefs about the high likelihood of emerging superintelligence.

Although my claims may seem hyperbolic, the project I will describe is not tied to any particular outcome or range of outcomes in technological development. I am outlining the existing literature and, in doing so, drawing out a significant and often overlooked implication: the philosophical problems in technology are not merely applied philosophy, as they are usually treated. Inherent in the problem to which we are applying traditional philosophical theories is the plausible outcome that technological development will bring about foundational changes to those very theories. At its most abstract, the project takes seriously the prospect of a changing ethics and uses that prospect to explore what the changes might be. I worry that much of the recent work about technology is misguided in some of its basic assumptions.

Let us focus on one specific assumption.

A Plan

The standard discussions in ethics and AI ignore what I call creative responsibility, which I explain below. Dominant moral theories struggle to accommodate it. The most fruitful way to make progress on it is to reject what we might call individualism, the idea that moral theories should take individual entities as the primary (and probably the only) units of concern. In individualism the individual (human being) is the locus of ethics. Collectives and wholes only show up downstream, so to speak, if they show up at all.

Individualism has been critiqued within environmental ethics, where some theorists have proposed moral theories in which wholes are the foundational or primary moral entities.

In this framing, one could argue that 1) the creation of intelligent technology (and indeed creation and technology generally) should be seen as an environmental issue, and thus a topic for environmental ethical theories, and 2) a theory of moral holism (of the sort found in environmental ethics) should be seen as the most fruitful way of approaching the ethics of creation, AI specifically but technology generally.

To simplify, one could 1) move technology into environmentalism and 2) move environmentalism into technology.

In what follows I will motivate the project further by saying more about creative responsibility and individualism. If creative responsibility is a genuine problem, and if there are good reasons to reject individualism, it follows rather quickly that the project is a worthwhile and urgent one.

Before proceeding, we should notice that the project is motivated not only by urgency; it also has a metaphilosophical social justice component. Everyone can acknowledge that the health of philosophy depends on expanding its literatures, values, and voices. So far the predominant critiques have been about the racism and sexism embedded in philosophy. They are illuminating and essential. Environmentalism, concerning both nonhuman animals and whole ecosystems, is another form of institutional critique that must be valued and integrated. It currently is not. An intersectional social justice movement involves not only anti-racism and anti-sexism but also anti-speciesism and environmentalism. Continuing to push these voices to the fringe perpetuates the oppressive traditions that we all must confront.

Standard Problems

Above I started with ethics and technology generally and then focused on AI specifically. If we enter the discussion about ethics and AI, we find that it falls into two spheres:

  1. Existential Risk

Eliezer Yudkowsky has been warning us about the threat of competent intelligent technology for many years. Other notable contributors are Bostrom, Stuart Russell, and Max Tegmark. I.J. Good and Alan Turing provided the earliest speculation about the problem. Elon Musk, Bill Gates, Stephen Hawking, and the Effective Altruism movement have boosted the message. Few mainstream academic philosophers have joined the conversation, however.

The concern is how to make technology safe. Bostrom calls this the “control problem” and, borrowing from Yudkowsky, shows that it is not at all easy to solve. Outcomes in which the human species goes extinct are numerous and plausible.

Bostrom also focuses on the “value-loading problem.” Namely, how can we ensure that an intelligent piece of technology shares our values? We encounter the same problem with lower stakes in the case of programming driverless cars. Every major think piece outlet has an article about the ethics of driverless cars that references the trolley problem.

  2. Moral Machines

The second sphere is about extending moral consideration to technologies. In popular culture there is a great deal of speculation about how the moral standing of machines changes as they become increasingly complex and intelligent. When will we need to give machines rights? When will it become wrong to use them as free labor? The widespread assumption is that we will know “moral machines” are among us when they look like us. That is, they will share a certain set of traits that render them worthy of moral consideration: perhaps rationality, consciousness, sentience, or some combination.

From what I can tell, this discussion moves back and forth between two distinct questions:

1. There is the moral question of which traits are necessary or sufficient for inclusion in the moral community. We might also question whether searching for morally significant traits is the proper way to approach the issue at all. Regardless, there is little or nothing specific to technology in the question. This is good old-fashioned moral philosophy.

2. There is the empirical question of when a piece of technology possesses the relevant traits. This is not moral philosophy. It is a technical question that assumes an answer to the moral question.

There is a neglected third discussion. It is relevant to how we would conduct the first two.

3. There is the social question of how we get people to acknowledge the moral standing of machines. This is a matter of social justice activism. Even if we settle the first two questions, plenty of people would still resist welcoming technologies into the fold. This is no more than prejudice. I call the prejudice ‘substratism’.

The Problem of Creative Responsibility

The second topic does not involve the ethics of creation. It is about how we should categorize and behave towards the entities we find among us. The first, however, is about moral issues that exist prior to the existence of the technologies in question. Yet those issues concern the technologies only indirectly. The locus of concern is still the (types of) entities we find among us. Our creations should not harm us humans.

We thus find a potential third topic: the ethics of creation (i.e. moral issues that exist prior to the existence of the created entity) in which the locus of concern is the created entity itself. We could call it ‘creative responsibility’: what responsibilities does a creator have to its creation in the act of creating? The problem is about how we answer the question given the standard tools of ethics. (We should distinguish two problems: the problem of creative responsibility itself and the metaproblem that creative responsibility does not register as a moral issue in dominant theories.)

There are two ways to construe the problem.

  1. Narrow. We can wonder about our responsibilities in situations in which we are creating an entity that has moral worth. That is, we are assuming that the entity in question is a member of the moral community. There is a further distinction to draw:
    1. Fixed Nature Choices. These are situations in which the nature of the entity is not a matter of choice; rather, the choices are about when or in what environment to create the entity. These problems have familiar literatures. Examples would be the morality of having children or breeding nonhuman animals.
    2. Nature Choices. These are situations in which the nature of the moral entity is a matter of choice. The problems here are less familiar but discussed with respect to genetic/neuro/moral enhancement, disability, and domestication. Due to the broad scope of possible natures, AI is a particularly challenging version of this problem. To illustrate, Yahweh faced nature choices on the fifth and sixth days of creation.
  2. Broad. We can wonder about the ethics of creation generally: for example, the choice to create or not create, choices about the creation of environments, and the choice to create an entity with moral significance or one without. What do we do prior to creating, when the notion of creation is merely formal and without content? How should we think about and behave towards the eventual creation? What values should determine or contribute to how we act creatively? How should we conceive of our eventual creations, knowing that the conceptions will guide our creative acts, as well as the natures of the creations?

Notice that the broad version of the problem precedes the narrow. Yahweh faced this problem in the beginning, and each day after. The choice to build an AI in the first place is a matter of creative responsibility. The question of what nature a machine should have is also the broad problem, since many of the possible natures, we assume, would not make the machine a moral entity. (See Plato’s Timaeus 29e-30c for an explicit statement of the broad version.)

The narrow version leads to what gets called the Non-Identity Problem (NIP). The fixed nature choices run into the standard NIP: your decisions can’t make a particular child better or worse off because such choices change the identity of the child in question. Nature choices yield more challenging versions of the NIP. With a focus on AI, we might call the NIP that exists in that domain the ‘Non-Identical Machines Problem’ (NIMP): since design choices will affect the machine’s nature (and hence identity), the choices are unable to make the machine better or worse off. And, as an important complicating factor, we have already started creating the machine.

The broad problem of creative responsibility deserves focused attention. Assumptions about it are built into all creative endeavors. Yet in the dominant moral theories, the broad version does not register as a moral problem. When we talk about the ethics of having children or building an AI, we are either skipping to the narrow version or considering the interests of other individuals. The latter, like the existential risk problem, isn’t a problem of creative responsibility.

Two cases of creative responsibility are especially pertinent:

  1. Technology. The urgency presented by technological development motivates the project. What guides a designer’s choice to develop a particular type of technology? Why drive towards strong AI? The choice presupposes an image of the world—an environment populated by entities that the designers are creating, and a relationship among the entities and the rest of us. This environment, we are left to hope, is better than (many? most?) other relevant possible environments. But the image is a matter of choice. Others are available. So, prior to a discussion about which nature an AI should have, we should discuss the choice to create an AI in the first place. Some people have this discussion (e.g. Bostrom and Yudkowsky), but not in terms of creative responsibility. Should we make the best of all practically possible worlds in general or the best for us humans?
  2. A research project. What you write will be a creation, the product of creative acts. The nature it will have is currently a matter of choice. What responsibilities do you face? A significant part of the project would be the awareness that, whatever its conclusions or considerations, the project is itself an embodiment of values, and the very values it seeks to examine. All creations, including pieces of philosophy, are statements about creative responsibility in the broad sense.

The broad problem of creative responsibility is difficult to formulate. This is in part because I am resistant to phrasing it as a “problem.” Often, problems emerge when we are comparing individuals or competing interests. And philosophy becomes a collection of individual problems or puzzles. Ultimately, I am advocating for the rejection of individualism. I will say more below about how the broad problem of creative responsibility contributes to the rejection of individualism. But a part of the project would involve attempting to formulate what ‘creative responsibility’ is as precisely as possible. Whether it attaches to a “problem” is another matter.

On that front, I can be precise about the useful ambiguity of the phrase ‘creative responsibility’. My main focus is on the responsibilities that creators have to their creations in the act of creating. Yet the phrase can also carry the sense of being creative about responsibility, considering the act of philosophizing about responsibility as one that involves recognizing one’s status as a creator creating. I mean both. This second sense of ‘creative responsibility’ makes two points.

  1. I stated in the first section that a significant implication of the literature around ethics and AI is that technological development might bring about radical changes to philosophical theories (see Shannon Vallor, Technology and the Virtues, p. 9, for a similar discussion). Hence, insofar as our creative acts will bring about changes in moral theory, our theorizing about creative responsibility now could inform creative acts that literally change how we view responsibility. Our creative acts in technology are indirectly creative acts about responsibility. More mildly, our acts would at least create new responsibilities.
  2. The ambiguity pulls out the metaphilosophical and methodological components embedded in the project. No matter the conclusions, they are implicit statements about creative responsibility.

Environmental Ethics

Here I propose one route towards an outline of our creative responsibilities. It involves turning to environmental ethics. To get a better sense of what the theories would look like, we should note two distinctions:

Anthropocentrism vs. non-anthropocentrism

Anthropocentric moral theories are human-centered in the sense of finding human beings to be the source of moral worth or intrinsic value. (People give different definitions and draw different distinctions; I am talking loosely here. See Callicott, Thinking Like a Planet, p. 9, for a discussion of three different senses of anthropocentrism.) In the words of Richard Routley, an early environmental ethicist, humans are the “base class” of ethics (“Is There a Need for a New, an Environmental, Ethic?”). We can distinguish two types of anthropocentrism:

  1. Strong. Humans are the only entities with intrinsic moral worth. The moral community includes only human beings. There are two versions of this view:
    1. Speciesist. One could privilege human beings out of a simple prejudice. Humans are more valuable because they are humans. Full stop. We just look after our own.
    2. Principled. Philosophers typically realize that they need a reason stronger than simple species membership to justify human superiority. The move is usually to find a trait that is seemingly special to human beings and then vest it with decisive moral significance. This provides the same outcome as speciesist strong anthropocentrism but on the basis of what appears to be non-speciesist principles. Principled strong anthropocentrists can defend themselves by saying that they are open to including other entities in the moral community. They would simply need to be very similar to human beings.
  2. Weak. Humans are not the only entities with intrinsic moral worth, but they have the most. We should prioritize the interests of human beings over non-humans. We can distinguish the same two versions as in strong anthropocentrism:
    1. Speciesist. Humans are obviously the best. But other animals are obviously like us in various ways. So our responsibilities towards them are commensurate with their perceived similarities to us.
    2. Principled. The procedure here is the same as in principled strong anthropocentrism, but the morally relevant traits are found in some nonhuman animals. The traits do not track the species division. Bentham and Singer’s ethics would fall into this category. Nearly all moral arguments for veganism presuppose principled weak anthropocentrism.

Non-anthropocentrism is the rejection of these views. Environmental ethics began by questioning the assumption that humans sit on top of, or entirely make up, the moral community. There is a case, made by numerous thinkers, that all anthropocentrism is speciesist. Its “principled” forms are rationalizations of an underlying prejudice, as we would say of a “principled” white supremacy or “principled” patriarchy. (This claim might appear absurd on the surface. It should be taken in the following sense: the same ideology that underlies white supremacy and patriarchy also underlies the systematic exploitation of nonhuman animals. No comparison between the oppression of women or people of color and that of nonhuman animals is assumed. See Ko, “By ‘Human’, Everybody Just Means ‘White’” and “Addressing Racism Requires Addressing the Situation of Animals.”)

The history of ideas reveals that anthropocentric views have been employed again and again to underpin the conclusion that humans are either exclusively or overwhelmingly valuable relative to all other creatures and that other creatures are therefore ours to do with as we will (Warwick Fox, A Theory of General Ethics, p. 5). As John Passmore says, “It is constantly assumed that whatever else exists does so only for the sake of the rational” (Man’s Responsibility for Nature, p. 15). And, to undeniable effect, anthropocentrism shows up to save us from seeing the blood, suffering, death, and exploitation of hundreds of billions of creatures as an urgent moral calamity. To the extent that this registers as a problem at all, anthropocentrism shows up again to assure us that human problems are always more important.

According to many environmental ethicists, dominant moral theories are not innocent of the large scale oppression of nonhuman animals. They are the intellectual accessories to the crimes. If social justice is the fight to end oppression, the fight against speciesism must be included. An aspect of anti-speciesism is the interrogation of philosophical theories for prejudice.

I have not yet mentioned the environment itself. For anthropocentrists the prospect of recognizing intrinsic value in the environment is futile and almost laughable. Plenty of versions of anthropocentrism recognize a prudential or indirect interest in preserving the environment. But at bottom, the environment’s value is purely instrumental or economic. Many environmental ethicists have noticed that anthropocentric environmentalism divides the house against itself: underlying the prudentialist arguments for human restraint and respect for nature is the presupposition that the environment is here for human use. Dominant moral theories are likewise not innocent of large-scale environmental devastation.

In sum, the anthropocentric assumption has wreaked tremendous destruction, and continues to do so, as we face widespread factory farming and global warming. According to many environmental ethicists, dominant traditions in ethics are, and continue to be, complicit.

Individualism vs. non-individualism

I agree that non-anthropocentrism must become mainstream in moral philosophy, but it is not the main concern of this project. Instead, I am more interested in expounding a moral non-individualism. As I stated, individualism is the idea that moral theories should take individual entities (primarily humans, when combined with anthropocentrism) as the units of concern, the base class. Individualism excludes collectives like families, societies, and planets. To the extent that they appear in our ethics, they are mere aggregates of individuals. Their value is, at best, derivative.

Non-individualism rejects this view. The form of non-individualism that I will consider is holism: the idea that wholes (e.g. families, societies, or planets) should be the base class.

The two distinctions are orthogonal. We therefore have four options.

  1. Anthropocentric individualism. The dominant moral theories.
  2. Anthropocentric holism. Human collectives are the primary concern of ethics. This view starts to look like some frightening political ideologies.
  3. Non-anthropocentric individualism. The moral community includes human and nonhuman individuals of various sorts. How far the community extends will vary by theory. Some environmental ethicists are individualists who give arguments for the intrinsic value of trees or bodies of water (e.g. Paul Taylor).
  4. Non-anthropocentric holism. Nonhuman wholes are the base class of ethics.

The arguments for rejecting individualism are similar to the arguments for rejecting anthropocentrism. On individualism, global warming is only a problem insofar as it affects some human beings and maybe some nonhuman animals. I wager that we feel the pull of non-individualism deep within when we learn about the scope of anthropogenic global warming. We might also feel that, if we are to have any hope of meeting the challenges that the problem poses, we must reassess the place that wholes (like the planet, biomes, and perhaps life itself) have in our ethics.

To motivate the project, it is worth listing more reasons for rejecting individualism. The final reason will be the problem of creative responsibility. Please bear in mind that I am not trying to establish the truth of holism here. I am trying to establish the viability of a project that employs holism.

  1. The dominance of individualism is little more than contemporary Western parochialism (see Carolyn Merchant, “Mining the Earth’s Womb”). This does not make individualism false, but it should make academic philosophy more open to different approaches to ethics. Beyond western environmentalism, one can find forms of non-individualism in indigenous and eastern philosophies. Resistance to holism usually amounts to a suspicion that such “further out” approaches are fueled by intuition or passion rather than rationality and rigorous argument. Ironically, the suspicion is usually itself fueled by intuition or passion.
  2. None of the solutions to the NIP have stuck. They usually have deeply counterintuitive consequences of their own. Since the NIPs in nature choices (like the NIMP) are more difficult, solutions are more difficult to come by. Among the more promising proposals are rejections of what is called the narrow ‘person-affecting principle’—the idea that harms must accrue to some specific individual. I cannot harm someone before they exist because they are not affected. A wide person-affecting principle instead considers the harms of populations, regardless of the identities of the individuals that make up the population. This rejects the basic framing of the NIP, but perhaps wisely. The NIP assumes individualism. If we embrace holism, we can see a path towards a solution (or dissolution).
  3. There is something wrong with anthropogenic global warming that extends beyond knock-on effects for humans. For many people, holism resonates deeply. If we expand the audience beyond analytic moral philosophers, the assumption that non-individualism is counterintuitive is simply false. People routinely have experiences of losing their individuality in a greater whole.
  4. We can construct a metaphysical argument for non-individualism. The assumption that wholes are no more than aggregates of parts is simply false. What, who, and how we are is necessarily situated in broader contexts (see John Dewey, Democracy and Education, ch. 22, especially sections 1-2). We can connect moral implications to the metaphysics: the health and character of individuals are determined by the wholes of which they are a part. I do not put this forward as self-evident. It is, however, plausible enough to buy holism some consideration.
  5. On a related point, technological development might well force concepts like ‘personal identity’ and ‘individual’ (in the relevant sense) to change. There are plausible futures in which the number of machines that a machine is would be constantly changing, indeterminate, or difficult to determine. Fanciful philosophical thought experiments are moving closer to reality. If our ethics requires an underlying concept of the ‘individual’, our ethics might break. We do not face this prospect under holism: no matter our creations, they exist in a broader context. In such a situation of moral uncertainty, other things being equal, theories that remain functionally applicable in a greater number of possible futures should receive preference.
  6. Although it is not a direct reason for rejecting individualism, we can note that a holistic, non-anthropocentric anthropomorphism is a viable position. I am thinking here of certain versions of the World Soul, deep ecology, or the Gaia hypothesis. These theories are non-anthropocentric because the soul or life of the individual is derivative of the whole. The soul of the world is more fundamental than mine. I am an organism in a bigger organism, like Spinoza’s worm in the blood. In short, the theory is non-anthropocentric precisely because it is holistic. Perhaps people would be more inclined to accept these versions of non-individualism. Although western philosophers do not favor them, many other people do.
  7. Some theories of environmental ethics are criticized for operating under the assumption that the ‘environment’ or ‘nature’ consists of trees, fields, streams, wild animals, and mountains. There is a bias towards bucolic, Walden Pond-style experiences, and a neglect of what Warwick Fox calls the “built environment.” An environmental ethic should include a serious treatment of the integration of human creations with what was here before. Holism helps in this regard. If our concern is for wholes like planets or regions, we cannot focus only on the green and blue parts. The grey is just as much a part of the whole.
  8. Finally, the fact that the broad problem of creative responsibility does not show up as a moral issue within dominant ethical theories counts in favor of preferring a non-individualistic ethics. Such acts of creation are not (or need not be) about individuals. How, then, could we make progress in theorizing about them? The key lies in holistic ethics. We can view our creative acts—even those that create individuals, with or without moral worth—as taking place in a broader context. The whole in which our acts and resulting creations exist will be altered. If we can find productive ways of thinking about the goods of wholes, we can outline our creative responsibilities.

When the problem of creative responsibility is made concrete in cases of technological development, as with AI, non-anthropocentric holism can give guidance. The genuine moral significance of our choices now shows up to us. We can recognize that our choices are not distinct from considerations about the health of the planet or environment. We can avoid NIPs. We can see the discussions about existential risk and moral machines in a new, more accurate light. On the latter, since our investigation is rooted in a social justice critique, we can guard against speciesism and substratism and build a world without them. Currently, we are not on that path. And there is only so much time before the world is built.

5 thoughts on “Creative Responsibility: a Call for Papers”

  1. Lots to digest here. A question, and a hypothetical.

    What does “social justice” mean in a context where the moral units are “families, societies, and planets”?

    Suppose there were a large asteroid on target to hit the earth. It would wipe out most of the earth’s ecosystems and, surely, human civilization. But we have good reason to believe that, 100 million years hence, the earth will again be teeming with a large variety of life and ecosystems, but not humans. If we could, would we be morally obligated to divert the asteroid, and if so, why? Then substitute anthropogenic global warming for the asteroid.

  2. I don’t have an answer to the second question. It is the sort of issue that we should be thinking about.

    To the first question: I actually think social justice movements make the most sense in a holistic context. Discussion of systems of oppression is difficult within an individualistic framework. I discuss this in the voter guide, my first venture into holism.
