Is it moral to imbue machines with consciousness?


By Curt Hopkins

Artificial intelligence has become ubiquitous, even if we don't often recognize it. It's now a standard part of a car's GPS software, customer service tools, online music recommendations, and even our toys.

The development and adoption of the technology has been so rapid that what we can expect from AI, or how soon we'll get there, no longer seems clear. And it's forcing us to confront a question that hasn't dogged previous computer-research efforts: Is it ethical to develop AI past the point of consciousness?

The proponents of AI point to the ability of self-regulating, intelligent machines to preserve human life by going where we cannot safely go, from inside nuclear reactors to mines to deep space. The detractors, however, who include a number of high-profile and influential figures, assert that, improperly managed, AI could have serious unintended consequences, including, possibly, the end of the human race.

To begin untangling this moral skein, and possibly to sketch a path toward guiding policies, we talked to five experts—a scientist, a philosopher, an ethicist, an engineer, and a humanist—about the implications of AI research and our obligations as human developers.


FROM CREATING TOOLS TO CREATING PARTNERS

KIRK BRESNIKER: Chief architect for Hewlett Packard Labs

I don't think we're in imminent danger of stumbling into a consciousness that we create ourselves. We're more likely to fool ourselves into imagining that we have done so, by simulating behavior we associate with consciousness.

Is it ethical to continue this pursuit? I think it is.

I think that we have a stewardship role to fill as we move from machines being mere tools that amplify our capabilities to partners that complement our abilities as conscious beings.

An artificial consciousness, crafted for a purpose, that can go into areas, regions, or environments where we as humans are limited by our biological nature is something we should be actively pursuing.

Systems based on non-biological substrates could exist in environments incompatible with organic life: the vacuum and radiation of space, the hot zone of a pandemic, or the darkness and pressure of a mid-ocean trench. If they incorporate materials that do not consume energy or resources to maintain state (that is, to retain their stored inputs), they could offer us intelligences with a completely new relationship to time, carrying us through the void to distant stars.

At some point, biologically evolved systems crossed a threshold, and perhaps it wasn't a bright line. Perhaps it was crossed gradually; perhaps it had to do not only with individuals but with communities, and with the ability of those communities to pass on information not only biologically, in DNA, but also as culture, as learned understanding, as something that can be accelerated by tools.

Unfortunately, we don't have reliable narrators to tell us how that happened. The only existing case we know of is ourselves, which makes it hard to reason about that second case study: artificial consciousness.

THESE AWFUL UNKNOWN QUANTITIES

IRINA RAICU: Internet Ethics program director, Markkula Center for Applied Ethics at Santa Clara University

The term “artificial intelligence” may actually hurt this debate, because I don't think we're really talking about intelligence. Intelligence is a holistic, broad-view understanding, drawing on all manner of considerations, from genetic pre-programming to education to culture to personal stories. What we're talking about with AI, for at least the near future, really has nothing to do with intelligence, let alone consciousness.

A philosopher colleague presented a slide that explores what she described as "superior artificial moral intelligence," adding, in parentheses, "E.g. more consistent, better at predicting consequences and at optimizing competing goods."

I would argue with part of that position. I don't believe, for example, that consistency is inherently good. We have laws, but we also have discretion, which allows human decision makers to take all kinds of factors into consideration. So would a more consistent AI, unable to exercise discretion, really be superior in terms of morality?

Assuming we could create an AI so complex that it matched some consensus definition of consciousness, then what kind of rights would such an AI have? Would AI entities be just like humans in the way we treat them? If they had exactly the same consciousness that we do, then I'd be inclined to say yes. But would they really be just like us in every way? Or would their morality have a more utilitarian character?

The Victorians were consumed with a kind of systematized moral algebra, which did not work out well for the people to whom it was applied. As Charles Dickens put it in Hard Times, “Supposing we were to reserve our arithmetic for material objects, and to govern these awful unknown quantities [i.e., human beings] by other means!” Victorian times are far behind us, but humans are still wonderfully unknown quantities whom even complex AI cannot fully map.

HOW WOULD A MACHINE HAVE THE MECHANISM TO SUFFER?

MIRANDA MOWBRAY: Lecturer in computer science at the University of Bristol

What is consciousness? Is a mosquito conscious? A bee? A dog? A dog is certainly sentient, and can communicate.

We already breed dogs, changing their genetic and temperamental characteristics quite considerably. We use them as our servants.

I don't think that we have an ethical obligation not to breed dogs to be better human companions. I think we do have an ethical obligation to treat dogs humanely.

The question, for me, is: Can they suffer pain? If you had a conscious being that couldn't suffer, I don't think that we would have ethical obligations towards it. It's very hard to work out how a machine would have the mechanism to suffer. A machine is therefore more like an insect, which doesn't seem to have that mechanism; you can pull off its legs and it doesn't seem to react. It's not a nice thing to pull the legs off flies, but I don't think it's an ethical problem in the same way that hurting a dog is.

There are some people who would say that pretty much anything sentient is conscious. Some people argue passionately, for example, that chimpanzees are conscious and therefore we ought to give them some aspects of ethical rights. Some people even say that bacteria are conscious.

The borderline between what different people count as conscious and what they count as not conscious is really not well defined. And if you define it in a certain way, you end up saying that a smart thermostat is proto-conscious. So it's a kind of argument by extremes.

There's nothing unethical about developing AI with consciousness, however you might define that. But there are things you should worry about. To make most AI work, you need enormous amounts of data, and collecting that data carries all its potential threats to privacy. A large proportion of the usable data that AI will run on will be in the hands of companies rather than governments or individuals, and AI built to use this data is likely to be controlled by the companies that have the data, rather than being under more democratic control.

These problems are inherent in the designers, not the AI. But I think we should be careful about them.

I REJECT THE PREMISE

MARK BISHOP: Professor of cognitive computing, Goldsmiths, University of London

I reject the possibility of machine consciousness. It seems to me that you might as well have a conversation about tooth fairies. I don't see any machine, now or ever, having any phenomenal consciousness at all.

The idea of an artificial consciousness test for machines rests on the notion that a test is required because, if we predicated consciousness of a computational entity, it would potentially put other ethical responsibilities onto us as interactors with that entity. For example, if a robot were conscious, would it be ethical to send it into a place that could potentially harm it?

But I don't see any reason for believing that a machine could be at all conscious.

I think the very act of building a computer from individual logic gates gives you a very deep insight into what's going on in computers. You don't imagine there's anything supernatural happening. People forget that what goes on in a computer is a series of simple, forced modal changes that occur due to particular changes in voltage. The computer has no more free will than a lever: if I press down on one side of a lever, the other side comes up, and the lever can't elect to do anything about that. That's just the way it is.

THE INTERNAL STATE

We're using a model of something human-like and trying to apply it to a machine. But that model is problematic, because how we regard machines and interact with them can change from culture to culture and from situation to situation.

AI sometimes cues us, through product design such as a human-like shape, or through capabilities such as natural language recognition, that it is different from the other technology and tools we use. Incorporating those human-like or animal-like qualities into AI elicits reactions in us: we tend to interact with it as if it were a living thing.

We often inaccurately respond to it as if it had an “internal state,” possessing elements that indicate consciousness: intelligence, emotions, motivations, self-awareness.

I believe we're in a cultural transition period that will last 20 to 50 years. In this period, we're dealing with what I call an “accommodation dilemma,” in which we're learning to live with technology that is smart and interacts with us, sometimes responding or acting in human-like or animal-like ways, but is not really a living entity. During this period, we're going to have to pay close attention to our tendency to attribute lifelike qualities to AI and robots, and form policies to deal with all of the resulting ethical issues.

In the end, I don't believe there will be machine consciousness in the way it is often described, as something akin to human consciousness. It will be human-influenced, because we will have imparted our ideas about what that intelligence should look like to our AI.
