Culture & Society

Why the Robots Won’t Eat Us

Neither the AI utopia nor the AI apocalypse is going to happen, because machines lack the independence and motivation to replace us

Image Credit: Neil Webb/Debut Art

Discussions about the future of artificial intelligence are often caught between competing utopian and dystopian visions. The usual assumption is that robots will replace us, which means either we will be freed from the necessity of work and we’ll all live like pampered aristocrats—that’s the utopian version—or the robots will take over and kill us all or farm us for energy, or something else that makes even less sense. With a new round of advances in AI, such scenarios seem to loom nearer. So it is with a mixture of relief and regret that I inform you: Neither of these outcomes is going to happen.

The robots won’t feed us, nor will they eat us, because the robots will not replace us. Most people have noticed by now that recent versions of artificial intelligence such as ChatGPT, which can sometimes produce remarkable results, also have very noticeable limits. But even more advanced versions of this technology will still not be able to do essential things that a human can do. AI lacks three things we have that make us special and that a machine by its very nature cannot have: consciousness, motivation and volition.

The Mechanical Sock Puppet

Let’s start with consciousness. There is an old and very complex philosophical debate about what actually constitutes “consciousness” and whether there is any inherent difference between consciousness and the mere mechanical sorting of information. But today’s artificial intelligence falls short of consciousness in a clear and simple way: It has no direct contact or interaction with the world. Today’s AI does not wander freely looking at the objects around it with its own two eyes, so to speak. It has no sense organs. Instead, it is “trained” on data already processed and arranged by humans.

AI does not observe things; it is fed data—and that is a fundamental difference.

An AI created to recognize pictures of birds, for example, is fed digital image files selected and usually labeled by its developers. Human intelligence has already been used to sort the data before the machine even gets to it. After being trained on the initial data set, the AI is then tested, and adjusted if necessary, on a more varied and realistic sample of data. But even then, it is being tested against the judgment of its human trainers.
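For readers curious about the mechanics, here is a toy sketch of that pipeline, with made-up numbers and a deliberately simple "nearest average" rule standing in for a real classifier. Notice where the human judgment enters: the labels in both the training set and the test set come from people, and the machine is scored only by agreement with them.

```python
# Hand-labeled training set: (wingspan_cm, weight_g) -> human-assigned label.
# The sorting of the world into "sparrow" and "eagle" happens here, before
# the machine sees anything.
train = [((9.0, 11.0), "sparrow"), ((10.0, 14.0), "sparrow"),
         ((120.0, 4500.0), "eagle"), ((110.0, 4100.0), "eagle")]

# "Training": compute the average (centroid) of each human-labeled class.
sums = {}
for (x, y), label in train:
    sx, sy, n = sums.get(label, (0.0, 0.0, 0))
    sums[label] = (sx + x, sy + y, n + 1)
centroids = {k: (sx / n, sy / n) for k, (sx, sy, n) in sums.items()}

def classify(point):
    """Assign whichever class centroid is nearest -- pure mechanics."""
    return min(centroids, key=lambda k: (centroids[k][0] - point[0]) ** 2
                                        + (centroids[k][1] - point[1]) ** 2)

# "Testing": the machine is graded against still more human labels.
test = [((9.5, 12.0), "sparrow"), ((115.0, 4300.0), "eagle")]
accuracy = sum(classify(p) == label for p, label in test) / len(test)
```

The machine never decides what counts as a bird, or as a sparrow; it only reproduces distinctions its trainers drew first.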

Or consider ChatGPT, which is stocked with data obtained by scraping publicly available information off the internet. To the extent it can answer questions accurately, this is because humans have already done the work of understanding, say, quantum physics or the philosophy of John Locke or the name of the Klingon home world, and they have made that information publicly available. AI is parasitic on knowledge already developed by a human consciousness. It is a mechanical sock puppet that cannot generate new knowledge of its own.

This difference between observing and being fed data helps explain the greatest concrete disappointment of artificial intelligence: the difficulty in creating software for self-driving cars. Computers still can't drive as well as humans, and not just because the perception and pattern-recognition abilities of the human brain are so good. It is because humans have spent our entire lives wandering around the world observing and interacting directly with the objects around us, seeing, touching and hearing them from all angles and in all conditions. It's no wonder we have learned how to tell the difference between a pedestrian and a lamppost, or how to make out lane markings on a rainy street at night, with an accuracy that can't yet be matched by a computer trained on canned data.

Consciousness in this sense means direct and independent access to the world, and in that regard, machines can’t do what a human infant can do.

What’s My Motivation?

But humans are not just in direct contact with the world. We live in it, and that makes a difference, too.

Scientists who study perceptual development in infants have noted that much of this development is driven by locomotion, by children moving themselves through their environment. This is true, at a higher level, for all learning. We are driven to find out about the world around us because there are things we want.

The term “motivated reasoning” now commonly refers to the process of rationalizing a conclusion to which you have a preexisting emotional commitment. But in a healthier sense, all reasoning is motivated. The evolutionary function of consciousness is to enable us to orient ourselves in the world and take successful action. We observe and think and plan so we can survive.

Our future overlord? Screenshot of Gort from “The Day the Earth Stood Still” trailer. Image Credit: Wikimedia Commons

Robots have no such needs. I asked above whether the machines are going to eat us, because that question makes the difference obvious. Like all animals, we humans constantly need to feed ourselves with energy and nutrients just to stay in one piece, and we wouldn't exist at all except as the product of thousands of previous generations that were constantly seeking out energy to keep themselves alive and to reproduce. But machines don't need to eat, and it makes no difference to them whether they are kept running or shut off. A machine has no fundamental source of motivation.

This is why most dystopian versions of AI are fundamentally unconvincing. The machines are going to take over—and do what? What would they actually want or need? What’s their motivation?

We don’t often realize how important motivation is to human reason. If the purpose of thinking is to survive, then we have a direct and personal interest in figuring out the truth and getting it right. We can’t just follow a line of thought by rote repetition. We have to constantly compare our ideas and actions to their real-world results and adjust them accordingly.

The psychologist William James memorably explained the difference between mechanical action and goal-directed action.

If some iron filings be sprinkled on a table and a magnet brought near them, they will fly through the air for a certain distance and stick to its surface. A savage seeing the phenomenon explains it as the result of an attraction or love between the magnet and the filings. But let a card cover the poles of the magnet, and the filings will press forever against its surface without its ever occurring to them to pass around its sides and thus come into more direct contact with the object of their love. . . .

Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet’s lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely.

AI has no such power to adapt its means to its ends because it has no ends in the first place, no outcomes it needs to achieve. So we can see it regularly following its algorithms into dead ends.

The most notorious illustration of this is ChatGPT's tendency to produce outright fabrications. Asked basic questions, it will give answers that sound clear and authoritative but are completely made up. Asked for references or a work history for a real person, it will invent jobs you never held and books you never wrote. It does this because it is mechanically following its algorithmic requirements wherever they take it, like a rock rolling downhill, and it has no need to make sure its answers are right.

The Power of ‘No’

Some people describe the fantasy “facts” invented by AI as the machines lying to us. But it’s not lying, really, because it is not deliberate. The AI is not choosing to do it. AI gets caught in dead ends or flies off into an invented alternative reality not simply because it has no motivation to do otherwise, but because it has no ability to change the course set by the initial conditions of its programming. It lacks the power of volition.

As with motivation, we underrate the importance of volition to human reason. The whole distinctive power of our minds is our ability not to follow some kind of instinctual programming, training, habit or social conformity the way AI follows its algorithms.

There are, of course, people who choose not to exercise this power. If you’ve ever listened to conspiracy theorists, you have witnessed the results. Like an AI algorithm running out of control, they will follow their own offbeat chain of pseudo-reasoning or a trail of misconstrued “breadcrumbs” to the most absurd results.

Central to humans’ cognitive power is the ability to say “no” to a chain of thought, to stop it and check it against reality and against our goals, and to choose to change direction and put our brains back on track.

We can see the evolutionary survival value in having the capacity for choice. Other animals have instinctual programming, which empowers them but limits them. They survive by following preprogrammed behaviors, so long as their environment matches the one in which they evolved. When conditions change, the species dies out.

Humans alone have the distinctive power to program and reprogram ourselves. This makes our consciousness far more agile and expansive and enables us to survive in any location or climate, and to expand our power in astonishing new ways.

Jean-Luc Picard Was Right

We don’t know how to make a machine capable of volition, because we don’t really know how we are capable of it. But endowing machines with the power of choice, or with motivation, or with their own independent consciousness, would also defeat the whole point.

The reason for creating artificial intelligence, as opposed to just using the natural intelligence we already possess in such abundance, is that it will operate automatically and at our direction. It will access only the data we want it to have, work to achieve only the tasks we give it and do so day and night without needing to be talked into it.

The fears of an AI apocalypse are the flip side of the dreams of the AI utopians. They are manifestations of the same contradiction. We want a human-style intelligence to do all our work for us, but such an intelligence would have to be an independent consciousness with its own motivation and volition. But then why would it take our orders? At some level we realize that Jean-Luc Picard was right. The supposedly utopian vision of a society supported by AI worker drones is actually a vision of slavery. So it is only natural that we fear a slave revolt.

But it is also a fantasy, because we are not actually building machines with any of these characteristics and wouldn't know how to do it if we tried. To be sure, we won't get the utopian benefits, but neither will we get the apocalyptic downside. We are not, thank goodness, in the business of building independent beings. What we are building are mechanical extensions of our own mental processes, capable of assisting us but not replacing us.

AI will definitely have its problems and growing pains, but they will be more prosaic than the worst-case dystopian nightmare. The hilarious failures of our current, primitive forms of AI, such as autocorrect, will be replaced by more sophisticated failures—as CNET found when it allowed an AI to write articles for it. And if someone makes the mistake of putting AI in charge of something important without adequate human supervision, the results might not be so funny. But these problems will mostly take the form, not of AI becoming too powerful, but of it breaking down, failing and needing humans to come to the rescue.

Don’t get me wrong. I have already written about what I see as the tremendous potential of artificial intelligence as an adjunct to human intelligence. If, as Ayn Rand put it, a machine is “the frozen form of a living intelligence,” then AI is human intelligence stored in liquid form: more mobile and flexible and capable of reshaping itself for new tasks. But if we want to benefit from the thinking of an independent consciousness capable of keeping itself on track by its own choice—well, we humans had better get to it, because we’re all we’ve got.


