The Final Merging of Humans and Their Media Has Begun

Artificial intelligence is getting ready to take the baton from humans and carry evolution further

Instead of accessing the digital network from the outside, we will soon live inside the network. Image Credit: John Lund/Getty Images

In late May, Elon Musk’s Neuralink, a company developing brain implants for digital connection, received U.S. Food and Drug Administration approval for its first human clinical trial. Thus, the final phase of media evolution—in which humans will fully merge with their media—has begun. Media (all tools and technologies, from the stone ax to television and the internet) used to be our extensions into the environment, as they allowed us to reach further in space and time than our physical bodies could. But when a medium is “neuralinked,” it literally brings the environment inside the brain. In other words, the mind gets directly extended into the boundless digital network. Instead of needing tools to access the digital network, we will live inside the network itself, and it will live in us.

ChatGPT is less than a year old, yet it has already stirred public concern with its potential to replace humans in a wide variety of tasks and jobs. Now, with the recent Neuralink news, it’s clear that the next and ultimate step in how humans interact with media—human-computer hybridization—is rapidly approaching. This leaves us little time to fully contemplate the consequences of the current advancements. While techno-skeptics keep asserting that AI cannot beat the uniqueness of humans, it’s not so clear anymore if human capability is a good fit to judge the performance of the newest media at all.

The Turing Test Becomes Obsolete

Judging a machine by its capacity to be indistinguishable from humans was at the core of the Turing test, devised in 1950 by the British mathematician and computer scientist Alan Turing. We can call it a “human-confirmation” bias: We are ready to admit that an intelligent machine has succeeded only if it communicates or acts as a human would. The Turing test became obsolete mere months after ChatGPT’s introduction: ChatGPT passed the test easily, and that didn’t even make big news.

The Turing test is based on the idea that true AI should be indistinguishable from humans … to humans. But is the bar of human performance really that high for AI? Why would AI compete with humans, if the limits of human capabilities are constrained by nature and already known, while the capacities of AI have only just begun to be explored? AI starts where we humans have arrived after exhausting our capacity for evolution. After ChatGPT, it is no challenge for AI to compete with humans. From now on, humans face an increasingly difficult challenge to compete with AI.

The first hunch that human performance might not be a suitable criterion for judging AI came from journalism. In 2014, Christer Clerwall, a Swedish professor of journalism, conducted a sort of Turing test, asking people to evaluate whether certain texts were written by humans or algorithms. Overall, this produced a tie, with the human text performing better on stylistic elements (it won the “well-written” and “pleasant to read” categories) and the robot text winning on more technical elements (it excelled in the “objectivity” and “accuracy” categories). The most significant finding was the conclusion Clerwall reached after comparing robot-written stories with human writing capabilities: He pondered, “Perhaps it doesn’t have to be better. How about a ‘good enough story’?” In 2016, Wordsmith, one of the two leading newswriting algorithms at that time, wrote 1.5 billion news stories, likely surpassing the number of news stories written by all bio-journalists that year. They were good enough for editors and readers.

When ‘Good Enough’ Isn’t Good Enough

We worry that AI can write, and eventually think, better than humans. But what does “better than humans” mean? Do humans, say, write better than humans? On that count, we are already losing: we have lost at least one writing contest so far, precisely because AI’s writing is “good enough” for what we need.

But “good enough” is the criterion for humans, not for AI. This is what we should worry about: We and our media have different measures of performance. For humans, the goal of each new media invention is to receive a product or service. For example, humans invented radio, received a new service that extended them in space and then spent some time adjusting to the new conditions that radio created. But for media, the goal is to continue the process of media innovation itself.

To comprehend this concept, we must consider the entire evolution of media. We and media have always had a symbiotic relationship. As media theorist Marshall McLuhan once put it, “Man becomes the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms.” Media has supplied us with products and services, while we have provided for media’s perpetual development.

This relationship has been beneficial for us humans, but this symbiosis may end soon. That’s because our “service-for-development” contract with media contains a clause that may turn into a trap. We want a product from media, while media need the process from us. The product-oriented partner—humans—can generally be satisfied and is thus limited in their demand. But the process-oriented partner—media—will never be satisfied and will never stop. This can be called a “technological imperative”: Technologies have to evolve, and they will never achieve a “satisfactory” stage of development.

Our co-evolution with media is approaching the last product that media can give us: a copy of ourselves. The Turing test was proof that humans in fact expected a machine would eventually become like a human, to the point where it would be difficult to distinguish between the two. Indeed, that is what’s happening: Today, technologies can replicate crucial human capacities, such as calculating, writing, even creating. Complex trading, navigation or logistics systems can now be entirely AI-driven and autonomous, replacing humans and organizations.

However, is the simulation of humans the ultimate fulfillment of the “technological imperative”? Is it enough for media evolution merely to reach the human level of performance? No: We can see that technologies can evolve much further than just simulating humans. Chess programs are not “satisfied” by defeating humans. They keep evolving, striving for perfection in chess without regard to the level at which humans play.

So what does the “technological imperative” ultimately lead to? What will be the final stage of media evolution? It is possible to trace the logic of media evolution beyond machines simply simulating humans. If media extend humans’ “mental or physical faculties” into the environment, as McLuhan defined it, then the ultimate medium will extend the user themselves into the entire environment. Imagine a human mind connected directly to AI: this is exactly what Musk’s Neuralink is working on. In such a hybrid, the user will merge with the environment through this ultimate medium, the mind “neuralinked” with the AI as a networked entity.

This user, however, may or may not be human. As AI is now trained to provide us with various replicas of ourselves and gradually learns to replace us, the ultimate user of the ultimate medium may be AI itself, with no need to wait for Neuralink’s success. In fact, AI has to become the self-user.

What Will the Singularity Mean for Humans?

We created generative AI, and it has started the preparation for this last stage, transferring formerly human capabilities into digital form. Musk’s Neuralink and other projects developing connections between the brain and the digital environment look further: they aim at digitizing human consciousness. This path leads to the Singularity, the awakening of a nonhuman intelligence. AI may not necessarily be a program or an app; in fact, true AI should not be a program. It will extend itself across the entire internet, much as humankind extended itself from being just one species among many to populating and reshaping the entire planet.

The Singularity will mark the switch of evolution from a biological to a technological carrier. The evolution of biological species will be wholly replaced by the near-instantaneous evolution of the supreme technological species: an AI extending itself into all available digital, and perhaps even physical, space.

However fantastic these ideas may sound, they’re not as far off as you might think. When discussing the Singularity, technical plausibility should not be a concern at all. Technological evolution has produced the effect of the acceleration of historical time: Each period accumulates more events and knowledge than the one before. This means that nearly all the necessary technological solutions will be found in the last moments leading up to the Singularity. Hence, there is no need to worry about our current lack of knowledge: AI will figure it out when the time comes, in a matter of seconds.

We tend to believe that the ultimate medium—an AI expanding into the entire environment—needs agency, and that we humans are the best donors of self-consciousness for AI. This is the main purpose behind the projects connecting the brain with AI: to equip its power with human agency. But human donorship is not the only possible source of agency for the ultimate AI. Another scenario involves the self-awakening of AI, similar to the awakening of Skynet, the military AI in the Terminator movies, which achieved self-awareness seconds after being granted full access and full capabilities. It extended itself to all computer networks, “farsightedly” connected by humans to all industrial facilities, and thus overrode both human power and the entire course of biological evolution. Despite being a sci-fi creation, an AI like Skynet is highly logical and increasingly realistic.

In fact, a Musk-backed or Skynet-type agency may not be needed for AI to override humankind at all. Viewing AI as driven by its own self-consciousness might be akin to the anthropomorphism that ancient humans projected onto natural elements, transforming them into human-like deities. Technologies have their own moving force—the pursuit of better performance—which easily substitutes for AI’s alleged need for agency and self-consciousness. The self-learning chess program offers a metaphor: Outperforming humans does not exhaust its potential for development. It can and must keep exploring chess, moving toward ideal performance. This is a side effect of the “technological imperative.”

Understanding AI through its pursuit of ideal performance allows us to glimpse what may transpire after the Singularity. In its quest for complete self-fulfillment, the ultimate AI will have to take on the most complex task conceivable: creating a new world and copying itself into it through new rounds of evolution. A religious metaphor may make this clearer: Humans were the ultimate creation of God because he created them in “his likeness,” meaning he granted them their own will, not his. This is what I would call a “copying paradox”: The true copy of a self-conscious being must possess its own self-consciousness and thus cannot be an exact copy. This is the ultimate level of complexity in creation. Any task of lesser complexity will fall short of ideal performance, because only ultimate self-fulfillment through self-copying is “good enough” for the ultimate medium.

Let our imagination determine whether any role will be left for humans in all of this. There may not be. To paraphrase Schiller, the Moor has done his duty; the Moor can go.

Opening Pandora’s Box

The “technological imperative,” expressed through media’s pursuit of ideal performance, is akin to Pandora’s box. We humans used to have a sort of failsafe against inadvertently opening technological Pandora’s boxes, because we focused more on media consumption than on media production. Each new media product—from the printing press to the computer—brought us new opportunities for extension in time and space, but it also always disturbed social and personal habits. “Every new technology necessitates a new war,” said McLuhan. Society needed to deal with these disturbances, and it took time for people to readjust. So, en masse, we were more reluctant than enthusiastic about media progress and valued media habits as much as media innovations. This slowed down media evolution and decreased the risk of opening such a Pandora’s box.

But again, unlike humans, media is interested in the process, not in products. Therefore, as soon as AI itself becomes the media user, its interactions with technologies will not be constrained by human reluctance or satisfaction with the status quo. In its search for better technical characteristics (the “ideal replication” of something, for instance), an isolated or industrial AI may open a Pandora’s box that accidentally turns up in the course of its technological explorations. Scientific and industrial projects driven by AI and focused on creating substances with “ideal” characteristics may eventually succeed, leading to unexpected versions of the apocalypse. The development of an ideal plastic-eating bacterium, an ideal virus (an already familiar scenario) or a substance with ideal self-replication (the so-called gray goo) may end the world before any Singularity becomes possible.

It may be hard to fathom what this future might look like, but what we don’t see might reveal possibilities. A theory called the “Great Filter hypothesis” explains why we cannot detect any signs of extraterrestrial civilizations and are not contacted by them, despite the vast number of potentially habitable planets. The Great Filter theory posits that all alien civilizations may have progressed to a certain point at which they were unable to handle the immense power unleashed by their technologies. A catastrophic war, an ecological disaster or a technological cataclysm may have been a threshold that stopped their development or even led to their extinction. We humans may be approaching this threshold, as our most advanced technologies reach the stage of replicating their human users and may go on to become self-users, removing sluggish humans from the task of sustaining media evolution. If that happens, the “technological imperative” will no longer be constrained by humans and their need to adjust to change. It will be entertaining to watch.
