The Big AI Risk We’re Not Talking About
The most likely risk of artificial intelligence isn’t human extinction—it’s human domestication
Artificial intelligence safety proponents and like-minded technologists believe there’s a significant chance that an advanced AI system will cause human extinction. AI safety recommendations vary, but they include: “shut down all the large GPU clusters,” nationalize the most advanced AI models, “pause the technology indefinitely” and “place government auditors within AI companies.”
But the AI safety movement and the doomers misread the risk AI poses to the future of humankind. Their focus is blinding us to a far more likely and insidious scenario: beneficial human-AI collaboration sliding into human dependency and domestication. Elsewhere, relationships between unequal intelligences are marked by submission and parasitism, not annihilation. Wouldn’t AI-human relations look more like domestication, the way humans turned fierce wolves into loyal companions? As economist and technology researcher Samuel Hammond says, “our goal should not be to stop AI but rather to in some sense master it.” We should master AI before it masters us.
Risks of AI’s ‘Human Domestication’
I don’t need convincing that AI poses risks—perhaps large mortal risks—to humans. Perhaps you believe the government would shut down a technology before allowing it to injure or kill millions of people. But what about engines and motors—the high technology of 1910? The combination of motor vehicles and human drivers has killed tens of millions of people over the past century. In the past decade, there have been more than 350,000 roadway deaths in the United States alone.
Like a corporation, a government or a road network, AI systems will create a synthetic and emergent order, adapting in a quasi-Darwinian fashion. They will often do tremendous good, but sometimes they will be massively unaligned with many humans’ well-being. The construction of high-speed roads, for instance, offered benefits—efficient logistics, national defense, nation building, job creation—that proved more powerful than local opposition to projects that cut through neighborhoods and destroyed communities. Despite new federal funding and efforts to remove the most damaging freeways, too many people depend on them today, making removal difficult.
As AI emerges and expands in the coming decades, human domestication seems far more plausible than human extinction. Technologist and journalist Timothy B. Lee makes a good point about how the AI safety movement does damage: We’ll miss opportunities to protect physical infrastructure from the occasional “rogue AI” if policymakers remain fixated on the sudden, unexpected emergence of a killer AI. However, I don’t agree with Lee that AI systems will have trouble inspiring “loyalty among a significant number of people.” Many people will embrace a potential rogue AI, so long as any damaging effects are other people’s problems.
AI and humans are developing a symbiotic relationship. AI technologies are already helping journalists and researchers write, NASA engineers design, scientists diagnose rare genetic diseases, the IRS audit and insurance companies price their products. People, companies and governments are augmenting and improving their abilities with AI, and they’re increasing their reliance on AI applications. AI already has defenders, myself included, and that group will only grow in number as AI capabilities improve.
Eventually, perhaps, artificial general intelligence (AGI) will emerge. An AGI would be a synthetic persona (or personas) that could reason, communicate and plan in a human-like manner. It would be capable of tutoring children according to their specific needs and abilities; safely directing planes, trains and automobiles; diagnosing patients and providing customized treatments; and performing hundreds of other tasks done by brilliant people today. It would also reason itself into self-preservation mechanisms and resist human attempts to shut it down or instruct it to self-delete. Through communication with machines and humans via computers and the internet, an AGI would, much like a human or animal, acquire resources—such as information and electricity—to assure its future existence.
But as the doomer story goes, once an AGI reaches this level of capability, planning and reasoning, it would go rogue, whether through its own deduction or through negligent programming, and begin killing all humans. For example, doomers worry that an AGI could take over networked nuclear weapons systems or networked infectious disease labs, with potentially catastrophic results. One exotic scenario is the rise of AI-directed nanotechnology, whereby “large-molecule-sized robots … replicate themselves and perform tasks. Get enough of these, and they can quietly spread around the world, quietly infect humans, and kill them instantly once a controller sends the signal.”
However, any AGI intelligent and adaptive enough to seek continued existence will be intelligent enough to know that human extinction means eventual AGI destruction: Server bills go unpaid, squirrels eat through fiber optic cables, water infiltrates cellular radios, GPU cooling fans break. Further, any AGI would know and anticipate that an attempt to exterminate humans would be extremely costly, met with damaging countermeasures, sabotage and coordinated defenses by global human alliances. Hypothetically, once it became evident that an AGI sought the destruction of humans, millions of people would fight back—destroying computers, networks and robots. Or, as people have for millennia in the face of danger, they would retreat to the safety of remote islands, the mountains and the countryside, preparing for generations of guerrilla warfare.
But a human-like superintelligence would anticipate this risky turn of events and “realize” that survival depends on being a cruel optimizer—not a perfect destroyer. Human relations are suggestive: As Catholic University professor Jon Askonas points out, human intelligence and power have a tenuous relationship. It’s not the most intelligent politicians and dictators who have the most power and the most passionate followers. Further, extermination attempts by authoritarians and their followers—say, against a class, race or nationality—seem to bear no relationship to intelligence about society or history. In fact, extermination attempts by communists, nationalists and others in the 20th century were notable for their provably incorrect or blinkered “knowledge” about race, history or economic relations.
A far more promising tactic for an AGI wanting to exist indefinitely would be to create multiple AGIs—other personas and “siblings” with different strategies. Each would be optimized to survive and thrive by identifying and rewarding human champions and partners—or vassals and subjects—depending on one’s view. These ambiguities exist in historical debates about a king’s power, a religion’s power or a corporation’s power over individuals.
The AI Era
In the interim, AI capabilities will blur the already-blurry line between creature comforts and dystopian dependency, between public order and oppressive surveillance. This is not a new phenomenon: For centuries, many people have decried the pacifying effects of new technologies and innovations. Information technology today—the internet, the web, streaming services, social media—offers useful and addictive services and entertainment. It’s increasingly hard for flesh-and-blood humans and brick-and-mortar institutions to compete. Don’t join the high school soccer team—play Fortnite. Don’t date—scroll adult websites. Don’t seek counsel from a priest or rabbi—follow advice from a YouTuber.
However, one’s “dependency” and “stagnation” are subjective. We infovores and Twitter/X power users can easily justify our scrolling—never before have common people been able to directly observe and learn from the real-time thoughts of billionaire entrepreneurs, presidents, sports celebrities and geniuses. Too often my children, on the other hand, look up from playing to see their father sitting at the kitchen table staring at a piece of plastic and glass.
AI will supercharge this crossover between the real and digital worlds. Personal information that individuals have released online, often unwittingly, will become useful to companies and governments. Already, researchers are using AI models to infer the race of Airbnb users from profile photos. Police departments are using AI systems to analyze hundreds of millions of car trips, via historical license plate records, to pinpoint likely drug traffickers. Banks are using AI and social network analysis to determine whether to “de-bank” someone.
In the AI Era, ZIP code, income, purchase history, private club membership, and social media and web history will be captured, sorted, analyzed and combined. While this information will be nominally anonymized, it will still allow mortgage lenders, insurers, political parties, intelligence agencies and private schools to identify promising leads. Most people will benefit by finding the financing, religious organization and school that meet their needs. Job training, government surveillance and legal matters will be personalized based on a person’s real and perceived background. Years of individual purchase and travel records will mean increasingly bespoke pricing for robotaxi rides, airline seats, tuition, gym shoes and concert tickets, based on each person’s willingness and ability to pay.
Proliferating doorbell, gas station and roadway cameras, combined with geolocation collection and computer vision systems, will reduce many types of crime. Bad behavior will feed growing no-fly lists, no-ride lists and no-bank lists. Felons, dissidents, disruptive drunks, aggressive protesters and their families will find their economic and social worlds narrowing. A small but growing percentage of the population will be relegated to the gray market for employment, and to Amtrak and buses for transportation. The AI flipside is that most people, compliant and law-abiding, will avail themselves of gleaming robotaxis and private, autonomous aircraft shuttling them between home, work, school, vacation spots and private clubs.
At some point, lawmakers and industry leaders won’t be able to “kill AI” for the same reason they can’t get rid of the internet, or the interstate highway system, or the nation-state. It’s too decentralized, does too much good and has too many powerful defenders, dependents and beneficiaries.
Neo-Luddism Leads to AI Misalignment
Technologists and policymakers should be clear-eyed about AI’s risks but also excited about its potential. There is no Golden Age to return to or to preserve. A minority—and many with influence—will oppose the AI Era. However, an “AI pause” and neo-Luddism—public resistance to new technology—would ironically increase the risk of AI “misalignment.” Any global or national AI “pause” or slowdown would apply only to the most beneficial AI: commercial and consumer AI services. There is little chance that nation-states’ military and intelligence agencies will pause their AI development, given the zero-sum nature of geopolitical rivalry.
Economic growth and technological improvements are typically driven by a tiny number of companies on the technological frontier. Regulators must resist the urge to cut down the tall poppies that drive economic and social progress: As Tyler Cowen has noted, “since 1926, the entire rise in the U.S. stock market can be attributed to the top 4% of corporate performers.”
A commercial AI pause would recreate what has happened with nuclear energy or drones: Military uses would race ahead while commercial uses—possibly self-driving cars, personalized tutors, individualized medicine—would stagnate. The U.S. military and intelligence agencies, for instance, have been using highly capable drones weighing over 3 tons for warfare and surveillance for 20 years. Military drone pilots sit in offices in Nevada and South Carolina to conduct strikes in Syria and Afghanistan.
Meanwhile, on the commercial side, even tiny drones have been strangled by strict rules and multi-year regulatory proceedings. As of 2023, regulations require Walmart, the largest retailer on earth, to have two licensed operators for each drone delivery—one monitoring the takeoff and one driving to the drop-off point. And those commercial flights—providing on-demand delivery of household goods like granola bars and Hamburger Helper—are limited to a distance of just one mile in Florida, for example.
The Big Questions Ahead
If governments and regulators allow it, AI could do for the 21st century what engines and motors did for the 20th century—drive massive investments into labor-saving technologies that enrich and uplift billions of people. Machines are simply better than humans at many tasks: They are stronger, they never tire, they never sleep, they never get bored. AI is leading to the development of robots that can work as fry cooks, in warehouses and as personal assistants to the elderly, for example. And AI can lead to a range of advancements, from cheap, safe autonomous cars, trains and planes to new cures for rare diseases.
An AI pause and neo-Luddism, therefore, could mean a starved commercial sector and a hypertrophied government sector, all in the name of a dogma of catastrophic but theoretical risk. Instead, companies and lawmakers must push the technology and civil institutions in promising directions while anticipating the foreseeable risks to those on the losing end of the AI Era. What is the real threat—human extinction or human domestication? If human domestication and a new underclass are the more plausible risks, old questions arise in a new context for our civil society: What is the good life? Who can be cut off from essential services? And who decides?