AI Alignment and Future Threats
The biggest challenge as AI use increases will be limiting government power and surveillance
By Brent Skorup
Though it would have been hard to predict even a year ago, AI safety is now a major public policy priority. President Biden last week called on AI companies to protect users’ safety. Days later, OpenAI, an industry leader, released its “approach to AI safety.” Technologists and theorists are concerned about “AI alignment”—which means, as one prominent research center puts it, methods to “align future machine learning systems with human interests.”
But too many companies, analysts and regulators are worried about the wrong things. Any talk of “AI alignment” and “AI safety” must prioritize the primary alignment problem of the past century: constraining state surveillance, power and violence. Bureaucracy and technology have greatly centralized power in the past two centuries. AI is, like motors and the internet, a powerful general-purpose, labor-saving technology. It has great commercial and economic promise but is being used by governments and political parties to disempower and dominate.
Bureaucracy and AI technologies are increasingly being turned “inward” on U.S. citizens and residents, mostly for intelligence gathering and law enforcement. Five years ago, U.S. intelligence officials boasted of more than 130 pilot projects using AI technologies. Presumably, promising technologies are shared with domestic-facing law enforcement and intelligence agencies. As researchers and government agencies consider AI use cases and alignment, they must put social pluralism and individual liberties front and center. No one can predict AI uses and second-order effects with precision, but there are a handful of areas where AI uses, without new constraints, will create imminent alignment and safety problems.
The Feuding Governing Classes
AI technologies must be analyzed in light of America’s loss of esteem for social pluralism. The U.S. governing factions are stuck in a dangerous arms race to dominate each other. My colleague Martin Gurri points to a related problem in many nations: The governing classes have lost control of media narratives and the trust of the public. In the U.S., rather than acknowledging the new reality and attempting reform, the governing classes have turned on each other. It is a sad development that political violence, threats against family members, spying and investigations by intelligence agencies, secret surveillance warrants relying on false statements, threats of imprisonment and actual imprisonment are today real possibilities for U.S. political leaders, their staff members and judges—not to mention lesser forms of intimidation and sabotage.
Perhaps the most alarming fact is that this instability is happening despite relative tranquility globally—no Great Depression, no armies of Hitlerites or Stalinists rolling across Eurasia redrawing dozens of national boundaries, no compelling ideology seriously competing with the various flavors of liberal democratic capitalism. Further, the U.S. population is largely happy; Gallup reported that in 2022 about 90% of Americans were satisfied with their lives, a percentage that has been steadily increasing for 40 years. However, Americans’ personal happiness has almost completely decoupled from quality of governance. Pew also finds that Americans’ satisfaction with the direction of the U.S. is low and declining rapidly.
Elite power contests between small factions can quickly spiral out of control. AI use cases including sentiment analysis, social network analysis, document review and financial auditing will supercharge politicized lawfare and investigations. Overcriminalization, abuse of prosecutorial discretion, dragnet surveillance and politicized law enforcement are all preexisting problems in the U.S. AI technologies will tend to empower the government and political classes at the expense of the public. Media will splinter further, government and private surveillance of political enemies will increase and lawfare and investigations will become more precise and disruptive.
AI alignment means exploring specific applications of AI services and anticipating uses that undermine human interests. A few areas in particular demand technologist, researcher and lawmaker attention: censorship, surveillance and critical services cutoffs.
Social Media Censorship
In recent years, public-private coordination over censorship—so-called voluntary censorship—has ramped up for social media. As Gurri remarked, “There’s no precedent for what’s going on unless you go back to Franklin Roosevelt’s wartime censorship.” During the 2020 elections, for instance, Twitter employees noted internally that many FBI agents seemed to be tasked with doing manual keyword searches to find and flag users’ content for removal. This public-private censorship could easily be automated with AI technologies, making it even easier for the government to limit national debates and narratives.
Federal lawmakers should examine and require much more disclosure about how intelligence and law enforcement agencies communicate with social media company employees. AI technologies will make it cheaper to censor and will, absent reforms, escape oversight. At the very least, government employees should maintain records of the “disappearing” communications they use with social media company employees. Recent disclosures reveal, for instance, FBI plans to communicate with social media companies via Signal and via a mysterious Teleporter system, apparently a government-to-social-media-company portal for content takedown requests timed to self-delete.
Surveillance
Surveillance of streets and other public places—via doorbell cameras and traffic cameras—is now routine. There are tens of thousands of cameras in D.C.-area public schools alone. Several coinciding developments will favor the installation of far more high-definition computer vision cameras in downtowns and along roadways, including massive new funding for fiber optics, falling costs of cameras and sensing technologies and the installation of 5G wireless. Given the tens of thousands of annual roadway fatalities and homicides in public places, many public officials will welcome new street cameras as a low-cost, life-saving technology. Further, automobile and transportation companies are embracing driver surveillance and connected technologies to make fleets and autonomous vehicles safer.
There is a place for more cameras and connected technologies. Most of us willingly carry tracking devices—smartphones—on our person 24-7 because of the convenience and safety GPS offers. However, we tolerate GPS tracking via phone largely because it is a private company, not government or law enforcement, that is capturing our location information.
I suspect many people would similarly tolerate (or welcome) more cameras and connectivity along roadways if it meant safer drivers and more compliance with basic insurance and licensure laws, which are violated by tens of millions of drivers daily. Roadside cameras can be a powerful way to ensure safer vehicles and safer streets. But to preserve civil liberties, roadway infrastructure and technology should be funded, maintained and operated primarily by private companies.
Drone surveillance is a fast-growing method of law enforcement, newsgathering and private surveillance. Computer vision and AI technologies will make drone surveillance and recordkeeping much cheaper and easier to accomplish.
Troubling legal arguments are being advanced for the domestic use of drone surveillance and autonomous operations. Further, many agencies believe that warrants are not required for aerial investigations. Courts are divided, but some are permitting routine, warrantless drone surveillance of American neighborhoods and businesses.
For instance, a Michigan appellate court ruled last year in Long Lake Township v. Maxon that drone surveillance photos of homes gathered without a warrant are admissible as evidence in civil cases, which include a wide range of investigations such as zoning inspections, labor rule violations and noise complaints. Such drone recordings and photography cannot be excluded from civil proceedings, even if gathered unconstitutionally. This year, an appellate court ruled in Ohio v. Stevens that police drones can be freely flown over private land, so long as they don’t fly too close to homes. With these green lights from courts, government drone surveillance can be expected to become routine as costs of evidence-gathering and operations fall.
Private surveillance and harassment will also get worse, absent strong protections of landowner property and privacy rights. Many drone operators and governments argue that drones must be allowed to fly at any altitude the operator finds convenient. A Virginia drone law, for instance, expressly exempts most drone pilots from criminal trespass laws, even for drone flights within 50 feet of homes and businesses. Laws like these limit the ability of landowners to protect their air rights and encourage invasive newsgathering and private surveillance.
Few landowners, HOAs or residents want drones flying at low altitudes without landowner consent. Further, government uses of drones will undermine landowners’ property rights and privacy if air rights and evidence-gathering rules are not codified. Federal and state lawmakers need to anticipate harassment, spying, nuisance and evidence-gathering via drones and respond with minimum altitude requirements—say, 200 feet—to constrain intrusive newsgathering and surveillance by private individuals. Clear warrant and altitude requirements should also be imposed on drones used for regulatory investigations and law enforcement.
Critical Services Cutoffs
For generations, government officials have cut off (or threatened to cut off) media outlets and political enemies from critical services such as postal and financial services. This has typically been a very manual process in wartime, which has limited the scope of abuse. However, with increasingly networked critical services and AI technologies, and a trend of using emergency measures during peacetime, identifying and denying critical services to dissidents, enemies and scofflaws is becoming much easier.
Governments frequently fail to achieve official objectives. In recent years, governments around the world have struggled to control COVID-19 spread, demonstrations and dissident political movements. When government officials cannot achieve their goals directly via regulation or policy, some have turned to denying critical services to individuals or groups.
Officials in Japan, for instance, have proposed drastic measures to achieve green energy objectives, including, according to Japan Today,
the ability to remotely turn down privately owned air conditioner/heater units. The goal would be to decrease energy usage during expected power shortages, which the committee feels are a growing concern as Japan attempts to shift towards renewable energy sources such as solar power, where the amount generated can be affected by day-to-day climate, making it difficult to stabilize the amount of total power available.
In 2020, Los Angeles officials authorized and ordered water and electricity shutoffs for residents and businesses believed to be violating social-distancing and lockdown rules. Officials in Canada shut down demonstrators’ bank accounts and suspended vehicle licenses without a court order, the BBC reports, in an effort “to crack down on anti-vaccine mandate protests.” Officials also “broaden[ed] Canada's ‘Terrorist Financing’ rules to cover cryptocurrencies and crowdfunding platforms, as part of the effort.”
Suspending access to, or seizing control of, financial services, air travel, auto licensure and even HVAC systems is apparently viewed by many officials of Western liberal governments as a legitimate tool to quell protests and enforce contested or unpopular policy goals. Central bank digital currencies, if implemented by U.S. regulators, and new mandated reporting of even minor financial transactions will extend government financial surveillance intimately into most Americans’ lives.
Peaceful protestors and dissidents are increasingly likely to be surveilled and recorded by public and private parties. Political groups are already purchasing and publicizing GPS tracking information about opposition protestors. As the Detroit News noted in a 2020 story:
The tracking of hundreds of Michigan Capitol protesters’ cellphones from two rallies this spring has raised questions about protecting privacy rights, while testing limits of what location data can predict about the spread of the novel coronavirus from packed public events.
A liberal advocacy group most recently pursued the anonymized cellphone location data from more than 400 devices from individuals who attended the American Patriots Rally on April 30 in Lansing against Gov. Gretchen Whitmer’s stay-home orders and used it to gauge their locations the next day in a bid to determine the potential spread of the virus from the demonstration.
The location of 422 cellphones within a roughly four-block radius of the Capitol showed individuals traveled back to areas throughout the state, including West Michigan, Metro Detroit, Northern Michigan and the Indiana border after the event.
Given these current realities and public policy trends, AI technologies will make previously manual shutdowns of critical services much faster and easier to implement.
Worrying About the Right Things
AI use for intelligence gathering and law enforcement should not (and cannot) be banned outright. There are some areas where the public will welcome low-cost surveillance and enforcement. Homicides and roadway fatalities, for instance, often happen in public places, destroy tens of thousands of families annually and result in hundreds of billions of dollars in economic damage. Local police with computer vision tools can deter crime and better identify violent criminals and dangerous drivers. Further, most people will want white-collar crime, massive frauds and financial crimes to be identified with sophisticated software and algorithms.
Nevertheless, policymakers and companies must consider the imminent applications of AI, especially in the areas of censorship, surveillance and cutoffs of critical services. Unfortunately, there is less that can be done, policy-wise, to mitigate intraclass fighting among U.S. governing elites. Their escalating legal battles, sabotage and rhetoric represent a large alignment problem for the American public and for nations around the world, a problem that will grow as parties and government officials adopt AI technologies. The first thing the governing elites must do is clear: Admit they have a problem.