The Crisis of Technical Deference in AI Policy
While some AI expertise is necessary, policy led solely by technologists will be prone to blind spots and misfires
In May, the Senate lit up the AI policy conversation with two dueling artificial intelligence hearings: one on the use of AI in government and the other on AI oversight. Most observers who compared the two noted their contrasts: The Sam Altman-led oversight hearing featured high drama (for the Senate, that is), while the relatively staid AI in Government hearing featured substantive policy proposals.
These contrasts in substance, however, disguise a deeper thematic lockstep. During the oversight hearing, uncertain senators looked to a trio of computer scientists, hoping their technical understanding might yield effective solutions. Meanwhile, the witnesses at the AI in Government hearing proposed a formalization of this exact approach: Most proposals focused on the government’s pressing need for technologists and how to coax computer scientists into the agencies to manage the vexing challenges of the AI age. The shared theme: AI is complex, and for solutions, we must turn to the technical experts.
The Senate’s emphasis on technical expertise is hardly surprising. Again and again, officials have defaulted to the position that AI policy should be led by technologists. Today there is a rarely recognized yet growing problem of excessive deference to technologists baked into formal AI policy and governance structures. Government officials have staffed, centered and prioritized the leadership and perspectives of well-credentialed technologists, computer scientists and Silicon Valley natives to address AI challenges. Almost entirely missing are other types of relevant and necessary nontechnical expertise. This is despite AI’s predicted impact on a wide range of disciplines beyond computer science, including healthcare, transportation, labor economics, creative media and aviation.
While many AI policy decisions no doubt demand the input of technical expertise, by elevating only this one perspective, officials are accepting the risk of decisions made without considering the potential diversity of AI applications, uses and externalities. AI is a general-purpose technology, and to manage its unpredictable and varied impacts, we need to foster a general-purpose slate of experts at the policy helm. If we don’t, we risk decision-making blind spots and inevitable policy misfires.
Silicon Valley Rabbit Hole
Understanding this challenge requires grasping just how deep this technical deference goes. An important piece of this puzzle, the piece we saw on display in the Senate, is external influence. It was unsurprising that, in early May, the Biden administration kicked off its AI policy push with a high-profile meeting with Silicon Valley’s top AI leaders. The effect was to highlight the central importance of these technical voices in the administration’s budding AI regulatory efforts. In the months since, the influence of these external technical experts has only been reinforced through subsequent meetings and negotiations.
While the influence of these experts cannot be measured precisely, its impact can be profound. Copying Biden’s playbook, U.K. Prime Minister Rishi Sunak held his own summit with Silicon Valley’s top AI leaders. In this case, their influence appeared to prompt policy shifts. Sunak’s government, which had previously advocated a relatively light-touch, optimistic approach to AI technologies, followed the meeting with a messaging shift to match the growing technologist-led consensus on AI risk. Days later, his government adopted one of the very policy prescriptions Sam Altman and Gary Marcus pushed during May’s Senate hearings.
Even when AI experts do not have direct contact with decision makers, their technically informed words still hold incredible sway. Since Sam Altman debuted his marquee proposal for an AI licensure regime to regulate powerful AI models, decision makers have taken his advice to heart. On May 25, representatives in the Pennsylvania legislature introduced legislation to study, and perhaps pioneer, Altman’s proposal. These technologists, while hardly wonks, are unquestionably taking a leading policy role, and their ideas are being actively pushed to the front of the policy agenda.
The unique external influence of these celebrity AI technologists, however, is only the tip of a bigger policy iceberg. During the late 2010s, Congress created a variety of AI bodies, offices and commissions to help build the bedrock of federal AI policy. In almost every case, officials have defaulted to filling these influential appointments with AI technologists. Until recently, the now-vacant directorship of the National Artificial Intelligence Initiative Office was held by Lynne Parker, a distinguished expert in robotics. Likewise, on the National Security Commission on Artificial Intelligence, all commissioners were computer scientists, Silicon Valley insiders or technical agency bureaucrats. Finally, only four members of the White House’s National AI Advisory Committee have nontechnical backgrounds, while the other 22 are either technologists, academic computer scientists or Silicon Valley executives.
While these federal structures are exceedingly influential, they admittedly hold little direct authority. On the horizon, however, is a constellation of efforts to back technologist leadership with real teeth. In Congress, several proposals are circulating to fill regulatory gaps and take preemptive AI action by establishing a new AI regulatory agency. While such an agency needn’t necessarily replicate this pattern of technologist-led governance, in most proposed cases that is the exact model advocates seek. Commenting on his own draft legislation, Rep. Ted Lieu (D-Calif.) claims “legislators lack the necessary knowledge to set laws and guidelines” and believes we should instead look to the expert guidance of an agency armed with “tech-savvy personnel.”
Similar proposals are taking shape in the states. A recent bill introduced in the New Jersey state legislature would create a catch-all “artificial intelligence officer.” Again, expert deference is explicitly the point. In the words of its sponsor, it’s not in “[New Jersey’s] best interest for . . . a state legislator to try to overprescribe what that public policy [around artificial intelligence] looks like.” Instead, New Jersey should “set up a mechanism to allow individuals with deep experience in this area to utilize that experience to frame out what that public policy should look like.”
While there are certainly exceptions, in the vast majority of cases, AI policy leadership has been handed to technologists.
Not Without Reason
So how did this happen? It’s important to recognize that this emphasis on technologist-led policy is not without reason; it follows a clear logic.
Today, the government faces significant AI capacity shortfalls, despite the technology’s clear promise. According to a recent MeriTalk federal government survey, 87% of those in government leadership believe their agencies have significant AI resource gaps. Further, half of agencies report that previous attempts to implement AI programs have failed due to a lack of necessary expertise. AI experts are clearly needed.
The deeper roots of this technical deference, however, rest in the long-festering malaise of government technology capacity shortfalls. On multiple occasions throughout the past decade, government has failed to meet digital-age demands. While in some instances these failures are simple knowledge gaps—such as the late Senator Hatch’s famous confusion over the basics of social media—in many cases, such as the disastrous 2013 rollout of Healthcare.gov, technical shortfalls mean a failure to deliver promised government services. In its annual report on “high-risk” challenges facing the federal government, the Government Accountability Office states that despite modest improvements in government tech capacity, most of its 34 “risks” are directly rooted in IT malaise.
Government tech and AI capacity shortfalls are very real and must absolutely be considered. If the government does not understand AI or how to use it, its policies will inevitably fail to match the complexity and diversity of this technology. Without capacity-building action, we invite waste and another Healthcare.gov-esque debacle. It’s clear that reasonable efforts to build technical capacity, understanding and talent are needed at some level; that’s not the issue. This well-meaning emphasis on technical talent becomes a problem, however, when it is treated as the only solution, when new talent is overconcentrated in top-level posts and when that talent crowds out other necessary points of view.
Blind Spots and Brittle Realities
Why do I label this trend a crisis? The term is a bit melodramatic, but today’s AI leadership holds unique long-run importance. In the next few years, choices will be made and laws written that will form the bedrock rules and guardrails directing AI’s future. If only one type of expert is writing those rules, blind spots and misfires are guaranteed.
The fact is, technologists simply don’t know everything. In a recent Bloomberg column, economist Tyler Cowen rightly noted that “true expertise on the broader implications of AI does not lie with the AI experts themselves.” While technologists can speak to the nuances of AI architecture, the electronics of graphics processing units, machine learning methodology and AI capability, they cannot speak with authority on all potential AI use cases, challenges and impacts. Understanding and muddling through AI’s economic influence, legal effects, copyright implications, education uses, labor force impacts and many, many other questions demands nontechnical expertise. Engaging a full variety of nontechnical experts will make AI choices better informed, better targeted and grounded in the complexity of real-world application.
The information asymmetries inherent in our current technologist-led approach are not just theoretical. Earlier this year we saw a glimpse of how lopsided expertise can distort policy conclusions.
In 2020, the Trump administration and Congress, as part of the National AI Initiative Act, commissioned the National Artificial Intelligence Research Resource Task Force, a body charged with studying the creation of a “national artificial intelligence research resource.” In the words of the task force’s final report, the envisioned resource would be “a shared research infrastructure that would provide AI researchers and students with significantly expanded access to computational resources, high-quality data, educational tools, and user support.” To study this vision, Congress specifically mandated that the task force’s 12 leaders be “technical experts in artificial intelligence or related fields.” While the Biden administration wisely devoted additional resources toward staffing an auxiliary economist and nontechnical policy expert, Congress’ intent was clear: Give the technologists the reins.
In this specific case, the explicit centering of technical leadership proved an odd fit. The AI Research Resource is a policy prescription aiming to address widespread concerns that scarce resources, market design, barriers to entry and the high costs of compute may prohibit those outside big tech from innovating in AI. While technical knowledge is absolutely required for such a study, the field that can best address the core of these problems is not computer science but economics.
By tasking computer scientists to solve an economic problem, Congress yielded a report lopsided toward what engineers do best: product design. It contains detailed explanations of how a resource might be administered, implementation plans for its services and a range of further “product” details. Meanwhile, only two paragraphs in the 104-page document are devoted to establishing the shape of the underlying problem. Without a grounding in economics, the task force was unequipped to research and scrutinize the very barriers to entry it was trying to solve. The resulting blind spot: Market research indicates that computing costs are, in most cases, not a substantial barrier to entry. Naively assuming the opposite, the report proceeds to recommend solutions to this seemingly nonexistent problem.
None of this lengthy discussion means that the AI resource is a bad idea, that the report shouldn’t be implemented or that the task force failed. The point is, technologists are only human, only know so much and—like everyone—have their limits. Expecting that AI technologists are suited to understand and solve every facet of every AI problem is simply asking too much of one group. Meeting AI’s unwieldy complexity requires pluralism; only with diversity in expert leadership can we understand the diversity of AI use, impact and design. Again, technologists are needed, and in most cases governance structures do not have enough technical capacity. Still, placing the full load of AI challenges onto the “AI experts” isn’t going to work. (As an AI scholar and computer scientist myself, I can say that with some confidence.)
Correcting Course
So, how do we proceed?
Unfortunately, addressing our instinctual deference to AI technologists doesn’t have a simple solution. Both the public and decision makers are going to continue to take seriously the ideas and prescriptions of tech experts like Sam Altman—as they should. Improvement, however, means recognizing and socializing the limits of those views, taking them with a heavy grain of salt and supplementing them with alternative and nontechnical ideas.
When it comes to government AI leadership, however, direct steps toward reducing technical deference are tractable. In legislation, Congress should not bind administrative hands by mandating technical leadership, as it did with the Research Resource Task Force. Likewise, when staffing administrative bodies and bringing in external advisers, decision makers should consider what nontechnical knowledge might inform AI challenges and build out a critical mass of those experts.
These recommendations, while actionable, should be seen only as a basic first step. Where they fall short is the bigger problem of engaging, educating and activating the nontechnical policymakers and staff equipped to contribute these diverse perspectives. Education and training are naturally one piece of this puzzle. Congress should consider legislation such as the AI Leadership Training Act, which would upskill and train federal supervisors and management to understand AI and its applications.
To create a pathway for applying this training and ensuring long-term impact, agencies should then consider incentives to promote deeper engagement. At the State Department, leadership is testing “designated technology tours” where diplomats are assigned to engage and study select critical technologies for several years, in exchange for employment record credits and possible preference toward advancement. Applying a version of this meritocratic model across agencies could incentivize a broad base of long-term interest and engagement. Further, tech-education credits on staff records could later be used to help staff AI-relevant decision-making posts when needed.
Naturally, none of these suggestions is a silver bullet. Further testing, iteration and ideation will be needed to foster the interdisciplinary, diverse expert talent and workforce we need to step up, influence and lead AI policy.
Looking Forward
Since the general AI regulatory push began in May, we have begun to see small glimmers of expert diversity. Vice President Harris has supplemented the administration’s Silicon-Valley-heavy kickoff event with a similar event featuring a slate of labor, civil rights and consumer welfare advocates. Hardly a comprehensive roster—but a welcome start. Meanwhile, Sen. Chuck Schumer (D-N.Y.) has announced tentative plans to balance previously tech-heavy AI hearings with a series of Senate-wide “AI Insight Forums” slated to bring in “AI Experts” as well as “advocates, community leaders, workers, [and] national-security experts.” Certainly, a positive nod toward expert diversity.
These efforts represent positive momentum, and hopefully these modest steps will kick-start engagement from nontechnical experts. Still, much, much more is needed on the part of government, and AI policy diversity must be a top priority. Thankfully, AI is still new, and our course can indeed be corrected. By acting now, we can guide governance toward a more diverse, robust policy reality.