We Really Need To ‘Have a Conversation’ About AI ... or Do We?
What we REALLY need is to talk about the ‘concern trolling’ that is standing in the way of AI innovation
By Adam Thierer
Last month, New York Times columnist Kevin Roose wrote a piece titled “We Need to Talk About How Good A.I. Is Getting,” even while admitting, “It’s a cliché, in the A.I. world, to say things like ‘we need to have a societal conversation about A.I. risk.’”
He doesn’t even know the half of it. If you’ve read enough essays, books or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over:
1) “We need to have a conversation about the future of AI and the risks that it poses.”
2) “We should get a bunch of smart people in a room and figure this out.”
Who can possibly disagree with those two pearls of wisdom? Well, I can—because they have become largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front.
I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and (b) the fact that people making such statements apparently have not spent much time investigating the remarkable number of very serious conversations about AI issues that have already taken place or are still ongoing.
In fact, it may well be that we already have too many conversations going on about AI issues, and that the bigger problem is instead one of better coordinating the important lessons and best practices we have already learned from those conversations.
On “Having a Conversation” About Emerging Technologies
No new insights. Critics consistently throw up barriers to AI innovation through their handwringing. Image Credit: XKCD
Back in 2013, I wrote an essay asking, “What Does It Mean to ‘Have a Conversation’ about a New Technology?” It was spurred by this funny XKCD comic, which depicted two figures discussing the merits of a new technology. The comic suggested that if you want to pretend to sound wise about a new technology, just begin by saying “we need to have a conversation about [this new technology] before ...” and then fill in the blank with the most discussed technology du jour.
That old cartoon always made me laugh—I even printed it out and stuck it on my wall (you know, as a conversation starter!). It accurately captured what I had already noticed then: the astonishing prevalence of some variation of that line in books, blog posts, editorials and tweets—and it’s just as ubiquitous today. Everywhere you turn, someone is suggesting that “we need to have a conversation” about some sort of technology, and increasingly, they are referring to AI. Tech critics are also fond of complimenting each other as “thoughtful” for raising the idea of having such conversations.
I have several questions about that “we need to have a conversation” aphorism, and whenever I hear someone utter the line in public, I ask them to answer a few questions:
What is the nature or goal of that conversation?
Who is the “we” in this conversation?
How is this conversation to be organized and managed?
How do we know when the conversation is going on, or when it is sufficiently complete such that we can get on with things?
And, most importantly, aren’t you implicitly suggesting that we should ban or limit the use of that technology until you (or the royal “we”) are somehow satisfied that the conversation is over or has yielded satisfactory answers?
When I raise such questions, it often evokes a very strong response. Some people sneer and act as if I’m an amoral monster for even having the audacity to ask them. But what likely makes them most uncomfortable is that I’m forcing them to finish their sentences when they probably do not want to. As I noted in my earlier essay, “I can’t help but think that sometimes what the ‘we-need-to-have-a-conversation’ crowd is really suggesting is that we need to have a conversation about how to slow or stop the technology in question, not merely talk about its ramifications.” That is, the presumption inherent in most calls to have a conversation about a new technology is that it should be considered guilty until proven innocent.
Rise of the “Concern Trolls”
Explosion. Recently, there's been a big increase in “concern trolling” around high-tech issues like AI. Image Credit: Dictionary.com
Daniel Castro, vice president at the Information Technology and Innovation Foundation, recently wrote about the explosion of “concern trolling” around high-tech issues and how it is making rational discussions about tech policy more difficult. “Concern trolls are people who act like they support a particular belief, but really work to oppose it,” he says. “They pretend to hold the opposing view so that their criticism, often masked as ‘concerns,’ carries more weight.” He elaborates:
Concern trolling can be a successful tactic because it stalls serious discussions from moving past the questions and concerns of the trolls. Concern trolls can repeatedly raise the same questions, ignoring valid answers to make it seem like their concerns are justified. Or they raise concerns about a small problem but act as if it is more significant than it really is. In tech policy, such concern trolling has become routine.
Castro recommends not engaging directly with the concern trolls except to call out their behavior: “To be effective, technology supporters need to ensure they and others are familiar with the tactics of concern trolling so that when it occurs, they can identify it and call it out, thereby elevating tech policy discussions beyond the concerns of the trolls.”
I generally agree with Castro, but one problem with the “don’t feed the trolls” strategy is that critics are often raising some legitimate concerns about many emerging technologies that do require ongoing deliberation. For innovation defenders, our position cannot be that we shouldn’t be having discussions about some of these matters. (We also need to be careful about labeling them “trolls,” because that term carries a lot of extra baggage and could undermine good-faith negotiations with others over important policy issues in the future.)
On the other hand, everyone needs to be mature enough to agree that some “conversations” run the risk of becoming never-ending, unsatisfying exercises because there are no easy answers associated with the countless complexities of computational systems and processes. There are both known unknowns and unknown unknowns all around us with AI and algorithmic systems.
Unfortunately, when pushed on this point, all those uncertainties lead the “let’s-have-a-conversation” crowd to advance some variant of the Precautionary Principle for AI and computational systems. In other words, they use this radical uncertainty as a rationale for sweeping preemptive controls on emerging tech and advocate what two leading law professors unironically call “unlawfulness by default” as the standard for many AI systems (i.e., a regulatory standard that would make all new algorithmic technologies illegal until some faceless bureaucrat gets around to considering whether they should ever see the light of day).
That absolutely must not be our default because it would be a disaster for society, significantly undermining the many benefits that could flow from computational systems. Just imagine if every AI-driven recommender system you already enjoy in your phone, voice-activated assistant or favorite shopping site was considered forbidden by law until it went through years of regulatory wrangling for permission to operate. Now multiply that regulatory snafu across the entire economy for every future algorithmic application or process. It’d be the death of digital innovation.
“Get the Smart People in a Room”
Some folks in the “let’s-have-a-conversation” crowd at least are willing to admit the limits of their own crystal ball-gazing abilities, and a few of them are even willing to acknowledge some of the costs associated with prior restraints on innovative activities. This often leads them to put forth their other pithy pearl of wisdom: “We need to get a bunch of smart people in a room and figure this out.”
Well, now why didn’t someone think of that before? Why haven’t we gotten the Very Smartest People together in a room to get this done?
Sorry to be snarky, but this line similarly angers me because it is as remarkably ambiguous as the “let’s-have-a-conversation” line. The same “who, what, where, when and how” questions are equally applicable here, perhaps even more so.
But there’s a bigger problem with the “get-the-smart-people-in-a-room” argument: We already have had tons of the Very Smartest People on these issues meeting in countless rooms across the globe for many years. In an earlier essay, I documented the astonishing growth of AI governance frameworks, ethical best practices and professional codes of conduct: “The amount of interest surrounding AI ethics and safety dwarfs all other fields and issues. I sincerely doubt that ever in human history has so much attention been devoted to any technology as early in its lifecycle as AI.”
To better track the volume of activity going on in this space, Gary Marchant and a team of scholars at Arizona State University College of Law undertook an enormous effort to count the number of official AI efforts currently underway, and identified an astonishing 634 proposed governance frameworks that were formulated just between 2016 and 2019 by governments, academic groups, NGOs and various companies and major trade associations. All these efforts are aimed at addressing the so-called AI alignment problem of bringing algorithmic systems in line with important human needs and values, such as safety, security, privacy, fairness, nondiscrimination and more. And that’s 634 AI conversations in a three-year period alone: There have been many other major AI governance efforts launched since 2019.
Critics can always argue that these efforts don’t go far enough, but they cannot claim that there are no “conversations” happening around AI governance today or that the Very Smartest People aren’t already engaged in them. The better critique would be premised on coordination challenges: How do we unify and refine the important best practices and governance principles in all those efforts?
I also discussed that problem at greater length in an earlier essay on polycentric governance of AI: “While greater coordination of all these AI ethical best practice efforts will be needed going forward, it doesn’t necessarily need to come in the form of heavy-handed, top-down, one-size-fits-all regulatory regimes—domestically or globally.” More specifically, as I argued in another article, the goal should be “to refine and improve soft law governance tools, perhaps through better voluntary certification and auditing regimes to hold developers to a high standard as it pertains to the important AI ethical practices we want them to uphold.” This more bottom-up and flexible approach to governance represents a better way to balance safety and innovation for complicated, rapidly evolving computational technologies.
Returning to Kevin Roose’s recent Times column, he observed how “[t]here are already plenty of Davos panels, TED talks, think tanks and A.I. ethics committees out there, sketching out contingency plans for a dystopian future. What’s missing is a shared, value-neutral way of talking about what today’s A.I. systems are actually capable of doing, and what specific risks and opportunities those capabilities present.” But that isn’t quite right. Again, there are countless conversations going on right now that are digging deep into those exact issues in a “shared, value-neutral way.” Some critics have just apparently not bothered reading through the literal thousands of pages of materials that those conversations have produced. I’ve spent years following all those “conversations,” and I am just barely able to keep up with them all.
The Conversations “We” Have About Tech Every Day
Finally, the ultimate problem with the “let’s-have-a-conversation” logic is that it ignores the fact that the most important conversations society has about new technologies are those we have every day when we all interact with those new technologies and with one another. Wisdom is born from experience, including activities and interactions involving risk and the possibility of mistakes. This is how progress happens.
That might sound cold and unthoughtful to some of the academics and ethicists who believe that they (or, again, the royal “we”) can adequately foresee and preemptively address every potential risk before allowing a new technology into the wild. But in a sense, AI critics want something akin to a classic 2005 parody from The Onion: “Everything That Can Go Wrong Listed.” The article joked how “[a] worldwide consortium of scientists, mathematicians, and philosophers is nearing the completion of the ambitious, decade-long project of cataloging everything that can go wrong.” The goal of this “project” was to create a “catalog of every possible unfortunate scenario” such that, “every hazardous possibility will be known to man.”
Of course, building such a list isn’t possible. Again, there are countless unknowns in the world of emerging technology, especially as it pertains to algorithmic systems and computational technologies. Demanding that all such hypothetical problems be addressed preemptively means that humanity will be denied the many potential benefits of these technologies if they are kept off the market. I wrote an entire book on this precise point: “[L]iving in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”
We won’t ever be able to “have a conversation” about a new technology that yields satisfactory answers for some critics precisely because the questions just multiply and evolve endlessly over time, and they can only be answered through ongoing societal interactions and problem-solving. But we shouldn’t stop life-enriching innovations from happening just because we don’t have all the answers beforehand.