Quite a Fall for Digital Tech
For social media and digital tech companies, a bad fall may portend an even worse 2023
By Jonathan Cannon and Adam Thierer
It has been a tumultuous couple of months for America’s information technology companies and social media platforms. Amazon, Meta, Microsoft, Salesforce and other tech firms have announced major job cuts, and markets have soured on tech stocks. Meta alone suffered a staggering 74% loss in stock value from the beginning of the year through November. Meanwhile, the stunning collapse of cryptocurrency exchange FTX has raised new questions about the future of crypto markets and prompted policymakers to take a more active interest in reining in that emerging marketplace.
Amidst this misery for digital tech, Elon Musk’s takeover of Twitter has led to turmoil at the popular social media site. Musk ushered in mass layoffs and confusing content moderation policy changes that caused uncertainty and concern for users. He even got into a few notable spats with lawmakers like Sen. Ed Markey (D-Mass.), who threatened that Twitter would “pay a price” if Musk did not fix the site’s verification system to his liking. “Fix your companies. Or Congress will,” Markey warned. Musk laughed off Markey’s concerns on Twitter, telling the senator that his social media account “sounds like a parody.”
But these calamities may be only a prelude to even bigger bad news for social media and digital tech companies. That’s because next year, the Supreme Court will consider two cases that threaten to further upend America’s information technology sector and severely damage its future prospects.
A Supreme (Court) Reckoning
In October, the Supreme Court announced it will be considering Gonzalez v. Google, a case that could significantly narrow the scope of Section 230 of the Telecommunications Act of 1996, the law that shields digital platforms from liability for content posted by third parties. The Gonzalez case involves a family that sued Google, alleging that the algorithm on YouTube’s platform boosted terrorist content and led to the death of Nohemi Gonzalez, who was murdered in an ISIS terrorist attack in Paris in 2015. The family argues that Google knowingly permitted ISIS to post hundreds of videos on YouTube that incited violence, potentially recruiting supporters to join the terrorist group. They also contend that Google’s algorithm helped radicalize those interested in ISIS’ content by recommending similar videos and accounts that helped spread ISIS’ message.
Citing Section 230, Google moved to dismiss the claim. Both the district court and the Ninth Circuit Court of Appeals agreed and dismissed the case, but the family then appealed to the Supreme Court, which agreed to hear it. The court is now considering the following question: “Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?”
Concurrently, the high court is reviewing another case, Taamneh v. Twitter, that raises platform liability issues similar to those in Gonzalez. While the facts of the two cases are similar, the lower court did not address Section 230 directly, focusing instead on whether Twitter could be held liable under the Anti-Terrorism Act. But because the case touches on the First Amendment, and because of its factual similarities with Gonzalez, the court could ultimately address Section 230 in its analysis.
If the court weakens liability protections for digital platforms in either or both of these cases, the ramifications will be profoundly negative. While many critics today complain that the law’s liability protections have been too generous, the reality is that Section 230 has been the legal linchpin supporting the permissionless innovation model that fueled America’s commanding lead in the digital information revolution. Thanks to the law, digital entrepreneurs have been free to launch bold new ideas without fear of punishing lawsuits or regulatory shenanigans. This has boosted economic growth and dramatically broadened consumers’ information and communications options.
Section 230 sent a signal to talent and investors, too. Because permissionless innovation became the lodestar of American information technology policy, skilled immigrants and global venture capital flowed to the U.S., which offered the best and most open environment for technology entrepreneurs. In essence, Section 230 lured the best and brightest to our shores, and investors followed the talent.
In terms of global competitive advantage, this is the most significant—and completely lopsided—economic success story of modern times. American tech companies have crushed the competition in virtually every segment of the information economy. It is hard to even name a European company that is a leader in the info tech world. That’s due in no small part to the European Union’s endless layers of bureaucracy and compliance requirements for data-driven innovators.
While China has made significant strides in recent years, its firms—with the exception of TikTok—remain a distant second in most digital technology fields. It is not hyperbole to conclude that America won the first round of the digital revolution, and the permissionless innovation model that Section 230 fostered was the secret sauce that made it happen.
Turning Back the Clock
This remarkable policy success story could now take a dark turn in Congress and the courts, potentially opening the door to greater politicization and litigation over online speech and digital commerce. If some policymakers get their way, America could wind the clock back 25 years and return to the analog era’s disastrous, overly bureaucratic regulatory regime, which required the blessing of federal, state and local bureaucrats before anything could get done.
Consider what it potentially means for the next generation of online innovators if these court cases go badly and Section 230 is scaled back or gutted. Section 230 has been a legal cornerstone of the entire ecosystem. None of the large-scale platforms we depend on for our online experience would have gotten off the ground without its protection. As University of Arizona law professor Derek Bambauer noted in a recent post for the Brookings Institution, large platforms like Facebook and Twitter have over a billion active users. With that many users adding new content every second, there is no way platforms could manually curate everything submitted. As a result, they have developed and implemented complex algorithms to sift and sort content. And while these algorithms are improving, they still make mistakes constantly.
More importantly, these platforms have relied on being able to host third-party content without fear of opening a Pandora’s box of private litigation and endless challenges from governments. If these protections are removed, platforms will be forced to significantly ramp up their moderation practices to reduce the risk of suits from zealous litigants. Besides the chilling effect this will have on speech, it will also erect a cost-prohibitive barrier for smaller entrants that lack the resources to field an army of content moderators to find and eliminate undesirable content.
An Avalanche of New Permission Slips
This bad news on the judicial front is matched by growing legislative threats on other issues of importance to the digital technology sector. For instance, Congress has recently been considering a slew of bills that would impose new regulatory obligations on digital technology companies, including the Kids Online Safety Act, the Journalism Competition and Preservation Act, the American Innovation and Choice Online Act and the Open App Markets Act, among others. These bills would be the death of permissionless innovation in the U.S. tech sector because innovators would be forced to navigate a dense thicket of new regulations before launching products or making new business deals.
There has also been an explosion of new regulatory proposals for artificial intelligence and machine learning, with federal, state and local lawmakers floating preemptive controls on algorithmic technologies. Under the guise of “algorithmic fairness,” these new regulations would send in the code cops to micromanage computing and software decisions at every juncture. These proposals could have major geopolitical ramifications, since AI is becoming the next major global technology battleground and competition with China, in particular, is intensifying. Threats to AI would also significantly impair platforms’ ability to moderate content effectively, further hamstringing the industry should Section 230 be weakened or removed.
Even in the absence of formal regulation on these fronts, we can expect even more of the political “jawboning” we’ve seen over the past few sessions of Congress. Elon Musk’s spat with Sen. Markey was amusing to many, but it will likely just encourage even more political strong-arming of tech CEOs in coming years.
If America’s digital sector gets kneecapped by the Supreme Court, or if new regulations or legislative proposals scale back Section 230 protections, it will be significantly more difficult for U.S. firms to continue to lead in the development and commercialization of new technologies. Weakening Section 230 will hamstring platforms’ ability to curate content effectively, likely leading to tighter scrutiny and less flexibility in the content they allow or disallow. If anything, abolition of Section 230 would likely result in less speech, not more. Indeed, if Section 230 protections are eliminated, the combination of accumulating liability threats and other regulatory burdens will discourage investment in next-generation computational services and platforms and undermine the nation’s technological competitiveness.