Twitter Bars the Gate
Jack Dorsey’s resignation is another sign that the established web has been co-opted by the elite. But a new, wide-open web is emerging
On Nov. 30, Twitter founder Jack Dorsey abruptly resigned as CEO of the company. In a resignation letter tweeted to the world, Dorsey claimed that he was granting adulthood to his corporate offspring: Twitter no longer needed to be “founder led” or exposed to “a single point of failure.” He closed on a peculiar note of self-congratulation and a possible swipe at rival Mark Zuckerberg of Facebook: “there aren’t many founders,” he boasted, “that choose their company over their own ego.” And with that, he was gone.
Some observers—Matt Taibbi and Mike Solana among them—have suggested that Dorsey’s departure will accelerate the decline of Twitter from an open platform to a schoolmarmish protector of elite dogmas. If true, this would be part of a larger trend. The digital landscape, once the information equivalent of the Wild West, is now policed by a handful of giant platforms—and all, at the moment, seem focused, to the point of obsession, on controlling what the public should be allowed to say online.
Though criticized for the many one-sided decisions made at Twitter during his tenure—booting out Donald Trump while tolerating members of the Taliban, for example—Dorsey at least betrayed something like a bad conscience about the nakedly political nature of these choices. Referring to a Capitol Hill appearance Dorsey made earlier this year, Solana portrays him as a zealous but ineffective advocate of free speech:
Jack Dorsey appeared before Congress looking like a haggard, bearded sage from the future, fallen back in time from some dystopian hellscape to save us from ourselves. Not only did he not trust Congress with the power of censorship, he didn’t trust himself. In fact, he argued, it was a power that should literally not exist. Finally, he declared under oath that he was presently attempting to make sure censorship of the sort being considered by Congress could … never be considered again.
That outlook placed him at odds with the censorious monks who currently run Twitter in fact, if not in name. All we need to know about the state of our information institutions is that Dorsey, the company’s founder, didn’t stand a chance of winning that fight.
Preventing ‘Physical Harm’ Through Social Media
The day after Dorsey’s departure, Twitter announced a more restrictive policy for what would be allowed on the platform. Because of “growing concerns about misuse of media and information,” every person depicted in a video or still image would have to give permission before it could be posted. “Sharing personal media, such as images or video, can potentially violate a person’s privacy, and may lead to emotional or physical harm,” proclaimed the earnest but anonymous rule-maker on the company blog. “The misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities.”
The statement failed to specify whose “growing concerns” it was addressing—possibly because the answer was self-evident. The political and media elites have always loathed the digital dispensation. They consider the web to be the mother of lies. These people are now in control and in full reactionary mode. Privacy and “harm” have become important to them largely as a means of silencing certain forms of expression.
The pose of defending the helpless is wholly hypocritical. From the first days of the web, visual persuasion has worked disproportionately against power—sometimes even against injustice. As early as 2011, the Syrian opposition would post powerfully moving videos of young men slaughtered by the regime during street protests. Similarly, the public in revolt has often signaled its political strength through images of immense anti-government crowds; the wave of protests in 2019 furnished many such examples. Today, tweeting images of a million-person march presumably would require a million signatures.
Why would Twitter move to reduce the public’s ability to post disturbing and controversial content? I think the question answers itself. Twitter wants to tell a specific story about the world. On some subjects, such as identity and politics, the company aspires to sound more like The New York Times and less like the Tower of Babel. To get there, it needs tighter control over its own content. If I’m guessing correctly, the policy on visuals is likely to be the first of many steps in that direction.
The targeting of “personal” or “private” media is telling. On a straightforward reading, the policy appears to exempt “public” media—in other words, government and the news business—from added privacy requirements. If an image has already been “covered by mainstream/traditional media,” we are told, privacy restrictions can be waived. It may be that the reactionary progressives at Twitter dream of pushing the public out of the coverage of events, to clear a spot where the old elite media can dwell in the splendid style of 1965. (In an early misfire of the policy, however, 12 Washington Post reporters were deplatformed—all were soon restored with full honors.)
But why would the company risk losing users by favoring its own competition? That question also answers itself. Clearly, Twitter intends to shed users of a certain hue of opinion. It would be happy to drive Trump supporters out the door. That would appease the Zoomer zealots in its own workforce and flatter the journalistic types who, with Trump gone, are the loudest, most self-important voices on the platform. The relationship between digital platforms and mass media, in any case, has never been competitive: It’s parasitical. Nothing would change on that score.
The Future of Our Information Oligarchy
We stand at an immense cultural distance from the “Declaration of Independence of Cyberspace” of 1996: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind.” The early heroes of information technology were in part hippies, in part pioneers who imagined “cyberspace” as a frontier beyond the reach of power. Their outlook propelled innovation through the early days of Web 2.0, with its global proliferation of wikis and blogs. Then the frontier seemed to close. Google, Facebook, Amazon, Apple—all stretched barbed wire along their extensive borders until scarcely any open land remained.
It is worth noting, for the sake of accuracy, that for a time the masters of the web behaved as a true aristocracy. Google made vast amounts of information available to ordinary people across the world. From the Arab Spring onward, dozens of anti-establishment protests were enabled by human connections made on Facebook. Facebook’s younger sibling, Twitter, even earned an entry on Wikipedia for “Twitter Revolutions.” Until 2016, the great platforms allowed the public to move fast and break things. Reflexive silencing of opinions offensive to elite tastes began only after Trump’s election.
Aristotle observed that every aristocracy is liable to degenerate into an oligarchy: the rule of wealth and self-interest. That degeneration occurred with astounding rapidity in the domain of information. The old ideal was to “organize the world’s information and make it universally accessible and useful”; the “growing concerns,” as we have seen, are about “the misuse of media and information.” The weary giants turned out to be the digital corporations, eager to preserve their profits by arranging a discreet surrender to the flesh and steel of power.
If, in bending to elite dogmas, the platforms silenced political opinions of every kind, the effect would be the equivalent of noise. However, the silencing has been entirely lopsided. I believe a case can be made for pushing the mute button on Trump, who has behaved with nihilistic abandon since the 2020 election. But tens of millions of Americans voted for Trump and share many of his opinions. Are they also beyond the pale? If so, where is the line to be drawn? And who, exactly, gets to decide?
By fixating on the “misuse” of information, the platforms have assumed the mantle of papal infallibility. The predictable consequence has been that a long list of reports silenced as false were later found to be true or at least plausible: the famous Wuhan laboratory leak origin story for COVID-19, for one, or the obscure controversy over Hunter Biden’s laptop. The all-too-human reality is that truth must forever be a matter of perspective. Silicon Valley has bowed to the perspective of Washington, D.C.—of old white men like Joe Biden and Anthony Fauci. That, of course, is the dream utopia of the reactionaries.
Yet a different interpretation can be placed on the same set of events. The web, like the universe, keeps expanding. There’s always a beyond—always a frontier. Beyond the gated and guarded spaces of Web 2.0 lies the open prairie of Web 3.0, sheltered by decentralized protocols like Bitcoin and Ethereum. According to the cleverest people I know on the subject, Web 3.0 is what you get when you apply all the lessons of its predecessor and build an environment that makes it structurally impossible for the few to dominate the whole. It’s the permanent frontier.
This is where the tech visionaries are headed. Jack Dorsey, hipster lord of the last age of the web, may have resigned from Twitter—but he’s scheming to erect a platform for decentralized Bitcoin exchange. Zuckerberg is betting on the metaverse, a mysterious realm powered by crypto. They and others whose names we don’t yet know are opening up an untamed continent teeming with exotic fauna and disruptive potential—where, if history is any guide, the rest of us will eventually follow.