‘Malinformation’ and the Wrong Truth
By accepting the concept of “misleading truth,” democratic society departs from its foundations
The most recently published portion of the Twitter Files revealed an interesting turn in the battle against fake news and disinformation. The experts who lead this fight, and who provide theoretical foundations and practical guidelines for social media, openly admitted that truth must be restricted if it is “misleading” truth. To describe this kind of truth, the experts use the word “malinformation.” This debate is no longer about free speech: It’s about the very epistemology of truth.
The ‘Censorship-Industrial Complex’
In his latest installment of the Twitter Files, journalist Matt Taibbi focuses on what he refers to as the “censorship-industrial complex.” After the 2016 election, Facebook and Twitter came under increasing scrutiny from the political establishment. To prevent potential regulatory backlash, social media companies began collaborating with state agencies and engaging a wide array of expert organizations for guidance on content moderation.
Delegating political and expert responsibility for content moderation to expert organizations recognized by the establishment was a smart move. These organizations would flag wrong content in the interests of democracy, but also on behalf of the establishment. By accepting their guidance in the fight against misinformation, social media platforms demonstrated compliance and secured their political and regulatory stability.
Taibbi notes that many of these expert organizations were funded by taxpayers through public programs and grants; media foundations like the Knight Foundation, among many others, pledged additional support. As a result, the fight against fake news and disinformation became both a noble cause and a lucrative business, firmly aligned with the state and with the political and corporate establishment. Its mission expanded over time: While originally focused on protecting democracy against foreign interference and domestic terrorism, the industry later took on tackling COVID disinformation and promoting vaccination, performing smaller tactical political tasks along the way, such as shadow-banning undesired topics and activists.
As a result, over the course of about five years, an alliance formed between the state, “disinformation” expert institutions and corporate-controlled social media platforms, closely resembling the propaganda model that Edward Herman and Noam Chomsky described in the 1980s. The only significant difference is that social media platforms, rather than traditional news media, now play the key role in this alliance. The arrangement can even be described in Herman and Chomsky’s language: Social media platforms need to secure a “license to do business,” political elites need to “manufacture” public consent and experts seek, well, to expand their validating power. The resemblance to the propaganda model, it seems, did not cross Taibbi’s mind, so he called it the “censorship-industrial complex,” implying a coalition of state and business as in the “military-industrial complex.”
Among the Twitter Files, Taibbi discovered what he referred to as the “ultimate example of the absolute fusion of state, corporate and civil society organizations”: the Stanford Internet Observatory, whose “Election Integrity Partnership” (EIP) was one of the most active “flaggers” in the Twitter Files. Taibbi notes that EIP was partnered with state agencies “while seeking elimination of millions of tweets.”
After the 2020 election, Taibbi continues, EIP was renamed the “Virality Project,” and the Stanford lab was onboarded to Twitter’s ticketing system, “absorbing this government proxy into Twitter infrastructure—with a capability of taking in an incredible 50 million tweets a day.” In one “remarkable email,” as Taibbi put it, the Virality Project recommended that social media platforms take action even against “true content which might promote vaccine hesitancy.”
Defining ‘Malinformation’
The “remarkable email” to which Taibbi points defines “true content which might promote vaccine hesitancy” as “viral posts of individuals expressing vaccine hesitancy, or stories of true vaccine side effects. This content is not clearly mis or disinformation, but it may be malinformation (exaggerating or misleading). Also included in this bucket are often true posts which could fuel hesitancy, such as individual countries banning certain vaccines.”
The term “malinformation” is a recent coinage. The word combines two language patterns: 1) mis- and disinformation, and 2) malware, “malicious software.” The theory of combating disinformation, it seems, became so nuanced and all-permeating that it required a category for information that is real, not fake, but is used to “inflict harm on a person, organization or country.” Or, as another definition goes, “malinformation is classified as both intentional and harmful to others,” while being truthful.
Describing true information as “malicious” already falls into a gray area of regulating public speech. It assumes that the public is gullible and susceptible to harm from words, and therefore needs authoritative oversight and the filtering of intentionally harmful facts. But the email quoted by Taibbi goes even further: It does not include intent or harm in the definition of malinformation at all. Rather, “malicious” here means truthful information that is simply undesired and “misleading” from the point of view of those who are leading the public somewhere. In other words, malinformation is the wrong truth.
The casuistry of the new definition reaches truly Orwellian heights when it offers “stories of true vaccine side effects” as an example of malinformation. When you see a pharmaceutical commercial on TV, you hear a long list of side effects; the disclosure is required by law in the name of public safety. But the malinformation experts decided that similar information about vaccines may be “exaggerating or misleading.” Taibbi emphasizes that none of those who led this “effort to police Covid speech” have health expertise. They simply extended their “misinformation” expertise to the promotion of vaccination.
The relationship here between speech regulators and citizens resembles that between adult and child. The younger and more vulnerable children are, the more likely adults are to hide from them truth deemed harmful or “misleading.” In political doctrines based on a “big state,” however, this adult-child power dynamic is reversed. It is not that children are dependent and incapable and therefore some truth must be hidden from them; it is that some truth must be hidden to keep them dependent and reliant on wise guidance. “Misleading” has become a buzzword among the discursive elites in their pursuit of protecting democracy from unsanctioned speech, though the idea rather serves to protect an institutional monopoly over information.
Of course, the partisan bias in combating misinformation has been known for years. However, never before have experts of such rank and influence over digital public speech openly acknowledged the supposed need to restrict true information. While the parties involved in defining misinformation may have exhibited bias, the notion of truth itself remained sacrosanct. Now we have seen this boundary crossed.
Is There a Future for Truth?
On the same day that Matt Taibbi published his series on the “censorship-industrial complex,” he and another Twitter Files journalist, Michael Shellenberger, were invited to testify before Congress. Contrary to their expectations, the hearing did not focus much on the fusion of state agencies, expert organizations and social media platforms in the large-scale policing of digital speech. Instead, some members of Congress concentrated on the journalists’ own personalities and motives.
This pattern has prevailed in the recent public debate on free speech. The essence of the matter seems not to bother people in the news media and other institutions of representative democracy. After the hearing, writer Wesley Yang expressed this sentiment: “But I may be mistaken about how successful the pivot has been toward celebrating censorship and thought control as moral imperatives. The echo chamber astroturfed here by ‘disinformation experts’ ... may have grown large enough to encompass the nation. Did a few years of saturating in a manufactured memeplex hostile to free speech fundamentally alter American character?”
Suppressing the wrong truth in the name of a noble cause could be viewed as a consequence of growing partisanship and polarization. I believe, however, that behind this political manifestation lies a much deeper epistemological shift, brought about by the transition from print literacy to digital orality.
Absolute truth did not exist before literacy. Right and wrong were defined by applying beliefs in action; the criterion of truth was practice, to use Marx’s wording. In the oral state of mind, the world is run by multiple supernatural forces: Whatever you do, you first need to reach an agreement with the supernatural patrons controlling this or that resource or activity. The same applied to relations between humans: Truth was established in all kinds of talking. Polytheism favors the relativity and practicality of truth, so truth can and must be negotiated, with the support of good offerings and skillful persuasion.
It took centuries of writing for humans to detach truth from immediate practical outcomes. Writing separated “the known from the knower” and allowed reality to be expressed in abstractions. The symbols of writing taught humans that the essence of things could exist independently of practical validation: The essence of things has its own true value in the space of thought.
The symbols of early writing, pictograms, still bore a resemblance to the objects they represented, but the alphabet severed any ties with reality. Letters have no physical resemblance to the things they signify. The alphabet gave the mind a pattern of absolute abstraction. Robert Logan of the University of Toronto, with the contribution of Marshall McLuhan, suggested that the alphabet played a significant role in the emergence of codified law, monotheism, abstract science, deductive logic and individualism. He called it the alphabet effect. The idea of absolute truth was at the core of all these alphabet-induced phenomena.
The alphabet ended orality. Plato rebelled against Homer and excluded truth from the transient experience of the short-sighted prisoners of the cave. The truth of the literate mind is absolute and cannot be negotiated in application. It belongs to nobody and is inscribed in the Book: the Scripture, the legal code and the textbook. Even newspapers bore the residual gleam of this Platonic light. The literate mind may not be able to reach the truth but knows of its unequivocal existence.
Thus, the shift between these two states of mind, from orality to literacy, changed the concept and the perception of truth. We are living through a similar shift, but now in the opposite direction, as literacy fades under the pressure of electronic and now digital media. First, electronic media displaced the linear, sequential and therefore deliberate uptake of information typical of text with the instant and panoramic perception of news and opinions provided by radio and television. A rational and structured view of the world yielded to an emotional and impulsive one. Then digital media made instant self-expression and simultaneous communication available to billions of people, creating an unprecedented hybrid of personal and mass communication: digital orality.
In fact, a similar return to the “oral state of mind” described by Walter Ong, or the “retribalization” described by Marshall McLuhan, had occurred before. Logan notes that, after the fall of Rome, the Roman civil administration based on law was replaced by a new form of government based on the older oral traditions of the Germanic tribes. In this system, subjects pledged their loyalty to the tribe and warlord through a verbal oath of allegiance. The codified, systemic law of literate culture yielded to social regulation based on personal loyalty. From being at the core of the law, truth was demoted to serving the transient everyday needs of the tribe and became, once again, something to be negotiated or taken by force.
The epistemological superiority of absolute truth was fully inaugurated in Europe only after the printing press reshaped minds and society. Since then, law, science and rational logic have come to dominate the West, all based on the imperative of absolute truth. The United States was built upon this epistemological principle.
But as the Gutenberg Galaxy comes to an end, digital media retrieve the simultaneous, oral-like mode of communication. The internet has emancipated authorship, granting everyone the ability to debate public issues. Such an environment cannot help but challenge the monopoly of knowledge, leading logically to what Martin Gurri called the “crisis of authority” in his book “The Revolt of the Public.” The monopoly of absolute truth, already challenged by electronic media in alliance with postmodernist philosophy, has now finally fallen.
Digital orality retrieves the preliterate epistemology of truth. Truth is negotiated again. Trading truth for practical outcomes has become increasingly prevalent in public discourse, not only in politics but also in areas once governed by science, such as healthcare.
One could argue that the coining of a term equating undesired truth with “misleading information” by a relatively unknown expert is not significant. Others might counter that this expert was in charge of a machine that processed “an incredible 50 million tweets a day,” as Taibbi put it, in a system controlling public discourse without any public oversight (until Musk took control). But even this is not the point.
The point is that the term “malinformation” openly admits the malevolence of some truth within a system allegedly designed to fight untruth. I see it as a symptom that the epistemological shift from absolute truth to negotiated truth is nearing completion. The political turmoil we see is only a surface-level manifestation of this shift: The cultural and generational consequences will run much deeper. The next generation will not challenge or bypass absolute truth; they simply will not know what it is.