Protecting Children From Social Media Is More Nuanced Than It Seems
Tech fears are overblown, and critics are focusing on the wrong problems
By Patricia Patnode
Adjusting to new technologies is often difficult, leading otherwise thoughtful people to overreact to these developments with over-the-top condemnations and ill-considered prohibitions. A recent example of this phenomenon is Christine Rosen’s National Review article entitled “Ban Kids from Social Media,” which indulges in the kind of techno-panic we have seen before with the advent of television and video games—and even books when they became widely available in the 1800s.
A few months ago, Wall Street Journal columnist Peggy Noonan made nearly the same arguments as Rosen, and Vox co-founder Ezra Klein has repeatedly brought up the evils of the digital age and the particular difficulty of protecting children, illustrating how prevalent this fear is. But this social media panic is overblown, and critics tend to focus on the wrong problems. Policymakers should more carefully consider the source of social media’s dangers before intervening to regulate its use.
This panic is not just characteristic of commentators and pundits; it is also painfully obvious when I talk to my parents and grandparents about social media. My 85-year-old grandparents still do not have an internet connection in their house, and they strongly believe that smartphones are destroying our society. There certainly has been a cultural shift as smartwatches and smartphones have become completely integrated into our daily lives. But I’m sure my great-grandparents had similar concerns when televisions and radios were introduced.
Meanwhile, my mother thinks that my friends and I “spend too much time scrolling on Facebook,” when in reality no one my age (Generation Z) has checked their Facebook account in months. Instead, we spend most of our time on our cell phones texting one another in group chats or using other apps such as BeReal and Snapchat. Both apps are intended for direct person-to-person communication, which is experientially much different from independently scrolling on a website. It’s closer to mailing very brief letters, which we would consider to be a more endearing medium of communication. But her criticism is well intentioned and not entirely off base. Screen time for teens averages around nine hours a day—in other words, most of their waking life—and to someone who didn’t grow up with this type of communication style, it is certainly a shock. But whether this new type of communication is good or bad is not yet clear.
Regulating Social Media Is Hard
One of the major difficulties with regulating social media is that it’s unclear what that term actually means. What distinguishes any website or blog with a comments section from a social media network? Rosen’s article categorizes Discord as social media. But while the website certainly connects people, it’s more like an internet-based group chat than a Facebook-style newsfeed of independent posts that don’t require direct conversation with other people on the site. For Rosen and those who think like her, does every group-messaging function—including texting—qualify as social media?
Further, digital platforms with similar functions are sometimes categorized differently. YouTube is generally not thought of as social media, even though, except for the typical length of its videos, it is similar to TikTok, which has been labeled a social media app. (The new YouTube Shorts feature makes the similarity even more striking.) But TikTok denies that it is a social media site and instead prefers the term “entertainment platform.” Media and communication programs evolve quickly and in ways that regulators and consumers cannot anticipate. If lawmakers somehow come up with a coherent definition of social media, web developers will simply be incentivized to design a new communications app that slightly evades that definition.
On a related regulatory topic, the Federal Trade Commission has struggled to regulate undisclosed ads by social media influencers; part of the trouble is identifying what counts as an undisclosed ad. The agency produced an updated guidance document, “Disclosures 101 for Social Media Influencers,” in 2019. But journalists and YouTubers are still cataloguing the ongoing question of what legally must be disclosed versus what doesn’t have to be. Undisclosed paid advertisements are impermissible, but what about a brand, or brand founder, that pays for an influencer’s vacation a few years before that influencer reviews the brand’s product on their platform?
Regulators Are Focused on the Wrong Problems
Despite the concerns of media pundits and of older generations, there is a major disconnect between the perceived harm of social media and its actual dangers. This disconnect is fine, and can even be a little funny, but only as long as the people crafting social media regulations are fully aware of what’s actually happening. Unfortunately, that doesn’t seem to be the case.
Bothersome behaviors and bad manners, such as looking at one’s phone throughout a family meal or consistently getting in trouble at school for having a cell phone out, are not sufficient evidence that social media is causing societal harm. Extremely passionate photographers or readers could disrupt social situations or withdraw into their own worlds just as easily as a 13-year-old with an iPhone. Understanding the actual problem is essential to prescribing an effective solution, whether through legislation or by tech companies themselves. There are many real issues with social media, such as data privacy, bullying and general safety, particularly for children. But most regulations (both current and proposed) seem to be focused on other, much less serious concerns.
One problem is that social media legislation often deals with the perceived phenomenon of “algorithm addiction,” a popular term that popped up in a (now-dead) California bill that, if enacted, would have made social media companies liable for causing children to become addicted to social media. Such a law would further empower the false narrative that people, and particularly children, lack agency when interacting with algorithms. We see this theory underlying Rosen’s claim that “Instagram is harming teen mental health” and in the “Facebook whistleblower” Senate hearing that spawned weeks of sensational headlines about how social networks are causing teens, particularly teen girls, to self-harm or commit suicide. But the reality is that the research into this causal link is still ongoing, something even some social media skeptics acknowledge. So far, the claim that there is a direct link between Instagram use and teen depression is simply not a proven fact.
There certainly are some negative mental health effects. For example, teenagers sensitive to eating-disorder triggers may be better off not following certain influencers or accounts on Instagram, but it isn’t possible to uniformly ban all eating-disorder-triggering accounts because every person’s triggers are unique. Moreover, the same subjective eating-disorder triggers can arise from watching television or reading magazines. People’s Instagram feeds are not hypnotizing them to keep scrolling, just as the tastiness of potato chips does not force them to keep eating—that is, unless they have an underlying addictive personality. As with junk food, it is the job of parents—not the government—to teach their children how to maintain a healthy relationship with social media and to seek psychological help when necessary.
Finally, while the negative effects of social media are highlighted in the news, the positive effects are often downplayed. More and varied paths of communication help us to make new friendships as well as maintain relationships that otherwise would have faded. And, especially during the pandemic, these outlets have helped us combat social isolation.
What Is the Solution?
Social media companies want users to enjoy themselves on their platforms because that is how they make money, so they have a strong incentive to foster a positive user experience. For example, in response to user demands, Instagram and Twitter brought back chronological ordering of their feeds as an option in the app settings. So the market itself may do much to correct the feared negative consequences of social media, especially in the long run.
Regulating social media companies, forcing them to “fix” their algorithms and imposing a minimum age for users will not decrease web traffic among teenagers; it will merely encourage minors to break the law. Instead, we need rational, education-centered approaches to solving social media problems, such as the law recently enacted in Delaware requiring online literacy courses in schools to better equip children to use the internet safely. These approaches not only keep governments from falling into paternalism, but they also address legitimate consumer privacy and security concerns without forcing companies to drastically alter services that billions of people use and benefit from every day.