It no longer makes sense to speak of free speech in traditional terms. The internet has so transformed the nature of the speaker that the definition of speech itself has changed.
The new speech is governed by the allocation of virality. People cannot simply speak for themselves, for there is always a mysterious algorithm in the room that has independently set the volume of the speaker’s voice. If one is to be heard, one must speak in part to one’s human audience, in part to the algorithm. It is as if the US Constitution had required citizens to speak through actors or lawyers who answered to the Dutch East India Company, or some other large remote entity. What power should these intermediaries have? When the very logic of speech must shift in order for people to be heard, is that still free speech? This was not a problem foreseen in the law.
The time may be right for a legal and policy reset. US lawmakers on both sides of the aisle are questioning Section 230, the liability shield that enshrined the ad-driven internet. The self-reinforcing ramifications of a mere 26 words—“no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider”—have produced a social media ecosystem that is widely held to have had deleterious effects on both democracy and mental health.
Abraham Lincoln is credited with the famous quip about how you cannot fool all the people all the time. Perhaps you cannot, but perhaps the internet can. Imperfect speech has always existed, but the means and scale of amplification have not. The old situation cannot be the guide for the new.
Section 230 was created during a period when policy was being designed to unleash internet innovation, thereby maintaining America’s competitive edge in cyberspace. The early internet was supported by a variety of friendly policies, not just Section 230. For instance, sales arranged over the internet were often not taxed in early years. Furthermore, the internet was knowingly inaugurated in an incomplete state, lacking personal accounts, authentication mechanisms, commercial transaction standards, and many other needed elements. The thinking was not only that it was easier to get a minimal design started when computing power was still nascent, but also that the missing elements would be addressed by entrepreneurs. In effect, we were giving trillion-dollar gifts to parties unknown who would be the inevitable network-effect winners.
Section 230 was enacted as part of the 1996 Communications Decency Act, a larger legislative effort within the umbrella 1996 Telecommunications Act. Section 230(c)(1) provides immunity for online services regarding user-generated content, ensuring the companies hosting content are not treated as publishers of this information. Section 230(c)(2) offers Good Samaritan protection from civil liability when the companies—or platforms, as we call them today—in good faith remove or moderate objectionable content.
After President Bill Clinton signed the 1996 Telecommunications Act into law, it was unclear how the courts might interpret it. When the dust cleared, Section 230 emerged as something of a double-edged sword. It could be used to justify censorship, and at the same time be deployed as a corporate liability shield. Most importantly, it provided the runway for the takeoff of Google, Twitter, and Facebook. (And now TikTok, whose Chinese parent company, ByteDance, raises the question of whether Section 230 still serves American interests.)
The impact on the public sphere has been, to say the least, substantial. In removing so much liability, Section 230 forced a certain sort of business plan into prominence, one based not on uniquely available information from a given service, but on the paid arbitration of access and influence. Thus, we ended up with the deceptively named “advertising” business model—and a whole society thrust into a 24/7 competition for attention. A polarized social media ecosystem. Recommender algorithms that mediate content and optimize for engagement. We have learned that humans are most engaged, at least from an algorithm’s point of view, by rapid-fire emotions related to fight-or-flight responses and other high-stakes interactions. In enabling the privatization of the public square, Section 230 has inadvertently made deliberation among citizens who are supposed to be equal before the law all but impossible. Perverse incentives promote cranky speech, which effectively suppresses thoughtful speech.
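To make the mechanism concrete, consider a deliberately toy sketch of engagement-optimized ranking. Everything in it is hypothetical: the signals, weights, and names are invented for illustration and describe no platform's actual system. The point is only that an objective rewarding interaction of any kind, without asking what kind, will reliably float provocative content to the top.

```python
# A toy illustration of engagement-optimized feed ranking.
# All signals and weights are invented; this is no platform's real code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model's estimate of click-through
    predicted_replies: float  # replies correlate strongly with conflict
    predicted_anger: float    # high-arousal reactions drive re-engagement

def engagement_score(post: Post) -> float:
    # The objective rewards whatever keeps users interacting.
    # Nothing here asks whether the interaction is healthy or true.
    return (1.0 * post.predicted_clicks
            + 2.0 * post.predicted_replies
            + 3.0 * post.predicted_anger)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort so the highest-engagement (often most provocative) content leads.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Thoughtful policy analysis", 0.10, 0.05, 0.01),
    Post("Outrageous hot take", 0.30, 0.40, 0.60),
])
print([p.text for p in feed])  # the hot take wins
```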
And then there is the economic imbalance. Internet platforms that rely on Section 230 tend to harvest personal data for their business goals without appropriate compensation. Even when data use ought to be restricted by copyright or some other protection, Section 230 often effectively places the onus on the violated party through the requirement of takedown notices. That inversion of the order of events related to liability is comparable to the difference between opt-in and opt-out in privacy. It might seem like a technicality, but it is actually a massive difference that produces substantial harms. For example, workers in information-related industries such as local news have seen stark declines in economic success and prestige. Section 230 makes a world of data dignity functionally impossible.
To date, content moderation has too often been beholden to the quest for attention and engagement, regularly disregarding the stated corporate terms of service. Rules are often bent to maximize engagement through inflammation, which can mean doing harm to personal and societal well-being. The excuse is that this is not censorship, but is it really not? Arbitrary rules, doxing practices, and cancel culture have led to something hard to distinguish from censorship for the sober and well-meaning. At the same time, the amplification of incendiary free speech for bad actors encourages mob rule. All of this takes place under Section 230’s liability shield, which effectively gives tech companies carte blanche for a short-sighted version of self-serving behavior. Disdain for these companies—which found a way to be more than carriers, and yet not publishers—is the only thing everyone in America seems to agree on now.
Trading a known for an unknown is always terrifying, especially for those with the most to lose. Since at least some of Section 230’s network effects were anticipated at its inception, it should have had a sunset clause. It did not. Rather than focusing exclusively on the disruption that axing 26 words would spawn, it is useful to consider potential positive effects. When we imagine a post-230 world, we discover something surprising: a world of hope and renewal worth inhabiting.
In one sense, it’s already happening. Certain companies are taking steps on their own, right now, toward a post-230 future. YouTube, for instance, is diligently building alternative income streams to advertising, and top creators are getting more options for earning. Together, these voluntary moves suggest a different, more publisher-like self-concept. YouTube is ready for the post-230 era, it would seem. (On the other hand, a company like X, which leans hard into 230, has been destroying its value with astonishing velocity.) Plus, there have always been exceptions to Section 230. For instance, if someone enters private information, there are laws to protect it in some cases. That means dating websites, say, which handle exactly that kind of information, have the option of charging fees instead of relying on a 230-style business model. The existence of these exceptions suggests that more examples would appear in a post-230 world.
Let’s return to speech. One difference between speech before and after the internet was that the scale of the internet “weaponized” some instances of speech that would not have been as significant before. An individual yelling threats at someone in passing, for instance, is quite different from a million people yelling threats. This type of amplified, stochastic harassment has become a constant feature of our times—chilling speech—and it is possible that in a post-230 world, platforms would be compelled to prevent it. It is sometimes imagined that there are only two choices: a world of viral harassment or a world of top-down smothering of speech. But there is a third option: a world of speech in which viral harassment is tamped down but ideas are not. Defining this middle option will require some time to sort out, but it is doable without 230, just as it is possible to define the limits of viral financial transactions to make Ponzi schemes illegal.
With this accomplished, content moderation for companies would be a vastly simpler proposition. Companies would need only uphold the First Amendment, and the courts would finally develop the precedents and tests to help them do so, rather than the onus of moderation resting entirely on the companies alone. The United States has more than 200 years of First Amendment jurisprudence that establishes categories of less protected speech—obscenity, defamation, incitement, fighting words—to build upon, and Section 230 has effectively impeded its development for online expression. The perverse result has been the elevation of algorithms over constitutional law, effectively ceding judicial power.
Once the jurisprudential dust had cleared, the United States would be exporting the democracy-promoting First Amendment to other countries rather than Section 230’s authoritarian-friendly liability shield and the sewer of least-common-denominator content that holds human attention but does not bring out the best in us. In a functional democracy, after all, the virtual public square should belong to everyone, so it is important that its conversations be ones in which all voices can be heard. This can only happen with dignity for all, not in a brawl.
Section 230 perpetuates an illusion that today’s social media companies are common carriers like the phone companies that preceded them, but they are not. Unlike Ma Bell, they curate the content they transmit to users. We need a robust public conversation about what we, the people, want this space to look like, and what practices and guardrails are likely to strengthen the ties that bind us in common purpose as a democracy. Virality might come to be understood as an enemy of reason and human values. We can have culture and conversations without a mad race for total attention.
While Section 230 might have been considered more a target for reform than for repeal prior to the advent of generative AI, it can no longer be so. Social media could be a business success even if its content were nonsense. AI cannot.
There have been suggestions that AI needs Section 230 because large language models train on data and will be better if that data is freely usable with no liabilities or encumbrances. This notion is incorrect. People want more from AI than entertainment. It is widely considered an important tool for productivity and scientific progress. An AI model is only as good as the data it is trained on; indeed, general data improves specialist results. The best AI will come out of a society that prioritizes quality communication. By quality communication, we do not mean deepfakes. We mean open and honest dialog that fosters understanding rather than vitriol, collaboration rather than polarization, and the pursuit of knowledge and human excellence rather than a race to the bottom of the brain stem.
The attention-grooming model fostered by Section 230 leads to stupendous quantities of poor-quality data. While an AI model can tolerate a significant amount of poor-quality data, there is a limit. It is unrealistic to imagine a society mediated by mostly terrible communication somehow enjoying unmolested, high-quality AI. A society must seek quality as a whole, as a shared cultural value, in order to maximize the benefits of AI. Now is the best time for the tech business to mature and develop business models based on quality.
All of this might sound daunting, but we’ve been here before. When the US government declared that the American public owned the airwaves, it became possible to regulate television broadcasting with rules that supported the common good. The internet affects everyone, so we must devise measures to ensure that digital-age public discourse is of high quality and includes everyone. In the television era, the fairness doctrine laid that groundwork. A similar lens needs to be developed for the internet age.
Without Section 230, recommender algorithms and the virality they spark would be less likely to distort speech. It is sadly ironic that the very statute that delivered unfathomable success is today serving the interests of our enemies by compromising America’s superpower: our multinational, immigrant-powered constitutional democracy. The time has come to unleash the power of the First Amendment to promote human free speech by giving Section 230 the respectful burial it deserves.