Section 230 of the Communications Decency Act of 1996 shields providers of an "interactive computer service," a category that includes social media platforms, from legal liability for most content their users post. It also allows platforms to moderate content as they see fit without fear of being sued.
There is broad consensus, even among the legislation's critics, that Section 230 is a bedrock of the modern internet. In his book The Twenty-Six Words That Created the Internet, Jeff Kosseff argues that Section 230 has become so intertwined with our fundamental conceptions of the internet that any wholesale reductions to the immunity it offers could irreparably damage free speech online.
Facebook CEO Mark Zuckerberg testified before lawmakers on Capitol Hill in April 2018.
On Capitol Hill, lawmakers from both parties have struck an increasingly hostile tone toward Section 230. Republicans believe Silicon Valley executives have exploited its latitude to disadvantage conservative content. Democrats argue that Section 230 shields social media companies from social responsibility, and that platforms need more legal incentives to remove hate speech and misinformation.
Some legal scholars have proposed targeted amendments. Danielle Keats Citron of Boston University and Benjamin Wittes of Lawfare argue that immunity should only be granted to platforms that have taken "reasonable steps to prevent or address unlawful uses of their services" — an interpretive shift toward requiring "good faith" content moderation.
In my view, repealing Section 230 altogether is not a practical option: doing so would destabilize internet communication and further consolidate power in the hands of a few giant, well-funded technology companies. But proposals that merely incentivize better content moderation may also be insufficient. What constitutes a "good faith effort" to restrict harmful content? Can social media companies really be expected to police millions of posts every day? And should these companies even be empowered to make those editorial decisions? Zuckerberg likes to say that Facebook shouldn't be the arbiter of truth — on that point, I agree.
The Algorithm Problem
Any legislation that seeks to combat the most pernicious effects of social media must address the algorithms that allow hate speech, disinformation, and conspiracy theories to spread at scale, and that disproportionately amplify the most extreme voices. Algorithms drive Facebook's News Feed and "Suggested Groups" feature, Twitter's Home timeline, and TikTok's "For You" page. They are indispensable to these platforms' engagement-driven business model.
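To see why critics focus on the ranking objective itself rather than on any individual post, consider a deliberately simplified sketch of engagement-based feed ranking. The Post fields and scoring weights below are hypothetical illustrations, not any platform's actual formula; real systems rely on machine-learned predictions over thousands of signals. The structural point survives the simplification: an objective that maximizes engagement cannot tell outrage apart from quality.

```python
# A toy sketch of engagement-based feed ranking. The fields and weights
# are hypothetical, chosen only to illustrate the amplification dynamic.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes because they push
    # content to new audiences and signal strong reactions.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by engagement; nothing in this objective distinguishes
    # inflammatory or false content from accurate, measured content.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("local_news", "City council passes budget", likes=40, shares=2, comments=5),
    Post("provocateur", "Inflammatory conspiracy claim", likes=30, shares=25, comments=60),
]
for p in rank_feed(posts):
    print(f"{engagement_score(p):>6.1f}  {p.author}: {p.text}")
```

In this toy example the inflammatory post scores 335 against the news item's 65, so it tops the feed: the content that provokes the strongest reactions wins, regardless of its accuracy or social cost.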
Two House Democrats, Representatives Tom Malinowski (D-NJ) and Anna Eshoo (D-CA), introduced a bill called the Protecting Americans from Dangerous Algorithms Act (PADAA), which would amend Section 230 to strip immunity from social media firms when algorithmically curated speech or social connections are later implicated in extremist violence.
The bill is not perfect. Whether platforms can feasibly predict which algorithmically amplified speech will lead to violence is uncertain, and some will raise First Amendment objections. But no single amendment to Section 230 will resolve every peril of social media. And any regulation that focuses solely on content moderation, without forcing platforms to recalibrate algorithms that reward engagement at the expense of social welfare, is not a substantive solution.
Zuckerberg would gladly have us believe that the debate over social media regulation is about free speech and censorship. But what we should really be talking about is algorithms.
