"Cancelling voices is a slippery slope," Spotify CEO Daniel Ek wrote last Sunday in an internal memo to staffers.

Ek's comments came in the wake of an evolving saga surrounding "The Joe Rogan Experience", a controversial podcast hosted by comedian Joe Rogan. Spotify struck a $200 million deal in 2020 to exclusively host the podcast on its platform.

The growing backlash against Rogan, whose podcast draws 11 million listeners per episode, began in January, when a group of over 200 public health experts sent an open letter to Spotify raising concerns about Rogan using his platform to spread Covid-19 misinformation. Since then, Neil Young and Joni Mitchell have asked Spotify to remove their music from the platform. More recently, a viral video posted by musician India.Arie, which shows Rogan using a racial slur dozens of times, intensified calls for Spotify to act.

To be fair, Spotify's response hasn't been awful. The platform updated its rules for contributors, and will add a content advisory warning to podcasts that feature discussion about Covid-19. And, for what it's worth, Ek seems genuinely contrite, and intent on doing better — a low bar, but still a refreshing contrast to the likes of Google and Facebook, whose executives often recycle callous talking points in response to similar criticisms over content moderation practices.

But the episode raises an important question: How can public policy effectively combat the dissemination of harmful disinformation on Spotify, Facebook, YouTube, and other digital platforms?

It's a problem that lawmakers on Capitol Hill have been seriously grappling with for at least three years. Dozens of bills have been drafted in Congress to reduce the spread of harmful content online. The vast majority of proposals have focused on amending Section 230, a provision in a 1996 law that shields online platforms from liability for user-generated content. Many in the tech policy community caution against this approach, based on concerns that rolling back Section 230 could lead companies to over-censor content, and would hurt smaller companies the most. Some proposals go further, including updating competition law to limit market concentration, mandating disclosure in political ads, or expanding data privacy.

But there is an alternative approach that has received less attention. It's a path that prioritizes empowering social media users with a contextual understanding of the content they engage with, rather than merely penalizing the tech giants.

Take a moment to imagine the following: You are scrolling through your Facebook feed, and every piece of news you encounter has a "credibility score" attached to it, from 0 to 100. Clicking on the score, you are provided with a color-coded indicator of the source quality, the author's expertise and tone of voice, other data about the authenticity and trustworthiness of the information, and even additional links to other news articles about the topic from a different political perspective.

[Image: The Factual's credibility score interface.] The Factual is one of many start-ups that leverage AI algorithms to rate the credibility of news articles.
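To make the idea concrete, here is a minimal sketch of how such a label might be computed. Everything in it is hypothetical: the sub-scores, weights, and color bands are illustrative stand-ins, not The Factual's actual formula (real systems derive these signals from machine-learning models rather than hand-assigned values).

```python
from dataclasses import dataclass

@dataclass
class ArticleSignals:
    """Hypothetical per-article sub-scores, each in [0, 1]."""
    source_quality: float    # historical reliability of the outlet
    author_expertise: float  # author's track record on the topic
    tone: float              # neutral, factual language vs. loaded wording

# Illustrative weights only -- not any real provider's formula.
WEIGHTS = {"source_quality": 0.40, "author_expertise": 0.35, "tone": 0.25}

def credibility_score(signals: ArticleSignals) -> int:
    """Combine weighted sub-scores into a single 0-100 credibility score."""
    raw = (WEIGHTS["source_quality"] * signals.source_quality
           + WEIGHTS["author_expertise"] * signals.author_expertise
           + WEIGHTS["tone"] * signals.tone)
    return round(raw * 100)

def score_band(score: int) -> str:
    """Map a numeric score to a color band for the label UI."""
    if score >= 75:
        return "green"
    if score >= 50:
        return "yellow"
    return "red"
```

The point of the sketch is the interface, not the math: the user sees a single number and a color, and can click through to the underlying sub-scores, which is what distinguishes a label from a binary fact-check.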

To make this vision a reality, policymakers should partner with the private sector, academia, and other civil society groups to leverage emerging AI technologies to promote credibility labels on social media.

There is precedent for similar action. In November 1990, President George H.W. Bush signed into law the Nutrition Labeling and Education Act (NLEA), legislation that introduced the now-iconic black-and-white health label on packaged foods. The NLEA's initial mandate was narrow. Over time, food manufacturers voluntarily added additional information to food labels, and restaurants throughout the country began displaying nutrition information on their menus.

Studies have shown that nutrition facts help consumers make informed decisions about food. The Union of Concerned Scientists found that 76 percent of adults read the label when purchasing packaged foods, and food labeling increases vegetable consumption. Researchers at the Harvard T.H. Chan School of Public Health found that restaurant customers tend to order lower-calorie foods when menus include calorie information.

So, what's the connection? The NLEA is a prime example of the government taking a "soft" approach to educate consumers about healthy eating, and shift the food industry toward making healthier products. And it is precisely the approach needed to address the present dysfunction in the social media information ecosystem.

Of course, there are important questions to consider: How could policymakers incentivize platforms to adopt credibility labels given that platforms, as private companies, are afforded First Amendment protections? Which platforms would be required to comply? And, should AI algorithms — which have played a role in the coarsening of civic discourse — be trusted to determine the credibility of news articles?

Skeptics would also be right to point out that credibility labels wouldn't address the platforms' underlying ad-based business models, which incentivize them to design their systems in a way that accelerates the viral distribution of divisive content.

These are fair criticisms, and they should be addressed. But as Renee DiResta, a disinformation researcher at Stanford, has often said, there are no silver bullets, and platforms will only redesign the user experience if public regulations push them to. While social media platforms have experimented with features like "fact-checks" and downranking of content, these tools are often "opt-in", requiring initiative on the user's part, and there are no universal standards guiding the language and design of these interventions. The companies are experimenting in isolation, with little motivation to implement lasting changes.

Furthermore, there are reasons to expect that a standardized credibility label would be more effective at combating misinformation than other proposals. A number of studies provide evidence that fact-checking labels and other "friction" points prompt users to think more critically about the accuracy of content, and reduce sharing of misinformation. Expanding credibility labels also avoids the pitfalls of more ambitious content moderation proposals: rather than forcing a tech company, or a government, to determine what constitutes misinformation or to censor certain speech, credibility labels equip users with information they can use to become savvier news consumers.

As a starting point, Congress could help fund some of the private sector start-ups working to build the type of automated detection systems that can rate the credibility of news articles online. And it could work with Facebook and Google to encourage this form of self-regulation.

Thirty years ago, the government passed the NLEA to nudge Americans to consume healthier foods, and the food industry to make healthier products. Today, we must act similarly to empower people to consume healthier information.