Section 230: Can We Talk About It?

September 09, 2023

There is considerable handwringing over the fact that Elon Musk now controls Twitter and is making ad hoc decisions, based on his unusual political views, as to which posts get amplified and which get banned. (I was briefly banned last fall, for no obvious reason.)

While many people may not like Musk’s calls on these issues, the real issue is not Musk; it is why anyone is allowed to accumulate so much power. Musk’s erratic behavior and fondness for far-right propagandists have chased many people off Twitter (or “X,” as he now calls it), but it still has a reach that dwarfs that of even the largest traditional media outlets. This should have caused serious concern even before Elon Musk took it over.

Unfortunately, people who care about things like rich oligarchs dominating our media tend to be more interested in handwringing than in thinking about what can be done to change the situation. Several years ago, I suggested that repealing the protections given to Internet platforms by Section 230 might be a good route for downsizing Internet giants like Twitter and Facebook. The takeover of Twitter by a right-wing jerk makes me more convinced than ever that this is a worthwhile proposal.

What Section 230 Does

The content of Section 230 of the 1996 Communications Decency Act is straightforward. It says:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This distinguishes a web platform from a newspaper or broadcast outlet, which are treated as publishers and held responsible for third-party content. For example, if a newspaper or television station allows a commentator to write a column or speak on air on a topic, and what they say is defamatory, the newspaper or television station can be sued, not just the commentator.

The same applies to paid ads. If someone takes out an ad in a newspaper or a TV station and makes some outlandish accusation against a politician or a business, the newspaper or TV station is liable for damages, not just the person taking out the ad. In fact, the famous New York Times v. Sullivan case, which set out the higher standards for proving defamation of a public figure, was over an ad run in the Times, not its own content.

The laws on defamation mean that print and broadcast outlets have to vet third-party material or risk being sued. This is a big deal. In fact, Dominion’s defamation suit against Fox, which Fox settled for $787.5 million, was based largely on third-party content. Fox repeatedly hosted people who made absurd claims about Dominion’s voting machines.

The logic of holding print and broadcast outlets responsible for third-party content is that they are wholesaling allegations that otherwise might have almost no audience. Sidney Powell standing on a street corner yelling about Dominion voting machines is an irrelevant joke. Sidney Powell speaking to millions of viewers on Fox News about how Dominion stole the election from Donald Trump is a very big deal to both Dominion’s reputation and the country.

It’s also worth mentioning that print and broadcast outlets can profit from carrying defamatory material. If hosting outlandish charges from third-party sources increases circulation or viewership, that means more revenue from advertisers. With a defamatory ad, the benefit for the outlet is even more direct.

Since print and broadcast outlets both make third-party defamatory material far more harmful by carrying it and profit from doing so, it is reasonable to hold them responsible for the damage it causes. However, Section 230 means that this same culpability does not apply to Internet platforms.

Section 230 means that the producers of content can in principle be held responsible for defamatory material, but the Internet platform is not responsible. I am saying “in principle” because many comments on platforms like Twitter are posted anonymously.

It would be difficult for someone who was defamed to even know who the originator of the content was. (A prominent right-wing Twitter commentator, and friend of Elon Musk, posts under the name “Catturd.” If this person defamed someone, they would have to first uncover the identity of Catturd before even initiating a lawsuit.) It is also worth mentioning that platforms’ liability is not affected even if their algorithms work to amplify defamatory content.

The same story applies to third-party ads. If some person, even anonymously, paid Elon Musk or Mark Zuckerberg millions of dollars for a large-scale advertising campaign defaming an individual or company, Section 230 means they just get to pocket the cash, but face no legal risk.

What Section 230 Does Not Do

There is a huge amount of confusion about Section 230. For years, and maybe still, many on the right seemed to think that Section 230 is what allows Internet platforms to remove posts for being racist, sexist, homophobic, or other expressions of far-right bigotry. For this reason, they complained that Section 230 limited their free speech. That is pretty much 180 degrees at odds with reality.

Section 230 simply says that Internet platforms are not liable for defamatory content posted by third parties. Internet platforms have the right to remove offensive right-wing tripe because of the First Amendment, not Section 230. If anything, Section 230 makes it less likely they will remove loony rants from the right.

For example, without Section 230 protection, if some crazy person tweets that George Soros is funding a pedophile ring, Soros could sue Twitter for defamation. With Section 230 protection, however, Soros has no case against Twitter, just against the loon who posted the tweet.

This means that if Section 230 did not exist, Internet platforms like Twitter and Facebook would have to comb through third-party posts and remove potentially defamatory statements or risk being sued. For this reason, Section 230 almost certainly made it less likely that right-wing content would be removed.

Applying Defamation Law to Internet Platforms

Many of the people who get hysterical about the idea of repealing Section 230 insist that it would not be possible for Internet platforms to comb through the hundreds of millions of user-generated comments posted every day. If the argument were that platforms had to vet comments before posting them, they might have a case. But we can restructure the law to accommodate web platforms, just as the law was reshaped to accommodate broadcast outlets.

We can require that, in order to avoid potential liability for defamatory material, platforms promptly remove material after being given notice by the person or entity claiming defamation. There is a model for this already. The Digital Millennium Copyright Act (DMCA) requires that Internet hosts remove infringing material promptly after being given notice in order to avoid being sued for copyright infringement.

The DMCA is problematic; there are many instances where hosts wrongly remove material because they do not want to risk being sued. Nonetheless, the DMCA does provide a model showing that Internet platforms can in fact cope with takedown requests, in spite of the vast amount of content they host.

Copyright law also provides a special inducement to sue, since it allows for statutory damages on top of actual damages, which in the vast majority of cases would be trivial. Since defamation law does not provide for statutory damages, frivolous removal notices would pose less risk of actually leading to trials and meaningful damages.

There is no doubt that making Internet platforms liable for defamatory statements by third parties will raise their costs, even if they do have the option of protecting themselves by removing material in response to a takedown notice. They would have to hire staff to deal with notices and set up rules for when material would be removed.

They could just go the route of blanket removal of material any time they get a notice. This might be the cheapest route, since it can probably be done largely mechanistically, requiring no review from staff or lawyers. However, this would almost certainly antagonize users, since people using Twitter, Facebook, or other sites would be seriously annoyed if they routinely had their posts removed.

The alternative would be to have some process of review. If challenging material as defamatory became a common practice, this could become costly. Presumably, there would be some staff, with minimal legal training, who could make a preliminary decision on whether something should be removed. In many cases, the alleged defamation would likely be sufficiently trivial or absurd that the complaint could be safely ignored.

However, there could be a substantial number of cases that would require serious review for possibly defamatory material. In these cases, platforms would likely err on the side of removal rather than expose themselves to legal liability. That would be unfortunate, but it is not obvious that it is better than the alternative where someone who is defamed has no recourse against the site that wholesaled the defamatory material.

This is the logic applied to print and broadcast outlets. There is no obvious reason we should say that Elon Musk can wholesale defamatory material and profit from it with impunity, but CNN and the NYT cannot.

Exceptions for Sites that Don’t Make Money from Ads or Selling Personal Information

Taking away Section 230 protection would be a big challenge for Internet giants like Facebook and Twitter, but they could probably deal with the additional costs. For smaller sites, the costs could matter more. As I have argued in the past, we could structure a repeal to give smaller sites an out by exempting sites that do not sell ads or personal information.

Millions of smaller Internet hosts would then still enjoy Section 230 protection; for them, nothing would change. If they supported themselves through subscriptions or donations, they could continue to operate just as they do now. This would apply even to some large sites. For example, Mastodon, a major competitor to Twitter with millions of users, supports itself through donations. It would continue to enjoy Section 230 protection.

For many other sites, there would be some adjustments required. Would a site like Glassdoor be able to operate by subscription? How about Yelp or Airbnb?

There are two possibilities here. Many sites would probably have difficulty surviving as subscription services. I don’t know how many people would subscribe to a site like Yelp or Glassdoor, and the marketing costs would likely be high relative to potential subscription revenue.

However, it is plausible that aggregators could bundle a set of sites, as cable services do now with television channels. This would not require Internet users to take advantage of, or even know about, every site included in a bundle. Presumably, they would choose from aggregators in the same way that they choose now among cable services, selecting ones that included the sites they cared about most.

People would dislike paying for something they used to get for free, but this already happened with television. Fifty years ago, almost all television was free. At its peak in 2016, almost 100 million households were paying for cable service. There is no basis for assuming that people would be unwilling to pay a monthly fee for access to Internet sites they value.

The other route is that sites could assume the liability but require some sort of waiver from users as a condition of service. For example, Airbnb hosts could be asked to waive their right to sue over defamatory postings, subject to some sort of screening procedure by Airbnb. (I am not sure what its current policy is, but I assume it would not allow a racist, sexist, or otherwise offensive comment to stay on the site.)

Some sites may also stop hosting comments to avoid the problem altogether. For example, newspapers may opt not to let readers comment on pieces posted on the web.

There is certainly no guarantee that every site that now survives on ad revenue or selling personal information would make enough through subscriptions to survive. However, if our criterion for good policy is that it never puts anyone out of business, we would not be implementing very many policies. The question is whether we would be better off in a world where Internet platforms face liability for circulating defamatory material similar to that of print and broadcast outlets, or in the current one, where they can profit from this material with impunity.

Reducing the Power of the Internet Giants

It is hard not to be disgusted by the idea that elected officials have to beg people like Elon Musk or Mark Zuckerberg to act responsibly in removing lies and disinformation from their platforms. At the end of the day, these are private platforms and the rich people who control them can do what they want.

This has always been the case with newspapers and television stations, many of which often pushed pernicious material to advance their political agendas or simply to make money. But this mattered much less when any single outlet was just one of many television stations or newspapers. It would be wrong to glorify a golden age of vigorous, freedom-loving media that never existed. But even the decisions of the largest newspaper or television network did not carry as much weight as the content decisions made by today’s Internet giants. If we can restructure Section 230 in a way that leads to their downsizing and promotes a wide variety of competitors, it will be an enormous victory for democracy.

A repeal of Section 230 can be sliced and diced in a thousand different ways, and the route suggested here may well not be the best. But it is worth having a debate on this topic. Anyone who thinks the current situation is fine, where Elon Musk and Mark Zuckerberg unilaterally decide what tens of millions of people see every day, does not have much understanding of democracy.

I realize handwringing on this topic is much more marketable in major media outlets than proposed solutions, but we should be looking for some nonetheless. We know that intellectuals have a hard time dealing with new ideas, but this is a situation where we desperately need some.
