December 31, 2021
I have recently been engaging on Twitter about my ideas for repealing Section 230. Not surprisingly, I provoked a considerable response. While much of it consisted of angry ad hominems, some of it involved thoughtful comments, especially those from Jeff Kosseff and Mike Masnick, the latter of whom took the time to write a full column responding to my proposals on repeal.
I will directly respond to Mike’s column, but first I should probably outline again what I am proposing. I somewhat foolishly assumed that people had read my earlier pieces, and probably even more foolishly assumed anyone remembered them. So, I will first give the highlights of how I would like to see the law restructured and then respond to some of the points made by Mike and others.
Narrowing the Scope of 230
In my view, the best way to limit the power of a Mark Zuckerberg or a Jack Dorsey to shape our political debates and influence elections is to downsize Facebook, Twitter, and any other sites that grow so large as to have an outsize influence on American politics. Restructuring the protection provided by Section 230 is one way to accomplish this goal.
As it stands, Section 230 means that Facebook and Twitter cannot be sued for defamation over third party content, whether in the form of paid advertisements or in any of the billions of posts made on these sites every month. Newspapers and broadcast outlets do not enjoy this protection for third party content.
I would propose taking away this protection from sites that either accept paid advertisements or sell personal information. That means the only sites still enjoying Section 230 protection would be those supported by paid subscriptions or donations.
Since it would not be practical for a major site like Facebook to monitor every post as it was made, I proposed a notice-and-takedown rule similar to what now exists for material alleged to infringe on copyrights. Under the Digital Millennium Copyright Act, a website such as Facebook can be subject to penalties for copyright infringement if it has been notified by the copyright holder and fails to take down the infringing material in a timely manner.
A similar rule could be put in place for allegedly defamatory material: the person (or company) claiming defamation would notify the website, which would then have to remove the material in a timely manner in order to shield itself from potential liability.[1] Of course, many complaints alleging defamation will not be justified. If a site owner judges a complaint to be without merit, it need not do anything, but it would then risk a lawsuit, just as a newspaper or television station does now when it circulates defamatory items.
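For readers who like to see the mechanics spelled out, here is a minimal sketch in Python of how a site might track complaints under this sort of rule. It is purely illustrative: the names and the 14-day window are my own assumptions, not anything taken from the DMCA or from any actual platform.

```python
# Illustrative sketch only: the notice-and-takedown rule described above,
# with an assumed 14-day "timely manner" window. No real platform's
# implementation is being described here.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

TAKEDOWN_WINDOW = timedelta(days=14)  # assumed definition of "timely manner"

@dataclass
class Complaint:
    post_id: str
    received: datetime                      # when the site was notified
    judged_meritless: bool = False          # site owner's own assessment
    removed_at: Optional[datetime] = None   # when (if ever) the post came down

def liability_status(c: Complaint, now: datetime) -> str:
    """Rough classification of the site's position under the proposed rule."""
    if c.removed_at is not None and c.removed_at - c.received <= TAKEDOWN_WINDOW:
        return "shielded: material removed in a timely manner after notice"
    if c.judged_meritless:
        return "exposed by choice: site left the post up and accepts the suit risk"
    if now - c.received > TAKEDOWN_WINDOW:
        return "exposed: notice received but material not removed in time"
    return "pending: still within the takedown window"

# Example: a complaint received a month ago that was never acted on.
print(liability_status(Complaint("post-123", datetime(2021, 12, 1)), datetime(2021, 12, 31)))
```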
This sort of change would not have much impact on the vast majority of websites. A business that has its own site would generally have no third party content that it would need to worry about.
Some business sites do have customer reviews of products. For example, many retail sites allow customers to comment on items they purchased. These reviews could include some potentially defamatory comments.
A business could decide to pre-emptively get rid of its review section, avoiding any potential problems. Alternatively, it could take responsibility for monitoring its reviews and be prepared to remove potentially defamatory ones if a complaint is made. (I assume that most of these review sections already require some degree of moderation, at least to remove comments that are obscene, racist, or otherwise offensive.) As a substitute, it could also simply link to sites that host reviews.
There are also a large number of sites that would still enjoy Section 230 protection by virtue of the fact that they neither carry paid advertising nor sell personal information. For example, this would be the case with most websites of policy organizations, universities, and other non-profits.
There would be a clear issue for the many sites that are essentially dependent on third party content for their business. This would include sites like Yelp, which is built on customer reviews of businesses, or Airbnb, which prominently features guests’ reviews of hosts.
Without Section 230 protection these sites could be held liable for defamatory comments in these reviews. These sites could make the decision to accept responsibility for moderation (they already moderate to exclude offensive content) and be prepared to remove posts that are called to their attention as potentially defamatory.[2]
Another option would be to move to a subscription model in which users pay a monthly or annual fee for use of the service. Even large sites could be supported by a fairly limited number of subscribers paying a modest fee.
As I noted in an earlier Twitter thread, the employer-review website Glassdoor had revenue of $170 million in 2020. That could be covered by 3 million people paying $5 a month or 1.5 million paying $10 a month. That hardly seems like a big leap for a major website.
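For anyone who wants to check that arithmetic, here is the back-of-the-envelope calculation. The $170 million revenue figure is the one cited above; everything else is simple multiplication.

```python
# Back-of-the-envelope check of the subscription arithmetic above.
glassdoor_revenue_2020 = 170_000_000  # dollars, the figure cited in the text

for subscribers, monthly_fee in [(3_000_000, 5), (1_500_000, 10)]:
    annual_revenue = subscribers * monthly_fee * 12
    print(f"{subscribers:,} subscribers at ${monthly_fee}/month -> ${annual_revenue:,}/year")

# Both combinations come to $180 million a year, a bit more than the $170 million target.
```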
It is more than a bit far-fetched to claim such fees would make these sites exclusively for the rich. In prior decades it was common for working-class and even poor people to subscribe to newspapers, which cost them far more in today’s dollars than $10 a month. There are currently over 290 million smartphone users in the United States, and almost all of them pay far more than $10 a month for their service. Needless to say, we do not have 290 million rich people in this country.
Of course, there is no guarantee that every service that exists today on an advertising model would survive a switch to a paid circulation model, but so what? Companies go out of business all the time; that is capitalism. If it turned out that very few people were willing to shell out money for a site like Glassdoor, we could infer that not very many people valued the site.
I don’t mean to be glib about the prospect that sites some people value a great deal might not survive this sort of change in regimes, but almost any policy that accomplishes something positive will also have negative effects. The growth of Internet retailing put many old-line retailers out of business. And the growth of Facebook, partially fueled by Section 230 protection, has helped put many newspapers out of business. If we think we have a policy that won’t have any undesirable effects, then we probably don’t understand the policy.
If we saw many sites switching to a paid circulation model, it is likely that we would see aggregators that charge a fee for access to a large number of sites. This would be similar to the combination of television channels offered by major cable providers. This means that instead of individually subscribing to Yelp, Glassdoor, etc., it would be possible to subscribe to a service that provided access to a wide range of sites.
It’s understandable that people would not be happy about paying for access to sites that had previously been free, but we see this all the time. Most newspapers now have paywalls, and many don’t even allow a single article to be viewed for free. (In the past, it was common for newspapers with paywalls to allow free access to some limited number of articles per month.)
Forty years ago, free broadcast television accounted for the vast majority of viewing. Households spent just $3.15 billion on cable TV in 1980, the equivalent (relative to the size of the economy) of $22.7 billion in 2020. By comparison, households spent $96.3 billion on cable television in 2020 (more than $700 per household), more than four times as much as in 1980.[3] In short, there is ample precedent for people being willing to pay for items that were formerly available for free, if they value them.
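As a rough check on those numbers: the dollar figures are the ones cited above (and in footnote [3]), while the household count of roughly 128 million in 2020 is my own approximation, not a number from the BEA table.

```python
# Rough check of the cable-spending comparison above.
spending_1980_scaled = 22.7e9   # 1980 outlays, scaled to the size of the 2020 economy
spending_2020 = 96.3e9          # 2020 household outlays on cable TV
households_2020 = 128e6         # assumed number of U.S. households in 2020 (approximate)

print(f"2020 spending vs. scaled 1980 spending: {spending_2020 / spending_1980_scaled:.1f}x")  # about 4.2x
print(f"Per household in 2020: ${spending_2020 / households_2020:,.0f}")                       # roughly $750
```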
Would This Change Hurt Facebook?
Mike argues in his piece that changing Section 230 in the way I have proposed would work to the benefit of Facebook, pointing out that Facebook is itself now pushing to eliminate Section 230. It is true that Facebook is lobbying to have Section 230 changed, but it does not seem to be advocating the elimination of this protection, at least for itself.
I’ll confess to not fully understanding the changes Facebook is advocating, but according to the Electronic Frontier Foundation (EFF), they would amount to protection from liability for defamation if a company spent a certain proportion of its revenue monitoring its site for offensive, dangerous, or defamatory material. That is certainly not the same as asking Congress to eliminate Section 230 protection altogether. (The EFF piece is titled “Facebook’s Pitch to Congress: Section 230 for Me, but not for Thee.”)
If Facebook had to operate without Section 230 protection, as I am proposing, it could face liability for defamation if it left material posted after being given notice by someone claiming damages. It is possible that it would just ignore these notices and operate as it does currently, but it seems more likely that it would take down much of the material that provided the basis for complaints. In fact, if we can extrapolate from the experience with copyright infringement claims, websites have in general been overly cautious after being given notice, often removing material that is not actually infringing.[4]
If we assume Facebook goes the compliance route, many users will see posts removed from their Facebook page over claims that they are defamatory. It seems likely that this would upset users, causing many of them to look for sites that will not remove their posts. Since sites that did not depend on advertising or selling personal information would still enjoy Section 230 protection, it seems likely that many current users would opt to leave Facebook for these alternatives.
I also pointed out that, as a simple financial matter, the Facebook leavers are likely to be more affluent, since they could easily afford the fees charged by a subscription site. While most households may be able to pay $5 or $10 a month for a subscription site, this expense would be trivial for the 30-plus percent of households with incomes over $100,000 a year.[5] This is the group that advertisers on Facebook are most interested in reaching. If a substantial percentage of higher-income users left Facebook, or used the site less frequently, it would be a big hit to the company’s profits.
It is also worth noting that, even if alternative sites were orders of magnitude smaller in their potential reach than Facebook, this is not likely to make much difference to the overwhelming majority of Facebook users. While Facebook may have billions of users, almost none of them will ever reach more than a tiny fraction of the total with their posts. If their friends and family members were on a site that was 0.01 percent the size of Facebook, in almost all cases they could count on reaching just as many viewers. As a practical matter, the billions of users who will never see a person’s Facebook page are irrelevant to them.
The other possibility is that Facebook would simply ignore complaints and leave potentially defamatory material posted on its site. Masnick seems to think this is a possibility for Facebook:
“First off, the actual bar for defamation is quite high, especially for public figures. Baker, incorrectly, seems to think that merely saying something false about a public figure is defamatory. That’s not how it works. It has to meet the standard of defamation, including the actual malice standard (which is not just that you were really mad when you said it). Second, and much more important for this situation, is that if the speaker was liable, that does not automatically mean that the intermediary would be liable. Under the two key cases prior to Section 230 becoming law, Cubby v. Compuserve and Stratton Oakmont v. Prodigy, the courts had to wrestle with what makes 3rd party intermediary liability consistent with the 1st Amendment.”
Of course, the bar for defamation is high, and especially so for public figures. That doesn’t mean such suits are not brought, and occasionally successful. General William Westmoreland sued CBS News in 1982 over a segment on his conduct during the Vietnam War. The suit survived summary judgment (the wrong call, in my view) and was settled just before the case went to the jury.
More recently, the former professional wrestler Jesse Ventura won a defamation suit against American Sniper author Chris Kyle. After securing a judgment at the trial level, Ventura ultimately reached an out-of-court settlement.
But the issue of public figure defamation is the less important one. The overwhelming majority of defamation claims on a site like Facebook would not involve public figures, but rather comments about a business, a worker, a friend, a neighbor, or a family member. It’s not obvious why, in these sorts of cases, Facebook should enjoy a greater level of immunity (post-notification) than a newspaper or television station.
If a person had a letter printed in a newspaper claiming that a restaurant served rotten meat, causing dozens of customers to get sick, the paper, and not just the letter writer, could be sued for libel if the claim were not true. Why should the restaurant have no recourse against Facebook if the site allowed this false claim to continue to circulate even after the restaurant brought it to Facebook’s attention?
Apart from the cost that news organizations incur when they defend against, and possibly lose, a defamation suit, they also incur considerable expenses to avoid facing suits. News outlets carefully comb investigative pieces to ensure that they do not contain potentially defamatory material. They review third party submissions, such as columns and letters to the editor, the same way.
Section 230 ensures that Facebook does not now have to incur these expenses. Repealing this protection would unambiguously raise its costs, both relative to the outlets that do not now have Section 230 protection and relative to the sites that would still enjoy it.
It is not clear what constitutional issues Masnick envisions in holding intermediaries liable for carrying defamatory material. The two cases he cites both focus on whether the intermediary could reasonably have been expected to know of the defamatory material at the time it was posted. In a case where Facebook has been given a takedown notice, it obviously has knowledge of the material. The courts have apparently not seen any First Amendment issue with holding intermediaries liable for carrying material that infringes on copyrights; it’s not clear why they would then hold that the First Amendment protects intermediaries from suits over hosting defamatory material.
Is Mark Zuckerberg a Good Guy and Should We Care?
Facebook has obviously made some effort to limit the amount of false and hateful material that circulates on its site. We can be thankful for this, even if we can debate whether it has done enough.
But the more fundamental question is whether such important decisions should be left to the discretion of a private company. The disproportionate control of the media by large corporations and wealthy individuals has long been a problem, but the situation is much more serious when a single company can have the reach of Facebook.
Even if people are reasonably satisfied with Mark Zuckerberg’s moderation of Facebook, he is not going to be running the company forever. Would people be equally satisfied if some Koch- or Murdoch-type billionaire were in control? Would it be okay if that owner started removing any content pointing out that Donald Trump lost the 2020 election by a wide margin and that the allegations of vote fraud are absurd?
When a key communications outlet gets as large as Facebook, that is a real problem in itself. We can hope that it exercises its power responsibly, but the problem is that it has the power in the first place. At the same time, no one can reasonably want the government to dictate Facebook’s moderation policy, which would raise all sorts of First Amendment issues.
The better answer lies in downsizing Facebook so that what Mark Zuckerberg, or any other billionaire, wants doesn’t matter so much. Taking away its Section 230 protection is an effective route to accomplishing this goal.
[1] With a site like Facebook, which effectively has a record of who has viewed any post, there could be an additional requirement that all the users that viewed the defamatory item be notified that it was removed. This would be equivalent to a newspaper publishing a retraction in response to a threat of a defamation suit.
[2] A site like Airbnb could probably also get its hosts to waive their right to sue for defamation as a condition of listing on the service.
[3] These data are taken from Bureau of Economic Analysis, National Income and Product Accounts, Table 2.4.5U, Line 217.
[4] Mike Masnick called my attention to this issue.
[5] I have argued for a tax credit system, modeled on the charitable contribution tax deduction as an alternative to copyright for supporting creative work. Such a credit would be a great way to ensure that even the poorest households could afford access to subscription sites.