February 27, 2024
That is effectively what the states of Texas and Florida are arguing before the Supreme Court this week. The argument is framed as a question of whether states can prohibit social media companies from banning material based on politics. However, since much of Republican politics these days involves promulgating lies, like the “Biden family” Ukraine bribery story, the Texas and Florida laws could arguably mean that a state could require Facebook and other social media sites to spread lies.
There is a lot of tortured reasoning around the major social media platforms these days. The Texas and Florida laws are justified by the claim that these platforms are essentially common carriers, like a phone company.
That claim seems pretty hard to justify. In principle at least, a phone company has no control over, or even knowledge of, the content of phone calls. That is why it would be absurd for a state government to try to dictate what sort of calls could or could not be made over a telephone network.
Social media platforms do have knowledge of the material that gets posted on their sites. And in fact they make conscious decisions, or at least have algorithms that decide for them, whether to leave up a post, boost it so that it is seen by a wide audience, or remove it altogether.
The social media companies arguing against the Florida and Texas laws say that they have a First Amendment right to decide what material they want to promote and what material they want to exclude, just like a print or broadcast outlet. However, there is an important difference between the factors that print and broadcast outlets must consider in transmitting and promoting content and what social media sites need to consider.
Modifying Section 230
Print and broadcast outlets can be sued for defamation for transmitting material that is false and harmful to individuals or organizations. Social media sites do not have this concern because they are protected by Section 230 of the Communications Decency Act.
This means that not only are social media companies not liable for spreading defamatory material, they can actually profit from it. This is not only the case for big political lies. If some racist decides to buy ads on Facebook or Twitter falsely saying that they got food poisoning at a Black-owned restaurant, Mark Zuckerberg or Elon Musk gets to pocket the cash.
The restaurant owner could sue the person who took out the ad, if they can find them (ads can be posted under phony names), but Facebook and Twitter would just hold up their Section 230 immunity and walk away with the cash. The same story applies to posts on these sites. These can also generate profits for Mr. Zuckerberg or Mr. Musk, since defamatory material may increase views and make advertising more valuable.
The ostensible rationale for Section 230 was that we want social media companies to be able to moderate their sites for pornographic material or posts that seek to incite violence, without fear of being sued. There is also the argument that a major social media platform can’t possibly monitor all of the hundreds of millions, or even billions, of posts that go up every day.
But removing protection against defamation suits does not in any way interfere with the first goal. Facebook or Twitter should not have to worry about being sued for defamation because they remove child pornography.
As for the second point, while it is true that these sites cannot monitor every post as it goes up, they could respond to takedown notices from individuals or organizations claiming they have been defamed. There is an obvious model here. The Digital Millennium Copyright Act requires that companies remove material that infringes on copyright after they have been notified by the copyright holder or their agent. If they remove the material in a timely manner, they are protected against a lawsuit. Alternatively, they may determine that the material is not infringing and leave it up.
We could have a similar process with allegedly defamatory material, where the person or organization claiming defamation has to spell out exactly how the material is defamatory. The site would then have the option either to take the material down or to leave it posted and risk a defamation suit.
This would effectively be treating social media sites like print or broadcast media. These outlets must take responsibility for the material they transmit to their audience, even if it comes from third parties. (Fox paid $787 million to Dominion as a result of a lawsuit largely over statements by guests on Fox shows.) There is not an obvious reason why CNN can be sued for carrying an ad falsely claiming that a prominent person is a pedophile, but Twitter and Facebook get to pocket the cash with impunity.
We can also structure this sort of change in Section 230 in a way that favors smaller sites. We can leave the current rules for Section 230 in place for sites that don’t sell ads or personal information. Sites that support themselves by subscriptions or donations could continue to operate as they do now.
This would help to counteract the network effects that tend to push people towards the biggest sites. After all, if Facebook and Twitter were each just one of a hundred social media sites that people used to post their thoughts, no one would especially care what posts they choose to amplify or remove. If users didn’t like their editorial choices, they would just opt for a different one, just as they do now with newspapers or television stations.
It is only because these social media sites have such a huge share of the market that their decisions take on so much importance. If we can restructure Section 230 in a way that downsizes these giants, it will go far toward ending the problem of a few companies controlling so much of public discourse.
Modifying Section 230 won’t fix all the problems with social media, but it does remove an obvious asymmetry in the law. As it now stands, print and broadcast outlets can get sued for carrying defamatory material, but social media sites cannot. This situation does not make sense and should be changed.