Section 230 of the Communications Decency Act is one of the internet’s most important and most misunderstood laws. It’s intended to protect “interactive computer services” from being sued over what users post, effectively making it possible to run a social network, a site like Wikipedia, or a news comment section. But in recent years, it’s also become a bludgeon against tech companies that critics see as abusing their power through political bias or editorial slant. Just this week, Sen. Josh Hawley (R-MO) introduced a major (and highly unpopular) amendment, claiming Section 230 was designed to keep the internet “free of political censorship.”
But that’s just not what happened, says US Naval Academy professor Jeff Kosseff, author of the recent book The Twenty-Six Words That Created the Internet. Twenty-Six Words is a nuanced and engaging look at the complicated history of Section 230, which was put forward as an alternative to heavy-handed porn regulation and then turned into a powerful legal shield through a series of court rulings.
Kosseff tells The Verge that he doesn’t think Section 230 is perfect. It’s led to truly painful outcomes for victims of harassment and defamation who can’t make platforms take down posts or sue them for damages. But anybody who thinks the modern internet is broken should at least understand why the law was created, why it’s so fundamental to the web, and why people are interpreting it in ways the architects never intended.
This interview has been condensed and lightly edited for clarity.
Could you lay out the state of the law before Section 230?
To really understand Section 230, you have to go all the way back to the 1950s. There was a Los Angeles ordinance that said if you have obscene material in your store, you can be held criminally responsible. So a vice officer sees this erotic book that he believes is obscene. Eleazar Smith, who owns the store, is prosecuted, and he’s sentenced to 30 days in jail.
This goes all the way up to the Supreme Court, and what the Supreme Court says is that the Los Angeles ordinance is unconstitutional. There’s absolutely no way that a distributor like a bookstore could review every bit of content before they sell it. So if you’re a distributor, you’re going to be liable only if you knew, or should have known, that what you’re distributing is illegal.
Then we get to these early internet services like CompuServe and Prodigy in the early ‘90s. CompuServe is like the Wild West. It basically says, “We’re not going to moderate anything.” Prodigy says, “We’re going to have moderators, and we’re going to prohibit bad stuff from being online.” They’re both, not surprisingly, sued for defamation based on third-party content.
CompuServe’s lawsuit is dismissed because the judge says, yeah, CompuServe is the electronic equivalent of a newsstand or bookstore. The court rules that Prodigy doesn’t get the same immunity because Prodigy actually did moderate content, so Prodigy is more like a newspaper’s letters to the editor page. So you get this really weird rule where these online platforms can reduce their liability by not moderating content.
That really is what triggered the proposal of Section 230. For Congress, the motivator for Section 230 was that it did not want platforms to be these neutral conduits, whatever that means. It wanted the platforms to moderate content.
In your book, it sounds like the discussion revolved around porn and defamation. Those were the things people were really worrying about. Is that an accurate reading?
Yeah, it’s kind of quaint! Or… mostly. The biggest concern would be sort of “indecent but not obscene” pornography and defamation. But I will say there were some really tough cases in the earliest days of Section 230. There’s a case — the second case ever decided under Section 230 — where a mother was suing AOL in Florida because its chatrooms were used to market pornographic videos of her 11-year-old son.
But, obviously, with the internet playing a much more central role in society, the number and the complexity of these cases has increased significantly.
You also wrote about a man who was getting harassed because someone posted his phone number under ads for Oklahoma City bombing-related merchandise.
That was the Zeran v. America Online case, the first case ever decided, and that really set the precedent for all of Section 230. The guy had his phone number and name posted on a really tasteless ad about the Oklahoma City bombing less than a week after it had happened, and he was just getting constant threats. He had to go on psychiatric medication. It was really horrible for him. So there’s always been this balance that we’ve had to strike between the ability to speak freely online and these real harms that people are suffering.
At what point did people start seeing Section 230 as something with a partisan political bent that favored one party or another?
There are really two competing criticisms, and I think it really started after the 2016 election.
You have the Nancy Pelosi [Democratic] argument that the platforms are not moderating enough. I think that was driven a lot by some of the propaganda and fake news from foreign governments that was appearing on platforms. Then you have the Ted Cruz and Josh Hawley [Republican] argument that they’re moderating too much because, as they stepped up their moderation efforts, they started to make these judgments about what content violates their terms of service.
Casey Newton’s article on The Verge obviously illustrates one aspect of it. When you think about what the moderators are dealing with at this rapid-fire pace, and then to expect all of their decisions to satisfy everyone, that’s not going to happen — especially as platforms play a much larger role and step up their moderation practices.
It’s really been in the past year where we’ve seen this argument that Section 230 requires “neutrality.” Now, that’s always a judgment Congress could make. But I spoke with both [Section 230 architects] Sen. Ron Wyden (D-OR) and former Rep. Chris Cox (R-CA) extensively, and I spoke with most of the lobbyists who were involved at the time. None of them said that there was this intent for platforms to be neutral. In fact, that was the opposite. They wanted platforms to feel free to make these judgments without risking the liability that Prodigy faced.
How do you think the mistake that Section 230 is about splitting internet services into categories of “platform” or “publisher” came about?
I have no idea! That’s just not, I mean… I don’t know.
Was it around the same time that Section 230 became a political issue?
Yeah, it’s been in the past year or so. There has been an increased amount of criticism of the platforms. But it’s not accurate. There’s a First Amendment distinction between publishers and distributors. But since Section 230’s been in place, that’s not really been an issue for the internet.
Could you expand on the difference between a publisher and a distributor?
Basically, a distributor under the First Amendment would be liable for content it distributes only if it knows or should have known about illegal content. So that would be a bookstore or newsstand. But a publisher, which would be like a newspaper, can be sued and face the same liability as the author.
Prodigy was treated as a publisher before Section 230 and CompuServe was treated as a distributor. Now, until the Zeran case was decided, there might have been a little uncertainty. Would Section 230 just mean that all online services and websites should be treated as distributors, meaning they’re liable if they knew or should have known about illegal material? But what the Fourth Circuit said is that distributor liability is basically a subset of publisher liability. So by saying you can’t be treated as the publisher or speaker, that means that you’re not going to be liable at all.
That could have been very different. If the Florida court had been the first to rule, there’s a really good chance that you would have had a narrower interpretation of Section 230 that basically said, “Once you know about potentially illegal content, you have to take it down or face liability.” But that’s not what happened. So Section 230 has been this really broad shield.
Could people actually restructure Section 230 in a way that applies differently to “platforms” and “publishers”?
I can’t really say that because I don’t really know what they mean by that. Would a platform have to be something that takes absolutely no steps to filter any content whatsoever? If you look at Casey’s article, I’m not sure that’s the world we want to live in — with all of that horrific, vile content that we’re talking about. I mean, if that’s where we want to go, I would probably not want to be on the internet.
But I’m also not an absolutist 230 defender or a defender of platforms. I think they’ve made a lot of missteps. I see really legitimate concerns about saying, “Well, maybe your procedures and policies are biased against certain groups of people and certain political viewpoints.” Platforms need to have a much more transparent method of communicating what they’re doing. I think that because there was, frankly, a level of arrogance for so long in the tech community, the story isn’t really getting through.
What’s your ideal vision of how Section 230 should work now since we’re in a very different world from the ‘90s?
My top issue is that there are some platforms where I think they really are taking a role in creating the content — not most of them, but there are some really bad actors that do. Section 230 usually gets invoked at the motion to dismiss stage, so there’s often not an opportunity for the plaintiff to show that the platform helped create the content. So I’d like to see, in certain cases where the plaintiff has presented a relatively strong case, a judge have the discretion to allow limited discovery just on that issue before making a decision.
Another area I think could be addressed is that Section 230 does have a few exceptions: it has an exception for intellectual property law, and it also has an exception for federal criminal law. The problem is that states often lead the charge on issues like sex trafficking, revenge pornography, and so forth, and there’s no exception for state criminal law.
There are some legitimate reasons for that, including that states can pass sort of extreme laws that can be really chilling to online speech. What I would like is if we could have some kind of limited exception for state criminal prosecutions under state laws that at least operate within the confines of federal criminal law. That would allow states to enforce laws that at least have enough similarity to federal criminal laws that you’re not subjecting platforms to 50 different state laws.
Do you have specific thoughts on Sen. Hawley’s bill?
I get the concerns that people have expressed. I have not seen any good data about whether certain political viewpoints are being suppressed and to what extent, so I can’t really even say whether that is a problem. I understand the concern if it were a problem because platforms do have such sweeping scope. But I mean, I’m a former journalist and First Amendment attorney, so I obviously get anxious when there is a commission of five political appointees making decisions about whether a platform is politically neutral.