In an age of misinformation and conflict, Section 230 of the Communications Decency Act, a key rule of the internet era, is at the center of a major political debate. But any changes could create liability headaches for site operators.
The Communications Decency Act (CDA) of 1996 was hugely controversial and one of the most widely debated laws relating to online conduct.
Passed as part of a broader telecom bill, it was widely seen as an attempt to censor digital culture and became the subject of one of the internet’s first grassroots campaigns—and the courts sided with the activists. Most of the law was struck down on First Amendment grounds and has been off the books for more than two decades.
One section of the law that did stick around, however, has ironically turned into an essential building block of digital freedom. Section 230 of the CDA shields the operators of online services from legal liability for content posted by their users, a move that helped open up the digital ecosystem for different kinds of businesses—such as forums, digital review sites, blogs with comment sections, and (of course) social networks. The Electronic Frontier Foundation calls it “the most important law protecting internet speech.”
But now, decades after its passage, this section of the law is drawing some passionate criticism not unlike what the rest of the CDA generated nearly 25 years ago. And the reason has everything to do with the political climate.
Last year, Congress passed a law aimed at fighting sex trafficking, known as FOSTA-SESTA, that carved an exception into Section 230’s liability shield. Its passage led many social networks and websites, most notably Tumblr and Craigslist, to scale back their services due to newfound legal exposure.
And in recent months, the rest of Section 230 has come under attack from all sides of the political spectrum. Critics say it allows misinformation and hate speech to run rampant online while also enabling potentially biased moderation strategies, and prominent politicians in both parties have spoken out.
Perhaps the most prominent advocate for scaling it back has been British comedian Sacha Baron Cohen, best known for his edgy characters Borat and Ali G. He recently argued for a legislative solution to the issue during a speech given after accepting an award from the Anti-Defamation League, which was later adapted into a Washington Post op-ed. Cohen spoke out against the damaging nature of misinformation and racism online and the failure of major companies to properly moderate what happens on their sites.
“Facebook, Google and Twitter are unthinkably rich, and they have the best engineers in the world,” he wrote. “They could fix these problems if they wanted to.”
Some in the association space have already argued against this effort. Michael Petricone, senior vice president of government affairs for the Consumer Technology Association, recently wrote in an op-ed for The Hill that the threat of legal liability would lead many providers to moderate their content less, for fear of rocking the boat.
“Weakening this law would allow an unprecedented level of online censorship, whether through new legal caution or a new regulatory mandate,” Petricone wrote. “In fact, to put this in perspective: Anyone who has ever forwarded an email, a picture, or any political speech has been protected by Section 230.”
In comments to CNBC, Internet Association President and CEO Michael Beckerman argued that the law is widely misunderstood.
“A lot of the rhetoric that’s coming out of Congress is almost the opposite of what the reality is,” Beckerman said in September. “Section 230 is what enables all user-generated content online.”
Assessing the Impact
It’s early, but given the attention the issue is receiving at the moment, it’s worth keeping an eye on what comes of it—especially because it could directly affect how associations manage their own services, as well as how they interact with the outside world.
A June bill introduced by Sen. Josh Hawley (R-MO) gives us an idea of what potential legislation could look like, even if the specifics end up quite different. The bill [PDF], which has stalled so far and which some legal experts view as unconstitutional, would require large technology companies to undergo audits to ensure their moderation strategies are politically neutral.
It has a specific carve-out for small and midsize companies (those with fewer than 30 million users or less than half a billion dollars in revenue), as well as for nonprofits. But so much of our conversation happens on, or is managed by, large platforms—and backend technology vendors such as Cloudflare, which have hosted or offered services to controversial websites, have come up in Section 230 discussions as well—that the ripple effects could reach beyond front-facing websites or large social networks. After all, just as Section 230 made an array of new tech ideas possible, any effort to rein it in could stifle those ideas.
And Hawley’s legislation is just one proposal—another focused on Cohen’s concerns about misinformation and hate speech would likely look dramatically different. But both would have a similar effect: introducing liability where none existed before, which could cause problems throughout the digital ecosystem.
As the original Communications Decency Act has shown, there is great potential for ripple effects when we mess with the basic tenets of digital law—and not just for the big guys, either.
So keep an eye on this one. It’s a debate that could complicate an already messy digital landscape, depending on how the dice roll.