The USMCA trade deal includes a little-noticed provision with significant implications for Canadian internet users: it makes internet companies immune from liability for the content their users post. The Googles and Facebooks of the world will not be liable for hate speech or reputation-damaging lies published on their platforms.
Until now, Canada has operated more like Europe, where a company might be liable if it fails to deal effectively with content complaints. That will change once the USMCA is implemented, and tech companies will be free to regulate their sites as they wish without risk of liability.
For the most part, this is a good proposition. We should all be concerned about protecting freedom of expression online. We should also strive to make the responsible party liable for abusing that privilege, namely the person who authored and posted the content, not the tech company. However, companies should bear some responsibility in this regulatory space, and the USMCA makes it difficult for Canada to carve its own path on what that responsibility should look like.
The USMCA provision is modelled on a U.S. law, Section 230 of the Communications Decency Act, which is heralded in many circles as the law that made the internet the free space it is today. Without protecting tech companies from liability, the argument goes, they will not innovate and censorship will prevail. This was all well and good when the law came into effect 20 years ago; Section 230 created stability after a series of questionable legal decisions. But the internet has matured since the 1990s, and in taking stock, one side effect of Section 230 is notable: as much as it has enabled free speech online, it is also the source of much misery. Think of the revenge pornography you can't get a site to take down, or the lies and privacy invasions immortalized online. If the site is American-based, you can thank Section 230.
The chief flaw of Section 230 is its bluntness. It reflects the polarized policy discussions of the 1990s, which framed the choice as one between censorship and freedom. That narrative persists, but it is neither accurate nor helpful. For one, a goal of Section 230 was to encourage corporate responsibility, yet nothing in the provision maps out how to achieve this. As a result, some companies have sophisticated systems to deal with online abuse, such as Facebook's Community Standards, largely borne of public pressure rather than the influence of Section 230. Other companies have no system in place at all, yet still enjoy the protections Section 230 affords. The language of the USMCA is similarly flawed: it immunizes tech companies from liability while remaining silent on principles of responsibility.
This is unfortunate timing, because we have learned a lot in the last 20 years, and new species of regulation are emerging that take a more nuanced approach to encouraging corporate responsibility. Consider our privacy laws, which are principle-based: they give companies marching orders on a set of standards but leave open how a company achieves those goals. Other examples include reporting requirements, dispute resolution standards and other notice rules. Even the U.S. is backtracking from Section 230, with amendments introduced earlier this year. This regulatory nudging is the future of technology regulation, particularly if we are committed to both freedom of expression and competing rights such as the rights to privacy, security and reputation.
It remains to be seen whether there is wiggle room in the USMCA to design some Canada-specific regulations. What is clear is that we just agreed to a trade provision modelled on a 1990s conception of the internet.
Emily Laidlaw is an associate professor in the faculty of law, University of Calgary, and author of Regulating Speech in Cyberspace: Gatekeepers, Human Rights and Corporate Responsibility (Cambridge University Press, 2015)