As I post this, it’s still incredibly easy to access video of the New Zealand terror attack. Only a bit of searching was needed to find it still available on Facebook, where the massacre was first live-streamed before going viral on other social media platforms such as Twitter and YouTube. The gunman wanted amplification, and he got it. It was even easier to find the shooter’s rant, infused with white supremacy and deep familiarity with the online world and its associated subcultures.
Not that tech companies aren’t trying to counter it. Indeed, they have every incentive to — both in the name of human decency and as companies already under tremendous pressure for inadequate content moderation. But as fast as the videos are pulled down, they are reuploaded. The platforms, despite cutting-edge AI and thousands of human moderators, are again proving “no match for the speed of their users; new artificial-intelligence tools created to scrub such platforms of terrorist content could not defeat human cunning and impulse to gawk,” writes Charlie Warzel in The New York Times.
But is this solid evidence of “massive incompetence” by Big Tech, as media columnist Margaret Sullivan charges in The Washington Post? I wish I knew for sure, but it’s doubtful. Live content appears particularly tricky to moderate. And throwing a legion of moderators with the best technology at the problem has proven insufficient even when just dealing with video.
Now there almost assuredly will be activists calling for new rules to make platforms more liable for the content on them. (This is probably already happening.) Yet that hardly seems like a solution — even putting aside the risk such a move presents to the fundamental openness of the internet — if the level of moderation effectiveness that the public and politicians want simply isn’t yet possible. Indeed, the recent announcement by Facebook of a strategy shift — toward encrypted person-to-person messaging, rather than one-to-many sharing — suggests more people and better AI won’t be a solution anytime soon. Easy solutions have yet to be found to this “impossible job.” And what measures are taken will assuredly have trade-offs in terms of the Cowen Trilemma: scalability, effectiveness, and consistency.
None of which should take Big Tech off the hook in terms of devoting more resources to the problem, particularly when it comes to amplification. But it is the culture that built up the internet — one the shooter is intimately familiar with — that is at least as much the problem here as the internet’s basic infrastructure or how tech firms are responding. Users should probably have better moderation tools at their disposal, but what about our fellow humans who actively desire this sort of content in their feeds or timelines? Or the politicians who egg them on, whether explicitly or with subtlety, or ignore what appears to be a global supremacist movement that hates the West?
Some panicky pols would shut down various platforms — only temporarily, I hope — during events like those in New Zealand. While terrorists take advantage of our open society, they also hate it. Let’s not do their job for them.