Considerable attention in the UK has been focused on regulating the internet.
While some argue that regulating the internet is impossible, or even counterproductive, fears over security risks tend to bolster calls for regulation. This is particularly the case when extremist and terrorist content continues to be hosted online, unchallenged.
Work by my colleague Mubaraz Ahmed, for example, has found that each month, more than 484,000 Google searches take place globally using keywords that return results dominated by extremist material. More concerning is the fact that ‘high risk’ keywords like ‘caliphate’ and ‘Dabiq’ – the name of Islamic State’s English-language magazine – go largely unchallenged.
Coupled with this ease of access are newly created powers within the United Kingdom’s Counter-Terrorism and Border Security Bill. Now, it is a crime to view terrorist-related material online three or more times, with a penalty of up to 15 years in jail.
In my opinion, placing this penalty on the end user is somewhat unfair. If responsibility for consumption is to lie with the consumer, easier channels must be created for reporting extremist content and those who promote it.
In fact, one could question why internet giants are not made to take more responsibility for allowing extremist or terrorist content to remain on their platforms – which leads us back to the debate on regulation.
The first issue is cooperation. Extremist content, instructional terrorist material, and funding campaigns that raise money for terrorist groups can be found on all parts of the internet, with varying degrees of accessibility. Regulation of the internet will therefore only be possible with the cooperation of multiple government agencies, private sector companies, and end users, particularly when it comes to removing harmful or hateful material and content that threatens national security. Given the often hostile environment that tech companies face in parliamentary enquiries into their self-regulation, such cooperation seems challenging.
Second, the legal liability of online platforms for the content they host can only be determined after deciding how best to deal with unacceptable online content: that of an extremist and/or illegal nature. The removal of extremist and terrorist content from the internet – particularly where artificial intelligence programs perform ‘bulk’ removals – risks destroying evidence needed to prosecute individuals who disseminate content or provide material support to terrorist organisations. Technology companies should work with law enforcement to ensure that this material is not simply removed, but archived effectively so that patterns of behavior can be understood.
Third, technology companies need greater transparency from governments on how terrorism and extremism are defined for legislative purposes, particularly on how terrorism is defined online. Given that there is no comprehensive international legal definition of terrorism and the internet is a global space, it is perhaps unsurprising that technology companies have struggled to remove content seen as facilitating radicalization from their globally operating platforms.
The existing powers and regulations available in the United Kingdom to audit and regulate the internet are unclear. Further complicating the matter is the fact that companies such as Google and Facebook operate as quasi-monopolies and enjoy dominant market positions.
The most desirable option when it comes to moderating content is to apply greater pressure on these companies to adopt and implement a self-regulatory model in which the removal of extremist content hosted on their platforms is made transparent and accountable through the publication of a quarterly report.
Such reports should include statistics on content flagged by users, the outcomes of investigations into flagged content, the decision-making systems these companies employ for content removal, case studies, and areas for improvement.
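As a purely illustrative sketch, such a quarterly report could also be published in machine-readable form alongside the prose version. Every field name and figure below is a hypothetical assumption, not any company’s actual reporting schema:

```python
# Hypothetical sketch of one quarter's machine-readable transparency
# record; field names and figures are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class QuarterlyTransparencyReport:
    quarter: str                 # reporting period, e.g. "2019-Q1"
    items_flagged_by_users: int  # user reports received
    items_reviewed: int          # reports investigated
    items_removed: int           # removals resulting from review
    removal_methods: dict        # breakdown by human vs automated review
    case_studies: list = field(default_factory=list)
    areas_for_improvement: list = field(default_factory=list)

report = QuarterlyTransparencyReport(
    quarter="2019-Q1",
    items_flagged_by_users=120_000,
    items_reviewed=95_000,
    items_removed=41_000,
    removal_methods={"human_review": 28_000, "automated": 13_000},
)
```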
Transparency will further incentivize technology companies to cooperate in this field, and has the potential to foster further innovation in the successful removal of extremist and hate content.
Crucially, the public should be able to report and flag extremist content they find on the internet to the companies hosting it – and these concerns should be taken more seriously, with a genuine conversation between company and user.
For example, there is still no ‘flagging’ system allowing users to report instructional terrorist manuals or disturbing extremist content in Google search results, even as autocomplete software suggests extremist literature or directs vulnerable people who may consume this content towards further extremist material (in multiple languages). Internet users must be able to flag content as specifically terrorist-related on all social media sites, rather than merely as ‘hate content’ – a distinction a reporting interface could capture directly, as sketched below.
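Here is a minimal sketch, in Python, of what such a category-aware flagging interface might look like. The category names and the submit_flag helper are hypothetical, not any platform’s actual API:

```python
# Hypothetical flagging interface that distinguishes terrorist-related
# reports from generic 'hate content'; names are illustrative only.
from enum import Enum

class FlagCategory(Enum):
    HATE_CONTENT = "hate_content"
    TERRORIST_PROPAGANDA = "terrorist_propaganda"
    INSTRUCTIONAL_TERRORIST_MATERIAL = "instructional_terrorist_material"
    TERRORIST_FUNDING = "terrorist_funding"

def submit_flag(url: str, category: FlagCategory, notes: str = "") -> dict:
    """Record a user flag; terrorist-related categories are routed for
    priority review instead of the generic hate-content queue."""
    priority = category is not FlagCategory.HATE_CONTENT
    return {"url": url, "category": category.value,
            "priority_review": priority, "notes": notes}

flag = submit_flag("https://example.com/page",
                   FlagCategory.INSTRUCTIONAL_TERRORIST_MATERIAL)
```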
One possible solution would be the creation and dissemination of trusted third-party programs that work with Google and other search engines to make such extremist material less visible.
Finally, it is the responsibility of technology companies to ensure that their algorithms do not, on the basis of a user’s search history, lead that user to sites containing terrorist propaganda.
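To make the principle concrete, here is a minimal sketch of such a safeguard: candidate recommendations derived from search history are filtered against a blocklist of URLs already flagged as terrorist-related, so personalisation never overrides safety. The function name and blocklist source are illustrative assumptions, not a description of any company’s actual system:

```python
# Minimal sketch: filter history-based recommendations against a
# blocklist of flagged URLs; all names here are illustrative only.
from typing import List, Set

def safe_recommendations(candidates: List[str],
                         flagged_urls: Set[str]) -> List[str]:
    """Drop any candidate flagged as terrorist-related, however strongly
    the user's search history predicts engagement with it."""
    return [url for url in candidates if url not in flagged_urls]

history_based = ["https://example.com/news", "https://example.com/flagged-page"]
flagged = {"https://example.com/flagged-page"}
print(safe_recommendations(history_based, flagged))
# -> ['https://example.com/news']
```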
Regulating companies on any failures in the above areas will help us move beyond a model of all talk and little action.