Terrorists will move to where they can’t be moderated


At Tech Against Terrorism we’re gravely concerned about the terrorist use of the internet, but perhaps not for the reasons imagined. Given the increased clamour for government regulation of citizens’ online content in the UK, EU, Australia, Canada and many other jurisdictions, you might think that terrorists and violent extremists are rampaging across social media with impunity. This could not be further from the truth.

Despite appearances, over the past decade governments have done very little to define criminal thresholds for online content or to enforce existing laws. “Designation lists”, whereby groups are defined as terrorist in law, are often the only legal basis available to platforms for determining what constitutes terrorist content.

Over the past decade, big tech has filled this void by devising its own rules and regulations. In most cases, tech companies decided to tackle terrorist content years before governments even started to acknowledge the threat. The violent far-right is a good example: to date there are only a handful of terrorist-designated far-right organisations globally, and most of these are already defunct. At the international level, governments can’t even agree on the definition of terrorism, so Facebook was forced to make its own.


Today, the vast majority of content that incites violence or is affiliated with designated terrorist organisations is automatically removed from large platforms before anyone sees it. The minority of content that does get through has typically been significantly altered to evade detection.

Most designated terrorist organisations have long since abandoned mainstream social media in favour of alternative platforms or self-developed technologies that rely on decentralisation and enhanced levels of encryption. The most persistent examples of terrorist content are confined to smaller messaging apps and social media platforms that have limited funding and resources to moderate content at scale. Terrorists and violent extremists are also increasingly developing their own apps and running their own websites.

The internet in five years’ time could well look drastically different from today. The two opposing drivers of this change are the public’s demand for increased privacy and governments’ demand for increased control over their citizens’ online activity. The paradox will be that public speech on large, centralised platforms such as Facebook, Twitter, TikTok and WhatsApp will likely become much more closely regulated by governments.


But the inevitable result will be much greater use of decentralised social media, data storage and messaging platforms. The so-called DWeb – the decentralised web – might not be a household name now, but it almost certainly will be in the future. However, the nature of the DWeb means that content moderation is currently much more difficult, if not impossible. Think Facebook meets Napster, but with added security, in-built anti-censorship and an underlying cryptocurrency.

Amidst this confusion, terrorists and violent extremists are likely to find yet more ingenious ways of exploiting the internet for their own purposes. Instead of becoming distracted by proposals for vaguely defined “legal but harmful” content, as in the UK, we should be much more ambitious in considering how to apply existing laws about criminal conduct online more effectively, and how to anticipate content moderation requirements for an internet where decentralisation and end-to-end encryption are commonplace.

These problems will pose awkward regulatory questions, which doesn’t bode well for governments resigned to fixing the internet with the bluntest of instruments, or not at all. Why is it that designated terrorist organisations are currently able to register their own domain names? Why are so few far-right extremist groups recognised as terrorists? Why does law enforcement have such limited funding and resources to investigate online hate speech and incitement to violence?

It’s easy to ask the tech sector to “do more”, but in practice, this is an admission of government dereliction of duty. Governments are all too prepared to blame the internet for society’s ills without putting in the groundwork to improve online governance. If individuals commit crimes online by inciting violence then they should be investigated, prosecuted and, if found guilty, sentenced: short-circuiting this judicial process will not make society safer.

Decentralised social media and file storage will likely become the norm within the next ten years. The question will become: how do we devise a decentralised content moderation mechanism that is based on consensus and prevents criminal use?