

When rioters stormed the US Capitol on January 6, social media platforms could no longer deny their role in undermining democracy. Mark Zuckerberg admitted that the risk of allowing Donald Trump’s continued access to Facebook was “too great” after the then-US president appeared to be egging on the insurgents via his social media accounts. Twitter acknowledged that Trump’s tweets could “inspire others to replicate the violent acts” against the heart of US democracy.
But Trump is not the only authoritarian leader abusing social media, and America is not the only country to suffer from it. Variations of this story are playing out in the Philippines, Cameroon, Libya and a dozen other countries that rarely hit the headlines.
The organisation we work for, the Centre for Humanitarian Dialogue, has spent the past 20 years mediating peace talks. We sit between rebel groups and governments and try to forge ceasefires and political negotiations. It's risky work that doesn't always pay off. In recent years, a new phenomenon has made an already difficult job much harder: sophisticated networks of misinformation and disinformation on social media.
Take Libya, where the United Nations is holding peace talks to end years of fighting. As the parties negotiate inside a hotel in Tunis, fake accounts originating from outside Libya are fanning the flames of war on Facebook and Twitter. They have alleged corruption, circulated false agreements and called for violence against those involved in the talks.
In Colombia, social media was used to deliberately undermine the legitimacy of the government's peace talks with the FARC rebels, ultimately contributing to the agreement's defeat in a public referendum.
Elsewhere, the link between social media and violence is even more direct – see the cautionary tale of Myanmar, where disinformation on Facebook was widely implicated in the Rohingya genocide. So – what should be done?
First of all, we should not wait until after the fact to act. It took a genocide in Myanmar and an insurrection in Washington to get platforms to take concerted measures to address violence. But it’s not as if there weren’t warning signs.
Looking out for those signs isn’t easy in a war zone. It requires local language knowledge and an understanding of context that algorithms can’t provide. Finding people who have those skills is hard, but the companies must do better. Their coverage of the “rest of the world” remains dramatically under-resourced. If social media companies continue to offer their services in fragile states where the risk of violence is high, they must invest seriously in limiting the harm their platforms can do.
And precisely because war zones are complicated, the platforms should listen more to those whose job is to understand and address conflict – organisations such as the UN, or specialist disinformation researchers. These people could guide platforms on how misinformation can incite violence, and alert them ahead of events likely to attract disinformation. For instance, if a crucial round of peace talks is scheduled, social media companies could be told so that they can step up their resourcing and monitoring. Neither mediators nor the platforms are having this kind of conversation right now.
That is a problem. It’s now commonplace for social media companies to put in special measures around elections, with the adoption of stricter moderation rules and the investment of extra resources into enforcement. The rationale is obvious: elections are a major civic process and what happens online can undermine their integrity. But the same can be said of peace talks, which often determine not just who is in government but the very building blocks of the state.
With so much at stake, it's no surprise that such negotiations attract spoilers. Every war has individuals invested in continued conflict, and many of them are acquiring sophisticated information operations capabilities, either independently or from foreign backers.
That's why social networks need a policy for peace talks – one that starts with being aware of them. Right now, Facebook staff sit down regularly to look at election calendars, decide which ones could lead to violence, and allocate company resources accordingly. But there is no equivalent process for spotting countries entering a delicate round of negotiations. This gap must be filled.
Companies wouldn’t need to reinvent the wheel. Many of the policies deployed by Facebook and Twitter around elections could be adapted to protect peace talks. Misinformation about talks could be labelled in local languages, with links to genuine information such as official statements by the UN. Content that aims to intimidate negotiators should be removed or labelled. All of this can be done in a way that still allows for legitimate criticism of the process.
That said, while social media companies clearly need to step up, we should be realistic about how much they will actually do to protect peace. However bad Libya’s civil war gets, it will never get as much attention as what happens in the US. There are also limits to how much social media “whack-a-mole” can achieve. Pulling down networks and accounts makes life harder for bad actors, but they’ll return in another guise before long. Labelling misinformation after it has appeared won’t change the minds of many.
The challenge is stopping the issue at the source. Could mediators persuade warring parties to put down not just their guns but their fake Facebook accounts, too? Are digital ceasefires possible? This wouldn’t be easy. But it’s worth trying, as nothing would make more of a difference than dealing with the problem before it hits social media platforms in the first place.
The barriers are less steep than many peacemakers assume. Mediators often feel, mistakenly, that what happens online is too opaque to do anything about. But the capacity to detect information operations exists in organisations such as Graphika, the Stanford Internet Observatory and the Atlantic Council. Ignorance is no longer an excuse.
Then it's up to mediators to persuade and pressure those at the negotiating table to exercise restraint – online as well as offline. Just as in any negotiation, if one side is vastly more powerful than the other, compromise won't be easy to find. But if everyone is suffering, perhaps an online détente can emerge.
We are at a turning point in the history of big tech. The question is whether we want the debate to focus narrowly on defending Western democracy, or whether we take a wider view, one that includes countries at war where social media is proven to cause real-world harm. Fixing the online world will be a Sisyphean task, but just like ending wars in the real world, we have to try. Let's not wait until the next ethnic cleansing abetted by online hate speech to have this conversation.
Maude Morrison is a social media and conflict mediation adviser at the Centre for Humanitarian Dialogue. Adam Cooper is a senior programme manager at the Centre for Humanitarian Dialogue.