The pandemic has shown how central global social media, messaging and collaboration platforms have become in people’s everyday lives. However, we don’t yet understand the trade-off between the security and privacy afforded by those platforms and the real costs of using them. We’re used to being told that privacy is a commodity, for example, but no one is really sure what is being sold, or what the real price is.
In 2022, that will change. We will finally see the “privacy versus security” argument exposed as a false dichotomy and the reality of the privacy intrusion generated by most “free” services made clear. Next year, consumers will be able to make increasingly informed decisions about the wider societal impacts of the services they use.
Technology connects our world in ways which were unthinkable even a decade ago. It magnifies our ability to create many positive outcomes, but it also holds a mirror up to societies and it can – and does – enable and exacerbate real harm, where real crimes have real victims. Criminality used to be local – local perpetrators, local harm and local law enforcement. Many global platforms now enable globalised criminality, from investment scams through transnational organised crime to child sexual exploitation.
This isn’t by design, and many platforms work with law enforcement to discover and prosecute criminality hosted on, or enabled by, their services. But we are now seeing a worrying trend of companies designing out public safety on the premise of designing in some form of privacy.
One argument against designing systems that allow for proper public safety is that doing so would create a catastrophic vulnerability – risking global cyber-security and allowing oppressive regimes to conduct mass surveillance. I don’t believe this. Encryption isn’t a fragile snowflake; it’s maths with well-defined properties. It can be designed to prevent bad things from happening while providing transparency and audit when needed. To use a real-world example, we do not build schools without fire doors, even though a fire door is a potential vulnerability in a way a solid wall is not – it contains a lock that could be picked. Instead, we understand that fire doors are something to be monitored with cameras and alarms, something to be managed properly. The alternative – people at risk of burning to death – is unconscionable. In the online world too, safe design is as important as user privacy. Next year, companies and consumers will see that they are both core requirements of technology.
Some people believe that artificial intelligence will fix everything, but that’s unlikely. For many types of harm or abuse, metadata alone isn’t enough to train an AI model reliably. As offenders find new ways of abusing services, it is very hard to evolve AI models without some access to content, which is where you find human intent. Using AI in this way would also lead to a dystopian Minority Report-style future, where a “magic box” decides that someone has probably committed a crime, but we have no evidence either to exonerate or to convict them, because no one can see the content that decision was based on.
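The limitation described above – that metadata alone cannot capture human intent – can be made concrete with a toy sketch. This is an invented illustration, not any platform’s real detection pipeline: the field names and messages are hypothetical. It shows that two messages with identical metadata but opposite intent produce the exact same feature vector, so any classifier restricted to metadata must score them identically.

```python
# Toy illustration (hypothetical fields, not a real system): intent lives in
# content, which a metadata-only model never sees.

def metadata_features(msg):
    """Features visible without reading content: sender, recipient,
    message size, and hour of day."""
    return (msg["sender"], msg["recipient"], len(msg["content"]), msg["hour"])

# Two messages, constructed to have identical metadata (same parties, same
# length, same time) but entirely different intent in their content.
benign = {"sender": "A", "recipient": "B", "hour": 21,
          "content": "see you at the game"}
harmful = {"sender": "A", "recipient": "B", "hour": 21,
           "content": "send the money now."}

# Identical metadata -> identical feature vectors -> any metadata-only
# classifier must assign both messages the same score.
print(metadata_features(benign) == metadata_features(harmful))  # True
```

The point of the sketch is not that metadata is useless – it is often a strong signal – but that whenever the distinguishing evidence exists only in content, a model denied all access to content cannot, even in principle, learn the distinction.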