In May 2019, Melissa Polinsky, director of Apple’s global investigations and child safety team, faced investigators working on the UK’s inquiry into child sexual abuse. During two hours of questioning, Polinsky admitted Apple employed just six people on its global team responsible for investigating child abuse images. Polinsky also said the technology Apple used to scan for existing child abuse images online was “effective”.
Fast-forward two years and Apple’s work to tackle child sexual abuse material has gone off the rails. On September 3 the company made a rare public U-turn as it paused plans to introduce a system that looks for known child sexual abuse materials, or CSAM, on the iPhones and iPads of people in the US. “We have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” Apple said in a statement, citing the “feedback” it had received.
So what does Apple do next? It’s unlikely the company can win over or please everyone with what follows – and the fallout from its plans has created an almighty mess. The technical complexities of Apple’s proposals have reduced some public discussions to blunt, for-or-against statements, and explosive language has, in some instances, polarised the debate. The fallout comes as the European Commission prepares child protection legislation that could make it mandatory for technology companies to scan for CSAM.
“The move [for Apple] to do some kind of content review was long overdue,” says Victoria Baines, a cybersecurity expert who has worked at both Facebook and Europol on child safety investigations. Technology companies are required by US law to report any CSAM they find online to the National Center for Missing and Exploited Children (NCMEC), a US non-profit child safety organisation, but Apple has historically lagged behind its competitors.
In 2020, the NCMEC received 21.7 million CSAM reports, up from 16.9 million in 2019. Facebook topped the 2020 list with 20.3 million reports. Google made 546,704; Dropbox 20,928; Twitter 65,062; Microsoft 96,776; and Snapchat 144,095. Apple made just 265 CSAM reports to NCMEC in 2020.
There are multiple “logical” reasons for the discrepancies, Baines says. Not all technology companies are equal. Facebook, for instance, is built on sharing and connecting with new people, whereas Apple’s main focus is its hardware, and most people use the company’s services to communicate with people they already know. Or, to put it more bluntly, nobody can search iMessage for children they can send sexually explicit messages to. Another issue at play here is detection. The number of reports a company sends to NCMEC can depend on how much effort it puts into finding CSAM. Better detection tools can also mean more abusive material is found. And some tech companies have done more than others to root out CSAM.
Detecting existing child sexual abuse materials primarily involves scanning what people send, or upload, when that piece of content reaches a company’s servers. Codes, known as hashes, are generated for photos and videos and are compared with existing hashes for previously identified child sexual abuse material. Hash lists are created by child protection organisations, such as NCMEC and the UK’s Internet Watch Foundation. When a positive match is identified, the technology companies can take action and also report the finding to the NCMEC. Most commonly the process is done through PhotoDNA, which was developed by Microsoft.
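The compare-against-a-hash-list step can be sketched in a few lines of Python. Note this is a simplified illustration only: real systems such as PhotoDNA use proprietary perceptual hashes, which match visually similar images even after resizing or recompression, whereas the plain SHA-256 used here matches only exact byte-for-byte copies. The hash list and file contents below are hypothetical stand-ins.

```python
import hashlib

# Hypothetical hash list of previously identified material, of the kind
# supplied by child protection organisations such as NCMEC or the IWF.
# (This example entry is simply the SHA-256 of the bytes b"test".)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_upload(data: bytes) -> str:
    """Generate a hash code for an uploaded photo or video."""
    return hashlib.sha256(data).hexdigest()

def matches_known_list(data: bytes) -> bool:
    """True if the upload's hash appears on the known-material list,
    at which point a company could act and report to NCMEC."""
    return hash_upload(data) in KNOWN_HASHES

print(matches_known_list(b"test"))           # True  -> positive match
print(matches_known_list(b"other content"))  # False -> no action
```

The key design point is that the server never needs to compare images directly: it compares short fixed-length hashes, so a list of millions of entries can be checked with a single set lookup.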
Apple’s plan to scan for CSAM uploaded to iCloud flipped this approach on its head and, while using some clever cryptography, moved part of the detection onto people’s phones. (Apple has scanned iCloud Mail for CSAM since 2019, but does not scan iCloud Photos or iCloud backups.) The proposal proved controversial for multiple reasons.