Apple’s proposals to scan iCloud Photos for child sexual abuse material (CSAM) have proved controversial among privacy advocates. But did you know that the company has been scanning iCloud Mail for several years? Not many people do.

On August 23, 9to5Mac ran a story titled “Apple already scans iCloud Mail for CSAM, but not iCloud Photos.”

The article revealed a statement in which Apple confirmed that it has been scanning iCloud Mail uploads for CSAM. Apple did not reveal any details about its scanning operations, but said that it did not currently scan iCloud Photos.

Apple prides itself on user privacy, so the story caused quite a stir. It’s particularly significant in light of Apple’s controversial plans to scan iCloud Photo uploads before they leave users’ devices.

This news came as a surprise to some people—and it was picked up by several other outlets including Gizmodo and CNet, the latter of which characterized Apple’s comments as a “revelation.”

But this story isn’t quite as revelatory as many people think. We’ve actually known about Apple’s scanning of iCloud Mail messages for several years.

This raises a number of questions:

  • Did Apple properly inform its users about its existing scanning technology?
  • What technology is Apple using to perform these scans?
  • If Apple is scanning email for CSAM, why does the company make so few reports to NCMEC?
  • If Apple has been scanning iCloud Mail for so long, why are the company’s proposals for scanning iCloud Photos uploads such a big deal?

I’ll address each question in turn.

 

Is this in Apple’s privacy notice?

For the sake of this article, I’ll assume Apple’s iCloud Mail scanning applies only to U.S. users, as the U.S. is where Apple plans to roll out its iCloud Photos scanning policy.

Apple’s main privacy policy, last updated June 1, 2021, contains the following paragraph in the section titled “Apple’s Use of Personal Data”:

Security and Fraud Prevention. To protect individuals, employees, and Apple and for loss prevention and to prevent fraud, including to protect individuals, employees, and Apple for the benefit of all our users, and prescreening or scanning uploaded content for potentially illegal content, including child sexual exploitation material.

This provision appears to have been in Apple’s privacy policy since May 2019.

9to5Mac also presents the following passage from a January 2020 archived version of Apple’s child safety page:

We have developed robust protections at all levels of our software platform and throughout our supply chain. As part of this commitment, Apple uses image matching technology to help find and report child exploitation. Much like spam filters in email, our systems use electronic signatures to find suspected child exploitation. We validate each match with individual review. Accounts with child exploitation content violate our terms and conditions of service, and any accounts we find with this material will be disabled.

Apple’s iCloud terms also state the following:

Apple reserves the right at all times to determine whether Content is appropriate and in compliance with this Agreement, and may screen, move, refuse, modify and/or remove Content at any time. 

Weirdly, earlier versions of these terms (until mid-2020) used the word “pre-screen” rather than “screen.”

It’s fair to say, then, that Apple did provide its users with some sort of notice of its practices—although frankly, the above explanations are somewhat vague.

But privacy notices aside, Apple’s iCloud Mail scanning has been picked up by the media before.

In a February 2020 article, Forbes reported on a search warrant, filed in Seattle, which revealed that Apple had passed seven emails containing CSAM to law enforcement. 

And a month earlier, in January 2020, Apple’s chief privacy officer Jane Horvath said that the company had started “utilizing some technologies to help screen for child sexual abuse material.”

So the 9to5Mac story is interesting in that it further confirms Apple’s existing practice, but the fact that Apple scans iCloud Mail for CSAM is not actually a “revelation.”

  

What technology is Apple using to perform these scans?

Apple does not appear to have provided a detailed explanation of how it scans iCloud Mail messages for CSAM. I’ve asked Apple to comment on this.

The company’s technical description of its proposals for iCloud Photo-scanning doesn’t refer to any scanning already in place for iCloud Mail uploads.

The above-quoted passages from Apple’s policy documents tell us that the company:

  • Prescreens or scans uploaded content
  • Uses image-matching technology and electronic signatures
  • Validates matches with “individual review”
  • Disables accounts found to be uploading CSAM

Some have speculated that Apple uses Microsoft’s PhotoDNA for scanning purposes (like Google, Facebook, Twitter, and others), but this has never been confirmed. 
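
Hash-based image matching of this kind generally works by reducing each image to a compact fingerprint and comparing that fingerprint against a database of fingerprints derived from known abuse material. The sketch below is purely illustrative: it uses a simple “average hash,” not PhotoDNA (which is proprietary) or anything Apple has confirmed, and the file names and known-hash set are hypothetical.

```python
# Illustrative hash-based image matching -- not Apple's or Microsoft's
# actual algorithm. Requires Pillow (pip install Pillow).
from PIL import Image

HASH_SIZE = 8  # an 8x8 grid gives a 64-bit fingerprint


def average_hash(path: str) -> int:
    """Reduce an image to a coarse 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")


def matches_known_hash(path: str, known_hashes: set[int], threshold: int = 5) -> bool:
    """Flag an image whose fingerprint is within `threshold` bits of a known hash."""
    candidate = average_hash(path)
    return any(hamming_distance(candidate, known) <= threshold for known in known_hashes)


# Hypothetical usage -- in a real system the known hashes would come from a
# curated database (such as NCMEC's), not be computed locally like this:
# known_hashes = {average_hash("known_image.jpg")}
# print(matches_known_hash("incoming_attachment.jpg", known_hashes))
```

In a real deployment, a match would then be escalated for the kind of “individual review” Apple describes, rather than triggering automatic action.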

But isn’t iCloud Mail encrypted? Yes and no. Apple’s iCloud security overview explains:

All traffic between your devices and iCloud Mail is encrypted with TLS 1.2. Consistent with standard industry practice, iCloud does not encrypt data stored on IMAP mail servers. All Apple email clients support optional S/MIME encryption.

In other words, Mail messages are encrypted in transit to iCloud, but not on Apple’s servers. This level of encryption wouldn’t prevent Apple from scanning Mail messages on its servers.
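
To make “encrypted in transit, but not at rest” concrete, here is a minimal sketch using Python’s standard imaplib to fetch a message from iCloud Mail over TLS. The host name, mailbox, and credentials are placeholders (Apple requires an app-specific password for third-party mail clients), and nothing here reflects how Apple’s own scanning works.

```python
# A minimal sketch: the IMAP connection is TLS-encrypted, but the message
# bodies it returns are stored unencrypted on the server -- which is what
# makes server-side scanning possible. Host and credentials are placeholders.
import imaplib

conn = imaplib.IMAP4_SSL("imap.mail.me.com", 993)  # TLS protects the link itself
conn.login("user@icloud.com", "app-specific-password")
conn.select("INBOX", readonly=True)

# Fetch the most recent message. Unless the sender applied S/MIME, the server
# hands back the full plaintext -- the same content the provider could scan.
status, ids = conn.search(None, "ALL")
if ids[0]:
    latest = ids[0].split()[-1]
    status, data = conn.fetch(latest, "(RFC822)")
    print(data[0][1][:200])  # first 200 bytes of the raw message

conn.logout()
```

Only end-to-end measures like S/MIME, applied by the sender, would keep message content opaque to the provider.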

Apple could, in theory, be scanning messages before they leave users’ devices. But given the significance of Apple’s iCloud Photo on-device scanning announcement, it seems unlikely that the company has been scanning iCloud Mail messages on its users’ devices.

 

Why does Apple make so few CSAM reports?

The 9to5Mac report that prompted this article followed up on a previous story, in which Apple’s top anti-fraud engineer, Eric Friedman, said that the company provided the “greatest platform for distributing child porn.”

I won’t speculate about what Friedman meant by this. But it’s fair to say that Apple doesn’t share much data with the National Center for Missing and Exploited Children (NCMEC) compared to its big tech rivals.

Reporting to NCMEC is routine for service providers like Facebook, Google, and Microsoft, which between them make millions of CSAM reports each year via the center’s CyberTipline.

Here’s a selection of figures from NCMEC’s data sheet “2020 Reports by Electronic Service Providers (ESP),” which shows how many CSAM reports each company made in 2020:

  • Facebook: 20,307,216
  • Google: 546,704
  • Microsoft: 97,776
  • Apple: 265

Apple’s paltry number of reports shouldn’t be surprising—if we assume that Apple respects its users’ privacy so much that it refuses to scan their communications.

But given that we know Apple does scan iCloud Mail messages for CSAM, the low number of NCMEC reports suggests one of the following:

  • Apple doesn’t scan many messages
  • Apple’s scanning technology isn’t very good
  • Not many users share CSAM via iCloud Mail

There’s also the possibility that Apple does detect a lot of CSAM but simply doesn’t often report its findings to the NCMEC. Recall that the company’s privacy policy only mentions disabling accounts—not reporting users to NCMEC or law enforcement.

 

Why are Apple’s proposed changes so controversial?

So given that Apple has been scanning iCloud Mail for years, why is the plan to scan iCloud Photos such a big deal? 

The main controversy over Apple’s plans to scan iCloud Photos uploads for CSAM is not the scanning per se. As noted, many companies besides Apple have been doing “server-side” scanning for years.

Server-side scanning of people’s communications is not entirely uncontroversial. But privacy groups are particularly concerned about Apple moving scanning operations onto users’ devices, so-called “client-side scanning.”

For example, the Electronic Frontier Foundation (EFF) says that client-side scanning is particularly vulnerable to abuse by authoritarian governments:

“…even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses. All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts.” 

And, writing for the New York Times, Matthew D. Green and Alex Stamos claim that: 

“Because Apple’s new tools do have the power to process files stored on your phone, they pose a novel threat to privacy… It is reasonable to wonder if law enforcement in the United States could compel Apple (or any other company that develops such capacities) to use this technology to detect other kinds of images or documents stored on people’s computers or phones.”

So, while many people appear not to have realised that Apple has been scanning iCloud Mail for several years, that doesn’t make the company’s new proposals any less controversial.

 

Apple’s CSAM Scanning Proposals at PrivSec Global

Apple’s proposals are highly significant and pose difficult questions for privacy advocates and child safety activists alike.

At PrivSec Global, September 22-23, 2021, we’ll be hosting a panel about Apple’s plans. Register now to reserve a free place and join the discussion.