Apple needs to act against fake app-privacy promises


Apple will need to become more aggressive in how it polices the privacy promises developers make when selling apps in the App Store. What can enterprise users do to protect themselves and their users in the meantime?

What’s the problem?

Some developers continue to abuse the spirit of Apple’s App Store privacy rules. This extends to posting misleading information on App Privacy labels and to outright breaking promises not to track devices: some apps ignore do-not-track requests and exfiltrate device-tracking data anyway.

The Washington Post, which recently launched its own digital ads network, has identified multiple instances in which rogue App Store apps fail to keep their promises of user privacy.

When a user says they don’t want an app to track them, the app should respect that request. But the report cites numerous cases in which the apps continue to harvest the same information, no matter what the user requests. This data may be sold to third-party data tracking firms, or used to provide targeted advertising, the report says. What it doesn’t say is that failure to respect user wishes is a betrayal of trust.

What might help?

The Post spoke with former iCloud engineer Johnny Lin, who argues: “When it comes to stopping third-party trackers, App Tracking Transparency is a dud. Worse, giving users the option to tap an ‘Ask App Not To Track’ button may even give users a false sense of privacy.”


That’s harsh criticism, and it seems fair to note that Lin has an interest here: his company develops Lockdown, which blocks “tracking, ads and badware” in all apps, not just Safari. Perhaps Apple should adopt the same approach. But given the months of pushback the company faced when it introduced App Tracking Transparency, achieving this at Apple’s scale will take time. Surveillance capitalism has a lot of money to spend opposing such plans; as things stand, users, particularly enterprise users, should take steps to protect themselves.

We do need some education

Another approach is education. Each time privacy problems surface, we also hear claims that many of these rogue apps are bite-sized entertainment titles aimed at casual gamers and children.

Of course, an app actively grabbing data doesn’t care whether it was the parent who installed it or the parent’s child using a borrowed smartphone.

Users really need to learn to be discerning about the apps they use. When it comes to pester power from children, I’d argue the safest approach is to use Apple Arcade and let your children play anything they want from there. It’s not ideal, but it is one way to limit risk.

Embrace (but verify) gray IT apps

A third approach that should work is policy development. Enterprises should look closely at the apps used by employees on their devices to ensure they pass security policy.

Use of mobile device management (MDM) systems and Managed Apple IDs on the enterprise side should increase, and enterprises really should work closely with employees to identify the apps they use. Many companies now have a problem with gray IT: apps and services employees use to get work done simply because these systems work better than the tools the company provides. In most cases, prohibition doesn’t work.


A better approach is to identify those apps, vet them against company security policy, and transparently explain why some cannot be used. Couple this with work to ensure your own apps are at least as easy to use as the gray-market alternatives. This switched-on approach enhances personal autonomy across your teams far more effectively than autocratic diktats: by working with teams, you end up with a more secure space. You can supplement this with classic MDM solutions.
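The identify-and-vet step above can be sketched in a few lines. This is a hypothetical illustration, not a real MDM integration: the bundle IDs and the allow-list are invented, and in practice the list of installed apps would come from your MDM inventory.

```python
# Hypothetical sketch: flag "gray IT" apps that have not yet passed
# the company's security review. All names here are illustrative.

# Apps that have already been vetted against security policy.
VETTED = {"com.example.crm", "com.example.mail"}

def flag_unvetted(installed_apps):
    """Return the bundle IDs in use that still need a security review."""
    return sorted(set(installed_apps) - VETTED)

# Apps reported as in use by employees (e.g. from an MDM inventory).
employee_apps = ["com.example.crm", "com.sample.notes", "com.sample.filedrop"]

print(flag_unvetted(employee_apps))
# The two unvetted apps are flagged for review rather than banned outright.
```

The point of the design is the ordering: review first, explain decisions second, prohibit only as a last resort.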

Karma police

But what will make the biggest difference is policing. Apple already says it will work with developers who fail to uphold the privacy promise, but perhaps it needs to toughen this approach. I’d argue that it should proactively monitor all apps against the privacy promises they make to ensure they meet those promises.

Those that don’t should be removed.

It’s also not enough to vet only specific apps identified by external parties. If a developer has been found to abuse privacy on one app, then all their apps should be checked.

Educated consumers and security researchers can help with this, using apps such as Little Snitch, Lockdown, Jumbo, and an array of others to monitor the activity apps generate. If an app promises privacy, it should be held to account; one way to do so is to use tools like these to spot privacy leaks and to tell Apple when you identify an app that leaks data without your permission.

This approach, which combines learning about risks, working with your internal groups (family, employees, children) to manage and minimize risk, and aggressive efforts to identify apps that break their privacy promises, should help make the environment more challenging for such egregious abuses.

What could happen next

Despite Apple’s efforts, an app’s privacy policy on the App Store currently gives us a false sense of security. When an app developer promises not to steal our information, or when we ask them not to track us, we are inclined to believe them. For Apple, the next step could be to vet and verify all the apps it sells to ensure they keep the privacy promises they make.

To my mind, privacy fraud is just as bad as any other kind of fraud. Apple already polices its apps for fraudulent behavior and last year rejected 150,000 apps for being spam, copycats, or misleading to users.

Now it needs to do the same for privacy cheats.

Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.