When Google released Chrome 94 for Android (and desktop), it slipped in some naughty capabilities via an API called Idle Detection.
“The Idle Detection API notifies developers when a user is idle, indicating such things as lack of interaction with the keyboard, mouse, screen, activation of a screensaver, locking of the screen, or moving to a different screen. A developer-defined threshold triggers the notification,” Google said in a blog post.
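For context, here is roughly what a website would write to use it — a minimal TypeScript sketch based on the API shape Chrome shipped (a static `requestPermission()` call, a `change` event, and `userState`/`screenState` properties). The `userLooksAway` helper is my own illustration, not part of the API.

```typescript
// The IdleDetector global exists only in supporting browsers (Chrome 94+),
// so we declare it here rather than rely on a DOM type definition.
declare const IdleDetector: any;

type IdleSnapshot = { userState: string; screenState: string };

// Pure helper (illustrative, not part of the API): the detector reports
// userState as "active"/"idle" and screenState as "locked"/"unlocked".
function userLooksAway(s: IdleSnapshot): boolean {
  return s.userState === "idle" || s.screenState === "locked";
}

async function watchIdle(): Promise<void> {
  // Requires a user gesture; resolves to "granted" or "denied".
  if ((await IdleDetector.requestPermission()) !== "granted") return;

  const detector = new IdleDetector();
  detector.addEventListener("change", () => {
    // Fires on every state change once the detector is started.
    console.log(userLooksAway(detector));
  });
  // Threshold is in milliseconds; 60 seconds is the minimum allowed.
  await detector.start({ threshold: 60_000 });
}
```

That is the entire barrier to entry: one permission prompt, and a site can watch for the moment you step away.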
What’s so bad about that?
An excellent story in FossForce by Christine Hall (always quotable and trustworthy) cites two sources who make an eloquent case for why mobile vendors like Google might not always have users’ needs in mind.
“I consider the Idle Detection API too tempting of an opportunity for surveillance capitalism motivated websites to invade an aspect of the user’s physical privacy, keep longterm records of physical user behaviors, discerning daily rhythms (e.g. lunchtime), and using that for proactive psychological manipulation (e.g. hunger, emotion, choice),” FossForce reported, quoting Tantek Çelik, the web standards lead at Firefox browser developer Mozilla. “In addition, such coarse patterns could be used by websites to surreptitiously max-out local compute resources for proof-of-work computations [i.e. cryptomining, etc], wasting electricity (cost to user, increasing carbon footprint) without the user’s consent or perhaps even awareness.”
Jon von Tetzchner, founder and CEO at privacy-focused Vivaldi, noted that the API is blocked by default in Vivaldi’s browser. Note: Apple also said it’s not implementing the API.
“This principle of actually monitoring that you’re not in front of the computer, we see that as a privacy problem and we see it as a security problem,” von Tetzchner said. “We do see that there is maybe the potential for someone to recognize, ‘Oh, you’re not on your computer, maybe we can do some damage while you’re not there,’ by mining cryptocurrency or the like.”
And therein lies the problem. Google is not being naive so much as focusing only on revenue and its business partners. If an advertiser, an ad network, or even a game developer might find some extracurricular data valuable, Google rationalizes, then by all means share it all.
Instead, companies like Google (and Apple, for that matter) need to look at mobile platforms and think, “What is the worst thing an evil person could do with this information?” In other words, they need to think like a security and/or a privacy specialist.
When Google’s developers were discussing adding this capability, did Google officials even think to have a cybersecurity executive, and maybe someone from its Chief Privacy Officer’s team, in the meeting? Were they ever cc’ed on memos?
I don’t know who decided this was a good idea, but I’ll bet a week of my Computerworld compensation (a tiny amount, I’ll grant you) that they weren’t involved. That’s solely based on what the team rolled out. If it weren’t Google, I might assume that the privacy and security folks were in the meetings but their advice was ignored — or, at the very least, overruled. But with Google, I’m betting they were never cc’ed or invited.
For this process to work, privacy and security considerations must be seriously explored with every new feature or product. Truthfully, they really only need to be explored when there is any possible security/privacy problem — which, in practice, is nearly every time.
That is problem two. Google’s developer execs typically don’t even see the most obvious security/privacy issues, because that’s not how they look at software. They see code purely as an opportunity for money-making and market domination. (I was about to say world domination, but that’s more of an Apple and Facebook thing.)
Security/privacy can’t be treated as an afterthought. Well, it actually can be. And the result is something that looks an awful lot like Idle Detection.