
Google defends 'critical' Assistant language experts who listen to your recordings

Now it needs to do more to protect users' privacy.

A report this week by VRT NWS revealed that Google employees listen to users’ Assistant recordings. Now Google wants you to understand that they were just doing their jobs.

The Belgian broadcaster got ahold of the recordings after Dutch audio data was leaked by a Google employee. VRT says it received more than a thousand Google Assistant excerpts in the file dump and “could clearly hear addresses and other sensitive information.” The outlet was then able to match recordings to the people who made them.

It all sounds like a privacy pitfall, but Google wants to assure you that the problem stems from the leak, not the recordings themselves. In a blog post, the company defended the practice as “critical” to the Assistant development process, but acknowledged that there may be issues with its internal security:

“We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.”

As Google explains, language experts “only review around 0.2 percent of all audio snippets,” which “are not associated with user accounts as part of the review process.” The company indicated that these snippets are taken at random and stressed that reviewers “are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.”

That places a lot of faith in its employees, and it doesn’t sound like Google plans to actually change the practice. Instead, Google pointed users to its new tool that automatically deletes your data after 3 or 18 months, though it’s unclear how that would mitigate the larger privacy concerns.

Potential privacy problems

In the recordings it received, VRT said it uncovered several instances where conversations were recorded even though the “Hey Google” prompt was never uttered. That, too, raises serious red flags, but Google insists that in those cases the speaker simply misheard a phrase that sounded like the hotword and activated by mistake, something the company calls a “false accept.”

The LED lights at the top of the Google Home let you know it’s listening. (Image: Michael Brown / IDG)

While that’s certainly a logical explanation, and one that anyone with a smart speaker has experienced, it’s not exactly reassuring. Now that we have confirmation that Google employees listen to randomly selected recordings, including so-called false accepts, people could be hearing all sorts of things we don’t want them to hear.

Users have precious few privacy options for Google Assistant beyond muting the microphone so the Home speaker can’t listen at all. There’s no toggle to opt out of having your recordings transcribed.

I understand why Google needs language experts to analyze recordings, but at the very least it should guarantee that they can hear only explicit Google Assistant queries. If employees can use actual queries containing things like addresses and contacts to pinpoint users’ locations, we should be assured that only relevant audio is being transcribed.