Home “speakers” and privacy
People gave that up when they turned their DIY wall-screens on.
The voice applications people use with their Amazon Alexa and Google Assistant smart speaker devices have privacy policies, but most users don't read them and neither device maker has shown much concern about policy problems or inconsistencies. Computer scientists from America's Clemson University – Song Liao, Christin Wilson …
Excellent article and research.
So, essentially, the privacy policies that did exist were often cut and pasted from other, unrelated, products.
I doubt there is any protection, even with a legit privacy policy, that protects us from bad actors.
If I can use an analogy, it's like there's no barn door to shut because there's no barn, just an empty plot of land in the Wild West.
The only good thing is that Google's and Amazon's focus on numbers means many users will not bother installing anything, because the useful/entertaining stuff has been drowned out by all the crap.
My phone has microphones and I carry that around everywhere.
Yes. And it is an important issue. But, as with all security, a risk assessment (even informal) is probably more useful than a tin foil hat.
It is well understood that phone microphones are always compromised at a low level (often in hardware/ROM firmware) and are accessible over the air to network operators and law enforcement. That is why in very high security environments phones are banned and are even stored in Faraday cage bags at site reception.
However, if your threat concerns do not include nation states or law enforcement, phone microphones by themselves are not much of a problem: any phone company or operator routinely tapping all its customers mics would be noticed quite quickly.
However, it is clear that all "voice assistants" (whether from device manufacturers, operators, or 3rd party apps) are always listening and retaining data. Many people have noticed that adverts reflect recent conversations held near the phone, even when the assistant has not been asked a question. The only way to avoid that is to uninstall them. In the case of built-in assistants it should be enough to use their setting to disable them -- if they claim to be disabled but in fact are still recording then they are clearly committing an offence.
But if you leave it enabled (listening for its trigger word), it will be recording and sending information back to its masters.
What's the situation these days with Smart TVs?
The user manual for my TV does not list a microphone, nor does the set appear to have any obvious microphone openings.
That said, there is a button on the remote control that I allegedly need to push to interact with it using my voice.
I'm using a Samsung 4K TV, so I am basically starting from the default position, that it will be hackable by anyone who wants it. C**** thing even serves me adverts on the UI menu.
"We require developers of skills that collect personal information to provide a privacy policy, which we display on the skill’s detail page, and to collect and use that information in compliance with their privacy policy and applicable law," an Amazon spokesperson said in an emailed statement.
Nowhere in that statement about privacy policies do I see adjectives applied to those policies like:
That seems to be what the epitaph of our civilization should be.
"We require developers of skills that collect personal information to provide a privacy policy "
No you don't, you just say you do. There are 47K+ "skills" that prove that a privacy policy is not a requirement.
I remember a public presentation by a well known data protection consultant, who said "your privacy policy is PR". And so it seems for almost every Europe-relevant privacy policy we've examined in the course of a couple of years of research. Less than 0.5% have been even broadly compliant with the GDPR, and literally only a couple have been essentially fully compliant.
I remember an internal corporate presentation where the expert said "your privacy policy exists to protect yourself from lawsuits". The advice was to never explicitly state you would never do X, because that is how you paint yourself into a corner and expose yourself to lawsuits. Instead, policies should give examples of what you would do, and leave it open-ended.
Which is actually pretty reasonable, considering people who don't care about privacy don't read anything, and people who care don't trust anything anyway. The actual text of policies is only read by lawyers preparing a lawsuit or defending against one...
What is troubling about this is that the people who use these devices simply don't give a stuff about privacy. They have no idea how the things work and have no concerns about what is being collected. Those of us who do care are in such a small minority that it is an irrelevance. Companies will continue to spew out ever more Internet-connected tat that a gullible public will buy, install and use, further increasing the data collected by these parasites.
Even if they do get caught out, the standard procedure is to say "Sorry, we made a mistake" and pay a derisory fine that is the equivalent of losing 1p.
The only thing that may be a game changer is if the fines are linked to revenues and are of sufficient magnitude that it actually hurts.