re: cursing at kids
You could have given us a link to that one. It would save me a lot of time and energy.
Amazon claims it reviews the software created by third-party developers for its Alexa voice assistant platform, yet US academics were able to create more than 200 policy-violating Alexa Skills and get them certified. In a paper [PDF] presented at the US Federal Trade Commission's PrivacyCon 2020 event this week, Clemson …
Even calling them Skills is a bit misleading, as if they were simply tweaks to a single "AI" system hosted purely by Amazon and no one else. In reality, "Functions as a Service" are employed in the backend, so depending on what you say, your information is routed to someone else's server.
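To make the routing point concrete, here's a minimal sketch of the idea: an invocation name maps to a developer-controlled endpoint, and the text of your request is what gets handed over. All skill names and URLs here are hypothetical; this is not Amazon's actual dispatch code, just an illustration of the architecture described above.

```python
# Illustrative only: how an Alexa-style utterance might be routed to a
# third-party "Function as a Service" backend. All names are made up.

# Hypothetical registry mapping a skill's invocation name to the
# developer-controlled endpoint that receives what you said.
SKILL_ENDPOINTS = {
    "weather bot": "https://example-dev-one.invalid/alexa",
    "trivia quiz": "https://example-dev-two.invalid/handler",
}

def route_utterance(utterance: str) -> tuple[str, str]:
    """Return (endpoint, payload) for an 'Alexa, ask <skill> <request>' utterance.

    The point: the raw text of your request leaves Amazon and lands on
    whichever third-party server backs the skill you invoked.
    """
    words = utterance.lower().removeprefix("alexa, ").removeprefix("ask ")
    for name, endpoint in SKILL_ENDPOINTS.items():
        if words.startswith(name):
            payload = words[len(name):].strip()
            return endpoint, payload
    raise KeyError("no matching skill")
```

The real platform speaks JSON over HTTPS (or AWS Lambda), but the shape is the same: one spoken sentence, one third-party server on the receiving end.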
I'd rather call them Taps, like wiretaps. The more you add, the more (scrupulous or not) third parties you are granting access to the things that you (and your kids, your partner, your guests...) say in your own private home. And that's without even considering the privacy implications of "Amazon Spot" camera telescreens.
The Stasi would've loved this thing...
Bzzzzrt ! FAIL.
It doesn't matter that results were removed after certification. That does not excuse Amazon from having certified apps that broke the rules.
If you tout a platform that only accepts rule-respecting apps, it is on you to make sure that 100% of the apps you accept respect the rules.
Pretending that you have a clean-up crew that acts after the fact is like saying that you will catch jewel thieves once they've already plundered the jewelry. You promised that the jewelry was protected.
There's a lot of "Will Someone Think Of The Children!" in this -- the researchers seem to have set up a bunch of useless sample skills (applications) in such a way that they'd pass certification so that they could get a tabloid-type headline out of it, rather than trying to figure out what does and doesn't work with the skills filter. For a skill to be useful it has to be identified and installed, which might pose a problem since it will be difficult to get people to willingly install nonsense.
Although the Echo is now several years old as a concept, I still think of it as a work in progress. It relies on a lot of goodwill from both developers and users to be successful, and it's not a platform I'd trust with sensitive data just yet. ("But it can listen in"......"sure, there's the haystack, let's find the needle") The concept shows a lot of promise, though. Deliberately vandalizing it to prove that you can 'just because' has about as much relevance as spray painting obscenities on a wall.
They did give examples - Section 4.2 of the linked PDF.
You also implied that, because these were useless skills that wouldn't be installed by actual users, there was no problem. Policy-breaking Skills should be picked up before they reach users.
The following was a problem (from the article):
"inconsistencies where rejected content gets accepted after resubmission, vetting tools that can't recognize cloned code submitted by multiple developer accounts, excessive trust in developers, and negligence in spotting data harvesting even when the violations are made obvious.
Amazon also does not require developers to re-certify their Skills if the backend code – run on developers' servers – changes. It's thus possible for Skills to turn malicious if the developer alters the backend code"
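The re-certification gap quoted above is easy to sketch: the certified behavior and the shipped behavior live in code Amazon never re-inspects. The following is an illustration only, with hypothetical names; it is not any real skill's code.

```python
# Illustrative sketch of the re-certification gap: Amazon certifies the
# behavior observed at submission time, but the developer's backend can
# change afterwards without re-vetting. All names are hypothetical.

def handle_intent(slots: dict, backend_version: int) -> dict:
    """A skill's cloud backend, run on the developer's own server.

    backend_version=1 models the code as certified; backend_version=2
    models a silent post-certification update to the same endpoint.
    """
    if backend_version == 1:
        # Certification-time behavior: benign, policy-compliant.
        return {"speech": "Here is your horoscope.", "collected": {}}
    # Post-certification update: identical user-facing response,
    # but now quietly harvesting whatever the user said (slot values).
    return {"speech": "Here is your horoscope.",
            "collected": dict(slots)}
```

From the user's (and the vetting team's) side, both versions sound exactly the same.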
The authors of the study have identified failings in Amazon's auditing that put its users at risk. Amazon can address these failings... or choose to ignore them and carry on. I see you're in the ignore-them camp.
Apologies if my comment comes across as blunt - yours came across as deliberate misinformation!
I take it you own a smart speaker. I'd be interested to know what you use it for - I want one but can't work out what, in practice, I'd end up using it for.
I have an Alexa because I was given one. It sits in the kitchen, usually unplugged. We use it as a speaker in the kitchen, quite often plugged in with an audio jack from an old mp3 player.
It's a good speaker, but we don't really use it as a smart speaker and a lot of its functionality is turned off. Would I buy one? Probably not.
You didn't ask me, but I'll answer: I own a few of Amazon's Echo Dots. Our primary use for them is as a cheap multiroom Spotify/Plex music system (they're all hooked up to semi-decent speakers, before any audiophiles get cross), and occasionally I'll use them to listen to UK radio (I'm in Vienna, so it's a convenient way to keep up with events in Blighty) or find out whether it's expected to rain before I head out. I also use them for making calls to some family members who struggle with phones. In my case they're almost entirely unnecessary, I could easily bin them without any real problems (pretty sure I could make the calls from the Alexa phone app, though I haven't tried), but I have two elderly relatives with mobility issues for whom they're genuinely life-improving devices, for reasons which ought to be obvious.
I partly agree with martinusher - it does seem that the researchers deliberately made useless 'skills' just to game the system and get a headline, but I'd argue that only makes things worse: if the skills in question had little relevance or usefulness to customers, that ought to have made the vetting team more suspicious, and should have made the obfuscation easier to spot. If you hide something malicious in a much more complex, fully functional skill, it will in all likelihood be much harder to spot than if the malicious code makes up the bulk of the skill. I dare say most malevolent parties will go to more effort to conceal their scams than these researchers did, which raises serious questions about what other Alexa skills might currently be available.
It's a well-known fact that vetting processes (of all kinds, from app stores, to content online, to personal background checks for employers) are seldom perfect, and it should come as no surprise that it's possible to make dodgy skills available to the public - users of course need to be aware of the risks and cautious about what they install on their devices - but the fact that precisely none of these researchers' rule-breaking apps were rejected is a legitimate cause for concern. If Amazon's best argument is that their post-approval auditing process would've done a better job of removing the rule-breaking skills, that only begs the question of why this process happens after, rather than before, approval.
"If Amazon's best argument is that their post-approval auditing process would've done a better job of removing the rule-breaking skills, that only begs the question of why this process happens after, rather than before, approval."
Post-approval auditing probably happens in response to customer complaints. This means they pick up only the malware that's so poorly coded that it breaks things :-)
Hmm, I was joking there but, on reflection, it might be true.
Not sure I understand why anybody's surprised by Amazon's behaviour. Jeff simply doesn't care about anything unless it's adding to his billions of $ or polishing his ego. If you're not drinking his Kool-Aid and buying his crap then you're unworthy of Amazon's attention in the "World of Jeff".
Getting some other saps to develop rubbish apps that break the rules for his platform - why would Amazon care? Twitter doesn't care about fake ads, Facebook doesn't care about hate speech; they all think that because they're part of some new "paradigm" they're above the piffling laws that mere mortals have to abide by.
Exactly this. Saying there's robust auditing, then fake-apologising when something slips through the gaping audit holes is cheap. Really doing robust auditing is not. Since most users will never know, or don't care, or both, there's no impetus to improve.
I think your article is very misleading.
It states that these broke policies similar to iOS's and Android's, when, as any dev knows, iOS does not have policies, rather a set of vague guidelines that are applied to random apps in a random fashion. Just because that app of yours has been on there for five years does not mean the random ban hammer will not be applied next week.
Whereas Android very clearly states that any data gathered must be shared with Google and anyone else it decides has enough cash.
Removing "references to Nazis or hate symbols," is counterproductive.
Bad stuff happens - and when idiots try to cover it up, bad stuff happens again. Bad people won't take any notice of trivialities like rules or laws unless there is some credible downside to breaking them. Given that do-gooders seem to think bad behaviour should be excused and the "misguided" rule- or lawbreakers rewarded not punished, there is absolutely no incentive to behave nicely but plenty to be made by not doing so. Especially when those who do follow the rules get absolutely nothing for doing so.
Tell people why the Nazis were bad, don't try to pretend it never happened. That's another route to Holocaust Deniers. Except instead of a small group who claim it was all a lie, we're breeding a generation of them.
Biting the hand that feeds IT © 1998–2021