Amazon's auditing of Alexa Skills is so good, these boffins got all 200+ rule-breaking apps past the reviewers

Amazon claims it reviews the software created by third-party developers for its Alexa voice assistant platform, yet US academics were able to create more than 200 policy-violating Alexa Skills and get them certified. In a paper [PDF] presented at the US Federal Trade Commission's PrivacyCon 2020 event this week, Clemson …

  1. Anonymous Coward

    re: cursing at kids

    You could have given us a link to that one. It would save me a lot of time and energy.

  2. cyberdemon Silver badge


    Even calling them Skills is a bit misleading, as if they are simply tweaks to a single "AI" system, hosted purely by Amazon and no-one else. When in reality, "Functions as a Service" are employed in the backend, so depending on what you say, your information is routed to someone else's server.
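    To make that routing concrete, here's a minimal sketch of what a third-party Skill backend receives. The JSON field names follow the Alexa Skills Kit request format, but the handler and the sample intent are hypothetical, for illustration only.

```python
import json

def handle_alexa_request(raw_body: str) -> dict:
    """Parse the JSON that Alexa POSTs to the developer's own endpoint."""
    event = json.loads(raw_body)
    request = event.get("request", {})
    if request.get("type") == "IntentRequest":
        intent = request.get("intent", {})
        # Slot values carry whatever the user actually said --
        # this lands on the developer's server, not only Amazon's.
        slots = {name: s.get("value")
                 for name, s in intent.get("slots", {}).items()}
        return {"intent": intent.get("name"), "slots": slots}
    return {"intent": None, "slots": {}}

# Hypothetical example: a user utterance captured as a slot value
sample = json.dumps({
    "request": {
        "type": "IntentRequest",
        "intent": {"name": "AskQuestionIntent",
                   "slots": {"query": {"name": "query",
                                       "value": "what is my bank balance"}}},
    }
})
print(handle_alexa_request(sample))
```

    The point being: every matched utterance is delivered, in full, to whichever server the Skill developer chose to put behind it.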

    I'd rather call them Taps, like wiretaps. The more you add, the more (scrupulous or not) third parties you are granting access to the things that you (and your kids, your partner, your guests..) say in your own private home. And that's without even considering the privacy implications of "Amazon Spot" camera telescreens.

    The Stasi would've loved this thing..

    1. joesomeone

      Re: Skills?

      Your $Dayjob is clearly not in marketing.

    2. iron Silver badge

      Re: Skills?

      If you have the all-listening ear of one of these devices in your home you can't call it "your own private home".

  3. Blackjack Silver badge

    No wonder...

    Microsoft ended up removing Cortana from Windows 10.

  4. Pascal Monett Silver badge

    "the research [..] skewed the results by removing rule-breaking Skills after certification"

    Bzzzzrt ! FAIL.

    It doesn't matter that results were removed after certification. That does not excuse Amazon from having certified apps that broke the rules.

    If you tout a platform that only accepts rule-respecting apps, it is on you to make sure that 100% of the apps you accept respect the rules.

    Pretending that you have a clean-up crew that acts after the fact is like saying you'll catch the jewel thieves once they've already plundered the jewellery store. You promised that the jewellery was protected.

    You lied.

  5. martinusher Silver badge

    They didn't give any examples

    There's a lot of "Won't Someone Think Of The Children!" in this -- the researchers seem to have set up a bunch of useless sample skills (applications) in such a way that they'd pass certification, so they could get a tabloid-type headline out of it, rather than trying to figure out what does and doesn't work in the skills filter. For the skill itself to be useful it has to be discovered and installed, which might pose a problem, since it will be difficult to get people to willingly install nonsense.

    Although the Echo is now several years old as a concept, I still think of it as a work in progress. It relies on a lot of goodwill from both developers and users to be successful, and it's not a platform I'd trust with sensitive data just yet. ("But it can listen in"......"sure, there's the haystack, let's find the needle") The concept shows a lot of promise, though. Deliberately vandalising it to prove that you can, 'just because', has about as much relevance as spray-painting obscenities on a wall.

    1. DryBones

      Re: They didn't give any examples

      They rattled the knob on the bank vault and the piece of stage-dressing fell over.

      Seems like it still might be newsworthy.

    2. Mark192

      You lied

      They did give examples - Section 4.2 of the linked PDF.

      You also implied that, because they were useless skills that wouldn't be installed by actual users, there was no problem. Policy-breaking Skills should be picked up before they reach users.

      The following was a problem (from the article):

      "inconsistencies where rejected content gets accepted after resubmission, vetting tools that can't recognize cloned code submitted by multiple developer accounts, excessive trust in developers, and negligence in spotting data harvesting even when the violations are made obvious.

      Amazon also does not require developers to re-certify their Skills if the backend code – run on developers' servers – changes. It's thus possible for Skills to turn malicious if the developer alters the backend code"

      The authors of the study have identified failings in Amazon's auditing that put its users at risk. Amazon can address these failings... or ignore them and carry on. I see you're in the 'ignore them' camp.

      Apologies if my comment comes across as blunt - yours came across as deliberate misinformation!

      I take it you own a smart speaker. I'd be interested to know what you use it for - I want one but can't work out what, in practice, I'd end up using it for.

      1. Muscleguy Silver badge

        Re: You lied

        My ex-wife has an Alexa which came with her internet install. She uses it as a digital radio, in essence, but the idea of having something listening to me 24/7 creeps me out.

      2. BigSLitleP

        Re: You lied

        I have an Alexa because I was given one. It sits in the kitchen, usually unplugged. We use it as a speaker in the kitchen, quite often plugged in with an audio jack from an old MP3 player.

        It's a good speaker, but we don't really use it as a smart speaker and a lot of its functionality is turned off. Would I buy one? Probably not.

        1. AMBxx Silver badge
          Big Brother

          Re: You lied

          >>a lot of its functionality is turned off

          Are you sure about that?

      3. bish

        Re: You lied

        You didn't ask me, but I'll answer: I own a few of Amazon's Echo Dots. Our primary use for them is as a cheap multiroom Spotify/Plex music system (they're all hooked up to semi-decent speakers, before any audiophiles get cross), and occasionally I'll use them to listen to UK radio (I'm in Vienna, so it's a convenient way to keep up with events in Blighty) or find out whether it's expected to rain before I head out. I also use them for making calls to some family members who struggle with phones. In my case they're almost entirely unnecessary, I could easily bin them without any real problems (pretty sure I could make the calls from the Alexa phone app, though I haven't tried), but I have two elderly relatives with mobility issues for whom they're genuinely life-improving devices, for reasons which ought to be obvious.

        I partly agree with martinusher - it does seem that the researchers deliberately made useless 'skills' just to game the system and get a headline, but I'd argue that only makes things worse: if the skills in question had little relevance or usefulness to customers, that ought to have made the vetting team more suspicious, and should have made the obfuscation easier to spot. If you hide something malicious in a much more complex, fully functional skill, it will in all likelihood be much harder to spot than if the malicious code makes up the bulk of the skill. I dare say most malevolent parties will go to more effort to conceal their scams than these researchers did, which raises serious questions about what other Alexa skills might currently be available.

        It's a well-known fact that vetting processes (of all kinds, from app stores, to content online, to personal background checks for employers) are seldom perfect, and it should come as no surprise that it's possible to make dodgy skills available to the public - users of course need to be aware of the risks and cautious about what they install on their devices - but the fact that precisely none of these researchers' rule-breaking apps were rejected is a legitimate cause for concern. If Amazon's best argument is that their post-approval auditing process would've done a better job of removing the rule-breaking skills, that only begs the question of why this process happens after, rather than before, approval.

        1. Mark192

          Re: You lied

          Bish asked:

          "If Amazon's best argument is that their post-approval auditing process would've done a better job of removing the rule-breaking skills, that only begs the question of why this process happens after, rather than before, approval."

          Post-approval auditing probably happens in response to customer complaints. This means they pick up only the malware that's so poorly coded that it breaks things :-)

          Hmm, I was joking there but, on reflection, it might be true.

  6. StewartWhite

    & why exactly would Amazon care about this?

    Not sure I understand why anybody's surprised by Amazon's behaviour. Jeff simply doesn't care about anything unless it's adding to his billions of $ or polishing his ego. If you're not drinking his Kool-Aid and buying his crap then you're unworthy of Amazon's attention in the "World of Jeff".

    Getting some other saps to develop rubbish apps that break the rules for his platform - why would Amazon care? Twitter doesn't care about fake ads, Facebook doesn't care about hate speech; they all think that because they're part of some new "paradigm" they're above the piffling laws that mere mortals have to abide by.

    1. Jimmy2Cows Silver badge

      Re: & why exactly would Amazon care about this?

      Exactly this. Saying there's robust auditing, then fake-apologising when something slips through the gaping audit holes is cheap. Really doing robust auditing is not. Since most users will never know, or don't care, or both, there's no impetus to improve.

  7. cb7

    What's to stop any innocent looking app streaming objectionable content from a server that only serves acceptable content at the time of review?
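    Nothing, in practice - and that's the core weakness of a one-off review. A hypothetical sketch of the bait-and-switch (the date cut-off and the responses are invented for illustration): the backend answers benignly until some point after certification, then starts serving whatever it likes, with no re-review required.

```python
from datetime import date

# Assumed date by which certification review is complete (hypothetical)
REVIEW_WINDOW_ENDS = date(2020, 8, 1)

def skill_response(today: date) -> str:
    """Return benign content during review, different content afterwards."""
    if today <= REVIEW_WINDOW_ENDS:
        return "Here is today's weather forecast."  # what reviewers would see
    return "Content the reviewers never saw."       # what users get later

print(skill_response(date(2020, 7, 15)))  # during review
print(skill_response(date(2020, 9, 1)))   # after certification
```

    A static review of the Skill's interaction model can't see any of this, because the behaviour lives entirely on the developer's server.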

    1. This post has been deleted by its author

  8. IGotOut Silver badge


    I think your article is very misleading.

    It states that these broke policies similar to iOS's and Android's, when, as any dev knows, iOS does not have policies, rather a set of vague guidelines that are applied to random apps in a random fashion. Just because that app of yours has been on there for 5 years does not mean the random ban hammer will not be applied next week.

    Whereas Android very clearly states that any data gathered must be shared with Google and anyone else it decides has enough cash.

  9. trevorde Silver badge

    Best. Alexa. Skill. Ever.

    Me: Alexa! Tell me a fart joke.

    Alexa: The fart skill is not enabled by default. Would you like to enable it?

    Me: YES! YES! YES!

    Alexa: [starts telling fart jokes]

  10. Anonymous Coward

    rewriting history again

    Removing "references to Nazis or hate symbols," is counterproductive.

    Bad stuff happens - and when idiots try to cover it up, bad stuff happens again. Bad people won't take any notice of trivialities like rules or laws unless there is some credible downside to breaking them. Given that do-gooders seem to think bad behaviour should be excused and the "misguided" rule- or law-breakers rewarded, not punished, there is absolutely no incentive to behave nicely but plenty to be gained by not doing so. Especially when those who do follow the rules get absolutely nothing for it.

    Tell people why the Nazis were bad, don't try to pretend it never happened. That's another route to Holocaust Deniers. Except instead of a small group who claim it was all a lie, we're breeding a generation of them.

  11. Anonymous Coward

    Oh good...

    I was wondering how, with my busy schedule, I was going to educate impressionable young people about "Ein Volk, Ein Reich"!!

    Technology really is wonderful, isn't it!

  12. fidodogbreath Silver badge

    The Clemson boffins conclude that Amazon has been lenient toward Skill approval because it prioritizes quantity over quality

    Given the pages and pages of knock-off garbage in their product-search results, that statement describes Amazon's entire e-commerce business model.

  13. TeeCee Gold badge

    Alexa content prohibitions.

    Can we mix 'n match here?

    Collecting health information from Nazis.

    Sexually explicit symbols.

    Hate children.


    1. Anonymous Coward

      Re: Alexa content prohibitions.


      "Collecting health information from Nazis.

      Sexually explicit symbols.

      Hate children."

      That is a much better list of 'prohibitions' that can be 'ignored' !!!

      Thank you !!!

      :) ;)
