Re: You lied
You didn't ask me, but I'll answer: I own a few of Amazon's Echo Dots. Our primary use for them is as a cheap multiroom Spotify/Plex music system (they're all hooked up to semi-decent speakers, before any audiophiles get cross), and occasionally I'll use them to listen to UK radio (I'm in Vienna, so it's a convenient way to keep up with events in Blighty) or to find out whether it's expected to rain before I head out. I also use them for making calls to some family members who struggle with phones. In my case they're almost entirely unnecessary; I could easily bin them without any real problems (I'm pretty sure I could make the calls from the Alexa phone app, though I haven't tried). But I have two elderly relatives with mobility issues for whom they're genuinely life-improving devices, for reasons which ought to be obvious.
I partly agree with martinusher - it does seem that the researchers deliberately made useless 'skills' just to game the system and get a headline - but I'd argue that only makes things worse. If the skills in question had little relevance or usefulness to customers, that ought to have made the vetting team more suspicious, and it should have made the obfuscation easier to spot. If you hide something malicious in a much more complex, fully functional skill, it will in all likelihood be far harder to find than if the malicious code makes up the bulk of the skill. I dare say most malevolent parties will go to more effort to conceal their scams than these researchers did, which raises serious questions about what other Alexa skills might currently be available.
It's well known that vetting processes (of all kinds, from app stores, to content online, to personal background checks for employers) are seldom perfect, and it should come as no surprise that it's possible to make dodgy skills available to the public - users of course need to be aware of the risks and cautious about what they install on their devices - but the fact that precisely none of these researchers' rule-breaking skills was rejected is a legitimate cause for concern. If Amazon's best argument is that their post-approval auditing process would've done a better job of removing the rule-breaking skills, that only raises the question of why this process happens after, rather than before, approval.