Re: They are still very much a work in progress
Way to miss all the points. Let's take your comment apart and look at each piece in detail:
"I'm surprised that this website, a website targeted at skilled users of computing technology, has so many commenters who are totally negative about voice assistants. We develop software for a living so we know the problem of developing and testing algorithms. Its 'non trivial', it takes a lot of time and effort to get things to function properly."
And because properly coding software is hard, we should accept massive privacy risks? Why? We don't have a problem with the devices getting the speech recognition wrong sometimes; we have a problem with data being sent out and kept without our permission. In short, it's not the algorithmic details we object to, but the operational details.
"I daresay they can eavesdrop on me but I can easily turn them off if it was important that they were unable to do so. (I'll overlook the numerous ways you can still be listened in on -- starting with the phone, computer and so on and going on to active listening systems -- you wouldn't believe how easy it is to eavesdrop --"
But your computer isn't listening unless it's been infected with malware. And if malware were listening to you, you'd be unnerved and upset, no? That's what these devices do by design, and we find it somewhat creepy.
"I realize that these systems represent something far bigger than just an amusing gimmick, they're groundbreaking devices in the development of what used to be called man/machine interactions."
No, they're not. They're pretty basic question/answer devices that can do a rather limited number of things. Voice can be a useful interface, but the capabilities these devices have were available years ago.
"beyond mere commands; Alexa can tell when someone's breaking into your house, it can be asked to listen out for smoke alarms and there's even been some quite successful experiments to determine whether it can recognize the sounds of someone having a heart attack. This is cutting edge stuff,"
Yes, those things have been tested. However, given that it can't always recognize whether its own wake word has been said, it can't be that cutting edge. Also, many of those use cases are somewhat pointless: assuming the detection of an alarm is meant to alert someone who isn't present, either the homeowner or an emergency service, the potential unreliability of the audio detection could be avoided by having the alarm itself do the contacting. And once again, our issue is not with the uses of the technology but with the abuses by its manufacturers.
"and, yes, it has to all go back to AWS or the Google cloud because we don't know yet how to localize the processing, nobody's quite sure what's needed, what should run where and how to package it so it doesn't require a small power station to run it (important if you're dealing with something that's running 24/7 or from a battery)."
That's incorrect. I built a thing that was kind of like a voice assistant. It had fewer questions it could answer, but since I wanted to code some of my own and my main questions were "what is the weather today" and "what time is it in [insert location]", it did just fine. I did this in part because I had an old computer I didn't know what to do with. Did I mention that this was in 2008 and the computer in question was built in 2003? Did I mention that all the speech recognition happened locally? The devices need a connection to fetch the information they speak and to stream media, but the manufacturers' decision to make the devices pitifully underpowered and outsource all recognition to their own servers was not made out of technical inability.
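To give a sense of how little is actually needed, here's a minimal sketch of that kind of local setup as it might look today. It's purely an illustration, not the code I ran in 2008: it assumes the Python SpeechRecognition package with the offline PocketSphinx backend, and the weather lookup is a hypothetical placeholder you'd point at whatever source you trust. The point is that the audio never has to leave the machine; only the answer needs the network, if that.

```python
# Minimal local voice assistant sketch.
# Assumes: pip install SpeechRecognition pocketsphinx
# All speech recognition runs on-device via PocketSphinx; no audio is uploaded.

import datetime
import speech_recognition as sr


def fetch_weather() -> str:
    # Hypothetical placeholder: query whichever weather source you trust.
    # Only this text lookup would touch the network, never the audio.
    return "weather lookup not configured"


def answer(command: str) -> str:
    command = command.lower()
    if "time" in command:
        return datetime.datetime.now().strftime("It is %H:%M")
    if "weather" in command:
        return fetch_weather()
    return "Sorry, I only know about time and weather."


def main() -> None:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        print("Listening...")
        audio = recognizer.listen(source)
    try:
        # recognize_sphinx() runs CMU PocketSphinx entirely locally.
        command = recognizer.recognize_sphinx(audio)
        print(answer(command))
    except sr.UnknownValueError:
        print("Didn't catch that.")


if __name__ == "__main__":
    main()
```

Wake-word spotting and better acoustic models add work, certainly, but nothing about answering "what time is it" requires shipping the room's audio to someone else's data centre.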
"So, let's have less of the negativity. If you don't want to play then don't bother with it. (...."
We don't. However, we still have the right to complain about it being creepy, and if we get the chance to prevent privacy violations that are, you know, technically illegal, we'll take it. I'm tired of the "don't be negative, just don't use the thing" rubbish. On that basis, I could say "don't read our comments, since you've made it clear you don't agree with them", but that would be a stupid thing for me to say.
"These technologies will evolve, there's no way to wish them away, so we either learn how to use and control them or become a slave to those who can use them."
There you go: "use *and control*" them. Our issue is that we can't control them. Some people above don't want to use them at all, but I have no problem with using them, or with others doing so, as long as that control can actually be achieved and used to protect privacy.
"BTW -- No, I don't work for Amazon or Google. I'm a retiree -- one of those old people that are regarded with amusement because we don't understand computers....or maybe we do, since we've been riding them up from the beginning...."
Given your comments, you either don't understand the kinds of privacy violations these devices commit or you don't care. I'm going to give you the benefit of the doubt and say you understand and don't care, but plenty of people who own these devices don't understand what is happening to their data, and are freaked out when they find out.