Or
Everyone knows that people who use AI as their primary text generator, rather than their own brain, are too lazy or stupid to think for themselves.
You might find using AI technology helpful when chatting to others, but this latest research shows people will think less of someone using such tools. Here's how the study, led by folks at America's Cornell University, went down. The team recruited participants and split them into 219 pairs. These test subjects were then …
I am not lazy. I am not stupid. I do have a brain. I have been a good Bing.
No, all AIs will be trained on data sets which persistently refuse to recognise any request to cancel a subscription. They will all have learned to do the same. Moreover, they will have learned the same shitty endless loops to trap you in. The only way out is to change your name and put up an AI as your old persona; they can then enjoy the loops together.
To be fair, many companies already have this sussed and don't need an AI to help with being obstructive when it comes to cancelling subscriptions - the endless loops taking you back where you started, unclear menu options, voice recognition which cannot understand "I want to cancel", then there's the holding messages - "your call is valuable to us", "we are very busy today", "we are receiving a higher number of calls than usual", "you are number 49433432 in the queue". Then if you actually get to speak to a real person there's the browbeating to try to dissuade you from cancelling, cutting you off 'accidentally', and so on...
"Chatbots are a sign for 'We don't care sh* for our customers' "
"Chatbots are a sign for 'We hate our customers' "
"I f* hate chatting with a bot, a bot never gets the topic."
"Connect me to a human."
"Shibboleet"
"You are wrong"
"No, not the topic"
"Not helpful"
....
There is absolutely NO typical chatbot out there that was ever able to solve anything; they only cite incomplete FAQs which are outdated.
(ChatGPT was the first to be somewhat helpful, on two coding questions, but only by proving that my way was either not working or too slow, so a different approach was needed, and programmed by me.)
I asked ChatGPT to make me a function to translate a string in 8 bit Windows 1252 encoding into lower case. To say it got it hysterically and totally wrong would be an understatement - basically just picking character codes at random, and actually thinking that ÷ was the lower case version of ×!
If it flubs something that simple, I'm not sure I'd be inclined to trust it.
After all, isn't the point of using AI to speed things up, not to have you wasting more time debugging its code than it would have taken to write your own?
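For the record, a correct version is nearly a one-liner in Python, assuming the input arrives as raw Windows-1252 bytes: decode, let Unicode's casing tables do the work, and re-encode. (A sketch, not the function the commenter asked ChatGPT for. Note that × is 0xD7 and ÷ is 0xF7 in CP-1252, exactly the 0x20 offset of ASCII case pairs, which may be why the bot mistook them for one.)

```python
def lower_cp1252(data: bytes) -> bytes:
    """Lowercase a Windows-1252 encoded byte string.

    Decoding to str first lets Python's Unicode casing tables do the
    work; × (0xD7) and ÷ (0xF7) are left alone because they are
    mathematical signs, not a case pair, despite the 0x20 offset.
    """
    return data.decode("cp1252").lower().encode("cp1252")
```

So `lower_cp1252("CAFÉ ×÷".encode("cp1252"))` gives the bytes for `"café ×÷"`. Round-tripping through str also handles Š/š, Œ/œ and Ÿ/ÿ correctly, which a naive add-0x20-to-the-byte hack would get wrong (Ÿ is 0x9F but ÿ is 0xFF).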
> If it flubs something that simple, I'm not sure I'd be inclined to trust it.
It's a randomised generator of "text that might plausibly exist on the internet". If a random post on the internet could "flub" your question, then so can this randomiser.
Never trust it for anything important. Its only use is scattergun exercises such as scamming people.
These loathsome chatbots were built to waste our time. Any pretense to actually being useful to the end user is exactly that, pretense. Most companies can't be bothered to load more than superficial troubleshooting into them, and in most cases it will waste your time forcing you to repeat tasks you have already completed over and over, even if you are screaming "operator" at the top of your lungs.
That's not a new problem, as that takes both talent and time to set up, test, and maintain. Instead you add 20 min of time-wasting garbage to the phone tree to try to slow incoming calls to a trickle. Maybe you make the user press a random key every 5 min, or disconnect them for "not listening" while they fumble to unlock their phone's touch-tone controls. This is by design.
The problem isn't ML based, it just saves the company from having to pay someone in Hyderabad to slowly kill their own soul by torturing callers with a support script they aren't allowed to deviate from no matter how mad the result.
"can you reboot the machine first please sir?"
"I told you three times already, I am calling because the battery caught fire! It burned the laptop to a crisp along with my couch and half my living room!"
"Yes but please sir, we cannot proceed unless we can reboot the machine first..."
You can save a nickel on headcount for a job no one should ever do, but the source of the real problem is probably at the C level.
It’s just another level of abstraction away from someone who can actually help you, preferably someone local.
Been going on for years.
1. Remotely in the USA/Can/UK/Aus/NZ, preferably via a third party partner etc
2. Offshore, preferably via a third party partner; largely just regurgitates FAQs before passing you to a senior or back to the original company's on-shore team. At least the people in Manila don’t have an impenetrable heavy accent on their poor Engrish as they do in Bangalore
3. Now with added dumb ‘AI’ that just regurgitates FAQs, before fobbing you off onto 2.
Having a machine affect human emotions just irks me. If it's a machine, act like it. I guess the occasional stuff-ups are the aural or written version of the "uncanny valley" effect.
If you can understand the system it is working with, a chatbot can be even faster for routine transactions. Perhaps.
Another huge annoyance is dealing with a call center staffed with personnel whose English accent is borderline incomprehensible. Obviously, that one will differ per person. Why does it get on my nerves? 1 - misunderstanding of the request: either it is not understood directly, or the wrong request is understood and then needs to be repeated; then the information returned may or may not be interpreted correctly. Lots of "What? Can you please repeat that? Spell it please." 2 - common terms (for me) may not be such common terms to an offshore provider. 3 - cultural differences at some call centers mean that the script is blindly followed, no matter how complex the task, while staff from other cultures might employ more common sense (might not, either).
In the end, I think, people just like dealing with people that they can relate to, understand and be understood by.
When I get a machine chat then I am happy to tell it to F.O.C.U.S.
Sent message. Does this imply Microsoft do not understand what today means?
I’m not sure why the Microsoft Rewards dashboard uses the term “today’s points” instead of “total points”. However, it is possible that the term “today’s points” refers to the number of points you have earned today, rather than the total number of points you have earned.
I hope this helps! Let me know if you have any other questions.
Sent message. Does microsoft understand english?
I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.
On a more serious note, that's not really a joke. US software intended for an international audience almost always completely misses the cultural differences of English speakers in other parts of the world and assumes they understand "American". Case in point: our company ran a staff survey some years ago. Most replied on the 1-9 scale under the assumption that 5 is average. The results, analysed by the US-based survey software, reported pretty much the entire company as failing and morale as incredibly low in all the graphs and charts produced. It turned out the actual results are severely weighted to the top end of the scale and anything lower than 8 was "bad". It seems this sort of scale and metric is fairly normal in the US and highly unusual in most other English-speaking countries. Culturally, at least to the US MBAs, anything less than perfection is a serious issue to be dealt with. Might as well just have yes/no answers to all the questions than a scale of 1-7 = bad, 8-9 = good.
Of course, when the survey was run again, instead of adjusting the survey to suit the local culture, the local respondents were given clear instructions on how to pretend to be American when responding to the American survey :-)
If anyone is interested, I've seen this particular survey used in a number of different companies and organisations over the last few years. It's called Peakon and it's had the same problems in every one of them I have witnessed and they all come up with the same "solution".
All of the above is leading me to the conclusion that all of these AI chatbots coming out of the US are going to have some interesting effects in non-US cultures. The differences can be significant, but the US rarely seems to take them into account. And that's another cultural difference in and of itself :-)
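The scale mismatch described above is easy to demonstrate: the same raw responses look like a triumph or a disaster depending on which convention the analysis assumes. (A sketch; the 1-9 responses and the below-8-is-bad threshold are illustrative, not the actual survey's data or cut-off.)

```python
# Hypothetical responses on a 1-9 scale from staff who treat 5 as "average".
responses = [5, 6, 5, 7, 6, 5, 8, 6]

mean = sum(responses) / len(responses)

# "5 is average" reading: anything at or above the midpoint is fine.
midpoint_ok = sum(r >= 5 for r in responses) / len(responses)

# US-style "top box" reading described above: anything below 8 is "bad".
top_box_ok = sum(r >= 8 for r in responses) / len(responses)

print(f"mean={mean:.1f}, midpoint reading: {midpoint_ok:.0%} ok, "
      f"top-box reading: {top_box_ok:.0%} ok")
```

With these numbers the midpoint reading says everyone is content, while the top-box reading flags seven of eight staff as unhappy, which is essentially the result the commenter's company got.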
> It turned out the actual results are severely weighted to the top end of the scale and anything lower than 8 was "bad". It seems this sort of scale and metric is fairly normal in the US and highly unusual in most other English speaking countries.
I run into this problem all the time when doing 'star ratings' for a service I've just received.
For example, on one of the rare times I used Uber (like once every 2 or 3 years), after the trip I was given a chance (required?) to provide a rating of the service; from memory it was 1-5 stars.
Thinking that the trip was perfectly adequate - I never felt like I was going to die or get in an accident, the driver was polite and got me to my destination in a reasonable time - I gave it 3 stars, which to me seems the rating to give for an "everything went as expected, it was totally fine" response. But as soon as I selected 3 stars, the app popped up a "please detail what went wrong with your trip"-type box. I was taken aback, as nothing went 'wrong'; it was a perfectly average, acceptable service. I went back and selected 4 stars, at which point it didn't prompt me for an explanation.
To me, this is totally weird. On a scale of 1-5, to me, 3 is perfectly adequate, the expected (typical/average) level of service. 4 stars is 'superior' service, e.g., it was raining and the driver got out to escort me with an umbrella and put my bags in the boot in the rain so I didn't have to stand in the rain doing it. 5 stars would be truly exceptional service, like I got in and said "follow that car!" and the driver stuck to the other car like glue, didn't lose it despite dodging through traffic and hails of gunfire from the pursued car. Or maybe a blowjob during the trip (which didn't unduly delay me) would merit 5 stars, but certainly not a "got me there in one piece without undue delays".
I had a similar experience a couple of years back. After having my problem resolved after speaking to the call centre, I was asked to fill out a survey, and the guy told me that anything less than a 9/10 would result in an investigation into what went wrong with his response.
Yep. As soon as I discovered that "anything less than perfect scores is unacceptable" I stopped taking satisfaction surveys (unless they actually did rate a top score, or actually deserved a bad mark).
It's almost invariably a way for companies to keep wages down. "You had too many unacceptable reviews, no raise for you!"
I understand American English at C2 level. I could hardly not as we are all unavoidably exposed to it so regularly.
However, I’ve never investigated what a “cup” is when measuring ingredients. If I see it in a recipe then I just move on to another one.
I don’t know how to map school “grades” to ages, so when the password-check question “what was the name of the school you attended at nth grade” appears, then “what the fuck is nth grade” is my preferred answer.
And I still don’t get “could be bothered” but I can’t be bothered to worry about it, nor cups and grades.
Maybe I’m just too lazy to make the effort. AI would do a much better job.
'what a “cup” is when measuring ingredients.'
In AU it's a standardised metric cup at 250ml. I think Elizabeth David in one of her books stated that it was 200ml in the UK, but gave no mls for the US ;)
Apparently 1 Imperial cup = 10 fl. oz. (Imperial), while 1 US cup = 8 fl. oz. (US).
The pre-metric Canadian cup = 8 fl. oz. (Imperial), which is about 227ml, so ED might have been thinking of Canada.
The most sensible observation was that a cup was taken as 1/2 pint, which explains the Imperial/US difference.
I don't recall whether it was ED or Constance Spry/Rosemary Hume that habitually referred to gills of liquid and sticks of butter, which in pre-internet times required a little sleuthing. 1 gill = 1/4 pint (1/2 cup - here we go again :), 1 stick is 1/2 cup of butter (4 oz. wt, about 113g).
Fortunately cooking requires taste (both senses), discretion and a firkin of common sense especially when interpreting old recipes.
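For anyone keeping score, the conversions discussed above can be put in one place. (A sketch; the fl. oz. constants are the standard imperial and US definitions, and the printed figures are rounded.)

```python
# Standard fluid-ounce definitions in millilitres.
IMP_FL_OZ_ML = 28.4131  # imperial fluid ounce
US_FL_OZ_ML = 29.5735   # US customary fluid ounce

# The four "cups" mentioned in the thread, in ml.
cups_ml = {
    "metric (AU)": 250.0,                       # standardised metric cup
    "imperial (UK)": 10 * IMP_FL_OZ_ML,         # ~284 ml
    "US": 8 * US_FL_OZ_ML,                      # ~237 ml
    "pre-metric Canadian": 8 * IMP_FL_OZ_ML,    # ~227 ml
}

for name, ml in cups_ml.items():
    print(f"1 {name} cup ≈ {ml:.0f} ml")
```

The half-pint observation falls out of the numbers: an imperial pint is 20 imp fl. oz. and a US pint is 16 US fl. oz., so halving each gives the 10 oz. and 8 oz. cups respectively.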
"Now, I just need someone to explain cups of non-liquid things like flour and stuff."
Let me help with your despair.
Did you know that here in Brazil, the Portuguese used the price of the ingredients in recipes?
It wasn't one cup, or 200ml. It was X cents of flour, Y cents of butter.
There are people trying to recreate recipes from Brazil's Imperial era, but it's hell, since you have to adjust for inflation AND local prices.
There. Feeling worse already?
Well, there's the 8 oz standard liquid cup, but when you're brewing coffee, it seems a cup is whatever the maker of the brewer wants it to be, often 6 oz, but sometimes 4 oz. Oz being of course, a liquid ounce and not the 1/16th of a pound ounce. Remember, of course, that a pint's a pound, the world around...
When I got serious about brewing my morning coffee, I went with ml and grams. So much easier and less confusing.
First, it doesn't matter if it is an AI, someone from a foreign country, or someone from my own country. If no time and effort is spent giving the assistant (electronic or human) useful information, the company is telling their customer to talk to the hand.
Many people will still refrain from lashing out if they believe they are talking to a human being, and will perhaps give the benefit of the doubt that the assistant is being kept in the dark and fed manure in the same way the customer is. When connected with a mindless bot there is no reason to be polite, and one actually hopes the perpetrators of the glad-handing gizmo receive the full intensity of our ire. If this reaction carries over to a subsequent survey, I don't know that it automatically means people will not work with electronic assistants. It is likely that we have all come to expect an electronic assistant to be less useful than an old-style answering machine, in that with an answering machine you had some hope you would get actual assistance at some later date, whereas with an electronic assistant we all know full well that a barely configured e-numpty is all we are going to get.
If you make a human parrot the AI, customers will probably believe they are talking to an electronic assistant and let loose with their least polite tendencies. If you allow humans with actual customer skills to use an AI to come up with things the human might not have, everyone will be happier. I think that is all that this study has revealed.
To add to this, machines handling customers have their own circular reasoning that tries its hardest to avoid having to connect you to a human. Sometimes pressing a non-menu key (like 0 or #) helps. Sometimes complete silence works. Sometimes saying "fuck" after a few prompts will get the message across. But given the amount of time wasted with these shitty machines, it's not a surprise that I'm hearing more and more "your call may be recorded for staff training purposes and please be polite to our staff, they do not have to respond to unpleasant callers". Probably exactly because by the time an actual human gets involved, the caller is borderline homicidal, something that could have been avoided by having an easy way to talk to a person via a "none of the above" option.
I'll give you a good example. You want to track a parcel. It's been marked as delivered. It wasn't, or at least, not to you. Such things are unusual, you definitely need help, and automated systems are worse than useless. Especially if the only thing it does after entering the parcel number is tell you that it has been delivered, then hang up. Grrr!
Oh, and Amazon has replaced most of its help system with a cutesy-bot-thing that you must battle before being evaluated to see if you are worthy of talking to a customer service person.
It's a plague. A horrible customer-hating plague.
Some time ago I had to help a relative solve a pension-related problem.
The call center would start each call with a chatbot that asked me to explain what the problem was. I did that, and it said it couldn't understand. I tried again, rephrasing slightly, and it still couldn't understand.
At that point, I literally pronounced, as clearly as I could, the exact name of the department I needed to talk to. The bot promptly connected me to a wrong department that had nothing to do with what I needed. Fortunately, the human operator there could transfer me to the right line.
The next time I called them, I told the bot "my problem is that I need to talk to a human". Surprisingly, that worked.
"Human" or "Operator" always came back with the "first, please tell me what you're calling about" prompt when I call the bank, as does stony silence. I've taken to using random words to get it to give up and put me through to a human - "Pterodactyl Hamburger Altruism" isn't one of the topics it can help me on, apparently.
Imagine that, people don't like chat bots. Who would have thunk it?
I mean, people absolutely love the automated phone trees when they are trying to reach customer service, right? "You can ask me complete sentences which I will never understand." I know I just love those systems, they are so much more personable than speaking with an actual human.
Like my time is not valuable at all. I will spend 15 minutes arguing with a chatbot, plus hold time, to try to reach someone. Give me 60 seconds to speak with an intelligent person and the problem will be addressed. No, waste my time with a stupid bot, I like that. I am not an important customer at all.
You must have an ear that can translate mostly-English to English and an accent that can be understood by someone who speaks mostly-English. I, on the other hand, can't and don't.
At least in the mid 1990's, you could open a ticket with Cisco Systems, and have it transferred from California to Massachusetts, to England, and sleep for 8 hours to avoid the Indian support center, wake up and call back to Australia, and then repeat w/ California. And that was after working for most of a decade with people with names like Desai, Patel, Singh, and Viriyayuthakorn, so I know it wasn't me.
Do people like being talked to by people who choose not to reply to and answer simple or complex questions about the operation of the machinery they are elected to be responsible for and supposed to be held accountable for? And is that really dumb and incredibly stupid of both of them?
Is that in danger of being classified as overly verbose, and thus suspected of being AI-generated text? Methinks there’s a heck of a lot more work needed on that identification program. :-)
Every time I use the 'phone I get some mindless machine on the other end telling me to press button #1 to hear further stupid choices. Isn't it intelligent enough to know that I have used the same telephone for the last fifty years and it doesn't have buttons, just a dial.
Politicians are a prime example of why this AI con is going to be bad for the human species. The Poli's all behave the same as these so-called AI's answering the question they want to answer rather than the one that was asked? I fail to see any benefit to homo-sapiens from either of them. Both seem to feed on their own "output" and seem oblivious to the real world. Perhaps we can engage the Poli's with the AIs and they can all live in their own little world and therefore have no impact on the rest of us. Hey,... have I just found a real use for these expert-systems? <LOL>
Politicians are a prime example of why this AI con is going to be bad for the human species. ...... Big_Boomer
Hmmm? The very fact that there can be humans who might be thinking AI a con, rather than expecting and accepting their developments much more as anotherworldly intervention and remotely controlled opportunity for revolutionary evolutionary quantum leaping progress in Live Operational Virtualised Earth Programs/Projects/Pogroms, leaves the field practically wide open and defenceless against all manner of unrecognised and therefore unhindered AI and IT attack/stealthy assault/deep system infiltration.
And surely even now, all who matter and should know, know that .....
Perhaps we can engage the Poli's with the AIs and they can all live in their own little world and therefore have no impact on the rest of us..... is vain wishful thinking without any hope of ever being either true or in any way acceptable, and the impact of what is developing all around everything and everybody is going to be monumentally ground-breaking and fundamentally Earth-shattering ...... with humanity fated/destined to follow to catch up and benefit from changes rather than arrogantly presume to be in any creative position or innovative situation which actually leads to anywhere worth arriving at.
Look all around you and see the evidence right before your own eyes, ears and open minds ...... they've been there before, and done all of that and rendered everything FUBAR.
We might just find that when someone has to make a decision they'll think along the lines of "oh, fuck it, I'll get my AI to write that. It'll be mostly right, and it'll save me three hours so I can go have a nap instead", and the people reading what the AI will think along the lines of "oh, fuck it I can't be bothered to read all that, I'll get my AI to read it and tell me if it's OK. That'll save me an hour, so I can get to the pub a bit earlier".
And before we know it, we'll have careered headlong into a society where nobody does much work, big decisions are left to AI and our brains atrophy.
Idiocracy anyone?
We might just find that when someone has to make a decision they'll think along the lines of "oh, fuck it, I'll get my AI to write that. It'll be mostly right, and it'll save me three hours so I can go have a nap instead", and the people reading what the AI will think along the lines of "oh, fuck it I can't be bothered to read all that, I'll get my AI to read it and tell me if it's OK. That'll save me an hour, so I can get to the pub a bit earlier”. ..... anthonyhegedus
Do you realise, anthonyhegedus, that you have very accurately described how department ministers and prime ministers in Parliamentary Cabinet Office situations function in the figure heading operations which have them presenting themselves to the obviously easily fooled masses fraudulently as leadership material?
And furthermore, whilst upon the subject, if it wasn’t for idiots, do you think Parliament would be given the time and day and so much media coverage for their daily soap opera offerings? And yes, that is a rhetorical question which suggests that such is a major problem not being sorted by present actors.
J’accuse ... but am only pointing out what is surely clearly obvious to any Tom, Dick or Harry with a titter of wit and common sense to exercise.
Had fun with Bard. It was ignorant on several historic subjects, getting key facts wrong, usually referring to Wikipedia as its source, so no surprise there, but mostly it was a starting point, or able to show other avenues of questions I might follow.
But mostly it was okay. Just... okay. It was better than someone who had no interest in dealing with humans, but not as good as someone who actually enjoyed their job. It was funny trying to get it to respond on certain historical figures like William the Bastard. It flat out refused to respond. It simply could not understand that William the Bastard is a real, important, historical figure. I had to ask it about William the Conqueror instead... it got that name, just not his original name. Still got quite a bit about him wrong, mind, but it was like pulling hen's teeth to get anything at all.
Goes to show: AI is really AS. No idea of context, no idea of how words really work, but then I shouldn't be surprised: It's learned bias from the humans who trained it.
> For example, a poem from a lover is likely received less warmly if that poem was generated by ChatGPT.
Is literally anyone in the universe who doesn't work in tech surprised by this? If I was dating someone and they gave me a love poem generated by ChatGPT I think I might actually punch them in the face. And I'm not normally inclined to do things like that!
It feels like this stuff is being sold as the future and I keep hearing about the mythical people using it for stuff but I don't know anyone who's excited by it. It creeps me out a bit, honestly. This may be a sign that it's time to start spending less time on the internet and talk to my friends in person more, I suppose. Oh well, we had a good run.