"...also threatens AI-generated sales emails,..."
Given ChatGPT's well-documented propensity for spouting a load of complete and utter bollocks if it thinks that's what you want to hear, this is going to be different how exactly?
Salesforce is attempting to jump on the ChatGPT bandwagon with a slew of product updates based on application of the language model to its CRM software estate. The SaaS pioneer said its Einstein GPT — Einstein is its analytics and AI tool — “infuses” Salesforce’s proprietary AI models with “generative AI technology” in the hope …
Customers would welcome the toned-down levels of bollocks in that case. Even ChatGPT wouldn't appropriate Hawaiian cultural concepts while grandstanding about every trendy topic, then act no different from any other corporate entity by over-hiring and firing its so-called ohana. That was the work of a multi-billionaire CEO and his equally overpaid Hollywood pals, assigned creative BS titles, who never fail to impress with how out of touch they are simply by opening their mouths. ChatGPT is by far the less psychotic option, IMO.
In a pre-canned quote, Salesforce CEO Marc Benioff said: “Einstein GPT, in combination with our Data Cloud and integrated in all of our clouds as well as Tableau, MuleSoft and Slack, is another way we are opening the door to the AI future for all our customers, and we’ll be integrating with OpenAI at launch.”
Looks like ChatGPT wrote that, so you can toss Benioff and save some moolah. Get better decisions too. Certainly better than the fop-haired wanker BT Gav made as your former Strategy VP.
A couple of weeks ago, I asked ChatGPT to write me a function in C that converts upper case to lower case, coping with accented characters, using the CP1252 character set.
It obligingly wrote me something that was complete bollocks. It seemed to think the accented characters started at 128, only handled twenty-odd of them, and the substitutions were completely gonzo. I did laugh at it thinking that the lower-case version of '×' was '÷'.
I asked for a version in BBC BASIC. Again it obliged, with equally bizarre (and wrong) substitutions. It also produced broken code (an = style function return inside a PROCedure, for example, which only works in an FN).
So I asked if a lookup table might be better. It agreed, and spat out some code that was horribly broken (the accented characters end up with character codes over 255). I finally asked for that in C, and it gave me something so bizarre it made my head hurt and I gave up trying to work out what the hell it was doing.
Remember, this was a pretty simple test: make a character lower case in an eight-bit character set that's been around for donkey's years, also lowercasing accented characters. It's not a difficult question. I can think of three options off the top of my head (though a big switch construct would just suck as a solution; my preferred option would be a byte array, using the character code as the index into it).
Anyway, the amount of wrongness in those responses means that ChatGPT really isn't up to putting programmers out of work just yet.
A possibly unpopular opinion
From what I've seen of ChatGPT interactions, I am of the opinion that it isn't 'intelligent', has no 'understanding', and very definitely isn't sentient.
Unfortunately, some people are treating it as if it is.
They see it as some sort of oracle that gives human-language answers to human-language questions; they don't realise that at least some of those answers are complete BS.
What seems to be happening is you give it a prompt like "Write me a couple of paragraphs about the band Pink Floyd", it consults its database built from the internet for articles about "the band Pink Floyd" that are at least two or three paragraphs long, builds a template from the results, and then generates a reply from that template.
So it can't help but look like human language, as that's what it was generated from.
It's probably nothing like that in practice, but it would fit the examples so far.
The problem comes from the search terms: it also seems to find results for Pink, Floyd, Band, and The.
So it could generate a paragraph about how Pink Floyd went down with the Titanic as "The Band" played on, or about how they all wear pink, or literally any old toss it found on the internet, whether it's right or wrong.
Because it has no intelligence, it doesn't understand the words; it's just digesting and regurgitating our own words back to us. Sadly, some people are taking that as a sign of sentience, and even more sadly, a lot of those people are in positions of authority.
There's the journalist on here who asked about himself and was told he was dead (https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/). There would be lots of obituaries of people with his first name and lots with his last name, so it generated a paragraph saying he was dead; some of those obituaries were in the Guardian, so it generated a fake link that looked like all the others it had seen.
It's just a free-associating bot spewing mashed-up data taken from other sources.
ChatGPT should go straight on the bonfire before some idiot puts it in place of actual people and it starts being asked questions where the answers affect people's lives.