Re: Stop the AI Marketing spin
> They do not hallucinate, they output an error... Don't let marketing win.
Huh?
You do know that the whole "hallucinating AI" label comes from the deriders of the (excessive) use of LLMs, *not* from the people trying to market them?
> They do not understand, there is no intelligence, they misinterprete the command (prompt). So the "AI" failed to interpret the user command correctly and continued running which produced errors in the output.
Ah, no. The "hallucinations" are not a failure to interpret the user command. They are a failure to stop and respond "Don't ask me, not a clue mate". Instead, they just keep trawling through their innards, spitting out less and less accurate - and eventually less and less coherent - output, faithfully following the user request over the edge of the cliffs of sanity. Consider the stories of chat sessions where the user kept prompting for more and more output and the results got more and more absurd: the LLM is most definitely still "following the prompt"[1], just way past the point where we'd hope it would stop.
Using the word "hallucinate" is quite reasonable, as it gives the general User a sense of the *way* in which the problem is, well, a problem. If you have a philosophical objection to the term, then suggest something else that can be used instead to indicate that particular type of behaviour: "Gone off the rails" might serve better?
> they output an error
That's not a good replacement. It is far too broad and loses any sense of the *way* that these things are going wrong.
Plus, given how we usually refer to software behaviour, the problem is that it most distinctly is *not* outputting "ERROR: not a clue, mate"[2]. It is still doing what it was made to do, still wandering around its network, spitting out letters and words. The difference is that, now, *YOU*, the person reading those words, are starting to wonder about the usefulness of those words in that particular order.
If you tell User A that IT Person B is prone to hallucinating - to seeing/hearing things that differ from reality, without B being able to realise when they have slipped, so that B is not suddenly being malicious but is still reporting the best they can - then you actually have a pretty good analogy for the LLM's behaviour, and the responses can be the same: A can take B's answers with a pinch of salt and do the work to verify what B told them; or A can stop asking questions of B entirely; or A can just decide to take B at their word every time.
Remember, we are using "hallucination" not to market these things, but to point out to Users that the machines go doolally in ways that other software doesn't: it is something new and weird that the User has to be aware of when they encounter these beasts.
[1] Whatever it actually does, and however it does it, in order to "follow the User's prompt", it is still doing that same fundamental process the whole time.
[2] And the LLM software is more than likely entirely capable of generating error messages in the way we are all accustomed to - "ERROR: out of memory", "ERROR: cheese store empty" - it's just that we, the poor benighted Users, are not likely to see those. Unless we get to peek inside the logs.