Re: AI models routinely lie
A dictionary1 gives us a couple of definitions: "to make an untrue statement with intent to deceive" and "to create a false or misleading impression".
The former definition requires intent, while the latter simply requires a wrong answer, so in one sense, yes, computers can be said to lie, in that they can produce wrong answers.
Intent requires a state of mind and, as such, is something of which only humans are capable, at least to the best of my knowledge.
Perhaps the programmers of the device may practice to deceive, but that's not the machine's fault.
To determine whether a program has lied would require us to trace all of its logic, to see whether the machine arrived at a correct answer and then, for some reason known only to its software, decided to produce a wrong answer instead.
David Gerard suggests that whenever we see an LLM lying, it is because someone told it to do so2.
To paraphrase Arthur C Clarke, "Any sufficiently rigged demo is indistinguishable from magic."
______________
1 lie: verb
2 ‘Reasoning’ AI is LYING to you! — or maybe it’s just hallucinating again