Thank you for asking
I have just caught up with the writings of the recently deceased Harry Frankfurt, specifically his book "On Bullshit" [1]. AIUI, Prof. Frankfurt identified up to eight different forms of lying, one of which is bullshit: the output of a person who pays no attention to the truth or falsity of what is uttered, but simply utters whatever suits them at the time. We can all think of individuals we know, or know of, of whom this is sometimes true. Frankfurt argues that this is the worst sort of lying because of how dangerous it can be.
I submit that the output of LLMs of every sort is incontrovertibly bullshit, all the time. The model doesn't know or care whether its output is true, verifiable, or grounded in fact. If I'm right, then nobody should be using the output of a prompt like "Should I hire $PERSON, whose application reads $TEXT?", much less "Should I put this person to death?", because the answer is certain to be bullshit.
[1] Frankfurt, Harry G. On Bullshit. ISBN 9780691122946