We're back to the 1700s.
This is like the 1700s, when publishers wrote whatever they liked based on who would buy their pamphlets.
Seems to me we are back there now!
But then the issues of Trust and Truth came up, so Editorial Boards were created to oversee standards, intentionally build reputations for Truth and Integrity, lay down journalistic standards, etc., so that readers who cared about that stuff could go to trustworthy sources.
Seems to me like we need that online more than ever, and specifically now in the data that feeds Large Language Models (LLMs).
Just because it's on Substack or Reddit doesn't make it true, and I fear we're back to needing Circles of Trust, because we can no longer know or trust who we're dealing with,
unless it comes via a known and trusted recommendation.
A bit like the Masons ... and so back to the future we go again!
So for LLMs, word associations should be weighted much higher when they come from trustworthy sources (& arguably lower for proven liars, very much like we do subconsciously as humans).
And that in turn should lessen untruths and hallucinations from AI.
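To make that concrete, here's a rough sketch only - the source names, trust scores, and function names are all made up for illustration, not any real training pipeline. The idea would be to scale each training example's contribution to the loss by a trust score attached to where it came from:

```python
# Hypothetical sketch: per-source trust weights applied to training loss.
# Source labels and scores below are invented for illustration.
TRUST = {
    "peer_reviewed_journal": 1.0,
    "established_newsroom": 0.9,
    "anonymous_forum_post": 0.3,
    "known_disinfo_outlet": 0.05,
}

def trust_weighted_loss(examples, loss_fn):
    """Scale each example's loss by its source's trust score,
    so low-trust text contributes less to what the model learns."""
    total, weight_sum = 0.0, 0.0
    for ex in examples:
        w = TRUST.get(ex["source"], 0.5)  # unknown sources get a neutral weight
        total += w * loss_fn(ex)
        weight_sum += w
    return total / max(weight_sum, 1e-8)
```

Of course, that neutral default for unknown sources is exactly where the "who decides" problem below bites hardest.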
I know, I know, there are a million issues over who decides who is trustworthy, how that trust is earned and maintained,
and how to avoid being "bought" or corrupted by malign influences, etc.
- or simply by cultural biases that creep in unwittingly.
But we should at least be having the conversation.
(And don't get me started on the Platforms ducking editorial responsibility - their algorithms decide who sees what, so in my book they have culpability for pushing lies & disinfo, whatever the legal loophole may be!)