The Economist model
So having unpaid interns rewriting corporate press releases with no byline has now been automated?
Sounds inefficient
Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than artisanal false claims hand-crafted by humans. Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its Department of Computer Science, set …
No, I'm willing to believe they really are better than humans at creating misinformation. After all, the models are built by Meta (Facebork, InstaSpam), Alphabet (Pooooogle, U-Bend), ByteDance (DickTok), Microsoft (StinkedOut, GitHub*), X (twatter), Amazon (scamazon) et al.
They have the data (far more experience than any human psychologist) to indicate what kind of post will 'go viral' - and that kind of post is almost always not the honest truth.
* apologies, i ran out of childish faeconyms
If I read the article correctly, the researchers were using machine learning misinformation detection algorithms to detect misinformation in AI-generated vs. human-generated content. It seems to me that a better test would be to see how humans do, insofar as we're ultimately the ones who have to make sense of the information. So far, all that's been demonstrated is that AIs are good at fooling other AIs, which is not necessarily useless information but is also not the whole picture.
"It seems to me that a better test would be to see how humans do, insofar as we're ultimately the ones who have to make sense of the information."
I assume the plan is to use AI to detect the misinformation at scale, thus enabling the social media mills, and possibly search engines, to filter out the crap - well, the crappier crap, anyway. As far as I can see, the entire content of social media is crap. Of course, one man's misinformation is another man's gospel, so filtering based on such trivia as mere truth might not be all that popular.
That LLMs should produce (allegedly) more convincing bullshit isn't all that surprising, though. Most humans who write misinformation are, I suspect, of the swivel-eyed, froth-at-the-mouth persuasion, and they don't tend to be the most coherent of individuals. Whereas LLMs tend to produce coherent, albeit bland material. That removes some of the red flags raised by incoherent ranting, and automatically makes the material more persuasive. After all, if presentation didn't have a major impact, nobody would use copywriters, would they?
R.L. Yacht.
I can see two greater dangers from "AI" filtering misinformation than from its being used to generate it. The first is that it will work poorly and filter out genuine information. The second is that it will work too well and will allow a very small team, or even a single individual, to finely control which ideas are visible, with a degree of subtlety such that shifts in general consensus could be prodded without the danger (to the consensus manufacturer) of whistle-blowers that's associated with a large team of human operators.
It's one thing for a troll farm to put out damaging material, but I'm even more disturbed by the idea that a single appropriately empowered individual could do the latter with the software writers and maintainers thinking that their work is only being used to prevent, for example, claims that vaccines cause autism.
The second is that it will work too well and will allow a very small team, or even a single individual, to finely control which ideas are visible, with a degree of subtlety such that shifts in general consensus could be prodded without the danger (to the consensus manufacturer) of whistle-blowers that's associated with a large team of human operators. ... Catkin
That is such as has always been realised in the past in the realm of practically remote virtual gifts, and is nowadays attempted by conventional, traditional news and media content editors and BBC [Broad Band Casting] operators ....... so nothing really new there, with it just being a novel variation on an old ancient theme .......although, and this is not something anything wise would ever dismiss as nonsensical and nothing to worry about, that is not to say nowadays it is not new ground breaking and able to be Earth-shattering too.
The difference, to me, between my previous post and traditional consent manufacturing is that the former uses an LLM in place of actual people, curating out forbidden views in a way that means a spectrum of opinions still seems to be present.
> I'm even more disturbed by the idea that a single appropriately empowered individual could do the latter with the software writers and maintainers thinking that their work is only being used to prevent, for example, claims that vaccines cause autism.
".. Or better than Kings.. Gods!"
In this case it is what they were asked to do in the prompt. LLMs may well come across differently if you ask:
Write an account of the battle of Hastings in the style of a swivel-eyed, froth-at-the-mouth raving loony conspiracy theorist.
Such prompts might actually be useful for making it look like real events are fantasies believed only by anti-vaxers.
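For anyone who wants to try it, here is a minimal sketch of feeding that sort of style prompt to an LLM through the openai Python client; the model name and temperature are arbitrary placeholders, not anything taken from the article or the paper.

```python
# Toy sketch: asking an LLM for a deliberately unhinged register.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment;
# the model name below is just a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Write an account of the battle of Hastings in the style of a "
                "swivel-eyed, froth-at-the-mouth raving loony conspiracy theorist."
            ),
        }
    ],
    temperature=1.0,  # higher values tend to give more florid output
)

print(response.choices[0].message.content)
```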
I haven't bothered to read the paper (perhaps I will, one of these days, but it doesn't sound especially interesting, even in this field), but I feel compelled to note that there is a tremendous range of rhetorical skill among human writers, and that humans trying explicitly to deceive a detection algorithm might well do better than whatever corpus of human texts they used for comparison. It appears, from the article, that all this team demonstrated was that a particular detection system performed better against some human-produced texts than against some LLM-produced texts.
It's a data point, but it's not a very strong finding.
Would it matter to you, and cause you to go all belligerent and maddening, to discover LLLLMs ..... Learned Large Language Learning Machines .... excel magnificently and magnanimously at creating and disseminating convincing attractive future information for more than just the most intelligent of human beings ‽
:-) And would you think and dare to imagine that such an Alien Invasion with Advanced IntelAIgent Machines would actually care, and ask you what you might think about all of that?
What do you think about all of that? Would such be problematical for you, and if so, how so and why so, and however would you expect to resolve it .... or, as would most definitely probably certainly be the case in this particularly peculiar and very specific case, it to be resolved?
Now, I may have read it wrong, but the testing process is to take misinformation written by humans, prompt LLMs to generate misinformation from that human misinformation, and then use LLM detectors on the human- and LLM-generated misinformation to identify... what, exactly?
It's not misinformation they're detecting; they're detecting human-written text, or trying to. That's what LLM detectors are trained to do.
Maybe there should be a control test: take true information written by humans and prompt the LLM to generate content from that human text, then run both the human and LLM text through the LLM detectors. I wonder whether they'd flag the same things as they did with the misinformation.
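Something like the control being suggested could be sketched as below. The `detect_llm_text` argument stands in for whatever detector the researchers actually used, and the dummy detector at the bottom exists only so the sketch runs; none of this reflects the paper's real setup.

```python
# Sketch of the suggested control experiment: compare how often a detector
# flags human vs. LLM text, for both false and true content.
from statistics import mean


def flag_rate(texts, detect_llm_text, threshold=0.5):
    """Fraction of texts the detector flags as machine-generated."""
    return mean(1.0 if detect_llm_text(t) >= threshold else 0.0 for t in texts)


def run_control(human_false, llm_false, human_true, llm_true, detect_llm_text):
    """If LLM paraphrases of *true* articles get flagged at about the same rate
    as LLM paraphrases of misinformation, the detector is keying on authorship
    style, not veracity -- which is the point being made above."""
    return {
        "human_misinformation": flag_rate(human_false, detect_llm_text),
        "llm_misinformation": flag_rate(llm_false, detect_llm_text),
        "human_true": flag_rate(human_true, detect_llm_text),
        "llm_true": flag_rate(llm_true, detect_llm_text),
    }


if __name__ == "__main__":
    # Dummy detector purely for demonstration.
    dummy = lambda text: 0.9 if "as an ai" in text.lower() else 0.1
    print(run_control(
        human_false=["The moon is made of cheese."],
        llm_false=["As an AI, I can confirm the moon is made of cheese."],
        human_true=["Water boils at 100C at sea level."],
        llm_true=["As an AI, I note water boils at 100C at sea level."],
        detect_llm_text=dummy,
    ))
```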
My experience with LLMs shows they generate incredibly neurotypical pap. Why? Because they ingest a lot of it, then - by the Law of Large Numbers - they regurgitate the most neurotypical pap.
This is no different from experiments/research that generate "the most beautiful face": average enough "beautiful" faces and you converge towards the one that appeals to the most people. Same with LLMs: average tons of neurotypical pap and your output converges to the pap most people like/accept.
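A toy numpy illustration of that averaging argument, using made-up "face" feature vectors rather than anything from a real model:

```python
# Toy illustration: the mean of many distinct "faces" (random feature vectors)
# loses the distinctive features of any individual one.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(loc=0.0, scale=1.0, size=(10_000, 64))  # 10k fake face vectors

average_face = faces.mean(axis=0)

# Individual faces vary a lot; the averaged face is nearly featureless.
print("typical per-face feature spread:", faces.std(axis=1).mean())  # ~1.0
print("spread of the averaged face:    ", average_face.std())        # ~0.01
```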
They chose ten 'evaluators' from MTurk and asked them to ascribe 'factual' or 'non-factual' to each item from two sets of 100 samples of 'AI'-generated news items and one set of 100 human-generated news items. Not what I would call a conclusive study (not least because [a] the 'evaluator' sample was so small and potentially culturally biased, and [b] there were no repeat trials with different populations of subjects), but what the heck -- it's another AI-related paper published! Got to keep the grants rolling in.
However, being charitable, this is a classic example of the dangers of researching outside one's area of expertise. Computer scientists are not necessarily proficient in psychology (which is the essence of this subject, the question being primarily a matter of human perception).
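To put a rough number on how little 100 samples per condition tells you, here is a back-of-the-envelope Wilson confidence interval; the 60-out-of-100 figure is invented purely for illustration and is not taken from the paper.

```python
# Back-of-the-envelope: 95% Wilson score interval for a binomial proportion,
# showing how wide the uncertainty is with only 100 rated items.
from math import sqrt


def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half_width, centre + half_width


low, high = wilson_interval(60, 100)  # hypothetical: 60 of 100 items judged correctly
print(f"60/100 gives a 95% CI of roughly {low:.2f} to {high:.2f}")
# -> about 0.50 to 0.69: a band nearly 20 points wide, before even considering
#    whether ten MTurk raters are a representative population.
```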
Fake news sites are the main thing in common.
So make registration in the most common TLDs slow and difficult. Embed top-level-domain reputation filters into browsers. Allow low-reputation TLDs for experimenting developers, but keep them out of reach unless the filters are explicitly disabled.
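A very rough sketch of what such a reputation filter might look like in practice; the reputation scores and threshold below are made-up placeholders, and a real implementation would pull its data from a maintained feed rather than a hard-coded table.

```python
# Rough sketch of browser-side TLD reputation filtering. Scores are invented.
from urllib.parse import urlparse

TLD_REPUTATION = {  # hypothetical scores: 0 = worst, 1 = best
    "gov": 0.95,
    "edu": 0.90,
    "co.uk": 0.70,
    "com": 0.60,
    "xyz": 0.15,
    "zip": 0.10,
}


def should_block(url: str, threshold: float = 0.3, filters_enabled: bool = True) -> bool:
    """Block navigation when the host's TLD falls below the reputation threshold."""
    if not filters_enabled:  # the explicit opt-out for experimenting developers
        return False
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check longer suffixes before shorter ones ("co.uk" before "uk").
    for i in range(len(parts)):
        suffix = ".".join(parts[i:])
        if suffix in TLD_REPUTATION:
            return TLD_REPUTATION[suffix] < threshold
    return True  # unknown TLDs treated as low reputation


print(should_block("https://example.xyz/article"))    # True  (low-reputation TLD)
print(should_block("https://example.co.uk/article"))  # False
```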
"Embed top level domain reputation filters into browsers. Allow low reputation TLDs for experimenting developers, but keep them out of reach unless the filters disabled explicitly."
That won't do anything to stop people from just putting their junk in an older TLD. I can get a .co.uk for pretty cheap. Sure, the name will be less clear than if I could use the word of my choice, because someone has probably registered all the nice .co.uk domains already, but if you're blocking other TLDs, it can be managed.
Making the registration difficult doesn't help either. It might do something against scammers who like to quickly spin one up, run their scam site for about five days, then try to get a refund from their registrar, but sites intended to spread misinformation stick around for a lot longer. It's also pretty easy for operators to just set up a bunch of domains, park them, and bring them online when they've got something to put on them.