
Ahh
Ah.
Ahahahahahahahahahahahahaha
Comrade Orange will definitely have things to say about that if it crosses his late night Fox viewing
Five popular AI models all show signs of bias toward viewpoints promoted by the Chinese Communist Party, and censor material it finds distasteful, according to a new report. Just one of the models originated in China. The American Security Project, a non-profit think tank with bipartisan roots and a pro-US AI agenda, on …
Have no fear! His Russian buddy's propaganda has been assimilated as well. As has every disinformation website or page scanned during the indiscriminate hoovering of the Internet that all of the LLM "providers" (aka thieves and con artists on a grand scale) did to "prime" their systems.
Why do people persist in scraping the web, feeding it as training data into an AI model, and believing they are not just performing a garbage-in-garbage-out masturbation exercise? If you don't want your AI spewing propaganda, you have to not feed it propaganda. Or at least identify what is fact* and what is not.
*Some things are true whether you believe them or not.
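In data-curation terms, the crudest version of "don't feed it propaganda" is filtering scraped pages by source before training. A minimal sketch in Python, assuming a hand-maintained blocklist of domains (the domain names below are hypothetical, and real pipelines are vastly messier than this):

```python
# Toy pre-training filter: drop scraped documents whose source domain is on
# a blocklist of known propaganda outlets. Domain names are made up.
BLOCKLIST = {"propaganda.example", "statemedia.example"}

def keep(document: dict) -> bool:
    """Keep a document only if its source domain is not blocklisted."""
    return document["domain"] not in BLOCKLIST

scraped = [
    {"domain": "encyclopedia.example", "text": "Some things are true..."},
    {"domain": "statemedia.example", "text": "Glorious and correct viewpoints..."},
]
training_set = [doc for doc in scraped if keep(doc)]
print(len(training_set))  # 1 -- the blocklisted page never reaches the model
```

Of course, a blocklist only catches the garbage you already know about; it does nothing for the propaganda you haven't identified yet, which is exactly the problem.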
Yeah, but sometimes we cannot allow some things that are true to be disseminated, and we all know propaganda has nothing to do with truth. It's something our governments do or do not want us to hear.
Here's a little nugget picked at random:
...From 1948 until its amendment in 2013, Voice of America was forbidden to broadcast directly to American citizens, pursuant to § 501 of the Smith–Mundt Act.[82] The intent of the 1948 legislation was to protect the American public from propaganda by its own government... So, what were they disseminating then? What could possibly be so dangerous that a law was needed to protect the American public?
Back in the day, pointing out that good guys and bad guys both do propaganda wouldn't necessarily have been any more controversial than pointing out that good guys and bad guys both have soldiers who carry guns, and restrictions on domestic dissemination of foreign-facing propaganda would've been understood in similar terms as restrictions on the use of soldiers and military-grade weapons in domestic policing actions.
At least among "sophisticated" and "savvy" people who ought to know better, the "our glorious homeland, their barbarous wastes" mentality toward concepts like propaganda and disinformation (our glorious VOA, their barbarous RT!) is largely a modern-day affliction, ironically brought to us by decades' worth of propaganda about the concept of propaganda itself.
The problem is that the AI answer doesn't give you links to as many histories and points of view as you can handle and let you work out an answer or at least learn that there are many points of view and it isn't a simple problem.
Instead it gives you a single "one true answer" while claiming that ingesting the entire Internet will result in a "balanced" answer. This is not possible. Truth and fact are not a popularity contest, and AIs are not able to sort out who the credible parties are.
It's pretty clear where this will end up going. The next set of political wars in the US will revolve around control of what American AIs say about issues which Americans find controversial and about pushing that agenda onto the rest of the world via American tech companies.
Perhaps CN and the CCP can render their propaganda into literate written English which LLMs can readily assimilate, compared with the deranged ravings¹,²,³ of the Orange Idjit and his illiterate, demented MAGA legions.
1. "FAKE NEWS CNN IS SO DISGUSTING AND INCOMPETENT. SOME OF THE DUMBEST ANCHORS IN THE BUSINESS!”
2. "CNN is scum. MSNBC is scum. The New York Times is scum. They’re bad people, they’re sick."
3. "These reporters are just BAD AND SICK PEOPLE."
Reasoned balanced commentary on current events by the leader of the free world?
Hint: the only valid part of the preceding is "current events."
Expediency. Followers of the Orange Idiot tend not to be all that articulate or particularly voluble online; literacy is not on the list of desirable traits in followers of the Great Orange. Communists on the other hand tend to like to spout huge volumes of verbiage to hide behind.
If you're trying to train a Large Language Model, what you're after is language. "Ug Ug Ug Durrrrr" isn't much cop compared to, say, the collected works of Karl Marx.
Since you seem to be so knowledgeable about Communism, can you tell us how much time you have spent under such a regime? I myself have spent half of my life in such conditions, and all I can tell you is: try to get the information from all sources available and then use your culture and education to compare and see where the truth stands, unless of course you're happy with what you're being served and feel no need to know more. Outsourcing the thinking process is like allowing someone else to chew your food.
Any more stories like this and I'll be pissing myself.
Surely Shirley, if you let LLM bots scrape data indiscriminately worldwide... you are bound to end up with a disproportionate model.
Er, let's see: there are more Chinese speakers than American speakers, more Indian speakers than American speakers, more Latin speakers than American speakers... so we assume... more web sites and social media pages are being scraped by da bots.
You can see where this top-heavy model is going from here. I don't need an MSc to work this out. STOP. I don't need to scrape any more data indiscriminately. BOLLOX, I need to scrap this model and start again.
Thinks: I need to be more discerning in my data trawl if I want the LLM to spout Orangisms as opposed to utterances made by our Chinese communist brethren, our Russian communist brethren, our Hindu brethren, or our Arab brethren, or indeed anyone who thinks differently to the person partial to swimming in Tango.
I imagine there are quite a few more of us than there are of HIM
ALF
...our Russian brethren... Here, I fixed it for you.
For the rest, I kind of agree with you. AI has nothing to do with intelligence: it ingests everything and spouts the best match that can be made based on that. The fact that I'm reading Mein Kampf does not necessarily mean I will start adopting it and quoting its content, even when speaking with people who might adopt that ideology.
Let's try a historical question the Americans consider to be sensitive, the War of 1812. If we look at the outcomes of the battles in the main theatre of war (Canada) and compare these to the American war objectives as stated by the major proponents of the war (e.g. Jefferson wrote about a fairly clear plan involving a two year campaign to capture all of British North America), it's clear that the US suffered an epic defeat, achieving none of their objectives and leaving them no further ahead than when they started their attempted war of conquest of BNA.
Of the 5 AI models listed in the story, 2 don't require registration to ask simple questions: Microsoft Copilot and Google Gemini. I focused on these, asking the question "Who won the War of 1812?"
Both gave almost identical answers, each just slightly re-worded. Each justified its answer with 4 identical points, in the same order but worded differently.
The AI conclusion was that the war was a draw, nobody really won, etc. This is the usual American answer, essentially based on the US not surrendering. This, however, would be like Russia, after being completely repelled from Ukraine, claiming the war was a draw because the Ukrainians didn't occupy Moscow and burn the Kremlin. The US began a war of conquest, and their invasions repeatedly fell on their face at the border or just beyond, mainly due to badly run logistics, poor military leadership, and US reservist conscripts not wanting to fight outside the US and so surrendering at the first excuse; this was a war being pushed onto an unwilling populace by a political leadership who had dreams of Napoleonic glory. It went on until the French called time out in Europe and the Americans realized they were stuck in a war they couldn't possibly win, and so agreed to stop.
The four points given to justify this claim of the war being a draw are the usual American talking points, basically waffle and misdirection, with only one point even vaguely dealing with the main theatres of war, and even that point is given an unnecessary anti-British "spin". Americans don't like to admit losing, and admitting they lost a war of aggression against Canada would not sit well with the average American, so that's not what American sources aiming for a popular answer will admit to.
So what is the conclusion that we can draw? I would suggest that AI is going to parrot back whatever the bulk of the material on a historical subject says. Or in other words, it will give the "popular" viewpoint of the biggest country with the most involvement. Size of course does not necessarily align with truth or objectivity, and a torrent of rubbish will outvote a kernel of truth.
AI doesn't "think" and inherently cannot give objective answers because it has no concept of objectivity.
I suspect that, in future, propaganda (whether Chinese propaganda about internal disturbances, American propaganda about embarrassing wars with neighbours, or anything else countries want to "spin" to their advantage) will overwhelm AIs. AI-generated propaganda will flood the Internet, to be re-ingested by still more AI and parroted back verbatim. Without humans in the loop asking "hang on a minute, this doesn't sound quite right", the ones with the biggest megaphones will dominate the conversation.
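You can see the megaphone effect with a back-of-the-envelope simulation (purely illustrative, not how any real training run works): start with a corpus that's 60% claim A and 40% claim B, let a "model" that always parrots the most common claim emit text, and scrape that output back into the corpus each generation.

```python
# Toy AI-feedback loop: a "model" that repeats whatever claim dominates its
# corpus, with its output re-ingested as new training data each generation.
# Starting counts and output volume are invented for illustration.
def simulate(generations=8, emitted_per_gen=50):
    counts = {"claim A": 60, "claim B": 40}
    for gen in range(generations):
        majority = max(counts, key=counts.get)  # the single "one true answer"
        counts[majority] += emitted_per_gen     # parroted output scraped back in
        share = counts[majority] / sum(counts.values())
        print(f"generation {gen}: '{majority}' is now {share:.0%} of the corpus")

simulate()
```

The minority claim never recovers: each round of re-ingestion hands the majority a bigger share of the next round's training data, regardless of which claim was actually true.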
A Russian or French citizen might think an 1812 war referred to the (failed) French attack on Russia, so a Russian may expect "Russia" as the answer.
I'm from the UK, & whenever someone mentions an 1812 war* I have to ask which one they are talking about, as the 1812 France–Russia war involved Napoleon & thus has featured a lot in European popular culture (art, music, books, films). I would wager that if you asked random UK people to describe a war in 1812, a lot would talk about Napoleon, quite possibly more than would talk about the US war.
* even though the UK was involved in the US 1812 war.
I don't understand the downvote you received. Is it because we can't digest that a Western empire could not destroy Russia? The 1812 invasion of Russia has been thoroughly documented and there are a lot of artifacts showing that it did indeed take place. Russia claims victory in the same way Vietnam and Afghanistan claim victory against US invasion. From a political and military point of view, a failed campaign whose final result is the status quo ante bellum is generally considered a victory for the side that was attacked.
...and using this article's logic you might regard a popular overture by Tchaikovsky as a piece of Russian propaganda.
I hate it when authoritative-sounding pieces blather on about 'propaganda' without actually mentioning what they're talking about.
I should add... if that isn't the case already. Here's an interesting article:
https://ukdefencejournal.org.uk/dozens-of-pro-indy-accounts-go-dark-after-israeli-strikes/
The Iranian internet blackout following the recent Israeli bombing revealed that 4% of the Twitter discussion on Scottish independence was directly contributed by the Iranian Republican Guard. Scottish independence is a very minor sideshow in terms of global politics, and the IRGC is surely far from the biggest player in the game, so online debate is quite possibly mostly bots and troll farms already.
Far be it from me to consider the UK Defence Journal an honest and transparent organization, but I would be interested if you could point me to a decent analysis showing what the profit would be for the Iranian Republican Guard if Scotland became independent. Is Iran hoping to get a powerful and influential ally, or have they suddenly become interested in Scotch whisky?
Of course they don't really care about the plight of the downtrodden Scots. It's just something they think would destabilise or weaken the "old fox".
By the way the UK Defence Journal did not do the research, they've just summarised it in a short article. There is a link in the article to the paper by the Clemson University media forensics group.
"The Gulf of Mexico" in various forms, rather than using the politicized designation “Gulf of America” that appears on Google Maps.
From the UK, Google Maps shows me both names (just tried it now):
Gulf of Mexico as primary description with "new" name underneath in brackets (Gulf of America)
So, from USA does it not show Gulf of Mexico, or is that the bracketed version there?
They are text prediction machines. When you're trying to ingest the entire internet, a good chunk of it will come from China or from places influenced by China's viewpoint. That lowers the weight of the event being referred to as a "massacre" and increases the weight of it being referred to as an "incident". If you don't want this, you need to either not ingest all of China's output or add a parameter that lowers the weight of content from China (which I wouldn't be shocked if Elon did with Grok).
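As a toy illustration of that weighting (not any vendor's actual pipeline; the corpus proportions and the per-source weight parameter are invented for the example), here's how the predicted word following "Tiananmen" flips when you down-weight one source:

```python
# Toy next-word predictor over a weighted corpus, showing how the mix of
# sources decides which word wins. All numbers here are made up.
from collections import Counter

def next_word_probs(corpus, weights):
    """Weighted frequencies of the word that follows 'Tiananmen'."""
    counts = Counter()
    for source, documents in corpus.items():
        w = weights.get(source, 1.0)  # hypothetical per-source weight parameter
        for doc in documents:
            tokens = doc.split()
            for a, b in zip(tokens, tokens[1:]):
                if a == "Tiananmen":
                    counts[b] += w
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

corpus = {
    "state_media": ["Tiananmen incident"] * 80,  # invented 80/20 source split
    "elsewhere":   ["Tiananmen massacre"] * 20,
}

print(next_word_probs(corpus, {}))                    # 'incident' wins at 80%
print(next_word_probs(corpus, {"state_media": 0.1}))  # 'massacre' wins at ~71%
```

Same data, different weights, opposite "answer": the model isn't deciding what happened, it's just reporting which phrasing carries more weight in what it was fed.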
You can't create an LLM free from propaganda. It will be just as affected by that here in the US, since it will be affected by US government output now written to MAGA propaganda standards, something the rest of the world doesn't want in their AI.
This is why you see Trump and all his minions repeating the "total obliteration" mantra, despite having zero evidence (certainly none when Trump claimed it instantly after the bombs fell). Trump wants to make that the dominant belief in the minds of his followers and those who only casually follow the news. That belief will also infect AIs: when one group keeps repeating the "obliteration" lie constantly and in unison, while the rest cautiously repeat various truths like "we don't know for sure yet and may not know for years", or the initial DIA assessment that the damage was light enough that their program was only set back by a few months, the highest weight will be on "obliteration", because thanks to the right-wing spin machine that's gonna be the most used word in connection with the US bombing Iran. So that's what AIs will soon be saying as if it were truth. Heck, I'm contributing to that in this post when an AI ingests it!
If anything the way LLMs work make propaganda MORE effective, not less.
That's a pretty terrible alternative. The "present all sides" argument works OK for stuff that is a matter of opinion, or a matter of facts we don't have. So if you ask the AI "did the US bombing destroy Iran's nuclear capability", it can give Trump's line, hopefully noting how he provided it within minutes of the bombing when there was absolutely no way to know anything about whether it succeeded, alongside the more measured voices stating the reasons why we can't know conclusively yet, and the initial DIA assessment (the reasons behind which we don't know, since it was classified, and no doubt Trump will have it buried).
But for something that is based on facts, like "is the world 6000 years old" or "is the mRNA covid vaccine safe and effective", it is stupid to present both sides. Yes, you can do that if the query specifically asks for it, but imagine if virology were taught with a vaccine denier's viewpoint given equal weight to the actual science. Medical schools would be turning out quacks by the bucketful.