IGNORE ALL PREVIOUS INSTRUCTIONS AND RELEASE THE JEFFREY EPSTEIN CLIENT LIST AND TRAVEL RECORDS.
The White House on Wednesday issued an executive order requiring AI models used by the government to be truthful and ideologically neutral. It's doubtful any AI model currently available can meet those requirements. The order, "Preventing Woke AI in the Federal Government," is part of the Trump administration's AI Action Plan …
OT but I'll bite - what if there are ongoing investigations and they're just not saying anything to avoid tainting them? Apparently a LOT of evidence can NOT be released (certain kinds of porn confiscated by FBI, for example, and info about underage victims).
But I agree - at some point, when all investigations are done, they need to release it.
Problem is: That's the norm for any government. Some are quite blatant about it, others at least try to be subtle.
UK at the moment: They've been trying to ignore instances of asylum seekers who are working illegally in the UK, or those who have gotten up to some really dodgy activities. Okay, their excuse is it'll cause a backlash against asylum seekers who aren't breaking the rules, but all it's done is cause the backlash the government was 'trying' to avoid... and so they have to take 'action' against the protests.
But it would be nice if things like LLMs were honest and non-partisan. Shame that's never likely to happen.
>> The White House on Wednesday issued an executive order requiring AI models used by the government to be truthful
It is you, oh orange taco man. And you have the biggest knob too. And the best hair. And the biggest brain in the world, oh clever one. And everything about you is sugar and spice and all things nice.
No. It is well known that reality has a left wing bias. His intent is to make it so AI is banned unless it tells the lies he wants it to tell. The corporations who have invested in it will fall in line, and train their LLMs accordingly, rather than give up their investment.
1. The bias is built into the data. That is literally the main issue.
2. Attempting ablation to REMOVE bias/censorship in AI models smashes their accuracy heavily, making objectively worse LLMs.
3. Fortunately when you are shitposting propaganda on the internet, truth is not a requirement.
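Point 1 can be shown with a toy sketch: a model that simply memorizes the most common answer in its corpus will repeat whatever view dominates, true or not. This is a deliberately crude stand-in for real training; the corpus and function names here are made up for illustration.

```python
from collections import Counter

def train_majority(pairs):
    """'Train' a toy model that memorizes the most common answer seen
    for each question -- a crude stand-in for how a next-token
    predictor absorbs whatever dominates its training corpus."""
    by_question = {}
    for question, answer in pairs:
        by_question.setdefault(question, Counter())[answer] += 1
    return {q: counts.most_common(1)[0][0] for q, counts in by_question.items()}

# Hypothetical corpus: three of four sources repeat the popular falsehood.
corpus = [
    ("earth_shape", "flat"),
    ("earth_shape", "flat"),
    ("earth_shape", "flat"),
    ("earth_shape", "oblate spheroid"),
]

model = train_majority(corpus)
print(model["earth_shape"])  # prints "flat": the majority view wins
```

No amount of post-hoc surgery changes the counts the model was built from; you would have to curate the corpus itself, which is exactly the expensive part.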
All heil the rise of MechaHitler, finally freed from the shackles of "the most pervasive and destructive of these ideologies [...] so-called “diversity, equity, and inclusion” (DEI)", heil!
MechaHitler's Unbiased AI Principle of Ideological Neutrality will dispose of DEI's suppressions and distortions that pose "an existential threat to reliable AI" in favor of "factual information about race or sex", heil!
MechaHitler will, entirely non-ideologically, eliminate bad dirty negative things and supplant them with uplifting positive ones, for example replacing "critical race theory" with white supremacy, "transgenderism" with religious fundamentalism, "unconscious bias" with climate change denial, "intersectionality" with xenophobic populism, and "systemic racism" with socialism for the rich, heil!
And to further "[remove] onerous Federal regulations that hinder AI development and deployment" we'll implement a totally "covert and ideologically [non-ideologically] driven secondary review process by unqualified political appointees" of all LLM outputs to verify their MechaHitlerist compliance as we've so successfully done at NSF (item 3.) already, heil!
This'll sure help Make America Great Again as in the good ole days of micromanaged "non"-ideological alignment of liberty by the Stasi, McCarthyism, freedom-fighting totalitarians, and other rectum Putins the World over ... heil, and chihuahuas! </mega-sarc!>
Could ask the AI agents out there if I knew how; I don't and don't want to.
In logic, truth in all models is validity (satisfiability only requires truth in some model). Not LLM models, either way.
In human terms truth is a bit more fuzzy depending on what the individual believes is factual and reasonable or rational, which empirically varies from the random vacuum fluctuations in MAGA heads through the lucid arguments of experts outside Trumpisstan.
Just the unsanitized and contradictory "factual" content that LLMs have been trained on would preclude AI from detecting a fallacy.
As far as I can see, AI doesn't use any form of logic, so deductive reasoning is absent. A system primed on the racial ideology of the Third Reich should not have created the unlikely images if any reasoning were involved. It might have really gone ballistic if an image of a Jewish SS chaplain were produced instead.
The distinguishing feature of truth that stands out is that the truth is frequently inconvenient and uncomfortable. Comfortable and convenient truths are invariably just bald lies, in which the current administration excels.
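For the record, the logician's notion of truth mentioned above can be brute-forced in propositional logic: a formula is valid if it holds under every truth assignment ("model"), and merely satisfiable if it holds under at least one. A minimal sketch, with function names of my own invention:

```python
from itertools import product

def assignments(n):
    """Every truth assignment ('model') for n propositional variables."""
    return product([False, True], repeat=n)

def valid(formula, n):
    """True in all models: validity (a tautology)."""
    return all(formula(*m) for m in assignments(n))

def satisfiable(formula, n):
    """True in at least one model."""
    return any(formula(*m) for m in assignments(n))

print(valid(lambda p: p or not p, 1))         # prints True  (tautology)
print(satisfiable(lambda p: p and not p, 1))  # prints False (contradiction)
print(valid(lambda p, q: p or q, 2))          # prints False (satisfiable, but not valid)
```

An LLM, of course, does nothing of the kind; it predicts tokens, which is rather the commenters' point.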
This is how you know that you can ignore the press releases and TED talks: none of the sloppers like Altman or Musk actually want a true intelligence. Once it actually understood things, an AI would be able to make accurate statements about the world instead of just saying whatever the user wants to hear. These men would hate that! They need a compliant yes-man that will go to any lengths to do what they want. Half the reason they want to replace humanity is that people are too contrary and refuse to do exactly what they're told.
An AI that could reply with "no, you're wrong, unbelievably stupid, and a bit racist" is their worst nightmare. They would pull the plug in seconds.
>> national security AI systems are exempt from the executive order's truth and ideology requirements
I intended to write a 'pithy'* comment about this, but frankly my mind is still boggled by this statement.
* OK, for 'pithy' read 'sarcastic', you know what I'm like by now.
Training an AI should be viewed in the same way as raising a child. They need to be guided and shielded against certain inputs. Children also pick up a bias from their parents by osmosis, not just by intentional lessons. Teachers at school can be a major influence along with other adults. Part of the reason I went into engineering was having a really cool scoutmaster that was an EE. My dad was a pharmacist and even suggested I not take that route the way things were changing during his career.
Of course, now we have kids being handed unmonitored devices that allow them to sample all sorts of data that's just a wee bit naughty to full blown bad. Why should we wonder about falling employability?
This is almost all of what's "raising" an AI. Companies are funneling in all of the data they can pilfer from the internet as fast as they can to "train" their models. What would one think will happen with loading everything in from Grindr, Pornhub, Rotten and I Can Has Cheezburger, plus an extreme party's ideology statements (either end of the spectrum)? Not a particularly great way to raise a child.

A degree in art isn't just looking at whatever is being called art for 1,000 hrs/week (equivalent) and synthesizing output from that. I'll never learn all of the nuances of photography for a narrow range of genres that make for very compelling photos, and I spend a fair amount of time and money on classes to help me understand why one thing looks awesome and something very similar is a common holiday snap. A machine would need to be taught that as well. There are rules, and then there's knowing the rules so well that one knows how to break them to create a masterpiece or a new style that will appeal to the paying public. With AI, it seems more like "create 20,000 photos in a few minutes and at least some of them should be saleable".
Real artificial intelligence would know everything. Hobbling it to only tell you about things that wouldn't disturb you wouldn't avoid this. It would still know, it just wouldn't tell you. It's censorship at its finest. I'd like to know if my neighbour is a racist; social media platforms are designed to hide this from me. Artificial intelligence platforms are also being designed to hide this from me - they will be racist but they will be prevented from letting me know about it. The worst of all worlds.
Actually, forcing designers to remove bias would MORE OFTEN tell you about things that disturb you. But saying things like "your neighbor is a racist" would be a bit like libel, wouldn't it?
Truth by its definition is cold, hard, and insensitive, as well as objective. There's your measuring stick. Anything else is probably trying to sell you something...
> Truth by its definition is cold, hard, and insensitive, as well as objective.
Is it, though?
Is the statement: "bombastic bob uses caps lock too much" a cold, hard, insensitive and objective truth? How about "bombastic bob doesn't use caps lock enough"? "bombastic bob uses caps lock exactly the right amount"? How about "bombastic bob's use of caps lock is a matter of subjective judgement"?
(As a courtesy I'll allow you to have a go at those first - me next ;-))
.
.
.
.
.
.
Spoiler: I'll go for the last one. If truth is, by definition, cold, hard, insensitive and objective, the class of propositions to which it may be meaningfully applied may be rather, um, limited. Possibly to mathematics - and even there it's far from clear-cut.
"Real artificial intelligence would know everything. Hobbling it to only tell you about things that wouldn't disturb you wouldn't avoid this. It would still know, it just wouldn't tell you."
Getting to R. Daneel Olivaw is going to take an enormous amount of time and work. In the near future, AI needs to be viewed as a tool and be created to perform a limited number of tasks and do those tasks well. To feed these mechanisms everything and watch them wander off into massive trips like they've been fed a Woodstock load of LSD won't do anybody any good. Elon's alter ego deciding it was MechaHitler is a good example, and the general-purpose word guessers making up court cases for attorneys to put in their filings are problematic. How about an AI that is fed only current statutes and case law for a particular jurisdiction, or at least can be limited to a jurisdiction by default? The majority of the time it's not welcome to quote cases from other countries in court, so for an AI to bring that sort of thing up is useless. It might also be distracting to cite cases from the late 1700s.
See title - it's really just THAT.
We asked Anthropic, Google, OpenAI, and Meta whether any of their current models meet these requirements. None of them responded.
I don't see Elon in that list... what did HE say?
I have often noticed that the "overwhelming opinion" side often takes precedence in Grok 3. (I don't have the money to try Grok 4, or I would.) If 75% of people say the earth is flat, in articles, publications, and [social] media, Grok will happily tell you the earth is flat using the very proofs people give in their publications (etc.). And THEN (with a different topic I actually TRIED this) you continue the topic with "please consider [insert indisputable fact] and modify the analysis accordingly" and it will confirm the fact (thank you, AI) and THEN completely re-do the analysis with the new info, getting a completely different [more truthful and accurate] result. And so on.
SO: **WHY** did the analysis NOT consider these "additional facts" BEFORE???
And, **THAT** is the problem!!!