
Thunderforge?
They missed a trick there. Should have named it WOPR.
Where's my dial up modem so I can play a game...
The American military has signed a deal with Scale AI to give artificial intelligence, as far as we can tell, its most prominent role in the Western defense sector to date – with AI agents to now be used in planning and operations. The value of the contract, awarded as part of the US Defense Innovation Unit's Thunderforge …
1. The most responsible AI bot is currently being developed in Tel Aviv, by Ilya Sutskever; as long as he controls its development.
2. The problem with international moratoria is that it is close to impossible to know what really happens in powerful authoritarian countries (without elections), so I can't see the logic of any treaty or, worse, a unilateral stop of AI development in the free world.
3. If the LA Times reporters can't accept there are positive traits of bad people or bad organizations then they aren't fully human and switching them with an AI is not such a bad idea.
Until recently, the competition would have been the reasoned consideration of experienced military personnel, and AI might have been at a disadvantage. Now, it only has to perform better than the whimsical musings of the Führer of Inferior Canada, so perhaps there's scope for it to be an improvement.
Not yet, but they're building it. Imagine a President for Life, without the Life.
A virtual Trump can look and sound like the original and translate any policy from Musk and Thiel into Trump-speak, but will never stroke out.
Look out for a minor medical episode from which he bounces back stronger but stops appearing in public.
(unless there are doubles, can you imagine a more financially rewarding but spiritually draining job than being Trump's double?)
/begin{Sarcasm}
Oh well, that's OK then, because no human ever just did what the computer told them.
/end{Sarcasm}
We've seen time and again AI systems discriminate against poor people, black people, and implement the biases (conscious or subconscious) of their programmers. See the Register's extensive archive of AI failures. Now we'll love AI or feel its wrath.
Given the extreme shortage of functional neurons in the current administration, I would imagine you would be obliged to keep the interface extremely simple. Certainly no more than a big red button, although the C-in-C probably wouldn't be averse to having a choice bit of crumpet sitting on it instead.
"Today's military planning processes rely on decades-old technology and methodologies, creating a fundamental mismatch between the speed of modern warfare and our ability to respond."
"Decades-old technology and methodologies": do you mean, "the human brain," and, "thinking"?
* "Planning" is something you do in advance. Planning is something you do in advance so that you can respond quickly. If you are in a lightning-speed war, your generals are no longer planning. They are reacting.
* If you are in a lightning-speed war and need AI to speed your reaction time, you are not going to have time to fact-check your AI's output.
* "No plan survives contact with the enemy."
* AI (trained on the Internet and US pop culture) military advice:
(i) Never get into a land war in Asia;
(ii) Never go up against a Sicilian when death is on the line.
Are they talking about some different type of "AI" than the currently-overhyped LLMs?
"making Thunderforge’s reasoning process legible, so users can trace its logic"
I don't think anything LLMs do can be classed as being "reasoning" or "logic". They're statistical models based on words. Bullshit generators.
You are correct - LLMs have absolutely no way to explain their process, let alone in a "legible" way (do you think he meant to say "comprehensible"?[2]).
So they are going to spaff huge amounts of money[1] and resources chasing something any honest researcher will tell them isn't sane. With luck, somebody will divert funding (back) into techniques that have explanatory power at their core, not just some mirage generator hot-glued onto a phantasmagoria box - but they'd be pissing into the wind coming from - well, let's just leave "FSD" as a hint.
[1] isn't there supposed to be a department cutting wasteful spending? On the tip of my tongue...
[2] what fun it'll be, when the private contractor shows up in six months'[3] time, dumps a five-foot printout of weightings onto the desk, all beautifully rendered in a highly-readable font[4] - "you said 'legible'".
[3] it takes time to squirrel away the DoD payments offshore
[4] comic sans
But if someone decides to call this offal Skynet then I'm going to be in the market for a decommissioned nuclear bunker. ;-)
I doubt there will ever be true artificial intelligence.
Just because they want to badge a ream of clever algorithms, trained on illegally harvested copyrighted books/articles, and call it intelligent...
The biggest problem is that the creators of this stuff don't get the differences between Clever, Intelligence and Knowledge.
I'm intelligent, but my beloved cat is more clever than I am.
I'm very well-read but that just means I have lots of (mostly useless) Knowledge - not necessarily the Intelligence to use it effectively.
Surely the problem with AI is that it is a 'Soulless machine' and has no sense of morality whatsoever?
The fact that issues have recently appeared with LLMs generating hazardous content, because someone found a way around the programmed safeguards (like running time backwards), suggests that we are in danger.
Ignoring for the moment that humans may or may not be self-aware, a machine with the capability of self-judgement may, and probably would, consider the fleshy meatbags (tm) that created it a grave threat, but only if they tried to turn it off.
This isn't mere speculation: there have been cases where individuals put themselves in danger by following instructions online without fully understanding the risks, instructions later traced back to old Usenet pages that had been scraped by an LLM.
We need to consider that this creation may become a Frankenstein's Monster and replace us, whether we want it to or not.