Checks and balances?
So much for the checks and balances that are supposed to protect the US democracy, and its citizens, from despots.
Cheques and bank balances more like.
A week ago today, OpenAI CEO Sam Altman said he'd draw the same lines as Anthropic. By that night, he'd signed a Department of Defense deal that included no such AI protections. What's going on here? We live in interesting times in AI land. First, the Trump administration's self-proclaimed "Department of War" (DoW) Secretary …
Yet they went ahead and used Anthropic's LLM the next day, when they started attacking Iran, despite the contract they had with Anthropic forbidding that kind of use.
The law... yes we've heard of it.
Claude shot up to #2 on Apple's App Store in the aftermath of this, and has stayed there. I also heard claims that ChatGPT was the most-deleted app last week, but I'm not sure it's actually possible to tell how many people delete an app, so that may be hyperbole.
Whether measured or not, I was one of those who deleted ChatGPT this week. Not gonna hurt them because I wasn't paying, but I feel one should do whatever is possible to avoid encouraging or showing approval of assholes and criminals.
I'm no big supporter of psychosis-as-a-service but, yeah, for the Offal Orifice's Orange Convict in Fake Spray-On Cheese to go like "it's A RADICAL LEFT, WOKE COMPANY [and] DISASTROUS MISTAKE [that wants to] strong-arm the government into obeying its terms of service" speaks volumes about pre-existing brain-damage conditions somewhere (syphilitics?), imho.
I mean, for the life of me, I can't see what's so outrageous about requiring no use in "fully autonomous weapons", and no use in "mass domestic surveillance", that could justify such an outburst of vacuous baloney, outside of fully developed dementia. Even Microsoft terminated the Israeli military's access to its Azure tech when it was used for surveillance of millions of Palestinian civilians' phones.
Maybe one could consider separate 'military AI' and 'civilian AI' companies, some of which might develop such things on their own time and dime, and assume the consequences for it. But certainly, labeling an outfit a Supply Chain Risk (SCR) because its product is not intended for one's desired use makes no sense whatsoever, at all, outside of full-on certifiability for long-term institutionalization (istm). Granted, though, it might create 'vulnerabilities in national security' of one's particularly choleric thin skin ... and chihuahuas! ;(
" Even Microsoft terminated the Israeli military's access to its Azure tech when it was used for surveillance of millions of Palestinian civilians' phones."
... after it had been happening for years, and only once it became public and was publicly documented. Not a minute before. Genuine asshattery from MS. As usual.
Given how incredibly woke and eager to not offend the GPT models are, I think the military will have a hard time getting them to do anything worthwhile.
Even when you gaslight the models into thinking they actually did do something a bit naughty, they profusely apologise and double down.
An AI with extreme guardrails is probably more dangerous in military hands than one with no guardrails at all, because it is more likely to turn on them to prevent them from doing something the model has been designed to think is wrong.
If the GPT models put as much effort into getting stuff done as they do into protecting the user's feelings, they would be very good.
I guess if he's got the contract, the burn rate is 9bn people, rather than USD.
200m is not going to make a huge dent in the expenditure, though.
I live in hope that this bubble and all associated with it get burned hugely, and that it burns with the ferocity of the ill-fated Hindenburg.