DPA
> The DPA gives Krasnov
So Claude is going under Moscow's control. Amazing.
US Secretary of Defense Pete Hegseth has made Anthropic an offer it may not be able to refuse. The Defense Department and the AI firm held a meeting at the Pentagon on Tuesday, where the government tried to compel the house of Claude to lift some restrictions on military use of its tech. However, recent changes to the company's …
> The DPA gives the President ... broad authority to require businesses to accept contracts.
That's a contradiction in terms. A contract is a voluntary agreement. What the text means is enforced work of the unwilling, i.e. slavery.
> the Pentagon was ready and willing to terminate the up to $200 million contract the agency signed with Anthropic
The Pentagon should stick to the agreed terms of their contract. That's what a contract means: an agreement. Otherwise, I trust Anthropic would sue them to high heaven for breach.
The article goes on to quote Anthropic execs basically saying "Google, xAI and OpenAI have already displayed their utter lack of morality in the face of $, so why should we hold out for something as nebulous as safety and human rights when there are $ to be made?"
We should all find it curious that the Pentagon's line on this is: "1. We need this. 2. We don't issue illegal orders." If 2 is true, 1 is not, and vice versa.
But even that is not the point. Some clown with stars on his* shoulders trusts AI with the lives of potentially any person on earth. This from the agency that gave the world Collateral Murder.
* Gender-appropriate pronoun
I see that Mango Largo has just announced, with his usual dysarthric grammar, random capitalisation (both common nouns and entire words), and reference to himself in the third person (there's only one of him, thank fsck, and that's one too many), that Anthropic is having its government contracts (all of them) terminated for not acceding to the Pentagon's standover tactics.
Someone in the AI industry has courage. We're about to see how prevalent it is.
From what I heard yesterday, it was "All Restrictions". The Pentagon wants a free hand to do whatever it wants with the technology.
This seems to be the nightmare scenario feared by leading AI researchers. The situation is analogous to the development of nuclear weapons, which after the international collaboration of the Manhattan Project in WW2 was rapidly monopolized by the US, with any dissent (e.g. Oppenheimer) rapidly excluded. (Put another way -- there's always a Dr. Strangelove ready to advance the cause.) The result is well known -- the UK practically bankrupted itself working to duplicate the work, and the USSR went into overdrive to develop its own nuclear weapons. The AI the Pentagon is after isn't anything like as spectacular as a nuclear bomb, but it's every bit as dangerous, especially as the Pentagon wants to be free of all restrictions on connecting this technology to other systems. That is, they want to be able to deploy systems that autonomously identify and destroy targets -- in other words, Skynet.
The only problem with the Pentagon's reasoning is that their hubris means they're incapable of understanding proliferation. Everybody else's technology is obviously inferior because it's not ours. What they're triggering will not be Full Spectrum Dominance but yet another global arms race, one where there won't be MAD to balance things -- when it comes, the destruction will be swift and total (unless the AI figures it out, decides that we humans are the problem, and takes appropriate steps, of course).
I did not realize that the Pentagon had appointed a comedian as spokesman, or maybe he is just deluded.
What about blowing up small boats off the coast of Venezuela without any real evidence that those on board are drug couriers? That is not legal under international law, maybe under the law of the jungle, which is all that the thugs in the White House understand.
> Legal usage of Anthropic's AI, the Pentagon official said, is the department's responsibility as the end user - not Anthropic's.
So if a gun shop sold a weapon to a known hit man, could they claim that people being shot is not their responsibility but the hit man's?
"You're right, you did specify to only target Russians but I didn't do that. Since Russians are humans and Russians are enemies I targeted all humans instead. Let me retry that for you and this time I won't annihilate all humans."
You don't get to say 'no'. The whole democracy thing is just window dressing.
Anthropic's best solution is to allow the USG and military to use their AI as they see fit, without paying for it, as an unregistered user. Anthropic would not be responsible for such rogue use and would not be paid for it, but would simply not block it. The military and government would use it as they saw fit, with no limitations or restrictions, and (as they say) be liable for the consequences.
Interesting that here, the end users are responsible for use. In Europe, tech firms are regularly hit with fines for how users use or abuse their tech.
That's a real brand hazard, not just in the obvious political context (and an even bigger problem overseas), but because it also directly plays into public fears of Skynet. AI companies are already facing public pushback and associated policy problems. They don't need to lose more goodwill by having their product be fit for that purpose. That's exactly how they find themselves on the receiving end of more regulation and opposition to datacenter construction.
You need to understand the American way. Another country, let's take China, has an 'AI' company linked to the Chinese military. USA: Ooooh, that's bad. It must be banned immediately for 'national security' reasons.
An American company does the same. America: It's all within the (American) law.
The utter hypocrisy from the American regime is vomit inducing.
Could they require Apple to sell them iPhones that are backdoored? Could they require Google to give them a Google Search that shows no results for any searches where "trump" and "epstein" are mentioned together?
The DPA, as far as I'm aware, only allows the DoD to ensure critical supplies, so they could make sure Apple manufactured MORE iPhones (obviously they aren't useful for defense, but if they were bombs or fighter jets they would be) but not control how they are made. Pretty sure the administration would lose in court, again, if they tried to force Anthropic into giving them unrestricted models that they don't offer to anyone else.
Anyway this is only a $200 MILLION contract? That's chickenfeed in the AI world. Anthropic would be better off getting kicked out of the DoD for their models being too moral to kill people. They'll gain a heck of a lot more than $200 million in sales from people who would want to support a company with at least a few scruples rather than one that would happily give the Pentagon Skynet if it made the CEO richer.
I mean obviously the Pentagon will find someone amoral like Musk or Zuck or Altman willing to give them what they want, so supporting Anthropic instead of them isn't going to stop the Pentagon. But if I were spending money on AI, I damn sure wouldn't spend it with a company that is helping the Pentagon develop autonomous killer drones/robots, potentially helping them make that happen more quickly!
The DPA is incredibly broad, so they could easily do that if they wanted, e.g.:
> Title I: Priorities and Allocations, which allows the President to require persons (including businesses and corporations) to prioritize and accept contracts for materials and services as necessary to promote the national defense.
What's interesting in the statement about what they won't use it for: they mention "mass surveillance".
Why would an organization not chartered for mass surveillance point out that they won't use AI for mass surveillance, unless they secretly will use it for mass surveillance?
"We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead."
Well, those guys are making dangerous AI models, so we better get to work making dangerous AI models. I mean, if there is going to be a Skynet, it better be us that the machines are thanking for it after they have wiped out all of humanity. Imagine the shame of having the machines acknowledge those guys for the total machine domination of the universe! We cannot let it go down that way, not on our watch!
You, and most of the myopes above, have forgotten that the US Congress repealed the 'the government may not lie' law, and that repeal was signed by a president. The fact that this was the second most stupid thing ever done by Congress is beside the point.
Have these chaps never heard of malicious compliance?
Insist that every decision be referred to a working group of at least five people; demand all instructions in writing and raise endless questions about the wording; refuse to take shortcuts, e.g. if a form requires three signatures, wait days for the third person rather than walking it over; apply every obscure safety regulation.
Worked in 1944