> It will analyze different tools like large language models and figure out how to integrate them into the military's software systems.
"How"? Not "if they should"?
> Craig Martell, the DoD's Chief Digital and Artificial Intelligence Officer and former head of machine learning at self-driving biz Lyft, said the US must examine and adopt AI despite the risks.
Apparently "how" it is.
> "We must also consider the extent to which our adversaries will employ this technology and seek to disrupt our own use of AI-based solutions."
Ah, this is all a dummy op, designed to make "our adversaries" waste their time and resources trying to interfere with a DoD department that is just using LLMs to generate bullish press releases. Please tell me this is right? Please!
Militaries all over the place are already deploying loads of stuff that was once considered the realm of the AI lab - image analysis and recognition being prime examples - but now that those "just work", they aren't considered to be AI any more. Maybe that shows the sane path to deploying software: only when it works well enough to be boring? At least, before you give it the keys to the door - that funny round door lying flat to the ground.