confusion and wishful thinking
These are, without a doubt, very intelligent, experienced, and dedicated people who are, like many of us, scared shitless about this new AI "chat" technology and some of its implications.
But a number of the things they say are cause for concern, because these people somehow don't get how systems are actually built and operated.
Yasmin Afina says the case for slow AI is that it would allow us more robust, thorough testing, evaluation, verification and validation processes... Verification? Against what requirements? And what "validation" can we achieve for a system whose build takes maybe 5 years, designed against a world view that might be 10 years old? The tempo of (military) systems deployment is measured in decades, not calendar quarters. (Even assuming rapid requirements changes are made to adopt AI, the systems are still going to be years late.)
It's much more likely that new military systems will be built by "just putting together technologies from the commercial sector", which means drones, smallish missiles, autonomous guns, and not heavy or difficult things like tanks, aircraft, or ships. Especially since you can build hundreds of smallish missiles, guns and drones for half the price of a jet fighter. Coming soon to a non-state group near you.
Afina is also quoted as saying "Only a handful of companies have access to these kinds of facilities." (This is the wishful thinking bit.)
True but false. Only a handful of companies (and not a single nation-state, significantly) can build the models, but once released, anybody with a decent gaming rig, or even a laptop, can run a model locally (https://github.com/ggerganov/llama.cpp).
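To make that concrete, here is a minimal sketch of what "running a model locally" looks like, assuming the llama-cpp-python bindings to that project; the model file path and prompt are made up for illustration:

```python
# Minimal local-inference sketch using the llama-cpp-python bindings to
# ggerganov/llama.cpp. The model path is hypothetical; any quantized GGUF
# model downloaded to disk will do.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Q: Summarize why local inference matters. A:",
    max_tokens=128,
    stop=["Q:"],
)
print(out["choices"][0]["text"])
```

No data centre, no API key, no gatekeeper: just a file on disk and a consumer GPU (or a patient CPU).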
Which means text-based AI can be composed with other technologies, like weather modelling, graph calculators, linear programming, and any other interesting mechanical stuff, all put together with langchain (found at langchain.com, of course). That makes almost anything possible, to anyone with an itch to scratch.
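For flavour, here is a hedged sketch of that kind of composition: the locally-run model handles the fuzzy language step, and ordinary deterministic code does the mechanical step. The module paths reflect recent langchain releases and may have moved between versions; the model path, prompt, and helper function are invented for illustration:

```python
# Sketch: compose a local LLM with plain "mechanical" code via langchain.
# Assumes the langchain_community LlamaCpp wrapper; model path, prompt,
# and the helper function are hypothetical.
from langchain_community.llms import LlamaCpp
from langchain_core.prompts import PromptTemplate

llm = LlamaCpp(model_path="./models/7B-chat.Q4_K_M.gguf", n_ctx=2048, temperature=0.1)

# Step 1: the language model turns a fuzzy request into a bare number.
prompt = PromptTemplate.from_template(
    "Reply with a single number and nothing else: "
    "the typical cruising speed in km/h of a {vehicle}."
)
chain = prompt | llm  # LCEL composition: the prompt feeds the model

# Step 2: ordinary code does the arithmetic that has to be right.
def hours_to_cover(distance_km: float, speed_text: str) -> float:
    speed_kmh = float(speed_text.strip().split()[0])
    return distance_km / speed_kmh

speed_text = chain.invoke({"vehicle": "small commercial drone"})
print(hours_to_cover(120.0, speed_text))
```

Swap the toy arithmetic for a solver, a weather model, or a flight planner and you have the general shape of the thing.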
If you aren't frightened now, you will be.