Smaller is more beauteous?
"Smaller, cheaper-to-run LLMs can deliver comparable results to large closed source ones such as OpenAI's ChatGPT 4.1, the ODI said."
To my mind, the quoted statement is almost self-evident.
Pursuit of 'all-singing, all-dancing' LLMs may, in part, be attributed to an assumption that the more an 'AI' knows, the closer it comes to 'general intelligence'. As of now, that assumption appears ill-founded.
Apparently, general-purpose commercial 'AIs' are trained on whatever digital 'content' is to hand; 'discrimination' appears anathema to 'AI' technicians. Hence the phrase 'slop in, slop out' applies when 'AI' training draws on the content of Twitter (and similar) alongside the best texts Anna's Archive can offer.
Thus, organisations that need databases for interrogation by employees and/or clients would do well to commission bespoke 'AIs'. Learned professions should consider commissioning and maintaining specialised 'AIs'; that doesn't preclude tapping into general-purpose models as well. Nor should it be assumed that highly specific models require immense, electricity-hungry computers for their training: hardware and software technologies advance apace. At present, relatively modest resources suffice for fine-tuning, and for 'distilling' enormous models into versions that are both adept at particular tasks and containable on modest equipment.
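To make the 'modest resources' point concrete, here is a minimal sketch of parameter-efficient fine-tuning (LoRA) using Hugging Face's transformers and peft libraries. The base model name, adapter rank, and target modules below are illustrative assumptions on my part, not anything drawn from the quoted article; the technique trains only small low-rank adapter matrices, which is why a single consumer GPU often suffices.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# Model name and hyperparameters are illustrative placeholders; swap in
# any small open model and your own domain corpus.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B"  # illustrative small open model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains only low-rank adapters,
# so the trainable footprint is a tiny fraction of the full model.
config = LoraConfig(
    r=8,                                  # adapter rank (assumed)
    lora_alpha=16,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

The trained adapters are small enough to store and serve separately, which is part of why specialised models can live on modest equipment.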