
Hey, neat!
Rust code in the Register! I'm all for it—even in association with Literal Larceny Machines.
Have fun with the new Nvidia toy, too.
Large language models (LLMs) are generally associated with chatbots such as ChatGPT, Copilot, and Gemini, but they're by no means limited to Q&A-style interactions. Increasingly, LLMs are being integrated into everything from IDEs to office productivity suites. Besides content generation, these models can be used to, for …
Don't.
Just don't.
That is all.
Edit: I don't want to disparage the authors. This sort of thing is not my cup of tea, so I can't judge the quality, but I'm sure they put a lot of effort into a good article. I'm just completely done with the "stuff an LLM-shaped peg into any hole you can find" hype going on right now.
But experimentation (or "stuffing an LLM-shaped peg into any hole you can find") is how most innovation happens. Personally, I'm loving it right now.
A really good example is Pulumi AI, which makes learning that excellent alternative to Terraform much easier.
Another is energy supplier Octopus, who are using LLMs to gather information and draft emails, which are then edited or checked by humans before being sent out to customers.
The cynic in me says that pressurised support staff will not actually edit or check the content, or will do it sloppily. Meanwhile, anyone conscientious and half-decent on the team will have perceived low throughput and will get the boot, because the performance metrics don't allow for errors in LLM output.
One year later, Octopus Energy writes 20% of their responses as haikus advising customers to lick their fingers and shove them into power sockets.
I'm not saying LLM generation is worthless, but if the beancounters and shareholders don't understand the risk ...