Idiots
The way LLMs are implemented has always bothered me, especially the software architecture around them.
If you don't know by now that you should never trust external input, you shouldn't be near software development in any capacity. Why is it not possible to escape or sandbox external inputs? "Technical limitations"? To me that just means "I made a shitty, insecure product".
What also bothers me is the lack of any kind of optimisation. I recall a quote from Sam Cuntman saying that people were wasting X amount of money by saying goodbye/please/thank you to ChatGPT, and he asked people to stop doing it. WELL MAKE A GODDAMN FUNCTION THAT HANDLES GOODBYE MESSAGES WITHOUT SENDING THEM TO THE LLM THEN, YOU PLANET-DESTROYING CLANKERFUCKER!!
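For the record, the kind of pre-filter I mean is not rocket science. Here's a minimal sketch: the patterns and the canned reply are made up by me, not anything any vendor actually ships, but the point is that a bare "thanks!" never has to touch a GPU.

```python
import re

# Hypothetical pre-filter: catch bare pleasantries before they ever
# reach the (expensive) model call. The pattern list and the canned
# reply are illustrative guesses, not a real product's behaviour.
PLEASANTRY = re.compile(
    r"^\s*(thanks|thank you|thx|bye|goodbye|please)[.!\s]*$",
    re.IGNORECASE,
)

def handle_message(text, call_llm):
    """Return a canned reply for pure pleasantries; otherwise call the LLM."""
    if PLEASANTRY.match(text):
        return "You're welcome!"
    return call_llm(text)
```

A real deployment would need a longer pattern list and multilingual coverage, but even this crude version short-circuits the most common throwaway messages.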
The same goes for prompts that don't need an LLM at all. Why not parse calculations and hand them off to a calculator, for example? Man, LLMs suck.
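Again, a sketch of what that routing could look like, assuming plain arithmetic prompts (the function name and the "return None to fall through to the LLM" convention are my own inventions). Python's `ast` module lets you evaluate arithmetic safely without ever `eval`-ing untrusted text:

```python
import ast
import operator

# Hypothetical router: if the prompt parses as plain arithmetic,
# evaluate it locally instead of spending GPU time on it. Only the
# operators whitelisted below are allowed; anything else falls through.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def try_calculator(prompt):
    """Return the numeric answer if the prompt is pure arithmetic, else None."""
    try:
        tree = ast.parse(prompt, mode="eval")
    except SyntaxError:
        return None  # not arithmetic; send it to the LLM instead

    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("not arithmetic")

    try:
        return ev(tree)
    except ValueError:
        return None
```

So `try_calculator("2 + 3 * 4")` answers locally, while `try_calculator("what is 2+2")` returns `None` and the prompt goes to the model as usual. Extending this to natural-language phrasings ("what is two plus two") would take more parsing work, but the dispatch idea is the same.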