If only there were a way to run a separate process...
> a prompt injection can be encountered if the user requests that Bob review a site containing untrusted content
Once again:
Why the #£&@£** is the same instance of the LLM (apparently) performing the "review" of the site's contents as the one that was allowed to issue the command to load the site into a buffer? And why are the results of that "review" being read by the command issuer instead of being isolated in another buffer?
If "reviewing" the site's contents can result in those contents being taken as commands[1] to an LLM, then that task should run in a second instance that never had the ability to invoke any external commands, including reading any further data sources, in the first place. If your "agentic"(!) LLM needs to pass untrusted data into an LLM, it should spawn a separate process. Call it a "sandbox" if you want. Or a "jail". It's the same way that any half-way sane handling of a command like "draw a graph from the data in this spreadsheet" should spawn a gnuplot child process (or graphviz, or ...). But no, let me guess: the expectation is that the graph drawing will be done inside the LLM as well, because it was trained on "multi-modal data", i.e. it will do that diffusion image-generation thingie and hand you back a graph that sort of resembles other graphs it has been trained on.
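A minimal sketch of that separation, in Python. This is not any particular agent framework's API; the `REVIEWER` stub is a hypothetical stand-in for a tool-less LLM call. The point is the shape: the untrusted content goes into a child process that can do nothing but read stdin and write stdout, and the parent treats the child's output strictly as data, never as further commands.

```python
import subprocess
import sys
import textwrap

# Hypothetical stand-in for the "review" step. In a real system this child
# would invoke an LLM instance that has NO tool access -- it cannot load
# buffers, read files, or issue commands. Here it just reports on the input.
REVIEWER = textwrap.dedent("""
    import sys
    untrusted = sys.stdin.read()
    # ... a tool-less LLM call would go here; this stub only measures ...
    print(f"review: {len(untrusted)} bytes of untrusted content")
""")

def review_untrusted(content: str) -> str:
    """Run the review in a separate process; return its stdout as plain data."""
    result = subprocess.run(
        [sys.executable, "-c", REVIEWER],
        input=content,
        capture_output=True,
        text=True,
        timeout=30,
    )
    # Whatever the child printed is a string for the caller to display or
    # store -- it is never re-parsed as instructions to the command issuer.
    return result.stdout.strip()
```

An OS-level process boundary alone does not stop a determined injection from influencing the review's *text*, of course; the isolation buys you the guarantee that the reviewing instance had no commands to invoke in the first place.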
If the user is stupid enough to want untrustworthy input executed, then they should be made to damn well copy and paste it themselves. There are reasons we tell people to switch off CD "autorun".
[1] Not that this isn't itself a honking great hole in the LLMs - and in the way they are used - and a bloody great red flag that they aren't sensible things to play with like this.