Re: If only there was a way to run a separate process...
This is not buffer escape or unwitting execution. It's not even poisoning instructions, which is kind of like what you're describing*. It's programmatic copying and pasting text. The bot's being told to read documentation and glean instructions from it, then execute those if they're safe. No level of separation is going to prevent it from executing stuff in that. That makes it inherently dangerous and, since the checks intending to find malicious instructions and not run them are also LLM commands, dangerous in a way that's not easy to fix. The user is intentionally allowing a program to fetch and run things from untrusted locations and it goes about as well as you'd think it would.
* Poisoning instructions is when a user executes a prompt in an LLM like "Summarize this text", and the text contains other instructions which the LLM ends up executing. That can be argued as an example of what you're describing, although since the LLM has no concept of separate instructions and data, that's also difficult to prevent. That's not happening in this case because the only instructions the LLM is executing are those it was given. It just happens that the instructions it was given are foolhardy and the protections designed to block the biggest disasters aren't big enough (and likely can never be).