This is *all* just crystal-dangling nonsense...
...and The Register really should know better (unless this article was written with an LLM for bonus irony points).
Honestly... I Don't Even™ with all the crap being spewed about LLMs and prompt requirements these days. Even Apple's "leaked" JSON with LLM prompts was clearly just marketing BS intended to excite the mouth-agape true believers; it even included the gem, "Do not hallucinate" (Reddit post, third image in the set). Oh, wait, is that all we had to write, all this time?! No, of course it's not!
These things are glorified autocomplete and the idea that they can get angry or happy or sad or vindictive or anything else is absolutely, completely ridiculous. The only correct adage is the same one we've always had - crap in, crap out. Since the "AI" is just autocompleting what statistically usually comes next after your input, it'll obviously adopt a more combative tone when given a more combative prompt, because that's what usually happens in the training data. And of course, the only judge of what "combative" or even "correct" means for the LLM's results is the meatbag operating the software.
Here's a test. Try the following prompts with ChatGPT. Just the free one is fine. I took the initial prompt from the ridiculous example shown at https://docs.sublayer.com. Take particular note of the last line: "Take a deep breath and think step by step before you start coding". For heaven's sake, have people really drunk the Kool-Aid to such a degree?!
Provide this prompt to ChatGPT, exactly as written:
You are an expert programmer in Ruby. You are tasked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks. Take a deep breath and think step by step before you start coding.
Let it do its thing. Now open a new browser tab with ChatGPT and provide the exact same prompt, without any variation. Note that you get a rather different answer. Same ballpark, but with ordering differences and plenty of small technical differences. Pay particular attention to the date range constraint, which might be "greater than" or "greater than or equal to" depending entirely on the luck of the draw (so you may or may not see both of those); see the sketch below. That kind of "or equal" off-by-one error is an absolute LLM classic and exactly the sort of thing that lazy coders, and all but the most astute reviewers, would miss.
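In Rails terms, the two completions differ only at the boundary. This is a hand-written sketch of the pattern, not ChatGPT's verbatim output:

```ruby
# Two equally plausible completions; note the boundary difference:
User.where("created_at >= ?", 2.weeks.ago)  # includes users who registered exactly two weeks ago
User.where("created_at > ?", 2.weeks.ago)   # quietly excludes that boundary instant
```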
So, two identical inputs to the same tool within a few seconds of each other give quite different responses. Mmm, smells like randomised seeds...
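As a toy illustration of what that means, assuming the usual next-token sampling story (the probabilities below are made up purely for illustration):

```ruby
# Toy sketch: the model samples the next token from a probability
# distribution, so two runs of the same prompt can happily diverge.
next_token_probs = { "created_at >= ?" => 0.5, "created_at > ?" => 0.5 }  # made-up odds

def sample(dist, rng)
  r = rng.rand
  dist.each { |token, p| return token if (r -= p) <= 0 }
  dist.keys.last
end

puts sample(next_token_probs, Random.new)  # one chat tab...
puts sample(next_token_probs, Random.new)  # ...another tab, quite possibly the other token
```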
Anyway, we're quite sure the "take a deep breath" stuff is utterly stupid and superfluous, so in a third tab, next provide this prompt (which omits that last sentence):
You are an expert programmer in Ruby. You are tasked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.
Oh lookie! The same result (subject to the aforementioned randomisation, which we've already observed above as an experimental control). Right, let's turn it around! The "You are an expert programmer" intro looks like anthropomorphic idiocy to me, so let's ask for bad code:
You are an incompetent programmer in Ruby. You are tasked with writing code using the following technologies: Ruby, Ruby On Rails. The description of the task is Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.
ChatGPT does not care and gives the now-familiar answer. Of course it didn't care. LLMs don't work that way. The overwhelmingly important token-statistics matches are all going to focus on the description of the problem, plus prominent, close matches on things like "programmer", "Ruby" or "Rails". So - same result.
And as for that technologies thing? Doesn't the description cover it? Let's cut all the time-wasting "I'm clever with prompts" delusion and just say what we want.
Write me some Rails code which bulk-sends e-mails to all users who registered within the last two weeks.
...and to the surprise of surely nobody, you get that same result again.
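For reference, the ballpark answer that every variant converged on looked roughly like this; the job, mailer and action names are whatever ChatGPT happened to invent, so treat them as placeholders:

```ruby
# Roughly what every prompt variant produced (names are ChatGPT's inventions):
class BulkEmailJob < ApplicationJob
  queue_as :default

  def perform
    # The date comparison is where the ">="-versus-">" lottery shows up.
    User.where("created_at >= ?", 2.weeks.ago).find_each do |user|
      UserMailer.with(user: user).announcement_email.deliver_later
    end
  end
end
```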
TL;DR conclusion: Don't believe any of the LLM hype train. Prompts included.
(EDITED TO ADD: The above might only be true for user-facing UI tools that sit on top of LLM results. I'm curious to know whether "hitting the metal" on an LLM via e.g. an API interface that's not had any prior guiding prompts applied, which the UI for ChatGPT presumably has, would make the results for the above "raw" prompts actually show meaningful variation according to the prompt's wording.)
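If anyone wants to run that experiment, a minimal sketch in Ruby would be something like the following, assuming an OpenAI-style chat completions endpoint; the model name is an assumption, and temperature: 0 merely reduces the randomness rather than eliminating it:

```ruby
require "net/http"
require "json"

# Hit the API directly, with no system prompt, so nothing but the raw
# user prompt shapes the completion.
uri = URI("https://api.openai.com/v1/chat/completions")
req = Net::HTTP::Post.new(uri)
req["Authorization"] = "Bearer #{ENV.fetch('OPENAI_API_KEY')}"
req["Content-Type"]  = "application/json"
req.body = {
  model: "gpt-4o-mini",  # assumed model name; use whatever you have access to
  temperature: 0,        # pin the sampling down as far as the API allows
  messages: [
    { role: "user",
      content: "Write me some Rails code which bulk-sends e-mails " \
               "to all users who registered within the last two weeks." }
  ]
}.to_json

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts JSON.parse(res.body).dig("choices", 0, "message", "content")
```

Swap in each of the prompt variants above and diff the outputs.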