Ode to The Unknown
Take an inexplicable phenomenon[1], sprinkle in some random behaviour[2], leaven with the unexplained[3] and feed it the unexpected[4]. Stoke the flames[5].
It is hard enough to be sure that our carefully planned[6], deliberately coded[6] and painstakingly tested[6] systems are safe to release to the public.
The hubris of the Madmen, releasing their Thinking Machines[7] upon an unwary World, is staggering.
[1] LLMs - neural nets in general - have no ability to explain how they reach their results, and are just huge piles of numbers when you look at them yourself
[2] as per the article, a seed value plus an internal stochastic walk
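That "seed plus stochastic walk" can be sketched in a few lines. A toy illustration only, nothing like a real LLM's sampler: the vocabulary and weights below are made up, and the point is just that the same seed replays the same walk while a different seed wanders off elsewhere.

```python
import random

def sample_tokens(vocab, weights, n, seed):
    """Draw n weighted 'tokens' using a seeded random walk."""
    rng = random.Random(seed)  # the seed value: fixes the whole walk
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

vocab = ["the", "cat", "sat", "mat"]   # made-up toy vocabulary
weights = [4, 2, 2, 1]                 # made-up toy probabilities

a = sample_tokens(vocab, weights, 5, seed=42)
b = sample_tokens(vocab, weights, 5, seed=42)  # same seed: identical output
c = sample_tokens(vocab, weights, 5, seed=7)   # new seed: a different walk
```

Reproducible, then, in the narrow sense that replaying the seed replays the output; not in the sense that anyone can tell you *why* that output.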
[3] guardrails - nice phrase, sounds reassuring; details, please? Apparently added quite quickly, compared with the boasts about how much effort it took to create the LLMs in the first place. So: tacked on, or really fundamental to the way the model works internally? (Not that we'd ever know if the latter worked, see [1].)
[4] well, did you expect that suffix to work?
[5] set another machine the goal of finding the "unwanted" results, by something that looks rather like a maximising "fuzzing" loop.
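The maximising "fuzzing" loop in question looks roughly like this. A hand-waving sketch, not anyone's actual red-teaming tool: `badness` here is a stand-in scorer (in the real setup it would be another model judging outputs), and the mutation alphabet is invented for the example.

```python
import random

def badness(s):
    # Stand-in objective: in reality a model scoring "unwanted-ness".
    # Toy version: count exclamation marks.
    return s.count("!")

def fuzz(seed_input, rounds, rng):
    """Hill-climb: mutate the input, keep mutations that score worse (i.e. 'better')."""
    best = seed_input
    for _ in range(rounds):
        i = rng.randrange(len(best) + 1)
        candidate = best[:i] + rng.choice("abc!") + best[i:]  # made-up mutation set
        if badness(candidate) > badness(best):
            best = candidate  # maximise: keep anything that raises the score
    return best

rng = random.Random(0)
result = fuzz("hello", 200, rng)
```

Point your maximiser at "embarrassing output" instead of "crash" and you have the article's process: a machine diligently stoking another machine's flames.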
[6] ever the optimist - ignoring the rest of The Register's articles about systems falling over in embarrassing ways.
[7] yes, yes, these don't think, they aren't really intelligent - trying to be poetic here! Yeesh.