The real shock is that researchers somehow keep winning grants to study properties of LLMs when anyone who actually understands the technology already knows the answer!
Get my tinfoil hat, because I almost wonder if some of these 'studies' are funded by the LLM makers themselves. Sure, there's a bit of bad press in "emits buggy code," but such studies also prop up the much larger and more important narrative that these technologies are mysterious black boxes worthy of study in the first place. As if there is ANY mystery to why a predictive tool, when fed crap, suggests more crap! One bad line of code, statistically, is most likely to appear near other bad lines of code!
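If anyone wants the "crap in, crap out" argument made concrete, here's a toy sketch. It is not any real LLM or anyone's actual codebase, just a line-level bigram counter over an invented corpus, but it shows how a purely statistical suggester that has seen buggy lines cluster together will keep suggesting buggy lines once one shows up in context:

```python
# Toy illustration only: a line-level bigram "suggester" trained on a made-up
# corpus. The files and lines are invented for the sketch; this is not how any
# real LLM is trained, just the co-occurrence intuition in miniature.
from collections import Counter, defaultdict

# Hypothetical training "files": clean files validate their input, sloppy ones don't.
corpus = [
    ["x = parse(raw)", "validate(x)", "save(x)"],
    ["x = parse(raw)", "validate(x)", "save(x)"],
    ["x = eval(raw)", "save(x)"],          # sloppy file: no validation
    ["x = eval(raw)", "os.system(x)"],     # sloppy file: even worse follow-up
]

# Count how often each line follows each other line across the corpus.
follows = defaultdict(Counter)
for lines in corpus:
    for prev, nxt in zip(lines, lines[1:]):
        follows[prev][nxt] += 1

def suggest(prev_line):
    """Probability distribution over next lines, purely from co-occurrence counts."""
    counts = follows[prev_line]
    total = sum(counts.values())
    return {line: n / total for line, n in counts.items()}

print(suggest("x = parse(raw)"))  # {'validate(x)': 1.0} -- clean context, clean suggestion
print(suggest("x = eval(raw)"))   # {'save(x)': 0.5, 'os.system(x)': 0.5} -- no validation in sight
```

Feed it the sloppy opening line and the "model" never even considers suggesting validation, because validation never co-occurred with that line in its training data. That's the whole "mystery."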
Framing it as a problem that requires study implies there's some as-yet-unknown fix waiting to be discovered, when really, OpenAI just needs that sweet investor cash.