Re: Did Copilot write the report
And this is the crux.
Writing more lines of code is not a good measure of productivity. Completing coding tasks faster is not a good measure of productivity. Those miss at least two critical metrics: the quality of the software in practice over the long term (including ease of maintenance), and the development of both programming and domain knowledge among developers.
Results on the former, when using LLM assistants, are mixed at best. I haven't seen any methodologically sound studies of the latter, but my strong expectation remains that relying on LLM assistants badly impairs it.
From the article: "One study from Microsoft, which now owns GitHub, found coding with an AI assistant improved productivity by more than 55 percent." But what's that study's definition of productivity? Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible. That's a rubbish task. Any version of HTTP after HTTP/1.0 is complex and difficult to implement correctly, since a correct implementation has to conform to the standards. Implementing it quickly is not grounds for praise.
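To make the point concrete, here's a hypothetical sketch (function names and shapes are my own, not from the study) contrasting the kind of request handling a speed benchmark rewards with one rule HTTP/1.1 actually mandates: RFC 7230 §5.4 says a server MUST answer 400 to an HTTP/1.1 request that lacks a Host header.

```javascript
// The "fast to write" version a speed benchmark rewards: split the
// request line, answer everything plausible with 200.
function naiveStatus(rawRequest) {
  const [requestLine] = rawRequest.split("\r\n");
  const [method] = requestLine.split(" ");
  return method === "GET" ? 200 : 501;
}

// One conformance check among many the quick version skips:
// HTTP/1.1 requests without a Host header must get 400 (RFC 7230 §5.4).
function conformingStatus(rawRequest) {
  const [requestLine, ...headerLines] = rawRequest.split("\r\n");
  const [method, , version] = requestLine.split(" ");
  const hasHost = headerLines.some((line) => /^host:/i.test(line));
  if (version === "HTTP/1.1" && !hasHost) return 400;
  return method === "GET" ? 200 : 501;
}

const noHost = "GET / HTTP/1.1\r\n\r\n";
console.log(naiveStatus(noHost));      // 200 -- "works", passes the benchmark
console.log(conformingStatus(noHost)); // 400 -- what the standard requires
```

Both versions would look "done" to a stopwatch; only one is correct, and that's before chunked transfer coding, Connection handling, and the rest of the standard.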
Producing lines of code quickly is not good coding. Coding is not the whole of programming. Programming is not the whole of software development.
Chasing "AI" assistants like this is a race to the bottom, a process of creating fungible, know-nothing "coders" who don't understand what they're doing.