"developers would do well to review Copilot-generated code no differently than if it was written by a teammate."
Fuck that, I'll just write it myself rather than risk some weird bug from code I didn't want.
Microsoft's GitHub on Tuesday released its Copilot AI programming assistance tool into the wild after a year-long free technical trial. And now that GitHub Copilot is generally available, developers will have to start paying for it. Or most of them will. Verified students and maintainers of popular open-source projects may …
I doubt many of the developers who would pay to use this would ever review teammates' code anyway. If you're good enough to review code, you care about good design, and you know that this code spewer won't be able to design code as well as you can.
No, this is more for the "if it compiles it works" crowd.
I think a consequence of ubiquitous AI will be lots of tools like this that do an alright but not amazing job. Programming will become more about designing a specification that can be turned into code by an AI system. As with many automated systems, the average output will be passable and normalised - we will lose both the worst and the best, leaving a sort of programmatic stodge that works fine for most people.
Maybe that is alright; maybe it frees up those of us who are really interested in the craft of programming to work on more interesting projects.
With a lot of these AI systems, what this is really doing is treating an existing resource (in this case GitHub user contributions) as a free source of data to effectively crowdsource code from without credit, in the same way the art tools everyone's going crazy for are crowdsourcing art without giving the original artists any credit - arguably a form of deniable mass copyright theft.

I'm interested in what happens as the outputs of these tools end up becoming part of their own source corpus: are they going to generate ever more generic code, or start synthesising techniques based on the system's own work? Will it - and its usefulness - flatten out as the proportion of human-generated content declines, until it is effectively almost entirely recycling its own content?
I’m of the belief that comments should be used sparingly, i.e. for documenting public APIs or explaining workarounds for third-party bugs. Code should be written so that its behaviour is self-evident; otherwise comments need to be maintained and updated alongside the code. So that rules me out (I’m assuming the service only writes code based on comments).
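To illustrate the point about self-evident code (a hypothetical Python sketch, not from the discussion - all names here are made up): instead of a comment describing what a block does, the logic can be pulled into well-named functions and constants so the intent reads off the code itself, leaving nothing for a comment to drift out of sync with.

```python
# Comment-dependent version: the comment has to be maintained alongside the code,
# and can silently go stale if either changes.
def total_v1(prices):
    # drop items over 100 and add 20% tax
    return sum(p * 1.2 for p in prices if p <= 100)


# Self-evident version: the names carry the intent, so no explanatory comment is needed.
TAX_RATE = 0.2
PRICE_CAP = 100


def is_affordable(price):
    return price <= PRICE_CAP


def with_tax(price):
    return price * (1 + TAX_RATE)


def total_affordable_with_tax(prices):
    return sum(with_tax(p) for p in prices if is_affordable(p))
```

Both versions compute the same total; the difference is that in the second one, changing the cap or the tax rate means editing a named constant rather than a number and a comment that must agree.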