Given the "complexity" (big, fat euphemism) of the code written by some human "developers", I wonder if this autocomplete will make any difference.
GitHub Copilot is AI pair programming where you, the human, still have to do most of the work
GitHub on Tuesday unveiled a code-completion tool called Copilot that shows promise, though it still has some way to go to meet its AI pair-programming goal. If you're wondering how well it performs: in an FAQ about the service, which is available as a limited "technical preview," GitHub admitted: The code it suggests may not …
COMMENTS
-
-
Wednesday 30th June 2021 08:29 GMT Flocke Kroes
I am sure the AI will suggest the most popular patterns. As ignorance outnumbers competence, the sign of the difference will depend on the human. The real use of this software might be to identify useful programmers: if they accept a large amount of recommended code, then they lack the skill to understand what is wrong with it.
-
-
Wednesday 30th June 2021 05:16 GMT amanfromMars 1
A small but important point to contend is not possible nor highly likely.
"We use telemetry data, including information about which suggestions users accept or reject, to improve the model," the Microsoft-owned biz said. "We do not reference your private code when generating code for other users."
Should that be written in the interest of greater accuracy for clearer transparency .... We do not reference Microsoft-owned biz use of your private code when generating code for other users‽
It is Microsoft after all, which be infamous for its supposed Base DNA Proclivity of Embrace, Extend and Extinguish.
And if that blanket assumption be totally wrong, I apologise for the popular presumption of their unproven guilt in such matters as matter greatly.
-
-
Wednesday 30th June 2021 09:06 GMT elsergiovolador
Stackoverflow 2.0
People don't even take the time to read Stackoverflow solutions; they just copy code from the suggestions one by one until something works.
This could be useful, though, if you could write a test and the AI would then "write" code until the test passes.
There is going to be a third excuse for watching cat videos, after "it's compiling!" and "it's running tests!" - "the code is writing itself, what am I even doing here?"
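For what it's worth, the "generate candidates until the test passes" idea the comment describes is easy to sketch. Everything below is invented for illustration: `suggest_implementations` is a stand-in for any code-generation backend and here just yields hand-written candidates so the loop is runnable.

```python
# A minimal sketch of "keep generating code until the test passes".
# suggest_implementations() stands in for an AI backend; here it
# simply yields hard-coded candidates so the sketch actually runs.

def suggest_implementations():
    yield lambda x: x - 1          # wrong candidate
    yield lambda x: x * 2          # wrong candidate
    yield lambda x: x + 1          # this one satisfies the test below

def acceptance_test(candidate):
    return candidate(41) == 42     # the human-written test

def first_passing(candidates, test):
    for candidate in candidates:
        if test(candidate):
            return candidate
    return None                    # no suggestion survived

impl = first_passing(suggest_implementations(), acceptance_test)
print(impl(41))  # → 42
```

Of course, a function that merely satisfies one test is not necessarily correct, which is rather the commenter's joke.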
-
Wednesday 30th June 2021 09:50 GMT Howard Sway
The code it suggests may not always work, or even make sense
Yet more "software development is just coding" bullshit.
Why should a programmer want their thought flow interrupted by some bullshit tool offering up unsolicited random suggestions whilst they're busy carefully creating solutions to domain-specific problems? A tool like this could never understand the context in which I'm writing a function or class, nor could a bunch of random suggestions from hundreds of other developers ever fit in with my own personal design and programming style, developed over decades. I mean, it might compile and work, but who wants a project of any complexity that contains 50 wildly different programming styles?
Proper coding is design too. Let me know when one of these bots does something actually creative, not just once but consistently.
-
Wednesday 30th June 2021 10:16 GMT elsergiovolador
Re: The code it suggests may not always work, or even make sense
> Why should a programmer want their thought flow interrupted by some ~~bullshit tool~~ other programmer offering up unsolicited random suggestions whilst they're busy carefully creating solutions to domain specific problems?
FTFY
And yet many developers love pair programming. I never understood why... to cover each other's shortcomings and try to last until the next payday?
If a manager throws one programmer at the problem and he or she is struggling, the manager may not know whether the problem is difficult or the programmer is not worth their salt; but if they throw two programmers at it, what are the chances that both are <insert a common name of a waste product expelled by animals>?
Now it will be "Boss, Co-pilot can't solve it either! We need a quartet!"
-
Wednesday 30th June 2021 10:26 GMT Warm Braw
Re: The code it suggests may not always work, or even make sense
I've always had an issue with the XP approach that you sit in front of a screen and start writing code - specifically a test for some further code you have yet to write - without actually giving much prior thought as to how that code might operate.
I can see that it might work for routine cases - for example grabbing some stuff from a database at the server side and shipping JSON back to the client or at the client updating the UI from the response - but for any significant problem you need to think in advance about data representations, research algorithms with suitable performance and design interfaces that enhance scalability and maintenance. I still cling to the belief that the process starts in a notebook, not on a screen.
The "parse_expenses" code is a good example of all the things you need to decide in advance rather than just assume: how might dates be represented; if it's an "expense", what's the significance of +ve and -ve values; will you reimburse based on the exchange rate on the day of the transaction, on the day it was converted (for example, paying the card bill), or at an arbitrary rate? In real life, the "business logic" is unlikely to be something an AI system has seen before, so it can't help you foresee it.
And in the JSON case, data serialisation is not something programmers should normally have to deal with consciously: if it's not built into the language, it should be handled by a library or a framework. It's always better to reduce the code that needs to be written than to automatically generate boilerplate that may or may not correctly infer what it needs to do.
However, it might be fun to couple two instances of Copilot together, toss in a few lines of code and see what they eventually come up with.
-
Thursday 1st July 2021 15:08 GMT Cederic
Re: The code it suggests may not always work, or even make sense
I suspect that's because XP never supported sitting in front of a screen and starting to write code without giving prior thought as to how that code might operate.
Disliking 'big upfront design' doesn't mean doing no design. Those conversations happened over two decades ago, check the c2 wiki for contemporary discussions.
Pair Programming is advocated within XP but works beyond it too. This isn't pair programming; this is code completion, lacking context and understanding of the problem domain, ignorant of the intended design, and in its current form it feels like a horrible distraction rather than a constructive aid.
That doesn't mean it can't improve, but it's not pair programming, and it has as little to do with XP as your description of it.
-
-
-
Wednesday 30th June 2021 13:30 GMT Mike 137
Quite apart from c*ckups
This AI may be pretty incompetent, but it's only the tip of an iceberg of useless automation. For example, I've spent literally hours working out how to turn off all the "helpful" automation in NetBeans. Some of it just won't go away, none of it proved useful, and some of it is darned stupid (e.g. a popup containing one line of guidance and the statement "Alt-Enter shows hints"; hitting Alt-Enter pops up an identical line of guidance). And let's say nothing about the default enforced code formatting to a horribly unreadable standard.
When I'm programming, I want [a] to be entirely in charge, and [b] to have absolutely no unnecessary distractions. This IDE (as an example) seems to be designed for coders who don't really know much about programming, and the number of "<insert language> interview questions" web sites around confirms that such coders exist in quantity.
-
-
Wednesday 30th June 2021 14:41 GMT Cuddles
Managing expectations
"The most enthusiastic person we could find toward Copilot said the tool worked as exactly expected one time in ten. "When it guesses right, it feels like it's reading my mind," they said."
Based on the descriptions in the article, it sounds like it works exactly as expected at least nine times in ten. What the person quoted appears to mean is that it works as intended one time in ten.
-
Thursday 1st July 2021 17:24 GMT Michael Wojcik
OpenAI still wasting resources, I see
We could see it being used to automate easy, repetitive tasks – such as generating boiler-plate code
That should already be automated, or refactored out of existence. There's no excuse for software developers to be creating boilerplate code. That's either a terrible use of resources, or those developers aren't worth employing.
and suggesting library or framework functions you're not familiar with to save you the trouble of looking through documentation
As others have already pointed out, we have StackOverflow copypasta for that. It's an abysmal idea. The return on the cost of consulting documentation isn't primarily in finding an API or getting the details of how to invoke it; it's in learning about the intent behind it and recommendations for its use. Yes, documentation is often poor – but that's important to know, too.
The majority of effort in programming should be in thinking, then in research. Actually writing code should be down near the bottom. Measure twice, cut once.
I'm reminded of Dijkstra's complaint about APL. Using ML to write code is exactly the wrong approach; it's applying massive resources to a wild misconception of the actual problem.
-
Tuesday 6th July 2021 15:26 GMT JDX
Re: OpenAI still wasting resources, I see
>>There's no excuse for software developers to be creating boilerplate code
Sorry, but this is very naive. Any time you work against someone else's bespoke interface that doesn't happen to use standard web services, you end up writing this kind of boring stuff. Even if it's only writing a class and mapping property names to JSON so the auto-serializer will do the rest for you.
You can write a tool to do it, but that often takes longer than the drudge-work.
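As a sketch of the drudge-work being described, here is what that class-plus-mapping boilerplate might look like in Python; the class, field names, and JSON keys are all invented for illustration:

```python
from dataclasses import dataclass, asdict
import json

# The bespoke interface's JSON keys rarely match your own naming
# conventions, so a hand-written mapping like this is typical.
FIELD_TO_JSON = {
    "customer_id": "CustID",
    "order_total": "OrderTotalAmt",
    "placed_at": "OrderDateISO",
}

@dataclass
class Order:
    customer_id: str
    order_total: float
    placed_at: str

    def to_json(self):
        # Rename each attribute to the key the external API expects;
        # the generic serializer does the rest.
        renamed = {FIELD_TO_JSON[k]: v for k, v in asdict(self).items()}
        return json.dumps(renamed, sort_keys=True)
```

Tedious to write, trivial to get right, and exactly the sort of thing a tool could plausibly churn out, which is the commenter's point.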
-