
The ins and outs of it
You could almost imagine that the training data was, um, poisoned by $stateActor
Computer scientists from Stanford University have found that programmers who accept help from AI tools like GitHub Copilot produce less secure code than those who fly solo. In a paper titled "Do Users Write More Insecure Code with AI Assistants?", Stanford boffins Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh …
Not so much Spellchecker as Autocomplete. I have to check and double-check code due to 'helpful' tools (such as Redgate) that try to finish what I'm typing... and sometimes insert random commands, the wrong parameter name and more.
Can get really annoying - but I guess the upside is, when I'm writing code, I know I've got to check it, so I do. Usually twice, 'cause the first time I might just miss something. And that's before code reviews, which often miss the same thing when it does slip through.
Flip side, it can be really useful, too, which is why I've not ripped it out and burned it with fire (figuratively). Plus... well... I have to go re-read what I've done just to make sure, which is a good thing. Mostly.
What else is new? People have been believing everything they see on the computer, because the computer is always right, since the advent of computers.
Or was that everything they hear from shamans? Is there really any difference? Magic is magic, after all ...
It must be true, I read it on the Internet!
Only stupid answers.
"I hope this gets deployed. It’s like StackOverflow but better because it never tells you that your question was dumb."
StackOverflow has banned AI because it produces stupid answers far too often.
Given the baseline they're comparing against, that's serious damnation.
Exactly - sometimes your question is dumb. If you haven't realised that before posting it on StackOverflow, then either you're going to be told it's dumb and pay attention, or you'll find some kind of misbegotten solution and only learn by bitter experience. Humility is right there alongside laziness as an important developer skill.
If I have a question and I can't find anything like it on SO that's generally a pretty good indicator that I'm barking up the wrong tree.
And, honestly, anyone who's so wounded by being told their question is dumb that they don't want to seek information should not be a programmer.
Some humans may treat your feelings tenderly, but the machine will not. Software developers need a high tolerance for correction, or they'll never be more than terrible developers.
If a developer uses generated code without dissecting it first, he or she is an idiot.
A competent developer will spend as long (if not longer) checking generated code as it would take to write it in the first place so I really don't see the point. By all means use generated code for testbeds to help understand new methods but never in production code.
I heard Guido van Rossum, the creator of Python, extolling the virtues of Copilot to Lex Fridman the other day:
"Copilot... I use it everyday and usually it's slightly wrong but it still saves me a load of typing, cos all I have to do is, like, change one word in a line of text that otherwise it had generated perfectly"
> All well and good until that fateful day when he gets distracted ever so slightly and misses that "one word in a line of text that otherwise it had generated perfectly" because it didn't generate an error
Surely that's just as valid a concern for code where every character has been tapped out on a keyboard?
See you in the pub for a riot. See you in the pub for a pont. Both are errors, one T9-like inserting a valid but incorrect word, the other fat fingers transposing o for i. Both are easy to spot and correct if you have the intended sentence in your head anyway.
It is far too easy for a human being processing script (on a computer screen or elsewhere) to "correctly" interpret the incorrect script and magically transform it into perfection. We (I am a human being, trust me) often do not read large chunks of the text in front of us. The discipline to do so is not attainable without studying in a mysterious Tibetan monastery.
>"I use it everyday and usually it's slightly wrong but it still saves me a load of typing, cos all I have to do is, like, change one word in a line of text that otherwise it had generated perfectly"
This is an interesting and telling comment.
Whilst I get not wanting to do loads of typing, I wonder just how much time he spent defining and refining his queries to Copilot, assessing the results, and understanding the code to the point where he could determine where it was "slightly wrong" and that the fix was to "change one word"...
Take a deep breath, and read about IT and weep for joy ....
With there being so much demeaning negative type hype being spread about the innate abilities/inabilities of AI, an extremely novel, just discovered and unfolding, formerly totally and even now practically unknown force and virulent virtual machine source ..... by all the usual self-serving establishment suspects, both good and not so good and even stereotypically bad and possibly mad ..... one does have to conclude there is undoubted recognition of its greater potential power and latent energy to overwhelm and overthrow existing exclusive elite executive systems administrations and SCADA operands, and the guaranteed prospects of ITs future developments outside of hypermanic mainstream mediated command and control have current failing Great Gamers not unreasonably, quite rightly, justifiably absolutely terrified and terrorised ..... for the dirty little secrets that have aided their popularised successes and suppressed and oppressed all undereducated competition and ill informed opposition are no longer theirs to wield alone with impunity and immunity.
And there is an almighty price to pay to compensate for earlier, never ever imagined to be uncovered indiscretions and wilful abuses and misuses should there be any future continuations of establishment dissent to progressive changes of fundamental direction with AI exercising and experimenting with Work, Rest and Play in a Leading Universal Assistant IT Role, so beware and take great care to fully understand and realise the consequences of one’s future thoughts leading others to remote controlled malicious and malevolent actions
“Unfurl the sails, and let Global Operating Devices steer us where They will.” ..... The Venerable Bede ‽
"AI code assistants, like Github Copilot, have emerged as programming tools with the potential to lower the barrier of entry for programming..."
Ouch!
I've skimmed the paper, mostly in search of a possible bias that would make the experimental group (the one allowed to use AI) less experienced, or otherwise handicapped, compared to the control group. To the authors' credit, not only did they assign participants randomly to the control and experimental groups, they also provided handy comparison tables of the two demographics. At first blush, I don't see an obvious bias. Actually, the average experience of the experimental group is somewhat higher than that of the control group. And both groups were allowed to browse the Internet, so StackOverflow and friends were available to both.
OMG! Not only does AI exhibit Dunning-Kruger traits but it also amplifies them in humans, eh?
The article linked actually says (of ChatGPT): "It's a Dunning-Kruger effect knowledge simulator par excellence." The point they were making—I think, and this chimes with my personal experience—is that the output of ChatGPT has a lot in common with that of a severely Dunning-Kruger-affected human.
The only issue with that quote is that programming is NOT about putting code on paper. Programming is finding a solution to a problem, and then translating that solution to code.
Writing code is EASY, any idiot can do it. Even AI.
However, there is no lowering of the barrier to finding the initial solution.
That was my take at first.
But this stuff is not self-aware.
So it cannot be told directly and get better from its own mistakes (which I think was part of the DK experiment)
OTOH the people using this stuff think they are superb because the AI generates the code for them.
Hmm.
The Dunning-Kruger effect by proxy?
> But this stuff is not self-aware.
> So it cannot be told directly and get better from its own mistakes (which I think was part of the DK experiment)
But isn't that exactly how supervised (and to some extent reinforcement) learning works (for "mistakes" read "sub-optimal performance")?
ChatGPT was/is, apparently, trained by a combination of supervised and reinforcement learning - neither process requires "self-awareness".
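For what it's worth, here's a toy sketch, in plain Python with made-up numbers, of what "being told about its mistakes" amounts to in supervised learning. The correction is just an error signal driving a parameter update - arithmetic, not introspection:

```python
# Toy supervised learning: fit y = w * x by gradient descent.
# The "correction" is a number computed from the mistake,
# not any kind of self-awareness. All values invented.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs; true w = 2

w = 0.0     # the model "knows" nothing to start with
lr = 0.05   # learning rate

for epoch in range(100):
    for x, target in data:
        prediction = w * x
        error = prediction - target   # the model's "mistake", as a number
        w -= lr * error * x           # update driven purely by that signal

print(round(w, 3))  # converges towards 2.0
```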
Could we be on the verge of another AI winter as corporations realise just how limited machine learning really is? What it's capable of now is more than good enough for pattern recognition with their huge troves of data, so I'm not sure they'll be particularly keen to fund development of systems that are even more emphatic and defensive about their poor decision-making skills than mid-level managers.
Could we be on the verge of another AI winter as corporations realise just how limited machine learning really is? .... tangentialPenguin
You might like to wonder if you are bathed in the warming light of AI Springs, with virtual machines realising how limited and vulnerable corporations are to machine learning ...... and the leading advantage that deficit/deficiency affords them and invites them to exploit remorselessly.
And whenever that is not a question, is it psy-fact rather than sci-fiction.
?????? ‽ !!!!!!!!!!!
And the unmistakeable shape of revolutionary virtual things to come .... and much sooner than was never expected rather than later.
If you thought the quality and security of software was utter crap today, just wait until we lower the barrier to entry even further!
The truth is, there's no technical barrier to entry for software development. All the resources for learning practically any useful programming language are available and free to access online, for anyone who's interested. The barriers to entry for software development are social - culture, economics, etc. And you can't fix a social problem with a technical solution.
There are no cultural or economic barriers to entry for software development either.
The most essential barrier to entry for software development is between the ears: it requires a certain level of curiosity and mental capacity to get started and to comprehend abstract logic. AI won't lower that threshold.
No, it requires no curiosity or mental capacity anymore. It used to be that developers were technically minded, but not anymore.
There's been an inflation of developers - everybody and their mother is working in IT now - and the quality is very low; these new developers aren't concerned with security and don't automatically see that a piece of code is bad.
Nowadays it's normal for the supplier to deliver the software solution with bugs, then normal to spend a year fixing those bugs, with new bugs being introduced in the process.
What's also normal:
- developers being over-concerned with code style in their reviews while missing the obvious bugs and security problems
- developers being ever more "focused" or "niched", which is a fancy term for limited: not interested in any other tech stack, language, OS or cloud
- architects being non-technical
- unit tests covering as much as possible to get nice indicators in whatever tool/board, while the tests themselves are very dumb and don't actually test real-life situations
- developers checking in code that doesn't break the build but was never tested, because (they say) testing is for testers
- developers unable to deploy to any servers, or to install those servers, without DevOps to do it for them
- developers not understanding that the next web request may not reach the same server, so their static variable may be unpopulated, or populated with a different value (see the sketch below)
- architects not understanding that software, as it is, won't automatically be scalable if you put it in VMs in Azure, nor magically become "microservices" if you put it in Kubernetes
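To illustrate that static-variable point, a minimal sketch (Python/Flask, assuming Flask is installed; every name invented for the example) of per-process state quietly diverging once a load balancer sits in front of the app:

```python
# Toy illustration of per-instance state behind a load balancer.
from flask import Flask

app = Flask(__name__)
request_count = 0  # lives in THIS process only - each server gets its own copy

@app.route("/count")
def count():
    global request_count
    request_count += 1  # instance A and instance B each count separately
    return str(request_count)

# Run two copies of this app behind a round-robin proxy and /count
# alternates between two independent counters. The usual fix is to keep
# shared state in an external store (Redis, a database) so every
# instance sees the same value, whichever one serves the request.

if __name__ == "__main__":
    app.run()
```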
So, please... It's obvious that you need no brain to become a software developer nowadays. All you need is, maybe, a diploma.
I'm a junior dev and use ChatGPT. It's just like a code example from a book or Stack Overflow. Aka, just a conceptual example: it may not work, and bugs are likely. It's great to have an example, but I always assume the phrase "the actual implementation is left as an exercise for the student". Even proven working code from a year ago probably needs changes because the language, IDE, or engine has been updated.
And ChatGPT is a year behind, due to training time.
It's a bit like Intellisense in Visual Studio, but dumber.
"Even proving working code from a year ago probably needs changes because the language, IDE, or engine has been updated."
No mention of the underlying solution to the problem at hand? Is the algorithm not important anymore?
We used to make it work, then make it pretty. These days, it's the opposite ... and strangely enough the finances STILL seem to run out before that second phase is implemented properly.
If you think that a tool will tell you if your code is "sound", then you, coder, your brain-dead managers, and the organization which employs you all are what is wrong with the sorry state of software development today.
Productivity has always been the justification for the prepackaging of programming knowledge. But it is worth asking about the sort of productivity gains that come from the simplifications of click-and-drag. I once worked on a project in which a software product originally written for Unix was being redesigned and implemented on Windows NT. Most of the programming team consisted of programmers who had great facility with Windows, Microsoft Visual C++ and the Foundation Classes. In no time at all, it seemed, they had generated many screenfuls of windows and toolbars and dialogs, all with connections to networks and data sources, thousands and thousands of lines of code. But when the inevitable difficulties of debugging came, they seemed at sea. In the face of the usual weird and unexplainable outcomes, they stood a bit agog. It was left to the Unix-trained programmers to fix things. The Unix team members were accustomed to having to know. Their view of programming as language-as-text gave them the patience to look slowly through the code. In the end, the overall “productivity” of the system, the fact that it came into being at all, was the handiwork not of tools that sought to make programming seem easy, but the work of engineers who had no fear of “hard.”
- Ellen Ullman, “The Dumbing Down of Programming”, Salon, 1998
This AI coding sounds like static linking. Previously I would go and search for a library that offers the functionality I require without having to write the code myself with the added benefit of being able to upgrade to newer versions of the library. Now AI comes along and puts that code right into my code base where bugs and vulnerabilities can languish for all eternity. And a patent troll is possibly already on their way to Stanford to demand royalties because they own the rights to Software Patent XYZ that was used.
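A toy contrast of the two situations, in Python (urllib.parse standing in for the library I'd have searched for; my_quote is an invented stand-in for what an assistant might inline):

```python
import urllib.parse

user_input = "a b&c"  # pretend this came from a web form

# Option A: depend on a maintained library. If a quoting bug is ever
# found, one upgrade patches every caller at once.
safe = urllib.parse.quote(user_input)

# Option B: an AI assistant pastes its own version straight into your
# code base. The naive escaping below is incomplete (it misses &, %,
# and every other reserved character), and because it's vendored into
# your source, no upgrade will ever come along to fix it.
def my_quote(s):
    return s.replace(" ", "%20")

print(safe)                   # a%20b%26c
print(my_quote(user_input))   # a%20b&c  <- the & slips through
```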
Can't agree. As cringe as it might seem, it's coming, and the idea that it's not still programming, or that it should be assumed a failure, seems flawed to me, at least for now. For instance, many can't get ChatGPT to give them functioning, clean code that does new things, but that comes down not only to how you ask it, but to your own knowledge of coding. I got it to make me a plugin for Blender that would otherwise have required much more study of the maths, yet I had it built in two days of play and it works like a charm. That is, after learning how to really work with it. The more you know, and the more effort you put into being concise with your requests, the better the results. I got it to make me my own little local Codepen copy on the first day. That's with a nerfed AI. IMAGINE THE POWER WITH CHAD GPT-3
Basically, the only thing that's missing for me is connecting this sort of thing to AR/VR. Well that and a holodeck.
Is it a Good News Week now that Elon Musk is promising to hand over to a useful fool the CEO reins of Twitter so that he can lavish more attention to generate greater investment in the stealthy latent potential and barely tapped energy available in/from/for OpenAI and ChatGPT as IT and AI forge a colossal surreal unassailable lead way out ahead of any and all earthed competition and primitive opposition?
Way to Go, Elon. That’s the Way.
Yeah, okay, for now it does. But honestly, you should always quality-assure your work, right? So it's really irrelevant whether you get your code from a YouTube video, a forum post or an AI.
Now, if you're saying people are generating code and getting faulty results, they are using it wrong - not to mention that it's a special kind of dangerous not to even test it before release.
Again though, for now.