Get your backdoors in the training data now, if you haven't already.
75% of enterprise coders will use AI helpers by 2028. We didn't say productively
Global tech research company Gartner estimates that by 2028, 75 percent of enterprise software engineers will use AI code assistants, up from less than 10 percent in early 2023. As of the third quarter of 2023, 63 percent of organizations were piloting or deploying AI code assistants, or had already deployed them, said the survey of …
COMMENTS
-
Monday 15th April 2024 23:07 GMT Michael Wojcik
Re: Let’s hope it improves
Oh, it'll get better, because that bar is very, very low.
By the same token, SotA LLMs trained on large source-code corpora can already complete mundane tasks (e.g. "write an Android app that does X", where X follows the well-worn pattern of CRUD operations on a simple database, front-ended with simple forms and views) as well as a great many professional developers. That's because a great many professional developers don't do anything very challenging; they're producing software that's not significantly more complicated than what I used to have the students in my university web application design class do.
(Those students were undergraduates in either the User Experience or the Professional Writing major, not CS or similar — CS students would have been doing much more interesting programming than that. This class was meant to give the students some insight into how software works and what the development process is like, so they'd be better able to communicate with developers.)
Now, that's not the sort of software I work on, and I imagine the same is true for many Reg readers. But it's important to remember that the range of difficulty and sophistication in professional software development is huge, and the range of developer knowledge and skill is commensurate. There absolutely are professional programmers who could be replaced today with LLMs with no discernible loss of quality or quantity of output, just as there absolutely are professionals who do work that current SotA models are far, far away from generalizing to.
-
Saturday 13th April 2024 12:10 GMT Mage
Gartner estimates that by 2028, 75%
But if reality bites it could be 10%.
Unfortunately, Senior Manglement listens to vendors and juniors rather than the experienced devs in their own company, because the vendors and juniors are saying what Manglement wants to hear.
What's Gartner's track record on these predictions, and are they using so-called "AI"?
-
Monday 15th April 2024 23:14 GMT Michael Wojcik
Re: Gartner estimates that by 2028, 75%
I'm giving Gartner some credit on this one, since they correctly (if too conservatively) applied Amdahl's Law: a fractional time saving of X on a fraction Y of the overall job (0 < X, Y < 1) gives you a total time saving of XY.
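Spelled out (my notation, reusing the comment's X and Y; the 0.5 and 0.2 are the figures used in the next paragraph):

    \[ T_{\text{new}} = T\,\bigl[(1 - Y) + Y(1 - X)\bigr] = T\,(1 - XY) \]
    \[ \text{total saving} = XY, \quad\text{e.g. } 0.5 \times 0.2 = 0.10 \]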
As it is, I think they're overestimating both X and Y. For decent programmers, I doubt LLMs will provide a 0.5 time reduction in writing code, and more importantly, few good developers should be spending even 0.2 of their time actually writing code. There may be brief periods while implementing major new features when developers produce a lot of source, but those should be unusual. Design, testing, and addressing technical debt should all take priority. And in particular, people who are generating the sort of code an LLM can hand them are probably reinventing some wheel when they ought to be reusing an existing implementation.
-
Saturday 13th April 2024 16:58 GMT Joe W
Maybe for writing tests ?
Maybe... dunno, tests for unexpected inputs and stuff. Surely not for testing actual requirements w.r.t. rules, or the correctness of calculations...
Well, we'll have to see. I'm not worried: these AI assistants were trained on shite code, so the results will be crap as well. GIGO, as it is known (garbage in, garbage out).
-
Sunday 14th April 2024 16:07 GMT Anonymous Coward
Re: Maybe for writing tests ?
If you write code with a lot of debug-mode assertions (even if they are removed for release), then generating random test cases to blitz those assertions sounds plausible. But the work is being done by the assertions. So it would only count as automated if the so-called "AI" decided on and inserted the critical assertions along with the code. (I haven't seen that so far, although I've never asked for it.)
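A minimal sketch of that idea, under assumptions of my own (a hypothetical merge routine; none of this comes from a real assistant): the random driver just blitzes inputs, and the assertions carry the actual test logic.

    import random

    def merge_sorted(a, b):
        # Hypothetical unit under test: merge two already-sorted lists.
        out = []
        i = j = 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        out.extend(a[i:])
        out.extend(b[j:])
        # Debug-mode assertions: these, not the driver, do the checking.
        assert len(out) == len(a) + len(b)
        assert all(out[k] <= out[k + 1] for k in range(len(out) - 1))
        return out

    # Blitz the assertions with random inputs; no expected outputs needed.
    for _ in range(1000):
        xs = sorted(random.choices(range(100), k=random.randrange(20)))
        ys = sorted(random.choices(range(100), k=random.randrange(20)))
        merge_sorted(xs, ys)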
Otherwise, test cases usually require input and expected output preparation. Volume alone is not enough - the test cases should be designed around the way the code branches. I haven't had experiences that assure me that "AI" assistants currently can be trusted to do that alone.
I do however use a code assistant and find it truly helpful and actually fun to use. I would describe its ability as pushing the boundary between syntax and semantics - and in the process offering empirical evidence that the boundary between syntax and semantics is a gray zone.
-
Monday 15th April 2024 10:54 GMT Tim 11
Re: Maybe for writing tests ?
In the future I can quite imagine an AI being able to examine the code, identify test conditions, and generate a complete set of regression tests.
It couldn't prove the existing code was doing what the user wanted, but it could ensure nothing got accidentally broken by a bug fix or by an update to third-party components or the execution environment, which is the main reason we use test automation.
This could apply to end-to-end testing as well as unit tests.
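One plausible shape for that kind of generated test is a characterisation (golden-master) test: record the current behaviour as a baseline, then fail on any unintended change. A minimal sketch, with a hypothetical function and file name of my own invention:

    import json
    from pathlib import Path

    def price_with_tax(net, rate):
        # Hypothetical unit under test.
        return round(net * (1 + rate), 2)

    GOLDEN = Path("golden_prices.json")  # illustrative file name
    CASES = [(100.0, 0.2), (19.99, 0.05), (0.0, 0.2)]

    def test_characterisation():
        actual = [price_with_tax(net, rate) for net, rate in CASES]
        if not GOLDEN.exists():
            # First run: record current behaviour as the baseline.
            GOLDEN.write_text(json.dumps(actual))
        expected = json.loads(GOLDEN.read_text())
        # Catches accidental breakage; says nothing about user intent.
        assert actual == expected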
-
Monday 15th April 2024 23:18 GMT Michael Wojcik
Re: Maybe for writing tests ?
It's pretty straightforward for an autoregressive model with a large context window to write unit tests, particularly when the target uses a language that has a strong unit-test framework already available.
The problem is that such tests would only confirm that the units do what they're written to do. Whether that has any relationship to what they're intended to do is quite another story. The point of unit testing isn't so much to exercise the code as it is to exercise the expectations.
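A contrived illustration of that gap (hypothetical code of my own, not from the article): a test derived from the unit merely re-states what the code does, so it passes even though the code contradicts the stated intent.

    def loyalty_discount(total, years):
        # Intent: 5% off per loyalty year, capped at a 25% discount.
        # Bug: the cap is applied to the years, not to the discount rate.
        return total * (1 - 0.05 * min(years, 25))

    def test_loyalty_discount():
        # Generated from the code, this just locks in its behaviour...
        assert loyalty_discount(100.0, 10) == 50.0
        # ...and passes, although under the intended 25% cap the right
        # answer for 10 years would be 75.0.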
-
Saturday 13th April 2024 19:54 GMT steviebuk
It's still shit
My partner watches Real Housewives and knows it so well that she said it would be her specialist subject. So, as a bit of fun, I said I'd ask ChatGPT for some quiz questions and answers. Of the 10 questions, about 4 had the same answer and only 2 or 3 were actually right. The others were just wrong; it made shit up.
-
Saturday 13th April 2024 19:57 GMT steviebuk
Mark my words
It will lead to another Horizon scandal. People are using it to summarise reports and emails and not checking the results. We are testing Copilot, which summarises meetings. It claimed I said something during the meeting that was never said.
In years to come, people will use that as evidence: "Well, the AI said you said it."
The likes of DWP will abuse it to summarise benefit checks and miss something critical, causing a death.
-
Sunday 14th April 2024 16:22 GMT 43300
Re: Mark my words
"The likes of DWP will abuse it to summerise benefit checks and miss something critical causing a death"
That might be what they want - they don't exactly have an unblemished track record for supporting people in genuine need.
Perhaps the NHS could use AI to deal with their complaints? Training the AI for this wouldn't be difficult: whatever the complaint is about, persistently deny, obfuscate, patronise, claim black is white, take as long as possible over it, and always, always work from the basis that the complainant is lying / deluded and the NHS has been perfect in every respect. If it could manage all this, it would be indistinguishable from the current system, which uses actual people...
-
Sunday 14th April 2024 10:50 GMT wowfood
Iffy
I'm still iffy on AI.
Using it to generate code has, in all honesty been terrible. It'll just generate stuff that doesn't work, or uses libraries that don't actually work together.
But I have found (on home projects) that if I state something like "I am trying to achieve X; here are the code samples I've tried", it'll point out quite accurately why my code is failing, and it tends to give an alternative that (with tweaking) will resolve my issue. It's also been a boon for learning about less common coding techniques that are very handy.
All in all, it's nowhere near ready yet for general use, but as a lookup tool I've found it handy.
Just rather than treating it as a code-writing tool, treat it as a reference tool and keep using your head. The only problem I'm seeing right now is less skilled / new coders over-relying on it and not actually learning from what it generates, so you get butchered code popping up in people's questions.
-
Sunday 14th April 2024 10:55 GMT Bebu
Amdahl's Law
«Even if you're getting 50 percent faster task completion [on coding], that's only going to be 50 percent of 20 percent. So that means only 10 percent greater improvement in overall cycle time »
Unknown to manglement types, Amdahl's Law pretty much says this. Long before my time, "Time and Motion Studies" were a management tool to identify and quantify resource usage, potential bottlenecks and inefficiencies, but those managers were probably made of sterner stuff: managing the logistics of the 1944 D-Day Normandy landings was probably slightly more demanding than managing a software development team. :)
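The quoted arithmetic as a quick sanity check in code (function and argument names are mine, purely illustrative):

    def overall_saving(task_saving, task_share):
        # Amdahl-style estimate: a fractional saving on one task,
        # weighted by that task's share of the whole cycle.
        return task_saving * task_share

    print(overall_saving(task_saving=0.5, task_share=0.2))  # 0.1 -> 10 percent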
I imagine any vocation that is at imminent risk of befoulment by AI/LLMs is not likely to attract stellar talent. The hard part is deciding what activities would be exempt, since at present it's AI secret sauce with everything.
-
Monday 15th April 2024 00:24 GMT Thomas Martin
This is the type of stuff that puts false information and expectations in people's and programmers' minds. Yes, AI can generate code, but even if it works, the programmer may still have hundreds of hours of debugging and customization ahead, and even then it still may not do what they want. I programmed for 33 years and there is nothing like having humans do it. AI cannot work with people. I saw an advert on TV today hawking I* W*****X which showed it generating screens of code and a supposedly happy programmer taking off. It may generate something, but as someone else said, things don't work and/or don't work together. It may even create inadvertent back doors. AI can generate, but can it police itself and find its own errors and problems?
Something to think about...
-
Monday 15th April 2024 16:11 GMT xyz
Considering....
That I failed miserably to book a car parking space at an airport yesterday afternoon due to the shite nature of the "UX" involved, and that hacking through my own annotated code from 5 years ago leaves me wondering what substances I'd been exposed to, I can think of nothing worse than letting a plastic developer (aka an AI) cobble together lines of stuff I have to wade through to check. Also, Gartner wouldn't know its corporate ass from its corporate elbow.
"AI" is good for woolly or statistical analysis but for doing the dev, give me a break. We've all been down to StackOverflow, so an AI doesn't have to.
:-)