back to article Google's DeepMind says its AI coding bot is 'competitive' with humans

Alphabet-owned AI outfit DeepMind claims it has created an AI that can write programming code, find novel solutions to interesting problems, and do it at the level of the mid-ranking human entrants in coding contests. Dubbed "AlphaCode" and detailed in a pre-print paper [PDF], the tool is said to advance on previous automated …

  1. oldtaku Silver badge

    Sure, it'll beat outsourcers

    Since most humans 'coders' are terribly educated and can only search Stackexchange for code snippets, then blindly copy and paste them and randomly beat on them till there aren't any syntax errors (and/or post replies to said same Stackexchange threads begging people to do their work for them), then yes, this is a decent advance on having to hire bottom-tier programmers who should be in another line of work anyhow.

    It's not a threat to anyone competent, and having read the pre-print I don't feel threatened at all - heck, I welcome it. The non-thinking but painstaking accounting crap is far too much of the job and I completely welcome automating that away.

    1. mpi Silver badge

      Re: Sure, it'll beat outsourcers


      The problem with most on-the-job tasks is not in the writing of the code, or designing small algorithms, it's with architecture. What should the code do, what goal should it achieve, how can it fit with the system.

      Here is a problem that never comes up in coding challenges, and which I am pretty sure no AI will be able to solve on its own for a very long time:

      "You know the lowcode-platform accounting uses, right? They get large CSVs from our new customer, and need to read them in, but the platform can't do it. We need something to bridge the gap. Oh and the bigwigs want it to log all entries in some form of summary in case we get audited...just think of something there. "

      Contextual knowledge. Communicating systems. Knowing prior code. Efficiency/Usability considerations. Architectural Problems.

      Huge problems for the problem for humans. The task is so simple, its usually the kind of work people would often hand to interns/new hires to see how they do. I bet by the time most programmers read through the specification of this really simple task, they already have at least a half-formed idea how to do it.

      This is not ot say that AlphaCode isn't a tremendous achivement. The fact that it can take a natural language description of an algorithmic problem and then come up with a solution it hasn't seen before is beyond impressive. I hope this thing makes it into a viable product that we can use not just to boost productivity but to use in novel ways, and maybe even learn from.

      That won't chang the fact that we can look forward to at least 6 more months of various announcements how this will be [the end of, a threat to, a total gamechanger, ...] for programming and software development. *sigh*

      1. Charles 9

        Re: Sure, it'll beat outsourcers

        "Huge problems for the problem for humans. The task is so simple, its usually the kind of work people would often hand to interns/new hires to see how they do. I bet by the time most programmers read through the specification of this really simple task, they already have at least a half-formed idea how to do it."

        Even for The New Guy who doesn't yet understand all the meanings behind the meanings? That's where machine-generated code stands right now. This is more school-grade problems, whst I used to read in things like The Great Computer Challenge.

        1. mpi Silver badge

          Re: Sure, it'll beat outsourcers

          > That's where machine-generated code stands right now.


          "The New Guy", as in "the guy we hired yesterday who never saw anything in our codebase", can still email accounting until someone shows him the lowcode platform...and from there he can apply what he knows about `import csv, requests, logging` et al. to produce something that works in most cases and, if he's good, only makes the senior sigh in frustration slightly.

          By the time AI can do that, it would probably be a good idea to get going with that Mars colony, because then we're not far from it walking up to my desk stating: "I need your Mouse, your Keyboard, and your Motorcycle." in monotone heavily accented english.

          1. Charles 9

            Re: Sure, it'll beat outsourcers

            That's the catch. A REAL "new guy" wouldn't know about "import csv" or what the csv's format is supposed to mean, or anything about "requests, logging, et al".

            That's where the "AI" code generators are right now. Like I said, the problem stated in the article is something that would likely be given to school-age coders undertaking something like The Great Computer Challenge (which I did in high school), where coding teams are given like three hours to code solutions to problems given to them with no advance notice.

            If the AI had been trained in the kind of activities you handle on a regular basis, it could probably be better able to interpret a request like, "The current version of our CSV parser is not capable of handling our current client's CSV file due to its extreme size (or some other reason). The system has limitations x, y, z, etc. Produce code that can perform the following: parse a CSV of arbitrarily large size (or other reason) within the limitations of the system, and produce a log that summarizes the CSV file's contents in sufficient detail to satisfy auditors." Or something of the like.

            1. Loyal Commenter

              Re: Sure, it'll beat outsourcers

              I'd like to see how it copes with "come up with a solution, with the client, on how to quote and escape this CSV file sufficiently for it to meet their needs and capabilities, for the cases where the user has entered a name like "O'Reilly" or put a comma, or double-quote in the middle of an address line.

              Of course, the correct answer is "don't use a CSV, come up with a proper file format instead," and I'd seriously like to see an AI system that can handle all the requirements gathering, specification work, and back-and-forth with the client via email, face-to-face meetings, and Teams calls, and come up with an unambiguous document at the end which is both technical enough to capture all the detail, but also readable enough for the client to sign it off and know what they are signing.

              In other words, I'll worry about my job when the IQ of such an "AI" system exceeds that of your average business analyst, senior developer, and account manager combined.

      2. Muppet Boss

        Re: Sure, it'll beat outsourcers

        Having read the paper, a real person had to write quality unit tests to, sorry, test the generated code against. No way this is going to work in real world. Wake me up when they automate writing tests as well.

        1. Loyal Commenter

          Re: Sure, it'll beat outsourcers

          Yup, sounds like "test-first" development. Once you have written the tests that define the behaviour, you fill in minimum amount of code to pass those tests. Since the tests describe the behaviour you are expecting, the actual work has already been done, and all the rest is just keyboard mashing.

    2. Evil Auditor Silver badge

      Re: Sure, it'll beat outsourcers

      ...only search Stackexchange for code snippets, then blindly copy and paste them and randomly beat on them till there aren't any syntax errors (and/or post replies to said same Stackexchange threads begging people to do their work for them)...

      Wait, there's still another coding method alive than what you just described?! Me being a miserable old sod - not that old but old enough to be miserable - I have the impression that nearly every proper coding practice ceased to exist about 15 years ago.

  2. SimplyIntricate

    Googled the answer?

    How do we know the bot didn’t just google the question and paste the first result from stackexchange?

    1. EricM
      Thumb Up

      Re: Googled the answer?

      That probably would mean that an in fact intelligent AI was finally invented.

      After all, lazyness IS a sign of intelligence :)

      1. AndrueC Silver badge

        Re: Googled the answer?

        Oy! It's called efficiency. Or at least that's what I always say at my reviews.

      2. Loyal Commenter

        Re: Googled the answer?

        Laziness, Impatience, and Hubris. The Three Virtues of programming.

    2. mevets

      Re: Googled the answer?

      Good programmers copy; great programmers paste. [ Kelsey Hightower ]

    3. AdamWill

      Re: Googled the answer?

      God, I hope not, cos if it knows how to do that then it really *is* coming for my job...

  3. Gotno iShit Wantno iShit

    It's not about the code

    The parts baking my noodle here (individually, never mind all together) are that DeepMind can understand the natural language problem, break it down into manageable chunks, find the optimal solution and explain it's working in plain English.

    Generating syntactically correct code from 'small chunks of problem' seem like the easy bit to me.

    I wonder how far it could get through an advent of code.

    1. mpi Silver badge

      Re: It's not about the code

      > and explain it's working in plain English.

      As far as I understood it, the AI didn't do that part. It derives an algorithm from the written requirements into code, the explanation was added later as part of the paper.

      Please correct me if I misunderstood something.

      1. veti Silver badge

        Re: It's not about the code

        I suspect the reason it didn't do that part is because no one has bothered to train it to. Being Google, of course, the developers wouldn't know "documentation" if it fell on them.

        But we know AIs are already good at composing natural language. In this context it knows exactly what it needs to say. I can't believe it couldn't also drum up some suitable words to express it.

  4. MiguelC Silver badge

    There's a part of the development process A.I. (whenever it comes to exist) would/will have a tough time with: explaining to the user what they really want instead of what they're telling you they want - imagine if it blindly follows user requests, what kind of shiny non working system they'll get

    1. FeepingCreature Bronze badge

      Or far more worrying, what kind of working system.

    2. Bbuckley

      Or takes orders from a racist/psycho/terrorist etc. ;-)

  5. Ken Y-N

    What does "in the top 54%" mean?

    Does it mean that it answered enough questions correctly to be rated? If so, how many out of how many attempted questions?

    According to the blog post it generates lots of possible solutions and sees which give output that is close to the expected result then further refines them. As I've seen with other impressive AI results, I suspect there's a human at the end of it who throws away the crappy results and highlights the best - GPT-3 for instance had many examples of utter gobbledygook.

    1. RobLang

      Re: What does "in the top 54%" mean?

      There's no human involved in this test. Testing the validity and performance of an algorithm is the easy bit.

    2. Caver_Dave Silver badge

      Re: What does "in the top 54%" mean?

      Means "it wasn't half good"

      Parse that for at least 3 different meanings in common UK terms!

    3. Loyal Commenter

      Re: What does "in the top 54%" mean?

      "In the bottom 47%". That's what it means.

  6. Pascal Monett Silver badge

    So it can code the creation of a string

    Wow. I'm impressed. No, really. If it took the description of a problem and turned out a working bit code as a solution, that's a good thing.

    Now, my current problem is the statistics of how fast users respond to an email recieved. Responses and forwards of said email need to be taken into account.

    So, what's the solution to that, AI ?

    1. TRT Silver badge

      Re: So it can code the creation of a string

      Took me quite a while to work out what the description of the problem actually meant. I'm still not clear on what the intended goal could possibly be useful for. But that's probably just me - I have been told that I tend to produce what people actually need rather than what they actually asked for.

  7. RobLang

    More tools is a good thing

    The world needs more software and there aren't enough of us making it. Many business apps could work just fine with higher abstraction levels than they have today. I can see this kind of AI being a great support to low-code. I've worked with plenty of people who can explain business logically but weren't interested in code, this would help them a great deal.

    Every time there's a new abstract layer, you hear calls of "programmers out of a job" and yet here we are.

    I'm more interesting in the idea that AI might find a new paradigm that programmers haven't thought of yet. In the early 00s I saw some research that would use genetic algorithms to generate lots of snippets of code to solve a task. At each generation it would take all those that worked and create another generation of code snippets. Processing power was something of a premium then compared to now and so limited it to very simple problems. I wonder if exploring the solution space using neural networks and evolutionary computing might turn up something that's useful to us all!

    1. Yet Another Anonymous coward Silver badge

      Re: More tools is a good thing

      There are no jobs for programmers because studies in the early 1960s showed that soon everyone on the planet would have to be employed in laying out circuit board designs for computers. So there would be nobody left to program them.

      So automatic tools had to be invented, which meant fewer tape-out jobs and lots more computers

    2. Steve Davies 3 Silver badge

      Re: More tools is a good thing (not)

      The more tools like this that become available the more beancounters will decide that we humans are even more expendable.

      They'll boast at the reduction in staffing costs and get the corner office.

      That will continue especially when Lord Elon Muck's robots come on the scene then we can say goodbye to all IT jobs.

      The thing is that many problems require the non-logical POV in order to come up with the right solution. I wonder how these AI/Robotic systems will handle that.

      My immediate answer is 'badly'.

      1. Yet Another Anonymous coward Silver badge

        Re: More tools is a good thing (not)

        The more tools available the lower the cost of creating code and so the more places code can be used and so the greater the demand for code and coders.

        Compilers, microprocessors, mobile computing may have threatened the jobs of mainframe operators but it hasn't been terrible for the programming profession

      2. Jilara

        Re: More tools is a good thing (not)

        I recently re-watched the 1957 movie "Desk Set" (and was reminded again how great Tracy and Hepburn were). The computer that will replace the research department (essentially Google) shows the failings of these assumptions even in the mid-20th century. The quirkiness of humans allows innovations that pure logic doesn't handle well. The premises of the movie are still valid in a lot of ways.

        1. Charles 9

          Re: More tools is a good thing (not)

          Based on this and orher things I've read, it seems more like some things can only come by chance, that logic can't find or justify everything. Gut feelings and so on (which can be both right and wrong) are just our way of "rationalizing" taking a chance.

          1. TRT Silver badge

            Re: More tools is a good thing (not)

            Do computers have GIT feelings?

  8. David M

    Just another compiler

    Isn't this just a new sort of compiler? We used to write assembler, but now a compiler does that for us, based on some input in a high-level language. This AI is doing much the same, translating an even-higher-level description into something that can be swallowed by a traditional compiler. If this catches on, the programmer's job will become the writing of that natural-language description, in a sufficiently precise way that the AI can understand and solve it.

    One area where this might get interesting is in software testing. If the AI 'understands' the problem, it should also be able to generate suitable test cases. But would you trust a system where the code and the tests were derived from the same source? Or do we assume that the AI's code is always correct, and therefore doesn't need testing?

    1. mpi Silver badge

      Re: Just another compiler

      Not really.

      A compiler takes in instructions (not a problem statement), written in an unambiguous, artificial, formal language (aka. code). It then translates the exact instructions into other exact instructions.

      This system takes in a problem statement (not instructions), written in ambiguous, natural, contextual language (aka. english). It then derives exact instructions from the problem statement.

      1. veti Silver badge

        Re: Just another compiler

        I don't think there's very much contextual or natural about that English, and I'm sure someone has been to a deal of trouble to make sure it's not ambiguous either. In fact it's not that far from COBOL. And it's not clear that it requires any less skill to write than a "real" programming language.

  9. Anonymous Coward
    Anonymous Coward

    Why are they doing this?

    Are we desperately short of coders ?

    How about taking that natural language processing and doing something useful for software development with it? Like translating badly worded support tickets into proper language with domain knowledge? And spotting when somebody was too busy/lazy to write multiple tickets and puts more than one issue in a ticket - how about an AI that can auto split support tickets by issue?

    1. Yet Another Anonymous coward Silver badge

      Re: Why are they doing this?

      Or something that can take descriptions of tasks from regular humans and auto generate unit tests so we can test against the real requirements not testing against the same mistaken understanding of the problem by the same programmer who wrote the code

    2. AndrueC Silver badge

      Re: Why are they doing this?

      Are we desperately short of coders ?

      Actually, yes, we are. We have been for..well..basically forever. It's why I've loved this career - it's as close to job security as you're likely to get.

      Whether that's the driving force for this or 'because we want to try' I don't know. I'm personally ambivalent to it. The tech is cool, I think our jobs are actually safe (but might change a bit) and anyway I'm retiring or semi-retiring next year ;p

      1. chozorho

        Re: Why are they doing this?

        This talking point has always bothered me. If we are "short of coders," then why is it that when I apply to dozens of companies for software engineering positions, the majority of them ignore me without even giving me an interview? You might counter this by assuming that I'm just personally incompetent, but even my current company has thousands of applicants (and far fewer openings!) and yet still complains that "it's a candidate's market." That's an inconsistency in my view.

        I suspect that there is a bit of a generational divide here. Maybe things are easy if you have decades of experience, but things aren't nearly as easy for young developers trying to get entry-level jobs. For this reason, I believe it's becoming a saturated market. But if I'm missing something, then I'm open to hearing it. In fact, I finally made an account here just to join this discussion.

        1. AndrueC Silver badge

          Re: Why are they doing this?

          why is it that when I apply to dozens of companies for software engineering positions

          Every company I've worked for over the last 30 years has used a recruitment agency. They might ignore job requests sent directly. That might be through choice or because their agencies insist on it. Although if I was in that position (as an employer) I would forward the communication to my agency of choice. Then again maybe I wouldn't because I'd consider it the agency's job to do that work. Or I dunno.

          But I will say that I would never contact a company directly for a programming position. If you're currently looking I a suggest you sign up with some agencies instead (it won't cost you anything - the employer pays).

          An Article.

          Another Article.

          Some good news if you're developing a career:

          What the UK government thinks.

          Another Article.

  10. TheKnowAlotGuy

    The "problem" that humans have, is to understand the problem.

    To understand the problem, most humans want to understand the "Why?".

    Usually only then, people have lowered the "bullshit" shield that prevents the actual problem solving and algorithmic design.

    Computers on the other hand, usually do not understand what "Why?" means, and thus for tersely described algorithms, are able to produce code, without any mental feeling blockages.

    This is very similar to chess, and why chess computers are able to beat the best humans, raw processing power and trying out loads of different scenes according to a rigorous, painstakingly precise order and method.

    However it is also the reason why computers are not able to beat the best humans at all times, because humans may do things that are feeling based, rather than logic, and thus the humans are able to "outwit" the computer.

    For some tasks, where the precise algorithm can be described in minute details, although in natural language, I would expect computers to excel and take over the work from humans. However the humans would simply move a short step up the process chain since someone is needed to make that detailed analysis and precise description of what the algorithm actually should do and why.

    1. TRT Silver badge

      Amen to that...

      God help us the day we have a tool that produces exactly what was asked for rather than what people thought they were asking for.

      1. Anonymous Coward
  11. DS999 Silver badge

    Only works if you have perfect specs

    How often do you get specs as detailed to specify every possible input and output like in the provided example? That's not how the real world works.

    1. Blank Reg Silver badge

      Re: Only works if you have perfect specs

      I saw such perfectly defined specs once, but bigfoot stole them and rode off into the woods on a unicorn

  12. sreynolds

    How long before.......

    I wonder how long it will be before it is used to fill out job applicants.

    1. Nifty Silver badge

      Re: How long before.......

      I'm full of myself already

  13. Il'Geller

    Textual search.

    1. Anonymous Coward
      Anonymous Coward

      Is it me or did it take a really long way of doing this? If "input" is a stream, it could validate per byte, similar to how any type of browser knows if "back a level" and "forward a level" are still valid (possibly using sum()). If both inputs were defined in full, it could of done a whole bunch of things (xor'ing byte to byte springs to mind). I'm just a old hobbyist programmer (I do it for fun), but this pop(), append() and while-over-for stuff isn't exactly "experienced" for such a problem.

      Textual search it is but, there must be some complexity not listed in the article. Sure it interpreted from written English/language, but that's nothing new.

      P.S. yes I understand that just because it LOOKS faster, that doesn't mean it is. But for this problem....?

  14. Howard Sway Silver badge

    The problem with this approach

    is that it deals only with the input and required output, and cares not a jot about how it achieves the latter. Presumably it tries millions of solutions before finding one that works, in an extreme simulation of the terrible programmer who just keeps hacking at their hacks until something emerges that meets a specification, for some percentage of use cases.

    The "solution" produced will likely be incomprehensible to humans, and will certainly not be maintainable when a quick change or enhancement is needed. As for being the long term basis for a product, forget it.

    It's a bit like modern compilers. I use the GCC a lot, and if you learn how it works you'll discover that the assembly code it produces when you optimise is absolutely nothing like your source code anymore - it literally tears it to pieces and then does bizarre stuff that no programmer would ever write by hand. But you'd never want to use that object code as the basis for the next version of your application - you just recompile your changed source code.

    To use this to create applications, you'd have to rerun the whole changed specification through the AI each time it changed, and pray that it produced a full working application again each time.

    1. DrStrangeLug

      Re: The problem with this approach

      Thats where proper automated testing comes in. If you're AI generated code doesnt pass trigger it again.

      But thats just another version of the "keep hacking till it passes" method, upped to an entire application layer rather than one small part.

      1. Howard Sway Silver badge

        Re: The problem with this approach

        Testing is a whole other class of problem with "keep trying until it works" AI code. For a really simplified example, imagine you say to the AI, "write a program that multiplies a number by 10" You then write a test that passes in the number 15 and verifies that the answer returned is 150, and the test passes.

        The problem is that the AI generated the code "return 150;"

        1. dafe

          Re: The problem with this approach

          So the AI is already at the level of web developer. That's progress.

          1. Yet Another Anonymous coward Silver badge

            Re: The problem with this approach

            Except the web developer would return a gif image of the number "150".

            Or more likely return a link to an image on Google that will go away at some random point

        2. Norman Nescio Silver badge

          Re: The problem with this approach

          Testing is a whole other class of problem with "keep trying until it works" AI code. For a really simplified example, imagine you say to the AI, "write a program that multiplies a number by 10" You then write a test that passes in the number 15 and verifies that the answer returned is 150, and the test passes.

          The problem is that the AI generated the code "return 150;"

          Hmm. If one were being obtuse, or the AI had restricted scope, you could get many 'correct' answers:

          "write a program that multiplies a number by 10"

          could give you a syntax error, because 'a number' is a string and '10' is a number. Variable type mismatch.

          another possibility is you get

          "a numbera numbera numbera numbera numbera numbera numbera numbera numbera number"

          which is 10 repetitions of the string "a number"

          or indeed, I could interpret 10 as binary to get

          "anumbera number"

          or regard "a number" as base64 encoded and multiply the integer represented by 'a number' by the integer represented by the base64 encoding of '10' since you are using 'multiply' and as you can't multiply strings, you must interpret the characters as numbers, whereby base64 is one relatively sensible approach.

          Yet again, multiplying by binary 10 is bitshifting by one position, so you might bitshift the string by one position (you can do things like that in C).

          Now, of course, the natural language approach of taking "a number' as a numeric variable and multiplying by base10 10 is the 'obvious' approach to a human, which is what the AI is trying to emulate, but in the context of the problem, a human might just check they are on the right lines by checking that the approach is correct by asking a question of the problem setter. I'm looking forward to when AIs do this and ask questions to clarify their understanding, and can show their working.

          Being human means you carry around a lot of context in your head, which most AIs lack: often interpreted as AIs lacking 'common sense'. I remember a project long ago that aimed to build a common sense database for use by AIs: you typed in natural language statements/facts to add to it's 'knowledge' of the world. I don't know what happened to it.

          In limited and well defined contexts, AIs can be great, but they can end up doing really stupid things. Humans are reasonably good at identifying out-of-context really stupid stuff.

          Using AIs to drive cars or fly aeroplanes is an interesting case: you need to show they are safer than humans, which probably means making fewer mistakes than humans (the unachievable goal is zero mistakes/catastrophes), but they are capable of making really, really stupid mistakes: like driving into the side of trailers that are the same colour as the sky, or activating MCAS repeatedly. Humans do similar things - like following GPS navigator instructions directing them off quaysides, or holding an airliner in stall while it drops many thousands of feet. The issue is not that AIs make mistakes, as humans do too, but that AIs make mistakes that are non-human in character - we don't give AIs the free pass we give to people who have a bad day, or are distracted, stressed, or panicked.

          When it comes to programming, half the task is defining the problem (analysts do have a job to do), and AIs don't currently sit down next to people and discuss the requirements: we are at the stage of spoon-feeding them baby-food in the form of well defined problem statements. If I can talk to an AI and discuss whether it would be better to use a linear search, a hash-table, or a Bloom filter for a particular application, then I think we would be getting somewhere. Converting well-defined problem statements into code is not what programmers do.

  15. Locomotion69

    Hope for less rubbish specifications

    This is an interesting development.

    Although I wonder how it would react to the standard quality of natural language problem descriptions aka "specifications", as from experience I can tell there are few who are excellent but do not address real problems, and none that are actually good in describing the problem in the first place.


  16. Draco

    Bah! It's only "competitive"

    I hear IBM's Watson was killing it in medical diagnoses.

  17. TaabuTheCat

    What problem are you trying to solve?

    That one little question, oh how it can stop someone asking for something in their tracks. Whenever I'm asked for something new or a change to existing process, procedure, software, hardware, architecture, etc., I guarantee you are going to have to answer that question before one single thing gets done. The number of times I've been asked for things that will not solve the problem is remarkable; the number of times I can't even get an answer equally so. All the AI in the world won't fix that.

  18. Warm Braw

    We won't know how smart it really is...

    ... until Google gives it the inevitable instruction to cancel itself. Not that we're likely to have to wait long to find out.

  19. Omnipresent Bronze badge

    I see Googlers

    We know what you have and that it won't stop here. Now feed it to your quantum brain that can self replicate and watch what happens to all that data we gave them.

    You are no longer useful.

  20. Persona Silver badge

    Recursive solution

    It doesn't need to be perfect.: just good enough to write a slightly better version of itself.

    1. Loyal Commenter

      Re: Recursive solution

      The laws of thermodynamics mean that, thanks to entropy, it will only ever be able to reproduce a slightly worse version of itself.

      1. TRT Silver badge

        Re: Recursive solution

        However the trick is that you don't reproduce just a version of yourself, you reproduce millions of versions and the ones that aren't better get killed before reproducing themselves.

  21. dafe

    I'm thinking it is solving the wrong problem

    It can generate Python code from English, and that is nothing short of impressive. It is trained by test cases, which is how software development is ideally but rarely done.

    What it does not do is devise a domain specific language to describe the problem in. Nor does it look for the most elegant existing tools to solve a problem. Instead, it makes the same mistake most novice programmers make: It creates a monolithic block of code that does everything in one process in the one language it knows. Not reusable, not maintainable, not provable, and not necessarily correct.

    And that seems to be by design. AlphaCode is artificially hacking together one file by increments. It is incapable of solving the more general case, then applying the solution to the specific case. Any tool it writes can't be reused or repurposed.

    1. Yet Another Anonymous coward Silver badge

      Re: I'm thinking it is solving the wrong problem

      But if it does this enough we will have a data set to train the next AI to find things that are common to all of them and have it generate a general solution !

  22. CommonBloke
    Big Brother

    Not worried

    I'm not worried in the slightest, because a big portion of code isn't made after being given clear guidelines of what the user expects, it's just the boss shouting "I NEED THIS WORKING YESTERDAY!"

    And, since we all know that managers, bosses and clients can't explain what they want the end product to do or look like, the AI will just do whatever. It'll turn into a blame game and the higher ups will despair once they're faced with the consequences of their ignorance, going back to fleshy coders that they can blame

  23. Loyal Commenter

    I'm sure this can do well in "Coding Contests"

    It's one thing to perform well, where the requirements are simple, unambiguous, and well described.

    Come back when it can compete in a "real world" coding environment, where half of your clients don't understand what they actually want, don't understand how their requirements that they can articulate fit in with existing products and processes, and a good number of requirements are completely unwritten and come down to "common sense" and "domain knowledge".

  24. Robert Grant Silver badge

    > because "writing code is only one portion of the job, and previous instances of partially automating programming (e.g. compilers and IDEs) have only moved programmers to higher levels of abstraction and opened up the field to more people."

    I like the "should." Specifying this problem to the AI doesn't look any less meticulous than writing the code would've been.

    1. Loyal Commenter

      As any experienced developer knows, "should" means "doesn't have to". Put it on the backlog for when we have some free money.

  25. aldolo

    i'm not able to solve the problem

    but none of my customers is able to provide such a complex requirement.

  26. sebbie

    Now Google interview can be passed by AI

    Probably more consistently than humans. This would be groundbreaking if you believe typical developer spends most of their time coding and not communicating, planning and translating confusing and imprecise human language into machine-friendly concepts.

  27. Funongable


    Maybe train it to find bugs?

  28. herman Silver badge
    Paris Hilton


    Great, now get it to write the requirements.

  29. Jilara

    But how about debugging?

    And how does it do with Unix voodoo? That's a serious question. Once upon a time, I was a sysadmin who had to write a lot of my own utilities, and learned enough quirks (troff strings that read right-to-left?) that I had engineers coming to me to debug their code. While we've moved on from Unix, I'd love to see how an AI would handle something a little less straightforward.

    Now we have emulators that can spit out designs for mega-gate chips based on a spec in something like Verilog, but you still have to debug it. Yes, this AI can read, but can it debug?

  30. IceC0ld

    Did anyone else notice ..........

    Designing an appropriate algorithm, along the lines of the TRANSFORMER-based architectures :EEK:

    so THAT'S where they came from

    and, even after reading from coders saying this is a good thing to remove the worst and automate the low level stuff, I REALLY can't help but think that ALL the bad Sci-Fi end of the world / mechanical overlord movies HAD to have started somewhere, and may, just maybe this was that first step ....................

  31. JDX Gold badge

    How does it know the question?

    I was curious, does it get given the same English problem description as humans or is it encoded somehow?

    If the former, then anyone disparaging this achievement has totally missed the point.

  32. Anonymous Coward
    Anonymous Coward

    Moral of the story

    Write good descriptions of the requirements.

    Then you can have average programmers achieve good results.

    The other option is to have great programmers who understand unspoken requirements and can do their own discovery.

  33. Anonymous Coward
    Anonymous Coward

    Bots doing programming......results in more jobs for real people!

    Bah...."requirements" are so 20th Century.

    The AI needs to be fed "user stories" -- you know written on yellow Post-It notes. Then we have AI-sprints -- multiple AI bots given two weeks to produce something.

    And since the AI bot in the role of "Product Manager" knows absolutely nothing about what is going on, the result is passed to the AI DevSecOps bot -- and the new code suddenly turns up somewhere in the production environment.

    At this point, someone (a real person) with a title like "Senior AI Consultant" turns up -- this employee of Google/DeepMind charges £500 an hour to tell you that he needs ANOTHER AI BOT to be installed in order to figure out what exactly happened. And since the sprints deliver something every two weeks, it only takes about a year of consulting fees for the people (yup....more real people) paying the bills to call a halt.......and start a project to get actual people to do the programming!!

  34. Binraider Silver badge

    AlphaCode has here demonstrated the ability to come with a solution to a “component”. Is the next step to try and get it to recognise what components are needed? And optimise?

    If you can do that then you will be in a position to ask an AI to write a program.

    The optimist in me says can this be done with other languages (yes) and can one get away from the abstraction penalty for readily human readable language? (Yes).

    The pessimist in me says that speccing a problem is half the battle; code tends to “write itself” if one knows what wants to do already.

    Star Trek TNG computers might not be that far away. That that is possible is amazing. Who controls the capability is another matter…

  35. trevorde Silver badge

    DeepMind vs FizzBuzz

    One interview I went to asked me to implement FizzBuzz for a positive integer less than 100.


    * divide by 3

    * examine remainder

    * generate intermediate result

    * divide by 5

    * examine remainder

    * generate intermediate result

    * output complete result


    * problem is bounded, so precompute results

    * lookup result

    * output result

    I didn't get the job.

POST COMMENT House rules

Not a member of The Register? Create a new account here.

  • Enter your comment

  • Add an icon

Anonymous cowards cannot choose their icon

Other stories you might like