Tool touted as 'first AI software engineer' is bad at its job, testers claim

A service described as "the first AI software engineer" appears to be rather bad at its job, based on a recent evaluation. The auto-coder is called “Devin” and was introduced in March 2024. The bot’s creator, an outfit called Cognition AI, has made claims such as “Devin can build and deploy apps end to end," and "can …

  1. Lee D Silver badge

    Not AI.

"Tasks that seemed straightforward often took days rather than hours, with Devin getting stuck in technical dead-ends or producing overly complex, unusable solutions," the researchers explain in their report. "Even more concerning was Devin’s tendency to press forward with tasks that weren’t actually possible."

    As an example, they cited how Devin, when asked to deploy multiple applications to the infrastructure deployment platform Railway, failed to understand this wasn't supported and spent more than a day trying approaches that didn't work and hallucinating non-existent features.

    So you mean that as soon as it actually required intelligence and inference, it wasn't able to do things?

    1. Yet Another Anonymous coward Silver badge

      Re: Not AI.

      Yes but that just resulted in it being promoted to management.

      Devin is now CEO

  2. volsano

    The Bastard AI Developer from Hell has landed.

  3. Anonymous Coward
    Anonymous Coward

    Stop the AI Marketing spin

    "As an example, they cited how Devin, when asked to deploy multiple applications to the infrastructure deployment platform Railway, failed to understand this wasn't supported and spent more than a day trying approaches that didn't work and hallucinating non-existent features."

    They do not hallucinate, they output an error. They do not understand, there is no intelligence; they misinterpret the command (prompt). So the "AI" failed to interpret the user command correctly and continued running, which produced errors in the output. Don't let marketing win.

    1. Guy de Loimbard Silver badge

      Re: Stop the AI Marketing spin

      Couldn't agree more.

      Stop naming everything AI.

      It's artificial alright, but it's all lacking intelligence at the moment.

      Seriously, some of this shite is being pitched as if we've managed to create sentient, autonomous beings..... We really haven't!

      1. m4r35n357 Silver badge

        Re: Stop the AI Marketing spin

        at the moment?

    2. Doctor Syntax Silver badge

      Re: Stop the AI Marketing spin

      Look on "hallucination" as a useful way to describe a specific error mode. Or do you complain about "buffer overrun" on the basis that it's got nothing to do with railways? Or about BSOD on the basis that only living things die?

      1. news.bot.5543
        Terminator

        Re: Stop the AI Marketing spin

        "Or about BSOD on the basis that only living things die?"

        Then where do all the calculators go?

        1. Doctor Syntax Silver badge

          Re: Stop the AI Marketing spin

          Into the back of drawers of course. You've got to be careful you don't let them breed with all those feature phones you put there.

      2. The Man Who Fell To Earth Silver badge
        Boffin

        Re: Stop the AI Marketing spin

        I believe the in vouge term is now "confabulations".

        1. Doctor Syntax Silver badge

          Re: Stop the AI Marketing spin

          Well done, sir. You provided a specific term. I suppose the rest just ring up tech support and say "something went wrong".

        2. Anonymous Coward
          Anonymous Coward

          Re: Stop the AI Marketing spin

          *vogue tho

      3. glennsills@gmail.com

        Re: Stop the AI Marketing spin

        Well you see, a buffer overrun is when code reads or writes past the end of a buffer. It is actually well named. It describes literally what is happening. An AI "hallucination" is not like that.

    3. that one in the corner Silver badge

      Re: Stop the AI Marketing spin

      > They do not hallucinate, they output an error... Don't let marketing win.

      Huh?

      You do know that the whole "hallucinating AI" comes from the deriders of the (excessive) use of LLMs, *not* from the people trying to market them?

      > They do not understand, there is no intelligence, they misinterprete the command (prompt). So the "AI" failed to interpret the user command correctly and continued running which produced errors in the output.

      Ah, no. The "hallucinations" are not a failure to interpret the user command. They are a failure to stop and respond "Don't ask me, not a clue mate". Instead, they just keep trawling through their innards spitting out less and less accurate - and eventually less and less coherent - outputs, faithfully following the user request over the edge of the cliffs of sanity. Consider the stories of chat sessions where the user kept on prompting for more and more output and the results got more and more absurd: the LLM is most definitely still "following the prompt"[1], just way past the point we'd hope that it'd stop.

      Using the word "hallucinate" is quite reasonable, as it gives the general User a suggestion of the way that the problem is, well, a problem. If you have a philosophical objection to the term, then suggest something else that can be used instead, to indicate that particular type of behaviour: "Gone off the rails" might serve better?

      > they output an error

      That's not a good replacement. It is far too broad and loses any sense of the *way* that these things are going wrong.

      Plus, given how we usually refer to software behaviour, the problem is that it most distinctly is *not* outputting "ERROR: not a clue, mate"[2]. It is still doing what it was made to do, still wandering around its network, spitting out letters and words. The difference is that, now, *YOU*, the person reading those words, are starting to wonder about the usefulness of those words in that particular order.

      If you tell User A that IT Person B is prone to hallucinating, to seeing/hearing things that differ from reality without B being able to realise when they have slipped, that B is not suddenly being malicious but is still reporting the best they can, then - you actually have a pretty good analogy for the LLM's behaviour and the responses can be the same: A can take B's responses with a pinch of salt and do the work to verify what B told them; or A can just stop asking questions of B entirely; or A can just decide to take B at their word every time.

      Remember, we are using "hallucination" not to market these things, but to point out to Users that the machines go doolally in ways that other software doesn't: it is something new and weird that the User has to be aware of when they encounter these beasts.

      [1] Whatever and however it actually does in order to "follow the User's prompt", it is still doing that same fundamental process the whole time.

      [2] And the LLM software is more than likely entirely capable of generating error messages in the way we are all accustomed to - "ERROR: out of memory", "ERROR: cheese store empty" - just we, the poor benighted Users, are not likely to see those. Unless we get to peek inside the logs.

      1. MonkeyJuice Bronze badge

        Re: Stop the AI Marketing spin

        Exactly. Which employee do you fire first?

        1. The one who introduced errors into the task.

        2. The one tripping their nuts off.

  4. Persona Silver badge

    Tasks that seemed straightforward often took days rather than hours, with Devin getting stuck in technical dead-ends or producing overly complex, unusable solutions

    Sounds like a good fit for large Government IT projects. Perhaps Devin should be renamed Capita.

    1. Anonymous Coward
      Anonymous Coward

      Obviously

      If each one is 15% effective, I only need to enable 7 instances and I'm already at 105% of a wage drawing human.

      1. Paul Hovnanian Silver badge

        Re: Obviously

        Back to the accounting department with you!

  5. The Central Scrutinizer Silver badge

    "Cognition AI did not respond to a request for comment."

    With shitty results like that, no wonder.

    1. Bebu sa Ware
      Windows

      Travelling Circus

      «"Cognition AI did not respond to a request for comment." With shitty results like that, no wonder.»

      A quick look at the genealogy of Cognition AI from Wiki "Originally the company was focused on cryptocurrency before moving to AI as it became a trend in Silicon Valley following the release of ChatGPT" suggests the outfit is a travelling circus.

      Once the rubes who have been paying USD500/month realise they have been duped by a lightly warmed-over collation of other software and services that doesn't really work, the tent will come down and the clowns will move on to the next big thing.

    2. MOH

      Sure they did. They just assigned the task of responding to Devin.

  6. John Smith 19 Gold badge
    Unhappy

    "Devin’s tendency to press forward with tasks that weren’t actually possible."

    IOW's it's hallucinating.

    Just like every other LLM driven system.

    "The researchers said that Devin provided a polished user experience that was impressive when it worked."

    I think we know where the "Developers*" of this spent most of their cash. Maybe a "That won't work, here are additional instructions" button might be useful about now?

    People have been trying this since at least the "Programmer's Apprentice" project out of MIT in the 80s. It was subsequently moved to Mitsubishi Electric Research Labs, where it might have been quite useful for their internal development. It did not use LLMs but was built on hard reasoning and inference systems, based on something called the "Plan Calculus", to analyse programs and identify the clusters of code changes needed when you wanted to change the function of a module of code.

    *TBH it sounds like it was stitched together from a bunch of other stuff.

    1. Jonathan Richards 1 Silver badge

      Re: "Devin’s tendency to press forward with tasks that weren’t actually possible."

      > TBH it sounds like it was stitched together from a bunch of other stuff.

      Testing for bolt fastening head to neck -- Check!

      1. Neil Barnes Silver badge

        Re: "Devin’s tendency to press forward with tasks that weren’t actually possible."

        Left hand thread, oh dear.

        1. that one in the corner Silver badge

          Re: "Devin’s tendency to press forward with tasks that weren’t actually possible."

          What do you mean, "all the Github code posted from Europe is in metric"? Well, that does explain why none of their months are longer than 12 days.

        2. JWLong Silver badge

          Re: "Devin’s tendency to press forward with tasks that weren’t actually possible."

          And nonreversible

  7. Mike007 Silver badge

    The problem with current technology is that it looks impressive, but when it comes to the details...

    It is officially part of my job to experiment with AI and find ways it can be useful. This includes both trying to integrate it into our own custom software and also finding uses for our users.

    We are rolling out copilot to users (limited at the moment due to cost, but users are asking to be included in the "trial"), and some of them are finding it useful. Summarising documents and generating boilerplate are the sorts of tasks it does well.

    Integrating an LLM into our own software has however been far less successful, as it basically requires you to implement every piece of functionality you want manually, in a non-deterministic "language". You can waste a lot of time tweaking it to be able to answer one question 99% of the time, only to find it now gets confused about a different question that was working before.
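    That brittleness can at least be caught early by keeping every prompt you have ever tuned for in a regression suite. A minimal sketch of the idea, where `ask_llm` and the question set are hypothetical stand-ins for whatever your integration actually calls:

    ```python
    # Hypothetical regression suite for an LLM integration: a tweak that fixes
    # one question and quietly breaks another shows up here, not in production.

    def ask_llm(question: str) -> str:
        # Stand-in for the real model call; deterministic here so the sketch runs.
        canned = {
            "What is our refund window?": "30 days",
            "Which plan includes SSO?": "Enterprise",
        }
        return canned.get(question, "I don't know")

    # Each case pairs a prompt with a predicate on the answer, not an exact
    # string, since non-deterministic output rarely matches verbatim.
    REGRESSION_CASES = [
        ("What is our refund window?", lambda a: "30" in a),
        ("Which plan includes SSO?", lambda a: "enterprise" in a.lower()),
    ]

    def run_suite() -> list[str]:
        """Return the prompts whose answers no longer pass their checks."""
        return [q for q, ok in REGRESSION_CASES if not ok(ask_llm(q))]

    failures = run_suite()
    print(failures)  # an empty list means no regressions
    ```

    Run against the real model, each case would be sampled several times, since a prompt that passes 99% of the time can still fail on any given call.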

    As for code assistants... I left copilot installed in my IDE to see if it eventually became useful. Yesterday I spent 20 minutes trying to figure out where the syntax error was in a simple two-line function it output. It was very subtle. It sometimes gets close enough that editing the code is quicker than typing it myself, but a lot of the time I end up wasting more time trying to fix its output than it would have taken to write it correctly in the first place.

    1. sabroni Silver badge

      Summarising documents and generating boilerplate are the sorts of tasks it does well.

      No, they aren't.

      When ChatGPT summarises, it actually does nothing of the kind.

      AI worse than humans in every way at summarising information, government trial finds

      "Reviewers told the report’s authors that AI summaries often missed emphasis, nuance and context; included incorrect information or missed relevant information; and sometimes focused on auxiliary points or introduced irrelevant information."

      1. Mike007 Silver badge

        Re: Summarising documents and generating boilerplate are the sorts of tasks it does well.

        I guess it depends what you mean by a summary. Most of the time they just want a list of key information from a document, or for it to rewrite meeting notes into something with sentences, rather than expecting an insightful analysis.

        But the key issue seems to be what people selling the things claim they can do compared to the reality. This is why you need to manage how you roll out such tools to users to ensure they actually pay attention to the output and do their own analysis of how useful it is, instead of assuming this new tool is magic.

        1. John Smith 19 Gold badge
          Coat

          "what people selling the things claim they can do compared to the reality. "

          Gosh.

          You mean the salesperson lied their arse off misrepresented its abilities?

          I'm shocked. Shocked I tell you.

          1. HuBo Silver badge
            Trollface

            Re: "what people selling the things claim they can do compared to the reality. "

            Aw, c'mon, reinventing the concept of boilerplate from scratch, and boilerplate, surely means that this is a very advanced coding tool ... a bit like a spreadsheet, a database, or something, MaJoR Artificial Intelligence! Plus, it can do find-and-replace on its own sometimes it seems ... a game changer for even the most intellectually challenged coders!

            In no time flat, your pet dog, cat, and turtle, will be able to order their own yummy treats straight from Amazon and Uber Eats (not the tasteless cheap crap you feed them)!

        2. Anonymous Coward
          Anonymous Coward

          Re: Summarising documents and generating boilerplate are the sorts of tasks it does well.

          > Most of the time they just want a list of key information from a document, or for it to rewrite meeting notes in to something with sentences, rather than expecting an insightful analysis.

          Even then it can be deeply flawed.

          I recently got chased by an executive who wanted a timeline on when we'd be implementing "X".

          They'd run a summary of an incident channel, in which I'd said something like "It's a pity it isn't possible for us to do X, it'd help here". The LLM summary helpfully mentioned that we were going to implement X to help with future incidents.

          The only way to really know if the summary is accurate or not is to read the source material, at which point there's no point in using the LLM in the first place. Except, of course, cynical me says that there is still a point for some: it saves _them_ time, and if they're wrong, someone who did take the time can always correct them anyway.

  8. Pascal Monett Silver badge

    "rather than recognizing fundamental blockers"

    Obviously. It doesn't "recognize" anything. It's a statistical analysis machine put to very exacting use.

    To recognize a fundamental blocker, you need experience and intelligence. This pseudo-AI has neither.

    Cognition AI has set itself a tall task, and the market is not going to stand for a 15% success rate.

    1. Richard 12 Silver badge

      Re: "rather than recognizing fundamental blockers"

      Doesn't matter.

      If they get enough companies signed up on two year contracts at $500 pcm, they'll have enough cash to go back to their real job of fleecing crypto.

    2. tfewster

      Re: "rather than recognizing fundamental blockers"

      "The marvel is not that the bear dances well (15% of the time), but that the bear dances at all." -- Russian proverb (Updated)

  9. Andy Non Silver badge
    Coat

    Maybe Devin is

    working so badly because its codebase was written by... Devin.

    1. m4r35n357 Silver badge

      Re: Maybe Devin is

      We're Devin', (Devin', Devin', Devin') . . .

      Can't get that out of my head now :(

      1. An_Old_Dog Silver badge
        Joke

        Re: Maybe Devin is

        ... round 'em up, rawhide!

    2. Fr. Ted Crilly Silver badge

      Re: Maybe Devin is

      I'm Devin! And so's my wife...

    3. John Smith 19 Gold badge
      Unhappy

      "working so badly because its codebase was written by... Devin."

      If only

      You've put your finger on the dirty little secret of soooo many AI tools.

      They are not used to develop themselves

      If they were, something that managed <15% task completion wouldn't get out the door. Something along the lines of the following would ensue.

      Dev: I can't work with this PoS.

      PM: You wrote this PoS. Now make a list of all the things that are s**t about it and figure out which one will move it toward being a product you can use and we can release. And if that's still not good enough, go on to the next-worst "feature" and fix that.

      Now this might make me sound a bit unsympathetic, but I do wonder: if the devs can't get useful work out of their own software, how the f**k is anyone else going to?

  10. WanderingHaggis

    Emm sometimes perfect?

    "The researchers said that Devin provided a polished user experience that was impressive when it worked." or as my piping instructor would say "between mistakes you were perfect"

  11. xyz123 Silver badge

    The script it keeps churning out:

    <HTML>

    If Humanity == Still.Alive then {

    for X = 1 to 999999999

    Call Launch_All_the_Nukes(X)

    Next X

    1. Doctor Evil

      compiler error #09876

      Missing end bracket

      1. Anonymous Coward
        Anonymous Coward

        Re: compiler error #09876

        Username checks out.

    2. Anonymous Coward
      Anonymous Coward

      "and even perform personal assistant tasks like ordering your lunch on DoorDash"

      At the takeout place:

      "Who in the world orders ground glass on their sandwich?"

      "Eh, dunno, but they paid lots extra for it. Get it made and sent."

  12. Wang Cores
    Boffin

    The reason management thinks it's useful for replacing everyone else in their chain of command is that it's good at their job.

    A computer can summarize the output of those peoples' work, coordinate them, and channel different "vibes" to manage personalities. It is now time for actual "leaders" in management to step up and prove their worth against the machine.

    But any good racket doesn't stay a good racket for long by letting in a new player, so...

  13. Anonymous Coward
    Anonymous Coward

    Ideal dev for Microsoft

    Should save them money too.

    1. picturethis

      Re: Ideal dev for Microsoft

      Based on their security & reliability over the past several years, they already are using the equivalent for coding.

  14. Howard Sway Silver badge

    If you want to know why these things will never work well

    you need to take a broader look at the fundamentals of the software development process. Any development project that is more than a single trivial task is not simply a generative process, but an iterative one. As initial requirements never describe 100% of the required functionality, they inevitably change and get added to as development progresses, due to new issues that are discovered and arise during the coding process. It needs an intelligent human to understand and deal with these, not a bot that can only regurgitate finished code it's been trained on: the human interaction side of development, often within a unique complex organisation, is simply missing from these tools.

    Then there are issues such as wanting coherent systems, where similar things should be done in the same way and leverage reuse of working code so that you're not reinventing 30 completely different wheels to do the same thing, as AI code spewers have a tendency to do. You don't want that amount of technical debt when software needs to be maintained, possibly for years, as well as written.

    I've been writing software for over 40 years now, and love any new tools and features that have made my work easier and more productive, but my experience experimenting with LLM code generation has been equally disappointing and similar to that described in this article. Frankly, a web search will almost always provide better answers if you need a working solution to something you're not familiar with. But neither a web search nor an LLM is going to design and write a decent application for you.

    1. Version 1.0 Silver badge
      Boffin

      Re: If you want to know why these things will never work well

      Definitely. I remember when Visual BASIC appeared and we saw so many of the same types of issues as we see with AI. I watched programmers using Visual BASIC to create new BASIC programs; mostly they worked, until they were actually used. So we're still trying to create stuff that works ... the programming game hasn't effectively changed much.

    2. Lee D Silver badge

      Re: If you want to know why these things will never work well

      It's not intelligent.

      It's a spam filter trained on a large dataset.

      That's literally all it is. Same as a basic keyword search, but a million times more expensive.

      Despite all the hype, nonsense and "experts" chiming in, I've not witnessed a single piece of intelligence or actual learning out of any of them (and I have free access to several of them, including their latest experimental models).

      If it was intelligent - it wouldn't ever need retraining ever again, it'd be doing it itself, and getting better without any human intervention whatsoever. That's not how they work though.

      This is exactly like training a spam filter on the Internet and then querying its probabilistic tree for answers. Sure, it provides simple answers. But it's not intelligent. And nothing you couldn't find with just a keyword search.

      1. John Smith 19 Gold badge
        Unhappy

        "I've not witnessed a single piece of intelligence "

        Nor will you.

        The "learning" is done in the training phase, and that's expensive.

        After that it never changes, apart from some randomness injected to make it look a bit more convincing.

    3. Doctor Syntax Silver badge

      Re: If you want to know why these things will never work well

      "not simply a generative process, but an iterative one"

      And also an exercise in prioritisation.

  15. Munchausen's proxy
    Headmaster

    I guess they were absent that day

    "More concerning was our inability to predict which tasks would succeed."

    So "AI" hasn't solved the halting problem yet?

    1. Anonymous Coward
      Anonymous Coward

      Re: I guess they were absent that day

      No, but it certainly has slowed things down significantly...

  16. sarusa Silver badge
    Devil

    So it's like outsourcing?

    This sounds about like the results I've seen at every company I was at that decided to outsource to 'save' money.

    The only thing Devin is missing is constantly coming back to beg for more money and telling you that its code doesn't work because your company has a firewall and the firewall must be removed for Devin's code to work.

  17. Anonymous Coward
    Anonymous Coward

    Great for Pre-solved Problems

    Reading the article, I couldn't help but notice the three satisfactorily completed tasks appear to be already solved problems (I didn't dig into the actual tasks though, so I could be wrong here). If my previous statement is indeed true, then the researchers once again demonstrated that AI is just a really fast search engine.

    1. Richard 12 Silver badge

      Re: Great for Pre-solved Problems

      Or rather that it regurgitates large blocks of code it's previously ingested.

      With all the copyright issues that indicates.

  18. AdamWill

    well...

    "As an example, they cited how Devin, when asked to deploy multiple applications to the infrastructure deployment platform Railway, failed to understand this wasn't supported and spent more than a day trying approaches that didn't work and hallucinating non-existent features."

    I mean, to be fair, have you met some junior engineers?

    1. dinsdale54

      Re: well...

      Junior engineers may learn from their mistakes and become senior engineers.

      That doesn't really happen with LLMs.

  19. CorwinX Bronze badge

    Someone will probably rename one of these things...

    ... as Skynet. Then we're in trouble.

  20. A.Lizard

    Wrapping ai in a useless front end

    When you give coding instructions to an AI, you can compare the output of the AI vs expected/intended and fix interactively, google search error messages, etc. If the AI is heading down a rabbit hole based on bad initial instructions, or doing what you asked for rather than what you meant, it's on you to stop this and tell the AI to do something else.

    "This machine has no brain, use your own"

    With this front end, you start with: WTF did it turn my natural language instructions into?

  21. Winkypop Silver badge
    Terminator

    Hi everyone, meet the new guy

    Devin, this is everyone.

    Everyone, this is Devin.

    Please train him up in your jobs before you collect your redundancy.

    Thanks (said with a Gordon Brittas type voice)

  22. Dabooka

    15% you say?

    I thought that was about the industry standard nowadays anyway

  23. LucreLout
    FAIL

    Amusing

    The thing that will kill most people's software engineering careers isn't AI, it's the same thing as has always killed them. Ageism.

    I'd rather hoped that would change as Gen X, the original digital generation of professionals, hit our late 40s and 50s, but unfortunately it hasn't. People whose entire careers rest upon our work still compare us to their technophobic parents and jump to low-value conclusions.

  24. Paul Hovnanian Silver badge
    Headmaster

    Fifteen percent?

    Is Devin looking for a job? When can he start?

  25. Locomotion69 Bronze badge

    Is it a payment option ?

    If I pay twice the money, do I get 30% success then?

    1. vtcodger Silver badge

      Re: Is it a payment option ?

      Just like with real programmers, 2 Devins give you 7.5% productivity because they spend 75% of their time discussing the day's news, telling war stories, bitching about the management and the working conditions, and arguing about the best approach to the task.

      1. Steve K

        Re: Is it a payment option ?

        You missed out "Posting on El Reg", or was that in the 75%?

  26. John Smith 19 Gold badge
    Coat

    All that said has anyone looked at *actual* average staff productivity?

    The figure is roughly 20-30 Lines Of (executable, debugged) code /developer/day

    Allowing a 9-5 day with a 1 hr lunch, that's one line every 14-21 mins.
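    A quick back-of-the-envelope check on that figure (assuming the stated 7-hour working day):

    ```python
    # Sanity check on the often-quoted developer throughput figure:
    # 20-30 lines of executable, debugged code per developer per day.

    WORK_MINUTES = (17 - 9 - 1) * 60  # 9-5 day minus a 1-hour lunch = 7 hours

    low_loc, high_loc = 20, 30
    slowest = WORK_MINUTES / low_loc   # minutes per line at 20 LOC/day
    fastest = WORK_MINUTES / high_loc  # minutes per line at 30 LOC/day

    print(f"{fastest:.0f}-{slowest:.0f} minutes per line")  # 14-21 minutes per line
    ```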

    Soooo, if you could develop a system that could routinely generate a line of code in <10 mins*, even if it meant throwing a lot of cycles (roughly 1000 MIPS for 600 sec) at the task, you'd still come out ahead of the human average.

    I've been thinking about SW assistance for SW developers for some time, and (in principle) an automated assistant system could be quite useful. Normal NLU systems have to cope with ambiguity, but in a Functional Definition document ambiguity implies undefined or inconsistent behaviour of the system. Bear in mind that with modern RAM capacities the whole of a definition document could be held in main memory and chewed through quite briskly.

    IOW every time the software says "I don't understand this. In para 3.4.5 files can be opened for reading and writing, but in para 3.5.2 it says they can be opened for reading, writing and appending, yet at para 3.9.1 it states they can be opened for reading, writing and amending. These are all definitions of the same module. How should I proceed?" That's one situation where a dev won't tie themselves in knots trying to reconcile the issue.
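    A toy version of that cross-paragraph consistency check is easy to sketch, assuming each paragraph's claims have already been reduced to (paragraph, module, allowed-modes) tuples — all names here are hypothetical:

    ```python
    # Toy consistency checker for a functional-definition document: if several
    # paragraphs define the same module's file-open modes differently, flag the
    # conflict instead of leaving a dev to reconcile it by guesswork.

    from collections import defaultdict

    # (paragraph, module, modes) claims, mirroring the example above.
    claims = [
        ("3.4.5", "file_io", frozenset({"read", "write"})),
        ("3.5.2", "file_io", frozenset({"read", "write", "append"})),
        ("3.9.1", "file_io", frozenset({"read", "write", "amend"})),
    ]

    def find_conflicts(claims):
        """Group claims by module and report any module defined inconsistently."""
        by_module = defaultdict(list)
        for para, module, modes in claims:
            by_module[module].append((para, modes))
        return {
            module: defs
            for module, defs in by_module.items()
            if len({modes for _, modes in defs}) > 1  # more than one distinct definition
        }

    for module, defs in find_conflicts(claims).items():
        paras = ", ".join(p for p, _ in defs)
        print(f"Inconsistent definitions of {module} in paras {paras} - how should I proceed?")
    ```

    The hard part, of course, is the step this sketch assumes away: getting from natural-language paragraphs to structured claims in the first place.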

    BTW code generation is not that simple. The joker is variable names. Specifically meaningful name generation. Especially meaningful name generation with constrained name lengths. I've read CASE tool generated code and it can be very nasty.

    The very behaviour that makes them rubbish at (for example) making sense of James Joyce is exactly what you want at the start of a project to flush all the nasties out.

    I'd love to come up with a backronym so I could call it "PFY"

    *Executable, debugged code, not just data to prep an array, a common COBOL idiom.

    1. Sarev

      Re: All that said has anyone looked at *actual* average staff productivity?

      > All that said has anyone looked at *actual* average staff productivity?

      > The figure is roughly 20-30 Lines Of (executable, debugged) code /developer/day

      >

      > Allowing a 9-5day with a 1 hr lunch that's 1 line every 14-16mins

      That'll be due to the number of pointless meetings and context switches in the average workplace.

    2. Anonymous Coward
      Anonymous Coward

      Re: All that said has anyone looked at *actual* average staff productivity?

      So if 20-30 lines of executable, debugged code per developer per day is a decent figure, what's thousands of lines of possibly-executable, buggy code per day worth? (In my mind, nothing. If it's buggy, it's not worth having.)

  27. Anonymous Coward
    Anonymous Coward

    “ That'll be due to the number of pointless meetings and context switches in the average workplace.”

    Exactly this. We try to have a minimum three days per week with no meetings after the morning standup. It doesn’t always happen, customers being a thing, but you can write and test a lot more code than that when not interrupted.

  28. Jason Hindle Silver badge

    Devin is just another fake it ‘til you make it outfit

    Alongside OpenAI. There’s more interesting and useful stuff floating around the world of AI (none of which will put developers out of the job*). Just ignore the shouty ones who are after unlimited money and resources.

    * Well, except at Meta, where Zuck has been drinking the Kool-Aid. I look forward to that.

    1. John Smith 19 Gold badge
      Coat

      "none of which will put developers out of the job*"

      It's been a while since I last looked at this in any depth but "The Kestrel Institute" came up as quite interesting.

      Seemed quite secretive and well known to the DoD.
