
'Ralph Wiggum' loop prompts Claude to vibe-clone commercial software for $10 an hour

Open source developer Geoff Huntley wrote a script that sometimes makes him nauseous. That's because it uses agentic AI and coding assistants to create high-quality software at such tiny cost, he worries it will upend his profession. Here's the script: while :; do cat PROMPT.md | claude-code ; done Huntley describes the …
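
A guarded sketch of that loop (the per-run timeout, the iteration cap, and the parameterised agent command are all assumptions added for illustration; the original is just the bare infinite loop):

```shell
#!/bin/sh
# Sketch of the "Ralph Wiggum" loop with two guardrails added:
# an iteration cap and a per-run watchdog timeout. The agent binary
# is parameterised ($1) because the real invocation will vary.
ralph_loop() {
    agent_cmd="$1"              # stand-in for e.g. claude-code
    max_iter="${2:-100}"        # assumption: cap runaway sessions
    i=0
    while [ "$i" -lt "$max_iter" ]; do
        # watchdog: kill any single run that hangs for over an hour
        timeout 3600 sh -c "cat PROMPT.md | $agent_cmd" || return 1
        i=$((i + 1))
    done
}
```

Run as, say, `ralph_loop claude 20` to feed PROMPT.md to the agent at most twenty times.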

  1. colinhoad

    Sounds awful

    I can't begin to imagine how bad the code is that comes out of this kind of iterative slop machine.

    1. FIA Silver badge

      Re: Sounds awful

      Yeah, we used to work with Perot systems too. :(

    2. ParlezVousFranglais Silver badge

      Re: Sounds awful

      Playing Devil's Advocate here - surely that's the same argument as tutting at the younger generation for not speaking "the Queen's English"? Language changes naturally over time, and its main purpose is to convey information between two or more entities. If kids can communicate with each other perfectly happily using terms that the older generation are completely unfamiliar with, then their language has still served its purpose - many older people might also call this complete lack of adherence to grammar and vocabulary "slop".

      So it is with this - conventional coding has grown, changed, adopted many rules & conventions, changed those rules & conventions, realised its mistakes & improved (in some cases!) - it's a continual evolution, and this is really no different.

      If I need light in the room, I flick a switch and on comes a light. Now, I happen to know an awful lot about all the stages of how that light gets created, but none of that knowledge is necessary to flip that switch and put on the lights. If creating an app really does progress to the equivalent of a completely non-technical user flipping a switch and getting an app that does what they want, then that may make a lot of people very worried, but actually isn't that kind of outcome the whole point of IT?

      Ultimately it's just a tool to achieve an outcome - there will be bumps and bruises along the way, in the same way that light bulbs sometimes blow, and electrical appliances sometimes catch fire and do an awful lot of damage - but it's all just another step along the evolutionary path.

      Maybe some are afraid of the change, but life goes on...

      1. breakfast Silver badge

        Re: Sounds awful

        I don't think you're entirely wrong, and in a way that makes me sad. The days of software as craft are probably waning, as they were always going to. But it seems a shame if the technology to do it is this.

        In a way it is a step on the same road that took us from Assembler to C and from C to Visual Basic and from Visual Basic to Javascript - at each step you can get by knowing less about the layer below. I wonder if there is anyone out there programming web UIs today who understands everything that happens at every level when somebody uses them, from the browser script to the client OS to the client hardware to the server to the server OS to the server hardware and back. I certainly don't and I like understanding things! Once we get to the OS and below I only have the roughest idea.

        1. ParlezVousFranglais Silver badge

          Re: Sounds awful

          I agree - it IS sad - I was commenting only the other day about it being great that a bunch of students rebuilt a replica ENIAC out of not much more than cereal boxes - that kind of thing is needed if future developers are going to truly understand their subjects from the bottom up

          Unfortunately, while it is sad, it's the same journey as taken by many other crafts - try finding a blacksmith at his forge, or a master carpenter who hand makes absolutely everything - they still exist, but they aren't needed by society any more as anything more than a curio

          There's no right or wrong, it's just another step on the journey.

          1. nathan.rhd
            Pint

            Re: Sounds awful

            I agree it's the same as other crafts, but I think "curio" is dismissive. We should all be reading more William Morris.

        2. K

          Re: Sounds awful

          Unfortunately this happened a long time ago, when development turned into a factory line, quantity trumped quality, and we all became beta testers.

          It lost the art and creation factor. I always saw it as the cleanest form of creation... there was art in the structure and patterns, and in many ways this is far more important than the compiled product, because that structure determines how easily the product can be further enhanced.

          1. Mike VandeVelde Bronze badge

            Re: Sounds awful

            Reminds me of a cartoon I saw a while back about "AI can create art now!" and an artiste sobbing, and then "AI can create code now!" and a tired programmer saying "Finally!"

            1. NoneSuch Silver badge
              Holmes

              Re: Sounds awful

              The first AI LLMs we are using today equate to the Wright Brothers' first viable flights. No one envisioned 400+ passenger airliners at the time. AI will improve through use, and iterative self-checks will improve output over time.

              Tutting over obvious errors in today's AI code is short-sighted and reactive. People who refuse to evolve will be proven wrong over time.

              1. nobody who matters Silver badge

                Re: Sounds awful

                LLMs are not Artificial Intelligence, and never will be.

                Genuine AI will very probably come about eventually, but is not likely to be an LLM.

                We really do NEED to stop applying the term AI to things which are quite clearly no such thing. Like Tesla's 'Autopilot' and 'Full Self Driving', it is a term that encourages the bulk of the population to expect (and assume) that these things can be relied upon to do things which in reality are things which they are incapable of doing.

                1. Jeff Smith

                  Re: Sounds awful

                  Not wanting to be that guy, but in an academic sense all of those things fit under the umbrella term of Artificial Intelligence. It includes stuff like

                  - Machine Learning

                  - NLP

                  - Robotics

                  - Computer Vision

                  - Gen AI

                  There’s also the big theoretical sci fi vision of conscious AI as a living entity, but everyone agreed to refer to that as AGI a while back now and I don’t think too many people are confused as to the distinction anymore tbh

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: Sounds awful

                    fuzzy logic?

                2. Prst. V.Jeltz Silver badge

                  Re: Sounds awful

                  We really do NEED to stop applying the term AI to things which are quite clearly no such thing.

                  That ship has sailed; people (salesdroids mainly) were calling any kind of software "AI" for years before the LLM arrived to do... what it does...

                  So the chances of putting the genie back in the bottle now are nil.

                  You got more chance of restoring "Hacker" to its original meaning

                  1. ITMA Silver badge
                    Devil

                    Re: Sounds awful

                    "You got more chance of restoring "Hacker" to its original meaning"

                    Or the term "hologram", which is applied to anything that looks 3D - even though that does NOT make them holograms. Not by a long way.

                3. jmch

                  Re: Sounds awful

                  "LLMs are not Artificial Intelligence, and never will be"

                  They certainly aren't AGI. What they ARE very good at is finding specific solutions in a constrained space / for a well-defined problem. That's why LLMs are fantastic at maths and coding, and suck at simple research that a 10-year-old could do. In any case, a good software developer was always first and foremost someone who understood the problem and could come up with a solution - the code always came later. Any idiot can use Claude code to come up with a fantastically coded app which, likely as not, is a great solution for a completely different problem than the one that needed to be solved.

                  1. nobody who matters Silver badge

                    Re: Sounds awful

                    "LLMs are fantastic at maths and coding"

                    I think there are quite a few commentards on El Reg who might strongly disagree with your use of the word fantastic there. It may be a useful starting point, but that is all.

                    In the end, any general LLM is only going to come up with an answer that is already in its repository of data. Anything different that it concocts will come about from the way it constructs the plausible word/number/code salad that it serves up, and is as likely to be garbage as it is to be useful. If used by "any idiot" to write code, that idiot is unlikely to know enough about coding to be able to see whether the code it spews out is good or bad, and equally unlikely to know how to rework it to make it usable. If they do know enough to spot the shortcomings, they could probably have written it correctly in the first place, without needing any input from the LLM.

              2. Anonymous Coward
                Anonymous Coward

                Re: Sounds awful

                > equate to the Wright Brothers first viable flights

                Being learned in such things, I wish to point out that the Wright brothers contributed next to nothing to the history of heavier than air human flight.

                All the necessary knowledge was there since about the 1860s and it was just waiting for contributing technologies to be refined, most notable amongst them the necessity of increasing the specific power output of existing engines. The Wright workshop made no contribution to any of that. They just happened to be in the right place at the right time to become part of an emerging nation's lore.

        3. imanidiot Silver badge

          Re: Sounds awful

          But there is a massive difference between not understanding every layer in the chain and not understanding even your own layer at all. You as a developer/coder at least have an understanding of the interfaces you're dealing with and what their requirements are. You understand what functions your bit of code is using or implementing. With today's "vibe coding" slop, these "coders" don't understand a single thing about what their program is doing. They don't know what it talks to, they don't know what it does, they don't know what it doesn't do. They just know that the end result they're expecting is happening. There's no care for efficiency, no care for standards, no care for potential bugs or backdoors. Oh, shiny window appear, good times.

      2. colinhoad

        Re: Sounds awful

        When you flick a light switch, a series of deterministic steps take place to ensure the light comes on. That process has been refined over many years, is tried, tested and works. Sure, some electrical systems may have quirks, but usually these are shortcuts or bodges where the person implementing them knows it isn't *ideal* but has to get the job done. My problem with applying an analogy like this to AI-based code is that it is not deterministic - it will come up with any number of different approaches each time you ask it, and you'll never get to the "best" solution unless you are already proficient enough in the output it produces to be able to tell what that looks like.

        As we gradually moved from machine code to assembly to C to other languages and frameworks, I accept the increasing layers of abstraction have meant none of us now are proficient at every layer of the stack. But the crucial point is that thought and experience and understanding went into creating those abstractions. Just blindly asking AI to "make code do thing" and accepting whatever it gives you means we're going to raise a generation of "engineers" who understand nothing at all. At the sight of the first bug, what are they going to do? Ask the AI to fix it? Good luck with that, because the likelihood is it will try and refactor the entire repo to fix each individual bug, no doubt generating more bugs in the process.

        The maintainability, extensibility and overall long-term stability of anything built this way seems highly doubtful. It would be, to use your original analogy, like flicking a light switch and sometimes getting a light come on and other times starting a fire in the upstairs bathroom; and so you flick the switch a few more times to see whether that will put out the fire, but that then causes the garage door to open and the fridge to turn off. But hey, the light came on.

        1. ParlezVousFranglais Silver badge

          Re: Sounds awful

          Now - yes, 100% - I agree we aren't there.

          But models like this WILL evolve and improve - remember for starters of course that LLMs being non-deterministic isn't strictly true - they have been seeded that way to give the illusion of creativity, but there's no particular reason that a model couldn't be trained on only "good" programming practices - in fact I'd fall off my chair if there weren't a hundred different organisations already trying exactly that.

          Given that the vast majority of code that's still out there at the moment has been crafted to the best of our collective ability, and given that daily news stories prove that much of it is as full of holes as the proverbial Swiss cheese, a tool that could iteratively figure out the best way to achieve a result, whilst avoiding all known pitfalls, and adding to "collective" knowledge as it goes, sounds at least as good a way forward to me as the path we would be on without it.

          And for sure - "ask AI to fix it", you say with scorn; I say, why not? Take this code, figure out where it's broken, and come up with a solution - rinse, repeat, until it's not broken anymore. Then take it a stage further: "do this without using these dependencies", "make this run more efficiently" etc etc

          For a real-world example, just look back at El Reg only yesterday

          So why not give it a try and see where it leads?

          1. Anonymous Coward
            Anonymous Coward

            Sounds awful because it is

            You can't make it, a [COMPLEX] nondeterministic system deterministic through incremental re-writes. What you say is like mixing a pot of coffee and tea and trying to separate it again by pouring it through a paper filter repeatedly. Wrong way to use a tool, and sorry but a failure in logic at the outset. While tools like Claude aren't going away and will gain both utility and function, their nature will make them MORE complex not less. And no addition to their function will make them deterministic.

            Ralph may eventually ralph up the buggy 0.5 version of code for a deterministic tool before the heat death of the universe, but only if the script has a watchdog timer to kill it off before your startup drains an unacceptable amount of its bank account.

            We have a new class of tool. We learn how to use it, just as CNC routers turned craft woodworking into another programming task, or we check out and try our hand at another trade. Or those of us that can retire dust off grandpa's old workshop and become artisanal craftsmen. Engineering is what engineers do, and as the ground shifts we need to rebuild for the future. We need to build better tools, and we may need to build a new and parallel set of RELIABLE tools in the process. We definitely need to adapt how we train people in the discipline. People in school on the CS track are going to be flushing time and money down the drain on a degree that isn't worth the paper it's printed on unless we use a prod to herd the academic side into teaching the new tools along with software engineering principles, and not asking Claude to write a slop curriculum on slop coding to separate community college students from more of the time and money they are working three jobs to pay for.

            We need to pay down three decades of technical debt in how we manage our industry, our trade, and our livelihoods. Or we lose it all and end up on the wrong side of the Starbucks counter. I for one wince whenever I hear "Venti", or someone order a medium coffee by calling it a tall or a grande.

            1. ParlezVousFranglais Silver badge

              You can't make.. a ...nondeterministic system deterministic

              I'm not suggesting that you can - I'm suggesting that with the right training, either Claude (with hooks) or a Claude-like equivalent can start to generate code that follows "good" coding practices, gives a reliable (and, where necessary, repeatable) output, and can iterate through huge numbers of solutions looking for one which gives the required output far faster than a developer could - and that eventually such iterations could ensure that said code is highly secure, highly efficient and fully commented.

              As for the link above for the Reg article yesterday, there is also nothing to stop that solution being checked and ranked separately by a completely different automated agent to ensure an independent test of the outcome.

              It's early days yet, but these potential outcomes shouldn't be treated with the scorn and contempt that they are currently receiving - is this kind of evolution so different from the industrial revolution, or Henry Ford creating the moving assembly line? Probably not, but I guess only time will tell...

              1. BrownishMonstr

                Re: You can't make.. a ...nondeterministic system deterministic

                Does it really need to be trained to write good code?

                In a multi-agentic system, one agent could come up with a plan, a second agent writes the code, and a third agent checks it follows the correct guidelines. No additional training other than the three agents being fed different instructions: Planner, Coder, and Reviewer.
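
A minimal sketch of that Planner/Coder/Reviewer loop (everything here is hypothetical: `call_llm` stands in for whatever model API is actually used, and the role prompts are illustrative only):

```python
def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real model call (e.g. an HTTP request to an API)."""
    raise NotImplementedError

def build_feature(spec: str, max_rounds: int = 5, llm=call_llm) -> str:
    # Three roles, one underlying model, different instructions each time.
    plan = llm("You are a Planner. Produce a step-by-step plan.", spec)
    code = llm("You are a Coder. Implement the plan as code.", plan)
    for _ in range(max_rounds):
        review = llm("You are a Reviewer. Reply APPROVED or list defects.", code)
        if review.strip().startswith("APPROVED"):
            return code
        code = llm("You are a Coder. Fix these defects.", review + "\n\n" + code)
    raise RuntimeError("Reviewer never approved the code")
```

The `llm` parameter makes the loop testable with a stub - and it is also where the obvious objection bites: the loop is only as good as the model playing all three parts.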

                1. MonkeyJuice Silver badge

                  Re: You can't make.. a ...nondeterministic system deterministic

                  This is just trying to hide the problem inside a homunculus: at some point, one of them has to not be shit at programming. Coding agents are almost usable at very, very small scale on well-known solutions to problems, but as soon as your code hits a critical mass (really probably no more than 1000-2000 highly information-packed lines of code, i.e. not J2EE), it can no longer see the wood for the trees. The initial 'gains' you see rapidly become losses, as you are forced to increasingly micromanage the system, which will start reinventing the wheel within the same codebase, start 'optimizing', or otherwise hallucinate the need to take entire modules around the back of the woodshed.

                  It is an averaging machine. When you have an empty canvas, on average, things start out looking good. As soon as you hit the point of information saturation, it actively tries to turn your code into the billion lines of mediocrity it has been trained on. In order to maintain the 'potential efficiency'- you require a superhuman with laser like focus, who can sit through the monotony of hitting the try-again button, and ensure their commits are clean, and don't e.g. strip all the comments, drool random shit all over the file, or reformat everything making it impossible to sensibly use git blame anymore. The problem is, like doomscrolling instagram, this cycle makes you actively stupid and you are suddenly no longer qualified for the role.

                  I don't have anything against coding agents, _in the lab_, or even used with care for very specific cases- but this current iteration is overhyped and dangerously being positioned as 'how it's done now' to people who don't know any better. Nothing good will come of this. maybe in 5 years, but only after enough damage has been done that legislation is in place so the ai companies are financially motivated to give a toss. This is assuming we still have access to fresh water, and that nobody asks too many awkward questions in the revenue meetings.

                2. d4f

                  Re: You can't make.. a ...nondeterministic system deterministic

                  That's basically what kilocode already does.

                  People make it sound like AI-on-AI feedback loops are innovative, but that's basically one of the fundamental training methods for neural networks.

              2. Mostly Irrelevant

                Re: You can't make.. a ...nondeterministic system deterministic

                LLMs are deterministic; they just fake nondeterminism by feeding a pseudorandom seed into the algorithm. If you run an LLM locally you can control/reuse the seed if you like.

            2. OldSod

              Re: Sounds awful because it is

              The statement "You can't make it, a [COMPLEX] nondeterministic system deterministic through incremental re-writes." sounds similar to the claim of Intelligent Design proponents that complex biological structures can't arise through evolution. As pointed out in "The Blind Watchmaker" (Dawkins), highly complex biological structures *can* arise through a mindless process, given (my paraphrase here; I'm not using all of Dawkins's language because I don't want to bother to look it up to make sure I get it right) a fitness function (survival), a selector function (reproduction at higher rates), and enough time. Determinism needs to be part of the specification, perhaps. And one need not expect two successful attempts to meet the specification to be exactly the same code, merely to correctly perform the same function.

              A surprisingly simple process, seemingly wasteful, yet very powerful given enough repetition over enough time.

              Two points, one "micro" and one "macro":

              1) Micro point: The Latin word cognition has the root word "cog"; Romans thought about thinking as "revolving something around (in your mind)". What is one aspect of human thought other than turning something over and over until a light bulb pops on (Eureka, I've got it) when your thinking eventually produces an idea that seems to fit your constraints?

              2) Macro point: The process of evolution creates highly complex systems, given enough time. Forcing an LLM to evaluate its own output repeatedly until enough adaption occurs to satisfy some measure of "fitness" seems a lot like evolution? Instead of using survival and reproduction as the fitness and selector functions, the loop-driven LLM development process determines fitness and selection by comparison to a specification. Evolution works.

              This was done as an inspired hack. How much better can it become if it is done deliberately? If this technique proves to be useful and can be improved, it suggests a way forward for programmers that has already been mentioned by others: Programmers become the people who craft the specifications that are used to measure the success of the iterations done by the LLM. As pointed out, the "human in the loop" gets removed from the inner loop, replaced by the LLM evaluating itself against the specification. There is still a "human in the loop", and the outer loop may still be a loop, but it is a loop of refining the specification, not the code.
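A toy fitness-and-selection sketch of the evolutionary loop described above (the candidates here are plain strings rather than code, and the "spec" is just a target string; the point is the loop shape, not the domain):

```python
import random

def evolve(spec: str, pop_size: int = 50, generations: int = 2000, seed: int = 0) -> str:
    """Hill-climb a population of random strings toward a target 'specification'."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    def fitness(s: str) -> int:            # how well a candidate meets the spec
        return sum(a == b for a, b in zip(s, spec))
    pop = ["".join(rng.choice(alphabet) for _ in spec) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection: best first
        if pop[0] == spec:
            return pop[0]
        parents = pop[: pop_size // 2]             # survivors reproduce
        children = []
        for p in parents:                          # mutation: one random character
            s = list(p)
            s[rng.randrange(len(s))] = rng.choice(alphabet)
            children.append("".join(s))
        pop = parents + children
    return max(pop, key=fitness)
```

Substitute "LLM rewrite" for mutation and "meets the specification" for fitness and you have the loop-driven development process in miniature.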

              1. RGH2121

                Re: Sounds awful because it is

                Isn't this essentially how AlphaGo learnt to beat the best Go master in the world? It played millions of games against itself, learning all the while.

                1. disk iops

                  Re: Sounds awful because it is

                  Go is highly deterministic. And it wasn't a brainless 'LLM' doing the learning. Algorithmic (self) optimization is of course a thing and has been for 50 years. Chess engines are likewise NOT LLMs.

              2. nobody who matters Silver badge

                Re: Sounds awful because it is

                "Forcing an LLM to evaluate its own output repeatedly until enough adaption occurs to satisfy some measure of "fitness" seems a lot like evolution?"

                I am afraid that sounds to me rather more like awarding attributes to the LLM that it does not possess - the ability to think, to evaluate, to learn, to understand - things that would constitute actual intelligence, but which in reality are qualities which LLMs do not have (and are unlikely to have in the foreseeable future).

            3. J.G.Harston Silver badge

              Re: Sounds awful because it is

              CNC programming is a good point, because even with the most complex machines, the person working out what to tell the CNC machine still needs to know about and understand the concept of rotary materials shaping, and can often "sketch out" test designs on a hand lathe to get an idea of what is needed and what physics need to be taken into account. You can't just say "lathe me a thrimble" unless you've already told the CNC machine what a thrimble is and taken account of avoiding attempting to drill through the drill mount.

            4. Lipdorn

              Re: Sounds awful because it is

              LLMs are deterministic in nature. The end products most of us use are not, simply because they use a different seed for the random number generator each time. Were the same seed used, the LLM would produce the same output.
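
A toy illustration of that point (the "model" here is just a fixed next-token distribution; the sampling step is where the apparent nondeterminism lives, and fixing the seed makes the output reproducible):

```python
import random

def sample_tokens(n: int, seed: int) -> list:
    """Sample n 'tokens' from a fixed distribution; the seed fully determines the output."""
    rng = random.Random(seed)                  # the only source of "randomness"
    vocab = ["the", "cat", "sat", "on", "mat"]
    weights = [5, 3, 2, 2, 1]                  # stand-in for model probabilities
    return rng.choices(vocab, weights=weights, k=n)
```

Call it twice with the same seed and you get token-for-token identical output - the reuse-the-seed behaviour described above for locally run LLMs.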

          2. An_Old_Dog Silver badge

            The Best of Our Collective Ability

            @ParelezVousFranglais:

            Given that the vast majority of code that's still out there at the moment has been crafted to the best of our collective ability...

            The vast majority of the code out there is not "crafted to the best of our collective ability".

            The beancounter/managerial I-don't-care-just-make-it-work-we-gotta-ship-it-now drum has been beating for decades.

          3. Anonymous Coward
            Anonymous Coward

            Re: Sounds awful

            > there's no particular reason that a model couldn't be trained on only "good" programming practices

            If that were even remotely possible, don't you think people would have done it already? Who'd knowingly and wantonly train their models on "bad" programming practices such that those practices are regurgitated as positive examples to their users?

            The corpus of information used to train models is also being flooded with bad output from current models. Can that be filtered out later to enable higher-quality models to be trained? Probably. Will it be expensive? Also probably.

        2. PRR Silver badge
          Devil

          Re: Sounds awful

          > raise a generation of "engineers" who understand nothing

          I think that bus has sailed.

        3. NetMage

          Re: Sounds awful

          "None of us" is a bit strong, but those of us who came up with those layers may be almost all that are left, along with those even fewer who have the curiosity to follow the rabbit hole all the way down.

        4. that one in the corner Silver badge

          Re: Sounds awful

          > As we gradually moved from machine code to assembly to C to other languages and frameworks, I accept the increasing layers of abstraction have meant none of us now are proficient at every layer of the stack.

          Major difference between accepting that we aren't proficient at every level in the stack[1] versus using LLMs for the coding task: the languages & frameworks are explainable and can be meaningfully investigated.

          If you write something in a script that drives a framework that is coded in a domain-specific language that is compiled[2] to C that is compiled to assembler that is assembled into a binary then, at each stage, you can[3] investigate the intermediate results and glean how changes in your input trickle down and affect the final result. There are even tools created to *encourage* you to learn how it all works, using the speed of modern PCs to show you "in real time" the assembler from your C++[4]... Failing that, there are other people who know the bits you don't and can be - persuaded - to describe them to you (bits of coloured paper may be involved).

          But the LLM process is totally opaque, by its very nature. Even if you manage to turn down the "temperature" on the model so that it starts spitting out identical code for multiple runs of one prompt, you have no idea what it is going to do as you change the prompt to try and improve the final output: will it just change one line or go haring off down a totally new avenue? And if you try the later prompt in a brand new session, without the chance that the LLM was *really* being fed the "conversation so far", how does that compare? I.e. will somebody else, given your "final prompt, the one that cracked it", be able to replicate what happened? And next week, when the LLM has been tweaked again and you can no longer actually run the same generator you did last week?[5]

          [1] although the "none" is going a bit far - some people do remember what they studied about compilers etc, although thinking about doping requirements to control electron transport in MOS junctions can be considered going off-topic when you are choosing which blur to use in your image processing script....

          [2] "transpiled" if you really, really feel the need

          [3] assuming nobody is deliberately trying to stop you doing so, that is

          [4] other languages are available etc; e.g. you can probably find someone with a UI set up to do the same thing with Lua input and the VM's bytecode

          [5] ok, this last is an artifact of the way LLMs are being presented and controlled rather than because they are actually LLMs: you can fall foul of the same effect if you use a third-party Web-hosted compiler (or higher-level framework); except that the LLMs are generally so large that you have little choice, certainly not compared to the way that every PC in use can easily host a compiler or two

      3. Bebu sa Ware Silver badge
        Coat

        Re: Sounds awful

        "purpose is to convey information between two or more entities"

        Obviously separated by space. When nearby, any perturbation of historical syntax or grammatical norms is unconsciously taken into account by those entities. When the parties are separated by space and cultures, misunderstanding will inevitably arise from deviations from those norms.

        Once separation in time is added, any ephemeral changes in language might completely obscure the original meaning from the writer's or speaker's descendants.

        As it is, when you are within ear-clipping distance of some linguacidal† youth you often have to refrain both from "WTF is that supposed to mean" and a swift biff to their lugholes.

        † overjoyed that is not my coinage.

        1. ParlezVousFranglais Silver badge
          Coat

          ephemeral changes in language might completely obscure the original meaning

          Ah - so you've tried reading Samuel Pepys' diaries in his original shorthand too...

      4. Annihilator Silver badge

        Re: Sounds awful

        I think the problem with that analogy is that the light switch will continue to work, but will form part of ever more complex building jobs that don't even realise they have the light switch built in, use up terabytes of memory to flick the switch, and leave you horrified in 4 years' time that your light switch inexplicably doesn't work anymore when part of AWS-US-East-1 has an outage.

        1. ParlezVousFranglais Silver badge

          Re: Sounds awful

          How is that any different from what we have now?

          1. Simon Harris Silver badge

            Re: Sounds awful

            Quite horrified today when I noticed in Task Manager that Notepad, which I currently have open with about 8K in total of text files, has apparently grabbed 110 MBytes of RAM. But why? What does a supposedly simple text editor do with it all?

            1. Annihilator Silver badge

              Re: Sounds awful

              I am genuinely surprised to hear it's that low. Just ran an experiment and I fully expected that a new "tab" in notepad (because of course it needs tabs now) would double the memory requirements - it doesn't. Again, pleasantly surprised.

              1. Strahd Ivarius Silver badge
                Devil

                Re: Sounds awful

                This is a bug that will be solved in the next release.

              2. Anonymous Coward
                Anonymous Coward

                Re: Sounds awful

                > a new "tab" in notepad (because of course it needs tabs now)

                Aha! Is that why some arrogant German retard decided to add tabs to KWrite?

                If KWrite ever needed tabs it would be called Kate, for Stallman's sake!

            2. AVR Silver badge

              Re: Sounds awful

              You've heard about M$ stuffing AI into Notepad, right? It's not as simple as it was a year ago.

              1. Simon Harris Silver badge

                Re: Sounds awful

                For some reason, which I can't remember now, the last tab I have open has "this is the one thing we didn't want to happen" pasted into it.

                Seems appropriate somehow!

            3. imanidiot Silver badge

              Re: Sounds awful

              This is why you use Notepad++.

          2. Annihilator Silver badge

            Re: Sounds awful

            Yes that image was in the back of my mind when I posted that.

        2. ben kendim

          Re: Sounds awful

          The Scotty Principle applies to that light switch...

          "The more they over think the plumbing, the easier it is to stop up the drain."

      5. nobody who matters Silver badge

        Re: Sounds awful

        ".....language changes naturally over time, and it's main purpose is to convey information between two or more entities - if kids can communicate with each other perfectly happily by using terms that the older generation are completely unfamiliar with, then their language has still served it's purpose"

        That is very true.

        However, you have to make the distinction between changes that improve the ability to communicate and those which hamper it - the difference between evolution of a language and its deterioration.

        1. ParlezVousFranglais Silver badge

          Re: Sounds awful

          I think any changes are purely subjective based on the entities using the language.

          When a command gets added or a syntax changes in a programming language, often you get the same arguments about whether it's an improvement or not, but I'm not sure you could ever say that a language "deteriorates".

          Even if you could, if that was relevant to the masses, then we'd likely all be speaking Sanskrit, Mandarin or Latin as some of the most "precise" languages to have existed, but here we are happily communicating in (technically far inferior) English... ツ

          1. David 132 Silver badge
            Happy

            Re: Sounds awful

            le mi varkiclaflo'i cu culno lo angila

            :)

            1. Anonymous Coward
              Anonymous Coward

              Re: Sounds awful

              klacpe lo pulji ku

          2. nobody who matters Silver badge

            Re: Sounds awful

            "I'm not sure you could ever say that a language 'deteriorates'."

            How else do you describe it when a word starts being used for a totally different meaning from its original, or the scope of meaning is widened to the extent that it covers a wide range of situations rather than the original narrow and precise definition?

            The term 'Artificial Intelligence' is a prime example - a few years ago we all had a very specific view of where AI started, but in the last decade or so that definition has been widened by some to encompass entities (such as the current LLMs) which are very clearly not intelligent at all. Some people have even put forward definitions of such intelligence which would put a simple pocket calculator within the classification of 'AI'. How exactly can that be considered as anything other than deterioration of language?

            There is a long list of words and phrases which in recent years have had their meaning widened or changed, or become so over used in inappropriate situations that they have become either meaningless or now make it difficult to pin down what the speaker actually meant. There really isn't any way to view that as anything other than deterioration of the language.

        2. Chet Mannly

          Re: Sounds awful

          "However, you have to make the distinction between changes that improve the ability to communicate and those which hamper it - the difference between evolution of a language and its deterioration."

          Except hampering the ability of communication is the point of street slang for kids - they can understand it but their parents can't - and that's why they change language. When they grow up those words become the norm and their kids come up with new ones. That's evolution, not deterioration.

      6. Richard 12 Silver badge

        Re: Sounds awful

        Software isn't language.

        Language slowly changes, but that's where the analogy ends.

        The purpose of language is to convey meaning between peers. Languages change to better convey the meaning between peers, to better exclude those who are not peers, or to widen the range of peers.

        LLM vibe code vomit nearly always doesn't even compile or interpret. So it doesn't even become software.

        With careful pummelling, it can sometimes spew out something that compiles, but it has never ever been found to actually work.

      7. Confucious2

        Re: Sounds awful

        I think you are wrong.

        The Queen’s English should not ‘evolve’.

        1. Fred Daggy Silver badge
          Pint

          Re: Sounds awful

          I would place a decent wager on all language evolving.

          English has always evolved. Look at Germanic, Greek, Latin, Nordic and more lately French influences. Every day, every one experiences new things. Language evolves to convey the meaning of these new experiences.

          Groups evolve their own technical jargon (lawyers, medical, IT, different social groups, etc). It molds the group and is molded by the members. It unites and creates the insiders and others. (See Latin in the middle ages among the clergy).

          If language didn't evolve the whole world would probably be speaking a single world language.

          I offer my heartfelt contrafibularities to Mr Johnson on his new "dictionary". It's a nice historical record. But words change, hence, new editions.

          1. ParlezVousFranglais Silver badge
            Thumb Up

            Re: Sounds awful

            I'm anaspeptic, phrasmotic, even compunctuous to have caused you such pericombobulation

        2. Furious Reg reader John

          Re: Sounds awful

          Other than it has now "evolved" to be the "King's English"....

        3. Chaotic Mike
          Coat

          Re: Sounds awful

          Or the King's.

    3. Ian 55

      Re: Sounds awful

      It reminds me of the story of the Space Shuttle's main engine.

      It was designed 'top-down' rather than out of known components. When the turbine blades began to show cracks, they had a problem: because they hadn't been designed and tested separately, no-one knew just how it behaved in various situations. Fixing the problem became a lot more difficult and expensive.

      This is going to be the problem with Claude's code. When, not if, there are problems, no-one is going to understand the components.

      1. Chet Mannly

        Re: Sounds awful

        Devil's advocate - maybe they don't need to. If Claude (or whatever AI tool it is) is good enough to create the software in the first place it should also be good enough to alter the code to correct the problem.

        1. Anonymous Coward
          Anonymous Coward

          Re: Sounds awful

          If it's not good enough to write it, it's not good enough to fix difficult bugs in it either. Sometimes it works despite that issue, but don't count on it.

    4. Gordon 10
      WTF?

      Re: Sounds awful

      I can't believe how lazy a presumably educated techie must be to write off everything AI generated as "slop" without engaging more intellectually with the discussion.

      Is it fear and denial talking?

    5. bboyes

      Re: Sounds awful

      Isn't this a lot like early cars? Unreliable, kind of expensive, not comfortable compared to today... but compared to a horse they had advantages in many circumstances. They never did replace horses: here in the western US there are still places horses and mules go that no other transport can (OK, helicopters maybe, but that is WAY more expensive).

      For a while, as I understand it, driving a Model T you had to be some kind of mechanic to deal with issues. I still recall when, in high school, cars needed frequent tuneups. The first manual transmissions did not have synchros - even Duesenbergs lacked them! Now how much do you have to know about how a car works in order to drive it? Is that really a problem?

      I own a plug-in hybrid and a diesel F250 pickup with stick shift, manual transfer case, and manual hubs. I love them both because they are both great at very different missions. Won't programming be the same? AI may replace the boring grunt work and leave the interesting stuff to us. For those of you familiar with piston engine aviation, how about manual control of turbocharger wastegates and mixture vs FADEC?

      Digital cameras also come to mind. I don't miss the darkroom or days of delay for commercial processing. Hilariously, photo SW now has filters to add the anomalies of film to the look! Even my phone has a pretty good camera. I can scan documents into a PDF and send it from my phone! No FAX machine needed! Every major paradigm shift has its pros and cons and never completely eliminates the old ways, it just makes them unnecessary 80% of the time. Is this change all good? Likely not: doom scrolling! Online fraud! Etc! But the sky is not falling, or maybe it is, and always has been.

  2. stiine Silver badge

    In other news, in the last couple of weeks, Dave Plummer (@DavesGarage on youtube) had AI write a notepad replacement, so I'm not sure what to believe any more.

  3. Jonathan Richards 1 Silver badge
    Mushroom

    So...

    .. what happens if I feed in the specs, performance requirements and documentation for Claude?

    1. b0llchit Silver badge
      Pint

      Re: So...

      Then, after some very small amount of time, the universe will instantly disappear and be replaced by something even more bizarre and inexplicable.

      A theory states that this has happened before...

      1. StillBill

        Re: So...

        “The Babel fish is small, yellow and leech-like, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with the nerve signals picked up from the speech centres of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language. The speech patterns you actually hear decode the brainwave matrix which has been fed into your mind by your Babel fish.

        Now it is such a bizarrely improbable coincidence that anything so mindbogglingly useful could have evolved purely by chance that some thinkers have chosen to see it as a final and clinching proof of the non-existence of God.

        The argument goes something like this: "I refuse to prove that I exist," says God, "for proof denies faith, and without faith I am nothing."

        "But," says Man, "the Babel fish is a dead giveaway isn't it? It could not have evolved by chance. It proves you exist, and therefore, by your own arguments, you don't. QED."

        "Oh dear," says God, "I hadn't thought of that," and promptly vanishes in a puff of logic.

        "Oh, that was easy," says Man, and for an encore goes on to prove that black is white and gets killed on the next zebra crossing.”

        -DAdams

        1. Anonymous Coward
          Anonymous Coward

          Re: So...

          As well as this part

          "Meanwhile, the poor Babel fish, by effectively removing all barriers to communication between different races and cultures, has caused more and bloodier wars than anything else in the history of creation."

    2. ComputerSays_noAbsolutelyNo Silver badge

      Re: So...

      The same as happens when you google Google

    3. ecofeco Silver badge

      Re: So...

      The nine billion names of God?

  4. Fred Daggy Silver badge
    Coat

    A combination of ..

    This seems like a combination of two things

    (1) “Ford!" he said, "there's an infinite number of monkeys outside who want to talk to us about this script for Hamlet they've worked out.”

    and

    (2) Even a stopped watch is right twice a day.

    Which will be correct first?

    1. HollowMask

      Re: A combination of ..

      ObligatoryKosh: Yes.

      1. The Oncoming Scorn Silver badge
        Alien

        Re: A combination of ..

        A stroke of the brush does not guarantee art from the bristles.

      2. Throatwarbler Mangrove Silver badge
        Angel

        Re: A combination of ..

        "The avalanche has already started. It is too late for the pebbles to vote."

    2. Jamie Jones Silver badge

      Re: A combination of ..

      1) But how do they know which version of the script is valid?

      2) But you don't know when.

      These answers apply to AI too

  5. Alan J. Wylie

    Useless use of cat

    cat PROMPT.md | claude-code

    A contender for the Useless use of cat award.
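    For the record, here is the cat-free form of the article's loop. In this sketch `wc -c` stands in for `claude-code` (which may not be on your machine) purely so the equivalence of the two forms is checkable:

```shell
# A plain input redirect does what the cat was doing, one process fewer:
#   while :; do claude-code < PROMPT.md; done
# Stand-in demo: wc -c substitutes for claude-code, for testing only.
printf 'twelve bytes' > PROMPT.md

with_cat=$(cat PROMPT.md | wc -c)    # the award-nominated form
without_cat=$(wc -c < PROMPT.md)     # the redirect form: same bytes on stdin
```

    Both count the same 12 bytes; the cat buys nothing but an extra fork.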

    1. Philo T Farnsworth Silver badge

      Re: Useless use of cat

      claude-code < /dev/zero

      Much better.

      1. m4r35n357 Silver badge

        Re: Useless use of cat

        claude-code < /dev/zero > /dev/null

        now feature complete

    2. steamdesk_ross

      Re: Useless use of cat

      Not useless, it will use up a lot of Claude tokens, so Anthropic can bill you more...

      Makes you wonder who posted/publicised this in the first place?

    3. Alan Mackenzie
      FAIL

      Re: Useless use of cat

      I have recently started to use $ cat <source-file> | less as a workaround to get legible text in less. Somebody thought it a great idea to add syntax highlighting to less, including illegible dark blue text on a black background. Using the pipe prevents this syntax highlighting.

      If I weren't so lazy, I'd look up the documentation to work out how to disable this illegibility, or even submit a bug report. But the problem, although irritating, isn't sufficiently so to propel me to do anything about it. Damn!

      1. Anonymous Coward
        Anonymous Coward

        Re: Useless use of cat

        It would have been quicker to type man less than to write that drivel.

  6. MatthewSt Silver badge
    Facepalm

    Source code...?

    > clone commercial products, a job it can achieve if provided with resources including source code, specs, and product documentation

    Marvellous! Give it the source code and it can clone a product! What will they think of next?

    1. DS999 Silver badge

      Re: Source code...?

      I'm sure there are zero copyright issues with such an approach. Genius!

      Let's see it in action with some dusty deck COBOL code that's been resistant to previous efforts to modernize. Oh, but he'd say "sorry I need FULL specs and documentation, this doesn't count because those aren't present".

      It is obvious to anyone who has ever written a program more complex than "Hello, World" that the most difficult part of programming is figuring out exactly what it is you want it to do. Even a fresh out of college CS grad with zero real world experience could code something arbitrarily complex, if you gave him fully complete and exact specs in English detailing exactly what is needed. Any algorithms that are needed are already developed and provided. Any and all if/then/else decision trees are graphed/flowcharted. Everything you need is right in front of you.

      I'm willing to concede that in such a case AI could probably toss together something as good or better than the CS grad (or a 25 year experienced coder for that matter) in a single day versus months or years depending on how complex the specs were. But that's a fantasy situation, it never happens in the real world.

      1. Philo T Farnsworth Silver badge

        Re: Source code...?

        > Let's see it in action with some dusty deck COBOL code that's been resistant to previous efforts to modernize.

        I just want to see an LLM try to read a deck of tab cards.

        ABEND S0C4

      2. RGH2121

        Re: Source code...?

        Perhaps instead of learning how to code, the CS grads of tomorrow will learn to write extremely careful and good specs.

        And honestly it seems to me that figuring out exactly what you want something to do is a far more useful input/career than just being able to speak C++ natively.

        1. rafff

          Re: Source code...?

          "it seems to me that figuring out exactly what you want something to do is a far more useful input/career than just being able to speak C++ natively."

          Figuring out what the problem really is and how to solve it are part of the development loop. One never really knows what the specs are until the job is finished. But if afterwards an AI can quickly and cheaply generate a *better* solution than a load of meatbags fighting their way through the development fog ....

          As for cloning the functionality of existing software .... bring on the IP lawyers

  7. breakfast Silver badge

    The road to maintainability hell?

    Fast creation is fun and exciting for start-ups, like a lot of things that everyone else then assumes they should also adopt, but most software is legacy. Even if Claude can create a copy, can it also maintain it or are we going to need to get humans involved in that? Will it even be human maintainable, or are you creating code that will need you to subscribe to Claude forever, even when Anthropic start charging enough to break even?

    I keep hearing that my job will be replaced with this technology, but when I look at the tech in action and its outputs, it seems pretty weak.

    1. ComputerSays_noAbsolutelyNo Silver badge

      Re: The road to maintainability hell?

      Well, that's planned software obsolescence for you.

      It is created in a way, such that it can not be maintained or repaired.

    2. Anonymous Coward
      Anonymous Coward

      Re: The road to maintainability hell?

      The real "grim meat hook reality" comes from the line where they point out that with a decent copy of your marketing material, website, and documentation, the AI slop fire hose can converge on a cheaper, faster, bug-filled mess of a knockoff of your company's code. The hilarious thing is they then talk as if companies' goodwill and branding could save them. No, not with the playbook they have been running since well before the pandemic. We tend to hate even the ones our paychecks are coming from.

      What they pointed at is that they will Temuify every established profitable software company out of existence. Since the same people that buy fast fashion and fast food through Uber Eats and Door Dash are in the driver's seat, they will do what they know and buy the cheap knockoff if it looks enough like the name brand.

      Your job may not be replaced, and you still might just not want to do it anymore.

      I have already turned down offers for a position as an underpaid whipping boy expected to fix and maintain 100% uptime for management's vibe-coded slop, and to be punished for their mistakes. No, not interested, and doubly so when "generously" offered it at 40k less than I was making before.

      What we should be doing is organizing our own engineering guild and mercilessly crushing the companies destroying our industry and livelihoods along with many of our places of employment. There is no moat, so we either build a trade organization looking out for OUR collective interests or we drown in the rising tide.

      1. Chet Mannly

        Re: The road to maintainability hell?

        " The hilarious thing is they then talk as if companies goodwill and branding could save them. No, not with the playbook they have been running since well before the pandemic."

        Even before that with the constant demands to subscribe to software. If a knock off can be made cheaply and sold for cash up front people will buy it to avoid having MS/Adobe et al constantly feeding off their credit cards every month for the term of their natural lives...

      2. Rogerborg 2.0

        Re: The road to maintainability hell?

        "We"? You're my opposition, not my guy, buddy.

    3. strange_flight

      Re: The road to maintainability hell?

      Jobs might not be replaced but that doesn't mean that they won't be lost.

      1. breakfast Silver badge

        Re: The road to maintainability hell?

        Absolutely true. The technology doesn't have to work (it doesn't) it just has to be able to persuade the most gullible rube in any given organisation (the CEO) that it works well enough to fire all their development resource.

  8. Natalie Gritpants Jr

    That loop will not finish

    1. Androgynous Cupboard Silver badge

      Well, you say that, but I suspect eventually we'll simply run out of electricity.

      1. The Oncoming Scorn Silver badge
        Pirate

        Or we will use it to drive Redjac out of computers

    2. Anonymous Coward
      Anonymous Coward

      Yeah, Claude's Ralph Wiggum Plugin has --max-iterations and --completion-promise options to help with that, plus a cancel-ralph command, that all sound a bit better than ctrl-c ...

      1. Paul Crawford Silver badge

        But no --max-money option?
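        For what it's worth, the outer bash loop can fake one: cap the iteration count, since spend scales with passes. This is a hedged sketch; MAX_RUNS is an invented knob and the echo is a stand-in for the real agent call:

```shell
# Poor man's --max-money: bound the number of passes through the loop.
# MAX_RUNS is illustrative; the echo stands in for: claude-code < PROMPT.md
MAX_RUNS=5
runs=0
while [ "$runs" -lt "$MAX_RUNS" ]; do
    echo "pass $((runs + 1))"   # stand-in for the real agent invocation
    runs=$((runs + 1))
done
```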

  9. JacobZ

    What about original work?

    So, after humans have done all the heavy lifting of soliciting requirements, developing code, testing, fixing, scaling, integrating, making secure, iterating to implement additional requirements because people are really bad at identifying requirements until they can see the thing and say "no, that's not what I meant!", thoroughly documenting the whole pile and then productizing the whole thing...

    ...an AI can reproduce that for $10 an hour (it doesn't say how many hours).

    A photocopier can plagiarize a book for a lot less than $10 an hour, and that doesn't impress me either.

    1. Anonymous Coward
      Anonymous Coward

      Re: What about original work?

      Yeah, the Ralph had to run continuously for 3 months to develop cursed, 'a new programming language' (TFA link, or this one), which at $10/hr is around $22k -- not really cheap for many a tinkerer -- and it is not quite considered a finished work iiuc ...

      More complicated reverse engineering fuzzing would likely take longer and be more expensive imho. In its 'Real-World Results' section though, the Claude 'Ralph Wiggum Plugin' (TFA link) notes "One $50k contract completed for $297 in API costs" suggesting one's mileage might vary some in this racket.

    2. Eric 9001

      Re: What about original work?

      Photocopying a book without following the license isn't plagiarism unless you claim to have written the book, but it is still copyright infringement.

  10. IamAProton Bronze badge

    unmaintainable code

    If the code is unmaintainable because it's a "Cathedral of complexity" with hundreds of 2nd level dependencies, obscure patterns and a pile of layers just for the sake of it, OR because it's written by a bot, does it really make a difference?

    After a few years of modifications it will need a full rewrite anyways so - assuming those LLMs can actually write code that works - it's going to be a matter of cost (development + maintenance)

    1. Neil Barnes Silver badge
      Holmes

      Re: unmaintainable code

      Second level dependencies? I'm afraid these days it's turtles all the way down until you get to cout().

      Which is why https://github.com/MichalStrehovsky/sizegame lists sizes of "hello world" programs up to 11MB (Haskell) compared to 2kB (assembler) (and the latter could be done in two dozen bytes in MSDOS and bare assembler).

      1. thosrtanner

        Re: unmaintainable code

        even cout has turtles below it, because cout stuffs it into a buffer and then eventually does a write(2) (or equivalent), which then copies the buffer to the O/S and eventually ends up shoving a character (either one at a time or in blocks) at an I/O device.

        1. Paul Crawford Silver badge

          Re: unmaintainable code

          And that I/O device then has a whole graphic library to render in to pixels for the screen...

          1. Not Yb Silver badge

            Re: unmaintainable code

            I (don't) miss the days when the backspace chip could fail in the terminal.

  11. Tron Silver badge

    It's not ideal...

    ...but perhaps we could get stuff that the commercial outfits will never give us. Distributed versions of current stuff like social media that do not have centralised, censored, advertising-driven feeds, and alt networks that can function safely in dictatorships like North Korea, China, Russia and Trumpistan. Search engines that work like Google did when it began.

    It's not perfect, but it may initiate a new revolution in online services that is crowd-sourced, bottom-up rather than top-down, and distributed, rather than centralised.

    GAFA may have given us the tool to replace GAFA with something better.

  12. Bebu sa Ware Silver badge
    Pint

    Taking the piss ?

    The Ralph loop re·rendered as a (primitive?) recursive function presumably locates a fixed point, and the reference to Y combinator, which is a fixed point combinator, is potentially a warning that my leg is being pulled.

    "O brave new world, That has such people in't!"
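    For the curious, the fixed-point trick is real: the Z combinator (the Y combinator's variant for strict languages like Python) manufactures recursion from a function that never names itself. A minimal sketch:

```python
# Z combinator: returns a fixed point of f, i.e. Z(f) behaves as f(Z(f)),
# with the eta-expansion needed to avoid looping forever in a strict language.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# One non-recursive "step" of factorial; Z ties the recursive knot.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
factorial = Z(fact_step)
```

    Whether the other Y Combinator also locates fixed points is left as an exercise.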

    1. Jason Bloomberg Silver badge

      Re: Taking the piss ?

      Or just another AI evangelist selling hype.

      1. Anonymous Coward
        Anonymous Coward

        Re: Taking the piss ?

        ... but with quite unusual skill at engagingly hip snark ... simultaneously refreshing and worrisome.

      2. Homo.Sapien.Floridanus

        Re: Taking the piss ?

        Similar hype moments in history:

        Scientist: I am way ahead of Turing, MY machine can decipher the German texts instantly and with only two requirements!

        Churchill: And what are those?

        Scientist: An identical enigma machine and a list of the daily configuration settings.

        Churchill: Don’t call us, we’ll call you. NEXT!

    2. Tim99 Silver badge

      Re: Taking the piss ?

      ...Are melted into air, into thin air | And like the baseless fabric of this vision... Yea, all which it inherit, shall dissolve... Leave not a rack behind... such stuff | As dreams are made on... I am vexed. Bear with my weakness. My old brain is troubled. (The Tempest)

      Tomorrow and tomorrow and tomorrow | Creeps in this petty pace from day to day | To the last syllable of recorded time... And then is heard no more. It is a tale | Told by an idiot, full of sound and fury, Signifying nothing. (Macbeth)

      One can always find something apposite in Shakespeare?

  13. K555 Silver badge
    Joke

    "Agile and standups doesn't make sense any more,"

    Any more?

    1. Bebu sa Ware Silver badge
      Joke

      "Agile and standups doesn't make sense any more,"

      I hadn't encountered standups in this context until after COVID – I have never used any form of video conferencing – so the word only had a prior meaning redolent of promiscuity and cramped, steamed-up telephone boxes where admittedly some agility was a requisite.

      The first time I heard it used in a post COVID office I was rather surprised until the penny dropped… if anyone had mentioned scrums I would have been off pdq… besides I didn't fancy any of that lot. ;)

  14. Doctor Syntax Silver badge

    "Companies have a brand that can't be cloned and goodwill that can't be cloned,...But product features can now be cloned."

    And then all you've got is a clone. Where's the ambition in that?

    1. Anonymous Coward
      Anonymous Coward

      Bwaa Haa Haa

      Companies have goodwill? I haven't seen any of that in a while. Mostly just corporates drinking the en$hitification flavor-aid and burning their companies to the ground in search of a .5% bump to quarterly reports.

      Temu isn't ambitious in its ideals, or its execution, but it is absolutely _voracious_ in its appetite. This means the ambition is in the audacity to burn the software industry to the ground by fully automating the knockoff software process and doing so on an industrial scale. And it's not coming, it's already here.

      The best brands crumble after a decade of poor products, and most abandoned goodwill to strip mine their own companies. Played the game of how much blood (aka short term revenue) can we take before the body dies. How many workers can we cull before the ones that are left begin to collapse on the job, or literally set themselves on fire instead of suffer the agony and humiliation? Before they literally can't do the workload that two or three people did before? How much garbage they can ship before the customer revolts? How much they can raise the renewal? How about some micropayments?

      So they in their own hubris sowed the wind and then carefully planted the seeds of their own destruction beneath their feet as well. Destroyed their brands and then doubled down by pouring money into the companies and tools that would automate themselves out of existence along with their workers and their dwindling market share.

      Though to be fair I expect this fake AI bubble to burst before all that happens, and the delivery of newer fake AI to be heralded as arriving right after cheap, clean, and nearly unlimited fusion energy.

  15. Kevin McMurtrie Silver badge

    Been here for decades

    Isn't this how the cheapest remote contractors work? It's an infinite loop of telling them to fix bugs, and they write more random code?

    I've seen contractor projects balloon to millions of lines of very enterprisey code without it ever compiling. It's the sign that it's time to leave where you're working.

    1. Doctor Syntax Silver badge

      Re: Been here for decades

      And also of the same bugs reappearing every time the disposable employee workforce turns over.

  16. Andy 73

    This is satire...

    This is satire... right?

  17. bigphil9009

    "Ralph Wiggum" it is not

    Having read the linked-to blog, I can't work out if it's intentional or a fine example of AI slop that the image of Ralph in the article looks absolutely nothing like the actual character.

  18. MattieD

    Why ask it twice?

    If the AI is capable of finding and resolving the bugs in the code it wrote, why did it include them in the first place? Why is there the need for recursion?

    Sounds like the AI provider might be deliberately including errors so that they can charge for fixing them.

  19. Komm the Kat

    So let me get this straight... He just made a really shitty compiler that literally burns cash to run?

  20. mikesilv

    I presume this is "$10" in heavily VC-subsidised compute, which we know is nowhere near the actual cost. Once these companies burn through the easy money, that's not going to keep being easy.

  21. Anonymous Coward
    Anonymous Coward

    I'm in danger!

    1. ecofeco Silver badge

      I had to scroll this far down.

  22. Nugry Horace

    "Huntley has documented how he used Ralph to create an a tax app for the ZX Spectrum, and later reverse-engineered and cloned an Atlassian product."

    All of which appears to be paywalled, so I'm none the wiser.

  23. Random as if ! Bronze badge

    Jump

    + shark : clod.ai marketing : shite

    1. captain veg Silver badge

      Re: Jump

      Since we're referencing Happy Days...

      "Sit on it, Ralph."

      -A.

  24. HuBo Silver badge
    Windows

    Interesting stuff

    Apparently, OpenAI's Codex has an internal agent loop like that, one that iterates over code production (an inner Ralph) (spotted by Benj Edwards). This raises a couple of questions: doesn't claude-code have a similar inner loop, and would that not make the external bash loop somewhat redundant? And is updating the prompt in the outer loop needed to get an eventually satisfying output (Huntley seems to suggest both yes and no, simultaneously)?

    But overall it's interesting that they're iterating over AI (so-called) tool application to refine outputs towards a (hopefully) convergent quality output (solution). It's a technique used at much finer scales (floating point ops) to solve PDEs approximated by linear algebraic matrix-vector systems for example (Richardson, Jacobi, Gauss-Seidel, SOR, ...) as well as optimization and inverse modeling (Newton method, Levenberg-Marquardt, ...) among others. It made me think of Sandia's iterative solution of PDEs using near-memory neuromorphic compute with proximal-only interactions, and that Richardson and Jacobi could be well-suited to that, even without spiking (i.e. plain-jane non-neuromorphic in-memory compute approach).

    Quite stimulating imho (and with the crunch of puzzling bits) ...
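    The fixed-point flavour of those classical schemes is easy to show in miniature. A toy Jacobi iteration on an invented, diagonally dominant 2x2 system (purely illustrative; the system and function name are made up for the example, nothing from the article):

    ```shell
    # Jacobi iteration: update every unknown from the *previous* iterate only,
    # repeating until the answers stop changing -- the same refine-in-a-loop
    # idea, at floating-point scale. System solved here:
    #   4x +  y = 9
    #    x + 3y = 6      (exact solution: x = 21/11, y = 15/11)
    jacobi2x2() {
      awk 'BEGIN {
        x = 0; y = 0                 # arbitrary starting guess
        for (k = 0; k < 50; k++) {
          nx = (9 - y) / 4           # solve row 1 for x, using old y
          ny = (6 - x) / 3           # solve row 2 for y, using old x
          x = nx; y = ny             # commit both updates together
        }
        printf "%.4f %.4f\n", x, y
      }'
    }

    jacobi2x2   # -> 1.9091 1.3636
    ```

    Diagonal dominance (4 > 1, 3 > 1) is what guarantees the loop converges rather than oscillating, which is the part the one-line Ralph loop conspicuously lacks.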

  25. Anonymous Coward
    Anonymous Coward

    Created a tax app for the speccy?

    And it probably only caused about 4 brownouts in major cities with the amount of power it used to run the billion iterations until it bashed out something that worked.

  26. rafff

    Actual use case?

    This may be useful for turning deliberately vague or obfuscated patents, i.e. all of them, into something implementable.

    Now about that idea you had for a cold fusion reactor ...

  27. Dinanziame Silver badge
    Angel

    The future of coding

    There are people who are apparently working to make that happen, only slightly more sophisticated:

    Welcome to Gas Town — Steve Yegge

    1. Anonymous Coward
      Anonymous Coward

      Re: The future of coding

      Engaging writing ... "an industrialized coding factory manned by superintelligent robot chimps", "Gas Town operates on the principle I call Nondeterministic Idempotence", "I am that panda" ... Can the RotM really be this entertaining?! ;)

    2. Dan 55 Silver badge
      Alert

      Re: The future of coding

      Both this and Gas Town have one thing in common:

      The developers are so far gone that you can't even understand the blog post or X thread which is supposed to clearly introduce and describe the concept, let alone the code repeatedly spewed out in a loop by the LLM.

      Perhaps it's what overuse of LLMs does to your brain.

    3. that one in the corner Silver badge

      Re: The future of coding

      From Gas Town post:

      > Gas Town solves the MAKER problem (20-disc Hanoi towers) trivially with a million-step wisp...

      Sorry, we are celebrating something that can solve a 20-disc Towers of Hanoi in about 30 hours of decidedly non-trivial compute? The trivial coding problem that was used as the second example of a simple recursive function (just after Fibonacci numbers)? And it takes something as complicated as this "Gas Town" to do that?

      The Wonders of Science!
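      For reference, the "trivial" recursion in question fits in a few lines of shell (a textbook sketch; a 20-disc run is 2^20 - 1 = 1,048,575 moves, so only small n are printed here):

      ```shell
      MOVES=0   # running tally of moves printed

      # hanoi N FROM TO VIA -- the standard recursive solution.
      hanoi() {
        [ "$1" -eq 0 ] && return 0
        hanoi $(($1 - 1)) "$2" "$4" "$3"   # park N-1 discs on the spare peg
        echo "move disc $1: $2 -> $3"      # move the largest disc directly
        MOVES=$((MOVES + 1))
        hanoi $(($1 - 1)) "$4" "$3" "$2"   # stack the N-1 discs back on top
      }

      hanoi 3 A C B   # n discs always take exactly 2^n - 1 moves
      ```

      The move count doubles (plus one) per disc, which is the whole difficulty of the puzzle, and it is hard-coded into four lines of 1950s-era recursion.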

  28. Rogerborg 2.0

    Uh huh. "Subscribe to my Substack" is the new "Buy my book to find out the secrets of getting rich quick."

  29. a.b

    That is cool. Can it deliver working software on a 10- or 20-year timeline?

    So a LLM can clone commercial products if provided with resources including source code, specs, and product documentation. And enough tokens.

    That is cool, it really is.

    On the other hand ...

    "Can you build X for us. We've already prepared the complete spec and product documentation. And by the way here is the source code as well."

    ... doesn't sound like anything I've ever heard for a project brief.

    The reality is that clients don't know what they want. And that's not their fault.

    They know that they have a problem that they need solved - and by admitting that they are usually already ahead of their competition. But it's not always clear what the real problem is that needs to be solved - the symptoms that cause the pain are known, but getting to the root of the problem is a different matter.

    And only then can we start to look at finding potential solutions. Within existing constraints - which are also usually not fully understood.

    This is how building software in the real world works. It's messy and it's complicated, and that doesn't make for a short juicy blog post.

    But at the center of the LLM software development approach seems to be the question "how do we deliver X with the least amount of effort?".

    And that's just not how I understand the job of delivering working software.

    The question is rather "How can we deliver the most lasting value with the least amount of resources".

    And lasting means looking at 10-20 years, at least. All of the software I delivered 10 years ago is still in use (and actively maintained), and a lot of the software I delivered 20 years ago is still in use (a lot less actively maintained). As far as I know, none of the software I wrote 30 years ago is still in use. But for some industries 30 years is nothing.

    We can only reliably deliver working software over decades by making very careful choices about the tech stack, and how it might unravel over time. That's where the "least amount of resources" comes in.

    Building our tech stack on a LLM coding agent that goes through major changes frequently and belongs to a 3rd party organisation with a shaky business model seems ... quite adventurous.

  30. DartfordMan

    So when this recursion produces the code that most accurately replicates the original application, two questions arise: will it accurately reproduce the bugs that the original code has? And who is responsible for the copyright infringements in the code when it reinvents the same algorithms and methods? The programmer, the AI (treated as an individual), or the creator of the AI?

  31. captain veg Silver badge

    you what?

    "Huntley has documented how he used Ralph to create an a tax app for the ZX Spectrum, and later reverse-engineered and cloned an Atlassian product."

    Yeah. Right.

    So we're talking about something which is entirely useless and will never be used and... a clone of something which is entirely useless and nobody wants to use.

    Odd choice of project, unless you desperately don't want anyone to probe further.

    -A.

  32. J.G.Harston Silver badge

    Isn't vomiting rolph not ralph? I'm certain I wouldn't ralph into the porcelain telephone.

    1. agurney

      "Isn't vomiting rolph not ralph? I'm certain I wouldn't ralph into the porcelain telephone."

      No, it's "Hughie" .. far more onomatopoeic :) [ cf Sir William Connolly]

  33. Anonymous Coward
    Anonymous Coward

    Interesting article. Too bad several of the links are paywalled.

    Well... it's an interesting idea, but the Z-80 post is mostly "for subscribers only", as is the other link to posts on ghuntley.com.

    This seems a bit like stealth marketing for someone's personal blog.

  34. MrAptronym

    The way he talks, he sounds like a teenage edgelord on Gaia Online circa 2004 bragging about the dark magic he learned and is totally going to curse the bullies at school with.

    This sounds like the code equivalent of copying a quote into your paper and then using the thesaurus to avoid plagiarism allegations. Cool, you ended up with something disgusting to look at that conveys most of the same meaning as a thing a person already made. Congrats.

  35. DaveK23

    This is nonsense

    The bash script as posted just repeatedly sends the exact same prompt into Claude and in no way collects the output and adds it to the prompt, so WTF?

    1. Not Yb Silver badge
      Coat

      Re: This is nonsense

      Claude can modify PROMPT.md, as it's just a file in the project directory. You thought self-modifying code was bad...

      I've not tried this but if we're trying silly LLM tricks...?

      PROMPT.md: "Build a wonderful project. You may modify PROMPT.md to improve the project, as well as building the project."

      Repeat ad nauseam, which would probably be around 1 repeat.
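      A bounded sketch of that idea (everything here - the `agent` stub, the `DONE` sentinel, `MAX_ITERS` - is invented for illustration; the real loop pipes PROMPT.md into `claude-code` forever with no exit condition at all):

      ```shell
      MAX_ITERS=5

      # Stand-in for `claude-code < PROMPT.md`, so the sketch runs anywhere.
      # A real agent could likewise rewrite PROMPT.md and signal completion.
      agent() {
        cat >/dev/null                # consume the prompt on stdin
        echo "patched" >> PROMPT.md   # simulate the agent editing its own prompt
        touch DONE                    # simulate the agent declaring success
      }

      echo "Build a wonderful project. You may modify PROMPT.md." > PROMPT.md
      rm -f DONE
      i=0
      # Stop when the agent drops the sentinel, or after MAX_ITERS --
      # unlike `while :`, this version can't burn cash forever.
      while [ ! -f DONE ] && [ "$i" -lt "$MAX_ITERS" ]; do
        agent < PROMPT.md
        i=$((i + 1))
      done
      ```

      The self-modification "works" because PROMPT.md is re-read at the top of every iteration, so whatever the previous pass wrote becomes the next pass's instructions.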

  36. Anonymous Coward
    Anonymous Coward

    So it can clone an app given the full source code and documentation? Pretty sure I can do that for free with a lot fewer errors.

  37. Nyle

    Does this mythical $10/hour include the actual cost of the processing power used to generate the code, and is the code as bug-free as human-generated and reviewed code? The article seems to leave a lot out of the analysis.

    Right now a lot of AI use is very inexpensive as they work to get everyone to embed it into their process. How long do we expect this free for all model to continue when the data center bills come due?

  38. CowHorseFrog Silver badge

    I'm calling bullshit.

    Where is the source ?

    Where is the binary ?

    Just like religion we have the religious basher telling us how wonderful their god is, but their god is never seen.

  39. Paul 195
    Meh

    10$/hour - for now

    Anthropic and OpenAI are keeping very quiet about the real costs of inference - the compute needed to run something like Ralph - and given their cash burn rate it is likely to be much more than they are currently charging users. If the real cost is closer to $100/hour, it doesn't have the same cost advantage over humans.

    At some point we either start paying the real costs for generative AI or these companies collapse, taking hundreds of billions of dollars with them.

    1. nobody who matters Silver badge

      Re: 10$/hour - for now

      Or, faced with paying the real costs, those using these things throw up their arms in horror, and stop using them rather than pay an amount not justified by the benefits (assuming they still think there are any?)......

      ........and these AI companies collapse etc, etc.

  40. Groo The Wanderer - A Canuck Silver badge

    The thing is, I've met people online who think that doing something not far removed from "The Wiggum Loop" makes them "programmers" who "don't need us any more," and unfortunately for the industry, some of them are in management at the companies they're associated with.
