It looks to me like the AI prompt sent a code request to someone in Dumbai, who in turn asked some AI to write some code for him - which could work, somehow, if the first request was understood correctly - but some people had 500M to spend...
Builder.ai coded itself into a corner – now it's bankrupt
The collapse of Builder.ai has cast fresh light on AI coding practices, despite the software company blaming its fall from grace on poor historical decision-making. Backed by Microsoft, Qatar's sovereign wealth fund, and a host of venture capitalists, Britain-based Builder.ai rose rapidly to near-unicorn status as the startup' …
COMMENTS
-
Wednesday 21st May 2025 17:47 GMT Anonymous Coward
Wasted potential - there was clearly mismanagement but it wasn't all bad
As a former employee of Builder.AI, I'd say it probably wasn't so much the AI as not having a grasp on contractor costs, plus financial mismanagement generally.
The company used to joke that the 'AI' stood for 'Another Indian', such was the reliance on cheap off-shore development.
However, they were genuinely trying to get building blocks to quickly assemble apps, with AI converting English-language requirements into code for the adaptations to those blocks. The issue was that they didn't also manage to incentivise the contractors to actually use any of the generated code: billed by the hour, and with no particular need to use it, they generally preferred to throw it out and write things from scratch. This meant that the ability to work out how close the conversion was, and to refine it, was very limited. They had some spectacular cost over-runs on certain projects, again on the contractor side.
To be charitable, it did look like they generally took the hit when they took on a project for a fixed price and contractors ended up costing too much. In part they ran out of cash because they weren't that keen to go back to the customer and ask for more when it became obvious there was more work than expected. They also needed to cut staff much earlier than they did, but tried really, really hard to avoid redundancies (which ultimately cost a lot more jobs when they failed completely).
I'm not going to completely stand up for the leadership, as there were plenty of mistakes, but they were trying to introduce a lot of AI smarts and efficiency, and to deliver a lot of useful apps. A few fewer 'side quests' of unnecessary development, a bit more willingness to be hard-nosed on failed projects, and better control of contractor costs, and they could still have been one of the AI darlings. As it is, former employees - suddenly terminated or holding now-worthless shares - are instead out of pocket, with just a few lessons on where you can and can't cut corners, even as a unicorn.
-
Thursday 22nd May 2025 05:27 GMT that one in the corner
Re: Wasted potential - there was clearly mismanagement but it wasn't all bad
> However, they were genuinely trying to get building blocks to quickly assemble apps, with AI converting English-language requirements into code for the adaptations to those blocks. The issue was that they didn't also manage to incentivise the contractors to actually use any of the generated code: billed by the hour, and with no particular need to use it, they generally preferred to throw it out and write things from scratch. This meant that the ability to work out how close the conversion was, and to refine it, was very limited.
Assuming that is a reasonable description of their process, it raises so many questions, including:
They were getting the AI to generate an app, then paying contractors to fix the app (presumably against the original requirements), then planning to use the difference between the two as the metric?
Were they assuming the contractors would hit the requirements bang on every time? Or were they feeding back a list of the AI's failings plus the (different) contractor code and a list of its failings, instead of letting the LLM "figure it out for itself" and iterating only on the match to the requirements? See below about what they may actually have been training this LLM to do...
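To make the question concrete, here is a toy sketch of the sort of diff-based metric being speculated about - nothing to do with Builder.ai's actual internals, purely an illustration of "the difference between the two as the metric":

import difflib

def correction_ratio(ai_generated: str, contractor_fixed: str) -> float:
    """Hypothetical training signal: how much of the AI's output the
    contractors ended up changing. 0.0 = kept verbatim, 1.0 = nothing survived."""
    matcher = difflib.SequenceMatcher(
        None, ai_generated.splitlines(), contractor_fixed.splitlines()
    )
    return 1.0 - matcher.ratio()

# The failure mode raised above: if contractors simply rewrite from scratch,
# the ratio sits near 1.0 regardless of how good the AI output was, so the
# "metric" ends up measuring contractor behaviour, not code quality.
print(correction_ratio("print('hello')\n", "import sys\nsys.stdout.write('hello')\n"))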
> they didn't also manage to incentivise the contractors to actually use any of the generated code
That sounds like one of the things that needed to have been figured out the day *before* engaging the first contractor! And thoroughly tested as a mechanism before making any attempt to include it as part of the LLM's training protocol. Let me guess: for the first year they used in-house coders, who were salaried, informed about the purpose of their work (being part of the training protocol) and all enthusiastic about "getting the LLM to work" as their goal, as opposed to just chucking out yet another tedious app. Then the contractors were hired, and they didn't have the same goals...
No BOFH clauses like "the rate we pay is divided by the number of changed lines"? And this incentive going directly to the individual contractor, not via middlemen (i.e. a contracting firm bigger than a one-man band)?
And we're 100% sure it was not because the LLM was generating gibberish and it was actually genuinely easier to start from scratch rather than try to figure out what was actually going on? Because there is a sneaking suspicion that the LLM is effectively being trained to generate code that REQUIRES humans to correct it and it may be optimising its outputs to maximise that!
> This meant that the ability to work out how close the conversion was, and to refine it, was very limited. They had some spectacular cost over-runs on certain projects, again on the contractor side.
Running projects for paying clients whilst they still had contractors in the mix, instead of a functioning AI? (If that was even going to be possible...)
> To be charitable, it did look like they generally took the hit when they took on a project for a fixed price and contractors ended up costing too much
Nothing charitable there: they took on the clients without having a working LLM. If they were hoping to use those clients as part of the training, then that is simply the cost of training. Otherwise, it is the cost of not going back to the client and admitting they had sold a service that didn't exist. Isn't there a word for that? Begins with 'f', rhymes with 'Lord' (as in "have mercy").
Looking back at that lot, maybe the contractors did have this company by the short and curlies after all.
-
Thursday 22nd May 2025 07:56 GMT Anonymous Coward
Re: "to leverage AI tool"
"You’re clearly not management material; going forward let’s circle round to ideate and get our ducks in a row for optimally synergistic resultations."
I wouldn't actually speak those words aloud, just in case that malediction summoned one of the legions of Hell; presumably the poor devil overseeing the particular circle to which manglement are justly consigned.
-
Thursday 22nd May 2025 13:39 GMT captain veg
Re: "to leverage AI tool"
I keep a box and an envelope on my work desk. It amuses me to stay outside the first while thinking and to aimlessly push the second around.
I did once catch myself using the word paradigm in conversation with a manager. Fortunately he looked blank for a moment before asking "did you mean paradidgum?".
-A.
-
Wednesday 21st May 2025 21:58 GMT IGotOut
The whole AI model
"Although Builder.ai's fall has roots in financial mismanagement and forecasts that were arguably over-optimistic"
That's pretty much every single AI company.
"We've got this awesome product."
"Great, how you going to make money from it?"
"Next year it'll be bigger and faster"
-
Thursday 22nd May 2025 21:00 GMT David Hicklin
Re: AI = Another Indian
> financial well-being is now tied up with this AI crap, and I didn’t even change jobs. How did that happen?
You need to retire - that's how I escaped the encroachment of AI crap in the company (not to mention Windows 11), after a valiant rearguard action against badly used Agile.
-
Thursday 22nd May 2025 10:01 GMT Anonymous Coward
As I understand it, it has been shown that AI is far better suited to replacing C-suite decision-making than it is to replacing coders or accountants. Funnily enough, the C-suite are reluctant to see that replacing themselves with AI is an overall cost-saving advantage for the business.
I'm sure it's only a matter of time before we see headlines about the first AI CEO, probably pushed by marketing initially. It'll scare the crap out of them if it turns out to do a good job.
It's OK, though: I'm sure they'll be delighted to do what everyone else is told to do and upskill in AI to take advantage of all the new opportunities when the machine pushes them out the door.
-
Thursday 22nd May 2025 08:11 GMT Anonymous Coward
Revisiting "Going Postal" - Terry Pratchett could have written
"unable to recover from historic challenges and past decisions that placed significant strain on its financial position."
as one of the excuses given for failing services by Reacher Gilt, the fraudster who had stolen the Clacks and was extracting every cent from the business while running it into the ground.
-
Thursday 22nd May 2025 11:50 GMT localzuk
What exactly is the point of AI?
In its current form, what is the point of it?
Every task I've seen it used for on a larger scale ends up needing more humans to fix its output than would've been needed to generate the correct output manually in the first place.
Great, your "AI" can generate 10,000 lines of code from a prompt. But it took you weeks to write the prompt in such a way that the AI could understand it, and then you have to spend more weeks checking that the code actually does what it's supposed to, and fixing the prompt to correct any errors.
The only uses I've seen for such tools so far that actually work are "what does this code do?", where you feed in someone's spaghetti code and it summarises it, and "convert this overly complicated PHP code for adding data to a database into simple SQL".
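Not PHP, but a minimal Python sketch (table and column names hypothetical) of the flavour of simplification meant - row-by-row, string-concatenated insert logic versus one simple parameterised statement doing the same job:

import sqlite3

rows = [("alice", 42), ("bob", 17)]  # hypothetical data

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (name TEXT, score INTEGER)")

# The "overly complicated" pattern: hand-building one statement per row out of
# string fragments, the sort of thing the spaghetti code tends to do.
for name, score in rows:
    stmt = "INSERT INTO scores (name, score) VALUES ('" + name + "', " + str(score) + ")"
    conn.execute(stmt)

# The "simple SQL" it boils down to: one parameterised statement for the lot
# (either block alone inserts the same rows; both are shown only for contrast).
conn.executemany("INSERT INTO scores (name, score) VALUES (?, ?)", rows)

conn.commit()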
Everything else I've tried? It's hallucinated stuff, spat out incorrect code, etc...
-
Thursday 22nd May 2025 12:31 GMT Irongut
I recently had to rewrite 834 lines of complicated code and asked my boss, who originally wrote it, for assistance. He put it into GitHub Copilot and was very impressed with the 121 lines of code it produced.
Until I pointed out the switch statement that was missing half the options. And later the seven missing methods.
After I rewrote Copilot's garbage output, the complete and working file was 537 lines of code.
The only good thing I can say about GitHub Copilot is that at least its broken, partial code was easier to read than my boss' working code.
-
Thursday 22nd May 2025 18:25 GMT Claptrap314
If you know what an LLM is
you know that there is NO WAY one can consistently produce code that compiles, let alone code that does something useful, let alone code that properly handles corner and edge cases.
Of course, you can tack on a module that requires the output to be run through a compiler & loop until it passes, but beyond that...
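A minimal sketch of that kind of bolt-on check, with the model call stubbed out as a hypothetical generate_code() function - all it guarantees is output that parses, nothing more:

def generate_code(prompt: str, feedback: str = "") -> str:
    # Stand-in for whatever LLM call is actually used.
    raise NotImplementedError("hypothetical model call")

def generate_until_it_compiles(prompt: str, max_attempts: int = 5) -> str:
    feedback = ""
    for _ in range(max_attempts):
        source = generate_code(prompt, feedback)
        try:
            compile(source, "<generated>", "exec")  # does it even parse?
            return source
        except SyntaxError as err:
            # Feed the error back and try again - this says nothing about
            # whether the code does anything useful, only that it compiles.
            feedback = f"Previous attempt failed to compile: {err}"
    raise RuntimeError("no compilable output after max_attempts tries")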
At the very best, the founders were trying the well-trodden 'fake it till you make it' route.
-
Sunday 25th May 2025 15:24 GMT Michael Strorm
Re: Cold Hard Calculation
The Matrix was a great film but, yeah, that part was crap.
I heard somewhere that early versions of the script had humans being held captive to exploit their brains' processing/computing power, but the powers that be thought this would be too complicated for people to understand and changed it.
However, looking it up just now, I find some people disputing that this was ever the case.
-
Friday 23rd May 2025 12:28 GMT Groo The Wanderer - A Canuck
GitHub Copilot with Claude 3.5 on the free service can't even make simple changes straight up without producing a build that generates errors, which it then tries to correct while virtually ignoring the original request.
That's not "intelligence." That's a damned bass-ackwards way of producing maximum screw ups and mistakes.
If Microsoft is relying on it that much, no wonder the busted "patches" and "fixes" they've been releasing have been so bad lately.
This is just the first of many bankruptcies that "Artificial Ignorance" is going to cause...
-
Friday 6th June 2025 11:37 GMT Slinkyhippo
I had a meeting with BuilderAI a couple of years back and it was very clear within 30 minutes that they were built on hype. Their "AI" offering really was just templates of code, with very little "AI" involved at all. In fact, they were little more than a software development house that used cheap labour in India to do most of the work.