Perfect
Developers can get back to coding rather than code reviewing AI-generated slop.
Microsoft's GitHub this week said paying GitHub Copilot customers will now face monthly limits on certain types of high-powered AI requests, and will have to pay more if they want to surpass those limits. Welcome to the AI industry's latest variation on bill shock, made famous in the telecom industry and later refined for the …
"Developers can get back to coding..."
Especially as the price for AI will increase until it consumes almost all of the productivity increase it (might) bring.
We saw this with Windows/Office in the 1980s-1990s, which nickel-and-dimed its way through the productivity improvements until all the gains ended up in MS's coffers.
Sucks to be a business buying into this:
"increase copilot prices to above the level of fired programmers"
At $0.04/request, they would have to increase them a helluva lot for that to come true.
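Back-of-the-envelope, with numbers I'm assuming rather than taking from the article (a $100k fully-loaded developer, the quoted $0.04 per premium request, ~230 working days a year):

```swift
// Rough break-even: how many $0.04 premium requests buy one developer?
// All figures here are illustrative assumptions, not anyone's real costs.
let devCostPerYear = 100_000.0          // assumed fully-loaded salary
let pricePerRequest = 0.04              // quoted premium-request price
let requestsPerYear = devCostPerYear / pricePerRequest    // 2.5 million requests
let requestsPerWorkingDay = requestsPerYear / 230.0       // ~10,870 requests per day
print(requestsPerYear, requestsPerWorkingDay)
```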
But it would be funny if meat sacks turn out to be cheaper; my AI dystopia bingo card didn't have warehouses of human programmers grinding out code for peanuts and competing on price against "premium" AI code that only the richest firms can afford.
True.
Given the recent strategy lesson on service lock-in provided by Broadcom to the whole IT industry, one would expect that not that many companies would fall for that specific type of trap a second time. But no...
Being locked into a service whose provider can change the price tomorrow and withhold the service at will should trigger mitigation plans and exit-strategy planning, not the firing of staff to replace them with said service.
I haven't used AI for coding at work at all, for a few reasons, one of which is that I have yet to see any example where it's actually useful to me. Another is that these pricing tiers make my head hurt.
At home, in Xcode 16.4, code completion seems less than useful. One example is when I'm typing the start of a function that has a definite set of parameters, and code completion decides to hallucinate some new ones based on the code in the few lines above.
I type this: `let date = Date.now.adva`, and code completion suggests `let date = Date.now.advanced(by: .day)` then I immediately get an error because there's no such thing as a `.day`.
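For reference, a minimal sketch of what the SDK actually accepts here, assuming the intent was "one day from now":

```swift
import Foundation

// Date.advanced(by:) takes a TimeInterval (seconds, a Double),
// so there is no `.day` member for the suggestion to resolve against.
let roughlyTomorrow = Date.now.advanced(by: 86_400)

// Calendar-correct "add a day" goes through Calendar instead:
let tomorrow = Calendar.current.date(byAdding: .day, value: 1, to: Date.now)
```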
Is it too much to ask that these things work according to the SDKs they have access to?
That was one line of code. What happens when I need some 'AI' thing to give me a lengthy function? Do I have to check everything myself? What's the point of all this AI slop?
So far, I've only found it useful when I'm learning a new API that's complicated but also consistent and well-documented. It's a niche situation, but I find that an LLM can tell me specifically what I need with the level of detail I need, whereas the docs will, by their nature, contain everything and take more time. A lot of .NET is like that.
If it's an API I already know and don't need to learn, it's useless because I can already do simple tasks faster than I can explain them to an LLM, and the LLM will hallucinate too much for complex tasks.
If it's not a complicated API, then it's useless because it takes less time to learn than to explain to the LLM what I need.
If it's not a consistent API, or is not well-documented, then it's useless because the LLM will hallucinate even on simple tasks.
So that's about it. It's a "nice to have", but I really don't think it's worth consuming so much energy to have it.
One of my colleagues, an open-source dev in his spare time, grey beard and all, and very open-minded and interested in current developments, actually told me the exact opposite. He is no frothing-at-the-mouth AI peddler but has nuanced and well-founded opinions.
He was amazed how easy it was to get entry-level jobs done. In his opinion it will only be a few years until AI does the entry-level jobs, you know, what we all did as our first jobs. Those will no longer be there. Yes, code needs to be reviewed, but so did my code back then - and honestly, so does it now, depending on how critical it is... (we all still make mistakes, more complex ones, but still we f**k up)
On these premises, the future looks like some years of money saving, followed by some years of oh fuck as the old coders retire and the new coders can't do shit, followed by some years of even more oh fuck as the LLMs have nothing new to train on and gradually lose the ability to do even simple jobs, followed by some years of rebuilding skills, followed by, if we're lucky, some sort of equilibrium.
Error messages need real search, not hallucinations.
Autocomplete based on the actual AST is far better than an LLM, as it never hallucinates.
The IDEs I've used for a decade or more generate boilerplate, and it's the same each time so I don't need to check it.
Comments are to explain why, not what.
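A contrived illustration of that last point, if anyone needs it (the retry numbers are made up):

```swift
// "What" comment - just restates the code:
// set the retry limit to 3
let retryLimit = 3

// "Why" comment - records the reasoning the code can't show:
// The upstream gateway drops roughly 1 in 50 calls under load (made-up figure);
// three attempts keeps the user-visible failure rate inside our error budget.
let gatewayRetryLimit = 3
```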
"What's the point of all this AI slop?"
Continually promising the moon on a stick to get certain people extremely rich before the bubble bursts. Have you heard the current British government going on about AI recently? It is nauseating drivel that is going to take money from more deserving causes like fixing old decrepit hospitals and paying the staff inside better, or fixing the many many potholes, or...
I've recently had the scales fall from my eyes with "AI" coding (once you get over whether "AI" even means anything and kinda live with the marketing buzzword).
Killer apps so far are:
- Reducing mundane tasks, leaving time for improved (or actual) testing
- Working across multiple repos for code discovery
We work with over 400 repos with a similarish codebase; no doubt there are ways to manage that differently, but this is 10+ years of tech debt.
Yesterday I spotted a funky implementation of something, knew we'd fixed it somewhere else, and asked the "AI" to find the other repos with it and apply the fix to them. The original was a one-liner brain fart that just seeped into production because it was, empirically, a totally solid solution.
Naturally it needs proper testing as no doubt each repo has a slightly different spin, but that'll be surfaced through testing.
This all happened while I was doing other stuff - "just" have to review the changes.
The IDE integrations channel your thinking into per-repo usage, and the power is in using "AI" across your entire codebase.
It's worth researching "AI" tools that can sit outside of a single repo.
I am still experimenting with which specific tasks I even want to use an LLM for. It has always felt quite wasteful for some simpler questions, even when it saves 30 seconds compared to finding the correct Stack Overflow answer. This made me expect pricing changes. I've no idea whether the limits will affect my use or not; I guess I'll find out.
With every product they buy, sooner or later, they FSCK it up bigtime.
Why people put up with them is beyond me. Unless you are all in on Orifice 300 and working on their systems in the cloud (so that they can spy on your every move), you are not worthy of their attention.
Proudly MS free since 2016. One look at W8.0 was enough for me.
Given that the driving force behind the multi-billion-dollar subsidized service is to squeeze out any new competition before it can grow, this is a good thing. It would be far more sensible to invest that money in lowering the cost side - hardware and/or algorithms, including basic research.