Time warp capital
Gotta love Berlin! Where Thursdays are Monday's yesterday! ;~}
Software industry veteran and developer advocate Kelsey Hightower, well known for his contributions to the Kubernetes community, has an interesting take on generative AI: he won't be paying too much attention to it for now, except insofar as how it will be instrumental in changing what it means to be a software engineer. While …
... oh shucks! Joke's on me now ... the article's text has been updated to "Last week, The Register wrote about one part ..." (it used to say: "yesterday ...") ... and last week's piece itself was updated to say "next week" (where last week it said "tomorrow") ... all good and professional IMHO ... but this does beg the question of whether Berlin is indeed the 3-day-weekend capital of the world, where Monday follows Thursday (the days in between being a general blur, for everyone, not just journalists), and if there might be other locales with 4-day, 5-day, and even 6-day weekends ... inquiring minds ...
That athlete analogy is worrying. Competitive athletes have relatively few years at the top. Then they're discarded and have to find some other way of earning a living. Some may become trainers, some sports journalists, but what happens to the rest? Is this what he thinks should happen to software developers? Does he work for IBM?
You demand higher wages for a short period of your career and then retire / build an investment empire like top level athletes / sports people.
Seriously, do people think they have to grind for life to earn a living? I have no intention of working for life if I can avoid it. I want to be done with full time work at the age of 45 at the latest. Retired by 50.
Anyone that says they want to work for their entire life is full of shit. "Oh I can't see myself retiring, I'll work until I die". That just makes you sound like a wanker. There are two reasons I'll get out of bed and work. 1) The work is interesting and fulfilling. 2) The money is amazing.
I've been a professional since I was 19, back in 2003...I have never been out of work...ever...and I think that is down to how I taper off my use of technology and taper in new technology. There has never been a cliff edge for me.
Recently, I've been tapering in Svelte, DaisyUI and Tailwind and I've begun tapering out React, Vue and Angular (I am aiming to ditch them within a year).
I have no intention of working for life if I can avoid it. I want to be done with full time work at the age of 45 at the latest. Retired by 50.
Whilst I understand this sentiment I also think it's a very flawed way of looking at life. The one question that nobody can answer is "how long will I live for?".
You could retire at 50 and die at 51 (I sincerely hope not, but it's possible). Equally you could live to be 100. If you think carefully about either of those scenarios, the idea that you have some kind of linear working path is bollocks. No amount of retirement planning will allow you to truly know what circumstances you and the world will be in, say, 30 years from now.
A better way to look at this situation is to make sure you actually do things you want throughout your life irrespective of whether you are "retired" or not.
Sure, you could also die a billionaire, never reach the bottom of your fortune, and massively underachieve.
Retirement is different things to different people. Retirement to me is not sitting on my arse doing nothing...retirement for me is giving up the grind and doing what I want...and what I want is to work on my own projects...retirement is about drawing a line under delivering other people's visions for shit money 5 days a week...48 weeks a year...being the master of your own time.
Clocking in, clocking out, working to someone else's schedule, delivering someone else's project, having to turn up at specific times and being released at a specific time, fixed holiday allowances etc etc...this is all bollocks and should be nipped in the bud as soon as is practically possible. It's no way to live. It's a shit way for humans to exist...accepting it is failure...rejecting it because you can is success.
You don't get to be Bill Gates, Mark Zuckerberg, Elon Musk, Steve Jobs (a list of possible wankers, sure, but some day someone has to reach that level without ending up being a tosser, maybe it'll be one of us?) etc by delivering other people's projects and grinding...you get to be them by delivering your own projects and realising your own vision.
True, but it's extremely difficult to become an industry leader if you aren't the master of your own time. You can't really be one without the other...the common mistake people make is thinking they need money to be an industry leader...which is untrue...what you need is time and how you get time can vary massively...some people are perfectly happy making personal sacrifices to open up time...others are not and require money to open up time...the former are more likely to succeed and be respected than the latter.
"the trick is to retire when you can rather than chase an ever growing money tree"
Absolutely, a lot of people miss this trick though because most people can be bought to some degree...it's why the most widely accepted remuneration for work is money not time.
People moan about not getting pay rises, but they don't tend to moan as much about not having any time. In a situation where I've been refused a pay rise, I've always negotiated fewer hours to compensate. You'd be surprised how often that works.
"Fine, you don't want to increase my pay by 10%, how about we reduce my time then? You get to keep your operating costs in order, but I get some more free time, I'll extend 4 days of the week by an hour, in return for a day off per week".
In this scenario, the boss thinks they're on to a winner. They escape a pay rise for what seems like a 4-hour reduction in time...but really, you're gaining a couple of working days over the course of a month and you're now in a position to take a week off at the cost of 4 days of holiday instead of 5...so your holiday time is 20% cheaper...just never pick Monday as your day off, because it overlaps with public holidays. Always pick Wednesday...it's the least objectionable day.
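A quick sanity check of the trade-off, assuming 8-hour days and a 4-week month (both assumptions mine, not stated above):

```python
OLD_HOURS_PER_DAY = 8
NEW_HOURS_PER_DAY = 9            # the four extended days

old_week = 5 * OLD_HOURS_PER_DAY             # 40 hours
new_week = 4 * NEW_HOURS_PER_DAY             # 36 hours

# Hours clawed back over a 4-week month: 4 per week, i.e. 16 hours,
# which is two standard working days.
saved_hours = (old_week - new_week) * 4

# A week off now costs 4 days of holiday allowance instead of 5.
holiday_discount = 1 - 4 / 5                 # 0.2, i.e. 20% cheaper

print(saved_hours, holiday_discount)         # 16 0.2
```

So the holiday arithmetic holds, and the time saved comes to roughly two working days per month rather than a full week.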
Whilst this is true, work dictates what I do for forty hours almost every week, hours that could be spent doing other things. I can't do some things I would on holiday, because they need to happen during working hours. Add on regular exercise, chores, and suchlike and whoosh, another week has gone by, even allowing for some evenings out.
There is a balance here. It's undoubtedly better to do things throughout your working life than to work, have no life, and retire somewhat earlier. But make no mistake: despite work being basically liveable, I wouldn't be putting up with the general daily bullshit if I wasn't paid for it, and if I won the lottery big I'd have my three months' notice submitted within the week.
Most British athletes (and those of many other countries) get very, very little funding, if any at all.
To put it in context, in 2012 when the UK hosted the Olympics, the British Women's Volleyball team got £0 in funding. They resorted to sponsored bike rides to raise the money needed to actually compete in the games held in their own fucking country.
Quote from the BBC at the time:
"Volleyball was one of eight sports to have had London 2012 funding slashed in January because of a £50m budget shortfall"
Let’s consider buggy LLM-generated code. Buggy because it doesn’t fulfil the application requirements or just plain buggy ‘cos it’s wrong.
Who reports this? Who confirms this? Who assigns this for fixing and who fixes this? Do we assume LLMs in the roles?
Assuming a successful fix is created, should this propagate to other LLMs so the same mistake isn’t copied? What process is now followed, if any, and is the benefit local-only, or global?
If the bug was found in some npm library, is the fix checked and merged? Who manages this, or if it’s not managed, what happens? Suggestions?
Reusable software components were supposed to be one of those many, now-tarnished silver bullets. Are we about to discover that we now have a big bag of dodgy, rusting iron filings? Or doesn't it really matter, because the complexity will be managed and the results will eventually converge to bug-free nirvana?
Don’t mistake this for doomsaying; I’m a fan of predictive code completion. But there’s a case to consider for complexity management and how that’s communicated.
…
1. If you gave an LLM application requirements of any complexity it almost certainly couldn't write the code to fulfill that. It might be able to rattle off some common issues around similar application requirements, which might or might not be helpful. Just having the interaction might trigger a useful idea on the human side.
2. There is no substitute for checking and testing everything an LLM writes.
Given the above constraints, if you can still make LLM coders work for you, then why not?
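Point 2 in practice means wrapping whatever the LLM produced in tests you wrote yourself from the requirements. A minimal sketch: `slugify` here is a hypothetical stand-in for some LLM-generated function, and the assertions below it are the human's, not the model's:

```python
import re

def slugify(title: str) -> str:
    # Pretend this body came straight from the LLM: untrusted until tested.
    s = title.strip().lower()
    s = re.sub(r"[^a-z0-9]+", "-", s)
    return s.strip("-")

# Your tests, derived from the requirements, not from reading the generated code:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("---") == ""   # degenerate input must not crash
```

If the generated code fails a test you wrote independently, the bug is caught before it ships; if you only read the code the LLM wrote, you tend to see what it intended rather than what it does.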
"Today, unfortunately, too much of that time is spent with the ceremony between you convincing the computer what to do by writing code in the arcane language that is purpose-built for the compiler and not the human."
Yes, I'd love to have programming be easier too. The reason programming is hard is because you must clearly and unambiguously state what you want to have happen. Irreducible complexity is irreducible.
"That means we don't do a lot of rough drafts, we don't do a lot of prototyping, we don't do a lot of models and renderings. We don't go out and talk to every customer and watch them work. But imagine if you had your time back. I think the new software developer would be doing way more of those activities."
WE HAD THAT. It was called "system analysis" and "gathering requirements" and it was axed because management saw it as a waste of time and money.
The system I'm working on was developed in the late '90s after 5 months of talking to the users, seeing what they did for a living, and actually sitting down and DESIGNING the system to help them. (edit: yes it's creaky and needs a lot of work, but it's still here because it still gets the job done. edit part deux: and don't think there haven't been something like 12 attempts to replace it that have failed because management won't spend the time & money to do it right)
Today you have a bunch of useless assholes sitting in a meeting room, who have probably never talked to the users, much less know what they do, spouting off nonsense requirements for two hours; then the meeting is over and the software is expected next week. And we wonder why Oracle rakes in millions for participating in these circuses. They specifically tailor contracts for "people that have no clue what they're doing".
Programming is hard because once you have done it you need to verify that it is completely correct ... that's the hardest side of it, because you have to attempt to fix the problems you find and then return to the verification process again. I'm sure AI is designed to do a good job, but it virtually never verifies its programming, and most of the errors are treated as your problems, not AI's.
The environment when we were coding in BASIC and then starting to use Visual Basic was so much better to use, but with new problems too - I was happy writing all the code, but always accepted that I had to deal with the users' function errors. Coders have to do it because code never does it reliably.
"Programming is hard because once you have done it then you need to verify that is completely correct"
That right there is performance anxiety. If the code works, it works. There are tons and tons of different ways to do the same thing across almost all programming languages...who is to say whether one particular way is the correct way?
As programmers we look for efficient ways to do things, quick ways to do things, secure ways to do things...and if we're really lucky...interesting ways to do things...but we never look for "the right way" to do things. If we did that, we'd never get anything done. Arguing about the "correctness" of something, the philosophical side of development, that is for neck beards on message boards.
As a younger techie, many many years ago, I used to worry about doing things "correctly", about how colleagues might view my code etc...but eventually I realised there is no such thing. Only the result matters. If you write some code and it works (hopefully without introducing more complicated or worse problems)...then it is correct.
If you write code that solves a problem, but introduces a less complicated problem...that is still a net win...the code is still correct...it's almost impossible to write bug-free code...the end goal is to solve problems until the bugs only exist in areas of your solution that don't cause any major problems...even multi-billion-pound mega corps with thousands of devs can't write bug-free code...you, working on a team a fraction of that size for a fraction of the budget they have, don't stand a chance if you worry about "correctness".
"If you write code that solves a problem, but introduces a less complicated problem...that is still a net win...the code is still correct."
what utter bullshit.
if it introduces another problem then it didn't fix it correctly, and you are a shit and lazy programmer, and your colleagues who have to find and fix your shit know you're a shit programmer and wonder why the bosses haven't canned your ass.
but your attitude indicates you're probably a very good arse licker. which explains that!
Hey, in your world it might be possible to take a product offline and refactor an entire code base for each bug fix, but in the real world that isn't possible.
It's not lazy, it's working with what you've got. Would I prefer to release a fix that is perfect? Sure. Will the projects that I work on allow me to put a product in "maintenance mode" for a few weeks while I refactor everything to accommodate a fix? Fuck no...not unless it's an absolute show stopper or the risk is so high that it's worth taking a week or two hit to the company bottom line.
Employees: *trying to get on with work*
Manager: Sorry guys, Dave in the dev team is trying to get everything perfect. We have to down tools until he's happy with it and unfortunately, since the production line is now off we have to send you home with no pay until further notice.
I'm perfectly fine with other devs hating me, that's par for the course working in tech and you don't get fired for being hated, but I can't abide by the business seeing me as a liability because my quest for perfect code costs them millions of pounds a day and brings the profitability and feasibility of the business into question...that will get you fired or worse, lead to shittier budgets in the future and a working environment where everyone is miserable because the company is constantly running out of money.
The vast majority of developers out there work on codebases they didn't create themselves that can be a decade or so old and have terrible documentation...if you work in the banking sector, it can be several decades old...and usually in these circumstances it is seldom possible for a fix to be perfect...as a developer, you're not up against whether your colleagues think your code is perfect...you're up against the powers that be deciding whether it is financially viable to keep a product going or whether they should just drop it...which can have a direct impact on whether or not they need you. Whether or not a product is still feasible could be the difference between you having a job and spending 2 years sending your CV out and waiting for another offer. If the impact to the business is greater than the value of the product, you are in the shit.
What a load of crap. You don't need to stop everything and send people home to write code that doesn't introduce new bugs when you are fixing a bug. Have you not heard of version control? TDD?
Take some pride in your work instead of folding and doing whatever the suit with no clue says they want. I pity your coworkers.
That is exactly how you end up, after a few years of doing this, with unmaintainable messes that are easier to throw away and redo from scratch than to fix. That's only efficient if all you care about is next release and next fiscal term. If the objective is to make a product that will keep making lots of money for decades while being cheap to maintain, it's an extremely inefficient approach.
"If the objective is to make a product that will keep making lots of money for decades while being cheap to maintain"
Yes, that's called business. The problem is, none of us have crystal balls. All projects set out to be the best they can be...it's time that is the enemy...what is perfect at launch, might turn out to have a showstopping bug a year down the line...because bugs might be found in libraries, browsers...factors out of your control that you rely upon...that library you used for a specific purpose might have been great when you first built the project, but further down the line CVEs might come to light and now you have to write code to mitigate those CVEs or worse, drop the library entirely and retrofit another one that isn't as good as the first library, but has fewer (if any) currently known CVEs...then a year later you're in the same boat again.
We'd all love to live in a perfect world where everything stops to wait for a perfect fix and our choices at the beginning of a project never come back and bite us, but the reality is...that's not going to happen...the wheels need to turn for money to be made and for everyone to get paid at the end of the month, and none of us can see into the future to spot the bugs coming down the pipeline that we couldn't imagine.
What's worse is the bugs become more and more complicated over time...in my earlier days SQL injection was the new kid on the block...so many sites were affected by that, so damned many...developers at the time didn't have the foresight to see SQL injection as being a problem...developers today also don't have the foresight to know what the next widespread bug will be...but they keep coming...new vulnerabilities are found all the time in all sorts of places, and pretty much all of them are ones nobody had the foresight to prevent...because if they did, there would be no new vulnerabilities.
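For anyone who missed that era, the SQL injection class mentioned above is worth seeing concretely: the whole bug is string concatenation, and the whole fix is parameterised queries. A self-contained sketch using Python's stdlib sqlite3 and an in-memory table (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

# Attacker-supplied "username":
evil = "nobody' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text itself, so the
# OR '1'='1' clause becomes part of the query and matches every row.
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{evil}'"
).fetchall()
print(leaked)   # [('hunter2',)] - alice's secret, leaked

# Safe: the driver passes the value separately from the SQL, so the
# quotes and OR are just literal characters in a name that matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)
).fetchall()
print(safe)     # []
```

The point of the anecdote stands, though: this pattern only looks obvious in hindsight, after years of sites were burned by it.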
Expecting that every line of code you puke up as a dev is perfect and will never lead to a bug at some point is naive...it's why we have refactoring, regression testing etc etc...but these things cost money and money isn't infinite...it doesn't matter how much money might be in the business coffers: if the product you're developing brings in X and you are spending Y to maintain it, and Y is higher than X, the product isn't feasible and the suits will axe it...because nobody wants to maintain a loss-making product...the only instance where that makes sense is if you're trying to keep a competitor out of a given market...but even then, you can only sustain it for a limited time.
You need to know your bugs and label them as "known limitations", which are either assigned a priority or further labeled as "won't fix". Simultaneously you have a roadmap of new features to add, and there is a skill to interleaving the development of new features and the fixing of old bugs such that effort is minimized. However, without sufficient resources, things have been known to get ugly.
> but introduces a less complicated problem
Yours is actually a great observation on problem solving, because many real problems have input uncertainty. This nicely explains why the lowest-level business operations feel so frustrating, but that's OK, as long as the business solves the majority of problems.
Thanks for sharing, as the dislikes may discourage others from doing so.
Indeed. The dislikes won't discourage...we all start out naively thinking that one day everything we produce will be perfect...but unfortunately, the products we build have to make money, because the time and effort that goes into producing something has to be paid for...we have probably all experienced a great quality product at some point that disappeared because it couldn't make enough money to survive because maintaining the level of quality they set themselves was just too darned expensive...and at the other end of the spectrum we've experienced lower quality products that have somehow lasted for years and years without going under...despite being relatively shit...because they make money.
The difference is, an awesome product starts out mind blowing, but because of the costs to sustain that level of awesome, they either have to price themselves out of the market or they have to start making compromises...features get dropped, bug fixes take longer, the perception is that quality generally starts to decline...even though the quality may still be higher than the nearest competing product.
On the flipside, a product that started out with no grand pretensions can outlast the mind-blowing one, because they ultimately don't have to sacrifice anything; they never set themselves unrealistic and idealistic targets, they just wanted to get a product out...and they ultimately come out on top because they remain consistent...the mind-blowing product starts to look like a money-grubbing villain because of the corners that get cut and the sacrifices that need to be made to retain some semblance of sustainability, whereas the so-called "inferior" product never has to compromise because they weren't idealistic...they were pragmatic, they just wanted to put out a product that was good enough and sustainable...and because they tend to outlast the "mind-blowing" product...they ultimately get its customers anyway.
It's idealism vs pragmatism essentially.
> 5 months of talking to the users ... to help them
It is very likely that the majority of custom software is actually quite generic and conceptually repetitive. Excel is a great example of a common denominator for many repetitive problems. Maybe it is not custom enough, but it is getting there from the generalization direction. In a way, backpropagation is exactly that: optimizing between generalization and customization.
The reason for emergent behavior of GenAI is that many problems have been solved many times already (incl software). The challenge has always been finding those solutions without spending months reading specialist literature or sifting through millions of lines of 3rd party code. The gen models spot those repetitive patterns and make them available instantly.
I was recently developing a program for a niche industry using Copilot, and the bot shocked me with large chunks of correct suggestions - while not perfect, it hinted I was reinventing the wheel. Which I kind of knew from the beginning, but had a hard time confirming from 3rd-party resources, given the obvious challenges of old-school information retrieval. Imagine what people from the 1960s would have thought of Google - they had to sweat in traditional libraries.
I believe the solution space is typically very sparse. The more complex a problem is, the further away the solution is from anything else, dimensionally. And, if a solution exists, it can be easily spotted by GenAI, which has a much larger memory and ability to spot correlations than humans. Spotting correlations in high dimensions is nearly instant for those rare people capable of holding very complex concepts in their heads.
The biggest risk for human workers is they could be, like software, optimized-out through generalization and customization. What's more: custom software is not even needed, because the humans are not needed in the optimized system. They just don't know it yet, thinking they are unique with their skills, or plain searching for problems to their solutions (skills). Fortunately there could be space for experimenting, especially in physical space. Something like baking bread, material science, dentistry, or surgery. The question is how much demand is for such services, and which of them can be substituted with GenAI.
Except that GenAI does not give you that.
What you actually wanted was to know where that code suggestion came from, so you could use the complete, well-tested library that did the thing.
Instead, what you got was little snippets of several different, incompatible libraries (e.g. using different up axes), complete with bonus lossy-compression errors.
You almost certainly did not find all of these errors and incompatibilities, and if the snippets are of significant size with few errors, at some point you may be sued for copyright infringement and you don't even know who to buy a licence from.
"at some point you may be sued for copyright infringement"
I've never really come to a conclusion on copyrighted code...on the one hand, we spend a lot of time doing what we do and protecting that work is important...but on the other hand, we're just re-arranging known syntax until it solves a problem, and the probability that there is massive code overlap spanning all the projects out there is pretty damned high.
The originality is not in the code itself; the originality is in the experience and problem-solving ability of the programmer that wrote it. Therefore it could be seen that copyrighting code devalues the developer that produced it, because the copyright on the code holds more value than the developer who wrote it.
Companies will spend millions protecting their copyrighted code if it is necessary...but a company will never spend millions trying to retain the developer that wrote it.
Whilst my pay is based on getting things done in the shortest amount of time at the lowest possible cost, I don't care.
Nobody gets paid to write original code, everyone gets paid to solve problems using whatever code works.
There are two areas of the programmer spectrum that will be massively affected by AI...the low end, because it's going to become a lot harder to get a coding job as an entry-level dev...because AI will probably become better and cheaper than a typical junior dev...and the high end...because AI has the power to turn an average developer into a pretty good one, and the genuinely good developers will be slowly boiled alive by ever-increasing competition in the job market, because the gap between an elite programmer and an average one will narrow.
Agreed on all points.
Oh, and PS, Oracle never did deliver a working product to Birmingham City Council in the UK, despite being paid over ten times the original estimate, and will likely be paid as much again to make what they have work, due to the sunk cost fallacy.
The AI coding assistants are, to me, little more than copy paste of examples found on Stack Exchange and similar sites. I have been using those kinds of examples as needed for over a decade instead of wasting time studying intricacies of API details that I may or may not ever use.
Back in the 1980s I would study an API top to bottom, primarily because they were so much smaller then, and knowing everything that was possible helped to come up with good solutions to problems.
These days you can pretty much assume whatever you are looking for is there in a mature API, so it's just a matter of finding out how what you want to do is done.
What neither copy paste examples nor AI have shown any utility to me for are architectural questions. Those are often tough decisions with many trade-offs and more opinions posted on the web and shared around the water cooler, but canned opinions don't really apply to your particular circumstances most of the time.
I think just about everyone knows how the flow of software development works, and the difference between how one should do something and how it actually gets done. The problem really starts with management: they tend to have the mindset that if someone's not actively banging out code (or debugging same) then they're not programming. I've met this mindset over and over, and learning how to live with it, and dare I say evade it, is an essential programmer survival skill.
The real work is in the specification document, the thing that fleshes out the user requirements. All too often this is an incomplete, half-assed thing that's crammed with 'TBDs', especially after the first few pages. This should be quite a detailed description of what we're intending to make. Once it's fleshed out and any prototype work to verify particular components is complete, the actual coding takes no time at all. The tricky bit is that there's a lot of repetition in programming, so we might not need to enumerate every component in detail if its properties are well known, but knowing what to skip and why requires skill and experience; you can't just blindly copy and paste and hope it all holds together. Automatically generated code is just a really upscale copy and paste (certainly at the moment); it may give results, appear to know what it's doing and so on, but it probably doesn't, and someone's got to figure out how to verify it (which is not "just chuck it out and let the users find the bugs").
There are various different requirements in software, depending on the intended audience. The sectors with the deepest pockets, though, will need products that have one or more of these qualities:
1. Highly compact software
2. Highly secure software
3. Highly robust software
4. High-performance software
5. Highly distributed software
6. Algorithmically precise software
If AI is to do anything more than replace the developers of Candy Crush clones, then it has to be capable of operating in at least one, ideally several, of these areas.
There's obviously more to it, skilled programmers are language-agnostic, paradigm-agnostic, and specification format-agnostic.
I could easily write out a full-length challenge that would test the capacity of an AI to its limits, but let's start with those six domains.
If a vendor or a prompter can demonstrate AI's capability to do these six things, then it's worth talking about.
All I've seen of what AI has done for developers is a million and one articles on Dev.to showing me how to write an AI-powered chat bot for my phone, Windows, Linux, a webpage, a fridge, a garden shed, or the dead badger I found up the road while out walking this morning! All I've seen AI do is offer bored developers something to do of an evening, so they write articles to get kudos points on social media websites. No one I know has actually done anything practical with AI hooks and APIs; some I know have butchered AI hooks into software to shut their PHBs up so they can put "We have AI in our software!" stickers on the sales pitches.
When I see something useful from AI, that isn't wanking off Sam Altman's ego, then I'm listening. Until then, just give it a rest.
'Every country invests in their athletes year-round'
Really? Ever talked to even Olympian-level athletes? With few exceptions (those sponsored by private companies), athletes are poor and underfunded, and it's their personal passion and drive, and that of the underpaid expert staff, that makes the difference, not govt funding.