Religion?
"It's turtles agents all the way down."
AI is easy to use, but not quite as easy as just barking "Alexa! Make me an e-commerce site." And, no, adding "DON'T HALLUCINATE" to the instruction loop won't help. More to the point, optimal AI results favor the well-fortified agent, according to speakers from IBM, Meta, and Netflix – among others – at the All Things AI …
So if AI is NOT taking everyone's job and is therefore NOT providing an obscenely sized revenue stream for the AI companies ... what will their future income look like after having spent billions upon billions on AI infrastructure? That measly $200/month subscription? That will never pay enough to wring any kind of positive ROI out of those investments...
"I want to be a programmer, whose programs consist of unambiguous instructions to a computer."
Not a lot of work for assembler programmers these days. Nice work, if you can get it. Consider the embedded world.
If you are bored, the Linux and BSD kernels are always looking for a few good coders ...
BASIC, Perl, Rust, SQL, C, Swift, C++, Python, Javascript, Brainf***k...
All these and many others are ways to give unambiguous* instructions to the computer. The specific language isn't the important part of the OP's comment.
Assembler isn't the native language of the CPU either, btw. It still needs "assembling" and linking before it can be executed.
*Unambiguous if the instructions are valid - syntax errors and undefined behaviour are by definition invalid
Not one of the languages you cite provides unambiguous instructions to the computer. Rather, they all provide instructions to a compiler or interpreter (which may or may not be unambiguous, depending on the skill and/or intent of the author(s) of the compiler or interpreter). At the end of that process, possibly with the help of a linker, it spits out a pile of ones and zeros that the CPU can read directly. Macro assemblers can do essentially the same thing, depending on the macro assembler.
I shouldn't have used the word "assembler" in that context in my original (although old-timers would know what I meant), rather I should have used machine code. Which does not necessarily need linking. Most of the programs I built in the first years of my career were lovingly hand-assembled from octets of ones and zeros. And no, no linker was required.
My first real job involving programming was writing the assembly op codes and operands out on a sheet of paper with a pen, then filling in the hex representations in the right-hand columns, which I found in a lookup table. I dealt with timing by calculating the cycles required at the clock frequency. An assembler doesn't do much more than the latter if you are working with straight instructions and not directives.
Then I punched these in via a hex keypad. So assembler can be unambiguous.
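The pen-and-paper workflow described above can be sketched as a lookup table from mnemonic to opcode byte. The 6502 is an assumption here (the commenter doesn't name a CPU), but these particular hex values do come from its instruction set:

```python
# Hand-assembly as described above: look up each mnemonic in an opcode
# table and write down the hex bytes yourself. CPU choice (6502) is
# illustrative; the commenter doesn't say which chip was involved.
OPCODES = {
    "LDA #": 0xA9,    # load accumulator, immediate
    "ADC #": 0x69,    # add with carry, immediate
    "STA abs": 0x8D,  # store accumulator, absolute (operand little-endian)
    "RTS": 0x60,      # return from subroutine
}

def hand_assemble(listing):
    """Turn (mnemonic, operand-byte list) pairs into machine-code octets."""
    code = []
    for mnemonic, operands in listing:
        code.append(OPCODES[mnemonic])
        code.extend(operands)
    return bytes(code)

# LDA #$02 / ADC #$03 / STA $0200 / RTS
program = hand_assemble([
    ("LDA #", [0x02]),
    ("ADC #", [0x03]),
    ("STA abs", [0x00, 0x02]),
    ("RTS", []),
])
```

Those eight octets are exactly what would have been punched in on the hex keypad: no ambiguity left for anyone to interpret.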
"Write me a web server" - <<too vague, decompose and try again>>
"Write me a HTTP parser, a file fetcher, an HTTP responder" - <<better, but decompose some more, please>>
...
"Write me a function to open the file whose name is the RHS of the URL" - <<hmm, bit more detail please, don't want to disappoint you>>
"Write a line that goes 'f = fopen(url_rhs); oh FFS I'll do it myself!" - <<hmmph, please yourself. Humans, they don't know what they want>>
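For what it's worth, the "do it myself" line in the skit hides exactly the kind of detail the human ends up owning anyway. A hedged Python sketch of that one step (the docroot and function name are made up for illustration, not from any real server):

```python
from pathlib import Path

# Open the file whose name is the right-hand side of the URL, as in the
# dialogue above. DOCROOT and fetch_file are illustrative names only.
DOCROOT = Path("/var/www/html")

def fetch_file(url_rhs: str) -> bytes:
    # Strip the leading slash so the path joins under the docroot,
    # then refuse anything that escapes it (the classic ../ traversal
    # that a naive fopen(url_rhs) happily allows).
    target = (DOCROOT / url_rhs.lstrip("/")).resolve()
    if not target.is_relative_to(DOCROOT):
        raise PermissionError("path escapes document root")
    return target.read_bytes()
```

The traversal check is precisely the sort of requirement the human knew they wanted but never put in the prompt.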
Same way they do everything else - they copy somebody else's code.
It's other people's code all the way down. The gamble vibe coders make (and it is a gamble - the psychology of vibe coding is very much the psychology of gambling addiction) is that the AI will copy the code from somebody competent.
AI has allowed Ilegbodu to code in languages he doesn't yet know, such as Python, Bash, and Groovy.
If you don't know the language, you won't recognise plausible garbage.
Worse, you won't be able to spot that it only produces the desired output for the one or two cases you tested. If you're lucky.
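A hypothetical illustration of the point: code that looks entirely plausible and passes the one case the author happened to test, while being wrong everywhere else.

```python
# "Plausible garbage": this conversion looks reasonable and passes the
# single tested case, but the line c * 2 + 30 only agrees with the real
# formula (c * 9/5 + 32) at exactly 10 degrees C. Purely an invented
# example, not from any actual LLM transcript.
def celsius_to_fahrenheit(c):
    return c * 2 + 30
```

`celsius_to_fahrenheit(10)` returns 50, which is correct, so a one-case test suite waves it through; `celsius_to_fahrenheit(100)` returns 230 instead of 212. If you don't know the domain, you ship it.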
I found it interesting that the Python output of the LLMs I've used seemed pretty good a year ago, but is obviously quite bad now. Has it actually gotten worse, or is it because I know more Python?
A better question still is why would I take seriously something from someone who doesn't know Python or Bash? If you haven't written a few bash (or equivalent) scripts, can you even really consider yourself a programmer these days?
And if you can't write some basic Python after spending 10 minutes looking at some basic tutorial material, you're not a programmer either. And when it comes to the non-basic features (e.g. list comprehensions and the like), you're not going to be able to review serious Python code without spending some time learning the language.
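To make the "non-basic features" point concrete, here is the gap between ten-minutes-of-tutorial Python and the idiomatic form a reviewer actually has to read fluently:

```python
# Two ways to produce the same list. The explicit loop is what a
# beginner writes after a quick tutorial; the comprehension is what
# real Python code looks like, and what a reviewer must parse at sight.
words = ["spam", "ham", "eggs", "bacon"]

# Explicit loop version.
shouted = []
for w in words:
    if len(w) > 3:
        shouted.append(w.upper())

# List comprehension version: same result in one line.
shouted_comp = [w.upper() for w in words if len(w) > 3]
```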
It sounds like this is the sort of guy who knows just enough to be dangerous around computers and happily accepts anything the AI spits out as being valid.
Do I really want to take this guy seriously, or should I treat what he says with the same sort of skepticism which we are supposed to treat the output of an AI?
> A better question still is why would I take seriously something from someone who doesn't know Python or Bash?
This is the sort of person whom AI is for.
The sort of person who has never lifted a paint brush in his life and can't see that his AI art sucks.
Code is no different but somehow a lot of "programmers" can't fellate the AIs hard enough even when it produces the equivalent of code that has 7 fingers in a vital but obscure function that looks like it's in the background. What does this tell you about these so-called programmers?
Software engineering it ain't.
> Do I really want to take this guy seriously
lol no. This is "maybe our last round of promises didn't work out but if you just AI harder the next funding round will AGI for sure!"
AI has allowed Ilegbodu to code in languages he doesn't yet know, such as Python, Bash, and Groovy.
But this context switching can get wearisome, he admitted. "At the end of the day, I'm actually kind of tired, because effectively, I spent the whole day talking to something."
So you may as well be a chef, then, with no understanding of heat, pectin, etc. ...
The '10x programmer' framing misses what's actually happening. Before, you spent time writing code. Now you spend time writing prompts, reviewing code you didn't write, fixing things the AI confidently got wrong, and explaining to your LLM why its 'better' solution broke three things that were working fine. The work hasn't halved. The keyboard time has. Those are not the same thing.
> Jeffress noted that AI can usually do 80 percent of a given job, leaving the last 20 percent to be finished by a human.
ITYM the last 80%. And therein lies the problem. AI can do the boring easy bits and gets stuck on the bits everyone would have got stuck on anyway. Except now you haven't done the first 80% yourself, and you have to spend 80% of your remaining time figuring those parts out, when you could have written them yourself in the first 20% of the time you had available.
Perl: Making easy things easy and hard things possible.
AI: Doing easy things for you with 3½ fingers on each hand and making hard things impossible.
A: Simple! Just spawn a whole bunch of AI agents to (upside-down) „op„ it all for you ... the more the merrier! It'll tire you right up "at the end of the day" (as it should) and challenge your human intelligence to make up for its awesome artificial defect as you become "the conductor of your own orchestra" of context rot agents ... upliftingly! You'll be glad it does the 80% of stuff that's a piece-of-cake while you need to sweat out your butt-ugly backside to get it through the remaining 20% you don't understand, in a programming language you didn't bother learning, as you wishful prompted your life to be spent in full bliss as a social media influencer instead ...
Do all the hard work yourself so the AI agents (so-called) can no-sweat the easy stuff while your ass is machine-gun dropping bullets, with vast happiness, as a newly conned wage slave -- that is the spirit! Gotta "pay the prep tax" AI piper upfront, and the AI middleAI intermediary, or else ... </This so-called AI is blackmail!>
Weird that at a conference sponsored by companies who only make money if you buy more tokens from them, there are a lot of presentations suggesting the solution to whatever your problem is has to be even more agents.
Just keep spending the money and losing touch with how to do your actual job and somehow their businesses will become profitable in spite of unbelievable costs and ridiculous inefficiencies.
I once knew a really good programmer who would sit round, apparently doing nothing, for months. He would then type for a couple of weeks and a program would emerge that was brilliant. I asked him what his trick was and he said, "I sit around thinking about the problem a lot." Our bosses were frustrated with this behaviour. From their perspective, if he had been typing the entire time they would have got even more genius code.
AI enables a new generation of developers to avoid thinking about the problem. It enables a new generation of bosses to press for more speed - and then whine about the lack of quality later.
Should be more of that ! Everywhere !
For starters there would unarguably be far fewer prematurely and unnecessarily dead people.
There could be a global conspiracy to suppress thinking of any sort: The Abolish Intelligence project ? Ridiculous of course — any plausible conspirators amongst the usual suspects have a mental deficiency of far more than the usual number of sandwiches short of a picnic normally accorded the village idiot.
While not a developer or programmer by profession, I have always preferred to solve programming problems with pencil and paper long before molesting a keyboard. I find that way I think more about the problem in the abstract, its implicit structure, rather than trying to attack it with some half-baked code. Once the problem and its structure are clear, the code more or less writes itself.
That was me when I was a programmer/architect/estimator. It drove the CEO crazy that I had to sit and have a "good think" about the problem space to think of as many of the edge cases as I could, and in some cases investigate some things to better understand the specific needs of the client. He absolutely did not understand that typically the easiest part was writing the main code to solve the basic problem at hand. Knowing and understanding all of the ways things can go wrong, or how a user can potentially bork the system, is where 90% of the real work lies.
I would often think things through and then consult a very intelligent colleague to check my logic, which would lead to a back and forth. At the end of the day it led to solid code with few issues. Zero AI nonsense required.
Rookie boss mistake here. Assumes that 'programmers' exist just to 'write code'. They do if they have to, obviously, but the tricky bit is figuring out what to write and integrating that design with an appropriate test strategy. (Again, another rookie mistake is "It ran overnight (or over the weekend)". I've always wondered what people mean by that.)
It sounds like the AI champions have discovered the programmer's lament -- "It never does what I want it to do, only what I tell it".
"Vague instructions lead to diffuse results, he told the audience. Clearly thinking about what information you are giving to the agent is the work of context engineering, which, in the short time of agentic AI, has become an art form, if not quite a proper discipline yet."
Oh. My. God.
Someone said it out loud! Though not in so many words.
Clearly thinking about the problem, to the point of being able to articulate it to someone (or something) else, is an absolute requirement for actually being able to do the work.
Generations of pointy-haired (and pointy-headed) managers immediately cried out "But our bonuses depend on *not* doing this and blaming the resulting mess on subordinates!"
While wafting upstairs on what successes their subordinates wrest from the chaos.
Perhaps this AI thing, and the resulting "oh we can fire more people as a result" will actually make it clear that the value-add is being able to take the fuzzy stuff that the board comes up with and translate it into something that a machine can execute.
It is clear those writing about AI never bothered to read up on the software development process. If they had, they would know why the phases before "coding" were so important and time-consuming, and that problems solved/addressed early result in less (expensive) work/rework further down the line.
It would thus seem that those whose jobs are most at risk are the "keyboard junkies" - the people who didn't bother with specifications and design but jumped straight into coding and designed on the fly. AI prompt writing, it seems, needs good design, specification and (pseudo) natural language skills.
At what point does it just become more effective to create simple, deterministic processes or do the work yourself? I dare say rather quickly.
I posit that almost any task you need an agent to handle correctly will fall into one of two categories:
1. It's simple enough that you can handle it almost entirely through a simple chain of processes in low code solutions (Power Automate et al.). And end up with a much more reliable outcome, much more quickly.
2. It's complex enough that to get a reliable output you may as well write a full UML diagram alongside your prompt and feed it to the LLM. At which point just write the code yourself I guess?
The main way I could see these "agents" being useful is handling ambiguity within a narrow scope; a single cog/process as part of an otherwise classic and deterministic chain of processes.
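The "single cog" architecture described above can be sketched as a deterministic chain with the fuzzy step confined to one function. `classify_intent` below is a keyword-matching stand-in; in a real pipeline it would be the one and only LLM call, and everything around it stays testable:

```python
# Deterministic chain of processes with one narrow, ambiguity-handling
# cog. All names here are illustrative, not from any real product.
def classify_intent(free_text: str) -> str:
    # Stand-in for the single LLM call: map free text to a known intent.
    text = free_text.lower()
    if "refund" in text:
        return "refund"
    if "deliver" in text or "shipping" in text:
        return "delivery"
    return "unknown"

def handle_request(free_text: str) -> str:
    # Everything outside the classifier is classic, deterministic code.
    intent = classify_intent(free_text)
    handlers = {
        "refund": lambda: "Queued for the refunds workflow",
        "delivery": lambda: "Queued for the delivery-status workflow",
    }
    return handlers.get(intent, lambda: "Escalated to a human")()
```

Only one step in the chain can surprise you, and an unrecognised intent falls through to a human rather than letting the fuzzy component improvise.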
But of course the zealots continue to insist on all or nothing. After all, if it can't do *everything*, why bother, right?
I suggest that involving an AI agent doubles the task: you have the real task at hand, i.e. what the program needs to achieve, and a second task of orchestrating the AI agents to increase the chances(*) of a clean output.
(*) Given the way AI works, you cannot "ensure" the output is clean; you can only increase the statistical probability of the AI creating a clean (but probably wrong) output.
I don't exactly disagree, but I think there are narrow use cases. As an example, a chatbot that can take free-text user input, then analyse it and decide which canned chat journey to put the user on (or direct them to a real person).
It's a hybrid between 'pick from these options' and 'go describe things in your own words'. The AI involvement?: 1 single chat step to take ambiguous input and put it on an appropriate track.
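The hybrid described above can be sketched as a scored router with a confidence floor: one AI-ish step maps free text onto a canned journey, and anything below the threshold goes to a real person. The journeys and the keyword scoring are illustrative stand-ins, not any real chatbot's API:

```python
# One triage step: free text in, canned-journey name (or human handoff)
# out. A real deployment would replace the keyword scoring with an LLM
# call, but the shape of the routing stays the same.
JOURNEYS = {
    "reset_password": {"password", "reset", "login", "locked"},
    "track_order": {"order", "track", "parcel", "package"},
}

def route_to_journey(user_text: str, threshold: int = 2) -> str:
    tokens = set(user_text.lower().split())
    best, score = None, 0
    for journey, keywords in JOURNEYS.items():
        hits = len(tokens & keywords)
        if hits > score:
            best, score = journey, hits
    # Below the confidence floor, hand off to a human rather than
    # letting the bot guess its way through the whole interaction.
    return best if score >= threshold else "human_agent"
```

The AI (or its stand-in) touches exactly one decision; the canned journeys on either side remain deterministic.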
So there can be some benefit. However having an AI handle the entire interaction with the user is a dumb idea... And I've now seen it far too often.