Hmmmm
I much prefer a world where people can look under the covers and understand what's going on, rather than leaving it to AI. That's what a healthy relationship with AI would look like.
Nvidia CEO Jensen Huang believes AI has advanced to the point at which it's no longer necessary to prioritize computer science and coding education for the world's youth. Apologies to the past decade of CompSci grads, but your college years would have been better spent gaining expertise in areas like science, manufacturing or …
Computer Science is not (only) programming. And judging from ML generated output so far, I don't think we should just trust the system to give us correct results. I would say that journalists (who hopefully do check sources!), developers (and that includes much more than just writing code, ffs) and scientists so far are pretty safe.
Learning to think has never been a bad thing. I don't write (a lot of) code any more, but it certainly gives you an insight into how to break problems into smaller pieces while not losing sight of the bigger picture. Same with hand-tool skills (woodworking is the most accessible) and needlework. Learn a bit of both. It is not only fun, but it challenges your brain in a wonderful way. My kids can sew and saw in a straight line.
He said CODERS would be replaced by software.
That I can believe (eventually, some future generation of AI well beyond the current one) because that's basically rote translation of a concept into code. That's easy; a lot more people can do that than can manage the truly difficult part of programming, which is gathering requirements and creating a spec for what the program is supposed to do. That's the really hard part, and AI won't help you there - but he wasn't saying it would.
But assuming some humans have done that and produced a detailed spec saying exactly what the program should do and how it should react to all the various inputs and circumstances, then yes, I could easily see some type of automation take over and translate that into whatever computer language you prefer.
It won't do what you want, because you'll have got the spec wrong, but the one improvement of this method of programming over the current one is that instead of making code changes to work around the shortcomings/errors in the spec (i.e. to do what the end user WANTS rather than what he told you, or what he thought he wanted until he saw it in action and realized that wasn't what he wanted after all), you fix the spec and have the automation generate new code from scratch. Basically the spec (which might be part English, part flowcharts, part mathematical formulas, part government laws and regulations, part APIs from third parties, and who knows what else, all combined into a toxic witches' brew) becomes the source code.
I've long thought one of the reasons there are so many computer languages is that academicians think programming is writing code, so they think a language that makes that code more "elegant" will be better. They don't understand what real world programming is.
Ah, yes.
You are correct.
Just soooo many people confuse the two. I assumed that ... id-ten-T was one of them (still 90% certain)
The academics are not programmers. The academics are more theoretical and do get the algorithm part of computing science. They are not at fault for the bajillion different languages. And, to me, all languages in a certain group are same-ish. C, FORTRAN, R, Basic and Pascal are all procedural. Then there's object oriented, and finally the weird Lisp stuff. Most of what divides languages within a group is syntax and some details (like row major / column major, or pass by value / pass by reference, which is important, of course, but not enough to make them oooh so different).
《pass by value / pass by reference,》
And of course pass (or call) by name [Algol 60]. (And Jensen's device. :)
I imagine lambda expressions serve the same purpose in the fashionable languages today. :)
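For anyone who hasn't met it, here's a rough sketch (mine, in Python, not anything from the Algol 60 report) of how a lambda can stand in for call-by-name, Jensen's-device style: the summation routine re-evaluates the passed-in expression on every iteration instead of receiving a fixed value.

def jensen_sum(lo, hi, set_i, term):
    # Sum term() for i = lo..hi; term reads the loop index indirectly,
    # so it is re-evaluated each time round, as call-by-name would do.
    total = 0.0
    for i in range(lo, hi + 1):
        set_i(i)         # update the shared "name"
        total += term()  # re-evaluate the expression with the new i
    return total

class Env:
    i = 0

env = Env()
# Sum of 1/i for i = 1..10, passed as an expression rather than a value:
result = jensen_sum(1, 10,
                    set_i=lambda v: setattr(env, "i", v),
                    term=lambda: 1.0 / env.i)
print(result)  # about 2.9289682539682538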
When you think about it, why would AI coders need high level languages? They could just dive into the instruction set architecture and produce binary directly. :) They could fiddle with the microcode in processor families that allow that (Alpha PALcode?). I assume global optimization would then be a doddle.
Anyway I think he is likely right but probably not for the reasons he thinks.
I.T. was the "steam" of the age of railway mania or the "electricity" of middle 20th century. During those two periods steam and then later electricity were used for all sorts of ludicrous purposes - some just plain idiotic, stupidly dangerous - which have passed into well earned obscurity. I.T. is currently used for similarly ludicrous purposes which will also vanish.
The next hundred or so years are going to pose such challenges that much of the worse than useless contemporary bullshit will be discarded very rapidly.
Unsurprising that the manufacturer of the hardware that AI/LLMs need truckloads of to run on would be boosting AI [MRDA].
but the one improvement of this method of programming over the current one is that instead of making code changes trying to fix the shortcomings/errors in the spec (the end-user spec), after fixing the spec you can have the automation generate new code
If that were true the end user wouldn't need you as a middleman. That's what CEOs dream of, and promise their activist investors in exchange for larger bonuses. From that POV, AI is the new outsourcing: a good excuse for getting rid of or demoting anyone who isn't in business or sales. How's that going, Boeing?
Basically the spec (which might be part English, part flowcharts, part mathematical formulas, part government laws and regulations, part APIs from third parties, and who knows what else all combined into a toxic witches brew) becomes the source code.
AI is nowhere near that. Where it is now is another iterative improvement. Hard wiring replaced by punch cards, replaced by microcode, replaced by compiled languages, added to by interpreted languages, enhanced by smart editors. AI is another programmer's tool. Its limitations are just as apparent as its benefits. For those limitations to change by another few orders of magnitude is going to have to wait until the next hardware revolution, because of resource barriers.
... eventually, some future generation of AI well beyond the current one ... Maybe - meteor strikes, etc. permitting. But fuzzifying what is with what might be is not engineering; it's entertainment or dreaming (fine), or a pseudo-religion (ugh), or, in the case of Jensen Huang and some others, good sales talk (expected).
If that were true the end user wouldn't need you as a middleman
The end users can't write a spec - they don't know what they want. Someone has to tease out of them what they actually want/need versus what they think they want. Someone has to ask all the right questions. If you asked the end users to write a spec today for replacing an existing/legacy system (which is what most programmers do these days) they would say "I want it to work just like the current system does". Some of them might throw in a few things here and there "I wish the screen that let you choose shipping addresses remembered which addresses had been used with which people so that the options I'd actually use would be at the top instead of alphabetical order" but mostly they'd just want things to work exactly how they have always worked. How is an automated system going to know "what the current system does"? That's where you start asking a bunch of questions, watching how people work to see what steps they follow and you build flowcharts and so on.
Writing something new is even harder to automate, because the potential userbase has no reference point. That's where you usually get the worst designs, because programmers (who have no idea how the people who will be using the system do their job or what would be most efficient) design something that makes sense to them. Then they get all frustrated and talk about "stupid users" because the end users don't understand the new app's workflow that seemed "obvious" to their on-the-spectrum brain.
No, automation will never replace that part of programming until we get a true AGI.
OK, so you change the spec to fix your bug and regenerate the code from scratch. And now you can throw away your carefully collected and filtered bug list, because those bugs are gone and a bunch of new ones have popped up!
Fix the next bug in the spec, regenerate the code again and... oops, same thing happened! Bin the list, start QA again, repeat. Possibly forever.
Software always starts wrong, and (sometimes) eventually becomes good through a process of incremental improvement. I don't think regenerating code in a non-deterministic way from a spec document can really be a good substitute for that process.
The other thing I notice in your example is that the spec now has to be massively more detailed and precise than a spec has ever been before - probably so dense and complex that a non-technical person cannot understand it. Now it's.. the code!
I don't think regenerating code in a non-deterministic way from a spec
Who said the process should be non-deterministic? The one requirement I would make of that "AI" is that it perform the same process so that it DOES result in the same output every time. That way if there's a bug "when the monthly spending is over $30,000 you need to use a different tax form" then you include that in the spec and next time that bug will be gone. You seem to be assuming that fixing that one item in the spec (and the spec isn't a "document", I thought I was pretty clear about that) will result in something else breaking just because that's sometimes how it happens with patching computer code.
I don't think that necessarily follows at all - if it does, then you weren't precise enough about your change to the spec to fix the "bug". That's why humans need to write the "spec" (or "collect it", in the case of stuff like applicable laws/regulations that have been written by others): they alone can understand exactly what the "bug" is and what requirements need to be changed to address it. If fixing it breaks other stuff, then either that part was screwed up by a human or the person reporting the bug did not get the bug report correct - it is either only a bug in certain cases or a broader bug than believed.
I don't think you'll ever get a fully deterministic "AI" as a service, especially one that remains so through updates. I suppose you could run one model per project on your own servers, never alter them in any way, don't allow any randomness as part of the system, and really hope it has no bugs of its own that need fixing.
However as soon as you change your prompt (spec) if it's generating "from scratch" it'll create a different thing than it made last time. So I think perhaps what you want instead is to make your specs immutable, and every change is applied with a "patch" spec (i.e. a change request) that the AI can use to make a modification to the existing program. If all this is really deterministic then it would be possible to generate the code from each successive patch in turn and end up with reproducible results.
(Also, c'mon, it's a document. Even if it's got pictures, and equations, and emoji, and a video, it's a document. We're in an HTML document right now.)
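For what it's worth, here's a toy sketch of that "immutable base spec plus ordered patch specs" idea (entirely my own illustration; generate_code() is a stand-in for the generator, not any real AI API). The point is just that regeneration is a pure function of its inputs, so the same base plus the same patches reproduces the same program:

import hashlib

def apply_patches(base_spec, patches):
    # Each "patch" is an immutable change request appended in order;
    # a real system would merge them far more intelligently.
    spec = base_spec
    for n, patch in enumerate(patches, start=1):
        spec += "\n\n[Change request %d]\n%s" % (n, patch)
    return spec

def generate_code(spec):
    # Stand-in for the code generator: deterministic because it depends
    # only on the spec text (faked here with a hash of that text).
    digest = hashlib.sha256(spec.encode()).hexdigest()[:12]
    return "# program generated from spec %s\n" % digest

base = "Sort invoices by vendor, then by part number."
patches = ["When monthly spending is over $30,000, use a different tax form."]

# Reproducible by construction: same base + same patches -> same output.
assert generate_code(apply_patches(base, patches)) == \
       generate_code(apply_patches(base, patches))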
Your "detailed spec saying exactly what the program should do and how it should react to all the various inputs and circumstances" *is* your computer program.
It sounds like you are just inventing a new programming language. It might be better than existing ones, but it is still a programming language.
It isn't a programming language if it doesn't have any spec. I purposely listed a lot of unrelated stuff (English text, flowcharts, mathematical formulas, laws, etc.) to indicate that it is NOT another language. There is no fixed format for what the spec is. That's not a "programming language", any more than a random assortment of letters and numbers is a "book".
If the "AI" truly is as capable as Huang wants us to believe, it will be capable of turning that disjointed set of information into computer code. Because that's what "coders" do after all.
Yes, that is a computer language, just one that has a lot of syntax options. There is a lot of that in languages already.
English text: "if 8 in list_of_numbers then"
Mathematical formula: Well, duh.
Flowcharts: I don't like those languages much and they're most often used to try to teach programming, but they exist.
Laws: You can drop one in. It will be your fault when the AI doesn't properly encode legalese into logic appropriate to your situation, since even a programmer nowadays requires a lawyer or two to do that correctly.
I get what you're trying to say, but a program that reads a certain form of input data and produces a program is just a compiler. The syntax it understands can become more and more complex, but if it's deterministically taking an input text and producing an output, it's parsing a language. In order to be deterministic, that language will have to have certain limitations on it as well. I'm not sure the distinction is as clear as you think it is.
Like to see an AI code OSI Transport Class 4 from the Standard specification; human programmers struggled to implement it ..
Remember, a specification (in general) says what needs to be done, not how it will be done in a specific instance.
Would the AI realise its code needs to interface to (another AI's) ISO OSI CLNS and utilise the information contained in the OSI Transport Class 4 Service Definition?
"That I can believe (eventually, some future generation of AI well beyond the current one) because that's basically rote translation of a concept into code."
Color me skeptical on that. The problem is that in my experience expressing concepts completely and unambiguously is even harder than writing perfect code. My guess is that a lot of folks will learn -- painfully and at great cost -- that actualizing their visions will require not Artificial Intelligence, but Artificial Clairvoyance. I don't think any number of Nvidia chips are going to be able to deliver that.
Which is what a compiler does, after all ...
Not at all. A compiler has very explicit syntax rules for what it will accept, and allows no deviation from those rules. It does not accept a "concept" as input, a human must translate that concept into the very explicit syntax first. i.e. a compiler will not accept "sort the list of helicopter parts by vendor name and within that larger search order the list of parts for each vendor by part number" as input.
You need to translate that concept (which is "spec") into C (or whatever) language following the very explicit rules for C - from everything such as using the correct data structures that contain that list of helicopter parts all the way down to including a ';' after each line.
That's just syntax, though. There are languages that can express that sort in fewer statements, requiring less knowledge of the internals, and obtain the same result. A search like that in SQL looks a lot like your sentence, except that you'd use specific column names instead of things like "the vendor" and "part number". Something could take a sentence that looks natural and translate that into the sort you want. That is just a compiler that uses a language that's closer to English. I've seen a few of those, though I tend not to like them.
What you're describing is a bit unclear. It's easy to imagine a program capable of understanding a broad concept: "I want a program to translate text from one human language to another supporting at least English and Swahili", going out and building such a thing. It's not easy to imagine how you'd actually obtain it, but that's what the hype around AI appears to be doing. Meanwhile, your version appears to involve someone being much more specific about how the job gets done, but doesn't end up bringing back all the restrictions that being that specific tends to entail. You're clearly not saying that you can have a one-sentence spec, but I'm having trouble figuring out how your spec and a program differ except that more of your spec looks like English.
a compiler will not accept "sort the list of helicopter parts by vendor name and within that larger search order the list of parts for each vendor by part number" as input.
Whether you put it that way or
sort(HelicopterParts, "Vendor", "PartNumber");
or
"rendezze a helikopter-alkatrészek listáját a gyártó neve szerint, és a nagyobb keresésen belül rendezze az egyes szállítók alkatrészlistáját cikkszám szerint" (thank you, Google translate - I know no Hungarian)
is merely a matter of syntax. You're still using the compiler to go from concept to code.
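A throwaway illustration of the point (my own, not anyone's product): the moment something deterministically turns that English-ish sentence into code, the sentence has become a language with a grammar, however informal it looks. This toy "compiler" accepts exactly one sentence shape.

import re

def compile_sentence(sentence):
    # Accepts only: "sort <collection> by <field1> then by <field2>"
    m = re.fullmatch(r"sort (\w+) by (\w+) then by (\w+)",
                     sentence.strip().lower())
    if not m:
        raise SyntaxError("sentence not in the accepted form")  # i.e. a grammar
    coll, f1, f2 = m.groups()
    return "sorted(%s, key=lambda r: (r.%s, r.%s))" % (coll, f1, f2)

print(compile_sentence("sort helicopter_parts by vendor then by part_number"))
# -> sorted(helicopter_parts, key=lambda r: (r.vendor, r.part_number))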
English, and other human evolved languages, have been proven time and time again to be not precise enough to describe complex operations and calculations. That is why we invented mathematical notation and programming languages. Even if AI could write code, rather than just copying snippets of human-created code, you still need a precise way of communicating the spec to the AI, and that in itself is a programming language.
He said CODERS would be replaced by software.
That I can believe (eventually, some future generation of AI well beyond the current one) because that's basically rote translation of a concept into code. That's easy, a lot more people can do that than the truly difficult part of programming.
No, it is not rote, and while it might be easy to do it badly, it requires some skill to do it well.
I've seen code knocked out by outsourcers from the sub-continent, and while it seemed to meet the requirements, it was a horrific mess of bloated cut-and-pasted code, completely unmaintainable and inefficient. We ended up not using a single line of it. The code produced in house was elegant, efficient and maintainable.
When you use an AI which code do you think the training set will contain? Vast quantities of low quality shovelware, or a much smaller set of well written code?
I remember reading about that in - I think - PC World. And now I find that it even has its own Wikipedia article.
I've been hearing the same story for 40 years or more. I can still vividly remember my dad telling me not to become a programmer in the early '80s when he showed me an article on so-called Expert Systems. People would tell them what they wanted and they would produce a complete working program as if by magic. We all know how that panned out.
I believe there's a deep desire among companies to replace these expensive and obnoxious software developers, as we can see with all the hypes (Expert Systems, Lo-Code, AI). It will eventually become reality but only when computers have matched human beings in intelligence. And I believe we'll have different problems on our mind when that happens!
I think perhaps the offshore coding teams might be in danger from AI (though I'm not holding my breath) - but the people who supply the specs and did the design for them will still need jobs.
And there are plenty of "coding" jobs in this country which also involve a bit more than just being a code monkey. But you've still got to know how to code.
So ignore this doomsayer from Nvidia. Go forth, young person, and learn to code. Despite the comment above, there are still plenty of jobs in tech, and still not enough people to do them.
Coding is the process of telling the machine what to do. First however you have to understand what and how you want to tell it, and that's the tricky bit.
There is a new IT discipline called Prompt Engineering, which people are making a lot of money out of. Prompt Engineers develop the natural-language questions / instructions you enter at the AI/ML prompt. They can even develop template prompt texts in which you can vary certain parts (you might call those parts variables, and their values user input). All of a sudden, Prompt Engineers look rather like software developers.
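To put that in concrete (and entirely hypothetical) terms, a "template prompt with variable parts" already looks suspiciously like a function with parameters, which is rather the point:

PROMPT_TEMPLATE = (
    "You are a billing assistant. Summarise the invoices for customer "
    "{customer_id} between {start_date} and {end_date}, flagging any "
    "line item over {threshold} {currency}."
)

def build_prompt(customer_id, start_date, end_date, threshold, currency="GBP"):
    # The "user input" slots are just parameters; validating them is still our job.
    return PROMPT_TEMPLATE.format(customer_id=customer_id, start_date=start_date,
                                  end_date=end_date, threshold=threshold,
                                  currency=currency)

print(build_prompt("ACME-042", "2024-01-01", "2024-01-31", 30000))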
Actually, some layers of management are good candidates for replacement by LLMs. I suspect others already have been replaced by software: e.g. I know software decides routes in logistics, that must have been at least partly a junior to mid management decision before that.
There have been computers doing financial trading for a while too.
You could ask another LLM what's wrong with it. With the proper tools (and prompt engineering!) you could even let them talk between themselves, discuss the code, fight about indentation and language preferences, disagree about variable names, and just stop working together because their partner is an a**hole. Just like real meatbag programmers!
Then you can add a third LLM to act as the group's leader and a fourth as project manager and see what happens. It'll be LLMs all the way down!
Only when I've debugged it for weeks and have started to come to the conclusion that, maybe this time, it's not my fault. It turns out that it usually still is my fault. The reason I can do this is that the compiler gives me the same result each time I compile the same code, so it is possible to change something, see if it helped, and change it back if it did not.
And sometimes, it is the fault of the tools. While I tend not to find many compiler bugs, I have found and fixed bugs in standard libraries before. It starts the same way with me assuming that my code's wrong somehow and I can't see it, and I start to read the library code to see what it is doing with my input because I can't see it. Sometimes, I notice that it's the one doing the wrong thing.
Ironic really, as I was hired to drive the forklift, yet seem to have ended up doing the programming.
But can AI help me to program?
well I can describe toolpaths and robot motions to it... but I'd have a hard time convincing myself the code spat out is correct (I have a hard time with the output of the CAM package)
But if you have to be frighteningly specific to the AI in describing what you want it to output, how is that different to regular programming? High level languages are merely a way of abstracting the lower level code into more compact yet complex commands.
EG
Add 1 to variable counter and store the result in counter.
C=C+1
Inc C
All equally valid, and yet a lower and lower level description of what you want done (I'd put it in binary too but can't be arsed to look up the op code).
Until AI gets the ability to describe a problem given to it, it's just another programming tool and, more than likely, currently an unreliable one.
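As an aside (my own example), you don't even have to leave Python to see that "lower and lower level description" in action: the dis module shows the bytecode the interpreter actually runs for the same increment.

import dis

def bump(counter):
    counter = counter + 1   # "add 1 to variable counter", a.k.a. C = C + 1
    return counter

dis.dis(bump)
# Typical (version-dependent) output includes ops along the lines of:
#   LOAD_FAST counter, LOAD_CONST 1, BINARY_ADD (BINARY_OP on newer CPython),
#   STORE_FAST counter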
Your point raises interesting questions:
Why does the AI need to produce high-level code that goes into a compiler etc.?
Who is going to babysit the AI: feed it the specification, take its output to feed into the compiler, take that and run the test suite, etc.?
Sounds good, as long as you're willing to be the kind of person I've seen before as a programmer: you tell me not to tell you why I think this is a bad idea, I build exactly what you said to, and then the game of who to blame for it not working right starts up. Programmers can easily misunderstand what is needed, but usually it isn't because they got a clear answer and rejected it. Usually, it's because their assumption about what a vague spec means was wrong. That is why, whenever possible, I spend a lot of my time making sure that I understand what you need and you understand what I can do and what I'm currently planning to do. Skipping or rushing that process usually produces code that neither of us wants to see.
I'm sure he does. He's an electrical engineer, not a programmer. Obviously, he thinks that, as long as you can find some code on the Web, you don't need a programmer.
Unfortunately, there is no pseudo-AI that is going to code Salesforce, or SAS, or an ERP. I don't care that there are now "prompt engineers", you still have to know where to place the prompt in the code.
Programmers are not going away any time soon.
On the other hand, you could have said in the '60s and '70s that all these computers were going to need millions of people to become electronics engineers to build them.
There was even a scare in the '60s that if computer use took off at the then-current rate, more than 100% of the population would be needed to do PCB layout (then done by hand).
I can see the armies of in-house programmers all producing the same sales forecast and inventory reporting routines in SAP/Oracle being the first to be replaced by AI.
Unless you think there is something ineffably spiritual in Birmingham city council-tax operations that can only be done by a human.
I worked at a place a long time ago; they had a PCB assembly department that they called the wire shop. Lots of chattering ladies would sit folding component legs, pushing them through PCB holes, soldering them and trimming the excess.
Then they got a flow solder machine and fewer ladies. Then they got automatic test equipment and more ladies, who would read the ticket from the ATE and remove the indicated solder bridges, correct the wrong components and put in the missing ones.
Then they off-shored all the manufacturing. Over the years of reading the Register I've been reading about how all IT work in developed nations was going to be off-shored to developing nations, and quite a lot of companies tried that, and much of it didn't really work out. Then people were concerned about Cloud providers taking all the infrastructure work, but seems that is going through the same cycle as the off-shoring. I expect AI will have to go through that process too, before the reality occurs.
That company that off-shored its manufacturing is now doing its own manufacturing again, albeit with pick and place machine operators. One of the pick and place ladies is from the original wire shop and is back after some decades.
Ignorance outnumbers knowledge in all fields and that is certainly true of programmers. A major contributor to the problem is that most employers do not know how to feel the quality - of programmers or of software. I do not know the root cause of the Post Office software disaster: the wrong people, poor description of the problem or lack of time (probably a mixture of all three). I can be certain that crap software outnumbers working software in the training data. That sets my expectations for the quality of the source code output by the modern generation of travesty generators. Apparently some people think they are more productive correcting Copilot output. If ML outputs a significant quantity of valid code I would be nervous about touching it because of the high probability of incoming copyright litigation.
We already have young humans copying bad code examples until they gain enough experience to know better. I am sure ML can do the first part of that faster - and generate huge quantities of garbage training data for Copilot Version 2.
When yet another shovel salesman hypes yet another gold rush my enthusiasm rivals Marvin's.
Throughout the development of tools, we've reduced the number of people required to write any given program, and by doing that, increased the number of people who want programs written. Yes, it will probably happen again, whether LLMs are good enough to do so I'm less sure, but something will eventually start to automate the basic programs that some people have to write. I don't think the people having to write them will mind too much, both because having to write that kind of thing is really boring and because there will be more things that need writing. This is a problem for someone who wants to learn one tool and keep using it forever, but that hasn't worked for programmers for any of the previous years and they should know that it's not going to work now either.
Well, I am an EE but I have done a lot of code for embedded systems of my (or my team) design.
That is an enormous market, incidentally.
Some of the skills are setting up the underlying hardware [1] which requires a rather solid knowledge of the internals of the microcontroller you happen to be using [2]. I don't see any 'AI' (which although artificial is anything but intelligent) being able to do that in the foreseeable future.
1. There are hardware abstraction libraries available, just about all of which are poorly written, very opaque and have horrible corner-case problems. Train the model on those and hilarity will ensue. I had a library function (DMA initialisation in one case) that was several call levels deep and I replaced it with 3 lines of code. The same issue exists for just about every on-board peripheral library function provided by the vendor. For precision work, using callback functions is a major no-no, but they proliferate in the various HAL libraries.
2. Modern devices have several thousand pages of documentation without considering the underlying core and assembly language and it can take a long time to fully understand all the functionality and just how to invoke their operation (even order of operations can make a significant difference).
> I had a library function (DMA initialisation in one case) that was several call levels deep and I replaced it with 3 lines of code.
When I developed the RTOS for a new computer, the EE (who designed the motherboard) and I used the component reference documentation to develop a suite of hardware library functions/macros. We knew these weren't particularly efficient; they did, however, capture the knowledge and logic necessary to drive some of the hardware, so we could quickly build code and check its logic flow. With correct logic we could then go back through the code (ASM) and optimise it, doing as you note - replacing library code with the specific instructions necessary. Doing this enabled me to get him to modify his design (*) to better support the RTOS.
(*) from memory some of the components could be wired differently to support different modes of operation. Plus the hardware defined the ROM/RAM memory map.
Both?
'Coding education' is strictly training, and is the most minuscule element of computer science. Computer scientists set the parameters for the design of computers and solutions to computing problems. Coders implement already-defined algorithms using specific languages. Both (together with other skills such as algorithm design) are necessary contributors to systems and software development, but one of the fundamental reasons why current mainstream computing technologies are such crap is that almost everyone thinks computer science, software development and coding are all the same thing -- defined by coding.
Quite apart from which, if neither is to be taught, who will be competent to check that the AI hasn't generated garbage, let alone fix its cock-ups?
It is probably true that any old CS degree / 3-week boot camp is not going to lead to an automatic highly paid job in the way that it has (occasionally) done for the last 20 years.
There is still going to be a demand for a different kind of software engineer, probably higher paid and more specialised.
And there probably should be more exposure to coding in schools - not because 90% of kids are going to be C++ devs, but because everybody is going to be dependent on computers and people shouldn't think they are magic.
One skill AI can't replace: IT people with 20+ years' experience of dealing with panicking, headless-chicken PHBs on P1 calls, shouting the odds and demanding full rollbacks on a project that's 27 hours down the track and has some minor issues!
Yeah, AI coding will get you out of that sort of situation, not years and years of on-the-job diplomacy training in handling management during a P1!
As a very famous character used to say, "Am I bovvered?!"
Become a plumber, electrician or carpenter... It's hard to "virtualize" or outsource... And where I live (a giant sandbar sticking out into the Atlantic off Massachusetts called Cape Cod, giving me a slight advantage if I were to kayak from New England to England) we don't have nearly enough people with said skill sets.
...Well I was set on being a car mechanic until I learned how to code in 12th grade in 1974.
A lot of software is awful and this is without AI. Replace programmers (even poor ones) with AI and it'll be a complete shitshow.
Which means this prophecy will not happen as any company dumb enough to try will be enough of a warning to the rest that there's no I in AI and that's what the meatsacks are there for.
Wow so in the future most people are farmers, nurses or blacksmiths. There is a rich class who rule over the others, and yes there will need to be some paper pushers but there are plenty of rich sons and daughters to fill those skilled / educated roles.
Why bother educating the proles, they are happy farming and drinking on the weekends!
Why bother educating the proles, they are happy farming and drinking on the weekends!
After 20 years in IT you have no idea how much I yearn for that life - working at something outside or making things by hand and earning enough to live a simple yet comfortable life from - absolutely, yes please!
So telling young people not to learn programming or earn a degree in tech or science because AI already has it covered will help with the talent gap in tech and science how? So more people get degrees in "How to skin a fly", a term a boss of mine used to describe the useless majors people pursue that have little to zero use in the practical world. The lack of understanding of science and tech is the reason we have people who believe the Earth is flat, or that Covid comes from 5G cellular towers, or the anti-vaccination crowd. AI is still a baby, barely starting to crawl, unable to even understand what it is saying. AI WILL be the destruction of humanity if we continue blindly entrusting our future to it while we pursue the challenges of making TikTok videos, hunting Pokémon or watching Taylor Swift's every move. INSANITY!!!
Being able to code is a bit like being able to write in a language such as English. It's an important skill for communicating, but just as being able to write English doesn't make you a great (or even mediocre) author, being able to code doesn't make you a programmer. As with English, relying on generative AI produces very high quality imitative output; there may be some original gems in there, but by and large it's just rehashing what the machine's seen before (that is, been trained on). Coding, being a restrictive set of language skills compared to a human language, is going to be even more formulaic. Useful, but it's really missing the point.
We may already be at a critical place in coding anyway, be it done by humans or machines. I've been looking at a project's code recently and I noticed that it was primarily built from ready-made components -- that's fine, we're supposed to do that -- but at the same time its 'core' seems to be missing. This isn't unusual, though, because I've seen similar assemblies in other projects, so I start to wonder whether this is normal and it's me that's out of step. The problem is that if a subsystem just stops working for no apparent reason (invariably resource depletion, of course) there's no easy way to debug this; waiting 72-plus hours for the system to fail and then trying to figure out what's wrong isn't very efficient. Here, no amount of fancy editors, code completion, AI or even 'languages that are guaranteed to be memory safe' will help you -- system correctness becomes a matter of faith. (In real life, at least the life I used to lead before I retired, you had to actively go looking for trouble; there was no point in running the code overnight and hoping for the best.) Anyway, judging by the general behavior of those 'apps' that infest my phone, modern Windows and what-have-you, I'm probably in the minority, so I'll just retreat to my vacuum tubes (valves, to you lot) and at least keep my fingers warm. But I reckon if we don't deal with this.....
Not at all. Just as with cryptocurrency before it, Nvidia isn't invested in what the process makes. They're not the gold miners. They're the shovel sellers. Shovel sellers often do well. OpenAI and other LLM farms connected to big tech will be okay. It's the smaller AI startups that will take a hammering, and they may not even need the bubble to burst to have that happen.
It is not as bad as "you will die if you do not take this quick&dirty vaccine"*, but surely the motivation is identical: spin the money wheel for Nvidia, a major producer of ANN processors.
Current AIs have the complexity of a worm brain - about 10000 neurons.
Humans have in the order of 100 billion neurons. Even cats and dogs have in the order of 1 billion neurons, which is why we can have a deep, good relationship with them. We see their creativity, their expression of love, their kindness, their acrobatic skills and many more things.
Don't believe a word of Huang; software engineering can be one of the most demanding intellectual endeavours.
Just ask the AI to perform a task that was not inside its training material and it will fail spectacularly. E.g. solve a linear equation system, write the complete Enigma source, or compare and judge the V-22 and the Do 31. It will fail on all of these tasks.
* very much the opposite was true, statistically speaking
Please compare all aspects of the do 31 and the v22
ChatGPT:
The V-22 Osprey and the Dornier Do 31 are both unique aircraft with VTOL capabilities, but they have notable differences:
Design and Purpose:
V-22 Osprey: Designed for military use, it combines the vertical takeoff and landing (VTOL) capabilities of a helicopter with the speed and range of a fixed-wing aircraft.
Dornier Do 31: Developed as a vertical takeoff and landing (VTOL) transport aircraft primarily for military use, intended to carry troops and equipment.
Configuration:
V-22 Osprey: Features tiltrotor technology, with rotors that can tilt from vertical to horizontal for both vertical takeoff and landing and horizontal flight.
Dornier Do 31: Utilizes a lift jet and lift engines to achieve vertical takeoff and landing, with wing-mounted engines for horizontal flight.
Performance:
V-22 Osprey: Faster and has a longer range compared to the Do 31, making it suitable for a wider range of missions.
Dornier Do 31: Slower and with a shorter range compared to the V-22, but it was designed for specific military transport needs.
Service History:
V-22 Osprey: Used by the United States Marine Corps, Air Force, and Navy for various missions including troop transport, medevac, and special operations.
Dornier Do 31: The Do 31 program was canceled after limited testing and development, so it never entered operational service.
Technological Legacy:
V-22 Osprey: Represents a successful implementation of tiltrotor technology and has been in active service for decades.
Dornier Do 31: Though the Do 31 program was canceled, it contributed to the development of VTOL aircraft technology, particularly in Germany and Europe.
Overall, while both aircraft aimed to fulfill similar vertical takeoff and landing capabilities, the V-22 Osprey has seen significant operational success and remains in service, while the Dornier Do 31 did not progress beyond the prototype stage
Me: apparently the quality level changes often. This time it correctly described the main difference. But it has one big factual error (speed), which clearly demonstrates the worm intelligence. The Do 31 is the speedier concept, as it does not have silly huge rotors. It's a real jet, unlike the V-22.
Also, not a single word about the horrible safety record of the V-22. It has killed more than 40 soldiers in accidents to date. Almost the definition of "pressed into service without reason"
Solve the following equation system: y=2x-z, y=10x-z-24, 2y=3x-2-z
To solve this system of equations, let's start by expressing y in terms of x and z from the first and third equations:
From the first equation: y = 2x − z
From the third equation: y = (3x − z − 2)/2
Now, we can equate these expressions for y and solve for x:
2x − z = (3x − z − 2)/2
4x − 2z = 3x − z − 2
4x − 3x = 2z − z − 2
x = 2z − 2
Now that we have x in terms of z, let's substitute this expression into one of the original equations to solve for y.
Using the first equation:
y = 2(2z − 2) − z
y = 4z − 4 − z
y = 3z − 4
Now, we have expressions for both x and y in terms of z. Let's substitute these into the second equation to solve for z:
3z − 4 = 10(2z − 2) − z − 24
3z − 4 = 20z − 20 − z − 24
3z − 4 = 19z − 44
44 − 4 = 19z − 3z
40 = 16z
z = 40/16
z = 2.5
Now that we have found z, we can substitute it back into our expressions for x and y to find their values.
From x = 2z − 2:
x = 2(2.5) − 2
x = 5 − 2
x = 3
From y = 3z − 4:
y = 3(2.5) − 4
y = 7.5 − 4
y = 3.5
So, the solution to the system of equations is x = 3, y = 3.5, and z = 2.5.
Me: if you want to see a proper solution (1, 3, 5), use Wolfram Alpha or implement the Gauss algorithm yourself.
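For the record, here's a quick check of the system above (my own snippet, using numpy's solver rather than hand-rolled Gaussian elimination). Rearranged into Ax = b form, the three equations give x = 3, y = 1, z = 5, i.e. the values 1, 3 and 5, not ChatGPT's answer:

import numpy as np

# y = 2x - z        ->  2x -  y - z = 0
# y = 10x - z - 24  -> 10x -  y - z = 24
# 2y = 3x - 2 - z   ->  3x - 2y - z = 2
A = np.array([[2.0, -1.0, -1.0],
              [10.0, -1.0, -1.0],
              [3.0, -2.0, -1.0]])
b = np.array([0.0, 24.0, 2.0])

x, y, z = np.linalg.solve(A, b)
print(x, y, z)  # approximately 3.0 1.0 5.0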
The seeds of their downfall are already sown. They can't sell their product to China, they're only allowed to sell a downgraded version. So the Chinese will have to resort to either making their own or coming up with an alternative form of vector processor. (The only thing that's stopping them is our certain knowledge that the Chinese can't invent anything, they have to buy all their advanced stuff from Great White Brother and then steal the designs....)
They will. When they do there will be a competitor that being a Generation Two model will probably perform better, cost less and use less power.
It's happened before numerous times. There's no reason to believe it won't happen again.
The Chinese are dependent on Western tech. Idiot politicians ban them from accessing it. The Chinese work their nuts off to lose that dependency. The Chinese will soon no longer be dependent on Western tech.
The Western economy is dependent on migrant labour. Idiot politicians repatriate them during the pandemic, lock the doors and do everything they can to block them. Western economies fracture due to loss of competent labour in essential sectors: care home, health, agriculture, hospitality, retail. Economy slides. Strikes. Nothing works. Poverty increases.
UK economy booming on trade with EU, great trade deals, strong Sterling, low inflation, low interest rates, EU opt outs. Idiot politicians throw it all away with Brexit. UK undevelops. Everything is now crappy. And they wonder why people hate them.
You clearly have to be really dumb to be a politician, and you cannot make an AI that reliably stupid. So instead of becoming coders, become politicians, pay yourself loads and wreck the future, the way the current crop have wrecked the present. We will always need politicians to ruin our lives.
Where will all the training sets that nVidia GPUs will pound on to train AI models come from?
Especially if one wants to do interesting new stuff. Mundane boring old stuff will have been packaged in libraries, I assume. You might imagine an AI "copilot" that will offer useful documentation to use the libraries, but that doesn't need to be super-intelligent or require too many GPUs to train, nor will it provide much more value on top of actually reading such documentation. Someone, possibly your kids, will still need to do some thinking.
And for the new stuff - some coding. Enough to make up a decent training set for AI to help with documentation after the interesting new stuff becomes mundane old libraries.
Wait and see what will happen when there is no coders' output and the AIs have to "learn" from each other. It will be a nightmare, to say the least. AI as it currently stands is a kid's toy that does not even do any real work. As long as it is based on statistical convergence and not real declarative inference like the human mind, it will only be used to solve problems faster, not to create new things. That is the end of it.
Funny that, this reader can’t help but think of life on the fictional starship Heart of Gold and the Nutrimatic Drinks Dispenser when faced with the very simple user requirement: “all I want is a cup of tea”…
On the other hand, resolving that requirement will require lots of AI processors (probably more than exist today) and a correspondingly large cloud computing bill….
If we don't have anyone writing code, we won't have anyone capable of understanding code. If we have no-one capable of understanding code, we lose the ability to check and control the AI-written programming. That way lies madness.
You'll still need people capable of understanding basic programming so that they can go on to understand complex programming, so that they can then learn how to interpret specs, write documentation, specs and process descriptions, and translate these into useful components of programs. I just don't see it happening that LLMs can take over "programming" all that much. They're great at rote repetition of unoriginal thought; they suck at actual novel invention (or rather, they just don't do it).
I program business software for a living and more than half the battle is trying to get the business to explain what they want. The notion that the business leaders can write their instructions down and an AI could write anything useful is a challenge at the moment. We constantly have to ask questions to get them to articulate what they really want, and they constantly ask for the impossible...
Eventually AI may replace most day-to-day coding, but not for a few years, and even then skilled people who know code will still be required to convert requirements into instructions for the AI to generate usable code.
OK Jensen - we propose letting an AI run the Nuclear power plant near you (What could go wrong)
When you need a medical procedure to be performed on you, the Doctor will use an AI
You are not driving or flying anywhere without an AI at the controls
Surely no objections there. You must be willing and able to put your health, wealth and security in the hands of this espoused AI utopia?
No? Then stop spouting this drivel....
"we propose letting an AI run the Nuclear power plant near you (What could go wrong)"
I operated nuclear power plants (as a licensed, trained operator) from the age of 19 to 21. I wouldn't let AI/ML anywhere near the operation of the plant, whether reactor or auxiliaries.
Given all the self-help (self congratulatory) books on being a CEO, surely this job could be done by an AI by now.
More than enough training guff in those tomes to feed the thing and then unleash it onto the stock market.
Perhaps Jensen would like to be the first turkey to vote for Christmas now he's invented Santa Claus?
Sure, an AI can come up with some code, but will it be good code, making the most of resources available? Or will we just use ever more resources because it's cheaper to throw more resources at it than to hire someone to write it properly in the first place?
You might think good coding skills will become rarer and less relevant, but you'd be wrong, because they're always relevant. AI will make mediocre the new norm, like it's becoming already with things like customer service, the build quality of goods, etc.
The next hundred years or so is going to prove all the scifi writers right, I fear.
...is an end user who can write a comprehensive spec outlining what the program should, and shouldn't, do, covering everything from error handling, through UX, to security.
And do you know what we call the format of a spec that fulfills these requirements?
Code. It's called code.
... to come up with a simple paragraph outlining both the capabilities and deficiencies of current generation AI. And yet, such a paragraph has already been created by AI...
Somebody on Threads shared this AI-generated (from customer reviews) Amazon doozy:
=====8<=====8<=====8<=====8<=====8<=====8<=====8<=====8<====
✰✰✰✰✰ 3 out of 5
28 global ratings
Customers say
Customers like the appearance of the planter, mentioning it's beautiful and has a nice hanging pots. They like that the pots are weathered and have a sizeable chunk missing from the lip. They also like that it has shattered to bits and is a waste of money.
=====8<=====8<=====8<=====8<=====8<=====8<=====8<=====8<====
LLMs are great at recombining tokens, but they really don't have any mechanism for knowing what anything actually means.
I use Visual Studio 2022 with the super-duper souped-up IntelliSense or Copilot (or whatever they call it), and while it does make some helpful suggestions it also makes really stupid ones. I've yet to decide if the time saved by the good ones exceeds the time spent undoing the stupid ones. Most of the helpful ones were what I was going to enter anyway, so all it saves is half a line of typing.
AI can do the simple stuff but is there really a demand for a million different "Hello World" applications? As soon as you start doing something really complicated the AI is worse than useless.
Also it's kind of important not to allow a skills vacuum to form when the current programmers start to die off.
AI/ML is in no danger of replacing software engineers. At best, it is capable of generating simple code, not design and the code to go with it that is efficient, scalable, supportable, etc. I use AI/ML for simple, repetitive tasks in my coding, usually having to correct the generated code, or ignore it altogether as being worthless. I started writing AI/ML with MS's ML.NET years ago, and see a lot of good uses of AI/ML, as long as the software engineer sticks to reality and not science fiction and fantasy.
AI/ML is not capable of thinking or reasoning alone. AI/ML can only do what it has been trained to do, which depends on humans providing very large training data sets; that makes the AI/ML generative output only as good as the training data (which in turn depends on how knowledgeable the humans who made the training data were).
I see the future of people working in software engineering as one where coders, hackers, "script kiddies", etc. will be largely (but not totally) left out or replaced, but those who approach software engineering holistically, with engineering and business acumen, an understanding of users, and the ability to think creatively and with deductive reasoning, will do well and have long careers.
AI/ML is a good tool (of many tools) for such software engineers but can never replace them. AI/ML replacing software engineers is pure science fiction.
I use AI/ML for simple, repetitive tasks in my coding, usually having to correct the generated code
Is the time you spend checking the generated code greater or lesser than the time it would take you to write the same from scratch?
I suspect in most cases it takes longer to check than to write it yourself and anybody who uses generated code without checking it thoroughly is going to have unpredictable and crashy programs. I'm sure AI/ML will get better but for programming critical software from design to publish, it's unlikely to ever be better than a meatsack.
A Conversation with the Founder of NVIDIA: Who Will Shape the Future of AI?
This is the title of his talk viewable on youtube.
Higher performance in a higher-level language: this is what I think he is saying.
I suggest that Programming will morph into Prompt Engineering.
Prompt Engineering is simply knowing the right questions to ask.
Learning what the right questions are becomes an objective.
This can take time.
Many have tried a GPT and successfully gotten the rotten result they were looking for, proving garbage in is garbage out.
To those folks I request that you try again with an open and curious mind.
My experience is Prompt Engineering is the future of programming.
Given that we need multiple months and iterations to tease out requirements that were never stated at the outset, I very much look forward to this process that will understand the client's vague, poorly-thought-out 'requirements' and then perform the miracle of generating usable code and deploying it in the correct place without any issues! Doesn't sound like a pipe dream at all, just like my various 'smart home assistants' do everything I ask of them! "Siri, play that song that I've listened to a thousand times." "I'm sorry, I couldn't find x by y in your library." *manually finds and plays song via phone* "Siri, what's this called?" "This is x by y." Ah yes, so the exact thing you just said doesn't exist, then.