Re: predicts increasing hybridisation between classic supercomputers and AI
It was about this point in the article that it became positively unsettling in its potential implications. Then it picked up speed.
Bloody good article, Rupert. Thank you.
High-performance computing (HPC) has a very different dynamic to the mainstream. It enables classes of computation of strategic importance to nation states and their agencies, and so it attracts investment and innovation that is to some extent decoupled from market forces. Sometimes it leads the mass market, sometimes it …
To be honest, I wasn't sure quite what the point of the article was. It had a lot of mixed comment, e.g. the use of a different, non-von Neumann architecture for specialised HPC - well, DSPs are not a von Neumann architecture and are designed for a specific task, so this isn't a new path to tread.
ML/AI (please stop using the term AI, as it's just a sales term) seems to be solving 'innovative' problems. But only when it's hand-fed curated, previously successful answers (so really you've just pruned the search space to make analysis tractable).
Memristors as analogue devices (not again...) have been around forever and a day and are always just about to do something innovative. But they're slow. There's some great theoretical work you can do with analogue computers, but the issue is making them work reliably. That's the only reason we use binary - to make a machine that can reliably tell 1 from 0.
"ML/AI (please stop using the term AI as it's just a sales term) seems to be solving 'innovative' problems. But only when it's hand-fed curated previously successful answers (so really you've just pruned the search space to make analysis tractable)."
That doesn't seem quite right. The whole point of ML is to be able to generalise beyond the training set (i.e., curated data with successful answers); it would be completely useless if it didn't do that! Note that the "search space" in e.g., an ANN is the set of network weights. This search space is not "pruned"; rather, it is optimised with respect to the training data, usually via standard (and usually deterministic) algorithms such as backprop. Hopefully, the optimised set of weights then generalises well beyond the training set, in the sense that it doesn't over- or underfit new (uncurated) data that it hasn't seen before. That is the "learning" part in ML.
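To make that concrete, here is a minimal sketch - just a toy of my own with scikit-learn's MLPRegressor, not anything from the article - of fitting a small network to a noisy sample and then scoring it on held-out data it never saw during training:

# A minimal sketch of "generalisation": fit a small ANN to a noisy training
# sample, then evaluate it on a held-out sample it has never seen.
# (scikit-learn's MLPRegressor is used purely for brevity.)
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))                 # inputs
y = np.sin(X).ravel() + 0.1 * rng.normal(size=500)    # noisy targets

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)                             # optimise weights on the training set

# R^2 computed purely on data the network never saw during training.
print("out-of-sample R^2:", net.score(X_test, y_test))

That held-out score, and nothing else, is what "generalisation" refers to here.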
Was it?
Well, whatever the intention, it seems to me to betray a misunderstanding of how ML actually works.
For what it's worth, my take is that ML theory is maths/statistics, while ML design is engineering - which might (as in the article) have applications in science.
(I'm a mathematician, statistician and research scientist, and have occasionally had cause to use ML as a scientific tool - specifically, to model neurophysiological data.)
Nope, you cannot generalise beyond the training set; you can only hope to generalise within it, provided you have a nice smooth transfer function at each of your nodes. So if you consider the multi-dimensional training set constrained within a finite hyperspace, then yes, the ANN will make assumptions and generalise, but outside of that bounded training-space range it's anyone's guess what an ANN is up to.
An ANN just provides a non-linear transfer function from input to output.
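A quick toy sketch (my own, using scikit-learn's MLPRegressor for convenience - nothing from the article) makes the point: train on sin(x) over [-3, 3] and the net interpolates fine, but ask it about x = 10 and it's anyone's guess:

# Toy illustration: an ANN trained on x in [-3, 3] interpolates well there,
# but its output outside that range is essentially unconstrained.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(-3, 3, size=(1000, 1))
y_train = np.sin(X_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(30, 30), max_iter=5000, random_state=1)
net.fit(X_train, y_train)

for x in (0.5, 2.0, 6.0, 10.0):      # first two inside the training range, last two outside
    pred = net.predict([[x]])[0]
    print(f"x={x:5.1f}  true sin(x)={np.sin(x):+.3f}  net says {pred:+.3f}")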
What I was referring to in the case of pruning was the example given in the article where an ML system 'solved' a difficult problem. In this case it was, as I said, trained on curated solutions of the three-body problem. There was also the case of the recent system developed to predict new and innovative proteins, which also needed a set of successful starting points to be going on with. The system could not find solutions from scratch; it needed curated successful examples from which to start, thus pruning the search space.
This is clever and it is useful, but it's conducting a pruned search - it's not solving or deducing.
"Generalising within the training set" makes no sense whatsoever. That is simply not what "generalise" means in this context.
Yes, you (probably) have a smooth transfer function at each network node. The training data, along with the smooth transfer functions, define a smooth function on the space of weights (an "energy" function) - probably something like the summed squared distance between the network's actual outputs and the known correct outputs, across the training set. Algorithms like backprop then perform gradient descent in weight space to minimise the energy function. When out-of-sample inputs are presented to the optimised network, it hopefully - if the network is not overfitting or underfitting - will be better than chance (much better, if you're lucky) at generating the corresponding correct outputs. That is generalisation.
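If it helps, here is the bare bones of that energy-descent idea in a hedged toy sketch - a straight line rather than a network, to keep it readable; backprop is the same recipe with the gradients chained through the layers:

# Bare-bones gradient descent on a squared-error "energy", stripped of the
# network: fit y ≈ w*x + b by repeatedly stepping down the gradient.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x - 0.5 + 0.05 * rng.normal(size=200)   # "training data"

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y            # residuals
    energy = np.mean(err ** 2)       # the "energy" function
    dw = 2 * np.mean(err * x)        # d(energy)/dw
    db = 2 * np.mean(err)            # d(energy)/db
    w -= lr * dw
    b -= lr * db

print(f"learned w={w:.3f}, b={b:.3f}, final energy={energy:.5f}")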
An ANN is a model intended to reflect the statistical structure of the data - that is, the probability distribution from which data (both in and out of sample) will be drawn. Overfitting means that the network model is too complex (too many parameters) and the optimised model consequently too closely "tied" to the particular training sample it was optimised for - it fits noise in the data rather than capturing its true statistical structure. Underfitting means that the model is insufficiently complex to model the statistical structure of the data well, and consequently performs poorly both in and out of sample.
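A quick numerical illustration of that over/underfitting trade-off (a toy polynomial fit of my own rather than an ANN, but the principle is identical):

# Over/underfitting in miniature: fit polynomials of increasing degree to a
# noisy sample and compare in-sample vs out-of-sample error.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = np.sin(3 * x_train) + 0.2 * rng.normal(size=20)
x_test = np.sort(rng.uniform(-1, 1, 200))
y_test = np.sin(3 * x_test) + 0.2 * rng.normal(size=200)

for degree in (1, 4, 12):            # underfit, about right, overfit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")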
ANNs 101 - was doing this stuff 25 years ago.
No - this is completely wrong.
ANNs do not represent the statistical structure - they match an input-output non-linear transform represented by the training data. The transform can imply a classifier (but not always; 30 (ahem) years ago we used them as a signal predictor to support Kalman filtering).
If you want to represent the statistical structure then Gaussian Mixture Models are one option. Or if you really can't be arsed, go with centroids. But if you're training a classifier then all the ANN is really doing is trying to optimise a wiggly boundary between classes in the multidimensional space of your feature set.
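(For the record, something like scikit-learn's GaussianMixture is the kind of thing I mean - a minimal sketch, toy data and all:

# A Gaussian Mixture Model actually models the distribution the data came from,
# rather than just drawing a decision boundary. (Minimal scikit-learn sketch.)
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two clumps of 2-D data drawn from different Gaussians.
data = np.vstack([rng.normal(loc=(0, 0), scale=0.5, size=(200, 2)),
                  rng.normal(loc=(3, 3), scale=1.0, size=(200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
print("component means:\n", gmm.means_)
print("log-likelihood of a new point:", gmm.score_samples([[0.1, -0.2]]))
)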
Over-fitting occurs when you over-minimise the training-set error and it fits too much to the specifics of the training data - you don't need too many parameters, you can just skid down your weight space too far. That's why (ahem) 30 years ago we'd split the data into a training corpus, a small visible test corpus to check that overfitting wasn't taking place, and a true unseen test corpus for evaluation.
I'm sure you remember from (ahem) 25 years ago plotting the ever-decreasing training error versus, say, the classification error, and seeing where classification started to get worse.
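The same recipe in modern dress - a hedged scikit-learn toy of my own, not what we actually ran back then:

# Train/validation/test split: watch the validation score while training,
# and keep the test set untouched until the very end.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(50,), random_state=0)
for epoch in range(30):
    clf.partial_fit(X_train, y_train, classes=np.unique(y))
    print(f"epoch {epoch:2d}: train {clf.score(X_train, y_train):.3f}, "
          f"val {clf.score(X_val, y_val):.3f}")    # stop when this starts getting worse

print("final, unseen test score:", clf.score(X_test, y_test))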
By the way, ML here means machine learning not maximum likelihood.
There are different perspectives on ML (yes, Machine Learning), and ANNs in particular. As a mathematician and statistician I tend to lean towards a statistical view. Under this view, an ANN is simply a nonlinear regression model, which is fitted* to a finite sample (the training data) drawn from the statistical distribution of the dataset. My explanation is perfectly correct from this perspective.
*In general, model fitting might well use maximum likelihood (what are the most likely model parameters - weights, for an ANN - given the [training] data?). However, for nonlinear statistical models the likelihood function is most often simply too hard to compute analytically, so some other convenient criterion (like the "energy" I described) is used. Often that amounts to the same thing; in linear regression, for example, the parameters obtained by minimising the mean square error are the maximum-likelihood parameters.
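(If anyone wants to see that equivalence numerically, here is a quick toy check of my own - ordinary least squares against explicit maximisation of the Gaussian log-likelihood, synthetic data and all:

# Numerical check: for linear regression with Gaussian noise, the least-squares
# fit and the maximum-likelihood fit give the same parameters.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.7 * x + 3.0 + rng.normal(scale=2.0, size=100)

# Least squares via the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
ls_params, *_ = np.linalg.lstsq(A, y, rcond=None)

# Maximum likelihood: minimise the negative Gaussian log-likelihood.
# For fixed noise variance this reduces to the sum of squared residuals,
# which is exactly why the two answers coincide.
def neg_log_lik(p):
    slope, intercept = p
    resid = y - (slope * x + intercept)
    return 0.5 * np.sum(resid ** 2)

ml_params = minimize(neg_log_lik, x0=[0.0, 0.0]).x
print("least squares :", ls_params)
print("max likelihood:", ml_params)
)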
By all means, stick to a more engineering-friendly perspective. Sure, I've used in/out-of-sample data to mitigate overfitting - as a statistician I'd call that cross-validation. (And if you can calculate the likelihood function, there are also analytic tools such as the Akaike/Schwarz/Hannan-Quinn... information criteria to estimate meta-parameters [e.g., model order].)
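(In code, cross-validation is a one-liner these days - a toy sketch with scikit-learn, my own choice of model and data, purely to illustrate:

# k-fold cross-validation: the in/out-of-sample splitting, done systematically,
# here used to compare two candidate model complexities.
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

for alpha in (0.1, 100.0):
    scores = cross_val_score(Ridge(alpha=alpha), X, y, cv=5)
    print(f"alpha={alpha:6.1f}: mean CV R^2 = {scores.mean():.3f}")
)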
I'm not saying your perspective is "wrong" (apart from your comment on generalisation, which in an ML context few would disagree means quite specifically generalisation from in-sample to out-of-sample data), but nor is mine. Perhaps you might want to broaden your mindset a little.
Usually extending a regression model past the data used to fit the regression is a bad idea. Extrapolation vs interpolation. Can be done for things where the general model is known and rather simplistic...but that is usually not the case where ML is employed.
I kind of agree, but what use is a regression model (or any model, for that matter) if it is not to be extended beyond the data used to fit the model - i.e., to make predictions? What else are you going to do with it other than extrapolate?
Back in the early 00s, I spent a brief time as a quant/algorithm designer for a hedge fund which dealt exclusively in automated online trading. Regression models were designed and fitted to historical data, then used to predict future data. It worked. The fund was, and remains, wildly successful. Interestingly, the most successful models (at least those designed for financial instruments for which no a priori model was known), were linear. This was because they were the simplest models -- i.e., had the fewest parameters -- and could thus be fitted with the smallest estimation error. This is extremely important for financial data, where the signal-to-noise ratio tends to be tiny. The art was in getting the model order and historical window optimal, to avoid over/underfitting and to avoid fitting out-of-date data respectively; that, and careful choice of regressors (something of a "black art").
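Purely for illustration - none of the firm's actual models, obviously, and with made-up data, window length and model order - the skeleton of that fit-on-a-window, predict-one-step-ahead linear scheme looks something like this:

# Skeleton of a rolling-window linear predictor: fit a small linear model to the
# last `window` observations, predict the next one, slide forward, repeat.
# (Illustrative only - synthetic data, arbitrary window and model order.)
import numpy as np

rng = np.random.default_rng(0)
returns = 0.001 + 0.02 * rng.normal(size=2000)   # stand-in "price returns"

window, order = 250, 3                            # history length, number of lagged regressors
preds, actuals = [], []
for t in range(window, len(returns) - 1):
    hist = returns[t - window:t]
    # Design matrix of lagged values: predict r[k] from r[k-1..k-order].
    X = np.column_stack([hist[order - j - 1:len(hist) - j - 1] for j in range(order)])
    y = hist[order:]
    coeffs, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y, rcond=None)
    x_next = np.concatenate([returns[t - order:t][::-1], [1.0]])
    preds.append(x_next @ coeffs)
    actuals.append(returns[t])

# With pure-noise stand-in data this should hover near zero; the point here is
# the plumbing (fit on a window, predict one step ahead), not the signal.
print("correlation of prediction with outcome:", np.corrcoef(preds, actuals)[0, 1])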
"...The resultant trained network could then not only solve previously unsolved three-body problems, but in some cases do so 100 million times faster than the standard approach..."
This has got to be not only one powerful solution, but one powerful scientific breakthrough - finally: the admission and validation of whatever is being claimed here into the realms of true science. (It HAS happened before; "plate tectonics" was, at one time, considered 'pseudoscience'. Perhaps 'cold fusion' is next.)
If you're having trouble unwinding what has been said, here it is, un-wound:
"We have a network which can not only solve (previously) unsolved problems, but solve them 100 million times faster than they can't be solved..."
Martin Gardner would have absolutely loved this one.
Cold fusion is actually real. The problem is inconsistent replication. Set up 100 tests, all apparently identical, some show fusion products, some don't. The US Navy's research group has been funding research into it for quite a while, to try to work out exactly what's required to make it consistent.
"Cold fusion is actually real. The problem is inconsistent replication. Set up 100 tests, all apparently identical, some show fusion products, some don't..."
Uh-huh...Uri Geller's spoon-bending is real, too. And N-rays. And...
Not to put TOO fine a point on this, but everything following your opening five-word sentence just put the lie to that sentence. TL;DR: If you can't replicate it, it ain't science. Or, if you prefer,
"Inconsistent replication" ≢ "Cold fusion is actually real."
"...The US Navy's research group has been funding research into it for quite a while...".
I have always been under the impression that the US Navy was much smarter than all this. And, by the way, what is the EXACT name of this nebulous "...THE US Navy research group..."?
Please don't make unsubstantiated claims such as these without giving attributions or citations; otherwise it is (not "kinda is"; not "sorta is"; not "maybe is". it IS...) nothing more than your own personal opinion.
Don't forget, now (I'm sure you won't)---we are all waiting for verifiable proof of the US Navy's involvement with "cold fusion".
'Course, uh, you could always dispense with these pesky proofs of veracity with a simple opening sentence, stating, "This is strictly my personal opinion."
"Everybody here who believes in telekinesis raise my right hand."
The reply is nothing short of...amazing. I was going to say "bullshit", but I'm not going to say that.
Ignore the selected icon---there is no extraterrestrial explanation for the object of this comment. There is NO explanation, except for self-serving attention-grabbing... and, MOST probably and almost assuredly, the correct one: the Dunning-Kruger effect.
Want proof?----simply type "The US Navy Cold Fusion Research Group" into your favorite search engine.
Then, when you get zilch---garbage---, type in the name of that ABSOLUTE, 100% credible source of information on research activity by the US Navy: "Office of Naval Research" (ONR).
Once you've gotten to ONR's home page, type "cold fusion" into ONR's "search" window.
Need any more proof of the validity of Isaac Asimov's dictum, "My ignorance is just as good as your knowledge"? ("A Cult of Ignorance", 1980).
Now you know---or, only possibly, some of you at any rate---why lawyers tell you to "shut the f..k up!".
> simply type "The US Navy Cold Fusion Research Group" into your favorite search engine.
Dunning-Kruger applies to you, Sir. Very much so. I got you to search in Google for something that doesn't exist.
Have you noticed the icon to the right of my original post? It's usually used around here at El Reg to indicate sarcasm.
Please allow me to repeat, for clarity: sarcasm.
Feel free to look up this word in Merriam-Webster. It does exist.
(Christ, you're still going. Please ensure you read my now-out-of-sequence first reply first.)
> Then, when you get zilch---garbage---, type in the name of that ABSOLUTE, 100% credible source of information on research activity by the US Navy: "Office of Naval Research" (ONR). [...]
> Now you know---or, only possibly, some of you at any rate---why lawyers tell you to "shut the f..k up!".
Wow. You really are spectacularly incompetent as well as arrogant and obnoxious, aren't you. You demonstrate in your own post that you can't manage a bloody search engine, then start abusing ST who was clearly pranking your ranting. Then finish off with wide-angle abuse.
Here's a clue: if I do a search on your insisted-upon terms ("The US Navy Cold Fusion Research Group"), the very first result is:
"U.S. Navy Cold Fusion Research. The LENR-CANR.org library includes several papers by U.S. Navy researchers at the China Lake Naval Weapons Laboratory, the Naval Research Laboratory and Space and Naval Warfare Systems Center (SPAWARS). Here are some examples:"
You'll note there that they cite 3 main groups w/in the US Navy's overall research groups.
Separately, IEEE.org cites "Scientists at the Naval Surface Warfare Center, Indian Head Division have pulled together a group of Navy, Army, and National Institute of Standards and Technology (NIST) labs". If you're really anal about needing ALL the NAMEZ!! then by all means, learn how to use a search engine and knock yourself out.
In the meantime, I suggest you read your closing paragraphs very loudly to yourself:
> Need any more proof of the validity of Isaac Asimov's dictum, "My ignorance is just as good as your knowledge"? ("A Cult of Ignorance", 1980).
> Now you know---or, only possibly, some of you at any rate---why lawyers tell you to "shut the f..k up!".
, and then look up the psychiatric term "Projection" and ponder deeply upon it.
Wow. You combine spectacular ignorance with some brave internet wannabe aggression.
Your understanding of science is profoundly broken. You seem to have conflated science with an attribute of a valid research paper's write-up. You also seem to struggle with the difference between the very tail end of the scientific method --hypothesis testing-- and the whole of the scientific method.
> Please don't make unsubstantiated claims such as these without giving attributions or citations
Grow the hell up. I'm not your mummy and this is not a formal academic presentation site. If I choose to mention something and you would like me to see if I have some refs ready to hand, feel free to ask politely. Or learn to use a search engine.
> Don't forget, now (I'm sure you won't)---we are all waiting for verifiable proof of the US Navy's involvement with "cold fusion".
Well, this WILL be hard.
https://duckduckgo.com/?q=us+navy+research+cold+fusion
Hey! Lookit all them results! Knock yourself out.
Here's a good one to get you started: a recent report (well, recent compared to the 90s and 00s when news of ongoing work would periodically trickle out) from that little-known organisation you sound like you would never have heard of:
"Whether Cold Fusion or Low-Energy Nuclear Reactions, US Navy Researchers Reopen Case
--- Spurred on by continued anomalous nuclear results, multiple labs now working to get to bottom of story"
Oh, no. I think it can be parsed into something that makes sense.
How about:
The resultant trained network can solve three-body problems 100 million times faster than the standard approach, allowing problems to be solved that would have taken too long to attempt previously.
"We have a network which can not only solve (previously) unsolved problems, but solve them 100 million times faster than they can't be solved..."
That's a, possibly deliberate, misunderstanding of what was said. The article makes it clear that the problem was a chaotic orbital-mechanics three-body problem. These problems can be worked out numerically from the equations of motion, but they are compute-intensive to solve. So the Edinburgh researchers computed (very slowly) the solutions for a large dataset. Having done that, they then trained a neural net using these known solutions. Then they tested the neural net against a similar set of problems for which the solutions were not known. These "previously unsolved problems" were solved at speeds up to 100 million times faster than the standard approach. The language is clumsy, but the meaning is clear if you read the paragraph that explains the approach.
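For anyone who wants the shape of that approach in code, here is a deliberately tiny stand-in of my own - a pendulum integrated with scipy rather than the Edinburgh group's actual three-body setup - showing the pattern of "compute expensive solutions slowly, then train a network as a fast surrogate":

# The general pattern described above, on a toy problem: numerically integrate an
# ODE (the slow, "standard" approach), use those solutions as training data, then
# use a neural net as a fast surrogate mapping initial conditions to the outcome.
# (A pendulum stands in here for the three-body problem - same recipe, far simpler.)
import numpy as np
from scipy.integrate import solve_ivp
from sklearn.neural_network import MLPRegressor

def pendulum(t, state):
    theta, omega = state
    return [omega, -np.sin(theta)]

def angle_after(theta0, omega0, t_end=10.0):
    sol = solve_ivp(pendulum, (0.0, t_end), [theta0, omega0], rtol=1e-8)
    return sol.y[0, -1]                     # angle at t_end

rng = np.random.default_rng(0)
inits = rng.uniform(-1.0, 1.0, size=(2000, 2))                 # initial (theta, omega)
targets = np.array([angle_after(th, om) for th, om in inits])  # the slow part

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(inits[:1500], targets[:1500])                    # train on precomputed solutions

# "Previously unsolved" cases: the surrogate answers instantly, the integrator slowly.
print("surrogate R^2 on unseen initial conditions:",
      surrogate.score(inits[1500:], targets[1500:]))

The surrogate never "solves" the equations; it just learns the map from inputs to outcomes, which is exactly why it is so much faster - and why it was trained and tested on precomputed solutions.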
Pardon me, but I completely missed that "...paragraph that explains the approach", to which you so cavalierly refer.
I, obviously, am too blinded by all the references to such esoteric subjects (the comprehension of which my background in physics is totally inadequate to allow for) as 'artificial intelligence', 'neuromorphic circuits/components', 'computational weight', 'machine learning', and--no "treatise" such as this would be complete without it--QUANTUM COMPUTING... and not just any garden-variety, pedestrian quantum computing, mind you, but "...million-qubit quantum computing." This, when the people who CLAIM to understand "quantum computing" can't explain what it is, how it works, or how to build one of these magic machines---the largest of which is only eight "qubit"s. Whatever THAT (THOSE) are.
I am completely surprised---no; make that disappointed---that no reference, nor appeal, was made to a 'superstring' computer. We are, after all, exhausting our supply of buzzwords quite rapidly; and there exists the slim possibility that the bourgeoisie, the hoi polloi, may be getting wise.
Perhaps the subject would have been best left at that statement by Professor Rick Stevens:
"It's not clear how to account for quantum computing and neuromorphic components."
As AC mentioned, the current "AI" is Artificial to be sure, but is not Intelligent. It's statistical pattern matching. It can produce good results in some areas. But, answers it produces cannot be guaranteed to be correct. We've all read of examples of "AI" producing ridiculous answers because of unnoticed problems in training data.
An example of something with computers that actually has a kind of understanding is Terry Winograd's blocks world, from decades ago. But, that kind of understanding requires lots of work to create, and is infeasible for more than a limited domain. Current "AI" is usually only valid in a limited domain as well, and with it, you usually can't find out *why* it makes a "decision". There has been a bit of work on that recently.
Now, it is true that we don't really know how the human brain (which we grant the label "intelligent") reaches its decisions either. And it can be just as fooled/deluded by false or misleading input. So, how is the current "AI" any worse? For one, the human brain typically has a far far wider set of inputs ("experience"), and so has better "understanding", whatever that is.
I don't know about the rest of you, but I have no confidence in turning over anything important to an "AI" system that might be almost as good as a human brain. If I'm to turn over something important to computers, I want them to be much *better* than humans at whatever that task is. I am sceptical that current "AI" can ever achieve that. Note that areas where computers are already better than humans, e.g. chess playing, are very limited domains, and don't rely on statistical pattern matching.
There are some areas where "AI" (i.e. "statistical pattern matching") beats humans.
In my background, that's cell screening and some more general medical imaging - and that's beating human experts who know they are being tested, under ideal conditions.
The fancy machines are much better than the 'average' bored and tired user with typical manual hospital equipment.
> I want them to be much *better* than humans at whatever that task is. I am sceptical that current "AI" can ever achieve that.
By "whatever the task is", I am assuming you mean a "generalist" AI, in the sense that humans (and other animals) are generalists - i.e., a "superhuman". I too am sceptical; personally, I think that sets the bar way too high. Bear in mind that human intelligence is the result of aeons of evolution in a ridiculously rich and challenging interactive environment (i.e., an entire planet/biosphere). It may well be that you actually need something on that scale that to design anything approaching human intelligence.
> Note that areas where computers are already better than humans, e.g. chess playing, are very limited domains, and don't rely on statistical pattern matching.
Not so; AlphaGo, for example, is most certainly based on "statistical pattern matching" in the sense you use the term.
Perhaps you might wish to consider too, that it is not implausible that human intelligence itself is in fact "statistical pattern matching" writ large. This is not so far fetched; some current thinking in cognitive neuroscience is running with that idea (cf. "predictive processing" and the "Bayesian brain" hypothesis).
FWIW, my own suspicion is that AI will, in practice, not lead to human-like intelligence (hey, we already have humans for that!), but something more alien, which we shall nonetheless be comfortable to label as "intelligence".
No, I don't mean a generalist AI. With current approaches that would be quite ambitious, since likely you would have to merge *many* different AI datasets, likely running several different ways (number of layers, specific details of processing, etc.) Either that or figure out how to classify situations to feed questions into, and get answer from, those many independent sub-AI's. But, those problems can be solved by a couple more AI's, right?
Because these number-based AI systems cannot guarantee that their answer is correct, they are, to my mind, not as good as programmed solutions which *can* guarantee correctness within their domain. I want those guarantees for something which is to replace human brains at tasks of critical importance. What good is an answer that is perhaps even more wrong, but in a way that a human supervisor might not even recognize?
I had forgotten about AlphaGo, but note that it also is a relatively small domain. It is also a domain which *can* be done completely accurately with straight computation. As in the example of three-body problems, the numeric approach can be used as a pointer to a possible solution, but I would want that solution validated using the actual rules (whether they be the game rules of Go or the rules of gravitation, inertia, etc.)
I agree with your thought about this stuff yielding a more "alien" intelligence if it is pushed far enough. But, do you want critical decisions about your life made by such an intelligence, with no human oversight?
"What good is an answer that is perhaps even more wrong, but in a way that a human supervisor might not even recognize?"
You are not devious enough. If it consistently gives the wrong but *desired* answer, the fact that it is not provably correct -- and by extension is not provably incorrect -- is a feature, not a bug. In the domain of unprovable assertions, how do we even know when the ML training is "complete"? ... When the ML consistently gives the answer that we want....
Okay, I see.
It seems you want an AI that delivers "correct" answers (presumably in problem domains where there is actually such a thing as "correct" answers, such as game play). And sure, that is not terribly human-like; our (and other animals') intelligence has evolved to deal with a huge, messy, interactive world, where it is probably in general impossible even in principle to say what a "correct" answer (or indeed an "answer") even is. Rather, our intelligence evolved to maximise our reproductive success in that messy, interactive world. This, given resource constraints, is likely to produce intelligence that handles multiple trade-offs and compromises, delivering not "correct" answers, but "good enough" answers. Part of the latter might well involve solving discrete "problems with correct answers", as in games, so we have indeed evolved the ability to reason logically - up to a point. This was the aspect of intelligence that GOFAI ("Good Old-Fashioned AI") attempted to tackle, until it hit a brick wall of complexity and combinatorial explosions.
When you say "[Go] is also a domain which *can* be done completely accurately with straight computation" do you not think it relevant that to do so may take several lifetimes-of-the-universe to achieve, even with the most in-principle powerful computational resources? Likewise chaotic dynamical systems, which are in general not even in principle analytically solvable and where, numerically, the "correct answer" recedes exponentially from view as you increase the spatial resolution and temporal prediction horizon (e.g., exactly what we see with weather forecasting).
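(For anyone who hasn't seen it, that exponential divergence is easy to show numerically - here is a toy sketch of my own with the classic Lorenz system, two initial conditions differing by one part in a billion:

# Sensitive dependence on initial conditions: integrate the Lorenz system from two
# starting points that differ by 1e-9 and watch the trajectories part company.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.linspace(0, 40, 4001)
a = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)
b = solve_ivp(lorenz, (0, 40), [1.0 + 1e-9, 1.0, 1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)

for t_idx in (0, 1000, 2000, 3000, 4000):      # t = 0, 10, 20, 30, 40
    sep = np.linalg.norm(a.y[:, t_idx] - b.y[:, t_idx])
    print(f"t = {t_eval[t_idx]:4.0f}: separation ~ {sep:.2e}")
)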
Personally, I am not so interested in solving for correct answers to well-defined problems with correct answers (you may find that odd, given I am a mathematician by trade!) - undeniably useful as that may be. I wonder too if that is really what people have in mind these days when they think of "AI", although they may well have done 50 years ago. I am more fascinated by real-world evolved intelligence, and how something comparable might be engineered. Note that that is equally, if not more useful; a self-driving vehicle, for example, requires something closer, in my opinion, to an animal-like intelligence than a discrete problem-solving intelligence (even very simple organisms are way better at navigating complex environments than the current technology).
Regarding critical decisions about my life, these are already taken by appallingly flawed humans, so what have I got to lose? ;-)
ENIAC was a dataflow computer, not a von Neumann computer - until von Neumann came up with a special wiring set-up for it that allowed it to work on different problems without being re-wired for each problem... at the cost of a good part of its performance! So ENIAC and the Arm V1 don't really "share the same basic architecture" as the article claimed.
This article seems to be stating a problem that everyone KNOWS exists, and then (a) offering solutions which only exist in the author's mind, and/or (b) not offering hard and practical solutions which do exist, and which could be implemented RIGHT NOW.
Examples:
(a) memristors. Good buzzword. No; GREAT buzzword.
Most people don't know enough about memristors to even discuss this decades-old loser technology intelligently, but it keeps rearing its ugly head, mainly in articles such as this where an author wants to bowl you over with technical expertise and 'erudition' without resorting to good, hard rationale as to "why"...or why not. This is a case of "all sizzle" without any "steak", if there ever was one. Where is the reference to bubble memory? Why is that buzzword, that "buzz-technology", missing?
(b) "von Neumann", "von Neumann", "von Neumann", ... . There was even a major-paragraph with its own HEADING! This got really tiresome. Where was any mention of the Harvard Architecture?
The author has not, apparently, ever heard of that other popular architecture, the implementation of which guarantees, automatically, an increase in speed and performance (but he has heard of memristors). I know; I know: it's more expensive to implement. This article is, supposedly, about increasing the capabilities of supercomputers, folks.
If the world's largest manufacturer of dirt-cheap microcontrollers can use the Harvard Architecture to be more than competitive with (the "standard") larger-word-length, more expensive, higher-clock-speed von Neumann devices from its competition, surely the adoption of the Harvard Architecture for supercomputers is not too far out of the question?
Pragmatism, like nostalgia, ain't what it used to be.
According to a book I have about ENIAC, John von Neumann, although a brilliant fellow, was not principally involved in developing the architecture that somehow became named after him. The Wikipedia article on von Neumann says this:
While consulting for the Moore School of Electrical Engineering at the University of Pennsylvania on the EDVAC project, von Neumann wrote an incomplete First Draft of a Report on the EDVAC. The paper, whose premature distribution nullified the patent claims of EDVAC designers J. Presper Eckert and John Mauchly, described a computer architecture in which the data and the program are both stored in the computer's memory in the same address space. This architecture is the basis of most modern computer designs, unlike the earliest computers that were "programmed" using a separate memory device such as a paper tape or plugboard. Although the single-memory, stored program architecture is commonly called von Neumann architecture as a result of von Neumann's paper, the architecture was based on the work of Eckert and Mauchly, inventors of the ENIAC computer at the University of Pennsylvania.
I think Eckert and Mauchly deserve credit for their work.
... someone writes an article about the future. Making predictions, forecasting the demise of all that we know today, making terrible mistakes when describing current and commonplace technology, so on and so on and so on.
Throw in some reference to the world's FAAAAAAAAASTEST!!! supercomputer, to show you're up-to-date. Throw in AI/ML. Gotta have that.
It's the same kind of pitch that stockbrokers make when they try selling you a stock you've never heard of. It's all about the future.
Yeah, memristors are the future. Von Neumann architecture is dead. Moore's Law - kaputt.
We need memristors. To do what with?
Here's the thing: the people who are actually working on the future are doing just that, right now: they're working on it. They aren't talking about it. Why? Because (a) trade secrets, (b) don't let the competition know what you're working on, and (c) if the current working ideas don't pan out, no-one likes getting caught with their pants down making predictions that turned out to be bunk. OK, except professional future predictionists.
Someone's got a memristor startup and they're looking for more funding and some buzz. That's my guess.
"Every now and then ...
... someone writes an article about the future. Making predictions, forecasting the demise of all that we know today, making terrible mistakes when describing current and commonplace technology, so on and so on and so on.
Throw in some reference to the world's FAAAAAAAAASTEST!!! supercomputer, to show you're up-to-date. Throw in AI/ML. Gotta have that..."
...also 'cloud computing', 'NO-code programming', 'agile' programming hor**shit...
“It's tough to make predictions, especially about the future.”― Yogi Berra