Well, blow me over.
It seems science, medicine and physics make good use of AI.
Scientists have developed a machine learning model that can outperform official agencies at predicting tropical cyclone tracks, and do it faster and cheaper than traditional physics-based systems. Aurora, a foundation model developed by researchers from Microsoft, the University of Pennsylvania (UPenn), and several other …
This is where it gets messy: calling the bullshit-generating LLMs "AI" is so unhelpful, when machine learning and the like have been around for a long time and are actually useful in so many ways.
Certainly among my circle, calling anything "AI" is instantly met with a lot of cynicism, and understandably so.
I'm afraid that, for all intents and purposes, that bird has long flown the coop; AI is now largely used as a synonym for ML - even (but not invariably) in the tech community.
What is unfortunate is that (useful) ML is in danger of being unfairly tainted by the consequent association with LLMs. I'm seeing this frequently even on The Reg, where any mention of AI triggers a pile-on of lambastification - even if the article in question turns out to have been about a trad/useful application of machine learning rather than LLMs. (Perhaps you may pin that partly on Reg. banner writers - or even authors - but I suspect much of it is simply a kneejerk by respondents who turn purple at any mention of "AI" without bothering to read the article properly.)
> an expert system basically
Ah, nope. Expert Systems use far more explicit rules than this does and those aren't classically derived from Machine Learning.
But, yes, this does use techniques that are also found in an LLM*, but it isn't an LLM.
* and non-trivially (insert snide comment that LLMs use stdio and so does grep).
Indeed. Expert Systems cannot properly be termed machine learning - there's (generally) no "learning" involved; rather, they're rule-bound algorithms that attempt to mimic the decision-making of human experts. That can be effective in some very constrained domains, but tends to fall foul of combinatorial explosions - a "curse of dimensionality" - in more sophisticated/general scenarios.
(And for the system in the article, the first "L" in LLM may apply, but not the second :-))
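For contrast, here's a minimal sketch of what an expert system actually looks like: every decision traces to a hand-written rule, and nothing is learned from data. The rules and thresholds below are made up purely for illustration.

```python
# A toy "expert system": the decisions come from explicit rules an
# expert wrote down, not from anything learned from data.
# These rules and thresholds are invented for illustration only.
def forecast_rule_based(pressure_hpa: float, humidity_pct: float) -> str:
    # Each branch is a hand-authored rule; adding knowledge means
    # adding more branches (hence the combinatorial explosion).
    if pressure_hpa < 1000 and humidity_pct > 80:
        return "rain likely"
    if pressure_hpa > 1020:
        return "settled, dry"
    return "changeable"

print(forecast_rule_based(995, 90))   # rules fire deterministically
```

Contrast that with an ML model, where the "rules" are implicit in learned weights and nobody wrote any of them down.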
My thought also. But it's not just climate change. There are lots of other factors that might impact the forecasts: a volcanic eruption, massive ice-shelf losses, desertification caused by human activity, I suppose earthquakes and tsunamis, and, as always, Don't Look Up.
Actually, it'll react to that change as it occurs, based on updated training data. This is one use of ML where the machine's outputs don't feed back into its inputs (as opposed to something like a law-enforcement use case, where they would).
Expert systems would need new rules for the new climate situation unless they had that folded in already, and that's really hard to impossible. Whereas the ML version of the weather predictor will work more like me when I look at the current front/condition maps for my area and agree or disagree with the models based on my experience of real-world historical data. (I'm usually more accurate than the models in use today for a day or two into the future, but not 100% of the time.)
Starting from mathematical and scientific foundations such as conservation of mass, energy and momentum, Bernoulli's principle and thermodynamics, anyone can derive a weather predictor. Just takes effort.
Feed in a history of previous weather predictions plus a history of observations, shove the lot into a massive LLM, and it is going to generate a lot of predictions that can be made with a high probability of success. Fat chance of deriving useful insights that increase our understanding of weather, or of our behaviour's impact on it, though.
Creating a magic box solution is fine if you just want the magic box to entertain you.
It's not going to help you learn to perform the magic yourself.
So you will be dependent on the magicians with the expensive magic boxes to entertain you.
And you will line up to pay them to perform their magic tricks so that they can build bigger and more magical boxes to make you more dependent on the magic tricks you so desperately want.
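To sketch what "derive a weather predictor from conservation laws" looks like in miniature: below is the 1-D advection equation u_t + c·u_x = 0 (a crude stand-in for a field being carried along by the wind), stepped with a first-order upwind finite difference. The grid, wind speed and "blob" are arbitrary stand-ins, not a real forecast model.

```python
# Physics-based prediction in miniature: 1-D advection u_t + c u_x = 0,
# first-order upwind scheme with periodic boundaries (c > 0).
# Grid size, speed and initial field are arbitrary illustrations.
def advect(u, c_dt_dx, steps):
    """Advance the field `u` by `steps` upwind time steps.

    `c_dt_dx` is the Courant number c*dt/dx; stability needs it <= 1.
    """
    u = list(u)
    n = len(u)
    for _ in range(steps):
        # New value at each cell depends on the old upwind neighbour;
        # u[i - 1] wraps around at i == 0 (periodic boundary).
        u = [u[i] - c_dt_dx * (u[i] - u[i - 1]) for i in range(n)]
    return u

field = [0.0] * 10
field[2] = 1.0                  # a "blob" of weather at cell 2
print(advect(field, 1.0, 3))    # with c*dt/dx = 1 the blob moves
                                # exactly one cell per step
```

With the Courant number at exactly 1 the scheme just shifts the field one cell per step, so the blob ends up at cell 5 after three steps; real models solve far messier versions of the same conservation laws.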
> Creating a magic box solution is fine if you just want the magic box to entertain you.
Well, when that magic box saves hundreds of lives by getting people to evacuate in time (still an uncrackable problem in itself), then I don't care if the meteorologists pull the forecast out of the bunny's ass.
But yes, I hope the basic research keeps going.
Personally, I think there's enough non-deterministic chaos in the system that a low-level bottom-up approach will never really work well. Hell, we still can't really predict how a single airplane will fly, much less a continent-size airmass.
> Personally, I think there's enough non-deterministic chaos in the system that a low-level bottom-up approach will never really work well.
Extremely pedantic mathematician speaking: technically, "chaos" is a strictly deterministic phenomenon1. But that's not really relevant here; what is relevant is predictability, specifically as regards sensitivity to initial conditions. It's not that it will "never really work well" - it already does work well (for some values of "well") on short time scales (for some values of "short") - but rather that it's a case of diminishing returns with respect to the fine-grainedness of available data, and to computing resources. Roughly, doubling the density of data or compute power does not double your practical prediction horizon: because errors grow exponentially, the gains in prediction horizon and accuracy are only logarithmic in the resources you throw at the problem.
FWIW, there is a lot of research currently into "peering into the black box" of ML; of trying to glean how ML systems achieve the functionality that they exhibit. If successful (and it's quite a big if), this could make ML useful not only as a black-box oracle, but also to gain insights into physical principles that mediate the phenomenon at hand.
This applies equally to ML as it does to computational, physics-based modelling.
1Mathematics has wrestled for decades with what "stochastic chaos" might, or ought to mean, but it's not to my mind been terribly illuminating.
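The diminishing-returns point can be seen with a toy chaotic system: the logistic map x → 4x(1−x), whose prediction error roughly doubles every step. Shrinking the initial-condition error by a factor of 10⁴ buys only a fixed number of extra "predictable" steps. The tolerance and starting point below are arbitrary choices for the sketch.

```python
# Sensitivity to initial conditions, illustrated with the chaotic
# logistic map x -> 4x(1-x). Two trajectories start a tiny distance
# `eps` apart; we count steps until they disagree by more than `tol`.
# Since the error roughly doubles each step, the predictable horizon
# grows only logarithmically as eps shrinks.
def steps_until_divergence(eps: float, tol: float = 0.1, x0: float = 0.3) -> int:
    a, b, n = x0, x0 + eps, 0
    while abs(a - b) < tol:
        a, b = 4 * a * (1 - a), 4 * b * (1 - b)
        n += 1
    return n

for eps in (1e-4, 1e-8, 1e-12):
    print(f"initial error {eps:g}: diverged after "
          f"{steps_until_divergence(eps)} steps")
```

Each 10,000-fold improvement in the initial measurement adds only a comparable, roughly constant handful of extra steps: the "exponentially expensive horizon" in two lines of arithmetic.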
Yup, we call those "error bars".
But seriously, forecasting systems generally try to quantify prediction error; e.g., a traditional physics-based weather forecast will generally be re-run many times with perturbed initial conditions and/or parameters to gauge prediction error. That's where your "35% chance of rain" comes from.
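That perturbed-ensemble idea in a toy sketch: here the "model" is just a made-up humidity threshold and the perturbation size is invented, but the mechanism - run many members with jittered initial conditions and report the fraction that predict rain - is the same.

```python
import random

# Toy ensemble forecast: run a (stand-in) model many times with
# perturbed initial conditions and report the fraction of members
# that predict rain. The threshold model and perturbation scale
# are invented purely for illustration.
def toy_model(humidity_pct: float) -> bool:
    return humidity_pct > 75.0      # "rain" if humidity crosses a threshold

def rain_probability(obs_humidity: float, members: int = 1000) -> float:
    rng = random.Random(42)         # fixed seed for reproducibility
    hits = sum(
        toy_model(obs_humidity + rng.gauss(0, 5))  # perturbed initial condition
        for _ in range(members)
    )
    return hits / members

print(f"{rain_probability(72.0):.0%} chance of rain")
```

A single deterministic run of `toy_model(72.0)` would flatly say "no rain"; the ensemble instead quantifies how close to the edge the observation sits.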
Or indeed the horse-racing form guide.
I worked with a gentleman who put his acknowledged data skills to predicting horse races. And for a time (many race meetings) he won more than he lost. But eventually the fund he was managing on behalf of himself and his optimistic colleagues went into the red; the market stubbornly resolved to remain irrational longer than he could remain solvent. He is currently an advocate for cryptocurrency. Make of that what you will...
I've been trying to predict mountain bike races recently with a similar level of success...
My hedge-fund job involved trialling various nonlinear predictive algorithms beyond the standard linear regression that was (probably still is) the bread and butter of financial market prediction. So I tested various models such as nonlinear regression, ANNs, genetic algorithms, yadda yadda. Linear regression almost always won out. The reason was interesting: financial time series have a truly terrible signal-to-noise ratio - every opportunity is very rapidly arbitraged out and you're left with what looks for all the world like a random walk. This scenario heavily penalises model complexity - the number of parameters in the model: the more complex the model, the worse the model fit, and therefore the worse the prediction. For models of a given number of variables, the linear ones are simply the least complex. (The real voodoo was figuring out which information to feed into the models - by adding another variable into the mix, are you increasing the signal or the noise? And also, of course, minimising risk1.)
Predicting weather, though, is a different kettle of fish entirely.
1Ironically, hedge funds at some point came to be associated with reckless trading, whereas in reality they're all about risk management; the clue is in the name. So, e.g., the goal of your trading algorithm is usually not to maximise returns, but rather (roughly) the ratio of returns to volatility - the so-called Sharpe ratio.
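The complexity penalty under a terrible signal-to-noise ratio fits in a few lines: fit a straight line and a degree-9 polynomial to the same noisy series, then score both on fresh draws of the noise. The data, noise level and polynomial degrees below are invented for the sketch, not anything from an actual trading system.

```python
import numpy as np

# Why a terrible signal-to-noise ratio penalises model complexity:
# fit a linear model and a degree-9 polynomial to the same noisy
# data, then score both out of sample. All numbers are illustrative.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
signal = 0.3 * x                                  # weak linear "signal"
y_train = signal + rng.normal(0.0, 1.0, x.size)   # noise swamps it

results = {}
for degree in (1, 9):
    coeffs = np.polyfit(x, y_train, degree)
    fit = np.polyval(coeffs, x)
    in_sample = np.mean((fit - y_train) ** 2)
    # average out-of-sample error over many fresh draws of the noise
    out_sample = np.mean([
        np.mean((fit - (signal + rng.normal(0.0, 1.0, x.size))) ** 2)
        for _ in range(200)
    ])
    results[degree] = (in_sample, out_sample)
    print(f"degree {degree}: in-sample MSE {in_sample:.2f}, "
          f"out-of-sample MSE {out_sample:.2f}")
```

The degree-9 fit always wins in-sample (it has more knobs to chase the noise with) and loses out of sample, which is exactly why the simple linear models kept winning on financial data.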
Does this indicate that the old long-term staff, seen as an excessive cost by manglement, might actually have useful rules of thumb / historical local knowledge - not in the manuals or models - that enable them to better estimate likely outcomes in complex systems? Machine learning, by studying the same historical events, seems to be doing the same thing.