Change we can believe in.
" Folks who aren't keen on climate change"
"FWANKOCC".
Excellent. Better than the hurtful "deniers".
Proposal for integration into official El Reg dictionary has hereby been submitted.
Development work on a not-yet-prime-time weather forecasting model has been seized on as proof that climate models can't be trusted. The reason? Folks who aren't keen on climate change discovered this paper in the journal of the American Meteorological Society, in which Song-You Hong of South Korea's Yonsei University …
Strictly, fwankoccs are not equivalent to deniers. I accept the logic of AGW (though with reservations about 'the sky is falling' sensationalism). I am very definitely not keen on avoidable climate change, even though it's my grandchildren that have to worry and it is unlikely to affect me. On the other hand I suspect some of the people funding people like Watts are very keen indeed on AGW, because they stand to make a killing on the markets by factoring in the likely effects, and one reason for their spreading of FUD is that they don't want other investors to cotton on. The harder denialism is promoted, the higher the chance of a hard landing and the higher that energy prices will eventually go. Effective action now to reduce energy usage and invest in renewables and nuclear will reduce oil and gas industry profits in the long term.
Let's get our terminology straight. KOCHS != FWANKOCCS.
This isn't doubt about the science.
This is not even doubt about the models.
What this is, is doubt about whether, EVEN IF THE MODELS ARE TOTALLY CORRECT, AND THE SCIENCE IS SPOT ON, any meaningful predictions can be made about the future.
The example I always use is balancing a pencil on its sharp end, and predicting which way it will fall.
There is nothing to dispute about the science or the mathematics or the modelling of such an exercise, but in the limit, it doesn't allow you to predict the right answer.
Running a model of that on various different machines may well give you a range of completely different answers. That doesn't tell you the models or the science are wrong, merely that the problem you are trying to solve has a very large range of possible solutions, and which one actually happens is probably beyond your power to predict.
I.e. in this case, it may well be that the science and the models are perfectly correct, accurate and good, but absolutely no use whatsoever in determining the actual course of future events.
That is in the nature of 'chaotic' systems.
If you are lucky, you will have an attractor, which broadly says that the answer will be inside some bounded set of conditions. (the car that runs off the road bounces and ends up in a field). If you are unlucky it may mean that you have no idea what the final outcome will be (the car that hits a tree, and ends up tumbling across the landscape to end up a twisted lump of metal whose exact shape and location are the result of some extremely fine data on exactly what tree it hit, where it hit it, what the ground was made of at a micro scale, and several other factors you would normally ignore).
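Here's the pencil problem in miniature, as a Python sketch (the logistic map, a textbook chaotic system; my own illustration, nothing to do with any actual climate code):

# Two runs of the logistic map x -> r*x*(1-x), started a billionth apart.
# They agree for a while, then diverge completely: sensitive dependence
# on initial conditions, with nothing "wrong" in the model itself.
r = 4.0
a, b = 0.300000000, 0.300000001
for step in range(1, 61):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    if step % 10 == 0:
        print(step, round(a, 6), round(b, 6), round(abs(a - b), 6))

By around step 30 the two trajectories bear no resemblance to each other, even though the equation is exact and the starting points agree to one part in a billion.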
What you don't seem to understand is that mathematics, science and indeed philosophy have, during the 20th century, arrived at an understanding of the nature of problems which are, by their very methodology, insoluble; worse, one of those problems turns out to be even understanding which problems are in fact insoluble.
Science and models are, in an IT sense, COMPRESSED forms of the real world data. Sometimes the real world is very highly compressible: the data turns out to actually contain very little ultimate information.
But this is not always the case, and attempting to apply compression techniques that work well on one data set, to another, results in abject failure to deduce any meaningful predictive power whatsoever.
When we do science, what we are doing, is guessing at what compression algorithms we can apply to real world data, and, insofar as it is successful, the algorithm we use is held to be 'not refuted' (in the Popperian sense). What the Great Unwashed erroneously call 'scientific truth'.
However, such models are not the actual data sets themselves. And re-expanding them into data sets that predict the future is only valid if both the algorithm is correct, and the expansion process itself is not subject to data sensitivity of such magnitude as to make the result meaningless.
You can, in theory, average out a huge bitmap of a detailed picture into half a dozen bytes of information. But you cannot - unless you have accurately detected a deep pattern in the original bitmap - reassemble it from those half dozen bytes.
And that is the problem with climate forecasting. Whether the science is settled or not (and I would say it's very far from settled), the models that the science leads to do not, it seems, produce any reliable forecasting whatsoever.
@itzman: exactly. How can they tell "for sure" that we will all be under 10 meters of water in 100 years because of changes in weather, if the same scientists can't predict accurately (99%+) what the weather will be next Tuesday?
And, no, I'm not a "denier" (and probably neither are most people posting on El Reg who are sceptical about the whole Global Warming political movement). We know for sure that the climate got warmer in the past 10-20 years. Nobody knows why it got warmer. But that also doesn't mean that we shouldn't move away from fossil fuels, if only because they will almost certainly run out in the next 50 years.
@Tomato42 - The climate is what you expect to happen over large areas/globally over long time periods; it's a trend, whereas weather is what happens at a specific point in time. It's much "easier" to predict climate because being off by a few days either way makes no difference at all in the great scheme of things, whereas being off by two days in weather is pretty catastrophic.
Saying that we don't know why the climate got warmer is like suggesting we have no idea. We do know why the climate got warmer; there are several good candidates as contributors, with CO2 at the top of the list, but there are others. The question is what to do about it: we can sit back and do nothing, or we can change what we are able to change in order to try to stop it. Again, CO2 is right at the top of the list of things we can do something about.
Exactly. A nice analogy I saw was that if you place a saucepan of water on the gas ring and turn the heat on you can predict that the water in the pan is going to get hotter, but that doesn't mean that you can predict where bubbles will form as it starts to boil.
As an extreme pedant could I just note that "agenda" is plural (it means "things to do"), so until we replace Latin with the useful English "to do list", could I just request that people write that El Reg either "has agenda" or "has an agenda item"?
Thank you.
El Reg has an agendum? But surely if agenda can be usefully translated as 'to-do list' then it can also be singular - I have a to-do list. I'm pretty sure established English usage means 'an agenda' is acceptable now, even if it wouldn't have been for the Romans.
I'm not sure whose "comments" you are referring to here. But it is not uncommon for scientists to present ongoing work at conferences, and discuss both the promising aspects and the ongoing problems that need to be resolved. Sadly, I think this valuable behaviour is starting to become less common, what with the profusion of camera- and video-wielding scientists in the audience, inclined to tweet/blog on the spot about stuff they've just seen.
And now, it seems, if you are in some sort of potentially controversial area of research, there's the additional risk of being caught up in a media/blog propaganda war.
In their evidence to the House of Commons the UK Met Office stated that they use exactly the same code for weather prediction as they do for climate modelling, so whether this is a climate model or weather model is irrelevant.
“trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts”
There were no differences in initial conditions or in processing methods. The same starting conditions and same processing methods should lead to the same results, but they don't. They lead to wildly different results.
This means that the output of your climate model depends upon the architecture it is run on and not upon the starting conditions or the physics the model is supposed to be modelling.
"to identify ways in which the model's code needs to be polished to make sure it produces consistent results in different environments."
By polishing, what they mean is deliberately decreasing the accuracy of the calculations until there is no difference between hardware. For example, if after every calculation they round all floating point numbers to 3 decimal places instead of leaving them at the maximum accuracy of the underlying machine, then they will achieve their goal.
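A crude Python sketch of that sort of "polishing" (my own guess at the technique, not anything from the actual model code):

# Deliberately discard the low-order bits after every operation so
# that all platforms agree, at the cost of accuracy.
def polish(x, places=3):
    return round(x, places)

a = polish(1.0 / 3.0)   # 0.333 on every machine
b = polish(a * 3.0)     # 0.999 everywhere: consistent, but less accurate
print(a, b)

Consistent across hardware, yes, but only because you've thrown the extra precision away.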
"There were no differences in initial conditions or in processing methods. The same starting conditions and same processing methods should lead to the same results"
Errrrrr, no. The initial conditions may be the same, but looking at the (not yet peer reviewed) paper shows that the model was run using a number of different compilers at a number of different optimisation levels using a number of different MPI implementations. All of these can lead to differences in operations as simple as adding up a list of numbers, and thus over time the results of the calculations may diverge. Hell, depending on the way the MPI (and possibly OpenMP) is implemented, it's perfectly possible to get differing results from two runs on the same machine with the same executable.
And that's even before I consider whether the machine itself is strictly IEEE 754 compliant (maybe; probably not if you look at the grubby details), whether the code is being compiled to exploit the machine in an IEEE 754 manner (probably not in all cases), and whether all compiler/library combinations used by the code can be expected to give identical answers in every single case (probably not). And then there's comparing runs done on differing numbers of cores. Et cetera, et cetera, et cetera ...
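If you doubt that the order of simple additions matters, try this in Python:

# Floating point addition is not associative, so any change in the
# order a parallel reduction combines its partial sums can change
# the result.
print((0.1 + 0.2) + 0.3)    # 0.6000000000000001
print(0.1 + (0.2 + 0.3))    # 0.6
print((1e16 + 1.0) - 1e16)  # 0.0  (the 1.0 is absorbed before subtraction)
print((1e16 - 1e16) + 1.0)  # 1.0

Same numbers, same operations, different grouping, different answers. Now imagine that compounded over millions of timesteps.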
This is usual behaviour. And what is being presented looks like (I haven't read the paper in detail) a reasonable attempt at trying to quantify the observed divergences.
The same code and starting conditions produce different results* based upon compilers, compiler options, libraries and hardware. The spread of results produced is as wide as the spread of results when they vary the starting conditions for model "ensembles". This effectively means that the model ensembles are as dependent upon which compiler/computer system they are run on as they are upon the starting conditions they use. They shouldn't be. They should be solely dependent upon the starting conditions and the physics encapsulated within the code.
* We are not talking about results differing at 10+ significant digits, but at them differing at a couple of significant digits.
I'm having a bit of trouble figuring out exactly why all of this is worth note either way. The best I can come up with is that Wattsupwiththat is taking this opportunity to point out a fundamental underlying flaw in the methodology of Climatology.
About 20 years ago (maybe 40: I'm recalling a book I read about 20 years ago, and it might have been referencing even older events) there was renewed interest in the area of chaos theory. The reason why is that the weather forecasters all assumed their mathematical models were exactly reproducing results when they ran them. But somebody had a system crash, and all they could recover was the printed output, not the register values at the time of the crash. The printout was at about half the precision of the registers. So as a base check they figured they would roll back a couple dozen lines and make sure the output matched before finishing the rest of the calculation. What they found was that no matter how many lines back they went, they couldn't reproduce the calculation from a midpoint value. The farther away you get from the initial inputs, the more variance you get in the numbers. This is accepted for weather prediction.
What climatologists attempt to claim without proof is that these variances in the weather are eliminated by Einstein's solution to the random walk electron problem: the variances on average cancel out. But the whole no-attractor chaotic math basis of weather ought to call that assumption into question.
One day in the winter of 1961, wanting to examine one sequence at greater length, Lorenz took a shortcut. Instead of starting the whole run over, he started midway through. To give the machine its initial conditions, he typed the numbers straight from the earlier printout. Then he walked down the hall to get away from the noise and drink a cup of coffee. When he returned an hour later, he saw something unexpected, something that planted a seed for a new science.
This new run should have exactly duplicated the old. Lorenz had copied the numbers into the machine himself. The program had not changed. Yet as he stared at the new printout, Lorenz saw his weather diverging so rapidly from the pattern of the last run that, within just a few months, all resemblance had disappeared.
From "Chaos: Making a New Science" by James Gleick, 1987.
"What climatologists attempt to claim without proof is that these variances in the weather are eliminated by Einstein's solution to the random walk electron problem: the variances on average cancel out. But the whole no-attractor chaotic math basis of weather ought to call that assumption into question."
Surely they can prove it. Simply run the climate model several times with slightly different initial state and see if you get wildly different results.
Real Time Strategy game developers typically have the same problem but on steroids.
It is not feasible, because of bandwidth, for a server machine to calculate and stream the movement data of 1000s of tanks to all the player machines. Instead the player machines only communicate player inputs, and each player machine must run the simulation itself based on those inputs.
The problem then is that all the machines need to calculate the battle simulation identically given the same inputs. Any slight difference in calculation between machines will create a tiny, tiny difference in the simulation at first, but over time, left unchecked, such tiny differences grow into game-changing differences. E.g. one player sees a tank being blown up by a rocket while another player sees the rocket miss the tank.
It's even possible, if not done correctly, to reach a point where if you record a game on one machine, and then try to replay it on a different machine, the entire battle outcome is different and a different player wins, just because of tiny, usually irrelevant, differences in floating point calculations between the two machines.
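The usual cure is to keep the authoritative simulation state out of floating point entirely. A minimal sketch of the idea (Python for readability; real engines do this with integer types in C++, and the names here are invented):

# Deterministic lockstep: keep all simulation state in integer
# "micro-units" so every machine computes bit-identical results
# from the same player inputs.
SCALE = 1000  # 1000 micro-units = 1 metre

def advance(pos, vel, ticks):
    # Pure integer arithmetic: identical on every machine.
    return pos + vel * ticks

tank_x = advance(5_000, 42, 128)  # position and velocity in micro-units
print(tank_x, tank_x / SCALE)     # floats are fine for display only

Floats can be used for rendering, but the simulation state that decides who wins never touches them.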
My initial reaction to the piece at WUWT was scepticism and the thought that WUWT really didn't understand the software they're criticising.
Then I began to wonder just how they manage to test a complex model like this in the first place, how much testing they do and what sort of verification procedures they have. I've worked on distributed real time calc engines receiving multiple inputs and reacting to them in real time. Testing was time consuming and therefore expensive.
After that, I remembered some of the developer comments on the quality of code written by CRU (released as part of Climategate), and wondered just who it is that has been programming many of the GCMs out there. Professional programmers or climate scientists?
Anyone working on these models like to comment?
Any professional climate scientist would need a thick skin to even contemplate posting commentary here. Climate debate in forums such as this is a pretty ugly process, and, in fact, tends to have very little to do with any of the formal scientific output - it's largely based around rehashed (& sometimes misread) press releases, superficially relevant technical issues, and/or poorly-founded assumptions about conspiracies, groupthink, or special interests (on any side of the argument).
This thread is a brilliant example of an internet forum "climate debate". But not so much a brilliant example of any science. Really, what would some putative "regtard" climate scientist have to offer this thread? And what would it offer them? Would it really be worth it? Do you really think you could convince them it would be worthwhile? No, really?
Well, I wasn't asking for comment on climate change, just looking for someone who works on a GCM to tell us a little bit about how they do their testing, porting, release management and all the usual stuff that any professional programmer would expect to go with developing large/complex software systems.
That would then put the WUWT story into some perspective.
In fact, if someone out there is brave enough, perhaps El Reg might like to give them space for an article...
Quality of the code is a red herring: the code may well not be programmed in the optimal manner or using modern programming techniques, but that doesn't mean that the code doesn't produce correct output.
The issue you have is that a programmer can't do the science, but a scientist can be given basic training to be able to produce functional code. The code only needs to be functional, not optimal. The alternative is to have scientists specify code and programmers code it up. As someone who makes these kinds of specifications and gets programmers to code them up, I can assure you it's a laborious process and while you get good code out - eventually - it probably takes far too long for use in 1-3 year contracts.
The quality of code isn't a red herring. Poor quality code is just that: it probably runs slowly, is inclined to break at the first provocation, is difficult to modify and a pig to test.
You wouldn't get an airline pilot to write the flight control software for an Airbus, but you'd be a fool not to have them working as part of the team developing the software as they're the people who will have to use it, and have the domain experience.
I see nothing that tells me climate scientists would make better programmers than airline pilots....
Firstly, cards on the table. I don't work on the code, and I am not a weather scientist.
But what I am is a support specialist who on occasion does talk to some of these people who work on the Unified Model. The people working on the code are a mix of scientists of various disciplines and professional programmers, working together.
One thing I know from personal experience is that when one of the weather organisations changes supercomputer, they perform months of trial and parallel runs on the new system before they are comfortable with switching over. Ever wondered why it takes more than a year from the start of the install to full switch-over? Well, a significant amount of the time is to make sure that they will not suddenly change their forecasts as a result of the computations coming out differently on the new machines.
Where I work, each time we do an OS patching operation, especially if it involves any of the parallel maths libraries, MPI, OpenMP or compiler systems, or even the firmware of the machines and network components, there is serious soul searching and testing to make sure the results are either the same after the work as they were before, or that the differences can be reconciled. If they can't, there is a real possibility that the upgrade will be backed off. The change process mandates that all work must have a complete backout plan prepared in advance for situations like these, and there are situations where this has been invoked.
There are benchmarks that are used to test the comparability of MPI and serial floating point calculations, and these are generally checked by bitwise comparison to a very high number of significant digits. If there are any discrepancies, then these have to be accounted for or accepted as genuine differences, with appropriate allowances in the code.
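As a toy illustration of that kind of check (my own sketch; the real benchmarks are far more involved and the names here are invented):

# Check that a result from the new build matches the old build's
# result to a required number of significant digits.
import math

def agree(old, new, sig_digits=12):
    return math.isclose(old, new, rel_tol=10.0 ** -sig_digits)

old_run = 1013.2504567891234
new_run = 1013.2504567891298
print(agree(old_run, new_run))      # True: matches to 12 digits
print(agree(old_run, new_run, 15))  # False: flag it for investigation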
Other aspects of what they do include renting time on other vendor systems in order to make sure that the results are broadly compatible between different HPCs, and rotating procurements between the HPC suppliers to make sure that their model and build systems remain as portable as possible.
I don't doubt that other models have similar controls on them.
So the process of working on the models is taken very seriously, and is done by people who often have a vocation for the work they are doing (they could almost certainly earn more money outside of weather research).
re AC 14:59. A guy writes a highly informative post on how HPC specialists recognise FP accuracy issues and how they cope with them. It gets two upvotes, one of them mine.
Mention guns or free speech on slashdot and the normally sane crowd goes screechingly bipolar. With the el reg crowd it's climate articles that bring out the strong nutty flavours with crunchy cluelessness stirred in by the spadeful.
Here's some thoughts for the Daily Mail reader who likes to vent here occasionally: models are never perfect but imperfect models can still be valuable, and please gain some solid understanding of the behaviours and limitations of computer-implemented floating point representations before smashing out your opinions. It may divert you from your usual Mail diet of lesbo-muslim paedo-terrorists out to cannibalise the royal newborn but actually, that's not always a bad thing. Give it a try.
If you bang your head enough times on the keyboard you'll find the downvote button in the end. Go on, you know you want to.
The other upvote was mine. I guess all the other readers are HPC specialists who thought it was just restating the obvious.
The thing I find interesting is that the behaviour pattern being exhibited (of trying to pick holes in something complex that you don't understand in the belief that this will somehow discredit it) is so common that psychologists have investigated it quite extensively, and demonstrated that the great majority of people do not want to know the truth, they simply want their view to prevail. At some time this must have had evolutionary advantages. Nowadays the remaining large mammals and birds must be hoping that it leads to our extinction before we can bring about theirs.
"I guess all the other readers are HPC specialists..."
Here? On the reg? (giggle)
"...who thought it was just restating the obvious"
It went a long way beyond that with details on the practicalities. Very interesting to me, that.
"The thing I find interesting ..."
Agreed
A few years ago a guy developed a method for predicting future severe weather based on sunspot number, and other solar phenomena.
In fact some of those are easily detectable by amateur radio enthusiasts, such as sudden increases in signal strength on certain bands corresponding with CMEs impacting Earth's atmosphere.
The mechanism by which it works is theorised to be an increase in rain clouds and lightning strikes, caused by seeding particles added to the atmosphere as water condenses round solar particle (muon) tracks.
Think cloud chamber on a massive scale :-)
It was at the time laughed at, but it predicted many severe events such as Hurricane Katrina, weeks before they even formed.
Of course the "official" meteorologists just ignored it while carefully documenting "hits" where the model predicted severe events until it became obvious that the model was more accurate than anything they had.
AC/DC 6EQUJ5
This is a good example of non-science.
Nassim Nicholas Taleb puts in an approachable way what a lot of statistical textbooks try to tell you in technical terms - that past performance is no guarantee of the future in any speculative area.
When you write "is theorised", do you mean that there is a proper theory that, say, calculates the expected number of seeds based on actual cloud chamber results, and shows this is a credible mechanism? Or is it the tabloid meaning of "theory" as in "an idea the reporter had while drinking lunch"?
If it "predicted" Katrina weeks in advance, what is the proposed mechanism by which an event weeks in the past causes a major weather event? Because otherwise we might be tempted to say "so what you mean is, you are having to stretch coincidence by extending your mechanism to events quite a long way in the future".
Climate prediction is obviously very difficult. Very clever people with armouries of equipment are constantly arguing over what is going on. Yet somehow $random_guy_with_idea is likely somehow to get it right quite simply. There are very few real world examples (plate tectonics). In this case, I'd like to see any peer reviewed studies that have actually been subjected to statistical analysis.
"The mechanism by which it works is theorised to be increase in rain clouds and lightning strikes by adding seeding particles to the atmosphere as they condense round solar particle (muon) tracks."
That part is mostly correct and is being tested experimentally at CERN. The rest of your comment is complete and utter garbage.
Now, ask yourself: did the "official" meteorologists ignore these findings because they didn't fancy writing another paper on the subject (possibly a Nobel prize, certainly a career-defining paper at minimum)? Or did they ignore it because it's bunkum?
What's the most likely?
Shocking but true.
When you put the same numbers through the same system on different machines it should give the same answers.
In this case it does not.
And the answers appear to be as widely spread as the outputs you would expect when the range of input parameters is varied. Which is clearly wrong.
Bottom line. The website listed will seize on very flimsy evidence to bolster their agenda. It is little better than FUD for climate change denial and as IT insiders most of you would know this.
Not to say concerns don't exist about the models. They do seem systematically over-sensitive to CO2 levels, but that's another story.
Clearly you have never written any numerical software!
If you put the same numbers through exactly the same computation process, then (assuming no Monte Carlo-style random number generation in use) you do get the same answer.
If anything is different (e.g. floating point representation or rounding) you get a different answer. How much of a difference that makes to the end result depends on what you are computing and how you went about it. That is one of the two fundamental problems of numerical analysis:
1) Computers are not 100% accurate for floating point maths (finite precision), thus you need to choose computation methods that are as insensitive to this as possible.
2) Computers do not have infinite speed, so you need to choose algorithms that are fast enough for your budget and/or the state of the art in hardware (even if they are even less precise, such as truncated power series for some functions, etc).
When you have a chaotic system to model, the finite precision effects are magnified. That is almost the definition of a chaotic system! This is exactly the same problem with the initial data quality.
It has bugger-all to do with whether the underlying theory is correct or not, and everything to do with how difficult it is to model, and how the researchers have chosen to implement it on real-world hardware. Looking into what is making the difference might result in a better implementation (e.g. a change of algorithm somewhere that is less sensitive to maths precision) or reveal that the underlying problem cannot be modelled to the precision/time period requested.
That is numerical science in action really.
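As a small illustration of point 1 (my own example, not code from any climate model): compensated summation recovers accuracy that a naive loop throws away, on identical hardware with identical data.

# Naive summation drifts; Kahan (compensated) summation carries the
# rounding error forward and corrects for it on the next addition.
import math

def kahan_sum(values):
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for v in values:
        y = v - c
        t = total + y
        c = (t - total) - y
        total = t
    return total

data = [0.1] * 10_000_000
print(sum(data))        # drifts to roughly 999999.9998
print(kahan_sum(data))  # within an ulp or two of the exact sum
print(math.fsum(data))  # correctly-rounded reference, ~1000000.0000000001

Same theory, same data, very different quality of answer, purely down to the choice of algorithm.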
This post has been deleted by its author
# The classic till-change lesson (originally in TRS-80 BASIC), as
# runnable Python: binary floats cannot represent 4.95 or 0.05
# exactly, so the change comes out slightly off.
price = 4.95      # Input Price
tendered = 5.00   # Input Tendered
while tendered < price:
    print("You can't afford that")
    tendered = float(input("Tender again: "))
change = tendered - price
print("Your change = $", change)  # something like 0.04999999999..., not 0.05
It's been ages since my high school math teacher introduced us to that problem on a Radio Shack TRS-80 Model III.
It's one I will never, ever, ever forget.
"When you put the same numbers through the same system on differenct machines it should give same answers."
Only when you're working with integers in a non-random process using discrete methods. Which climate/weather models, being emergent systems, using real numbers and involving quite a few random effects, do not comply with.
Which is why most weather projections/predictions are averaged over a fair number of runs (usually 500+ for our met office (KNMI) for 3-5 day predictions) and published as nice little expanding graphs, letting everyone + dog draw their own conclusions.
As far as the hotheads pro- and con-AGW are concerned: as you'd expect from a country where >50% of its territory lies well below sea level, the Dutch met office has had a long hard look at the current climate data to see if there's any serious threat to our dikes in the near future (next 100 years), to see whether or not we should be training our fingers to plug holes in them (once again), or top them up a bit. So far the simple answer is that the requirements set forth in the Delta Plan in the 1960s are more than sufficient to cope with any anticipated sea level rise with a reasonable amount of probability. Increased rainfall on the continent may cause some problems with rivers flooding locally, but we've been coping with that for centuries.
So that 10-meter surge? Not so much. Hockey-stick temperature curve? Inconclusive evidence from available data. Deniers? No, but sufficiently sceptical not to accept the extreme ranges of incomplete models as Gospel.
will use any excuse to bury their heads in the sand.
Nothing here is at all new, or surprising, or in any way invalidates climate science. It's well-known that weather modeling is chaotic; small changes to input data result in disproportionately large variations in output. In this case, the output isn't all that different; there's a discrepancy between the test machines, but the overall results of the simulations are similar.
It's also well-known that floating-point calculations can produce different results on different processors. Chips are often designed to perform these calculations with more bits of precision than the output register can hold in order to produce a more accurate result. This is normally a good thing, but can be a problem when exact reproducibility between platforms is needed. Programmers have been dealing with this for many years; for example, back in 2000, Java added the StrictMath functions, which have consistent (but slower) results across all platforms.
AC quoted: “trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts”
I have a real problem with that statement - if a difference in initial conditions leads to (significant) differences in weather forecasts... then how can you classify those initial differences as "trivial"?
My understanding of trivial is of that which has no consequence....
So if these weather/climate guys don't even understand what is trivial and what is not... and can't write+test code that works accurately on different hardware platforms to produce the same results... why should we trust their results at all?
My experience of recent Met Office weather forecasting is pretty poor. I have a strong suspicion that after some fairly major extreme weather event "misses" - they now over-predict rain/snow/wind etc so that they can't be blamed for not warning people - leading to generally overly negative forecasts.
Or to put it another way - in the last two weeks, at home (South London) we've been forecast "heavy" rain on at least 9 days - of which we've had a little rain on two and short bursts of heavier rain on two more. That's not stellar, or any good for our grass!
"I have a real problem with that statement - if a difference in initial conditions leads to (significant) differences in weather forcasts... then how can you classify those initial differences as "trivial""
Are you serious? The word trivial was obviously referring to how tiny the differences were in the initial conditions.
"So if these weather/climate guys don't even understand what is trivial and what is not... and can't write+test code that works accurately on different hardware platforms"
Oh please, get off your high horse. Floating point operations on machines will never be "accurate". You aren't even on the same page as those weather/climate guys. Back to school with you.
I have a real problem with that statement - if a difference in initial conditions leads to (significant) differences in weather forecasts... then how can you classify those initial differences as "trivial"?
Easily, if you accept 'trivial' to be 'trivially small'.
Look up the difference between - for example - tan(89.999) and tan(89.9991) (degrees).
My calculator here gives 6366.1977. A pretty big divergence for a trivial difference of 0.0001 degrees.
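Anyone can reproduce that figure, e.g. in Python:

import math
a = math.tan(math.radians(89.999))
b = math.tan(math.radians(89.9991))
print(a)      # roughly 57295.78
print(b)      # roughly 63661.98
print(b - a)  # roughly 6366.2: thousands of units from a 0.0001 degree nudge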
And that is the problem with people who live in a faux world of simplistic mathematics where maths is always 100% accurate, models always represent reality perfectly, and getting the science right means you know exactly what will happen.
This is probably an institutional bias in forecasters, since a significant amount of their output is for situations where it is important to know the potential 'worst' outcome rather than the most likely outcome. Think forecasts for pilots, and the classic one, the forecasts for the D-Day landings. Complaints in the press of 'getting it wrong' don't help either as it's less heinous to forecast rain when it turns out to be sunny than the other way round (unless you are a farmer in certain seasons, but the bigger farmers pay for specific forecasts, with probabilities and more detail anyway).
One of the fundamental assumptions in science is that if the change occurs beyond the precision of your measurements, it is trivial at least in the sense that you can't measure it. Perhaps moot would have been a better word, but ages ago they settled on trivial.
It has long been known that changes in rounding at digits three orders of magnitude below the limits of measurement have a profound impact on the output results of extended range forecasts. This is the root idea behind chaos theory and the butterfly effect. It's also why we are constantly measuring and updating the weather prediction systems with real data.
Surely what has happened here is not so much that Climate Science has been proven to be all wrong(tm), but that all supercomputing has been proven to be all wrong(tm).
Or maybe, different highly complicated computers work in slightly different ways and you can't just run a piece of software on one and the other and expect it to work in the same way, without checking.
Ever recovered a file from a big-endian system onto a little-endian system? Depending upon the backup package (and OS) it will be garbage, it doesn't mean that the system doesn't work.
"And the answers appear to be as widely spread as the system as the outputs you would expect with the when the range of input parameters is used. Which is clearly wrong."
No. It is not wrong. It is however deeply INTERESTING.
Mind you it is unclear as to what that sentence you wrote actually means.
If you have mathematics of the form
a=((b/1000000000000) * 1000000000000) you would expect the answer to be a==b.
But that is not the answer the real world or a computer might give you.
For example, take a million bottles of homeopathetically prepared liquid with a drop of something in each, and then take a millionth of each one and stick them in a new bottle... would the concentration be the same as the other bottles? No: you are down to random statistics at that level as to whether the millionth of a bottle did or did not contain the actual molecule of the 'magic ingredient'.
In the computer case, even floating point maths has its limits, and depending on how the numbers are internally represented and approximated - and essentially all floating point numbers are approximations, there being only a finite set of totally accurately representable floating point numbers, and an infinity of 'approximations' - you will get a set of different answers. That doesn't mean the model used is WRONG, just USELESS. It hasn't the real power in the real world to accurately predict anything.
What this interesting analysis has revealed, is that even if the models are totally correct, the science is settled etc. etc. It is *still of no use whatsoever* in accurately predicting climate change.
And THAT is why it's very INTERESTING.
"What this interesting analysis has revealed, is that even if the models are totally correct, the science is settled etc. etc. It is *still of no use whatsoever* in accurately predicting climate change."
I disagree. Only if the analysis showed that climate models running on different machines or with different initial state produced wildly different amounts of warming due to human emissions then would that be true. But the analysis doesn't touch on that.
> Only if the analysis showed that climate models running on different machines or with different initial state produced wildly different amounts of ...
You are both right and wrong. The same inputs are producing wildly different outputs, which they are not supposed to do. Varying the initial state is supposed to produce different outputs.
It isn't that much of a problem that the same code/initial state produces different outputs depending upon compilation etc, provided that the difference is less than what you are trying to calculate. If you are trying to calculate some value to the nearest 1/10 then it isn't a problem if cumulative errors result in differences at 1/100. Problems occur when the differences are greater than 1/10. At this point, your model isn't doing what it was designed for.
It's not about whether this particular model is or is not a good or mature one. It's about long running numerical calculations being pretty much inherently unstable unless heroic measures are taken to organize every step in them for minimum error. Just getting a simple Gaussian elimination right is a complex and specialized task.
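For a feel of what "getting it right" involves, here's the textbook tiny-pivot example in Python (a standard numerical-analysis illustration, not code from any climate model):

# Solve the 2x2 system [[1e-20, 1], [1, 1]] x = [1, 2].
# The true answer is x1 ~ 1, x2 ~ 1.
def solve2(a11, a12, b1, a21, a22, b2):
    m = a21 / a11          # elimination multiplier
    a22 -= m * a12         # eliminate x1 from the second row
    b2 -= m * b1
    x2 = b2 / a22
    x1 = (b1 - a12 * x2) / a11
    return x1, x2

print(solve2(1e-20, 1.0, 1.0, 1.0, 1.0, 2.0))  # (0.0, 1.0): x1 is lost
print(solve2(1.0, 1.0, 2.0, 1e-20, 1.0, 1.0))  # (1.0, 1.0): rows swapped first

One row swap (partial pivoting) turns an answer that was 100% wrong in x1 into an exact one. Every step of a long numerical calculation needs that kind of care.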
We KNOW that typical climate scientists are not numerical analysis whizzes. Recall that in FOIA2011 email 1885.txt, Phil Jones admits to not knowing how to calculate a linear trend in Excel (or anything else):
http://foia2011.org/index.php?id=1835
It's very likely that other, more mature, climate models used in publications:
1) have serious numerical analysis instabilities, to the point of being GIGO
2) have never been run on more than one kind of computer
3) would show exactly the same kinds of problems if they were
By focusing on individual models you are forgetting (somehow) that there ARE lots of different models. Not just running on different hardware, but even with different source code, written by different teams. Even the modules are different, some including biology, some not. Especially when you consider these models have been built up over many generations.
So your point is pretty much void. Numerical instabilities are obviously not a huge problem, because if they were, all the models would wildly disagree with each other on the subject of warming in a way they don't.
> By focusing on individual models you are forgetting (somehow) that
> there ARE lots of different models
Which we're told are modelled on BASIC PHYSICS so they must be right.
Although they give wildly varying results (and don't actually agree with real life data).
Which one are we supposed to believe?
I'm a backup expert, but I don't know how to use TSM. Therefore I must know nothing about backup.
That's pretty much the logic you just used. As it happens, I'm an expert in three major backup packages and have good knowledge of at least another four.
Climate science is a multidisciplinary field, including physics, engineering, chemistry, quantum physics, meteorology, maths, stats and probably a few more. It's not surprising that Excel isn't the package of choice for plotting output.
The author of El Reg's article is pointing out how this person seized on a paper done on what is essentially beta software as part of its testing to claim that climate modeling is wrong. Song-You Hong was testing the climate modeling software looking for problems that needed fixing. Anthony Watts of Wattsupwiththat decided that the paper proves climate modeling doesn't work, and some of the commenters here apparently agree with that.
Seriously people, that paper was just saying "hey look, I found a bug in your _Beta_ software." End of story.
"The author of El Reg's article is pointing out how this person seized on a paper done on what is essentially beta software as part of its testing to claim that climate modeling is wrong. Song-"
Exactly
Having read the previous comments, the software needs work to reduce this to acceptable levels, because, as posters have implied, eliminating it is impossible.
The real takeaway from the article was that the website that reported the work is exceptionally biased and should probably be avoided.
In adding a collection of floating point numbers, the result can change depending on which numbers are added first, owing to the limited (though large) precision of computer floating point arithmetic.
Example using 2 significant digits of precision:
Starting from the left: 1 + .03 + .03 + .04 => 1.0 (each intermediate sum truncates back to 1.0)
Starting from the right: .04 + .03 + .03 = .10, then 1 + .10 => 1.1 (the small values accumulate before being added to 1)
As different versions of compilers may change the order in which arithmetic operations are carried out, getting different results from different systems with the same data, on a model as sensitive to minor changes as a weather forecast, is to be expected.
In weather forecasting the input data is noisy and low precision and many values are missing and are derived from averaging nearby data points (that may be 100 miles away) so runs are repeated with small changes made to the input data to see how stable the result is. In some conditions the forecasters can give accurate predictions for several days, in others the results differ so much after 48 hours that no useful longer range prediction can be made.
(Input data (best case) - temperature accuracy 0.1 degree C - 1 part in 1000, Pressure accuracy- 1 millibar - 1 part in 1000, Wind direction - 1 degree - 1 part in 360, Wind speed - 0.1 mph - 1 part in 1000)
Old news: Edward Lorenz discovered that floating point truncation causes weather simulations to diverge massively back in 1961. This was the foundation of Chaos Theory and it was Lorenz who coined the term "Butterfly Effect"
http://www.ganssle.com/articles/achaos.htm
http://www.aps.org/publications/apsnews/200301/history.cfm
"Instead of starting the whole run over, he started midway through, typing the numbers straight from the earlier printout to give the machine its initial conditions. Then he walked down the hall for a cup of coffee, and when he returned an hour later, he found an unexpected result. Instead of exactly duplicating the earlier run, the new printout showed the virtual weather diverging so rapidly from the previous pattern that, within just a few virtual "months", all resemblance between the two had disappeared."
I love how this has become a case of "I disagree with the previous post's stance on Climate Change, therefore I must assert they are wrong based on my opinion".
Fact - Climate Change occurs. We know this because every few million years it gets rather brisk outside, then at other times it's most definitely more of a shorts and t-shirt millennium. Outside of the Creationist side of the God Squad, no one doubts that.
What we're looking at here is yet another example of flaws in the Doomsday Scenario data that we keep having rammed down our throats as fact, rather than it being assigned the probability levels it actually warrants. We're looking at models whose results are driving political changes, taxation etc, and highlighting flaws in how the results are generated, although at least in this case the flaws are inadvertent rather than the outright manipulation we've seen in some sources. Surely if you can't trust that the results are accurate, how can you "predict" the future with any certainty? By certainty I mean "with enough statistical likelihood that I can put laws, taxes etc in place".
I've not read the paper, but in hindsight, I guess it's sort of obvious that if you change the way you calculate your answers and chop and change your resolution, you're going to change the answer between dissimilar systems.
OK, so this is a beta version. My question is, has this test been done on the code that they are using for publications? Given the issues exposed during climategate etc., re. software QA, lack of version control and archiving of data sets etc., I wouldn't be shocked to find that they haven't.
Given the interesting posts here about testing of different systems when switching (i.e. upgrading) from one supercomputer system to another... I wonder how this kind of effect will show itself in "cloud computing".
When you run a cloud process on one day and then run the same cloud process on another, what are the chances of you using the same actual hardware? Probably quite slim, so, as in this situation, you could see differing results due to differing underlying systems.
Most of us would never run anything that requires that level of precision, but some people are sure to.