And no one thought that an electrical instrument would have problems monitoring an electrical-instrument-destroying phenomenon? Good to see our AI overlords stepping in to help out.
When one of NASA's sun-studying satellites went down, AI was there to fill in the gaps
Neural networks have helped scientists monitor the Sun’s extreme ultraviolet outbursts after an instrument on NASA’s Solar Dynamics Observatory suffered an electrical malfunction, making it difficult for scientists to monitor a portion of the extreme ultraviolet (EUV) energy being spewed by our star. EUV rays ejected from solar …
COMMENTS
-
Thursday 3rd October 2019 07:33 GMT Anonymous Coward
But if they are predicting the values, then either the values are pointless, because they're a guess, or the instruments they are filling in for are pointless, because they're predictable. You either actually observe a value because you need to know what it is, or you don't. If a value can be reliably inferred, why take the readings at all? If they were using the values to confirm the model, fine, but they are not. And if it can't be reliably done, they shouldn't be using the predicted data, because it's just that: a guess.
Probably just another buzzword funding round, this time for NASA.
-
Thursday 3rd October 2019 08:17 GMT Anonymous Coward
I'll probably get voted down for this but ...
Assuming the data is not inferable, then it is "fake data". Fake data means fake science, which puts it perfectly in line with all the other fakery going on around us.
Now, there is one possible upside - assuming they ever get around to properly measuring the data - we will be able to check whether the assumptions / models / inferences are in keeping (or not) with real data, thus allowing us to improve our understanding.
Given the current climate, however, it is likely to be used as confirmation that scientific knowledge is a social construct and we now have the opportunity to correct the behaviour of the sun.
-
Thursday 3rd October 2019 10:43 GMT Anonymous Coward
I think you've very much missed the point -
The neural net was trained on outputs from both sensors on the craft while they were both working, to identify potentially complex correlations between them (even if a million parameters are involved) that could not have been guessed without the ground truth of two years of concurrent data.
In other words, they can only guess at the sensor's outputs now because they really did have the sensor working before. It wasn't a waste to send it up in the first place.
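Roughly the shape of the thing, as a toy sketch: not NASA's actual pipeline, and the band counts and stand-in "observations" below are entirely invented.

```python
# Toy sketch of the "virtual instrument" idea: not NASA's pipeline.
# Everything here (band counts, the hidden "solar activity" driver) is
# an invented stand-in; the point is the workflow, not the numbers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Two years of pretend concurrent observations: a surviving sensor with
# 70 bands and a doomed sensor with 30 bands, correlated only through a
# hidden common driver (think: the state of the Sun).
activity = rng.random((10_000, 5))
survivor = activity @ rng.random((5, 70)) + 0.01 * rng.standard_normal((10_000, 70))
doomed = activity @ rng.random((5, 30)) + 0.01 * rng.standard_normal((10_000, 30))

X_train, X_test, y_train, y_test = train_test_split(survivor, doomed, random_state=0)

# Learn survivor -> doomed while both sensors still work.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# After the real sensor dies, this guess is all you get for its bands.
virtual_reading = model.predict(X_test[:1])
```

The mapping is only learnable at all because the two real sensors overlapped for years - which is why it wasn't a waste to send it up.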
-
Thursday 3rd October 2019 14:16 GMT Draco
I don't think anyone is missing the point
The article states clearly that they are using a neural net to fill in values for a defective sensor (MEGS-A) - this makes the values "fake".
It is true that NASA has a large collection of complete (and, we hope, accurate) data from before the sensor died. As a general rule, it is reasonable to assume there is a strong predictable pattern across the entire spectrum being monitored. As a consequence, inferring the missing data is reasonable - just as we would expect to fill in any missing data for a reasonably understood phenomenon (say, black body radiation).
If the only reason for the satellite is to accurately monitor the sun, and the spectrum is fairly standard / uniform (the way black body radiation is), then missing the lower third of the monitored spectrum (5-37nm) isn't a big deal. The entire spectrum studied is 5-105nm.
If the reason for the satellite is to study the spectrum because we don't understand it well - because it is highly variable - then interpolating the missing 5-37nm becomes problematic.
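For the well-understood case, the fill-in really is routine. A toy sketch (invented temperature, made-up bands) fitting Planck's law to the band the "working sensor" sees and reading the dead band off the fit:

```python
# Toy version of the black-body case: when the physics is understood,
# fitting the measured band recovers the missing band, and the fit has
# one interpretable parameter (T). Temperature and bands are invented.
import numpy as np
from scipy.optimize import curve_fit

h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(wavelength_m, T):
    """Black-body spectral radiance at temperature T (Planck's law)."""
    return (2 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * k * T))

rng = np.random.default_rng(1)
working_band = np.linspace(400e-9, 1000e-9, 50)   # the sensor that still works
dead_band = np.linspace(200e-9, 400e-9, 20)       # the sensor that died
measured = planck(working_band, 5800.0) * (1 + 0.01 * rng.standard_normal(50))

(T_fit,), _ = curve_fit(planck, working_band, measured, p0=[5000.0])
inferred = planck(dead_band, T_fit)               # physics-based fill-in
print(f"fitted T = {T_fit:.0f} K")
```

The crucial difference from a neural net: the fitted quantity here is a physical parameter you can sanity-check, not a million anonymous weights.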
-
Friday 4th October 2019 10:29 GMT 96percentchimp
Re: I don't think anyone is missing the point
The reason for the data is both to study the spectrum and to provide data to spacecraft operators. My understanding is that this solution allows them to provide reasonably useful data for the spacecraft operators, even if the science part of the mission is now redundant.
-
-
-
Friday 4th October 2019 12:45 GMT dvhamme
I fully agree. The synthesized data itself is of no value; the value is in mapping the correlations and dependencies. But using a neural network to model the dependencies is basically cheating, as you don't end up with a model, only a number-spewing orifice. It's like Isaac Newton publishing a lookup table of falling times for spheres of different weights with no accompanying formula.
-
-
Thursday 3rd October 2019 08:00 GMT Anonymous Coward
Neural nets
Hey, wonder if this would work so my Gen 1 NoIR Pi cam (tm) can turn into a FLIR? If I use an infrared thermometer as feedback and replace the NoIR lens with one transparent to the sort of wavelengths it might need, i.e. >800nm, it may be enough. A simple blob of high-temperature wax with the right focal length would do for a lens, or one salvaged from a defunct Laserdisc player or an old in-car CD drive, as these have nice big high-quality objective lenses for focusing the light from the Gen 1 GaAs laser diode.
Incidentally, the 5.6mm IR silicon window modules used on some ear thermometers (like the one I found smashed in the road) do work, but the calibration is easy to mess up, though the sensor generally survives most things as long as the window isn't disassembled or the pins bent/torn out.
If damaged, it's *very* hard to regenerate, and you need to use an old laptop with a DDR SPD chip mod (tm) to read the chip back in circuit.
At least when the inevitable happens you have a hard copy, and it's maybe 2 minutes' work to rewrite the lost parameters rather than 4 hours or more.
As for the neural net, the Pi has a fair amount of processing power, and I could probably use a Gen 1 Movidius mPCIe module, as these are trickling down to the used market.
-
Friday 4th October 2019 10:26 GMT Boy Quiet
Re: Neural nets
As a thought experiment, that sounds interesting.
As a real experiment, which AI engine would you use?
Also, I'm thinking any AI that correlates data one way between two devices should also be trained to correlate data the other way - so in theory your infrared thermometer would (via the AI) produce the images your Gen 1 NoIR cam would have taken.
-
Friday 4th October 2019 12:51 GMT dvhamme
Re: Neural nets
This can work but it's not as easy as you may think. You need a big, complex network and LOTS of training data if you want to get anything resembling real IR output. Basically your network must estimate the object boundaries, object type, object context and environment conditions to give you meaningful IR results. Those tasks in isolation are currently in the realm of the possible, but in combination I'm not convinced at all.
To focus the thoughts: a visible-light pedestrian detection network (e.g. Faster R-CNN) has hundreds of millions of coefficients, and researchers still put IR cameras in tandem with it because there is information in the IR that is simply too hard to infer from the visible-light image.
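For scale, here is what even a deliberately tiny RGB-to-thermal network looks like - orders of magnitude smaller than anything that would work in practice, which is rather the point. A sketch in PyTorch, all shapes invented:

```python
# A deliberately tiny RGB -> thermal network, for scale only. Orders of
# magnitude smaller than anything that would work in practice. All
# shapes here are invented.
import torch
import torch.nn as nn

class RGBToThermal(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),  # 1-channel "temperature" map
        )

    def forward(self, rgb):
        return self.net(rgb)

model = RGBToThermal()
frame = torch.rand(1, 3, 240, 320)      # one NoIR-cam-sized RGB frame
thermal_guess = model(frame)            # pure noise until trained on LOTS
print(thermal_guess.shape)              # of paired RGB/IR data
```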
-
-
Thursday 3rd October 2019 09:10 GMT Anonymous Coward
Models, fits, and wild guesses
“The goal of this work is to fill this gap in measurement capabilities with a virtual replacement for MEGS-A,” a group of researchers said in a paper ...
From where I am sitting, and without reading the article, this looks dangerously close to faking the data. It is absolutely fine to use physical models constrained by the observed parameters to infer the parameters we can't measure. Using an intrinsically inscrutable, million-parameters fit - a.k.a. the neural net - for the same purpose is a lot more problematic, but may be acceptable under certain limited circumstances and with a lot of caveats. Pretending that this is somehow a replacement for the actual measurement is not.
-
Thursday 3rd October 2019 10:03 GMT mj.jam
Re: Models, fits, and wild guesses
I think the point is more that they no longer have the working instrument in space. So until they build and launch a replacement, they have two options:
1. Do nothing, and hope that there isn't a major problem.
2. See if they can use other data to partially fill in the gaps they have.
I don't think they are pretending they will make major advances in the science here, more that they think they can still provide early warning for events.
-
Friday 4th October 2019 10:29 GMT Anonymous Coward
Re: Models, fits, and wild guesses
I don't think they are pretending they will make major advances in the science here, more that they think they can still provide early warning for events.
Given the name of the journal where the article is published[1], that is actually quite funny :-)
Seriously though, this is a fairly common, and perfectly valid, line of reasoning in science: "I haven't got the data I know I need. Can I somehow massage the information I've got to get the same or substantially similar data?" What you do next is however critical. Let's assume you somehow have a prescription which estimates the missing data from other observations. Two possibilities exist:
a) The estimate is not sufficiently similar to the actual measurement. This is a boring possibility [2], so we'll forget about it.
b) The estimate is substantially similar to the actual measurement. This means that the measured data we had as the input does in fact contain the necessary information. With physics-based models, we can trace this information to the specific features of the input data and the processes underlying these features, understand them, and then monitor these features directly. With traditional statistical analysis (e.g. principal-component or time-series analysis) we could at least locate the correlations between the measured data and the desired observable, which will guide our understanding. With a neural-net fit, we've learned nothing beyond the fact that the correlation exists. We do not know why; we do not know which part or feature of the measured signal contains the data; we do not know how robust the correlation is and whether it will still hold tomorrow. That is numerology, not science.
[1] Science Advances
[2] Which, however, can't be eliminated in the present case, since the instrument performing the actual measurement is dead, and a real comparison is no longer possible. The standard counter-argument is that this possibility can be guarded against by dividing the initial dataset into training and testing subsets; if the data from the testing set, which was not used in training, is reproduced, then everything is fine. This counter-argument is valid if, and only if, the available dataset covers all possible variations of all possible input parameters the system depends on. If some part of the parameter space was never explored by the reference dataset, the behaviour of the neural net for those parameter combinations is intrinsically unpredictable. This is the key difference between physics-based models and fitting.
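Footnote [2] is easy to demonstrate in miniature. A toy example - sklearn's MLPRegressor standing in for the real model, sin(x) standing in for the physics - where the held-out test looks reassuring but the net is junk outside the region the training data explored:

```python
# Footnote [2] in miniature: the held-out test looks fine, but the net
# is useless outside the region the training data explored. sklearn's
# MLPRegressor stands in for the real model; the 1-D "physics" is sin(x).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2 * np.pi, 2000).reshape(-1, 1)  # explored parameter space
y = np.sin(x).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x[:1500], y[:1500])

# Test set drawn from the SAME region as training: looks reassuring.
print("R^2 inside training range:", model.score(x[1500:], y[1500:]))

# A condition the reference dataset never saw: anybody's guess.
x_new = np.array([[4 * np.pi]])
print("prediction at 4*pi:", model.predict(x_new)[0], "(truth: 0.0)")
```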
-
-
Thursday 3rd October 2019 13:52 GMT mr.K
Re: Models, fits, and wild guesses
It is only faking data if they claim these are actual measurements, which they are not doing. Interpolating or predicting data from known data is at the core of more or less everything we do, from radio communication to weather forecasts and elections, even down to how our own brains function. It is perfectly fine to infer something whether or not we can measure it. Your job is to make good judgements about how reliable the data are, which you have to do anyway: no measurement is perfectly accurate, instruments are prone to failure, and most instruments don't actually measure the phenomenon itself but something else, from which you infer what you are after.
-
-
Thursday 3rd October 2019 10:55 GMT Twanky
Make a decision!
'EUV rays ejected from solar flares are particularly worrisome. The surge of highly energetic particles bombarding Earth can cause radio communication blackouts, knock satellites out of place, and disturb GPS signals.'
Is it rays or particles?
Come on, why the uncertainty?
Oh...
-
Thursday 3rd October 2019 11:18 GMT Anonymous Coward
Re: Make a decision!
I'm not a solar boffin, but it seems to me that the level of ultraviolet energy emitted would be an indicator that those slower-moving particles are on the way. Light moves faster than the particles in a solar flare, thus giving a little time to engage defensive actions to mitigate possible damage.
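Back of the envelope, with assumed, typical-order particle speeds rather than measured ones:

```python
# Back-of-envelope numbers for the head start. The particle speeds are
# assumed, typical-order values, not measurements.
AU_KM = 1.496e8    # Sun-Earth distance, km
C_KM_S = 3.0e5     # speed of light, km/s

print(f"EUV light:          {AU_KM / C_KM_S / 60:5.1f} minutes")
for label, v_km_s in [("fast solar protons", 5.0e4), ("typical CME plasma", 1.0e3)]:
    print(f"{label}: {AU_KM / v_km_s / 3600:5.1f} hours")
```

Light arrives in roughly eight minutes; the particles take anywhere from under an hour to a couple of days. That gap is the warning window.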
-
-
-
Friday 4th October 2019 08:58 GMT Twanky
Re: [sigh]
In other words, it is a surrogate measure for the solar flares. But instead of establishing the correlation with multiple other measures and using that to predict the dangerous solar flare, they have gone round in a circle and predicted the EUV in order to predict the flare. Genius.
-
-
Friday 4th October 2019 10:26 GMT Boy Quiet
I do not understand why, just because an AI used historic data to work out a correlation, it's being called "fake" data.
Much more interesting to me: did they, or will they, create the inverse AI, using the data from MEGS-A before it went down to predict what the other instruments' readings would be?
If so, we could have software redundancy when we cannot afford hardware redundancy.
-
Monday 7th October 2019 11:18 GMT Draco
Data is a value obtained from direct measurement or computed in a known way.
For example, you can measure the length, width, and height of a box - these are direct measurements. You can also calculate the volume using those measurements. Any "data" that is not directly measured or directly calculable is (technically) "fake".
What is happening in this case is that the MEGS-A sensor (which measures EUV between 5 and 30 nm) is broken. However, the MEGS-B sensor (which measures EUV between 30 and 105 nm) is working. NASA has created a neural net that extrapolates the missing MEGS-A data from the working sensor's readings.
Consider the following: you have a machine which measures the dimensions of boxes. It uses one sensor to measure dimensions between 5 and 30 cm and another for dimensions between 30 and 105 cm. Imagine the 5-30 cm sensor becomes defective, but the engineers create a neural net, trained on past data sets, that can "fill in" values in the 5-30 cm range based on the data the 30-105 cm sensor returns - this is "fake" data. The "fake" data may be good or it may be poor, but it is not "real" because it was not measured.
-
-
Friday 4th October 2019 19:12 GMT TobyK
For scientific validity it's extremely important to keep any model predictions or adjustments separate from the original, sacrosanct, empirical measurements. With the ubiquity of computer modeling this has often been ignored in recent decades, which has turned much research into farce or fraud. The problem with this model, then, lies in how its output is used. The paper's statement that "these virtual instruments will leverage existing and historic scientific instruments to yield similar levels of scientific data products" is a bad sign. Model outputs are not scientific data.