
Generation Y
"Butt, butt, does it affect my selfies?!"
Users of sixteen of the world's most prestigious optical telescopes - including the Hubble Space Telescope - are revisiting old data in case an analogue-to-digital converter design has polluted the instruments' measurements. Analogue-to-digital converters (ADCs) are where the real world meets the digital, and in the case of …
I think I know what's going on with the ref voltage: apparently the ADC (or even external circuitry) draws different levels of current depending on the digital bit pattern, causing changes in voltage on its reference voltage supply. And if the ref voltage isn't filtered and/or regulated correctly, you end up with errors based on "how many 1 bits", as the article appears to indicate.
Solution: you put a proper regulator and/or filter on the ref voltage, or [as the article suggested] use differential voltages against a different reference. Or both.
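Just to make that mechanism concrete, here's a toy Python model of the failure mode - the droop-per-bit figure and the ideal transfer function are completely made up for illustration, not taken from the article:

# Toy model of a code-dependent reference droop (all numbers hypothetical).
# Assumption: each '1' bit in the output code pulls the reference down slightly,
# which shifts the code you get back for the same input voltage.

VREF_NOMINAL = 2.5      # volts, ideal reference
DROOP_PER_ONE = 0.0004  # volts of sag per '1' bit in the code (made up)
BITS = 12

def ideal_code(vin, vref=VREF_NOMINAL):
    """Ideal ADC transfer function: straight ratio of Vin to Vref."""
    return min((1 << BITS) - 1, int(vin / vref * (1 << BITS)))

def droopy_code(vin):
    """Re-convert with a reference sagged in proportion to the popcount of the code."""
    first_pass = ideal_code(vin)
    vref_sagged = VREF_NOMINAL - DROOP_PER_ONE * bin(first_pass).count("1")
    return ideal_code(vin, vref=vref_sagged)

for vin in (0.100, 0.875, 1.750, 2.400):
    print(f"Vin={vin:.3f} V  ideal code={ideal_code(vin)}  with droop={droopy_code(vin)}")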
ADCs have been around in discrete form for a very long time, and they nearly always have a reference voltage supply that's separate from the Vcc supply. It's that way on the microcontrollers I've worked with as well. [I've used the 'differential ADC with gain' feature on one particular microcontroller a few times]
Also, you're really NOT supposed to tie Vcc and Vref together unless you filter it really well. You can, but it can result in A:D noise. A 'big fat capacitor' is often sufficient. Lately, I've gone to 'small series resistor plus big fat capacitor'. Yeah, this tech isn't all that complicated.
But ideally your voltage reference is one of those temperature compensated reference voltage chips, which only cost a dollar or two [last I priced them]. And to use one of THOSE you'll need a differential ADC measurement. So there ya go.
But if you SERIOUSLY want accuracy, you'll have something in the reference voltage circuit that actually maintains a relatively constant temperature, because voltage drift with temperature still exists. In the comms world, crystals have been kept in temperature-controlled ovens for decades so that their frequencies stay stable. Same idea for A:D reference voltages.
And yeah, there's really no need to overengineer it, because too much overengineering results in higher current consumption, larger circuit boards, heavier payloads, yadda yadda. However, a series resistor and a fat filter capacitor aren't really that much to add.
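For a rough sense of what that little RC does on the reference pin, the corner frequency of a series resistor into a filter cap is 1/(2*pi*R*C); the values below are picked purely for illustration:

import math

# Hypothetical 'small series resistor plus big fat capacitor' on a VRef pin.
R = 10.0    # ohms (illustrative)
C = 10e-6   # farads, i.e. 10 uF (illustrative)

f_c = 1.0 / (2.0 * math.pi * R * C)
print(f"RC low-pass corner frequency: {f_c:.0f} Hz")   # ~1592 Hz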
Well, if it has more impact on bright objects such as a supernova, it very well might, since a certain type of supernova is used as a standard candle and we specifically use those to measure distances.
On the other hand, we've been using them for a while already, so even though there might be an error in the actual distance, the same error is present in each measurement, so the differences remain reliable (I think?).
In other words, maybe, maybe not.
That's normally done through the redshift and brightness of certain supernovae. The redshift is measured by the position of bright and dark lines in the spectrum, and I don't think that would be affected by this problem.
The problem seems to exist for 'fainter objects and planets', so I would imagine brighter objects like supernovae are not a problem.
Be fun if it were, tho!
I can't remember the details offhand, but one current puzzle is that two different methods of measuring the Hubble Constant give different values - not completely outside the possible error bounds but very odd. Systematic bias like this will need to be checked carefully.
@Ryegrass: I think it will have affected the measurements used because, as I understand it, the historical rate of expansion was mostly established by looking at very old, and therefore very distant and very faint, Type Ia supernovae.
If I understand the problem correctly, the error isn't linear, i.e. it's not simply a case of a larger absolute value producing a greater absolute error. For example, a nominal value of '7' (binary 0111) will produce a greater error than '12' (binary 1100).
The problem for observations of very faint objects, like ancient and distant Type Ia supernovae, is that all of the values you get are very low, so you're only using the bottom few bits of the ADC, effectively reducing its dynamic range; the native range of a 16-bit ADC is 65,536 levels, but if you're only using the bottom (least significant) 3 bits your effective range is only 8 levels (0 to 7).
Now a range of just eight levels, on its own, doesn't provide the level of detail needed to identify smaller differences between different objects; everything would have to fall into one of those eight bands. What you can do, though, is take multiple samples, so that instead of a single reading of, say, 011 (3), you get a series that goes 011 (3), 010 (2), 011 (3), 011 (3), 011 (3), 010 (2), 011 (3), 011 (3)... With enough values you can use statistics to derive a more accurate value than just '2' or '3' - simply averaging the sequence above gives 2.75.
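A quick Python sanity check of that averaging trick - first the sequence above, then a synthetic run where (purely as an assumption for the demo) ordinary noise provides the dither on a faint signal:

import random
import statistics

# The example sequence from above: eight 3-bit readings of the same faint source.
readings = [0b011, 0b010, 0b011, 0b011, 0b011, 0b010, 0b011, 0b011]
print(statistics.mean(readings))    # 2.75

# Synthetic check: a 'true' level of 2.7 LSB plus a little noise,
# quantised to 3 bits each time, then averaged over many samples.
random.seed(1)
true_value = 2.7
samples = [max(0, min(7, round(true_value + random.gauss(0, 0.5)))) for _ in range(10_000)]
print(statistics.mean(samples))     # close to 2.7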
So although it seems that errors are more likely with brighter objects that use more of the full range of the ADC, the relatively small absolute errors that can still occur when you're only using the bottom few bits become relatively large in proportion to the usable range; a one-bit error in a 3-bit value can be worse than an error in two bits of a 16-bit value.
As I read it, the error occurs specifically when looking close to a transition from bright to faint - for instance, trying to observe a distant galaxy just past the edge of a closer, brighter nebula. There's a lot of research happening on data that basically comes from "take an image and crank up the brightness, then look at the stuff just above the noise level for interesting stuff". The problem with this ADC error is that some of that stuff is just noise instead of a signal above the noise (by a wide margin).
The flip side of stuff being thought significant when it wasn't is data that was filtered out and ignored for being "too bright to be our target". I'm sure this is going to have repercussions in the astronomy sciences for some time.
"I blame digital data collection in the first place. It's... unnatural."
Read up on how eyes and ears work. At some point everything gets converted into a kind of pulse coded binary signal before going to the brain.
The nervous system is an incredibly complicated mixed system of analog and digital, one reason AI human brain simulators won't be coming any time soon. Imagine complex gates with perhaps over a thousand inputs that are being summed, some being positive and some negative, in order to drive a transmission line (axon) with a synapse at the end of it.
Having said that, this is absolutely fascinating. I've never used anything higher resolution than 13 bit (12 bit + sign) ADCs, and these are sensitive little beggars - praying to them occasionally helps but mostly they respond best to a diet of absolutely clean DC and a very careful routing of signal lines. Working at higher resolutions with, presumably, cryogenics to keep down the noise floor sounds fascinating but very difficult, so this being the A/D industry's Meltdown doesn't really surprise me.
"Working at higher resolutions with, presumably, cryogenics to keep down the noise floor sounds fascinating but very difficult"
there are many ways to mitigate A:D noise, from low impedance (and cooling) to oversampling. I tend to use the latter, since it works and is pretty cheap to implement. Sample 100 times, report the average. Effectively it cuts the random error by the square root of the sample count, as I recall. It's a statistics thing, yeah.
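Here's a rough sanity check of that square-root claim, with purely synthetic numbers (nothing to do with any real instrument):

import random
import statistics

# Averaging N noisy samples shrinks the random error by roughly sqrt(N).
random.seed(42)
TRUE = 1.234
SIGMA = 0.05    # noise on a single sample (arbitrary units)
N = 100         # 'sample 100 times, report the average'

def one_average():
    return statistics.mean(TRUE + random.gauss(0, SIGMA) for _ in range(N))

singles = [TRUE + random.gauss(0, SIGMA) for _ in range(2000)]
averages = [one_average() for _ in range(2000)]
print(f"stdev of single samples:      {statistics.stdev(singles):.4f}")   # ~0.05
print(f"stdev of 100-sample averages: {statistics.stdev(averages):.4f}")  # ~0.005, i.e. 0.05/sqrt(100)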
I imagine cryogenics would help, too, but I'd start with low Z. High-Z circuits carry less current, and as such, thermal noise is a bigger factor relative to the signal. Assuming the same type of material and construction, a 1M resistor generates ~10 times the voltage noise of a 10k resistor. And noise power is proportional to absolute temperature as well (thermal noise, anyway).
http://www.daycounter.com/Calculators/Thermal-Noise-Calculator.phtml
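That calculator is just evaluating the Johnson noise formula, v_n = sqrt(4 * k * T * R * bandwidth); a quick check (bandwidth picked arbitrarily) shows the ~10x ratio between 1M and 10k, and the square-root dependence on absolute temperature:

import math

# Johnson (thermal) noise: v_n = sqrt(4 * k * T * R * bandwidth)
K_B = 1.380649e-23   # Boltzmann constant, J/K
BANDWIDTH = 10e3     # Hz, arbitrary choice for illustration

def johnson_noise_v(resistance_ohms, temp_kelvin):
    return math.sqrt(4 * K_B * temp_kelvin * resistance_ohms * BANDWIDTH)

for r in (10e3, 1e6):
    for t in (300, 77):   # room temperature vs liquid-nitrogen-ish
        vn = johnson_noise_v(r, t)
        print(f"R = {r:>9,.0f} ohm  T = {t:3.0f} K  noise ~ {vn * 1e6:.2f} uV rms")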
Silicon junctions follow a similar rule, as I understand it: less current, or higher temperature, means more noise [geometry and other factors notwithstanding].
So if you can get away with it, you'd use a bit more current in your 'measured' circuit (and the input amplifier) to cut back on the noise, as well as cooling it as low as you can, solar power and batteries and cryo-cooling hardware notwithstanding. Yeah, on a satellite this can get 'complicated'.
"For example, giving the ADC a differential input rather than a single reference voltage should reduce the chance of it being affected by a stray signal."
Which is standard procedure for every single radio and audio signal that gets digitized out there (anything that has to keep its SNR under control at some point). They were never visited by Captain Obvious, were they?
There is not enough information in your post to be sure, but I suspect you have not understood the problem.
At some point an ADC requires a stable reference, and by stable I mean "within tolerance over the calibration interval".
If the reference is affected by the binary value of the output, it doesn't matter if the input is differential or single sided, except insofar as where in the scale the error[s] become significant. Because the output is still going to consist of ones and zeroes and some values have more ones than others.
"If the reference is affected by the binary value of the output, it doesn't matter if the input is differential or single sided"
right - this assumes that the 'averaging' method commonly used in A:D conversion isn't the inherent problem. But what I think happened is that they wired VRef to Vcc for the A:D converter [a common practice, actually] and forgot to put a big fat cap on the VRef pin, when they should have included one, OR a series resistor, OR [better yet] a voltage regulator with a VRef circuit that's temperature-compensated. So yeah.
Using a differential A:D takes the built-in VRef (possibly tied to Vcc) out of the equation. I'm assuming you'd use a reference voltage circuit of some kind, like an ADR512 [one I've used before], or perhaps something even better.
[I'm also basing this on my work with microcontrollers, some of which have these *kinds* of features]
D A N G E R ........ D A N G E R .......... D A N G E R ........ D A N G E R
< --------------------------- F e e d i n g - t h e - T r o l l ------------------------>
D A N G E R ........ D A N G E R .......... D A N G E R ........ D A N G E R
"unwarranted triumphalism"
I have to ask the Question:
What is it about Science that is so Fearful and Challenging ???
What is Scientism ???
I don't understand what your problem is !!!
P.S.
You may have discovered/'worked out' that the world is NOT Perfect !!!
Why do you expect Scientists to be any different ???
Not even science is perfect... although the physics studied by scientists *IS* perfect, and the more they study physics the closer they get to knowing how it works. That's science. And thanks to scientists discovering this error, we just got another bit closer.
p.s. not just physics, obviously. Science is great for working out other stuff too.
I'm not sure how it's relevant in this context, but "Scientism" is a real thing that's most obvious in climate change: the Holy Consensus is worshipped to the point where anybody who questions it is immediately branded a Denier, with no need for silly things like evidence to destroy entire careers. Despite the fact that we've gotten constant doomsday predictions that have failed to manifest (how many times have we passed the date we were supposed to be below sea level?), the Consensus continues unaffected and unadjusted, continuing to demand we drop our carbon emissions well past the point of diminishing returns. And, of course, those who merely believe climate change is real without believing it to be catastrophic are branded heretics and given no better treatment than Deniers.
Anonymous due to my climate heterodoxy.
"latest failure of Scientism"
there is no FAILURE in science. There are only "failed experiments", from which we gather data, write up what caused the complete cockup to occur, publish it so everyone can point fingers and laugh and make comments [some of which will actually be helpful/useful], and then MOVE ON and don't screw up that way evar again. And in the process, it's more likely that other scientists won't repeat OUR mistakes.
So, "failures" (failed experiments/designs) are just fine, in science. They're at LEAST as good as success. It's how we increase our knowledge. I'd be just as interested in FAIL results as GOOD results for any experiment or design that I make (as long as I discover the reason for fail and am able to correct it). It's all good science.
I'm quite amazed that this was not discovered when the original prototypes underwent calibration and testing.
However, as the cause of the hardware bug is known and is presumably consistent, it may be possible to apply an adjustment so that the collected data can be corrected.
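If the error really is a deterministic function of the raw output code, a correction pass over archived data could, in principle, look something like this - the error model below is entirely hypothetical, and a real one would have to come from re-characterising the actual flight hardware:

# Sketch of correcting archived raw ADC codes with a per-code lookup table.
BITS = 12

def hypothetical_error_lsb(code: int) -> float:
    """Stand-in error model: a fraction of an LSB per '1' bit in the code."""
    return 0.3 * bin(code).count("1")

# Build the correction table once, then apply it to every archived sample.
correction = [hypothetical_error_lsb(code) for code in range(1 << BITS)]

def correct(raw_codes):
    return [code - correction[code] for code in raw_codes]

archived = [12, 7, 130, 2048, 4095]
print(list(zip(archived, correct(archived))))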
Er, anything wrong with a network of resistors?! Once you get up to proper sampling frequencies such as 3.6 GHz (see this bad boy) you're pretty much stuck with a chain of resistors and a network of comparators....
Incidentally, streaming data from something like this and processing it on the fly is, naturally, quite hard work...
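For anyone who hasn't met a flash converter, here's a toy model of the chain-of-resistors-plus-comparators idea (the resolution and reference voltage are arbitrary):

# Toy flash ADC: a resistor divider sets one threshold per comparator,
# and the number of comparators that trip gives the output code.
VREF = 1.0
BITS = 3
LEVELS = (1 << BITS) - 1   # 7 comparators for 3 bits

# Equal resistors in the chain give evenly spaced thresholds.
thresholds = [VREF * (i + 1) / (LEVELS + 1) for i in range(LEVELS)]

def flash_adc(vin: float) -> int:
    """Thermometer code: count how many thresholds sit below Vin."""
    return sum(1 for t in thresholds if vin > t)

for vin in (0.05, 0.3, 0.55, 0.95):
    print(f"Vin = {vin:.2f} V -> code {flash_adc(vin)}")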