I could hope...
that somewhere, in a future generation, Hubble will be pronounced one of the Wonders of the Modern World.
It's earned it. Many times over, IMHO.
Computer scientists at NASA are trying to fix the Hubble Space Telescope’s payload computer after the hardware froze due to what's believed to be a degraded memory module. “The payload computer has four total memory modules and only requires one,” a NASA spokesperson explained to El Reg. “You can think of it as being similar …
...by managers who failed math and are generally clueless? The NASA spokesdroid appears to have a problem with numbers, too.
“The Webb Space Telescope will exceed its capability in the infrared once it is launched at the end of this year," a NASA spokesperson said.
The 'it' here refers to Hubble: what the person is saying is that JWST will exceed Hubble's IR capability, which it will.
Hubble's IR performance is limited because the mirrors are kept intentionally warm, which I assume is to prevent thermal-cycling problems without needing refrigerants that would run out. So the Hubble mirrors sit at about 15°C, which means that at longer wavelengths what the IR instruments see is mostly thermal emission from the telescope itself; that limits performance at wavelengths longer than about 1.6μm (see the white paper on WFC3 (PDF link)).
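A quick back-of-envelope check of the above (my numbers, not from the article): Wien's displacement law puts the peak of a body's thermal glow at λ_peak = b/T, so warm optics at roughly 288 K glow strongly in exactly the IR band you'd want to observe, while optics below ~50 K push that glow out past JWST's longest band.

```python
# Rough sketch using Wien's displacement law, lambda_peak = b / T.
# Temperatures are the approximate figures quoted above, nothing official.
WIEN_B_UM_K = 2897.8  # Wien's displacement constant in um*K

def peak_wavelength_um(temp_k: float) -> float:
    """Wavelength (um) at which blackbody thermal emission peaks."""
    return WIEN_B_UM_K / temp_k

hubble_glow = peak_wavelength_um(288.0)  # mirrors at ~15 C
jwst_glow = peak_wavelength_um(50.0)     # optics below ~50 K

print(f"Hubble optics (~288 K): thermal glow peaks near {hubble_glow:.1f} um")
print(f"JWST optics  (~50 K): thermal glow peaks near {jwst_glow:.1f} um")
```

Hubble's own ~10μm glow (with a tail reaching down towards the near-IR) is what swamps the detectors beyond ~1.6μm; JWST's cold optics peak near 58μm, well outside its 0.6–28μm observing range.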
JWST on the other hand will have its mirror and instruments cooled to below 50K, which means it can observe from 0.6μm to 28μm, far longer wavelengths than HST. This means that at least one of the instruments on JWST needs to be actively cooled. As far as I can tell it's the longest-wave instrument: the near-IR detectors dissipate little enough power that the sunshield alone keeps them cool enough (I think the no-power-dissipation temperature on the dark side of the sunshield gets down to about 40K), while the mid-IR detectors need to be much colder still (around 7K) and so need an active cryocooler.
I'm not sure what limits the life of JWST: it may be the cryogenics but I think it's the fuel. Presumably even if the cryogenics run out it can still keep doing long-wave observations until the fuel runs out.
Sadly, while JWST will do fantastically interesting science (deeper IR means cooler and/or further away things) it's going to make less pretty pictures than Hubble and presumably not last as long. It's good they've resisted (if they had to) the urge to make prettier pictures at the expense of science though.
>I'm not sure what limits the life of JWST: it may be the cryogenics but I think it's the fuel.
It's fuel: there are no consumable cryogenics, everything is cooled by fridges.
The limit is orbit-keeping fuel. The orbit is far enough away that although it isn't absolutely stable, you don't need to maintain a very precise position or boost the orbit.
I suspect it will have a much longer mission life than advertised, guessing that some smart scheduling algorithms will let you use much smaller movements/fuel burns and eke out the fuel.
Oh, thank you. I'd got it into my head that it was some open-circuit thing with coolant venting because Planck was, but JWST runs way hotter than Planck did.
That's my guess (longer life) too: they wouldn't be talking about 10 years unless they were hoping for 25...
Not a stupid question at all, and I think I can answer it for you.
What I think they mean by elevated levels of radiation affecting just the one module is not an excessive base rate of radiation (which would affect all equally), but the effects of cosmic rays hitting that device. Cosmic rays are usually alpha (or sometimes gamma) particles flying through our solar system at incredible speeds. They are not a continuous stream, but arrive as single particles from all directions. When one of these hits a piece of electronics (or anything in space, to be honest), it does significant damage in a very small and localised area. You can look up the effect of this on the early space helmets, where you can actually see traces of the particles in the thin layers of gold that were protecting the astronauts' lenses.
Anyway, what's happened is that the current memory module has been hit either more times or at just the right (or wrong) angle/speed to do enough specific damage to make the memory unable to function properly (damaging part of the connector, damaging a trace in the control part of the memory, who knows). It's entirely possible that the other modules are fine, as maybe they have slightly more protection (the damaged one might be on the end of the row, so the others are more protected against cosmic rays from that direction), or they have just been very lucky and avoided the truly damaging hits.
Designing around cosmic ray strikes is actually something you have to do when designing electronics for space, as a direct hit can easily flip some bits and corrupt data or even control programming. It's why you always include multiple backups: if you get a bit flip, you can see it because the other systems still hold the original value and can override the damaged data.
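The backups-that-outvote-the-damage idea can be sketched as triple modular redundancy with a bitwise majority vote. This is a hypothetical illustration of the general technique, not Hubble's actual flight logic:

```python
# Hypothetical sketch of triple modular redundancy (TMR): keep three
# copies of each word and take a bitwise majority vote, so a single
# bit flip in one copy is outvoted by the other two copies.
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies of a word."""
    return (a & b) | (a & c) | (b & c)

stored = 0b10110010             # original value, held in three copies
copy_a = stored
copy_b = stored ^ 0b00001000    # cosmic ray flips one bit in copy B
copy_c = stored

recovered = majority_vote(copy_a, copy_b, copy_c)
print(f"recovered {recovered:#010b}, matches original: {recovered == stored}")
```

A vote like this only survives one corrupted copy per bit position; real flight systems add error-correcting codes and periodic memory scrubbing on top, so flips get repaired before a second one can accumulate.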
Space - it's not the friendliest place to visit, but it sure is interesting...
The typical cosmic ray is a proton: 90% are. Alpha particles (helium nuclei) account for a miserly 9% of cosmic rays. [It did surprise me that they outnumber electrons.]
X and gamma rays flooding the system aren't included in the cosmic ray tally. They're just part of the general shit that is "space". But I'd hazard the most damaging X rays are those caused by charged particles going splat onto a transistor. (Bremsstrahlung)
Sorry, I got that wrong. I had it in my mind that alphas were protons and betas were the helium nuclei. I also had it in my head that gammas did count towards cosmic rays, but I see from a bit of quick research that's not the case. Still, hopefully the overall message of what I was communicating was clear enough.
In my defence I learnt this stuff ~15 years ago, time and wine might have dulled a few corners here and there... ;)
It's entirely possible...
Yes, but it's also entirely possible that the other modules have also sustained similar damage unless they're better shielded. Mark's question was a good one. We don't know and it's going to take experiment to find out. Meanwhile we just have to hope for the best.
Not a stupid question at all, but the answer is it's complicated. (1) The radiation is not uniform. (2) The other modules were not on, so may have had a different response to the radiation. (3) They hedged their bets by installing three redundant modules instead of the usual one. In the end they might still strike out.
Because you need to first analyse all the available data and make sure you are certain of the error and its cause (not necessarily straightforward). You then need to plan exactly what steps you're taking, down to the tiniest detail, because there are no take-backs in space: you brick it, and there are a billion dollars gone and ZERO chance of recovery. You then need to simulate it on the test hardware here on Earth (to prove that you're not going to brick it or have some other unanticipated effect), evaluate that everything worked exactly as planned, and get approval from everyone involved that yes, what you are about to do is the right thing to do, and that no one has any open questions or queries. And then, only after EVERYONE has signed off on it, do you go ahead and do it.
I know modern Agile programming likes to move fast and break things, but that does not work when you're talking about a multi-billion-dollar piece of equipment that you cannot retrieve to perform any physical fixes or replacements on. You bork it, and there goes not just billions of dollars of hardware, but countless hours of science time. Understandably, the people involved move slowly and carefully, and won't do anything without being absolutely sure of the outcomes...
I'd hope they already had a plan on how to switch to a backup module, which they knew worked because it was tested before launch [Edit: it does say in the article "done numerous times during testing of the hardware before launch and the operations procedures for doing this are in place"], but having never done space engineering it could be that things don't work out the way you expect. Visions of a small robot arm unplugging a DIMM and trying to get the replacement to click into place are probably not how they will do it ;-)
Further edit: My question was also based on "Degraded memory modules are easy to workaround thanks to spare components NASA installed in the telescope before launch", but of course I forgot that "easy" means "easy compared to other stuff in space" not "easy" in the more common sense.
Even if there was a plan from before launch about what to do and how to do it, you probably want to go over it with a fine-tooth comb before setting it in motion.
What was good practice 30 years ago, might not really be considered a good idea anymore... ;)
Indeed. Not least because, as the article notes, it's not the same hardware anymore. New batteries, new sensors, various other new bits of electronics attached to things. Your procedure may have worked perfectly before launch, but you want to be very sure it still works with the hardware that exists now.
Plus there's the human side of course. No matter how good the procedure itself might be, you want to be sure you're actually carrying it out correctly. The people doing it today are likely not the same ones who tested and documented it. Some of them may not even have been born when it was launched. Anyone with experience running old code written and documented by someone now retired or deceased will know exactly how well things are likely to go the first time you try running it.
Finally, it's worth considering the time involved. The issue occurred on the 13th, a Sunday. The first attempts to get things working were done on the 14th, but failed. NASA made the announcement about the issue on the 16th and said they were planning to fix it that same day. Exactly how much faster is it reasonable to demand they work?
This is Science we're talking about. I would suppose that, in the interest of fairness, the queue resumes at the point it was when the incident happened.
Hubble is a precious resource. Being granted time means you have something interesting and important to check out. It remains important when Hubble comes back online.
At least, I hope that's the way it works.
I would have assumed the opposite: your slot got lost, and you'll have to reschedule, if it's still appropriate, and get a new slot. The science comes first, which is not equitable, and observing time is not fungible, e.g. "You want to look in THAT direction? Sorry, it's too close to the Sun now" or "That occultation you wanted to observe? Sorry, it's already happened."
Yes, at least in the early days of Hubble in normal mode that I'm more familiar with.
The campaigns are planned well in advance in long cycles to minimize maneuvering (which wastes time and fuel) and instrument changes, and to fit around all the constraints of a Hubble observation.
If the observation (or observer) is important enough it could bump somebody else out of a slot in the next cycle if the telescope is pointing in the right direction and the right instrument is online - this mostly happened if some country/group had lost some guaranteed time and politics was an issue.
I suspect that it is now in a degraded mode where things are more limited.
I've heard a few astrophysicists discuss this in the past (Q&A sessions etc), and assuming it's the same as other observatories (from a booking point of view), you've lost your slot and have to re-apply for time later on (which could be years away, of course!).
Part of the issue is a lot of science has to be timed, for example you might have more than one observatory looking at the same area of the sky at the same time (each with different wavelengths etc), or you might have other dependencies, such as you've booked resources to process the data once captured, or you might be observing something that has specific timing, such as alignments, or seasonal changes etc.
If you push the queue back, then you'd impact many many projects, some of which might be forced to reschedule anyway due to other dependencies, or even just cancel the work!
So if there is unplanned downtime, then it's just bad luck, you've lost your slot. Unfortunate for that specific project (or projects), but everyone else keeps going.
"The space agency is supposed to launch its much-delayed James Webb Space Telescope later this year. The JWST views in the infrared, compared to Hubble's optical and ultraviolet captures, and the two together could be a powerful tool, we’re told"
^ That is the really important bit that gets lost in so many other news reports. The James Webb Space Telescope operates at different electromagnetic spectrum wavelengths to the Hubble Space Telescope so it can never be a direct replacement for the Hubble Space Telescope. Therefore, it is essential that the Hubble Space Telescope is kept going for as long as is practically possible.
Yes and no,
Many of the objects don't need observing at the same time: if we have Hubble data in the UV/visible on file, the object can be observed in the IR in the future with JWST.
The real mission for JWST is to observe things that are only IR: the early universe, with stuff that is now redshifted into the IR and too faint for Hubble anyway.
There might be boring nearby, low-energy objects (like planets) that people might want to observe multi-wavelength at the same time, but nobody cares about a bunch of non-cosmologist stamp collectors.
I just hope the JWST actually launches safely and works correctly; remember the Hubble out-of-focus error. It seems to have been in build and test forever.
Where the JWST is going there is no chance of a repair mission, and a lot more radiation, as it is outside the Earth's Van Allen belts.
You worry about complicate unfolding mechanisms that are impossible to test on Earth?
When have they ever failed, except for both Hubble's and Skylab's solar arrays and pretty much every other time they've been used?
The radiation mix is different: there are more of the high-energy, single-particle-blows-a-big-hole-in-some-electronics type events. But smaller, more vulnerable electronics allow for more redundancy.
Hubble's low orbit gave it a whole bunch of other radiation problems. To be reachable from the Shuttle it flew into a very nasty place where the radiation belts dip down, so it flew through a charged-particle cloud (the South Atlantic Anomaly) every 90 mins.