3.2 gigapixels... Mmmm... <insert Homer Simpson drool here>
I'm wondering just how long it takes to dump a single image onto permanent storage; I suspect it might be a matter of "click" followed by "make and drink a coffee."
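Back of the envelope (my numbers, not SLAC's: 16-bit pixels and a guess at sustained storage bandwidth), it's more "sip of coffee" than "whole coffee":

```python
# Rough guess at how long one frame takes to write out.
# Assumptions (mine, not LSST's): 16-bit raw pixels, ~1 GB/s
# sustained to permanent storage.
pixels = 3.2e9
bytes_per_pixel = 2                       # 16-bit, uncompressed
frame_bytes = pixels * bytes_per_pixel    # ~6.4 GB per frame
write_speed = 1e9                         # 1 GB/s, a middling RAID/SSD figure
print(f"frame size: {frame_bytes / 1e9:.1f} GB")
print(f"write time: {frame_bytes / write_speed:.1f} s")  # ~6.4 s
```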
Construction of the LSST Camera, destined for the Vera C. Rubin Observatory in Chile, has been completed at the SLAC National Accelerator Laboratory in Silicon Valley. Dubbed the Legacy Survey of Space and Time (LSST) Camera, it's capable of taking photos with a resolution of 3,200 megapixels. It's five years late – and it's …
There was a design (I don't know if it was ever built) for a camera like this for the Anglo-Australian Telescope (IIRC) that had two sets of CCDs back to back.
You took a picture with one set while you were reading out the other, then flipped them over to repeat the process.
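That back-to-back trick is basically double buffering. A toy sketch of the cycle, with entirely hypothetical names and nothing to do with the actual AAT design, just to show that exposure and read-out overlap:

```python
# Toy sketch of the flip/expose/read-out cycle described above.
# All names are hypothetical; the point is that exposure and
# read-out overlap, so the telescope never sits idle.
import threading

def expose(ccd_set):
    print(f"exposing on CCD set {ccd_set}")
    # ... integrate photons for the exposure time ...

def read_out(ccd_set):
    print(f"reading out CCD set {ccd_set}")
    # ... destructive read-out clears the chips ...

def observe(n_frames):
    active, idle = "A", "B"
    expose(active)                               # first frame
    for _ in range(n_frames - 1):
        reader = threading.Thread(target=read_out, args=(active,))
        reader.start()                           # read the frame just taken...
        expose(idle)                             # ...while exposing the other set
        reader.join()
        active, idle = idle, active              # flip the two sets
    read_out(active)                             # read out the final frame

observe(4)
```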
CCDs actually store the image electronically on the chip, and it stays there until reading it out destroys the image and clears the chip.
The LSST website says it generates 15 terabytes in a night, but not how much of it is kept.
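A rough sanity check on that figure, assuming (my guess, not an official cadence) around a thousand 15-second exposures a night at 2 bytes per pixel:

```python
# Sanity check on the 15 TB/night figure. Cadence numbers are my
# assumptions, not official ones.
exposures_per_night = 1000
frame_gb = 3.2e9 * 2 / 1e9                      # ~6.4 GB per raw frame
raw_tb = exposures_per_night * frame_gb / 1000
print(f"raw science data: ~{raw_tb:.0f} TB/night")  # ~6 TB
# Add calibration frames, metadata and processed products and the
# quoted 15 TB/night stops looking surprising.
```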
How does that compare to the HST, taking into account that even in the high Chilean mountains you have to deal with atmospheric distortion?
If I read the Wikipedia article right, this is for massive surveys at a scale not previously achieved, whereas HST is more for pinpointing specific areas or objects?
Modern adaptive optics pretty much completely eliminates the issue of atmospheric interference in visible-spectrum astronomy (the atmosphere blocks much of the rest, so for IR and UV you do need space telescopes).
This is largely why there is currently no replacement for the Hubble planned. The only planned visible-spectrum space telescope is the Nancy Grace Roman Space Telescope, which is using one of the spare Hubble-class KH-11 telescopes that the NRO donated to NASA.
Yes, this is a survey telescope. The idea is to continually image the whole sky (over however many days) and so spot 'new' objects that you can follow up with bigger telescopes or space telescopes.
Adaptive optics doesn't really apply to this sort of instrument; you can only correct the wobbly atmosphere over small patches of sky. The sky wobble at this site is <0.5 arcsec (1 arcsec = 1/3600 deg) and the field of view is 3.5 deg. The main requirement is a site that is never cloudy: if you are continually imaging the whole sky, you can't miss half of it because it was raining that night. The thing about a desert at 15,000 ft is that you don't get a lot of rain!
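Putting rough numbers on it (the isoplanatic patch size below is a generic visible-light ballpark, not a site measurement), you'd need over a million independent corrections to cover one field:

```python
# Why AO doesn't help a 3.5-degree survey field: the atmosphere is
# only coherent over a tiny "isoplanatic patch". Patch size here is
# a typical visible-light ballpark, not a Cerro Pachon measurement.
patch_arcsec = 10.0                        # patch width, ~arcseconds
field_deg = 3.5
field_arcsec = field_deg * 3600            # 12,600 arcsec across
patches = (field_arcsec / patch_arcsec) ** 2
print(f"~{patches:,.0f} independent patches in one field")  # ~1.6 million
```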
No it won't.
The satellites only leave trails when the sun hits them, which only happens near dawn/dusk when they are in the sun but the telescope is in the dark.
You try to image when it's dark; at dawn/dusk you normally try to schedule so you are pointing away from the sun (mostly because of sky glow), but you balance this against looking near the horizon and having to look through more atmosphere, which gives crappy images.
So I suspect with this mission you would want to stick to looking much closer to straight up and just live with losing 30 min/night.
The Starlink satellites won't leave so many trails, as you point out.
Instead, they will just occult all sorts of interesting stuff.
Such as the alien fleet that is spiralling Earthwards, hiding behind Elon "Lizard Lord" Musk's space birds. That is the only logical explanation for making the new Starlink design too big to launch without Starship: as the Slaver ships grow closer they need the larger shape to keep them obscured. Their final approach was laid out in the plans sent up in the glove box of that red Tesla - and you don't want to know what was really inside that humanoid body...
Thing is... way this planet is going... unless their message of We Come in Peace is "To Serve Man", slavery on an interstellar cruiser could be preferable.
OK, maybe not in the Adamantium mines of Florfix Jumblepad Prime, but why would a civilisation capable of interstellar cruisers need fleshware miners? More likely you'll be zoo exhibits or entertainment for their kids. Or bad examples: "See this species, kids? They had one of the few absolutely perfectly placed planets around a G star! And they blew it!" Still not that horrible an existence, all things considered.
But the satellites are moving very fast and will only cross a pixel for a tiny fraction of a second during the 15 sec exposure, so they will only obscure a tiny amount.
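Quick sums, with the pixel scale and satellite angular rate being my assumptions rather than LSST specs:

```python
# How long a LEO satellite spends on any one pixel. Pixel scale and
# angular rate are my assumptions (typical survey-telescope numbers).
pixel_arcsec = 0.2                   # plate scale guess
sat_rate_deg_s = 0.7                 # ~LEO angular speed overhead
sat_rate_arcsec_s = sat_rate_deg_s * 3600
dwell = pixel_arcsec / sat_rate_arcsec_s
print(f"dwell per pixel: {dwell * 1e3:.2f} ms of a 15 s exposure")
# ~0.08 ms, i.e. a few millionths of the exposure on any given pixel.
```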
The reason they are building it in the Andes is so that giant Andean condors can be trained to fly in circles over the camera whenever a lizard-people ship is arriving.
Rare gourmet meal. Livestock breeds early and often, at the age the meat is tenderest. So with a few billion stars around and proper culling practice, the supply can be managed to keep prices up. As a bonus, nice resort property becomes available for the Dihydrogen Monoxide lovers.
It uses a fridge. A camera this big would use metric shit-loads of LN2, and since you want to run 365 days/year for years, that is a metric shit-tonne of LN2 to get to a telescope in the middle of nowhere.
Interesting that they do cool to -100°C for a 15-second exposure. We had to cool CCDs to that 30 years ago, but modern dark-current reduction designs (thanks, Sony) made cooling short-exposure CCDs unnecessary.
A bigger issue for this camera is probably cooling all the readout electronics. With that many chips you are going to have a lot of readout/processing electronics directly behind the chips, i.e. in the middle of the telescope, where you don't want to dump any hot air. The advantage of a fridge is that you can put the heat exchanger away from the telescope.
Cerro Pachón is lower than the Bolivian capital (La Paz) and just below Ecuador's capital (Quito), based on reported heights for each.
.. But it's still reasonably high (certainly, those unaccustomed to such heights should be wary of altitude sickness), though when I was in the Andes, 4,000 m+ was about the height I treated with caution, as you could noticeably feel oxygen-hungry physical activities becoming more of a struggle. It really varies a lot between individuals how well (or not) your body copes at altitude: the other Europeans on the Andes trip with me were struggling more than I was, so I was either naturally lucky, or my youthful pursuits of swimming and scuba diving trained my body to cope well on reduced oxygen. (When diving "on air", especially quite deep, you control your breathing so as not to waste the air supply* in your tanks; you want to keep within your dive plan, as you really don't want to be low on air and forced into an uncontrolled ascent, for obvious bends reasons, if your dive was beyond about 9-10 m.)
* The biggest hurdle to overcome as a novice scuba diver is to not "panic breathe", but to do very controlled breathing (keeping breathing relaxed and "minimal", whilst still ensuring you breathe enough not to suffer from oxygen deprivation).
How far is 3.2 GP ahead of the best consumer cameras? Not being a photo person [if I cannot remember it, why would I want a photograph? If I can, also why?], I had no idea.
102 MP appears to be the largest, but probably uses some sort of interpolation on lower-resolution optoelectronics, due to my being a cynical sod.
3.2 GP is pretty impressive, as it equates to a square array of 56k × 56k: if you mapped a big 300 mm × 300 mm face onto 56k × 56k, each array element would correspond to ~5 µm (a red blood cell is ~8 µm).
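Checking those numbers:

```python
# Verifying the square-array equivalence claimed above.
import math
side = math.sqrt(3.2e9)
print(f"square array: {side:,.0f} x {side:,.0f}")    # ~56,569 x 56,569
element_um = 300e-3 / side * 1e6    # 300 mm face mapped onto that array
print(f"element size: {element_um:.1f} um")          # ~5.3 um
```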
The impressionists might lead us to believe that the beauty is in lower-resolution images (understandably, as beauty is clearly in the eye [mind] of the beholder). Still, the idea of a 3.2 GP snap of the white-bikini-clad Ursula Andress emerging, like Aphrodite, from the waves (Dr. No) is rather impressive and could quite nicely cover a football (rugby) field. How much trouble would you be in if you made any of those old Bond movies today?
I suspect this camera is as much a product of the supporting technologies as of the optoelectronics alone. The scientists and engineers involved must have pushed each technology to its limits and integrated the results into a unique instrument. Their reward will be the priceless astronomical observations it will make.
Back in the day, when sensors were crap, I worked on creating large images from a microscope slide: essentially, a microscope with a camera would move, take a shot, move again, take a shot, etc. (it took a lot of individual shots) until the required target area of the slide was covered.
The images were sent off and stitched together on a different machine.
That was nice and easy, as the images came in sequentially.
With the telescope, it will be: do nothing for 15 seconds, then the sensors send their images off to be processed, be that in a queue-based system or "all at once" via some decent-bandwidth coupling. I'd expect the stitching together to be done outside the computer system that's controlling the movement of the telescope, as you do not really need one jack-of-all-trades system to do both movement control and image stitching. (It also makes it a lot easier to have separate systems, since chances are the sensors will be easily swappable, and as time goes by better sensors will become available, so the image-processing software and hardware are the most likely to need an upgrade.)
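A toy version of that tile-and-stitch step, assuming perfectly aligned tiles on a known grid (real stitching has to register the overlaps, which this skips):

```python
# Toy mosaic stitching, as with the microscope rig described above.
# Assumes perfectly aligned, equal-sized tiles on a known grid; real
# stitching cross-correlates overlapping edges to register the tiles.
import numpy as np

def stitch(tiles, grid_rows, grid_cols):
    """tiles: list of equal-sized 2-D arrays in row-major grid order."""
    th, tw = tiles[0].shape
    mosaic = np.zeros((grid_rows * th, grid_cols * tw), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, grid_cols)
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic

# e.g. a 3x3 grid of 512x512 tiles -> one 1536x1536 image
tiles = [np.random.rand(512, 512) for _ in range(9)]
print(stitch(tiles, 3, 3).shape)
```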
"102 MP appears to be the largest, but probably uses some sort of interpolation on lower-resolution optoelectronics, due to my being a cynical sod."

If you're looking at the Fujifilm GFX 100, then it is an honest 102 million pixels sitting behind a standard (for digital photography) Bayer-type RGB filter array. The LSST uses a monochrome imager with six separate spectral-band filters, so at the pixel level it will resolve significantly more colour detail than a normal digital camera and slightly more spatial detail.
I should have added that some modern high-end photographic cameras – the GFX 100 included – can compensate for this by using their sensor-stabilisation systems to take multiple exposures with the sensor shifted by half the width of one pixel at a time, combining the exposures into a single, much larger image with full colour data at every pixel location.
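A minimal sketch of how such half-pixel-shifted frames interleave into a higher-resolution image (my toy interleave, not Fujifilm's actual pipeline, and skipping alignment and demosaicing entirely):

```python
# Minimal half-pixel-shift super-resolution sketch: four exposures,
# each shifted half a pixel (right, down, diagonal), interleaved into
# an image with twice the linear resolution. Real cameras also align
# and demosaic the frames; this shows only the interleave.
import numpy as np

def combine_pixel_shift(f00, f01, f10, f11):
    """f00: base frame; f01: +half px right; f10: +half px down;
    f11: +half px both. All frames the same shape."""
    h, w = f00.shape
    out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    out[0::2, 0::2] = f00
    out[0::2, 1::2] = f01
    out[1::2, 0::2] = f10
    out[1::2, 1::2] = f11
    return out

frames = [np.random.rand(4, 4) for _ in range(4)]
print(combine_pixel_shift(*frames).shape)   # (8, 8)
```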
There's a trade-off between pixel size and dynamic range. I'm guessing they want a LOT of dynamic range if they're taking 15-second exposures. Correcting for optical imperfections requires high sampling precision and no major clipping.
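For a feel of the numbers (generic CCD ballparks, not LSST specs):

```python
# Rough pixel-size vs dynamic-range trade-off. Full-well and
# read-noise figures are generic CCD ballparks, not LSST specs.
import math
full_well_e = 100_000     # electrons a large pixel can hold
read_noise_e = 5          # electrons RMS
dr_db = 20 * math.log10(full_well_e / read_noise_e)
dr_stops = math.log2(full_well_e / read_noise_e)
print(f"dynamic range: {dr_db:.0f} dB, ~{dr_stops:.1f} stops")
# Halve the pixel area and the full well roughly halves too,
# costing about a stop of headroom.
```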
Real 102 MP cameras cost $$$$$. The 102 MP sensors in Samsung phones are a gimmick to create HDR with lots of tiny varied-gain cells rather than one larger cell. The result has to be repainted by AI, something you don't want happening in scientific sensors. I guess a little AI is OK when people write comments here, though.
The telescope will cover the entire night sky every few nights. The data-centre computers will compare the new images with old ones, looking for anything that has changed. Within the first year it is expected to find millions of new objects in the solar system. It will send out alerts for new-object detections within a few minutes of the new image being taken, useful for stuff like new supernovae.
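A bare-bones sketch of that compare-and-alert step, with made-up thresholds (the real pipeline does proper PSF-matched image subtraction before anything like this):

```python
# Bare-bones "compare new image with old" step. The threshold and
# noise model are mine; the real pipeline PSF-matches and subtracts
# calibrated images before flagging candidates.
import numpy as np

def find_transients(new, reference, n_sigma=5.0):
    diff = new - reference
    noise = np.std(diff)
    ys, xs = np.nonzero(np.abs(diff) > n_sigma * noise)
    return list(zip(ys.tolist(), xs.tolist()))   # candidate alert pixels

ref = np.random.normal(100, 3, (256, 256))        # old sky image
new = ref + np.random.normal(0, 3, ref.shape)     # tonight's image
new[42, 17] += 500                                # inject a "supernova"
print(find_transients(new, ref))                  # -> [(42, 17)] (roughly)
```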