Whither Intel/TSMC/GloFlo?
These corps make 1e9-transistor CPU chips. I don't know how big an individual element in a CCD is... could a billion-pixel CCD be fabbed at 28 or 32 or 40 nm per pixel?
The European Space Agency has announced the completion of the camera that’s to be used in its Gaia mission: a billion-pixel mosaic comprising 106 individual CCDs in a 0.5x1 meter array. Assembled in May and June at Astrium’s facility in Toulouse, the camera is designed to map around a billion stars when Gaia’s five-year …
The problem with pixel size is that as you make pixels smaller, they receive fewer photons. That means the signal shrinks with the pixel size, but the noise doesn't (if anything, the noise increases, as you need more gain to actually measure the number of photons). Generally, for a good signal-to-noise ratio, the bigger the pixel the better (within limits!)
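To put rough numbers on that, here's a minimal Python sketch of the scaling, assuming photon shot noise plus a fixed per-pixel read noise; the flux, quantum efficiency and noise figures are invented purely for illustration:

    import math

    def snr(pitch_um, flux_per_um2=50.0, read_noise_e=5.0, qe=0.8):
        # Signal scales with pixel area; shot noise with its square root.
        signal = qe * flux_per_um2 * pitch_um ** 2
        return signal / math.sqrt(signal + read_noise_e ** 2)

    for pitch in (1.0, 2.0, 5.0, 10.0):
        print(f"{pitch:5.1f} um pixel -> SNR ~ {snr(pitch):6.1f}")

Halving the pitch quarters the signal but nowhere near quarters the noise, so SNR falls roughly in proportion to the pixel size.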
It would be like cramming a 2-week holiday to Venice into 1 day by only staying in each tourist location for 1 minute, and travelling between them at 70 mph. Yeah, you'd fit it all in, but you'd miss a lot of the subtlety and the quality of your holiday might be a bit lower :)
The fundamental thing about CCD sensors is that as you make them smaller, the light wells into which the photons fall get smaller. Thus for each exposure, which is essentially an ADC sample of the charge accumulated in the well, you get less charge and a worse signal-to-noise ratio. Pixels are one area where smaller is rarely better; best left to the specialists.
Maybe, but the more pixels you put on a chip, the smaller they are and so the less light-sensitive they are. For astronomical use, you need large pixels.
One thing about the article - the camera itself is NOT stereoscopic in the classical sense of a pair of sensors taking images of the same view at the same time. How could it possibly detect the parallax shift of objects light years away if it did that? The answer is that it will image each area of the sky from the two extremes of the Earth's orbit, thus giving enough separation between the images for depth information to be recovered.
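For a sense of how tiny those parallax shifts are, here's a quick back-of-envelope in Python; the distances are arbitrary examples, and the angle computed is the standard half-angle for a 1 AU baseline (the full shift across the orbit is double it):

    # 1 parsec is defined as the distance at which 1 AU subtends 1 arcsec.
    LY_PER_PC = 3.2616

    for ly in (4.2, 100.0, 10000.0):
        parallax_mas = 1000.0 / (ly / LY_PER_PC)  # milliarcseconds
        print(f"{ly:8.1f} ly -> parallax ~ {parallax_mas:8.3f} mas")

Even the nearest stars shift by well under an arcsecond, which is why the instrument has to resolve angles down towards microarcseconds for the more distant ones.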
The minimum size of a photosensor element in a CCD is effectively dictated by the optics and the nature of visible light, not by the ability to fabricate smaller elements. Indeed, a 40nm element would be something like an order of magnitude smaller than what any visible-light optical instrument could theoretically resolve, and for many types of optical instrument the practical resolving power is significantly worse than that. Whilst there is some value in the sensor "out-resolving" the optics, there is a law of diminishing returns, and there are other issues. One of these is the ability of the photosensors to hold enough excited electrons to provide a decent dynamic range; that ability is directly related to the photosensor size. Then there is the requirement to read all these elements out: a single billion-cell CCD would take a long time to read, and due to the "bucket-brigade" nature of a CCD it would be likely to introduce more errors and noise. A CCD also has to allow space round the photocells for insulation and circuitry, which reduces the surface area actually available for sensing light - very bad news.
For this sort of work, the optimal photosensor size is probably of the order of several microns, a good two orders of magnitude larger than the feature sizes used for producing processor chips.
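A quick sanity check on that in Python, assuming green light (550 nm) and a few illustrative focal ratios; the Airy disc diameter of roughly 2.44 x wavelength x f-number is about the smallest spot the optics can form:

    WAVELENGTH_NM = 550.0

    for f_number in (1.4, 4.0, 8.0):
        airy_um = 2.44 * WAVELENGTH_NM * f_number / 1000.0
        ratio = airy_um * 1000.0 / 40.0  # vs a hypothetical 40 nm element
        print(f"f/{f_number:<3} -> Airy disc ~ {airy_um:5.2f} um, "
              f"~{ratio:4.0f}x a 40 nm pixel")

Even at a fast f/1.4, the diffraction spot is dozens of times wider than a 40 nm element, so nearly all of those pixels would just be sampling the same blur.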
The ESA say that the two telescopes are set at a precise 106.5-degree angle to each other. This fixed angle can be used as a reference that shows where the imaged stars are _in relation to other imaged stars_. Using this data, the ESA will produce a relatively accurate 3D map of star positions (and velocities) for a huge number of stars. It's not a design for stereoscopic imaging of single objects, but a method of fixing the location of objects in relation to other objects, for the production of maps that can be displayed stereoscopically.
Of course, I may _not_ be reading this right...
(Sometimes I look at what humans can do with our technologies, and think "we are so cool!". Then I look at what we do to each other and other animals, and I despair. Is that just me?)
Which tickled something in my head, and Google supplied the rest. e2v was once EEV, and EEV was once the English Electric Valve company. A great example of how a company has to change to stay in business. History here: http://www.alphatronlinac.com/default.asp?articleid=94
It's a bit more complicated.
The telescope has two barrels pointing 106 deg apart, looking at different sets of stars.
Gradually, as it scans, each telescope will see each star many times.
So you measure the 2D XY pixel position of star1 on camera1 and star2 on camera2. Later you have star2 on camera1 and star3 on camera2.
Eventually you have a few billion XY pixel measurements of 20 million stars, and then you do the mother of all simultaneous equations to get their relative 2D positions (a toy version of that solve is sketched below).
Then, independently, you work out the distance to each star (from movement, colour, brightness, etc.) and, et voilà, you have a 3D map. A bit like GPS: the lat/long position is incredibly good, but the distance (height) is a lot worse.
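If anyone fancies a feel for that solve, here's a toy 1-D version in Python, assuming each observation yields the relative separation of a pair of stars; the star positions, pairings and noise level are all made up, and the real thing is 2-D with vastly more unknowns:

    import numpy as np

    true_pos = np.array([0.0, 1.3, 2.9, 4.1, 7.6])  # unknown in reality
    rng = np.random.default_rng(1)

    pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
    A = np.zeros((len(pairs), len(true_pos)))
    b = np.zeros(len(pairs))
    for row, (i, j) in enumerate(pairs):
        A[row, j], A[row, i] = 1.0, -1.0  # each row models pos[j] - pos[i]
        b[row] = true_pos[j] - true_pos[i] + rng.normal(0.0, 0.01)

    # Pin star 0 at zero, otherwise the whole map floats freely,
    # then least-squares the remaining positions.
    solved, *_ = np.linalg.lstsq(A[:, 1:], b, rcond=None)
    print(np.round(np.concatenate([[0.0], solved]), 3))

With enough overlapping pairs the system is heavily over-determined, which is what beats the measurement noise down.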