
OK, so I need more memory
We are processing up to 4Gpixel images now, so I'd better polish up my act
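For scale, here is a rough back-of-envelope of what an uncompressed 4-gigapixel frame costs in memory. The byte-per-pixel figures are my own illustrative assumptions (plain 8-bit and 16-bit RGB, no compression), not anything stated by the poster:

```python
# Back-of-envelope memory cost of a single 4-gigapixel image.
# Assumed formats: uncompressed RGB at 8 or 16 bits per channel.
pixels = 4 * 10**9
bytes_per_pixel_8bit_rgb = 3    # 3 channels x 1 byte
bytes_per_pixel_16bit_rgb = 6   # 3 channels x 2 bytes

gib = 1024**3
print(f"8-bit RGB:  {pixels * bytes_per_pixel_8bit_rgb / gib:.1f} GiB")   # ~11.2 GiB
print(f"16-bit RGB: {pixels * bytes_per_pixel_16bit_rgb / gib:.1f} GiB")  # ~22.4 GiB
```

So even before any editing layers or undo buffers, one frame alone can swamp a typical workstation's RAM.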
If you listen very carefully, you can hear the owners of high-end digital SLRs yelling “I want it!” at their computer screens: Duke University researchers have stitched together 98 “microcameras” into a 50-gigapixel monster called AWARE 2. As the researchers note in their paper in Nature (abstract here), “Ubiquitous gigapixel …
One of the challenges the increasing image size brings is that you need something to handle and store it. For most domestic users, the kind of resolution that allows an A3 print of an image is more than enough, yet they still fall for the "more pixels is better" BS.
Yet another total waste of computing power...
> they still fall for the "more pixels is better" BS.
I'm not sure 'they' do... the pixel count of cameras released as successors in a range seems to have plateaued; many manufacturers aren't trying to use pixel count as the selling point.
Anyway, if you can have more pixels without compromising quality (i.e. a bigger sensor and correspondingly bigger optics) it is better - pixels can be interpolated to reduce noise, or you can crop down to what is required for your A3 print. That is 'technically' better, not necessarily 'creatively' better, mind.
Consider that new Nokia camera with a daft high pixel count - in low light close up situations (social gatherings etc), pixels are interpolated to reduce noise. When outside in good light, the extra pixels can emulate a zoom, by cropping. Seems a reasonable solution to getting a camera on a small device where mechanical optical zooms are too bulky and fragile.
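The noise-reduction trick described above (averaging neighbouring pixels, often called binning) is easy to sketch. The sensor size, noise level and 2x2 block size here are illustrative assumptions of mine, not the actual pipeline of any particular camera:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "high pixel count" frame: a flat grey scene plus Gaussian
# read noise with standard deviation 10 (illustrative figures).
scene = 128.0
noisy = scene + rng.normal(0.0, 10.0, size=(1000, 1000))

# 2x2 binning: average each 2x2 block of pixels into one output pixel.
binned = noisy.reshape(500, 2, 500, 2).mean(axis=(1, 3))

print(round(noisy.std(), 1))   # ~10.0 : per-pixel noise at full resolution
print(round(binned.std(), 1))  # ~5.0  : averaging 4 pixels halves the noise
```

Averaging n independent noisy pixels cuts the noise by a factor of sqrt(n), which is why a dense sensor can trade resolution for cleaner low-light shots.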
Anyway, I don't think the Original Poster is a domestic user... I would guess that he has something to do with surveying or the like. Cameras with huge pixel counts are used for aerial / satellite photography, or he may have made 4Gpixel images from a montage.
"I'm not sure 'they' do... the pixel count of cameras released as successors in a range seems to have plateaued; many manufacturers aren't trying to use pixel count as the selling point."
Spot on. The lens is considerably more important. The sharpness, contrast and apparent resolving power of that monster are abysmal - no match for almost any of the gigapixel photos on the Gigapan site, http://www.gigapan.org/, which tend to be taken using cameras with decent lenses.
The same applies to domestic digital cameras: my Pentax K100 SLR produces much higher resolution images using its standard kit lens than my new Pentax WG-1 snapper. This is despite the K100 having a 6Mp sensor while the WG-1 has 14Mp. Yes, the WG-1 image files are more than twice the size (4.1 MB vs 1.7 MB for the same test subject) and both were shot with flash to eliminate camera shake.
Compare them using the GIMP at max resolution (1:1 pixel display) and the difference is obvious: the K100 image is still sharp while the WG-1 image isn't, even though its file is twice the size. Compare 1:1 (K100) with a 50% shrink (WG-1), so details appear the same size, and it's still obvious that the K100 image is sharp while the WG-1 image isn't.
Fortunately I bought the WG-1 for use in situations where I wouldn't dream of using the K100, such as in a single seat glider cockpit, or I might be disappointed with it. As it is, it does its job rather well.
Thanks for this comment - especially the link to gigapan. That's just wasted half a morning for me!
I have 50+ years of photographic experience; sadly this is actually about 100 times 6 months of the same novice experience. Your comment has explained, in one simple paragraph, what I have failed to understand in all that time and why my Nikon D80 delivers such poor images [poor operator (80%) + poor lens (20%)].
Yeah, sounds like the theoretical HEDSLROs fell into the same trap. It's all about the optics; megapixel stupidity is what gets us cameras with dire low-light capabilities and absurdities like that new Nokia abomination.
New world of snoopery? Doubtful. They understand the need for good optics.
>absurdities like that new Nokia abomination.
Actually, it's a reasonable engineering solution to putting a camera on a device that is too small and too roughly handled for a mechanical optical zoom and aperture assembly. It is based on what users might actually use it for: the observation that close-up shots are often indoors (low light, so interpolate pixels to reduce noise) and that distance shots are usually outside (more light, so no interpolation required, and you can crop to zoom). It is not trying to be a fancy camera, just a device to allow people to snap pictures of their mates at parties, and pictures of landmarks when they are sightseeing.
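The "crop to zoom" half of that trade-off is simple enough to show directly. The resolutions and the `crop_zoom` helper below are made up for illustration; they are not anything from the actual phone:

```python
import numpy as np

# A dense sensor lets you "zoom" by cutting out the centre of the frame
# while still keeping enough pixels for the output. Figures illustrative.
full = np.zeros((6000, 8000), dtype=np.uint8)   # ~48 Mpixel frame

def crop_zoom(frame, factor):
    """Return the central 1/factor of the frame in each dimension,
    i.e. the field of view of a factor-x zoom, with no moving optics."""
    h, w = frame.shape
    ch, cw = h // factor, w // factor
    top, left = (h - ch) // 2, (w - cw) // 2
    return frame[top:top + ch, left:left + cw]

print(crop_zoom(full, 2).shape)  # (3000, 4000): still 12 Mpixel at "2x zoom"
```

No mechanism to break, which is the whole point on a phone that lives in a pocket.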
> Those example pics are not very impressive at all.
No, neither were early digital photographs compared to film. However, even in the infancy of digital photography, (thousands of pixels, hundreds of thousands of dollars) there were scenarios in which it was used - such as astronomy.
This is a prototype. Click through their links to 'Evolution of Image Quality' to see how far they have come and how they consider this a work in progress.
It seems that many of us here have been spoilt by those photo-stitched giga-pixel images of London and the like.
...the acuity is pretty well what a single camera would give. What they have done is stitch together a lot of images to get a bigger field. And with crappy smoothing.
I thought there was something clever going on. Like blending bracketed images, differencing out aberrations and interpolating focus across multiple sensors.
Then if you stitched those together you'd be able to sex a gnat at 100 miles.
What you are talking about is software cleverness. Whilst useful, it has been done before by others. What these guys are doing is researching a hardware system; obscuring their work with software tricks is not the point of this exercise.
Using bracketed images would produce artefacts caused by movement in the subject between frames. If they wanted that, they would just take a thousand pictures with a standard DSLR and stitch them together with a commercial piece of software - but that would not be new.
If you click on the link in the article, and go to 'Evolution of Image Quality', you will see them explain what has caused the aberrations in their images, and what steps they have taken to minimise them. They would rather eliminate the aberrations at source than say 'It's close enough now, we'll fudge it in software', though obviously any eventual product that comes from this would incorporate post-processing, like a compact zoom camera correcting barrel distortion in software.
The Hubble definitely has the optics to match that.
And Carl Zeiss didn't get his reputation by endorsing crappy lenses. As others said, crappy lens, crappy pictures... which now take gigabytes to store. It already has the look of an LHC or NASA part anyway. Wires and polished stainless steel...
Okay in engineering terms, it is far and away better than old fashioned "film" photography.
But that doesn't take human nature into account.
With digital, you end up with 10 billion photos, mostly poor, that nobody ever looks at. You back them up for your whole life. Once in a while you show somebody a tiny, shaky, upside-down image on your iPhone, while yelling at them how fantastic modern life is. You go to a party and all you see is cameras where there should be happy faces.
Building a camera that takes your old SLR lenses.
Due to chip yield issues, most digital cameras, even most digital SLRs, have sensors significantly smaller than the 24 by 36 mm size of the film frame in a 35mm SLR. If you can digitally stitch together images from smaller sensors with a good result, using prisms to split the image to avoid gaps between sensors, then you can make a big sensor out of several much cheaper small ones.
Since you have to do the splitting up before the focal plane, the result couldn't be an SLR, because the prisms would get in the way of the mirror flipping up, but the technology would have its attractions to the consumer.
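A toy sketch of that stitching idea, deliberately ignoring the real optical problems (alignment, vignetting, per-sensor gain): if the prisms split the image so the tiles abut exactly, assembly is just block concatenation. The per-sensor resolution here is my own illustrative figure:

```python
import numpy as np

# Build one large frame out of four smaller "sensor" tiles.
H, W = 480, 640  # per-sensor resolution (illustrative)

rng = np.random.default_rng(1)
tiles = [[rng.integers(0, 256, (H, W), dtype=np.uint8) for _ in range(2)]
         for _ in range(2)]  # a 2x2 grid of independent sensor readouts

# With the optical split gap-free, stitching is a 2x2 block concatenation.
mosaic = np.block(tiles)

print(mosaic.shape)  # (960, 1280): one frame from four cheap sensors
```

A real system would of course need per-sensor calibration and sub-pixel registration before the tiles could be joined this naively.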