Imagine having a camera of the future that could capture an expansive photograph of a surrounding scene, yet would be capable of cropping down to view a single insect on a small stone. This is what researchers at Duke University and the University of Arizona are attempting to make a reality.
Scientists are working on creating a 50 gigapixel sensor, which, to put that in perspective, is like using more than 6,000 iPhones to capture an image of the same resolution!
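That comparison is easy to sanity-check. The arithmetic below assumes the roughly 8-megapixel iPhone sensors of the camera's era; the exact count shifts with whichever iPhone model you pick, but the order of magnitude holds.

```python
# Back-of-the-envelope check of the "more than 6,000 iPhones" claim.
# Assumption: an ~8 MP iPhone sensor (typical when this camera was unveiled).
SENSOR_MEGAPIXELS = 50_000   # 50 gigapixels = 50,000 megapixels
IPHONE_MEGAPIXELS = 8        # assumed per-phone resolution

iphones_needed = SENSOR_MEGAPIXELS / IPHONE_MEGAPIXELS
print(iphones_needed)  # 6250.0 -- comfortably "more than 6,000"
```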
The 50,000 MP camera, however, may be more fantasy than reality right now, but scientists believe it could be widely available to the general public in the near future.
The megapixel rating of a camera indicates how much resolution a photograph will have once an image has been snapped. Pixels are individual "dots" of data, so the higher the pixel count, the higher the resolution. But equally important in the pixel equation is each individual pixel's "size"… so it is possible that a 30 MP sensor produces less resolution than a 20 MP sensor with larger pixels.
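The trade-off between pixel count and pixel size falls out of simple geometry: on a sensor of fixed physical width, more pixels means each pixel must be smaller. The sketch below computes pixel pitch for two illustrative sensors (the dimensions are hypothetical examples, not any specific camera).

```python
def pixel_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Width of a single pixel in micrometres, for a given sensor width."""
    return sensor_width_mm / horizontal_pixels * 1000

# Two hypothetical sensors, both 36 mm wide (full-frame width), at
# roughly 20 MP (5472 x 3648) and 30 MP (6720 x 4480) resolutions:
print(round(pixel_pitch_um(36, 5472), 2))  # ~6.58 µm pixels on the 20 MP sensor
print(round(pixel_pitch_um(36, 6720), 2))  # ~5.36 µm pixels on the 30 MP sensor
```

The 20 MP sensor's larger pixels collect more light each, which is why, as the article notes, raw megapixel count alone doesn't settle which sensor delivers better image quality.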
The team at Duke led by David Brady explains how they were able to develop the mega-resolution, gigapixel camera.
“Each one of the microcameras captures information from a specific area of the field of view,” Brady explained. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”
He also says the key challenge for researchers remains "developing high-performance, low-cost microcamera optics and components. While novel multiscale lens designs are essential, the primary barrier to plentiful, high-pixel imaging turns out to be lower power and more compact, integrated circuits… not the optics," Brady said.
The prototype camera measures about two and a half feet square and 20 inches deep. Only about 3% of the camera is composed of optical elements; the remainder is electronics and processors.
The researchers said the processors and electronics would need to shrink considerably before the camera becomes practical for everyday photographers.
"The camera is so large now due to the electronic control boards and the need to add components to keep it from overheating," Brady said. "As more efficient, compact electronics are developed, the age of hand-held, gigapixel photography should follow."
The University of Arizona team assisted in developing software which combines images from the microcameras into one huge, 50 gigapixel image.
Michael Gehm, who leads the team in Arizona, said the problems facing the camera's optics are similar to those confronting supercomputers.
“Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive,” Gehm explained. “Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements.”
He compared the way the camera's optics and lenses function to how a computer network operates.
“A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations,” Gehm said. “Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
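The divide-and-overlap scheme Gehm describes can be sketched in a few lines. This is a toy one-dimensional model, not the AWARE camera's actual geometry: the field of view is split into overlapping tiles, one per microcamera, with the overlap guaranteeing that nothing falls between neighbouring views.

```python
# Toy model of the microcamera array: split a wide field of view into
# overlapping tiles so every point is seen by at least one camera.
# All sizes are in arbitrary illustrative units.
FIELD_WIDTH = 100   # total field of view
TILE_WIDTH = 30     # each microcamera's individual view
OVERLAP = 5         # neighbouring views share this much

def tile_offsets(field: int, tile: int, overlap: int) -> list[int]:
    """Left edge of each microcamera's tile, stepping (tile - overlap)."""
    step = tile - overlap
    offsets = list(range(0, field - tile + 1, step))
    # add a final tile flush with the far edge if a gap remains
    if offsets[-1] + tile < field:
        offsets.append(field - tile)
    return offsets

offsets = tile_offsets(FIELD_WIDTH, TILE_WIDTH, OVERLAP)
print(offsets)  # [0, 25, 50, 70] -- one entry per microcamera

# Verify full coverage: every point in the field lies inside some tile.
covered = set()
for off in offsets:
    covered.update(range(off, off + TILE_WIDTH))
assert covered == set(range(FIELD_WIDTH))
```

In the real camera the stitching step then blends the overlapping regions into one seamless image; here the overlap simply ensures, in Gehm's words, that "we don't miss anything."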