DISP took pictures of the 2013 ACC football championship. Here is
a photo of our team at the game.
To see images from the game, click on this gallery.
Additional recent images may be found on the DISP gigapan page
and at Aqueti.com. We are also building a new query-based database accessible from our development website at http://mosaic.disp.duke.edu:8080.
The AWARE program focuses on the design and manufacture of microcameras as a platform for scalable supercameras. As illustrated below, the AWARE microcamera includes advanced relay optics with integrated focus mechanisms, a vertically integrated focal plane and focal plane read-out, and specialty microcamera control modules. Using this platform, the AWARE team has published designs for cameras resolving 1-50 gigapixels.
Traditional monolithic lens designs must increase f/# and lens complexity and reduce field of view as image scale increases. In addition, traditional electronic architectures are not designed for highly parallel streaming and analysis of large-scale images. The AWARE wide-field-of-view project addresses these challenges using multiscale designs that combine a monocentric objective lens with arrays of secondary microcameras.
A basic AWARE system architecture is shown below, producing a 1.0
gigapixel image based on 98 micro-optics covering a 120 by 40
degree FOV. A monocentric objective enables the use of identical
secondary systems (referred to as microcameras), greatly
simplifying design and manufacturing. Following the multiscale
lens design methodology, the field-of-view (FOV) is increased by
arraying microcameras along the focal surface of the objective. In
practice, the FOV is limited by the physical housing. Each
microcamera operates independently, offering much more flexibility
in image capture, exposure, and focus parameters.
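As a rough sanity check on the figures above (1.0 gigapixel, 98 micro-optics, 120 by 40 degrees), the per-camera pixel and angular budget can be sketched in a few lines. Overlap between neighboring microcameras is deliberately ignored, so these are back-of-envelope numbers, not design values.

```python
import math

# Stated AWARE system parameters (from the text)
total_pixels = 1.0e9      # 1.0 gigapixel composite image
n_cameras = 98            # micro-optics in the array
fov_h_deg, fov_v_deg = 120.0, 40.0

# Pixels each microcamera must contribute, ignoring overlap
pixels_per_camera = total_pixels / n_cameras
print(f"~{pixels_per_camera/1e6:.1f} MP per microcamera (no overlap)")

# Implied angular coverage per camera, again ignoring overlap
fov_area_deg2 = fov_h_deg * fov_v_deg      # flat-field approximation
deg2_per_camera = fov_area_deg2 / n_cameras
side_deg = math.sqrt(deg2_per_camera)      # side of a square patch
print(f"~{side_deg:.1f} deg x {side_deg:.1f} deg patch per camera")
```

The result, roughly 10 megapixels over a ~7 degree patch per camera, is consistent with the sensor scale described in the electronics section below.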
Conventional cameras struggle to achieve diffraction-limited
performance as the entrance aperture increases in size. Scaling
the size of an optical system to gigapixels also scales the
optical path difference errors and the resulting aberrations.
Because of this, larger instruments require more surfaces and
elements to produce diffraction-limited performance. Multiscale
designs are a means of combating this escalating complexity.
Rather than forming an image with a single monolithic lens system,
multiscale designs divide the imaging task between an objective
lens and a multitude of smaller micro-optics. The objective lens
is a precise but simple lens that produces an imperfect image with
known aberrations. Each micro-optic camera relays a portion of the
objective image onto its respective sensor, correcting for the
objective aberrations and forming a diffraction-limited image.
Because there are typically hundreds or thousands of microcameras
per objective, the microcamera optics are much smaller and
therefore easier and cheaper to fabricate. The scale of the
microcameras is typically that of plastic molded lenses,
enabling mass production of complex aspherical shapes and
therefore minimizing the number of elements. An example optical
layout is modeled in the figure below.
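To see why aperture scaling forces this division of labor, a short back-of-envelope calculation helps. The wavelength and aperture below are illustrative assumptions, not AWARE design parameters; the point is that the diffraction-limited spot count grows rapidly with aperture, which a monolithic lens must support with ever more corrective surfaces.

```python
import math

# Illustrative (assumed) parameters, not the AWARE design:
wavelength = 550e-9        # green light, meters
aperture_d = 0.06          # 60 mm objective aperture, meters
fov_deg = 120.0            # wide field, as in AWARE

# Diffraction-limited angular resolution (Rayleigh criterion)
theta = 1.22 * wavelength / aperture_d          # radians
print(f"angular resolution ~{theta*1e6:.1f} microradians")

# Resolvable spots across the field: 1-D count, then squared
# (small-angle approximation, uniform resolution assumed)
fov_rad = math.radians(fov_deg)
spots_1d = fov_rad / theta
print(f"~{spots_1d**2/1e9:.0f} billion resolvable spots (2-D)")
```

Even a modest 60 mm aperture over a 120 degree field supports tens of billions of diffraction-limited spots, far more than any single focal plane can sample, which motivates dividing the task across an array of microcameras.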
The electronics subsystem reflects the multiscale optical design and has been developed to scale to an arbitrary number of microcameras. The focus and exposure parameters of each camera are independently controlled and the communications architecture optimized to minimize the amount of transmitted data. The electronics architecture is designed to support multiple simultaneous users and is able to scale the output bandwidth depending on application requirements.
In the current implementation, each microcamera includes a 14 megapixel focal plane, a focus mechanism, and a HiSPi interface for data transmission. An FPGA-based camera control module provides local processing and data management. The control modules communicate over Ethernet with an external rendering computer. Each module connects to two microcameras and is used to synchronize image collection, scale images based on system requirements, and implement basic exposure and focus control for the microcameras.
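The raw data rates involved help explain why local scaling at the control module matters. A minimal sketch using the figures above (14 MP focal planes, 10 fps, two cameras per module); the 12-bit pixel depth is an assumption, not a stated specification:

```python
# Raw data-rate sketch for the described electronics.
mp_per_camera = 14e6       # 14 MP focal plane (from the text)
fps = 10                   # full-resolution frame rate (from the text)
bits_per_pixel = 12        # ASSUMED raw bit depth, for illustration

per_camera_gbps = mp_per_camera * fps * bits_per_pixel / 1e9
print(f"{per_camera_gbps:.2f} Gbit/s raw per microcamera")

# Each FPGA control module serves two microcameras (from the text)
per_module_gbps = 2 * per_camera_gbps
print(f"{per_module_gbps:.2f} Gbit/s per module before local scaling")
```

At a few gigabits per second per module, shipping raw pixels to a single rendering computer would not scale, which is why the modules scale and manage data locally before transmission.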
The image formation process generates a seamless image from the microcameras in the array. Since each camera operates independently, this process must account for alignment, rotation, and illumination discrepancies between the microcameras. To approach real-time compositing, a forward model based on the multiscale optical design is used to map individual image pixels into a global coordinate space. This allows display-scale images to be stitched at multiple frames per second independent of model corrections, which can happen at a significantly slower rate.
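The forward mapping into a global coordinate space can be sketched as a toy model. The function below is a hypothetical, simplified stand-in for the actual AWARE model: it treats each microcamera as an ideal pinhole with a fixed pointing direction and a uniform 38 microradian ifov, and omits the distortion, rotation, and illumination corrections the real pipeline applies.

```python
import math

def pixel_to_global(px, py, cam_az_deg, cam_el_deg,
                    ifov_urad=38.0, center=(2160, 1440)):
    """Map a microcamera pixel to global (azimuth, elevation) degrees.

    Simplified forward model (illustrative only): the camera is an
    ideal pinhole aimed at (cam_az_deg, cam_el_deg), and ifov_urad
    is the per-pixel angular step. The real AWARE model also handles
    distortion, rotation, and illumination, which are omitted here.
    """
    dx = (px - center[0]) * ifov_urad * 1e-6   # radians off-axis
    dy = (py - center[1]) * ifov_urad * 1e-6
    return (cam_az_deg + math.degrees(dx),
            cam_el_deg + math.degrees(dy))

# Center pixel of a camera aimed 30 deg right, 5 deg up:
print(pixel_to_global(2160, 1440, 30.0, 5.0))   # (30.0, 5.0)
```

Because each pixel maps through a precomputed model like this, the stitcher can composite at display rates while the model itself is refined on a much slower schedule.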
The current image formation process supports two functional modes of operation. In the "Live-View" mode, the camera generates a single display-scale image stream by binning information at the sensor level to minimize transmission bandwidth and then performing GPU-based compositing on a display computer. This mode allows users to interactively explore events in the scene in real time. The snapshot mode captures a full dataset in 14 seconds and stores the information for future rendering and analysis. This mode is used for capturing still images such as those presented on the AWARE image archive.
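The bandwidth saving that makes sensor-level binning practical can be estimated from numbers stated elsewhere on this page (226 microcameras at 14 MP each); the HD display target is an assumption for illustration:

```python
# How much smaller is a display-scale stream than the full mosaic?
full_pixels = 226 * 14e6          # AWARE 2 scale array, raw (from the text)
display_pixels = 1920 * 1080      # ASSUMED single HD display stream
ratio = full_pixels / display_pixels
bin_factor = ratio ** 0.5
print(f"full mosaic is ~{ratio:.0f}x one display frame")
print(f"=> roughly {bin_factor:.0f}x binning in each dimension")
```

A display-scale stream carries over a thousand times fewer pixels than the full mosaic, which is why binning at the sensor keeps the interactive mode within practical network bandwidth.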
A major advantage of this design is that it can be scaled.
Except for slightly different surface curvatures, the same
microcamera design suffices for 2, 10, and 40 gigapixel
systems. FOV is also strictly a matter of adding more
cameras, with no change in the objective lens or micro-optic design.
The 1.5 gigapixel AWARE 10 camera, Triton, was completed in
October 2013. A photograph of Triton is shown below.
Our transition partner, Aqueti, builds the qG quarter gigapixel
camera using the AWARE 2 design. The qG camera is shown in the
photograph below.
Second generation color AWARE 2 cameras came online in April 2013 using the original AWARE 2 mounting dome. AWARE 2 captures a 120 degree circular FOV with 226 microcameras, a 38 microradian ifov, and an effective f-number of 2.17. Each microcamera operates at 10 fps at full resolution. The optical volume is about 8 liters and the total enclosure is about 300 liters. The optical track length from the first surface of the objective to the focal plane is 188 mm. Example images taken by these systems can be found on the AWARE image server, at aqueti.com, and on gigapan.com.
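The stated AWARE 2 numbers can be cross-checked against each other. The sketch below compares the pixel count implied by the 120 degree circular FOV at a 38 microradian ifov with the raw pixels supplied by 226 14-MP sensors; the difference is consistent with overlap between neighboring microcameras.

```python
import math

# Stated AWARE 2 parameters (from the text)
ifov = 38e-6                       # radians per pixel
half_angle = math.radians(60.0)    # 120-degree full circular cone
n_cameras, mp_each = 226, 14e6

# Solid angle of the FOV cone, in steradians
solid_angle = 2 * math.pi * (1 - math.cos(half_angle))

# Pixels needed to tile that solid angle at the stated ifov
pixels_needed = solid_angle / ifov**2
raw_pixels = n_cameras * mp_each
print(f"FOV needs ~{pixels_needed/1e9:.1f} GP at 38 urad ifov")
print(f"array supplies {raw_pixels/1e9:.2f} GP raw; excess covers overlap")
```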
AWARE cameras were constructed by an academic/industrial consortium with significant contributions from more than 50 graduate students, researchers, and engineers. Duke University is the lead institution and led the design and manufacturing team. The construction process for the first cameras is illustrated in the timelapse video below.
Ultimately, the goal of AWARE is to demonstrate that it is
possible to capture all of the information in the optical field
entering a camera aperture. The monocentric multiscale approach
allows detection of modes at the diffraction limit. As discussed
in "Petapixel Photography," the
number of voxels resolved in the space-time-spectral data cube is
ultimately limited by photon flux. We argue in "Gigapixel
Television," a paper presented at the 14th Takayanagi
Kenjiro Memorial Symposium, that real-time streaming of gigapixel
images is within reach and advisable.
The AWARE project is led at DARPA by Dr. Nibir Dhar. The seeds of AWARE began at DARPA through the integrated sensing and processing model developed by Dr. Dennis Healy.