The AWARE program focuses on the design and manufacture of microcameras as a platform for scalable supercameras. As illustrated below, the AWARE microcamera includes advanced relay optics with an integrated focus mechanism, a vertically integrated focal plane and focal-plane read-out, and specialty microcamera control modules. Using this platform, the AWARE team has published designs for cameras resolving 1-50 gigapixels.
Traditional monolithic lens designs must increase f/# and lens complexity, and reduce field of view, as image scale increases. In addition, traditional electronic architectures are not designed for highly parallel streaming and analysis of large-scale images. The AWARE Wide Field of View project addresses these challenges using multiscale designs that combine a monocentric objective lens with arrays of secondary microcameras.
A basic AWARE system architecture is shown below, producing a 1.0 gigapixel image from 98 micro-optics covering a 120 by 40 degree FOV. A monocentric objective enables the use of identical secondary systems (referred to as microcameras), greatly simplifying design and manufacturing. Following the multiscale lens design methodology, the field of view (FOV) is increased by arraying microcameras along the focal surface of the objective; in practice, the FOV is limited only by the physical housing. As a result, cost and volume scale much more linearly with FOV than in monolithic designs. Additionally, each microcamera operates independently, offering much more flexibility in image capture, exposure, and focus.
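As a rough, hedged illustration of this linear scaling, the totals quoted above (1.0 gigapixel, 98 micro-optics, a 120 by 40 degree FOV) can be turned into per-camera figures; everything derived below is an approximation, not a published AWARE specification.

```python
import math

# Back-of-envelope scaling sketch.  The totals come from the text;
# the derived per-camera figures are approximations.
TOTAL_PIXELS = 1.0e9
NUM_MICROCAMERAS = 98
FOV_DEG = (120, 40)  # horizontal x vertical

pixels_per_camera = TOTAL_PIXELS / NUM_MICROCAMERAS
deg2_per_camera = (FOV_DEG[0] * FOV_DEG[1]) / NUM_MICROCAMERAS

print(f"~{pixels_per_camera / 1e6:.1f} MP per microcamera")  # ~10.2 MP
print(f"~{deg2_per_camera:.0f} deg^2 per microcamera")       # ~49 deg^2

# Because the microcameras are identical, widening the FOV is, to first
# order, a matter of adding cameras at constant unit cost:
def cameras_for(fov_deg2):
    return math.ceil(fov_deg2 / deg2_per_camera)

print(cameras_for(2 * 120 * 40))  # doubling the FOV roughly doubles the count
```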
Multiscale lens design is an attempt at severing the inherent connection between geometric aberrations and aperture size that plagues traditional lenses. By taking advantage of the superior imaging capabilities of small-scale optics, a multiscale lens can effectively increase its field of view and image size by simply arraying additional optical elements, similar to a lens array. The resulting partial images can then be stitched during post-processing to create a single image of a large field. An array of identical microcameras is used to synthesize a curved focal plane, with micro-optics to locally flatten the field onto a flat sensor.
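A minimal sketch of the arraying arithmetic, assuming a square per-camera sub-field and an overlap budget for stitching (both values are illustrative assumptions, not AWARE design parameters):

```python
import math

def cameras_to_tile(full_fov_deg=120.0, sub_fov_deg=7.5, overlap=1.3):
    """Estimate how many identical microcameras tile a circular FOV.

    overlap > 1 budgets for the image overlap that stitching needs and
    for imperfect packing on the spherical focal surface.
    """
    half_angle = math.radians(full_fov_deg / 2)
    cap_sr = 2 * math.pi * (1 - math.cos(half_angle))  # spherical-cap solid angle
    sub_sr = math.radians(sub_fov_deg) ** 2            # small-angle approximation
    return math.ceil(overlap * cap_sr / sub_sr)

print(cameras_to_tile())  # ~240, comparable to the 226 microcameras of AWARE 2
```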
It is increasingly difficult to achieve diffraction-limited
performance in an optical instrument as the entrance aperture
increases in size. Scaling the size of an optical system to
gigapixels also scales the optical path difference errors and the
resulting aberrations. Because of this, larger instruments
require more surfaces and elements to produce diffraction-limited
performance. Multiscale designs  are a means of combating this
escalating complexity. Rather than forming an image with a single
monolithic lens system, multiscale designs divide the imaging task
between an objective lens and a multitude of smaller micro-optics.
The objective lens is a precise but simple lens that produces an
imperfect image with known aberrations. Each micro-optic camera relays a portion of the objective's image onto its respective sensor, correcting for the objective's aberrations and forming a
diffraction-limited image. Because there are typically hundreds or
thousands of microcameras per objective, the microcamera optics
are much smaller and therefore easier and cheaper to fabricate.
The microcameras are typically at the scale of molded plastic lenses, enabling mass production of complex aspherical shapes and therefore minimizing the number of elements. An example
optical layout is modeled in the figure below.
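The aberration-scaling argument above can be made concrete with a toy calculation against the Rayleigh quarter-wave criterion; the base aperture and residual wavefront error below are assumed values chosen for illustration only.

```python
WAVELENGTH_UM = 0.55               # green light
TOLERANCE_UM = WAVELENGTH_UM / 4   # Rayleigh quarter-wave criterion

base_aperture_mm = 5.0             # small lens, assumed well corrected
base_opd_um = 0.10                 # assumed residual wavefront (OPD) error

for scale in (1, 2, 10, 50):
    opd_um = base_opd_um * scale   # OPD errors grow with the scale factor
    verdict = ("diffraction limited" if opd_um <= TOLERANCE_UM
               else "needs more corrective elements")
    print(f"aperture {base_aperture_mm * scale:6.1f} mm: "
          f"OPD {opd_um:5.2f} um -> {verdict}")
```

At unit scale the assumed lens is within tolerance; every larger scale exceeds it, which is why monolithic designs pile on surfaces while a multiscale design keeps each corrective element small.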
The electronics subsystem reflects the multiscale optical design and has been developed to scale to an arbitrary number of microcameras. The focus and exposure parameters of each camera are independently controlled, and the communications architecture is optimized to minimize the amount of transmitted data. The electronics architecture is designed to support multiple simultaneous users and can scale the output bandwidth depending on application requirements.
In the current implementation, each microcamera includes a 14 megapixel focal plane, a focus mechanism, and a HiSPi interface for data transmission. An FPGA-based camera control module provides local processing and data management. The control modules communicate over Ethernet to an external rendering computer. Each module connects to two microcameras and is used to synchronize image collection, scale images based on system requirements, and implement basic exposure and focus capabilities for the microcameras.
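A back-of-envelope estimate shows why minimizing transmitted data matters; the pixel count and frame rate come from this document (the camera count from the AWARE 2 specifications below), while the raw bit depth is an assumption.

```python
NUM_CAMERAS = 226     # AWARE 2 microcamera count (from the specifications below)
PIXELS = 14e6         # 14 megapixel focal plane per microcamera
FPS = 10              # full-resolution frame rate
BITS_PER_PIXEL = 10   # assumed raw sensor bit depth

per_camera_gbps = PIXELS * FPS * BITS_PER_PIXEL / 1e9
total_gbps = per_camera_gbps * NUM_CAMERAS

print(f"per microcamera: {per_camera_gbps:.2f} Gbit/s")  # ~1.4 Gbit/s
print(f"full array:      {total_gbps:.0f} Gbit/s")       # ~316 Gbit/s
```

An aggregate rate of hundreds of gigabits per second is why the control modules scale images locally: only the data a user actually requests should cross the Ethernet link to the rendering computer.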
The image formation process generates a seamless image from the microcameras in the array. Since each camera operates independently, this process must account for alignment, rotation, and illumination discrepancies between the microcameras. To approach real-time compositing, a forward model based on the multiscale optical design is used to map individual image pixels into a global coordinate space. This allows display-scale images to be stitched at multiple frames per second independent of model corrections, which can happen at a significantly slower rate.
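As a hedged sketch of what such a forward model can look like (the pinhole geometry, parameter names, and default values below are illustrative assumptions, not the AWARE calibration model):

```python
import math

def pixel_to_global(px, py, cam_az_deg, cam_el_deg,
                    width=4384, height=3288,  # ~14 MP sensor (assumed)
                    focal_px=9000.0):         # focal length in pixels (assumed)
    """Map a microcamera pixel to global (azimuth, elevation) in degrees."""
    # Angle of the pixel relative to the microcamera's optical axis.
    local_az = math.degrees(math.atan((px - width / 2) / focal_px))
    local_el = math.degrees(math.atan((py - height / 2) / focal_px))
    # Small-angle composition with the camera's pointing direction; a real
    # model would apply full rotation matrices plus measured per-camera
    # rotation, distortion, and illumination corrections.
    return cam_az_deg + local_az, cam_el_deg + local_el

# The center pixel of a camera pointed at (15, -5) degrees maps there exactly:
print(pixel_to_global(2192, 1644, cam_az_deg=15.0, cam_el_deg=-5.0))
```

Because the mapping is fixed by the optical design, it can be precomputed, which is what lets stitching run at display rate while the slower calibration corrections are refined in the background.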
The current image formation process supports two functional modes of operation. In the "Live-View" mode, the camera generates a single display-scale image stream by binning information at the sensor level to minimize transmission bandwidth and then performing GPU-based compositing on a display computer. This mode allows users to interactively explore events in the scene in real time. The snapshot mode captures a full dataset in 14 seconds and stores the information for future rendering and analysis. This mode is used for capturing still images such as those presented on the AWARE website.
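The arithmetic behind Live-View binning is simple; the display resolution and bin factors below are assumptions for illustration.

```python
TOTAL_PIXELS = 1.0e9          # gigapixel-class composite image
DISPLAY_PIXELS = 1920 * 1080  # assumed ~2 MP display

ratio = TOTAL_PIXELS / DISPLAY_PIXELS
print(f"scene-to-display pixel ratio: {ratio:.0f}x")  # ~480x

# Binning n x n at the sensor cuts transmitted data by a factor of n^2:
for n in (2, 4, 8):
    print(f"{n}x{n} binning -> {100 / n ** 2:.0f}% of full-rate data")
```

Since a full composite holds hundreds of scene pixels per display pixel, aggressive on-sensor binning sacrifices little at display scale while cutting transmitted bandwidth by the square of the bin factor.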
A major advantage of this design is that it can be scaled. Except for slightly different surface curvatures, the same microcamera design suffices for 2, 10, and 40 gigapixel systems. Increasing the FOV is likewise strictly a matter of adding more cameras, with no change in the objective lens or micro-optic design.
Second-generation color AWARE 2 cameras came online in April 2013 using the original AWARE 2 mounting dome. More compact next-generation AWARE 2 and AWARE 10 cameras will be available in June 2013.
AWARE 2 captures a 120 degree circular FOV with 226 microcameras, a 38 microradian FOV per pixel, and an effective f-number of 2.17. Each microcamera operates at 10 fps at full
resolution. The optical volume is about 8 liters and the
total enclosure is about 300 liters. The optical track
length from the first surface of the objective to the focal plane
is 188 mm. Specifications of camera one can be found here: AWARE2 camera 1, and specifications of camera two can be found here: AWARE2 camera 2. Example images taken by both systems can be found here.
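These specifications are roughly self-consistent: dividing the solid angle of a 120 degree (full-angle) circular FOV by the solid angle of a single 38 microradian pixel recovers a gigapixel-scale count.

```python
import math

ifov_rad = 38e-6                                   # per-pixel FOV
half_angle = math.radians(120 / 2)
fov_sr = 2 * math.pi * (1 - math.cos(half_angle))  # = pi steradians
pixels = fov_sr / ifov_rad ** 2

print(f"~{pixels / 1e9:.1f} gigapixels")           # ~2.2 gigapixels
```

The result matches AWARE 2's place as the roughly two-gigapixel member of the 2, 10, and 40 gigapixel family described above.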
AWARE-2 was constructed by an academic/industrial consortium with significant contributions from more than 50 graduate students, researchers, and engineers. Duke University is the lead institution and led the design and manufacturing team. The construction process is illustrated in the time-lapse video below.
The AWARE-10 5-10 gigapixel camera is in production and will be online later in June 2013. Significant improvements have been made to the optics, electronics, and integration of the camera. Some are described here: Camera Evolution. The goal of this DARPA project is to design a long-term production camera that is highly scalable from sub-gigapixel to tens of gigapixels. Deployment of the system is envisioned for military, commercial, and civilian applications.
Ultimately, the goal of AWARE is to demonstrate that it is
possible to capture all of the information in the optical field
entering a camera aperture. The monocentric multiscale approach
allows detection of modes at the diffraction limit. As discussed
in "Petapixel Photography," the
number of voxels resolved in the space-time-spectral data cube is
limited by photon flux. We argue in "Gigapixel Television," a paper presented at the 14th Takayanagi Kenjiro Memorial Symposium, that real-time streaming of gigapixel images is within reach and advisable.
This project is a collaboration among Duke University, the University of Arizona, and Distant Focus Corporation. The AWARE project is led at DARPA by Dr. Nibir Dhar. The seeds of AWARE began at DARPA through the integrated sensing and processing model developed by Dr. Dennis Healy.