Biometric Sensing Project

The DISP human tracking project is developing detection, tracking, and identification techniques for one or more human subjects within a spatial region. Conventional sensor systems gather optical information that is then processed by either humans or computers. Going beyond optical phenomena, we seek to change the way people think about sensing through a new methodology: geometric reference structure tomography (RST). RST combines multiplex sensors, geometric analysis, and data-efficient estimation algorithms for human tracking and human-machine interaction and, more generally, for managing and interpreting the mapping between embedding, measurement, and analysis spaces in distributed sensor systems. With funding from the Army Research Office, the goal of this research is to develop low-bandwidth, low-power wireless systems for security applications using pyroelectric motion sensors. Research within the Biometric Sensing Project includes developing algorithms for sensor configuration and spatial segmentation, designing new sensors, implementing specialized data processing and analysis algorithms for tracking and identification, and building sensor platforms optimized for specific tasks. Our end goal is to achieve centimeter-resolution detection of point sources and to identify human targets using sensors attached to wireless embedded microprocessor platforms.

Biometric Sensing Tasks

To achieve this goal, we are focused on two separate but related tasks.

  • Target Tracking - The sensor system implements coding schemes that can be tuned to a wide range of tracking objectives. We are interested in using compressive codes to minimize the number of sensors required to track a target, as well as error-correcting codes to reduce tracking errors in a fully sampled space (a minimal coding sketch follows this list).
  • Target Identification - Human targets can be distinguished by their gait characteristics. We are exploring methods to correlate the responses across an array of spatially distributed sensors to extract identifying characteristics of people within a sensing environment. The challenge is to select the minimal set of signal features necessary to achieve target identification.
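
As a rough illustration of how compressive coding reduces sensor count, the sketch below (in Python, with an invented cell count and a simple binary code, not the project's actual design) localizes a point source among N discretized cells using only ceil(log2 N) binary-masked sensors instead of one sensor per cell.

```python
import numpy as np

# Hypothetical illustration: N spatial cells, where sensor s "sees" the
# cells whose index has bit s set. With this binary coding, ceil(log2 N)
# sensors suffice to localize a single target among N cells.
N_CELLS = 64
N_SENSORS = int(np.ceil(np.log2(N_CELLS)))   # 6 sensors instead of 64

cells = np.arange(N_CELLS)
# Visibility matrix V[s, c] = 1 if sensor s sees cell c.
V = np.array([(cells >> s) & 1 for s in range(N_SENSORS)])

def measure(cell, noise=0.0):
    """Binary response of each sensor to a point source in `cell`."""
    return V[:, cell].astype(float) + noise

def decode(m):
    """Recover the cell whose code word best matches the measurement."""
    bits = (m[:, None] > 0.5)
    return int(np.argmin(np.sum(np.abs(V - bits), axis=0)))

true_cell = 42
assert decode(measure(true_cell)) == true_cell
```

The trade-off is robustness: with so few sensors, a single corrupted reading maps to the wrong cell, which is what motivates the error-correcting variant discussed below.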

To accomplish these tasks, we modulate the field of view (FOV) of simple pyroelectric motion sensors to generate complex receiver patterns on each sensor head. Because the sensors have overlapping FOVs, we can exploit the multiplex nature of the system to extract more information than the sensors individually provide. Choosing the right mapping between the source state and the sensor state has been shown to reduce the number of measurements required to track at a given resolution. We are also using these techniques to implement error-correcting coding schemes that improve the accuracy of our system over simpler sampling approaches.
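
One way to realize the error-correcting idea is to assign each resolution cell a visibility code word from a code with large minimum Hamming distance. The sketch below uses a standard (7,4) Hamming code purely as a stand-in for whatever codes the project actually employs: 16 cells get 7-bit code words with minimum distance 3, so one misfiring sensor still decodes to the correct cell.

```python
import numpy as np

# Generator matrix of a (7,4) Hamming code: 4 data bits -> 7 code bits.
# Minimum Hamming distance 3, so any single bit error is correctable.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
data = (np.arange(16)[:, None] >> np.arange(4)) & 1   # 16 x 4 data bits
codebook = data @ G % 2                               # 16 x 7 sensor masks

def decode(reading):
    """Nearest-code-word decoding corrects one flipped sensor bit."""
    dist = np.sum(codebook != reading, axis=1)
    return int(np.argmin(dist))

cell = 9
reading = codebook[cell].copy()
reading[3] ^= 1                  # one sensor misfires
assert decode(reading) == cell   # still recovers the correct cell
```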

In our current system, we have developed a sensor node whose sensor head carries eight pyroelectric detectors with modulated, overlapping FOVs. We have deployed twenty of these nodes in our sensor space to demonstrate their tracking and identification capabilities. Several components of the system are illustrated by the images below.

  • Human infrared image
  • Sample sensor head
  • Matching visibility pattern
  • Projected field of view
  • Sensor data for a moving target
  • Signals from a running human and a running deer
  • Single target tracking
  • Clustering results for three human targets

The raw signal coming off the pyroelectric detectors is noisy due to the nature of the device. We use several signal processing techniques to reduce noise and to extract event characteristics from the signal. These events are then fed into our tracking and classification algorithms to determine the characteristics of the target. This information can then be displayed visually.
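
As a rough illustration of this processing chain (the filter band, sample rate, and threshold below are guesses for illustration, not the project's actual parameters), a band-pass filter followed by threshold crossing can turn a noisy pyroelectric trace into discrete events.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_events(raw, fs=100.0, band=(0.5, 10.0), thresh=3.0):
    """Band-pass filter a raw pyroelectric trace and return the sample
    indices where the normalized signal first crosses a threshold.
    All parameters here are illustrative, not the deployed values."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw)                     # zero-phase filtering
    z = (filtered - filtered.mean()) / filtered.std()  # normalize
    crossings = np.flatnonzero((np.abs(z[1:]) >= thresh) &
                               (np.abs(z[:-1]) < thresh))
    return filtered, crossings

# Synthetic trace: a slow motion transient buried in sensor noise.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
raw = np.random.randn(t.size) * 0.2
raw[400:450] += np.sin(2 * np.pi * 2.0 * t[400:450])  # 2 Hz "motion" burst
filtered, events = extract_events(raw, fs)
print(f"detected {events.size} threshold crossings near sample 400")
```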

In addition to the visual display, we have collaborated with a graduate student in the music department at Duke to sonify the sensor data. The sonification maps the sensor data in two ways: directly and through a hierarchical superimposition of aural layers. The direct mappings drive elements of the synthesized audio, including pitch, timbre, amplitude, and spatialisation. Higher-level interpretations of the sensor data create layers of aural activity in response to detected activity within the space as a whole, thereby modulating the sonification's formal and textural elements. As a target moves through the space, the background sound indicates time-averaged signal changes related to the target's location and behavior. While we are still exploring the usefulness of this system for our goals of tracking and identification, it provides some insight into the complexity of the signals we are dealing with and offers a new way to examine the data.
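
To make the "direct mapping" idea concrete, the sketch below maps each sensor sample to the pitch and amplitude of a short sine segment. The frequency range, segment length, and scaling are invented for illustration and are not the collaborators' actual mapping.

```python
import numpy as np

def sonify(sensor_vals, fs=44100, step=0.05):
    """Direct-mapping sketch: each sensor sample sets the pitch and
    loudness of one short sine segment (illustrative constants only)."""
    vals = np.asarray(sensor_vals, dtype=float)
    norm = (vals - vals.min()) / (vals.max() - vals.min() + 1e-9)
    n = int(fs * step)
    segments, phase = [], 0.0
    for v in norm:
        freq = 220.0 * 2 ** (2 * v)          # span two octaves, 220-880 Hz
        tt = np.arange(n) / fs
        seg = (0.2 + 0.8 * v) * np.sin(phase + 2 * np.pi * freq * tt)
        phase += 2 * np.pi * freq * step     # keep phase continuous
        segments.append(seg)
    return np.concatenate(segments)          # audio samples at `fs` Hz

audio = sonify(np.sin(np.linspace(0, 8, 160)))   # demo: 8 s of audio
```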

Links

Real-time video of the sensor studio

Archives

To see previous versions of this web page, click here.
Previous Collaborative Projects
soundSense 11/2005
An exploration of the possibility of representing human identity and motion through sound.
FreeSpace 2/2002
Collaboration with the alban elved dance company.