Advances in VLSI technology will, over the next decade, allow us to build a new generation of vision systems that rely on large numbers of inexpensive cameras and a distributed network of high-performance processors. Networks of distributed smart cameras are an emerging enabling technology for a broad range of important information technology applications, including human and animal recognition, surveillance, motion analysis, smart conference rooms, and facial detection. With views of a scene from multiple directions, such networks can synthesize far more complete descriptions of a scene than single-camera systems. However, this distributed nature, coupled with the inherent challenges of real-time video processing, greatly complicates the development of effective algorithms, architectures, and software. An integrated research program spanning video algorithms, design tools, and embedded system architecture is required to understand how new generations of smart camera hardware can best be utilized.
This project develops new techniques for distributed smart camera networks through an integrated exploration of distributed algorithms, embedded architectures, and software synthesis techniques. We are developing new architectures and tools designed to handle modern video algorithms, along with new algorithms that leverage distributed architectures and can be compiled into efficient implementations.
In this project, we are investigating a series of complex smart camera algorithms and applications, specifically: human gesture recognition; self-calibration of the distributed camera network; detection, tracking, and fusion of trajectories using distributed cameras; view synthesis using image-based visual hulls; gait-based human recognition; and human activity analysis. Through analysis of these applications, we are exploring domain-specific programming models and software synthesis techniques to automate their translation into efficient implementations. By translating domain-specific, formal models of distributed signal processing systems into streamlined procedural language (e.g., C) implementations, these synthesis techniques complement the growing body of work on embedded processor code generation techniques.
This research leads to new embedded architectures for distributed smart camera networks, and to video processing algorithms that are tailored to the opportunities and constraints associated with these architectures. The research also leads to a better understanding of relationships among distributed signal processing, embedded multiprocessors, and low power/low latency operation, and develops synthesis tools that use this understanding to help automate the implementation of smart camera applications.
A list of publications from this project and PDF versions of selected publications can be found on the Distributed Smart Cameras Project Publications Page.
This project is supported by NSF Award #0325119.