Document Type

Technical Report

Publication Date

2004-05-01

Filename

wucse-2004-29.pdf

DOI

10.7936/K7G73C14

Technical Report Number

WUCSE-2004-29

Abstract

A network of adaptive processing elements has been developed that transforms and fuses video captured from multiple sensors. Unlike systems that rely on end-systems to process data, this system distributes the computation throughout the network to reduce overall network bandwidth. The network architecture is scalable because it uses a hierarchy of processing engines to perform signal processing. Nodes within the network can be dynamically reprogrammed to compose video from multiple sources, digitally transform camera perspectives, and adapt the video format to the needs of specific applications. A prototype has been developed using reconfigurable hardware that collects and processes real-time, streaming video of an urban environment. Multiple video cameras gather data from different perspectives, and the system fuses that data into a unified, top-down view. The hardware exploits both the spatial and temporal parallelism of the video streams and the regularity of the transform computations. Reconfigurable hardware allows node functions to be reprogrammed in response to dynamic changes in topology. Hardware-based video processors also consume less power than high-frequency software-based solutions. Performance and scalability are compared to a distributed software-based implementation. The reconfigurable hardware design is coded in VHDL and prototyped on Washington University’s Field Programmable Port Extender (FPX) platform. The transform engine circuit utilizes approximately 34 percent of the resources of a Xilinx Virtex 2000E FPGA and can be clocked at frequencies up to 48 MHz. The composition engine circuit utilizes approximately 39 percent of the resources of the same FPGA and can be clocked at frequencies up to 45 MHz.
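To make the two engines named in the abstract concrete, the sketch below illustrates, in Python/NumPy, the operations they perform: a perspective (homography) warp of one camera view into a common top-down plane, and a simple composition of the warped views. This is a minimal software illustration only; the report's actual engines are VHDL circuits on the FPX platform, and the function names, the nearest-neighbor sampling, and the averaging fusion rule here are assumptions for illustration, not details taken from the report.

```python
# Illustrative sketch of a perspective transform and view composition.
# Assumption: H maps top-down (output) pixel coordinates back to camera
# (input) coordinates, so each output pixel samples the source frame.
import numpy as np


def warp_to_top_down(frame: np.ndarray, H: np.ndarray,
                     out_shape: tuple) -> np.ndarray:
    """Inverse-warp one camera frame into the top-down view defined by H."""
    oh, ow = out_shape
    ih, iw = frame.shape[:2]
    # Homogeneous coordinates for every output pixel.
    ys, xs = np.mgrid[0:oh, 0:ow]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(oh * ow)])
    src = H @ pts                        # project into source coordinates
    sx = (src[0] / src[2]).round().astype(int)
    sy = (src[1] / src[2]).round().astype(int)
    ok = (sx >= 0) & (sx < iw) & (sy >= 0) & (sy < ih)
    flat = np.zeros((oh * ow,) + frame.shape[2:], dtype=frame.dtype)
    flat[ok] = frame[sy[ok], sx[ok]]     # nearest-neighbor sampling
    return flat.reshape((oh, ow) + frame.shape[2:])


def compose(views: list) -> np.ndarray:
    """Fuse warped views by averaging wherever any camera contributed.

    Assumption: zero-valued pixels mark areas a camera did not cover.
    """
    stack = np.stack([v.astype(np.float64) for v in views])
    hits = np.stack([(v > 0) for v in views]).sum(axis=0)
    return (stack.sum(axis=0) / np.maximum(hits, 1)).astype(views[0].dtype)
```

In the hardware described by the report, this per-pixel arithmetic is what the transform and composition engine circuits pipeline across the streaming video, which is why the regular structure of the computation maps well onto an FPGA.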

Comments

Permanent URL: http://dx.doi.org/10.7936/K7G73C14
