Discovering, Localizing, Calibrating, and Using Thousands of Outdoor Webcams

Date of Award

Spring 5-15-2010

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type



The web has an enormous collection of live cameras that capture images of roads, beaches, cities, mountains, buildings, and forests. With an appropriate foundation, this massively distributed, scalable, and already existing network of cameras could be used as a new global sensor. The sheer number of cameras and their broad spatial distribution prompts new questions in computer vision: How can you automatically geo-locate each camera? How can you learn the 3D structure of the scene? How can you correct the color measurements from the camera? What environmental properties can you extract?

We address these questions using models of the geometry and statistics of natural outdoor scenes, and with an understanding of how image appearance varies due to transient objects, the weather, the time of day, and the season. We present algorithms for calibrating cameras and annotating scenes that formalize this understanding in classical linear and nonlinear optimization frameworks. Our work uses images captured over long temporal scales, ranging from minutes to years; often these images come from a database we created by archiving frames from 1000 webcams every half hour. By exploring natural cues capable of working over such long time scales on a broad range of scenes, we deepen our understanding of the interplay between geographic location, time, and the causes of image appearance change.
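The half-hourly capture process described above can be sketched as a simple polling loop. This is a minimal illustration, not the dissertation's actual pipeline: the storage layout, helper names, and the assumption that each webcam serves a static JPEG at a fixed URL are all hypothetical.

```python
# Hypothetical sketch of a half-hourly webcam archiving loop.
# Assumes each camera exposes a static JPEG at a known URL.
import os
import time
import urllib.request

CAPTURE_INTERVAL_S = 30 * 60  # one frame per camera every half hour

def frame_path(root, camera_id, timestamp):
    """Build a per-camera, per-time path like root/cam0042/20100515_1200.jpg."""
    stamp = time.strftime("%Y%m%d_%H%M", time.gmtime(timestamp))
    return os.path.join(root, f"cam{camera_id:04d}", f"{stamp}.jpg")

def capture_all(camera_urls, root, now=None):
    """Fetch one frame from each webcam URL and archive it by camera and time."""
    now = time.time() if now is None else now
    saved = []
    for cam_id, url in camera_urls.items():
        path = frame_path(root, cam_id, now)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        try:
            urllib.request.urlretrieve(url, path)
            saved.append(path)
        except OSError:
            pass  # webcams go offline frequently; skip and retry next round
    return saved
```

Archiving frames keyed by camera and timestamp is what makes the long-time-scale analysis possible: a year of half-hourly images per camera yields the appearance variation over time of day and season that the calibration and annotation algorithms exploit.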


Language

English (en)


Advisor

Robert Pless

Committee Members

Victor Gruev, Joseph A O'Sullivan, Mladen Victor Wickerhauser


Permanent URL: https://doi.org/10.7936/K77942NC
