ORCID

http://orcid.org/0000-0003-1548-2113

Date of Award

Summer 8-15-2021

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

The wide availability of cheap consumer cameras has democratized photography for novices and experts alike, with more than a trillion photographs taken each year. While many of these cameras, especially those on mobile phones, have inexpensive optics and make imperfect measurements, modern computational techniques enable the recovery of high-quality photographs as well as scene attributes.

In this dissertation, we explore algorithms that infer a wide variety of physical and visual properties of the world, including color, geometry, and reflectance, from images taken by casual photographers in unconstrained settings. We focus specifically on neural network-based methods that incorporate domain knowledge about scene structure and the physics of image formation. We describe novel techniques to produce high-quality images in poor lighting environments, to train scene map estimators in the absence of ground-truth data, and to express both an understanding of the scene and its uncertainty given observed images.

The key to inferring scene properties from casual photography is to exploit the internal structure of natural scenes and the expressive capacity of neural networks. We demonstrate that neural networks can be used to identify the internal structure of scene maps, and that our prior understanding of natural scenes can shape the design, training, and output representation of neural networks.

Language

English (en)

Chair

Ayan Chakrabarti

Committee Members

Tao Ju, Brendan Juba, Ulugbek Kamilov, Kalyan Sunkavalli
