Date of Award

Summer 8-15-2022

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type



This dissertation addresses the integration of physical models and learning priors for computational imaging. Our work is motivated by recent learning-based methods that solve imaging inverse problems by directly learning a measurement-to-image mapping from existing data: they achieve superior performance over traditional model-based methods, but the absence of a physical model leaves the final image without sufficient interpretability or guarantees. We adopt classic statistical inference as the underlying formulation and integrate learning models as implicit image priors, so that our framework can simultaneously leverage physical models and learning priors. Additionally, the growing sizes of images and measurements in modern computational imaging systems place a significant burden on both computation and memory. Another purpose of the dissertation is to extend our framework to those scenarios by incorporating large-scale optimization techniques.
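The statistical-inference formulation described above can be sketched as a regularized inverse problem; the symbols below (measurement operator A, measurements y, data-fidelity term g, prior term h) are illustrative notation, not taken verbatim from the dissertation:

```latex
\hat{x} \;=\; \arg\min_{x}\; g(x) + h(x),
\qquad
g(x) \;=\; \tfrac{1}{2}\,\|Ax - y\|_2^2,
```

where g encodes the physical measurement model and h is an implicit image prior that is specified through a learned denoiser rather than an explicit regularization function.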

The dissertation significantly extends three algorithmic frameworks: plug-and-play priors (PnP, Part II), regularization by denoising (RED, Part III), and neural fields (NF, Part IV). Its contributions include the design of novel algorithms, the establishment of a unified theory, and applications to real imaging problems. In Part II, we present in-depth discussions of two popular PnP algorithms, PnP-PGM and PnP-ADMM. Our contributions here include a proof of their fixed-point convergence under deep denoising priors and scalable PnP variants that process a large set of measurements using online gradients or proximal maps. In Part III, we conduct a similar investigation of RED. We first prove the fixed-point convergence of the gradient-method RED (GM-RED) algorithm and then propose two variants for efficiently inferring large images using block-coordinate and parallel-computing techniques. Notably, our analysis framework, based on monotone operator theory, treats PnP and RED in a unified manner that had not been established in the existing literature. In Part IV, we extend NF, a novel self-supervised learning paradigm, to computational imaging by developing two new methods. Our first method, DeCAF, investigates NF's regularization ability in the image domain with spatial coordinates, while our second method, CoIL, leverages the representation power of NF to complete under-sampled measurements in the measurement domain with non-spatial coordinates.
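The PnP-PGM and GM-RED iterations mentioned above can be sketched in a few lines. The toy soft-thresholding denoiser, the random linear measurement operator, and all parameter values below are illustrative stand-ins for the learned deep denoisers and real imaging systems studied in the dissertation:

```python
import numpy as np

def grad_data_fidelity(x, A, y):
    # Gradient of the data-fidelity term g(x) = 0.5 * ||A x - y||^2.
    return A.T @ (A @ x - y)

def denoise(x, strength=0.01):
    # Toy denoiser (soft-thresholding); the dissertation plugs in learned
    # deep denoisers in its place.
    return np.sign(x) * np.maximum(np.abs(x) - strength, 0.0)

def pnp_pgm(A, y, x0, gamma, iters):
    # PnP-PGM: a proximal-gradient step in which the proximal map of the
    # regularizer is replaced by a denoiser.
    x = x0.copy()
    for _ in range(iters):
        x = denoise(x - gamma * grad_data_fidelity(x, A, y))
    return x

def gm_red(A, y, x0, gamma, tau, iters):
    # GM-RED: a gradient step on the data fidelity plus the RED residual
    # tau * (x - D(x)) induced by the denoiser D.
    x = x0.copy()
    for _ in range(iters):
        x = x - gamma * (grad_data_fidelity(x, A, y) + tau * (x - denoise(x)))
    return x

# Small compressed-sensing-style example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
y = A @ x_true

x_hat = pnp_pgm(A, y, np.zeros(50), gamma=0.005, iters=300)
x_red = gm_red(A, y, np.zeros(50), gamma=0.005, tau=1.0, iters=300)
```

Both routines share the same gradient of the data-fidelity term, which is where the physical measurement model enters; the step size gamma is chosen well below 1/L for the operator's Lipschitz constant L so that the fixed-point iterations are stable.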


Language

English (en)


Advisor

Ulugbek S. Kamilov

Committee Members

Tao Ju, Netanel Raviv, William Yeoh, Brendt Wohlberg