Date of Award

Summer 8-19-2021

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Master of Science (MS)

Degree Type



Abstract

Neural representation learning has recently shown outstanding performance on several computer vision tasks. In this thesis, we propose a novel self-supervised neural-representation-based reconstruction method for optical tomography. Our method uses a multi-layer perceptron (MLP) to represent the target sample without requiring any ground truth or training data. The MLP weights serve as a latent representation of the target object, and the permittivity at any desired location can be inferred by querying the network within the sample domain. We also investigate applying regularization to implicitly restrict the manifold of the MLP for better performance. Our experiments produce low-artifact results with strong sectioning effects on three optical tomography modalities: fully sampled intensity diffraction tomography (IDT), multiplexed IDT (mIDT), and annular IDT (aIDT). Furthermore, because our model represents the 3D volume implicitly, it enables upsampling to any scale despite being optimized only on discrete measurements. In addition, the disk space required to store the latent representation is much smaller than that required by traditional methods.
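The coordinate-network idea described above, an MLP whose weights encode the volume and which can be queried at any point in the sample domain, can be sketched as follows. This is a minimal illustrative example, not the thesis implementation: the layer width, ReLU activation, and random (untrained) weights are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer MLP mapping a 3D coordinate to a scalar
# permittivity value; weights are random here, whereas in the method
# they would be optimized against tomographic measurements.
W1 = rng.normal(size=(3, 64), scale=0.1)
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 1), scale=0.1)
b2 = np.zeros(1)

def query(coords):
    """Evaluate the coordinate network at (N, 3) sample-domain points."""
    h = np.maximum(coords @ W1 + b1, 0.0)  # ReLU hidden layer
    return h @ W2 + b2                     # permittivity estimate

def grid(n):
    """Regular n x n x n grid of query points in the unit cube."""
    axes = [np.linspace(0.0, 1.0, n)] * 3
    return np.stack(np.meshgrid(*axes, indexing="ij"), -1).reshape(-1, 3)

# Because the volume is represented implicitly by the weights, it can
# be sampled at any resolution after optimization:
coarse = query(grid(8))   # 8^3 points  -> shape (512, 1)
fine = query(grid(32))    # 32^3 points -> shape (32768, 1)
```

Storing only `W1, b1, W2, b2` is what makes the latent representation compact relative to storing a dense voxel grid at the finest resolution of interest.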


Language

English (en)


Advisor

Ulugbek Kamilov

Committee Members

Jason Trobaugh, Tao Ju

Included in

Engineering Commons