Date of Award

Spring 5-15-2023

Author's School

McKelvey School of Engineering

Author's Department

Electrical & Systems Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

According to the American Cancer Society, breast cancer incidence rates have risen over the past four decades, and breast cancer has become the second most diagnosed cancer among US women. Ultrasound (US)-guided diffuse optical tomography (DOT), a promising non-invasive functional imaging technique for diagnosing breast cancer and monitoring breast cancer treatment response, has been applied in clinical studies over the past two decades. By utilizing multiple wavelengths in the near-infrared (NIR) range, US-guided DOT can provide quantitative estimates of functional information related to tumor angiogenesis, including oxygenated hemoglobin, deoxygenated hemoglobin, and total hemoglobin concentrations. However, due to the extensive scattering inside biological tissue, DOT reconstruction is an ill-posed, underdetermined problem, which leads to low resolution and low accuracy in the reconstructed images and reduces the accuracy of diagnoses made using a US-guided DOT system. In recent years, deep learning has been increasingly applied to this problem in medical imaging. This dissertation describes the development of algorithms that improve the quality of DOT image reconstruction with the help of deep learning and traditional optimization techniques, and it demonstrates how the superior reconstructed images enable better classification of breast cancer. In this dissertation, a depth-regularized reconstruction algorithm is combined with a semi-automated interactive convolutional neural network (CNN) for depth-dependent reconstruction of the absorption distribution. The CNN segments the co-registered US image to extract both spatial and depth priors, and the depth-regularized algorithm incorporates these parameters into the reconstruction. Through simulation and phantom data, the proposed algorithm is shown to significantly improve the depth distribution of the reconstructed absorption maps of large targets.
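The idea of depth-regularized reconstruction can be sketched as a regularized least-squares solve in which the US-derived prior relaxes the penalty on voxels inside the lesion region. This is a minimal illustration, not the dissertation's implementation: the function name, the diagonal form of the regularizer, and the parameters `lam_bg` and `lam_target` are assumptions chosen for clarity.

```python
import numpy as np

def depth_regularized_recon(W, y, depths, target_depths, lam_bg=1.0, lam_target=0.1):
    """Illustrative depth-regularized reconstruction (hypothetical parameters).

    Solves min_x ||W x - y||^2 + x^T diag(l) x, where the diagonal penalty l
    is weaker at voxels whose depth lies inside the US-derived target region
    (the spatial/depth prior), so absorption there is less suppressed.
    """
    n = W.shape[1]
    l = np.full(n, lam_bg)
    l[np.isin(depths, target_depths)] = lam_target  # relax penalty inside lesion
    A = W.T @ W + np.diag(l)                        # regularized normal equations
    return np.linalg.solve(A, W.T @ y)
```

In practice the weight matrix `W` comes from the forward light-propagation model, and the prior region comes from the CNN segmentation of the co-registered US image.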
Evaluated with data from 26 patients with breast lesions larger than 1.5 cm in diameter, the algorithm shows a 2.4 to 3 times improvement in the reconstructed homogeneity of the absorption maps for these lesions, throughout the depth of the image. In general, image reconstruction methods used in DOT are based on the diffusion approximation, and they consider breast tissue as a homogeneous, semi-infinite medium. However, the chest wall underneath the breast tissue can distort light reflection measurements, invalidating the semi-infinite assumption used in DOT reconstruction. In this dissertation, a deep learning approach is developed in which a CNN is trained to simultaneously obtain accurate optical property values for both the breast tissue and the chest wall. In the presence of a shallow chest wall, the CNN model reduces errors in estimating the optical properties of the breast tissue by at least 40%. For patient data, the CNN model predicts the breast tissue’s optical absorption coefficient independently of the chest wall depth. After acquiring a better reconstructed image, inspired by a fusion model deep learning approach, we combined the US features extracted by a modified VGG-11 network with images reconstructed from a DOT deep learning auto-encoder-based model to form a new neural network for breast cancer diagnosis. The combined neural network model was trained with simulation data and fine-tuned with clinical data: it achieved an AUC of 0.931 (95% CI: 0.919-0.943), superior to those achieved using US images alone (0.860) or DOT images alone (0.842). To speed up the classification scheme with less manual intervention during DOT image reconstruction, we proposed a two-stage classification strategy with deep learning. In the first stage, US images and histograms created from DOT perturbation measurements are combined to predict benign lesions.
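The fusion step can be pictured as concatenating the two modalities' feature vectors and passing them through a small classification head. The sketch below is an illustration only: the feature dimensions, weights, and function names are placeholders, not the modified VGG-11 / auto-encoder architecture used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(us_feat, dot_feat, W1, b1, W2, b2):
    """Late fusion of US and DOT feature vectors (hypothetical dimensions).

    Concatenates the two modality embeddings, applies one ReLU hidden layer,
    and returns a sigmoid probability of malignancy.
    """
    z = np.concatenate([us_feat, dot_feat])   # joint representation
    h = np.maximum(0.0, W1 @ z + b1)          # ReLU hidden layer
    logit = W2 @ h + b2
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid -> probability
```

In the dissertation's setting, `us_feat` would come from the modified VGG-11 network and `dot_feat` from the auto-encoder-based DOT reconstruction model, with the head trained on simulation data and fine-tuned on clinical data.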
Then the non-benign lesions are passed to the second stage, which combines US features and 3D DOT reconstructed images for the final diagnosis. The first stage alone identified 72.6% of benign cases without image reconstruction. In distinguishing between benign and malignant breast lesions in patient data, the two-stage approach achieved an AUC of 0.960, outperforming diagnoses from the first stage alone (AUC = 0.889) and the second stage alone (AUC = 0.909). The proposed two-stage approach achieves better classification accuracy than either the single-modality or single-stage classification models, and it can potentially distinguish breast cancers from benign lesions in near real-time.
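The control flow of the two-stage strategy can be sketched as follows. All function names here (`stage1`, `stage2`, `reconstruct`) are placeholders standing in for the trained classifiers and the reconstruction step; the 0.5 threshold is likewise an assumption for illustration.

```python
def two_stage_classify(us_image, perturbation_hist, stage1, stage2,
                       reconstruct, threshold=0.5):
    """Illustrative two-stage screening flow (placeholder callables).

    Stage 1 screens lesions using US images plus DOT perturbation
    histograms, so clearly benign cases skip reconstruction entirely;
    only the remaining cases pay the cost of 3D DOT reconstruction
    before the stage-2 decision.
    """
    p_malignant = stage1(us_image, perturbation_hist)
    if p_malignant < threshold:
        return "benign", p_malignant              # no reconstruction needed
    dot_volume = reconstruct(perturbation_hist)   # 3D DOT image
    p_final = stage2(us_image, dot_volume)
    return ("malignant" if p_final >= threshold else "benign"), p_final
```

Skipping reconstruction for the roughly 72.6% of cases resolved in stage 1 is what makes near real-time operation plausible, since reconstruction is the most expensive step.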

Language

English (en)

Chair

Quing Zhu

Committee Members

Joseph A. O'Sullivan, Ulugbek Kamilov, Chao Zhou, Adam Bauer

Available for download on Wednesday, May 15, 2024
