Photoacoustic Imaging, Feature Extraction, and Machine Learning Implementation for Ovarian and Colorectal Cancer Diagnosis
Doctor of Philosophy (PhD)
Among all cancers of the female reproductive system, ovarian cancer has the highest mortality rate. Pelvic examination, transvaginal ultrasound (TVUS), and blood testing for cancer antigen 125 (CA-125) are the conventional screening tools for ovarian cancer, but they offer very low specificity. Other modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), also have limitations in detecting small lesions. In the USA, colorectal cancer is the third most common cause of cancer-related death when men and women are considered separately, and the second leading cause when they are considered together; an estimated 52,980 deaths from this cancer will be recorded in 2021. The common screening tools for colorectal cancer include colonoscopy, biopsy, endoscopic ultrasound (EUS), optical imaging, pelvic MRI, CT, and PET, each of which has specific limitations. In this dissertation, we first discuss in-vivo ovarian cancer diagnosis using our coregistered photoacoustic tomography and ultrasound (PAT/US) system. The application of this system to ex-vivo colorectal cancer diagnosis is also explored. Finally, we discuss the capability of our photoacoustic microscopy (PAM) system, complemented by machine learning algorithms, to distinguish cancerous rectums from normal ones. The dissertation begins with our low-cost phantom construction procedure for pre-clinical experiments and quantitative PAT. This phantom has ultrasound and photoacoustic properties similar to those of human tissue, making it a good candidate for photoacoustic imaging experiments. In-vivo ovarian cancer diagnosis with our PAT/US system is then discussed. We demonstrate the extraction of spectral, image, and functional features from our PAT data; these features are then used to distinguish malignant (n = 12) from benign (n = 27) ovaries. An AUC of 0.93 is achieved with our support vector machine (SVM) classifier.
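As a rough illustration of the feature-based classification step, the sketch below trains an SVM on extracted features and reports a cross-validated AUC, mirroring the malignant-vs-benign ovary study. The features here are synthetic random data with the same sample counts as in the abstract (12 malignant, 27 benign); the actual PAT spectral, image, and functional features are not reproduced.

```python
# Hypothetical sketch of an SVM classifier on PAT-derived features,
# evaluated by ROC AUC. All feature values below are synthetic; only
# the class sizes (27 benign, 12 malignant) come from the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_benign, n_malignant, n_features = 27, 12, 5
# stand-in feature matrix: benign and malignant drawn from shifted Gaussians
X = np.vstack([rng.normal(0.0, 1.0, (n_benign, n_features)),
               rng.normal(1.0, 1.0, (n_malignant, n_features))])
y = np.r_[np.zeros(n_benign), np.ones(n_malignant)]  # 0 = benign, 1 = malignant

# standardize features, then fit an RBF-kernel SVM with probability outputs
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, scores):.2f}")
```

Cross-validated scoring is used here because with only 39 samples a single train/test split would give an unstable AUC estimate.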
We then present a sliding multi-pixel method that mitigates the effect of noise on the estimation of functional features from PAT data; this method is tested on 13 malignant and 36 benign ovaries. Next, we demonstrate our two-step optimization method for unmixing the optical absorption coefficient (μa) of the tissue from the system response (C) and the Grüneisen parameter (Γ) in quantitative PAT (QPAT). Using this method, we calculate the absorption coefficients and functional parameters of five blood tubes with sO2 values ranging from 24.9% to 97.6%. We then demonstrate the capability of our PAT/US system to monitor colorectal cancer treatment and to classify 13 malignant and 17 normal colon samples; using PAT features to distinguish these two groups, our classifier achieves an AUC of 0.93. Finally, we demonstrate the capability of our coregistered photoacoustic microscopy and ultrasound (PAM/US) system to distinguish normal from malignant colorectal tissue, and show that a convolutional neural network (CNN) significantly outperforms a generalized linear model (GLM) in separating these two types of lesions.
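The functional parameters mentioned above (such as sO2) are conventionally obtained by linear spectral unmixing once μa has been recovered: at each wavelength, μa is modeled as a weighted sum of the oxy- and deoxyhemoglobin extinction spectra, and the weights give the oxygen saturation. The sketch below shows this standard two-wavelength unmixing; it is not the dissertation's two-step optimization itself, and the extinction values are approximate literature values used only for illustration.

```python
# Hedged sketch of two-wavelength linear unmixing for sO2:
# solve E @ [HbO2, Hb] = mu_a, then sO2 = HbO2 / (HbO2 + Hb).
# Extinction coefficients (cm^-1 / M) at 750 nm and 850 nm are
# approximate literature values, not the dissertation's calibration.
import numpy as np

# rows: wavelengths (750 nm, 850 nm); columns: [HbO2, Hb]
E = np.array([[518.0, 1405.0],
              [1058.0, 691.0]])

def estimate_so2(mu_a):
    """Least-squares unmix of [HbO2, Hb] from mu_a at both wavelengths."""
    c, *_ = np.linalg.lstsq(E, np.asarray(mu_a, dtype=float), rcond=None)
    c = np.clip(c, 0.0, None)      # concentrations cannot be negative
    return c[0] / (c[0] + c[1])    # oxygen saturation

# fully oxygenated example: mu_a proportional to the HbO2 column of E
print(f"sO2 = {estimate_so2(E[:, 0] * 1e-4):.2f}")  # → 1.00
```

In practice this step only works after μa has been separated from C and Γ, which is exactly what the two-step optimization in QPAT addresses, since the raw photoacoustic amplitude is proportional to the product C·Γ·μa rather than to μa alone.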
Quing Zhu, Hong Chen, Abhinav Jha, Joseph O'Sullivan