Abstract
In recent years, machine learning (ML) has advanced at an unprecedented pace, driving the widespread adoption of increasingly sophisticated models across a broad range of real-world applications, including healthcare, finance, autonomous systems, and critical infrastructure. While these models have delivered remarkable benefits and transformed numerous industries, they remain inherently vulnerable to a variety of security and privacy threats. Given their growing role in safety-critical domains, ensuring their security and privacy has become imperative. Systematically addressing these vulnerabilities requires comprehensive adversarial analyses to uncover weaknesses and inform the design of robust defenses. This dissertation investigates ML vulnerabilities in adversarial settings from two complementary perspectives: adversarial capabilities and adversarial goals. From the capability perspective, I examine attacks launched by both cyber-domain and physical-domain adversaries. From the goal perspective, I explore attacks targeting three critical security properties: integrity, availability, and confidentiality. First, in the cyber domain, I investigate integrity threats to text-to-image generation models through tailored adversarial attacks and propose a novel membership inference attack, grounded in information bottleneck theory, that compromises confidentiality. Second, in the physical domain, I explore integrity threats by crafting physically realizable adversarial examples against automatic speech recognition systems deployed in video conferencing platforms, and I design availability attacks that degrade the operational efficiency of LiDAR-based detection models. Finally, beyond individual attacks, I examine unintended interactions among security properties and other essential attributes of ML systems. Specifically, I analyze the trade-off between privacy and explainability, both crucial for trustworthy ML, and further investigate the interplay between availability and privacy in federated learning environments. Through these explorations, this dissertation provides an in-depth, multi-perspective understanding of ML system vulnerabilities, thereby contributing to the development of more secure and privacy-preserving machine learning systems.
Degree
Doctor of Philosophy (PhD)
Author's Department
Computer Science & Engineering
Document Type
Dissertation
Date of Award
5-9-2025
Language
English (en)
DOI
https://doi.org/10.7936/yyf5-t346
Recommended Citation
Liu, Han, "Towards Secure and Privacy-Preserving Machine Learning Systems" (2025). McKelvey School of Engineering Theses & Dissertations. 1256.
The definitive version is available at https://doi.org/10.7936/yyf5-t346