ORCID
https://orcid.org/0009-0000-7338-4066
Date of Award
Spring 5-2025
Degree Name
Master of Science (MS)
Degree Type
Thesis
Abstract
The development of autonomous vehicles (AVs) has been accelerated by advancements in deep neural networks (DNNs), which power the complex perception systems necessary for safe and efficient real-world navigation. However, as AVs increasingly integrate into public transportation networks, ensuring that their perception systems are robust against potential attacks becomes critical. Among these threats, adversarial attacks, particularly those mounted through adversarial patches, pose significant risks. These patches are carefully crafted perturbations designed to mislead DNNs, potentially compromising AV safety by causing incorrect object recognition or misclassification.
While extensive research has demonstrated high attack success rates for adversarial patches in controlled digital environments, their performance under practical conditions remains underexplored. This gap is noteworthy because real-world environments introduce variability, such as changes in lighting, object angles, and material textures, that could influence the practical applicability of these attacks. Furthermore, most existing defense mechanisms have been evaluated primarily in digital domains, leaving their robustness under real-world conditions largely untested. To address these gaps, we empirically evaluated the performance of adversarial patches through extensive physical experiments, following a systematic approach across diverse real-world environments.
To achieve this, we began by training adversarial patches using methodologies from previous works to establish a strong baseline and validate their effectiveness under idealized digital conditions. Following this, we printed the patches and applied them to real-world objects to assess their performance under varying physical conditions. Experiments were conducted with different types of vehicles, including SUVs and sedans, in diverse settings: outdoor environments during the day, outdoor environments at night, and indoor parking lots. This approach enabled us to evaluate the robustness of adversarial patches under conditions resembling real-world AV environments.
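The abstract does not spell out the patch-training procedure itself; as a rough illustration of the kind of optimization this line of work typically builds on, the sketch below trains a targeted adversarial patch against a generic pretrained image classifier in PyTorch. The model choice (ResNet-50), patch size, fixed placement, target class, and the random placeholder images are all assumptions made for illustration and are not the models, datasets, or transforms used in this thesis.

```python
import torch
import torchvision

# A minimal sketch of adversarial patch optimization. All concrete choices
# (ResNet-50 classifier, 50x50 patch, fixed placement, target class 0,
# random placeholder images) are illustrative assumptions only.
model = torchvision.models.resnet50(
    weights=torchvision.models.ResNet50_Weights.DEFAULT)
model.eval()

patch = torch.rand(3, 50, 50, requires_grad=True)  # trainable patch pixels in [0, 1]
optimizer = torch.optim.Adam([patch], lr=0.01)
target_class = 0                                    # hypothetical attacker-chosen label

def apply_patch(images, patch, top=10, left=10):
    """Paste the patch onto a batch of images at a fixed offset (illustration only)."""
    patched = images.clone()
    patched[:, :, top:top + patch.shape[1], left:left + patch.shape[2]] = patch
    return patched

# Placeholder batches of random images standing in for a real training set.
data_loader = [(torch.rand(4, 3, 224, 224),) for _ in range(10)]

for (images,) in data_loader:
    optimizer.zero_grad()
    logits = model(apply_patch(images, patch.clamp(0, 1)))
    # Drive every patched image toward the attacker-chosen target class.
    targets = torch.full((images.size(0),), target_class, dtype=torch.long)
    loss = torch.nn.functional.cross_entropy(logits, targets)
    loss.backward()
    optimizer.step()
```

In practice, physically deployed patches are usually trained with randomized scale, rotation, and lighting transforms (Expectation over Transformation) so that the printed patch remains effective under the kinds of real-world variation examined in this work; the fixed placement above is a simplification.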
Our findings indicate that while adversarial patches achieve high attack success rates in controlled digital settings, their effectiveness is notably reduced in real-world environments due to environmental variability. This highlights the critical challenge of bridging the gap between theoretical vulnerability studies and practical adversarial threats in AV systems. By identifying key factors such as lighting conditions, object angles, and material textures that influence the success of adversarial attacks, we offer empirical insights that can guide efforts to improve AV perception system resilience. Furthermore, our results underscore the pressing need for robust defense mechanisms that account for real-world complexities, as current defenses largely focus on digital domains and may not adequately address real-world vulnerabilities. This study contributes to the growing body of knowledge on adversarial machine learning and lays a foundation for future research to enhance the security and robustness of AV technologies in practical settings.
Language
English (en)
Chair
Ning Zhang
Committee Members
Tao Ju, Chongjie Zhang