Date of Award

Spring 5-8-2024

Author's School

McKelvey School of Engineering

Author's Department

Electrical & Systems Engineering

Degree Name

Master of Science (MS)

Degree Type

Thesis

Abstract

The real-world deployment of autonomous vehicles relies on trajectory prediction models built on perception and observation of the surrounding scene. Deep neural network (DNN) models have been widely shown to deliver stable, strong performance across a variety of scenarios. Many formal approaches are used to verify the predictions of DNN models; Conformal Prediction is one such approach, providing statistically guaranteed safety regions for DNN outputs. However, to date, no research has shown that conformal prediction remains robust against deliberate adversarial attacks. In this paper, we propose an adversarial attack approach against trajectory prediction models that use conformal prediction to verify DNN predictions. While still satisfying the assumptions of conformal prediction, our approach can steer the deep neural network toward erroneous results that follow our expectations, without manually introducing a specifically designed target. We also demonstrate in simulation experiments the severe consequences such erroneous results would have in real-world application scenarios. To our knowledge, this is the first adversarial attack model against deep neural networks equipped with conformal prediction.
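For context, the statistical safety guarantee that the abstract refers to can be illustrated with split conformal prediction: a held-out calibration set yields a region radius that covers the true future position with probability at least 1 - alpha, provided the exchangeability assumption holds. The sketch below is purely illustrative and is not the thesis's model or code; the placeholder predictor and the synthetic calibration data are assumptions for demonstration only.

```python
import numpy as np

# Illustrative sketch of split conformal prediction for a trajectory predictor.
# The predictor and data below are hypothetical stand-ins, not the thesis's method.

def predict_trajectory(history):
    """Placeholder predictor: constant-velocity extrapolation of the last step."""
    velocity = history[-1] - history[-2]
    return history[-1] + velocity  # predicted next 2-D position

def conformal_radius(calib_histories, calib_truths, alpha=0.1):
    """Radius of the conformal prediction region from a calibration set.

    Nonconformity score = Euclidean error between prediction and ground truth.
    Under exchangeability, the true position lies within this radius of the
    prediction with probability at least 1 - alpha.
    """
    scores = [np.linalg.norm(predict_trajectory(h) - y)
              for h, y in zip(calib_histories, calib_truths)]
    n = len(scores)
    # Finite-sample corrected quantile level.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic calibration set: roughly straight-line motion plus Gaussian noise.
    calib_histories = [np.cumsum(rng.normal(1.0, 0.1, size=(5, 2)), axis=0)
                       for _ in range(200)]
    calib_truths = [h[-1] + (h[-1] - h[-2]) + rng.normal(0, 0.1, size=2)
                    for h in calib_histories]
    radius = conformal_radius(calib_histories, calib_truths, alpha=0.1)
    print(f"90% conformal region radius: {radius:.3f}")
```

An attack of the kind the abstract describes would perturb inputs while preserving this exchangeability assumption, so the region remains nominally valid even as the underlying prediction is driven toward an erroneous outcome.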

Language

English (en)

Chair

Yiannis Kantaros

Committee Members

Andrew Clark, Yevgeniy Vorobeychik

Available for download on Wednesday, May 07, 2025
