Date of Award

Spring 5-21-2021

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Master of Science (MS)

Degree Type

Thesis

Abstract

Although neural networks have achieved remarkable success on classification tasks, adversarial robustness remains a significant concern. There is now a range of approaches for designing adversarial examples and methods for defending against them. This thesis consists of two projects. In the first, we propose an approach that leverages cognitive salience to add robustness on top of these methods. Specifically, for image classification, we split an image into the foreground (salient region) and background (the rest) and allow significantly larger adversarial perturbations in the background to produce stronger attacks. We then show that adversarial training with these dual-perturbation attacks yields classifiers that are more robust to such attacks than state-of-the-art robust learning approaches, while remaining comparably robust to conventional attacks. We also incorporate a stabilization process for binary inputs after the standard defense to further increase robustness.
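
The dual-perturbation idea can be illustrated with a short sketch. The PGD-style loop below is not the thesis code; the function name, the foreground/background budgets eps_fg and eps_bg, the step size, and the step count are illustrative assumptions. It applies a tight L-infinity budget inside a given saliency mask and a larger budget outside it.

```python
# Hypothetical sketch of a dual-perturbation PGD attack in PyTorch.
# mask has the same spatial shape as x, with 1 marking the salient
# foreground and 0 the background; all hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def dual_perturbation_pgd(model, x, y, mask, eps_fg=2/255, eps_bg=16/255,
                          step_size=1/255, steps=20):
    # Per-pixel L-inf budget: small in the foreground, large in the background.
    eps = mask * eps_fg + (1 - mask) * eps_bg

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()              # ascend the loss
            delta.copy_(torch.max(torch.min(delta, eps), -eps)) # project into per-pixel box
            delta.copy_((x + delta).clamp(0, 1) - x)            # keep the image valid
        delta.grad.zero_()
    return (x + delta).detach()
```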

In the second part of our work, we introduce a naive method that adds regularization to the first layer of the neural network and requires much less computation than other state-of-the-art methods. We also provide a generalized version that applies to more complicated neural networks and empirically show that our method achieves robustness comparable to baseline methods while being much faster.
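
A minimal training-step sketch of the first-layer regularization idea is shown below, assuming an L2 penalty on the first convolutional layer's weights. The exact regularizer used in the thesis, the toy architecture, and the coefficient reg_coef are illustrative assumptions, not the thesis method.

```python
# Sketch: add a penalty on the first layer's weights to the classification loss.
# Architecture, penalty form, and reg_coef are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),   # first layer to be regularized
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 10),      # assumes 3x32x32 inputs (CIFAR-like)
)
first_layer = model[0]
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
reg_coef = 1e-2  # hypothetical regularization strength

def train_step(x, y):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss = loss + reg_coef * first_layer.weight.norm(p=2) ** 2  # first-layer penalty
    loss.backward()
    optimizer.step()
    return loss.item()
```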

Language

English (en)

Chair

Yevgeniy Vorobeychik, Chien-Ju Ho, William Yeoh

Committee Members

Yevgeniy Vorobeychik, Chien-Ju Ho, William Yeoh