Adversarial Attack and Defense on Image Classification under Physical Constraints

Document Type

MS Project Report

Department

Computer Science and Engineering

Publication Date

2021-04-14

Embargo Period

4-14-2021

Abstract

Recent work has demonstrated how adversarial attacks on neural networks can cause misclassification of objects in the physical world. However, despite this large body of work, little research has focused on the security of the whole computer vision pipeline. In addition, effective defense methods against such attacks remain largely unexplored, raising safety and reliability concerns for the deployment of machine learning applications.

In this project, we argue that recent attacks which directly apply perturbations to the camera lens are easily detected if the network is ever tested for accuracy. We then produce a physically realizable Trojaned lens that attaches to a camera and causes the neural network vision pipeline to produce incorrect classifications only when a specific adversarial patch is present in the scene. The lens amplifies the effect of the patch, so attackers can achieve comparable performance with smaller and less noticeable patches.
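As a rough illustration of this behavior, the sketch below expresses the lens/patch trade-off as a single joint objective. It is a minimal, hypothetical formulation only: we model the Trojaned lens as a small additive image-space perturbation and composite the trigger patch with a binary mask; the names trojan_lens_loss, lens, patch, mask, and y_target are our own illustrative assumptions, not the report's implementation.

import torch
import torch.nn.functional as F

def trojan_lens_loss(model, x, y, lens, patch, mask, y_target, lam=1.0):
    # Lens modeled as a fixed additive perturbation applied to every frame.
    x_lens = (x + lens).clamp(0, 1)
    clean_loss = F.cross_entropy(model(x_lens), y)  # stay correct without the trigger

    # Same scene with the trigger patch composited in, viewed through the lens.
    x_patched = x * (1 - mask) + patch * mask
    x_trigger = (x_patched + lens).clamp(0, 1)
    target = torch.full_like(y, y_target)
    attack_loss = F.cross_entropy(model(x_trigger), target)  # force the target class with the trigger

    # Minimizing this over `lens` and `patch` (e.g., with Adam) trades off stealth
    # on clean inputs against reliable misclassification when the patch appears.
    return clean_loss + lam * attack_loss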

We then study the problem of developing defense approaches for image classification against physically realizable attacks. We demonstrate that the two most scalable and effective methods for learning robust models exhibit very limited effectiveness against three of the highest-profile physical attacks. Next, we propose a new abstract adversarial model, the rectangular occlusion attack, in which an adversary places a small adversarially crafted rectangle in an image. Finally, we show that adversarial training with this new attack yields image classification models that are highly robust to physically realizable attacks.
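For concreteness, the following is a minimal sketch of a rectangular-occlusion-style attack in PyTorch. It assumes a classifier model, an input batch x scaled to [0, 1], and integer labels y; the rectangle size, grid stride, and two-stage search (exhaustive placement of a grey rectangle, then PGD on the rectangle's contents) are illustrative choices rather than the report's exact procedure.

import torch
import torch.nn.functional as F

def rectangular_occlusion_attack(model, x, y, h=20, w=20, stride=5,
                                 pgd_steps=10, pgd_lr=0.1):
    model.eval()
    B, C, H, W = x.shape
    best_loss = torch.full((B,), -float("inf"), device=x.device)
    best_pos = torch.zeros(B, 2, dtype=torch.long, device=x.device)

    # Stage 1: exhaustively search a coarse grid for the position where a plain
    # grey rectangle maximizes the classification loss.
    with torch.no_grad():
        for top in range(0, H - h + 1, stride):
            for left in range(0, W - w + 1, stride):
                x_occ = x.clone()
                x_occ[:, :, top:top + h, left:left + w] = 0.5
                loss = F.cross_entropy(model(x_occ), y, reduction="none")
                improved = loss > best_loss
                best_loss = torch.where(improved, loss, best_loss)
                best_pos[improved] = torch.tensor([top, left], device=x.device)

    # Stage 2: run PGD on the rectangle's contents at the chosen position.
    patch = torch.full((B, C, h, w), 0.5, device=x.device, requires_grad=True)
    for _ in range(pgd_steps):
        x_adv = x.clone()
        for i in range(B):
            t, l = best_pos[i].tolist()
            x_adv[i, :, t:t + h, l:l + w] = patch[i]
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, patch)
        with torch.no_grad():
            patch += pgd_lr * grad.sign()
            patch.clamp_(0, 1)

    # Paste the final rectangle and return the occluded images.
    with torch.no_grad():
        x_adv = x.clone()
        for i in range(B):
            t, l = best_pos[i].tolist()
            x_adv[i, :, t:t + h, l:l + w] = patch[i]
    return x_adv

Adversarial training then proceeds in the usual way: each minibatch is replaced (or augmented) with such occluded examples before the gradient step on the model's weights.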
