Date of Award

Spring 5-15-2021

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

Machine learning (ML) has come to be widely used in a broad array of settings, including important security applications such as network intrusion, fraud, and malware detection, as well as other high-stakes settings, such as autonomous driving. A general approach is to extract a set of features, or numerical attributes, of the entities in question, collect a training data set of labeled examples (for example, indicating which instances are malicious and which are benign), learn a model that labels previously unseen instances presented in terms of their extracted features, and then investigate the alerts raised by instances predicted as malicious. Despite the striking success of ML in security applications, security issues emerge across the full pipeline of ML-based detection systems. First, ML models are often susceptible to adversarial examples, in which an adversary makes changes to the input (such as malware) to avoid being detected. Second, using detection systems in practice means dealing with an overwhelming number of alerts triggered by normal behavior (so-called false positives), which obscure alerts resulting from actual malicious activities. Third, adversaries can target a broad array of ML-based detection systems at once to maximize impact, a possibility that individual ML system designers often ignore.
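As a rough illustration of the detection pipeline sketched above, the following is a minimal Python sketch, assuming scikit-learn as the learning library; the feature values and labels are hypothetical placeholders rather than data from the thesis.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training set: each row is a feature vector extracted from an
# entity (e.g., a file or a network flow); labels mark benign (0) vs. malicious (1).
X_train = np.array([[0.1, 3.0, 12.0], [0.9, 0.5, 2.0], [0.2, 2.7, 10.0], [0.8, 0.4, 1.0]])
y_train = np.array([0, 1, 0, 1])

# Learn a model from the labeled examples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Label previously unseen instances, presented in terms of their extracted features.
X_new = np.array([[0.85, 0.6, 3.0], [0.15, 2.9, 11.0]])
predictions = model.predict(X_new)

# Instances predicted as malicious raise alerts for an analyst to investigate.
alerts = [i for i, label in enumerate(predictions) if label == 1]
print("Alerts raised for instances:", alerts)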

In this thesis, I focus on the problem of deploying robust machine learning systems in adversarial settings. To study this topic systematically, my work is organized around four components. First, I study the problem of systematizing adversarial evaluation; concretely, I propose a fine-grained robustness evaluation framework for face recognition systems. Second, I investigate robust machine learning against decision-time attacks. Specifically, I propose a framework for validating models of ML evasion attacks, and I evaluate the efficacy of conventional robust machine learning models against realizable attacks in PDF malware detection. My work shows that conserved features are the key to robustness, and I propose a systematic algorithm to identify them. Additionally, I study robustness against non-salient adversarial examples in image classification and propose cognitive modeling of the suspiciousness of adversarial examples. Third, I study robust alert prioritization, which is often a necessary step in the detection pipeline, and I propose a novel approach for computing an alert prioritization policy using adversarial reinforcement learning. Last, I investigate robust decentralized learning and develop a game-theoretic model of robust linear regression involving multiple learners and a single adversary.
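To make the notion of a decision-time (evasion) attack concrete, below is a minimal sketch against a hypothetical linear detector; the weights, feature values, and closed-form perturbation are illustrative assumptions, not the attack models or defenses developed in the thesis.

import numpy as np

# Hypothetical linear detector: score(x) = w @ x + b, raise an alert if score > 0.
w = np.array([1.5, -2.0, 0.5])
b = -0.25
x_malicious = np.array([1.0, 0.2, 0.4])
score = w @ x_malicious + b          # positive, so the instance is detected

# Smallest L2 perturbation that pushes the score just past the decision
# boundary: delta = -(score + margin) * w / ||w||^2.
delta = -(score + 1e-3) * w / np.dot(w, w)
x_evasive = x_malicious + delta
print("original score:", score)               # positive: alert
print("evasive score:", w @ x_evasive + b)    # non-positive: evades detection

# Conserved features (those the adversary cannot change without breaking the
# malicious functionality) constrain such perturbations; clamping them to their
# original values is one way a defender can limit this kind of evasion.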

Language

English (en)

Chair

Yevgeniy Vorobeychik

Committee Members

Ayan Chakrabarti, Sanmay Das, Bruno Sinopoli, Ning Zhang
