ORCID

https://orcid.org/0000-0001-5982-773X

Date of Award

Spring 5-12-2025

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Master of Science (MS)

Degree Type

Thesis

Abstract

Neural networks are an increasingly ubiquitous tool in systems of varying complexity across a range of domains. While these tools can be used to learn and predict complex functions, their opaque nature limits the scope of their acceptable applications. In particular, a lack of performance guarantees means that they are unsuitable for safety-critical applications such as self-driving cars and scheduling systems. Neural networks trained to solve NP-complete problems are especially unlikely to solve the problem exactly. However, a weaker soundness guarantee may be sufficient for some systems, e.g., that positive instances of the problem may be rejected but no negative instances are accepted. Research into the complexity of securing such guarantees would improve our understanding of the applications to which neural networks can be effectively and economically applied.
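The soundness guarantee described above can be illustrated with a minimal sketch (not from the thesis): wrapping an untrusted 3SAT predictor so that any proposed satisfying assignment is checked before a formula is accepted. Such a wrapper may reject satisfiable formulas (when the predictor fails), but it never accepts an unsatisfiable one. The function names and the DIMACS-style clause encoding are illustrative assumptions.

```python
def satisfies(clauses, assignment):
    """Check that every clause has at least one true literal.

    clauses: list of clauses, each a list of nonzero ints (DIMACS-style:
             literal k means variable k is true, -k means it is false).
    assignment: dict mapping variable number -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def sound_decide(clauses, predictor):
    """Accept only if the untrusted predictor's assignment checks out.

    predictor: e.g. a neural network returning a candidate assignment,
    or None when it predicts "unsatisfiable". Soundness holds by
    construction: acceptance requires a verified witness.
    """
    proposed = predictor(clauses)
    return proposed is not None and satisfies(clauses, proposed)

# Example formula: (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
assert sound_decide(clauses, lambda _: {1: True, 2: False, 3: True}) is True
assert sound_decide(clauses, lambda _: None) is False
```

Note the asymmetry: a wrong witness only costs a false rejection, never a false acceptance, which is exactly the weaker guarantee the abstract describes.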

In this work, we show that the verification of soundness for neural networks trained to solve 3SAT formulae is a Π₂ᵖ-complete problem and conjecture that the same hardness holds for neural networks solving other NP-complete problems. Therefore, it is unlikely that even these weaker guarantees on the performance of neural networks solving hard problems can be economically secured. A related result drastically expands the set of problems known to be complete for higher levels of the polynomial hierarchy. These problems are formulaically constructed in terms of problems complete for lower levels of the polynomial hierarchy.
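A rough sketch of why soundness verification has the shape of a Π₂ᵖ statement (notation assumed here, not quoted from the thesis): soundness quantifies universally over inputs, while the satisfiability of each accepted formula quantifies existentially over assignments, giving a ∀∃ alternation.

```latex
% Soundness of a network N on 3SAT instances:
% for every formula the network accepts, some satisfying assignment exists.
\forall \varphi \;\bigl( N(\varphi) = \mathrm{accept} \;\Rightarrow\; \exists \sigma \;.\; \sigma \models \varphi \bigr)
```

Statements of this ∀∃ form with polynomial-time-checkable matrices characterize the class Π₂ᵖ (equivalently, coNP with an NP oracle), which is consistent with the completeness result stated above.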

Language

English (en)

Chair

Sanjoy Baruah

Committee Members

Kunal Agrawal, Pontus Ekberg
