Abstract

Neural networks are an increasingly ubiquitous tool in systems of varying complexity across a range of domains. While these tools can learn and predict complex functions, their opaque nature limits the scope of their acceptable applications. In particular, the lack of performance guarantees makes them unsuitable for safety-critical applications such as self-driving cars and scheduling systems. Neural networks trained to solve NP-complete problems are especially unlikely to solve the problem exactly. However, a weaker soundness guarantee may suffice for some systems: the network may reject positive instances of the problem, but it never accepts negative instances. Research into the complexity of securing such guarantees would improve our understanding of the applications to which neural networks can be effectively and economically applied.
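For a network N classifying 3SAT instances, this one-sided guarantee can be written as a universally quantified implication; this is a minimal formalization, and the notation N(φ) = 1 for "N accepts φ" is illustrative rather than the thesis's own:

    ∀φ:  N(φ) = 1  ⟹  φ is satisfiable

That is, every instance the network accepts is genuinely satisfiable, while satisfiable instances may nonetheless be rejected.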

In this work, we show that verifying soundness for neural networks trained to decide 3SAT is Π₂ᴾ-complete, and we conjecture that the same hardness holds for neural networks solving other NP-complete problems. It is therefore unlikely that even these weaker guarantees on the performance of neural networks solving hard problems can be economically secured. A related result drastically expands the set of problems known to be complete for higher levels of the polynomial hierarchy: these problems are constructed systematically from problems complete for lower levels of the hierarchy.
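The Π₂ᴾ classification matches the quantifier structure of the verification question itself (a standard complexity-theoretic reading, sketched here under the same illustrative notation as above): Π₂ᴾ consists of the languages expressible as a ∀∃ statement over a polynomial-time-checkable predicate, and soundness verification has exactly that shape:

    N is sound  ⟺  ∀φ ( N(φ) = 1  ⟹  ∃α : α satisfies φ )

where α ranges over truth assignments of φ. The outer universal quantifier over instances and the inner existential quantifier over witnesses place the problem in Π₂ᴾ; the completeness result then rests on a matching hardness proof.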

Committee Chair

Sanjoy Baruah

Committee Members

Kunal Agrawal, Pontus Ekberg

Degree

Master of Science (MS)

Author's Department

Computer Science & Engineering

Author's School

McKelvey School of Engineering

Document Type

Thesis

Date of Award

Spring 5-12-2025

Language

English (en)

Author's ORCID

https://orcid.org/0000-0001-5982-773X
