Abstract
We build on previous ideas used to study reductions between CSP-refutation problems and improper learning, as well as reductions among CSP-refutation problems themselves, to extend several hardness results that rest on the assumption that refuting random CSP instances is hard for certain choices of predicates (such as k-SAT). First, we argue the hardness of the fundamental problem of learning conjunctions in a one-sided PAC-like learning model that has appeared in several forms over the years. In this model, the goal is to produce a hypothesis that foremost guarantees a small false-positive rate while minimizing the false-negative rate among such hypotheses. Further, we formalize a notion of CSP-refutation reductions and CSP-refutation completeness, and use these, along with candidate CSP-refutation-complete predicates, to provide further evidence for the hardness of several problems.
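A minimal sketch of the one-sided guarantee described above, written in our own notation (hypothesis h, target conjunction c, example distribution D, tolerance epsilon), which may not match the thesis's exact definitions:

% Assumed notation, not taken from the thesis: the learner must output a
% hypothesis h whose false-positive rate under D is at most epsilon, and,
% subject to that constraint, should make the false-negative rate as small
% as possible.
\[
  \Pr_{x \sim D}\bigl[h(x) = 1 \wedge c(x) = 0\bigr] \le \epsilon
  \quad\text{while minimizing}\quad
  \Pr_{x \sim D}\bigl[h(x) = 0 \wedge c(x) = 1\bigr].
\]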
Committee Chair
Brendan Juba
Committee Members
Brendan Juba, Jeremy Buhler, William Yeoh
Degree
Master of Science (MS)
Author's Department
Computer Science & Engineering
Document Type
Thesis
Date of Award
Spring 5-15-2020
Language
English (en)
DOI
https://doi.org/10.7936/2sbr-dv81
Recommended Citation
Durgin, Alexander, "CSP-Completeness And Its Applications" (2020). McKelvey School of Engineering Theses & Dissertations. 520.
The definitive version is available at https://doi.org/10.7936/2sbr-dv81