Date of Award
Spring 5-15-2020
Degree Name
Master of Science (MS)
Degree Type
Thesis
Abstract
We build on previous ideas used to study both reductions between CSP-refutation problems and improper learning, and reductions between CSP-refutation problems themselves, to expand some hardness results that depend on the assumption that refuting random CSP instances is hard for certain choices of predicates (such as k-SAT). First, we are able to argue the hardness of the fundamental problem of learning conjunctions in a one-sided PAC-esque learning model that has appeared in several forms over the years. In this model we focus on producing a hypothesis that foremost guarantees a small false-positive rate while minimizing the false-negative rate among such hypotheses. Further, we formalize a notion of CSP-refutation reductions and CSP-refutation completeness, and use these, along with candidate CSP-refutation-complete predicates, to provide further evidence for the hardness of several problems.
Language
English (en)
Chair
Brendan Juba
Committee Members
Brendan Juba, Jeremy Buhler, William Yeoh
Comments
Permanent URL: https://doi.org/10.7936/2sbr-dv81