Date of Award

8-19-2024

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

When interacting with an environment, an agent wants to make decisions on the fly based on dynamically changing environmental factors, so as to maximize rewards or minimize risks. To achieve this, one would like to learn the dynamics of the environment and plan efficiently based on what has been learned. However, when the environment is large and convoluted, planning and learning algorithms can easily become computationally intractable in general. In this thesis, we first show quantitatively that these problems are indeed hard to solve by presenting several hardness results, including information-theoretic limits of Bayesian network structure learning and computational lower bounds for learning Bilinear Classes in reinforcement learning. We then introduce several nontrivial sparse-structure assumptions about the environment under which we obtain provably efficient planning and learning algorithms. We note that these assumptions, which at a high level require only sparse dependence on small sets of variables or supports, are much milder than those of previous works, where complete independence of the environmental factors is assumed.

Language

English (en)

Chair

Brendan Juba

Available for download on Thursday, September 18, 2025
