Document Type

Technical Report

Publication Date

2005-05-01

Filename

WUCSE-2005-19.pdf

DOI

10.7936/K7VT1QDQ

Technical Report Number

WUCSE-2005-19

Abstract

Reinforcement learning (RL) has proven to be an effective paradigm for solving optimal control problems with a finite number of states. Generalizing RL techniques to problems with a continuous state space, however, remains difficult. We present an approach to modeling the RL value function using a manifold representation. By explicitly modeling the topology of the value function's domain, common problems with discontinuities and resolution can be addressed without resorting to complex function approximators. We describe how manifold techniques can be applied to value-function approximation, and we present methods for constructing manifold representations in both batch and online settings. We conclude with empirical results demonstrating the effectiveness of our approach.
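
Illustrative Sketch

The report itself is not reproduced here, but the idea in the abstract lends itself to a brief illustration. Below is a minimal Python sketch of one way a manifold-style value function might be organized: the state space is covered by overlapping local charts, each with its own simple local approximator, values are blended across charts with a partition of unity, and new charts are allocated online when an uncovered state is visited. The names (Chart, ManifoldValueFunction), the quadratic bump kernel, and the gradient-style update are illustrative assumptions, not the report's actual construction.

import numpy as np

class Chart:
    """A local region of the state space with its own value approximator.

    Illustrative assumption: each chart is a ball around a center point
    carrying a local linear model, value ~= w . [x - center, 1].
    """
    def __init__(self, center, radius, dim):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius
        self.w = np.zeros(dim + 1)

    def blend_weight(self, x):
        # Smooth bump that falls to zero at the chart boundary.
        d = np.linalg.norm(x - self.center) / self.radius
        return max(0.0, 1.0 - d ** 2) ** 2

    def predict(self, x):
        features = np.append(x - self.center, 1.0)
        return self.w @ features

    def update(self, x, target, lr):
        # Gradient step on squared error for the local linear model.
        features = np.append(x - self.center, 1.0)
        self.w += lr * (target - self.predict(x)) * features


class ManifoldValueFunction:
    """Value function represented as a blend of overlapping local charts."""
    def __init__(self, dim, radius=0.5):
        self.dim = dim
        self.radius = radius
        self.charts = []

    def _weights(self, x):
        # Partition of unity: normalize the bump weights of all charts.
        w = np.array([c.blend_weight(x) for c in self.charts])
        total = w.sum()
        return w / total if total > 0 else w

    def value(self, x):
        x = np.asarray(x, dtype=float)
        w = self._weights(x)
        return sum(wi * c.predict(x)
                   for wi, c in zip(w, self.charts) if wi > 0)

    def update(self, x, target):
        # Online construction: allocate a chart if x is not yet covered.
        x = np.asarray(x, dtype=float)
        if not any(c.blend_weight(x) > 0 for c in self.charts):
            self.charts.append(Chart(x, self.radius, self.dim))
        w = self._weights(x)
        for wi, c in zip(w, self.charts):
            if wi > 0:
                c.update(x, target, lr=0.1 * wi)


# Usage sketch: repeated updates at one state propagate a value that
# generalizes smoothly to nearby states within the same chart.
vf = ManifoldValueFunction(dim=2, radius=0.5)
for _ in range(50):
    vf.update([0.1, 0.2], target=1.0)
print(vf.value([0.1, 0.2]))    # approaches 1.0
print(vf.value([0.15, 0.2]))   # nearby state, blended estimate

Because each chart is local, a discontinuity in the true value function can be handled by placing chart boundaries along it rather than by increasing the capacity of a single global approximator, which is the intuition the abstract describes.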

Comments

Permanent URL: http://dx.doi.org/10.7936/K7VT1QDQ
