Reinforcement learning (RL) has shown itself to be an effective paradigm for solving optimal control problems with a finite number of states. Generalizing RL techniques to problems with a continuous state space has proven a difficult task. We present an approach to modeling the RL value function using a manifold representation. By explicitly modeling the topology of the value function domain, traditional problems with discontinuities and resolution can be addressed without resorting to complex function approximators. We describe how manifold techniques can be applied to value-function approximation, and present methods for constructing manifold representations in both batch and online settings. We present empirical results demonstrating the effectiveness of our approach.
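The abstract describes approximating a value function over a continuous state space with an explicit manifold representation: overlapping local regions (charts), each with a simple local model, blended where they overlap. The report's actual construction is not given here, so the following is only an illustrative sketch under common assumptions; the `Chart` and `ManifoldValueFunction` classes, the bump-function supports, and the partition-of-unity blending are all hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

class Chart:
    """A hypothetical local chart: a region of state space with its own
    linear value model. Not taken from the report itself."""

    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype=float)
        self.radius = radius
        # Linear model over local coordinates, plus a bias term.
        self.weights = np.zeros(self.center.size + 1)

    def support(self, x):
        # Smooth bump weight that falls to zero at the chart boundary,
        # so the chart contributes nothing outside its own region.
        d = np.linalg.norm(x - self.center) / self.radius
        return max(0.0, 1.0 - d ** 2)

    def value(self, x):
        return self.weights[:-1] @ (x - self.center) + self.weights[-1]


class ManifoldValueFunction:
    """Blend local chart models into one global value function."""

    def __init__(self, charts):
        self.charts = charts

    def value(self, x):
        x = np.asarray(x, dtype=float)
        w = np.array([c.support(x) for c in self.charts])
        if w.sum() == 0.0:
            return 0.0  # state lies outside every chart
        w /= w.sum()  # normalize: a partition of unity over active charts
        return float(sum(wi * c.value(x) for wi, c in zip(w, self.charts)))

    def td_update(self, x, target, alpha=0.1):
        # Temporal-difference-style update, distributed across the charts
        # that cover x in proportion to their blending weights.
        x = np.asarray(x, dtype=float)
        err = target - self.value(x)
        w = np.array([c.support(x) for c in self.charts])
        if w.sum() == 0.0:
            return
        w /= w.sum()
        for wi, c in zip(w, self.charts):
            if wi > 0.0:
                feat = np.append(x - c.center, 1.0)
                c.weights += alpha * wi * err * feat
```

Because each chart's influence vanishes at its boundary, a discontinuity in the true value function can be accommodated by simply not letting charts span it, rather than by using a more complex global function approximator.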
Glaubius, Robert, and Smart, William D., "Manifold Representations for Continuous-State Reinforcement Learning," Report Number WUCSE-2005-19 (2005). All Computer Science and Engineering Research.
Permanent URL: http://dx.doi.org/10.7936/K7VT1QDQ