
Manifold Embeddings for Model-Based Reinforcement Learning of Neurostimulation Policies

Real-world reinforcement learning problems often exhibit nonlinear, continuous-valued, noisy, partially observable state-spaces that are prohibitively expensive to explore. The formal reinforcement learning framework, unfortunately, has not been successfully demonstrated in a real-world domain having all of these constraints. We approach this domain with a two-part solution. First, we overcome continuous-valued, partially observable state-spaces by constructing manifold embeddings of the system's underlying dynamics, which substitute as a complete state-space representation. We then define a generative model over this manifold to learn a policy off-line. The model-based approach is preferred because it enables simplification of the learning problem by domain knowledge. In this work we formally integrate manifold embeddings into the reinforcement learning framework, summarize a spectral method for estimating embedding parameters, and demonstrate the model-based approach in a complex domain: adaptive seizure suppression of an epileptic neural system.
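The abstract does not spell out how the embedding is constructed, but a common reading of "manifold embeddings of the system's underlying dynamics" is a delay-coordinate (Takens-style) embedding of a scalar observable. The following Python sketch illustrates that interpretation only; the function name delay_embed, the dim and lag values, and the synthetic signal are illustrative assumptions, not details taken from the paper.

import numpy as np

def delay_embed(x, dim, lag):
    # Map a scalar observation sequence into delay-coordinate vectors:
    # row t is [x[t], x[t + lag], ..., x[t + (dim - 1) * lag]], used as a
    # reconstructed state for the partially observable system.
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag
    if n <= 0:
        raise ValueError("series too short for the chosen dim and lag")
    return np.column_stack([x[k * lag : k * lag + n] for k in range(dim)])

# Illustrative use: embed a noisy oscillatory signal standing in for a
# neural recording. dim and lag are hand-picked here; the paper instead
# estimates embedding parameters with a spectral method.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 3000)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.shape)
states = delay_embed(signal, dim=3, lag=25)
print(states.shape)  # (2950, 3): one reconstructed state per time step

In the paper's setting, the reconstructed state vectors would then feed the generative model used for off-line policy learning; here the embedding parameters are fixed by hand purely for illustration.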
Attribution: The Open Education Consortium
http://www.ocwconsortium.org/courses/view/416722f9afa1d72b327b809548cddf58/
Course Home http://videolectures.net/icml09_bush_membrlnp/