We now begin our study of reinforcement learning and adaptive control.
In supervised learning, we saw algorithms that tried to make their outputs mimic the labels $y$ given in the training set. In that setting, the labels gave an unambiguous "right answer" for each of the inputs $x$. In contrast, for many sequential decision making and control problems, it is very difficult to provide this type of explicit supervision to a learning algorithm. For example, if we have just built a four-legged robot and are trying to program it to walk, then initially we have no idea what the "correct" actions to take are to make it walk, and so do not know how to provide explicit supervision for a learning algorithm to try to mimic.
In the reinforcement learning framework, we will instead provide our algorithms only a reward function, which indicates to the learning agent when it is doing well, and when it is doing poorly. In the four-legged walking example, the reward function might give the robot positive rewards for moving forwards, and negative rewards for either moving backwards or falling over. It will then be the learning algorithm's job to figure out how to choose actions over time so as to obtain large rewards.
Reinforcement learning has been successful in applications as diverse as autonomous helicopter flight, robot legged locomotion, cell-phone network routing, marketing strategy selection, factory control, and efficient web-page indexing. Our study of reinforcement learning will begin with a definition of Markov decision processes (MDPs), which provide the formalism in which RL problems are usually posed.
A Markov decision process is a tuple $(S,A,\left\{{P}_{sa}\right\},\gamma ,R)$ , where:

- $S$ is a set of states.
- $A$ is a set of actions.
- ${P}_{sa}$ are the state transition probabilities. For each state $s\in S$ and action $a\in A$, ${P}_{sa}$ is a distribution over the state space, giving the probability of transitioning to each successor state if we take action $a$ in state $s$.
- $\gamma \in [0,1)$ is called the discount factor.
- $R:S\times A\mapsto \mathbb{R}$ is the reward function. (Rewards are sometimes also written as a function of a state $s$ only, in which case we would have $R:S\mapsto \mathbb{R}$.)
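As a concrete sketch, a small finite MDP can be written down directly as data, with one distribution ${P}_{sa}$ per state-action pair. All state names, actions, probabilities, and rewards below are made up purely for illustration:

```python
# A toy finite MDP written as plain data structures.
# All names and numbers here are illustrative, not from the text.

S = ["low", "high"]            # states
A = ["slow", "fast"]           # actions

# P[(s, a)] is the distribution P_sa over successor states.
P = {
    ("low",  "slow"): {"low": 0.9, "high": 0.1},
    ("low",  "fast"): {"low": 0.4, "high": 0.6},
    ("high", "slow"): {"low": 0.2, "high": 0.8},
    ("high", "fast"): {"low": 0.5, "high": 0.5},
}

gamma = 0.95                   # discount factor, 0 <= gamma < 1

# Reward function R : S x A -> reals.
R = {
    ("low",  "slow"): 0.0, ("low",  "fast"): -1.0,
    ("high", "slow"): 1.0, ("high", "fast"): 2.0,
}

# Sanity check: each P_sa must be a valid probability distribution.
for (s, a), dist in P.items():
    assert abs(sum(dist.values()) - 1.0) < 1e-9
```

Representing each ${P}_{sa}$ as an explicit dictionary only works for small, finite state spaces; for the helicopter example above, the state space is continuous and the dynamics would instead be given by a simulator or a learned model.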
The dynamics of an MDP proceed as follows: We start in some state ${s}_{0}$ , and get to choose some action ${a}_{0}\in A$ to take in the MDP. As a result of our choice, the state of the MDP randomly transitions to some successor state ${s}_{1}$ , drawn according to ${s}_{1}\sim {P}_{{s}_{0}{a}_{0}}$ . Then, we get to pick another action ${a}_{1}$ . As a result of this action, the state transitions again, now to some ${s}_{2}\sim {P}_{{s}_{1}{a}_{1}}$ . We then pick ${a}_{2}$ , and so on. Pictorially, we can represent this process as follows:

$${s}_{0}\stackrel{{a}_{0}}{\longrightarrow }{s}_{1}\stackrel{{a}_{1}}{\longrightarrow }{s}_{2}\stackrel{{a}_{2}}{\longrightarrow }{s}_{3}\stackrel{{a}_{3}}{\longrightarrow }\cdots $$
Upon visiting the sequence of states ${s}_{0},{s}_{1},...$ with actions ${a}_{0},{a}_{1},...$ , our total payoff is given by

$$R({s}_{0},{a}_{0})+\gamma R({s}_{1},{a}_{1})+{\gamma }^{2}R({s}_{2},{a}_{2})+\cdots $$

Or, when we are writing rewards as a function of the states only, this becomes $R({s}_{0})+\gamma R({s}_{1})+{\gamma }^{2}R({s}_{2})+\cdots $ .
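The dynamics and the total payoff above can both be sketched in a few lines of code. This is a minimal illustration, not a full RL implementation: the two-state MDP, the rewards, and the fixed "always go" policy are all hypothetical, chosen only to make the discounted sum concrete:

```python
import random

random.seed(0)

# Illustrative two-state MDP with the same shape as the tuple (S, A, {P_sa}, gamma, R).
P = {
    ("s0", "go"):   {"s0": 0.3, "s1": 0.7},
    ("s1", "go"):   {"s0": 0.6, "s1": 0.4},
    ("s0", "stay"): {"s0": 0.9, "s1": 0.1},
    ("s1", "stay"): {"s0": 0.1, "s1": 0.9},
}
R = {("s0", "go"): 0.0, ("s1", "go"): 1.0,
     ("s0", "stay"): 0.0, ("s1", "stay"): 1.0}
gamma = 0.9

def step(s, a):
    """Sample a successor state s' ~ P_sa by inverting the CDF of P[(s, a)]."""
    u, cum = random.random(), 0.0
    for s_next, p in P[(s, a)].items():
        cum += p
        if u <= cum:
            return s_next
    return s_next  # guard against floating-point round-off

def total_payoff(states, actions):
    """Discounted sum R(s0, a0) + gamma * R(s1, a1) + gamma^2 * R(s2, a2) + ..."""
    return sum(gamma ** t * R[(s, a)]
               for t, (s, a) in enumerate(zip(states, actions)))

# Roll out a short trajectory under a (hypothetical) fixed policy: always "go".
s, states, actions = "s0", [], []
for t in range(5):
    a = "go"
    states.append(s)
    actions.append(a)
    s = step(s, a)

print(total_payoff(states, actions))
```

Note that the payoff weights the reward at time $t$ by ${\gamma }^{t}$, so rewards obtained sooner contribute more: visiting the rewarding state for the first two steps is worth $1+\gamma =1.9$ here, while visiting it only at step two is worth ${\gamma }^{2}=0.81$.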