And so you could actually choose φ(s) = s. That would be one reasonable choice, if you want to approximate the value function as a linear function of the state, but you can also choose other things. So for example, for the inverted pendulum example, you may choose φ(s) to be equal to a vector of features that may be, say, x, x², ẋ, ẋ², maybe some cross terms, maybe x times ẋ, and so on. So you choose some vector of features and then approximate the value function, the value of the state, as V(s) = θᵀφ(s). And I should apologize in advance; I’m overloading notation here. It’s unfortunate. I use θ both to denote the angle of the pole in the inverted pendulum, so that angle is written θ, but I’m also using θ to denote the vector of parameters in my learning algorithm. So sorry about the overloaded notation.
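The feature map described above can be sketched in code. This is a hypothetical example: the exact features (which squares and cross terms to include) are a design choice, and the names `phi` and `V_approx` are mine, not from the lecture.

```python
import numpy as np

def phi(s):
    """Hypothetical feature map for the inverted pendulum.

    s = (x, x_dot, theta, theta_dot): cart position/velocity and
    pole angle/angular velocity. Includes squares and cross terms
    as one illustrative choice of features.
    """
    x, x_dot, theta, theta_dot = s
    return np.array([1.0,                              # intercept term
                     x, x_dot, theta, theta_dot,       # raw state
                     x**2, x_dot**2, theta**2, theta_dot**2,  # squares
                     x * x_dot, theta * theta_dot])    # cross terms

def V_approx(theta_params, s):
    """V(s) ~= theta^T phi(s): value function linear in the features."""
    return theta_params @ phi(s)
```

With this choice, the learning problem reduces to finding the parameter vector, exactly as in linear regression.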
Just like we did in linear regression, my goal is to come up with a linear combination of the features that gives me a good approximation to the value function, and this is completely analogous to linear regression, where we estimated the response y as a linear function of the features of the input. That’s what we have in linear regression. Let me just write down value iteration again and then I’ll write down an approximation to value iteration. So for discrete states, this is the idea behind value iteration, and we said that V(s) will be updated as V(s) := R(s) + γ maxₐ Σ_{s′} P_sa(s′) V(s′).
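The discrete update above can be written as a short sketch. This is a standard rendering of value iteration, not code from the lecture; the array shapes and iteration count are my assumptions.

```python
import numpy as np

def value_iteration(R, P, gamma, n_iters=100):
    """Discrete value iteration sketch.

    R: (n_states,) reward vector.
    P: (n_actions, n_states, n_states), P[a, s, s'] = P_sa(s').
    Repeatedly applies V(s) := R(s) + gamma * max_a sum_{s'} P_sa(s') V(s').
    """
    V = np.zeros(R.shape[0])
    for _ in range(n_iters):
        # For each action a, P[a] @ V gives sum_{s'} P_sa(s') V(s')
        # for every state s at once; then take the max over actions.
        V = R + gamma * (P @ V).max(axis=0)
    return V
```

For a state with an absorbing self-transition and reward 1, this converges toward 1/(1−γ), as the Bellman fixed point predicts.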
That was value iteration, and in the case of continuous states, the sum would be replaced by an integral, an integral over states rather than a sum over states. Let me just write this as V(s) := R(s) + γ maxₐ ∫_{s′} P_sa(s′) V(s′) ds′. That integral over states s′ is really an expectation with respect to a random state s′ drawn from the state transition probabilities P_sa, of V(s′). So this is a sum over all states s′ of the probability of going to s′ times the value, so that’s really an expectation over the random state s′ drawn from P_sa of V(s′). And so what I’ll do now is write down an algorithm called fitted value iteration that’s an approximation to this, but specifically for continuous states. I’ve just written down the first two steps, and then I’ll continue on the next board. So the first step of the algorithm is we’ll sample, choose some set of states at random. So sample s⁽¹⁾, s⁽²⁾ through s⁽ᵐ⁾ randomly, so choose a set of states randomly, and initialize my parameter vector θ to be equal to zero. This is analogous to value iteration, where I might initialize the value function to be the function of all zeros. Then here’s the rest of the algorithm. Got quite a lot to write, actually. Let’s see. And so that’s the algorithm. Let me just adjust the writing. Give me a second. Give me a minute to finish and then I’ll step through this. Actually, if some of my handwriting is illegible, let me know. So let me step through this and briefly explain the rationale. So the heart of the algorithm is, let’s see. In the original value iteration algorithm, we would take the value for each state, V(s⁽ⁱ⁾), and we would overwrite it with this expression here. In the original discrete value iteration algorithm, we would take V(s⁽ⁱ⁾) and set V(s⁽ⁱ⁾) to be equal to that. Now, in the continuous state case, we have an infinite, continuous set of states, and so you can’t discretely set the value of each of these to that.
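The expectation over s′ that replaces the discrete sum can be approximated by averaging over samples from the simulator. A minimal sketch, assuming a `simulate(s, a, rng)` function (hypothetical name) that returns one sampled next state:

```python
import numpy as np

def expected_next_value(simulate, V, s, a, k=10, rng=None):
    """Monte Carlo estimate of E_{s' ~ P_sa}[V(s')].

    simulate(s, a, rng): assumed simulator returning one next state s'.
    V: (approximate) value function. Averages V over k sampled
    successor states in place of the intractable integral.
    """
    rng = rng or np.random.default_rng(0)
    samples = [simulate(s, a, rng) for _ in range(k)]
    return np.mean([V(sp) for sp in samples])
```

With a deterministic simulator this just evaluates V at the unique next state; with a stochastic one, the average converges to the true expectation as k grows.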
So what we’ll do instead is choose the parameters θ so that V(s⁽ⁱ⁾) is as close as possible to this thing on the right hand side instead. And this is what y⁽ⁱ⁾ turns out to be. So concretely, what I’m going to do is construct estimates of this term, and then I’m going to choose the parameters of my function approximator. I’m going to choose my parameters θ so that V(s⁽ⁱ⁾) is as close as possible to these. That’s what y⁽ⁱ⁾ is, and specifically, what I’m going to do is choose the parameters θ to minimize the sum of squared differences between y⁽ⁱ⁾ and θᵀφ(s⁽ⁱ⁾). This thing here is just V(s⁽ⁱ⁾), because I’m approximating V(s⁽ⁱ⁾) as a linear function of φ(s⁽ⁱ⁾), and so I choose the parameters θ to minimize the sum of squared differences. So this last step is basically the approximation version of value iteration. What everything else above was doing was just coming up with an approximation to this term, this thing here, which I was calling y⁽ⁱ⁾. And so concretely, for every state s⁽ⁱ⁾ we want to estimate what the thing on the right hand side is, but there’s an expectation here. There’s an expectation over a continuous set of states, maybe a very high-dimensional state, so I can’t compute this expectation exactly. What I’ll do instead is use my simulator to sample a set of states from this distribution, from this P_{s⁽ⁱ⁾a}, the state transition distribution of where I get to if I take the action a in the state s⁽ⁱ⁾, and then I’ll average over that sample of states to compute this expectation. And so stepping through the algorithm, it just says that for each state and for each action, I’m going to sample a set of states, s′₁ through s′ₖ, from that state transition distribution, still using the model, and then I’ll set q(a) to be equal to that average, and so this is my estimate for R(s⁽ⁱ⁾) + γ times the expected value of V(s′) for that specific action a. Then I’ll take the maximum over actions a, and this gives me y⁽ⁱ⁾, my estimate for that right hand side.
And finally, I’ll run linear regression, which is that last step, to get V(s⁽ⁱ⁾) to be close to the y⁽ⁱ⁾’s. And so this algorithm is called fitted value iteration, and it actually often works quite well for continuous problems with anywhere from 6- to 10- to 20-dimensional state spaces, if you can choose appropriate features. Can you raise a hand please if this algorithm makes sense? Some of you didn’t have your hands up. Are there questions for those, yeah?
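Putting the pieces together, the whole procedure can be sketched as follows. This is a reconstruction under stated assumptions, not the lecturer’s code: `R`, `simulate`, and `phi` are assumed to be supplied by the caller, and the least-squares solve plays the role of the linear regression step.

```python
import numpy as np

def fitted_value_iteration(states, actions, R, simulate, phi,
                           gamma=0.99, k=10, n_iters=50, rng=None):
    """Fitted value iteration sketch.

    states: (m, d) array of sampled states s^(1)..s^(m).
    R(s): reward; simulate(s, a, rng): one sampled next state s' ~ P_sa.
    phi(s): feature vector, with V(s) ~= theta^T phi(s).
    """
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(phi(states[0]).shape[0])   # initialize theta := 0
    V = lambda s: theta @ phi(s)                # closes over current theta
    for _ in range(n_iters):
        y = np.empty(len(states))
        for i, s in enumerate(states):
            # q(a) = R(s) + gamma * (1/k) * sum_j V(s'_j), s'_j ~ P_sa
            q = [R(s) + gamma * np.mean([V(simulate(s, a, rng))
                                         for _ in range(k)])
                 for a in actions]
            y[i] = max(q)                       # y_i = max_a q(a)
        # Linear regression: choose theta to minimize
        # sum_i (theta^T phi(s^(i)) - y_i)^2.
        Phi = np.array([phi(s) for s in states])
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta
```

On a toy one-dimensional absorbing-state problem with reward R(s) = s, the fitted values approach s/(1−γ), the analytic fixed point.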