
So I should write down some other things first, just to ground the notation, but what I’ll do is eventually come up with an algorithm for computing V*, the optimal value function, and then we’ll plug that into this and that will give us the optimal policy π*.
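The equation being plugged into was written on the board and doesn’t appear in the transcript; in the notation of the accompanying lecture notes (transition probabilities P_{sa}(s')), the usual rule for reading the optimal policy off V* is

$$\pi^*(s) = \arg\max_{a \in A} \sum_{s'} P_{sa}(s')\, V^*(s').$$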

And so I’ll write down the algorithm in a second, but just to ground the notation, well, yeah, let’s skip that. Let’s just talk about the algorithm. So this is an algorithm called value iteration, and it makes use of Bellman’s equations for the optimal value function to compute V*. So here’s the algorithm. Okay, and that’s the entirety of the algorithm, and oh, you repeat the step, I guess. You repeatedly do this step.
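The repeated step itself was written on the board and isn’t in the transcript; it is the Bellman optimality update, which in the notes’ notation (reward R(s), discount factor γ, transitions P_{sa}(s')) reads

$$V(s) := R(s) + \max_{a \in A}\ \gamma \sum_{s'} P_{sa}(s')\, V(s').$$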

So just to be concrete, let’s say in my MDP of 11 states, the first step is to initialize V(s) equal to zero, so what that means is, in a computer implementation, I create an array of 11 elements and set all of them to zero. It turns out you can initialize it to anything; it doesn’t really matter.

And now what I’m going to do is I’ll take Bellman’s equations, and we’ll keep on taking the right hand side of Bellman’s equations, computing it, and copying it down onto the left hand side. So we’ll essentially iteratively try to make Bellman’s equations hold true for the numbers V(s) that are stored along the way. So V(s) here is the array of 11 elements, and I’m going to repeatedly compute the right hand side and copy that onto V(s).
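A minimal sketch of this loop, assuming the MDP is given as hypothetical arrays R (rewards) and P (transition probabilities), which are names chosen here for illustration and not from the lecture:

```python
import numpy as np

def value_iteration(R, P, gamma=0.99, tol=1e-6):
    """Compute V* by repeatedly applying the Bellman optimality update.

    R : array of shape (n_states,), the reward R(s)
    P : array of shape (n_states, n_actions, n_states),
        P[s, a, s'] = probability of landing in s' after taking action a in s
    """
    n_states = R.shape[0]
    V = np.zeros(n_states)                 # initialize V(s) = 0 (any init works)
    while True:
        # Right hand side of Bellman's equation for every state at once:
        # R(s) + gamma * max_a sum_{s'} P_sa(s') V(s')
        Q = R[:, None] + gamma * np.einsum('san,n->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:  # stop once the values settle
            return V_new
        V = V_new                            # overwrite and repeat
```

For the 11-state grid world from the lecture, R would just have shape (11,) and P shape (11, 4, 11), one slice per action.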

And it turns out that when you do this, this will make V(s) converge to V*(s), and that may be of no surprise, because we know V* satisfies Bellman’s equations.

Just to tell you, some of these details go a bit beyond what we’ll cover, so I won’t prove the convergence of this algorithm. Some implementation details: it turns out there are two ways you can do this update. When I say for every state s, perform this update, one way you can do it is, for every state s, compute the right hand side and then simultaneously overwrite the left hand side for every state s. And if you do that, that’s called a synchronous update, so you update all the states s simultaneously.

And if you do that, it’s sometimes written as follows. If you do a synchronous update, then it’s as if you have some value function at the ith iteration, or tth iteration, of the algorithm, and then you compute some function of your entire value function, and then you set your value function to that new version, so you simultaneously update all 11 values of your value function, one per state.

So it’s sometimes written like this. B here is called the Bellman backup operator, so in synchronous value iteration you sort of take the value function, you apply the Bellman backup operator to it, and the Bellman backup operator just means computing the right hand side of Bellman’s equations for all the states and overwriting your entire value function.
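The equation written on the board isn’t in the transcript; the standard way the Bellman backup operator is written, assuming the notation above, is

$$V^{(t+1)} := B\big(V^{(t)}\big), \qquad \big(B(V)\big)(s) = R(s) + \max_{a \in A}\ \gamma \sum_{s'} P_{sa}(s')\, V(s').$$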

The other way of performing these updates is asynchronous updates, which is where you update the states one at a time. So you go through the states in some fixed order: you’d update V(s) for state No. 1, then update V(s) for state No. 2, then state No. 3, and so on. And when I’m updating V(s) for state No. 5, if I end up using the values for states 1, 2, 3, and 4 on the right hand side, then I’d use my recently updated values. So as you update sequentially, when you’re updating the fifth state, you’d be using the new values for states 1, 2, 3, and 4. And that’s called an asynchronous update.
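A sketch of one asynchronous sweep, reusing the hypothetical R and P arrays from the earlier snippet; the only difference from the synchronous version is that V is overwritten in place, state by state, so states later in the sweep already see the freshly updated values:

```python
import numpy as np

def async_sweep(V, R, P, gamma=0.99):
    """One in-place (asynchronous) Bellman sweep over all states."""
    n_states, n_actions, _ = P.shape
    for s in range(n_states):            # fixed order: state 0, 1, 2, ...
        # Right hand side for this one state; P[s, a] @ V already uses any
        # values overwritten earlier in this same sweep.
        V[s] = R[s] + gamma * max(P[s, a] @ V for a in range(n_actions))
    return V
```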
