
Both versions will cause V(s) to converge to V*(s). Asynchronous updates can be just a tiny little bit faster in practice, but it turns out value iteration with synchronous updates is also easier to analyze, and that's the version I wrote down.
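To make the synchronous update concrete, here's a minimal sketch of value iteration in Python. The data structures (a transition model P, a reward table R, and the state/action sets) are my own assumptions for illustration, not something from the lecture.

```python
# A minimal sketch of value iteration with synchronous updates.
# Assumed (hypothetical) data structures, not from the lecture:
#   states  - iterable of states
#   actions - iterable of actions
#   P[s][a] - list of (probability, next_state) pairs, i.e. P_sa(s')
#   R[s]    - immediate reward for state s
def value_iteration(states, actions, P, R, gamma=0.99, tol=1e-6):
    V = {s: 0.0 for s in states}          # initialize V(s) := 0 for every s
    while True:
        # Synchronous update: compute every new value from the old table V,
        # then swap in the whole new table at once.
        V_new = {
            s: R[s] + gamma * max(
                sum(p * V[s2] for p, s2 in P[s][a]) for a in actions
            )
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new                  # (approximately) V*
        V = V_new
```

An asynchronous version would instead update V[s] in place, one state at a time, reusing values already updated in the current sweep; both versions converge to V*.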

So when you run this algorithm on the MDP – I forgot to say all these values were computed with gamma equals 0.99, and actually, Roger Gross, who's, I guess, a master's [inaudible], helped me with computing some of these numbers. So you compute it that way – you run value iteration on this MDP. The numbers you get for V* are as follows: .86, .90 – again, the numbers sort of don't matter that much, but just take a look at them and make sure they intuitively make sense.

And then when you plug those into the formula I wrote down earlier for computing π* as a function of V*, then – well, I drew this previously, but here's the optimal policy π*.

And so, just to summarize, the process is: run value iteration to compute V*, which would be this table of numbers, and then use my formula for π* to compute the optimal policy, which is this policy in this case.
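As a sketch of that second step, here is one way you might read the policy off V* in code, using the same hypothetical data structures as in the value-iteration sketch above: π*(s) is the action that maximizes the expected value of the next state.

```python
# A minimal sketch of extracting the optimal policy from V*:
#   pi*(s) = argmax_a  sum_{s'} P_sa(s') V*(s')
# (same hypothetical P/actions structures as in the earlier sketch).
def greedy_policy(states, actions, P, V_star):
    pi = {}
    for s in states:
        pi[s] = max(actions,
                    key=lambda a: sum(p * V_star[s2] for p, s2 in P[s][a]))
    return pi
```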

Now, to be just completely concrete, let's look at that (3,1) state again. Is it better to go west or is it better to go north? So let me just illustrate why I'd rather go west than north. In the formula for π*, if I go west, the relevant term is the sum over s' of P_{s,a}(s') times V*(s') – well, let me just write this down. Right, and if I go north, it would be this other expression. I wrote it down really quickly, so it's messy writing. The way I got these numbers is: suppose I'm in this state, this (3,1) state. If I choose to go west, then with chance .8 I get to the state with value .75 in this table. With chance .1, I veer off and get to the .69 state, and with chance .1, I go south, bounce off the wall, and stay where I am.

So my expected future payoff for going west is .8 times .75, plus .1 times .69, plus .1 times .71 – the last .71 being if I bounce off the wall to the south and stay where I am – and that gives you .740.

You can then repeat the same process to estimate your expected total payoff if you go north. If you do that, with a .8 chance you end up going north, so you get .69; with a .1 chance you end up here, and with a .1 chance you end up there. That maps directly to this expression, and computing the expectation, you get .676. And so your expected total payoff is higher if you go west than if you go north. And that's why the optimal action in this state is to go west.
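Just to check the arithmetic for the west case with the numbers quoted above (the transcript doesn't give all three successor values for the north case, so that one is left out here):

```python
# Expected future payoff for going west from the (3,1) state,
# using the values read off the V* table in the lecture:
#   0.8 * 0.75 + 0.1 * 0.69 + 0.1 * 0.71
west = 0.8 * 0.75 + 0.1 * 0.69 + 0.1 * 0.71
print(f"{west:.3f}")   # prints 0.740, matching the lecture
```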

So that was value iteration. It turns out there are two sort of standard algorithms for computing optimal policies in MDPs. Value iteration is one – I'll go on as soon as you finish writing. So value iteration is one, and the other sort of standard algorithm for computing optimal policies in MDPs is called policy iteration. And let me – I'm just going to write this down.

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4
