
So it turns out that there's a version of Bellman's equations for V* as well, rather than for Vπ, and I'll just write that down. It says that the optimal payoff you can get starting from the state s is equal to your immediate reward R(s), plus a term that depends on what action a you take, which gives your expected future payoff. So if I take an action a in some state s, then with probability P_sa(s′) I transition to the state s′, and once I'm in the state s′, I expect my total payoff from there to be V*(s′), because I'm now starting from the state s′.

So the only thing in this equation I still need to fill in is the action a. In order to actually obtain the maximum, the optimal expected total payoff, what you should choose here is the max over actions a: choose the action a that maximizes the expected value of your total payoff.

So it just makes sense. This is the version of Bellman's equations for V* rather than Vπ, and I'll just say it again: my optimal expected total payoff is my immediate reward, plus the best I can do by choosing an action, the max over all actions a of my expected future payoff. Written out, that's V*(s) = R(s) + max_a γ Σ_{s′} P_sa(s′) V*(s′).
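To make that backup concrete, here is a minimal sketch in Python of evaluating the right-hand side of this equation for a single state. The data layout is an assumption for illustration, not something defined in the lecture: R[s] holds the immediate reward R(s), P[s][a][s_prime] holds the transition probability P_sa(s′), and V is a current table of values.

# Hypothetical MDP representation (an assumption, not from the lecture):
#   R[s]              -> immediate reward R(s)
#   P[s][a][s_prime]  -> transition probability P_sa(s')
#   V[s]              -> current estimate of the value of state s

def bellman_optimality_rhs(s, V, R, P, gamma):
    # Compute R(s) + gamma * max_a sum_{s'} P_sa(s') * V(s')
    best_future = max(
        sum(prob * V[s_prime] for s_prime, prob in P[s][a].items())
        for a in P[s]
    )
    return R[s] + gamma * best_future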

And this also leads to my definition of π*. Let's say I'm in some state s and I want to know what action to choose. Well, if I'm in the state s, I'm going to get the immediate reward R(s) anyway, so the best action to choose is whatever action maximizes the second term. In other words, if my robot is in some state s and it wants to know what action to take, I want the action that maximizes my expected total payoff, and so π*(s) is defined as the arg max over actions a of Σ_{s′} P_sa(s′) V*(s′).

I could also put the gamma in there, but gamma is just a positive constant, almost always positive, so I just drop it: it's only a constant scaling factor and doesn't affect which action attains the arg max.

And so the consequence of this definition is that π* is actually the optimal policy, because π* maximizes my expected total payoff.
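As a sketch of how that definition could be turned into code, under the same hypothetical MDP representation as above, the arg max reads off a greedy action for each state from V*. Gamma is dropped inside the arg max, as just discussed, because a positive constant factor doesn't change which action wins.

def optimal_policy_from_v(V, P):
    # pi*(s) = arg max_a  sum_{s'} P_sa(s') * V*(s')
    pi = {}
    for s in P:
        pi[s] = max(
            P[s],
            key=lambda a: sum(prob * V[s_prime]
                              for s_prime, prob in P[s][a].items()),
        )
    return pi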

Cool. Any questions at this point? Cool. So what I'd like to do now is talk about how algorithms can actually compute π*, the optimal policy. I should write down a little bit more before I do that, but notice that if I can compute V*, the optimal value function, then I can plug it into this equation and I'll be done. So if I can compute V*, then using this definition of π* I can compute the optimal policy.

So my strategy for computing the optimal policy will be to compute V* and then plug it into this equation, and that will give me the optimal policy π*. So my next goal will really be to compute V*.

But the definition of V* by itself doesn't lead to a nice algorithm for computing it, because – let's see – I know how to compute Vπ for any given policy π by solving that system of linear equations, but there's an exponentially large number of policies. So with 11 states and four actions, the number of possible policies is 4 to the power of 11. This is a huge space of possible policies, and so I can't actually enumerate all possible policies and then take a max over [inaudible].
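As a quick sanity check of that counting argument (11 states and four actions are the grid-world sizes from the lecture; the variable names are just for illustration):

num_states, num_actions = 11, 4
print(num_actions ** num_states)   # 4194304 deterministic policies, far too many to enumerate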

Source: OpenStax, Machine learning. OpenStax CNX, Oct 14, 2013. Download for free at http://cnx.org/content/col11500/1.4
