$$\pi^*(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s') V^*(s').$$

Note that $\pi^*(s)$ gives the action $a$ that attains the maximum in the “max” in Equation [link].

It is a fact that for every state $s$ and every policy $\pi$, we have

$$V^*(s) = V^{\pi^*}(s) \geq V^{\pi}(s).$$

The first equality says that $V^{\pi^*}$, the value function for $\pi^*$, is equal to the optimal value function $V^*$ for every state $s$. Further, the inequality says that $\pi^*$'s value is at least as large as the value of any other policy. In other words, $\pi^*$ as defined in Equation [link] is the optimal policy.

Note that $\pi^*$ has the interesting property that it is the optimal policy for all states $s$. It is not the case that if we were starting in some state $s$ there would be one optimal policy for that state, and if we were starting in some other state $s'$ there would be a different policy that is optimal for $s'$. Rather, the same policy $\pi^*$ attains the maximum in Equation [link] for all states $s$. This means that we can use the same policy $\pi^*$ no matter what the initial state of our MDP is.
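As an illustration (not part of the original notes), here is a minimal sketch of how one might compute $\pi^*$ from $V^*$ in code. It assumes hypothetical NumPy arrays P[s, a, s_next] holding the transition probabilities $P_{sa}(s')$ and V_star holding $V^*(s)$:

```python
import numpy as np

def greedy_policy(P, V_star):
    """Return pi*(s) = argmax_a sum_{s'} P_{sa}(s') V*(s') for every state s.

    P      : array of shape (|S|, |A|, |S|), P[s, a, s_next] = P_{sa}(s')
    V_star : array of shape (|S|,), the optimal value function V*(s)
    """
    # Expected optimal value of the next state for every (state, action) pair.
    expected_next = P @ V_star        # shape (|S|, |A|)
    # For each state, pick the action that maximizes that expectation.
    return np.argmax(expected_next, axis=1)
```

Because the same greedy choice is made at every state, this single array of actions is the optimal policy regardless of the initial state.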

Value iteration and policy iteration

We now describe two efficient algorithms for solving finite-state MDPs. For now, we will consider only MDPs with finite state and action spaces ($|S| < \infty$, $|A| < \infty$).

The first algorithm, value iteration , is as follows:

  1. For each state $s$, initialize $V(s) := 0$.
  2. Repeat until convergence {
     • For every state, update $V(s) := R(s) + \max_{a \in A} \gamma \sum_{s'} P_{sa}(s') V(s')$.
  }

This algorithm can be thought of as repeatedly trying to update the estimated value function using the Bellman equations [link].

There are two possible ways of performing the updates in the inner loop of the algorithm. In the first, we can first compute the new values for $V(s)$ for every state $s$, and then overwrite all the old values with the new values. This is called a synchronous update. In this case, the algorithm can be viewed as implementing a “Bellman backup operator” that takes a current estimate of the value function and maps it to a new estimate. (See homework problem for details.) Alternatively, we can also perform asynchronous updates. Here, we would loop over the states (in some order), updating the values one at a time.

Under either synchronous or asynchronous updates, it can be shown that value iteration will cause $V$ to converge to $V^*$. Having found $V^*$, we can then use Equation [link] to find the optimal policy.
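The following is a minimal sketch (not from the original notes) of value iteration with synchronous updates. It assumes the same hypothetical P[s, a, s_next] and adds hypothetical inputs R[s] for the rewards $R(s)$, a discount factor gamma, and a tolerance tol used as a stopping criterion:

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    """Synchronous value iteration for a finite-state, finite-action MDP.

    P     : array of shape (|S|, |A|, |S|), P[s, a, s_next] = P_{sa}(s')
    R     : array of shape (|S|,), R[s] = R(s)
    gamma : discount factor in [0, 1)
    tol   : stop once the largest change in V falls below this threshold
    """
    V = np.zeros(R.shape[0])                 # 1. initialize V(s) := 0 for every state
    while True:
        # 2. Bellman backup: V(s) := R(s) + max_a gamma * sum_{s'} P_{sa}(s') V(s')
        Q = R[:, None] + gamma * (P @ V)     # shape (|S|, |A|)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:  # updates have become negligibly small
            return V_new
        V = V_new                            # overwrite all old values at once (synchronous)
```

An asynchronous variant would instead update V[s] in place while looping over the states in some order; under either scheme the iterates converge to $V^*$, and the greedy policy with respect to the result recovers $\pi^*$.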

Apart from value iteration, there is a second standard algorithm for finding an optimal policy for an MDP. The policy iteration algorithm proceeds as follows:

  1. Initialize $\pi$ randomly.
  2. Repeat until convergence {
     (a) Let $V := V^{\pi}$.
     (b) For each state $s$, let $\pi(s) := \arg\max_{a \in A} \sum_{s'} P_{sa}(s') V(s')$.
  }

Thus, the inner loop repeatedly computes the value function for the current policy, and then updates the policy using the current value function. (The policy $\pi$ found in step (b) is also called the policy that is greedy with respect to $V$.) Note that step (a) can be done by solving Bellman's equations as described earlier, which in the case of a fixed policy is just a set of $|S|$ linear equations in $|S|$ variables. A minimal sketch of the whole procedure, under the same hypothetical conventions as above, is shown below.
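This sketch (again, not from the original notes) uses the same hypothetical P, R, and gamma as before, and implements step (a) as an exact linear solve of Bellman's equations for the fixed policy $\pi$, i.e. $(I - \gamma P_{\pi})V = R$:

```python
import numpy as np

def policy_iteration(P, R, gamma):
    """Policy iteration with exact policy evaluation by a linear solve.

    P     : array of shape (|S|, |A|, |S|), P[s, a, s_next] = P_{sa}(s')
    R     : array of shape (|S|,), R[s] = R(s)
    gamma : discount factor in [0, 1)
    """
    n_states, n_actions, _ = P.shape
    pi = np.random.randint(n_actions, size=n_states)    # 1. initialize pi randomly
    while True:
        # (a) Evaluate pi: solve the |S| Bellman equations (I - gamma * P_pi) V = R.
        P_pi = P[np.arange(n_states), pi]                # shape (|S|, |S|)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)
        # (b) Improve pi: act greedily with respect to V.
        pi_new = np.argmax(P @ V, axis=1)
        if np.array_equal(pi_new, pi):                   # policy stopped changing
            return pi, V
        pi = pi_new
```

The np.linalg.solve call is step (a): for a fixed policy the Bellman equations are linear in the $|S|$ unknowns $V^{\pi}(s)$, so they can be solved exactly rather than by iteration.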

After at most a finite number of iterations of this algorithm, $V$ will converge to $V^*$, and $\pi$ will converge to $\pi^*$.

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4