
can make it run much more quickly. Specifically, in the inner loop of the algorithm where we apply value iteration, if instead of initializing value iteration with V = 0, we initialize it with the solution found during the previous iteration of our algorithm, then that will provide value iteration with a much better initial starting point and make it converge more quickly.
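As a concrete illustration, here is a minimal sketch (in Python, for a tabular MDP) of value iteration that accepts an initial V, so the outer loop can warm-start it with the previous solution rather than zeros. The arrays P and R and the helper estimate_model_from_data are hypothetical names, not part of the notes.

```python
import numpy as np

def value_iteration(P, R, gamma, V_init=None, tol=1e-6):
    """Value iteration for a finite MDP.

    P: array of shape (num_actions, num_states, num_states),
       P[a, s, s_next] = estimated transition probability.
    R: array of shape (num_states,), reward for each state.
    V_init: optional warm start; defaults to all zeros.
    """
    num_actions, num_states, _ = P.shape
    V = np.zeros(num_states) if V_init is None else V_init.copy()
    while True:
        # Bellman backup: V(s) = R(s) + gamma * max_a sum_{s'} P(s'|s,a) V(s')
        Q = R[None, :] + gamma * (P @ V)   # shape (num_actions, num_states)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Outer loop sketch: re-estimate the model, then warm-start value iteration
# with the V found on the previous outer iteration instead of V = 0.
# V = np.zeros(num_states)
# for it in range(num_outer_iterations):
#     P, R = estimate_model_from_data(...)   # hypothetical helper
#     V = value_iteration(P, R, gamma, V_init=V)
```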

Continuous state MDPs

So far, we've focused our attention on MDPs with a finite number of states. We now discuss algorithms for MDPs that may have an infinite number of states. For example, for a car, we might represent the state as (x, y, θ, ẋ, ẏ, θ̇), comprising its position (x, y); orientation θ; velocity in the x and y directions ẋ and ẏ; and angular velocity θ̇. Hence, S = ℝ⁶ is an infinite set of states, because there is an infinite number of possible positions and orientations for the car. Technically, θ is an orientation and so the range of θ is better written θ ∈ [−π, π) than θ ∈ ℝ; but for our purposes, this distinction is not important. Similarly, the inverted pendulum you saw in PS4 has states (x, θ, ẋ, θ̇), where θ is the angle of the pole. And a helicopter flying in 3D space has states of the form (x, y, z, φ, θ, ψ, ẋ, ẏ, ż, φ̇, θ̇, ψ̇), where the roll φ, pitch θ, and yaw ψ angles specify the 3D orientation of the helicopter.

In this section, we will consider settings where the state space is S = ℝⁿ, and describe ways for solving such MDPs.

Discretization

Perhaps the simplest way to solve a continuous-state MDP is to discretize the state space, and then to use an algorithm like value iteration or policy iteration, as described previously.

For example, if we have 2D states (s₁, s₂), we can use a grid to discretize the state space:

[Figure: a uniform grid discretizing the 2D state space]

Here, each grid cell represents a separate discrete state s̄. We can then approximate the continuous-state MDP via a discrete-state one (S̄, A, {Ps̄a}, γ, R), where S̄ is the set of discrete states, {Ps̄a} are our state transition probabilities over the discrete states, and so on. We can then use value iteration or policy iteration to solve for V*(s̄) and π*(s̄) in the discrete-state MDP (S̄, A, {Ps̄a}, γ, R). When our actual system is in some continuous-valued state s ∈ S and we need to pick an action to execute, we compute the corresponding discretized state s̄, and execute action π*(s̄).
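As a rough sketch of this lookup step, the code below maps a continuous 2D state to its grid-cell index and acts with the discrete policy. The grid bounds, resolution, and the solved policy pi_star are assumptions for illustration, not values given in the notes.

```python
import numpy as np

# Assumed discretization of a 2D state space over [lo, hi] in each dimension.
lo = np.array([0.0, 0.0])      # lower bounds of s1, s2 (assumed)
hi = np.array([1.0, 1.0])      # upper bounds of s1, s2 (assumed)
cells_per_dim = 10             # grid resolution (assumed)

def discretize(s):
    """Map a continuous state s in R^2 to the index of its grid cell s_bar."""
    # Which cell along each dimension (clipped so boundary states stay in range).
    idx = np.floor((s - lo) / (hi - lo) * cells_per_dim).astype(int)
    idx = np.clip(idx, 0, cells_per_dim - 1)
    # Flatten the 2D cell index into a single discrete-state index.
    return idx[0] * cells_per_dim + idx[1]

# pi_star[s_bar] would come from running value/policy iteration on the
# discretized MDP (S_bar, A, {Ps_bar_a}, gamma, R); hypothetical here.
# s = get_current_continuous_state()
# a = pi_star[discretize(s)]
```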

This discretization approach can work well for many problems. However, it has two downsides. First, it uses a fairly naive representation for V* (and π*). Specifically, it assumes that the value function takes a constant value over each of the discretization intervals (i.e., that the value function is piecewise constant in each of the grid cells).

To better understand the limitations of such a representation, consider a supervised learning problem of fitting a function to this dataset:

[Figure: a scatterplot of data in which y increases roughly linearly with x]

Clearly, linear regression would do fine on this problem. However, if we instead discretize the x-axis, and then use a representation that is piecewise constant in each of the discretization intervals, then our fit to the data would look like this:

[Figure: the same dataset with a piecewise-constant (stepwise) fit overlaid]

This piecewise constant representation just isn't a good representation for many smooth functions. It results in little smoothing over the inputs, and no generalization over the different grid cells. Using this sort of representation, we would also need a very fine discretization (very small grid cells) to get a good approximation.
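To make the comparison concrete, here is a small sketch on synthetic, roughly linear data (the data and the interval count are assumptions for illustration). It fits a line and a piecewise-constant approximation that predicts the mean of y within each discretization interval, the supervised-learning analogue of a value function that is constant over each grid cell.

```python
import numpy as np

# Synthetic, roughly linear data (assumed for illustration).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, size=100))
y = x + rng.normal(scale=0.5, size=100)

# Linear regression fit: y ~ w*x + b.
w, b = np.polyfit(x, y, deg=1)
linear_pred = w * x + b

# Piecewise-constant fit: discretize the x-axis into intervals and
# predict the mean of y within each interval.
num_intervals = 10
edges = np.linspace(x.min(), x.max(), num_intervals + 1)
bins = np.clip(np.digitize(x, edges) - 1, 0, num_intervals - 1)
interval_means = np.array(
    [y[bins == i].mean() if np.any(bins == i) else 0.0
     for i in range(num_intervals)]
)
piecewise_pred = interval_means[bins]

# The piecewise-constant fit is a staircase: it neither smooths nor
# generalizes across intervals, and needs many intervals to approximate
# even this simple linear trend well.
print("linear fit MSE:   ", np.mean((linear_pred - y) ** 2))
print("piecewise fit MSE:", np.mean((piecewise_pred - y) ** 2))
```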

Source:  OpenStax, Machine learning. OpenStax CNX. Oct 14, 2013 Download for free at http://cnx.org/content/col11500/1.4