
For example, one may choose to learn a linear model of the form

$$ s_{t+1} = A s_t + B a_t, $$

using an algorithm similar to linear regression. Here, the parameters of the model are the matrices $A$ and $B$, and we can estimate them using the data collected from our $m$ trials, by picking

$$ \arg\min_{A,B} \; \sum_{i=1}^{m} \sum_{t=0}^{T-1} \left\| s_{t+1}^{(i)} - \left( A s_t^{(i)} + B a_t^{(i)} \right) \right\|^2 . $$

(This corresponds to the maximum likelihood estimate of the parameters.)
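
As a concrete illustration of this estimation step, here is a minimal NumPy sketch, assuming the $m$ trials are stored as arrays `states` of shape (m, T+1, n) and `actions` of shape (m, T, d); these array names and shapes are assumptions made for the example, not notation from the notes.

```python
import numpy as np

def fit_linear_model(states, actions):
    """Least-squares estimate of A, B in s_{t+1} = A s_t + B a_t.

    states:  array of shape (m, T+1, n) -- state trajectories from m trials
    actions: array of shape (m, T, d)   -- action trajectories from m trials
    """
    m, T_plus_1, n = states.shape
    d = actions.shape[2]
    T = T_plus_1 - 1

    # Stack every (s_t, a_t) pair as a row of X and every s_{t+1} as a row of Y.
    X = np.concatenate([states[:, :T, :].reshape(-1, n),
                        actions.reshape(-1, d)], axis=1)   # (m*T, n+d)
    Y = states[:, 1:, :].reshape(-1, n)                    # (m*T, n)

    # Ordinary least squares: minimize ||X W - Y||_F^2 over W = [A B]^T.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)              # (n+d, n)
    A, B = W[:n].T, W[n:].T
    return A, B
```

Stacking the pairs this way reduces the double sum over trials and time steps to a single ordinary least-squares problem.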

Having learned $A$ and $B$, one option is to build a deterministic model, in which given an input $s_t$ and $a_t$, the output $s_{t+1}$ is exactly determined. Specifically, we always compute $s_{t+1}$ according to the linear model above. Alternatively, we may also build a stochastic model, in which $s_{t+1}$ is a random function of the inputs, by modeling it as

$$ s_{t+1} = A s_t + B a_t + \epsilon_t , $$

where $\epsilon_t$ is a noise term, usually modeled as $\epsilon_t \sim \mathcal{N}(0, \Sigma)$. (The covariance matrix $\Sigma$ can also be estimated from data in a straightforward way.)
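
Continuing the sketch above (again with assumed array layouts and helper names), $\Sigma$ can be estimated as the empirical covariance of the residuals $\epsilon_t = s_{t+1} - (A s_t + B a_t)$, and the same fitted $A$, $B$ then give either a deterministic or a stochastic one-step simulator:

```python
import numpy as np

def estimate_noise_covariance(states, actions, A, B):
    """Empirical covariance of the residuals epsilon_t = s_{t+1} - (A s_t + B a_t)."""
    m, T_plus_1, n = states.shape
    T = T_plus_1 - 1
    S  = states[:, :T, :].reshape(-1, n)
    Sp = states[:, 1:, :].reshape(-1, n)
    U  = actions.reshape(-1, actions.shape[2])
    residuals = Sp - (S @ A.T + U @ B.T)          # one epsilon_t per (trial, step)
    return residuals.T @ residuals / residuals.shape[0]

def simulate_step(s, a, A, B, Sigma=None, rng=None):
    """Deterministic step if Sigma is None; otherwise add Gaussian noise."""
    s_next = A @ s + B @ a
    if Sigma is not None:
        rng = rng or np.random.default_rng()
        s_next = s_next + rng.multivariate_normal(np.zeros(len(s)), Sigma)
    return s_next
```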

Here, we've written the next-state $s_{t+1}$ as a linear function of the current state and action; but of course, non-linear functions are also possible. Specifically, one can learn a model $s_{t+1} = A \varphi_s(s_t) + B \varphi_a(a_t)$, where $\varphi_s$ and $\varphi_a$ are some non-linear feature mappings of the states and actions. Alternatively, one can also use non-linear learning algorithms, such as locally weighted linear regression, to learn to estimate $s_{t+1}$ as a function of $s_t$ and $a_t$. These approaches can also be used to build either deterministic or stochastic simulators of an MDP.
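
As an illustration of the feature-mapped variant, the sketch below reuses the same least-squares recipe with hypothetical feature maps (quadratic features for the state, raw features for the action); this particular choice of $\varphi_s$ and $\varphi_a$ is an assumption for the example, not something prescribed by the notes.

```python
import numpy as np

def phi_s(s):
    # Example state features: the raw state, its pairwise products, and a bias term.
    return np.concatenate([s, np.outer(s, s)[np.triu_indices(len(s))], [1.0]])

def phi_a(a):
    # Example action features: just the raw action.
    return np.asarray(a, dtype=float)

def fit_featurized_model(states, actions):
    """Least squares for s_{t+1} = A phi_s(s_t) + B phi_a(a_t)."""
    m, T_plus_1, n = states.shape
    T = T_plus_1 - 1
    Phi_s = np.array([phi_s(states[i, t])  for i in range(m) for t in range(T)])
    Phi_a = np.array([phi_a(actions[i, t]) for i in range(m) for t in range(T)])
    Y = states[:, 1:, :].reshape(-1, n)
    X = np.concatenate([Phi_s, Phi_a], axis=1)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    A, B = W[:Phi_s.shape[1]].T, W[Phi_s.shape[1]:].T
    return A, B
```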

Fitted value iteration

We now describe the fitted value iteration algorithm for approximating the value function of a continuous state MDP. In the sequel, we will assume that the problem has a continuous state space $S = \mathbb{R}^n$, but that the action space $A$ is small and discrete. In practice, most MDPs have much smaller action spaces than state spaces. E.g., a car has a 6d state space and a 2d action space (steering and velocity controls); the inverted pendulum has a 4d state space and a 1d action space; a helicopter has a 12d state space and a 4d action space. So, discretizing the set of actions is usually less of a problem than discretizing the state space would have been.

Recall that in value iteration, we would like to perform the update

$$ V(s) := R(s) + \gamma \max_a \int_{s'} P_{sa}(s') V(s') \, ds' = R(s) + \gamma \max_a \mathbb{E}_{s' \sim P_{sa}}\!\left[ V(s') \right] $$

(In "Value iteration and policy iteration" , we had written the value iteration update with a summation V ( s ) : = R ( s ) + γ max a s ' P s a ( s ' ) V ( s ' ) rather than an integral over states; the new notation reflects that we are now working in continuous states rather than discrete states.)

The main idea of fitted value iteration is that we are going to approximately carry out this step, over a finite sample of states $s^{(1)}, \ldots, s^{(m)}$. Specifically, we will use a supervised learning algorithm (linear regression in our description below) to approximate the value function as a linear or non-linear function of the states:

$$ V(s) = \theta^T \varphi(s). $$

Here, $\varphi$ is some appropriate feature mapping of the states.

For each state $s^{(i)}$ in our finite sample of $m$ states, fitted value iteration will first compute a quantity $y^{(i)}$, which will be our approximation to $R(s^{(i)}) + \gamma \max_a \mathbb{E}_{s' \sim P_{s^{(i)}a}}[V(s')]$ (the right-hand side of the value iteration update above). Then, it will apply a supervised learning algorithm to try to get $V(s^{(i)})$ close to $R(s^{(i)}) + \gamma \max_a \mathbb{E}_{s' \sim P_{s^{(i)}a}}[V(s')]$ (or, in other words, to try to get $V(s^{(i)})$ close to $y^{(i)}$).
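
To make this description concrete, here is a minimal sketch of one sweep of the procedure, assuming a sampled set of states `S_samples`, a small discrete action list `actions_list`, a reward function `R(s)`, a one-step simulator `sample_next(s, a)` (for instance, the learned stochastic model above), and a feature map `phi(s)`; all of these names, and the Monte Carlo sample size `k`, are placeholders for this example rather than notation from the notes.

```python
import numpy as np

def fitted_value_iteration_sweep(S_samples, actions_list, R, sample_next,
                                 phi, theta, gamma=0.99, k=10):
    """One sweep: compute targets y^(i), then refit theta by linear regression."""
    m = len(S_samples)
    y = np.zeros(m)
    V = lambda s: theta @ phi(s)                 # current estimate V(s) = theta^T phi(s)

    for i, s in enumerate(S_samples):
        q = []
        for a in actions_list:
            # Monte Carlo estimate of E_{s' ~ P_sa}[V(s')] using k simulated next states.
            s_primes = [sample_next(s, a) for _ in range(k)]
            q.append(R(s) + gamma * np.mean([V(sp) for sp in s_primes]))
        y[i] = max(q)                            # y^(i) ~ R(s^(i)) + gamma * max_a E[V(s')]

    # Supervised learning step: fit theta so that theta^T phi(s^(i)) is close to y^(i).
    Phi = np.array([phi(s) for s in S_samples])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta
```

Repeating this sweep, with the updated $\theta$ fed back in each time, is what the full fitted value iteration algorithm does.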
