
Student:

Is there a reward [inaudible] function in this setup?

Instructor (Andrew Ng): Oh, yes. An MDP comprises S, A, the state transition probabilities, γ, and R, and so for continuous state spaces, S would be something like R^4 for the inverted pendulum. A is the set of actions, perhaps discretized; the state transition probabilities are specified by the model or the simulator; γ is just a real number like 0.99; and the reward function is usually a function that's given to you.

And so the reward function is some function of your 4-dimensional state. For example, a simple reward function you might choose is one that is -1 if the pole has fallen, that is, -1 if the inverted pendulum falls over, which you could define as its angle being greater than 30° or so, and zero otherwise. So that would be an example of a reward function you could choose for the inverted pendulum, but yes, we assume the reward function is given to you, so that you can compute R(s) for any state. Are there other questions?
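As a concrete illustration of that kind of reward function, here is a minimal Python sketch (not from the lecture); the [x, x_dot, theta, theta_dot] ordering of the 4-dimensional state and the 30° threshold are assumptions made just for the example.

```python
import numpy as np

def reward(state, angle_threshold=np.deg2rad(30)):
    """Reward of -1 if the pole has fallen past the angle threshold, 0 otherwise.
    The state layout [x, x_dot, theta, theta_dot] is an assumed convention."""
    theta = state[2]  # pole angle in radians (assumed position in the state vector)
    return -1.0 if abs(theta) > angle_threshold else 0.0

# Example: a state with the pole tipped 45 degrees earns reward -1
print(reward(np.array([0.0, 0.0, np.deg2rad(45), 0.0])))  # -1.0
print(reward(np.array([0.0, 0.0, np.deg2rad(5), 0.0])))   # 0.0
```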

Actually, let me try asking a question. Everything I did here assumed that we have a stochastic simulator. It turns out I can simplify this algorithm if I have a deterministic simulator, where a deterministic simulator means that, given a state and an action, my next state is always exactly determined. So let me ask you, if I have a deterministic simulator, how would I change this algorithm? How would I simplify this algorithm?

Student: Lower the number of samples that you're drawing [inaudible].

Instructor (Andrew Ng): Right, so Justin's got it right. If I have a deterministic simulator, all my samples of the next state would be exactly the same, and so if I have a deterministic simulator, I can set K equal to 1; I don't need to draw K different samples. I really only need one sample if I have a deterministic simulator, so you can simplify this by setting K = 1 if you have a deterministic simulator. Yeah?
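A rough sketch of the sampling step being discussed, assuming hypothetical R(s), simulator(s, a), and V(s) callables supplied by the caller (none of these names come from the lecture): with a stochastic simulator we average V over K sampled next states, and with a deterministic simulator setting K = 1 gives the same estimate from a single sample.

```python
import numpy as np

def estimate_q(s, a, R, simulator, V, gamma=0.99, K=10):
    """Monte Carlo estimate of R(s) + gamma * E[V(s')] for taking action a in state s.

    R, simulator, and V are assumed to be provided by the caller.
    With a deterministic simulator every draw of s' is identical,
    so K = 1 suffices."""
    next_values = [V(simulator(s, a)) for _ in range(K)]
    return R(s) + gamma * float(np.mean(next_values))
```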

Student: I guess I'm really confused about the... yeah, we sort of turned this [inaudible] into something that looks like linear regression or something, you know, theta transpose times something that we're used to, but I guess I'm a little... I don't really know what question to ask, but when we did this before we had discrete states and everything, and we were concerned with finding this optimal policy, and it doesn't look like we've said the word policy in a while, so it's kind of difficult.

Instructor (Andrew Ng): Okay, yeah, so [inaudible] maps back to the policy, but maybe I should just say a couple of words, so let me actually try to get at some of what you're saying. Our strategy for finding the optimal policy has been to find some way to find V*, the optimal value function, and then use that to compute π*, or some approximation of π*. So far, everything I've been doing has been focused on how to find V*. I just want to say one more word: it actually turns out that, as with linear regression, it's often not terribly difficult to choose some reasonable set of features.
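To make the connection back to a policy concrete, here is a hedged one-step-lookahead sketch of how an approximate π* can be read off from an approximation of V*; the action set and the R, simulator, and V callables are hypothetical placeholders, not anything defined in the lecture.

```python
import numpy as np

def greedy_policy(s, actions, R, simulator, V, gamma=0.99, K=10):
    """Approximate pi*(s): pick the action whose sampled one-step
    lookahead value R(s) + gamma * E[V(s')] is largest."""
    def lookahead(a):
        next_values = [V(simulator(s, a)) for _ in range(K)]
        return R(s) + gamma * float(np.mean(next_values))
    return max(actions, key=lookahead)
```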
