
Both value iteration and policy iteration are standard algorithms for solving MDPs, and there isn't currently universal agreement over which algorithm is better. For small MDPs, policy iteration is often very fast and converges in very few iterations. However, for MDPs with large state spaces, solving for $V^\pi$ explicitly would involve solving a large system of linear equations, and could be difficult. In these problems, value iteration may be preferred. For this reason, in practice value iteration seems to be used more often than policy iteration.
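To make this tradeoff concrete, the sketch below (in Python with NumPy, which these notes do not themselves use) contrasts the two computations on a small MDP with known dynamics: exact policy evaluation by solving the linear system $V^\pi = R + \gamma P_\pi V^\pi$, versus repeated Bellman backups. The toy MDP, the array layout $P[a, s, s']$, and the function names are illustrative assumptions, not anything defined in the notes.

```python
import numpy as np

# Toy MDP with known dynamics (illustrative only).
# P[a, s, s'] = P_sa(s'), R[s] = expected immediate reward in state s.
n_states, n_actions, gamma = 4, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # each row sums to 1
R = rng.standard_normal(n_states)

def policy_evaluation_exact(pi):
    """Solve V^pi = R + gamma * P_pi V^pi as an |S| x |S| linear system."""
    P_pi = P[pi, np.arange(n_states)]            # transition matrix under policy pi
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R)

def value_iteration(n_backups=1000):
    """Repeatedly apply V(s) := R(s) + gamma * max_a sum_{s'} P_sa(s') V(s')."""
    V = np.zeros(n_states)
    for _ in range(n_backups):
        V = R + gamma * (P @ V).max(axis=0)
    return V

pi = np.zeros(n_states, dtype=int)               # an arbitrary fixed policy
print(policy_evaluation_exact(pi))
print(value_iteration())
```

The exact solve costs roughly $O(|S|^3)$, which is what becomes painful for large state spaces, while each value iteration backup costs only about $O(|S|^2 |A|)$.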

Learning a model for an MDP

So far, we have discussed MDPs and algorithms for MDPs assuming that the state transition probabilities and rewards are known. In many realistic problems, we are not given state transition probabilities and rewards explicitly, but must instead estimate them from data. (Usually, $S$, $A$ and $\gamma$ are known.)

For example, suppose that, for the inverted pendulum problem (see problem set 4), we had a number of trials in the MDP that proceeded as follows:

$$s_0^{(1)} \xrightarrow{a_0^{(1)}} s_1^{(1)} \xrightarrow{a_1^{(1)}} s_2^{(1)} \xrightarrow{a_2^{(1)}} s_3^{(1)} \xrightarrow{a_3^{(1)}} \cdots$$
$$s_0^{(2)} \xrightarrow{a_0^{(2)}} s_1^{(2)} \xrightarrow{a_1^{(2)}} s_2^{(2)} \xrightarrow{a_2^{(2)}} s_3^{(2)} \xrightarrow{a_3^{(2)}} \cdots$$
$$\vdots$$

Here, $s_i^{(j)}$ is the state we were at at time $i$ of trial $j$, and $a_i^{(j)}$ is the corresponding action that was taken from that state. In practice, each of the trials above might be run until the MDP terminates (such as if the pole falls over in the inverted pendulum problem), or it might be run for some large but finite number of timesteps.

Given this “experience” in the MDP consisting of a number of trials, we can then easily derive the maximum likelihood estimates for the state transition probabilities:

$$P_{sa}(s') = \frac{\#\text{ times we took action } a \text{ in state } s \text{ and got to } s'}{\#\text{ times we took action } a \text{ in state } s}$$

Or, if the ratio above is “0/0” (corresponding to the case of never having taken action $a$ in state $s$ before), then we might simply estimate $P_{sa}(s')$ to be $1/|S|$. (I.e., estimate $P_{sa}$ to be the uniform distribution over all states.)

Note that, if we gain more experience (observe more trials) in the MDP, there is an efficient way to update our estimated state transition probabilities using the new experience. Specifically, if we keep around the counts for both the numerator and denominator terms of the estimate above, then as we observe more trials, we can simply keep accumulating those counts. Computing the ratio of these counts then gives our estimate of $P_{sa}$.

Using a similar procedure, if $R$ is unknown, we can also pick our estimate of the expected immediate reward $R(s)$ in state $s$ to be the average reward observed in state $s$.
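To illustrate, here is a minimal Python sketch of these maximum likelihood estimates, assuming (purely for illustration, since the notes do not fix a data format) that each trial is stored as a list of $(s, a, r, s')$ steps with integer-coded states and actions. It keeps the numerator and denominator counts, falls back to the uniform distribution for never-visited (state, action) pairs, and averages the observed rewards per state.

```python
import numpy as np

def estimate_model(trials, n_states, n_actions):
    """Maximum likelihood estimates of P_sa and R(s) from a list of trials.

    Each trial is assumed to be a list of (s, a, r, s_next) steps; this
    format is an illustrative assumption, not something fixed by the notes.
    """
    # Counts for the numerator/denominator of P_sa, and running sums for R(s).
    trans_counts = np.zeros((n_actions, n_states, n_states))
    reward_sums = np.zeros(n_states)
    state_visits = np.zeros(n_states)

    for trial in trials:
        for s, a, r, s_next in trial:
            trans_counts[a, s, s_next] += 1   # "# times we took a in s and got to s'"
            reward_sums[s] += r
            state_visits[s] += 1

    # P_sa(s') = count(s, a, s') / count(s, a); uniform over S when count(s, a) = 0.
    sa_counts = trans_counts.sum(axis=2, keepdims=True)
    P = np.where(sa_counts > 0,
                 trans_counts / np.maximum(sa_counts, 1),
                 1.0 / n_states)

    # R(s) = average reward observed in state s (left at 0 for unvisited states).
    R = np.divide(reward_sums, state_visits,
                  out=np.zeros(n_states), where=state_visits > 0)
    return P, R
```

Because only raw counts and sums are kept, new trials can simply be folded into the same counts as they arrive, which is exactly the efficient incremental update described above.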

Having learned a model for the MDP, we can then use either value iteration or policy iteration to solve the MDP using the estimated transition probabilities and rewards. For example, putting together model learning and value iteration, here is one possible algorithm for learning in an MDP with unknown state transition probabilities (a code sketch of this loop follows the algorithm):

  1. Initialize $\pi$ randomly.
  2. Repeat {
    1. Execute $\pi$ in the MDP for some number of trials.
    2. Using the accumulated experience in the MDP, update our estimates for $P_{sa}$ (and $R$, if applicable).
    3. Apply value iteration with the estimated state transition probabilities and rewards to get a new estimated value function $V$.
    4. Update $\pi$ to be the greedy policy with respect to $V$.
  }
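Below is one possible Python sketch of this loop. To keep it runnable it simulates a small toy MDP whose true dynamics are hidden from the learner; the simulator, the fixed trial horizon, the fixed number of Bellman backups, and all names here are illustrative assumptions rather than anything specified in the notes (the actual example in the notes is the inverted pendulum).

```python
import numpy as np

# --- A tiny simulated MDP to execute the policy in (purely illustrative). ---
n_states, n_actions, gamma = 5, 2, 0.95
rng = np.random.default_rng(0)
true_P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # hidden from learner
true_R = rng.standard_normal(n_states)                                 # hidden from learner

def run_trial(pi, horizon=50):
    """Execute policy pi in the (unknown-to-the-learner) MDP for `horizon` steps."""
    s = rng.integers(n_states)
    trial = []
    for _ in range(horizon):
        a = pi[s]
        s_next = rng.choice(n_states, p=true_P[a, s])
        trial.append((s, a, true_R[s], s_next))
        s = s_next
    return trial

# --- Model learning + value iteration loop from the algorithm above. ---
trans_counts = np.zeros((n_actions, n_states, n_states))
reward_sums = np.zeros(n_states)
state_visits = np.zeros(n_states)
pi = rng.integers(n_actions, size=n_states)             # 1. initialize pi randomly

for _ in range(20):                                     # 2. repeat
    for trial in [run_trial(pi) for _ in range(10)]:    # (a) execute pi for some trials
        for s, a, r, s_next in trial:
            trans_counts[a, s, s_next] += 1             # (b) update counts for P_sa and R
            reward_sums[s] += r
            state_visits[s] += 1
    sa = trans_counts.sum(axis=2, keepdims=True)
    P_hat = np.where(sa > 0, trans_counts / np.maximum(sa, 1), 1.0 / n_states)
    R_hat = np.divide(reward_sums, state_visits,
                      out=np.zeros(n_states), where=state_visits > 0)
    V = np.zeros(n_states)
    for _ in range(200):                                # (c) value iteration on estimated model
        V = R_hat + gamma * (P_hat @ V).max(axis=0)
    pi = (P_hat @ V).argmax(axis=0)                     # (d) greedy policy w.r.t. V

print("learned policy:", pi)
```

In a real problem the calls to `run_trial` would be replaced by actually executing $\pi$ in the environment (e.g., on the inverted pendulum), and the value iteration inner loop would typically be run to (near-)convergence rather than for a fixed number of backups.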
