We first summarize certain essentials in the analysis of homogeneous Markov chains with costs and rewards associated with states, or with transitions between states. Then we consider three cases: a. Gain associated with a state; b. One-step transition gains; and c. Gains associated with a demand under certain reasonable conditions. Matlab implementations of the results of analysis provide machine solutions to a variety of problems.

In order to provide the background for Matlab procedures for Markov decision processes, we first summarize certain essentials in the analysis of homogeneous Markov chains with costs and rewards associated with states, or with transitions between states. References are to Pfeiffer: PROBABILITY FOR APPLICATIONS.

  1. Some cost and reward patterns
    Consider a finite, ergodic (i.e., recurrent, positive, aperiodic, irreducible) homogeneous Markov chain $X_N$, with state space $E = \{1, 2, \ldots, M\}$. To facilitate use of matrix computation programs, we number states from one, except in certain examples in which state zero has a natural meaning. Associated with this chain is a cost or reward structure belonging to one of the three general classes described below. The one-step transition probability matrix is $P = [p(i,j)]$, where $p(i,j) = P(X_{n+1} = j \mid X_n = i)$. The distribution for $X_n$ is represented by the row matrix $\pi(n) = [p_1(n), p_2(n), \ldots, p_M(n)]$, where $p_i(n) = P(X_n = i)$. The long-run distribution is represented by the row matrix $\pi = [\pi_1, \pi_2, \ldots, \pi_M]$.
    1. Type 1. Gain associated with a state. A reward or gain in the amount $g(i)$ is realized in the next period if the current state is $i$. The gain sequence $\{G_n : 1 \le n\}$ of random variables $G_{n+1} = g(X_n)$ is the sequence of gains realized as the chain $X_N$ evolves. We represent the gain function $g$ by the column matrix $g = [g(1)\ g(2)\ \cdots\ g(M)]^T$.
    2. Type 2. One-step transition gains. A reward or gain in the amount $g(i,j) = g_{ij}$ is realized in period $n+1$ if the system goes from state $i$ in period $n$ to state $j$ in period $n+1$. The corresponding one-step transition probability is $p(i,j) = p_{ij}$. The gain matrix is $g = [g(i,j)]$. The gain sequence $\{G_n : 1 \le n\}$ of random variables $G_{n+1} = g(X_n, X_{n+1})$ is the sequence of gains experienced as the chain $X_N$ evolves.
    3. Type 3. Gains associated with a demand. In this case, the gain random variables are of the form
      $$G_{n+1} = g(X_n, D_{n+1})$$
      If $n$ represents the present, the random vector $U_n = (X_0, X_1, \ldots, X_n)$ represents the "past" at $n$ of the chain $X_N$. We suppose $\{D_n : 1 \le n\}$ is iid and each pair $\{D_{n+1}, U_n\}$ is independent. Again, the gain sequence $\{G_n : 1 \le n\}$ represents the gains realized as the process evolves. Standard results on Markov chains show that in each case the sequence $G_N = \{G_n : 1 \le n\}$ is Markov.
    A recurrence relation. We are interested in the expected gains in future stages, given the present state of the process. For any $n \ge 0$ and any $m \ge 1$, the gain $G_n(m)$ in the $m$ periods following period $n$ is given by
    $$G_n(m) = G_{n+1} + G_{n+2} + \cdots + G_{n+m}$$
    If the process is in state $i$, the expected gain in the next period is
    $$q_i = v_i(1) = E[G_{n+1} \mid X_n = i]$$
    and the expected gain in the next $m$ periods is
    $$v_i(m) = E[G_n(m) \mid X_n = i]$$
    We utilize a recursion relation that allows us to begin with the transition matrix $P$ and the next-period expected gain matrix $q = [q_1\ q_2\ \cdots\ q_M]^T$ and calculate the $v_i(m)$ for any $m > 1$. Although the analysis is somewhat different for the various gain structures, the result exhibits a common pattern. In each case
    $$v_i(m) = E[G_n(m) \mid X_n = i] = E[G_{n+1} \mid X_n = i] + \sum_{k=1}^{m-1} E[G_{n+k+1} \mid X_n = i]$$
    $$= q_i + \sum_{k=1}^{m-1} \sum_{j \in E} E[G_{n+k+1} \mid X_{n+1} = j]\, p(i,j)$$
    $$= q_i + \sum_{j \in E} E\Big[\sum_{k=1}^{m-1} G_{n+k} \,\Big|\, X_n = j\Big]\, p(i,j)$$
    We thus have the fundamental recursion relation
    $$(*)\qquad v_i(m) = q_i + \sum_{j \in E} v_j(m-1)\, p(i,j)$$
    The recursion relation $(*)$ shows that the transition matrix $P = [p(i,j)]$ and the vector of next-period expected gains, which we represent by the column matrix $q = [q_1, q_2, \ldots, q_M]^T$, determine the $v_i(m)$. The difference between the cases is the manner in which the $q_i$ are obtained.
    • Type 1: $q_i = E[g(X_n) \mid X_n = i] = g(i)$
    • Type 2: $q_i = E[g(X_n, X_{n+1}) \mid X_n = i] = E[g(i, X_{n+1}) \mid X_n = i] = \sum_{j \in E} g(i,j)\, p(i,j)$
    • Type 3: $q_i = E[g(X_n, D_{n+1}) \mid X_n = i] = E[g(i, D_{n+1})] = \sum_{k} g(i,k)\, P(D = k)$
    • For computational purposes, we utilize these relations in matrix form. If
      $$v(n) = [v_1(n)\ v_2(n)\ \cdots\ v_M(n)]^T \quad \text{and} \quad q = [q_1\ q_2\ \cdots\ q_M]^T$$
      then
      $$(*)\qquad v(m+1) = q + P\, v(m) \quad \text{for all } m \ge 0, \text{ with } v(0) = 0$$
    Examination of the expressions for $q_i$ above shows that the next-period expected gain matrix $q$ takes the following forms. In each case, $g$ is the gain matrix; in case c, $p_D$ is the column matrix whose elements are $P(D = k)$. (A Matlab sketch of these calculations follows the list.)
    • Type 1: $q = g$
    • Type 2: $q =$ the diagonal of $P g^T$
    • Type 3: $q = g\, p_D$
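    As a concrete illustration of $(*)$ and of the three forms of $q$, the following Matlab fragment is a minimal sketch. The data are hypothetical (a type 2 gain structure is assumed), and the variable names are ours; this is not part of the procedures developed later.

      P = [0.2 0.3 0.5; 0.4 0.1 0.5; 0.6 0.3 0.1];    % hypothetical transition matrix
      g = [1 3 4; 2 2 3; 2 3 3];                      % hypothetical type 2 gain matrix g(i,j)
      q = diag(P*g');                                 % type 2: q_i = sum_j g(i,j) p(i,j)
      % For type 1, q = g (a column of state gains); for type 3, q = g*pD,
      % where pD is the column of demand probabilities P(D = k).
      m = 10;                                         % number of future periods
      v = zeros(size(P,1),1);                         % v(0) = 0
      for k = 1:m
          v = q + P*v;                                % recursion (*): v(k) = q + P*v(k-1)
      end
      disp(v)                                         % expected gain in the next m periods, by state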
  2. Some long-run averages
    Consider the average expected gain for $m$ periods
    $$E\Big[\frac{1}{m} G_n(m)\Big] = \frac{1}{m}\sum_{k=1}^{m} E[G_{n+k}] = \frac{1}{m}\sum_{k=1}^{m} E\big\{E[G_{n+k} \mid X_{n-1}]\big\}$$
    Use of the Markov property and the fact that for an ergodic chain
    $$\frac{1}{m}\sum_{k=1}^{m} p^{k}(i,j) \to \pi_j \quad \text{as } m \to \infty$$
    (where $p^{k}(i,j)$ is the $k$-step transition probability) yields the result that
    $$\lim_{m\to\infty} E\Big[\frac{1}{m} G_n(m)\Big] = \sum_{i} P(X_{n-1} = i) \sum_{j} q_j \pi_j = \sum_{j} q_j \pi_j = g$$
    and for any state $i$,
    $$\lim_{m\to\infty} E\Big[\frac{1}{m} G_n(m) \,\Big|\, X_n = i\Big] = \lim_{m\to\infty} \frac{1}{m} v_i(m) = g$$
    The expression for $g$ may be put into matrix form. If the long-run probability distribution is represented by the row matrix $\pi = [\pi_1\ \pi_2\ \cdots\ \pi_M]$ and $q = [q_1\ q_2\ \cdots\ q_M]^T$, then
    $$g = \pi q$$
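    For the same hypothetical data used above, the long-run distribution may be approximated by any row of a high power of $P$, and the long-run gain per period by the product $\pi q$. A sketch:

      P = [0.2 0.3 0.5; 0.4 0.1 0.5; 0.6 0.3 0.1];    % hypothetical transition matrix
      q = diag(P*[1 3 4; 2 2 3; 2 3 3]');             % hypothetical next-period gains (type 2)
      P0 = P^16;                                      % rows of a high power of P approximate pi
      ppi = P0(1,:);                                  % long-run distribution (row matrix)
      g = ppi*q                                       % long-run expected gain per period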
  3. Discounting and potentials
    Suppose in a given time interval the value of money increases by a fraction $a$. This may represent the potential earning if the money were invested. One dollar now is worth $1 + a$ dollars at the end of one period. It is worth $(1 + a)^n$ dollars after $n$ periods. Set $\alpha = 1/(1 + a)$, so that $0 < \alpha < 1$. It takes $\alpha$ dollars to be worth one dollar after one period. It takes $\alpha^n$ dollars at present to be worth one dollar $n$ periods later. Thus the present worth or discounted value of one dollar $n$ periods in the future is $\alpha^n$ dollars. This is the amount one would need to invest presently, at interest rate $a$ per unit of time, to have one dollar $n$ periods later. If $f$ is any function defined on the state space $E$, we designate by $f$ the column matrix $[f(1)\ f(2)\ \cdots\ f(M)]^T$. We make an exception to this convention in the case of the distributions of the state probabilities $\pi(n) = [p_1(n)\ p_2(n)\ \cdots\ p_M(n)]$, their generating function, and the long-run probabilities $\pi = [\pi_1\ \pi_2\ \cdots\ \pi_M]$, which we represent as row matrices. It should be clear that much of the following development extends immediately to an infinite state space $E = \mathbb{N} = \{0, 1, 2, \ldots\}$. We assume one of the gain structures introduced in Sec 1 and the corresponding gain sequence $\{G_n : 1 \le n\}$. The value of the random variable $G_{n+1}$ is the gain or reward realized during period $n + 1$. We let each transition time be at the end of one period of time (say a month or a quarter). If $n$ corresponds to the present, then $G_{n+k}$ is the gain in period $n + k$. If we do not discount for the period immediately after the present, then $\alpha^{k-1} G_{n+k}$ is the present value of that gain. Hence
    $$\sum_{k=1}^{\infty} \alpha^{k-1} G_{n+k} \quad \text{is the total discounted future gain}$$
    Definition. The $\alpha$-potential of the gain sequence $\{G_n : 1 \le n\}$ is the function $\varphi$ defined by
    $$\varphi(i) = E\Big[\sum_{n=0}^{\infty} \alpha^{n} G_{n+1} \,\Big|\, X_0 = i\Big] \qquad i \in E$$
    Remark. $\varphi(i)$ is the expected total discounted gain, given the system starts in state $i$. We next define a concept which is related to the $\alpha$-potential in a useful way.
    Definition. The $\alpha$-potential matrix $R_\alpha$ for the process $X_N$ is the matrix
    $$R_\alpha = \sum_{n=0}^{\infty} \alpha^{n} P^{n} \qquad \text{with } P^0 = I$$

    3.1

    Let $X_N$ be an ergodic, homogeneous Markov chain with state space $E$ and gain sequence $\{G_n : 1 \le n\}$. Let $q = [q_1\ q_2\ \cdots\ q_M]^T$, where $q_i = E[G_{n+1} \mid X_n = i]$ for $i \in E$. For $\alpha \in (0, 1)$, let $\varphi$ be the $\alpha$-potential for the gain sequence. That is,

    $$\varphi(i) = E\Big[\sum_{n=0}^{\infty} \alpha^{n} G_{n+1} \,\Big|\, X_0 = i\Big] \qquad i \in E$$
    Then, if $R_\alpha$ is the $\alpha$-potential matrix for $X_N$, we have
    $$\varphi = R_\alpha q$$

    3.2

    Consider an ergodic, homogeneous Markov chain $X_N$ with gain sequence $\{G_n : 1 \le n\}$ and next-period expected gain matrix $q$. For $\alpha \in (0, 1)$, the $\alpha$-potential $\varphi$ and the $\alpha$-potential matrix $R_\alpha$ satisfy

    $$[I - \alpha P]\, R_\alpha = I \quad \text{and} \quad [I - \alpha P]\,\varphi = q$$
    If the state space $E$ is finite, then
    $$R_\alpha = [I - \alpha P]^{-1} \quad \text{and} \quad \varphi = [I - \alpha P]^{-1} q = R_\alpha q$$

    A numerical example

    Suppose the transition matrix $P$ and the next-period expected gain matrix $q$ are

    $$P = \begin{bmatrix} 0.2 & 0.3 & 0.5 \\ 0.4 & 0.1 & 0.5 \\ 0.6 & 0.3 & 0.1 \end{bmatrix} \qquad \text{and} \qquad q = \begin{bmatrix} 2 \\ 5 \\ 3 \end{bmatrix}$$
    For $\alpha = 0.9$, we have
    $$R_{0.9} = (I - 0.9P)^{-1} = \begin{bmatrix} 4.4030 & 2.2881 & 3.3088 \\ 3.5556 & 3.1356 & 3.3088 \\ 3.6677 & 2.2881 & 4.0441 \end{bmatrix} \qquad \text{and} \qquad \varphi = R_{0.9}\, q = \begin{bmatrix} 30.17 \\ 32.72 \\ 30.91 \end{bmatrix}$$
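    This computation is easily checked in Matlab. The following sketch reproduces the example; the variable names are ours.

      P = [0.2 0.3 0.5; 0.4 0.1 0.5; 0.6 0.3 0.1];
      q = [2; 5; 3];
      alpha = 0.9;
      R = inv(eye(3) - alpha*P)        % alpha-potential matrix R_alpha
      phi = R*q                        % expected total discounted gain, by starting state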

    The next example is of class c, but the demand random variable is Poisson, which does not have finite range. The simple pattern for calculating the $q_i$ must be modified.

    Costs associated with inventory under an $(m, M)$ order policy

    If $k$ units are ordered, the cost in dollars is

    $$c(k) = \begin{cases} 0 & k = 0 \\ 10 + 25k & 0 < k \le M \end{cases}$$

    For each unit of unsatisfied demand, there is a penalty of 50 dollars. We use the $(m, M)$ inventory policy described in Ex 23.1.3 of PA: if the end-of-period stock is less than $m$, order up to $M$; otherwise, do not order. $X_0 = M$, and $X_n$ is the stock at the end of period $n$, before restocking. Demand in period $n$ is $D_n$. The cost (restocking plus penalty) associated with period $n + 1$ is

    $$g(X_n, D_{n+1}) = \begin{cases} 10 + 25(M - X_n) + 50\max\{D_{n+1} - M,\, 0\} & 0 \le X_n < m \\ 50\max\{D_{n+1} - X_n,\, 0\} & m \le X_n \le M \end{cases}$$
    For $m = 1$, $M = 3$, we have
    $$g(0, D_{n+1}) = 85 + 50\, I_{[3,\infty)}(D_{n+1})(D_{n+1} - 3)$$
    $$g(i, D_{n+1}) = 50\, I_{[3,\infty)}(D_{n+1})(D_{n+1} - i) \qquad 1 \le i \le 3$$
    We take $m = 1$, $M = 3$ and assume $D$ is Poisson (1). From Ex 23.1.3 in PA, we obtain

    $$P = \begin{bmatrix} 0.0803 & 0.1839 & 0.3679 & 0.3679 \\ 0.6321 & 0.3679 & 0 & 0 \\ 0.2642 & 0.3679 & 0.3679 & 0 \\ 0.0803 & 0.1839 & 0.3679 & 0.3679 \end{bmatrix}$$

    The largest eigenvalue with $|\lambda| < 1$ satisfies $|\lambda| \approx 0.26$, so $n \ge 10$ should be sufficient for convergence. We use $n = 16$.

    Taking any row of $P^{16}$, we approximate the long-run distribution by

    $$\pi = [0.2858\ \ 0.2847\ \ 0.2632\ \ 0.1663]$$

    We thus have

    $$q_0 = 85 + 50\, E\big[I_{[3,\infty)}(D_{n+1})(D_{n+1} - 3)\big] = 85 + 50\sum_{k=4}^{\infty}(k-3)p_k \quad \text{(the term for } k = 3 \text{ is zero)}$$

    For the Poisson distribution, $\sum_{k=n}^{\infty} k\, p_k = \lambda \sum_{k=n-1}^{\infty} p_k$.
    Hence
    $$q_0 = 85 + 50\Big[\sum_{k=3}^{\infty} p_k - 3\sum_{k=4}^{\infty} p_k\Big] \approx 85 + 50[0.0803 - 3 \times 0.0190] = 86.1668$$
    $$q_1 = 50\sum_{k=3}^{\infty}(k-1)p_k = 50\Big[\sum_{k=2}^{\infty} p_k - \sum_{k=3}^{\infty} p_k\Big] = 50\, p_2 = 9.1970$$
    $$q_2 = 50\sum_{k=3}^{\infty}(k-2)p_k = 50[0.2642 - 2 \times 0.0803] = 5.1819$$
    $$q_3 = q_0 - 85 = 1.1668$$
    so that

    $$q = \begin{bmatrix} 86.1668 \\ 9.1970 \\ 5.1819 \\ 1.1668 \end{bmatrix}$$
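    The $q_i$ may also be checked numerically by truncating the Poisson(1) distribution at a point where the remaining probability mass is negligible. The following sketch is our own check of the values above; it is not part of the procedures in PA.

      k  = (0:30)';                        % truncation at 30 is far into the Poisson(1) tail
      pk = exp(-1)./factorial(k);          % Poisson(1) probabilities p_k
      q0 = 85 + 50*sum(max(k-3,0).*pk);    % state 0: restock to M = 3, then penalty
      q1 = 50*sum((k>=3).*(k-1).*pk);      % states 1, 2, 3, per the formulas above
      q2 = 50*sum((k>=3).*(k-2).*pk);
      q3 = 50*sum((k>=3).*(k-3).*pk);
      q  = [q0; q1; q2; q3]                % approximately [86.1668; 9.1970; 5.1819; 1.1668]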

    Then, for $\alpha = 0.8$ we have

    $$R_{0.8} = (I - 0.8P)^{-1} = \begin{bmatrix} 1.9608 & 1.0626 & 1.1589 & 0.8178 \\ 1.4051 & 2.1785 & 0.8304 & 0.5860 \\ 1.1733 & 1.2269 & 2.1105 & 0.4893 \\ 0.9608 & 1.0626 & 1.1589 & 1.8178 \end{bmatrix} \qquad \text{and} \qquad \varphi = R_{0.8}\, q = \begin{bmatrix} 185.68 \\ 146.09 \\ 123.89 \\ 100.68 \end{bmatrix}$$

    Recall that we are dealing with expected discounted costs. Since we usually consider starting with a full stock, the amount $\varphi(M) = \varphi(3) = 100.68$ is the quantity of interest. Note that this is smaller than the other values $\varphi(j)$, which correspond to smaller beginning stock levels. We expect greater restocking costs in such cases.

  4. Evolution of chains with costs and rewards
    1. No discounting. A generating function analysis of
      $$(*)\qquad v(m+1) = q + P\, v(m) \quad \text{for all } m \ge 0, \text{ with } v(0) = 0$$
      shows that
      $$v(n) = n g \mathbf{1} + v + \text{transients}, \qquad \text{where } v = B_0 q$$
      Here $g = \pi q$, $\mathbf{1}$ is a column matrix of ones, $P_0 = \lim_{n\to\infty} P^n$, and $B_0$ is a matrix which the analysis shows we may approximate by
      $$B_0 = B(1) \approx I + P + P^2 + \cdots + P^n - (n+1)P_0$$
      The largest $|\lambda_i| < 1$ gives an indication of how many terms are needed.

      The inventory problem (continued)

      We consider again the inventory problem. We have

      $$P = \begin{bmatrix} 0.0803 & 0.1839 & 0.3679 & 0.3679 \\ 0.6321 & 0.3679 & 0 & 0 \\ 0.2642 & 0.3679 & 0.3679 & 0 \\ 0.0803 & 0.1839 & 0.3679 & 0.3679 \end{bmatrix} \qquad \text{and} \qquad q = \begin{bmatrix} 86.1668 \\ 9.1970 \\ 5.1819 \\ 1.1668 \end{bmatrix}$$

      The eigenvalues are $1$, $0.0920 + 0.2434i$, $0.0920 - 0.2434i$, and $0$. Thus the decay of the transients is controlled by $|\lambda^*| = (0.0920^2 + 0.2434^2)^{1/2} = 0.2602$. Since $|\lambda^*|^4 \approx 0.0046$, the transients die out rather rapidly. We obtain the following approximations

      $$P_0 \approx P^{16} = \begin{bmatrix} 0.2858 & 0.2847 & 0.2632 & 0.1663 \\ 0.2858 & 0.2847 & 0.2632 & 0.1663 \\ 0.2858 & 0.2847 & 0.2632 & 0.1663 \\ 0.2858 & 0.2847 & 0.2632 & 0.1663 \end{bmatrix} \qquad \text{so that} \qquad \pi \approx [0.2858\ \ 0.2847\ \ 0.2632\ \ 0.1663]$$

      The approximate value of $B_0$ is found to be

      $$B_0 \approx I + P + P^2 + P^3 + P^4 - 5P_0 = \begin{bmatrix} 0.4834 & -0.3766 & -0.1242 & 0.0174 \\ 0.0299 & 0.7537 & -0.5404 & -0.2432 \\ -0.2307 & -0.1684 & 0.7980 & -0.3989 \\ -0.5166 & -0.3766 & -0.1242 & 1.0174 \end{bmatrix}$$

      The value $g = \pi q \approx 28.80$ is found in the earlier treatment. From the values above, we have

      $$v = B_0 q \approx \begin{bmatrix} 37.6 \\ 6.4 \\ -17.8 \\ -47.4 \end{bmatrix} \qquad \text{so that} \qquad v(n) \approx \begin{bmatrix} 28.80n + 37.6 \\ 28.80n + 6.4 \\ 28.80n - 17.8 \\ 28.80n - 47.4 \end{bmatrix} + \text{transients}$$

      The average gain per period is clearly $g \approx 28.80$. This soon dominates the constant terms represented by the entries in $v$.
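      These approximations are easily verified in Matlab. The following is a sketch using the matrices displayed above (entered directly, to four decimal places).

        P = [0.0803 0.1839 0.3679 0.3679
             0.6321 0.3679 0      0
             0.2642 0.3679 0.3679 0
             0.0803 0.1839 0.3679 0.3679];
        q = [86.1668; 9.1970; 5.1819; 1.1668];
        P0 = P^16;                           % approximates the limit of P^n
        ppi = P0(1,:);                       % long-run distribution
        g  = ppi*q                           % long-run expected cost per period, about 28.80
        B0 = eye(4) - 5*P0;                  % start of I + P + ... + P^4 - 5*P0
        for j = 1:4
            B0 = B0 + P^j;
        end
        v = B0*q                             % constant terms, about [37.6; 6.4; -17.8; -47.4]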

    2. With discounting. Let the discount factor be $\alpha \in (0,1)$. If $n$ represents the present period, then $G_{n+1}$ is the reward in the first succeeding period and $G_{n+k}$ is the reward in the $k$th succeeding period. If we do not discount the first period, then
      $$G_n(m) = G_{n+1} + \alpha G_{n+2} + \alpha^2 G_{n+3} + \cdots + \alpha^{m-1} G_{n+m} = G_{n+1} + \alpha G_{n+1}(m-1)$$
      Thus
      $$v_i(m) = E[G_n(m) \mid X_n = i] = q_i + \alpha \sum_{j=1}^{M} p(i,j)\, v_j(m-1)$$
      In matrix form, this is
      $$v(n) = q + \alpha P\, v(n-1)$$
      A generating function analysis shows that
      $$v_i(n) = v_i + \text{transients} \qquad 1 \le i \le M$$
      Hence, the steady-state part of
      $$v_i(m) = q_i + \alpha \sum_{j=1}^{M} p(i,j)\, v_j(m-1) \quad \text{is} \quad v_i = q_i + \alpha \sum_{j=1}^{M} p(i,j)\, v_j \qquad 1 \le i \le M$$
      In matrix form,
      $$v = q + \alpha P v, \quad \text{which yields} \quad v = [I - \alpha P]^{-1} q$$
      Since the $q_i = E[G_{n+1} \mid X_n = i]$ are known, we can solve for the $v_i$. Also, since $v_i(m)$ is the present value of the expected gain $m$ steps into the future, the $v_i$ represent the present value for an unlimited future, given that the process starts in state $i$.
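      For the inventory chain above with $\alpha = 0.8$, this is exactly the $\alpha$-potential computation of Sec 3. A direct sketch:

        P = [0.0803 0.1839 0.3679 0.3679
             0.6321 0.3679 0      0
             0.2642 0.3679 0.3679 0
             0.0803 0.1839 0.3679 0.3679];
        q = [86.1668; 9.1970; 5.1819; 1.1668];
        alpha = 0.8;
        v = (eye(4) - alpha*P)\q             % about [185.68; 146.09; 123.89; 100.68], as in Sec 3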
  5. Stationary decision policies
    We suppose throughout this section that $X_N$ is ergodic, with finite state space $E = \{1, 2, \ldots, M\}$. For each state $i \in E$, there is a class $A_i$ of possible actions which may be taken when the process is in state $i$. A decision policy is a sequence of decision functions $d_1, d_2, \ldots$ such that
    the action at stage $n$ is $d_n(X_0, X_1, \ldots, X_n)$,
    and the action selected is in $A_i$ whenever $X_n = i$. We consider a special class of decision policies.

    Stationary decision policy:
    $$d_n(X_0, X_1, \ldots, X_n) = d(X_n) \qquad \text{with } d \text{ invariant with } n$$
    The possible actions depend upon the current state. That is, $d(X_n) \in A_i$ whenever $X_n = i$. Analysis shows the Markov character of the process is preserved under stationary policies. Also, if $E$ is finite and every stationary policy produces an irreducible chain, then a policy that is optimal among stationary policies is optimal among all policies. We suppose each policy yields an ergodic chain. Since the transition probabilities from any state depend in part on the action taken when in that state, the long-run probabilities will depend upon the policy. In the no-discounting case, we want to determine a stationary policy that maximizes the gain $g = \pi q$ for the corresponding long-run probabilities. We say "a stationary policy," since we do not rule out the possibility that there may be more than one policy which maximizes the gain. In the discounting case, the goal is to maximize the steady-state part $v_i$ in the expressions
    $$v_i(n) = v_i + \text{transients} \qquad 1 \le i \le M$$
    In simple cases, with a small state space and a small number of possible actions in each state, it may be feasible to obtain the gain or present values for each permissible policy and then select the optimum by direct comparison, as in the sketch below. However, for most cases this is not an efficient approach. In the next two sections, we consider iterative procedures for step-by-step optimization which usually greatly reduce the computational burden.
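    The following is a minimal sketch of such a direct comparison for the no-discounting case. The data (two states, two actions per state) are hypothetical and serve only to illustrate the bookkeeping; each policy is assumed to yield an ergodic chain.

      % Pa{i}(k,:) is the transition row and qa{i}(k) the next-period expected
      % gain when action k is taken in state i (hypothetical data).
      Pa = {[0.6 0.4; 0.2 0.8], [0.5 0.5; 0.9 0.1]};
      qa = {[2; 3], [1; 4]};
      nA = cellfun(@(x) size(x,1), Pa);    % number of actions in each state
      M  = numel(Pa);
      bestg = -inf;  bestd = [];
      for idx = 0:prod(nA)-1               % enumerate every stationary policy
          r = idx;  d = zeros(1,M);
          for i = 1:M                      % decode a mixed-radix policy index
              d(i) = mod(r, nA(i)) + 1;
              r = floor(r/nA(i));
          end
          P = zeros(M);  q = zeros(M,1);
          for i = 1:M
              P(i,:) = Pa{i}(d(i),:);
              q(i)   = qa{i}(d(i));
          end
          P0 = P^64;                       % rows approximate the long-run distribution
          g  = P0(1,:)*q;                  % long-run expected gain per period
          if g > bestg, bestg = g; bestd = d; end
      end
      bestd, bestg                         % best policy found and its gain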
  6. Policy iteration method– no discounting
    We develop an iterative procedure for finding an optimal stationary policy. The goal is to determine a stationary policy that maximizes the long-run expected gain per period $g$. In the next section, we extend this procedure to the case where discounting is in effect. We assume that each policy yields an ergodic chain. Suppose we have established that under a given policy

    (1) $v_i(n) = q_i + \sum_j p(i,j)\, v_j(n-1)$, where $q_i = E[G_{n+1} \mid X_n = i]$

    In this case, the analysis in Sec 4 shows that for large $n$,

    (2) $v_i(n) \approx n g + v_i$, where $g = \sum_i \pi_i q_i$

    We note that $v_i$ and $g$ depend upon the entire policy, while $q_i$ and $p(i,j)$ depend on the individual actions $a_i$. Using (1) and (2), we obtain
    $$n g + v_i = q_i + \sum_j p(i,j)\big[(n-1)g + v_j\big] = q_i + (n-1)g + \sum_j p(i,j)\, v_j$$
    From this we conclude

    (3) $g + v_i = q_i + \sum_j p(i,j)\, v_j$ for all $i \in E$

    Suppose a policy $d$ has been used. That is, action $d(i)$ is taken whenever the process is in state $i$. To simplify writing, we drop the indication of the action and simply write $p(i,j)$ for $p_{ij}(d(i))$, etc. Associated with this policy there is a gain $g$. We should like to determine whether or not this is the maximum possible gain, and if it is not, to find a policy which does give the maximum gain. The following two-phase procedure has been found to be effective. It is essentially the procedure developed originally by Ronald Howard, who pioneered the treatment of these problems. The new feature is the method of carrying out the value-determination step, utilizing the approximation method noted above.
    1. Value-determination procedure. We calculate $g = \sum_i \pi_i q_i = \pi q$. As in Sec 4, we calculate
      $$v = B_0 q \approx \big[I + P + P^2 + \cdots + P^s - (s+1)P_0\big]\, q \qquad \text{where } P_0 = \lim_{n\to\infty} P^n$$
    2. Policy-improvement procedure. We suppose policy $d$ has been used through period $n$. Then, by (3) above,
      $$g + v_i = q_i + \sum_j p(i,j)\, v_j$$
      We seek to improve policy $d$ by selecting policy $d^*$, with $d^*(i) = a_{ik^*}$, to satisfy
      $$q_i^* + \sum_j p_{ij}^*\, v_j = \max\Big\{ q_i(a_{ik}) + \sum_j p_{ij}(a_{ik})\, v_j : a_{ik} \in A_i \Big\}, \qquad 1 \le i \le M$$
    Remarks
    • In the procedure for selecting $d^*$, we use the "old" $v_j$.
    • It has been established that $g^* \ge g$, with equality iff $g$ is optimal. Since there is only a finite number of policies, the procedure must converge to an optimum stationary policy in a finite number of steps.
    We implement this two-step procedure with Matlab. To demonstrate the procedure, we consider the following

    A numerical example

    A Markov decision process has three states: state space $E = \{1, 2, 3\}$.

    Actions: State 1: $A_1 = \{1, 2, 3\}$; State 2: $A_2 = \{1, 2\}$; State 3: $A_3 = \{1, 2\}$.

    Transition probabilities and rewards are given below; $p_{ij}(k)$ is the row of transition probabilities and $g_{ij}(k)$ the row of one-step gains when action $k$ is taken in state $i$:

    $p_{1j}(1)$: [1/3 1/3 1/3]    $g_{1j}(1)$: [1 3 4]
    $p_{1j}(2)$: [1/4 3/8 3/8]    $g_{1j}(2)$: [2 2 3]
    $p_{1j}(3)$: [1/3 1/3 1/3]    $g_{1j}(3)$: [2 2 3]
    $p_{2j}(1)$: [1/8 3/8 1/2]    $g_{2j}(1)$: [2 1 2]
    $p_{2j}(2)$: [1/2 1/4 1/4]    $g_{2j}(2)$: [1 4 4]
    $p_{3j}(1)$: [3/8 1/4 3/8]    $g_{3j}(1)$: [2 3 3]
    $p_{3j}(2)$: [1/8 1/4 5/8]    $g_{3j}(2)$: [3 2 2]

    Use the policy iteration method to determine the policy which gives the maximum gain $g$.

    A computational procedure utilizing Matlab

    We first put the data in an m-file. Since we have several cases, it is expedient to include the case number. This example belongs to type 2. Data in file md61.m:

      type = 2
      PA = [1/3 1/3 1/3; 1/4 3/8 3/8; 1/3 1/3 1/3; 0 0 0;
            1/8 3/8 1/2; 1/2 1/4 1/4; 0 0 0;
            3/8 1/4 3/8; 1/8 1/4 5/8]
      GA = [1 3 4; 2 2 3; 2 2 3; 0 0 0;   % Zero rows are separators
            2 1 2; 1 4 4; 0 0 0;
            2 3 3; 3 2 2]
      A = [1 2 3 0 1 2 0 1 2]

      md61
      type = 2
      PA =
          0.3333    0.3333    0.3333
          0.2500    0.3750    0.3750
          0.3333    0.3333    0.3333
               0         0         0
          0.1250    0.3750    0.5000
          0.5000    0.2500    0.2500
               0         0         0
          0.3750    0.2500    0.3750
          0.1250    0.2500    0.6250
      GA =
           1     3     4
           2     2     3
           2     2     3
           0     0     0
           2     1     2
           1     4     4
           0     0     0
           2     3     3
           3     2     2
      A =
           1     2     3     0     1     2     0     1     2

    Once the data are entered into Matlab by the call for file "md61," we make preparation for the policy-improvement step. The procedure is in the file newpolprep.m.

      % file newpolprep.m
      % version of 3/23/92
      disp('Data needed:')
      disp('  Matrix PA of transition probabilities for states and actions')
      disp('  Matrix GA of gains for states and actions')
      disp('  Type number to identify GA matrix types')
      disp('    Type 1. Gains associated with a state')
      disp('    Type 2. One-step transition gains')
      disp('    Type 3. Gains associated with a demand')
      disp('  Row vector of actions')
      disp('  Value of alpha (= 1 for no discounting)')
      c  = input('Enter type number to show gain type ');
      a  = input('Enter value of alpha (= 1 for no discounting) ');
      PA = input('Enter matrix PA of transition probabilities ');
      GA = input('Enter matrix GA of gains ');
      if c == 3
        PD = input('Enter row vector of demand probabilities ');
      end
      if c == 1
        QA = GA';
      elseif c == 2
        QA = diag(PA*GA');            % (qx1)
      else
        QA = GA*PD';
      end
      A = input('Enter row vector A of possible actions ');   % (1xq)
      m = length(PA(1,:));
      q = length(A);
      n = input('Enter n, the power of P to approximate P0 ');
      s = input('Enter s, the power of P in the approximation of V ');
      QD = [(1:q)' A' QA];            % First col is index; second is A; third is QA
      DIS = ['   Index   Action    Value'];
      disp(DIS)
      disp(QD)
      if a == 1
        call = 'Call for newpol';
      else
        call = 'Call for newpold';
      end
      disp(call)

      newpolprep                      % Call for preparatory program in file newpolprep.m

      Data needed:
        Matrix PA of transition probabilities for states and actions
        Matrix GA of gains for states and actions
        Type number to identify GA matrix types
          Type 1. Gains associated with a state
          Type 2. One-step transition gains
          Type 3. Gains associated with a demand
        Row vector of actions
        Value of alpha (= 1 for no discounting)
      Enter type number to show gain type 2
      Enter value of alpha (= 1 for no discounting) 1
      Enter matrix PA of transition probabilities PA
      Enter matrix GA of gains GA
      Enter row vector A of possible actions A
      Enter n, the power of P to approximate P0 16
      Enter s, the power of P in the approximation of V 6
         Index   Action    Value
        1.0000   1.0000   2.6667
        2.0000   2.0000   2.3750
        3.0000   3.0000   2.3333
        4.0000        0        0
        5.0000   1.0000   1.6250
        6.0000   2.0000   2.5000
        7.0000        0        0
        8.0000   1.0000   2.6250
        9.0000   2.0000   2.1250
      Call for newpol

    The preparatory procedure has prompted for the procedure newpol (in file newpol.m).

      % file: newpol.m (without discounting)
      % version of 3/23/92
      d = input('Enter policy as row matrix of indices ');
      D = A(d);                       % policy in terms of actions
      P = PA(d',:);                   % selects probabilities for policy
      Q = QA(d',:);                   % selects next-period gains for policy
      P0 = P^n;                       % Display to check convergence
      PI = P0(1,:);                   % Long-run distribution
      G = PI*Q                        % Long-run expected gain per period
      C = P + eye(size(P));
      for j = 2:s
        C = C + P^j;                  % C = I + P + P^2 + ... + P^s
      end
      V = (C - (s+1)*P0)*Q;           % V = B0*Q
      disp(' ')
      disp('Approximation to P0; rows are long-run dbn')
      disp(P0)
      disp('Policy in terms of indices')
      disp(d)
      disp('Policy in terms of actions')
      disp(D)
      disp('Values for the policy selected')
      disp(V)
      disp('Long-run expected gain per period G')
      disp(G)
      T = [(1:q)' A' (QA + PA*V)];    % Test values for determining new policy
      DIS = ['   Index   Action   Test Value'];
      disp(DIS)
      disp(T)
      disp('Check current policy against new test values.')
      disp('--If new policy needed, call for newpol')
      disp('--If not, use policy, values V, and gain G, above')

      newpol
      Enter policy as row matrix of indices [2 6 9]    % A deliberately poor choice

      Approximation to P0; rows are long-run dbn
          0.2642    0.2830    0.4528
          0.2642    0.2830    0.4528
          0.2642    0.2830    0.4528
      Policy in terms of indices
           2     6     9
      Policy in terms of actions
           2     2     2
      Long-run expected gain per period G
          2.2972
         Index   Action   Test Value
        1.0000   1.0000   2.7171
        2.0000   2.0000   2.4168
        3.0000   3.0000   2.3838
        4.0000        0        0
        5.0000   1.0000   1.6220
        6.0000   2.0000   2.5677
        7.0000        0        0
        8.0000   1.0000   2.6479
        9.0000   2.0000   2.0583
      Check current policy against new test values.
      --If new policy needed, call for newpol
      --If not, use policy and gain G, above

      newpol                          % New policy is needed
      Enter policy as row matrix of indices [1 6 8]

      Approximation to P0; rows are long-run dbn
          0.3939    0.2828    0.3232
          0.3939    0.2828    0.3232
          0.3939    0.2828    0.3232
      Policy in terms of indices
           1     6     8
      Policy in terms of actions
           1     2     1
      Values for selected policy
          0.0526
         -0.0989
          0.0223
      Long-run expected gain per period G
          2.6061
         Index   Action   Test Value
        1.0000   1.0000   2.6587
        2.0000   2.0000   2.3595
        3.0000   3.0000   2.3254
        4.0000        0        0
        5.0000   1.0000   1.6057
        6.0000   2.0000   2.5072
        7.0000        0        0
        8.0000   1.0000   2.6284
        9.0000   2.0000   2.1208
      Check current policy against new test values.
      --If new policy needed, call for newpol
      --If not, use policy, values V, and gain G, above

    Since the policy selected on this iteration is the same as the previous one, the procedure has converged to an optimal policy. The first of the first three rows is maximum; the second of the next two rows is maximum; and the first of the final two rows is maximum. Thus, we have selected rows 1, 6, 8, corresponding to the optimal policy $d^* = (1, 2, 1)$. The expected long-run gain per period is $g = 2.6061$.

    The value matrix is

    $$v = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} = \begin{bmatrix} 0.0527 \\ -0.0989 \\ 0.0223 \end{bmatrix}$$

    We made a somewhat arbitrary choice of the powers of $P$ used in the approximation of $P_0$ and $B_0$. As we note in the development of the approximation procedures in Sec 4, the convergence of $P^n$ is governed by the magnitude of the largest eigenvalue other than one. We can always get a check on the reliability of the calculations by checking the eigenvalues of $P$ corresponding to the presumed optimal policy. For the choice above, we find the eigenvalues to be 1, -0.1250, and 0.0833. Since $0.125^4 \approx 0.0002$, the choices of exponents should be quite satisfactory. In fact, we could probably use $P^8$ as a satisfactory approximation to $P_0$, if that were desirable. The margin allows for the possibility that for some policies the eigenvalues may not be so small.
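    A quick way to carry out this check in Matlab (a sketch; the matrix shown is the one for the optimal policy $d^* = (1, 2, 1)$):

      P = [1/3 1/3 1/3; 1/2 1/4 1/4; 3/8 1/4 3/8];   % transition matrix under policy (1,2,1)
      lam = eig(P)                                    % eigenvalues: 1.0000, -0.1250, 0.0833
      mags = sort(abs(lam), 'descend');
      mags(2)^4                                       % about 2e-4, so transients die out quickly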

  7. Policy iteration– with discounting
    It turns out that the policy iteration procedure is simpler in the case of discounting, as suggested by the evolution analysis in Sec 4. We have the following two-phase procedure, based on that analysis.
    1. Value-determination procedure. Given the $q_i$ and transition probabilities $p(i,j)$ for the current policy, solve $v = q + \alpha P v$ to get the corresponding $v_i$:
      $$v = [I - \alpha P]^{-1} q$$
    2. Policy-improvement procedure. Given $\{v_i : 1 \le i \le M\}$ for the current policy, determine a new policy $d^*$, with each $d^*(i) = a_i^*$ determined as that action for which
      $$q_i^* + \alpha \sum_{j=1}^{M} p^*(i,j)\, v_j = \max_k \Big\{ q_i(a_{ik}) + \alpha \sum_{j=1}^{M} p_{ij}(a_{ik})\, v_j : a_{ik} \in A_i \Big\}$$
    We illustrate the Matlab procedure with the same numerical example as in the previous case, using discount factor $\alpha = 0.8$. The data file is the same, so we call for it as before.

    Assume the data in file md61.m are in Matlab; if not, call for md61. We use the procedure newpolprep to prepare for the iterative procedure by making some initial choices.

      newpolprep

      Data needed:
        Matrix PA of transition probabilities for states and actions
        Matrix GA of gains for states and actions
        Type number to identify GA matrix types
          Type 1. Gains associated with a state
          Type 2. One-step transition gains
          Type 3. Gains associated with a demand
        Row vector of actions
        Value of alpha (= 1 for no discounting)
      Enter type number to show gain type 2
      Enter value of alpha (= 1 for no discounting) 0.8
      Enter matrix PA of transition probabilities PA
      Enter matrix GA of gains GA
      Enter row vector A of possible actions A
      Enter n, the power of P to approximate P0 16
      Enter s, the power of P in the approximation of V 6
         Index   Action    Value
        1.0000   1.0000   2.6667
        2.0000   2.0000   2.3750
        3.0000   3.0000   2.3333
        4.0000        0        0
        5.0000   1.0000   1.6250
        6.0000   2.0000   2.5000
        7.0000        0        0
        8.0000   1.0000   2.6250
        9.0000   2.0000   2.1250
      Call for newpold

    The procedure for policy iteration is in the file newpold.m.

      % file newpold.m (with discounting)
      % version of 3/23/92
      d = input('Enter policy as row matrix of indices ');
      D = A(d);
      P = PA(d',:);                   % transition matrix for policy selected
      Q = QA(d',:);                   % average next-period gain for policy selected
      V = (eye(size(P)) - a*P)\Q;     % Present values for unlimited future
      T = [(1:q)' A' (QA + a*PA*V)];  % Test values for determining new policy
      disp(' ')
      disp('Approximation to P0; rows are long-run dbn')
      disp(P0)
      disp('Policy in terms of indices')
      disp(d)
      disp('Policy in terms of actions')
      disp(D)
      disp('Values for policy')
      disp(V)
      DIS = ['   Index   Action   Test Value'];
      disp(DIS)
      disp(T)
      disp('Check current policy against new test values.')
      disp('--If new policy needed, call for newpold')
      disp('--If not, use policy and values above')

      newpold
      Enter policy as row matrix of indices [3 5 9]    % Deliberately poor choice

      Approximation to P0; rows are long-run dbn
          0.3939    0.2828    0.3232
          0.3939    0.2828    0.3232
          0.3939    0.2828    0.3232
      Policy in terms of indices
           3     5     9
      Policy in terms of actions
           3     1     2
      Values for policy
         10.3778
          9.6167
         10.1722
         Index   Action   Test Value
        1.0000    1.0000   10.7111
        2.0000    2.0000   10.3872
        3.0000    3.0000   10.3778
        4.0000         0         0
        5.0000    1.0000    9.6167
        6.0000    2.0000   10.6089
        7.0000         0         0
        8.0000    1.0000   10.7133
        9.0000    2.0000   10.1722
      Check current policy against new test values.
      --If new policy needed, call for newpold
      --If not, use policy and values above

      newpold
      Enter policy as row matrix of indices [1 6 8]

      Approximation to P0; rows are long-run dbn
          0.3939    0.2828    0.3232
          0.3939    0.2828    0.3232
          0.3939    0.2828    0.3232
      Policy in terms of indices
           1     6     8
      Policy in terms of actions
           1     2     1
      Values for policy
         13.0844
         12.9302
         13.0519
         Index   Action   Test Value
        1.0000    1.0000   13.0844
        2.0000    2.0000   12.7865
        3.0000    3.0000   12.7511
        4.0000         0         0
        5.0000    1.0000   12.0333
        6.0000    2.0000   12.9302
        7.0000         0         0
        8.0000    1.0000   13.0519
        9.0000    2.0000   12.5455
      Check current policy against new test values.
      --If new policy needed, call for newpold
      --If not, use policy and values above

    Since the policy indicated is the same as the previous policy, we know this is an optimal policy. Rows 1, 6, 8 indicate the optimal policy to be $d^* = (1, 2, 1)$. It turns out in this example that the optimal policies are the same for the discounted and undiscounted cases. That is not always true. The $v$ matrix gives the present values for unlimited futures, for each of the three possible starting states.

Source: OpenStax, Topics in applied probability. OpenStax CNX. Sep 04, 2009. Download for free at http://cnx.org/content/col10964/1.2