Summary: We first summarize certain essentials in the analysis of homogeneous Markov chains with costs and rewards associated with states, or with transitions between states. Then we consider three types of gain structure: (1) gains associated with a state; (2) one-step transition gains; and (3) gains associated with a demand, under certain reasonable conditions. Matlab implementations of the results of the analysis provide machine solutions to a variety of problems.
In order to provide the background for Matlab procedures for Markov decision processes, we first summarize certain essentials in the analysis of homogeneous Markov chains with costs and rewards associated with states, or with transitions between states. References are to Pfeiffer: PROBABILITY FOR APPLICATIONS.
- Some cost and reward patterns
Consider a finite, ergodic (i.e., recurrent, positive, aperiodic, irreducible) homogeneous Markov chain XN, with state space E = {1, 2, ⋯, M}. To facilitate use of matrix computation programs, we number states from one, except in certain examples in which state zero has a natural meaning. Associated with this chain is a cost or reward structure belonging to one of the three general classes described below. The one-step transition probability matrix is P = [p(i, j)], where p(i, j) = P(Xn+1 = j | Xn = i). The distribution for Xn is represented by the row matrix π(n) = [p1(n), p2(n), ⋯, pM(n)], where pi(n) = P(Xn = i). The long-run distribution is represented by the row matrix π = [π1, π2, ⋯, πM].
- Type 1. Gain associated with a state. A reward or gain in the amount g(i) is realized in the next period if the current state is i. The gain sequence {Gn : 1 ≤ n} of random variables Gn+1 = g(Xn) is the sequence of gains realized as the chain XN evolves. We represent the gain function g by the column matrix g = [g(1) g(2) ⋯ g(M)]T.
- Type 2. One-step transition gains. A reward or gain in the amount g(i, j) = gij is realized in period n+1 if the system goes from state i in period n to state j in period n+1. The corresponding one-step transition probability is p(i, j) = pij. The gain matrix is g = [g(i, j)]. The gain sequence {Gn : 1 ≤ n} of random variables Gn+1 = g(Xn, Xn+1) is the sequence of gains experienced as the chain XN evolves.
- Type 3. Gains associated with a demand. In this case, the gain random variables are of the form

Gn+1 = g(Xn, Dn+1)

If n represents the present, the random vector Un = (X0, X1, ⋯, Xn) represents the “past” at n of the chain XN. We suppose {Dn : 1 ≤ n} is iid and that each pair {Dn+1, Un} is independent. Again, the gain sequence {Gn : 1 ≤ n} represents the gains realized as the process evolves. Standard results on Markov chains show that in each case the sequence GN = {Gn : 1 ≤ n} is Markov.
A recurrence relation. We are interested in the expected gains in future stages, given the present state of the process. For any n ≥ 0 and any m ≥ 1, the gain Gn(m) in the m periods following period n is given by

Gn(m) = Gn+1 + Gn+2 + ⋯ + Gn+m

If the process is in state i, the expected gain in the next period is

qi = vi(1) = E[Gn+1 | Xn = i]

and the expected gain in the next m periods is

vi(m) = E[Gn(m) | Xn = i]

We utilize a recursion relation that allows us to begin with the transition matrix P and the next-period expected gain matrix q = [q1 q2 ⋯ qM]T and calculate the vi(m) for any m > 1. Although the analysis is somewhat different for the various gain structures, the result exhibits a common pattern. In each case

vi(m) = E[Gn(m) | Xn = i] = E[Gn+1 | Xn = i] + ∑_{k=1}^{m−1} E[Gn+k+1 | Xn = i]
      = qi + ∑_{k=1}^{m−1} ∑_{j∈E} E[Gn+k+1 | Xn+1 = j] p(i, j)
      = qi + ∑_{j∈E} E[ ∑_{k=1}^{m−1} Gn+k+1 | Xn+1 = j ] p(i, j)

By homogeneity, the inner expectation is vj(m−1). We thus have the fundamental recursion relation

(*)  vi(m) = qi + ∑_{j∈E} vj(m−1) p(i, j)

The recursion relation (*) shows that the transition matrix P = [p(i, j)] and the vector of next-period expected gains, which we represent by the column matrix q = [q1, q2, ⋯, qM]T, determine the vi(m). The difference between the cases is the manner in which the qi are obtained.
- Type 1: qi = E[g(Xn) | Xn = i] = g(i)
- Type 2: qi = E[g(Xn, Xn+1) | Xn = i] = E[g(i, Xn+1) | Xn = i] = ∑_{j∈E} g(i, j) p(i, j)
- Type 3: qi = E[g(Xn, Dn+1) | Xn = i] = E[g(i, Dn+1)] = ∑_k g(i, k) P(D = k)
- Matrix form: For computational purposes, we utilize these relations in matrix form. If

v(n) = [v1(n) v2(n) ⋯ vM(n)]T and q = [q1 q2 ⋯ qM]T

then

(*)  v(m+1) = q + P v(m) for all m ≥ 0, with v(0) = 0
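The recursion (*) is easy to iterate numerically. The following is a minimal Matlab sketch; it assumes the transition matrix P, the next-period expected gain column q, and the desired number of periods m are already in the workspace:

% Iterate v(k+1) = q + P*v(k), starting from v(0) = 0
v = zeros(size(q));          % v(0) = 0
for k = 1:m
    v = q + P*v;             % after the loop, v holds v(m)
end
disp(v)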
Examination of the expressions for qi, above, shows that the next-period expected gain matrix q takes the following forms. In each case, g is the gain matrix. For Type 3, pD is the column matrix whose elements are P(D = k).
- Type 1: q = g
- Type 2: q = the diagonal of P gT
- Type 3: q = g pD
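Each of these takes a single Matlab statement. A minimal sketch, assuming P and the appropriate gain matrix g (and, for Type 3, the demand probability column pD) are in the workspace:

% Next-period expected gain q for the three gain structures
q = g;            % Type 1: g is the M x 1 column of state gains
q = diag(P*g');   % Type 2: g is the M x M matrix of transition gains, qi = sum_j p(i,j)g(i,j)
q = g*pD;         % Type 3: g is M x K, pD is the K x 1 column with entries P(D = k)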
- Some long-run averages
Consider the average expected gain for m periods,

E[(1/m) Gn(m) | Xn = i] = (1/m) vi(m) = (1/m) ∑_{k=1}^{m} E[Gn+k | Xn = i]

Use of the Markov property and the fact that for an ergodic chain

(1/m) ∑_{k=1}^{m} p^(k)(i, j) → πj as m → ∞

yields the result that

lim_m E[(1/m) Gn(m)] = ∑_{j∈E} qj πj = g

and, for any state i,

lim_m E[(1/m) Gn(m) | Xn = i] = lim_m (1/m) vi(m) = g

The expression for g may be put into matrix form. If the long-run probability distribution is represented by the row matrix π = [π1 π2 ⋯ πM] and q = [q1 q2 ⋯ qM]T, then

g = πq

- Discounting and potentials
Suppose in a given time interval the value of money increases by a fraction a. This may represent the potential earning if the money were invested. One dollar now is worth 1 + a dollars at the end of one period. It is worth (1 + a)^n dollars after n periods. Set α = 1/(1 + a), so that 0 < α ≤ 1. It takes α dollars to be worth one dollar after one period. It takes α^n dollars at present to be worth one dollar n periods later. Thus the present worth or discounted value of one dollar n periods in the future is α^n dollars. This is the amount one would need to invest presently, at interest rate a per unit of time, to have one dollar n periods later.

If f is any function defined on the state space E, we designate by f the column matrix [f(1) f(2) ⋯ f(M)]T. We make an exception to this convention in the case of the distributions of the state probabilities π(n) = [p1(n) p2(n) ⋯ pM(n)], their generating function, and the long-run probabilities π = [π1 π2 ⋯ πM], which we represent as row matrices. It should be clear that much of the following development extends immediately to the infinite state space E = N = {0, 1, 2, ⋯}.

We assume one of the gain structures introduced in Sec 1 and the corresponding gain sequence {Gn : 1 ≤ n}. The value of the random variable Gn+1 is the gain or reward realized during the period n+1. We let each transition time be at the end of one period of time (say a month or a quarter). If n corresponds to the present, then Gn+k is the gain in period n+k. If we do not discount for the period immediately after the present, then α^(k−1) Gn+k is the present value of that gain. Hence the present value, at period n, of the entire future gain sequence is

∑_{k=1}^{∞} α^(k−1) Gn+k
Definition. The α-potential of the gain sequence {Gn : 1 ≤ n} is the function φ defined by

φ(i) = E[ ∑_{n=0}^{∞} α^n Gn+1 | X0 = i ], i ∈ E

Remark. φ(i) is the expected total discounted gain, given the system starts in state i.
We next define a concept which is related to the α-potential in a useful way.
Definition. The α-potential matrix Rα for the process XN is the matrix

Rα = ∑_{n=0}^{∞} α^n P^n

THEOREM 1: 3.1
Let XN be an ergodic, homogeneous Markov chain with state space E and gain sequence {Gn : 1 ≤ n}. Let q = [q1 q2 ⋯ qM]T, where qi = E[Gn+1 | Xn = i] for i ∈ E. For α ∈ (0, 1), let φ be the α-potential for the gain sequence. That is,

φ(i) = E[ ∑_{n=0}^{∞} α^n Gn+1 | X0 = i ], i ∈ E

Then, if Rα is the α-potential matrix for XN, we have

φ = Rα q

— □
THEOREM 2: 3.2
Consider an ergodic, homogeneous Markov chain XN with gain sequence {Gn : 1 ≤ n} and next-period expected gain matrix q. For α ∈ (0, 1), the α-potential φ and the α-potential matrix Rα satisfy

[I − αP] Rα = I and [I − αP] φ = q

If the state space E is finite, then

Rα = [I − αP]^(−1) and φ = [I − αP]^(−1) q = Rα q

— □
EXAMPLE 1: A numerical example
Suppose the transition matrix P and the next-period expected gain matrix q are

P = [0.2  0.3  0.5
     0.4  0.1  0.5
     0.6  0.3  0.1]        q = [2 5 3]T

For α = 0.9, we have

R0.9 = (I − 0.9P)^(−1) = [4.4030  2.2881  3.3088
                          3.5556  3.1356  3.3088
                          3.6677  2.2881  4.0441]        φ = R0.9 q = [30.17  32.72  30.91]T

— □
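These values are easily checked in Matlab. A minimal sketch using the data of Example 1, with a partial sum of the series Rα = ∑ α^n P^n included as a sanity check:

% Check of Example 1
P = [0.2 0.3 0.5; 0.4 0.1 0.5; 0.6 0.3 0.1];
q = [2; 5; 3];
a = 0.9;
R = inv(eye(3) - a*P);       % alpha-potential matrix R_0.9
phi = R*q                    % expected total discounted gain, by starting state
S = zeros(3);                % partial sum of the series for R
for n = 0:200
    S = S + a^n * P^n;
end
disp(norm(S - R))            % small residual confirms the series representation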
The next example is of Type 3, but the demand random variable is Poisson, which does not have finite range. The simple pattern for calculating the qi must be modified.
EXAMPLE 2: Costs associated with inventory under an (m, M) order policy
If k units are ordered, the cost in dollars is

c(k) = 0 for k = 0, and c(k) = 10 + 25k for 0 < k ≤ M

For each unit of unsatisfied demand, there is a penalty of 50 dollars. We use the (m, M) inventory policy described in Ex 23.1.3 of PA. X0 = M, and Xn is the stock at the end of period n, before restocking. Demand in period n is Dn. The cost for restocking at the end of period n+1 is

g(Xn, Dn+1) = 10 + 25(M − Xn) + 50 max{Dn+1 − M, 0} for 0 ≤ Xn < m
g(Xn, Dn+1) = 50 max{Dn+1 − Xn, 0} for m ≤ Xn ≤ M

For m = 1, M = 3, we have

g(0, Dn+1) = 85 + 50 I[3,∞)(Dn+1)(Dn+1 − 3)
g(i, Dn+1) = 50 I[3,∞)(Dn+1)(Dn+1 − i), 1 ≤ i ≤ 3

We take m = 1, M = 3 and assume D is Poisson (1). From Ex 23.1.3 in PA, we obtain

P = [0.0803  0.1839  0.3679  0.3679
     0.6321  0.3679  0       0
     0.2642  0.3679  0.3679  0
     0.0803  0.1839  0.3679  0.3679]

The largest eigenvalue of P other than one has magnitude |λ| ≈ 0.26, so n ≥ 10 should be sufficient for convergence. We use n = 16. Taking any row of P^16, we approximate the long-run distribution by

π = [0.2858  0.2847  0.2632  0.1663]

We thus have

q0 = 85 + 50 E[I[3,∞)(Dn+1)(Dn+1 − 3)] = 85 + 50 ∑_{k=4}^{∞} (k − 3) pk,  where pk = P(D = k)

For the Poisson distribution, ∑_{k=n}^{∞} k pk = μ ∑_{k=n−1}^{∞} pk. Hence, with μ = 1,

q0 = 85 + 50[ ∑_{k=4}^{∞} k pk − 3 ∑_{k=4}^{∞} pk ] = 85 + 50[ ∑_{k=3}^{∞} pk − 3 ∑_{k=4}^{∞} pk ] = 86.1668

Similarly, q1 = 50[ ∑_{k=3}^{∞} k pk − ∑_{k=3}^{∞} pk ] = 50[ ∑_{k=2}^{∞} pk − ∑_{k=3}^{∞} pk ] = 50 P(D = 2) = 9.1970, and q2, q3 are obtained in the same way, so that

q = [86.1668  9.1970  5.1819  1.1668]T

Then, for α = 0.8 we have

R0.8 = (I − 0.8P)^(−1) = [1.9608  1.0626  1.1589  0.8178
                          1.4051  2.1785  0.8304  0.5860
                          1.1733  1.2269  2.1105  0.4893
                          0.9608  1.0626  1.1589  1.8178]        φ = R0.8 q = [185.68  146.09  123.89  100.68]T

Recall that we are dealing with expected discounted costs. Since we usually consider starting with a full stock, the amount φ(M) = φ(3) = 100.68 is the quantity of interest. Note that this is smaller than the other values of φ(j), which correspond to smaller beginning stock levels. We expect greater restocking costs in such cases.
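The qi may also be obtained numerically by truncating the Poisson demand distribution. A minimal sketch (the truncation point K = 20 is an arbitrary choice; any K leaving negligible tail mass will do):

% Next-period expected costs for the (m,M) = (1,3) inventory example
K  = 20;                                % truncation point for the Poisson(1) demand
k  = 0:K;
pD = exp(-1)./factorial(k);             % P(D = k), k = 0,...,K
g  = zeros(4, K+1);                     % rows correspond to states 0, 1, 2, 3
g(1,:) = 85 + 50*max(k - 3, 0);         % state 0: reorder up to M = 3
for i = 1:3
    g(i+1,:) = 50*(k >= 3).*(k - i);    % states 1, 2, 3, using the gain formula above
end
q = g*pD'                               % approximately [86.1668 9.1970 5.1819 1.1668]'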
- Evolution of chains with costs and rewards
- No discounting
A generating function analysis of

(*)  v(m+1) = q + P v(m) for all m ≥ 0, with v(0) = 0

shows that

v(n) = n g 1 + v + transients, where v = B0 q

Here g = πq, 1 is a column matrix of ones, P0 = P∞ (the limit of Pn), and B0 is a matrix which the analysis also shows we may approximate by

B0 = B(1) ≈ I + P + P^2 + ⋯ + P^n − (n + 1) P0

The magnitude of the largest eigenvalue |λi| < 1 gives an indication of how many terms are needed.
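With P and q in the workspace, the approximation can be carried out directly in Matlab. A minimal sketch (the exponents n, for P0, and s, for the series, are the same choices prompted for by the procedures newpolprep and newpol below):

% Long-run gain rate g and the constant term v = B0*q
n  = 16;                      % power used to approximate P0
s  = 6;                       % number of powers kept in the series for B0
P0 = P^n;                     % each row approximates the long-run distribution pi
g  = P0(1,:)*q;               % long-run expected gain per period, g = pi*q
B0 = eye(size(P,1));
for j = 1:s
    B0 = B0 + P^j;            % partial sum I + P + ... + P^s
end
B0 = B0 - (s+1)*P0;
v  = B0*q;                    % v(m) is approximately m*g + v for large m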
EXAMPLE 3: The inventory problem (continued)
We consider again the inventory problem. We have

P = [0.0803  0.1839  0.3679  0.3679
     0.6321  0.3679  0       0
     0.2642  0.3679  0.3679  0
     0.0803  0.1839  0.3679  0.3679]        q = [86.1668  9.1970  5.1819  1.1668]T

The eigenvalues of P are 1, 0.0920 + 0.2434i, 0.0920 − 0.2434i, and 0. Thus, the decay of the transients is controlled by |λ*| = (0.0920^2 + 0.2434^2)^(1/2) = 0.2602. Since |λ*|^4 ≈ 0.0046, the transients die out rather rapidly. We obtain the following approximations

P0 ≈ P^16 ≈ [0.2858  0.2847  0.2632  0.1663
             0.2858  0.2847  0.2632  0.1663
             0.2858  0.2847  0.2632  0.1663
             0.2858  0.2847  0.2632  0.1663]

The approximate value of B0 is found to be

B0 ≈ I + P + P^2 + P^3 + P^4 − 5 P0 ≈ [ 0.4834  −0.3766  −0.1242   0.0174
                                         0.0299   0.7537  −0.5404  −0.2432
                                        −0.2307  −0.1684   0.7980  −0.3989
                                        −0.5166  −0.3766  −0.1242   1.0174]

The value g = πq ≈ 28.80 is found in the earlier treatment. From the values above, we have

v = B0 q ≈ [37.6  6.4  −17.8  −47.4]T,  so that  v(n) ≈ [28.80n + 37.6,  28.80n + 6.4,  28.80n − 17.8,  28.80n − 47.4]T

The average gain per period is clearly g ≈ 28.80. This soon dominates the constant terms represented by the entries in v.
- With discounting
Let the discount factor be α ∈ (0, 1). If n represents the present period, then Gn+1 is the reward in the first succeeding period and Gn+k is the reward in the kth succeeding period. If we do not discount the first period, then

Gn(m) = Gn+1 + αGn+2 + α^2 Gn+3 + ⋯ + α^(m−1) Gn+m = Gn+1 + α Gn+1(m−1)

Thus

vi(m) = E[Gn(m) | Xn = i] = qi + α ∑_{j=1}^{M} p(i, j) vj(m−1)
In matrix form, this is

v(n) = q + αP v(n−1)

A generating function analysis shows that

vi(n) = vi + transients, 1 ≤ i ≤ M

Hence, the steady-state part of vi(m) = qi + α ∑_{j=1}^{M} p(i, j) vj(m−1) satisfies

vi = qi + α ∑_{j=1}^{M} p(i, j) vj

In matrix form, v = q + αPv, which yields

v = [I − αP]^(−1) q

Since the qi = E[Gn+1 | Xn = i] are known, we can solve for the vi. Also, since vi(m) is the present value of expected gain m steps into the future, the vi represent the present value for an unlimited future, given that the process starts in state i.
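Computationally, this is a single linear solve. A minimal sketch, assuming P, q, and the discount factor a are in the workspace (this is the same step carried out by newpold.m below):

% Present values for an unlimited future, with discounting
M = size(P, 1);
v = (eye(M) - a*P)\q;         % solves v = q + a*P*v
disp(v)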
- Stationary decision policies
We suppose throughout this section that XN is ergodic, with finite state space E = {1, 2, ⋯, M}. For each state i ∈ E, there is a class Ai of possible actions which may be taken when the process is in state i. A decision policy is a sequence of decision functions d1, d2, ⋯ such that

the action at stage n is dn(X0, X1, ..., Xn)

The action selected is in Ai whenever Xn = i. We consider a special class of decision policies.
Stationary decision policy: dn(X0, X1, ⋯, Xn) = d(Xn), with d invariant with n.
The possible actions depend upon the current state. That is, d(Xn) ∈ Ai whenever Xn = i. Analysis shows the Markov character of the process is preserved under stationary policies. Also, if E is finite and every stationary policy produces an irreducible chain, then a stationary policy which is optimal among stationary policies is optimal among all policies. We suppose each policy yields an ergodic chain.
Since the transition probabilities from any state depend in part on the action taken when in that state, the long-run probabilities will depend upon the policy. In the no-discounting case, we want to determine a stationary policy that maximizes the gain g = πq for the corresponding long-run probabilities. We say “a stationary policy,” since we do not rule out the possibility that there may be more than one policy which maximizes the gain. In the discounting case, the goal is to maximize the steady-state part vi in the expressions

vi(n) = vi + transients, 1 ≤ i ≤ M

In simple cases, with a small state space and a small number of possible actions in each state, it may be feasible to obtain the gain or present values for each permissible policy and then select the optimum by direct comparison. However, for most cases, this is not an efficient approach. In the next two sections, we consider iterative procedures for step-by-step optimization which usually greatly reduce the computational burden.
- Policy iteration method – no discounting
We develop an iterative procedure for finding an optimal stationary policy. The goal is to determine a stationary policy that maximizes the long-run expected gain per period g. In the next section, we extend this procedure to the case where discounting is in effect. We assume that each policy yields an ergodic chain.
Suppose we have established that under a given policy
(1) vi(n) = qi + ∑_j p(i, j) vj(n−1), where qi = E[Gn+1 | Xn = i]
In this case, the analysis in Sec 4 shows that for large n,
(2) vi(n) ≈ ng + vi, where g = ∑_i πi qi
We note that vi and g depend upon the entire policy, while qi and p(i, j) depend on the individual actions ai. Using (1) and (2), we obtain

ng + vi = qi + ∑_j p(i, j)[(n−1)g + vj] = qi + (n−1)g + ∑_j p(i, j) vj

From this we conclude
(3) g + vi = qi + ∑_j p(i, j) vj for all i ∈ E
Suppose a policy d has been used. That is, action d(i) is taken whenever the process is in state i. To simplify writing, we drop the indication of the action and simply write p(i, j) for pij(d(i)), etc. Associated with this policy, there is a gain g. We should like to determine whether or not this is the maximum possible gain, and if it is not, to find a policy which does give the maximum gain. The following two-phase procedure has been found to be effective. It is essentially the procedure developed originally by Ronald Howard, who pioneered the treatment of these problems. The new feature is the method of carrying out the value-determination step, utilizing the approximation method noted above.
- Value-determination procedure We calculate g = ∑_i πi qi = πq. As in Sec 4, we calculate

v = B0 q ≈ [I + P + P^2 + ⋯ + P^s − (s + 1) P0] q, where P0 = lim P^n
- Policy-improvement procedure We suppose policy d has been used through period n. Then, by (3), above,

g + vi = qi + ∑_j p(i, j) vj

We seek to improve policy d by selecting policy d*, with d*(i) = aik*, to satisfy

qi* + ∑_j pij* vj = max{ qi(aik) + ∑_j pij(aik) vj : aik ∈ Ai },  1 ≤ i ≤ M

(A Matlab sketch of this selection step follows the remarks below.)
Remarks
- In the procedure for selecting d*, we use the “old” vj.
- It has been established that g*≥g, with equality iff g is optimal. Since there is a finite number of policies, the procedure must converge to an optimum stationary policy in a finite number of steps.
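A minimal Matlab sketch of the selection step, assuming the stacked state–action matrices PA (transition rows) and QA (next-period gains) used by the procedures below, together with the current value column V and an illustrative grouping vector grp giving the state to which each row belongs (grp is a hypothetical helper, not part of the procedures that follow; separator rows may be given grp = 0):

% Policy-improvement step (no discounting): pick the maximizing action per state
T = QA + PA*V;                 % test values, one entry per (state, action) row
M = max(grp);
dstar = zeros(1, M);           % new policy, as row indices into PA and QA
for i = 1:M
    rows = find(grp == i);     % rows of PA/QA belonging to state i
    [~, k] = max(T(rows));
    dstar(i) = rows(k);        % row index of the action with the largest test value
end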
We implement this two-step procedure with Matlab. To demonstrate the procedure, we consider the following example.
EXAMPLE 4: A numerical example
A Markov decision process has three states: State space E = {1, 2, 3}.
Actions: State 1: A1 = {1, 2, 3}; State 2: A2 = {1, 2}; State 3: A3 = {1, 2}.
Transition probabilities and rewards are:

TABLE 1
  p1j(1): [1/3 1/3 1/3]    g1j(1): [1 3 4]
  p1j(2): [1/4 3/8 3/8]    g1j(2): [2 2 3]
  p1j(3): [1/3 1/3 1/3]    g1j(3): [2 2 3]
  p2j(1): [1/8 3/8 1/2]    g2j(1): [2 1 2]
  p2j(2): [1/2 1/4 1/4]    g2j(2): [1 4 4]
  p3j(1): [3/8 1/4 3/8]    g3j(1): [2 3 3]
  p3j(2): [1/8 1/4 5/8]    g3j(2): [3 2 2]

Use the policy iteration method to determine the policy which gives the maximum gain g.

A computational procedure utilizing Matlab. We first put the data in an m-file. Since we have several cases, it is expedient to include the case number. This example belongs to type 2. Data in file md61.m:

type = 2
PA = [1/3 1/3 1/3; 1/4 3/8 3/8; 1/3 1/3 1/3; 0 0 0;
      1/8 3/8 1/2; 1/2 1/4 1/4; 0 0 0;
      3/8 1/4 3/8; 1/8 1/4 5/8]
GA = [1 3 4; 2 2 3; 2 2 3; 0 0 0;       % Zero rows are separators
      2 1 2; 1 4 4; 0 0 0;
      2 3 3; 3 2 2]
A = [1 2 3 0 1 2 0 1 2]

md61
type = 2
PA =
    0.3333    0.3333    0.3333
    0.2500    0.3750    0.3750
    0.3333    0.3333    0.3333
         0         0         0
    0.1250    0.3750    0.5000
    0.5000    0.2500    0.2500
         0         0         0
    0.3750    0.2500    0.3750
    0.1250    0.2500    0.6250

Once the data are entered into Matlab by the call for file “md61,” we make preparation for the policy-improvement step. The procedure is in the file newpolprep.m.

% file newpolprep.m
% version of 3/23/92
disp('Data needed:')
disp('  Matrix PA of transition probabilities for states and actions')
disp('  Matrix GA of gains for states and actions')
disp('  Type number to identify GA matrix types')
disp('    Type 1. Gains associated with a state')
disp('    Type 2. One-step transition gains')
disp('    Type 3. Gains associated with a demand')
disp('  Row vector of actions')
disp('  Value of alpha (= 1 for no discounting)')
c  = input('Enter type number to show gain type ');
a  = input('Enter value of alpha (= 1 for no discounting) ');
PA = input('Enter matrix PA of transition probabilities ');
GA = input('Enter matrix GA of gains ');
if c == 3
   PD = input('Enter row vector of demand probabilities ');
end
if c == 1                     % next-period gains QA, by gain type (cf. the forms of q in Sec 1)
   QA = GA;
elseif c == 2
   QA = diag(PA*GA');
else
   QA = GA*PD';
end
A = input('Enter row vector A of possible actions ');   % (1xq)
m = length(PA(1,:));
q = length(A);
n = input('Enter n, the power of P to approximate P0 ');
s = input('Enter s, the power of P in the approximation of V ');
QD = [(1:q)' A' QA];          % First col is index; second is A; third is QA
DIS = ['   Index   Action    Value'];
disp(DIS)
disp(QD)
if a == 1
   call = 'Call for newpol';
else
   call = 'Call for newpold';
end
disp(call)

newpolprep                    % Call for preparatory program in file newpolprep.m
Data needed:
  Matrix PA of transition probabilities for states and actions
  Matrix GA of gains for states and actions
  Type number to identify GA matrix types
    Type 1. Gains associated with a state
    Type 2. One-step transition gains
    Type 3. Gains associated with a demand
  Row vector of actions
  Value of alpha (= 1 for no discounting)
Enter type number to show gain type  2
Enter value of alpha (= 1 for no discounting)  1
Enter matrix PA of transition probabilities  PA
Enter matrix GA of gains  GA
Enter row vector A of possible actions  A
Enter n, the power of P to approximate P0  16
Enter s, the power of P in the approximation of V  6
    Index    Action     Value
    1.0000    1.0000    2.6667
    2.0000    2.0000    2.3750
    3.0000    3.0000    2.3333
    4.0000         0         0
    5.0000    1.0000    1.6250
    6.0000    2.0000    2.5000
    7.0000         0         0
    8.0000    1.0000    2.6250
    9.0000    2.0000    2.1250
Call for newpol

The procedure has prompted for the procedure newpol (in file newpol.m).

% file: newpol.m (without discounting)
% version of 3/23/92
d = input('Enter policy as row matrix of indices ');
D = A(d);                     % policy in terms of actions
P = PA(d',:);                 % selects transition probabilities for policy
Q = QA(d',:);                 % selects next-period gains for policy
P0 = P^n;                     % Display to check convergence
PI = P0(1,:);                 % Long-run distribution
G = PI*Q                      % Long-run expected gain per period
C = P + eye(P);
for j = 2:s
   C = C + P^j;               % C = I + P + P^2 + ... + P^s
end
V = (C - (s+1)*P0)*Q;         % V = B0*Q
disp(' ')
disp('Approximation to P0; rows are long-run dbn')
disp(P0)
disp('Policy in terms of indices')
disp(d)
disp('Policy in terms of actions')
disp(D)
disp('Values for the policy selected')
disp(V)
disp('Long-run expected gain per period G')
disp(G)
T = [(1:q)' A' (QA + PA*V)];  % Test values for determining new policy
DIS = ['   Index   Action   Test Value'];
disp(DIS)
disp(T)
disp('Check current policy against new test values.')
disp('--If new policy needed, call for newpol')
disp('--If not, use policy and gain G, above')

newpol
Enter policy as row matrix of indices  [2 6 9]     % A deliberately poor choice

Approximation to P0; rows are long-run dbn
    0.2642    0.2830    0.4528
    0.2642    0.2830    0.4528
    0.2642    0.2830    0.4528
Policy in terms of indices
     2     6     9
Policy in terms of actions
     2     2     2
Long-run expected gain per period G
    2.2972
    Index    Action    Test Value
    1.0000    1.0000    2.7171
    2.0000    2.0000    2.4168
    3.0000    3.0000    2.3838
    4.0000         0         0
    5.0000    1.0000    1.6220
    6.0000    2.0000    2.5677
    7.0000         0         0
    8.0000    1.0000    2.6479
    9.0000    2.0000    2.0583
Check current policy against new test values.
--If new policy needed, call for newpol
--If not, use policy and gain G, above

% New policy is needed
newpol
Enter policy as row matrix of indices  [1 6 8]

Approximation to P0; rows are long-run dbn
    0.3939    0.2828    0.3232
    0.3939    0.2828    0.3232
    0.3939    0.2828    0.3232
Policy in terms of indices
     1     6     8
Policy in terms of actions
     1     2     1
Values for the policy selected
    0.0526
   -0.0989
    0.0223
Long-run expected gain per period G
    2.6061
    Index    Action    Test Value
    1.0000    1.0000    2.6587
    2.0000    2.0000    2.3595
    3.0000    3.0000    2.3254
    4.0000         0         0
    5.0000    1.0000    1.6057
    6.0000    2.0000    2.5072
    7.0000         0         0
    8.0000    1.0000    2.6284
    9.0000    2.0000    2.1208
Check current policy against new test values.
--If new policy needed, call for newpol
--If not, use policy and gain G, above

Since the policy selected on this iteration is the same as the previous one, the procedure has converged to an optimal policy. The first of the first three rows is maximum; the second of the next two rows is maximum; and the first of the final two rows is maximum. Thus, we have selected rows 1, 6, 8, corresponding to the optimal policy d* ∼ (1, 2, 1). The expected long-run gain per period is g = 2.6061. The value matrix is

v = [v1 v2 v3]T = [0.0526  −0.0989  0.0223]T

We made a somewhat arbitrary choice of the powers of P used in the approximation of P0 and B0. As we note in the development of the approximation procedures in Sec 4, the convergence of Pn is governed by the magnitude of the largest eigenvalue other than one.
We can always get a check on the reliability of the calculations by checking the eigenvalues of P corresponding to the presumed optimal policy. For the choice above, we find the eigenvalues to be 1, -0.1250, and 0.0833. Since 0.1250^4 ≈ 0.0002, the choices of exponents should be quite satisfactory. In fact, we could probably use P^8 as a satisfactory approximation to P0, if that were desirable. The margin allows for the possibility that for some policies the eigenvalues may not be so small. — □
- Policy iteration – with discounting
It turns out that the policy iteration procedure is simpler in the case of discounting, as suggested by the evolution analysis in Sec 4. We have the following two-phase procedure, based on that analysis.
- Value-determination procedure. Given the qi and the transition probabilities p(i, j) for the current policy, solve v = q + αPv to get the corresponding vi:

v = [I − αP]^(−1) q
- Policy-improvement procedure Given {vi : 1 ≤ i ≤ M} for the current policy, determine a new policy d*, with each d*(i) = ai* determined as that action for which

qi* + α ∑_{j=1}^{M} p*(i, j) vj = max{ qi(aik) + α ∑_{j=1}^{M} pij(aik) vj : aik ∈ Ai }
We illustrate the Matlab procedure with the same numerical example as in the previous case, using discount factor α = 0.8. The data file is the same, so we call for it, as before.
EXAMPLE 5
Assume the data in file md61.m are in Matlab; if not, call for md61. We use the procedure newpolprep to prepare for the iterative procedure by making some initial choices.

newpolprep
Data needed:
  Matrix PA of transition probabilities for states and actions
  Matrix GA of gains for states and actions
  Type number to identify GA matrix types
    Type 1. Gains associated with a state
    Type 2. One-step transition gains
    Type 3. Gains associated with a demand
  Row vector of actions
  Value of alpha (= 1 for no discounting)
Enter type number to show gain type  2
Enter value of alpha (= 1 for no discounting)  0.8
Enter matrix PA of transition probabilities  PA
Enter matrix GA of gains  GA
Enter row vector A of possible actions  A
Enter n, the power of P to approximate P0  16
Enter s, the power of P in the approximation of V  6
    Index    Action     Value
    1.0000    1.0000    2.6667
    2.0000    2.0000    2.3750
    3.0000    3.0000    2.3333
    4.0000         0         0
    5.0000    1.0000    1.6250
    6.0000    2.0000    2.5000
    7.0000         0         0
    8.0000    1.0000    2.6250
    9.0000    2.0000    2.1250
Call for newpold

The procedure for policy iteration is in the file newpold.m.

% file newpold.m (with discounting)
% version of 3/23/92
d = input('Enter policy as row matrix of indices ');
D = A(d);                        % policy in terms of actions
P = PA(d',:);                    % transition matrix for policy selected
Q = QA(d',:);                    % average next-period gain for policy selected
V = (eye(P) - a*P)\Q;            % Present values for unlimited future
T = [(1:q)' A' (QA + a*PA*V)];   % Test values for determining new policy
disp(' ')
disp('Approximation to P0; rows are long-run dbn')
disp(P0)
disp('Policy in terms of indices')
disp(d)
disp('Policy in terms of actions')
disp(D)
disp('Values for policy')
disp(V)
DIS = ['   Index   Action   Test Value'];
disp(DIS)
disp(T)
disp('Check current policy against new test values.')
disp('--If new policy needed, call for newpold')
disp('--If not, use policy and values above')

newpold
Enter policy as row matrix of indices  [3 5 9]     % Deliberately poor choice

Approximation to P0; rows are long-run dbn
    0.3939    0.2828    0.3232
    0.3939    0.2828    0.3232
    0.3939    0.2828    0.3232
Policy in terms of indices
     3     5     9
Policy in terms of actions
     3     1     2
Values for policy
   10.3778
    9.6167
   10.1722
    Index    Action    Test Value
    1.0000    1.0000   10.7111
    2.0000    2.0000   10.3872
    3.0000    3.0000   10.3778
    4.0000         0         0
    5.0000    1.0000    9.6167
    6.0000    2.0000   10.6089
    7.0000         0         0
    8.0000    1.0000   10.7133
    9.0000    2.0000   10.1722
Check current policy against new test values.
--If new policy needed, call for newpold
--If not, use policy and values above

newpold
Enter policy as row matrix of indices  [1 6 8]

Approximation to P0; rows are long-run dbn
    0.3939    0.2828    0.3232
    0.3939    0.2828    0.3232
    0.3939    0.2828    0.3232
Policy in terms of indices
     1     6     8
Policy in terms of actions
     1     2     1
Values for policy
   13.0844
   12.9302
   13.0519
    Index    Action    Test Value
    1.0000    1.0000   13.0844
    2.0000    2.0000   12.7865
    3.0000    3.0000   12.7511
    4.0000         0         0
    5.0000    1.0000   12.0333
    6.0000    2.0000   12.9302
    7.0000         0         0
    8.0000    1.0000   13.0519
    9.0000    2.0000   12.5455
Check current policy against new test values.
--If new policy needed, call for newpold
--If not, use policy and values above

Since the policy indicated is the same as the previous policy, we know this is an optimal policy. Rows 1, 6, 8 indicate the optimal policy to be d* ∼ (1, 2, 1). It turns out in this example that the optimal policies are the same for the discounted and undiscounted cases. That is not always true. The V matrix gives the present values for unlimited futures, for each of the three possible starting states.