
Page 1

Chapter 17

Markov Chains ( 馬可夫鏈 )

Page 2

Description

Sometimes we are interested in how a random variable changes over time.

The study of how a random variable evolves over time is the study of stochastic processes ( 隨機過程 ).

This chapter explains stochastic processes, in particular the type of stochastic process known as a Markov chain.

We begin by defining the concept of a stochastic process.

Page 3

17.1 What is a Stochastic Process?

Suppose we observe some characteristic of a system at discrete points in time. Let Xt be the value of the system characteristic at time t. In most situations, Xt is not known with certainty before time t and may be viewed as a random variable.

A discrete-time stochastic process is simply a description of the relation between the random variables X0, X1, X2, ….

A continuous-time stochastic process is a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time.

For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.

p.923

Page 4

17.2 What is a Markov Chain?

One special type of discrete-time stochastic process is called a Markov chain.

A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states,

$$P(X_{t+1} = i_{t+1} \mid X_t = i_t,\ X_{t-1} = i_{t-1}, \ldots, X_1 = i_1,\ X_0 = i_0) = P(X_{t+1} = i_{t+1} \mid X_t = i_t)$$

Essentially this says that the probability distribution of the state at time t+1 depends only on the state at time t (namely it), and not on the states the chain passed through on the way to it.

(The state in period t+1 depends only on the state in period t, not on the states in period t−1 or earlier.)

p.924

Page 5

In our study of Markov chains, we make the further assumption that for all states i and j and all t, P(Xt+1 = j | Xt = i) is independent of t. (The transition probability does not depend on the time t: whenever the system is in state i, it moves to state j at the next time with the same probability.)

This assumption allows us to write P(Xt+1 = j | Xt = i) = pij, where pij is the probability that, given the system is in state i at time t, it will be in state j at time t+1.

If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred.

The pij’s are often referred to as the transition probabilities for the Markov chain.

Page 6

This equation implies that the probability law relating the next period's state to the current state does not change over time.

It is often called the Stationary Assumption and any Markov chain that satisfies it is called a stationary Markov chain.

We also must define qi to be the probability that the chain is in state i at time 0; in other words, P(X0 = i) = qi.

We call the vector q = [q1, q2, …, qs] the initial probability distribution for the Markov chain.

Page 7

In most applications, the transition probabilities are displayed as an s × s transition probability matrix P:

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1s} \\ p_{21} & p_{22} & \cdots & p_{2s} \\ \vdots & \vdots & & \vdots \\ p_{s1} & p_{s2} & \cdots & p_{ss} \end{bmatrix}$$

Each entry in the P matrix must be nonnegative, and since the chain must move to some state, the entries in each row must sum to 1:

$$\sum_{j=1}^{s} p_{ij} = 1 \qquad (i = 1, 2, \ldots, s)$$
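These two conditions are easy to verify numerically. The following is a minimal Python sketch (our own illustration, assuming numpy is available; the helper name is ours):

```python
import numpy as np

def is_transition_matrix(P, tol=1e-9):
    """Check that P is a valid Markov transition matrix:
    square, nonnegative entries, and every row sums to 1."""
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        return False
    if (P < 0).any():
        return False
    return np.allclose(P.sum(axis=1), 1.0, atol=tol)

# The cola example matrix used later in the chapter:
P = [[0.90, 0.10],
     [0.20, 0.80]]
print(is_transition_matrix(P))  # True
```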

Page 8

Example 1: Gambler's Ruin Problem. At time 0, I have $2. At times 1, 2, …, I play a game in which I bet $1. With probability p, I win the game, and with probability 1 − p, I lose the game. My goal is to increase my capital to $4, and as soon as I do, the game is over. The game is also over if my capital is reduced to $0. If we define Xt to be my capital position after the time t game is played, then X0, X1, …, Xt may be viewed as a discrete-time stochastic process. Note that X0 = 2 is a known constant, but X1 and later Xt's are random. For example, with probability p, X1 = 3, and with probability 1 − p, X1 = 1. Note that if Xt = 4, then Xt+1 and all later Xt's will also equal 4. Similarly, if Xt = 0, then Xt+1 and all later Xt's will also equal 0. This type of situation is called a gambler's ruin problem.

p.923

Page 9

Example 1:Gambler’s Ruin Problem (cont.)

Find the transition matrix. Sol: state i means that we have i dollars.

p.925

Figure 1 (state-transition diagram): from each state i = 1, 2, 3 the chain moves to state i + 1 with probability p and to state i − 1 with probability 1 − p; states 0 and 4 are absorbing.

With the states ordered 0, 1, 2, 3, 4, the transition matrix is

$$P = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 1-p & 0 & p & 0 & 0 \\ 0 & 1-p & 0 & p & 0 \\ 0 & 0 & 1-p & 0 & p \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$
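A short Python sketch (our own illustration, not from the text) builds this matrix for any goal and any p, and confirms each row sums to 1:

```python
import numpy as np

def gamblers_ruin_P(goal=4, p=0.5):
    """Transition matrix for the gambler's ruin with states 0..goal.
    States 0 and goal are absorbing; from 1..goal-1 we win $1 with
    probability p and lose $1 with probability 1 - p."""
    P = np.zeros((goal + 1, goal + 1))
    P[0, 0] = 1.0          # ruined: stay at $0
    P[goal, goal] = 1.0    # reached the goal: stay there
    for i in range(1, goal):
        P[i, i + 1] = p        # win the bet
        P[i, i - 1] = 1 - p    # lose the bet
    return P

P = gamblers_ruin_P(goal=4, p=0.6)
print(P)
print(P.sum(axis=1))  # every row sums to 1
```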

Page 10

Example 2: Choosing Balls from an Urn

An urn contains two unpainted balls at present. We choose a ball at random and flip a coin. If the chosen ball is unpainted and the coin comes up heads, we paint the chosen unpainted ball red; if the chosen ball is unpainted and the coin comes up tails, we paint the chosen unpainted ball black. If the ball has already been painted, then (whether heads or tails has been tossed) we change the color of the ball (from red to black or from black to red). We define time t to be the time after the coin has been flipped for the tth time and the chosen ball has been painted. The state at any time may be described by the vector [u r b], where u is the number of unpainted balls in the urn, r is the number of red balls in the urn, and b is the number of black balls in the urn. We are given that X0 = [2 0 0]. After the first coin toss, one ball will have been painted either red or black, and the state will be either [1 1 0] or [1 0 1]. That is, X1 = [1 1 0] or X1 = [1 0 1]. There are some relations between the Xt's. For example, if Xt = [0 2 0], we can be sure that Xt+1 = [0 1 1].

p.923

Page 11

Example: Choosing Balls (cont.)

Find the transition matrix. Sol: the state is the vector [u r b] defined above.

p.926

Figure 2 (state-transition diagram). With the states ordered [2 0 0], [1 1 0], [1 0 1], [0 2 0], [0 1 1], [0 0 2], the transition matrix is

$$P = \begin{bmatrix} 0 & \tfrac12 & \tfrac12 & 0 & 0 & 0 \\ 0 & 0 & \tfrac12 & \tfrac14 & \tfrac14 & 0 \\ 0 & \tfrac12 & 0 & 0 & \tfrac14 & \tfrac14 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & \tfrac12 & 0 & \tfrac12 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$$

Page 12

17.3 n-Step Transition Probabilities

A question of interest when studying a Markov chain is: if a Markov chain is in state i at time m, what is the probability that n periods later the Markov chain will be in state j?

This probability is independent of m, so we may write

P(Xm+n = j | Xm = i) = P(Xn = j | X0 = i) = Pij(n)

where Pij(n) is called the n-step probability of a transition from state i to state j. For n > 1, Pij(n) = ij-th element of P^n.

p.928
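The relation Pij(n) = ij-th element of P^n is straightforward to compute with a matrix power; a minimal Python sketch (numpy assumed, helper name ours):

```python
import numpy as np

def n_step_probability(P, i, j, n):
    """Return P_ij(n): the probability of moving from state i to
    state j in exactly n transitions (0-indexed states)."""
    Pn = np.linalg.matrix_power(np.asarray(P, dtype=float), n)
    return Pn[i, j]

# Cola example from later in this section (state 0 = cola 1, 1 = cola 2):
P = [[0.90, 0.10],
     [0.20, 0.80]]
print(n_step_probability(P, 1, 0, 2))  # P_21(2) = 0.34
print(n_step_probability(P, 0, 0, 3))  # P_11(3) = 0.781
```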

Page 13

Figure 3 illustrates a two-step transition from state i to state j: the first transition goes from i to some intermediate state k (k = 1, 2, …, s), and the second from k to j.

$$P_{ij}(2) = \sum_{k=1}^{s} p_{ik}\,p_{kj} = p_{i1}p_{1j} + p_{i2}p_{2j} + \cdots + p_{is}p_{sj}$$

p.928

Page 14

Example 4 : The Cola Example

The entire cola industry produces only two colas.

Given that a person last purchased cola 1, there is a 90% chance that her next purchase will be cola 1.

Given that a person last purchased cola 2, there is an 80% chance that her next purchase will be cola 2.

1. If a person is currently a cola 2 purchaser, what is the probability that she will purchase cola 1 two purchases from now?

2. If a person is currently a cola 1 purchaser, what is the probability that she will purchase cola 1 three purchases from now?

p.929

Page 15

Hence, each person's cola purchases may be represented by a two-state Markov chain, where

State 1 = person has last purchased cola 1
State 2 = person has last purchased cola 2

If we define Xn to be the type of cola purchased by a person on her nth future cola purchase, then X0, X1, … may be described as the Markov chain with the following transition matrix (rows and columns ordered cola 1, cola 2):

$$P = \begin{bmatrix} 0.90 & 0.10 \\ 0.20 & 0.80 \end{bmatrix}$$

Figure 4 (transition diagram): p11 = 0.9, p12 = 0.1, p21 = 0.2, p22 = 0.8.

Page 16

We can now answer questions 1 and 2.

1. P(X2 = 1 | X0 = 2) = P21(2) = element 21 of P²:

$$P^2 = \begin{bmatrix} 0.90 & 0.10 \\ 0.20 & 0.80 \end{bmatrix}\begin{bmatrix} 0.90 & 0.10 \\ 0.20 & 0.80 \end{bmatrix} = \begin{bmatrix} 0.83 & 0.17 \\ 0.34 & 0.66 \end{bmatrix}$$

Hence, P21(2) = 0.34. This means that the probability is 0.34 that two purchases in the future a cola 2 drinker will purchase cola 1.

By using basic probability theory, we may obtain this answer in a different way.

2. P11(3) = element 11 of P³:

$$P^3 = P(P^2) = \begin{bmatrix} 0.90 & 0.10 \\ 0.20 & 0.80 \end{bmatrix}\begin{bmatrix} 0.83 & 0.17 \\ 0.34 & 0.66 \end{bmatrix} = \begin{bmatrix} 0.781 & 0.219 \\ 0.438 & 0.562 \end{bmatrix}$$

Therefore, P11(3) = 0.781.

Page 17

Many times we do not know the state of the Markov chain at time 0. Then we can determine the probability that the system is in state j at time n by the following reasoning:

$$\text{Probability of being in state } j \text{ at time } n = \sum_{i=1}^{s} q_i\,P_{ij}(n)$$

where q = [q1, q2, …, qs] is the initial probability distribution.

Suppose 60% of all people now drink cola 1, and 40% now drink cola 2. Three purchases from now, what fraction of all purchasers will be drinking cola 1?

$$0.60\,P_{11}(3) + 0.40\,P_{21}(3) = 0.60(0.781) + 0.40(0.438) = 0.6438$$
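Equivalently, the row vector q P^n gives the whole distribution at time n; a small Python sketch of this computation (our own illustration):

```python
import numpy as np

P = np.array([[0.90, 0.10],
              [0.20, 0.80]])
q = np.array([0.60, 0.40])   # initial distribution over (cola 1, cola 2)

# Distribution three purchases from now: q P^3
dist3 = q @ np.linalg.matrix_power(P, 3)
print(dist3)  # [0.6438 0.3562] -> 64.38% will buy cola 1
```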

Page 18

To illustrate the behavior of the n-step transition probabilities for large values of n, we have computed several of the n-step transition probabilities for the Cola example.

This means that for large n, no matter what the initial state, there is a .67 chance that a person will be a cola 1 purchaser.

We can easily multiply matrices on a spreadsheet using the MMULT command.

Page 19

Exercise: p.931 #1

Each American family is classified as living in an urban, rural, or suburban location. During a given year, 15% of all urban families move to a suburban location, and 5% move to a rural location; also, 6% of all suburban families move to an urban location, and 4% move to a rural location; finally, 4% of all rural families move to an urban location, and 6% move to a suburban location.

a. If a family now lives in an urban location, what is the probability that it will live in an urban area 2 years from now? A suburban area? A rural area?

b. Suppose that at present, 40% of all families live in an urban area, 35% live in a suburban area, and 25% live in a rural area. Two years from now, what percentage of American families will live in an urban area?

c. What problems might occur if this model were used to predict the future population distribution of the United States?

Page 20

1a. With the states ordered U (urban), S (suburban), R (rural), the transition matrix is

$$P = \begin{bmatrix} 0.80 & 0.15 & 0.05 \\ 0.06 & 0.90 & 0.04 \\ 0.04 & 0.06 & 0.90 \end{bmatrix}$$

$$P_{UU}(2) = \begin{bmatrix} 0.80 & 0.15 & 0.05 \end{bmatrix}\begin{bmatrix} 0.80 \\ 0.06 \\ 0.04 \end{bmatrix} = 0.651$$

$$P_{US}(2) = \begin{bmatrix} 0.80 & 0.15 & 0.05 \end{bmatrix}\begin{bmatrix} 0.15 \\ 0.90 \\ 0.06 \end{bmatrix} = 0.258$$

$$P_{UR}(2) = \begin{bmatrix} 0.80 & 0.15 & 0.05 \end{bmatrix}\begin{bmatrix} 0.05 \\ 0.04 \\ 0.90 \end{bmatrix} = 0.091$$

(Equivalently, take q = [0.80 0.15 0.05], the urban row of P, and multiply by P once more.)
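A quick numerical check of parts (a) and (b), as a sketch of our own:

```python
import numpy as np

# States ordered U, S, R
P = np.array([[0.80, 0.15, 0.05],
              [0.06, 0.90, 0.04],
              [0.04, 0.06, 0.90]])

P2 = np.linalg.matrix_power(P, 2)
print(P2[0])              # [0.651 0.258 0.091] -> answers to part (a)

q = np.array([0.40, 0.35, 0.25])
print((q @ P2)[0])        # ~0.315 -> part (b): 31.5% urban in 2 years
```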

Page 21

1b. q = [0.40 0.35 0.25].

$$q \cdot (\text{column 1 of } P^2) = 0.40(0.651) + 0.35(0.104) + 0.25(0.072) = 0.315$$

About 31.5% of all families will live in an urban area 2 years from now.

1c. The model assumes a stationary Markov chain; if moving patterns change over time (a non-stationary Markov chain), its long-range predictions will be unreliable.

Page 22

Exercise: p.931 #2

The following questions refer to Example 1.

a. After playing the game twice, what is the probability that I will have $3? How about $2?

b. After playing the game three times, what is the probability that I will have $2?

Page 23

2.

a. p23(2) = 0, p22(2) = 2p(1 − p).

b. p22(3) = (the row for state $2 of P²) × (the column for state $2 of P).

With the states ordered 0, 1, 2, 3, 4,

$$P^2 = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 1-p & p(1-p) & 0 & p^2 & 0 \\ (1-p)^2 & 0 & 2p(1-p) & 0 & p^2 \\ 0 & (1-p)^2 & 0 & p(1-p) & p \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

so

$$p_{22}(3) = \begin{bmatrix} (1-p)^2 & 0 & 2p(1-p) & 0 & p^2 \end{bmatrix}\begin{bmatrix} 0 \\ p \\ 0 \\ 1-p \\ 0 \end{bmatrix} = 0$$

(After an odd number of games, the capital cannot be $2 again.)

Page 24

17.4 Classification of States in a Markov Chain

To understand the n-step transition probabilities in more detail, we need to study how mathematicians classify the states of a Markov chain.

The following transition matrix illustrates most of the following definitions.

$$P = \begin{bmatrix} 0.4 & 0.6 & 0 & 0 & 0 \\ 0.5 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0.3 & 0.7 & 0 \\ 0 & 0 & 0.5 & 0.4 & 0.1 \\ 0 & 0 & 0 & 0.8 & 0.2 \end{bmatrix}$$

p.931

Figure 6 (transition diagram for this five-state chain).

Page 25

Given two states i and j, a path from i to j is a sequence of transitions that begins in i and ends in j, such that each transition in the sequence has a positive probability of occurring.

A state j is reachable from state i if there is a path leading from i to j.

Two states i and j are said to communicate if j is reachable from i, and i is reachable from j.

A set of states S in a Markov chain is a closed set if no state outside of S is reachable from any state in S.

A state i is an absorbing state ( 吸收狀態 ) if pii = 1. (Once the chain enters an absorbing state, it never changes state.)

A state i is a transient state ( 轉移狀態 ) if there exists a state j that is reachable from i, but state i is not reachable from state j.

If a state is not transient, it is called a recurrent state ( 返回狀態 ).

Page 26

A state i is periodic with period k > 1 if k is the smallest number such that all paths leading from state i back to state i have a length that is a multiple of k. If a recurrent state is not periodic, it is referred to as aperiodic.

If all states in a chain are recurrent, aperiodic, and communicate with each other, the chain is said to be ergodic.

Page 27

For the five-state chain of Figure 6: state 5 is reachable from state 3 (path 3-4-5), but state 5 is not reachable from state 1. States 1 and 2 communicate. S1 = {1, 2} and S2 = {3, 4, 5} are both closed sets.

For the gambler's ruin chain of Figure 1: states 0 and 4 are absorbing states. States 1, 2, and 3 are transient states (e.g., the path 2-3-4 leaves state 2 and can never return). States 0 and 4 are recurrent states.
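Reachability and transience can be checked mechanically from the zero pattern of P. The following Python sketch (our own helpers, not from the text) uses breadth-first search:

```python
import numpy as np
from collections import deque

def reachable_from(P, i):
    """Set of states reachable from i (including i itself) via
    positive-probability transitions."""
    P = np.asarray(P)
    seen, queue = {i}, deque([i])
    while queue:
        u = queue.popleft()
        for v in np.nonzero(P[u] > 0)[0]:
            if v not in seen:
                seen.add(int(v))
                queue.append(int(v))
    return seen

def is_transient(P, i):
    """i is transient if some state reachable from i cannot reach i."""
    return any(i not in reachable_from(P, j) for j in reachable_from(P, i))

# Gambler's ruin with p = 0.5 (states 0..4):
P = np.array([[1, 0, 0, 0, 0],
              [.5, 0, .5, 0, 0],
              [0, .5, 0, .5, 0],
              [0, 0, .5, 0, .5],
              [0, 0, 0, 0, 1]])
print([is_transient(P, i) for i in range(5)])
# [False, True, True, True, False] -> 1, 2, 3 transient; 0, 4 recurrent
```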

Page 28

Figure 1 (the gambler's ruin chain) is not an ergodic chain, because (for example) states 3 and 4 do not communicate.

The cola example is an ergodic Markov chain.

Page 29

17.5 Steady-State Probabilities and Mean First Passage Times

Steady-state probabilities are used to describe the long-run behavior of a Markov chain.

Theorem 1 (regular Markov chains): Let P be the transition matrix for an s-state ergodic chain. Then there exists a vector π = [π1 π2 … πs] such that

$$\lim_{n \to \infty} P^n = \begin{bmatrix} \pi_1 & \pi_2 & \cdots & \pi_s \\ \pi_1 & \pi_2 & \cdots & \pi_s \\ \vdots & \vdots & & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_s \end{bmatrix}$$

p.934

Page 30

Theorem 1 tells us that for any initial state i,

$$\lim_{n \to \infty} P_{ij}(n) = \pi_j$$

The vector π = [π1 π2 … πs] is often called the steady-state distribution, or equilibrium distribution, for the Markov chain.

To find π, note that

$$P_{ij}(n+1) = \sum_{k=1}^{s} P_{ik}(n)\,p_{kj}$$

Letting n → ∞ on both sides gives the steady-state equations

$$\pi_j = \sum_{k=1}^{s} \pi_k\,p_{kj} \qquad (j = 1, 2, \ldots, s)$$

together with the normalization (since Pi1(n) + Pi2(n) + … + Pis(n) = 1 for every n)

$$\pi_1 + \pi_2 + \cdots + \pi_s = 1$$

Page 31

A Markov chain is regular if some power of P has all entries positive (x denotes a nonzero entry below). Even when P itself contains zeros, the zeros can disappear in higher powers. For example, a chain with the pattern

$$P = \begin{bmatrix} 0 & x & x \\ 0 & 0 & x \\ x & 0 & x \end{bmatrix}$$

has

$$P^2 = \begin{bmatrix} x & 0 & x \\ x & 0 & x \\ x & x & x \end{bmatrix}, \qquad P^3 = \begin{bmatrix} x & x & x \\ x & x & x \\ x & x & x \end{bmatrix}$$

Since every entry of P³ is nonzero, it is a regular Markov chain.

Page 32

By contrast, consider a chain with the pattern

$$P = \begin{bmatrix} x & x \\ 0 & 1 \end{bmatrix}$$

Every power of P has the same pattern:

$$P^2 = \begin{bmatrix} x & x \\ 0 & 1 \end{bmatrix}, \qquad P^n = \begin{bmatrix} x & x \\ 0 & 1 \end{bmatrix}$$

The zero in the lower-left corner never disappears, so it is not a regular Markov chain. However, state 2 is absorbing (p22 = 1) and is reachable from state 1, so it is an absorbing Markov chain.

Page 33

Finally, consider

$$P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

The powers of P alternate:

$$P^2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad P^3 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \ \ldots$$

so no power of P is all-positive: it is not a regular Markov chain. It has no absorbing state, so it is not an absorbing Markov chain either. (Every path from a state back to itself has even length, so each state is periodic with period 2.)

Page 34

Example: The Cola Example (steady-state probabilities)

p.937

For the cola example, the steady-state equations πj = Σk πk pkj become

π1 = 0.90π1 + 0.20π2
π2 = 0.10π1 + 0.80π2

Replacing the second equation by π1 + π2 = 1 and solving, we obtain π1 = 2/3 and π2 = 1/3.

After a long time, there is 2/3 probability that a given person will purchase cola 1 and a 1/3 probability that a given person will purchase cola 2.
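The steady-state equations can also be solved numerically by replacing one balance equation with the normalization Σπi = 1; a minimal Python sketch (our own helper):

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1 by replacing the
    last balance equation with the normalization condition."""
    P = np.asarray(P, dtype=float)
    s = P.shape[0]
    A = P.T - np.eye(s)   # balance equations: (P^T - I) pi = 0
    A[-1, :] = 1.0        # replace last row by sum(pi) = 1
    b = np.zeros(s)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = [[0.90, 0.10],
     [0.20, 0.80]]
print(steady_state(P))  # [0.6667 0.3333] -> pi1 = 2/3, pi2 = 1/3
```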

Page 35

Example 5 : The Cola Example

In the Cola Example, suppose that each customer makes one purchase of cola during any week (52 weeks = 1 year).

Suppose there are 100 million cola customers.

One selling unit of cola costs the company $1 to produce and is sold for $2.

For $500 million per year, an advertising firm guarantees to decrease from 10% to 5% the fraction of cola 1 customers who switch after a purchase.

Should the company that makes cola 1 hire the advertising firm?

p.937

Page 36

At present, a fraction π1 = ⅔ of all purchases are cola 1 purchases.

Each purchase of cola 1 earns the company a $1 profit. Since 100 million customers each make 52 purchases per year, there are 5,200,000,000 cola purchases per year, and the current annual profit is (2/3)(5,200,000,000) = $3,466,666,667.

The advertising firm is offering to change the P matrix to

$$P_1 = \begin{bmatrix} .95 & .05 \\ .20 & .80 \end{bmatrix}$$

Page 37

For P1, the steady-state equations become

π1 = 0.95π1 + 0.20π2
π2 = 0.05π1 + 0.80π2

Replacing the second equation by π1 + π2 = 1 and solving, we obtain π1 = 0.8 and π2 = 0.2.

Now the cola 1 company’s annual profit will be $3,660,000,000.

(0.8)(5,200,000,000) - 500,000,000 = 3,660,000,000

The cola 1 company should hire the ad agency.

Page 38

Exercise:

Weather records show that the weather in a certain mountain area depends on the past two days. If it has been sunny for two consecutive days, the probability that tomorrow is sunny is 0.90.

If yesterday was rainy and today is sunny, the probability that tomorrow is sunny is 0.75.

If yesterday was sunny and today is rainy, the probability that tomorrow is sunny is 0.60.

If it has been rainy for two consecutive days, the probability that tomorrow is sunny is 0.45.

Formulate this problem as a Markov chain.

33+34
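One way to formulate this (a sketch of our own, not the text's solution): let the state be the last two days of weather, ordered SS, RS (rain yesterday, sun today), SR, RR; from state (yesterday, today) the chain moves to (today, tomorrow) with the given probabilities of a sunny tomorrow.

```python
import numpy as np

# State = (yesterday, today); order: SS, RS, SR, RR
states = ["SS", "RS", "SR", "RR"]
p_sun = {"SS": 0.90, "RS": 0.75, "SR": 0.60, "RR": 0.45}

P = np.zeros((4, 4))
for i, s in enumerate(states):
    today = s[1]
    nxt_sun = today + "S"    # tomorrow sunny -> new state (today, S)
    nxt_rain = today + "R"   # tomorrow rainy -> new state (today, R)
    P[i, states.index(nxt_sun)] = p_sun[s]
    P[i, states.index(nxt_rain)] = 1 - p_sun[s]
print(P)
```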

Page 39

Exercise:

Two major brands of manual razors (A and B) account for almost the entire market, with market shares of 45% and 30% respectively; the remaining 25% of consumers use electric razors (C). According to a market research firm's analysis, the consumer transition probability matrix, with the states ordered A, B, C, is

$$P = \begin{bmatrix} .8 & .1 & .1 \\ .2 & .7 & .1 \\ .2 & .2 & .6 \end{bmatrix}$$

(1) What is the probability that a current brand A user uses brand B two years from now? (2) What is the probability that a current electric razor user uses a manual razor two years from now? (3) What are the market shares of A, B, and C two years from now? (4) What are the long-run market shares of A, B, and C?

Page 40

Solution:

$$P^2 = \begin{bmatrix} .68 & .17 & .15 \\ .32 & .53 & .15 \\ .32 & .28 & .40 \end{bmatrix}$$

(1) The probability that a brand A user uses brand B two years from now is PAB(2) = 0.17.

(2) The probability that a current electric razor user uses a manual razor two years from now is 0.32 + 0.28 = 0.60.

(3) The market shares two years from now are [.45 .30 .25] P² = [.482 .306 .212].

(4) The long-run market shares satisfy

πA + πB + πC = 1
πA = .8πA + .2πB + .2πC
πB = .1πA + .7πB + .2πC
πC = .1πA + .1πB + .6πC

giving πA = .5, πB = .3, πC = .2.
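A numerical check of this solution (our own sketch):

```python
import numpy as np

P = np.array([[0.8, 0.1, 0.1],   # A
              [0.2, 0.7, 0.1],   # B
              [0.2, 0.2, 0.6]])  # C

P2 = np.linalg.matrix_power(P, 2)
print(P2[0, 1])                  # (1) P_AB(2) = 0.17
print(P2[2, 0] + P2[2, 1])       # (2) 0.32 + 0.28 = 0.60
print(np.array([0.45, 0.30, 0.25]) @ P2)   # (3) [0.482 0.3055 0.2125]

# (4) long-run shares: solve pi = pi P with sum(pi) = 1
A = P.T - np.eye(3)
A[-1, :] = 1.0
print(np.linalg.solve(A, [0, 0, 1]))       # [0.5 0.3 0.2]
```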

Page 41

Mean First Passage Times

For an ergodic chain, let mij = expected number of transitions before we first reach state j, given that we are currently in state i; mij is called the mean first passage time ( 平均第一次通過時間 ) from state i to state j.

Assume we are currently in state i. Then with probability pij, it will take one transition to go from state i to state j. For k ≠ j, we next go with probability pik to state k, in which case it will take an average of 1 + mkj transitions to go from i to j.

p.939

Page 42

This reasoning implies

$$m_{ij} = 1 + \sum_{k \neq j} p_{ik}\,m_{kj}$$

By solving these linear equations, we find all the mean first passage times. It can be shown that

$$m_{ii} = \frac{1}{\pi_i}$$

Page 43

Example : The Cola Example

π1 = 2/3, π2 = 1/3

then m11 = 1/(2/3) = 1.5 and m22 = 1/(1/3) = 3.

m12 = 1 + p11 m12 = 1 + 0.9 m12
m21 = 1 + p22 m21 = 1 + 0.8 m21

Solving these two equations, we find that m12 = 10 and m21 = 5.

This means that a person who last drank cola 1 will drink an average of 10 bottles of soda before switching to cola 2.

p.939
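Mean first passage times can be computed for any ergodic chain by solving mij = 1 + Σ_{k≠j} pik mkj for each fixed target j; a small Python sketch (our own helper):

```python
import numpy as np

def mean_first_passage(P):
    """m[i, j] = expected transitions to first reach j from i,
    solving m_ij = 1 + sum_{k != j} p_ik m_kj for each fixed j."""
    P = np.asarray(P, dtype=float)
    s = P.shape[0]
    m = np.zeros((s, s))
    for j in range(s):
        A = np.eye(s) - P
        A[:, j] += P[:, j]   # drop the k = j term from the sum
        m[:, j] = np.linalg.solve(A, np.ones(s))
    return m

P = [[0.90, 0.10],
     [0.20, 0.80]]
print(mean_first_passage(P))
# [[ 1.5 10. ]
#  [ 5.   3. ]]
```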

Page 44

Solving for Steady-State Probabilities and Mean First Passage Times on the Computer

Since we solve steady-state probabilities and mean first passage times by solving a system of linear equations, we may use LINDO to determine them.

Simply type in an objective function of 0, and type the equations you need to solve as your constraints.

Alternatively, you may use the LINGO model in the file Markov.lng to determine steady-state probabilities and mean first passage times for an ergodic chain.

Page 45

Exercise: p.940 #1~#9

Chapter 17-exercise.doc

Page 46

Solution : p.940 #1~#3

1.

After replacing the third steady-state equation by π1 + π2 + π3 = 1 (1 = U, 2 = S, 3 = R), we solve:

π1=.80π1+.06π2+.04π3

π2=.15π1+.90π2+.06π3

π1+π2+π3=1

π1 = 38/183, π2 = 90/183, π3 = 55/183

Thus about 49% of the families will eventually live in suburban areas; about 21% in urban areas; and about 30% in rural areas.

Page 47

2.

Clearly the gambler will end up with either $4 or $0. Since the probability of ending up in these states depends on your initial wealth level, no steady state probabilities exist.

Page 48

3a. π1 = (2/3)π1 + (1/2)π2 and π1 + π2 = 1, so π1 = 0.6 and π2 = 0.4.

3b. π1 = .8π1 + .8π3
π3 = .8π2
π1 + π2 + π3 = 1

so π1 = 16/25, π2 = 1/5, π3 = 4/25.

3c. m11 = 1/π1 = 1/.64 = 1.56, m22 = 1/π2 = 1/.2 = 5, m33 = 1/π3 = 1/.16 = 6.25

m12 = 1 + .8m12
m13 = 1 + .8m13 + .2m23
m21 = 1 + .2m21 + .8m31
m23 = 1 + .2m23
m31 = 1 + .2m21
m32 = 1 + .8m12

Solving gives m11 = 1.56, m12 = 5, m13 = 6.25, m21 = 2.81, m22 = 5, m23 = 1.25, m31 = 1.56, m32 = 5, m33 = 6.25.

Page 49

4.

(1) If we do not replace a fair car, the transition matrix (note: the state is the condition of the car at the beginning of the year, before a car is replaced), with the states ordered G (good), F (fair), BD (broken down), is

$$TM = \begin{bmatrix} .85 & .10 & .05 \\ 0 & .70 & .30 \\ .85 & .10 & .05 \end{bmatrix}$$

Solving gives πG = .64, πF = .25, πBD = .11.

Expected cost per year = 1000(.64 + .11) + 1500(.25) + 6000(.11) = $1785.

This is because we will have a good car during any year that begins with a good or broken-down car.

(2) If we do replace a fair car, every row of the transition matrix becomes (.85, .10, .05):

$$TM = \begin{bmatrix} .85 & .10 & .05 \\ .85 & .10 & .05 \\ .85 & .10 & .05 \end{bmatrix}$$

so πG = .85, πF = .10, and πBD = .05.

Expected net cost per year = 0.15(6000) + 1(1000) - 2000(0.10) = $1700.

This is because we will have a good car during every year and will trade in a fair car during 10% of all years.

Page 50

5.

We must show that for an s-state chain with a doubly stochastic transition matrix (one whose columns, as well as rows, each sum to 1), all πi = 1/s.

The steady-state equation for state i is

$$\pi_i = \sum_{k=1}^{s} \pi_k\,p_{ki} \qquad (1)$$

If all πi = 1/s, (1) becomes

$$\frac{1}{s} = \frac{1}{s}\sum_{k=1}^{s} p_{ki}$$

By assumption (each column of P sums to 1),

$$\sum_{k=1}^{s} p_{ki} = 1$$

so πi = 1/s does satisfy all the steady-state equations.

Page 51

6a. The probability of being in state i after one transition is π1 p1i + π2 p2i + … + πs psi. But from Eq. (8), this is just πi.

6b. To determine the probability of being in state i after n transitions, the relevant initial probability is where we are after n − 1 transitions. By repeating the reasoning in (a), we see that the probability of being in state i after n transitions is still πi.

6c. If we begin with a probability πi of being in state i,

then as time goes on the probability of being in state i remains unchanged, or stationary.


Page 52

7. For Stock 1 (price $10 or $20),

$$P = \begin{bmatrix} .80 & .20 \\ .10 & .90 \end{bmatrix}$$

π10 = .8π10 + .1π20 and π10 + π20 = 1, so π10 = 1/3 and π20 = 2/3.

The expected price of Stock 1 is (1/3)10 + (2/3)20 = $50/3 ≈ $16.67.

For Stock 2 (price $10 or $25),

$$P = \begin{bmatrix} .90 & .10 \\ .15 & .85 \end{bmatrix}$$

π10 = .9π10 + .15π25 and π10 + π25 = 1, so π10 = .60 and π25 = .40.

The expected price of Stock 2 is (.60)10 + (.40)25 = $16. Thus Stock 1 will sell for a higher average price than Stock 2.

For Stock 1: m10,10 = 1/π10 = 3, m20,20 = 1/π20 = 1.5, m10,20 = 1 + .8m10,20 so m10,20 = 5, and m20,10 = 1 + .9m20,10 so m20,10 = 10.

For example, if the price of Stock 1 is now $20, it will take an average of 10 days before the price first becomes $10.

For Stock 2: m10,10 = 1/π10 = 1.67, m25,25 = 1/π25 = 2.5, m10,25 = 1 + .9m10,25 so m10,25 = 10, and m25,10 = 1 + .85m25,10 so m25,10 = 6.67.

Page 53

8a. Let the state during a period equal the number of balls in Container 1. Then the transition matrix (states ordered 0, 1, 2, 3) is

$$P = \begin{bmatrix} 0 & 1 & 0 & 0 \\ \tfrac13 & 0 & \tfrac23 & 0 \\ 0 & \tfrac23 & 0 & \tfrac13 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

Find the steady-state probabilities by solving

π0 = π1/3, π1 = π0 + 2π2/3, π2 = 2π1/3 + π3, π0 + π1 + π2 + π3 = 1

which gives π0 = 1/8, π1 = 3/8, π2 = 3/8, π3 = 1/8.

8b. m0,0 = 1/π0 = 8.

Page 54

9.

We can find the steady state probabilities by solving the following equations

πG = .7πG + .2πB + .1πBoth + .05πN.

πB= .2πG + .6πB + .1πBoth + .05πN

πN = .05πG + .1πB + .8πN.

πBoth = 1 – (sum of other probabilities).

πG= .286, πBoth= .286, πB= .238, πN= .19.

Thus gray squirrels are present during 57.2% of all weeks, while black squirrels are present during 52.4% of all weeks.

Page 55

17.6 Absorbing Chains ( 吸收鏈 )

Many interesting applications of Markov chains involve chains in which some of the states are absorbing and the rest are transient states.

This type of chain is called an absorbing chain.

To see why we are interested in absorbing chains, we consider the following accounts receivable example.

An absorbing Markov chain (1) has at least one absorbing state, and (2) from every nonabsorbing state, an absorbing state can be reached after some number of transitions.

For a chain that starts in a nonabsorbing state, we will ask: the probability that it is absorbed by each particular absorbing state; the expected number of visits to each nonabsorbing state before absorption; and the expected number of transitions before it is absorbed.

p.942

Page 56

Example 7 : Accounts Receivable

The accounts receivable situation of a firm is often modeled as an absorbing Markov chain.

Suppose a firm assumes that an account is uncollected if the account is more than three months overdue.

Then at the beginning of each month, each account may be classified into one of the following states:

State 1: New account
State 2: Payment on account is one month overdue
State 3: Payment on account is two months overdue
State 4: Payment on account is three months overdue
State 5: Account has been paid
State 6: Account is written off as bad debt

Page 57

Suppose that past data indicate that the following Markov chain describes how the status of an account changes from one month to the next month:

$$P = \begin{bmatrix} 0 & .6 & 0 & 0 & .4 & 0 \\ 0 & 0 & .5 & 0 & .5 & 0 \\ 0 & 0 & 0 & .4 & .6 & 0 \\ 0 & 0 & 0 & 0 & .7 & .3 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

(states ordered New, 1 month, 2 months, 3 months, Paid, Bad Debt)

Page 58

To simplify our example, we assume that after three months, a debt is either collected or written off as a bad debt.

Once a debt is paid up or written off as a bad debt, the account is closed, and no further transitions occur.

Hence, Paid or Bad Debt are absorbing states. Since every account will eventually be paid or written off as a bad debt, New, 1 month, 2 months, and 3 months are transient states.

Page 59

A typical new account will be absorbed as either a collected debt or a bad debt.

What is the probability that a new account will eventually be collected?

To answer this question we must write the transition matrix in a standard form. Assume there are s − m transient (nonabsorbing) states and m absorbing states, with the transient states listed first. The transition matrix is written in the form

$$P = \begin{bmatrix} Q & R \\ O & I \end{bmatrix}$$

where the first s − m rows and columns correspond to the nonabsorbing states and the last m rows and columns correspond to the absorbing states.

Page 60

Q: the probabilities of moving between nonabsorbing states.
R: the probabilities of moving from a nonabsorbing state to an absorbing state in the next transition.
O: an m × (s − m) zero matrix.
I: the m × m identity matrix.

Page 61

The transition matrix for this example, with the states ordered New, 1 month, 2 months, 3 months, Paid, Bad Debt, is

$$P = \begin{bmatrix} 0 & .6 & 0 & 0 & .4 & 0 \\ 0 & 0 & .5 & 0 & .5 & 0 \\ 0 & 0 & 0 & .4 & .6 & 0 \\ 0 & 0 & 0 & 0 & .7 & .3 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

Then s = 6, m = 2, and

$$Q = \begin{bmatrix} 0 & .6 & 0 & 0 \\ 0 & 0 & .5 & 0 \\ 0 & 0 & 0 & .4 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad R = \begin{bmatrix} .4 & 0 \\ .5 & 0 \\ .6 & 0 \\ .7 & .3 \end{bmatrix}$$

Page 62

1. What is the probability that a new account will eventually be collected?

2. What is the probability that a one-month overdue account will eventually become a bad debt?

3. If the firm's sales average $100,000 per month, how much money per year will go uncollected?

$$I - Q = \begin{bmatrix} 1 & -.6 & 0 & 0 \\ 0 & 1 & -.5 & 0 \\ 0 & 0 & 1 & -.4 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

p.945

Page 63

The expected number of visits to each nonabsorbing state before absorption is given by N = (I − Q)⁻¹.

By using the Gauss-Jordan method, we find that (with t1 = New, t2 = 1 month, t3 = 2 months, t4 = 3 months)

$$(I-Q)^{-1} = \begin{bmatrix} 1 & .60 & .30 & .12 \\ 0 & 1 & .50 & .20 \\ 0 & 0 & 1 & .40 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

Page 64

To answer questions 1-3, we need to compute the probability that the chain is absorbed by each absorbing state, B = (I − Q)⁻¹R. With a1 = Paid and a2 = Bad Debt,

$$(I-Q)^{-1}R = \begin{bmatrix} .964 & .036 \\ .940 & .060 \\ .880 & .120 \\ .700 & .300 \end{bmatrix}$$

(rows t1 = New, t2 = 1 month, t3 = 2 months, t4 = 3 months)
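Both matrices are mechanical to compute; a short Python sketch for this example (our own illustration):

```python
import numpy as np

# Transient states: New, 1 month, 2 months, 3 months
Q = np.array([[0, .6, 0, 0],
              [0, 0, .5, 0],
              [0, 0, 0, .4],
              [0, 0, 0, 0]])
# Absorbing states: Paid, Bad Debt
R = np.array([[.4, 0],
              [.5, 0],
              [.6, 0],
              [.7, .3]])

N = np.linalg.inv(np.eye(4) - Q)   # expected visits before absorption
B = N @ R                          # absorption probabilities
print(B[0, 0])        # 0.964: a new account is eventually collected
print(B[1, 1])        # 0.060: a 1-month-overdue account goes bad
print(N.sum(axis=1))  # expected transitions before absorption (T = Ne)
```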

Page 65

1. t1 = New, a1 = Paid. Thus, the probability that a new account is eventually collected is element 11 of (I − Q)⁻¹R = .964.

2. t2 = 1 month, a2 = Bad Debt. Thus, the probability that a one-month overdue account turns into a bad debt is element 22 of (I − Q)⁻¹R = .06.

3. From answer 1, only 3.6% of all debts are uncollected. Since yearly accounts receivable average $1,200,000, (0.036)(1,200,000) = $43,200 per year will be uncollected.

p.946

Page 66

The expected number of transitions before a chain starting in a nonabsorbing state is absorbed is given by T = Ne, where e is a column vector whose entries all equal 1; the elements of T are the row sums of N.

Page 67

Example 8: Work-Force Planning

The law firm of Mason and Burger employs three types of lawyers: junior lawyers, senior lawyers, and partners.

The transition matrix, with the states ordered Junior, Senior, Partner, Leave as NP (nonpartner), Leave as P (partner), is as follows:

$$P = \begin{bmatrix} .80 & .15 & 0 & .05 & 0 \\ 0 & .70 & .20 & .10 & 0 \\ 0 & 0 & .95 & 0 & .05 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

p.946

Page 68

What is the average length of time that a newly hired junior lawyer spends working for the firm?

What is the probability that a junior lawyer makes it to partner?

What is the average length of time that a partner spends with the firm (as a partner)?

Page 69

Solution:

$$Q = \begin{bmatrix} .80 & .15 & 0 \\ 0 & .70 & .20 \\ 0 & 0 & .95 \end{bmatrix}, \qquad R = \begin{bmatrix} .05 & 0 \\ .10 & 0 \\ 0 & .05 \end{bmatrix}$$

$$I - Q = \begin{bmatrix} .20 & -.15 & 0 \\ 0 & .30 & -.20 \\ 0 & 0 & .05 \end{bmatrix}$$

With t1 = Junior, t2 = Senior, t3 = Partner and a1 = Leave as NP, a2 = Leave as P,

$$(I-Q)^{-1} = \begin{bmatrix} 5 & 2.5 & 10 \\ 0 & 10/3 & 40/3 \\ 0 & 0 & 20 \end{bmatrix}, \qquad (I-Q)^{-1}R = \begin{bmatrix} .50 & .50 \\ 1/3 & 2/3 \\ 0 & 1 \end{bmatrix}$$

Page 70

1. Expected time a junior lawyer stays with the firm
= (expected time as a junior) + (expected time as a senior) + (expected time as a partner)
= element (1,1) + element (1,2) + element (1,3) of (I − Q)⁻¹
= 5 + 2.5 + 10 = 17.5 (years)

2. The probability that a new junior lawyer makes it to partner is just the probability that he or she leaves the firm as a partner. The answer is element 12 of (I − Q)⁻¹R = 0.50.

3. Since t3 = Partner, we seek the expected number of years spent in t3, given that we begin in t3. This is just element 33 of (I − Q)⁻¹ = 20 years.

Page 71

Exercise:

At a business university, students in a two-year bachelor program enter as juniors (after a junior-college degree) and earn the degree after the junior and senior years. The Markov chain for a student's enrollment status, with J = junior, S = senior, G = graduated, Q = quit, is

$$P = \begin{bmatrix} .08 & .82 & 0 & .10 \\ 0 & .12 & .80 & .08 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} Q & R \\ 0 & I \end{bmatrix}$$

The school admits no transfer students, so every student starts as a junior. (1) On average, how many years does a student stay enrolled (whether he or she eventually graduates or quits)? (2) What is the probability that an entering junior eventually graduates?

39+40

Page 72

Solution:

$$I - Q = \begin{bmatrix} .92 & -.82 \\ 0 & .88 \end{bmatrix}, \qquad (I-Q)^{-1} = \begin{bmatrix} 1.087 & 1.013 \\ 0 & 1.136 \end{bmatrix}$$

$$(I-Q)^{-1}R = \begin{bmatrix} 1.087 & 1.013 \\ 0 & 1.136 \end{bmatrix}\begin{bmatrix} 0 & .10 \\ .80 & .08 \end{bmatrix} = \begin{bmatrix} .810 & .190 \\ .909 & .091 \end{bmatrix}$$

(1) A student stays enrolled an average of 1.087 + 1.013 = 2.1 years.

(2) The probability that an entering junior eventually graduates is 0.81.

Page 73

Chapter 17-exercise.doc

Exercise : p.947 #1~#4

Page 74

Solution:

1. With the transient states ordered Fr, So, Jr, Sr and the absorbing states Quit, Graduate:

$$Q = \begin{bmatrix} .10 & .80 & 0 & 0 \\ 0 & .10 & .85 & 0 \\ 0 & 0 & .15 & .80 \\ 0 & 0 & 0 & .10 \end{bmatrix}, \qquad R = \begin{bmatrix} .10 & 0 \\ .05 & 0 \\ .05 & 0 \\ .05 & .85 \end{bmatrix}$$

$$I - Q = \begin{bmatrix} .90 & -.80 & 0 & 0 \\ 0 & .90 & -.85 & 0 \\ 0 & 0 & .85 & -.80 \\ 0 & 0 & 0 & .90 \end{bmatrix}$$

$$(I-Q)^{-1} = \begin{bmatrix} 1.11 & .99 & .99 & .88 \\ 0 & 1.11 & 1.11 & .99 \\ 0 & 0 & 1.18 & 1.05 \\ 0 & 0 & 0 & 1.11 \end{bmatrix}$$

Page 75

1a.

Exp. number of years as a Freshman = element (1,1) of (I−Q)⁻¹ = 1.11
Exp. number of years as a Sophomore = element (1,2) of (I−Q)⁻¹ = .99
Exp. number of years as a Junior = element (1,3) of (I−Q)⁻¹ = .99
Exp. number of years as a Senior = element (1,4) of (I−Q)⁻¹ = .88

so the expected number of years an entering freshman spends at I.U. is 1.11 + .99 + .99 + .88 = 3.97.

Note that this is less than 4 years, because many entering freshmen quit before their senior year.

1b. The probability that an entering freshman graduates is element (1,2) of (I−Q)⁻¹R = [1.11 .99 .99 .88] · [0 0 0 .85]ᵀ = .748.

Page 76

2.

With the states ordered New Sub., 1 Yr. Sub., 2 Yrs. Sub., Cancelled, the transition matrix is

$$P = \begin{bmatrix} 0 & .80 & 0 & .20 \\ 0 & 0 & .90 & .10 \\ 0 & 0 & .96 & .04 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

so

$$Q = \begin{bmatrix} 0 & .80 & 0 \\ 0 & 0 & .90 \\ 0 & 0 & .96 \end{bmatrix}, \qquad (I-Q)^{-1} = \begin{bmatrix} 1 & .80 & 18 \\ 0 & 1 & 22.5 \\ 0 & 0 & 25 \end{bmatrix}$$

We are starting in state 1 (New Subscriber), so the expected number of years that a person remains a subscriber is the sum of row 1 of (I−Q)⁻¹: 1 + .80 + 18 = 19.80 years.

Page 77

3.

With two transient states (1 and 2) and the absorbing states ordered Die, Sold $20, Sold $30, Sold $50, the transition matrix is

$$P = \begin{bmatrix} .3 & .2 & .4 & .1 & 0 & 0 \\ 0 & .3 & 0 & 0 & .2 & .5 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

$$Q = \begin{bmatrix} .3 & .2 \\ 0 & .3 \end{bmatrix}, \qquad R = \begin{bmatrix} .4 & .1 & 0 & 0 \\ 0 & 0 & .2 & .5 \end{bmatrix}$$

$$I - Q = \begin{bmatrix} .7 & -.2 \\ 0 & .7 \end{bmatrix}, \qquad (I-Q)^{-1} = \begin{bmatrix} 10/7 & 20/49 \\ 0 & 10/7 \end{bmatrix}$$

Page 78

$$(I-Q)^{-1}R = \begin{bmatrix} 4/7 & 1/7 & 4/49 & 10/49 \\ 0 & 0 & 2/7 & 5/7 \end{bmatrix}$$

3a. We seek entry (1,1) of (I − Q)⁻¹R = 4/7.

3b. (4/7)(0) + (1/7)(20) + (4/49)(30) + (10/49)(50) = $15.51.

Page 79

4.

NH = No History, LI = Low Interest, HI = High Interest, S = Sale, D = Deleted.

With the states ordered NH, LI, HI, S, D:

$$P = \begin{bmatrix} 0 & .6 & .3 & 0 & .1 \\ 0 & .3 & .2 & .3 & .2 \\ 0 & .1 & .4 & .5 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

$$Q = \begin{bmatrix} 0 & .6 & .3 \\ 0 & .3 & .2 \\ 0 & .1 & .4 \end{bmatrix}, \qquad R = \begin{bmatrix} 0 & .1 \\ .3 & .2 \\ .5 & 0 \end{bmatrix}$$

$$I - Q = \begin{bmatrix} 1 & -.6 & -.3 \\ 0 & .7 & -.2 \\ 0 & -.1 & .6 \end{bmatrix}, \qquad (I-Q)^{-1} = \begin{bmatrix} 1 & .975 & .825 \\ 0 & 1.5 & .5 \\ 0 & .25 & 1.75 \end{bmatrix}$$

$$(I-Q)^{-1}R = \begin{bmatrix} .705 & .295 \\ .70 & .30 \\ .95 & .05 \end{bmatrix}$$

Page 80

4a. Element (1,1) of (I−Q)⁻¹R = .705.

4b. Element (2,2) of (I−Q)⁻¹R = .30.

4c. Sum of the elements in row 1 of (I−Q)⁻¹ = 1 + .975 + .825 = 2.8.

Page 81

17.7 Work-Force Planning Models

Many organizations employ several categories of workers.

For long-term planning purposes, it is often useful to be able to predict the number of employees of each type who will be available in the steady state.

Such predictions can be made via an analysis similar to the steady-state probability analysis for Markov chains.

Consider an organization whose members are classified at any point in time into one of s groups.

p.950

Page 82

During every time period, a fraction pij of those who begin a time period in group i begin the next time period in group j.

Also during every time period, a fraction pi,s+1 of all group i members leave the organization.

Let P be the s x (s+1) matrix whose ij th entry is pij.

At the beginning of each time period, the organization hires Hi group i members.

Let Ni(t) be the number of group i members at the beginning of period t.

A question of natural interest is whether Ni(t) approaches a limit as t grows large (call the limit, if it exists, Ni).

Page 83

If each Ni(t) approaches a limit, we call N = (N1, N2, …, Ns) the steady-state census of the organization.

The equations used to compute the steady-state census equate the flow into each group with the flow out of it (entering group i = leaving group i):

$$H_i + \sum_{k \neq i} N_k\,p_{ki} = N_i \sum_{k \neq i} p_{ik} \qquad (i = 1, 2, \ldots, s)$$

The following identity can be used to simplify the equation above:

$$\sum_{k \neq i} p_{ik} = 1 - p_{ii}$$

If a steady-state census does not exist, then the equations will have no solution.

Page 84

Example 9 : Steady-State Census

Let Group 1=children, Group 2=working adults, Group 3=retired people, Group 4=died.

We are given that H1=1000, H2=H3=0. and

050.950.00

010.030.960.0

001.0040.959.

P

p.951

Page 85

1.Determine the steady-state census:

entering group i each year=leaving group i each year

1000 = (.04 + .001)N1 (children)

.04N1 = (.03 + .01)N2 (working adults)

.03N2 = .05N3 (retired people)

∴ N1 = 24390,

N2 = 24390.24

N3 = 14634.14
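These balance equations form a small linear system; a Python sketch for this example (our own illustration):

```python
import numpy as np

# Unknowns: N1 (children), N2 (working adults), N3 (retired)
# entering group i = leaving group i:
#   1000    = (.040 + .001) N1
#   .040 N1 = (.030 + .010) N2
#   .030 N2 = .050 N3
A = np.array([[ 0.041,  0.0,   0.0  ],
              [-0.040,  0.040, 0.0  ],
              [ 0.0,   -0.030, 0.050]])
b = np.array([1000.0, 0.0, 0.0])
print(np.linalg.solve(A, b))  # [24390.24 24390.24 14634.15]
```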

Page 86

2. Each retired person receives a pension of $5000 per year. The pension fund is funded by payments from working adults. How much money must each working adult contribute annually to the pension fund?

In the steady state,

$$\frac{14634.14 \times 5000}{24390.24} = 3000 \text{ (per year)}$$

(working : retired = 5 : 3)

Page 87

Example 10 : The Mason and Burger Law Firm

Suppose the firm's long-term goal is to employ 50 junior lawyers, 30 senior lawyers, and 10 partners. To achieve this steady-state census, how many lawyers of each type should Mason and Burger hire each year?

Group 1 = junior lawyers, Group 2 = senior lawyers, Group 3 = partners; column 4 = lawyers who leave as nonpartners, column 5 = lawyers who leave as partners.

N1 = 50, N2 = 30, N3 = 10

$$P = \begin{bmatrix} .80 & .15 & 0 & .05 & 0 \\ 0 & .70 & .20 & .10 & 0 \\ 0 & 0 & .95 & 0 & .05 \end{bmatrix}$$

p.952

Page 88

By number entering group i = number leaving group i

H1 = (.15 + .05)50 (junior lawyers)

(.15)50 + H2 = (.20 + .10)30 (senior lawyers)

(.20)30 + H3 = (.05)10 (partners)

∴ H1 = 10
H2 = 1.5
H3 = −5.5 (fire 5.5 partners each year)

(Check: each year .20(30) = 6 seniors become partners; after firing 5.5, the net inflow is 0.5 new partner per year, and since a partner stays an average of 20 years, the steady-state census is 0.5 × 20 = 10 partners.)

Page 89

Exercise : p.953 #1~#3

Chapter 17-exercise.doc

Page 90

Solution :

1.

Steady State Equations are

Number entering yr. i of college = Number leaving yr. i of college

7,000 = (.8+.1)N1

500+.8N1 = (.85+.05)N2

500+.85N2 = (.80+.05)N3

0 +.80N3 = (.05+.85)N4

N1 = 7777.78, N2 =7469.14,

N3 = 8057.37, N4 = 7162.10

Thus the juniors will be the most numerous class.

Page 91

Solution :

2.

This changes the third steady state equation to .03N2 = .03N3

which forces N2 = N3. Since the number of working adults will equal the number of retirees each working adult must, in effect, pay the pension for one retiree. Thus annual contribution by each working adult must be $5,000.

Page 92

Solution :

3.

NY = steady-state level of pollution in New York, NE = steady-state level of pollution in Newark, J = steady-state level of pollution in Jersey City.

Equating flow in to flow out for each city yields the following:

1000 + J/3 = 2NY/3

50 + NY/3 + J/3 = 2NE/3

100 + 2NE/3 = 2J/3

NY=3450, J=3900, NE=3750.