IE 423 Engineering OR II
Homework #2
Part I. Read Sections #29.5 (Long-run properties of Markov Chains), #29.6 (First Passage
Times) of the attached, and write a summary report.
Note that the summary report has to be prepared on a word processor (e.g., MS Word), and it has
to be submitted through our class Canvas system. Your report should be formatted as follows:
The title page should include course title, student name, and the date.
There is no page limit but the article summary should be at least 2 pages long, single spaced
throughout.
Use a standard font (Times New Roman 12).
Use 1 inch margins for top, bottom, left, and right.
Use proper punctuation, spelling, and grammar.
All pages (with the exception of the title page) should be numbered.
CHAPTER 29

Markov Chains

Chapter 16 focused on decision making in the face of uncertainty about one future event
(learning the true state of nature). However, some decisions need to take into
account uncertainty about many future events. We now begin laying the groundwork for
decision making in this broader context.
In particular, this chapter presents probability models for processes that evolve over
time in a probabilistic manner. Such processes are called stochastic processes. After briefly
introducing general stochastic processes in the first section, the remainder of the chapter
focuses on a special kind called a Markov chain. Markov chains have the special property
that probabilities involving how the process will evolve in the future depend only on
the present state of the process, and so are independent of events in the past. Many
processes fit this description, so Markov chains provide an especially important kind of
probability model.
For example, Chap. 17 mentioned that continuous-time Markov chains (described in
Sec. 29.8) are used to formulate most of the basic models of queueing theory. Markov
chains also provided the foundation for the study of Markov decision models in Chap. 19.
There are a wide variety of other applications of Markov chains as well. A considerable
number of books and articles present some of these applications. One is Selected Refer-
ence 4, which describes applications in such diverse areas as the classification of
customers, DNA sequencing, the analysis of genetic networks, the estimation of sales
demand over time, and credit rating. Selected Reference 6 focuses on applications in fi-
nance and Selected Reference 3 describes applications for analyzing baseball strategy.
The list goes on and on, but let us turn now to a description of stochastic processes in
general and Markov chains in particular.
■ 29.1 STOCHASTIC PROCESSES
A stochastic process is defined as an indexed collection of random variables {Xt},
where the index t runs through a given set T. Often T is taken to be the set of non-
negative integers, and Xt represents a measurable characteristic of interest at time t.
For example, Xt might represent the inventory level of a particular product at the end
of week t.
Stochastic processes are of interest for describing the behavior of a system operating
over some period of time. A stochastic process often has the following structure.
The current status of the system can fall into any one of M + 1 mutually exclusive categories
called states. For notational convenience, these states are labeled 0, 1, . . . , M. The
random variable Xt represents the state of the system at time t, so its only possible values
are 0, 1, . . . , M. The system is observed at particular points of time, labeled t = 0,
1, 2, . . . . Thus, the stochastic process {Xt} = {X0, X1, X2, . . .} provides a mathematical
representation of how the status of the physical system evolves over time.
This kind of process is referred to as being a discrete time stochastic process with a finite
state space. Except for Sec. 29.8, this will be the only kind of stochastic process con-
sidered in this chapter. (Section 29.8 describes a certain continuous time stochastic
process.)
A Weather Example
The weather in the town of Centerville can change rather quickly from day to day. However,
the chances of being dry (no rain) tomorrow are somewhat larger if it is dry today than if
it rains today. In particular, the probability of being dry tomorrow is 0.8 if it is dry today,
but is only 0.6 if it rains today. These probabilities do not change if information about the
weather before today is also taken into account.
The evolution of the weather from day to day in Centerville is a stochastic process.
Starting on some initial day (labeled as day 0), the weather is observed on each day t, for
t = 0, 1, 2, . . . . The state of the system on day t can be either

State 0 = Day t is dry

or

State 1 = Day t has rain.

Thus, for t = 0, 1, 2, . . . , the random variable Xt takes on the values

$$X_t = \begin{cases} 0 & \text{if day } t \text{ is dry} \\ 1 & \text{if day } t \text{ has rain.} \end{cases}$$
The stochastic process {Xt} = {X0, X1, X2, . . .} provides a mathematical representation
of how the status of the weather in Centerville evolves over time.
An Inventory Example
Dave's Photography Store has the following inventory problem. The store stocks a particular
model camera that can be ordered weekly. Let D1, D2, . . . represent the demand
for this camera (the number of units that would be sold if the inventory is not depleted)
during the first week, second week, . . . , respectively, so the random variable Dt (for
t = 1, 2, . . .) is

Dt = number of cameras that would be sold in week t if the inventory is not
depleted. (This number includes lost sales when the inventory is depleted.)
It is assumed that the Dt are independent and identically distributed random variables hav-
ing a Poisson distribution with a mean of 1. Let X0 represent the number of cameras on
hand at the outset, X1 the number of cameras on hand at the end of week 1, X2 the num-
ber of cameras on hand at the end of week 2, and so on, so the random variable Xt (for
t = 0, 1, 2, . . .) is

Xt = number of cameras on hand at the end of week t.
Assume that X0 = 3, so that week 1 begins with three cameras on hand.

{Xt} = {X0, X1, X2, . . .}

is a stochastic process where the random variable Xt represents the state of the system at
time t, namely,

State at time t = number of cameras on hand at the end of week t.
As the owner of the store, Dave would like to learn more about how the status of this sto-
chastic process evolves over time while using the current ordering policy described below.
At the end of each week t (Saturday night), the store places an order that is delivered in
time for the next opening of the store on Monday. The store uses the following order policy:
If Xt = 0, order 3 cameras.
If Xt ≥ 1, do not order any cameras.

Thus, the inventory level fluctuates between a minimum of zero cameras and a maximum
of three cameras, so the possible states of the system at time t (the end of week t) are

Possible states = 0, 1, 2, or 3 cameras on hand.
Since each random variable Xt (t = 0, 1, 2, . . .) represents the state of the system at the end
of week t, its only possible values are 0, 1, 2, or 3. The random variables Xt are dependent
and may be evaluated iteratively by the expression

$$X_{t+1} = \begin{cases} \max\{3 - D_{t+1},\, 0\} & \text{if } X_t = 0 \\ \max\{X_t - D_{t+1},\, 0\} & \text{if } X_t \ge 1 \end{cases}$$

for t = 0, 1, 2, . . . .
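Before formalizing this process as a Markov chain, it may help to see the recursion in action. The following is a minimal simulation sketch (Python with NumPy; the function name simulate_inventory and all variable names are ours, not the text's):

```python
import numpy as np

def simulate_inventory(weeks, x0=3, seed=0):
    """Generate one sample path X_0, X_1, ..., X_weeks of the camera
    inventory under the ordering policy described above."""
    rng = np.random.default_rng(seed)
    x = x0
    path = [x]
    for _ in range(weeks):
        demand = rng.poisson(1.0)        # D_{t+1} ~ Poisson with mean 1
        if x == 0:
            x = max(3 - demand, 0)       # 3 ordered cameras arrive first
        else:
            x = max(x - demand, 0)       # no order was placed
        path.append(x)
    return path

print(simulate_inventory(10))            # e.g., [3, 2, 2, 1, ...]
```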
These examples are used for illustrative purposes throughout many of the following
sections. Section 29.2 further defines the particular type of stochastic process considered
in this chapter.
■ 29.2 MARKOV CHAINS
Assumptions regarding the joint distribution of X0, X1, . . . are necessary to obtain ana-
lytical results. One assumption that leads to analytical tractability is that the stochastic
process is a Markov chain, which has the following key property:
A stochastic process {Xt} is said to have the Markovian property if P{Xt+1 = j | X0 = k0,
X1 = k1, . . . , Xt−1 = kt−1, Xt = i} = P{Xt+1 = j | Xt = i}, for t = 0, 1, . . . and every sequence
i, j, k0, k1, . . . , kt−1.
In words, this Markovian property says that the conditional probability of any future
“event,” given any past “events” and the present state Xt � i, is independent of the past
events and depends only upon the present state.
A stochastic process {Xt} (t = 0, 1, . . .) is a Markov chain if it has the Markovian
property.

The conditional probabilities P{Xt+1 = j | Xt = i} for a Markov chain are called (one-
step) transition probabilities. If, for each i and j,

P{Xt+1 = j | Xt = i} = P{X1 = j | X0 = i}, for all t = 1, 2, . . . ,
then the (one-step) transition probabilities are said to be stationary. Thus, having
stationary transition probabilities implies that the transition probabilities do not change
over time. The existence of stationary (one-step) transition probabilities also implies that,
for each i, j, and n (n = 0, 1, 2, . . .),

P{Xt+n = j | Xt = i} = P{Xn = j | X0 = i}

for all t = 0, 1, . . . . These conditional probabilities are called n-step transition probabilities.1

1For n = 0, pij(0) is just P{X0 = j | X0 = i} and hence is 1 when i = j and is 0 when i ≠ j.
To simplify notation with stationary transition probabilities, let

pij = P{Xt+1 = j | Xt = i},
pij(n) = P{Xt+n = j | Xt = i}.

Thus, the n-step transition probability pij(n) is just the conditional probability that the system
will be in state j after exactly n steps (time units), given that it starts in state i at any
time t. When n = 1, note that pij(1) = pij.

Because the pij(n) are conditional probabilities, they must be nonnegative, and since
the process must make a transition into some state, they must satisfy the properties

pij(n) ≥ 0, for all i and j; n = 0, 1, 2, . . . ,

and

$$\sum_{j=0}^{M} p_{ij}^{(n)} = 1, \qquad \text{for all } i;\ n = 0, 1, 2, \ldots.$$
A convenient way of showing all the n-step transition probabilities is the n-step
transition matrix

$$P^{(n)} = \begin{bmatrix}
p_{00}^{(n)} & p_{01}^{(n)} & \cdots & p_{0M}^{(n)} \\
p_{10}^{(n)} & p_{11}^{(n)} & \cdots & p_{1M}^{(n)} \\
\vdots & \vdots & & \vdots \\
p_{M0}^{(n)} & p_{M1}^{(n)} & \cdots & p_{MM}^{(n)}
\end{bmatrix}$$
Note that the transition probability in a particular row and column is for the transition
from the row state to the column state. When n � 1, we drop the superscript n and sim-
ply refer to this as the transition matrix.
The Markov chains to be considered in this chapter have the following properties:
1. A finite number of states.
2. Stationary transition probabilities.
We also will assume that we know the initial probabilities P{X0 � i} for all i.
Formulating the Weather Example as a Markov Chain
For the weather example introduced in the preceding section, recall that the evolution of
the weather in Centerville from day to day has been formulated as a stochastic process
{Xt} (t = 0, 1, 2, . . .)

where

$$X_t = \begin{cases} 0 & \text{if day } t \text{ is dry} \\ 1 & \text{if day } t \text{ has rain.} \end{cases}$$
As indicated in Sec. 29.1,

P{Xt+1 = 0 | Xt = 0} = 0.8,
P{Xt+1 = 0 | Xt = 1} = 0.6.
Furthermore, because these probabilities do not change if information about the weather
before today (day t) is also taken into account,
P{Xt+1 = 0 | X0 = k0, X1 = k1, . . . , Xt−1 = kt−1, Xt = 0} = P{Xt+1 = 0 | Xt = 0},
P{Xt+1 = 0 | X0 = k0, X1 = k1, . . . , Xt−1 = kt−1, Xt = 1} = P{Xt+1 = 0 | Xt = 1},

for t = 0, 1, . . . and every sequence k0, k1, . . . , kt−1. These equations also must hold if
Xt+1 = 0 is replaced by Xt+1 = 1. (The reason is that states 0 and 1 are mutually exclusive
and the only possible states, so the probabilities of the two states must sum to 1.) There-
fore, the stochastic process has the Markovian property, so the process is a Markov chain.
Using the notation introduced in this section, the (one-step) transition probabilities are
p00 = P{Xt+1 = 0 | Xt = 0} = 0.8,
p10 = P{Xt+1 = 0 | Xt = 1} = 0.6,

for all t = 1, 2, . . . , so these are stationary transition probabilities.

Furthermore,

p00 + p01 = 1, so p01 = 1 − 0.8 = 0.2,
p10 + p11 = 1, so p11 = 1 − 0.6 = 0.4.
Therefore, the (one-step) transition matrix is

$$P = \begin{bmatrix} p_{00} & p_{01} \\ p_{10} & p_{11} \end{bmatrix} = \begin{bmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{bmatrix}$$
where these transition probabilities are for the transition from the row state to the column
state. Keep in mind that state 0 means that the day is dry, whereas state 1 signifies that
the day has rain, so these transition probabilities give the probability of the state the weather
will be in tomorrow, given the state of the weather today.
The state transition diagram in Fig. 29.1 graphically depicts the same information
provided by the transition matrix. The two nodes (circles) represent the two possible states
for the weather, and the arrows show the possible transitions (including back to the same
state) from one day to the next. Each of the transition probabilities is given next to the
corresponding arrow.
The n-step transition matrices for this example will be shown in the next section.
■ FIGURE 29.1
The state transition diagram for the weather example (nodes for states 0 and 1, arcs labeled with the transition probabilities 0.8, 0.2, 0.6, and 0.4).
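As a small sanity check, this transition matrix can be entered directly in code (a sketch in Python with NumPy; the variable names are ours):

```python
import numpy as np

# Weather example: row = today's state, column = tomorrow's state
# (state 0 = dry, state 1 = rain).
P = np.array([[0.8, 0.2],
              [0.6, 0.4]])

assert np.allclose(P.sum(axis=1), 1.0)   # each row must sum to 1
```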
Formulating the Inventory Example as a Markov Chain
Returning to the inventory example developed in the preceding section, recall that Xt is the
number of cameras in stock at the end of week t (before ordering any more), so Xt represents
the state of the system at time t (the end of week t). Given that the current state is Xt = i, the
expression at the end of Sec. 29.1 indicates that Xt+1 depends only on Dt+1 (the demand in
week t + 1) and Xt. Since Xt+1 is independent of any past history of the inventory system prior
to time t, the stochastic process {Xt} (t = 0, 1, . . .) has the Markovian property and so is a
Markov chain.
Now consider how to obtain the (one-step) transition probabilities, i.e., the elements
of the (one-step) transition matrix

$$P = \begin{bmatrix}
p_{00} & p_{01} & p_{02} & p_{03} \\
p_{10} & p_{11} & p_{12} & p_{13} \\
p_{20} & p_{21} & p_{22} & p_{23} \\
p_{30} & p_{31} & p_{32} & p_{33}
\end{bmatrix}$$

given that Dt+1 has a Poisson distribution with a mean of 1. Thus,

$$P\{D_{t+1} = n\} = \frac{(1)^n e^{-1}}{n!}, \qquad \text{for } n = 0, 1, \ldots,$$

so (to three significant digits)

$$\begin{aligned}
P\{D_{t+1} = 0\} &= e^{-1} = 0.368, \\
P\{D_{t+1} = 1\} &= e^{-1} = 0.368, \\
P\{D_{t+1} = 2\} &= \tfrac{1}{2} e^{-1} = 0.184, \\
P\{D_{t+1} \ge 3\} &= 1 - P\{D_{t+1} \le 2\} = 1 - (0.368 + 0.368 + 0.184) = 0.080.
\end{aligned}$$
For the first row of P, we are dealing with a transition from state Xt = 0 to some state
Xt+1. As indicated at the end of Sec. 29.1,

Xt+1 = max{3 − Dt+1, 0} if Xt = 0.

Therefore, for the transition to Xt+1 = 3 or Xt+1 = 2 or Xt+1 = 1,

p03 = P{Dt+1 = 0} = 0.368,
p02 = P{Dt+1 = 1} = 0.368,
p01 = P{Dt+1 = 2} = 0.184.
A transition from Xt = 0 to Xt+1 = 0 implies that the demand for cameras in week t + 1 is 3
or more after 3 cameras are added to the depleted inventory at the beginning of the week, so

p00 = P{Dt+1 ≥ 3} = 0.080.
For the other rows of P, the formula at the end of Sec. 29.1 for the next state is

Xt+1 = max{Xt − Dt+1, 0} if Xt ≥ 1.

This implies that Xt+1 ≤ Xt, so p12 = 0, p13 = 0, and p23 = 0. For the other transitions,

p11 = P{Dt+1 = 0} = 0.368,
p10 = P{Dt+1 ≥ 1} = 1 − P{Dt+1 = 0} = 0.632,
p22 = P{Dt+1 = 0} = 0.368,
p21 = P{Dt+1 = 1} = 0.368,
p20 = P{Dt+1 ≥ 2} = 1 − P{Dt+1 ≤ 1} = 1 − (0.368 + 0.368) = 0.264.
For the last row of P, week t + 1 begins with 3 cameras in inventory, so the calculations
for the transition probabilities are exactly the same as for the first row. Consequently, the
complete transition matrix (to three significant digits) is

$$P = \begin{bmatrix}
0.080 & 0.184 & 0.368 & 0.368 \\
0.632 & 0.368 & 0 & 0 \\
0.264 & 0.368 & 0.368 & 0 \\
0.080 & 0.184 & 0.368 & 0.368
\end{bmatrix}$$
The information given by this transition matrix can also be depicted graphically with
the state transition diagram in Fig. 29.2. The four possible states for the number of cam-
eras on hand at the end of a week are represented by the four nodes (circles) in the dia-
gram. The arrows show the possible transitions from one state to another, or sometimes
from a state back to itself, when the camera store goes from the end of one week to
the end of the next week. The number next to each arrow gives the probability of that
particular transition occurring next when the camera store is in the state at the base of
the arrow.

■ FIGURE 29.2
The state transition diagram for the inventory example.
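The row-by-row reasoning above can also be mechanized. Here is a sketch (Python with NumPy and SciPy's Poisson pmf; the helper name build_transition_matrix is ours) that rebuilds the inventory transition matrix from the ordering policy:

```python
import numpy as np
from scipy.stats import poisson

def build_transition_matrix(max_stock=3, mean_demand=1.0):
    """One-step transition matrix for the order-up-to-3 policy."""
    n = max_stock + 1
    P = np.zeros((n, n))
    for i in range(n):
        start = max_stock if i == 0 else i        # stock after any delivery
        for j in range(1, start + 1):
            P[i, j] = poisson.pmf(start - j, mean_demand)
        P[i, 0] = 1.0 - P[i, 1:].sum()            # demand >= start empties the shelf
    return P

print(build_transition_matrix().round(3))
# [[0.08  0.184 0.368 0.368]
#  [0.632 0.368 0.    0.   ]
#  [0.264 0.368 0.368 0.   ]
#  [0.08  0.184 0.368 0.368]]
```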
Additional Examples of Markov Chains
A Stock Example. Consider the following model for the value of a stock. At the end of
a given day, the price is recorded. If the stock has gone up, the probability that it will go up
tomorrow is 0.7. If the stock has gone down, the probability that it will go up tomorrow is
only 0.5. (For simplicity, we will count the stock staying the same as a decrease.) This is a
Markov chain, where the possible states for each day are as follows:
State 0: The stock increased on this day.
State 1: The stock decreased on this day.
The transition matrix that shows each probability of going from a particular state today
to a particular state tomorrow is given by
$$P = \begin{bmatrix} 0.7 & 0.3 \\ 0.5 & 0.5 \end{bmatrix}$$
The form of the state transition diagram for this example is exactly the same as for
the weather example shown in Fig. 29.1, so we will not repeat it here. The only differ-
ence is that the transition probabilities in the diagram are slightly different (0.7 replaces
0.8, 0.3 replaces 0.2, and 0.5 replaces both 0.6 and 0.4 in Fig. 29.1).
A Second Stock Example. Suppose now that the stock market model is changed
so that
the stock’s going up tomorrow depends upon whether it increased today and yesterday. In
particular, if the stock has increased for the past two days, it will increase tomorrow with
probability 0.9. If the stock increased today but decreased yesterday, then it will increase
tomorrow with probability 0.6. If the stock decreased today but increased yesterday, then it
will increase tomorrow with probability 0.5. Finally, if the stock decreased for the past two
days, then it will increase tomorrow with probability 0.3. If we define the state as repre-
senting whether the stock goes up or down today, the system is no longer a Markov chain.
However, we can transform the system to a Markov chain by defining the states as follows:2
State 0: The stock increased both today and yesterday.
State 1: The stock increased today and decreased yesterday.
State 2: The stock decreased today and increased yesterday.
State 3: The stock decreased both today and yesterday.
This leads to a four-state Markov chain with the following transition matrix:

$$P = \begin{bmatrix}
0.9 & 0 & 0.1 & 0 \\
0.6 & 0 & 0.4 & 0 \\
0 & 0.5 & 0 & 0.5 \\
0 & 0.3 & 0 & 0.7
\end{bmatrix}$$
Figure 29.3 shows the state transition diagram for this example. An interesting feature of
the example revealed by both this diagram and all the values of 0 in the transition matrix is
that so many of the transitions from state i to state j are impossible in one step. In other words,
pij = 0 for 8 of the 16 entries in the transition matrix. However, check out how it always is
possible to go from any state i to any state j (including j = i) in two steps. The same holds
true for three steps, four steps, and so forth. Thus, pij(n) > 0 for n = 2, 3, . . . for all i and j.
A Gambling Example. Another example involves gambling. Suppose that a player
has $1 and with each play of the game wins $1 with probability p > 0 or loses $1 with
probability 1 − p > 0. The game ends when the player either accumulates $3 or goes
broke. This game is a Markov chain with the states representing the player’s current hold-
ing of money, that is, 0, $1, $2, or $3, and with the transition matrix given by
$$P = \begin{bmatrix}
1 & 0 & 0 & 0 \\
1-p & 0 & p & 0 \\
0 & 1-p & 0 & p \\
0 & 0 & 0 & 1
\end{bmatrix}$$
2We again are counting the stock staying the same as a decrease. This example demonstrates that Markov chains are able to incorporate arbitrary amounts of history, but at the cost of significantly increasing the number of states.
The state transition diagram for this example is shown in Fig. 29.4. This diagram
demonstrates that once the process enters either state 0 or state 3, it will stay in that state
forever after, since p00 = 1 and p33 = 1. States 0 and 3 are examples of what is called
an absorbing state (a state that is never left once the process enters it). We will
focus on analyzing absorbing states in Sec. 29.7.
Note that in both the inventory and gambling examples, the numeric labeling of the states
that the process reaches coincides with the physical expression of the system—i.e., actual in-
ventory levels and the player’s holding of money, respectively—whereas the numeric label-
ing of the states in the weather and stock examples has no physical significance.
■ FIGURE 29.3
The state transition diagram for the second stock example.

■ FIGURE 29.4
The state transition diagram for the gambling example.

■ 29.3 CHAPMAN-KOLMOGOROV EQUATIONS
Section 29.2 introduced the n-step transition probability pij(n). The following Chapman-
Kolmogorov equations provide a method for computing these n-step transition probabilities:

$$p_{ij}^{(n)} = \sum_{k=0}^{M} p_{ik}^{(m)}\, p_{kj}^{(n-m)}, \qquad \text{for all } i = 0, 1, \ldots, M,\ j = 0, 1, \ldots, M, \text{ and any } m = 1, 2, \ldots, n-1,\ n = m+1, m+2, \ldots.^3$$

3These equations also hold in a trivial sense when m = 0 or m = n, but m = 1, 2, . . . , n − 1 are the only
interesting cases.
These equations point out that in going from state i to state j in n steps, the process
will be in some state k after exactly m (less than n) steps. Thus, pik(m)pkj(n−m) is just the
conditional probability that, given a starting point of state i, the process goes to state k after
m steps and then to state j in n − m steps. Therefore, summing these conditional probabilities
over all possible k must yield pij(n). The special cases of m = 1 and m = n − 1 lead
to the expressions
$$p_{ij}^{(n)} = \sum_{k=0}^{M} p_{ik}\, p_{kj}^{(n-1)}$$

and

$$p_{ij}^{(n)} = \sum_{k=0}^{M} p_{ik}^{(n-1)} p_{kj},$$

for all states i and j. These expressions enable the n-step transition probabilities to be obtained
from the one-step transition probabilities recursively. This recursive relationship is best
explained in matrix notation (see Appendix 4). For n = 2, these expressions become
$$p_{ij}^{(2)} = \sum_{k=0}^{M} p_{ik}\, p_{kj}, \qquad \text{for all states } i \text{ and } j,$$

where the pij(2) are the elements of a matrix P(2). Also note that these elements are obtained
by multiplying the matrix of one-step transition probabilities by itself; i.e.,

P(2) = P · P = P2.
In the same manner, the above expressions for pij(n) when m = 1 and m = n − 1 indicate
that the matrix of n-step transition probabilities is

P(n) = PP(n−1) = P(n−1)P = PPn−1 = Pn−1P = Pn.

Thus, the n-step transition probability matrix Pn can be obtained by computing the nth
power of the one-step transition matrix P.
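In code, computing P(n) is a single matrix-power call. A minimal sketch (Python with NumPy; variable names are ours), using the weather example's matrix from Sec. 29.2:

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.6, 0.4]])

# P^(n) = P^n by the Chapman-Kolmogorov equations.
for n in (2, 3, 4, 5):
    print(n, np.linalg.matrix_power(P, n).round(3))
# By n = 5 both rows agree to three digits: [0.75, 0.25].
```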
n-Step Transition Matrices for the Weather Example
For the weather example introduced in Sec. 29.1, we now will use the above formulas to
calculate various n-step transition matrices from the (one-step) transition matrix P that
was obtained in Sec. 29.2. To start, the two-step transition matrix is

$$P^{(2)} = P \cdot P = \begin{bmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{bmatrix}\begin{bmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{bmatrix} = \begin{bmatrix} 0.76 & 0.24 \\ 0.72 & 0.28 \end{bmatrix}.$$

Thus, if the weather is in state 0 (dry) on a particular day, the probability of being in state 0
two days later is 0.76 and the probability of being in state 1 (rain) then is 0.24. Similarly, if
the weather is in state 1 now, the probability of being in state 0 two days later is 0.72 whereas
the probability of being in state 1 then is 0.28.

The probabilities of the state of the weather three, four, or five days into the future
also can be read in the same way from the three-step, four-step, and five-step transition
matrices calculated to three significant digits below.
$$P^{(3)} = P^3 = P \cdot P^2 = \begin{bmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{bmatrix}\begin{bmatrix} 0.76 & 0.24 \\ 0.72 & 0.28 \end{bmatrix} = \begin{bmatrix} 0.752 & 0.248 \\ 0.744 & 0.256 \end{bmatrix}$$

$$P^{(4)} = P^4 = P \cdot P^3 = \begin{bmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{bmatrix}\begin{bmatrix} 0.752 & 0.248 \\ 0.744 & 0.256 \end{bmatrix} = \begin{bmatrix} 0.75 & 0.25 \\ 0.749 & 0.251 \end{bmatrix}$$

$$P^{(5)} = P^5 = P \cdot P^4 = \begin{bmatrix} 0.8 & 0.2 \\ 0.6 & 0.4 \end{bmatrix}\begin{bmatrix} 0.75 & 0.25 \\ 0.749 & 0.251 \end{bmatrix} = \begin{bmatrix} 0.75 & 0.25 \\ 0.75 & 0.25 \end{bmatrix}$$
Note that the five-step transition matrix has the interesting feature that the two rows
have identical entries (after rounding to three significant digits). This reflects the fact
that the probability of the weather being in a particular state is essentially independent
of the state of the weather five days before. Thus, the probabilities in either row of this
five-step transition matrix are referred to as the steady-state probabilities of this Markov
chain.
We will expand further on the subject of the steady-state probabilities of a Markov
chain, including how to derive them more directly, at the beginning of Sec. 29.5.
n-Step Transition Matrices for the Inventory Example
Returning to the inventory example included in Sec. 29.1, we now will calculate its n-step
transition matrices to three decimal places for n = 2, 4, and 8. To start, its one-step transition
matrix P obtained in Sec. 29.2 can be used to calculate the two-step transition matrix P(2) as
follows:
$$P^{(2)} = P^2 = \begin{bmatrix}
0.249 & 0.286 & 0.300 & 0.165 \\
0.283 & 0.252 & 0.233 & 0.233 \\
0.351 & 0.319 & 0.233 & 0.097 \\
0.249 & 0.286 & 0.300 & 0.165
\end{bmatrix}$$
For example, given that there is one camera left in stock at the end of a week, the
probability is 0.283 that there will be no cameras in stock 2 weeks later, that is, p10(2) = 0.283.
Similarly, given that there are two cameras left in stock at the end of a week, the probability
is 0.097 that there will be three cameras in stock 2 weeks later, that is, p23(2) = 0.097.
The four-step transition matrix can also be obtained as follows:

$$P^{(4)} = P^4 = P^{(2)} \cdot P^{(2)} = \begin{bmatrix}
0.289 & 0.286 & 0.261 & 0.164 \\
0.282 & 0.285 & 0.268 & 0.166 \\
0.284 & 0.283 & 0.263 & 0.171 \\
0.289 & 0.286 & 0.261 & 0.164
\end{bmatrix}$$
For example, given that there is one camera left in stock at the end of a week, the probability
is 0.282 that there will be no cameras in stock 4 weeks later, that is, p10(4) = 0.282.
Similarly, given that there are two cameras left in stock at the end of a week, the
probability is 0.171 that there will be three cameras in stock 4 weeks later, that is,
p23(4) = 0.171.
The transition probabilities for the number of cameras in stock 8 weeks from now
can be read in the same way from the eight-step transition matrix calculated below.
$$P^{(8)} = P^8 = P^{(4)} \cdot P^{(4)} = \begin{bmatrix}
0.286 & 0.285 & 0.264 & 0.166 \\
0.286 & 0.285 & 0.264 & 0.166 \\
0.286 & 0.285 & 0.264 & 0.166 \\
0.286 & 0.285 & 0.264 & 0.166
\end{bmatrix}$$
Like the five-step transition matrix for the weather example, this matrix has the interesting
feature that its rows have identical entries (after rounding). The reason once again is that
probabilities in any row are the steady-state probabilities for this Markov chain, i.e., the
probabilities of the state of the system after enough time has elapsed that the initial state is
no longer relevant.
Your IOR Tutorial includes a procedure for calculating P(n) = Pn for any positive
integer n ≤ 99.
Unconditional State Probabilities
Recall that one- or n-step transition probabilities are conditional probabilities; for example,
P{Xn = j | X0 = i} = pij(n). Assume that n is small enough that these conditional probabilities
are not yet steady-state probabilities. In this case, if the unconditional probability P{Xn = j}
is desired, it is necessary to specify the probability distribution of the initial state, namely,
P{X0 = i} for i = 0, 1, . . . , M. Then

$$P\{X_n = j\} = P\{X_0 = 0\}\, p_{0j}^{(n)} + P\{X_0 = 1\}\, p_{1j}^{(n)} + \cdots + P\{X_0 = M\}\, p_{Mj}^{(n)}.$$
In the inventory example, it was assumed that initially there were 3 units in stock,
that is, X0 = 3. Thus, P{X0 = 0} = P{X0 = 1} = P{X0 = 2} = 0 and P{X0 = 3} = 1.
Hence, the (unconditional) probability that there will be three cameras in stock 2 weeks
after the inventory system began is P{X2 = 3} = (1)p33(2) = 0.165.
■ 29.4 CLASSIFICATION OF STATES OF A MARKOV CHAIN
We have just seen near the end of the preceding section that the n-step transition probabil-
ities for the inventory example converge to steady-state probabilities after a sufficient num-
ber of steps. However, this is not true for all Markov chains. The long-run properties of a
Markov chain depend greatly on the characteristics of its states and transition matrix. To fur-
ther describe the properties of Markov chains, it is necessary to present some concepts
and definitions concerning these states.
State j is said to be accessible from state i if pij(n) > 0 for some n ≥ 0. (Recall that
pij(n) is just the conditional probability of being in state j after n steps, starting in state i.)
Thus, state j being accessible from state i means that it is possible for the system to enter
state j eventually when it starts from state i. This is clearly true for the weather example
(see Fig. 29.1) since pij > 0 for all i and j. In the inventory example (see Fig. 29.2),
pij(2) > 0 for all i and j, so every state is accessible from every other state. In general, a
sufficient condition for all states to be accessible is that there exists a value of n for which
pij(n) > 0 for all i and j.
In the gambling example given at the end of Sec. 29.2 (see Fig. 29.4), state 2 is not
accessible from state 3. This can be deduced from the context of the game (once the player
reaches state 3, the player never leaves this state), which implies that p32(n) = 0 for all
n ≥ 0. However, even though state 2 is not accessible from state 3, state 3 is accessible from
state 2 since, for n = 1, the transition matrix given at the end of Sec. 29.2 indicates that
p23 = p > 0.
If state j is accessible from state i and state i is accessible from state j, then states i
and j are said to communicate. In both the weather and inventory examples, all states
communicate. In the gambling example, states 2 and 3 do not. (The same is true of states
1 and 3, states 1 and 0, and states 2 and 0.) In general,
1. Any state communicates with itself (because pii(0) = P{X0 = i | X0 = i} = 1).
2. If state i communicates with state j, then state j communicates with state i.
3. If state i communicates with state j and state j communicates with state k, then state i
communicates with state k.
Properties 1 and 2 follow from the definition of states communicating, whereas property
3 follows from the Chapman-Kolmogorov equations.
As a result of these three properties of communication, the states may be parti-
tioned into one or more separate classes such that those states that communicate with
each other are in the same class. (A class may consist of a single state.) If there is only
one class, i.e., all the states communicate, the Markov chain is said to be irreducible.
In both the weather and inventory examples, the Markov chain is irreducible. In both
of the stock examples in Sec. 29.2, the Markov chain also is irreducible. However, the
gambling example contains three classes. Observe in Fig. 29.4 how state 0 forms a
class, state 3 forms a class, and states 1 and 2 form a class.
Recurrent States and Transient States
It is often useful to talk about whether a process entering a state will ever return to this
state. Here is one possibility.
A state is said to be a transient state if, upon entering this state, the process might never
return to this state again. Therefore, state i is transient if and only if there exists a state j
(j ≠ i) that is accessible from state i but not vice versa, that is, state i is not accessible
from state j.
Thus, if state i is transient and the process visits this state, there is a positive probability
(perhaps even a probability of 1) that the process will later move to state j and so will
never return to state i. Consequently, a transient state will be visited only a finite number
of times. To illustrate, consider the gambling example presented at the end of Sec. 29.2.
Its state transition diagram shown in Fig. 29.4 indicates that both states 1 and 2 are tran-
sient states since the process will leave these states sooner or later to enter either state 0
or state 3 and then will remain in that state forever.
When starting in state i, another possibility is that the process definitely will return
to this state.
A state is said to be a recurrent state if, upon entering this state, the process definitely
will return to this state again. Therefore, a state is recurrent if and only if it is not
transient.
Since a recurrent state definitely will be revisited after each visit, it will be visited in-
finitely often if the process continues forever. For example, all the states in the state
transition diagrams shown in Figs. 29.1, 29.2, and 29.3 are recurrent states because the
process always will return to each of these states. Even for the gambling example, states
0 and 3 are recurrent states because the process will keep returning immediately to one
of these states forever once the process enters that state. Note in Fig. 29.4 how the
process eventually will enter either state 0 or state 3 and then will never leave that state
again.
If the process enters a certain state and then stays in this state at the next step, this
is considered a return to this state. Hence, the following kind of state is a special type of
recurrent state.
A state is said to be an absorbing state if, upon entering this state, the process never will
leave this state again. Therefore, state i is an absorbing state if and only if pii = 1.
As just noted, both states 0 and 3 for the gambling example fit this definition, so they
both are absorbing states as well as a special type of recurrent state. We will discuss
absorbing states further in Sec. 29.7.
Recurrence is a class property. That is, all states in a class are either recurrent
or transient. Furthermore, in a finite-state Markov chain, not all states can be tran-
sient. Therefore, all states in an irreducible finite-state Markov chain are recurrent.
Indeed, one can identify an irreducible finite-state Markov chain (and therefore con-
clude that all states are recurrent) by showing that all states of the process commu-
nicate. It has already been pointed out that a sufficient condition for all states to be
accessible (and therefore communicate with each other) is that there exists a value of
n for which pi j
(n) � 0 for all i and j. Thus, all states in the inventory example (see Fig.
29.2) are recurrent, since pi j
(2) is positive for all i and j. Similarly, both the weather
example and the first stock example contain only recurrent states, since pi j is pos-
itive for all i and j. By calculating pi j
(2) for all i and j in the second stock example
in Sec. 29.2 (see Fig. 29.3), it follows that all states are recurrent since pi j
(2) � 0 for
all i and j.
As another example, suppose that a Markov chain has the following transition matrix
(rows and columns correspond to states 0 through 4):

$$P = \begin{bmatrix}
\tfrac{1}{4} & \tfrac{3}{4} & 0 & 0 & 0 \\
\tfrac{1}{2} & \tfrac{1}{2} & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & \tfrac{1}{3} & \tfrac{2}{3} & 0 \\
1 & 0 & 0 & 0 & 0
\end{bmatrix}$$
Note that state 2 is an absorbing state (and hence a recurrent state) because if the process
enters state 2 (row 3 of the matrix), it will never leave. State 3 is a transient state because
if the process is in state 3, there is a positive probability that it will never return.
The probability is 1/3 that the process will go from state 3 to state 2 on the first step. Once
the process is in state 2, it remains in state 2. State 4 also is a transient state because if
the process starts in state 4, it immediately leaves and can never return. States 0 and 1
are recurrent states. To see this, observe from P that if the process starts in either of
these states, it can never leave these two states. Furthermore, whenever the process
moves from one of these states to the other one, it always will return to the original
state eventually.
Periodicity Properties
Another useful property of Markov chains is periodicities. The period of state i is defined
to be the integer t (t > 1) such that pii(n) = 0 for all values of n other than t, 2t, 3t, . . . ,
where t is the smallest integer with this property. In the gambling example (end of Section 29.2),
starting in state 1, it is possible for the process to enter state 1 only at times 2, 4, . . . , so
state 1 has period 2. The reason is that the player can break even (be neither winning nor
losing) only at times 2, 4, . . . , which can be verified by calculating p11(n) for all n and noting
that p11(n) = 0 for n odd. You also can see in Fig. 29.4 that the process always takes
two steps to return to state 1 until the process gets absorbed in either state 0 or state 3.
(The same conclusion also applies to state 2.)
If there are two consecutive numbers s and s + 1 such that the process can be in state i
at times s and s + 1, the state is said to have period 1 and is called an aperiodic state.
Just as recurrence is a class property, it can be shown that periodicity is a class prop-
erty. That is, if state i in a class has period t, then all states in that class have period t. In
the gambling example, state 2 also has period 2 because it is in the same class as state 1
and we noted above that state 1 has period 2.
It is possible for a Markov chain to have both a recurrent class of states and a transient
class of states where the two classes have different periods greater than 1.
In a finite-state Markov chain, recurrent states that are aperiodic are called ergodic
states. A Markov chain is said to be ergodic if all its states are ergodic states. You will see
next that a key long-run property of a Markov chain that is both irreducible and ergodic is
that its n-step transition probabilities will converge to steady-state probabilities as n grows
large.
■ 29.5 LONG-RUN PROPERTIES OF MARKOV CHAINS
Steady-State Probabilities
While calculating the n-step transition probabilities for both the weather and inventory
examples in Sec. 29.3, we noted an interesting feature of these matrices. If n is large
enough (n = 5 for the weather example and n = 8 for the inventory example), all the rows
of the matrix have identical entries, so the probability that the system is in each state j no
longer depends on the initial state of the system. In other words, there is a limiting prob-
ability that the system will be in each state j after a large number of transitions, and this
probability is independent of the initial state. These properties of the long-run behavior
of finite-state Markov chains do, in fact, hold under relatively general conditions, as sum-
marized below.
For any irreducible ergodic Markov chain, lim n→∞ pij(n) exists and is independent of i.
Furthermore,

$$\lim_{n \to \infty} p_{ij}^{(n)} = \pi_j > 0,$$

where the πj uniquely satisfy the following steady-state equations
$$\pi_j = \sum_{i=0}^{M} \pi_i\, p_{ij}, \qquad \text{for } j = 0, 1, \ldots, M,$$

$$\sum_{j=0}^{M} \pi_j = 1.$$

If you prefer to work with a system of equations in matrix form, this system (excluding
the sum = 1 equation) also can be expressed as

π = πP,

where π = (π0, π1, . . . , πM).
The πj are called the steady-state probabilities of the Markov chain. The term steady-
state probability means that the probability of finding the process in a certain state, say j,
after a large number of transitions tends to the value πj, independent of the probability
distribution of the initial state. It is important to note that the steady-state probability does
not imply that the process settles down into one state. On the contrary, the process continues
to make transitions from state to state, and at any step n the transition probability
from state i to state j is still pij.

The πj can also be interpreted as stationary probabilities (not to be confused with
stationary transition probabilities) in the following sense. If the initial probability of
being in state j is given by πj (that is, P{X0 = j} = πj) for all j, then the probability
of finding the process in state j at time n = 1, 2, . . . is also given by πj (that is,
P{Xn = j} = πj).
Note that the steady-state equations consist of M + 2 equations in M + 1 unknowns.
Because the system has a unique solution, at least one equation must be redundant and can, therefore,
be deleted. It cannot be the equation

$$\sum_{j=0}^{M} \pi_j = 1,$$

because πj = 0 for all j will satisfy the other M + 1 equations. Furthermore, the solutions
to the other M + 1 steady-state equations are unique up to a multiplicative constant,
and it is the final equation that forces the solution to be a probability distribution.
Application to the Weather Example. The weather example introduced in Sec. 29.1
and formulated in Sec. 29.2 has only two states (dry and rain), so the above steady-state
equations become

π0 = π0 p00 + π1 p10,
π1 = π0 p01 + π1 p11,
1 = π0 + π1.
The intuition behind the first equation is that, in steady state, the probability of being in
state 0 after the next transition must equal (1) the probability of being in state 0 now and
then staying in state 0 after the next transition plus (2) the probability of being in state 1
now and next making the transition to state 0. The logic for the second equation is the
same, except in terms of state 1. The third equation simply expresses the fact that the
probabilities of these mutually exclusive states must sum to 1.
Referring to the transition probabilities given in Sec. 29.2 for this example, these
equations become
π0 = 0.8π0 + 0.6π1, so 0.2π0 = 0.6π1,
π1 = 0.2π0 + 0.4π1, so 0.6π1 = 0.2π0,
1 = π0 + π1.

Note that one of the first two equations is redundant since both equations reduce to
π0 = 3π1. Combining this result with the third equation immediately yields the following
steady-state probabilities:

π0 = 0.75, π1 = 0.25.
These are the same probabilities as obtained in each row of the five-step transition matrix
calculated in Sec. 29.3 because five transitions proved enough to make the state probabil-
ities essentially independent of the initial state.
Application to the Inventory Example. The inventory example introduced in
Sec. 29.1 and formulated in Sec. 29.2 has four states. Therefore, in this case, the steady-
state equations can be expressed as

π0 = π0 p00 + π1 p10 + π2 p20 + π3 p30,
π1 = π0 p01 + π1 p11 + π2 p21 + π3 p31,
π2 = π0 p02 + π1 p12 + π2 p22 + π3 p32,
π3 = π0 p03 + π1 p13 + π2 p23 + π3 p33,
1 = π0 + π1 + π2 + π3.
Substituting values for pij (see the transition matrix in Sec. 29.2) into these equations leads
to the equations
π0 = 0.080π0 + 0.632π1 + 0.264π2 + 0.080π3,
π1 = 0.184π0 + 0.368π1 + 0.368π2 + 0.184π3,
π2 = 0.368π0 + 0.368π2 + 0.368π3,
π3 = 0.368π0 + 0.368π3,
1 = π0 + π1 + π2 + π3.

Solving the last four equations simultaneously provides the solution

π0 = 0.286, π1 = 0.285, π2 = 0.263, π3 = 0.166,
which is essentially the result that appears in matrix P(8) in Sec. 29.3. Thus, after many
weeks the probability of finding zero, one, two, and three cameras in stock at the end of
a week tends to 0.286, 0.285, 0.263, and 0.166, respectively.
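In practice one replaces a redundant balance equation with the normalization equation and solves the resulting linear system. A sketch (Python with NumPy; the function name steady_state is ours):

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1 by replacing the
    last balance equation with the normalization equation."""
    m = P.shape[0]
    A = P.T - np.eye(m)
    A[-1, :] = 1.0                   # normalization row
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])
print(steady_state(P).round(3))      # [0.286 0.285 0.263 0.166]
```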
More about Steady-State Probabilities. Your IOR Tutorial includes a procedure
for solving the steady-state equations to obtain the steady-state probabilities.
There are other important results concerning steady-state probabilities. In particular,
if i and j are recurrent states belonging to different classes, then

pij(n) = 0, for all n.

This result follows from the definition of a class.

Similarly, if j is a transient state, then

$$\lim_{n \to \infty} p_{ij}^{(n)} = 0, \qquad \text{for all } i.$$
Thus, the probability of finding the process in a transient state after a large number of
transitions tends to zero.
Expected Average Cost per Unit Time
The preceding subsection dealt with irreducible finite-state Markov chains whose states
were ergodic (recurrent and aperiodic). If the requirement that the states be aperiodic is
relaxed, then the limit

$$\lim_{n \to \infty} p_{ij}^{(n)}$$

may not exist. To illustrate this point, consider the two-state transition matrix

$$P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.$$

If the process starts in state 0 at time 0, it will be in state 0 at times 2, 4, 6, . . . and in
state 1 at times 1, 3, 5, . . . . Thus, p00(n) = 1 if n is even and p00(n) = 0 if n is odd, so that

$$\lim_{n \to \infty} p_{00}^{(n)}$$

does not exist. However, the following limit always exists for an irreducible (finite-state)
Markov chain:
$$\lim_{n \to \infty} \left( \frac{1}{n} \sum_{k=1}^{n} p_{ij}^{(k)} \right) = \pi_j,$$

where the πj satisfy the steady-state equations given in the preceding subsection.
This result is important in computing the long-run average cost per unit time associated
with a Markov chain. Suppose that a cost (or other penalty function) C(Xt) is incurred
when the process is in state Xt at time t, for t = 0, 1, 2, . . . . Note that C(Xt) is a
random variable that takes on any one of the values C(0), C(1), . . . , C(M) and that the
function C(·) is independent of t. The expected average cost incurred over the first n periods
is given by

$$E\!\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right].$$
By using the result that

$$\lim_{n \to \infty} \left( \frac{1}{n} \sum_{k=1}^{n} p_{ij}^{(k)} \right) = \pi_j,$$

it can be shown that the (long-run) expected average cost per unit time is given by

$$\lim_{n \to \infty} E\!\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right] = \sum_{j=0}^{M} \pi_j\, C(j).$$
Application to the Inventory Example. To illustrate, consider the inventory example
introduced in Sec. 29.1, where the solution for the πj was obtained in an earlier
subsection. Suppose the camera store finds that a storage charge is being allocated for
each camera remaining on the shelf at the end of the week. The cost is charged as
follows:
$$C(X_t) = \begin{cases} 0 & \text{if } X_t = 0 \\ 2 & \text{if } X_t = 1 \\ 8 & \text{if } X_t = 2 \\ 18 & \text{if } X_t = 3. \end{cases}$$

Using the steady-state probabilities found earlier in this section, the long-run expected
average storage cost per week can then be obtained from the preceding equation, i.e.,

$$\lim_{n \to \infty} E\!\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right] = 0.286(0) + 0.285(2) + 0.263(8) + 0.166(18) = 5.662.$$
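A one-line check of this arithmetic (Python with NumPy; variable names are ours):

```python
import numpy as np

pi = np.array([0.286, 0.285, 0.263, 0.166])   # steady-state probabilities
storage_cost = np.array([0, 2, 8, 18])        # C(j) for j = 0, 1, 2, 3
print(pi @ storage_cost)                      # approximately 5.662
```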
Note that an alternative measure to the (long-run) expected average cost per unit time
is the (long-run) actual average cost per unit time. It can be shown that this latter mea-
sure also is given by
$$\lim_{n \to \infty} \left( \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right) = \sum_{j=0}^{M} \pi_j\, C(j)$$
for essentially all paths of the process. Thus, either measure leads to the same result. These
results can also be used to interpret the meaning of the πj. To do so, let
$$C(X_t) = \begin{cases} 1 & \text{if } X_t = j \\ 0 & \text{if } X_t \ne j. \end{cases}$$
The (long-run) expected fraction of times the system is in state j is then given by
$$\lim_{n \to \infty} E\!\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t) \right] = \lim_{n \to \infty} E(\text{fraction of times system is in state } j) = \pi_j.$$
Similarly, πj can also be interpreted as the (long-run) actual fraction of times that the system
is in state j.
Expected Average Cost per Unit Time for Complex Cost Functions
In the preceding subsection, the cost function was based solely on the state that the
process is in at time t. In many important problems encountered in practice, the cost may
also depend upon some other random variable.
For example, in the inventory example introduced in Sec. 29.1, suppose that the costs
to be considered are the ordering cost and the penalty cost for unsatisfied demand (stor-
age costs are so small they will be ignored). It is reasonable to assume that the number
of cameras ordered to arrive at the beginning of week t depends only upon the state of
the process Xt�1 (the number of cameras in stock) when the order is placed at the end of
week t � 1. However, the cost of unsatisfied demand in week t will also depend upon the
demand Dt. Therefore, the total cost (ordering cost plus cost of unsatisfied demand) for
week t is a function of Xt�1 and Dt, that is, C(Xt�1, Dt).
Under the assumptions of this example, it can be shown that the (long-run) expected
average cost per unit time is given by
$$\lim_{n \to \infty} E\!\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_{t-1}, D_t) \right] = \sum_{j=0}^{M} k(j)\, \pi_j,$$
where

k(j) = E[C(j, Dt)],

and where this latter (conditional) expectation is taken with respect to the probability distribution
of the random variable Dt, given the state j. Similarly, the (long-run) actual average
cost per unit time is given by

$$\lim_{n \to \infty} \left( \frac{1}{n} \sum_{t=1}^{n} C(X_{t-1}, D_t) \right) = \sum_{j=0}^{M} k(j)\, \pi_j.$$
Now let us assign numerical values to the two components of C(Xt−1, Dt) in this
example, namely, the ordering cost and the penalty cost for unsatisfied demand. If z > 0
cameras are ordered, the cost incurred is (10 + 25z) dollars. If no cameras are ordered,
no ordering cost is incurred. For each unit of unsatisfied demand (lost sales), there is a
penalty of $50. Therefore, given the ordering policy described in Sec. 29.1, the cost in
week t is given by

$$C(X_{t-1}, D_t) = \begin{cases} 10 + (25)(3) + 50 \max\{D_t - 3,\, 0\} & \text{if } X_{t-1} = 0 \\ 50 \max\{D_t - X_{t-1},\, 0\} & \text{if } X_{t-1} \ge 1, \end{cases}$$

for t = 1, 2, . . . . Hence,

C(0, Dt) = 85 + 50 max{Dt − 3, 0},

so that

k(0) = E[C(0, Dt)] = 85 + 50 E(max{Dt − 3, 0})
= 85 + 50[PD(4) + 2PD(5) + 3PD(6) + ⋅⋅⋅],
where PD(i) is the probability that the demand equals i, as given by a Poisson distribution
with a mean of 1, so that PD(i) becomes negligible for i larger than about 6. Since
PD(4) = 0.015, PD(5) = 0.003, and PD(6) = 0.001, we obtain k(0) = 86.2. Also using
PD(2) = 0.184 and PD(3) = 0.061, similar calculations lead to the results

k(1) = E[C(1, Dt)] = 50 E(max{Dt − 1, 0}) = 50[PD(2) + 2PD(3) + 3PD(4) + ⋅⋅⋅] = 18.4,

k(2) = E[C(2, Dt)] = 50 E(max{Dt − 2, 0}) = 50[PD(3) + 2PD(4) + 3PD(5) + ⋅⋅⋅] = 5.2,

and

k(3) = E[C(3, Dt)] = 50 E(max{Dt − 3, 0}) = 50[PD(4) + 2PD(5) + 3PD(6) + ⋅⋅⋅] = 1.2.
Thus, the (long-run) expected average cost per week is given by

$$\sum_{j=0}^{3} k(j)\, \pi_j = 86.2(0.286) + 18.4(0.285) + 5.2(0.263) + 1.2(0.166) = \$31.46.$$
This is the cost associated with the particular ordering policy described in Sec. 29.1.
The cost of other ordering policies can be evaluated in a similar way to identify the pol-
icy that minimizes the expected average cost per week.
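These conditional expectations are easy to mechanize. A sketch (Python with NumPy and SciPy; the function name k is ours) that reproduces the k(j) values and the $31.46 figure, truncating the Poisson distribution where its tail becomes negligible:

```python
import numpy as np
from scipy.stats import poisson

def k(j, penalty=50.0, order_cost=10 + 25 * 3, max_d=30):
    """Expected one-week cost when the previous week ended in state j."""
    stock = 3 if j == 0 else j                 # order-up-to-3 policy
    fixed = order_cost if j == 0 else 0.0
    d = np.arange(max_d)                       # demand values 0, 1, ..., max_d - 1
    shortfall = np.maximum(d - stock, 0)       # lost sales
    return fixed + penalty * (poisson.pmf(d, 1.0) * shortfall).sum()

pi = np.array([0.286, 0.285, 0.263, 0.166])
ks = np.array([k(j) for j in range(4)])
print(ks.round(1))           # approximately [86.2 18.4  5.2  1.2]
print((ks @ pi).round(2))    # approximately 31.46
```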
The results of this subsection were presented only in terms of the inventory example.
However, the (nonnumerical) results still hold for other problems as long as the follow-
ing conditions are satisfied:
1. {Xt} is an irreducible (finite-state) Markov chain.
2. Associated with this Markov chain is a sequence of random variables {Dt} which are
independent and identically distributed.
3. For a fixed m = 0, ±1, ±2, . . . , a cost C(Xt, Dt+m) is incurred at time t, for t = 0, 1,
2, . . . .
4. The sequence X0, X1, X2, . . . , Xt must be independent of Dt+m.
In particular, if these conditions are satisfied, then

$$\lim_{n \to \infty} E\!\left[ \frac{1}{n} \sum_{t=1}^{n} C(X_t, D_{t+m}) \right] = \sum_{j=0}^{M} k(j)\, \pi_j,$$

where

k(j) = E[C(j, Dt+m)],

and where this latter conditional expectation is taken with respect to the probability distribution
of the random variable Dt+m, given the state j. Furthermore,

$$\lim_{n \to \infty} \left( \frac{1}{n} \sum_{t=1}^{n} C(X_t, D_{t+m}) \right) = \sum_{j=0}^{M} k(j)\, \pi_j$$

for essentially all paths of the process.
■ 29.6 FIRST PASSAGE TIMES
Section 29.3 dealt with finding n-step transition probabilities from state i to state j. It is
often desirable to also make probability statements about the number of transitions made
by the process in going from state i to state j for the first time. This length of time is called
the first passage time in going from state i to state j. When j = i, this first passage time
is just the number of transitions until the process returns to the initial state i. In this case,
the first passage time is called the recurrence time for state i.
To illustrate these definitions, reconsider the inventory example introduced in Sec. 29.1,
where Xt is the number of cameras on hand at the end of week t, and where we start with X0 = 3.
Suppose that it turns out that

X0 = 3, X1 = 2, X2 = 1, X3 = 0, X4 = 3, X5 = 1.

In this case, the first passage time in going from state 3 to state 1 is 2 weeks, the first passage
time in going from state 3 to state 0 is 3 weeks, and the recurrence time for state 3 is 4 weeks.
In general, the first passage times are random variables. The probability distributions
associated with them depend upon the transition probabilities of the process. In particu-
lar, let fij(n) denote the probability that the first passage time from state i to j is equal to n.
For n > 1, this first passage time is n if the first transition is from state i to some state
k (k ≠ j) and then the first passage time from state k to state j is n − 1. Therefore, these
probabilities satisfy the following recursive relationships:

$$\begin{aligned}
f_{ij}^{(1)} &= p_{ij}^{(1)} = p_{ij}, \\
f_{ij}^{(2)} &= \sum_{k \ne j} p_{ik}\, f_{kj}^{(1)}, \\
&\ \ \vdots \\
f_{ij}^{(n)} &= \sum_{k \ne j} p_{ik}\, f_{kj}^{(n-1)}.
\end{aligned}$$
Thus, the probability of a first passage time from state i to state j in n steps can be computed
recursively from the one-step transition probabilities.
In the inventory example, the probability distribution of the first passage time in going
from state 3 to state 0 is obtained from these recursive relationships as follows:

$$\begin{aligned}
f_{30}^{(1)} &= p_{30} = 0.080, \\
f_{30}^{(2)} &= p_{31} f_{10}^{(1)} + p_{32} f_{20}^{(1)} + p_{33} f_{30}^{(1)} \\
&= 0.184(0.632) + 0.368(0.264) + 0.368(0.080) = 0.243, \\
&\ \ \vdots
\end{aligned}$$

where the p3k and fk0(1) = pk0 are obtained from the (one-step) transition matrix given in
Sec. 29.2.
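The recursion translates directly into code. A sketch (Python with NumPy; the function name first_passage_probs is ours):

```python
import numpy as np

def first_passage_probs(P, i, j, n_max):
    """Return [f_ij^(1), ..., f_ij^(n_max)] using the recursion
    f_ij^(n) = sum over k != j of p_ik * f_kj^(n-1)."""
    m = P.shape[0]
    f = P[:, j].astype(float).copy()     # f_kj^(1) = p_kj for every k
    out = [f[i]]
    for _ in range(n_max - 1):
        f = np.array([sum(P[s, k] * f[k] for k in range(m) if k != j)
                      for s in range(m)])
        out.append(f[i])
    return out

P = np.array([[0.080, 0.184, 0.368, 0.368],
              [0.632, 0.368, 0.000, 0.000],
              [0.264, 0.368, 0.368, 0.000],
              [0.080, 0.184, 0.368, 0.368]])
print([round(x, 3) for x in first_passage_probs(P, 3, 0, 2)])   # [0.08, 0.243]
```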
For fixed i and j, the fij(n) are nonnegative numbers such that

$$\sum_{n=1}^{\infty} f_{ij}^{(n)} \le 1.$$

Unfortunately, this sum may be strictly less than 1, which implies that a process initially
in state i may never reach state j. When the sum does equal 1, fij(n) (for n = 1, 2, . . .)
can be considered as a probability distribution for the random variable, the first passage
time.

Although obtaining fij(n) for all n may be tedious, it is relatively simple to obtain the
expected first passage time from state i to state j. Denote this expectation by μij, which
is defined by

$$\mu_{ij} = \begin{cases} \infty & \text{if } \sum_{n=1}^{\infty} f_{ij}^{(n)} < 1 \\ \sum_{n=1}^{\infty} n f_{ij}^{(n)} & \text{if } \sum_{n=1}^{\infty} f_{ij}^{(n)} = 1. \end{cases}$$
Whenever

$$\sum_{n=1}^{\infty} f_{ij}^{(n)} = 1,$$

μij uniquely satisfies the equation

$$\mu_{ij} = 1 + \sum_{k \ne j} p_{ik}\, \mu_{kj}.$$
This equation recognizes that the first transition from state i can be to either state j or
to some other state k. If it is to state j, the first passage time is 1. Given that the first
transition is to some state k (k ≠ j) instead, which occurs with probability pik, the conditional
expected first passage time from state i to state j is 1 + μkj. Combining these
facts, and summing over all the possibilities for the first transition, leads directly to this
equation.
For the inventory example, these equations for the μij can be used to compute the
expected time until the cameras are out of stock, given that the process is started when
three cameras are available. This expected time is just the expected first passage time
μ30. Since all the states are recurrent, the system of equations leads to the expressions
μ30 = 1 + p31μ10 + p32μ20 + p33μ30,
μ20 = 1 + p21μ10 + p22μ20 + p23μ30,
μ10 = 1 + p11μ10 + p12μ20 + p13μ30,

or

μ30 = 1 + 0.184μ10 + 0.368μ20 + 0.368μ30,
μ20 = 1 + 0.368μ10 + 0.368μ20,
μ10 = 1 + 0.368μ10.

The simultaneous solution to this system of equations is

μ10 = 1.58 weeks,
μ20 = 2.51 weeks,
μ30 = 3.50 weeks,

so that the expected time until the cameras are out of stock is 3.50 weeks. Thus, in making
these calculations for μ30, we also obtain μ20 and μ10.

For the case of μij where j = i, μii is the expected number of transitions until the
process returns to the initial state i, and so is called the expected recurrence time for
state i. After obtaining the steady-state probabilities (π0, π1, . . . , πM) as described in the
preceding section, these expected recurrence times can be calculated immediately as

$$\mu_{ii} = \frac{1}{\pi_i}, \qquad \text{for } i = 0, 1, \ldots, M.$$

Thus, for the inventory example, where π0 = 0.286, π1 = 0.285, π2 = 0.263, and π3 = 0.166,
the corresponding expected recurrence times are

μ00 = 1/π0 = 3.50 weeks, μ11 = 1/π1 = 3.51 weeks, μ22 = 1/π2 = 3.80 weeks, μ33 = 1/π3 = 6.02 weeks.
�20 � 1 � p21�10 � p22�20 � p23�30,
�10 � 1 � p11�10 � p12�20 � p13�30,
or
�30 � 1 � 0.184�10 � 0.368�20 � 0.368�30,
�20 � 1 � 0.368�10 � 0.368�20,
�10 � 1 � 0.368�10.
The simultaneous solution to this system of equations is
$$\mu_{10} = 1.58 \text{ weeks}, \qquad \mu_{20} = 2.51 \text{ weeks}, \qquad \mu_{30} = 3.50 \text{ weeks},$$
so that the expected time until the cameras are out of stock is 3.50 weeks. Thus, in making
these calculations for $\mu_{30}$, we also obtain $\mu_{20}$ and $\mu_{10}$.
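For readers who want to verify these numbers, here is a minimal Python sketch (ours, not part of the original text) that solves the linear system for the expected first passage times into a fixed target state; the function name and index bookkeeping are our own:

    import numpy as np

    # One-step transition matrix of the inventory example (Sec. 29.2).
    P = np.array([[0.080, 0.184, 0.368, 0.368],
                  [0.632, 0.368, 0.000, 0.000],
                  [0.264, 0.368, 0.368, 0.000],
                  [0.080, 0.184, 0.368, 0.368]])

    def expected_first_passage_times(P, j):
        # Solve mu_ij = 1 + sum over k != j of p_ik * mu_kj for all i != j,
        # i.e., (I - P restricted to the non-target states) mu = 1.
        M = P.shape[0]
        others = [i for i in range(M) if i != j]
        A = np.eye(len(others)) - P[np.ix_(others, others)]
        mu = np.linalg.solve(A, np.ones(len(others)))
        return dict(zip(others, mu))

    print(expected_first_passage_times(P, j=0))
    # -> roughly {1: 1.58, 2: 2.50, 3: 3.50} weeks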
For the case of $\mu_{ij}$ where $j = i$, $\mu_{ii}$ is the expected number of transitions until the
process returns to the initial state $i$, and so is called the expected recurrence time for
state $i$. After obtaining the steady-state probabilities $(\pi_0, \pi_1, \ldots, \pi_M)$ as described in the
preceding section, these expected recurrence times can be calculated immediately as
$$\mu_{ii} = \frac{1}{\pi_i}, \qquad \text{for } i = 0, 1, \ldots, M.$$
Thus, for the inventory example, where $\pi_0 = 0.286$, $\pi_1 = 0.285$, $\pi_2 = 0.263$, and $\pi_3 = 0.166$,
the corresponding expected recurrence times are
$$\mu_{00} = \frac{1}{\pi_0} = 3.50 \text{ weeks}, \qquad \mu_{11} = \frac{1}{\pi_1} = 3.51 \text{ weeks},$$
$$\mu_{22} = \frac{1}{\pi_2} = 3.80 \text{ weeks}, \qquad \mu_{33} = \frac{1}{\pi_3} = 6.02 \text{ weeks}.$$
■ 29.7 ABSORBING STATES
It was pointed out in Sec. 29.4 that a state $k$ is called an absorbing state if $p_{kk} = 1$, so
that once the chain visits $k$ it remains there forever. If $k$ is an absorbing state, and the
process starts in state $i$, the probability of ever going to state $k$ is called the probability
of absorption into state $k$, given that the system started in state $i$. This probability is
denoted by $f_{ik}$.
When there are two or more absorbing states in a Markov chain, and it is evident that
the process will be absorbed into one of these states, it is desirable to find these probabilities
of absorption. These probabilities can be obtained by solving a system of linear
equations that considers all the possibilities for the first transition and then, given the first
transition, considers the conditional probability of absorption into state $k$. In particular, if
the state $k$ is an absorbing state, then the set of absorption probabilities $f_{ik}$ satisfies the
system of equations
$$f_{ik} = \sum_{j=0}^{M} p_{ij} f_{jk}, \qquad \text{for } i = 0, 1, \ldots, M,$$
subject to the conditions
$$f_{kk} = 1,$$
$$f_{ik} = 0, \qquad \text{if state } i \text{ is recurrent and } i \neq k.$$
Absorption probabilities are important in random walks. A random walk is a Markov
chain with the property that if the system is in a state i, then in a single transition the sys-
tem either remains at i or moves to one of the two states immediately adjacent to i. For
example, a random walk often is used as a model for situations involving gambling.
A Second Gambling Example. To illustrate the use of absorption probabilities in a ran-
dom walk, consider a gambling example similar to that presented in Sec. 29.2. However,
suppose now that two players (A and B), each having $2, agree to keep playing the game
and betting $1 at a time until one player is broke. The probability of A winning a single bet
is $\frac{1}{3}$, so B wins the bet with probability $\frac{2}{3}$. The number of dollars that player A has before
each bet (0, 1, 2, 3, or 4) provides the states of a Markov chain with transition matrix
$$P = \begin{array}{c|ccccc}
\text{State} & 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 0 & 0 & 0 & 0 \\
1 & \frac{2}{3} & 0 & \frac{1}{3} & 0 & 0 \\
2 & 0 & \frac{2}{3} & 0 & \frac{1}{3} & 0 \\
3 & 0 & 0 & \frac{2}{3} & 0 & \frac{1}{3} \\
4 & 0 & 0 & 0 & 0 & 1
\end{array}$$
where the rows and columns are indexed by the states 0 through 4.
Starting from state 2, the probability of absorption into state 0 (A losing all her money)
can be obtained by solving for f20 from the system of equations given at the beginning of
this section,
$$f_{00} = 1 \quad \text{(since state 0 is an absorbing state)},$$
$$f_{10} = \tfrac{2}{3} f_{00} + \tfrac{1}{3} f_{20},$$
$$f_{20} = \tfrac{2}{3} f_{10} + \tfrac{1}{3} f_{30},$$
$$f_{30} = \tfrac{2}{3} f_{20} + \tfrac{1}{3} f_{40},$$
$$f_{40} = 0 \quad \text{(since state 4 is an absorbing state)}.$$
Substituting the expressions for $f_{10}$ and $f_{30}$ into the equation for $f_{20}$, this system of
equations yields
$$f_{20} = \tfrac{2}{3}\left(\tfrac{2}{3} + \tfrac{1}{3} f_{20}\right) + \tfrac{1}{3}\left(\tfrac{2}{3} f_{20}\right) = \tfrac{4}{9} + \tfrac{4}{9} f_{20},$$
which reduces to $f_{20} = \tfrac{4}{5}$ as the probability of absorption into state 0.
Similarly, the probability of A finishing with $4 (B going broke) when starting with
$2 (state 2) is obtained by solving for f24 from the system of equations,
$$f_{04} = 0 \quad \text{(since state 0 is an absorbing state)},$$
$$f_{14} = \tfrac{2}{3} f_{04} + \tfrac{1}{3} f_{24},$$
$$f_{24} = \tfrac{2}{3} f_{14} + \tfrac{1}{3} f_{34},$$
$$f_{34} = \tfrac{2}{3} f_{24} + \tfrac{1}{3} f_{44},$$
$$f_{44} = 1 \quad \text{(since state 4 is an absorbing state)}.$$
This yields
$$f_{24} = \tfrac{2}{3}\left(\tfrac{1}{3} f_{24}\right) + \tfrac{1}{3}\left(\tfrac{2}{3} f_{24} + \tfrac{1}{3}\right) = \tfrac{4}{9} f_{24} + \tfrac{1}{9},$$
so $f_{24} = \tfrac{1}{5}$ is the probability of absorption into state 4. (As a check, $f_{20} + f_{24} = 1$,
since the process must eventually be absorbed into one of the two absorbing states.)
A Credit Evaluation Example. There are many other situations where absorbing states
play an important role. Consider a department store that classifies the balance of a customer’s
bill as fully paid (state 0), 1 to 30 days in arrears (state 1), 31 to 60 days in arrears (state
2), or bad debt (state 3). The accounts are checked monthly to determine the state of each
customer. In general, credit is not extended and customers are expected to pay their bills
promptly. Occasionally, customers miss the deadline for paying their bill. If this occurs
when the balance is within 30 days in arrears, the store views the customer as being in
state 1. If this occurs when the balance is between 31 and 60 days in arrears, the store
views the customer as being in state 2. Customers that are more than 60 days in arrears
are put into the bad-debt category (state 3), and then bills are sent to a collection agency.
After examining data over the past several years on the month-by-month progression
of individual customers from state to state, the store has developed the following transition
matrix:4

State               0: Fully    1: 1 to 30 Days   2: 31 to 60 Days   3: Bad
                    Paid        in Arrears        in Arrears         Debt
0: fully paid       1           0                 0                  0
1: 1 to 30 days
   in arrears       0.7         0.2               0.1                0
2: 31 to 60 days
   in arrears       0.5         0.1               0.2                0.2
3: bad debt         0           0                 0                  1

4Customers who are fully paid (in state 0) and then subsequently fall into arrears on new purchases are viewed
as "new" customers who start in state 1.
Although each customer ends up in state 0 or 3, the store is interested in determining the
probability that a customer will end up as a bad debt given that the account belongs to
the 1 to 30 days in arrears state, and similarly, given that the account belongs to the 31
to 60 days in arrears state.
To obtain this information, the set of equations presented at the beginning of this sec-
tion must be solved to obtain f13 and f23. By substituting, the following two equations are
obtained:
$$f_{13} = p_{10} f_{03} + p_{11} f_{13} + p_{12} f_{23} + p_{13} f_{33},$$
$$f_{23} = p_{20} f_{03} + p_{21} f_{13} + p_{22} f_{23} + p_{23} f_{33}.$$
Noting that $f_{03} = 0$ and $f_{33} = 1$, we now have two equations in two unknowns, namely,
$$(1 - p_{11}) f_{13} = p_{13} + p_{12} f_{23},$$
$$(1 - p_{22}) f_{23} = p_{23} + p_{21} f_{13}.$$
Substituting the values from the transition matrix leads to
$$0.8 f_{13} = 0.1 f_{23},$$
$$0.8 f_{23} = 0.2 + 0.1 f_{13},$$
and the solution is
$$f_{13} = 0.032,$$
$$f_{23} = 0.254.$$
Thus, approximately 3 percent of the customers whose accounts are 1 to 30 days in arrears
end up as bad debts, whereas about 25 percent of the customers whose accounts are 31 to
60 days in arrears end up as bad debts.
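As a quick cross-check (a sketch of ours, not the book's procedure), the two-equation system can be solved directly from the transient submatrix Q of states 1 and 2 and the one-step probabilities into state 3:

    import numpy as np

    # Transitions among the transient states 1 and 2 of the credit example.
    Q = np.array([[0.2, 0.1],
                  [0.1, 0.2]])
    # One-step probabilities of moving from states 1 and 2 into state 3 (bad debt).
    r3 = np.array([0.0, 0.2])

    # Solve (I - Q) f = r3 for f = (f13, f23).
    f = np.linalg.solve(np.eye(2) - Q, r3)
    print(f)   # -> approximately [0.032, 0.254]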
■ 29.8 CONTINUOUS TIME MARKOV CHAINS
In all the previous sections, we assumed that the time parameter $t$ was discrete (that is,
$t = 0, 1, 2, \ldots$). Such an assumption is suitable for many problems, but there are certain
cases (such as for some queueing models considered in Chap. 17) where a continuous time
parameter (call it $t'$) is required, because the evolution of the process is being observed
continuously over time. The definition of a Markov chain given in Sec. 29.2 also extends
to such continuous processes. This section focuses on describing these "continuous time
Markov chains" and their properties.
Formulation
As before, we label the possible states of the system as $0, 1, \ldots, M$. Starting at time 0
and letting the time parameter $t'$ run continuously for $t' \geq 0$, we let the random variable
$X(t')$ be the state of the system at time $t'$. Thus, $X(t')$ will take on one of its possible
$(M + 1)$ values over some interval, $0 \leq t' < t_1$, then will jump to another value over the
next interval, $t_1 \leq t' < t_2$, etc., where these transit points $(t_1, t_2, \ldots)$ are random points
in time (not necessarily integer).
Now consider the three points in time (1) $t' = r$ (where $r \geq 0$), (2) $t' = s$ (where
$s > r$), and (3) $t' = s + t$ (where $t > 0$), interpreted as follows:
$t' = r$ is a past time,
$t' = s$ is the current time,
$t' = s + t$ is $t$ time units into the future.
Therefore, the state of the system now has been observed at times $t' = s$ and $t' = r$. Label
these states as
$$X(s) = i \quad \text{and} \quad X(r) = x(r).$$
Given this information, it now would be natural to seek the probability distribution of the
state of the system at time $t' = s + t$. In other words, what is
$$P\{X(s + t) = j \mid X(s) = i \text{ and } X(r) = x(r)\}, \qquad \text{for } j = 0, 1, \ldots, M?$$
Deriving this conditional probability often is very difficult. However, this task is con-
siderably simplified if the stochastic process involved possesses the following key property.
A continuous time stochastic process $\{X(t');\, t' \geq 0\}$ has the Markovian
property if
$$P\{X(t + s) = j \mid X(s) = i \text{ and } X(r) = x(r)\} = P\{X(t + s) = j \mid X(s) = i\},$$
for all $i, j = 0, 1, \ldots, M$ and for all $r \geq 0$, $s > r$, and $t > 0$.
Note that $P\{X(t + s) = j \mid X(s) = i\}$ is a transition probability, just like the transition
probabilities for discrete time Markov chains considered in the preceding sections,
where the only difference is that $t$ now need not be an integer.
If the transition probabilities are independent of $s$, so that
$$P\{X(t + s) = j \mid X(s) = i\} = P\{X(t) = j \mid X(0) = i\}$$
for all $s > 0$, they are called stationary transition probabilities.
To simplify notation, we shall denote these stationary transition probabilities by
$$p_{ij}(t) = P\{X(t) = j \mid X(0) = i\},$$
where $p_{ij}(t)$ is referred to as the continuous time transition probability function. We
assume that
$$\lim_{t \to 0} p_{ij}(t) =
\begin{cases}
1 & \text{if } i = j \\
0 & \text{if } i \neq j.
\end{cases}$$
Now we are ready to define the continuous time Markov chains to be considered in
this section.
A continuous time stochastic process $\{X(t');\, t' \geq 0\}$ is a continuous time Markov chain
if it has the Markovian property.
We shall restrict our consideration to continuous time Markov chains with the following
properties:
1. A finite number of states.
2. Stationary transition probabilities.
Some Key Random Variables
In the analysis of continuous time Markov chains, one key set of random variables is the
following:
Each time the process enters state $i$, the amount of time it spends in that state before
moving to a different state is a random variable $T_i$, where $i = 0, 1, \ldots, M$.
Suppose that the process enters state $i$ at time $t' = s$. Then, for any fixed amount of
time $t > 0$, note that $T_i > t$ if and only if $X(t') = i$ for all $t'$ over the interval $s \leq t' \leq s + t$.
Therefore, the Markovian property (with stationary transition probabilities) implies
that
$$P\{T_i > t + s \mid T_i > s\} = P\{T_i > t\}.$$
This is a rather unusual property for a probability distribution to possess. It says that the
probability distribution of the remaining time until the process transits out of a given state
always is the same, regardless of how much time the process has already spent in that state.
In effect, the random variable is memoryless; the process forgets its history. There is only
one (continuous) probability distribution that possesses this property—the exponential
distribution. The exponential distribution has a single parameter, call it q, where the mean
is 1/q and the cumulative distribution function is
$$P\{T_i \leq t\} = 1 - e^{-qt}, \qquad \text{for } t \geq 0.$$
(We described the properties of the exponential distribution in detail in Sec. 17.4.)
This result leads to an equivalent way of describing a continuous time Markov chain:
1. The random variable $T_i$ has an exponential distribution with a mean of $1/q_i$.
2. When leaving state $i$, the process moves to a state $j$ with probability $p_{ij}$, where the $p_{ij}$
satisfy the conditions
$$p_{ii} = 0 \quad \text{for all } i,$$
and
$$\sum_{j=0}^{M} p_{ij} = 1 \quad \text{for all } i.$$
3. The next state visited after state i is independent of the time spent in state i.
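This alternative description translates directly into a simulation scheme: draw an exponential holding time with parameter $q_i$, then draw the next state from the $p_{ij}$. The following Python sketch implements that scheme for a purely hypothetical three-state chain; all rates and jump probabilities below are invented for illustration:

    import random

    def simulate_ctmc(q, p, state, horizon):
        # q[i]: rate of leaving state i (mean holding time 1/q[i]);
        # p[i][j]: probability that the jump from i goes to j (p[i][i] = 0).
        t, path = 0.0, [(0.0, state)]
        while True:
            t += random.expovariate(q[state])  # holding time T_i ~ exponential(q_i)
            if t >= horizon:
                return path
            state = random.choices(range(len(q)), weights=p[state])[0]
            path.append((t, state))

    # Hypothetical rates and jump probabilities, for illustration only.
    q = [1.0, 2.0, 4.0]
    p = [[0.0, 0.5, 0.5],
         [0.5, 0.0, 0.5],
         [0.5, 0.5, 0.0]]
    print(simulate_ctmc(q, p, state=0, horizon=5.0))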
Just as the one-step transition probabilities played a major role in describing discrete
time Markov chains, the analogous role for a continuous time Markov chain is played by
the transition intensities.
The transition intensities are
$$q_i = -\frac{d}{dt}\, p_{ii}(0) = \lim_{t \to 0} \frac{1 - p_{ii}(t)}{t}, \qquad \text{for } i = 0, 1, 2, \ldots, M,$$
and
$$q_{ij} = \frac{d}{dt}\, p_{ij}(0) = \lim_{t \to 0} \frac{p_{ij}(t)}{t} = q_i\, p_{ij}, \qquad \text{for all } j \neq i,$$
where pij(t) is the continuous time transition probability function introduced at the beginning
of the section and pij is the probability described in property 2 of the preceding paragraph.
Furthermore, qi as defined here turns out to still be the parameter of the exponential distrib-
ution for Ti as well (see property 1 of the preceding paragraph).
The intuitive interpretation of the $q_i$ and $q_{ij}$ is that they are transition rates. In particular,
$q_i$ is the transition rate out of state $i$ in the sense that $q_i$ is the expected number
of times that the process leaves state $i$ per unit of time spent in state $i$. (Thus, $q_i$ is the
reciprocal of the expected time that the process spends in state $i$ per visit to state $i$; that
is, $q_i = 1/E[T_i]$.) Similarly, $q_{ij}$ is the transition rate from state $i$ to state $j$ in the sense that
$q_{ij}$ is the expected number of times that the process transits from state $i$ to state $j$ per unit
of time spent in state $i$. Thus,
$$q_i = \sum_{j \neq i} q_{ij}.$$
Just as $q_i$ is the parameter of the exponential distribution for $T_i$, each $q_{ij}$ is the parameter
of an exponential distribution for a related random variable described below:
Each time the process enters state $i$, the amount of time it will spend in state $i$ before a
transition to state $j$ occurs (if a transition to some other state does not occur first) is a random
variable $T_{ij}$, where $i, j = 0, 1, \ldots, M$ and $j \neq i$. The $T_{ij}$ are independent random variables,
where each $T_{ij}$ has an exponential distribution with parameter $q_{ij}$, so $E[T_{ij}] = 1/q_{ij}$.
The time spent in state $i$ until a transition occurs ($T_i$) is the minimum (over $j \neq i$) of the
$T_{ij}$. When the transition occurs, the probability that it is to state $j$ is $p_{ij} = q_{ij}/q_i$.
Steady-State Probabilities
Just as the transition probabilities for a discrete time Markov chain satisfy the Chapman-
Kolmogorov equations, the continuous time transition probability function also satisfies
these equations. Therefore, for any states $i$ and $j$ and nonnegative numbers $t$ and $s$
$(0 \leq s \leq t)$,
$$p_{ij}(t) = \sum_{k=0}^{M} p_{ik}(s)\, p_{kj}(t - s).$$
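The text does not develop this point, but for a finite chain the functions $p_{ij}(t)$ can be computed as the matrix exponential $e^{Qt}$ of the generator matrix $Q$ whose off-diagonal entries are the $q_{ij}$ and whose diagonal entries are $-q_i$ (a standard fact). Assuming SciPy is available, the sketch below checks the Chapman-Kolmogorov property numerically for an invented generator:

    import numpy as np
    from scipy.linalg import expm

    # A hypothetical generator matrix: off-diagonal entries are the q_ij,
    # each diagonal entry is -q_i, so every row sums to 0.
    Q = np.array([[-2.0,  2.0,  0.0],
                  [ 2.0, -3.0,  1.0],
                  [ 0.0,  2.0, -2.0]])

    def p_matrix(t):
        return expm(Q * t)  # matrix of transition probability functions p_ij(t)

    s, t = 0.4, 1.0
    print(np.allclose(p_matrix(t), p_matrix(s) @ p_matrix(t - s)))  # -> True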
A pair of states $i$ and $j$ are said to communicate if there are times $t_1$ and $t_2$ such that
$p_{ij}(t_1) > 0$ and $p_{ji}(t_2) > 0$. All states that communicate are said to form a class. If all
states form a single class, i.e., if the Markov chain is irreducible (hereafter assumed), then
$$p_{ij}(t) > 0, \qquad \text{for all } t > 0 \text{ and all states } i \text{ and } j.$$
Furthermore,
$$\lim_{t \to \infty} p_{ij}(t) = \pi_j$$
always exists and is independent of the initial state of the Markov chain, for $j = 0, 1, \ldots, M$.
These limiting probabilities are commonly referred to as the steady-state probabilities (or
stationary probabilities) of the Markov chain.
The $\pi_j$ satisfy the equations
$$\pi_j = \sum_{i=0}^{M} \pi_i\, p_{ij}(t), \qquad \text{for } j = 0, 1, \ldots, M \text{ and every } t \geq 0.$$
However, the following steady-state equations provide a more useful system of equations
for solving for the steady-state probabilities:
$$\pi_j q_j = \sum_{i \neq j} \pi_i q_{ij}, \qquad \text{for } j = 0, 1, \ldots, M,$$
and
$$\sum_{j=0}^{M} \pi_j = 1.$$
The steady-state equation for state $j$ has an intuitive interpretation. The left-hand side
$(\pi_j q_j)$ is the rate at which the process leaves state $j$, since $\pi_j$ is the (steady-state) probability
that the process is in state $j$ and $q_j$ is the transition rate out of state $j$ given that the
process is in state $j$. Similarly, each term on the right-hand side $(\pi_i q_{ij})$ is the rate at which
the process enters state $j$ from state $i$, since $q_{ij}$ is the transition rate from state $i$ to state $j$
given that the process is in state $i$. By summing over all $i \neq j$, the entire right-hand side
then gives the rate at which the process enters state $j$ from any other state. The overall
equation thereby states that the rate at which the process leaves state $j$ must equal the rate
at which the process enters state $j$. Thus, this equation is analogous to the conservation of
flow equations encountered in many engineering and science courses.
Because each of the first $M + 1$ steady-state equations requires that two rates be in
balance (equal), these equations sometimes are called the balance equations.
Example. A certain shop has two identical machines that are operated continuously
except when they are broken down. Because they break down fairly frequently, the top-
priority assignment for a full-time maintenance person is to repair them whenever
needed.
The time required to repair a machine has an exponential distribution with a mean of
$\frac{1}{2}$ day. Once the repair of a machine is completed, the time until the next breakdown of
that machine has an exponential distribution with a mean of 1 day. These distributions are
independent.
Define the random variable $X(t')$ as
$$X(t') = \text{number of machines broken down at time } t',$$
so the possible values of $X(t')$ are 0, 1, 2. Therefore, by letting the time parameter $t'$ run
continuously from time 0, the continuous time stochastic process $\{X(t');\, t' \geq 0\}$ gives the
evolution of the number of machines broken down.
Because both the repair time and the time until a breakdown have exponential distributions,
$\{X(t');\, t' \geq 0\}$ is a continuous time Markov chain5 with states 0, 1, 2. Consequently,
we can use the steady-state equations given in the preceding subsection to find
the steady-state probability distribution of the number of machines broken down. To do
this, we need to determine all the transition rates, i.e., the $q_i$ and $q_{ij}$ for $i, j = 0, 1, 2$.
The state (number of machines broken down) increases by 1 when a breakdown
occurs and decreases by 1 when a repair occurs. Since both breakdowns and repairs
occur one at a time, $q_{02} = 0$ and $q_{20} = 0$. The expected repair time is $\frac{1}{2}$ day, so the rate
at which repairs are completed (when any machines are broken down) is 2 per day, which
implies that $q_{21} = 2$ and $q_{10} = 2$. Similarly, the expected time until a particular operational
machine breaks down is 1 day, so the rate at which it breaks down (when operational) is
1 per day, which implies that $q_{12} = 1$. During times when both machines are operational,
breakdowns occur at the rate of $1 + 1 = 2$ per day, so $q_{01} = 2$.
These transition rates are summarized in the rate diagram shown in Fig. 29.5. These
rates now can be used to calculate the total transition rate out of each state.
$$q_0 = q_{01} = 2, \qquad q_1 = q_{10} + q_{12} = 3, \qquad q_2 = q_{21} = 2.$$
Plugging all the rates into the steady-state equations given in the preceding subsection
then yields
Balance equation for state 0: $2\pi_0 = 2\pi_1$
Balance equation for state 1: $3\pi_1 = 2\pi_0 + 2\pi_2$
Balance equation for state 2: $2\pi_2 = \pi_1$
Probabilities sum to 1: $\pi_0 + \pi_1 + \pi_2 = 1$
Any one of the balance equations (say, the second) can be deleted as redundant, and the
simultaneous solution of the remaining equations gives the steady-state distribution as
$$(\pi_0, \pi_1, \pi_2) = \left(\tfrac{2}{5}, \tfrac{2}{5}, \tfrac{1}{5}\right).$$
Thus, in the long run, both machines will be broken down simultaneously 20 percent of
the time, and one machine will be broken down another 40 percent of the time.
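To verify this solution mechanically (a sketch of ours, not part of the original example), the three retained equations can be solved as a linear system:

    import numpy as np

    # Balance equations for the two-machine example, with the redundant
    # state-2 balance row replaced by the normalization sum(pi) = 1:
    #   2*pi0 - 2*pi1          = 0   (rate out of state 0 = rate in)
    #  -2*pi0 + 3*pi1 - 2*pi2  = 0   (rate out of state 1 = rate in)
    #    pi0 +   pi1 +   pi2   = 1
    A = np.array([[ 2.0, -2.0,  0.0],
                  [-2.0,  3.0, -2.0],
                  [ 1.0,  1.0,  1.0]])
    b = np.array([0.0, 0.0, 1.0])
    print(np.linalg.solve(A, b))   # -> [0.4, 0.4, 0.2]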
5Proving this fact requires the use of two properties of the exponential distribution discussed in Sec. 17.4 (lack
of memory, and that the minimum of exponentials is exponential), since these properties imply that the $T_{ij}$ random
variables introduced earlier do indeed have exponential distributions.
[Figure 29.5. The rate diagram for the example of a continuous time Markov chain, showing states 0, 1, 2 with transition rates $q_{01} = 2$, $q_{12} = 1$, $q_{10} = 2$, $q_{21} = 2$.]
Chapter 17 (on queueing theory) features many more examples of continuous time
Markov chains. In fact, most of the basic models of queueing theory fall into this cate-
gory. The current example actually fits one of these models (the finite calling population
variation of the M/M/s model included in Sec. 17.6).
■ SELECTED REFERENCES
1. Bhat, U. N., and G. K. Miller: Elements of Applied Stochastic Processes, 3rd ed., Wiley, New
York, 2002.
2. Bini, D., G. Latouche, and B. Meini: Numerical Methods for Structured Markov Chains, Oxford
University Press, New York, 2005.
3. Bukiet, B., E. R. Harold, and J. L. Palacios: “A Markov Chain Approach to Baseball,” Operations
Research, 45: 14–23, 1997.
4. Ching, W.-K., X. Huang, M. K. Ng, and T.-K. Siu: Markov Chains: Models, Algorithms and
Applications, 2nd ed., Springer, New York, 2013.
5. Grassmann, W. K. (ed.): Computational Probability, Kluwer Academic Publishers (now
Springer), Boston, MA, 2000.
6. Mamon, R. S., and R. J. Elliott (eds.): Hidden Markov Models in Finance, Springer, New York, 2007.
Volume 2 is scheduled for publication in 2015.
7. Resnick, S. I.: Adventures in Stochastic Processes, Birkhäuser, Boston, 1992.
8. Sheskin, T. J.: Markov Chains and Decision Processes for Engineers and Managers, CRC Press,
Boca Raton, 2011.
9. Tijms, H. C.: A First Course in Stochastic Models, Wiley, New York, 2003.
■ LEARNING AIDS FOR THIS CHAPTER ON OUR WEBSITE
Automatic Procedures in IOR Tutorial:
Enter Transition Matrix
Chapman-Kolmogorov Equations
Steady-State Probabilities
“Ch. 29—Markov Chains” LINGO File for Selected Examples
See Appendix 1 for documentation of the software.
■ PROBLEMS
The symbol C to the left of some of the problems (or their parts) has
the following meaning.
C: Use the computer with the corresponding automatic procedures
just listed (or other equivalent routines) to solve the problem.
29.2-1. Assume that the probability of rain tomorrow is 0.5 if it is
raining today, and assume that the probability of its being clear (no
rain) tomorrow is 0.9 if it is clear today. Also assume that these
probabilities do not change if information is also provided about
the weather before today.
(a) Explain why the stated assumptions imply that the Markovian
property holds for the evolution of the weather.
(b) Formulate the evolution of the weather as a Markov chain by
defining its states and giving its (one-step) transition matrix.
29.2-2. Consider the second version of the stock market model
presented as an example in Sec. 29.2. Whether the stock goes up
tomorrow depends upon whether it increased today and yesterday.
If the stock increased today and yesterday, it will increase tomorrow
with probability $\alpha_1$. If the stock increased today and decreased
yesterday, it will increase tomorrow with probability $\alpha_2$. If the stock
decreased today and increased yesterday, it will increase tomorrow
with probability $\alpha_3$. Finally, if the stock decreased today and yesterday,
it will increase tomorrow with probability $\alpha_4$.
(a) Construct the (one-step) transition matrix of the Markov chain.
(b) Explain why the states used for this Markov chain cause the
mathematical definition of the Markovian property to hold even
though what happens in the future (tomorrow) depends upon
what happened in the past (yesterday) as well as the present
(today).
29.2-3. Reconsider Prob. 29.2-2. Suppose now that whether or not
the stock goes up tomorrow depends upon whether it increased to-
day, yesterday, and the day before yesterday. Can this problem be
formulated as a Markov chain? If so, what are the possible states?
Explain why these states give the process the Markovian property
whereas the states in Prob. 29.2-2 do not.
29.3-1. Reconsider Prob. 29.2-1.
C (a) Use the procedure Chapman-Kolmogorov Equations in
your IOR Tutorial to find the n-step transition matrix P(n)
for n = 2, 5, 10, 20.
(b) The probability that it will rain today is 0.5. Use the results
from part (a) to determine the probability that it will rain n
days from now, for n = 2, 5, 10, 20.
C (c) Use the procedure Steady-State Probabilities in your IOR
Tutorial to determine the steady-state probabilities of the
state of the weather. Describe how the probabilities in the
n-step transition matrices obtained in part (a) compare to
these steady-state probabilities as n grows large.
29.3-2. Suppose that a communications network transmits binary
digits, 0 or 1, where each digit is transmitted 10 times in succes-
sion. During each transmission, the probability is 0.995 that the
digit entered will be transmitted accurately. In other words, the
probability is 0.005 that the digit being transmitted will be
recorded with the opposite value at the end of the transmission.
For each transmission after the first one, the digit entered for trans-
mission is the one that was recorded at the end of the preceding
transmission. If X0 denotes the binary digit entering the system,
X1 the binary digit recorded after the first transmission, X2 the bi-
nary digit recorded after the second transmission, . . . , then {Xn}
is a Markov chain.
(a) Construct the (one-step) transition matrix.
C (b) Use your IOR Tutorial to find the 10-step transition matrix
P(10). Use this result to identify the probability that a digit
entering the network will be recorded accurately after the
last transmission.
C (c) Suppose that the network is redesigned to improve the prob-
ability that a single transmission will be accurate from 0.995
to 0.998. Repeat part (b) to find the new probability that a
digit entering the network will be recorded accurately after
the last transmission.
29.3-3. A particle moves on a circle through points that have been
marked 0, 1, 2, 3, 4 (in a clockwise order). The particle starts at
point 0. At each step it has probability 0.5 of moving one point
clockwise (0 follows 4) and 0.5 of moving one point counter-
clockwise. Let Xn (n ≥ 0) denote its location on the circle after
step n. {Xn} is a Markov chain.
(a) Construct the (one-step) transition matrix.
C (b) Use your IOR Tutorial to determine the n-step transition
matrix P(n) for n = 5, 10, 20, 40, 80.
C (c) Use your IOR Tutorial to determine the steady-state probabil-
ities of the state of the Markov chain. Describe how the prob-
abilities in the n-step transition matrices obtained in part (b)
compare to these steady-state probabilities as n grows large.
29.4-1. Given the following (one-step) transition matrices of a
Markov chain, determine the classes of the Markov chain and
whether they are recurrent.
(a)
$$P = \begin{array}{c|cccc}
\text{State} & 0 & 1 & 2 & 3 \\
\hline
0 & 0 & 0 & \frac{1}{3} & \frac{2}{3} \\
1 & 1 & 0 & 0 & 0 \\
2 & 0 & 1 & 0 & 0 \\
3 & 0 & 1 & 0 & 0
\end{array}$$