Stochastic Models in Queueing Theory
Ebook · 876 pages


About this ebook

This is a graduate-level textbook that covers the fundamental topics in queueing theory. The book offers broad coverage of methods for calculating important probabilities and gives attention to proving the general theorems. It includes many recent topics, such as server-vacation models, diffusion approximations, and optimal operating policies, and more about bulk-arrival and bulk-service models than other general texts.
  • Current, clear and comprehensive coverage
  • A wealth of interesting and relevant examples and exercises to reinforce concepts
  • Reference lists provided after each chapter for further investigation
Language: English
Release date: Nov 6, 2002
ISBN: 9780080541815


    Book preview

    Stochastic Models in Queueing Theory - Jyotiprasad Medhi


    Preface

    Overview

    The study of queueing models has been of considerable active interest ever since the birth of queueing theory at the beginning of the last century. Queueing theory continues to be one of the most extensive theories of stochastic models. Its progress and development, both in methodology and in applications, are ever growing. Innovative analytic treatments toward its theoretical development are being advanced, and newer areas of application are emerging.

    There is a large and growing audience interested in the study of queueing models. The level of background and preparation among them varies a great deal, along with their requirements for depth of coverage. The audience is composed of advanced undergraduate and graduate students from a number of disciplines. In addition to students of standard graduate courses, there are many researchers, professionals, and industry analysts who require an in-depth knowledge of the subject.

    There are, of course, some excellent advanced works, monographs, and texts on the subject as well. The rapid development of the subject demands updated texts, especially for the type of audience aimed at. Furthermore, the style of presentation and the approach of individual authors appeal to different sections of this large and varied audience.

    The author feels that there is sufficient scope and material to warrant additional texts, especially at the graduate level, in this ever-growing subject area. This book has grown out of the author’s long experience of teaching and research in India, the United States, and Canada. A reviewer’s glowing compliment (in American Mathematical Monthly) on the author’s first book Stochastic Processes (Wiley Eastern, and Halsted Wiley 1982) inspired the author to undertake preparation of a book on queueing models in a similar readable style.

    Organization of the book

    The book is divided into eight chapters. Chapter 1 is a summary of basic results in stochastic processes. This should be helpful to users in eliminating the need to refer frequently to other books on stochastic processes just for basic results. Chapter 2, which is devoted to general concepts, contains some discussions on concepts such as PASTA, superposition of arrival processes, and customer and time averages. Chapters 3 and 4 deal with birth-and-death queueing models and non-birth-and-death systems, respectively. Transient behavior and busy period analysis have been discussed at some length, and a uniformity of approach is emphasized. Some models of bulk queues have also been included because of their importance in transportation science. Chapter 5 is devoted to networks of queues and Chapter 6 to certain non-Markovian queueing systems. In Chapter 7, systems with both general arrival and service patterns are discussed. Chapter 8 covers miscellaneous topics such as asymptotic methods and queues with vacations, with a brief excursion into the design and control of queues. Diffusion approximations, which have emerged as powerful tools, have been discussed in some detail. We believe this chapter will be especially useful to researchers and professionals who wish to have a broad, general idea of the diffusion approximation methods.

    Each of the chapters (except Chapter 2) contains a number of worked examples and problems, and all the chapters include extensive and recent references. The problems contain some material that supplements what has been discussed in the text, keeping in mind researchers and those who wish to pursue the subject further.

    Changes to the new edition

    In order to facilitate use of the second edition by those who are already familiar with the first edition, a drastic change in the basic structure has been avoided. The number of chapters has been kept at eight, with considerable additions in the broad topics mainly based on recent developments during the intervening years. Apart from inclusion of new topics (including some emerging during the past few years), new examples, and new problems, topical discussions have been expanded through notes, remarks, and so on. References have been updated. These have been supplemented by related works of interest for further reading. Chapters 3, 6, and 8, in particular, contain many new topics. Some of the new material addresses finite input source and finite buffer models, advanced vacation models, retrial queueing systems, and a newly emerging trend in teletraffic processes and their analyses. My sincere hope is that the book will be found useful as a graduate text and also as a reference book by professionals and researchers in this subject area.

    In addition to mathematics and statistics programs, the book could be used for a one- or two-semester course at the advanced undergraduate or graduate level in operations research, computer science, systems science, industrial (and other branches of) engineering, telecommunications, economics, management, and business (with programs focused on quantitative methods).

    Course coverage

    The prerequisites for using this book are a course on applied probability and a course on advanced calculus.

    Teachers would be the best judges of topics to be covered in a course. The following suggestions are for their consideration:

    For a two-semester course:

    The whole book.

    For a one-semester course:

    Sections 1.1 through 1.5; 1.7 through 1.9;

    Sections 2.1 through 2.7;

    Sections 3.1 through 3.8 and 3.11;

    Sections 4.2 and 4.3;

    Sections 5.1 through 5.4;

    Sections 6.1 through 6.4; 6.7, 6.9 and 6.10; and

    Sections 7.1 and 8.1.

    Exercises are to be selected from problems and complements.

    Acknowledgments

    I am intellectually indebted to all those whose works have stimulated my interest in this subject area. I have drawn freely and widely from the ever-increasing body of literature.

    In preparing this book, I have received encouragement and assistance in various ways from a number of experts, friends, and colleagues from this country and abroad. I am thankful to them all.

    I am most grateful to Professor J. G. C. Templeton (University of Toronto) and to Professor David D. Yao (Columbia University, formerly of Harvard University), both of whom painstakingly read portions of the original first edition manuscript and offered useful comments and valuable suggestions.

    Our eldest son, (Professor) Deepankar Medhi (University of Missouri, Kansas City, formerly of AT&T Bell Laboratories), and our eldest daughter, Shakuntala Choudhury (AT&T Technology Systems, Bridgewater, NJ), rendered invaluable technical assistance. Our younger son, Shubhankar, and younger daughter, Alakanandaa, have been of great help. Interest has been shown also by our granddaughters, Namrata Gargee Choudhury (now at University of Pennsylvania) and Sumangala, and grandsons Neiloy, Robby, Abhishek Vikram, and Shimankar.

    After the first edition appeared I received feedback from several users and experts. At my specific request, Professor Pavel P. Bocharov, Moscow (reviewer of the first edition in Mathematical Reviews) took great pains to look into the first edition carefully and was kind enough to offer concrete suggestions for improvement. I am immensely grateful to him. I would like to thank Professors Sheldon Ross, Svetlozar Rachev, Donald Miller, Chun Jin, and Morteza Shafii-Mousavi, as well as Dr. Patrick L. Reilly (then with Motorola) for their many helpful comments. I thank Dr. A. Borthakur, Dr. G. Choudhury, and Dr. K. K. Das, who helped me with proofreading. My very special thanks are due to Dr. Das, who, along with Mitra, also managed much of the typesetting in LaTeX with great efficiency. Deepankar provided me with interesting material and references. Our grandson Riddhiman was a source of inspiration.

    Last but not least, I thank my wife, Prity, who bore with me patiently through the long hours that kept me engaged for months and months and who seldom complained of (or was tired of) waiting!

    Assistance from the Department of Science and Technology, Government of India, is gratefully acknowledged. Thanks are also due to Ms. Barbara Holland, Ms. Nancy Zachor, Mr. Tom Singer and other concerned individuals of Academic Press for their care and cooperation.

    July 1, 2002

    Jyotiprasad Medhi

    CHAPTER 1

    Stochastic Processes

    1.1 Introduction

    The theory of stochastic processes mainly originated from the needs of physicists. It began with the study of physical phenomena as random phenomena changing with time. Let t be a parameter assuming values in a set T, and let X(t) represent a random or stochastic variable for every t ∈ T. The family or collection of random variables {X(t), t ∈ T} is called a stochastic process. The parameter or index t is generally interpreted as time and the random variable X(t) as the state of the process at time t. The elements of T are time points, or epochs, and T is a linear set, denumerable or nondenumerable. If T is countable (or denumerable), then the stochastic process {X(t), t ∈ T} is said to be a discrete-parameter or discrete-time process, while if T is an interval of the real line, then the stochastic process is said to be a continuous-parameter or continuous-time process. For example, {Xn, n = 0,1,2,…} is a discrete-time process and {X(t), t > 0} is a continuous-time process. The set of all possible values that the random variable X(t) can assume is called the state space of the process; this set may be countable or noncountable. Thus, stochastic processes may be classified into these four types:

    (i) discrete-time and discrete state space,

    (ii) discrete-time and continuous state space,

    (iii) continuous-time and discrete state space, and

    (iv) continuous-time and continuous state space.

    A discrete state space process is often referred to as a chain. A process such as (i) is a discrete-time chain, and a process such as (iii) is a continuous-time chain.

    A stochastic process—that is, a family of random variables—thus provides a description of the evolution of some physical phenomenon through time. Queueing systems provide many examples of stochastic processes. For example, X(t) might be the number of customers that arrive before a service counter by time t; then {X(t), t ≥ 0} is of the type (iii) above. Again, Wn might be the queueing time of the nth arrival; then {Wn, n = 0,1,2,…} is of the type (ii) above.

    Stochastic processes play an important role in modeling queueing systems. Certain stochastic processes are briefly discussed in this chapter.

    1.2 Markov Chains

    1.2.1 Basic ideas

    Suppose that we observe the state of a system at a discrete set of time points t = 0,1,2,…. The observations at successive time points define a set of random variables (RVs) X0, X1, X2,…. The values assumed by the RVs Xn are the states of the system at time n. Assume that Xn takes values in the finite set {0,1,…, m}; then Xn = i implies that the state of the system at time n is i. The family of random variables {Xn, n ≥ 0} is a stochastic process with discrete parameter space n = 0,1,2,… and discrete state space S = {0,1,…, m}.

    Definition 1.1

    A stochastic process {Xn, n ≥ 0} is called a Markov chain if, for every xi ∈ S,

    P{Xn = xn | Xn−1 = xn−1, Xn−2 = xn−2,…, X0 = x0} = P{Xn = xn | Xn−1 = xn−1},    (1.2.1)

    provided the first member (LHS) is defined. Equation (1.2.1) indicates a kind of dependence between the RVs Xn; intuitively, it implies that given the present state of the system, the future is independent of the past. The conditional probability

    P{Xn = k | Xn−1 = j}

    is called the transition probability from state j to state k. This is denoted by

    pjk(n) = P{Xn = k | Xn−1 = j}.    (1.2.2)

    The Markov chain will be called (temporally) homogeneous if pjk(n) does not depend on n—that is,

    P{Xn = k | Xn−1 = j} = P{Xn+m = k | Xn+m−1 = j}

    for m = −(n − 1), −(n − 2),…, 0,1,…. In such cases we denote pjk(n) simply by pjk. The transition probability pjk, which is the probability of transition from state j to state k in one step—that is, from step n − 1 to the next step n (or from step n + m − 1 to step n + m)—is called the one-step transition probability; the transition probability

    P{Xr+n = k | Xr = j}

    from state j to state k in n steps (from state j in step r to state k in step r + n) is called the n-step transition probability. We denote

    pjk(n) = P{Xr+n = k | Xr = j},    (1.2.3)

    so that pjk(1) = pjk. Define

    pjk(0) = 1 if j = k, and pjk(0) = 0 if j ≠ k.

    Then (1.2.3) is defined for n = 0,1,2,…. Denote

    πj(n) = P{Xn = j}, j ∈ S, and π(n) = (π0(n), π1(n),…);

    π(0) is the initial probability vector. We have

    P{X0 = i0, X1 = i1,…, Xn = in} = πi0(0) pi0i1 pi1i2 … pin−1in.    (1.2.4)

    Thus, given pjk and π(0), the joint probability given by (1.2.4) can be determined.
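
    As a computational illustration of (1.2.4) (the two-state chain and its numbers below are hypothetical choices, not taken from the text), the joint probability of a sample path is obtained by multiplying the initial probability by successive one-step transition probabilities:

        import numpy as np

        # Hypothetical two-state chain: initial distribution pi(0) and TPM P
        # are arbitrary illustrative values (not from the text).
        pi0 = np.array([0.6, 0.4])
        P = np.array([[0.7, 0.3],
                      [0.2, 0.8]])

        def path_probability(path, pi0, P):
            """P{X0 = i0,..., Xn = in} = pi_i0(0) p_i0i1 ... p_i(n-1)in, as in (1.2.4)."""
            prob = pi0[path[0]]
            for j, k in zip(path, path[1:]):
                prob *= P[j, k]
            return prob

        # Probability of the sample path 0 -> 1 -> 1 -> 0:
        print(path_probability([0, 1, 1, 0], pi0, P))  # 0.6*0.3*0.8*0.2 = 0.0288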

    The matrix P = (pjk), j,k ∈ S is called the transition matrix or transition probability matrix (TPM) of the Markov chain. P is a nonnegative square matrix with unit row sums—that is, 0 ≤ pjk ≤ 1, Σk pjk = 1 for every j ∈ S.

    A nonnegative square matrix P with unit row sums is called a stochastic matrix.

    It can be easily shown that Pn is also a stochastic matrix and that

    Pn = (pjk(n)).    (1.2.5)

    That is,

    pjk(n + 1) = Σr pjr(n) prk;

    more generally,

    pjk(m + n) = Σr pjr(m) prk(n).    (1.2.6)

    Equation (1.2.6) is a special case of the Chapman-Kolmogorov equation. It is satisfied by transition probabilities of a Markov chain.

    Remark: To every stochastic matrix P = (pij), i, j = 0,1,… there exists a homogeneous Markov chain {Xn, n = 0,1,…} with state space S = {0,1,…} and one-step transition probability pij, i, j ∈ S.

    That is, to every stochastic matrix P, there corresponds a Markov chain {Xn} for which P is the unit-step transition matrix.

    Then P² = (pij(2)) is also stochastic; it is the two-step transition matrix for the chain {Xn, n = 0,1,…}. However, not every stochastic matrix is the two-step transition matrix of a Markov chain.
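
    These properties are easy to verify numerically. A minimal sketch (the two-state matrix P below is a hypothetical choice, not from the text):

        import numpy as np

        P = np.array([[0.7, 0.3],
                      [0.2, 0.8]])   # hypothetical stochastic matrix

        P5 = np.linalg.matrix_power(P, 5)
        # P^n is again stochastic: nonnegative entries, unit row sums.
        assert np.all(P5 >= 0) and np.allclose(P5.sum(axis=1), 1.0)

        # Chapman-Kolmogorov (1.2.6): p_jk(m+n) = sum_r p_jr(m) p_rk(n),
        # i.e., P^(m+n) = P^m P^n.
        m, n = 2, 3
        assert np.allclose(np.linalg.matrix_power(P, m + n),
                           np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n))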

    1.2.2 Classification of states and chains

    1.2.2.1 Finite homogeneous chain

    Let {Xn, n ≥ 0} be a finite homogeneous Markov chain having TPM P = (pjk) and state space S, and let i, j, k be arbitrary states of S.

    State i is said to lead to state j (or state j is said to be accessible from state i), denoted by i → j, if there exists an integer m (≥ 1) such that pij(m) > 0. If no such integer exists, then we say that i does not lead to j; we denote this by i ↛ j. Two states are said to communicate with each other if i → j and j → i; this is denoted by i ↔ j. The relations i → j and i ↔ j are transitive.

    One way to classify the states of a chain is given below.

    If i → j but j ↛ i for some j, then index i is said to be inessential. If i → j implies i ↔ j—that is, if every state j to which i leads also leads back to i—then i is said to be essential. All essential states can be grouped into a number of essential classes such that all states belonging to an essential class communicate with one another, but cannot lead to a state outside the class. An essential class is closed—that is, if i, j, k belong to an essential class and l is another state outside the class, then i ↔ j ↔ k, but i ↛ l, j ↛ l, k ↛ l (though possibly l → i, j, or k).

    Inessential states, if any, can be grouped into a number of inessential classes such that all states belonging to an inessential class communicate with all states in the class. A finite homogeneous Markov chain has at least one essential class of states, whereas a Markov chain with a denumerable number of states may not necessarily have any essential class.

    A Markov chain is said to be irreducible if it contains exactly one essential class of states; in this case every state communicates with every other state of the chain. A nonirreducible (or reducible) chain may have more than one essential class of states as well as some inessential classes.

    Suppose that i → i—that is, there exists some m (≥ 1) such that pii(m) > 0. The greatest common divisor of all such m for which pii(m) > 0 is called the period d(i) of the state i. If d(i) = 1, then state i is said to be aperiodic (or acyclic), and if d(i) > 1, state i is said to be periodic with period d(i). If pii > 0, then clearly state i is aperiodic.

    For an irreducible Markov chain, either all states are aperiodic or they are periodic having the same period d(i) = d(j) = …. Thus, irreducible Markov chains can be divided into two classes: aperiodic and periodic. An irreducible Markov chain is said to be primitive if it is aperiodic and imprimitive if it is periodic.
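
    For a finite chain these classifications can be carried out mechanically from the pattern of positive entries of P. The sketch below (the 4-state TPM is a hypothetical choice) computes the reachability relation i → j and flags each state as essential or inessential:

        import numpy as np

        def reachability(P):
            """R[i, j] = 1 iff i -> j, i.e., p_ij(m) > 0 for some m >= 1."""
            A = (P > 0).astype(int)
            R = A.copy()
            for _ in range(len(P)):            # iterate to transitive closure
                R = ((R + R @ A) > 0).astype(int)
            return R

        # Hypothetical chain: states 0 and 1 form an essential class; states 2
        # and 3 lead into it but cannot be reached back, hence are inessential.
        P = np.array([[0.5, 0.5, 0.0, 0.0],
                      [0.3, 0.7, 0.0, 0.0],
                      [0.2, 0.0, 0.8, 0.0],
                      [0.0, 0.1, 0.4, 0.5]])
        R = reachability(P)
        for i in range(len(P)):
            essential = all(R[j, i] for j in range(len(P)) if R[i, j])
            print(f"state {i}:", "essential" if essential else "inessential")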

    A Markov chain whose essential states form a single essential class and are aperiodic is said to be regular. Such a chain may have some inessential indices as well. Note that the transitions between states of the essential class of a regular chain form a submatrix P1 that is stochastic. The TPM P of a regular chain can be written in canonical form,

        P = | P1   0 |
            | R1   Q |    (1.2.7)

    where the stochastic submatrix P1 corresponds to transitions between states of the essential class, the square matrix Q to transitions between inessential states, and the rectangular matrix R1 to transitions from inessential states to essential states (see Seneta (1981) for details).

    1.2.2.2 Ergodicity property

    We shall now discuss an important concept: the ergodicity property. The classification given above is enough for this discussion so far as finite chains are concerned.

    Before considering ergodicity, we shall describe another concept: invariant measure.

    If {Xn} is a Markov chain with TPM P, and if there exists a probability vector V = (ν1, ν2,…) (i.e., 0 ≤ νi ≤ 1, Σ νi = 1) such that

    VP = V,

    then V is called an invariant measure (or stationary distribution) of the Markov chain {Xn}, or with respect to the stochastic matrix P.

    If there exists V such that VP ≤ V, then V is called a subinvariant measure of the chain with TPM P. Our interest lies in the types of Markov chains that possess invariant measures.

    For a finite, irreducible Markov chain with TPM P, an invariant measure exists and is unique—that is, there is a unique probability vector V such that

    VP = V,  Ve = 1,    (1.2.8)

    where e = (1,1,…,1) is a column vector with all its elements equal to unity. We next discuss the limiting behavior of chains.
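
    Numerically, (1.2.8) amounts to a linear system: one of the equations in VP = V is redundant and can be replaced by the normalization Ve = 1. A minimal sketch (the 3-state TPM is a hypothetical choice):

        import numpy as np

        def invariant_measure(P):
            """Solve VP = V, Ve = 1 (eq. (1.2.8)) for a finite irreducible chain."""
            n = P.shape[0]
            A = (P - np.eye(n)).T       # rows of A: the equations V(P - I) = 0
            A[-1, :] = 1.0              # replace one (redundant) equation by Ve = 1
            b = np.zeros(n)
            b[-1] = 1.0
            return np.linalg.solve(A, b)

        P = np.array([[0.5, 0.3, 0.2],  # hypothetical irreducible TPM
                      [0.1, 0.6, 0.3],
                      [0.4, 0.4, 0.2]])
        V = invariant_measure(P)
        print(V)
        assert np.allclose(V @ P, V)    # V is indeed invariant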

    1.2.2.3 Ergodic theorems

    Theorem 1.1

       Ergodic Theorem for Primitive Chains

    Let {Xn, n ≥ 0} be a finite irreducible aperiodic Markov chain with TPM P and state space S. Then, as n → ∞,

    Pn → eV    (1.2.9)

    elementwise, where V is the unique stationary distribution (or invariant measure) of the chain.

    Further, the rate of approach to the limit is geometrically fast—that is, there exist positive constants a, b, 0 < b < 1, such that |εij(n)| ≤ ab^n, where pij(n) = νj + εij(n).

    The above theorem implies that, for every j ∈ S,

    limn→∞ pij(n) = νj    (1.2.10)

    exists and is independent of the initial state i, and the quantities νj are given by the solution of the matrix equations

    VP = V,  Ve = 1.    (1.2.11)

    The probability distribution of the chain thus tends to an equilibrium distribution that is independent of the initial distribution. This tendency is known as ergodicity or the ergodic property.
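
    The geometric rate of convergence is easy to observe numerically; with a hypothetical primitive chain of the same kind as above, the maximal deviation of pij(n) from νj falls by a roughly constant factor per step:

        import numpy as np

        P = np.array([[0.5, 0.3, 0.2],      # hypothetical primitive chain
                      [0.1, 0.6, 0.3],
                      [0.4, 0.4, 0.2]])
        V = np.linalg.matrix_power(P, 200)[0]   # any row of a high power approximates V

        # eps_ij(n) = p_ij(n) - v_j; its maximum shrinks geometrically in n.
        for n in (1, 5, 10, 20):
            err = np.abs(np.linalg.matrix_power(P, n) - V).max()
            print(n, err)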

    Another ergodic theorem is stated below.

    Theorem 1.2

    Let {Xn, n ≥ 0} be a finite k-state regular Markov chain (i.e., having a single essential class of aperiodic states) and having TPM P. Let V1 be the stationary distribution corresponding to the primitive submatrix P1 (corresponding to the transitions between the states of the essential aperiodic class).

    Let V = (V1, 0) be a 1 × k vector. Then, as n → ∞,

    Pn → eV    (1.2.9a)

    elementwise.

    V is the unique stationary distribution corresponding to the matrix P and the rate of approach to the limit in (1.2.9a) is geometrically fast.

    If C denotes the single essential class with states j, j = 1, 2,…, m (m < k), j ∈ C, and V1 = (ν1, ν2,…, νj,…, νm) is given by the solution of

    V1 P1 = V1,  V1 e = 1,    (1.2.12)

    then the above result implies that, as n → ∞,

    pij(n) → νj for j ∈ C,  and  pij(n) → 0 for j ∉ C.    (1.2.13)

    The above theorem asserts that regularity of the chain is a sufficient condition for ergodicity. It is also a necessary condition.
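
    The following sketch illustrates Theorem 1.2 with a hypothetical 3-state regular chain: states 0 and 1 form the essential aperiodic class, state 2 is inessential, and every row of Pn approaches (V1, 0):

        import numpy as np

        # Hypothetical regular chain: P1 (states 0 and 1) is primitive; state 2
        # leaks into the essential class and can never be re-entered.
        P = np.array([[0.7, 0.3, 0.0],
                      [0.4, 0.6, 0.0],
                      [0.2, 0.3, 0.5]])

        print(np.linalg.matrix_power(P, 100))
        # Every row tends to (V1, 0) = (4/7, 3/7, 0), where V1 P1 = V1, V1 e = 1.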

    Example 1.1

    Consider the Markov chain having state space S = {0,1,2} and TPM P

    This is a finite irreducible chain. Its invariant measure V = (ν1, ν2, ν3) is given by the solution of

    which leads to

    As Ve = ν1 + ν2 + ν3 = 1, one of the above equations is redundant. We get

    where

    Thus,

    That is, as n → ∞, for all i = 1,2,3,

    Example 1.2

    Consider the two-state Markov chain with TPM

        P = | 1 − p     p   |
            |   p     1 − p |,  0 ≤ p ≤ 1.

    The equation VP = V (V = (ν1, ν2)) leads to

    ν1(1 − p) + ν2 p = ν1,  ν1 p + ν2(1 − p) = ν2,

    so that the invariant distribution is (½, ½) for all p. We have, for p ≠ 0,

        Pn = ½ | 1 + (1 − 2p)^n    1 − (1 − 2p)^n |
               | 1 − (1 − 2p)^n    1 + (1 − 2p)^n |.

    For p = 0, the chain consists of two absorbing states 0 and 1—that is, it consists of two essential classes, C1 and C2, with members 0 and 1, respectively. The chain is decomposable. For p = 1,

    P^(2n) = I,  P^(2n+1) = P,  n = 0, 1, 2,….

    The chain is periodic.

    Thus, even though an invariant distribution exists for all p, 0 ≤ p ≤ 1, the limits

    limn→∞ pij(n) = νj (independent of i)

    exist only when p ≠ 0, 1.
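
    The three regimes of this example can be checked directly (the TPM used is the symmetric one written above):

        import numpy as np

        for p in (0.0, 0.5, 1.0):
            P = np.array([[1 - p, p],
                          [p, 1 - p]])
            print(f"p = {p}:")
            print("P^50 =", np.linalg.matrix_power(P, 50).round(6).tolist())
            print("P^51 =", np.linalg.matrix_power(P, 51).round(6).tolist())
        # p = 0:   P^n = I for all n (two absorbing states; rows do not agree);
        # p = 1:   P^n oscillates between I and P (periodic chain);
        # p = 0.5: both powers have all entries 1/2 = nu_1 = nu_2.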

    Remarks: We can make two deductions: (1) Existence of a stationary distribution (i.e., existence of a solution of VP = V, Ve = 1) does not necessarily imply existence of the limiting distribution limn→∞ pij(n), independent of i.

    (2) The example also shows how much the transient behavior can vary over the class of transition matrices for a given equilibrium distribution.

    See Whitt (1983) for a discussion on this topic.

    1.2.2.4 Markov chain having a denumerably infinite number of states

    So far we have considered homogeneous Markov chains with a finite number of states. Now we shall discuss homogeneous chains having a denumerably infinite number of states. We shall denote the state space by S = {0,1,2,…} instead of a more general denumerable set.

    Notions already defined (e.g., accessibility, communication, and periodicity) and subsequent definitions of essential and inessential states, essential classes, inessential classes, irreducible chains, and primitive chains will remain valid for a chain with a denumerable number of states. Only the definition of a regular chain cannot be carried over to this case. For whereas in a finite chain there is at least one essential class (and so the states of a finite chain may constitute exactly one essential class or more than one essential class), there may not be any essential class in the case of a chain with a denumerably infinite number of states. For example, consider the chain {Xn, n ≥ 0} with S = {0,1,2,…} and TPM P = (pij) where

    pi,i+1 = 1  and  pij = 0 for j ≠ i + 1, i ≥ 0.

    This denumerable chain does not possess any essential class at all. Each state is inessential, and no two states communicate. Whereas consideration of classification of states into essential and inessential classes was adequate for dealing with limiting distribution for finite chains, a more sensitive classification of states would be required in the present case of chains with a denumerable infinity of states.

    1.2.2.5 Transience and recurrence

    Define

    fij(k) = P{Xk = j, Xm ≠ j, m = 1, 2,…, k − 1 | X0 = i}, k ≥ 1.    (1.2.14)

    The quantity fij(k) is the probability of transition from state i to state j in k steps, without revisiting the state j in the meantime. (It is called a taboo probability—j being the taboo state.) Here {fij(k)} gives the distribution of the first passage time from state i to state j. We can write

    pij(n) = Σ(k=1..n) fij(k) pjj(n − k), n ≥ 1.

    The relation (1.2.14) can also be written as

    fij(n) = pij(n) − Σ(k=1..n−1) fij(k) pjj(n − k).    (1.2.15)

    The relations (1.2.14) and (1.2.15) are known as first entrance formulas.

    Let

    Fij(s) = Σ(k≥1) fij(k) s^k  and  Pij(s) = Σ(n≥0) pij(n) s^n, |s| < 1.

    Then from the convolution structure,

    Pii(s) = 1 / (1 − Fii(s)).    (1.2.16)
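
    The first entrance formula gives a convenient recursion for computing fij(n) from the matrix powers Pn, and hence for checking transience numerically. A sketch (the two-state chain, with state 0 absorbing, is a hypothetical choice):

        import numpy as np

        def first_passage(P, i, j, N):
            """f_ij(n), n = 1..N, from p_ij(n) = sum_{k=1}^{n} f_ij(k) p_jj(n-k)."""
            powers = [np.linalg.matrix_power(P, n) for n in range(N + 1)]
            f = np.zeros(N + 1)
            for n in range(1, N + 1):
                f[n] = powers[n][i, j] - sum(f[k] * powers[n - k][j, j]
                                             for k in range(1, n))
            return f[1:]

        # Hypothetical chain: state 0 is absorbing (persistent); state 1 is
        # inessential, hence should come out transient.
        P = np.array([[1.0, 0.0],
                      [0.5, 0.5]])
        F11 = first_passage(P, 1, 1, 50).sum()
        print(F11)   # about 0.5 < 1, so state 1 is transient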

    Definition 1.2

    A state i is said to be persistent if Fii = Fii(1 − 0) = 1 and is said to be transient if Fii(1 − 0) < 1.

    A persistent state is null or nonnull based on whether μii = F′ii(1) = ∞ or < ∞, respectively.

    Equivalent criteria of persistence and transience are as follows.

    An index i is persistent iff (if and only if)

    Σn pii(n) = ∞

    and is transient iff

    Σn pii(n) < ∞.

    The relationship between these two types of classification of states and chains can be given as follows.

    An inessential state is transient and a persistent state is essential. In the case of a finite chain, i is transient iff it is inessential; otherwise it is nonnull persistent.

    All the states of an irreducible chain, whether finite or denumerable, are of the same type: all transient, all null persistent, or all nonnull persistent.

    A finite Markov chain contains at least one persistent state. Further, a finite irreducible Markov chain is nonnull persistent. The ergodic theorem for a Markov chain with a denumerable infinite number of states is stated below.

    Theorem 1.3

       General Ergodic Theorem

    Let P be the TPM of an irreducible aperiodic (i.e., primitive) Markov chain with a countable state space S (which may have a finite or a denumerably infinite number of states). If the Markov chain is transient or null persistent, then for each i, j ∈ S,

    limn→∞ pij(n) = 0.    (1.2.17a)

    If the chain is nonnull persistent, then for each i, j ∈ S,

    limn→∞ pij(n) = νj    (1.2.17b)

    exists and is independent of i. The probability vector V = (ν1, ν2,…) is the unique invariant measure of P—that is,

    VP = V,  Ve = 1,    (1.2.18a)

    and further, if μjj is the mean recurrence time of state j, then

    νj = 1 / μjj.    (1.2.18b)

    The result is general and holds for a chain with a countable state space S. In case the chain is finite, irreducibility ensures nonnull persistence, so that irreducibility and aperiodicity (i.e., primitivity) constitute a set of sufficient conditions for ergodicity of a finite chain. The sufficient conditions for ergodicity (limn→∞ pij(n) = νj) for a chain with a denumerably infinite number of states involve, besides irreducibility and aperiodicity, nonnull persistence of the chain. For a chain with a denumerably infinite number of states, the number of equations given by (1.2.18a) will be infinite. It would sometimes be more convenient to find V in terms of the generating function of {νj} than to attempt to solve Eqn. (1.2.18a) as such. We shall consider two such Markov chains that arise in queueing theory. See the Notes below.

    Example 1.3

    Consider a Markov chain with state space S = {0,1,2,…} having a denumerable number of states and having TPM

        P = | p0  p1  p2  p3  … |
            | p0  p1  p2  p3  … |
            | 0   p0  p1  p2  … |
            | 0   0   p0  p1  … |
            | …   …   …   …   … |    (1.2.19)

    where Σk pk = 1. Let

    P(s) = Σk pk s^k  and  V(s) = Σk νk s^k

    be the probability-generating functions (PGF) of {pk} and {νk}, respectively. Clearly, the chain is irreducible and aperiodic; since it is a denumerable chain, we need to consider transience and persistence of the chain to study its ergodic property.

    It can be shown that the states of the chain (which are all of the same type because of the irreducibility of the chain) are transient, persistent null, or persistent nonnull according as

    P′(1) > 1,  P′(1) = 1,  or  P′(1) < 1,

    respectively (see Prabhu, 1965). Assume that P′(1) < 1, so that the states are persistent nonnull; then from (1.2.18a), we get

    νk = ν0 pk + Σ(j=1..k+1) νj pk−j+1,  k = 0, 1, 2,….    (1.2.20)

    Multiplying both sides of (1.2.20) by s^k and adding over k = 0,1,2,…, we get

    V(s) = ν0 P(s) + [P(s)/s][V(s) − ν0],

    whence

    V(s) = ν0 (1 − s) P(s) / [P(s) − s].

    Since V(1) = 1, we have

    ν0 = 1 − P′(1).

    Thus,

    V(s) = [1 − P′(1)] (1 − s) P(s) / [P(s) − s].    (1.2.21)
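
    Formula (1.2.21) can be checked numerically. Under a concrete assumption not made in the text (pk Poisson with mean ρ < 1, so that P(s) = exp{ρ(s − 1)} and P′(1) = ρ), the balance equations (1.2.20) can be solved recursively, since the equation for νk involves νk+1 p0, starting from ν0 = 1 − ρ:

        import numpy as np
        from math import exp, factorial

        rho, K = 0.6, 60                   # assumed Poisson p_k with mean rho
        p = np.array([exp(-rho) * rho**k / factorial(k) for k in range(K)])

        v = np.zeros(K)
        v[0] = 1 - rho                     # nu_0 = 1 - P'(1), from V(1) = 1
        for k in range(K - 1):             # solve (1.2.20) for nu_{k+1}
            s = v[0] * p[k] + sum(v[j] * p[k - j + 1] for j in range(1, k + 1))
            v[k + 1] = (v[k] - s) / p[0]

        print(v[:5])                       # stationary probabilities nu_0..nu_4
        print(v.sum())                     # close to 1, as it should be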

    Example 1.4

    Consider a Markov chain with state space S = {0,1,2,…} and having TPM

        P = | h0  g0  0   0   … |
            | h1  g1  g0  0   … |
            | h2  g2  g1  g0  … |
            | …   …   …   …   … |    (1.2.22)

    where hi = gi+1 + gi+2 + …, i ≥ 0, gi > 0, and Σ(i≥0) gi = 1. Here pi0 = hi, i ≥ 0, and pij = gi−j+1 for 1 ≤ j ≤ i + 1.

    The chain is irreducible and aperiodic. It can be shown that it is persistent nonnull when α = Σj j gj > 1. Then the chain is ergodic, and νj = limn→∞ pij(n) exist and are given as a solution of (1.2.18a); these lead to

    ν0 = Σ(i≥0) νi hi,    (1.2.23a)

    νj = Σ(i=j−1..∞) νi gi−j+1,  j ≥ 1,    (1.2.23b)

    Σ(j≥0) νj = 1.    (1.2.23c)

    Let G(s) = Σr gr s^r be the PGF of {gr}. Denote the displacement operator by E so that

    E νj = νj+1  (and E^r νj = νj+r).

    Then we can write (1.2.23b) in symbols as

    [G(E) − E] νj−1 = 0,  j ≥ 1.    (1.2.24)

    The characteristic equation of the above difference equation is given by

    r(z) ≡ G(z) − z = 0.    (1.2.25)

    It can be shown that when α = G′(1) > 1, there is exactly one real root of r(z) = 0 between 0 and 1. Denote this root by r0 and the other roots by r1, r2,…, |ri| > 1, i ≥ 1. The solution of (1.2.24) can thus be put as

    νj = c0 r0^j + c1 r1^j + c2 r2^j + …,

    where the c’s are constants. Since Σ νj = 1, we must have ci = 0 for i ≥ 1, and c0 Σj r0^j = c0/(1 − r0) = 1,

    so that

    νj = (1 − r0) r0^j,  j = 0, 1, 2,…,    (1.2.26)

    r0 being the root lying between 0 and 1 of (1.2.25) (provided α = G′(1) > 1). The distribution is geometric.
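
    Numerically, r0 can be found by bisection on (0, 1), since G(z) − z is positive at 0 and negative just below 1 when G′(1) > 1. A sketch, under a concrete assumption not made in the text (gr Poisson with mean α):

        from math import exp

        alpha = 1.5                       # alpha = G'(1) > 1, so the chain is ergodic

        def G(z):
            return exp(alpha * (z - 1))   # PGF of an assumed Poisson {g_r}

        lo, hi = 0.0, 1.0 - 1e-9          # G(z) - z changes sign on this interval
        for _ in range(100):
            mid = (lo + hi) / 2
            if G(mid) > mid:
                lo = mid                  # still to the left of the root r0
            else:
                hi = mid
        r0 = (lo + hi) / 2

        # Stationary distribution (1.2.26): nu_j = (1 - r0) r0^j (geometric).
        print(r0, [(1 - r0) * r0**j for j in range(5)])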

    Notes:

    (1) The equation VP = V is quite well known in matrix theory. It follows from the Perron-Frobenius theorem of matrix theory that there exists a solution V = (ν1, ν2,…) of the matrix equation VP = V subject to the constraints νi ≥ 0, Σ νi = 1.

    (2) When the order of P is not large, the equations can be solved fairly easily to get V = (ν1, ν2,…). When the order of P is large (infinite), the number of equations is also large (infinite) and the solution of the equations becomes troublesome. In Example 1.3 we considered and obtained the solution in terms of the generating function V(s) = Σ νj s^j. This method may not always be
