About this ebook

This book provides a pedagogical examination of the way in which stochastic models are encountered in applied sciences and techniques such as physics, engineering, biology and genetics, economics and social sciences. It covers Markov and semi-Markov models, as well as their particular cases: Poisson and renewal processes, branching processes, Ehrenfest models, genetic models, optimal stopping, reliability, reservoir theory, storage models, and queuing systems. Given this comprehensive treatment of the subject, students and researchers in applied sciences, as well as anyone looking for an introduction to stochastic models, will find this title invaluable.

Language: English
Publisher: Wiley
Release date: March 4, 2013
ISBN: 9781118623527
    Introduction to Stochastic Models - Marius Iosifescu

    Preface

    Stochastic models have become a very useful, indeed fundamental, tool in many sciences and fields of engineering. Whether in theoretical or applied physics, economics or the social sciences, engineering or even music, stochastic models are there to help.

    Stochastic modeling is mainly based on Markov processes, characterized by the memoryless property: conditional on the present state, the future evolution does not depend on the past. This is the probabilistic analogue of dynamical systems, where the future position of a moving particle depends only on its current position and speed. The natural generalization of Markov processes to semi-Markov processes offers much more flexibility, as the sojourn time in a state can have an arbitrary distribution on the real half-line x ≥ 0, and not only an exponential distribution as in the Markovian case. Let us also note that martingales are increasingly used for real-system modeling, especially in financial mathematics and statistics.

    This book is based on the book [IOS 84], published in 1984 in Romanian. We follow the main lines of that book, but replace a substantial part of the material. We would like to pay tribute to the memory of our co-authors of [IOS 84], Serban Grigorescu (1946–1997) and Gheorghe Popescu (1944–1989), who unfortunately died before fulfilling their creative potential.

    Throughout this book, composed of seven chapters, we will focus on Markov and semi-Markov processes and on their particular cases: Poisson and renewal processes.

    The aim of our book is to present stochastic models in a simple context, without complicated mathematical tools, but precise nonetheless. Emphasis is placed on comprehension of the main issues, on the development of the results linked to the phenomena discussed, as well as on the specific needs giving rise to these models.

    Chapter 1 presents useful families of stochastic processes. We give special attention to martingales, Markov chains, Markov processes, and semi-Markov processes, which will be the protagonists of the following chapters.

    In Chapter 2 we present some simple stochastic models, such as urn models, Brownian motion, Poisson processes, and birth-and-death processes, which are used in applications either individually or combined with other stochastic processes. We should stress that these models alone can constitute a good introduction to stochastic processes. Although these processes are particular cases of the Markov processes presented in the following chapters, they are studied separately by means of more direct techniques.

    Chapter 3 is devoted to Markovian modeling from a more systematic point of view, starting from the Markov property. The presentation is focused on models like the Ehrenfest chain and various models in genetics, storage problems, and system reliability.

    The basic results on renewal models are presented in Chapter 4, together with their main applications, such as replacement and reward models, risk models in insurance and counter models.

    In Chapter 5 we describe semi-Markov processes. After several basic theoretical results, we study some of the best-known applications of semi-Markov models in fields like system reliability, reservoir models, queueing systems, and digital communication channels.

    In Chapter 6 branching models are presented, especially the Bienaymé-Galton-Watson model, together with the computation of the extinction probability and of the absorption time distribution, and the analysis of related asymptotic properties. Some generalizations of this model, as well as some continuous-time models, are also presented.

    Finally, Chapter 7 is devoted to optimal stopping models. After a description of the classic problem, the optimal stopping problem for a Markovian structure is presented and different resolution methods are proposed. Then we treat the optimal stopping problem for a renewal structure.

    This book is mainly intended for students and researchers in applied sciences and engineering, but also for anybody interested in an accessible introduction to stochastic models.

    We would like to thank our colleagues for their contributions to our discussions, as well as our students from Bucharest and Compiègne, whose questions helped us advance.

    Chapter 1

    Introduction to Stochastic Processes

    In this introductory chapter we present some notions regarding sequences of random variables (r.v.) and the most important classes of random processes.

    1.1. Sequences of random variables

    Let (Ω, 𝓕, P) be a probability space and (An) a sequence of events in 𝓕.

    – The set of ω ∈ Ω belonging to infinitely many of the An, that is,

    lim sup_n An = ⋂_{n≥1} ⋃_{k≥n} Ak,

    is called the upper limit of the sequence (An).

    – The set of ω ∈ Ω belonging to all the An except possibly a finite number of them, that is,

    lim inf_n An = ⋃_{n≥1} ⋂_{k≥n} Ak,

    is called the lower limit of the sequence (An).

    THEOREM 1.1.– (Borel-Cantelli lemma) Let (An) be a sequence of events in 𝓕.

    1. If ∑_{n≥1} P(An) < ∞, then P(lim sup_n An) = 0.

    2. If ∑_{n≥1} P(An) = ∞ and (An) is a sequence of independent events, then P(lim sup_n An) = 1.
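
    To see the dichotomy numerically, here is a minimal simulation sketch (not from the book): independent events An with P(An) = 1/n² (summable series) versus P(An) = 1/n (divergent series).

        import random

        # Borel-Cantelli illustration: with P(An) = 1/n^2 the series converges,
        # so only finitely many An occur a.s.; with P(An) = 1/n the series
        # diverges and, the events being independent, infinitely many occur a.s.
        def count_occurrences(prob, n_max):
            return sum(1 for n in range(1, n_max + 1) if random.random() < prob(n))

        random.seed(0)
        for n_max in (10**3, 10**5):
            few = count_occurrences(lambda n: 1.0 / n**2, n_max)   # stabilizes
            many = count_occurrences(lambda n: 1.0 / n, n_max)     # grows like log(n_max)
            print(n_max, few, many)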

    Let (Xn) be a sequence of r.v. and X an r.v., all of them defined on the same probability space (Ω, 𝓕, P). The convergence of the sequence (Xn) to X is defined as follows:

    1. Almost sure (a.s.) if P(lim_n Xn = X) = 1.

    2. In probability if, for any ε > 0, lim_n P(|Xn − X| > ε) = 0.

    3. In distribution (or weakly, or in law) if FXn(x) → FX(x) pointwise in every continuity point x of FX, where FXn and FX are the distribution functions of Xn and X, respectively.

    4. In mean of order p, 0 < p < ∞, if E|Xn|^p < ∞ for all n ∈ ℕ+, and if lim_n E|Xn − X|^p = 0. The most commonly used are the cases p = 1 (convergence in mean) and p = 2 (mean square convergence).

    The relations between these types of convergence are as follows:

    [1.1] (a.s. convergence) ⟹ (convergence in probability) ⟹ (convergence in distribution); (convergence in mean of order p) ⟹ (convergence in probability).

    The convergence in distribution of r.v. is a convergence property of their distributions (i.e. of their laws) and it is the type most often used in applications of probability. This convergence can be expressed by means of characteristic functions.

    THEOREM 1.2.– (Lévy continuity theorem) The sequence (Xn) converges in distribution to X if and only if the sequence of characteristic functions of (Xn) converges pointwise to the characteristic function of X.

    EXAMPLE 1.3.– Let X1, X2, … be independent r.v. such that P(Xn = 1) = 1 − 1/n and P(Xn = 0) = 1/n. We have Xn → 1 in probability, but Xn is not almost surely convergent to 1 as n → ∞.

    Indeed, for any ε > 0 we have

    P(|Xn − 1| > ε) ≤ P(Xn = 0) = 1/n → 0 as n → ∞.

    Consequently, Xn → 1 in probability.

    To analyze the a.s. convergence, we recall the following criterion: Xn → X a.s. if and only if, for any ε > 0 and 0 < δ < 1, there exists an n0 such that for any n > n0

    [1.2] P(⋃_{k≥n} {|Xk − X| > ε}) < δ.

    As for any ε > 0, δ ∈ (0,1), and n ≤ m, by the independence of the Xk we have

    P(⋃_{k=n}^{m} {|Xk − 1| > ε}) = 1 − ∏_{k=n}^{m} (1 − 1/k) = 1 − (n − 1)/m → 1 as m → ∞,

    there does not exist an n0 such that relation [1.2] is satisfied, so we conclude that Xn does not converge a.s.
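
    A short simulation sketch of this example (it assumes the distribution P(Xn = 1) = 1 − 1/n, P(Xn = 0) = 1/n reconstructed above): along a single trajectory, values Xn = 0 keep appearing at arbitrarily large indices, even though P(Xn ≠ 1) = 1/n → 0.

        import random

        # One trajectory of Example 1.3: Xn = 0 with probability 1/n, independently.
        # Convergence in probability holds (P(Xn != 1) -> 0), but zeros occur
        # infinitely often, so the trajectory does not converge to 1.
        random.seed(1)
        zeros = [n for n in range(1, 10**6 + 1) if random.random() < 1.0 / n]
        print(len(zeros), zeros[-3:])   # some zeros typically appear near the end of the range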

    EXAMPLE 1.4.– Let us consider the probability space (Ω, 𝓕, P), with Ω = [0, 1], 𝓕 the σ-algebra of Borel sets of [0, 1], and P the Lebesgue measure on [0, 1]. Let the r.v. X and the sequence of r.v. (Xn) be defined by

    X(ω) = ω, ω ∈ [0, 1],

    and

    Xn(ω) = 1 − ω, ω ∈ [0, 1], n ≥ 1.

    On the one hand, as

    |Xn(ω) − X(ω)| = |1 − 2ω|, ω ∈ [0, 1],

    we obtain that

    P(|Xn − X| > ε) = 1 − ε > 0 for any 0 < ε < 1. Thus (Xn) does not converge in probability to X. On the other hand, Xn and X are all uniformly distributed on [0, 1], so the characteristic functions of the r.v. Xn, n ≥ 1, and X are

    φXn(t) = (e^{it} − 1)/(it)

    and, respectively,

    φX(t) = (e^{it} − 1)/(it),

    so Xn → X in distribution.

    Let us consider a sequence of r.v. (Xn, n ≥ 1) and suppose that the expected values mn = E(Xn), n ≥ 1, exist. Let

    [1.3] Yn = (1/n) ∑_{k=1}^{n} (Xk − mk), n ≥ 1.

    DEFINITION 1.5.– The sequence (Xn, n ≥ 1) is said to satisfy the weak or the strong law of large numbers if Yn → 0 in probability or Yn → 0 a.s., respectively.

    Obviously, if a sequence of r.v. satisfies the strong law of large numbers, then it also satisfies the weak law of large numbers.

    The traditional name law of large numbers, still in use nowadays, should be clearly understood: it refers to a true mathematical theorem. We speak of laws of large numbers to express, vaguely but correctly, the convergence of frequencies toward probabilities; but to believe that a return to equilibrium must always occur in the end is a serious error, of which we are often unaware.

    Let us present now some classic laws of large numbers.

    THEOREM 1.6.– (Markov) Let (Xn, n ≥ 1) be a sequence of r.v. such that E(Xn) = mn and Var(Xn) < ∞, n ≥ 1. If

    [1.4] (1/n²) Var(∑_{k=1}^{n} Xk) → 0 as n → ∞,

    then (Xn, n ≥ 1) satisfies the weak law of large numbers.

    PROOF.– Chebyshev’s inequality applied to Yn defined in equation [1.3] yields

    P(|Yn| > ε) ≤ Var(Yn)/ε² = (1/(n²ε²)) Var(∑_{k=1}^{n} Xk),

    and thus we obtain lim_n P(|Yn| > ε) = 0 for any ε > 0.

    COROLLARY 1.7.– (Chebyshev) Let (Xn, n ≥ 1) be a sequence of independent r.v. such that E(Xn) = mn and Var(Xn) = σn², n ≥ 1.

    If there exists an M, 0 < M < ∞, such that σn² ≤ M for all n ≥ 1, then (Xn, n ≥ 1) satisfies the weak law of large numbers.

    PROOF.– In our case we have

    (1/n²) Var(∑_{k=1}^{n} Xk) = (1/n²) ∑_{k=1}^{n} σk² ≤ M/n → 0 as n → ∞,

    and condition [1.4] is verified.

    THEOREM 1.8.– (Kolmogorov) Let (Xn, n ≥ 1) be a sequence of independent r.v. such that E(Xn) = mn and Var(Xn) = σn², n ≥ 1. If ∑_{n≥1} σn²/n² < ∞, then (Xn, n ≥ 1) satisfies the strong law of large numbers.

    COROLLARY 1.9.– Under the assumptions of Corollary 1.7, the sequence of r.v. (Xn, n ≥ 1) satisfies the strong law of large numbers.

    EXAMPLE 1.10.– Let (Xn) be a sequence of i.i.d. r.v. of distribution P.¹ For any Borel set B, the r.v. 𝟙(Xn ∈ B) are i.i.d., with common expected value P(B). Using the law of large numbers, Corollary 1.9, we have

    (1/n) ∑_{k=1}^{n} 𝟙(Xk ∈ B) → P(B) a.s. as n → ∞.

    This argument justifies the estimation of probabilities by frequencies.
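
    A minimal Monte Carlo sketch (not from the book): estimating P(X ∈ [−1, 1]) for X ~ N(0, 1) by the frequency of i.i.d. samples.

        import random

        # Estimate P(X in [-1, 1]) for X ~ N(0,1) by the frequency of i.i.d. samples;
        # by Corollary 1.9 the frequency converges a.s. to the probability (~0.6827).
        random.seed(0)
        n = 10**6
        hits = sum(1 for _ in range(n) if -1.0 <= random.gauss(0.0, 1.0) <= 1.0)
        print(hits / n)   # close to 0.6827 for large n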

    EXAMPLE 1.11.– Let (Xn) be a sequence of independent r.v. such that X1 = 0 and, for k = 2, 3, ..., Xk takes the values −√k, 0, √k with probabilities 1/k, 1 − 2/k, and 1/k, respectively.

    We have E(Xk) = 0 and Var(Xk) = 2, k = 2, 3, .... Consequently, the sequence (Xn) satisfies the strong law of large numbers (see Corollary 1.9).

    EXAMPLE 1.12.– Let (Xn) be a sequence of independent r.v. such that Xk takes the values −√k, 0, √k with probabilities 1/(2√k), 1 − 1/√k, and 1/(2√k), respectively.

    We have E(Xk) = 0 and Var(Xk) = √k, which yields

    (1/n²) ∑_{k=1}^{n} Var(Xk) = (1/n²) ∑_{k=1}^{n} √k ≤ n√n/n² = 1/√n

    and

    1/√n → 0 as n → ∞.

    Consequently, the hypotheses of Theorem 1.6 are fulfilled and the sequence (Xn) satisfies the weak law of large numbers. Note that the variances are not bounded, so Corollary 1.7 does not apply here.

    It is easy to see that in applications we often have to deal with random variables that are sums of a large number of independent components. Under very general conditions, such sums are approximately normally distributed. This fundamental fact is expressed mathematically in one of the forms (depending on the conditions) of the central limit theorem. To state the central limit theorem, we consider a sequence of r.v. (Xn, n ≥ 1) and we introduce the following notation:

    Sn = X1 + ⋯ + Xn, an = E(Sn), σn² = Var(Sn), n ≥ 1.

    A central limit theorem gives sufficient conditions for the random variable

    Zn = (Sn − an)/σn

    to converge in distribution to the standard normal distribution N(0, 1).

    We see that Zn is a standardized r.v., because E(Zn) = 0 and Var(Zn) = 1; if the r.v. (Xn, n ≥ 1) are independent, then σn² = Var(X1) + ⋯ + Var(Xn).

    We now give a central limit theorem that is important in probability and mathematical statistics.

    THEOREM 1.13.– Let (Xn, n ≥ 1) be a sequence of independent identically distributed r.v. with mean a and finite positive variance σ². Then, the sequence of r.v. Zn = (Sn − na)/(σ√n), n ≥ 1, converges in distribution to the normal distribution N(0, 1). Zn is said to be asymptotically normal.

    PROOF.– Let φ(t) be the characteristic function of X1 − a (hence of all Xn − a, n ≥ 1). Then the characteristic function of Zn is

    φZn(t) = [φ(t/(σ√n))]^n = [1 − t²/(2n) + o(1/n)]^n.

    Therefore lim_n φZn(t) = e^{−t²/2}, the characteristic function of N(0, 1), and, using Theorem 1.2, we obtain Zn → N(0, 1) in distribution.

    COROLLARY 1.14.– (De Moivre-Laplace) If X ~ Bi(n, p) and q = 1 − p, then (X − np)/√(npq) is asymptotically normal N(0, 1) as n → ∞.

    PROOF.– This result is a direct consequence of Theorem 1.13, because X = X1 + ⋯ + Xn, with (Xn, n ≥ 1) i.i.d. random variables, Xn ~ Be(p).
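
    A quick numerical sketch (not from the book) of Corollary 1.14: standardized Bi(n, p) samples should distribute like N(0, 1), so the frequency of values ≤ 1 should approach Φ(1) ≈ 0.8413.

        import math, random

        # De Moivre-Laplace check: standardize Bi(n, p) samples and compare the
        # empirical frequency of {Z <= 1} with Phi(1).
        random.seed(0)
        n, p, trials = 1000, 0.3, 5000
        q = 1 - p
        def standardized_binomial():
            x = sum(1 for _ in range(n) if random.random() < p)
            return (x - n * p) / math.sqrt(n * p * q)
        freq = sum(1 for _ in range(trials) if standardized_binomial() <= 1.0) / trials
        phi1 = 0.5 * (1.0 + math.erf(1.0 / math.sqrt(2.0)))
        print(freq, phi1)   # both close to 0.8413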

    EXAMPLE 1.15.– We roll a die n times and we consider the r.v. N = number of sixes obtained. Starting from which value of n do we have 9 out of 10 chances that the frequency N/n differs from 1/6 by less than a given ε > 0?

    As N is a binomial r.v. Bi(n, 1/6), for n large enough we can use Corollary 1.14 to approximate the r.v. (N − n/6)/√(n·(1/6)·(5/6)) by a normal r.v. Z ~ N(0, 1). Then, the condition P(|N/n − 1/6| < ε) ≥ 0.9 can be written as P(|Z| ≤ x) ≥ 0.9, with x = ε√(36n/5). From the relation P(|Z| ≤ x) = 2Φ(x) − 1 = 0.9,² we obtain x = 1.645 (using the table of the standard normal distribution). Thus we have nine out of 10 chances to have |N/n − 1/6| < ε if the inequality ε√(36n/5) ≥ 1.645 is satisfied, that is, if n ≥ (1.645)² · 5/(36ε²) ≈ 0.376/ε².

    EXAMPLE 1.16.– Suppose that the lifetime of a component is an exponentially distributed r.v. with parameter λ = 0.2 × 10⁻³ h⁻¹. When a component breaks down, it is immediately replaced with a new identical one. What is the probability that the component working after 50,000 hours is at least the 10th one used from the beginning?

    Let us denote by Sn the sum of n independent exponential random variables that represent the lifetimes of the first n components. From the properties of the exponential distribution, we have E(Sn) = n/λ and Var(Sn) = n/λ². Using Theorem 1.13, we can approximate (Sn − n/λ)/(√n/λ) by a standard normal random variable, which is equivalent to saying that we can approximate Sn by a normal r.v. N(n/λ, √n/λ) = N(5,000n, 5,000√n). The fact that the component working after 50,000 hours is at least the 10th one used means that S9 < 50,000, so we only need to compute P(S9 < 50,000), with S9 approximately N(45,000, 15,000). Thus we obtain the required probability

    P(S9 < 50,000) ≈ Φ((50,000 − 45,000)/15,000) = Φ(0.33) ≈ 0.63.
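
    As a cross-check (a sketch, not from the book), the exact distribution of S9 is a gamma distribution, and P(S9 < t) can be computed through the equivalent Poisson-count event; it stays close to the normal approximation above.

        import math

        # Normal approximation of Example 1.16: S9 ~ N(45000, 15000).
        mu, sigma, t = 45000.0, 15000.0, 50000.0
        approx = 0.5 * (1.0 + math.erf((t - mu) / (sigma * math.sqrt(2.0))))

        # Exact value: {S9 < t} = {at least 9 failures in (0, t]}, a Poisson event
        # with mean lambda * t = 10, so P(S9 < t) = 1 - P(Poisson(10) <= 8).
        lt = 0.2e-3 * t
        exact = 1.0 - sum(math.exp(-lt) * lt**k / math.factorial(k) for k in range(9))
        print(round(approx, 4), round(exact, 4))   # ~0.63 vs ~0.667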

    EXAMPLE 1.17.– A restaurant can serve 75 meals a day. We know from experience that 20% of the clients who reserve eventually do not show up.

    a) The owner of the restaurant accepts 90 reservations. What is the probability that more than 75 clients show up?

    b) How many reservations should the owner of the restaurant accept in order to have a probability greater than or equal to 0.9 of being able to serve all the clients who show up?

    If n is the number of clients who have reserved, the number N of clients who actually come has a binomial distribution with parameters n and p = 0.8. Using Corollary 1.14, this binomial distribution is approximated by a normal distribution with the same mean and variance.

    a) For n = 90 we have E(N) = 90 × 0.8 = 72 and Var(N) = 90 × 0.8 × 0.2 = 14.4, so

    P(N > 75) ≈ P(Z > (75 − 72)/√14.4) = 1 − Φ(0.79) ≈ 0.21.

    Consequently, there are about two chances out of 10 that more clients show up than the restaurant can serve.

    b) We have to solve the inequality P(N ≤ 75) ≥ 0.9 with respect to n. On the one hand, we have

    P(N ≤ 75) ≈ Φ((75 − 0.8n)/√(0.16n))

    and, on the other hand, we have Φ(1.281) = 0.9. So we need to have (75 − 0.8n)/(0.4√n) ≥ 1.281, that is, 0.8n + 0.5124√n − 75 ≤ 0. By letting x = √n we get the inequality 0.8x² + 0.5124x − 75 ≤ 0 and finally obtain x ≤ 9.3675, i.e. n ≤ 87.
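
    To sanity-check the answer n ≤ 87 (a sketch using the exact binomial distribution instead of the book's normal approximation; the exact cutoff may differ by one unit):

        import math

        # P(N <= 75) for N ~ Bi(n, 0.8): probability that at most 75 of n
        # reserved clients show up, computed exactly.
        def prob_serve_all(n, p=0.8, capacity=75):
            return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                       for k in range(capacity + 1))

        for n in (86, 87, 88, 89):
            print(n, round(prob_serve_all(n), 4))   # locate the largest n with value >= 0.9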

    1.2. The notion of stochastic process

    A stochastic (or random) process is a family of r.v. X = (Xt, t ∈ I) defined on the same probability space (Ω, 𝓕, P), with values in a measurable space (E, 𝓔).

    The set E can be either ℝ or ℝ^d, in which case 𝓔 is the σ-algebra of Borel sets, or an arbitrary finite or countably infinite discrete set, in which case 𝓔 is the collection of all subsets of E.

    The index set I is usually ℕ (in this case the process is called a chain) or ℝ₊ = [0, ∞), and the parameter t is interpreted as time. The function t ↦ Xt(ω), for fixed ω ∈ Ω, is called a realization or a sample path of the process (see Figure 1.1).

    If a stochastic process evolves by jumps from state to state, and if almost all its sample paths are constant except at isolated jump times, then the process is called a jump process.

    EXAMPLE 1.18.– Let us consider a process X = (Xt, t ≥ 0), with state space E = ℕ, describing the evolution of a population, the number of failures of a component, etc. Then Xt(ω) = X(t, ω) = i ∈ E means that the population has i individuals at time t, or that the component has failed i times during the time interval (0, t].

    The times T0(ω), T1(ω), ... (Figure 1.2) are the jump times (transition times) of the particular sample path ω ∈ Ω. The r.v. ζ = sup_n Tn is the lifetime of the process.

    Figure 1.1. Process with continuous sample path

    Stochastic processes can be studied either by means of their finite-dimensional distributions or by considering the type of dependence between the r.v. of the process; in the latter case, the nature of the process is given by this type of dependence.

    Figure 1.2. Sample path of a jump process

    The distribution of a stochastic process X is specified by the knowledge of its finite-dimensional distributions. For a real-valued process we define

    [1.5] F_{t1,...,tn}(x1, ..., xn) = P(X_{t1} ≤ x1, ..., X_{tn} ≤ xn)

    for any t1, ..., tn ∈ I and x1, ..., xn ∈ ℝ.

    The process is said to be given if all the finite-dimensional distributions F_{t1,...,tn}(·, ..., ·), t1, ..., tn ∈ I, are given.

    The finite-dimensional distributions must satisfy the following properties:

    1) for any permutation (i1, ..., in) of (1, ..., n),

    F_{t_{i1}, ..., t_{in}}(x_{i1}, ..., x_{in}) = F_{t1, ..., tn}(x1, ..., xn);

    2) for any 1 ≤ k < n and x1, ..., xn ∈ ℝ,

    lim_{x_{k+1} → ∞, ..., x_n → ∞} F_{t1, ..., tn}(x1, ..., xn) = F_{t1, ..., tk}(x1, ..., xk).

    Let X = (Xt, t ∈ I) and Y = (Yt, t ∈ I) be two stochastic processes on the same probability space (Ω, 𝓕, P), with values in the same measurable space (E, 𝓔).

    DEFINITION 1.19.– (Stochastically equivalent processes in the wide sense) If two stochastic processes X and Y satisfy

    P(X_{t1} ∈ A1, ..., X_{tn} ∈ An) = P(Y_{t1} ∈ A1, ..., Y_{tn} ∈ An)

    for all n ∈ ℕ, t1, ..., tn ∈ I, and A1, ..., An ∈ 𝓔, then they are called stochastically equivalent in the wide sense.

    DEFINITION 1.20.– (Stochastically equivalent processes) If two stochastic processes X and Y satisfy

    P(Xt = Yt) = 1 for all t ∈ I,

    then they are called stochastically equivalent.

    DEFINITION 1.21.– (Indistinguishable processes) If two stochastic processes X and Y satisfy

    P(Xt = Yt for all t ∈ I) = 1,

    then they are called indistinguishable.

    PROPOSITION 1.22.– We have the following implications: X, Y indistinguishable ⟹ X, Y stochastically equivalent ⟹ X, Y stochastically equivalent in the wide sense.

    PROPOSITION 1.23.– If the processes X and Y are stochastically equivalent and right continuous, then they are indistinguishable.

    DEFINITION 1.24.– (Version or modification of a process) If the process Y is stochastically equivalent to the process X, then we say that Y is a version or a modification of the process X.

    DEFINITION 1.25.– (Continuous process) A process X with values in a Borel space (E, 𝓔) is said to be continuous a.s. if, for almost all ω ∈ Ω, the function t ↦ Xt(ω) is continuous.

    PROPOSITION 1.26.– (Kolmogorov continuity criterion) A real process X has a continuous modification if there exist constants α, β, C > 0 such that E|Xt − Xs|^α ≤ C|t − s|^{1+β} for all t and s.
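
    For instance (an illustration not taken from the book), standard Brownian motion satisfies the criterion with α = 4 and β = 1, since E|Bt − Bs|⁴ = 3|t − s|²; a short simulation confirms the constant.

        import random

        # Bt - Bs ~ N(0, t - s), hence E|Bt - Bs|^4 = 3(t - s)^2: Kolmogorov's
        # criterion holds with alpha = 4, beta = 1, C = 3.
        random.seed(0)
        h = 0.25   # the increment t - s
        mean4 = sum(random.gauss(0.0, h**0.5)**4 for _ in range(10**6)) / 10**6
        print(mean4, 3 * h**2)   # both ~0.1875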

    1.3. Martingales

    1.3.1. Stopping time

    Let (Ω, 𝓕, P) be a probability space. An increasing sequence of σ-algebras 𝓕0 ⊆ 𝓕1 ⊆ ⋯ ⊆ 𝓕 is called a filtration of 𝓕 and is denoted by F = (𝓕n, n ∈ ℕ). A sequence (Xn, n ∈ ℕ) of r.v. is said to be F-adapted if Xn is 𝓕n-measurable for all n ∈ ℕ. Usually, a filtration is associated with the sequence of r.v. (Xn, n ∈ ℕ) by taking 𝓕n = σ(X0, ..., Xn); the sequence is then F-adapted, and this F is called the natural filtration of (Xn, n ∈ ℕ).

    DEFINITION 1.27.– Let F = (𝓕n, n ∈ ℕ) be a filtration of 𝓕. A stopping time for F (or F-adapted, or F-stopping time) is an r.v. T with values in ℕ ∪ {∞} satisfying one of the following (equivalent) conditions:

    {T = n} ∈ 𝓕n for all n ∈ ℕ, or {T ≤ n} ∈ 𝓕n for all n ∈ ℕ.

    If 𝓕n = σ(X0, ..., Xn), n ∈ ℕ, T is said to be adapted to the sequence (Xn, n ∈ ℕ).

    EXAMPLE 1.28.– (Hitting time) Let (Xn, n ∈ ℕ) be a sequence of r.v. with values in ℝ^d and B a Borel set of ℝ^d. The r.v. T = inf{n ∈ ℕ : Xn ∈ B} (with the convention inf ∅ = ∞) is a stopping time adapted to the sequence (Xn, n ∈ ℕ), called the hitting time of B. Indeed,

    {T = n} = {X0 ∉ B, ..., X_{n−1} ∉ B, Xn ∈ B} ∈ σ(X0, ..., Xn).
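
    A minimal simulation sketch (not from the book): the hitting time of B = [5, ∞) by a simple symmetric random walk, which is decided at each n by X0, ..., Xn only.

        import random

        # Hitting time of B = [5, infinity) for a simple symmetric random walk:
        # T = inf{n : Xn >= 5}; {T = n} depends only on X0, ..., Xn.
        def hitting_time(level=5, n_max=10**6):
            x = 0
            for n in range(1, n_max + 1):
                x += random.choice((-1, 1))
                if x >= level:
                    return n
            return None   # stands in for T = infinity within the horizon

        random.seed(0)
        print([hitting_time() for _ in range(5)])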

    Properties of stopping times

    1. The set

    𝓕T = {A ∈ 𝓕 : A ∩ {T ≤ n} ∈ 𝓕n for all n ∈ ℕ}

    is a σ-algebra, called the σ-algebra of events prior to T.

    2. If S and T are stopping times, then S + T, S ∧ T, and S ∨ T are also stopping times.

    3. If (Tn, n ∈ ℕ) is a sequence of stopping times, then sup_n Tn is also a stopping time.

    4. If S and T are stopping times such that S ≤ T, then 𝓕S ⊆ 𝓕T.

    PROPOSITION 1.29.– (Wald identity) If T is a stopping time with finite expected value, adapted to an i.i.d. and integrable random sequence (Xn, n ∈ ℕ), then

    E(X1 + ⋯ + XT) = E(T) E(X1).
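
    A quick empirical check of the identity (a sketch with assumed ingredients: Xn i.i.d. exponential with mean 2 and T the first index with Xn > 3, which is a stopping time with finite expectation):

        import random

        # Empirical check of Wald's identity E(X1+...+XT) = E(T)E(X1) with
        # Xn ~ Exp(mean 2) i.i.d. and T = inf{n : Xn > 3}, a geometric r.v.
        random.seed(0)
        sums, times, trials = 0.0, 0.0, 10**5
        for _ in range(trials):
            s, n = 0.0, 0
            while True:
                n += 1
                x = random.expovariate(0.5)   # exponential with mean 2
                s += x
                if x > 3.0:
                    break
            sums += s
            times += n
        print(sums / trials, (times / trials) * 2.0)   # the two sides nearly agree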

    1.3.2. Discrete-time martingales

    We will consider in the following that every filtration F is associated with the corresponding random sequence.

    DEFINITION 1.30.– A sequence of r.v. (Xn, n ∈ ℕ) is an:

    1) F-martingale if

    (a) Xn is F-adapted for all n ∈ ℕ;

    (b) E|Xn| < ∞ for all n ∈ ℕ;

    (c) Xn = E(Xn+1 | 𝓕n) a.s. for all n ∈ ℕ.

    2) F-submartingale if it satisfies (a), (b), and (c) with ≤.

    3) F-supermartingale if it satisfies (a), (b), and (c) with ≥.

    Note that condition (c) is equivalent to Xn = E(Xm | 𝓕n) a.s. for all m ≥ n.

    EXAMPLE 1.31.– Let (ζn, n ≥ 1) be a sequence of i.i.d. r.v. such that P(ζn = 1) = p and P(ζn = −1) = 1 − p. The random walk (see section 2.2) Xn = ζ1 + ⋯ + ζn, n ≥ 1, X0 = 0, is a submartingale for the natural filtration of (ζn) if p > 1/2, a martingale if p = 1/2, and a supermartingale if p < 1/2. Indeed, we have

    E(Xn+1 | 𝓕n) = E(Xn + ζn+1 | 𝓕n) = Xn + E(ζn+1) = Xn + 2p − 1.

    Consequently, E(Xn+1 | 𝓕n) is greater than, equal to, or less than Xn according as p > 1/2, p = 1/2, or p < 1/2.

    EXAMPLE 1.32.– Let X be a real r.v. with E|X| < ∞ and let F = (𝓕n, n ∈ ℕ) be an arbitrary filtration. Then, the sequence Xn = E(X | 𝓕n), n ∈ ℕ, is a martingale. Indeed,

    E(Xn+1 | 𝓕n) = E(E(X | 𝓕n+1) | 𝓕n) = E(X | 𝓕n) = Xn.

    EXAMPLE 1.33.– (A martingale at the casino) A gambler bets 1 euro the first time; if he loses, he bets 2 euros the second time, and so on, 2^{k−1} euros the kth time. He stops gambling when he wins for the first time. At every play, he wins or loses his bet with probability 1/2. This strategy will make him eventually win the game. Indeed, when he stops playing at the random time N, he will have won 2^N − (1 + 2 + ⋯ + 2^{N−1}) = 1.

    If Xn is the r.v. defined as the fortune of the gambler after the nth game, we have

    Xn = −(1 + 2 + ⋯ + 2^{n−1}) = 1 − 2^n

    if he has lost up to the nth game. Consequently E(Xn+1 | Xn, ..., X1) = Xn.

    On the other hand, the expected value of the loss accumulated just before the win is

    ∑_{n≥1} (1/2)^n (2^{n−1} − 1) = ∞,

    because N is a geometrically distributed r.v. with parameter 1/2 and Xn = 1 for n ≥ N. Consequently, the strategy of the player is valid only if his initial fortune is greater than the casino’s.
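
    A short simulation sketch (not from the book) of this doubling strategy: the final net gain is always 1 euro, but the capital the gambler must be able to stake has infinite expectation.

        import random

        # Doubling ("martingale") strategy: bet 2^(k-1) at play k until the first win.
        # The final net gain is always 1, but the capital needed beforehand,
        # 2^N - 1, has infinite expectation.
        def play(seed):
            rng = random.Random(seed)
            stake, spent = 1, 0
            while True:
                spent += stake
                if rng.random() < 0.5:          # win: collect twice the stake
                    return 2 * stake - spent, spent
                stake *= 2                      # loss: double the next bet

        for seed in range(5):
            gain, needed = play(seed)
            print(gain, needed)   # gain is always 1; 'needed' is occasionally huge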

    1.3.3. Martingale convergence

    THEOREM 1.34.– (Doob’s convergence theorem) Every supermartingale, submartingale, or martingale bounded in L¹ converges a.s. to an integrable r.v.

    EXAMPLE 1.35.– If X is an r.v. on (Ω, 𝓕, P) with finite expected value and (𝓕n, n ∈ ℕ) is a filtration of 𝓕, then E(X | 𝓕n) → E(X | 𝓕∞) a.s., where 𝓕∞ = σ(⋃_n 𝓕n).

    THEOREM 1.36.– If (Xn) is a uniformly integrable martingale, i.e.

    lim_{c→∞} sup_n E(|Xn| 𝟙{|Xn| > c}) = 0,

    then there exists an integrable r.v. X such that Xn → X a.s. and in L¹, and we have Xn = E(X | 𝓕n) for all n.

    A martingale (Xn) for (𝓕n) is said to be square integrable if E(Xn²) < ∞ for all n ≥ 1.

    THEOREM 1.37.– (Strong convergence theorem) If (Xn) is a square integrable martingale with sup_n E(Xn²) < ∞, then there exists an r.v. X such that Xn → X a.s. and in L².

    THEOREM 1.38.– (Stopping theorem) Let (Xn) be a martingale (resp. a submartingale) for (𝓕n). If S and T are two stopping times adapted to (𝓕n) such that

    1) S ≤ T and T < ∞ a.s.;

    2) E|XS| < ∞, E|XT| < ∞, and lim inf_n E(|Xn| 𝟙{T > n}) = 0,

    then

    E(XT | 𝓕S) = XS (resp. E(XT | 𝓕S) ≥ XS) a.s.

    1.3.4. Square integrable martingales

    Let (Mn, n ≥ 0) be a square integrable martingale, i.e.

    E(Mn²) < ∞ for all n ≥ 0.

    The process (Mn², n ≥ 0) is a submartingale and Doob’s decomposition gives

    Mn² = Xn + ⟨M⟩n, n ≥ 0,

    where Xn is a martingale and ⟨M⟩n is a predictable increasing process, that is, ⟨M⟩n is 𝓕_{n−1}-measurable for all n ≥ 1.

    THEOREM 1.39.– Let (Mn) be a square integrable martingale with predictable process ⟨M⟩n. Then:

    1) on the event {⟨M⟩∞ < ∞}, Mn converges a.s. to a finite r.v.;

    2) on the event {⟨M⟩∞ = ∞},

    Mn/⟨M⟩n → 0 a.s.

    and

    Mn/an → 0 a.s.,

    where (an) is a sequence increasing to infinity such that ⟨M⟩n ≤ an for all n.

    1.4. Markov chains

    Markov chains and processes represent probabilistic models of great importance for the analysis and study of complex systems. The fundamental concepts of Markov modeling are the state and the transition.

    1.4.1. Markov property

    It is clear that the situation of a physical system at a certain moment can be completely specified by giving the values of a certain number of variables that describe the system. For instance, a physical system can often be specified by giving the values of its temperature, pressure, and volume; in a similar way, a particle can be specified by its coordinates with respect to a coordinate system, by its mass and speed. The set of such variables is called the state of the system, and the knowledge of the values of these variables at a fixed moment allows us to specify the state of the system, and, consequently, to describe the system at that precise moment.

    Usually, a system evolves in time from one state to another, and is thus characterized by its own dynamics. For instance, the state of a chemical system can change due to a modification in the environment temperature and/or pressure, whereas the state of a particle can change because of interaction with other particles. These state modifications are called transitions.

    In many applications, the states are described by continuous variables and the transitions may occur at any instant. To simplify, we will consider a system with a finite number of states, denoted by E = {1, 2, . . ., N}, and with transitions that can occur only at discrete time moments. So, if we denote by X(n) the state of the system at time n, then the sequence X(0), X(1), . . ., X(n) describes the itinerary of the system in the state space, from the beginning of the observation up to the fixed time n; this sequence is called a sample path (realization or trajectory) of the process (see section 1.2).

    In most concrete situations, observation of the process leads us to the conclusion that it is random. Consequently, to a sample path of the process a certain probability

    P(X(0) = i0, X(1) = i1, ..., X(n) = in)

    needs to be associated. Elementary techniques of probability theory show that these probabilities can be expressed in terms of the conditional probabilities

    P(X(n + 1) = in+1 | X(0) = i0, ..., X(n) = in)

    for all n ∈ ℕ and for any states i0, i1, . . ., in+1 ∈ E. This means that it is necessary to know the probability that the system is in a certain state in+1 ∈ E after the (n + 1)th transition, knowing its history up to time n. Computing all these conditional probabilities renders the study of a real phenomenon modeled in this way very complicated. The statement that the process is Markovian is equivalent to the simplifying hypothesis that only the last state (i.e. the current state) counts for its future evolution. In other words, for a Markov process we have (the Markov property)

    [1.6] P(X(n + 1) = in+1 | X(0) = i0, ..., X(n) = in) = P(X(n + 1) = in+1 | X(n) = in).

    DEFINITION 1.40.– The sequence of r.v. (Xn, n ∈ ℕ) defined on (Ω, 𝓕, P), with values in the finite or countable set E, is called a Markov chain (or discrete-time Markov process) if the Markov property [1.6] is satisfied for all n ∈ ℕ and all states i0, . . ., in+1 ∈ E.
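
    To make the definition concrete, here is a minimal simulation sketch (with an assumed two-state transition matrix, not an example from the book): the next state is sampled using only the current state, which is exactly the Markov property [1.6].

        import random

        # Simulate a Markov chain on E = {0, 1} with an assumed transition matrix P:
        # the next state depends on the trajectory only through the current state.
        P = [[0.9, 0.1],
             [0.4, 0.6]]

        def simulate(n_steps, state=0, seed=0):
            rng = random.Random(seed)
            path = [state]
            for _ in range(n_steps):
                state = 0 if rng.random() < P[state][0] else 1
                path.append(state)
            return path

        path = simulate(10**5)
        print(sum(path) / len(path))   # ~0.2, the stationary probability of state 1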
