About this ebook

This advanced undergraduate and graduate text has now been revised and updated to cover the basic principles and applications of various types of stochastic systems, with much on theory and applications not previously available in book form. The text is also useful as a reference source for pure and applied mathematicians, statisticians and probabilists, engineers in control and communications, and information scientists, physicists and economists.
Language: English
Release date: Dec 30, 2007
ISBN: 9780857099402

    Stochastic Differential Equations and Applications

    Second Edition

    Xuerong Mao

    Department of Statistics and Modelling Science, University of Strathclyde, Glasgow

    WOODHEAD PUBLISHING LIMITED

    Oxford  Cambridge  Philadelphia  New Delhi

    Table of Contents

    Cover image

    Title page

    Copyright page

    Dedication

    Preface to the Second Edition

    Preface from the 1997 Edition

    Acknowledgements

    General Notation

    1: Brownian Motions and Stochastic Integrals

    1.1 INTRODUCTION

    1.2 BASIC NOTATIONS OF PROBABILITY THEORY

    1.3 STOCHASTIC PROCESSES

    1.4 BROWNIAN MOTIONS

    1.5 STOCHASTIC INTEGRALS

    1.6 ITÔ’S FORMULA

    1.7 MOMENT INEQUALITIES

    1.8 GRONWALL-TYPE INEQUALITIES

    2: Stochastic Differential Equations

    2.1 INTRODUCTION

    2.2 STOCHASTIC DIFFERENTIAL EQUATIONS

    2.3 EXISTENCE AND UNIQUENESS OF SOLUTIONS

    2.4 Lp-ESTIMATES

    2.5 ALMOST SURELY ASYMPTOTIC ESTIMATES

    2.6 CARATHÉODORY’S APPROXIMATE SOLUTIONS

    2.7 EULER–MARUYAMA’S APPROXIMATE SOLUTIONS

    2.8 SDE AND PDE: FEYNMAN–KAC’S FORMULA

    2.9 THE SOLUTIONS AS MARKOV PROCESSES

    3: Linear Stochastic Differential Equations

    3.1 INTRODUCTION

    3.2 STOCHASTIC LIOUVILLE’S FORMULA

    3.3 THE VARIATION-OF-CONSTANTS FORMULA

    3.4 CASE STUDIES

    3.5 EXAMPLES

    4: Stability of Stochastic Differential Equations

    4.1 INTRODUCTION

    4.2 STABILITY IN PROBABILITY

    4.3 ALMOST SURE EXPONENTIAL STABILITY

    4.4 MOMENT EXPONENTIAL STABILITY

    4.5 STOCHASTIC STABILIZATION AND DESTABILIZATION

    4.6 FURTHER TOPICS

    5: Stochastic Functional Differential Equations

    5.1 INTRODUCTION

    5.2 EXISTENCE-AND-UNIQUENESS THEOREMS

    5.3 STOCHASTIC DIFFERENTIAL DELAY EQUATIONS

    5.4 EXPONENTIAL ESTIMATES

    5.5 APPROXIMATE SOLUTIONS

    5.6 STABILITY THEORY—RAZUMIKHIN THEOREMS

    5.7 STOCHASTIC SELF-STABILIZATION

    6: Stochastic Equations of Neutral Type

    6.1 INTRODUCTION

    6.2 NEUTRAL STOCHASTIC FUNCTIONAL DIFFERENTIAL EQUATIONS

    6.3 NEUTRAL STOCHASTIC DIFFERENTIAL DELAY EQUATIONS

    6.4 MOMENT AND PATHWISE ESTIMATES

    6.5 Lp-CONTINUITY

    6.6 EXPONENTIAL STABILITY

    7: Backward Stochastic Differential Equations

    7.1 INTRODUCTION

    7.2 MARTINGALE REPRESENTATION THEOREM

    7.3 EQUATIONS WITH LIPSCHITZ COEFFICIENTS

    7.4 EQUATIONS WITH NON–LIPSCHITZ COEFFICIENTS

    7.5 REGULARITIES

    7.6 BSDE AND QUASILINEAR PDE

    8: Stochastic Oscillators

    8.1 INTRODUCTION

    8.2 THE CAMERON–MARTIN–GIRSANOV THEOREM

    8.3 NONLINEAR STOCHASTIC OSCILLATORS

    8.4 LINEAR STOCHASTIC OSCILLATORS

    8.5 ENERGY BOUNDS

    9: Applications to Economics and Finance

    9.1 INTRODUCTION

    9.2 STOCHASTIC MODELLING IN ASSET PRICES

    9.3 OPTIONS AND THEIR VALUES

    9.4 OPTIMAL STOPPING PROBLEMS

    9.5 STOCHASTIC GAMES

    10: Stochastic Neural Networks

    10.1 INTRODUCTION

    10.2 STOCHASTIC NEURAL NETWORKS

    10.3 STOCHASTIC NEURAL NETWORKS WITH DELAYS

    11: Stochastic Delay Population Systems

    11.1 INTRODUCTION

    11.2 NOISE INDEPENDENT OF POPULATION SIZES

    11.3 NOISE DEPENDENT ON POPULATION SIZES: PART I

    11.4 NOISE DEPENDENT ON POPULATION SIZES: PART II

    11.5 STOCHASTIC DELAY LOTKA–VOLTERRA FOOD CHAIN

    Bibliographical Notes

    References

    Index

    Copyright

    Published by Woodhead Publishing Limited, Abington Hall, Granta Park

    Great Abington, Cambridge CB21 6AH, UK

    www.woodheadpublishing.com

    Woodhead Publishing, 525 South 4th Street #241, Philadelphia, PA 19147, USA

    Woodhead Publishing India Private Limited, G-2, Vardaan House, 7/28 Ansari Road,

    Daryaganj, New Delhi – 110002, India

    www.woodheadpublishingindia.com

    First published by Horwood Publishing Limited, 1997

    Second edition, 2007

    Reprinted by Woodhead Publishing Limited, 2011

    © Horwood Publishing Limited 2007, © Woodhead Publishing Limited, 2010

    The author has asserted his moral rights

    This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials. Neither the author nor the publisher, nor anyone else associated with this publication, shall be liable for any loss, damage or liability directly or indirectly caused or alleged to be caused by this book.

      Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming and recording, or by any information storage or retrieval system, without permission in writing from Woodhead Publishing Limited.

      The consent of Woodhead Publishing Limited does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from Woodhead Publishing Limited for such copying.

    Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

    British Library Cataloguing in Publication Data

    A catalogue record for this book is available from the British Library

    ISBN 978-1-904275-34-3

    Printed in the United Kingdom by Lightning Source UK Ltd

    Dedication

    To my parents:

    Mr Mao Yiming and Mrs Chen Dezhen

    Preface to the Second Edition

    Xuerong Mao, Glasgow June 2007

    In this new edition, I have added some material which is particularly useful in applications, namely the new Section 9.3 on options and their values and the new Chapter 11 on stochastic delay population systems. In addition, more material has been added to Section 9.2 to include several popular stochastic models in finance, while the concept of the maximal local solution to a stochastic functional differential equation has been added to Section 5.2 which forms a fundamental theory for our new Chapter 11.

    During this work, I have benefitted from valuable comments and help from several people, including K.D. Elworthy, G. Gettinby, W. Gurney, D.J. Higham, N. Jacob, P. Kloeden, J. Lam, X. Liao, E. Renshaw, A.M. Stuart, A. Truman, G.G. Yin. I am grateful to them all for their help.

    I would like to thank the EPSRC/BBSRC, the Royal Society, the London Mathematical Society as well as the Edinburgh Mathematical Society for their financial support. Moreover, I should thank my family, in particular, Weihong, for their constant support.

    Preface from the 1997 Edition

    Xuerong Mao, Glasgow May 1997

    Stochastic modelling has come to play an important role in many branches of science and industry where more and more people have encountered stochastic differential equations. There are several excellent books on stochastic differential equations but they are long and difficult, especially for the beginner. There are also a number of books at the introductory level but they do not deal with several important types of stochastic differential equations, e.g. stochastic equations of the neutral type and backward stochastic differential equations which have been developed recently. There is a need for a book that not only studies the classical theory of stochastic differential equations, but also the new developments at an introductory level. It is in this spirit that this text is written.

    This text will explore stochastic differential equations and their applications. Some important features of this text are as follows:

    • This text presents at an introductory level the basic principles of various types of stochastic systems, e.g. stochastic differential equations, stochastic functional differential equations, stochastic equations of neutral type and backward stochastic differential equations. The neutral-type and backward equations appear frequently in many branches of science and industry. Although they are more complicated, this text treats them at an understandable level.

    • This text discusses the new developments of Carathéodory’s and Euler–Maruyama’s approximation schemes in addition to Picard’s. The advantage of the Euler–Maruyama and Carathéodory schemes is that the approximate solutions converge to the exact solution under a weaker condition than the Lipschitz one, while the corresponding convergence problem is still open for Picard’s scheme. These schemes are used to establish the theory of existence and uniqueness of the solution, and they also give procedures for obtaining numerical solutions in applications.

    • This text demonstrates the manifestations of the general Lyapunov method by showing how this effective technique can be adopted to study entirely different qualitative and quantitative properties of stochastic systems, e.g. asymptotic bounds and exponential stability.

    • This text emphasises the analysis of stability in stochastic modelling and illustrates the practical use of stochastic stabilization and destabilization. This is the first text that explains systematically the use of the Razumikhin technique in the study of exponential stability for stochastic functional differential equations and the neutral-type equations.

    • This text illustrates the practical use of stochastic differential equations through the study of stochastic oscillators, stochastic modelling in finance and stochastic neural networks.
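The Euler–Maruyama scheme mentioned in the list above can be sketched in a few lines. The following is an illustrative Python sketch (not from the text); the drift and diffusion coefficients, step count and random seed are arbitrary assumptions, here applied to the linear SDE dX(t) = aX(t)dt + σX(t)dB(t):

```python
import numpy as np

def euler_maruyama(f, g, x0, t_end, n_steps, rng):
    """Euler-Maruyama approximation of dX = f(X,t) dt + g(X,t) dB(t)."""
    dt = t_end / n_steps
    t, x = 0.0, x0
    path = [x0]
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x = x + f(x, t) * dt + g(x, t) * dB
        t += dt
        path.append(x)
    return np.array(path)

# Illustrative linear SDE dX = a X dt + sigma X dB (a, sigma chosen arbitrarily)
rng = np.random.default_rng(0)
a, sigma = 0.05, 0.2
path = euler_maruyama(lambda x, t: a * x, lambda x, t: sigma * x,
                      x0=1.0, t_end=1.0, n_steps=1000, rng=rng)
print(len(path), path[0])
```

The same loop underlies the convergence results discussed in Section 2.7; refining `n_steps` shrinks both dt and the variance of each Brownian increment together.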

    Acknowledgements

    I wish to thank Professors G. Gettinby, W.S.C. Gurney and E. Renshaw for their constant support and kind assistance. I am indebted to the University of Strathclyde, the EPSRC, the London Mathematical Society and the Royal Society for their financial support. I also wish to thank Mr. C. Selfridge who read the manuscript carefully and made many useful suggestions. Moreover, I should thank my family, in particular, my beloved wife Weihong, for their constant support.

    General Notation

    Theorem 4.3.2, for example, means Theorem 3.2 (the second theorem in Section 3) in Chapter 4. If this theorem is quoted in Chapter 4, it is written as Theorem 3.2 only.

    positive   > 0.

    nonpositive   ≤ 0.

    negative   < 0.

    nonnegative   ≥ 0.

    a.s.   almost surely, or P-almost surely, or with probability 1.

    A := B   A is defined by B or A is denoted by B.

    A(x) ≡ B(x)   A(x) and B(x) are identically equal, i.e. A(x) = B(x) for all x.

    ∅   the empty set.

    IA   the indicator function of a set A, i.e. IA(x) = 1 if x ∈ A and otherwise 0.

    Ac   the complement of A in Ω, i.e. Ac = Ω − A.

    A \ B   A ∩ Bc.

    A ⊂ B a.s.   P(A ∩ Bc) = 0.

    σ(C)   the σ-algebra generated by C.

    a ∨ b   the maximum of a and b.

    a ∧ b   the minimum of a and b.

    f : A → B   the mapping f from A to B.

    R = R¹   the real line.

    R+   the set of all nonnegative real numbers, i.e. R+ = [0, ∞).

    Rd   the d-dimensional Euclidean space.

    R+d   = {x ∈ Rd : xi > 0, 1 ≤ i ≤ d}, i.e. the positive cone.

    Bd   the Borel σ-algebra on Rd.

    B   = B¹.

    Rd×m   the space of real d×m-matrices.

    Bd×m   the Borel σ-algebra on Rd×m.

    Cd   the d-dimensional complex space.

    Cd×m   the space of complex d×m-matrices.

    |x|   the Euclidean norm of a vector x.

    Sh   = {x ∈ Rd : |x| ≤ h}.

    AT   the transpose of a vector or matrix A.

    (x, y)   the scalar product of vectors x and y, i.e. (x, y) = xTy.

    trace A   the trace of a square matrix A = (aij)d×d, i.e. trace A = ∑1≤i≤d aii.

    λmin(A)   the smallest eigenvalue of a matrix A.

    λmax(A)   the largest eigenvalue of a matrix A.

    λmax+(A)   = sup{xTAx/|x|² : x ∈ Rd, x ≠ 0}.

    |A|   = √(trace(ATA)), i.e. the trace norm of a matrix A.

    ||A||   = sup{|Ax| : |x| = 1}, i.e. the operator norm of a matrix A.

    δij   the Kronecker delta, that is δij = 1 if i = j and otherwise 0.

    C(D; Rd)   the family of continuous Rd-valued functions defined on D.

    Cm(D; Rd)   the family of continuously m-times differentiable Rd-valued functions defined on D.

    C0m(D; Rd)   the family of functions in Cm(D; Rd) with compact support in D.

    C²,¹(D × R+; R)   the family of all real-valued functions V(x, t) defined on D × R+ which are continuously twice differentiable in x ∈ D and once differentiable in t ∈ R+.

    ∆

    Vx   = (∂V/∂x1, …, ∂V/∂xd).

    Vxx   = (∂²V/∂xi∂xj)d×d.

    ||ξ||Lp   = (E|ξ|p)1/p.

    Lp(Ω; Rd)   the family of Rd-valued random variables ξ with E|ξ|p < ∞.

    LpFt(Ω; Rd)   the family of Rd-valued Ft-measurable random variables ξ with E|ξ|p < ∞.

    C([−τ, 0]; Rd)   the space of all continuous Rd-valued functions φ defined on [−τ, 0] with the norm ||φ|| = sup−τ≤θ≤0 |φ(θ)|.

    Lp([−τ, 0]; Rd)   the family of all C([−τ, 0]; Rd)-valued random variables ϕ such that E||ϕ||p < ∞.

    LpFt([−τ, 0]; Rd)   the family of Ft-measurable C([−τ, 0]; Rd)-valued random variables ϕ such that E||ϕ||p < ∞.

    LbFt([−τ, 0]; Rd)   the family of Ft-measurable bounded C([−τ, 0]; Rd)-valued random variables.

    Lp([a, b]; Rd)   the family of Borel measurable functions h : [a, b] → Rd such that ∫ab |h(t)|p dt < ∞.

    ℒp([a, b]; Rd)   the family of Rd-valued Ft-adapted processes {f(t)}a≤t≤b such that ∫ab |f(t)|p dt < ∞ a.s.

    ℳp([a, b]; Rd)   the family of processes {f(t)}a≤t≤b in ℒp([a, b]; Rd) such that E∫ab |f(t)|p dt < ∞.

    ℒp(R+; Rd)   the family of processes {f(t)}t≥0 such that for every T > 0, {f(t)}0≤t≤T ∈ ℒp([0, T]; Rd).

    ℳp(R+; Rd)   the family of processes {f(t)}t≥0 such that for every T > 0, {f(t)}0≤t≤T ∈ ℳp([0, T]; Rd).

    Erf(∙)   the error function.

    sign(x)   the sign function, that is sign(x) = +1 if x ≥ 0 and otherwise −1.

    Other notations will be explained where they first appear.

    1

    Brownian Motions and Stochastic Integrals

    1.1 INTRODUCTION

    Systems in many branches of science and industry are often perturbed by various types of environmental noise. For example, consider the simple population growth model

    dN(t)/dt = a(t)N(t)   (1.1)

    with initial value N(0) = N0, where N(t) is the size of the population at time t and a(t) is the relative rate of growth. It might happen that a(t) is not completely known, but subject to some random environmental effects. In other words,

    a(t) → a(t) + σ(t) · "noise",

    so equation (1.1) becomes

    dN(t)/dt = [a(t) + σ(t) · "noise"] N(t).

    That is, in the form of integration,

    N(t) = N0 + ∫0t a(s)N(s) ds + ∫0t σ(s)N(s) · "noise" ds.   (1.2)

    The questions are: What is the mathematical interpretation of the "noise" term and what is the meaning of the integral ∫0t σ(s)N(s) · "noise" ds?

    A reasonable mathematical interpretation of the "noise" is the so-called white noise Ḃ(t), which is formally regarded as the derivative of a Brownian motion B(t), i.e. Ḃ(t) = dB(t)/dt. So the term "noise" dt may be written as dB(t), and equation (1.2) becomes

    N(t) = N0 + ∫0t a(s)N(s) ds + ∫0t σ(s)N(s) dB(s).   (1.3)

    If the Brownian motion B(t) were differentiable, then the integral ∫0t σ(s)N(s) dB(s) would cause no problem at all. Unfortunately, we shall see that Brownian motion B(t) is nowhere differentiable, hence the integral cannot be defined in the ordinary way. On the other hand, if σ(t)N(t) were a process of finite variation, one could define the integral via integration by parts:

    ∫0t σ(s)N(s) dB(s) := σ(t)N(t)B(t) − σ(0)N(0)B(0) − ∫0t B(s) d[σ(s)N(s)].

    However, if σ(t)N(t) is only continuous, or just integrable, this definition does not make sense. To define the integral, we need to make use of the stochastic nature of Brownian motion. This integral was first defined by K. Itô in 1949 and is now known as the Itô stochastic integral. The main aims of this chapter are to introduce the stochastic nature of Brownian motion and to define the stochastic integral with respect to Brownian motion.

    To make this book self-contained, we shall briefly review the basic notations of probability theory and stochastic processes. We then give the mathematical definition of Brownian motions and introduce their important properties. Making use of these properties, we proceed to define the stochastic integral with respect to Brownian motion and establish the well-known Itô formula. As applications of Itô’s formula, we establish several moment inequalities, e.g. the Burkholder–Davis–Gundy inequality for the stochastic integral, as well as the exponential martingale inequality. We shall finally show a number of well-known integral inequalities of the Gronwall type.
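The difficulty with a pathwise definition of the integral can be seen numerically: a simulated Brownian path on [0, 1] has quadratic variation close to t, while its first variation grows without bound as the mesh is refined, so the path cannot be of finite variation. A minimal sketch (not from the text; the seed and step count are arbitrary):

```python
import numpy as np

# Simulate the increments of a Brownian path on [0, 1] (seed/steps arbitrary)
rng = np.random.default_rng(1)
t_end, n = 1.0, 100_000
dt = t_end / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)

quadratic_variation = np.sum(dB**2)   # close to t_end = 1 for Brownian motion
first_variation = np.sum(np.abs(dB))  # grows like sqrt(n): unbounded variation

print(f"sum (dB)^2 = {quadratic_variation:.3f}")
print(f"sum |dB|   = {first_variation:.1f}")
```

Doubling `n` roughly multiplies the first variation by √2 while the quadratic variation stays near 1, which is exactly why the ordinary Riemann–Stieltjes construction fails here.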

    1.2 BASIC NOTATIONS OF PROBABILITY THEORY

    Probability theory deals with mathematical models of trials whose outcomes depend on chance. All the possible outcomes—the elementary events—are grouped together to form a set Ω, with typical element ω ∈ Ω. The family ℱ of events, i.e. of subsets of Ω, should have the following properties:

    (i) Ω ∈ ℱ and ∅ ∈ ℱ, where ∅ denotes the empty set;

    (ii) A ∈ ℱ implies Ac ∈ ℱ, where Ac = Ω − A is the complement of A in Ω;

    (iii) {Ai}i≥1 ⊂ ℱ implies ∪i≥1 Ai ∈ ℱ.

    A family ℱ of subsets of Ω with these three properties is called a σ-algebra. The pair (Ω, ℱ) is called a measurable space, and the elements of ℱ are called ℱ-measurable sets instead of events. If C is a family of subsets of Ω, then there exists a smallest σ-algebra σ(C) on Ω which contains C. This σ(C) is called the σ-algebra generated by C. If Ω = Rd and C is the family of all open sets in Rd, then Bd = σ(C) is called the Borel σ-algebra and the elements of Bd are called the Borel sets.

    A real-valued function X : Ω → R is said to be ℱ-measurable if

    {ω : X(ω) ≤ a} ∈ ℱ for all a ∈ R.

    The function X is then called a real-valued (ℱ-measurable) random variable. An Rd-valued function X(ω) = (X1(ω), …, Xd(ω))T is said to be ℱ-measurable if all the elements Xi are ℱ-measurable. Similarly, a d×m-matrix-valued function X(ω) = (Xij(ω))d×m is said to be ℱ-measurable if all the elements Xij are ℱ-measurable. The indicator function IA of a set A ⊂ Ω is defined by

    IA(ω) = 1 if ω ∈ A and IA(ω) = 0 otherwise.

    The indicator function IA is ℱ-measurable if and only if A is an ℱ-measurable set, i.e. A ∈ ℱ. If the measurable space is (Rd, Bd), a Bd-measurable function is called a Borel measurable function. Let (Ω′, ℱ′) be another measurable space. A mapping X : Ω → Ω′ is said to be (ℱ, ℱ′)-measurable if

    {ω : X(ω) ∈ A′} ∈ ℱ for all A′ ∈ ℱ′.

    The mapping X is then called an Ω′-valued ((ℱ, ℱ′)-measurable) random variable.

    Let X : Ω → Rd be any function. The σ-algebra σ(X) generated by X is the smallest σ-algebra on Ω containing all the sets {ω : X(ω) ∈ U}, U ⊂ Rd open. That is

    σ(X) = σ({ω : X(ω) ∈ U} : U ⊂ Rd open).

    Clearly, X will then be σ(X)-measurable and σ(X) is the smallest σ-algebra with this property. If X is ℱ-measurable, then σ(X) ⊂ ℱ, i.e. X generates a sub-σ-algebra of ℱ. If {Xi : i ∈ I} is a collection of Rd-valued functions, define

    σ(Xi : i ∈ I) = σ(∪i∈I σ(Xi)),

    which is called the σ-algebra generated by {Xi : i ∈ I}. It is the smallest σ-algebra with respect to which every Xi is measurable. The following result is useful. It is a special case of a result sometimes called the Doob–Dynkin lemma.

    Lemma 2.1 If X,Y: Ω → Rd are two given functions, then Y is σ(Χ)-measurable if and only if there exists a Borel measurable function g:Rd → Rd such that Y = g(X).

    A probability measure P on a measurable space (Ω, ℱ) is a function P : ℱ → [0, 1] such that

    (i) P(Ω) = 1;

    (ii) for any disjoint sequence {Ai}i≥1 ⊂ ℱ (i.e. Ai ∩ Aj = ∅ if i ≠ j),

    P(∪i≥1 Ai) = ∑i≥1 P(Ai).

    The triple (Ω, ℱ, P) is called a probability space. If (Ω, ℱ, P) is a probability space, we set

    ℱ̄ = {A ⊂ Ω : there exist B, C ∈ ℱ such that B ⊂ A ⊂ C and P(B) = P(C)}.

    Then ℱ̄ is a σ-algebra and is called the completion of ℱ. If ℱ = ℱ̄, the probability space (Ω, ℱ, P) is said to be complete. If not, one can easily extend P to ℱ̄ by defining P(A) = P(B) = P(C) for A ∈ ℱ̄, where B, C ∈ ℱ with the properties that B ⊂ A ⊂ C and P(B) = P(C). Now (Ω, ℱ̄, P) is a complete probability space, called the completion of (Ω, ℱ, P).

    Let (Ω, ℱ, P) be a probability space. If X is a real-valued random variable and is integrable with respect to the probability measure P, then the number

    EX = ∫Ω X(ω) dP(ω)

    is called the expectation of X (with respect to P). The number

    Var(X) = E(X − EX)²

    is called the variance of X (here and in the sequel of this section we assume that all integrals concerned exist). The number E|X|p (p > 0) is called the pth moment of X. If Y is another real-valued random variable,

    Cov(X, Y) = E[(X − EX)(Y − EY)]

    is called the covariance of X and Y. If Cov(X, Y) = 0, X and Y are said to be uncorrelated. For an Rd-valued random variable X = (X1, …, Xd)T, define EX = (EX1, …, EXd)T. For a d×m-matrix-valued random variable X = (Xij)d×m, define EX = (EXij)d×m. If X and Y are both Rd-valued random variables, the symmetric nonnegative definite d×d matrix

    Cov(X, Y) = E[(X − EX)(Y − EY)T]

    is called their covariance matrix.

    Let X be an Rd-valued random variable. Then X induces a probability measure μX on the Borel measurable space (Rd, Bd), defined by

    μX(A) = P{ω : X(ω) ∈ A},   A ∈ Bd,

    and μX is called the distribution of X. The expectation of X can now be expressed as

    EX = ∫Rd x dμX(x).

    More generally, if g : Rd → Rm is Borel measurable, we then have the following transformation formula:

    Eg(X) = ∫Rd g(x) dμX(x).
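The transformation formula can be checked numerically by comparing a direct Monte Carlo estimate of Eg(X) with the integral of g against the distribution μX. In this sketch (not from the text) X is standard normal and g(x) = x², both illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=400_000)   # X standard normal, so mu_X has density phi

g = lambda v: v**2             # a Borel measurable test function

# Left side: E g(X) estimated directly on the probability space
lhs = g(x).mean()

# Right side: integral of g against the distribution mu_X (Riemann sum of g*phi)
grid = np.linspace(-8.0, 8.0, 200_001)
phi = np.exp(-grid**2 / 2.0) / np.sqrt(2.0 * np.pi)
rhs = float(np.sum(g(grid) * phi) * (grid[1] - grid[0]))

print(lhs, rhs)  # both close to E X^2 = 1
```

The point of the formula is precisely this change of space: the left side averages over Ω, the right side integrates over Rd against μX.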

    For p ∈ (0, ∞), let Lp = Lp(Ω; Rd) be the family of Rd-valued random variables X with E|X|p < ∞. In L¹, we have |EX| ≤ E|X|. Moreover, the following three inequalities are very useful:

    (i) Hölder’s inequality

    E|XTY| ≤ (E|X|p)1/p (E|Y|q)1/q

    if p > 1, 1/p + 1/q = 1, X ∈ Lp, Y ∈ Lq;

    (ii) Minkowski’s inequality

    (E|X + Y|p)1/p ≤ (E|X|p)1/p + (E|Y|p)1/p

    if p > 1, X, Y ∈ Lp;

    (iii) Chebyshev’s inequality

    P{ω : |X(ω)| ≥ c} ≤ c−p E|X|p

    if c > 0, p > 0, X ∈ Lp.

    A simple application of Hölder’s inequality implies

    (E|X|r)1/r ≤ (E|X|p)1/p

    if 0 < r < p < ∞, X ∈ Lp.
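Chebyshev's inequality and the Lp-monotonicity that follows from Hölder's inequality are easy to sanity-check by Monte Carlo; in fact both hold exactly for the empirical distribution of any sample. This sketch is not from the text, and the exponential distribution and constants are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=200_000)  # sample of X >= 0 (all moments finite)

# Chebyshev: P(|X| >= c) <= E|X|^p / c^p
c, p = 3.0, 2.0
lhs = np.mean(x >= c)
rhs = np.mean(x**p) / c**p
assert lhs <= rhs

# Consequence of Holder: (E|X|^r)^(1/r) <= (E|X|^p)^(1/p) for 0 < r < p
r = 1.0
assert np.mean(x**r)**(1 / r) <= np.mean(x**p)**(1 / p)
print("both inequalities hold on this sample")
```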

    Let X and Xk, k ≥ 1, be Rd-valued random variables. The following four convergence concepts are very important:

    (a) If there exists a P-null set Ω0 such that for every ω ∉ Ω0, the sequence {Xk(ω)} converges to X(ω) in the usual sense in Rd, then {Xk} is said to converge to X almost surely or with probability 1, and we write limk→∞ Xk = X a.s.

    (b) If for every ε > 0, Ρ{ω:|Xk(ω) – Χ(ω)| > ε} → 0 as k → ∞, then {Xk} is said to converge to X stochastically or in probability.

    (c) If Xk and X belong to Lp and E|Xk − X|p → 0, then {Xk} is said to converge to X in pth moment or in Lp.

    (d) If for every real-valued continuous bounded function g defined on Rd, limk→∞ Eg(Xk) = Eg(X), then {Xk} is said to converge to X in distribution.

    These convergence concepts have the following relationship: almost sure convergence implies convergence in probability, so does convergence in pth moment, and convergence in probability implies convergence in distribution.

    Furthermore, a sequence converges in probability if and only if every subsequence of it contains an almost surely convergent subsequence. A sufficient condition for limk→∞ Xk = X a.s. is the condition

    ∑k=1∞ P{ω : |Xk(ω) − X(ω)| > ε} < ∞ for every ε > 0.

    We now state two very important integration convergence theorems.

    Theorem 2.2 (Monotonic convergence theorem) If {Xk} is an increasing sequence of nonnegative random variables, then

    limk→∞ EXk = E limk→∞ Xk.

    Theorem 2.3 (Dominated convergence theorem) Let p ≥ 1, {Xk} ⊂ Lp(Ω; Rd) and Y ∈ Lp(Ω; R). Assume that |Xk| ≤ Y a.s. and {Xk} converges to X in probability. Then X ∈ Lp(Ω; Rd), {Xk} converges to X in Lp, and

    limk→∞ EXk = EX.

    When Y is bounded, this theorem is also referred to as the bounded convergence theorem.

    Two sets A, B ∈ ℱ are said to be independent if P(A ∩ B) = P(A)P(B). Three sets A, B, C ∈ ℱ are said to be independent if

    P(A ∩ B) = P(A)P(B),  P(B ∩ C) = P(B)P(C),  P(A ∩ C) = P(A)P(C),  P(A ∩ B ∩ C) = P(A)P(B)P(C).

    Let I be an index set. A collection of sets {Ai : i ∈ I} ⊂ ℱ is said to be independent if

    P(Ai1 ∩ ⋯ ∩ Aik) = P(Ai1) ⋯ P(Aik)

    for all possible choices of indices i1, …, ik ∈ I. Two sub-σ-algebras 𝒢 and ℋ of ℱ are said to be independent if

    P(A ∩ B) = P(A)P(B) for all A ∈ 𝒢, B ∈ ℋ.

    A collection of sub-σ-algebras {ℱi : i ∈ I} is said to be independent if for every possible choice of indices i1, …, ik ∈ I,

    P(A1 ∩ ⋯ ∩ Ak) = P(A1) ⋯ P(Ak) whenever Aj ∈ ℱij, 1 ≤ j ≤ k.

    A family of random variables {Xi : i ∈ I} (whose ranges may differ for different values of the index) is said to be independent if the σ-algebras σ(Xi), i ∈ I, generated by them are independent. For example, two random variables X : Ω → Rd and Y : Ω → Rm are independent if and only if

    P{X ∈ A, Y ∈ B} = P{X ∈ A} P{Y ∈ B}

    holds for all A ∈ Bd, B ∈ Bm. If X and Y are two independent real-valued integrable random variables, then XY is also integrable and

    E(XY) = EX · EY.

    If X, Y ∈ L²(Ω; R) are uncorrelated, then

    Var(X + Y) = Var(X) + Var(Y).

    If X and Y are independent, they are uncorrelated. If (X, Y) has a normal distribution, then X and Y are independent if and only if they are uncorrelated.

    Let {Ak} be a sequence of sets in ℱ. Define the upper limit of the sets by

    lim supk→∞ Ak = ∩k=1∞ ∪i=k∞ Ai = {ω : ω ∈ Ak for infinitely many k}.

    With regard to its probability, we have the following well-known Borel–Cantelli lemma.

    Lemma 2.4 (Borel–Cantelli’s lemma)

    (1) If {Ak} ⊂ ℱ and ∑k=1∞ P(Ak) < ∞, then

    P(lim supk→∞ Ak) = 0.

    That is, there exists a set Ω0 with P(Ω0) = 1 and an integer-valued random variable k0 such that for every ω ∈ Ω0, we have ω ∉ Ak whenever k ≥ k0(ω).

    (2) If the sequence {Ak} ⊂ ℱ is independent and ∑k=1∞ P(Ak) = ∞, then

    P(lim supk→∞ Ak) = 1.

    That is, there exists a set Ω̃ with P(Ω̃) = 1 such that for every ω ∈ Ω̃, there exists a subsequence {Aki} such that ω belongs to every Aki.
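Both halves of the Borel–Cantelli lemma can be illustrated numerically with independent events Ak = {Uk ≤ pk} for uniform Uk; the choice pk = 1/k² gives a summable series and pk = 1/k a divergent one. A sketch under these illustrative assumptions (a finite simulation can only suggest "finitely many" versus "infinitely many" occurrences):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
u = rng.uniform(size=n)
k = np.arange(1, n + 1)

# Independent events A_k = {U_k <= p_k}
occurs_summable = u <= 1.0 / k**2   # sum P(A_k) < infinity: part (1)
occurs_divergent = u <= 1.0 / k     # sum P(A_k) = infinity: part (2)

# In a finite run, the summable case produces only a handful of occurrences,
# while the divergent case keeps producing them as n grows (roughly log n).
print("summable case:", occurs_summable.sum(), "events occur")
print("divergent case:", occurs_divergent.sum(), "events occur")
```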

    Let A, B ∈ ℱ with P(B) > 0. The conditional probability of A under condition B is

    P(A|B) = P(A ∩ B) / P(B).

    However, we frequently encounter a family of conditions, so we need the more general concept of conditional expectation. Let X ∈ L¹(Ω; R) and let 𝒢 ⊂ ℱ be a sub-σ-algebra, so that (Ω, 𝒢) is a measurable space. In general, X is not 𝒢-measurable. We now seek an integrable 𝒢-measurable random variable Y such that it has the same values as X on the average in the sense that

    ∫A Y(ω) dP(ω) = ∫A X(ω) dP(ω) for all A ∈ 𝒢.

    By the Radon–Nikodym theorem, there exists one such Y, almost surely unique. It is called the conditional expectation of X under the condition 𝒢, and we write Y = E(X|𝒢).
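When the sub-σ-algebra is generated by a finite partition of Ω, the conditional expectation is simply the average of X over each partition cell, and the averaging property can be verified directly on a sample. A numerical sketch, not from the text (the standard normal X and the two-cell partition are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
x = rng.normal(size=n)           # sample of X (standard normal, illustrative)
cell = (x > 0).astype(int)       # partition {X <= 0}, {X > 0} generating G

# E(X|G) is constant on each cell, equal to the cell average of X
y = np.empty(n)
for c in (0, 1):
    mask = cell == c
    y[mask] = x[mask].mean()

# Averaging property: the integral of Y over each generating set equals that of X
for c in (0, 1):
    mask = cell == c
    assert abs((y * mask).mean() - (x * mask).mean()) < 1e-10
print("E(X|G) takes the values", np.unique(y))
```

Coarser σ-algebras average over larger cells; the Radon–Nikodym construction in the text extends this picture to σ-algebras that are not generated by a finite partition.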
