Equity Derivatives: Theory and Applications
Ebook · 343 pages · 3 hours


About this ebook

Written by the quantitative research team of Deutsche Bank, the world leader in innovative equity derivative transactions, this book acquaints readers with leading-edge thinking in modeling and hedging these transactions. Equity Derivatives offers a balanced, integrated presentation of theory and practice in equity derivative markets. It provides a theoretical treatment of each new modeling and hedging concept first, and then demonstrates its practical application. The book covers: the newest and fastest-growing class of derivative instruments, fund derivatives; cutting-edge developments in equity derivative modeling; new developments in correlation modeling and understanding volatility skews; and new Web-based implementation/delivery methods.
Marcus Overhaus, PhD, Andrew Ferraris, DPhil, Thomas Knudsen, PhD, Frank Mao, PhD, Ross Milward, Laurent Nguyen-Ngoc, PhD, and Gero Schindlmayr, PhD, are members of the Quantitative Research team of Deutsche Bank's Global Equity Division, which is based in London and headed by Dr. Overhaus.
Language: English
Publisher: Wiley
Release date: Aug 10, 2011
ISBN: 9781118160879


    Book preview

    CHAPTER 1

    Mathematical Introduction

    The use of probability theory and stochastic calculus is now an established standard in the field of financial derivatives. During the last 30 years, a large amount of material has been published, in the form of books or papers, on both the theory of stochastic processes and their applications to finance problems. The goal of this chapter is to introduce the notions of probability theory and stochastic calculus that are used in the applications presented afterwards. The notation used here will remain consistent throughout the book.

    We hope that the reader who is not familiar with the theory of stochastic processes will find here an intuitive presentation, although rigorous enough for our purposes, and a set of useful references about the underlying mathematical theory. The reader acquainted with stochastic calculus will find here an introduction to objects and notations that are used constantly, although maybe not always explicitly.

    This chapter does not aim at giving a thorough treatment of the theory of stochastic processes, nor does it give a detailed view of mathematical finance theory in general. It recalls, rather, the main general facts that will be used in the examples developed in the next chapters.

    1.1 PROBABILITY BASIS

    Financial models used for the evaluation of derivatives are mainly concerned with the uncertainty of the future evolution of stock prices. The theory of probability and stochastic processes provides a framework for a particular form of uncertainty, called randomness. A probability space Ω is assumed to be given once and for all, interpreted as consisting of all the possible paths of the prices of the securities we are interested in. We will suppose that this probability space is rich enough to carry all the random objects we wish to construct and use. This assumption is not restrictive for our purposes, because we could always enlarge the space Ω, for example, by considering a product space. Note that Ω can be chosen to be a canonical space, such as the space of continuous functions, or the space of càdlàg (French acronym for continuous from the right, with left limits) functions.

    We endow the set Ω with a σ-field ℱ, which is also assumed to be fixed throughout this book, unless otherwise specified. ℱ represents all the events that are or will eventually be observable.

    Let ℙ be a probability measure on the measurable space (Ω, ℱ). The (Lebesgue) integral with respect to ℙ of a random variable X (that is, a measurable function from (Ω, ℱ) to (ℝ^N, ℬ^N), where ℬ^N is the Borel σ-field on ℝ^N) is denoted by 𝔼[X] instead of ∫Ω X dℙ and is called the expectation of X. If we need to emphasize that the expectation operator is relative to ℙ, we denote it by 𝔼^ℙ. We assume that the reader is familiar with general notions of probability theory such as independence, correlation, conditional expectation, and so forth. For more details and references, we refer to [9], [45], or [49].

    The probability space (Ω, ℱ, ℙ) is endowed with a filtration (ℱt, t ≥ 0), that is, a family of sub-σ-fields of ℱ such that ℱs ⊆ ℱt for all 0 ≤ s ≤ t. The filtration is said to be ℙ-complete if for all t, all ℙ-null sets belong to ℱt; it is said to be right-continuous if for all t ≥ 0,

        ℱt = ∩u>t ℱu

    It will be implicit in the sequel that all the filtrations we use have been previously completed and made right-continuous (this is always possible).

    The filtration ℱt represents the flow of information available; we will often deal with the filtration generated by some process (e.g., the stock price process), in which case ℱt represents past observations up to time t. For detailed studies on filtrations the reader can consult any book concerned with stochastic calculus, such as [44], [63], and [103].

    1.2 PROCESSES

    We will be concerned with random quantities whose values depend on time. Denote by 𝕋 a subset of ℝ+; 𝕋 can be ℝ+ itself, a bounded interval [0, T], or a discrete set 𝕋 = {0, 1, . . .}. In general, given a measurable space (E, ℰ), a process with values in E is a map X : Ω × 𝕋 → E that is measurable with respect to the σ-fields ℱ ⊗ ℬ(𝕋) and ℰ, where ℬ(𝕋) denotes the Borel σ-field on 𝕋.

    In our applications we will need to consider only the case in which E = ℝ^N and ℰ is the Borel σ-field ℬ^N. From now on, we make these assumptions. A process will be denoted by X or (Xt, t ∈ 𝕋); the (random) value of the process at time t will be denoted by Xt or X(t); we may sometimes wish to emphasize the dependence on ω, in which case we will use the notation Xt(ω) or X(t, ω). The jump at time t of a process X is denoted by ΔXt and defined by ΔXt = Xt − Xt−, where Xt− = limε↓0 Xt−ε.

    Where in Time?

    Before we take on the study of processes themselves, we define a class of random times that form a cornerstone in the theory of stochastic processes. These are the times that are suited to the filtration ℱt.

    DEFINITION 1.1

    A random time T, that is, a random variable with values in ℝ+ ∪ {+∞}, is called an ℱt-stopping time if for all t,

        {T ≤ t} ∈ ℱt

    This definition means that at each time t, based on the available information ℱt, one is able to determine whether T is in the past or in the future. Stopping times include constant times, as well as hitting times (i.e., random times τ of the form τ = inf{t : Xt ∈ B}, where B is a Borel set), among others.
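    As a concrete illustration (our own sketch, not from the text), a hitting time for a simple random walk can be computed path by path; deciding whether it has occurred by time t uses only the path up to t, which is exactly the stopping-time property:

```python
import random

def hitting_time(path, barrier):
    """First index t with path[t] >= barrier, i.e. the discrete hitting
    time tau = inf{t : X_t in B} for B = [barrier, infinity).
    Returns None if the barrier is never reached on this path."""
    for t, x in enumerate(path):
        if x >= barrier:
            return t
    return None

# Simulate a simple symmetric random walk X_0 = 0, X_t = X_{t-1} +/- 1.
random.seed(0)
walk = [0]
for _ in range(1000):
    walk.append(walk[-1] + random.choice((-1, 1)))

# Whether {tau <= t} holds is determined by walk[:t+1] alone, so tau is a
# stopping time for the filtration generated by the walk.
tau = hitting_time(walk, 10)
```

    By contrast, a time such as "the instant at which the path attains its overall maximum" would not be a stopping time, since deciding it requires looking into the future.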

    From a financial point of view, the different quantities encountered are constrained to depend only on the available information at the time they are given a value. In mathematical words, we state the following:

    DEFINITION 1.2

    A process X is said to be adapted to the filtration ℱt (or ℱt-adapted) if, for all t, Xt is ℱt-measurable.

    A process used to model the price of an asset must be adapted to the flow of information available in the market. On the other hand, this information consists mainly in the prices of different assets. Given a process X, we can define a filtration (ℱt^X), where ℱt^X is the smallest sub-σ-field of ℱ that makes the variables (Xu, u ≤ t) simultaneously measurable. The filtration ℱt^X is said to be generated by X, and X is clearly adapted to it. One also speaks of X in its own filtration.

    Because we do not make the assumption that the processes we consider have continuous paths, we need to introduce a finer view of the past. Continuous processes play a special role in this setting.

    DEFINITION 1.3

    1. The predictable σ-field 𝒫 is the σ-field on Ω × ℝ+ generated by ℱt-adapted processes whose paths are continuous.

    2. A process X is said to be predictable if it is measurable with respect to 𝒫.

    That is, 𝒫 is the smallest σ-field on Ω × ℝ+ such that every adapted process X, viewed as a function of (ω, t), for which t ↦ X(t) is continuous, is 𝒫-measurable. It can be shown that 𝒫 is also generated by the random intervals (S, T], where S < T are stopping times.

    A process that describes the number of shares in a trading strategy must be predictable, because the investment decision is taken before the price has a possible instantaneous shock.

    In discrete time, the definition of a predictable process is much simpler, since then a process (Xn, n ∈ ℕ) is predictable if for each n, Xn is ℱn−1-measurable. In continuous time, we still have the satisfactory property that if X is an ℱt-adapted process, then the process of left limits (Xt−, t ≥ 0) is predictable. For more details about predictable processes, see [27] or [63].
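    In discrete time the financial meaning of predictability is easy to exhibit in code. The sketch below (our own illustration; the momentum rule is hypothetical) computes the gains of a trading strategy whose position over (k−1, k] may depend only on prices observed up to time k−1:

```python
import random

def gains(prices, strategy):
    """Gains process G_n = sum_{k<=n} H_k * (X_k - X_{k-1}).
    strategy(k, history) returns the position H_k held over (k-1, k];
    it is only shown prices[:k], i.e. information up to time k-1,
    which is the discrete-time notion of predictability."""
    g = [0.0]
    for k in range(1, len(prices)):
        h = strategy(k, prices[:k])  # decided before X_k is revealed
        g.append(g[-1] + h * (prices[k] - prices[k - 1]))
    return g

# A hypothetical momentum rule: hold one share after an up-move, else none.
def momentum(k, history):
    return 1.0 if len(history) >= 2 and history[-1] > history[-2] else 0.0

random.seed(1)
prices = [100.0]
for _ in range(250):
    prices.append(prices[-1] + random.gauss(0.0, 1.0))
pnl = gains(prices, momentum)
```

    A buy-and-hold strategy (H identically 1) recovers G_n = X_n − X_0, the simplest instance of the portfolio parallel.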

    Let us also mention the optional σ-field 𝒪: It is the σ-field on Ω × ℝ+ generated by ℱt-adapted processes with right-continuous paths. It will not be, for our purposes, as crucial as the predictable σ-field; see, however, Chapter 2 for a situation where it is needed.

    We end this discussion by introducing the notion of localization, which is the key to establishing certain results in a general case.

    DEFINITION 1.4

    A localizing sequence (Tn) is an increasing sequence of stopping times such that Tn → ∞ as n → ∞.

    In this chapter, a property is said to hold locally if there exists a localizing sequence such that the property holds on every interval [0, Tn]. This notion is important because there are many interesting cases in which key properties hold only locally (and not on a fixed interval such as [0, ∞)).

    Martingales and Semimartingales

    Among the adapted processes defined in the foregoing section, not all are suitable for financial modeling. The work of Harrison and Pliska [60] shows that only a certain class of processes, called semimartingales, are good candidates. Indeed, the reader familiar with the theory of arbitrage knows that the stock price process must be a local martingale under an appropriate probability measure; Girsanov's theorem then implies that it must be a semimartingale under any (locally) equivalent probability measure.

    DEFINITION 1.5

    A process X is called an ℱt-martingale if it is integrable (i.e., 𝔼[|Xt|] < ∞ for all t), ℱt-adapted, and if it satisfies, for all 0 ≤ s ≤ t,

        𝔼[Xt | ℱs] = Xs    (1.1)

    X is called a local martingale if there is a localizing sequence (Tn) such that for all n, (Xt∧Tn, t ≥ 0) is a martingale. X is called a semimartingale if it is ℱt-adapted and can be written

        Xt = X0 + Mt + Vt    (1.2)

    where M is a local martingale, V has a.s. (almost surely) finite variation, and M and V are null at time t = 0. If V can be chosen to be predictable, X is called a special semimartingale and the decomposition with such V is called the canonical decomposition.

    If we need to emphasize the underlying probability measure ℙ, we will say that X is a ℙ-(semi)martingale.
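    A discrete-time analogue may help fix ideas. For a simple symmetric random walk S, the process Xn = Sn² admits the decomposition X = M + V with predictable part Vn = n (each squared increment has conditional expectation 1), so Mn = Sn² − n is a martingale null at 0. The Monte Carlo sketch below (our own illustration) checks that Mn has mean close to zero:

```python
import random

# For a simple symmetric random walk S_n, each increment satisfies
# E[S_n**2 - S_{n-1}**2 | F_{n-1}] = 1, so in the decomposition
# S_n**2 = M_n + V_n the predictable part is V_n = n.
random.seed(2)
n, trials = 50, 20000
total = 0.0
for _ in range(trials):
    s = sum(random.choice((-1, 1)) for _ in range(n))
    total += s * s - n          # one sample of M_n = S_n**2 - n
mean_m = total / trials         # should be close to E[M_n] = 0
```

    This is the discrete Doob decomposition; the semimartingale decomposition (1.2) plays the same role in continuous time.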

    With a semimartingale X are associated two increasing processes, called the quadratic variation and the conditional quadratic variation. These processes are interesting because they allow us to compute the decomposition of a semimartingale under a change of probability measure: This is the famous Girsanov theorem (see Section 1.3). We give a brief introduction to these processes here; for more details, see for example [27], [44], [63], [100], [103], [104], [105].

    We first turn to the quadratic variation of a semimartingale.

    DEFINITION 1.6

    Let X be a semimartingale such that 𝔼[Xt²] < ∞ for all t. There exists an increasing process, denoted by [X, X] and called the quadratic variation of X, such that

        [X, X]t = plim n→∞ Σi (Xti − Xti−1)²    (1.3)

    where for each n, τ(n) = (0 = t0 < t1 < · · · < tpn = t) is a subdivision of [0, t] whose mesh sup1≤i≤pn (ti − ti−1) tends to 0 as n tends to ∞.

    The abbreviation plim stands for limit in probability. It can be shown that the above definition is actually meaningful: The limit does not depend on a particular sequence of subdivisions. Moreover, if X is a martingale, the quadratic variation is a compensator of X²; that is, X² − [X, X] is again a martingale. More generally, given a process X, another process Y will be called a compensator for X if X − Y is a local martingale. Because of the properties of martingales, compensation is the key to many properties when paths are not supposed to be continuous.
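    The convergence in (1.3) can be observed numerically. In the sketch below (our own illustration) a Brownian path is simulated on a fine grid; its realized quadratic variation over [0, 1] should be close to [W, W]1 = 1:

```python
import math
import random

def quadratic_variation(path):
    """Realized quadratic variation sum_i (X_{t_i} - X_{t_{i-1}})**2
    along the subdivision carried by `path`."""
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

# Simulate a Brownian path on [0, 1]; for Brownian motion [W, W]_t = t.
random.seed(3)
t, steps = 1.0, 100000
dt = t / steps
w = [0.0]
for _ in range(steps):
    w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))
qv = quadratic_variation(w)     # close to 1.0 on a fine grid
```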

    Given two semimartingales X and Y, we define the quadratic covariation of X and Y by a polarization identity:

        [X, Y] = ¼ ([X + Y, X + Y] − [X − Y, X − Y])

    Let M be a martingale. It can be shown that there exist two uniquely determined martingales Mc and Md such that M = Mc + Md, Mc has continuous paths, and Md is orthogonal to any continuous martingale; that is, Md N is a martingale for any continuous martingale N. Mc is called the continuous martingale part of M, while Md is called the purely discontinuous part. If X is a special semimartingale with canonical decomposition X = M + V, Xc denotes the continuous martingale part of M, that is, Xc = Mc.

    Note that the jump at time t of the quadratic variation of a semimartingale X is simply given by Δ[X, X]t = (ΔXt)². We have the following important property:

        [X, X]t = [Xc, Xc]t + Σ0<s≤t (ΔXs)²    (1.4)

    where the last sum is actually meaningful (see [100]).

    We now turn to the conditional quadratic variation.

    DEFINITION 1.7

    Let X be a semimartingale such that 𝔼[Xt²] < ∞ for all t. If

        plim n→∞ Σi 𝔼[(Xti − Xti−1)² | ℱti−1]    (1.5)

    exists, where for each n, τ(n) = (0 = t0 < t1 < · · · < tpn = t) is a subdivision of [0, t] whose mesh sup1≤i≤pn (ti − ti−1) tends to 0 as n tends to ∞, and the limit does not depend on a particular subdivision, this limit is called the conditional quadratic variation of X and is denoted by ⟨X, X⟩t. In that case, ⟨X, X⟩ is an increasing process.

    In contrast to the quadratic variation, the limit in (1.5) may fail to exist for some semimartingales X. However, it can be shown that the limit exists, and that the process ⟨X, X⟩ is well-defined, if X is a special semimartingale (in particular, for a Lévy process or a continuous semimartingale).

    Similar to the case of quadratic variation, the conditional quadratic covariation is defined as

        ⟨X, Y⟩ = ¼ (⟨X + Y, X + Y⟩ − ⟨X − Y, X − Y⟩)

    as soon as this expression makes sense.

    It can also be proven that when it exists, the conditional quadratic variation is the predictable compensator of the quadratic variation; that is, ⟨X, X⟩ is a predictable process and [X, X] − ⟨X, X⟩ is a martingale. It follows that if X is a martingale, X² − ⟨X, X⟩ is also a martingale, and the conditional quadratic variation is the predictable compensator of X². The (conditional) quadratic variation has the following well-known properties, provided the quantities considered exist:

    The maps (X, Y) ↦ [X, Y] and (X, Y) ↦ ⟨X, Y⟩ are linear in X and Y.

    If X is continuous with finite variation, [X, Y] = ⟨X, Y⟩ = 0 for any semimartingale Y.

    Moreover, we have the following important identity (see [100]):

        [X, X]t = ⟨Xc, Xc⟩t + Σ0<s≤t (ΔXs)²

    so that if X has continuous paths, ⟨X, X⟩ is identical to [X, X]. The (conditional) quadratic variation will appear in the decomposition of F(X) for suitable F, given by Itô's formula, which lies at the heart of stochastic calculus.

    Markov Processes

    We now introduce briefly another class of processes that are memoryless at stopping times.

    DEFINITION 1.8

    1. An ℱt-adapted process X is called a Markov process in the filtration (ℱt) if for all t ≥ 0 and every measurable and bounded functional F,

        𝔼[F(Xu, u ≥ t) | ℱt] = 𝔼[F(Xu, u ≥ t) | Xt]    (1.6)

    2. X is called a strong Markov process if (1.6) holds with t replaced by any finite stopping time T.

    In other words, for a Markov process, at each time t, the whole past is summarized in the present value Xt of the process. For a strong Markov process, this remains true at a stopping time. In financial terms, an investment decision is often made on the basis of the present state of the market, which in some sense sums up its history.

    A nice feature of Markov processes is the Feynman-Kac formula; this formula links Markov processes to (integro-)partial differential equations and makes available numerical techniques such as the finite difference method explained in Chapter 4. We do not go further into Markov processes and go on with stochastic calculus. Some relationships between Markov processes and semimartingales are discussed in [28].
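    A minimal numerical sketch of the Feynman-Kac link (our own illustration, with zero discounting): u(t, x) = 𝔼[f(x + W(T−t))] solves the heat equation with terminal condition f, and for f(y) = y² the expectation is known in closed form, 𝔼[(x + Ws)²] = x² + s, against which a Monte Carlo estimate can be checked:

```python
import math
import random

def mc_expectation(f, x, s, trials, rng):
    """Monte Carlo estimate of E[f(x + W_s)], the (zero-discounting)
    Feynman-Kac representation of the heat-equation solution with
    terminal condition f, started at x with time s to go."""
    total = 0.0
    for _ in range(trials):
        total += f(x + rng.gauss(0.0, math.sqrt(s)))
    return total / trials

rng = random.Random(4)
# For f(y) = y**2 the exact value is x**2 + s.
estimate = mc_expectation(lambda y: y * y, 1.0, 0.5, 200000, rng)
exact = 1.0 ** 2 + 0.5
```

    The finite difference method of Chapter 4 attacks the same expectation from the PDE side instead.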

    1.3 STOCHASTIC CALCULUS

    With the processes defined in the previous section (semimartingales), a theory of (stochastic) integral calculus can be built and used to model financial time series. Accordingly, this section contains the two results of probability theory that are most useful in finance: Itô's formula and the Girsanov theorem, both in a quite general form.

    The construction and properties of the stochastic integral are well known, and the financially minded reader can understand most of them through the parallel with a portfolio strategy (see Section 1.4 and Chapter 2).

    In general, the integral of a process H with respect to another process X is well-defined provided H is locally bounded and predictable and X is a semimartingale with 𝔼[Xt²] < ∞ for all t. The integral can then be thought of as the limit of elementary sums

        ∫0t Hs dXs = plim n→∞ Σi Hti−1 (Xti − Xti−1)

    where for each n, τ(n) = (0 = t0 < t1 < · · · < tpn = t) is a subdivision of [0, t] whose mesh sup1≤i≤pn (ti − ti−1) tends to 0 as n tends to ∞. See [27], [100], [103], or [104] for a rigorous definition.
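    These elementary sums can be formed directly. The sketch below (our own illustration) approximates ∫0t Ws dWs by left-point sums; evaluating the integrand at the left endpoint of each interval mirrors its predictability, and the limit has the closed form (Wt² − t)/2:

```python
import math
import random

def stochastic_integral(integrand, path):
    """Left-point Riemann sums sum_i H_{t_{i-1}} (X_{t_i} - X_{t_{i-1}}).
    Using the left endpoint for H mirrors the predictability of the
    integrand in the definition of the stochastic integral."""
    return sum(h * (b - a) for h, a, b in zip(integrand, path, path[1:]))

# Simulate a Brownian path on [0, 1] and integrate W against itself.
random.seed(5)
t, steps = 1.0, 200000
dt = t / steps
w = [0.0]
for _ in range(steps):
    w.append(w[-1] + random.gauss(0.0, math.sqrt(dt)))
approx = stochastic_integral(w, w)
exact = (w[-1] ** 2 - t) / 2.0   # closed form of int_0^t W dW
```

    Note that evaluating H at the right endpoint instead would shift the answer by the quadratic variation, which is exactly why predictability of the integrand matters here.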

    Note an important property of the stochastic integral. Let X, Y be semimartingales and H a predictable process such that ∫ HsdYs is well-defined; the following
