Stability by Fixed Point Theory for Functional Differential Equations
Ebook, 617 pages

About this ebook

This book is the first general introduction to stability of ordinary and functional differential equations by means of fixed point techniques. It contains an extensive collection of new and classical examples worked in detail and presented in an elementary manner.
Most of this text relies on three principles: a complete metric space, the contraction mapping principle, and an elementary variation of parameters formula. The material is highly accessible to upper-level undergraduate students in the mathematical sciences, as well as working biologists, chemists, economists, engineers, mathematicians, physicists, and other scientists using differential equations. It also introduces many research problems that promise to remain of ongoing interest.
Language: English
Release date: Apr 16, 2013
ISBN: 9780486153322
    Book preview

    Stability by Fixed Point Theory for Functional Differential Equations - T. A. Burton


    Chapter 0

    Introduction and Overview

    0.1 The Origin of the Work

    It has long been our view that Liapunov’s direct method is the leading technique for dealing with stability problems in many areas of differential equations. We have contributed to that theory for more than forty years and continue to do so. Yet, numerous difficulties with the theory and its application to specific problems persist, and it does seem that new methods are needed to address those difficulties. There is, of course, the problem of constructing appropriate Liapunov functionals and of carrying out various calculations, such as the annulus argument. These are things which we feel we can overcome with work and imagination. But there are also problems with the types of conditions which are typically imposed on the functions in the differential equations. These conditions are virtually always precise pointwise requirements. Real-world problems, with all their uncertainties, call for conditions which are averages. In this book we explore the use of fixed point theory in meeting some of those problems.

    Not only do the fixed point conditions emerge as averages, but very recently John Appleby has shown that the fixed point methods will also admit stochastic perturbations. In Section 7.3 he discusses how these perturbations relate to real-world applications. Chapter 7 is entirely his contribution. He introduces basic stochastic theory and shows how the fixed point methods yield stability results under stochastic perturbations. Thus, in two ways the fixed point methods seem more applicable to real-world problems than does Liapunov’s method.

    There is an added benefit to studying stability by fixed point methods. In the next section we illustrate how very complete, simple, and rigorous stability analysis of a highly sophisticated problem can be given using only a complete metric space, the contraction mapping theorem, and the elementary variation of parameters formula. In one step we prove existence, uniqueness, and boundedness of solutions of a problem which has challenged investigators for decades. This is important for students and investigators who are not primarily stability analysts, but whose work leads them to sophisticated stability problems. Increasingly, stability problems emerge in biology, economics, and unlikely places. These problems can often be solved using fixed point theory in an elementary and mathematically honest manner. What is required is imagination in setting up a fixed point mapping. This book is a guide to the process.

    For a long time we have been interested in the use of fixed point theory in conjunction with Liapunov’s direct method. Fixed point theory can be used directly to study a differential equation when solutions are being considered on a finite interval. But when we are discussing solutions on an infinite interval, some technique seems to be necessary to keep control of the solution. In Burton (1984) we use Liapunov functionals in conjunction with Browder’s fixed point theorem to obtain periodic solutions of nonlinear functional differential equations. In Burton (1985) a large part of the monograph is devoted to the use of Liapunov functionals with Horn’s fixed point theorem to obtain periodic solutions. In Burton (1994) we use mildly unbounded Liapunov functionals with Schaefer’s fixed point theorem to prove global existence for a variety of equations. In a parallel way Seiji (1989) and others have used Liapunov functions with fixed point theory to prove stability. In a different vein, Serban (2001) has used fixed point theory to prove asymptotic stability for a difference equation.

    In 1997 several investigators began a systematic study toward a comprehensive stability method based on fixed point theory. As it developed it became more of an investigation into what could be said about classical problems which had offered resistance to Liapunov’s direct method and what kinds of absolutely new properties can be discovered. The accent has been on using averaging techniques associated with variation of parameters to avoid pointwise conditions generated by the direct method. In several places we use Liapunov’s direct method, but in different ways than described above. First, it is used as a contrast with fixed point methods. Next, we frequently use the direct method to prove that solutions can be continued for all future time. Finally, in one of our sections fixed point methods are used to identify harmless perturbations, while Liapunov functionals are then used to obtain new stability results.

    In this book we offer a large collection of examples worked in great detail which clearly establish fixed point theory as a viable tool in stability theory. We also present stability results of types not seen before in the literature which emerge because of the fixed point theory. There are other avenues for investigation and we believe that this can be a major area of research for many years. Moreover, there is such a wide class of problems that this book should serve as a handbook for investigation into the stability properties of specific problems confronting biologists, chemists, economists, engineers, mathematicians, physicists, and other scientists.

    0.2 An Introductory Lecture

    During the past year we have had the opportunity to lecture in several countries on fixed point methods in stability. The following material has emerged as what we believe is an informative introduction to the need for the method, the basic techniques, and some convincing examples.

    Functional differential equations are often used to model real-world problems in which there are great uncertainties. Yet, our theorems frequently demand excessively precise and detailed conditions which are totally unverifiable. A concrete example of this begins with Volterra (1928), who sought to model a certain biological problem by means of the equation

    x′(t) = −∫_{t−L}^{t} a(t − s)g(x(s)) ds    (0.2.1)

    in which xg(x) > 0 if x ≠ 0, L > 0, and a(t) is twice differentiable with a″(t) not identically zero. He was interested in conditions to ensure that solutions tend to zero as t → ∞. At the time he suggested an idea for constructing a Liapunov functional which might yield some stability properties. While that construction was a long time in coming, interest in the formulation of the problem continues to this day. Ergen (1954) and others used the equation to model a circulating fuel nuclear reactor. It is used as a model of one-dimensional viscoelasticity. And it has been used to model neural networks as well as many other problems. Levin (1963) followed Volterra’s suggestion and successfully constructed a Liapunov functional for a companion equation; thus, we call (0.2.1) the Volterra-Levin equation. The next year, Levin and Nohel (1964) extended that Liapunov functional to the case of (0.2.1), itself. Here is their result.

    Let L > 0, xg(x) > 0 when x ≠ 0, a : [0, L] → R and consider the IVP

    x′(t) = −∫_{t−L}^{t} a(t − s)g(x(s)) ds,    x(t) = ψ(t) for −L ≤ t ≤ 0    (0.2.2)

    We need an initial function, ψ, because we must define x′(0): x′(0) = −∫_{−L}^{0} a(−s)g(ψ(s)) ds.

    The resulting solution is x(t, 0, ψ) where x(t, 0, ψ) = ψ(t) for −L ≤ t ≤ 0.

    Theorem Levin-Nohel. Let a(t) ≥ 0, a′(t) ≤ 0, and a″(t) ≥ 0 on [0, L].

    For a given ψ and for x(t) = x(t, 0, ψ) the functional

    satisfies

    along the solution of (0.2.2), and x⁽ʲ⁾(t) → 0 as t → ∞ for j = 0, 1, 2.

    It can readily be argued that this is one of the most beautiful results in all of stability theory by Liapunov’s direct method. But when we look at the precise and detailed conditions on a(t) we are compelled to complain that never can these conditions be well verified in a real-world problem. Other solutions to these kinds of problems do exist.

    The following set of steps (S1, S2, S3) represents the way in which we establish stability of the zero solution of the functional differential equation x′ = f(t, xt) by fixed point methods in this book.

    S1. An examination of the differential equation reveals that for a given t0 we require an initial function ψ with values in Rn on an initial interval ending at t0. We then must determine a set M of functions φ : [t0, ∞) → Rn, agreeing with ψ on the initial interval, which would serve as acceptable solutions. Usually, this means that we would ask that φ be bounded and, sometimes, that φ(t) → 0 as t → ∞.

    S2. Next, shrewdly invert x′ = f(t, xt) obtaining an integral equation

    so that the mapping P defined by that integral equation takes each φ ∈ M to a function Pφ, and a fixed point of P solves the initial value problem x′ = f(t, xt) with x(t) = ψ(t) on the initial interval. The critical point here is that the inversion must be so shrewdly accomplished that the implied mapping does map bounded sets into bounded sets. Virtually never will a simple integration of the differential equation accomplish this.

    S3. Finally, select a fixed point theorem which will show that P has a fixed point in M; being a member of M, it is an acceptable solution.
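The three steps can be sketched numerically. The equation, initial function, and parameters below are assumptions chosen purely for illustration: we take the scalar delay equation x′(t) = −x(t) + (1/2) sin(x(t − 1)) with ψ ≡ 1 on [−1, 0], invert it by variation of parameters, and iterate the resulting mapping, which contracts with constant 1/2 in the supremum norm.

```python
import math

# Hypothetical illustration of S1-S3 for x'(t) = -x(t) + (1/2) sin(x(t - 1))
# with psi(t) = 1 on [-1, 0].  Variation of parameters inverts the equation:
#   (P phi)(t) = psi(0) e^{-t} + int_0^t e^{-(t-s)} (1/2) sin(phi(s - 1)) ds,
# a contraction with constant 1/2 on the bounded continuous functions.

H, T = 0.01, 10.0
m = int(1.0 / H)                  # grid points covering the initial interval
n = int(T / H)

def P(phi):
    """Apply the fixed point mapping on a grid (trapezoid rule)."""
    out = [1.0] * (m + 1)         # (P phi)(t) = psi(t) = 1 for t <= 0
    acc = 0.0                     # running value of int_0^t e^s * f(s) ds
    prev = 0.5 * math.sin(phi[0])                     # integrand at s = 0
    for k in range(1, n + 1):
        cur = math.exp(k * H) * 0.5 * math.sin(phi[k])  # phi(s-1) is index k
        acc += 0.5 * H * (prev + cur)
        prev = cur
        out.append(math.exp(-k * H) * (1.0 + acc))
    return out

phi = [1.0] * (m + n + 1)         # initial guess in the set M
diffs = []
for _ in range(12):
    nxt = P(phi)
    diffs.append(max(abs(a - b) for a, b in zip(nxt, phi)))
    phi = nxt

# Successive differences shrink at roughly the contraction constant 1/2.
ratios = [diffs[k + 1] / diffs[k] for k in range(3, 8)]
```

The iterates converge geometrically to the unique fixed point, which is the bounded solution through ψ.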

    We will now give a sequence of examples and techniques leading us to a solution of the Volterra-Levin equation. Our first example is a half-linear equation in which we can directly use the variation of parameters formula to invert the differential equation and obtain the mapping. The contraction mapping principle is given in the next chapter, as well as considerably more detail than we offer here in this brief introduction.

    Example 0.2.1. Let a, b, r be continuous, r(t) ≥ 0,

    (0.2.3)

    where

    (0.2.4)

    Theorem 0.2.1. Let −∫_0^t a(s) ds be bounded above and let (0.2.4) hold. If ψ : (−∞, 0] → R is bounded and continuous, then the solution x(t, 0, ψ) of (0.2.3) exists, is unique, and is bounded for t ≥ 0.

    Notice that on average a(t) dominates b(t). There are no bounds on a, b, r, or r′. The function a can have positive and negative values. Also, b(t) can be large when a(t) is negative. All of these properties would cause substantial difficulties with Liapunov’s direct method.

    Proof. Identify b(t)x(t − r(t)) as an inhomogeneous term and apply the variation of parameters formula to obtain

    Denote by S the complete metric space of bounded continuous functions φ : (−∞, ∞) → R with φ(t) = ψ(t) for t ≤ 0, taken with the supremum metric, and define (Pφ)(t) = ψ(t) if t ≤ 0 and, for t ≥ 0,

    Then Pφ is bounded and continuous. If φ, η ∈ S, then (0.2.4) yields ‖Pφ − Pη‖ ≤ α‖φ − η‖, so P is a contraction and has a unique fixed point. It is bounded, continuous, satisfies the initial condition, and satisfies the differential equation.

    Notice that the process is simple, fast, rigorous, and only requires knowledge of the elementary variation of parameters formula. In one step we obtain existence, uniqueness, and boundedness of solutions.
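The one-step claim can be tested numerically. The data below are assumptions chosen so that a contraction constant of 0.3 is evident, in the spirit of (0.2.4): a(t) = 1, b(t) = 0.3 cos t, r(t) = 1, and ψ ≡ 1. We compute the fixed point by iteration and compare it against an independent Euler integration of the delay equation.

```python
import math

# Assumed data for an equation of the form (0.2.3): a(t) = 1,
# b(t) = 0.3 cos t, r(t) = 1, psi = 1 on [-1, 0].  Then
# int_0^t e^{-(t-s)} |b(s)| ds <= 0.3 < 1.  The fixed point of
#   (P phi)(t) = e^{-t} + int_0^t e^{-(t-s)} b(s) phi(s - 1) ds
# is computed by iteration and compared with a direct Euler solve.

H, T = 0.005, 10.0
m = int(1.0 / H)               # one delay interval in grid steps
n = int(T / H)

def b(t):
    return 0.3 * math.cos(t)

def P(phi):
    out = [1.0] * (m + 1)                  # history psi = 1 on [-1, 0]
    acc, prev = 0.0, b(0.0) * phi[0]       # integrand e^s b(s) phi(s-1)
    for k in range(1, n + 1):
        s = k * H
        cur = math.exp(s) * b(s) * phi[k]  # phi(s - 1) is grid index k
        acc += 0.5 * H * (prev + cur)      # trapezoid rule
        prev = cur
        out.append(math.exp(-s) * (1.0 + acc))
    return out

phi = [1.0] * (m + n + 1)
for _ in range(30):                        # 0.3^30 is negligible
    phi = P(phi)

# Independent check: Euler method of steps for x' = -x + b(t) x(t - 1).
x = [1.0] * (m + 1)
for k in range(n):
    x.append(x[-1] + H * (-x[-1] + b(k * H) * x[k]))

gap = max(abs(u - v) for u, v in zip(phi, x))
```

The two computations agree to within the Euler discretization error, illustrating that the single contraction argument delivers existence, uniqueness, and boundedness at once.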

    We can readily add small conditions implying solutions tend to zero. This is done in a later section. Next, we come to a linear equation which offers challenges.

    Example 0.2.2. Let a : [0, ∞) → R be bounded and continuous, let r be a positive constant, and let

    (0.2.5)

    Given a continuous ψ : [−r, 0] → R, we can write the equation as

    so that by the variation of parameters formula, followed by integration by parts, we have

    As before, we will define a contraction mapping from this formula for x(t) if there is a constant α < 1 with

    (0.2.6)

    Notice again that (0.2.6) represents averages; there is no bound, either above or below, on a(t). But if a(t) ≥ 0, then (0.2.6) can be replaced by

    As we are interested in asymptotic stability we will need

    Theorem 0.2.2. Let ∫_0^t a(s) ds → ∞ as t → ∞ and let (0.2.6) hold. Then for every continuous initial function ψ : [−r, 0] → R the solution x(t, 0, ψ) is bounded and tends to zero as t → ∞.

    Proof. Let (B, ‖ · ‖) be the Banach space of bounded and continuous functions φ : [−r, ∞) → R with the supremum norm. Let S be the set of φ ∈ B such that φ(t) = ψ(t) on [−r, 0] and φ(t) → 0 as t → ∞.

    Define P : S → S by (Pφ)(t) = ψ(t) on [−r, 0] and, for t ≥ 0,

    Now Pφ is continuous, and from (0.2.6) it follows that Pφ is bounded. P is a contraction by (0.2.6).

    We can show that the last term of Pφ tends to zero by using the classical proof that the convolution of an L¹ function with a function tending to zero itself tends to zero. Thus P maps S into S, and its unique fixed point tends to zero, a solution of the equation.
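A quick experiment (coefficients assumed for illustration) shows the averaging at work: in an equation of the form (0.2.5) we let a(t) = 1 + 2 sin t, which changes sign and is dominated by nothing pointwise, yet has mean 1, and the solution still decays.

```python
import math

# Assumed illustration for an equation of the form (0.2.5):
# a(t) = 1 + 2 sin t takes negative values, but its mean is 1 and
# int_0^t a(s) ds -> infinity.  With a short delay r = 0.1 the solution of
# x'(t) = -a(t) x(t - r) still tends to zero.

H, R, T = 0.001, 0.1, 20.0
m, n = int(R / H), int(T / H)

x = [1.0] * (m + 1)                        # psi(t) = 1 on [-r, 0]
for k in range(n):
    a = 1.0 + 2.0 * math.sin(k * H)
    x.append(x[-1] - H * a * x[k])         # Euler; x(t - r) is index k

tail = max(abs(v) for v in x[-int(5.0 / H):])   # sup over t in [15, 20]
```

Note that the solution grows temporarily whenever a(t) < 0; only the average drives it down, which is exactly the behavior that pointwise Liapunov conditions cannot capture.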

    We return now to the Volterra-Levin problem in which we ask more of g, but conditions on a(t) which are far more verifiable in real-world problems.

    Example 0.2.3. Suppose that there is a function g : R → R satisfying:

    (0.2.7)

    for some K > 0 and all x, y ∈ R,

    (0.2.8)

    and sometimes

    (0.2.9)

    for some β > 0.

    Consider the scalar equation

    (0.2.10)

    with L > 0, p continuous,

    (0.2.11)

    and for the K of (0.2.7) let

    (0.2.12)

    Theorem 0.2.3. If (0.2.7), (0.2.8), (0.2.11), and (0.2.12) hold, then every solution of (0.2.10) is bounded. If (0.2.9) also holds, then every solution and its derivative tend to zero.

    Proof. Let ψ : [−L, 0] → R be a given continuous initial function and let x1(t) := x(t, 0, ψ) be the unique resulting solution. By the growth condition on g, x1(t) exists on [0, ∞). If we add and subtract g(x) we can write the equation as

    Define a continuous non-negative function a : [0, ∞) → [0, ∞) by

    Since a is the quotient of continuous functions it is continuous when assigned the limit at x1(t) = 0, if such a point exists.

    Thus, for the fixed solution, our equation is

    which, by the variation of parameters formula, followed by integration by parts, can then be written as

    With the notation φ0 = ψ meaning that φ(t) = ψ(t) for −L ≤ t ≤ 0, define

    Using the above equation in x(t), define (Pφ)(t) = ψ(t) if −L ≤ t ≤ 0. If t ≥ 0, then define

    To see that P is a contraction, note that if φ and η are in the set, then

    There is a unique fixed point, a bounded solution.

    To obtain a solution tending to zero, we add to the set the condition that φ(t) → 0 as t → ∞. We can show (Pφ)(t) → 0 when φ(t) → 0, so the fixed point tends to 0. From (0.2.10) we see that x′(t) also tends to zero. With assumptions on boundedness of the derivative of p we could claim that x″(t) also tends to zero, as did Levin and Nohel.
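The conclusion can be observed directly on an equation of the Volterra-Levin form discussed above; the kernel, nonlinearity, and initial function below are illustrative assumptions consistent with the hypotheses quoted earlier.

```python
# Simulation of an equation of Volterra-Levin form (data assumed for
# illustration): x'(t) = - int_{t-L}^{t} a(t - s) g(x(s)) ds with L = 1,
# g(x) = x (so x g(x) > 0 for x != 0), and a(u) = (1 - u)^2, which satisfies
# a >= 0, a' <= 0, a'' > 0 on [0, 1].  The solution decays to zero.

H, T = 0.01, 60.0
m, n = int(1.0 / H), int(T / H)

x = [0.5] * (m + 1)                        # psi(t) = 0.5 on [-1, 0]
for k in range(n):
    # trapezoid rule for int_{t-1}^{t} (1 - (t - s))^2 x(s) ds, with u = t - s
    vals = [(1.0 - j * H) ** 2 * x[k + m - j] for j in range(m + 1)]
    integral = H * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    x.append(x[-1] - H * integral)         # Euler step

late = max(abs(v) for v in x[-int(10.0 / H):])   # sup over t in [50, 60]
```

The decay is roughly exponential at a rate governed by the total kernel mass ∫_0^1 (1 − u)² du = 1/3, which matches the intuition behind the averaged conditions.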

    Fixed point theory can answer some of our most fundamental questions about the behavior of solutions of differential equations. Our first encounters with differential equations show us that all solutions of the constant coefficient equation

    x″ + ax′ + bx = 0

    tend to zero as t → ∞ whenever a and b are positive constants; in fact, they do so exponentially, along with their derivatives. But we can prove using ideal theory that we can never solve the simple equation

    x″ + tx = 0

    with anything like elementary functions. Thus, we know that variable coefficients present real difficulties. We look at

    x″ + h(t)x′ + k²x = 0

    with k² > 0 and h(t) ≥ h0, where h0 is a positive constant and we conjecture that solutions will tend to zero. R. A. Smith (1961) proved a beautiful result showing that the zero solution of that equation is asymptotically stable if and only if

    It is not hard to show that the condition holds for h(t) = h0 + t, but fails for h(t) = h0 + t². These functions are crucial benchmarks in our thinking about differential equations. They tell us when our intuition works and when it fails. Volumes have been written about the behavior of solutions of

    x″ + f(t, x)x′ + g(x) = 0

    but we have very little information about solutions of

    x″ + f(t, x, x′)x′ + b(t)g(x(t − L)) = 0

    in which b : [0, ∞) → [0, ∞) is continuous and bounded, g : R → R and f : R × R × R → [0, ∞) are continuous. We would be so interested in knowing what limits on these functions will preserve the property that all solutions tend to zero. In Section 2.4 we study that problem by means of transformations much like those in the proof of Theorem 0.2.3 and with contraction mappings. We show that for appropriate bounds on b and L, the benchmark of R. A. Smith holds in that f can increase as fast as t and still solutions will tend to zero.
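Smith's benchmark is easy to probe numerically. The parameters below (k = 1, h0 = 3, initial data x(0) = 1, x′(0) = 0) are assumptions for illustration; a semi-implicit step keeps the strongly damped case stable.

```python
# Probe of Smith's benchmark for x'' + h(t) x' + x = 0 (parameters assumed):
# with h(t) = 3 + t the solution still tends to zero, while with
# h(t) = 3 + t^2 the integral condition fails: the ever-heavier damping
# freezes the solution away from zero.

def final_value(h, T=100.0, dt=1e-3):
    x, v, t = 1.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        v = (v - dt * x) / (1.0 + dt * h(t))   # damping handled implicitly
        x += dt * v
        t += dt
    return x

x_linear = final_value(lambda t: 3.0 + t)      # decays toward zero
x_quad = final_value(lambda t: 3.0 + t * t)    # stalls at a nonzero value
```

The quadratically damped solution tracks the slow mode exp(−∫_0^t ds/h(s)), whose exponent converges, so the solution cannot reach zero no matter how long we wait.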

    Not only should we be able to attack classical problems and obtain better solutions, but new methods should present entirely new kinds of solutions. Fixed point theory does exactly that. Results of the following type are given in Section 3.1. We can use methods similar to those in the proof of Theorem 0.2.3 to obtain stability results for

    x′(t) = −x²ⁿ⁺¹(t − r)

    for r a positive constant. We have no idea how to prove a stability result for
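A small simulation makes the claim plausible; the choices n = 1, r = 0.5, and the constant initial function 0.5 are assumptions for illustration.

```python
# Illustrative simulation of x'(t) = -x^{2n+1}(t - r) with n = 1 (cubic) and
# r = 0.5: the solution creeps to zero slowly, roughly like t^{-1/2}, much as
# the ordinary equation x' = -x^3 does.

H, R, T = 0.005, 0.5, 40.0
m = int(R / H)

x = [0.5] * (m + 1)                  # psi(t) = 0.5 on [-r, 0]
for k in range(int(T / H)):
    x.append(x[-1] - H * x[k] ** 3)  # Euler; x(t - r) is grid index k

# The ordinary-equation analogue predicts x(T) ~ 0.5 / sqrt(1 + 0.5 T).
```

Because the feedback is odd and the delayed value stays positive here, the solution decreases monotonically; the delicate part, which the fixed point argument handles, is ruling out delay-induced oscillation in general.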

    x′(t) = −(1 − 2 sin t)x²ⁿ⁺¹(t − r).

    Is it possible that we could use the higher order term in

    x′(t) = −(1 − 2 sin t)x²ⁿ⁺¹(t − r) − 2 sin t · x²ⁿ⁺³(t − r)

    to stabilize the equation? In fact, we can borrow the coefficient of the higher order term, add it to the coefficient of the lower order term, and use the techniques in the proof of Theorem 0.2.3 to show that the equation is stable. We do not believe that similar results have ever been conjectured or proved in any other way.

    Chapter 1

    Half-linear Equations

    1.1 Statement of the problem

    This section is an elementary introduction to the formulations of fixed point problems in differential equations. These formulations serve two purposes. First, they give a brief introduction to the kinds of problems we will be considering. But, more importantly, they offer motivation for the properties and results which we present in the next section; those properties are fundamental in the study of stability by fixed point theory.

    Many different kinds of problems can be solved by means of fixed point theory. Generally, to solve a problem with fixed point theory is to find:

    (a) a set S consisting of points which would be acceptable solutions;

    (b) a mapping P : S S with the property that a fixed point solves the problem;

    (c) a fixed point theorem stating that this mapping on this set will have a fixed point.

    We will be primarily interested in functional differential equations, but we begin with an ordinary differential equation

    (1.1.1)

    where g : [0, ∞) × Rn → Rn is continuous. To start us on our way we will discuss problems which are central to the study and motivate the contents of future sections. Several concepts may be used here which will be more fully defined and discussed later.

    Example 1.1.1 An existence theorem. Perhaps the most basic problem concerning (1.1.1) is to find a solution through a given point (t0, x0) ∈ [0, ∞) × Rn defined on some interval [t0, t0 + γ] and satisfying (1.1.1) on that interval.

    For this problem, our first guess would be that the set S should consist of differentiable functions φ : [t0, t0 + γ] → Rn with φ(t0) = x0. Next, the simplest way to find a mapping is to formally integrate (1.1.1) and obtain

    so that the mapping P on S is defined by

    A fixed point will certainly satisfy the equation. Since our mapping is given by an integral, our second approximation to S is the set of continuous functions; differentiability will be automatic. There is now a vast array of fixed point theorems which will yield a fixed point of that mapping and satisfy our initial value problem. This is typical of fixed point theory. The big step is in finding a suitable mapping; once that is done investigators may work for decades using that mapping to get progressively better results. We will begin with one of the simplest fixed point theorems.

    Theorem 1.1.1 Contraction Mapping Principle. Let (S, ρ) be a complete metric space and let P : S → S. If there is a constant α < 1 such that for each pair φ1, φ2 ∈ S we have

    ρ(Pφ1, Pφ2) ≤ αρ(φ1, φ2),

    then there is one and only one point φ ∈ S with Pφ = φ.

    We frequently use the supremum metric denoted by || · ||.
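As a one-line illustration of the principle (the map and space are chosen purely for demonstration), P x = cos x is a contraction on the complete metric space [−1, 1] with α = sin 1 < 1:

```python
import math

# cos maps [-1, 1] into itself and |cos u - cos v| <= sin(1)|u - v| there,
# so by the contraction mapping principle it has exactly one fixed point.
x = 1.0
for _ in range(200):
    x = math.cos(x)

residual = abs(math.cos(x) - x)      # how nearly x satisfies cos x = x
```

The iterates converge from any starting point in the space, and the error after n steps is at most αⁿ/(1 − α) times the size of the first step, which is what makes the theorem so usable in practice.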

    For our illustration here, it is easiest to complete the solution by asking that g satisfy a global Lipschitz condition of the form

    ∣g(t, x) − g(t, y)∣ ≤ K∣x − y∣

    for t ≥ t0 and for all x, y ∈ Rn, where ∣ · ∣ is any norm on Rn. This will allow us to give a contraction mapping argument. For any fixed interval [t0, t0 + γ], our set S with the supremum metric is a complete metric space and P : S → S. Checking our contraction requirement, we have

    so that if α := γK < 1 then P is a contraction with unique fixed point φ, a solution of our differential equation which satisfies the initial condition.

    So much more can be done. By adopting a different metric, we can increase the length of the interval of existence. By using Schaefer’s fixed point theorem we can get existence on [t0, ∞). By using Schauder’s theorem we can drop the Lipschitz condition and obtain existence without uniqueness. But everything begins with a suitable mapping; that is the central problem and it is the one on which this entire book centers. It can be relatively easy to state and prove theorems, once we have a proper mapping, but the real problem is in constructing the mapping.
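The remark about a different metric can be made concrete. In the sketch below (equation and weight chosen as assumptions for illustration) we take x′ = 3x, x(0) = 1 on [0, 1], so γK = 3 and the plain supremum metric gives no contraction, while the weighted metric sup e^{−2Kt}∣φ(t)∣ yields contraction constant 1/2 on the whole interval.

```python
import math

# Picard mapping for x' = 3x, x(0) = 1 on [0, 1]:
#   (P phi)(t) = 1 + int_0^t 3 phi(s) ds.
# gamma*K = 3 > 1, so P is not a contraction in the plain sup norm, yet under
# the weighted norm sup_t e^{-6t} |phi(t)| it contracts with constant 1/2.

H = 0.001
n = int(1.0 / H)
ts = [k * H for k in range(n + 1)]

def picard(phi):
    out, acc = [1.0], 0.0
    for k in range(1, n + 1):
        acc += 0.5 * H * (3.0 * phi[k - 1] + 3.0 * phi[k])   # trapezoid
        out.append(1.0 + acc)
    return out

iterates = [[1.0] * (n + 1)]
for _ in range(25):
    iterates.append(picard(iterates[-1]))

diffs = [[u - v for u, v in zip(a, b)] for a, b in zip(iterates[1:], iterates)]
plain = [max(abs(v) for v in d) for d in diffs]
weighted = [max(math.exp(-6.0 * t) * abs(v) for t, v in zip(ts, d))
            for d in diffs]

first_plain_ratio = plain[1] / plain[0]        # exceeds 1: no contraction
w_ratios = [weighted[k + 1] / weighted[k] for k in range(10)]
err = max(abs(v - math.exp(3.0 * t)) for t, v in zip(ts, iterates[-1]))
```

The iterates nevertheless converge to e^{3t} on all of [0, 1], exactly as the weighted-metric contraction predicts, so the interval of existence is no longer limited by γK < 1.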

    Seldom will we see a problem in which it is so easy to find a suitable mapping as the one we just finished. We are concerned with a treatment of stability by means of fixed point theory. The main classical method of studying stability is Liapunov’s direct method. In that study we must always begin by finding a Liapunov function which is a type of generalized distance function measuring the distance from a solution to the origin. It is a major problem to find an appropriate Liapunov function. But once a suitable Liapunov function is found, investigators may continue for decades deriving more and more information from that Liapunov function; we will later discuss the Liénard equation and Levin’s equation as two such examples. In precisely the same way, our major problem is to find a suitable mapping. Once that mapping is found, investigators may find the mapping an endless source of results. This study began with the idea that difficulties encountered in Liapunov theory might be circumvented by fixed point theory. Thus, we will frequently refer back to classical difficulties with Liapunov theory.

    Example 1.1.2 A bounded solution. Stability concerns a special kind of boundedness of solutions. Thus, we next ask how we can show that solutions of (1.1.1) are bounded. We would pick the set S as follows. For a given (t0, x0) ∊ [0, ∞) × Rn, let S be the set of differentiable functions φ : [t0, ∞) → Rn which are bounded and satisfy φ(t0) = x0. Next, we must select an appropriate mapping. If we try to write

    then we immediately have difficulty. If φ is bounded, we have no simple way of establishing that is bounded. That mapping will map a given function φ right out of the set and there is no hope of proving that there is a fixed point. And that gives rise to the name of this chapter.

    Suppose that our problem is half linear in the sense that it can be written as

    x′ = Ax + f(t, x)

    where A is an n × n real constant matrix, all of whose characteristic roots have negative real parts, while f : [0, ∞) × Rn → Rn is
