Stability & Periodic Solutions of Ordinary & Functional Differential Equations

T. A. Burton

This book's discussion of a broad class of differential equations will appeal to professionals as well as graduate students. Beginning with the structure of the solution space and the stability and periodic properties of linear ordinary and Volterra differential equations, the text proceeds to an extensive collection of applied problems. The background for and application to differential equations of the fixed-point theorems of Banach, Brouwer, Browder, Horn, Schauder, and Tychonov are examined, in addition to those of the asymptotic fixed-point theorems. The text concludes with a unified presentation of the basic stability and periodicity theory for nonlinear ordinary and functional differential equations.


    Chapter 1

    Linear Differential and Integrodifferential Equations

    1.0 The General Setting

    This chapter contains

    (a) a basic introduction to linear ordinary differential equations,

    (b) a basic introduction to linear Volterra integrodifferential equations,

    (c) the theory of existence of periodic solutions of both types of equations based on boundedness and stability.

    While it provides an introduction to these subjects, it does not pretend to be an exhaustive review of either subject; rather, it is a streamlined treatment which leads the reader quickly and clearly into the periodic theory and forms the proper foundation for the subsequent nonlinear work.

    The theme of the chapter is the cohesiveness of the theory. In so many ways the structures of the solution spaces of the systems

        x′ = A(t)x + p(t)

    and

        x′ = A(t)x + ∫_0^t C(t, s)x(s) ds + p(t)

    are completely indistinguishable. The proofs are identical and the variation of parameters formulas are frequently identical. Moreover, from the variation of parameters formulas there emerge beautiful formulas for periodic solutions.

    The present section is a general introduction to notation concerning both linear and nonlinear ordinary differential equations. Section 1.1 is a lengthy account of the basic theory of linear ordinary differential equations. Section 1.2 is a brief treatment of the existence of periodic solutions of

        x′ = A(t)x + p(t).

    Section 1.3 introduces the basic theory of linear Volterra equations and contains a subsection on equations of convolution type. Section 1.4 extends the periodic theory of Section 1.2 to Volterra equations of convolution type. Section 1.5 extends the periodic material to Volterra equations not of convolution type. Finally, Section 1.6 deals with the stability and boundedness needed to implement the periodic results.

    Throughout the following pages we will consider real systems of differential equations with a real independent variable t, usually interpreted as time.

    We use the following fairly standard notation: [a, b] = {t | a ≤ t ≤ b}, (a, b) = {t | a < t < b}, and [a, b) = {t | a ≤ t < b}.

    If U is a set in Rⁿ and p is a point in Rⁿ, then Ū denotes the closure of U, Uᶜ the complement of U, S(U, ε) an ε-neighborhood of U, and d(p, U) the distance from p to U.

    If A is a matrix of functions of t, then A′ = dA/dt denotes the matrix whose elements are the derivatives of the corresponding elements of A; ∫A(t) dt has the corresponding meaning.

    Unless otherwise stated, if x ∈ Rⁿ, then |x| denotes the Euclidean length of x. If A is an n × n matrix, A = (aij), then

        |A| = sup{ |Ax| : |x| ≤ 1 }.

    Additional norms are given in Sections 1.1 and 1.2.

    Let D be an open subset of Rⁿ, (a, b) an open interval on the t-axis, and f: (a, b) × D → Rⁿ. Then

        x′ = f(t, x)                                                (1.0.1)

    is a system of first-order differential equations. Thus, as x and f are n-vectors, (1.0.1) represents n scalar equations:

        xi′ = fi(t, x1, ..., xn),    i = 1, ..., n.

    Definition 1.0.1. A function ϕ: (c, d) → D with (c, d) ⊂ (a, b) is a solution of (1.0.1) if for each t ∈ (c, d) we have ϕ′(t) = f(t, ϕ(t)).

    Example 1.0.1. Let n = 1, (a, b) = (–∞, ∞), D = R, and x′ = x². Then one may verify that ϕ(t) = –1/t is a solution on (–∞, 0) and also on (0, ∞). Thus, (c, d) could be chosen as any subinterval of either (–∞, 0) or (0, ∞). We say that the solution has finite escape time because |x(t)| → ∞ as |t| → 0.
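
    The escape is easy to observe numerically. The following sketch is ours, not the book's; it assumes scipy is available and integrates x′ = x² from the illustrative initial point x(1) = 1, whose exact solution x(t) = 1/(2 – t) blows up as t → 2⁻.

        from scipy.integrate import solve_ivp

        # Integrate x' = x^2 forward from x(1) = 1; the exact solution through
        # (1, 1) is x(t) = 1/(2 - t), which escapes to infinity as t -> 2-.
        sol = solve_ivp(lambda t, x: x**2, (1.0, 1.999), [1.0],
                        rtol=1e-10, atol=1e-10, dense_output=True)

        for t in (1.5, 1.9, 1.99, 1.999):
            print(f"x({t}) = {sol.sol(t)[0]:10.1f}   exact = {1/(2 - t):10.1f}")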

    There is a simple geometrical interpretation of (1.0.1) and a solution ϕ. If ϕ(t) is graphed in Rn treating t as a parameter, then a directed curve is swept out. At each point ϕ(t1) with c < t1 < d the vector f(t1, ϕ(t1)) is the tangent vector to the curve (Fig. 1.1).

    Fig. 1.1

    Our discussion will be concerned primarily with the existence and properties of solutions of (1.0.1) through specified points (t0, x0) ∈ (a, b) × D.

    Definition 1.0.2. For a given system (1.0.1) and a given point (t0, x0) ∈ (a, b) × D,

        x′ = f(t, x),    x(t0) = x0                                 (IVP)

    denotes the initial value problem for (1.0.1).

    Definition 1.0.3. A function ϕ: (c, d) → D is a solution of (IVP) if ϕ is a solution of (1.0.1) and if t0 ∈ (c, d) with ϕ(t0) = x0. We denote it by ϕ(t, t0, x0) or by x(t, t0, x0).

    Example 1.0.2. Let x′ = x², as in Example 1.0.1, and consider the IVP with x(–1) = 1. Then an appropriate ϕ is defined by ϕ(t) = –1/t for –∞ < t < 0.

    Definition 1.0.4. An IVP has a unique solution just in case each pair of solutions ϕ1 and ϕ2 on any common interval (c, d) with ϕ1(t0) = ϕ2(t0) = x0 satisfies ϕ1(t) = ϕ2(t) on (c, d).

    Example 1.0.3. Let n = 1,

        x′ = 3x^{2/3}

    (taking the real cube root), and let x(0) = 0 be the initial condition. Clearly, ϕ1(t) = 0 on (–∞, ∞) is one solution of the IVP, while

        ϕ2(t) = t³

    is another solution on the same interval.
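
    A quick numerical check (ours; recall that the displayed equation is our reconstruction of this classical non-uniqueness example) confirms that both functions satisfy the equation through (0, 0).

        import numpy as np

        # phi1(t) = 0 and phi2(t) = t^3 both satisfy x' = 3*x^(2/3) with x(0) = 0.
        f = lambda x: 3.0 * np.cbrt(x)**2      # real cube root, valid for x < 0 too

        ts = np.linspace(-2.0, 2.0, 9)
        print(np.max(np.abs(3.0 * ts**2 - f(ts**3))))   # ~0: phi2' = f(phi2) everywhere
        print(f(0.0))                                   # 0.0: phi1 = 0 works as well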

    The following two results are among the most basic in differential equations. A proof in the linear case is given in Section 1.1 and in the general case in Chapter 3.

    Theorem 1.0.1. Let f be continuous on (a, b) × D and let (t0, x0) ∈ (a, b) × D. Then (IVP) has a solution.

    Definition 1.0.5. A function f: (a, b) × D → Rⁿ satisfies a local Lipschitz condition with respect to x at a point (t0, x0) if there is a neighborhood N of (t0, x0) and a constant K such that (t, x1) and (t, x2) ∈ N imply |f(t, x1) – f(t, x2)| ≤ K|x1 – x2|.

    Theorem 1.0.2. Let f be continuous on (a, b) × D and let (t0, x0) ∈ (a, b) × D. If f satisfies a local Lipschitz condition with respect to x at (t0, x0), then (IVP) has a unique solution.

    Exercise 1.0.1. Show that the function f in Example 1.0.3 does not satisfy a local Lipschitz condition at (0, 0).

    Exercise 1.0.2. A function f(t, x) is linear in x if f(t, x) = A(t)x + b(t), where A is an n × n matrix of functions and b(t) is an n-column vector function. Show that if f is linear in x and if each element of A is continuous for a < t < b, then f is locally Lipschitz at any point (t0, x0) ∈ (a, b) × Rⁿ.

    Exercise 1.0.3. Show that if n = 1, if f is continuous, and if f has a bounded partial derivative with respect to x in a neighborhood of (t0, x0), then f satisfies a local Lipschitz condition with respect to x. Can you extend the result to arbitrary n? Is f(x) = |x| a counterexample to the converse?

    We remark that solutions are defined on open intervals (c, d) ⊂ (a, b) only for the existence of right- and left-hand derivatives. If, for example, f: [a, b) × D → Rⁿ and f is continuous, then Theorem 1.0.1 is easily extended to produce a solution on [a, d) for some d > a; the solution simply will not have a derivative from the left at t = a. In this manner, it will be understood that solutions on closed or half-closed intervals are permitted, with one-sided derivatives being all that is required at closed endpoints.

    We next show that certain scalar differential equations of order greater than one may be reduced to a system of first-order equations. If y is a scalar, then y″ = g(t, y, y′) denotes a second-order scalar differential equation, where g is defined on some suitable set. It is understood, without great formality, that ϕ is a solution if it satisfies the equation. We convert this to a system of two first-order equations as follows: define x1 = y and x2 = y′, so that x1′ = x2 and x2′ = y″ = g(t, x1, x2). Thus, we have the system

        x1′ = x2,
        x2′ = g(t, x1, x2),

    which we express as

        x′ = f(t, x)

    with the understanding then that x = (x1, x2)T and f = (x2, g)T. If ϕ(t) is a solution of y″ = g(t, y, y′), then (ϕ(t), ϕ′(t))T is a solution of x′ = f(t, x). On the other hand, if (ϕ(t), ψ(t))T is a solution of x′ = f(t, x), then ϕ(t) is a solution of y″ = g(t, y, y′). Thus, we say the system is equivalent to the single equation.
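
    The recipe is mechanical, and numerical libraries consume exactly this first-order form. The sketch below is ours and assumes scipy; it reduces y″ = g(t, y, y′) with the illustrative choice g = –y and checks that y(π) = –1 when y(0) = 1, y′(0) = 0.

        import numpy as np
        from scipy.integrate import solve_ivp

        def g(t, y, yp):            # right side of y'' = g(t, y, y')
            return -y               # our example: y'' = -y

        def f(t, x):                # the equivalent first-order system
            x1, x2 = x
            return [x2, g(t, x1, x2)]

        # y(0) = 1, y'(0) = 0, so y(t) = cos t; check y(pi) = -1.
        sol = solve_ivp(f, (0.0, np.pi), [1.0, 0.0], rtol=1e-10, atol=1e-12)
        print(sol.y[0, -1])         # ~ -1.0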

    In exactly the same manner, the nth-order equation

        y^{(n)} = g(t, y, y′, ..., y^{(n–1)})

    is equivalent to the system

        x1′ = x2,    x2′ = x3,    ...,    x_{n–1}′ = xn,    xn′ = g(t, x1, ..., xn)

    obtained by setting x1 = y, x2 = y′, ..., xn = y^{(n–1)}.

    One may find many equivalent systems for a given higher-order scalar equation. For example, the well-known Liénard equation

        x″ + f(x)x′ + g(x) = 0

    has an equivalent system

        x′ = y – F(x),    y′ = –g(x),    where F(x) = ∫_0^x f(s) ds,

    which has proved to be very useful in analyzing the behavior of solutions.
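
    As an illustration (ours, using the van der Pol choices f(x) = ε(x² – 1), hence F(x) = ε(x³/3 – x), and g(x) = x; scipy is assumed), the Liénard system can be integrated directly, and the trajectory settles onto the famous limit cycle.

        from scipy.integrate import solve_ivp

        eps = 1.0
        F = lambda x: eps * (x**3 / 3.0 - x)    # F(x) = integral of f from 0 to x

        def lienard(t, z):
            x, y = z
            return [y - F(x), -x]               # x' = y - F(x), y' = -g(x), g(x) = x

        sol = solve_ivp(lienard, (0.0, 40.0), [0.1, 0.0], max_step=0.02)
        print(sol.y[0].min(), sol.y[0].max())   # settles onto a limit cycle, |x| near 2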

    By Theorem 1.0.1 we see that under very mild conditions indeed (IVP) has a solution. There are certainly many equations of the form of (IVP) which can be explicitly solved and that is the scope of most elementary courses in differential equations. However, most equations of interest cannot be solved and so one must resort to other methods. A good discussion of the problem of unsolvable equations may be found in Kaplansky (1957). In particular, it is shown that the simple equation y″ + ty = 0 cannot be solved in anything resembling closed form.

    Not only are we faced with the impossibility of solving an equation, but often the solution of a solvable equation is so complex and so formidable that it is nearly impossible to extract from it the desired information. Moreover, since the initial condition is often known only approximately, one wishes to make a fairly concise statement concerning the behavior of all solutions starting near a given point (t0, x0). Finally, one may be anxious to learn the behavior of solutions for arbitrarily large t.

    These difficulties and requirements often preclude effective use of approximate and computational devices. One is then led to the area of qualitative theory and, in particular, stability theory. These terms mean many things to many authors, and one author has isolated over 17,000 different types of stability. Our presentation here will relate in large measure to the following two types.

    Definition 1.0.6. Let f: (a, ∞) × Rⁿ → Rⁿ. Equation (1.0.1) is Lagrange stable if for each (t0, x0) ∈ (a, ∞) × Rⁿ, each solution ϕ(t, t0, x0) of the IVP is defined on [t0, ∞) and is bounded.

    Definition 1.0.7. Let f: (a, ∞) × D → Rⁿ and let ϕ be a solution of (1.0.1) defined on [t0, ∞). We say that ϕ is Liapunov stable if for each t1 ≥ t0 and each ε > 0 there exists δ > 0 such that |ϕ(t1, t0, x0) – x1| < δ and t ≥ t1 imply |ϕ(t, t0, x0) – ϕ(t, t1, x1)| < ε.

    Fig. 1.2

    Fig. 1.3

    Frequently, there is a point p ∈ D with f(t, p) = 0 for all t, so that ϕ(t) = p is a solution of (1.0.1), and it is found by inspection. It is called an equilibrium solution.

    Definition 1.0.8. Let f: (a, ∞) × D → Rⁿ and let f(t, p) = 0 for all t and some p ∈ D. We say that ϕ(t) = p is Liapunov stable if for each t0 > a and each ε > 0 there exists δ > 0 such that t ≥ t0 and |x0 – p| < δ imply |p – ϕ(t, t0, x0)| < ε.

    For p = 0 this concept is portrayed in Fig. 1.2 when (1.0.1) is time independent, while Fig. 1.3 gives the time-dependent picture.

    Definitions 1.0.6 and 1.0.7 lead us along quite divergent paths, both of which we will explore in some detail in the following sections. However, for a linear equation of the form x′ = A(t)x, Definitions 1.0.6 and 1.0.8 are, in fact, equivalent.

    Our definitions clearly specify that we are working with a real equation, in a real domain, and with real solutions. However, the form of the equation may frequently allow the domain D to be part of the complex plane. And it may happen that complex solutions present themselves in an entirely natural way. In such cases it is often possible to extract real solutions from complex ones.

    In the next example we attempt to present a concrete realization of most of the concepts presented to this point.

    Example 1.0.4. Consider the equation y″ + y = 0 having complex solution ϕ(t) = exp i(t – t0). Note that both the real and imaginary parts also satisfy the equation. An equivalent system is

        x1′ = x2,    x2′ = –x1,

    which may be expressed as

        x′ = Ax,    A = (  0   1 )
                        ( –1   0 ),

    and thus with f(t, x) = Ax a Lipschitz condition is satisfied. Two solutions of the system are (cos(t – t0), –sin(t – t0))T and (sin(t – t0), cos(t – t0))T. One easily verifies that linear combinations of solutions are solutions. Also, if (t0, x0) is a given initial condition with x0 = (x01, x02)T, then

        x(t) = x01 (cos(t – t0), –sin(t – t0))T + x02 (sin(t – t0), cos(t – t0))T

    is a solution satisfying the initial condition. As the IVP has unique solutions, this is the solution. Note that in Definition 1.0.8 we may take p = 0. Moreover, a computation shows that solutions of our equation have constant Euclidean length; hence, the zero solution is Liapunov stable and we may take δ = ε.
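
    The constant-length computation is easy to reproduce numerically. The sketch below is ours and assumes scipy; it integrates the system from the illustrative point x0 = (0.3, 0.4)T and confirms |x(t)| = 0.5 = |x0| along the whole orbit.

        import numpy as np
        from scipy.integrate import solve_ivp

        sol = solve_ivp(lambda t, x: [x[1], -x[0]], (0.0, 20.0), [0.3, 0.4],
                        rtol=1e-10, atol=1e-12)
        lengths = np.sqrt(sol.y[0]**2 + sol.y[1]**2)
        print(lengths.min(), lengths.max())   # both ~0.5 = |x0|: length is constant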

    1.1 Linear Ordinary Differential Equations

    1.1.1 Homogeneous Systems

    Let A be an n × n matrix of real, continuous functions on a real t-interval (a, b). Then

        x′ = A(t)x                                                  (1.1.1)

    is called a linear homogeneous system of first-order differential equations. If A = (aij), then (1.1.1) can also be expressed as

        xi′ = ai1(t)x1 + ... + ain(t)xn,    i = 1, ..., n.

    The following result seems to have been first proved by Gronwall, although it was subsequently discovered by many others. It usually goes under the name of Gronwall’s inequality, Reid’s inequality, or Bellman’s lemma. The result is continually being extended and has wide application. We use it here to obtain a simple uniqueness result for the IVP.

    Theorem 1.1.1 (Gronwall). Let k be a nonnegative constant and let f and g be continuous functions mapping an interval [c, d] into [0, ∞) with

        f(t) ≤ k + ∫_c^t g(s)f(s) ds   for c ≤ t ≤ d.              (1.1.2)

    Then

        f(t) ≤ k exp( ∫_c^t g(s) ds )   for c ≤ t ≤ d.             (1.1.3)

    Proof. We prove the result for k > 0 and then let k → 0 through positive values for the result when k = 0. Multiply both sides of (1.1.2) by g(t) and then divide both sides by the right side of (1.1.2). Then integrate both sides from c to t to obtain

        ∫_c^t [ g(s)f(s) / ( k + ∫_c^s g(u)f(u) du ) ] ds ≤ ∫_c^t g(s) ds.

    The integrand on the left contains the derivative of the denominator in the numerator, and so this immediately yields

        ln[ ( k + ∫_c^t g(s)f(s) ds ) / k ] ≤ ∫_c^t g(s) ds

    or

        ( k + ∫_c^t g(s)f(s) ds ) / k ≤ exp( ∫_c^t g(s) ds ).

    If we multiply by k and apply (1.1.2) again, then we have (1.1.3), as desired.
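
    The inequality can be checked numerically. The sketch below is ours (the forward scheme and the choice g(t) = 1 + sin²t are illustrative assumptions): it solves the equality case f(t) = k + ∫_c^t g(s)f(s) ds step by step and compares f with the Gronwall bound k exp(∫_c^t g(s) ds), which this extremal f attains.

        import numpy as np

        # Solve the equality case f(t) = k + int_c^t g(s) f(s) ds by a forward
        # scheme; this f attains the Gronwall bound (1.1.3).
        c, d, k = 0.0, 2.0, 1.0
        n = 20000
        ts = np.linspace(c, d, n + 1)
        h = (d - c) / n
        g = 1.0 + np.sin(ts)**2                    # any continuous g >= 0 will do

        f = np.empty(n + 1)
        f[0], integral = k, 0.0
        for i in range(n):                         # accumulate int_c^t g*f
            integral += h * g[i] * f[i]
            f[i + 1] = k + integral

        G = np.concatenate([[0.0], np.cumsum((g[1:] + g[:-1]) / 2) * h])
        print(np.max(np.abs(f - k * np.exp(G))))   # small, and shrinks with h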

    It is now convenient to introduce certain matrix norms and inequalities.

    Definition 1.1.1. If A is an n × n matrix, A = (aij), and if x is a column vector, then ||A|| = max_{1≤j≤n} Σ_{i=1}^n |aij| and ||x|| = Σ_{i=1}^n |xi|.

    Exercise 1.1.1. Let A and B be n × n matrices, α a scalar, and x an n-column vector. Show that the following properties hold both as stated and with norm replaced by Euclidean length as defined in Section 1.0:

    (1) ||A|| ≥ 0 and ||A|| = 0 iff A = 0.

    (2) ||αA|| = |α| ||A||.

    (3) ||A + B|| ≤ ||A|| + ||B|| and ||x1 + x2|| ≤ ||x1|| + ||x2||.

    (4) ||AB|| ≤ ||A|| ||B||.

    (5) ||Ax|| ≤ ||A|| ||x||.

    The reader may also note that the definition of Liapunov stability (Def. 1.0.7) may be equivalently stated with Euclidean length replaced by norm.
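
    Properties (4) and (5) can be spot-checked numerically. The sketch below is ours and uses the column-sum norm as reconstructed in Definition 1.1.1 above (an assumption on our part); numpy is assumed available.

        import numpy as np

        norm_mat = lambda A: np.max(np.sum(np.abs(A), axis=0))   # largest column sum
        norm_vec = lambda x: np.sum(np.abs(x))

        rng = np.random.default_rng(0)
        for _ in range(1000):
            A = rng.normal(size=(3, 3))
            B = rng.normal(size=(3, 3))
            x = rng.normal(size=3)
            assert norm_mat(A @ B) <= norm_mat(A) * norm_mat(B) + 1e-12   # (4)
            assert norm_vec(A @ x) <= norm_mat(A) * norm_vec(x) + 1e-12   # (5)
        print("properties (4) and (5) hold on all samples")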

    Convergence of sequences of vectors is defined component by component, by norm, or by distance function. For example, a sequence of vectors {xk} has limit x if ||xk – x|| → 0 or, equivalently, |xk – x| → 0 as k → ∞.

    Theorem 1.1.2. Let (c, d) ⊂ (a, b), with (c, d) a finite interval, and let A(t) be continuous and bounded on (c, d). If (t0, x0) ∈ (c, d) × Rⁿ, then the IVP

        x′ = A(t)x,    x(t0) = x0

    has a unique solution defined on (c, d).

    Proof. Note that x(t, t0, x0) is a solution of the IVP if and only if

        x(t) = x0 + ∫_{t0}^t A(s)x(s) ds.                          (1.1.4)

    For brevity we write x(t) = x(t, t0, x0) and define a sequence {xk(t)} on (c, d) inductively by

        x0(t) = x0,    xk+1(t) = x0 + ∫_{t0}^t A(s)xk(s) ds,    k = 0, 1, 2, ....    (1.1.5)

    As A is continuous on (c, d) and t0 ∈ (c, d), the sequence is well defined and each xk(t) is differentiable. We will show that {xk(t)} converges uniformly on (c, d) and, as each xk(t) is continuous, we may pass the limit through the integral in (1.1.5), obtaining relation (1.1.4) for the limit function x(t) (cf. Fulks, 1969, p. 417).

    Using (1.1.5) twice and subtracting, we obtain

        xk+1(t) – xk(t) = ∫_{t0}^t A(s)[xk(s) – xk–1(s)] ds.

    Taking norms of both sides of the last equation and recalling that ||A(t)|| ≤ m for some m > 0, we obtain

        ||xk+1(t) – xk(t)|| ≤ m | ∫_{t0}^t ||xk(s) – xk–1(s)|| ds |.

    Lemma. For t ∈ (c, d) we have

        ||xk+1(t) – xk(t)|| ≤ ||x0|| m^{k+1} |t – t0|^{k+1} / (k + 1)!.    (1.1.6)

    Proof. By induction, if k = 0 then from (1.1.5) we have x1(t) – x0(t) = ∫_{t0}^t A(s)x0 ds, so that

        ||x1(t) – x0(t)|| ≤ m |t – t0| ||x0||,

    which verifies the statement. Assume that (1.1.6) holds for k = p – 1:

        ||xp(t) – xp–1(t)|| ≤ ||x0|| m^p |t – t0|^p / p!.

    Then

        ||xp+1(t) – xp(t)|| ≤ m | ∫_{t0}^t ||xp(s) – xp–1(s)|| ds |
                            ≤ m | ∫_{t0}^t ||x0|| m^p |s – t0|^p / p! ds |
                            = ||x0|| m^{p+1} |t – t0|^{p+1} / (p + 1)!,

    as desired for (1.1.6).

    As (c, d) is a finite interval, the right side of (1.1.6) is bounded by

        Mk = ||x0|| [m(d – c)]^{k+1} / (k + 1)!,

    and Σ Mk converges by the ratio test. Thus, by the Weierstrass M-test the series

        x0(t) + Σ_{k=0}^∞ [xk+1(t) – xk(t)]

    converges uniformly to some function x(t). But the typical partial sum is xk(t) and hence {xk(t)} converges uniformly to x(t). Thus, we take the limit through the integral in (1.1.5), obtaining the existence of a continuous function x(t) satisfying

        x(t) = x0 + ∫_{t0}^t A(s)x(s) ds.

    As the right side is differentiable, so is the left. The initial condition is satisfied and so the existence of a solution of the IVP is proved.

    We now suppose that there are two solutions ϕ1 and ϕ2 of the IVP on (c, d). Then by (1.1.4) we have

        ϕ2(t) – ϕ1(t) = ∫_{t0}^t A(s)[ϕ2(s) – ϕ1(s)] ds.

    Then

        ||ϕ2(t) – ϕ1(t)|| ≤ | ∫_{t0}^t m ||ϕ2(s) – ϕ1(s)|| ds |,

    and so for t ≥ t0 we have

        ||ϕ2(t) – ϕ1(t)|| ≤ ∫_{t0}^t m ||ϕ2(s) – ϕ1(s)|| ds.

    We now apply Gronwall's inequality (with k = 0) to obtain ||ϕ2(t) – ϕ1(t)|| = 0 on [t0, d). A similar argument for t ≤ t0 completes the proof.

    The result is fundamental and gives a simple criterion for existence and uniqueness. The technique of proof is called Picard's method of successive approximations, and it has many other uses as well. The contraction mapping principle, which we use extensively in Chapter 3, is based on it.

    We next show that, under special conditions, one can obtain very exact information about the nature of solutions from the successive approximations.

    Example 1.1.1. Let x′ = A(t)x where A is an n × n matrix of continuous functions on (–∞, ∞) with A(t + T) = A(t) for all t and some T > 0, and suppose that A(–t) = –A(t). Then all solutions are periodic of period T. We give the proof for t0 = 0 and x0 arbitrary. In (1.1.5) we note that for k = 0 we have

        x1(t) = x0 + ∫_0^t A(s)x0 ds.

    As the integrand is an odd T-periodic function, x1(t) is an even T-periodic function. By induction, if xk is an even T-periodic function, then in

        xk+1(t) = x0 + ∫_0^t A(s)xk(s) ds

    the integrand is odd and T-periodic, so xk+1(t) is even and T-periodic. As {xk(t)} converges uniformly to x(t, 0, x0), it must also be even and T-periodic.

    Remark 1.1.1. The successive approximations are constructive up to a point and, also, an error bound can be easily obtained. Such bounds are discussed in Chapter 3 in some detail.
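
    To illustrate how constructive the approximations are, here is a sketch (ours, with all numerical choices illustrative) of the scheme (1.1.5) applied to the system of Example 1.0.4, x′ = Ax with A = (0 1; –1 0), using the trapezoid rule for the integrals; the iterates converge to x(t, 0, x0) = (cos t, –sin t)T for x0 = (1, 0)T.

        import numpy as np

        A = np.array([[0.0, 1.0], [-1.0, 0.0]])
        x0 = np.array([1.0, 0.0])
        ts = np.linspace(0.0, 1.0, 1001)           # t0 = 0 on a finite interval
        h = ts[1] - ts[0]

        x = np.tile(x0, (len(ts), 1))              # x_0(t) = x0
        for k in range(20):                        # x_{k+1}(t) = x0 + int_0^t A x_k(s) ds
            Ax = x @ A.T                           # row i holds A @ x(t_i)
            steps = (Ax[1:] + Ax[:-1]) / 2 * h     # trapezoid rule on each subinterval
            x = x0 + np.concatenate([[np.zeros(2)], np.cumsum(steps, axis=0)])

        exact = np.stack([np.cos(ts), -np.sin(ts)], axis=1)
        print(np.max(np.abs(x - exact)))           # ~1e-7, the quadrature error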

    It readily follows from Theorem 1.1.2 that for A(t) continuous on (a, b) and for each (t0, x0) ∈ (a, b) × Rⁿ, the unique solution x(t, t0, x0) exists on all of (a, b).

    Theorem 1.1.3. Let A(t) be continuous on (a, b). The set of solutions of (1.1.1) on (a, b) forms an n-dimensional vector space over the reals:

    (a) x(t) = 0 is a solution.

    (b) If x(t) is a solution and x(t1) = 0 for some t1 ∈ (a, b), then x(t) = 0 on (a, b).

    (c) If x(t) is a solution and c is a scalar, then cx(t) is a solution.

    (d) If x1(t) and x2(t) are solutions, so is x1(t) + x2(t).

    (e) There are exactly n linearly independent solutions.

    Proof. Parts (a)–(d) are clear. We must show that the dimension is n. First, we note that for a given t0 ∈ (a, b) the solutions x1(t), ..., xn(t) in which xi(t0) = (0, ..., 0, 1, 0, ..., 0)T = ei (with the 1 in the ith place) are linearly independent; for if not, then there are constants c1, ..., cn, not all zero, with c1x1(t) + ... + cnxn(t) = 0 on (a, b). But then

        0 = c1x1(t0) + ... + cnxn(t0) = c1e1 + ... + cnen = (c1, ..., cn)T,

    a contradiction. Now the dimension is at most n, for if x(t) is any solution of (1.1.1), then we have x(t0) = (x10, x20, ..., xn0)T and, by uniqueness of solutions,

        x(t) = x10 x1(t) + ... + xn0 xn(t);

    thus, the x1(t), ..., xn(t) span the space. This completes the proof.

    Definition 1.1.2. Any set of n linearly independent solutions of (1.1.1) is called a fundamental system of solutions or a linear basis for (1.1.1).

    Definition 1.1.3. Given t0 ∈ (a, b), the n × n matrix Z(t, t0) whose columns are the xi(t) satisfying xi(t0) = ei (so that Z(t0, t0) = I) is called the principal matrix solution (PMS) of (1.1.1).

    Notice that the unique solution of (1.1.1) through (t0, x0) is x(t, t0, x0) = Z(t, t0)x0. Also, if x1(t), ..., xn(t) is a linear basis for (1.1.1), then the matrix H(t) whose columns are x1(t), ..., xn(t), respectively, satisfies det H(t) ≠ 0 (cf. Theorem 1.1.5); hence, Z(t, t0) = H(t)H^{–1}(t0). In that connection, note that for any n × n matrix C, H(t)C has columns which are solutions of (1.1.1); however, CH(t) normally does not share that property.
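
    Numerically, Z(t, t0) may be built column by column, and the identity Z(t, t0) = H(t)H^{–1}(t0) checked directly. The sketch below is ours and assumes scipy; the coefficient matrix A(t) and the initial matrix H(t0) are arbitrary illustrative choices.

        import numpy as np
        from scipy.integrate import solve_ivp

        A = lambda t: np.array([[0.0, 1.0], [-1.0 - 0.1 * np.sin(t), 0.0]])
        f = lambda t, x: A(t) @ x
        t0, t1 = 0.0, 2.0

        def flow(x0):                              # x(t1, t0, x0)
            return solve_ivp(f, (t0, t1), x0, rtol=1e-11, atol=1e-12).y[:, -1]

        Z = np.column_stack([flow(e) for e in np.eye(2)])   # columns solve x(t0) = e_i

        H0 = np.array([[1.0, 2.0], [3.0, 4.0]])             # any nonsingular H(t0)
        H1 = np.column_stack([flow(c) for c in H0.T])       # H(t1), columns are solutions
        print(np.max(np.abs(Z - H1 @ np.linalg.inv(H0))))   # ~0: Z(t,t0) = H(t)H^{-1}(t0)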

    Theorem 1.1.4. Let A(t) be continuous on [t0, ∞). Then the zero solution of (1.1.1) is Liapunov stable if and only if Z(t, t0) is bounded.

    Proof. It is understood that Z is bounded if each of its elements is bounded. We first suppose that Z(t, t0) is bounded, say |Z(t, t0)| ≤ M for t0 ≤ t < ∞. Let t1 ≥ t0 and ε > 0 be given. We must find δ > 0 such that |x0| < δ and t ≥ t1 imply |x(t, t1, x0)| < ε. Now

        |x(t, t1, x0)| = |Z(t, t1)x0| = |Z(t, t0)Z^{–1}(t1, t0)x0| ≤ Mm|x0|,

    where |Z^{–1}(t1, t0)| = m. Thus we take δ = ε/Mm.

    Exercise 1.1.2. Complete the proof of Theorem 1.1.4. That is, assume x = 0 is Liapunov stable and show that Z(t, t0) is bounded.

    Exercise 1.1.3. Show that under the conditions of Example 1.1.1 we have Z(t, t0) bounded and hence x = 0 is Liapunov stable. Furthermore, show that δ may be chosen independent of t1 in this case.

    Recall that if A is an n × n matrix, then tr A = a11 + a22 + ... + ann denotes the trace of A.

    The next result is known as Jacobi’s identity or Abel’s lemma.

    Theorem 1.1.5 (Jacobi–Abel). For t0 ∈ (a, b) and A(t) continuous we have

        det Z(t, t0) = exp( ∫_{t0}^t tr A(s) ds ).

    Proof. Let Z(t, t0) = (ψij) and let M(t) = det Z(t, t0). The derivative of an n × n determinant M(t) is the sum of n determinants, say M′(t) = M1(t) + ... + Mn(t), in which Mi(t) is obtained from M(t) by differentiating the ith row and leaving the other elements unchanged. Consider

        M1(t) = det( ψ11′  ψ12′  ...  ψ1n′
                     ψ21   ψ22   ...  ψ2n
                     ...
                     ψn1   ψn2   ...  ψnn )

    and note that the ψ1i, for i = 1, ..., n, satisfy the first equation in the vector system (1.1.1), namely ψ1i′ = a11ψ1i + a12ψ2i + ... + a1nψni, so M1(t) can be written as

        M1(t) = det( Σ_j a1jψj1  Σ_j a1jψj2  ...  Σ_j a1jψjn
                     ψ21         ψ22         ...  ψ2n
                     ...
                     ψn1         ψn2         ...  ψnn ).

    Now multiply the second, third, ... row by a12, a13, ..., respectively, and subtract from the first row (which leaves the determinant unchanged) to get

        M1(t) = a11(t)M(t).

    Perform similar operations on M2, ..., Mn to get Mi(t) = aii(t)M(t), so that

        M′(t) = (a11 + ... + ann)M(t) = (tr A(t))M(t).

    As M′ = (tr A)M is a first-order linear equation, it has a unique solution for each initial condition. As M(t0) = det Z(t0, t0) = 1, the displayed solution M(t) = exp( ∫_{t0}^t tr A(s) ds ) is the desired one. This completes the proof.
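
    The identity is easy to test numerically. In the sketch below (ours; the matrix A(t) is an arbitrary illustrative choice, and scipy is assumed), the columns of Z(t1, t0) are computed by integrating (1.1.1) from e1 and e2, and det Z is compared with exp(∫_{t0}^{t1} tr A(s) ds).

        import numpy as np
        from scipy.integrate import quad, solve_ivp

        A = lambda t: np.array([[np.sin(t), 1.0], [-1.0, 0.5 * np.cos(t)]])
        f = lambda t, x: A(t) @ x
        t0, t1 = 0.0, 3.0

        cols = [solve_ivp(f, (t0, t1), e, rtol=1e-11, atol=1e-12).y[:, -1]
                for e in np.eye(2)]
        detZ = np.linalg.det(np.column_stack(cols))          # det Z(t1, t0)

        trA, _ = quad(lambda s: np.trace(A(s)), t0, t1)      # integral of tr A
        print(detZ, np.exp(trA))                             # the two numbers agree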
