Applied Iterative Methods
Ebook, 616 pages, 6 hours


About this ebook

This graduate-level text examines the practical use of iterative methods in solving large, sparse systems of linear algebraic equations and in resolving multidimensional boundary-value problems. Assuming minimal mathematical background, it profiles the relative merits of several general iterative procedures. Topics include polynomial acceleration of basic iterative methods, Chebyshev and conjugate gradient acceleration procedures applicable to partitioning the linear system into a “red/black” block form, adaptive computational algorithms for the successive overrelaxation (SOR) method, and computational aspects in the use of iterative algorithms for solving multidimensional problems. 1981 edition. 48 figures. 35 tables.
Language: English
Release date: Apr 27, 2012
ISBN: 9780486153308

    Book preview

    Applied Iterative Methods - Louis A. Hageman


    CHAPTER 1

    Background on Linear Algebra and Related Topics

    1.1 INTRODUCTION

    In this chapter we give some background material from linear algebra which will be helpful for our discussion of iterative methods. In Sections 1.2–1.5, we present a brief summary of basic matrix properties and principles which will be used in subsequent chapters. No proofs are given. It is assumed that the reader is already familiar with the general theory of matrices such as presented, for instance, in Noble and Daniel [1977] or in Faddeev and Faddeeva [1963, Chap. 1]. In Sections 1.6 and 1.7, we discuss the matrix problem which is obtained from a simple discretization of the generalized Dirichlet problem. The purpose of this example is to illustrate some of the matrix concepts presented in this chapter and to illustrate the formulations of matrix problems arising from the discretization of boundary-value problems.

    1.2 VECTORS AND MATRICES

    We let EN denote the set of all N × 1 column matrices, or vectors, whose components may be real or complex. A typical element v of EN is given by v = (v1, v2, …, vN)T.

    To indicate that vi, i = 1, 2, …, N, are the components of v, we write v = (vi). A collection of vectors v(1), v(2), …, v(s) is said to be linearly dependent if there exist real or complex numbers c1, c2, …, cs, not all zero, such that c1v(1) + c2v(2) + ⋯ + csv(s) = 0.

    If this equality holds only when all the constants c1, …, cs are zero, then the vectors v(1), …, v(s) are said to be linearly independent. A basis for EN is a set of N linearly independent vectors of EN. Given such a basis, say, v(1), v(2), …, v(N), then any vector w in EN can be expressed uniquely as a linear combination of basis vectors; i.e., there exists a unique set of numbers c1, c2, …, cN such that w = c1v(1) + c2v(2) + ⋯ + cNv(N).

    The transpose of the vector v is denoted by vT and the conjugate transpose by vH. Given two vectors w and v of EN, the inner product (w, v) of the vector w with v is defined by (w, v) = vHw = w1v1* + w2v2* + ⋯ + wNvN*.

    Similarly, EN,N denotes the set of all N × N square matrices whose elements may be real or complex. A typical element of EN,N is given by

        A = [ a1,1  a1,2  ⋯  a1,N
              a2,1  a2,2  ⋯  a2,N
               ⋮     ⋮          ⋮
              aN,1  aN,2  ⋯  aN,N ]

    or, equivalently, in abbreviated form by A = (ai,j) for 1 ≤ i, j ≤ N. We denote the transpose of the matrix A by AT and the conjugate transpose of A by AH. If the elements of A are real, then AH = AT. Normally, we shall deal only with real matrices. The matrix A is symmetric if A = AT.

    The special N × N matrix A = (ai,j), where ai,i = 1 and ai,j = 0 if i ≠ j, is called the identity matrix and is denoted by I. A matrix A in EN,N is nonsingular if there exists a matrix H such that AH = HA = I. If such an H exists, it is unique. The matrix H is called the inverse of A and is denoted by A–1.

    A real matrix A in EN,N is symmetric and positive definite (SPD) if A is symmetric and if (v, Av) > 0 for any nonzero vector v. If A is SPD, then A is nonsingular. The matrix LLT is SPD for any real nonsingular matrix L. Also, if A is SPD, there exists a unique SPD matrix J such that J² = A. The matrix J is called the square root of the SPD matrix A and is denoted by A¹/².
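    As a concrete check of these properties, the following minimal NumPy sketch (the matrix L below is a hypothetical example, not one from the text) verifies that LLT is SPD and computes the square root A¹/² by an eigendecomposition:

```python
import numpy as np

# Hypothetical nonsingular L; A = L L^T is then SPD.
L = np.array([[2.0, 0.0],
              [1.0, 3.0]])
A = L @ L.T

assert np.allclose(A, A.T)                 # symmetric
assert np.all(np.linalg.eigvalsh(A) > 0)   # positive definite

# SPD square root: A = Q diag(lam) Q^T gives A^(1/2) = Q diag(sqrt(lam)) Q^T.
lam, Q = np.linalg.eigh(A)
A_half = Q @ np.diag(np.sqrt(lam)) @ Q.T
assert np.allclose(A_half @ A_half, A)     # J^2 = A
```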

    1.3 EIGENVALUES AND EIGENVECTORS

    An eigenvalue of the N × N matrix A is a real or complex number λ which, for some nonzero vector y, satisfies the matrix equation Ay = λy. (1-3.1)

    Any nonzero vector y which satisfies (1-3.1) is called an eigenvector of the matrix A corresponding to the eigenvalue λ.

    In order for (1-3.1) to have a nontrivial solution vector y, the determinant of A − λI (denoted by det(A − λI)) must be zero. Hence, any eigenvalue λ must satisfy det(A − λI) = 0. (1-3.2)

    Equation (1-3.2), which is called the characteristic equation of A, is a polynomial of degree N in λ. The eigenvalues of A are the N zeros of the polynomial (1-3.2).
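    A short numerical illustration (assuming NumPy; the matrix is a made-up example) of the agreement between the zeros of the characteristic polynomial and the computed eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial det(lambda*I - A) ...
coeffs = np.poly(A)
# ... whose roots are the eigenvalues of A.
print(np.sort(np.roots(coeffs).real))        # [1. 3.]
print(np.sort(np.linalg.eigvals(A).real))    # [1. 3.]
```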

    A matrix A in EN,N has precisely N eigenvalues λ1, λ2, …, λN, some of which may be complex. The existence of at least one eigenvector corresponding to each eigenvalue λi is assured since (1-3.1) with λ = λi has a nontrivial solution. Eigenvectors corresponding to unequal eigenvalues are linearly independent. Thus, when all the eigenvalues of A are distinct, the set of eigenvectors for A includes a basis for the vector space EN. However, this is not always the case when some eigenvalues of A are repeated. In this book, we shall be concerned, for the most part, with those matrices whose eigenvalues are real and whose set of eigenvectors includes a basis for EN. Such eigenproperties are satisfied, for example, by the eigenvalues and eigenvectors of symmetric matrices.

    Before discussing symmetric matrices and related matrices, we introduce some notations that will be used repeatedly in this and subsequent chapters.

    The spectral radius S(A) of the N × N matrix A is defined as the maximum of the moduli of the eigenvalues of A; i.e., if σ(A) is the set of eigenvalues of A, then S(A) = max{|λ| : λ in σ(A)}. (1-3.3)

    If the eigenvalues of A are real, we let m(A) and M(A) denote, respectively, the algebraically smallest and largest eigenvalues of A, i.e., m(A) = min λi and M(A) = max λi, where the extrema are taken over 1 ≤ i ≤ N. (1-3.4)
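    These quantities are straightforward to compute numerically; a sketch assuming NumPy (the test matrix is hypothetical):

```python
import numpy as np

def spectral_radius(A):
    """S(A): maximum modulus of the eigenvalues of A."""
    return np.max(np.abs(np.linalg.eigvals(A)))

def m_and_M(A):
    """m(A) and M(A) for a matrix with real eigenvalues (here symmetric)."""
    lam = np.linalg.eigvalsh(A)
    return lam[0], lam[-1]

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # eigenvalues 1 and 3
print(spectral_radius(A))            # 3.0
print(m_and_M(A))                    # (1.0, 3.0)
```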

    Eigenproperties of Symmetric and Related Matrices

    Important eigenproperties of real symmetric matrices are summarized in the following theorem.

    Theorem 1-3.1. If the N × N matrix A is real and symmetric, then

    (1) the eigenvalues λi, i = 1, …, N, of A are real, and

    (2) there exists a set of N eigenvectors y(1), …, y(N) for A such that (a) Ay(i) = λiy(i) for i = 1, …, N; (b) the set {y(1), …, y(N)} is a basis for EN; and (c) the eigenvectors are pairwise orthonormal, i.e., (y(i), y(j)) = 0 for i ≠ j and (y(i), y(i)) = 1.

    When A is SPD, in addition to the eigenproperties given in Theorem 1-3.1, the eigenvalues of A are also positive. Since the matrix A is nonsingular if and only if no eigenvalue of A equals zero, it follows that a SPD matrix is also nonsingular.

    Two matrices A and B are similar if B = WAW–1 for some nonsingular matrix W. Similar matrices have identical eigenvalues.

    Except for (2c), the conclusions of Theorem 1-3.1 also are valid for any real matrix A which is similar to a real symmetric matrix C.

    For any real matrix A in EN,N and for any nonzero vector v in EN (real or complex), the Rayleigh quotient of v with respect to A is defined as the quotient of inner products (v, Av)/(v, v). If A is symmetric, then for any nonzero vector v in EN, m(A) ≤ (v, Av)/(v, v) ≤ M(A). (1-3.5)

    Here m(A) and M(A) are defined by (1-3.4). Moreover, there exist nonzero vectors w and z such that (w, Aw)/(w, w) = m(A) (1-3.6) and (z, Az)/(z, z) = M(A). (1-3.7)
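    The bounds, and their attainment at the extreme eigenvectors w and z, can be checked numerically; a sketch assuming NumPy, with a hypothetical symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # symmetric; m(A) = 1, M(A) = 3
lam, Y = np.linalg.eigh(A)           # eigenvalues ascending, orthonormal Y

def rayleigh(v):
    return (v @ A @ v) / (v @ v)

rng = np.random.default_rng(0)
for _ in range(1000):
    v = rng.standard_normal(2)
    assert lam[0] - 1e-12 <= rayleigh(v) <= lam[-1] + 1e-12

# Equality holds at the eigenvectors for m(A) and M(A).
print(rayleigh(Y[:, 0]), rayleigh(Y[:, -1]))   # 1.0 3.0
```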

    Eigenproperties of Real Nonsymmetric Matrices

    The material in this subsection is used primarily in Chapter 9. Since the discussion is somewhat involved, many readers may wish to skip it on a first reading.

    A real nonsymmetric matrix A may have complex eigenvalues. Since the coefficients of the characteristic polynomial (1-3.2) are real for this case, any complex eigenvalues of A must occur in complex conjugate pairs; i.e., if λi is a complex eigenvalue of the real matrix A, then λk = λi* is also an eigenvalue of A. Here, λi* denotes the complex conjugate of λi. Moreover, if y(i) is an eigenvector corresponding to λi, then y(k) = y*(i) is an eigenvector of A corresponding to λk = λi*.

    For an N × N nonsymmetric matrix A, it is not always possible to find a basis for EN from the set of eigenvectors of A. However, it is always possible to form a basis from the independent eigenvectors of A supplemented by other vectors (called principal vectors) which are associated with the eigenvalues and eigenvectors of A. Such a basis can best be described in terms of the Jordan canonical form associated with A. The following is a restatement of the results given in Noble and Daniel [1977].

    A square matrix of order ≥ 1 that has the form

        J = [ λ  1
                 λ  1
                    ⋱  ⋱
                       λ  1
                          λ ]   (1-3.8)

    is called a Jordan block. Note that the elements of J are zero except for those on the principal diagonal, which are all equal to λ, and those on the first superdiagonal, which are all equal to unity. Any matrix A can be reduced to a direct sum of Jordan blocks by a similarity transformation. More precisely, we have

    Theorem 1-3.2. For any N × N matrix A, there exists a nonsingular matrix V such that V–1AV = J̄ = diag(J1, J2, …, Jk), (1-3.9)

    where each Ji, 1 ≤ i ≤ k, is a Jordan block whose constant diagonal element is an eigenvalue of A. The number of linearly independent eigenvectors of A is equal to the number k of Jordan blocks in (1-3.9).

    The matrix J̄ in (1-3.9) is called the Jordan canonical form of A and is unique up to a permutation of the diagonal submatrices. Let the column vectors of V be denoted by v(1), v(2), …, v(N); i.e., V = [v(1), v(2), …, v(N)]. Since V is nonsingular, the set {v(1), …, v(N)} is a basis for EN. For use in later chapters, we now examine the behavior of the matrix-vector products A^l v(i), i = 1, …, N.

    From the relation AV = VJ̄, it follows that the vectors v(i) separate, for each Jordan block J, into equations of the form Av(i) = λiv(i) + viv(i−1), (1-3.10)

    where λi is an eigenvalue of A and vi is either zero or unity. If vi = 0, then v(i) is an eigenvector of A. When v(i) is an eigenvector of A, we have by (1-3.10) that A^l v(i) = λi^l v(i) for l = 1, 2, … . (1-3.11)

    If every Jordan block in (1-3.9) is 1 × 1, then vi = 0 for all i.† For this case, each column of V is an eigenvector of A and satisfies (1-3.11).

    The relationship (1-3.11), however, is not valid for all v(i) when some of the Jordan blocks are of order greater than unity. To illustrate this case, consider the example J̄ = diag(J1, J2), where J1 = [λ1] is of order 1 and J2 = [λ2 1; 0 λ2] is of order 2. (1-3.12)

    From (1-3.10), the column vectors of V here separate into equations of the form Av(1) = λ1v(1), Av(2) = λ2v(2), Av(3) = λ2v(3) + v(2). (1-3.13)

    The vectors v(1) and v(2) are eigenvectors of A. The other vector v(3) is known as a principal vector (or generalized eigenvector) of grade 2, corresponding to the eigenvalue λ2. From (1-3.13), the eigenvectors v(1) and v(2) satisfy A^l v(i) = λi^l v(i), i = 1, 2, (1-3.14)

    while the principal vector of grade 2 satisfies A^l v(3) = λ2^l v(3) + lλ2^(l−1) v(2). (1-3.15)

    Note that if |λ1| < 1 and |λ2| < 1, the sequences {A^l v(i)} converge to the null vector as l → ∞. However, the sequence involving the principal vector of grade 2 converges at a slower rate. Indeed, from (1-3.14), {A^l v(2)} converges to the null vector at a rate governed by |λ2|^l, while from (1-3.15), {A^l v(3)} converges at the slower rate governed by l|λ2|^(l−1).

    In general, the f column vectors of V associated with a Jordan block of order f consist of one eigenvector and (f – 1) principal vectors of grade 2 through f. However, we shall not discuss the more general case since for reasons of simplicity we consider in later chapters only Jordan blocks of order 2 or less. We remark that eigenvectors are often defined as principal vectors of grade 1. A matrix whose set of eigenvectors does not include a basis is said to have an eigenvector deficiency.
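    The slower decay associated with a principal vector of grade 2 is easy to observe numerically. In the sketch below (assuming NumPy), the matrix is taken to be a single 2 × 2 Jordan block, so V = I and the two basis vectors are the eigenvector e and the principal vector p of grade 2:

```python
import numpy as np

lam2 = 0.9
J = np.array([[lam2, 1.0],
              [0.0, lam2]])          # Jordan block of order 2

e = np.array([1.0, 0.0])             # eigenvector
p = np.array([0.0, 1.0])             # principal vector of grade 2

for l in (10, 50, 100):
    Jl = np.linalg.matrix_power(J, l)
    # ||J^l e|| decays like |lam2|^l; ||J^l p|| like l*|lam2|^(l-1).
    print(l,
          np.linalg.norm(Jl @ e), lam2**l,
          np.linalg.norm(Jl @ p), l * lam2**(l - 1))
```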

    Matrix Polynomials

    If A is an N × N matrix, an expression of the form Qn(A) = α0I + α1A + α2A² + ⋯ + αnA^n,

    where α0, α1, …, αn are complex numbers, is called a matrix polynomial. A matrix polynomial Qn(A) can be obtained by substitution of the matrix A for the variable x in the associated algebraic polynomial Qn(x) = α0 + α1x + α2x² + ⋯ + αnx^n.

    The eigenvalues of the matrix polynomial Qn(A) can be obtained by substitution of the eigenvalues of A for the variable x in Qn(x); i.e., if {λ1, …, λN} is the set of eigenvalues for the matrix A in EN,N, then {Qn(λ1), …, Qn(λN)} is the set of eigenvalues for the N × N matrix Qn(A).
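    A numerical check of this eigenvalue mapping, assuming NumPy (the polynomial Q2(x) = 1 + x² and the matrix are illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # eigenvalues 1 and 3
alpha = [1.0, 0.0, 1.0]              # Q_2(x) = 1 + x^2

def Q(x, alpha):
    """Evaluate the polynomial for a scalar x or a square matrix x."""
    if np.isscalar(x):
        return sum(a * x**i for i, a in enumerate(alpha))
    result = alpha[0] * np.eye(x.shape[0])
    for i, a in enumerate(alpha[1:], start=1):
        result = result + a * np.linalg.matrix_power(x, i)
    return result

# Eigenvalues of Q_2(A) equal Q_2(lambda_i) for each eigenvalue of A.
print(np.sort(np.linalg.eigvals(Q(A, alpha)).real))    # [ 2. 10.]
print(sorted(Q(lam, alpha) for lam in (1.0, 3.0)))     # [2.0, 10.0]
```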

    1.4 VECTOR AND MATRIX NORMS

    We shall consider several different vector and matrix norms. The vector norms that we consider are the following: ‖v‖2 = (v, v)^(1/2), ‖v‖∞ = max over 1 ≤ i ≤ N of |vi|, and ‖v‖L = ‖Lv‖2.

    Here L is any nonsingular matrix. We also consider the corresponding matrix norms ‖A‖α = max over v ≠ 0 of ‖Av‖α/‖v‖α, for α = 2, ∞, and L.

    For α = 2, ∞, and L, it can be shown that ‖Av‖α ≤ ‖A‖α‖v‖α and ‖AB‖α ≤ ‖A‖α‖B‖α.

    An important property of matrix norms is that S(A) ≤ ‖A‖β for β = 2, ∞, and L. If A is symmetric, then ‖A‖2 = S(A), while if LAL–1 is symmetric, then ‖A‖L = S(A).

    The sequence of vectors v(0), v(1), … converges to the vector v if and only if lim n→∞ ‖v(n) − v‖α = 0 for any vector norm α. Similarly, the sequence of matrices A(0), A(1), … converges to A if and only if lim n→∞ ‖A(n) − A‖β = 0 for any matrix norm β. It can be shown that lim n→∞ A^n = 0 and lim n→∞ A^n v = 0 for all vectors v if and only if S(A) < 1.

    For any nonsingular matrices A and L, we define the L-condition number κL(A) of the matrix A by κL(A) = ‖A‖L‖A–1‖L. The spectral condition number κ(A) is obtained for the special case L = I, i.e., κ(A) = ‖A‖2‖A–1‖2. If A is SPD, then by (1-4.10), the spectral condition number of A is given by κ(A) = M(A)/m(A).
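    For an SPD matrix this reduction of κ(A) to an eigenvalue ratio is easy to confirm; a NumPy sketch with a hypothetical SPD matrix:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # SPD; eigenvalues 1 and 3

# kappa(A) = ||A||_2 * ||A^{-1}||_2 ...
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# ... equals M(A)/m(A) for SPD A.
lam = np.linalg.eigvalsh(A)
print(kappa, lam[-1] / lam[0])       # both 3.0
```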

    1.5 PARTITIONED MATRICES

    In writing the matrix equation Au = b, an ordering of the unknowns (and equations) is implied. For the iterative methods that we consider, this implied ordering usually determines the sequence in which the unknowns are improved in the iterative process. For block iterative methods, blocks or groups of unknowns are improved simultaneously. The blocks of unknowns to be improved simultaneously are determined by an imposed partitioning of the coefficient matrix A. Such a partitioning is defined by the integers n1, n2, …, nq, where ni ≥ 1 for all i and where n1 + n2 + ⋯ + nq = N. (1-5.1)

    Given a set of integers n1, …, nq which satisfies (1-5.1), the q × q partitioned form of the N × N matrix A is then given by

        A = [ A1,1  A1,2  ⋯  A1,q
              A2,1  A2,2  ⋯  A2,q
               ⋮     ⋮          ⋮
              Aq,1  Aq,2  ⋯  Aq,q ],   (1-5.2)

    where Ai,j is an ni × nj submatrix.

    If q = N, i.e., if ni = 1 for all i, then we say (1-5.2) is a point partitioning of A.

    When q = 2, we obtain the special case

        A = [ DR    A1,2
              A2,1  DB ],   (1-5.3)

    which is called a red/black partitioning. A 2 × 2 partitioning of the coefficient matrix A is required for some of the iterative methods discussed later in Chapters 8 and 9. Examples of red/black partitionings are given in Sections 1.7 and 9.2, and in Chapters 10 and 11.

    If Ai,j = 0 whenever i ≠ j, then A is called a block diagonal matrix; i.e., A has the form

        A = [ A1,1
                    A2,2
                          ⋱
                             Aq,q ],   (1-5.4)

    where the Ai,i submatrices are square. In abbreviated form, (1-5.4) is written as A = diag(A1,1, …, Aq,q).

    We always assume that the unknown vector u and the known vector b in the matrix equation Au = b are partitioned in a form consistent with A. Thus if A is given by (1-5.2), then u is assumed to be partitioned as u = (U1, U2, …, Uq)T, (1-5.5)

    where Ui is an ni × 1 matrix (column vector). Conversely, a partitioning for A is implied by a given partitioning (1-5.5) for u. Thus a partitioning for the matrix problem Au = b can be given by specifying a partitioning either for A or for u.

    As we shall see in Chapter 2, in order to carry out one iteration of a given block iterative process corresponding to the partitioning (1-5.2), it will be necessary to solve subsystems of the form Ai,i Ui = yi, where yi is some known vector. In general, the larger the sizes of the diagonal blocks Ai,i, the more difficult it will be to solve these subsystems. On the other hand, the larger the blocks, the faster the convergence of a given method. (This is usually, though not always, true.) Any convergence improvement obtained through the use of larger blocks needs to be balanced against the additional work per iteration. In this book, we shall be concerned primarily with those partitionings that result in diagonal submatrices Ai,i whose sizes are considerably larger than unity and whose sparseness structures are such that the subsystems Ai,i Ui = yi are considerably easier to solve than the complete system Au = b. In subsequent chapters, submatrices Ai,i that satisfy these conditions will be called easily invertible. Examples of such partitionings for some practical problems are given in Chapters 10 and 11.
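    The computational point can be made concrete with a small sketch (assuming NumPy; the blocks are hypothetical). For a block diagonal coefficient matrix, the subsystems Ai,iUi = yi are solved independently, which is exactly what makes such blocks easily invertible in practice:

```python
import numpy as np

# Hypothetical diagonal blocks and a conformally partitioned right-hand side.
A11 = np.array([[4.0, -1.0],
                [-1.0, 4.0]])
A22 = np.array([[4.0, -1.0, 0.0],
                [-1.0, 4.0, -1.0],
                [0.0, -1.0, 4.0]])
y1 = np.array([1.0, 2.0])
y2 = np.array([3.0, 4.0, 5.0])

# Solve each subsystem A_ii U_i = y_i separately.
U1 = np.linalg.solve(A11, y1)
U2 = np.linalg.solve(A22, y2)

# This agrees with one solve against the full block diagonal matrix.
A = np.block([[A11, np.zeros((2, 3))],
              [np.zeros((3, 2)), A22]])
assert np.allclose(np.concatenate([U1, U2]),
                   np.linalg.solve(A, np.concatenate([y1, y2])))
```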

    1.6 THE GENERALIZED DIRICHLET PROBLEM

    In this and the next section, we discuss the matrix problem that is obtained from a discretization of the generalized Dirichlet problem. The purpose of this simple example is to illustrate some of the matrix concepts presented in this chapter and to familiarize the reader with the formulation of matrix equations arising from the discretization of boundary value problems. The examples given of red/black partitionings are referred to in later chapters and will be more meaningful then. Thus on the first reading of this chapter, these examples may be skimmed.

    Consider the problem of numerically solving the generalized Dirichlet problem

        (∂/∂x)(B ∂u/∂x) + (∂/∂y)(C ∂u/∂y) + Fu = G   (1-6.1)

    in a square region R with boundary S. We assume that B and C are positive functions that are twice continuously differentiable with respect to x and y in R. Moreover, it is assumed that F ≤ 0 and that F and G are continuous in R. Given a function g(x, y) that is defined and continuous on S, we seek the unique function u(x, y) which satisfies (1-6.1) in R and such that u(x, y) = g(x, y) (1-6.2) for (x, y) on S.

    To solve this problem numerically, we impose a uniform square mesh of size h = L/(M + 1) on R (see Fig. 1-6.1). Here L is the side length of R, and we exclude the trivial case M = 0. With i and j integers, the set of mesh points Ωh is defined by (see Fig. 1-6.1) Ωh = {(xi, yj): xi = ih, yj = jh for 0 ≤ i, j ≤ (M + 1)}. Moreover, we let Rh = R ∩ Ωh be the set of interior mesh points and Sh = S ∩ Ωh be the set of boundary points.

    Fig. 1-6.1. Uniform mesh subdivision for a square region.

    To discretize the problem defined by (1-6.1)–(1-6.2), we replace (see, e.g., Varga [1962]) the differential equation (1-6.1) at a mesh point (xi, yj) in Rh by the finite difference equation

    Here ui,j denotes the discrete approximation to the solution u(xi, yj). An equation of the form (1-6.3) holds for each of the M² mesh points in Rh.

    Equation (1-6.3), which we refer to as the five-point formula, can also be expressed in the form

    where

    and

    For (xi, yj) on Sh, we require that the ui,j satisfy ui,j = g(xi, yj).

    If we now multiply both sides of (1-6.4) by – h² and transfer to the right-hand side those terms involving the known boundary values ui,j on Sh, we obtain a linear system of the form

    Here A is an N × N matrix, b a known N × 1 vector, u the N × 1 vector of unknowns, and N = M² the number of points of Rh.

    When a system of linear equations such as (1-6.4) is expressed in matrix form Au = b, it is implied that a correspondence between equations and unknowns exists and that an ordering of the unknowns has been chosen. In writing (1-6.6), if the kth unknown in the vector u is ui,j, we assume that the kth row of A is obtained from the difference equation (1-6.4) corresponding to the mesh point (xi, yj). Independent of the ordering of the unknowns ui,j for u, this correspondence between equations and unknowns determines the diagonal elements of A.† To illustrate this, suppose the first element of u is the unknown u1,1. With A = (al,m), this correspondence implies that a1,1 = P1,1, where P1,1 is defined in (1-6.4). However, if u1,1 were the second element of u, then this correspondence would imply that a2,2 = P1,1. For both cases, P1,1 is a diagonal element of A.

    Moreover, with this correspondence between equations and unknowns, it is easy to see that A is symmetric. This follows since the coefficient Ei,j of ui+1,j in the equation corresponding to the unknown ui,j is equal to the coefficient Wi+1,j of ui,j in the equation corresponding to ui+1,j. Similar arguments are valid for any other pair of adjacent points (xi, yj). It can be shown without much difficulty that A also is positive definite and, hence, nonsingular (see, e.g., Varga [1962]).

    The above properties of A are independent of the partitioning and the ordering of unknowns for u. However, the behavior of the iterative methods discussed in Chapter 2 and in later chapters often depends on the ordering of unknowns and on the partitioning imposed. We now define several orderings of u for both point and block partitionings, which we use for illustrative purposes later.

    We consider two orderings for the point partitioning. The first is the natural ordering defined by u = (u1,1, u2,1, …, uM,1, u1,2, u2,2, …, uM,2, …, u1,M, …, uM,M)T.

    Relative to this ordering for u, the elements of A can now be determined uniquely. An example is given in the next section (see (1-7.3)).

    Another ordering for the point partitioning is the so-called point red/black ordering, which is defined as follows: Let the red unknowns be the set of all ui,j such that (i + j) is even and let the black unknowns be the set of all ui,j such that (i + j) is odd. The red/black ordering is then any ordering such that every black unknown follows all the red unknowns. In the next section, we show that this ordering of unknowns leads to a red/black partitioning of A (see (1-5.3)) such that the submatrices A1,1 and A2,2 are diagonal.
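    A sketch of how such an ordering can be generated (plain Python; the M = 3 grid matches the model problem of Section 1.7):

```python
# Point red/black ordering for the M x M interior mesh points:
# red points have (i + j) even, black points have (i + j) odd.
M = 3
red = [(i, j) for j in range(1, M + 1) for i in range(1, M + 1)
       if (i + j) % 2 == 0]
black = [(i, j) for j in range(1, M + 1) for i in range(1, M + 1)
         if (i + j) % 2 == 1]

# Every black unknown follows all the red unknowns.
ordering = red + black
print(len(red), len(black))   # 5 4, as in Fig. 1-7.1b
print(ordering)
```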

    We now consider natural and red/black orderings when the unknowns are partitioned by lines. Let Uj denote the vector of unknowns on the line y = yj (see Fig. 1-6.1). The natural ordering for this line partitioning of the unknown vector u is defined by u = (U1, U2, …, UM)T.

    An example is given in the next section (see (1-7.7)).

    To define the red/black line ordering, we let Uj be a red line of unknowns if j is odd and let Uj be a black line of unknowns if j is even. The red/black line ordering is then any ordering such that every black line of unknowns follows all the red lines. In the next section, we shall use this red/black line ordering of the unknowns to obtain another red/black partitioning for A.

    1.7 THE MODEL PROBLEM

    For our later discussion and for illustrative purposes, it is convenient to present in some detail the following model elliptic differential equation. Consider the discrete approximation of the Poisson equation ∂²u/∂x² + ∂²u/∂y² = G(x, y) (1-7.1)

    with boundary conditions (1-6.2) on S, the boundary of the unit square R. With the mesh subdivision given by Fig. 1-6.1, the finite difference approximation (1-6.4) to Poisson's equation at a mesh point (xi, yj) in Rh may be written as 4ui,j − ui+1,j − ui−1,j − ui,j+1 − ui,j−1 = −h²Gi,j. (1-7.2)

    We now give several point and line partitionings for the corresponding coefficient matrix A when M + 1 = 4 (see Fig. 1-6.1). For this special case, there are nine unknowns and h = 1/4. In what follows, we indicate the ordering for the unknown vector u by first numbering the mesh points of the problem solution region. We then let the kth component uk of the vector u be the unknown corresponding to the mesh point marked k.

    Fig. 1-7.1. Mesh point ordering for point partitionings. (a) Natural ordering; (b) red/black ordering.

    Point Partitionings

    For point partitionings the natural ordering of unknowns for u is defined by the mesh point numbering given in Fig. 1-7.1a. With the unknown at mesh point k denoted by uk and with all boundary terms moved to the right-hand side, the system of difference equations for this case can be written as
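    The matrix display itself is omitted from this preview; the following NumPy sketch (a reconstruction under the stated assumptions, not the book's own listing) assembles the 9 × 9 coefficient matrix of (1-7.2) for M = 3 under the natural ordering of Fig. 1-7.1a:

```python
import numpy as np

M = 3                                # M x M interior grid, N = 9 unknowns
N = M * M
A = np.zeros((N, N))

def k(i, j):
    """Natural ordering: left to right within a mesh row, row by row."""
    return (j - 1) * M + (i - 1)

for j in range(1, M + 1):
    for i in range(1, M + 1):
        A[k(i, j), k(i, j)] = 4.0
        if i > 1: A[k(i, j), k(i - 1, j)] = -1.0
        if i < M: A[k(i, j), k(i + 1, j)] = -1.0
        if j > 1: A[k(i, j), k(i, j - 1)] = -1.0
        if j < M: A[k(i, j), k(i, j + 1)] = -1.0

# A is block tridiagonal, with tridiag(-1, 4, -1) diagonal blocks
# and -I off-diagonal blocks, matching the line partitioning (1-7.8).
print(A)
```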

    A red/black ordering for the point partitioning is defined by the mesh point numbering in Fig. 1-7.1b. For this ordering for the unknowns, the difference equations (1-7.2) can be expressed in the matrix form

    The red unknowns are u1, u2, …, u5 and the black unknowns are u6, …, u9. Note that if u is now partitioned by red unknowns and black unknowns, we obtain the red/black partitioning (1-5.3). Indeed, if we let uR = (u1, u2, …, u5)T and uB = (u6, u7, u8, u9)T, (1-7.5)

    then from (1-7.4) we obtain the red/black partitioned system (1-5.3), where DR = 4I5 and DB = 4I4 are diagonal matrices of orders 5 and 4, respectively. (1-7.6)

    Line Partitionings

    For line partitionings the natural ordering of unknowns for u can be given, for example, by the mesh point numbering given in Fig. 1-7.2a. With the unknowns on line k denoted by Uk and with all boundary terms moved to the right-hand side, the system of equations (1-7.2) can be written in the partitioned form

        [ A1,1  A1,2  0    ] [ U1 ]   [ F1 ]
        [ A2,1  A2,2  A2,3 ] [ U2 ] = [ F2 ],   (1-7.7)
        [ 0     A3,2  A3,3 ] [ U3 ]   [ F3 ]

    Fig. 1-7.2. Mesh point ordering for line partitionings. (a) Natural ordering; (b) red/black ordering.

    where the submatrix Ak,l gives the couplings of the unknowns on line k to those on line l. For the mesh point numbering of unknowns† within a line given in Fig. 1-7.2a, we have

        Ak,k = [  4  −1   0
                 −1   4  −1
                  0  −1   4 ]   for k = 1, 2, 3,   and   Ak,l = −I   for |k − l| = 1,   (1-7.8)

    where I is the 3 × 3 identity matrix.

    A red/black ordering for the line partitioning is defined by the mesh point numbering given in Fig. 1-7.2b. The partitioned matrix resulting from this ordering is

        [ A1,1  0     A1,3
          0     A2,2  A2,3
          A3,1  A3,2  A3,3 ],   (1-7.9)

    where each diagonal block Ak,k is the tridiagonal matrix of (1-7.8) and the nonzero off-diagonal blocks equal −I.

    The red lines of unknowns are U1 and U2, while U3 is the only black line. If we now partition u by red and black lines, i.e., let uR = (U1, U2)T and uB = U3, then from (1-7.9) we obtain the red/black partitioning (1-5.3), where DR = diag(A1,1, A2,2) and DB = A3,3.

    Note that here the matrices DR and DB are themselves block diagonal matrices. Contrast this with the form of DR and DB for the point red/black partitioning (1-7.6). However, in both the point and line cases, DR and DB are block diagonal matrices whose diagonal blocks are determined by the diagonal submatrices of the basic point or line partitioning imposed on u.

    Iterative methods utilizing the red/black partitioning (discussed in Chapters 8 and 9) require that subsystems of the form DRuR = FR and DBuB = FB be solved for every iteration. The work required to solve these subsystems is reduced significantly when DR and DB are block diagonal matrices which are easily invertible. For some physical problems, careful thought must be given in order to obtain such partitionings. This problem is discussed in more detail in Chapters 9–11.


    † This is the case, for example, if A is symmetric or if all eigenvalues of A are distinct.

    † This correspondence usually also ensures that the largest element in any row lies on the diagonal.

    † As we point out later, the ordering of unknowns within a line does not affect the iterative behavior of block methods but can affect the work required per iteration.

    CHAPTER 2

    Background on Basic Iterative Methods

    2.1 INTRODUCTION

    In this chapter we give some background material on a class of iterative methods for solving the linear system Au = b, (2-1.1)

    where A is a given real N × N nonsingular matrix and b is a given N × 1 real column matrix.

    All methods considered in this chapter are linear stationary methods of first degree. Such methods may be expressed in the form u(n+1) = Gu(n) + k, (2-1.2)

    where G is the real N × N iteration matrix for the method and k is an associated known vector. The method is of first degree since u(n+1) depends explicitly only on u(n) and not on u(n−1), …, u(0). The method is linear since neither G nor k depends on u(n), and it is stationary since neither G nor k depends on n. In this book we refer to any method of the form (2-1.2) as a basic iterative method.
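    A minimal sketch of such a method (assuming NumPy; the 2 × 2 system is a hypothetical example, and the particular G and k shown come from the Jacobi splitting discussed in Sections 2.2 and 2.3):

```python
import numpy as np

def stationary_iteration(G, k, u0, n_iters):
    """Linear stationary method of first degree: u^(n+1) = G u^(n) + k."""
    u = u0.copy()
    for _ in range(n_iters):
        u = G @ u + k
    return u

A = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
b = np.array([3.0, 3.0])

# Jacobi choice: Q = diag(A), G = I - Q^{-1} A, k = Q^{-1} b.
Q = np.diag(np.diag(A))
G = np.eye(2) - np.linalg.solve(Q, A)
kvec = np.linalg.solve(Q, b)

u = stationary_iteration(G, kvec, np.zeros(2), 50)
print(u, np.linalg.solve(A, b))      # both close to [1. 1.]
```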

    In Section 2.2, we briefly discuss general principles concerning convergence and rates of convergence of basic iterative methods. In Section 2.3, we describe those basic methods which we consider in later chapters. These methods are well known and include the RF (Richardson’s) method, the Jacobi method, the Gauss-Seidel method, the successive overrelaxation (SOR) method, and the symmetric successive overrelaxation (SSOR) method. We limit our attention to these basic methods because of the limitations of space and our own experience. However, in Section 2.5 we give a brief introduction to other solution methods which, although useful, will not be considered elsewhere in this book. As in Chapter 1, no proofs are given. It is assumed that the reader already has some familiarity with the use of basic iterative methods, such as that provided by Varga [1962] or Young [1971]. References will be cited only for those results that are not given in either of these basic texts.

    2.2 CONVERGENCE AND OTHER PROPERTIES

    In this section we discuss convergence and other properties of basic iterative methods that will be used in later portions of the book.

    We assume throughout that G = I − Q–1A and k = Q–1b (2-2.1)

    for some nonsingular matrix Q. Such a matrix Q is called a splitting matrix. The assumptions of (2-2.1), together with the fact that A is nonsingular, imply that the unique solution ū of the related system u = Gu + k (2-2.2) is also the unique solution to (2-1.1), i.e., ū = A–1b. (2-2.3)

    An iterative method of the form (2-1.2) whose related system (2-2.2) has the same solution as (2-1.1) is said to be completely consistent.

    If {u(n)} is the sequence of iterates determined by (2-1.2), then complete consistency implies that (a) if u(n) = ū for some n, then u(n+1) = ū, and (b) if the sequence {u(n)} converges to some vector u, then u = ū.

    We always assume that the basic iterative method (2-1.2) is completely consistent since this property seems essential for any reasonable method. Another property of basic iterative methods, which we do not always assume, is that of convergence. The method (2-1.2) is said to be convergent if for any u(0) the sequence u(1), u(2), … converges to the unique solution ū of (2-1.1). A necessary and sufficient condition for convergence is that S(G) < 1. (2-2.4)
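    This criterion is directly checkable; a NumPy sketch continuing the hypothetical Jacobi example above:

```python
import numpy as np

A = np.array([[4.0, -1.0],
              [-1.0, 4.0]])
Q = np.diag(np.diag(A))              # Jacobi splitting matrix
G = np.eye(2) - np.linalg.solve(Q, A)

# S(G) = 0.25 < 1, so the iterates converge for every starting guess u^(0).
S_G = np.max(np.abs(np.linalg.eigvals(G)))
print(S_G)
```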
