A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications
Ebook · 747 pages · 8 hours

About this ebook

A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications evaluates and compares advances in iterative techniques and discusses their numerous applications in applied mathematics, engineering, mathematical economics, mathematical biology, and other applied sciences. Iteration is a popular technique for generating approximate solutions of complex nonlinear equations, which makes it suitable for advanced problems across these fields. Iteration methods are also applied to optimization problems; in such cases, the iteration sequences converge to an optimal solution of the problem at hand.

  • Contains recent results on the convergence analysis of numerical algorithms in both finite-dimensional and infinite-dimensional spaces
  • Encompasses the novel tool of dynamic analysis for iterative methods, including new developments in Smale stability theory and polynomiography
  • Explores the computational uses of iterative methods across nonlinear analysis
  • Uniquely places discussion of derivative-free methods in context of other discoveries, aiding comparison and contrast between options
Language: English
Release date: February 13, 2018
ISBN: 9780128094938
Author

A. Alberto Magrenan

Professor Alberto Magreñán (Department of Mathematics, Universidad Internacional de La Rioja, Spain) has published 43 documents. He works on operator theory, computational mathematics, iterative methods, dynamical studies, and computation.


    Book preview

    A Contemporary Study of Iterative Methods - A. Alberto Magrenan

    Chapter 1

    The majorization method in the Kantorovich theory

    Abstract

    The goal of this chapter is to present some improvements in the convergence analysis of Newton's method and the modified Newton's method, obtained by introducing a center Lipschitz condition alongside the usual Lipschitz condition. Using both conditions, we obtain tighter majorizing sequences, which in turn yield weaker convergence criteria. Numerical examples and applications validating the theoretical results are also presented.

    Keywords

    Newton's method; Kantorovich; Majorizing sequences; Local/semilocal convergence

    1.1 Introduction

    Let X, Y be Banach spaces and let D be a closed and convex subset of X. In the present chapter, we are concerned with the problem of approximating a locally unique solution x* of

    F(x) = 0, (1.1.1)

    where F : D → Y is a Fréchet-differentiable operator.

    Many problems in the applied sciences can be brought into the form of equation (1.1.1) by means of mathematical modeling, as can be seen in [10], [13], and [18].

    Finding a solution of equation (1.1.1) in closed form is usually not easy, which is why iterative methods are used.

    The most well-known and studied one is Newton's method, which is defined as

    x_{n+1} = x_n − F′(x_n)^{−1} F(x_n), n = 0, 1, 2, …, (1.1.2)

    where x_0 ∈ D is an initial point and F′(x_n) denotes the Fréchet-derivative of F at the point x_n. Newton's method requires the inversion of the linear operator F′(x_n), as well as the evaluation of F at x_n, at each step. If the starting point x_0 is close enough to the solution, Newton's method converges quadratically [10], [31]. The inversion of F′(x_n) at each step may, however, be too expensive or unavailable. That is why the modified Newton's method

    x_{n+1} = x_n − F′(x_0)^{−1} F(x_n), n = 0, 1, 2, …, (1.1.3)

    can be used instead of Newton's method in this case. However, the convergence of this modification is only linear, as stated in [6], [9], [10], and [31].
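    The difference in convergence order can be illustrated on a simple scalar equation (a hypothetical example, not taken from the book): Newton's method roughly doubles the number of correct digits per step, while the modified method gains digits at a fixed linear rate.

```python
# Illustrative scalar example (not from the book): F(x) = x**3 - 2 = 0,
# whose root is 2**(1/3). Newton's method (1.1.2) updates with F'(x_n);
# the modified method (1.1.3) freezes the derivative at the initial point.

def F(x):
    return x ** 3 - 2

def dF(x):
    return 3 * x ** 2

def newton(x0, steps):
    # x_{n+1} = x_n - F'(x_n)^{-1} F(x_n): quadratic convergence near the root
    x = x0
    for _ in range(steps):
        x -= F(x) / dF(x)
    return x

def modified_newton(x0, steps):
    # x_{n+1} = x_n - F'(x_0)^{-1} F(x_n): only one derivative evaluation,
    # but merely linear convergence
    x, d0 = x0, dF(x0)
    for _ in range(steps):
        x -= F(x) / d0
    return x

root = 2 ** (1 / 3)
err_newton = abs(newton(1.5, 6) - root)             # near machine precision
err_modified = abs(modified_newton(1.5, 6) - root)  # still of order 1e-4
```

    After the same six steps, the frozen-derivative iteration is many orders of magnitude less accurate, at the benefit of computing and inverting the derivative only once.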

    The study of the convergence of iterative methods usually proceeds along two lines: semilocal and local convergence analysis. Semilocal convergence analysis uses information around an initial point to give criteria ensuring the convergence of the iterative method, while local analysis uses information around a solution to find estimates of the radii of convergence balls. There exist several studies of the convergence of Newton's method; see [10], [13], [18], [31], [34], and the ones presented by the authors of this chapter [1–3,19–24,30,32,33].

    Concerning the semilocal convergence of Newton's method and Newton-like methods, one of the most important results is the celebrated Kantorovich theorem for solving nonlinear equations. This theorem provides a simple and transparent convergence criterion for operators with bounded second derivatives or Lipschitz continuous first derivatives. The second type of analysis for numerical methods is local convergence. Traub and Woźniakowski [46], Rheinboldt [44], [45], Rall [43], Argyros [10], and other authors gave estimates of the radii of local convergence balls when the Fréchet-derivatives are Lipschitz continuous around a solution.
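    In the Lipschitz case, the Kantorovich criterion takes a particularly simple form: if η bounds the first Newton step ‖F′(x_0)^{−1} F(x_0)‖ and L is an affine-invariant Lipschitz constant of the derivative, then h = Lη ≤ 1/2 guarantees semilocal convergence. The following sketch checks this on a hypothetical scalar example (the numbers are illustrative, not from the book):

```python
def kantorovich_h(eta, L):
    # h = L * eta, with eta = ||F'(x0)^{-1} F(x0)||; the classical
    # Kantorovich criterion requires h <= 1/2
    return L * eta

# F(x) = x**3 - 2 with x0 = 1.5: F(x0) = 1.375, F'(x0) = 6.75.
# On the interval [1, 2], |F''(x)| = 6x <= 12, so an affine-invariant
# Lipschitz constant is L = 12 / 6.75.
eta = 1.375 / 6.75
L = 12 / 6.75
h = kantorovich_h(eta, L)   # about 0.362 <= 1/2: convergence is guaranteed
```

    Since h ≈ 0.362 ≤ 1/2, the Kantorovich theorem guarantees that Newton's method started at x_0 = 1.5 converges to the root.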

    Concerning the semilocal convergence of both methods, the Lipschitz-type condition

    (1.1.4)

    has been used [4], [5], [10], [25], [26–29], [35–42], [47–49], where ω is a strictly increasing and continuous function with

    (1.1.5)

    If ω(t) = Lt for some L > 0, we obtain the Lipschitz case, whereas if ω(t) = Lt^λ for some L > 0 and fixed λ ∈ (0, 1], we obtain the Hölder case. Sufficient convergence criteria in the references, as well as error bounds on the distances ‖x_{n+1} − x_n‖ and ‖x_n − x*‖ for each n ≥ 0, have been established using the majorizing sequence given by

    (1.1.6)

    Using (1.1.6) (see [4], [5], [10]), it is easy to see that

    where

    Under the same or weaker convergence criteria, we provided a convergence analysis [6], [10], [16], [18] with the following advantages over the works stated earlier:

    •  tighter error bounds on the distances ‖x_{n+1} − x_n‖ and ‖x_n − x*‖ for each n ≥ 0,

    •  at least as precise information on the location of the solution x*.
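    For the Lipschitz specialization ω(t) = Lt, the general majorizing sequence (1.1.6) reduces to the classical Kantorovich recurrence. The following sketch computes this special case and its limit; the book's sequence is stated for a general ω, so this is only an illustration:

```python
import math

def majorizing_sequence(L, eta, steps):
    # Classical Kantorovich recurrence (Lipschitz case, omega(t) = L*t):
    #   t_0 = 0, t_1 = eta,
    #   t_{n+1} = t_n + L * (t_n - t_{n-1})**2 / (2 * (1 - L * t_n)).
    # For the Newton iterates one has ||x_{n+1} - x_n|| <= t_{n+1} - t_n.
    t_prev, t = 0.0, eta
    seq = [t_prev, t]
    for _ in range(steps):
        t_prev, t = t, t + L * (t - t_prev) ** 2 / (2 * (1 - L * t))
        seq.append(t)
    return seq

# When h = L * eta <= 1/2 the sequence increases to the limit
# t* = (1 - sqrt(1 - 2 * L * eta)) / L.
L, eta = 1.0, 0.4      # illustrative constants with h = 0.4 <= 1/2
seq = majorizing_sequence(L, eta, 20)
t_star = (1 - math.sqrt(1 - 2 * L * eta)) / L
```

    The computed sequence is increasing and converges to t*, which bounds the distance from the initial point within which all iterates, and the solution, are guaranteed to lie.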

    We obtain these advantages by introducing the center Lipschitz condition

    (1.1.7)

    where ω_0 is a strictly increasing and continuous function with the same property as (1.1.5). Condition (1.1.7) follows from (1.1.4), and

    (1.1.8)

    holds in general. Note also that the ratio ω/ω_0 can be arbitrarily large [6–18]. Using (1.1.4), it can be shown that

    (1.1.9)

    for each x in a certain subset of D. This estimate leads to the majorizing sequence (1.1.6) (see [4], [5], [10]). Now, using the less expensive and more precise (1.1.7), we obtain that

    (1.1.10)

    Inequality (1.1.10) is tighter than (1.1.9) if strict inequality holds in (1.1.8). In this way we used the majorizing sequence given by

    (1.1.11)

    A simple inductive argument shows

    (1.1.12)

    (1.1.13)

    and

    (1.1.14)
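    The effect of the center condition can be illustrated in the Lipschitz specialization ω(t) = Lt, ω_0(t) = L_0 t with L_0 ≤ L: replacing L by the smaller L_0 in the denominator of the classical recurrence yields a termwise smaller, hence tighter, majorizing sequence. This is only a sketch of the mechanism under that assumption; the book's sequence (1.1.11) is stated for general ω and ω_0.

```python
def kantorovich_seq(L_num, L_den, eta, steps):
    # t_0 = 0, t_1 = eta,
    # t_{n+1} = t_n + L_num * (t_n - t_{n-1})**2 / (2 * (1 - L_den * t_n)).
    # L_den = L recovers the classical sequence; L_den = L_0 <= L gives
    # the tighter center-Lipschitz variant.
    t_prev, t = 0.0, eta
    seq = [t_prev, t]
    for _ in range(steps):
        t_prev, t = t, t + L_num * (t - t_prev) ** 2 / (2 * (1 - L_den * t))
        seq.append(t)
    return seq

L, L0, eta = 1.0, 0.5, 0.4     # illustrative constants with L0 <= L
classical = kantorovich_seq(L, L, eta, 15)
center = kantorovich_seq(L, L0, eta, 15)
# Each term of the center-based sequence is <= the corresponding classical
# term, so its limit, and the induced error bounds, are tighter.
```

    Since 1 − L_0 t ≥ 1 − L t, each increment of the center-based sequence is no larger than the classical one; a simple induction then gives the termwise comparison.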

    In the case of modified Newton's method [6], [9], [10], [31], the majorizing sequences used are

    (1.1.15)

    Using (1.1.15) we obtain

    However, we used tighter majorizing sequences given by

    (1.1.16)

    Moreover, we obtain

    where

    It follows from (1.1.7) that there exist functions with the same properties as ω_0 such that

    (1.1.17)

    and

    (1.1.18)

    The latter function can certainly be chosen to be a constant function. For example,

    We have that

    Note that (1.1.7), (1.1.17), and (1.1.18) are not hypotheses additional to (1.1.4), since in practice finding the function ω also requires finding the functions appearing in them. Furthermore, the verification of (1.1.17) and (1.1.18) requires only computations at the initial data.

    In the present chapter, we still use (1.1.4) and (1.1.7). However, we use an even tighter majorizing sequence than before, given by

    (1.1.19)

    Under the same or weaker convergence criteria than before, we get

    (1.1.20)

    (1.1.21)

    and

    (1.1.22)

    Our results for the semilocal convergence extend those in [17] when restricted to the Lipschitz case.

    Related to the local convergence matter, the condition

    (1.1.23)

    has been used [10], [18], [31], [34], [43], where the function ν is as ω. The radius of convergence has been enlarged and the error bounds on the distances have been found to be tighter [10], [18] using (1.1.23) in combination with

    (1.1.24)

    where this function has the same properties as ν. In this case we obtain

    (1.1.25)

    which has been used by different authors (see [6], [10], [16], [18]), as well as

    (1.1.26)

    which is tighter than (1.1.25). We shall also use the conditions

    (1.1.27)

    (1.1.28)

    and

    (1.1.29)

    where the functions involved have the same properties as the other ν functions. In the present chapter we show that the new error bounds can be tighter than the old ones [10], [17], [18].

    The chapter is organized as follows. Section 1.2 contains the semilocal convergence of Newton's methods. The local convergence is examined in Section 1.3. Finally, some applications are shown in Section 1.4, while some numerical examples are given in Section 1.5.

    1.2 Semilocal convergence

    First of all, we need to introduce some results on the convergence of majorizing sequences for Newton's method. Let the sequence be given by (1.1.19). It is convenient for us to define a sequence of functions by

    (1.2.1)

    Consider the class of strictly increasing and continuous functions such that

    (1.2.2)

    Note that this class is nonempty. Moreover, let us define

    (1.2.3)

    for some positive constants L_0 and L. Then, we have by (1.2.1) and (1.2.3) that

    (1.2.4)

    Set

    (1.2.5)

    Note that (1.2.2) then holds for α given by (1.2.5).

    Lemma 1.2.1

    Set

    (1.2.6)

    Furthermore, suppose that the functions ω_0 and ω are in the class above,

    (1.2.7)

    and there exists such that

    (1.2.8)

    Then, the sequence given by (1.1.19) is well defined, strictly increasing, bounded from above by

    (1.2.9)

    and converges to its unique least upper bound which satisfies

    (1.2.10)

    Moreover, the following estimates hold:

    (1.2.11)

    Proof

    Using mathematical induction, we will prove that

    (1.2.12)

    Estimate (1.2.12) holds for k = 0 by (1.2.8). Then we have by (1.1.19) and (1.2.12) (for k = 0) that

    (1.2.13)

    Assume that (1.2.12) holds for each natural number up to k. Then we get by (1.1.19) and (1.2.12) that

    (1.2.14)

    and

    (1.2.15)

    Evidently, estimate (1.2.12) is true with k replaced by k + 1, provided that

    or

    (1.2.16)

    Moreover, estimate (1.2.16) motivates us to define recurrent functions on by

    (1.2.17)

    We need to find a relationship between two consecutive such functions. We get

    (1.2.18)

    where the function is given by (1.2.1). Then estimate (1.2.16) is certainly true if

    (1.2.19)

    In view of (1.2.2) and (1.2.18), we obtain

    (1.2.20)

    Define the function by

    (1.2.21)

    Then we get by (1.2.19)–(1.2.21) that (1.2.19) is satisfied, provided that

    (1.2.22)

    Using (1.2.17) and (1.2.21) we obtain that

    (1.2.23)

    That is, (1.2.22) is true by (1.2.8) and (1.2.23). Consequently, the induction for (1.2.11) is complete. Hence, the sequence is increasing, bounded from above by the bound given by (1.2.9), and as such it converges to its unique least upper bound, which satisfies (1.2.10). □

    Next, we have the following useful extension of Lemma 1.2.1.

    Lemma 1.2.2

    Suppose that there exist a minimal natural number and a constant such that

    and

    Then, the sequence given by (1.1.19) is well defined, strictly increasing, bounded from above
