A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications
About this ebook
A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications evaluates and compares advances in iterative techniques, also discussing their numerous applications in applied mathematics, engineering, mathematical economics, mathematical biology and other applied sciences. It uses the popular iteration technique to generate approximate solutions of complex nonlinear equations, making it suitable for advanced problems in engineering, mathematical economics, mathematical biology and other applied sciences. Iteration methods are also applied to optimization problems; in such cases, the iteration sequences converge to an optimal solution of the problem at hand.
- Contains recent results on the convergence analysis of numerical algorithms in both finite-dimensional and infinite-dimensional spaces
- Encompasses the novel tool of dynamic analysis for iterative methods, including new developments in Smale stability theory and polynomiography
- Explores the uses and computation of iterative methods across nonlinear analysis
- Uniquely places discussion of derivative-free methods in context of other discoveries, aiding comparison and contrast between options
Á. Alberto Magreñán
Professor Á. Alberto Magreñán (Department of Mathematics, Universidad Internacional de La Rioja, Spain) has published 43 documents. He works in operator theory, computational mathematics, iterative methods, dynamical studies and computation.
Book preview
A Contemporary Study of Iterative Methods - Á. Alberto Magreñán
Chapter 1
The majorization method in the Kantorovich theory
Abstract
The goal in this chapter is to present some improvements related to the convergence of Newton's and modified Newton's method by means of introducing and using the center Lipschitz condition. Using both conditions we obtain tighter majorizing sequences that allow us to obtain weaker convergence criteria. Numerical examples and applications validating the theoretical results are also presented.
Keywords
Newton's method; Kantorovich; Majorizing sequences; Local/semilocal convergence
1.1 Introduction
Let X, Y be Banach spaces and let D be a closed and convex subset of X. In the present chapter, we are concerned with the problem of approximating a locally unique solution x* of
F(x) = 0,   (1.1.1)
where F : D ⊆ X → Y is a Fréchet-differentiable operator.
Many problems in the applied sciences can be brought into the form of equation (1.1.1) by means of mathematical modeling, as can be seen in [10], [13], and [18].
A closed-form solution of equation (1.1.1) is rarely available, which is why iterative methods are used.
The most well-known and studied one is Newton's method, which is defined as
x_{n+1} = x_n − F′(x_n)⁻¹F(x_n),  n ≥ 0,   (1.1.2)
where x_0 ∈ D is an initial point and F′(x_n) denotes the Fréchet derivative of F at the point x_n. Newton's method requires the inversion of the linear operator F′(x_n), as well as the evaluation of F at x_n, at each step. If the starting point x_0 is close enough to the solution, Newton's method converges quadratically [10], [31]. The inversion of F′(x_n) at each step may be too expensive or unavailable. That is why the modified Newton's method
x_{n+1} = x_n − F′(x_0)⁻¹F(x_n),  n ≥ 0,   (1.1.3)
can be used instead of Newton's method in this case. However, the convergence of this modification is only linear, as stated in [6], [9], [10], and [31].
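As an illustration (a minimal sketch of our own, not from the chapter), the two iterations can be compared on a simple scalar equation; the test function F(x) = x³ − 2, the starting point, and the tolerance are assumptions made only for this example:

```python
# Sketch comparing Newton's method (1.1.2) with the modified Newton's method
# (1.1.3) on the scalar equation F(x) = x**3 - 2 (our own example problem).

def F(x):
    return x**3 - 2.0

def dF(x):
    return 3.0 * x**2

def newton(x0, tol=1e-12, max_iter=100):
    """x_{n+1} = x_n - F'(x_n)^{-1} F(x_n): derivative refreshed every step."""
    x = x0
    for n in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            return x, n + 1
    return x, max_iter

def modified_newton(x0, tol=1e-12, max_iter=100):
    """x_{n+1} = x_n - F'(x_0)^{-1} F(x_n): derivative frozen at x_0."""
    x, d0 = x0, dF(x0)
    for n in range(max_iter):
        step = F(x) / d0
        x -= step
        if abs(step) < tol:
            return x, n + 1
    return x, max_iter

root_n, iters_n = newton(1.0)
root_m, iters_m = modified_newton(1.0)
print(iters_n, iters_m)  # modified Newton needs noticeably more iterations
```

Both variants locate the root 2^(1/3), but the frozen-derivative version trades the per-step inversion cost for a visibly larger iteration count, matching the quadratic-versus-linear contrast described above.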
The study of the convergence of iterative methods usually falls into two types: semilocal and local convergence analysis. Semilocal convergence analysis uses information around an initial point to give criteria ensuring the convergence of the iterative method, while local analysis uses information around a solution to find estimates of the radii of convergence balls. There exist several studies on the convergence of Newton's method, see [10], [13], [18], [31], [34], and the ones presented by the authors of this chapter [1–3,19–24,30,32,33].
Concerning the semilocal convergence of Newton's method or Newton-like methods, one of the most important results is the celebrated Kantorovich theorem for solving nonlinear equations. This theorem provides a simple and transparent convergence criterion for operators with bounded second derivatives or Lipschitz continuous first derivatives. The second type of analysis for numerical methods is local convergence. Traub and Woźniakowski [46], Rheinboldt [44], [45], Rall [43], Argyros [10], and other authors gave estimates of the radii of local convergence balls when the Fréchet derivatives are Lipschitz continuous around a solution.
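For orientation, the classical Kantorovich criterion can be summarized as follows; this statement is in our own notation and covers only the Lipschitz case, whereas the chapter works with general ω-conditions below.

```latex
% Classical Kantorovich criterion, Lipschitz case (notation ours).
% Assume, for all $x, y \in D$,
%   $\|F'(x_0)^{-1}(F'(x) - F'(y))\| \le L\,\|x - y\|$ and
%   $\|F'(x_0)^{-1} F(x_0)\| \le \eta$.
% If
\[
  h \;=\; L\,\eta \;\le\; \tfrac{1}{2},
\]
% then Newton's method (1.1.2) converges to a solution $x^{\ast}$ of $F(x) = 0$ with
\[
  \|x^{\ast} - x_0\| \;\le\; t^{\ast} \;=\; \frac{1 - \sqrt{1 - 2h}}{L}.
\]
```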
Concerning the semilocal convergence of both methods, the Lipschitz-type condition
‖F′(x_0)⁻¹(F′(x) − F′(y))‖ ≤ ω(‖x − y‖)  for each x, y ∈ D   (1.1.4)
has been used in [4], [5], [10], [25], [26–29], [35–42], [47–49], where ω is a strictly increasing and continuous function with
(1.1.5)
If ω(t) = Lt for some L > 0, we obtain the Lipschitz case, whereas if ω(t) = Lt^p for some L > 0 and fixed p ∈ (0, 1], we obtain the Hölder case. Sufficient convergence criteria in the references, as well as error bounds on the distances ‖x_{n+1} − x_n‖ and ‖x_n − x*‖, for each n ≥ 0, have been established using the majorizing sequence given by
(1.1.6)
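In the Lipschitz case ω(t) = Lt, a majorizing sequence of this kind reduces to the classical Kantorovich scalar recurrence. The following sketch (our own specialization, with illustrative values of L and η) computes it and compares the limit with the closed-form value:

```python
# Kantorovich-type majorizing sequence in the Lipschitz case omega(t) = L*t
# (our specialization; the chapter's sequence (1.1.6) covers general omega):
#   t_0 = 0,  t_1 = eta,
#   t_{n+1} = t_n + L (t_n - t_{n-1})**2 / (2 (1 - L t_n)).

def majorizing_sequence(L, eta, steps=25):
    t = [0.0, eta]
    for _ in range(steps):
        t_prev, t_cur = t[-2], t[-1]
        t.append(t_cur + L * (t_cur - t_prev)**2 / (2.0 * (1.0 - L * t_cur)))
    return t

L, eta = 1.0, 0.4        # h = L*eta = 0.4 <= 1/2: the Kantorovich criterion holds
t = majorizing_sequence(L, eta)
t_star = (1.0 - (1.0 - 2.0 * L * eta)**0.5) / L   # limit in the Lipschitz case
print(t[-1], t_star)
```

The computed terms increase monotonically toward t*, which is exactly the behavior the majorizing-sequence technique exploits to dominate ‖x_{n+1} − x_n‖.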
Using (1.1.6) (see [4], [5], [10]), it is easy to see that
where
Under the same or weaker convergence criteria, we provided a convergence analysis [6], [10], [16], [18] with the following advantages over the earlier stated works:
• tighter error bounds on the distances ‖x_{n+1} − x_n‖ and ‖x_n − x*‖, for each n ≥ 0,
• and at least as precise information on the location of the solution x*.
We obtain these advantages by means of introducing the center Lipschitz-condition
‖F′(x_0)⁻¹(F′(x) − F′(x_0))‖ ≤ ω_0(‖x − x_0‖)  for each x ∈ D,   (1.1.7)
where ω_0 is a strictly increasing and continuous function with the same property as (1.1.5). Condition (1.1.7) follows from (1.1.4), and
ω_0(t) ≤ ω(t)  for each t ≥ 0   (1.1.8)
holds in general. Note also that ω/ω_0 can be arbitrarily large [6–18]. Using (1.1.4) it can be shown that
‖F′(x)⁻¹F′(x_0)‖ ≤ (1 − ω(‖x − x_0‖))⁻¹   (1.1.9)
for each x in a certain subset of D. This estimate leads to the majorizing sequence of [4], [5], [10]. Now, using the less expensive and more precise (1.1.7), we obtain
‖F′(x)⁻¹F′(x_0)‖ ≤ (1 − ω_0(‖x − x_0‖))⁻¹   (1.1.10)
Inequality (1.1.10) is tighter than (1.1.9) if strict inequality holds in (1.1.8). In this way we used the majorizing sequence given by
(1.1.11)
A simple inductive argument shows
(1.1.12)
(1.1.13)
and
(1.1.14)
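The effect can be seen numerically in the constant case ω(t) = Lt, ω_0(t) = L_0 t with L_0 ≤ L; this is our own instance, and the values of L, L_0, η below are illustrative:

```python
# Numerical illustration (constant-function instance omega(t) = L*t,
# omega_0(t) = L0*t with L0 <= L) of why the center condition gives tighter
# majorizing sequences: in this instance only the denominator changes, from
# 1 - L*t_n to 1 - L0*s_n, so s_n <= t_n term by term.

def sequence(L_num, L_den, eta, steps=25):
    s = [0.0, eta]
    for _ in range(steps):
        s.append(s[-1] + L_num * (s[-1] - s[-2])**2 / (2.0 * (1.0 - L_den * s[-1])))
    return s

L, L0, eta = 1.0, 0.5, 0.4
t = sequence(L, L,  eta)   # classical sequence, Lipschitz constant only
s = sequence(L, L0, eta)   # tighter sequence using the center constant L0
print(s[-1], t[-1])        # s_n <= t_n for every n
```

Since the error bounds for the iterates are dominated by these scalar sequences, a termwise smaller majorizing sequence translates directly into tighter error bounds.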
In the case of the modified Newton's method [6], [9], [10], [31], the majorizing sequences used are
(1.1.15)
Using (1.1.15) we obtain
However, we used tighter majorizing sequences given by
(1.1.16)
Moreover, we obtain
where
It follows from (1.1.7) that there exist functions with the same properties as ω such that
(1.1.17)
and
(1.1.18)
This function can certainly be chosen to be a constant function. For example,
We have that
Note that (1.1.7), (1.1.17), and (1.1.18) are not hypotheses additional to (1.1.4), since in practice finding the function ω also requires finding the other functions. Furthermore, the verification of (1.1.17) and (1.1.18) requires only computations at the initial data.
In the present chapter, we still use (1.1.4) and (1.1.7). However, we use an even tighter majorizing sequence, given by
(1.1.19)
Under the same or weaker convergence criteria (than those for the earlier sequences), we get
(1.1.20)
(1.1.21)
and
(1.1.22)
Our results for the semilocal convergence extend those in [17] when restricted to the Lipschitz case.
Related to the local convergence matter, the condition
‖F′(x*)⁻¹(F′(x) − F′(y))‖ ≤ ν(‖x − y‖)  for each x, y ∈ D   (1.1.23)
has been used in [10], [18], [31], [34], [43], where the function ν is as ω. The radius of convergence has been enlarged, and the error bounds on the distances ‖x_n − x*‖ have been found to be tighter [10], [18], using (1.1.23) in combination with
‖F′(x*)⁻¹(F′(x) − F′(x*))‖ ≤ ν_0(‖x − x*‖)  for each x ∈ D,   (1.1.24)
where the function ν_0 is as ω_0. In this case, instead of the estimate
‖F′(x)⁻¹F′(x*)‖ ≤ (1 − ν(‖x − x*‖))⁻¹,   (1.1.25)
used by different authors (see [6], [10], [16], [18]), we obtain
‖F′(x)⁻¹F′(x*)‖ ≤ (1 − ν_0(‖x − x*‖))⁻¹,   (1.1.26)
which is tighter than (1.1.25). We shall also use the conditions
(1.1.27)
(1.1.28)
and
(1.1.29)
where the functions involved have the same properties as the other ν functions. In the present chapter, we show that the new error bounds can be tighter than the old ones [10], [17], [18].
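As a concrete constant-case illustration of the enlargement of the convergence radius (Lipschitz constants rather than general ν functions; the formulas below are the well-known constant-case radii, stated in our notation):

```python
# Sketch (Lipschitz-case constants, our notation): classical local convergence
# radius 2/(3L) of Rheinboldt/Traub versus the enlarged radius 2/(2*L0 + L)
# obtained when a center constant L0 <= L is used alongside L.

def radius_classical(L):
    return 2.0 / (3.0 * L)

def radius_center(L, L0):
    return 2.0 / (2.0 * L0 + L)

L, L0 = 3.0, 1.0
print(radius_classical(L), radius_center(L, L0))  # 0.222... versus 0.4
```

The two radii coincide when L0 = L, and the center-based radius strictly improves on the classical one whenever L0 < L, mirroring the comparison between (1.1.25) and (1.1.26).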
The chapter is organized as follows. Section 1.2 contains the semilocal convergence of Newton's method. The local convergence is examined in Section 1.3. Finally, some applications are shown in Section 1.4, while some numerical examples are given in Section 1.5.
1.2 Semilocal convergence
First of all, we need to introduce some results for the convergence of majorizing sequences for Newton's method. Let the sequence be given by (1.1.19). It is convenient for us to define, for each n ≥ 1, a sequence of functions by
(1.2.1)
Denote by the class of strictly increasing and continuous functions on the interval with and such that
(1.2.2)
Note that is nonempty. Moreover, let us define
(1.2.3)
for some positive constants L_0 and L. Then, we have by (1.2.1) and (1.2.3) that
(1.2.4)
Set
(1.2.5)
Note that . Then (1.2.2) holds for α given by (1.2.5).
Lemma 1.2.1
Set
(1.2.6)
Furthermore, suppose that the functions ω_0 and ω are in the class,
(1.2.7)
and there exists such that
(1.2.8)
Then, the sequence given by (1.1.19) is well defined, strictly increasing, bounded from above by
(1.2.9)
and converges to its unique least upper bound which satisfies
(1.2.10)
Moreover, the following estimates hold:
(1.2.11)
Proof
Using mathematical induction, we will prove that
(1.2.12)
Estimate (1.2.12) holds for k = 0 by (1.2.8). Then we have by (1.1.19) and (1.2.12) (for k = 0) that
(1.2.13)
Assume that (1.2.12) holds for each natural integer up to k. Then we get by (1.1.19) and (1.2.12) that
(1.2.14)
and
(1.2.15)
Evidently, estimate (1.2.12) is true with k replaced by k + 1, provided that
or
(1.2.16)
Moreover, estimate (1.2.16) motivates us to define recurrent functions by
(1.2.17)
We need to find a relationship between two consecutive recurrent functions. We get
(1.2.18)
where is given by (1.2.1). Then estimate (1.2.16) is certainly true if
(1.2.19)
In view of (1.2.2) and (1.2.18), we obtain
(1.2.20)
Define function on by
(1.2.21)
Then we get by (1.2.19)–(1.2.21) that (1.2.19) is satisfied, provided that
(1.2.22)
Using (1.2.17) and (1.2.21) we obtain that
(1.2.23)
That is, (1.2.22) is true by (1.2.8) and (1.2.23). Consequently, the induction for (1.2.11) is complete. Hence, the sequence is increasing, bounded from above by the bound given by (1.2.9), and as such it converges to its unique least upper bound, which satisfies (1.2.10). □
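The conclusions of Lemma 1.2.1 can be sanity-checked numerically in the constant case ω(t) = Lt, ω_0(t) = L_0 t. The recurrence and the constant α below are our stand-ins drawn from comparable constant-case analyses, not the chapter's exact (1.1.19) and (1.2.5):

```python
import math

# Constant-case sanity check in the spirit of Lemma 1.2.1: with
# omega(t) = L*t and omega_0(t) = L0*t (our stand-in recurrence), verify that
# the sequence is increasing, bounded by eta/(1 - alpha), and that consecutive
# steps contract by a factor of at most alpha (an assumed constant here).

def check_lemma(L, L0, eta, steps=30):
    alpha = 2.0 * L / (L + math.sqrt(L**2 + 8.0 * L0 * L))
    t = [0.0, eta]
    for _ in range(steps):
        t.append(t[-1] + L * (t[-1] - t[-2])**2 / (2.0 * (1.0 - L0 * t[-1])))
    increasing = all(t[i] <= t[i + 1] for i in range(len(t) - 1))
    bounded = t[-1] <= eta / (1.0 - alpha)          # bound in the spirit of (1.2.9)
    contracting = all(
        t[i + 1] - t[i] <= alpha * (t[i] - t[i - 1]) + 1e-15
        for i in range(1, len(t) - 1)
    )
    return increasing, bounded, contracting

print(check_lemma(L=1.0, L0=0.5, eta=0.4))  # (True, True, True)
```

Such a check is no substitute for the inductive proof, but it makes the role of the three conclusions (monotonicity, boundedness, contraction of the increments) concrete for a specific parameter choice.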
Next, we have the following useful extension of Lemma 1.2.1.
Lemma 1.2.2
Suppose there exist a minimal natural integer and a constant such that
and
Then, the sequence given by (1.1.19) is well defined, strictly increasing, bounded from above