Multipoint Methods for Solving Nonlinear Equations
Ebook, 560 pages

About this ebook

This book is the first on the topic. It explains the most cutting-edge methods needed for precise calculations and explores the development of powerful algorithms to solve research problems. Multipoint methods have an extensive range of practical applications significant in research areas such as signal processing, analysis of convergence rate, fluid mechanics, solid state physics, and many others. The book takes an introductory approach, making qualitative comparisons of different multipoint methods from various viewpoints to help the reader understand applications of more complex methods. Evaluations are made to determine and predict the efficiency and accuracy of the presented models, useful to a wide range of research areas, along with many numerical examples for a deep understanding of the usefulness of each method. This book will make it possible for researchers to tackle difficult problems and deepen their understanding of problem solving using numerical methods.

Multipoint methods are of great practical importance, as they determine sequences of successive approximations for evaluative purposes. This is especially helpful in achieving the highest computational efficiency. The rapid development of digital computers and advanced computer arithmetic has created a need for new methods useful for solving practical problems in a multitude of disciplines such as applied mathematics, computer science, engineering, physics, financial mathematics, and biology.

  • Provides a succinct way of implementing a wide range of useful and important numerical algorithms for solving research problems
  • Illustrates how numerical methods can be used to study problems that have applications in engineering and the sciences, including signal processing, control theory, and financial computation
  • Facilitates a deeper insight into the development of methods, numerical analysis of convergence rate, and very detailed analysis of computational efficiency
  • Provides a powerful means of learning by systematic experimentation with some of the many fascinating problems in science
  • Includes highly efficient algorithms convenient for implementation in the most common computer algebra systems such as Mathematica, MATLAB, and Maple
Language: English
Release date: December 31, 2012
ISBN: 9780123972989
    Multipoint Methods for Solving Nonlinear Equations - Miodrag Petković

    Preface

    Calculating zeros of a scalar function f ranks among the most significant problems in the theory and practice not only of applied mathematics, but also of many branches of engineering sciences, physics, computer science, and finance, to mention only some fields. These problems lead to a rich blend of mathematics, numerical analysis, and computing science. Our book mainly concerns itself with a special class of iterative methods for solving nonlinear equations, commonly called multipoint or multi-step methods (both terms appear in the literature). Multipoint iterative methods are defined as methods that require evaluation of f and its derivatives at a number of values of the independent variable.

    The main goal and motivation in the construction of new methods is to achieve the highest computational efficiency; in other words, it is desirable to attain as high a convergence order as possible with a fixed number of function evaluations per iteration. Some classes of multipoint iterative methods considered in this book satisfy this requirement. This book studies many new computationally effective root-finding methods, and numerous well-known methods are identified as special cases.

    Any one-point iterative method, such as Newton's, Halley's, Laguerre's, or the Euler-Cauchy method and members of the Traub-Schröder basic sequence, which depends explicitly on f and its first r − 1 derivatives, cannot attain an order higher than r. Therefore, the informational efficiency of one-point methods, expressed as the ratio of the order of convergence r to the number of required function evaluations per iteration, cannot exceed 1. Multipoint methods are of great practical importance, since they overcome the theoretical limits of any one-point method concerning the convergence order and computational efficiency. The majority of this book is devoted to the so-called optimal multipoint methods that attain the order of convergence r = 2^n at the cost of n + 1 function evaluations. Their informational efficiency is always greater than 1 for n ≥ 2.
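    As a small illustration of this count (our own sketch, not part of the book), the informational efficiency r/(n + 1) of an optimal method of order r = 2^n can be tabulated:

```python
# Optimal multipoint methods attain order r = 2**n using n + 1
# function evaluations per iteration; tabulate r / (n + 1).
for n in range(1, 6):
    r = 2 ** n            # optimal order of convergence
    evals = n + 1         # function evaluations per iteration
    info_eff = r / evals  # informational efficiency; > 1 for n >= 2
    print(f"n = {n}: order {r}, {evals} evaluations, efficiency {info_eff:.3f}")
```

    Already at n = 2 (order 4 with 3 evaluations) the ratio exceeds 1, which no one-point method can achieve.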

    Most iterative methods presented in this book can also handle complex zeros. However, for simplicity and following common practice in the literature, we consider functions and root-finding methods in the real domain. Thus, throughout this book, we assume the possibility of finding complex zeros without placing undue emphasis upon it.

    Traub’s 1964 book, as well as papers published in the 1970s and 1980s, studied multipoint methods extensively. The early years of the twenty-first century have occasioned a renewed interest in multipoint methods. This is no mere coincidence. Since multipoint methods produce approximations of great accuracy, the rapid development of digital computers, advanced computer arithmetic (multi-precision arithmetic and interval arithmetic), and symbolic computation have allowed for an even more effective implementation of multipoint methods. Many iterative methods, once of academic interest only, have become feasible in practice.

    During the last ten years, at least 200 new methods have been proposed, making a comprehensive survey of all such methods too ambitious for the present text. Moreover, many methods are either non-optimal or slight modifications of existing methods, and some are even rediscoveries of known methods. For these reasons, our study mainly concerns itself with optimal methods that have brought a genuine advance in the theory and practice of iterative processes. Nonetheless, the book includes some multipoint methods of lower computational efficiency, usually without details, because of their influence on the topic or for their historical importance.

    The book is divided into seven chapters. Approximately half of the material presented originates from the authors’ papers. Chapters 6 and 7 consist largely of very recent results. Also included are the selected results of other authors published over the last fifty years, from Ostrowski’s 1960 work, up to very recently published results. The book is intended as both a systematic introduction to techniques for developing multipoint methods and as a unifying presentation of the multipoint iterative methods constructed over fifty years using these techniques.

    The first chapter has an introductory character and contains the basic concepts of iterative methods for the numerical solution of nonlinear equations such as order of convergence, studies of initial approximations and computational efficiency, stopping criteria and a review of well-known methods for simple and multiple roots.

    Two-point methods are considered in Chapter 2. Although non-optimal, Traub’s two-point methods of third order from 1964 are given for their great influence on the later development of multipoint methods. Several different derivations of the Ostrowski method proposed in 1960, the first optimal two-point method of order four, are presented in this chapter with the aim of demonstrating different developing techniques. Derivation and convergence analysis of two families of optimal two-point methods, proposed by the authors, are given in detail. Aside from the well-known Jarratt methods, we present a generalization of Jarratt’s type that produces many two-point methods of optimal order four. The last section is concerned with optimal two-point methods for finding multiple roots.

    Chapter 3 concerns non-optimal three-point methods with order of convergence less than eight. At the beginning, several techniques for developing sixth-order methods are demonstrated. These techniques can be successfully applied for the construction of optimal n-point methods for n ≥ 3. Special attention is paid to the sixth-order methods of Ostrowski’s and Jarratt’s type.

    Chapter 4 is the most voluminous and discusses optimal three-point methods. Different techniques are applied for their construction: inverse interpolation, Hermite’s interpolation, interpolation by rational functions, method of undetermined weight functions. This chapter ends with a derivative free family of optimal order eight, constructed by using a family of optimal two-point methods in the first two steps and Newton’s interpolation with divided differences in the third step.

    Chapter 5 begins with a discussion of practical interest: Is the construction of faster and faster multipoint methods always justified? From a practical point of view, multipoint methods of very high order (16 or more) produce approximations with 100 or more significant decimal digits already in the second iteration. On the other hand, double-precision arithmetic with its accuracy of approximately 16 significant decimal digits gives quite satisfactory results in solving most contemporary practical problems. Hence, the construction of a multipoint method of a very high order is justified (at least from the theoretical point of view) if such a method is a member of some general family of methods of arbitrary order of convergence. Higher order multipoint methods with this property are described in Chapter 5. Interpolation techniques used in Chapter 4 are also applied in constructing the higher order methods presented in Chapter 5.

    Multipoint methods with memory use information from the current and previous iterations. Although the first ideas for the construction of this class of methods date back to 1964 and Traub’s book, contributions on this subject very seldom appear in the literature. To fill this gap we present in Chapter 6 some old and some new multipoint methods with memory. The order of convergence of new multipoint methods with memory is greater than the order of the corresponding optimal multipoint methods without memory (considered in Chapters 2 to 5). Accelerated convergence is obtained by variation of a self-correcting parameter, which is recursively calculated as the iteration proceeds using information from the current and previous iterations. The improved convergence rate, attained without additional function evaluations, is a significant advantage of multipoint methods with memory.

    Chapter 7 presents accelerated methods for the simultaneous determination of all simple or multiple roots of polynomials. This acceleration is realized using suitable corrections that arise from optimal multipoint methods. Several iterative methods of high efficiency are given in this chapter, both in ordinary complex arithmetic and circular complex interval arithmetic. Interval methods produce disks of very small size that contain the desired roots, providing in this manner information about upper error bounds of approximations to the roots, which can be regarded as an important advance in controlling results of finite-precision arithmetic.

    The ultimate estimate of the quality, usefulness and computational efficiency of the considered algorithms cannot be made until they have been tested on a variety of functions of different forms and structure. For this reason, a number of numerical examples are included in the book to demonstrate convergence properties of the presented methods and for their comparison. These examples were programmed by the authors and realized in the computational software program Mathematica. This package was also used for symbolic computation in all complicated evaluations and convergence analysis of the methods presented. Since the considered methods produce approximations of high accuracy, multiprecision arithmetic was employed.

    A general list of selected references, used or cited throughout the text, is given at the end of the book. This list is still far from being complete; it refers to an extensive collection of publications that we have assembled and built up in a systematic manner.

    This book, intended as a mixture of theoretical results, algorithmic aspects and symbolic computation, is both a text and a reference source for numerical analysts, engineers, physicists and computer scientists, who are interested in new developments and applications. It is the only book that treats iterative multipoint methods for solving nonlinear equations. This particular feature enables readers to understand the convergence behavior and computational efficiency of the various iterative methods. The presented material will also be useful as a text for a graduate or undergraduate course in numerical analysis.

    We wish to thank Luc Wuytack, Melvin Scott and Takemoto Mitsui, the editors of Elsevier’s mathematical journals, who have encouraged us to write this book. Many thanks go to our dear friends Biljana Mišić-Ilić and Margaret Hanson, who read parts of the manuscript and suggested some improvements in language and style. We are also thankful to Elsevier’s Editorial project manager Kathryn Morrissey and Acquisitions editor Patricia Osborn for their great efforts in the preparation of the manuscript.

    Finally, we would like to express our gratitude to the authors of many valuable papers on the studied topic, which were used and cited here; their results have certainly made this book more exhaustive.

    Miodrag S. Petković

    Ljiljana D. Petković

    Jovana Džunić

    University of Niš

    18 000 Niš, Serbia

    Beny Neta

    Naval Postgraduate School

    Department of Applied Mathematics

    Monterey, CA 93943, U.S.A.

    January 2012

    Chapter 1

    Basic concepts

    In this book we are mainly concerned with multipoint methods for solving nonlinear equations, a special class of iterative methods that possess a greater computational efficiency than classical one-point methods. To develop multipoint methods and investigate their convergence characteristics and computational efficiency, in this chapter we give some basic concepts and properties of root-solvers. Following Traub’s approach (Traub, 1964), we first present a natural classification of iterative methods based on the information they require from the current and previous iterations. Properties of iterative root-finding methods of paramount importance, such as order of convergence, computational efficiency, the choice of initial approximations, and stopping criterion, are considered in separate sections. A list of one-point methods for simple and multiple zeros of functions, often used in this book as the base for the construction of multipoint methods or for comparison purposes, is given in this introductory chapter in a systematic fashion. Most symbols and notation introduced in Chapter 1 are global and maintain their meaning for the entire book.

    1.1 Classification of iterative methods

    A point α for which f(α) = 0 is said to be a zero of the function f or, equivalently, a root of the equation

    f(x) = 0.    (1.1)

    As is well known, roots of equation (1.1) are found by applying some iterative method of the form

    x_{k+1} = φ(x_k)  (k = 0, 1, …),    (1.2)

    starting from an initial approximation x_0, where the function φ, which must satisfy certain conditions, is called the iteration function. These conditions will be considered later.

    Now we present a classification of iterative methods adopting Traub’s approach (Traub, 1964):

    (i) Let x_{k+1} be computed using information only at the current point x_k, that is, by a formula of the form (1.2). Such an iterative scheme is called a one-point iterative method. The most commonly used iterative method of this type is given by

    x_{k+1} = x_k − f(x_k)/f′(x_k)  (k = 0, 1, …),    (1.3)

    known as Newton’s method or the Newton-Raphson method.

    A point α that satisfies α = φ(α) is called a fixed point of φ, and the scheme (1.2) is a fixed point iteration.
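    As an illustration (a minimal sketch of our own, not from the book; the test function f(x) = x^2 − 2 and the tolerance are chosen only as an example), Newton's method (1.3) can be coded as:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method (1.3): x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# The zero of f(x) = x**2 - 2 near x0 = 1 is sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```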

    (ii) Let x_{k+1} be determined using information at the current point x_k and at the previous points x_{k−1}, …, x_{k−n}, and let us define the mapping

    x_{k+1} = φ(x_k; x_{k−1}, …, x_{k−n})  (k = n, n + 1, …).    (1.4)

    An iteration function φ of the form (1.4) is called a one-point iteration function with memory. The best known iteration function with memory is defined by the secant method

    x_{k+1} = x_k − f(x_k)(x_k − x_{k−1})/(f(x_k) − f(x_{k−1}))  (k = 1, 2, …).    (1.5)
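    A sketch of the secant method (1.5), again our own illustration with an example function of our choosing; note that it reuses the previous approximation (memory) and needs no derivative:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method (1.5): reuses the previous point x_{k-1} (memory)."""
    for _ in range(max_iter):
        fx0, fx1 = f(x0), f(x1)
        if fx1 == fx0:          # guard: converged or degenerate secant
            return x1
        x2 = x1 - fx1 * (x1 - x0) / (fx1 - fx0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```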

    (iii) Let x_{k+1} be determined using new information at x_k and at the auxiliary points w_1(x_k), …, w_n(x_k), defined as

    x_{k+1} = φ(x_k, w_1(x_k), …, w_n(x_k))  (k = 0, 1, …).    (1.6)

    Then φ is called a multipoint iteration function without memory. We see from (1.6) that no information from previous iterations is reused.

    (iv) Let x_{k+1} be determined using new information at x_k and at the points w_1(x_k), …, w_n(x_k), together with reused information from previous iterations (for example, at x_{k−1}, w_1(x_{k−1}), …, w_n(x_{k−1})). Then this I.F. can be represented in the general form

    x_{k+1} = φ(x_k, w_1(x_k), …, w_n(x_k); x_{k−1}, w_1(x_{k−1}), …, w_n(x_{k−1}), …).

    Such an iteration function is called a multipoint iteration function with memory.

    The classes of iteration functions (iii) and (iv) will be of major interest in this book.
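    To make class (iii) concrete, here is a sketch (our illustration, with an example function of our choosing) of Ostrowski's two-point method mentioned in the Preface: each iteration evaluates f at x_k and at the auxiliary point w_1(x_k) (a Newton step), plus f′ at x_k, attaining order four with these three evaluations:

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=20):
    """Ostrowski's optimal two-point method: order 4, 3 evaluations/step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:               # guard: exact zero reached
            return x
        y = x - fx / df(x)        # first step: Newton (auxiliary point)
        fy = f(y)
        x_new = y - fy * fx / ((fx - 2 * fy) * df(x))  # second step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = ostrowski(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, 1.0)
```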

    1.2 Order of convergence

    Let {x_k} (k = 0, 1, …) be a sequence of approximations, produced by some iterative method, that tends to a zero α of f. More precisely, if there holds

    lim_{k→∞} x_k = α,

    we say that the sequence {x_k} is convergent. Convergence conditions depend on the form of the iteration function, its properties, and the chosen initial approximation.

    In the sequel we compare two quantities u and v, both tending to 0, according to the following convention:

    u = O(v) if |u/v| remains bounded;

    u = o(v) if u/v → 0;

    u = O_M(v) if both u = O(v) and v = O(u) hold (i.e., their moduli are of the same order).

    The convergence rate of an iterative method is an issue of equal importance to the theory and practice of iterative methods as convergence itself. The convergence rate is defined by the order of convergence.

    Definition 1.1

    Assume that a sequence {x_k} converges to a zero α of f. If there exist a real number r ≥ 1 and a nonzero constant C such that

    lim_{k→∞} |x_{k+1} − α|/|x_k − α|^r = C,    (1.7)

    then r is called the order of convergence of the sequence {x_k}, and C is the factor of convergence or the asymptotic error constant, AEC for brevity.

    Let ε_k = x_k − α be the error of the approximation in the k-th iteration. Hence, (1.7) becomes

    |ε_{k+1}|/|ε_k|^r → C  (k → ∞).

    This can be expressed equivalently as

    |ε_{k+1}| ≈ C|ε_k|^r

    for k large enough.

    Remark 1

    The order of convergence r in (1.7) need not be an integer; for instance, the secant method (1.5) converges with order r = (1 + √5)/2 ≈ 1.618.

    Remark 2

    For simplicity, the ratio

    ε_{k+1}/ε_k^r

    without the absolute value (see (1.7)) will also be called the asymptotic error constant, just as the positive constant C in Definition 1.1. Note that this alternative definition is often used in the literature.

    In practice, the order of convergence is often determined by the following statement known as Schröder-Traub’s theorem (see Traub, 1964).

    Theorem 1.1

    if and only if

    (1.8)

    The asymptotic error constant is given by

    Example 1

    Consider the iteration function φ(x) = x − f(x)/f′(x) (Newton’s iteration). By a direct calculation we find that

    φ(α) = α,  φ′(α) = 0,  φ″(α) = f″(α)/f′(α).

    Therefore, Newton’s method (1.3) is of second order and

    C = |φ″(α)|/2! = |f″(α)/(2f′(α))|.
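    The conclusion of Example 1 is easy to check numerically (our sketch; the function f(x) = x^2 − 2 is our own choice, for which the predicted asymptotic error constant at α = √2 is |f″(α)/(2f′(α))| = 1/(2√2) ≈ 0.3536):

```python
import math

f = lambda x: x * x - 2
df = lambda x: 2 * x
alpha = math.sqrt(2)

# Track |e_{k+1}| / |e_k|**2 over a few Newton steps; by (1.7) with
# r = 2 it should approach |f''(alpha) / (2 f'(alpha))| = 1/(2*sqrt(2)).
x = 1.0
ratios = []
for _ in range(4):
    x_new = x - f(x) / df(x)
    ratios.append(abs(x_new - alpha) / abs(x - alpha) ** 2)
    x = x_new

predicted = 1 / (2 * math.sqrt(2))
```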

    The following two theorems are concerned with the order of the composition of iteration functions.

    Theorem 1.2

    Let φ be an iteration function of order r and let N(x) = x − f(x)/f′(x) be the iteration function introduced by Newton’s method (1.3). Then the composition φ(N(x)) defines an iterative method of order 2r.

    Theorem 1.3

    Let φ_1 and φ_2 be iteration functions of order r_1 and r_2, respectively. Then the composition φ(x) = φ_1(φ_2(x)) defines an iterative method of order r_1 r_2.

    1.2.1 Computational order of convergence (COC)

    Together with the order of convergence, for practical purposes we introduce the notion of computational order of convergence (COC, for brevity). Namely, it is of interest to check the order of convergence of an iterative method during its practical implementation and estimate how much it differs from the theoretical order.

    Let x_{k−1}, x_k, and x_{k+1} be three successive approximations to the zero α obtained in the iterative process. Taking the logarithm of the approximate relations

    |ε_{k+1}| ≈ C|ε_k|^r  and  |ε_k| ≈ C|ε_{k−1}|^r,

    we obtain the computational order of convergence

    r_c = log |(x_{k+1} − α)/(x_k − α)| / log |(x_k − α)/(x_{k−1} − α)|.    (1.9)

    This old result has been rediscovered by Weerakoon and Fernando (2000).

    Since the zero α is in general unknown, in practice we can use the approximation |ε_k| ≈ |f(x_k)|/|f′(α)| and (1.9) to derive the approximate formula

    r_c ≈ log |f(x_{k+1})/f(x_k)| / log |f(x_k)/f(x_{k−1})|.    (1.10)

    The computational order of convergence approximates well the theoretical order of convergence, assuming that pathological behavior of the iterative method (slow convergence at the beginning of the iterative process, oscillation of approximations, etc.) does not occur.
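    Formula (1.10) is convenient in practice because it uses only the computed residuals f(x_k). A sketch (our illustration, applying it to Newton's method on a function of our choosing, f(x) = x^2 − 2, whose theoretical order is 2):

```python
import math

def coc(f0, f1, f2):
    """Computational order of convergence (1.10) from three successive
    residuals f(x_{k-1}), f(x_k), f(x_{k+1})."""
    return math.log(abs(f2 / f1)) / math.log(abs(f1 / f0))

f = lambda x: x * x - 2
df = lambda x: 2 * x

xs = [1.0]
for _ in range(3):                      # three Newton steps from x0 = 1
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))

order = coc(f(xs[1]), f(xs[2]), f(xs[3]))   # close to the theoretical 2
```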
