Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables
Ebook, 1,897 pages

About this ebook

Despite the increasing use of computers, the basic need for mathematical tables continues. Tables serve a vital role in preliminary surveys of problems before programming for machine operation, and they are indispensable to thousands of engineers and scientists without access to machines. Because of automatic computers, however, and because of recent scientific advances, a greater variety of functions and a higher accuracy of tabulation than have been available until now are required.
In 1954, a conference on mathematical tables, sponsored by M.I.T. and the National Science Foundation, met to discuss a modernization and extension of Jahnke and Emde's classical tables of functions. This volume, published 10 years later by the U.S. Department of Commerce, is the result. Designed to include a maximum of information and to meet the needs of scientists in all fields, it is a monumental piece of work, a comprehensive and self-contained summary of the mathematical functions that arise in physical and engineering problems.
The book contains 29 sets of tables, some to as high as 20 places: mathematical constants; physical constants and conversion factors (6 tables); exponential integral and related functions (7); error function and Fresnel integrals (12); Bessel functions of integer (12) and fractional (13) order; integrals of Bessel functions (2); Struve and related functions (2); confluent hypergeometric functions (2); Coulomb wave functions (2); hypergeometric functions; Jacobian elliptic and theta functions (2); elliptic integrals (9); Weierstrass elliptic and related functions; parabolic cylinder functions (3); Mathieu functions (2); spheroidal wave functions (5); orthogonal polynomials (13); combinatorial analysis (9); numerical interpolation, differentiation and integration (11); probability functions (11); scales of notation (6); miscellaneous functions (9); Laplace transforms (2); and others.
Each of these sections is prefaced by a list of related formulas and graphs: differential equations, series expansions, special functions, and other basic relations. These constitute an unusually valuable reference work in themselves. The prefatory material also includes an explanation of the numerical methods involved in using the tables that follow and a bibliography. Numerical examples illustrate the use of each table and explain the computation of function values which lie outside its range, while the editors' introduction describes higher-order interpolation procedures. Well over 100 figures illustrate the text.
In all, this is one of the most ambitious and useful books of its type ever published, an essential aid in all scientific and engineering research, problem solving, experimentation and field work. This contains every page of the original government publication.

 

Language: English
Release date: April 30, 2012
ISBN: 9780486158242


    Book preview

    Handbook of Mathematical Functions - Milton Abramowitz

    Handbook of Mathematical Functions

    with

    Formulas, Graphs, and Mathematical Tables

    Edited by Milton Abramowitz and Irene A. Stegun

    1. Introduction

    The present Handbook has been designed to provide scientific investigators with a comprehensive and self-contained summary of the mathematical functions that arise in physical and engineering problems. The well-known Tables of Functions by E. Jahnke and F. Emde has been invaluable to workers in these fields in its many editions¹ during the past half-century. The present volume extends the work of these authors by giving more extensive and more accurate numerical tables, and by giving larger collections of mathematical properties of the tabulated functions. The number of functions covered has also been increased.

    The classification of functions and organization of the chapters in this Handbook is similar to that of An Index of Mathematical Tables by A. Fletcher, J. C. P. Miller, and L. Rosenhead.² In general, the chapters contain numerical tables, graphs, polynomial or rational approximations for automatic computers, and statements of the principal mathematical properties of the tabulated functions, particularly those of computational importance. Many numerical examples are given to illustrate the use of the tables and also the computation of function values which lie outside their range. At the end of the text in each chapter there is a short bibliography giving books and papers in which proofs of the mathematical properties stated in the chapter may be found. Also listed in the bibliographies are the more important numerical tables. Comprehensive lists of tables are given in the Index mentioned above, and current information on new tables is to be found in the National Research Council quarterly Mathematics of Computation (formerly Mathematical Tables and Other Aids to Computation).

    The mathematical notations used in this Handbook are those commonly adopted in standard texts, particularly Higher Transcendental Functions, Volumes 1–3, by A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi (McGraw-Hill, 1953–55). Some alternative notations have also been listed. The introduction of new symbols has been kept to a minimum, and an effort has been made to avoid the use of conflicting notation.

    2. Accuracy of the Tables

    The number of significant figures given in each table has depended to some extent on the number available in existing tabulations. There has been no attempt to make it uniform throughout the Handbook, which would have been a costly and laborious undertaking. In most tables at least five significant figures have been provided, and the tabular intervals have generally been chosen to ensure that linear interpolation will yield four- or five-figure accuracy, which suffices in most physical applications. Users requiring higher precision in their interpolates may obtain them by use of higher-order interpolation procedures described below.

    In certain tables many-figured function values are given at irregular intervals in the argument. An example is provided by Table 9.4. The purpose of these tables is to furnish key values for the checking of programs for automatic computers; no question of interpolation arises.

    The maximum end-figure error, or tolerance, in the tables in this Handbook is 6/10 of 1 unit everywhere in the case of the elementary functions, and 1 unit in the case of the higher functions except in a few cases where it has been permitted to rise to 2 units.

    3. Auxiliary Functions and Arguments

    One of the objects of this Handbook is to provide tables or computing methods which enable the user to evaluate the tabulated functions over complete ranges of real values of their parameters. In order to achieve this object, frequent use has been made of auxiliary functions to remove the infinite part of the original functions at their singularities, and auxiliary arguments to cope with infinite ranges. An example will make the procedure clear.

    The exponential integral of positive argument is given by

    Ei(x) = ∫_–∞^x (eᵗ/t) dt   (x > 0, principal value)

    The logarithmic singularity precludes direct interpolation near x = 0. The functions Ei(x) – ln x and x⁻¹[Ei(x) – ln x – γ], however, are well-behaved and readily interpolable in this region. Either will do as an auxiliary function; the latter was in fact selected as it yields slightly higher accuracy when Ei(x) is recovered. The function x⁻¹[Ei(x) – ln x – γ] has been tabulated to nine decimals for the range 0 ≤ x ≤ ½. For ½ ≤ x ≤ 2, Ei(x) is sufficiently well-behaved to admit direct tabulation, but for larger values of x, its exponential character predominates. A smoother and more readily interpolable function for large x is xe⁻ˣEi(x); this has been tabulated for 2 ≤ x ≤ 10. Finally, the range 10 ≤ x ≤ ∞ is covered by use of the inverse argument x⁻¹. Twenty-one entries of xe⁻ˣEi(x), corresponding to x⁻¹ = .1(–.005)0, suffice to produce an interpolable table.
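    The auxiliary-function idea can be sketched numerically. The power series Ei(x) = γ + ln x + Σ xⁿ/(n·n!), a standard expansion not quoted in this excerpt, shows why x⁻¹[Ei(x) – ln x – γ] is interpolable at x = 0; a minimal Python sketch:

```python
import math

EULER_GAMMA = 0.5772156649015329

def aux(x, terms=30):
    """x^(-1) [Ei(x) - ln x - gamma], computed from its power series
    sum_{n>=1} x^(n-1) / (n * n!); smooth at x = 0, with aux(0) = 1."""
    return sum(x**(n - 1) / (n * math.factorial(n)) for n in range(1, terms + 1))

def Ei(x):
    """Recover Ei(x) for small positive x from the auxiliary function."""
    return EULER_GAMMA + math.log(x) + x * aux(x)
```

    Direct interpolation of aux is safe near the origin, whereas Ei itself carries the logarithmic singularity.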

    4. Interpolation

    The tables in this Handbook are not provided with differences or other aids to interpolation, because it was felt that the space they require could be better employed by the tabulation of additional functions. Admittedly aids could have been given without consuming extra space by increasing the intervals of tabulation, but this would have conflicted with the requirement that linear interpolation is accurate to four or five figures.

    For applications in which linear interpolation is insufficiently accurate it is intended that Lagrange's formula or Aitken's method of iterative linear interpolation³ be used. To help the user, there is a statement at the foot of most tables of the maximum error in a linear interpolate, and the number of function values needed in Lagrange's formula or Aitken's method to interpolate to full tabular accuracy.

    As an example, consider the following extract from Table 5.1.

    The numbers in the square brackets mean that the maximum error in a linear interpolate is 3×10–6, and that to interpolate to the full tabular accuracy five points must be used in Lagrange's and Aitken's methods.

    Let us suppose that we wish to compute the value of xeˣE1(x) for x = 7.9527 from this table. We describe in turn the application of the methods of linear interpolation, Lagrange and Aitken, and of alternative methods based on differences and Taylor's series.

    (1) Linear interpolation. The formula for this process is given by

    fp = (1–p)f0 + pf1

    where f0, f1 are consecutive tabular values of the function, corresponding to arguments x0, x1 respectively; p is the given fraction of the argument interval

    p = (x – x0)/(x1 – x0)

    and fp the required interpolate. In the present instance, we have

    f0 = .89717  4302              f1 = .89823  7113            p = .527

    The most convenient way to evaluate the formula on a desk calculating machine is to set f0 and f1 in turn on the keyboard, and carry out the multiplications by 1–p and p cumulatively; a partial check is then provided by the multiplier dial reading unity. We obtain

    f.527 = (1–.527)(.89717 4302) + .527(.89823 7113)

      = .89773 4403.

    Since it is known that there is a possible error of 3×10–6 in the linear formula, we round off this result to .89773. The maximum possible error in this answer is composed of the error committed by the last rounding, that is, .4403×10–5, plus 3×10–6, and so certainly cannot exceed .8×10–5.
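    The worked example above can be reproduced in a few lines of Python; the two tabular values are those quoted in the text:

```python
def linear_interpolate(x0, x1, f0, f1, x):
    """fp = (1 - p) f0 + p f1, with p = (x - x0) / (x1 - x0)."""
    p = (x - x0) / (x1 - x0)
    return (1 - p) * f0 + p * f1

# Tabular values of x e^x E1(x) at x = 7.9 and 8.0, from the text's example
fp = linear_interpolate(7.9, 8.0, 0.897174302, 0.898237113, 7.9527)
```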

    (2) Lagrange's formula. In this example, the relevant formula is the 5-point one, given by

    f = A–2(p)f–2 + A–1(p)f–1 + A0(p)f0 + A1(p)f1 + A2(p)f2

    Tables of the coefficients Ak(p) are given in chapter 25 for the range p = 0(.01)1. We evaluate the formula for p = .52, .53 and .54 in turn. Again, in each evaluation we accumulate the Ak(p) in the multiplier register since their sum is unity. We now have the following subtable.

    The numbers in the third and fourth columns are the first and second differences of the values of xeˣE1(x) (see below); the smallness of the second difference provides a check on the three interpolations. The required value is now obtained by linear interpolation:

    fp = .3(.89772 9757) + .7(.89774 0379)

                                                              = .89773 7192.

    In cases where the correct order of the Lagrange polynomial is not known, one of the preliminary interpolations may have to be performed with polynomials of two or more different orders as a check on their adequacy.
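    A generic form of Lagrange's formula is easy to state in code; this sketch evaluates the interpolation polynomial directly from the nodes rather than from tabulated coefficients Ak(p):

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate the Lagrange interpolation polynomial through the points
    (xs[k], fs[k]) at the argument x."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        weight = 1.0   # the cardinal polynomial l_k(x)
        for j, xj in enumerate(xs):
            if j != k:
                weight *= (x - xj) / (xk - xj)
        total += fk * weight
    return total
```

    Five points reproduce any quartic exactly, which makes a convenient self-check.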

    (3) Aitken's method of iterative linear interpolation. The scheme for carrying out this process in the present example is as follows:

    Here

    If the quantities xn – x and xm – x are used as multipliers when forming the cross-product on a desk machine, their accumulation (xn – x) – (xm – x) in the multiplier register is the divisor to be used at that stage. An extra decimal place is usually carried in the intermediate interpolates to safeguard against accumulation of rounding errors.

    The order in which the tabular values are used is immaterial to some extent, but to achieve the maximum rate of convergence and at the same time minimize accumulation of rounding errors, we begin, as in this example, with the tabular argument nearest to the given argument, then take the nearest of the remaining tabular arguments, and so on.

    The number of tabular values required to achieve a given precision emerges naturally in the course of the iterations. Thus in the present example six values were used, even though it was known in advance that five would suffice. The extra row confirms the convergence and provides a valuable check.
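    Aitken's scheme of iterated linear interpolations can be sketched as follows (this is Neville's arrangement of the same process; each pass raises the order of the interpolate by one):

```python
def aitken(xs, fs, x):
    """Iterative linear interpolation: repeatedly replace neighboring
    interpolates by their linear cross-mean until one value remains."""
    p = list(fs)
    n = len(p)
    for j in range(1, n):          # j = order of the current interpolates
        for i in range(n - j):
            p[i] = ((x - xs[i + j]) * p[i] + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + j])
    return p[0]
```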

    (4) Difference formulas. We use the central difference notation (chapter 25),

    Here

    δf1/2 = f1 – f0,   δf3/2 = f2 – f1, . . . ,

                              δ²f1 = δf3/2 – δf1/2 = f2 – 2f1 + f0

                        δ³f3/2 = δ²f2 – δ²f1 = f3 – 3f2 + 3f1 – f0

                                δ⁴f2 = δ³f5/2 – δ³f3/2 = f4 – 4f3 + 6f2 – 4f1 + f0

    and so on.
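    Building such a difference table is mechanical; a short sketch (forward differences, which are the same numbers as the central differences above under a half-integer re-indexing):

```python
def difference_table(fs):
    """Rows of successive differences of equally spaced function values:
    rows[0] = fs, rows[1] = first differences, rows[2] = second, ..."""
    rows = [list(fs)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return rows
```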

    In the present example the relevant part of the difference table is as follows, the differences being written in units of the last decimal place of the function, as is customary. The smallness of the higher differences provides a check on the function values.

    Applying, for example, Everett's interpolation formula

    fp = (1 – p)f0 + E2(p)δ²f0 + E4(p)δ⁴f0. . .

                                              + pf1 + F2(p)δ²f1 + F4(p)δ⁴f1 + . . .

    and taking the numerical values of the interpolation coefficients E2(p), E4(p), F2(p) and F4(p) from Table 25.1, we find that

    10⁹f.527 = .473(89717 4302) + .061196(2 2754) –.012(34)

                     + .527(89823 7113) + .063439(2 2036) –.012(39) = 89773 7193.

    We may notice in passing that Everett's formula shows that the error in a linear interpolate is approximately

    E2(p)δ²f0 + F2(p)δ²f1.

    Since the maximum value of |E2(p) + F2(p)| in the range 0 < p < 1 is 1/8, the maximum error in a linear interpolate arising from the neglect of second differences is approximately 1/8 of the second difference.

    (5) Taylor's series. In cases where the successive derivatives of the tabulated function can be computed fairly easily, Taylor's expansion

    f(x) = f(x0) + (x – x0)f′(x0) + (x – x0)²f′′(x0)/2! + (x – x0)³f′′′(x0)/3! + . . .

    can be used. We first compute as many of the derivatives f(n)(x0) as are significant, and then evaluate the series for the given value of x. An advisable check on the computed values of the derivatives is to reproduce the adjacent tabular values by evaluating the series for x = x–1 and x1.

    In the present example, we have

    f(x) = xeˣE1(x)

    f′(x) = (1 + x⁻¹)f(x) – 1

    f′′(x) = (1 + x⁻¹)f′(x) – x⁻²f(x)

    f′′′(x) = (1 + x⁻¹)f′′(x) – 2x⁻²f′(x) + 2x⁻³f(x).

    With x0 = 7.9 and x x0 = .0527 our computations are as follows; an extra decimal has been retained in the values of the terms in the series to safeguard against accumulation of rounding errors.
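    The Taylor computation can be checked in Python; the derivative recurrences are the three formulas above, and the starting value f(7.9) = .89717 4302 is the tabular entry used by the earlier methods:

```python
def taylor_xexpE1(x0, f0, x):
    """Taylor expansion of f(x) = x e^x E1(x) about x0 through third
    derivatives, using the recurrences quoted in the text."""
    c = 1 + 1 / x0
    d1 = c * f0 - 1                                  # f'(x0)
    d2 = c * d1 - f0 / x0**2                         # f''(x0)
    d3 = c * d2 - 2 * d1 / x0**2 + 2 * f0 / x0**3    # f'''(x0)
    h = x - x0
    return f0 + h * d1 + h**2 / 2 * d2 + h**3 / 6 * d3
```

    The result reproduces the Lagrange, Aitken and Everett interpolates to roughly eight decimals.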

    5. Inverse Interpolation

    With linear interpolation there is no difference in principle between direct and inverse interpolation. In cases where the linear formula provides an insufficiently accurate answer, two methods are available. We may interpolate directly, for example, by Lagrange's formula to prepare a new table at a fine interval in the neighborhood of the approximate value, and then apply accurate inverse linear interpolation to the subtabulated values. Alternatively, we may use Aitken's method or even possibly the Taylor's series method, with the roles of function and argument interchanged.

    It is important to realize that the accuracy of an inverse interpolate may be very different from that of a direct interpolate. This is particularly true in regions where the function is slowly varying, for example, near a maximum or minimum. The maximum precision attainable in an inverse interpolate can be estimated with the aid of the formula

    Δx = Δf/(df/dx)

    in which Δf is the maximum possible error in the function values.

    Example. Given xexE1(x) = .9, find x from the table on page X.

    (i) Inverse linear interpolation. The formula for p is

    p = (fp – f0)/(f1 – f0).

    In the present example, we have

    The desired x is therefore

    x = x0 + p(x1 – x0) = 8.1 + .708357(.1) = 8.17083 57

    To estimate the possible error in this answer, we recall that the maximum error of direct linear interpolation in this table is Δf = 3×10–6. An approximate value for df/dx is the ratio of the first difference to the argument interval (chapter 25), in this case .010. Hence the maximum error in x is approximately 3× 10–6/(.010), that is, .0003.
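    Inverse linear interpolation is the same formula solved for p; a sketch on hypothetical data (inverting f(x) = x² near √2, not the handbook's table):

```python
def inverse_linear(x0, x1, f0, f1, fp):
    """Solve fp = (1 - p) f0 + p f1 for p, then return x = x0 + p (x1 - x0)."""
    p = (fp - f0) / (f1 - f0)
    return x0 + p * (x1 - x0)

# Hypothetical example: f(x) = x^2 tabulated at 1.4 and 1.5; find x with f = 2
x_approx = inverse_linear(1.4, 1.5, 1.96, 2.25, 2.0)
```

    x_approx comes out near 1.41379, against √2 ≈ 1.41421; the gap reflects the curvature term the linear formula neglects.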

    (ii) Subtabulation method. To improve the approximate value of x just obtained, we interpolate directly for p = .70, .71 and .72 with the aid of Lagrange's 5-point formula,

    Inverse linear interpolation in the new table gives

    Hence x = 8.17062 23.

    An estimate of the maximum error in this result is

    (iii) Aitken's method. This is carried out in the same manner as in direct interpolation.

    The estimate of the maximum error in this result is the same as in the subtabulation method. An indication of the error is also provided by the discrepancy in the highest interpolates, in this case x.0,1,2,3,4 and x.0,1,2,3,5.

    6. Bivariate Interpolation

    Bivariate interpolation is generally most simply performed as a sequence of univariate interpolations. We carry out the interpolation in one direction, by one of the methods already described, for several tabular values of the second argument in the neighborhood of its given value. The interpolates are differenced as a check, and interpolation is then carried out in the second direction.
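    The sequence-of-univariate-passes idea, in its simplest (bilinear) form, looks like this; the grid and target values are illustrative only:

```python
def bivariate_linear(xs, ys, f, x, y):
    """Two univariate linear passes over a 2x2 table: first in x at each
    tabular y, then in y over the two interpolates. f[i][j] = f(xs[i], ys[j])."""
    def lin(t0, t1, g0, g1, t):
        p = (t - t0) / (t1 - t0)
        return (1 - p) * g0 + p * g1
    g0 = lin(xs[0], xs[1], f[0][0], f[1][0], x)   # along x at y = ys[0]
    g1 = lin(xs[0], xs[1], f[0][1], f[1][1], x)   # along x at y = ys[1]
    return lin(ys[0], ys[1], g0, g1, y)           # then along y
```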

    An alternative procedure in the case of functions of a complex variable is to use the Taylor's series expansion, provided that successive derivatives of the function can be computed without much difficulty.

    7. Generation of Functions from Recurrence Relations

    Many of the special mathematical functions which depend on a parameter, called their index, order or degree, satisfy a linear difference equation (or recurrence relation) with respect to this parameter. Examples are furnished by the Legendre function Pn(x), the Bessel function Jn(x) and the exponential integral En(x), for which we have the respective recurrence relations

    (n + 1)Pn+1(x) = (2n + 1)xPn(x) – nPn–1(x)

    Jn+1(x) = (2n/x)Jn(x) – Jn–1(x)

    nEn+1(x) = e⁻ˣ – xEn(x)

    Particularly for automatic work, recurrence relations provide an important and powerful computing tool. If the values of Pn(x) or Jn(x) are known for two consecutive values of n, or En(x) is known for one value of n, then the function may be computed for other values of n by successive applications of the relation. Since generation is carried out perforce with rounded values, it is vital to know how errors may be propagated in the recurrence process. If the errors do not grow relative to the size of the wanted function, the process is said to be stable. If, however, the relative errors grow and will eventually overwhelm the wanted function, the process is unstable.

    It is important to realize that stability may depend on (i) the particular solution of the difference equation being computed; (ii) the values of x or other parameters in the difference equation; (iii) the direction in which the recurrence is being applied. Examples are as follows.

                    Stability — Increasing n

                    Stability — Decreasing n

    Illustrations of the generation of functions from their recurrence relations are given in the pertinent chapters. It is also shown that even in cases where the recurrence process is unstable, it may still be used when the starting values are known to sufficient accuracy.

    Mention must also be made here of a refinement, due to J. C. P. Miller, which enables a recurrence process which is stable for decreasing n to be applied without any knowledge of starting values for large n. Miller's algorithm, which is well-suited to automatic work, is described in 19.28, Example 1.
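    A sketch of Miller's idea for the Bessel functions Jn(x), whose downward recurrence is stable: recurse downward from an arbitrary seed well above the wanted orders, then scale the whole sequence with the normalization identity J0(x) + 2J2(x) + 2J4(x) + . . . = 1. The details here, such as the choice of 20 extra orders, are illustrative rather than the handbook's prescription:

```python
def bessel_jn_miller(nmax, x, extra=20):
    """Miller's backward recurrence for J_0(x) .. J_nmax(x).
    J_{n-1}(x) = (2n/x) J_n(x) - J_{n+1}(x), run downward from an
    arbitrary small seed; the seed's scale divides out on normalization."""
    m = nmax + extra
    vals = [0.0] * (m + 2)
    vals[m] = 1e-30                      # arbitrary starting value
    for n in range(m, 0, -1):
        vals[n - 1] = (2 * n / x) * vals[n] - vals[n + 1]
    norm = vals[0] + 2 * sum(vals[k] for k in range(2, m + 1, 2))
    return [v / norm for v in vals[:nmax + 1]]
```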

    8. Acknowledgments

    The production of this volume has been the result of the unrelenting efforts of many persons, all of whose contributions have been instrumental in accomplishing the task. The Editor expresses his thanks to each and every one.

    The Ad Hoc Advisory Committee individually and together were instrumental in establishing the basic tenets that served as a guide in the formation of the entire work. In particular, special thanks are due to Professor Philip M. Morse for his continuous encouragement and support. Professors J. Todd and A. Erdélyi, panel members of the Conferences on Tables and members of the Advisory Committee, have maintained an undiminished interest, offered many suggestions and carefully read all the chapters.

    Irene A. Stegun has served effectively as associate editor, sharing in each stage of the planning of the volume. Without her untiring efforts, completion would never have been possible.

    Appreciation is expressed for the generous cooperation of publishers and authors in granting permission for the use of their source material. Acknowledgments for tabular material taken wholly or in part from published works are given on the first page of each table. Myrtle R. Kellington corresponded with authors and publishers to obtain formal permission for including their material, maintained uniformity throughout the bibliographic references and assisted in preparing the introductory material.

    Valuable assistance in the preparation, checking and editing of the tabular material was received from Ruth E. Capuano, Elizabeth F. Godefroy, David S. Liepman, Kermit Nelson, Bertha H. Walter and Ruth Zucker.

    Equally important has been the untiring cooperation, assistance, and patience of the members of the NBS staff in handling the myriad of detail necessarily attending the publication of a volume of this magnitude. Especially appreciated have been the helpful discussions and services from the members of the Office of Technical Information in the areas of editorial format, graphic art layout, printing detail, preprinting reproduction needs, as well as attention to promotional detail and financial support. In addition, the clerical and typing staff of the Applied Mathematics Division merit commendation for their efficient and patient production of manuscript copy involving complicated technical notation.

    Finally, the continued support of Dr. E. W. Cannon, chief of the Applied Mathematics Division, and the advice of Dr. F. L. Alt, assistant chief, as well as of the many mathematicians in the Division, is gratefully acknowledged.

    M. ABRAMOWITZ.

    ¹ The most recent, the sixth, with F. Loesch added as co-author, was published in 1960 by McGraw-Hill, U.S.A., and Teubner, Germany.

    ² The second edition, with L. J. Comrie added as co-author, was published in two volumes in 1962 by Addison-Wesley, U.S.A., and Scientific Computing Service Ltd., Great Britain.

    ³ A. C. Aitken, On interpolation by iteration of proportional parts, without the use of differences, Proc. Edinburgh Math. Soc. 3, 56–76 (1932).

    1. Mathematical Constants

    DAVID S. LIEPMAN¹

    Contents

    Table 1.1. Mathematical Constants

    prime <100,   20S

    Some roots of 2, 3, 5, 10, 100, 1000, e,   20S

    e±n, n = 1(1)10,   25S

    e±nπ, n = 1(1)10,   25S

    e±e, e± γ,   20S

    ln n, log10 n, n = 2(1)10, primes <100,   26S, 25S

    ln π, log10 π, log10 e,   25S

    n ln 10, n = 1(1)9,   25S

    nπ, n = 1(1)9,   25S

    π±n,n = 1(1)10,   25S

    Fractions of π powers and roots involving π,   25S

    1 radian in degrees,   26S

    1°, 1', 1" in radians,   24D

    γ, ln γ,   24D

    TABLE 1.1.   MATHEMATICAL CONSTANTS

    ¹ National Bureau of Standards.

    * See page II.

    2. Physical Constants and Conversion Factors

    A. G. MCNISH¹

    Contents

    Table 2.1. Common Units and Conversion Factors

    Table 2.2. Names and Conversion Factors for Electric and Magnetic Units

    Table 2.3. Adjusted Values of Constants

    Table 2.4. Miscellaneous Conversion Factors

    Table 2.5. Conversion Factors for Customary U.S. Units to Metric Units

    Table 2.6. Geodetic Constants

    2. Physical Constants and Conversion Factors

    The tables in this chapter supply some of the more commonly needed physical constants and conversion factors.

    All scientific measurements in the fields of mechanics and heat are based upon four international arbitrarily adopted units, the magnitudes of which are fixed by four agreed on standards:

    Length – the meter – fixed by the vacuum wavelength of radiation corresponding to the transition 2p10 – 5d5 of krypton 86

    (1 meter = 1 650 763.73 λ).

    Mass – the kilogram – fixed by the international kilogram at Sèvres, France.

    Time – the second – fixed as 1/31,556,925.9747 of the tropical year 1900 at 12h ephemeris time, or the duration of 9,192,631,770 cycles of the hyperfine transition frequency of cesium 133.

    Temperature – the degree – fixed on a thermodynamic basis by taking the temperature for the triple point of natural water as 273.16 °K. (The Celsius scale is obtained by adding –273.15 to the Kelvin scale.)

    Other units are defined in terms of them by assigning the value unity to the proportionality constant in each defining equation. The entire system, including electricity units, is called the Système International d'Unités (SI). Taking the 1/100 part of the meter as the unit of length and the 1/1000 part of the kilogram as the unit of mass, similarly, gives rise to the CGS system, often used in physics and chemistry.

    Table 2.1. Common Units and Conversion Factors

    The SI unit of electric current is the ampere defined by the equation 2ΓmI1I2/4π = F giving the force in vacuo per unit length between two infinitely long parallel conductors of infinitesimal cross-section. If F is in newtons, and Γm has the numerical value 4π × 10⁻⁷, then I1 and I2 are in amperes. The customary equations define the other electric and magnetic units of SI such as the volt, ohm, farad, henry, etc. The force between electric charges in a vacuum in this system is given by Q1Q2/4πΓer² = F, Γe having the numerical value 10⁷/4πc² where c is the speed of light in meters per second (Γe = 8.854 × 10⁻¹²).

    The CGS unrationalized system is obtained by deleting 4π in the denominators in these equations and expressing F in dynes, and r in centimeters. Setting Γm equal to unity defines the CGS unrationalized electromagnetic system (emu), Γe then taking the numerical value of 1/c².   Setting Γe equal to unity defines the CGS unrationalized electrostatic system (esu), Γm then taking the numerical value of 1/c².

    Table 2.2. Names and Conversion Factors for Electric and Magnetic Units

    Example: If the value assigned to a current is 100 amperes its value in abamperes is 100 × 10⁻¹ = 10.

    *Divide this number by 4π if unrationalized system is involved; other numbers are unchanged.

    The values of constants given in Table 2.3 are based on an adjustment by Taylor, Parker, and Langenberg, Rev. Mod. Phys. 41, p. 375 (1969). They are being considered for adoption by the Task Group on Fundamental Constants of the Committee on Data for Science and Technology, International Council of Scientific Unions. The uncertainties given are standard errors estimated from the experimental data included in the adjustment. Where applicable, values are based on the unified scale of atomic masses in which the atomic mass unit (u) is defined as 1/12 of the mass of the atom of the ¹²C nuclide.

    Table 2.3. Adjusted Values of Constants

    Table 2.4. Miscellaneous Conversion Factors

    1 curie, the quantity of radioactive material undergoing 3.7 × 10¹⁰ disintegrations per second*.

    1 roentgen, the exposure of x- or gamma radiation which produces together with its secondaries 2.082 × 10⁹ electron-ion pairs in 0.001 293 gram of air.

    The index of refraction of the atmosphere for radio waves of frequency less than 3 × 10¹⁰ Hz is given by (n – 1)10⁶ = (77.6/t)(p + 4810e/t), where n is the refractive index; t, temperature in kelvins; p, total pressure in millibars; e, water vapor partial pressure in millibars.
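    As a small worked instance of the refractivity formula (the sample temperature and pressures below are hypothetical):

```python
def radio_refractivity(t, p, e):
    """(n - 1) * 10^6 from the formula in the text: (77.6/t)(p + 4810 e/t),
    with t in kelvins, p and e in millibars."""
    return (77.6 / t) * (p + 4810.0 * e / t)

# Hypothetical surface conditions: 288 K, 1013 mb total, 10 mb water vapor
N = radio_refractivity(288.0, 1013.0, 10.0)
```

    N comes out near 318, i.e. n ≈ 1.000318.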

    Factors for converting the customary United States units to units of the metric system are given in Table 2.5.

    Table 2.5. Factors for Converting Customary U.S. Units to SI Units

    Geodetic constants for the international (Hayford) spheroid are given in Table 2.6. The gravity values are on the basis of the revised Potsdam value. They are about 14 parts per million smaller than previous values. They are calculated for the surface of the geoid by the international formula.

    Table 2.6. Geodetic Constants

    a = 6 378 388 m; f = 1/297; b = 6 356 912 m

    ¹ National Bureau of Standards.

    ¹Used principally by chemists.

    ² Used principally by engineers.

    ³ Various definitions are given for the British thermal unit. This represents a rounded mean value differing from none of the more important definitions by more than 3 in 10⁴.

    *Exact value.

    3. Elementary Analytical Methods

    MILTON ABRAMOWITZ¹

    Contents

    Elementary Analytical Methods

    3.1. Binomial Theorem and Binomial Coefficients; Arithmetic and Geometric Progressions; Arithmetic, Geometric, Harmonic and Generalized Means

    3.2. Inequalities

    3.3. Rules for Differentiation and Integration

    3.4. Limits, Maxima and Minima

    3.5. Absolute and Relative Errors

    3.6. Infinite Series

    3.7. Complex Numbers and Functions

    3.8. Algebraic Equations

    3.9. Successive Approximation Methods

    3.10. Theorems on Continued Fractions

    Numerical Methods

    3.11. Use and Extension of the Tables

    3.12. Computing Techniques

    References

    Table 3.1. Powers and Roots

    nᵏ, k = 1(1)10, 24, 1/2, 1/3, 1/4, 1/5

    n = 2(1)999, Exact or 10S

    The author acknowledges the assistance of Peter J. O’Hara and Kermit C. Nelson in the preparation and checking of the table of powers and roots.

    3. Elementary Analytical Methods

    3.1. Binomial Theorem and Binomial Coefficients; Arithmetic and Geometric Progressions; Arithmetic, Geometric, Harmonic and Generalized Means

    Binomial Theorem

    3.1.1

    (a + b)ⁿ = aⁿ + naⁿ⁻¹b + [n(n – 1)/2!]aⁿ⁻²b² + . . . + nabⁿ⁻¹ + bⁿ   (n a positive integer)

    Binomial Coefficients (see chapter 24)

    3.1.2

    *     

    3.1.3     

    3.1.4                 

    3.1.5                            

    3.1.6           

    3.1.7     

    Table of Binomial Coefficients  

    3.1.8

    For a more extensive table see chapter 24.

    3.1.9

    Sum of Arithmetic Progression to n Terms

                                a + (a + d) + (a + 2d) + . . . + [a + (n – 1)d] = na + ½n(n – 1)d = ½n(a + l)

                                last term in series = l = a + (n – 1)d

    Sum of Geometric Progression to n Terms

    3.1.10

    a + ar + ar² + . . . + arⁿ⁻¹ = a(1 – rⁿ)/(1 – r)   (r ≠ 1)

    Arithmetic Mean of n Quantities A

    3.1.11                                A = (a1 + a2 + . . . + an)/n

    Geometric Mean of n Quantities G

    3.1.12    G = (a1a2 … an)¹/n   (ak > 0 , k = 1 , 2 , . . . , n)

    Harmonic Mean of n Quantities H

    3.1.13                                1/H = (1/n)(1/a1 + 1/a2 + . . . + 1/an)   (ak > 0)

    Generalized Mean

    3.1.14                               M(t) = [(a1^t + a2^t + . . . + an^t)/n]^(1/t)   (t ≠ 0)

    3.1.15                   M(t) = 0 (t < 0 , some ak zero)

    3.1.16                         lim t→∞ M(t) = max (a1 , a2 , . . . , an) = max. a

    3.1.17                        lim t→–∞ M(t) = min (a1 , a2 , . . . , an) = min. a

    3.1.18                                      lim t→0 M(t) = G

    3.1.19                                      M(1) = A

    3.1.20                                      M(-1) = H
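
The generalized mean and its special cases translate directly into code; the following Python sketch (ours, not the Handbook's — names are illustrative) computes M(t) and recovers A, G, H as the cases t = 1, 0, −1:

```python
import math

def generalized_mean(a, t):
    """M(t) = ((a1^t + ... + an^t)/n)^(1/t); the limit t -> 0 gives the geometric mean."""
    n = len(a)
    if t == 0:  # limiting case 3.1.18: M(0) = G
        return math.exp(sum(math.log(x) for x in a) / n)
    return (sum(x ** t for x in a) / n) ** (1.0 / t)

a = [1.0, 2.0, 4.0]
A = generalized_mean(a, 1)    # arithmetic mean (3.1.19)
G = generalized_mean(a, 0)    # geometric mean (3.1.18)
H = generalized_mean(a, -1)   # harmonic mean (3.1.20)
```

For these values A = 7/3, G = 2, H = 12/7, so A ≥ G ≥ H, in line with 3.2.1.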

    3.2. Inequalities

    Relation Between Arithmetic, Geometric, Harmonic and Generalized Means

    3.2.1

    A ≥ G ≥ H , equality if and only if a1 = a2 = . . . = an

    3.2.2                              min a ≤ M(t) ≤ max a

    3.2.3                              min a < M(t) < max a

    unless all the ak are equal, or t < 0 and an ak is zero

    3.2.4    M(t)<M(s) if t < s unless all ak are equal, or s < 0 and an ak is zero.

    Triangle Inequalities

    3.2.5                         ||a1| − |a2|| ≤ |a1 + a2| ≤ |a1| + |a2|

    3.2.6                         |a1 + a2 + . . . + an| ≤ |a1| + |a2| + . . . + |an|

    Chebyshev’s Inequality

     If a1 ≥ a2 ≥ a3 ≥ . . . ≥ an and b1 ≥ b2 ≥ b3 ≥ . . . ≥ bn, then

    3.2.7        n(a1b1 + a2b2 + . . . + anbn) ≥ (a1 + a2 + . . . + an)(b1 + b2 + . . . + bn)

    Hölder’s Inequality for Sums

    If 1/p + 1/q = 1, p > 1, q > 1, then

    3.2.8         Σ|akbk| ≤ (Σ|ak|^p)^(1/p) (Σ|bk|^q)^(1/q) ;

    equality holds if and only if |bk| = c|ak|^(p−1) (c = constant > 0). If p = q = 2 we get

    Cauchy’s Inequality

    3.2.9        (Σakbk)² ≤ (Σak²)(Σbk²)

       (equality for ak = cbk , c constant).

    Hölder’s Inequality for Integrals

    If 1/p + 1/q = 1, p > 1, q > 1, then

    3.2.10       ∫|ƒ(x)g(x)| dx ≤ (∫|ƒ(x)|^p dx)^(1/p) (∫|g(x)|^q dx)^(1/q) ;

    equality holds if and only if |g(x)| = c|ƒ(x)|^(p−1) (c = constant > 0).

    If p = q = 2 we get

    Schwarz’s Inequality

    3.2.11       (∫ƒ(x)g(x) dx)² ≤ (∫ƒ²(x) dx)(∫g²(x) dx)

    Minkowski’s Inequality for Sums

    If p > 1 and ak , bk > 0 for all k, then

    3.2.12       (Σ(ak + bk)^p)^(1/p) ≤ (Σak^p)^(1/p) + (Σbk^p)^(1/p) ,

    equality holds if and only if bk = cak (c = constant > 0).

    Minkowski’s Inequality for Integrals

    If p > 1, then

    3.2.13       (∫|ƒ(x) + g(x)|^p dx)^(1/p) ≤ (∫|ƒ(x)|^p dx)^(1/p) + (∫|g(x)|^p dx)^(1/p) ;

    equality holds if and only if g(x) = cƒ(x) (c = constant > 0).
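
These inequalities are easy to spot-check numerically; the sketch below (illustrative, not from the Handbook) verifies Hölder's, Cauchy's and Minkowski's inequalities for sums on random data, with conjugate exponents satisfying 1/p + 1/q = 1:

```python
import random

def lp_norm(v, p):
    """(sum |v_k|^p)^(1/p)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

random.seed(0)
a = [random.uniform(-1, 1) for _ in range(50)]
b = [random.uniform(-1, 1) for _ in range(50)]
p, q = 3.0, 1.5  # conjugate exponents: 1/3 + 2/3 = 1

# 3.2.8, 3.2.9, 3.2.12 (small slack for floating-point rounding)
holder = sum(abs(x * y) for x, y in zip(a, b)) <= lp_norm(a, p) * lp_norm(b, q) + 1e-12
cauchy = sum(x * y for x, y in zip(a, b)) ** 2 <= \
    sum(x * x for x in a) * sum(y * y for y in b) + 1e-12
minkowski = lp_norm([x + y for x, y in zip(a, b)], p) <= \
    lp_norm(a, p) + lp_norm(b, p) + 1e-12
```

All three booleans come out true for any sample, as the inequalities guarantee.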

    3.3. Rules for Differentiation and Integration

    Derivatives

    3.3.1        

    3.3.2                

    3.3.3                

    3.3.4          

    3.3.5                    

    3.3.6         

    Leibniz’s Theorem for Differentiation of an Integral

    3.3.7     (d/dc) ∫_{a(c)}^{b(c)} ƒ(x, c) dx = ∫_{a(c)}^{b(c)} (∂ƒ/∂c) dx + ƒ(b, c)(db/dc) − ƒ(a, c)(da/dc)

    Leibniz’s Theorem for Differentiation of a Product

    3.3.8     dⁿ(uv)/dxⁿ = Σ_{k=0}^{n} (n choose k)(d^{n−k}u/dx^{n−k})(dᵏv/dxᵏ)

    3.3.9                                  

    3.3.10                        

    3.3.11      

    Integration by Parts

    3.3.12                    ∫u dv = uv − ∫v du

    3.3.13   

    Integrals of Rational Algebraic Functions

    (Integration constants are omitted)

    3.3.14    

    3.3.15                  

    The following formulas are useful for evaluating

     where P(x) is a polynomial and n>1 is an integer.

    3.3.16

    3.3.17             

    3.3.18              

    3.3.19

    3.3.20

    3.3.21           

    3.3.22        

    3.3.23           

    3.3.24     

    3.3.25    

    Integrals of Irrational Algebraic Functions

    3.3.26   

    3.3.27                                             

    3.3.28                                             

    3.3.29     

    3.3.30                                      

    3.3.31

    3.3.32

    3.3.33

    3.3.34                        

    3.3.35                           = a^(−1/2) ln |2ax + b|   (a > 0 , b² = 4ac)

    3.3.36                     

    3.3.37

    3.3.38

    3.3.39

    3.3.40              

    3.3.41

    3.3.42       

    3.3.43                  

    3.3.44                        

    3.3.45    

    3.3.46        

    3.3.47               

    3.3.48

    3.3.49

    3.3.50

    3.4. Limits, Maxima and Minima

    Indeterminate Forms (L’Hospital’s Rule)

    3.4.1   Let ƒ(x) and g(x) be differentiable on an interval a ≤ x < b for which g′(x) ≠ 0.

    If

    lim x→b ƒ(x) = 0 and lim x→b g(x) = 0,

    or if

    lim x→b ƒ(x) = ∞ and lim x→b g(x) = ∞,

    and if

    lim x→b ƒ′(x)/g′(x) = l,

    then lim x→b ƒ(x)/g(x) = l.

    Both b and l may be finite or infinite.

    Maxima and Minima

    3.4.2   (1) Functions of One Variable

    The function y = ƒ(x) has a maximum at x = x0 if ƒ′(x0) = 0 and ƒ″(x0)<0, and a minimum at x = x0 if ƒ′(x0) = 0 and ƒ″(x0)>0.   Points x0 for which ƒ′(x0) = 0 are called stationary points.

    3.4.3   (2) Functions of Two Variables

    The function ƒ(x, y) has a maximum or minimum for those values of (x0 , y0) for which

    ∂ƒ/∂x = 0 ,   ∂ƒ/∂y = 0

    and for which

    (∂²ƒ/∂x∂y)² − (∂²ƒ/∂x²)(∂²ƒ/∂y²) < 0 :

    (a) ƒ(x , y) has a maximum

                                  if ∂²ƒ/∂x² < 0 and ∂²ƒ/∂y² < 0 at (x0 , y0),

    (b) ƒ(x, y) has a minimum

                                  if ∂²ƒ/∂x² > 0 and ∂²ƒ/∂y² > 0 at (x0 , y0).
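
The second-derivative test for two variables can be sketched with finite-difference approximations to the partials (an illustrative Python fragment; the function, names and step size are our assumptions, not the Handbook's):

```python
def classify_stationary(f, x0, y0, h=1e-4):
    """Classify a stationary point of f(x, y) via the second-derivative test."""
    fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
    fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2
    fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
           - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)
    disc = fxy**2 - fxx * fyy          # must be negative for an extremum
    if disc < 0 and fxx < 0:
        return "maximum"
    if disc < 0 and fxx > 0:
        return "minimum"
    return "saddle or inconclusive"

kind = classify_stationary(lambda x, y: x * x + 2 * y * y, 0.0, 0.0)
```

For f = x² + 2y² the origin is a stationary point with fxx, fyy > 0 and negative discriminant, so the test reports a minimum.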

    3.5. Absolute and Relative Errors

    (1) If x0 is an approximation to the true value of x, then

    3.5.1   (a) the absolute error of x0 is Δx = x0 − x; x − x0 is the correction to x.

    3.5.2   (b) the relative error of x0 is δx = Δx/x ≈ Δx/x0

    3.5.3   (c) the percentage error is 100 times the relative error.

    3.5.4   (2) The absolute error of the sum or difference of several numbers is at most equal to the sum of the absolute errors of the individual numbers.

    3.5.5   (3) If ƒ(x1 , x2 , . . ., xn) is a function of x1 , x2 , . . ., xn and the absolute error in xi (i = 1, 2, . . ., n) is Δxi , then the absolute error in ƒ is approximately Δƒ ≈ Σ (∂ƒ/∂xi)Δxi .

    3.5.6   (4) The relative error of the product or quotient of several factors is at most equal to the sum of the relative errors of the individual factors.

    3.5.7

    (5) If y = ƒ(x), the relative error δy = Δy/y ≈ (ƒ′(x)/ƒ(x))Δx .

    Approximate Values

               If |ε| << 1 , |η| << 1 , b << a,

    3.5.8                                  (a + b)ᵏ ≈ aᵏ + kaᵏ⁻¹b

    3.5.9                                  (1 + ε)(1 + η) ≈ 1 + ε + η

    3.5.10                                 
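
The first-order approximations 3.5.8 and 3.5.9 are easy to check numerically; in this sketch (illustrative values, not from the Handbook) the error of 3.5.9 is exactly the neglected product εη:

```python
# 3.5.9: (1 + eps)(1 + eta) ~ 1 + eps + eta, with error eps*eta
eps, eta = 1e-3, -2e-3
exact = (1 + eps) * (1 + eta)
approx = 1 + eps + eta
err = abs(exact - approx)              # = |eps * eta| = 2e-6

# 3.5.8: (a + b)^k ~ a^k + k a^(k-1) b when b << a
a, b, k = 10.0, 0.01, 3
exact_pow = (a + b) ** k               # 1003.003001
approx_pow = a**k + k * a**(k - 1) * b  # 1003.0
```

The power approximation is off only in the second-order term 3a b², about 0.003 here against a value of order 10³.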

    3.6. Infinite Series

    Taylor’s Formula for a Single Variable

    3.6.1

    3.6.2

    3.6.3

    3.6.4

    3.6.5                 

    Lagrange’s Expansion

    If y = ƒ(x), y0 = ƒ(x0), ƒ′(x0) ≠ 0, then

    3.6.6

    3.6.7

    g(x) = g(x0)

    where g(x) is any infinitely differentiable function.

    Binomial Series

    3.6.8

                                            (–1<x<1)

    3.6.9

    ,

    3.6.10

    (1 + x)⁻¹ = 1 − x + x² − x³ + x⁴ − . . .   (−1 < x < 1)

    3.6.11

    3.6.12

    3.6.13

    3.6.14

    Asymptotic Expansions

    3.6.15 A series a0 + a1x⁻¹ + a2x⁻² + . . . is said to be an asymptotic expansion of a function ƒ(x) if

    lim x→∞ xⁿ[ƒ(x) − (a0 + a1x⁻¹ + . . . + anx⁻ⁿ)] = 0

    for every n = 1, 2, . . . . We write ƒ(x) ~ a0 + a1x⁻¹ + a2x⁻² + . . . .

    The series itself may be either convergent or divergent.

    Operations With Series

                                    Let s1 = 1 + a1x + a2x² + a3x³ + a4x⁴ + . . .

    s2 = 1 + b1x + b2x² + b3x³ + b4x⁴ + . . .

    s3 = 1 + c1x + c2x² + c3x³ + c4x⁴ +. . .

    Reversion of Series

    3.6.25   Given

    y = ax + bx² + cx³ + dx⁴ + ex⁵ + ƒx⁶ + gx⁷ + . . .

    then

    x = Ay + By² + Cy³ + Dy⁴ + Ey⁵ + Fy⁶ + Gy⁷ + . . .

    where

    aA = 1

    a³B = −b

    a⁵C = 2b² − ac

    a⁷D = 5abc − a²d − 5b³

    a⁹E = 6a²bd + 3a²c² + 14b⁴ − a³e − 21ab²c

    a¹¹F = 7a³be + 7a³cd + 84ab³c − a⁴ƒ − 28a²bc² − 42b⁵ − 28a²b²d

    a¹³G = 8a⁴bƒ + 8a⁴ce + 4a⁴d² + 120a²b³d + 180a²b²c² + 132b⁶ − a⁵g − 36a³b²e − 72a³bcd − 12a³c³ − 330ab⁴c
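
The coefficient relations can be cross-checked by reverting a truncated power series numerically. The fixed-point iteration below (our sketch in exact rational arithmetic, not the Handbook's method; names are illustrative) recovers A through D for sample coefficients:

```python
from fractions import Fraction as F

def mul(p, q, order):
    """Multiply two truncated power series (coefficient lists, index = power)."""
    r = [F(0)] * (order + 1)
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if qj and i + j <= order:
                    r[i + j] += pi * qj
    return r

def revert(a, order):
    """Given y = a[1] x + a[2] x^2 + ..., return x as a series in y.
    Iterates x <- (y - sum_{j>=2} a[j] x^j) / a[1]; each pass fixes one more order."""
    x = [F(0)] * (order + 1)
    for _ in range(order):
        new = [F(0)] * (order + 1)
        new[1] = F(1)                  # the lone y term
        power = x[:]                   # running power x^j, starting at j = 1
        for j in range(2, order + 1):
            power = mul(power, x, order)
            for i in range(order + 1):
                new[i] -= a[j] * power[i]
        x = [c / a[1] for c in new]
    return x

a = [F(0), F(2), F(3), F(5), F(7), F(11), F(13), F(17)]  # a, b, c, d, e, f, g
X = revert(a, 7)                       # X[1..7] = A, B, C, D, E, F, G
```

Substituting the reverted series back into the original reproduces y through order 7, and the computed X[1]..X[4] satisfy aA = 1, a³B = −b, a⁵C = 2b² − ac, a⁷D = 5abc − a²d − 5b³ exactly.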

    Kummer’s Transformation of Series

    3.6.26   Let Σ_{k=0}^∞ ak be a given convergent series and Σ_{k=0}^∞ ck be a given convergent series with known sum c, such that lim k→∞ ak/ck = λ ≠ 0. Then

    Σ_{k=0}^∞ ak = λc + Σ_{k=0}^∞ (1 − λck/ak) ak .

    Euler’s Transformation of Series

    3.6.27   If Σ_{k=0}^∞ (−1)ᵏak is a convergent series with sum s, then

    s = Σ_{k=0}^∞ (−1)ᵏ Δᵏa0/2ᵏ⁺¹ , where Δᵏa0 is the kth forward difference of a0 (Δa0 = a1 − a0).

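In the forward-difference form s = Σ (−1)ᵏΔᵏa0/2ᵏ⁺¹, Euler's transformation is easy to apply numerically; this sketch (illustrative, not from the Handbook) accelerates the slowly convergent series ln 2 = 1 − 1/2 + 1/3 − 1/4 + . . . :

```python
def euler_transform(a, terms):
    """Sum a0 - a1 + a2 - ... as sum_k (-1)^k * (forward difference Δ^k a0) / 2^(k+1)."""
    diffs = list(a)                # row of forward differences, updated in place
    s = 0.0
    for k in range(terms):
        s += (-1) ** k * diffs[0] / 2 ** (k + 1)
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
    return s

terms_a = [1.0 / n for n in range(1, 40)]   # a_k = 1/(k+1)
approx = euler_transform(terms_a, 20)       # close to ln 2 = 0.6931471805...
```

Twenty transformed terms give roughly eight correct digits, whereas twenty terms of the original alternating series are off in the second decimal.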
    Euler–Maclaurin Summation Formula

    3.6.28

    3.7. Complex Numbers and Functions

    Cartesian Form

    3.7.1                                               z = x + iy

    Polar Form

    3.7.2                                     z = reiθ = r(cos θ + i sin θ)

    3.7.3                                Modulus: |z| = (x² + y²)¹/²

    3.7.4   Argument: arg z = arctan (y/x) = θ (other notations for arg z are am z and ph z).

    3.7.5                                 Real Part: x = r cos θ

    3.7.6                             Imaginary Part: y = r sin θ

    Complex Conjugate of z

    3.7.7                                 z̄ = x − iy = re^(−iθ)

    3.7.8                                              

    3.7.9                                      

    Multiplication and Division

          If z1 = x1 + iy1, z2 = x2 + iy2, then

    3.7.10               z1z2 = x1x2 − y1y2 + i(x1y2 + x2y1)

    3.7.11                             |z1z2| = |z1| |z2|

    3.7.12                    arg (z1z2) = arg z1 + arg z2

    3.7.13      z1/z2 = [(x1x2 + y1y2) + i(x2y1 − x1y2)]/(x2² + y2²)

    3.7.14                              |z1/z2| = |z1|/|z2|

    3.7.15               arg (z1/z2) = arg z1 − arg z2

    Powers

    3.7.16      zⁿ = rⁿe^(inθ)

    3.7.17          = rⁿ cos nθ + irⁿ sin nθ

                                                     (n = 0,±l,±2,...)

    3.7.18                   z² = x² – y² + i(2xy)

    3.7.19              z³ = x³ − 3xy² + i(3x²y − y³)

    3.7.20          z⁴ = x⁴ – 6x²y² + y⁴ + i(4x³y – 4xy³)

    3.7.21      z⁵ = x⁵ − 10x³y² + 5xy⁴ + i(5x⁴y − 10x²y³ + y⁵)

    3.7.22

    If zⁿ = un + ivn , then zⁿ⁺¹ = un+1 + ivn+1 where

    3.7.23    un+1 = xun − yvn ;   vn+1 = xvn + yun

    un and vn are called harmonic polynomials.

    3.7.24                    

    3.7.25                    
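
The recurrence 3.7.23 gives a cheap way to build zⁿ = uₙ + ivₙ with real arithmetic only; a small sketch (illustrative names, not from the Handbook):

```python
def harmonic_polys(x, y, n):
    """Return (u_n, v_n) with z^n = u_n + i v_n, via u <- xu - yv, v <- xv + yu."""
    u, v = 1.0, 0.0            # z^0 = 1
    for _ in range(n):
        u, v = x * u - y * v, x * v + y * u
    return u, v

u, v = harmonic_polys(2.0, 1.0, 3)   # (2 + i)^3
```

For z = 2 + i and n = 3 this gives u = x³ − 3xy² = 2 and v = 3x²y − y³ = 11, matching 3.7.19.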

    Roots

    3.7.26      

    If −π < θ ≤ π this is the principal root. The other root has the opposite sign. The principal root is given by

    3.7.27    z¹/² = ±(u + iv) , u = [(r + x)/2]¹/² , v = ±[(r − x)/2]¹/² , where 2uv = y and where the ambiguous sign of v is taken to be the same as the sign of y.

    3.7.28   z¹/ⁿ = r¹/ⁿ e^(iθ/n) (principal root if −π < θ ≤ π). Other roots are r¹/ⁿ e^(i(θ + 2πk)/n) (k = 1, 2, 3, . . ., n − 1).
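
The n roots of 3.7.28 can be generated with Python's cmath, whose polar angle already lies in (−π, π], so k = 0 yields the principal root (an illustrative sketch, not from the Handbook):

```python
import cmath
import math

def nth_roots(z, n):
    """All n nth roots of z: r^(1/n) * exp(i(theta + 2*pi*k)/n), k = 0, ..., n-1."""
    r, theta = cmath.polar(z)          # theta in (-pi, pi]
    return [r ** (1.0 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-8 + 0j, 3)          # cube roots of -8
```

For z = −8 the principal cube root is 1 + i√3, and k = 1 gives the real root −2.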

    Inequalities

    3.7.29                     ||z1| − |z2|| ≤ |z1 + z2| ≤ |z1| + |z2|

    Complex Functions, Cauchy–Riemann Equations

    ƒ(z) = ƒ(x + iy) = u(x, y) + iv(x, y), where u(x, y), v(x, y) are real, is analytic at those points z = x + iy at which

    3.7.30                            ∂u/∂x = ∂v/∂y ,   ∂u/∂y = −∂v/∂x

             If z = reiθ,

    3.7.31                            ∂u/∂r = (1/r)∂v/∂θ ,   ∂v/∂r = −(1/r)∂u/∂θ

    Laplace’s Equation

    The functions u(x, y) and v(x,y) are called harmonic functions and satisfy Laplace’s equation:

    Cartesian Coordinates

    3.7.32                            ∂²u/∂x² + ∂²u/∂y² = 0 ,   ∂²v/∂x² + ∂²v/∂y² = 0

    Polar Coordinates

    3.7.33        ∂²u/∂r² + (1/r)∂u/∂r + (1/r²)∂²u/∂θ² = 0 , and similarly for v

    3.8. Algebraic Equations

    Solution of Quadratic Equations

    3.8.1   Given az² + bz + c = 0, the roots are z1,2 = (−b ± q¹/²)/(2a), where q = b² − 4ac.

            If q > 0, two real roots,

    q = 0, two equal roots,

    q < 0, pair of complex conjugate roots.
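
A direct transcription of the quadratic formula, using complex square roots so the q < 0 case yields the conjugate pair (an illustrative sketch, not from the Handbook):

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of az^2 + bz + c = 0; q = b^2 - 4ac decides the character of the roots."""
    q = b * b - 4 * a * c
    s = cmath.sqrt(q)                  # complex sqrt handles q < 0
    return (-b + s) / (2 * a), (-b - s) / (2 * a)

r1, r2 = solve_quadratic(1, -3, 2)     # z^2 - 3z + 2: q = 1 > 0, roots 2 and 1
c1, c2 = solve_quadratic(1, 0, 1)      # z^2 + 1: q = -4 < 0, conjugate pair +-i
```

(For production use, the numerically stable variant avoids cancellation by computing one root from −b − sign(b)√q and the other from c/(a z1).)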

    Solution of Cubic Equations

    3.8.2   Given z³ + a2z² + a1z + a0 = 0, let

    q = a1/3 − a2²/9 ,   r = (a1a2 − 3a0)/6 − a2³/27 .

    If q³ + r² > 0, one real root and a pair of complex conjugate roots,

    q³ + r² = 0, all roots real and at least two are equal,

        q³ + r² < 0, all roots real (irreducible case).

    Let

    s1 = [r + (q³ + r²)¹/²]¹/³ ,   s2 = [r − (q³ + r²)¹/²]¹/³ ;

    then

    z1 = (s1 + s2) − a2/3 ,   z2,3 = −(s1 + s2)/2 − a2/3 ± (i√3/2)(s1 − s2) .

    If z1, z2, z3 are the roots of the cubic equation

    z1 + z2 + z3 = −a2

    z1z2 + z1z3 + z2z3 = a1

    z1z2z3 = — a0
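
The q, r, s1, s2 recipe can be coded directly for the case q³ + r² ≥ 0 (one real root); the sketch below is ours, with illustrative names, and simply refuses the irreducible case:

```python
import math

def solve_cubic_one_real(a2, a1, a0):
    """Roots of z^3 + a2 z^2 + a1 z + a0 = 0 when q^3 + r^2 >= 0."""
    q = a1 / 3 - a2 ** 2 / 9
    r = (a1 * a2 - 3 * a0) / 6 - a2 ** 3 / 27
    disc = q ** 3 + r ** 2
    if disc < 0:
        raise ValueError("irreducible case: three distinct real roots")
    cbrt = lambda t: math.copysign(abs(t) ** (1 / 3), t)   # real cube root
    s1 = cbrt(r + math.sqrt(disc))
    s2 = cbrt(r - math.sqrt(disc))
    z1 = (s1 + s2) - a2 / 3                                # the real root
    re = -(s1 + s2) / 2 - a2 / 3                           # conjugate pair
    im = math.sqrt(3) / 2 * (s1 - s2)
    return z1, complex(re, im), complex(re, -im)

z1, z2, z3 = solve_cubic_one_real(0.0, 0.0, -1.0)   # z^3 - 1 = 0
```

For z³ − 1 = 0 this gives q = 0, r = 1/2, s1 = 1, s2 = 0, hence z1 = 1 and the pair −1/2 ± i√3/2, and the root sums agree with the relations above.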

    Solution of Quartic Equations

    3.8.3   Given z⁴ + a3z³ + a2z² + a1z + a0 = 0, find the real root u1 of the cubic equation

    u³ − a2u² + (a1a3 − 4a0)u − (a1² + a0a3² − 4a0a2) = 0

    and determine the four roots of the quartic as solutions of the two quadratic equations

    z² + [a3 ± (a3² − 4a2 + 4u1)¹/²]z/2 + [u1 ∓ (u1² − 4a0)¹/²]/2 = 0 .

    If all roots of the cubic equation are real, use the value of u1 which gives real coefficients in the quadratic equation and select signs so that if

    z⁴ + a3z³ + a2z² + a1z + a0 = (z² + plz + ql)(z² + p2z + q2),

    then

    p1 + p2 = a3 ,  p1p2 + q1 + q2 = a2 ,  p1q2 + p2q1 = a1 ,  q1q2 = a0 .

         If z1 , z2 , z3 , z4 are the roots,

                       Σzi = −a3 ,  Σzizjzk = −a1 ,

                           Σzizj = a2 ,  z1z2z3z4 = a0 .

    3.9. Successive Approximation Methods

    General Comments

    3.9.1   Let x = x1 be an approximation to x = ξ where ƒ(ξ) = 0 and both x1 and ξ are in the interval a ≤ x ≤ b. We define

    xn+1 = xn + cnƒ(xn)   (n = 1, 2, . . .).

    Then, if ƒ′(x) ≥ 0 and the constants cn are negative and bounded, the sequence xn converges monotonically to the root ξ.

    If cn = c = constant < 0 and ƒ ′(x)> 0, then the process converges but not necessarily monotonically.

    Degree of Convergence of an Approximation Process

    3.9.2   Let x1 , x2 , x3 , . . . be an infinite sequence of approximations to a number ξ. Then, if

                |xn+1 − ξ| < A|xn − ξ|ᵏ   (n = 1, 2, . . .)

    where A and k are independent of n, the sequence is said to have convergence of at most the kth degree (or order or index) to ξ. If k = 1 and A<1 the convergence is linear; if k = 2 the convergence is quadratic.

    Regula Falsi (False Position)

    3.9.3   Given y = ƒ(x), to find ξ such that ƒ(ξ) = 0, choose x0 and x1 such that ƒ(x0) and ƒ(x1) have opposite signs and compute

    x2 = x1 − ƒ(x1)(x1 − x0)/[ƒ(x1) − ƒ(x0)] .

    Then continue with x2 and either of x0 or x1 for which ƒ(x0) or ƒ(x1) is of opposite sign to ƒ(x2).

    Regula falsi is equivalent to inverse linear interpolation.
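
A compact implementation of regula falsi, retaining whichever endpoint keeps the root bracketed (an illustrative sketch, not from the Handbook):

```python
def regula_falsi(f, x0, x1, tol=1e-12, max_iter=100):
    """False position: secant step, always keeping f(x0), f(x1) of opposite signs."""
    f0, f1 = f(x0), f(x1)
    assert f0 * f1 < 0, "x0, x1 must bracket a root"
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        f2 = f(x2)
        if abs(f2) < tol:
            return x2
        if f0 * f2 < 0:           # root lies between x0 and x2
            x1, f1 = x2, f2
        else:                     # root lies between x2 and x1
            x0, f0 = x2, f2
    return x2

root = regula_falsi(lambda x: x * x - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```

As the section notes, each step is an inverse linear interpolation through the two bracketing points.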

    Method of Iteration (Successive Substitution)

    3.9.4    The iteration scheme xk+1 = F(xk) will converge to a root of x = F(x) if

                                           (1)   |F′(x)| ≤ q < 1 for a ≤ x ≤ b,

    .
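
A minimal sketch of the substitution scheme xk+1 = F(xk), applied to x = cos x, for which |F′(x)| = |sin x| ≤ sin 1 < 1 on [0, 1] so condition (1) holds (illustrative names, not from the Handbook):

```python
import math

def fixed_point(F, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = F(x_k) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = fixed_point(math.cos, 0.5)     # solves x = cos x, near 0.739085
```

Convergence is linear with ratio about |sin ξ| ≈ 0.67 per step, so roughly 70 iterations reach machine precision.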

    Newton’s Method of Successive Approximations

    3.9.5                   xk+1 = xk − ƒ(xk)/ƒ′(xk)

    Newton’s Rule

    If x = xk is an approximation to the solution x = ξ of ƒ(x) = 0, then the sequence xk+1 = xk − ƒ(xk)/ƒ′(xk) will converge quadratically to x = ξ if (instead of condition (2) above):

    (1) Monotonic convergence, ƒ(x0)ƒ″(x0) > 0 and ƒ′(x), ƒ″(x) do not change sign in the interval (x0 , ξ), or

    (2) Oscillatory convergence, ƒ(x0)ƒ″(x0) < 0 and ƒ′(x), ƒ″(x) do not change sign in the interval (x0 , x1), x0 ≤ ξ ≤ x1.

    Newton’s Method Applied to Real nth Roots

    3.9.6  Given xⁿ = N, if xk is an approximation to x = N¹/ⁿ then the sequence

    xk+1 = [(n − 1)xk + N/xkⁿ⁻¹]/n

    will converge quadratically to x.

    If n = 2,   xk+1 = (xk + N/xk)/2 .

    If n = 3,   xk+1 = (2xk + N/xk²)/3 .
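
The iteration xk+1 = [(n − 1)xk + N/xkⁿ⁻¹]/n is a one-liner in practice; this sketch (illustrative names, not from the Handbook) recovers square and cube roots:

```python
def newton_nth_root(N, n, x0=1.0, iters=60):
    """Newton iteration for N^(1/n): x <- ((n-1)x + N/x^(n-1)) / n."""
    x = x0
    for _ in range(iters):
        x = ((n - 1) * x + N / x ** (n - 1)) / n
    return x

sqrt2 = newton_nth_root(2.0, 2)    # x <- (x + 2/x)/2
cbrt27 = newton_nth_root(27.0, 3)  # x <- (2x + 27/x^2)/3
```

Quadratic convergence roughly doubles the number of correct digits per step once the iterate is close, so 60 iterations is far more than enough for double precision.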

    Aitken’s δ²-Process for Acceleration of Sequences

    3.9.7   If xk , xk + l , xk + 2 are three successive iterates in a
