Numerical Methods for Ordinary Differential Equations

Ebook, 1,044 pages

About this ebook

A new edition of this classic work, comprehensively revised to present exciting new developments in this important subject

The study of numerical methods for solving ordinary differential equations is constantly developing and regenerating, and this third edition of a popular classic volume, written by one of the world’s leading experts in the field, presents an account of the subject which reflects both its historical and well-established place in computational science and its vital role as a cornerstone of modern applied mathematics.

In addition to serving as a broad and comprehensive study of numerical methods for initial value problems, this book contains a special emphasis on Runge–Kutta methods by the mathematician who transformed the subject into its modern form, dating from his classic 1963 and 1972 papers. A second feature is general linear methods which have now matured and grown from being a framework for a unified theory of a wide range of diverse numerical schemes to a source of new and practical algorithms in their own right. As the founder of general linear method research, John Butcher has been a leading contributor to its development; his special role is reflected in the text. The book is written in the lucid style characteristic of the author, and combines enlightening explanations with rigorous and precise analysis. In addition to these anticipated features, the book breaks new ground by including the latest results on the highly efficient G-symplectic methods which compete strongly with the well-known symplectic Runge–Kutta methods for long-term integration of conservative mechanical systems.

This third edition of Numerical Methods for Ordinary Differential Equations will serve as a key text for senior undergraduate and graduate courses in numerical analysis, and is an essential resource for research workers in applied mathematics, physics and engineering.

Language: English
Publisher: Wiley
Release date: Aug 5, 2016
ISBN: 9781119121527
Author

J. C. Butcher



    Numerical Methods for Ordinary Differential Equations - J. C. Butcher

    Foreword

    This book is devoted to a subject – the numerical solution of ordinary differential equations – where practical relevance meets mathematical beauty in a unique way. Its writing is masterly, as befits its author, someone whose past and current contributions to the field are second to none in history.

    The numerical integration of differential equations plays a crucial role in all applications of mathematics. Virtually all the scientific laws that govern the physical world can be expressed as differential equations; therefore, making explicit the implications and consequences of these laws requires finding the solutions to the equations. This necessity, coupled with the unfortunate fact that there is no general rule to solve analytically a given differential equation, has led over the years to the introduction by the best mathematical minds of a raft of techniques applicable only to particular equations or oriented to specific features of the solutions sought. While some of those efforts have significantly spurred the development of mathematics in the last 300 years (e.g. they have given us the theory of special functions, Lie groups and topology), numerical methods are the only master key for solving differential equations.

    The subject matter of this volume is not only of exceptional relevance due to its importance in practical applications; it also constitutes a rich and elegant branch of mathematics with exceptionally distinguished roots. As is well known, the simplest numerical algorithm to solve (ordinary) differential equations was suggested by Euler in the mid 18th century. It is less well known that, for Euler, what we now call Euler’s method was just a stepping stone in his insightful presentation of the Taylor series method of arbitrary order. Euler also carefully discussed the use of variable order and variable step lengths and implementation details. The next milestone of the subject, the introduction of multistep algorithms, was reached in the mid 19th century by Adams, the scientist best known for having co-discovered the existence of the planet Neptune using only mathematics. Another important class of numerical integrators was introduced by Runge and systematized by Kutta around the year 1900. Thus, 100 years ago, the sciences had a pressing need to solve differential equations, so the mathematicians put forward many useful algorithms to solve them … and yet there was a big gap: carrying out the required computations was typically unfeasible when pencil and paper or mechanical machines were the only ways of performing arithmetic operations. It is no exaggeration to state that the need to implement in practice the integration algorithms of Adams, Runge and Kutta led to the conception and construction of (digital, electronic) computers; after all, one of the first computers was named ENIAC (Electronic Numerical Integrator and Computer). Since then, computers have of course revolutionized not only the mathematical solution of differential equations but almost everything else.

    It was only natural that when the use of computers became widespread, mathematicians asked themselves whether the venerable integrators introduced by Adams, Runge and Kutta were the best conceivable. As it turned out, in the multistep field, the beautiful mathematics of Dahlquist showed that for nonstiff problems it is not really feasible to do much better than Adams had suggested. By the addition of extra free parameters, it is possible to concoct more sophisticated integrators, but these are doomed to be unstable. In the Runge-Kutta realm the situation is just the opposite: the many degrees of freedom in the world of Runge-Kutta integrators have shown themselves capable of providing a good integrator for each situation.

    The construction and analysis of specific Runge-Kutta schemes is a daunting job if one approaches it as Runge and Kutta did; these schemes are highly nonlinear with a remarkable Matrioshka doll structure, where the vector field has to be evaluated at an expression that involves the vector field evaluated at an expression that involves the vector field … Mathematics owes much to the author of this book for a simple and elegant alternative general methodology based on the use of trees and other algebraic ideas. It is thanks to J. C. Butcher’s techniques that many authors have been able in the last decades to develop Runge-Kutta methods tailored to different needs and to implement them in useful software. Many such techniques have found uses away from numerical mathematics in fields such as quantum field theory and noncommutative geometry. Connes (Fields Medallist 1982) and Kreimer, writing on renormalization theories, state: ‘We regard Butcher’s work on the classification of numerical integration methods as an impressive example that concrete problem-oriented work can lead to far-reaching conceptual results.’

    The author wrote an earlier text restricted to Runge-Kutta and general linear methods. In the case of the present more comprehensive volume, this is the third, significantly updated, edition; I am sure it will be as well received as the two preceding editions.

    JM Sanz-Serna

    Universidad Carlos III de Madrid

    Preface to the first edition

    Introductory remarks

    This book represents an attempt to modernize and expand my previous volume, The Numerical Analysis of Ordinary Differential Equations: Runge–Kutta and General Linear Methods. It is more modern in that it considers several topics that had not yet emerged as important research areas when the former book was written. It is expanded in that it contains a comprehensive treatment of linear multistep methods. This achieves a better balance than the earlier volume which made a special feature of Runge–Kutta methods.

    In order to accommodate the additional topics, some sacrifices have been made. The background work which introduced the earlier book is here reduced to an introductory chapter dealing only with differential and difference equations. Several topics that seem to be still necessary as background reading are now introduced in survey form where they are actually needed. Some of the theoretical ideas are now explained in a less formal manner. It is hoped that mathematical rigour has not been seriously jeopardized by the use of this more relaxed style; if so, then there should be a corresponding gain in accessibility. It is believed that no theoretical detail has been glossed over to the extent that an interested reader would have any serious difficulty in filling in the gaps.

    It is hoped that lowering the level of difficulty in the exposition will widen the range of readers who might be able to find this book interesting and useful. With the same idea in mind, exercises have been introduced at the end of each section.

    Following the chapter on differential and difference equations, Chapter 2 is presented as a study of the Euler method. However, it aims for much more than this in that it also reviews many other methods and classes of methods as generalizations of the Euler method. This chapter can be used as a broad-ranging introduction to the entire subject of numerical methods for ordinary differential equations.

    Chapter 3 contains a detailed analysis of Runge–Kutta methods. It includes studies of the order, stability and convergence of Runge–Kutta methods and also considers in detail the design of efficient explicit methods for non-stiff problems. For implicit methods for stiff problems, inexpensive implementation costs must be added to accuracy and stability as a basic requirement. Recent work on each of these questions is surveyed and discussed.

    Linear multistep methods, including the combination of two methods as predictor–corrector pairs, are considered in Chapter 4. The theory interrelating stability, consistency and convergence is presented together with an analysis of order conditions. This leads to a proof of the (first) ‘Dahlquist barrier’. The methods in this class which are generally considered to be the most important for the practical solution of non-stiff problems are the Adams–Bashforth and Adams–Moulton formulae. These are discussed in detail, including their combined use as predictor–corrector pairs. The application of linear multistep methods to stiff problems is also of great practical importance and the treatment will include an analysis of the backward difference formulae.

    In Chapter 5 the wider class of general linear methods is introduced and analysed. Questions analogous to those arising in the classical Runge–Kutta and linear multistep methods – that is, questions of consistency, stability, convergence and order – are considered and explored. Several sub-families of methods that have potential practical usefulness are examined in detail. This includes the so-called DIMSIM methods and a new type of method exhibiting what is known as inherent Runge–Kutta stability.

    The remarks in the following paragraphs are intended to be read following Chapter 5.

    Concluding remarks

    Any account of this rapidly evolving subject is bound to be incomplete. Complete books are all alike; every incomplete book is incomplete in its own way.

    It has not been possible to deal adequately with implementation questions. Numerical software for evolutionary problems entered its modern phase with the DIFSUB code of Gear (1971b). ‘Modern’ in this sense means that most of the ingredients of subsequent codes were present. Both stiff and non-stiff problems are catered for, and provision is made for Jacobian calculation either by subroutine call or by difference approximation; the choice is up to the user. Most importantly, automatic selection of stepsize and order is made dynamically as the solution develops. Compared with this early implementation of linear multistep methods, the Radau code (Hairer and Wanner 1996) uses implicit Runge–Kutta methods for the solution of stiff problems.

    In recent years, the emphasis in numerical methods for evolutionary problems has moved beyond the traditional areas of non-stiff and stiff problems. In particular, differential algebraic equations have become the subject of intense analysis as well as the development of reliable and efficient algorithms for problems of variable difficulty, as measured for example by the indices of the problems. Some basic references in this vibrant area are Brenan, Campbell and Petzold (1996) and Hairer, Lubich and Roche (1989). In particular, many codes are now designed for applications to stiff ordinary differential equations in which algebraic constraints also play a role. On the Runge–Kutta side, Radau is an example of this multipurpose approach. On the linear multistep side, Petzold’s DASSL code is closely related to Gear’s DIFSUB code but has the capability of solving differential algebraic equations, at least of low index.

    Many problems derived from mechanical systems can be cast in a Hamiltonian formulation. To faithfully model the behaviour of such problems it is necessary to respect the symplectic structure. Early work on this by the late Feng Kang has led to worldwide activity in the study of this type of question. A basic reference on Hamiltonian problems is Sanz-Serna and Calvo (1994).

    The emphasis on the preservation of qualitative features of a numerical solution has now grown well beyond the Hamiltonian situation and has become a mathematical discipline in its own right. We mention just two key references in this emerging subject of ‘geometric integration’. They are Iserles et al. (2000) and Hairer, Lubich and Wanner (2006).

    Internet commentary

    Undoubtedly there will be comments and suggestions raised by readers of this volume. A web resource has been developed to form a commentary and information exchange for issues as they arise in the future. The entry point is

    http://www.math.auckland.ac.nz/~butcher/book

    Acknowledgements

    I acknowledge with gratitude the support and assistance of many people in the preparation of this volume. The editorial and production staff at Wiley have encouraged and guided me through the publishing process. My wife, children, grandchildren and stepchildren have treated me gently and sympathetically.

    During part of the time I have been working on this book, I have received a grant from the Marsden Fund. I am very grateful for this assistance both as an expression of confidence from my scientific colleagues in New Zealand and as practical support.

    The weekly workshop in numerical analysis at The University of Auckland has been an important activity in the lives of many students, colleagues and myself. We sometimes refer to this workshop as the ‘Runge–Kutta Club’. Over the past five or more years especially, my participation in this workshop has greatly added to my understanding of numerical analysis through collaboration and vigorous discussions. As this book started to take shape they have provided a sounding board for many ideas, some of which were worked on and improved and some of which were ultimately discarded. Many individual colleagues, both in Auckland and overseas, have read and worked through drafts of the book at various stages of its development. Their comments have been invaluable to me and I express my heartfelt thanks.

    Amongst my many supportive colleagues, I particularly want to name Christian Brouder, Robert Chan, Tina Chan, David Chen, Allison Heard, Shirley Huang, Arieh Iserles, Zdzisław Jackiewicz, Pierre Leone, Taketomo (Tom) Mitsui, Nicolette Moir, Steffen Schulz, Anjana Singh, Angela Tsai, Priscilla Tse and Will Wright.

    Preface to the second edition

    Reintroductory remarks

    The incremental changes incorporated into this edition are an acknowledgement of progress in several directions. The emphasis on structure-preserving algorithms has driven much of this recent progress, but not all of it. The classical linear multistep and Runge–Kutta methods have always been special cases of the large family of general linear methods, but this observation is of no consequence unless some good comes of it. In my opinion, there are only two good things that might be worth achieving. The first is that exceptionally good methods might come to light which would not have been found in any other way. The second is that a clearer insight and perhaps new overarching theoretical results might be expressed in the general linear setting. I believe that both these aims have been achieved, but other people might not agree. However, I hope it can be accepted that some of the new methods which arise naturally as general linear methods have at least some potential in practical computation. I hope also that looking at properties of traditional methods from within the general linear framework will provide additional insight into their computational properties.

    How to read this book

    Of the five chapters of this book, the first two are the most introductory in nature. Chapter 1 is a review of differential and difference equations with a systematic study of their basic properties balanced against an emphasis on interesting and prototypical problems. Chapter 2 provides a broad introduction to numerical methods for ordinary differential equations. This is motivated by the simplicity of the Euler method and a view that other standard methods are systematic generalizations of this basic method. If Runge–Kutta and linear multistep methods are generalizations of Euler then so are general linear methods, and it is natural to introduce a wide range of multivalue–multistage methods at this elementary level.

    A reading of this book should start with these two introductory chapters. For a reader less experienced in this subject this is an obvious entry point, but the chapters also have a role for a reader who is ready to go straight into the later chapters. For such readers they will not take very long, but they do set the scene for an entry into the most technical parts of the book.

    Chapter 3 is intended as a comprehensive study of Runge–Kutta methods. A full theory of order and stability is presented and at least the early parts of this chapter are prerequisites for Chapter 5 and to a lesser extent for Chapter 4. The use of B-series, or the coefficients that appear in these series, is becoming more and more a standard tool for a full understanding of modern developments in this subject.

    Chapter 4 is a full study of linear multistep methods. It is based on Dahlquist’s classic work on consistency, stability and order and includes analysis of linear and nonlinear stability. In both Chapters 3 and 4 the use of order stars to resolve order and stability questions is complemented by the introduction of order arrows. It is probably a good idea to read through most of Chapter 4 before embarking on Chapter 5. This is not because general linear methods are intrinsically inaccessible, but because an appreciation of their overarching nature hinges on an appreciation of the special cases they include.

    General linear methods, the subject of Chapter 5, treat well-known methods in a unified way, but it is hoped that they do more than this. There really seem to be new and useful methods buried amongst them which cannot be easily motivated in any other way. Thus, while this chapter needs to be put aside to be read as a culmination, it should not be put off too long. There is so much nice mathematics already associated with these methods, and the promise of more to come provides attraction enough. It is general linear methods, and the stability functions associated with them, that really put order arrows in their rightful place.

    Internet support pages

    For additional information and supporting material see

    http://www.math.auckland.ac.nz/~butcher/ODE-book-2008

    Reacknowledgements

    I have many people to thank and to rethank in my efforts to produce an improved edition. My understanding of the stability and related properties of general linear methods has been sharpened by working with Adrian Hill and Laura Hewitt. Helmut Podhaisky has given me considerable help and advice, especially on aspects of general linear method implementation. My special thanks to Jane Lee for assistance with the final form of the manuscript. A number of people have made comments and provided corrections on the first edition or made constructive suggestions on early drafts of this new version. In addition to people acknowledged in some other way, I would like to mention the names of Ian Gladwell, Dawoomi Kim, Yoshio Komori, René Lamour, Dione O’Neale, Christian Perret, Higinio Ramos, Dave Simpson, Steve Stalos, Caren Tischendorf, Daniel Weiß, Frank Wrona and Jinsen Zhuang.

    Preface to the third edition

    A new edition

    ‘Numerical methods for ordinary differential equations’ is a mature and stable subject, and new ideas and techniques should not be expected too frequently. While this new edition is a consolidation of the early editions, it does attempt to take the subject further in some directions. Most notably, Section 56, dealing with G-symplectic methods, records some major advances in what is now known about these methods. At the time of the appearance of the second edition the author, and people with whom he was collaborating, did not fully appreciate the disastrous role that parasitism could play in extended time integrations. This shortcoming has now been overcome. In Butcher, Habib, Hill and Norton (2014), parasitism has been analysed and largely dealt with as a source of numerical difficulty. A recent result (Butcher 2015) has gone some way towards explaining why G-symplectic methods work as well as they do. However, D’Ambrosio and Hairer (2014) show that the suppression of unstable behaviour caused by parasitism cannot be relied on forever. This third edition attempts to present this new work in a manner that underlines the potential of G-symplectic methods as practical integrators for Hamiltonian and other problems. Although known G-symplectic methods are generally restricted to order 4, a new result (Butcher, Imran and Podhaisky 2016) shows how order 6 methods can be constructed. Limited numerical testing confirms the order and conservation properties of this new method and one of these tests is quoted as Figure 568(ii).

    Chapter 3 played a central role in the previous editions and does so again. The aspects of this subject dealing with the composition group will be difficult for some readers, and this edition attempts to explain them in a fresh way. But the importance of algebraic analysis, or anything else of a theoretical nature, must, to a large extent, be in the applications. This theory is not merely a mathematical device; it leads to the construction of useful numerical methods and gives insight into the nature of these methods.

    Attributions and personal names

    Many results in numerical analysis began as conjectures and were eventually proved, but not always by the individuals who formulated the conjectures. For example, the Ehle (Ehle 1973) and Daniel Moore (Daniel and Moore 1970) conjectures were not settled until the invention of order stars (Wanner, Hairer and Nørsett 1978). In this volume I have tried to use the convention of naming these results using the names of the first people to formulate them because I believe this avoids confusion about which result is being referred to.

    Some authors refer to the commonly used tableaux of coefficients in specific Runge–Kutta methods as ‘Butcher tableaux’. In this third edition I sometimes follow this terminology but the single word ‘tableau’ is usually enough to make it clear what is being referred to in any particular case. There are many different products associated with rooted trees, the most important being used for the construction of forests as the product of trees. However, in this book, extensive use is made of the product c0x-math-001 formed by adjoining the two roots and defining the root of the product as the same vertex as the root of c0x-math-002. This is sometimes referred to as the ‘Butcher product’ and this name will be used in this edition.

    The late Germund Dahlquist once told me why he used the name ‘A-stability’ for this fundamental linear stability definition. It was simply to follow the lead of David M. Young who used ‘property A’ as the name of a fundamental condition in an entirely different area of numerical analysis. When Germund introduced the nonlinear definition ‘G-stability’, he was referring to the matrix G which appears in the formulation of the concept. Shortly afterwards I needed a name for nonlinear stability in the case of Runge–Kutta methods, and I chose B because it next follows A. The fact that B is one of my initials is no more significant than the fact that G is one of the initials of Germund Dahlquist.

    Algorithms

    The purpose of including formal algorithms in this book is to illuminate the numerical processes they represent. Hence, they should not be interpreted as computer codes. Nevertheless, with little or no change, they can be used as scripts or functions in a variety of languages including MATLAB®, Octave and Scilab.

    Notice concerning MATLAB® in this book

    MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

    Acknowledgements

    I am grateful to Ernst Hairer for correspondence which led to an appreciation of the nature of parasitism. For the method P, introduced in Subsection 560, applied to the simple pendulum, it is observed that, for small amplitudes, very little goes wrong but, for amplitudes greater than c0x-math-003 , parasitism eventually produces disastrous effects. Ernst kindly sent me his own analysis of this phenomenon.

    Many colleagues throughout the world have discussed interesting questions with me, and I am grateful for these stimulating interactions. There are too many to name individually but I do want to make a special mention of the colleagues and former students who have collaborated with me and learnt with me about G-symplectic and other types of general linear methods: Saghir Ahmad, Yousaf Habib, Adrian Hill, Gulshad Imran, Terrence Norton and Helmut Podhaisky.

    Over the years, including the period when I worked on this new edition, I have received generous support from the Marsden Fund and this has given me opportunities I would not otherwise have had.

    I have enjoyed the support of children, stepchildren and grandchildren in all aspects of my life. I express my special appreciation for the nest Jennifer has provided for me: a bower ‘full of sweet dreams, and health, and quiet breathing’.


    Chapter 1

    Differential and Difference Equations

    10 Differential Equation Problems

    100 Introduction to differential equations

    As essential tools in scientific modelling, differential equations are familiar to every educated person. In this introductory discussion we do not attempt to restate what is already known, but rather to express commonly understood ideas in the style that will be used for the rest of this book.

    The aim will always be to understand, as much as possible, what we expect to happen to a quantity that satisfies a differential equation. At the most obvious level, this means predicting the value this quantity will have at some future time. However, we are also interested in more general questions such as the adherence to possible conservation laws or perhaps stability of the long-term solution. Since we emphasize numerical methods, we often discuss problems with known solutions mainly to illustrate qualitative and numerical behaviour.

    Even though we sometimes refer to ‘time’ as the independent variable, that is, as the variable on which the value of the ‘solution’ depends, there is no reason for insisting on this interpretation. However, we generally use x to denote the ‘independent’ or ‘time’ variable and y to denote the ‘dependent variable’. Hence, differential equations will typically be written in the form

    (100a)    y′(x) = f(x, y(x)),

    where y′ denotes the derivative of y with respect to x. Sometimes, for convenience, we omit the x in y(x) and write simply y′ = f(x, y).

    The terminology used in (100a) is misleadingly simple, because y could be a vector-valued function. Thus, if we are working in ℝ^N, and x is permitted to take on any real value, then the domain and range of the function f which defines a differential equation and the solution to this equation are given by

    f : ℝ × ℝ^N → ℝ^N,    y : ℝ → ℝ^N.

    Since we might be interested in time values that lie only in some interval [a, b], we sometimes consider problems in which f : [a, b] × ℝ^N → ℝ^N, and y : [a, b] → ℝ^N. When dealing with specific problems, it is often convenient to focus, not on the vector-valued functions f and y, but on individual components. Thus, instead of writing a differential equation system in the form of (100a), we can write coupled equations for the individual components:

    (100b)    y′_k(x) = f_k(x, y_1(x), y_2(x), …, y_N(x)),    k = 1, 2, …, N.

    Autonomous differential equations

    A differential equation for which f is a function not of x, but of y only, is said to be ‘autonomous’. Some equations arising in physical modelling are more naturally expressed in one form or the other, but we emphasize that it is always possible to write a non-autonomous equation in an equivalent autonomous form. All we need to do to change the formulation is to introduce an additional component y_{N+1} into the y vector, and ensure that this can always maintain the same value as x, by associating it with the differential equation y′_{N+1}(x) = 1. Thus, the modified system is

    (100c)    y′(x) = f(y_{N+1}(x), y(x)),    y′_{N+1}(x) = 1.
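    The change of variables just described is entirely mechanical and can be sketched in a few lines of code. The fragment below is an illustrative Python sketch (the function names are invented here, and are not from this book): it augments a right-hand side f(x, y) into an autonomous one acting on the extended vector.

```python
def make_autonomous(f):
    """Given f(x, y) for the system y'(x) = f(x, y(x)), with y a list,
    return an autonomous right-hand side F acting on the augmented
    vector Y = y + [x]: the extra component tracks x via derivative 1."""
    def F(Y):
        y, x = Y[:-1], Y[-1]
        return f(x, y) + [1.0]  # last entry: derivative of the x-tracking component
    return F

# Example: the scalar non-autonomous equation y' = x*y
f = lambda x, y: [x * y[0]]
F = make_autonomous(f)
print(F([2.0, 3.0]))  # y = 2 at x = 3: derivative [6.0, 1.0]
```

    Applying F to [2.0, 3.0], meaning y = 2 at x = 3, returns [6.0, 1.0]; the final 1.0 is the derivative of the component that maintains the same value as x.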

    A system of differential equations alone does not generally define a unique solution, and it is necessary to add to the formulation of the problem a number of additional conditions. These are either ‘boundary conditions’, if further information is given at two or more values of x, or ‘initial conditions’, if all components of y are specified at a single value of x.

    Initial value problems

    If the value of y(x_0) = y_0 is given, then the pair of equations

    y'(x) = f(x, y(x)), \qquad y(x_0) = y_0,

    is known as an ‘initial value problem’. Our main interest in this book is with exactly this problem, where the aim is to obtain approximate values of y(x) for specific values of x, usually with x > x_0, corresponding to the prediction of the future states of a differential equation system.

    Note that for an N-dimensional system, the individual components of an initial value vector need to be given specific values. Thus, we might write

    y_i(x_0) = y_{i0}, \qquad i = 1, 2, \dots, N.

    When the problem is formally converted to autonomous form (100c), the value of y_{N+1}(x_0) must be identical to x_0, otherwise the requirement that y_{N+1}(x) should always equal x would not be satisfied.

    For many naturally occurring phenomena, the most appropriate form in which to express a differential equation is as a high order system. For example, an equation might be of the form

    y^{(n)}(x) = f\big(x, y(x), y'(x), \dots, y^{(n-1)}(x)\big),

    with initial values given for y(x_0), y'(x_0), \dots, y^{(n-1)}(x_0). Especially important in the modelling of the motion of physical systems subject to forces are equation systems of the form

    (100d)  y_i''(x) = f_i\big(y_1(x), y_2(x), \dots, y_N(x)\big), \qquad i = 1, 2, \dots, N,

    where the equations, though second order, do have the advantages of being autonomous and without the first derivatives y_i' occurring amongst the arguments of the functions f_i.

    To write (100d) in what will become our standard first order system form, we can introduce additional components y_{N+1}, y_{N+2}, \dots, y_{2N}, equal to y_1', y_2', \dots, y_N', respectively. The differential equation system (100d) can now be written as the first order system

    y_i'(x) = y_{N+i}(x), \qquad y_{N+i}'(x) = f_i\big(y_1(x), y_2(x), \dots, y_N(x)\big), \qquad i = 1, 2, \dots, N.
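    The reduction of a second order system to this first order form can also be sketched in code. The harmonic oscillator right-hand side f(y) = -y below is a stand-in example, not taken from the text:

```python
# Sketch: rewriting the second order autonomous system y'' = f(y)
# as a first order system by appending velocity components z = y'.

def f(y):
    # second order right-hand side; the example y'' = -y is arbitrary
    return [-yi for yi in y]

def first_order_rhs(Y):
    # Y = (y_1..y_N, z_1..z_N) with z = y'; then Y' = (z, f(y))
    N = len(Y) // 2
    y, z = Y[:N], Y[N:]
    return list(z) + f(y)

Y = [1.0, 0.0, 0.0, 1.0]   # y = (1, 0) with velocities y' = (0, 1)
assert first_order_rhs(Y) == [0.0, 1.0, -1.0, 0.0]
```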

    101 The Kepler problem

    The problems discussed in this section are selected from the enormous range of possible scientific applications. The first example problem describes the motion of a single planet about a heavy sun. By this we mean that, although the sun exerts a gravitational attraction on the planet, we regard the corresponding attraction of the planet on the sun as negligible, and that the sun will be treated as being stationary. This approximation to the physical system can be interpreted in another way: even though both bodies are in motion about their centre of mass, the motion of the planet relative to the sun can be modelled using the simplification we have described. We also make a further assumption, that the motion of the planet is confined to a plane.

    Let y_1 and y_2 denote rectangular coordinates centred at the sun, specifying at time x the position of the planet. Also let y_3 and y_4 denote the components of velocity in the y_1 and y_2 directions, respectively. If M denotes the mass of the sun, \gamma the gravitational constant and m the mass of the planet, then the attractive force on the planet will have magnitude

    \frac{\gamma M m}{y_1^2 + y_2^2}.

    Resolving this force in the coordinate directions, we find that the components of acceleration of the planet, due to this attraction, are -\gamma M y_1 (y_1^2 + y_2^2)^{-3/2} and -\gamma M y_2 (y_1^2 + y_2^2)^{-3/2}, where the negative sign denotes the inward direction of the acceleration.

    We can now write the equations of motion:

    \frac{dy_1}{dx} = y_3, \qquad \frac{dy_2}{dx} = y_4, \qquad \frac{dy_3}{dx} = -\frac{\gamma M y_1}{(y_1^2 + y_2^2)^{3/2}}, \qquad \frac{dy_4}{dx} = -\frac{\gamma M y_2}{(y_1^2 + y_2^2)^{3/2}}.

    By adjusting the scales of the variables, the factor \gamma M can be removed from the formulation, and we arrive at the equations

    (101a)  y_1' = y_3,

    (101b)  y_2' = y_4,

    (101c)  y_3' = -\frac{y_1}{(y_1^2 + y_2^2)^{3/2}},

    (101d)  y_4' = -\frac{y_2}{(y_1^2 + y_2^2)^{3/2}}.

    The solutions of this system are known to be conic sections, that is, ellipses, parabolas or hyperbolas, if we ignore the possibility that the trajectory is a straight line directed either towards or away from the sun. We investigate this further after we have shown that two ‘first integrals’, or invariants, of the solution exist.

    Conservation of Hamiltonian and angular momentum

    Theorem 101A

    The quantities

    H = \tfrac{1}{2}(y_3^2 + y_4^2) - (y_1^2 + y_2^2)^{-1/2}, \qquad A = y_1 y_4 - y_2 y_3

    are constant.

    Proof

    We verify that the values of H'(x) and A'(x) are zero if y satisfies (101a)–(101d). We have

    H' = y_3 y_3' + y_4 y_4' + (y_1 y_1' + y_2 y_2')(y_1^2 + y_2^2)^{-3/2} = -(y_1 y_3 + y_2 y_4)(y_1^2 + y_2^2)^{-3/2} + (y_1 y_3 + y_2 y_4)(y_1^2 + y_2^2)^{-3/2} = 0

    and

    A' = y_1' y_4 + y_1 y_4' - y_2' y_3 - y_2 y_3' = y_3 y_4 - y_1 y_2 (y_1^2 + y_2^2)^{-3/2} - y_4 y_3 + y_2 y_1 (y_1^2 + y_2^2)^{-3/2} = 0.

    The quantities H and A are the ‘Hamiltonian’ and ‘angular momentum’, respectively. Note that H = T + V, where T = \tfrac{1}{2}(y_3^2 + y_4^2) is the kinetic energy and V = -(y_1^2 + y_2^2)^{-1/2} is the potential energy.
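    A numerical spot check of Theorem 101A: the sketch below integrates (101a)–(101d) with the classical fourth order Runge–Kutta method, used here purely as a convenient stand-in integrator; the step size, eccentricity and perihelion starting point (derived later in this section) are arbitrary choices. H and A drift only by the small integration error:

```python
import math

def kepler(y):
    # right-hand side of the scaled Kepler equations (101a)-(101d)
    y1, y2, y3, y4 = y
    r3 = (y1 * y1 + y2 * y2) ** 1.5
    return [y3, y4, -y1 / r3, -y2 / r3]

def hamiltonian(y):
    y1, y2, y3, y4 = y
    return 0.5 * (y3 * y3 + y4 * y4) - 1.0 / math.hypot(y1, y2)

def angular_momentum(y):
    y1, y2, y3, y4 = y
    return y1 * y4 - y2 * y3

def rk4_step(f, y, h):
    # one step of the classical fourth order Runge-Kutta method
    k1 = f(y)
    k2 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k1)])
    k3 = f([yi + 0.5 * h * ki for yi, ki in zip(y, k2)])
    k4 = f([yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h * (a + 2 * b + 2 * c + d) / 6
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

e = 0.5   # eccentricity; perihelion initial values give H = -1/2
y = [1 - e, 0.0, 0.0, math.sqrt((1 + e) / (1 - e))]
H0, A0 = hamiltonian(y), angular_momentum(y)
for _ in range(1000):
    y = rk4_step(kepler, y, 0.002)
assert abs(hamiltonian(y) - H0) < 1e-6
assert abs(angular_momentum(y) - A0) < 1e-6
```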

    A further property of this problem is its invariance under changes of scale of the variables:

    \overline{y}_1 = \alpha^{-2} y_1, \quad \overline{y}_2 = \alpha^{-2} y_2, \quad \overline{y}_3 = \alpha y_3, \quad \overline{y}_4 = \alpha y_4, \quad \overline{x} = \alpha^{-3} x.

    The Hamiltonian and angular momentum get scaled to

    \overline{H} = \alpha^2 H, \qquad \overline{A} = \alpha^{-1} A.

    Invariance under orthogonal transformations

    A second type of transformation is based on a two-dimensional orthogonal transformation (that is, a rotation or a reflection or a composition of these) Q, where Q^\top Q = I. The time variable x is invariant, and the position and velocity variables get transformed to

    \begin{bmatrix} \overline{y}_1 \\ \overline{y}_2 \end{bmatrix} = Q \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad \begin{bmatrix} \overline{y}_3 \\ \overline{y}_4 \end{bmatrix} = Q \begin{bmatrix} y_3 \\ y_4 \end{bmatrix}.

    It is easy to see that A = 0 implies that the trajectory lies entirely in a subspace defined by

    y_1 \sin\theta = y_2 \cos\theta, \qquad y_3 \sin\theta = y_4 \cos\theta,

    for some fixed angle \theta. We move on from this simple case and assume that A \ne 0. The sign of H is of crucial importance: if H \ge 0 then it is possible to obtain arbitrarily high values of r = (y_1^2 + y_2^2)^{1/2} without y_3^2 + y_4^2 vanishing. We exclude this case for the present discussion and assume that H < 0. Scale H so that it has a value -\tfrac{1}{2} and at the same time A takes on a positive value. This value cannot exceed 1 because we can easily verify an identity involving the derivative of r. This identity is

    (101e)

    (r r')^2 = (y_1 y_3 + y_2 y_4)^2 = (y_1^2 + y_2^2)(y_3^2 + y_4^2) - A^2 = r^2\Big(2H + \frac{2}{r}\Big) - A^2 = -r^2 + 2r - A^2.

    Since the left-hand side cannot be negative, the quadratic function in r on the right-hand side must have real roots. This implies that A \le 1. Write A = \sqrt{1 - e^2}, for 0 \le e < 1, where we see that e is the eccentricity of an ellipse on which the orbit lies. The minimum and maximum values of r are found to be 1 - e and 1 + e, respectively. Rotate axes so that, when r = 1 - e, y_2 = 0, so that y_1 = 1 - e. At this point we find that y_3 = 0 and y_4 = \sqrt{(1 + e)/(1 - e)}. We will use these as initial values at x = 0.
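    These perihelion initial values can be checked directly; the short sketch below, assuming the normalization H = -1/2 and A = \sqrt{1 - e^2} adopted above, confirms the invariant values for a range of eccentricities:

```python
import math

def invariants(e):
    # perihelion initial values y = (1-e, 0, 0, sqrt((1+e)/(1-e)))
    y1, y2, y3, y4 = 1 - e, 0.0, 0.0, math.sqrt((1 + e) / (1 - e))
    H = 0.5 * (y3 ** 2 + y4 ** 2) - 1.0 / math.hypot(y1, y2)
    A = y1 * y4 - y2 * y3
    return H, A

for e in (0.0, 0.3, 0.9):
    H, A = invariants(e)
    assert abs(H + 0.5) < 1e-12                  # H = -1/2
    assert abs(A - math.sqrt(1 - e * e)) < 1e-12  # A = sqrt(1 - e^2)
```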

    Change to polar coordinates by writing y_1 = r\cos\varphi, y_2 = r\sin\varphi. It is found that

    y_1 y_4 - y_2 y_3 = r^2 \varphi',

    so that, because A = \sqrt{1 - e^2}, we find that

    (101f)  r^2 \varphi' = \sqrt{1 - e^2}.

    From (101e) and (101f) we find a differential equation for the path traced out by the orbit

    \Big(\frac{dr}{d\varphi}\Big)^2 = \frac{r^2(-r^2 + 2r - 1 + e^2)}{1 - e^2},

    and we can verify that this is satisfied by

    r = \frac{1 - e^2}{1 + e\cos\varphi}.

    If we change back to Cartesian coordinates, we find that all points on the trajectory lie on the ellipse

    (y_1 + e)^2 + \frac{y_2^2}{1 - e^2} = 1,

    with centre (-e, 0), eccentricity e, and major and minor axis lengths 1 and \sqrt{1 - e^2}, respectively.
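    A quick consistency check, assuming the polar form r = (1 - e^2)/(1 + e\cos\varphi) of the orbit: points sampled from it satisfy the Cartesian ellipse equation with centre (-e, 0):

```python
import math

e = 0.6   # arbitrary eccentricity for the check
for k in range(12):
    phi = 2 * math.pi * k / 12
    r = (1 - e * e) / (1 + e * math.cos(phi))
    y1, y2 = r * math.cos(phi), r * math.sin(phi)
    # ellipse with semi-axes 1 and sqrt(1-e^2), centred at (-e, 0)
    lhs = (y1 + e) ** 2 + y2 ** 2 / (1 - e * e)
    assert abs(lhs - 1.0) < 1e-12
```

    The focus of this ellipse sits at the origin, the position of the sun, as Kepler's first law requires.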

    As we have seen, a great deal is known about this problem. However, much less is known about the motion of a many-body gravitational system.

    One of the aims of modern numerical analysis is to understand the behaviour of various geometrical properties. In some cases it is possible to preserve the value of quantities that are invariant in the exact solution. In other situations, such as problems where the Hamiltonian is theoretically conserved, it may be preferable to conserve other properties, such as what is known as ‘symplectic behaviour’.

    We consider further gravitational problems in Subsection 120.

    102 A problem arising from the method of lines

    The second initial value problem we consider is based on an approximation to a partial differential equation. Consider the parabolic system

    (102a)  \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2}, \qquad 0 \le x \le 1,

    where we have used t to represent time, x to represent distance and u(x, t) to represent some quantity, such as temperature, which diffuses with time. For this problem it is necessary to impose conditions on the boundaries x = 0 and x = 1 as well as at the initial time t = 0. We may interpret the solution as the distribution of the temperature at points in a conducting rod, given that the temperature is specified at the ends of the rod. In this case the boundary conditions would be of the form u(0, t) = \alpha(t) and u(1, t) = \beta(t). Equation (102a) is known as the heat or diffusion equation, and the conditions given at x = 0 and x = 1 are known as Dirichlet conditions. This is in contrast to Neumann conditions, in which the values of \partial u/\partial x are given at the ends of the x interval.

    Space discretization

    To convert this problem into an ordinary differential equation system, which mimics the behaviour of the parabolic equation, let y_1(t), y_2(t), \dots, y_N(t) denote the values of u(h, t), u(2h, t), \dots, u(Nh, t), respectively, where h = 1/(N+1). That is,

    y_i(t) = u(ih, t), \qquad i = 0, 1, \dots, N+1,

    where we have included i = 0 and i = N+1 for convenience, so that y_0(t) = \alpha(t) and y_{N+1}(t) = \beta(t).

    The second derivative \partial^2 u/\partial x^2, evaluated at x = ih, is approximately equal to h^{-2}\big(u((i-1)h, t) - 2u(ih, t) + u((i+1)h, t)\big). Hence, the vector of derivatives of y_1, y_2, \dots, y_N is given by

    y_i'(t) = \frac{y_{i-1}(t) - 2y_i(t) + y_{i+1}(t)}{h^2}, \qquad i = 1, 2, \dots, N.

    This system can be written in vector–matrix form as

    (102b)  y'(t) = Ay(t) + v(t),

    where

    A = \frac{1}{h^2}\begin{bmatrix} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{bmatrix}, \qquad v(t) = \frac{1}{h^2}\begin{bmatrix} \alpha(t) \\ 0 \\ \vdots \\ 0 \\ \beta(t) \end{bmatrix}.
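    Assembling the tridiagonal matrix A and the inhomogeneous term v for a modest N is straightforward; in the sketch below the Dirichlet data alpha and beta are placeholder functions:

```python
import numpy as np

def assemble(N, alpha, beta, t):
    # tridiagonal A and boundary term v of (102b) for N interior points
    h = 1.0 / (N + 1)
    A = (np.diag(np.full(N, -2.0))
         + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h ** 2
    v = np.zeros(N)
    v[0] = alpha(t) / h ** 2    # left boundary contribution
    v[-1] = beta(t) / h ** 2    # right boundary contribution
    return A, v

A, v = assemble(4, lambda t: 1.0, lambda t: 0.0, 0.0)
# with h = 1/5, the diagonal entries are -2/h^2 = -50
assert abs(A[0, 0] + 50.0) < 1e-9 and abs(v[0] - 25.0) < 1e-9
```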

    The original problem is ‘dissipative’ in the sense that, if u and v are each solutions to the diffusion equation, which have identical boundary values but different initial values, then

    \int_0^1 \big(u(x, t) - v(x, t)\big)^2\, dx

    is non-increasing as t increases. We can verify this by differentiating with respect to t and by showing, using integration by parts, that the result found cannot be positive. We have

    \frac{d}{dt}\int_0^1 (u - v)^2\, dx = 2\int_0^1 (u - v)\Big(\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 v}{\partial x^2}\Big)\, dx = -2\int_0^1 \Big(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial x}\Big)^2 dx \le 0,

    where the boundary terms in the integration by parts vanish because u - v is zero at x = 0 and x = 1.

    Even though the approximation of (102a) by (102b) is not exact, it is an advantage of the discretization we have used, that the qualitative property is still present. Let y and z be two solutions to the ordinary differential equation system. Consider the nature of

    \|y(t) - z(t)\|^2 = \sum_{i=1}^N \big(y_i(t) - z_i(t)\big)^2.

    We have

    \frac{d}{dt}\|y - z\|^2 = 2(y - z)^\top (Ay - Az) = 2(y - z)^\top A\,(y - z) \le 0,

    because the matrix A is negative definite.
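    The negative definiteness of A can be spot-checked numerically; in the sketch below the random vector w plays the role of the difference y - z:

```python
import numpy as np

N = 20
h = 1.0 / (N + 1)
# the tridiagonal matrix A of (102b)
A = (np.diag(np.full(N, -2.0))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h ** 2

rng = np.random.default_rng(0)
for _ in range(100):
    w = rng.standard_normal(N)    # w stands in for y - z
    assert w @ A @ w <= 0.0       # quadratic form never positive
```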

    Spectrum of the discretization

    Another aspect of the discretization that might be explored is the spectrum of the matrix A, in comparison with the spectrum of the linear operator u \mapsto u'' on the space of C^2 functions on [0, 1] for which u(0) = u(1) = 0. The eigenfunctions for the continuous problem are of the form \sin(k\pi x), for k = 1, 2, \dots, and the corresponding eigenvalues are -k^2\pi^2. For the discrete problem, we need to find the solutions to the problem

    (102c)  \frac{v_{i-1} - 2v_i + v_{i+1}}{h^2} = \lambda v_i, \qquad i = 1, 2, \dots, N,

    where v_1, v_2, \dots, v_N are not all zero. Introducing also v_0 and v_{N+1}, we find that it is possible to write (102c) in the form

    (102d)

    v_{i-1} - (2 + h^2\lambda)v_i + v_{i+1} = 0, \qquad i = 1, 2, \dots, N,

    where v_0 = v_{N+1} = 0. The difference equation (102d) has solutions of the form

    (102e)  v_i = C(\theta^i - \theta^{-i}),

    where \theta + \theta^{-1} = 2 + h^2\lambda, unless \theta = \pm 1 (which is easily seen to be impossible). Because v_{N+1} = 0, it follows that \theta^{2(N+1)} = 1 and hence that

    \theta = \exp\Big(\frac{\mathrm{i}k\pi}{N+1}\Big),

    with k \in \{1, 2, \dots, N\}. Hence,

    \lambda = \frac{\theta + \theta^{-1} - 2}{h^2} = \frac{2\cos(k\pi h) - 2}{h^2} = -\frac{4}{h^2}\sin^2\Big(\frac{k\pi h}{2}\Big).

    Denote this by \lambda_k.
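    The formula \lambda_k = -(4/h^2)\sin^2(k\pi h/2) can be compared with a directly computed spectrum of A; the sketch below checks agreement for a small N:

```python
import numpy as np

N = 10
h = 1.0 / (N + 1)
# the tridiagonal matrix A of (102b)
A = (np.diag(np.full(N, -2.0))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h ** 2

formula = np.array([-4.0 / h ** 2 * np.sin(k * np.pi * h / 2) ** 2
                    for k in range(1, N + 1)])
computed = np.linalg.eigvalsh(A)
assert np.allclose(np.sort(formula), np.sort(computed))
```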

    The eigenvector corresponding to \lambda_k is found from (102e), with C chosen so that the eigenvectors are orthonormal. The result is the vector v_k with components

    (v_k)_i = \sqrt{2h}\,\sin(ik\pi h), \qquad i = 1, 2, \dots, N.

    By spectral decomposition

    A = \sum_{k=1}^N \lambda_k v_k v_k^\top,

    and furthermore

    \phi(A) = \sum_{k=1}^N \phi(\lambda_k)\, v_k v_k^\top,

    for \phi a suitable function. In particular, if \alpha(t) = \beta(t) = 0, so that the term v(t) in (102b) vanishes, the solution to (102b) over an interval [0, t] is

    (102f)  y(t) = \exp(tA)\, y(0) = \sum_{k=1}^N e^{\lambda_k t}\, \big(v_k^\top y(0)\big)\, v_k.

    It is interesting to consider the relative contributions of the N terms on the right-hand side of (102f). For k small compared with N, we can use the approximation \sin(\theta) \approx \theta, which holds for small \theta, to give eigenvalue number k as \lambda_k \approx -k^2\pi^2. On the other hand, for k close to N, \lambda_k \approx -4/h^2. This means that for moderate values of h, the terms in (102f) decay at a slow rate for low values of k but more rapidly for values of k close to N.
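    The disparity in decay rates can be made concrete; with N = 19 (so h = 1/20, an arbitrary choice), the slowest mode is still nearly undamped at t = 0.01 while the fastest has already decayed to negligible size:

```python
import math

N = 19
h = 1.0 / (N + 1)

def lam(k):
    # eigenvalue lambda_k = -(4/h^2) sin^2(k*pi*h/2)
    return -4.0 / h ** 2 * math.sin(k * math.pi * h / 2) ** 2

t = 0.01
slow = math.exp(lam(1) * t)   # k = 1: lambda_1 ~ -pi^2, factor near 1
fast = math.exp(lam(N) * t)   # k = N: lambda_N ~ -4/h^2, factor tiny
assert slow > 0.9
assert fast < 1e-6
```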

    In the special case c01-math-113 and c01-math-114 , we illustrate this effect by observing the behaviour of the matrix

    \exp(tA).

    The element of this matrix with greatest magnitude is c01-math-115 for c01-math-116 and for c01-math-117 the value reduces rapidly through the values c01-math-118 , c01-math-119 , c01-math-120 and c01-math-121 . For the remaining values of k, the value is close to zero.

    These observations have an important consequence for numerical approximations. To model the behaviour of solutions to this problem, it is the low values of k which are significant. However, in many numerical approximations, the values of k close to N have an overwhelming influence on the stability of the computation. Problems like this are said to be ‘stiff’.
