Mathematical Physics with Partial Differential Equations
About this ebook

Mathematical Physics with Partial Differential Equations, Second Edition, is designed for upper-division undergraduate and beginning graduate students taking mathematical physics courses taught out of math departments. The new edition builds on the success of the first, with a continuing focus on clear presentation, detailed examples, mathematical rigor, and a careful selection of topics. It presents the familiar classical topics and methods of mathematical physics, with more extensive coverage of the three most important partial differential equations in the field of mathematical physics: the heat equation, the wave equation, and Laplace's equation.

The book presents the most common techniques of solving these equations, and their derivations are developed in detail for a deeper understanding of mathematical applications. Unlike many physics-leaning mathematical physics books on the market, this work is heavily rooted in math, making the book more appealing for students wanting to progress in mathematical physics, with particularly deep coverage of Green’s functions, the Fourier transform, and the Laplace transform. A salient characteristic is the focus on fewer topics but at a far more rigorous level of detail than comparable undergraduate-facing textbooks. The depth of some of these topics, such as the Dirac-delta distribution, is not matched elsewhere.

New features in this edition include: novel and illustrative examples from physics including the 1-dimensional quantum mechanical oscillator, the hydrogen atom and the rigid rotor model; chapter-length discussion of relevant functions, including the Hermite polynomials, Legendre polynomials, Laguerre polynomials and Bessel functions; and all-new focus on complex examples only solvable by multiple methods.

  • Introduces and evaluates numerous physical and engineering concepts in a rigorous mathematical framework
  • Provides extremely detailed mathematical derivations and solutions with extensive proofs and weighting for application potential
  • Explores an array of detailed examples from physics that give direct application to rigorous mathematics
  • Offers instructors useful resources for teaching, including an illustrated instructor's manual, PowerPoint presentations in each chapter and a solutions manual
Language: English
Release date: Feb 26, 2018
ISBN: 9780128147603
Author

James Kirkwood

James Kirkwood is Professor of Mathematical Sciences at Sweet Briar College, Sweet Briar, VA, USA.


    Book preview

    Mathematical Physics with Partial Differential Equations - James Kirkwood

    Mathematical Physics with Partial Differential Equations

    Second Edition

    James Kirkwood

    Professor of Mathematical Sciences, Sweet Briar College, Sweet Briar, VA, USA

    Table of Contents

    Cover image

    Title page

    Copyright

    Dedication

    Preface

    Chapter 1. Preliminaries

    1.1. Self-Adjoint Operators

    1.2. Curvilinear Coordinates

    1.3. Approximate Identities and the Dirac-δ Function

    1.4. The Issue of Convergence

    1.5. Some Important Integration Formulas

    Chapter 2. Vector Calculus

    2.1. Vector Integration

    2.2. The Divergence and Curl

    2.3. Green's Theorem, the Divergence Theorem, and Stokes' Theorem

    Chapter 3. Green's Functions

    3.1. Introduction

    3.2. Construction of Green's Function Using the Dirac-Delta Function

    3.3. Green's Function Using Variation of Parameters

    3.4. Construction of Green's Function From Eigenfunctions

    3.5. More General Boundary Conditions

    3.6. The Fredholm Alternative (or, What If 0 Is an Eigenvalue?)

    3.7. Green's Function for the Laplacian in Higher Dimensions

    Chapter 4. Fourier Series

    4.1. Introduction

    4.2. Basic Definitions

    4.3. Methods of Convergence of Fourier Series

    4.4. The Exponential Form of Fourier Series

    4.5. Fourier Sine and Cosine Series

    4.6. Double Fourier Series

    Chapter 5. Three Important Equations

    5.1. Introduction

    5.2. Laplace's Equation

    5.3. Derivation of the Heat Equation in One Dimension

    5.4. Derivation of the Wave Equation in One Dimension

    5.5. An Explicit Solution of the Wave Equation

    5.6. Converting Second-Order Partial Differential Equations to Standard Form

    Chapter 6. Sturm–Liouville Theory

    6.1. Introduction

    6.2. The Self-Adjoint Property of a Sturm–Liouville Equation

    6.3. Completeness of Eigenfunctions for Sturm–Liouville Equations

    6.4. Uniform Convergence of Fourier Series

    Chapter 7. Using Generating Functions to Solve Specialized Differential Equations

    7.1. Introduction

    7.2. Generating Function for Laguerre Polynomials

    7.3. Hermite's Differential Equation

    7.4. Generating Function for Legendre's Equation

    7.5. Generator for Bessel Functions of the First Kind

    Chapter 8. Separation of Variables in Cartesian Coordinates

    8.1. Introduction

    8.2. Solving Laplace's Equation on a Rectangle

    8.3. Laplace's Equation on a Cube

    8.4. Solving the Wave Equation in One Dimension by Separation of Variables

    8.5. Solving the Wave Equation in Two Dimensions in Cartesian Coordinates by Separation of Variables

    8.6. Solving the Heat Equation in One Dimension Using Separation of Variables

    8.7. Steady State of the Heat Equation

    8.8. Checking the Validity of the Solution

    Chapter 9. Solving Partial Differential Equations in Cylindrical Coordinates Using Separation of Variables

    9.1. Introduction

    9.2. The Solution to Bessel's Equation in Cylindrical Coordinates

    9.3. Solving Laplace's Equation in Cylindrical Coordinates Using Separation of Variables

    9.4. The Wave Equation on a Disk (The Drumhead Problem)

    9.5. The Heat Equation on a Disk

    Chapter 10. Solving Partial Differential Equations in Spherical Coordinates Using Separation of Variables

    10.1. An Example Where Legendre Equations Arise

    10.2. The Solution to Bessel's Equation in Spherical Coordinates

    10.3. Legendre's Equation and Its Solutions

    10.4. Associated Legendre Functions

    10.5. Laplace's Equation in Spherical Coordinates

    10.6. Rigid Rotor

    10.7. One-Dimensional Quantum Mechanical Oscillator

    10.8. The Hydrogen Atom

    Chapter 11. The Fourier Transform

    11.1. Introduction

    11.2. The Fourier Transform as a Decomposition

    11.3. The Fourier Transform From Fourier Series

    11.4. Some Properties of the Fourier Transform

    11.5. Solving Partial Differential Equations Using the Fourier Transform

    11.6. The Spectrum of the Negative Laplacian in One Dimension

    11.7. The Fourier Transform in Three Dimensions

    Chapter 12. The Laplace Transform

    12.1. Introduction

    12.2. Properties of the Laplace Transform

    12.3. Solving Differential Equations Using the Laplace Transform

    12.4. Solving the Heat Equation Using the Laplace Transform

    12.5. The Wave Equation and the Laplace Transform

    Chapter 13. Solving PDEs With Green's Functions

    13.1. Solving the Heat Equation Using Green's Function

    13.2. The Method of Images

    13.3. Green's Function for the Wave Equation

    13.4. Green's Function and Poisson's Equation

    Appendix 1

    Appendix 2

    Appendix 3

    Bibliography

    Index

    Copyright

    Academic Press is an imprint of Elsevier

    125 London Wall, London EC2Y 5AS, United Kingdom

    525 B Street, Suite 1800, San Diego, CA 92101-4495, United States

    50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States

    The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom

    Copyright © 2018 Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    Library of Congress Cataloging-in-Publication Data

    A catalog record for this book is available from the Library of Congress

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library

    ISBN: 978-0-12-814759-7

    For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals

    Publisher: Katey Birtcher

    Acquisition Editor: Katey Birtcher

    Editorial Project Manager: Lindsay Lawrence

    Production Project Manager: Bharatwaj Varatharajan

    Designer: Victoria Pearson

    Typeset by TNQ Books and Journals

    Dedication

    To Bessie, Katie, and Elizabeth. The lights of my life.

    Preface

    The major purposes of this book are to present partial differential equations (PDEs) and vector analysis at an introductory level. As such, it could be considered a beginning text in mathematical physics. It is also designed to provide a bridge from undergraduate mathematics to the first graduate mathematics course in physics, applied mathematics, or engineering. In these disciplines, it is not unusual for such a graduate course to cover topics from linear algebra, ordinary and partial differential equations, advanced calculus, vector analysis, complex analysis, and probability and statistics at a highly accelerated pace.

    In this text we study in detail, but at an introductory level, a reduced list of topics important to the abovementioned disciplines. In PDEs, we consider Green's functions, the Fourier and Laplace transforms, and how these are used to solve PDEs. We also study using separation of variables to solve PDEs in great detail. Our approach is to examine the three prototypical second-order PDEs—Laplace's equation, the heat equation, and the wave equation—and solve each equation with each method. The premise is that in doing so, the reader will become adept at each method and comfortable with each equation.

    The other prominent area of the text is vector analysis. While the usual topics are discussed, an emphasis is placed on viewing concepts rather than formulas. For example, we view the curl and gradient as properties of a vector field rather than as simply equations. A significant portion of this area deals with curvilinear coordinates to reinforce the idea of conversion of coordinate systems.

    Reasonable prerequisites for the course are a course in multivariable calculus, familiarity with ordinary differential equations to the point of being able to solve a second-order boundary problem with constant coefficients, and some experience with linear algebra.

    In dealing with ordinary differential equations, we emphasize the linear operator approach. That is, we consider the problem as being an eigenvalue/eigenvector problem for a self-adjoint operator. In addition to eliminating some tedious computations regarding orthogonality, this serves as a unifying theme and a more mature viewpoint.

    The level of the text generally lies between that of the classic encyclopedic texts of Boas and Kreyszig, the newer text by McQuarrie, and the PDE books of Weinberger and Pinsky. Topics such as Fourier series are developed in a mathematically rigorous manner. The section on completeness of eigenfunctions of a Sturm–Liouville problem is considerably more advanced than the rest of the text and can be omitted if one wishes to merely accept the result.

    The text is written at a level where it can be used as a self-contained reference as well as an introductory text. There was a concerted effort to avoid situations where filling in details of an argument would be a challenge. One thought in writing the text was that it would serve as a source for students in subsequent courses who felt, "I know I'm supposed to know how to derive this, but I don't." A couple of such examples are the fundamental solution of Laplace's equation and the spectrum of the Laplacian.

    The major changes from the first edition are, first, that a chapter on generating functions has been added. This gives a nice way of considering solutions to the Laguerre, Hermite, Legendre, and Bessel equations. Second, in-depth analyses of the rigid rotor, the one-dimensional quantum mechanical oscillator, and the hydrogen atom are presented. Also, more esoteric coordinate systems have been moved to an appendix.

    Chapter 1

    Preliminaries

    Abstract

    This chapter focuses on some of the important equations and techniques of mathematical physics. It is a fortuitous fact that many of the most important such equations are linear. The chapter describes the methods of transforming some important functions to other coordinate systems. The most common coordinate systems, besides Cartesian coordinates, are cylindrical and spherical coordinates. The reason for considering different coordinate systems is that many problems can be simplified if the appropriate coordinate system is used. For example, the most important partial differential equations in physics and mathematics—Laplace's equation, the heat equation, and the wave equation—can often be solved by separation of variables if the problem is analyzed using Cartesian, cylindrical, or spherical coordinates.

    Keywords

    Approximate identity; Bessel function; Bessel’s inequality; Cauchy integral formula; Curl and Laplacian; Dirac-δ function; Distribution; Divergence; Eigenvalues and eigenfunctions; Fourier coefficients; Gamma function; Gradient; Jacobian; Maclaurin series; Pointwise convergence; Power series expansion in integration; Power series; Principle of superposition; Self-adjoint operator; Spherical and cylindrical coordinates; Taylor series; Uniform convergence; Uniformly Cauchy

    1.1. Self-Adjoint Operators

    The purpose of this text is to study some of the important equations and techniques of mathematical physics. It is a fortuitous fact that many of the most important such equations are linear, and we can apply the well-developed theory of linear operators. We assume knowledge of basic linear algebra but review some definitions, theorems, and examples that will be important to us.

    Definition:

    A linear operator (or linear function) L from a vector space V to a vector space W is a function for which

    L(a1f1 + a2f2) = a1L(f1) + a2L(f2)

    for all f1, f2 ∈ V and scalars a1 and a2.

    One of the most important linear operators for us will be

    L(f) = a2(x)f″(x) + a1(x)f′(x) + a0(x)f(x),

    where a0(x), a1(x), and a2(x) are continuous functions.

    Definition:

    If L is a linear operator, then a nonzero vector f is an eigenvector of L with eigenvalue λ if

    L(f) = λf.

    Note that the zero vector cannot be an eigenvector, but 0 can be an eigenvalue.

    Example:

    For L = d/dx, we have

    d/dx e^(ax) = a e^(ax),

    so e^(ax) is an eigenvector of d/dx with eigenvalue a.

    An extremely important example is the operator d²/dx². Among its properties are

    d²/dx² sin(ax) = −a² sin(ax) and d²/dx² cos(ax) = −a² cos(ax).

    We leave it as Exercise 1 to show that if f is an eigenvector of d²/dx² with eigenvalue λ, then any nonzero constant multiple cf is also an eigenvector of d²/dx² with eigenvalue λ.
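As a quick numerical illustration (not part of the text), the eigenvalue relation for d²/dx² can be checked by approximating the second derivative of sin(ax) with central differences and comparing it against −a² sin(ax):

```python
import math

# Numerically check that sin(ax) is an eigenfunction of d^2/dx^2
# with eigenvalue -a^2, using a central-difference second derivative.
a = 3.0
h = 1e-4

def second_derivative(f, x, h=h):
    # Central-difference approximation to f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

f = lambda x: math.sin(a * x)
for x in (0.3, 1.1, 2.5):
    approx = second_derivative(f, x)
    exact = -a**2 * math.sin(a * x)
    assert abs(approx - exact) < 1e-4
```

The same check applied to cos(ax) gives the companion identity stated above.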

    Definition:

    An inner product (also called a dot product) on a vector space V with scalar field F (which is the real numbers ℝ or the complex numbers ℂ) is a function ⟨·, ·⟩ : V × V → F such that for all f, g, h ∈ V and a ∈ F

    ⟨f, g⟩ = ⟨g, f⟩* (where * denotes the complex conjugate),

    ⟨af, g⟩ = a⟨f, g⟩,

    ⟨f + g, h⟩ = ⟨f, h⟩ + ⟨g, h⟩,

    ⟨f, f⟩ ≥ 0,

    with equality if and only if f = 0.

    A vector space with an inner product is called an inner product space.

    If F = ℝ, the usual inner product for f = (f1, …, fn), g = (g1, …, gn) ∈ ℝⁿ is

    ⟨f, g⟩ = f1g1 + ⋯ + fngn.

    If the vector space is ℂⁿ, then we must modify the definition, because, for example, under this definition, if f = (1, i), then

    ⟨f, f⟩ = 1·1 + i·i = 1 + i² = 0.

    Thus, on ℂⁿ, for f = (f1, …, fn), g = (g1, …, gn), we define

    ⟨f, g⟩ = f1g1* + ⋯ + fngn*.

    We use the notation ⟨f, f⟩ = ‖f‖², which is interpreted as the square of the length of f, and ‖f − g‖ is the distance from f to g.

    We shall be working primarily with vector spaces consisting of functions that satisfy some property such as continuity or differentiability. In this setting, one usually defines the inner product using an integral. A common inner product is

    ⟨f, g⟩ = ∫_a^b f(x)g(x) dx,

    where a or b may be finite or infinite. There might be a problem with some vector spaces in that ⟨f, f⟩ = 0 with f ≠ 0. This problem can be overcome by a minor modification of the vector space or by restricting the functions to being continuous and will not affect our work. We leave it as Exercise 4 to show that the function defined above is an inner product.

    On some occasions it will be advantageous to modify the inner product above with a weight function w(x). If w(x) ≥ 0 on [a,b], then

    ⟨f, g⟩ = ∫_a^b f(x)g(x)w(x) dx

    is also an inner product, as we show in Exercise 5.

    Definition:

    A linear operator A on the inner product space V is self-adjoint if

    ⟨Af, g⟩ = ⟨f, Ag⟩ for all f, g ∈ V.

    Self-adjoint operators are prominent in mathematical physics. One example is the Hamiltonian operator. It is a fact (Stone's theorem) that energy is conserved if and only if the Hamiltonian is self-adjoint. Another example is shown below. Part of the significance of this example is due to Newton's law F  =  ma.

    Example:

    The operator L = d²/dx² is self-adjoint on the inner product space

    V = {f : f, f′, and f″ are continuous on [a, b], with f(a) = f(b) and f′(a) = f′(b)}

    with inner product

    ⟨f, g⟩ = ∫_a^b f(x)g(x) dx.

    We must show

    ⟨Lf, g⟩ = ⟨f, Lg⟩;

    that is,

    ∫_a^b f″(x)g(x) dx = ∫_a^b f(x)g″(x) dx.

    To do this, we integrate by parts twice. Let

    u = g(x), dv = f″(x) dx,

    so

    du = g′(x) dx, v = f′(x), and ∫_a^b f″(x)g(x) dx = f′(x)g(x)|_a^b − ∫_a^b f′(x)g′(x) dx.

    The periodicity of f and g forces f′(x)g(x)|_a^b = 0. Thus,

    ∫_a^b f″(x)g(x) dx = −∫_a^b f′(x)g′(x) dx.

    Integrating the integral on the right by parts with

    u = g′(x), dv = f′(x) dx, so that du = g″(x) dx and v = f(x),

    we have

    −∫_a^b f′(x)g′(x) dx = −g′(x)f(x)|_a^b + ∫_a^b f(x)g″(x) dx = ∫_a^b f(x)g″(x) dx.

    Notice that if [a, b] is of length 2π, then the set of functions {sin(nx), cos(nx) : n = 0, 1, 2, …} is a subset of V.
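The self-adjointness just proved can also be observed numerically. The sketch below (illustrative only; the particular periodic functions and the trapezoidal integrator are my choices, not the text's) checks that the integrals of f″g and fg″ over [0, 2π] agree:

```python
import math

# Check numerically that <f'', g> = <f, g''> for periodic f, g on [0, 2*pi],
# illustrating that d^2/dx^2 is self-adjoint on this space.
a, b, n = 0.0, 2.0 * math.pi, 20000

def integrate(func, a, b, n):
    # Composite trapezoidal rule.
    h = (b - a) / n
    total = 0.5 * (func(a) + func(b))
    for i in range(1, n):
        total += func(a + i * h)
    return total * h

f   = lambda x: math.sin(2 * x)
fpp = lambda x: -4.0 * math.sin(2 * x)                      # f''
g   = lambda x: math.sin(2 * x) + math.cos(3 * x)
gpp = lambda x: -4.0 * math.sin(2 * x) - 9.0 * math.cos(3 * x)  # g''

lhs = integrate(lambda x: fpp(x) * g(x), a, b, n)
rhs = integrate(lambda x: f(x) * gpp(x), a, b, n)
assert abs(lhs - rhs) < 1e-8   # both equal -4*pi here
```

Both integrals evaluate to −4π for this pair, so the agreement is a genuine (nonzero) check rather than 0 = 0.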

    We next prove two important facts about self-adjoint operators.

    Theorem:

    If A is a self-adjoint operator, then

    1. The eigenvalues of A are real;

    2. Eigenvectors of A with different eigenvalues are orthogonal; that is, their inner product is 0.

    Proof:

    1. Suppose that f is an eigenvector of A with eigenvalue λ. Then

        ⟨Af, f⟩ = ⟨λf, f⟩ = λ⟨f, f⟩

        and

        ⟨f, Af⟩ = ⟨f, λf⟩ = λ*⟨f, f⟩.

        Since A is self-adjoint, ⟨Af, f⟩ = ⟨f, Af⟩, and since ⟨f, f⟩ ≠ 0, we have λ = λ*, so λ is real.

    2. Suppose

        Af = λf and Ag = μg, with λ ≠ μ.

        Then

        ⟨Af, g⟩ = ⟨λf, g⟩ = λ⟨f, g⟩

        and

        ⟨f, Ag⟩ = ⟨f, μg⟩ = μ*⟨f, g⟩ = μ⟨f, g⟩,

        since μ is real. So

        λ⟨f, g⟩ = μ⟨f, g⟩; that is, (λ − μ)⟨f, g⟩ = 0,

        and thus ⟨f, g⟩ = 0.

    Example:

    We have

    ∫_0^{2π} sin(nx)cos(mx) dx = 0

    for m and n integers with m ≠ n. This is because sin(nx) and cos(mx) are eigenfunctions, with different eigenvalues −n² and −m², of the self-adjoint operator d²/dx² with the inner product defined above.

    We shall use the technique of the example above to prove the orthogonality of functions such as Bessel functions and Legendre polynomials without having to resort to tedious calculations.
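This orthogonality is easy to verify numerically. In the sketch below (illustrative, not from the text; the trapezoidal rule and the particular (n, m) pairs are my choices), the inner product of sin(nx) and cos(mx) over [0, 2π] comes out to essentially zero:

```python
import math

# Verify numerically that sin(n*x) and cos(m*x) are orthogonal on [0, 2*pi].
def inner(f, g, a=0.0, b=2.0 * math.pi, steps=20000):
    # <f, g> = integral of f*g over [a, b], composite trapezoidal rule.
    h = (b - a) / steps
    total = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for i in range(1, steps):
        x = a + i * h
        total += f(x) * g(x)
    return total * h

for n in (1, 2, 5):
    for m in (1, 3, 4):
        val = inner(lambda x: math.sin(n * x), lambda x: math.cos(m * x))
        assert abs(val) < 1e-8
```

The same `inner` helper can be reused to check orthogonality of other eigenfunction families once they appear.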

    Fourier Coefficients

    We now describe how to determine the representation of a given vector with respect to a given basis. That is, if {f1, f2, …} is a basis for the vector space V, and if f ∈ V, we want to find scalars a1, a2, … for which

    f = a1f1 + a2f2 + ⋯.

    If the basis satisfies the characteristic below, then this is easy.

    Definition:

    If {f1, f2, …} is a set of vectors from an inner product space for which

    ⟨fi, fj⟩ = 0 whenever i ≠ j,

    then {f1, f2, …} is called an orthogonal set. If, in addition,

    ⟨fi, fi⟩ = 1 for every i,

    then {f1, f2, …} is called an orthonormal set. A basis that is an orthogonal (orthonormal) set is called an orthogonal (orthonormal) basis.

    Theorem:

    If {f1, f2, …} is an orthogonal basis for the inner product space V, and if

    f = a1f1 + a2f2 + ⋯,

    then

    ai = ⟨f, fi⟩/⟨fi, fi⟩.

    Proof:

    We have

    ⟨f, fi⟩ = ⟨a1f1 + a2f2 + ⋯, fi⟩ = a1⟨f1, fi⟩ + a2⟨f2, fi⟩ + ⋯ = ai⟨fi, fi⟩,

    since ⟨fj, fi⟩ = 0 for j ≠ i. Thus

    ai = ⟨f, fi⟩/⟨fi, fi⟩.

    Note that if {f1, f2, …} is an orthonormal basis, then ai = ⟨f, fi⟩.
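The coefficient formula is easy to test numerically. In the sketch below (my construction, not the text's), the orthogonal set {sin(x), sin(2x), sin(3x)} on [0, 2π] is used to recover the coefficients of f(x) = 2 sin(x) + 3 sin(2x):

```python
import math

# Recover coefficients via a_i = <f, f_i> / <f_i, f_i> for the
# orthogonal set {sin(x), sin(2x), sin(3x)} on [0, 2*pi].
def inner(f, g, a=0.0, b=2.0 * math.pi, steps=20000):
    # Trapezoidal approximation to the integral of f*g over [a, b].
    h = (b - a) / steps
    total = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for i in range(1, steps):
        x = a + i * h
        total += f(x) * g(x)
    return total * h

f = lambda x: 2.0 * math.sin(x) + 3.0 * math.sin(2.0 * x)
coeffs = []
for k in (1, 2, 3):
    fk = lambda x, k=k: math.sin(k * x)
    coeffs.append(inner(f, fk) / inner(fk, fk))

# coeffs is approximately [2.0, 3.0, 0.0]
```

The third coefficient comes out to 0 because f has no sin(3x) component, exactly as the formula predicts.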

    Definition:

    The constants {a1, a2, …} in the theorem above are called the Fourier coefficients of f with respect to the basis {f1, f2, …}.

    Fourier coefficients are important because they provide the best approximation to a vector by a subset of an orthogonal basis in the sense of the following theorem.

    Theorem:

    Suppose f is a vector in an inner product space V, and {f1, f2, …} is an orthogonal basis for V. Let {c1, c2, …} be the Fourier coefficients of f with respect to {f1, f2, …}. Then

    ‖f − (c1f1 + ⋯ + cnfn)‖ ≤ ‖f − (d1f1 + ⋯ + dnfn)‖

    for any numbers di. Equality holds if and only if ci = di for every i = 1, …, n.

    Proof:

    We assume the constants are real, and the basis is orthonormal to simplify the notation. We have

    ‖f − (d1f1 + ⋯ + dnfn)‖² = ⟨f − (d1f1 + ⋯ + dnfn), f − (d1f1 + ⋯ + dnfn)⟩ = ‖f‖² − 2(d1⟨f, f1⟩ + ⋯ + dn⟨f, fn⟩) + ‖d1f1 + ⋯ + dnfn‖².

    (1)

    Now,

    ‖d1f1 + ⋯ + dnfn‖² = d1² + ⋯ + dn²,

    since {f1, f2, …} is an orthonormal basis, as we verify in Exercise 10. Also, ⟨f, fi⟩ = ci.

    Thus, the right-hand side of Eq. (1) is

    ‖f‖² − 2(d1c1 + ⋯ + dncn) + (d1² + ⋯ + dn²) = ‖f‖² − (c1² + ⋯ + cn²) + [(c1 − d1)² + ⋯ + (cn − dn)²].

    Following the first steps in the argument above, we get

    ‖f − (c1f1 + ⋯ + cnfn)‖² = ‖f‖² − (c1² + ⋯ + cn²).

    (2)

    Finally,

    ‖f − (d1f1 + ⋯ + dnfn)‖² = ‖f − (c1f1 + ⋯ + cnfn)‖² + [(c1 − d1)² + ⋯ + (cn − dn)²] ≥ ‖f − (c1f1 + ⋯ + cnfn)‖²,

    with equality if and only if ci = di for all i = 1, …, n.

    Note that from Eq. (2), we have Bessel's inequality

    c1² + ⋯ + cn² ≤ ‖f‖².
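Bessel's inequality can be watched in action numerically. In this sketch (my example, not the text's), f(x) = x on [−π, π] and the orthonormal functions e_k(x) = sin(kx)/√π produce partial sums c1² + ⋯ + cn² that increase toward, but never exceed, ‖f‖² = 2π³/3:

```python
import math

# Bessel's inequality for f(x) = x on [-pi, pi] with the orthonormal
# functions e_k(x) = sin(k*x)/sqrt(pi).
def inner(f, g, a=-math.pi, b=math.pi, steps=20000):
    # Trapezoidal approximation to the integral of f*g over [a, b].
    h = (b - a) / steps
    total = 0.5 * (f(a) * g(a) + f(b) * g(b))
    for i in range(1, steps):
        x = a + i * h
        total += f(x) * g(x)
    return total * h

f = lambda x: x
norm_sq = inner(f, f)          # ||f||^2 = 2*pi^3/3
partial = 0.0
for k in range(1, 50):
    e_k = lambda x, k=k: math.sin(k * x) / math.sqrt(math.pi)
    c_k = inner(f, e_k)
    partial += c_k ** 2
    assert partial <= norm_sq  # Bessel's inequality holds at every stage
```

Here c_k² = 4π/k², so the partial sums climb toward 2π³/3 without reaching it in finitely many terms.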

    Example:

    In this example, we demonstrate an application of eigenvalues and eigenfunctions (eigenvectors) to solve a problem in mechanics.

    Suppose that we have a body of mass m1 attached to a spring whose spring constant is k1. See Fig. 1.1.1. We assume that the surface is frictionless. If x1 is the displacement of the spring from equilibrium, then, according to Hooke's law, the spring creates a force F = −k1x1.

    Figure 1.1.1

    Then

    m1 d²x1/dt² = −k1x1.

    Now consider the coupled system shown in Fig. 1.1.2.

    We use the convention that force is positive if it pushes a body to the right.

    We suppose that both springs are under no tension if the masses are at points a and b.

    Suppose that the masses are at points x1 and x2.

    Force on mass m1:

    1. Force due to spring 1: If x1 > a, then spring 1 is stretched an amount x1 − a and pulls m1 to the left. If the spring constant of spring 1 is k1, then the force on m1 due to spring 1 is −k1(x1 − a).

    2. Force due to spring 3: If x2 − x1 < b − a, then spring 3 is compressed an amount (b − a) − (x2 − x1). If the spring constant of spring 3 is k3, then spring 3 pushes the body m1 to the left with force −k3[(b − a) − (x2 − x1)].
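The preview ends before the derivation is completed, but the eigenvalue approach the example is building toward can be sketched. Assuming, purely for illustration, equal masses m and equal spring constants k1 = k2 = k3 = k (these simplifications are mine, not the text's), the displacements u1 = x1 − a and u2 = x2 − b satisfy m u″ = Ku with K = [[−2k, k], [k, −2k]], and the normal-mode frequencies come from the eigenvalues of K:

```python
import math

# Normal modes of the coupled system, assuming equal masses m and equal
# spring constants k.  With u1 = x1 - a, u2 = x2 - b, Newton's law gives
# m*u'' = K*u, K = [[-2k, k], [k, -2k]]; frequencies satisfy omega^2 = -lambda/m.
m, k = 1.0, 1.0
K = [[-2.0 * k, k],
     [k, -2.0 * k]]

# Eigenvalues of a symmetric 2x2 matrix via the quadratic formula.
tr = K[0][0] + K[1][1]
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0   # -k and -3k

omega1 = math.sqrt(-lam1 / m)   # in-phase mode:      omega^2 = k/m
omega2 = math.sqrt(-lam2 / m)   # out-of-phase mode:  omega^2 = 3k/m
```

The two eigenvectors are (1, 1) and (1, −1): the masses swinging together and against each other, which is exactly the payoff of treating the mechanics problem as an eigenvalue problem.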
