Fundamentals of Mathematical Physics
About this ebook

Indispensable for students of modern physics, this text provides the necessary background in mathematics for the study of electromagnetic theory and quantum mechanics. Clear discussions explain the particulars of vector algebra, matrix and tensor algebra, vector calculus, functions of a complex variable, integral transforms, linear differential equations, and partial differential equations.
This volume collects under one cover the mathematical ideas formerly available only by taking many separate courses. It offers in-depth treatments, with a minimum of mathematical formalism. Suitable for students of physics, allied sciences, and engineering, its only prerequisites are a course in introductory physics and a course in calculus. Examples at the end of each chapter reinforce many important techniques developed in the text, and numerous graded problems make this volume suitable for independent study.
Language: English
Release date: January 16, 2013
ISBN: 9780486131603
    Book preview

    Fundamentals of Mathematical Physics - Edgar A. Kraut

    Copyright

    Copyright © 1967, 1995 by Edgar A. Kraut.

    All rights reserved.

    Bibliographical Note

    This Dover edition, first published in 2007, is an unabridged republication of the work originally published in 1967 by McGraw-Hill, Inc., New York. The author has provided a new appendix (Appendix C) for this edition on page 459, as well as an Errata list on page viii.

    International Standard Book Number

    9780486131603

    Manufactured in the United States by Courier Corporation

    45809102

    www.doverpublications.com

    preface

    The primary aim of this book is to provide the student of physics with most of the mathematical tools required for the study of electromagnetic theory and quantum mechanics.

    The only preparation expected of the reader is the completion of the first two years of college calculus and physics.

    At UCLA, the material on matrix algebra, vector analysis, Fourier series, and ordinary differential equations is given in the physics department in a one-semester course entitled Mathematical Methods of Physics. This course is normally taken by physics majors during their junior year. The course is also taken by many undergraduate and graduate students enrolled in other science departments. All the material in the book can be covered in two semesters or three quarters.

    As given at UCLA, the mathematical methods course provides the tools which may then be applied to the context of other physics courses. If the material in this book is not to be used this way, the instructor should provide the physical motivation himself. Some physics is involved in the examples and in the problems at the end of each chapter. This is particularly true of the second half of the book. The problems range from trivial to difficult. Even the best student will find some of them challenging. The student who does the less trivial exercises will usually find that he has learned something worth remembering.

    Chapter 1 is essentially a review of material which the reader has probably seen many times before. The book really starts with Chapter 2. The aim here is to permit the reader to rapidly acquire sufficient skill in manipulating real and complex matrices so that they will not constitute a stumbling block when he first encounters them in courses in classical and quantum mechanics. Most physics students are curious about what tensors are and what one does with them. I have tried to satisfy this curiosity in Chapter 2 without getting too deeply involved in details. Vector calculus, discussed in Chapter 3, is one of the most important new mathematical tools that an aspiring physics student must learn. In writing this chapter I have tried particularly to emphasize the idea that integral formulas relate the values taken on by the derivative of a function throughout a domain to the values which the function itself assumes on the boundary of the domain. This approach permits a unified and systematic development of integral formulas in more than one dimension. Topological restrictions on integral formulas have been treated in somewhat more detail than is customary in elementary texts. In physical applications, volume, surface, and line integrals are often taken over regions which move and deform with time. It is necessary to know how to differentiate such integrals with respect to time and how the appropriate formulas are derived. Finally, integral theorems involving regions of discontinuity must be suitably modified in order for the theorems to remain valid. It is in this connection that boundary conditions arise naturally. Though seldom discussed in elementary works, the modifications are carefully outlined here.

    A short treatment of complex variables appears in Chapter 4. The central idea in this chapter is that a function of a complex variable can be represented by a power series, or as a contour integral, and that such a function provides a conformal mapping of one surface onto another. The fact that these three characteristics, though equivalent, were really the basis of three separate developments of the subject is not often emphasized in contemporary treatments, which tend to present the subject as a single unified entity. However, I think some historical remarks help the beginner to understand complex analysis more easily.

    I have chosen to emphasize Riemann surfaces because they seem to cause beginners a lot of trouble. It is pointed out that a Riemann surface need not necessarily be a stack of flat sheets and that a closed curved surface such as a torus can also be a Riemann surface. Contour integration is also discussed in some detail, and a point is made of the fact that it may be easier to evaluate some integrals by using those residues outside a closed contour instead of those inside.

    Chapter 5 is a discussion of finite- and infinite-range integral transforms. The concept of a finite integral transform may not be the shortest way to present Fourier series; however, it has many advantages where applications are involved. By use of integration by parts with appropriate kernels, the solution of many problems in ordinary and partial differential equations is reduced to a standard procedure: evaluating a certain transform and then inverting it.

    Chapter 6 treats ordinary differential equations, particularly those which give rise to the special functions of mathematical physics. The properties of most of these special functions are examined, and their graphs are given. It is pointed out that most of the special functions of mathematical physics are solutions of a Sturm-Liouville differential equation and that they consequently enjoy certain important orthogonality properties. The last chapter, Chapter 7, provides an opportunity to apply all previous work to the solution of partial differential equations. I have tried to unify this chapter around the idea that there is a general method for handling problems involving linear partial differential equations. This method involves two steps. In the first step, one formally represents the solution of a partial differential equation in terms of a Green’s function with the aid of one or another of the integral theorems developed in Chapter 3. Next, one calculates this Green’s function for the particular coordinate system of interest by taking integral transforms or by using expansions in terms of appropriate special functions. This procedure is illustrated in some detail for the classical partial differential equations of mathematical physics, and many useful identities are derived in the process.

    In closing I would like to point out that a course in the spirit of this book is not a substitute for any course in pure mathematics. The student has much to gain in maturity and perspective from a rigorous mathematical study of any of the topics dealt with here. Certainly, he should study at least one of them in a formal junior or senior mathematics course.

    I would like to thank my students and colleagues for numerous useful suggestions, comments, and criticisms. Additional comments will be most gratefully received.

    EDGAR A. KRAUT

    errata

    Table of Contents

    Title Page

    Copyright Page

    preface

    errata

    CHAPTER ONE - vector algebra

    CHAPTER TWO - matrix and tensor algebra

    CHAPTER THREE - vector calculus

    CHAPTER FOUR - functions of a complex variable

    CHAPTER FIVE - integral transforms

    CHAPTER SIX - linear differential equations

    CHAPTER SEVEN - partial differential equations

    references

    APPENDIX A - infinite series

    APPENDIX B - power-series solution of differential equations

    APPENDIX C - a brief review of advances in matrix methods in physics and engineering

    index

    CHAPTER ONE

    vector algebra

    INTRODUCTION

    In elementary physics one learns that a vector is a quantity which has a magnitude and a direction. The rule for adding vectors is then chosen so as to be useful in describing the motion of a material object. For example, consider three points, say O, P1, and P2, which are not necessarily in a straight line. If a body is displaced in a straight line from O to P1 and then along another straight line from P1 to P2, the final position of the body is at P2. However, the body may also be placed at P2 by displacing it along the straight line directly joining O to P2. The single displacement OP2 represents the combined effect of the two displacements OP1 and P1P2. It is convenient to represent a displacement such as OP1 by an arrow of proper length and direction (Fig. 1-1).

    If P1P2 is applied after OP1, we represent it by placing the origin of the arrow P1P2 at the end point of OP1; the combined displacement OP2 = OP1 + P1P2 is then the arrow leading from the origin of OP1 to the end point of P1P2. This is the diagonal of the parallelogram with sides OP1 and P1P2. This rule for finding OP1 + P1P2 is the familiar parallelogram law of vector addition.

    A displacement OP1 = x may be doubled to give a new displacement 2x or halved to give a displacement ½x. A negative multiple such as −2x represents a displacement twice as large as x but in the direction opposite to x. In general, if x is multiplied by any real number c, the result cx represents a displacement |c| times as large as x. The direction of cx is along x for c > 0 and opposite to x for c < 0.

    In the plane, a vector A may be represented by an arrow with origin at (0,0) and end point at some suitable point (a1,a2). Then the vector sums and scalar multiples may be computed in terms of the coordinates, using the rules

    A + B = (a1 + b1, a2 + b2)    (1-1)

    cA = (ca1, ca2)    (1-2)

    It follows from Eqs. (1-1) and (1-2) that the vectors x, y, and z obey the algebraic rules

    x + y = y + x    (1-3)

    x + (y + z) = (x + y) + z    (1-4)

    Figure 1-1 Parallelogram law of vector addition.

    These rules are used to define vectors in an abstract way as follows. A vector space is defined as a set S whose elements are called vectors. The vectors are characterized by the following axioms:

    1. To every pair, x and y, of vectors in S there corresponds a vector x + y, called the sum of x and y, such that

    (a) Addition is commutative, x + y = y + x,

    (b) Addition is associative, x + (y + z) = (x + y) + z,

    (c) There exists in S a unique vector 0, called a null vector, such that x + 0 = x for every vector x, and

    (d) To every vector x in S there corresponds a unique vector −x such that x + (−x) = 0.

    2. To every pair, c and x, where c is a real (or complex) number and x is a vector in S, there corresponds a vector cx in S, called the product of c and x, in such a way that

    (a) Multiplication by real or complex numbers is associative, b(cx) = (bc)x,

    (b) 1x = x for every vector x,

    (c) Multiplication is distributive with respect to vector addition, c(x + y) = cx + cy, and

    (d) Multiplication by vectors is distributive with respect to arithmetical addition, (b + c)x = bx + cx.

    If the numbers c entering into axiom 2 are all real, then the set S is a real vector space; if the numbers c are complex, then S is called a complex vector space. Suppose that the numbers c are all real. Let Rn, n = 1,2,3, ... , be the set of all ordered n-tuples of real numbers. If x = (x1,x2, ... ,xn) and y = (y1,y2, ... ,yn) are elements of Rn, then we define

    x + y = (x1 + y1, x2 + y2, ... , xn + yn)    (1-5)

    cx = (cx1, cx2, ... , cxn)    (1-6)

    0 = (0, 0, ... , 0)    (1-7)

    −x = (−x1, −x2, ... , −xn)    (1-8)

    Figure 1-2 Geometric representation of the two-dimensional vector A = (a,b).

    It is easy to verify that all the axioms of 1 and 2 are satisfied. Therefore Rn is a real vector space. One usually refers to Rn as an n-dimensional real coordinate space.

    In summary, we have two completely equivalent ways of looking at vectors. We can regard vectors as abstract entities which combine with one another according to the axioms of 1 and 2, or we can regard vectors as ordered n-tuples of real numbers which obey the laws (1-5) to (1-8). The latter viewpoint is closer to the original geometric idea of a vector, and for this reason we shall adopt it here.
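
    The n-tuple viewpoint is easy to make concrete. The following minimal Python sketch (the function names are ours, chosen for illustration) implements the rules (1-5) and (1-6) for ordered n-tuples and spot-checks two of the axioms:

        # Vectors as ordered n-tuples of real numbers.
        def vec_add(x, y):
            # rule (1-5): componentwise addition
            return tuple(a + b for a, b in zip(x, y))

        def scalar_mul(c, x):
            # rule (1-6): multiplication by a scalar
            return tuple(c * a for a in x)

        x, y, z = (1.0, 2.0), (3.0, 4.0), (5.0, 6.0)
        assert vec_add(x, y) == vec_add(y, x)                          # axiom 1(a)
        assert vec_add(x, vec_add(y, z)) == vec_add(vec_add(x, y), z)  # axiom 1(b)
        assert scalar_mul(2.0, vec_add(x, y)) == vec_add(scalar_mul(2.0, x), scalar_mul(2.0, y))  # axiom 2(c)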

    1-1 DEFINITIONS

    Scalars

    A scalar is a single real number, and a complex scalar is a single complex number.

    Vectors

    A two-dimensional vector (Fig. 1-2) is an ordered pair of real numbers such as (1,2) or (a,b). Geometrically, the vector (a,b) can be thought of as the position vector of the point x = a, y = b in a two-dimensional rectangular Cartesian coordinate system in which x is measured along the abscissa and y is measured along the ordinate. The vector (a,b) is represented by the symbol

    A = (a,b)    (1-9)

    The letters a and b, respectively, are called the first and second or x and y components of the vector A.

    Figure 1-3 Geometric representation of the three-dimensional vector A = (a,b,c).

    A three-dimensional vector (Fig. 1-3) is an ordered triple of real numbers such as (1,2,3) or (a,b,c). Geometrically, the vector (a,b,c) is thought of as the position vector of the point x = a, y = b, z = c in a three-dimensional rectangular Cartesian coordinate system whose axes are labeled x, y, and z. The vector (a,b,c) is represented by the symbol

    A = (a,b,c)    (1-10)

    The letters a, b, and c are called the x, y, and z components of the vector A.

    An n-dimensional vector is an ordered set of n real numbers such as (1,2,3, ... ,n) or (a1,a2, ... ,an). Geometrically, the vector (a1,a2, ... ,an) is thought of as the position vector of the point x1 = a1, x2 = a2, ... , xn = an, in an n-dimensional rectangular Cartesian coordinate system with axes labeled x1, x2, ... , xn. The vector (a1,a2, ... ,an) is represented by the symbol

    A = (a1, a2, ... , an)    (1-11)

    The letter a1 is called the x1 component of A, a2 is called the x2 component of A, and so on.

    1-2 EQUALITY OF VECTORS AND NULL VECTORS

    If A = (a1,a2, ... ,an) and B = (b1,b2, . . . ,bn), then

    A = B

    if and only if

    ai = bi,    i = 1, 2, ... , n    (1-12)

    Figure 1-4 Parallelogram law of vector addition.

    If all the components of a vector are zero, then it is called a zero vector or null vector, thus:

    0 = (0,0)    (1-13)

    or

    0 = (0,0,0)    (1-14)

    or

    0 = (0, 0, ... , 0)    (1-15)

    1-3 VECTOR OPERATIONS

    Vectors can be added to, subtracted from, and multiplied by other vectors. Vectors can also be multiplied by scalars. If A = (a,b) and B = (c,d), then the addition of vectors A and B is defined in terms of the addition of the components of their respective representations (a,b) and (c,d), thus:

    A + B = (a + c, b + d)    (1-16)

    In general, if A = (a1, a2, ... , an) and B = (b1, b2, ... , bn), then

    A + B = (a1 + b1, a2 + b2, ... , an + bn)    (1-17)

    The student should verify for himself that the definition of vector addition given in Eq. (1-1) is consistent with the parallelogram law of vector addition (Fig. 1-4) taught in elementary physics courses.

    If A = (a1,a2, ... ,an), then the norm, magnitude, or length of A is defined by

    |A| = √(a1² + a2² + ... + an²)    (1-18)

    If λ is a scalar, then multiplication of the vector A by the scalar λ is defined by

    λA = (λa1, λa2, ... , λan)    (1-19)
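
    As a quick numerical check of Eqs. (1-18) and (1-19), note that scaling a vector by λ scales its norm by |λ|. A short Python sketch (the names are illustrative):

        import math

        def norm(a):
            # Eq. (1-18): length of an n-dimensional vector
            return math.sqrt(sum(ai * ai for ai in a))

        A = (3.0, 4.0)
        assert norm(A) == 5.0
        B = tuple(-2.0 * ai for ai in A)        # Eq. (1-19) with λ = −2
        assert norm(B) == 2.0 * norm(A)         # |λA| = |λ| |A|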

    Scalar product

    The scalar or dot product of A = (a1,a2, . . . , an) and B = (b1,b2, . . . , bn)

    Figure 1-5 The scalar product A · B.

    is defined by (Fig. 1-5)

    A · B = a1b1 + a2b2 + ... + anbn    (1-20)

    Notice that

    A · A = |A|²    (1-21)

    DEFINITION: The cosine of the angle between A and B is defined by

    cos (A,B) = (A · B)/(|A| |B|)    (1-22)

    Thus

    A · B = |A| |B| cos (A,B)    (1-23)
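
    For instance, Eqs. (1-20) and (1-22) give 45° for the angle between (1,0) and (1,1). A minimal Python sketch:

        import math

        def dot(a, b):
            # Eq. (1-20): scalar product
            return sum(ai * bi for ai, bi in zip(a, b))

        def cos_angle(a, b):
            # Eq. (1-22): cosine of the angle between a and b
            return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

        theta = math.degrees(math.acos(cos_angle((1.0, 0.0), (1.0, 1.0))))
        print(theta)   # 45.0, up to rounding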

    Vector product

    The vector or cross product of a pair of three-dimensional vectors such as A = (a1,a2,a3) and B = (b1,b2,b3) is the vector defined by (Fig. 1-7)

    A × B = (a2b3 − a3b2, a3b1 − a1b3, a1b2 − a2b1)    (1-24)

    The vector product is defined only for three-dimensional vectors.

    The student should now verify the following properties of the cross product:

    A × B = −(B × A)    (1-25)

    A × A = 0    (1-26)

    A · (A × B) = 0    (1-27)

    B · (A × B) = 0    (1-28)
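
    These properties are easy to spot-check numerically on sample vectors (a check, of course, not a proof). A Python sketch:

        def cross(a, b):
            # Eq. (1-24): vector product of three-dimensional vectors
            return (a[1]*b[2] - a[2]*b[1],
                    a[2]*b[0] - a[0]*b[2],
                    a[0]*b[1] - a[1]*b[0])

        def dot(a, b):
            return sum(ai * bi for ai, bi in zip(a, b))

        A, B = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
        assert cross(A, B) == tuple(-c for c in cross(B, A))   # (1-25)
        assert cross(A, A) == (0.0, 0.0, 0.0)                  # (1-26)
        assert dot(A, cross(A, B)) == 0.0                      # (1-27)
        assert dot(B, cross(A, B)) == 0.0                      # (1-28)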

    The vectors A and B are called parallel (antiparallel) if there is a positive (negative) scalar c such that

    B = cA    (1-29)

    Figure 1-6 The mirror image of a right-handed coordinate system is a left-handed coordinate system.

    If A and B are neither parallel nor antiparallel and are not zero vectors, then Eqs. (1-22), (1-27), and (1-28) show that A × B is perpendicular to both A and B. Therefore, A × B is perpendicular to the plane determined by A and B. The ordered triple of vectors {A,B,C} is called a right-handed triple if and only if

    C · (A × B) > 0    (1-30)

    The ordered triple of vectors {A,B,C} is called a left-handed triple if and only if

    C · (A × B) < 0    (1-31)

    If C = A × B and C is not a zero vector, then the triple {A,B,C} is right-handed. The vector C defines the direction in which a right-handed screw will advance if its head is turned from A to B. This is the basis of the right-hand rule of elementary physics.

    Suppose now that the vectors A and B determine a plane. Choose a set of rectangular Cartesian coordinate axes labeled x, y, and z in such a manner that the vectors A and B lie in the x, y plane. Let the x axis be chosen so that it coincides with the direction of A. Then

    A = (|A|, 0, 0)    (1-32)

    B = (|B| cos θ, |B| sin θ, 0)    (1-33)

    The vector A has only a single component directed along the x axis. This component has a magnitude equal to the length of vector A. Vector B has no z component since by definition it lies in the x, y plane. It does have x and y components which are equal to |B|cos θ and |B| sin θ, respectively. The angle θ is measured from the x axis to the vector B in the x, y plane. Since A lies along the x axis, the angle θ is also the angle between the vectors A and B. In this special (x,y,z) coordinate system Eqs. (1-24), (1-32), and (1-33) give

    A × B = (0, 0, |A| |B| sin θ)    (1-34)

    Figure 1-7 The vector product A × B.

    Writing

    n = (0,0,1)    (1-35)

    and noting Eq. (1-19), we obtain

    A × B = |A| |B| sin (A,B) n    (1-36)

    Notice that the vector n is perpendicular to both A and B and thus to the x, y plane. In selecting the angle from A to B to be used in computing sin (A,B) the indeterminacy arising from the possibility of choosing either the interior or exterior angles between A and B is eliminated by requiring that the triple {A,B,n} be right-handed. Then if |A| > 0, |B| > 0, and A and B are neither parallel nor antiparallel, (1-30) gives

    sin (A,B) > 0    (1-37)

    Thus we always choose the interior angle between vectors A and B in computing Eq. (1-36). If A and B are parallel or antiparallel, the vector product vanishes as a result of Eqs. (1-25) and (1-26).

    The scalar product of Eqs. (1-32) and (1-33) gives

    A · B = |A| |B| cos θ    (1-38)

    in agreement with Eq. (1-22). In obtaining Eqs. (1-36) and (1-38) we have used a special system of coordinates in which A has only one nonzero component and B only two. The scalar and vector products, however, depend only upon the lengths of the two vectors and the angle between them and thus not upon the coordinate system used to give their representations.
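
    This coordinate independence can itself be checked numerically: rotating both vectors with the same rotation matrix changes their components but leaves A · B and |A × B| unchanged. A sketch assuming NumPy is available:

        import numpy as np

        A = np.array([1.0, 2.0, 0.0])
        B = np.array([3.0, -1.0, 0.5])
        t = 0.7   # an arbitrary rotation angle about the z axis
        R = np.array([[np.cos(t), -np.sin(t), 0.0],
                      [np.sin(t),  np.cos(t), 0.0],
                      [0.0,        0.0,       1.0]])
        assert np.isclose(np.dot(R @ A, R @ B), np.dot(A, B))
        assert np.isclose(np.linalg.norm(np.cross(R @ A, R @ B)),
                          np.linalg.norm(np.cross(A, B)))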

    1-4 EXPANSION OF VECTORS

    To expand a vector means to represent it as the sum of some other vectors. As it stands, this definition is not sufficiently precise. Clearly, a displacement vector in the x, y plane can be regarded as the resultant of many different pairs of displacement vectors, which also lie in the x, y plane. On the other hand, a vector perpendicular to the x, y plane can never be represented as the sum of vectors lying entirely in the x, y plane. It seems that in the first case the vectors in the x, y plane provide too many expansions, while in the second case they provide no expansions at all.

    We can eliminate the problem of too many expansions by selecting certain vectors and insisting that all expansions be made with these. The problem of no expansions can be avoided by making sure that the vector we are trying to expand lies in the right space. In order to say this in a precise way several new terms must be introduced.

    Linear independence

    DEFINITION:

    The vectors {i1,i2, . . . ,in } are called linearly independent if and only if the only solution of

    c1i1 + c2i2 + ... + cnin = 0    (1-39)

    is

    c1 = c2 = ... = cn = 0    (1-40)

    Otherwise they are linearly dependent.

    Two immediate consequences of this definition are that if {i1,i2, . . . ,in} is a linearly independent set of vectors, then no ik can be a zero vector, and no ik can be a linear combination of the preceding ones. (The student should prove these assertions.) The next concept we shall introduce has to do with generalizing the idea of the right space.

    Linear manifold

    If a subset M of the vector space S is such that, for all scalars a and b, M contains the vector ax + by whenever it contains the vectors x and y, then M is a linear manifold.

    REMARK:

    Clearly the x, y plane in (x,y,z) space is a linear manifold. In fact, any plane passing through the origin in R³ space is a linear manifold. (A linear manifold in Rn for n ≥ 4 is called a hyperplane.) It follows from the definition that a linear manifold must always contain the null vector, for it contains x - x whenever it contains x, where x is any vector in M.

    A set of vectors {x1,x2, . . . ,xn} in M is said to span M if every vector A in M can be represented as a linear combination of {x1,x2, ... ,xn}; i.e., there are scalars {c1,c2, ... ,cn} depending on A such that

    A = c1x1 + c2x2 + ... + cnxn    (1-41)

    We see that when A lies in a linear manifold, then A can be expanded in terms of the vectors which span the manifold. However, the expansion of A is not yet unique. To make it unique we must assume not only that A lies in the manifold spanned by the vectors {x1,x2, . . . ,xn}, but also that the vectors {x1,x2, . . . xn} are linearly independent.

    When the vectors {x1,x2, ... ,xn} span M and are also linearly independent, then we say that they form a basis for M. In this case the representation (1-41) is unique. To see this one notes that if the representation is not unique, there can be a second such representation,

    A = c1′x1 + c2′x2 + ... + cn′xn    (1-42)

    As a result, subtracting Eq. (1-41) from Eq. (1-42) gives

    (c1 − c1′)x1 + (c2 − c2′)x2 + ... + (cn − cn′)xn = 0    (1-43)

    Since the basis vectors are assumed linearly independent,

    ck = ck′,    k = 1, 2, ... , n    (1-44)

    Therefore Eq. (1-41) must be unique. We have now solved the expansion problem we posed in the beginning. We have shown that any arbitrary vector lying in a linear manifold can be uniquely expanded in any basis of the manifold. Thus in our initial example, we should choose for a basis two particular vectors which span the x, y plane and which are linearly independent. The right space for an arbitrary vector to lie in, in order to have a unique expansion, is then the x, y plane itself. This example is geometrically obvious in R³ but not quite as obvious in Rn.
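
    In coordinates, finding the expansion coefficients of a vector A in a given basis amounts to solving a linear system whose columns are the basis vectors; linear independence is precisely what makes that system uniquely solvable. A NumPy sketch:

        import numpy as np

        x1 = np.array([1.0, 1.0])
        x2 = np.array([1.0, -1.0])            # x1, x2: a basis for the plane
        M = np.column_stack([x1, x2])         # invertible because the basis is independent
        A = np.array([3.0, 1.0])
        c = np.linalg.solve(M, A)             # the unique coefficients of Eq. (1-41)
        assert np.allclose(c[0] * x1 + c[1] * x2, A)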

    Dimension

    The linear manifold formed by the x, y plane has a basis consisting of two vectors, so we say that it is two-dimensional. In general, a linear manifold having a basis which contains n vectors is called "n-dimensional." As a consequence of the definition of dimension, any n-dimensional manifold must contain n linearly independent vectors which form the basis of the manifold. One can show that any set of n + 1 vectors in an n-dimensional linear manifold must be linearly dependent.

    EXAMPLE 1-1:

    Rn. The n-dimensional real coordinate space Rn is obviously a linear manifold since if x = (x1,x2, ... ,xn) and y = (y1,y2, . . . ,yn) are ordered n-tuples of numbers belonging to Rn, then ax + by must also be an ordered n-tuple of numbers. Therefore it must also belong to Rn.

    The ordered n-tuples

    e1 = (1, 0, 0, ... , 0),  e2 = (0, 1, 0, ... , 0),  ... ,  en = (0, 0, 0, ... , 1)    (1-45)

    are a set of vectors which span Rn because any vector

    A = (a1, a2, ... , an)

    in Rn is an ordered n-tuple, and any n-tuple can always be written as

    A = a1(1, 0, ... , 0) + a2(0, 1, ... , 0) + ... + an(0, 0, ... , 1)    (1-46)

    i.e., as

    A = a1e1 + a2e2 + ... + anen    (1-47)

    Furthermore, the vectors {e1,e2, . . . ,en} are linearly independent since Eq. (1-45) requires that the only solution of

    c1e1 + c2e2 + ... + cnen = (c1, c2, ... , cn) = 0    (1-48)

    be

    c1 = 0, c2 = 0, . . . , cn = 0

    Hence the set of vectors {e1,e2, ... ,en} spans Rn and is linearly independent. Thus they form a basis for Rn. Since the basis for Rn contains n vectors, Rn is an n-dimensional linear manifold by definition.

    The scalar product rule (1-20) applied to {e1,e2, ... ,en} shows that

    ei · ej = 1 if i = j,    ei · ej = 0 if i ≠ j    (1-49)

    Thus each vector in the basis has unit length and is orthogonal to every vector of the basis, except itself. A basis satisfying Eq. (1-49) is called an orthonormal basis. Choosing an orthonormal basis in a linear manifold corresponds to introducing a set of rectangular Cartesian coordinates in the manifold and then expressing any vector in the manifold in terms of its components along the coordinate axes.

    In Rn any arbitrary vector A has a unique expansion

    A = a1e1 + a2e2 + ... + anen    (1-50)

    in terms of the orthonormal basis (1-45). The expansion coefficients ak are obtained by projecting A along each coordinate axis,

    ak = A · ek,    k = 1, 2, ... , n    (1-51)
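
    With an orthonormal basis no linear system need be solved; by Eq. (1-51) each coefficient is a single dot product. A short NumPy sketch using a rotated orthonormal basis of R³:

        import numpy as np

        s = 1.0 / np.sqrt(2.0)
        e1 = np.array([s, s, 0.0])
        e2 = np.array([s, -s, 0.0])
        e3 = np.array([0.0, 0.0, 1.0])        # an orthonormal basis of R³
        A = np.array([2.0, -1.0, 4.0])
        a = [np.dot(A, ek) for ek in (e1, e2, e3)]          # Eq. (1-51)
        assert np.allclose(a[0]*e1 + a[1]*e2 + a[2]*e3, A)  # Eq. (1-50)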

    EXAMPLE 1-2:

    R³. Any vector A in R³ is an ordered triple A = (a1,a2,a3) and can be written as

    A = a1(1,0,0) + a2(0,1,0) + a3(0,0,1)    (1-52)

    or as

    A = a1e1 + a2e2 + a3e3    (1-53)

    Figure 1-8 Expansion of vector A in terms of an orthonormal basis {e1,e2,e3}.

    The vectors e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1) span R³, are linearly independent, and form an orthonormal basis for R³, since

    ei · ej = 0,    i ≠ j    (1-54)

    ei · ei = 1,    i = 1, 2, 3    (1-55)

    A rectangular Cartesian coordinate system can be introduced so that the x1, x2, and x3 axes point in the directions of e1, e2, and e3 respectively. The components a1, a2, and a3 of the vector A are measured along the x1, x2, and x3 axes, respectively. Notice that

    ak = A · ek,    k = 1, 2, 3    (1-56)

    since

    A · e1 = (a1,a2,a3) · (1,0,0) = a1, and similarly for a2 and a3    (1-57)

    The component a1 of the vector A is geometrically the projection of the vector A along the x1 axis. A similar remark applies to the other components of A.

    The space R³ is special in that the vector cross product is defined only for R³. The student should verify that an orthonormal basis in R³ satisfies

    e1 × e2 = e3    (1-58)

    e2 × e3 = e1    (1-59)

    e3 × e1 = e2    (1-60)

    Figure 1-9 Expansion of vector A in terms of a linearly independent but nonorthogonal basis {i1,i2,i3 }.

    Oblique bases

    Any basis of Rn which is not an orthogonal basis is called an oblique basis. For example, any pair {i1,i2} of nonorthogonal, linearly independent vectors in the plane R² constitutes an oblique basis for R². Such a basis corresponds to introducing an oblique coordinate system in R² because any vector in R² can be written as

    A = b1i1 + b2i2    (1-61)

    where b1 and b2 are the components of A measured along the oblique axes. The magnitude of A becomes

    |A|² = b1² + b2² + 2b1b2(i1 · i2)    (1-62)

    in oblique coordinates, provided |i1| = 1 and |i2| = 1.

    EXERCISE:

    Show that Eq. (1-62) is the law of cosines for an oblique triangle.
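
    A sketch of the intended argument, writing θ for the angle between i1 and i2:

        \[
        |A|^2 = (b_1 i_1 + b_2 i_2) \cdot (b_1 i_1 + b_2 i_2)
              = b_1^2 + b_2^2 + 2 b_1 b_2 \, i_1 \cdot i_2
              = b_1^2 + b_2^2 + 2 b_1 b_2 \cos\theta .
        \]

    The triangle with sides b1, b2, and |A| has interior angle π − θ opposite the side of length |A|, and −2b1b2 cos (π − θ) = +2b1b2 cos θ, so this is exactly the law of cosines.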

    Schmidt orthogonalization process

    Given the set of n linearly independent vectors {i1,i2, ... ,in}, the Schmidt process enables one to construct another set of n linearly independent unit vectors, {e1,e2, ... ,en}, such that ei · ej = 0 whenever i ≠ j.

    The Schmidt process begins with the statement that the vectors of the set {i1,i2, ... ,in} are linearly independent. Then

    c1i1 + c2i2 + ... + cnin = 0    (1-63)

    can be satisfied only by choosing c1 = 0, c2 = 0, ... , cn = 0. It follows that i1 ≠ 0; for if it were zero, the numbers c1 = 1, c2 = 0, c3 = 0, ... , cn = 0 would satisfy Eq. (1-63), and hence the vectors would be linearly dependent, contradicting our hypothesis. Let

    e1 = i1/|i1|

    Then

    |e1| = 1

    and the set {e1,i2, ... ,in} is still a linearly independent set. Next choose

    e2′ = i2 − (i2 · e1)e1

    and notice that e2′ is orthogonal to e1, since e1 · e2′ = e1 · i2 − (i2 · e1)(e1 · e1) = 0. Let

    e2 = e2′/|e2′|

    then e2 and e1 are orthogonal unit vectors. The set of vectors {e1,e2,i3, ... ,in} is still linearly independent, and the process repeats: one next forms

    e3′ = i3 − (i3 · e1)e1 − (i3 · e2)e2

    normalizes it to obtain e3, and continues in this way until the orthonormal set {e1,e2, ... ,en} is complete.
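
    The process is mechanical enough to state as a short program. A minimal NumPy sketch (illustrative only, with no claim to numerical robustness):

        import numpy as np

        def schmidt(vectors):
            # Orthonormalize a list of linearly independent vectors.
            basis = []
            for v in vectors:
                # subtract the projections of v onto the e's found so far
                w = v - sum(np.dot(v, e) * e for e in basis)
                basis.append(w / np.linalg.norm(w))   # w is nonzero by linear independence
            return basis

        i1 = np.array([1.0, 1.0, 0.0])
        i2 = np.array([1.0, 0.0, 1.0])
        i3 = np.array([0.0, 1.0, 1.0])
        e = schmidt([i1, i2, i3])
        for a in range(3):
            for b in range(3):
                assert np.isclose(np.dot(e[a], e[b]), 1.0 if a == b else 0.0)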
