Square Summable Power Series

About this ebook

This text for advanced undergraduate and graduate students introduces Hilbert space and analytic function theory, which is centered around the invariant subspace concept. The book's principal feature is the extensive use of formal power series methods to obtain and sometimes reformulate results of analytic function theory.
The presentation is elementary in that it requires little previous knowledge of analysis, but it is designed to lead students to an advanced level of performance. This is achieved chiefly through the use of problems, many of which were proposed by former students. The book's tried-and-true approach was developed from the authors' lecture notes on courses taught at Lafayette College, Bryn Mawr College, and Purdue University.
Language: English
Release date: Dec 8, 2014
ISBN: 9780486801360


    Square Summable Power Series - Louis de Branges

    [ 1 ]

    Theory of Formal Power Series

    Any work with Hilbert space or analytic function theory requires an understanding of the complex number system. The complex numbers are pairs a + ib of real numbers a and b. By definition

    (a + ib) + (c + id) = (a + c) + i(b + d),
    (a + ib)(c + id) = (ac – bd) + i(ad + bc).

    The addition and multiplication of complex numbers have all the familiar properties of real numbers, known as the field postulates.¹ In addition to these properties, the complex numbers have a new property, called conjugation. The conjugate of a + ib is a – ib, and we write a – ib = (a + ib)⁻. Note that the product of a complex number and its conjugate is positive,

    (a + ib)(a + ib)⁻ = a² + b²,

    except when a + ib = 0. The absolute value of a + ib is

    |a + ib| = √(a² + b²),

    where, as usual, the square root sign refers to the positive choice of root. The distance between complex numbers a + ib and c + id is

    |(a + ib) – (c + id)| = √((a – c)² + (b – d)²),

    which is the ordinary Euclidean distance when a complex number is thought of as a point in the plane.
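
    These rules are easy to check on a computer. The following sketch (an editorial illustration, not part of the text; Python's built-in complex type stands in for the pairs a + ib) verifies the conjugation, absolute value, and distance formulas on one example:

        # Illustration: the formulas above, checked with Python's complex type.
        import math

        a, b, c, d = 3.0, 4.0, 1.0, -2.0
        z, w = complex(a, b), complex(c, d)

        # The product of a number and its conjugate is a² + b².
        assert z * z.conjugate() == complex(a*a + b*b, 0)

        # |a + ib| = √(a² + b²).
        assert math.isclose(abs(z), math.sqrt(a*a + b*b))

        # The distance |z – w| is the Euclidean distance in the plane.
        assert math.isclose(abs(z - w), math.hypot(a - c, b - d))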

    A sequence (an + ibn) of complex numbers is said to converge to a complex number a + ib if for any given ∊ > 0, no matter how small, there is some number N, depending on ∊, such that

    |(an + ibn) – (a + ib)| < ∊ whenever n > N.

    A sequence (an + ibn) of complex numbers is said to be Cauchy if for any ∊ > 0, there is some corresponding N such that

    |(am + ibm) – (an + ibn)| < ∊ whenever m > N and n > N.

    It is not hard to show that every convergent sequence of complex numbers is a Cauchy sequence. A deeper theorem states that, conversely, every Cauchy sequence of complex numbers converges. This important property of the complex number system is known as completeness.
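
    As a numerical illustration (not from the text), the partial sums of the exponential series at z = i converge, and their successive differences shrink, as a Cauchy sequence should:

        # Illustration: partial sums of the series for e^i form a Cauchy
        # sequence converging to cos 1 + i sin 1.
        import math

        s, term, sums = 0j, 1 + 0j, []
        for n in range(1, 20):
            s += term
            term *= 1j / n       # next term of the series
            sums.append(s)

        print(abs(sums[-1] - sums[-2]))                            # tiny
        print(abs(sums[-1] - complex(math.cos(1), math.sin(1))))   # tiny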

    Hereafter a complex number will be referred to simply as a number, and we will not usually break it up into real and imaginary parts.

    A formal power series is an expression

    f(z) = a0 + a1z + a2z² + ···

    where the coefficients are numbers. Formal power series are added in the obvious way. If

    f(z) = a0 + a1z + a2z² + ···

    and if

    g(z) = b0 + b1z + b2z² + ···,

    then

    f(z) + g(z) = (a0 + b0) + (a1 + b1)z + (a2 + b2)z² + ···.

    Formal power series can also be multiplied by numbers. If

    f(z) = a0 + a1z + a2z² + ···

    and if w is a number, then

    f(z)w = a0w + a1wz + a2wz² + ···.

    The addition and multiplication of power series have obvious properties, known as vector space postulates.
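
    In a program, a formal power series can be modeled as its list of coefficients; the sketch below (an editorial illustration, with truncation to finitely many coefficients as the only computational compromise) implements the two operations just defined:

        # Illustration: formal power series as coefficient lists.
        from itertools import zip_longest

        def add(f, g):
            """Coefficientwise sum f(z) + g(z)."""
            return [a + b for a, b in zip_longest(f, g, fillvalue=0)]

        def scale(f, w):
            """Right multiplication f(z)w by a number w."""
            return [a * w for a in f]

        f = [1, 2, 3]           # 1 + 2z + 3z²
        g = [0, 1]              # z
        print(add(f, g))        # [1, 3, 3], i.e. 1 + 3z + 3z²
        print(scale(f, 2j))     # [2j, 4j, 6j]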

    A vector space over the complex numbers is a set of elements f, g, h, ···, called vectors, with these properties. For each pair f and g of vectors there is a vector h called the sum of f and g and written h = f + g. Addition is associative: (f + g) + h = f + (g + h). Addition is commutative: f + g = g + f. There is a zero vector 0 such that 0 + f = f + 0 = f for every vector f. There is a negative vector –f for every vector f such that (–f) + f = f + (–f) = 0. We also assume, as part of the definition of a vector space, that for every vector f and every number α, there is a product fα, which is a vector. Multiplication is associative: f(αβ) = (fα)β whenever f is a vector and α and β are numbers. The distributive laws hold: (f + g)α = fα + gα and f(α + β) = fα + fβ. We also assume that 0 and 1 have the ordinary significance for multiplication: f0 = 0 and f1 = f for every vector f. (We use the same notation for the zero element of a vector space and the zero element in the number system.)

    A transformation T which takes vectors into vectors is said to be linear if

    T(fα + gβ) = (Tf)α + (Tg)β

    for all vectors f and g and all numbers α and β. Every number w defines a linear transformation, which is also called w, by wf = fw. So it does not matter on which side of a vector a number is written.
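
    A concrete example (mine, not the book's): on coefficient lists, multiplication by z, the shift taking f(z) to zf(z), is a linear transformation, as a direct check confirms:

        # Illustration: the shift T taking f(z) to z f(z) is linear.
        def T(f):
            return [0] + f      # prepend a zero coefficient

        def add(f, g):
            return [x + y for x, y in zip(f, g)]

        def scale(f, w):
            return [x * w for x in f]

        f, g, alpha, beta = [1, 2], [3, 4], 2j, -1.0
        lhs = T(add(scale(f, alpha), scale(g, beta)))     # T(fα + gβ)
        rhs = add(scale(T(f), alpha), scale(T(g), beta))  # (Tf)α + (Tg)β
        print(lhs == rhs)       # True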

    Problem 1. If f(z) = a0 + a1z + a2z² + ···

    and

    g(z) = b0 + b1z + b2z² + ···

    are formal power series, define the product f(z)g(z) by

    f(z)g(z) = a0b0 + (a0b1 + a1b0)z + (a0b2 + a1b1 + a2b0)z² + ···,

    the coefficient of zⁿ being a0bn + a1bn–1 + ··· + anb0. Now consider given power series f(z) = a0 + a1z + a2z² + ··· and h(z) = c0 + c1z + c2z² + ···, where f(z) is not identically zero. Let ar be the first nonzero coefficient in f(z) and suppose that c0, ···, cr–1 are all zero. Show that there exists a unique formal power series g(z) = b0 + b1z + b2z² + ··· such that h(z) = f(z)g(z).
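
    A mechanical version of this division (an editorial sketch, not the book's solution) solves the triangular system cn+r = arbn + ar+1bn–1 + ··· recursively for the coefficients bn:

        # Illustration: Cauchy product, and recursive solution of h = f·g.
        def cauchy_product(f, g, N):
            """First N coefficients of f(z)g(z)."""
            return [sum(f[j] * g[n - j] for j in range(n + 1)
                        if j < len(f) and n - j < len(g))
                    for n in range(N)]

        def divide(h, f, N):
            """First N coefficients of g(z) with h(z) = f(z)g(z)."""
            r = next(j for j, a in enumerate(f) if a != 0)
            b = []
            for n in range(N):
                c = h[n + r] if n + r < len(h) else 0
                acc = sum(f[r + k] * b[n - k]
                          for k in range(1, n + 1) if r + k < len(f))
                b.append((c - acc) / f[r])   # solve for b_n
            return b

        f = [0, 2, 1]                        # 2z + z², so r = 1
        g = [1, 3, 0, 5]
        h = cauchy_product(f, g, 6)
        print(divide(h, f, 4))               # recovers [1.0, 3.0, 0.0, 5.0]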

    Problem 2. If w is any number, show that

    (1 – wz)(1 + wz + w²z² + w³z³ + ···) = 1

    formally (i.e., in the sense of formal power series). We will write

    (1 – wz)⁻¹ = 1 + wz + w²z² + w³z³ + ···.
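
    On truncations this identity is easy to watch (an illustration, not part of the text): every coefficient of the product past the constant term cancels.

        # Illustration: (1 – wz)(1 + wz + w²z² + ···) = 1, on a truncation.
        w, N = 0.5 + 0.25j, 8
        geom = [w**n for n in range(N)]      # 1 + wz + w²z² + ···
        factor = [1, -w]                     # 1 – wz
        prod = [sum(factor[j] * geom[n - j] for j in range(min(n + 1, 2)))
                for n in range(N)]
        print(prod)                          # [1, 0, 0, ..., 0]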

    Problem 3. If w is any number, define a formal power series

    e^(wz) = 1 + wz + w²z²/2! + w³z³/3! + ···.

    Show that e^((a+b)z) = e^(az)e^(bz) formally if a and b are numbers.
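
    A truncated check (an illustration, not part of the text) compares coefficients of both sides using the product rule of Problem 1:

        # Illustration: e^((a+b)z) = e^(az)·e^(bz), compared coefficientwise.
        from math import factorial

        def exp_series(w, N):
            return [w**n / factorial(n) for n in range(N)]

        def cauchy(f, g):
            return [sum(f[j] * g[n - j] for j in range(n + 1))
                    for n in range(len(f))]

        a, b, N = 2.0, -1.5, 10
        lhs = exp_series(a + b, N)
        rhs = cauchy(exp_series(a, N), exp_series(b, N))
        print(all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs)))   # True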

    Problem 4. The nth Laguerre polynomial Ln(t) is defined to be the coefficient of zⁿ in the formal expansion

    e^(–tz)(1 – z)⁻¹⁻ⁿ,

    where the expansion of (1 – z)⁻¹⁻ⁿ is obtained from the expansion of (1 – z)⁻¹ on differentiating formally n times. Verify that

    Ln(t) = Σₖ₌₀ⁿ [(2n – k)!/(n!(n – k)!)] (–t)ᵏ/k!.
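
    Under the reading of the problem given above (the displayed expansion did not survive extraction and was reconstructed, so treat it as an assumption), the coefficient can be computed two ways and compared:

        # Illustration, assuming L_n(t) is the coefficient of zⁿ in
        # e^(–tz)(1 – z)^(–1–n); both computations rest on that reading.
        from math import comb, factorial

        def laguerre_via_series(n, t):
            # coefficient of zⁿ in the product of the two expansions
            return sum((-t)**k / factorial(k) * comb(2*n - k, n)
                       for k in range(n + 1))

        def laguerre_via_formula(n, t):
            return sum(factorial(2*n - k) / (factorial(n) * factorial(n - k))
                       * (-t)**k / factorial(k) for k in range(n + 1))

        print(laguerre_via_series(3, 0.7))    # equals the formula below
        print(laguerre_via_formula(3, 0.7))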

    By an inner product is meant a number ⟨a, b⟩ for every pair of vectors a and b of the space. It is assumed to have these properties:

    Linearity: ⟨αa + βb, c⟩ = α⟨a, c⟩ + β⟨b, c⟩ for all vectors a, b, c, and all numbers α, β.

    Symmetry: ⟨a, b⟩ = ⟨b, a⟩⁻ for all vectors a and b.

    Positivity: ⟨a, a⟩ > 0 for every nonzero vector a.

    Let 𝒞(z) be the set of all formal power series f(z) = a0 + a1z + a2z² + ··· such that

    |a0|² + |a1|² + |a2|² + ··· < ∞.

    Such a power series is said to be square summable. We first show that 𝒞(z) is a vector space with the usual addition and multiplication by numbers. Then we will introduce an inner product in this space.

    If f(z) = a0 + a1z + a2z² + ··· is a square summable power series and if w is a number, then

    f(z)w = a0w + a1wz + a2wz² + ···

    is also square summable, and

    |a0w|² + |a1w|² + |a2w|² + ··· = |w|²(|a0|² + |a1|² + |a2|² + ···).
    Now we show that the sum f(z) + g(z) of two square summable power series is square summable. Let f(z) = a0 + a1z + a2z² + ··· and g(z) = b0 + b1z + b2z² + ···. Notice that

    0 ≤ (|an| – |bn|)²,

    so that

    2|an||bn| ≤ |an|² + |bn|²,

    so that

    |an + bn|² ≤ (|an| + |bn|)² ≤ 2|an|² + 2|bn|²

    for every n. It follows that

    Σ |an + bn|² ≤ 2 Σ |an|² + 2 Σ |bn|² < ∞.

    So f(z) + g(z) is square summable.
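
    The inequality chain can be spot-checked numerically (an illustration, not part of the text):

        # Illustration: Σ|aₙ + bₙ|² ≤ 2Σ|aₙ|² + 2Σ|bₙ|² on random data.
        import random

        a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
        b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]

        lhs = sum(abs(x + y)**2 for x, y in zip(a, b))
        rhs = 2 * sum(abs(x)**2 for x in a) + 2 * sum(abs(y)**2 for y in b)
        print(lhs <= rhs)   # True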

    The vector space axioms in 𝒞(z) are a consequence of the vector space axioms for the space of all formal power series. We omit this verification.

    We now introduce an inner product in 𝒞(z). We claim that if f(z) = a0 + a1z + a2z² + ··· and g(z) = b0 + b1z + b2z² + ··· are square summable, then the series

    a0b0⁻ + a1b1⁻ + a2b2⁻ + ···

    converges. This is a consequence of the identity

    4anbn⁻ = |an + bn|² – |an – bn|² + i|an + ibn|² – i|an – ibn|²,

    which follows from the expansions

    |an ± bn|² = |an|² ± (anbn⁻ + an⁻bn) + |bn|²,
    |an ± ibn|² = |an|² ∓ i(anbn⁻ – an⁻bn) + |bn|².

    Since the series Σ |an ± bn|² and Σ |an ± ibn|² converge, the series Σ anbn⁻ converges. We define

    ⟨f, g⟩ = a0b0⁻ + a1b1⁻ + a2b2⁻ + ···.

    Linearity, symmetry, and positivity of this definition are easily verified. We omit these verifications.
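
    On truncations the inner product is a finite sum, and the identity above is a one-line check (an illustration, not part of the text):

        # Illustration: 4ab⁻ = |a+b|² – |a–b|² + i|a+ib|² – i|a–ib|²,
        # and <f, g> = Σ aₙbₙ⁻ on coefficient lists.
        a, b = 0.3 - 0.4j, -1.0 + 2.0j
        lhs = 4 * a * b.conjugate()
        rhs = (abs(a + b)**2 - abs(a - b)**2
               + 1j * abs(a + 1j*b)**2 - 1j * abs(a - 1j*b)**2)
        print(abs(lhs - rhs) < 1e-12)   # True

        def inner(f, g):
            """<f, g> for coefficient lists of equal length."""
            return sum(x * y.conjugate() for x, y in zip(f, g))

        print(inner([1, 1j], [1, 1j]))  # (2+0j): the squared norm of f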

    We now take up general properties of vector spaces with inner products. Define the norm, or length, of a vector a by

    || a || = √⟨a, a⟩,

    which is possible by the positivity of an inner product. If a and b are vectors, || b – a || is taken as the definition of the distance from a to b.

    An inner product is a generalization of the dot product in three-dimensional space, which is, however, a vector space only over the real numbers. In Euclidean space the dot product of two vectors is equal to the product of the lengths of the vectors and the cosine of the angle between the vectors. Since a cosine is a number between –1 and +1, the dot product is in absolute value no more than the product of the lengths of the vectors. The same result is true for any vector space with inner product and is known as the Schwarz inequality. If a and b are vectors, then |⟨a, b⟩| ≤ || a || || b ||. This may be proved by taking the self-product of the vector

    c = b⟨a, a⟩ – a⟨b, a⟩.

    This is

    ⟨c, c⟩ = ⟨a, a⟩(⟨a, a⟩⟨b, b⟩ – ⟨b, a⟩⟨a, b⟩).

    By the positivity of an inner product,

    ⟨c, c⟩ ≥ 0,

    with equality only when c = 0, that is, only when b⟨a, a⟩ = a⟨b, a⟩. Since ⟨b, a⟩⟨a, b⟩ = |⟨a, b⟩|² and ⟨a, a⟩ > 0 when a ≠ 0, it follows that |⟨a, b⟩|² ≤ || a ||²|| b ||², with |⟨a, b⟩|² < || a ||²|| b ||² if a and b are linearly independent. (When a = 0 the inequality is obvious.) Equality holds in the Schwarz inequality only when a and b are linearly dependent.
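
    Both the inequality and the identity for ⟨c, c⟩ can be spot-checked (an illustration, not part of the text, using the truncated inner product):

        # Illustration: Schwarz inequality, and
        # <c,c> = <a,a>(<a,a><b,b> – |<b,a>|²).
        import math, random

        def inner(f, g):
            return sum(x * y.conjugate() for x, y in zip(f, g))

        a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]
        b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]

        print(abs(inner(a, b))
              <= math.sqrt(inner(a, a).real * inner(b, b).real))   # True

        c = [y * inner(a, a) - x * inner(b, a) for x, y in zip(a, b)]
        lhs = inner(c, c)
        rhs = inner(a, a) * (inner(a, a) * inner(b, b) - abs(inner(b, a))**2)
        print(abs(lhs - rhs) < 1e-6)   # True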

    The triangle inequality states that the distance from point a to point c is no more than the sum of the distance from point a to point b and the distance from point b to point c. If a, b, c are vectors, then

    || c – a || ≤ || b – a || + || c – b ||.

    For the proof we substitute u = b – a and v = c – b, so that the inequality to be proved reads || u + v || ≤ || u || + || v ||. Since each side is non-negative, it is sufficient to show that

    || u + v ||² ≤ (|| u || + || v ||)²,

    or that

    ⟨u + v, u + v⟩ ≤ ⟨u, u⟩ + 2|| u || || v || + ⟨v, v⟩.

    To see this expand the left side linearly:

    ⟨u + v, u + v⟩ = ⟨u, u⟩ + ⟨u, v⟩ + ⟨v, u⟩ + ⟨v, v⟩.

    We must show that

    ⟨u, v⟩ + ⟨v, u⟩ ≤ 2|| u || || v ||.

    This is true by the Schwarz inequality, since ⟨u, v⟩ + ⟨v, u⟩ = ⟨u, v⟩ + ⟨u, v⟩⁻ ≤ 2|⟨u, v⟩| ≤ 2|| u || || v ||.
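
    A final numerical spot-check (an illustration, not part of the text):

        # Illustration: ||c – a|| ≤ ||b – a|| + ||c – b|| for the norm
        # derived from the truncated inner product.
        import math, random

        def norm(f):
            return math.sqrt(sum(abs(x)**2 for x in f))

        def rand():
            return [complex(random.gauss(0, 1), random.gauss(0, 1))
                    for _ in range(8)]

        a, b, c = rand(), rand(), rand()
        dist = lambda f, g: norm([x - y for x, y in zip(f, g)])
        print(dist(a, c) <= dist(a, b) + dist(b, c))   # True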
