A Short Course in Automorphic Functions

About this ebook

This concise three-part treatment introduces undergraduate and graduate students to the theory of automorphic functions and discontinuous groups. Author Joseph Lehner begins by elaborating on the theory of discontinuous groups by the classical method of Poincaré, employing the model of the hyperbolic plane. The necessary hyperbolic geometry is developed in the text. Chapter two develops automorphic functions and forms via the Poincaré series. Formulas for divisors of a function and form are proved and their consequences analyzed. The final chapter is devoted to the connection between automorphic function theory and Riemann surface theory, concluding with some applications of the Riemann-Roch theorem.
The book presupposes only the usual first courses in complex analysis, topology, and algebra. Exercises range from routine verifications to significant theorems. Notes at the end of each chapter describe further results and extensions, and a glossary offers definitions of terms.
Language: English
Release date: Oct 27, 2014
ISBN: 9780486799926

    Book preview

    A Short Course in Automorphic Functions - Joseph Lehner

    [ I ]

    Discontinuous Groups

    An analytic function f is called automorphic with respect to a group Γ of transformations of the plane if f takes the same value at points that are equivalent under Γ. That is,

        f(Vz) = f(z)    (1)

    for each V ∈ Γ and each z ∈ D, the domain of f. If we want to have nonconstant functions f, we must assume there are only finitely many equivalents of z lying in any compact part of D. This property of Γ is known as discontinuity.
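
    To fix ideas with an example not taken from the text: let Γ be the group of translations z → z + n, n an integer. Then f(z) = e^{2πiz} satisfies f(Vz) = f(z) for every V ∈ Γ, and any compact set contains only finitely many of the equivalents z + n of a given z, so f is automorphic with respect to Γ.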

    The most important domain for f from the standpoint of applications is the upper half-plane † H. Now from (1) with f analytic in H we deduce that V is analytic in H and maps H into itself. It is natural to require that V be one-to-one in order that V−1 should be single-valued. Hence V is a linear-fractional transformation. The group Γ will therefore be a group of linear-fractional transformations, or as we shall call them, linear transformations.

    The present chapter is devoted to a study of discontinuous groups of linear transformations.

    1.Linear Transformations

    1A. A linear transformation is a nonconstant rational function of degree 1; that is, a function

        w = (αz + β)/(γz + δ),    αδ − βγ ≠ 0,    (2)

    where α, β, γ, δ are complex numbers and z is a complex variable. The function w is defined on all of the complex sphere Z except z = −δ/γ and z = ∞. With the usual convention that u/0 = ∞ for u ≠ 0, we have

        w(−δ/γ) = ∞,

    and we obtain w(∞) by continuity:

        w(∞) = α/γ.
    In particular, −δ/γ will be ∞ if and only if γ = 0, and in that case α/γ = ∞: the infinite points of the two planes then correspond under the mapping w.
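
    For a concrete illustration (not in the text), take w = (2z + 1)/(z − 1). Here −δ/γ = 1, so w(1) = ∞, while w(∞) = α/γ = 2.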

    As a rational function, w is regular in Z except for a simple pole at z = −δ/γ. Suppose γ = 0. Then necessarily δ ≠ 0, α ≠ 0 (because of αδ − βγ ≠ 0) and

        w = (α/δ)z + β/δ.

    Hence dw/dz = α/δ ≠ 0 and w is conformal at every finite z. At infinity we must use the uniformizing variables z′ = 1/z, w′ = 1/w; then we find that

        w′ = δz′/(α + βz′),    (dw′/dz′)z′=0 = δ/α ≠ 0,
    and w′ is conformal at z′ = 0, which by definition means that w is conformal at z = ∞.

    If γ ≠ 0, we have

        dw/dz = (αδ − βγ)/(γz + δ)²;

    hence dw/dz ≠ 0 and w is conformal except possibly at z = −δ/γ, z = ∞. At z = −δ/γ we must use the variables z′ = z + δ/γ, w′ = 1/w:

        w′ = γ²z′/(αγz′ − (αδ − βγ)),

    so that (dw′/dz′)z′=0 ≠ 0. At z = ∞ the correct variables are z′ = 1/z, w′ = w and we get

        w′ = (α + βz′)/(γ + δz′),    (dw′/dz′)z′=0 = (βγ − αδ)/γ² ≠ 0,
    yielding the same conclusion.

    Solving (2) for z we get

        z = (δw − β)/(−γw + α),

    which is also a linear transformation and so is defined on the extended w-plane. The mapping z → w is therefore onto and hence one-to-one, and we may write the inverse mapping as w−1.
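
    Continuing the illustration above (again not from the text), w = (2z + 1)/(z − 1) is inverted by z = (w + 1)/(w − 2); its pole w = 2 is exactly the image of z = ∞ under the original mapping, as it should be.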

    Putting these results together we can assert:

    THEOREM. The linear transformation (2) is a one-to-one conformal mapping of all of Z on itself.

    For this reason a linear transformation is also called a conformal automorphism of Z.¹

    If two linear transformations are equal,

        (α1z + β1)/(γ1z + δ1) = (α2z + β2)/(γ2z + δ2)

    for all z, then each function has the same zero and the same pole, and this gives

        β1/α1 = β2/α2,    δ1/γ1 = δ2/γ2,

    from which

        α1 = λα2,  β1 = λβ2;    γ1 = μγ2,  δ1 = μδ2.
    It follows that λ = μ: two equal transformations have proportional coefficients, and the converse is obviously true. But multiplying the coefficients of a transformation by λ multiplies its determinant by λ². This permits us to normalize the transformation by requiring its determinant αδ − βγ to be 1. We shall do this from now on: a linear transformation is a function

        w = (az + b)/(cz + d),    ad − bc = 1,    (3)
    where a, b, c, d are complex numbers. We usually write (3) in the form w = Tz.
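
    As a numerical illustration (not in the text), the mapping w = (2z + 2)/(z + 2) has determinant 2·2 − 2·1 = 2; dividing every coefficient by √2 gives a = b = d = √2, c = 1/√2, so that ad − bc = 2 − 1 = 1, while the mapping itself is unchanged.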

    Occasionally we shall make use of functions of the form (2)—that is, linear fractional mappings with nonzero determinant not necessarily equal to 1. Such functions will be called bilinear mappings.

    We define the product ST of two linear transformations by setting, for w = Tz, u = Sw,

        u = (ST)z = S(Tz).

    That is, the product of two linear transformations is simply their composition regarded as mappings. ‡ It is seen that STz is a linear transformation if we recall that the determinant of STz is the product of the determinants of Sw and Tz and so is 1. We have observed that each transformation has an inverse that is again a linear transformation, and the transformation Iz = z = (1 · z + 0)/(0 · z + 1) serves as the identity. Hence the set of linear transformations is a group.

    As we saw above, a linear transformation Tz is equal to each of the transformations (ρT)z, ρ ≠ 0, and to no others. Since det T = det (ρT) = ρ² det T = 1, we must have ρ = ±1. A linear transformation, therefore, determines its coefficients up to a factor of ±1. It is then reasonable to associate to the linear transformation (3) the two matrices ±T, where

        T = (a b | c d).
    Moreover, the product of two linear transformations corresponds to the product of their matrices, as one calculates easily.
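
    This correspondence is easy to check by computation. The following minimal sketch (mine, not Lehner's; the helper names moebius and matmul are illustrative) applies two transformations in succession and compares the result with the transformation given by the matrix product:

        def moebius(M, z):
            # apply w = (az + b)/(cz + d) for M = ((a, b), (c, d))
            (a, b), (c, d) = M
            return (a * z + b) / (c * z + d)

        def matmul(M, N):
            # ordinary 2 x 2 matrix product
            return ((M[0][0] * N[0][0] + M[0][1] * N[1][0],
                     M[0][0] * N[0][1] + M[0][1] * N[1][1]),
                    (M[1][0] * N[0][0] + M[1][1] * N[1][0],
                     M[1][0] * N[0][1] + M[1][1] * N[1][1]))

        S = ((1, 1), (0, 1))     # Sz = z + 1
        T = ((0, -1), (1, 0))    # Tz = -1/z
        z = 2 + 3j
        print(moebius(S, moebius(T, z)))    # S(Tz)
        print(moebius(matmul(S, T), z))     # (ST)z gives the same value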

    These remarks suggest that we identify the group of linear transformations with the group of all 2 × 2 unimodular matrices with complex entries, a group we shall denote by ΩC or SL(2, C). Indeed, if we define f(T) to be the linear transformation

        w = Tz = (az + b)/(cz + d),    T = (a b | c d),

    then f is a mapping from ΩC onto the group of linear transformations, and

        f(ST) = f(S) f(T);
    that is, f is a homomorphism onto. Its kernel certainly includes ±I, where I = (1 0 | 0 1), and, by what has been said, can include nothing else. Hence
        the group of linear transformations ≅ ΩC/{±I}.

    The identification of ΩC with the group of linear transformations is permissible provided we identify each T ∈ ΩC with −T.

    Let T ∈ ΩC and let S be a bilinear mapping. Then STS−1 ∈ ΩC. In general, if T has a certain property with respect to a set A, then STS−1 has the same property with respect to the set S(A). This enables us to assume, for example, that a certain special point is ∞, etcetera. The transition from T to STS−1 is merely a change of coordinates in the plane.
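
    For instance (an illustration, not in the text), if T fixes a finite point ξ and S is the bilinear mapping Sz = 1/(z − ξ), then STS−1 fixes S(ξ) = ∞; this is the kind of change of coordinates used repeatedly below.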

    One of the best-known properties of a linear transformation is that it maps a circle or straight line onto a circle or straight line. This can be seen as follows. The equation

        Azz̄ + Bz + B̄z̄ + C = 0,

    with A, C real, includes all circles (A ≠ 0) and all straight lines (A = 0). Writing z = (aw + b)/(cw + d), ad − bc = 1, and substituting, we find

        A′ww̄ + B′w + B̄′w̄ + C′ = 0,

    where

        A′ = Aaā + Bac̄ + B̄āc + Ccc̄,    B′ = Aab̄ + Bad̄ + B̄b̄c + Ccd̄,    C′ = Abb̄ + Bbd̄ + B̄b̄d + Cdd̄.
    Since A′ and C′ are obviously real, this is also the equation of a circle or straight line.²
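
    To illustrate with an example not in the text: under w = 1/z the circle | z − 1 | = 1, which passes through the pole z = 0, is carried onto the straight line Re w = 1/2, while the unit circle | z | = 1, which avoids 0 and ∞, is carried onto itself.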

    1B. Linear transformations are classified by their fixed points (solutions of Tz = z). When c ≠ 0 in T = (a b | c d), the fixed points are given by the equation

        cz² + (d − a)z − b = 0.    (6)

    There are two finite solutions z = ξ1, ξ2 of this equation:

        ξ1, ξ2 = (a − d ± √(χ² − 4))/(2c),

    where χ = a + d is the trace † of T. (Note the use of ad − bc = 1 in this calculation.) The two fixed points are coincident if and only if χ = ±2; then the transformation is called parabolic.
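
    For example (not in the text), T = (2 1 | 1 1) has ad − bc = 1 and χ = 3; equation (6) reads z² − z − 1 = 0, so the fixed points are ξ1, ξ2 = (1 ± √5)/2, in agreement with the formula above.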

    LEMMA 1. The trace is invariant under T → ATA−1, where A is a nonsingular matrix.

    This follows at once from χ(AB) = χ(BA).

    When c = 0, we have T∞ = ∞; that is, ∞ is always one fixed point. T is then

        Tz = (az + b)/d = (a/d)z + b/d,

    and b/(d − a) is the other fixed point if d ≠ a. When d = a, T reduces to

        Tz = z ± b,

    a translation, and we arbitrarily call ∞ the second fixed point. The two fixed points are coincident. Thus T is parabolic and we observe that χ = ±2, as it should be. A parabolic transformation with fixed point ∞ is a translation, and conversely. Finally, if b = 0, T is the identity.

    There are never more than two fixed points unless T is the identity. A corollary of this remark is the following: if three distinct points z1, z2, z3 have the same images under S and T, then S = T. For S−1T fixes each zi and so is the identity.

    Let us first suppose ξ1, ξ2 are finite and distinct (c ≠ 0, χ² ≠ 4). Set up the transformation W = W(z),

        (W − ξ1)/(W − ξ2) = κ(z − ξ1)/(z − ξ2).

    W has fixed points ξ1 and ξ2, and if W(∞) = w(∞) where w = Tz, then W will be the same transformation as w. Since w(∞) = a/c, this gives κ = (a − cξ1)/(a − cξ2). As thus calculated, κ is always finite, for ξ2 = a/c combined with (6) would yield ad − bc = 0. We call

        (w − ξ1)/(w − ξ2) = κ(z − ξ1)/(z − ξ2)

    the normal form of w = Tz; κ is called the multiplier of T. But 1/κ may also be regarded as the multiplier of T since we can write the transformation in the form

        (w − ξ2)/(w − ξ1) = (1/κ)(z − ξ2)/(z − ξ1).
    In other words, whether we regard κ or 1/κ as the multiplier depends on which fixed point is labeled ξ1 and which ξ2. As a way out of the ambiguity let us define the multiplier to be the pair (κ, 1/κ).

    Since κ + κ−1 is a symmetric function of ξ1, ξ2, it must be a rational function of the coefficients of (6). A calculation using ξ1 + ξ2 = (a − d)/c, ξ1ξ2 = −b/c yields

        κ + κ−1 = χ² − 2.    (9)
    LEMMA 2. The multiplier is invariant under T → ATA−1, where A is a bilinear mapping.

    Indeed, (9) and Lemma 1 give κ + κ−1 = κ′ + κ′−1, where κ′ = κ(ATA−1). Hence κ = κ′ or κκ′ = 1. Thus the multiplier of T′ = ATA−1 is the pair (κ′, 1/κ′), which is the same as the pair (κ, 1/κ).
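
    To illustrate (9) with an example not in the text: for T = (2 1 | 1 1) we have χ = 3, so κ + κ−1 = 7 and κ = (7 + 3√5)/2 or its reciprocal; κ is real, positive, and different from 1, which in the terminology introduced below makes T hyperbolic.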

    When c = 0 but χ² ≠ 4, there is but one finite and one infinite fixed point, and the normal form of T is

        w − ξ1 = κ(z − ξ1),    κ = a/d.

    We find that (9) is still valid (remember ad = 1). Hence Lemma 2 holds. Next, assume ξ1 = ξ2 ≠ ∞ (that is, c ≠ 0, χ = ±2). The transformation

        1/(w − ξ1) = 1/(z − ξ1) ± c,    (10)
    where the sign before c is the sign of χ, has the unique fixed point ξ1; moreover w(∞) = T(∞). Therefore (10) is the normal form for a parabolic transformation with finite fixed point. When ξ1 = ξ2 = ∞, the normal form is

        w = z ± b.
    We arbitrarily define κ = 1 in these cases in order to satisfy (9) and verify Lemma 2.

    Lemma 2 has now been proved for all linear transformations.

    On the assumption that T is not parabolic, κ can have any value other than 0 or 1. Writing κ = ρe^{iθ}, ρ > 0, 0 ≤ θ < 2π, we classify as follows:

        elliptic:     ρ = 1, θ ≠ 0    (| κ | = 1, κ ≠ 1);
        hyperbolic:   θ = 0, ρ ≠ 1    (κ real and positive, κ ≠ 1);
        loxodromic:   ρ ≠ 1, θ ≠ 0.

    When T is parabolic, κ = 1 and χ = ±2. In the nonparabolic case we use (9) and deduce the following:

    THEOREM. A necessary and sufficient condition that T be elliptic, hyperbolic, or parabolic is that χ be real and | χ | < 2, | χ | > 2, or | χ | = 2, respectively. A necessary and sufficient condition that T be loxodromic is that χ be nonreal. The transformations T and ATA−1, A a bilinear mapping, are simultaneously elliptic, hyperbolic, etcetera.

    In this book we shall have no use for loxodromic transformations.
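
    The theorem reduces classification to a glance at the trace. The following small sketch (mine, not Lehner's; the function name classify is illustrative) carries it out for a normalized matrix:

        def classify(a, b, c, d):
            # classify the normalized transformation (ad - bc = 1) by chi = a + d
            if abs(a * d - b * c - 1) > 1e-12:
                raise ValueError("expected ad - bc = 1")
            chi = complex(a + d)
            if abs(chi.imag) > 1e-12:
                return "loxodromic"     # chi nonreal
            if abs(chi.real) < 2:
                return "elliptic"
            if abs(chi.real) > 2:
                return "hyperbolic"
            return "parabolic"          # |chi| = 2 (the identity also lands here)

        print(classify(0, -1, 1, 0))    # chi = 0 -> elliptic   (z -> -1/z)
        print(classify(2, 1, 1, 1))     # chi = 3 -> hyperbolic
        print(classify(1, 1, 0, 1))     # chi = 2 -> parabolic  (z -> z + 1)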

    Exercise 1. A linear transformation T is periodic of order n if and only if n > 1 is the smallest integer for which Tⁿ = I. Prove that T is of order n if and only if T is elliptic and its multiplier is a primitive nth root of unity. Show that the trace of T is 2 cos πl/n, (l, n) = 1, where (l, n) is the greatest common divisor of the integers l and n.

    Exercise 2. Let T be nonelliptic. If for some z and some sequence of integers n → ∞ we have Tⁿz → z0, then z0 is a fixed point of T. Is the result true for elliptic T?

    1C. In this book we shall be concerned almost entirely with linear transformations that map the upper half-plane H on itself. Such transformations correspond to matrices with real entries. Indeed, if T is in this class, it maps the real axis E on itself (by continuity of T and T−1). Now z ∈ E only if z = z̄; since Tz is real for real z, we have Tz = T̄z̄ = T̄z there, and so T = T̄, where T̄ denotes the transformation with coefficients ā, b̄, c̄, d̄. Since the mapping T determines its coefficients up to a factor of ±1, we either have a = ā, …, d = d̄ or a = −ā, …, d = −d̄; that is, either a, b, c, d are all real or all are pure imaginary. But in the latter case we would have

        Im T(i) = −1/| ci + d |² < 0,
    and T(i) ∉ H. Hence a necessary condition that T preserve H is that it have real coefficients. This condition is clearly sufficient, since a linear transformation with real coefficients obviously preserves E, and because of the determinant condition † it maps H into H.
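
    The sufficiency admits a quick numerical check (a sketch of mine, not from the text; the particular matrix is arbitrary). For real a, b, c, d with ad − bc = 1 one has Im Tz = Im z / | cz + d |², so points of H stay in H:

        a, b, c, d = 1.0, 1.0, 1.0, 2.0    # ad - bc = 1
        z = 0.5 + 2.0j                     # a point of H
        w = (a * z + b) / (c * z + d)
        print(w.imag > 0)                                          # True
        print(abs(w.imag - z.imag / abs(c * z + d) ** 2) < 1e-12)  # True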

    Call ΩR the subgroup of ΩC consisting of matrices with real entries. We have proved

    THEOREM 1. T ∈ ΩR if and only if T preserves H.

    An element of ΩR will be called a real transformation or a real matrix. It is clear that a real transformation preserves the real axis as well as the lower half-plane. A real transformation is never loxodromic; it is elliptic, hyperbolic, or parabolic according as χ = a + d is, in absolute value, less than 2, greater than 2, or equal to 2. An elliptic transformation has two conjugate nonreal fixed points; a hyperbolic transformation has two real fixed points; a parabolic transformation has one real fixed point.
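
    For instance (not in the text), the real matrices (0 −1 | 1 0), (1 1 | 0 1), and (1 1 | 1 2) have traces 0, 2, and 3; they are elliptic, parabolic, and hyperbolic, with fixed points ±i, ∞, and (−1 ± √5)/2, respectively.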

    A fixed circle of T is a circle or straight line that is mapped on itself by T. The easiest way to discuss the fixed circles is to make a nonsingular transformation z → z′ of the plane that carries the fixed points of T to 0 and ∞ or, in the case of parabolic T, carries the fixed point to ∞. In the first case T becomes T′z′ = κz′ with the same κ as in T (see 1B, Lemma 2). When T is elliptic, κ = e^{iθ}, and T′ is a rotation about the origin; the fixed circles are circles with center at the origin and each fixed circle is orthogonal to the family of rays through the origin—that is, orthogonal to the family of lines joining the fixed points. When
