Theory of Lie Groups
Ebook · 351 pages · 4 hours

About this ebook

"Chevalley's most important contribution to mathematics is certainly his work on group theory. . . . [Theory of Lie Groups] was the first systematic exposition of the foundations of Lie group theory consistently adopting the global viewpoint, based on the notion of analytic manifold. This book remained the basic reference on Lie groups for at least two decades." — Bulletin of the American Mathematical Society
Suitable for advanced undergraduate and graduate students of mathematics, this enduringly relevant text introduces the main basic principles that govern the theory of Lie groups. The treatment opens with an overview of the classical linear groups and of topological groups, focusing on the theory of covering spaces and groups, which is developed independently from the theory of paths.
Succeeding chapters contain an examination of the theory of analytic manifolds as well as a combination of the notions of topological group and manifold that defines analytic and Lie groups. An exposition of the differential calculus of Cartan follows and concludes with an exploration of compact Lie groups and their representations.
Language: English
Release date: Mar 30, 2018
ISBN: 9780486829661
Author

Claude Chevalley

Claude Chevalley (1909-1984) served on the faculty of Princeton University and was resident at the Institute for Advanced Study. He was a member of the Bourbaki group and was awarded the Cole Prize of the American Mathematical Society.


    Book preview

    Theory of Lie Groups - Claude Chevalley


    CHAPTER I

    The Classical Linear Groups

    Summary. Chapter I introduces the classical linear groups whose study is one of the main objects of Lie group theory. The unitary and orthogonal groups are defined in §I, together with a series of other groups. Their fundamental property of being compact is established.

    Section II is concerned with the study of the exponential of a matrix. The property for a matrix of being orthogonal or unitary is defined by a system of non-linear relationships between its coefficients; the exponential mapping gives a parametric representation of the set of unitary (or orthogonal) matrices by matrices whose coefficients satisfy linear relations (Cf. Proposition 5, §II, p. 8). The reader may observe that the spaces Ms, Msh, MS, MR which are introduced on p. 8 all contain YX – XY whenever they contain X and Y. Although we could have given here an elementary explanation of this fact, we have not done so, on account of the fact that the full importance of this result can only be grasped much later (in Chapter IV). In the cases of the orthogonal and unitary group, the linearization can also be accomplished by the Cayley parametrization (which we have not introduced); however, the exponential mapping is more advantageous from our point of view because it preserves some properties of the ordinary exponential function (Cf. Proposition 3, §IV, p. 13).

    Sections III and IV are preliminary to the result which will be proved in Section V (Proposition 1, p. 14). Hermitian matrices are defined in terms of the unitary geometry in a complex vector space (unitary geometry is defined by the notion of hermitian product of two vectors, just as euclidean geometry can be defined in terms of the scalar product). Proposition 2, §III, p. 10 shows that the unitary matrices are the isometric transformations of a unitary geometry.

    The proposition which asserts that the full linear group can be decomposed topologically into the product of the unitary group and the space of positive definite hermitian matrices (Proposition 1, §V, p. 14) is the prototype of the theorems which allow us to derive topological properties of general Lie groups from the properties of compact groups. A similar decomposition is given for the complex orthogonal group (Proposition 2, §V, p. 15).
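
    The decomposition described above can be checked numerically. The following sketch is ours, not the book's: it computes the two factors of a regular complex matrix through the singular value decomposition (all function and variable names are our own), producing a unitary factor and a positive definite hermitian factor whose product recovers the matrix.

```python
import numpy as np

def polar(sigma):
    """Split a regular matrix as sigma = u @ h with u unitary and
    h positive definite hermitian (right polar decomposition via SVD)."""
    w, s, vh = np.linalg.svd(sigma)        # sigma = w @ diag(s) @ vh
    u = w @ vh                             # unitary factor
    h = vh.conj().T @ np.diag(s) @ vh      # positive definite hermitian factor
    return u, h

rng = np.random.default_rng(0)
sigma = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
u, h = polar(sigma)
assert np.allclose(u @ h, sigma)                  # sigma = u h
assert np.allclose(u.conj().T @ u, np.eye(3))     # u is unitary
assert np.allclose(h, h.conj().T)                 # h is hermitian
assert np.all(np.linalg.eigvalsh(h) > 0)          # h is positive definite
```

    A random complex Gaussian matrix is regular with probability 1, so the assertions verify the decomposition on a generic element of GL(3, C).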

    Sections VI and VII are preliminary to the definition of the symplectic groups. The symplectic group is defined to be the group of isometric transformations of a symplectic geometry (Definition 1, §VII, p. 20). In §IX, we construct a representation of Sp(n) by complex matrices of degree 2n. The consideration of the conditions which the matrices of this representation must satisfy leads to the introduction of a new group, the complex symplectic group Sp(n, C). It can be seen easily that Sp(n, C) stands in the same relation to Sp(n) as GL (n, C) to U(n) or as O(n, C) to O(n). A proposition of the type of Proposition 1, §V, p. 14 could be derived without much difficulty for Sp (n, C). However, we have not found it necessary to state this proposition, which is contained as a special case of a theorem proved later (Corollary to Theorem 5, Chapter VI, §XII, p. 211).

    §

    I. THE FULL LINEAR GROUP AND SOME OF ITS SUBGROUPS

    The n-dimensional complex cartesian space Cn may be considered as a vector space of dimension n over the field C of complex numbers. Let ei be the element of Cn whose i-th coordinate is 1 and whose other coordinates are 0. The elements e1, · · · , en form a base of Cn over C.

    Let a linear endomorphism α of Cn be given. There corresponds to this endomorphism a matrix (aij) of degree n; we shall denote this matrix by the same letter α as the endomorphism itself. Conversely, to any matrix of degree n with complex coefficients, there corresponds an endomorphism of Cn.

    Let α and β be two endomorphisms of Cn, and let (aij) and (bij) be their matrices. The composite endomorphism αβ is again an endomorphism, whose matrix (cij) is the product of the matrices (aij) and (bij); i.e. cij = Σkaikbkj.  (1)

    Consider the set of all matrices of degree n with coefficients in C, and associate with the matrix (aij) the point of coordinates b1, · · · , bn² in Cn², where b1, · · · , bn² are the coefficients aij taken in some fixed order. This establishes a one-to-one correspondence between the set of matrices and Cn². Since Cn² is a topological space, the set of matrices may itself be considered as a topological space.

    Let T be any topological space, and let φ be a mapping of T into the space of matrices; let aij(t) be the coefficients of the matrix φ(t). It is clear that φ will be continuous if and only if each function aij(t) is continuous.

    It follows immediately from this remark and from the formulas (1) that the product στ of two matrices σ, τ is a continuous function of the pair (σ, τ), considered as a point of the product of the space of matrices with itself.

    If α = (aij), we shall denote by tα the transpose of α, i.e. the matrix (a′ij) with a′ij = aji. We shall denote by ᾱ the complex conjugate of α, i.e. the matrix (āij). It is clear that the mappings α → tα, α → ᾱ are homeomorphisms of the space of matrices with itself. If α and β are any two matrices, we have t(αβ) = tβtα, (αβ)‾ = ᾱβ̄.

    A matrix σ will be called regular if there exists a matrix σ–1 such that σσ–1 = σ–1σ = e, where e is the unit matrix of degree n. A necessary and sufficient condition for a matrix σ to be regular is that its determinant be ≠ 0.

    If an endomorphism σ of Cn maps Cn onto itself (and not onto some subspace of lower dimension), the corresponding matrix σ is regular and σ has a reciprocal endomorphism σ–1.

    If σ is a regular matrix, we have t(σ–1) = (tσ)–1 and (σ–1)‾ = (σ̄)–1.

    If σ and τ are regular matrices, στ is also regular and we have (στ)–1 = τ–1σ–1.

    It follows that the regular matrices form a group with respect to the operation of multiplication.

    Definition 1. The group of all regular matrices of degree n with complex coefficients is called the general linear group. We shall denote it by GL (n, C).

    Since the determinant of a matrix is obviously a continuous function of the matrix, GL(n, C) is an open subset of the space of all matrices. We may consider the elements of GL(n, C) as points of a topological space, with the topology induced from the space of matrices.

    If σ = (aij) is a regular matrix, the coefficients bij of σ–1 are given by expressions of the form bij = Aij/D, where D is the determinant of σ and the Aij′s are polynomials in the coefficients of σ. It follows that the mapping σ → σ–1 of GL(n, C) onto itself is continuous. Since this mapping coincides with its reciprocal mapping, it is a homeomorphism of order 2 of GL(n, C) with itself.

    The mappings σ → σ̄ and σ → tσ are homeomorphisms of GL(n, C) with itself. The first but not the second is also an automorphism of the group GL(n, C).

    If σεGL(n, C), we shall denote by σ* the matrix defined by the formula σ* = (tσ̄)–1.

    We have (στ)* = σ*τ* and (σ*)* = σ.

    Hence, the mapping σ → σ* is a homeomorphism and an automorphism of order 2 of GL(n, C).

    Definition 2. A matrix σ is said to be orthogonal if σ = σ̄ = σ*. The set of all orthogonal matrices of degree n will be denoted by O(n). If only σ̄ = σ*, σ is said to be complex orthogonal; the set of these matrices will be denoted by O(n, C). If only σ = σ*, σ is said to be unitary. The set of all unitary matrices will be denoted by U(n).

    Since the mappings σ → σ̄ and σ → σ* are continuous, the sets O(n), O(n, C) and U(n) are closed subsets of GL(n, C). Because these mappings are automorphisms, O(n), O(n, C) and U(n) are subgroups of GL(n, C). We have clearly O(n) = O(n, C) ∩ U(n).

    Definition 3. We shall say that the matrix σ is real if its coefficients are real, i.e. if σ = σ̄. The set of all real matrices in GL(n, C) will be denoted by GL(n, R).

    Therefore, we have also O(n) = U(n) ∩ GL(n, R) = O(n, C) ∩ GL(n, R).

    The determinant of the product of two matrices being the product of the determinants of these matrices, it follows that the matrices of determinant 1 form a subgroup of GL (n, C).

    Definition 4. The group of all matrices of determinant 1 in GL(n, C) is called the special linear group. This group is denoted by SL(n, C). We set SL(n, R) = SL(n, C) ∩ GL(n, R); SO(n) = SL(n, C) ∩ O(n); SU(n) = SL(n, C) ∩ U(n).

    It is clear that SL (n, C), SL (n, R), SO (n), SU(n) are subgroups and closed subsets of GL(n, C). They may be considered as subspaces of GL(n, C).

    Theorem 1. The spaces U(n), O(n), SU(n) and SO(n) are compact.

    Since O(n), SU(n) and SO(n) are closed subsets of U(n), it is sufficient to prove that U(n) is compact. A matrix σ is unitary if and only if tσ̄σ = e. If σ = (aij), the equation tσ̄σ = e is equivalent to the conditions Σjājiajk = δik (1 ≤ i, k ≤ n).

    Since the left sides of these equations are continuous functions of σ, U(n) is not only a closed subset of GL(n, C) but a closed subset of the space of all matrices. Moreover, the conditions Σjajiāji = 1 imply |aij| ≤ 1 (1 ≤ i, j ≤ n). It follows that the coefficients of a matrix σεU(n) are bounded; identifying the space of matrices with Cn², we see that U(n) is homeomorphic to a closed bounded subset of Cn². Theorem 1 is thereby proved.
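
    The boundedness argument in this proof can be illustrated with a small numerical sketch (ours, not the book's): the orthonormality relations force every coefficient of a unitary matrix to have absolute value at most 1.

```python
import numpy as np

# Build a random unitary matrix from the QR factorization of a complex matrix.
rng = np.random.default_rng(1)
a = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
q, _ = np.linalg.qr(a)                    # q is unitary

# The defining relations: the columns of q are orthonormal ...
assert np.allclose(q.conj().T @ q, np.eye(4))
# ... hence every coefficient satisfies |a_ij| <= 1, the bound used in Theorem 1.
assert np.all(np.abs(q) <= 1 + 1e-12)
```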

    §

    II. THE EXPONENTIAL OF A MATRIX

    Let α be any matrix of degree n, and let μ be an upper bound for the absolute values of the coefficients xij(α) of α. Let xij(αp) be the coefficients of αp (0 ≤ p < ∞); we set α0 = e (the unit matrix). We assert that |xij(αp)| ≤ (nμ)p.

    This is true for p = 0. Assume that our inequality holds for some integer p ≥ 0; then |xij(αp+1)| = |Σkxik(αp)xkj(α)| ≤ n(nμ)pμ ≤ (nμ)p+1,

    which proves that the inequality holds for p + 1.

    It follows that each of the n² series Σpxij(αp)/p! converges uniformly on the set of all α such that |xij(α)| ≤ μ. In other words, the series Σpαp/p!

    is always convergent, and uniformly so when α remains in a bounded set of matrices.

    Definition 1. We denote by exp α the sum of the series Σpαp/p! = e + α + α²/2! + · · ·.

    The function exp is a continuous mapping of the space of all matrices of degree n into itself.
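
    As an illustration (ours, not the book's), the defining series can be summed directly. For a nilpotent matrix the series has only finitely many non-zero terms, so the truncated sum is exact.

```python
import numpy as np

def exp_series(alpha, terms=40):
    """Sum of the series e + alpha + alpha^2/2! + ... (truncated)."""
    n = alpha.shape[0]
    result = np.eye(n, dtype=complex)
    power = np.eye(n, dtype=complex)
    for p in range(1, terms):
        power = power @ alpha / p          # alpha^p / p!
        result = result + power
    return result

# alpha is nilpotent (alpha^2 = 0), so exp alpha = e + alpha exactly.
alpha = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(exp_series(alpha), [[1.0, 1.0], [0.0, 1.0]])
```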

    Proposition 1. If σ is a regular matrix of degree n, then σ(exp α)σ–1 = exp (σασ–1) for every matrix α of degree n.

    In fact, we have σαpσ–1 = (σασ–1)p, and the formula follows by applying this to each term of the series.

    Proposition 2. If λ1, · · · , λn are the characteristic roots of α, each occurring a number of times equal to its multiplicity, the characteristic roots of exp α are exp λ1, · · ·, exp λn.

    We shall prove this by induction on n. It is obvious for n = 1, because then α is a complex number. Now, assume that n > 1 and that the proposition holds for matrices of degree n — 1.

    Let λ1 be a characteristic root of α; then there is an element a ≠ 0 in Cn such that αa = λ1a. Let e1 be the point whose coordinates are 1, 0, · · · , 0. Because a ≠ 0, there exists a regular matrix σ such that σa = e1. Then σασ–1e1 = λ1e1; in other words,

    where the *’s indicate complex numbers and ᾱ is a matrix of degree n – 1. We have

    and therefore

    If λ2, · · · , λn are the characteristic roots of ᾱ, those of α, which are the same as those of σασ–1, are λ1, λ2, · · · , λn. The proposition being true for matrices of degree n – 1, it follows that the characteristic roots of exp ᾱ are exp λ2, · · · , exp λn, and those of exp (σασ–1) are exp λ1, exp λ2, · · · , exp λn. But these are also the characteristic roots of σ(exp α)σ–1 (Cf. Proposition 1) and hence of exp α. Proposition 2 is thereby proved.

    Corollary 1. The determinant of the matrix exp α is exp Sp α.

    This follows at once from the facts that the trace and the determinant of a matrix are respectively the sum and the product of the characteristic roots.

    Corollary 2. The exponential of any matrix is a regular matrix.

    Proposition 3. If α and β are permutable matrices (i.e. if αβ = βα) then exp (α + β) = (exp α) (exp β).

    Since α and β are permutable, we can expand (α+ β)p by the binomial formula:

    Therefore, for any integer P, we have Σp≤2P(α + β)p/p! = (Σk≤Pαk/k!)(Σl≤Pβl/l!) + RP,

    where RP is the sum Σ(k,l) αk/k! βl/l!, extended over all combinations (k, l) such that max (k, l) > P, k + l ≤ 2P. The number of these combinations of indices is at most P(P + 1). On the other hand, if μ is an upper bound for the absolute values of the coefficients of α and β, the absolute value of any coefficient of αk/k! βl/l! is at most n(nμ)k/k! · (nμ)l/l! ≤ (nμ0)²P/P!, where μ0 is some number > 0. It follows that the coefficients of RP are smaller than P(P + 1)(nμ0)²P/P! in absolute value and that RP tends to 0 as P increases indefinitely. The formula to be proved is an immediate consequence of this fact.

    Corollary. If t is a real variable and α a fixed matrix, the mapping t → exp tα is a continuous homomorphism of the additive group of real numbers into GL(n, C).
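
    Proposition 3 and the role of permutability can be checked numerically with a sketch of our own: for permutable matrices the functional equation holds, while for non-permutable matrices it fails in general.

```python
import numpy as np

def exp_series(alpha, terms=60):
    """Truncated exponential series; ample terms for these small matrices."""
    n = alpha.shape[0]
    result = np.eye(n)
    power = np.eye(n)
    for p in range(1, terms):
        power = power @ alpha / p
        result = result + power
    return result

a = np.array([[0.0, 1.0], [0.0, 0.0]])
b = 2.0 * a                               # b = 2a is permutable with a
c = np.array([[0.0, 0.0], [1.0, 0.0]])    # c is not permutable with a

# Permutable matrices: exp(a + b) = (exp a)(exp b).
assert np.allclose(exp_series(a + b), exp_series(a) @ exp_series(b))
# Non-permutable matrices: the identity fails in general.
assert not np.allclose(exp_series(a + c), exp_series(a) @ exp_series(c))
```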

    If α is any matrix, we have clearly

    It follows from the Corollary to Proposition 3 that we have also

    Proposition 4. There exists a neighbourhood U of 0 in the space of matrices which is mapped topologically onto a neighbourhood of e in GL(n, C) by the mapping α → exp α.

    If we represent the matrix α by the point of Cn² whose coordinates are the coefficients xij(α) of α, it follows that the coefficients yij(α) of exp α are integral analytic functions Fij(· · · , xkl(α), · · ·) of the coefficients of α. It is clear that the terms of degrees < 2 in the Maclaurin expansion of Fij(· · · , xkl, · · ·) are δij + xij. It follows immediately that the Jacobian of the n² functions Fij with respect to their n² arguments is equal to 1 when xkl = 0 (1 ≤ k, l ≤ n). By the theorem on implicit functions, we know that the mapping of Cn² into itself which assigns to the point of coordinates xij the point of coordinates Fij(· · · , xkl, · · ·) maps topologically some neighbourhood of the origin onto a neighbourhood of the point of coordinates yij = δij. Proposition 4 follows immediately.
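
    A small numerical sketch (ours, not the book's) of the local inversion asserted by Proposition 4: near the unit matrix, the logarithmic series x – x²/2 + x³/3 – · · · with x = σ – e inverts the exponential mapping.

```python
import numpy as np

def exp_series(alpha, terms=60):
    result = np.eye(alpha.shape[0])
    power = np.eye(alpha.shape[0])
    for p in range(1, terms):
        power = power @ alpha / p
        result = result + power
    return result

def log_series(sigma, terms=60):
    """Logarithm of a matrix near the unit matrix, via the alternating series."""
    x = sigma - np.eye(sigma.shape[0])
    result = np.zeros_like(x)
    power = np.eye(sigma.shape[0])
    for p in range(1, terms):
        power = power @ x                  # x^p
        result = result + ((-1) ** (p + 1)) * power / p
    return result

alpha = 0.05 * np.array([[0.1, 0.3], [-0.2, 0.4]])
sigma = exp_series(alpha)
assert np.allclose(log_series(sigma), alpha)   # log inverts exp near 0
```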

    Definition 2. A matrix α is said to be skew symmetric if tα + α = 0, skew hermitian if tα + ᾱ = 0.

    We shall denote by Ms the set of skew symmetric matrices, by Msh the set of skew hermitian matrices, by MS the set of matrices of trace 0 and by MR the set of real matrices.

    Lemma 1. We can find a neighbourhood U of 0 in the space of matrices which satisfies the following conditions: 1) it is mapped topologically onto a neighbourhood of e in GL(n, C) by the mapping α → exp α; 2) the trace of any αεU is smaller than 2π in absolute value; 3) the condition αεU implies – αεU, tαεU, ᾱεU.

    Let U1 be a neighbourhood of 0 which satisfies the first and second conditions; we denote by – U1 the set of matrices – α with αεU1, and we define similarly tU1, Ū1. The set U = U1 ∩ (– U1) ∩ tU1 ∩ Ū1 satisfies the conditions of Lemma 1.

    Proposition 5. Let U be a neighbourhood of 0 in the space of matrices which satisfies the conditions of Lemma 1. Then the mapping α → exp α maps the intersections of U with suitable combinations of the sets MS, Msh, MR, Ms topologically onto neighbourhoods of the neutral elements in the following groups: SL(n, C), U(n), SU(n), GL(n, R), SL(n, R), O(n), SO(n), O(n, C).

    We know that the mapping α → exp α maps every subset of U topologically. If αεMS, we have exp αεSL(n, C) by Corollary 1 to Proposition 2 above. If αεMs, we have t(exp α) = exp (tα) = exp (– α) = (exp α)–1, which proves that exp α is complex orthogonal. In a similar way, we prove that, if αεMsh, then exp α is unitary. Conversely, if exp αεSL(n, C), αεU, the conditions exp (Sp α) = 1, |Sp α| < 2π imply Sp α = 0, whence αεMS. If exp αεO(n, C), αεU, we have tαεU, – αεU and exp (tα) = exp (– α), whence tα = – α and αεMs. In a similar way, we see that if αεU and exp α is unitary, then αεMsh. If α is real, exp α is also real; conversely, if αεU is such that exp α is real, we have exp α = exp ᾱ, whence α = ᾱ. Proposition 5 follows immediately from these facts.
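
    The content of Proposition 5 can be illustrated numerically (a sketch of ours, not the book's): the exponential of a skew hermitian matrix is unitary, and the exponential of a matrix of trace 0 has determinant 1, in accordance with Corollary 1 to Proposition 2.

```python
import numpy as np

def exp_series(alpha, terms=60):
    n = alpha.shape[0]
    result = np.eye(n, dtype=complex)
    power = np.eye(n, dtype=complex)
    for p in range(1, terms):
        power = power @ alpha / p
        result = result + power
    return result

rng = np.random.default_rng(2)
m = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

a_sh = (m - m.conj().T) / 2               # skew hermitian part of m
u = exp_series(a_sh)
assert np.allclose(u.conj().T @ u, np.eye(3))     # exp of skew hermitian is unitary

a_tl = m - (np.trace(m) / 3) * np.eye(3)  # matrix of trace 0
assert np.allclose(np.linalg.det(exp_series(a_tl)), 1.0)  # det exp = exp Sp = 1
```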

    The sets MS, Msh, MR, MS ∩ Msh, MR ∩ MS, MR ∩ Ms, MR ∩ Msh, Ms may all be considered as vector spaces over the field R of real numbers; as such, their dimensions are 2n² – 2, n², n², n² – 1, n² – 1, n(n – 1)/2, n(n – 1)/2 and n(n – 1) respectively. We have therefore proved:

    Proposition 6. In each of the groups GL(n, C), SL(n, C), U(n), SU(n), GL(n, R), SL(n, R), O(n), SO(n), O(n, C) there exists a neighbourhood of the neutral element which is homeomorphic to an open set in a real cartesian space of suitable dimension. These dimensions are: 2n² for GL(n, C), 2n² – 2 for SL(n, C), n² for U(n), n² – 1 for SU(n), n² for GL(n, R), n² – 1 for SL(n, R), n(n – 1)/2 for O(n) and SO(n), n(n – 1) for O(n, C).

    §III. HERMITIAN PRODUCT

    As we have already observed, the space Cn may be considered as a vector space of dimension n over C, with the base {e1, · · · , en} introduced in §I, p. 2. In this section, we shall use the notation az (instead of za) for the product of a vector a by a number z; this notation will be preferable when we come to quaternions.

    Definition 1. Let a = Σieixi and b = Σieiyi be vectors in Cn. We define their hermitian product a · b by a · b = Σix̄iyi.

    We define the length of a to be the number ||a|| = √(a · a). It is clear that ||a|| ≥ 0, and that ||a|| = 0 implies a = 0.

    The number a · b is, for a fixed, a linear function of b; i.e. a · (b1z1 + b2z2) = (a · b1)z1 + (a · b2)z2.

    However, if b is fixed, a · b is not a linear function of a, for we have (az) · b = z̄(a · b), whence (a1z1 + a2z2) · b = z̄1(a1 · b) + z̄2(a2 · b).

    Definition 2. A vector a is called a unit vector if ||a|| = 1. Two vectors a and b are said to be orthogonal if a · b = 0. A set of vectors is said to be orthonormal if every vector of the set is a unit vector and any two different vectors of the set are orthogonal.

    Proposition 1. Let a1, · · · , am be m linearly independent vectors. Then there exists an orthonormal set {b1, · · · , bm} such that, for each k (1 ≤ k ≤ m), the sets {a1, · · · , ak} and {b1, · · · , bk} span the same subspace of Cn.

    We proceed by induction on m. Proposition 1 holds for m = 1; in fact, we have a1 ≠ 0, and we may define b1 to be a1||a1||–1. Assume that m > 1 and that Proposition 1 holds for systems of m – 1 vectors. Then, we can find vectors b1, · · · , bm–1 such that, for every k ≤ m – 1, the sets {a1, · · · , ak} and {b1, · · · , bk} span the same subspace of Cn. Now let us consider the vector c = am – Σk<m bk(bk · am). Because am is linearly independent of a1, · · · , am–1, c does not lie in the space spanned by a1, · · · , am–1; in particular, c ≠ 0. We define bm to be c||c||–1. Obviously, ||bm|| = 1 and (using the orthogonality of b1, · · · , bm–1),

    which shows that bm is orthogonal to b1, · · · , bm–1. Then {b1, · · · , bm} is an orthonormal set which spans the same space as {a1, · · · , am} : Proposition 1 is proved for systems of m vectors.
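
    The orthonormalisation process used in this proof can be sketched in code (ours, not the book's; the hermitian product is linear in the second argument and conjugate-linear in the first, as in Definition 1):

```python
import numpy as np

def hermitian_product(a, b):
    """a . b = sum of conj(x_i) y_i; np.vdot conjugates its first argument."""
    return np.vdot(a, b)

def orthonormalise(vectors):
    """Gram-Schmidt process with the hermitian product, as in Proposition 1."""
    basis = []
    for a in vectors:
        # Subtract the components of a along the b_k already constructed.
        c = a - sum(b * hermitian_product(b, a) for b in basis)
        basis.append(c / np.sqrt(hermitian_product(c, c).real))
    return basis

rng = np.random.default_rng(3)
vs = [rng.standard_normal(3) + 1j * rng.standard_normal(3) for _ in range(3)]
bs = orthonormalise(vs)
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(hermitian_product(bs[i], bs[j]), expected)
```

    Three random vectors in C³ are linearly independent with probability 1, so the loop verifies that the output is an orthonormal set.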

    Corollary 1. Any vector subspace of Cn has an orthonormal base.

    Corollary 2. Any unit vector a of Cn belongs to an orthonormal base of Cn.

    In fact, a may be taken as the first element of a base of Cn. If we apply to this base the construction of the proof of Proposition 1, we obtain an orthonormal base of Cn whose first element is a.

    We shall now consider the matrices of degree n as endomorphisms of Cn, in the way which was explained in §I.

    Proposition 2. A necessary and sufficient condition that a matrix σ be unitary is that ||σa|| = ||a|| for all aεCn. This condition implies that σa · σb = a · b for any two vectors a and b in Cn.

    First, let α = (aij) be any matrix. We have αei = Σjejaji, whence aji = ej · (αei). We have also tαej = Σieiaji, whence aji = ei · (tαej) and (αei) · ej = ei · (tᾱej). It follows easily that

    αa · b = a · (tᾱb)   (1)

    for any two vectors a = Σieiai and b = Σjejbj.

    If σ is a unitary matrix, we have σa · σb = a · (tσ̄σb) = a · b, and, in particular, ||σa||² = ||a||², whence ||σa|| = ||a||.

    Conversely, assuming that this condition is satisfied for every a, we have ||σ(a + b)||² = ||a + b||²,

    whence σa · σb + σb · σa = a · b + b · a.

    Replacing b by ib, we have also σa · σb – σb · σa = a · b – b · a,

    whence σa · σb = a · b = a · (tσ̄σb). We have therefore a · (b – tσ̄σb) = 0 for every a, whence b = tσ̄σb (we may for instance take a = b – tσ̄σb). The formula b = tσ̄σb being true for every b, tσ̄σ is the unit matrix, which proves that σ is unitary.

    Because the set {e1, · · · , en} is orthonormal, it follows that the set {σe1, · · · , σen} is orthonormal for every unitary σ. Conversely, let {a1, · · · , an} be any orthonormal set; then there exists a matrix σ = (aij) such that σei = ai (1 ≤ i ≤ n). Since

    we see that σ is unitary. In particular, we obtain

    Proposition 3. If a is a unit vector, there exists a unitary matrix σ such that σe1 = a.

    We shall say that a vector a = Σieixi is real if its coordinates x1, x2, · · · , xn are real. If a and b are real vectors, the number a · b is also real.

    Proposition 4. A matrix σ is orthogonal if and only if the two following conditions are satisfied: 1) σa · σb = a · b for any two real vectors a and b; 2) if a is any real vector, σa is also real.

    These conditions are certainly satisfied if σ is orthogonal, since in that case σ is unitary and real. Conversely, let us assume that the conditions are satisfied. Let a = Σieixi and b = Σjejyj be any two complex vectors. Since e1, · · · , en are real vectors, we have σa · σb = Σi,jx̄iyj(σei · σej) = Σi,jx̄iyj(ei · ej) = a · b,

    and hence σ is unitary. Since σ is also real, it is orthogonal.

    The process of orthonormalisation which was used in proving Proposition 1, if applied to a system of real vectors, leads to real vectors. Hence:

    Corollary 2a to Proposition 1. Any real unit vector belongs to an orthonormal base of Cn composed of real vectors.

    In the same way that we proved Proposition 3, we derive:

    Proposition 3a. If a is a real unit vector, there exists an orthogonal matrix σ such that σe1 = a.

    §IV. HERMITIAN MATRICES

    Definition 1. A matrix α is called hermitian if tα = ᾱ.

    The reader will observe that the mapping α → tᾱ is not an automorphism of GL(n, C) and that the hermitian matrices do not form a subgroup of GL(n, C).

    Proposition 1. A matrix α is hermitian if and only if αa · b = a · αb for any two vectors a and b in Cn.

    In fact, if α is hermitian, the result follows immediately from Formula (1), §III, p. 10. Conversely, if the condition is satisfied, and if b is any vector in Cn, we have a · αb = a · (tᾱb) for all aεCn, whence αb = tᾱb and α = tᾱ.

    Proposition 2. If α is a hermitian matrix and σ is unitary, σασ–1 is again hermitian. Moreover, there exists a unitary matrix σ0 such that σ0ασ0–1 is a diagonal matrix. If α is real, σ0 may be assumed to be orthogonal.

    The first part of the proposition follows at once from the fact that

    We shall prove the second part by induction on the degree n of the matrix α. It is obvious for n = 1. Assume that n > 1 and that our assertion holds for matrices of degree n – 1.
