Introduction to the Theory of Determinants and Matrices
About this ebook
Originally published in 1958.
A UNC Press Enduring Edition -- UNC Press Enduring Editions use the latest in digital technology to make available again books from our distinguished backlist that were previously out of print. These editions are published unaltered from the original, and are presented in affordable paperback formats, bringing readers both historical and cultural value.
Introduction to the Theory of Determinants and Matrices - Edward Tankard Browne
CHAPTER I
FUNDAMENTAL CONCEPTS
1. Rings and Fields.
Consider a set ℜ of elements a, b, c, … and two rules of combination, which we shall call addition (written a + b) and multiplication (written a × b, a·b, or ab), such that if a and b are any two elements of ℜ, then a + b and ab are uniquely defined elements of ℜ. Suppose further that addition and multiplication obey the following five laws:
(1.1) a + b = b + a, (commutative law of addition);
(1.2) a + (b + c) = (a + b) + c, (associative law of addition);
(1.3) The equation a + x = b always has a solution in ℜ;
(1.4) a(bc) = (ab)c (associative law of multiplication);
(1.5) a(b + c) = ab + ac; (b + c)a = ba + ca (distributive law).
A set of elements satisfying the above conditions is called a ring.
The condition (1.3) merely states that subtraction is always possible in a ring. The uniqueness of subtraction is not postulated but can be proved from the conditions (1.1), …, (1.5). The unique solution of (1.3) is then written x = b – a.
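The ring laws can be spot-checked mechanically for the ring of integers. The following Python sketch (the helper name check_ring_laws is ours, not the text's) verifies (1.1) through (1.5) on a few sample triples, using ordinary integer subtraction as the solution of a + x = b:

```python
# A minimal sketch: spot-checking the ring laws (1.1)-(1.5)
# on the ring of integers for a few sample triples.
import itertools

def check_ring_laws(a, b, c):
    assert a + b == b + a                       # (1.1) commutative addition
    assert a + (b + c) == (a + b) + c           # (1.2) associative addition
    x = b - a                                   # (1.3) a + x = b is solvable
    assert a + x == b
    assert a * (b * c) == (a * b) * c           # (1.4) associative multiplication
    assert a * (b + c) == a * b + a * c         # (1.5) left distributive law
    assert (b + c) * a == b * a + c * a         #       right distributive law

for a, b, c in itertools.product([-2, 0, 3, 7], repeat=3):
    check_ring_laws(a, b, c)
print("ring laws hold on the samples")
```

Of course, such checks illustrate the laws on particular elements; they do not replace the proofs.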
If in addition to the above conditions, the relation
(1.6) ab = ba, (commutative law of multiplication)
is satisfied for arbitrary elements a, b of the set, then ℜ is called a commutative ring.
It can be shown that every ring ℜ contains a unique element 0, called the zero element of the ring, which has the properties that for every element a of ℜ
(1.7) a + 0 = 0 + a = a;
(1.8) a0 = 0a = 0.
If a ring ℜ contains an element e such that ae = a for every a in ℜ, then e is called a right unity element of the ring. Similarly, an element f such that fa = a for every a of ℜ is called a left unity element. A ring may possess no unity element at all. On the other hand it may have right unity elements, but no left, and vice versa (cf. ex. 10, section 5). If, however, ℜ has both a right unity element e and a left unity element f, the two must be identical. For from the first condition fe = f, while from the second fe = e, whence e = f.
Examples of rings:
The set of all integers, positive, negative and zero;
The set of all even integers;
The set of all numbers of the form a + b√2, where a and b are integers;
The set of all rational numbers;
The set of all polynomials in a single variable with real coefficients;
The set of all continuous functions of a real variable x on the interval 0 ≤ x ≤ 1.
The set of all quaternions a + bi + cj + dk, where a, b, c, d are integers and the quaternion units i, j, k satisfy the relations i² = j² = k² = –1, ij = –ji = k, jk = –kj = i, ki = –ik = j. This is an example of a non-commutative ring.
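The quaternion relations above determine the product of any two quaternions. The following Python sketch (the class and its multiplication formula are ours, derived from those relations) exhibits the failure of commutativity directly:

```python
# A sketch of the ring of integer quaternions a + b*i + c*j + d*k,
# multiplied by the relations i^2 = j^2 = k^2 = -1,
# ij = -ji = k, jk = -kj = i, ki = -ik = j.
from dataclasses import dataclass

@dataclass(frozen=True)
class Quaternion:
    a: int  # real part
    b: int  # coefficient of i
    c: int  # coefficient of j
    d: int  # coefficient of k

    def __mul__(self, o):
        # Expand (a + bi + cj + dk)(a' + b'i + c'j + d'k) by the relations.
        return Quaternion(
            self.a*o.a - self.b*o.b - self.c*o.c - self.d*o.d,
            self.a*o.b + self.b*o.a + self.c*o.d - self.d*o.c,
            self.a*o.c - self.b*o.d + self.c*o.a + self.d*o.b,
            self.a*o.d + self.b*o.c - self.c*o.b + self.d*o.a,
        )

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)
print(i * j == k)                        # True:  ij = k
print(j * i == Quaternion(0, 0, 0, -1))  # True:  ji = -k, so ij != ji
```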
The notion of ring is a very important one in modern abstract algebra and numerous books on the subject have appeared recently. However, since it is not our purpose to make a special study of rings in this course, we shall not pursue the discussion further at this point.
Consider now a ring which, in addition to satisfying the conditions (1.1), …, (1.6), satisfies also the following:
(1.9) The equation ax = b has a solution in ℜ if a ≠ 0. The ring is then said to be a field.
Examples of fields:
The set of all rational numbers, (the rational number field);
The set of all real numbers, (the real field);
The set of all complex numbers, (the complex field);
The set of all numbers of the form a + b√2, where a and b are rational numbers;
The set of all rational functions f(x)/g(x) in one variable with real coefficients;
The set of all 2 by 2 matrices where a and b are integers.
The condition (1.9) merely states that in a field division except by zero is always possible. The quotient, which can be shown to be unique, is denoted by the symbol b/a.
In a field there is always a unique unity element, usually denoted by 1, such that for every element a of the field
(1.10) a·1 = 1·a = a.
Further, in a field, if ab = 0 and a ≠ 0, then b = 0; that is, we can always divide through by any element that is different from zero.
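In the rational number field these facts can be observed concretely. A small sketch using Python's exact rational arithmetic (our own illustration) shows that ax = b with a ≠ 0 has the unique solution x = b/a:

```python
# In the rational field, a*x = b with a != 0 is solved by x = b/a.
from fractions import Fraction

a = Fraction(3, 4)
b = Fraction(5, 6)
x = b / a                       # division by a nonzero element
assert a * x == b               # x solves a*x = b
assert x == Fraction(10, 9)

# And ab = 0 with a != 0 forces b = 0:
assert a * Fraction(0) == Fraction(0)
print("x =", x)
```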
2. The Matrix.
DEFINITION. If a11, a12, …, amn are elements of a ring ℜ, the mn elements
(2.1)
    a11  a12  …  a1n
    a21  a22  …  a2n
    .  .  .  .  .  .
    am1  am2  …  amn
arranged in a rectangular array of m rows and n columns, is called an m by n matrix over the ring ℜ.
The matrix (2.1) is commonly denoted by a single capital letter A. It is frequently convenient to denote the matrix A by the abbreviated symbol (aij) or ||aij||, where aij denotes the element in the i-th row and the j-th column of A. It is to be noted that a matrix is not a single quantity at all but a set of quantities, or elements, and if a single element is changed the matrix is changed.
One of the simplest examples of a matrix is the one row and two column matrix (2, –5) representing the Cartesian coordinates of a point P in a plane. An n-dimensional vector (a1, …, an) is a one row matrix or a one column matrix.
The numbers m and n are called the dimensions of the matrix, and we refer to A as an m by n or as an m × n matrix. If m = n the matrix is square. In any case the elements a11, a22, … are called the diagonal elements, or the elements in the principal diagonal. A matrix with n rows and n columns is referred to as an n-square matrix or a square matrix of order n.
For most of the applications the elements of the matrix will belong to the field of real numbers or to the field of complex numbers. However, except in cases where the operation of division is involved, it is usually not necessary to postulate that the elements belong to a field. When the term commutative ring is employed, the student will find it helpful to visualize the ring of integers as an example, or, perhaps better, the set of all polynomials in the real variable x with real coefficients.
3. Certain Operations with Matrices.
DEFINITION. Two matrices A = (aij) and B = (bij) with elements in a ring ℜ are said to be equal if, and only if, they have the same dimensions (m rows, and n columns) and each element of A is equal to the corresponding element of B (aij = bij; i = 1, …, m; j = 1, …, n).
In particular, A is said to be zero, if, and only if, each element of A is zero. In this case we write A = 0.
DEFINITION. By the sum (or difference) of two m × n matrices A and B, we mean that m × n matrix C = A ± B each of whose elements cij is equal to the sum (or difference) of the corresponding elements of A and B.
Note that the sum and difference are not defined unless A and B have the same dimensions. Note also that if A = B, then A – B = 0, where 0 stands for the zero matrix.
In order to distinguish them from matrices, we frequently refer to elements of the ring ℜ, or to polynomials in one or more variables with coefficients in ℜ, as scalars. We denote scalars by small Latin letters a, b, c, k, l, etc., or by small Greek letters α, β, γ, …, etc. Latin capitals will be used to denote matrices and vectors.
DEFINITION. If A is a matrix over a commutative ring ℜ and k a scalar, we mean by kA or Ak the matrix obtained from A by multiplying each element by k.
If A, B, C, X are m × n matrices over ℜ, and k and l scalars, it is now easy to establish the following properties:
(3.1) A + B = B + A, A + (B + C) = (A + B) + C;
(3.2) A + X = B is always solvable;
(3.3) k(A + B) = (A + B)k = kA + kB = Ak + Bk;
(3.4) (k + l)A = kA + lA = Ak + Al = A(k + l);
(3.5) (kl)A = k(lA) = l(kA) = A(kl) = (Ak)l.
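Properties (3.1) through (3.5) are easily verified numerically. The sketch below (the helper names madd and smul are ours) checks several of them for 2 × 3 integer matrices stored as nested lists:

```python
# Spot-checking (3.1)-(3.5) for 2 x 3 integer matrices.
def madd(A, B):
    # entrywise sum of two matrices of the same dimensions
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(k, A):
    # scalar multiple kA: multiply each element by k
    return [[k * x for x in row] for row in A]

A = [[1, 2, 3], [4, 5, 6]]
B = [[6, 5, 4], [3, 2, 1]]
k, l = 2, 5

assert madd(A, B) == madd(B, A)                             # (3.1)
assert smul(k, madd(A, B)) == madd(smul(k, A), smul(k, B))  # (3.3)
assert smul(k + l, A) == madd(smul(k, A), smul(l, A))       # (3.4)
assert smul(k * l, A) == smul(k, smul(l, A))                # (3.5)
print("properties (3.1)-(3.5) check out on the samples")
```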
4. Multiplication of Matrices.
DEFINITION. Let A = (aij) be an m × n matrix and let B = (bij) be an n × q matrix over a ring. Then by the product AB, in that order, we mean the m × q matrix C = (cij) such that
(4.1) cij = ai1b1j + ai2b2j + ⋯ + ainbnj,  (i = 1, 2, …, m; j = 1, 2, …, q).
We shall usually be concerned with commutative rings, in which case the order of the a’s and the b’s in (4.1) can be changed. In case, however, ℜ is not a commutative ring, in the product AB the a’s must be written on the left of the b’s.
That is, if A is m × n and B is k × q, in order that the product AB may exist we must have k = n. We express this by saying that A and B must have their contiguous dimensions equal. British writers say that A and B are then conformable for multiplication. If this condition is satisfied, then AB is an m × q matrix the element in the i-th row and j-th column of which is the sum of the products of the elements of the i-th row of A by the corresponding elements of the j-th column of B. If A is m × n and B is k × q the product BA is not defined unless q = m.
EXAMPLE. Let
,
then
.
On the other hand,
.
We can establish at once the following theorems:
THEOREM 4.1. If A is an m × n matrix, B and C n × q matrices, then A(B + C) = AB + AC; that is, matrix multiplication is distributive with respect to addition.
First of all, we note that the matrix B + C is an n × q matrix so that the left member is m × q. Moreover, each term on the right is an m × q matrix, so that the dimensions of the two sides are the same. We have then only to show that the element in the (i, j) position is the same on both sides. The elements in the (i, j) positions of the two members are respectively
(4.2)  ai1(b1j + c1j) + ai2(b2j + c2j) + ⋯ + ain(bnj + cnj)
and
(ai1b1j + ai2b2j + ⋯ + ainbnj) + (ai1c1j + ai2c2j + ⋯ + aincnj).
Since the elements of the three matrices are assumed to belong to a ring, they satisfy the condition (1.5) so that the two expressions in (4.2) are the same.
THEOREM 4.2. If A, B and C are three matrices of dimensions m × n, n × p, and p × q, respectively, then A(BC) = (AB)C; that is, matrix multiplication is associative.
First we note that each of the products in the theorem is an m × q matrix. Now the elements in the i-th row of A are
ai1, ai2, …, ain,
while the elements in the j-th column of BC are
Σt b1tctj, Σt b2tctj, …, Σt bntctj,
where the summation index t extends in all cases from 1 to p. Hence the element in the (i, j) position of the product A(BC) is
Σs ais(Σt bstctj),  (s = 1, 2, …, n).
Since we are dealing here with finite sums, the order of the summations may be changed, thus
Σt(Σs aisbst)ctj.
But this is precisely the element in the (i, j) position of the matrix (AB)C. Hence the theorem is established. Furthermore, the result may be shown by an easy induction to hold for the product of any number of matrices.
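Theorem 4.2 can be spot-checked numerically with rectangular matrices of the stated dimensions. The matrices below are our own samples, with A of dimensions 2 × 3, B of 3 × 2, and C of 2 × 2:

```python
# Numeric spot-check of theorem 4.2: A(BC) = (AB)C.
def matmul(A, B):
    # row-by-column rule (4.1); zip(*B) iterates over the columns of B
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3], [4, 5, 6]]       # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]     # 3 x 2
C = [[2, 1], [1, 2]]             # 2 x 2

assert matmul(A, matmul(B, C)) == matmul(matmul(A, B), C)
print("A(BC) == (AB)C on the sample matrices")
```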
Since the sum, difference and product of two n-square matrices are n-square matrices, we have in view of (3.1), (3.2) and theorems 4.1 and 4.2 the following theorem:
THEOREM 4.3. The set of all n-square matrices with elements in a commutative ring ℜ constitutes a (non-commutative) ring.
Let us denote by In the n-square matrix with 1’s in the main diagonal and zeros elsewhere; for example,
    I3 =  1  0  0
          0  1  0
          0  0  1 .
If then A is any m × n matrix, it is easy to verify that
(4.3) ImA = AIn = A.
Moreover, Im and In are the only two matrices such that (4.3) holds for every A. For let A be m × n and suppose that XA = A for every m × n matrix A. First we note that X must be m-square, for otherwise XA would not have the same dimensions as A. Next, take A = Im; then since XIm = X, and since by hypothesis XIm = Im, it follows that X = Im.
Note. If X = , , then for this particular A, we have XA = A, but the relation does not hold for every A.
If A is a square matrix, for the product AA we write A², and for (AA)A = A(AA) we write A³. If m is any positive integer, it is easy to show by theorem 4.2 that the product AA ⋯ A, to m factors, is well defined. This product will be written Aᵐ.
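A power Aᵐ can thus be computed by repeated multiplication, with theorem 4.2 guaranteeing that the grouping of the m factors is immaterial. A short sketch (our own helpers):

```python
# Powers of a square matrix by repeated multiplication.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def matpow(A, m):
    # A^m for a positive integer m; by associativity (theorem 4.2)
    # the order in which the factors are grouped does not matter.
    result = A
    for _ in range(m - 1):
        result = matmul(result, A)
    return result

A = [[1, 1], [0, 1]]
print(matpow(A, 3))   # [[1, 3], [0, 1]]
```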
5. Products by Partitioning.
Let A be an m × n matrix, and B an n × q matrix, with elements in a commutative ring ℜ. Let m1, m2; n1, n2, n3; and q1, q2 be positive integers such that
m1 + m2 = m,  n1 + n2 + n3 = n,  q1 + q2 = q,
and let us partition A and B as indicated in (5.1) below.
Note that the rows of B are partitioned in exactly the same way as are the columns of A. Each of the matrices A and B can then be looked upon as a matrix of matrices, thus
(5.1)
    A =  A11  A12  A13 ,      B =  B11  B12
         A21  A22  A23             B21  B22
                                   B31  B32 ,
where the dimensions of A11, A12, etc., are m1 × n1, m1 × n2, etc. We now show that the product C = AB can be obtained thus
(5.2)
    C =  C11  C12
         C21  C22 ,
where Cij is the mi × qj matrix
(5.3)  Cij = Ai1B1j + Ai2B2j + Ai3B3j,  (i, j = 1, 2).
Note that in the expression on the right in (5.3), the Aij are not in general commutative with the Bij so that in forming the product AB, the Aij are to be written on the left.
Now in (5.3) the matrix Ai1 is of dimensions mi × n1, while B1j is of dimensions n1 × qj. Hence the product Ai1B1j is of dimensions mi × qj. Similarly, it follows that each term on the right in (5.3) is of dimensions mi × qj, so that the sum Cij is defined and is of the dimensions stated.
Next, we form the element in the k-th row and l-th column of C.
(5.4)  ckl = Σt aktbtl (t = 1, …, n1) + Σt aktbtl (t = n1 + 1, …, n1 + n2) + Σt aktbtl (t = n1 + n2 + 1, …, n).
If now 1 ≤ k ≤ m1, 1 ≤ l ≤ q1, the summations on the right are the elements in the k-th row and the l-th column of the matrices A11B11, A12B21 and A13B31, so that their sum is the element in the (k, l) position of C11 as defined in (5.3). If k = m1 + k′, l = q1 + l′, where 1 ≤ k′ ≤ m2, 1 ≤ l′ ≤ q2, (5.4) is the element in the (k′, l′) position of C22. The remaining cases are handled in the same way, and thus the rule for multiplying by partitioning is established.
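Rule (5.3) can be verified numerically on a small example. The matrices and the partition below are our own (m1 = m2 = 1; n1 = 1, n2 = 2, n3 = 1; q1 = q2 = 1); the sketch checks one block against the direct product:

```python
# Checking rule (5.3) on one block of a partitioned product.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3, 4],
     [5, 6, 7, 8]]          # 2 x 4
B = [[1, 0],
     [0, 1],
     [1, 1],
     [2, 3]]                # 4 x 2

C = matmul(A, B)            # direct product

# Blocks: A11 = [[1]], A12 = [[2, 3]], A13 = [[4]];
#         B11 = [[1]], B21 = [[0], [1]], B31 = [[2]].
# By (5.3), C11 = A11*B11 + A12*B21 + A13*B31 (here a 1 x 1 block).
C11 = (matmul([[1]], [[1]])[0][0]
       + matmul([[2, 3]], [[0], [1]])[0][0]
       + matmul([[4]], [[2]])[0][0])
assert C11 == C[0][0]
print("block rule (5.3) agrees with the direct product:", C11)
```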
A case of particular interest here is the following. Let A, B, C, … be n-square matrices with elements in a commutative ring ℜ. Let m1, m2, …, ms be positive integers such that m1 + m2 + … + ms = n, and partition each matrix as follows.
(5.5)
    A =  A11  A12  …  A1s
         A21  A22  …  A2s
         .  .  .  .  .  .
         As1  As2  …  Ass ,    etc.,
Thus each matrix is partitioned as to columns precisely as it is partitioned as to rows. We can then regard A, for example, as an s × s matrix whose elements Aij are mi × mj matrices. Since each of the remaining matrices B, C, … is partitioned in exactly the same way, we can not only form products by the method just indicated but we can form sums and differences as well; thus
A ± B = (Aij ± Bij),    AB = (Σt AitBtj),  (i, j, t = 1, 2, …, s).
Exercises
1. If , , ,
find A2, B2, C2, AB, BA, AC, CA, BC, CB.
2. If , ,
show that A² = B² = C² = I, AB = BA = C, BC = CB = A, AC = CA = B.
3. The same as Exercise 2 for each of the following triples
, , ;
, , ;
, , .
4. If , , show that A² = B² = [1/2(A + B)]² = I; (A – B)² = 0.
5. If , , k any number, show that [kD + (1 – k)E]² = I.
6. If , show that A² – 11A + 10I = 0.
7. If , , show that U² = U, and V² = V(U and V are said to be idempotent); show also that UV = VU = 0, U + V = I.
8. If , show that X² = 0. X is said to be nil-potent.
9. If , , ,
show that AX = AY, although X ≠ Y.
10. Show that if a1 and a2 range over the ring of all integers, the set of 2 × 2 matrices constitutes a ring. Show that the ring contains left unity elements, but no right unity element.
11. If
, show that irrespective of the numbers a, b, c, d, we have .
12. If , , and , , , show that A² = B² = C² = I4; AB + BA = AC + CA = BC + CB = –2I4.
13. Show that if the n-square matrix X is commutative with every n-square matrix A, then X is necessarily a scalar matrix, i.e., a matrix of the form kI.
CHAPTER II
ELEMENTARY PROPERTIES OF DETERMINANTS
6. The Determinant of a Square Matrix.
Let
(6.1)
    A =  a11  a12  …  a1n
         a21  a22  …  a2n
         .  .  .  .  .  .
         an1  an2  …  ann
be an n-square matrix whose elements aij belong to some field 𝔉, for example, the field of rational numbers. Associated with A are certain scalar functions, i.e., elements of the field 𝔉, which are of great importance. One of these is the determinant of the matrix (written |A|), which we now proceed to define.
Consider two elements aij and akl in the array (6.1) which do not lie either in the same row or in the same column of the array, that is, i ≠ k and j ≠ l. If one of these elements lies to the right of and above the other in the array, the pair is called a negative pair; otherwise the pair is called a positive pair. For example, the pair a31, a22 is a negative pair while a11, a23 is a positive pair. We can then state the following definition:
DEFINITION. Let A be an n-square matrix (aij) with elements in a field 𝔉. Write down all possible products, each of n factors, that can be obtained by picking one and only one element from each row and from each column. There will be n! such products. In each product count the number σ of negative pairs. If σ is even, attach a plus sign to the product; if σ is odd, attach a minus sign. The algebraic sum of all these n! terms is the value of, or the expansion of, |A|; that is,
(6.2)  |A| = Σ (−1)^σ a1i1 a2i2 ⋯ anin,
where i1, i2, …, in varies over all the n! permutations of the numbers 1, 2, …, n, and σ indicates the number of negative pairs in the product.
For example, if n = 4, the expression a12 a24 a33 a41 is, except possibly for sign, a term in the expansion of |A|. Since in this product the pairs (a12, a24), and (a12, a33) are positive pairs while (a12, a41), (a24, a33), (a24, a41) and (a33, a41) are negative pairs, here σ = 4 so that the expression as it stands (or with the + sign attached) is a term in the expansion of |A|.
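Definition (6.2) translates directly into a computation: sum over all n! permutations of the column indices, with sign (−1)^σ. In the sketch below (our own implementation), σ is counted as the number of inversions of the column-index permutation, which, as shown in the next paragraphs, agrees with the count of negative pairs:

```python
# Determinant by definition (6.2): sum over all n! permutations of the
# column indices, each term signed by (-1)^sigma.
from itertools import permutations

def det(A):
    n = len(A)
    total = 0
    for cols in permutations(range(n)):
        # sigma = number of inversions of the permutation of column indices
        sigma = sum(1 for s in range(n) for t in range(s + 1, n)
                    if cols[s] > cols[t])
        term = 1
        for row, col in enumerate(cols):
            term *= A[row][col]       # one element from each row and column
        total += (-1) ** sigma * term
    return total

print(det([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
```

This is of course an n!-term computation, useful only for small n or for checking other methods.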
The sign which is to be prefixed to a term of a determinant can be found in another way.
In considering a permutation i1, i2, …, in of the set of numbers 1, 2, …, n, we may select some definite order, say the order 1, 2, 3, …, n, as normal order. A permutation is then said to present as many inversions as there are instances in which a number is followed by one which in the normal order precedes it. Thus, if the normal order is 1, 2, 3, 4, the permutation 4132 presents the four inversions 41, 43, 42, 32.
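Counting inversions is a purely mechanical matter, as the following one-function sketch (ours) illustrates on the permutation 4132 from the text:

```python
# Inversions of a permutation relative to the normal order 1, 2, ..., n:
# count the pairs that stand in the wrong relative order.
def inversions(perm):
    return sum(1 for s in range(len(perm)) for t in range(s + 1, len(perm))
               if perm[s] > perm[t])

print(inversions([4, 1, 3, 2]))   # 4: the pairs 41, 43, 42, 32
```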
Consider now two elements aij and akl (i ≠ k, j ≠ l) selected from different rows and from different columns of the matrix A in (6.1). Suppose first i < k, so that aij lies in a row above that in which akl lies. The pair (aij, akl) is then a positive or a negative pair according as akl lies in a column to the right of or to the left of the column in which aij lies, that is, according as j < l or j > l, or according as the permutation jl does not or does present an inversion from the normal order 1, 2, …, n. Indeed it is easy to see that if we do not postulate i < k, the