Infinite Matrices and Sequence Spaces

About this ebook

This clear and correct summation of basic results from a specialized field focuses on the behavior of infinite matrices in general, rather than on properties of special matrices. Three introductory chapters guide students to the manipulation of infinite matrices, covering definitions and preliminary ideas, reciprocals of infinite matrices, and linear equations involving infinite matrices.
From the fourth chapter onward, the author treats the application of infinite matrices to the summability of divergent sequences and series from various points of view. Topics include consistency, mutual consistency, and absolute equivalence; the core of a sequence; the inefficiency and the efficiency problems for infinite matrices; Hilbert vector space and Hilbert matrices; and projective and distance convergence and limit in sequence spaces. Each chapter concludes with examples — nearly 200 in all.
Language: English
Release date: Jun 10, 2014
ISBN: 9780486795065

    Book preview

    Infinite Matrices and Sequence Spaces - Richard G. Cooke

    CHAPTER 1

    DEFINITIONS AND PRELIMINARY IDEAS

    1.1. Differences between finite, and infinite, matrix theory.

    An infinite matrix is a twofold table A = (a_{i,j}) (i, j = 1, 2, … ∞) of real or complex numbers, with addition and multiplication defined by

    A + B = (a_{i,j} + b_{i,j}),  λA = (λa_{i,j}),  AB = (∑_k a_{i,k} b_{k,j}),

    where λ is any real or complex number, i.e., a scalar.

    Thus if AB = (c_{i,j}), then c_{i,j} = ∑_k a_{i,k} b_{k,j}, whenever this sum exists. In (a_{i,j}), the first suffix i specifies the row, the second suffix j specifies the column.
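    The product formula lends itself to direct numerical experiment. The following Python sketch (an added illustration, with invented element functions a and b) represents two infinite matrices by functions of the suffixes and approximates c_{i,j} = ∑_k a_{i,k} b_{k,j} by partial sums, so that the existence question becomes visible in the behaviour of those partial sums.

        # Illustrative sketch: infinite matrices represented by element functions,
        # with (AB)_{i,j} approximated by partial sums over k = 1..K.

        def a(i, j):
            # invented example: a_{i,j} = 1 / 2**(i + j), a rapidly decaying matrix
            return 1.0 / 2 ** (i + j)

        def b(i, j):
            # invented example: b_{i,j} = 1 / (i * j)
            return 1.0 / (i * j)

        def product_entry(a, b, i, j, K):
            """Partial sum over k = 1..K of a(i,k) * b(k,j)."""
            return sum(a(i, k) * b(k, j) for k in range(1, K + 1))

        # Here the partial sums settle down quickly, so (AB)_{1,1} exists.
        for K in (10, 100, 1000):
            print(K, product_entry(a, b, 1, 1, K))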

    It will be found that the treatment of infinite matrices differs radically from that of finite matrices. There are several reasons for this :

    (i) In the finite theory, determinants play a fundamental part; but their value is lost, to a very large extent, in the theory of infinite matrices.

    (ii) Existence problems frequently arise for infinite matrices which have no counterpart in the finite theory; e.g., although two infinite matrices A and B may both exist, their product AB may not exist, since

    ∑_k a_{i,k} b_{k,j}

    may diverge for some, or all, values of i, j.

    Whereas the theory of finite matrices is a branch of Algebra, the theory of infinite matrices is a branch of Analysis. This in itself involves totally different treatments for the two subjects ; and it might reasonably be expected (as is indeed the case) that the latter subject is connected with the general theory of functions.

    (iii) A large number of theorems have been established for finite square matrices of order n. It might be supposed that by merely letting n tend to ∞ in these theorems, we should obtain corresponding theorems for infinite matrices. However, owing to convergence and other difficulties, this rarely happens ; for exceptions, see Turnbull, (2), (3), where a few results are so obtained, and Gurr, (1).

    (iv) In general, as we shall see, the types of problem solved by aid of infinite matrices are of an entirely different character from those solved by the use of finite matrices.

    1.2. Some problems involving the use of infinite matrices.

    Let us illustrate § 1.1, (iv) by considering a few problems in which infinite matrices occur.

    (a) Suppose that we are given a system of an infinity of linear equations in an infinity of unknowns x_1, x_2, …, say

    ∑_k a_{i,k} x_k = y_i  (i = 1, 2, …);          (1.21)

    the coefficients a_{i,k}, set out as a twofold table (i, k = 1, 2, …), form an infinite matrix.

    If we write x_{k,1} = x_k, x_{k,j} = 0 for j > 1, y_{i,1} = y_i, y_{i,j} = 0 for j > 1, then (1.21) can be put in the matrix form AX = Y, with the proviso that we only consider solutions X of the latter equation in which all elements of X other than those of its first column vanish.

    Suppose that the elements in the p-th row of a matrix B are all 0 except b_{p,q}, which is 1; then the p-th row of BC is the same as the q-th row of C. Similarly, if the elements of the q-th column of C are all 0 except c_{p,q}, which is 1, then the q-th column of BC is the same as the p-th column of B. Hence if I is the matrix for which every element of the leading diagonal (a_{i,i}, i = 1, 2, …) is 1, and all other elements are 0, then IA = AI = A. I is called the unit matrix; its elements are denoted by δ_{i,j}, so that δ_{i,j} = 1 when i = j, and δ_{i,j} = 0 when i ≠ j.

    Now suppose that a matrix –1A (called a left-hand reciprocal of A) exists, such that –1AA = I.

    Then, under certain conditions, if we multiply both sides of AX = Y on the left by –1A, we obtain X = –1AY, the required solution (see § 3.2).
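    As a concrete (and invented) illustration of this solution process, take A to be a lower semi-matrix with non-zero diagonal elements; the system (1.21) can then be solved row by row by forward substitution, which is in effect what multiplication on the left by a left-hand reciprocal accomplishes. A minimal Python sketch on a finite truncation:

        import numpy as np

        # Invented example: A is a lower semi-matrix with a_{i,i} = 1,
        # a_{i,i-1} = -1 and all other elements 0, truncated to n rows.
        n = 8
        A = np.eye(n) - np.eye(n, k=-1)
        y = np.ones(n)                 # right-hand side y_i = 1

        # Forward substitution: the i-th equation involves only x_1, ..., x_i,
        # so the unknowns are found successively.
        x = np.zeros(n)
        for i in range(n):
            x[i] = (y[i] - A[i, :i] @ x[:i]) / A[i, i]

        print(x)                       # here x = (1, 2, 3, ...)
        print(np.allclose(A @ x, y))   # True: the truncated system is satisfied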

    This shows the necessity for a treatment of the reciprocals of infinite matrices (see Chapter 2) ; these also occur in a great many other connexions. The allied topic of the solution of linear equations (of a more general type) in infinite matrices is also suggested ; this is considered in Chapter 3.

    (b) A very important application of infinite matrices is to the theory of the summability of divergent sequences and series, to be treated from various points of view in Chapters 4–10.

    An extremely simple example of this is the well-known method of summation by arithmetic means of a sequence z_n, i.e.,

    z_n′ = (z_1 + z_2 + … + z_n)/n.          (1.22)

    This is a particular case of the transformation of z_n by an infinite matrix (α_{n,k}) into another sequence z_n′, viz.,

    z_n′ = ∑_k α_{n,k} z_k.          (1.23)

    Comparing (1.22) and (1.23), we see that summation by arithmetic means is equivalent to transformation by the infinite matrix

    ( 1      0      0      0    …  )
    ( 1/2    1/2    0      0    …  )
    ( 1/3    1/3    1/3    0    …  )          (1.24)
    ( …      …      …      …    …  )

    i.e., by the matrix α_{n,k} = 1/n (1 ≤ k ≤ n), = 0 (k > n).
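    A brief Python sketch (an added illustration) of the transformation (1.23) with the matrix (1.24): the arithmetic-mean matrix carries a convergent sequence to its original limit, and carries the divergent sequence (–1)^n into a sequence tending to 0.

        # Apply the arithmetic-mean matrix alpha_{n,k} = 1/n (1 <= k <= n), 0 (k > n).

        def transform(z, n):
            """z_n' = sum_k alpha_{n,k} z_k = (z_1 + ... + z_n) / n."""
            return sum(z(k) for k in range(1, n + 1)) / n

        convergent = lambda k: 1 + 1 / k        # z_k -> 1
        divergent = lambda k: (-1) ** k         # z_k oscillates and diverges

        for n in (10, 100, 1000):
            print(n, transform(convergent, n), transform(divergent, n))
        # The transformed convergent sequence again tends to 1, while the
        # divergent sequence is carried into one tending to 0.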

    In § 4.1, necessary and sufficient conditions will be found for (a_{n,k}) to transform every convergent sequence z_n → z into another convergent sequence z_n′ → z′, where z′ is not, in general, equal to z. These conditions are

    ∑_k |a_{n,k}| ≤ M for every n, where M is independent of n;
    lim_{n→∞} a_{n,k} = α_k exists for every fixed k;
    lim_{n→∞} ∑_k a_{n,k} = α exists.

    A matrix (a_{n,k}) satisfying these conditions is called a Kojima matrix, or K-matrix, and α_k, α are called its characteristic numbers.

    If in addition z′ = z, i.e., the limit of the transformed sequence z_n′ when n → ∞ is the same as that of the original sequence z_n, then α_k = 0 and α = 1. In this case the K-matrix is called a Toeplitz matrix, or T-matrix. Thus the matrix (1.24) is a T-matrix; and all T-matrices have the property of consistency, i.e., every convergent sequence is transformed by such a matrix into another convergent sequence with the same limit when n → ∞.

    But, in addition, T or K matrices will frequently transform divergent sequences into convergent sequences. And the extension of these results to series (in place of sequences) is comparatively simple.

    When the above main theorems have been established, a great wealth of results will follow (see Chapters 4–10).

    (c) A further important application of infinite matrices is to the Heisenberg-Dirac theory of quantum-mechanics [Frenkel, (1), Vol. II, Chapters III, IV; Kemble, (1), Chapter X, etc.; Birtwistle, (1), 64, 88, 101, 134, 264].

    Here two central problems consist in solving two linear equations in infinite matrices: (i) AX – XA = I, where A is a given matrix, and I is the unit matrix; this is the quantization equation; and (ii) AX – XD = 0, where A is a given matrix, and D is a diagonal matrix, i.e., a matrix all of whose elements are 0 except those, d_i (i = 1, 2, …), in the leading diagonal. These equations are considered in Chapter 3. Schrödinger's method of solving quantum-mechanics problems by means of his wave equation leads, for a rigorous mathematical treatment, to the theory of the spectrum [von Neumann, (1), Stone, (1), Cooper, (1), (2)]. For this purpose, a study of Hilbert space is indispensable; an introduction to this is given in Chapter 9.

    1.3. Some fundamental definitions.

    The idea of finite rectangular matrices of m rows and n columns, i.e., of order m × n, will be familiar to the reader. When there is only one row, so that m = 1, the matrix is called a row vector, and is denoted by

    [x_1, x_2, …, x_n].

    When there is only one column, so that n = 1, the matrix is called a column vector, and is denoted by

    {x_1, x_2, …, x_m};

    this notation is preferable to writing the elements out in a vertical column, being more economical of space. The terms row vector, column vector, and the notation adopted above, were introduced by Turnbull (Turnbull, (1), 6, 149; (2), 107).

    The system of equations (1.21) is, in matrix notation, Ax = y, where A is the infinite matrix (a_{i,j}), and x, y are column vectors with an infinity of elements x = {x_j}, y = {y_j} (j = 1, 2, … ∞); we might describe such a column vector as a matrix of order ∞ × 1, and a row vector with an infinity of elements could be described as a matrix of order 1 × ∞.

    Thus matrix × column vector = column vector, whenever the product on the left exists.

    The matrix obtained from A by interchanging rows and columns is called the transpose (or transverse) of A, and is denoted by A′, so that if A′ = (a′_{i,j}), then a′_{i,j} = a_{j,i}. Thus, if x is a row vector, then x′ is a column vector, and vice versa. Again, if x is a row vector, then Ax′ is a column vector; and if x is a column vector, then x′A is a row vector.

    The zero matrix 0 is the table a_{i,j} = 0 for every i and j; it is obvious that 0A = A0 = 0.

    But if AB = 0, it does not follow that A or B must be the zero matrix. In fact, this does not follow even for the simplest types of matrices, such as diagonal matrices D, Δ.

    Thus suppose that in D ≡ (d_i), d_i = 0 when i is even, and d_i ≠ 0 when i is odd; and that in Δ ≡ (δ_i), δ_i = 0 when i is odd, and δ_i ≠ 0 when i is even. Now from the multiplication law we see that (d_i)(δ_i) = (d_iδ_i), so that the product of two diagonal matrices is a diagonal matrix. Then, in our case, DΔ = (d_iδ_i) = 0, although neither D nor Δ is the zero matrix.
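    The zero-divisor phenomenon is easy to verify on finite truncations; a short Python check (added illustration):

        import numpy as np

        n = 6
        i = np.arange(1, n + 1)

        # D has non-zero entries only in the odd diagonal places,
        # Delta only in the even ones (places counted from 1).
        D = np.diag(np.where(i % 2 == 1, i, 0).astype(float))
        Delta = np.diag(np.where(i % 2 == 0, i, 0).astype(float))

        print(np.count_nonzero(D), np.count_nonzero(Delta))  # both matrices are non-zero
        print(np.allclose(D @ Delta, 0))                     # True: D * Delta = 0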

    Throughout this book I will always denote the unit matrix.

    The diagonal matrix λI, where λ is a scalar, is called a scalar matrix ; all its leading diagonal elements are equal to λ.

    If every row of a matrix A contains only a finite number of non-zero elements, A is said to be row-finite; if the same is true with respect to every column, A is said to be column-finite. Thus, if A is row-finite, a_{i,j} = 0 for j > q_i, say, where q_i is some function of i. If a_{i,j} = 0 for j > q, where q is independent of i, A is called a row-bounded matrix. Similarly, if A is column-finite, a_{i,j} = 0 for i > r_j, say; if a_{i,j} = 0 for i > r, where r is independent of j, A is called a column-bounded matrix.

    If a_{i,j} = 0 for j > i, A is called a lower semi-matrix; if a_{i,j} = 0 for j < i, A is called an upper semi-matrix. Thus a lower semi-matrix is row-finite (but not in general row-bounded), and an upper semi-matrix is column-finite (but not necessarily column-bounded). The matrix (1.24) of the arithmetic means is an example of a lower semi-matrix.

    A is said to be symmetric if a_{i,j} = a_{j,i}, and skew-symmetric if a_{i,j} = –a_{j,i}, for every i and j.

    The conjugate Ā of A is the matrix (ā_{i,j}), where ā_{i,j} is the complex conjugate, in the usual sense, of a_{i,j}.

    The conjugate Ā′ of the transpose A′ of A is called the associated matrix of A, and is denoted by A*, so that A* ≡ Ā′ is the conjugate of the transpose of A.

    A is said to be Hermitian if A* = A, and skew-Hermitian if A* = – A.

    From the above definitions, we see that

    (A + B)′ = A′ + B′,  (A + B)¯ = Ā + B̄,  (A + B)* = A* + B*,
    (AB)¯ = Ā B̄,  (AB)′ = B′A′,  (AB)* = B*A*,

    in the sense that the existence of either side implies that of the other side, and the equality of the two sides. The first three results are obvious; the fourth follows from the fact that ∑c_k and ∑c̄_k are both convergent or both divergent. The fifth result states that the transpose of the product of two matrices is equal to the product of their transposes in reverse order. To prove it, we have only to observe that

    {(AB)′}_{i,j} = (AB)_{j,i} = ∑_k a_{j,k} b_{k,i} = ∑_k b′_{i,k} a′_{k,j} = (B′A′)_{i,j},

    where (AB)_{i,j} denotes the element of the product matrix AB in the i-th row and j-th column. The sixth result follows in the same way.

    If a Hermitian has only real elements, it is a real symmetric matrix, and if a skew-Hermitian has only real elements, it is a real skew-symmetric matrix; so that all theorems on Hermitian matrices include theorems on real symmetric matrices as special cases.

    If AB = I, then B is called a right-hand (r.h.) reciprocal of A, denoted by A–1; and A is called a left-hand (l.h.) reciprocal of B, denoted by –1B.

    If A and B are both different from 0, and if AB = 0, then B is called a r.h. zero-divisor of A, denoted by A⁰, and A is a l.h. zero-divisor of B, denoted by ⁰B.

    ∑_i a_{i,i} is called the spur (or sometimes trace) of the matrix A, and is denoted by sp A (or tr A). For applications of the spur to the quantum theory of radiation, see Heitler, (1), 87, 151, 183; and in connexion with the mathematical theory of atom-mechanics, see von Neumann, (1), 93–101.

    1.4. A few characteristic properties of infinite matrices.

    Since the product of two diagonal matrices (d_i) and (δ_i) is the diagonal matrix (d_iδ_i), we have (d_i)(δ_i) = (δ_i)(d_i), so that for diagonal matrices, multiplication is commutative. Thus diagonal matrices constitute an example of certain classes of matrices which satisfy the commutative law AB = BA.

    But in general, products of matrices are not commutative; we may have ∑_k a_{i,k} b_{k,j} ≠ ∑_k b_{i,k} a_{k,j} for every i and j, even assuming that both series converge for every i and j. In fact, AB may not exist when BA exists; e.g., if b_{i,j} = 0 when j > 1, then BA = (b_{i,1} a_{1,j}), which exists for arbitrary b_{i,1} and A, whereas (AB)_{i,1} = ∑_k a_{i,k} b_{k,1} (i = 1, 2, …), so that AB does not exist when the last sum diverges.
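    The asymmetry of existence can be made quite concrete; the sketch below (added illustration) uses the matrices of the example above, with the invented choice a_{i,j} = 1 throughout and partial sums standing in for the infinite series.

        # B has only its first column non-zero; A has every element equal to 1.
        def a(i, j):
            return 1.0

        def b(i, j):
            return 1.0 if j == 1 else 0.0

        # (BA)_{i,j} = b_{i,1} * a_{1,j}: a single term, so BA always exists.
        print("a BA entry:", b(2, 1) * a(1, 3))

        # (AB)_{i,1} = sum_k a(i,k) * b(k,1): the partial sums grow without bound,
        # so AB does not exist.
        for K in (10, 100, 1000):
            print(K, sum(a(1, k) * b(k, 1) for k in range(1, K + 1)))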

    The sum of two matrices always exists, and A + B = B + A = (a_{i,j} + b_{i,j}).

    The distributive law

    A(B + C) = AB + AC

    holds in the sense that if AB and AC exist, then also A(B + C) exists and is equal to AB + AC. But A(B + C) may exist when AB and AC do not exist.

    For example, this is so if a_{i,j} = 1 for every i and j, and b_{i,j} = d_i + 1, c_{i,j} = d_i – 1 for every j, where ∑_i d_i converges; for then neither AB nor AC exists, while B + C = (2d_i) and A(B + C) = (∑_i 2d_i) exists.

    From the above remarks on diagonal matrices, it is obvious that multiplication of any finite number of diagonal matrices is associative; e.g., if (d_i^{(1)}), (d_i^{(2)}), (d_i^{(3)}) are any three diagonal matrices, and (d_i^{(1)})(d_i^{(2)}) × (d_i^{(3)}) means that we first find the product (d_i^{(1)})(d_i^{(2)}), and then multiply this by (d_i^{(3)}), then we have

    {(d_i^{(1)})(d_i^{(2)})}(d_i^{(3)}) = (d_i^{(1)}){(d_i^{(2)})(d_i^{(3)})} = (d_i^{(1)} d_i^{(2)} d_i^{(3)}).

    We now extend this to lower semi-matrices.

    If (a_{i,j}), (b_{i,j}) are both lower semi-matrices, then

    (AB)_{i,j} = ∑_{k=j}^{i} a_{i,k} b_{k,j}  (a finite sum) when j ≤ i,

    and

    (AB)_{i,j} = 0 when j > i.

    Hence the product of two lower semi-matrices is a lower semi-matrix.

    Again, if (c_{i,j}) is a third lower semi-matrix, we have

    {(AB)C}_{i,j} = ∑ a_{i,p} b_{p,q} c_{q,j},

    and the summation extends to all values of p, q for which i ≥ p ≥ q ≥ j; hence

    {(AB)C}_{i,j} = {A(BC)}_{i,j},

    and so the products of lower semi-matrices are associative.
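    Since every element of such a product is a finite sum, the leading n × n block of AB depends only on the leading n × n blocks of A and B, and finite truncations compute it exactly. A quick Python check (added illustration) that products of lower semi-matrices are again lower semi-matrices and associate:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 7

        # Random lower semi-matrices (lower triangular truncations).
        A, B, C = (np.tril(rng.standard_normal((n, n))) for _ in range(3))

        AB = A @ B
        print(np.allclose(AB, np.tril(AB)))          # True: AB is again lower triangular
        print(np.allclose((A @ B) @ C, A @ (B @ C))) # True: the products associate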

    For further examples of associative matrices, see (2.3, III).

    But multiplication of infinite matrices is not in general associative; e.g., if a_{i,j} = 1, c_{i,j} = 1 for every i and j, and if B is a matrix for which

    ∑_j (∑_i b_{i,j}) ≠ ∑_i (∑_j b_{i,j}),          (1.41)

    then (AB)C ≠ A(BC); for (AB)C and A(BC) are respectively equal to the left and right-hand sides of (1.41).

    As an example of a matrix satisfying (1.41), we may take, for instance, the matrix B in which b_{i,i} = 1, b_{i,i+1} = –1, and all other elements are 0;

    then it is easy to prove that the left and right-hand sides of (1.41) are 1 and 0 respectively.

    We shall say that a product of infinite matrices is associative if (a) the product exists for every succession of the multiplications involved, the order of the factors remaining the same ; (b) all the products so obtained are equal.

    If, in a set S of matrices, (a) S contains the scalar matrices, (b) every finite product of matrices belonging to S exists and is associative, (c) S is closed under finite sum and finite product (i.e., every finite sum and finite product of matrices of S belongs to the set S), then the set is called an associative field .

    Diagonal matrices, and row-finite and column-finite matrices form associative fields.

    In a self-associative field, positive integral powers of a matrix A may be defined by induction as

    A^1 = A,  A^{n+1} = A^n A  (n = 1, 2, …).

    If A² = A ≠ 0, A is said to be idempotent.

    If r is the least positive integer such that A^r = 0 (A ≠ 0), A is said to be nilpotent with index r.

    Thus I is idempotent; and the matrix A for which a_{2,1} is the only non-zero element is nilpotent with index 2. For less trivial illustrations of idempotent and nilpotent matrices, see Examples 1, Nos. 1, 9, 10, 11, 12, and Gurr, (1).
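    A quick Python check (added illustration) of these two simple cases on truncations:

        import numpy as np

        n = 5
        A = np.zeros((n, n))
        A[1, 0] = 3.0                 # only a_{2,1} is non-zero
        print(np.allclose(A @ A, 0))  # True: A is nilpotent with index 2

        D = np.diag([1.0, 0.0, 1.0, 1.0, 0.0])
        print(np.allclose(D @ D, D))  # True: D is idempotent (cf. Examples 1, No. 1)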

    If UU* = U*U = I, U is called a unitary matrix.

    If AA′ = A′A = I, A is called an orthogonal matrix.

    Obviously a real orthogonal matrix is a real unitary matrix, and conversely ; but a complex orthogonal matrix differs from a complex unitary matrix.

    If A and B are orthogonal, and AB(AB)′ and (AB)′AB exist and are associative, then AB is orthogonal.

    For (AB)′ = B′A′, and so, under the given conditions,

    AB(AB)′ = ABB′A′ = AIA′ = AA′ = I,

    and

    (AB)′AB = B′A′AB = B′IB = B′B = I,

    which proves the result.

    If A is symmetric and orthogonal, then A² = I. For A = A′, and AA′ = I.

    1.5. A few special matrices.

    (i) Suppose that P is the matrix whose successive rows are the p-th, q-th, … rows of I, where p, q, … is a permutation of 1, 2, ….

    Then PA is the matrix whose successive rows are the p-th, q-th, … rows of A; i.e., PA is obtained from A by rearranging the order of its rows. The matrix P is called a permutator.

    If, in the i-th row of P, 1 figures in the k_i-th place, i.e., p_{i,k_i} = 1 and all other elements in the i-th row are 0, then PA is obtained from A by rearranging its rows in the order k_1, k_2, k_3, …, where this last sequence is a permutation of 1, 2, 3, ….

    Permutators for columns (instead of rows) can be formed in the same way, with AP in place of PA in the above argument.
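    A permutator is simply the unit matrix with its rows rearranged, so it is easy to build one on a truncation and watch its effect; a Python sketch (added illustration, with an invented permutation):

        import numpy as np

        n = 5
        k = [5, 4, 3, 2, 1]            # a permutation k_1, k_2, ... of 1, 2, ..., n

        # Row i of P is row k_i of the unit matrix, i.e. p_{i, k_i} = 1.
        rows = [ki - 1 for ki in k]
        P = np.eye(n)[rows]

        A = np.arange(1, n * n + 1, dtype=float).reshape(n, n)

        print(np.allclose(P @ A, A[rows]))        # PA rearranges the rows of A
        print(np.allclose(A @ P.T, A[:, rows]))   # the transpose, used on the right, rearranges columns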

    (ii) Let S be the matrix (s_{i,j}) where s_{1,1} = 1, s_{i,i+1} = 1 (i ≥ 2), and all other elements of S are 0.

    Then SA is A without its second row, and AS′ is A without its second column.

    For, if the elements of SA are denoted by t_{i,j}, then t_{1,j} = a_{1,j} and t_{i,j} = a_{i+1,j} (i ≥ 2).

    And if the elements of AS′ are denoted by u_{i,j}, we have u_{i,1} = a_{i,1} and u_{i,j} = a_{i,j+1} (j ≥ 2).

    This proves the result.

    A matrix such as S is for this reason called a selector. More generally, if S is the matrix obtained from the unit matrix by omitting its p-th, q-th, r-th, … rows, then SA is the matrix obtained from A by omitting its p-th, q-th, r-th, … rows. The general definition is as follows. Let f(i) be a one-valued strictly increasing function of i, which assumes positive integral values for positive integral values of i. When

    s_{i,f(i)} = 1  (i = 1, 2, …),

    and all other elements of S are 0, S is called a (row-) selector. SA is then obtained from A by omitting those rows of A whose suffixes are missed by f(i).

    When s_{f(j),j} = 1 (j = 1, 2, …), and all other elements of S are 0, S is called a (column-) selector. AS is then obtained from A by omitting those columns of A whose suffixes are missed by f(j).

    To find the selector which removes the j_r-th rows (r = 1, 2, …) from A.

    The required matrix S is given by

    s_{i,f(i)} = 1, all other elements 0,

    where

    f(i) is the i-th positive integer not included among j_1, j_2, j_3, ….

    Hence s_{i,i} = 1 and s_{i,j} = 0 (j ≠ i) when 1 ≤ i < j_1,

    and

    s_{i,i+r} = 1, s_{i,j} = 0 (j ≠ i + r),

    when

    j_r – r < i < j_{r+1} – r  (r = 1, 2, …).
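    The general (row-) selector can be written down directly from f(i); the Python sketch below (added illustration) removes the 2nd and 4th rows of a truncation, so that f runs through 1, 3, 5, 6, 7, ….

        import numpy as np

        n = 7
        removed = {2, 4}                                   # the j_r to be omitted
        kept = [i for i in range(1, n + 1) if i not in removed]

        # Row-selector: s_{i, f(i)} = 1, all other elements 0, where f(i) is the
        # i-th positive integer not among the removed suffixes.
        S = np.zeros((len(kept), n))
        for i, fi in enumerate(kept, start=1):
            S[i - 1, fi - 1] = 1.0

        A = np.arange(1, n * n + 1, dtype=float).reshape(n, n)
        print(np.allclose(S @ A, A[[fi - 1 for fi in kept]]))  # SA omits rows 2 and 4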

    (iii) C is called a combinator, if its non-zero elements consist only of the leading diagonal elements and a single non-diagonal element c_{m,n}.

    If in C all the leading diagonal elements are equal to 1, and c_{m,n} = r, then CA is obtained from A by adding r times the n-th row to the m-th row.

    For the elements of CA are given by

    (CA)_{i,j} = a_{i,j} (i ≠ m),  (CA)_{m,j} = a_{m,j} + r a_{n,j}.

    This idea of the combinator can be generalized. In the unit matrix, replace the r-th row by arbitrary numbers c_1, c_2, …; then we obtain a (row-) combinator C in the sense that in CA all the rows are the same as in A except the r-th, which is replaced by the combination of rows

    c_1 a_{1,j} + c_2 a_{2,j} + c_3 a_{3,j} + …  (j = 1, 2, …),          (1.51)

    where it is supposed that the c_i are so chosen that these series are absolutely convergent, in order that their sums should exist and be independent of the order of the terms.

    Denoting the sum (1.51) by e_j, we have cA = e (in matrix notation), where c = [c_1, c_2, …], e = [e_1, e_2, …]; it is an illustration of the general process: row vector × matrix = row vector, whenever the product on the left exists. Thus the row-combinator C has a row c replacing the r-th row of I; and CA replaces the r-th row of A by cA. Combinators are used, in particular, when there is a linear homogeneous relation between the rows, say

    c_1 a_{1,j} + c_2 a_{2,j} + c_3 a_{3,j} + … = 0  (j = 1, 2, …),

    for producing a zero row in A.

    If C is formed by replacing the r-th column in I by c_1, c_2, …, it is called a (column-) combinator. AC then differs from A only in that its r-th column is the combination of columns

    a_{i,1} c_1 + a_{i,2} c_2 + a_{i,3} c_3 + …  (i = 1, 2, …).
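    A (row-) combinator is, likewise, the unit matrix with one row replaced by the coefficients c_1, c_2, …; the Python sketch below (added illustration, with invented coefficients) replaces the 3rd row of a truncated A by the combination cA and leaves every other row unchanged.

        import numpy as np

        n = 5
        r = 3                                       # the row to be replaced
        c = np.array([1.0, -2.0, 0.5, 0.0, 1.0])    # arbitrary coefficients c_1, c_2, ...

        C = np.eye(n)
        C[r - 1, :] = c                             # row-combinator: r-th row of I replaced by c

        A = np.arange(1, n * n + 1, dtype=float).reshape(n, n)
        CA = C @ A

        print(np.allclose(CA[r - 1], c @ A))        # the r-th row of CA is the combination cA
        others = [i for i in range(n) if i != r - 1]
        print(np.allclose(CA[others], A[others]))   # every other row is unchanged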

    1.6. The structure of a matrix.

    We have A + λI = (a_{i,j} + λδ_{i,j}), i.e., λ is to be added to each leading diagonal element only of A, so that

    (A + λI)_{i,i} = a_{i,i} + λ,  (A + λI)_{i,j} = a_{i,j} (i ≠ j).

    Since positive integral powers of a matrix A have been defined, a definite meaning can be attached to a polynomial f(A) in A, in a self-associative field.

    In the theory of finite matrices, latent roots play an important part. Suppose that (a_{i,j}) is a square matrix of order n, and that its determinant is denoted by det (A); then the latent roots of A are the roots of the equation det (λI – A) = 0, which is of the n-th degree in λ, viz.,

    f(λ) ≡ det (λI – A) = λ^n + p_1 λ^{n–1} + … + p_n = 0.

    This is called the characteristic equation of A; and f(λ) is the characteristic function corresponding to A.

    We have the classical theorem of Hamilton-Cayley that every finite matrix satisfies its characteristic equation ; i.e., f(A) = 0.

    Also, every finite matrix satisfies an equation ϕ(A) = 0 of least degree (which may be unity in the case of a scalar matrix), where ϕ (A) is a factor of f(A), or is f(A) itself. [For proofs, see Julia, (1), Part I, 81 ; Wedderburn, (1), 23 ; the whole theorem is essentially due to Jordan ; see Jordan, (1), 114].

    We also have a classical theorem on the structure of a finite matrix A, that when the latent roots of A are distinct, A can be expressed in a simple way in terms of these latent roots and certain idempotent matrices associated with A ; and this theorem can then be extended to cover the case where multiple latent roots occur, expressing A in terms of the latent roots and the principal idempotent and nilpotent elements of A corresponding to a particular latent root [see Wedderburn, (1), 25–30.]

    It would clearly be desirable to extend these results to infinite matrices, when possible. The only attempt so far made in this direction is a direct extension of Wedderburn's exposition to a class of infinite lower semi-matrices subject to a convergence condition.

    The first (and perhaps most serious) difficulty which is encountered in attempting to extend these structural theorems to general infinite matrices is that of obtaining what corresponds to the latent roots of a finite matrix. We may define a latent root of an infinite matrix A as any scalar λ for which Ax = λx, where x ≠ 0 is a column vector {x1, x2, … }. But this leads to a discussion of the spectrum [von Neumann, (1), Stone, (1)], a subject which is far from easy.

    In the case of a lower semi-matrix A, however, the latent roots λ_i are defined by λ_i = a_{i,i} (i = 1, 2, … ∞), since then det (λI – A) vanishes, no matter what the order of the determinant may be.
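    On truncations this is immediate, since the latent roots of a lower triangular block are its leading diagonal elements whatever the order of the block; a quick Python check (added illustration):

        import numpy as np

        rng = np.random.default_rng(1)
        for n in (4, 8, 16):
            A = np.tril(rng.standard_normal((n, n)))   # lower semi-matrix truncation
            latent = np.linalg.eigvals(A)
            # The latent roots coincide with the leading diagonal elements a_{i,i}.
            print(n, np.allclose(np.sort(latent.real), np.sort(np.diag(A))))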

    1.7. The exponential function of an infinite lower semi-matrix.

    We have seen that a definite meaning can be assigned (in a self-associative field) to a polynomial in an infinite matrix A. For suitably restricted classes of infinite matrices, this may be extended to more general functions than polynomials, e.g., the exponential function.

    It is well known that the exponential function E(z) ≡ e^z has no zero in the complex plane of z. This result has been greatly extended by Dienes, (3), who shows that the exponential function has no zero in the linear associative algebras to a finite base, and that it has no self-associative zero in finite non-associative linear algebras.

    We now have the following result, also due to Dienes.

    (1.7, I). The exponential function exists for every lower semi-matrix ; all its values are lower semi-matrices, and it misses all the nilpotent values in the field of lower semi-matrices.

    (i) Since lower semi-matrices form an associative field, and in particular are self-associative, the exponential function may be defined as

    E(A) ≡ I + A + A²/2! + A³/3! + …,

    where the matrix element {E(A)}_{i,j} is given by

    {E(A)}_{i,j} = δ_{i,j} + a_{i,j} + {A²}_{i,j}/2! + {A³}_{i,j}/3! + ….

    Let b be the maximum modulus of the elements a_{k,j} in the first i rows of A; then

    |{A^n}_{i,j}| ≤ i^{n–1} b^n,

    so that the series for {E(A)}_{i,j} is dominated by ∑_n i^{n–1} b^n/n!, which converges. Hence E(A) has a definite meaning in the field of lower semi-matrices.

    (ii) In the Cauchy product of E(A), E(B), the terms which are homogeneous and of the n-th degree in A and B are

    ∑_{r=0}^{n} A^r B^{n–r}/(r! (n–r)!),

    which is equal to (A + B)^n/n! in an associative field when A and B commute (i.e., when AB = BA).

    (iii) Since, from (ii), E(A + B) = E(A)E(B) when AB = BA, put B = –A, so that E(–A) is an ordinary element in the field of lower semi-matrices.

    Now from definition, E(0) = I; hence it follows that

    E(A)E(–A) = E(A – A) = E(0) = I,

    and so it is impossible for E(A) to assume the value 0.

    Further, suppose that E(A) = X, so that E(nA) = X^n, n being a positive integer. Then, since E(nA) cannot be zero, E(A) cannot assume a value X such that X^n = 0; i.e., E(A) misses all the nilpotent values in the field of lower semi-matrices. [For an extension to Kr or Kc matrices, see Examples 2, No. 18; and for an extension to Hilbert matrices, see Examples 9, No. 4.]
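    The defining series and the relation E(A)E(–A) = I are easy to check on truncations, since every power of a lower semi-matrix keeps the leading n × n block closed in itself. A Python sketch (added illustration) summing the series directly:

        import numpy as np
        from math import factorial

        def exp_series(A, terms=30):
            """E(A) = I + A + A^2/2! + ..., summed term by term."""
            E = np.eye(A.shape[0])
            P = np.eye(A.shape[0])
            for m in range(1, terms):
                P = P @ A
                E = E + P / factorial(m)
            return E

        rng = np.random.default_rng(2)
        n = 6
        A = np.tril(rng.standard_normal((n, n)))       # lower semi-matrix truncation

        E = exp_series(A)
        print(np.allclose(E, np.tril(E)))              # E(A) is again a lower semi-matrix
        print(np.allclose(exp_series(A) @ exp_series(-A), np.eye(n)))  # E(A)E(-A) = I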

    1.8. Semi-continuous and continuous matrices.

    It is often convenient to use the operator obtained by replacing the positive integer n in the infinite matrix (a_{n,k}) by a continuous positive variable, say ω. Instead of writing a_{ω,k} we then usually write a_k(ω) (see § 4.1, etc.).

    Further, both positive integers n and k are sometimes replaced by continuous positive variables, say x and y; and we then usually write a(x, y) for a_{x,y}.

    The latter type of operator is often referred to as a " continuous matrix ", although it is not (strictly) a matrix ; it has been employed in the Quantum theory, and in abstract statistics ; see also § 3.5.

    The former type of operator, ak(ω), will be called a " semi-continuous matrix ".

    The usual matrix a_{n,k} would then naturally be called a "discontinuous matrix"; but we shall only employ this term on a few isolated occasions when distinction has to be made between a_{n,k} and a_k(ω), e.g., in (5.4, III), (5.5, III), (5.5, IV). Otherwise a discontinuous matrix will simply be called a matrix.

    EXAMPLES 1

    1. Show that a real diagonal matrix is idempotent if, and only if, all its elements are 0 or 1, one at least being different from 0.

    2. Prove that a lower semi-matrix is orthogonal if, and only if, it is a diagonal matrix with elements ± 1.

    3. Show that if all the principal diagonal elements of a lower semi-matrix A are zero, then the first k diagonals of A^k consist wholly of zero elements.

    4. Show in detail that if (U – I)A = 0, where A and U are lower semi-matrices, and a_{i,i} ≠ 0 for any i, then U is the unit matrix.

    5. If A′ is the transpose of any matrix A, and AA′, A′A both exist, show that AA′ and A′A are both symmetric.

    6. Show that for any matrix A, such that AA* and A*A both exist, AA* and A*A are both Hermitian.

    7. Prove that every finite product of permutators and any given matrix A is associative if A figures once only in the product.

    8. Show that every finite product of permutators and two given matrices A, B is associative if (i) both A and B figure once only in the product, (ii) A precedes B, and (iii) ∑_k a_{i,k} b_{r_k,j} is absolutely convergent for every i and j, where r_k is any permutation of k = 1, 2, 3, …, and is different for different values of k.

    9. In the matrix A, the non-zero elements are all in one row (column). Show that A is idempotent if, and only if, the leading diagonal element in that row (column) is 1. Show also that A is nilpotent with index 2 if, and only if, that diagonal element is 0.

    10. Show that a matrix whose only non-zero elements are a_{1,1} = a, a_{1,2} = b, a_{2,1} = c, a_{2,2} = d, where b and c are not both zero, is idempotent if, and only if, (i) a + d = 1, and (ii) ad – bc = 0.

    11. If A is a matrix in which all rows after the second consist solely of zero elements, with a = a_{1,1}, b = a_{1,2}, c = a_{2,1}, d = a_{2,2}, and bc ≠ 0, show that A is idempotent if, and only if, (i) a + d = 1, (ii) the elements of the first row are in constant ratio to the corresponding elements of the second row.

    12. If the matrix A of No. 11 is such that bd ≠ 0, show that A is nilpotent with index 2 if, and only if, (i) a + d = 0, (ii) the elements of the first row are in constant ratio to the corresponding elements of the second row.

    13. If X is any diagonal matrix, and E(X) is the exponential
