Linear Algebra and Linear Operators in Engineering: With Applications in Mathematica®

Ebook · 993 pages · 6 hours
About this ebook

Designed for advanced engineering, physical science, and applied mathematics students, this innovative textbook is an introduction to both the theory and practical application of linear algebra and functional analysis. The book is self-contained, beginning with elementary principles, basic concepts, and definitions. The important theorems of the subject are covered and effective application tools are developed, working up to a thorough treatment of eigenanalysis and the spectral resolution theorem. Building on a fundamental understanding of finite vector spaces, infinite dimensional Hilbert spaces are introduced from analogy. Wherever possible, theorems and definitions from matrix theory are called upon to drive the analogy home. The result is a clear and intuitive segue to functional analysis, culminating in a practical introduction to the functional theory of integral and differential operators. Numerous examples, problems, and illustrations highlight applications from all over engineering and the physical sciences. Also included are several numerical applications, complete with Mathematica solutions and code, giving the student a "hands-on" introduction to numerical analysis. Linear Algebra and Linear Operators in Engineering is ideally suited as the main text of an introductory graduate course, and is a fine instrument for self-study or as a general reference for those applying mathematics.
  • Contains numerous Mathematica examples complete with full code and solutions
  • Provides complete numerical algorithms for solving linear and nonlinear problems
  • Spans elementary notions to the functional theory of linear integral and differential equations
  • Includes over 130 examples, illustrations, and exercises and over 220 problems ranging from basic concepts to challenging applications
  • Presents real-life applications from chemical, mechanical, and electrical engineering and the physical sciences
Language: English
Release date: Jul 12, 2000
ISBN: 9780080510248

    Book preview

    Linear Algebra and Linear Operators in Engineering - H. Ted Davis


    Preface

    H. Ted Davis and Kendall T. Thomson

    This textbook is aimed at first-year graduate students in engineering or the physical sciences. It is based on a course that one of us (H.T.D.) has given over the past several years to chemical engineering and materials science students.

    The emphasis of the text is on the use of algebraic and operator techniques to solve engineering and scientific problems. Where the proof of a theorem can be given without too much tedious detail, it is included. Otherwise, the theorem is quoted along with an indication of a source for the proof. Numerical techniques for solving both nonlinear and linear systems of equations are emphasized. Eigenvector and eigenvalue theory, that is, the eigenproblem and its relationship to the operator theory of matrices, is developed in considerable detail.

    Homework problems, drawn from chemical, mechanical, and electrical engineering as well as from physics and chemistry, are collected at the end of each chapter—the book contains over 250 homework problems. Exercises are sprinkled throughout the text. Some 15 examples are solved using Mathematica, with the Mathematica codes presented in an appendix. Partially solved examples are given in the text as illustrations to be completed by the student.

    The book is largely self-contained. The first two chapters cover elementary principles. Chapter 3 is devoted to techniques for solving linear and nonlinear algebraic systems of equations. The theory of the solvability of linear systems is presented in Chapter 4. Matrices as linear operators in linear vector spaces are studied in Chapters 5 through 7. The last three chapters of the text use analogies between finite and infinite dimensional vector spaces to introduce the functional theory of linear differential and integral equations. These three chapters could serve as an introduction to a more advanced course on functional analysis.

    1

    Determinants

    1.1. Synopsis

    For any square array of numbers, i.e., a square matrix, we can define a determinant—a scalar number, real or complex. In this chapter we will give the fundamental definition of a determinant and use it to prove several elementary properties. These properties include: determinant addition, scalar multiplication, row and column addition or subtraction, and row and column interchange. As we will see, the elementary properties often enable easy evaluation of a determinant, which otherwise could require an exceedingly large number of multiplication and addition operations.

    Every determinant has cofactors, which are also determinants but of lower order (if the determinant corresponds to an n × n array, its cofactors correspond to (n − 1) × (n − 1) arrays). We will show how determinants can be evaluated as linear expansions of cofactors. We will then use these cofactor expansions to prove that a system of linear equations has a unique solution if the determinant of the coefficients in the linear equations is not 0. This result is known as Cramer’s rule, which gives the analytic solution to the linear equations in terms of ratios of determinants. The properties of determinants established in this chapter will play (in the chapters to follow) a big role in the theory of linear and nonlinear systems and in the theory of matrices as linear operators in vector spaces.

    1.2. Matrices

    A matrix A is an array of numbers, complex or real. We say A is an m × n-dimensional matrix if it has m rows and n columns, i.e.,

    A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}    (1.2.1)

    The numbers aij (i = 1,…, m, j = 1,…,n) are called the elements of A with the element aij belonging to the ith row and jth column of A. An abbreviated notation for A is

    A = [a_{ij}]    (1.2.2)

    By interchanging the rows and columns of A, the transpose matrix AT is generated. Namely,

    A^T = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & a_{22} & \cdots & a_{m2} \\ \vdots & \vdots & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix}    (1.2.3)

    The rows of AT are the columns of A and the ijth element of AT is aji, i.e., (AT)ij = aji. If A is an m × n matrix, then AT is an n × m matrix.

    When m = n, we say A is a square matrix. Square matrices figure importantly in applications of linear algebra, but non-square matrices are also encountered in common physical problems, e.g., in least squares data analysis. The m × 1 matrix

    x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_m \end{pmatrix}    (1.2.4)

    and the 1 × n matrix

    y^T = (y_1, y_2, \ldots, y_n)    (1.2.5)

    are also important cases. They are called vectors. We say that x is an m-dimensional column vector containing m elements, and yT is an n-dimensional row vector containing n elements. Note that yT is the transpose of the n × 1 matrix y—the n-dimensional column vector y.

    If A and B have the same dimensions, then they can be added. The rule of matrix addition is that corresponding elements are added, i.e.,

    A + B = [a_{ij} + b_{ij}]    (1.2.6)

    Consistent with the definition of m × n matrix addition, the multiplication of the matrix A by a complex number α (scalar multiplication) is defined by

    \alpha A = [\alpha a_{ij}]    (1.2.7)

    i.e., αA is formed by replacing every element aij of A by αaij.

    1.3. Definition of a Determinant

    A determinant is defined specifically for a square matrix. The various notations for the determinant of A are

    \det A = D_A = |A| = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}    (1.3.1)

    We define the determinant of A as follows:

    D = {\sum}' (-1)^P a_{1l_1} a_{2l_2} \cdots a_{nl_n}    (1.3.2)

    where the summation is taken over all possible products of aij in which each product contains n elements and one and only one element from each row and each column. The indices l1,…,ln are permutations of the integers 1,…, n. We will use the symbol Σ′ to denote summation over all permutations. For a given set {l1,…,ln}, the quantity P denotes the number of transpositions required to transform the sequence l1, l2,…,ln into the ordered sequence 1, 2,…,n. A transposition is defined as an interchange of two numbers li and lj. Note that there are n! terms in the sum defining D since there are exactly n! ways to reorder the set of numbers {1, 2,…,n} into distinct sets {l1, l2,…,ln}.

    As an example of a determinant, consider

    \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = a_{11}a_{22}a_{33} - a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31} - a_{13}a_{22}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32}    (1.3.3)

    The sign of the second term is negative because the indices {2, 1, 3} are transposed to {1, 2, 3} with the one transposition

    \{2, 1, 3\} \to \{1, 2, 3\}

    and so P = 1 and (−1)^P = −1. However, the transposition also could have been accomplished with the three transpositions

    \{2, 1, 3\} \to \{3, 1, 2\} \to \{1, 3, 2\} \to \{1, 2, 3\}

    in which case P = 3 and (−1)^P = −1. We see that the number of transpositions P needed to reorder a given sequence l1,…,ln is not unique. However, the evenness or oddness of P is unique and thus (−1)^P is unique for a given sequence.

    Exercise 1.3.1.

    Verify the signs in Eq. (1.3.3). Also, verify that the number of transpositions required for a11a25a33a42a54 is even.

    A definition equivalent to that in Eq. (1.3.2) is

    D = {\sum}' (-1)^P a_{l_1 1} a_{l_2 2} \cdots a_{l_n n}    (1.3.4)

    If the product a_{l_1 1} a_{l_2 2} \cdots a_{l_n n} is reordered so that the first indices of the factors a_{l_i i} are ordered in the sequence 1,…,n, the second indices will be in a sequence requiring P transpositions to reorder as 1,…,n. Thus, the n! n-tuples in Eqs. (1.3.2) and (1.3.4) are the same and have the same signs.

    The determinant in Eq. (1.3.3) can be expanded according to the defining equation (1.3.4) as

    D = a_{11}a_{22}a_{33} - a_{21}a_{12}a_{33} + a_{21}a_{32}a_{13} - a_{31}a_{22}a_{13} + a_{31}a_{12}a_{23} - a_{11}a_{32}a_{23}    (1.3.5)

    It is obvious by inspection that the right-hand sides of Eqs. (1.3.2) and (1.3.4) are identical since the various terms differ only by the order in which the multiplication of each 3-tuple is carried out.
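    Since the book solves several examples in Mathematica, a brief sketch in that language may help here. The code below is our own illustration (not from the book's appendix): it evaluates a determinant directly from the n!-term permutation definition of Eq. (1.3.2), using the built-in Signature to supply (−1)^P, and checks the result against the built-in Det.

        (* Determinant from the permutation definition, Eq. (1.3.2):
           sum over all permutations of the column indices, each term
           signed by the parity of the permutation. *)
        detFromPermutations[a_?MatrixQ] :=
          Total[(Signature[#] Product[a[[i, #[[i]]]], {i, Length[a]}]) & /@
            Permutations[Range[Length[a]]]]

        a = RandomInteger[{-5, 5}, {4, 4}];
        detFromPermutations[a] == Det[a]   (* -> True *)

    The n!-fold growth of the permutation list is exactly why this definition is impractical for large n, as discussed below.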

    In the case of second- and third-order determinants, there is an easy way to generate the distinct n-tuples. For the second-order case,

    \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}

    the product of the main diagonal, a11a22, is one of the 2-tuples and the product of the reverse main diagonal, a12a21, is the other. The sign of a12a21 is negative since {2, 1} requires one transposition to reorder to {1, 2}. Thus,

    D = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}    (1.3.6)

    since there are no other 2-tuples containing exactly one element from each row and column.

    In the case of the third-order determinant, the six 3-tuples can be generated by multiplying the elements connected by solid and dashed diagonal curves in the book's diagram (figure not reproduced in this preview).

    The products associated with solid curves require an even number of transpositions P and those associated with the dashed curves require an odd P. Thus, the determinant is given by

    D = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}    (1.3.7)

    in agreement with Eq. (1.3.3), the defining expression. For example, the following determinant is 0:

    (1.3.8)

    The evaluation of a determinant by calculation of the n! n-tuples requires (n − 1)(n!) multiplications. For a fourth-order determinant, this requires 72 multiplications, not many in the age of computers. However, if n = 100, the number of required multiplications would be

    (n - 1)(n!) \approx 99 \left( \frac{100}{e} \right)^{100} \approx 3.7 \times 10^{158}    (1.3.9)

    where Stirling’s approximation, n! ~ (n/e)n, has been used. If the time for one multiplication is 10−9 sec, then the required time to do the multiplications would be

    3.7 \times 10^{158} \times 10^{-9} \text{ sec} = 3.7 \times 10^{149} \text{ sec} \approx 1.2 \times 10^{142} \text{ years}    (1.3.10)

    Obviously, large determinants cannot be evaluated by direct calculation of the defining n-tuples. Fortunately, the method of Gauss elimination, which we will describe in Chapter 3, reduces the number of multiplications to n³. For n = 100, this is 10⁶ multiplications, as compared to 3.7 × 10¹⁵⁸ by direct n-tuple evaluation. The Gauss elimination method depends on the application of some of the elementary properties of determinants given in the next section.
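    As a quick illustration (our own sketch, not the book's code), the operation counts quoted above can be reproduced in Mathematica. Note that the crude Stirling form n! ~ (n/e)^n omits the factor \sqrt{2\pi n}, so it underestimates the exact count by roughly a factor of 25 at n = 100; either way, the conclusion is unchanged.

        n = 100;
        stirlingCount = N[(n - 1) (n/E)^n]  (* ~ 3.7*10^158, as in Eq. (1.3.9) *)
        exactCount = N[(n - 1) n!]          (* ~ 9.2*10^159, the exact count *)
        gaussCount = n^3                    (* 10^6 multiplications *)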

    1.4. Elementary Properties of Determinants

    If the determinant of A is given by Eq. (1.3.2), then, because the ijth element of the transpose A^T is a_{ji}, it follows that

    D_{A^T} = {\sum}' (-1)^P a_{l_1 1} a_{l_2 2} \cdots a_{l_n n}    (1.4.1)

    However, according to Eq. (1.3.4), the right-hand side of Eq. (1.4.1) is also equal to the determinant DA of A. This establishes the property that

    1. A determinant is invariant to the interchange of rows and columns; i.e., the determinant of A is equal to the determinant of AT.

    For example,

    \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}

    Another elementary property of a determinant is that

    2. If two rows (columns) of a determinant are interchanged, then the determinant changes sign.

    For example,

    \begin{vmatrix} a_{21} & a_{22} \\ a_{11} & a_{12} \end{vmatrix} = a_{21}a_{12} - a_{22}a_{11} = -\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}

    From the definition of D in Eq. (1.3.2),

    D = {\sum}' (-1)^P a_{1l_1} \cdots a_{il_i} \cdots a_{jl_j} \cdots a_{nl_n},

    it follows that the determinant D′ formed by the interchange of rows i and j in D is

    D' = {\sum}' (-1)^{P'} a_{1l_1} \cdots a_{jl_i} \cdots a_{il_j} \cdots a_{nl_n}    (1.4.2)

    Each term in D′ corresponds to one in D if one transposition is carried out. Thus, P and P′ differ by 1, and so (−1)^{P′} = (−1)^{P+1} = −(−1)^P. From this it follows that D′ = −D. A similar proof that the interchange of two columns changes the sign of the determinant can be given using the definition of D in Eq. (1.3.4). Alternatively, from the fact that DA = DAT, it follows that if the interchange of two rows changes the sign of the determinant, then the interchange of two columns does the same thing because the columns of AT are the rows of A.

    The preceding property implies:

    3. If any two rows (columns) of a matrix are the same, its determinant is 0.

    If two rows (columns) are interchanged, D = −D′. However, if the rows (columns) interchanged are identical, then D = D′. The two equalities, D = −D′ and D = D′, are possible only if D = D′ = 0.

    Next, we note that

    4. Multiplication of the determinant D by a constant k is the same as multiplying any row (column) by k.

    This property follows from the commutative law of scalar multiplication, i.e., kab = (ka)b = a(kb), or

    kD = {\sum}' (-1)^P (k a_{1l_1}) a_{2l_2} \cdots a_{nl_n} = \begin{vmatrix} ka_{11} & \cdots & ka_{1n} \\ a_{21} & \cdots & a_{2n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}    (1.4.3)

    Multiplication of the determinant

    gives

    from which we can conclude that D/2 = 0 and D = 0, since D/2 has two identical columns. Stated differently, the multiplication rule says that if a row (column) of D has a common factor k, then D = kD′, where D′ is formed from D by replacing the row (column) with the common factor by the row (column) divided by the common factor. Thus, in the previous example,

    The fact that a determinant is 0 if two rows (columns) are the same yields the property:

    5. The addition of a row (column) multiplied by a constant to any other row (column) does not change the value of D.

    To prove this, note that

    D' = {\sum}' (-1)^P a_{1l_1} \cdots (a_{il_i} + k a_{jl_i}) \cdots a_{nl_n} = D + k {\sum}' (-1)^P a_{1l_1} \cdots a_{jl_i} \cdots a_{jl_j} \cdots a_{nl_n}    (1.4.4)

    The second determinant on the right-hand side of Eq. (1.4.4) is 0 since the elements of the ith and jth rows are the same. Thus, D′ = D. The equality DA = DAT establishes the property for column addition. As an example,

    \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} a_{11} + k a_{21} & a_{12} + k a_{22} \\ a_{21} & a_{22} \end{vmatrix}

    Elementary properties can be used to simplify a determinant. For example,

    (1.4.5)

    The sequence of application of the elementary properties in Eq. (1.4.5) is, of course, not unique.

    Another useful property of determinants is:

    6. If two determinants differ only by one row (column), their sum differs only in that the differing rows (columns) are summed.

    That is,

    \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ b_{i1} & \cdots & b_{in} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} + \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ c_{i1} & \cdots & c_{in} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ b_{i1} + c_{i1} & \cdots & b_{in} + c_{in} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix}    (1.4.6)

    This property follows from the definition of determinants and the distributive law (ca + cb = c(a + b)) of scalar multiplication.

    As the last elementary property of determinants to be given in this section, consider differentiation of D by the variable t:

    \frac{dD}{dt} = \sum_{i=1}^{n} D_i'    (1.4.7)

    or

    \frac{dD}{dt} = \sum_{i=1}^{n} D_i''    (1.4.8)

    The determinant D′i is evaluated by replacing in D the elements of the ith row by the derivatives of the elements of the ith row. Similarly, for D″i, replace in D the elements of the ith column by the derivatives of the elements of the ith column. For example,

    \frac{d}{dt} \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = \begin{vmatrix} da_{11}/dt & da_{12}/dt \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{12} \\ da_{21}/dt & da_{22}/dt \end{vmatrix}    (1.4.9)
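    The differentiation property is easy to check symbolically. The following sketch is ours (the element functions f, g, h, k are hypothetical placeholders) and verifies Eq. (1.4.7) for a 2 × 2 determinant:

        (* d|A|/dt equals the sum of determinants in which one row at a
           time is replaced by its derivatives, per Eq. (1.4.7). *)
        a = {{f[t], g[t]}, {h[t], k[t]}};
        lhs = D[Det[a], t];
        rhs = Det[{D[a[[1]], t], a[[2]]}] + Det[{a[[1]], D[a[[2]], t]}];
        Simplify[lhs == rhs]   (* -> True *)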

    1.5. Cofactor Expansions

    We define the cofactor Aij as the quantity (–1)i+j multiplied by the determinant of the matrix generated when the ith row and jth column of A are removed. For example, some of the cofactors of the matrix

    (1.5.1)

    include

    (1.5.2)

    In general,

    A_{ij} = (-1)^{i+j} \begin{vmatrix} a_{11} & \cdots & a_{1,j-1} & a_{1,j+1} & \cdots & a_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{i-1,1} & \cdots & a_{i-1,j-1} & a_{i-1,j+1} & \cdots & a_{i-1,n} \\ a_{i+1,1} & \cdots & a_{i+1,j-1} & a_{i+1,j+1} & \cdots & a_{i+1,n} \\ \vdots & & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n,j-1} & a_{n,j+1} & \cdots & a_{nn} \end{vmatrix}    (1.5.3)

    Note that an n × n matrix has n² cofactors.

    Cofactors are important because they enable us to evaluate an nth-order determinant as a linear combination of n (n − 1)th-order determinants. The evaluation makes use of the following theorem:

    COFACTOR EXPANSION THEOREM. The determinant D of A can be computed from

    D = \sum_{j=1}^{n} a_{ij} A_{ij}, \qquad i = 1, \ldots, n    (1.5.4)

    or

    D = \sum_{i=1}^{n} a_{ij} A_{ij}, \qquad j = 1, \ldots, n    (1.5.5)

    Equation (1.5.4) is called a cofactor expansion by the ith row and Eq. (1.5.5) is called a cofactor expansion by the jth column.

    Before presenting the proof of the cofactor expansion, we will give an example.

    Let

    (1.5.6)

    By the expression given in Eq. (1.3.7), it follows that

    The cofactor expansion by row 1 yields

    and the cofactor expansion by column 2 yields

    To prove the cofactor expansion theorem, we start with the definition of the determinant given in Eq. (1.3.4). Choosing an arbitrary column j, we can rewrite this equation as

    D = \sum_{i=1}^{n} a_{ij} \left[ {\sum_{l_j = i}}' (-1)^P a_{l_1 1} \cdots a_{l_{j-1}, j-1}\, a_{l_{j+1}, j+1} \cdots a_{l_n n} \right]    (1.5.7)

    where the primed sum now refers to the sum over all permutations in which lj = i. For a given value of i in the first sum, we would like now to isolate the ijth cofactor of A. To accomplish this, we must examine the factor (−1)^P closely. First, we note that the permutations defined by P can be redefined in terms of permutations in which all elements except element i are in proper order plus the permutations required to put i in its place in the sequence 1, 2,…,n. For this new definition, the proper sequence, in general, would be

    (1.5.8)

    We now define P′_{ij} as the number of transpositions required to bring a sequence back to the proper sequence defined in Eq. (1.5.8). We now note that |j − i| transpositions are required to transform this new proper sequence back to the original proper sequence 1, 2,…,n, and Eq. (1.5.7) becomes

    D = \sum_{i=1}^{n} a_{ij} (-1)^{|j-i|} {\sum}' (-1)^{P'_{ij}} a_{l_1 1} \cdots a_{l_{j-1}, j-1}\, a_{l_{j+1}, j+1} \cdots a_{l_n n}    (1.5.9)

    which we recognize using the definition of a cofactor as

    D = \sum_{i=1}^{n} a_{ij} A_{ij}    (1.5.10)

    A similar proof exists for Eq. (1.5.4).
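    A direct transcription of the theorem into Mathematica is straightforward. The helper functions below are our own sketch (no built-in cofactor function is assumed); Drop[a, {i}, {j}] deletes the ith row and jth column:

        (* Cofactor A_ij and the row/column cofactor expansions of
           Eqs. (1.5.4) and (1.5.5). *)
        cofactor[a_, i_, j_] := (-1)^(i + j) Det[Drop[a, {i}, {j}]]
        rowExpansion[a_, i_] := Sum[a[[i, j]] cofactor[a, i, j], {j, Length[a]}]
        colExpansion[a_, j_] := Sum[a[[i, j]] cofactor[a, i, j], {i, Length[a]}]

        a = RandomInteger[{-9, 9}, {5, 5}];
        {rowExpansion[a, 2] == Det[a], colExpansion[a, 4] == Det[a]}
        (* -> {True, True}, for any choice of row or column *)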

    With the aid of the cofactor expansion theorem, we see that the determinant of an upper triangular matrix, i.e.,

    U = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{pmatrix}    (1.5.11)

    where uij = 0, when i > j, is the product of the main diagonal elements of U, i.e.,

    D_U = u_{11} u_{22} \cdots u_{nn} = \prod_{i=1}^{n} u_{ii}    (1.5.12)

    To derive Eq. (1.5.12), we use the cofactor expansion theorem with the first column of U to obtain

    D_U = \sum_{i=1}^{n} u_{i1} U_{i1} = u_{11} U_{11}

    where U_{i1} is the i1 cofactor of U. Repeat the process on the (n − 1)th-order upper triangular determinant, then the (n − 2)th one, etc., until Eq. (1.5.12) results. Similarly, the row cofactor expansion theorem can be used to prove that the determinant of the lower triangular matrix,

    L = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix}    (1.5.13)

    is

    D_L = l_{11} l_{22} \cdots l_{nn} = \prod_{i=1}^{n} l_{ii}    (1.5.14)

    i.e., it is again the product of the main diagonal elements. In L, lij = 0 when j > i.
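    Both triangular results are easy to spot-check numerically. This sketch is ours and assumes the built-in UpperTriangularize of modern Mathematica:

        (* Determinant of an upper triangular matrix equals the product
           of its diagonal elements, per Eq. (1.5.12). *)
        u = UpperTriangularize[RandomInteger[{-9, 9}, {6, 6}]];
        Det[u] == Product[u[[i, i]], {i, 6}]   (* -> True *)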

    The property of the row cofactor expansion is that the sum

    \sum_{j=1}^{n} a_{ij} A_{ij}

    places the elements a_{ij} of the ith row in the ith row of D_A; i.e., the sum puts in the ith row of D_A the elements a_{i1}, a_{i2},…,a_{in}. Thus, the quantity

    \sum_{j=1}^{n} \alpha_j A_{ij}

    puts the elements α_1, α_2,…,α_n in the ith row of D_A, i.e.,

    \sum_{j=1}^{n} \alpha_j A_{ij} = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ \alpha_1 & \cdots & \alpha_n \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} \quad (\alpha\text{'s in the } i\text{th row})    (1.5.15)

    Similarly, for the column expansion,

    \sum_{i=1}^{n} \alpha_i A_{ij} = \begin{vmatrix} a_{11} & \cdots & \alpha_1 & \cdots & a_{1n} \\ a_{21} & \cdots & \alpha_2 & \cdots & a_{2n} \\ \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & \alpha_n & \cdots & a_{nn} \end{vmatrix} \quad (\alpha\text{'s in the } j\text{th column})    (1.5.16)

    Example 1.5.1.

    The cofactor expansion of DA by the first-column cofactors involves the same cofactors, A11, A21, and A31, as the cofactor expansion of DA′ by the first column. The difference between the two expansions is simply that the multipliers of A11, A21, and A31 differ since the elements of the first column differ.

    Consider next the expansions

    \sum_{i=1}^{n} a_{ik} A_{ij}, \qquad k \neq j    (1.5.17)

    and

    \sum_{j=1}^{n} a_{kj} A_{ij}, \qquad k \neq i    (1.5.18)

    The determinant represented by Eq. (1.5.17) is the same as DA, except that the jth column is replaced by the elements of the kth column of A, i.e.,

    \sum_{i=1}^{n} a_{ik} A_{ij} = \begin{vmatrix} a_{11} & \cdots & a_{1k} & \cdots & a_{1k} & \cdots & a_{1n} \\ \vdots & & \vdots & & \vdots & & \vdots \\ a_{n1} & \cdots & a_{nk} & \cdots & a_{nk} & \cdots & a_{nn} \end{vmatrix} \quad (k\text{th-column elements also in the } j\text{th column})    (1.5.19)

    The determinant in Eq. (1.5.19) is 0 because columns j and k are identical. Similarly, the determinant represented by Eq. (1.5.18) is the same as DA, except that the ith row is replaced by the elements of the kth row of A, i.e.,

    \sum_{j=1}^{n} a_{kj} A_{ij} = \begin{vmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{k1} & \cdots & a_{kn} \\ \vdots & & \vdots \\ a_{k1} & \cdots & a_{kn} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{vmatrix} \quad (k\text{th-row elements also in the } i\text{th row})    (1.5.20)

    The determinant in Eq. (1.5.20) is 0 because rows i and k are identical.

    Eqs. (1.5.19) and (1.5.20) embody the alien cofactor expansion theorem:

    ALIEN COFACTOR EXPANSION THEOREM. The alien cofactor expansions are 0, i.e.,

    \sum_{i=1}^{n} a_{ik} A_{ij} = 0, \quad k \neq j; \qquad \sum_{j=1}^{n} a_{kj} A_{ij} = 0, \quad k \neq i    (1.5.21)

    The cofactor expansion theorem and the alien cofactor expansion theorem can be summarized as

    \sum_{i=1}^{n} a_{ik} A_{ij} = D \delta_{kj}, \qquad \sum_{j=1}^{n} a_{kj} A_{ij} = D \delta_{ki}    (1.5.22)

    where δkj is the Kronecker delta function with the property

    \delta_{kj} = \begin{cases} 1, & k = j \\ 0, & k \neq j \end{cases}    (1.5.23)
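    Using the cofactor helper from the sketch following the proof in this section, the combined statement of Eq. (1.5.22) can be verified numerically (again, our own illustration, not the book's code):

        (* Matching expansions give D; alien expansions give 0. *)
        a = RandomInteger[{-9, 9}, {4, 4}];
        Table[Sum[a[[i, k]] cofactor[a, i, j], {i, 4}] ==
            Det[a] KroneckerDelta[k, j], {k, 4}, {j, 4}]
        (* -> a 4 x 4 array of True *)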

    1.6. Cramer’s Rule for Linear Equations

    Frequently, in a practical situation, one wishes to know what values of the variables x1, x2,…,xn satisfy the n linear equations

    a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1
    a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2
    \vdots
    a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n = b_n    (1.6.1)

    These equations can be summarized as

    \sum_{j=1}^{n} a_{ij} x_j = b_i, \qquad i = 1, \ldots, n    (1.6.2)

    Equation (1.6.2) is suggestive of a solution to the set of equations. Let us multiply Eq. (1.6.2) by the cofactor Aik and sum over i. By interchanging the order of summation over i and j on the left-hand side of the resulting equation, we obtain

    \sum_{j=1}^{n} \left( \sum_{i=1}^{n} A_{ik} a_{ij} \right) x_j = \sum_{i=1}^{n} A_{ik} b_i    (1.6.3)

    By the alien cofactor expansion, it follows that

    \sum_{i=1}^{n} A_{ik} a_{ij} = 0    (1.6.4)

    unless j = k, whereas, when j = k, the cofactor expansion yields

    \sum_{i=1}^{n} A_{ik} a_{ik} = D    (1.6.5)

    Also, it follows from Eq. (1.5.19) that

    \sum_{i=1}^{n} A_{ik} b_i = D_k    (1.6.6)

    where Dk is the same as the determinant D except that the kth column of D has been replaced by the elements b1, b2,…,bn.

    According to Eqs. (1.6.4)–(1.6.6), Eq. (1.6.3) becomes

    D x_k = D_k, \qquad k = 1, \ldots, n    (1.6.7)

    Cramer’s rule follows from the preceding result:

    CRAMER’S RULE. If the determinant D is not 0, then the solution to the linear system, Eq. (1.6.1), is

    x_k = \frac{D_k}{D}, \qquad k = 1, \ldots, n    (1.6.8)

    and the solution is unique.

    To prove uniqueness, suppose xi and yi, i = 1,…,n, are two solutions to Eq. (1.6.1). Subtracting the equations for the yi from those for the xi yields

    \sum_{j=1}^{n} a_{ij} (x_j - y_j) = 0, \qquad i = 1, \ldots, n    (1.6.9)

    Multiplication of Eq. (1.6.9) by Aik and summation over i yields

    D (x_k - y_k) = 0, \qquad k = 1, \ldots, n    (1.6.10)

    or xk = yk, k = 1,…, n, since D ≠ 0. Incidentally, even if D = 0, the linear equations sometimes have a solution, but not a unique one. The full theory of the solution of linear systems will be presented in Chapter 4.
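    Cramer's rule translates into a few lines of Mathematica. The following is a sketch of ours (the book's own numerical codes are collected in its appendix): D_k is built by replacing the kth column of A with b, and the result is checked against the built-in LinearSolve.

        (* Cramer's rule, Eq. (1.6.8): x_k = D_k / D. *)
        cramerSolve[a_?MatrixQ, b_List] :=
          Module[{d = Det[a]},
            Table[Det[Transpose[ReplacePart[Transpose[a], k -> b]]]/d,
              {k, Length[b]}]]

        a = {{2, 1, -1}, {1, 3, 2}, {3, -1, 1}};  (* illustrative system, not the book's *)
        b = {3, 12, 2};
        cramerSolve[a, b] == LinearSolve[a, b]    (* -> True, since Det[a] != 0 *)

    For large systems, of course, Gauss elimination (Chapter 3) is far cheaper than evaluating the n + 1 determinants that Cramer's rule requires.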

    Example 1.6.1.

    Use Cramer’s rule to solve

    Solution.

    Even if the determinant had no other role, its utility in mathematics would be assured by Cramer's rule. When D ≠ 0, a unique solution exists for the linear equations in Eq. (1.6.1). We shall see later that, in the theory and applications of linear algebra, the determinant is important in a myriad of circumstances.

    1.7. Minors and Rank of Matrices

    Consider the m × n matrix

    A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}    (1.7.1)

    If m − r rows and n − r columns are struck from A, the remaining elements form an r × r matrix whose determinant is said to be an rth-order minor of A. For example, striking the third row and the second and fourth columns of

    (1.7.2)

    generates the minor

    (1.7.3)

    We can now make the following important definition:

    DEFINITION. The rank r (or rA) of a matrix A is the order of the largest nonzero minor of A.

    For example, for

    (1.7.4)

    the largest nonzero minor is of second order, and so rA = 2. On the other hand, all of the minors of order greater than 1 of

    (1.7.5)

    are 0. Thus, rA = 1 for this 3 × 4 matrix. For an m × n matrix A, it follows from the definition of rank that rA ≤ min(m, n).
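    The definition of rank can be implemented literally, as the largest order r for which some r × r minor is nonzero. In this sketch (ours; the matrix is an illustrative rank-1 example, not the book's), Minors[a, r] returns all r × r minors, and the result is checked against the built-in MatrixRank:

        (* Rank as the order of the largest nonzero minor. *)
        minorRank[a_?MatrixQ] :=
          Max[0, Select[Range[Min[Dimensions[a]]],
            AnyTrue[Flatten[Minors[a, #]], Function[m, m != 0]] &]]

        a = {{1, 2, 3, 4}, {2, 4, 6, 8}, {3, 6, 9, 12}};  (* proportional rows *)
        {minorRank[a], MatrixRank[a]}   (* -> {1, 1} *)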

    Let us end this chapter by mentioning the principal minors and traces of a matrix. They are important in the analysis of the time dependence of systems of equations. The jth-order trace of a matrix A, trj A, is defined as the sum of the jth-order minors generated by striking n − j rows and columns intersecting on the main diagonal of A. These minors are called the principal minors of A. Thus, for a 3 × 3 matrix,

    \operatorname{tr}_1 A = a_{11} + a_{22} + a_{33}    (1.7.6)

    \operatorname{tr}_2 A = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} + \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} + \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix}    (1.7.7)

    \operatorname{tr}_3 A = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} = D_A    (1.7.8)

    For an n × n matrix A, the nth-order trace is just the determinant of A and tr1 A is the sum of the diagonal elements of A. These are the most common traces encountered in practical situations. However, all the traces figure importantly in the theory of eigenvalues of A. In some texts, the quantities trj A are called the invariants of A.
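    The traces can be computed directly from their definition as sums of principal minors by selecting every size-j subset of diagonal indices. A sketch of ours, with an illustrative symmetric tridiagonal matrix (not the book's example):

        (* jth-order trace: sum of principal minors over all size-j
           subsets of the diagonal index set. *)
        trj[a_?MatrixQ, j_Integer] :=
          Total[Det[a[[#, #]]] & /@ Subsets[Range[Length[a]], {j}]]

        a = {{2, -1, 0}, {-1, 2, -1}, {0, -1, 2}};
        {trj[a, 1], trj[a, 2], trj[a, 3]}   (* -> {6, 10, 4} *)
        trj[a, 3] == Det[a]                 (* -> True: tr_n A = D_A *)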

    Exercise 1.7.1.

    Show that all of the traces of the matrix

    (1.7.9)

    are positive. We will show in Chapter 6 that this implies that A has only negative eigenvalues. It also implies that, independently of initial conditions, the solution x to the equation

    \frac{dx}{dt} = Ax    (1.7.10)

    always vanishes with increasing time t.

    Problems

    1. Evaluate the determinant of the matrix given by Eq. (1.7.9) using the formulas

    2. Evaluate the following determinants using elementary operations

    (a) 

    (b) 

    (c) Solve the following set of equations:

    3. Evaluate the determinants

    and

    4. Using Cramer’s rule, find x, y, and z for the following system of equations:

    5. Using Cramer’s rule, find x, y, and z for the following system of equations:

    6. Show that

    7. Use the determinant properties to evaluate

    8. Using Cramer’s rule, find x, y, and z, where

    9. Using Cramer’s rule, find x, y, and z for the system of equations

    10. Using Cramer’s rule, find x, y, and z for the system of equations

    11. Using Cramer’s rule, find x, y, and z for the system of equations

    12. Let

    Show that

    13. What is the rank of

    14. Evaluate the determinant

    by first generating zero entries where you can and then using a cofactor expansion.

    15. Show that

    16. What is the rank of

    17. Give all of the minors of

    18. Give all of the traces of the matrix whose determinant is shown in Problem 15.

    19. Solve the equation

    20. Without expanding the determinant, show that

    21. Consider the set of matrices

    (a) Defining the determinants Dn as

    find a recursion relation for Dn (i.e., Dn = f(Dn–1, Dn–2,…; a, b)).

    (b) Letting a = 0.5 and b = 1, write a computer program to evaluate D99.

    22. Consider the n-dimensional matrix

    For the case where a = 1 and x = 2 cos θ, prove that the determinant is given by D = sin[(n + 1)θ]/sin θ as long as θ is restricted to 0 < θ < π.

    23. Find the determinant of the n × n matrix whose diagonal elements are 0 and whose off-diagonal elements are a, i.e.,

    24. Find the following determinant:

    25. Prove the following relation for the Vandermonde determinant:

    Further reading

    Aitken, A. C. Determinants and Matrices. Edinburgh: Oliver and Boyd; 1948.

    Aitken, A. C. Determinants and Matrices. New York: Interscience; 1964.

    Amundson, N. R. Mathematical Methods in Chemical Engineering. New Jersey: Prentice-Hall; 1964.

    Bronson, R. Linear Algebra: an Introduction. San Diego: Academic Press; 1995.

    Muir, T. A Treatise on the Theory of Determinants. New York: Dover; 1960.

    Muir, T. Contributions to the History of Determinants, 1900–1920. London/Glasgow: Blackie & Son; 1930.

    Nomizu, K. Fundamentals of Linear Algebra. New York: McGraw-Hill; 1966.

    Stigant, S. A. The Elements of Determinants, Matrices and Tensors for Engineers. London: Macdonald; 1959.

    Turnbull, H. W. The Theory of Determinants, Matrices and Invariants. London/Glasgow: Blackie; 1928.

    Vein, R. Determinants and Their Applications in Mathematical Physics. New York: Springer; 1999.

    2

    Vectors And Matrices

    2.1. Synopsis

    In this chapter we will define the properties of matrix addition and multiplication for the general m × n matrix containing m rows and n columns. We will show that vectors form a special class of matrices: a column vector is an m × 1 matrix and a row vector is a 1 × n matrix. Thus, vector addition, scalar or inner products, and vector dyadics are defined by matrix addition and multiplication.

    The inverse A−1 of the square matrix A is the matrix such that AA−1 = A−1A = I, where I is the unit matrix. We will show that when the inverse exists it can be evaluated in terms of the cofactors of A through the adjugate matrix

    \operatorname{adj} A = [A_{ji}]

    Specifically, by using the cofactor expansion theorems of Chapter 1, we will prove that the inverse can be evaluated as

    A^{-1} = \frac{\operatorname{adj} A}{D_A}

    We will also derive relations for evaluating the inverse, transpose, and adjoint of the product of matrices. The inverse of a product of matrices AB can be computed from the product of the inverses in reverse order, (AB)−1 = B−1A−1. Similar expressions hold for the transpose and adjoint of a product. The concept of matrix partitioning and its utility in computing the inverse of a matrix will be discussed.
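    As a preview of the adjugate route, here is a sketch of ours (mirroring the cofactor helper used in the Chapter 1 sketches), checked against the built-in Inverse:

        (* Inverse via the adjugate: A^{-1} = adj A / |A|. *)
        cofactor[a_, i_, j_] := (-1)^(i + j) Det[Drop[a, {i}, {j}]]
        adjugate[a_?MatrixQ] :=
          Transpose[Table[cofactor[a, i, j], {i, Length[a]}, {j, Length[a]}]]

        a = {{1, 2}, {3, 5}};
        adjugate[a]/Det[a] == Inverse[a]   (* -> True when Det[a] != 0 *)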

    Finally, we will introduce linear vector spaces and the important concept of linear independence of vector sets. We will also expand upon the concept of vector norms, which are required in defining normed linear vector spaces. Matrix norms based on the length or norm of a vector are then defined and several very general properties of norms are derived. The utility of matrix norms will be demonstrated in analyzing the solvability of linear equations.

    2.2. Addition And Multiplication

    The rules of matrix addition were given in Eq. (1.2.6). To be conformable for addition (i.e., for addition to be defined), the matrices A and B must be of the same dimension m × n. The elements of A + B are then aij + bij; i.e., corresponding elements are added to make the matrix sum. Using this rule for addition, the product of a matrix A with a scalar (complex or real number) α was defined as

    \alpha A = [\alpha a_{ij}]    (2.2.1)

    Using the properties of addition and scalar multiplication, and the definition of the derivative of A,

    \frac{dA}{dt} = \lim_{\Delta t \to 0} \frac{A(t + \Delta t) - A(t)}{\Delta t}    (2.2.2)

    we find that

    \frac{dA}{dt} = \left[ \lim_{\Delta t \to 0} \frac{a_{ij}(t + \Delta t) - a_{ij}(t)}{\Delta t} \right] = \left[ \frac{da_{ij}}{dt} \right]    (2.2.3)

    We can therefore conclude that the derivative of a matrix dA/dt is a matrix whose elements are the derivatives of the elements of A, i.e.,

    \frac{dA}{dt} = \left[ \frac{da_{ij}}{dt} \right]    (2.2.4)

    Note that |dA/dt| ≠ d|A|/dt. The determinant of dA/dt is a nonlinear function of the derivatives of aij, whereas the derivative of the determinant |A| is linear.
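    In Mathematica, D threads elementwise over a matrix, so both statements are easy to see (our sketch, with an illustrative symbolic matrix):

        a = {{t, t^2}, {Sin[t], Exp[t]}};
        D[a, t]   (* -> {{1, 2 t}, {Cos[t], E^t}}, the elementwise derivative *)
        Simplify[Det[D[a, t]] - D[Det[a], t]]   (* nonzero: |dA/dt| != d|A|/dt *)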

    If A and B are conformable for matrix multiplication, i.e., if A is an m × n matrix and B is an n × p matrix, then the product

    C = AB    (2.2.5)

    is defined. C is then an m × p matrix with elements

    c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad i = 1, \ldots, m, \quad j = 1, \ldots, p    (2.2.6)

    Thus, the ijth element of C is the product of the ith row of A and the jth column of B, and so A and B are conformable for the product AB if the number of columns of A equals the number of rows of B. For example, if

    then

    (2.2.7)

    whereas BA is not defined.
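    The row-into-column rule of Eq. (2.2.6) is what the built-in Dot (.) computes. A sketch of ours with illustrative matrices (not the book's example):

        a = {{1, 0, 2}, {-1, 3, 1}};   (* 2 x 3 *)
        b = {{3, 1}, {2, 1}, {1, 0}};  (* 3 x 2 *)
        c = a.b                        (* the 2 x 2 product AB *)
        c == Table[Sum[a[[i, k]] b[[k, j]], {k, 3}], {i, 2}, {j, 2}]  (* -> True *)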

    Exercise 2.2.1.

    Solve the linear system of equations

    Ax = b    (2.2.8)

    where

    (2.2.9)

    and

    (2.2.10)

    If x and y are n-dimensional vectors, then the product xTy is defined since the transpose xT of x is a 1 × n matrix and y is an n × 1 matrix. The product is a 1 × 1 matrix (a scalar) given by

    x^T y = \sum_{i=1}^{n} x_i y_i    (2.2.11)

    xTy is sometimes called the scalar or inner product of x and y. The scalar product is only defined if x and y have the same dimension. If the vector x is real, then the scalar product xTx is positive as long as x is not 0. We define the length ||x|| of x as

    \|x\| = (x^T x)^{1/2} = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}    (2.2.12)

    If x is complex, however, the quantity xTx is not necessarily positive, or even real. In this case, we define the inner product by

    x^{\dagger} y = \sum_{i=1}^{n} x_i^* y_i    (2.2.13)

    and the length of x by

    \|x\| = (x^{\dagger} x)^{1/2} = \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}    (2.2.14)

    where x_i^* x_i = |x_i|^2 is the square of the modulus of xi. Here, again, x† denotes the adjoint of x, namely, the complex conjugate of the transpose of x,

    x^{\dagger} = (x_1^*, x_2^*, \ldots, x_n^*)    (2.2.15)

    where xi* is the complex conjugate of xi. The length ||x|| has the desired property that it is 0 if and only if every component of x is 0 (i.e., if x = 0).
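    These definitions are what Mathematica's Conjugate and Norm implement; a short sketch of ours for an illustrative complex vector:

        x = {1 + I, 2, -I};
        Conjugate[x].x          (* x†x = |1+I|^2 + |2|^2 + |-I|^2 = 7 *)
        Sqrt[Conjugate[x].x] == Norm[x]   (* -> True: the length ||x|| *)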
