2.2 ADDITION, SUBTRACTION, AND MULTIPLICATION OF A MATRIX BY A SCALAR
MATRIX ALGEBRA
2.2.1 Multiplication of Matrices
There is a restriction as to when two matrices can be multiplied. Consider the product
AB. To multiply these matrices, the number of columns in A must equal the number
of rows in B. For example, if A is 2 × 3, then B must have 3 rows, although B could
have any number of columns. If two matrices can be multiplied they are said to be
conformable. The dimensions of the product matrix, call it C, are simply the number
of rows of A by the number of columns of B. In the earlier example, if B were 3 × 4,
then C would be a 2 × 4 matrix. In general then, if A is an r × s matrix and B is an s × t
matrix, then the dimensions of the product AB are r × t.
Example 2.5
        A              B             C
    | 2  1  3 |    |  1  0 |    | c11  c12 |
    | 4  5  6 |    |  2  4 |  = | c21  c22 |
      2 × 3        | −1  5 |      2 × 2
                     3 × 2
Note first that A and B can be multiplied because the number of columns in A is 3,
which is equal to the number of rows in B. The product matrix C is a 2 × 2, that is,
the outer dimensions of A and B. To obtain the element c11 (in the first row and first
column), we multiply corresponding elements of the first row of A by the elements of
the first column of B. Then, we simply sum the products. To obtain c12 we take the sum
of products of the corresponding elements of the first row of A by the second column
of B. This procedure is presented next for all four elements of C:
    c11 = (2, 1, 3)(1, 2, −1)′ = 2(1) + 1(2) + 3(−1) = 1
    c12 = (2, 1, 3)(0, 4, 5)′ = 2(0) + 1(4) + 3(5) = 19
    c21 = (4, 5, 6)(1, 2, −1)′ = 4(1) + 5(2) + 6(−1) = 8
    c22 = (4, 5, 6)(0, 4, 5)′ = 4(0) + 5(4) + 6(5) = 50
Chapter 2
Therefore, the product matrix C is:

    C = | 1  19 |
        | 8  50 |
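The row-by-column procedure just illustrated translates directly into code. The following Python sketch (the helper name matmul is ours, not from the text) reproduces Example 2.5:

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows.

    The number of columns of A must equal the number of rows of B
    (the conformability condition discussed in the text).
    """
    assert len(A[0]) == len(B), "matrices are not conformable"
    # c[i][j] is the sum of products of row i of A with column j of B;
    # zip(*B) yields the columns of B.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1, 3],
     [4, 5, 6]]        # 2 x 3
B = [[1, 0],
     [2, 4],
     [-1, 5]]          # 3 x 2
C = matmul(A, B)       # 2 x 2, the outer dimensions of A and B
print(C)               # [[1, 19], [8, 50]]
```

The printed matrix matches the product C found by hand above.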
We now multiply two more matrices to illustrate an important property concerning
matrix multiplication.
Example 2.6
    A = | 2  1 |      B = | 3  5 |
        | 1  4 |          | 5  6 |

    AB = | 2 ⋅ 3 + 1 ⋅ 5   2 ⋅ 5 + 1 ⋅ 6 |   | 11  16 |
         | 1 ⋅ 3 + 4 ⋅ 5   1 ⋅ 5 + 4 ⋅ 6 | = | 23  29 |

    BA = | 3 ⋅ 2 + 5 ⋅ 1   3 ⋅ 1 + 5 ⋅ 4 |   | 11  23 |
         | 5 ⋅ 2 + 6 ⋅ 1   5 ⋅ 1 + 6 ⋅ 4 | = | 16  29 |
Notice that AB ≠ BA; that is, the order in which matrices are multiplied makes a difference. The mathematical statement of this is to say that multiplication of matrices
is not commutative. Multiplying matrices in two different orders (assuming they are
conformable both ways) in general yields different results.
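Example 2.6 can be checked in a few lines of Python (the matmul helper is our own):

```python
def matmul(A, B):
    # rows of the product are row-of-A times column-of-B sums of products
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1], [1, 4]]
B = [[3, 5], [5, 6]]
AB = matmul(A, B)
BA = matmul(B, A)
print(AB)        # [[11, 16], [23, 29]]
print(BA)        # [[11, 23], [16, 29]]
print(AB != BA)  # True: matrix multiplication is not commutative
```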
Example 2.7
        A          x         Ax
    | 3  1  2 |   | 2 |    | 18 |
    | 1  4  5 |   | 6 |  = | 41 |
    | 2  5  2 |   | 3 |    | 40 |
     (3 × 3)    (3 × 1)   (3 × 1)
Note that multiplying a matrix on the right by a column vector takes the matrix into a
column vector.
    (2, 5) | 3  1 | = (11, 22)
           | 1  4 |
Multiplying a matrix on the left by a row vector results in a row vector. If we are
multiplying more than two matrices, then we may group at will. The mathematical
statement of this is that multiplication of matrices is associative. Thus, if we are considering the matrix product ABC, we get the same result if we multiply A and B first
(and then the result of that by C) as if we multiply B and C first (and then the result of
that by A), that is,

    ABC = (AB)C = A(BC)
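The associative property is easy to verify numerically. Here we reuse the A and B of Example 2.6 together with a third matrix C chosen for illustration (C is our own, not from the text):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[2, 1], [1, 4]]
B = [[3, 5], [5, 6]]
C = [[1, 0], [2, 1]]                 # an arbitrary conformable matrix
left = matmul(matmul(A, B), C)       # (AB)C
right = matmul(A, matmul(B, C))      # A(BC)
print(left == right)                 # True: multiplication is associative
```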
A matrix product that is of particular interest to us in Chapter 4 is of the following form:
       x′       S       x
    (1 × p) (p × p) (p × 1)
Note that this product yields a number, i.e., the product matrix is 1 × 1 or a number.
The multivariate test statistic for two groups, Hotelling’s T², is of this form (except for
a scalar constant in front). Other multivariate statistics, for example, that are computed
in a similar way are the Mahalanobis distance (section 3.14.6) and the multivariate
effect size measure D² (section 4.11).
Example 2.8
    x′ S x = (x′S) x

    (4, 2) | 10  3 | | 4 | = (46, 20) | 4 | = 184 + 40 = 224
           |  3  4 | | 2 |           | 2 |
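The quadratic form x′Sx of Example 2.8 can be computed step by step in Python (a small sketch; the variable names are ours):

```python
x = [4, 2]
S = [[10, 3], [3, 4]]

# x'S: a 1 x 2 row vector
xS = [sum(xi * S[i][j] for i, xi in enumerate(x)) for j in range(len(S[0]))]
# (x'S)x: the final product is 1 x 1, i.e., a single number
result = sum(a * b for a, b in zip(xS, x))
print(xS)      # [46, 20]
print(result)  # 224
```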
2.3 OBTAINING THE MATRIX OF VARIANCES AND COVARIANCES
Now, we show how various matrix operations introduced thus far can be used to obtain
two very important matrices in multivariate statistics, that is, the sums of squares and
cross products (SSCP) matrix (which is computed as part of the Wilks’ lambda test)
and the matrix of variances and covariances for a set of variables (which is computed
as part of Hotelling’s T² test). Consider the following set of data:
    x1   x2
     1    1
     3    4
     2    7

    x̄1 = 2    x̄2 = 4
First, we form the matrix Xd of deviation scores, that is, how much each score deviates
from the mean on that variable:
    Xd = | 1  1 |   | 2  4 |   | −1  −3 |
         | 3  4 | − | 2  4 | = |  1   0 |
         | 2  7 |   | 2  4 |   |  0   3 |
Next we take the transpose of Xd:
    X′d = | −1  1  0 |
          | −3  0  3 |
Now we obtain the matrix of sums of squares and cross products (SSCP) as the product of X′d and Xd:
    SSCP = | −1  1  0 | | −1  −3 |   | ss1   ss12 |
           | −3  0  3 | |  1   0 | = | ss21  ss2  |
                        |  0   3 |
The diagonal elements are just sums of squares:
ss1 = (−1)² + 1² + 0² = 2
ss2 = (−3)² + 0² + 3² = 18
Notice that these deviation sums of squares are the numerators of the variances for the
variables, because the variance for a variableÂ€is
    s² = Σ(xi − x̄)² / (n − 1).
The sum of deviation cross products (ss12) for the two variables is

    ss12 = ss21 = (−1)(−3) + 1(0) + (0)(3) = 3.
This is just the numerator for the covariance for the two variables, because the definitional formula for covariance is givenÂ€by:
    s12 = Σᵢ₌₁ⁿ (xi1 − x̄1)(xi2 − x̄2) / (n − 1),

where (xi1 − x̄1) is the deviation score for the ith case on x1 and (xi2 − x̄2) is the deviation score for the ith case on x2.
Finally, the matrix of variances and covariances S is obtained from the SSCP matrix
by multiplying by a constant, namely, 1/(n − 1):
    S = SSCP / (n − 1)

    S = 1/2 | 2   3 |   | 1    1.5 |
            | 3  18 | = | 1.5  9   |
where 1 and 9 are the variances for variables 1 and 2, respectively, and 1.5 is the
covariance.
Thus, in obtaining S we have done the following:
1. Represented the scores on several variables as a matrix.
2. Illustrated subtraction of matrices—to get Xd.
3. Illustrated the transpose of a matrix—to get X′d.
4. Illustrated multiplication of matrices, that is, X′d Xd, to get SSCP.
5. Illustrated multiplication of a matrix by a scalar, that is, by 1/(n − 1), to obtain S.
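The five steps just listed can be carried out in a short Python sketch (variable names are ours) using the same three-case data set:

```python
data = [[1, 1], [3, 4], [2, 7]]   # rows are cases, columns are x1 and x2
n = len(data)

# Step 1: the scores are already a matrix; compute the column means
means = [sum(row[j] for row in data) / n for j in range(2)]   # [2.0, 4.0]

# Step 2: subtraction of matrices gives the deviation scores Xd
Xd = [[row[j] - means[j] for j in range(2)] for row in data]

# Steps 3 and 4: SSCP = Xd' Xd (transpose times original)
SSCP = [[sum(Xd[i][j] * Xd[i][k] for i in range(n)) for k in range(2)]
        for j in range(2)]

# Step 5: multiply by the scalar 1/(n - 1) to obtain S
S = [[e / (n - 1) for e in row] for row in SSCP]
print(SSCP)  # [[2.0, 3.0], [3.0, 18.0]]
print(S)     # [[1.0, 1.5], [1.5, 9.0]]
```

The diagonal of S holds the variances (1 and 9) and the off-diagonal the covariance (1.5), matching the hand calculation.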
2.4 DETERMINANT OF A MATRIX
The determinant of a matrix A, denoted by |A|, is a unique number associated with each
square matrix. There are two interrelated reasons that consideration of determinants is
quite important for multivariate statistical analysis. First, the determinant of a covariance matrix represents the generalized variance for several variables. That is, it is one
way to characterize in a single number how much variability remains for the set of
variables after removing the shared variance among the variables. Second, because the
determinant is a measure of variance for a set of variables, it is intimately involved in
several multivariate test statistics. For example, in Chapter 3 on regression analysis,
we use a test statistic called Wilks’ Λ that involves a ratio of two determinants. Also,
in k group multivariate analysis of variance (Chapter 5) the following form of Wilks’
Λ (Λ = |W| / |T|) is the most widely used test statistic for determining whether several
groups differ on a set of variables. The W and T matrices are SSCP matrices, which are
multivariate generalizations of SSw (sum of squares within) and SSt (sum of squares total)
from univariate ANOVA, and are defined and described in detail in Chapters 4 and 5.
There is a formal definition for finding the determinant of a matrix, but it is complicated, and we do not present it. There are other ways of finding the determinant, and
a convenient method for smaller matrices (4 × 4 or less) is the method of cofactors.
For a 2 × 2 matrix, the determinant could be evaluated by the method of cofactors;
however, it is evaluated more quickly as simply the difference in the products of the
diagonal elements.
Example 2.9
    A = | 4  1 |        |A| = 4 ⋅ 2 − 1 ⋅ 1 = 7
        | 1  2 |

In general, for a 2 × 2 matrix A = | a  b |, |A| = ad − bc.
                                   | c  d |
To evaluate the determinant of a 3 × 3 matrix we need the method of cofactors and the
following definition.
Definition: The minor of an element aij is the determinant of the matrix formed by
deleting the ith row and the jth column.
Example 2.10
Consider the following matrix:
         a12  a13
          ↓    ↓
    A = | 1   2   3 |
        | 2   2   1 |
        | 3   1   4 |
The minor of a12 (with this element equal to 2 in the matrix) is the determinant of the
matrix

    | 2  1 |
    | 3  4 |

obtained by deleting the first row and the second column. Therefore, the minor of a12 is

    | 2  1 | = 8 − 3 = 5.
    | 3  4 |

The minor of a13 (with this element equal to 3) is the determinant of the matrix

    | 2  2 |
    | 3  1 |

obtained by deleting the first row and the third column. Thus, the minor of a13 is

    | 2  2 | = 2 − 6 = −4.
    | 3  1 |
Definition: The cofactor of aij = (−1)^(i+j) × minor.
Thus, the cofactor of an element will differ at most from its minor by sign. We now
evaluate (−1)^(i+j) for the first three elements of the A matrix given:
    a11: (−1)^(1+1) = 1
    a12: (−1)^(1+2) = −1
    a13: (−1)^(1+3) = 1
Notice that the signs for the elements in the first row alternate, and this pattern continues for all the elements in a 3 × 3 matrix. Thus, when evaluating the determinant for a
3 × 3 matrix it will be convenient to write down the pattern of signs and use it, rather
than figuring out what (−1)^(i+j) is for each element. That pattern of signs is:
+ − +
− + −
+ − +
We denote the matrix of cofactors C as follows:
    C = | c11  c12  c13 |
        | c21  c22  c23 |
        | c31  c32  c33 |
Now, the determinant is obtained by expanding along any row or column of the matrix
of cofactors. Thus, for example, the determinant of A would be givenÂ€by
    |A| = a11c11 + a12c12 + a13c13    (expanding along the first row)
orÂ€by
    |A| = a12c12 + a22c22 + a32c32    (expanding along the second column)
We now find the determinant of A by expanding along the first row:
    Element     Minor                 Cofactor    Element × cofactor

    a11 = 1     | 2  1 | = 7            7            7
                | 1  4 |

    a12 = 2     | 2  1 | = 5           −5          −10
                | 3  4 |

    a13 = 3     | 2  2 | = −4          −4          −12
                | 3  1 |
Therefore, |A| = 7 + (−10) + (−12) = −15.
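The cofactor expansion just carried out by hand can be sketched in code. This minimal Python version (the function names det2, minor, and det3 are ours) expands along the first row, exactly as in the table above:

```python
def det2(m):
    # determinant of a 2 x 2 matrix: ad - bc
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def minor(A, i, j):
    # the matrix formed by deleting the ith row and the jth column
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det3(A):
    # expand along the first row: sum of a1j times its cofactor
    return sum(A[0][j] * (-1) ** j * det2(minor(A, 0, j)) for j in range(3))

A = [[1, 2, 3],
     [2, 2, 1],
     [3, 1, 4]]
print(det3(A))  # -15, as found by hand above
```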
For a 4 × 4 matrix the pattern of signs is given by:
+ − + −
− + − +
+ − + −
− + − +
and the determinant is again evaluated by expanding along any row or column. However, in this case the minors are determinants of 3 × 3 matrices, and the procedure
becomes quite tedious. Thus, we do not pursue it any further here.
In the example in section 2.3, we obtained the following covariance matrix:
    S = | 1.0  1.5 |
        | 1.5  9.0 |
We also indicated at the beginning of this section that the determinant of S can be
interpreted as the generalized variance for a set of variables.
Now, the generalized variance for the two-variable example is just |S| = (1 × 9) −
(1.5 × 1.5) = 6.75. Because for this example there is a nonzero covariance, the generalized variance is reduced by this. That is, some of the variance of variable 2 is shared
by variable 1. On the other hand, if the variables were uncorrelated (covariance = 0),
then we would expect the generalized variance to be larger (because there is no shared
variance between variables), and this is indeed the case:
    |S| = | 1  0 | = 9
          | 0  9 |
Thus, in representing the variance for a set of variables this measure takes into account
all the variances and covariances.
In addition, the meaning of the generalized variance is easy to see when we consider
the determinant of a 2 × 2 correlation matrix. Given the following correlation matrix
    R = | 1    r12 |
        | r21   1  |,

the determinant of R is |R| = 1 − r². Of course, since we know that r² can be interpreted as the proportion of variation shared, or in common, between variables, the
determinant of this matrix represents the variation remaining in this pair of variables
after removing the shared variation among the variables. This concept also applies to
larger matrices where the generalized variance represents the variation remaining in
the set of variables after we account for the associations among the variables. While
there are other ways to describe the variance of a set of variables, this conceptualization appears in the commonly used Wilks’ Λ test statistic.
2.5 INVERSE OF A MATRIX
The inverse of a square matrix A is a matrix A−1 that satisfies the following equation:
AA−1 = A−1A = In,
where In is the identity matrix of order n. The identity matrix is simply a matrix with
1s on the main diagonal and 0s elsewhere.
    I2 = | 1  0 |      I3 = | 1  0  0 |
         | 0  1 |           | 0  1  0 |
                            | 0  0  1 |
Why is finding inverses important in statistical work? Because we do not literally have
division with matrices, multiplying one matrix by the inverse of another is the analogue of division for numbers. This is why finding an inverse is so important. An analogy with univariate ANOVA may be helpful here. In univariate ANOVA, recall that
the test statistic F = MSb / MSw = MSb(MSw)−1, that is, a ratio of between to within
variability. The analogue of this test statistic in multivariate analysis of variance is
BW−1, where B is a matrix that is the multivariate generalization of SSb (sum of squares
between); that is, it is a measure of how differential the effects of treatments have been
on the set of dependent variables. In the multivariate case, we also want to “divide” the
between-variability by the within-variability, but we don’t have division per se. However, multiplying the B matrix by W−1 accomplishes this for us, because, again, multiplying a matrix by an inverse of a matrix is the analogue of division. Also, as shown in
the next chapter, to obtain the regression coefficients for a multiple regression analysis,
it is necessary to find the inverse of a matrix product involving the predictors.
2.5.1 Procedure for Finding the Inverse of a Matrix
1. Replace each element of the matrix A by its minor.
2. Form the matrix of cofactors, attaching the appropriate signs as illustrated later.
3. Take the transpose of the matrix of cofactors, forming what is called the adjoint.
4. Divide each element of the adjoint by the determinant of A.
For symmetric matrices (with which this text deals almost exclusively), taking the
transpose is not necessary, and hence, when finding the inverse of a symmetric matrix,
Step 3 is omitted.
We apply this procedure first to the simplest case, finding the inverse of a 2 × 2 matrix.
Example 2.11
    D = | 4  2 |
        | 2  6 |
The minor of 4 is the determinant of the matrix obtained by deleting the first row and
the first column. What is left is simply the number 6, and the determinant of a number
is that number. Thus we obtain the following matrix of minors:
    | 6  2 |
    | 2  4 |
Now for a 2 × 2 matrix we attach the proper signs by multiplying each diagonal element
by 1 and each off-diagonal element by −1, yielding the matrix of cofactors, whichÂ€is
    |  6  −2 |
    | −2   4 |.
The determinant of D is |D| = 4(6) − 2(2) = 20.
Finally then, the inverse of D is obtained by dividing the matrix of cofactors by the
determinant, obtaining
    D−1 = |  6/20  −2/20 |
          | −2/20   4/20 |
To check that D−1 is indeed the inverse of D, note that
    D D−1 = | 4  2 | |  6/20  −2/20 | = | 1  0 | = I2
            | 2  6 | | −2/20   4/20 |   | 0  1 |

    D−1 D = |  6/20  −2/20 | | 4  2 | = | 1  0 | = I2
            | −2/20   4/20 | | 2  6 |   | 0  1 |
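The same 2 × 2 inverse can be computed exactly in Python with the standard library’s Fraction type (a sketch; the variable names are ours). The adjoint of a 2 × 2 matrix is formed by swapping the diagonal elements and negating the off-diagonal ones:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

D = [[4, 2], [2, 6]]
det = D[0][0] * D[1][1] - D[0][1] * D[1][0]           # 20
# adjoint of a 2 x 2: swap the diagonal, negate the off-diagonal
adj = [[D[1][1], -D[0][1]], [-D[1][0], D[0][0]]]
D_inv = [[Fraction(e, det) for e in row] for row in adj]   # 6/20, -2/20, etc.
print(matmul(D, D_inv) == [[1, 0], [0, 1]])           # True
```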
Example 2.12
Let us find the inverse for the 3 × 3 A matrix that we found the determinant for in the
previous section. Because A is a symmetric matrix, it is not necessary to find nine
minors, but only six, since the inverse of a symmetric matrix is symmetric. Thus we
just find the minors for the elements on and above the main diagonal.
    A = | 1  2  3 |    Recall again that the minor of an element is the
        | 2  2  1 |    determinant of the matrix obtained by deleting the
        | 3  1  4 |    row and column that the element is in.
    Element     Matrix      Minor

    a11 = 1     | 2  1 |    2 × 4 − 1 × 1 = 7
                | 1  4 |

    a12 = 2     | 2  1 |    2 × 4 − 1 × 3 = 5
                | 3  4 |

    a13 = 3     | 2  2 |    2 × 1 − 2 × 3 = −4
                | 3  1 |

    a22 = 2     | 1  3 |    1 × 4 − 3 × 3 = −5
                | 3  4 |

    a23 = 1     | 1  2 |    1 × 1 − 2 × 3 = −5
                | 3  1 |

    a33 = 4     | 1  2 |    1 × 2 − 2 × 2 = −2
                | 2  2 |
Therefore, the matrix of minors for A is

    |  7   5  −4 |
    |  5  −5  −5 |.
    | −4  −5  −2 |
Recall that the pattern of signs is
+ − +
− + − .
+ − +
Thus, attaching the appropriate sign to each element in the matrix of minors and completing Step 2 of finding the inverse we obtain:

    |  7  −5  −4 |
    | −5  −5   5 |.
    | −4   5  −2 |
Now the determinant of A was found to be −15. Therefore, to complete the final step
in finding the inverse we simply divide the preceding matrix by −15, and the inverse
of A is
    A−1 = | −7/15   1/3    4/15 |
          |  1/3    1/3   −1/3  |.
          |  4/15  −1/3    2/15 |
Again, we can check that this is indeed the inverse by multiplying it by A to see if the
result is the identity matrix.
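That check, and the whole cofactor procedure, can be sketched in Python with exact fractions (the function names are ours, not from the text):

```python
from fractions import Fraction

def minor(A, i, j):
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inverse3(A):
    # determinant by cofactor expansion along the first row
    det = sum(A[0][j] * (-1) ** j * det2(minor(A, 0, j)) for j in range(3))
    # matrix of cofactors (Steps 1 and 2)
    cof = [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)]
           for i in range(3)]
    # transpose (Step 3) and divide by the determinant (Step 4)
    return [[Fraction(cof[j][i], det) for j in range(3)] for i in range(3)]

A = [[1, 2, 3], [2, 2, 1], [3, 1, 4]]
A_inv = inverse3(A)                      # first element is -7/15, as above
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul(A, A_inv) == I3)            # True
```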
Note that for the inverse of a matrix to exist, the determinant of the matrix must not
be equal to 0. This is because in obtaining the inverse each element is divided by the
determinant, and division by 0 is not defined. If the determinant of a matrix B = 0, we
say B is singular. If |B| ≠ 0, we say B is nonsingular, and its inverse does exist.
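A quick illustration of singularity (the matrices here are our own examples): when one row is a multiple of another, the determinant is 0 and no inverse exists.

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

B = [[2, 4], [1, 2]]   # second row is half the first row
print(det2(B))         # 0 -> B is singular; it has no inverse
C = [[2, 4], [1, 3]]
print(det2(C))         # 2 -> C is nonsingular; its inverse exists
```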
2.6 SPSS MATRIX PROCEDURE
The SPSS matrix procedure was developed at the University of Wisconsin at Madison.
It is described in some detail in SPSS Advanced Statistics 7.5. Various matrix operations can be performed using the procedure, including multiplying matrices, finding
the determinant of a matrix, finding the inverse of a matrix, and so on. To indicate a
matrix you must: (1) enclose the matrix in braces, (2) separate the elements of each
row by commas, and (3) separate the rows by semicolons.
The matrix procedure must be run from the syntax window. To get to the syntax window, click on FILE, then click on NEW, and finally click on SYNTAX. Every matrix
program must begin with MATRIX. and end with END MATRIX. The periods are crucial, as each command must end with a period. To create a matrix A, use the following
COMPUTE A = {2, 4, 1; 3, −2, 5}.