
# K-Group MANOVA: A Priori and Post Hoc Procedures


5.2 MULTIVARIATE REGRESSION ANALYSIS FOR A SAMPLE PROBLEM
In the previous chapter we indicated how analysis of variance can be incorporated
within the regression model by dummy-coding group membership and using it as a
nominal predictor. For the two-group case, just one dummy variable (predictor) was
needed, which took on a value of 1 for participants in group 1 and 0 for the participants in the other group. For our three-group example, we need two dummy variables
(predictors) to identify group membership. The first dummy variable (x1) is 1 for all
subjects in Group 1 and 0 for all other subjects. The other dummy variable (x2) is 1
for all subjects in Group 2 and 0 for all other subjects. A third dummy variable is not
needed because the participants in Group 3 are identified by 0’s on x1 and x2, that is, not
in Group 1 or Group 2. Therefore, by default, those participants must be in Group 3. In
general, for k groups, the number of dummy variables needed is (k − 1), corresponding
to the between degrees of freedom.
The data for our two-dependent-variable, three-group problem are presented here:
y1    y2    x1    x2
 2     3     1     0
 3     4     1     0    Group 1
 5     4     1     0
 2     5     1     0

 4     8     0     1
 5     6     0     1    Group 2
 6     7     0     1

 7     6     0     0
 8     7     0     0
10     8     0     0    Group 3
 9     5     0     0
 7     6     0     0
Thus, cast in a regression mold, we are relating two sets of variables: the two dependent variables and the two predictors (dummy variables). The regression analysis will
then determine how much of the variance on the dependent variables is accounted for
by the predictors, that is, by group membership.
In Table 5.1 we present the control lines for running the sample problem as a multivariate regression on SPSS MANOVA, and the lines for running the problem as a
traditional MANOVA (using GLM). By running both analyses, you can verify that
the multivariate Fs for the regression analysis are identical to those obtained from the
MANOVA run.

Chapter 5


Table 5.1: SPSS Syntax for Running Sample Problem as Multivariate Regression and as MANOVA

(1)

TITLE ‘THREE GROUP MANOVA RUN AS MULTIVARIATE REGRESSION’.
DATA LIST FREE/x1 x2 y1 y2.
BEGIN DATA.
1 0 2 3
1 0 3 4
1 0 5 4
1 0 2 5
0 1 4 8
0 1 5 6
0 1 6 7
0 0 7 6
0 0 8 7
0 0 10 8
0 0 9 5
0 0 7 6
END DATA.
LIST.
MANOVA y1 y2 WITH x1 x2.
(2)
TITLE ‘MANOVA RUN ON SAMPLE PROBLEM’.
DATA LIST FREE/gps y1 y2.
BEGIN DATA.
1 2 3
1 3 4
1 5 4
1 2 5
2 4 8
2 5 6
2 6 7
3 7 6
3 8 7
3 10 8
3 9 5
3 7 6
END DATA.
LIST.
GLM y1 y2 BY gps
/PRINT=DESCRIPTIVE
/DESIGN= gps.

(1) The first two columns of data are for the dummy variables x1 and x2, which identify group membership (cf.
the data display in section 5.2).
(2) The first column of data identifies group membership—again compare the data display in section 5.2.

5.3 TRADITIONAL MULTIVARIATE ANALYSIS OF VARIANCE
In the k-group MANOVA case we are comparing the groups on p dependent variables
simultaneously. For the univariate case, the null hypothesis is:
H0: µ1 = µ2 = · · · = µk (population means are equal)
whereas for MANOVA the null hypothesis is
H0: µ1 = µ2 = · · · = µk (population mean vectors are equal)
For univariate analysis of variance the F statistic (F = MSb / MSw) is used for testing the
tenability of H0. What statistic do we use for testing the multivariate null hypothesis?
There is no single answer, as several test statistics are available. The one that is most
widely known is Wilks’ Λ, where Λ is given by:
Λ = |W| / |T| = |W| / |B + W|, where 0 ≤ Λ ≤ 1


|W| and |T| are the determinants of the within-group and total sum of squares and
cross-products matrices. W has already been defined for the two-group case, where
the observations in each group are deviated about the individual group means. Thus
W is a measure of within-group variability and is a multivariate generalization of the
univariate sum of squares within (SSw). In T the observations in each group are deviated about the grand mean for each variable. B is the between-group sum of squares
and cross-products matrix, and is the multivariate generalization of the univariate sum
of squares between (SSb). Thus, B is a measure of how differential the effect of treatments has been on a set of dependent variables. We define the elements of B shortly.
We need matrices to define within, between, and total variability in the multivariate
case because there is variability on each variable (these variabilities will appear on the
main diagonals of the W, B, and T matrices) as well as covariability for each pair of
variables (these will be the off diagonal elements of the matrices).
Because Wilks’ Λ is defined in terms of the determinants of W and T, it is important to
recall from the matrix algebra chapter (Chapter 2) that the determinant of a covariance
matrix is called the generalized variance for a set of variables. Now, because W and T
differ from their corresponding covariance matrices only by a scalar, we can think of
|W| and |T| in the same basic way. Thus, the determinant neatly characterizes within
and total variability in terms of single numbers. It may also be helpful for you to recall
that the generalized variance may be thought of as the variation in a set of outcomes
that is unique to the set, that is, the variance that is not shared by the variables in the
set. Also, for one variable, variance indicates how much scatter there is about the mean
on a line, that is, in one dimension. For two variables, the scores for each participant on
the variables define a point in the plane, and thus generalized variance indicates how
much the points (participants) scatter in the plane in two dimensions. For three variables, the scores for the participants define points in three-dimensional space, and hence
generalized variance shows how much the subjects scatter (vary) in three dimensions.
An excellent extended discussion of generalized variance for the more mathematically
inclined is provided in Johnson and Wichern (1982, pp. 103–112).
For univariate ANOVA you may recall that
SSt = SSb + SSw,
where SSt is the total sum of squares.
For MANOVA the corresponding matrix analogue holds:
T = B + W
(Total SSCP matrix = Between SSCP matrix + Within SSCP matrix)
Notice that Wilks’ Λ is an inverse criterion: the smaller the value of Λ, the more evidence for treatment effects (between-group association). If there were no treatment

effect, then B = 0 and

Λ = |W| / |0 + W| = 1,

whereas if B were very large relative to W, then Λ would approach 0.
The sampling distribution of Λ is somewhat complicated, and generally an approximation is necessary. Two approximations are available: (1) Bartlett’s χ2 and (2) Rao’s F.
Bartlett’s χ2 is given by:
χ² = −[(N − 1) − .5(p + k)] ln Λ, with p(k − 1) df,
where N is total sample size, p is the number of dependent variables, and k is the number of groups. Bartlett’s χ2 is a good approximation for moderate to large sample sizes.
For smaller sample size, Rao’s F is a better approximation (Lohnes, 1961), although
generally the two statistics will lead to the same decision on H0. The multivariate F
given on SPSS is the Rao F. The formula for Rao’s F is complicated and is presented
later. We point out now, however, that the degrees of freedom for error with Rao’s F
can be noninteger, so that you should not be alarmed if this happens on the computer
printout.
As alluded to earlier, there are certain values of p and k for which a function of Λ is
exactly distributed as an F ratio (for example, k = 2 or 3 and any p; see Tatsuoka, 1971,
p. 89).
5.4 MULTIVARIATE ANALYSIS OF VARIANCE FOR SAMPLE DATA
We now consider the MANOVA of the data given earlier. For convenience, we present
the data again here, with the means for the participants on the two dependent variables
in each group:

G1                    G2                    G3
y1    y2              y1    y2              y1    y2
 2     3               4     8               7     6
 3     4               5     6               8     7
 5     4               6     7              10     8
 2     5                                     9     5
                                             7     6

ȳ11 = 3   ȳ21 = 4     ȳ12 = 5   ȳ22 = 7     ȳ13 = 8.2   ȳ23 = 6.4

We wish to test the multivariate null hypothesis with the χ² approximation for Wilks’
Λ. Recall that Λ = |W| / |T|, so that W and T are needed. W is the pooled estimate of
within variability on the set of variables, that is, our multivariate error term.


5.4.1 Calculation of W
Calculation of W proceeds in exactly the same way as we obtained W for Hotelling’s
T² in the two-group MANOVA case in Chapter 4. That is, we determine how much the
participants’ scores vary on the dependent variables within each group, and then pool:
W = W1 + W2 + W3,
where W1, W2, and W3 are the within sums of squares and cross-products matrices
for Groups 1, 2, and 3. As in Chapter 4, we denote the elements of W1 by ss1 and ss2
(measuring the variability on the variables within Group 1) and ss12 (measuring the
covariability of the variables in Group 1).
W1 = [ ss1    ss12 ]
     [ ss21   ss2  ]

Then, for Group 1, we have

ss1 = Σ_{j=1}^{4} (y1(j) − ȳ11)²
    = (2 − 3)² + (3 − 3)² + (5 − 3)² + (2 − 3)² = 6

ss2 = Σ_{j=1}^{4} (y2(j) − ȳ21)²
    = (3 − 4)² + (4 − 4)² + (4 − 4)² + (5 − 4)² = 2

ss12 = ss21 = Σ_{j=1}^{4} (y1(j) − ȳ11)(y2(j) − ȳ21)
    = (2 − 3)(3 − 4) + (3 − 3)(4 − 4) + (5 − 3)(4 − 4) + (2 − 3)(5 − 4) = 0
Thus, the matrix that measures within variability on the two variables in Group 1 is
given by:
6 0 
W1 = 

0 2
In exactly the same way the within SSCP matrices for groups 2 and 3 can be shown
to be:
 2 −1
6.8 2.6 
W2 = 
W3 = 

 −1 2 
 2.6 5.2 


Therefore, the pooled estimate of within variability on the set of variables is given by:
14.8 1.6 
W = W1 + W2 + W3 = 

 1.6 9.2
5.4.2 Calculation of T
Recall, from earlier in this chapter, that T = B + W. We find the B (between) matrix,
and then obtain the elements of T by adding the elements of B to the elements of W.
The diagonal elements of B are defined as follows:
bii = Σ_{j=1}^{k} nj (ȳij − ȳi)²,

where nj is the number of subjects in group j, yij is the mean for variable i in group
j, and yi is the grand mean for variable i. Notice that for any particular variable, say
variable 1, b11 is simply the between-group sum of squares for a univariate analysis of
variance on that variable.
The off-diagonal elements of B are defined as follows:
bim = bmi = Σ_{j=1}^{k} nj (ȳij − ȳi)(ȳmj − ȳm)
To find the elements of B we need the grand means on the two variables. These are
obtained by simply adding up all the scores on each variable and then dividing by the
total number of scores. Thus ȳ1 = 68 / 12 = 5.67, and ȳ2 = 69 / 12 = 5.75.
Now we find the elements of the B (between) matrix:
b11 = Σ_{j=1}^{3} nj (ȳ1j − ȳ1)², where ȳ1j is the mean of variable 1 in group j
    = 4(3 − 5.67)² + 3(5 − 5.67)² + 5(8.2 − 5.67)² = 61.87
b22 = Σ_{j=1}^{3} nj (ȳ2j − ȳ2)²
    = 4(4 − 5.75)² + 3(7 − 5.75)² + 5(6.4 − 5.75)² = 19.05
b12 = b21 = Σ_{j=1}^{3} nj (ȳ1j − ȳ1)(ȳ2j − ȳ2)
    = 4(3 − 5.67)(4 − 5.75) + 3(5 − 5.67)(7 − 5.75) + 5(8.2 − 5.67)(6.4 − 5.75) = 24.40


Therefore, the B matrix is
61.87 24.40 
B=

 24.40 19.05 
and the diagonal elements 61.87 and 19.05 represent the between-group sum of squares
that would be obtained if separate univariate analyses had been done on variables 1
and 2.
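As a check on this arithmetic, a short Python sketch (again ours, not from the text) computes the grand means and the elements of B directly from the group means and sizes:

```python
# Group means on (y1, y2) and group sizes, from the data display
means = [(3.0, 4.0), (5.0, 7.0), (8.2, 6.4)]
sizes = [4, 3, 5]
N = sum(sizes)  # 12

# Grand means are the size-weighted averages of the group means
gm1 = sum(n * m1 for n, (m1, _) in zip(sizes, means)) / N  # 68 / 12
gm2 = sum(n * m2 for n, (_, m2) in zip(sizes, means)) / N  # 69 / 12

# Diagonal and off-diagonal elements of B
b11 = sum(n * (m1 - gm1) ** 2 for n, (m1, _) in zip(sizes, means))
b22 = sum(n * (m2 - gm2) ** 2 for n, (_, m2) in zip(sizes, means))
b12 = sum(n * (m1 - gm1) * (m2 - gm2) for n, (m1, m2) in zip(sizes, means))
```

To two decimals this reproduces b11 = 61.87, b22 = 19.05, and b12 = 24.40 as computed above.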
Because T = B + W, we have

T = [ 61.87   24.40 ]  +  [ 14.80   1.60 ]  =  [ 76.67   26.00 ]
    [ 24.40   19.05 ]     [  1.60   9.20 ]     [ 26.00   28.25 ]
5.4.3 Calculation of Wilks’ Λ and the Chi-Square Approximation
Now we can obtain Wilks’ Λ:

Λ = |W| / |T| = [14.8(9.2) − 1.6²] / [76.67(28.25) − 26²] = 133.6 / 1489.9 = .0897

Finally, we can compute the chi-square test statistic:
χ² = −[(N − 1) − .5(p + k)] ln Λ, with p(k − 1) df
χ² = −[(12 − 1) − .5(2 + 3)] ln(.0897)
χ² = −8.5(−2.4116) = 20.4987, with 2(3 − 1) = 4 df
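Because W, B, and T are all 2 × 2 here, the determinants, Λ, and Bartlett’s χ² can be verified in a few lines of Python (a sketch of ours, not part of the original analysis):

```python
import math

W = [[14.8, 1.6], [1.6, 9.2]]
B = [[61.867, 24.40], [24.40, 19.05]]
T = [[W[i][j] + B[i][j] for j in range(2)] for i in range(2)]  # T = B + W

def det2(M):
    # Determinant of a 2 x 2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

wilks_lambda = det2(W) / det2(T)

# Bartlett's chi-square approximation
N, p, k = 12, 2, 3
chi2 = -((N - 1) - 0.5 * (p + k)) * math.log(wilks_lambda)
df = p * (k - 1)
```

This reproduces Λ = .0897 and χ² ≈ 20.5 on 4 df, matching the hand computation.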
The multivariate null hypothesis here is:
[ µ11 ]   [ µ12 ]   [ µ13 ]
[ µ21 ] = [ µ22 ] = [ µ23 ]
That is, that the population means in the three groups on variable 1 are equal, and
similarly that the population means on variable 2 are equal. Because the critical
value at .05 is 9.49, we reject the multivariate null hypothesis and conclude that
the three groups differ overall on the set of two variables. Table 5.2 gives the multivariate Fs and the univariate Fs from the SPSS run on the sample problem and
presents the formula for Rao’s F approximation and also relates some of the output
from the univariate Fs to the B and W matrices that we computed. After overall
multivariate significance is attained, one often would like to find out which of the
outcome variables differed across groups. When such a difference is found, we
would then like to describe how the groups differed on the given variable. This is
considered next.


Table 5.2: Multivariate Fs and Univariate Fs for Sample Problem From SPSS MANOVA
Multivariate Tests

Effect: gps
                     Value     F        Hypothesis df   Error df   Sig.
Pillai's Trace       1.302     8.390    4.000           18.000     .001
Wilks' Lambda         .090     9.358    4.000           16.000     .000
Hotelling's Trace    5.786    10.126    4.000           14.000     .000
Roy's Largest Root   4.894    22.024    2.000            9.000     .000

F = [(1 − Λ^(1/s)) / Λ^(1/s)] · [ms − p(k − 1)/2 + 1] / [p(k − 1)],

where m = N − 1 − (p + k)/2 and

s = √{[p²(k − 1)² − 4] / [p² + (k − 1)² − 5]}

is approximately distributed as F with p(k − 1) and ms − p(k − 1)/2 + 1 degrees of freedom. Here
Wilks’ Λ = .08967, p = 2, k = 3, and N = 12. Thus, we have m = 12 − 1 − (2 + 3)/2 = 8.5 and

s = √{[4(3 − 1)² − 4] / [4 + (3 − 1)² − 5]} = √(12/3) = 2,

and, since Λ^(1/s) = .08967^(1/2) = .29945,

F = [(1 − .29945) / .29945] · [8.5(2) − 2(3 − 1)/2 + 1] / [2(3 − 1)] = (.70055 / .29945)(16 / 4) = 9.357

as given on the printout, within rounding. The pair of degrees of freedom is p(k − 1) = 2(3 − 1) = 4 and
ms − p(k − 1)/2 + 1 = 8.5(2) − 2(3 − 1)/2 + 1 = 16.
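Rao’s approximation can likewise be reproduced numerically; this Python sketch (ours) plugs in the values from the example:

```python
import math

lam, p, k, N = 0.08967, 2, 3, 12

m = N - 1 - (p + k) / 2                  # m = 8.5
s = math.sqrt((p**2 * (k - 1)**2 - 4) /
              (p**2 + (k - 1)**2 - 5))   # s = 2
df1 = p * (k - 1)                        # hypothesis df = 4
df2 = m * s - df1 / 2 + 1                # error df = 16 (can be noninteger)

root = lam ** (1 / s)                    # Lambda^(1/s) = .29945
F = (1 - root) / root * (df2 / df1)
```

The result agrees with the printout value of 9.358 within rounding.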

Tests of Between-Subjects Effects

Source   Dependent Variable   Type III Sum of Squares   df   Mean Square   F        Sig.
gps      y1                   (1) 61.867                2    30.933        18.811   .001
         y2                       19.050                2     9.525         9.318   .006
Error    y1                   (2) 14.800                9     1.644
         y2                        9.200                9     1.022

(1) These are the diagonal elements of the B (between) matrix we computed in the example:

B = [ 61.87   24.40 ]
    [ 24.40   19.05 ]

(2) Recall that the pooled within matrix computed in the example was

W = [ 14.8   1.6 ]
    [ 1.6    9.2 ]

(Continued)


Table 5.2: (Continued)
a nd these are the diagonal elements of W. The univariate F ratios are formed from the elements on the
main diagonals of B and W. Dividing the elements of B by hypothesis degrees of freedom gives the
hypothesis mean squares, while dividing the elements of W by error degrees of freedom gives the error
mean squares. Then, dividing hypothesis mean squares by error mean squares yields the F ratios. Thus, for
Y1 we have
F =

30.933
1.644

= 18.81.
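This bookkeeping can be sketched in a few lines of Python (the diagonal elements and degrees of freedom are taken from the worked example):

```python
# Diagonal elements of B and W, and degrees of freedom, from the worked example
b_diag = [61.867, 19.050]
w_diag = [14.800, 9.200]
df_hyp, df_err = 2, 9  # k - 1 and N - k

# F = (SSb / df_hyp) / (SSw / df_err), one ratio per dependent variable
f_ratios = [(b / df_hyp) / (w / df_err) for b, w in zip(b_diag, w_diag)]
```

This reproduces the tabled univariate F values of 18.811 for y1 and 9.318 for y2.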

5.5 POST HOC PROCEDURES
In general, when the multivariate null hypothesis is rejected, several follow-up procedures can be used. By far, the most commonly used method in practice is to conduct
a series of one-way ANOVAs for each outcome to identify whether group differences
are present for a given dependent variable. This analysis implies that you are interested
in identifying whether there are group differences present for each of the correlated but distinct outcomes. The purpose of using Wilks’ Λ prior to conducting these univariate
tests is to provide for accurate type I error control. Note that if one were interested in
learning whether linear combinations of dependent variables (instead of individual
dependent variables) distinguish groups, discriminant analysis (see Chapter 10) would
be used instead of these procedures.
In addition, another procedure that may be used following rejection of the overall multivariate null hypothesis is step down analysis. This analysis requires that you establish
an a priori ordering of the dependent variables (from most important to least) based
on theory, empirical evidence, and/or reasoning. In many investigations, this may be
difficult to do, and study results depend on this ordering. As such, it is difficult to find
applications of this procedure in the literature. Previous editions of this text contained
a chapter on step down analysis. However, given its limited utility, this chapter has
been removed from the text, although it is available on the web.
Another analysis procedure that may be used when the focus is on individual dependent
variables (and not linear combinations) is multivariate multilevel modeling (MVMM).
This technique is covered in Chapter 14, which includes a discussion of the benefits
of this procedure. Most relevant for the follow-up procedures is that MVMM can
be used to test whether group differences are the same or differ across multiple outcomes, when the outcomes are similarly scaled. Thus, instead of finding, as with the
use of more traditional procedures, that an intervention impacts, for example, three
outcomes, investigators may find that the effects of an intervention are stronger for
some outcomes than others. In addition, this procedure offers improved treatment of
missing data over the traditional approach discussed here.
The focus for the remainder of this section and the next is on the use of a series of
ANOVAs as follow-up tests given a significant overall multivariate test result. There

are different variations of this procedure that can be used, depending on the balance
of the type I error rate and power desired, as well as confidence interval accuracy. We
present two such procedures here. SAS and SPSS commands for the follow-up procedures are shown in section 5.6 as we work through an applied example. Note also that
one may not wish to conduct pairwise comparisons as we do here, but instead focus
on a more limited number of meaningful comparisons as suggested by theory and/or
empirical work. Such planned comparisons are discussed in sections 5.7–5.11.
5.5.1 Procedure 1—ANOVAs and Tukey Comparisons
With this procedure, a significant multivariate test result is followed up with one-way
ANOVAs for each outcome, with a Bonferroni-adjusted alpha used for the univariate tests. So if there are p outcomes, the alpha used for each ANOVA is the experiment-wise nominal alpha divided by p, or α / p. You can implement this procedure by
simply comparing the p value obtained for the ANOVA F test to this adjusted alpha
level. For example, if the experiment-wise type I error rate were set at .05 and if 5
dependent variables were included, the alpha used for each one-way ANOVA would be
.05 / 5 = .01. And, if the p value for an ANOVA F test were smaller than .01, this indicates that group differences are present for that dependent variable. If group differences
are found for a given dependent variable and the design includes three or more groups,
then pairwise comparisons can be made for that variable using the Tukey procedure, as
described in the next section, with this same alpha level (e.g., .01 for the five-dependent-variable example). This generally recommended procedure then provides strict control of the experiment-wise type I error rate for all possible pairwise comparisons and
also provides good confidence interval coverage. That is, with this procedure, we can
be 95% confident that all intervals capture the true difference in means for the set of
pairwise comparisons. While this procedure has good type I error control and confidence interval coverage, its potential weakness is statistical power, which may drop to
low levels, particularly for the pairwise comparisons, especially as the number of
dependent variables increases. One possibility, then, is to select a higher level than .05
(e.g., .10) for the experiment-wise error rate. In this case, with five dependent variables,
the alpha level used for each of the ANOVAs is .10 / 5 = .02, with this same alpha level
also used for the pairwise comparisons. Also, when the number of dependent variables
and groups is small (i.e., two or perhaps three), procedure 2 can be considered.
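The alpha-splitting logic of procedure 1 is simple enough to express in code. A hypothetical Python sketch (the helper name is ours; the p values are those for y1 and y2 from Table 5.2):

```python
def bonferroni_alpha(alpha_ew, n_outcomes):
    # Split the experiment-wise alpha evenly across the univariate ANOVAs
    return alpha_ew / n_outcomes

# ANOVA p values for y1 and y2, taken from Table 5.2
p_values = {"y1": 0.001, "y2": 0.006}

alpha = bonferroni_alpha(0.05, len(p_values))  # .05 / 2 = .025
# An outcome shows group differences if its p value beats the adjusted alpha
significant = {dv: pv < alpha for dv, pv in p_values.items()}
```

Both p values beat the adjusted level here, so one would proceed to Tukey pairwise comparisons for each outcome at this same adjusted alpha.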
5.5.2 Procedure 2—ANOVAs With No Alpha Adjustment and Tukey Comparisons
With this procedure, a significant overall multivariate test result is followed up with
separate ANOVAs for each outcome with no alpha adjustment (e.g., α = .05). Again,
if group differences are present for a given dependent variable, the Tukey procedure
is used for pairwise comparisons using this same alpha level (i.e., .05). As such, this
procedure relies more heavily on the use of Wilks’ Λ as a protected test. That is, the
one-way ANOVAs will be considered only if Wilks’ Λ indicates that group differences


are present on the set of outcomes. Given no alpha adjustment, this procedure is more
powerful than the previous procedure but can provide for poor control of the experiment-wise type I error rate when the number of outcomes is greater than two or three
and/or when the number of groups increases (thus increasing the number of pairwise
comparisons). As such, we would generally not recommend this procedure with more
than three outcomes and more than three groups. Similarly, this procedure does not
maintain proper confidence interval coverage for the entire set of pairwise comparisons. Thus, if you wish to have, for example, 95% coverage for this entire set of comparisons or strict control of the family-wise error rate throughout the testing procedure,
the procedure in section 5.5.1 should be used.
You may wonder why this procedure may work well when the number of outcomes
and groups is small. In section 4.2, we mentioned that use of univariate ANOVAs
with no alpha adjustment for each of several dependent variables is not a good idea
because the experiment-wise type I error rate can increase to unacceptable levels.
The same applies here, except that the use of Wilks’ Λ provides us with some protection that is not present when we proceed directly to univariate ANOVAs. To illustrate, when the study design has just two dependent variables and two groups, the use
of Wilks’ Λ provides for strict control of the experiment-wise type I error rate even
when no alpha adjustment is used for the univariate ANOVAs, as noted by Levin,
Serlin, and Seaman (1994). Here is how this works. Given two outcomes, there are
three possibilities that may be present for the univariate ANOVAs. One possibility
is that there are no group differences for either of the two dependent variables. If that
is the case, use of Wilks’ Λ at an alpha of .05 provides for strict type I error control.
That is, if we reject the multivariate null hypothesis when no group differences are
present, we have made a type I error, and the expected rate of doing this is .05. So,
for this case, use of Wilks’ Λ provides for proper control of the experiment-wise
type I error rate.
We now consider a second possibility. That is, here, the overall multivariate null
hypothesis is false and there is a group difference for just one of the outcomes. In this
case, we cannot make a type I error with the use of Wilks’ Λ since the multivariate null
hypothesis is false. However, we can certainly make a type I error when we consider
the univariate tests. In this case, with only one true null hypothesis, we can make a
type I error for only one of the univariate F tests. Thus, if we use an unadjusted alpha
for these tests (i.e., .05), then the probability of making a type I error in the set of univariate tests (i.e., the two separate ANOVAs) is .05. Again, the experiment-wise type
I error rate is properly controlled for the univariate ANOVAs. The third possibility is
that there are group differences present on each outcome. In this case, it is not possible to make a type I error for the multivariate test or the univariate F tests. Of course,
even in this latter case, when you have more than two groups, making type I errors
is possible for the pairwise comparisons, where some null group differences may be
present. The use of the Tukey procedure, then, provides some type I error protection
for the pairwise tests, but as noted, this protection generally weakens as the number of
groups increases.