In examining the output from statistical packages it is important to first make two
checks to determine whether MANCOVA is appropriate:
1. Check to see that there is a significant relationship between the dependent variables and the covariates.
2. Check to determine that the homogeneity of the regression hyperplanes is satisfied.
If either of these is not satisfied, then covariance is not appropriate. In particular, if
condition 2 is not met, then one should consider using the Johnson–Neyman technique,
which determines a region of nonsignificance, that is, a set of x values for which the
groups do not differ, and hence for values of x outside this region one group is superior
to the other. The Johnson–Neyman technique is described by Huitema (2011), and
extended discussion is provided in Rogosa (1977, 1980).
Incidentally, if the homogeneity of regression slopes is rejected for several groups,
it does not automatically follow that the slopes for all groups differ. In this case, one
might follow up the overall test with additional homogeneity tests on all combinations
of pairs of slopes. Often, the slopes will be homogeneous for many of the groups. In
this case one can apply ANCOVA to the groups that have homogeneous slopes, and
apply the Johnson–Neyman technique to the groups with heterogeneous slopes. At
present, neither SAS nor SPSS offers the Johnson–Neyman technique.
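Although neither package supplies it, the two-group, one-covariate computation is not difficult to program. The sketch below is a minimal Python illustration (the function and variable names are ours, and a pointwise rather than simultaneous critical value is assumed): it fits a separate regression line in each group and solves for the covariate values at which the group difference in predicted scores changes significance.

import numpy as np
from scipy import stats

def johnson_neyman_two_groups(x1, y1, x2, y2, alpha=.05):
    """Johnson-Neyman boundaries for two groups and one covariate (pointwise sketch)."""
    x1, y1, x2, y2 = (np.asarray(v, dtype=float) for v in (x1, y1, x2, y2))
    n1, n2 = len(x1), len(x2)

    # Separate regression line in each group: y = a_j + b_j * x
    b1 = np.cov(x1, y1, ddof=1)[0, 1] / np.var(x1, ddof=1)
    a1 = y1.mean() - b1 * x1.mean()
    b2 = np.cov(x2, y2, ddof=1)[0, 1] / np.var(x2, ddof=1)
    a2 = y2.mean() - b2 * x2.mean()

    # Pooled residual variance from the separate-slopes model (df = n1 + n2 - 4)
    df = n1 + n2 - 4
    s2 = (np.sum((y1 - a1 - b1 * x1) ** 2) + np.sum((y2 - a2 - b2 * x2) ** 2)) / df
    ssx1, ssx2 = np.sum((x1 - x1.mean()) ** 2), np.sum((x2 - x2.mean()) ** 2)
    fcrit = stats.f.ppf(1 - alpha, 1, df)

    # Group difference at covariate value x: d(x) = (a1 - a2) + (b1 - b2) x, with
    # Var[d(x)] = s2 * (1/n1 + 1/n2 + (x - mean(x1))^2/ssx1 + (x - mean(x2))^2/ssx2).
    # Setting d(x)^2 = fcrit * Var[d(x)] gives a quadratic A x^2 + B x + C = 0 whose
    # roots are the covariate values where the group difference changes significance.
    da, db = a1 - a2, b1 - b2
    A = db ** 2 - fcrit * s2 * (1 / ssx1 + 1 / ssx2)
    B = 2 * da * db + 2 * fcrit * s2 * (x1.mean() / ssx1 + x2.mean() / ssx2)
    C = da ** 2 - fcrit * s2 * (1 / n1 + 1 / n2 + x1.mean() ** 2 / ssx1 + x2.mean() ** 2 / ssx2)
    disc = B ** 2 - 4 * A * C
    if A == 0 or disc < 0:
        return None  # no real boundaries: significance status does not change with x
    roots = np.sort([(-B - np.sqrt(disc)) / (2 * A), (-B + np.sqrt(disc)) / (2 * A)])
    mid = roots.mean()
    nonsig_between = (A * mid ** 2 + B * mid + C) < 0  # True if the interval between the
    return roots, nonsig_between                        # roots is the region of nonsignificance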
8.10 TESTING THE ASSUMPTION OF HOMOGENEOUS HYPERPLANES ON SPSS
Neither SAS nor SPSS automatically provides the test of the homogeneity of the
regression hyperplanes. Recall that, for one covariate, this is the assumption of equal
regression slopes in the groups, and that for two covariates it is the assumption of
parallel regression planes. To set up the syntax to test this assumption, it is necessary
to understand what a violation of the assumption means. As we indicated earlier (and displayed in Figure 8.4), a violation means there is a covariate-by-treatment interaction. Evidence that the assumption is met means the interaction is not present, which is consistent with the use of MANCOVA.
Thus, what is done on SPSS is to set up an effect involving the interaction (for a given covariate) and then test whether this effect is significant. If it is, the assumption is not tenable. This is one of those cases where researchers typically do not want significance, because a nonsignificant result means the assumption is tenable and covariance is appropriate. With the SPSS GLM procedure, the interaction can be tested for each covariate across the multiple outcomes simultaneously.
Example 8.1: Two Dependent Variables and One Covariate
We call the grouping variable TREATS, and denote the dependent variables by
Y1 and Y2, and the covariate by X1. Then, the key parts of the GLM syntax that produce a test of the assumption of no treatment-covariate interaction for any of the outcomes are

GLM Y1 Y2 BY TREATS WITH X1
/DESIGN=TREATS X1 TREATS*X1.

Example 8.2: Three Dependent Variables and Two Covariates
We denote the dependent variables by Y1, Y2, and Y3, and the covariates by X1 and X2.
Then, the relevant syntax is
GLM Y1 Y2 Y3 BY TREATS WITH X1 X2
/DESIGN=TREATS X1 X2 TREATS*X1 TREATS*X2.

These two syntax lines will be embedded in others when running a MANCOVA on
SPSS, as you can see in a computer example we consider later. With the previous two
examples and the computer examples, you should be able to generalize the setup of the
control lines for testing homogeneity of regression hyperplanes for any combination of
dependent variables and covariates.
8.11 EFFECT SIZE MEASURES FOR GROUP COMPARISONS IN MANCOVA/ANCOVA
A variety of effect size measures are available to describe the differences in adjusted
means. A raw score (unstandardized) difference in adjusted means should be reported
and may be sufficient if the scale of the dependent variable is well known and easily
understood. In addition, as discussed in Olejnik and Algina (2000), a standardized difference in adjusted means between two groups (essentially a Cohen's d measure) may
be computed as

d = \frac{\bar{y}_{\mathrm{adj}1} - \bar{y}_{\mathrm{adj}2}}{MS_W^{1/2}},

where MS_W is the pooled mean square error from a one-way ANOVA that includes
the treatment as the only explanatory variable (thus excluding any covariates). This
effect size measure, among other things, assumes that (1) the covariates are participant
attribute variables (or more properly variables whose variability is intrinsic to the population of interest, as explained in Olejnik and Algina, 2000) and (2) the homogeneity
of variance assumption for the outcome is satisfied.
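As a computational illustration (a minimal Python sketch; the function name and inputs are ours, not output from SAS or SPSS), d can be obtained from the two adjusted means and the mean square within from the covariate-free one-way ANOVA:

import numpy as np

def adjusted_mean_d(adj_mean_1, adj_mean_2, group_scores):
    """Standardized difference in adjusted means: d = (ybar_adj1 - ybar_adj2) / sqrt(MS_W).

    group_scores: list of 1-D arrays of raw outcome scores, one array per group.
    MS_W is the pooled within-group mean square from a one-way ANOVA that ignores
    the covariates, as described in the text.
    """
    groups = [np.asarray(g, dtype=float) for g in group_scores]
    ss_within = sum(np.sum((g - g.mean()) ** 2) for g in groups)
    df_within = sum(len(g) for g in groups) - len(groups)
    return (adj_mean_1 - adj_mean_2) / np.sqrt(ss_within / df_within)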
In addition, one may also use proportion of variance explained effect size measures
for treatment group differences in MANCOVA/ANCOVA. For example, for a given
outcome, the proportion of variance explained by treatment group differences may be
computed as

\eta^2 = \frac{SS_{\mathrm{effect}}}{SS_{\mathrm{total}}},


where SS_effect is the sum of squares due to the treatment from the ANCOVA and SS_total is
the total sum of squares for a given dependent variable. Note that computer software
commonly reports partial η², which is not the effect size discussed here and which
removes variation due to the covariates from SS_total. Conceptually, η² describes the
strength of the treatment effect for the general population, whereas partial η² describes
the strength of the treatment for participants having the same values on the covariates
(i.e., holding scores constant on all covariates). In addition, an overall multivariate
strength of association, multivariate eta square (also called tau square), can be computed and is

\eta^2_{\mathrm{multivariate}} = 1 - \Lambda^{1/r},

where Λ is Wilks' lambda and r is the smaller of (p, q), where p is the number of
dependent variables and q is the degrees of freedom for the treatment effect. This
effect size is interpreted as the proportion of generalized variance in the set of outcomes that is due to the treatment. Use of these effect size measures is illustrated in
Example 8.4.
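As a brief preview of the computation, consider the group effect in Example 8.3 (reported later in Table 8.2): Wilks' Λ = .649, with p = 2 dependent variables and q = 1 degree of freedom for the two-group treatment, so r = min(2, 1) = 1 and

\eta^2_{\mathrm{multivariate}} = 1 - .649^{1/1} = .351,

that is, about 35% of the generalized variance in the set of outcomes is associated with group membership.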
8.12 TWO COMPUTER EXAMPLES
We now consider two examples to illustrate (1) how to set up syntax to run MANCOVA on SAS GLM and then SPSS GLM, and (2) how to interpret the output, including determining whether use of covariates is appropriate. The first example uses
artificial data and is simpler, having just two dependent variables and one covariate,
whereas the second example uses data from an actual study and is a bit more complex,
involving two dependent variables and two covariates. We also conduct some preliminary analysis activities (checking for outliers, assessing assumptions) with the second
example.
Example 8.3: MANCOVA on SAS GLM
This example has two groups, with 15 participants in group 1 and 14 participants in
group 2. There are two dependent variables, denoted by POSTCOMP and POSTHIOR
in the SAS GLM syntax and on the printout, and one covariate (denoted by PRECOMP). The syntax for running the MANCOVA analysis is given in Table 8.1, along
with annotation.
Table 8.2 presents two multivariate tests for determining whether MANCOVA is
appropriate, that is, whether there is a significant relationship between the two dependent variables and the covariate, and whether there is no covariate-by-group interaction.
The multivariate test at the top of Table 8.2 indicates there is a significant relationship
between the covariate and the set of outcomes (F = 21.46, p = .0001). Also, the multivariate test in the middle of the table shows there is not a covariate-by-group interaction effect (F = 1.90, p = .1707). This supports the decision to use MANCOVA.


Table 8.1: SAS GLM Syntax for Two-Group MANCOVA: Two Dependent Variables and One Covariate








TITLE 'MULTIVARIATE ANALYSIS OF COVARIANCE'; DATA COMP;
INPUT GPID PRECOMP POSTCOMP POSTHIOR @@;
LINES;
1 15 17 3 1 10 6 3 1 13 13 1 1 14 14 8
1 12 12 3 1 10 9 9 1 12 12 3 1 8 9 12
1 12 15 3 1 8 10 8 1 12 13 1 1 7 11 10
1 12 16 1 1 9 12 2 1 12 14 8
2 9 9 3 2 13 19 5 2 13 16 11 2 6 7 18
2 10 11 15 2 6 9 9 2 16 20 8 2 9 15 6
2 10 8 9 2 8 10 3 2 13 16 12 2 12 17 20
2 11 18 12 2 14 18 16
;
PROC PRINT;
PROC REG;
MODEL POSTCOMP POSTHIOR = PRECOMP;
MTEST;
PROC GLM;
CLASS GPID;
MODEL POSTCOMP POSTHIOR = PRECOMP GPID PRECOMP*GPID;
MANOVA H = PRECOMP*GPID;
PROC GLM;
CLASS GPID;
MODEL POSTCOMP POSTHIOR = PRECOMP GPID;
MANOVA H = GPID;
LSMEANS GPID/PDIFF;
RUN;

Notes: PROC REG is used to examine the relationship between the two dependent variables and the covariate; the MTEST statement is needed to obtain the multivariate test. The first PROC GLM is used with the MANOVA statement to obtain the multivariate test of no overall PRECOMP BY GPID interaction effect. The second PROC GLM, again with a MANOVA statement, tests whether the adjusted population mean vectors are equal. The LSMEANS statement is needed to obtain the adjusted means.

The multivariate null hypothesis tested in MANCOVA is that the adjusted population
mean vectors are equal, that is,

H_0: \begin{pmatrix} \mu_{11}^{*} \\ \mu_{21}^{*} \end{pmatrix} = \begin{pmatrix} \mu_{12}^{*} \\ \mu_{22}^{*} \end{pmatrix}.


Table 8.2: Multivariate Tests for Significant Regression, Covariate-by-Treatment Interaction, and Group Differences

Multivariate Test: Multivariate Statistics and Exact F Statistics
S = 1   M = 0   N = 12

Statistic                 Value        F        Num DF   Den DF   Pr > F
Wilks' Lambda             0.37722383   21.46    2        26       0.0001
Pillai's Trace            0.62277617   21.46    2        26       0.0001
Hotelling-Lawley Trace    1.65094597   21.46    2        26       0.0001
Roy's Greatest Root       1.65094597   21.46    2        26       0.0001

MANOVA Test Criteria and Exact F Statistics for the Hypothesis of no Overall PRECOMP*GPID Effect
H = Type III SS&CP Matrix for PRECOMP*GPID   E = Error SS&CP Matrix
S = 1   M = 0   N = 11

Statistic                 Value        F        Num DF   Den DF   Pr > F
Wilks' Lambda             0.86301048   1.90     2        24       0.1707
Pillai's Trace            0.13698952   1.90     2        24       0.1707
Hotelling-Lawley Trace    0.15873448   1.90     2        24       0.1707
Roy's Greatest Root       0.15873448   1.90     2        24       0.1707

MANOVA Test Criteria and Exact F Statistics for the Hypothesis of no Overall GPID Effect
H = Type III SS&CP Matrix for GPID   E = Error SS&CP Matrix
S = 1   M = 0   N = 11.5

Statistic                 Value        F        Num DF   Den DF   Pr > F
Wilks' Lambda             0.64891393   6.76     2        25       0.0045
Pillai's Trace            0.35108107   6.76     2        25       0.0045
Hotelling-Lawley Trace    0.54102455   6.76     2        25       0.0045
Roy's Greatest Root       0.54102455   6.76     2        25       0.0045

The multivariate test at the bottom of Table 8.2 (F = 6.76, p = .0045) shows that
we reject the multivariate null hypothesis at the .05 level, and hence conclude that
the groups differ on the set of adjusted means. The univariate ANCOVA follow-up F
tests in Table 8.3 (F = 5.26 for POSTCOMP, p = .03, and F = 9.84 for POSTHIOR,
p = .004) indicate that adjusted means differ for each of the dependent variables. The
adjusted means for the variables are also given in Table 8.3.
Can we have confidence in the reliability of the adjusted means? From Huitema's
inequality we need (C + J − 1) / N < .10. Because here J = 2 and N = 29, we obtain


Table 8.3: Univariate Tests for Group Differences and Adjusted Means

Dependent variable: POSTCOMP
Source      DF   Type I SS      Mean Square    F Value   Pr > F
PRECOMP     1    237.6895679    237.6895679    43.90     <0.001
GPID        1    28.4986009     28.4986009     5.26      0.0301

Source      DF   Type III SS    Mean Square    F Value   Pr > F
PRECOMP     1    247.9797944    247.9797944    45.80     <0.001
GPID        1    28.4986009     28.4986009     5.26      0.0301

Dependent variable: POSTHIOR
Source      DF   Type I SS      Mean Square    F Value   Pr > F
PRECOMP     1    17.6622124     17.6622124     0.82      0.3732
GPID        1    211.5902344    211.5902344    9.84      0.0042

Source      DF   Type III SS    Mean Square    F Value   Pr > F
PRECOMP     1    10.2007226     10.2007226     0.47      0.4972
GPID        1    211.5902344    211.5902344    9.84      0.0042

General Linear Models Procedure Least Squares Means
GPID   POSTCOMP LSMEAN   Pr > |T| H0: LSMEAN1 = LSMEAN2
1      12.0055476
2      13.9940562         0.0301

GPID   POSTHIOR LSMEAN   Pr > |T| H0: LSMEAN1 = LSMEAN2
1      5.0394385
2      10.4577444         0.0042

(C + 1) / 29 < .10 or C < 1.9. Thus, we should use fewer than two covariates for reliable
results, and we have used just one covariate.
Example 8.4: MANCOVA on SPSS GLM
Next, we consider a social psychological study by Novince (1977) that examined the
effect of behavioral rehearsal (group 1) and of behavioral rehearsal plus cognitive
restructuring (combination treatment, group 3) on reducing anxiety (NEGEVAL) and
facilitating social skills (AVOID) for female college freshmen. There was also a control group (group 2), with 11 participants in each group. The participants were pretested and posttested on four measures; the pretests served as the covariates.
For this example we use only two of the measures: avoidance and negative evaluation. In Table 8.4 we present syntax for running the MANCOVA, along with annotation explaining what some key subcommands are doing. Table 8.5 presents syntax
for obtaining within-group Mahalanobis distance values that can be used to identify
multivariate outliers among the variables. Tables 8.6, 8.7, 8.8, 8.9, and 8.10 present
selected analysis results. Specifically, Table 8.6 presents descriptive statistics for
the study variables, Table 8.7 presents results for tests of the homogeneity of the


regression planes, and Table 8.8 shows tests for homogeneity of variance. Table 8.9
provides the overall multivariate tests as well as follow-up univariate tests for the
MANCOVA, and Table 8.10 presents the adjusted means and Bonferroni-adjusted
comparisons for adjusted mean differences. As in one-way MANOVA, the Bonferroni adjustments guard against type I error inflation due to the number of pairwise
comparisons.
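To make the adjustment concrete for this design: with J = 3 groups there are J(J − 1)/2 = 3 pairwise comparisons of adjusted means per outcome, so each comparison is evaluated at

\alpha / 3 = .05 / 3 \approx .017,

or, equivalently, each pairwise p value is multiplied by 3 before being compared with .05.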
Before we use the MANCOVA procedure, we examine the data for potential outliers,
examine the shape of the distributions of the covariates and outcomes, and inspect
descriptive statistics. Using the syntax in Table 8.5, we obtain the Mahalanobis distances for each case to identify whether multivariate outliers are present on the set of dependent variables and covariates. The largest obtained distance is 7.79, which does not
exceed the chi-square critical value (α = .001, df = 4) of 18.47. Thus, no multivariate outliers

Table 8.4: SPSS GLM Syntax for Three-Group Example: Two Dependent Variables and Two Covariates
TITLE 'NOVINCE DATA — 3 GP ANCOVA-2 DEP VARS AND 2 COVS'.
DATA LIST FREE/GPID AVOID NEGEVAL PREAVOID PRENEG.
BEGIN DATA.
1 91 81 70 102     1 107 132 121 71    1 121 97 89 76      1 86 88 80 85
1 137 119 123 117  1 138 132 112 106   1 133 116 126 97    1 114 72 112 76
1 127 101 121 85   1 114 138 80 105    1 118 121 101 113   2 126 112 121 106
2 107 88 116 97    2 76 95 77 64       2 116 87 111 86     2 99 101 98 81
2 104 107 105 113  2 96 84 97 92       2 127 88 132 104    3 147 155 145 118
2 94 87 85 96      2 92 80 82 88       2 128 109 112 118   3 143 131 121 103
3 121 134 96 96    3 140 130 120 110   3 148 123 130 111
3 139 124 122 105  3 121 123 119 122   3 141 155 104 139
3 120 123 80 77    3 140 140 121 121   3 95 103 92 94
END DATA.

LIST.

GLM AVOID NEGEVAL BY GPID WITH PREAVOID PRENEG
 /PRINT=DESCRIPTIVE ETASQ
 /DESIGN=GPID PREAVOID PRENEG GPID*PREAVOID GPID*PRENEG.
GLM AVOID NEGEVAL BY GPID WITH PREAVOID PRENEG
 /EMMEANS=TABLES(GPID) COMPARE ADJ(BONFERRONI)
 /PLOT=RESIDUALS
 /SAVE=RESID ZRESID
 /PRINT=DESCRIPTIVE ETASQ HOMOGENEITY
 /DESIGN=PREAVOID PRENEG GPID.

With the first set of GLM commands, the DESIGN subcommand requests a test of the equality of regression planes assumption for each outcome. In particular, GPID*PREAVOID GPID*PRENEG creates the product terms needed to test the interactions of interest. The second set of GLM commands produces the standard MANCOVA results; the EMMEANS subcommand requests comparisons of adjusted means using the Bonferroni procedure.


Table 8.5: SPSS Syntax for Obtaining Within-Group Mahalanobis Distance Values

SORT CASES BY gpid(A).
SPLIT FILE BY gpid.
REGRESSION
 /STATISTICS COEFF OUTS R ANOVA
 /DEPENDENT case
 /METHOD=ENTER avoid negeval preavoid preneg
 /SAVE MAHAL.
EXECUTE.
SPLIT FILE OFF.

To obtain the Mahalanobis distances within groups, cases must first be sorted by the grouping variable. The SPLIT FILE command is needed to obtain the distances for each group separately. The REGRESSION procedure computes the distances. Note that case (the case ID) is the dependent variable, which is irrelevant here because the procedure uses information from the "predictors" only in computing the distance values. The "predictor" variables here are the dependent variables and covariates used in the MANCOVA, which are entered with the METHOD subcommand.

are indicated. We also computed within-group z scores for each of the variables separately and did not find any observation lying more than 2.5 standard deviations from
the respective group mean, suggesting no univariate outliers are present. In addition,
examining histograms of each of the variables as well as scatterplots of each outcome
and each covariate for each group did not reveal any unusual values and indicated
that the distributions of each variable are roughly symmetrical. Further,
the scatterplots suggested that each covariate is linearly related to each of
the outcome variables, supporting the linearity assumption.
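For readers who want to reproduce this screening outside SPSS, a minimal Python sketch follows (the function name and the dictionary input format are assumptions of this illustration, not package output); it computes the within-group squared Mahalanobis distances and flags cases exceeding the chi-square cutoff used in the text.

import numpy as np
from scipy import stats

def mahalanobis_screen(groups, alpha=.001):
    """Within-group squared Mahalanobis distances, flagged against chi-square(alpha, p).

    groups: dict mapping a group label to an (n x p) array holding that group's
    scores on the dependent variables and covariates.
    """
    results = {}
    for label, x in groups.items():
        x = np.asarray(x, dtype=float)
        p = x.shape[1]
        centered = x - x.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(x, rowvar=False))  # within-group covariance matrix
        d2 = np.einsum('ij,jk,ik->i', centered, inv_cov, centered)
        cutoff = stats.chi2.ppf(1 - alpha, df=p)          # about 18.47 for p = 4, alpha = .001
        results[label] = (d2, d2 > cutoff)                # distances and outlier flags
    return results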
Table 8.6 shows the means and standard deviations for each of the study variables
by treatment group (GPID). Examining the group means for the outcomes (AVOID,
NEGEVAL) indicates that Group 3 has the highest means for each outcome and Group
2 has the lowest. For the covariates, Group 3 has the highest mean and the means for
Groups 2 and 1 are fairly similar. Given that random assignment has been properly done, MANCOVA (or ANCOVA) is preferable to MANOVA (or ANOVA) when covariate means appear to differ across groups, because including the covariates properly adjusts for these group differences. See Huitema (2011, pp. 202–208) for a discussion of this issue.
Having some assurance that there are no outliers present, the shapes of the distributions
are fairly symmetrical, and linear relationships are present between the covariates and
the outcomes, we now examine the formal assumptions associated with the procedure.
(Note, though, that the linearity assumption has already been assessed.) First, Table 8.7
provides the results for the test of the assumption that there is no treatment-covariate
interaction for the set of outcomes, which the GLM procedure performs separately for


Table 8.6: Descriptive Statistics for the Study Variables by Group

Report
GPID                         AVOID       NEGEVAL     PREAVOID    PRENEG
1.00   Mean                  116.9091    108.8182    103.1818     93.9091
       N                     11          11          11           11
       Std. deviation        17.23052    22.34645    20.21296     16.02158
2.00   Mean                  105.9091     94.3636    103.2727     95.0000
       N                     11          11          11           11
       Std. deviation        16.78961    11.10201    17.27478     15.34927
3.00   Mean                  132.2727    131.0000    113.6364    108.7273
       N                     11          11          11           11
       Std. deviation        16.16843    15.05988    18.71509     16.63785

each covariate. The results suggest that there is no interaction between the treatment
and PREAVOID for any outcome, multivariate F = .277, p = .892 (corresponding to
Wilks' Λ), and no interaction between the treatment and PRENEG for any outcome,
multivariate F = .275, p = .892. In addition, Box's M test, M = 6.689, p = .418, does
not indicate that the variance-covariance matrices of the dependent variables differ across
groups. Note that Box's M does not test the assumption that the variance-covariance
matrices of the residuals are similar across groups. However, Levene's test assesses
whether the residuals for a given outcome have the same variance across groups. The
results of these tests, shown in Table 8.8, support that this assumption is not
violated for the AVOID outcome, F = 1.184, p = .320, or for the NEGEVAL outcome,
F = 1.620, p = .215. Further, Table 8.9 shows that PREAVOID is related to the set of
outcomes, multivariate F = 17.659, p < .001, as is PRENEG, multivariate F = 4.379,
p = .023.
Having now learned that there is no interaction between the treatment and covariates for any outcome, that the residual variance is similar across groups for each
outcome, and that each covariate is related to the set of outcomes, we attend to
the assumption that the residuals from the MANCOVA procedure are independently
distributed and follow a multivariate normal distribution in each of the treatment
populations. Given that the treatments were individually administered and participants completed the assessments individually, we have no reason to suspect that the independence assumption is violated. To assess normality, we examine
graphs and compute skewness and kurtosis of the residuals. The syntax in Table 8.4
obtains the residuals from the MANCOVA procedure for the two outcomes for each
group. Inspecting the histograms does not suggest a serious departure from normality, which is supported by the skewness and kurtosis values, none of which exceeds
a magnitude of 1.5.
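If the saved residuals are exported from SPSS (e.g., to a CSV file), the same shape checks can be sketched in a few lines of Python; the helper below is hypothetical and simply reports skewness and excess kurtosis by group.

import numpy as np
from scipy import stats

def residual_shape_by_group(resid, gpid):
    """Skewness and (excess) kurtosis of saved residuals, computed separately per group."""
    resid, gpid = np.asarray(resid, dtype=float), np.asarray(gpid)
    shape = {}
    for g in np.unique(gpid):
        r = resid[gpid == g]
        shape[g] = {'skewness': stats.skew(r), 'kurtosis': stats.kurtosis(r)}
    return shape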


Table 8.7: Multivariate Tests for No Treatment-Covariate Interactions

Multivariate Tests(a)

Effect                                Value    F           Hypothesis df   Error df   Sig.   Partial eta squared
Intercept         Pillai's Trace      .200     2.866       2.000           23.000     .077   .200
                  Wilks' Lambda       .800     2.866(b)    2.000           23.000     .077   .200
                  Hotelling's Trace   .249     2.866(b)    2.000           23.000     .077   .200
                  Roy's Largest Root  .249     2.866(b)    2.000           23.000     .077   .200
GPID              Pillai's Trace      .143     .922        4.000           48.000     .459   .071
                  Wilks' Lambda       .862     .889(b)     4.000           46.000     .478   .072
                  Hotelling's Trace   .156     .856        4.000           44.000     .498   .072
                  Roy's Largest Root  .111     1.334(c)    2.000           24.000     .282   .100
PREAVOID          Pillai's Trace      .553     14.248(b)   2.000           23.000     .000   .553
                  Wilks' Lambda       .447     14.248(b)   2.000           23.000     .000   .553
                  Hotelling's Trace   1.239    14.248(b)   2.000           23.000     .000   .553
                  Roy's Largest Root  1.239    14.248(b)   2.000           23.000     .000   .553
PRENEG            Pillai's Trace      .235     3.529(b)    2.000           23.000     .046   .235
                  Wilks' Lambda       .765     3.529(b)    2.000           23.000     .046   .235
                  Hotelling's Trace   .307     3.529(b)    2.000           23.000     .046   .235
                  Roy's Largest Root  .307     3.529(b)    2.000           23.000     .046   .235
GPID * PREAVOID   Pillai's Trace      .047     .287        4.000           48.000     .885   .023
                  Wilks' Lambda       .954     .277(b)     4.000           46.000     .892   .023
                  Hotelling's Trace   .048     .266        4.000           44.000     .898   .024
                  Roy's Largest Root  .040     .485(c)     2.000           24.000     .622   .039
GPID * PRENEG     Pillai's Trace      .047     .287        4.000           48.000     .885   .023
                  Wilks' Lambda       .954     .275(b)     4.000           46.000     .892   .023
                  Hotelling's Trace   .048     .264        4.000           44.000     .900   .023
                  Roy's Largest Root  .035     .415(c)     2.000           24.000     .665   .033

a. Design: Intercept + GPID + PREAVOID + PRENEG + GPID * PREAVOID + GPID * PRENEG
b. Exact statistic
c. The statistic is an upper bound on F that yields a lower bound on the significance level.

Table 8.8: Homogeneity of Variance Tests for MANCOVA

Box's Test of Equality of Covariance Matrices(a)
Box's M    6.689
F          1.007
df1        6
df2        22430.769
Sig.       .418
Tests the null hypothesis that the observed covariance matrices of the dependent variables are equal across groups.
a. Design: Intercept + PREAVOID + PRENEG + GPID

Levene's Test of Equality of Error Variances(a)
            F       df1   df2   Sig.
AVOID       1.184   2     30    .320
NEGEVAL     1.620   2     30    .215
Tests the null hypothesis that the error variance of the dependent variable is equal across groups.
a. Design: Intercept + PREAVOID + PRENEG + GPID

Table 8.9: MANCOVA and ANCOVA Test Results

Multivariate Tests(a)

Effect                          Value   F           Hypothesis df   Error df   Sig.   Partial eta squared
Intercept   Pillai's Trace      .219    3.783(b)    2.000           27.000     .036   .219
            Wilks' Lambda       .781    3.783(b)    2.000           27.000     .036   .219
            Hotelling's Trace   .280    3.783(b)    2.000           27.000     .036   .219
            Roy's Largest Root  .280    3.783(b)    2.000           27.000     .036   .219
PREAVOID    Pillai's Trace      .567    17.659(b)   2.000           27.000     .000   .567
            Wilks' Lambda       .433    17.659(b)   2.000           27.000     .000   .567
            Hotelling's Trace   1.308   17.659(b)   2.000           27.000     .000   .567
            Roy's Largest Root  1.308   17.659(b)   2.000           27.000     .000   .567
PRENEG      Pillai's Trace      .245    4.379(b)    2.000           27.000     .023   .245
            Wilks' Lambda       .755    4.379(b)    2.000           27.000     .023   .245
            Hotelling's Trace   .324    4.379(b)    2.000           27.000     .023   .245
            Roy's Largest Root  .324    4.379(b)    2.000           27.000     .023   .245
GPID        Pillai's Trace      .491    4.555       4.000           56.000     .003   .246
            Wilks' Lambda       .519    5.246(b)    4.000           54.000     .001   .280
            Hotelling's Trace   .910    5.913       4.000           52.000     .001   .313
            Roy's Largest Root  .889    12.443(c)   2.000           28.000     .000   .471

a. Design: Intercept + PREAVOID + PRENEG + GPID
b. Exact statistic
c. The statistic is an upper bound on F that yields a lower bound on the significance level.

Tests of Between-Subjects Effects

Source            Dependent variable   Type III sum of squares   df   Mean square   F        Sig.   Partial eta squared
Corrected model   AVOID                9620.404(a)               4    2405.101      25.516   .000   .785
                  NEGEVAL              9648.883(b)               4    2412.221      10.658   .000   .604