13.5 Example 1: Examining School Differences in Mathematics
Hierarchical Linear Modeling
Table 13.2 shows the first 15 cases for the data set used here, and Table 13.3 shows descriptive
statistics. In Table 13.2, schcode is the school ID (sorted from 1 to 419), and id is the
student ID variable. The student outcome is math, and ses is the student-level predictor. Note that some of the ses scores are negative, which is due to these scores being
centered around their respective school ses mean. At the school level, the focal variable of interest is the dichotomous public, with 73% of the schools in the sample being
public. The other school-level variable, meanses, is included as a control variable and
was formed by computing the mean of the uncentered (raw) student ses scores for the
students included in the sample from each of the given schools. Scores for mean ses
were then centered. Note that the student-level variables in Table 13.2
vary within a school, but the school-level variables are constant for each person within
a school. Also, note that even though we have variables at two different levels (student
and school), all of the variables appear in one data file.
In addition, you might wonder why mean ses is needed in the analysis model, given
that we have a student ses variable. There are two primary reasons for this. First,
when student ses is group-mean centered, it cannot serve as a control variable for
any school-level predictor, because this form of centering makes the student predictor
uncorrelated with school predictors. As such, if we wish to use group-mean centering
for student ses and also control for ses differences between schools when we compare
public and private schools’ math performance, mean ses must be included as a predictor variable. Second, the association between a predictor and an outcome
at level 1 (e.g., student ses and math) may sometimes differ from the association of these variables
at the school level (e.g., school mean ses and school mean math). When these associations differ, school mean ses is said to have a contextual effect on math performance.
Table 13.2: Data Set Showing First 15 Cases
Chapter 13
Table 13.3: Variables and Descriptive Statistics for HLM Analysis

Variable                 Variable name   Values                   Mean    SD
Student-level
  Math achievement       Math            27.42 to 99.98           57.73   8.78
  Socioeconomic status   Ses             −21.71 to 24.10           0.00   6.07
School-level
  School type            Public          1 = public, 0 = other     0.73   0.44
  School ses             Meanses         −13.34 to 14.20           0.00   4.94
These within- and between-school associations, sometimes of intrinsic interest, are
estimated by including student ses and mean ses in the same analysis model. Section 13.6.1 discusses contextual effects in more detail.
In the analysis that follows, we assume that the researchers are interested primarily in
examining differences between public and private schools in math achievement. With
these data, researchers can not only examine whether public or private schools have,
on average, greater math achievement, but may also examine whether the association
between student ses and math differs for the two school types. What is desired,
perhaps, is to determine if there are schools where math performance is generally high
but the ses-math slope is relatively small. Such a co-occurrence would indicate
that there are schools where students of varying ses values are all performing relatively
well in mathematics and that math performance does not depend greatly on
student ses. If such schools are present, the analysis can then determine whether such
schools tend to be public or private.
13.5.1 The Unconditional Model
Researchers often begin multilevel analysis with a completely unconditional model.
This model provides for us an estimate of the overall average across all students and
schools for the outcome (i.e., math), as well as an estimate of the variation that is
within and between schools for math. This model is:

mathij = β0j + rij,  (7)
where the outcome math for student i in school j is modeled as a function of school
j’s intercept (β0j) and a residual term rij. Note that when no explanatory variables are
included on the right side of the model, the intercept becomes the average of the quantity on the left side. Thus, β0j represents a given school’s math mean.
At level 2, school j’s intercept (or math mean) is modeled as function of a school-level
intercept and residual:
β0j = γ00 + u0j  (8)
Again, with no predictors on the right side of the model, γ00 represents the average of
the school math means, or is sometimes referred to as the overall average. The school
random effect (i.e., u0j) represents the deviation of a given school’s math mean from
the overall math average. Note that the residual terms in Equations 7 and 8 are assumed
to be normally distributed, with a mean of zero, and have constant variance, with the
student- or within-school variance denoted by σ2 and the school-level variance denoted
by τ00. The student and school random effects (rij, u0j) are assumed to be uncorrelated.
As before, the combined model is formed by replacing the regression coefficients in
Equation 7 with the right-hand side of Equation 8. This model is

mathij = γ00 + u0j + rij,  (9)
where there is one fixed effect (γ00), a school-level random effect (u0j), and a student
random effect (rij), the latter of which is referred to as a residual (not random effect)
by SAS and SPSS.
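Although the chapter fits this model in SAS and SPSS, the structure of Equation 9 can be made concrete by simulating data from it. The plain-Python sketch below is our own illustration (the function name and the school/student counts are arbitrary, not from the text); the parameter values are taken from the estimates reported later in Table 13.5. Each score is the overall average plus a school random effect u0j plus a student residual rij:

```python
import random

random.seed(1)

GAMMA_00 = 57.67   # overall math average (fixed effect)
TAU_00 = 10.64     # between-school variance of the math means
SIGMA2 = 66.55     # within-school (student-level) variance

def simulate_math(n_schools=419, n_students=25):
    """Generate math scores from Equation 9: math_ij = gamma_00 + u_0j + r_ij."""
    scores = []
    for j in range(n_schools):
        u0j = random.gauss(0, TAU_00 ** 0.5)       # school random effect, shared within school j
        for i in range(n_students):
            rij = random.gauss(0, SIGMA2 ** 0.5)   # student residual
            scores.append(GAMMA_00 + u0j + rij)
    return scores

scores = simulate_math()
grand_mean = sum(scores) / len(scores)   # close to GAMMA_00
```

Averaging the simulated scores recovers a value near γ00, and the spread of the school means reflects τ00.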
Table 13.4 shows the SAS and SPSS commands needed to estimate Equation 9, and
Table 13.5 shows selected analysis results. In Table 13.5, the results from SAS and
SPSS are virtually identical with a couple of differences (i.e., degrees of freedom
for tests of fixed effects and p values reported for tests of variances). First, in the Fit
Statistics table in SAS and in the Information Criteria table of SPSS, −2 Restricted
Log Likelihood is a measure of lack of fit (sometimes referred to as model deviance),
estimated here to be 48,877.3. This value can be used to conduct a statistical test
for the intercept variance (τ00), which we will illustrate shortly. In the Fixed Effect
output tables, the intercept (γ00) is estimated to be 57.67, which is the overall math
average. Typically, the intercept would not be tested for significance, unless zero
is a value of interest, as the null hypothesis is that γ00 = 0. Note that the degrees of
freedom associated with the test of the fixed effect differs between SAS (418) and
SPSS (416.066). West, Welch, and Galecki (2014) explain that t tests with multilevel
models do not exactly follow a t distribution. As a result, different methods are available to estimate the degrees of freedom for this test. The MIXED procedure in SPSS
uses the Satterthwaite method (by default and exclusively) to estimate the degrees of
freedom, with this method intended to provide more accurate inferences when small
sample sizes are present. SAS PROC MIXED has a variety of methods available to
estimate these degrees of freedom. While the Satterthwaite method can be requested
in SAS, the syntax in Table 13.4 uses the default method (called containment), which
estimates the degrees of freedom based on the model specified for the random effects
(West et al., 2014, p. 131).
In the Covariance Parameters table of Table 13.5, the student-level variance in
math is estimated to be 66.55, and the school-level math variance is estimated to be
10.64. The Wald z tests associated with these variances suggest that math variation
is present in the population within and between schools (the null hypothesis for
each variance is that it is zero in the population). Note that when using these z tests
Table 13.4: SAS and SPSS Control Lines for Estimating the Completely Unconditional Model

SAS

(1) PROC MIXED METHOD=REML NOCLPRINT COVTEST NOITPRINT;
(2) CLASS schcode;
(3) MODEL math = / SOLUTION;
(4) RANDOM intercept / SUBJECT=schcode;
    RUN;

SPSS

(5) MIXED math
(6)   /FIXED=| SSTYPE(3)
(7)   /METHOD=REML
(8)   /PRINT=G SOLUTION TESTCOV
(9)   /RANDOM=INTERCEPT | SUBJECT(schcode) COVTYPE(VC).
(1) PROC MIXED invokes the mixed modeling procedure; METHOD=REML requests restricted maximum likelihood estimation, NOCLPRINT suppresses printing of the number of schools,
COVTEST requests z tests for variance-covariance elements, and NOITPRINT suppresses printing of
information on iteration history.
(2) CLASS defines the cluster-level variable and must precede the MODEL statement.
(3) MODEL specifies that math is the dependent variable and no predictors are included, although the intercept (γ00) is included by default; SOLUTION displays fixed effects estimates in the output.
(4) RANDOM specifies random effects for the intercept and the identifier (schcode) indicates that students are
nested in schools. This line is omitted when a deviance test is used for τ00.
(5) MIXED invokes the mixed modeling procedure and math is then indicated as the dependent variable.
(6) FIXED indicates that no fixed effects are included in the model although the intercept (γ00) is included by
default. SSTYPE(3) requests the type 3 sum of squares.
(7) METHOD requests restricted maximum likelihood estimation.
(8) PRINT requests school-level variance components, the fixed effect estimates and tests, and statistical
test results for the variance parameters.
(9) RANDOM specifies random effects for the intercept, and the identifier (schcode) indicates that students
are nested in schools; COVTYPE(VC) requests the estimation of the intercept variance (τ00). This line is
omitted when a deviance test is used for τ00.
for variances, Hox (2010) recommends that the obtained p values be divided by 2
because while this z test is a two-tailed test, variances must be zero or greater. It is
important to note that SAS provides these recommended p values, whereas SPSS
does not. So, p values obtained from SPSS for variances should be divided by 2
when assessing statistical significance. Given the small p values here, the results
indicate that student math scores vary within schools and that math means vary
between schools. Note, though, that these z tests provide approximate p values, as
variances are not normally distributed. More accurate inference for variances can be
obtained by testing model deviances, which are generally preferred over the z tests
and are discussed next.
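For readers who want to reproduce the halving by hand, the following Python sketch converts a variance estimate and its standard error into the one-sided p value Hox (2010) recommends. This is our own illustrative helper (the function name is not from the text), using only the standard library's complementary error function for the normal tail:

```python
import math

def halved_wald_p(estimate, std_error):
    """One-sided p value for a variance: the two-tailed Wald z p value divided by 2,
    since a variance cannot be negative."""
    z = estimate / std_error
    p_two_tailed = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return p_two_tailed / 2

# School-level intercept variance from Table 13.5: 10.6422 (SE = 1.0287), z = 10.35
p_tau00 = halved_wald_p(10.6422, 1.0287)   # far below .0001
```

SPSS reports the two-tailed value, so dividing its Sig. column by 2 reproduces what this helper returns.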
As is the case with other statistical techniques discussed in this book, statistical tests
that compare model deviances may often be conducted when maximum likelihood
estimation is used. With multilevel modeling, two forms of maximum likelihood
estimation are generally available in software programs: Full Maximum Likelihood
Table 13.5: Results From the Unconditional Model

SAS

Fit Statistics
-2 Res Log Likelihood        48877.3
AIC (smaller is better)      48881.3
AICC (smaller is better)     48881.3
BIC (smaller is better)      48889.3

Solution for Fixed Effects
Effect      Estimate   Standard Error   DF    t Value   Pr > |t|
Intercept   57.6742    0.1883           418   306.34    <.0001

Covariance Parameter Estimates
Cov Parm    Subject   Estimate   Standard Error   Z Value   Pr > Z
Intercept   schcode   10.6422    1.0287           10.35     <.0001
Residual              66.5507    1.1716           56.80     <.0001
SPSS

Information Criteria (a)
-2 Restricted Log Likelihood              48877.256
Akaike’s Information Criterion (AIC)      48881.256
Hurvich and Tsai’s Criterion (AICC)       48881.257
Bozdogan’s Criterion (CAIC)               48896.925
Schwarz’s Bayesian Criterion (BIC)        48894.925
The information criteria are displayed in smaller-is-better forms.
a. Dependent Variable: math.
Fixed Effects

Estimates of Fixed Effects (a)
                                                          95% Confidence Interval
Parameter   Estimate    Std. Error   df        t         Sig.   Lower Bound   Upper Bound
Intercept   57.674234   .188266      416.066   306.344   .000   57.304162     58.044306
a. Dependent Variable: math.
Covariance Parameters

Estimates of Covariance Parameters (a)
                                                                                95% Confidence Interval
Parameter                                 Estimate    Std. Error   Wald Z   Sig.   Lower Bound   Upper Bound
Residual                                  66.550655   1.171618     56.802   .000   64.293492     68.887062
Intercept [subject = schcode]  Variance   10.642209   1.028666     10.346   .000   8.805529      12.861989
a. Dependent Variable: math.
(FML) and Restricted Maximum Likelihood (RML), with the latter preferred when
the number of clusters is relatively small because it provides for unbiased estimates
of variance and covariances. However, when RML is used, only variances and covariances (not fixed effects) may be properly tested with the deviance method. When
FML is used, both fixed effects and variance-covariance elements may be tested using
model deviances, although West et al. (2014, p. 36) recommend that deviance tests of
variance-covariances be done with RML only and tests of fixed effects be conducted
with FML. In this example, RML, which is the default estimation procedure for SAS
and SPSS, is used for estimation.
To conduct a test using deviances to determine if the intercept varies across schools,
two models, one nested in the other, need to be estimated. One then obtains an overall
measure of model fit, the deviance, for each model and computes the difference between the nested
and full model deviances. This difference approximately follows a chi-square distribution,
with degrees of freedom equal to the difference in the number of parameters estimated between the full and nested
models. Note that since the intercept variance cannot be negative, Snijders and Bosker
(2012, p. 98) recommend halving the p values, which is the same as doubling the alpha
level used for the test (e.g., from .05 to .10).
To test the variance of the intercept (H0: τ00 = 0) using deviances, the two comparison
models must be identical in terms of the fixed effects and can differ only in the variances estimated. Thus, to estimate an appropriate comparison model here, Equation 7
is the level-1 model. The level-2 model is the same as EquationÂ€8 except there is no u0j
term in the model for β0j, as each u0j is constrained to be zero. As such, the variance of
β0j (i.e., τ00) in this new model is constrained to be zero. This new model, then, is nested
in the three-parameter model, represented by EquationÂ€9, and estimates two parameters: one fixed effect (like the previous model) but just one variance component, the
student-level variance (σ2). Note that to obtain the results for this nested model, you
use the same syntax as shown in TableÂ€13.4, except that the RANDOM subcommand
line is removed, which constrains τ00 toÂ€zero.
To complete the statistical test, we estimated this reduced two-parameter model and
found that the deviance, or the quantity −2 times the log likelihood, is 49,361.120,
whereas the original unconditional model deviance is 48,877.256 (as shown in
Table 13.5). The difference between these deviances is 483.864, which is greater than
the corresponding chi-square critical value of 2.706 (α = .10, df = 1). Therefore, we conclude that
the school math means vary in the population.
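The arithmetic of this deviance test is simple enough to verify directly. The Python sketch below is our own illustration; the hard-coded 2.706 is the chi-square critical value at the doubled alpha of .10 with 1 df, as described above:

```python
# Deviance (likelihood-ratio) test of H0: tau_00 = 0.
CHI2_CRIT = 2.706             # chi-square critical value, alpha = .10, df = 1

deviance_nested = 49361.120   # model with the school random effect removed
deviance_full = 48877.256     # unconditional model (Table 13.5)

lr_stat = deviance_nested - deviance_full   # difference in deviances: 483.864
reject_h0 = lr_stat > CHI2_CRIT             # True: school math means vary
```

The one degree of freedom reflects the single parameter (τ00) dropped in the nested model.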
Summarizing the results obtained from this unconditional model, performance on the
math test is, on average, 57.7. Math scores vary both within and between schools.
Inspecting the variance estimates indicates that a majority of math variance is within
schools. In this two-level design, the intraclass correlation provides a measure of the
proportion of variability in the outcome that exists between clusters. For the example
here, the intraclass correlation provides a measure of the proportion of variability in
math that is between schools. The formula for the intraclass correlation for a two-level
model is:

ρICC = τ00 / (τ00 + σ2)  (10)
For the current data set, the intraclass correlation estimate then is

ρICC = τ00 / (τ00 + σ2) = 10.642 / (10.642 + 66.551) = .138.  (11)
Thus, about 14% of the variation in math scores is between schools. According to
Spybrook and Raudenbush (2009, p. 304), the intraclass correlation for academic outcomes in two-level educational research with students nested in schools is often in
the range from 0.1 to 0.2, which is consistent with the data here and suggests that an
important part of the math variation is present across schools.
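Equation 11 can be reproduced with a few lines of Python (an illustrative helper of our own, not part of the SAS/SPSS output):

```python
def intraclass_correlation(tau_00, sigma2):
    """Equation 10: proportion of outcome variance that lies between clusters."""
    return tau_00 / (tau_00 + sigma2)

# Variance estimates from the unconditional model (Table 13.5)
icc = intraclass_correlation(10.642, 66.551)   # about .138
```

The same helper applies to any two-level model: only the between-cluster and within-cluster variance estimates change.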
13.5.2 Random-Coefficient Model
A second model often used in multilevel analysis is the random-coefficient model. In
this model, one or more predictors are added to the level-1 model, and the lower-level
intercept and slope for at least one of the predictors are specified to vary across clusters. In this example, student ses will be included as a predictor variable and we will
determine if the association between ses and math varies across schools. The level-1
or student-level model is

mathij = β0j + β1j(sesij − ses̄j) + rij,  (12)
where group-mean centered ses is now included as a predictor at level 1. As discussed
in section 13.4, with group-mean centering, β0j represents a given school j’s math
mean, and β1j represents the within-school association between ses and math. The
student-level residual term represents the part of the student-level math score that is
not predictable by ses, and rij ~ N(0, σ2).
In the school-level model, the regression coefficients of EquationÂ€12 serve as outcome
variables and no school-level predictors are included. This model is

β0j = γ00 + u0j  (13)
β1j = γ10 + u1j
where the two fixed effects (i.e., γ00 and γ10) represent the overall math average and
overall average of the student-level slopes relating ses to math. We allow the residual
terms to vary and covary, as in Equation 4. The combined expression for the multilevel
model is then

mathij = γ00 + γ10(sesij − ses̄j) + u0j + u1j(sesij − ses̄j) + rij.  (14)
Table 13.6 shows the syntax that can be used to estimate Equation 14 using SAS
and SPSS. Table 13.7 shows selected SPSS results, as results from SAS, as we have
seen, are very similar. In Table 13.7, the deviance for the random-coefficient model is
48,479.875. Recall that since RML was used, we cannot use this deviance to test any
hypotheses associated with the fixed effects. However, we will use this deviance to test
the slope variance (i.e., τ11). The estimates of the fixed effects indicate that the mean math
score is 57.7 and that the average of the within-school ses-math slopes is .313, meaning
that student math scores increase by about .3 points as student ses increases by 1 point.
The corresponding t test (t = 18.759) and p value (< .001) for this association indicate
that a positive association is present in the population.
For the variance and covariance estimates, we begin with the student-level residual variance in Table 13.7, which is 62.18 (p < .001), indicating that significant
student-level variance in math remains after adding ses. The estimates for the school
variance-covariance components are readily seen in the last output table in Table 13.7,
which is the variance-covariance matrix for the school random effects. This table
shows that the variance in math means between schools is 10.91, the variance in
slopes is .01, and the covariance between the school math means and ses-math slopes
Table 13.6: SAS and SPSS Control Lines for Estimating the Random-Coefficient Model

SAS

    PROC MIXED METHOD=REML NOCLPRINT COVTEST NOITPRINT;
    CLASS schcode;
(1) MODEL math = ses / SOLUTION;
(2) RANDOM intercept ses / type=un SUBJECT=schcode;
    RUN;

SPSS

(3) MIXED math WITH ses
(4)   /FIXED=ses | SSTYPE(3)
      /METHOD=REML
      /PRINT=G SOLUTION TESTCOV
(5)   /RANDOM=INTERCEPT ses | SUBJECT(schcode) COVTYPE(UN).
(1) The MODEL statement adds ses as a predictor variable.
(2) The RANDOM statement specifies that random effects appear in the model for the school math means and
the ses-math slopes; type=un specifies that a variance-covariance matrix be estimated for the school random
effects. Note that removing ses from this statement would specify a random intercept model, which constrains
τ11 and τ01 to zero.
(3) The MIXED statement indicates that ses is included as a covariate.
(4) The FIXED statement requests that a fixed effect be estimated for ses.
(5) The RANDOM statement specifies that random effects appear in the model for the school math means
and the ses-math slopes; COVTYPE(UN) specifies that a variance-covariance matrix be estimated for the
school random effects. Note that removing ses from this statement would specify a random intercept model,
which constrains τ11 and τ01 to zero.
Table 13.7: SPSS Results From the Random-Coefficient Model

Information Criteria (a)
-2 Restricted Log Likelihood              48479.875
Akaike’s Information Criterion (AIC)      48487.875
Hurvich and Tsai’s Criterion (AICC)       48487.881
Bozdogan’s Criterion (CAIC)               48519.215
Schwarz’s Bayesian Criterion (BIC)        48515.215
The information criteria are displayed in smaller-is-better forms.
a. Dependent Variable: math.
Fixed Effects

Estimates of Fixed Effects (a)
                                                          95% Confidence Interval
Parameter   Estimate    Std. Error   df        t         Sig.   Lower Bound   Upper Bound
Intercept   57.675771   .188222      416.090   306.425   .000   57.305787     58.045755
sesgrpcen   .312781     .016674      384.194   18.759    .000   .279998       .345565
a. Dependent Variable: math.
Covariance Parameters

Estimates of Covariance Parameters (a)
                                                                               95% Confidence Interval
Parameter                               Estimate    Std. Error   Wald Z   Sig.   Lower Bound   Upper Bound
Residual                                62.176171   1.122366     55.397   .000   60.014834     64.415345
Intercept + sesgrpcen       UN (1,1)    10.909371   1.028421     10.608   .000   9.068958      13.123270
[subject = schcode]         UN (2,1)    -.162162    .067697      -2.395   .017   -.294846      -.029477
                            UN (2,2)    .011194     .007102      1.576    .115   .003228       .038814
a. Dependent Variable: math.
Random Effect Covariance Structure (G) (a)
                      Intercept | schcode   sesgrpcen | schcode
Intercept | schcode   10.909371             -.162162
sesgrpcen | schcode   -.162162              .011194
Unstructured
a. Dependent Variable: math.
is −.16. The correlation, then, between school math means and ses-math slopes is
−.16/√(10.91 × .01) = −.48. This negative correlation indicates that schools with higher
math means tend to have flatter ses-math slopes, suggesting that math performance in
some schools is relatively high and more equitable for students having various ses values. Note that the value of the slope variance (.01) is not, perhaps, readily interpretable
and in an absolute sense seems small. To render the slope variance more meaningful,
we can compute the expression γ10 ± 2 × √τ11, which obtains values of β1j that are 2
standard deviations above and below the mean slope value. For these data, these slope
values are .113 and .513. Thus, there are schools in the sample where
the ses-math slope is fairly small (about a .11 increase in math for a 1-point change
in ses), whereas this association in other schools is stronger (about a .51 increase in
math for a 1-point change in ses). Further, using the z tests, the p values provided in the
Covariance Parameters table indicate that the variance in the math means (p < .001)
and the covariance of the math means and ses-math slopes (p = .017) are significant at
the .05 level but that the variance in ses-math slopes is not (p / 2 = .115 / 2 = .058). As
discussed, these z tests do not provide inference as accurate as deviance tests, so in the
next section we consider using a deviance test to assess the variance-covariance terms
associated with the slope.
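These two calculations can be checked against the unrounded SPSS estimates. The Python sketch below is our own illustration; note that the unrounded values give a correlation of about −.46 rather than the −.48 obtained from the rounded estimates in the text, and slope endpoints of about .10 and .52 rather than .113 and .513:

```python
import math

# School-level estimates from Table 13.7 (random-coefficient model)
tau_00 = 10.909371    # variance of the school math means
tau_11 = 0.011194     # variance of the ses-math slopes
tau_01 = -0.162162    # covariance of math means and slopes
gamma_10 = 0.312781   # average within-school ses-math slope

# Correlation between school math means and ses-math slopes
corr = tau_01 / math.sqrt(tau_00 * tau_11)

# Slopes 2 standard deviations below and above the average slope
slope_low = gamma_10 - 2 * math.sqrt(tau_11)
slope_high = gamma_10 + 2 * math.sqrt(tau_11)
```

The small discrepancies from the text's values are purely a rounding artifact; the substantive conclusions are unchanged.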
FigureÂ€13.2 provides a visual depiction of these results. This plot shows predicted math
scores for each of 50 schools as a function of student ses (with 50 schools selected
instead of all schools to ease viewing). Given that ses is group-mean centered, the
mean math score for a given school is located on the regression line above an ses score
Figure 13.2 Predicted math scores as a function of ses for each of 50 schools.
of zero. Examining the plot suggests that these mean math scores vary greatly across
schools. In addition, the plot suggests that the math-ses slopes vary across schools,
as some slopes are near zero while others are clearly positive. Also, the negative correlation between the math means and math-ses slopes is evident in that schools having
predicted math scores greater than 57 when ses is zero tend to have slopes near zero
(flat slopes), whereas other schools (with lower mean math scores) tend to have positive math-ses slopes.
13.5.3 Deviance Test for a Within-School Slope Variance and Covariance
Previously, we showed how model deviances can be used to test a single variance
(e.g., τ00). We now show how model deviances can be used to test the variance and
covariance associated with adding a random effect for a within-school slope. As
before, we compare the deviance from two models, where one model is nested in
the other. The random-coefficient model (i.e., the full model) has already been estimated, and this model includes six parameters: two fixed effects (γ00 and γ10) and four
variance-covariance terms, that is, the student-level variance (σ2), the variance of the
math means (τ00), the slope variance (τ11), and the covariance between the math means
and ses-math slopes (τ01). The nested model that we will estimate will constrain the
slope variance (τ11) to zero and, by doing so, will also constrain the covariance (τ01)
to zero.
Recall that when testing variance-covariance terms, the two comparison models must
have the same fixed effects. Thus, for this reduced model, EquationÂ€12 remains the
student-level model. In addition, EquationÂ€13 is the school-level model, except that
there is no u1j term in the model for β1j, as each u1j is constrained to be zero (which
then constrains τ11 and τ01 to zero). Thus, the reduced model has four parameters: the
same two fixed effects as the random-coefficient model, but just two variances: the
student-level variance (σ2) and the variance of the math means (τ00). Note that this
random intercept model can be estimated with SAS and SPSS by removing ses from
the respective RANDOM statement from the syntax in Table 13.6.
We estimated the random intercept model to conduct this deviance test. The estimate of the deviance from the random intercept model is 48,488.846, whereas the
random-coefficient model returned a deviance of 48,479.875. The difference between
these deviances is 8.971. A key difference between the deviance test of a single variance (as illustrated in section 13.5.1) and the test of the variance and covariance here is
that this test statistic does not follow a standard chi-square distribution (Snijders & Bosker,
2012, p. 99; West et al., 2014, p. 36). Instead, this test statistic follows a chi-bar distribution, which is a mix of chi-square distributions having different degrees of freedom.
Snijders and Bosker (2012, p. 99) provide selected critical values for such a distribution, and we use a critical value from their text given an alpha of .05 and when the
slope variance and covariance for a single predictor (here, ses) are being tested, with this
critical value being 5.14. Given in our example that the test statistic of 8.971 exceeds