Chapter 7. Testing Group Differences in Longitudinal Change

Statistical Power Analysis with Missing Data

Table 7.1
Gender Differences in a Five-Wave Longitudinal Design

                          Age
Gender      11      12      13      14      15
Boys         O       O       O       O       O
Girls        O       O       O       O       O
intervention, we would repeatedly measure participants on four more
equally spaced measurement occasions. We might expect that both
groups will show some improvements over time (the intervention group
due to the exercise, the control group due to motivational or other indi‑
rect influences), but we expect that the treatment group will show greater
increases in muscle strength and gait speed over the 4‑month period of
our study. See Curran and Muthén (1999) for a complete data application
of this type of design.
There is growing awareness that change over time (or the benefits from
an intervention) can vary systematically from one individual to another.
Figure 7.1, for example, plots individual trajectories for 16 individuals from
the Willett and Sayer (1994) study. Data for boys are shown with solid lines,
and data for girls are shown with dashed lines. From this small selection of
cases, it appears as though tolerance of deviant behavior generally increases
with age. The extent to which there are differences in longitudinal change
for boys and girls, however, is not clear. Individuals can vary in terms of
where they start (or where they finish, which might be of greatest interest
in an intervention study), as well as how they change over time. Some may
even decrease while others increase. Similarly, the rate of change (or extent
of benefit from our intervention) might differ systematically as a function of
age, gender, initial strength, muscle mass, motivation, or any of a host of fac‑
tors. Growth curve models (GCM) are an increasingly common way to ana‑
lyze change in such longitudinal designs (T. E. Duncan, Duncan, Strycker,
Li, & Alpert, 2006; McArdle, 1994; S. W. Raudenbush & Bryk, 2002).
Growth curve models allow researchers to estimate underlying devel‑
opmental trajectories, adjusting for measurement error. The parameters of

Table 7.2
Intervention Effects in a Randomized Five‑Wave Longitudinal Design
Treatment
Control

Pretest

R

Month 1

Month 2

Month 3

Month 4

O
O

X

O
O

O
O

O
O

O
O

Figure 7.1
Plot of observed individual trajectories for 16 cases. (Line plot of log tolerance, ranging from 0 to 4, against age from 11 to 15; solid lines for boys, dashed lines for girls.)

these underlying “true score” trajectories (such as the intercept and rate of
change) can be predicted, in turn, by other characteristics of individuals.
Modeling these inter-individual differences in intra-individual rates of
change can identify characteristics of individuals who benefit the most from
an intervention and of those who benefit less. Because growth curve models allow
for the estimation of individual differences in change over time, researchers
can examine the differential responses to treatment in order to identify fac‑
tors associated with stronger or weaker responsiveness to treatment. There
is even a growing movement to use adaptive treatment strategies (Collins,
Murphy, & Bierman, 2004) in order to enhance intervention efficacy.
For example, Figure  7.2 shows plots of the estimated regression lines
through each individual’s data. Individuals have their own estimated
intercept and rate of change. As can be seen, most individuals’ tolerance
of deviant behavior is increasing, although there is considerable variabil‑
ity both in terms of where individuals are at age 11 and how quickly they
change over time.
As an alternative to traditional repeated-measures analysis of variance
or ANCOVA approaches, growth curve models allow for esti-
mating true change over time, as well as for investigation of inter‑individ‑
ual differences in intra‑individual change. Under many circumstances,
growth curve models may also be more powerful statistical techniques
than the traditional alternatives mentioned above (cf. Cole, Maxwell,
Arvey, & Salas, 1993; Maxwell, Cole, Arvey, & Salas, 1991).

Figure 7.2
Estimated individual trajectories for the same 16 cases. (Line plot of fitted log tolerance, ranging from 0 to 4, against age from 11 to 15; separate lines for boys and girls.)

Curran and Muthén (1999; B. O. Muthén & Curran, 1997) have estimated
statistical power of growth curve models with complete data across a
variety of different sample sizes, differing numbers of measurement occa‑
sions, and effect sizes. However, what if we expect some proportion of
individuals to drop out of our study? What if the dropout is expected to
be systematic? How would these situations affect the statistical power to
evaluate longitudinal change due to the intervention?
In this application, we use data from Willett and Sayer (1994) to deter‑
mine statistical power for both detecting group differences in longitudinal
change and for assessing the variability in rates of longitudinal change.

The Steps
In the previous chapter we worked through a very simple example to
illustrate the effects of missing data on statistical power under a variety
of different circumstances. In this chapter, we once again work through
each of the seven steps in the process of conducting a power analysis with
missing data using a slightly more complex model.

Step 1: Selecting a Population Model
Our model extends the five‑wave complete‑data example presented in
Willett and Sayer (1994) to a situation with incomplete or missing data. In
their example, data on tolerance of deviant behavior (logged to improve
normality) were obtained from 168 eleven‑year‑old boys and girls. These
students were assessed on five occasions over a 4‑year period (at ages 11,
12, 13, 14, and 15) in order to observe the change in tolerance for devi‑
ant behavior over time. Willett and Sayer (1994) analyzed these data by
including gender and reported exposure to deviant behavior at Wave 1 as
potential predictors of change. Using the parameter estimates from their
published data, we derived the implied covariance matrices and mean
vectors for boys and girls. In this example, for simplicity, we assume equal
numbers of boys and girls in the sample, because their sample was nearly
equally divided on the basis of gender (48% boys). The model described
above is presented graphically in Figure 7.3.
The basic y‑side of the LISREL model for a confirmatory factor model con‑
sists of three matrices: Λy (LY), which contains the regression coefficients of
the observed variables on the latent variables; Ψ (PS), a matrix of the latent
variable residuals; and Θε (TE), a matrix of the observed variable residuals.
Figure 7.3
Growth model. Data from “Using Covariance Structure Analysis to Detect Correlates and Predictors of Change,” by J. B. Willett and A. G. Sayer, 1994, Psychological Bulletin, 116, 363–381. (Path diagram: an intercept factor with loadings of 1 on all five measures and a slope factor with loadings of 0, 1, 2, 3, and 4 on the age 11 through age 15 measures; intercept variance 0.01249, slope variance 0.00292, intercept–slope covariance 0.00209; residual variances e1 through e5 of 0.01863, 0.02689, 0.03398, 0.02425, and 0.01779.)

We will also include latent intercepts, τy (TY), and latent means, α (AL). The
estimated population parameters for this model can be specified as follows:



\[
\Lambda_y = \begin{bmatrix}
1 & 0 \\
1 & 1 \\
1 & 2 \\
1 & 3 \\
1 & 4
\end{bmatrix}, \quad
\Psi = \begin{bmatrix}
0.01249 & 0.00209 \\
0.00209 & 0.00292
\end{bmatrix},
\]

\[
\Theta_\varepsilon = \begin{bmatrix}
0.01863 & 0 & 0 & 0 & 0 \\
0 & 0.02689 & 0 & 0 & 0 \\
0 & 0 & 0.03398 & 0 & 0 \\
0 & 0 & 0 & 0.02425 & 0 \\
0 & 0 & 0 & 0 & 0.01779
\end{bmatrix}, \quad \text{and} \quad
\tau_y = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}
\]

for both boys and girls, and

\[
\alpha = \begin{bmatrix} 0.22062 \\ 0.08314 \end{bmatrix} \text{ for boys and } \quad
\alpha = \begin{bmatrix} 0.20252 \\ 0.06584 \end{bmatrix} \text{ for girls.}
\]

Step 2: Selecting an Alternative Model
Having defined our population model, a number of alternative models
might be of interest. In this chapter, we focus on two of them. Our pri‑
mary interest might be in whether the extent of longitudinal latent change
differs between boys and girls. An appropriate alternative hypothesis,
then, is that the means do not differ from one another by gender; both
groups change, on average, in the same way. The change parameter is
represented by α21 for boys and girls. Our first alternative hypothesis, then, is
that α21,Boys = α21,Girls, and it will be evaluated with population data where the
change parameters differ by (0.08314 − 0.06584), or 0.0173.
A second alternative hypothesis that is important in growth curve modeling
is whether there is significant variability in how individuals change over
time. In other words, is there evidence that individuals change in different
ways from one another? Rather than testing whether the variance of the
slope term is zero (i.e., everyone changes in an identical fashion over time,
as assumed in repeated-measures analysis of variance), we test the variance
of the latent slope term, represented by element ψ22, against a more realistic
alternative hypothesis that it represents only a trivial amount of variability,
defined here as 50% of its true value. For the purposes of this example,

we also simultaneously test that the covariance between the latent inter‑
cept and latent slope, represented by element ψ21, is equal to zero. Our
second alternative hypothesis is therefore multivariate, with 2 degrees of
freedom: it tests that the covariance between intercept and slope (equal to
0.00209 in the population) is zero and that the variance of the latent slope
(equal to 0.00292 in the population) is 0.00146.
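As a quick arithmetic check, the quantities behind these two hypotheses follow directly from the population values above. This is a sketch; the variable names are ours, not from the text:

```python
# Population values from Step 1 (variable names are ours)
alpha_slope_boys = 0.08314   # latent slope mean, boys
alpha_slope_girls = 0.06584  # latent slope mean, girls
psi_22 = 0.00292             # latent slope variance
psi_21 = 0.00209             # intercept-slope covariance

# First alternative: equal slope means, against a true difference of 0.0173
diff = alpha_slope_boys - alpha_slope_girls
print(round(diff, 5))        # 0.0173

# Second alternative (2 df): psi_21 = 0 and psi_22 at half its true value
half_var = psi_22 / 2
print(half_var)              # 0.00146
```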

Step 3: Generating Data According to the Population Model
Next we use the population parameters to generate data. In this case,
the covariance matrix and means are sufficient to estimate our model.
Expressed as a structural equation model, the implied population cova-
riance matrix is Σ = ΛyΨΛ′y + Θε, and the expected vector of means is
μy = τy + Λyα. SAS syntax to go from population parameters to the covari-
ance matrix and vector of means implied by those parameters is provided
below.
proc iml;
ly = {1 0, 1 1, 1 2, 1 3, 1 4};
ps = {0.01249 0.00209, 0.00209 0.00292};
te = {0.01863 0 0 0 0,
      0 0.02689 0 0 0,
      0 0 0.03398 0 0,
      0 0 0 0.02425 0,
      0 0 0 0 0.01779};
tyb = {0, 0, 0, 0, 0};
tyg = {0, 0, 0, 0, 0};
alb = {0.22062, 0.08314};
alg = {0.20252, 0.06584};
sigma = ly*ps*ly` + te;
mub = tyb + ly*alb;
mug = tyg + ly*alg;
print sigma mub mug;
quit;
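For readers who want to verify these computations outside SAS, the matrix algebra translates directly to NumPy. This is a sketch paralleling the PROC IML code above, not part of the original example:

```python
import numpy as np

# Population parameters for the growth model (Step 1)
ly = np.array([[1, 0], [1, 1], [1, 2], [1, 3], [1, 4]], dtype=float)
ps = np.array([[0.01249, 0.00209], [0.00209, 0.00292]])
te = np.diag([0.01863, 0.02689, 0.03398, 0.02425, 0.01779])
al_boys = np.array([0.22062, 0.08314])
al_girls = np.array([0.20252, 0.06584])

# Implied moments: Sigma = Ly Ps Ly' + Te and mu = ty + Ly al (ty = 0 here)
sigma = ly @ ps @ ly.T + te
mu_boys = ly @ al_boys    # 0.22062, 0.30376, 0.38690, 0.47004, 0.55318
mu_girls = ly @ al_girls  # 0.20252, 0.26836, 0.33420, 0.40004, 0.46588
print(np.round(sigma, 5))
```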

By matrix arithmetic, we obtain the following population covariance
matrix, which is identical for the boys and girls:



\[
\Sigma = \begin{bmatrix}
0.03112 & 0.01458 & 0.01667 & 0.01876 & 0.02085 \\
0.01458 & 0.04648 & 0.02460 & 0.02961 & 0.03462 \\
0.01667 & 0.02460 & 0.06651 & 0.04046 & 0.04839 \\
0.01876 & 0.02961 & 0.04046 & 0.07556 & 0.06216 \\
0.02085 & 0.03462 & 0.04839 & 0.06216 & 0.09372
\end{bmatrix}.
\]

Using the corresponding vectors of means for the boys and girls groups, the
implied means are

\[
\mu_y(\text{Boys}) = \begin{bmatrix} 0.22062 \\ 0.30376 \\ 0.38690 \\ 0.47004 \\ 0.55318 \end{bmatrix} \quad \text{and} \quad
\mu_y(\text{Girls}) = \begin{bmatrix} 0.20252 \\ 0.26836 \\ 0.33420 \\ 0.40004 \\ 0.46588 \end{bmatrix}.
\]
This is everything needed to begin considering missing data in the model.
Step 4: Selecting a Missing Data Model
For the sake of this example, suppose that some portion of our popula‑
tion had complete data and the rest of our population had data only for
the first two occasions. If the data are MCAR, then the observed por‑
tions of each covariance matrix would be identical (in the population —
in any selected subsample, there would be some variation around the
overall population values). Under these circumstances the observed and
missing portions of our data would correspond with the ones below.
 0.03112

 0.01458
Σyy ( Incomplete) = 
?

?

?




0.01458
0.04648
?
?
?

?
?
?
?
?

?
?
?
?
?

 0.220062 


 0.30376 
µ y (Boys) = 
 , and µ y (Girls) =
?


?


?



?

?
?,
?

?
 0.20252 


 0.26836 

.
?


?


?



Things are not quite so simple for MAR data where the nonresponse is selec‑
tive. In order for data to be MAR, the probability that data are missing must
depend solely on observed data. Suppose that the probability that an obser‑
vation is missing depends on a weighted combination of their values on the
first two occasions. This also allows for the possibility of missing data at
waves 3 through 5 (ages 13 through 15). For this example, selection for MAR
data was determined by the value of scores at age 11 and 12, with the former
given twice the weight of the latter (i.e., s = 2 × t1 + 1 × t2). In other words,
the scores at age 11 were twice as important in predicting the likelihood of
missing data as the scores at age 12. In the missing data conditions, data
were set as missing for the third through fifth occasions of measurement.

Step 5: Applying the Missing Data Model to Population Data
If we use this weight to determine the probability that data will be observed
or unobserved, then both the covariance matrix and means would necessar‑
ily differ between the selected and unselected groups. The means would dif‑
fer because we selected them that way on a probabilistic basis. The covariance
matrices would differ because their values are calculated within each group
(i.e., deviations from the group means, not the grand mean). As we discussed in
Chapter 5, the formulas for how the population covariance matrices and means
will be deformed by this selection process have been known for a very long
time (Pearson, 1903), and they are straightforward to calculate, which we will
do here. For Monte Carlo applications, a researcher could perform the corre‑
sponding steps using raw data, which we will consider in detail in Chapter 9.
We can define w as a weight vector containing the regression coefficients
linking the observed variables with nonresponse. In this case,
w = [2 1 0 0 0]. In the MCAR case, the weights for both t1 and
t2 would be 0 because, by definition, selection does not depend on any
observed — or unobserved — values. Pearson’s selection formula indi-
cates that the mean value of our s is given as μs = wμy. Algebraically, we
can express the same association as E(s) = 2 × E(t1) + 1 × E(t2) + 0 × E(t3)
+ 0 × E(t4) + 0 × E(t5). For this example, the expected value of s would be
0.74500 (2 × 0.22062 + 1 × 0.30376) in the boys group and 0.67340 (2 × 0.20252
+ 1 × 0.26836) in the girls group. Similarly, we can calculate the variance
of s as σs² = wΣw′. Algebraically, V(s) = 4 × σ11 + 4 × σ12 + 1 × σ22. So the
variance of s is 0.22928 (4 × 0.03112 + 4 × 0.01458 + 1 × 0.04648) in both the
boys and girls groups (standard deviation = 0.47883).
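The expected value and variance of s are easy to verify numerically; the following NumPy sketch (ours, not from the text) reproduces the values just reported:

```python
import numpy as np

# Population moments from Step 3 and the MAR selection weights w
ly = np.array([[1, 0], [1, 1], [1, 2], [1, 3], [1, 4]], dtype=float)
ps = np.array([[0.01249, 0.00209], [0.00209, 0.00292]])
te = np.diag([0.01863, 0.02689, 0.03398, 0.02425, 0.01779])
sigma = ly @ ps @ ly.T + te
mu_boys = ly @ np.array([0.22062, 0.08314])
mu_girls = ly @ np.array([0.20252, 0.06584])

w = np.array([2.0, 1.0, 0.0, 0.0, 0.0])  # s = 2*t1 + 1*t2
mus_boys = w @ mu_boys    # 2(0.22062) + 0.30376 = 0.74500
mus_girls = w @ mu_girls  # 2(0.20252) + 0.26836 = 0.67340
var_s = w @ sigma @ w     # 4(0.03112) + 4(0.01458) + 0.04648 = 0.22928
sd_s = np.sqrt(var_s)     # ~0.47883
print(mus_boys, mus_girls, var_s, sd_s)
```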
As in Chapter 5, the values of s can be used to divide a population at
any point. If we wish to divide our population in half, we can cut it at the
mean. The segment of the population with values above the mean on s
would be selected into one group (complete data) and the segment of the
population with values below the mean on s would be in the unselected
group (missing data).
As we saw in Chapter 5, for a z‑score of 0, the PDF is approximately 0.40, and
the CDF is 0.50. Using these values, the means of s in the selected and unselected
portion of the boys group are 1.127 and 0.363. The means of s in the selected
and unselected portions of the girls group are 1.055 and 0.291. Similarly, the
variance of s is approximately 0.0833 in both halves of each group.
We again use these means and variances to calculate ω and κ in the selected
and unselected segments of our population. This gives us approximate values
of ω of −2.777 for both the selected and unselected portions of each group.
Approximate values of κ in the selected portion of our population are 1.666 for
the boys and girls groups. Approximate values of κ for the unselected portion
of our population for both the boys and girls groups are −1.666. SAS syntax to
calculate these quantities in 5% increments appears below.

proc iml;
ly = {1 0, 1 1, 1 2, 1 3, 1 4};
ps = {0.01249 0.00209, 0.00209 0.00292};
te = {0.01863 0 0 0 0,
      0 0.02689 0 0 0,
      0 0 0.03398 0 0,
      0 0 0 0.02425 0,
      0 0 0 0 0.01779};
al = {0.22062, 0.08314}; * Use Boys or Girls group means;
ty = ly*al;
sigma = ly*ps*ly` + te;
w = {2 1 0 0 0};
mus = w*ty;
vars = w*sigma*w`;
sds = root(vars);
do p = 0.05 to 0.95 by 0.05;
  d = quantile('NORMAL', p);
  phis = PDF('NORMAL', d);
  phiss = CDF('NORMAL', d);
  xPHIs = 1 - phiss;
  * Truncated normal means and variances of s above and below the cut;
  muss = mus + sds*phis*inv(xPHIs);
  musu = mus - sds*phis*inv(phiss);
  varss = vars*(1 + (d*phis*inv(xPHIs)) - (phis*phis*inv(xPHIs)*inv(xPHIs)));
  varsu = vars*(1 - (d*phis*inv(phiss)) - (phis*phis*inv(phiss)*inv(phiss)));
  * Pearson selection: omega and kappa, then the deformed moments;
  omegas = inv(vars)*(varss - vars)*inv(vars);
  omegau = inv(vars)*(varsu - vars)*inv(vars);
  sigmas = sigma + omegas*(sigma*(w`*w)*sigma);
  sigmau = sigma + omegau*(sigma*(w`*w)*sigma);
  ks = inv(vars)*(muss - mus);
  ku = inv(vars)*(musu - mus);
  mues = ks*ps*ly`*w`;
  mueu = ku*ps*ly`*w`;
  tys = ty + ly*mues;
  tyu = ty + ly*mueu;
  print p, sigma ty sigmas tys sigmau tyu;
end;
quit;
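To verify the median-split quantities reported above, the following NumPy sketch (ours, not the book's code) applies Pearson's selection formulas directly, using the full μ* = μ + κΣw′ form for the observed means:

```python
import math
import numpy as np

# Population moments (Step 3) and selection weights (Step 4), boys group
ly = np.array([[1, 0], [1, 1], [1, 2], [1, 3], [1, 4]], dtype=float)
ps = np.array([[0.01249, 0.00209], [0.00209, 0.00292]])
te = np.diag([0.01863, 0.02689, 0.03398, 0.02425, 0.01779])
sigma = ly @ ps @ ly.T + te
mu_boys = ly @ np.array([0.22062, 0.08314])
w = np.array([2.0, 1.0, 0.0, 0.0, 0.0])

mu_s = w @ mu_boys        # mean of s (boys): 0.745
var_s = w @ sigma @ w     # variance of s: 0.22928
sd_s = math.sqrt(var_s)

# Median split: cut s at its mean, so z = 0, pdf ~0.3989, cdf = 0.5
z, cdf = 0.0, 0.5
pdf = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

mu_s_sel = mu_s + sd_s * pdf / (1 - cdf)     # ~1.127
mu_s_un = mu_s - sd_s * pdf / cdf            # ~0.363
var_s_half = var_s * (1 - (pdf / cdf) ** 2)  # ~0.0833 (z*pdf term vanishes at z = 0)

omega = (var_s_half - var_s) / var_s**2      # ~-2.777
kappa = (mu_s_sel - mu_s) / var_s            # ~1.666 (selected half)

# Deformed covariance matrix and means for the selected half of the boys
sigma_sel = sigma + omega * np.outer(sigma @ w, w @ sigma)
mu_sel = mu_boys + kappa * (sigma @ w)  # ~0.3486, 0.4298, 0.4834, 0.5819, 0.6804
print(np.round(sigma_sel, 5))
print(np.round(mu_sel, 4))
```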

Step 6: Estimating Population and Alternative Models With Incomplete Data
Using the syntax above, we obtain the following values for the means and
covariance matrices for the selected (complete; top half) and unselected
(missing; bottom half) portions of our boys and girls groups. Because we
divided our population at the mean, the covariance matrix is the same in

both the selected and unselected portions. It is also identical for both boys
and girls. However, the means differ between selected and unselected
portions; they also differ between boys and girls.













\[
\Sigma_{yy}(\text{Boys, Selected}) = \Sigma_{yy}(\text{Girls, Selected}) = \begin{bmatrix}
0.01473439 & -0.00155392 & 0.00431147 & 0.00444126 & 0.00457104 \\
-0.00155392 & 0.03059391 & 0.01243131 & 0.01551120 & 0.01859110 \\
0.00431147 & 0.01243131 & 0.05718882 & 0.02966036 & 0.03611191 \\
0.00444126 & 0.01551120 & 0.02966036 & 0.06304741 & 0.04793445 \\
0.00457104 & 0.01859110 & 0.03611191 & 0.04793445 & 0.07754700
\end{bmatrix},
\]

\[
\mu_y(\text{Boys, Selected}) = \begin{bmatrix} 0.3486 \\ 0.4298 \\ 0.4834 \\ 0.5819 \\ 0.6804 \end{bmatrix}, \quad
\mu_y(\text{Boys, Unselected}) = \begin{bmatrix} 0.0926 \\ 0.1777 \\ 0.2904 \\ 0.3582 \\ 0.4260 \end{bmatrix},
\]

\[
\mu_y(\text{Girls, Selected}) = \begin{bmatrix} 0.3305 \\ 0.3944 \\ 0.4307 \\ 0.5119 \\ 0.5933 \end{bmatrix}, \quad \text{and} \quad
\mu_y(\text{Girls, Unselected}) = \begin{bmatrix} 0.0745 \\ 0.1423 \\ 0.2377 \\ 0.2882 \\ 0.3387 \end{bmatrix}.
\]

In the unselected portion of the population, all values associated with
the last three measurement occasions would be unobserved. In order to
reflect this uncertainty about their true values in our models, we use the
conventions for estimating structural equation models with missing data
that we presented in Chapter 3.
For our input data matrices, the conventions are very simple. Each different
pattern of observed/missing data becomes its own group in our model. We
replace every missing diagonal element of the covariance matrix with ones.

We replace every missing off‑diagonal element of the covariance and every
missing element of the mean vector with 0s. For the missing data condition
in the boys group, then, our input data would look like the following:
 .01473439

 −.00155392
Σ yy (Boys, Unselected) = 
0

0

0

 .09261372

 .17771997
µ y (Boys, Unselected) = 
0

0

0



−.00155392
.03059391
0
0
0

0
0
1
0
0

0
0
0
1
1

0
0
0
0
1




 and












Similarly, the incomplete data condition in the girls group would have the
following input data:
 .01473439

 −.00155392
Σ yy (Girls, Unselected) = 
0

0

0

 .07451372

 .14231997
µ y (Girls, Unselected) = 
0

0

0



−.00155392
.03059391
0
0
0

0
0
1
0
0

0
0
0
1
1

0
0
0
0
1




 and












Remember that these substituted values for the missing data elements are
only placeholders to give our input data matrices the same shape in the
complete and missing data conditions. They do not figure into any aspect
of the analyses, nor do the values influence our results.
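The padding convention just described can be sketched in a few lines; `pad_pattern` is a hypothetical helper name of ours, not something from the text:

```python
import numpy as np

def pad_pattern(cov_obs, mu_obs, n_total=5):
    """Pad a pattern observed on the first n variables to full size:
    1s on the missing diagonal, 0s for missing covariances and means."""
    n_obs = cov_obs.shape[0]
    cov = np.eye(n_total)          # 1s on every missing diagonal element
    cov[:n_obs, :n_obs] = cov_obs  # observed block overwrites the corner
    mu = np.zeros(n_total)         # 0s for every missing mean
    mu[:n_obs] = mu_obs
    return cov, mu

# Boys, unselected pattern: observed at the first two occasions only
cov_obs = np.array([[0.01473439, -0.00155392],
                    [-0.00155392, 0.03059391]])
cov, mu = pad_pattern(cov_obs, np.array([0.09261372, 0.17771997]))
print(cov)
print(mu)
```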
Again, the effects of these placeholders are removed from our model
in the following way. Elements of lambda‑y and tau‑y that correspond
with missing observations are given values of zero to remove the effects
of the off‑diagonal elements and means. Elements of theta‑epsilon that
correspond with missing observations are given values of one to subtract