3 Practical Matters: What Impact Do Sampling Biases Have on Meta‑Analytic Conclusions?
COMBINING AND COMPARING EFFECT SIZES
viewed as untrustworthy and uninformative? Absolutely not. You should
remember that the available literature is all that we as scientists have, so if
you dismiss this literature as not valuable, then we have nothing on which
to base our empirical sciences. Moreover, it is important to remember that
a meta-analytic review is no more subject to sampling bias than other literature reviews. In fact, meta-analysis offers two advantages over traditional
approaches to literature review that allow us to face the challenge of sampling bias. First, meta-analysts typically are more exhaustive in searching
the literature than those performing narrative reviews, and the search procedures are made transparent in the reporting of meta-analyses. Second, only
meta-analysis allows you to evaluate and potentially correct for publication/
sampling bias. Although there is no guarantee that these methods will perfectly fix the problem, they are far better than simply ignoring it.
Publication bias poses a threat to the conclusions drawn in a meta-analysis.
Fortunately, there exist several methods for detecting potential bias, as well
as various methods of correction (though none are universally agreed upon).
I recommend considering multiple approaches to identifying these threats
(especially for publication bias, for which there are numerous approaches).
The value of considering multiple approaches was illustrated through an
ongoing example meta-analysis, in which some approaches suggested potential bias whereas others did not. The more evidence you can bring to bear on
these potential problems, the more likely you are to come to satisfying conclusions. I also recommend keeping up to date with the literature on these topics, as these represent some of the most active areas of quantitative research.
Begg, C. B. (1994). Publication bias. In H. Cooper & L. V. Hedges (Eds.), The handbook of
research synthesis (pp. 399–409). New York: Russell Sage Foundation.—This chapter
is a reasonably comprehensive, yet concise, overview of the issues involved in considering and evaluating publication bias in meta-analysis.
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (Eds.). (2005b). Publication bias in meta-analysis: Prevention, assessment and adjustments. Hoboken, NJ: Wiley.—This is the
book to read if you want to learn as much as possible about publication bias. This
edited book contains 16 chapters, each considering in depth different methodological
and statistical approaches to avoiding, detecting, or correcting publication bias.
1. Although this statement is accurate for most types of publication bias, others
could exist. For example, if a particular therapy has a potential adverse side
effect, it is plausible that studies demonstrating this effect are more likely to be
suppressed. In this case, publication bias might be in the direction of overrepresentation of null (absence of adverse effects of the therapy) or negative (lower
rates of adverse effects with the therapy) results.
2. Some (Antes & Chalmers, 2003; Chalmers, 1990) have argued that a researcher
conducting a study but not fully reporting the results is unethical. This point is
easy to see when you consider clinical work in which side effects (unexpected
findings) are not reported or when studies failing to support a treatment are suppressed. The same argument applies, however, to basic research. Not reporting or
selectively reporting results poses costly (in terms of time and effort) obstacles
to the progression of basic science, even if the immediate impact on individuals
is not as evident as in applied research. Moreover, studies are often conducted
using external funding (public or private) and involve the time of the individuals
participating in the study; relegating the findings from these investments to the
file drawer represents a waste of the limited resources available to science.
3. To evaluate publication bias for the random-effects model I presented in Chapter
10, the test of moderation by publication status would rely on a mixed-effects
model. Examination of funnel plots and of the association between effect size and sample size does not differ whether you use a fixed- or random-effects model. Computation
of failsafe numbers assumes a fixed-effects model and has not yet been extended
to the random-effects framework.
4. It is also common to denote the effect sizes on the x-axis and the sample sizes on
the y-axis. Either choice is acceptable, as they allow the same examination. My
choice of plotting effect sizes on the y-axis and sample size on the x-axis is simply
because this is more consistent with the regression-based methods (in which you
examine whether effect sizes are predicted by sample size) described in the next section.
5. This particular plot assumes homogeneity, or the absence of between-study variability beyond expectable sampling fluctuations (i.e., the standard errors of effect
sizes). In the presence of heterogeneity, you expect greater dispersion of effect
sizes (i.e., a wider vertical span between the two solid lines), but the funnel plot
should retain this symmetric shape.
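The expected dispersion described in note 5 can be sketched numerically. Assuming Fisher's z effect sizes, whose standard error is approximately 1/√(n − 3), this hypothetical example computes the ±1.96 standard-error limits that trace the symmetric funnel under homogeneity (the function name and sample sizes are illustrative, not from the text):

```python
import numpy as np

def funnel_bounds(mean_es, sample_sizes, crit=1.96):
    """Expected funnel limits under homogeneity.

    For Fisher's z effect sizes, SE = 1 / sqrt(n - 3); under homogeneity,
    observed effects should scatter symmetrically around the mean within
    roughly +/- 1.96 standard errors, narrowing as n grows.
    """
    n = np.asarray(sample_sizes, dtype=float)
    se = 1.0 / np.sqrt(n - 3.0)
    return mean_es - crit * se, mean_es + crit * se

lower, upper = funnel_bounds(0.30, [20, 50, 100, 400])
# limits are widest at n = 20 and narrowest at n = 400,
# tracing the funnel shape around the mean of 0.30
```

Under heterogeneity, a random-effects version would add between-study variance to the squared standard error before taking the square root, widening the band without breaking its symmetry.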
6. A method of “controlling for” this variable would be to regress effect sizes onto
this variable and then plot residuals (instead of effect sizes) in relation to sample
size. Although this funnel plot would not display the mean effect size, you could
still evaluate the symmetry for presence of publication bias. This same method
can be useful if there exist moderators of effect sizes that make visual inspection
of effect size funnel plots difficult.
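The residualizing strategy in note 6 amounts to a simple least-squares step. The sketch below (hypothetical data and function name) regresses effect sizes on a moderator and returns residuals that can then be plotted against sample size:

```python
import numpy as np

def residualized_effects(effect_sizes, moderator):
    """Regress effect sizes onto a moderator and return the residuals.

    Plotting these residuals (rather than raw effect sizes) against
    sample size yields a funnel plot with the moderator "controlled";
    symmetry can still be judged, though the plot is centered at zero
    rather than at the mean effect size.
    """
    y = np.asarray(effect_sizes, dtype=float)
    x = np.asarray(moderator, dtype=float)
    X = np.column_stack([np.ones_like(x), x])     # intercept + moderator
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares fit
    return y - X @ coef

resid = residualized_effects([0.12, 0.21, 0.29, 0.41], [1.0, 2.0, 3.0, 4.0])
```

Because the regression includes an intercept, the residuals sum to zero, which is why the resulting funnel is centered at zero rather than at the mean effect size.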
7. This number is only meaningful when you have found a significant result. If the
obtained result is nonsignificant, then it is not meaningful to ask how many more
null results would be needed to reduce it to nonsignificance (it is already there).
In fact, using nonsignificant results in the equations for failsafe N will yield negative numbers.
8. I recommend conducting these analyses with the same scale of effect size used in
the meta-analysis. For instance, if Fisher’s Zr transformation of the correlation,
or the natural logarithm of the odds ratio, were used in the meta-analysis, these
should also be used in this computation.
9. I use different terminology here than that of Orwin (1983) and others (e.g.,
Becker, 2005). My rationale for this terminology is to ensure consistency with
terminology used in my earlier presentation of Rosenthal’s (1979) approach. It is
also worth noting here that I do not believe that you are limited to only selecting
one minimum meaningful value. I see value in reporting multiple failsafe Ns,
such as the number needed to reduce an effect size to a medium magnitude and
to a small magnitude.
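The failsafe numbers discussed in notes 7–9 are straightforward to compute. This sketch uses the formulas as commonly presented (Rosenthal, 1979; Orwin, 1983) with hypothetical inputs; 1.645 is the one-tailed .05 critical value:

```python
def rosenthal_failsafe_n(z_values, z_crit=1.645):
    """Number of null-result (z = 0) studies needed to bring the combined
    Stouffer z below the critical value (one-tailed alpha = .05)."""
    k = len(z_values)
    z_sum = sum(z_values)
    return (z_sum / z_crit) ** 2 - k

def orwin_failsafe_n(mean_es, k, criterion_es):
    """Number of zero-effect studies needed to reduce the mean effect
    size to a chosen minimum meaningful (criterion) value."""
    return k * (mean_es - criterion_es) / criterion_es

# Hypothetical meta-analysis of 10 studies, each with z = 2.0:
n_rosenthal = rosenthal_failsafe_n([2.0] * 10)   # about 137.8
# Mean effect size 0.50; how many nulls to reduce it to 0.20?
n_orwin = orwin_failsafe_n(0.50, 10, 0.20)       # 15.0
```

Consistent with note 9, Orwin's version can be run repeatedly with different criterion values (e.g., 0.50 and 0.20 for medium and small magnitudes) to report multiple failsafe Ns.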
Multivariate Meta‑Analytic Models
In Chapter 7 (Section 7.3), I described the difficulties of using multivariate
statistics (e.g., regression coefficients) as effect sizes in meta-analyses. However, this does not mean that we cannot answer multivariate questions through
meta-analysis. Rather than inserting multivariate statistics into our meta-analysis,
we can use meta-analysis to obtain bivariate statistics (e.g., correlations) that
are then used in multivariate analyses. This approach avoids the problems of
using multivariate statistics as effect sizes (e.g., the necessity that all studies use
the same variables in multivariate analyses), but itself contains some difficulties
of analytic complexity and requires that studies have collectively examined all
bivariate relations informing the multivariate analysis.
In this chapter, I introduce the cutting-edge practice of using meta-analysis
to obtain sufficient statistics for multivariate analysis. I first describe the general logic of this practice and then provide an overview of two statistical
approaches to fitting these multivariate meta-analytic models. Finally, I turn to
the practical matter of connecting meta-analytic findings to theories, a connection especially relevant to multivariate models that better evaluate theoretical
propositions than simpler models.
Before beginning this chapter, I want to make you aware of three important
cautions. First, I should warn you that the material presented in this chapter is
more technically challenging than most of the rest of this book. The techniques
I describe here rely on familiarity with matrix representations of multivariate
analyses, which I recognize many readers do not consider in their day-to-day
analyses. Second, I have not attempted to fully explain some nuances of these
approaches, as they quickly become even more technically challenging than
the material I do present. Third, because the techniques I describe here are
relatively new, many unresolved issues remain. Although I attempt to provide
an up-to-date, nontechnical overview of what we currently know and make
speculations about what I think are answers to unresolved issues (making clear
what is established vs. speculation), you should bear in mind that the state
of the art in this area is rapidly changing, so if you use these techniques you
should consult the most recent research.
12.1 Meta‑analysis to Obtain Sufficient Statistics
12.1.1 Sufficient Statistics for Multivariate Analyses
As you may recall (fondly or not) from your multivariate statistics courses,
nearly all multivariate analyses do not require the raw data. Instead, you
can perform these analyses using sufficient statistics—summary information
from your data that can be inserted into matrix equations to provide estimates of multivariate parameters. Typically, the sufficient statistics are the
variances and covariances among the variables in your multivariate analysis,
along with some index of sample size for computing standard errors of these
parameter estimates. For some analyses, you can instead use correlation matrices to
obtain standardized multivariate parameter estimates. Although the analysis
of correlation matrices, rather than variance/covariance matrices, is often
less than optimal, a focus on correlation matrices is advantageous in the
context of multivariate meta-analysis for the same reason that correlations
are generally preferable to covariances in meta-analysis (see Chapter 5). I
next briefly summarize how correlation matrices can be used in multivariate
analyses, focusing on multiple regression, exploratory factor analysis, and
confirmatory factor analysis. Although these represent only a small sampling
of possible multivariate analyses, this focus should highlight the wide range
of possibilities of using multivariate meta-analysis.
12.1.1.a Multiple Regression
Multiple regression models fit linear equations between a set of predictors
(independent variables) and a dependent variable. Of interest are both the
unique prediction of the dependent variable by each independent variable,
above and beyond the other predictors in the model (i.e., the regression coefficient, B) and the overall prediction of the set (i.e., the variance in the dependent variable explained, R2). Both the standardized regression coefficients of
each predictor and overall variance explained (i.e., squared multiple correlation, R2) can be estimated from (1) the correlations among the independent
variables (a square matrix, Rii, with the number of rows and columns equal
to the number of predictors), and (2) the correlations of each independent
variable with the dependent variable (a column vector, Riy, with the number of rows equal to the number of predictors), using the following equations1
(Tabachnick & Fidell, 1996, p. 142):
Equation 12.1: Matrix equations for multiple regression
Bi = Rii^-1 Riy
R2 = R′iy Bi
• Bi is a k × 1 vector of standardized regression coefficients.
• Rii is a k × k matrix of correlations among independent variables.
• Riy is a k × 1 vector of correlations of independent variables with
the dependent variable.
• R2 is the proportion of variance in the dependent variable predicted by the set of independent variables.
• k is the number of predictors.
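Equation 12.1 can be verified with a few lines of matrix code. The correlations below are hypothetical values for two predictors and one outcome; the computation itself simply implements Bi = Rii^-1 Riy and R2 = R′iy Bi:

```python
import numpy as np

# Hypothetical correlations: two predictors (x1, x2) and outcome y
R_ii = np.array([[1.0, 0.3],
                 [0.3, 1.0]])   # k x k correlations among predictors
R_iy = np.array([[0.5],
                 [0.4]])        # k x 1 predictor-outcome correlations

B_i = np.linalg.inv(R_ii) @ R_iy   # standardized regression coefficients
R_sq = (R_iy.T @ B_i).item()       # squared multiple correlation

# B_i is about [0.418, 0.275]; R_sq is about 0.319
```

In a multivariate meta-analysis, R_ii and R_iy would be filled with meta-analytically pooled correlations rather than correlations from a single study.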
12.1.1.b Exploratory Factor Analysis
Exploratory factor analysis (EFA) is used to extract a parsimonious set of factors that explain associations among a larger set of variables. This approach
is commonly used to determine (1) how many factors account for the associations among variables, (2) the strength of association of each variable with each factor (i.e., the factor loadings), and (3) the associations among the factors
(assuming oblique rotation). For each of these goals, exploratory factor analysis is preferred to principal components analysis (PCA; see, e.g., Widaman,
1993, 2007), so I describe EFA only. I should note that my description here is
brief and does not delve into the many complexities of EFA; I am being brief
because I seek only to remind you of the basic steps of EFA without providing
a complete overview (for more complete coverage, see Cudeck & MacCallum,
Although the matrix algebra of EFA can be a little daunting, all that is
initially required is the correlation matrix (R) among the variables, which is
a square matrix of p rows and columns (where p is the number of variables).
From this correlation matrix, it is possible to compute a matrix of eigenvectors, V, which has p rows and m columns (where m is the number of factors).2
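The eigenvector step just described can be illustrated with a small numerical sketch. The 3 × 3 correlation matrix and the eigenvalue-greater-than-1 rule for choosing m are hypothetical illustrations; a full EFA would typically iterate with communality estimates in the diagonal of R rather than analyzing R directly:

```python
import numpy as np

# Hypothetical correlation matrix R among p = 3 variables
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])

eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]      # reorder, largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

m = int(np.sum(eigvals > 1.0))  # eigenvalue > 1 rule (one common heuristic)
V = eigvecs[:, :m]              # p x m matrix of eigenvectors
loadings = V * np.sqrt(eigvals[:m])   # unrotated factor loadings
```

Scaling each retained eigenvector by the square root of its eigenvalue converts it to a column of unrotated loadings, which rotation methods would then transform for interpretability.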