
# 5 Comparisons among r, g, and o

Similarly, g and o can be computed from one another using the following
equations:
Equations 5.28 and 5.29: Converting between g and o

5.28: computing g from o: $g = \dfrac{\sqrt{3}}{\pi}\,\ln(o)$

5.29: computing o from g: $o = e^{\pi g / \sqrt{3}}$

• $\pi$ is the mathematical constant, approximately 3.14.
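Expressed as code, the two conversions look like the following sketch (the function names are mine, not the book's):

```python
import math

def g_from_o(o: float) -> float:
    """Equation 5.28: convert an odds ratio (o) to a standardized
    mean difference (g) via the logit transformation."""
    return math.log(o) * math.sqrt(3) / math.pi

def o_from_g(g: float) -> float:
    """Equation 5.29: convert g back to an odds ratio."""
    return math.exp(g * math.pi / math.sqrt(3))

# The two transformations are exact inverses of one another.
o = 2.5
g = g_from_o(o)                      # ≈ 0.505
assert abs(o_from_g(g) - o) < 1e-12  # round trip recovers o
```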

Finally, you can transform from o to r by reconstructing the contingency
table (if sufficient information is provided), through intermediate transformations to g, or through one of several approximations of the tetrachoric correlation (see Bonett, 2007). An intermediate transformation to g or algebraic
rearrangement of the tetrachoric correlation approximations also allows you
to transform from r to o.
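A minimal sketch of the intermediate-transformation route (o → g → r). The g-to-r conversion shown here is the standard one with its equal-group-size special case (a = 4); the helper names and that default are my assumptions, not formulas quoted from this book:

```python
import math

def g_from_o(o: float) -> float:
    # Equation 5.28: logit transformation from o to g
    return math.log(o) * math.sqrt(3) / math.pi

def r_from_g(g: float, n1: int = None, n2: int = None) -> float:
    """Standard conversion from a standardized mean difference to r.
    Without group sizes, assumes equal groups (correction term a = 4)."""
    if n1 is None or n2 is None:
        a = 4.0
    else:
        a = (n1 + n2) ** 2 / (n1 * n2)
    return g / math.sqrt(g ** 2 + a)

def r_from_o(o: float) -> float:
    """Two-step transformation: o -> g -> r."""
    return r_from_g(g_from_o(o))

print(r_from_o(16.0))  # ≈ 0.607 under the equal-groups assumption
```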
This mathematical interchangeability among effect sizes has led to
arguments that one type of effect size is preferable to another. For example,
Rosenthal (1991) has expressed preference for r over d (and presumably other
standardized mean differences, including g) based on four features. First,
comparisons of Equations 5.13 and 5.14 for r versus 5.20 and 5.21 for g reveal
that it is possible to compute r accurately from only the inferential test value
and degrees of freedom, whereas computing g requires knowing the group
sample sizes or else approximating this value by assuming that the group
sizes are equal. To the extent that primary studies do not report group sizes
and it is reasonable to expect marked differences in group sizes, r is preferable to d. A second, smaller, argument for preferring r to g is that you use
the same equations to compute r from independent-samples versus repeated-measures inferential tests, whereas different formulas are necessary when
computing g from these tests (see Equations 5.20 and 5.21 vs. 5.22). This
should not pose too much difficulty for the competent meta-analyst, but consideration of simplicity is not trivial. A third advantage of r over standardized
mean differences, according to Rosenthal (1991), is in ease of interpretation.
Whether r or standardized mean differences (e.g., g) are more intuitive to
readers is debatable and currently is a matter of opinion rather than careful study. It probably is the case that most scientists have more exposure to

r than to g or d, but this does not mean that they cannot readily grasp the
meaning of the standardized mean difference. The final, and perhaps most
convincing, argument for Rosenthal’s (1991) preference is that r can be used
whenever d can (e.g., in describing an association between a dichotomous
variable and a continuous variable), but it makes less sense to use g in many
situations where r could be used (e.g., in describing an association between
two continuous variables).
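Rosenthal's first point can be illustrated with the standard conversions from an independent-samples t test. These are the commonly cited formulas, not ones quoted from this book, and I show d rather than g (g would additionally apply a small-sample correction):

```python
import math

def r_from_t(t: float, df: int) -> float:
    """r can be recovered from just the test statistic and its df."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

def d_from_t(t: float, n1: int, n2: int) -> float:
    """d additionally requires the two group sizes."""
    return t * math.sqrt(1 / n1 + 1 / n2)

t = 2.50
print(r_from_t(t, df=58))          # ≈ 0.31, fully determined by t and df
print(d_from_t(t, n1=30, n2=30))   # ≈ 0.65, but changes if n1 and n2 differ
```

With only t and the degrees of freedom reported, r is exact, whereas d must assume equal group sizes when n1 and n2 are not given — which is Rosenthal's point.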
Arguments have also been put forth for preferring o to standardized
mean differences (g or d) or r when both variables are truly dichotomous.
The magnitudes of r (typically denoted with f) or standardized mean differences (g or d) that you can compute from a 2 × 2 contingency table depend
on the marginal frequencies of the dichotomies. This dependence leads to
attenuated effect sizes as well as extraneous heterogeneity among studies
when these effect size indices are used with dichotomous data (Fleiss, 1994;
Haddock et al., 1998). This limitation is not present for o, leading many
to argue that it is the preferred effect size to index associations between
dichotomous data.
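This margin dependence is easy to demonstrate: the two hypothetical 2 × 2 tables below share the same odds ratio, yet yield different values of φ (the cell counts and function names are illustrative, not taken from this book):

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio from 2x2 cell counts [a, b; c, d]."""
    return (a * d) / (b * c)

def phi(a, b, c, d):
    """Phi coefficient: r computed from a 2x2 table."""
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

t1 = (40, 10, 10, 40)   # balanced marginal frequencies
t2 = (80, 20, 5, 20)    # skewed marginal frequencies

print(odds_ratio(*t1), odds_ratio(*t2))  # 16.0 in both tables
print(phi(*t1), phi(*t2))                # 0.60 vs. ≈ 0.51: phi is attenuated
```

The odds ratio is invariant to the marginal distributions, while φ shrinks as the margins become more uneven — exactly the attenuation and extraneous heterogeneity described above.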
I do not believe that any type of effect size index (i.e., r, g, or o) is inherently preferable to another. What is far more important is that you select the
effect size that matches your conceptualization of the variables under consideration. Linear associations between two variables that are naturally continuous should be represented with r. Associations between a dichotomous
variable (e.g., group) and a continuous variable can be represented with a
standardized mean difference (e.g., g) or r, with a standardized mean difference probably more naturally representing this type of association.14 Associations between two natural dichotomies are best represented with o.
If you wish to compare multiple levels of variables in the same meta-analysis, I recommend using the effect size index representing the more continuous nature for both. For example, associations of a continuous variable
(e.g., aggression) with a set of correlates that includes a mixture of continuous
and dichotomous variables (e.g., a continuous rejection variable and a dichotomous variable of being classified as rejected) could be well represented with
the correlation coefficient, r (Rosenthal, 1991). Similarly, associations of a
dichotomous variable (e.g., biological sex) with a set containing a combination of continuous (e.g., rejection) and dichotomous (e.g., rejection classification) variables could be represented with a standardized mean difference
such as g (Sánchez-Meca et al., 2003). In both cases, it would be important
to evaluate moderation by the type (i.e., continuous versus dichotomous) of
correlate.

## 5.6 Practical Matters: Using Effect Size Calculators and Meta-Analysis Programs
As I described in Chapter 1, several computer programs are designed to aid
in meta-analysis, some of which are available for free and others for purchase.
All meta-analytic programs perform two major steps: effect size calculation
and effect size combination. Effect size combination (as well as comparison)
is the process of aggregating results across studies, the topic of Chapters 8–10
later in this book. Effect size calculation is the process of taking results from
each study and converting these into a common effect size, the focus of this
chapter.
Relying on an effect size calculator found in meta-analysis programs to
compute effect sizes (as well as to combine results across studies) can save
time. However, I discourage beginning meta-analysts from relying on them. All of the calculations described in this chapter can be performed with a simple hand calculator or spreadsheet program (e.g., Excel),
and the meta-analytic combination and comparison I describe later in this
book can be performed using these spreadsheets or simple statistical analysis
software (e.g., SAS or SPSS). In other words, I see little need for specialized software when conducting a meta-analysis.
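As an example of a calculation that needs nothing more than a spreadsheet or a few lines of code, here is a sketch of computing g from group means, standard deviations, and sizes. The pooled-SD and small-sample-correction formulas below are the standard ones and should be checked against this chapter's own equations:

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with the standard small-sample
    correction applied to d."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled
    # Small-sample correction factor
    return d * (1 - 3 / (4 * df - 1))

# Hypothetical study: group means 105 vs. 98, SDs 15 and 14, n = 25 each
print(round(hedges_g(105.0, 15.0, 25, 98.0, 14.0, 25), 3))  # ≈ 0.475
```

Working through such a computation once by hand, then verifying it against a program's output, is exactly the kind of check recommended below.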
Having said both that these programs can save time but that I recommend not using them initially, you may wonder if I think that you have too
much time on your hands. I do not. Instead, my concern is that these programs make it easy for beginning meta-analysts who are less familiar with the
calculations to make mistakes. The value of struggling with the equations in
this chapter is that doing so forces you to think about what the values mean
and where to find them within the research report. The danger of using an
effect size calculator is of mindless use, in which users put in whatever values
they can find in the report that look similar to what the program asks for.
At the same time, I do not entirely discourage the use of these meta-analysis programs. They can be of great use in reducing the burden of tedious
calculations after you understand these calculations. In other words, if you are
just beginning to perform meta-­analyses, I encourage you to compute some
effect sizes by hand (i.e., using a calculator or spreadsheet program) as well
as using one of these programs. Inconsistencies should alert you that either
your hand calculations are inaccurate or that you are not providing the correct information to the program (or that the program is inaccurate, though
this should be uncommon with the more commonly used programs). After
you have confirmed that you obtain identical results by hand and the program, then you can decide if using the program is worthwhile. I offer this
same advice when combining effect sizes, which I discuss later in this book.

## 5.7 Summary
In this chapter, I have described effect sizes as indices of association between
two variables, a definition that is somewhat restricted but that captures the
majority of uses in meta-analysis. I also emphasized that effect sizes are not
statistical significance tests.
I also described three classes of effect sizes. Correlations (r) index associations between two continuous variables. Standardized mean differences
(such as g) index associations between dichotomous and continuous variables. Odds ratios (o) are advantageous in indexing the associations between
two dichotomous variables. I stressed that you should carefully consider the
nature of the variables of interest, recognizing that primary studies may use
other distributions (e.g., artificial dichotomization of a continuous variable).
I also suggested that your conceptualization of the distributions of the variables of interest should guide your choice of effect size index. Finally, I considered the practical matter of using available effect size calculators in meta-analysis programs. Although you should be familiar enough with effect size
computation that you can do so yourself, these effect size calculators can be
a time-saving tool.

## 5.8 Recommended Readings
Fleiss, J. L. (1994). Measures of effect size for categorical data. In H. Cooper & L. V.
Hedges (Eds.), The handbook of research synthesis (pp. 245–260). New York: Russell
Sage Foundation.—This chapter provides a thorough and convincing description of
the use of o as effect size for associations between two dichotomous variables. This
chapter does not provide much advice on estimating o from commonly reported data,
so readers should also look at relevant sections of Lipsey and Wilson (2001).
Grissom, R. J., & Kim, J. J. (2005). Effect sizes for research: A broad practical approach.
Mahwah, NJ: Erlbaum.—Although not specifically written for the meta-analyst, this
book provides a thorough description of methods of indexing effect sizes.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-­analysis. Thousand Oaks, CA:
Sage.—This short book (247 pages) provides a more thorough coverage than that
of Rosenthal (1991), but is still brief and accessible. Lipsey and Wilson frame meta-analysis in terms of analysis of effect sizes, regardless of the type of effect size used.
Although only part of one chapter (Chapter 4) is devoted to effect size computation,