4.2 Analysis Weights: Review by the Data User

Preparation for Complex Sample Survey Data Analysis


• Identification of the correct weight variable(s) for the analysis.
• Checking and reviewing the scaling and general distribution of the weights.
• Evaluating the impact of the weights on key survey statistics.
The subsequent sections describe these three activities in more detail.
4.2.1 Identification of the Correct Weight Variables for the Analysis
The data user will need to refer to the survey documentation (technical report, codebook) to identify the correct variable name for the analysis
weight. Unfortunately, there are no standard naming conventions for weight
variables, and we recommend great caution in this step as a result. A number of years ago, a student mistakenly chose a variable labeled WEIGHT and
produced a wonderful paper based on the NHANES data in which each
respondent’s data was weighted by his or her body weight in kilograms. The
correct analysis weight variable in the student’s data file was stored under a
different, less obvious variable label.
Depending on the variables to be analyzed, there may be more than one
weight variable provided with the survey data set. The 2006 Health and
Retirement Study (HRS) data set includes one weight variable (KWGTHH)
for the analysis of financial unit (single adult or couple) variables (e.g.,
home value or total net worth) and a separate weight variable (KWGTR) for
individual-level analysis of variables (e.g., health status or earnings from a
job). The 2005–2006 NHANES documentation instructs analysts to use the
weight variable WTINT2YR for analyses of the medical history interview
variables and another weight variable (WTMEC2YR) for analyses of data
collected from the medical examination phase of the study. The larger sample of the National Comorbidity Survey Replication (NCS-R) Part I mental
health screening data (n = 9,282) is to be analyzed using one weight variable
(NCSRWTSH), while another weight variable (NCSRWTLG) is the correct
weight for analyses involving variables measured for only the in-depth Part
II subsample (n = 5,692).
Some public-use data sets may contain a large set of weight variables
known as replicate weights. For example, the 1999–2000 NHANES public-use data file includes the replicate weight variables WTMREP01, WTMREP02,
…, WTMREP52. As mentioned in Chapter 3, replicate weights are used in
combination with software that employs a replicated method of variance estimation, such as balanced repeated replication (BRR) or jackknife repeated replication (JRR). When a public-use data set includes replicate weights, design
variables for variance estimation (stratum and cluster codes) will generally
not be included (see Section 4.3), and the survey analyst needs to use the program syntax to specify the replicated variance estimation approach (e.g., BRR,
JRR, BRR-Fay) and identify the sequence of variables that contain the replicate
weight values (see Appendix A for more details on these software options).
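Although the syntax differs by package (see Appendix A), the arithmetic behind replicated variance estimation is simple. The following Python sketch, using made-up data and four hypothetical half-sample replicate weight columns rather than the actual WTMREP variables, shows a BRR-style calculation for a weighted mean:

```python
# Hedged sketch of BRR-style replicated variance estimation; the data and
# replicate weights below are hypothetical, not drawn from any NHANES file.

def weighted_mean(y, w):
    return sum(yi * wi for yi, wi in zip(y, w)) / sum(w)

def brr_variance(y, full_w, replicate_w):
    # BRR: average squared deviation of each replicate estimate from the
    # full-sample estimate.
    theta = weighted_mean(y, full_w)
    rep_estimates = [weighted_mean(y, w_r) for w_r in replicate_w]
    return sum((t - theta) ** 2 for t in rep_estimates) / len(rep_estimates)

y = [2.0, 3.0, 5.0, 7.0]
w = [1.0, 1.0, 1.0, 1.0]
# Each replicate doubles the weights of one half-sample and zeroes the other.
w_reps = [
    [2.0, 0.0, 2.0, 0.0],
    [0.0, 2.0, 0.0, 2.0],
    [2.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 2.0],
]
var_brr = brr_variance(y, w, w_reps)
```

In practice the analyst never forms these replicate weights by hand; the data producer supplies them as columns in the file, and the software loops over the columns much as above.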

© 2010 by Taylor and Francis Group, LLC


Applied Survey Data Analysis

4.2.2 Determining the Distribution and Scaling of the Weight Variables
In everyday practice, it is surprisingly common to find that an analyst who
is struggling with the weighted analysis of survey data has never actually
looked at the distribution of the weight variable. This is a critical step in preparing for analysis. Assessing a simple univariate distribution of the analysis
weight variable provides information on (1) the scaling of the weights; (2) the
variability and skew in the distribution of weights across sample cases; (3)
extreme weight values; and (4) (possibly) missing data on the analysis weight.
Scaling of the weights is important for interpreting estimates of totals and
in older versions of software may affect variance estimates. The variance
and distribution of the weights may influence the precision loss for sample
estimates (see Section 2.7). Extreme weights, especially when combined with
outlier values for variables of interest, may produce instability in estimates
and standard errors for complete sample or subclass estimates. Missing data
or zero (0) values on weight variables may indicate an error in building the
data set or a special feature of the data set. For example, 2005–2006 NHANES
cases that completed the medical history interview but did not participate
in the mobile examination center (MEC) phase of the study will have a positive, nonzero weight value for WTINT2YR but will have a zero value for
WTMEC2YR (see Table 4.1).
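Before any weighted analysis, a few lines of code suffice for this univariate check. The sketch below uses Python with invented weight values (including a missing value and a zero); in practice one would run the equivalent summary command on the actual weight variable:

```python
# Illustrative pre-analysis check of a weight variable's distribution;
# the weight values (including a missing and a zero) are made up.
import statistics

weights = [1.2, 0.8, None, 0.0, 2.5, 1.0, 0.5, 10.0]  # None = missing

n_missing = sum(1 for w in weights if w is None)
n_zero = sum(1 for w in weights if w == 0)
valid = [w for w in weights if w is not None and w > 0]

summary = {
    "n_missing": n_missing,              # data-set error or special feature?
    "n_zero": n_zero,                    # e.g., interview but no MEC exam
    "mean": statistics.mean(valid),
    "stdev": statistics.stdev(valid),    # variability across sample cases
    "min": min(valid),
    "max": max(valid),                   # flags extreme weights
    "sum": sum(valid),                   # reveals the scaling of the weights
}
```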
Table 4.1  Descriptive Statistics for the Sampling Weights in the Data Sets Analyzed in This Book

[Table 4.1 is not reproduced here; the summary statistics it reports for each weight variable are discussed in the text below.]

Notes to Table 4.1:
• Suggests that the sampling weights have been normalized to sum to the sample size.
• Cases with weights of zero will be dropped from analyses and usually correspond to individuals who were not eligible to be in a particular sample.
• 2005–2006 NHANES adults.


Table 4.1 provides simple distributional summaries of the analysis weight
variables for the NCS-R, 2006 HRS, and 2005–2006 NHANES data sets.
Inspection of these weight distributions quickly identifies that the scale of
the weight values is quite different from one study to the next. For example,
the sum of the NCS-R Part I weights is

	∑_i NCSRWTSH_i = 9,282

while the sum of the 2006 HRS individual weights is

	∑_i KWGTR_i = 75,540,674

With the exception of weighted estimates of population totals, weighted
estimates of population parameters and their standard errors should be
invariant to a linear scaling of the weight values, that is, w_scale,i = k · w_final,i,
where k is an arbitrary constant. In other words, the data producer may
multiply or divide the weight values by any constant and, except for estimates
of population totals, weighted estimates of population parameters and their
standard errors will not change.
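This invariance is easy to verify numerically. The Python sketch below, with arbitrary made-up values, rescales a set of weights by a constant k and confirms that the weighted mean is unchanged while the weighted total scales by k:

```python
# Demonstration that weighted means are invariant to w_scale,i = k * w_final,i,
# while weighted totals are not; all values here are made up.

def weighted_mean(y, w):
    return sum(yi * wi for yi, wi in zip(y, w)) / sum(w)

def weighted_total(y, w):
    return sum(yi * wi for yi, wi in zip(y, w))

y = [10.0, 20.0, 30.0]
w = [100.0, 300.0, 600.0]        # population-scale weights
k = 0.003                        # arbitrary constant
w_scaled = [k * wi for wi in w]

mean_orig = weighted_mean(y, w)            # unchanged by rescaling
mean_scaled = weighted_mean(y, w_scaled)
total_orig = weighted_total(y, w)          # estimates the population total
total_scaled = weighted_total(y, w_scaled) # k times the original total
```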
For many surveys such as the 2005–2006 NHANES and the 2006 HRS, the
individual case weights will be population scale weights, and the expected
value for the sum of the weights will be the population size:




	E(∑_i w_i) = N

For other survey data sets, a normalized version of the overall sampling
weight is provided with the survey data. To “normalize” the final overall
sampling weights, data producers divide the final population scale weight
for each sample respondent by the mean final weight for the entire sample:

	w_norm,i = w_i / (∑_i w_i / n) = w_i / w̄



Many public-use data sets such as the NCS-R will have normalized weights
available as the final overall sampling weights. The resulting normalized
weights will have a mean value of w̄_norm = 1.0, and the normalized weights
for all sample cases should add up to the sample size:





	∑_i w_norm,i = n
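The normalization just described is a one-line transformation. A Python sketch with hypothetical population-scale weights:

```python
# Sketch of weight normalization: divide each population-scale weight by
# the mean weight, so the normalized weights average 1.0 and sum to n.
# The weight values are hypothetical.

w = [500.0, 1500.0, 2000.0, 4000.0]   # population-scale weights
n = len(w)
w_bar = sum(w) / n                    # mean final weight
w_norm = [wi / w_bar for wi in w]

mean_norm = sum(w_norm) / n           # 1.0 by construction
sum_norm = sum(w_norm)                # equals the sample size n
```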



Normalizing analysis weight values is a practice that has its roots in the
past when computer programs for the analysis of survey data often misinterpreted the “sample size” for weighted estimates of variances and covariances required in computations of standard errors, confidence intervals, or
test statistics. As illustrated in Section 3.5.2, the degrees of freedom for variance estimation in analyses of complex sample survey data are determined
by the sampling features (stratification, clustering) and not the nominal
sample size. Also, some data analysts feel more comfortable with weighted
frequency counts that closely approximate the nominal sample sizes for the
survey. However, there is a false security in assuming that a weighted frequency count of

	∑_i w_i = 1,000

corresponds to an effective sample size of n_eff = 1,000. As discussed in Section
2.7, the effective sample size for 1,000 nominal cases will be determined in
part by the weighting loss, L_w, that arises due to variability in the weights
and the correlation of the weights with the values of the survey variables of
interest. Fortunately, normalizing weights is not necessary when analysts use
computer software capable of incorporating any available complex design
information for a sample into analyses of the survey data.
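One widely cited approximation for this effect (due to Kish, and related to the weighting loss L_w of Section 2.7) estimates the effective sample size from the weights alone; it ignores any correlation between the weights and the survey variables, so it is only a rough screen:

```python
# Kish's approximation to the effective sample size under weighting:
# n_eff = (sum of weights)^2 / (sum of squared weights).
# Equal weights give n_eff = n; variable weights shrink n_eff below n.

def effective_sample_size(w):
    return sum(w) ** 2 / sum(wi ** 2 for wi in w)

equal = [1.0] * 1000                    # weighted count 1,000
variable = [0.5] * 500 + [1.5] * 500    # weighted count still 1,000

n_eff_equal = effective_sample_size(equal)
n_eff_variable = effective_sample_size(variable)   # well below 1,000
```

Here both weight columns yield a weighted frequency count of 1,000, but the variable weights cut the effective sample size to 800.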
4.2.3 Weighting Applications: Sensitivity of Survey Estimates to the Weights
A third step that we recommend survey analysts consider the first time that
they work with a new survey data set is to conduct a simple investigation of
how the application of the analysis weights affects the estimates and standard errors for several key parameters of interest.
To illustrate this step, we consider data from the NCS-R data set, where the
documentation indicates that the overall sampling weight to be used for the
subsample of respondents responding to both Part I and Part II of the NCS-R
survey (n = 5,692) is NCSRWTLG. A univariate analysis of these sampling
weights in Stata reveals a mean of 1.00, a standard deviation of 0.96, a minimum of 0.11, and a maximum of 10.10 (see Table€4.1). These values indicate that
the weights have been normalized and have moderate variance. In addition,
we note that some sampling weight values are below 0.50. Many standard
statistical software procedures will round noninteger weights and set the
weight to 0 if the normalized weight is less than 0.5—excluding such cases
from certain analyses. This is an important consideration that underscores



the need to use specialized software that incorporates the overall sampling
weights correctly (and does not round them).
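The rounding hazard is easy to demonstrate. The Python sketch below mimics software that treats analysis weights as integer frequency weights (the weight values echo the NCS-R range but are otherwise arbitrary):

```python
# Sketch of the frequency-weight rounding hazard: cases whose normalized
# weight rounds to 0 are silently dropped from the analysis.

weights = [0.11, 0.4, 0.9, 1.6, 10.1]      # normalized-style weights
rounded = [round(w) for w in weights]      # what naive software might do
n_dropped = sum(1 for r in rounded if r == 0)
```

Two of the five cases vanish from any analysis that uses the rounded weights, which is exactly why software that handles probability weights directly should be used instead.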
We first consider unweighted estimation of the proportions of NCS-R Part
II respondents with lifetime diagnoses of either major depressive episode
(MDE), measured by a binary indicator equal to 1 or 0, or alcohol dependence
(ALD; also a binary indicator), in Stata:
mean mde ald if ncsrwtlg != .


Note that we explicitly limit the unweighted analysis to Part II respondents (who have a nonmissing value on the Part II sampling weight variable NCSRWTLG). The unweighted estimate of the MDE proportion is 0.316,
suggesting that almost 32% of the NCS-R population has had a lifetime
diagnosis of MDE. The unweighted estimate of the ALD proportion is 0.078,
suggesting that almost 8% of the NCS-R population has a lifetime diagnosis
of alcohol dependence.
We then request weighted estimates of these proportions in Stata, first
identifying the analysis weight to Stata with the svyset command and then
requesting weighted estimates by using the svy: mean command:
svyset [pweight = ncsrwtlg]
svy: mean mde ald


The weighted estimates of population prevalence of MDE and ALD are
0.192 and 0.054, respectively. The unweighted estimates for MDE and ALD
therefore have a positive bias (there would be a big difference in reporting a
population estimate of 32% for lifetime MDE versus an estimate of 19%).
In this simple example, the weighted estimates differ significantly from
the unweighted means of the sample observations. This is not always the
case. Depending on the sample design and the nonresponse factors that
contributed to the computation of individual weight values, weighted and
unweighted estimates may or may not show significant differences. When
this simple comparison of weighted and unweighted estimates of key
population parameters shows a significant difference, the survey analyst
should aim to understand why this difference occurs. Specifically, what are


the factors contributing to the case-specific weights that would cause the
weighted population estimates to differ from an unweighted analysis of the
nominal set of sample observations?
Consider the NCS-R example. We know from the Chapter 1 description
that, according to the survey protocol, all Part I respondents reporting symptoms of a mental health disorder and a random subsample of symptom-free
Part I respondents continued on to complete the Part II in-depth interview.
Therefore, the unweighted Part II sample contains an “enriched” sample of
persons who qualify for one or more mental health diagnoses. As a consequence, when the corrective population weight is applied to the Part II
data, the unbiased weighted estimate of the true population value is substantially lower than the simple unweighted estimate. Likewise, a similar comparison of estimates of the prevalence of physical function limitations using
2005–2006 NHANES data would yield weighted population estimates that
are lower than the simple unweighted prevalence estimates for the observed
cases. The explanation for that difference lies in the fact that persons who
self-report a disability are oversampled for inclusion in the NHANES, and
the application of the weights adjusts for this initial oversampling.
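A small synthetic example makes the mechanism concrete. Suppose a trait with 10% population prevalence is oversampled at four times the base rate, so sampled trait cases carry weights one quarter the size; the unweighted sample proportion is then biased upward, and weighting restores the population value:

```python
# Synthetic illustration of the "enriched sample" effect; the design and
# the numbers are invented for demonstration.

# Trait cases are sampled at 4x the rate of non-trait cases, so their
# weights are 1/4 as large. Population prevalence is 10%.
trait = [1] * 40 + [0] * 90        # enriched sample: 40/130 have the trait
wts = [0.25] * 40 + [1.0] * 90     # weights proportional to 1/(selection rate)

unweighted = sum(trait) / len(trait)                          # about 0.31
weighted = sum(t * w for t, w in zip(trait, wts)) / sum(wts)  # 0.10
```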
Repeating this exercise for a number of key variables should provide the
user with confidence that he or she understands both how and why the
application of the survey weights will influence estimation and inference for
the population parameters to be estimated from the sample survey data. We
note that these examples were designed only to illustrate the calculation of
weighted sample estimates; standard errors for the weighted estimates were
not appropriately estimated to incorporate complex design features of the
NCS-R sample. Chapter 5 considers estimation of descriptive statistics and
their standard errors in more detail.

4.3 Understanding and Checking the Sampling Error Calculation Model
The next step in preparing to work with a complex sample survey data set
is to identify, understand, and verify the sampling error calculation model
that the data producer has developed and encoded for the survey data set.
Information about the sampling error calculation model can often be found
in sections of the technical documentation for survey data sets titled sampling
error calculations or variance estimation (to name a couple of possibilities). In
this section, we discuss the notion of a sampling error calculation model and
how to identify the appropriate variables in a survey data set that represent
the model for variance estimation purposes.
A sampling error calculation model is an approximation or “model”
for the actual complex sample design that permits practical estimation of



sampling variances for survey statistics. Such models are necessary because
in many cases, the most practical, cost-efficient sample designs for survey
data collection pose analytical problems for complex sample variance estimation. Examples of sample design features that complicate direct analytic
approaches to variance estimation include the following:
• Multistage sampling of survey respondents.
• Sampling units without replacement (WOR) at each stage of sample selection.
• Sampling of a single primary sampling unit (PSU) from nonself-representing (NSR) primary stage strata.
• Small primary stage clusters that are not optimal for subclass analyses or pose an unacceptable disclosure risk.
The data producer has the responsibility of creating a sampling error calculation model that retains as much essential information about the original
complex design as possible while eliminating the analytical problems that
the original design might pose for variance estimation.
4.3.1 Stratum and Cluster Codes in Complex Sample Survey Data Sets
The specification of the sampling error calculation model for a complex sample design entails the creation of a sampling error stratum and a sampling
error cluster variable.
These sampling error codes identify the strata and clusters that the survey
respondents belong to, approximating the original sample design as closely
as possible while at the same time conforming to the analytical requirements of several methods for estimating variances from complex sample
data. Because these codes approximate the stratum and cluster codes that
would be assigned to survey respondents based on the original complex
sample design, they will not be equal to the original design codes, and the
approximate codes are often scrambled (or masked) to prevent the possibility of identifying the original design codes in any way. Sampling error stratum and cluster codes are essential for data users who elect to use statistical
software programs that employ the Taylor series linearization (TSL) method
for variance estimation. Software for replicated variance estimation (BRR
or JRR) does not require these codes, provided that the data producer has
generated replicate weights. If replicate weights are not available, software
enabling replicated variance estimation can use the sampling error stratum
and cluster codes to create sample replicates and to develop the corresponding replicate weights.
In some complex sample survey data sets where confidentiality is of
utmost importance, variables containing sampling error stratum and cluster codes may not be provided directly to the research analyst (even if they


represent approximations of the original design codes). In their place, the
data producer’s survey statisticians calculate replicate weights that reflect
the sampling error calculation model and can be used for repeated replication methods of variance estimation. We provide examples of analyses using
replicate weights on the book Web site, and interested readers can refer to
Appendix A to see how replicate weights would be identified in various software procedures.
Unfortunately, as is true for analysis weights, data producers have not
adopted standard conventions for naming the sampling error stratum and
cluster variables in public-use data sets. The variable containing the stratum code often has a label that includes some reference to stratification. Two
examples from this book include the variable SDMVSTRA from the 2005–
2006 NHANES data set and the variable SESTRAT from the NCS-R data set.
The variable containing the sampling error cluster codes is sometimes
referred to as a sampling error computation unit (SECU) variable, or it may
be generically called the “cluster” code, the PSU code, or pseudo-primary
sampling unit (PPSU) variable. Two examples that will be used throughout
the example exercises in this book include the SDMVPSU variable from the
2005–2006 NHANES data set and the SECLUST variable from the NCS-R
data set.
As discussed in Section 2.5.2, some large survey data sets are still released
without sampling error codes or replicate weight values that are needed for
complex sample variance estimation. In such cases, analysts may be able to
submit an application to the data producer for restricted access to the sampling error codes, or they may elect to use the generalized design effect that
is advocated by these data producers. For a limited number of survey data
sets, online, interactive analysis systems such as the SDA system* provide
survey analysts with the capability of performing analysis without gaining direct access to the underlying sampling error codes for the survey data
set. Analysts who are working with older public-use data sets may find that
these data sets were released with the sampling error stratum and cluster
codes concatenated into a single code (e.g., 3001 for stratum = 30 and cluster
= 01). In these cases, the variable containing the unified code will need to be
divided into two variables (stratum and cluster) by the survey data analyst
for variance estimation purposes.
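Splitting such a concatenated code is a single arithmetic step. A Python sketch, assuming (as in the 3001 example) a two-digit cluster field:

```python
# Sketch of splitting a concatenated stratum+cluster code into separate
# stratum and cluster variables; the two-digit cluster field is an
# assumption for illustration.

def split_code(code):
    # divmod(3001, 100) -> (30, 1): the quotient is the stratum,
    # the remainder is the cluster
    return divmod(code, 100)

codes = [3001, 3002, 3101, 3102]
pairs = [split_code(c) for c in codes]
```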
4.3.2 Building the NCS-R Sampling Error Calculation Model
We now consider the sample design for the National Comorbidity Survey
Replication as an illustration of the primary concepts and procedures for
* The SDA system is available on the Web site of the University of Michigan Inter-University
Consortium for Political and Social Research (ICPSR) and is produced by the Computer-Assisted Survey Methods Program at the University of California–Berkeley. Visit http://
www.icpsr.umich.edu for more details.



Table 4.2  Original Sample Design and Associated Sampling Error Calculation Model for the NCS-R

Original sample design: The sample is selected in multiple stages. Primary stage units (PSUs), secondary stage units (SSUs), and third stage units are selected without replacement (WOR).
Sampling error calculation model: The concept of ultimate clusters is employed (see Chapter 3). Under the assumption that PSUs (ultimate clusters) are sampled with replacement, only PSU-level statistics (totals, means) are needed to compute estimates of sampling variance. The ultimate clusters are assumed to be sampled with replacement (SWR) at the primary stage. Finite population corrections are ignored, and simpler SWR variance formulas may be used for variance estimation.

Original sample design: Sixteen of the primary stage strata are self-representing (SR) and contain a single PSU. True sampling begins with the selection of SSUs within the SR PSU.
Sampling error calculation model: Random groups of PSUs are formed for sampling error calculation. Each SR PSU becomes a sampling error stratum. Within the SR stratum, SSUs are randomly assigned to a pair of sampling error clusters.

Original sample design: A total of 46 of the primary stage strata are nonself-representing (NSR). A single PSU is selected from each NSR stratum.
Sampling error calculation model: Collapsed strata are formed for sampling error calculation. Two similar NSR design strata (e.g., Strata A and B) are collapsed to form one sampling error computation stratum. The Stratum A PSU is the first sampling error cluster in the stratum, and the Stratum B PSU forms the second sampling error cluster.

constructing a sampling error calculation model for a complex sample survey data set. Table 4.2 presents a side-by-side comparison of the features of
the original NCS-R complex sample design and the procedures employed to
create the corresponding sampling error calculation model. Interested readers can refer to Kessler et al. (2004) for more details on the original sample
design for the NCS-R.
Note from Table 4.2 that in the NCS-R sampling error calculation model the
assumption of with-replacement sampling of ultimate clusters described in
Chapter 3 is employed to address the analytic complexities associated with
the multistage sampling and the without-replacement selection of NCS-R
PSUs. The assumption that ultimate clusters are selected with replacement
within primary stage strata ignores the finite population correction factor
(see Section 2.4.2) and therefore results in a slight overestimation of true sampling variances for survey estimates.
To introduce several other features of the NCS-R sampling error calculation
model, Table 4.3 illustrates the assignment of the sampling error stratum and
cluster codes for six of the NCS-R sample design strata. In self-representing


Table 4.3  Illustration of NCS-R Sampling Error Code Assignments

Design stratum 15 (SR); sampled units 1–12. Sampling error clusters: 1 = {1, 3, 5, 7, 9, 11}, 2 = {2, 4, 6, 8, 10, 12}.
Design stratum 16 (SR); sampled units 1–12. Sampling error clusters: 1 = {1, 3, 5, 7, 9, 11}, 2 = {2, 4, 6, 8, 10, 12}.
Design strata 17 and 18 (NSR); PSUs 1701 and 1801, collapsed into one sampling error stratum. Sampling error clusters: 1 = 1701, 2 = 1801.
Design strata 19 and 20 (NSR); PSUs 1901 and 2001, collapsed into one sampling error stratum. Sampling error clusters: 1 = 1901, 2 = 2001.

Notes: Recall from Section 2.8 and Table 4.2 that in self-representing (SR)
strata, sampling begins with the selection of the smaller area segment units. Hence, in the NCS-R, the sampled units (coded 1–12)
in each SR stratum (serving as its own PSU) are actually secondary
sampling units. We list them as PSUs because this
was the first stage of noncertainty sampling in the SR strata.
Unlike the SR PSUs, which serve as both strata and PSUs (each SR
stratum is a PSU; they are one and the same), the NSR strata
can include multiple PSUs. In the NCS-R, one PSU was randomly
selected from each NSR stratum (e.g., PSU 1701). NSR strata were
then collapsed to form sampling error strata with two PSUs each to
facilitate variance estimation.

(SR) design strata 15 and 16, the “area segments” constitute the first actual
stage of noncertainty sample selection—hence, they are the ultimate cluster
units within these two strata. To build the sampling error calculation model
within each of these two SR design strata, the random groups method is
used to assign the area segment units to two sampling error clusters. This is
done to simplify the calculations required for variance estimation. As illustrated, NCS-R nonself-representing strata 17–20 contain a single PSU selection. The single PSU selected from each of these NSR strata constitutes an
ultimate cluster selection. Because a minimum of two sampling error clusters per stratum is required for variance estimation, pairs of NSR design
strata (e.g., 17 and 18, 19 and 20) are collapsed to create single sampling error
strata with two sampling error computation units (PSUs) each.
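The two assignment rules just described can be expressed compactly. The Python sketch below mirrors the Table 4.3 layout (the odd/even split stands in for a random assignment, and the PSU codes are the illustrative 1701-series values):

```python
# Sketch of NCS-R-style sampling error code assignment: random groups of
# SSUs within a self-representing (SR) stratum, and collapsing of pairs of
# nonself-representing (NSR) strata. The layout follows Table 4.3.

def sr_random_groups(ssu_ids):
    # Assign SSUs in an SR stratum to two sampling error clusters; a real
    # assignment would be random, here odd IDs -> 1 and even IDs -> 2.
    return {s: 1 if s % 2 == 1 else 2 for s in ssu_ids}

def collapse_nsr(psu_a, psu_b):
    # Two NSR strata collapse into one sampling error stratum; each
    # stratum's single PSU becomes one sampling error cluster.
    return {psu_a: 1, psu_b: 2}

sr_clusters = sr_random_groups(range(1, 13))   # SSUs 1-12
nsr_clusters = collapse_nsr(1701, 1801)        # design strata 17 and 18
```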
Randomly grouping PSUs to form a sampling error cluster does not bias
the estimates of standard errors that will be computed under the sampling
error calculation model. However, forming random clusters by combining
units does forfeit degrees of freedom, so the “variance of the variance estimate” may increase slightly.
If the collapsed stratum technique is used to develop the sampling error calculation model, slight overestimation of standard errors occurs because the



collapsed strata ignore the true differences in the design strata that are collapsed to form the single sampling error calculation stratum. The following
section provides interested readers with more detail on the combined strata,
random groups, and collapsed strata techniques used in building sampling
error calculation models for complex sample survey data.
4.3.3 Combining Strata, Randomly Grouping PSUs, and Collapsing Strata
Combining strata in building the sampling error calculation model involves
the combination of PSUs from two or more different strata to form a single
stratum with larger pooled sampling error clusters. Consider Figure 4.1.
The original complex sample design involved paired selection of two PSUs
within each primary sampling stratum. For variance estimation, PSUs from
the two design strata are combined, with the PSUs being randomly assigned
into two larger sampling error clusters.
The technique of combining strata for variance estimation is typically
used for one of two reasons: (1) The sample design has large numbers of primary stage strata and small numbers of observations per PSU, which could
lead to variance estimation problems (especially when subclasses are being
analyzed); or (2) the data producer wishes to group design PSUs to mask
individual PSU clusterings as part of a disclosure protection plan. The sampling error calculation model for the NHANES has traditionally employed
combined strata to mask the identity of individual PSUs.
The random groups method randomly combines multiple clusters from a
single design stratum to create two or more clusters for sampling error estimation. This is the technique illustrated for NCS-R self-representing design
strata 15 and 16 in Table 4.3. Figure 4.2 presents an illustration of forming random groups of clusters for variance estimation purposes. In this illustration,







Figure 4.1  An example of combining strata.
