2003 Joint Statistical Meetings - Section on Government Statistics
Improving Quality in the Collection of Earnings:
The Survey of Income and Program Participation 2004 Panel
Nancy Bates and Aniekan Okon
U.S. Census Bureau, Demographic Surveys Division, Washington, DC 20233-8400
KEY WORDS: Labor Force, Earnings, Nonresponse,
Reporting options
1. Abstract¹
In an attempt to reduce respondent burden and item
nonresponse to income questions, the Survey of Income and
Program Participation (SIPP) will implement new
procedures to collect earnings from work activities in the
2004 panel. In previous SIPP instruments, the primary
method of reporting employer earnings has been in monthly
amounts; when gathering business earnings, monthly is the
only option allowed. The 2004 panel will allow respondents
a choice in how they report earnings from businesses and
employers. Prior to a battery of questions about earned
income amounts, interviewers probe respondents for their
preferred time period of reporting earnings—either annually,
hourly, biweekly, bimonthly, monthly, or quarterly. The new
SIPP instrument also adds several probes in the earnings
section aimed at reducing item nonresponse and introduces
dependent-style interviewing techniques during the Wave 2+
interviews. In this paper, we discuss how the new methods
impacted item nonresponse and data quality in a series of
field tests.
In this paper, we describe new techniques designed to
encourage better income reporting in the 2004 SIPP. In
particular, from the Wave 1 questionnaire, we detail a new
method of flexible reporting periods, expanded use of
computerized verification checks, automated monthly
transformations, and new nonresponse follow-up probes.
From the Wave 2 interview, we describe the use of
dependent interviewing as a means of reducing nonresponse
to questions about earnings from wage and salary jobs and
businesses. We then present a comparison analysis of item
nonresponse metrics and mean and median reported earnings
between the new 2004 SIPP Panel and the current SIPP
instrument. We end with a brief discussion of how changes
in the 2004 SIPP will impact data output and then
summarize conclusions from the Methods Panel field tests.
2. Introduction
The 2004 Survey of Income and Program Participation
(SIPP) Panel has gone through significant instrument
redesigns since the inception of the SIPP Methods Panel.
The SIPP Methods Panel was established as a means of
designing and testing changes to the 2004 SIPP during a
series of production-like field tests. Some overriding goals
of the 2004 SIPP are to increase efficiency and accuracy,
reduce redundancy, develop “friendlier” wording, simplify
response tasks, and, in general, reduce respondent burden
(Moore et al. 2002).
3. Labor Force Income Questions
Like all sections of the core SIPP instrument, the 2004 labor
force section has undergone a program of changes and
testing. We limit the improvements discussed in this paper to
the Labor Force 2 section of the SIPP. This section of the
core instrument collects detailed information about earnings
from wage and salary jobs and businesses, profits from
businesses, earnings from tips, commissions, overtime and
bonuses, “other” business income, contingent work income,
and moonlight earnings.² Our experience with the new techniques is divided into results from the Wave 1 and Wave 2+ instruments. In addition to the broad methodological changes discussed below, we note that the 2004 labor force section also contains several new questions and imposes different universes for old questions in some instances. These are not discussed in detail here but are referenced in a few places in the paper and briefly discussed in Section 5.
The Methods Panel project carried out its initial field test in August and September of 2000. The 2000 field test drew a representative sample of households in six of the Census Bureau's twelve regional offices—Philadelphia, Kansas City, Seattle, Charlotte, Atlanta, and Dallas—randomly assigning each to a treatment group or a control group. Each household in the treatment group received a modified SIPP instrument containing experimental questions. Each household in the control group received the current, standard SIPP instrument. Interviewers who conducted the interviews were all experienced SIPP interviewers who received special training on the new, experimental questions and procedures. Two more field tests with similar designs were conducted in 2001 and 2002, each with two separate waves. (For a more comprehensive background on the SIPP Methods Panel, see Doyle, Martin and Moore, 2000.)
¹ This paper reports the results of research and analysis undertaken by Census Bureau staff. It has undergone a Census Bureau review more limited in scope than that given to official Census Bureau publications. This report is released to inform interested parties of ongoing research and to encourage discussion of work in progress.

² The core SIPP routinely asks all detailed earnings questions about the 1st and 2nd job or business only.

3.1 Wave 1 – Reporting Options
When reporting job earnings in the traditional SIPP, respondents are asked:

“Each time you were paid by [Employer Name] in [Month X], how much did you receive BEFORE deductions?”
Interviewers are instructed to enter one or more gross
amounts for the reference month. This allows separate
paycheck amounts or a single aggregated monthly amount to
be entered. There is also an option to hit “A” and enter an
annual amount if the respondent reports being paid annually
in response to the question. Likewise, the instrument also has
an option to hit “C” and calculate a monthly amount by
entering an hourly wage and a number of hours worked. If
hourly is selected, a monthly amount is calculated and read
back to the respondent to verify. However, neither the hourly
nor the annual option is explicitly offered to the respondent
as part of the question. For salaried income from a business,
gross monthly amount is the only reporting option allowed.
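For concreteness, here is a minimal sketch of that data-entry handling; the parsing rules, the function name, and the 52-weeks-per-year factor are assumptions inferred from the description above, not the instrument's documented behavior.

```python
# Hypothetical sketch of the traditional entry options: one or more gross
# paycheck amounts, "A" plus an annual amount, or "C" plus an hourly wage
# and weekly hours. All conversion factors are illustrative assumptions.

def old_sipp_gross_monthly(entry: str) -> float:
    parts = entry.split()
    if parts[0].upper() == "A":              # annual amount reported
        return float(parts[1]) / 12
    if parts[0].upper() == "C":              # hourly wage x weekly hours
        wage, hours = float(parts[1]), float(parts[2])
        return wage * hours * 52 / 12        # then read back to verify
    return sum(float(p) for p in parts)      # one or more paycheck amounts

print(old_sipp_gross_monthly("800 800"))     # two paychecks -> 1600.0
print(old_sipp_gross_monthly("A 24000"))     # annual        -> 2000.0
print(old_sipp_gross_monthly("C 8 40"))      # hourly        -> ~1386.67
```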
The method in the 2004 instrument took a very different tack—it is flexible and allows respondent-defined periods for reporting job earnings and salaried business income. At the beginning of the earnings section, respondents are informed that the goal is to collect gross monthly earning amounts. However, the instrument allows respondents to select their preferred periodicity of reporting: monthly, biweekly/bimonthly, annually, hourly, or quarterly (in the case of businesses). If none of these are selected, respondents may report “some other way”, in which case the instrument prompts for monthly amounts. The exact question wording is as follows:

“The goal of this part of the survey is to find out about your MONTHLY GROSS income from this job, BEFORE taxes and other deductions. We can do this in several ways. We can go straight to monthly totals, or we can try to work with [FILL - weekly/biweekly/bimonthly] paychecks, hourly pay rates, or annual income amounts if that would be easier. What would be easiest for you?”

The rationale behind the new flexibility is to make retrieval and reporting more natural and in line with how respondents typically think about earnings. Although the goal of the SIPP is to collect gross monthly income, respondents commonly must use “calculating” techniques to arrive at these amounts; e.g., recalling the number of hours worked per week and multiplying it by the pay rate and the number of weeks worked per month (U.S. Census Bureau 2000).

In cases where an amount other than monthly is selected, the SIPP 2004 instrument internally calculates a gross monthly amount based on pay dates, pay periods, hours worked, paycheck totals, annual salaries, and other pertinent information necessary to make a calibrated monthly transformation. It then verifies the amount to make sure the calculation matches what the respondent has in mind. For example, after a respondent reports income as hourly wages, the instrument verifies by asking:

“Earlier you stated that you usually work 40 hours per week. At $8 per hour, that works out to about $320 every week. Does that sound about right for your gross income?”

The instrument then uses the weekly gross amounts to calculate the desired final outcome: gross monthly amounts. This method takes advantage of computer-assisted automation by allowing flexibility in reporting options and putting more of the income calculation burden on the computer, not the respondent. Our intent was to increase the validity and accuracy of the data being collected and reduce respondent burden at the same time.
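The paper does not spell out the internal conversion rules, so the following is only a sketch of this kind of periodicity-to-monthly transformation, assuming standard calendar factors (52 weeks and 12 months per year); it reproduces the verification example quoted above.

```python
# Illustrative sketch of the 2004 flexible-periodicity transformation.
# Conversion factors are assumptions; the instrument's calibrated rules
# (pay dates, pay periods, etc.) are more detailed than shown here.

PAYMENTS_PER_MONTH = {
    "weekly": 52 / 12, "biweekly": 26 / 12, "bimonthly": 2.0,
    "monthly": 1.0, "quarterly": 1 / 3, "annually": 1 / 12,
}

def gross_monthly(period: str, amount: float, hours_per_week: float = 0) -> float:
    if period == "hourly":
        weekly = amount * hours_per_week           # $8/hr x 40 hrs = $320/week
        return weekly * PAYMENTS_PER_MONTH["weekly"]
    return amount * PAYMENTS_PER_MONTH[period]

monthly = gross_monthly("hourly", 8.0, hours_per_week=40)
print(f"At $8 per hour, that works out to about $320 every week, "
      f"or about ${monthly:,.0f} a month. Does that sound about right?")
```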
3.2 Wave 1 – Item Nonresponse

When revising the SIPP 2004 labor force earnings section, another goal was to reduce item nonresponse. Item nonresponse occurs when a respondent refuses or is unable to answer a particular question. This may happen when the reporting task is too difficult, the information is not available, or the question is deemed too sensitive. It is a fairly common occurrence for income questions in the case of a proxy reporter or in cases where answers are withheld because they are viewed as too private or inappropriate to share with a stranger. Like other income surveys, the SIPP has experienced substantial levels of item nonresponse to income questions (Dixon et al. 2003; Atrostic and Kalenkoski 2002; Roemer 2000).

In an effort to reduce item nonresponse, the 2004 SIPP instrument instituted a two-tiered approach to nonresponse follow-up probes during the Wave 1 interview. At the first level, if the respondent answers “don’t know” or refuses an income amount question, the respondent is asked to provide an approximate monthly, biweekly, or annual amount, depending upon the type of reporting method chosen.

The second level occurs if the respondent still answers “don’t know” at the approximate income amount screen. The respondent is then asked to try to report a gross monthly amount (the default method). The specific wording is:

“Let’s try another way. How much did you receive BEFORE deductions from [Employer Name] for [Month X]?”

If the respondent refuses at the default monthly screen, the instrument skips over the labor force earnings questions. It was our intent to reduce item nonresponse by encouraging approximate reporting first and then trying one last time to collect the information by the standard monthly method.
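A compact sketch of this two-tier fallback follows; the `ask` helper and the "DK"/"REF" codes are hypothetical stand-ins for the CAPI screens and response codes.

```python
# Hypothetical sketch of the Wave 1 two-tier nonresponse follow-up.
# "DK" and "REF" stand for "don't know" and "refused".

def ask(prompt: str) -> str:
    return input(prompt + " ").strip()

def wave1_amount(employer: str, month: str, method: str):
    answer = ask(f"How much did you receive BEFORE deductions "
                 f"from {employer} in {month}?")
    if answer not in ("DK", "REF"):
        return answer

    # Tier 1: probe for an approximate amount in the chosen periodicity.
    approx = ask(f"Can you give me an approximate {method} amount?")
    if approx == "REF":
        return "REF"            # recorded as refused; no second tier
    if approx != "DK":
        return approx

    # Tier 2: one last try using the default monthly method.
    monthly = ask(f"Let's try another way. How much did you receive "
                  f"BEFORE deductions from {employer} for {month}?")
    # On a refusal here the instrument skips the earnings questions entirely.
    return None if monthly in ("DK", "REF") else monthly
```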
3.3 Wave 2 – Dependent Interviewing
In Waves 2+ of the SIPP, most sections of the instrument
repeat questions from the first interview in order to measure
change over time. This concept is at the heart of longitudinal
surveys and critical for measuring the economic and social
condition of important subpopulations over time. As part of
the 2004 SIPP redesign, the decision was made to maximize
the use of dependent interviewing in the labor force and
other sections of the instrument. (In this paper, “dependent
interviewing” refers to the use of information from prior
interviews, preloaded into the current computer-assisted data
record. It does not refer to the use of information from
administrative records or other sources.)
The use of dependent interviewing is generally viewed as a more efficient approach to the collection of information, since interviewers do not need to repeat questions from previous waves (Mathiowetz and McGonagle 2000). A few published studies indicate that the technique improves data quality by reducing reporting bias and increasing item response (Hill 1994; Dibbs et al. 1995). The survey methods literature categorizes dependent interviewing into two techniques: “proactive” and “reactive” (see Brown et al. 1998). In proactive dependent interviewing, the respondent is first reminded of information acquired from a previous interview and then asked about their current situation, often by verifying whether the previous information is still accurate. With reactive dependent interviewing, a question from the previous interview is re-asked during the current interview, and information from the previous interview is used to detect discrepancies between current and previous answers.

The 2001 Wave 2+ SIPP currently uses proactive dependent interviewing in the section where information about labor force participation is collected; however, no form of dependent interviewing is used in the section that collects income amounts from jobs or businesses. This changes in the 2004 SIPP, which uses a combination of proactive and reactive interviewing.

In cases where the information is available from the previous interview, respondents are reminded of the reporting method they previously selected and asked if this is still an acceptable way to report income (proactive method).³ However, when reporting actual amounts, we elected to use dependent interviewing only when respondents failed to provide an answer to questions about income amounts from jobs and businesses (don’t know or refused). In such cases (and where information from the previous interview is stored and available), the instrument pulls up a screen that reminds the respondent of the amount reported previously and asks the respondent to verify whether it is still accurate (reactive interviewing). If it is, the instrument stores the previous wave amount into the current wave amount field and the interview proceeds. If the amount is not accurate, the respondent is probed for the correct amount.

³ Preloading information from a previous wave and feeding it back to the current wave is guided by the Census Bureau’s Respondent Identification Policy (RIP). This policy guides the sharing of personal information within a household (Gates 1998). Without approval from the respondent, information from the last wave cannot be brought back and used in cases where the respondent in Wave 2 is different from the respondent in Wave 1. For the purposes of this paper, we assume that the respondent has granted that permission.

The decision to use dependent interviewing in this manner was based on the desire to maximize the reporting of change between waves—we feared that presenting previously reported amounts up front might encourage respondents to acquiesce and accept previous-wave data to shorten the interview.
An example of the dependent interviewing path in the SIPP 2004 labor force earnings section is illustrated below:
>EASYWAY< “The next questions are about your
gross income from [employer name], BEFORE
taxes and other deductions. Last time, we used
annual amount to report amounts. Is that still a
good way to go?”
If Yes, go to AMOUNT.
>AMOUNT< “What is your current salary?”
If Don’t Know or Refused and an annual amount
was reported in the last wave, go to PWAMT1
If Don’t Know or Refused and no annual amount is
available from last wave, go to APXAMT1
>PWAMT1< “Things may have changed, but I
have recorded from the last time that your annual
income from this job was about [$ previous wave
amount]. Does that still sound right?”
If Yes, record previous wave amount into current
wave.
If No, go to PWAMT1FIX
If Don’t Know, go to TRYMNTH
If Refused, record refused into current wave
amount.
>PWAMT1FIX< “What is the correct amount?”
If Don’t Know, go to TRYMNTH
If Refused, record refused into current wave
amount.
>APXAMT1< “Can you give me an approximate
amount?”
If Don’t Know, go to TRYMNTH
If Refused, record refused into current wave
amount.
>TRYMNTH< “Let’s try monthly amounts. How
much did you receive BEFORE deductions from
[Employer Name]?”
If Don’t Know or Refused, record DK / R into
current wave amount.
In the 2004 Wave 2+ instruments, all of the labor force earnings screens also contain a feature which essentially allows respondents to self-select a dependent amount question. The “L” response option can be entered if the respondent says something like “whatever I said during the last interview.” If the dependent option turns out not to be available—either because there was no amount from the prior wave or there is a respondent identification problem—the instrument issues a subtle apology: “I am sorry. That information doesn’t seem to be in my computer.” It then proceeds to a Wave 1-type follow-up probe question, asking for an approximate amount.
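Taken together, the screens above define a small branching flow. The sketch below renders it in Python for concreteness; the screen names mirror the instrument, but the `ask` helper, the response codes, and the annual-method fill are illustrative assumptions.

```python
# Hypothetical rendering of the EASYWAY/AMOUNT/PWAMT1 path shown above.
# "DK" / "REF" denote don't know / refused; ask() stands in for a CAPI screen.

def ask(prompt: str) -> str:
    return input(prompt + " ").strip()

def try_month(employer: str) -> str:
    # TRYMNTH: the default monthly method, tried as a last resort.
    return ask(f"Let's try monthly amounts. How much did you receive "
               f"BEFORE deductions from {employer}?")

def wave2_amount(employer: str, prev_amount=None):
    # EASYWAY (proactive): remind the respondent of last wave's method.
    ask(f"Last time we used annual amounts for your income from {employer}. "
        "Is that still a good way to go?")
    answer = ask("What is your current salary?")                # AMOUNT
    if answer not in ("DK", "REF"):
        return answer
    if prev_amount is None:
        approx = ask("Can you give me an approximate amount?")  # APXAMT1
        return try_month(employer) if approx == "DK" else approx
    # PWAMT1 (reactive): feed back the previous-wave amount.
    verdict = ask(f"Things may have changed, but I have recorded that your "
                  f"annual income was about ${prev_amount}. "
                  "Does that still sound right?")
    if verdict == "Yes":
        return prev_amount                                      # carry forward
    if verdict == "No":
        fix = ask("What is the correct amount?")                # PWAMT1FIX
        return try_month(employer) if fix == "DK" else fix
    if verdict == "REF":
        return "REF"                                            # record refusal
    return try_month(employer)                                  # DK at PWAMT1
```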
4. Results
4.1 Reporting Options
Table 1 illustrates the distribution of how respondents reported income amounts from wage and salary jobs and businesses. Results from the 2001 and 2002 replicates are presented and separated by wave within each replicate. Across replicates and waves, the results are very similar: the majority of respondents to the new SIPP instrument (noted as TEST in the tables) chose to report wage and salary job earnings in a manner other than monthly. Over one-third typically chose to report by annual amounts, around 30 percent elected to report earnings via weekly/biweekly/bimonthly paychecks, and around 10 percent used the hourly pay rate option. Fewer than 20 percent chose to report using monthly aggregate amounts.
These distributions look very different from the 2001 SIPP (noted as CONTROL in the tables). As mentioned previously, while the old SIPP instrument allows wage/salary income to be reported in hourly pay rates or annual amounts, it does not explicitly encourage it; that is, the hourly and annual options are not read aloud to the respondent. This is evidenced by the fact that the majority of 2001 SIPP instrument respondents reported monthly amounts (around 80 percent across replicates and waves). In the case of reporting salaried business earnings, monthly is the only option the old instrument allows—in the new instrument, where other options exist, respondents reported most often by annual amounts, followed by monthly totals. We believe these findings are strong evidence that the new “periodicity” reporting method is a major step toward the goal of a more natural, efficient, and respondent-friendly questionnaire, one that takes advantage of computer-assisted automation by allowing flexibility in reporting options.
4.2 Item Nonresponse

For each labor force section, we computed a nonresponse ratio for each adult, in which the numerator was the number of “don’t know” and “refused” responses to income amount questions and the denominator was the total number of such questions administered during that person’s interview. We constructed this indicator to examine differences in the degree of item-level “missingness” between the old and new SIPP instruments. The averages of the individual nonresponse ratios across all adults, for each of the earnings sections, are summarized in Table 2. Again, the table is arranged by replicate and by wave within replicate. The Total Persons Earnings row represents the aggregate nonresponse ratios for all of the labor force earning items combined.
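As a concrete illustration of the metric (the response codes and data layout here are hypothetical), the ratio for one adult is simply missing answers over amount questions asked:

```python
# Illustrative computation of the person-level nonresponse ratio:
# (# "don't know" + # "refused") / (# income amount questions administered).

MISSING = {"DK", "REF"}   # don't know, refused

def nonresponse_ratio(responses):
    """responses: one code (or reported amount) per question administered."""
    if not responses:
        raise ValueError("no income amount questions administered")
    return sum(r in MISSING for r in responses) / len(responses)

# One adult asked six amount questions, answering two with "don't know":
print(round(nonresponse_ratio(["1500", "DK", "320", "2000", "DK", "75"]), 2))
# -> 0.33
```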
For all waves in each replicate, the new instrument yielded significantly lower levels of missing data for items dealing with person-level labor force earnings.⁴ For example, in Replicate 2 Wave 1, the new instrument had an average ratio of 0.11 missing items compared to 0.17 in the standard SIPP. The improvements are especially noticeable in the Wave 2 interviews (0.07 compared to 0.24 in Replicate 2, and 0.11 compared to 0.20 in Replicate 3). We surmise that the new dependent interviewing methods were responsible for the decreased nonresponse during the Wave 2 interviews.

⁴ T-tests were performed to detect significant differences between the test and control panels. A sample design effect is factored in when performing the tests; see Rottach (2003) for documentation. Tests are based on weighted data.

In all four field tests, the rate of nonresponse to the wage and salary items was smaller in the new instrument than in the control (in all but one, these differences were statistically significant). For three of the field tests, we see lower rates of nonresponse to the business earnings questions—two of which are significantly different: Replicate 2 Wave 2 and Replicate 3 Wave 1. This may be the result of restricting the business earnings questions in the 2004 instrument to those respondents who indicated a salary draw and/or work for an incorporated business. The intent was to tailor these questions to a more appropriate universe and reduce potential confusion and extra reporting burden on business owners whose primary earnings come from profits. Nonresponse ratios for moonlight earnings and net profits/losses were not significantly different between the new and old instruments. As a whole, we view the reductions in nonresponse as another major achievement for the new instrument, particularly since missing data for this section of the survey has historically been substantial.
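Rottach (2003) documents the actual test procedure; the following is only a rough sketch of how a design-effect adjustment enters such a comparison, with invented standard errors and deff for illustration.

```python
# Rough sketch of a design-effect-adjusted comparison of two mean
# nonresponse ratios. The SEs and deff below are invented for illustration;
# see Rottach (2003) for the documented procedure.
import math

def adjusted_z(mean1: float, se1: float, mean2: float, se2: float,
               deff: float = 1.0) -> float:
    """Two-sample statistic with SRS variances inflated by a design effect."""
    return (mean1 - mean2) / math.sqrt(deff * (se1 ** 2 + se2 ** 2))

# e.g., control ratio .17 (SE .02) vs. test ratio .11 (SE .02), deff = 1.3:
z = adjusted_z(0.17, 0.02, 0.11, 0.02, deff=1.3)
print(round(z, 2))   # ~1.86, beyond the 1.645 cutoff at the .10 level
```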
4.3 Amounts

The final tables in our analysis examine monthly mean earnings across the old and new SIPP instruments. We present two tables: Table 3 reflects total person labor force earnings; Table 4 represents only business earnings. Table 3 combines all income reported from wage and salary jobs (1st and 2nd jobs), contingent jobs, salary from businesses (1st and 2nd businesses), business profits, other business income, and moonlight income. For the 2004 instrument, this also includes income from new questions that specifically ask about tips, commissions, bonuses, or overtime, and income from 3rd+ jobs or 3rd+ businesses. (In the old SIPP, these amounts were presumably collected by probes following each monthly amount question.) The combined business earnings reflect only income reported from incorporated or salaried businesses and income reported as net profits from a business.
Table 3 illustrates the mean combined person earnings by
replicate, wave, panel and month. Values in parentheses
reflect the maximum values for each month. These are
presented to illustrate outliers that may have impacted mean
values. Table 3 suggests that, despite the very different data
collection methods in the SIPP 2004, the total person
earnings are very comparable to the monthly amounts
collected in the old SIPP instrument. In fact, no significant
differences were found in mean amounts for any of the four
reference months in any of the field tests. This is reassuring
and suggests that, despite all the revisions in the labor force
section, the new SIPP instrument should produce estimates
similar to those collected during previous SIPP panels.
Turning to business income amounts, with the exception of
Replicate 2 Wave 1, there were no significant differences in
mean monthly amounts between the old and new
instruments. The difference in Month 2 from this test is
likely the result of a large outlier case ($173,333) in the new
instrument sample. In Replicate 3, there are several months
where business means are noticeably larger for the old SIPP,
but the differences are not statistically significant. These
months also had large outlier values which obviously
influence the mean values. In these cases, however, the
medians are actually quite comparable between the old and
new instrument (not shown).
5. Impact of Changes to the SIPP Data
Our field tests suggest that revisions to the SIPP instrument
will yield better quality data in the form of lower item
nonresponse but, at the same time, should not significantly
alter employment income estimates. However, because of
format changes, universe changes, and new questions and
question logic skips, the 2004 instrument output will be
different in some respects from the 2001 panel data.
For example, in several cases the universe for a question is smaller in SIPP 2004 because answers can be logically inferred from a previous question. An example is a respondent who works in a family business without pay or indicates he does only unpaid work. In these cases, if the respondent indicates he had weeks not worked during the reference period, the instrument will skip the question about whether he was paid for those weeks not worked and instead pre-fill “no” to the question. The universe for the output will not change because the instrument automatically pre-fills the answer for the newly skipped item. Thus, many of the differences will be transparent to data users because the new data will be mapped back to the old before data files are made available to the public. Detailed information about new or changed items that impact question content, universes, or values is documented on a CD-ROM (see Doyle, Eargle and Villa, 2003).
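A minimal sketch of this kind of inference-driven skip follows; the item and variable names are hypothetical stand-ins for the instrument's actual items.

```python
# Illustrative sketch: when an answer is logically implied by an earlier item,
# the instrument skips the question and pre-fills the answer, so the output
# universe is unchanged. Item and variable names are hypothetical.

def paid_for_weeks_not_worked(unpaid_work_only, had_weeks_not_worked, ask):
    if not had_weeks_not_worked:
        return None                # item is out of universe entirely
    if unpaid_work_only:
        return "no"                # inferred answer: skip and pre-fill "no"
    return ask("Were you paid for any of the weeks you did not work?")

# An unpaid family-business worker never sees the question:
print(paid_for_weeks_not_worked(True, True, input))   # -> "no" (pre-filled)
```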
6. Conclusions

The goal of the SIPP Methods Panel was to produce a new instrument for the 2004 SIPP that was streamlined, efficient, respondent-friendly, and capable of collecting higher quality income data. To achieve this goal, we incorporated new methods—including flexible respondent-identified reporting options, new follow-up probes to encourage response, and the liberal use of both proactive and reactive dependent interviewing. We took advantage of computer-assisted automation by placing more calculation burdens on the computer and fewer on the respondent.

To evaluate whether our changes were successful, we examined the distribution of reporting options for wage and salary and business income, levels of item nonresponse for earnings questions, and the monthly mean income amounts for the various components of labor force income. Our results were consistent across the four field tests. Respondents reacted favorably to the non-monthly reporting options available for the first time in the 2004 instrument (namely, paycheck amounts and annual salary options). This new method, coupled with the use of dependent interviewing in Wave 2, also paid dividends in terms of item response: compared to the traditional SIPP instrument, the 2004 version consistently had a lower rate of missing data for person earnings questions. We view this as a major achievement, since labor force earnings constitute the lion’s share of household income yet often suffer from non-trivial levels of nonresponse.

At the same time, however, the dollar amounts collected using the new method were (for the most part) no different from the monthly amounts reported in the traditional SIPP instrument. The total monthly mean and median person earnings were equivalent across all four field tests, suggesting that income estimates will not be affected by the new design changes. The new SIPP panel will go into full production in January 2004. Based on our extensive research and testing during the Methods Panel field tests, we feel confident our goal of improving income data quality for the SIPP will be realized.

7. References
Atrostic, B.K., and Kalenkoski, C. (2002), “Item Response
Rates: One Indicator of How Well We Measure Income.”
Proceedings of the American Statistical Association, Survey
Research Methods Section [CD-ROM], Alexandria, VA:
American Statistical Association, pp. 94-99.
Bates, N. (2003), “Results of SIPP Methods Panel Labor
Force Sections Replicate 3 Wave 1.” Internal U.S. Census
Bureau Memorandum from Nancy Bates to SIPP Executive
Committee.
Brown, A., Hale, A., and Michaud, S. (1998), “Use of Computer Assisted Interviewing in Longitudinal Surveys.” Chapter 10 (pp. 185-200) in Couper, M., R. Baker, J. Bethlehem, C. Clark, J. Martin, W. Nicholls, and J. O’Reilly (eds.), Computer Assisted Survey Information Collection, New York: John Wiley & Sons.
Dibbs, R., Hale, A., Loverock, R., and Michaud, S. (1995),
“Some Effects of Computer Assisted Interviewing on the
Data Quality of the Survey of Labour and Income
Dynamics.” Proceedings of the International Conference on
Survey Measurement and Process Quality, Alexandria, VA:
American Statistical Association, pp. 174-177.
Dixon, J., Doyle, P., Eargle, J., Griffin, D., McGovern, P.
(2003), “Quality at the Item Level: Terms, Methods and
Guidelines for Cross-Survey Comparisons - A Summary.”
Paper presented at the Federal Committee on Statistical
Methodology Research Conference, November 18,
Arlington, VA.
Doyle, P., Eargle, J., and Villa, C. (2003), “Survey of
Income and Program Participation – A New Instrument for
2004.” Paper prepared for the annual JSM Conference of the
American Statistical Association. San Francisco, CA, August
3-7 2003.
Doyle, P., Martin, E., and Moore, J. (2000), “The Survey of Income and Program Participation (SIPP) Methods Panel Improving Income Measurement.” SIPP Working Paper No. 234, November 13. U.S. Bureau of the Census.

Gates, G. (1998), “Respondent Identification Policy.” Internal U.S. Census Bureau Memorandum from Gerald Gates to Chet Bowie, August 6.
Hill, D. (1994), “The Relative Empirical Validity of Dependent and Independent Data Collection in a Panel Survey.” Journal of Official Statistics, Vol. 10, pp. 359-380.
Mathiowetz, N., and McGonagle, K. (2000), “An Assessment of the Current State of Dependent Interviewing in Household Surveys.” Journal of Official Statistics, Vol. 16, No. 4, pp. 401-418.
Moore, J., Pascale, J., Doyle, P., Chan, A., and Griffiths, J.
(2002), “The SIPP Methods Panel Project: Using Field
Experiments to Improve Instrument Design.” Monograph
Paper for the International Conference on Questionnaire
Development, Evaluation and Testing Methods (QDET),
Charleston, SC, November 14-17 2002.
Roemer, M. I. (2000), “Assessing the Quality of the March
Current Population Survey and The Survey of Income and
Program Participation Income Estimates, 1990-1996.”
Housing and Household Economics Statistics Division,
June, U.S. Census Bureau, Washington D.C.
Rottach, R. (2003), “Design Effects for the SIPP Methods
Panel (MPSIPP).” Internal U.S. Census Bureau
Memorandum from R.A. Rottach to the Record, June 25.
U.S. Census Bureau (2000), “SIPP Methods Panel Cognitive Testing Results (Replicate 1, Round 1) - Section E: Earnings.” February 29. Unpublished report.
Table 1. Preferred Methods of Income Reporting by Respondents in SIPP Methods Panel, by Panel and Replicate

Wages/Salary                           Replicate 2 (2001)          Replicate 3 (2002)
                                       Wave 1        Wave 2        Wave 1        Wave 2
                                       Ctrl   Test   Ctrl   Test   Ctrl   Test   Ctrl   Test
Monthly totals                          83%    18%    86%    20%    77%    17%    79%    15%
Weekly/biweekly/bimonthly paychecks     N/A    30%    N/A    30%    N/A    31%    N/A    30%
Hourly pay rates                         5%    10%     2%     9%     5%    13%     3%    11%
Quarterly amount                        N/A    N/A    N/A    N/A    N/A    N/A    N/A    N/A
Annual amount                           12%    36%    12%    36%    18%    31%    19%    37%
None of the above                       N/A    N/A    N/A    N/A    N/A     3%    N/A     2%
[row label illegible in source]         N/A     6%    N/A     4%    N/A     6%    N/A     6%
N                                      1219    964   1148    879   1516   1367   1429   1152

Business                               Replicate 2 (2001)          Replicate 3 (2002)
                                       Wave 1        Wave 2        Wave 1        Wave 2
                                       Ctrl   Test   Ctrl   Test   Ctrl   Test   Ctrl   Test
Monthly totals                         100%    24%   100%    38%   100%    43%   100%    37%
Paychecks                               N/A    12%    N/A     8%    N/A    10%    N/A    10%
Hourly pay rates                        N/A     0     N/A     1%    N/A     0     N/A     0
Quarterly amount                        N/A     0     N/A    11%    N/A     1%    N/A     3%
Annual amount                           N/A    49%    N/A    31%    N/A    41%    N/A    43%
None of the above                       N/A    N/A    N/A    N/A    N/A    N/A    N/A    N/A
[row label illegible in source]         N/A    14%    N/A    12%    N/A     4%    N/A     7%
N                                       152     65    126     92     92     69    154     73
Table 2. Labor Force II Aggregate Nonresponse Metrics in SIPP Methods Panel, by Panel and Replicate

                                       Replicate 2 (2001)          Replicate 3 (2002)
                                       Wave 1        Wave 2        Wave 1        Wave 2
                                       Ctrl   Test   Ctrl   Test   Ctrl   Test   Ctrl   Test
Total Persons Earnings                  .17   .11**   .24   .07**   .19   .16*    .20   .11**
Wage/Salary                             .16   .06**   .23   .05**   .16   .14     .19   .09**
Business Earnings                       .19   .23     .25   .03**   .34   .19*    .27   .18
Moonlight/Odd Jobs/Self-Employment      .07   .20     .24   .21     .04   .03     .10   .14
Net Profit/Loss                         .39   .38     .34   .33     .35   .36     .32   .39

*Significant difference at the .10 level. **Significant difference at the .05 level. T-tests were used to detect differences between ratios.
Table 3. Mean Combined Person Earnings in SIPP Methods Panel, by Month, Panel, and Replicate (monthly maximum values in parentheses)

                      Month 1           Month 2           Month 3           Month 4
Replicate 2 (2001)
 Wave 1   Control     $3485 (40,000)    $3371 (49,000)    $3402 (141,400)   $3538 (70,000)
          Test        $3201 (54,167)    $3479 (173,333)   $3214 (40,000)    $3209 (54,167)
 Wave 2   Control     $3159 (38,460)    $3219 (38,460)    $3271 (60,000)    $3267 (68,000)
          Test        $3158 (40,000)    $3165 (40,000)    $3219 (40,000)    $3175 (40,000)
Replicate 3 (2002)
 Wave 1   Control     $3360 (90,000)    $3211 (70,000)    $3256 (80,000)    $3259 (90,000)
          Test        $3166 (35,833)    $3160 (33,333)    $3124 (33,333)    $3197 (91,657)
 Wave 2   Control     $3162 (53,000)    $3148 (53,000)    $3247 (130,000)   $3351 (70,000)
          Test        $3226 (41,658)    $3244 (41,658)    $3232 (41,658)    $3266 (41,658)
Table 4. Mean Combined Person Business Earnings in SIPP Methods Panel, by Month, Panel, and Replicate (monthly maximum values in parentheses)

                      Month 1           Month 2           Month 3           Month 4
Replicate 2 (2001)
 Wave 1   Control     $4217 (25,000)    $4359 (27,500)    $3773 (16,500)    $4719 (62,000)
          Test        $6156* (23,333)   $9480* (173,333)  $5814* (23,333)   $5888 (25,000)
 Wave 2   Control     $3950 (30,000)    $3450 (30,000)    $4453 (60,000)    $3945 (30,000)
          Test        $4324 (15,000)    $4213 (15,000)    $4200 (15,000)    $4098 (15,000)
Replicate 3 (2002)
 Wave 1   Control     $8191 (100,000)   $5771 (70,000)    $6016 (80,000)    $8386 (100,000)
          Test        $4746 (20,000)    $4680 (20,833)    $4619 (25,000)    $4697 (20,833)
 Wave 2   Control     $4963 (30,000)    $4778 (30,000)    $8884 (130,000)   $5198 (30,000)
          Test        $4456 (20,833)    $4604 (20,833)    $4503 (20,833)    $4671 (20,833)

*Significant difference at the .10 level. **Significant difference at the .05 level. T-tests were used to detect differences in mean values.