
Nonresponse in Household Interview Surveys

A comprehensive framework for both reduction of nonresponse and postsurvey adjustment for nonresponse

This book provides guidance and support for survey statisticians who need to develop models for postsurvey adjustment for nonresponse, and for survey designers and practitioners attempting to reduce unit nonresponse in household interview surveys. It presents the results of an eight-year research program that has assembled an unprecedented data set on respondents and nonrespondents from several major household surveys in the United States.

Within a comprehensive conceptual framework of influences on nonresponse, the authors investigate every aspect of survey cooperation, from the influences of household characteristics and social and environmental factors to the interaction between interviewers and householders and the design of the survey itself.

Nonresponse in Household Interview Surveys:
* Provides a theoretical framework for understanding and studying household survey nonresponse
* Empirically explores the individual and combined influences of several factors on nonresponse
* Presents chapter introductions, summaries, and discussions on practical implications to clarify concepts and theories
* Supplies extensive references for further study and inquiry

Nonresponse in Household Interview Surveys is an important resource for professionals and students in survey methodology/research methods as well as those who use survey methods or data in business, government, and academia. It addresses issues critical to dealing with nonresponse in surveys, reducing nonresponse during survey data collection, and constructing statistical compensations for the effects of nonresponse on key survey estimates.

Contents:
Chapter 1 An Introduction to Survey Participation (pages 1–24):
Chapter 2 A Conceptual Framework for Survey Participation (pages 25–46):
Chapter 3 Data Resources for Testing Theories of Survey Participation (pages 47–77):
Chapter 4 Influences on the Likelihood of Contact (pages 79–117):
Chapter 5 Influences of Household Characteristics on Survey Cooperation (pages 119–154):
Chapter 6 Social Environmental Influences on Survey Participation (pages 155–189):
Chapter 7 Influences of the Interviewers (pages 191–217):
Chapter 8 When Interviewers Meet Householders: The Nature of Initial Interactions (pages 219–245):
Chapter 9 Influences of Householder–Interviewer Interactions on Survey Cooperation (pages 247–267):
Chapter 10 How Survey Design Features Affect Participation (pages 269–293):
Chapter 11 Practical Survey Design Acknowledging Nonresponse (pages 295–321):
Year: 1998
Language: English
Pages: 356
ISBN 13: 9781118490082
Nonresponse in
Household Interview Surveys

Editors: Robert M. Groves, Graham Kalton, J. N. K. Rao, Norbert Schwarz,
Christopher Skinner
A complete list of the titles in this series appears at the end of this volume.

Nonresponse in
Household Interview Surveys

University of Michigan
Ann Arbor, Michigan
Joint Program in Survey Methodology
College Park, Maryland


A Wiley-Interscience Publication
New York · Chichester · Weinheim · Brisbane · Singapore · Toronto

This text is printed on acid-free paper.
Copyright © 1998 by John Wiley & Sons, Inc.
All rights reserved. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any
form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise,
except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without
either the prior written permission of the Publisher, or authorization through payment of the
appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA
01923, (978) 750-8400, fax (978) 750-4744. Requests to the Publisher for permission should be
addressed to the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue, New York,
NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail: PERMREQ @ WILEY.COM.
Library of Congress Cataloging in Publication Data:
Groves, Robert M.
Nonresponse in household interview surveys / Robert M. Groves,
Mick P. Couper.
p. cm. — (Wiley series in probability and statistics.
Survey methodology section)
"Wiley-Interscience publication."
Includes bibliographical references and index.
ISBN 0-471-18245-1 (cloth : alk. paper)
1. Household surveys. I. Couper, Mick. II. Title. III. Series.
HB849.49.G757 1998
Printed in the United States of America
10 9 8 7 6 5 4 3


1. An Introduction to Survey Participation

Introduction, 1
Statistical Impacts of Nonresponse on Survey Estimates, 1
How Householders Think about Survey Requests, 15
How Interviewers Think about Survey Participation, 18
How Survey Design Features Affect Nonresponse, 20
The Focus of this Book, 22
Limitations of this Book, 22
Summary, 23

2. A Conceptual Framework for Survey Participation
Introduction, 25
Practical Features of Survey Nonresponse Needing Theoretical
Explanation, 25
A Conceptual Structure for Survey Participation, 29
Implications for Research, 42
Practical Implications for Survey Implementation, 45
Summary, 46
3. Data Resources for Testing Theories of Survey Participation

Introduction, 47
Approaches to Studying Nonresponse, 49
Qualitative Data from Interviewer Group Discussions, 51


Decennial Census Match of Survey Records to
Census Records, 51
Documentation of Interaction Between Interviewers
and Householders, 64
Surveys of Interviewers, 72
Measures of Social and Economic Ecology of Sample
Households, 74
Limitations of the Tests of the Theoretical Perspective, 76
Summary, 77

4. Influences on the Likelihood of Contact

Introduction, 79
Social Environmental Indicators of At-Home Patterns, 85
Household-Level Correlates of Contactability, 88
Interviewer-Level Correlates of Contactability, 94
Call-Level Influences on Contacting Sample Households, 95
Joint Effects of Multiple Levels on Contactability, 102
Summary, 114
Practical Implications for Survey Implementation, 115

5. Influences of Household Characteristics on Survey Cooperation

Introduction, 119
Opportunity Cost Hypotheses, 121
Exchange Hypotheses, 125
Social Isolation Hypotheses, 131
The Concept of Authority and Survey Cooperation, 141
Joint Effects of Indicators of Social Isolation and Authority, 143
Other Household-Level Influences on Cooperation, 145
Multivariate Models of Cooperation involving
Household-Level Predictors, 146
Summary, 150
Practical Implications for Survey Implementation, 153

6. Social Environmental Influences on Survey Participation

Introduction, 155
Trends in Response Rates over Time, 156
Cross-National Differences in Response Rates on
Similar Surveys, 172
"Natural Experiments" at the Societal Level, 173




Subnational Variation in Survey Cooperation, 175
Analysis of Environmental Influences on Cooperation, 179
Bivariate Relationships of Survey Cooperation and
Environmental Factors, 180
Marginal Effects of Individual Environmental Factors, 182
Summary, 185
Practical Implications for Survey Implementation, 187

7. Influences of the Interviewers

Introduction, 191
Interviewer Effects on Cooperation, 192
The Role and Task of Interviewers, 195
Socio-Demographic Characteristics of Interviewers, 196
Interviewer Personality, 198
Interviewer Experience, 200
Interviewer Attitudes and Expectations Regarding
Nonresponse, 205
Interviewer Behaviors, 209
Multivariate Models of Interviewer-Level Effects, 211
Summary, 215
Practical Implications for Survey Implementation, 215

8. When Interviewers Meet Householders: The Nature of
Initial Interactions


Introduction, 219
The Initial Interaction from the Householder's Perspective, 219
Cues for Judging the Intent of the Interviewer, 225
Interaction from the Interviewer's Perspective, 227
Empirical Measurement of the Interactions between Interviewers
and Householders, 230
Nature of the Householder-Interviewer Interaction, 231
Summary, 244

9. Influences of Householder-Interviewer Interactions on
Survey Cooperation


Introduction, 247
Tailoring, 248
Maintaining Interaction, 249
Useful Concepts Related to Tailoring, 250





Past Research on Interviewer-Householder Interaction
Affecting Cooperation, 252
Predicting the Outcome of Contacts Using Characteristics of
the Interaction, 253
Effects of Interviewer-Householder Interaction on the
Final Disposition of Sample Households, 261
Summary, 264
Practical Implications for Survey Implementation, 265

10. How Survey Design Features Affect Participation

Introduction, 269
The Balance of Cost, Timeliness, Measurement, and
Survey Errors, 270
Survey Design Features Affecting Likelihood of Contact
of Sample Households, 271
Survey Design Features Affecting Cooperation, 274
Summary, 292

11. Practical Survey Design Acknowledging Nonresponse



Introduction, 295
Selection of Sampling Frames, 297
Choice of Mode of Data Collection, 299
Design of Measurement Instruments, 302
Selection and Training of Interviewers, 306
Call Attempts on Sample Units, 307
The First-Contact Protocol, 308
Efforts at Nonresponse Reduction after the First Contact, 309
Postsurvey Adjustments for Unit Nonresponse, 310
Summary, 319






This book was written out of frustration. Its genesis came in 1986-1988, when a review of the then-extant research literature on survey nonresponse yielded few answers to the question, "How important is nonresponse to surveys?"
In teaching courses in survey methodology, it was common for us to emphasize
that once a probability sample had been drawn, full measurement of the sample was
crucial for proper inference. Bright students would sometimes ask,
"How do we know when nonresponse implies error and when it doesn't? Is it cheaper and more effective to reduce nonresponse error by decreasing nonresponse rates
or by adjusting for it post hoc? Is it more important to reduce nonresponse due to
noncontact or nonresponse due to refusals? Why, after all, do people choose not to
cooperate with survey requests?" We felt unprepared for such questions and, indeed,
grew to believe that the lack of answers was a pervasive weakness in the field, not
just a result of our ignorance.
Gathering information post hoc about nonrespondents from diverse surveys,
which formed one of the central databases of this book, was an attempt to address a
critical weakness in the area—the lack of common information about nonresponse
across several surveys. (This was an idea stolen from Kemsley, who in 1971, mounted such a study in Great Britain.) Around 1988, the major U.S. federal household
surveys were beginning to consider redesign efforts related to incorporating new
population distribution data from the 1990 decennial census. We approached Maria
Gonzalez, of the Statistical Policy Office of the Office of Management and Budget,
who was leading an interagency group developing those redesign research plans.
Our idea was to draw samples of nonrespondents and respondents from household
surveys conducted about the time of the decennial census, and match their records
to the decennial census data. We would thus have at our disposal all variables on the
census form to describe the nonrespondents.
This was an idea whose time had clearly not come to the interagency group. We
received tough criticism on what practical lessons would be learned; how would
those surveys be improved because of the work, and so on. Maria should be credited
with quietly listening to the criticism, but forcefully arguing the merits of our case
to the survey sponsors. We dedicate this book to her memory.



What appear to the reader as 11 chapters of theory and analysis are based on
many person-years of effort, during which the perspectives on the conceptual foundations of survey participation evolved. Some history of the project may provide a
sense of that process.
We sought to develop a diverse set of surveys to match to the decennial records.
Ideally, we wanted to represent all major survey design variations among the
matched surveys. However, the match tool was to be a unit's address, so we were
limited to area frame surveys, most often conducted in face-to-face mode. We failed
to get cooperation from the commercial surveys we approached. We failed to get extra funds to add some academic surveys to the set.
In the end we established a consortium of funders including the Bureau of the
Census, Bureau of Justice Statistics (BJS), Bureau of Labor Statistics (BLS), National Center for Health Statistics (NCHS) and the National Institute on Drug Abuse
[NIDA, later called the Substance Abuse and Mental Health Services Administration (SAMHSA)]. Research Triangle Institute and the National Opinion Research
Center also provided documentation on surveys sponsored by NIDA and Census,
respectively, to facilitate the match and administered questionnaires to their interviewers. At each agency there were key contact people, who facilitated our work.
These were William Nicholls, Robert Tortora, and Jay Waite (Census Bureau),
Cathryn Dippo and Clyde Tucker (BLS), Michael Rand (BJS), Steve Botman
(NCHS), and Joseph Gfroerer (SAMHSA).
Completely independent of this research program, in early 1990, Groves took on
a temporary post as an Associate Director at the Bureau of the Census, as the project was nearing its implementation. Couper simultaneously took a post as visiting
researcher at Census. This permitted Couper to focus full time on the project between 1990 and 1994.
In 1989, samples were drawn from the Census Bureau surveys, mostly by staff in
the Statistical Methods Division of the Bureau, under the direction of Jay Waite.
John Paletta coordinated the selection of match cases. After the Census, in
1991-1992, Couper began a commuting life between Washington, DC, and Jeffersonville, Indiana, the vast processing complex for the U.S. Census Bureau. There he
worked with a team headed by Judith Petty. The leader of the match team, Maria
Darr, collaborated in defining the match methods, training and supervising staff,
and implementing quality control procedures. Couper directed the match effort, living out of a suitcase, eating too many meals at the Waffle House in Jeffersonville
(whose broken sign read "affle House"). Matching survey and census records was a
tedious, slow process, but the care and professionalism of the Jeffersonville staff
produced a match data set that we believe is as complete and accurate as possible.
Acquiring the completed survey data, cleaning data, merging files, determining
weighting schemes, variance estimators, and appropriate modeling techniques took
some time after the completion of the match in 1993.
We are both indebted to the executive staff of the Census Bureau, which provided
a research cocoon at Suitland, permitting Couper to focus entirely on the research
activities at crucial times during the match process, and Groves to join him after
ending his stint as associate director in 1992.



However, the work of the decennial match project was not our only focus during
the years 1988-1992. Even while the match project was being discussed, two other
lines of research were developing. The first was a refinement of conceptual thinking
on the process of survey participation. This was partially funded by the Census Bureau and was a collaborative effort with Robert Cialdini, a social psychologist who
has made important contributions to understanding helping behavior and compliance. We collaborated in a series of focus groups with interviewers from different
organizations, seeking insights from their expertise in gaining the cooperation of
persons in surveys. This led to a basic framework of influences on survey participation that forms the structure of this book. Cialdini provided important insights about
how survey participation decisions might resemble other decisions about requests
and, more broadly, to attitude change. We are in his debt, especially for the insight
one Saturday morning that most decision making in the survey context must be
heuristically based, ill-informed by the central features of the respondent's job in a
When our interest grew concerning the effect of the social environment of survey
participation, we joined with Lars Lyberg, our friend and colleague, to organize a
set of international workshops on household survey nonresponse, starting in 1990.
These gave researchers in different countries a chance to compare notes on survey
participation across societies. The workshops have stimulated the replication of
nonresponse research across countries. Our own research has benefitted from such
replication. We have also learned much from the interactions and enjoyed the camaraderie. We thank the regulars at the meetings, including Lars, Bob Barnes, Sandy
Braver, Pam Campanelli, Cathy Dippo, Wim de Heer, Lilli Japec, Seppo Laaksonen,
Clyde Tucker, and many others.
The other line of research that arose in 1990 involved chances to test empirically
our ideas with new data collection efforts. Through the good graces of our colleague
Ron Kessler, we smuggled into the National Comorbidity Survey a set of interviewer observations that permitted key initial tests of our notions of the influence of contact-level interactions. This survey was supported by the National Institute of
Mental Health (Grants MH46376 and MH49098). Later we received support from
the National Institute on Aging (Grant R01 AG31059) to add similar measures to
the AHEAD survey, which permitted tests of the ideas on a survey of the elderly.
Bill Rodgers and Tom Juster were very supportive of including these in AHEAD.
Both of these grants were important to Chapters 8, 9, and Chapter 11 of this text.
After the match project data were available, Trivellore Raghunathan became a
collaborator when he joined the Survey Methodology Program at Michigan. He helped translate our ideas and findings into a statistical modeling strategy
for postsurvey adjustment. Raghu deserves full credit for the two-stage adjustment
procedures in Chapter 11.
Audience. We've written the book for students of survey methodology: those in
school, practicing in the field, and teaching the subject. We assume basic knowledge
of survey design, at a level comparable to most initial undergraduate survey methods courses. The statistical models are deliberately kept simple.



Those readers with limited time should read Chapters 1 and 2 in order to understand the conceptual framework. Then they should read the summary sections of
each chapter, as well as Chapter 11.
Those readers most interested in the practical implications of the work should
read the last sections of Chapters 4-9, labeled "Practical Implications for Survey
Implementation" as well as Chapters 10 and 11.
In using the book as a text in a course on survey nonresponse we have used
Chapters 2 and 4-10.
Collaborators. In addition to those mentioned above, other stimulating colleagues
helped shape the research. These include Toni Tremblay and Larry Altmayer at the
U.S. Census Bureau, and Joe Parsons, Ashley Bowers, Nancy Clusen, Jeremy Morton, and Steve Hanway at the Joint Program in Survey Methodology. Lorraine McCall was responsible for the interviewer surveys at the Census Bureau. Teresa Parsley Edwards and Rachel Caspar at Research Triangle Institute worked with us on
parts of the analysis of the National Household Survey on Drug Abuse. Brian Harris-Kojetin, John Eltinge, Dan Rope, and Clyde Tucker examined various features of
nonresponse in the Current Population Survey. Judith Clemens, Darby Miller Steiger, Stacey Erth, and Sue Ellen Hansen provided assistance at various points
during the work, especially on the Michigan Survey Research Center surveys. We
appreciate the criticisms of a set of students in a summer course on survey nonresponse in 1994 offered through the SRC Summer Institute in Survey Research Techniques. Finally, the administrative staff of the Joint Program in Survey Methodology, including Jane Rice, Pam Ainsworth, Nichole Ra'uf, Christie Nader, and
Heather Campbell, provided help at many crucial points.
We are members of the Survey Methodology Program (SMP) at the University of
Michigan's Institute for Social Research, a research environment that stimulates
theoretical questions stemming from applied problems. We thank our SMP colleagues for helping us think through much of the material we present in this book.
Jim House, as director of the Survey Research Center, has been a consistent supporter of bringing science to survey methodology and we thank him for being there.
We have profited from critical reviews by Paul Biemer, John Eltinge, Robert Fay,
Brian Harris-Kojetin, Lars Lyberg, Nancy Mathiowetz, Beth-Ellen Pennell, Stanley
Presser, Eleanor Singer, Seymour Sudman, Roger Tourangeau, and Clyde Tucker.
Errors remaining are our responsibility.
We are especially indebted to Northwest Airlines, whose many delayed and cancelled flights between Detroit Metro and Washington National airports permitted
long and uninterrupted discussions of the research.
Finally, we thank our editor at Wiley, Steve Quigley, for making the publication
process as trouble-free as possible.
Ann Arbor, Michigan
College Park, Maryland


We are grateful to various copyright holders for permission to reprint or present
adaptations of material previously published. These include the University of Chicago Press, on behalf of the American Association for Public Opinion Research, for
adaptation of material from Groves, Cialdini, and Couper (1992) and Couper
(1992), appearing in Chapters 2 and 10, and for reprinting a table from Dillman,
Gallegos, and Frey (1976), as Table 10.1; the Minister of Industry of Canada,
through Statistics Canada for adaptation of Couper and Groves (1992) in Chapter 7;
Statistics Sweden, for adaptation of Groves, R.M., and Couper, M.P. (1995) "Theoretical Motivation for Post-Survey Nonresponse Adjustment in Household Surveys," 11,1, 93-106, in Chapter 9; and "Contact-Level Influences on Cooperation
in Face-to-Face Surveys," 12, 1, 63-83, in Chapter 8; Kluwer Academic Publishers,
for adaptations of Couper, M.P., and Groves, R.M. (1996) "Social Environmental
Impacts on Survey Cooperation," 30, 173-188, in Chapter 6; and Jossey-Bass Publishers for adaptation of Couper and Groves (1996) in Chapter 5.


Nonresponse in
Household Interview Surveys

Nonresponse in Household Interview Surveys
by Robert M. Groves and Mick P. Couper
Copyright © 1998 John Wiley & Sons, Inc.



An Introduction to
Survey Participation



This is a book about error properties of statistics computed from sample surveys. It
is also a book about why people behave the way they do.
When people are asked to participate in sample surveys, they are generally free
to accept or reject that request. In this book we try to understand the several influences on their decision. What influence is exerted by the attributes of survey design,
the interviewer's behavior, the prior experiences of the person faced with the request, the interaction between interviewer and householder, and the social environment in which the request is made? In the sense that all the social sciences attempt
to understand human thought and behavior, this is a social science question. The interest in this rather narrowly restricted human behavior, however, has its roots in the
effect these behaviors have on the precision and accuracy of statistics calculated on
the respondent pool resulting from the survey. It is largely because these behaviors affect the quality of sample survey statistics that we study the phenomenon.
This first chapter sets the stage for this study of survey participation and survey
nonresponse. It reviews the statistical properties of survey estimates subject to nonresponse, in order to describe the motivation for our study, then introduces key concepts and perspectives on the human behavior that underlies the participation phenomenon. In addition, it introduces the argument that will be made throughout the
book—that attempts to increase the rate of participation and attempts to construct
statistical adjustment techniques to reduce nonresponse error in survey estimates
achieve their best effects when based on sound theories of human behavior.
Sample surveys are often designed to draw inferences about finite populations, by
measuring a subset of the population. The classical inferential capabilities of the



survey rest on probability sampling from a frame covering all members of the population. A probability sample assigns known, nonzero chances of selection to every
member of the population. Typically, large amounts of data from each member of
the population are collected in the survey. From these variables, hundreds or thousands of different statistics might be computed, each of which is of interest to the researcher only if it describes well the corresponding population attribute. Some of
these statistics describe the population from which the sample was drawn; others
stem from using the data to test causal hypotheses about processes measured by the
survey variables (e.g., how education and work experience in earlier years affect
salary levels).
One example statistic is the sample mean, an estimator of the population mean.
This is best described by using some statistical notation, in order to be exact in our
meaning. Let one question in the survey be called "Y" and the answer to that question for a sample member, say the ith member of the population, be designated by
Y_i. Then we can describe the population mean by

\bar{Y} = \frac{1}{N} \sum_{i=1}^{N} Y_i

where N is the number of units in the target population. The estimator of the population mean is often

\bar{y}_r = \frac{\sum_{i=1}^{r} w_i y_i}{\sum_{i=1}^{r} w_i}

where r is the number of respondents in the sample and w_i is the reciprocal of the
probability of selection of the ith respondent. (For readers accustomed to equal
probability samples, as in a simple random sample, the w_i is the same for all cases in
the sample and the computation above is equivalent to \sum y_i / r.)
One problem with the sample mean as calculated above is that it does not contain any information from the nonrespondents in the sample. However, all the desirable inferential properties of probability sample statistics apply to the statistics
computed on the entire sample. Let's assume that in addition to the r respondents to
the survey, there are m (for "missing") nonrespondents. Then the total sample size is
n = r + m. In the computation above we miss information on the m missing cases.
How does this affect our estimation of the population mean, \bar{Y}? Let's first make
a simplifying assumption. Assume that everyone in the target population is either,
permanently and forevermore, a respondent or a nonrespondent. Let the entire target population thereby be defined as N = R + M, where the capital letters denote
numbers in the total population.
Assume that we are unaware, at the time of sample selection, of which stratum each person belongs to. Then, in drawing our sample of size n, we will likely select some respondents and some nonrespondents. They total n in all cases, but the actual number of respondents and nonrespondents in any one sample will vary. We
know that, in expectation, the fraction of sample cases that are respondent should be




equal to the fraction of population cases that lie in the respondent stratum, but there
will be sampling variability about that number. That is, E(r) = fR, where f is the sampling fraction used to draw the sample from the population. Similarly, E(m) = fM.
For each possible sample we could draw, given the sample design, we could express the relationship between the full sample mean, \bar{y}_n, and the respondent mean in the
following way:

\bar{y}_n = \frac{r}{n} \bar{y}_r + \frac{m}{n} \bar{y}_m

which, with a little manipulation, becomes

\bar{y}_r = \bar{y}_n + \frac{m}{n} (\bar{y}_r - \bar{y}_m)

that is,
Respondent Mean = Total Sample Mean + (Nonresponse Rate)
x (Difference between Respondent and
Nonrespondent Means)
This shows that the deviation of the respondent mean from the full sample mean
is a function of the nonresponse rate (m/n) and the difference between the respondent and nonrespondent means.
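The identity above can be checked numerically. The sketch below is ours, not the authors'; the counts and means are made-up illustrative values, not data from the book.

```python
# Check the identity: respondent mean = full-sample mean
#   + (nonresponse rate) x (respondent mean - nonrespondent mean).
# All values are illustrative.
r, m = 600, 400                   # respondents and nonrespondents in the sample
n = r + m
y_bar_r, y_bar_m = 201.0, 228.0   # respondent and nonrespondent means

y_bar_n = (r * y_bar_r + m * y_bar_m) / n        # full-sample mean
reconstructed = y_bar_n + (m / n) * (y_bar_r - y_bar_m)

print(round(y_bar_n, 6), round(reconstructed, 6))  # 211.8 201.0
```

The reconstructed value equals the respondent mean exactly, as the algebra requires.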
Under this simple expression, what is the expected value of the respondent mean,
over all samples that could be drawn given the same sample design? The answer to
this question determines the nature of the bias in the respondent mean, where "bias"
is taken to mean the difference between the expected value (over all possible samples given a specific design) of a statistic and the statistic computed on the target
population. That is, in cases of equal probability samples of fixed size the bias of
the respondent mean is approximately
B(\bar{y}_r) \approx \frac{M}{N} (\bar{Y}_R - \bar{Y}_M)

that is,
Bias(Respondent Mean) = (Nonresponse Rate in Population)
x (Difference in Respondent and
Nonrespondent Population Means)
where the capital letters denote the population equivalents to the sample values.
This shows that the larger the stratum of nonrespondents, the higher the bias of the
respondent mean, other things being equal. Similarly, the more distinctive the nonrespondents are from the respondents, the larger the bias of the respondent mean.
These two quantities, the nonresponse rate and the differences between respon-



dents and nonrespondents on the variables of interest, are key to the studies reported
in this book. Because the literature on survey nonresponse does not directly reflect
this fact (an important exception is the work of Lessler and Kalsbeek, 1992), it is
important for the reader to understand how this affects nonresponse errors.
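A small simulation can make the approximation concrete. The sketch below is ours, not the book's: it invents two fixed strata with made-up means, draws repeated simple random samples, and compares the observed bias of the respondent mean with the prediction (M/N) x (difference in stratum means).

```python
# Simulation sketch (illustrative values only): a fixed respondent stratum and
# a fixed nonrespondent stratum, repeated simple random samples, and a check
# of the bias approximation B(y_bar_r) ~ (M/N)(Y_R - Y_M).
import random

random.seed(1)
R, M = 9000, 1000                                     # stratum sizes, N = 10000
resp = [random.gauss(201.0, 30.0) for _ in range(R)]  # respondent stratum
nonresp = [random.gauss(501.0, 30.0) for _ in range(M)]
pop = [(y, True) for y in resp] + [(y, False) for y in nonresp]

pop_mean = sum(y for y, _ in pop) / len(pop)
predicted_bias = (M / len(pop)) * (sum(resp) / R - sum(nonresp) / M)

# Average the respondent mean over many samples of size n = 500; only the
# respondents within each sample are observed.
means = []
for _ in range(2000):
    sample = random.sample(pop, 500)
    obs = [y for y, is_resp in sample if is_resp]
    means.append(sum(obs) / len(obs))
empirical_bias = sum(means) / len(means) - pop_mean

print(round(predicted_bias, 1))   # close to -30.0
print(round(empirical_bias, 1))   # close to predicted_bias
```

With a 10% nonrespondent stratum whose mean is $300 higher, the respondent mean understates the population mean by roughly $30, matching the formula.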
Figure 1.1 shows four alternative frequency distributions for respondents and
nonrespondents on a hypothetical variable, y, measured on all cases in some target



Figure 1.1. Hypothetical frequency distributions of respondents and nonrespondents. (a) High response
rate, nonrespondents similar to respondents, (b) Low response rate, nonrespondents similar to respondents.




population. The area under the curves is proportional to the size of the two groups,
respondents and nonrespondents.
Case (a) in the figure reflects a high response rate survey and one in which the
nonrespondents have a distribution of y values quite similar to that of the respondents. This is the lowest-bias case—both factors in the nonresponse bias are small.
For example, assume the response rate is 95%, the respondent mean for reported expenditures on clothing for a quarter was $201.00, and the mean for nonrespondents was $228.00. Then the nonresponse error is 0.05($201.00 - $228.00) = -$1.35.

Figure 1.1. (c) High response rate, nonrespondents different from respondents. (d) Low response rate, nonrespondents different from respondents.
Case (b) shows a very high nonresponse rate (the area under the respondent distribution is about 50% greater than that under the nonrespondent—a nonresponse
rate of 40%). However, as in (a), the values on y of the nonrespondents are similar to
those of the respondents. Hence, the respondent mean again has low bias due to
nonresponse. With the same example as in (a), the bias is 0.40($201.00 - $228.00)
= -$10.80.
Case (c), like (a), is a low nonresponse survey, but now the nonrespondents tend
to have much higher values than the respondents. This means that the difference
term, (\bar{y}_r - \bar{y}_m), is a large negative number; the respondent mean underestimates
the full population mean. However, the size of the bias is small because of the low
nonresponse rate, about 5% or so. Using the same example as in (a), with a nonrespondent mean now of $501.00, the bias is 0.05($201.00 - $501.00) = -$15.00.
Case (d) is the most perverse, exhibiting a large group of nonrespondents, who have much higher values in general on y than the respondents. In this case, m/n is large (judging by the area under the nonrespondent curve) and [yr - ym] is large in absolute terms. This is the case of large nonresponse bias. Using the example above, the bias is 0.40($201.00 - $501.00) = -$120.00, a relative bias of 60% of the respondent-based estimate!
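The four cases all apply the same bias expression, bias = (m/n)(yr - ym). A minimal sketch in Python, using the chapter's hypothetical dollar figures:

```python
def nonresponse_bias(nonresponse_rate, resp_mean, nonresp_mean):
    """Bias of the respondent mean: (m/n) * (ybar_r - ybar_m)."""
    return nonresponse_rate * (resp_mean - nonresp_mean)

print(round(nonresponse_bias(0.05, 201.00, 228.00), 2))  # case (a): -1.35
print(round(nonresponse_bias(0.40, 201.00, 228.00), 2))  # case (b): -10.8
print(round(nonresponse_bias(0.05, 201.00, 501.00), 2))  # case (c): -15.0
print(round(nonresponse_bias(0.40, 201.00, 501.00), 2))  # case (d): -120.0
```

The function name and the rounding are illustrative choices; the dollar values are the text's own hypotheticals.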
To provide another concrete illustration of these situations, assume that the statistic of interest is a proportion, say, the number of adults who intend to save some
of their income in the coming month. Figure 1.2 illustrates the level of nonresponse
bias possible under various circumstances. In all cases, the survey results in a respondent mean of 0.50; that is, we are led to believe that half of the adults plan to

Figure 1.2. Nonresponse bias for a proportion, given a respondent mean of 0.50, various response rates, and various nonresponse means. (Axes: nonresponse bias by nonrespondent mean.)




save in the coming month. The x-axis of the figure displays the proportion of nonrespondents who plan to save in the coming month. (This attribute of the sample is not
observed.) The figure is designed to illustrate cases in which the nonrespondent
proportion is less or equal to the respondent proportion. Thus, the nonrespondent
proportions range from 0.50 (the no bias case) to 0.0 (the largest bias case). There
are three lines in the figure, corresponding to different nonresponse rates: 5%, 30%,
and 50%.
The figure gives a sense of how large a nonresponse bias can be for different
nonresponse rates. For example, in a survey with a low nonresponse rate, 5%, the
highest bias possible is 0.025. That is, if the survey respondent mean is 0.50, then
one is assured that the full sample mean lies between 0.475 and 0.525.
In the worst case appearing in Figure 1.2, a survey with a nonresponse rate of
50%, the nonresponse bias can be as large as 0.25. That is, if the respondent mean is
0.50, then the full sample mean lies between 0.25 and 0.75. This is such a large
range that it offers very little information about the statistic of interest.
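These bounds follow from writing the full-sample proportion as a mixture of the respondent and nonrespondent proportions. A quick sketch with the text's numbers (the function name is an invented convenience):

```python
# Full-sample proportion implied by a respondent proportion p_r, a
# nonresponse rate nr, and an unobserved nonrespondent proportion p_m:
#   p_full = (1 - nr) * p_r + nr * p_m
# Letting p_m range over [0, 1] gives the widest possible interval.

def full_sample_bounds(p_r, nr):
    """Range of the full-sample proportion when p_m can be anywhere in [0, 1]."""
    low = (1 - nr) * p_r          # p_m = 0
    high = (1 - nr) * p_r + nr    # p_m = 1
    return low, high

print(full_sample_bounds(0.50, 0.05))  # roughly (0.475, 0.525)
print(full_sample_bounds(0.50, 0.50))  # roughly (0.25, 0.75)
```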
The most important feature of Figure 1.2 is its illustration of the dependence of the nonresponse bias on both the response rate and the difference term. The much larger slope of the line for the survey with a high nonresponse rate shows that high nonresponse rates increase the likelihood of bias even when the differences between respondents and nonrespondents on the survey statistic are relatively small.

Nonresponse Error on Different Types of Statistics

The discussion above focused on the effect of nonresponse on estimates of the population mean, using the sample mean. This section briefly reviews effects of nonresponse on other popular statistics. We examine the case of an estimate of a population total, the difference of two subclass means, and a regression coefficient.
The Population Total. Estimating the total number of some entity is common in
government surveys. For example, most countries use surveys to estimate the total
number of unemployed persons, the total number of new jobs created in a month,
the total retail sales, the total number of criminal victimizations, etc. Using notation similar to that in Section 1.2, the population total is ΣYi, which is estimated by a simple expansion estimator, Σwiyi, or by a ratio-expansion estimator, X(Σwiyi / Σwixi), where X is some auxiliary variable, correlated with Y, for which
target population totals are known. For example, if y were a measure of the number
of criminal victimizations experienced by a sample household, and x were a count
of households, X would be a count of the total number of households in the country.
For variables that have nonnegative values (such as count variables), simple expansion estimators of totals based only on respondents always underestimate the total. This is because the full sample estimator is

Σs wiyi = Σr wiyi + Σm wiyi

(where the sums run over the full sample s, the respondents r, and the nonrespondents m, respectively); that is,

Full Sample Estimate of Population Total = Respondent-Based Estimate + Nonrespondent-Based Estimate

Hence, the bias in the respondent-based estimator is

B(Σr wiyi) = -Σm wiyi

It is easy to see, thereby, that the respondent-based total (for variables that have nonnegative values) will always underestimate the full sample total, and thus, in expectation, the full population total.
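A small numerical sketch of this identity (the counts are invented, and the weight is treated as a constant inverse selection probability for simplicity):

```python
# Respondent-only expansion estimator of a total vs. the full-sample
# estimator. With nonnegative y (e.g., victimization counts), dropping
# the nonrespondent term can only pull the estimate down.

weight = 100.0                    # constant inverse selection probability
y_respondents = [0, 2, 1, 0, 3]   # hypothetical counts for respondents
y_nonrespondents = [1, 4]         # unobserved in practice

t_resp = sum(weight * y for y in y_respondents)
t_full = t_resp + sum(weight * y for y in y_nonrespondents)

print(t_resp)           # 600.0
print(t_full)           # 1100.0
print(t_resp - t_full)  # -500.0 (bias is never positive for nonnegative y)
```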
The Difference of Two Subclass Means. Many statistics of interest from sample
surveys estimate the difference between the means of two subpopulations. For example, the Current Population Survey often estimates the difference in the unemployment rate for Black and nonBlack men. The National Health Interview Survey
estimates the difference in the mean number of doctor visits in the last 12 months
between males and females.
Using the expressions above, and using subscripts 1 and 2 for the two subclasses, we can describe the two respondent means as

yr1 = yn1 + (m1/n1)[yr1 - ym1]

yr2 = yn2 + (m2/n2)[yr2 - ym2]
These expressions show that each respondent subclass mean is subject to an error
that is a function of a nonresponse rate for the subclass and a deviation between respondents and nonrespondents in the subclass. The reader should note that the nonresponse rates for individual subclasses could be higher or lower than the nonresponse rates for the total sample. For example, it is common that nonresponse rates
in large urban areas are higher than nonresponse rates in rural areas. If these were
the two subclasses, the two nonresponse rates would be quite different.
If we were interested in y1 - y2 as a statistic of interest, the bias in the difference of the two means would be approximately

B(yr1 - yr2) = (m1/n1)[yr1 - ym1] - (m2/n2)[yr2 - ym2]
Many survey analysts are hopeful that the two terms in the bias expression above
cancel. That is, the bias in the two subclass means is equal. If one were dealing with




two subclasses with equal nonresponse rates, that hope is equivalent to a hope that the difference terms are equal to one another. This hope is based on an assumption that nonrespondents will differ from respondents in the same way for both subclasses. That is, if nonrespondents tend to be unemployed relative to respondents, on average, this will be true for all subclasses in the sample.
If the nonresponse rates were not equal for the two subclasses, then the assumption of canceling biases is even more complex. But to simplify, let's continue to assume that the difference between respondent and nonrespondent means is the same for the two subclasses. That is, assume [yr1 - ym1] = [yr2 - ym2]. Under this restrictive assumption, there can still be large nonresponse biases.
For example, Figure 1.3 examines differences of two subclass means where the
statistics are proportions (e.g., the proportion planning to save some of their income
next month). The figure treats the case in which the proportion planning to save
among respondents in the first subclass (say, high-income households) is yrl = 0.5
and the proportion planning to save among respondents in the second subclass (say,
low-income households) is yr2 = 0.3. This is fixed for all cases in the figure. We examine the nonresponse bias for the entire set of differences between respondents
and nonrespondents. That is, we examine situations where the differences between
respondents and nonrespondents lie between -0.5 and 0.3. (This difference applies
to both subclasses.) The first case of a difference of 0.3 would correspond to

[yr1 - ym1] = 0.5 - 0.2 = 0.3

[yr2 - ym2] = 0.3 - 0.0 = 0.3


Figure 1.3. Nonresponse bias for a difference of subclass means, for the case of two respondent subclass means (0.5, 0.3), by various response rate combinations (equal nonresponse rates; 1st = .05, 2nd = .2; 1st = .05, 2nd = .5), by differences between respondent and nonrespondent means.



The figure shows that when the two nonresponse rates are equal to one another, there is no bias in the difference of the two subclass means. However, when the response rates of the two subclasses are different, large biases can result. Larger biases in the difference of subclass means arise with larger differences in nonresponse rates in the two subclasses (note the higher absolute value of the bias for any given [yr - ym] value for the case with a 0.05 nonresponse rate in subclass 1 and a 0.5 in subclass 2 than for the other cases).
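Under the equal-difference assumption, the bias reduces to (m1/n1 - m2/n2)d, so equal nonresponse rates cancel exactly. A sketch with the figure's rate combinations (the equal rate of 0.2 is an arbitrary illustrative value):

```python
# Bias of a difference of two subclass means when both subclasses share
# the same respondent-nonrespondent difference d:
#   B = (nr1 * d) - (nr2 * d) = (nr1 - nr2) * d

def bias_of_difference(nr1, nr2, d):
    return nr1 * d - nr2 * d

d = 0.3  # e.g., 0.5 - 0.2 in subclass 1 and 0.3 - 0.0 in subclass 2
print(bias_of_difference(0.20, 0.20, d))            # equal rates: 0.0
print(round(bias_of_difference(0.05, 0.20, d), 3))  # unequal: -0.045
print(round(bias_of_difference(0.05, 0.50, d), 3))  # more unequal: -0.135
```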
A Regression Coefficient. Many survey data sets are used by analysts to estimate a wide variety of statistics measuring the relationship between two variables. Linear models testing causal assertions are often estimated on survey data. Imagine, for example, that the analysts were interested in the model

yi = β0 + β1xi + εi

which, using the respondent cases to the survey, would be estimated by

ŷi = βr0 + βr1xi

The ordinary least squares estimator of βr1 is

βr1 = srxy / s²rx

Both the numerator and denominator of this expression are subject to potential nonresponse bias. For example, the bias in the covariance term in the numerator is approximately

B(srxy) = (m/n)(Srxy - Smxy) - (m/n)(1 - m/n)(Xr - Xm)(Yr - Ym)

This bias expression can be either positive or negative in value. The first term in the expression has a form similar to that of the bias of the respondent mean. It reflects a difference in covariances for the respondents (Srxy) and nonrespondents (Smxy). It is large in absolute value when the nonresponse rate is large. If the two variables are more strongly related in the respondent set than in the nonrespondent set, the term has a positive value (that is, the regression coefficient tends to be overestimated). The second term has no analogue in the case of the sample mean; it is a function of cross-products of difference terms. It can be either positive or negative depending on these deviations.
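The possibility that the respondent-only slope overstates the relationship can be illustrated with a toy calculation (a sketch in Python; all data points are invented, with nonrespondents placed so they weaken the x-y relationship):

```python
# Ordinary least squares slope, computed from deviations about the means.

def ols_slope(points):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    s_xy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    s_xx = sum((x - mean_x) ** 2 for x, _ in points)
    return s_xy / s_xx

respondents = [(1, 1), (2, 2), (3, 3), (4, 4)]   # strong positive relation
nonrespondents = [(5, 1), (6, 1)]                # distinctive x-y combinations

print(ols_slope(respondents))                             # 1.0
print(round(ols_slope(respondents + nonrespondents), 3))  # -0.057
```

Here the respondent-only fit suggests a strong positive relationship that the full sample does not support, the situation described in the text.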
As Figure 1.4 illustrates, if the nonrespondent units have distinctive combinations of values on the x and y variables in the estimated equation, then the slope of the regression line can be misestimated. The figure illustrates the case when the pattern of nonrespondent cases (designated by "O") differs from that of respondent cases (designated by "·"). The result is that the fitted line on the respondents only has a larger slope than that for the full sample. In this case, the analyst would normally find more support for a hypothesized relationship than would be true for the full sample.

Figure 1.4. Illustration of the effect of unit nonresponse on estimated slope of regression line.

Considering Survey Participation a Stochastic Phenomenon

The discussion above made the assumption that each person (or household) in a target population is either a respondent or a nonrespondent for all possible surveys. That is, it assumes a fixed property of each sample unit regarding the survey request: the unit will always be a nonrespondent, or always a respondent, in all realizations of the survey design.
An alternative view of nonresponse asserts that every sample unit has a probability of being a respondent and a probability of being a nonrespondent. It takes the
perspective that each sample survey is but one realization of a survey design. In this
case, the survey design contains all the specifications of the research data collection. The design includes the definition of the sampling frame, the sample design,
the questionnaire design, choice of mode, hiring, selection, and training regimen for



interviewers, data collection period, protocol for contacting sample units, callback
rules, refusal conversion rules, and so on. Conditional on all these fixed properties
of the sample survey, sample units can make different decisions regarding their participation.
In this view, the notion of a nonresponse rate must be altered. Instead of the nonresponse rate merely being a manifestation of how many nonrespondents were sampled from the sampling frame, we must acknowledge that in each realization of a
survey, different individuals will be respondents and nonrespondents. In this perspective, the nonresponse rate above (m/n) is the result of a set of Bernoulli trials;
each sample unit is subject to a "coin flip" to determine whether it is a respondent
or nonrespondent on a particular trial. The coins of various sample units may be
weighted differently; some will have higher probabilities of participation than others. However, all are involved in a stochastic process of determining their participation in a particular sample survey.
The implications of this perspective for the biases of respondent means, respondent totals, respondent differences of means, and respondent regression coefficients are minor. The more important implication concerns the variance properties of unadjusted and adjusted estimates based on respondents.

The Effects of Different Types of Nonresponse

The discussion above considered all sources of nonresponse to be equivalent to one
another. However, this book attempts to dissect the process of survey participation
into different components. In household surveys it is common to classify outcomes
of interview attempts into the following categories: interviews (including complete
and partial), refusals, noncontacts, and other noninterviews. The other noninterview
category consists of those sample units in which whoever was designated as the respondent is unable to respond, for physical and mental health reasons, for language
reasons, or for other reasons that are not a function of reluctance to be interviewed.
Various survey design features affect the distribution of nonresponse over these categories. Surveys with very short data collection periods tend to have proportionally
more noncontacted sample cases. Surveys with long data collection periods or intensive contact efforts tend to have relatively more refusal cases. Surveys with weak
efforts at accommodation of nonEnglish speakers tend to have somewhat more
"other noninterviews." So, too, may surveys of special populations, such as the elderly or immigrants.
If we consider separately the different types of nonresponse, many of the expressions above generalize. For example, the respondent mean can be described as a
function of various nonresponse sources, as in



yr = yn + (mrf/n)(yr - yrf) + (mnc/n)(yr - ync) + (mnio/n)(yr - ynio)

where the subscripts rf, nc, and nio refer to refusals, noncontacts, and other noninterviews, respectively.
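As a sketch of how the components combine, the decomposition can be evaluated numerically (the respondent and refusal means reuse the chapter's clothing-expenditure figures; the noncontact and other-noninterview rates and means are invented):

```python
# Respondent-mean bias decomposed by nonresponse source:
#   bias = sum over sources of (rate) * (y_r - source mean)

def bias_by_source(y_r, sources):
    """sources: list of (nonresponse rate, source mean) pairs."""
    return sum(rate * (y_r - mean) for rate, mean in sources)

y_r = 201.00
sources = [(0.10, 228.00),   # refusals: rate 10%, mean $228
           (0.05, 180.00),   # noncontacts: rate 5%, mean $180
           (0.02, 201.00)]   # other noninterviews: mean equal to y_r
print(round(bias_by_source(y_r, sources), 2))  # -1.65
```

Note how the refusal and noncontact terms partially offset one another, which is the point the following paragraphs develop.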




This focuses attention on whether survey designs that vary in the composition of their nonresponse (i.e., different proportions of refusals, noncontacts, and other noninterviews) produce different levels of nonresponse error. Do persons difficult to contact have distinctive values on the survey variables from those easy to contact?
Do persons with language, mental, or physical disabilities have distinctive values
from others? Are the tendencies for contacted sample cases to sort themselves into either interviews or refusals related to their characteristics on the survey variables?
Consider a practical example of these issues. Imagine conducting a survey of
criminal victimization, where respondents are asked to report on their prior experiences as a victim of a personal or household crime. As will be seen in later chapters,
some of the physical impediments to contacting a sample household are locked gates,
no-trespassing signs, and intercoms. These are also common features that households
who have experienced a crime install in their unit. They are preventative measures
against criminal victimization. This is a situation in which early contacts in a survey
would be likely to have lower victimization rates than late contacts. At any point, the
noncontacts will tend to have higher victimization rates than contacted cases.
Now consider the causes of cooperation or refusal with the survey request. Imagine that the survey is described as an effort to gain information about victimization
in order to improve policing strategies in the local community. Those for whom
such a purpose is highly salient will tend to cooperate. Those for whom such a goal
is less relevant will tend to refuse. Thus, refusals might tend to have lower victimization rates than cooperators, among those contacted.
This situation implies that the difference terms move in different directions:
(mrf/n)(yr - yrf) > 0

(mnc/n)(yr - ync) < 0

Now let's add to the situation the typical process of field administration. Initial
effort by interviewers is concentrated on contacting each sample unit. This initially
reaches those with low victimization rates, who disproportionately then refuse to be
interviewed. Initial refusal rates are quite high. As contact rates increase, victims,
who are interested in responding, are disproportionately contacted. They disproportionately move into the interviewed pool, increasing the victimization rate among
respondents. Alternatively, if efforts at higher response rates are concentrated on the
initial refusal cases, through refusal conversion, the interviewed pool will increasingly contain nonvictims, lowering the respondent victimization rate.
This is a case where the final nonresponse error is a function of the balance between the noncontact and the refusal rate. For any given overall response rate, the
higher the refusal rate, the more likely the survey will overestimate the population's
victimization rate. For any given overall response rate, the higher the noncontact
rate, the more likely the survey will underestimate the rate.
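The dependence on the refusal/noncontact balance can be made concrete (a sketch; all rates and means are invented, chosen so that refusals report less victimization and noncontacts more, as in the example):

```python
# Two designs with the same overall nonresponse rate (20%) but different
# mixes of refusals (mean 0.06) and noncontacts (mean 0.20), given a
# respondent victimization rate of 0.10.

def total_bias(y_r, rf_rate, y_rf, nc_rate, y_nc):
    return rf_rate * (y_r - y_rf) + nc_rate * (y_r - y_nc)

refusal_heavy = total_bias(0.10, 0.15, 0.06, 0.05, 0.20)
contact_heavy = total_bias(0.10, 0.05, 0.06, 0.15, 0.20)

print(round(refusal_heavy, 4))  # 0.001  -> respondents overestimate
print(round(contact_heavy, 4))  # -0.013 -> respondents underestimate
```

Same overall response rate, opposite signs of bias, exactly the counteraction the text describes.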
This example illustrates the need to dissect the causes of nonresponse into constituent parts that share relationships with the key survey variables. Considering
only the overall response rate ignores the possible counteracting biases of different
types of nonresponse. This process of dissection is one of the purposes of this book.



Reducing Nonresponse Rates

There are two traditional reactions to survey nonresponse among practitioners: reducing nonresponse rates and using estimators that include adjustments for nonresponse. As we discuss in more detail in Chapter 10, various survey design features
act to reduce specific sources of nonresponse.
There is a well-documented set of techniques to increase the likelihood of contacting sample cases. These include advance contacts by mail or telephone in face-to-face surveys in order to schedule convenient times to visit. They include setting
the number of days or weeks in the data collection period so that those households
that are rarely at home will nonetheless be contacted. In addition, interviewers are
trained to call repeatedly on sample units, seeking contact with the household. As
the field period progresses, calls on cases tend to be at different times of day or
evening; interviewers may be trained to attempt telephone contact, etc.
There are many design features chosen to reduce refusals as a source of nonresponse. These include the use of advance letters, attempting to communicate that
the survey is conducted by an organization with legitimate need for the information.
The advance communication sometimes contains a cash or in-kind incentive. The
interviewer attempts to make appointments with the sample person at times convenient for them to provide the interview. Repeated attempts to persuade reluctant respondents may involve switches to a different interviewer, persuasion letters, or visits by supervisors—all intended to communicate the importance of cooperating
with the survey request.
Finally, the design features to reduce the rate of "other noninterviews" include
the use of nonEnglish speaking interviewers, translation of the instruments into various languages, and the use of proxy respondents.
Most of these efforts to reduce nonresponse rates are aimed at different potential
causes of nonresponse, not directly different characteristics of nonrespondents.
They attack the rate term (m/n) in the expression, not the difference terms, [yr - ym].
This means they exert no direct control over the nonresponse error itself, but only
on one term of the error expression.
Since design decisions are made under cost constraints, designs often tend to use
the cheapest means possible to reduce the nonresponse rate. Usually, noncontact rates
can be reduced most cheaply, merely by making more calls on cases not yet contacted. If at any one point in a field period, the current noncontacts are quite different (on
the survey measures) from the current refusals, then it is possible that this strategy
would not reduce nonresponse error. That is, if [yr - ync] is small, but [yr - yrf] is large,
then moving cases from a noncontact status to an interview status may do little to reduce overall nonresponse error. This observation underscores how blindly the researcher must often make decisions on efforts to reduce nonresponse components.
Our work described in this book attempts to uncover differences in the mechanisms producing noncontacts and refusals, so that investigators might build survey
designs that employ more intelligence about differences among nonrespondents.
This intelligence can then be used either to reduce nonresponse during the data collection efforts or to mount more effective postsurvey adjustments for nonresponse.





Using Postsurvey Adjustment for Nonresponse Error Reduction

The other traditional approach to nonresponse is a statistical one, using estimation
procedures that attempt to reduce the effects of missing observations. In practice the
procedures used in postsurvey adjustment for missing data depend on how much information is available about the nonrespondent cases. At one extreme, if every survey variable of interest, except one, is known about the nonrespondents, then using
those variables to form an imputation model is common. The imputation model predicts a value for the missing variable for the case, conditional on values of all the
known variables. If the predictive model reflects strong relationships among the
variables, then the imputed values tend to be close to the value that would have been
obtained in the interview. Imputation is common in unit nonresponse in longitudinal
surveys, when full data records from a prior wave are available for a nonrespondent
to a current wave. In one-time surveys, it is rare to impute for unit nonresponse because little information is typically known about the nonrespondent cases.
For unit nonresponse (versus item missing data) imputation is less often used
than is case weighting. In weighting adjustments, some respondent cases (those resembling the nonrespondents) are given larger weights in the sample estimators
than are other respondent cases. "Weighting classes" (a group assigned the same
weight) are formed among the respondent cases. When cases in a weighting class
share similar likelihoods of participation and similar values on the survey variables,
then nonresponse error in the weighted estimator is lower than in the unweighted estimator. Sharing similar values on the survey variable must occur within a weighting
class both for respondents and nonrespondents. It is common that the reduced bias
of the weighted estimator is accompanied by somewhat higher sampling variances.
Thus, adjustment decisions are often tradeoffs between bias and variance properties
of unadjusted and adjusted estimators.
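A minimal sketch of such a weighting-class estimator, with invented class labels, sizes, and respondent means (here nonresponse is heavier in the class with the higher mean, so weighting pulls the estimate up):

```python
# Weighting-class nonresponse adjustment: each respondent's weight is
# multiplied by (class sample size / class respondents), the inverse of
# the class response rate, so respondents stand in for the nonrespondents
# in their own class.

# class label -> (sampled cases, responding cases, respondent mean of y)
classes = {
    "urban": (50, 30, 0.40),
    "rural": (50, 45, 0.20),
}

unweighted = (sum(resp * ybar for _, resp, ybar in classes.values())
              / sum(resp for _, resp, _ in classes.values()))

# Weighted: each class contributes resp * (n_c / resp) * ybar = n_c * ybar.
weighted = (sum(n_c * ybar for n_c, _, ybar in classes.values())
            / sum(n_c for n_c, _, _ in classes.values()))

print(round(unweighted, 2))  # 0.28
print(round(weighted, 2))    # 0.3
```

The adjustment helps only to the extent that, within a class, respondents and nonrespondents resemble each other on y, which is the condition stated in the text.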
For purposes of this book, we focus on the common features of these adjustment
schemes, the specification of observed attributes of a sample that can inform the researcher about the unobserved attributes. Specifically, we seek to identify influences on survey participation that can be observed on all sample cases and used as
predictors in postsurvey adjustment models. Identifying the variables to observe
requires more understanding of the decision-making process of survey participation
than we had prior to mounting this research.


Over the years of studying survey participation we have learned the importance of
viewing the phenomenon from the sample householder's perspective. Survey designers and methodologists sometimes find it difficult to take this vantage point.
However, repeated contacts with householders, monitoring of survey introductions,
and focus groups with interviewers have convinced us that taking a survey researcher's viewpoint risks misunderstanding. This section presents one plausible
perspective that sample householders may take. (We use the term "householder"



throughout this book to include both those sample persons who become respondents
and those who remain nonrespondents.)
The contrast between this perspective and that of survey researchers is, first, that
none of the statistical requirements for complete enumeration of the sample are either understood or valued by the householders. Second, the importance to the sponsor or to society of obtaining the survey information is generally not shared by the householder.
Householders may see survey requests as a specific type of request from a
stranger. There are several categories of those, which tend to sort themselves by the
medium of communication, the physical location of the request, and the nature of
the relationship between the requestor and the person.

Requests from Others in Day-to-Day Life

It is useful to compare various characteristics of survey requests to householders
with requests by other types of organizations. Table 1.1 presents some characteristics of requests of unsolicited sales agents, business contacts, charities, and surveys.
By "sales" we mean all contacts with a household by a person attempting to sell
some good or service to the household. This would include approaches for telephone service, credit card services, home improvement products, encyclopedias,
vacuum cleaners, lawn services, and investment services. By "service calls" we
mean contacts with an unknown functionary of an organization that is already providing services or products to the household. This would include public utilities,
newspaper delivery services, cable television services, insurance agencies, or medical care services. The distinction between "sales" and "service calls" is thus
whether the household already has some relationship with the organization, even
though it has no relationship with the given person who makes contact with the
household. By "religions, charities" we mean any contact by an agent of an organization seeking funds from or actions by the household for its cause. This would include proselytizers for specific churches, collectors for contributions to volunteer
fire departments, medical research societies, public radio or television stations,
school fundraisers, environmental action groups, or societies aiding the poor. Finally, by "surveys" we mean any request for information for statistical purposes. This
would include government, academic, or commercial studies of the household population.
Table 1.1 compares these requests on several dimensions, including the likely
frequency of a household experiencing such a request, the level of public knowledge of the organization generating the request, the likely media of communication
of the request, the likelihood of prior contact with the requesting organization, the
use of incentives to the households associated with the request, the persistence at
contact of the requestor when dealing with those rarely at home or those reluctant to
grant the request, and the likelihood of ongoing contact.
At the current time in the United States, sales and service calls on households
probably are more common than charitable and survey requests. Name recognition
by large segments of the household population would be high for business contacts
(because the household is involved in an economic exchange with them) and for




Table 1.1. Selected characteristics of householder encounters with sales, business, charity, and survey requests

[Table comparing the four request types on: frequency of the request; level of public knowledge of the requesting organization; usual medium of contact (phone, mail, in person); prior contact with sponsor; use of incentives; persistence at contact; nature of the request (money for goods or services; time, for surveys); likelihood of ongoing contact.]

some national, long-standing charitable organizations (e.g., American Cancer Society). When survey sponsors are universities or government agencies, sometimes the
population may have prior knowledge of the requestor. Surveys and charitable requests use all three media of communication, but sales and service calls usually rely
on telephone and mail communication. Even with surveys and charities, the telephone and mail modes predominate over face-to-face contact.
In contrast to service calls and some charities, it is common that a sales or survey approach is the first contact with the householder. Service calls
can refer to the past transactions with the household as a way to provide context for
the purpose of the request. A few charities and surveys use incentives as a way to
provide some token of appreciation to the householder for granting the request.
Charities send address labels, calendars, kitchen magnets, and offers of listing
donors' names publicly. Surveys sometimes offer money or in-kind gifts. Sales and
business requests rarely offer such inducements.
Sales and charity requests rarely utilize multiple attempts. If reluctance is expressed on first contact with the household in a sales call on the telephone, for example, the caller tends to dial the next number to solicit. Profit is generally not maximized by effort to convince the reluctant to purchase the product or service. Surveys
and service calls are quite different. Probability sample surveys often make repeated
callbacks to sample households attempting to obtain participation of the household.
Service-call communication will generate repeated calls until the issue is resolved.



Finally, service calls and charitable requests are often made by persons who have
had or will have ongoing relationships with the householder. When the requestor is
known by the householder, that prior knowledge can influence initial householder
behavior. Sales and survey calls are most often made by persons unknown to the
householder. In the early moments of interaction with the requestor, the householder may be attempting to determine whether the requestor is or is not known by them.

Participation in Surveys and in Other Social Activities

Because householders routinely receive requests that differ in their reasons, purposes, and sponsoring institutions, they may develop standardized reactions to requests. These might be "default" reactions that are shaped by experiences over the years with such requests.
Service calls for clarification of orders, billing issues, and other reasons are
shaped by the fact that the requestor provides products or services valued by the
household. Charities and religious requests may be filtered through opinions and attitudes of householders about the group. Sales requests may generate default rejections, especially among householders repeatedly exposed to undesired sales approaches.
Survey requests, because they are rare relative to the other requests, might easily
be confused by householders and misclassified as sales calls, for example. When
this occurs, the householder may react for reasons other than those pertinent to the
survey request. The fact that surveys often use repeated callbacks is probably an effective tool to distinguish them from sales calls. When surveys are conducted by
well-known institutions that have no sales mission, interviewers can emphasize the sponsorship as a means of distinguishing themselves from salespersons.
Government and academic surveys are de facto conducted by agents of major institutions in the society. Once the householder discerns such sponsorship of the survey request, it is likely that past contacts with the institution, knowledge about the
institution, or attitudes about its value to the householder or important reference
groups of the householder become relevant. That is, the householder uses knowledge of the sponsor to guide behavior. Once it is clear that the request concerns a
survey interview, then prior experiences with social research, interviews, polls, and
scientific studies may become salient to the decision of the householder. Finally, reactions to the interviewer provide input to the decision to cooperate.


Interviewers are "request professionals." They are the agents of the survey designer
who deliver the request for the survey interview. All of the design features that can
affect interest of householders in responding and willingness to provide information
and time to the interviewer are implemented by interviewers. We would thus suspect
that interviewers can have large effects on householders' reactions to survey requests.




Interviewers, however, have many other duties in most surveys. They must identify and document sample units. They must determine housing units' eligibility for
the sample. In many surveys they must select respondents within the household. After the householder grants the survey request, the interviewer must administer the
questionnaire, with care to communicate correctly the intent and meaning of each
question, to encourage candid and thorough responses from the householder, and to
record accurately the responses of the householder. Thus, contacting and gaining
participation of sample households is but one job in an interviewer's portfolio.
1.4.1 How Interviewers Are Trained and Evaluated Regarding Response Rates
It is common for interviewers to receive two sorts of training prior to a survey. The
first type of training is generic to all survey work for their employing organization.
The second is specific to the survey they will soon begin.
General interviewer training tends to have several components. First, the administrative aspects of the job must be communicated. These include recording work
time, receiving and returning sample materials that identify sample households, and
communicating with supervisors. Second, the process of identifying sample housing units assigned to the interviewer, correcting any errors in their identification, and documenting the outcome of calls on sample cases must be described.
Next, the training often turns to issues of contacting sample units. It is common
to instruct interviewers to call on sample units at different times of the day and different days of the week. Sometimes, rigid guidelines for call patterns are given (e.g.,
first call on a weekday day, then an evening, then a weekend, until first contact is
made). Interviewers are sometimes instructed to ask neighbors when members of a
noncontacted household would be at home. Some organizations forbid interviewers
to seek such information, for fear of violating the confidentiality of the sample
household. In centralized telephone surveys, call scheduling is often handled by
software embedded in the computer assisted interviewing systems. However, little
useful information can be observed about sample households by telephone interviewers prior to the first contact.
Finally, the interviewers are instructed in the administration of a structured questionnaire. Usually this entails guidelines to read the questions exactly as written.
They are taught how to discern whether the response provided to a question is adequate for the purposes of the research. They are taught how to probe nondirectively
when given an inadequate answer. They are taught how to record responses to open-ended questions.
The training then moves to study-specific training. Study-specific training often
has some material on seeking cooperation of the sample household. It is common
for interviewers to be instructed in the larger purposes of the survey, to be supplied
with answers to commonly asked questions about the survey, and to be trained in issues about the confidentiality of provided data. Some organizations have interviewers role-play situations with different types of reluctant respondents. The purpose of
this is to prepare interviewers with quick responses to objections to survey participation.



The bulk of the study-specific training, however, focuses on the administration
of the questionnaire. Interviewers are instructed in key definitions of terms, in the
intent of each question, in what constitutes adequate answers. They are instructed in
how to handle unusual circumstances that arise for some respondents.
In short, training interviewers to contact sample households and to obtain their
cooperation in the survey is but one part of their training. In many organizations, the
time devoted to the survey participation step in interviewer training is only a small
fraction of the total training time.
Once the interviewers begin work on a survey, most organizations will use the individual response rates they achieve as important performance indicators. Many organizations will produce daily or weekly statistical summaries of sample cases assigned, cases not yet contacted, cases interviewed, cases with initial refusals, cases
with other types of noninterviews, cases assigned for refusal conversion, etc. When
individual interviewers achieve lower than expected response rates, they are often
given remedial training to ameliorate the situation.
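The kind of disposition summary described above can be sketched in a few lines. The case categories below mirror those named in the text, but the identifiers, data, and function are illustrative assumptions, not drawn from any particular survey organization's system.

```python
from collections import Counter

# Hypothetical case dispositions, keyed by interviewer ID; the labels
# follow the text: interviewed, not yet contacted, initial refusal,
# other noninterview.
CASES = [
    ("int01", "interviewed"), ("int01", "interviewed"),
    ("int01", "initial_refusal"), ("int01", "not_contacted"),
    ("int02", "interviewed"), ("int02", "other_noninterview"),
]

def response_rate_summary(cases):
    """Tabulate per-interviewer dispositions and a crude response
    rate: completed interviews over all assigned cases."""
    by_interviewer = {}
    for interviewer, disposition in cases:
        by_interviewer.setdefault(interviewer, Counter())[disposition] += 1
    summary = {}
    for interviewer, counts in by_interviewer.items():
        assigned = sum(counts.values())
        summary[interviewer] = {
            "assigned": assigned,
            "response_rate": counts["interviewed"] / assigned,
            **counts,
        }
    return summary

rates = response_rate_summary(CASES)
print(rates["int01"]["response_rate"])  # 0.5
```

A daily or weekly run of such a tabulation is what flags interviewers with lower than expected response rates for remedial attention.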



Over the years of the development of the survey method, various antidotes to nonresponse have been developed and proven valuable in diverse settings. These are features chosen by the survey designer prior to mounting the data collection step. Most
have cost implications for the survey. Some may increase the cost per completed interview; some may move costs from interviewer salaries to other components of the
budget (i.e., they reduce interviewer effort to obtain participation of sample units
but increase other staff or material costs).
A simple way to classify the design features is by what portion of nonresponse
they address: noncontact, refusals, or other noninterviews. The other relevant classification of techniques concerns when in the temporal order of a survey the design
feature is present.
1.5.1 Features to Enhance the Rate of Contact
There are essentially four methods of improving the rate of contact of sample
households. The first is to increase the number of calls on previously uncontacted
units. In area-based household surveys, this is at times implemented through rules
on number of visits to a sample neighborhood. In telephone surveys, this might be
implemented through software that controls the maximum number of calls. The second approach is to control the timing of repeated calls to sample units. Often, these
direct the interviewer to mix the time of calls between daytime, weekend, and
evening calls, in order to contact households with different at-home patterns. The
third approach is to increase the length of the data collection period. In a way, this
permits more variation of time of calls and more calls, but also addresses households that are temporarily absent because of out-of-town travel. The fourth design
feature is to permit interviewers to seek supplemental information about the noncontacted unit. This includes obtaining telephone numbers so that calls can be made




in another mode and seeking information about at-home patterns from neighbors,
doormen, or building managers. The latter technique is generally limited to face-to-face surveys.
1.5.2 Features to Enhance the Rate of Cooperation
There are many more techniques that are used in practice to reduce refusals than to
reduce noncontacts. They are usefully divided into precontact methods, methods
during contact, and refusal conversion methods.
Agencies of data collection convey different levels of authority and legitimacy to
different populations. For example, it is common that government agencies obtain
higher cooperation rates than other survey organizations. Thus, one design decision
is the sponsorship of the survey and the affiliation and stature of the data collection
organization relative to the sample population. Related to this is describing the survey's purposes in ways that heighten the attention to uses that might benefit the
sample household or groups to which the household is affiliated.
Prior to an interviewer seeking an interview in a face-to-face survey, advance letters are sent to sample households in order to convey the sponsorship and purpose of
the survey and to alert the household to an upcoming visit by the interviewer. In
telephone surveys, letters can be sent only to listed household numbers. These generally act to increase the willingness of the household to consider the request and to
heighten the confidence of interviewers in seeking participation. Sometimes the advance contact with the sample household contains some incentive, to increase the
benefits of participation. These might be prepaid monetary incentives or small gifts,
which attempt to increase willingness to grant the interview.
Upon contact with the sample household, all interviewer behavior affecting participation becomes relevant. In face-to-face surveys, this involves presenting any descriptive brochures or materials about the survey. The interviewers provide a description of the purposes of the survey, the nature of the interview, the confidentiality
provisions affecting the data, etc. They react to any questions or signs of reluctance
from the householder according to training guidelines in an attempt to gain cooperation. When the contact is complete, the interviewers document the results of the
call and make notes about the nature of the interaction.
If the initial contacts result in a refusal to participate in the survey, various design
features are used to urge the household to reconsider its decision. These might include mailing a letter to the household reiterating the importance of their participation to the success of the survey and the importance of the survey to the community.
These letters may contain incentives to respond that attempt to increase the benefits
of cooperation by the household.
A different interviewer may make a "refusal conversion" call on the reluctant
household. Alternatively, a supervisor may call the household, in an attempt to convey the importance of the household's participation. Sometimes the mode of contact
will be changed to one offering greater likelihood of success (e.g., from telephone
to face-to-face contact).
When all else fails, the survey designers may radically reduce the burden of the
request, seeking a very short interview from the household in order to collect the
basic information useful to postsurvey adjustments.




This book attempts to document pervasive influences on persons' participation in
surveys. In doing so, it is concerned with the statistical impact of nonresponse
on survey statistics, with survey design features that might decrease or increase participation, with the nature of interviewer-householder interaction producing the
participation decision, and with postsurvey adjustments to correct for nonresponse.
The book was written with the judgement that at this point in the development of
the field, basic research on the participation decision was needed. To be helpful, the
research would inform survey designers about what features of surveys tended to
increase participation and why they did so. To be useful, it would identify design
features that would improve the power of postsurvey adjustments for nonresponse.
Finally, to be valuable to survey managers, it would identify principles underlying
differential abilities of interviewers to obtain participation from householders.
In short, the book attempts to link several cultures within the survey field. It
seeks to build and test theoretical constructs influencing survey participation, but it
wishes to draw implications for practitioners. It seeks to find insights into ways to
reduce nonresponse in field administrations, but it attempts also to discover ways to
improve statistical adjustments by analysts of survey data.


Although our goal is a widely useful set of theories and findings about survey participation, the book has clear limitations. It focuses on surveys only of the U.S.
household population. It contains no direct investigation of participation in establishment, business, or organization surveys. Further, although we believe some of
the underlying principles apply to many survey designs of persons, it does not directly investigate the process of participation in surveys of memberships, social networks, or employee groups, where issues of group identity might be stronger.
All of the empirical data used in the book come from face-to-face surveys. There
are two reasons for this: a) we take advantage of a unique matching of survey respondent and nonrespondent records to the 1990 U.S. decennial census sponsored
by agencies conducting several face-to-face surveys; and b) the ability to observe
characteristics of nonrespondent households is greatly enhanced in face-to-face surveys versus telephone or self-administered surveys. The fact that face-to-face surveys are studied implies that we have limited ability to draw inferences about survey
modes not using an interviewer as the agent of the survey request. Because telephone surveys are increasingly common (especially in the United States), we comment throughout the book on the applicability of the results to the telephone survey mode.
This is a book about "unit nonresponse," not "item nonresponse." We are interested in what induces people to grant a request for a survey interview from a
stranger who appears on their doorstep. We do not study the process by which respondents who begin an interview fail to supply answers to some questions. We be-




lieve that the influences toward this behavior are quite different from those of the
initial acceptance of the interview request.
We examine the process of decisions to participate in one-time surveys or the
first wave of a longitudinal survey. We do not examine the influences on dropping
out of a panel survey after initial response or the factors that influence long-term
panel retention. We suspect that the length of the first-wave interview, the sensitivity of questions in the first-wave interview, the cognitive demands of the respondent task in the first-wave interview, and the rapport built with the first-wave interviewer make the process of continued cooperation in a longitudinal survey quite
distinctive from that of granting first-time survey requests.
Finally, the surveys we study are either collected directly by U.S. government
agencies or sponsored by U.S. government agencies. Government surveys throughout the world tend to have higher response rates than surveys conducted in the academic or commercial sectors. Some of the surveys we study intensively have unusually high contact and cooperation rates with sample households. At key points in the
book we discuss any limitations this poses on the inference from our findings.
We hope that the book provides a blend of conceptual frameworks that have
widespread utility and empirical tests that provide persuasive evidence. The next
chapter maps out the conceptual framework that guides the thinking about survey
participation throughout the book.


This book seeks to illuminate the behavioral foundations of a statistical error property of sample statistics—that arising from unit nonresponse. We address the topic
by first imposing a conceptual structure on our search for understanding, which is
described in full in Chapter 2. This chapter has described the major building blocks in
models of the process of deciding to participate in a survey.
In studying the phenomenon, we are attracted to the viewpoint that the process of
granting a survey request is a stochastic one; that is, although it is subject to consistent and powerful influences, there are, in general, few deterministic features to the
process. There are, therefore, negligible proportions of truly "hard-core" nonrespondents.
We have found that dissecting the nonresponse phenomenon into components of noncontact, refusal, and other causes sensitizes us to considering alternative causes of
each outcome. Since these processes are mixes of ones under the control of the researchers (e.g., number of callbacks) and ones out of their control (e.g., the urbanicity levels of the target populations), studying each separately is important both for
the practical implications of field administration and for the specification of postsurvey adjustment models.
We will consistently search for causes of different types of survey nonresponse,
seek observable proxy indicators of those causes, and suggest that they be used both
to guide targets for nonresponse reduction during data collection and be used for
postsurvey adjustment models.
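One standard way to turn such proxy indicators into a postsurvey adjustment is a weighting-class adjustment, in which respondents in each class receive a weight inflation equal to the inverse of the class response rate. The sketch below illustrates that generic textbook method, not code from this book; the use of urbanicity as the proxy indicator and the response rates are hypothetical.

```python
def weighting_class_adjustment(cases):
    """cases: list of (weighting_class, responded) pairs.
    Returns per-class adjustment factors, sampled / responded,
    i.e., the inverse of each class's response rate."""
    totals, respondents = {}, {}
    for cls, responded in cases:
        totals[cls] = totals.get(cls, 0) + 1
        if responded:
            respondents[cls] = respondents.get(cls, 0) + 1
    return {cls: totals[cls] / respondents[cls] for cls in respondents}

# Toy sample: urban households respond at rate 0.5, rural at 0.8.
sample = ([("urban", True)] * 2 + [("urban", False)] * 2 +
          [("rural", True)] * 4 + [("rural", False)] * 1)
factors = weighting_class_adjustment(sample)
print(factors)  # {'urban': 2.0, 'rural': 1.25}
```

Each urban respondent's base weight would be multiplied by 2.0, each rural respondent's by 1.25, so that respondents stand in for the nonrespondents in their class.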



The next two chapters lay out the theoretical orientation that guides the analysis
(Chapter 2) and review the data resources we bring to bear to address survey participation (Chapter 3). Then we begin presenting the results of empirical analysis. In
Chapter 4 we examine the process of contacting sample households. In Chapter 5
we look at household-level influences on survey cooperation among contacted
households. In Chapter 6 we turn to the social environmental influences on survey
participation. Chapter 7 studies how interviewers act to influence cooperation when
they contact householders. Chapters 8 and 9 turn to the interaction level, studying
what interviewers and householders say and do during contacts that portend later
cooperation with or refusal of the survey request. In Chapter 10 we review all the
survey design features that researchers control to affect levels of response rate. Finally, in Chapter 11 we review step-by-step the process of survey design, implementation, and analysis, and apply the knowledge we learned to each of the steps, with
the goal of producing overall survey statistics with minimal nonresponse error.

Nonresponse in Household Interview Surveys
by Robert M. Groves and Mick P. Couper
Copyright © 1998 John Wiley & Sons, Inc.



A Conceptual Framework
for Survey Participation



This chapter examines the components of survey participation that must be explained by a theory of survey participation, in order to develop effective methods to
reduce nonresponse or to construct useful postsurvey compensation schemes. It is a
multilevel conceptual framework that includes influences from the levels of the social environment, the household, the survey design, and the interviewer. It describes
the role of the interaction between interviewer and householder in affecting the decision regarding the survey. The chapter ends with a discussion of implications of
the theory for survey practices.
For surveys to be useful information gathering devices, sampling frames of
households and individuals must be possible, strangers must be able to visit or telephone sample housing units and gain access to their households, and persons must be
willing to participate in an interview with the stranger and to trust pledges of confidentiality made regarding personal data provided to the interviewer. When many
decades have passed since this writing, it may be the case that the social ingredients
necessary for surveys to be useful tools of information assembly were present in a
fairly limited period of historical time. The increasing difficulty of gaining participation may be the beginning of the disintegration of the necessary ingredients permitting surveys to function. Even without such a dire future, in order to understand
the statistical implications of nonresponse, we must understand its behavioral bases.
Before examining the influences on survey participation from a theoretical viewpoint, it is important to be quite specific about the details of the phenomenon of survey participation itself. What do we mean by participation in household surveys?
Perhaps this is best answered by dissecting the process temporally. In the next



section we review those steps that yield interview data on sample units. We focus on
the case of a household survey, either based on an area frame or a telephone frame.
As shown in Figure 2.1, we move from locating and contacting the sample household, to identifying persons in the household, to choosing a respondent/informant,
to seeking their participation in the survey. Survey nonresponse can arise at all of
these points.

Contacting the Sample Household

Theoretically, the process of contacting a sample household is rather straightforward. As Figure 2.2 shows, the success at contacting a household should be a simple
function of the times at which at least one member of the household is at home, the
times at which interviewers call, and any impediments the interviewers encounter in
gaining access to the housing unit. In face-to-face surveys, the latter can include
locked apartment buildings, gated housing complexes, no-trespassing enforcement,
as well as intercoms or any devices that limit contact with the household. In telephone surveys, the impediments include "caller ID," "call blocking," or answering
machines that filter or restrict direct contact with the household.
In most surveys the interviewer has no prior knowledge about the at-home behavior of a given sample household. In face-to-face surveys interviewers report that
they often make an initial visit to a sample segment (i.e., a cluster of neighboring
housing units sampled in the survey) during the day, in order to gain initial intelligence about likely at-home behaviors. During this visit the interviewer looks for bicycles left outside (as evidence of children), signs of difficulty of accessing the unit
(e.g., locked apartment buildings), small apartments in multiunit structures (likely
to be single-person units), absence of automobiles, etc. Sometimes, when neighbors
of the sample household are available, interviewers seek their advice on a good time

[Figure 2.1. The process of survey participation. Legible flowchart steps: verify eligibility of sample housing unit; contact sample household; find household informant; select respondent; contact respondent and assess ability to respond; seek interview; apply persuasion for reluctant householders.]















[Figure 2.2. A conceptual model for contacting sample households. Legible labels include "number of calls" and "timing of calls"; per the surrounding text, contact is modeled as a function of household at-home times, the timing of interviewer calls, and impediments to access.]

to call on the sample unit. This process is the practical method of gaining proxy information about what call times might successfully encounter the household at
home. In telephone surveys, no such intelligence gathering is possible. The only information about at-home practices of a sample household is obtained by calling the
number. (This imbalance leads to the larger number of calls required to make first
contact with a household in telephone surveys.)
In face-to-face surveys physical impediments to access are sometimes so strong
that they literally prevent all contact with a sample unit. For example, some higher-priced multiunit structures have doormen who are ordered to prevent entrance of all
persons not previously screened by a resident. Such buildings may be fully nonrespondent to face-to-face surveys. Similarly, although there is evidence that the majority of owners of telephone answering machines use them to monitor calls to their
unit when they are absent (see Tuckel and Feinberg, 1991; Tuckel and O'Neill,
1995), some apparently use them to screen out calls when they are at home, thus
preventing telephone survey interviewers from contacting the household.
Other impediments to contacting households may offer merely temporary barriers, forcing the interviewer to make more than the usual number of calls before first
contacting them. For example, an apartment building whose entrance is controlled
by a resident manager may require negotiations with the manager before access to
sample households is given.
For units without any physical impediments to contact, the challenge to the interviewer is finding a time when the household is at home. Information from time-use
surveys, which ask persons to report on their activities hour by hour, has shown



common patterns of at-home behavior for weekday mornings and afternoons, weekday evenings, and weekends. Those in the employed labor force are commonly out
of the house during the day, when rates of occupancy are lowest. Interviewers make
repeated calls to households they do not contact on the first call. Their choice of
time for those callbacks can be viewed as repeated samples from a day-of-week,
time-of-day frame. They base their timing of successive calls on information they
obtain on prior unsuccessful visits and on some sense of consistency. For example,
interviewers are often trained to follow an unsuccessful weekday-afternoon visit
with a callback in the evening or on a weekend.
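The view of callbacks as repeated samples from a day-of-week, time-of-day frame can be made concrete. The window labels and the preference rule in the sketch below are illustrative assumptions, not a documented field protocol.

```python
import random

# The frame: every day-of-week x time-of-day call window.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
TIMES = ["morning", "afternoon", "evening"]
FRAME = [(day, time) for day in DAYS for time in TIMES]

def next_callback(tried, rng=random):
    """Draw the next call window from the frame, skipping windows
    already tried without success and preferring evening or weekend
    windows -- echoing the heuristic of following a failed weekday
    daytime visit with an evening or weekend call."""
    untried = [w for w in FRAME if w not in tried]
    preferred = [(d, t) for d, t in untried
                 if t == "evening" or d in ("Sat", "Sun")]
    return rng.choice(preferred or untried)

slot = next_callback({("Tue", "afternoon")})  # e.g., an evening or weekend window
```

Centralized telephone facilities embed logic of this general kind in their call-scheduling software; face-to-face interviewers apply it informally.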

Determining Eligibility, Obtaining a Household Roster

The next step in the process of survey participation can vary greatly by the target
population of the study and by what person or persons in the households are used as
respondents to the survey interview. Once contact is made with the household, if
there is no named householder designated as respondent (for example, when using sampling frames of persons), the interviewer must identify the appropriate respondent(s) for the survey.
There are three common types of respondent rules: a) a household informant
rule, sometimes seeking the most knowledgeable on the topics of interest; b) a randomly selected adult respondent; and c) a rule specifying that all householders are
to be interviewed. The first and third rules more often have provisions for using a
proxy respondent when the preferred respondent cannot or will not provide the interview. In some studies, only householders of a particular age, gender, or occupational status are eligible. For such surveys, the three rules above are used within the
eligible set.
In all respondent rules, one of the first tasks of an interviewer is to obtain knowledge of what persons are eligible to be a respondent within the household (e.g., in a
survey of those between the ages of 55 and 70, the interviewer would ask questions
about the ages of household members). In some designs, interviewers are free to use
any adult who appears to be competent to answer screening questions about the household.
In other designs, interviewers provide a brief introduction of themselves and the
survey and then seek to obtain a listing of all household members. This listing often
consists of a name or a relationship to the informant, gender, and age of the householder. The listing is then used to identify eligible persons and to provide a small
sampling frame of the eligibles for selection of respondents. After the listing and selection of respondents, the interviewer seeks to interview the chosen respondent(s).
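Under a randomly selected respondent rule, the listing doubles as a small sampling frame. The sketch below assumes a hypothetical survey of persons aged 55 to 70; field implementations typically use objective selection tables rather than an ad hoc random draw, so this is only an illustration of the logic.

```python
import random

def eligible_members(roster, min_age=55, max_age=70):
    """Filter a household listing to the study's eligible set."""
    return [m for m in roster if min_age <= m["age"] <= max_age]

def select_respondent(roster, rng=random):
    """Randomly selected respondent rule: one eligible member is
    drawn with equal probability from the listing."""
    eligible = eligible_members(roster)
    return rng.choice(eligible) if eligible else None

# A listing as described in the text: relationship to the informant,
# gender, and age of each householder.
household = [
    {"relationship": "informant", "gender": "F", "age": 62},
    {"relationship": "spouse", "gender": "M", "age": 66},
    {"relationship": "child", "gender": "F", "age": 30},
]
chosen = select_respondent(household)  # one of the two members aged 55-70
```

After such a selection, the interviewer seeks the interview with the chosen respondent, making a further call if that person is not the informant.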
Reluctance from householders often first arises during the attempt at listing the
household. This step commonly takes place after only a short introduction to the
purposes of the interviewer's visit. To some householders, these questions may seem
quite intrusive. The perceived violation of privacy may be exacerbated by the apparent lack of rationale for such questions. For example, when the interviewer has stated a purpose of learning about the expenses and consumer purchases of the household, it may not be clear why the interviewer begins with questions about each
member of the household.





Repeated Callbacks

Sometimes the attempt to obtain a household listing or otherwise identify or speak
to the respondent is unsuccessful in the first contact. In these cases, the interviewer
ends the initial conversation by asking for a time to call again in order to contact the
chosen householder.
This step is quite different from the attempts to find a call time prior to the first
contact. Interviewers rely greatly on the information provided by the informant on
the call. Interviewers will seek to make as firm an appointment as possible, to reduce the need for further calls.
Nonresponse may be threatened when a different respondent is chosen than the
person providing the household listing. At the very least, it is common that another
call must be made, based on an appointment time that is uncertain.

Refusal Conversion

Either at the initial contact (in which the household listing may have been obtained)
or a later contact with a chosen respondent, householders' reluctance to participate
in the survey may be so strong that they explicitly refuse to participate. They may
provide no reason for this at all (e.g., "I don't want to do this"); they may provide
superficial answers (e.g., "I'm too busy for this"); or they may tell the interviewer
more detailed reasons why they are refusing (e.g., "I don't want to do anything to
help the Federal government").
It is common for survey organizations to set such cases aside for a period of time
and then attempt another contact. In face-to-face surveys, the survey organization
might send a letter urging the householder to reconsider. A different interviewer or
perhaps the supervisor might be assigned the case for recontact.

Nonresponse Because of Incapacitation

At any of the contacts with the sample household, the interviewer may learn that the
chosen respondent is unable to provide the interview, regardless of how willing he
or she might be. This can arise from the inability to speak the languages the survey
organization is prepared to use to administer the interview or failure to find a suitable translator. Alternatively, the chosen respondent may suffer from physical health
problems that rob them of the energy necessary to answer the survey questions. Finally, the chosen respondents may suffer from depression, mental retardation, or a
variety of other mental health disorders that prevent them from comprehending the
questions or otherwise attending to the respondent task.



Once the interviewer contacts a sample household, we believe that the influences on
the householder's decision to participate arise from relatively stable features of the
environment and householders' backgrounds, from fixed features of the survey design, and from quite transient, unstable features of the interaction between the interviewer and the
householder. This conceptual scheme is portrayed in Figure 2.3, listing influences of
the social environment, householder, survey design features, interviewer attributes
and behavior, and the contact-level interaction of interviewers and householders.
The influences on the left of the figure (social environment and sample household) are features of the population under study, outside the control of the researcher. The





[Figure 2.3. A conceptual framework for survey cooperation. Legible labels, grouped by apparent column: social environment (economic conditions, survey-taking climate, neighborhood characteristics); survey design (topic, mode of administration, respondent selection); householder (household structure, socio-demographic attributes, psychological predispositions); interviewer (socio-demographic attributes, experience, expectations); converging on the decision to cooperate or refuse.]




influences on the right are the result of design choices by the researcher, affecting
the nature of the survey requests and the attributes of the actors (the interviewers)
who deliver them. The bottom of the figure, describing the interaction between the
interviewer and the householder, is the occasion when these influences come to
bear. Which of the various influences are made most salient during that interaction
determines the decision outcome of the householder.

Social Environmental Influences on Survey Participation

Since surveys are inherently social events, we would expect that societal and group-level influences might affect participation rates. There is a set of global characteristics in any society that affects survey participation. These factors serve to determine the context within which the request for participation takes place, and
constrain the actions of both householder and interviewer. For example, the degree
of social responsibility felt by a sample person may be affected by such factors as
the legitimacy of societal institutions, the degree of social cohesion, and so on. Such
factors influence not only the expectations that both interviewer and respondent bring to the interaction, but also the particular persuasion strategies (on the part of the interviewer) and decision-making strategies (on the part of the respondent) that are used. More specific to the survey-taking climate are such factors
as the number of surveys conducted in a society (the "oversurveying" effect) and
the perceived legitimacy of surveys.
To the extent that societies differ on these attributes, therefore, we would expect to observe different levels of cooperation for similar surveys conducted in different countries. There is evidence for this (see de Heer and Israels, 1992), but it is clouded by differences in design features across countries, especially the intensity of efforts to reduce nonresponse. These include different protocols for advance
contact with sample households, for repeated callbacks on noncontacted cases, and
for dealing with initial refusals.
There are also environmental influences on survey cooperation below the societal level. For example, urbanicity is one of the most universal correlates of cooperation across the world. Urban dwellers tend to have lower response rates than rural
dwellers. This contrast has been commonly observed in part because the urbanicity
variable is often available from the sampling frame. Urbanicity effects on response rates have been linked to crime rates (House and Wolf, 1978), but they may also reflect population density, the type of housing structures,
and household composition in urban areas. The effect may also be a function of inherent features of urban life—the faster pace, the frequency of fleeting single-purpose contacts with strangers, and the looser ties of community in such areas. We explore the issue of environmental influences (both at the societal and subnational
levels) in greater depth in Chapter 6.

Characteristics of the Sample Householder

The factors affecting nonresponse most widely discussed in the survey literature
are socio-demographic characteristics of the householder or sample person. These
include age, gender, marital status, education, and income. Response rates have
been shown to vary with each of these, as well as other, characteristics (see
Chapter 5).
Other factors associated with these have also been studied for their relationship to response rates. These include household structure and characteristics, such as the number and ages of the household members and the quality and
upkeep of housing; and the past experience of the respondent, such as exposure to
situations similar to the interview interaction or a background that provided information or training relevant to the survey topic.
We do not believe these factors directly cause the participation decision.
Instead, they tend to produce a set of psychological predispositions that affect the
decision. Some of them are indicators of the likely salience of the topic to the respondent (e.g., socioeconomic indicators on income-related surveys); others are indicators of reactions to strangers (e.g., single-person households).
The socio-demographic factors and household characteristics all may influence
the householder's psychological predispositions. Feelings of efficacy, embarrassment, or helpfulness, and moods of depression, elation, or anger will all be affected
by these factors. All of these characteristics will then influence the cognitive
process that will occur during the interaction with the interviewer.
As we note in Chapter 1, we believe that few householders have strongly preformed decisions about survey requests. Rather, these decisions are made largely at
the time of the request for participation. Much social and cognitive psychological
research on decision making (e.g., Eagly and Chaiken, 1984; Petty and Cacioppo,
1986) has contrasted two types of processes. The first is deep, thorough consideration of the pertinent arguments and counterarguments, of the costs and benefits of
options. The second is shallower, quicker, more heuristic decision making based on
peripheral aspects of the options.
It is our belief that the survey-request situation most often favors a heuristic approach because the potential respondent typically does not have a large personal interest in survey participation and, consequently, is not inclined to devote large
amounts of time or cognitive energy to the decision of whether or not to participate.
Further, little of the information typically provided to the householder pertains to
the details of the requested task. Instead, interviewers describe the purpose of the
survey, the nature of the incentive, or the legitimacy of the sponsoring organization.
All of these in some sense are peripheral to the respondent's task of listening to the
interviewer's questions, seriously considering alternative answers, and reporting
honestly one's judgement.
Cialdini (1984) has identified several compliance principles that guide some
heuristic decision making on requests and appear to be activated in surveys. These
include reciprocation, authority, consistency, scarcity, social validation, and liking.
We briefly review these here (see also Groves, Cialdini, and Couper, 1992) and link
them to other concepts used in the literature.
Reciprocation. This heuristic suggests that a householder should be more willing
to comply with a request to the extent that compliance constitutes the repayment of
a perceived gift, fav