Data Collection Strategies, pg. 1
E. F. Vacha, Revised 1/1/07
ORGL 501 LECTURE # 4: DATA COLLECTION STRATEGIES FOR BASIC AND
APPLIED RESEARCH
Required Reading: Chapter 5
CONTENTS
Lecture Essay: Overview of Data Collection Strategies
Strengths and weaknesses of the most commonly used data collection
strategies, including qualitative and quantitative observation; qualitative
interviews and focus groups; questionnaires, surveys, and scales; and archival
and existing data. Be sure to read this section before selecting a data
collection strategy.
Appendix A:
Evaluating Surveys: Wording, Syntax, Organization & Formatting
Guidelines for reviewing and proofing surveys. Be sure to read this appendix
if you plan to design your own questionnaire, survey, or scale, or if you review a
fellow student’s questionnaire, survey, or scale.
Appendix B:
Guidelines for Designing Mail and Internet Surveys
Brief guide to mail and web survey design. Be sure to read this
section if you plan to use a mail, E-mail, or web survey, or if you
review a fellow student’s mail, E-mail, or web survey design.
Introduction
One of the most important decisions researchers must make is to select a particular data
collection strategy from among the many alternatives available (structured observation,
participant observation, surveys, interviews, qualitative interviews, focus groups, archival data,
etc.). The following essay is designed to help you make intelligent decisions about which of the
many data collection strategies to use for your design. My goal here is not to tell you how to use
these strategies, but, rather, to help you decide which to use. Consult your text for excellent
discussions of how to implement each of these data collection strategies.
Many beginning researchers (and some experienced researchers) select a strategy before
they select a topic. Occasionally researchers (especially less experienced researchers) will
describe themselves as a “qualitative researcher”, a “survey researcher”, or a “quantitative
researcher.” This tendency to select a method ahead of the final selection of a specific research
hypothesis or research problem is a major blunder. Furthermore, many researchers become
overly attached to a particular data collection strategy, in part, because using each requires a
great deal of learning. Learning to become a proficient participant observer, or interviewer, or
survey researcher can take months or years. As a consequence, it is tempting to stick to one or
two familiar data collection strategies rather than spend the time learning new ways to collect
and analyze data. The problem is that each approach is a poor measure or indicator of some types
of concepts. The only sensible way to choose a data collection strategy is to examine the
concepts involved in the hypothesis you wish to test or the research problem you wish to
confront, review the literature to see how others have solved this problem, and then
choose the strategy that will give you the most valid and reliable operationalizations. The
following lecture describes some of the data collection strategies most commonly used in
organizational research, including their strengths and weaknesses. Once you decide which
strategy or strategies is most appropriate, you can use your text to learn how to correctly use that
strategy.
Remember, each data collection strategy has a unique set of strengths and weaknesses.
Because each data collection strategy has both strengths and weaknesses, selecting a strategy
before finalizing a research topic and reviewing the literature makes no sense. The correct way to
select a research strategy is to first select a topic; then review the research literature on that topic,
paying particular attention to the data collection strategies others have used; and, finally,
selecting the data collection method that is most appropriate for finding trustworthy answers to
the questions you will be asking about your topic. Selecting a data collection strategy always
involves weighing the advantages and disadvantages of each method, and then selecting the data
collection strategy (or combination of strategies) that is the best compromise.
Observing
The most obvious way to study something is to observe it. Observation provides
immediate and direct information that is not influenced by others’ opinions or beliefs about the
topic of research. There is no more reliable and valid way to operationalize people’s behavior,
because all other approaches to studying behavior (self-reports, others’ reports) are subject to
more error and bias. Furthermore, observation can be conducted in natural settings without
disrupting the day-to-day routines of the people being studied. One of the most challenging
aspects of observational research is that it is impossible for observers to observe everything in
the research setting. As a consequence, the observer must decide what to observe and what to
ignore. (Even the use of recording devices and cameras does not eliminate this problem because
the placement of the device or camera will determine what is recorded.) There are two types of
observation, and each type—systematic (also called structured) observation and qualitative or
participant observation—takes a different approach to the problem of what to observe.
Systematic Observation
Systematic observation is usually used when the researcher has a hypothesis to test or a
very specific research problem because it requires specifying what to observe in advance. The
main focus of most systematic observation is the frequency of behavior. For example, a
researcher might be interested in studying how often workers assist each other as they complete a
task, or the researcher may be interested in learning how the language men and women use in a
work setting differs. Usually, systematic observation involves the use of some kind of
observation checklist or coding scheme. The researcher records each instance of particular
behaviors on a checklist that is prepared in advance. Most checklists are simple forms, but many
systematic observers are beginning to use laptop computers and other electronic devices to
record behavior. Researchers who record the behavior they are studying with video cameras must
use a similar checklist to organize the data recorded so it can be analyzed. See pp. 372-376 of
your text for examples and instructions for creating behavior checklists.
Systematic observation usually requires development of a sampling strategy because the
observer cannot be present at all times, and observation is too exhausting to be conducted
without frequent breaks. Often observers use some form of time sampling (for example,
observing each subject for a specific amount of time and then switching to another subject;
selecting a sample of days to observe and times to observe; and the like) or event sampling
(observing a predetermined sample of frequent events or activities). Using checklists and
observational sampling often requires a certain amount of practice, and observers must be trained
to use checklists in a consistent manner. One critical aspect of this process is developing precise
operational definitions of the concepts involved in the study so observers know which behaviors
represent those concepts.
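As a rough sketch of how such a checklist works in practice, the tallying step can be simulated in a few lines of Python. All of the subjects, behavior codes, and interval records below are hypothetical, invented only to illustrate time sampling with a fixed checklist:

```python
from collections import Counter

# Hypothetical interval records: during each five-minute time-sampling
# interval the observer watches one subject and ticks every behavior on
# the pre-defined checklist that occurs.
checklist = {"assists_coworker", "asks_for_help", "works_alone"}

intervals = [
    ("worker_A", ["assists_coworker", "works_alone"]),
    ("worker_B", ["works_alone"]),
    ("worker_A", ["assists_coworker"]),
    ("worker_B", ["asks_for_help", "works_alone"]),
]

# Tally behavior frequencies per subject, just as a paper checklist would.
tally = Counter()
for subject, behaviors in intervals:
    for behavior in behaviors:
        if behavior in checklist:   # anything not on the checklist is ignored
            tally[(subject, behavior)] += 1

for (subject, behavior), n in sorted(tally.items()):
    print(f"{subject}  {behavior}: {n}")
```

The point of the sketch is that the checklist fixes, in advance, both what counts as an observation and what is ignored; the analysis afterward is nothing more than counting.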
One of my students used systematic observation to test the hypothesis that male police
officers are more respected than female police officers. She developed an operational definition
of “respect” that specified what verbal and nonverbal behaviors indicate respect and disrespect.
Then she developed a simple form that listed the behaviors and provided space for recording
each instance of those behaviors. To obtain her data, she accompanied both male and female
police officers on their patrols (the department offered a “ride along” program for people who
wished to observe police in the field), being careful to do so at different times of the day and
different days of the week. She used her checklist to record the behavior of each citizen who
interacted with the officer (that is, she used event sampling, with each police citizen contact
being an event to be observed), as well as the date, time, duration of the contact, and reason for
the contact. Interestingly, she found that female officers experience more respect and less
disrespect (as measured by the frequency of the behaviors on her checklist) than male officers.
Systematic observation has many strengths. One of the most important is that it can be
used in natural settings, and, once the subjects become accustomed to the observer’s presence,
their behavior in real life situations can be observed. Another advantage of systematic
observation is that it provides the least biased measure of people’s behavior. Using interviews or
questionnaires to measure respondents’ behavior is not nearly as accurate because self-reports of
behavior, especially routine, frequent behavior, are often incomplete. Respondents do not always
remember or perceive their own behavior accurately, they are sometimes unwilling to talk about
the behavior (especially if it is considered to be private or potentially damaging to one’s
reputation), and they have a great deal of difficulty remembering how often they engage in
routine activities.
Systematic observation also has a number of weaknesses and limitations. One of the
most troubling is that systematic observation does not tell us much about what people are
thinking and feeling. Of course, we can try to infer what is going on in the heads of our subjects,
but doing so is fraught with error because the same behavior can reflect a wide variety of
different mental and emotional states. For example, Max Weber pointed out that even as simple a
behavior as chopping wood can reflect any number of mental and emotional states. A person
chopping wood could be cold and wish to have a warm fire. But the same person could also be
chopping wood to sell it. Or, the person could be chopping wood for the exercise (Muhammad
Ali used to chop wood to help get himself in shape for his fights). Or, the person could be so
angry that he or she needs a physical release to vent that anger.
Another important weakness of all observation is that not all behavior is available for
observation. Since spying is both unethical and risky (getting caught could end one’s chance to
collect data), one must usually have subjects’ permission to observe them or confine one’s
observation to public behavior observable by anyone. Clearly systematic observation is not a
useful method for studying illegal (e.g., employee theft); personally damaging (e.g. immoral
behavior); or very private behavior (e.g., sexual behavior). Such behavior must usually be
studied using interviews or questionnaires of self-reported behavior.
Finally, systematic observation is time consuming and labor intensive. Observers often
spend hours just waiting for the event or situation they need to study to occur, and they usually
can only observe one or a few people at a time. As a result, their samples are usually rather
small, and small samples are often not representative of the larger population the researcher is
interested in.
Qualitative or Participant Observation
Participant observation or ethnographic observation overcomes one of the main
weaknesses of systematic observation—lack of access to the thoughts and emotions of the people
being studied. Participant observation involves observing and interacting with the people being
studied. The participant observer can actually join in the activities of the people being studied,
or, if that is not appropriate, “shadow” the subjects of the study by accompanying them as they
go about their routine activities. This approach allows the observer to ask questions, discuss
events, and inquire into research subjects’ thoughts, beliefs, perceptions, and feelings.
Participant observation allows the observer to learn about people, situations, and settings
that are poorly known or seldom studied. Unlike the systematic observer, the participant
observer does not solve the problem of being unable to study everything by specifying in
advance what will be observed and what will be ignored. Rather, participant observers
constantly shift the focus of what is observed (and what is ignored) as the study proceeds. They analyze
their data as they collect it, and shift their focus as they learn more about the phenomenon being
studied. As a consequence, participant observation is particularly useful for exploratory research
concerning people, settings, or groups that have not been previously studied.
Strengths of participant observation. Participant observation has many of the strengths of
systematic observation. Usually, the participant observer is studying people as they go about
their everyday life in natural situations. The artificiality of the laboratory is avoided, and
participant observers do not have to rely on self-reports or others’ observations to learn about the
behavior or the people being studied. As previously indicated, it has two advantages over
systematic observation—the researcher can learn about the thoughts, emotions and perceptions
of those being studied, and the researcher does not need a hypothesis or detailed problem
statement to decide what to study because that decision does not have to be made in advance. As
a consequence, participant observation is a much better choice for exploratory or inductive
research designed to develop hypotheses about poorly studied phenomena.
One of the great benefits of participant observation is that it yields incredibly rich data.
Good participant observers learn more than just what people do. They also learn why they do it
(or why they think they do it), how they feel at the time, and how others react to the behavior.
It is the richness of the data produced from participant observation that makes it so well suited
for exploratory research. Hypotheses and theories derived from participant observation are
grounded in detailed knowledge about the objective and subjective lives of those studied. The
grounded theory developed from such research is often very useful and trustworthy, and later
research using other methods often confirms hypotheses developed from participant observation
studies.
Weaknesses of participant observation. Unfortunately, participant observation has many
of the weaknesses of systematic observation. It is even more time consuming, only a few people
can be studied, and it is usually very difficult to observe private, illegal, or immoral behavior.
Participant observation is less efficient than even systematic observation. Participant observers
often must observe for hundreds of hours to yield enough data from their necessarily small
samples to draw useful conclusions and generalizations. Furthermore, most participant observers
find they must transcribe their field notes, and doing so requires several hours for each hour of
observation. Analysis and coding the data requires many more hours, and, because analysis is
done continuously as more data is obtained, the lengthy process of coding and analyzing data is
repeated over and over again as the study proceeds. As a consequence, participant observation
can only be conducted with small samples. Furthermore, participant observation samples are
often less representative than those used by systematic observers because people are less likely to
be willing to let a participant observer join their activities. Participant observers never know for
sure whether their discoveries apply only to those studied, or if they can be generalized to the
larger world. As a consequence, while participant observation yields useful hypotheses,
confirming those hypotheses usually requires additional studies using systematic observation or
other methods that allow the researcher to study larger, more representative samples. For this
reason, in most cases participant observation should only be used for exploration of
subjects that have not been studied before. This important limitation means that no decision
about data collection strategies should be made before reviewing the research literature on
your topic. If there is little theory and research about your topic, participant observation is
probably a good choice because you will probably end up doing an exploratory study rather than
testing a hypothesis. But if your topic has been the subject of previous research and theorizing, it
may be more appropriate to select a hypothesis from the literature and use systematic
observation, surveys, or archival data to test it.
Asking Questions
Another very obvious way to learn about people and organizations is to ask questions. In
general, the best way to learn about what people think, perceive, and feel is to ask them. While
there is no way to observe another’s subjective experiences, we can study subjective experiences
indirectly because people are aware of most of their thoughts, feelings, and perceptions. Thus,
while observation is the best way to study behavior, interviews, questionnaires, attitude scales,
and focus groups are the best data collection strategies for studying attitudes, beliefs, values, and
other internal states.
Questionnaires, Surveys, and Scales
The most efficient way to study people is through the use of structured written
questionnaires, scales, or surveys (to keep things simple, I will refer to all of these methods as
“surveys”). Consult your text for discussions of the unique features of each. Surveys can be
delivered relatively inexpensively through direct administration (either self-administered or read
to the respondent), the mail, by telephone, or, increasingly, electronically. A well designed
questionnaire can yield a large amount of data while requiring only a few minutes of the
respondents’ time.
Strengths of questionnaires, surveys, and scales. Because they are so efficient, structured
surveys yield the most data from the largest number of people for the least cost (both financial
and in terms of time and effort). As a consequence, they are the best data collection strategy for
studying large groups and for testing hypotheses with large, representative samples. Surveys also
lend themselves to probability sampling, and probability samples are the most likely to be
representative.
Surveys also allow researchers to avoid many of the ethical problems created by direct
observation. If the survey is administered to large groups of people, responses can be completely
anonymous. Even when they are administered one at a time through the mail, by E-mail, or by
telephone, it is very easy to prevent identifying information from becoming attached to the data.
As a consequence, surveys are the preferred method for studying illegal, immoral, or private
behavior. Furthermore, self-report measures of such behavior can yield accurate information. For
example, numerous studies have compared the results of self-reported crime surveys with other
measures of crime (such as questioning respondents while using lie detectors, and questioning
knowledgeable others like parents and teachers), and most have found that anonymous self-report surveys of illegal behavior yield surprisingly accurate results. So long as respondents are
convinced that the survey is truly anonymous, they are often willing to report such behavior on
surveys.
Weaknesses of questionnaires, surveys and scales. Structured surveys do have some
limitations that make them poor choices for some kinds of research. One limitation already
mentioned is that they are poor measures of behavior. Since a survey relies on self-reports of
behavior, there is always the possibility that respondents will not report their activities
accurately. With anonymous surveys, inaccurate results are often obtained when respondents are
asked to report the frequencies of their most routine behaviors. For example, some years ago a
colleague and I investigated college students’ study habits. We asked them to complete and
submit a weekly diary of their study sessions for our classes, and we also asked them to fill out a
confidential survey at the end of the course that asked them to estimate how much they studied.
We found that our students estimated twice as much study on the confidential survey as they
reported on their weekly diaries.
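The comparison we made reduces to simple arithmetic. The sketch below uses invented figures for three hypothetical students, not our actual data, just to show the calculation:

```python
# Hypothetical data echoing the study-habits example: average weekly hours
# from each student's diary versus that student's end-of-term survey estimate.
diary_hours  = {"s1": 4.0, "s2": 6.0, "s3": 3.0}   # recorded week by week
survey_hours = {"s1": 8.5, "s2": 11.0, "s3": 6.5}  # recalled on the survey

# Ratio of recalled to recorded study time for each student, then the mean.
ratios = {s: survey_hours[s] / diary_hours[s] for s in diary_hours}
mean_ratio = sum(ratios.values()) / len(ratios)
print(f"Survey estimates average {mean_ratio:.1f}x the diary record")
```

With these made-up numbers the survey estimates run about double the diary record, the same pattern we observed.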
Generally, respondents find it much more difficult to accurately report routine behavior
than unusual behavior. As a consequence, surveys of behavior on weekends generally yield more
accurate results than surveys of behavior during the week, because the things we do on weekends
are not routine and are more memorable than our weekday routine. (Perhaps this is why self-reported crime surveys are fairly accurate. While most of us break the law from time to time,
crime is not usually part of our daily routine.)
Another weakness of surveys is that they require prior knowledge about the topic under
study. A survey or questionnaire can only yield useful information if we ask the right questions,
and asking the right questions requires knowing something about the subject of study. Since
most research involves testing hypotheses, this is not usually a problem because the hypothesis
tells us what questions to ask. But, like systematic observation, surveys are poor tools for
exploratory research involving poorly studied topics. As a consequence, whenever possible,
surveys should be avoided when conducting exploratory research. This important limitation
means that no decision about data collection strategies should be made before reviewing the
research literature on your topic. If there is a considerable body of theory and research about
your topic, a survey is probably a good choice because you will probably end up testing a
hypothesis.
The limited utility of open-ended or qualitative questions on surveys. Sometimes
beginning researchers use surveys with open-ended questions in an attempt to get
around the problem of not knowing what to ask on a survey, and many of you have probably
noticed that some structured surveys and interviews also include open-ended items. For example,
the current student evaluation of instruction questionnaire used at G.U. includes 10 structured
items asking students to rate various aspects of the course on a seven point scale, but it also
includes three open-ended items followed by a large space for the student to write a response.
The first of these open-ended items looks like this:
“If you were asked by one of your friends to describe this professor’s teaching, what would you
say?”
The second open-ended item looks like this:
“If you were to give this professor a one-sentence suggestion about how to improve his/her
teaching, what would you suggest?”
Open-ended items can be useful because they allow the respondent to discuss
issues, ideas, and concerns that are not included in the structured portion of the survey. For
example, the G.U. course evaluation questionnaire does not include an item asking students to
rate their text. If the text had serious problems, the students could use the open-ended items to
discuss those problems. Since no questionnaire can ask every possible question related to the
topic, inclusion of a few open-ended items can be beneficial because it allows the respondents to
discuss or evaluate what is important to them even if there is no appropriate structured item.
These items can provide some valuable information, but they also can provide very
misleading results if they are not used correctly. One of the main problems with open-ended
items is that it is difficult to evaluate and interpret such responses. For example, here are some
typical responses to the first open-ended item on the G.U. course evaluation survey obtained
from 20 students taking an upper division sociology class:
Student # 1: “He’s cool”
Student # 2: “Pretty good prof.”
Student # 3: “Very fair”
Student # 4: “The text is really boring”
Student # 5: “He uses good examples”
Student # 6: “The text is too hard”
Student # 7: “I really like the class project”
Student # 8: “His tests are tough”
Student # 9: “The text was too advanced”
Student # 10: “Too many stories & examples”
Student # 11: “His stories keep class interesting”
Student # 12: “He’s one of my favorite profs.”
The open-ended responses provide some interesting insights into this professor’s teaching,
but it is impossible to draw firm conclusions from them. The most obvious problem is that only
12 of the 20 students bothered to complete the open-ended items. Such a low return rate is very
common with open-ended survey items. Another problem is that there is no way to tell whether
these comments reflect the majority’s opinions. For example, is the professor’s use
of stories and examples to illustrate his lectures a good idea? Also, three students mentioned
difficulties reading the text, but nine did not mention the text at all. Does that mean most
students do not have trouble with the text, or did most choose not to discuss the text even though
it’s difficult to read? Finally, how should we interpret “he’s cool” and “pretty good prof”?
It’s often a good idea to include a few open-ended items in a survey, but the data they
provide should be viewed as sources of hypotheses that might be tested later with a new
structured survey item. For example, the professor might want to include an item asking the
students to rate the textbook the next time he asks students to evaluate his course. By doing so,
he could determine whether or not the problems with the text mentioned by the three respondents
reflect the thinking of most of the students in the class. Similarly, he might want to create an item
asking students to rate his use of examples and stories to illustrate his lectures to find out how
many find them helpful and how many do not.
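Turning such comments into testable counts is a simple coding exercise. The sketch below tags each response with theme keywords; the keyword lists are my own invention, and the figure of 20 distributed surveys mirrors the example above, so treat the whole thing as illustrative:

```python
# Hypothetical coding pass over open-ended responses: tag each comment with
# themes so mentions can be counted against the full sample of respondents.
responses = [
    "He's cool", "Pretty good prof.", "Very fair",
    "The text is really boring", "He uses good examples",
    "The text is too hard", "I really like the class project",
    "His tests are tough", "The text was too advanced",
    "Too many stories & examples", "His stories keep class interesting",
    "He's one of my favorite profs.",
]
n_distributed = 20   # surveys handed out; eight students left the item blank

# Illustrative keyword lists defining each theme.
themes = {"text": ["text"], "stories": ["stories", "examples"]}

counts = {theme: sum(any(k in r.lower() for k in kws) for r in responses)
          for theme, kws in themes.items()}

for theme, n in counts.items():
    print(f"{theme}: mentioned by {n} of {n_distributed} students")
```

Even this crude tally makes the interpretive problem concrete: three mentions of the text out of twenty students is a hypothesis about the class, not a finding.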
In short, written responses to qualitative survey questions are at best suggestive of
hypotheses that could be tested with structured survey items at a later date because there is no
way to determine how many respondents hold similar views. As a consequence, surveys should
never consist solely or primarily of open-ended questions. However, open-ended qualitative
questions can provide excellent data if they are administered orally, because the researcher can
use additional “probe questions” to determine what respondents mean and the degree to which
other respondents hold similar views.
Qualitative Interviews and Focus Groups
Just as there is a qualitative alternative to systematic observation, there are also qualitative
alternatives to surveys. Sometimes participant observation is not possible when we want to
conduct exploratory research because we do not have access to the situation, we are interested in
an event from the past, or subjects would object to the presence of a researcher. The most
common solution to this problem is to conduct exploratory research using open-ended or
qualitative interviews. Qualitative interviewing involves beginning with just a few very broad
questions deliberately worded to make it impossible for the respondent to answer with a simple
“yes” or “no”. The goal is to use these questions to create a “guided conversation” between the
researcher and the respondent that sticks to the general topic without constraining or molding the
respondents’ comments. Usually the interviewer prepares “probe questions” to elicit elaboration
of the respondents’ answers. Both the original interview questions and the probe questions are often
changed, elaborated, or eliminated as the researcher learns more about the subject of study by
analyzing the data as they are collected.
Focus groups are used in a similar way. The researcher begins with a small number of
questions, but they are presented to a small group of respondents who are asked to discuss them
as a group. The researcher relies on probe questions to keep the conversation going and to
explore ideas as they emerge, and usually the responses of the group are recorded.
Advantages of qualitative interviews and focus groups. Qualitative interviews have many
of the same advantages as participant observation. They are best suited for exploratory studies,
and give a great deal of insight into the thoughts, perceptions and feelings of the respondents. As
a result, they provide much richer data than surveys. The use of probe questions allows the
researcher to change the focus of the interview to reflect new understandings and discoveries,
and, because respondents are allowed to discuss the topic in their own words, there is less chance
of imposing the researcher’s values and beliefs on the data. Furthermore, while interviewers
cannot directly observe the behavior of their respondents, they can learn a lot about how
respondents feel about the topic being discussed by observing and noting body language.
Weaknesses of qualitative interviews and focus groups. Although they are more
efficient than participant observation, qualitative interview techniques require small, often
nonrepresentative samples. Interviews typically take from half an hour to an hour or more to
conduct, and transcribing notes (from either a notebook or tape) can take several hours for each
interview. Coding and analyzing the data is very time consuming, and because it is conducted
continuously as more data are collected, coding and analysis is repeated over and over. As a
consequence, it is not unusual for qualitative interview studies or focus groups to utilize samples
of only two or three dozen respondents. Furthermore, because anonymity cannot be guaranteed
and the activity is time consuming for respondents, qualitative interview studies often rely on
convenience samples that are not likely to be representative of the larger group being studied. As
a consequence these methods are better suited for exploratory studies than for testing hypotheses.
These data collection strategies can also present ethical challenges because they usually
cannot be anonymous. Researchers must be careful to protect the identities of the respondents,
and they must devote a lot of effort to developing rapport and trust with respondents before they
can obtain much reliable information from them.
Using Existing Data
Organizations often collect a lot of data and information that can be used for research. The
computer has made this data more accessible than ever before. The possibilities for using
existing data to study organizations are endless. Furthermore, organizations provide more than
just statistics. With increasing use of computers, even routine communication by E-mail or
memos can be archived and studied. Just as is the case with observation and questioning,
archival data collection strategies use either quantitative or qualitative data.
Statistical Archives
The computer is making it easier than ever before to access statistical data collected by
government and private agencies. State and federal governments have amassed huge archives of
community level data ranging from census data concerning income, education, work, poverty,
gender, race, and housing to data concerning crime, health, and voting patterns. More and more
of this data is available on-line and/or on compact disk. If you are interested
in studying a community, the census provides a great deal of information about the people who
live in it, and those data are available on the neighborhood level because communities are
divided into census tracts, which are, in turn, divided into “blocks” that roughly match existing
neighborhoods. The census provides data about income, race, ethnicity, poverty, education, and
employment. County governments keep track of births, deaths, marriages, divorces, and
communicable diseases. Schools collect statistics on their students and keep their scores on
standardized tests.
Another source of archived data is research survey archives. Data from a wide range of
polls and surveys can be accessed on-line (sometimes for a fee, sometimes at no cost). Many of
these surveys utilize large random samples, and some, like the General Social Survey, are
conducted every year or two. As a consequence, they allow researchers to follow trends in
survey responses over years or decades.
Most large organizations amass a great deal of statistical information about their members,
employees, clients, and customers. Creative use of this information makes it possible to conduct
an entire study without leaving the library or one’s home. For example, many organizations keep
track of employee absences and employee turnover, and these data are closely correlated with
employee morale and job satisfaction. With appropriate ethical safeguards, these data sources
can be used for a wide range of research topics. It’s quite possible that data collected by your
employer or school could be used to test a wide variety of hypotheses about organizations and
organizational behavior.
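For instance, a researcher who has pulled absence counts and satisfaction ratings from personnel records could check for the expected negative relationship with a Pearson correlation. The sketch below is purely illustrative; the employee numbers are invented, not real archival data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical archival records: days absent last year and a 1-7
# job-satisfaction rating for the same ten employees (invented data).
absences = [2, 11, 5, 9, 1, 14, 3, 8, 6, 12]
satisfaction = [6, 2, 5, 3, 7, 1, 6, 3, 4, 2]

print(round(pearson_r(absences, satisfaction), 2))  # strongly negative
```

A strong negative coefficient would be consistent with the morale interpretation, though archival data alone cannot establish why the two measures move together.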
Advantages of statistical archives. The most obvious advantage of statistical archives is
that the data have already been collected. Usually they are available only in raw form, so they
must be compiled and analyzed statistically, but the actual work of data collection has already
been done. Observation and questioning provide only “snapshots” because data are usually
collected over a short period of time. A very important advantage of statistical archives is that
they are often longitudinal—the data have been collected over a period of years or decades. As a
consequence, these data allow researchers to study trends and long-term changes.
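As a small illustration of trend analysis with archived statistics, the sketch below fits a least-squares line to a yearly series and reports the change per decade. The series itself is hypothetical, invented only to show the calculation:

```python
# Hypothetical archive: percent of respondents reporting "high trust
# in institutions" in each survey year (invented numbers).
yearly = {1975: 48.0, 1980: 45.5, 1985: 44.0, 1990: 41.0,
          1995: 39.5, 2000: 37.0, 2005: 35.5}

def trend_per_decade(series):
    """Least-squares slope of value on year, scaled to change per decade."""
    xs, ys = list(series), list(series.values())
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 10

print(round(trend_per_decade(yearly), 2))  # -> -4.21 points per decade
```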
Weaknesses of statistical archives. The main disadvantage of statistical archives is that the
data were not collected to answer particular researchers’ questions. As a consequence, a
researcher may need to access several different archives to get all of the information needed to
test a hypothesis or answer a research question. Doing so requires a great deal of familiarity with
the archives and considerable effort to locate archives that are suitable. Furthermore, the
knowledge explosion created by the computer has made a lot of data available—so much so that
wading through the mass of available data to locate the particular statistics needed can be time
consuming and frustrating.
Finally, successful use of statistical archives involves more than the ability to locate and
access the data. Because the data are collected for different purposes than research, using them to
test hypotheses often requires considerable ingenuity and creativity. However, when used
creatively, archival data can provide interesting and provocative information. Notice, for
example, the description in your text (page 382) of how statistics collected for Miss America
pageants have been used to document changes in Americans’ standards of beauty over many
decades.
Qualitative Archival Data: Verbal, Written, and Visual Records
Not all archival data is statistical. Letters, memos, pictures, speeches, advertisements, the
contents of books and articles, and the like have all been used to conduct important research. The
most famous use of such data, Thomas and Znaniecki’s (1918) monumental study of the
experience of immigrants to the U.S., is an excellent example of how such data can be used
creatively to provide important insights. At the time of their study, the U.S. had just passed
through a period of unprecedented immigration. Between 1880 and 1920, millions of immigrants
had migrated to the U.S. In fact, in some years more people immigrated to the U.S. than were
born here. While this wave of immigration produced many changes and problems, very little was
known about the immigrant experience—how recent immigrants adapted to their new home, why
some were successful and others not, and the like. While no one was collecting data on
immigrants, the immigrants were producing masses of data by writing about their experiences in
letters to their friends and relatives in the “old country”. Thomas and Znaniecki produced the
first study of the immigrant experience by traveling to Poland and collecting as many of these
letters as possible. Many of the hypotheses they developed from these data almost a century ago
continue to be supported by more recent studies of immigrants to the U.S., and their book, The
Polish Peasant in Europe and America, is still worth reading by anyone interested in the
immigrant experience.
As is the case with statistical data, the computer has made this kind of data more available
and easier to collect and analyze. For example, as more and more organizations rely on E-mail
for communication within the organization, researchers have greater access to the writing and
thinking of organization members. Similarly, other documents such as memos and letters are
more likely to be archived somewhere on a server or hard drive. Creative use of these data can
test an infinite number of hypotheses and generate a great deal of new knowledge. The trick is to
step back from these data and examine them, not in terms of their original intent, but in terms of
how they could be used for a research purpose. For example, one of my students became
interested in how the gender of administrators influences the way they communicate with other
organization members, and how the gender of the receiver influences their style of
communication. She was able to test a hypothesis about gendered communication by examining
several hundred memos she obtained from her workplace.
Not all archival data is written. Photography has been in use for at least 150 years, and this
means that there are many longitudinal photographic records that are available for researchers.
For example, one of my students, an avid baseball card collector, realized that he could use his
baseball card collection to operationalize changes in men’s facial hair fashions over almost a
century. There are numerous theories and hypotheses about fashions and fashion changes that he
was able to test using his card collection.
Advantages of verbal and pictorial records. As is the case with statistical archives, the
main advantage is that the data are already available. They may be locked up in dusty attics or
buried deep in computer hard drives, but they have already been created. Furthermore, a
technique called “content analysis” has been developed to make analysis of such data more
efficient and objective (see pp. 396–401 for a description of the technique).
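A crude version of content analysis can be sketched in a few lines: define a codebook of categories and keywords, then count how often each category appears in a set of documents. Everything below (the categories, keywords, and memos) is hypothetical, and real content analysis requires validated coding schemes and inter-coder reliability checks:

```python
import re
from collections import Counter

# Hypothetical codebook mapping coding categories to keyword sets.
CODEBOOK = {
    "deadline_pressure": {"deadline", "overdue", "late", "rush"},
    "team_support": {"help", "support", "thanks", "team"},
}

memos = [  # hypothetical documents to be coded
    "Thanks for the help on the quarterly report; great team effort.",
    "The audit is overdue and the deadline is Friday, so please rush.",
]

def code_documents(docs, codebook):
    """Count keyword hits per coding category across all documents."""
    counts = Counter()
    for doc in docs:
        words = re.findall(r"[a-z']+", doc.lower())
        for category, keywords in codebook.items():
            counts[category] += sum(1 for w in words if w in keywords)
    return counts

print(code_documents(memos, CODEBOOK))
```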
Disadvantages of verbal and pictorial records. As is the case with statistical archives, the
main disadvantage of this data collection strategy is that the data were not created for the purpose
of research. The data may be incomplete or fail to provide vital information. Another
disadvantage is that ethical concerns may make it difficult to obtain such data, and organizations
may be reluctant to release the data out of fear that it may not be treated confidentially.
Furthermore, when researchers study private communications, E-mails, memos, letters, and the
like, they must be sure that the organization does not get access to damaging information about the
individuals generated by the research.
Conclusion
Clearly, there is no one best data collection strategy. Before you select a data collection
strategy for your study, you will need to carefully assess the strengths and weaknesses of the
various alternatives. Fortunately, much of this work will have already been done by other
researchers interested in the same or related topics. For this reason, the first step in selecting a
data collection strategy is to review the research literature on your topic. If you are lucky,
you may even find existing data collection instruments (questionnaires, behavior checklists,
coding schemes, etc.) that you can adopt “as is” or adapt to meet the needs of your study. Doing
so is preferable to designing your own, especially if the existing instruments have been subjected
to validity and reliability testing. Just remember to cite your sources. If you must design your
own instruments, examination of existing instruments or authors’ descriptions of their data
collection strategies can provide useful ideas and guidelines.
Finally, because data collection strategies all have weaknesses, you may wish to consider
combining several different strategies. For example, if you wished to study client, customer,
employee, or student needs to help improve the services your organization provides, you might
want to consider combining qualitative methods with more quantitative methods. One such
strategy is to use open-ended interviews or focus groups to explore the needs of a small sample
of research subjects because you may not know enough about their needs, opinions and problems
to know what to ask in a survey. This rich data source will probably suggest ideas you would not
think to ask about in a survey. However, because the sample must necessarily be small and may
include only those you know or can easily recruit, you might want to use your results to construct
a survey to verify your conclusions with a larger, more representative sample.
APPENDIX A
QUESTIONS TO ASK WHEN EVALUATING A SURVEY
Survey design involves four dimensions: Question Wording, Syntax, Organization,
and Formatting. When reviewing a survey (either your own or someone else’s) it is best to
begin with the content of the survey (question wording and syntax), then work on the
organization of the survey, and end by working on formatting.
There are a lot of factors to examine when evaluating a survey—far too many to check all
at once. As a result, it is best to review a survey four times. First, check all of the items for
correct wording and make any necessary wording changes. Next, examine syntax and make any
syntax changes needed. Then look at the organization of the survey items and make any
organization changes needed. Finally, after making all of the wording, syntax, and organization
changes, examine the formatting of the survey and make any needed format changes.
The above procedure is best for evaluating and editing surveys. When you create a
survey, it’s best to reverse the order of tasks by first writing (wording) the items, then organizing
them, and then formatting them.
Below are some questions to ask when designing, evaluating, or proofing a survey. This
is not an exhaustive list, but it does identify most of the common errors my past students and
clients have made when they designed their own surveys.
Wording:
1. Do items contain terms that are vague or too complex?
a. Beware of terms like mass media (use "radio, TV and newspapers"); "marital status" (use "Are
you presently: married, divorced, separated, widowed?").
2. Are response alternatives too vague?
a. Instead of "frequently", "seldom", etc.; use specific choices like "more than 6 times", "4-5
times", etc. if possible.
b. To measure behavior, replace global estimates with counts in a recent time period if possible.
Not "How many hours per week do you study?"
Rather, "How many hours did you study last week?"
3. Are slang terms, technical vocabularies, or other specialized terms avoided?
4. Are terms that bias questions, such as emotionally laden words like "justice", "bureaucrat",
and "affirmative action", avoided?
Syntax:
1. Are active rather than passive verbs used?
Not "Students are given opportunities to ask questions by the professor";
rather, "The professor lets students ask questions".
2. Are there any double barreled questions?
a. "Is your supervisor fair and honest?" and "Did your parents encourage you to get a degree?" are
both double barreled. (Your supervisor could be unfair but honest, and your mother could
have discouraged you from getting a degree while your father encouraged you to do so.)
3. Have response alternatives been biased by leaving out common choices?
Not "never, seldom, always"; (The range between seldom and always is too wide.)
but "never, seldom, sometimes, often, always"
4. Have complex, too long, or clumsy sentences been replaced by simple concise sentences?
Organization:
1. Does the survey begin with an introduction describing the researchers, their goals, and
instructions?
a. Does the introduction explain whether or not the survey is anonymous, does it explain how
private information will be protected (if it’s not anonymous), does it inform respondents that
their participation is voluntary, and does it give the respondents permission to skip items that
make them feel uncomfortable or are otherwise objectionable (it is ok to let respondents
know that their survey may not be used in the study if items are skipped)?
b. Are respondents told how to select their response from the alternatives (circle, check, etc.)?
c. Are respondents encouraged to use their “best guess” when completing multi-item
scales so they will be less likely to skip items?
2. Does the first item capture the respondents' interest, and is it easy to answer?
3. Are demographic questions and other sensitive items near the end of the survey?
4. Are early survey items sequenced to avoid biasing responses to later items?
a. Do items "funnel down" from general to specific? ("Over-all, how do you rate your instructor?"
should precede items such as "Were you graded fairly?")
5. Are similar items separated enough to discourage checking for consistency?
6. Are new directions provided each time the type of scale or response alternatives change?
7. For more than 8–10 items, is response set avoided by using both positive and negative
items?
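The reverse-scored (negatively worded) items mentioned in point 7 must be recoded before item responses are summed into a scale score. A minimal sketch, with hypothetical item numbers on a 1-5 scale:

```python
SCALE_MAX = 5
REVERSED_ITEMS = {2, 4}  # hypothetical negatively worded items

def scale_score(responses):
    """Sum 1-5 answers after flipping reverse-scored items (1<->5, 2<->4)."""
    total = 0
    for item, answer in responses.items():
        if item in REVERSED_ITEMS:
            answer = (SCALE_MAX + 1) - answer
        total += answer
    return total

print(scale_score({1: 4, 2: 2, 3: 5, 4: 1, 5: 4}))  # -> 22
```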
Formatting:
1. Is there sufficient over-all vertical "white space", and enough space between items?
a. Will the respondents' answers be on the right line?
2. Is there enough space between response alternatives?
(Not "Year: Frosh__ Sophomore__ Junior__";
but, "Year: Frosh___
Sophomore___
Junior___
Senior___")
SOME GUIDELINES FOR DESIGNING MAILED AND E-MAIL SURVEYS
Note: The following guidelines are based on several sources, including the following:
Dillman, Don A. 2007. Mail and Internet Surveys: The Tailored Design Method, 2nd
Edition. Hoboken, NJ: Wiley.
Dillman, Don A. 2000. Mail and Internet Surveys: The Tailored Design Method.
New York: Wiley.
Hoyle, Rick H., Monica J. Harris, and Charles M. Judd. 2002. Research Methods in
Social Relations, 7th ed. New York: Wadsworth.
Miller, Delbert C. 1991. Handbook of Research Design and Social Measurement, 5th
Ed. Newbury Park, CA: Sage.
Mailed and internet surveys are often popular choices because delivering them to
respondents seems like such an easy alternative to other methods for distributing surveys.
Unfortunately, they have not lived up to their promise because most people never respond to
them. Generally, a one-shot mailing or e-mailing of a survey will yield less than 25% returns,
and return rates of only 10%–15% are not unusual. Such low return rates mean that no matter
how carefully the sample is selected, there is a very good chance that the surveys returned will
not be representative of the population. When most people do not return a survey, we must
assume that those who do are, in some unknown ways, different from the general population
since the representative response is no response.
Because return rates influence external validity, a lot of effort has been devoted to boosting
the return rates of mailed surveys. E-mail surveys are so new that much less research has been
done on increasing their return rates, but most of the principles outlined below apply to both
E-mailed and surface mail surveys. The existing research suggests that following the guidelines
below can yield return rates of up to 65%-75% for mailed surveys sent to the general population.
There is not enough research to reliably estimate the effects of these procedures on the return
rates of internet surveys, but the research that is available suggests that the practices described
below, especially the use of repeated contacts, will have much the same effect.
E-mail surveys also present some ethical concerns because they are not anonymous.
Confidentiality can be maintained by sending an E-mail request for respondents to log on to a
website to complete the survey (several companies now offer this service), but my students and
clients have obtained even worse return rates with this approach when only one
contact is used.
The strategies that do boost return rates also increase the costs and effort needed to collect data.
As a result, it is often best to consider other options if they are available. One of the best
ways to obtain good return rates at low cost is to administer the survey to groups of individuals at
some gathering of members of the population. Company meetings, distribution to people waiting
for appointments, classes, company picnics, and other such gatherings are excellent choices if
they exist.
No one strategy can produce significantly higher return rates for mailed surveys. The
following guidelines outline a series of strategies, each of which can incrementally increase
return rates. In my experience consulting for organizations surveying their own members about
something of personal interest to members, just sending the prenotice letter, the survey with a
cover letter, and a postcard reminder usually yields a return rate of at least 50%.
GUIDELINES FOR MAIL SURVEYS
Formatting and Mailing Questionnaires
Multiple mailings are essential for obtaining high return rates. Dillman recommends four
first class mailings and one “special contact” (registered letter or telephone contact). A “one shot
mailing” typically results in less than a 25% return rate. Each contact increases return rates
slightly, and collectively they may increase return rates to 65% to 75% or more. In my
experience, a second mailing will increase the return rate to about 50% if the survey is
administered by an organization to its members.
1. Send a brief prenotice letter a few days (never more than a week) before mailing the
questionnaire. The letter should be dated and have a personal inside address (not “dear
student”); it should explain what the survey is about in a single sentence; it should
explain why the survey is important, who will use it, and what it will tell us; thank the
respondent; have a real signature; and mention a token incentive. This letter will raise
the return rate by at least an additional six percent. (See the sample below.)
2. Send the survey a few days later with a dated, signed cover letter on letterhead with
a personal inside address and a personal salutation. Use a standard size envelope
unless a tangible incentive (like a pen) is included. Include the following information
in the cover letter: request participation; explain why the respondent was selected;
describe the usefulness of the survey; briefly describe how confidentiality will be
ensured (too long an explanation can discourage participation); ask respondents
who do not wish to participate to return the blank survey (so you will know not to
bother contacting them again); mention the enclosed token of appreciation; express a
willingness to answer questions and include a telephone number; and thank the
respondent. Sign the letter in a contrasting ink color so it does not look copied. (See the
sample cover letter below.)
Address the envelope in as attractive a way as possible. Ideally, each envelope
should be individually typed. If mailing labels are used, avoid using last name first.
Using a mailing label with last name first makes the letter look like "junk mail" sent
indiscriminately to people who happen to appear on a mailing list.
Include an addressed, postage paid return envelope.
3. Include an incentive with the questionnaire. According to Dillman’s (2000) review of
the extensive research on incentives, the use of a very small token of appreciation
(either $1.00 or $2.00 or a tangible gift like a pen with the name of the organization) is
second only to multiple contacts for increasing response rates. The incentive should be
small, and it should be sent to all respondents with the first mailing of the survey.
Dillman (2000) reports that several experiments on incentive size found that $2.00 is
optimal and yields a 19% to 31% greater return rate; $1.00 is almost as effective
(yielding about 5%-7% fewer returns). Cash appears to be superior and more cost
effective than checks. Material incentives seem to yield about half the increase in
returns of cash (10% - 15% increase in returns), but they may be more appropriate for
surveys from some nonprofit organizations. A chance to participate in lotteries, prize
drawings, and the like has little incentive effect.
4. Include an I.D. number. Place a unique identifying number in the upper right hand
corner of the front of each questionnaire. Each person sampled should be assigned a
unique number, and their name and number should be recorded. The number will be
used to determine whether follow-ups are needed. Neither the name nor the number
will appear in any reports or analyses. The number can be placed on the survey by
running the front pages through an address printer, using a clear address label, or
stamping each survey.
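The bookkeeping behind this step is simple: compare the set of I.D. numbers on returned surveys against the full sample roster to see who still needs a follow-up. The names and numbers below are, of course, hypothetical:

```python
# Hypothetical sample roster: assigned I.D. number -> respondent name.
sample = {101: "A. Lee", 102: "B. Ortiz", 103: "C. Kim", 104: "D. Shaw"}
returned_ids = {102, 104}  # I.D.s read off surveys as they come back

def follow_up_list(roster, returned):
    """Names of sampled respondents whose surveys have not come back."""
    return sorted(name for sid, name in roster.items() if sid not in returned)

print(follow_up_list(sample, returned_ids))  # -> ['A. Lee', 'C. Kim']
```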
5. Mailing and packaging the survey. Use standard size envelopes. The benefits of using
standard postage (either stamps or metered) rather than bulk mailing rates appear to
outweigh the cost benefit because it helps distinguish the survey from “junk mail” or
marketing ploys. Letterhead stationery is critical because it helps distinguish the survey
from marketing gimmicks and “junk mail”. Do not use special stamps that say things
like “Important materials are included”, “Your personal response is required”, and the
like. Doing so makes your mailing look like a commercial advertisement or marketing
gimmick.
6. Send a third contact postcard about two weeks later to all respondents. The postcard
can thank those who have already mailed their survey and jog the memory of those
who forgot to do so. This mailing should increase the return rate by an additional eight
percent (Dillman, 2000: 181).
7. Send a letter and replacement questionnaire about four weeks after the first
mailing to only those who have not responded. This letter should be similar to the
first letter, but the tone can be somewhat stronger. If you have received questions from
respondents who were confused by part of the survey or comments about the survey,
you may wish to include further directions or clarifications in this letter. This mailing
can add as much as 30% to the return rate.
8. Initiate a fifth contact by telephone or certified mail. A certified mail contact may add
an additional 12% - 15%.
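These return-rate figures matter for planning sample sizes. A back-of-the-envelope calculation, assuming the rates quoted above, shows how many surveys must go out to net a target number of completed returns:

```python
import math

def mailings_needed(target_returns, expected_rate):
    """Surveys to mail so that the expected number returned hits the target."""
    return math.ceil(target_returns / expected_rate)

print(mailings_needed(300, 0.25))  # one-shot mailing at ~25% -> 1200
print(mailings_needed(300, 0.65))  # full multi-contact protocol -> 462
```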
MODIFICATIONS FOR E-MAIL AND INTERNET SURVEYS
E-Mail questionnaires are adequate only for very brief surveys involving only a few
questions. E-mail messages do not necessarily appear on the respondents’ computer screens in
the same format as the researcher intended because there is very little standardization in E-mail
programs, and a lot of the formatting of messages (like line length, text color, and even the font)
is controlled by the receiver through preferences and settings. As a result, E-mail surveys must
be very simple and brief.
Web surveys have the advantage of greater control over formatting, but the respondents do
not receive the survey; they must select a link in the E-mail message or type a URL into their
browser. Just getting respondents to make the initial step of opening up their browser and
opening the survey web page is often a challenge. Many respondents simply hit the delete key
after reading the E-mail request to complete a web survey, and many others will not complete the
survey after going to the web site.
The coverage problem. E-Mail and Internet surveys appear to be a cost effective choice for
survey studies, but they present significant problems for obtaining adequate return rates. Dillman
argues that E-mail and internet survey return rates depend on several factors. The most obvious
is “coverage” or access to computers, internet connections, and software. While some internet
surveys obtain thousands of responses, these large samples are inferior to much smaller samples
obtained in other ways because they are not at all representative of the population being studied.
An internet survey of organization members is a good choice only if all members of the
organization have equal and constant access to computers. For example, at universities, almost
all of the administrators, faculty, and secretarial/clerical workers have computers on their desks,
but most of the maintenance workers, custodial staff, security staff, and grounds keepers have, at
best, shared and infrequent access to a computer. Generally, response rates vary greatly
depending on access. As a result, any internet or E-mail survey of university workers would
generate a biased sample, and would be a very poor choice for a study of all employees.
The coverage problem is even greater for studies of the general population. While more and
more households are obtaining computers, the quality and age of those computers vary greatly.
Furthermore, many households still lack internet access, and a large percentage of those that do
have internet access still rely on dial-up connections. Some groups (like low income families,
the elderly, etc.) have significantly lower levels of computer and internet access, so any study of
the general population will inevitably and systematically under-sample some important groups.
A related problem is the effect of computer literacy. If all members of an organization have
access to computers and use them regularly, they will probably have the skills needed to
successfully complete an internet survey. But if some members of the organization rarely use
computers and the internet, their responses are much more likely to be thrown out of the study
because of errors.
Dillman’s Design Principles for E-Mail and Internet Surveys
Dillman has recently reviewed the research on E-mail and internet surveys. Summarizing
Dillman’s work is beyond the scope of this brief introduction. If you plan on designing an E-mail
or internet survey, try to obtain a copy of Dillman, Don A. 2007. Mail and Internet Surveys:
The Tailored Design Method. Hoboken, NJ: Wiley. If you plan on actually conducting an
E-mail or internet study at some future date, I strongly recommend getting your own copy of this
book. The principles below are quoted from Dillman to give you an idea of the issues that must
be considered when designing an internet or E-mail survey, but this quick listing only scratches
the surface.
Dillman’s Nine Principles for E-Mail Survey Design. The following principles address the
major considerations in designing a survey sent either as the body of an E-mail or as an
attachment.
1. Utilize a multiple contact strategy much like that used for regular mail surveys. [See
the discussion of mailed surveys for more details.]
2. Personalize all e-mail contacts so that none are part of a mass mailing that reveals
either multiple recipient addresses or a listserv origin. [This principle is an important
ethical standard for maintaining confidentiality.]
3. Keep the cover letter brief to enable respondents to get to the first question without
having to scroll down the page. [The computer screen usually shows only a part of a
standard page, and respondents often neglect to scroll down.]
4. Inform the respondents of alternative ways to respond such as printing and sending
back their response.
5. Include a replacement questionnaire with the reminder message.
6. Limit the column width of the questionnaire to about 70 characters in order to
decrease the likelihood of wrap-around text.
7. Begin with an interesting but simple to answer question. [See the discussion of mailed
surveys for more details.]
8. Ask respondents to place X’s inside brackets to indicate their answers. [Brackets and
X’s are easy to type and familiar, and they appear the same in all fonts.]
9. Consider limiting scale lengths and making other accommodations to the limitations
of e-mail to facilitate mixed-mode comparisons when comparisons with other modes
will be made. [In other words, if respondents will be able to print and mail a survey, make
sure the mail survey and E-mail survey will be formatted identically, and in doing so,
adjust the printable survey to the limitations of E-mail surveys.]
Dillman’s Principles of Web Questionnaire Design. Again, this listing of principles from
Dillman’s book is no substitute for his writing, but it should help you evaluate web
questionnaires.
1. Introduce the web questionnaire with a welcome screen that is motivational,
emphasizes ease of responding, and instructs respondents about how to proceed to
the next page. [Getting respondents to go beyond the first page seems to be a critical part
of the process. If the first page is poorly designed, many respondents will simply close
their browsers.]
2. Choose for the first question an item that is likely to be interesting to most
respondents, easily answered, and fully visible on the welcome screen of the
questionnaire. [Remember, most screens do not show a complete page.]
3. Present each question in a conventional format similar to that normally used on
paper self-administered questionnaires. [Many web questionnaires use quirky formats
like centering the items {flush left is what people are used to seeing in paper
questionnaires}, and these nonstandard formats just make the questionnaire harder to read
because most people begin reading at the upper left corner of a page.]
4. Avoid heavy use of color. [Many web designers like to use a lot of color, but the studies
of web survey construction suggest that heavy use of color increases errors. Part of the
problem is that computers do not all render colors in the same way, so what looks good on
the developer’s screen may be unreadable on a respondent’s computer.]
5. Avoid differences in the visual appearance of questions that result from different
screen configurations, operating systems, browsers, partial screen displays, and
wrap-around text. [What looks readable on a 17 inch monitor may be difficult to read on
a 14 inch monitor; what looks good on a PC with Explorer may be unreadable on a Mac
with Safari. Little research has been done on the effects of such distortions, but experience
with paper surveys tells us that formatting differences can influence error rates and
response rates.]
6. Provide specific instructions on how to take each necessary computer action for
responding to the questionnaire, and give other necessary instructions at the point
where they are needed. [For example, don’t assume all respondents will recognize and
know how to use drop-down lists. Also, repeat instructions at each point they are needed,
don’t force the respondent to scroll back to an earlier screen to figure out how to respond.]
7. Use drop-down boxes sparingly, consider the mode implications and identify each
with a “click here” instruction. [It's tempting to use a lot of drop-down boxes because
you can hide a lot of response choices behind them {that is why they are used for selecting
one's state on mail-order forms}. However, respondents may not read through the
drop-down list of choices with the same accuracy as they would if the entire list were displayed.]
8. Do not require the respondent to provide an answer to each question before being
allowed to answer any subsequent ones. [Forcing respondents to answer every question
has serious ethical problems, and often generates a lot of frustration. Frustrated
respondents are much less likely to complete the questionnaire.]
9. Provide skip directions in a way that encourages marking of answers and being able
to click to the next applicable question. [Skip directions are directions to skip over some
items depending on how one answers an item {e.g., "if yes, go to #25; if no, go to #35"}.
Studies of people completing web questionnaires have found that some respondents
completing a survey with skip directions have a tendency to click on the “click here” box
before answering the question.]
10. Construct web questionnaires so they scroll from question to question. [Some web
surveys have one question per screen, and the respondent must jump from one screen to
another to complete the questionnaire. Studies have revealed that respondents prefer to
be able to scroll back and forth through the questionnaire, and doing so helps them
maintain a sense of context and helps them concentrate.]
11. Keep all response choices to a question on the same screen as the question.
[Respondents can make better choices if they can see all of the choices at one glance
without moving to another screen.]
12. Keep respondents informed of where they are in the completion process. [When
respondents do not know how much of the survey they have completed, some will quit
quit even though they have only a few more questions to answer.]
13. Avoid question structures that have known measurement problems on paper
questionnaires. [Check all that apply instructions, ranking more than a few items, and
using lots of open-ended questions all cause problems. Dillman has found that
respondents do complete open-ended items more frequently on web surveys than paper
surveys, and their answers are often more complete. However, because these responses
are not standardized, interpreting the meaning of responses to open-ended questions can
be difficult or impossible. In general, open-ended questions should only be used in faceto-face interviews that allow the researcher to ask for clarification and expansion of
answers.]
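Guideline 9's skip directions amount to simple branching logic. The following is a minimal sketch of how that routing can be expressed; the question numbers and answer values are hypothetical, following the "if yes, go to #25; if no, go to #35" pattern mentioned above:

```python
# Sketch of skip-direction routing for a questionnaire.
# Question numbers and answers below are hypothetical illustrations.

def next_question(current: int, answer: str) -> int:
    """Return the number of the next applicable question."""
    skip_rules = {
        # question number: {answer: question to jump to}
        24: {"yes": 25, "no": 35},
    }
    rules = skip_rules.get(current)
    if rules and answer in rules:
        return rules[answer]
    return current + 1  # default: proceed to the next question in order

print(next_question(24, "yes"))  # 25
print(next_question(24, "no"))   # 35
print(next_question(10, "yes"))  # 11 (no skip rule applies)
```

A web survey implemented this way can mark the answer first and only then move the respondent to the next applicable question, which is exactly the ordering the guideline asks for.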
SOME SAMPLE LETTERS
These letters were developed to provide my clients with examples of letters to be sent by mail. If
you plan to use a web or E-mail survey, you should send them by E-mail to make sure they go to
the person who will be completing the survey.
The Prenotice Letter
This letter should be written on letterhead under the signature of a person in authority;
ideally someone who is well known (by reputation or personally) to most of the respondents. The
letter should be no more than one page. The letter should appear to be a personal contact. Try to
avoid the appearance of a mass-mailed form letter by including a personal inside address and a
signature in a contrasting color.
Sample letter adapted from Dillman, 2000.
November 1, 2008
William Johnson
509 Standard Avenue
Spokane, WA 99999
Dear William:
A few days from now you will receive in the mail a request to fill out a brief
questionnaire for an important study about student life at Gonzaga.
It concerns the experiences of students while attending Gonzaga, and how they feel about
their experiences at G.U.
I am writing in advance because we have found many people like to know ahead of time
that they are being contacted. The study will help Student Life better understand the
experiences of G. U. students and whether or not their needs are being met.
Thank you for your time and consideration. It is only with the generous help of students
like you that our research can be successful.
Sincerely,
Mary Smith
Director
Project REAL
P.S. We will be enclosing a small token of appreciation with the questionnaire as a
way of saying thanks.
The First Questionnaire Mailing
A cover letter should accompany the first survey. Again, it should be personalized as
much as possible, it should be on letterhead, it should be personally signed with the signature
in a contrasting ink, and the envelope should have a first class stamp or be metered as first
class. Do not include anything that makes it look like a marketing gimmick such as a stamp
on the envelope saying “urgent”.
Here’s a sample cover letter (adapted from Dillman, 2000):
November 3, 2008
William Johnson
509 Standard Avenue
Spokane, WA 99999
Dear William:
I am writing to ask your help in a study of G.U. students. We are contacting a random sample of
students to ask them to rate their experiences since they joined the Gonzaga community.
Results of the survey will be used to help Student Life and other campus organizations make Gonzaga
a better place for students like you. By understanding the positive and negative experiences of
students like you, we can do a better job meeting your needs. We will also be comparing the results of
this survey to the results of similar surveys conducted at other universities.
Your answers will be completely confidential and will be reported only as summaries in which no
individual answers can be identified. When you return the completed questionnaire, your name will be
deleted from the mailing list and never connected to your answers in any way. The survey is
voluntary. However, you can help us very much by taking a few minutes to share your experiences and
opinions about life at G.U. If for some reason you prefer not to respond, please let us know by
returning the blank questionnaire in the enclosed stamped envelope.
We have enclosed a small token of appreciation as a way of saying thanks for your help.
If you have any questions or comments about this study, we would be happy to talk with you. Our
number is 123-4567, or you can write to us at the address on the letterhead.
Thank you for helping us with this important study.
Sincerely,
Mary Smith
Director
The Follow-Up Postcard (or E-Mail for Internet Surveys)
Studies have found a follow-up postcard increases returns by about 15%-20%. The idea is to
get the postcard to respondents while the survey may still be on their desks, but not so soon as to
appear to be "pushy". Most recommend sending the follow-up postcard reminder one to two
weeks after the survey is mailed. The postcard might include a message like the following. Send
the postcard to all respondents.
Last week a questionnaire seeking your opinion about the student life at G.U. was mailed to you.
Your name was drawn in a random sample of Gonzaga students.
If you have already completed and returned it to us, please accept our thanks. If not, please do so
today. Because it has been sent to a small, but representative, sample of Gonzaga students it is
extremely important that yours also be included in the study if the results are to accurately reflect the
opinions of all G.U. students.
If by some chance you did not receive the questionnaire or it got misplaced, please call me right now
at (___-___-____). I will get another one in the mail to you today.
Sincerely,
Mary Smith
Project Director
The Second Follow-Up Letter (Or E-mail for Internet Surveys)
Most experts recommend sending a second follow-up letter with a replacement
questionnaire and postage paid return envelope four weeks after the initial mailing. The second
letter and questionnaire may increase returns by as much as 30%. The second follow-up letter is
similar to the cover letter, but it has a more urgent tone. The respondent is reminded of the
usefulness of the survey and the importance of including their opinions to make the results
representative. Care must be taken to avoid entering two surveys from the same person into the
data set. As long as the I.D. number is included on each questionnaire and it is entered with the
data, a quick check can be made by the data entry people to prevent entering two surveys from
the same person. A follow-up letter with a second questionnaire may add up to an additional
30% of the original sample.
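The duplicate check described above is easy to automate at data entry, as long as the I.D. number is entered with each record. Here is a minimal sketch; the function and variable names are hypothetical:

```python
# Sketch of the duplicate check described above: each returned
# questionnaire carries an I.D. number, and data entry refuses a
# second record with the same I.D. (Names here are hypothetical.)

def enter_survey(dataset: dict, survey_id: int, responses: list) -> bool:
    """Add a returned survey unless that I.D. is already entered."""
    if survey_id in dataset:
        return False  # a second copy from the same respondent: skip it
    dataset[survey_id] = responses
    return True

data = {}
print(enter_survey(data, 101, ["yes", "no"]))  # True: first copy entered
print(enter_survey(data, 101, ["no", "yes"]))  # False: duplicate rejected
```

The same check works whether the duplicate arrives because the respondent answered both the original and the replacement questionnaire, or because one form was accidentally entered twice.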
A Third Follow-Up Letter (or E-mail for Internet Surveys)?
Dillman recommends a third follow-up letter sent by certified mail. This letter is similar to
the first cover letter, but it sets a deadline for receipt of the returned questionnaire. Respondents
are informed that unless their questionnaire is received by the deadline, their opinions cannot be
included in the study. Other researchers telephone the respondent and try to administer the
survey over the telephone. This (and still later follow-ups) yield diminishing returns, but some
have been able to obtain as much as another 10%-15% of the sample with these extra efforts.
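Putting the figures from the sections above together gives a rough sense of the payoff of the full follow-up sequence. The sketch below applies the upper bounds cited in the text to a hypothetical mailing of 1,000 questionnaires; the 30% initial return rate is an assumption for illustration only, not a figure from the text:

```python
# Back-of-the-envelope arithmetic for the cumulative effect of the
# follow-up sequence, for a hypothetical mailing of 1,000 surveys.
# The initial 30% return rate is an assumed figure for illustration;
# the follow-up gains use the upper bounds cited in the text.

sample = 1000
returns = 0.30 * sample    # assumed initial return (not from the text)
returns += 0.20 * sample   # follow-up postcard: ~15%-20% more
returns += 0.30 * sample   # second questionnaire: up to ~30% more
returns += 0.15 * sample   # certified letter / phone: ~10%-15% more

print(f"{returns:.0f} of {sample} ({returns / sample:.0%})")  # 950 of 1000 (95%)
```

Actual results vary widely with the population and topic, but the arithmetic shows why each follow-up is worth the effort: the later mailings together can contribute more returns than the initial mailing alone.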