- NIILM University

Research Methods for Library Science
Subject: RESEARCH METHODS FOR LIBRARY SCIENCE
Credits: 4
SYLLABUS
Meaning of Research; Objectives of Research; Types of Research; Research Approaches;
Significance of Research; Research and Scientific Method; Importance of knowing how
Research is done; Research Process; Problems Encountered by Researchers in India; Meaning of
Research Design; Need for Research Design; Important Concept Relating to Research Design;
Different Research Designs; Basic Principles of Experimental Designs; Developing a Research
Plan.
Need for Sampling; Important Sampling Distributions; Sampling Theory; Interpretation; Why
Interpretation; Techniques of Interpretations; Precaution in Interpretation; Report Writing;
Interviewing Techniques; Understanding Surveys; Questionnaire Design; Receiving Completed
Questionnaires; Data Gathering and Analysis Techniques; Collection of Data; Evaluate and
Analyze the Data.
Content Analysis: Analysis and Size; Questioning the Content; Qualitative and Quantitative
Analysis; Anatomy of an on-line Focus Group; Affinity Groups; Internet Audience Research
Analyzing Online Discussions: Ethics; Data and Interpretation; Reporting the Findings.
Suggested Readings:
1. Media and Communication Research Methods: An Introduction to Qualitative and
Quantitative Methods; Arthur Asa Berger; Sage Publications.
2. Mass Media Research: An Introduction; Roger D. Wimmer; Joseph R. Dominick;
CengageBrain.com
3. Media Research Techniques; Arthur Asa Berger; Sage Publications.
CONTENT
Lesson 1: Introduction - What is Research
Lesson 2: Research Design
Lesson 3: Research Planning
Lesson 4: Audience Research etc. - I
Lesson 5: Audience Research etc. - II
Lesson 6: Sampling
Lesson 7: Sampling - Multi-stage Sampling
Lesson 8: Questionnaires
Lesson 9: Question Formats
Lesson 10: Fieldwork
Lesson 11: Interview - Prepare Interviewer Instructions
Lesson 12: Fieldwork
Lesson 13: Interviewing
Lesson 14: Survey
Lesson 15: Survey - Telephonic Survey
Lesson 16: Surveys - Mail Survey
Lesson 17: Survey - Survey Presentation
Lesson 18: Checking and Editing Surveys
Lesson 19: Case Study
Lesson 20: Content Analysis
Lesson 21: Content Analysis - Analysis and Size
Lesson 22: Content Analysis - Questioning the Content
Lesson 23: “Qualitative” and “Quantitative”
Lesson 24: Anatomy of an On-line Focus Group
Lesson 25: Group Discussions
Lesson 26: Affinity Groups
Lesson 27: Internet Audience Research
Lesson 28: Analyzing Online Discussions: Ethics, Data and Interpretation
Lesson 29: Reporting the Findings
Lesson 30: Reporting the Findings - The Form of TV Programs
Lesson 31: Copyright Act, 1957
Lesson 32: Using Research Well
Lesson 33: Are All Polls Valid?
LESSON 1:
INTRODUCTION - WHAT IS RESEARCH
Topics Covered
Meaning, Nature, Objectives, Significance, Importance,
Overview
Objectives
Upon completion of this Lesson, you should be able to:
• Understand what research is
• Identify the Objectives and Functions of Research
• Identify the Significance and importance of research
We will first ask what research is and whether you ought to be
doing it anyway. After this, and forming the bulk of the
document, we shall look at the process of research itself.
Pause for a moment and think of the word ‘research’ - what
images come into your mind? Don’t try to define it, just think
about what it means to you. Write down a few ideas below.
Look at the two dictionary definitions above. Which one, if
either, is closest to your images of research? The first is more
lofty - looking for totally new knowledge. The Oxford
definition also includes the collation of existing knowledge.
The image which immediately springs to my mind when I
think of research (although not what I do) is of Brains on
Thunderbirds - the white-coated scientist bent over a bubbling
test tube. This fits well the Chambers definition. In contrast,
the ‘collate old facts’ definition suggests heads bent over old
manuscripts in the British Library Reading Room.
These are just two types of research; we’ll consider a few more
ideas, but you may have thought of something different again.
The first of this list, the scientist, is perhaps the archetypal
image of experimental research. Of course, the white-coated
technician is less well respected today than in the late 60s when
Thunderbirds was first screened. A more acceptable modern
image might be the botanist in the Amazonian rain forest,
observing and discovering new creatures before they fall under
the axe and are consumed by fire. Behavioural and experimental
psychologists would also fall under this general heading.
The social scientist’s methods are different from those of the
laboratory (although quite similar to the botanist’s). They
include interviews, observation and the collection of data, all of
which will be needed at the very least during your requirements
elicitation from your client. Notice how the world of the social
scientist is far less controlled than that of the laboratory or even
the botanist. You can put a beetle in a ‘bug box’ and examine it,
but social situations collapse when dissected too closely. The
ecologist has similar problems.
Historical research corresponds to the British Library image:
reading original sources and other people’s books and papers.
Of course it does not stop there. The aim of the historian is to
understand historical processes as well as to record them. One
of the key things a historian has to recognise is that all the
sources are biased - written from a particular perspective by a
particular person for some particular purpose. You will be faced
with similar issues, albeit from modern sources.
Journalists operate in a somewhat similar fashion. They do not
expect to generate new knowledge (although they may
occasionally generate fiction). Instead, they cull from existing
sources: people, books, other newspaper articles etc., in order to
write about their chosen subject. Note the range of sources they
must draw on. Also note that they will not attempt to
thoroughly understand the subject they are writing about, nor
do they attempt total coverage of the area. They have a goal, to
write an article, and they find out just enough to do that job.
The academic must take a deeper and wider perspective than this,
but do not underestimate the skill of the journalists. When
some event happens they have to find out enough within a few
hours to be able to write cogently about it.
Finally we have industrial Research and Development. What is
the research element in it? Well, some firms do have ‘blue skies’
research laboratories whose job is to find exciting new things,
rather like (but better resourced than) a university research
atmosphere. However, most do not have sufficient spare
resources for this sort of enterprise. Instead, the job of the
commercial researcher is to draw on existing knowledge and
bring it to bear on a particular problem. This may involve
creating something new, but usually by some adaptation of an
existing solution. Like the journalist, the industrial researcher is
very goal directed, but has to do more. The journalist merely
has to gather enough information to write the article; the
industrial researcher must understand the information in order
to apply it to a new situation, that is, the product under
development.
So, given these definitions of research and examples of
researchers, should you be doing research in your project, or
should that be left to the PhD students and academics? And, if
you should be doing research, which of the above types should
it be?
Let’s think about your project in terms of some ‘I’ words. First
it should be integrative, bringing together knowledge from
different areas. In most of your courses you consider some
particular aspect of computing, business or mathematics. In
your project you must use knowledge from a variety of areas
and sources. Some things you may already know from a course
you have done. Other things might need further investigation.
The project is an independent piece of work which is interesting
to you. As it is YOU doing the work, you are only expected to
produce what is reasonable for a final-year undergraduate.
However, as it is an academic project, part of an honours degree,
it must also be intellectually challenging. Again, not the minimal
solution to a problem, but one that involves significant
academic work beforehand and analysis of your results after.
One would hope that this will also contribute to making the
project interesting.
To be integrative and intellectually challenging the project must
clearly involve research in the sense of ‘collating old facts’. That
is, aspects of the British Library image combined with the
focused attitude of industrial R & D.
Clearly, the R & D situation is closest to your project, as you too
have a client and a product to produce. However, the situation
is not identical. Your aim is not only to produce a product, but
also to obtain a degree. Although your time may hardly seem
leisurely, you do in fact have more ‘leisure’ to reflect upon the
work you are doing, taking a more academic angle. In particular,
this might mean being somewhat broader in your searches for
information and considering more alternatives to a problem,
even after you have found a solution which works.
But, the crunch: should the project be innovative - breaking new
ground, extending the sum of human knowledge, generating
new and novel solutions? Well, it would obviously be nice to
develop some new algorithm or discover some new fact about
IT, and the best projects will involve some level of innovation,
but this is an undergraduate and not a research degree, so it is
not necessary. On the other hand, it would be hard to apply
even standard techniques to a new problem without there being
something novel about it. Every situation is slightly different
and you will have to use a level of ingenuity (another ‘I’ word)
in dealing with it.
So, given you should be doing some research, how do you go
about it? You all know the old saying: “you should learn from
your mistakes”. Indeed, this will be an important part of your
final report. You will have to reflect upon what did and did not
work. You will be expected to diagnose your problems and
learn from them. However, you do not have enough time to
make too many mistakes, so you should avoid as many as
possible.
How do you avoid making your own mistakes? Well, although
it is good to learn from your own mistakes, it is shrewder to
learn from other people’s mistakes. Find out what other people
have done right and done wrong before making the same
mistakes, or even working hard to discover the same good
things.
To do this you must study other people’s work before
embarking on your own - that is, more research. But it is not all
of the British Library kind. You obviously need to read what
other people have written: books, academic papers, newspaper
articles etc. In addition you need to consider what they say in
interviews, discussions etc. Finally, examine what they make - in
your context primarily software, but also possibly organisational
structures, paper-based systems etc.
Assignments
1. The task of defining the research problem often follows a
sequential pattern. Explain.
2. What do you mean by research? Explain its significance in
modern times.
LESSON 2:
RESEARCH DESIGN
Topics Covered
Research Design, importance, principles, Approaches
Objectives
Upon completion of this Lesson, you should be able to:
• Understand what research design is
• Identify the importance of research design
• Know the principles of research design
• Understand the different research approaches
Scientific Model
The objective of science is to explain reality in such a fashion
that others may develop their own conclusions based on the
evidence presented. The goal of this handbook is to help you
learn how to conduct a systematic approach to understanding
the world around us that employs specific rules of inquiry; what
is known as the scientific model.
The scientific model helps us create research that is quantifiable
(measured in some fashion), verifiable (others can substantiate
our findings), replicable (others can repeat the study), and
defensible (provides results that are credible to others—this
does not mean others have to agree with the results). For many
the scientific model may seem too complex to follow, but it is
often used in everyday life and should be evident in any research
report, paper, or published manuscript. The corollaries of
common sense and proper paper format with the scientific
model are given below.
Corollaries among the Scientific Model, Common Sense, and Paper Format

Scientific Model       Common Sense           Paper Format
Research Question      Why                    Intro
Develop a theory       Your answer            Intro
Identify variables     How                    Method
Identify hypotheses    Expectations           Method
Test the hypotheses    Collect/analyze data   Results
Evaluate the results   What it means          Conclusion
Critical review        What it doesn't mean   Conclusion
Overview of First Four Elements of the
Scientific Model
The following discussion provides a very brief introduction to
the first four elements of the scientific model.
1. Research Question
The research question should be a clear statement about what
you intend to investigate. It should be specified before research
is conducted and openly stated in reporting the results. One
conventional approach is to put the research question in writing
in the introduction of a report, starting with the phrase “The
purpose of this study is . . .”. This approach forces the
researcher to:
a. identify the research objective (allows others to benchmark
how well the study design answers the primary goal of the
research)
b. identify key abstract concepts involved in the research
Abstract concepts: The starting point for measurement.
Abstract concepts are best understood as general ideas in
linguistic form that help us describe reality. They range from the
simple (hot, long, heavy, fast) to the more difficult (responsive,
effective, fair). Abstract concepts should be evident in the
research question and/or purpose statement. An example of a
research question is given below along with how it might be
reflected in a purpose statement.
Research Question: Is the quality of public sector and private
sector employees different?
Purpose statement: The purpose of this study is to determine
if the quality of public and private sector employees is different.
2. Develop Theory
A theory is one or more propositions that suggest why an
event occurs. It is our view or explanation for how the world
works. These propositions provide a framework for further
analysis that are developed as a non-normative explanation for
“What is” not “What should be.” A theory should have logical
integrity and includes assumptions that are based on paradigms. These paradigms are the larger frame of contemporary
understanding shared by the profession and/or scientific
community and are part of the core set of assumptions from
which we may be basing our inquiry.
3. Identify Variables
Variables are measurable abstract concepts that help us describe
relationships. This measuring of abstract concepts is referred to
as operationalization. In the previous research question “Is the
quality of public sector and private sector employees different?”
the key abstract concepts are employee quality and employment
sector. To measure “quality” we need to identify and develop a
measurable representation of employee quality. Possible quality
variables could be performance on a standardized intelligence
test, attendance, performance evaluations, etc. The variable for
employment sector seems to be fairly self-evident, but a good
researcher must be very clear on how they define and measure
the concepts of public and private sector employment.
Variables represent empirical indicators of an abstract concept.
However, we must always assume there will be incomplete
congruence between our measure and the abstract concept. Put
simply, our measurement has an error component. It is unlikely
to measure all aspects of an abstract concept and can best be
understood by the following:
Abstract concept = indicator + error

Because there is always error in our measurement, multiple
measures/indicators of one abstract concept are felt to be better
(more valid and reliable) than one. As shown below, one would
expect that as more valid indicators of an abstract concept are
used, the effect of the error term would decline:

Abstract concept = indicator1 + indicator2 + indicator3 + error
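
The claim that adding indicators shrinks the error term can be checked with a short simulation. The following is a minimal sketch (not part of the original text): it invents a "true" concept score and three noisy indicators, then compares how well a single indicator and the average of all three track the concept.

import numpy as np

rng = np.random.default_rng(42)
n = 1000

# The "true" value of the abstract concept (unobservable in practice).
concept = rng.normal(0, 1, n)

# Three imperfect indicators: each equals the concept plus independent error.
indicators = [concept + rng.normal(0, 1, n) for _ in range(3)]

single = indicators[0]                   # one indicator
combined = np.mean(indicators, axis=0)   # three indicators combined

print("correlation with concept, one indicator :",
      round(np.corrcoef(concept, single)[0, 1], 3))
print("correlation with concept, three combined:",
      round(np.corrcoef(concept, combined)[0, 1], 3))
# Averaging shrinks the independent error components, so the combined
# measure tracks the abstract concept more closely.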
Levels of Data
There are four levels of variables. These levels are listed below in
order of their precision. It is essential to be able to identify the
levels of data used in a research design. They are directly
associated with determining which statistical methods are most
appropriate for testing research hypotheses.
Nominal: Classifies objects by type or characteristic (sex, race,
models of vehicles, political jurisdictions)
Properties
1. categories are mutually exclusive (an object or characteristic
can only be contained in one category of a variable)
2. no logical order
Ordinal: classifies objects by type or kind but also has some
logical order (military rank, letter grades)
Properties
1. categories are mutually exclusive
2. logical order exists
3. scaled according to amount of a particular characteristic they
possess
Interval: classified by type, logical order, but also requires that
differences between levels of a category are equal (temperature in
degrees Celsius, distance in kilometers, age in years)
Properties:
1. categories are mutually exclusive
2. logical order exists
3. scaled according to amount of a particular characteristic they
possess
4. differences between each level are equal
5. no true zero starting point
Ratio: same as interval but has a true zero starting point
(income, education, exam score). Identical to an interval-level
scale except ratio level data begin with the option of total
absence of the characteristic. For most purposes, we assume
interval/ratio are the same.
The following table provides examples of variable types:

Variable       Level
Country        Nominal
Letter Grade   Ordinal
Age            Ratio
Temperature    Interval
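
Because the level of data determines which statistics are appropriate, it can help to make the mapping explicit. The sketch below simply encodes the conventional pairings (mode for nominal, median for ordinal, mean for interval/ratio); these are the usual textbook conventions, not rules stated in this lesson.

# Conventional pairing of measurement levels with summary statistics.
# Examples are taken from the table above.
LEVEL_STATS = {
    "nominal":  {"example": "country",      "centre": "mode"},
    "ordinal":  {"example": "letter grade", "centre": "median"},
    "interval": {"example": "temperature",  "centre": "mean"},
    "ratio":    {"example": "age",          "centre": "mean"},
}

for level, info in LEVEL_STATS.items():
    print(f"{level:8s} (e.g. {info['example']}): use the {info['centre']}")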
Reliability and Validity
The accuracy of our measurements is affected by reliability and
validity. Reliability is the extent to which the repeated use of a
measure obtains the same values when no change has occurred
(can be evaluated empirically). Validity is the extent to which
the operationalized variable accurately represents the abstract
concept it intends to measure (cannot be confirmed empirically;
it will always be in question). Reliability problems affect all
studies and are very much a part of any methodology/
operationalization of concepts. As an example, reliability can
depend on who performs the measurement (i.e., subjective
measures) and when, where, and how data are collected (from
whom, written, verbal, time of day, season, current public
events).
There are several different conceptualizations of validity.
Predictive validity refers to the ability of an indicator to
correctly predict (or correlate with) an outcome (e.g., GRE and
performance in graduate school). Content validity is the extent
to which the indicator reflects the full domain of interest (e.g.,
past grades only reflect one aspect of student quality). Construct
validity (correlational validity) is the degree to which one
measure correlates with other measures of the same abstract
concept (e.g., days late or absent from work may correlate with
performance ratings). Face validity evaluates whether the
indicator appears to measure the abstract concept (e.g., a
person’s religious preference is unlikely to be a valid indicator of
employee quality).
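
Test-retest reliability, one empirical check implied above, can be estimated by correlating two administrations of the same measure. A minimal sketch, assuming a stable trait measured twice with independent random error; the numbers are simulated for illustration only.

import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(50, 10, 200)   # stable trait, no real change

# Two administrations of the same measure, each with its own error.
time1 = true_score + rng.normal(0, 4, 200)
time2 = true_score + rng.normal(0, 4, 200)

# Test-retest reliability: correlation between repeated measurements.
reliability = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability estimate: {reliability:.2f}")
# Validity cannot be confirmed this way: a perfectly repeatable
# measure can still measure the wrong abstract concept.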
4. Identify Measurable Hypotheses
A hypothesis is a formal statement that presents the expected
relationship between an independent and dependent variable. A
dependent variable is a variable that contains variations for
which we seek an explanation. An independent variable is a
variable that is thought to affect (cause) variations in the
dependent variable. This causation is implied when we have
statistically significant associations between an independent and
dependent variable but it can never be empirically proven: Proof
is always an exercise in rational inference.
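
The employee-quality research question from earlier can be turned into a measurable hypothesis and tested. A minimal sketch using an independent-samples t-test; the quality scores below are simulated stand-ins, not real data, and a significant result shows association, not causation.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical operationalization: performance-evaluation scores.
# IV: employment sector; DV: quality score.
public_scores = rng.normal(72, 8, 60)
private_scores = rng.normal(75, 8, 60)

# H0: mean quality is the same in both sectors.
t_stat, p_value = stats.ttest_ind(public_scores, private_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value is evidence of a statistically significant
# association; claiming causation still requires theory and logic.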
Association
Statistical techniques are used to explore connections between
independent and dependent variables. This connection between
or among variables is often referred to as association. Association is also known as covariation and can be defined as
measurable changes in one variable that occur concurrently with
changes in another variable. A positive association is represented by change in the same direction (income rises with
education level). Negative association is represented by
concurrent change in opposite directions (hours spent exercising
and % body fat). Spurious associations are associations
between two variables that can be better explained by a third
variable. As an example, if after taking cold medication for
seven days the symptoms disappear, one might assume the
medication cured the illness. Most of us, however, would
probably agree that the change experienced in cold symptoms
are probably better explained by the passage of time rather than
pharmacological effect (i.e., the cold would resolve itself in
seven days regardless of whether the medication was taken or
not).
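
Spurious association is easy to demonstrate numerically. In the sketch below (a simulation, not from the text), a third variable drives both x and y, so they correlate even though neither causes the other; a partial correlation that controls for the third variable makes the association all but vanish.

import numpy as np

rng = np.random.default_rng(7)
n = 5000

z = rng.normal(0, 1, n)        # lurking third variable
x = z + rng.normal(0, 1, n)    # driven by z, not by y
y = z + rng.normal(0, 1, n)    # driven by z, not by x

r_xy = np.corrcoef(x, y)[0, 1]

# Partial correlation of x and y controlling for z: correlate the
# residuals left after regressing each variable on z.
res_x = x - np.polyval(np.polyfit(z, x, 1), z)
res_y = y - np.polyval(np.polyfit(z, y, 1), z)
r_xy_given_z = np.corrcoef(res_x, res_y)[0, 1]

print(f"raw association   r = {r_xy:.2f}")         # clearly nonzero
print(f"controlling for z r = {r_xy_given_z:.2f}")  # near zero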
Causation
There is a difference between determining association and
causation. Causation, often referred to as a relationship, cannot
be proven with statistics. Statistical techniques provide evidence
that a relationship exists through the use of significance testing
and strength of association metrics. However, this evidence
must be bolstered by an intellectual exercise that includes the
theoretical basis of the research and logical assertion. The
following presents the elements necessary for claiming causation:
External and Internal Validity
There are two types of study designs: experimental and quasi-experimental.
Experimental: The experimental design uses a control group
and applies treatment to a second group. It provides the
strongest evidence of causation through extensive controls and
random assignment to remove other differences between
groups. Using the evaluation of a job training program as an
example, one could carefully select and randomly assign two
groups of unemployed welfare recipients. One group would be
provided job training and the other would not. If the two
groups are similar in all other relevant characteristics, you could
assume any differences between the groups' employment one
year later was caused by job training.
Whenever you use an experimental design, both the internal
and external validity can become very important factors.
Internal validity: The extent to which accurate and unbiased
association between the IV and DVs were obtained in the study
group.
External validity: The extent to which the association between
the IV and DV is accurate and unbiased in populations outside
the study group.
Quasi-experimental: The quasi-experimental design does not
have the controls employed in an experimental design (most
social science research). Although internal validity is lower than
can be obtained with an experimental design, external validity is
generally better and a well designed study should allow for the
use of statistical controls to compensate for extraneous
variables.
Types of Quasi-experimental Design
1. Cross-sectional study: obtained at one point in time (most
surveys)
2. Case study: in-depth analysis of one entity, object, or event
3. Panel study: (cohort study) repeated cross-sectional studies
over time with the same participants
4. Trend study: tracking indicator variables over a period of
time (unemployment, crime, dropout rates)
LESSON 3:
RESEARCH PLANNING
Topics Covered
Audience research, Planning research, understanding audiences
Objectives
Upon completion of this Lesson, you should be able to:
• Identify the types of audience research
• Explain how research is done: an overview
• Plan a research project
• Discuss some findings about audiences
Planning Audience Research
Audience research is a systematic and accurate way of finding
out about your audience. There are two main things that
audience research can do:
1. estimate audience sizes, and
2. discover audience preferences.
Radio and TV stations are unique in having a special need for
audience research: this is the only industry that cannot accurately
count its audience. A factory will always count the number of
products it sells. A newspaper will (or could) always know its
paid circulation.
An organization that provides services rather than products (e.g.
a hospital) is able to accurately count the number of people
who walk through its doors. But radio and television programs
are given away free to their audiences, and there is no way of
measuring how many people tune into a program - without
audience research.
For this reason, audience research was one of the first forms of
market research. When radio became popular in rich countries in
the 1920s, audience research followed soon afterwards. In
countries where broadcasters depended on commercial revenue,
audience surveys were done to find out how many people
would hear a particular advertisement.
In countries with public radio, such as Britain and New
Zealand, audience research began in the 1930s, seeking
information from listeners. New Zealand’s first audience survey
was in 1932. Postcard questionnaires were sent out to
households with radio licenses, asking questions such as “Do
you listen on a crystal set or a valve set?” and “Do you dance to
broadcast dance music?”
Since those days, audience research has moved far beyond radio
and television. The current growth area is internet audience
research. And, though printed publications have readers rather
than audiences, the same methods apply.
Methods of Audience Research
The most common method of audience research is the survey:
a group of people is selected, they are all asked the same
questions, and their answers are counted. But as well as surveys,
there are many other methods of audience research, including
observation, mechanical measurement (people-meters) and
qualitative research.
Audience research methods can be applied to any activity with
audiences: not only radio and television stations, but also print
media, artistic activities, and (most recently) the internet. The
methods described in this book apply to all of these, as well as
to the study of societies (social research) and economic
behaviour (market research).
Audience Research, Social Research and
Market Research
Audience research, social research, and market research share a
common body of methods, with slight variations. So when
you know how to do audience research, you will also know how
to carry out many types of market research and social research.
Audience Research and Management Systems
The importance of feedback: for any activity to be carried out
well, some form of feedback is needed. Try walking with your
eyes shut, and you will soon bump into something. Even
without your thinking about it, the feedback from your eyes is
used to correct your steps. In the same way, any organization
that does not keep its eyes open is likely to meet with an
accident.
In the media industries, the equivalent to walking is
broadcasting the programs. The equivalent of watching where
you are going is audience research.
But when you are walking, you are doing more than simply
moving your legs and watching where you are going. You will
also have decided where you are walking to. Depending on what
you see, you will adjust your steps in the desired direction. And
of course, at any time you may change your direction of
walking. Whether the activity is walking or broadcasting, you can
draw a diagram of a “feedback loop”, like this:
[Feedback loop diagram not reproduced]
In recent years, the study of management methods has
produced a system known as “strategic management.” It
follows the principles shown in the above diagram. Notice the
bottom box, labelled “Get information on results of action”.
Audience research is part of that box.
The importance of knowing what you’re doing, and why you’re
doing it
The Logical Framework
The Logical Framework method (Log Frame for short) begins
by creating a hierarchy of goals. It works like this:
1. State the main goal that you want the project to accomplish.
For example, to eliminate malaria in a region.
2. Then consider what other goals will need to be achieved to
meet the first goal. In the case of the anti-malaria project, the
three objectives could be:
a. To encourage people to avoid being bitten by mosquitoes;
b. To make anti-malarial drugs readily available;
c. To eliminate malaria-carrying mosquitoes.
3. Now consider what needs to be achieved to meet each of
those goals… and so on. To continue the anti-malaria
example, the goals for 2a could include:
a1. Making anti-mosquito equipment widely available;
a2. Encouraging people to wear enough clothing at times
when mosquitoes are feeding;
a3. Advising people on how to avoid being bitten by
mosquitoes.
The process continues, adding more and more levels. The
highest levels are part of the initial plan. The lower levels are
activities rather than goals. At the lowest possible level, a worker
on the project might work towards a goal by visiting a particular
school on a particular day, and giving the teachers information
that could be used in lessons.
The whole structure can be drawn like a tree, with the single
main goal at the bottom, and each branch dividing into more
and more goals, objectives, strategies, aims, purposes, or
activities. No matter what these are labelled, they are all a type of
plan. (With the tree analogy, notice that the trunk is what you’d
call the highest level - it’s really an upside-down tree.)
This tree-like approach works well for a project with a very
specific goal, such as the anti-malaria campaign. But
organizations with audiences usually don’t have a single main
purpose. Many of them have several purposes, which are not
clearly defined: nothing as simple and as measurable as “reduce
the level of malaria in this region.” For public companies, it’s a
little easier: in many countries their stated goal is to maximize
the value of their shares. At least, that’s what they say: but in
many cases their shareholders could do better if the
organization was closed down and the money invested in a
more profitable concern. My own theory, after observing what
really happens, is that the primary purpose of any organization
is to survive. And for an organization with audiences, its
primary purpose (after survival) is to be creative: to provide
enough entertaining, inspiring, informative, and educational
material that its audience will stay with it - and the organization
will survive.
For example, a television channel may decide to telecast a
program about how to avoid catching malaria. The program’s
purpose for the anti-malaria campaign is clear, but what
purpose does it serve for the channel? The channel could say
“we are broadcasting this program because we like to spend an
hour a week on public health” - but why is that? In fact, the
telecast of a program will probably serve a number of different
purposes, because organizations with audiences usually have
multiple, fuzzy, and overlapping goals.
To check that the tree-hierarchy makes sense, you can create an
intent structure. This is done in the opposite way from forming
the hierarchy of goals. You begin at the top level of the tree (the
leaves, not the trunk). For each activity, consider “Why should
we do this? What will it achieve?”
For most organizations with audiences, their logical framework
diagrams won’t look like trees, because each activity (program,
article, etc.) will serve several purposes. A tree covered in
cobwebs might be a better example.
To complete the Logical Framework, several questions have to
be answered for each goal and sub-goal:
• What resources are required to achieve this purpose?
• What constraints may prevent it; under what conditions will
it succeed?
• How will its success be evaluated?
This last question is where audience research comes in. Most
activities of an organization with an audience can’t be evaluated
without doing audience research.
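
The Log Frame hierarchy lends itself to a simple data structure: each goal carries its sub-goals plus the three questions listed above. The sketch below models the anti-malaria example; the function and field names are illustrative, not a standard Log Frame notation.

# Each node holds a goal plus the Log Frame questions asked of it.
def goal(statement, resources=None, constraints=None, evaluation=None,
         subgoals=()):
    return {"goal": statement,
            "resources": resources,      # what is required to achieve it?
            "constraints": constraints,  # what may prevent it?
            "evaluation": evaluation,    # how will success be evaluated?
            "subgoals": list(subgoals)}

log_frame = goal(
    "Eliminate malaria in the region",
    subgoals=[
        goal("Encourage people to avoid being bitten by mosquitoes",
             subgoals=[
                 goal("Make anti-mosquito equipment widely available"),
                 goal("Encourage adequate clothing when mosquitoes feed"),
                 goal("Advise people on avoiding mosquito bites",
                      evaluation="audience research on the health program"),
             ]),
        goal("Make anti-malarial drugs readily available"),
        goal("Eliminate malaria-carrying mosquitoes"),
    ])

def walk(node, depth=0):
    """Print the hierarchy of goals, trunk first."""
    print("  " * depth + node["goal"])
    for sub in node["subgoals"]:
        walk(sub, depth + 1)

walk(log_frame)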
When you do research planning you need to know well in
advance what type of research you are doing and for whom. To
choose a suitable method you should be well versed in the
different types of research. The following material gives you a
general idea of the different types of research.
Research is a process which is almost impossible to define.
There is a great deal of mystique about it and reluctance on the
part of many to consider undertaking it. It can cover a wide
range of studies, from simple description and investigation to
the construction of sophisticated experiments. A clear objective
provides the basis for the design of the project and for the
selection of the most appropriate methods. The basic skill lies
in selecting the most appropriate methods for the task in hand.
It is possible to build on experience and to learn from past
mistakes, but each project is different and requires a fresh
approach.
Research is carried out for two main reasons; as a means to an
end, or as an end in itself. Both are perfectly valid, but each
entails a rather different approach to the definition of the
problem at hand and to the formulation of objectives.
However, before the objective can be specified, it is necessary to
define what the problem is, and before that can be done there
must be a clear understanding of why the research is being
considered.
a. Research as a means to an end: Solving a specific problem is
one of the most common tasks which the researcher is
called upon to perform, but for the researcher it presents the
most difficult projects. Problems are seldom simple and
usually have many dimensions; there is a need to work
quickly and to produce results upon which action can be
taken, and it is necessary to keep the scale of the research in
tune with the size of the problem. Information is often
required, not so much to solve a specific problem as simply
to remove uncertainty, or to increase knowledge or
understanding. A completely different form of research is
that which might be called experimental. The research is
concerned with establishing what would happen if a change
was made to the existing arrangements or if something
completely new was introduced. It is possible to construct a
model which can be used for tests, but it is much more
common to bring about the change, perhaps doing it in
only one part of the system, and to measure what happens.
Very closely related to this is research which seeks to
establish whether it is possible to achieve something, or to
bring about a change in something, by adopting a given
course of action. Before embarking on research as the means
to an end, it is wise to be absolutely sure where the end is
and what it involves.
b. Research as an end in itself: This type of research can,
however, be divided into two main categories. There is
research which is primarily based on a detailed and analytical
review of the work of others. This is the type of work which
usually leads to a Master's level qualification. The next stage
up the academic ladder is the type of original research which
leads to a Ph.D. Both types of research call for careful
preparation. A crucial factor determining the success or
failure of the work is the scope of the project and the range
to be covered. Usually it is necessary to refine an initial idea
down to something which is manageable. There should be
sufficient literature on the subject to provide the basis for
the research, but not so much that it is impossible to handle
it within the time allowed. It may be necessary to take
account of the historical dimension of the project, perhaps
going back to original source documents. This can be
time-consuming and should not be under-estimated. A
careful literature search should reveal the existence of other,
related work, which needs to be taken into account.
c. Relevance of research: Usually research inculcates scientific
and inductive thinking and it promotes the development of
logical habits of thinking and organisation. Research also
provides the basis for nearly all government policies in our
economic system. Research has its special significance in
solving various operational and planning problems of
industry and business. Research makes its own contribution
to the existing stock of knowledge, making for its
advancement. It is the pursuit of truth with the help of
study, observation, comparison and experiment. Thus,
research involves a systematic process of advancement in
knowledge whereby one might start with the knowledge of
details, and gain an understanding of the general principles
underlying them.
According to Charles Peirce, the great American philosopher,
there are four methods of knowing about facts or fixing our
beliefs about various matters. These are tenacity, intuition,
authority and science.
Fred N. Kerlinger defines scientific research as a systematic,
controlled, empirical and critical investigation of hypothetical
propositions about the presumed relations among natural
phenomena.
Berelson and Steiner define science as a form of enquiry in
which procedures are public, definitions are precise, data
collection is objective, findings are replicable.
Pure research (which is also known as basic, theoretical or
fundamental research) always aims at enriching the theory by
unraveling the untold mysteries of nature. On the other hand,
applied or empirical research always aims at enriching the
application of the theory by discovering various new uses to
which the findings of pure research may be put and by showing
the limitations of these findings. Following are some points of
difference between pure research and applied research:
1. Pure research aims to illustrate the theory by enriching the
basis of a discipline. Applied research aims to solve a
problem by enriching the field of application of a discipline.
2. Pure research studies a problem usually from the focus of
one discipline. In applied research, often several disciplines
collaborate for solving the problem.
3. Pure research seeks generalizations. Applied research often
studies individual cases without the objective to generalize.
4. Pure research works on the hypothesis that variables not
measured remain constant. Applied research recognizes that
other variables are constantly changing.
5. Pure research tries to say why things happen. Applied
research tries to say how things can be changed.
6. Pure research reports in the technical language of the
discipline. Applied research reports in common language.
It should be noted, however, that the above difference between
‘pure’ and ‘applied’ research is not as clear-cut in the social
sciences as it is in the natural sciences.
Categories of Research
There are different categories of research; basic vs applied;
descriptive vs analytical; quantitative vs qualitative; conceptual vs
empirical and so on.
Modern Methods
Following are the modern methods currently followed in
research.
i. Basic vs Applied Research
Research can either be basic (fundamental or pure) or applied.
By basic research is meant the investigation of problems in
order to further and develop existing knowledge. It is mainly
concerned with the generalisations and with the formulation of
a theory. Gathering knowledge for knowledge’s sake is termed
9
basic research. Research concerning some natural phenomenon
or relating to pure mathematics, physics or astronomy are
examples of fundamental research. Similarly, research studies
concerning the behaviour characteristics of individuals with a
purpose of drawing some generalisation about their social
learning, memory pattern or intelligence level are also examples
of fundamental research.
ii. Descriptive vs Analytical Research
Descriptive research attempts to depict the present state of
affairs as it exists, without any control over the variables. The
main characteristic of this method is that the researcher has no
control over the variables; he can only report what has happened
or what is happening. Examples include the number of
students enrolled in medical/engineering colleges during
1990-1995, frequency of shopping, preferences of people, etc.
In analytical research, on the other hand, the researcher has to
use facts or information already available, and analyze these to
make a critical evaluation of the material.
iii. Quantitative vs Qualitative Research
Quantitative research is based on the measurement of quantity
or amount. It is applicable to phenomena that can be expressed
in terms of quantity. Qualitative research is concerned with
qualitative phenomena. For example, studying what makes
people work hard or what makes people lazy will lead to
qualitative results: challenging jobs, attractive salary and
opportunities to grow within the organisation perhaps are the
reasons for eliciting hard work. Qualitative research is especially
important in the behavioural sciences, where the aim is to
discover the underlying motives, interests, personality and
attitudes of human beings.
iv. Conceptual vs Empirical Research
Conceptual research is related to some abstract ideas or theory. It
is generally used by philosophers and thinkers to develop new
concepts or to interpret existing ones. Empirical research is
data-based research, coming up with conclusions which are
capable of being verified by observation or experiment. In this
research, the researcher should collect enough data to prove or
disprove his hypothesis. Empirical research is appropriate when
proof is sought that certain variables affect other variables in
some way. Evidence gathered through experiments or empirical
studies is considered to be the most powerful support possible
for a given hypothesis.
v. Laboratory Research
The emphasis in laboratory research is to control certain
variables in such a way as to observe the relationship between
two or three other variables.
vi. Clinical or Diagnostic Research
This type of research follows case-study methods or in-depth
approaches to reach the basic causal relationships. This research
takes only a few samples, studies the phenomenon in depth
and observes the effects.
vii. Exploratory Research
The objective of exploratory research is the development of
hypotheses rather than their testing.
viii. Historical Research
This type of research utilizes historical sources like documents,
literature, leaflets, etc., to study events or ideas of the past.
Research methodology can be divided broadly into two
categories: traditional or pre-scientific methods, and scientific
methods.
B. Traditional Methods
Traditional methods are divided into four aspects, namely
philosophical, institutional, legal and historical.
i. Philosophical Methods
The philosophical method is the oldest of all the methods. It is
through application of this method that people have been
understanding human society. Secondly, the philosophical
approach is not a narrowly focused one. Instead it takes an
overall view of human development but draws metaphysical
conclusions.
ii. Institutional Approach
The institutional approach is the second long-standing
approach to enquiring into the nature of institutions. The
concept of institution covers a gamut of institutional structures
like parliament, secretariat, legal courts, educational institutions,
etc., and it also covers certain abstract institutions like property,
family and caste.
iii. Legal Approach
The legal approach is different from the institutional approach.
It is law that explains the nature of an institution and, in the
second place, it is law that controls institutions. The legal
approach has several inadequacies. Firstly, law is very remotely
related to many human activities. Secondly, law does not
explain the location of power. A legal position may give some
defined amount of power to an individual, but sometimes
individuals without holding any legal position exercise a
tremendous amount of power. Thirdly, the legal approach does
not explain the character of social class - the dominance of
economically powerful groups over all other social groups.
Fourthly, the ideologies of individuals and groups do play a
very important role in moulding social institutions and social
systems.
iv. Historical Approach
History as actuality means all that has been felt, thought,
imagined, said and done by human beings in relation to one
another and to their environment since the beginning of
mankind. History as record consists of documentary and other
primary evidence of history as actuality. Historical approaches
may broadly be divided into two: history as a record of facts;
and history as a record of facts together with interpretation of
the facts from a correct perspective. In other words, history is
the record of heroic stories of individuals and a chronological
study of human development in relation to nature and other
institutions. There are several types of documents to be
consulted in the historical method. Archival data are the most
important of all. There are three reasons for this. First, more
and more statistical census data are gathered by governments,
offices, industries and others. Secondly, more and more such
data have become readily available, partly because of the
increasing interest and demand shown by the public and
research community and because of computerisation of data.
Thirdly, the number of data banks making data available has
increased. Personal documents including life histories of
people, and public and private documents like diaries, secret
files, literature, newspapers etc., are the other important sources
of information for those who use the historical method.
C. Scientific Method
Scientific method is a branch of study which is concerned with
observed facts systematically classified, and which includes
trustworthy methods for the discovery of truths. The method
of enquiry is a very important aspect of science; perhaps this is
its most significant feature. Scientific method alone can bring
about confidence in the validity of conclusions.
George Lundberg defines scientific method as one consisting of
systematic observation, classification, and interpretation of data.
The main difference between day-to-day generalisation and the
scientific method lies in the degree of formality, rigorousness,
verifiability and general validity of the latter. Observation,
hypothesis and verification are the three important components
of scientific enquiry.
Assumptions of Scientific Method
David Easton has laid down certain assumptions and objectives
of scientific method. They are regularities, verification,
techniques, quantification, values, systematisation, pure science,
and integration.
1. Regularities
Scientific method believes that the world is regular and
phenomena occur in patterns. Further, there are discernible
uniformities in political and social behaviour which can be
expressed as generalisations capable of explaining and
predicting political and social phenomena.
2. Verification
Scientific method presupposes that knowledge should consist
of propositions that have been subjected to empirical tests, and
that all evidence must be based on observation.
3. Techniques
Scientific method attaches a great deal of importance to the
adoption of correct techniques for acquiring and interpreting
data. In order to make research self-conscious and critical about
its methodology, there is a need for the use of sophisticated
tools like multivariate analysis, sample surveys, mathematical
models, simulation and so on.
4. Quantification
Science necessarily involves mathematical formulas and
measurements. Theories are not acceptable if they are not
expressed in mathematical language. All observations must be
quantified, because quantification has advantages in terms of
precision and manageability.
5. Values
Values and facts are two separate things. Science, it is claimed, is
value free. It is not concerned with what is “good”, “right”,
“proper” or “desirable”; “good” and “bad” are the concern of
philosophers. Scientific enquiry, to be objective, must therefore
be value free.
6. Systematisation
Scientific study demands that research should be systematic. It
means that it must be theory-oriented and theory-directed. The
theory and research should form interrelated parts of a coherent
and orderly body of knowledge.
7. Pure Science
Scientifically minded social scientists insist on a pure science
approach. They agree that theoretical understanding may lead to
an application of this knowledge to problems of life. Both the
theory and its application are parts of the scientific enterprise.
8. Integration
Finally, there is the question of integration of each social science
with the other social sciences. The behaviouralists agree that
man is a social animal, and while one may try to draw boundary
lines between social, political, economic, cultural and other
activities, none of these activities can be understood without
placing them in the wider context of his entire life.
In experimental design, the researcher can often exert a great deal
of control over extraneous variables and thus ensure that the
stimuli in the experimental conditions are similar. In a
laboratory experiment, one has the opportunity to vary the
treatment in a systematic manner, thus allowing for isolation
and precise specification of the important difference.
LESSON 4:
AUDIENCE RESEARCH ETC. - I
Topics Covered
Audience research, Planning research, understanding audiences
Objectives
Upon completion of this Lesson, you should be able to:
• Identify the types of audience research
• Explain how research is done: an overview
• Plan a research project
• Discuss some findings about audiences
The Need for Audience Research
If you have an audience, and you don’t do audience research,
this is equivalent to walking with your eyes shut. But many
organizations (even those with audiences) survive without
doing audience research. How do they survive?
• Even if an organization doesn’t do systematic audience
research, it usually has some informal method of collecting
feedback, and sometimes these informal methods seem to
work well.
• When funding is guaranteed, regardless of audience size,
broadcasters can survive without audiences. Many shortwave
services have tiny or unknown audiences, but governments
fund them out of national pride.
• Organizations that rely on revenue from their audiences
often use the amount of revenue as a substitute for audience
research. This applies to most small businesses. As long as
they keep making money, they feel no need for audience
research. But when the flow of money unexpectedly declines,
the businesses often feel the need for market research.
Income flow will tell the owner what is happening, but not
why.
If you want to know why audiences react as they do, you need
audience research - or market research, or social research,
depending on your industry. In larger organizations, where
information about revenue is often delayed, or is complicated
by other factors, regular audience research (or market research)
can often provide an early indication of a change in the habits
of the audience (or the customers).
Varieties of Audience Research
Not all information-gathering is research. To qualify as research,
information-gathering must be systematic and unbiased. It
must cover the entire audience of interest. It should also avoid
subjectivity: if two people do the same research, they should
arrive at the same results.
Some politicians believe they can do public opinion research (yet
another form of audience research) by talking to taxi drivers,
because they believe that taxi drivers are typical of the whole
population. Of course, this might be true, but it probably isn’t.
In developed countries, most taxi drivers are men, fairly young,
and with below-average education. If you assume that the
opinions of taxi drivers are typical, you are taking a big risk.
Audience research greatly reduces this risk.
Audience Measurement
As mentioned above, radio and television have special need of
audience research - simply to find out what most other
organizations already know: how widely their services are used.
Thus audience measurement is the most widely used form of
audience research.
There’s an important difference between audience research and
the customer information that non-broadcasting organizations
gather. These other organizations collect information (mostly
financial) about all their customers. If they calculate a total sales
figure, it should be completely accurate. Audience research,
because it relies on samples, can’t be accurate to the last digit but nor does it need to be.
If proper sampling procedures are used, you can estimate for a
given sample size (number of people interviewed) the range of
accuracy for any audience estimate.
A newspaper can make a statement like this: “Last week we sold
53,234 copies.”
A broadcaster, after doing an audience survey, could make a
statement like this: “Last week, the best guess at our audience is
52,000 listeners - but there is a 5% chance that the true figure is
smaller than 49,000 or larger than 55,000.” Interviewing more
people can reduce the margin of error (3,000 either way, in this
example), but it is always present whenever information is
based on a sample, instead of the whole population. The larger
the number of interviews, the smaller the margin of error.
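
The broadcaster's statement above follows from the standard 95% confidence interval for a sample proportion, margin ≈ 1.96 × sqrt(p(1-p)/n), scaled up to the population. A sketch with invented figures (the population size, sample size and listener count are assumptions, so the resulting margin will not exactly match the 3,000 in the example):

import math

population = 200_000   # adults in the coverage area (assumed)
sample_n = 1_000       # people interviewed (assumed)
listeners = 260        # sample members who listened last week (assumed)

p_hat = listeners / sample_n
estimate = p_hat * population

# 95% margin of error for a sample proportion.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_n)
low = (p_hat - margin) * population
high = (p_hat + margin) * population

print(f"best guess: {estimate:,.0f} listeners")
print(f"95% range : {low:,.0f} to {high:,.0f}")
# Quadrupling the number of interviews roughly halves the margin,
# which is why larger samples give more precise audience estimates.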
Audience measurement is done in two main ways:
1. Surveys, asking people which programs or stations they
listened to, at which times, on which days.
2. Meters attached to TV sets (or occasionally to radios), which
record the stations the set is tuned to, at which times, on
which days.
Meters are more accurate than memories, but are very expensive.
In most developed countries the television industry is large
enough and rich enough to afford meters, particularly when
there are commercial stations whose revenue depends on
accurate audience information. But in developing countries, and
those without commercial broadcasters, surveys are the
commonest method of audience measurement.
Audience measurement can find out only that a person (or
household) was tuned into a program at a particular time. It
provides no information about the amount of attention being
paid to the program, or opinions about the program, or other
matters related to the program.
Evaluation
Sometimes a program has a clear purpose. For example, a radio
program on health education might try to educate people on
how to prevent malaria. If that is the only purpose of the
program, its success can be evaluated using audience research
methods.
Outcomes from the program might include people being aware
of the program, people listening to it, people acting on its
advice, and eventually a fall in the number of people who have
malaria. (Of course, if the malaria rate does drop, there could be
many other reasons for this too. When something happens,
there are usually many different reasons.)
Another type of evaluation is testing a program not for social
effectiveness (as above) but simply to improve programs. For
example, a TV channel will make a pilot program and show it
to a small group of people. These viewers will be asked
questions about the program, and depending on their reaction,
the program might be broadcast, cancelled, or changed.
Understanding Your Audience
If you don’t want to measure the audience or evaluate a
program, why would you do audience research? A very important reason is to understand your audience. The more you
know about the types of people in your audience, their
backgrounds, their interests, and their preferences, the better
you can be at making programs to suit them.
Research as Program Content
Another reason for doing research is to use the results as
program content. Some stations, before an election, carry out
opinion polls, in which voters are asked who they intend to
vote for. The results are then broadcast.
How Research is Done: An Overview
Let’s begin with how not to do a survey.
Sometimes, broadcasters seem to say to themselves “Shall we
do a survey? ... Yes, why not? What a good idea!”
So they produce a questionnaire, writing down all the things
they want to know about their audience. Then they find some
people who will fill in the questionnaire. (This type of survey
nearly always uses questionnaires that the respondents fill in
themselves.) Perhaps there is a big fair being held nearby, so the
station prints a lot of questionnaires, and leaves a heap at the
fair, with a sign instructing people to take a questionnaire, fill it
in, and mail it back.
After this, the station may have a few completed questionnaires - but probably only a small percentage of the number printed. In one case I know of, a thousand questionnaires were printed in Agra, but just 55 were filled in and returned. (Nobody ever found out what happened to the rest.)
Not all of these questionnaires will be fully completed - but the station staff are probably used to forms that are unclear and poorly filled in. Now, the staff wonder what to do next. They begin to realize how much work will be required to process the questionnaires - though they are not sure how this processing is done.
What they didn't know was that producing the questionnaire and getting some completed questionnaires back was the easiest part of the process. Often, at this point the manager desperately glances through the questionnaires, and declares "Yes! I knew it all along: the listeners agree with me." The questionnaires are put away in a box. They gather dust for a year or two, and eventually they are thrown out.
What a waste of effort! If this story wasn't so common, it would be funny.
How to Organize a Survey
Now that you've seen how not to do a survey, let's look at a better method. Whether you do the survey (or other research) yourself, or commission another organization to do it, you should first of all:
1. Know what you want to know, and
2. Know how you will use the results.
If you don't know these, you will probably flounder in indecision, and not find out what you really need to know.
Audience research projects are usually done in this order:
1. Define the purpose of the research.
You should be able to summarize this in one sentence.
2. Try to find out if the information you need is already available.
If the information exists, you can stop now. If the information is not available, you can go ahead with the research plan.
3. How much is it worth to you, to know this?
Research can be very expensive. There are always ways to reduce the cost, but they bring certain disadvantages.
4. Which research method is most appropriate?
If you need precise numerical information, a survey will be needed. If you need to gain a broad understanding, and numbers are not so important (e.g. the types of people in your audience and what they prefer), qualitative research may be more appropriate.
5. Who will do the research?
Will you do it by yourself, or hire a research organization to do it all, or will it be some type of joint effort?
6. Now do the research.
This book explains how.
7. When the research is finished, compare the results with your activities.
What differences are there between the perfect activities (as defined by your audience) and your current activities? What needs to change? Why not change it, then?
Planning a Research Project
You can plan a research project by asking yourself, and answering, these questions:
• What do you already know about your audience?
• What do you need to know?
• How will you use the results?
What do you Already Know About your Audience?
It’s worthwhile to keep a list of some basic facts about your
audience. I have compiled a set of basic questions, which cover
most aspects of audience research. A well-informed publisher
should know most of the answers to these questions.
The Basic Questions of Audience Research
1. How large is the audience - both as an average, and as the
reach (number of different people)?
2. What kind of people make up the audience? How do they
differ from the whole population - e.g. in terms of age
group, sex, occupation, etc?
3. Where is your audience? In each part of your coverage area,
what percentage of the population are members of your
audience?
4. When does your audience use your publication (or tune into
your station) - what time of day, what day of week, etc?
5. How do your audience members spend their time? How
much of their time is spent being part of your audience?
And how much with your competitors?
6. What type of content (e.g. radio and TV programs,
newspaper articles) interests your audience most - and least?
7. What styles of presentation do your audience prefer, and
what styles do they dislike?
8. Which activities, attitudes, and other effects do your
publications cause among your audience?
9. How will your audience react to a new kind of program or
article that you might introduce?
10. How can you increase your audience? Is it best to try to find
new listeners? Or to bring lapsed listeners back? Or to
persuade existing listeners to spend more time with your
broadcasts?
11. What percentage of the population in your area know about
your station - and how much do they know about it?
12. What is preventing people from using your service as much
as they might?
Most audience research is directed towards answering the above
general questions. Some of them, of course, are more than one
question. In fact, some of those questions can be divided into
hundreds of more precise questions.
With any proposed research project, it is useful to work out
which of the above general questions it tries to answer. Most
research projects will cover more than one of the general
questions, but if you have done no audience research before, it
will be impossible to cover all questions with a single project.
You would have to ask thousands of questions, and most
respondents would not have enough patience to answer so
many questions accurately.
Situation Assessment
A useful exercise to do when planning a research project is a
situation assessment. This is a systematic way of considering all
factors that might affect your organization. This often forms a
part of a marketing plan or a strategic planning exercise.
Three main factors that affect stations and audiences are broad social trends, the audience environment, and your media environment. To assess these factors, the tools you can use are trend assessment, audience assessment, SWOT analysis, and stakeholder analysis.
Trend Assessment
What are the major trends now happening, and expected to
continue over the next few years? I've divided all possible trends into seven broad groupings. For each trend, you can identify
aspects that are growing, and aspects that are declining. Most of
this may have to be based on opinion rather than fact. A good
reason for doing audience research is to convert the opinions
into facts.
Below is an example of a completed trend chart. Try doing one
for your area, using published information (such as Census
data) if this is available. Even if you’re not sure exactly what the
trends are, it’s useful to discuss these with your colleagues
before planning an audience research project. Some of this
information may not be available, and the audience research
project can be used to collect it.
Social Trends Chart

Type of trend          Growing                              Declining
Demographic            More people aged over 50             Fewer people living on farms
Economic               Higher average income                Less unemployment
Political              More freedom of speech               Independence of local government
Environmental          More background noise                Smaller farms
Technology             Introduction of satellite TV         % with no electricity at home
Personal values        Desire for freedom among teenagers   Respect for the elderly
Audience preferences   Use of Internet                      Willingness to watch serials?
The examples above should be replaced with your own information or beliefs. Some trends, such as the growth of the Internet,
may fall into several of these categories. Environmental trends
may not always be relevant to radio and TV audiences, but it’s
worthwhile to think about them - and they always provide
good material for programs.
Audience Assessment
This involves summarizing the social situation of your present
and potential audience. Here are some of the key questions to
which you should know the answers. The first group could be
answered without doing audience research, e.g. by using Census
data.
1. What is the area covered by your station or publication? This
can be divided into an inner area, where you face little
competition, and an outer area, where perhaps your station
can be received, but other stations may be more relevant.
2. How many people live in the inner and the outer areas?
3. What other media, and other activities are competing for
your audience’s time?
4. What sort of people does your station try to attract? (In
terms of age group, education, etc.)
5. How are these people distributed across the coverage area?
Are there small areas with a much higher concentration of
them?
The following questions can be answered only by doing audience research:
6. What proportion of the inner and outer area populations use your station?
7. How often do they use it? At what times, on what days?
8. What is your station's share of their available time?
9. What types of people use your station most?
10. In what circumstances do people use your station?
If you can answer all of the above questions, you will have a good picture of your audience.
SWOT Analysis
As well as the audience environment, there's the media environment. A good way to think about this is to do a SWOT analysis. SWOT stands for Strengths, Weaknesses, Opportunities, and Threats.
A SWOT analysis is done by getting a group of people to answer four questions. People usually do a SWOT analysis by considering each of the four factors in turn: S, W, O, T. But I've found it's better to go S, W, T, O. Though you can't pronounce it, the natural flow of human thought is to move from problems towards solutions - like this:
S. What are our particular strengths? What can we do better than any other publisher?
W. What are our weaknesses? What things do we not do as well as other publishers?
T. What are the threats to our organization? What might come along that would make us irrelevant, or take away most of our audience?
O. What opportunities could we seize? What aren't our competitors doing, that our audience would like? (Opportunities come and go quickly: if another radio station foolishly changes its format and loses most of its listeners, perhaps your station could gain them if it acts quickly.)
Who should be involved in a SWOT analysis? It can be done by a single person (yourself, perhaps), but a single person will probably not think of all the strengths, weaknesses, threats, and opportunities. If a number of your staff meet, and spend a few hours discussing these four questions, many more factors will be included. It's best to include some outsiders - even well-informed audience members - because sometimes they can see things that a station's staff don't notice.
Stakeholder Analysis
Stakeholders are types of people who have an interest in what you are doing. For example, if you are running a commercial radio station, your stakeholders will include
• your audience
• your advertisers
• local organizations which depend on you for information (probably including local government)
• your staff
• your owners or shareholders
• your suppliers
• your neighbours
• your competitors
...and so on: every type of person who would be affected by any action your organization might take.
The first step in stakeholder analysis is to work out who the stakeholders are. For each group of stakeholders, you should consider:
• What they expect from you
• How they'd react if you stopped existing
• How they'd react if you greatly increased in size
• Any other issue that you think is important for stakeholders.
You'll probably find that you don't have all this information for each type of stakeholder. It's helpful to guess, but distinguish (a) what you know for sure, (b) what you have good reason to suspect, and (c) what you are guessing at.
If you're planning some action which may be controversial, it's useful to consider each type of stakeholder in turn, and their likely reaction to your proposed changes.
When you have completed a situation assessment, using the above four tools, you'll probably realize that there are some important questions that you don't know the answers to. That's why a situation assessment leads naturally into audience research.
LESSON 5:
AUDIENCE RESEARCH ETC. - II
Topics Covered
Audience research, Planning research, understanding audiences
Objectives
Upon completion of this Lesson, you should understand:
• Types of audience research.
• How research is done: an overview.
• Planning a research project.
• Some findings about audiences
Reasons for Research
There are several reasons for doing audience research. Depending on which reason applies in a particular situation, a different
type of research should be chosen. Four of the most common
reasons are to help in making a decision, to understand the
audience, to demonstrate facts to outsiders, and to provide
material for programs.
To Help in Making a Decision
The most effective small research projects I’ve seen have resulted
from the need to make a decision based on audience data.
Often, only a few very specific questions need to be asked, or
one main topic area covered.
The best solution here is a small survey (with a sample as small as 100), or a set of 3 or 4 consensus groups. Use a survey if you
are clear about exactly what you want to know, and need a
numerical answer. Use consensus groups if you are uncertain
about the exact questions that should be asked, and you don’t
need exact numbers, but will be satisfied with statements such
as “the great majority prefer...”
To Understand the Audience
This is a more difficult task - and one that never stops. The
questions often asked by the organization’s management are
along the lines of “What type of people tune in to our station?
What interests them most? How do they spend their time?”
If this is your main interest, you could consider either a set of
consensus groups, or a detailed survey. In general, I recommend consensus groups. A survey will provide precise results
to the questions asked, but will give no information at all on
questions that weren’t asked. Also, a survey will cost a lot more,
and take more time.
To Demonstrate Facts to Outsiders
Commercial broadcasters want to convince manufacturers and
retailers to advertise on their station. For this, it helps to have
data showing the size, demographics, and interests of their
audience. A related purpose is that of a special-interest organization seeking support from a funding body, and providing survey data to show the extent of public support for that organization.
This type of information is more convincing if it comes from a
survey, conducted very thoroughly by an impartial third party,
such as an industry-wide body or market research company. If
your organization does the survey itself, the results will have
less credibility to outsiders, no matter how accurately you do the
work.
Alternatively, you could have your survey audited by an
independent organization, to confirm that the results you are
publishing are unbiased.
To Provide Material for Programs
Most media organizations can use research methods to gather
data about audiences, and make programs based on this data.
Audiences like to hear about public opinion, and general
reaction to issues of the day, and programs created from (or
supported by) research data always seem to be popular.
For this purpose, all research methods are suitable, including
surveys, consensus groups, and informal interviews. To gain
the fullest information, several different methods can be used.
What do you Need to Know - And how will you Use
that Knowledge?
Whatever the purpose of the research, the first stage is to ask
yourself “What do we need to know from this research?”
When an organization asks my group to carry out a research
project for it, the first thing I ask them to do is to write out a
list of questions that they want the research to answer. This is
not the same as writing a questionnaire; it is the list of questions that need to be answered. I also ask them to consider
what action they might take, resulting from the answers to a
question. If no action will result from a question, that question
will usually be of lower priority than one whose answers cause a
decision to be made.
When the list of questions has been prepared, the next stage is
to convert the questions into a set of hypotheses.
Hypothesis
A hypothesis is a statement whose truth can be tested in a
survey. For example, a manager of a TV station might say “Our
viewers are old.” This is not a real hypothesis, because it’s not
precise enough. You need to specify exactly what “viewers” and
“old” mean. For example “The people who watch our station
at least once a week are aged over 46.” That’s better, but it’s not
quite there: does “the people” mean “all the people” or “most
of the people”? How about this: “More than two thirds of the
people who watch our channel at least once a week are aged over
46.”
That’s a hypothesis. It can be tested by including two questions
in a survey, e.g.
• How often do you watch this station?
• What age are you?
A hypothesis often has its beginnings in an assumption. The
staff of an organization often hold assumptions about their
audiences. But these assumptions are beliefs, not facts, and
often they aren’t true. Sometimes, the staff don’t even realize
they are making assumptions about their audience. Therefore, at
the planning stage of a survey, it’s valuable to include people
who know something about your organization, but can take a
broader view. These can be members of another organization
that you work with, some members of your audience, and other kinds of stakeholders. The most effective planning
groups seem to include a wide range of different types of
people.
One of the main problems in doing your own research, and
not consulting other stakeholders, is that you can lose this
broad viewpoint. This is where market research companies can
be most useful: in identifying the assumptions that you’re not
aware you’re making.
Who Should do the Research
If you do your own research, it is much cheaper - but that is because most of the cost involves labour. You need to be highly organized, and to have suitable staff with plenty of time available. You also need to be well informed - for example, by reading this book.
If you hire a research group, it will be much more expensive, but you should receive the results sooner. The work should be of better quality, but may lose something in relevance.
Usually the best results are achieved if you work closely with a professional researcher, but learn as much as you can about the process, and fit the purpose and results of the research into your own management process. My advice is not to rush into doing research straight away, but to spend plenty of time with the research company - not only your top manager, but also a number of your staff.
Guidelines for doing your Own Research
If you decide to do the research yourself, here are some suggestions.
Decide what you Want to Find Out - and why
Don't seek information that you're not prepared to act on. If you are determined to scrap that program anyway, why do audience research? To prove something to others? If so, are they going to believe the results of research you've done without involving them? Not likely! The results will only justify the effort if you can make use of them.
Cover an Issue More Broadly than you Think you Need to
A radio station may want to increase its audience. Newcomers to audience research might think that only listeners to the station should be questioned, because non-listeners would not be able to answer some of the questions. This is a mistake, but you may not discover it till the research is finished. Always try to measure the central activity (e.g. listening to radio) in a broader context. In this example, don't ask only about your radio station, or even all radio stations in the area. What a radio station is competing for is the audience's time, so a comprehensive study needs to find out how people spend their time.
Keep it Small
When doing your first survey, don't ask too many questions, and don't have too large a sample. Small surveys have a much better record for being completed and acted on. There's seldom a need to interview more than 250 people - though 100 should be regarded as a minimum. If you've never done a survey before, try to restrict the questionnaire to 2 pages, or about 12 short questions. You can always do another survey later.
I recommend that your first project should be a set of consensus groups - because
• consensus groups are cheaper,
• consensus groups are easier to do than surveys,
• consensus groups need less organizing, and
• the results are available immediately, without computer processing.
When you have done a set of consensus groups, you may find you need to do a survey. If so, you will already have a set of statements which can easily be converted into survey questions.
Comparisons
A problem with many surveys designed by novices is the lack of information with which results can be compared. You might ask, for example, your audience's opinion of a presenter. Suppose that you do a survey, and find that 56% regard the presenter as "good". Without a comparison, this figure is not useful - is 56% high or low?
If you also asked about a number of presenters on your station, or even some presenters on other stations, this information would be much more useful.
Unless you already have a lot of information about your audience (from previous research) you should make lots of comparisons in a research project - more than you initially think are necessary.
Use a Representative Sample
Take steps to make sure that the sample is a representative part of the population. Don't let the interviewers speak to whomever they please; people are not all alike.
And never assume that you, yourself, are typical - for a start, you know much more about your own station than most audience members will.
Don't Try to Produce a Mammoth Report
A few pages is usually enough: long reports usually go unread. But do produce a brief report, even if only so that you'll know better next time. (The second audience research project is always easier than the first.)
Don't Stop
More than most other activities, an audience research project is something in which every part relates to every other part. If you stop halfway through the process, and return to the project later, you'll probably have forgotten why some questions were asked. For the same reason, if you can read this whole book without stopping, you'll have a better idea of the interrelationships between different aspects of surveys.
Which is the Best Research Technique?
No single research technique is best, but each technique is appropriate for a particular kind of situation. There's an old saying, common among researchers, and still true: "Research can be fast, cheap, and accurate - pick any two."
In other words:
• Quick, low-cost research is usually not accurate.
• Quick and accurate research is not cheap (and sometimes not possible).
• Cheap and accurate research is usually slow.
In some situations, you don't need very accurate research. If you have never done audience research before, and have no information about your audience, it's not difficult or expensive to gather some data.
For example, if you don't know the ages of your audience, you could do a small survey and perhaps find that 64% were under 30 years old. If only 100 people were surveyed (as long as they form a representative sample) you can expect that the figure of 64% may be about 5% in error. But whether the true figure is 55% or 70%, you will be much better informed than you were before.
So if you only want to get an approximate idea of your audience, it is possible to do research quickly and cheaply, and still have it accurate enough. The more you already know about your audience, the more expensive it becomes to increase that knowledge.
The following table lists the main research methods, showing their strengths and weaknesses. I also show their relative cost and speed - but not their accuracy. That depends on sample size (the larger, the better) and how well the project is done; any method can be used well or poorly.
Method             Cost       Speed    Prerequisites                                    Main problems
Face to face       High       Medium   Interviewers must be able to reach respondents   Organizing interviewer tasks
Telephone          Medium     Fast     All must have telephone                          Getting telephone numbers
Written            Low        Slow     All must be literate                             Dealing with poorly completed questionnaires
Consensus groups   Low        Fast     All must be able to attend                       Getting useful results
Internet           Very low   Fast     All must have internet access                    Strong computer skills needed
Here’s a more detailed consideration of the advantages and disadvantages of each of the main methods of audience research. Notice
that these types overlap: spoken surveys include face-to-face surveys, which in turn include face-to-face surveys at respondents’
homes. Advantages and disadvantages of the main type (e.g. spoken surveys) also apply to comments about the sub-type (e.g. face-to-face surveys).
Spoken surveys
Advantages: Effective in all situations, e.g. when literacy level is low.
Disadvantages: Need a lot of organization.

Face to face surveys
Advantages: Usually provide very accurate results. Any question can be asked. Can include observation and visual aids.
Disadvantages: Expensive, specially when large areas are covered.

Face to face surveys at respondents' home/work/etc.
Advantages: Can cover the entire population.
Disadvantages: Expensive; much organization needed.

Face to face surveys in public places
Advantages: Can do lots of interviews in a short time.
Disadvantages: Samples are usually not representative of the whole population.

Telephone surveys
Advantages: High accuracy obtainable if most members of population have telephones.
Disadvantages: No visual aids possible. Only feasible with high telephone saturation.

Written surveys
Advantages: Cheaper than face-to-face surveys.
Disadvantages: Hard to tell if questions are not correctly understood. More chance of question wording causing problems.

Mail surveys
Advantages: Cheap. Allows anonymity.
Disadvantages: Requires high level of literacy and good postal system. Slow to get results.

Self-completion questionnaires, collected and delivered
Advantages: Cheap. Gives respondents time to check documents.
Disadvantages: Respondents must be highly literate.

Fax surveys
Advantages: Fast. Cheap.
Disadvantages: Questionnaires with more than one page are often only partially returned.

Email surveys
Advantages: Very cheap. Quick results.
Disadvantages: Samples not representative of whole population. Some respondents lie. High computer skills needed.

Web surveys
Advantages: More easily processed than email questionnaires.
Disadvantages: Many people don't have good web access.

Informal methods
Advantages: Fast. Flexible.
Disadvantages: Can't produce accurate figures. Experience needed for comparisons. Subjective. Most suitable for preliminary studies.

Monitoring
Advantages: Little work required. Cheap.
Disadvantages: Often not completely relevant. Samples often not representative. Most suitable when assessing progress.

Observation (can be combined with surveys)
Advantages: More accurate than asking people about their behaviour.
Disadvantages: Only works in limited situations.

Meters
Advantages: More accurate than asking people about their behaviour.
Disadvantages: Very expensive to set up; measures equipment rather than people. Can't find out reasons for behaviour.

Panels
Advantages: Ability to discover changes in individuals' preferences and behaviour.
Disadvantages: Need to maintain records of previous contact, etc.

Depth interviews
Advantages: Provide insights not available with most other methods.
Disadvantages: Expensive; need highly skilled interviewers.

Focus groups
Advantages: Provide insights not available with most other methods.
Disadvantages: Need highly skilled moderator, trained in psychology etc.

Consensus groups
Advantages: Instant results. Clear wording. Cheap.
Disadvantages: Secretary and/or moderator need strong verbal skills. Don't work well in some cultures, e.g. Buddhist.

Internet qualitative research
Advantages: Easy for a geographically dispersed group to meet. Low cost.
Disadvantages: Doesn't provide the subtlety of personal interaction. Very new, so few experts available to help with problems.
Combinations of Methods
When a study is done in several phases, one after another, you
can use different methods in each phase. This often applies with
screening surveys, when the first contact with respondents is
used to decide which respondents should receive a more
detailed questionnaire. The first contact should be a method
that excludes nobody, while the main questionnaire can use a
cheaper approach.
One example of this is the phone/mail survey. Initial contact is
made by phone. Respondents are asked a few questions. If they
give suitable answers (e.g. if they listen to your station) the
interviewer then asks if a questionnaire can be mailed to them,
about your station’s programs. Most respondents will agree.
Because the first contact has been a personal one, response rates
on the mail survey will be much higher than if mail had been
used in both phases.
Writing a Research Brief
When you commission a research project from an outside
research group, you need to write a brief, describing what you
want to know. This is sometimes called a request for proposal.
The research group comes back to you with a detailed proposal,
outlining their proposed solutions to your problem (and of
course the cost).
Even if you are planning to do the research yourself, it’s an
excellent idea to write a brief. This helps you to focus on exactly
what you want to know. Having written your own brief, you
can complete it by adding your own proposal, which will show
how you will use audience research to answer your questions.
Sometimes, writing a brief will show you that your problem
can be solved without audience research.
There’s also a third section, which can be added to the brief and
the proposal. This is an action plan, which describes any actions
you have in mind to take, depending on the result of the
research.
Briefs, proposals, and action plans need not be long - a few
pages is normally enough. They are very helpful in the later
planning stages, when you may be tempted to add all sorts of
new elements to the original problem. If the research project
seems to be getting so big that it will never be finished, review
your brief and proposal. When you have an original plan, new
ideas can be seen in the context of that plan, and sometimes
found unnecessary.
Points to Include in a Brief
1. Give your research project a name: no more than about 10
words. This will help you define it more clearly.
2. A statement of the reason why you need research. Keep it
short - and be honest!
3. Background of the problem. What a researcher should know
to understand your problem.
4. What you will do as a result of this research, and how your
action will depend on the results.
5. The main question you need answered.
6. The other internal questions that flow from this. (Don’t try
to write a questionnaire - that’s the researcher’s job. Instead,
focus on your own questions, and let the researcher worry
about how they should be answered.)
7. How certain you need to be about the results. (Research
results are never exact, because only a sample of the
population is used. Halving the uncertainty will almost
quadruple the cost.)
8. If there’s a date by which you must have the results, state it.
(If you give a date that is earlier than you need, this could
reduce the quality of the research, or increase the cost.)
Assignment
1. What is the meaning of measurement in research?
2. Write a research brief, taking into account the important considerations of research.
LESSON 6:
SAMPLING
Topics Covered
Populations, Sampling frames, Samples, Random sampling, Sample size, Sample selection, Cluster samples, Selecting respondents
Objectives
Upon completion of this Lesson, you should understand:
• Populations.
• Sampling frames.
• Samples.
• Principles of random sampling.
• Choosing a sample size.
• Selecting starting points for surveys.
• Cluster samples in door-to-door surveys.
• Selecting respondents within households.
Sampling: Introduction
Sampling is the key to survey research. No matter how well a study is done in other ways, if the sample has not been properly found, the results cannot be regarded as correct. Though this chapter may be more difficult than the others, it is perhaps the most important chapter in this book. It applies mainly to surveys, but is also important for planning other types of research.
1. Populations
The first concept you need to understand is the difference between a population and a sample.
To make a sample, you first need a population. In non-technical language, population means "the number of people living in an area." This meaning of population is also used in survey research, but this is only one of many possible definitions of population. The word universe is sometimes used in survey research, and means exactly the same in this context as population.
The unit of population is whatever you are counting: there can be a population of people, a population of households, a population of events, institutions, transactions, and so forth. Anything you can count can be a population unit. But if you can't get information from it, and you can't measure it in some way, it's not a unit of population that is suitable for survey research.
For a survey, various limits (geographical and otherwise) can be placed on a population. Some populations that could be covered by surveys are...
• All people living in Pune.
• All people aged 18 and over.
• All households in Mayur Vihar.
• All schools in Noida.
• All instances of tuning in to a radio station in the last seven days
...and so on. If you can express it in a phrase beginning "All," and you can count it, it's a population of some kind. The commonest kind of population used in survey research uses the formula:
• All people aged X years and over, who live in area Y.
The "X years and over" criterion usually rules out children below a certain age, both because of the difficulties involved in interviewing them and the lack of relevance of their answers to many research questions.
Even though some populations can't be questioned directly, they're still populations. For example, schools can't fill in questionnaires, but somebody can do so on behalf of each school. The distinction is important when finding the answers to questions like "What proportion of Noida schools have libraries?" You need only one questionnaire from each school - not one from each teacher, or one from each student.
Often, the population you end up surveying is not the population you really wanted, because some part of the population cannot be surveyed. For example, if you want to survey opinions among the whole population of an area, and choose to do the survey by telephoning people at home, the population you actually survey will be people with a telephone in their home. If the people with no telephone have different opinions, you will not discover this.
As long as the surveyed population is a high proportion of the wanted population, the results obtained should also be true for the larger population. For example, if 90% of homes have a telephone, the 10% without a phone would have to be very different, for the survey's results not to be true for the whole population.
2. Sampling Frames
A sampling frame can be one of two things: either a list of all members of a population, or a method of selecting any member of the population. The term general population refers to everybody in a particular geographical area. Common sampling frames for the general population are electoral rolls, street directories, telephone directories, and customer lists from utilities which are used by almost all households: water, electricity, sewerage, and so on.
It is best to use the list that is most accurate, most complete, and most up to date, but this differs from country to country. Some of these lists are of households, others are of people. For most surveys, a list of households (specially if it is in street order) is more useful than a list of people.
3. Samples
A sample is a part of the population from which it was drawn.
Survey research is based on sampling, which involves getting
information from only some members of the population.
If information is obtained from the whole population, it’s not
a sample, but a census. Some surveys, based on very small
populations (such as all members of an organization) in fact are
censuses and not sample surveys. When you do a census, the
techniques given in this book still apply, but there is no
sampling error - as long as the whole group participates in the
census.
Samples can be drawn in several different ways, such as probability samples, quota samples, purposive samples, and
volunteer samples.
Probability Samples
Sometimes known as random samples, probability samples are
the most accurate of all. It is only with a probability sample that
it’s possible to accurately estimate how different the sample is
from the whole population. With a probability sample, every
member of the population has an equal (or known) chance of
being included in the sample. In most professional surveys,
each member of the population has the same chance of being
included in the sample, but sometimes certain types of people
are deliberately over-represented in the sample. Results are
calculated to compensate for the sample imbalance.
With a probability sample, the first step is usually to try to find
a sampling frame: a list of all members of the population.
Using this list, individuals or households are numbered, and
some numbers are chosen at random to determine who is
surveyed. If no population list is available, other methods are
used to ensure that every population member has an equal (or
other known) chance of inclusion in the survey.
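As a concrete sketch of that numbering-and-selecting step, here is a short Python example; the membership list is invented:

```python
import random

# Hypothetical sampling frame: a numbered list of members.
frame = [f"Member {i}" for i in range(1, 1501)]

# Simple random sample: 100 members, each with an equal chance of
# inclusion. random.sample never selects the same entry twice, which
# matches the rule that nobody is surveyed more than once.
chosen = random.sample(frame, 100)
print(chosen[:5])
```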
Quota Samples
In the early days of survey research, quota sampling was very
common. No population list is used, but a quota, usually based
on Census data, is drawn up.
For example, suppose the general population is being surveyed,
and 50% of them are known to be male, and half of each sex is
aged over 40. If each interviewer had to obtain 20 interviews,
she or he would be told to interview 10 men and 10 women, 5
of each aged under 40, and 5 of each aged 40-plus. It is usually
the interviewers who decide where they find the respondents. In this case, age and sex are referred to as control
variables.
A problem with quota samples is that some respondents are
easier to find than others. The interviewer in the previous
example may have quickly found 10 women, and 5 men over
40, but may then have taken a lot of time finding men under
40. If too many control variables are used, interviewers will
waste a lot of time trying to find respondents to fit particular
categories. For example (if interviews had been specified in
terms of occupation and household size, as well as age and sex)
“2 male butchers aged 40 to 44, living in households of 8 or
more people”.
It’s important with quota sampling to use appropriate control
variables. If some people in a category are more likely to take
part in the survey than others, and also likely to give different
answers from those in another category, then that category
should be a control variable.
For example, if women are more willing than men to be
surveyed (which is often true) and if the two sexes’ patterns of
answers are expected to be quite different, then the quota design
should obtain balanced numbers from each sex. In fact, sex and
age group are the two commonest control variables in quota
surveys, but occasionally a different variable can be the most
relevant. If you’re planning a quota sample, don’t assume that
by getting the right proportion in each age group for each sex,
everything else will be OK.
Pure quota samples are little used today, except for surveys done
in public places, but sometimes partial quota sampling can be
useful. A common example is when choosing one respondent
from a household. The probability method begins by finding
out how many people live in the household, then selecting an
interviewee purely at random. There are practical problems with
this approach, so inside randomly selected households, quota
sampling is often used.
Volunteer Samples
Samples of volunteers should generally be treated with
suspicion. However, as all survey research involves some
element of volunteering, there is no fixed line between a
volunteer sample and a probability sample. The main difference
between a pure volunteer sample and a probability sample of
volunteers is that in the former case, volunteers make all the
effort; no sampling frame is used.
The main source of problems with volunteer samples is the
proportion who volunteer. If too few of the population
volunteer for the survey, you must wonder what was so special
about them. There is usually no way of finding out how those
who volunteered are different from those who didn’t. But if
the whole population volunteer to take part in the survey,
there’s no problem.
When people who know nothing about sampling organize
surveys, they often have a large number of questionnaires
printed, and offer one to everybody who’s interested. Amateur
researchers often seem to feel that if the number of questionnaires returned is large enough, the lack of a sample design isn’t
important. Certainly, you will get some results, but you will
have no way of knowing how representative the respondents
are of the population. You may not even know what the
population is, with this method. The less effort that goes into
distributing questionnaires to particular individuals and
convincing them that participation is worthwhile, the more
likely it is that those who complete and return questionnaires
will be a very small (and probably not typical) section of the
population.
About the only way in which a volunteer sample can produce
accurate results (without being checked against a probability
sample), is if a high proportion of the population voluntarily
returns questionnaires. I’ve known this to work a few times,
usually in country areas with a small population, where up to
about 50% of all households have returned questionnaires.
Even so, if all the effort is left to the respondents, there’s no
certainty that somebody who wants to distort the results has
not filled in hundreds of questionnaires.
The same problems apply to drawing conclusions from
unsolicited mail and phone calls. For example, politicians
sometimes make claims like “My mail is running five to one in
favour of the stand I made last week.” There are all sorts of
reasons why the letters sent in may not be representative of the
population. The same applies to letters sent to broadcasting
organizations: all they tell you is the opinions of the letter-writers. It is only when the majority of listeners write letters
that the opinions expressed in these letters might be representative.
Purposive Samples
A purposive sample is one in which a surveyor tries to create a
representative sample without sampling at random.
One of the commonest uses of purposive sampling is in
selecting a group of geographical areas to represent a larger area.
For example, door-to-door interviewing can become extremely
expensive in rural areas with a low population density. In a
country such as India, it is not feasible to do a door-to-door
survey covering the whole country. Though areas could be
picked purely at random, if the budget was small and only a
small number of towns and cities could be included, you might
choose these in a purposive way, perhaps ensuring that different
types of town were included. However, there are better ways to
do this. Read on…
Maximum-diversity Samples
A maximum-diversity sample is a special kind of purposive
sample. Normally, a purposive sample is not representative, and
does not claim to be. A maximum-diversity sample aims to be
more representative than a random sample (which, despite what
many people think, is not always the most representative,
specially when the sample size is small).
Instead of seeking representativeness through equal probability,
it’s sought by including a wide range of extremes. This is an
extension of the statistical principle of regression towards the
mean - in other words, a group of people that is (on average) extreme in some way will still contain some people who are themselves average. So if you sought a "minimum diversity"
sample by only trying to cover the types of people who you
thought were average, you’d be likely to miss out on a number
of different groups which might make up quite a high proportion of the population. But by seeking maximum diversity,
average people are automatically included.
When you are selecting a multi-stage sample the first stage
might be to draw a sample of districts in the whole country. If
this number is less than about 30, it’s likely that the sample will
be unrepresentative in some ways. Two solutions to this are
stratification and maximum-diversity sampling. For both of
these, some local knowledge is needed.
With maximum-diversity sampling, you try to include all the
extremes in the population. This method is normally used to
choose no more than about 30 units. For example, in a small
village, you might decide to interview 10 people. If this was a
radio audience survey, you could ask to interview
• The oldest person in the village who listens to radio
• The youngest who listens to radio
• The oldest who does not listen to radio
• A man who listens to radio all day
• A woman who listens to radio all day
• Somebody who has never listened to radio in his or her life
• The person with the most radios (a repairman, perhaps)
• The person with the biggest aerial
• A person who is thought to be completely average in all ways
...and so on. The principle is that if you deliberately try to
interview a very different selection of people, their aggregate
answers will be close to the average. The method sounds odd,
but works well in places where a random sample cannot be
drawn. And of course it only works when information about
the different kinds of sample unit (e.g. a person) is widely
known.
Map-based Sampling
When you are planning a door to door survey, it is tempting to
use a map as the basis for sampling. To get 100 starting points
for clusters, all you need to do is throw 100 darts at the map.
This method, if properly done, gives every unit of area on the
map an equal chance of being surveyed. This would be valid
only if your unit of measurement was a unit of land area — for
example, if you were estimating the distribution of a plant
species. If you are surveying people or households, this equal-area method will over-represent farmers and people living on
large properties. People living in high-density urban areas will be
greatly under-represented. Even within a small urban area, large
differences in density can exist.
Found Samples
Perhaps you have a list of names and addresses of some of
your audience, collected for a marketing purpose. This is known
as a found sample or convenience sample. It’s tempting to
survey these people, because it seems so easy. But don’t do it!
You have no way of knowing how representative such a sample
is. You can certainly get a result, but you won’t know to what
extent that result is true of people who were not included in the
sample.
Snowball Samples
If you’re researching a rare population, sometimes the only
feasible way to find its members is by asking others. First of all,
you somehow have to find a few members of the population by any method you can. That is the first round.
You now ask each of these first-round members if they know
of any others. These names form the second round.
Then you go to each of those second-round people, and ask
them for more names.
Keep repeating the process, for several more rounds. The
important thing is knowing when to stop. For each round, keep
a count of the number of names you get, and also the number
of new names - people you haven’t heard about before.
Calculate the number of new names as a percentage of the total
number of names. For example, if one round gives you 50
names, but 20 are for people who were mentioned in earlier
rounds, the percentage of new names for that round is 60%.
You’ll probably find that the percentage of new names rises at
first, then drops sharply. When you start hearing about the
same people over and over again, it’s time to stop - perhaps
when the percentage of new names drops to around 10%. This
is often at the fourth or fifth round.
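Here is a small Python sketch of that stopping rule, using invented names and a 10% cut-off:

```python
def percent_new(names, seen):
    """Percentage of this round's names not heard in earlier rounds."""
    new = [n for n in names if n not in seen]
    seen.update(names)
    return 100 * len(new) / len(names)

seen = set()
rounds = [                                 # hypothetical rounds of referrals
    ["Asha", "Bina", "Chand"],             # all new: 100%
    ["Bina", "Dev", "Esha", "Firoz"],      # 3 of 4 new: 75%
    ["Asha", "Dev", "Gita", "Chand"],      # 1 of 4 new: 25%
    ["Bina", "Esha", "Gita"],              # 0 of 3 new: 0% -> stop
]
for i, names in enumerate(rounds, 1):
    pct = percent_new(names, seen)
    print(f"Round {i}: {pct:.0f}% new names")
    if pct <= 10:
        print("Hearing the same people again - time to stop.")
        break
```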
You now have something close to a list of the whole population (and many of them will know that you’re planning some
research).
Snowball sampling requires a lot of work, if the population is
large, because you need to draw up an almost-complete list of
the population. So this method works best when a population
is very small. But if the population is small enough to list every
member without a huge amount of work, you could do a
census, rather than a sample: in other words, contact all of
them.
Snowball sampling works well when members of a population
know the other members. For example, if you are studying
people who speak a minority language, or who share some
disability, there’s a good chance that most of them know of
each other. The biggest problem with snowball sampling is that
isolated people, who are not known to other members of the
population, will not be included in your study, because you’ll
never find out about them. In the case of minorities, sometimes the more successful members will blend into the ruling
culture, feeling no need to communicate with other members
of that minority. So if you survey only the ones who know
each other, you may get a false impression. A partial solution to
this is to begin with a telephone directory or other population
list. If people in that population have some distinctive family
names, you can find them in the directory, and take those
people for the first round.
Stratification
The simplest type of sampling involves drawing one sample
from the whole survey area. If the coverage area of a radio
station is a large town and its surrounding countryside, there
may be a population list that covers the whole area - an electoral
roll, perhaps. If you want to select (say) 50 random addresses as
starting points for a door-to-door cluster survey, you could
simply pick 50 addresses from the population list.
This is simple, but there’s a slight danger that all 50 addresses
may be in the same part of the coverage area.
Stratification is easy to do, and you should use it whenever
possible. But for it to be possible, you need to have (a) census
data about smaller parts of the whole survey area, and (b) some
way of selecting the sample within each small area. For example,
if you were using a telephone directory as a sampling frame,
each residential listing might show the suburb where that
number was. (It doesn’t matter if the person mentioned in the
listing still lives there - you use a telephone directory as a list of
addresses, not people.) In this case, you’d need census data on
the number of households in each suburb, to be able to use
stratification effectively.
The principle of stratification is simply that, if an area has X% of the population, it should also have X% of the interviews. In practice the numbers will seldom divide exactly, so the interview counts must be rounded up or down - that's close enough. However, having different cluster sizes often confuses the interviewers, so you'd need two slightly different sets of interviewer instructions.
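In code, the X% principle is simple proportional allocation. A minimal Python sketch, with invented district populations and a survey of 200 interviews:

```python
# Hypothetical populations of the areas (strata) in the survey region.
districts = {"North": 40_000, "South": 25_000, "East": 20_000, "West": 15_000}
total_interviews = 200

population = sum(districts.values())
for name, people in districts.items():
    share = people / population               # X% of the population...
    n = round(share * total_interviews)       # ...gets X% of the interviews
    print(f"{name}: {share:.0%} of population -> {n} interviews")
# Rounding can leave the total an interview or two away from 200;
# in practice the cluster sizes are simply rounded - that's close enough.
```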
LESSON 7:
SAMPLING - MULTI-STAGE SAMPLING
Topics Covered
Populations, Sampling frames, Samples, Random sampling, Sample size, Sample selection, Cluster samples, Selecting
respondents
Objectives
Upon completion of this Lesson, you should understand:
• Populations.
• Sampling frames.
• Samples.
• Principles of random sampling.
• Choosing a sample size.
• Selecting starting points for surveys.
• Cluster samples in door-to-door surveys.
• Selecting respondents within households
Multi-stage Sampling
With door-to-door surveys, sampling is done in several steps.
Often, the first step is stratification. For example, census data
can be used to select which districts in the survey area will be
included. In the second step, random sampling could be used,
but each district might need to be treated separately, depending
on the information available there. This would decide which
households would be surveyed. The third step would involve
sampling individuals within households, perhaps using quota
sampling.
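A minimal Python sketch of these three steps, with invented district and household names (the within-household step is only indicated):

```python
import random

# Hypothetical frame: households grouped by district.
frame = {
    "Rampur":  ["R-house-1", "R-house-2", "R-house-3", "R-house-4"],
    "Sitapur": ["S-house-1", "S-house-2", "S-house-3"],
}

# Step 1 (stratification): include every district in this tiny example.
for district, households in frame.items():
    # Step 2 (random sampling): pick households within the district.
    for house in random.sample(households, 2):
        # Step 3 (within the household): one respondent is chosen,
        # often by a quota rule such as alternating men and women.
        print(f"{district}: interview one member of {house}")
```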
Random Sampling
The Concept of Randomness
Before we discuss random sampling, you need to be clear about
the exact meaning of “random.” In common speech, it means
“anything will do”, but the meaning used in statistics is much
more precise: a person is chosen at random from a population
when every member of that population has the same chance of
being sampled. If some people have a higher chance than
others, the selection is not random. To maximize accuracy,
surveys conducted on scientific principles always use random
samples.
Imagine a complete list of the population, with an entry for
every member: for example, a list of 1500 members of an
organization, numbered from 1 up to 1500. Suppose you want
to survey 100 of them. To draw a simple random sample,
choose 100 different random numbers, between 1 and 1500.
Any member whose number is chosen will be surveyed. If the
same number comes up twice, the second occurrence is ignored,
as nobody will be surveyed more than once. So if the method
for selecting random numbers can produce the same number
twice, about 110 selections will need to be made to get 100
people.
Another type of random sampling, called systematic sampling,
is more commonly used. This ensures that no number will
come up twice. No matter how many thousands of people you
will interview, you need only one random number for systematic sampling.
In the above example, you are surveying 1 member in 15. Think
of the population as divided into 100 groups, each with 15
people. You need to choose one person from each group, so
you choose a random number between 1 and 15. Let’s say this
number is 7. You then choose the 7th person in each group. If
the members were numbered 1-15 in the first group, 16-30 in
the second, 31-45 in the third, and so on, you’d interview
people with numbers 7, 22, and 37 - adding 15 each time.
Exactly 100 members would be chosen for the survey, and their
numbers would be evenly spread through the membership list.
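A minimal sketch of systematic selection in Python, using the 1-in-15 example above:

```python
import random

population_size = 1500
sample_size = 100
interval = population_size // sample_size    # 15: one person per group of 15

start = random.randint(1, interval)          # the single random number needed
chosen = list(range(start, population_size + 1, interval))

print(len(chosen))    # exactly 100 members
print(chosen[:3])     # e.g. [7, 22, 37] when the random start is 7
```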
Sources of Random Numbers
The commonest source of random numbers in most countries
is the serial numbers on banknotes. There can be no bias in
using the last few digits of the first banknote you happen to
pull out of your pocket, because there should be an equal
chance of drawing each possible combination of numbers.
Other sources of unpredictable large numbers (from which you
can use the last few digits) include lottery results, public
transport tickets, even stock market indexes.
You can also cheat. With systematic sampling, only one random
number is needed. Just think of a number, between 1 and the
upper limit. Though statisticians would frown, it will probably
make no difference to the results.
Principles of Random Sampling
The essential principle in survey research is that everybody in the
population to be surveyed should have an equal chance of
being questioned. If you do a survey, and everybody had an
equal chance of inclusion, you’re in a position to estimate the
accuracy of your results.
Every survey has sampling variation. If you survey 100 people,
and get a certain result, this result will be slightly different than
if you had surveyed another group of 100 people. This is like
tossing coins: if you toss a coin 100 times, you know that there
should be 50 heads and 50 tails. But the chances are quite strong
(92 in 100, to be exact) that you won’t get exactly 50 heads and
50 tails. However, the chances of getting 0 heads and 100 tails
are practically nonexistent.
Using statistical techniques, it’s possible to work out the exact
chances of every possible combination of heads and tails. For
example, there are 680 chances in 1000 that you’ll get between 45
and 55 heads in 100 throws. (If you doubt this, find 100 coins,
throw them 1000 times, and see the result for yourself!)
In the same way, even though you know the results from a
survey are not exactly accurate, they are probably pretty close —
but only if every member of the surveyed population had an
equal chance of being included in the survey.
To estimate how much sampling error there is likely to be in a
survey result, use the following table. “Standard error” means
(roughly) the average difference between the true population figure and the figure found by a survey of that size.
Table of standard errors

% of sample giving        Sample size (no. of interviews)
this answer               100      200      400      800
5 or 95%                  2.2%     1.6%     1.1%     0.8%
10 or 90%                 3.0%     2.1%     1.5%     1.1%
15 or 85%                 3.6%     2.5%     1.8%     1.3%
20 or 80%                 4.0%     2.8%     2.0%     1.4%
30 or 70%                 4.6%     3.3%     2.3%     1.6%
40 or 60%                 4.9%     3.5%     2.4%     1.7%
50%                       5.0%     3.5%     2.5%     1.8%

The table is based on this formula:

n = p x q / SE²

where:
n is the sample size: the number of people interviewed.
p is the percentage answering Yes to the question.
q is the percentage not answering Yes to the question.
SE is the standard error as shown in the table above.

An Example
You guess that maybe a quarter of all people listen to your station, so p is 25%, and q is 75%. You want the figure to be correct within 3%: if you do find a figure of 25% who listen, you want to make sure the true figure is between 22% and 28%. So to calculate the required sample size:

n = 25 x 75 / (3 x 3) = 208
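If you have a computer handy, this calculation is a few lines of code. Here is a minimal Python sketch reproducing the example above (the function name is mine, not a standard one):

def sample_size(p, se):
    # n = p x q / SE^2, with p and q as percentages
    q = 100 - p          # percentage not answering Yes
    return p * q / se ** 2

print(round(sample_size(25, 3)))   # 208, as in the example above
print(round(sample_size(25, 1)))   # 1875: a 1% tolerance needs a far larger sample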
This formula (which I have over-simplified slightly) is useful in
working out how big a sample size you need for a given survey.
But to calculate the sample size you first have to know roughly
how many people will answer Yes to the question, and also
decide how large a standard error you can tolerate. For beginners, this is not simple. Another problem is that samples
calculated in this way can be horrifyingly large. For example, if
you changed the tolerance from 3% to 1% in the above
example, you’d have to interview 1875 people. Yet another
problem is that every question in a survey may require a
different sample size.
When using the above table, think of each question as having
two possible answers. Although a question may have more
than two answers (e.g. age groups of under 25, 25 to 44, and 45
or over), the number can always be reduced to two, conceptually.
For example, suppose 20% of a sample is in the 25 to 44
group. Therefore, the other 80% is in the “not 25 to 44” age
group. The margin of error on this 20/80 split is 4%, so the
true population figure is likely to be anywhere between 16% and
24%. There is one chance in three that it will be outside this
range, and 1 chance in 20 that it will be outside twice this range: i.e.
less than 12 or more than 28%.
If all that sounds too difficult, just assume that the margin of
error is 5%, on any result. For example, if a survey finds that
25% of the population listen to your station, it’s likely that the
true figure will be somewhere between 20% and 30%. (Likely but not certain - because there’s a small chance that the true
figure could be less than 20% or more than 30%. A well-known
saying among statisticians is “statistics means never having to
say you’re certain.”)
Always remember that the above table shows only sampling
error, which is fairly predictable. There could also be other,
unpredictable, sources of error.
Note in the above table that the margin of error for 400
interviews is always half that for 100. This means that to halve
the error in a survey, you must quadruple the sample size. So
unless you have a huge budget, you must learn to tolerate
sampling error.
Choosing a Sample Size
There are several ways to choose a sample size: you can either
calculate it from a formula, or use a rough “rule of thumb.”
The formula given earlier (n = p x q / SE²) links the sampling error to the sample size.
In an ideal world, you’d calculate the sample size for a survey as
shown above, and cost would never be a problem. However, as
most surveys are done to a budget, your starting point in
practice may not be how much error you can tolerate, but rather
how little error you can get for a given cost.
To do this, you need to divide the cost of the survey into two
parts:
• a fixed part, whose cost is not proportional to sample size,
and
• a variable part, for which the cost is so much per member of
the sample.
Once you have allocated a proportion of the total budget to the
fixed cost, and estimated the cost of getting back each completed questionnaire, you can calculate the affordable sample
size.
But what if you don’t know the survey cost, and have to
recommend a sample size? This is where the rule-of-thumb is
useful.
For the majority of surveys, the sample size is between 200 and
2000. A sample below 200 is useful only if you have a very low
budget, and little or no information on what proportion of the
population engages in the activity of most interest to you — or
if the entire population is not much larger than that. A sample
size over 2000 is probably a waste of time and money, unless
there are subgroups of the population, which must be studied
in detail.
If you don't absolutely need such large numbers, and have
more funds than you need, don't spend the surplus on increasing the
sample size beyond the normal level. Instead, spend it on
improving the quality of the work: more interviewer training,
more detailed supervision, more verification, and more pretesting. Better still, do two surveys: a small one first, to get
some idea of the data, then a larger one. With the experience
you gain on the first survey, the second one will be of higher
quality.
The sample size also depends on how much you know about
the subject in question. If you have no information at all on a
subject, a sample of only 100 can be quite useful, though its
standard error is large.
Rule of Thumb
Are you confused about which sample size to choose? Try my
rule of thumb:
Condition                                               Recommended sample
No previous experience at doing surveys;                100 to 200
no existing survey data.

Some previous experience, or some previous data;        200 to 400
want to divide the sample into sets of 2 groups
(e.g. young/old, male/female).

Have previous experience and previous data;             400 to 600
want to divide the sample into sets of up to 4 groups;
want to compare with previous survey data.
A Common Misconception
Consider this question: if a survey in a town with 10,000
people needs a sample of 400 for a given level of accuracy, what
sample size would you need for the same level of accuracy in the
whole country, with a population of 10,000,000? (That’s 1000
times the population of the town.)
Did you guess 400,000? Many people do. The correct answer is
400.4 - you might as well call it 400.
The formula I gave above isn’t quite complete. The full version
has what’s called the finite population correction (or FPC) added
to the end, so the full formula is:
n = (p x q / SE²) x (N - n) / N
where N is the population. Unless the sample size is more than
about 5% of the population, the (N-n)/N bit (the FPC) makes
almost no difference to the required sample size.
Is that too technical? Think of it another way. Imagine that you
have a bowl of soup. You don’t know what flavour it is. So
you stir the soup in the bowl, take a spoonful, and sip it. The
bowl of soup is the population, and the spoonful is the
sample. As long as the bowl is well-stirred (so that each
spoonful is a random sample), the size of the bowl is irrelevant. If the bowl was twice the size, you wouldn’t need to take
two spoonfuls to assess the flavour: one spoonful would still
be fine. This is equally true for human populations.
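To see for yourself how little the FPC matters, solve the full formula for n: it rearranges to n = n0 x N / (N + n0), where n0 = p x q / SE² is the uncorrected sample size. A minimal Python sketch, reusing the p = 25%, SE = 3% example:

def corrected_sample_size(p, se, N):
    # Full formula with the finite population correction, solved for n.
    n0 = p * (100 - p) / se ** 2     # uncorrected sample size
    return n0 * N / (N + n0)

print(round(corrected_sample_size(25, 3, 10_000)))       # small town: 204
print(round(corrected_sample_size(25, 3, 10_000_000)))   # whole country: 208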
Nonrandom Sampling
Though random sampling is the ideal, sometimes it’s not
possible. In some countries, census information is either not
available, or so far out of date that it’s useless. Even when good
census data exists, there may be no maps showing the boundaries of the areas to which the data applies. And even when
there exist both good census data and related maps, there may
be no sampling frames.
The good news (from a sampling point of view) is that these
conditions usually apply in very poor and undeveloped
countries with large rural populations. In my experience, there’s
not a wide range of variation in these populations. This is a
difficult thing to prove, but I suspect that the more developed a
country, the more differences there are between its citizens. All
this is a way of saying that where random sampling is not
possible, perhaps it’s not so necessary.
The best solution I can think of is to use maximum diversity
sampling.
Maximum-diversity samples are normally drawn in several
stages, so they are multi-stage samples. The first stage is to
decide which parts of the population area will be surveyed. For
example, if a survey is to represent a whole province, and it’s
not feasible to survey every part of the province, you must
decide which parts of the province will be included. Let’s
assume that these parts are called counties, and you will need to
select some of these.
Maximum-diversity sampling works like this:
Stage 1
1. Think of all the ways in which the counties differ from the
province as a whole - especially ways relevant to the subject of
the survey. If the survey is about FM radio, and some areas
are hilly, reception may be poorer there. If the survey is
about malaria, and some counties have large swamps with a
lot of mosquitoes, that will be a factor. If the survey will be
related to wealth or education levels (as many surveys are), try
to find out which counties have the richest and best-educated
people, and which have the poorest and least-educated. Try
to think of about 10 factors which are relevant to the survey.
2. Try to find objective data about these factors. Failing that, try
to find experts on the topics, or people who have travelled
around the whole province. Using this information, for each
factor make a list of the counties which have a high level of
the factor (e.g. lots of mountains, lots of swamps, wealthy)
and counties which have a low level (e.g. all flat, no swamps,
poor).
3. The counties mentioned most often in these lists of
extremes should be included in the survey. Mark these
counties on a map of the province. Has any large area been
omitted? If so, add another county, which is as far as
possible from all the others mentioned.
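If the lists of extremes are typed into a computer, the counting in step 3 can be done mechanically. A minimal Python sketch, with invented factors and county names:

from collections import Counter

# Each factor maps to the counties at its extremes (hypothetical data).
extreme_lists = {
    "hilly terrain": ["North", "East"],
    "flat terrain": ["South"],
    "large swamps": ["South", "West"],
    "wealthiest": ["North"],
    "poorest": ["West", "South"],
}

mentions = Counter()
for counties in extreme_lists.values():
    mentions.update(counties)

# Counties mentioned most often are the candidates for the survey.
for county, count in mentions.most_common():
    print(county, count)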
Stage 2
When the counties (or whatever they are called) have been
chosen, the next stage is to work out where in each county the
cluster should be chosen. Continue by applying the same maximum-diversity principle in each county as in stage 1. If a county was chosen for its swampiness and flatness, choose the flattest and swampiest area in the county. If it was chosen for its mountains and wealth, choose a wealthy mountainous area.
To find out where these areas are, you will probably need to
travel to that county and speak to local officials. Sometimes you
then find that there are local population lists - e.g. lists of all
houses in the area. In that case, you might be able to use
random sampling for the final stage. If there are no population
lists you can use, the surveyed households will have to be
chosen by block listing, aerial photographs, or radial sampling - see section ii for details of these methods.
Maximum diversity sampling can produce samples that are as
representative as random samples. The problem is that you can
never be sure of this.
Assignment
1. How would you distinguish between simple random sampling
and cluster sampling?
2. Describe some important applications and uses of computers
in present-day research.
LESSON 8:
QUESTIONNAIRES
Topics Covered
Planning, Types, Wording, Writing, Testing, Layout
Objectives
Upon completion of this Lesson, you should understand:
• Planning the questionnaire.
• Types of question.
• Question wording.
• How to write a questionnaire.
• Program testing.
• Questionnaire layout.
• Testing questionnaires.
Key Preparation
Before you start to design your questions, clearly articulate what
problem or need is to be addressed using the information to be
gathered by the questions. Review why you’re doing the
evaluation and what you hope to accomplish by it. This
provides focus on what information you need and, ultimately,
on what questions should be used.
Directions to Respondents
1. Include a brief explanation of the purpose of the
questionnaire.
2. Include clear explanation of how to complete the
questionnaire.
3. Include directions about where to provide the completed
questionnaire.
4. Note conditions of confidentiality, e.g., who will have access
to the information, if you're going to attempt to keep their
answers private and accessed only by yourself and/or
someone who will collate answers. (Note that you cannot
guarantee confidentiality: if a court ordered the answers to be
produced, you would not likely be able to stop access to
this information. However, you can assure respondents that you will
make every reasonable attempt to protect access to their
answers.)
Content of Questions
1. Ask about what you need to know, i.e., get information in
regard to the goals or ultimate questions you want to
address by the evaluation.
2. Will the respondent be able to answer your question, i.e., do
they know the answer?
3. Will respondents want to answer the question, i.e., is it too
private or silly?
Wording of Questions
1. Will the respondent understand the wording, i.e., are you
using any slang, cultural-specific or technical words?
2. Are any words so strong that they might influence the
respondent to answer a certain way? Attempt to avoid use of
strong adjectives with nouns in the questions, e.g., “highly
effective government,” “prompt and reliable,” etc.
3. To ensure you’re asking one question at a time, avoid use of
the word “and” in your question.
4. Avoid using “not” in your questions if you’re having
respondents answer “yes” or “no” to a question. Use of
“not” can lead to double negatives, and cause confusion.
5. If you use multiple choice questions, be sure your choices are
mutually exclusive and encompass the total range of
answers. Respondents should not be confused about
whether two or more alternatives appear to mean the same
thing. Respondents also should not have a clearly preferred
answer that is not among the alternative choices of an
answer to the question.
Order of Questions
1. Be careful not to include so many questions that potential
respondents are dissuaded from responding.
2. Attempt to build respondents' motivation to complete
the questionnaire. Start with fact-based questions and then
go on to opinion-based questions, e.g., ask people for
demographic information about themselves and then go on
to questions about their opinions and perspectives. This gets
respondents engaged in the questionnaire and warmed up
before more challenging and reflective questions about their
opinions. (Consider if they can complete the questionnaire
anonymously; if so, indicate this on the form where you ask
for their name.)
3. Attempt to get respondents’ commentary in addition to
their ratings, e.g., if the questionnaire asks respondents to
choose an answer by circling it or providing a rating,
ask them to provide commentary that explains their choices.
4. Include a question to get respondents’ impressions of the
questionnaire itself. For example, ask them if the
questionnaire was straightforward to complete ("yes" or
"no"), and if not, to provide suggestions about how to
improve the questionnaire.
5. Pilot or test your questionnaire on a small group of clients
or fellow staff. Ask them if the form and questions seemed
straightforward. Carefully review the answers on the
questionnaires. Does the information answer the evaluation
questions or provide what you want to know about the
program or its specific services? What else would you like to
know?
6. Finalize the questionnaire. Finalize the questionnaire
according to results of the pilot. Put a date on the form so
you can keep track of all future versions.
Types of Information Collected by Questions
Questions are geared to find out what people know, did, feel
and think.
1. To find out what information they know, ask them to
describe something, e.g., “Please describe ...”
2. To find out what they feel, ask them, e.g., “How do you feel
about ...?” or “How did you feel when ...?”
3. To find out what they think, ask them for their opinion on
something, e.g., “How could the .. be improved?”
4. To find out what they did, ask them to describe an activity
they did.
Two Types of Questions
1. Open-ended: No options are provided for the respondent
to answer the question. They must think of their own
response and describe it in their own words. If respondents
have and take the time to reflect on answers to the question,
you can get more meaningful information than from closed
questions.
2. Closed: The respondent is given a set of alternative choices
from which he or she can choose to answer the question, i.e.,
“yes,” “no,” multiple choice, a rating, ranking, etc. Closed
questions can usually be answered quickly, allowing you to
get a lot of information quickly. However, respondents
may rush through the questions and not take enough time
to think about their answers. Your choices may not include
the answer they prefer.
How you configure your questions together depends on
whether they’re used in questionnaires, interviews or focus
groups.
Principles of Questionnaires
This chapter explains how to construct a questionnaire, mainly
for use in surveys. Other types of audience research don’t use
questionnaires much.
A questionnaire is a strange type of communication. It’s like a
play, in which one actor (the interviewer) is following rules and
reading from the script, while the other actor (the respondent)
can reply however he or she likes - but only certain types of reply
will be recorded. This is an unnatural social situation, and in
countries with no tradition of this kind of conversation,
respondents may need to have the principles explained to them.
Though it is easy to write a questionnaire, you need a lot of skill
and experience to write a good questionnaire: one in which every
question is clear, can be answered accurately, and has usable
results.
1. Planning the questionnaire
Working Out what you Need to Know
It seems to be a natural human tendency to jump into action:
to start writing a questionnaire the moment you decide to do a
survey. However, better questionnaires result from planning the
structure before you start writing any questions. If you simply
start writing questions, you are likely to find out, too late, that
some important questions were omitted, and other questions
were not asked in a useful way.
It’s important to distinguish between questions that a respondent is to answer (questionnaire questions), and questions that
you (the publisher or organization) need answers to (internal
questions - sometimes called research questions). The questions
you ask yourself are usually unsuitable for asking respondents
directly. This is a problem with a lot of questionnaires written
by beginners.
Some of your internal questions might be:
1. What sorts of people tune in to our station?
2. How long do they tune in for?
3. What are the most popular programs?
4. If we introduced a talkback program, would this bring a
larger audience?
Often, one internal question will need several questionnaire
questions. Sometimes, one questionnaire question may help to
answer several internal questions.
I suggest you draw up a large table, with three columns, headed
Our question, How results will be used, and Priority - like this:
Our question                      How results will be used     Priority
What sorts of people tune in      Background information       2
to our station?
If we introduced a talkback       If Yes: go ahead with        1
program, would this bring a       the program
larger audience?
The priority column is there to help you reduce the number of
questions, if the questionnaire is too long: low priority
questions can be omitted.
How do you create such a table? And how can you make sure
you don’t miss any important internal questions? I suggest that
many staff be involved in creating internal questions. The more
people who are involved, the better the questionnaire will be
(even though it may take longer to develop). An excellent
method of working out internal questions is to hold a
discovery conference.
Make this table on a very large sheet of paper and put it on the
wall in a prominent place, where people will notice it, and be
able to add suggestions. Later, you can add a fourth column, to
show which questionnaire questions correspond with which
internal questions.
When you have worked out what you want to know, and with
what priority, then it is time to begin writing a questionnaire.
Choosing a Questionnaire Type
There are two main types of questionnaire: spoken and written.
With a spoken questionnaire, interviewers read the questions
aloud to respondents, and the interviewers fill in the answers.
With written questionnaires, there are no interviewers. Respondents read the questions, and fill in their own answers.
For surveys of the whole population, it is normally best to use
interviewers. Response rates are higher, fewer questions go
unanswered, there is no assumption that all respondents can
read, and results come back more quickly.
Written questionnaires are best when the surveyed population
is highly literate, and most respondents know of the surveying
organization - which is usually true for a media organization. If
most of the population have never heard of the organization,
the response rate is likely to be very low.
A suitable situation for written questionnaires is when an
organization surveys its staff. Mail panels, with an educated
population, can also work well, when people have agreed in
advance to be surveyed.
Deciding the Questionnaire Length
The length of a spoken questionnaire is the time it will take
people to answer, on average. There can be tremendous
variation between respondents, but a skilled interviewer can
hurry the most talkative people and encourage the most reticent,
reducing the variation. The questionnaire for your first survey
should be fairly brief, so that the average person will take no
more than 5 or 10 minutes to answer. An interviewer can
usually go through about two single-answer questions in a
minute, or 1 multiple-answer question. In 10 minutes, about
15 questions can be asked.
When interviewers are skilled and the questionnaire is interesting and not too difficult, a face-to-face interview can often take
up about 30 minutes. Telephone questionnaires should not last
more than about 15 minutes, on average. Both interviewers and
respondents find it much harder to concentrate on the telephone.
For printed questionnaires, a good maximum length is an A3
piece of paper, folded once to form a 4-page A4 leaflet. About
20 questions of average length will fit on a questionnaire of this
size. Beyond that, pages must be stapled together, response
rates fall off, return postage costs more, and it is generally a lot
of trouble. So if at all possible, keep a written questionnaire down
to 4 A4 pages, perhaps with a separate covering letter.
It’s possible to use much longer questionnaires than the figures
given above, but skilled questionnaire design is needed. Even
so, the concentration of both interviewer and respondent tends
to drop off towards the end of a long questionnaire. And if
the questionnaire is very long, it takes weeks to analyse the
results.
On the other hand, if the questionnaire is too short, respondents can be disappointed, and feel they haven’t been able to
give their full opinions. There’s no advantage for a spoken
interview to take less than about 5 minutes, or a written
questionnaire less than 2 pages.
Satisfying respondents is an important consideration in
designing a questionnaire. This is specially true in a small
community. If people say that your questionnaire was frustrating to complete, you may have a high refusal rate for your next
survey.
Sequence of Questions
The first few questions will set the scene for the respondent. It’s
important, at the beginning, to have some questions that are
both interesting and easy to answer. As rapport gradually builds
up between interviewer and interviewee, more difficult and
personal questions can be asked.
In a good questionnaire, the questions will seem to flow in a
logical order. Any break in this logical order will be punctuated
by a few words of explanation from the interviewer, such as
“Now a few questions about you.” Such verbal headings
should be used every few minutes, to let the respondents know
what they’ll be asked about.
The questions should move gradually from the general to the
specific; this is called funnelling. For example, you may want to
ask a question on attitudes towards the radio stations in your
area, and also some questions about your own station’s
programs, without asking about the other stations’ programs.
At the beginning of the questionnaire, the respondents
shouldn’t know which station is conducting the survey. So if all
those questions about your own station are asked first,
respondents will think “Aha! So that’s the station which is
doing the survey!” Then, when it comes to the comparison of
stations, the respondents will seem to favour the station that
has organized the survey. Therefore, the question comparing
the stations should come before the specific question on
programs.
In planning a small questionnaire, it’s usually helpful to
determine which question (or small group of questions) is the
most important, and to build the questionnaire around this —
leading up to the most important question, and away from it
again.
The more sensitive a question, the closer it should be to the end
of the interview, for two reasons: firstly, rapport takes time to
build up, and secondly, if a respondent does get offended and
refuses to go on with the interview, little information will be
lost. Therefore, the demographic questions normally come close
to the end of the questionnaire.
At the end of a questionnaire, I normally include a very general
open-ended question, such as “Is there anything you’d like to
add?” or “Would you like to make any comments?” Not many
respondents have much to say at this point, but if a number of
them make similar comments, this is perhaps a sign that you
omitted a question that respondents think is important. So a
question like this is a quality-control check: often more useful in
the next survey than in the current one.
How to Write a Questionnaire
Questionnaire-writing should not be rushed, so don’t set
artificial deadlines. It’s common for a questionnaire to be
rewritten 10 times before it is ready to go. If, as a novice
researcher, you think the questionnaire is perfect after only one
or two rewritings, you probably haven’t checked it enough.
It’s important not to get too emotionally involved with a
questionnaire. When you have drafted a questionnaire, don’t
think of it as “your” questionnaire, to be defended at all costs
against attacks by others. Good questionnaires are group efforts
- the more people who check them and comment on them, the
better the questionnaires become.
Another difficulty is that, after you have written several drafts,
it’s hard for you to see what’s really there, because you’re
remembering what was there in the earlier drafts. This is a good
time to invite different people to comment on the latest draft.
Experienced interviewers are among the best people to consult
on questionnaire wording, because of their experience with
hearing respondents answer many questions.
When you are writing a questionnaire, you will spend a lot of
time re-typing and re-ordering questions. If you can use a word
processor for updating the drafts, you’ll save a lot of time.
Most modern word processing programs have Outline features
built in. I suggest you learn to use outlining. It is not difficult
to set up, and makes it very easy to rearrange the order of any
text with headings and sub-headings.
If you don’t have a word processor, type each question on a
separate piece of paper - this makes it much easier to insert new
questions, or change the sequence.
At some point, the development of a questionnaire must stop. Among argumentative people, it's possible never to reach a point where all can agree on a questionnaire. Even the most perfect questionnaire can be criticized along the lines that "You can't word the question that way, because some people might answer such-and-such." The real issue is not whether it is possible to misunderstand a question, but what proportion of respondents are likely to misunderstand it. This can only be known from experience with that type of question. However, any question can be misunderstood by some people - if they try hard enough.

Another problem which can never be solved is how detailed a question should be. When a small number of people are likely to give a particular answer, should a separate category be provided? For example, if you are recording the education level of each respondent, should you include a category for "postgraduate degree" - which might apply only to one person in 100? The answer depends both on the number of people in the category, and on their importance. If the survey was mainly about education, you probably would include that category, but in a media usage survey of the whole population it would probably be unnecessary. The safe solution is to include an "other" category, and ask interviewers to write in details of whatever "other" answers turn up.

Much of the value of a survey depends on the sensitivity of the interviewers. An interviewer who feels that a respondent may have misunderstood a question will probe and re-check. In this way, competent interviewers can compensate for a poorly worded questionnaire. But don't rely on this - you'll certainly get answers to a poorly worded question, if the interviewers are thorough - but the answers may not apply to the exact question that was printed.

It's useful to end a questionnaire with a broad open-ended question such as "Is there anything else you'd like to say that hasn't come up already?" This gives respondents an opportunity to communicate with you in their terms (the rest of the questionnaire has been on your terms). Though many replies to such a question will be irrelevant, you'll often find interesting and thought-provoking comments, which can be turned into questions for future surveys.

Types of Question
Questions can be described in several ways: by content, and by format. This section deals with different types of content; the next with different question formats.

Substantive Questions
A substantive question is one about the substance of the survey - the topics you want to know about. These are likely to be different for every survey, and for every organization. This seems so obvious that it's hardly worth mentioning - but there are other types of question too.

Filter Questions
In most surveys, there are some questions which do not apply to everybody. For example, if some respondents have not heard a radio program, there is no point in asking their opinion of it. So the question on hearing the program would be used as a filter. On a written questionnaire, it would look like this:

Q16. Have you heard the program?
1 Yes -> ask Q17
2 No, or don't remember -> skip to Q18
Q17. What is your opinion of the program?
1 Like it
2 Don't care
3 Dislike it
Ask all:
Q18. etc.

Question 16 is the filter question, because the people who answer No are filtered out of answering question 17, which asks about the program.
Any question whose answers determine whether another
question is asked is known as a filter question. Sometimes (as
discussed above, in the chapter on sampling) the whole
questionnaire will be directed at only a certain category of
people. In such a case, there will be a filter question right at the
beginning. Depending on their answer to this question, people
either answer all the other questions, or answer none. Such
questionnaires take a lot less time for some respondents than
for others.
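Skip patterns like these are easy to get wrong on paper; if a questionnaire is administered on a computer, the routing can be written as a small table. A minimal Python sketch, modelled on the Q16-Q18 example above (the routing table itself is only illustrative):

routing = {
    ("Q16", "Yes"): "Q17",                   # heard the program: ask opinion
    ("Q16", "No or don't remember"): "Q18",  # filtered out of Q17
    ("Q17", None): "Q18",                    # "Ask all" after Q17
}

def next_question(current, answer):
    # Use the answer-specific route, else the unconditional one.
    return routing.get((current, answer), routing.get((current, None)))

print(next_question("Q16", "Yes"))                    # Q17
print(next_question("Q16", "No or don't remember"))   # Q18
print(next_question("Q17", "Like it"))                # Q18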
When the sample is small, you should make sure that filter
questions do not exclude too many people. Suppose you want
to ask three questions about a program, but exclude nonlisteners to the program. There are many ways of defining
non-listeners. For example, your filter question could be any of
these:
1. Have you ever in your life listened to the program?
2. Have you listened to the program in the last year?
3. Have you listened to the program in the last month?
4. Do you listen to the program on most days of the week?
5. Have you listened to the program on every day in the last year?
If you define listeners as those who said Yes to the 5th version,
those people will be very well informed about the program, but
you may find only a few percent of respondents answering the
main questions about the program, because everybody else has
been filtered out.
At the other extreme, you could include everybody who had
ever listened to the program. Plenty of people would answer
the questions about the program, but the opinions of some of
them would be based on episodes they heard years ago.
The best solution is often to ask a filter question with a range
of answers, not only Yes or No, e.g.
Thinking of the program, about when did you last listen to a
full episode? In the last week? The last month? The last year?
Longer ago than a year?
All people who had listened in the last year would be asked the
questions about the program. It would then be possible to
compare the opinions of recent and not-so-recent listeners.
Demographic Questions
Most questionnaires include a number of demographic
questions. These are questions about the respondent’s characteristics and circumstances. Questions about sex, age group,
occupation, education, household type, income, religion, are all
demographic. These are included in surveys for two main
reasons:
• As a check on the accuracy of the survey sample. If there are 50% males and 50% females in the population, then 50% of the people surveyed should be of each sex. (In practice, most surveys end up with a slight excess of females - if only because they spend more time at home.)
• For comparison with answers to the substantive questions of the survey - e.g. to find out the age and sex balance of listeners to a particular radio station.

For surveys with small samples (up to 100 respondents) the number of respondents will be too few for these comparisons. If you split 100 people into six age groups, some age groups will probably contain fewer than 10 people. Looking at the distribution of station listening in each age group may mean comparing 3 people in one age group with 5 in another. These numbers are too small to prove anything at all. Even with a large sample, there's seldom much value in dividing people into more than 6 demographic categories.

You should also compare your survey results with census figures. Most censuses ask questions about age group, sex, where people live, and the proportion of respondents who work. It's best to avoid asking about characteristics which many people regard as private, such as income or religion: answers are often inaccurate or misleading. Also, such questions upset some people, unless they can see a reason for them. For example, including questions on religious programs would justify a question about the respondent's religion.

An interviewer does not need to ask some "questions," such as the sex of the respondent, and the area where the person lives. The answers to such items are already known, and can simply be written on the questionnaire.

Control Items
These are not real questions, but other data gathered by the interviewer and recorded on the questionnaire. As already mentioned, the respondent's sex and residential locality are usually written on questionnaires. Other information is often useful, such as:
• the date and time of the interview,
• the duration of the interview (have the interviewer record the starting and finishing times on the questionnaire),
• the number of other people interviewed at that address,
• the interviewer's name,
... and anything else which may affect the answers given. These control items usually appear at the beginning and end of the questionnaire. For written questionnaires, they can be entered before the questionnaire reaches the respondent.
Comparison Questions
It’s always interesting to compare results from different surveys.
If you can find data from an earlier survey conducted in your
area, or a survey on the same topic from anywhere else, you can
include some comparison questions in your survey. Copy the
exact question asked in the other survey, and find out how your
respondents compare with others. Demographic questions are
also comparison questions, when their results are used to
compare survey data with census data.
LESSON 9:
QUESTION FORMATS
Topics Covered
Planning, Types, Wording, Writing, Testing, Layout
Objectives
Upon completion of this Lesson, you should understand:
• Planning the questionnaire.
• Types of question.
• Question wording.
• How to write a questionnaire.
• Program testing.
• Questionnaire layout.
• Testing questionnaires.
There are several different styles of question. The most
common are multiple-choice and open-ended questions.
Multiple Choice Questions
Here is a typical multiple choice question: the respondent is
asked to choose one answer from several possibilities offered:
“Which radio station do you listen to most often: Radio City,
Radio Mirchi, or some other station?”
• Radio City
• Radio Mirchi
• Other
To answer the question, the interviewer ticks the appropriate
box. The three boxes are supposed to cover all possible choices.
But what if the respondent answers “I don’t listen to radio at
all”? That’s not really an “other” station, so we probably need a
fourth choice: “no station.”
If you must offer a large number of choices, and the respondent cannot be expected to think of the correct one, it helps to
divide a question with a large number of multiple choices into a
number of smaller questions. This greatly reduces the number
of possible answers to be read out. In practice, it is often simpler to make this a single open-ended question and, only if the respondent is unsure, to ask a few prompting questions, reading off a short list of possible stations.
Another alternative, when there are many possible answers to a
question, is to print them on a card, and hand this to respondents to choose their answers. But this cannot be done in
telephone surveys, or in places where many respondents are
illiterate.
Multiple-choice vs Multiple-answer
Don’t confuse multiple-choice questions with multiple-answer
questions. A multiple-choice question is one where the
respondent is told the possible answer choices. A multiple-answer question (which need not be multiple-choice) is one that
allows more than one answer. Whether a question can have one
answer or more than one depends on its meaning. Here is a
question with only one possible answer:
• “Which age group are you in?”
And here’s a multiple-answer question:
• “Please tell me the ages of your children.”
Multiple-answer questions must have at least one answer (even
if that is “does not apply”), but they can have many more than
that.
In a multiple-choice question, all possible answers must be
catered for. To account for unexpected answers, it’s usually a
good idea to include an “other” category — though it can be
annoying to find that “other” was the commonest answer. You
should try to keep "other" below 5% of the total, though this
is not always predictable. Pilot testing will help in revealing
common answers that should have been mentioned in a
multiple-choice question.
Often, a multiple-answer question is equivalent to a set of single-answer questions, as in this example (based on a place with 3 local radio stations):

a. Single answer series
Do you listen to Radio Mirchi at least once a week?
[ ] Yes [ ] No
Do you listen to Radio City at least once a week?
[ ] Yes [ ] No
Do you listen to RADIO RAINBOW at least once a week?
[ ] Yes [ ] No

b. Multiple-answer question
"Which radio stations do you listen to at least once a week?"
(Circle all that apply)
Radio Mirchi
Radio City
RADIO RAINBOW

A multiple-choice question normally needs a single answer. Sometimes multiple answers are valid (e.g. "Which of the following radio stations do you listen to?"), but when you're expecting one answer and get two, something is wrong. Probably the answer categories are not mutually exclusive.

With a questionnaire filled in by respondents, multiple choice questions can offer a large number of possible answers - the practical limit is about 50, or a full page. But when an interviewer reads out the questions, it is difficult for respondents to remember many of the possible answers when the interviewer recites these as a long list. I recommend offering no more than four choices, and limiting the total question length to about 25 words.
Sometimes respondents may be tempted to give all possible
answers to a question. This often applies to questions that ask
about reasons, e.g.
“Here are some reasons why people don’t listen to radio. Please
tell me the reasons that apply to you:
Not having a radio
Not knowing what stations there are
Not knowing what times the programs are on
Preferring TV
No time to listen to radio
Don’t like the programs
...etc
People tend to give several answers to such questions - but if
every respondent gives every possible answer, this doesn’t help
much. You can make respondents think a little harder by
limiting the number of answers to about 3 per person.
Respondents who give more than 3 answers can be asked
“Which are the three most important answers?”
Open-ended Questions with Limited Choice
Open-ended questions, as the name implies, are those where
the respondent can choose any answer. There are two types of
open-ended questions: limited choice and free choice. Limited
choice questions are essentially the same as multiple choice
questions, but those choices are not stated to the respondent.
For example, here’s a multiple-choice question.
• What is your marital status - never married, now married,
widowed, or divorced?
Here’s the limited choice version of the same question:
• What is your marital status? ...........
To answer a limited choice question, the interviewer either
ticks a box, or writes down the respondent’s answer.
As far as the respondent is concerned, the only difference
between multiple choice questions and limited choice questions
is that a list of possible answers is not given for limited choice
questions. The lack of prompting has two effects:
a. Some unexpected answers will be given.
b. Many people will not think of an answer they should have given.

Notice that the two versions of the above question began identically. In a spoken interview, the respondent's only cue that the question has finished is a pause. If, in the multiple-choice version, the interviewer pauses for too long after the word "status", many respondents will answer immediately, before hearing the choices. So where memory or recognition might be a problem, it is acceptable to ask a limited-choice question (without listing alternative answers), and then, if the respondent hesitates, to read out a list of possible answers. This converts a limited-choice question into a multiple-choice one.

Sometimes a question may have a limited - but very large - number of possible answers. Examples are "What is your occupation?" and "What is your favourite television program?" In both cases, it would be possible to list hundreds of alternative answers, but this is never done. There are two solutions: either the humble dotted line is called into service, or else pre-coded categories are provided. For example, occupations might be coded as white collar, blue collar, and other. This latter method is easier, but some information is lost. Worse still, interviewer error in this situation is both common and undetectable. When the respondent gives an occupation, the interviewer must decide its category within a few seconds, by ticking a box. It's much better if the interviewer writes down the occupation in full. Grouping of occupations can be done more accurately and consistently after all the completed questionnaires have been returned to the survey office.

When there are hundreds of possible answers, and more than one answer is possible, it's good to break the question into several groups, so that respondents don't forget something. So instead of asking "Which magazines have you read in the last month?", say "I'd like to ask you about magazines you have read or looked into in the last month. Think about magazines you have read at home, at work, at school, in a public building, or in somebody else's home. Think about magazines you often read, magazines you've seen occasionally, and magazines you'd never seen before."

Detailed wording, like that, will produce a much longer (and more accurate) list of magazines read. However, the question takes a lot longer, both to ask and to answer, than the one-sentence version.

Open-ended Questions with Free Choice
A free choice question is one which has a near-infinite number of possible answers. The questionnaire provides a dotted line (or several) on which the interviewer writes the answer in the respondent's own words. An example of a free choice question is:
• "What do you most like about Radio Mirchi's breakfast session?"
The problem with such questions is to make them specific enough. Just because a respondent did not give a particular answer, this does not necessarily mean that answer did not apply. Perhaps it did apply, but the respondent didn't think of it. Thus, the results of the above question could not be used to assess what respondents thought of the announcer on the Radio Mirchi breakfast session: a respondent may like the announcer very much, but like the news still more.

Therefore, if you have some particular emphasis in mind, the question wording must point respondents in that direction. Also, respondents should be encouraged to give multiple answers. A more specific way to ask such questions is in balanced pairs, e.g.
"Tell me some of the things you like about the Radio Mirchi breakfast session."
"And now please tell me some of the things you don't much like about the Radio Mirchi breakfast session."
With this format, any element, such as announcers or news, can have its number of likes and dislikes compared.

If an open-ended question is unclear to some respondents in a pilot test, consider explaining it. You can put a question into context by explaining why you are asking it, and what will be done with the information.
Numeric Questions
Questions answered with numbers are a special type of open-ended question. For example:
“What is your age?”
Enter number of years: .....
Though statistical software is designed mainly to handle
numbers, numeric questions are rare in audience surveys. Most
people are not good at giving precise numerical answers from
memory. For example, I once organized an event survey
including these two questions: “Which area do you live in?” and
“How many kilometres is that from here?” The answers could
be checked on a map. The average error in distance was more
than 20%.
Even when people in a survey are asked their exact age, you
always find unexpectedly large numbers aged 30, 40, 50, 60 and
so on: it seems that some people round off their age to the
nearest 10 (sometimes 5) years.
So if you’re doing a survey where accurate numbers are important, you can’t rely on respondents’ memory. If they are being
surveyed in their homes, the interviewer could ask to see
documents to check the figures. Though some respondents
may refuse, this method should produce more accurate results.
But do you really need this level of precision? Will 39-year-olds
really have different TV preferences from 38-year-olds? Will
people who live 7.2 km from the theatre be less likely to attend
than those who live 7.1 km away? Surely not - and unless the
sample size is very large, such differences would be barely
detectable. Therefore, most surveys ask about age groups, and
approximate numbers. Exact numerical answers are rarely
needed.
When to Ask Each Type of Question
You’ll find that once you have thought up a question, the form
it takes (whether multiple choice, limited-choice, or free choice)
is not related to its wording but to the number of possible
answers. Few questions can be easily converted from one of the
three types to another.
A good questionnaire needs both multiple choice questions
(with few possible answers) and free choice questions (to which
everybody could give a different answer). Multiple choice
questions are easily processed by counting, but provide little
detail. Free choice answers have a lot of detail, but the bulk of
that detail can be difficult to handle.
In professional surveys, with their sample sizes of several
thousand respondents, the free choice answers are always a
problem. Verbal responses take more time to process, and
computers can’t summarize them well. Thus, the most
common fate of questions with a large number of possible
answers is to have these answers divided into categories. A
coder reads all the answers, works out a set of categories (often
10 to 20), then decides which category each answer falls into.
The result of this process is a table showing the percentage of
people answering in each category, though each category is itself
a mixture of widely differing answers. In other words, to fit the
computer system, a lot of the information is lost by the
grouping of responses.
But when the sample size is less than about 500, the information need not be lost. Though it is still helpful to group the
open-ended answers (specially if you want to test some
hypothesis that you have), the volume of wording in the
answers is not too much to read through.
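When the answers have been typed in, even the grouping can be partly mechanized. A minimal Python sketch, with invented answers and keyword rules (a human coder should still review the "other" pile):

from collections import Counter

answers = [
    "I love the morning announcer",
    "the news is always up to date",
    "good music selection",
    "the announcer makes me laugh",
    "too many ads",
]

def code(answer):
    # Assign the first category whose keyword appears in the answer.
    for category, keyword in [("announcer", "announcer"),
                              ("news", "news"),
                              ("music", "music")]:
        if keyword in answer.lower():
            return category
    return "other"

counts = Counter(code(a) for a in answers)
for category, n in counts.most_common():
    print(category, round(100 * n / len(answers)), "%")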
The use of verbatim responses can partly substitute for a small
sample. For example, with a large-scale survey you might try to
find out why people listen to one radio station rather than
another, by cross-tabulating demographic categories against the
station listened to. With a small-scale survey, the equivalent
would be studying the open-ended reasons given by those who
prefer each station.
The implication of this for a small survey is to make maximum
use of open-ended questions. Compared with multiple choice
questions, less can go wrong with question wording, and the
mathematical skills needed for normal survey analysis are largely
replaced by verbal skills, which are more common among
broadcasters.
However, a survey with only open-ended questions will
produce no numerical results at all. The most useful information is produced when open-ended and multiple choice
questions are combined, in effect covering the same topic in
different ways. For example...
1. What do you most like about Radio Mirchi’s breakfast
session?
2. What do you most dislike about Radio Mirchi’s breakfast
session?
3. To summarize Radio Mirchi’s breakfast session, would you
say it is an excellent program, or a good program, or not very
good?
Question 3 above summarizes the results of questions 1 and 2, enables percentages to be calculated (e.g. 57% may have thought the program excellent), and also serves as a check on
the two other answers. If a respondent has a lot of likes (Q.1)
and no dislikes (Q.2), but then rates the program as “not very
good”, this may show that he or she has not heard Q.3
properly. The interviewer is in a position to detect and ask
about the apparent discrepancy. It’s good practice to use this
cross-checking technique whenever possible.
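If results are entered into a computer, the cross-check can even be automated. A minimal Python sketch, with invented field names and data:

# Flag respondents whose overall rating (Q3) contradicts their
# likes (Q1) and dislikes (Q2).
respondents = [
    {"likes": ["announcer", "news"], "dislikes": [], "rating": "not very good"},
    {"likes": ["music"], "dislikes": ["ads"], "rating": "good"},
]

for i, r in enumerate(respondents, start=1):
    many_likes = len(r["likes"]) >= 2 and not r["dislikes"]
    if many_likes and r["rating"] == "not very good":
        print("Respondent", i, ": rating contradicts answers - re-check Q3")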
Summary
Questionnaire Design
KISS - keep it short and simple. If you present a 20-page
questionnaire most potential respondents will give up in horror
before even starting. Ask yourself what you will do with the
information from each question. If you cannot give yourself a
satisfactory answer, leave it out. Avoid the temptation to add a
few more questions just because you are doing a questionnaire
anyway. If necessary, place your questions into three groups:
must know, useful to know and nice to know. Discard the last
group, unless the previous two groups are very short.
Start with an introduction or welcome message. State who you
are and why you want the information in the survey. A good
introduction or welcome message will encourage people to
complete your questionnaire.
Allow a “Don’t Know” or “Not Applicable” response to all
questions, except to those in which you are certain that all
respondents will have a clear answer. In most cases, these are
wasted answers as far as the researcher is concerned, but are
necessary alternatives to avoid frustrated respondents. Sometimes “Don’t Know” or “Not Applicable” will really represent
some respondents’ most honest answers to some of your
questions. Respondents who feel they are being coerced into
giving an answer they do not want to give often do not
complete the questionnaire. For the same reason, include
“Other” or “None” whenever either of these are a logically
possible answer. When the answer choices are a list of possible
opinions, preferences or behaviors you should usually allow
these answers.
Question Types
Researchers use three basic types of questions: multiple choice,
numeric open end and text open end (sometimes called
“verbatim”). Examples of each kind of question follow:
Multiple Choice
1. Where do you live?
___ north
___ south
___ east
___ west
Numeric Open End
2. How much did you spend on food this week?
___________
Text Open End
3. How can our company improve working conditions?
____________________________________________
Rating Scales and Agreement Scales are two common types of
questions that some researchers treat as multiple choice
questions and others treat as numeric open end questions.
Examples of these kinds of questions are:
Rating scales
4. How would you rate this product?
___ excellent
___ good
___ fair
___ poor
5. On a scale where 10 means you have a great amount of
interest and 1 means you have none at all, how would you
rate your interest in each of the following topics?
___ domestic politics
___ foreign affairs
___ science
___ business
Agreement Scale
6. How much do you agree with each of the following
statements?
                                                    strongly agree   agree   disagree   strongly disagree
My manager provides constructive criticism               ___          ___      ___           ___
Our medical plan provides adequate coverage              ___          ___      ___           ___
I would prefer to work longer hours on fewer days        ___          ___      ___           ___
Question and Answer Choice Order
There are two broad issues to keep in mind when considering
question and answer choice order. One is how the question and
answer choice order can encourage people to complete your
survey. The other issue is how the order of questions or the
order of answer choices could affect the results of your survey.
Ideally, the early questions in a survey should be easy and
pleasant to answer. These kinds of questions encourage people
to continue the survey. Grouping together questions on the
same topic also makes the questionnaire easier to answer.
Whenever possible leave difficult or sensitive questions until
near the end of your survey. Any rapport that has been built up
will make it more likely people will answer these questions. If
people quit at that point anyway, at least they will have answered
most of your questions. Answer choice order can make
individual questions easier or more difficult to answer. Whenever there is a logical or natural order to answer choices, use it.
Always present agree-disagree choices in that order. Presenting
them in disagree-agree order will seem odd. For the same
reason, positive to negative and excellent to poor scales should
be presented in those orders. When using numeric rating scales
higher numbers should mean a more positive or more agreeing
answer.
Question order can affect the results in two ways. One is that
mentioning something (an idea, an issue, a brand) in one
question can make people think of it while they answer a later
question, when they might not have thought of it if it had not
been previously mentioned. The other way question order can
affect results is habituation. This problem applies to a series of
questions that all have the same answer choices. It means that
some people will usually start giving the same answer, without
really considering it, after being asked a series of similar
questions. People tend to think more when asked the earlier
questions in the series and so give more accurate answers to
them.
The order in which the answer choices are presented can also
affect the answers given. People tend to pick the choices nearest
the start of a list when they read the list themselves on paper or
a computer screen. People tend to pick the most recent answer
when they hear a list of choices read to them. As mentioned
previously, sometimes answer choices have a natural order (e.g.,
Yes, followed by No; or Excellent - Good - Fair - Poor). If so,
you should use that order. At other times, questions have
answers that are obvious to the person that is answering them
(e.g., “What brand(s) of car do you own?”). In these cases, the
order in which the answer choices are presented is not likely to
affect the answers given. However, there are kinds of questions,
particularly questions about preference or recall or questions
with relatively long answer choices that express an idea or
opinion, in which the answer choice order is more likely to affect
which choice is picked.
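One practical way to limit these order effects, where no natural order exists, is to rotate or randomize the choice order for each respondent. The Python sketch below illustrates the idea; the NATURAL_ORDERS list and the function name are assumptions made for this example.

import random

# Choice sets with a natural order, which should never be shuffled:
NATURAL_ORDERS = [
    ["yes", "no"],
    ["excellent", "good", "fair", "poor"],
    ["strongly agree", "agree", "disagree", "strongly disagree"],
]

def present_choices(choices):
    """Return choices in their natural order if one applies, else shuffled."""
    lowered = [c.lower() for c in choices]
    for order in NATURAL_ORDERS:
        if sorted(lowered) == sorted(order):
            return sorted(choices, key=lambda c: order.index(c.lower()))
    shuffled = choices[:]            # copy, then shuffle per respondent
    random.shuffle(shuffled)
    return shuffled

print(present_choices(["Poor", "Excellent", "Fair", "Good"]))  # natural order kept
print(present_choices(["Brand A", "Brand B", "Brand C"]))      # order randomized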
Other Tips
1. Keep the questionnaire as short as possible. We mentioned
this principle before, but it is so important it is worth
repeating. More people will complete a shorter questionnaire,
regardless of the interviewing method. If a question is not
necessary, do not include it.
2. Start with a title (e.g., Leisure Activities Survey). Always
include a short introduction - who you are and why you are
doing the survey. You may want to leave a space for the
respondent to add their name and title. Some people will
put in their names, making it possible for you to recontact
them for clarification or follow-up questions. Indicate that
filling in their name is optional. Do not have a space for a
name, if the questions are sensitive in nature. Some people
would become suspicious and not complete the survey.
3. Start with general questions. If you want to limit the survey
to users of a particular product, you may want to disguise
the qualifying product. As a rule, start from general attitudes
to the class of products, through brand awareness, purchase
patterns, specific product usage to questions on specific
problems (i.e., work from “What types of coffee have you
bought in the last three months” to “Do you recall seeing a
special offer on your last purchase of Brand X coffee?”). If
possible put the most important questions into the first
half of the survey. If a person gives up half way through, at
least you have the most important information.
4. Make sure you include all the relevant alternatives as answer
choices. Leaving out a choice can give misleading results. For
example, a number of recent polls that ask Americans if they
support the death penalty yes or no have found 70-75% of
the respondents choosing “yes.” But polls that offer the
choice between the death penalty and life in prison without
the possibility of parole show support for the death penalty
at about 50-60%. Polls that offer the alternatives of the death
penalty, or life in prison without the possibility of parole with
the inmates working in prison to pay restitution to their
victims’ families, have found support for the death penalty
closer to 30%.
5. So what is the true level of support for the death penalty?
The lowest figure is probably best, since it represents the
percentage that favor that penalty regardless of the alternative
offered. The need to include all relevant alternatives is not
limited to political polls. You can get misleading data
anytime you leave out alternatives.
6. Do not put two questions into one. Avoid questions such as
“Do you buy frozen meat and frozen fish?” A “Yes” answer
can mean the respondent buys meat or fish or both.
Similarly with a question such as “Have you ever bought
Product X and, if so, did you like it?” A “No” answer can
mean “never bought” or “bought and disliked.” Be as
specific as possible. “Do you ever buy pasta?” can include
someone who once bought some in 1990. It does not tell
you whether the pasta was dried, frozen or canned and may
include someone who had pasta in a restaurant. It is better
to say “Have you bought pasta (other than in a restaurant) in
the last three months?” “If yes, was it frozen, canned or
dried?” Few people can remember what they bought more
than three months ago unless it was a major purchase such
as an automobile or appliance.
7. The overriding consideration in questionnaire design is to
make sure your questions can accurately tell you what you
want to learn. The way you phrase a question can change the
answers you get. Try to make sure the wording does not
favor one answer choice over another.
8. Avoid emotionally charged words or leading questions that
point towards a certain answer. You will get different
answers from asking “What do you think of the XYZ
proposal?” than from “What do you think of the
Republican XYZ proposal?” The word “Republican” in the
second question would cause some people to favor or
oppose the proposal based on their feelings about
Republicans, rather than about the proposal itself. It is very
easy to create bias in a questionnaire. This is another good
reason to test it before going ahead.
Questionnaire Research Flow Chart
Questionnaire research design proceeds in an orderly and specific
manner. Each item in the flow chart depends upon the
successful completion of all the previous items. Therefore, it is
important not to skip a single step. Notice that there are two
feedback loops in the flow chart to allow revisions to the
methodology and instruments.
LESSON 10:
FIELDWORK
Topics Covered
Interview, Finding, Identifying, Introductions, Techniques,
Finishing, Verifying, Paperwork.
Objectives
Upon completion of this Lesson, you should be able to:
• Choose the place of interview.
• Find interviewers.
• Find people at home.
• Identify the right respondent.
• Handle introductions and interviewing techniques.
• Finish an interview.
• Verify interviews.
• Prepare the paperwork for interviewers.
If you are working in the media, you probably think of
interviewing as a way of getting a story that can be broadcast or
printed. The goal of journalistic interviewing, apart from
accuracy, is to produce a story that will be of interest to the
audience.
Research interviewing is different. The main purpose is not to
keep an audience interested, but to get accurate and comprehensive information about the respondent. (In research, we have
“respondents” rather than “interviewees.”) Of course, journalistic interviewing also needs to be accurate, but reader interest is
the first consideration. No two journalistic interviews ask exactly
the same questions, but in survey interviewing, everybody is
asked the same questions. The goal is comparison of respondents, and the ability to produce information about whole
populations, rather than individuals.
There is one form of research interviewing which is very close to
journalistic interviewing: that is depth interviewing. Depth
interviewing is not used for drawing firm conclusions about
populations, but for initial explorations of a topic or a sample.
Fieldwork
When interviewers talk about the field, it’s not a farm, but the
place where they go to do their interviews: whether at people’s
homes, public places, or even a call centre where they do
telephone interviewing. The term fieldwork includes all
interviewing, as well as the activities that go with it: preparation,
interviewer supervision, and verification.
There are two main forms of fieldwork: face-to-face interviews,
and telephone interviews. Telephone interviews are much less
laborious (no walking - just ring a number) but also more
restrictive because nothing visible can pass between the interviewer and the respondent.
Principles of Survey Interviewing
In theory, survey interviewing is very simple. You have a
questionnaire, you read out the questions exactly as they are
written, and you record the respondent’s answers. What could
be simpler?
In practice, as you will soon discover when you begin interviewing, many things can go wrong. Some people refuse to be
interviewed. Some will say “I don’t know the answer to that
question - what’s your opinion?” Some will even try to
interview you. They may not answer the exact question you ask,
either because they have misunderstood it, or because they
don’t want to.
One of the first principles of interviewing is that the interviewer must not affect the response in any way: the respondent
should give the same answers to any interviewer. By your tone
of voice, or even by your facial expression, you can show a
respondent that you like or dislike some answers. Therefore it’s
important to ask each question in a completely neutral way,
giving no hint of your own opinion.
Even the way you look and dress can affect respondents. Many
studies have shown that respondents provide the most accurate
answers to interviewers who are similar to them, in social
status, sex, and skin colour. Interviewers who dress formally in
a poor area can scare respondents - who may then give answers
that they think the interviewer would like to hear, but which
may not be true.
Much of the skill in interviewing lies in establishing a feeling of
trust. The respondents need to be able to trust you - even
though they don’t know your opinions.
Interviewing People in their Homes
It’s usually best to interview people in their homes. People
usually feel more comfortable at home, and are not in a hurry to
finish the interview. If anything needs checking (such as the
bands on a radio) this can be done more easily at home. It’s
harder to lie to an interviewer at home, with other people
present.
Finally, homes can be sampled more accurately than people can, because homes (unlike people) can’t move around, so it’s easier
to count and list them.
Though it’s usually best to interview people at home, in some
societies this is not possible.
When people protect themselves against contact with strangers,
it can be very difficult to interview them. However, in most
developing countries, this is not a problem, except when trying
to interview very rich people.
Interviewing in Public Places
Where it’s difficult to interview people at home, one alternative
is to interview them in public places.
The main problem with interviewing in public places is that
some people spend a lot more time in public places than others
do. This can be an advantage when the subject of the survey is
the public place itself. But when you want to get a true cross-
section of the population, you usually find that interviewing in
public places will produce an uneven balance of sex and age
groups. Though quotas can be used to ensure that (for
example) equal numbers of men and women are interviewed, other problems remain.
For radio and TV surveys, interviewing in public places will
usually produce audience figures that are too low. Most radio
listening and TV viewing is done at home, and the people who
spend the most time listening to radio or watching TV
therefore spend less time in public places.
Using age and sex quotas won’t compensate for this. The only
solution (which, from my experience, doesn’t work very well) is
to include a question asking how much time each respondent
spends at home and in public places, and to use this information to give a different weight to each type of respondent. This
method requires information from a census or highly accurate
survey - and it still doesn’t cover people who spend no time at
all in public places - such as old and sick people, who often
spend a lot of time listening to radio.
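As a rough illustration of that weighting idea, the sketch below gives each intercept respondent a weight inversely related to the time they report spending in public places. The categories and weight values here are invented for illustration; in practice they would come from census or diary data, as noted above.

# Hypothetical weights: people rarely in public places count for more,
# because an intercept survey would rarely reach them.
WEIGHTS = {"low (0-5 h/week)": 4.0, "medium (6-20 h/week)": 1.5, "high (21+ h/week)": 0.5}

respondents = [
    {"listens_to_radio": True,  "time_out": "low (0-5 h/week)"},
    {"listens_to_radio": False, "time_out": "high (21+ h/week)"},
    {"listens_to_radio": True,  "time_out": "medium (6-20 h/week)"},
]

weighted_yes = sum(WEIGHTS[r["time_out"]] for r in respondents if r["listens_to_radio"])
total_weight = sum(WEIGHTS[r["time_out"]] for r in respondents)
print(f"Weighted radio audience: {weighted_yes / total_weight:.0%}")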
Preparing for Fieldwork
Workspace
It’s difficult to run a survey without a good-sized office. In this
space you can brief interviewers, keep piles of questionnaires,
and analyse the data when the survey is finished. It’s useful to
have a large table. During training sessions, people can sit
around it. When questionnaires are coming back, they can be
sorted into heaps on this table.
Because the survey information is most vulnerable after the
questionnaires have been filled in, but before the information
has been entered onto a computer, the office should be made as
safe as possible from fire, burglary, etc. If the completed
questionnaires are lost, most of the research work is wasted.
Hiring Interviewers
The Qualities Needed by a Good Interviewer Include
• Ability to establish rapport with a wide variety of people.
Successful interviewers are usually friendly, talkative types,
with a lot of curiosity about others.
• Ability to fill in forms and questionnaires correctly. It’s no
use being friendly and talkative if you don’t record the
answers correctly - unfortunately, many of the friendly,
talkative types are not good with the paperwork.
• Being physically fit: able to walk long distances and carry
reasonably heavy loads, and not being upset by bad weather.
Interviewers need to spend a lot of time outdoors, and survey
organizations often underestimate the amount of walking
that interviewers need to do.
• Not becoming upset when people refuse to be interviewed.
(This is more a factor in western countries than in developing
countries, and more for telephone than face-to-face
interviewers.)
• Being resourceful and able to exercise initiative. (More
important for face-to-face surveys than for telephone
interviews.)
• Knowing their assigned area reasonably well, for door-to-door
surveys. However, if the survey covers confidential subjects,
respondents usually give more honest answers to
interviewers who are strangers. This is a problem with
surveys on subjects such as health, but not usually a problem
for media surveys.
• Being able to read a map, and create a rough sketch map of
the streets where they have been interviewing people.
It’s usually an advantage to hire interviewers with some
experience of survey work. Even if the organization they
worked for previously has different procedures from yours,
much of the work will be similar, and they will already know
what conditions to expect.
Now go through the following case study, and try to correlate it with what you have learned about interviewing and fieldwork.
The Coke & Pepsi Rivalry
This case was written by A. Mukund, ICFAI Center for
Management Research (ICMR). It is intended to be used as a
basis for class discussion.
The case was compiled from published sources.
“Our real competition is water, tea, nimbupani and Pepsi... in that order.”
- Coke sources in 1996.
“When you’re No 2 and you’re struggling, you have to be more innovative,
work better, and be more resilient. If we became No 1, we would redefine
the market so we became No 2! The fact is that our competition with the
Coca-Cola company is the single most important reason we’ve accomplished what we have. And if they were honest, they would say the same
thing.”
- Pepsi sources in 1998.
“Both companies did not really concentrate on the fundamentals of
marketing like building strong brand equity in the market, and thus had
to resort to such tactics to garner market shares.”
- Business India in 1998.
Pepsi vs. Coke
The cola wars had become a part of global folklore - something
all of us took for granted. However, for the companies
involved, it was a matter of ‘fight or succumb.’ Both print and
electronic media served as battlefields, with the most bitter of
the cola wars often seen in the form of comparative advertisements.
In the early 1970s, the US soft-drinks market was on the verge
of maturity, and as the major players, Coke and Pepsi offered
products that ‘looked the same and tasted the same,’ substantial market share growth seemed unlikely. However, Coke and
Pepsi kept rejuvenating the market through product modifications and pricing/promotion/distribution tactics. As the
competition was intense, the companies had to frequently
implement strategic changes in order to gain competitive
advantage. The only way to do this, apart from introducing
cosmetic product innovations, was to fight it out in the
marketplace.
This modus operandi was followed in the Indian markets as
well with Coke and Pepsi resorting to more innovative tactics to
generate consumer interest. In essence, the companies were
trying to increase the whole market pie, as the market-shares war
seemed to get nowhere. This was because both the companies
came out with contradictory market share figures as per surveys
conducted by their respective agencies - ORG (Coke) and IMRB
(Pepsi). For instance, in August 2000, Pepsi claimed to have
increased its market share for the first five months of calendar
year 2000 to 49% from 47.3%, while Coke claimed to have
increased its share in the market to 57%, in the same period,
from 55%.
Media reports claimed that the rivalry between Coke and Pepsi
had ceased to generate sustained public interest, as it used to in
the initial years of the cola brawls worldwide. They added that it
was all just a lot of noise to hardsell a product that had no
inherent merit.
The Players
Coke had entered the Indian soft drinks market way back in the
1970s. The company was the market leader till 1977, when it
had to exit the country following policy changes regarding
MNCs operating in India. Over the next few years, a host of
local brands emerged, such as Campa Cola, Thums Up, Gold
Spot and Limca. However, with the entry of Pepsi and Coke
in the 1990s, almost the entire market went under their control.
Making billions from selling carbonated/colored/sweetened
water for over 100 years, Coke and Pepsi had emerged as truly
global brands. Coke was born 11 years before Pepsi in 1887 and,
a century later it still maintained its lead in the global cola
market. Pepsi, having always been number two, kept trying
harder and harder to beat Coke at its own game. In this never-ending duel, there was always a new battlefront opening up
somewhere. In India the battle was more intense, as India was
one of the very few areas where Pepsi was the leader in the cola
segment. Coke re-entered India in 1993 and soon entered into a
deal with Parle, which had a 60% market share in the soft drinks
segment with its brands Limca, Thums Up and Gold Spot.
Following this, Coke turned into the absolute market leader
overnight. The company also acquired Cadbury Schweppes’ soft
drink brands Crush, Canada Dry and Sport Cola in early 1999.
Coke was mainly a franchisee-driven operation with the
company supplying its soft drink concentrate to its bottlers
around the world. Pepsi took the more capital-intensive route
of owning and running its own bottling factories alongside
those of its franchisees. Over half of Pepsi’s sales were made by
its own bottling units.
Though Pepsi had a lead over Coke, having come in before the
era of economic liberalization in India, it had to spend the early
years fighting the bureaucracy and Parle’s Ramesh Chauhan every
step of the way. Pepsi targeted the youth and seemed to have
struck a right chord with the market. Its performance was
praiseworthy, while Coke had to struggle to a certain extent to
get its act right. In a span of 7 years of its operations in the
country, Coke changed its CEO four times. Media reports about
the troubles faced by Coke and the corrective measures it
adopted were aplenty.
The Rivalry on Various Fronts
I – Bottling
Bottling was the biggest area of conflict between Pepsi and
Coke. This was because, bottling operations held the key to
distribution, an extremely important feature for soft-drink
marketing. As the wars intensified, both companies took pains
to maintain good relationships with bottlers, in order to avoid
defections to the other camp.
A major stumbling block for Coke was the conflict with its
strategic bottling partner, Ramesh Chauhan of the Parle group
of companies. Coke alleged that Chauhan had secretly manufactured Coke’s concentrate. Chauhan, in turn, accused Coke of
backtracking on commitments to grant him bottling rights in
Pune and Bangalore and threatened legal action. The matter
almost reached the courts and the strategic alliance showed signs
of coming apart. Industry observers commented that for a
company like Coke that was so heavily franchisee driven,
antagonizing its chief bottler was suicidal.
While all this was going on, Pepsi wasted no time in moving in
for the kill. It made huge inroads in the north, particularly in
Delhi where Chauhan had the franchise and also snapped up
the opportunity to buy up Coke’s bottler Pinakin Shah in
Gujarat. Ironically, the Gujarat Bottling Company owned by
Shah, also belonged in part to Chauhan for whom the sell-out
was a strategic counter-move in his battle with Coke. Coke
moved court and obtained an order enforcing its bottler’s
agreement with the Gujarat company, effectively freezing Pepsi’s
right to use the acquired capacity for a year. Later, Coke made a
settlement of $10 million in exchange for Chauhan foregoing
bottling rights in Pune and Bangalore.
Towards the end of 1997, bottling agreements between Coke
and many of its bottlers were expiring. Coke began pressurizing
its bottlers to sell out, threatening that their bottling
agreements would not be renewed. Media reports claimed that
Coke’s bottlers were not averse to joining hands with Pepsi.
They said they would rather offer their services to Pepsi than
selling out to Coke and discontinuing a profitable business. In
November 1997, Pepsi made a bid to gain from the feud
between Coke and its franchised bottlers. It declared that it was
ready to join hands with ‘any disgruntled Coke bottler,
provided the latter’s operations enhanced Pepsi’s market in areas
where Coke was dominant.’ Pepsi was even willing to shift to a
franchisee-owned bottling system from its usual practice of
focusing on company-owned bottling systems supplemented
by a few franchisee-owned bottling companies, provided it
found bottlers who would enhance both the quantity and
quality, especially in areas where Coke had a substantial
marketshare. Pepsi won over Goa Bottling Company, Coke’s
bottler in Goa and became the market leader in that city.
II – Advertising
When Coke re-entered India, it found Pepsi had already
established itself in the soft drinks market. The global advertisement wars between the cola giants quickly spread to India as
well. Internationally, Pepsi had always been seen as the more
aggressive and offensive of the two, and its advertisements the
world over were believed to be more popular than Coke’s. It
was rumored that at any given point of time, both the companies had their spies in the other camp. The advertising agencies
of both the companies (Chaitra Leo Burnett for Coke and HTA
for Pepsi) were also reported to have insiders in each other’s
offices who reported to their respective heads on a daily basis.
Based on these inputs, the rival agency formulated its own
plans. These hostilities kept the rivalry alive and healthy.
However, the tussle took a serious turn at times, with complaints to the Advertising Standards Council of India and threats
of lawsuits.
While Pepsi always relied on advertisements featuring film
stars, pop stars and cricket players, Coke had initially decided to
focus on Indian culture and jingles based on Indian classical
music. These were also supported by Coke advertisements that
were popular in the West. Somehow, Coke’s advertisements
missed the Indian pulse by a wide margin. Pepsi soon came to
be seen as a ‘defender’ who had humiliated the ‘invader’ with its
superior creative strengths. When Coke bagged the official
sponsorship rights to the 1997 Cricket World Cup, Pepsi created
media history by unleashing one of the country’s most
successful advertisement campaigns - the ‘Nothing Official
About It’ campaign [1]. Pepsi took on Coke, even when the latter
sponsored the replays of the matches, through the campaign,
‘Uncork a Cola.’ Media coverage of the war even hinted that the
exclusion of Rahul Dravid (Pepsi’s model) from the Indian
team had something to do with the war. However, Coke had its
revenge when it bagged the television sponsorship rights for
the 1997 Pepsi Asia Cup. Consequently, Pepsi, in spite of
having branded the event, was not able to sponsor it.
The severe damage caused by the ‘Nothing Official About It’
campaign prompted Coke to shift its advertising account from
McCann Erickson to Chaitra Leo Burnett in 1997. The ‘Eat-Sleep-Drink’ series of ads was born soon after. Pepsi responded
with ads where cricket stars ‘ate a bat’ and ‘slept on a batting
pad’ and ‘drank only Pepsi.’ To counter this, Coke released a
print advertisement in March 1998, in which cricketers declared,
‘Chalo Kha Liya!’ Another Thums Up ad showed two apes
copying Pepsi’s Azhar and Ajay Jadeja, with the line, ‘Don’t be a
bunder (monkey), Taste the thunder.’ For once, it was Pepsi’s
turn to be at the receiving end. A Pepsi official commented, “We’re
used to competitive advertising, but we don’t make fun of the
cricketers, just the ad.” Though Pepsi decided against suing
Coke, the ad vanished soon after the dissent was made public.
Commenting on this, a Pepsi official said, “Pepsi is basically
fun. It is irreverent and whacky. Our rival is serious and has a
‘don’t mess with me’ attitude. We tend to get away with fun
but they have not taken it nicely. They don’t find it funny.”
Coke then launched one of its first offensive ads, ridiculing
Pepsi’s ads featuring a monkey. ‘Oye! Don’t be a bunder! Taste
the Thunder’, the ad for Thums Up, went with the line, ‘issued
in the interest of the present generation by Thums Up.’
The 1998 Football World Cup was another event the cola
majors fought over. Pepsi organized local or ‘para’ football
matches in Calcutta and roped in Indian football celebrity
Bhaichung Bhutia to endorse Pepsi. Pepsi claimed it was the
first to start and popularize ‘para’ football at the local level.
However, Coke claimed that it was the first and not Pepsi, to
arrange such local games, which Coke referred to as ‘pada.’ While
Pepsi advertisements claimed, ‘More football, More Pepsi,’
Coke utilized the line, ‘Eat football, Sleep football, Drink only
Coca-Cola,’ later replaced by ‘Live football, dream football and
drink only Coca-Cola.’ Media reports termed Pepsi’s promos as
a ‘me-too’ effort to cash in on the World Cup craze, while
Coke’s activities were deemed to be in line with its commitment
and long-term association with the game.
Coke’s first offering in the lemon segment (not counting the
acquired market leader brand Limca) came in the form of Sprite
launched in early 1999. From the very beginning, Sprite went on
the offensive with its tongue-in-cheek advertisements. The line
‘Baki Sab Bakwas’ (All the rest is nonsense) was clearly targeted
at Pepsi’s claims in its ads. The advertisement made fun of
almost all the Pepsi and Mirinda advertisements launched
during 1998. Pepsi termed this as Coke’s folly, claiming it was
giving Sprite a ‘wrong positioning,’ and that it was a case of an
ant trying to fight a tiger. Sprite received an encouraging
response in the market, aided by the high-decibel promotions
and pop music concerts held across the country. But Pepsi was
confident that 7 Up would hold its own and its ads featuring
film stars would work wonders for Mirinda Lemon in the
lemon segment.
When Pepsi launched an advertisement featuring Sachin
Tendulkar with a modified Hindi movie song, ‘Sachin Ala Re,’
Coke responded with an advertisement with the song, ‘Coke
Ala Re.’ Following this, Pepsi moved the Advertising Standards Council of India and the Advertising Agencies
Association of India, alleging plagiarism of its ‘Sachin Ala
Re’ creation by Coke’s advertising agency, Chaitra Leo Burnett, in
its ‘Coke Ala Re’ commercial. The rivals were always engaged in
the race to sign the most popular Bollywood and cricket
celebrities for their advertisements. More often than not, the
companies pitched arch-rivals in their respective fields against
each other in the cola wars as well. (Refer to Table I.)
In October 2000, following Coke’s ‘Jo Chaaho Ho Jaaye’
campaign, the brand’s ‘branded cut-through mark’ [2] reached an
all-time high of 69.5% as against Pepsi’s 26.2%. In terms of
stochastic share [3], Coke had a 3% lead over Pepsi with a 25.5%
share. Pepsi retaliated with a campaign making fun of Coke’s
advertisements. The advertisement had a mixed response
amongst the masses with fans of both the celebrities defending
their idols. In May 2000, Coke threatened to sue Pepsi over the
advertisements that ridiculed its own commercials. Amidst wide
media coverage, Pepsi eventually stopped airing the controversial advertisement. In February 2001, Coke went on the
offensive with the ‘Grow up to the Thums Up Challenge’
campaign. Pepsi immediately issued a legal notice on Coke for
using the ‘Yeh Dil Maange More’ phrase used in the commercial. Coke officials, however, declined to comment on the issue
and the advertisement continued to be aired.
Table I: Celebrity Endorsers*

Coke
• Indian film industry: Karisma Kapoor, Hrithik Roshan, Twinkle Khanna, Rambha, Daler Mehndi, Aamir Khan, Aishwarya Rai**
• Cricket players: Robin Singh, Anil Kumble, Javagal Srinath

Pepsi
• Indian film industry: Aamir Khan, Aishwarya Rai**, Akshay Kumar, Shahrukh Khan, Rani Mukherjee, Manisha Koirala, Kajol, Mahima Chaudhary, Madhavan, Amrish Puri, Govinda, Amitabh Bachchan
• Cricket players: Azharuddin, Sachin Tendulkar, Rahul Dravid, Sourav Ganguly

* The list is not exhaustive.
** Aamir and Aishwarya had switched from Pepsi to Coke.
III – Product Launches
Pepsi beat Coke in the Diet-Cola segment, as it managed to
launch Diet Pepsi much before Coke could launch Diet Coke.
After the Government gave clearance to the use of Aspartame
and Acesulfame-K (potassium) in combination (ASK), for use
in low-calorie soft drinks, Pepsi officials lost no time in rolling
out Diet Pepsi at its Roha plant and sending it to retail outlets
in Mumbai. Advertisements and press releases followed in
quick succession. It was a major victory for Pepsi, as in certain
parts of the world, Coke’s Diet Coke sold more than Pepsi Cola
itself. Brand visibility and taste being extremely important in
the soft drink market, Pepsi was glad to have become the first mover once again.
Coke claimed that Pepsi’s one-upmanship was nothing to
worry about, as Coke already had a brand advantage: Diet Coke
was readily available in the market through import channels,
while Diet Pepsi was rarely seen. Coke came up
later with a high-profile launch of
Diet Coke. However, as expected, diet drinks, as a percentage of
the total cola demand, did not emerge as a major area of focus
in the years to come. Though the price of the cans was reduced
from Rs 18 to Rs 15 in July 2000, it failed to catch the fancy of
the buyers. In September 2000, both the companies again
slashed the price of their diet cans by over 33 per cent to Rs
10. Both the companies were losing Rs 5-6 per can by selling it
at Rs 10, but expected the other products to absorb these
losses. A Pepsi official said that the diet cola constituted only
about 0.4% of the total market, hence its contribution to
revenue was considered insignificant. However, both companies
viewed this segment as having immense potential, and the price cuts were part of a long-term strategy.
Coke claimed that it was passing on the benefit of the 5% cut in
excise duty to the consumer. Industry experts, however,
believed that the price cut had more to do with piling up
inventories. Diet drinks in cans had a rather short shelf life
(about two months) and the cola majors were simply clearing
stocks through this price cut. However, by 2001, the diet-cola
war had almost died out with the segment posting extremely
low growth rates.
IV – Poaching
Pepsi and Coke fought the war on a new turf in the late 1990s.
In May 1998, Pepsi filed a petition against Coke alleging that
Coke had ‘entered into a conspiracy’ to disrupt its business
operations. Coke was accused of luring away three of Pepsi’s
key sales personnel from Kanpur, going as far as to offer Rs 10
lakh a year in pay and perks to one of them, almost five times
what Pepsi was paying him. Sales personnel who were earning
Rs 48,000 per annum were offered Rs 1.86 lakh a year. Many
truck drivers in the Goa bottling plant who were getting Rs
2,500 a month moved to Coke who gave them Rs 10,000 a
month. While new recruits in the soft drinks industry averaged
a pay hike of between 40-60%, Coke had offered 300-400%.
Coke, in its reply filed with the Delhi High Court, strongly
denied the allegations and also asked for the charges to be
dropped since Pepsi had not quantified any damages. Pepsi
claimed that this was causing immense damage as those
employees who had switched over were carrying with them
sensitive trade-related information. After some intense bickering, the issue died a natural death with Coke emerging the
winner in another round of the battle.
Pepsi also claimed that its celebrity endorsers were lured into
breaking their contracts with Pepsi, and Coke had tried to
pressure the Board of Control for Cricket in India (BCCI) to
break a sponsorship deal it had signed for the Pepsi Triangular
Series. According to Pepsi’s deal with BCCI, Pepsi had the first
right of refusal to sponsor all cricket matches played in India
where up to three teams participated. The BCCI, however, was
reported to have tried to break this contract in favor of Coke.
Pepsi went to court protesting against this and won. Pepsi also
alleged that Coke’s Marketing Director Sanjiv Gupta was to join
Pepsi in 1997. But within days of his getting the appointment
letter, Coke made a counter offer and successfully lured Gupta
away.
V – Other Fronts
• Till the late 1980s, the standard SKU [4] for a soft drink was
200 ml. Around 1989, Pepsi launched 250 ml bottles and the
market also moved on to the new standard size. When Coke
re-entered India in 1993, it introduced 300 ml as the smallest
bottle size. Soon, Pepsi followed and 300 ml became the
standard. But around 1996, the excise component led to an
increase in prices and a single 300 ml purchase became
expensive. Both the companies thus decided to bring back
the 200 ml bottle. In early 1996, Coke launched its 200 ml
bottles in Meerut and gradually extended to Kanpur,
Varanasi, Punjab and Gujarat, and later to the south. Pepsi
first tried the 200 ml size in Calcutta around 1997 but
withdrew it soon after. Neither company put in any
marketing effort behind the 200 ml, as the 300 ml meant
higher per-unit intake and more profits for the company,
bottler and the retailer. This hypothesis worked well till 1999
when the growth of the soft drinks market was a mere 5% as
compared to the 1998 figure of 20%. Reasoning that the Rs
9 price-point for the 300 ml bottle was hampering growth,
Coke and Pepsi re-introduced 200 ml bottles on a grand scale
in July (Mini Coke) and December (Chhota Pepsi) 1999
respectively. While Coke invested huge sums on local and
regional advertising, which included POP, cable TV and the
regional press aiming to capture the semi-urban and rural
markets, Pepsi’s advertisements were more city-centric.
Based on its previous experience with lower price points,
Coke launched Coke Mini in Karnataka at a price of Rs 5,
and accompanied this with an extensive billboard campaign
across Bangalore. Pepsi hit back with the introduction of
‘Chhota Pepsi’ at Rs 4.50. Though the initial campaign said
‘Offer till stocks last,’ Pepsi later decided to continue with the
offer to retain its customer base, till the price war was over.
Company sources revealed that it was purely a competition
driven move. A Pepsi official commented, “The 200 ml
bottles are unviable even at Rs 6. It is a good price point, but
will definitely hurt the bottler and the industry. Perhaps, a
200 ml bottle will be viable at Rs 7. But who will pay Rs 7 for
200 ml, when 300 ml is available at Rs 9?”
By 2001, the ‘minis’ were retailing at Rs 7 and the 300 ml at
Rs 10. As a variant, the ‘minis’ did prove to be a good
venture for the warriors, though they inevitably kept the
companies on red-alert.
• In May 1996, Coke launched Thums Up in blue cans, with
four different pictures depicting ‘macho sports’ such as sky
diving, surfing, wind-surfing and snow-boarding. Much to
Pepsi’s chagrin, the cans were colored blue - the color Pepsi
had chosen for its identity a month earlier, in response to
Coke’s ‘red’ identity. The move came as a surprise because
even Coke executives had started referring to Pepsi as the
blue brand and the Pepsi employees as ‘the blue guys.’
Media reports said this was Coke’s move to ‘steal Pepsi’s
thunder.’ However, Coke officials denied this and said that
they had not adopted the blue color for Thums Up cans on
purpose. Also, the Thums Up blue was quite different from
the Pepsi blue. Pepsi sources, on the other hand, claimed it was
‘a victory of the blues over the reds.’
• There were frequent complaints from both the players about
their bottlers and retailers being hijacked. Pepsi’s blue painted
retail outlets being painted in Coke’s red color overnight and
vice-versa was a common phenomenon in the 1990s. Even
suppliers of Visicoolers, the glass door refrigerators, were
aligning themselves with either of the cola players. While
Norcool was selling only to Coke, Pepsi was the only
customer of Carrier. Norcool, the Norway-based glass door
freezer manufacturer owned by the Frigoglass group,
admitted that it had started manufacturing operations in
India only at the instance of Coke. Over half its global
production of Visicoolers was consumed by Coke. Even the
choice of the site for its plant at Manesar was driven by the
fact that it would be close to the Coke headquarters in
Gurgaon. Similarly, though Carrier Commercial
Refrigeration, suppliers to Pepsi, had an option of selling to
‘other kinds’ of consumers, it was a strict ‘no-no’ for Coke.
• Coke also turned its attention to Pepsi’s stronghold - the
retail outlets. Between 1996-98, Coke doubled its reach to a
reported 5 lakh outlets, when Pepsi was present at only 3.5
lakh outlets. To reach out to smaller markets, interceptor
units in the form of mobile vans were also launched by
Coke in 1998 in Andhra Pradesh, Tamil Nadu and West
Bengal. However, in its rush to beat Pepsi at the retail game,
Coke seemed to have faltered on the service front. For
instance, many shops in Uttar Pradesh frequently ran out of
stock and there was no servicing for Coke’s coolers. Though
Coke began servicing retail outlets on a daily basis like Pepsi,
it had to wait for a while before it was able to match Pepsi’s
retailing strengths.
One of Coke’s victories on the retail front was in the form
of its tie up with Indian Oil to set up dispensing units at its
petrol pumps. Pepsi responded by striking a deal with
Bharat Petroleum, whose network was far smaller than
Indian Oil’s. Of the estimated 2,50,000 retail outlets in the
country that sold soft drinks, Pepsi was stocked only at
2,00,000.
In the late 1990s, Pepsi and Coke kept trying to outdo each
other in sponsoring music concerts by leading artists in order to
reach out to youth. Pepsi also tied up with MTV to hold a
series of pop concerts across the country. Coke on the other
hand, tied-up with MTV’s rival Channel V for a similar venture.
There were frequent skirmishes regarding movie sponsorships
and vending rights at leading cinema halls.
In May 1999, the companies were involved in a ‘freebies war’ promotional schemes designed to help grow the overall cola
market besides the usual market share enhancement. Coke was
running as many as 12 volume-building, national-level consumer promotions, while Pepsi had 8 schemes for its brands.
Coke’s schemes ranged from crown exchanges to under the
crown prizes, which included toys, cars, free travel, consumer
durables etc. Pepsi had crown exchanges and under the crown
prizes as well; it also offered free gifts like cards and tattoos. A
huge outlay was involved in promoting these schemes, with
frequent media splashes.
Is the Rivalry Healthy?
In a market where the product and tastes remained virtually
indistinguishable and fairly constant, brand recognition was a
crucial factor for the cola companies. The quest for better brand
recognition was the guiding force for Coke and Pepsi to a large
extent. Colorful images, lively words, beautiful people and
places, interesting storylines, innovative/attractive packaging and
catchy jingles have made sure that the cola wars, though often
scoffed at, rarely go unnoticed. And that’s what it has all been
about till now. The management of both the companies had to
constantly adapt to the changing attitudes and demands of
their consumers or lose market share.
The wars seemed to have settled down into a pattern. Pepsi
typically won a market, sustained itself for a few years, and then
lost to a very determined Coke. In the earlier years, Coke was
content with advertising its product to build a strategic positioning for it. With Pepsi’s offensive moves getting
stronger and stronger, Coke had no option but to opt for the
same modus operandi. Though the market share debates
would not have any conclusions, it would be safe to infer that
the cola wars were a major factor in keeping customer interest
alive in the segment so far. However, in the late 1990s, questions
were raised about the necessity and, more importantly, the
efficacy of these wars. Answers to these questions would be too
difficult to ascertain and too shaky to confirm.
Questions for Discussion:
1. Analyze the development of the Indian soft drinks market
over the years and comment on the emergence of the MNC
players as the leaders within a few years of their entry.
2. Comment on the advertising strategies of Coke and Pepsi
with specific reference to the comparative and ‘spoof’
advertisements. Do you think that competition justifies such
moves? Give the reasons for your answer.
3. Write a brief note on the cola wars waged in the other areas,
besides the advertising front. Briefly comment on the ethical
issues involved in such wars.
4. What shape do you think the cola wars will take in a couple
of years from now? Is the consumer becoming indifferent
towards them? In such a scenario, is there any other way in
which Coke and Pepsi could enhance brand recognition?
Elaborate.
Additional Readings & References
1. Karmali Nazneen, Leading The Way, Business India, 15 January, 1996.
2. Karmali Nazneen, Unidentical Twins, Business India, 15 January, 1996.
3. Karmali Nazneen, Blue Thunder, Business India, 25 May, 1996.
4. Chhaya, Cola Vs Cola Vs Cola, Business Today, 7 May, 1997.
5. Pepsi Invites Coke Bottlers, Sets Terms, Business Line, 23 November, 1997.
6. Chakraborty Alokananda, A New Fizz, Business India, 1 December, 1997.
7. Karmali Nazneen, Number 2 And Loving It, Business India, 9 February, 1998.
8. Datt Namrata, Fighting The Good Fights, Business India, 9 March, 1998.
9. Pande Shammi/Ghosh Partha, Coke’s Ad Offensive Shows It Means No Monkey Business, Business Line, 20 March, 1998.
10. Balakrishnan Paran, It’s A Bird, It’s A Plane, No It’s Pepsi, Business World, 17 April, 1998.
11. After The Blunder, Taste The Thunder, India Today, 20 April, 1998.
12. Bansal Shuchi/Mukerjea D.N., Slugfest, Business World, 22 April, 1998.
13. Rekhi Shefali, Coke Vs. Pepsi, Business Today, 4 May, 1998.
14. Ghosh Partha, Pepsi May File Suit Claiming Damages From Coca-Cola, Business Line, 26 May, 1998.
15. Ramachandran Rishikesh, Coke, Pepsi Vying For Top Slot In Football Promos, Business Line, 4 July, 1998.
16. Raman Manjari, Cola Wars Continued, Financial Express, 10 August, 1998.
17. Star Wars, India Today, 21 December, 1998.
18. Mathur Sujata, In The Indian Passion Arena, A&M, 15 February, 1999.
19. Roy Vaishna, Playing It Cool, Business Line, 18 February, 1999.
20. Menon Sudha, Coke Scores From Pepsi’s Pitch, 3 April, 1999.
21. Singh Iqbal/Pande Bhanu, Red Storm Rising, Business Standard, 4 May, 1999.
22. Dobhal Shailesh, Capping The Cola War, Business Today, 7 May, 1999.
23. Ghosh Partha, Alert Pepsi Steals The Diet Fizz, Business Line, 6 June, 1999.
24. Bansal Shuchi, Pepsi World Cup Super Six, Business World, 21 June, 1999.
25. Kurian Boby, Coke Mini At…., Business Line, 24 February, 2000.
26. Bhattacharjee Dwijottam/Joseph Sandeep, Saving Coke, Business World, 6 March, 2000.
27. Pande Bhanu, Is Less More, Business Standard, 9 May, 2000.
28. Ghosh Partha, Coke, Pepsi Suppliers Join The Cold War, Business Line, 11 May, 2000.
29. Chatterjee Dev/Masanad Rajiv, Kaha Na War Hai…., Indian Express, 13 May, 2000.
30. Kurian Boby, Pepsi, Coca-Cola Claim…., Business Line, 13 July, 2000.
31. Guha Diganta, Coca-Cola’s Wishful Thinking, Business Line, 21 November, 2000.
32. Mukherjee Shubham, Cola Rivals Feel Heat Before Summer, The Economic Times, 25 February, 2001.
33. www.indianinfoline.com

(Footnotes)
[1] Since Pepsi could not get the official sponsoring rights for the event, the phrase ‘Nothing official about it’ was used as a punch line to indicate that Coke being the official sponsor was ‘no big deal.’
[2] A tool for measuring the percentage of people recalling an advertisement.
[3] Stochastic share is a tool used for measuring advertising effectiveness. It quantifies the discrepancy between attitudes and behavior of the target segment.
[4] Stock Keeping Unit.
LESSON 11:
INTERVIEW - PREPARE INTERVIEWER INSTRUCTIONS
Topics Covered
Interview, Finding, Identifying, Introductions, Techniques,
Finishing, Verifying, Paperwork.
Objectives
Upon completion of this Lesson, you should be able to:
• Choose the place of interview.
• Find interviewers.
• Find people at home.
• Identify the right respondent.
• Handle introductions and interviewing techniques.
• Finish an interview.
• Verify interviews.
• Prepare the paperwork for interviewers.
Prepare Interviewer Instructions
When the interviewers are being trained, they should be given a
printed set of notes, which they can use for reference when they
encounter a problem.
At the very least, the interviewer instructions should be a
photocopy of the survey questionnaire which has been filled in
as an example. Handwritten notes can be added, pointing out
likely problems and giving information on how to fill in some
items.
When fieldwork logs are being used, each interviewer should
also be given a photocopy of a completed log, to use as an
example.
Interviewer instructions can also include:
• What to say to respondents when introducing the survey.
• More detailed notes about particular questions: how to
probe, etc.
• How to fill in a pay claim, with an example of a completed
one.
• A map showing how to reach each cluster they will work in.
• An example of a sketch map for a cluster, showing where the
respondents live.
• If the questionnaire is a complex one, with a lot of skipping,
it’s useful to give interviewers what I call a railway diagram.
This is a chart laid out like a map of a railway track, with each
“station” being a question, and some questions being
skipped for some respondents (just as an express train skips
some stations). Here’s an example:
[Railway diagram not reproduced: questions 1-15 drawn as stations on a line, with skip routes branching around some questions.]
This railway diagram shows that everybody was asked the first
two questions, but after question 2, some respondents skipped
to question 14. After Q3, some people were asked Q4 and
others were asked Q5. Some respondents were not asked Q7.
Which questions were asked of all respondents? The diagram
shows that the only questions asked of all respondents were 1,
2, 14, and 15. Without using a railway diagram, it’s difficult to
answer this question accurately.
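A railway diagram also translates directly into routing logic if the questionnaire is administered (or checked) by computer. The Python sketch below reproduces the example’s skips; the branch conditions are invented placeholders, since the answers that trigger each skip are not given here.

def next_question(current, answer):
    """Return the next question number, given the current one and its answer."""
    if current == 2:
        return 14 if answer == "no" else 3   # some respondents skip to Q14 (assumed condition)
    if current == 3:
        return 4 if answer == "yes" else 5   # some get Q4, others Q5
    if current == 6:
        return 8 if answer == "no" else 7    # some respondents are not asked Q7
    return current + 1                       # default: the next "station" on the line

# Trace one hypothetical respondent's route through a 15-question survey:
q, route = 1, []
answers = {2: "yes", 3: "no", 6: "no"}       # invented answers
while q <= 15:
    route.append(q)
    q = next_question(q, answers.get(q))
print(route)   # [1, 2, 3, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15]

Whatever route a respondent takes through this logic, questions 1, 2, 14 and 15 are always reached - the same conclusion read off the diagram above.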
Training Supervisors
Supervisors need to learn the same topics as interviewers, but
to understand these more thoroughly than the interviewers do.
They also need to understand the principles of editing, coding,
and verification.
For a large survey, which will use many interviewers, it’s a good
idea to hold two training sessions:
• an initial one for supervisors and perhaps a small number of
interviewers, and
• a later training session for most of the interviewers (or more
than one, if training will be done in several different areas).
When you hold several training sessions, the pilot testing can be
done by the supervisors or the interviewers, before all the
questionnaires are printed. With the experience gained in the
first, smaller training session, the large training session should
run more smoothly.
Training Office Staff
When completed questionnaires are returned, they need to be
counted, checked, edited, coded, and (if computers are being
used) entered into a computer. This is covered in more detail in
the next chapter. Supervisors also need to understand these
processes - except perhaps computer data entry.
Area Checking
When a survey will cover a fairly small area - one city or region - it’s often possible to visit some clusters before the interviewers
go there. This will give the survey organizers some idea of the
problems likely to be encountered by interviewers.
If population information is poor, and a highly accurate survey
is needed, you will need to use block listing. This means that
interviewers or supervisors will need to visit each cluster in
advance, to work out how many households are there, and
which ones should be sampled.
Even if there’s no need to check all the clusters in advance, or
the budget doesn’t allow this, it’s always a good idea for the
supervisors to visit a few of the clusters that seem to be
difficult in some way. So when the interviewers come to their
supervisors with their problems, the supervisors will know
what the interviewers are talking about.
Verification for Door-to-door Surveys
The international standard is that 10% of all interviews are
verified (or validated). An interview can be verified in several
ways:
• the supervisor can arrange to meet the interviewer, and
attend an interview with him or her
• another interviewer (or supervisor) revisits the respondent a
day or two later, and repeats key questions from the
interview.
• a postcard is sent to the respondent asking for his or her
confirmation that the interview was done, and perhaps to
confirm the answers from a few questions.
• when an interviewer leaves a questionnaire with the
respondent, to be collected a day or two later, a different
interviewer can be assigned to collect the completed
questionnaire.
The purposes of verification are to check that the interview was
actually done (that the interviewer did not sit at home and make
up all the answers) - and also to gain an estimate of the
variability of answers, or the extent to which respondents
change their minds.
For verification to be effective, interviewers must know their
work will be verified. They should also know that some of that
verification will be unpredictable. For example, if verification is
done only by having supervisors accompany the interviewer, it
won’t be so effective.
In most circumstances, cheating by interviewers is rare. But
when an interviewer is inexperienced, and conditions are
difficult, and there’s a financial incentive to cheat, it occasionally
happens. In all my years of managing audience research, I’ve
known this to happen only a few times. Without verification,
such cheating would be very difficult to detect; but the main
deterrent is for interviewers to know that a tenth of their
interviews (and they don’t know which) will be verified.
For verification, you should prepare a shortened version of the
questionnaire, omitting all questions which would not produce
the same answer on a different day. Demographic details (age
group, occupation, sex, etc.) hardly ever change, so these
questions should be repeated to make sure the same respondent is being verified. Knowledge, habits, and awareness are
more likely to change within a few days. Questions on attitudes
produce much less stable answers, so there’s little point in
verifying these. There’s usually no need for a verification
interview to take more than five minutes, or include more than
two pages.
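If interview records are kept on a computer, selecting the 10% of interviews for verification can be done randomly, so that interviewers cannot predict which interviews will be checked. The sketch below is illustrative only - the function name and the mix of verification methods are assumptions based on the options listed above.

import random

METHODS = ["supervisor revisit", "second-interviewer revisit", "confirmation postcard"]

def select_for_verification(interview_ids, rate=0.10, seed=None):
    """Randomly pick about `rate` of interviews and assign each a verification method."""
    rng = random.Random(seed)
    n = max(1, round(len(interview_ids) * rate))
    chosen = rng.sample(interview_ids, n)
    return {i: rng.choice(METHODS) for i in chosen}

ids = [f"INT-{k:03d}" for k in range(1, 101)]       # 100 completed interviews
for interview, method in select_for_verification(ids, seed=42).items():
    print(interview, "->", method)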
Checking the Early Interviews
Soon after completing each interview, the interviewer should
bring that questionnaire to his or her supervisor for checking.
Interviewers should of course check questionnaires themselves,
as soon as they have filled them in, but it’s surprising how
much a second person can see that the first person doesn’t
notice. At the beginning of a survey, supervisors should check
each interviewer’s questionnaires frequently - even daily - but
later checking can be less frequent.
When supervisors go out with interviewers in their fieldwork,
this should be done as early as possible. That way, any consistent mistakes that an interviewer is making can be corrected
before many interviews are affected.
Interviewers’ Preparation
Before an interview can begin, an interviewer usually has a lot
of work to do. In a normal door-to-door survey using clusters,
he or she must
• find out where the cluster is,
• go there,
• find a selected household,
• choose a respondent,
• persuade that respondent to take part in the survey,
• and introduce the survey.
Only then can the interview begin.
Plan Cluster Visits
It’s usually necessary to make at least 2 visits to each cluster,
because some respondents won’t be home on the first visit.
This principle is very important for audience research: ignoring it
will produce overestimates of radio and TV audiences (because
most media use is at home). I recommend making at least 3
visits to each cluster before substituting other respondents.
If a cluster is a long way from the interviewer’s home, and the
budget doesn’t allow for overnight stays, an interviewer can
make 2 or 3 visits on the same day, coming back at different
times if necessary to find a respondent at home.
Travel expenses are a large part of the total cost of any door to
door survey. The bigger the area that the survey covers, and the
further interviewers must travel, the higher the travel expenses
will be. Planning cluster visits so that interviewers travel as short
a distance as possible is a very effective way of reducing the cost
of a survey. For example, sometimes transport costs can be
shared when several interviewers must go to a group of
neighbouring clusters.
Finding the Household
Much of the material in the next few sections has also been
covered above in the chapter on sampling, but here it is
presented from an interviewer’s point of view, with attention to
the practical problems.
Interviewing in Clusters
Because many of the costs of a door-to-door survey are related
to travel, reducing the amount of interviewers’ travel can save a
lot of money.
Therefore, most door-to-door surveys are done using clusters.
Instead of selecting households scattered at random all over the
survey area, households are grouped into clusters: usually 30 or
more of these, with between 4 and 20 households in each
cluster. The larger the cluster size, the less efficient the sample - but the smaller the cluster size, the higher the costs. A cluster is
usually small enough for an interviewer to walk from one end
to the other in less than half an hour.
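The efficiency loss from clustering is often quantified with the standard design-effect formula from survey sampling, deff = 1 + (m - 1) x rho, where m is the number of interviews per cluster and rho is the intra-cluster correlation. This formula is general survey practice rather than something stated in this text, and the rho value below is an arbitrary illustration.

def effective_sample_size(n_clusters, per_cluster, rho=0.05):
    """Return (actual n, effective n) under the Kish design-effect formula."""
    n = n_clusters * per_cluster
    deff = 1 + (per_cluster - 1) * rho
    return n, n / deff

for m in (4, 10, 20):          # the range of cluster sizes mentioned above
    n, n_eff = effective_sample_size(30, m)
    print(f"{m:2d} per cluster: n = {n:3d}, effective n = {n_eff:5.1f}")

With 30 clusters, raising the cluster size from 4 to 20 quintuples the number of interviews but far less than quintuples the effective sample - which is exactly the trade-off described above.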
Every cluster has a defined starting point. This could be a
dwelling (taken from a population list), or it could be a place
such as a street intersection. If a detailed map of the area is
available, each interviewer should be given a copy of that map,
and the starting point should be shown on the map. If there
are no maps, or if the maps do not show enough detail, the
interviewer will take longer to find the starting point, and the
verifier may not be able to find the same point.
Follow the Route
When the cluster’s starting point is found, the interviewer
follows a set of rules for finding households. Some examples
of these rules are:
• When the starting point is a street intersection: choose the
street that is heading the closest to north.
• Always follow the left hand side of the street, keeping the
houses on your left and the road on your right.
• If you reach the end of the street, turn left at the next
intersection. If there are no more dwellings visible, cross the
road and come back along the opposite side. This happens
when you enter a rural or commercial area. The street is
treated as if it is a dead-end one.
• If you go right around the block, and come back to the
dwelling you started from, cross the road, turn to face the
opposite direction, and continue your route on the other
side of the road.
• When the starting point is in a densely populated rural area
without roads: first, travel north, trying to interview people
in dwellings at least 50 metres (60 steps) apart. After
interviewing at 4 houses, travel north for at least 1 kilometre
(about 1200 steps), then choose a clearly visible
turning point. Turn left, travelling west, trying to interview
people in dwellings at least 50 metres apart... And so on,
until a square of at least 1.2km on each side has been walked
around anti-clockwise, and you return to the starting point.
I recommend using the left-hand rule described above, if possible: i.e.
follow the left hand side of the road, turn left if the road ends
in an intersection, and come back along the other side if the
road finishes with a dead end. This rule usually works well, in
both urban and rural areas, and is easy for interviewers to
follow.
Even if the starting point is a dwelling, it is normal not to
interview anybody there. Why is this? Mostly because population lists always seem to be incomplete or out of date. Even the
best population lists commonly omit at least 10% of dwellings.
So by not interviewing at the starting point, all other dwellings
are interviewed on the same basis.
Skip Intervals
People who live next door to each other usually have very
similar characteristics and opinions. So by making a cluster
larger and spreading it out, a wider range of people will be interviewed, and a better cross-section of the whole area obtained.
One way of effectively increasing the spread of the sample
without increasing the number of households per cluster is to
use a “skip interval” - not interviewing at every neighbouring
dwelling, but leaving gaps between the surveyed dwellings.
This slightly increases the distance that interviewers must walk,
but usually ensures that the end of a cluster is a different kind
of neighbourhood from the beginning of the cluster.
I once made a study of this, and found that the best skip
interval was 4 - i.e. interviewing at every 4th dwelling on the
route: interviewing at one, and missing out the next 3. With
this rule, using a cluster size of 10 (with one respondent per
household), the last dwelling surveyed would be at least 40
dwellings away from the starting point. (More than 40, if some
people refused to participate, or were not found at home.)
The more similar people are to their next-door neighbours, the
larger the skip interval should be. I recommend in all cases a
skip interval of at least 2 (i.e. interviewing at every second
household), but no more than about 6. Above that, the
interviewers tend to make mistakes counting households, and
also have to walk much further, with no great increase in the
diversity of the sample.
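To make the route arithmetic concrete, here is a minimal sketch in Python (my own illustration, not part of the original text) of which dwellings along the walking route an interviewer approaches, for a given cluster size and skip interval:

def dwellings_to_approach(cluster_size, skip_interval):
    """Positions along the walking route (counting from the starting
    point, which itself is never interviewed) of the dwellings at
    which interviews are attempted."""
    return [i * skip_interval for i in range(1, cluster_size + 1)]

# The study described above: cluster size 10, skip interval 4.
# The last dwelling approached is 40 dwellings from the start.
print(dwellings_to_approach(10, 4))   # [4, 8, 12, ..., 40]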
LESSON 12:
FIELDWORK
Topics Covered
Interview, Finding, Identifying, Introductions, Techniques,
Finishing, Verifying, Paperwork.
Objectives
Upon completion of this Lesson, you should be able to:
• Choosing the place of interview.
• Finding interviewers.
• Finding people at home.
• Identifying the right respondent.
• Introductions. Interviewing techniques.
• Finishing an interview.
• Verifying interviews.
• Paperwork for interviewers.
Choosing the Respondents
If not all people in the household are to be surveyed, the next
step for an interviewer is to choose one or more respondents.
The methods are designed to give everybody an equal chance of
selection:
1. The birthday method: choose the person (usually above a
minimum age) who last had a birthday, or will next have a
birthday
2. The grid method: find out how many people in the
household are over the minimum age to be surveyed, and
choose one by looking up a table
3. A quota method: e.g. fulfilling a quota that 20% of
interviews must be with men aged over 45.
4. The youngest/oldest man/woman method - choosing
alternately the youngest man, youngest woman, oldest man,
and oldest woman.
The birthday method (no. 1) can’t be used unless everybody in a
household knows the birth dates of everybody else. The grid
method can’t be used when household members don’t know
or won’t tell the interviewer how many people live in the
household. (For example, in some countries, with taxes on each
person, households may pretend there are fewer people than
there really are.) The quota method isn’t as accurate as the
others, and tends to under-represent people who are often away
from home. Respondents sometimes find the youngest/oldest
man/woman method hard to understand (“How can I be the
youngest man, when I’m 70 years old?” - Answer: He’s the only
man, so he’s both the youngest and the oldest).
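As a rough illustration of how the first two selection methods could be mechanized - this is a hedged sketch of my own, since real surveys use a printed selection table (such as a Kish grid) rather than the toy rule below:

from datetime import date

def last_birthday_select(birthdays, today=None):
    """Birthday method: choose the eligible person whose birthday most
    recently passed. `birthdays` maps names to (month, day) tuples;
    assumes the household knows everyone's birth date (and ignores
    29 February for simplicity)."""
    today = today or date.today()
    def days_since(md):
        bday = date(today.year, md[0], md[1])
        if bday > today:                      # birthday still to come this year
            bday = date(today.year - 1, md[0], md[1])
        return (today - bday).days
    return min(birthdays, key=lambda name: days_since(birthdays[name]))

def grid_select(serial_number, n_adults):
    """Toy stand-in for the grid method: use the questionnaire's serial
    number to pick one of the n_adults (returns 1..n_adults), so that
    selections vary systematically across households."""
    return (serial_number % n_adults) + 1

print(last_birthday_select({"Asha": (3, 14), "Ravi": (11, 2)},
                           today=date(2000, 5, 8)))   # -> Asha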
One way of overcoming the problem of choosing respondents
within a household is to choose everybody. This works well
when questionnaires are self-completed (it stops people from
filling in others’ questionnaires), but results in less efficient
surveys in the case of personal interviews. Because people in a
household tend to be similar to each other, the effective sample
size is less than when one person is interviewed in each
household.
Another alternative is to base the number of interviews in a
household on the number of people. This overcomes some of
the theoretical problems in choosing one person per household, a method which over-represents people living in small
households. A common version of this is to interview one
person in households with one or two adults, and two people
in households with three or more adults.
Whichever method is chosen, the interviewers need to know:
• How many respondents to choose in each household, and
• How to select those respondents.
Making Appointments
When a respondent who lives in the household has been
chosen, that person may not be there when the interviewer first
calls. In these cases, it will be necessary to make an appointment
to return, when the chosen person is home.
But with door-to-door surveys, the samples are usually in
clusters of households. Because of the time and expense
involved in making many visits to a cluster, the interviewer
needs to arrange appointments to avoid too many visits.
Therefore appointment times need to be approximate. Instead
of saying “I’ll come back at 6.47pm tomorrow” the interviewer
needs to say something like “I’ll be back about 7pm tomorrow.”
Of course, appointment times need to be recorded by the
interviewer. It’s also a good idea to give each household where
an appointment is made a card showing the interviewer’s name
and the time and day of the appointment. This helps to ensure
that the respondent will be at home at the appointed time.
Repeat Visits
In societies where a lot of people are away from home working
(e.g. most developed countries) it’s usual for an interviewer to
make 3 or 4 visits to a cluster of households. In societies where
most people work at home, or nearby, two visits is often
enough.
These visits can even be on the same day, but at different times.
Substitution of Respondents
In cluster surveys, each cluster is designed to include a fixed
number of households - often around 10 of them. And
usually one interview is done at each household. So the total
sample in a survey is the number of interviews in each cluster,
multiplied by the number of clusters. For example, a survey
with 40 clusters of 10 households, and one interview at each
household, will produce a total sample of 400.
What happens if some of those planned interviews can’t be
made? This can happen for several reasons:
• because the person chosen as a respondent refuses to take
part in the survey
• because it is impossible to interview the chosen respondent
• because, on the interviewer’s final visit to the cluster,
somebody who has agreed to be interviewed is not home for
the appointment.
I call these lost interviews. There are two ways of dealing with
these: either the total sample size can be smaller than planned,
or another interview is made in that cluster, to bring the total
number of interviews back to the planned number. The latter
choice is known as substitution.
The advantages of substitution are that the final sample size
will be known in advance, and that the balance of respondents
across clusters will follow the sample design. The disadvantages
of substitution are that any problems with response rate are
hidden, and simply adding extra households at the end of a
cluster will not compensate for refusals if a particular type of
person is refusing.
In general, I have found that it is better to use substitution.
The main reason for this is that certain types of area have much
higher numbers than others of lost interviews. These are
usually inner-city areas and places with transient populations.
Without substitution, these areas (and the types of people who
live in them) are usually under-represented in a survey.
Substitution is normally done by adding extra households to
the end of the route in a cluster.
Take the example of a cluster that should have 10 interviews,
with a skip interval of 4: every fourth household is approached
for an interview. So 40 households need to be walked past.
Suppose that at one of the 10 households contacted, somebody refused to be surveyed, and that the selected respondent
at another household didn’t keep an appointment made on the
previous visit. If this was the interviewer’s last visit to the
cluster, 2 more households need to be added. These are added
to the end of the planned route. Beyond the 40th household,
another three are skipped, and the interviewer tries to get an
interview at the 44th household, and then another at the 48th.
If one of these refuses, or the selected person is not home, the
interviewer will have to walk past another three dwellings, and
seek an interview at the 52nd household. In the end, 10
interviews are completed, even though the cluster may have
grown much larger than the original span of 40 dwellings.
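A small sketch of that worked example (again my own illustration, not from the original text): substitutes are found by continuing the same skip pattern past the planned end of the route:

def route_positions(planned_interviews, skip, lost_interviews=0):
    """Dwelling positions approached in a cluster. Lost interviews
    (refusals, broken appointments) are substituted by adding extra
    households beyond the planned route, keeping the skip interval."""
    total = planned_interviews + lost_interviews
    return [i * skip for i in range(1, total + 1)]

# 10 planned interviews, skip interval 4, 2 lost interviews:
# the route grows past the 40th dwelling to the 44th and 48th.
print(route_positions(10, 4, lost_interviews=2))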
Screening Respondents
When a survey does not cover the whole population, there
must be a screening process to eliminate people who are not
eligible to participate. This is done by asking a few questions,
and eliminating people or households who don’t give suitable
answers. Sometimes these screening questions can be asked of
anybody in the household, but sometimes the selected
individuals themselves must be asked. It’s easier if anybody in the household can be asked the questions, but this is feasible only if
everybody in the household knows the relevant details about
everybody else.
For example, a survey might cover only people who listen to a
particular radio station. Let’s call it FM99. (This survey would
not be able to find out what proportion of the population
listen to the station, because non-listeners would not be
included.) So a suitable screening question would be “Does
anybody in this household ever listen to radio FM99?”
If the answer is No, the interviewer notes this in the log, and
moves on to the next household in the route. If the answer is
Yes, the interviewer finds out how many people listen to the
station, and selects one or more respondents. If nobody who is
there when the interviewer first calls knows the answer to the
question, the interviewer will have to call back later, when other
people are at home.
When only a small proportion of the population are involved
in the activity of interest (e.g. listening to FM99) asking
screening questions in door-to-door surveys is very expensive.
If only 1 person in 100 listens to FM99, about 100 households
must be visited to find one listener. For this reason, door-to-door surveys usually cover the whole population. At least it is
then possible to find out that 1 person in 100 listens to FM99.
But in telephone surveys, where people can be contacted much
more quickly, screening is less expensive, so more common.
Even for telephone surveys, when fewer than about 1 in 10
people qualify to be interviewed, screening is relatively expensive. It’s common to spend more time on screening out the 9 in
10 unwanted people than in interviewing the 1 in 10 who you
really want to contact. In this situation a good solution is to
include a screening question on a survey of the whole population which is being conducted for another purpose. Then, as
long as they give their permission, these respondents can be
recontacted later for a more detailed interview.
Screening Scores
Sometimes a screening question is obvious - for example “Do
you ever listen to FM99?” But when you want to survey
potential users of a service, not existing users, it’s much harder
to develop a suitable screening question. One way of doing this
is to find out which types of people use a service, then ask a
series of questions to identify these people. For example, I once
needed to find out about people who were potential listeners to
a radio station. An earlier survey had found that existing
listeners to this radio station tended to read a particular
newspaper, to prefer current affairs to sports programs, to have
tertiary education, and two other criteria that I don’t remember.
We asked 5 screening questions. In simplified form, these
included:
“Which newspapers do you read?”
“Would you rather listen to a sports program or current
affairs?”
“What is your highest level of education?”
...and two other questions.
For each of these questions, a respondent who gave the same
answer as most listeners to the station was given one point.
Respondents who received 4 or more points for the 5 questions, and did not already listen to the station, were “screened
in” - i.e. included in the main survey.
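In code, that scoring rule might look like the following sketch. The profile values here are invented placeholders, since the original criteria are only partly remembered in the text:

# Hypothetical listener profile: typical answers of existing listeners.
LISTENER_PROFILE = {
    "newspaper": "The Morning Herald",     # placeholder title
    "program_preference": "current affairs",
    "education": "tertiary",
    "criterion_4": "yes",                  # stand-ins for the two
    "criterion_5": "yes",                  # forgotten criteria
}

def screening_score(answers):
    """One point per screening question answered the way most
    existing listeners answer it."""
    return sum(1 for q, typical in LISTENER_PROFILE.items()
               if answers.get(q) == typical)

def screened_in(answers, already_listens):
    """Screen in: 4 or more points out of 5, and not already a listener."""
    return screening_score(answers) >= 4 and not already_listens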
Interview logs
Another necessary piece of paper is the interview log: the
interviewer’s record of the attempts made to contact each
selected household. With cluster surveys, when the interviewer
visits a number of different households in a neighbourhood, a
single log is used for the cluster, with one line per household.
Usually there are between 4 and 20 households in a cluster.
With surveys that don’t use clustering - such as telephone
surveys - there is one page per household, or per telephone
number. In this case, the logs can be smaller: often A5 size. The
advantage of having a separate log for each interview is that the
logs can be sorted into heaps: for finished interviews, numbers
that will not produce interviews (e.g. refusals), pending
appointments, and numbers not yet contacted. It’s possible to
do a telephone survey without using logs: the same information is recorded at the beginning of each questionnaire.
However, a lot of paper will be wasted, because many telephone
numbers do not result in completed interviews.
In cluster surveys, logs are almost essential, because they give
interviewers a record of their progress in the cluster, on a single
sheet of paper.
INTERVIEW LOG FOR AMHARA SURVEY, 2000
Locality name: Fashion Street (Mumbai)    Cluster no.: 14
Address of starting point: 27 Main Rd    Interviewer: Naresh
Visit 1: Monday 8/5 - Arrival time 0927, Departure time 1235
Visit 2: Tuesday 9/5 - Arrival time 1525, Departure time 1840
Visit 3: Wednesday 10/5 - Arrival time 1930, Departure time 2110
Result codes: C = completed interview; L = come back later;
N = nobody home; U = unable to interview; R = refused;
A = appointment made
Columns (one line per household): Address | Visit 1 | Visit 2 |
Visit 3 | Interview serial | Verified
Information on logs
The interview log contains a lot of information:
1. about the cluster, the interviewer, and the visits:
  • where the cluster is
  • the code number of the cluster
  • the interviewer’s name
  • the dates and times of visits by the interviewer
2. about the households and the interviews:
  • address of dwelling and/or name of householder
  • number of interviews to be made there (if not always 1)
  • result of each visit. The main possibilities, in a door-to-door survey, are:
    • interview completed
    • come back later, no specified time
    • appointment made to return on ... day at ... time
    • nobody home - can’t make appointment
    • unable to interview (sick, deaf, no common language, etc.)
    • refused to take part in survey
    • other results are possible, but rare. These can be written in.
  • serial number of each interview made at that household
On the next page is an example of a general purpose log, which
you can change as necessary to suit your situation. The top
section is usually filled in by the office staff, before the log is
given to the interviewer. The interviewer only has to fill in the
dates and times of visits, and the lines for the households.
The back of the log is usually blank. This space is used for
explanatory notes about particular households (which are rare).
I also encourage interviewers to write general comments about a
cluster on the back of the log. These comments can be useful
when planning later surveys, or for resolving problems.
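For surveys managed on a computer rather than on paper, the same log could be represented with a simple structure like the following sketch (the field names are my own, chosen to mirror the paper log above):

RESULT_CODES = {
    "C": "completed interview",
    "L": "come back later",
    "N": "nobody home",
    "U": "unable to interview",
    "R": "refused",
    "A": "appointment made",
}

cluster_log = {
    "survey": "Amhara Survey, 2000",
    "cluster_no": 14,
    "starting_point": "27 Main Rd",
    "interviewer": "Naresh",
    "visits": [],        # (date, arrival_time, departure_time) tuples
    "households": [],    # one line per household on the route
}

def record_result(log, address, visit_no, code, interview_serial=None):
    """Add one line to the log: the outcome of one visit to one household."""
    if code not in RESULT_CODES:
        raise ValueError("unknown result code: " + code)
    log["households"].append({
        "address": address,
        "visit": visit_no,
        "result": code,
        "interview_serial": interview_serial,
        "verified": False,
    })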
Persuading the Unwilling Respondent
When a person selected as a respondent is found, the next step
is for the interviewer to persuade that person to be interviewed.
In most developing countries, this is no problem. However, if
refusal rates rise above about 10%, this can affect the accuracy of
the survey. (The people who refuse to take part may be
consistently different in some way - for example, if they are told
this is a survey about radio, the people who don’t listen to
radio may often refuse.)
It’s therefore important for the interviewer to be persuasive.
When the interviewers are being trained for the survey, they can
be given a list of likely excuses, and given responses to
overcome them.
Reason for refusing - Response from interviewer:
• “I hardly ever watch TV, so I’m no use to you.” - “Everybody’s
opinion is important, whether they watch TV or not. If you
don’t watch, you will only be asked a few questions.”
• “I’m too old for this. Interview my children instead.” - “It’s
very easy, and we want to get opinions from everybody, no
matter how old or young.”
• “I’m too busy.” - “I can make an appointment to come back
later, at a time when you’re not busy.”
• “I think TV programs are terrible, so I won’t co-operate.” -
“This is your chance to let the TV stations know the public’s
opinion of the programs.”
Whenever somebody refuses to take part in the survey, the
interviewer needs to find out the reasons for refusal.
Temporary and Permanent Refusals
Very often, when somebody refuses to be interviewed, this is
because of the situation at the time. A woman whose baby is
demanding to be fed, a man who is about to go off to work, a
family having a meal, watching a favourite TV program, or
being visited by friends - in all these situations, an interview will
probably be unwelcome. So on receiving a refusal, the interviewer should find out the reason for this, and then ask “Can I
come back later?”
In most cases, the respondent will agree to be interviewed on
another day.
When a Respondent cannot be Interviewed
Sometimes you may find a person who cannot be interviewed.
They may be deaf, mentally deficient, drunk, senile, ill, not have
a language in common with the interviewer, or in some other
way not able to communicate well. Usually, about one or two
people in 100 are in this category. But are they always like this
(e.g. deaf), or only temporarily (e.g. drunk)? If the difficulty in
interviewing a selected respondent is only temporary, the best
solution is to make an appointment to return another day. If
the difficulty is permanent, the interviewer should write a note
explaining the problem, and find a substitute person to
interview. There are several ways to find a substitute:
• Pretend the uninterviewable person does not exist, and
choose the next best person in the same household.
• Abandon that household, and choose a person in a
neighbouring household.
• Don’t choose a substitute at all, so that the final sample size
is one less.
There are arguments for and against all three of these approaches. Usually the second method is best, because the other
two methods will distort the sample slightly.
It’s important to distinguish between somebody who cannot
be interviewed, and somebody who is not in the target
population. If the survey is trying to represent the whole
population of an area, people should not be excluded because
they do not take part in the activity the survey is studying. For
example, if you are doing a survey of radio listening, and trying
to find out what percentage of the population listen to radio,
people who never listen to radio must be included in the survey
- otherwise you’d find that 100% of people listen to radio. Even
if some respondents don’t think it’s useful to interview them,
and they don’t need to answer many questions, they should at
least be counted.
Some interviewers think that people who don’t do the activity
cannot be interviewed. An important part of interviewer
training is to make interviewers understand the difference
between those who don’t do the activity, and those who can’t
be interviewed. Mistakes in this area will seriously affect the
results of the survey. At the end of a survey, the percentage of
potential respondents declared uninterviewable should be
calculated for each interviewer. If any interviewer has more than
a few percent in this category, all these cases should be carefully
investigated.
LESSON 13:
INTERVIEWING
Topics Covered
Interview, Finding, Identifying, Introductions, Techniques,
Finishing, Verifying, Paperwork.
Objectives
Upon completion of this Lesson, you should be able to:
• Choosing the place of interview.
• Finding interviewers.
• Finding people at home.
• Identifying the right respondent.
• Introductions. Interviewing techniques.
• Finishing an interview.
• Verifying interviews.
• Paperwork for interviewers.
Interviewing
When a suitable respondent has been found, and is ready to be
interviewed, at last the interview can begin. It’s now time to
introduce the survey.
Introductions
When a likely respondent has been found, the interviewer now
needs to briefly explain the purpose of the survey. In a country
where most people have never been interviewed, the interviewer
may also need to explain what a survey is, and how it works.
Many survey organizations have a printed introduction on each
questionnaire, and require the interviewers to read this aloud.
To me, this always sounds very stilted, and it’s not a good way
of persuading people to take part in the survey - which is the
whole purpose of an introduction. Instead, I ask each interviewer to develop his or her own introduction, including:
• The name of the interviewer
• The organization responsible for the survey (either the
research group or the client)
• The subject matter of the survey
• The general purpose of the survey (in the case of media
surveys, this is usually “to improve the programs”)
• The average time of the interview
• Making it clear that participation is voluntary (but without
encouraging refusal)
• An assurance of confidentiality.
Here’s an example of an introduction.
“Hello, my name is Prem Chopra. I’m working for Audience
Dialogue, and we’re doing a survey with owners of new houses
about the reasons why people choose to live in particular areas.
The purpose of the survey is to improve the planning of
housing facilities. So I’d like to ask you some questions. This
will take between 10 and 20 minutes, and all your answers will
be kept strictly confidential. So may I interview you, please? Is it
OK now, or can I make an appointment for another day?”
This was from a telephone survey in Delhi. For a face-to-face
interview, introductions are usually longer than this, and in
other countries, a much longer and more eloquent introduction
could be expected.
After the introduction, when the respondent has agreed to
participate (but before the questions begin) the interviewer can
give further information about the process. This can include:
• the respondent can withdraw cooperation at any time;
• the respondent can refuse to answer individual questions;
• a more detailed explanation of the confidentiality provisions:
“The only reason I’m asking your name is so that my
supervisor can check up on me, to make sure I’ve really
interviewed you.”
The respondent at this point is invited to ask any questions
about the survey.
Creating a Comfortable Setting
Before the interview can begin, the respondent needs to feel at
ease, not threatened in any way, and not hurried. If the
interview will last more than a few minutes, it’s usually best to
sit down. In some countries, a research interview is such an
unusual activity that it may attract a lot of onlookers. I’ve seen
interviews in developing countries where more than 20 people
were watching and listening. This doesn’t make it easy for a
respondent - but the same person may also feel uncomfortable
closeted indoors with the interviewer, particularly one of the
opposite sex. If the respondent is a woman, and her children
are hanging around, this can be a great distraction. It’s often
best for the interviewer to take responsibility for shooing away
onlookers, explaining to them that the respondent has been
selected at random, and that this is a private matter.
If some emergency happens in the middle of an interview - e.g.
a child hurting itself - the interviewer should offer to suspend
the interview, and return later to finish it.
The Actual Interview
After you have located a respondent, persuaded him or her to
take part in the survey, and explained its purpose, the actual
interviewing is perhaps the easiest part of the process.
All the interviewer needs to do is to read out the questions,
follow the instructions, and record the answers given.
If all goes well, it is very simple - but some respondents aren’t
easy to deal with. What do you do when people misunderstand
the questions, give irrelevant answers, refuse to answer some
questions, and so on?
If a respondent seems not to understand a question, the first
step is to repeat it, but more slowly. The reason for this is that
interviewers unconsciously speed up. By the time they are
interviewing their 20th respondent, they usually know the
questions by heart. There’s a tendency to speed up, to gabble the
question quickly. But each respondent is hearing the question
for the first time, and will need a few seconds to absorb its full
meaning.
When a respondent gives an answer that’s clearly irrelevant, the
interviewer should say something like “I’m not sure if you
heard that question properly. Let me repeat it....”
When the Respondent doesn’t Understand the
Question
If, after a question is repeated, the respondent still doesn’t
understand or is unable to answer, interviewers are tempted to
rephrase the question in simpler language. Most books on
surveys (and most survey organizations) say interviewers must
never do this, but if a question is badly worded, an interviewer
will usually try to get a valid answer.
One solution to this, of course, is to make sure that a question
(if it is the multiple-choice type) covers all possible alternatives,
that it is unambiguous, and that it is short enough to remember in its entirety.
Research on question wording has found that some types of
question - particularly questions about attitudes and beliefs - are
very sensitive to changes of wording. In some cases, changing a
single word can radically change the spread of answers. For
other types of question - about observable facts, such as age
group - the answers don’t vary much with changes in wording; but then, these questions are usually understood clearly by respondents anyway.
Despite the wishes of questionnaire writers, interviewers will
reword questions - especially in situations where they cannot be
checked, such as door-to-door surveys. To avoid this, training
must be very thorough, and you must explain to interviewers
how the wording can affect the responses. If interviewers are
treated as speaking-machines (as in many large market research
companies) the survey results will not be as accurate as possible.
In a survey, every multiple-choice question should allow for all
possible answers, but sometimes this doesn’t happen, and a
respondent may give an answer that’s not a listed alternative.
In this case, my advice is for the interviewer to write in the
answer that the respondent gives.
Probing
If an interviewer asks a question without also giving an
exhaustive list of possible answers, the respondent may not
answer it in the way intended, or may not give a full answer.
This calls for probing - which means asking follow-up questions to clarify the initial answer given by a respondent.
The more vaguely a question is worded, the more likely that the
interviewer will need to probe for an answer. Sometimes a
question is intentionally worded very loosely. This is perhaps
commonest when the survey organizers want to see how many
respondents mention a particular thing of special interest, but
don’t want to put words into people’s mouths by listing this
thing (whatever it is) as a possible answer.
Probing is better described by example, rather than being
explained in detail. Let’s take a question, and find out how to
get more detail out of the answers. We’ll start with a very vague
question, “What’s your opinion of the Prime Minister?” I’m
not recommending this as a real question in a survey, as it’s too
vague to be useful for most purposes; respondents are likely to
give all sorts of different answers which simply cannot be
compared. Any question as feeble as this should have been
weeded out at the piloting stage of a questionnaire - but if you
can probe this one, you can probe anything - especially when the
respondent avoids giving detailed answers.
Interviewer: What do you think of the Prime Minister?
Respondent: Oh, he’s OK, I guess.
As usual, a vague question gets a vague answer. The interviewer
now needs to probe, to make the answer more specific, but
without biasing the answer by making specific suggestions to
the respondent.
General Probes
There are some probe questions that can be used in practically
any situation, regardless of the previous answer. These include:
“Can you tell me some more about that?”
“Can you give some more details?”
“What do you mean by that?”
“Mmm-hmm.”
“Yes?”
“Can you explain that a bit more?”
“In what way?”
“I see...”
Pausing for several seconds, when an answer seems incomplete.
Similar phrases, which are not acceptable in probing, are
“Good” and “That’s right.” Respondents could take these to
mean that the interviewer agrees with and/or is satisfied with
the content of the answers; this could bias later answers.
Specific Probes
Specific probes (unlike the general probes just listed) are related
to the last answer given. For example, a suitable probe question
to follow up the answer “Oh, he’s OK, I guess” would be
“In what ways is he OK?” Notice the use of the plural “ways”
not “way”: the assumption is that there is more than one
answer to be given. To continue the imaginary dialogue:
Interviewer: In what ways would you say he’s OK?
Respondent: Well, at least he’s better than the other man.
Interviewer: Who’s “the other man”?
Respondent: The P.M. before him, I can’t remember his name.
Notice that probing has two elements: expanding on the
answer, and making it clearer. For example, the interviewer had
to ask who the “other man” was. It would have been unwise
not to ask, and to assume (for example) that the respondent
meant the leader of the opposition.
Interviewer: Can you give any examples of how this Prime
Minister is better than the previous one?
Respondent (after a long pause): He seems to stick to his
promises a bit better.
The interviewer is getting onto slightly dangerous ground here,
by asking for examples, as the question being probed simply
asked for an opinion, not the facts that supported it. However
as the respondent didn’t directly answer the first probe question, a more drastic than usual measure is called for, to get the
respondent to give a more direct answer. After a slight departure
from the question, it’s now time to return to it:
Interviewer: Are there any other opinions you have about the
Prime Minister?
Respondent: Well I suppose he’s good enough to vote for
again, when I think of the one before him.
Interviewer: So to summarize your answer, you think the Prime
Minister is OK, better than the previous one because he seems
to stick to his promises a lot better, and good enough to vote
for again. Is there anything else you’d add to that?
Respondent (very bored by now): No, that’s all.
The interviewer ends by asking “Is there anything else?” or
“Have I left out anything?”
The obvious danger in probing is creating an attitude where
none really existed. It’s common to find respondents like the
one in this example, whose opinions are not coherently formed.
The interviewer could have probed further, and the respondent
might have obligingly manufactured a detailed opinion on the
spot. However, if another interviewer had come along a month
or two later asking the same question, and the respondent had
forgotten the answer he gave the first time, the second
interviewer’s probing could construct a totally different answer,
building up from whatever aspect the respondent happened to
think of first.
Though the above example may make probing seem difficult, I
deliberately chose an over-vague question and an uncooperative
respondent. Usually the process flows much more smoothly.
The art lies in knowing what to ask, and when. A good tactic, if
you are a trainee interviewer, is to memorize some of the stock
phrases and try probing your friends and family in normal
conversation. The longer they take to notice what’s happening,
the better you are doing it! After a little practice, you’ll find it
comes quite naturally.
Another skill to learn is when to stop probing. Sometimes
respondents start to feel very twitchy when they realize what is
happening, and they can see no end to this barrage of detailed
questioning. If you detect signs of defensiveness, explain why
you are probing: “Often people can’t think straight away of the
full answers they’d want to give, so I’m trying to help you make
sure you don’t miss giving part of the answer.” Maintaining the
right tone of voice while probing will usually help to prevent a
defensive reaction.
Probing for Specific Media
When a radio or TV network has several channels, some
audience members who use only one channel will give it the
generic name. Many of these do not realize there is more than
one station. So if you are asking about specific stations, and a
respondent answers with the name of the organization, some
probing questions are needed, e.g.
• Is the station AM or FM? (or for TV, VHF or UHF)
• What is its position on the dial? (Respondents can even be
asked to go and look at a radio tuned to that station, and
report on the approximate dial number)
• Can you name some of your favourite programs or
announcers on that station?
As long as the interviewers are well trained in the differences
between the stations, they can usually work out which station a
respondent is referring to.
Filling in Questionnaires
The main principles of completing questionnaires are:
• Whenever a question is asked, all the answers given must be
accurately recorded.
• For all questions not asked (i.e. skipped over) nothing
should be recorded.
It’s also obvious that interviewers should write legibly. It’s easy
to say this when you’re sitting in an office, but when an
interview is being done outdoors, in wind and rain, it’s not
surprising that completed questionnaires are sometimes hard to
read. So it’s important, when designing a questionnaire, to
allow plenty of space for the interviewer to write open-ended
answers. Any money saved in the cost of paper for questionnaires is usually more than wasted in extra coding costs.
A good practice is to give interviewers more questionnaires than
they will need - about 20% more. Ask interviewers to recopy
any questionnaires that will be hard to read, and to send in both
the original questionnaire and the copy (with COPY written on
it). Then if any discrepancies are found with the neat copy, the
untidy original is there to be referred to. When copying questionnaires, interviewers usually copy the words exactly, but
sometimes forget to circle codes, tick boxes, etc.
When recording the answers given in probing, write down each
statement as it is made, word for word. It’s the convention to
separate each probe by a slash. (General comments by the
interviewer are often helpful, and are usually enclosed in
brackets.) Thus the interviewer in the above example would
have written:
At least he’s better than other guy / i.e. previous PM, can’t
remember name / seems to stick to his promises a bit better /
suppose he’s good enough to vote for again, when think of
previous one. [Gave up probing here - he didn’t seem to have
much of an opinion at all.]
Though it may seem tedious to write so much, the danger of
summarizing the answer on the spot is that the flavour of the
answer may be lost.
LESSON 14:
SURVEY
Topics Covered
Action Planning in the Survey Process
Survey, Telephonic, Mails, Introductions, Conversion, Call logs,
Design, Follow-ups, Receiving, Fax surveys.
During the action planning process for any survey, management
has a unique opportunity to be responsive to viewers about
important concerns. For Audience/ Viewer surveys, the
opportunity is similar, though the goal is to involve viewers in
identifying internal processes that may be causing problems
perceived during programming. In either case, as the following
figure suggests, communication lies at the heart of the survey
process and is crucial at every stage.
Objectives
Upon completion of this Lesson, you should be able to:
• Understanding Surveys
• Identifying respondents for telephonic surveys.
• Knowing how to do a refusal conversion.
• Writing call logs.
• Questionnaire design.
• How to do follow-ups.
• Receiving completed questionnaires.
• How to do Fax surveys.
This Research Note focuses on the topic of action planning: the
process of developing actions to address concerns raised by
survey results.
Some of our clients are conducting Audience/ Viewer studies.
Though we highly recommend involving survey respondents in
this part of the survey process, we recognize that this is
sometimes more easily done with general survey projects than
with Audience/ Viewer surveys, due to proximity and the
potential number of respondents. Part of this document will
discuss a method we have found effective for Audience/ Viewer
involvement in the action planning process.
The goal of a satisfaction survey, for Audience/ Viewers, is not
simply to measure the strengths and opportunities of a
channel. The survey is the initial step in an on-going process
designed to:
• maintain and improve superior performance
• improve viewership
• build credibility
• actively involve Audience/ Viewers in becoming partners with
the program in the success of the channel
• increase Audience/ Viewer retention
From this perspective, what happens after the survey has been
developed and administered is just as important as ensuring
that the survey content reflects the goals of the survey.
Responsible survey communication involves discussion about
not only what is happening but why it is happening. Recall that
when you distributed the surveys, you informed respondents
about the survey, why it was happening, and what you intended
to accomplish with the process. This initial communication can
stimulate interest for respondents. Many of them will be
interested to hear about the survey results. More importantly,
they will want to know what will change because of those
results.
This communication can be a powerful tool for building and
managing credibility. Each part of the survey cycle should
communicate respect for the survey respondents and commitment to using the results for positive change. This
commitment for positive change is evident when an organization takes the time to involve the right people (i.e., readers
and/or Audience/ Viewers) to properly understand the data
and develop appropriately responsive action plans.
What if we skipped the feedback session and went right into
the action planning process? Why isn’t it enough to tell
respondents about the plans for change? Consider a respondent
who does not hear the survey results and observes channel
actions that do not reflect the responses he or she personally
indicated. That respondent is then left with an unfavorable view
of the survey process and organizational credibility may suffer.
Whereas, when an organization shares the results, respondents
have the opportunity to understand that some of their
opinions may not coincide with the majority of survey
respondents. This gives respondents time to adjust their
expectations so that when action is taken, it can be seen as
appropriate. Therefore, it is important to provide respondents
with information about survey feedback as well as action plans.
Using the feedback process to stimulate action planning gives
management an opportunity to bring respondents and viewers
together to jointly focus on positive action rather than negative
feedback.
Involving Audience/ Viewers
Involving Audience/ Viewers in the action planning process can
be a challenge. Due to distance and potential numbers of
participants, it can also be expensive. We have found that an
effective way to involve Audience/ Viewers in this part of the
process is by expanding on Audience/ Viewer contact groups.
Audience/ Viewer contact groups are initially used to bring
together a cross-section of Channel viewers from different
departments and an appropriate sample of survey respondents
to discuss and clarify survey results. These groups can also be
effective for action planning. You may start the process during
the initial contact group, and follow up with a group dedicated
to the goal of developing effective action plans.
You can involve different samples of Audience/ Viewers for
each group, and conduct more than one group to get a good
understanding of the desires of your Audience/ Viewers.
Though for larger companies, it would be almost impossible to
involve all of your Audience/ Viewers in these groups, you can
make your Audience/ Viewers aware of the groups. Mention
that the participants were chosen by random sample (so
Audience/ Viewers won’t wonder why they weren’t chosen)
and discuss the resulting action plans. This can be accomplished
in a newsletter, a commercial, or other creative ways. Let your
Audience/ Viewers know you are involving them to direct your
Channel. This can serve not only to give you solid Audience/
Viewer information, but also to increase Audience/ Viewer
loyalty.
Ensuring Success
To increase the success of your action planning process, consider
the following steps:
• Based on a solid understanding of your Audience/ Viewers,
develop (or revise) and communicate your Channel’s vision
• Develop a series of action plans to move you toward
achieving that vision
• Write the plans with clear, concise directions for
implementation
• Prioritize your plans
• Implement the plans and develop a system for evaluating
progress, to keep the action on track
Vision
Robert Hunter said, “If you plant ice you’re going to harvest
wind.” His words are particularly relevant for the action
planning process. Change for the sake of change is a meaningless exercise that accomplishes little and often leads to disaster.
Successful action planning requires a vision.
It is likely that you are all familiar with vision statements. We
define them as a succinct statement about what an organization
aims to achieve or the role it desires to fill. The purpose of the
vision is to focus the efforts of the organization to get all levels
working toward a common goal. Referring to the vision
frequently during the action planning process provides a
common touchstone for all organizational change efforts and
serves to place activities in their proper context.
Development
The purpose of the action plan is to show the organization’s
concern and develop a method to respond to the issues
identified in the survey. Unfortunately, the place where a survey
usually breaks down is in taking action. As a result, it is
important to have a process for planning and implementing
action.
There are any number of ways to do action planning, none of
which is necessarily more right than the others. However, there
are a number of suggestions we can make that apply to all
methods of action planning.
Perhaps first and foremost is to maintain and demonstrate
top management commitment. Time and again you hear that
top management support is crucial for any undertaking. This is
never more true than during the action planning process.
Without demonstrated top management support nothing will
happen and a big opportunity will be lost.
The second suggestion is to monitor the pace of the change.
Initially there will be a strong sense of urgency to complete the
changes and get moving with the new and improved organization. Don’t rush into anything. Be deliberate in deciding what
to do and then introduce the changes at a comfortable pace.
Third, communicate. We indicated earlier that communication
was at the heart of the feedback and action planning process.
You need to communicate with viewers and Audience/ Viewers
about what is happening, why it is happening, what they can
expect, and how the change fits with the Channel’s long range
vision. Tie the actions back to the survey results as a way of
explaining the events and keep people informed of events as
they develop.
Fourth, involve respondents in the process. Research
suggests that people are more supportive of actions and
changes that they “own.” Therefore, involving them in the
action planning process will help ensure that they are supportive
of the changes you implement.
Fifth, make sure that individuals involved in the action
planning are accountable for implementing their actions.
Occasional status reports ensure that people will continue to
work on the projects. Tying compensation or reviews to
completion of the action plans is also a possibility.
Finally, focus on a few things at a time. Too many changes at
one time is a recipe for disaster. Select the critical few factors and
work them. Then, upon completion, select a few others. A
change in any part of the organization is likely to have an effect
on other parts of the organization, whether intended or not.
Consequently, you don’t want to engage in a number of
activities that, while appearing independent, have unplanned
effects on the organization.
Prioritizing
How do you select which factors to work on? There are several
ways. If your survey allowed viewers or Audience/ Viewers to
rank their priorities for change, you have an excellent starting
point. If, however, you don’t have such data, use the data
reports. You can focus on areas with the highest percentage of
unfavorable ratings. Reports showing key indicators of
satisfaction (through regression analysis), opportunity mapping
(through correlation studies), and other reporting, such as
Audience/ Viewer profiling, can all be important information
to help you prioritize your plans. Also, examine the responses
to the open-ended question or comments section. This is a rich
source of information as it can “flesh out” the responses to the
survey items and provide additional information about areas
not included in the survey body.
Another way to prioritize is to use the information gathered in
the feedback sessions. Summarizing the information gathered
in these sessions should give you a sense of what employees
and Audience/ Viewers feel are issues requiring attention.
Writing the Plans
At a minimum, when you communicate your plans to management, Audience/ Viewer, or employee groups, you need to
include:
• A brief description of the problem or issue
• A brief description of the proposed action
• The name of the person(s) responsible for implementation
and status reporting
• Starting date and proposed completion date
• A brief description of how the action plan’s effectiveness will
be evaluated
Implementing Action Plans
Once the action plans are developed there will be a tendency to
jump right into full scale change. Depending on the action, it
may be wise to select a pilot group (team, department, etc.) as a
test case. Gather information from that pilot group about the
changes you have made and ways that the process could be finetuned. Change the methods of intervention appropriately and
apply them to the rest of the Channel. If you become very
ambitious you can conduct a quasi experiment by introducing a
change to one unit while keeping other units constant. After a
specified time, compare the test unit to the control units to
determine if the intervention has the desired effect. Be sure to
consult a researcher before engaging in this activity so that you
can control all the variables in order to make meaningful
comparisons between your groups. Once you are sure that your
change is beneficial, make changes in a larger unit.
During this implementation phase one of the most important
things you can do is to continue communicating with your
employees. Remind them about what you are doing and why
you are doing it. Talk about the survey results and link your
actions back to those results. Celebrate successes. Consider
being equally candid about wrong starts but be sure to describe
explicitly what you learned from them and how you will use
this information.
Be sure to maintain accountability during the implementation:
gather information about how the changes are working, check
to ensure that timelines are being met, continue communication, and require updates on progress. One of the worst things
that can happen is that you start to implement a plan and then
stop in the middle. The changes you are making are important,
not only to you, your employees, and your organization, but
most importantly, to your Audience/ Viewers. You do not
want to engage in action planning half-heartedly; it requires
commitment and dedication of resources.
Once you have implemented your changes and there has been
enough time for people to adjust to them, it is time to resurvey. This can be either a mini-survey on areas affected by the
change or it can be a full-scale survey so you can compare your
current state against both your internal benchmark and your
future goal.
LESSON 15:
SURVEY – TELEPHONIC SURVEY
Topics Covered
Survey, Telephonic, Mails, Introductions, Conversion, Call logs,
Design, Follow-ups, Receiving, Fax surveys.
Objectives
Upon completion of this Lesson, you should be able to:
• Understanding Surveys
• Identifying respondents for telephonic surveys.
• Knowing how to do a refusal conversion.
• Writing call logs.
• Questionnaire design.
• How to do follow-ups.
• Receiving completed questionnaires.
• How to do Fax surveys.
Telephone Surveys
Most of what has been written in the earlier chapters of this
coursepack also applies to telephone surveys, but for telephone
surveys some things are different: in particular, sampling and
some aspects of interviewing. These are covered in this chapter.
1. Sampling Telephone Numbers
When you are doing a telephone survey of the general population, there are two methods of sampling: from a telephone
directory, or by random digit dialling. Both methods work, but
both have their problems.
Sampling from Telephone Directories
A telephone directory is not a complete list of the residential
phone numbers in an area. For example, in Australia about
10% to 15% of residential numbers are “silent”: unlisted, by
the owner’s request. Other numbers are unlisted because they
have been connected after the directory went to press. Australian
directories are already three to six months out of date when
issued; as new directories are published about once a year, they
are up to 18 months out of date at the end of their issue
period. On average, a directory is a year out of date, and
between 10% and 20% of residential entries change each year.
Though the names attached to the numbers turn over fairly
rapidly, the numbers themselves are much more stable, and
generally continue from year to year. When somebody moves to
another exchange area, the new resident often takes over the
telephone number; when people move only a short distance
(within the same exchange area) they usually take their old
telephone number to their new address. When a subscriber
leaves an address, and the new resident does not take over the
phone number, it is usually reallocated to a different address in
the same exchange area, after about six months.
Because the numbers are more stable than the names or
addresses, telephone surveys normally use only the numbers,
regardless of whether the corresponding name or address is the
same as it was when the directory was printed.
Unlike door to door surveys, which normally use cluster
sampling, clustering is not used in telephone surveys. The main
purpose of clustering is to save on interviewer travel expenses,
which don’t exist in telephone surveys. Because clustering tends
to reduce the effective sample size, telephone surveys can use
smaller samples than door-to-door surveys.
The first stage in drawing a sample from a telephone directory is
to calculate how many telephone numbers are required. Begin
with the sample size you want, then enlarge that figure to allow
for refusals to be interviewed, ineligible numbers, and numbers
which are not answered.
Also decide how many people should be interviewed at each
household contacted. Do you want to interview only one
person per household, or all adults, or base the number of
interviews on the number of people in the household? As
interviewing more than one person per household saves very
little money, and reduces the effective sample size (qv), it’s
normal to interview only one person at each phone number.
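The effective sample size mentioned here can be approximated with the standard design-effect formula from cluster sampling, deff = 1 + (m - 1) x roh - a textbook formula, not one given in this coursepack:

def effective_sample_size(n, m, roh):
    """Approximate effective sample size: n interviews taken in
    groups of m (per cluster, or per household), with intra-group
    correlation roh ("rate of homogeneity").
    Design effect: deff = 1 + (m - 1) * roh."""
    return n / (1 + (m - 1) * roh)

# 400 interviews in clusters of 10 with roh = 0.05 behave like
# roughly 276 independent interviews (deff = 1.45).
print(round(effective_sample_size(400, 10, 0.05)))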
How many Telephone Numbers to Sample
When you draw a sample of numbers from a telephone
directory, many of these don’t result in interviews. This
experience would be fairly typical:
Begin with 100 entries.
30 are business entries - leaving 70.
Attempt to ring 70 numbers. 5 turn out to be disconnected.
After many attempts, only 40 answer.
10 of the 40 refuse to participate in the survey.
This leaves 30 successful interviews from the 100 numbers.
If the area you want to survey is only a small part of the area
covered by the telephone directory, and the directory has a single
set of alphabetical entries, you will have to look through many
more than 100 entries to find 100 in the survey area.
Another problem arises if you don’t want to interview at all
households, but only those that meet certain criteria - e.g.
listening to your radio station. If only one person in 3 listens to
your station, you’d get only about 10 interviews (instead of 30)
from the list of 100 numbers.
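Working that funnel backwards tells you how many directory entries to draw for a target number of interviews. A sketch, using the illustrative rates above (they will differ from directory to directory):

import math

def entries_needed(target_interviews, yield_per_entry=0.30,
                   eligibility_rate=1.0):
    """Directory entries to draw, given the overall yield (completed
    interviews per entry drawn; 0.30 in the example above) and the
    share of contacted households that pass any screening."""
    return math.ceil(target_interviews / (yield_per_entry * eligibility_rate))

print(entries_needed(300))                         # 1000 entries
print(entries_needed(300, eligibility_rate=1/3))   # 3000 if 1 in 3 qualify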
The percentage of entries that are businesses varies greatly
between directories, and is often higher with small directories
than those from large cities.
The refusal rate can be anything between 1% and 40%, depending mainly on the skills of the interviewer, and where the
respondent lives (refusals are much more common in big cities),
but very little on the subject matter. With inexperienced
interviewers, to be on the safe side, one should allow for a fairly
high refusal rate - say 25% - in countries where phone surveys
have been widely used. In areas where almost no phone surveys
have been done before, refusal rates are usually very low, often
less than 1%.
The proportion of numbers which are never answered depends
mainly on how many times an unanswered number is re-rung.
If you ring back up to 10 times, at varying times of day and
days of week, leaving at least two weeks between the initial and
final attempts, you can reach 90 to 95% of numbers - except in
areas which have many holiday homes inhabited only a short
part of each year. If you don’t have the patience or the resources
to keep trying for so long, you should try at least three times to
reach each number - varying the time of day and day of week -
and you will usually succeed in reaching 85 to 90% of the
telephone numbers.
Drawing Samples from Phone Directories
Selecting Columns
If you need more numbers than there are usable columns in a
directory, you’ll need to select several line numbers in each
column. If you need fewer numbers than columns, you’d select
one line number, but not use every column. Suppose you have
chosen line 18: a template that exposes line 18 of each selected
column makes the entries quick to find.
In fact, phone directories are not always printed accurately, and
sometimes the first line fully visible above the template may be
line 19 or 17. Too bad. Think of the choice of line 18 as
referring to a particular place on the page, rather than a particular
line number. It won’t upset the results of the survey.
As you look at line 18 (or whatever) in each selected column,
copy it out if it is a residential entry with a number on that line,
and ignore it if it is:
a. a business-only entry, or
b. a line without a telephone number on it (i.e. the first line of
a multiple-line entry), or
c. a residential entry outside the area to be surveyed.
If there is an “after hours” listing as part of a business entry,
and a name is given (as in the B. Bloggs example above), see if
there is a main entry under that name. If there is, ignore the
business “after hours” listing. If not, or if no name is given,
accept the “after hours” listing as a valid number to be surveyed.
If you’re not sure whether a business is also a residence, include
the number on the list to be rung, and reject it later if it is only
a business.
Avoid Biases in Unused Numbers
In this way, you build up a list of residential numbers to be
surveyed. You shouldn’t need to use all the numbers, so take
steps to avoid any consistent pattern in the numbers which are
unused. To ensure that all sections of the phone directory are
represented, call the sampled numbers in a different order from
their sequence in the directory. This will mean that the numbers
that are not called will come from all parts of the directory. The
easiest way to do this is to copy each number onto a separate
call log: a piece of paper which will record the history of calling
that number. Before doing any interviews, shuffle the logs into
random order.
Even if you know the allocated number ranges, you still have to
contend with many non-existent numbers. Sometimes, when
you dial a nonexistent number, you get the appropriate “no
such number” tone. However, in some areas, dialling a
nonexistent number will produce the same tone as a telephone
that is ringing and not being answered. In this case you can
never be sure whether a number is nonexistent, or whether its
owners are seldom home. When you suspect that a particular
number may not be connected, the telephone authority will
usually verify this for you. But when you give the authority a
list of 100 possibly non-existent numbers, it’s less likely to
co-operate.
When you have chosen a sample using random digit dialling,
interviewing should not simply begin with the lowest number
selected and work up to the highest. The danger is that the
highest-numbered prefixes (which probably correspond with
particular geographical areas) may not be used, if you have
selected more numbers than you’ll eventually need. Therefore,
the order in which the selected numbers are rung should be
randomized, so that no bias is caused by unused numbers
being concentrated in a few localities.
If you are conducting a telephone survey over a wide area, it can
become quite expensive to make all calls from a central base. On
the other hand, if all calls are made from one location, you have
much better control over the workload, if any reallocation is
needed. Quality control is also better when all the interviewing
is done from a single office, as it’s easier to make sure that each
interviewer is using a standardized approach.
Telephone Interviewing
Most of the time, telephone interviewing is not very different
from face-to-face interviewing - but the lack of vision makes it
harder for interviewers and respondents to communicate well,
so extra steps have to be taken to compensate for this. The rest
of this chapter describes some of the peculiarities of telephone
interviewing: call logs, work flow, and how interviewers need to
adapt to invisible respondents.
Call Logs
After experimenting with various ways of recording the
progress of telephone interviews, we found that the call log is
the simplest. A call log is a piece of paper: there is one for each
telephone number to be called in a survey. Call logs are an
excellent method of managing work flow in a telephone survey.
Interviewers have several heaps of call logs on their desks: one
heap they are currently working from, and other heaps for each
type of call outcome.
Here’s an example:

CALL LOG    Phone (0. . . .) . . . . . . . . . . . . .    Interview . . .
Name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Directory page . . .

“Good [evening], is that [number]? My name is [name] from
Bloggly Research. We’re doing a survey of radio listening and
your phone number has come up. I’d like to ask a few questions
about radio listening in your household. I need to speak to the
[oldest/youngest] [man/woman] [if youngest: aged 15 or over]
who lives there. Is that person there now?”

If the selected person is not home, arrange to ring back later.
Also arrange to ring back later if the selected person is home,
but too busy to speak now.

Tick one box:
[ ] Interviewed immediately
[ ] Ring back about . . . . . am/pm on . . . . . day and ask for . . . . . . . . . .
[ ] Refused, because . . . . . . . . . . . . . . . . . . . . . . .

Try  Day  Date  Start time  Stop time  Interviewer  Result*
1
2
3
4
5
6

* NOT ANSWERED...                  * ANSWERED...
E = Engaged                        N = No English spoken
F = Fax machine or computer        R = Refused immediately
D = Disconnected number            S = Spoke to somebody in household
G = Gave up after 10 rings         B = Business number
A = Answering machine              L = Ring back later: fill in below

Call logs should be quite small, because interviewers will have
several heaps of them on their desks. A5 (about 15 by 21
centimetres) is a good size. The above example of a call log is a
large one, because it has the entire introductory script for the
survey. This is easier for inexperienced interviewers, but it fills a
whole A4 sheet of paper. More experienced interviewers can
have the script on one piece of paper, and write the call details
(day, time, and result of call) on a much smaller piece of paper
or a card.
Before the interviewers receive these call logs, several items are
written at the top:
• the phone number to be rung (essential)
• the surname from the directory, or page number (in case
there’s a mistake, and you need to check back with the
directory)
• the type of person to be interviewed.
There are several ways to find the respondent within a household,
as discussed in the Sampling chapter. The above call log is
designed to have YM, OM, YW, or OW written in the
“Interview” space at the top right, showing whether to try to
interview the youngest man in the household, the oldest man,
the youngest woman, or the oldest woman.
If a call result is anything except the common listed outcomes
(one-letter codes), a brief description is written in the Result
column. The interviewer writes his or her initials in the
Interviewer column. When the interviewer ticks the box labelled
Interviewed Immediately, he or she picks up a questionnaire and
begins the actual interview.
It’s quite possible to have a call log printed as the first page of
each questionnaire, but many telephone numbers don’t result
in interviews, so this method would waste a lot of paper.
If you want to improve your survey methods, the call logs can
be entered into a computer (each log as one case) and analysed.
Among the most useful items to enter are:
• Prefix (which gives the exchange area)
• Number of call attempts
• Final result
• Number of charged calls
This information can be used to calculate response rates, check
telephone charges, and so on.
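Once the logs are typed in, a short script can produce these figures. A minimal sketch in Python: the one-letter result codes follow the call log above, but the file name, column names, and the code “I” for a completed interview are our assumptions.

import csv
from collections import Counter

# One row per call log. Columns: prefix, attempts, result, charged.
# "result" holds the one-letter codes from the call log above, plus
# "I" for a completed interview (an assumed extra code).
with open("call_logs.csv", newline="") as f:
    logs = list(csv.DictReader(f))

outcomes = Counter(log["result"] for log in logs)
print("Outcomes:", dict(outcomes))
print("Response rate: %.0f%%" % (100 * outcomes["I"] / len(logs)))

# Mean number of call attempts per exchange area (the prefix).
attempts = {}
for log in logs:
    attempts.setdefault(log["prefix"], []).append(int(log["attempts"]))
for prefix, tries in sorted(attempts.items()):
    print(prefix, "mean attempts: %.1f" % (sum(tries) / len(tries)))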
Introductions in Telephone Interviews
1. Avoiding refusals
The first few seconds of a telephone interview are vitally
important in communicating the information needed. Imagine
you’re a respondent. Your telephone rings. You pick it up, and a
stranger’s voice begins to explain something about asking you
questions. Immediately, you are wary. What does this stranger
want from you? Now they are asking which person in your
household last had a birthday. They are talking in a strange kind
of way, as if they are reading aloud. You are suspicious. Why do
they want to know that? Is this a new kind of sales pitch? Why
do they want to speak to your daughter? Now they are talking
about a survey. Last week, a friend told you that somebody rang
him up, mentioning a survey, but really trying to sell insurance.
Another friend told you about a survey phone call: the interviewer said it would take “a few minutes” but it lasted almost
an hour.
You decide you want nothing to do with this survey. “We’re
not interested,” you say. “Goodbye.”
On the other end of the telephone line, the interviewer notes
down yet another refusal.
In the richer countries, about one telephone call in three ends
with a refusal. Most of these refusals seem to arise through
suspicion, or bad experiences with similar calls in the past.
Commercial firms often telephone people to try to sell things to
them. This can become so common, and so annoying, that
people defend themselves by having answering machines and
unlisted numbers. (They don’t know that an unlisted number
is no defence against random digit dialling.)
The best way to overcome this high refusal rate is by establishing a genuine dialogue with respondents. You have perhaps 30
seconds to convince them that your survey will benefit them in
some way.
Approaches we have found successful are:
1. If the respondent has had any previous contact with you,
establish this immediately. For example “Could I speak to
[name of person you want]?” When the person you want is
there, say something like “I’m ringing on behalf of the
Researcher (Name). I understand that you were a subscriber
last year - is that right?”
When the respondent remembers some prior dealing with
your organization, refusal rates are usually negligible.
2. When you are surveying the general population, and your
organization is not well known, refusal rates are likely to be
high. In this case, two quite different approaches have both
worked well for us:
2a.
Spend about a minute describing your organization,
and the purpose of the survey you are doing. Only
then, tell them that you’d like their opinions.
2b.
Immediately start to question the respondent, with no
preliminary explanation. After the first few questions
have aroused their interest, you can explain your
purpose. This is almost a way of tricking people into
answering questions. It works well, but for ethical reasons
I don’t recommend it - specially if any answers could harm
the respondent in any possible way, or respondents might
regret giving any answers.
3. Assure respondents that you are not trying to sell them
anything.
4. In Australia, we found lower refusal rates when we didn’t
mention the word survey. (There are so many pseudo-surveys
around.) Instead, our interviewers said “I’d like to ask you a
few questions.”
5. Some interviewers have much higher refusal rates than
others. Those who speak cheerfully, quickly, and confidently
do best, while interviewers who speak slowly, hesitantly, or
seem to lack confidence have much higher refusal rates.
6. Most refusals happen in the first minute of a call. Therefore,
the first few questions should be interesting to the
respondent, not ask for any confidential information, and
not be difficult to answer.
Informing Respondents
In countries where telephone surveys are very frequent, refusals
are the main problem. In countries where telephone surveys are
rare, the main problem is not willingness to co-operate, but
respondents’ lack of understanding of the procedure: “Why
does this strange man want to talk to the youngest female in
my household, not counting any children under 15?”
Respondents should be told that they may decline to answer
any question. You should also tell them:
• the expected duration of the interview, e.g. “most people
take about 15 to 20 minutes”
• whether anybody else at the interviewer’s end will be able to
overhear it, and
• whether the answers they give will have their name attached
or be able to be read by anybody else in the survey group or
the organization commissioning the research.
LESSON 16:
SURVEYS - MAIL SURVEY
Topics Covered
Survey, Telephonic, Mails, Introductions, Conversion, Call logs,
Design, Followups, Receiving, Fax surveys.
Objectives
Upon completion of this Lesson, you should be able to:
• Understanding surveys
• Identifying respondents for telephone surveys
• Knowing how to do a refusal conversion
• Writing call logs
• Questionnaire design
• How to do followups
• Receiving completed questionnaires
• How to do fax surveys
It’s generally not worthwhile to do a mail survey with the
general public. Most people simply won’t answer, so you won’t
be able to determine how representative your results are. But
when small specialized populations are to be surveyed, mail
surveys can be very effective.
The biggest problem with mail surveys is a low response rate.
In my experience, the minimum response rate for producing
valid results is about 60%, but many mail surveys achieve less
than 30% return. To overcome this, you need to make it easier
and more rewarding for people to respond.
Most of this chapter also applies to other types of self-completion
questionnaires - such as those distributed at events
(see the chapter on event surveys), and questionnaires left by
interviewers for respondents to fill in, to be collected when the
interviewer returns.
Mail Surveys
Sometimes the most appropriate way to do a survey is by mail.
If most of the following conditions apply, a mail survey could
be the best type to use:
• You have a complete list of names and addresses of the
population to be surveyed - such as members of an
organization.
• People in this population are able to read and write well, and
have an above-average level of education. (It’s often more
difficult to complete a written questionnaire than a spoken
one.)
• The people can be expected to have an interest in the success
of the organization sponsoring the survey. For example,
they are regular listeners to a radio station. Without this
interest, the response rate is likely to be very low.
• You’re not in a hurry to get results. It takes time for letters to
be sent out and returned. The shortest time between
sending out letters and getting enough completed
questionnaires back is about a month.
• The questionnaire does not include questions whose answers
are likely to be different if people read through the
questionnaire before answering any questions.
Any sets of questions which take the form of “Which radio
stations do you listen to?” followed a little later by “What’s
your opinion of FM99?” are likely to produce biased results, as
many people read through a questionnaire before beginning to
answer it. They’ll realize that FM99 is sponsoring the survey,
and many people will reward the sponsor by not criticizing it.
• The respondents will need to look up information from
some other source - concerning their finances, for example.
• You can’t afford to spend much on a survey. Mail surveys are
usually the cheapest type of survey. That’s why they are so
often used, even when they’re not appropriate.
Making it Easy
People who are willing to return a mail questionnaire may not
get around to doing so without some prompting. For this
reason it’s normal to offer respondents some encouragement to
mail their questionnaires back.
1. Include a Return Envelope
The first method of encouragement is an easy way to get the
questionnaire back: a business reply or freepost envelope,
addressed to the originator of the survey. Freepost licences are
easy to obtain (in most countries), and the only costs involved
are associated with printing envelopes (in a format strictly
specified by the postal authority).
2. Give a Deadline
The second incentive seems trivial, but we have found it to be
surprisingly effective. Simply print, near the beginning of the
questionnaire, something like this:
Please try to return this questionnaire within 7 days
The shorter the request, the better it seems to work. Though
some people ignore such appeals, many take notice of this one,
and it will help you get more questionnaires back, sooner.
3. Offer an Incentive
Surveys that don’t use interviewers tend to have much lower
response rates than surveys where the interviewer speaks to the
respondent. It’s much easier to ignore a questionnaire that
comes in the mail than to ignore a real interviewer. Therefore,
mail surveys need to use incentives, to boost the response rate.
There are two types of incentive, which I call psychological and
financial.
A psychological incentive is a way of making people feel good
about filling in a questionnaire - e.g. “If you like to use our
products, please help us improve them by completing this
questionnaire.”
After experimenting with incentives of various types and sizes,
I have reached two conclusions:
1. A small chance of winning a large amount works better than
the certainty of a small amount. It’s also much less work to
give out one large prize than lots of tiny ones.
Judging the size of the incentive is something of an art: if the
incentive is too small, many people won’t bother to respond.
But if you’re offering a huge prize for one lucky respondent,
some respondents will send in multiple questionnaires -
probably not with identical answers, some even answered at
random - simply to try to win the prize. Once I worked on a
project with a prize of a holiday in Europe. When we sorted the
file in order of respondents’ addresses, we found that some
had made multiple entries with slightly different versions of
their address. In other households, improbably large numbers
of people had returned questionnaires - and some of their
names looked suspiciously like cats and dogs!
Very large prizes produce diminishing returns: when the prize is
doubled, the response rate rises only a few percent. An amount
that I’ve found effective is the value of about a day’s wages for
the average respondent. For most people, this would be a
pleasant amount to win, but would not make them distort
their answers.
Offering several small prizes doesn’t work as well as one large
prize - unless respondents can clearly see that they have a higher
chance with a small prize - for example, one prize in each village
surveyed.
Take care that the prize offered is something which won’t
discourage potential respondents who already have one. A
poorly chosen prize can affect the accuracy of a survey. For
example, a survey in the late 1980s set out to measure the
musical taste of the Australian population. A compact disc
player was offered as a prize. As the people who were most
interested in music probably owned a CD player already, the
survey would have underestimated Australians’ interest in
music.
In wealthy countries, the most effective kinds of prizes are
often small luxury items that cannot be stored up. Vouchers for
restaurant meals are often very effective. An incentive that
worked well in a survey of rich businessmen was the chance to
meet others of their kind: we offered a meal in a high-priced
restaurant for 10 respondents.
Don’t begrudge spending money on rewards: it’s usually more
than saved by the number of questionnaires not printed and
mailed out to achieve an equal number of responses.
2. It’s best to use two different kinds of reward at the same
time: psychological incentives as well as financial ones. By
psychological incentives, I mean reasoned appeals to
complete a questionnaire. These arguments can appeal to
either self-interest or philanthropy - sometimes both. For
example:
• Please complete the questionnaire, to help us improve
our programs.
• Broadcasting works best when there is two-way
communication, so please give us your views by filling
in this brief questionnaire.
Because people who don’t use a service much also tend not to
respond to surveys about it, it’s a good idea to add another
emotional appeal, perhaps something like this:
• Even if you don’t listen to FM99 very often, your
opinions are still very important to us. We’re very keen
to provide better services to occasional listeners, and
without their feedback we can’t do this.
Another type of psychological incentive is a promise to tell
respondents about the survey’s results. This is simplest to fulfil
when you have regular contact with respondents, and notifying
them of results could be as simple as putting an article in their
next newsletter. One point in favour of this type of incentive is
that it can work with people who are unaffected by other types
of incentive.
Psychological incentives work well with regular users of a media
service, but non-users don’t usually have any emotional
attachment. Non-users usually respond well to financial
incentives, but regular users respond little to financial incentives
- unless the prize is very large. That’s why using both types of
incentive will produce a better balanced sample than using only
one type.
Questionnaire Design for Mail Surveys
Compared with questionnaires for telephone surveys (which
have to be read and completed only by interviewers),
self-completion questionnaires need much more care taken with
their design.
Before having a questionnaire printed and mailed out, it’s
essential to test it thoroughly with 5 to 10 respondents: some
old, some young, some not very bright. Don’t use your office
staff, or your friends or relatives: they know too much about
your intentions. Go to strangers, who have never heard of your
survey before - but if the questionnaire is only for listeners to
your station, obviously the strangers should also be listeners.
Sit beside them while they fill in the draft version, and ask them
to think aloud.
You need to make sure that:
• there are no errors in it (specially in any skip instructions, if
questions have been renumbered),
• the questions are easily understood (even by people with a
limited command of the language), and
• the layout won’t cause people to omit questions.
You’ll find that even after you have produced many mail
questionnaires, occasional problems still occur. Having 5 to 10
testers fill in your questionnaire is an excellent way of improving
the survey - as long as you take notice of any problems they
have, and any misunderstandings. If the testing results in
extensive changes to the questionnaire, find a new lot of testers
- 5 is usually enough, the second time.
Tell them Exactly How to Answer
In every question, make it clear to respondents how many
answers are expected, using instructions like this - placed after
the question, but before the possible answers.
Please tick one box.
Please tick all answers that apply to you.
Please give at least one answer, but no more than three.
Use a consistent method for answering. If you ask respondents
to answer some questions by ticking, and others by circling code
numbers, you’ll find that some put ticks where they should
have circled codes — and vice versa. To make it easy for most
respondents, use ticks wherever possible.
In some countries (including Australia, Japan, and Ethiopia) a
tick means “Yes” and a cross means “No.” In other countries,
such as the USA, a cross means “This applies” - which is almost
the same as Yes. People from countries where a cross means No
can get confused and give the opposite answers to the questionnaire writer’s intention - so before printing a questionnaire for a
country you don’t know well, be certain which convention
applies there.
One method I’ve used that works surprisingly well is to ask
respondents to write in code numbers. Most of them write
legible numbers, and this greatly speeds up the data entry.
When you have a series of questions, all with the same set of
possible answers, this method works well. Here’s an example.
Here are some questions about programs on FM99. Please give
your opinion of each program by writing one of these numbers
on the line after its name:
1 if you like it very much
2 if you like it a little
3 if you don’t like it
4 if you have no opinion about it
Program 1 ___
Program 2 ___
Program 3 ___
If this isn’t clear to your questionnaire testers, you can add an
example, before the first program name, e.g.
For example, if you have never heard of Program 2, write 4 on
the line, like this:
Program 2   4
The problem with giving examples in the questionnaire is that
sometimes this encourages people to give the answer shown in
the example. For that reason, it’s best to choose some sort of
non-answer (e.g. “does not apply”) for the example - and only
add an example if some testers don’t understand the question.
A common mistake is to use two sets of scales that run in
opposite directions. In the above example, an answer code of 1
meant “like it very much”. What would happen if that set of
questions was followed by this one?
These questions are about different radio stations. Please rate
each station on a scale of 1 to 10, where 1 means you never
listen, and 10 means you listen whenever possible.
After answering the previous set of questions, respondents
may have forgotten the meanings attached to the answer codes,
but they will remember that 1 is the best answer. Now they find
a set of questions where 1 is the worst answer. This is
guaranteed to confuse people! You can see this when some
respondents cross out their first set of answers, and write
beside those a reverse set of numbers, with their correct
answers. Though some respondents realize they’ve made a
mistake, others probably don’t. So take care to keep the answer
scales consistent - the numbers are quite arbitrary.
Also, avoid getting people to rank items.
A question with many possible answers is best laid out in two
columns, like this:
Which of these age groups are you in? Please tick one box.
[ ] 10-14        [ ] 35-44
[ ] 15-19        [ ] 45-54
[ ] 20-24        [ ] 55-64
[ ] 25-34        [ ] 65 or over
    more ↗       [ ] Can’t remember
Note the arrow at the foot of the first column: it is used to
draw the respondent’s eye upwards and to the right. This
doesn’t matter with a question like age group - because people
know their age and will be looking for the group, but for other
types of question they might not notice some of the possible
answers. If all a question’s answers are different lengths, you’ll
probably have to use a single line for each - don’t change the
number of columns within a single question. Too confusing!
Don’t lay a question out like this:
Which of these age groups are you in? Please tick one box.
[ ] 10-14 [ ] 15-19 [ ] 20-24 [ ] 25-34 [ ] 35-44 [ ] 45-54 [ ] 55-64
[ ] 65 or over [ ] Can’t remember
You save a little space, but it’s hard to read. There’s too much
danger of somebody ticking the box on the right of the age
group instead of the box on the left.
Code Numbers
In the above examples, I didn’t include code numbers for each
possible answer. In the lower layout, adding code numbers
would have made it even messier, but the upper layout can
take code numbers without losing clarity. I recommend that
answer codes always be printed on questionnaires; this will
ensure more accurate data entry. Using a regular 2-column
layout, the code numbers would look like this:
Which of these age groups are you in? Please tick one box.
1 [ ] 10-14        5 [ ] 35-44
2 [ ] 15-19        6 [ ] 45-54
3 [ ] 20-24        7 [ ] 55-64
4 [ ] 25-34        8 [ ] 65 or over
      more ↗       9 [ ] Can’t remember
The code numbers don’t need to be large, and they can be in a
different type from the questionnaire. As long as the layout
looks clear, respondents hardly notice code numbers.
Multi-page Questionnaires
If the questionnaire is printed on both sides of a single piece
of paper, some people won’t notice the second side. You might
think that one way to avoid this problem is to have a half-finished question at the foot of page 1, and to continue it at the
top of page 2 - but I’ve tried this, and it doesn’t work well.
When a question is on one page and its possible answers on
another, some people don’t bother answering it. The best
method to encourage people to turn the page is to put
“More...” below the last question on a page - but don’t put this
word in a corner where it can be overlooked. Have it in bold
type, immediately after the preceding question - not out near the
right-hand margin.
Missing a page is only a problem with 2-page questionnaires.
When a questionnaire has more than 2 pages, respondents can
feel that there are more pages after the first.
Order of Questions
Make sure the first few questions are interesting, and easy to
answer. Multiple-choice questions asking about people’s
opinions are often good to put at the beginning. If the
questions seem interesting enough, and not difficult, many
people will begin to complete the questionnaire as soon as they
receive it.
Open-ended questions shouldn’t be too near the beginning,
unless the answers will need very little thinking. If people have
to think too hard about an answer, they’ll often put the
questionnaire away to fill in later - and perhaps never return to
it.
If there are any questions which some people might be
reluctant to answer, or might find difficult, put them somewhere near the middle of the questionnaire - but preferably not
at the top of a page. Some people read questionnaires from the
back to the front, while others look first at the top of a page.
With a self-completion questionnaire (unlike a spoken interview) you have no control over the order in which respondents
read the questions.
Make sure the return address is printed on the questionnaire, so
that a respondent who has lost the business reply envelope is
still able to return the questionnaire. This address should go at
the end of the questionnaire: as soon as the respondent
finishes filling it in, the address is immediately visible.
LESSON 17:
SURVEY – SURVEY PRESENTATION
Topics Covered
Survey, Telephonic, Mails, Introductions, Conversion, Call logs,
Design, Followups, Receiving, Fax surveys.
Objectives
Upon completion of this Lesson, you should be able to:
• Understanding surveys
• Identifying respondents for telephone surveys
• Knowing how to do a refusal conversion
• Writing call logs
• Questionnaire design
• How to do followups
• Receiving completed questionnaires
• How to do fax surveys
Style of Presentation
The standard of design and printing of the questionnaire will
convey a certain meaning to respondents. The questionnaire
should look thoroughly professional, but not so lavishly
produced that respondents will assume it is advertising material
and throw it away unread. To help readers with poor eyesight in
poor light, the type should be at least 10 points, and there
should be strong contrast between the ink and the paper colour.
Using high-quality type and coated paper can compensate for
small lettering. A long questionnaire should be designed to
appear shorter and easier to fill in than it really is.
The writing style of a questionnaire is important to get right. It
shouldn’t be boring or official in tone, but nor should it be so
flippant as to encourage incorrect answers. The best style seems
to be more of a spoken style, rather than a written one. A mail
questionnaire should sound like spoken conversation when
read aloud.
And remember that you’re writing a questionnaire, not a form.
Write questions, not commands. A typical official form,
designed to save paper, is often ambiguous. What do you make
of this “question”, which I recently saw in an amateur
questionnaire:
Children ……
What should you answer?
• Yes (I have some),
• No (they’re not here right now),
• 2 (that’s how many I have),
• 5 & 7 (their ages),
• Raju and Rohit (their names)
• or what?
If the question you want answered is “How many children
under 16 live in your household?” write exactly that. Saving a
little space doesn’t compensate for wrong answers.
Another common error is “Name ……..” Name of what? It will
surprise you how many sorts of names people will write on
mail questionnaires. If you mean “Your name,” why not say so?
Most of these problems can be eliminated by pilot testing, if
you ask the testers to try to deliberately misunderstand the
questions.
Identifying Questionnaires
If you already have some data about each respondent, which
you want to combine with answers from his or her questionnaire, you will need to identify who returned each questionnaire.
The usual way of doing this is to print or write a serial number
on each questionnaire before sending it out. You can buy
stamping machines, resembling date stamps, which automatically increase the number every time you stamp them.
Another way is to stick a name and address label on each
questionnaire, and use window envelopes for sending questionnaires out, so that the same label serves two purposes. When
you do this, test the placement of the label on the questionnaire, and where the paper is folded - otherwise part of the
name and address may not be visible through the window in
the envelope.
For some types of question, respondents may not answer
honestly if they know their responses are not anonymous. We
have found that it’s safe to encourage people to obliterate their
name and/or serial number if they are worried about anonymity: usually less than 2% take up this offer. It’s also a good idea
to mention your privacy policy on the questionnaire, e.g. “We
will not give out your personal details to any other organization.”
There can be problems with unidentified questionnaires. For
example, somebody with strong opinions might photocopy
100 questionnaires and return them all. But if each questionnaire has a unique number, this copying is easily discovered.
Also, anonymous questionnaires are generally not filled in as
well as numbered ones. For highly sensitive questions, anonymity is desirable, but mail surveys may not be the best
method.
An important reason for numbering questionnaires is that,
without identification, any reminders become much less
effective and more expensive. If every questionnaire is numbered, you can check it off when it is returned, and send a
reminder letter to people who have not returned their questionnaires. When questionnaires are not numbered, you will need to
send reminder letters to the whole sample, even though most
people may have already returned questionnaires.
The Covering Letter
Don’t post out a questionnaire with no explanation of why
you’re sending it. You’ll get a better response if you include a
covering letter, supplying this information:
• The organization sponsoring the survey (normally shown in
the letterhead).
• If the organization is not well-known, the letter should
briefly explain the purpose of the survey.
• How the recipient came to be chosen.
• Why their co-operation is wanted; e.g. how the survey results
will benefit them.
• What to do if they have queries — e.g. a phone number to
ring.
• Exhortation to return the questionnaire without delay.
• The privacy statement (as mentioned above)
It helps if the covering letter is signed by a person the respondents have heard of. For example, when members of an
organization are surveyed, the letter could be signed by the
organization’s president.
Research has found it makes almost no difference to the
response rate if the covering letter is personally signed (as
opposed to a printed signature) or personally addressed (as
opposed to “Dear club member”, or whatever). As both of
these personal touches create a lot of extra effort, they might as
well be avoided.
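If you produce the letters yourself, a simple mail-merge script saves retyping, and the findings above mean a printed signature and a generic salutation are fine. A minimal sketch; the letter wording, file name, and column names are all hypothetical.

import csv

# Hypothetical covering letter; {serial} is filled in per recipient.
LETTER = """FM99 Listener Survey (questionnaire no. {serial})

Dear club member,

Your household has been chosen at random from our membership list.
Your answers will help us improve our programs, and we will not give
out your personal details to any other organization. If you have any
queries, please ring the number printed on the questionnaire.
Please try to return the questionnaire within 7 days.
"""

with open("sample.csv", newline="") as f:        # columns: name,address
    for serial, row in enumerate(csv.DictReader(f), start=1):
        # One text file per letter; the name and address block goes on
        # top so it shows through a window envelope.
        with open("letter_%04d.txt" % serial, "w") as out:
            out.write("{name}\n{address}\n\n".format(**row))
            out.write(LETTER.format(serial=serial))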
Printing and Posting Questionnaires
Mail surveys can involve handling formidable amounts of
paper. When sending out a typical questionnaire, each outgoing
envelope usually contains three items: the questionnaire, the
covering letter, and a freepost or business reply envelope to
return the completed questionnaire in. Attention paid to details
- such as the exact way in which questionnaires are folded - can
save a lot of time later.
If a questionnaire is printed on A4 size paper, and folded into
three equal parts vertically (to become 99 by 210 mm), it will fit
easily into the largest size envelope which still attracts postage at
the basic rate: 120 by 237 mm. (This applies in Australia, but
many other countries’ postal regulations are similar.) The reply
envelope should be intermediate in size between these two:
small enough to fit in the outgoing envelope without folding,
but large enough that the completed questionnaire can go in it
without re-folding. We use 110 by 220 mm freepost envelopes,
which are just right in this situation.
Packaging and posting out questionnaires can be a lot of effort.
If you have plenty of helpers but little money, you can call for
volunteers to do the mailing, and create a production line: one
person numbering the questionnaires, one folding, one packing
the envelopes, and so on. Fewer mistakes are made this way.
If money is no problem, there’s a much easier method: use a
mailing house: a company which organizes mail advertising. All
you need to do, when you use a mailing house, is supply one
master copy of the questionnaire and one covering letter, plus a
list of names and addresses of recipients. The mailing house
will print the questionnaires and covering letters, do the packing
(often using machines), and even post it all out for you. The
cost of the printing and enveloping is usually about half the
cost of the outgoing postage.
Reminder Letters
To get a decent response rate in mail surveys, some system of
reminders is usually essential. It’s best to prepare for reminders
from the start: they need to be budgeted for, and ready to go at
the right moment.
To know when to send a reminder, keep a record of the
number of completed questionnaires arriving in the mail each
day. At some point, usually about two weeks after the initial
mailing (depending on the efficiency of your country’s postal
system), the daily number will start to fall away markedly. Now
is the time to send out reminders.
There are two methods of sending reminders. If the questionnaires
are not individually identified (e.g. numbered), you have
no way of knowing who has sent one back, and who hasn’t.
Therefore you have to send a reminder letter to all the
respondents. For example:
We posted a questionnaire to you on 31 May. If you haven’t
already filled it in and returned it, please do so by 30 June, so
that your answers can be included in the survey results. If you
have lost the questionnaire, please ring us on 826-953, and we’ll
send you another copy.
If the questionnaires are identified, it’s easier and cheaper. You
can send a specific letter only to the people from whom you
haven’t yet received a completed questionnaire:
We posted a questionnaire to you on 31 May, but we haven’t yet
received it back from you. Please fill it in and return it by 30
June, so that your answers can be included in the survey results.
If you have lost the questionnaire, please ring us on 826-953,
and we’ll send you another copy. If you have already sent it in,
we thank you.
Notice the difference between this and the earlier reminder: “we
haven’t yet received it back from you” - much more specific (and
more effective) than the more loosely worded reminder above.
But to be able to do this, you must keep track of which
questionnaires have come back, and which haven’t.
One of the easiest ways to do this is to produce a large sheet of
paper with every questionnaire number on it. You can easily fit
500 numbers on one piece of A4 paper: 10 numbers across (if
they are short), and 50 lines down. When each questionnaire is
returned, cross out its number. At any time, the numbers not
crossed out are the ones which need reminders.
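The same bookkeeping can be done with a few lines of code. A minimal sketch, assuming 1000 questionnaires were mailed out and the returned serial numbers are typed into a plain text file, one per line (the file name is illustrative):

# 1000 questionnaires were numbered and mailed out; returned.txt lists
# the serial numbers that have come back, one per line.
mailed = set(range(1, 1001))
with open("returned.txt") as f:
    returned = {int(line) for line in f if line.strip()}

# Whatever was mailed but not returned still needs a reminder.
need_reminder = sorted(mailed - returned)
print("%d returned, %d to remind" % (len(returned), len(need_reminder)))
print(need_reminder)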
Alternatively, you can use a computer database program - such
as Epi Info or Filemaker Pro. If you are already using a database
to print the names and addresses, it’s easy to add a few more
fields: the date the questionnaire was returned, date of the first
reminder, date of the second reminder, date a replacement
questionnaire was sent, and so on. However, if you don’t
already have a database, it’s quicker to cross the numbers off a
piece of paper.
If you have phone numbers of most potential respondents,
and enough helpers available, a telephone reminder will work
more quickly than a mailed one.
Several weeks later, there may still be a lot of questionnaires
outstanding, so it could be time for a second reminder. At this
stage, the numbers involved are relatively small, and the chances
are the non-respondents have lost their questionnaires and/or
return envelopes. So it’s usual to send out a new questionnaire
and return envelope with the second reminder.
A reminder questionnaire normally has the same identification
number (or name) as the original one, but there is a way of
distinguishing it from the original questionnaire. For example,
the reminder questionnaire could be printed on a different
colour of paper, or have a suffix added to the identification
number. This prevents any person who returns two questionnaires from being counted twice.
Each successive reminder produces fewer and fewer responses,
so at some stage you will need to give up — or, if you’re very
keen to get responses, to try a telephone call, or a visit to the
household, or some other non-mail means of contact.
Receiving Completed Questionnaires
At the other end of a survey, when the completed questionnaires
start arriving in the mail, mass paper-handling is again
required.
The stages are:
1. Count the number of questionnaires coming back each day.
2. Open the envelopes, and unfold the questionnaires. This
sounds too trivial to mention, but it can take many hours if
the sample is large. If you delay this stage, you won’t be able
to send out reminder letters on time.
3. If the questionnaires are numbered, check off each one - e.g.
cross its number off the master list, showing that this
respondent won’t need a reminder letter. If you are using a
database program, enter the details of the questionnaires
that have just arrived.
4. Type the answers from completed questionnaires into the
computer program you are using for data entry. You could
leave this stage till all the questionnaires have come in - but
the sooner you do it, the sooner you’ll find if there are any
problems which need to be fixed.
5. Store the questionnaires in numerical order. If some
respondents have obliterated or cut off their questionnaire
numbers, give these questionnaires a new number, above the
highest number sent out. For example, if the highest
numbered questionnaire was 1000, assign new numbers
starting from 2000. Usually, no more than about 3% of
respondents obliterate their numbers, even if they are told
they can do so. If this possibility is not mentioned in the
covering letter, less than 1% obliterate the numbers.
Mail Surveys: Summary
The mail survey technique is one of the easiest ways of getting
valid information, but only when all the conditions are right,
when questionnaires are clear and easy to follow, and when there
are enough incentives to produce a satisfactory response rate.
Unless the response rate is over 60%, the information from the
survey will probably be useless, because you won’t know how
different the non-respondents are. It’s therefore vital to make it
as easy as possible for respondents to complete and return the
questionnaires, and to follow up the non-respondents with
reminder letters.
Fax Surveys
I’ve recently had good success at doing surveys by fax. Of
course, this is only feasible when nearly the whole surveyed
population has fax machines, but most businesses in developed
countries can now receive faxes.
If you have a computer with fax software and mail-merge
software, you don’t even need to print out the questionnaires.
You can prepare the questionnaire with a word processing
program, set up a mailing list of recipients’ fax numbers, and
send all the questionnaires from your computer. If you then
make another copy of the mailing list, and delete entries from it
whenever a questionnaire is returned, you can use this shorter
mailing list to send reminders. As a short fax is (in most
countries) cheaper than sending a letter, it’s economic to send
quite a lot of reminders, thus increasing response rates.
Some people get annoyed when unwanted faxes use up their
paper, so these surveys should be kept short: preferably one
page. As it’s then very easy for somebody to fill in the
questionnaire, and fax it straight back to you, the responses for
fax surveys come back much more quickly than for mail surveys.
Perhaps this is because fax surveys are a novelty. If they become
common, the response rates could quickly become very low
indeed. The secret is to keep the questionnaires short and
interesting, and to keep sending reminders.
Now you’ve done a survey, and the questionnaires are all filled
in - what happens next? The rest of this chapter is about
processing completed questionnaires: analysing them, and
reporting on the results.
At some stage you have to call an end to the survey, and process
the results. A principle I’ve found useful is to stop when the
response rate increases by less than 1% in a week. Of course, if
there is some external reason for a deadline, you may have to
stop processing newly arrived questionnaires long before the
full response is reached.
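The weekly-increase stopping rule is easy to check mechanically if you keep the daily counts from stage 1 above. A minimal sketch (all the figures are invented):

# Completed questionnaires received each day, oldest day first (invented).
daily = [0, 42, 75, 61, 38, 30, 22, 14, 9, 6, 5, 3, 2, 2, 1, 1, 0, 1, 0, 1]
mailed_out = 1000

total = 0
for day, n in enumerate(daily, start=1):
    total += n
    last_week = sum(daily[max(0, day - 7):day])   # returns in the past 7 days
    if day >= 7 and last_week / mailed_out < 0.01:
        print("Day %d: %.1f%% response; under 1%% gained this week - stop."
              % (day, 100 * total / mailed_out))
        break
else:
    print("Response rate still climbing - keep waiting.")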
Basic Survey Analysis
Even in developing countries, most surveys are analysed by
computer these days. In case you don’t have access to a computer, I’ve also included a section on manual analysis - which
for a small survey can be quicker than computer analysis.
Safeguarding Completed Questionnaires
Take care of them! This is the point in a survey where the
information is most vulnerable. Except for telephone surveys
done in a single office, completed questionnaires will be
transported from each interviewer to the survey office. If the
questionnaires completed by an interviewer are lost, all that
person’s work will be wasted, and it will cost a lot of money to
repeat those interviews. It could also delay the survey results.
Therefore, use the safest possible methods of getting the
completed questionnaires from each interviewer to the survey
office. If the postal system is unreliable, and the survey did not
extend over a very large area, it’s a good idea for each interviewer
to bring their completed work back to the office. For surveys
extending over a larger area, the interviewers can deliver the
completed questionnaires to their supervisors, and the supervisors
can bring the questionnaires to the office.
When there’s a high risk of losing questionnaires, it’s even
advisable to copy the questionnaires before returning to the
survey office.
Debriefings
The end of a survey is a good opportunity to ask the
interviewers or supervisors about any problems they found
with the survey methods or the questionnaire. I’ve found it
helpful to hold debriefing meetings, where 5 to 20 interviewers
or supervisors bring back their completed questionnaires at the
same time, then discuss the survey, and how it could have been
improved.
Many survey organizers don’t hold debriefings, partly because
they believe the interviewers are uneducated or ill-informed, and
have little to offer. But interviewers and supervisors can gain a
very detailed knowledge of how respondents react to interviews, and without using this knowledge, survey methods can’t
be fully improved. The only situation where debriefings aren’t
useful is when you do the same survey over and over again.
Even then, it’s helpful to hold debriefings occasionally.
The findings are recorded, so that they can be referred to the
next time a survey is done.
Storing Questionnaires
As questionnaires come back from each interviewer, their arrival
should be recorded (e.g. on a wall chart). The questionnaires are
then put away (e.g. in cardboard boxes) in order of their serial
numbers. All questionnaires should have serial numbers.
With mail surveys, the completed questionnaires will arrive back
in any order. Sometimes it’s helpful to give each returned
questionnaire a new serial number: the first questionnaire
returned is number 1, the second is 2, and so on. It’s also useful
to have a date stamp, and to stamp on each questionnaire the
date when it was received. This enables you to analyse whether
there’s any systematic difference between questionnaires
returned early and those returned late, and to make some
estimate of what types of people don’t mail back their questionnaires at all.
The danger period for questionnaires runs from the time each
questionnaire is completed, to the time when it has been
entered on a computer file (for computer analysis), or when the
analysis is completed (if analysis is manual). During this period,
if the questionnaire is lost, all the effort of that interview will
be wasted. So at this time, the questionnaires should be kept in
a safe place, such as a locked room.
LESSON 18:
CHECKING AND EDITING SURVEYS
Topics Covered
Survey, Telephonic, Mails, Introductions, Conversion, Call logs,
Design, Followups, Receiving, Fax surveys.
Objectives
Upon completion of this Lesson, you should be able to:
• Understanding surveys
• Identifying respondents for telephone surveys
• Knowing how to do a refusal conversion
• Writing call logs
• Questionnaire design
• How to do followups
• Receiving completed questionnaires
• How to do fax surveys
Checking and Editing
Though completed questionnaires should already have been
checked by interviewers and supervisors, they need to be
checked again before (or during) data entry.
What to Check
Every questionnaire needs to be thoroughly checked:
All standard items at the beginning or end of a questionnaire
should be filled in. They usually include:
• the questionnaire serial number
• the place where the interview was done (often in coded
form)
• the interviewer’s name (or initials, or number)
• the date and time of interview.
These are not questions asked of the respondent, but information supplied by the interviewer. If the interviewer forgot to
include something here, the supervisor should have noticed,
and made sure it was added. But sometimes newly trained
supervisors don’t notice these omissions. The sooner these
problems are found, the more easily they can be corrected.
• Check that every question which is supposed to have only
one answer does not have more.
• Check that no question which should have been skipped has
an answer entered.
• If an answer has been written in because no code applied,
perhaps a new code will have to be created. This will have to
be done after looking at all answers to this question, after
going through all the questionnaires.
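The first two of these checks can be automated during or after data entry. A minimal sketch, with invented question names and skip rules:

import csv

# Invented rules: q1 needs exactly one answer coded 1-5; q2 is asked
# only when q1 is 2, so it must be blank for everybody else.
with open("survey.csv", newline="") as f:     # columns: id,q1,q2
    for rec in csv.DictReader(f):
        if rec["q1"] not in {"1", "2", "3", "4", "5"}:
            print(rec["id"], "q1 missing or invalid:", repr(rec["q1"]))
        if rec["q1"] != "2" and rec["q2"] != "":
            print(rec["id"], "q2 answered but should have been skipped")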
Recoding Frequent “other” Answers
It’s annoying to read a survey report and find that a large
proportion of the answers to a question were “other”. The goal
should be to make sure the “other” category is the one with the
fewest answers - certainly no more than 5%. Take for example
this question:
“Which languages do you understand?”
(Circle all codes that apply)
1 Hindi
2 Oriya
3 Telugu
4 English
5 Other - write in: ......................
If 10% of people gave an “other” answer, the written-in
responses will need to be counted. If 4% of people understood
Urdu, and 3% understood Gujarati, two new codes could
be created:
6 = Urdu
7 = Gujarati
For each questionnaire mentioning these languages, the circled 5
should be crossed out (unless a different “other” language was
also mentioned), and 6 and/or 7 written in and circled. This
should reduce the remaining “other” figure to about 3%. (It
doesn’t matter that the code for “other” is no longer the
highest number. Code numbers have only arbitrary meaning in
this type of list. See below, about nominal variables.)
Unless at least 2% of respondents give a particular “other”
answer, it’s usually not worthwhile to create a separate code.
Sometimes a number of “other” answers can be grouped, e.g.
8 = Bengali languages
But when such a code has been made, there is no way to recode
the question except by going back to all the questionnaires with
that code. The principle should be not to combine any answers
which you might later want to look at separately.
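For a large survey, the counting and recoding can be scripted. A minimal sketch of the languages example above (the written-in answers are invented):

from collections import Counter

# Written-in "other" answers, one per questionnaire that circled code 5.
others = ["Urdu"] * 4 + ["Gujarati"] * 3 + ["Nepali"]
sample_size = 100                    # questionnaires in the whole sample

new_codes, next_code = {}, 6
for answer, n in Counter(others).most_common():
    if n / sample_size >= 0.02:      # a separate code only at 2% or more
        new_codes[answer] = next_code
        next_code += 1

print(new_codes)                     # {'Urdu': 6, 'Gujarati': 7}
recoded = [new_codes.get(a, 5) for a in others]   # the rest stay "other"
print(recoded)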
Coding Open-ended Questions
With some open-ended questions, you expect to find many
answers recurring. For example: “What is your occupation?”
There will be some occupations which are very common, some
less common, and there will probably be a lot of occupations
which only one respondent in the sample mentions. With other
open-ended questions (such as “What do you like most about
listening to FM99?”) you may find that no two respondents
give the same answer.
For both types of question, the standard coding method is the
same: you take a sub-sample of answers to that question -
often the first 100 answers to come in. (That may be a lot more
than 100 questionnaires, if not everybody is asked the question.)
Each different answer is written on a slip of paper, and these
answers are then sorted into groups with similar meanings.
Usually, there are 10 to 20 groups. If fewer than 2 people in 100
give a particular answer, it’s not worthwhile having a separate
code for that answer - unless it has a very specific and different
meaning from all the others.
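Tallying the sub-sample is the only part a computer can do for you; sorting the answers into 10 to 20 groups with similar meanings still needs human judgement. A minimal sketch (the answers are invented):

from collections import Counter

# The first 100 (here: a handful of invented) answers to the question
# "What do you like most about listening to FM99?"
answers = ["the music", "good music", "news", "news bulletins",
           "the breakfast program", "talkback", "the music",
           "Eugene Shurple"]

# Tally identical wordings first, most frequent at the top, then group
# similar meanings by eye.
for wording, n in Counter(answers).most_common():
    print(n, wording)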
Having defined these 10 to 20 groups, a code number is then
assigned to each. Following the example of what people like
about FM99, these codes might be assigned:
01 = like everything about FM99
02 = like nothing about FM99
03 = the announcers in general
04 = the programs in general
05 = the music
06 = news bulletins
07 = talkback
08 = breakfast program
09 = Eugene Shurple
10 = other
A practical problem with such a coding scheme is that, the more
codes are defined, the more likely some are to be very similar,
and the coders may not be consistent in assigning codes to
answers. When consistency is very important, any codes which
are not absolutely clear should be allocated by several coders
working together, or by a single supervisor. As new comments
are found, which are not covered by the original codes, new
codes will need to be added.
There are many ways in which an answer can be given a code -
what is most useful depends on any action you might take as a
result of the survey. If there are particular types of answer you
are looking for, you could create codes for these. For example, if
a station broadcasts programs in a particular language, that
language should be listed as a code. Even if no respondent
understands that language, this in itself is useful information.
For open-ended questions with predefined answers (such as
occupations) there may be no need to build a coding frame by
looking at the answers.
That’s one way to code open-ended questions. It works well for
questions with a limited number of answers, but for questions
on attitudes, opinions, and so on, counting the coded categories
loses much of the detail in the answers. Another approach is
to use the whole wording of the answers - e.g. by entering the
verbatim answers on a computer file. The coding can then be
very simple, and the report can quote the exact wording. I’ve
used coding schemes such as...
0 = made no comment
1 = made a comment
or...
1 = positive or favourable comment
2 = negative or unfavourable comment
3 = neutral comment, or mixed positive and negative.
These very broad coding schemes are much quicker to apply, and
less dependent on a coder’s opinion. But the broad codes are
not very useful, unless you also report the exact wording of
comments.
Data Entry
When a questionnaire has been checked and edited, and codes
have been written in for the open-ended questions, the
questionnaire is ready to be entered into a computer.
Choosing Data Entry Software
Data entry can be done on a wide variety of computer programs
- so how do you choose one? I suggest two main principles:
1. Use software that speeds up data entry and minimizes errors.
2. Have a thorough knowledge of the software you are using.
Several types of software can be used: spreadsheets, databases,
and statistical programs.
Using Spreadsheets for Data Entry
Many beginners enter survey results into a spreadsheet program,
such as Excel or Lotus 1-2-3. This can be done using one
row for each respondent, and one column for each field.
However, I don’t recommend using a spreadsheet: it’s far too
easy to make a mistake, it’s slow, and there’s no built-in checking
for valid values. Though it is possible to build in checking, only
expert users of spreadsheets would know how to do this.
Using Database Software for Data Entry
Databases are more difficult to set up than spreadsheets, but
they guard your results better: files are saved automatically, and
exact values can be checked. Some common database programs
are Filemaker Pro, Access, and Foxpro; there are also many
others.
However, these programs are designed mainly for accounting,
and don’t handle survey data easily. The most basic types of
survey analysis, such as frequency counts, usually can’t be done
by database programs - or not without considerable difficulty,
unless you’re an expert user of the program.
Using Statistical Software for Data Entry
Professional survey organizations mostly use statistical package
programs. These are called “package” programs because they do
many things, and one of those things is data entry.
The best known statistical program is SPSS (Statistical Package for the Social Sciences). This is widely used in universities and research companies around the world. It's not particularly easy to use (especially if you aren't trained in statistics), and it's not
available in many languages, and it’s very expensive - from
about 1,000 to 10,000 US dollars, depending on which parts of
it you buy. However there are many people who know how to
use it, and if you don’t do your own data analysis, perhaps you
can find somebody in a nearby university who will analyse the
survey for you, using SPSS.
The basic module of SPSS includes spreadsheet-like data entry,
but without much checking built in. There’s also a module
specifically designed for data entry, but it costs a lot more, and is
said to be difficult to set up properly (I haven’t used it).
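If none of these packages is available, the kind of valid-value checking described above can be improvised in a few lines. The following is only a sketch, assuming codes are typed at a keyboard one field at a time; the field definitions are invented examples, not a standard layout:

# Minimal data-entry loop with valid-value checking.
FIELDS = [
    ("sex", {"1", "2"}),              # 1 = male, 2 = female
    ("age_group", {"1", "2", "3", "4"}),
    ("listens_fm99", {"0", "1"}),     # 0 = no, 1 = yes
]

def enter_record():
    record = {}
    for name, valid in FIELDS:
        while True:
            value = input(f"{name} {sorted(valid)}: ").strip()
            if value in valid:
                record[name] = value
                break
            print("Invalid code - please re-enter.")  # catches typos at once
    return record

print(enter_record())

Rejecting an invalid code the moment it is typed is exactly the built-in checking that spreadsheets lack.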
Differences between Research, Computer, and
Statistical Terms
The varying words given to the same concepts often confuse
novice researchers. Here are some simple explanations.
Questions, Fields, and Variables
A field is a space to be filled - like a column in a table. For
example, the first field in most survey data files is the questionnaire ID number. (Field, in the computer sense, has absolutely
nothing to do with fieldwork.) Each field contains one variable:
you can’t have two variables in one field, or vice versa.
A question with one possible answer also occupies a field, and
corresponds to a variable. But a question with multiple answers
(e.g. “which languages do you understand?”) will have a
number of fields and variables. If a question has up to 6 possible answers, it will have 6 variables in 6 fields - though for respondents who have given fewer than 6 answers, some of these variables and fields will be blank.
When you describe a questionnaire, you refer to questions.
When you’re doing data entry, or planning computer files,
“field” is the term to use. But when you’re analysing the data,
using a statistical program, a field becomes a variable.
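A small illustration may help (the field names here are invented): one questionnaire becomes one record, a single-answer question becomes one field, and a multiple-answer question becomes several fields, one variable per field:

# One questionnaire = one record.
record = {
    "id": 101,                 # questionnaire ID number - usually the first field
    "age_group": 2,            # one question, one field, one variable
    "language_1": "English",   # a question with up to 3 answers
    "language_2": "Hindi",     # occupies 3 fields...
    "language_3": None,        # ...some of which may be blank
}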
Answers, Codes, and Values
An answer belongs to a question on a questionnaire, a code
occupies a field in a computer file, and a value belongs to a
variable. All three are much the same thing.
Questionnaires, Respondents, Interviews, Records,
and Cases
Again, these are really the same thing, but the name changes
with the context. An interviewer will speak of an interview
producing a questionnaire, a computer programmer will call the
information from one questionnaire a record (when it’s on a
computer file), and a statistician will call it a case.
Of course, there are also respondents and interviews: usually
there’s one respondent per questionnaire, and one questionnaire
per interview. Don’t make the common mistake of calling a
questionnaire or interview a “survey”: the survey is the whole
project.
When shown on a computer screen, one record will usually
correspond to one line - though this will often be too wide to
see on the screen all at once.
LESSON 19:
CASE STUDY
Topics Covered
Selecting Cases, Data Gathering and Analysis Techniques, Preparing, Collecting, Evaluating and Analyzing Data, Case Study Method
Objectives
Upon completion of this Lesson, you should be able to:
• Select the Cases and Determine Data Gathering and Analysis
Techniques
• Prepare to Collect the Data
• Collect Data in the Field
• Evaluate and Analyze the Data
• Prepare the report
• Apply the Case Study Method to an Electronic Community Network
The Case Study as a Research Method
Introduction
Case study research excels at bringing us to an understanding of
a complex issue or object and can extend experience or add
strength to what is already known through previous research.
Case studies emphasize detailed contextual analysis of a limited
number of events or conditions and their relationships.
Researchers have used the case study research method for many
years across a variety of disciplines. Social scientists, in particular,
have made wide use of this qualitative research method to
examine contemporary real-life situations and provide the basis
for the application of ideas and extension of methods.
Researcher Robert K. Yin defines the case study research
method as an empirical inquiry that investigates a contemporary
phenomenon within its real-life context; when the boundaries
between phenomenon and context are not clearly evident; and
in which multiple sources of evidence are used (Yin, 1984, p.
23).
Critics of the case study method believe that the study of a
small number of cases can offer no grounds for establishing
reliability or generality of findings. Others feel that the intense
exposure to study of the case biases the findings. Some dismiss
case study research as useful only as an exploratory tool. Yet
researchers continue to use the case study research method with
success in carefully planned and crafted studies of real-life
situations, issues, and problems. Reports on case studies from
many disciplines are widely available in the literature.
This lesson explains how to use the case study method and then
applies the method to an example case study project designed
to examine how one set of users, non-profit organizations,
make use of an electronic community network. The study
examines the issue of whether or not the electronic community
network is beneficial in some way to non-profit organizations
and what those benefits might be.
Many well-known case study researchers such as H. Simons and
R. K. Yin have written about case study research and suggested
techniques for organizing and conducting the research successfully. This introduction to case study research draws upon their
work and proposes six steps that should be used:
• Determine and define the research questions
• Select the cases and determine data gathering and analysis
techniques
• Prepare to collect the data
• Collect data in the field
• Evaluate and analyze the data
• Prepare the report
Step 1. Determine and Define the Research
Questions
The first step in case study research is to establish a firm research
focus to which the researcher can refer over the course of study
of a complex phenomenon or object. The researcher establishes
the focus of the study by forming questions about the
situation or problem to be studied and determining a purpose
for the study. The research object in a case study is often a
program, an entity, a person, or a group of people. Each object
is likely to be intricately connected to political, social, historical,
and personal issues, providing wide ranging possibilities for
questions and adding complexity to the case study. The
researcher investigates the object of the case study in depth
using a variety of data gathering methods to produce evidence
that leads to understanding of the case and answers the research
questions.
Case study research generally answers one or more questions
which begin with “how” or “why.” The questions are targeted
to a limited number of events or conditions and their interrelationships. To assist in targeting and formulating the
questions, researchers conduct a literature review. This review
establishes what research has been previously conducted and
leads to refined, insightful questions about the problem.
Careful definition of the questions at the start pinpoints where
to look for evidence and helps determine the methods of
analysis to be used in the study. The literature review, definition
of the purpose of the case study, and early determination of
the potential audience for the final report guide how the study
will be designed, conducted, and publicly reported.
Step 2. Select the Cases and Determine Data
Gathering and Analysis Techniques
During the design phase of case study research, the researcher
determines what approaches to use in selecting single or
multiple real-life cases to examine in depth and which instruments and data gathering approaches to use. When using
multiple cases, each case is treated as a single case. Each case’s
conclusions can then be used as information contributing to the
whole study, but each case remains a single case. Exemplary case
studies carefully select cases and carefully examine the choices
available from among many research tools available in order to
increase the validity of the study. Careful discrimination at the
point of selection also helps erect boundaries around the case.
The researcher must determine whether to study cases which are
unique in some way or cases which are considered typical and
may also select cases to represent a variety of geographic regions,
a variety of size parameters, or other parameters. A useful step
in the selection process is to repeatedly refer back to the purpose
of the study in order to focus attention on where to look for
cases and evidence that will satisfy the purpose of the study and
answer the research questions posed. Selecting multiple or single
cases is a key element, but a case study can include more than
one unit of embedded analysis. For example, a case study may
involve study of a single industry and a firm participating in
that industry. This type of case study involves two levels of
analysis and increases the complexity and amount of data to be
gathered and analyzed.
A key strength of the case study method involves using
multiple sources and techniques in the data gathering process.
The researcher determines in advance what evidence to gather
and what analysis techniques to use with the data to answer the
research questions. Data gathered is normally largely qualitative,
but it may also be quantitative. Tools to collect data can include
surveys, interviews, documentation review, observation, and
even the collection of physical artifacts.
The researcher must use the designated data gathering tools
systematically and properly in collecting the evidence. Throughout the design phase, researchers must ensure that the study is
well constructed to ensure construct validity, internal validity,
external validity, and reliability. Construct validity requires the
researcher to use the correct measures for the concepts being
studied. Internal validity (especially important with explanatory
or causal studies) demonstrates that certain conditions lead to
other conditions and requires the use of multiple pieces of
evidence from multiple sources to uncover convergent lines of
inquiry. The researcher strives to establish a chain of evidence
forward and backward. External validity reflects whether or not
findings are generalizable beyond the immediate case or cases;
the more variations in places, people, and procedures a case study can withstand and still yield the same findings, the greater its external validity. Techniques such as cross-case examination and within-case examination, along with literature review, help ensure external validity. Reliability refers to the stability, accuracy,
and precision of measurement. Exemplary case study design
ensures that the procedures used are well documented and can
be repeated with the same results over and over again.
Step 3. Prepare to Collect the Data
Because case study research generates a large amount of data
from multiple sources, systematic organization of the data is
important to prevent the researcher from becoming overwhelmed by the amount of data and to prevent the researcher
from losing sight of the original research purpose and questions. Advance preparation assists in handling large amounts of
data in a documented and systematic fashion. Researchers
prepare databases to assist with categorizing, sorting, storing,
and retrieving data for analysis.
Exemplary case studies prepare good training programs for
investigators, establish clear protocols and procedures in
advance of investigator field work, and conduct a pilot study in
advance of moving into the field in order to remove obvious
barriers and problems. The investigator training program covers
the basic concepts of the study, terminology, processes, and
methods, and teaches investigators how to properly apply the
techniques being used in the study. The program also trains
investigators to understand how the gathering of data using
multiple techniques strengthens the study by providing
opportunities for triangulation during the analysis phase of the
study. The program covers protocols for case study research,
including time deadlines, formats for narrative reporting and
field notes, guidelines for collection of documents, and
guidelines for field procedures to be used. Investigators need to
be good listeners who can hear exactly the words being used by
those interviewed. Qualifications for investigators also include
being able to ask good questions and interpret answers. Good
investigators review documents looking for facts, but also read
between the lines and pursue corroborative evidence elsewhere
when that seems appropriate. Investigators need to be flexible
in real-life situations and not feel threatened by unexpected
change, missed appointments, or lack of office space. Investigators need to understand the purpose of the study and grasp the
issues and must be open to contrary findings. Investigators
must also be aware that they are going into the world of real
human beings who may be threatened or unsure of what the
case study will bring.
After investigators are trained, the final advance preparation step
is to select a pilot site and conduct a pilot test using each data
gathering method so that problematic areas can be uncovered
and corrected. Researchers need to anticipate key problems and
events, identify key people, prepare letters of introduction,
establish rules for confidentiality, and actively seek opportunities
to revisit and revise the research design in order to address and
add to the original set of research questions.
Step 4. Collect Data in the Field
The researcher must collect and store multiple sources of
evidence comprehensively and systematically, in formats that can
be referenced and sorted so that converging lines of inquiry and
patterns can be uncovered. Researchers carefully observe the
object of the case study and identify causal factors associated
with the observed phenomenon. Renegotiation of arrangements with the objects of the study or addition of questions to
interviews may be necessary as the study progresses. Case study
research is flexible, but when changes are made, they are
documented systematically.
Exemplary case studies use field notes and databases to
categorize and reference data so that it is readily available for
subsequent reinterpretation. Field notes record feelings and
intuitive hunches, pose questions, and document the work in
progress. They record testimonies, stories, and illustrations
which can be used in later reports. They may warn of impending bias because of the detailed exposure of the client to special
attention, or give an early signal that a pattern is emerging. They
assist in determining whether or not the inquiry needs to be
reformulated or redefined based on what is being observed.
Field notes should be kept separate from the data being
collected and stored for analysis.
Maintaining the relationship between the issue and the evidence
is mandatory. The researcher may enter some data into a
database and physically store other data, but the researcher
documents, classifies, and cross-references all evidence so that it
can be efficiently recalled for sorting and examination over the
course of the study.
Step 5. Evaluate and Analyze the Data
The researcher examines raw data using many interpretations in
order to find linkages between the research object and the
outcomes with reference to the original research questions.
Throughout the evaluation and analysis process, the researcher
remains open to new opportunities and insights. The case
study method, with its use of multiple data collection methods
and analysis techniques, provides researchers with opportunities
to triangulate data in order to strengthen the research findings
and conclusions.
The tactics used in analysis force researchers to move beyond
initial impressions to improve the likelihood of accurate and
reliable findings. Exemplary case studies will deliberately sort
the data in many different ways to expose or create new insights
and will deliberately look for conflicting data to disconfirm the
analysis. Researchers categorize, tabulate, and recombine data to
address the initial propositions or purpose of the study, and
conduct cross-checks of facts and discrepancies in accounts.
Focused, short, repeat interviews may be necessary to gather
additional data to verify key observations or check a fact.
Specific techniques include placing information into arrays,
creating matrices of categories, creating flow charts or other
displays, and tabulating frequency of events. Researchers use the
quantitative data that has been collected to corroborate and
support the qualitative data which is most useful for understanding the rationale or theory underlying relationships.
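As a toy sketch of creating a matrix of categories and tabulating the frequency of events (the case names and categories below are invented, not from a real study), coded observations can be cross-tabulated by case:

from collections import defaultdict

# Hypothetical coded observations gathered in the field: (case, category)
observations = [
    ("health care", "benefit reported"),
    ("health care", "technical problem"),
    ("education", "benefit reported"),
    ("education", "benefit reported"),
    ("religious", "no benefit reported"),
]

matrix = defaultdict(lambda: defaultdict(int))
for case, category in observations:
    matrix[case][category] += 1

for case, row in sorted(matrix.items()):
    print(case, dict(row))

Such a matrix makes it easy to spot which categories recur across cases and which evidence stands in conflict with an emerging pattern.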
Another technique is to use multiple investigators to gain the
advantage provided when a variety of perspectives and insights
examine the data and the patterns. When the multiple observations converge, confidence in the findings increases. Conflicting
perceptions, on the other hand, cause the researchers to pry
more deeply.
Another technique, the cross-case search for patterns, keeps
investigators from reaching premature conclusions by requiring
that investigators look at the data in many different ways.
Cross-case analysis divides the data by type across all cases
investigated. One researcher then examines the data of that type
thoroughly. When a pattern from one data type is corroborated
by the evidence from another, the finding is stronger. When
evidence conflicts, deeper probing of the differences is necessary
to identify the cause or source of conflict. In all cases, the
researcher treats the evidence fairly to produce analytic conclusions answering the original “how” and “why” research
questions.
Step 6. Prepare the Report
Exemplary case studies report the data in a way that transforms
a complex issue into one that can be understood, allowing the
reader to question and examine the study and reach an understanding independent of the researcher. The goal of the written
report is to portray a complex problem in a way that conveys a
vicarious experience to the reader. Case studies present data in
very publicly accessible ways and may lead the reader to apply the
experience in his or her own real-life situation. Researchers pay
particular attention to displaying sufficient evidence to gain the
reader’s confidence that all avenues have been explored, clearly
communicating the boundaries of the case, and giving special
attention to conflicting propositions.
Techniques for composing the report can include handling each
case as a separate chapter or treating the case as a chronological
recounting. Some researchers report the case study as a story.
During the report preparation process, researchers critically
examine the document looking for ways the report is incomplete. The researcher uses representative audience groups to
review and comment on the draft document. Based on the
comments, the researcher rewrites and makes revisions. Some
case study researchers suggest that the document review
audience include a journalist and some suggest that the
documents should be reviewed by the participants in the study.
Applying the Case Study Method to an
Electronic Community Network
By way of example, we apply these six steps to an example
study of multiple participants in an electronic community
network. All participants are non-profit organizations which
have chosen an electronic community network on the World
Wide Web as a method of delivering information to the public.
The case study method is applicable to this set of users because
it can be used to examine the issue of whether or not the
electronic community network is beneficial in some way to the
organization and what those benefits might be.
Step 1. Determine and Define the Research
Questions
In general, electronic community networks have three distinct
types of users, each one a good candidate for case study
research. The three groups of users include people around the
world who use the electronic community network, the non-profit organizations using the electronic community network to
provide information to potential users of their services, and the
“community” that forms as the result of interacting with other
participants on the electronic community network.
In this case, the researcher is primarily interested in determining
whether or not the electronic community network is beneficial
in some way to non-profit organization participants. The
researcher begins with a review of the literature to determine
what prior studies have determined about this issue and uses
the literature to define the following questions for the study of
the non-profit organizations providing information to the
electronic community network:
Why do non-profit organization participants use the network?
How do non-profit organization participants determine what
to place on the electronic community network?
Do the non-profit organization participants believe the
community network serves a useful purpose in furthering their
mission? How?
Step 2. Select the Cases and Determine Data
Gathering and Analysis Techniques
Many communities have constructed electronic community
networks on the World Wide Web. At the outset of the design
phase, the researcher determines that only one of these
networks will be studied and further sets the study boundaries
to include only some of the non-profit organizations represented on that one network. The researcher contacts the Board
of Directors of the community network, who are open to the
idea of the case study. The researcher also gathers computer
generated log data from the network and, using this data,
determines that an in-depth study of representative organizations from four categories — health care, environmental,
education, and religious — is feasible. The investigator applies
additional selection criteria so that an urban-based and a rural-based non-profit are represented in the study in order to
examine whether urban non-profits perceive more benefits
from community networks than rural organizations.
The researcher considers multiple sources of data for this study and selects document examination, the gathering and study of organizational documents such as administrative reports, agendas, letters, minutes, and news clippings for each of the organizations. In this case, the investigator decides to also conduct open-ended interviews with key members of each organization using a check-list to guide interviewers during the interview process so that uniformity and consistency can be assured in the data, which could include facts, opinions, and unexpected insights. In this case study, the researcher cannot employ direct observation as a tool because some of the organizations involved have no office and meet infrequently to conduct business directly related to the electronic community network. The researcher instead decides to survey all Board members of the selected organizations using a questionnaire as a third data gathering tool. Within-case and cross-case analysis of data are selected as analysis techniques.
Step 3. Prepare to Collect the Data
The researcher prepares to collect data by first contacting each
organization to be studied to gain their cooperation, explain the
purpose of the study, and assemble key contact information.
Since data to be collected and examined includes organizational
documents, the researcher states his intent to request copies of
these documents, and plans for storage, classification, and
retrieval of these items, as well as the interview and survey data.
The researcher develops a formal investigator training program
to include seminar topics on non-profit organizations and their
structures in each of the four categories selected for this study.
The training program also includes practice sessions in conducting open-ended interviews and documenting sources,
suggested field notes formats, and a detailed explanation of the
purpose of the case study. The researcher selects a fifth case as a
pilot case, and the investigators apply the data gathering tools to
the pilot case to determine whether the planned timeline is
feasible and whether or not the interview and survey questions
are appropriate and effective. Based on the results of the pilot,
the researcher makes adjustments and assigns investigators particular cases which become their area of expertise in the evaluation and analysis of the data.
Step 4. Collect Data in the Field
Investigators first arrange to visit with the Board of Directors of each non-profit organization as a group and ask for copies of the organization's mission, news clippings, brochures, and any other written material describing the organization and its purpose. The investigator reviews the purpose of the study with the entire Board, schedules individual interview times with as many Board members as can cooperate, confirms key contact data, and requests that all Board members respond to the written survey which will be mailed later.
Investigators take written notes during the interview and record field notes after the interview is completed. The interviews, although open-ended, are structured around the research questions defined at the start of the case study.
Research Question: Why do Non-profit Organization
Participants Use the Network?
Interview Questions: How did the organization make the
decision to place data on the World Wide Web community
network? What need was the organization hoping to fulfill?
Research Question: How do Non-profit Organization
Participants Determine What to Place on the Electronic
Community Network?
Interview Questions: What process was used to select the
information that would be used on the network? How is the
information kept up to date?
Research Question: Do the Non-profit Organization
Participants Believe the Community Network Serves a
Useful Purpose in Furthering their Mission? How?
Interview Questions: How does the organization know if the
electronic community network is beneficial to the organization?
How does the electronic community network further the
mission of the organization? What systematic tracking mechanisms exist to determine how many or what types of users are
accessing the organization information?
The investigator’s field notes record impressions and questions
that might assist with the interpretation of the interview data.
The investigator makes note of stories told during open-ended
interviews and flags them for potential use in the final report.
Data is entered into the database.
The researcher mails written surveys to all Board members with
a requested return date and a stamped return envelope. Once
the surveys are returned, the researcher codes and enters the data
into the database so that it can be used independently as well as
integrated when the case study progresses to the point of cross-case examination of data for all four cases.
Step 5. Evaluate and Analyze the Data
Within-case analysis is the first analysis technique used with each
non-profit organization under study. The assigned investigator
studies each organization’s written documentation and survey
response data as a separate case to identify unique patterns
within the data for that single organization. Individual investigators prepare detailed case study write-ups for each
organization, categorizing interview questions and answers and
examining the data for within-group similarities and differences.
Cross-case analysis follows. Investigators examine pairs of
cases, categorizing the similarities and differences in each pair.
Investigators then examine similar pairs for differences, and
dissimilar pairs for similarities. As patterns begin to emerge,
certain evidence may stand out as being in conflict with the
patterns. In those cases, the investigator conducts follow-up
focused interviews to confirm or correct the initial data in order
to tie the evidence to the findings and to state relationships in
answer to the research questions.
Step 6. Prepare the Report
The outline of the report includes thanking all of the participants, stating the problem, listing the research questions,
describing the methods used to conduct the research and any
potential flaws in the method used, explaining the data
gathering and analysis techniques used, and concluding with the
answers to the questions and suggestions for further research.
Key features of the report include a retelling of specific stories
related to the successes or disappointments experienced by the
organizations that were conveyed during data collection, and
answers or comments illuminating issues directly related to the
research questions. The researcher develops each issue using
quotations or other details from the data collected, and points
out the triangulation of data where applicable. The report also
includes confirming and conflicting findings from literature
reviews. The report conclusion makes assertions and suggestions for further research activity, so that another researcher may
apply these techniques to another electronic community
network and its participants to determine whether similar
findings are identifiable in other communities. Final report
distribution includes all participants.
Applicability to Library and Information
Science
Case study research, with its applicability across many disciplines, is an appropriate methodology to use in library studies.
In Library and Information Science, case study research has been
used to study reasons why library school programs close (Paris,
1988), to examine reference service practices in university library
settings (Lawson, 1971), and to examine how questions are
negotiated between customers and librarians (Taylor, 1967).
Much of the research is focused exclusively on the librarian as
the object or the customer as the object. Researchers could use
the case study method to further study the role of the librarian
in implementing specific models of service. For example, case study research could examine how information-seeking behavior in public libraries compares with information-seeking behavior in places other than libraries, conduct in-depth studies of non-library community-based information services to compare with library-based community information services, or study community networks based in libraries.
Conclusion
Case studies are complex because they generally involve multiple
sources of data, may include multiple cases within a study, and
produce large amounts of data for analysis. Researchers from
many disciplines use the case study method to build upon
theory, to produce new theory, to dispute or challenge theory, to
explain a situation, to provide a basis to apply solutions to
situations, to explore, or to describe an object or phenomenon.
The advantages of the case study method are its applicability to
real-life, contemporary, human situations and its public
accessibility through written reports. Case study results relate
directly to the common reader’s everyday experience and facilitate
an understanding of complex real-life situations.
Assignments
1. What are the basic assumptions for a case study method?
2. Describe some of the limitations of the case study method.
LESSON 20:
CONTENT ANALYSIS
Topics Covered
Analyse, Content, Audience, Form, Recordings.
Objectives
Upon completion of this Lesson, you should be able to:
• Analyse media content
• Analyse audience content
• Understand the recordings
• Understand the text
Content analysis is a method for summarizing any form of
content by counting various aspects of the content. This
enables a more objective evaluation than comparing content
based on the impressions of a listener. For example, an
impressionistic summary of a TV program is not content
analysis. Nor is a book review: it’s an evaluation.
Content analysis, though it often analyses written words, is a
quantitative method. The results of content analysis are
numbers and percentages. After doing a content analysis, you
might make a statement such as “27% of programs on Radio
Lukole in April 2003 mentioned at least one aspect of
peacebuilding, compared with only 3% of the programs in
2001.”
Though it may seem crude and simplistic to make such statements, the counting serves two purposes:
• To remove much of the subjectivity from summaries
• To simplify the detection of trends.
Also, the fact that programs have been counted implies that somebody has listened to every program on the station: content analysis is always thorough.
As you'll see below, content analysis can actually be a lot more subtle than the above example. There's plenty of scope for human judgement in assigning relevance to content.
1. What is Content?
The content that is analysed can be in any form to begin with, but is often converted into written words before it is analysed. The original source can be printed publications, broadcast programs, other recordings, the internet, or live situations. All this content is something that people have created. You can't do content analysis of (say) the weather - but if somebody writes a report predicting the weather, you can do a content analysis of that.
All this is content...
Print media: newspaper items, magazine articles, books, catalogues
Other writings: web pages, advertisements, billboards, posters, graffiti
Broadcast media: radio programs, news items, TV programs
Other recordings: photos, drawings, videos, films, music
Live situations: speeches, interviews, plays, concerts
Observations: gestures, rooms, products in shops
Media Content and Audience Content
That's one way of looking at content. Another way is to divide content into two types: media content and audience content. Just about everything in the above list is media content. But when you get feedback from audience members, that's audience content. Audience content can be either private or public. Private audience content includes:
• open-ended questions in surveys
• interview transcripts
• group discussions.
Public audience content comes from communication between all the audience members, such as:
• letters to the editor
• postings to an online discussion forum
• listeners' responses in talkback radio.
This chapter will focus mainly on public audience content and on media content.
Why do Content Analysis?
If you’re also doing audience research, the main reason for also
doing content analysis is to be able to make links between
causes (e.g. program content) and effect (e.g. audience size). If
you do an audience survey, but you don’t systematically relate
the survey findings to your program output, you won’t know
why your audience might have increased or decreased. You
might guess, when the survey results first appear, but a
thorough content analysis is much better than a guess.
For a media organization, the main purpose of content analysis
is to evaluate and improve its programming. All media
organizations are trying to achieve some purpose. For commercial media, the purpose is simple: to make money, and survive.
For public and community-owned media, there are usually
several purposes, sometimes conflicting - but each individual
program tends to have one main purpose.
As a simple commercial example, the purpose of an advertisement is to promote the use of the product it is advertising: first
by increasing awareness, then by increasing sales. The purpose
of a documentary on AIDS in southern Africa might be to
increase awareness of ways of preventing AIDS, and in the end
to reduce the level of AIDS. Often, as this example has shown,
there is not a single purpose, but a chain of them, with each
step leading to the next.
Using audience research to evaluate the effects (or outcome) of a
media project is the second half of the process. The first half is
to measure the causes (or inputs) - and that is done by content
analysis. For example, in the 1970s a lot of research was done
on the effects of broadcasting violence on TV. If people saw
crimes committed on TV, did that make them more likely to
commit crimes? In this case, the effects were crime rates, often
measured from police statistics. The problem was to link the
effects to the possible causes. The question was not simply
“does seeing crime on TV make people commit crimes?” but
“What types of crime on TV (if any) make what types of
people (if any) commit crimes, in what situations?” UNESCO
in the 1970s produced a report summarizing about 3,000
separate studies of this issue - and most of those studies used
some form of content analysis.
When you study causes and effects, as in the above example,
you can see how content analysis differs from audience research:
• content analysis uncovers causes
• audience research uncovers effects.
The entire process - linking causes to effects, is known as
evaluation.
The Process of Content Analysis
Content analysis has six main stages, each described by one
section of this chapter:
• Selecting content for analysis
• Units of content
• Preparing content for coding
• Coding the content
• Counting and weighting
• Drawing conclusions.
2. Selecting Content for Analysis
Content is huge: the world contains a near-infinite amount of
content. It’s rare that an area of interest has so little content that
you can analyse it all. Even when you do analyse the whole of
something (e.g. all the pictures in one issue of a magazine) you
will usually want to generalize those findings to a broader
context (such as all the issues of that magazine). In other
words, you are hoping that the issue you selected is a representative sample. Like audience research, content analysis involves
sampling. But with content analysis, you’re sampling content,
not people. The body of information you draw the sample
from is often called a corpus – Latin for body.
Deciding Sample Size
Unless you want to look at very fine distinctions, you don’t
need a huge sample. The same principles apply for content
analysis as for surveys: most of the time, a sample between 100
and 2000 items is enough - as long as it is fully representative.
For radio and TV, the easiest way to sample is by time. How
would you sample programs during a month? With 30 days,
you might decide on a sample of 120. Programs vary greatly in
length, so use quarter-hours instead. That’s 4 quarter-hours each
day for a month. Obviously you need to vary the time periods
to make sure that all times of day are covered. An easy way to
do this, assuming you’re on air from 6 am to midnight, is to
make a sampling plan like this:
Day  Quarter-hours beginning
1    0600 1030 1500 1930
2    0615 1045 1515 1945
3    0630 1100 1530 2000
and so on. After 18 days you’ll have covered all quarter-hours.
After 30 days you’ll have covered most of them twice. If that
might introduce some bias, you could keep sampling for
another 6 days, to finish two cycles. Alternatively, you could use
an 18-minute period instead of 15, and still finish in 30 days.
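A sketch of how such a rotating plan could be generated automatically (assuming the 6 am to midnight broadcast day used above; adjust the parameters to your own schedule):

# Generate 4 quarter-hours per day, shifted 15 minutes later each day,
# so every quarter-hour between 0600 and 2400 is covered within 18 days.
def sampling_plan(days=30, start_hour=6, broadcast_hours=18, slots_per_day=4):
    slot_gap = broadcast_hours * 60 // slots_per_day    # 270 minutes apart
    plan = {}
    for day in range(1, days + 1):
        offset = (day - 1) * 15
        times = []
        for slot in range(slots_per_day):
            m = (offset + slot * slot_gap) % (broadcast_hours * 60)
            m += start_hour * 60
            times.append(f"{m // 60:02d}{m % 60:02d}")
        plan[day] = times
    return plan

for day in (1, 2, 3):
    print(day, sampling_plan()[day])   # 1 ['0600', '1030', '1500', '1930'] ...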
With print media, the same principles apply, but it doesn’t
make sense to base the sample on time of day. Instead, use
page and column numbers. Actually, it’s a lot easier with print
media, because you don’t need to organize somebody (or
program a computer) to record the on-air program at regular
intervals.
The Need for a Focus
When you set out to do content analysis, the first thing to
acknowledge is that it’s impossible to be comprehensive. No
matter how hard you try, you can’t analyse content in all
possible ways. I’ll demonstrate, with an example. Let’s say that
you manage a radio station. It’s on air for 18 hours a day, and
no one person seems to know exactly what is broadcast on each
program. So you decide that during April all programs will be
taped. Then you will listen to the tapes and do a content
analysis.
First problem: 18 hours a day, for 30 days, is 540 hours. If you
work a 40-hour week, it will take almost 14 weeks to play the
tapes back. But that’s only listening - without pausing for
content analysis! So instead, you get the tapes transcribed. Most
people speak about 8,000 words per hour. Thus your transcript could run to over 4 million words – about 40 books.
Now the content analysis can begin! You make a detailed
analysis: hundreds of pages of tables and summaries. When
you’ve finished (a year later?) somebody asks you a simple
question, such as “What percentage of the time are women’s
voices heard on this station?”
If you haven’t anticipated that question, you’ll have to go back
to the transcript and laboriously calculate the answer. You find
that the sex of the speaker hasn’t always been recorded. You
make an estimate (only a few days’ work, if you’re lucky) then
you’re asked a follow-up question, such as “How much of that
time is speech, and how much is singing?”
Oops! The transcriber didn’t bother to include the lyrics of the
songs broadcast. Now you’ll have to go back and listen to all
those tapes again!
This example shows the importance of knowing what you’re
looking for when you do content analysis. Forget about trying
to cover everything, because (a) there’s too much content
around, and (b) it can be analysed in an infinite number of
ways. Without having a clear focus, you can waste a lot of time
analysing unimportant aspects of content. The focus needs to
be clearly defined before you begin work.
3. Units of Content
To be able to count content, your corpus needs to be divided
into a number of units, roughly similar in size. There’s no limit
to the number of units in a corpus, but in general the larger the
unit, the fewer units you need. If the units you are counting
vary greatly in length, and if you are looking for the presence of
some theme, a long unit will have a greater chance of including
that theme than will a short unit. If the longest units are many
times the size of the shortest, you may need to change the unit
- perhaps “per thousand words” instead of “per web page.” If
the interviews vary greatly in length, a time-based unit may be
more appropriate than “per interview.”
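For example, a sketch of normalizing to "per thousand words" (the figures below are invented) shows why raw counts mislead when units differ in length:

# Express theme mentions per thousand words, so long and short
# units can be compared fairly.
units = [
    {"name": "short news item", "words": 250,  "theme_mentions": 2},
    {"name": "long article",    "words": 3200, "theme_mentions": 9},
]

for u in units:
    rate = 1000 * u["theme_mentions"] / u["words"]
    print(f'{u["name"]}: {rate:.1f} mentions per 1000 words')

Here the long article has more raw mentions (9 against 2) but a much lower rate (2.8 against 8.0 per thousand words).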
Units of Media Content
Depending on the size of your basic unit, you’ll need to take a
different approach to coding. The main options are (from
shortest to longest):
• A word or phrase. If you are studying the use of language, words are an appropriate unit (you can perhaps also group synonyms together, and include phrases). Though a corpus
may have thousands of words, software can count them
automatically.
• A paragraph, statement, or conversational turn: up to a few
hundred words.
• An article. This might be anything from a short newspaper
item to a magazine article or web page: usually between a few
hundred and a few thousand words.
• A large document. This can be a book, an episode of a TV
program, or a transcript of a long radio talk.
The longer the unit, the more difficult and subjective is the
work of coding it as a whole. Consider breaking a document
into smaller units, and coding each small unit separately.
However, if it’s necessary to be able to link different parts of the
document together, this won’t make sense.
Units of Audience Content
When you are analysing audience content (not media content)
the unit will normally be based on the data collection format
and/or the software used to store the responses. The types of
audience content most commonly produced from research data
are
• Open-ended responses to a question in a survey (usually all
on one large file).
• Statements produced by consensus groups (often on one
small file).
• Comments from in-depth interviews or group discussions.
(Usually a large text file from each interview or group.)
In any of these cases, the unit can be either a person or a
comment. Survey analysis is always based on individuals, but
content analysis is usually based on comments. Most of the
time this difference doesn’t affect the findings, but if some
people make far more comments than others, and these two groups give different kinds of comments, it will be best to use individuals as the unit.
LESSON 21:
CONTENT ANALYSIS - ANALYSIS AND SIZE
Topics Covered
Analyse, Content, Audience, Form, Recordings.
Objectives
Upon completion of this Lesson, you should be able to:
• Analyse media content
• Analyse audience content
• Understand the recordings
• Understand the text
Large Units are Harder to Analyse
Usually the corpus is a set of the basic units: for example, a set
of 13 episodes in a TV series, an 85-message discussion on an
email listserv over several months, 500 respondents’ answers to
a survey question - and so on. What varies is (a) the number of
units in the corpus, and (b) the size of the units.
Differences in these figures will require different approaches to
content analysis. If you are studying the use of language,
focusing on the usage of new words, you will need to use a
large corpus - a million words or so - but the size of the unit
you are studying is tiny: just a single word. The word frequencies can easily be compared using software such as Wordsmith.
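Word-frequency counting of this kind needs no special package; a minimal equivalent in Python (the file name is invented) would be:

import re
from collections import Counter

# Count word frequencies in a corpus held in a plain text file.
with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

freq = Counter(words)
for word, count in freq.most_common(20):
    print(f"{count:6d}  {word}")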
At the other extreme, a literary scholar might be studying the
influence of one writer on another. The unit might be a whole
play, but the number of units might be quite small - perhaps
the 38 plays of Shakespeare compared with the 7 plays of
Marlowe. If the unit is a whole play, and the focus is the literary
style, a lot of human judgement will be needed. Though the
total size of the corpus could be much the same as with the
previous example, far more work is needed when the content
unit is large - because detailed judgements will have to be made
to summarize each play.
Dealing with Several Units at Once
Often, some units overlap other units. For example, if you ask
viewers of a TV program what they like most about it, some
will give one response, and others may give a dozen. Is your
unit the person or the response? (Our experience: it’s best to
keep track of both types of unit, because you won’t know till
later whether using one type of unit will produce a different
pattern of responses.)
4. Preparing Content for Coding
Before content analysis can begin, it needs to be preserved in a
form that can be analysed. For print media, the internet, and
mail surveys (which are already in written form) no transcription
is needed. However, radio and TV programs, as well as recorded
interviews and group discussions, usually need to be transcribed before the content analysis can begin.
Full transcription – that is, conversion into written words,
normally into a computer file – is slow and expensive. Though
it’s sometimes necessary, full transcription is often avoidable,
without affecting the quality of the analysis. A substitute for
transcription is what I call content interviewing (explained
below).
When content analysis is focusing on visual aspects of a TV
program, an alternative to transcription is to take photos of the
TV screen during the program, or to take a sample of frames
from a video recording. For example, if you take a frame every
30 seconds from a 25-minute TV program, you will have 50
screenshots. These could be used for a content analysis of what
is visible on the screen. For a discussion program this would
not be useful, because most photos would be almost identical,
but for programs with strong visual aspects – e.g. most dramas
– a set of photos can be a good substitute for a written
transcript. However this depends on the purpose of the
content analysis.
It’s not possible to accurately analyse live radio and TV programs, because there’s no time to re-check anything. While
you’re taking notes, you’re likely to miss something important.
Therefore radio and TV programs need to be recorded before
they can be content-analysed.
Transcribing Recorded Speech
Until you have actually tried to transcribe an interview by writing
out the spoken words, you’d probably think there was nothing
subjective about it. But when you actually start transcribing, you
soon realize that there are many styles, and many choices within
each style. What people say is often not what they intend - they
leave out words, use the wrong word, stutter, pause, or correct
themselves mid-sentence. At times the voices are inaudible. Do
you then guess, or leave a blank? Should you add “stage
directions” - that the speaker shouted or whispered, or somebody else was laughing in the background?
Ask three or four people (without giving them detailed
instructions) to transcribe the same tape of speech, and you’ll
see surprising differences. Even when transcribing a TV or radio
program, with a professional announcer reading from a script,
the tone of voice can change the intended meaning.
The main principle that emerges from this is that you need to
write clear instructions for transcription, and ensure that all
transcribers (if there is more than one) follow those instructions. It’s useful to have all transcribers begin by transcribing the
same text for about 30 minutes. They then stop and compare
the transcriptions. If there are obvious differences, they then
repeat the process, and again compare the transcriptions. After a few hours, the transcribers' styles become consistent.
It generally takes a skilled typist, using a transcription recorder
with a foot-pedal control, about a day’s work to transcribe an
hour or two of speech. If a lot of people are speaking at once
on the tape, and the transcriber is using an ordinary cassette
player, and the microphone used was of low quality, the
transcription can easily take 10 times as long as the original
speech.
Another possibility is to use speech-recognition software, but
unless the speaker is exceptionally clear (e.g. a radio announcer) a
lot of manual correction is usually needed, and not much time
is saved.
Partial Transcription
Transcribing speech is very slow, and therefore expensive. An
alternative that we (at Audience Dialogue) often use is to make a
summary, instead of a full transcription. We play back the
recording and write what is being discussed during each minute
or so. The summary transcript might look like this:
0' 0"   Moderator introduces herself
1' 25"  Each participant asked to introduce self
1' 50"  James (M, age about 25)
2' 32"  Mary (F, 30?, in wheelchair)
4' 06"  Ayesha (F, 40ish)
4' 55"  Markus (M, about 50) - wouldn't give details
5' 11"  * Grace (F, 38)
6' 18"  Lee (M, 25-30, Java programmer)
7' 43"  Everybody asked to add an agenda item
7' 58"  ** James - reasons for choosing this ISP
This takes little more time than the original recording took: about an hour and a half for a one-hour discussion. The transcriber uses asterisks: * means "this might be relevant" and ** means "very relevant." These marked sections can be listened to again later, and transcribed fully.
If you are testing some particular hypothesis, much of the content will be irrelevant, so it is a waste of time to transcribe everything. Another advantage of making a summary like the one above is that it clearly shows the topics that participants spent a lot of time discussing.
An important note: if you record the times as above, using a tape recorder, make sure the tape is rewound every time you begin listening to it and the counter is reset to zero; otherwise the counter positions won't be found.
What Form does the Content Take?
Content is often transformed into written form before content analysis begins. For example, if you are doing a content analysis of a photo exhibition, the analysis cannot be of the photos themselves. However it might be about the topics of the photos (from a written catalogue), or it might be about visitors' reactions to the photos. Perhaps visitors' comments were recorded on tape, then transcribed. For most purposes, this transcription could be the corpus for the content analysis, but analysis with an acoustic focus might also want to consider how loud the visitors were speaking, at what pitch, and so on.
Conversion into Computer-readable Form
If your source is print media, and you want a text file of the content (so that you can analyse it using software), a quick solution is to scan the text. Unless the text is small and fuzzy (e.g. on cheap newsprint) only a few corrections are usually needed per page.
If the content you are analysing is on a web page, email, or word processing document, the task is easier still. But to analyse this data with most text-analysis software, you will first need to save the content as a text file, eliminating HTML tags and other formatting that is not part of the content: first save the web page, then open it with a word processing program, and finally save it as a text file.
Live Coding
If your purpose in the content analysis is very clear and simple, an alternative to transcription is live coding. For this, the coders play back the tape of the radio or TV program or interview, listen for perhaps a minute at a time, then stop the tape and code the minute they just heard. This works best when several coders are working together. It is too difficult for beginners at coding, but for experienced coders it avoids the bother of transcription. Sometimes, content analysis has a subtle purpose, and a transcript doesn't give the information you need. That's when live coding is most useful: for example, a study of the tone of voice that actors in a drama use when speaking to people of different ages and sexes.
Analysing Secondary Data
Sometimes the corpus for a content analysis is produced
specifically for the study - or at least, the transcription is made
for that purpose. That’s primary data. But in other instances,
the content has already been transcribed (or even coded) for
another purpose. That’s secondary data. Though secondary
data can save you a lot of work, it may not be entirely suitable
for your purpose.
This section applies to content that was produced for some
other purpose, and is now being analysed. Content created for
different purposes, or different audiences, is likely to have
different emphases. In different circumstances, and in different
roles, people are likely to give very different responses. The
expectations produced by a different role, or a different situation, are known as demand characteristics.
When you find a corpus that might be reusable, you need to ask
it some questions, like:
Who Produced this, for What Audience, in What Context?
It’s often misleading to look only at the content itself - the
content makes full sense only in its original context. The context
is an unspoken part of the content, but is often more important than the text of the content.
Why was it Produced?
Content that was prepared to support a specific cause is going
to be more biased than content that was prepared for general
information.
It’s safe to assume that all content is biased in some way. For
example, content that is produced by or for a trade union is
likely to be very different (in some ways) from content produced
by an employer group. But (in other ways) the two sets of
content will share many similarities - because they are likely to
discuss the same kinds of issues. Content produced by an
advertiser or consumer group, ostensibly on that same topic, is
likely to have a very different emphasis. That emphasis is very
much part of the content - even if this is not stated explicitly.
How Old is this Corpus?
Is it still valid for your current purpose?
5. Coding Content
Coding in content analysis is the same as coding answers in a
survey: summarizing responses into groups, reducing the
number of different responses to make comparisons easier.
Thus you need to be able to sort concepts into groups, so that
in each group the concepts are both
• as similar as possible to each other, and
• as different as possible from concepts in each other group.
Does that seem puzzling? Read on: the examples below will
make it clearer.
Another issue is the stage at which the coding is done. In
market research organizations, open-ended questions are usually
coded before the data entry stage. The computer file of results
has only the coded data, not the original verbatim answer. This
makes life easier for the survey analysts - for example, to have
respondents’ occupations classified in standard groups, rather
than many slightly varying answers. However, it also means that
some subtle data is lost, unless the analyst has some reason to
read the original questionnaires. For occupation data, the
difference between, say “clerical assistant” and “office assistant”
may be trivial (unless that is the subject of the survey). But for
questions beginning with “why,” coding usually over-simplifies
the reality. In such cases it’s better to copy the verbatim answers
into a computer file, and group them later.
The same applies with content analysis. Coding is necessary to
reduce the data to a manageable mass, but any piece of text can
be coded in many different ways. It’s therefore important to be
able to check the coding easily, by seeing the text and codes on
the same sheet of paper, or the same computer screen.
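As a minimal sketch of that idea (in Python, with invented answers and codes), you might keep each verbatim answer and its code together in one file that any spreadsheet can open:

```python
# A minimal sketch, assuming you keep verbatims and codes together.
# Column names and example data are invented for illustration.
import csv

coded_answers = [
    {"unit": 1, "verbatim": "clerical assistant", "code": 2},
    {"unit": 2, "verbatim": "office assistant",   "code": 2},
    {"unit": 3, "verbatim": "welder",             "code": 3},
]

with open("coded_answers.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["unit", "verbatim", "code"])
    writer.writeheader()
    writer.writerows(coded_answers)
# Opening this file in any spreadsheet shows text and codes side by
# side, which makes checking (and re-coding) far easier than codes alone.
```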
Single Coding and Multi-coding
It’s usual in survey analysis to give only one code to each openended answer. For example, if a respondent’s occupation is
“office assistant” and the coding frame was this ...
Professionals and managers = 1
Other white collar = 2
Skilled blue-collar = 3
Unskilled blue-collar = 4
... an office assistant would be coded as group 2. But multiple
coding would also be possible. Occupations would be divided
in several different “questions,” such as
Question 1: Skill Level
Professional or skilled = 1
Unskilled = 2
Question 2: Work Environment
Office / white collar = 1
Manual / blue collar = 2
An office assistant could be classified as 2 on skill level and 1 on
work environment.
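Here’s a minimal sketch, in Python, of the difference between the two approaches. The lookup tables are invented for illustration; they are not a real occupational classification:

```python
# Sketch of single coding vs. multiple coding, using the frames above.
# The occupation lookup tables are invented examples.

SINGLE_FRAME = {
    "professionals and managers": 1,
    "other white collar": 2,
    "skilled blue-collar": 3,
    "unskilled blue-collar": 4,
}

def single_code(occupation: str) -> int:
    # One code per answer: an office assistant is simply group 2.
    lookup = {"office assistant": "other white collar",
              "welder": "skilled blue-collar"}
    return SINGLE_FRAME[lookup[occupation]]

def multi_code(occupation: str) -> dict:
    # Two separate "questions": skill level and work environment.
    table = {
        "office assistant": {"skill_level": 2, "environment": 1},
        "welder":           {"skill_level": 1, "environment": 2},
    }
    return table[occupation]

print(single_code("office assistant"))  # -> 2
print(multi_code("office assistant"))   # -> {'skill_level': 2, 'environment': 1}
```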
If you are dealing with transcripts of in-depth interviews or
group discussions, the software normally used for this purpose
encourages multiple coding. The software used for survey
analysis doesn’t actually discourage multiple coding, but most
people don’t think of using it. My suggestion is to use
multiple coding whenever possible - unless you are very, very
certain about what you are trying to find in a content analysis
(most likely because you’ve done the same study every month
for the last year). As you’ll see in the example below, multiple
coding lets you view the content in more depth, and can also be
less work than single coding.
Coding Frames
A coding frame is just a set of groups into which comments (or
answers to a question) can be divided – e.g. the occupation
categories shown above. In principle, this is easy. Simply think
of all possible categories for a certain topic. In practice, of
course, this can be very difficult, except when the topic is limited
in its scope - as with a list of occupation types. As that’s not
common in content analysis, the usual way of building a coding
frame is to take a subset of the data, and to generate the coding
frame from that.
An easy way to do this is to create a word processing file, and
type in (or copy from another file) about 100 verbatim comments from the content being analysed. If you leave a blank line
above and below each comment, and format the file in several
columns, you can then print out the comments, cut up the
printout into lots of small pieces of paper, and rearrange the
pieces on a table so that the most similar ones are together. This
sounds primitive, but it’s much faster than trying to do the
same thing using only a computer.
When similar codes are grouped together, they should be given
a label. You can create either conceptual labels (based on a theory you are testing), or in vivo labels (based on vivid terms in respondents’ own words).
How Large Should a Coding Frame be?
A coding frame for content analysis normally has between
about 10 and 100 categories. With fewer than 10 categories, you
risk grouping dissimilar answers together, simply because the
coding frame doesn’t allow them to be separated. But with
more than 100 categories, some will seem very similar, and
there’s a risk that two near-identical answers will be placed in
different categories. If it’s important to have a lot of categories,
consider using hierarchical coding.
Hierarchical Coding
This is also known as tree coding, with major groups (branches)
and sub-groups (twigs). Each major group is divided into a
number of sub-groups, and each subgroup can then be divided
further, if necessary. This method can produce unlimited coding
possibilities, but sometimes it is not possible to create an
unambiguous tree structure – for example, when the codes are
very abstract. As an example, a few years ago I worked on a
study of news and current affairs items for a broadcasting
network. We created a list of 122 possible topics for news items,
then divided these topics into 12 main groups:
• Crime and justice
• Education
• Environment
• Finance
• Government and politics
• Health
• International events and trends
• Leisure activities and sport
• Media and entertainment
• Science and technology
• Social issues
• Work and industry
This coding frame was used for both a survey and a content
analysis. We invited the survey respondents to write in any
categories that we’d forgotten to include, but our preliminary
work in setting up the structure had been thorough, and only a
few minor changes were needed.
Because setting up a clear tree-like structure can take a long time,
don’t use this method if you’re in a hurry – a badly-formed tree
causes problems when sub-groups are combined for the
analysis.
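Here’s a rough sketch of how such a tree can be stored, and how sub-groups can be rolled up into their major groups for analysis. The main groups are from the news study above; the sub-topics are invented:

```python
# A minimal sketch of hierarchical (tree) coding.
# Main groups come from the news study above; sub-topics are invented.
CODING_TREE = {
    "Crime and justice": ["courts", "policing", "prisons"],
    "Environment": ["pollution", "conservation"],
    "Health": ["hospitals", "public health"],
    # ... the other nine main groups would follow the same pattern
}

# Build a reverse index: sub-topic (twig) -> main group (branch).
BRANCH_OF = {twig: branch
             for branch, twigs in CODING_TREE.items()
             for twig in twigs}

def roll_up(twig_code: str) -> str:
    """Combine a sub-group into its major group for analysis."""
    return BRANCH_OF[twig_code]

print(roll_up("pollution"))  # -> "Environment"
```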
Using an Existing Coding Frame
You don’t always need to create a coding frame from scratch. If
you know that somebody has done the same type of content
analysis as you are doing, there are several advantages to using
an existing coding frame. It not only saves all the time it takes
to develop a coding frame, but also will enable you to compare
your own results with those of the earlier study. Even if your
study has a slightly different focus, you can begin with an
existing coding frame and modify it to suit your focus. If you’d
like a coding frame for news and current affairs topics, feel free
to use (or adapt) mine above. Government census bureaus use standard coding frames, particularly for economic data, such as ISCO: the International Standard Classification of Occupations. Other specialized coding frames can be found on the Web, such as those used for coding conflict in news bulletins.
The Importance of Consistency in Coding
When you are coding verbatim responses, you’re always making borderline decisions. “Should this answer be category 31 or 73?” To maintain consistency, I suggest taking these steps:
• Have a detailed list showing each code and the reasons for choosing it. When you update it, make sure that each coder gets a copy of the new version.
• Use the minimum number of people to do the coding. Consistency is greatest if one person does it all. Next best is to have two people, working in the same room.
• Keep a record of each borderline decision, and the reasons why you decided a particular category was the most appropriate.
• Have each coder re-code some of each other coder’s work. A 10% sample is usually enough, but if this uncovers a lot of inconsistency, re-code some more.
If you have created a coding frame based on a small sample of
the units, you usually find some exceptions after coding more
of the content. Often, at that point, you realize your coding
frame wasn’t detailed enough to cover all the units. So what do
you do now?
Usually, you add some new codes, then go back and review all
the units you’ve coded already, to see if they include the new
codes. So it helps if you have already noted the unit numbers
of any units where the first set of codes didn’t exactly apply.
You can then go straight back to those units (if you’re storing
them in numerical order) and review the codes. A good time to
do this review is when you’ve coded about a quarter of the total
units, or about 200 units - whichever is less. After 200-odd
units, new codes rarely need to be added. It’s usually safe to
code most late exceptions as “other” - apart from any important
new concepts.
This works best if all the content is mixed up before you begin
the coding. For example, if you are comparing news content
from two TV stations, and if your initial coding frame is based
only on one channel, you may have to add a lot of new
categories when you start coding items from the second
channel. For that reason, the initial sample you use to create the
coding frame should include units of as many different types as
possible. Alternatively, you could sort all the content units into
random order before coding, but that would make it much
harder for the coders to see patterns in the original order of the
data.
LESSON 22:
CONTENT ANALYSIS- QUESTIONING THE CONTENT
Topics Covered
Analyse, Content, Audience, Form, Recordings.
Objectives
Upon completion of this Lesson, you should be able to:
• Analyse media content
• Analyse audience content
• Understand the recordings
• Understand the text
Questioning the Content
When analysing media content (even in a visual form - such as a
TV program) it’s possible to skip the transcription, and go
straight to coding. This is done by describing the visual aspects
in a way that’s relevant to the purpose of the analysis.
For example, if you were studying physical violence on TV
drama programs, you’d focus on each violent action and record
information about it. This is content interviewing: interviewing
a unit of content as if it were a person, asking it “questions,”
and recording the “answers” you find in the content unit.
What you can’t do, of course, when interviewing the content is
to probe the response to enable the coding to be more accurate.
An interview respondent, when asked “Why did you say that?”
will answer - but media content can’t do that.
When you’re interviewing content, it’s good practice to create a
short questionnaire, and fill in a copy for each content unit. This
helps avoid errors and inconsistencies in coding. The time you
save in the end will compensate for the extra paper used. The
questionnaires are processed with standard survey software.
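Here’s a hypothetical sketch of such a questionnaire as a structured record, so every content unit is asked the same “questions.” The field names and values are invented, loosely following the TV-violence example:

```python
# Sketch: one "questionnaire" filled in per content unit.
# Field names and the example values are invented for illustration.
from dataclasses import dataclass, asdict

@dataclass
class ContentUnitInterview:
    unit_id: int
    program: str
    channel: str
    action_type: str      # the "answer" found in the content
    duration_seconds: int

unit = ContentUnitInterview(
    unit_id=17,
    program="Example Drama",
    channel="Channel 9",
    action_type="verbal threat",
    duration_seconds=24,
)
print(asdict(unit))  # ready for entry into standard survey software
```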
The main disadvantage of content interviewing is that you can’t easily check a code by looking at the content that produced it. Being able to see a code next to the text that produced it helps greatly in increasing the accuracy and consistency of the analysis. But when there’s no transcript (as is usual when interviewing content) you can check a code only by finding the recording of that unit and playing it back. In the olden days (the 20th century) there were usually a lot of cassette tapes to manage. It was important to number them, note the counter positions, make an index of tapes, and store them in order. Now, in the 21st century, computers are faster and store far more data. I suggest storing sound and video recordings for content analysis on hard disk, with each content unit as a separate file. You can then flick back and forth between the coding software and the playback software. This is a great time-saver.
Overlapping Codes
When the content you are analysing has large units of text - e.g. a long interview - it can be difficult to code. A common problem is that codes overlap. The interviewee may be talking about a particular issue (given a particular code) for several minutes. In the middle of that, there may be a reference to something else, which should be given a different kind of code. The more abstract your coding categories, the more likely you are to encounter this problem. If it’s important to capture and analyse these overlapping codes, there are two solutions:
High-tech:
Use software specially designed for this purpose. It’s powerful, but not cheap - and it takes time to learn to use it well.
Low-tech:
Cutting up transcripts and sorting the pieces of paper. Disadvantages: if you are interrupted, you easily lose track of what you’re doing - and never do this in a windy place!
My suggestion: unless you’re really dedicated, avoid this type of content analysis. Such work is done mainly by academics (because they have the time) but not by commercial researchers, because the usefulness of the results seldom justifies the expense.
Content Analysis Without Coding
Coding is another form of summarizing. If you want to summarize some media content (the usual reason for doing content analysis) one option is to summarize the content at a late stage, instead of the usual method of summarizing it at an early stage.
If your content units are very small (such as individual words) there’s software that can count words or phrases. In this case no coding is needed: the software does the counting for you, though you still need to summarize the results. This means much less work near the beginning of the project, and a little more at the end.
Unfortunately, software can’t draw useful conclusions. Maybe in 10 years the software will be much cleverer, but at the moment there’s no substitute for human judgement - and that takes a lot of time. Even so, if your units are not too large, and all the content is available as a computer file, you can save time by delaying the coding till a later stage than usual. The time is saved when similar content is grouped, so that a lot of units can be coded at once.
Using Judges
A common way to overcome coding problems is to appoint a small group of “judges” and average their views on subjective matters. Though it’s easy to be precise about minor points (e.g. “the word violence was spoken 29 times”), the more general your analysis, the more subjective it becomes (e.g. the concept of violence as generally understood by the audience).
Use judges when there are likely to be disagreements on the coding. This will be when any of these conditions applies:
• units are large (e.g. a whole TV program instead of one sentence)
• you are coding vague concepts, such as “sustainability” or “globalization”
• you are coding nonverbal items, such as pictures, sounds, and gestures
• your findings are likely to be published, then challenged by others.
The more strongly these conditions apply, the more judges you
need. Unless you are being incredibly finicky (or the project has very generous funding!) 3 judges is often enough, and 10 is
about the most you will ever need. The more specific the coding
instructions, the fewer the judges you will need. If you only
have one person coding each question, he or she is then called a
“coder” not a “judge” - though the work is the same.
Any items on which the judges disagree significantly should be
discussed later by all judges and revised. Large differences
usually result from misunderstanding or different interpretations.
Maybe you are wondering how many judges it takes before
you’re doing a survey, not content analysis. 30 judges? 100?
Actually, it doesn’t work like that. Judges should be trained to
be objective: they are trying to describe the content, not give
their opinions. All judges should agree as closely as possible. If
there’s a lot of disagreement among the judges, it usually
means their instructions weren’t clear, and need to be rewritten.
With a survey, respondents are unconstrained in their opinions.
You want to find out their real opinions, so it makes no sense
to “train” respondents. That’s the difference between judging
content and doing a survey. However if you’re planning to do a
content analysis that uses both large units and imprecise
definitions, maybe you should consider doing a survey instead
(or also).
6. Examples
This section has examples of content analysis, using various
types of coding described in section 5. Example 1 demonstrates content questioning, Example 2 covers the use of software in automatic content analysis, and Example 3 shows how a violent episode in a TV program can be “interviewed.”
Example 1: Newspaper Coverage of Asylum Seekers
I’m working on a project that involves media content analysis,
without transcription. The project’s purpose is to evaluate the
success of a public relations campaign designed to improve
public attitudes towards asylum seekers. The evaluation is done
by “questioning” stories in news media: mainly newspapers,
radio, and TV. For newspaper articles, six sets of questions are
asked of each story:
1. Media Details
The name of the newspaper, the date, and the day of the week.
This information can later be linked to data on circulation and
readership, which is available from public sources.
2. Exact Topic of the News Story
Recorded in two forms: a one-line summary - averaging about
10 words, and a code, chosen from a list of about 15 main
types of topic on this issue. Codes are used to count the
number of occurrences of stories on each main type of topic.
3. Apparent Source of the Story
This can include anonymous reporting (apparently by a staff
reporter), a named staff writer, another named source, a
spokesperson, and unknown sources. If the source is known, it
is entered in the database.
4. Favourability of Story Towards Asylum Seekers
To overcome subjectivity, we ask several judges (chosen to cover
a wide range of ages, sexes, occupations, and knowledge of the
overall issue) to rate each story on this 6-point scale:
1 = Very favourable
2 = Slightly favourable
3 = Neutral
4 = Slightly unfavourable
5 = Very unfavourable
6 = Mixed: both favourable and unfavourable
When calculating averages, the “6” codes are considered
equivalent to “3”. The range (the difference between the highest
and lowest judge) is also recorded, so that each story with a large
range can be reviewed.
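As a minimal sketch, the averaging just described might look like this in Python (the ratings are invented):

```python
# Sketch: average favourability across judges, treating code 6 ("mixed")
# as equivalent to 3 ("neutral"), and recording the range of ratings.
def story_favourability(ratings):
    recoded = [3 if r == 6 else r for r in ratings]
    average = sum(recoded) / len(recoded)
    spread = max(recoded) - min(recoded)  # a large spread -> review the story
    return average, spread

ratings = [2, 3, 6, 4]          # invented ratings from four judges
avg, spread = story_favourability(ratings)
print(avg, spread)              # -> 3.0 2
```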
5. How Noticeable the Story was
This is complex, because many factors need to be taken into
account. However, to keep the project manageable, we consider
just three factors. For newspapers, these factors are:
• The space given to the story (column-centimetres and
headline size)
• Its position in the issue and the page (the top left of page 1
is the ideal)
• Whether there’s a photo (a large colour one is best).
For radio and TV, the above factors are modified to suit those
media, with an emphasis on time instead of space.
Each of these three factors is given a number of points ranging
from 0 (hardly noticeable at all) up to 3 (very noticeable indeed).
The three scores are then added together, to produce a maximum of 9. We then add 1 more point if there’s something that
makes the story more noticeable than the original score would
suggest (e.g. a reference to the story elsewhere in the issue, or
when this topic is part of a larger story).
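A minimal sketch of that scoring rule, assuming the three newspaper factors and the bonus point described above:

```python
# Sketch: noticeability score for a newspaper story.
# Each factor is rated 0 (hardly noticeable) to 3 (very noticeable);
# one bonus point may be added, giving a maximum score of 10.
def noticeability(space: int, position: int, photo: int,
                  bonus: bool = False) -> int:
    for factor in (space, position, photo):
        assert 0 <= factor <= 3, "each factor is scored 0 to 3"
    return space + position + photo + (1 if bonus else 0)

# e.g. a mid-sized story, top of page 3, small photo, referenced on page 1:
print(noticeability(space=2, position=2, photo=1, bonus=True))  # -> 6
```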
6. Anything Unusual About this Story
The coders write comments when they notice something
unusual about the story, especially when an extra point is added
in the previous item. These comments can be referred to later
when trying to make sense of the results of the content
analysis.
All this information is recorded first on a one-page printed
form, then entered into a spreadsheet, so that weekly tables and
graphs can be produced, showing trends in coverage and
differences between media outlets, especially the balance between
the amount of coverage and its favourability.
Notice that this example (newspaper coverage of an issue) is a much simpler task than the TV violence example (Example 3, below). If it appears more complex, it’s because I’ve covered it in detail, to show exactly how quantitative content analysis can be done. It’s simpler because we know exactly what we are looking for: to relate changes in media coverage to changes in public opinion. For TV violence, on the other hand, it’s more difficult to decide exactly what to look for, and even what “violence” is. (Angry
words? Slamming a door? Casual mention of a death? And so
on: many decisions to be argued about). If you’re a novice at
content analysis, don’t begin with a topic as complex as
violence.
Example 2: Counting Words in Comments
This example is about automatic content analysis. Some 390 people living in a town were interviewed, and asked their views of the town’s future. The open-ended answers were typed into a
computer file, and software (designed for literary content
analysis, but useful in this context too) was used to identify the
main themes. This was done by comparing the frequency of
keywords in the comments with those words’ frequency in
normal English. To avoid being overwhelmed by common stopwords such as “the” and “and”, the program ignored these words.
By looking at these KeyWords In Context (KWIC) I found a
small number of comments that summarized most respondents’ opinions on these issues. Though this method is much
less subtle than normal coding it’s very quick - which was
essential on this occasion.
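Here’s a minimal sketch of the same approach in Python, with invented comments and a toy stopword list, counting keywords and then displaying one in context:

```python
# Sketch: crude keyword counting and keyword-in-context (KWIC) display.
# The comments and the stopword list are invented examples.
from collections import Counter
import re

comments = [
    "More parks and fewer cars in the town centre",
    "The town needs more jobs for young people",
    "Better public transport and more parks",
]
STOPWORDS = {"the", "and", "in", "for", "more"}

words = [w for text in comments
         for w in re.findall(r"[a-z']+", text.lower())
         if w not in STOPWORDS]
print(Counter(words).most_common(3))   # the main themes, crudely

def kwic(keyword, texts):
    """Show each comment containing the keyword, in its context."""
    for text in texts:
        if keyword in text.lower():
            print(text)

kwic("parks", comments)
```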
7. Counting and Weighting
When all the preparation for content analysis has been done, the
counting is usually the quickest part - especially if all the data is
on a computer file, and software is used for the counting.
8. Coming to Conclusions
An important part of any content analysis is to study the
content that is not there: what was not said. This sounds
impossible, doesn’t it? How can you study content that’s not
there? Actually, it’s not hard, because there’s always an implicit
comparison. The content you found in the analysis can be
compared with the content that you (or the audience) expected - or it can be compared with another set of content.
It’s when you compare two corpora (plural of corpus) that
content analysis becomes most useful. This can be done either
by doing two content analyses at once (using different corpora
but the same principles) or comparing your own content
analysis with one that somebody else has done. If you use the
same coding frame for both, it makes the comparison much
more straightforward.
These comparisons can be:
• chronological (e.g. this year’s content compared with last)
• geographical (your content analysis compared with a similar
one in another area)
• media-based (e.g. comparing TV and newspaper news
coverage)
• program content vs. audience preferences
...and so on. Making a comparison between two matching sets
of data will often produce very interesting results.
There’s no need to limit comparisons to two corpora: any
number of content analyses can be compared, as long as they
used the same principles. With two or three comparisons, results are usually very clear, and with 10 or more, a few corpora usually stand out as different - but with about 4 to 9, the comparisons can become rather messy.
These comparisons are usually made using cross-tabulation
(cross-tabs) and statistical significance testing. A common
problem is that unless the sample sizes were huge, there are
often too few entries in each cell to make statistical sense of the
data. Though it’s always possible to group similar categories
together - and easy, if you were using a hierarchical coding frame
- the sharpness of comparison can be lost, and the results,
though statistically significant, may not have a whole lot of
practical meaning.
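As a minimal sketch, a cross-tabulation of topic codes from two corpora might be built like this (the corpora and codes are invented; a chi-square test such as scipy.stats.chi2_contingency could then be applied to the resulting table):

```python
# Sketch: cross-tabulating topic codes from two corpora.
# The coded items below are invented for illustration.
from collections import Counter

corpus_a = ["crime", "health", "crime", "sport", "crime"]   # e.g. TV news
corpus_b = ["health", "health", "sport", "crime", "sport"]  # e.g. newspaper

table = {"TV": Counter(corpus_a), "Newspaper": Counter(corpus_b)}
topics = sorted(set(corpus_a) | set(corpus_b))

print("topic     TV  Newspaper")
for topic in topics:
    print(f"{topic:<9} {table['TV'][topic]:>2} {table['Newspaper'][topic]:>9}")
# For significance testing, the same counts could be passed to, e.g.,
# scipy.stats.chi2_contingency - but beware of cells with very few entries.
```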
Reporting on a Content Analysis
Ten people could analyse the same set of content, and arrive at
completely different conclusions. For example, the same set of
TV programs analysed to find violent content could also be
analysed to study the way in which conversations are presented
on TV. The focus of the study controls what you notice about
the content. Therefore any content analysis report should begin
with a clear explanation of the focus – what was included, what
was excluded, and why.
No matter how many variables you use in a content analysis,
you can’t guarantee that you haven’t omitted an important
aspect of the data. Statistical summaries are often highly
unsatisfying, specially if you’re not testing a simple hypothesis.
If readers don’t understand how you’ve summarized the
content, they will be unlikely to accept your conclusions.
Therefore it’s important to select and present some key units of
content in the report. This can be done when you describe the
coding frame. For each code, find and reproduce a highly typical
(and real) example. This is easier when units are short (e.g. radio
news items). When units are long (such as whole TV programs)
you will need to summarize them, and omit aspects that are
important to some people. If you used judges (who will have
read through the content already) ask them to select typical
examples. If the judges achieve a consensus on this, it will add
credibility to your findings.
Another approach is to cite critical cases: examples that only just
fitted one code rather than another - together with arguments
on why you chose the code you did. This helps readers understand your analysis, as well as making the data more vivid.
Explain your Coding Principles
When you read a content analysis report, the process can seem
forbiddingly objective - something like “37.3% of the references
to Company A were highly favourable, and only 6.4% highly
unfavourable.” However, this seeming objectivity can actually be
very subjective. For example, who decides whether a reference is
“highly favourable” or just “favourable”? Different people may
have different views on this - all equally valid. More difficult still,
can you count a reference to Company A if it’s not mentioned
by name, but by some degree of association? For example, “All
petrochemical factories on the Port River are serious polluters.”
Does that count as a reference to Company A if its factory is on
that river? And what if many of the audience may not know
that fact?
The point is that precise-looking percentages are created from
many small assumptions. A high-quality report will list all the
assumptions made, give the reasons why those assumptions
were made, and also discuss how varying some assumptions
would affect the results.
Conclusion
Content analysis can produce quite trivial results, particularly
when the units are small. The findings of content analysis
become much more meaningful when units are large (e.g. whole
TV or radio programs) and when those findings can be
compared with audience research findings. Unfortunately, the
larger the units, the more work the content analysis requires.
When used by itself, content analysis can seem a shallow
technique. But it becomes much more useful when it’s done
together with audience research. You will be in a position to
make statements along the lines of “the audience want this, but
they’re getting that.” When backed by strong data, such
statements are very difficult to disagree with.
Examples of Content Analysis
Here’s a detailed example of audience content analysis, from a
recent Audience Dialogue project. This involved reanalysing
open-ended questionnaire responses and “interviewing” them
to find out if they supported our theory. It was a simple
example of content analysis (so it should be easy for novices to
follow) but it also had some innovative aspects that experienced
content analysts might find interesting.
Example 3: TV Violence
An “interview” with a violent episode in a TV program might
“ask” it questions such as:
1. How long did you last, in minutes and seconds?
2. What program were you shown on?
3. On what channel, what date, and what time?
4. Was the program local or imported? Series or one-off?
5. What was the nature of the violent action?
6. How graphic or realistic was the violence?
7. What were the effects of the violence on the victim/s?
8. Who did the violent act: heroes or villains? Men or women?
Young or old? People of high or low social status?
9. Who was the victim/s: heroes or villains? Men or women?
(etc.)
10. To what extent did the violent action seem provoked or
justified?
...and so on. All the answers to the questions are available from
watching the program. Notice that some of the criteria are
subjective (e.g. the last one). Instead of relying on a single
person’s opinion on such criteria, it’s usual to have a number of
“judges” and record the average rating, often on a scale out of
10.
LESSON 23:
“QUALITATIVE” AND “QUANTITATIVE”
Topics Covered
Thoughts, Difference, Accuracy, Description, Watching.
Objectives
Upon completion of this Lesson, you should be able to:
• Discuss some thoughts about epistemology
• Explain why we think there’s a difference between qualitative and quantitative methods
• Assess the accuracy of accounts of the actor’s point of view
• Understand full description and thick description: watching the margins
• Distinguish “qualitative” from “quantitative” research
The Epistemology of Qualitative
Research
It is rhetorically unavoidable, discussing epistemological
questions in social science, to compare “qualitative” and
“ethnographic” methods with those which are “quantitative”
and “survey”: to compare, imaginatively, a field study conducted
in a community or organization with a survey of that same
community or organization undertaken with questionnaires,
self-administered or put to people by interviewers who see
them once, armed with a printed form to be filled out. The very
theme of this conference assumed such a division.
How so? Both kinds of research try to see how society works,
to describe social reality, to answer specific questions about
specific instances of social reality. Some social scientists are
interested in very general descriptions, in the form of laws
about whole classes of phenomena. Others are more interested
in understanding specific cases, how those general statements
worked out in this case. But there’s a lot of overlap.
The two styles of work do place differing emphasis on the
understanding of specific historical or ethnographic cases as
opposed to general laws of social interaction. But the two styles
also imply one another. Every analysis of a case rests, explicitly
or implicitly, on some general laws, and every general law
supposes that the investigation of particular cases would show
that law at work. Despite the differing emphases, it all ends up
with the same sort of understanding, doesn’t it?
That kind of ecumenicism clearly won’t do, because the issue
does not go away. To point to a familiar example, although
educational researchers have done perfectly good research in the
qualitative style for at least sixty years, they still hold periodic
conferences and discussions, like this one, to discuss whether or
not it’s legitimate and, if it is, why it is. Surely there must be
some real epistemological difference between the methods that
accounts for this continuing inability to settle the question.
Some Thoughts About Epistemology
Let’s first step back, and ask about epistemology as a discipline.
How does it see its job? What kinds of questions does it raise?
Like many other philosophical disciplines, epistemology has
characteristically concerned itself with “oughts” rather than “is’s,” and settled its questions by reasoning from first principles rather than by empirical investigation. Empirical
disciplines, in contrast, have concerned themselves with how
things work rather than what they ought to be, and settled their
questions empirically.
Some topics of philosophical discussion have turned into areas
of empirical inquiry. Scholars once studied biology and physics
by reading Aristotle. Politics, another area philosophers once
controlled, was likewise an inquiry in which scholars settled
questions by reasoning rather than by investigation. We can see
some areas of philosophy, among them epistemology, going
through this transformation now, giving up preaching about
how things should be done and settling for seeing how they are
in fact done.
Aesthetics, for instance, has traditionally been the study of how
to tell art from non-art and, especially, how to tell great art from
ordinary art. Its thrust is negative, concerned primarily with
catching undeserving candidates for the honorific title of art and
keeping such pretenders out. The sociology of art, the empirical
descendant of aesthetics, gives up trying to decide what should
and shouldn’t be allowed to be called art, and instead describes
what gets done under that name. Part of its enterprise is exactly
to see how that honorific title - “art” - is fought over, what
actions it justifies, and what users of it can get away with.
Epistemology has been a similarly negative discipline, mostly
devoted to saying what you shouldn’t do if you want your
activity to merit the title of science, and to keeping unworthy
pretenders from successfully appropriating it. The sociology of
science, the empirical descendant of epistemology, gives up
trying to decide what should and shouldn’t count as science,
and tells what people who claim to be doing science do, how
the term is fought over, and what people who win the right to
use it can get away with.
So: this paper will not be another sermon on how we ought to
do science, and what we shouldn’t be doing, and what evils will
befall us if we do the forbidden things. Rather, it will talk about
how ethnographers have produced credible, believable results,
especially those results which have continued to command
respect and belief.
Such an enterprise is, to be philosophical, quite Aristotelian, in
line with the program of the Poetics, which undertook not to
legislate how a tragedy ought to be constructed but rather to see what was true of tragedies which successfully evoked pity and terror, producing catharsis. Epistemologists have often
pretended to such Aristotelian analysis, but more typically
deliver sermons.
Why Do We Think There’s a Difference?
Two circumstances seem likely to produce the alleged differences between qualitative and quantitative methods that epistemologists of social science make so much of. One is that the two sorts of methods
typically raise somewhat different questions at the level of data,
on the way to generalizations about social life. Survey researchers use a variant of the experimental paradigm, looking for
numerical differences between two groups of people differing
in interesting ways along some dimension of activity or
background. They want to find that adolescents whose parents
have jobs of a higher socioeconomic status are less likely to
engage in delinquency, or more likely, or whatever—a difference
from which they will then infer other differences in experience or
possibilities that will “explain” the delinquency. The argument
consists of an “explanation” of an act based on a logic of
difference between groups with different traits.
I don’t mean to oversimplify what goes on in such work. The
working out of the logic can be, and almost always is, much
more complicated than this. Researchers may be concerned with
interaction effects, and with the way some variables condition
the relations between other variables, in all this striving for a
complex picture of the circumstances attending someone’s
participation in delinquency.
Fieldworkers usually want something quite different: a description of the organization of delinquent activity, a description
which makes sense of as much as possible of what they have
seen as they observed delinquent youth. Who are the people
involved in the act in question? What were their relations
before, during, and after the event? What are their relations to
the people they victimize? To the police? To the juvenile court?
Fieldworkers are likewise interested in the histories of events:
how did this start? Then what happened? And then? And how
did all that eventually end up in a delinquent act or a delinquent
career? And how did this sequence of events depend on the
organization of all this other activity?
The argument rests on the interdependence of a lot of more-or-less proved statements. The point is not to prove, beyond
doubt, the existence of particular relationships so much as to
describe a system of relationships, to show how things hang
together in a web of mutual influence or support or interdependence or what-have-you, to describe the connections
between the specifics the ethnographer knows by virtue of “having been there.” Being there produces a strong belief that
the varied events you have seen are all connected, which is not
unreasonable since what the fieldworker sees is not variables or
factors that need to be “related” but people doing things
together in ways that are manifestly connected. After all, it’s the
same people and it’s only our analysis that produces the abstract
and discrete variables which then have to be put back together.
So fieldwork makes you aware of the constructed character of
“variables.” (Which is not to say that we should never talk
variable talk.)
A second difference which might account for the persistent
feeling that the two methods differ epistemologically is that the
situations of data gathering present fieldworkers, whether they
seek it or not, with a lot of information, whether they want it
or not. If you do a survey, you know in advance all the
information you can acquire. There may be some surprises in
the connections between the items you measure, but there will
not be any surprise data, things you didn’t ask about but were
told anyway. A partial exception to this might be the use of
open-ended questions, but even such questions are usually not
asked in such a way as to encourage floods of unanticipated
data suggesting new variables. In fact, the actual workings of
survey organizations discourage interviewers from recording
data not asked for on the forms.
In contrast, fieldworkers cannot insulate themselves from data.
As long as they are “in the field” they will see and hear things
which ought to be entered into their field notes. If they are
conscientious, or experienced enough to know that they had
better, they put it all in, even what they think may be useless,
and keep on doing that until they know for sure that they will
never use data on certain subjects. They thus allow themselves
to become aware of things they had not anticipated which may
have a bearing on their subject. They expect to continually add
variables and ideas to their models. In some ways, that is the
essence of the method.
Many Ethnographies
The variety of things called ethnographic aren’t all alike, and in
fact may be at odds with each other over epistemological details.
In what follows, I will concentrate on the older traditions (e.g.,
participant observation, broadly construed, and unstructured
interviewing) rather than the newer, trendier versions, even
though the newer versions are more insistent on the epistemological differences. What I have to say may well be read by some
as not the full defense of what they do that they themselves would make. So be it. I’ll leave it to less middle-of-the-road types to say more. (I will, however, talk about “ethnographers” or “fieldworkers” somewhat indiscriminately, lumping together people who might prefer to be kept separate.)
A lot of energy is wasted hashing over philosophical details,
which often have little or nothing to do with what researchers
actually do, so I’ll concentrate less on theoretical statements and
more on the way researchers work these positions out in
practice. What researchers do usually reflects some accommodation to the realities of social life, which affect them as much as
any other actor social scientists study, by constraining what they
can do. Their activity thus cannot be accounted for or explained
fully by referring to philosophical positions. In short, I’m
describing practical epistemology, how what we do affects the
credibility of the propositions we advance. In general, I think
(not surprising anyone by so doing) that the arguments
advanced by qualitative researchers have a good deal of validity,
but not in the dogmatic and general way they are often proposed. So I may pause here and there for a few snotty remarks
on the excesses ethnographers sometimes fall into.
A few basic questions seem to lie at the heart of the debates
about these methods: Must we take account of the viewpoint
of the social actor and, if we must, how do we do it? And: how
do we deal with the embeddedness of all social action in the
world of everyday life? And: how thick can we and should we
make our descriptions?
The Actor’s Point of View: Accuracy
One point most ethnographers tout as a major epistemological advantage of what they do is that it lets them grasp the point of view of the actor. This satisfies what they regard as a crucial criterion of adequate social science. “Taking the point of view of the other” is a wonderful example of the variety of
meanings methodological slogans acquire. For some, it has a
kind of religious or ethical significance: if we fail to do that we
show disrespect for the people we study. Another tendency goes
further, finding fault with social science which “speaks for”
others, by giving summaries and interpretations of their point
of view. In this view, it is not enough to honor, respect, and
allow for the actors’ point of view. One must also allow them
to express it themselves.
For others, me among them, this is a technical point, best analyzed like this: all social scientists, implicitly or explicitly, attribute a point of view and interpretations to the people whose actions we analyze. That is, we always describe how they interpret the events they participate in, so the question is not whether
we should, but how accurately we do it. We can find out, not
with perfect accuracy, but better than zero, what people think
they are doing, what meanings they give to the objects and
events and people in their lives and experience. We do that by
talking to them, in formal or informal interviews, in quick
exchanges while we participate in and observe their ordinary
activities, and by watching and listening as they go about their
business; we can even do it by giving them questionnaires
which let them say what their meanings are or choose between
meanings we give them as possibilities. To anticipate a later
point, the nearer we get to the conditions in which they actually
do attribute meanings to objects and events the more accurate
our descriptions of those meanings are likely to be.
It was argued that if we don’t find out from people what
meanings they are actually giving to things. In that case, we will,
of necessity, invent them, reasoning that the people we are
writing about must have meant this or that, or they would not
have done the things they did. But it is inevitably epistemologically dangerous to guess at what could be observed directly. The
danger is that we will guess wrong, that what looks reasonable
to us will not be what looked reasonable to them. This
happens all the time, largely because we are not those people
and do not live in their circumstances. We are thus likely to take
the easy way and attribute to them what we think we would feel
in what we understand to be their circumstances, as when
students of teen-age behavior look at comparative rates of
pregnancy, and the correlates thereof, and decide what the
people involved “must have been” thinking in order to behave
that way.
The field of drug use, which overlaps the study of adolescence,
is rife with such errors of attribution. The most common
meaning attributed to drug use is that it is an “escape” from
some sort of reality the drug user is said to find oppressive or
unbearable. Drug intoxication is conceived as an experience in
which all painful and unwanted aspects of reality recede into the
background so that they need not be dealt with. The drug user
replaces reality with gaudy dreams of splendor and ease,
unproblematic pleasures, perverse erotic thrills and fantasies.
Reality, of course, is understood to be lurking in the background, ready to kick the user in the ass the second he or she
comes down.
Such descriptions of drug use are, as could be and has been
found out by generations of researchers who bothered to ask, pure fantasy on the part of the researchers who publish them.
The fantasies do not correspond to the experiences of users or
of those researchers who have made the experiments themselves. They are concocted out of a kind of willful ignorance.
Misinterpretations of people’s experience and meanings are
commonplace in studies of delinquency and crime, of sexual
behavior, and in general in studies of behavior foreign to the
experience and life style of conventional academic researchers.
Much of what anthropological and ethnographic studies have
brought to the understanding of the problems of adolescence
and growing up is the correction of such simple errors of fact,
replacing speculation with observation.
But “don’t make up what you could find out” hardly requires
being dignified as an epistemological or philosophical position.
It is really not much different from a more conventional, even
positivist, understanding of method, except in being even
more rigorous, requiring the verification of speculations that
researchers will not refrain from making. So the first point is
that ethnography’s epistemology, in its insistence on investigating the viewpoint of those studied, is indeed like that of other
social scientists, just more rigorous and complete. (I find it
difficult, and don’t try very hard, to avoid the irony of insisting
that qualitative research is typically more precise and rigorous
than survey research, ordinarily thought to have the edge with
respect to those criteria.)
One reason many researchers who would agree with this in
principle nevertheless avoid investigating actors’ viewpoints is
that the people we study often do not give stable or consistent
meanings to things, people, and events. They change their
minds frequently. Worse yet, they are often not sure what things
do mean; they make vague and woolly interpretations of events
and people. It follows from the previous argument that we
ought to respect that confusion and inability to be decisive by
not giving things a more stable meaning than the people
involved do. But doing so makes the researcher’s work more
difficult, since it is hard to describe, let alone measure, such a
moving target.
Epistemologically, then, qualitative methods insist that we
should not invent the viewpoint of the actor, and should only
attribute to actors ideas about the world they actually hold, if
we want to understand their actions, reasons, and motives.
The Everyday World: Making Room for
the Unanticipated
A second point, similar to the emphasis on learning and
understanding the meanings people give to their world and
experiences instead of making them up, is an emphasis on the
everyday world, everyday life, the quotidien.
The general idea is that we act in the world on the basis of
assumptions we never inspect but just act on, secure in the
belief that when we do others will react as we expect them to. A
version of this is the assumption that things look to me as they
would look to you if you were standing where I am standing.
In this view, “everyday understandings” refers not so much to
the understandings involved, say, in the analysis of a kinship
system - that this is the way one must behave to one’s mother’s
brother’s daughter, for instance -but to the deep epistemological
beliefs that undergird all such shared ideas, the meta-analyses
and ontologies we are not ordinarily aware of that make social
life possible.
Much theoretical effort has been expended on this concept. I
favor a simpler, less controversial, more workaday interpretation, either as an alternative or simply as a complement to these
deep theoretical meanings. This is the notion of the everyday
world as the world people actually act in every day, the ordinary
world in which the things we are interested in understanding
actually go on. As opposed to what? As opposed to the
simpler, less expensive, less time-consuming world the social
scientist constructs in order to gather data efficiently, in which
survey questionnaires are filled out and official documents
consulted as proxies for observation of the activities and events
those documents refer to.
Most ethnographers think they are getting closer to the real thing than that, by virtue of observing behavior in situ, or at
least letting people tell about what happened to them in their
own words. Clearly, whenever a social scientist is present, the
situation is not just what it would have been without the social
scientist. I suppose this applies even when no one knows that
the social scientist is a social scientist doing a study. Another
member of a cult who believes flying saucers from other planets
are about to land is, after all, one more member the cult would
not have had otherwise and, if the cult is small, that increase in
numbers might affect what the observer is there to study.
But, given that the situation is never exactly what it would have
been otherwise, there are degrees of interference and influence.
Ethnographers pride themselves on seeing and hearing, more
or less, what people would have done and said had the
observers not been there.
One reason for supposing this to be true is that ethnographers
observe people when all the constraints of their ordinary social
situation are operative. Consider this comparatively. We typically
assure people to whom we give a questionnaire or who we
interview that no one will ever know what they have said to us,
or which alternatives on the questionnaire they have chosen. (If
we can’t make that assurance, we usually worry about the
validity of the results.) This insulates the people interviewed
from the consequences they would suffer if others knew their
opinions. The insulation helps us discover people’s private
thoughts, the things they keep from their fellows, which is
often what we want to know.
But we should not jump from the expression of a private
thought to the conclusion that that thought determines the
person’s actions in the situation to which it might be relevant.
When we watch someone as they work in their usual work
setting or go to a political meeting in their neighborhood or
have dinner with their family—when we watch people do
things in the places they usually do them with the people they
usually do them with—we cannot insulate them from the
consequences of their actions. On the contrary, they have to take
the rap for what they do, just as they ordinarily do in everyday
life. An example: when I was observing college undergraduates,
I sometimes went to classes with them. On one occasion, an
instructor announced a surprise quiz for which the student I
was accompanying that day, a goof-off, was totally unprepared.
Sitting nearby, I could easily see him leaning over and copying
answers from someone he hoped knew more than he did. He
was embarrassed by my seeing him, but the embarrassment
didn’t stop him copying, because the consequences of failing
the test (this was at a time when flunking out of school could
lead to being drafted, and maybe being killed in combat) were a
lot worse than my potentially lowered opinion of him. He
apologized and made excuses later, but he did it. What would
he have said about cheating on a questionnaire or in an
interview, out of the actual situation that had forced him to
that expedient?
Our opinions or actions are not always regarded as inconsequential by people we study. Social scientists who study schools
and social agencies regularly find that the personnel of those
organizations think of research as some version of the
institutional evaluations they are constantly subject to, and take
measures to manipulate what will be discovered. Sometimes the
people we find it easiest to interview are on the outs with their
local society or culture, hoping to escape and looking to the
ethnographer for help. But, though these exceptions to the
general point always need to be evaluated carefully, ethnographers typically make this a major epistemological point: when
they talk about what people do they are talking about what they
saw them do under the conditions in which they usually do it,
rather than making inferences from a more remote indicator
such as the answer to a question given in the privacy of a
conversation with a stranger. They are seeing the “real world” of
everyday life, not some version of it created at their urging and
for their benefit, and this version, they think, deserves to be
treated as having greater truth value than the potentially less
accurate versions produced by other methods, whatever the
offsetting advantages of efficiency and decreased expense.
Full Description, Thick Description:
Watching the Margins
Ethnographers pride themselves on providing dense, detailed
descriptions of social life. Their pride often implies that the
fuller the description, the better, with no limit suggested. At an extreme, ethnographers talk of reproducing the “lived experience” of others.
There is something wrong with this on the face of it. The
object of any description is not to reproduce the object
completely - why bother when we have the object already? - but
rather to pick out its relevant aspects, details which can be
abstracted from the totality of details that make it up so that we
can answer some questions we have. Social scientists, for
instance, usually concentrate on what can be described in words
and numbers, and thus leave out all those aspects of reality that
use other senses, what can be seen and heard and smelled.
(How many monographs deal with the smell of what is being
studied, even when that is a necessary and interesting component, and when isn’t it?)
Ethnographers usually hail “advances” in method which allow
the inclusion of greater amounts of detail: photographs, audio
recording, video recording. These advances never move us very
far toward the goal of full description; the full reality is still a
long way away. Even when we set up a video camera, it sits in
one place at a time, and some things cannot be seen from that vantage point; adding more cameras does not alter the argument. Even such a small technical matter as the focal length of
the camera’s lens makes a big difference: a long lens provides
close-up detail, but loses the context a wide-angle lens provides.
So full description is a will-o’-the-wisp. But, that said, a fuller description is preferable to, and epistemologically more satisfying than, a skimpy description. Why? Because, as with the argument
about the actor’s point of view, it lets us talk with more
assurance about things than if we have to make them up - and,
to repeat, few social scientists are sufficiently disciplined to
refrain from inventing interpretations and details they have not,
in one way or another, observed themselves. Take a simple
example. We want to know if parents’ occupations affect the
job choices adolescents make. We can ask them to write down
the parents’ occupations on a line in a questionnaire; we can
copy what the parents have written down somewhere, perhaps
on a school record; or we can go to where the parents work and
verify by our own observation that this one teaches school, that
one drives a bus, the other one writes copy in an advertising
agency.
Is one of these better than another? Having the children write it
down in a form is better because it is cheap and efficient.
Copying it from a record the parents made might be better
because the parents have better knowledge of what they do and
better language with which to express it than the children do.
Seeing for ourselves would still be open to question - maybe
they are just working there this week - but it leaves less room
for slippage. We don’t have to worry about the child’s ignorance
or the parents’ desire to inflate their status. Epistemologically, I
think, the observation which requires less inference and fewer
assumptions is more likely to be accurate, although the accuracy
so produced might not be worth bothering with.
A better goal than “thickness” - one fieldworkers usually aim
for - is “breadth”: trying to find out something about every
topic the research touches on, even tangentially. We want to
know something about the neighborhood the juveniles we
study live in, and the schools they go to, and the police stations
and jails they spend time in, and dozens of other things.
Fieldworkers pick up a lot of incidental information on such
matters in the course of their participation or lengthy interviewing but, like quantitative researchers, they often use “available
data” to get some idea about them. They usually do that,
however, with more than the usual skepticism.
It is time to mention, briefly, the well-known issue of “official
statistics” or, put more generally, the necessity of looking into
such questions as why records are kept, who keeps them, and
how those facts affect what’s in them. (None of this is news to
historians, who would think of this simply as a matter of
seeing what criticisms the sources they use have to be subjected
to.) Organizations don’t keep records so that social scientists can
have data but, rather, for their own purposes. This is obvious
in the case of adolescents, where we know that school attendance records are “managed” in order to maximize state
payments; behavioral records slanted to justify actions taken
toward “difficult” kids; and test scores manipulated to justify
tracking and sorting. Similarly, police records are kept for police
purposes, not for researchers’ hypothesis testing.
Ethnographers therefore typically treat data gathered by officials
and others as data about what those people did: police statistics
as data about how police keep records and what they do with
them, data about school testing as data about what schools and
testers do rather than about student traits, and so on. That
means that ethnographers are typically very irreverent and this
makes trouble.
It makes trouble where other people don’t share the irreverence,
but take the institution seriously on its own terms. Qualitative
researchers are often, though not necessarily, in a kind of
antagonistic relationship to sources of official data, who don’t
like to be treated as objects of study but want to be believed.
There’s not much more to say. Practitioners of qualitative and
quantitative methods may seem to have different philosophies
of science, but they really just work in different situations and
ask different questions. The politics of social science can seduce
us into magnifying the differences. But it needn’t, and
shouldn’t.
Further Thoughts
After the foregoing had been discussed at the conference, some
people felt that there were still unresolved questions that I
ought to have dealt with. The questions were ones that are
often raised and my answers to them are not really “answers,”
but rather responses which discuss the social settings in which
such questions are asked rather more than the questioners may
have anticipated.
One question had to do with how one might combine what are
sometimes called the “two modalities,” the qualitative and
quantitative approaches to social research. There is a little
literature on this question, which generally ends up suggesting a
division of labor, in which qualitative research generates
hypotheses and quantitative research tests them. This question
is invariably raised, and this solution proposed, by quantitative
researchers, who seem to find it an immense problem, and
never by qualitative researchers, who often just go ahead and do it, not seeing any great problem, following in this the lead of Robert E. Park, as I suggested in the paper.
Well, why don’t qualitative researchers think it’s a problem? They don’t think it’s a problem because they focus on
questions to be answered, rather than procedures to be
followed.
And how do researchers actually go about combining these
different kinds of data? This is not an easy matter to summarize briefly, because qualitative researchers have been doing this
for a very long time, and there are many examples of it being
done in many parts of the literature. It was noted in 1970 that
scientists learn their trade not by following abstract procedural
recipes, but rather by examining exemplars of work in their field
commonly regarded as well done. The best way to see how data
of these various kinds can be combined is to examine how they
were combined in exemplary works. This was obviously too
large a task for the conference paper.
A second question dealt with “validity,” noting that my paper
did not speak to that question, but instead talked about
credibility. Do I really think that that’s all there is to it, simply
making a believable case? Isn’t there something else involved,
namely, the degree to which one has measured or observed the
phenomenon one claims to be dealing with, as opposed to
whether two observers would reach the same result, which was
one of the ways some people interpreted my analysis of credibility?
We come here to a difference that is really a matter not of logic
or scientific practice, but of professional organization, community, and culture. The professional community in which
quantitative work is done (and I believe this is more true in
psychology than in sociology) insists on asking questions about
reliability and validity, and makes acceptable answers to those
questions the touchstone of good work. But there are other
professional communities for whose workers those are not the
major questions. Qualitative researchers, especially in sociology
and anthropology, are more likely to be concerned with the
kinds of questions I raised in the body of my paper: whether
data are accurate, in the sense of being based on close observation of what is being talked about or only on remote indicators;
whether data are precise, in the sense of being close to the thing
discussed and thus being ready to take account of matters not
anticipated in the original formulation of the problem; whether
an analysis is full or broad, in the sense of knowing about a
wide range of matters that impinge on the question under
study, rather than just a relatively few variables. The paper
contains a number of relevant examples of these criteria.
Ordinarily, scholarly communities do not wander into each
other’s territory, and so do not have to answer to each other’s
criteria. Operating within the paradigm accepted in their
community, social scientists do what their colleagues find
acceptable, knowing that they will have to answer to their
community for failures to adhere to those standards. When,
however, two (at least two, maybe more) scholarly communities
meet, as they did in this conference, the question arises as to
whose language the discussions will be conducted in, and what
standards will be invoked. It is my observation over the years
that quantitative researchers always want to know what answers
qualitative researchers have to their questions about validity
and reliability and hypothesis testing. They do not discuss how
they might answer the questions qualitative researchers raise
about accuracy and precision and breadth. In other words, they
want to assimilate what others do to their way of doing
business and make those other ways answer their questions.
They want the discussion to go on in their language and the
standards of qualitative work translated into the language they
already use.
That desire - can I say insistence? - presumes a status differential: A can call B to account for not answering A’s questions
properly, but B has no such obligation to A. But this is a
statement about social organization, not about epistemology,
about power in hierarchical systems, not about logic. When, however, scholarly communities operate independently, instead of being arranged in a hierarchy of power and obligation, as is
presently the case with respect to differing breeds of social
science, their members need not use the language of other
groups; they use their own language. The relations between the
groups are lateral, not vertical, to use a spatial metaphor. One
community is not in a position to require that the other use its
language.
That has to some extent happened in the social sciences, as the
growth of social science (note that this argument has a demographic base) made it possible for sub-groups to constitute
worlds of their own, with their own journals, organizations,
presidents, prizes, and all the other paraphernalia of a scientific
discipline.
Does that mean that I’m reducing science to matters of
demographic and political weight? No, it means recognizing
that this is one more version of a standard problem in relations
between culturally differing groups. To make that explicit, consider the analogies to problems of translation between languages and cultures. Superordinate groups in situations of cultural contact (e.g., colonial situations) usually think everything should be translated so that it makes sense in their language, rather than being translated so that the full cultural difference in the concepts in question is retained. They are very often powerful
enough, at least for a while, to require that that be done.
This problem of translation between culturally differing groups is the one alluded to earlier: when there is a substantial paradigm difference, as in the case of a paradigm shift, the languages in which scientific work is conducted cannot be translated into one another. If the groups are in fact
independent, then there is a translation problem and the same
dynamic - the question, you might say, of whose categories will
be respected - comes into play.
So what seem like quite reasonable requests for a little clarification are the playing out of a familiar ritual, which occurs
whenever quantitative workers in education, psychology, and
sociology decide that they will have to pay attention to work of
other kinds and then try to coopt that work by making it
answer to their criteria, criteria like reliability and validity, rather
than to the criteria I proposed, commonly used by qualitative
workers. I would say that I was not dealing with validity, but was, rather, dealing with something else that seems as fundamental to me as validity does to others.
This will all sound at odds with my fundamental belief,
expressed in the paper, that the two styles of work actually share
the same, or a very similar, epistemology. I do believe that’s
true. But I also think that some workers get fixated on specific
procedures (not the same thing as epistemology), act as I have
described with respect to those procedures, and have this same
feeling that other styles of work must be justified by reference
to how well they accomplish what those procedures are
supposed to accomplish.
Finally, some people asked how one could tell good from bad
or better from worse in qualitative work. I’ve already suggested
one answer in the criteria already discussed. Work that is based
on careful, close-up observation of a wide variety of matters
that bear on the question under investigation is better than
work which relies on inference and more remote kinds of
observations. That’s a criterion.
So these are matters that are deeper than they seem to be, in a
variety of ways, and mostly, I think, in organizational ways. I
haven’t, for reasons I hope to have made clear, answered these
questions as the people who asked them hoped. I’ve explained
things in my terms, and I guess they will have to do the
translating.
LESSON 24:
ANATOMY OF AN ON-LINE FOCUS GROUP
Topics Covered
Online, Facilities, selecting, interpretation, responding
Objectives
Upon completion of this Lesson, you should be able to:
• Understand ethics in on-line research
• Understand screeners, recruitment, and virtual facilities
• Prepare invitations
• Select and moderate final respondents
• Interpret and analyze the data
Anatomy of an On-line Focus Group
On-line focus groups, also referred to as cyber groups, e-groups,
or virtual groups, are gaining popularity as the research marketplace discovers the advantages they offer. In addition to saving
time and money spent traveling, they can easily bring together
respondents and observers in far-flung locations.
The on-line venue has been used for qualitative research since
approximately 1994, when a few research companies began
experimenting with discussion groups by borrowing chat room
technology. This has evolved into a dimension of qualitative
research, aided by customized software that creates virtual
facilities with waiting rooms, client backrooms, and focus group
rooms.
Screeners, Recruitment, and Virtual Facilities
Many elements of the on-line qualitative process are familiar to
qualitative researchers conducting in-person groups. Every online group is initiated by contracting with a virtual facility that
usually offers recruitment services as well as virtual rooms.
Virtual facilities typically recruit respondents electronically from
established panels, compiled on-line lists, targeted Web sites, or
client-provided lists. Sometimes, telephone recruiting is used to
make the initial recruitment contact or to obtain e-mail addresses. (Independent recruiters specializing in on-line group
recruitment are just beginning to appear and this will, undoubtedly, be another area of growth potential.)
Recruiting on-line groups requires specially crafted screeners that
are similar in content and depth to those used for in-person
groups. Since the screeners are administered electronically, some
questions are worded differently to disguise qualifying and
disqualifying answers. A professional on-line facility, in combination with a well-written screener, will thank and release all
disqualified respondents without them knowing why. This, as
well as putting a block on their electronic address, discourages
them from re-trying to qualify by logging back in or from
sharing information about the specific screener questions with
friends. Depending upon the target markets, it is not unusual with high-incidence groups to have an excess of qualified respondents to choose from, and the virtual facility and/or the qualitative researcher will select the best.
The time set for an on-line group should accommodate the array of respondents participating. If there are East and West Coast participants, groups can be conducted later in the evening (based on GMT) or participants in similar time zones can be grouped together.
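The thank-and-release behaviour described above is, in effect, a small decision procedure. The sketch below shows that logic in Python; the question names, the block list, and the neutral release message are illustrative assumptions, not any particular facility's software.

```python
# Minimal sketch of screener release-and-block logic (illustrative only).

DISQUALIFYING_ANSWERS = {
    # Question id -> answers that disqualify (never shown to respondents).
    "occupation": {"market research", "advertising"},
}

blocked_addresses: set[str] = set()

def screen(respondent: dict) -> bool:
    """Return True if the respondent qualifies for the group."""
    if respondent["email"] in blocked_addresses:
        return False  # re-tries from released respondents are ignored
    for question, bad_answers in DISQUALIFYING_ANSWERS.items():
        if respondent.get(question, "").strip().lower() in bad_answers:
            blocked_addresses.add(respondent["email"])
            # Thank and release without revealing which answer disqualified.
            print("Thank you for your interest; we have no further questions.")
            return False
    return True
```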
Invitations and Preparation
Respondents who are invited to the group receive invitations
with passwords and passnames, instructions, dates, and times.
The invitation requests that they sign on to the site in advance
of the group, using the computer they will use during the
group, to guarantee that all technology is compatible. If there
are any complications or questions, the respondents can contact
tech support in advance to resolve them. They can also contact
tech support during the group for on-line support, as can the
moderator and client observers.
Discussion Guide Development and Design
The content and structure of the inquiry, as outlined in the
discussion guide, resembles in-person groups. The major
difference is in the actual presentation of questions that are
mostly written in full sentence form, in advance. The main topic
questions must be written clearly and completely otherwise
respondents will have to ask for clarification, which uses up
valuable time and diverts the attention of the group.
On-line groups are often shorter (typically 60 to 90 minutes)
than in-person groups and the ideal number (30 to 45) of
prepared questions depends on the complexity of the subject
and the amount of follow-up probes required. Whenever
desired, follow-up questions and probing can be interjected to
either an individual respondent or the entire group. This
enriches the inquiry and uncovers deeper insights. Unfortunately, research sponsors sometimes insist on an excessive number of prepared questions, which minimizes the amount of
probing time. The result is a missed opportunity to uncover
deeper insights.
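A quick calculation shows why an overloaded guide crowds out probing. This sketch simply divides session time by the number of prepared questions, using the typical figures mentioned above; it is a rule of thumb, not a formula from the methodology.

```python
def minutes_per_question(session_minutes: float, prepared_questions: int) -> float:
    """Average time per prepared question, before any follow-up probing."""
    return session_minutes / prepared_questions

# A 90-minute group with 45 prepared questions leaves only 2 minutes each;
# trimming the guide to 30 questions frees a third of the session for probes.
print(minutes_per_question(90, 45))  # 2.0
print(minutes_per_question(90, 30))  # 3.0
```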
Preparation for Groups
Fifteen to 30 minutes prior to the group, the moderator and
technical assistant sign on to watch as respondents enter the
virtual waiting room using their passnames and passcodes.
Similar to in-person groups, some respondents arrive very early
and others arrive at the last minute. As they arrive, some virtual
facilities can administer a rescreener to re-profile them and to
assure that the attendee is the person who originally qualified.
In addition to a few demographic and product usage questions,
the rescreener can include a verification question that refers to a
piece of unique, personal info, such as the name of their first
teacher or pet, that was subtly asked in the original screener.
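The rescreener is, in essence, an identity check against the original screener record. A minimal sketch of that comparison follows; the respondent id and field names are hypothetical, since facilities implement this in their own software.

```python
# Hypothetical rescreener identity check (ids and field names illustrative).

original_screener = {
    "resp_042": {"first_pet": "rex", "age_group": "35-44"},
}

def verify_attendee(resp_id: str, pet_answer: str, age_answer: str) -> bool:
    """Confirm the attendee is the person who originally qualified."""
    record = original_screener.get(resp_id)
    if record is None:
        return False
    return (record["first_pet"] == pet_answer.strip().lower()
            and record["age_group"] == age_answer)

print(verify_attendee("resp_042", "Rex", "35-44"))  # True
```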
On-line groups demand that a moderator possess strong and
fast keyboard skills or be willing to hire an assistant who does.
There are no unused moments during a group to accommodate
slow typists on the moderator side. Respondents can type
slower, but most are keyboard proficient and save time by
cutting corners on spelling and not worrying about sentence
construction. It helps to tell them right in the beginning that
“typo’s and sentances dont mater.”
While a group is underway, there may be technical problems
with respondents and clients that require telephone calls back
and forth to resolve. Simultaneously, the moderator is reading
and interpreting the response stream, responding to client
notes, composing probes and entering questions while
(potentially) dealing with all kinds of technical issues.
Show Rates
Show rates can vary dramatically based on a number of factors,
including: the origination of the respondent (on-line database,
established panel, Web site intercept, etc.), confirmation
procedures, respondent comfort and familiarity with the on-line
venue in general, and the typical kinds of other personal/
business commitments that can inhibit attendance. For eight
respondents to show, 10 or 15 may have to be recruited.
However, it should be noted that the weather, traffic, and
transportation can have less of a negative impact on show rates
since the respondents are typically participating from a variety of
locations and not encountering the same delays.
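Expressed as arithmetic, the recruitment target is the desired number of shows divided by the expected show rate, rounded up. A sketch, with show rates consistent with the 10-to-15 range suggested above:

```python
import math

def recruits_needed(target_shows: int, show_rate: float) -> int:
    """How many respondents to recruit so that enough actually show up."""
    return math.ceil(target_shows / show_rate)

# For eight respondents to show:
print(recruits_needed(8, 0.8))  # 10 recruits at an 80% show rate
print(recruits_needed(8, 0.6))  # 14 recruits at a 60% show rate
```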
Selecting Final Respondents
Based on the rescreener information and final screener spreadsheet, the moderator and client select the respondents together,
similar again to in-person groups.
Moderating
For a moderator, the excitement and pace of moderating an on-line group are more like a roller-coaster ride than an in-person group. Ideally, the discussion guide is downloaded
directly onto the site so the moderator can, with one click, enter
a question into the dialogue stream. However, another method
more frequently available and workable (although requiring
more concentration and actions by the moderator) is having the
discussion guide document loaded in a separate window
behind the virtual room to use for cutting and pasting each
question.
To begin a group, the moderator introduces the purpose of the group and lays the ground rules. This includes a personal introduction, purpose, timeline, instructions for entering responses, encouragement to be candid and honest, and instructions for signing back on if they accidentally drop off. Respondents are also encouraged to “feel free to agree, disagree, or ask questions of each other that relate to the subjects being discussed” and told that this interaction will help bring the discussion to life.
Also, moderating on-line groups requires someone who relates
to the on-line venue and recognizes that respondents are adept
at developing relationships in this medium. Many respondents
participate in chat rooms and feel comfortable relating on-line.
At the same time, it is the responsibility of the moderator to
help make the respondents who are not as comfortable or
experienced feel valuable.
The strategy of on-line moderating resembles in-person
moderating. That is, the moderator follows the discussion
guide to the extent that it continues obtaining the desired
information. If a subject that was supposed to be covered later
in the group is brought up earlier by the respondents, those
questions can be inserted as the moderator sees fit. In addition,
if topics not covered in the guide are introduced, the moderator
can choose to interject a new line of questioning.
View for the Client Observers
If all is going well, most of the moderating elements mentioned above will be transparent to the research sponsor and
observers. In fact, it may even seem slow for them as they
passively sit in front of their computer watching the interaction.
It is important to point out that the optimal way for the client
to interact with the moderator is through one designated client
liaison. Similar to in-person groups where notes are passed to
the moderator, the designated liaison decides what is important
to pursue and approves questions given to the moderator.
These “notes” may be submitted to the moderator in private
message form or entered in the backroom response stream for
the moderator to see. The method of communication between
the client and moderator depends mostly on the virtual facility
being used and their software capabilities.
Technical Support
All virtual facilities offer some level of technical assistance. This
may be a technician whose role is to help everyone sign on and
to help anyone who gets kicked off and has trouble re-entering.
Other technicians perform additional functions including
hosting the waiting room and interacting with respondents
while they wait.
Another option is for the moderator to hire their own project
assistant who greets the respondents and chats with them in
the waiting room - warming them up - while the moderator
takes care of any last-minute details with the clients and facility.
This assistant then supports the moderator throughout the
group in whatever capacity needed, which could include co-moderating if, by remote chance, the moderator loses her/his
connection. This person also has an overview of the project
objectives, screening, discussion guide, and the moderator’s
style, areas that a virtual facility’s technical support person would
not be privy to.
Transcripts
Soon after the completion of the groups, transcripts are
available for analysis and reporting. These transcripts, available
within a few hours or the next day, may document all interactions from sign-on to sign-off, or they may be slightly edited
(by the facility or moderator) to begin at the first question and
end with the last question, eliminating the hellos and goodbyes. Inappropriate respondent comments can be easily
removed.
Analysis
Analysis and reporting are similar to in-person groups, with the
exception that transcripts are quickly available for every group.
The analysis will be very inclusive and reflect the input of most
respondents since most of them answer every question. In the
absence of visual and verbal cues, analysis of some areas, such
as appeal, will be based on an interpretation of respondent
statements and the ratings they use to indicate levels of appeal.
Reporting
Reports are virtually the same as other qualitative reports
covering areas such as objectives, methodology, conclusions,
and detailed findings. They can be in topline, executive summary, or full report form. Typically, reports can be turned
around more quickly due to the immediate availability of the
transcripts.
A Qualitative Caveat
Results from on-line groups depend on the expertise and
qualifications of the professional who is conducting them. The
most knowledgeable and qualified professionals to conduct online groups are qualitative researchers who have research and
marketing expertise and experience managing group interactions. “Techies” sometimes attempt to do groups because they
are comfortable with the technology and mechanics and some
even have experience with chat groups. However, they often lack
research, analysis, moderating, and marketing expertise and the
results can suffer from these deficiencies.
Criteria of Good Research
The qualities of good research are as follows:
1. Good Research is Systematic
This means that research is structured, with specified steps to be taken in a specified sequence, in accordance with a well-defined set of rules. The systematic character of research does not rule out creative thinking, but it certainly does reject the use of guessing and intuition in arriving at conclusions.
2. Good Research is Logical
This implies that research is guided by the rules of logical reasoning, and that the logical processes of induction and deduction are of great value in carrying out research. Induction is the process of reasoning from a part to the whole, whereas deduction is the process of reasoning from some premise to a conclusion which follows from that very premise.
3. Good Research is Empirical
It implies that research is related basically to one or more aspects
of a real situation and deals with concrete data that provides a
basis for external validity to research results.
4. Good Research is Replicable
This characteristic allows research results to be verified by
replicating the study and thereby building a sound basis for
decisions.
1.3 The Qualities of a Research Investigator
Nowadays research has become a highly specialised matter, which calls for unique qualities of mind. The research investigator is expected to possess certain qualities if he is going to make any contribution of value.
i. Keenness of observation
Since research involves data collection, the research investigator should be a keen observer, with an assured capacity to see the difference between fact and fiction. This calls for mental alertness in the person.
ii. Making Meaningful Discrimination
A good research investigator ought to be able to distinguish relevant from irrelevant information in relation to the purposes of his investigation, and be able to reach a meaningful conclusion.
iii. Ability to Classify Research Data
In order to classify data, it is necessary to see how certain items
fall together in a group, because of their resemblance. Assigning
things to the groups to which they properly belong is like
sorting a number of coloured threads, all mixed together.
1.4 The Research Process
The research process, as explained in Table 1.1, starts with the formulation of the research problem, moves through the choice of research method, research design, sample design, data collection, and the analysis and interpretation of data, and finally ends with a research report. It covers stages such as problem definition, development of the research plan, and so on.
LESSON 25:
GROUP DISCUSSIONS
Topics Covered
Knowing, principles, choosing, preparing, formation
Objectives
Upon completion of this Lesson, you should be able to:
• Know what groups are
• Understand the principles of consensus groups
• Understand group discussions
• Choose sampling points
• Prepare a screening questionnaire
• Understand consensus formation in discussion
Most of this book describes formal survey methods, a
technique sometimes known as quantitative research. The
process of the survey method is this:
• The population is defined, and a representative sample is
selected.
• A questionnaire is prepared, in which everybody is asked
exactly the same questions, with exactly the same wording.
• The results of the survey come from counting the number
of people who gave each answer to each question.
Qualitative research is quite different. Though it has little in
common with formal surveys, it can be used to reach similar
conclusions. The process of qualitative research is:
• The population is defined, in the same way as a survey.
• Respondents are selected, using a sampling method.
• Respondents are often interviewed in groups.
• Instead of a questionnaire, a list of topics is used.
• The results are not expressed in numerical form, and formal
counts are seldom made.
But both types of research share the same context, which is typically this:
• The managers of an organization have a problem. To solve
it, they feel they need data about their audience, users, or
customers.
• A research study is carried out.
• The results of this study are used to solve the organization’s
problem, by producing audience data.
The organization’s managers may not care exactly how the
research is done; they simply want an answer to their questions,
or a solution to their problems. From this point of view, it may
not matter what form the research takes.
Qualitative research produces a wealth of information, not in
the form of numbers, but in the form of words. People whose
inclinations are verbal rather than mathematical (like most
media workers I know) often have trouble interpreting the
results of surveys, but they find the results of qualitative
research easier to understand and use. However, qualitative
research has been regarded as too difficult for untrained people to do successfully. The prerequisite has normally been an advanced degree in psychology.
Focus Groups
The most common form of qualitative research is the focus group. These are widely used for assessing the viability of proposed new services or products. In each group, about 8 people meet to discuss a particular issue. The group is led by a highly trained moderator, who begins the discussion at a very general level, then gradually focuses in on the specific topic. Respondents are not told this topic in advance, only the broad area of interest.
For example, if a TV station wants to assess a new type of
current affairs program, the people chosen for the group could
be those interested in watching information programs on
television. The discussion might begin with the types of
information program participants like and dislike, and the
reasons for those feelings. The discussion might then move
onto current affairs programs in general, then some specific
current affairs programs, then onto current affairs programs on
that channel. At this point the participants might be shown a
short pilot of the proposed new program, and asked to discuss
it. Such a group typically lasts from 1 to 2 hours.
So the focusing begins with the general, and moves towards the
particular. Focusing is also called funnelling - but a focus group
is never called a funnel group.
Everything the participants say is recorded, on either audio or
videotape. For the moderator, the hard work now begins. The
actual moderating can be learned quickly - it’s mainly a matter of
ensuring that everybody has a chance to speak, that some don’t
dominate others, and so on. However the analysis is much
more difficult and time-consuming. The moderator often
watches the video or listens to the tape several times, noting
participants’ expressions and gestures as much as the content of
their speech. Advanced training in psychology is almost
necessary, as is long experience at running focus groups. With
untrained moderators, interpretation of focus groups is highly
subjective: two different moderators may reach quite different
conclusions about the participants’ reaction to whatever was
being studied.
When the moderator has studied the tapes or transcripts, he or
she writes a report. There is no simple method of converting
what participants say and do into conclusions.
Focus groups are usually done in sets of 3 to 6 groups, each
group with a different type of person. For example, the
assessment of a pilot current affairs TV program might require
4 groups: perhaps men under 35, men over 35, women under
35, and women over 35.
To analyse focus group proceedings thoroughly usually takes a
full day’s work for each group, then another day or two to write
the report. For this reason, commissioning focus groups from
market research companies is expensive. Though few people
take part in the groups, far more time is spent on each person
than on interviewing respondents in a survey - and a much
higher level of skill is needed than for normal interviewing.
Because of the high cost of professionally organized focus
groups, some organizations are now running their own focus
groups, and even gaining some useful insights. However, their
lack of experience often leads them to misleading conclusions.
It may seem to them that their customers are highly satisfied, or
would watch a proposed new program in large numbers. Later,
they often discover that their conclusions were wrong: that the
innovation that the participants seemed to welcome is far from
popular among their whole audience.
Consensus Groups
The consensus group is a sort of halfway house between a focus group and a public meeting. It also includes elements of other research techniques and negotiation techniques.
Principles of Consensus Groups
In every survey, the questionnaire ensures that everybody is
asked the same questions. The only variation can be in the
number of people giving each answer to each question. So
surveys begin with words (questionnaires), but the results are
always expressed in numbers.
Consensus groups work in the opposite way: the numbers
remain constant (more or less), but the wording of each
statement is adjusted until the great majority of participants
agree.
It’s important to realize that a consensus group does not try to
create a consensus among participants: that’s peace-making. This
is research: it simply tries to find and define any consensus that
already exists. Unlike a focus group (which narrows in on a
single topic) a consensus group normally covers a broad range
of topics.
The technique has two main stages: recruiting participants, and
holding discussions. Like focus groups, consensus groups are
never done singly, because with a small number of participants,
any one group may be atypical. Consensus groups are normally
done in sets of 3. There can be more than 3, but every extra
group adds less and less information.
Before participants are recruited, the planning process is much
the same as a survey: the organizers must decide what is to be
covered, among what population. When the subject and scope
of the study have been decided, nine steps follow:
A. Preparation
1. Within the area to be studied, three (or more) sampling
points are chosen, contrasting as much as possible.
2. At each of the sampling points, a venue is arranged. All you
need is a suitable room.
3. A short screening questionnaire is prepared.
4. At each sampling point, people are interviewed using the
screening questionnaire. The purpose of these interviews is
to find people who are both eligible and willing to attend a
consensus group.
B. Meeting
5. The group meets, either immediately, or up to several weeks
later. Each meeting lasts for two to three hours.
Approximately 12 participants are present, as well as 2
organizers: a moderator and a secretary.
6. The first stage of the meeting is introductory. The
participants briefly introduce themselves, giving some
relevant background information.
7. In the second stage of the meeting, the topics are discussed
by all participants. The moderator manages the discussion,
ensuring that everybody speaks freely, while the secretary
takes notes.
8. In the final stage of the meeting, consensus is sought. The
secretary puts up statements which most participants are
expected to agree with. Statements are modified, depending
on what participants say. When each statement is ready,
participants vote on it. On average, about 20 statements are
agreed on. This list of statements is the main outcome of
the meeting.
9. When three meetings have been held, the three lists of statements are compared. Any statements shared by at least two groups are the outcome of the study (see the sketch below).
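Step 9 amounts to a counting rule: keep every statement agreed by at least two of the three groups. A minimal sketch, assuming that equivalent statements from different groups have already been matched and worded identically (in practice that matching is a judgment call):

```python
from collections import Counter

def study_outcome(group_statements: list[set[str]]) -> set[str]:
    """Statements agreed by at least two groups are the study's outcome."""
    counts = Counter(s for group in group_statements for s in group)
    return {statement for statement, n in counts.items() if n >= 2}

groups = [
    {"news should start earlier", "more local music"},
    {"news should start earlier", "fewer repeats"},
    {"more local music", "fewer repeats"},
]
print(study_outcome(groups))  # each statement here is shared by two groups
```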
The rest of this chapter considers the nine steps in detail.
1. Choose Sampling Points
A sampling point is the centre of a geographical area, where a
small survey can be carried out. It can be either a group of
homes, or a public place where passers-by are recruited for the
study. For consensus group sampling to work effectively, at least
three sampling points are needed. They are chosen to be as
different as possible. For example, if your study is being done
for a radio station, the population will probably be all people
who live in that radio station’s coverage area. Within this area,
you should identify three contrasting localities, where the people
are as different as possible from each other. For example, if the
station broadcasts to a city and outlying rural areas, you might
choose:
• An inner-city area
• An area near the outer edge of the city
• A rural area.
Another way in which localities often vary is in wealth. Therefore it would be useful to choose one wealthy area, one poor
area, and one middle-income area. If there are ethnic, racial, or
tribal divisions among your audience, it may be essential to have
a separate consensus group for each ethnic group.
In some countries, women won’t give their true opinions in the
presence of men, so you need to have separate consensus
groups for men and women.
Whatever the basis for selecting the sampled areas, the main
goal is to make them as different as possible. This is known as
maximum-diversity sampling. It is explained in detail in the
chapter on sampling.
There is no reason why you should not have more than three
sampling points. We have found that each additional sampling
point adds less and less to the value of the study. However, in
some situations, more than three sampling points are needed
to adequately cover the variations in a population. A technique
we have often used compares a station’s potential listeners with
its current listeners. This requires choosing two groups at each
sampling point, or 6 groups in total. If both the sampling
points and the type of person invited to a group are different,
you cannot make clear conclusions about the causes of any
differences in the results.
If you hold separate groups for men and women, you will
probably need four groups: two of men and two of women. If
you want to be able to compare the opinions of men and
women, you need to use the same sampling points for each sex.
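The design logic here is factorial: hold one factor (the sampling points) constant across groups so that differences in results can be attributed to the other factor (the type of person). A sketch of the resulting group plan, with illustrative labels:

```python
from itertools import product

sampling_points = ["inner city", "outer city", "rural"]      # illustrative
segments = ["current listeners", "potential listeners"]

# One group per (sampling point, segment) pair: 3 x 2 = 6 groups.
# Because every segment is recruited at every point, a difference between
# segments cannot be confused with a difference between localities.
for point, segment in product(sampling_points, segments):
    print(f"{segment} recruited at {point}")
```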
2. Organize a Venue
A venue for a consensus group is a space that will hold about
15 people and is free from interruptions.
In some countries, hotels and restaurants often have rooms
available for hire for meetings. We have also used clubrooms,
and borrowed office areas outside working hours. Another
possibility is to use a private house with a good-sized room,
paying the owner a small fee for the use of their space.
It is often better not to use the premises of the organization
sponsoring the research, as people may be reluctant to criticize
an organization when they are on its premises: for example, if
your research is for a radio station, avoid using its office as a
venue. But this depends on the type of people attending, and
on the organization.
Here are some factors to take into account when choosing a
venue:
• We usually provide something to eat and drink for the
participants. We find it helps them to relax. One advantage
of hotels and restaurants is that catering is included.
• A venue should be easy to find, especially for people who
have never been there before. Large public buildings are
usually well known to people living in the area.
• A venue should not be a place that some people would not
want to visit. In some cultures, for example, women do not
like to go to hotels.
• A venue should be quiet, particularly if the meeting is to be
recorded on audio tape. Noisy air-conditioning can be an
unexpected problem, making voices much more difficult to
understand, even though participants may be hardly aware
of the background noise. Noise is often a problem in hotels
and restaurants.
• The venue must be close to the sampling point. We have
found that some people are reluctant to travel for more than
about 15 minutes to a group discussion. Sometimes you
may have to change a sampling point, because no suitable
venue can be found there.
3. Prepare a Screening Questionnaire
The purpose of the screening survey is to find participants for
the consensus groups: people who are both eligible and willing
to participate. If, for example, you are assessing the programs
on a radio station, there is little point in inviting people who
don’t listen to that station. The key question on the screening
questionnaire could be
“Do you listen to FM99 at least once a week?”
(This is better wording than asking simply “Do you listen to
FM99?” If no time limit is included, people who listen only
very rarely to the station would answer Yes, and would not be
able to discuss the programs in detail.)
If you are interested in increasing the station’s audience, you will
need to speak to potential listeners to the station. There are
several ways to define potential listeners on a questionnaire.
When a station increases its audience, this is usually because
people who formerly listened to the station only infrequently begin to listen more often. So most of your potential listeners
are probably occasional listeners already. In a screening survey,
you could ask
“How often do you listen to FM99: at least once a week,
occasionally, or never?”
All people answering “occasionally” would be considered
potential listeners.
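The screening question above induces a three-way classification. A minimal sketch of that rule; the category names are ours, for illustration only:

```python
def classify_listener(answer: str) -> str:
    """Map the screening answer to an eligibility category."""
    answer = answer.strip().lower()
    if answer == "at least once a week":
        return "current listener"    # eligible for current-listener groups
    if answer == "occasionally":
        return "potential listener"  # eligible for potential-listener groups
    return "not eligible"            # "never" respondents are released

print(classify_listener("Occasionally"))  # potential listener
```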
If you are trying to assess the audience to something that does
not yet exist (such as a planned new radio station), you will
need to define the potential listeners in another way. This can be
done from two starting points:
a. A demographic group, such as “rich people living near the
city.” If these were defined as the target audience, the
purpose of the study would be to find the kind of radio
program that most appealed to these people.
b. A program idea, for example “a radio station specializing in
jazz, blues, and reggae.” In this case, the purpose of the
study would be to estimate how many people are interested,
what type of people they are, and exactly what program
content would most interest them. You may need to do a
small survey to find the participants for consensus groups.
When you have found a respondent who is eligible to take part
in a consensus group, the next step is to find out if they will
attend. We normally use wording like this.
“With the answers you’ve given, you’re eligible to take part in a
group discussion, to talk about news and current affairs in more
detail. We’d like to invite you to come to this discussion next
Tuesday night, the 15th of October. We’re holding it at……
(name of the venue), starting at 7pm, and finishing no later
than 10pm. People usually find these meetings very interesting,
and if you attend we’ll pay you an incentive to cover your expenses. Would you like to come along?”
1. Not interested
2. Interested, but can’t come
3. Agreed to come -> get name and address, and say we’ll send
a letter
Name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
The essential points included in the above script are:
• Invitation to attend
• Approximate description of the subject
• Date, time, and place of meeting
• Maximum duration of meeting
• Incentives, such as payment to respondents, the offer of food, the chance to meet interesting people, and so on.
The third type of question that can be included in a screening
questionnaire is the demographic question: their sex, their age
group, and perhaps their occupation and other similar data.
There are two reasons for obtaining this demographic information:
• To help ensure a proper balance of sexes, age groups, etc. in
the groups.
• To help find out the differences between the type of people
who attend the groups and those who do not.
4. Find Participants for the Discussions
Interviewing for a screening survey is done in the way described
in the chapter on interviewing. However when you do a
screening survey for consensus groups, it is not essential to
interview people in their homes, following a prescribed route.
When you are looking for characteristics that everybody shares,
sampling makes much less difference: for example, if you
wanted to find out how many legs humans have, almost any
sampling method would do.
Unless the group discussions are to be held immediately after
the recruitment, it is best to use only one or two interviewers at
each sampling point. Because you will be aiming for 12
participants in each group, not many interviews will be required
— unless the people eligible to take part are only a small
percentage of the population.
In order for 12 people to turn up, you will probably need more
than 12 to agree. In India, even when we send a letter to
confirm the details of the discussion, and then telephone each
participant the day before the discussion, a significant percentage of those who accept will fail to turn up. We have also found that
people who say they “might” come usually don’t. Therefore, we
usually get acceptances from two more people than we really
want: if we want 12, we obtain 14 acceptances.
Attendance Rates are Higher When
• Participants are paid well for attending.
• They are allowed to attend with a friend or relative.
• The meeting is at a convenient time of day.
• Participants are mostly over 25.
• Participants are regular users of your service, and they know
the group is about that service.
• The lead time between the invitation and the group is short.
• Participants are reminded of the meeting the day before it
takes place.
• Participants are sent letters confirming the arrangements.
• These letters have practical details of how to get to the venue
- e.g. a map
The worst way to organize a consensus group is to extend a
weak invitation to a lot of people to come. As this is very easy,
it may seem tempting. If you are running a radio station, you
may think “Why not advertise on air that we are doing a
research study, and invite listeners to come along?”
The problem here is that you have no control over the number
of people who turn up. It could be nobody, and it could be
hundreds. We have found that 12 people is about the ideal
number for a consensus group. With fewer than about 8
participants, there is too much danger of the responses being
atypical. With more than about 15, many participants are unable
to give their complete opinions.
In developing countries, usually less than 50% of people
eligible to attend a group will agree to do so. Considering all the
people who are not eligible, and the eligible people who do not
want to come to the group, and those who say they will come
but do not, sometimes it takes a lot of interviews to fill one
group of 12.
For example, if one person in 10 is eligible, and a third of those
attend a group, that’s 30 interviews for each person who
attends, or 360 interviews to fill a group. It is therefore not a
good idea to make the eligibility criterion too restrictive.
Another step you can take to reduce the number of interviews
is to offer an incentive to attend. If you can persuade two thirds
of the eligible people to attend instead of one third, only half
as many interviews will be needed. Therefore, if you pay the
people for attending, this can greatly reduce the total cost. (The
participants get more money, but the interviewers get less.)
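The two calculations above follow from a single formula: the interviews needed equal the group size divided by the product of the eligibility rate and the attendance rate. A sketch reproducing the worked example:

```python
def interviews_per_group(group_size: int, eligibility_rate: float,
                         attendance_rate: float) -> float:
    """Approximate interviews needed to fill one consensus group."""
    return group_size / (eligibility_rate * attendance_rate)

# One person in 10 eligible, and a third of those attending: 360 interviews.
print(round(interviews_per_group(12, 0.10, 1 / 3)))  # 360
# An incentive that lifts attendance to two thirds halves the interviewing.
print(round(interviews_per_group(12, 0.10, 2 / 3)))  # 180
```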
We normally do the screening surveys between one week and
two weeks before the discussion. If given more than two
weeks’ notice, people tend to forget. With less than a week’s
notice, many people can’t attend, because they have already
made plans to do other things at the time.
But it is not essential to wait that long. Another possibility is to hold consensus groups on the spot: all people who meet the criteria, and have an hour to spare, are invited to a group discussion then and there. In our experience, it took only ten to fifteen minutes to find enough participants. Of course, this would not work
unless a large number of eligible people were nearby.
The above description of screening questionnaires involves a
separate interview for each person. In the resulting groups, each
participant will usually not know any of the other participants
— except in small towns, where many people already know each
other. In some ways this can be an advantage, because participants will not offend their friends by giving opinions they feel
the friends might disagree with. But in other ways, it can be a
disadvantage to hold a discussion among strangers: participants
may feel unwilling to reveal their opinions to people they do
not know and cannot trust.
Which of these two disadvantages is the stronger will vary from
country to country. My experience is that when the participants
in a group already know each other, they tend to express their
feelings more freely, when the topic is one of broad interest such as broadcasting. But if the topic is something that may
embarrass people, such as sexual behaviour, participants will be
more honest in the presence of people they have never met
before and probably will never meet again.
Sometimes it is better to have separate groups of younger and
older people. In some countries, it may be best not to mix
supporters of different political parties in the same group. This
separation can be done partly by careful selection of sampling
points, and partly through screening questionnaires. Remember
that the purpose of restricting a group to a particular type of
person is to enable the participants to speak more freely to each
other.
LESSON 26:
AFFINITY GROUPS
Topics Covered
Knowing, principles, choosing, preparing, formation
Objectives
Upon completion of this Lesson, you should be able to:
• Know what groups are
• Understand the principles of consensus groups
• Understand group discussions
• Choose sampling points
• Prepare a screening questionnaire
• Understand consensus formation in discussion
Affinity Groups
Another approach is to organize a discussion among a group
of people who already know each other, such as members of a
sporting club, a group of people who work together, or a group
of students. These groups, formed by people who already know one another, are called affinity groups.
The groups need not have a special purpose: they can simply be
groups of friends or neighbours. However, you should not
have a group made up only of people from a single family.
There is too strong a chance that they will not be typical of the
population, because the entire study would then be limited to
three families.
If the purpose of the group is to collect opinions, it’s usually
best to discourage husbands and wives from coming together:
they tend to inhibit each other. Often only one of them will
join the discussion. As each group is quite small, it would be
better to invite two unrelated people, who would give separate opinions.
But if discovering behaviour is your main interest, husbands
and wives can correct each other.
When affinity groups are used for a study, each group needs to be as different as possible from the others (replacing the three
sampling points). For example, don’t choose three sporting
clubs, or three groups of neighbours. This type of sampling is
most effective when there is the largest possible contrast
between the types of person in each consensus group.
One problem that restricts the use of affinity groups is that not
everybody in an affinity group may be eligible. If a radio station
is studying its listeners, it does not matter if a few people in an
affinity group are not listeners, but if most people are nonlisteners, the group will not provide useful information. Also,
if people are not interested in the topic being studied, they may
disrupt the discussion by talking among themselves. Therefore
affinity groups are best when all (or almost all) the population
are eligible to participate.
Each group needs a discussion leader, or moderator. This person (preferably not a radio or TV presenter, in whose presence people may withhold criticism) feeds the group with stimuli, or material to react to, as well as encouraging the reticent
to speak up, and discouraging the talkative from dominating
the proceedings. And when the topic is radio, one essential
function of the moderator is to stop the participants from
talking about television!
Each group should have a second person from the organizing
team, to act as secretary. Though it is possible for an experienced
moderator to fulfil both functions, it is valuable to have a
second organizer present, so that the two can compare their
conclusions after the participants have left. But if too many
people from the organizing team are present, participants are
likely to feel inhibited. If there are 12 participants, there should
not be more than about 5 organizers present. Apart from the
moderator and secretary, the other organizers should hardly
speak at all. Other people who may be present include:
• A video-camera or tape-recorder operator.
• Somebody to provide drinks or food for the participants.
• An observer or two from the organization for which the
study is being done.
Initial Questionnaires
It hardly ever happens that all participants in a group arrive at
the same time. Even when we ask people to be sure to arrive at
the advertised starting time, some arrive late, and others arrive
much too early. As soon as they arrive, participants are keen to
know what will be happening. It can be tiresome to repeat this
over and over again, as each new participant arrives. To give
them something to do, we usually have a preliminary questionnaire, which people can fill in as soon as they arrive. Those who
arrive late can fill in their questionnaire while the discussion
takes place.
As well as giving participants something to do while the others
arrive, these questionnaires can collect useful information. They
can also be used to raise questions that participants can think
about and discuss later: such as “If you could make one change
to FM99, what would it be?”
These questionnaires are short and simple. I try to restrict them
to a single sheet of paper, with questions only on one side. As
some people prefer to communicate in writing, we let participants keep their questionnaires throughout the discussion, and
invite them to write their thoughts and comments on the blank
back of the questionnaire.
5. Hold the Discussions
Seating Arrangements
A good type of room arrangement is a large table (or several
smaller tables pushed together) around which the participants
sit in chairs. The tabletop is useful for filling in questionnaires,
and for food, drink, and microphones.
Another good arrangement is a D-shaped one. The participants
sit in a semi-circle, with the moderator near the centre. The
straight part of the D is the wall on which the results are
displayed. The secretary sits at one end of the D. If there is a
photographer or video operator, this person usually stands
somewhere near the other end of the D.
In some cultures, people prefer to sit on the floor. This is no
impediment, but if the participants are to fill in questionnaires,
you will need to supply a writing surface (such as a clipboard)
for each participant.
Displaying Data
An essential part of the consensus group technique is the
poster: large sheets of paper, taped to the nearest wall, or on an
easel. On these posters, the secretary writes the findings of the
group, in large letters so that all participants can read them. (If
some or most are illiterate, symbols can be used as well as
words.) It’s also possible to use a blackboard or whiteboard,
but large sheets of paper are best because they can be taken away
and kept as a record of the discussion.
Lately we have found a better method. Instead of using a few
large sheets of paper, we have used many small sheets of paper,
about A4 size: one for each statement.
Electronic Recording
If possible, the whole discussion should be recorded, either on
video tape or on an audio cassette. Video has many advantages,
but it has the disadvantage of requiring a camera operator.
Though it is possible to set up a video camera and simply point
it at a group of people sitting in a semicircle, most detail will be
lost, without an operator to zoom in on each person as they
speak, and to record the reactions of other participants.
You might expect that to have a video operator present would
greatly distract the participants, but we have found this is not
so. After the first few minutes, participants usually stop noticing
the video camera. (This is obvious when replaying a video
recording, because participants seldom look directly at the
camera.) The video operator should intrude as little as possible,
and stand back at a reasonable distance. Note that, to focus on a
whole group of 12 or more people, you will often need a larger
room than you might expect - or a video camera whose lens can
zoom out to an unusually wide angle.
If a video camera is not available, the next best thing is to
record the discussion on audiocassette or minidisc. This does
not require a separate operator - the secretary or moderator can
work it. Some requirements for successful audio taping are:
• With a video recording, you can see who is speaking, but
with audio, some voices may sound the same. To help
identify voices, the moderator should address participants by
name as often as possible. The moderator may also need to
describe other things that are happening which will not be
recorded on the tape. For example, if a participant shows the
others a printed advertisement, the moderator should
describe what is happening.
• You need a good microphone. After much experimentation,
we discovered boundary-layer (PZM) microphones and similar designs, which are made by several
manufacturers. A boundary mike looks like a flat piece of
metal, about 20 cm square. It sits on a tabletop or other flat
surface, and records all sounds reaching that surface. The
microphones built into cassette recorders are usually of too low a quality to pick up voices clearly, especially when several people are speaking at once, and the microphone is more than a few metres away. Microphones designed for broadcasting don’t work well in this situation, because they are designed to pick up sounds from only one main direction.
• Double-check that everything is working! It is surprisingly
easy for something to go wrong when you are making a
recording. Batteries may go flat (in either the recorder or the
microphone), plugs may not be inserted fully, the tape can be
set to “pause” and forgotten, the volume control can be
turned right down, the machine can be switched to “play”
instead of “record”, and so on. I prefer a tape recorder with
an indication of the volume (flashing LED lights or a VU
meter), and a tape compartment with a clear window so that
you can check that the tape is turning.
Identifying Participants
We use name tents to help identify the participants to each other
and to the organizers. A name tent is a folded piece of cardboard, with a participant’s name written on it, like this:
Indica
If the name tents are put on the table in advance, this can be
used as a way of planning where people sit. If a few of the
participants are from the same family, they are likely to distract
the other participants by whispering if seated next to each other,
so it is a good idea to separate them.
If the participants are not sitting at a table, an alternative to
using name tents is to draw a map of the seating arrangements,
marked with all the participants’ names. This can be displayed
on a wall for all to see.
6. First Stage: Introduction
When almost all participants have arrived, the discussion can
begin. In the first stage, the moderator introduces himself or
herself, and asks each other participant to do the same. Here we
are looking for information that will help to understand that
person’s later comments. For example, if the topic is a radio
station, I find it helpful to ask participants to describe their
households, their daily routine, and their radio listening habits.
Another purpose of this first stage is to help each participant
gain confidence in speaking to others in the group.
Not a lot of detail is needed here: each person should talk for
one or two minutes. The moderator’s own introduction will set
an example for others to follow.
This stage usually takes about 20 minutes for a group of 12 people.
7. Second Stage: Discussion
After all participants have made their introductory talk, the
moderator begins the second stage, by outlining the issues that
the discussion will cover. This should be done very broadly, so
that issues will be placed in their context. For example, if the
study is about a radio station’s programs, it is useful to gather
opinions on other media which may compete with radio for
people’s time: television, newspapers, and so on.
If the participants do not already know which station is
conducting the research, it may be best not to tell them just yet,
so that their opinions will be more unbiased. In some cultures,
if participants know which organization is conducting the
research, they are reluctant to criticize it in the presence of people
from that organization. Almost certainly, they will identify the
organization later in the discussion, so the early stages are the
only opportunity to hear completely unbiased opinions.
Having each participant speak in turn ensures that everybody has a say, but doing this for hours makes conversation
very awkward. My preference is to begin and end the discussion
phase by asking each participant in turn to say something about
the topic. For the rest of the discussion phase, anybody can
speak, as they wish. Sometimes the moderator needs to
intervene to prevent one or two people from dominating the
discussion, or to encourage the shyer participants to have their
say.
The organizers should not be drawn into the discussion. If a
participant asks the moderator “what do you think of this
matter?” the moderator should answer “it’s your opinions that
are important here, not mine. I don’t want to influence you.”
The purpose of the meeting is for the listeners to provide
information to the organization that is conducting the research.
But participants may try to reverse the flow of information, and
begin questioning the organizers in detail. If the organizers
respond, much of the meeting time can be wasted. The
moderator should handle such questions by stating that there
will be a “question time” at the end of the meeting.
The discussion itself can be either structured or unstructured
(the difference is explained below), and will typically run for 1 to
2 hours. The moderator can usually control the duration. While
discussion takes place, the secretary notes down statements
which most of the participants seem to agree with. These
statements are usually not written on a poster at this stage, but
on a sheet of paper for the secretary’s later use. We’ve found
that displaying statements too soon seems to stop participants
from thinking further about a topic. If anything is written on a
poster at this stage, it should be a question or a topic, not a
conclusion.
Unstructured Discussions
An unstructured discussion is one in which the moderator
merely introduces the broad topic, and sits back to let everybody
else talk. The advantage of this approach is that participants feel
unfettered by restrictions, and may provide useful and unexpected insights. The disadvantage is that much of the
discussion may be irrelevant, and will not provide useful
information. Therefore it is normal for the moderator to
interrupt when the discussion drifts away from the stated topic.
For example, if the stated topic is radio, the discussion will
often drift onto television.
If the organizers have a list of questions they want answered,
an unstructured discussion will usually cover most topics,
without the moderator having to introduce the topics one at a
time. The moderator can have a list of topics, and check them
off as they are covered. Towards the end of the discussion
period, the moderator can ask specifically about topics that have
not been discussed.
Structured Discussions
With a structured discussion, the moderator introduces issues
one at a time, and asks the participants to discuss them. The
moderator should avoid asking the participants any questions
which can be answered Yes or No. (If that is the type of
information you want, you should be doing a formal survey,
not qualitative research.) Instead, the moderator should say
things like:
• ”Tell me some of the things you like about the breakfast
program on FM99.”
• “Why do you no longer listen to that program?”
Both of the above questions are the type that seek detailed
responses, and are loose enough to allow for unexpected
answers. If the questions asked are too specific, you risk not
finding out the essential facts.
Another type of structured discussion involves playing excerpts
of programs from a prepared tape. Reactions to specific
programs are usually more useful than general comment. For this
purpose, we normally prepare tapes of 10 to 20 short program
extracts, averaging around one minute — just long enough to
illustrate a particular point, or jog the memories of people who
are not sure whether they have heard or seen a program before.
Play one item at a time, then let people talk about it for a few
minutes.
Consensus group participants often express themselves vaguely,
making comments such as “I’m not very impressed with the
news on FM99.” Statements like this are not useful, and need
to be followed up with questions, such as “What is it that you
don’t like?” or asking them for some specific examples. If the
moderator does this a few times early in each discussion, the
other participants will see what sort of comment is most
useful.
Structured vs Unstructured
The main advantage of an unstructured discussion is that it will
often produce ideas which the moderator and secretary have not
thought of. The main disadvantage of unstructured discussions is that they take a lot longer, because participants tend to
drift off the subject.
However, a discussion need not be wholly structured or wholly
unstructured. It can begin in one of these modes, and move to
the other. It’s best to begin unstructured, and introduce
structure later. If you do it in the reverse order, beginning with
a structured discussion then allowing some unstructured time
at the end, participants seldom come up with fresh ideas.
Generating Statements
The output of a consensus group is a set of statements. These
can come from either the organizers or the participants:
From the moderator or secretary:
• A few initial statements, fairly bland (so as not to bias the
discussion) but clearly phrased (to show participants the
expected style)
• Statements agreed on by previous groups in the same series;
• Findings from earlier research on the same subject.
From Participants
• Their spontaneous opinions
• Their reactions to statements made by other participants
• Their impressions and reports of what other people (not
present) think
• Each participant in turn can be asked to make a statement
that he or she thinks most people agree with.
8. Third Stage: Consensus
In the final stage, we seek consensus on the issues discussed in
the second phase of the session. By this time, the secretary will
have a list of statements that he or she thinks most participants
may agree with. The secretary now displays each statement in
turn, and the moderator invites participants to agree or disagree
with it.
The object is to find a statement phrased in such a way that the
great majority of participants are able to agree with it. The
original statement will usually need to be altered somewhat so
that as many people as possible can agree with it.
The simplest way for participants to show agreement is by
raising their hands. This works well in some cultures, where
people are more assertive or individualistic (e.g. Western and
Chinese-influenced societies), but in more group-minded
cultures (e.g. Buddhist societies, Japan) participants are often
reluctant to show that they disagree with others.
A more discreet way to indicate agreement is for a participant to
hold up a palm-sized piece of cardboard. This can be held close
to the chest, and is much less obvious to others than an
upraised hand.
We normally give each participant a bright green card - the colour
needs to contrast with the participants’ clothes, so that the cards
will be visible in the videotape or photographs.
Voting is in two stages. For each statement in turn, the
moderator reads it out, and shows it written on the wall.
The first stage is to check that everybody knows precisely what it
means. Participants are asked to raise their hands (or their cards)
if they are certain of its exact meaning; often they will not be. If
even a single participant admits to being uncertain, the
statement should be reworded. Sometimes the person who first
made a statement needs to be asked exactly what it means. The
secretary or the moderator will then suggest a revised wording,
and participants are again asked if they know exactly what it
means. When all participants are certain of the meaning, the
voting can go ahead.
Participants are now asked to display their cards or raise their
hands if they agree with the statement. Some people want to
half-agree with a statement. We tell them: “Unless you’re sure
that you agree, don’t hold your card up.”
As the moderator reads out each statement, points it out
on the wall, and asks how many agree with it, the secretary (who
is facing the participants) counts the number of cards or hands
being held up. We declare a consensus if at least three quarters
of participants agree: at least 8 out of 10 voters, 9 out of 11, or
9 out of 12. Whether the threshold of consensus is set at 70%,
80%, or 90% makes little difference to the results.
Initially, some participants make timid statements that provide
little information, even though most others would agree. For
example, “the sky is blue” is clear and concise - but how useful
is it? Even worse: “the sky is often bluish, but sometimes it’s
more grey.” When statement-making begins, the moderator
should encourage participants to make more daring statements,
which may only just reach a consensus.
If only a few participants do not agree with a statement, they are
asked which parts of it they don’t agree with. The moderator
asks these people “Could you agree with it if a few words were
changed?” Often, this is possible. If a statement expressed in
an extreme way is softened a little, more people will agree with
it. For example, this may strike some people as extreme...
• People who do not listen to FM99 should have a
loudspeaker set up in the street outside their home and be
forced to listen to FM99 all day.
Not many people would agree. A few more might agree
that...
• People who do not listen to FM99 should be given a free
radio which only receives FM99.
But hardly anybody would disagree with...
• People who do not listen to FM99 should be told that it
exists.
When consensus has been reached, the secretary writes the
modified statement on the poster, together with the
numbers who agree. For example, if 11 out of 12 people
agreed:
People who do not listen to FM99 should be told that it exists.
11/12
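The three-quarters rule is simple enough to express in a few lines of code. The following sketch in Python is our own illustration, not part of the original method; it just computes the smallest number of agreeing voters needed for each group size:

import math

def votes_needed(voters: int, threshold: float = 0.75) -> int:
    """Smallest number of agreeing voters that reaches the threshold."""
    return math.ceil(threshold * voters)

for voters in (10, 11, 12):
    print(voters, "voters: need", votes_needed(voters))
# 10 voters: need 8;  11 voters: need 9;  12 voters: need 9

These are exactly the figures given above: 8 of 10, 9 of 11, and 9 of 12.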
When there’s no Consensus
Sometimes it is not possible to reach consensus, and the group
will divide into two sections, with no common ground. In such
a case, try to get two statements, each with as many as possible
agreeing, and record both statements, with appropriate
annotation.
It’s best to put non-agreed statements aside until all statements
have been worked through, then come back and reconsider the
non-agreed statements. There are two reasons for this. Firstly,
it’s possible to spend so long arguing over non-agreed statements that the group has no time to finish properly. Secondly,
after all the original statements have been worked through,
participants will have a better knowledge of each other’s
positions, and will more easily agree on statements which they
could not agree on at first.
If few people agree with a statement (no more than about 4
out of 12) it’s often useful to reword the statement as the exact
opposite of the original. You might expect that if 4 of 12 agree
with a statement, 8 should agree with its opposite. But this is
often not true - sometimes almost everybody will agree with the
opposite. In other cases, the group will be evenly split.
When between one third and two thirds of the participants
agree with a statement, this can signal one of two things:
a. The statement may be confused, or contain several different
ideas: so try splitting the statement into two separate
statements.
b. There is a fundamental division of opinion within the
group: in this case, reword the statement to maximize the
split, so that there is a consensus within each of two
factions.
After the secretary has finished going through the statements
that were noted during the discussion stage, participants are
asked to add statements that they feel most others will agree
with. Each participant in turn can be asked to make a statement,
which is then modified and voted on in the same way.
The moderator may then offer some final statements for
evaluation. Unless this is the first of a set of consensus groups,
now is the time when statements agreed in earlier group
sessions can be shown to participants. Because the wording has
already been clarified in the earlier groups, there’s usually no
need to modify it now. It’s simply a matter of voting, which can
be very quick in this case.
When the purpose of the project is to help an organization
understand its audience, I’ve found it helpful to add some
groups with the staff as participants, keeping staff of different
status levels in separate groups. The statements produced by
staff are often very different from those produced by the
audience, and different groups of staff (unlike different
audience groups) often produce statements that are very
different from each other’s. This can be very educational for
management.
In a typical 2-hour session, most participants agree on about 20
to 30 statements. As a final step, the moderator can ask
participants to classify the statements into groups of similar
statements, then for each of these groups of statements to
produce a new statement summarizing the whole group.
Finally, the statements can be laid out on a large sheet of paper.
Imagine all possible statements being spread out in two
dimensions, as if on a map. An irregular shape on this map
might define the statements with which most people agree. At
the centre of the map are the statements that are so obvious
that they are hardly worth stating. For example, all regular
listeners to a radio station might agree with “I usually like what
I hear when I listen to FM99.” (Otherwise, they probably
wouldn’t listen.) Towards the edge of the irregular outline on
the map are the borderline statements, at the boundaries of
agreement of the station’s listeners. An example of a borderline
statement might be “Though I usually listen to FM99, I listen
to other stations when I don’t like the FM99 program.” These
borderline statements tend to be more interesting, and less
obvious. Beyond that borderline are the statements on which
agreement could not be reached.
The consensus-seeking stage of the discussion will typically last
between 30 minutes and one hour, depending on how many
statements are offered for discussion. Sometimes a group will
split into two sub-groups, which hardly agree on anything. In
these cases (which are rare) the consensus stage will take much
longer.
Why separate the discussion and consensus stages?
You may wonder why discussing issues and reaching consensus
are presented as two separate stages of the discussion.
Wouldn’t it be more efficient to take each topic one at a time,
reach consensus on that, then move on to the next topic? I have
tried this, but found it impedes the flow of discussion. Also,
returning to a topic at the consensus stage gives people more
time to gather their thoughts, and consensus seems more easily
reached after a time gap. The only exception is when the
discussion is structured, by being divided into a number of
clear parts - for example when a lot of short program excerpts
are being played to the participants, and they reach agreement on
each one separately.
9. Summarize the Results
Sometimes the most difficult part of running a consensus
group is to persuade the participants to leave at the end of it.
With some groups, nobody wants to go home, and most
participants may sit around and talk for an hour or two longer;
my record is 5 hours. I find these late sessions very useful. By
that time, the participants know each other (and the organizers)
much better, and may volunteer information which they did
not want to do in the “official” part of the discussion. For this
reason, I usually leave a tape recorder running for as long as
possible.
The outcome of each group is one or more posters listing the
agreed statements, showing how many people agreed with each
statement.
After three group sessions, you will have three sets of statements. The reason for holding three groups is that one group
may develop its thoughts in quite a strange way, perhaps due to
one or two powerful personalities. With two groups, one of
which may be atypical, you won’t know which is which, but
with three, if the results from one group are very different from
those of the other two, you will know that one is atypical.
Though you will never get three groups coming up with exactly
the same set of statements, we have always found strong
similarities. If at least two of the three groups came up with
similar statements, these statements can be safely assumed to be
representative of the whole audience sampled. No matter how
differently the three groups are selected, we usually find a lot of
agreement between the lists of statements when the discussions have been conducted in the same way. Observing the
similarities will give you confidence that the results are true of
the entire population studied, not only of the three disparate
groups.
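As a small illustration of this cross-check, the Python sketch below (with invented statements) keeps only the statements produced by at least two of the three groups. In practice the wording will vary between groups, so matching statements takes human judgment rather than exact string comparison:

from collections import Counter

# Invented statement lists from three consensus groups.
group_a = {"FM99 news is too short", "FM99 plays too many ads"}
group_b = {"FM99 news is too short", "The breakfast show is the best program"}
group_c = {"FM99 plays too many ads", "The breakfast show is the best program"}

counts = Counter(s for g in (group_a, group_b, group_c) for s in g)

# Statements produced by at least two of the three groups can be
# taken as representative of the whole audience sampled.
representative = [s for s, n in counts.items() if n >= 2]
print(representative)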
Have the three lists of statements typed out, or lay them out in
map-like form as described above. This will be the basis of a
report resulting from the discussions. Add a description of any
taped stimulus material, and the criteria used for selecting
listeners, complete a survey summary form, and that may be
sufficient. If you have taped the discussions (whether on audio
or videotape) you can copy relevant comments to illustrate each
of the agreed statements, and include these in a more detailed
report.
In summary, the consensus group technique is one that can be
used by inexperienced researchers with reasonable safety. It is
more difficult to draw wrong conclusions with this technique
than with other types of qualitative research, and the findings
are often more directly useful than those from formal surveys.
However, the technique is not simplistic: even highly
experienced researchers can use it to supplement the focus
group technique. Inexperienced researchers will find that as they
conduct more and more consensus groups, they will be better
able to fashion the agreed statements. As well as being an
accurate reflection of the participants’ opinions, these statements are more usable by the organization which has
commissioned the research.
LESSON 27:
INTERNET AUDIENCE RESEARCH
Topics Covered
Understanding, Methods, Data, Possibilities
Objectives
Upon completion of this Lesson, you should be able to:
• Use existing data
• Understand internet jargon
• Understand quantitative methods online
• Know the qualitative methods
• Assess the possibilities of internet research
Internet Audience Research
Every two-way communication medium has been used for
market research: face to face, mail, and telephone. So it’s not
surprising that when the Internet became popular, it was quickly
put to use for audience research.
Internet audience research is still at an early stage, and many
technical problems can occur. However, the cost savings are so
great that this method is well worth pursuing.
Internet Jargon
This chapter includes some terms that will be familiar to
experienced internet users, but not to most other people. The
first time each term appears, it’s briefly explained. Definitions of
the 100-odd commonest terms used in internet research can be
found on the Audience Dialogue Jargoncracker page.
1. Possibilities of Internet Research
Even in the richest countries, most people are not regular
internet users. Though the percentage of people using the
internet has been growing fast, the growth rate is sure to slow
down. I’d be surprised if by 2005 as many as 90% of the
population of any country are regular internet users - unless
computers become far easier to use.
When less than 90% of a population use a medium, it’s
dangerous to do a survey and expect that the non-users would
give the same answers, thus making the results true of
everybody. This is why telephone surveys weren’t widely used
until the 1980s, when in the richest countries about 90% of
households had a telephone. And it took 100 years for the
telephone to reach that level of coverage. Penetration of the
internet is already much faster than that, but for the next few
years any surveys done on the internet will have to be based on
specific populations already using the internet.
An aspect of surveys which the internet may change is sample
sizes. With the internet, it costs little more to survey a million
people than a hundred. So why bother with a random sample?
Why not interview everybody - such as all visitors to a web site?
This is an attractive idea, because sampling error disappears, and
personal computers are now fast enough to analyse huge
samples. However, there are disadvantages. Completed
questionnaires always need to be checked carefully, and
computers can’t do this as well as humans.
Also, large samples are no substitute for accurate samples. If a
million people visit a web site, and 100,000 of them respond to
a questionnaire, the results could still be completely wrong - if
the 900,000 who didn’t respond are different in some
important way. And if the response rate is only 10%, they
probably will be different (a short worked example follows the
list below). So it’s still important to ensure a good response
rate, of 70% or more. To achieve a high response rate, the same
criteria apply as for other surveys:
• Give people a strong incentive to respond. Like mail surveys,
and other surveys without personal contact between the
interviewer and respondent, internet surveys need to use
incentives, to boost the response rate.
• Make it easy for people to fill in and return a questionnaire:
not too long, questions not too difficult to answer, not
repetitive, etc.
• Make the questionnaire interesting - even fun.
• Above all: follow-up is essential. Reminder messages must
be sent to people who haven’t yet returned questionnaires.
(But if you can’t contact the potential respondents - as is the
case with many internet surveys - follow-up is not possible.)
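The worked example promised above: a one-line calculation in Python, with invented figures, showing why a big sample cannot rescue a low response rate.

# Suppose 60% of the 100,000 respondents like a program,
# but only 30% of the 900,000 non-respondents do.
respondents, nonrespondents = 100_000, 900_000
observed = 0.60    # what the survey reports
truth = (respondents * 0.60 + nonrespondents * 0.30) / (respondents + nonrespondents)
print(f"survey says {observed:.0%}; the true figure is {truth:.0%}")
# survey says 60%; the true figure is 33% - a huge sample, still badly wrong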
Advantages of Internet Research
1. Low Cost
The main attraction of internet research is its low cost. Because
the equipment has been paid for (mostly by respondents), the
cost to researchers is very low. No printed questionnaires, no
interviewer wages, no mail-out costs, no data entry costs. And
instead of paying for every phone call made (as with a telephone
survey), questionnaires can be emailed to an unlimited number
of people for the cost of a single phone call. Even the costs of
producing reports can be done away with: reports can be
distributed by email or on the Web.
This will have a revolutionary effect on the market research
industry. Costs will drop enormously: when the computer
setting-up costs have been paid, the only remaining major costs
for a survey will be the labour involved in questionnaire
development and analysis. And when a survey is repeated (e.g.
to monitor trends) the same questionnaire can be used over and
over again, and the analysis can be computerized. With all these
advantages, I predict that within a few years most professional
surveys in Western countries will use the internet whenever
possible.
Until then, only one population can be researched using the
internet, with no worries about sampling problems. That
population is (of course) people who already use the internet.
So the internet itself can be used to do audience research about
internet sites.
2. Fast Response
With a typical email or web survey, most of the responses come
back in less than one day. No other survey method can match
this - but in fact, such a fast response isn’t always desirable.
For broadcasting: if a survey is done too quickly, without
several callbacks, the people who are out a lot will be less likely
to be interviewed. Because most TV viewing is done at home, a
too-quick survey will overestimate TV audiences. Generally,
about 3 days are needed for a fully accurate survey. With the
Internet, this is easily possible - only a highly-staffed telephone
survey can match it.
3. Ability to Show a Wide Range of Stimuli
As long as respondents have modern computers, they can see
far more on an internet questionnaire than a paper one.
Coloured pictures, even moving pictures, and sounds are all
possible. If you are asking “Have you ever seen the Alfa Marathi
TV program?” you can show a short clip from it, to make sure that
respondents know exactly what is being asked. Though many
people don’t yet have a computer capable of showing all this,
within a few years they will.
Problems with Internet Research
On the Internet, there are new problems, which seldom occur
with more traditional surveys. The commonest problems (apart
from poor samples and low response rates) are multiple
submissions, and lies.
Problem 1. Multiple Submissions
Many people inadvertently submit a completed questionnaire
several times. Often, they click a SUBMIT button to
send in their questionnaire. Nothing seems to happen, so they
click it again. Perhaps again - to make sure it is sent. So three
identical questionnaires arrive at the host computer. How can
the researchers know if this was really the same person sending
in a questionnaire three times, or three different people who
just happened to give the same answers?
Another reason for multiple submissions occurs when
respondents are offered a very attractive incentive - such as a
prize. So, when possible, use cookies to prevent people from
making multiple submissions. (Cookies are small files which can
be automatically sent to a visitor’s computer.) But cookies don’t
solve the problem completely. If a computer has more than one
user, a cookie will prevent a legitimate respondent from sending
in a questionnaire. If a respondent sends in a questionnaire by
mistake before it’s completed, the cookie will stop them from
making a correction.
Multiple submissions are also a problem with email surveys,
but these are easier to check, because the sender’s email address
is always known. However, some skilled computer users can
hide or fake their email - and many people have more than one
email address.
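The mechanics of the cookie check described above are simple. Here is a minimal sketch in Python using the Flask web framework - our own illustration, not part of the original text; the cookie name, route, and storage routine are invented:

from flask import Flask, request, make_response

app = Flask(__name__)

def save_response(form):
    # Hypothetical storage routine: a real survey would write the
    # answers to a database or file here.
    pass

@app.route("/submit", methods=["POST"])
def submit():
    # If this browser already carries our cookie, treat the new
    # submission as an accidental duplicate and ignore it.
    if request.cookies.get("survey_done"):
        return "You have already submitted this questionnaire. Thank you!"
    save_response(request.form)
    resp = make_response("Thank you for completing the survey.")
    # Mark the browser so a second click on SUBMIT is not counted twice.
    # (As noted above, this also blocks a second user of the same computer.)
    resp.set_cookie("survey_done", "1", max_age=60 * 60 * 24 * 30)
    return resp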
Problem 2. Lies
On the internet, respondents lie! This happens sometimes with
mail surveys and self-administered questionnaires, but it’s
usually rare. With real interviewers - whether face-to-face or by
telephone - respondents don’t have time to lie. But with
internet surveys, I’ve often found blatant lying. Though it’s
often possible to detect a lie (if you study the responses
carefully enough), you don’t know what the correct answer is.
Most people don’t realize that some of their answers can be
checked. For example, the country that somebody is in can
usually be worked out from the IP number of their computer.
Part of the reason why internet respondents lie is poor
questionnaire design. I’ve even lied myself, from frustration.
With a live interviewer, or a paper questionnaire, at least you can
write in an answer for a multiple-choice question which doesn’t
offer an answer to fit your situation. But with Web surveys, it’s
possible to force respondents to give one of several answers. If
no space is provided for comments, their only alternatives are to
lie or to give up on the questionnaire.
Recently, for example, I was filling in a Web questionnaire. I was
asked: “Which state do you live in?” The 50 US states were then
listed. Perhaps the survey was only intended for Americans
(though it didn’t say so), or perhaps the writer didn’t realize
that the questionnaire could be seen by anybody in the world.
So, to find out what questions came next, I pretended to live in
some US state. I clicked the mouse somewhere, without even
looking at what state I chose.
Anybody who carefully analysed the results of that survey could
have found that I had an Indian IP number, but supposedly
lived in the USA. So it was possible for them to discover that
I’d lied - but, judging from the obvious lack of care taken in the
rest of the questionnaire, I suspect they never knew that I’d lied.
So if you don’t want respondents to lie, let them give their own
answers, without being forced to choose one of yours. I
strongly suggest that, on a Web questionnaire, every question
should have a space for comments or “other” answers - even a
question as seemingly simple as “Which sex are you?” (Sometimes men and women jointly complete a questionnaire. In this
situation, they’ll answer “both sexes.”)
Another advantage of having plenty of open-ended questions
is that liars will give themselves away with their comments - or
at least, raise strong suspicions. But of course, if you’re getting
thousands of responses a week and have only one researcher on
the job, there’s no time to read through the data file and assess
its accuracy.
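As a sketch of the kind of consistency check described in this section, the Python fragment below flags a respondent whose claimed country does not match the country suggested by their IP number. The lookup function is a stub and the IP prefixes are invented; a real check would query a geolocation database:

def ip_to_country(ip: str) -> str:
    # Stub: a real implementation would consult a geolocation
    # database rather than this invented prefix table.
    return {"203.197.": "IN", "192.0.2.": "US"}.get(ip[:8], "??")

def flag_suspect(claimed_country: str, ip: str) -> bool:
    """Flag a response whose claimed country disagrees with
    the country suggested by the respondent's IP number."""
    return ip_to_country(ip) not in (claimed_country, "??")

# A respondent with an Indian IP number who claims to live in the USA:
print(flag_suspect("US", "203.197.12.34"))   # True - worth a closer look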
Screeners, Recruitment, and Virtual Facilities
Many elements of the on-line qualitative process are familiar to
qualitative researchers conducting in-person groups. Every online group is initiated by contracting with a virtual facility that
usually offers recruitment services as well as virtual rooms.
Virtual facilities typically recruit respondents electronically from
established panels, compiled on-line lists, targeted Web sites, or
client-provided lists. Sometimes, telephone recruiting is used to
make the initial recruitment contact or to obtain e-mail addresses. (Independent recruiters specializing in on-line group
recruitment are just beginning to appear and this will, undoubtedly, be another area of growth potential.)
Recruiting on-line groups requires specially crafted screeners that
are similar in content and depth to those used for in-person
groups. Since the screeners are administered electronically, some
questions are worded differently to disguise qualifying and
disqualifying answers. A professional on-line facility, in combination with a well-written screener, will thank and release all
disqualified respondents without them knowing why. This, as
well as putting a block on their electronic address, discourages
them from re-trying to qualify by logging back in or from
sharing information about the specific screener questions with
friends. Depending upon the target markets, it is not unusual
with high-incidence groups to have an excess of qualified
respondents to choose from and the virtual facility and/or the
qualitative researcher will select the best.
The time set for an on-line group should accommodate the
array of respondents participating. If there are East and West
Coast participants, groups can be conducted later in the evening
(based on GMT) or participants in similar time zones can be
grouped together.
Invitations and Preparation
Respondents who are invited to the group receive invitations
with passwords and pass names, instructions, dates, and times.
The invitation requests that they sign on to the site in advance
of the group, using the computer they will use during the
group, to guarantee that all technology is compatible. If there
are any complications or questions, the respondents can contact
tech support in advance to resolve them. They can also contact
tech support during the group for on-line support, as can the
moderator and client observers.
Discussion Guide Development and Design
The content and structure of the inquiry, as outlined in the
discussion guide, resembles in-person groups. The major
difference is in the actual presentation of questions that are
mostly written in full sentence form, in advance. The main topic
questions must be written clearly and completely otherwise
respondents will have to ask for clarification, which uses up
valuable time and diverts the attention of the group.
On-line groups are often shorter (typically 60 to 90 minutes)
than in-person groups, and the ideal number of prepared
questions (30 to 45) depends on the complexity of the subject
and the amount of follow-up probes required. Whenever
desired, follow-up questions and probing can be interjected to
either an individual respondent or the entire group. This
enriches the inquiry and uncovers deeper insights. Unfortunately, sometimes research sponsors can insist on an excessive
number of prepared questions, which minimizes the amount of
probing time. The result is a missed opportunity to uncover
deeper insights.
Preparation for Groups
Fifteen to 30 minutes prior to the group, the moderator and
technical assistant sign on to watch as respondents enter the
virtual waiting room using their passnames and passcodes.
Similar to in-person groups, some respondents arrive very early
and others arrive at the last minute. As they arrive, some virtual
facilities can administer a rescreener to re-profile them and to
assure that the attendee is the person who originally qualified.
In addition to a few demographic and product usage questions,
the rescreener can include a verification question that refers to a
piece of unique, personal info, such as the name of their first
teacher or pet, that was subtly asked in the original screener.
Show Rates
Show rates can vary dramatically based on a number of factors,
including: the origination of the respondent (on-line database,
established panel, Web site intercept, etc.), confirmation
procedures, respondent comfort and familiarity with the on-line
venue in general, and the typical kinds of other personal/
business commitments that can inhibit attendance. For eight
respondents to show, 10 or 15 may have to be recruited.
However, it should be noted that the weather, traffic, and
transportation can have less of a negative impact on show rates
since the respondents are typically participating from a variety of
locations and not encountering the same delays.
Selecting Final Respondents
Based on the rescreener information and final screener spreadsheet, the moderator and client select the respondents together,
similar again to in-person groups.
Moderating
For a moderator, the excitement and pace of moderating an online group can be likened more to a roller coaster ride than an
in-person group. Ideally, the discussion guide is downloaded
directly onto the site so the moderator can, with one click, enter
a question into the dialogue stream. However, another method
more frequently available and workable (although requiring
more concentration and actions by the moderator) is having the
discussion guide document loaded in a separate window
behind the virtual room to use for cutting and pasting each
question.
To begin a group, the moderator introduces the purpose of the
group and lays the ground rules. This includes a personal
introduction, purpose, timeline, instructions for entering
responses, encouragement to be candid and honest, and
instructions for signing back on if they accidentally drop off.
Respondents are also encouraged to “feel free to agree, disagree,
or ask questions of each other that relate to the subjects being
discussed” and told that this interaction will help bring the
discussion to life.
On-line groups demand that a moderator possess strong and
fast keyboard skills or be willing to hire an assistant who does.
There are no unused moments during a group to accommodate
slow typists on the moderator side. Respondents can type
slower, but most are keyboard proficient and save time by
cutting corners on spelling and not worrying about sentence
construction. It helps to tell them right in the beginning that
“typo’s and sentances dont mater.”
While a group is underway, there may be technical problems
with respondents and clients that require telephone calls back
and forth to resolve. Simultaneously, the moderator is reading
and interpreting the response stream, responding to client
notes, composing probes and entering questions while
(potentially) dealing with all kinds of technical issues.
Also, moderating on-line groups requires someone who relates
to the on-line venue and recognizes that respondents are adept
at developing relationships in this medium. Many respondents
participate in chat rooms and feel comfortable relating on-line.
At the same time, it is the responsibility of the moderator to
help make the respondents who are not as comfortable or
experienced feel valuable.
The strategy of on-line moderating resembles in-person
moderating. That is, the moderator follows the discussion
guide to the extent that it continues obtaining the desired
information. If a subject that was supposed to be covered later
in the group is brought up earlier by the respondents, those
questions can be inserted as the moderator sees fit. In addition,
if topics not covered in the guide are introduced, the moderator
can choose to interject a new line of questioning.
Technical Support
All virtual facilities offer some level of technical assistance. This
may be a technician whose role is to help everyone sign-on and
to help anyone who gets kicked off and has trouble re-entering.
Other technicians perform additional functions including
hosting the waiting room and interacting with respondents
while they wait.
Another option is for the moderator to hire their own project
assistant who greets the respondents and chats with them in
the waiting room - warming them up - while the moderator
takes care of any last-minute details with the clients and facility.
This assistant then supports the moderator throughout the
group in whatever capacity needed, which could include
co-moderating if, by remote chance, the moderator loses her/his
connection. This person also has an overview of the project
objectives, screening, discussion guide, and the moderator’s
style, areas that a virtual facility’s technical support person would
not be privy to.
Transcripts
Soon after the completion of the groups, transcripts are
available for analysis and reporting. These transcripts, available
within a few hours or the next day, may document all
interactions from sign-on to sign-off, or they may be slightly
edited (by the facility or moderator) to begin at the first question
and end with the last question, eliminating the hellos and
goodbyes. Inappropriate respondent comments can be easily
removed.
Analysis
Analysis and reporting are similar to in-person groups, with the
exception that transcripts are quickly available for every group.
The analysis will be very inclusive and reflect the input of most
respondents, since most of them answer every question. In the
absence of visual and verbal cues, analysis of some areas, such
as appeal, will be based on an interpretation of respondent
statements and the ratings they use to indicate levels of appeal.
Reporting
Reports are virtually the same as other qualitative reports,
covering areas such as objectives, methodology, conclusions,
and detailed findings. They can be in topline, executive
summary, or full report form. Typically, reports can be turned
around more quickly due to the immediate availability of the
transcripts.
A Qualitative Caveat
Results from on-line groups depend on the expertise and
qualifications of the professional who is conducting them. The
most knowledgeable and qualified professionals to conduct
on-line groups are qualitative researchers who have research and
marketing expertise and experience managing group
interactions. “Techies” sometimes attempt to do groups because
they are comfortable with the technology and mechanics and
some even have experience with chat groups. However, they
often lack research, analysis, moderating, and marketing
expertise and the results can suffer from these deficiencies.
Assignments
1. The advancement in computers is astonishing. Do you
agree? Answer, pointing out various characteristics of
computers.
LESSON 28:
ANALYZING ONLINE DISCUSSIONS: ETHICS, DATA, AND INTERPRETATION
Topics Covered
Understanding, Considerations, Collection, Management,
Preparations, Manipulations
Objectives
Upon completion of this Lesson, you should be able to:
• Use existing data
• Know the ethical considerations
• Understand data collection methods and management
• Use the prepared data
• Understand data manipulation and preservation
Analyzing Online Discussions: Ethics,
Data, and Interpretation
Online discussions are attractive sources of information for
many reasons. Discussion forums frequently offer automated
tracking services, such as a transcript or an archive, so that you
can engage in animated conversation and analyze it at leisure, or
locate conversations that took place months or years ago. Online
tools provide an opportunity to observe a group without
introducing your own agenda, to follow the development of an
issue, or to review a public exchange that took place in the past,
or outside the influence of researchers and policymakers. You
can test additions and revisions to tools for communication,
building more effective online classrooms, research groups, and
professional organizations. Whether you are looking for ways
to improve interactions within a working group (Ahuja &
Carley, 1998), studying the interactions of a community that
interests you (Klinger, 2000), or assessing student learning,
online discussions can be a valuable tool.
An online discussion is identified by the use of a computer-mediated conversational environment. It may be synchronous,
such as real-time chat, or instant messaging, or asynchronous,
such as a list server, or bulletin board. It may be text-only, or
provide facilities for displaying images, animations, hyperlinks,
and other multimedia. It may require a Web browser, a Unix
connection, or special software that supports such features as
instant messaging. Tools for online conversation are becoming
increasingly sophisticated, popular, and available, and this
increases the appeal of using online discourse as a source of
data.
Online discussions present new opportunities to teachers,
policymakers, and researchers, but they also present new
concerns and considerations. This article is about access to, and
management and interpretation of, online data. Online research
is similar, but not identical to, face-to-face (f2f) research. There
are new ethical considerations that arise when it is not clear
whether the participants in a conversation know they are being
monitored, or when that monitoring is so unobtrusive that it
can easily be forgotten. Instead of collecting data using audio
and video recording as in f2f conversations, preserving online
conversations requires ways to download or track the electronic
files in which the information is stored. Finally, in f2f interactions we examine body language and intonation as well as the
words spoken, and in an online interaction, we have to look
beyond the words written to the electronic equivalents of
gestures and social conventions. This article will address these
issues of ethics, data collection, and data interpretation.
This article is not about recommending any particular method
of analysis. Whether you use grounded theory, quantifying
techniques, experimental manipulations, ethnography, or any
other method, you will have to deal with issues of collecting
and managing data, as well as the structure of online communication.
Ethical Considerations
Before we consider how to analyze an online conversation, we
need to first consider what precautions should be taken to
protect participants in the conversation. Because online conversation is relatively new and unfamiliar, and takes place at a
distance, it is relatively easy to overlook possible ethical violations. People may not realize that their conversations could be
made public, may not realize that they are being monitored, or
may forget that they are being monitored because the observer’s
presence is virtual and unobtrusive. Some participants may feel
relatively invulnerable because of the distance and relative
anonymity of online exchanges, and may use these protections
to harass other participants. Online exchanges of information
require the same levels of protection as f2f exchanges, but it can
be more complicated to achieve this.
If you belong to a university or similar institution, you will
need the approval of an Institutional Review Board, created for
the protection of human beings who participate in studies.
Teacher-researchers and others who do not have an IRB and are
not associated with any such institution should nevertheless
follow the ethical principles and guidelines laid out for the
protection of human subjects.
The least problematic conversations are those that take place
entirely in the public domain; people know they are publishing
to a public area with unrestricted viewing, as if they were writing
a letter to the editor. Newsgroups are an example of such
exchanges—anyone with access to http://groups.google.com/
can access any conversation in the past twenty years. In many
cases, this sort of research is considered “exempt” under government
guidelines for the protection of human subjects; for researchers
at institutions with an IRB, the board must confirm this status.
Still, even public areas may contain sensitive information that
the user inadvertently provided; novices are especially prone to
accidentally giving out personal information, or including
personal information without considering possible misuse. In
addition to the usual procedures for anonymizing data (e.g.,
removing names, addresses, etc.), there are some additional
concerns to address. Every post must be scoured for both
intentional and unintentional indicators of identity. Here are
some common ways that anonymity is compromised:
• Usernames like “tiger1000” do not provide anonymity;
people who are active online are as well known by their
usernames as their traditional names. Usernames must be
replaced with identifiers that provide no link to the actual
participant.
• You must also be vigilant in removing a participant’s .sig
(the signature file that is appended to a post) and any other
quotes, graphics, and other idiosyncratic inclusions that are
readily identifiable as belonging to a particular individual.
• Identifying information is often embedded in a post
through quoting; for example, if I were quoted by another
participant, my email address might be embedded in the
middle of his or her message as “tiger1000
(sumit_life@sify.com) posted on 1 February 2002, 11:15.”
If a domain establishes any degree of privacy through
membership, registration, passwords, etc., or if you wish to
contact participants directly, then the communications should be
considered privileged to some degree. In addition to the
safeguards required for public domain data, using these
conversations in research requires at the very least the informed
consent of all participants whose work will be included in the
analysis, with explicit description of how confidentiality and/or
anonymity will be ensured. The procedures for informed
consent, recruitment, and data collection will require “expedited”
or “full” review by an Institutional Review Board. Once
approval has been given, consent forms will have to be
distributed to every participant, and only the contributions of
consenting members can be stored and analyzed.
If you set up a site for collecting data, regardless of how much
privacy and anonymity you promise, you are ethically bound to
inform all potential participants that their contributions will be
used as data in research. One example of how to provide this
information has been implemented by the Public Knowledge
Project. To see how they obtained consent, visit http://
www.pkp.ubc.ca/bctf/terms.html. Likewise, if you contact
participants directly, you need to make their rights clear and
obtain their permission to use the information they provide for
research purposes before engaging in any conversation with
them.
In addition to preserving the safety and comfort of
participants, you must also consider their intellectual property
rights. All postings are automatically copyrighted under
international laws. Extended quotes could violate copyright
laws, so quoting should be limited, or permission should be
obtained from the author prior to publication. For more about
international laws, visit http://www.law.cornell.edu/topics/
copyright.html.
Data Collection and Management
Once you have received the necessary permissions and taken the
necessary precautions, the next concern is the best way to collect
and organize the data for analysis. An online exchange often
evolves over days or months, and may require handling tens of
thousands of lines of text, along with graphics, hyperlinks,
video, and other multimedia. Consider what media will be
present before choosing tools for management and
manipulation.
For text-only exchanges, a flatfile spreadsheet is often sufficient.
The text can be downloaded as plaintext, or cut and pasted in
sections. Paragraphs or lines of text become entries in the
spreadsheet, and can be parsed into smaller units if desired.
Once the data is placed in a spreadsheet, additional rows and
columns can be used to hold codes and comments, and the
spreadsheet can be sorted on these to reveal and examine
patterns. (A small worked example follows Figure 1.)
Figure 1. This sample is taken from an analysis of an
online discussion of a book. At the top of the screen shot
is a participant’s entry, along with the information
regarding its position in the thread, date of posting, and
so on. Codes were added below each entry, shown at the
bottom of the screen shot. The color of the codes
corresponds to the color of the relevant text. Each
participant’s contribution can be added as an additional
column.
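The worked example promised above: a simplified version of the flatfile approach in Python, using the pandas library. The transcript, usernames, and codes are invented, and we assume one post per line in the form “username: text”:

import pandas as pd

transcript = """tiger1000: Has anyone read chapter 3 yet?
bookworm: Yes - the ending surprised me.
tiger1000: Me too, I did not expect that."""

rows = []
for line in transcript.splitlines():
    user, _, text = line.partition(": ")
    rows.append({"user": user, "text": text, "code": ""})

df = pd.DataFrame(rows)

# Replace usernames with neutral identifiers (P1, P2, ...) first,
# since, as noted above, usernames do not provide anonymity.
ids = {u: f"P{i + 1}" for i, u in enumerate(df["user"].unique())}
df["user"] = df["user"].map(ids)

# Codes added by the analyst; sorting on the code column is one
# way to reveal and examine patterns.
df.loc[0, "code"] = "question"
df.loc[[1, 2], "code"] = "reaction"
print(df.sort_values("code"))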
There are many cases, however, when this technique will be
ineffective. Because they can last for years, online conversations
differ from f2f conversations in that they can be extremely long,
often exceeding spreadsheet limits. Furthermore, they often
contain hyperlinks, graphics, video, and other multimedia; these
are often essential to the conversation, and most spreadsheets
will not display them. When it is desirable to maintain these
elements, there are two straightforward ways to do this. The
first is to simply download all the relevant files and create a
mirror archive on your own hard drive. This assures you
constant, reliable access to the data, but may take up large
amounts of space, and not all files can be downloaded (e.g.
there may be security restrictions, additional software requirements, or intellectual property considerations). An alternative
approach is to create a flatfile spreadsheet that contains
hyperlinks to the original exchanges rather than the exchanges
themselves. The disadvantage is that you cannot be sure the
original files will always be available, but the spreadsheets
containing these pointers take up very little space, are less
problematic legally and technologically, and provide the full
functionality of a spreadsheet (e.g., sorting and manipulation).
Figure 2. This example includes links to discussions on
developmental disabilities that affect schoolchildren.
The links take you to the conversation described in the
field notes.
The advantage to using a flat file database is that it allows for
flexible coding. The disadvantage is that it does not support any
particular theoretical perspective. For this reason, you may want
to begin by using a flat file, then transfer data to a theory-based
format after you have done some initial processing and can
narrow down what you want to focus on. Such tools are
described at the sites mentioned above.
Data Preparation, Manipulation, and
Preservation
Online data creates extremely large files, not only because of
their potential length but also because online conversations
tend to be highly repetitive. Replies often contain portions of
previous messages, if not the complete original; even if each
individual contribution is relatively short, quoting can quickly
create messages containing hundreds of lines. In addition,
multimedia elements tend to take up considerable space. It is
not unusual for a datafile to grow to 30 megabytes or more.
Files of this size are very difficult to manipulate and can be
prohibitively slow to load and save. Therefore, it may become
necessary to decide what information should be kept verbatim,
what should be deleted altogether, and what can be replaced
with a smaller reference code (e.g., if many participants quote
message 112, you might replace each reposting of this message
with the code “msg112”; advertisements might be indicated by
the code “banner ad” or a hyperlink to the ad on the original
site). These methods of abridging the record can be implemented before engaging in extensive analysis, so that the file
that you work with most often is the smallest one.
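The reference-code idea is easy to automate. A sketch in Python (our illustration; the archive and message text are invented) replaces any verbatim repost of an archived message with a short code such as “msg112”:

# Hypothetical archive mapping message numbers to their full text.
archive = {"112": ("This is the long original message that many "
                   "participants quote in full...")}

def compress(post: str) -> str:
    """Replace any verbatim repost of an archived message with a
    short reference code, e.g. [msg112], to keep the datafile small."""
    for msg_id, text in archive.items():
        post = post.replace(text, f"[msg{msg_id}]")
    return post

reply = "I disagree with this!\n" + archive["112"]
print(compress(reply))
# -> I disagree with this!
#    [msg112]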
In deciding on these matters, you should be guided by your
research questions and you should preserve all information that
is relevant to your questions; thus, advertising may be a central
issue, or it may play a relatively small role. In any case, it is best
to err on the side of preserving too much information. Once
removed, a hyperlink, graphic, or reposted message can be
difficult to recover. Start by keeping as much information as
possible, and pare it down as you find elements that seriously
interfere with speed, or that are adding nothing to your analysis.
You may want to keep multiple versions, using the most
streamlined for your analysis, and archiving richer versions in
case they are needed later on.
Coding, Analysis, and Interpretation
The structure of an online exchange can be difficult to reconstruct, and its boundaries can be difficult to locate. Capturing
the perspective of participants, challenging in any context or
medium, is further complicated by new ambiguities created by
the way in which conversations are created, stored, and accessed.
While it may not be possible to resolve all inconsistencies and
ambiguities, being aware of them and their implications for any
particular interpretation is essential.
Reconstructing the Conversation
One significant difference between online and f2f conversations
is that participants often view online conversations differently.
Online discussions do not necessarily develop sequentially, nor
can we be sure that all participants are seeing the same exchange.
We can see this by comparing how listservs and bulletin boards
are visited and revisited. A listserv sends messages to the
subscriber’s email account. Listservs send all messages in
chronological order, regardless of the conversational thread to
which they belong, so multiple conversations are interleaved. It
is easy to miss a post, and each person may read a different set
of messages. If you join a listserv after a conversation has
begun, you will not see the beginning of the exchange. In
contrast, bulletin boards keep messages separate by thread, and
all messages are available for the life of the bulletin board, or
until they are archived.
A participant may follow conversations thread by thread, read
everything written by a single author, skip from one topic to the
next, or otherwise deviate from the original presentation. You
should consider reviewing the conversation in a variety of ways
in order to understand better how participants receive and work
with the information.
For example, Usenet groups often attract users who only wish
to ask a single question, get an answer, and never return. In
addition, while some servers provide a complete, searchable
Usenet archive (http://groups.google.com/), others regularly
delete files to save space, or may not provide much in the way
of searchability. For these reasons, it is common for several
participants to ask the same question, sometimes word for
word, over and over. Understanding why this happens and
how the conversation develops requires looking at the records
both as if you are a user with access to the full record and as if
you are a user with access to a very limited record. It is virtually
impossible to capture all possible viewings, but you will
probably want to capture several.
Tracking a conversation, regardless of the perspective you
choose, can be challenging, rather like assembling a rough-cut
jigsaw puzzle. The threads of conversation are easily broken; if
a participant or server changes a subject line, archiving tools
cannot follow the conversation and the line of thought
becomes disconnected. People use multiple accounts and
identities, either because they are deliberately trying to hide their
identity, or for innocent reasons, such as logging in differently
from work and home. There are, however, ways to reconstruct a
conversation. To track a thread, examine subject lines to see if
they correspond except for a reply indicator, look at dates of
posting, or examine the text for quotes from previous messages
in the thread or other references to previous postings in the
thread. In the case of users, even if participants’ usernames
change, they may be identifiable through their email addresses,
their signatures, hyperlinks to their home pages, or their writing
styles and preferred themes. For example, in analyzing one
Usenet group in which the topic of speed reading frequently
arose, I noted that there were several usernames employed by
one company; these users would respond as if they were
“ordinary” individuals, rather than identifying themselves as
company representatives. However, all used the same prefabricated plug for the company’s product. Thus, I could use this to
mark the posts as coming from related users, or perhaps the
same user.
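Some of this reconstruction can be automated. The sketch below, in Python, shows the subject-line heuristic: strip reply indicators such as “Re:” so that replies group with the original posting, then sort each thread by date. The message structure (a list of dictionaries with subject, author, and date fields) is assumed for illustration; a real archive would need its own parser:

    import re
    from collections import defaultdict

    def thread_key(subject):
        # Strip any run of reply/forward markers ("Re:", "RE:", "Fwd:")
        # so that replies group with the original posting.
        return re.sub(r"^(\s*(re|fwd?)\s*:\s*)+", "", subject,
                      flags=re.IGNORECASE).strip().lower()

    def group_into_threads(messages):
        threads = defaultdict(list)
        for msg in messages:
            threads[thread_key(msg["subject"])].append(msg)
        for posts in threads.values():
            posts.sort(key=lambda m: m["date"])  # chronological within thread
        return threads

A grouping like this fails exactly where the text above warns it will: when a participant or server changes the subject line. Automated grouping should therefore be checked against quoting patterns and references within the messages themselves.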
Where, What, and Who is the Conversation?
In addition, consider the context. F2f conversations consist of
a relatively well-bounded exchange; the participants, location,
and duration are easier to determine than they are in online
discourse. Online, participants can possess multiple identities,
steal another’s identity, or carry on a conversation with themselves. The conversation not only crosses geographical
boundaries, but may send participants to archives of prior
exchanges, websites, FAQs, and other resources. As a result, the
conversation may not be neatly contained by a single listserv,
chat room, or other discourse environment. Even within a
single environment, the conversation you are interested in may
be no more than a few lines or posts tucked in among other
threads, spam (mass mailings), flames (inflammatory posts),
and announcements. Finally, regarding duration, online
conversations may last minutes or years, and may proceed at the
rate of several exchanges per minute or one exchange every few
weeks or months.
Given these complexities, the best approach is to be aware that
you will have to draw somewhat arbitrary boundaries around a
group of participants and exchanges, letting your choice be led
by your questions. If identifying participants is crucial (perhaps
you suspect that warring factions are trying to discredit one
another by posing as members of the other camp), then you
will have to look for clues that reveal identity and consider how
your interpretations are affected by the possibility of imposters.
If the conversation takes place amongst a small, tightly knit
group with a strong foundation of common knowledge, then
shared spaces like FAQs and group websites become crucial,
and should be included. If there have been significant changes
in the political or educational climate during the course of the
conversation, duration will become important, and the timeline
of the exchange may need careful examination.
You will always have to draw boundaries, and there will never be one right set of boundaries to draw. The important thing is to draw them in such a way that you can explain your reasoning to others, and in a way that allows you to get rich, useful, and dependable answers to the questions that interest you.
Knowing How to Talk Online
We do not analyze f2f conversations without having some experience with f2f conversation, both at an everyday level and at the more finely honed level of a discourse expert. You should also become a participant in online communities before trying to research them, gaining both everyday and scholarly familiarity. Rather than just knowing the basics of navigation and communication, it is important to be fluent in everyday conventions and the online analogs of body language and nonverbal f2f communication. These include “emoticons” (e.g., symbols for smiling 8^), disgust 8-P, and so on), as well as conventions such as SCREAMING BY TYPING IN ALL CAPS, or including actions, such as ::hugs newbie:: or <<grins at newbie>>.
In addition, learn how to relate to participants as individuals; it is easy to fall into the trap of treating them as disembodied voices, or automatons, rather than as complete people. What are their interests online and off? Is their style of conversation friendly, combative, joking, pedantic? What topics will get an emotional reaction from them? What sorts of conversational moves will get a reaction from them (e.g., some people are offended by posts IN ALL CAPS, and will tell the poster to stop shouting)? In an extended conversation with a group, you should get to a point where you can recognize participants without relying solely on usernames.
Final Thoughts
The study of online discourse is still quite new, and there is much about the treatment and analysis of these data that has not yet been addressed. When faced with a situation for which there is no standard procedure, the best course of action is to begin with established techniques and then adapt them to the online environment. Have a rationale for any adaptations or deviations you decide to make: this will help you to establish credibility with editors and peers, and will allow others to adopt, recycle, and refine your approach.
Sarah K. Brem - Division of Psychology in Education - Arizona State University
LESSON 29:
REPORTING THE FINDINGS
Topics Covered
Consider, Target selection, Deciding, Layout, Reports, Medium
Objectives
Upon completion of this Lesson, you should be able to:
• Consider the audience
• Targeting a report
• What medium should be used for reporting
• A long written report
• Deciding the length of a written report
• A suggested report layout
Reporting the Findings
The final stage of a research project is presenting the results.
When you are deciding how to report the findings, you need to
make some decisions: should there be a written report, or is
some other format better? How long should a report be? What
should it include? What style should it use? This chapter will
help with those decisions.
1. Consider the audience
If you are a good audience researcher, the audience you should
think about now is the audience for the research findings: in
other words, the clients. The question here is what is the best
way to communicate the findings to the clients? What will they
understand most readily?
Any good audience researcher will always consider the audience
for the research report, and produce the report in a form that
will be the most useful for the audience. The usual types of
audience for a report are:
• The client organization:
• its management
• its general staff
• the audience that was researched - usually, the public
• funders: government, aid agencies, advertisers, etc.
• future researchers (who may be asked to repeat the research,
years later)
All of these groups have slightly different needs and interests.
Managers usually prefer an overview, with concise recommendations and precise numbers. The staff of a media organization
usually like a fairly detailed narrative report, covering their
particular area of work.
The audience also like narrative reports, but at a more general
level. Funders prefer to discover that the organization is doing
an excellent job, and is spending its money wisely. But they are
very sensitive to being hoodwinked, specially if the report is
being written within the media organization itself.
If all these groups are to be satisfied, you may have to produce
several different versions of the report.
2. Targeting a Report
Survey reports are read by a variety of different people, for different purposes. I find it helpful to divide a report into sections, focusing on a particular type of reader in each section.
For Decision-makers
People who don’t have time to read the whole report (or don’t want to put in the effort required) will want a summary. This can be done in three parts:
• Background information: a summary of how and why the research was done. One page is usually enough.
• Summary of findings, written in plain language, with a
minimum of numbers (but cross-referenced to the detailed
findings).
• Recommendations arising from the survey. These can be
interspersed with the summary of findings, but
recommendations and findings should be clearly
distinguished - for example, by beginning each
recommendation with Recommendation.
• The longer the full report, the longer this summary should
be - but if the summary has more than about 10 pages, the
busy managers may not read it fully.
For Other Researchers (or Yourself, in a Year or Two)
Information on exactly how the survey was done, for anybody
who might want to repeat it. This information is often included
in appendixes to the main report. It includes the full questionnaire, the detailed sample design, fieldwork procedures,
interviewers’ instructions, data entry instructions, and recommendations for future research: how to do it better next time.
If the research is analysed by computer, a list of all the computer files is also useful.
For Specialist Staff
People who work in a particular department will usually be very
interested in what a research study found about their program
or department, but may not have much interest in other
departments. Sometimes I have prepared special short reports
for program-makers with a very strong interest in the research.
It may be enough to give them a computer-printed table, and
show them how to interpret it.
For the Media (and the Audience)
Though some audience members will be very interested in the
research findings, most will have only a casual interest. A short
summary - a page or two of print, or a few minutes of a radio
or TV program - is usually enough. An alternative presentation
for the general public is an interview with the main researchers;
this can be more interesting than a purely factual report.
3. What Medium Should be Used for Reporting?
In the past, most survey results have been presented as written
reports — often 50 to 1,000 pages long. However, a survey
report can take many forms. If it’s in writing, it can be long,
short, or released (like a serial) in several stages. It can be
produced as a poster, instead of a bound report. It can be a
formal presentation, a course, or a workshop. It can take the
form of a radio or TV program, or even a theatrical production.
Which method should you choose? Read on: the answer
depends on the time and funds available, but mostly on the
audiences for the report. (Perhaps presenting audience research
findings needs its own audience research.) Let’s consider the
possibilities, each in turn.
4. A long Written Report
Surveys cost a lot of money. After spending thousands of
rupees on a survey, the people who commissioned it often
expect a large detailed report - because those are the only results
they will see. Because of this expectation, long reports are
written. Perfecting these can take weeks, specially if they have a
lot of graphs. The printing and binding can take more time. By the time such a report is finished, it may no longer be needed - the information may already be out of date.
The problem with big reports is that they’re so daunting:
hundreds of pages, crammed with facts and figures - specially
figures. Hardly anybody likes large tables of numbers - but this
is what most market researchers produce, most of the time.
A typical would-be reader, when given a fat report, will immediately flick through its pages, and decide that it will take hours to absorb. But not now; he or she is too busy. So she puts it aside, and decides to go through it later. But there is always so much work to do: more reports to read, important decisions to be made, lots of meetings. So, very often, the fat report is never read in full.
And you have wasted your time in writing such a large report. (The only consolation is that, in a few years’ time, when somebody else comes along to do a survey, a few parts of a large report could be very useful.)
5. A Short Report
How short is a short report? My suggestion: if a report looks short enough to read in full, as soon as a recipient gets it, then it must be short. The maximum is about half an hour’s reading time, or 20 pages.
A short report has no space for detailed tables of figures - but it can invite readers to consult this additional data, which could be kept in a computer file, and printed out only on demand. It’s usually no extra work to produce these appendixes, because these are documents (such as the questionnaire) already created, and used when writing the short report.
6. A series of Short Reports
If a survey has too many questions for a short report, you can
write several short reports. That way, the readers will get the first
results more quickly. Each report could be a few pages, covering
a few questions in the survey. Distribute these reports several
times a week, and (as I’ve found) they’ll be widely read by your
users. Though you may have to issue some corrections or
additions, the users will become much more involved with the
data.
When all the short reports have been produced, you can
combine them into a longer report, for reference.
7. A Preliminary Report
I don’t recommend producing one long report, but sometimes
clients or others insist on having one. As long reports take a
long time to write, it will be weeks before the clients receive their
report. By the time the final report arrives, parts of it may be
outdated.
In this situation, it’s advisable to produce a preliminary report,
as soon as you have provisional results. Preliminary reports
should not be too detailed; a few hours’ work is enough.
Writing a detailed preliminary report will slow down the
production of the main report —and readers of the first report
are likely not to bother reading a full report.
The simplest way to produce a preliminary report from survey
data is to take a copy of the questionnaire, and write percentage
results next to each answer choice. For questions with numerical
answers, write the average on the questionnaire. Don’t bother
with open-ended questions in a preliminary report — these take
too long to analyse. To supplement this annotated questionnaire, you could write a one-page introduction, with basic data
about the survey: the method used, the exact area covered, the
dates, and the sample size.
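As an illustration, the annotation itself is simple enough to compute with a few lines of Python; the answer lists below are invented, and in practice the figures would come straight from your data file:

    from collections import Counter

    # A closed question: percentage for each answer choice,
    # rounded to the nearest whole number.
    answers = ["yes", "no", "yes", "yes", "don't know", "no", "yes"]
    n = len(answers)
    for choice, count in Counter(answers).most_common():
        print(f"{choice:<12} {round(100 * count / n)}%")

    # A numerical question: write only the average next to it.
    household_sizes = [3, 4, 2, 5, 4, 6, 3]
    print("Average:", round(sum(household_sizes) / len(household_sizes), 1))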
8. Deciding the length of a written report
When you are deciding whether to write a report of 1 page,
1000 pages, or something in between, here are some points to
bear in mind.
• The smaller a report, the more likely it is to be read.
• The longer a report, the more time it takes to write, so the
more likely it is to be out of date when it’s finished.
• If a report is too short, it probably can’t contain the
background information which is necessary to make a
decision. (And it won’t have enough detail to help with the
next survey.)
Usually, between 5 and 30 pages is fine. This applies to the main
section of the report, and doesn’t include any appendixes.
These don’t count — unless they make the report so thick that
people won’t try to read it. The more questions are asked, and
the larger the sample size, the longer the report must be (and
the less likely it will be read in full).
My general recommendations are:
1. If a survey has a sample less than 500, and less than 20
questions, do a short report (20 pages maximum)
2. If a survey has a sample more than 500, or more than 20
questions, do a series of short reports — one for each group
of questions, produced at least once a week.
3. If you must do a long report (e.g. because a sponsor insists),
precede it with a preliminary report, and follow it up with
some other way of communicating the information to the
clients, such as a presentation, course, or workshop.
Presentations
Live presentations, using software such as Powerpoint or
Freelance, are becoming more and more popular. If computerized facilities aren’t available, overhead projectors or flip charts
can be used instead. Though a computer presentation looks
more advanced, it doesn’t provide any information that a handdrawn chart cannot also do.
A typical presentation lasts from 30 minutes to 1 hour, has up to 40 slides or overheads, and is presented by the chief researchers.
After the presentation, the audience (usually a group of senior
staff - often 10 to 20 people) asks questions and the presenters
answer them. Audiences find it more interesting to have several
presenters than one single voice.
I find the most effective type of presentation displays graphs
and figures. The researchers don’t read these aloud - the
audience can see them perfectly well — but instead explain and
discuss the findings, engaging in a dialogue with the audience.
In these dialogue sessions, large blank sheets of paper should
be available, on which one presenter writes any conclusions or
requests for further analysis.
One problem with giving presentations to broadcasters and
media people is that they are used to a high standard of
presentation in programs, so researchers must present findings
very well indeed to gain the respect of their audience. So unless
you are a very experienced presenter, you should rehearse each
presentation (e.g. with other researchers) before giving it to
the real audience. After each rehearsal, you usually find several
ways of improving the presentation, to make it clearer and
more interesting.
It’s unusual for a presentation to completely replace a written
report, but often the written report is shorter when a presentation is made. Handing out copies of the slides or overheads is not a substitute for a real report - too much is left unexplained. What works best in speech is not so effective in writing.
Courses
A problem with a lot of presentations is that the audience - typically the staff of the client organization - are expected to absorb all the survey findings in a short time. When a research project produces a large amount of information, and the staff need more time to understand it, it can be more effective to present the results as a course.
Workshops
A workshop presentation normally lasts between a few hours
and a full day. The main difference between a workshop and a
course is that a course only presents the results - it doesn’t
consider how to use them. A workshop will usually have
decision-makers and managers as participants. The participants
(usually 5 to 10 of them) not only receive the results, but also
consider how the results can be used in changing the programs.
LESSON 30:
REPORTING THE FINDINGS - THE FORM OF TV PROGRAMS
Topics Covered
Consider, Target selection, Deciding, Layout, Reports, Medium
Objectives
Upon completion of this Lesson, you should be able to:
• Consider the audience
• Targeting a report
• What medium should be used for reporting
• A long written report
• Deciding the length of a written report
• A suggested report layout
Reports in the Form of TV Programs
Video presentations are also possible. These can be a mixture
of interview and slide presentations, perhaps with some shots
of the research interviewing and analysis.
The effort of producing such a program shouldn’t be seen as
wasteful. If you are dealing with TV people, and need to
convince them of the results of the survey, video is the format
that they are most comfortable with, and will respond to best.
Because figures and graphs can be shown in a video, there may
be no need for a supplementary written report.
Video “reports” can be surprisingly effective. In one project, I was told that the managers never bothered to read written reports. As the
research was qualitative, using focus groups, we videotaped all
the focus group proceedings, and produced a videotape of
edited highlights from the groups, showing the most common
findings in the research participants’ own words. This conveyed
the survey results very clearly. It would have been better still if
we’d been able to add still shots of graphs, and written
summaries and introductions.
Which is the Best Type of Report?
The answer to this question depends on...
• How educated and knowledgeable about audience research is
the audience to the report?
• How much time will they have to absorb it?
• How many people need the results?
• How keen they are to implement the results?
If the audience (i.e. the people receiving the report) know a lot about research methods, are very interested in the results, and have plenty of time to study the results, then a full written report is best. This hardly ever happens, so instead of a long report, I recommend a short report (for a short, simple survey) or a series of short reports (for a more complex survey).
If the audience is poorly informed, and not particularly interested, a video or puppet-theatre report may be best.
If the audience is inexperienced with the results of audience research, and has plenty of time, a course is often best.
If the audience includes a wide variety of people with a wide variety of interest levels (e.g. the staff of a TV station) the poster method often works well.
If the research is done for a radio station whose staff prefer to communicate by sound rather than in writing, a spoken presentation or taped report can be best.
If a number of decisions need to be made, and the clients are genuinely looking for research advice but aren’t sure how to interpret it, a course or workshop is often the best solution.
Reports In Detail
A suggested Report Layout
An effective report will address each of the research issues in
turn, discussing the factors related to that issue, and introducing
the research data as evidence on the issue.
A less effective (but more common) way to present reports is to
give the results from each survey question, in the order that the
questions were asked.
Sometimes, the two approaches produce very similar reports.
The main difference is that issue-by-issue reports approach each
issue from the point of view of a manager who must make a
decision about something. The question-by-question reports
are more like a catalogue of miscellaneous findings, often not
relating each question to the actions that can be taken from its
results.
A common layout for a written report is:
Part 1: Introduction
• Contents (1 or 2 pages)
• Introduction, perhaps by an important person
• Summary of how the survey was done (1-2 pages)
• Summary of main conclusions (1-3 pages)
• Recommendations about the findings (1-2 pages)
• (or conclusions and recommendations combined)
Part 2: Detailed Findings
...for each main internal question:
• Describing the internal question (1 page)
• For each survey question dealing with that internal question: 1-2 pages
• Conclusions about the internal question (1 page)
Part 3: Appendix
• Summary of sample design (1-2 pages)
• Details of the survey methods used (1-2 pages)
• Copy of the questionnaire - preferably as filled in by a real
respondent - but excluding or changing details which might
identify that person.
• Text of open-ended comments (2-10 pages, if present)
• Recommendations on survey methods -for next time (1
page)
• How to contact people who worked on the survey (1 page)
Each of the three parts is intended for a different audience.
• Part 1 is mainly for people who don’t have time to read all
the detail in the rest of the report, or who aren’t interested in
it.
• Part 2 is mainly for staff in the organization the survey
covered.
• Part 3 is mainly for other researchers. (The detail is
particularly useful when a follow-up survey is done, perhaps
years later. )
Many market research reports include an enormous set of tables
in the appendix: often hundreds of pages. I don’t agree with
this: if the tables are important, they should be discussed in
the main part of the report. If numbers aren’t important,
there’s no need to include them in the report, because nobody
will read them. If they are presented without explanation,
probably no readers will understand them fully. Maybe it’s best
to print off a few copies of the tables, show them only to
people who ask, and keep each copy in a safe, separate place —
because next time a survey is done, the tables could be very
useful.
How to Write up Detailed Findings (Part 2 of a
Report)
For each issue: state what needs to be decided — the internal
question.
List the survey questions which were chosen to give evidence on
the internal question.
For each survey question:
1. Give the full wording of the question — showing whether
the answers were prompted or open-ended.
2. State who was asked the question. (All respondents? Only
some?)
3. Give a table and/or graph of results.
4a. If the question was a nominal one, list each possible answer and the percentage who gave that answer.
• If the question allowed one answer, show that the total of all answers was 100%.
• If the question allowed more than one answer, say so.
• State the number of respondents corresponding to 100%.
4b. If the question asked for a numerical answer, and there were more than about 10 possible answers, don’t show the number who gave each separate answer. Instead show:
• The minimum answer
• The maximum answer
• The average
Also show a distribution of answers. You could include a line graph, or mention the upper and lower quartiles (the answers given by the lowest 25% and the highest 25% of those answering).
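A minimal sketch of this step-4b summary in Python, using invented answers to a numerical question (Python’s statistics module provides the quartiles directly):

    import statistics

    answers = [1, 2, 2, 3, 3, 3, 4, 4, 5, 6, 7, 8]

    print("Minimum:", min(answers))
    print("Maximum:", max(answers))
    print("Average:", round(statistics.mean(answers), 1))

    # quantiles(n=4) returns the three quartile cut points;
    # the first and last are the lower and upper quartiles.
    q1, _, q3 = statistics.quantiles(answers, n=4)
    print("Lower quartile:", q1)
    print("Upper quartile:", q3)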
5. Summarize the results in a few sentences. Focus on the
meaning, rather than repeating the numbers shown in the
table.
6. If the sample size is large enough (i.e. at least 200 people
answered the question) and there are significant differences
between subgroups, describe these — perhaps in a table
#EG literacy by age/sex - Annotated examples
How to Present the Findings: as Words, Numbers, or
Graphs?
Written reports usually have three main elements: words, numbers, and graphs. A lot of research reports emphasize one of these, and pay little attention to the other two.
Many research reports are mostly tables of numbers, with only a
few pages of written explanation and no graphs.
I’ve found that readers of audience research reports usually have
a clear preference for one type of presentation - based on their
training and their work: journalists usually prefer words,
accountants prefer numbers, and TV producers prefer graphs.
Managers vary, depending on their training and background.
A successful report should balance all three of these components, giving similar information in various ways - but without
exact repetition. This makes it easier for readers to understand.
Explaining the results in words
Written reports
A lot of research reports are written in this style:
74.3% of respondents agreed that the Dainik Bhaskar had too
many crossword puzzles, while 59.6% agreed that it had
insufficient coverage of local news. 69.4% of women and 49%
of men said that the Dainik Bhaskar had too little local news, as
did 43.1% of those aged 15 to 24, 57.8% of those in the 25-44
age group, and 65.5% of those aged 45 and over.
This is very precise writing, but also difficult to understand. The
reader must go over it several times, to work out exactly what it
means. Bearing in mind the imprecision of surveys, there’s no
point in giving results to the nearest 0.1% - the nearest 1% is
quite enough.
Here’s a simplified version of the above passage, laid out for
better readability.
Readers of the Dainik Bhaskar were asked “What do you think
the Dainik Bhaskar has too much of, or too little of?”
Too much of...
• Crossword puzzles 74%
Too little of:
• Local news 60%
What sort of people thought the Dainik Bhaskar had too little
local news?
• 69% of women
• 49% of men
• 43% of people aged 15 to 24
• 58% of people aged 25 to 44
• 66% of people aged 45 and over.
Spoken Reports
If a report is in writing, the above format makes it clearer; but if
a report is spoken (e.g. as a radio program or a talk) listeners
don’t get a chance to hear it again. In this case, it’s even more
important to make sure the result is easily understood on first
hearing. Here’s an example of wording which is difficult to
understand when heard:
26% of people said they were very happy with FM99’s service, while 37% said they were reasonably happy, and 31% said they weren’t happy at all. The other 6% couldn’t decide.
This is almost incomprehensible, on first hearing. Presenting
the same information in a less precise (and slightly more
repetitive) way actually makes it easier to understand:
When asked how happy they were with FM99’s service, listeners were divided into three groups, with about equal numbers in each. One third were very happy with FM99’s service, one third were reasonably happy, and the other third weren’t happy at all.
Bearing in mind the sampling error on most surveys, describing 26% as “one third” and not mentioning the 6% who couldn’t decide is not misleading.
Explaining the Results in Numbers
Here are some principles for presenting numbers in research reports. All of these help to communicate the findings, and make it easier for readers to understand the data correctly.
• Always include the full wording of a question near the distribution of answers.
• For every percentage: state what it is a percentage of. (The whole sample? Some sub-group?)
• If a table is not based on the whole sample, but a sub-group, say so. Show the number of respondents which corresponds to 100%.
• If a set of percentages adds to 100% horizontally or vertically, include a column or row to show this.
• For questions which can have more than one answer, in which percentages can add to more than 100% of people (but always add to 100% of answers to the question), show whether the percentage is based on people or answers. If the percentage is based on people, include a note such as “Percentages add to more than 100% because some people gave more than one answer.” If the percentage is based on answers, say so.
• In complex tables, give a verbal example of how to read the first figure (i.e. top left) - e.g. “74.3% of all respondents said the Dainik Bhaskar had too many crossword puzzles.”
• In two-way tables (one variable in each row of numbers, and another in each column) show whether percentages are based on the row total, the column total, the grand total, or some other base.
• Readers are confused by percentages on varying bases, so if the survey was a random one, consider showing projections instead of percentages. For example, if the population covered was 50,000, a table could show a figure of 25,000 instead of 50%. (But for this to be valid, everybody in the population must have had an equal chance of being included - and you must know the population size.)
• As questions using aided recall (e.g. “Have you heard of FM99?”) mostly produce higher figures than questions with unaided recall (“Which radio stations have you heard of?”), always state whether respondents simply had to say Yes to a suggested response, or had to think of an answer themselves. But this is not necessary when everybody would be expected to know all possible answers - e.g. “What is your age?”
• You can make large tables easier to interpret by circling any figures which are particularly significant: e.g. the largest or smallest in a row or column, or any figure discussed in the text.
Presenting Data in Graphs
There are many different types of graph or chart, but most are not used in audience research. Those used most often include:
• Pie charts
• Vertical bar charts (histograms)
• Horizontal bar charts (also histograms)
• Line charts
• Pictograms
• Area charts
Though many different kinds of graph are possible, if a report includes too many types, it’s often confusing for readers, who must work out how to interpret each new type of graph, and why it is different from an earlier one. I recommend using as few types of graph as are necessary.
If you have a spreadsheet or graphics program, such as Excel, it’s very easy to produce graphs. You simply enter the numbers and labels in a table, click a symbol to show which type of graph you want, and it appears before your eyes. These graphs are usually not very clear when first produced, but the software has many options for changing headings, scales, and graph layout. You can waste a lot of time perfecting these graphs. Excel (actually, Microsoft Graph, which Excel uses) has dozens of options, and it takes a lot of clicking of the right-hand mouse button to discover them all. If you don’t have a recent and powerful computer, this can be a very slow and frustrating program to use.
The main types of graph include pie charts, bar charts (histograms), line charts, area charts, and several others.
Pie Chart
A round graph, cut (like a pie) into slices of varying size, all adding to 100%. Because a pie chart is round, it’s useful for communicating data which takes a “round” form: for example, the answers to “How many minutes in each hour would you like FM99 to spend on news, music, and other spoken programs?” In this case, the pie corresponds to a clock face, and the slices can be interpreted as fractions of an hour.
Pie charts are easily understood when the slices are similar in size, but if several slices are less than 5%, it can be quite difficult to read a pie chart. In that case the chart has to be very big, taking perhaps half a page to convey one set of numbers. Not a very efficient way to display information.
#PIE CHART
Vertical Bar Chart
Also known as a histogram. A very common type of graph,
easily understood. But when one of these charts has more than
about 6 vertical bars, there’s very little space below each bar to
explain what it’s measuring.
#V. BAR CHART
Horizontal Bar Chart
Exactly like a vertical bar chart, but turned sideways. The big
advantage of the horizontal bar chart is that you can easily read a
description with more than one word. Unfortunately, most
graphics software displays the bars upside down - you’re
expected to read from the bottom, upwards to the top.
However, you don’t need graphics software to produce a
horizontal bar chart: you can do it easily with a word processing
program. One of the easiest ways to do this is to use the |
symbol to produce the bars. This symbol is usually found on
the \ key; it is not a lower-case L or upper-case I or number 1. It
stands out best in bold type. For example:
Q14. SEX OF RESPONDENT (100.0% = 325 cases)
Male    47.4%  ||||||||||||||||||||||||
Female  52.6%  ||||||||||||||||||||||||||
If each symbol represents 2% of the sample, you can usually fit the graph on a single line. Round each figure to the nearest 2% to work out how many times to press the symbol key. In the above example, 47.4% is closer to 48% than to 46%, so I pressed the | key 24 times to graph the percentage of men. This is a very clear layout, and quick to produce, so it is well suited to a preliminary report.
A more elaborate looking graph can be made by using special symbols. For example, if you have the font Wingdings, you can use the shaded-box symbol: q
This is wider than the | symbol, and no more than about 20 will fit on a normal-width line, if half the line is taken up with the description and the percentage. Therefore, one q should be equivalent to 5%:
Q14. SEX OF RESPONDENT (100.0% = 325 cases)
Male    47.4%  qqqqqqqqq
Female  52.6%  qqqqqqqqqqq
Pictogram
Like a bar chart, a pictogram can be either vertical or horizontal, but instead of showing a solid bar, a pictogram shows a number of symbols - e.g. small diagrams of people. In fact, the above bar chart with the q symbol is a crude type of pictogram. But unlike a bar chart made of entire q symbols, a pictogram shows partial symbols. If one little man means 4%, and the figure to be graphed is 5%, you see one and a quarter little men.
#PICTOGRAM
Domino Chart
You won’t find this mentioned in books on statistics, because I made it up. It was an invention that seemed to be required, and is best described as a two-dimensional pictogram. It’s named after the game of dominoes, in which each piece has a number of round blobs in a rectangular block.
Just as you use a bar chart or pictogram when graphing one nominal variable, a domino chart is used to compare two nominal variables - it’s the graphical equivalent of a crosstabulation. It is used to show the results of inquiries such as “Do more men than women listen to FM99?” In this case, the two variables are sex and listening to FM99.
Suppose 71% of respondents listen to FM99, and 52% are men. To find the sex breakdown of FM99 listeners, you need to produce a crosstab. The resulting table might look like this:
          Listen to FM99?
          Yes     No      Total
Male      40%     12%     52%
Female    31%     17%     48%
Total     71%     29%     100%
To produce the domino chart, take each percentage in the main part of the table (not counting the Total row or column), divide each figure by 4, round it to the nearest whole number, and type in that number of blobs. “Why divide by 4?” you may wonder. It’s because each blob is equivalent to 4% of the people answering both questions. The figure doesn’t need to be 4; it could be 5 or 2, though a 5% blob often isn’t quite detailed enough, while 2% produces so many blobs that it’s harder to interpret the table at a glance. Note that, with one blob equalling 4%, there should be 25 blobs in the whole table - though occasionally, due to rounding, there will be 24 or 26.
Here’s a domino chart of the above table:
          Listen to FM99?
          Yes             No
Male      ••••••••••      •••
Female    ••••••••        ••••
Though the chart has less detail than the table, most people can
understand it instantly. It shows that slightly more men than
women listen to FM99.
Domino charts are specially useful when you have a group of
tables, and you want to compare the answers. A lot of domino
charts can fit onto a single page. The readers’ eyes will be drawn
to the cells with the largest and smallest numbers of blobs.
This is a very simple graph: easily understood, and easily
produced. Though graphics software doesn’t do domino charts
(yet) you can create a domino graph with the blob symbol in a
word-processing program.
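Both recipes reduce to a rounding rule plus repetition, so they are easy to script. Here is a minimal Python sketch using the FM99 figures from the examples above; only the rounding rules (2% per | symbol, 4% per blob) come from the text, and everything else is illustrative:

    def bar(pct, unit=2.0, symbol="|"):
        # One row of a text bar chart: round to the nearest `unit`
        # percent, then print one symbol per unit.
        return symbol * round(pct / unit)

    print("Male   47.4%", bar(47.4))   # 24 symbols
    print("Female 52.6%", bar(52.6))   # 26 symbols

    # Domino chart: one blob per 4% of people answering both questions.
    crosstab = {("Male", "Yes"): 40, ("Male", "No"): 12,
                ("Female", "Yes"): 31, ("Female", "No"): 17}
    for (sex, listens), pct in crosstab.items():
        print(f"{sex:<7} {listens:<4}", "•" * round(pct / 4))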
Line Chart
This is used when the variable you are graphing is a numeric
one. In audience research, most variables are nominal, not
numeric, so line charts aren’t needed much. But to plot the
answers to a question such as “How many people live in your
household?” you could produce a graph like this:
#LINE CHART
It’s normal to show the measurement (e.g. percentage) on the vertical axis, and the scale (e.g. hours per week) on the horizontal axis. Unlike a bar chart, a line chart will confuse people if the scales are exchanged. You’ll find that almost every line chart has a peak in the middle, and falls off to each side, reflecting what’s known as the “normal curve.”
A line chart is really another form of a vertical bar chart. You
could turn a vertical bar chart into a line chart by drawing a line
connecting the top of each bar, then deleting the bars.
A line chart can have more than one line. For example, you
could have a line chart comparing the number of hours per
week that men and women watch TV. There’d be two lines, one
for each sex. Each line needs to be shown with a different style,
or a different colour. With more than 3 or 4 lines, a line chart
becomes very confusing, specially when the lines cross each
other.
Area Chart
In a line chart with several lines - such as the above example, with two sexes - each line starts from the bottom of the chart. That way, you can compare the height of the lines at any point. An area chart is a little different, in that each line starts from the line below it. So you don’t compare the heights of the lines, but the areas between them; together, the areas always add up to 100%.
You can think of an area chart as a lot of pie charts, flattened
out and laid end-to-end.
A common use of area charts in audience research is to show
how people’s behaviour changes across the 24 hours of the day.
The horizontal scale runs from midnight to midnight, and the
vertical scale from 0 to 100%. This area chart, taken from a
survey in Vietnam, shows how people divide their day into
sleep, work, watching TV, listening to radio, and everything else.
#AREA CHART
An area chart needs to be studied closely: the results aren’t
obvious at a glance. However, area charts provide a lot of
information in a small space.
Information About the Survey
Enough information needs to be supplied to enable informed
readers to judge the likely accuracy of the results. Therefore, you
need to include this type of information about how the
research was done:
• The research technique (survey, focus groups, observation,
etc.).
• The delivery mode (by telephone, by mail, etc).
• The population surveyed.
• Where the research was done (The whole country? One
region? One city?)
• When the fieldwork was done: the exact dates, if possible.
• The number of respondents interviewed.
• The organization that did the research.
• The organization (if any) that was identified as
commissioning the research.
• The response rate.
• The types of people who did the interviewing or fieldwork.
• Whether the information reported here was a full research
project on its own, or whether it formed a part of another
research project.
• Any other factor which may have influenced the results.
• Likely sampling error on a figure of 50%, at the 68% level of
confidence.
The above list is a long one, and the background data about a
research project can fill many pages if you include a lot of detail.
However it’s possible to include most of this information in
one paragraph — for example:
“This information comes from a telephone survey, with
interviews from 1050 people aged 18 and over living in the
Adelaide region. The research was done between 5 and 19 May
2000 by Audience Dialogue, using their trained interviewers.
The response rate was 77%. The survey was commissioned by
radio FM99, but respondents were not told this. It’s possible
that the results were influenced by FM101, which reported on
17 May that the survey was taking place, and urged respondents
to say that they only listened to FM101. Because only a sample
of the population was included, figures in this survey are likely
to vary from the true population figures by about 1.5% on
average.”
That explanation provides the information that well-informed
readers need, to judge the likely accuracy of the survey.
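The “about 1.5%” figure in that example is simply the standard error of a proportion from a simple random sample, which is worth being able to reproduce. A short check in Python (the formula is standard; the n and p values come from the example above):

    from math import sqrt

    n, p = 1050, 0.5   # p = 0.5 gives the worst-case (largest) error
    se = sqrt(p * (1 - p) / n)
    print(f"Sampling error at the 68% confidence level: {100 * se:.1f}%")
    # prints 1.5% - one standard error corresponds to roughly 68%
    # confidence, matching the list of survey details given earlier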
Should a Report Include Recommendations?
One area of argument is whether a research report should
include recommendations. The survey analyst, after spending
weeks with the data, understands it better than anybody else
ever will, and is in a good position to recommend certain
courses of action.
Usually, though, the analyst is not aware of all the constraints on action, so these recommendations can be impractical and often are not acted on. Managers, for their part, will often dismiss recommendations as impractical simply because they have not considered them in detail.
Some researchers believe their job is only to produce the results,
and that it is up to users to make any recommendations. My
experience is that untrained users find it very difficult to draw
recommendations from research data, and if the researcher
doesn’t make recommendations, the results of the research are
often not acted on.
I’ve found that recommendations are best made not by the
researcher or users alone, but by both groups working together.
Soon after a survey report has been sent out, arrange a workshop session in which the implications of the survey can be
discussed, and recommendations formed, or decisions made.
This sort of workshop lasts from a few hours to a whole day depending on the number of questions in the survey, and how
much disagreement is expressed.
Assignments
1. Report writing is an art that hinges upon practice and experience. Explain.
LESSON 31:
COPYRIGHT ACT- 1957
Topics Covered
Titles, Interpretations, Meaning, Reproduce, Copy, Rights
Objectives
Upon completion of this Lesson, you should be able to:
• Short title, extent and commencement of Copyright Act
• Interpretation of the content
• Meaning of publication
• Works in which copyright subsists
• Understanding the right of the author
• License to reproduce and publish works for certain purposes
Copy Right
1. Short Title, Extent and Commencement
1. This Act may be called the Copyright Act, 1957.
2. It extends to the whole of India.
3. It shall come into force on such date as the Central Government may, by notification in the Official Gazette, appoint.
2. Interpretation
In this Act, unless the context otherwise requires,-
a. “adaptation” means,-
i. in relation to a dramatic work, the conversion of the work into a non-dramatic work;
ii. in relation to a literary work or an artistic work, the conversion of the work into a dramatic work by way of performance in public or otherwise;
iii. in relation to a literary or dramatic work, any abridgement of the work or any version of the work in which the story or action is conveyed wholly or mainly by means of pictures in a form suitable for reproduction in a book, or in a newspaper, magazine or similar periodical;
iv. in relation to a musical work, any arrangement or transcription of the work; and
v. in relation to any work, any use of such work involving its re-arrangement or alteration;
b. “work of architecture” means any building or structure having an artistic character or design, or any model for such building or structure;
c. “artistic work” means-
i. a painting, a sculpture, a drawing (including a diagram, map, chart or plan), an engraving or a photograph, whether or not any such work possesses artistic quality;
ii. a work of architecture; and
iii. any other work of artistic craftsmanship;
d. “author” means-
i. in relation to a literary or dramatic work, the author of the work;
ii. in relation to a musical work, the composer;
iii. in relation to an artistic work other than a photograph, the artist;
iv. in relation to a photograph, the person taking the photograph;
v. in relation to a cinematograph film or sound recording, the producer; and
vi. in relation to any literary, dramatic, musical or artistic work which is computer-generated, the person who causes the work to be created;
dd. “broadcast” means communication to the public-
i. by any means of wireless diffusion, whether in any one or more of the forms of signs, sounds or visual images; or
ii. by wire, and includes a re-broadcast;
e. “calendar year” means the year commencing on the 1st day of January;
f. “cinematograph film” means any work of visual recording on any medium produced through a process from which a moving image may be produced by any means and includes a sound recording accompanying such visual recording, and “cinematograph” shall be construed as including any work produced by any process analogous to cinematography including video films;
ff. “communication to the public” means making any work available for being seen or heard or otherwise enjoyed by the public directly or by any means of display or diffusion other than by issuing copies of such work, regardless of whether any member of the public actually sees, hears or otherwise enjoys the work so made available.
Explanation: For the purposes of this clause, communication through satellite or cable or any other means of simultaneous communication to more than one household or place of residence including residential rooms of any hotel or hostel shall be deemed to be communication to the public;
ffa. “composer”, in relation to a musical work, means the person who composes the music regardless of whether he records it in any form of graphical notation;
ffb. “computer” includes any electronic or similar device having information processing capabilities;
ffc. “computer programme” means a set of instructions expressed in words, codes, schemes or in any other form, including a machine readable medium, capable of causing a computer to perform a particular task or achieve a particular result;
ffd. “copyright society” means a society registered under sub-section (3) of section 33;
g. “delivery”, in relation to a lecture, includes delivery by means of any mechanical instrument or by broadcast;
h. “dramatic work” includes any piece for recitation, choreographic work or entertainment in dumb show, the scenic arrangement or acting form of which is fixed in writing or otherwise, but does not include a cinematograph film;
hh. “duplicating equipment” means any mechanical contrivance or device used or intended to be used for making copies of any work;
i. “engravings” include etchings, lithographs, wood-cuts, prints and other similar works, not being photographs;
j. “exclusive licence” means a licence which confers on the licensee, or on the licensee and persons authorised by him, to the exclusion of all other persons (including the owner of the copyright), any right comprised in the copyright in a work, and “exclusive licensee” shall be construed accordingly;
k. “government work” means a work which is made or published by or under the direction or control of-
i. the government or any department of the government;
ii. any Legislature in India;
iii. any court, Tribunal or other judicial authority in India;
l. “Indian work” means a literary, dramatic or musical work,-
i. the author of which is a citizen of India; or
ii. which is first published in India; or
iii. the author of which, in the case of an unpublished work, is, at the time of the making of the work, a citizen of India;
m. “infringing copy” means,-
i. in relation to a literary, dramatic, musical or artistic work, a reproduction thereof otherwise than in the form of a cinematographic film;
ii. in relation to a cinematographic film, a copy of the film made on any medium by any means;
iii. in relation to a sound recording, any other recording embodying the same sound recording, made by any means;
iv. in relation to a programme or performance in which such a broadcast, reproduction right or a performer’s right subsists under the provisions of this Act, the sound recording or a cinematographic film of such programme or performance,
if such reproduction, copy or sound recording is made or imported in contravention of the provisions of this Act.
n. “lecture” includes address, speech and sermon;
o. “literary work” includes computer programmes, tables and compilations including computer databases;
p. “musical work” means a work consisting of music and includes any graphical notation of such work but does not include any words or any action intended to be sung, spoken or performed with the music;
q. “performance”, in relation to performer’s right, means any visual or acoustic presentation made live by one or more performers;
qq. “performer” includes an actor, singer, musician, dancer, acrobat, juggler, conjurer, snake charmer, a person delivering a lecture or any other person who makes a performance;
r. [* * *]
s. “photograph” includes photo-lithograph and any work produced by any process analogous to photography but does not include any part of a cinematograph film;
t. “plate” includes any stereotype or other plate, stone, block, mould, matrix, transfer, negative, duplicating equipment or other device used or intended to be used for printing or reproducing copies of any work, and any matrix or other appliance by which sound recordings for the acoustic presentation of the work are or are intended to be made;
u. “prescribed” means prescribed by rules made under this Act;
uu. “producer”, in relation to a cinematograph film or sound recording, means a person who takes the initiative and responsibility for making the work;
x. “reprography” means the making of copies of a work, by photocopying or similar means;
xx. “sound recording” means a recording of sounds from which such sounds may be produced regardless of the medium on which such recording is made or the method by which the sounds are produced;
y. “work” means any of the following works, namely,-
i. a literary, dramatic, musical or artistic work;
ii. a cinematograph film;
iii. a sound recording;
z. “work of joint authorship” means a work produced by the collaboration of two or more authors in which the contribution of one author is not distinct from the contribution of the other author or authors;
za. “work of sculpture” includes casts and models.
3. Meaning of Publication
For the purposes of this Act, “publication” means making a work available to the public by issue of copies or by communicating the work to the public.
4. When Work not Deemed to be Published or Performed in Public
Except in relation to infringement of copyright, a work shall not be deemed to be published or performed in public, if published or performed in public, without the licence of the owner of the copyright.
5. When Work Deemed to be First Published in India
For the purposes of this Act, a work published in India shall be deemed to be first published in India, notwithstanding that it has been published simultaneously in some other country, unless such other country provides a shorter term of copyright for such work; and a work shall be deemed to be published simultaneously in India and in another country if the time between the publication in India and the publication in such other country does not exceed thirty days or such other period as the Central Government may, in relation to any specified country, determine.
7. Nationality of Author where the Making of
Unpublished Work is Extended Over Considerable
Period
Where, in the case of an unpublished work, the making of the
work is extended over a considerable period, the author of the
work shall, for the purposes of this Act, be deemed to be a
citizen of, or domiciled in, that country of which he was a
citizen or wherein he was domiciled during any substantial part
of that period.
13. Works in which Copyright Subsists
1. Subject to the provisions of this section and the other
provisions of this Act, copyright shall subsist throughout
India in the following classes of works, that is to say,-
a. in the case of a literary, dramatic or musical work, not being a
computer programme,-
i.
to reproduce the work in any material form
including the storing of it in any medium by electronic
means;
ii. to issue copies of the work to the public not being copies
already in circulation;
iii. to perform the work in public, or communicate it to the
public;
iv. to make any cinematograph film or sound recording in
respect of the work;
a.
original literary, dramatic, musical and artistic works;
v. to make any translation of the work;
b.
cinematograph films; and
c.
10
vi. to make any adaptation of the work;
vii. to do, in relation to a translation or an adaptation of the
work, any of the acts specified in relation to the work in subclauses (i) to (vi);
[sound recording].
2. Copyright shall not subsist in any work specified in
sub-section (1), other than a work to which the provisions of
section 40 or section 41 apply, unless,-
i. in the case of a published work, the work is first
published in India, or where the work is first
published outside India, the author is at the date of
such publication, or in a case where the author was
dead at that date, was at the time of his death, a citizen
of India;
ii. in the case of an unpublished work other than 12[work
of architecture], the author is at the date of the making
of the work a citizen of India or domiciled in India;
and
iii. in the case of a 12[work of architecture], the work is
located in India.
Explanation : In the case of a work of joint authorship, the
conditions conferring copyright specified in this sub-section
shall be satisfied by all the authors of the work.
3. Copyright shall not subsist-
a. in any cinematograph film if a substantial part of the
film is an infringement of the copyright in any other work;
b. in any 10[sound recording] made in respect of a literary,
dramatic or musical work, if in making the 10[sound
recording], copyright in such work has been infringed.
4. The copyright in a cinematograph film or a 10[sound
recording] shall not affect the separate copyright in any work
in respect of which or a substantial part of which, the film,
or as the case may be, the 10[sound recording] is made.
5. In the case of a 4[work of architecture], copyright shall
subsist only in the artistic character and design and shall not
extend to processes or methods of construction.
4[14. Meaning of Copyright
For the purposes of this Act, "copyright" means the exclusive
right, subject to the provisions of this Act, to do or authorise
the doing of any of the following acts in respect of a work or
any substantial part thereof, namely,-
a. in the case of a literary, dramatic or musical work, not being
a computer programme,-
i. to reproduce the work in any material form
including the storing of it in any medium by electronic
means;
ii. to issue copies of the work to the public not being copies
already in circulation;
iii. to perform the work in public, or communicate it to the
public;
iv. to make any cinematograph film or sound recording in
respect of the work;
v. to make any translation of the work;
vi. to make any adaptation of the work;
vii. to do, in relation to a translation or an adaptation of the
work, any of the acts specified in relation to the work in
sub-clauses (i) to (vi);
b. in the case of a computer programme,-
i. to do any of the acts specified in clause (a);
14[(ii) to sell or give on commercial rental or offer for sale or
for commercial rental any copy of the computer programme:
PROVIDED that such commercial rental does not apply in
respect of computer programmes where the programme
itself is not the essential object of the rental.]
c. in the case of an artistic work,-
i. to reproduce the work in any material form including
depiction in three dimensions of a two dimensional
work or in two dimensions of a three dimensional
work;
ii. to communicate the work to the public;
iii. to issue copies of the work to the public not being
copies already in circulation;
iv. to include the work in any cinematograph film;
v. to make any adaptation of the work;
vi. to do in relation to an adaptation of the work any of
the acts specified in relation to the work in sub-clauses
(i) to (iv);
d. in the case of a cinematograph film,-
i. to make a copy of the film, including a photograph of
any image forming part thereof;
ii. to sell or give on hire or offer for sale or hire, any copy
of the film, regardless of whether such copy has been
sold or given on hire on earlier occasions;
iii. to communicate the film to the public;
e. in the case of a sound recording-
i. to make any other sound recording embodying it;
ii. to sell or give on hire, or offer for sale or hire, any copy of
the sound recording regardless of whether such copy has
been sold or given on hire on earlier occasions;
iii. to communicate the sound recording to the public.
Explanation: For the purposes of this section, a copy which
has been sold once shall be deemed to be a copy already in
circulation.]
16. No copyright except as provided in this Act
No person shall be entitled to copyright or any similar right in
any work, whether published or unpublished, otherwise than
under and in accordance with the provisions of this Act or of
any other law for the time being in force, but nothing in this
section shall be construed as abrogating any right or jurisdiction
to restrain a breach of trust or confidence.
First Owner of Copyright
Subject to the provisions of this Act, the author of a work shall
be the first owner of the copyright therein:
PROVIDED that-
a. in the case of a literary, dramatic or artistic work made by the
author in the course of his employment by the proprietor of
a newspaper, magazine or similar periodical under a contract
of service or apprenticeship, for the purpose of publication
in a newspaper, magazine or similar periodical, the said
proprietor shall, in the absence of any agreement to the
contrary, be the first owner of the copyright in the work
insofar as the copyright relates to the publication of the work
in any newspaper, magazine or similar periodical, or to the
reproduction of the work for the purpose of its being so
published, but in all other respects the author shall be the
first owner of the copyright in the work;
b. subject to the provisions of clause (a), in the case of a
photograph taken, or a painting or portrait drawn, or an
engraving or a cinematograph film made, for valuable
consideration at the instance of any person, such person
shall, in the absence of any agreement to the contrary, be the
first owner of the copyright therein;
c. in the case of a work made in the course of the author’s
employment under a contract of service or apprenticeship, to
which clause (a) or clause (b) does not apply, the employer
shall, in the absence of any agreement to the contrary, be the
first owner of the copyright therein;
5[(cc) in the case of any address or speech delivered in public,
the person who has delivered, such address or speech or if
such person has delivered such address or speech on behalf
of any other person, such other person shall be the first
owner of the copyright therein notwithstanding that the
person who delivers such address or speech, or, as the case
may be, the person on whose behalf such address or speech
is delivered, is employed by any other person who arranges
such address or speech or on whose behalf or premises such
address or speech is delivered;]
d. in the case of a government work, government shall, in the
absence of any agreement to the contrary, be the first owner
of the copyright therein;
5[(dd) in the case of a work made or first published by or
under the direction or control of any public undertaking,
such public undertaking shall, in the absence of any
agreement to the contrary, be the first owner of the copyright
therein.
Explanation: For the purposes of this clause and section 28A,
"public undertaking" means-
i. an undertaking owned or controlled by government; or
ii. a government company as defined in section 617 of
the Companies Act, 1956 (1 of 1956); or
iii. a body corporate established by or under any Central,
Provincial or State Act;]
e. in the case of a work to which the provisions of section 41
apply, the international organisation concerned shall be the
first owner of the copyright therein.
Right of Author to Relinquish Copyright
1. The author of a work may relinquish all or any of the rights
comprised in the copyright in the work by giving notice in
the prescribed form to the Registrar of Copyrights and
thereupon such rights shall, subject to the provisions of
sub-section (3), cease to exist from the date of the notice.
2. On receipt of a notice under sub-section (1), the Registrar of
Copyrights shall cause it to be published in the Official
Gazette and in such other manner as he may deem fit.
3. The relinquishment of all or any of the rights comprised in
the copyright in a work shall not affect any rights subsisting
in favour of any person on the date of the notice referred to
in sub-section (1).
Term of Copyright
Term of copyright in published literary, dramatic, musical and
artistic works
Except as otherwise hereinafter provided, copyright shall subsist
in any literary, dramatic, musical or artistic work (other than a
photograph) published within the lifetime of the author until
17[sixty] years from the beginning of the calendar year next
following the year in which the author dies.
Explanation : In this section the reference to the author shall,
in the case of a work of joint authorship, be construed as a
reference to the author who dies last.
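The rule above is mechanical enough to compute. The following
minimal Python sketch (not part of the Act; the function name
and the sixty-year default are illustrative assumptions) works
out the last calendar year of protection for a work published in
the author's lifetime:

    def copyright_expiry_year(author_death_year, term_years=60):
        # The term runs for term_years from the beginning of the
        # calendar year next following the year of death, so the
        # work remains protected through 31 December of
        # author_death_year + term_years.
        return author_death_year + term_years

    # An author who died during 1960: protected to the end of 2020.
    print(copyright_expiry_year(1960))  # 2020

For a work of joint authorship, the same calculation would be
applied to the year in which the last surviving author dies, as
the Explanation above directs.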
Term of copyright in anonymous and pseudonymous works
1. In the case of a literary, dramatic, musical or artistic work
(other than a photograph), which is published anonymously
or pseudonymously, copyright shall subsist until 17[sixty]
years from the beginning of the calendar year next following
the year in which the work is first published:
PROVIDED that where the identity of the author is
disclosed before the expiry of the said period, copyright shall
subsist until 17[sixty] years from the beginning of the
calendar year next following the year in which the author dies.
2. In sub-section (1), references to the author shall, in the case
of an anonymous work of joint authorship, be construed,-
a. where the identity of one of the authors is disclosed,
as references to that author;
b. where the identity of more authors than one is
disclosed, as references to the author who dies last
from amongst such authors.
3. In sub-section (1), references to the author shall, in the case
of a pseudonymous work of joint authorship, be
construed,-
a. where the names of one or more (but not all) of the
authors are pseudonymous and his or their identity is
not disclosed, as references to the author whose name
is not a pseudonym, or, if the names of two or more
of the authors are not pseudonyms, as references to
such of those authors who dies last;
b. where the names of one or more (but not all) of the
authors are pseudonyms and the identity of one or
more of them is disclosed, as references to the author
who dies last from amongst the authors whose names
are not pseudonyms and the authors whose names are
pseudonyms and are disclosed; and
c. where the names of all the authors are pseudonyms
and the identity of one of them is disclosed, as
references to the author whose identity is disclosed or,
if the identity of two or more of such authors is
disclosed, as references to such of those authors who
dies last.
Explanation : For the purposes of this section, the identity of
an author shall be deemed to have been disclosed, if either the
identity of the author is disclosed publicly by both the author
and the publisher or is otherwise established to the satisfaction
of the Copyright Board by that author.
Term of Copyright in Posthumous Work
1. In the case of a literary, dramatic or musical work or an
engraving, in which copyright subsists at the date of the
death of the author or, in the case of any such work of joint
authorship, at or immediately before the date of the death
of the author who dies last, but which, or any adaptation of
which, has not been published before that date, copyright
shall subsist until 17[sixty] years from the beginning of the
calendar year next following the year in which the work is first
published or, where an adaptation of the work is published
in any earlier year, from the beginning of the calendar year
next following that year.
2. For the purposes of this section a literary, dramatic or
musical work or an adaptation of any such work shall be
deemed to have been published, if it has been performed in
public or if any 10[sound recordings] made in respect of the
work have been sold to the public or have been offered for
sale to the public.
Term of Copyright in Photographs
In the case of a photograph, copyright shall subsist until
17[sixty] years from the beginning of the calendar year next
following the year in which the photograph is published.
Term of Copyright in Cinematograph Films
In the case of cinematograph film, copyright shall subsist until
17[sixty] years from the beginning of the calendar year next
following the year in which the film is published.
Term of Copyright in 10[sound recording]
In the case of a 10[sound recording], copyright shall subsist until
17[sixty] years from the beginning of the calendar year next
following the year in which the 10[sound recording] is published.
Licence to Produce and Publish Translations
1. Any person may apply to the Copyright Board for a licence to
produce and publish a translation of a literary or dramatic
work in any language 5[after a period of seven years from the
first publication of the work.]
5[(1A) Notwithstanding anything contained in sub-section
(1), any person may apply to the Copyright Board for a
licence to produce and publish a translation, in printed or
analogous forms of reproduction, of a literary or dramatic
work, other than an Indian work, in any language in general
use in India after a period of three years from the first
publication of such work, if such translation is required for
the purposes of teaching, scholarship or research:
PROVIDED that where such translation is in a language not
in general use in any developed country, such application may
be made after a period of one year from such publication.]
2. Every 18[application under this section] shall be made in such
form as may be prescribed and shall state the proposed retail
price of a copy of the translation of the work.
3. Every applicant for a licence under this section shall, along
with his application, deposit with the Registrar of
Copyrights such fee as may be prescribed.
4. Where an application is made to the Copyright Board under
this section, it may, after holding such inquiry as may be
prescribed, grant to the applicant a licence, not being an
exclusive licence, to produce and publish a translation of the
work in the language mentioned in 13[the application-
i.
subject to the condition that the applicant shall pay to
the owner of the copyright in the work royalties in
respect of copies of the translation of the work sold to
the public, calculated at such rate as the Copyright
Board may, in the circumstances of each case, determine
in the prescribed manner; and
ii.
where such licence is granted on an application under
sub-section (1A), subject also to the condition that the
licence shall not extend to the export of copies of the
translation of the work outside India and every copy
of such translation shall contain a notice in the
language of such translation that the copy is available
for distribution only in India:
PROVIDED that nothing in clause (ii) shall apply to the export
by government or any authority under the government of
copies of such translation in a language other than English,
French or Spanish to any country if-
1. such copies are sent to citizens of India residing outside
India or to any association of such citizens outside India; or
2. such copies are meant to be used for purposes of teaching,
scholarship or research and not for any commercial purpose;
and
3. in either case, the permission for such export has been given
by the government of that country:]
19[PROVIDED FURTHER that no licence under this section]
shall be granted, unless-
a. a translation of the work in the language mentioned in the
application has not been published by the owner of the
copyright in the work or any person authorised by him,
13[within seven years or three years or one year, as the case
may be, of the first publication of the work], or if a
translation has been so published, it has been out of print;
b. the applicant has proved to the satisfaction of the Copyright
Board that he had requested and had been denied
authorisation by the owner of the copyright to produce and
publish such translation, or that 13[he was, after due diligence
on his part, unable to find] the owner of the copyright;
c. where the applicant was unable to find the owner of the
copyright, he had sent a copy of his request for 13[such
authorisation by registered air mail post to the publisher
whose name appears from the work, and in the case of an
application for a licence under sub-section (1)] not less than
two months before 13[such application];
13[(cc) a period of six months in the case of an application under
sub-section (1A) (not being an application under the proviso
thereto), or nine months in the case of an application under the
proviso to that sub-section, has elapsed from the date of
making the request under clause (b) of this proviso, or where a
copy of the request has been sent under clause (c) of this
proviso, from the date of sending of such copy, and the
translation of the work in the language mentioned in the
application has not been published by the owner of the
copyright in the work or any person authorised by him within
the said period of six months or nine months, as the case may
be;
(ccc) in the case of any application made under sub-section
(1A),-
i. the name of the author and the title of the particular
edition of the work proposed to be translated are
printed on all the copies of the translation;
ii. if the work is composed mainly of illustrations, the
provisions of section 32A are also complied with;]
d. the Copyright Board is satisfied that the applicant is
competent to produce and publish a correct translation of
the work and possesses the means to pay to the owner of
the copyright the royalties payable to him under this section;
e. the author has not withdrawn from circulation copies of the
work; and
f. an opportunity of being heard is given, wherever practicable,
to the owner of the copyright in the work.
5[(5) Any broadcasting authority may apply to the Copyright
Board for a licence to produce and publish the translation of-
a. a work referred to in sub-section (1A) and published in
printed or analogous forms of reproduction; or
b. any text incorporated in audio-visual fixations prepared and
published solely for the purpose of systematic instructional
activities, for broadcasting such translation for the purposes
of teaching or for the dissemination of the results of
specialised, technical or scientific research to the experts in any
particular field.
6. The provisions of sub-sections (2) to (4) insofar as they are
relatable to an application under sub-section (1A), shall, with
the necessary modifications, apply to the grant of a licence
under sub-section (5) and such licence shall not also be
granted unless-
a. the translation is made from a work lawfully acquired;
b. the broadcast is made through the medium of sound and
visual recordings;
c. such recording has been lawfully and exclusively made for the
purpose of broadcasting in India by the applicant or by any
other broadcasting agency; and
d. the translation and the broadcasting of such translation are
not used for any commercial purposes.
Explanation: For the purposes of this section,-
a. "developed country" means a country which is not a
developing country;
b. “developing country” means a country which is for the time
being regarded as such in conformity with the practice of the
General Assembly of the United Nations;
c. “purposes of research” does not include purposes of
industrial research, or purposes of research by bodies
corporate (not being bodies corporate owned or controlled
by government) or other associations or body of persons for
commercial purposes;
d. "purposes of teaching, research or scholarship" includes-
i.
purposes of instructional activity at all levels in
educational institutions, including schools, colleges,
universities and tutorial institutions; and
ii.
purposes of all other types of organised educational
activity.]
A. License to Reproduce and Publish Works for
Certain Purposes
1. Where, after the expiration of the relevant period from the
date of the first publication of an edition of a literary,
scientific or artistic work,-
a.
the copies of such edition are not made available in
India; or
b.
such copies have not been put on sale in India for a
period of six months, to the general public, or in
connection with systematic instructional activities at a
price reasonably related to that normally charged in
India for comparable works by the owner of the right
of reproduction or by any person authorised by him in
this behalf, any person may apply to the Copyright
Board for a licence to reproduce and publish such work
in printed or analogous forms of reproduction at the
price at which such edition is sold or at a lower price for
the purposes of systematic instructional activities.
2. Every such application shall be made in such form as may be
prescribed and shall state the proposed retail price of a copy
of the work to be reproduced.
3. Every applicant for a licence under this section shall, along
with his application, deposit with the Registrar of
Copyrights such fee as may be prescribed.
4. Where an application is made to the Copyright Board under
this section, it may, after holding such inquiry as may be
prescribed, grant to the applicant a licence not being an
exclusive licence, to produce and publish a reproduction of
the work mentioned in the application subject to the
conditions that,-
i.
the applicant shall pay to the owner of the copyright in
the work royalties in respect of copies of the
reproduction of the work sold to the public, calculated
at such rate as the Copyright Board may, in the
circumstances of each case, determine in the prescribed
manner;
ii.
a licence granted under this section shall not extend to
the export of copies of the reproduction of the work
outside India and every copy of such reproduction
shall contain a notice that the copy is available for
distribution only in India:
PROVIDED that no such licence shall be granted unless-
a. the applicant has proved to the satisfaction of the Copyright
Board that he had requested and had been denied
authorisation by the owner of the copyright in the work to
reproduce and publish such work or that he was, after due
diligence on his part, unable to find such owner;
b. where the applicant was unable to find the owner of the
copyright, he had sent a copy of his request for such
authorisation by registered air-mail post to the publisher
whose name appears from the work not less than three
months before the application for the licence;
c. the Copyright Board is satisfied that the applicant is
competent to reproduce and publish an accurate
reproduction of the work and possesses the means to pay to
the owner of the copyright the royalties payable to him
under this section;
d. the applicant undertakes to reproduce and publish the work
at such price as may be fixed by the Copyright Board, being a
price reasonably related to the price normally charged in India
for works of the same standard on the same or similar
subjects;
e. a period of six months in the case of an application for the
reproduction and publication of any work of natural science,
physical science, mathematics or technology, or a period of
three months in the case of an application for the
reproduction and publication of any other work, has elapsed
from the date of making the request under clause (a), or
where a copy of the request has been sent under clause (b),
from the date of sending of a copy, and a reproduction, of
the work has not been published by the owner of the
copyright in the work or any person authorised by him
within the said period of six months or, three months, as
the case may be;
f. the name of the author and the title of the particular edition
of the work proposed to be reproduced are printed on all
the copies of the reproduction;
g. the author has not withdrawn from circulation copies of the
work; and
h. an opportunity of being heard is given, wherever practicable,
to the owner of the copyright in the work.
5. No licence to reproduce and publish the translation of a
work shall be granted under this section unless such
translation has been published by the owner of the right of
translation or any person authorised by him and the
translation is not in a language in general use in India.
6. The provisions of this section shall also apply to the
reproduction and publication, or translation into a language
in general use in India, of any text incorporated in audio-visual
fixations prepared and published solely for the
purpose of systematic instructional activities.
Explanation : For the purposes of this section, “relevant
period", in relation to any work, means a period of-
a. seven years from the date of the first publication of that
work, where the application is for the reproduction and
publication of any work of, or relating to, fiction, poetry,
drama, music or art;
b. three years, from the date of the first publication of that
work, where the application is for the reproduction and
publication of any work of, or relating to, natural science,
physical science, mathematics or technology; and
c. five years from the date of the first publication of that work,
in any other case.
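Read as a rule of thumb, the "relevant period" is simply a
lookup keyed on the class of the work. A minimal Python sketch
(the category labels are informal paraphrases of the clauses
above, not statutory language):

    # Relevant period, in years from first publication, per the
    # Explanation to this section: seven years for fiction, poetry,
    # drama, music or art; three years for natural science, physical
    # science, mathematics or technology; five years otherwise.
    RELEVANT_PERIOD_YEARS = {
        "fiction_poetry_drama_music_art": 7,
        "science_mathematics_technology": 3,
    }

    def relevant_period(work_class):
        # Default of 5 years covers "any other case".
        return RELEVANT_PERIOD_YEARS.get(work_class, 5)

    print(relevant_period("science_mathematics_technology"))  # 3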
B. Termination of Licences Issued Under this Chapter
1. If, at any time after the granting of a licence to produce and
publish the translation of a work in any language under
sub-section (1A) of section 32 (hereafter in this sub-section
referred to as the licensed work), the owner of the copyright
in the work or any person authorised by him publishes a
translation of such work in the same language and which is
substantially the same in content at a price reasonably related
to the price normally charged in India for the translation of
works of the same standard on the same or similar subject,
the licence so granted shall be terminated :
PROVIDED that no such termination shall take effect until
after the expiry of a period of three months from the date
of service of a notice in the prescribed manner on the person
holding such licence by the owner of the right of translation
intimating the publication of the translation as aforesaid:
PROVIDED FURTHER that copies of the licensed work
produced and published by the person holding such licence
before the termination of the licence takes effect may
continue to be sold or distributed until the copies already
produced and published are exhausted.
2. If, at any time after the granting of a licence to produce and
publish the reproduction or translation of any work under
section 32A, the owner of the right of reproduction or any
person authorised by him sells or distributes copies of such
work or a translation thereof, as the case may be, in the same
language and which is substantially the same in content at a
price reasonably related to the price normally charged in India
for works of the same standard on the same or similar
subject, the licence so granted shall be terminated:
PROVIDED that no such termination shall take effect until
after the expiry of a period of three months from the date of
service of a notice in the prescribed manner on the person
holding the licence by the owner of the right of reproduction
intimating the sale or distribution of the copies of the editions
of work as aforesaid :
PROVIDED FURTHER that any copies already reproduced by
the licensee before such termination takes effect may continue to
be sold or distributed until the copies already produced are
exhausted.]
4[Rights of Broadcasting Organisation
and of Performers]
Broadcast Reproduction Right
1. Every broadcasting organisation shall have a special right to
be known as "broadcast reproduction right" in respect of its
broadcasts.
2. The broadcast reproduction right shall subsist until
twenty-five years from the beginning of the calendar year next
following the year in which the broadcast is made.
3. During the continuance of a broadcast reproduction right in
relation to any broadcast, any person who, without the
licence of the owner of the right, does any of the following
acts of the broadcast or any substantial part thereof,-
a. re-broadcasts the broadcast; or
b. causes the broadcast to be heard or seen by the public
on payment of any charges; or
c. makes any sound recording or visual recording of the
broadcast; or
d. makes any reproduction of such sound recording or
visual recording where such initial recording was done
without licence or, where it was licensed, for any
purpose not envisaged by such licence; or
e. sells or hires to the public, or offers for such sale or
hire, any such sound recording or visual recording
referred to in clause (c) or clause (d), shall, subject to
the provisions of section 39, be deemed to have
infringed the broadcast reproduction right.]
A. Other Provisions Applying to Broadcast
Reproduction Right and Performer's Right
Sections 18, 19, 30, 53, 55, 58, 64, 65 and 66 shall, with any
necessary adaptations and modifications, apply in relation to the
broadcast reproduction right in any broadcast and the
performer's right in any performance as they apply in relation to
copyright in a work:
PROVIDED that where copyright or performer's right subsists
in respect of any work or performance that has been broadcast,
no licence to reproduce such broadcast shall take effect without
the consent of the owner of rights or performer, as the case
may be, or both of them.]
Power to Restrict Rights of Foreign Broadcasting
Organisations and Performers
If it appears to the Central Government that a foreign country
does not give or has not undertaken to give adequate protection
to rights of broadcasting organisations or performers, the
Central Government may, by order published in the Official
Gazette, direct that such of the provisions of this Act as confer
rights to broadcasting organisations or performers, as the case
may be, shall not apply to broadcasting organisations or
performers which are based or incorporated in such foreign
country or are subjects or citizens of such foreign country and
are not incorporated or domiciled in India, and thereupon
those provisions shall not apply to such broadcasting
organisations or performers.]
Infringement of Copyright
Certain Acts not to be Infringement of Copyright
1. The following acts shall not constitute an infringement of
copyright, namely,-
a. a fair dealing with a literary, dramatic, musical or artistic
work 3[not being a computer programme] for the
purposes of-
4[(i) private use, including research;]
ii. criticism or review, whether of that work or of any other
work;
3[(aa) the making of copies or adaptation of a computer
programme by the lawful possessor of a copy of such
computer programme, from such copy-
i. in order to utilise the computer programme for the purpose
for which it was supplied; or
ii. to make back-up copies purely as a temporary protection
against loss, destruction or damage in order only to utilise
the computer programme for the purpose for which it was
supplied;]
21[(ab) the doing of any act necessary to obtain information
essential for operating inter-operability of an independently
created computer programme with other programmes by a
lawful possessor of a computer programme, provided that
such information is not otherwise readily available;
ac.
the observation, study or test of functioning of the
computer programme in order to determine the ideas
and principles which underlie any elements of the
programme while performing such acts necessary for
the functions for which the computer programme was
supplied;
ad.
the making of copies or adaptation of the computer
programme from a personally legally obtained copy for
non-commercial personal use;]
b. a fair dealing with a literary, dramatic, musical or artistic work
for the purpose of reporting current events-
i.
in a newspaper, magazine or similar periodical, or
ii.
by 6[broadcast] or in a cinematograph film or by means
of photographs.
[Explanation: The publication of a compilation of addresses
or speeches delivered in public is not a fair dealing of such
work within the meaning of this clause;]
c. the reproduction of a literary, dramatic, musical or artistic
work for the purpose of a judicial proceeding or for the
purpose of a report of a judicial proceeding;
d. the reproduction or publication of a literary, dramatic,
musical or artistic work in any work prepared by the
Secretariat of a Legislature or, where the Legislature consists
of two Houses, by the Secretariat of either House of the
Legislature, exclusively for the use of the members of that
Legislature;
e. the reproduction of any literary, dramatic or musical work in
a certified copy made or supplied in accordance with any law
for the time being in force;
f. the reading or recitation in public of any reasonable extract
from a published literary or dramatic work;
g. the publication in a collection, mainly composed of
non-copyright matter, bona fide intended for the use of
educational institutions, and so described in the title and in
any advertisement issued by or on behalf of the publisher,
of short passages from published literary or dramatic works,
not themselves published for the use of educational
institutions, in which copyright subsists:
PROVIDED that not more than two such passages from
works by the same author are published by the same
publisher during any period of five years.
Explanation : In the case of a work of joint authorship,
references in this clause to passages from works shall include
references to passages from works by any one or more of the
authors of those passages or by any one or more of those
authors in collaboration with any other person;
h. the reproduction of a literary, dramatic, musical or artistic
worki.
by a teacher or a pupil in the course of instruction; or
ii.
as part of the questions to be answered in an
examination; or
iii. in answers to such questions;
i. the performance in the course of the activities of an
educational institution, of a literary, dramatic or musical
work by the staff and students of the institution, or of a
cinematograph film or a 10[sound recording], if the audience
is limited to such staff and students, the parents and
guardians of the students and persons directly connected
with the activities of the institution 3[or the communication
to such an audience of a cinematograph film or sound
recording];
4[(j) the making of sound recordings in respect of any literary,
dramatic or musical work, if-
i. sound recordings of that work have been made by or
with the licence or consent of the owner of the right in
the work;
ii. the person making the sound recordings has given a
notice of his intention to make the sound recordings,
has provided copies of all covers or labels with which
the sound recordings are to be sold, and has paid in
the prescribed manner to the owner of rights in the
work royalties in respect of all such sound recordings
to be made by him, at the rate fixed by the Copyright
Board in this behalf:
PROVIDED that-
i. no alterations shall be made which have not been made
previously by or with the consent of the owner of rights, or
which are not reasonably necessary for the adaptation of the
work for the purpose of making the sound recordings;
ii. the sound recordings shall not be issued in any form of
packaging or with any label which is likely to mislead or
confuse the public as to their identity;
iii. no such sound recording shall be made until the expiration
of two calendar years after the end of the year in which the
first sound recording of the work was made; and
iv. the person making such sound recordings shall allow the
owner of rights or his duly authorised agent or
representative to inspect all records and books of account
relating to such sound recording:
PROVIDED FURTHER that if on a complaint brought before
the Copyright Board to the effect that the owner of rights has
not been paid in full for any sound recordings purporting to be
made in pursuance of this clause, the Copyright Board is, prima
facie, satisfied that the complaint is genuine, it may pass an
order ex parte directing the person making the sound recording
to cease from making further copies and, after holding such
inquiry as it considers necessary, make such further order as it
may deem fit, including an order for payment of royalty;
k. the causing of a recording to be heard in public by utilising
it,-
i.
in an enclosed room or hall meant for the common
use of residents in any residential premises (not being
a hotel or similar commercial establishment) as part of
the amenities provided exclusively or mainly for
residents therein; or
ii.
as part of the activities of a club or similar organisation
which is not established or conducted for profit;]
l. the performance of a literary, dramatic or musical work by an
amateur club or society, if the performance is given to a
non-paying audience, or for the benefit of a religious institution;
m. the reproduction in a newspaper, magazine or other
periodical of an article on current economic, political, social or
religious topics, unless the author of such article has
expressly reserved to himself the right of such reproduction;
n. the publication in a newspaper, magazine or other periodical
of a report of a lecture delivered in public;
o. the making of not more than three copies of a book
(including a pamphlet, sheet of music, map, chart or plan) by
or under the direction of the person in charge of a public
library for the use of the library if such book is not available
for sale in India;
p. the reproduction, for the purpose of research or private
study or with a view to publication, of an unpublished
literary, dramatic or musical work kept in a library, museum
or other institution to which the public has access:
PROVIDED that where the identity of the author of any
such work or, in the case of a work of joint authorship, of
any of the authors is known to the library, museum or other
institution, as the case may be, the provisions of this clause
shall apply only if such reproduction is made at a time more
than 23[sixty years] from the date of the death of the author
or, in the case of a work of joint authorship, from the death
of the author whose identity is known or, if the identity of
more authors than one is known from the death of such of
those authors who dies last;
q. the reproduction or publication of-
i. any matter which has been published in any Official
Gazette except an Act of a Legislature;
ii. any Act of a Legislature subject to the condition that
such Act is reproduced or published together with any
commentary thereon or any other original matter;
iii. the report of any committee, commission, council,
board or other like body appointed by the government
if such report has been laid on the Table of the
Legislature, unless the reproduction or publication of
such report is prohibited by the government;
iv. any judgement or order of a court, Tribunal or other
judicial authority, unless the reproduction or
publication of such judgement or order is prohibited
by the court, the Tribunal or other judicial authority, as
the case may be;
r. the production or publication of a translation in any Indian
language of an Act of a Legislature and of any rules or
orders made thereunder-
i. if no translation of such Act or rules or orders in that
language has previously been produced or published
by the government; or
ii. where a translation of such Act or rules or orders in
that language has been produced or published by the
government, if the translation is not available for sale
to the public:
PROVIDED that such translation contains a statement at a
prominent place to the effect that the translation has not
been authorised or accepted as authentic by the government;
4[(s) the making or publishing of a painting, drawing,
engraving or photograph of a work of architecture or the
display of a work of architecture;]
t. the making or publishing of a painting, drawing, engraving
or photograph of a sculpture, or other artistic work falling
under sub-clause (iii) of clause (c) of section 2, if such work
is permanently situate in a public place or any premises to
which the public has access;
u. the inclusion in a cinematograph film of-
i. any artistic work permanently situate in a public place or
any premises to which the public has access; or
ii. any other artistic work, if such inclusion is only by way
of background or is otherwise incidental to the
principal matters represented in the film;
v. the use by the author of an artistic work, where the author
of such work is not the owner of the copyright therein, of
any mould, cast, sketch, plan, model or study made by him
for the purpose of the work:
PROVIDED that he does not thereby repeat or imitate the
main design of the work;
w. 2[* * *]
x. the reconstruction of a building or structure in accordance
with the architectural drawings or plans by reference to which
the building or structure was originally constructed:
PROVIDED that the original construction was made with
the consent or licence of the owner of the copyright in such
drawings and plans;
y. in relation to a literary, dramatic or musical work recorded or
reproduced in any cinematograph film, the exhibition of
such film after the expiration of the term of copyright
therein:
PROVIDED that the provisions of sub-clause (ii) of clause
(a), sub-clause (i) of clause (b) and clauses (d), (f), (g), (m)
and (p) shall not apply as respects any act unless that act is
accompanied by an acknowledgement-
i. identifying the work by its title or other description;
and
ii. unless the work is anonymous or the author of the
work has previously agreed or required that no
acknowledgement of his name should be made, also
identifying the author;
3[(z) the making of an ephemeral recording, by a broadcasting
organisation using its own facilities for its own broadcast by
a broadcasting organisation of a work which it has the right
to broadcast; and the retention of such recording for archival
purposes on the ground of its exceptional documentary
character;
za.
the performance of a literary, dramatic or musical work
or the communication to the public of such work or
of a sound recording in the course of any bona fide
religious ceremony or an official ceremony held by the
Central Government or the State Government or any
local authority.
Explanation : For the purpose of this clause, religious ceremony
including a marriage procession and other social festivities
associated with a marriage.]
2. The provisions of sub-section (1) shall apply to the doing
of any act in relation to the translation of a literary, dramatic
or musical work or the adaptation of a literary, dramatic,
musical or artistic work as they apply in relation to the work
itself.
Particulars to be Included in 10[Sound Recordings]
and Video Films
1. No person shall publish a 10[sound recording] in respect of
any work unless the following particulars are displayed on
the 10[sound recording] and on any container thereof,
namely,-
a. the name and address of the person who has made the
10[sound recording];
b. the name and address of the owner of the copyright in
such work; and
c. the year of its first publication.
2. No person shall publish a video film in respect of any work
unless the following particulars are displayed in the video
film, when exhibited, and on the video cassette or other
container thereof, namely,-
a. if such work is a cinematograph film required to be
certified for exhibition under the provisions of the
Cinematograph Act, 1952 (37 of 1952), a copy of the
certificate granted by the Board of Film Certification
under section 5A of that Act in respect of such work;
b. the name and address of the person who has made the
video film and a declaration by him that he has
obtained the necessary licence or consent from the
owner of the copyright in such work for making such
video film; and
c. the name and address of the owner of the copyright
in such work.]
Author’s special rights
4[(1) Independently of the author's copyright and even after the
assignment either wholly or partially of the said copyright,
the author of a work shall have the right-
a. to claim authorship of the work; and
b. to restrain or claim damages in respect of any
distortion, mutilation, modification or other act in
relation to the said work which is done before the
expiration of the term of copyright if such distortion,
mutilation, modification or other act would be
prejudicial to his honour or reputation:
PROVIDED that the author shall not have any right to
restrain or claim damages in respect of any adaptation of a
computer programme to which clause (aa) of sub-section (1)
of section 52 applies.
Explanation : Failure to display a work or to display it to
the satisfaction of the author shall not be deemed to be an
infringement of the rights conferred by this section.]
2. The right conferred upon an author of a work by sub-section
(1), other than the right to claim authorship of the
work, may be exercised by the legal representatives of the
author.
Foot Notes
1 21st. January, 1958, vide Notification No. SRO 269, Gazette
of India, Ext. Part II, s. 3(ii), p. 167.
2 Omitted by Act No. 38 of 1994, w.e.f. 10th. May, 1995.
3 Inserted by Act No. 38 of 1994, w.e.f. 10th. May, 1995.
4 Substituted by Act No. 38 of 1994, w.e.f. 10th. May, 1995.
5 Inserted by Act No. 23 of 1983, w.e.f. 9th. August, 1984.
6 Substituted for the words "radio-diffusion" by Act No. 23 of
1983, w.e.f. 9th. August, 1984.
7 Inserted by Act No. 65 of 1984, w.e.f. 8th. October, 1984.
8 Earlier clause (1) substituted by Act No. 23 of 1983, w.e.f. 9th.
August, 1984.
9 Substituted for the words "data basis" by the Copyright
(Amendment) Act 49 of 1999, dated 30th. December, 1999.
10 Substituted by Act No. 38 of 1994, for the word "records",
w.e.f. 10th. May, 1995.
11 Omitted by Act No. 23 of 1983, w.e.f. 9th. August, 1984.
12 Substituted by Act No. 38 of 1994, for the words "the
Copyright Board", w.e.f. 10th. May, 1995.
13 Substituted by Act No. 23 of 1983, w.e.f. 9th. August, 1984.
14 Substituted by the Copyright (Amendment) Act, 1999.
15 The words "Indian Patents and" omitted by Act No. 23 of
1983, w.e.f. 9th. August, 1984.
16 Section 19 re-numbered as sub-section (1) thereof by Act
No. 23 of 1983, w.e.f. 9th. August, 1984.
17 Substituted by Act No. 13 of 1992, for the word "fifty",
w.e.f. 28th. December, 1991.
18 Substituted by Act No. 23 of 1983, for the words "such
application", w.e.f. 9th. August, 1984.
19 Substituted by Act No. 23 of 1983, for the words "Provided
that no such licence", w.e.f. 9th. August, 1984.
20 Substituted for the words "twenty-five years" by the
Copyright (Amendment) Act 49 of 1999, dated 30th.
December, 1999.
21 Inserted by the Copyright (Amendment) Act 49 of 1999,
dated 30th. December, 1999.
22 Omitted by Act No. 65 of 1984, w.e.f. 8th. October, 1984.
23 Substituted for the words "fifty years" by the Copyright
(Amendment) Act 49 of 1999, dated 30th. December, 1999.
24 Substituted by Act No. 23 of 1983, for the words and figures
'under section 19 of the Sea Customs Act, 1878', w.e.f. 9th.
August, 1984.
25 Substituted by Act No. 23 of 1983, for the words 'the
Specific Relief Act, 1877', w.e.f. 9th. August, 1984.
26 Substituted by Act No. 23 of 1983, for the words 'in section
42 of the Specific Relief Act, 1877', w.e.f. 9th. August, 1984.
27 Substituted by Act No. 65 of 1984, w.e.f. 8th. October, 1984.
28 Substituted by Act No. 23 of 1983, for the words 'a
Presidency Magistrate or a Magistrate of the first class', w.e.f.
9th. August, 1984.
29 Substituted by Act No. 23 of 1983, for sub-section (3), w.e.f.
9th. August, 1984.
LESSON 32:
USING RESEARCH WELL
Topics Covered
Difference, Planning, Evaluation, Haste, Knowing
Objectives
Upon completion of this Lesson, you should be able to:
1. Distinguish between evaluative and creative research
2. Make contingency plans
3. Explain why haste is the enemy of quality
4. Hold a planning meeting after the survey
Using Research Well
Even though managers often bemoan the lack of research, a lot
of research that’s done is never acted on. For researchers, this is
often frustrating, and for managers, irrelevant research is a
money-wasting irritant.
The key principle in using research data is to plan the action at
the same time you plan the survey. If the wrong population is
sampled, or the questions are not fully relevant, the research will
not be properly usable.
Here are five principles to follow, if you want to use research
well:
1. Avoid intermediaries.
2. Haste is the enemy of quality.
3. “I knew that all along.”
4. Make a contingency plan.
5. Hold a follow-up meeting.
Difference between Evaluative and
Creative Research
One obstacle to research being used is that people feel threatened. If somebody has managed a department for years, and
thinks he’s done it well, he’s not going to be receptive to some
of the audience (via research) informing him there are better
ways. And when people think the quality of their work is being
judged by research, they can become very defensive. An obvious
solution is to attack the research. This is often a problem with
evaluative research.
But when a new project is being planned, people are often keen
to use research, to find out more about the audience. In this
case, the danger is the opposite one - that vague research
findings will be believed too much. When a new concept is
presented - perhaps a new type of TV program - viewers can’t
really grasp the idea without seeing the program a few times.
They tend to be polite to the researchers, to say that this
proposal sounds like a good idea, and that they’d definitely
watch the program. (Maybe they don’t add that they might
watch it for only a few minutes.) The result is that new programs often gain smaller audiences than the research suggests.
Experienced researchers are more skeptical in such situations,
not interpreting weakly expressed interest as a definite intention
to view.
1. Avoid Intermediaries
One sure way to produce unusable research is for the end-users
of research and the researchers not to communicate fully. Here’s
an example of how not to do it:
A middle manager wants some research done, and sends a
written request to his or her superior. The superior makes a few
changes and sends it on to, say, a purchasing manager, who
rephrases it again to comply with corporate policy. The purchasing manager then contacts a researcher. If that researcher is not
then permitted to deal directly with the originator of the
request, the original idea will by now be so distorted that any
research on it will be useless!
Why is that? Because (a) the sample you really need is probably
one you’ll never quite be able to reach, and (b) the art of
question wording is a very subtle one. It usually takes three or
four meetings between researcher and client before a questionnaire and sample design are adequate.
2. Haste is the Enemy of Quality
The person who will use the results and the person who will
manage the research need to spend an adequate amount of time
discussing the research plan. Usually this will require at least two
meetings, and several hours at least. Time spent planning the
research is never wasted.
Often an organization will spend months vaguely thinking that
it needs some audience research done, and at the last moment
will decide that it needs the results as soon as possible. A false
sense of urgency is built up. With the resultant rush, mistakes
are made. As soon as the first results are released - or even in the
middle of a survey - people will begin saying “If only we’d
thought to ask them such-and-such...”
I've often experienced this frantic haste - especially when
advertising agencies are involved. There’s an old truism about
research: “It can be cheap, it can be fast, and it can be high
quality. Pick any two.” So hasty research will either be of low
quality, or very expensive. Take your pick.
3. “I Knew that all Along”
A common criticism of survey data is that you spent a lot of
money to find out what you already knew.
There’s a very educational way to overcome the “I knew it all
along” attitude. When the questionnaire is complete, and the
survey is ready to go, give all end-users a copy of the questionnaire. Ask them to estimate the percentage who will give each
answer to each question, and write these figures on the questionnaire, along with their names. Collect the questionnaires,
and summarize everybody’s guesses. When the survey has been
finished, compare the actual results with the guesses. Then it
will become obvious that:
• They didn't know it all along. Even experienced researchers are
doing well if they get as many as half the results within 20% of
the actual figure.
• The act of guessing (OK, estimating) the answers will make the
users more aware of the audience, and more interested in the
results. I don't know why this is so, but it always seems to
work out that way.
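The comparison itself takes only a few lines to run. Here is a
minimal Python sketch of the scoring step, using entirely
hypothetical guesses and results (the names and figures are
invented for illustration):

    # Actual survey percentages for each answer option.
    actual = {"price too high": 34, "no time to read": 22, "forgot to renew": 18}

    # Each end-user's pre-survey estimates for the same options.
    guesses = {
        "Manager A": {"price too high": 60, "no time to read": 10, "forgot to renew": 5},
        "Manager B": {"price too high": 30, "no time to read": 25, "forgot to renew": 20},
    }

    for name, estimate in guesses.items():
        # Count a guess as "close" if it is within 20% of the actual figure.
        close = sum(1 for option, g in estimate.items()
                    if abs(g - actual[option]) <= 0.2 * actual[option])
        print(name, "-", close, "of", len(actual), "guesses within 20%")

Summarised this way, the gap between what people "knew" and
what the survey actually found is hard to argue with.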
4. Make Contingency Plans
This involves deciding before the survey begins what will be
done with the results. Is any action foreshadowed? Or is the
purpose of the survey simply to increase broadcasters’ understanding of their audience? Or what? In practice, each question
usually has a different purpose.
Here's a useful exercise, which is best done while the
questionnaire is being written. For each question, write down:
a. The reason for its being asked, and
b. How the results could be acted on.
Not all questions call for a specific action. To gain an understanding of the audience is also important - and you never
know when previously-collected information might suddenly
become relevant. But you can ask 1,000 questions and never ask
the exact one for which you’ll need an answer next month. I
suggest that questions which don’t lead to any action should be
given a low priority in a questionnaire.
5. Hold a Planning Meeting After the Survey
When the survey results are out, the researchers need to do
more than simply send out a report. The most effective follow-up
occurs when the initial results are presented to a group of
end-users, who can then ask questions and make it clear which
questions need more detailed analysis. At this presentation,
everybody’s initial estimates of the answers can be brought out,
and a small reward perhaps offered for the closest guess.
The advantage of making a contingency plan is that it is often
several months between the questionnaire being written and the
survey results becoming available. It’s easy to forget why a
question was asked.
If the report is written after this initial presentation, it will
contain more relevant data, and less irrelevant material. When
the report has been finished and sent out, it’s a good idea to
hold a second presentation, this time focusing on how the
results can be used, what has been learned, and what further
research or information may be needed to make better decisions.
Here's an example of a contingency plan.
Question:
Why did you not renew your subscription to Week?
(Please tick all boxes that apply.)
[1] Am no longer able to read it (e.g. time is a problem)
[2] Price increase was too large
[3] Didn’t know subscription was due
[4] Haven’t got around to renewing, but may do so some day
[5] Subscribed mainly to get the program guide, now
discontinued
[6] Other reason: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Reason for asking this question:
Find out how to get more subscribers to renew.
Contingency Plan
If answer = 1 or questionnaire returned blank: delete from
database
If 2 or 5: Send Letter A, pointing out increased benefits
If 3 or 4: Send reminder letter C
If 6: determine which of the above is most appropriate.
That example was for a census (all subscribers) rather than a
survey, and the question was very specific. Normally it would
not be possible to have an individual reaction for each member
of the population.
A contingency plan need not be followed exactly after the results
arrive. In the month or two that may pass, many things can
change - or you may realize that your plan didn’t take enough
into account. Even so, when you produce a plan like this, it
helps to clarify your thinking. And if a group of managers make
a contingency plan together, it helps them all agree on what they
are really trying to achieve, and on the real purpose of the
survey.
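Because a contingency plan of this kind is just a mapping from
answer codes to actions, it can even be written down executably,
which forces exactly the clarity recommended above. A minimal
Python sketch (the action strings are hypothetical, and answer 6,
"other reason", is routed to manual review as in the plan):

    def follow_up(ticked):
        # ticked: the set of option numbers ticked on one returned
        # questionnaire; an empty set means it came back blank.
        if not ticked or 1 in ticked:
            return "delete from database"
        if 6 in ticked:
            return "manual review: choose the most appropriate letter"
        if ticked & {2, 5}:
            return "send Letter A, pointing out increased benefits"
        if ticked & {3, 4}:
            return "send reminder letter C"
        return "no action defined for this combination"

    print(follow_up({2}))    # send Letter A, pointing out increased benefits
    print(follow_up(set()))  # delete from database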
Researcher's Responsibilities
i. All research effort must be directly applicable to management
decision-making. It should never be just research for research's
sake, but should be focused on definite practical problems.
Research should guard itself against the danger of being
reduced to mere aimless fact-gathering. It has the
responsibility of digesting the data and presenting
management with the salient features which have practical
utility and a direct bearing on the problem in hand, and thus
it helps in the choice of a decision.
ii. Research reports should be action-oriented and not
technique-oriented in their content. The researcher should not
forget that management is primarily concerned with the
findings which have a direct bearing on the problem, and
only secondarily with the techniques of the research process.
Reports should not, therefore, consist of unnecessary
technical jargon and must be wholly slanted towards the
practical use that management can make of the findings.
iii. Researcher must clearly define the limitations of the findings.
All too often, the misuse of research findings stems from
the failure of researchers to make clear to management the
limitations of their findings.
iv. Researcher should work as a team with management.
Researchers must recognize the fact that managers can make
valuable contributions to the research process from the
marketing side about which they are better informed.
Researchers should, therefore, not confine themselves in the
narrow area of research alone but work actively with
management in a much broader context as a team as a result
of which each will understand the other better.
v. Researcher should “sell” himself to the management. While
it is true that management must develop a proper
understanding and appreciation of the value of marketing
139
research as a management tool, it is just as important that
Research should enlighten management on this score by
practical demonstrations rather than mere theoretical
expoundings on the subject.
lesser wastage for which the need is even more for the smaller
firms and industries because they cannot afford any wastage of
scarce resources.
vi. With the challenge posed by the increasingly complex and
thorough imaginative adaptation of existing techniques, the
horizons of Marketing Research should be enlarged. This
calls for a progressive and creative spirit among researchers,
who must be constantly alive to the problems of
management.
Further, in contrast to the vast outlays required by technical
research and development, Marketing Research requires a much
smaller investment. The importance of technical research for the
growth of a company cannot be denied. On the other hand,
considering the risks of the market place, the enormous capital
investment involved in setting up a plant, and the irreversibility
of investment decisions, one cannot deny, either, the need for
the much smaller expenditure on Marketing Research before
investing in the plant, to ensure the productivity of the plant
through sustained sales. Cost consciousness on the part of
management is essential, but it should apply as much to
Marketing Research as to any other side of the business.
Moreover, the value of Marketing Research as the base for
sound marketing has to be recognised. Perhaps the chief reason
for the cautious approach towards investment in Marketing
Research is that, while technical research produces visible gains,
the benefits of Marketing Research are in most cases latent and
intermixed with the total marketing operation.
Research and Market
Research for the market, or marketing research, is defined as the
systematic process of gathering, recording and analysing all
problems relating to the transfer and sale of goods and services
from producer to consumer. It includes such activities as market
analysis, sales research, consumer research and advertising
research. Marketing Research, or MR in short, is the link between
the manufacturer and the consumer, and the means of
providing consumer orientation in all aspects of the marketing
function. It is the instrument for obtaining knowledge of the
market and the consumer through objective methods, guarding
against the manufacturer’s subjective bias.
Many empirical studies indicate that correct management
decisions in marketing are made in only about 57 per cent of the
problems handled. This percentage indicates the need for
marketing research designed to raise the proportion of correct
decisions. Management can arrive at accurate decisions when in
possession of pertinent facts, correctly interpreted and obtained
through marketing research, concerning problems such as: which
distribution channels to utilise; which products to market, and
where; and whether or not advertising and other sales
promotion should be used and, if so, the nature of such
advertising and promotion. In other words, marketing research
helps to improve the efficiency of marketing methods.
1. Value of Marketing Research vis-a-vis Costs
Marketing Research can contribute to efficient marketing
management in the following ways:
a. It gears production to a carefully measured consumer
demand and thus avoids marketing failures. Marketing
Research studies the market to determine what exactly
consumers need and the characteristics of their demand.
b. It helps expansion through the discovery of new markets,
new products and new uses for existing products.
c. Marketing Research helps to increase the effectiveness of
advertising and promotional efforts to stimulate sales
through the selection of:
i. the best campaign and media schedule to reach the target
consumers;
ii. the most fruitful timing of promotions;
iii. the markets where the advertising investment will bring in
the greatest returns.
d. Marketing Research helps to reduce costs. It helps to reduce
distribution costs through optimum coverage, and
production costs through the increase in sales leading to
economies of large-scale production and simplification of
the product line. Thus, Marketing Research increases net
profits through greater efficiency of marketing efforts and
lesser wastage - a need that is even greater for smaller firms
and industries, because they cannot afford any wastage of
scarce resources.
2. Types of marketing research
There are seven basic types of marketing research. They are:
a. Market Characteristics Research
The most fundamental type of marketing research is that which
studies the consumer market for the products of an industry or
business service. By describing and measuring the consumer
market in terms of standard characteristics such as age, sex and
economic class, these studies provide executives with an
understanding of the end use of their products, which forms a
factual foundation for many of the most vital decisions in
marketing. These researches range from national studies having
a bearing on many phases of marketing policy to small specialized
surveys confined to limited geographical areas or designed
to answer only one or two vital questions.
Some of the more common subjects studied in connection with
consumer research are given below:
1. How long are consumers loyal to a particular brand?
2. What factors, conditions and reasons affect consumer brand
loyalty?
3. How do users begin using a certain brand?
4. What are consumers’ reasons for using a certain brand?
5. What is the relationship between users and buyers?
6. What are the buying reasons and motives?
7. What products are associated in usage? How is the sale
affected?
8. How often is the product used?
9. In what units is the product purchased?
10. Where do consumers buy or prefer to buy the product?
b. Motivation Research
Motivation research employs social science techniques to find
and evaluate the motivating forces which cause certain behaviour
in the market place. It involves a penetrating analysis of the
thinking and attitudes of consumers to discover the
subconscious reasons why they buy particular products and
specific brands. To uncover the deep-rooted reasons for market-place
behaviour, motivation researchers use such techniques as
word association tests, sentence completion tests, depth
interviews, group interviewing and thematic apperception tests.
c. Size of the Market Research (Market Analysis)
Market research develops market and sales potentials and sales
quotas by determining how much of a commodity a given
market can be expected to absorb. In addition, indices of
potential for each city or trading area are computed so that
territories of salesmen, distributors and dealers may be properly
defined. Studies of potential sales permit the selection of
territories in which to concentrate sales or advertising effort.
Market analysis involves investigations of various elements of
consumer demand including total demand, relative demand,
replacement demand, market saturation and consumption rates.
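As a rough illustration of the “index of potential” idea just described: one common formulation weights each trading area’s share of national income, retail sales and population. The weights and figures in this sketch are invented for illustration, not drawn from the text.

    # A hedged sketch of an index of sales potential per trading area.
    # Shares are percentages of the national total; weights are assumptions.
    WEIGHTS = {"income": 0.5, "retail_sales": 0.3, "population": 0.2}

    areas = {
        "City A": {"income": 2.1, "retail_sales": 1.8, "population": 1.5},
        "City B": {"income": 0.7, "retail_sales": 0.9, "population": 1.2},
    }

    def potential_index(shares):
        """Weighted average of an area's shares of the national totals."""
        return sum(WEIGHTS[k] * shares[k] for k in WEIGHTS)

    for name, shares in areas.items():
        # The index estimates the % of national sales the area can absorb,
        # which is then used to set territories and sales quotas.
        print(f"{name}: {potential_index(shares):.2f}% of national potential")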
d. Sales Research (Sales Analysis and Control)
Sales research, often referred to as internal analysis, is one of the
most fertile fields of marketing research. Two general areas are
included. The first is sales record analysis. It uses as its source
of information the records of sales transactions and accounting
data on sales accumulated in the ordinary conduct of the
business. Sales records are a veritable gold mine for the researcher
with a keen analytical mind and imagination. By summarising,
re-arranging, comparing and studying sales data already available
in the records of the business, he brings out new facts buried in
the detail of day-to-day record keeping. He establishes new
relationships that bear on marketing efficiency. He makes revealing
comparisons of performance between units such as sales
territories, product lines, dealers and individual salesmen. The
second area of sales analysis is sales organisation and operation
research. As a result of the scope of modern selling activities,
the size of sales departments and the complexity of markets,
there are many aspects of the sales operation requiring the
application of marketing research techniques. Many of the basic
marketing weaknesses of the individual company lie in its sales
personnel, its organisation and its operation. The importance of
developing the greatest possible efficiency in this area is
emphasised by the ultimate dependency of all business on sales
volume and the relatively large share of marketing expenses
allocated to the sales function. The application of marketing
research techniques to the selling operation may range from a
basic survey to specialised studies of specific sales problems.
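Here is a minimal, hypothetical sketch of the sales record analysis just described: summarising transaction records by one unit of comparison so that strong and weak units stand out. The file name and column names are assumptions for illustration only.

    # Totals by one unit of comparison (territory, product line, dealer
    # or salesman). "sales.csv" and its columns are assumed, not given
    # in the text.
    import csv
    from collections import defaultdict

    def totals_by(dimension, path="sales.csv"):
        """Sum the amount column for each value of the chosen dimension."""
        totals = defaultdict(float)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row[dimension]] += float(row["amount"])
        # Largest first, so the revealing comparisons are easy to see.
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

    for territory, total in totals_by("territory"):
        print(f"{territory}: {total:,.2f}")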
e. Distribution Research (Distribution Cost Analysis)
Distribution cost analysis provides the means by which
individual companies can find wastes in their marketing effort
and convert unprofitable segments of sales into profits.
Distribution cost analysis has proved a most valuable method
to:
• Analyse how salesmen spend their time.
• Analyse advertising budgets by territories, products and
cities.
• Analyse the effect of salesmen’s compensation methods on
distribution costs.
• Study distribution cost trends over a period of time.
• Estimate dead items, sizes and varieties in inventory that tie
up working capital.
• Simplify the product line.
• Evaluate customers.
• Appraise warehousing, transportation and delivery costs in
the light of market requirements.
f. Advertising Research
Corporate houses and organisations constantly ponder whether
their advertising is accomplishing its objective: influencing the
minds, emotions and actions of existing and prospective
buyers. The answers can only come from advertising research.
Several types of marketing research contribute to the
development of the advertising plan, because research can
suggest the kinds of appeals to use, the most suitable media
and the market to which advertising should be directed in order
to ensure maximum product demand. Advertising research, on
the other hand, attempts to measure whether advertising
accomplished the task assigned to it. To do this, advertising
researchers employ opinion tests, split-run tests, readership
tests, and recognition and recall studies.
g. Product Research
Product research embraces studies of consumer uses, habits and
preferences as to product design, together with technical and
laboratory experimental studies to develop products and
packages which will meet the desires of consumers. Consumer
and laboratory product research helps the manufacturer decide
what to offer the market as regards product characteristics - size,
shape, colour, ease of use, packaging and price. The better a
manufacturer satisfies consumer wants, the greater the quantities
of the product that will be sold. If he fails to meet consumer
demands, he opens the door for competitors to obtain sales at
his expense.
Assignment
Interpretation is a fundamental component of any research
process. Explain why.
LESSON 33:
ARE ALL POLLS VALID?
Topics Covered
Knowing, Risk, Voting, Ignoring
Objectives
Upon completion of this Lesson, you should be able to:
• Understand what opinion polls are
• Recognise response bias and the danger of ignoring it
• Evaluate online voting and self-selected samples
• Assess the risk to a poll’s credibility
Are All Polls Valid?
The headline in the local paper read “3Com Out to Poll World on
Everything”. The story heralded plans for a huge “survey” of
people worldwide, sponsored by 3Com. The project will allow
participants to “see how their answers compare with those of
people around the world,” according to the Associated Press.
The only problem with this groundbreaking study is that, as
the article casually mentioned in the very last paragraph, “the
poll will not be statistically valid.”
What this really means is that this study is actually representative
of nothing and nobody. It’s a huge, glorified focus group,
where we hear a lot of opinions with no way to quantify any of
them in a way that represents the world’s population accurately.
But when the “findings” from this “study” are released to the
media, do you really think the emphasis is going to be on the
fact that the data is not statistically valid?
Of course not. This little problem will be briefly explained in a
footnote. But other than that, this study will most likely be
treated as gospel truth. In short, this is a fabulous publicity
idea. Unfortunately, for reasons too numerous to list here, it’s
useless as serious research.
Lest you feel this is an attack solely on this upcoming “research,” stop and think for a moment how many similar
“research” projects you see every day. Local newscasts ask people
to call their 900-number and for only 75 cents they can vote in a
telephone “poll.” A men’s magazine will survey its readers, and
claim that the study is representative of all American men.
Websites take “polls” all the time, and aside from an occasional
footnote the resulting data is treated as if it’s a scientifically
conducted research study. Companies and organizations use
inadequate “research” methods to gather data all too frequently.
Unfortunately, many people treat surveys as if all you have to
do is ask enough people a bunch of questions, and because you
end up with a stack of data, somehow this means you have a
study that reflects the public’s real opinions. Management
leaders sometimes dismiss researchers’ concerns about “statistical significance” and “projectable data” as the cries of some
anal-retentive academics who figure if they didn’t conduct the
research themselves it must not be right.
Researchers face this problem frequently: “It doesn’t have to be
scientific – it just has to be right!” (an actual quote from a
research client). The problem is that the two are inseparable –
either conduct it using statistically valid methods, or it won’t be
right.
Clients want to cut corners, save time, and avoid all the technical
jargon that researchers throw at them. Admittedly, researchers
don’t always help themselves when they rely too much on that
technical jargon, rather than trying to explain things so decision-makers
realize what they’re giving up when they cut corners.
Above all, survey data must be projectable in order to be of any
use at all. Simply put, this means that a survey is asking
questions of a small portion that is supposed to represent a
larger population. The portion who participate in the survey
must accurately and fully represent the larger population in
order for the research to be valid.
There are a number of different barriers to conducting a valid
study:
Ignoring Response Bias
Response bias is what happens when a certain type of person
responds to a survey, while a different type does not. For
instance, we know that women and older people are more likely
to participate in a telephone survey than are men or younger
people. Unless this factor is corrected, it will bias the survey
results. Telephone surveys that attempt to get a random
sample of the population but do not account for this factor
often end up with 70% of the respondents being female. If
women and men have different views on the survey topics, this
means the data is being biased by this response rate difference.
Here’s a quick example. Let’s say 60% of men plan to vote for
the Congress candidate in Delhi, while 60% of women plan to
vote for the BJP. If Delhi’s population is evenly split along
gender lines, this means the actual vote will be split 50/50
between the two candidates. In a telephone survey that ends
up 70% female and 30% male (as many surveys would without
controlling this factor), the findings would show that 54% of all
respondents support the BJP candidate, and only 46% support
the Congress candidate. This would provide inaccurate and
misleading data.
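A minimal sketch of the correction this example implies: re-weighting each gender group from its achieved sample share back to its population share. The figures are the ones used above; the variable names are illustrative.

    sample = {"female": 0.70, "male": 0.30}      # achieved sample shares
    population = {"female": 0.50, "male": 0.50}  # true population shares
    bjp_support = {"female": 0.60, "male": 0.40} # support by group, from above

    # Unweighted estimate: distorted by the surplus of female respondents.
    raw = sum(sample[g] * bjp_support[g] for g in sample)

    # Weighted estimate: each group scaled back to its population share.
    weighted = sum(population[g] * bjp_support[g] for g in population)

    print(f"unweighted: {raw:.0%}")      # 54% - the misleading figure
    print(f"weighted:   {weighted:.0%}") # 50% - the true even split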
This can be an even greater problem with e-mail and mail
surveys. Response bias is more likely when the response rate
(i.e. the proportion of people who complete the survey) is low.
A mail survey that has a 5% response rate is probably useless,
because the chances of response bias are very high when a
response rate is that poor. The 5% who did respond are
probably different from the 95% who did not respond.
A common mistake is trying to use a sample of convenience.
You have a website – why not just survey the people who visit
it, and say they’re representative of your audience? Because the
truth is that they’re probably not representative.
This continues to be one of the major obstacles to effective online research. People who use the internet still are not fully
representative of the Indian population. Although this is
gradually changing, today they still tend to be younger, better
educated, higher income, and more likely to be male. If your
donor base is heavily weighted towards older women, as many
are, this will be a particular problem.
Viewing it internationally, the United States has a much higher
proportion of the population on-line than do most other
countries, which means a “worldwide” on-line survey will be
dominated by responses from the U.S., unless the sample is
very carefully controlled for this factor. Without taking these
factors into account, an on-line survey will give you a sample of
people who are not representative of your target.
Using a Self-selected Sample
This is a major problem with call-in polls or surveys that are
posted on a website – only those who really want to say
something about the topic will participate. Do you really think
someone who doesn’t care about the issue of abortion will
proactively take the time and spend the money to call in and vote?
If a hotel gives you an 800-number to call to participate in a
survey, are you more likely to do so if you had a decent
experience staying at that hotel, or if you had a terrible experience and want to let someone know about it? Self-selected
samples generally provide very polarized responses – people
who love you or hate you will reply, but those who just don’t
care won’t bother.
Turning Qualitative Data into Quantitative
This is a basic, unchangeable fact: you cannot pull quantitative,
numeric data out of focus groups. Unfortunately, this is also
one of the most common research mistakes. Focus groups are
extremely useful for understanding how people perceive
something. Focus groups are completely useless for finding out
how many people feel a particular way. If nine out of ten people
in a focus group really liked your new prospecting mailing, this
does not mean that a majority of people you send it to will feel
the same way. The sample size is too small and the sampling
methods too non-representative for focus groups to provide
projectable data, even if you’re conducting a number of groups.
Questions are raised all the time about whether researchers can
interview 400 people and say it represents a population of
millions. The answer is yes, as long as it’s done correctly. It’s
relatively easy to ignore these bogus call-in polls or poorly
conducted research studies when they’re reported in the media.
But if your own organization is making some of the same
mistakes in its research efforts, that’s not so easy to ignore – and
far more dangerous.
Originally published in The NonProfit Times, January 15, 2001
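A brief sketch of the arithmetic behind the claim above that 400 interviews can represent millions: for a genuinely random sample, the 95% margin of error for a proportion depends on the sample size, not the population size. This uses the standard normal-approximation formula as an assumption.

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Half-width of the 95% confidence interval for a proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 400, 1000):
        print(n, f"±{margin_of_error(n):.1%}")
    # 400 respondents -> about ±4.9%, however large the population,
    # provided the sample is random and representative.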
TV Channel Promotions continues to be one of the top
Categories on Print - Sahara Manoranjan emerges the
biggest spender: An AdEx India Analysis, Nov 21, 03
For the past two or three years, TV Channel Promotions have
been amongst the top categories on Print. This year too it
stands at the 12th position for the three quarters that have
passed (January-September 2003).
Taking a look at the top spenders (networks), the Sahara
Network emerges as the leader by a long distance, accounting for
25% of the spends on TV Channel Promotions, followed by
Star TV Network, Sony Entertainment and Zee TV Network.
Amazingly, NDTV manages to feature in the Top 10 even
though it was totally absent in the first quarter (NDTV was
launched in April 2003).
Analyzing the shares of various channel genres highlights the
dominance of entertainment channels amongst all TV channels
on Print: 62% of the revenue generated from this category
comes from entertainment channels. News channels coming
second makes for some interesting reading.
Going deeper into the top promotions for the first three
quarters of this year, eight of the top ten have been Program
Promotions, the other two being Channel Promotions for
news channels. Sahara again dominates here with 6 of the top
10 promotions, 5 of them for Sahara Manoranjan. The top
promotion was for ‘Karishma’, followed by ‘Mission Fateh’ and
‘Dil Ki Baatein’. ‘Predikta’, a contest from MAX which ran
during the ICC Cricket World Cup 2003, is ranked 8th.
Looking at the innovations used in this category, Linked Ads
on Multiple Pages comes out on top. Figured Outline, Teaser
Campaigns and Linked Ads on Single Page are the other
innovations used. Sahara Manoranjan’s ‘Karishma’ again tops
the spenders on Print Innovations. Four of the other top spots
have been grabbed by HBO. Sahara had gone for ads on
consecutive pages (Linked Ads), which was the largest
contributor to this type of innovation, while HBO topped the
Figured Outline innovations (ads of irregular shape where parts
of the creative eat into the editorial content).
Analysis from AdEx India (A Division of TAM Media
Research Pvt. Ltd.)
TRP-ed?
SO, HAS THE advertising industry been sold a lemon all
along? And, do their clients, in turn, think it has been a rupee
ill-spent on shows that actually have poor viewership? With the
lists of 625 households in Mumbai with peoplemeters installed
by independent television show-rating agencies being made
public through an anonymous source, agencies and clients could
well think they have been strung right along. Both research
agencies which administer the parallel rating systems — TAM
by AC Nielsen and INTAM by ORG-Marg — have been quick
off the blocks to refute any suggestions that household
viewership patterns have been manipulated in any way. However, confidence in the television rating points (TRPs), the
gospel for advertising agencies, has been shaken badly. For, it is
on the basis of those points that crores of rupees are sunk in
commercials that latch onto the best shows on television.
Consider the figures: There is almost Rs 4,000 crore worth of
advertising on Indian television. The top 15-20 ad agencies in
the country subscribe to either of the rating systems. These
agencies and media-buying houses make their decisions solely
on the basis of the TRPs and account for at least 75 per cent of
nationwide billings for TV advertising. Which means that at
least Rs 3,000 crore worth of advertising is virtually based on
recommendations that stem from TRPs. No small beer this.
The critical question that agencies are now asking is if the
confidential list of households has been released to the media
now, how long has it been circulating in the public domain? If
the leak had happened a while ago, the repercussions could be
severe. It could well mean that agencies have to go back to their
drawing boards to dissect the ratings on the basis of which they
have made recommendations to clients. Influencing a few
households in any city can alter TRP ratings radically, though
rating agencies stoutly deny any manipulation, even while
conceding the accuracy of the lists.
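A rough sketch of the arithmetic behind that concern, assuming the simplest reading of a rating as the percentage of panel homes tuned in (real TRPs are computed minute by minute, so this is only an illustration):

    # With a panel of 625 peoplemeter homes (the Mumbai figure quoted
    # above), each home is worth a large slice of a rating point.
    PANEL_HOMES = 625

    def rating_points(homes_watching):
        """One rating point = 1% of panel homes watching the show."""
        return 100 * homes_watching / PANEL_HOMES

    print(f"1 home   = {rating_points(1):.2f} points")   # 0.16 points
    print(f"10 homes = {rating_points(10):.2f} points")  # 1.60 points
    # Swaying even ten known households could move a show by ~1.6 TRPs.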
But is there smoke without fire? Would the anonymous vested
interests have released these lists, if they did not wish to expose
the fact that peoplemeter-households can be influenced? It may
be the tip of the iceberg and more sordid details could well
come tumbling out. Already, rival channels are pointing to
shows on a particular channel that are receiving top billings week
after week. Now that the lid has blown off, ad agencies are also
conceding that they, who were supposed to be the watchdogs of
the ratings systems, have not been vigilant about a system on
which crores of rupees of investments ride. Committees which
comprise agency officials and rating agencies have not met in
months. As long as the TRP machine churned out the stuff
week after week, nobody questioned the system.
Both the rating systems are on the verge of a merger with AC
Nielsen and ORG-Marg planning to join up. Both the agencies
have expressed their intention of rehauling the panels entirely.
Each has peoplemeters in at least 4,000 households across the
country. If the merger results in double the sample size in a
single rating system, TRPs could be more accurate than what the
system reflects now. TAM-INTAM would need to work in
tandem to restore the credibility of the rating systems in quick
time. Otherwise, agencies would again be reduced to making
gut-feel decisions rather than ones based on hard quantitative
data.
AC Nielsen, ORG-Marg not TRP-ed - To
Revamp Rating Panels; MR Agencies to
Track Leaks
Our Bureau
NEW DELHI, Sept. 5
DESPITE an impending merger between AC Nielsen and
ORG-Marg, the two research agencies have decided to go ahead
and revamp the household panels for the TAM and INTAM
rating systems respectively.
The move comes a day after lists of households where
peoplemeters — the devices that record viewership patterns —
are installed were made public. This is supposedly confidential
data, accessible to only a select few employees in a company.
Speaking to Business Line, Mr L.V. Krishnan, CEO, TAM
Media Research, said, “We are discussing merger plans. But that
does not stop us from revamping our panel. We will go ahead
with it despite going in for a merger.”
The latest controversy surrounding television rating points
(TRPs) will hit Zee the most, for the channel launched a slew of
new programmes in the last week of August. According to
market sources, the verdict of the audiences on the new shows
would have come out within the next two days.
However, what has been bothering the research agencies is the
manner in which the leaks happened. Mr Ashok Das, President,
ORG-Marg, said in a statement, “The manner in which the lists
have been circulated (through an anonymous letter) to the
media would suggest that there is a vested interest behind this,
who is clearly not happy with independent ratings and wants to
discredit them.” The market research (MR) agency is planning to
investigate the matter thoroughly.
However, media planners will continue to rely on this data.
“TRPs are the currency with which media planners work, and if
it is taken away then we would have to regress to the barter
system,” said Ms Divya Gupta, Chief Operating Officer,
The Media Edge.
“I guess media planners will continue using TRPs. And
thankfully, we are dealing with market research agencies which
are professional and have international backing. Therefore, we
hope that corrective measures will be taken,” said Mr C. Rao,
President, Universal McCann.
The fact that the names of subscribers have been made public
makes the system vulnerable. “First it was Mumbai, next it
could be Delhi. So, it makes the system vulnerable,” Ms Gupta
said.
The data is used not only by the broadcasters or the advertisers
but even production houses which use the data to find out
how well their programmes have been doing.
Mr Ramesh Narayan, President, Advertising Agencies Association
of India (AAAI), said, “The happy part is that there is no
evidence of any kind of manipulation. Confidentiality is all-important
and since all names are public, there is a need to
immediately refresh the entire list.”
The outcome of this is a wake-up call to the industry that any
future system should have checks and balances built into it.
The Indian Broadcasting Foundation (IBF) has convened a
meeting next Friday with the research agencies, AAAI and
broadcasters to sort out the issue of TRPs. The apex industry
body had earlier decided to come up with uniform standards
for TRPs. Mr Bhuvan Lall, Executive Director, IBF, said the
association will see that corrective measures are taken as soon as
possible.
IBF to Set Norms for TV Ratings
Nithya Subramanian
NEW DELHI, April 24
THE Indian Broadcasting Foundation (IBF) is likely to finalise
a set of standards for calculating television ratings, at its board
meeting scheduled for Thursday.
Sources in the broadcasting industry said that a committee
comprising big broadcasters like Sony, Star, Zee and
Doordarshan has prepared the broad framework, which would
be discussed at the meeting.
Presently there are two research agencies, AC Nielsen and ORG
Marg, which provide data on television ratings, popularly
known as TAM and INTAM ratings respectively. “Initially, the
IBF had considered appointing a single research agency to
provide data,” said an industry insider.
“However with a merger announcement between the two
groups internationally, it is only a matter of time before the
impact is felt in India. Hence the Foundation may not appoint
one agency for TRPs (television rating points),” they said.
Another aspect is the need to broaden the sample base to
include the rural population. “With penetration of cable and
satellite (C&S) to smaller towns and even rural areas, this section
has not been adequately covered,” said sources.
Also, the number of peoplemeters used to measure viewership
ratings must be increased. “Currently, there are only a few
thousand such peoplemeters and these have to be increased
substantially,” said sources.
The IBF had, some months ago, decided to come up with
uniform guidelines, as the two rating agencies cover a different
number of cities and each tracks a different profile of people.
Standardisation of the basic requirements would help in
minimising conflicting data, officials had said.
Uniform parameters are important because advertising agencies
rely extensively on such data before making media related
decisions. Broadcasters too use the data to decide their programming schedules.
Broadcasters depend on any one of the two research agencies
for television ratings. For instance, Star Plus uses ratings given
by AC Nielsen-IMRB’s TAM, while Zee banks on ORG-Marg’s
INTAM figures.
“But, if the basic parameters are laid out and even if the merger
between the two research agencies does not happen in the
immediate future, the results may not be so skewed,” said a
senior media planner.
This is not the first time that the broadcasting industry has
decided to standardise guidelines. Earlier some broadcasters like
Doordarshan, Star, the Association of Advertising Agencies of
India (3As of I) and other groups had asked AC Nielsen to
work out the technical standards and had suggested their
ratings as the industry standard.
This initiative is part of IBF’s mandate of ironing out differences between the various business interests associated with the
entertainment sector. It has already signed an agreement with
the 3As of I for a policy on credit to the entertainment sector.
Jassi Jaisi Koi Nahin, But Uski TRP Thik
Nahin
Sudha Nagaraj
Times News Network[Saturday, December 06, 2003 ]
NEW DELHI: Three months after its launch, the 9:30 pm
soap Jassi Jaissi Koi Nahin is fast becoming a habit that people
rush home to. Recruiting firms advertised for Jassi-like secretaries,
and Jassi-inspired caricatures were common in the run-up to
the polls.
Yet, the serial is not as high up on the TRP charts as it is on the
popularity list. Rather than calling the credibility of TRP ratings
into question, what this discrepancy shows is that TRPs do a
disservice to media planners by aggregating regional preferences
and losing out on the region-specific popularity of particular
programming. Regional popularity, male-female viewership
patterns and other details often tell a different story.
“I agree the buzz around Jassi far exceeds the TRPs. But given
the time, I am sure the TRPs will follow and reiterate the same
thing - Jassi Jaissi Koi Nahin,” says a confident Sunil Lulla,
executive vice-president, Sony Entertainment Television who is
banking on slot performance rating which has risen five times.
Mr Lulla is also encouraged by the 6.7 rating the serial claimed
the week ending November 8, though the following two weeks
saw the ratings plummet to 5.5 and 4.7 - a fact attributed to
cricket matches and which dented most other serial viewerships
too. Star Plus ratings for the same slot that week dipped to 9.8,
according to TAM Media Express findings.
With a first episode rating of 3.6 which steadily climbed to 5.2
at the end of a dozen episodes people at Sony Entertainment
Television (SET) are aware that the enthusiasm for Jassi among
viewers has not yet translated into viewership on lines of Kaun
Banega Crorepati.
But the plain old Jane is collecting fans for sure. Females aged
15 and above (8.9 TRP for the week ended November 8), for
starters. Bored urbanites in big cities have also taken a fancy to
the simple secretary - TRPs peaked at 8.7 in Mumbai, 9.2 in
Calcutta and 7.8 in Delhi.
In smaller markets (0.1 – 1 million) Jassi has bagged the
highest ever rating of 13.7 in Orissa while Madhya Pradesh and
West Bengal have recorded a peak average rating of 7.2 and 8
respectively.
While Chandigarh, Punjab and Haryana are also kind to her, the
real “soapy regions” like Maharashtra and Gujarat have to
develop a taste for plain Jassi, breaking free from holier than
thou Tulsi (Kyunki Saas Bhi Kabhi Bahu Thi -TRP of 10.1
week ending November 22) and prim and propah Parvathi
(Kahaani Ghar Ghar Ki - rated 10.0 on November 19).
On the same day, Star serial Kasauti Zindagi Kay enjoyed a 9.5
rating while Sanjivani won an 8.5 rating.
To prod viewers to make newcomer Jassi a habit, SET has
introduced one-hour specials of the half-hour (9:30-9:59 slot)
serial. The first, during Diwali, had Jassi (with a TRP of 6.4)
denting Star’s stronghold KGGK from a four-week average
share of 79% to 56% while SET’s share grew from an average
of 7% to 31%, the Company says.
“It is true that most channels, broadcasters and advertisers bank
on the Hindi-speaking markets. But the buzz about Jassi,
thanks to innovative marketing, has drawn advertisers too,” says
Mr Lulla.
Other marketing gimmicks like Jassi Pals Club had 2,500 people
enrolling while an FM radio poll had Jassi winning over other
serial queens.
Viewership patterns are also an area you need to understand in
order to evaluate polls, since viewers are the people who take
part in and influence them. It is also vital to consider the
sponsor(s) of a poll or survey: the agency which conducts the
survey or poll takes money from some source, and knowing
that source is critical to understanding the process.
The recent opinion polls and exit polls for the 2004 general
election proved to be of no use, as they all turned out to be
wrong when compared with the actual results. The same
happened in the assembly elections in Madhya Pradesh,
Rajasthan and Chhattisgarh.
Why was this?
Was the sample size wrong, or was the sample segment wrong?
This question has various dimensions, and to answer it we
need to discuss the process by which polls and surveys are
conducted. Such questions also call for discussion across all
media houses.
Notes
“The lesson content has been compiled from various sources in public domain including but not limited to the
internet for the convenience of the users. The university has no proprietary right on the same.”
9, Km Milestone, NH-65, Kaithal - 136027, Haryana
Website: www.niilmuniversity.in