Week 4


Things to Think About When You Want To Do a Survey

What do you want to know?

How can you find out what you need to know?

Similar/same surveys

• Can help in design

• Can help validate your results

• May obviate the need for the study

– How best to reach people at JMU?

Things to Think About When You Want To Do a Survey

• Constraints

– Money/time

– Sample vs. population

– Mailing/Bulk Email

Things NOT to do

– Too Long (dining survey)

– Sensitive Questions Near the Front

– Double-barreled questions

• Measure more than one item at the same time

• Confounds responses and interpretation.

– Ceiling and floor effects

Things NOT to do

– Indeterminate Time (alumni survey question on volunteerism)

– Response sets not matched to question or question type.

– Bias

• Blatant

• Subtle

Things NOT to do

– “Don’t know,” “Neutral,” and “no opinion” options that allow “weaseling out”

– Lack of “Don’t know,” “Neutral,” and “no opinion” options where they could have important meaning

– Ceiling and floor effects in responses

Examples and Discussion

– Study of Coffee Flavors on Visitors to a Coffee Shop

• 8 am

• 272 Questions

Examples and Discussion

– Study of potential clients for new clothing line beginning with questions about measurements and interests in “stretchable fabrics”

Examples and Discussion

– Do you believe that HOV lanes help reduce congestion and can help reduce pollution?

• Yes

• No

Examples and Discussion

– Do you plan on voting for the independent candidate?

• Yes

• No

Examples and Discussion

– How often do you use a pencil?

• 7 times

• 6 times

• 5 times

• 4 times

• 3 times

• 2 times

• 1 time

Examples and Discussion

– How often do you attend soccer games?

• Very true

• Partly true

• Not true

• False

• Entirely False

Examples and Discussion

– Most smart people agree that the HOV lane is vital to reducing congestion. What do you think?

• I agree

• I disagree

Examples and Discussion

– Hybrid vehicles use less gasoline, make less noise, reduce owner costs. Please check which of the following hybrid programs you support.

• Tax incentives for owners

• Special HOV permissions during peak hours

• None

Examples and Discussion

– Was your meal as good as you expected?

• Yes

• No

• Don’t know

– Alumni survey question about diversity in workplace.

Examples and Discussion

– How do you feel about euthanasia?

• Support

• Neutral

• Oppose

• Don’t know

Examples and Discussion

– Was the NATO embargo of Iraq in the 1990s effective?

• Yes

• No

• Don’t know

Examples and Discussion

– Please rate your dining experience.

• Excellent

• Terrible

Examples and Discussion

• Develop response categories that are mutually exclusive

Problem:

– From which one of these sources did you first learn about the tornado in Derby?

Radio

TV

Someone at work

While at home

While traveling to work

Examples and Discussion

• During this most recent ride, would you say that your seatbelt was fastened…

All the time, that is, every minute the car was moving

Almost all the time

Most of the time

About half the time

Less than half the time

Not at all during the time the vehicle was moving

Examples and Discussion

• Use cognitive design techniques to improve recall

Problem:

– We would like to ask about the most recent time you drove or rode anywhere in an automobile or other vehicle such as a pickup or van.

During this most recent ride, would you say that your seatbelt was fastened…

All the time

Almost all the time

Most of the time

About half the time

Less than half the time

Not at all

Measurement in Marketing Research

Basic Question-Response Formats

• Open-ended

• Close-ended

• Scaled-response

Basic Question-Response Formats

Open-Ended

Unprobed

• Open-ended question: presents no response options to the respondent

• Unprobed format: seeks no additional information

• Advantages:

• Respondent frame of reference; allows respondent to use his or her own words

Disadvantages:

• Difficult to code and interpret

• Respondents may not give complete answers

Basic Question-Response Formats

Open-Ended

Probed

• Open-ended question: presents no response options to the respondent

• Probed format: includes a response probe instructing the interviewer to ask for additional information or answer clarification

• Advantages:

• Elicits more-complete answers

• Respondent frame of reference

Disadvantages:

• Difficult to code, analyze, and interpret

Basic Question-Response Formats

Close-Ended

Dichotomous

• Close-ended question: provides a set of answers from amongst which the respondent must choose

• Dichotomous: has only two response options, such as “yes” – “no”; “have” – “have not”; “male” – “female”

• Advantages:

• Simple to administer, code, analyze

Disadvantages:

• May oversimplify response options

• May be in researcher frame of reference

Basic Question-Response Formats

Close-Ended

Multiple Category

• Close-ended question: provides a set of answers from amongst which the respondent must choose

• Multiple response: has more than two answer choices; must have a “mutually exclusive” and “collectively exhaustive” answer set

• Advantages:

Allows for broad range of possible responses

• Simple to administer, code, and analyze

Disadvantages:

• May be in researcher frame of reference

• May not have all appropriate respondent answer options

Basic Question-Response Formats

Scaled-Response

Unlabeled

• Scaled-response question: uses a scale (parts: a statement, instructions, a response format) to measure respondent feeling, judgment, perception …

• Unlabeled: uses a scale that may be purely numerical (no words/phrases) or only the endpoints of the scale are identified

Advantages:

• Allows for degree of intensity to be expressed without researcher options

• Simple to administer, code, and analyze

Disadvantage:

• Scale may not reflect respondents’ view

Basic Question-Response Formats

Scaled-Response

Labeled

• Scaled-response question: uses a scale (parts: a statement, instructions, a response format) to measure respondent feeling, judgment, perception …

• Labeled: a scale where all choices/positions are identified with some descriptive word/phrase

• Advantages:

Allows for degree of intensity to be more clearly expressed

• Simple to administer, code, and analyze

• More consistency in responses

Disadvantage:

• Scale choices more limited or too detailed

Considerations in Choosing a Question-Response Format

• The nature of the property being measured

Gender=dichotomous; liking for chocolate=scale

• Previous research studies

Use format of previous study if comparing

• The data collection mode

Cannot use some scales on the phone

• The ability of the respondent

Kids can only relate to certain types of visual scales

• The level of analysis points to scale type needed

Basic Concepts in Measurement

• Measurement: determining how much of a property is possessed; numbers or labels are then assigned to reflect the measure

• Properties: specific features or characteristics of objects, persons, or events that can be used to distinguish them from others

• Objective properties are physically observable or verifiable

• Subjective properties are mental constructs

Scale Characteristics Determine the Level of Measurement (Level of Data)

• Nominal data:

The use of a descriptor, name, or label to stand for each “unit” on the scale: “yes”/“no”, “male”/“female”, etc.

• Ordinal data:

Objects, persons, events are placed in rank order on some characteristic in a specific direction.

Zero and distance have no meaning; rank 1, 2, 3, etc.

• Interval data:

Units of distance have meaning. There is an arbitrary zero point. Examples are temperatures in degrees Fahrenheit or Celsius; map distance Chicago to ?

• Ratio data:

Multiples have meaning. There is an absolute or natural zero point. Examples: the Kelvin temperature scale, sales/costs in $, market share in %
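As a concrete illustration of the four levels just listed, here is a minimal Python sketch with hypothetical data (the variable names and values are illustrative only) showing which summary statistics are permissible at each level.

# Minimal sketch (hypothetical data): permissible statistics by level of measurement.
from statistics import mode, median, mean

jersey_numbers = [7, 8, 3, 7]          # nominal: labels only -> mode
finish_ranks   = [1, 2, 3]             # ordinal: order only -> median, percentiles
temps_f        = [68.0, 72.5, 75.0]    # interval: differences meaningful -> mean, std. dev.
sales_dollars  = [120.0, 60.0, 240.0]  # ratio: true zero -> ratios, geometric mean

print(mode(jersey_numbers))                 # most frequent label
print(median(finish_ranks))                 # middle rank
print(mean(temps_f))                        # average temperature
print(sales_dollars[2] / sales_dollars[1])  # "twice as much" is meaningful only for ratio data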

Primary Scales of Measurement

Illustration: three runners in a race, measured on each of the four scales.

Nominal (numbers assigned to runners): 7, 8, 3

Ordinal (rank order of finish): Third place, Second place, First place

Interval (performance rating on a 0 to 10 scale): 8.2, 9.1, 9.6

Ratio (time to finish, in seconds): 15.2, 14.1, 13.4

Primary Scales of Measurement

Table 8.1

• Nominal: Basic characteristics: numbers identify and classify objects. Common examples: Social Security numbers, numbering of football players. Marketing examples: brand numbers, store types. Permissible statistics: percentages, mode (descriptive); chi-square, binomial test (inferential).

• Ordinal: Basic characteristics: numbers indicate the relative positions of objects but not the magnitude of differences between them. Common examples: quality rankings, rankings of teams in a tournament. Marketing examples: preference rankings, market position, social class. Permissible statistics: percentile, median (descriptive); rank-order correlation, Friedman ANOVA (inferential).

• Interval: Basic characteristics: differences between objects can be compared; the zero point is arbitrary. Common examples: temperature (Fahrenheit). Marketing examples: attitudes, opinions, index numbers. Permissible statistics: range, mean, standard deviation (descriptive); product-moment correlation (inferential).

• Ratio: Basic characteristics: the zero point is fixed; ratios of scale values can be compared. Common examples: length, weight. Marketing examples: age, sales, income, costs. Permissible statistics: geometric mean, harmonic mean (descriptive); coefficient of variation (inferential).

Figure 8.2

A Classification of Scaling Techniques

Scaling Techniques

• Comparative Scales
– Paired Comparison
– Rank Order
– Constant Sum
– Q-Sort and Other Procedures

• Noncomparative Scales
– Continuous Rating Scales
– Itemized Rating Scales (Likert, Semantic Differential, Stapel)

A Comparison of Scaling Techniques

• Comparative scales involve the direct comparison of stimulus objects. Comparative scale data must be interpreted in relative terms and have only ordinal or rank order properties.

• In noncomparative scales, each object is scaled independently of the others in the stimulus set. The resulting data are generally assumed to be interval or ratio scaled.

Relative Advantages of Comparative Scales

• Small differences between stimulus objects can be detected.

• Same known reference points for all respondents.

• Easily understood and can be applied.

• Involve fewer theoretical assumptions.

• Tend to reduce halo or carryover effects from one judgment to another.

Relative Disadvantages of

Comparative Scales

• Ordinal nature of the data

• Inability to generalize beyond the stimulus objects scaled.

Comparative Scaling Techniques

Paired Comparison Scaling

• A respondent is presented with two objects and asked to select one according to some criterion.

• The data obtained are ordinal in nature.

• Paired comparison scaling is the most widely used comparative scaling technique.

• With n brands, n(n - 1)/2 paired comparisons are required.

• Under the assumption of transitivity, it is possible to convert paired comparison data to a rank order (see the sketch after the shampoo example below).

Obtaining Shampoo Preferences Using Paired Comparisons

Instructions: We are going to present you with ten pairs of shampoo brands. For each pair, please indicate which one of the two brands of shampoo you would prefer for personal use.

Recording Form (columns: Jhirmack, Finesse, Vidal Sassoon, Head & Shoulders, Pert):

Jhirmack: -, 0, 0, 1, 0

Finesse: 1(a), -, 0, 1, 0

Vidal Sassoon: 1, 1, -, 1, 1

Head & Shoulders: 0, 0, 0, -, 0

Pert: 1, 1, 0, 1, -

Number of times preferred (b): 3, 2, 0, 4, 1

(a) A 1 in a particular box means that the brand in that column was preferred over the brand in the corresponding row. A 0 means that the row brand was preferred over the column brand. (b) The number of times a brand was preferred is obtained by summing the 1s in each column.
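The conversion to a rank order mentioned above can be sketched in Python. This is a minimal sketch: the 0/1 entries reproduce the recording form as reconstructed above, and the derived order assumes transitivity holds.

# Paired-comparison tally: a 1 means the COLUMN brand was preferred over the ROW brand.
brands = ["Jhirmack", "Finesse", "Vidal Sassoon", "Head & Shoulders", "Pert"]
matrix = [
    [None, 0, 0, 1, 0],   # row: Jhirmack
    [1, None, 0, 1, 0],   # row: Finesse
    [1, 1, None, 1, 1],   # row: Vidal Sassoon
    [0, 0, 0, None, 0],   # row: Head & Shoulders
    [1, 1, 0, 1, None],   # row: Pert
]

n = len(brands)
print("pairs required:", n * (n - 1) // 2)   # 5 brands -> 10 paired comparisons

# Number of times each (column) brand was preferred.
times_preferred = [sum(row[j] or 0 for row in matrix) for j in range(n)]

# Assuming transitivity, sort by the tally to obtain a rank order.
rank_order = sorted(zip(brands, times_preferred), key=lambda bp: bp[1], reverse=True)
for rank, (brand, count) in enumerate(rank_order, start=1):
    print(rank, brand, count)    # Head & Shoulders first (4), Vidal Sassoon last (0)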

Preference for Toothpaste Brands Using Rank Order Scaling

Form:

Brand / Rank Order

1. Crest _________

2. Colgate _________

3. Aim _________

4. Gleem _________

5. Sensodyne _________

6. Ultra Brite _________

7. Close Up _________

8. Pepsodent _________

9. Plus White _________

10. Stripe _________

Importance of Bathing Soap Attributes Using a Constant Sum Scale

Instructions:

On the next slide, there are eight attributes of bathing soaps. Please allocate 100 points among the attributes so that your allocation reflects the relative importance you attach to each attribute.

The more points an attribute receives, the more important the attribute is. If an attribute is not at all important, assign it zero points. If an attribute is twice as important as some other attribute, it should receive twice as many points.

Importance of Bathing Soap Attributes Using a Constant Sum Scale

Fig. 8.5 cont.

Form

Average Responses of Three Segments

Attribute: Segment I, Segment II, Segment III

1. Mildness: 8, 2, 4

2. Lather: 2, 4, 17

3. Shrinkage: 3, 9, 7

4. Price: 53, 17, 9

5. Fragrance: 9, 0, 19

6. Packaging: 7, 5, 9

7. Moisturizing: 5, 3, 20

8. Cleaning Power: 13, 60, 15

Sum: 100, 100, 100
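Segment averages like those in the table can be produced from raw allocations with a few lines of Python. This is a minimal sketch using made-up respondent data (not the survey behind the figure); it checks the constant-sum constraint and averages within each segment.

# Minimal sketch (synthetic respondents): validate constant-sum allocations and average by segment.
from collections import defaultdict

attributes = ["Mildness", "Lather", "Shrinkage", "Price",
              "Fragrance", "Packaging", "Moisturizing", "Cleaning Power"]

# (segment, [points allocated to each attribute]) -- hypothetical respondents
responses = [
    ("I",  [10, 0, 5, 50, 10, 5, 5, 15]),
    ("I",  [6, 4, 1, 56, 8, 9, 5, 11]),
    ("II", [2, 4, 9, 17, 0, 5, 3, 60]),
]

by_segment = defaultdict(list)
for segment, allocation in responses:
    assert sum(allocation) == 100, "allocations must sum to 100 points"
    by_segment[segment].append(allocation)

for segment, rows in by_segment.items():
    averages = [sum(col) / len(rows) for col in zip(*rows)]
    print(segment, dict(zip(attributes, averages)))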

Noncomparative Scaling Techniques

• Respondents evaluate only one object at a time, and for this reason non-comparative scales are often referred to as monadic scales.

• Non-comparative techniques consist of continuous and itemized rating scales.

Continuous Rating Scale

Respondents rate the objects by placing a mark at the appropriate position on a line that runs from one extreme of the criterion variable to the other.

The form of the continuous scale may vary considerably.

How would you rate Sears as a department store?

Version 1

Probably the worst - - - - - - -I - - - - - - - - - - - - - - - - - - - - - - Probably the best

Version 2

Probably the worst - - - - - - -I - - - - - - - - - - - - - - - - - - - - - - Probably the best

0   10   20   30   40   50   60   70   80   90   100

Version 3

Very bad                 Neither good nor bad                 Very good

Probably the worst - - - - - - -I - - - - - - - - - - - - - - - - - - - - - - Probably the best

0   10   20   30   40   50   60   70   80   90   100
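Scoring a continuous scale is typically computerized: measure where the mark falls on the line and rescale to 0-100. A minimal sketch, assuming the mark position is recorded in millimetres from the left end of a 150 mm line (both assumptions are illustrative):

# Minimal sketch: convert a mark's position on the line to a 0-100 rating.
def score_mark(mark_mm: float, line_length_mm: float = 150.0) -> float:
    """Rescale a mark position (mm from the left end) to a 0-100 rating."""
    if not 0 <= mark_mm <= line_length_mm:
        raise ValueError("mark must lie on the line")
    return 100.0 * mark_mm / line_length_mm

print(score_mark(37.5))   # a mark one quarter of the way along -> 25.0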

Itemized Rating Scales

• The respondents are provided with a scale that has a number or brief description associated with each category.

• The categories are ordered in terms of scale position, and the respondents are required to select the specified category that best describes the object being rated.

• The commonly used itemized rating scales are the Likert, semantic differential, and Stapel scales.

Likert Scale

The Likert scale requires the respondents to indicate a degree of agreement or disagreement with each of a series of statements about the stimulus objects.

Response categories: 1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree

1. Sears sells high quality merchandise.   1   2X   3   4   5

2. Sears has poor in-store service.   1   2X   3   4   5

3. I like to shop at Sears.   1   2   3X   4   5

• The analysis can be conducted on an item-by-item basis (profile analysis), or a total (summated) score can be calculated.

• When arriving at a total score, the categories assigned to the negative statements by the respondents should be scored by reversing the scale (see the scoring sketch below).
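A minimal Python sketch of a summated score for the three example statements above; the item keys are made up for illustration, and the reverse-scoring rule (6 minus the response on a 1-5 scale) is the standard one.

# Summated Likert score with reverse-scoring of the negatively worded statement.
RESPONSES = {            # the example answers marked above (1-5 scale)
    "high_quality": 2,
    "poor_service": 2,   # negative statement -> reverse-score
    "like_to_shop": 3,
}
NEGATIVE_ITEMS = {"poor_service"}

def reverse(score: int, scale_max: int = 5) -> int:
    return scale_max + 1 - score     # 1<->5, 2<->4, 3 stays 3

summated = sum(reverse(v) if item in NEGATIVE_ITEMS else v
               for item, v in RESPONSES.items())
print(summated)   # 2 + 4 + 3 = 9 (higher = more favorable attitude toward Sears)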

Semantic Differential Scale

The semantic differential is a seven-point rating scale with end points associated with bipolar labels that have semantic meaning.

SEARS IS:

Powerful --:--:--:--:-X-:--:--: Weak

Unreliable --:--:--:--:--:-X-:--: Reliable

Modern --:--:--:--:--:--:-X-: Old-fashioned

The negative adjective or phrase sometimes appears at the left side of the scale and sometimes at the right.

This controls the tendency of some respondents, particularly those with very positive or very negative attitudes, to mark the right- or left-hand sides without reading the labels.

Individual items on a semantic differential scale may be scored on either a -3 to +3 or a 1 to 7 scale (see the sketch below).
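A small sketch of that scoring, using the three Sears items above. The check positions (1-7, counted from the left) come from the example; treating "Powerful" and "Modern" as the favorable poles is an interpretation made for illustration.

# Convert semantic differential check positions (1-7) to -3..+3 scores,
# flipping items whose favorable pole is printed on the left.
marks = {"Powerful/Weak": 5, "Unreliable/Reliable": 6, "Modern/Old-fashioned": 7}
favorable_on_left = {"Powerful/Weak", "Modern/Old-fashioned"}

def to_score(position: int, reverse: bool) -> int:
    score = position - 4                  # 1..7 -> -3..+3
    return -score if reverse else score   # flip so +3 is always favorable

scores = {item: to_score(pos, item in favorable_on_left) for item, pos in marks.items()}
print(scores)   # e.g. Powerful/Weak marked near "Weak" becomes -1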

A Semantic Differential Scale for Measuring Self-Concepts, Person Concepts, and Product Concepts

1) Rugged :---:---:---:---:---:---:---: Delicate

2) Excitable :---:---:---:---:---:---:---: Calm

3) Uncomfortable :---:---:---:---:---:---:---: Comfortable

4) Dominating :---:---:---:---:---:---:---: Submissive

5) Thrifty :---:---:---:---:---:---:---: Indulgent

6) Pleasant :---:---:---:---:---:---:---: Unpleasant

7) Contemporary :---:---:---:---:---:---:---: Obsolete

8) Organized :---:---:---:---:---:---:---: Unorganized

9) Rational :---:---:---:---:---:---:---: Emotional

10) Youthful :---:---:---:---:---:---:---: Mature

11) Formal :---:---:---:---:---:---:---: Informal

12) Orthodox :---:---:---:---:---:---:---: Liberal

13) Complex :---:---:---:---:---:---:---: Simple

14) Colorless :---:---:---:---:---:---:---: Colorful

15) Modest :---:---:---:---:---:---:---: Vain

Stapel Scale

The Stapel scale is a unipolar rating scale with ten categories numbered from -5 to +5, without a neutral point (zero). This scale is usually presented vertically.

SEARS

HIGH QUALITY:   +5   +4   +3   +2   +1   -1   -2   -3   -4X   -5

POOR SERVICE:   +5   +4   +3   +2X   +1   -1   -2   -3   -4   -5

The data obtained by using a Stapel scale can be analyzed in the same way as semantic differential data.

Basic Noncomparative Scales

Table 9.1

• Continuous Rating Scale: place a mark on a continuous line. Examples: reaction to TV commercials. Advantages: easy to construct. Disadvantages: scoring can be cumbersome unless computerized.

• Likert Scale (itemized): degrees of agreement on a 1 (strongly disagree) to 5 (strongly agree) scale. Examples: measurement of attitudes. Advantages: easy to construct, administer, and understand. Disadvantages: more time-consuming.

• Semantic Differential (itemized): seven-point scale with bipolar labels. Examples: brand, product, and company images. Advantages: versatile. Disadvantages: controversy as to whether the data are interval.

• Stapel Scale (itemized): unipolar ten-point scale, -5 to +5, without a neutral point (zero). Examples: measurement of attitudes and images. Advantages: easy to construct; can be administered over the telephone. Disadvantages: confusing and difficult to apply.

Summary of Itemized Scale Decisions

Table 9.2

1) Number of categories: although there is no single, optimal number, traditional guidelines suggest that there should be between five and nine categories.

2) Balanced vs. unbalanced: in general, the scale should be balanced to obtain objective data.

3) Odd vs. even number of categories: if a neutral or indifferent scale response is possible for at least some respondents, an odd number of categories should be used.

4) Forced vs. non-forced: in situations where the respondents are expected to have no opinion, the accuracy of the data may be improved by a non-forced scale.

5) Verbal description: an argument can be made for labeling all or many scale categories; the category descriptions should be located as close to the response categories as possible.

6) Physical form: a number of options should be tried and the best selected.

Balanced and Unbalanced Scales

Balanced scale (Jovan Musk for Men is):
Extremely good / Very good / Good / Bad / Very bad / Extremely bad

Unbalanced scale (Jovan Musk for Men is):
Extremely good / Very good / Good / Somewhat good / Bad / Very bad

Rating Scale Configurations

Fig. 9.2

Cheer detergent is:

1) Very harsh -------------------------------- Very gentle

2) Very harsh   1   2   3   4   5   6   7   Very gentle

3) A vertical column of dots, with “Very harsh” at the top, “Neither harsh nor gentle” at the midpoint, and “Very gentle” at the bottom

4) ____  ____  ____  ____  ____  ____  ____
Very harsh / Harsh / Somewhat harsh / Neither harsh nor gentle / Somewhat gentle / Gentle / Very gentle

5) -3   -2   -1   0   +1   +2   +3
(endpoints labeled “Very harsh” and “Very gentle,” midpoint labeled “Neither harsh nor gentle”)

Some Unique Rating Scale Configurations

Fig. 9.3

Thermometer Scale

Instructions: Please indicate how much you like McDonald’s hamburgers by coloring in the thermometer. Start at the bottom and color up to the temperature level that best indicates how strong your preference is.

Form:

Form: a thermometer graduated from 0 to 100 (markings at 0, 25, 50, 75, and 100), with 100 labeled “Like very much” and 0 labeled “Dislike very much.”

Smiling Face Scale

Instructions: Please point to the face that shows how much you like the Barbie Doll. If you do not like the Barbie Doll at all, you would point to Face 1. If you liked it very much, you would point to Face 5.

Form:

1 2 3 4 5

Some Commonly Used Scales in Marketing

Table 9.3

Construct / Scale descriptors:

• Attitude: Very bad / Bad / Neither bad nor good / Good / Very good

• Importance: Not at all important / Not important / Neutral / Important / Very important

• Satisfaction: Very dissatisfied / Dissatisfied / Neither dissatisfied nor satisfied / Satisfied / Very satisfied

• Purchase intent: Definitely will not buy / Probably will not buy / Might or might not buy / Probably will buy / Definitely will buy

• Purchase frequency: Never / Rarely / Sometimes / Often / Very often

Fig. 9.5

Scale Evaluation

• Reliability: test/retest, alternative forms, internal consistency

• Validity: content, criterion, construct (convergent, discriminant, nomological)

• Generalizability

Potential Sources of Error in Measurement

1) Other relatively stable characteristics of the individual that influence the test score, such as intelligence, social desirability, and education.

2) Short-term or transient personal factors, such as health, emotions, and fatigue.

3) Situational factors, such as the presence of other people, noise, and distractions.

4) Sampling of items included in the scale: addition, deletion, or changes in the scale items.

5) Lack of clarity of the scale, including the instructions or the items themselves.

6) Mechanical factors, such as poor printing, overcrowding items in the questionnaire, and poor design.

7) Administration of the scale, such as differences among interviewers.

8) Analysis factors, such as differences in scoring and statistical analysis.

Reliability

Reliability can be defined as the extent to which measures are free from random error, X_R. If X_R = 0, the measure is perfectly reliable. (In the underlying true-score model, an observed score X_O is the sum of a true score X_T, systematic error X_S, and random error X_R.)

• In test-retest reliability, respondents are administered identical sets of scale items at two different times and the degree of similarity between the two measurements is determined (see the sketch below).

• In alternative-forms reliability, two equivalent forms of the scale are constructed and the same respondents are measured at two different times, with a different form being used each time.
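In either case the reliability estimate is essentially the correlation between the two administrations. A minimal sketch with hypothetical scale scores:

# Test-retest reliability as the Pearson correlation between two administrations.
from statistics import correlation  # Python 3.10+

time_1 = [22, 18, 25, 30, 27, 19]   # hypothetical summated scores, first administration
time_2 = [21, 19, 24, 29, 28, 18]   # same respondents, second administration

print(round(correlation(time_1, time_2), 3))   # close to 1.0 => high test-retest reliability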

Reliability

Internal consistency reliability determines the extent to which different parts of a summated scale are consistent in what they indicate about the characteristic being measured.

• In split-half reliability, the items on the scale are divided into two halves and the resulting half scores are correlated.

• The coefficient alpha, or Cronbach's alpha, is the average of all possible split-half coefficients resulting from different ways of splitting the scale items. This coefficient varies from 0 to 1, and a value of 0.6 or less generally indicates unsatisfactory internal consistency reliability (a short computational sketch follows).
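A minimal sketch of coefficient alpha computed from its variance form, alpha = k/(k - 1) * (1 - sum of item variances / variance of total scores), using made-up item scores:

# Cronbach's alpha for a summated scale (rows = respondents, columns = items).
from statistics import pvariance

items = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
]

k = len(items[0])
item_vars = [pvariance(col) for col in zip(*items)]          # variance of each item
total_var = pvariance([sum(row) for row in items])           # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))   # values above roughly 0.6-0.7 suggest acceptable internal consistency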

Validity

• The validity of a scale may be defined as the extent to which differences in observed scale scores reflect true differences among objects on the characteristic being measured, rather than systematic or random error. Perfect validity requires that there be no measurement error (X_O = X_T, X_R = 0, X_S = 0).

Content validity is a subjective but systematic evaluation of how well the content of a scale represents the measurement task at hand.

• Criterion validity reflects whether a scale performs as expected in relation to other variables selected (criterion variables) as meaningful criteria.

Validity

Construct validity addresses the question of what construct or characteristic the scale is, in fact, measuring. Construct validity includes convergent, discriminant, and nomological validity.

Convergent validity is the extent to which the scale correlates positively with other measures of the same construct.

Discriminant validity is the extent to which a measure does not correlate with other constructs from which it is supposed to differ.

Nomological validity is the extent to which the scale correlates in theoretically predicted ways with measures of different but related constructs.
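Convergent and discriminant validity are usually examined through correlations. A minimal sketch with hypothetical scores: the new scale should correlate strongly with another measure of the same construct and weakly with a measure of a different construct.

# Convergent vs. discriminant validity via Pearson correlations (hypothetical data).
from statistics import correlation  # Python 3.10+

new_scale           = [12, 15, 9, 20, 17, 11]
same_construct      = [13, 16, 10, 19, 18, 12]   # established measure of the same construct
different_construct = [25, 33, 30, 28, 32, 32]   # measure of an unrelated construct

print("convergent:  ", round(correlation(new_scale, same_construct), 2))       # high (near 1)
print("discriminant:", round(correlation(new_scale, different_construct), 2))  # near 0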

Data Collection in the Field, Response Error, and Questionnaire Screening

Nonsampling Error in Marketing Research

• Nonsampling (administrative) error includes

• All types of nonresponse error

• Data gathering errors

• Data handling errors

• Data analysis errors

• Interpretation errors

Possible Errors in Field Data Collection

• Field worker error: errors committed by the persons who administer the questionnaires

• Respondent error: errors committed on the part of the respondent

Nonsampling Errors Associated With Fieldwork

Possible Errors in Field Data Collection

Field-Worker Errors

Intentional

• Intentional field worker error: errors committed when a fieldworker willfully violates the data collection requirements set forth by the researcher

• Interviewer cheating: occurs when the interviewer intentionally misrepresents respondents. May be caused by unrealistic workload and/or poor questionnaire

• Leading respondents: occurs when interviewer influences respondent’s answers through wording, voice inflection, or body language

Possible Errors in Field Data Collection

Field-Worker Errors

Unintentional

• Unintentional field worker error: errors committed when an interviewer believes he or she is performing correctly

• Interviewer personal characteristics: occurs because of the interviewer’s personal characteristics such as accent, sex, and demeanor

Interviewer misunderstanding: occurs when the interviewer believes he or she knows how to administer a survey but instead does it incorrectly

• Fatigue-related mistakes: occur when interviewer becomes tired

Possible Errors in Field Data Collection

Respondent Errors

Intentional

• Intentional respondent error: errors committed when respondents willfully misrepresent themselves in surveys

• Falsehoods: occur when respondents fail to tell the truth in surveys

• Nonresponse: occurs when the prospective respondent fails 1) to take part in a survey or 2) to answer specific survey questions

• Refusals (respondent does not answer any questions) vs. Termination (respondent answers at least one question then stops)

Possible Errors in Field Data Collection

Respondent Errors

Intentional

• Refusals typically result from the topic of the study or the potential respondent's lack of time, energy, or desire to participate

• Terminations result from a poorly designed questionnaire, questionnaire length, lack of time or energy, and/or external interruption

Possible Errors in Field Data Collection

Respondent Errors

Unintentional

• Unintentional respondent error: errors committed when a respondent gives a response that is not valid but that he or she believes is the truth

Possible Errors in Field Data Collection

Respondent Errors

Unintentional…cont.

• Respondent misunderstanding: occurs when a respondent gives an answer without comprehending the question and/or the accompanying instructions

• Guessing: occurs when a respondent gives an answer when he or she is uncertain of its accuracy

Attention loss: occurs when a respondent’s interest in the survey wanes

• Distractions: (such as interruptions) may occur while questionnaire administration takes place

• Fatigue: occurs when a respondent becomes tired of participating in a survey

How to Control Data Collection Errors

Types of errors and control mechanisms:

• Intentional field-worker errors (cheating, leading the respondent): good questionnaire, reasonable work expectations, supervision, random checks, validation

• Unintentional field-worker errors:
– Interviewer characteristics: selection and training of interviewers
– Misunderstandings: orientation sessions and role playing
– Fatigue: require breaks and alternate surveys

How to Control Data Collection Errors…cont.

Types of errors and control mechanisms:

• Intentional respondent errors:
– Falsehoods: assuring anonymity and confidentiality, incentives, validation checks, third-person technique
– Nonresponse: assuring anonymity and confidentiality, incentives, third-person technique

How to Control Data Collection Errors…cont.

Types of errors and control mechanisms:

• Unintentional respondent errors:
– Misunderstandings: well-drafted questionnaire; direct questions (“Do you understand?”)
– Guessing: well-drafted questionnaire; response options (e.g., “unsure”)
– Attention loss, distractions, and fatigue: reversal of scale endpoints; prompters

Data Collection Errors with Online Surveys

• Multiple submissions by the same respondent (it may not be possible to identify such situations)

• Bogus respondents and/or responses (“fictitious person,” disguises or misrepresents self)

• Misrepresentation of the population (overrepresenting or under-representing segments with/without online access and use)

Nonresponse Error

• Nonresponse: failure on the part of a prospective respondent to take part in a survey or to answer specific questions on the survey

• Refusals to participate in survey

• Break-offs (terminations) during the interview

• Refusals to answer certain questions (item omissions)

• A completed interview must be defined (acceptable levels and types of non-answered questions).

Nonresponse Error…cont.

• Response rate: the percentage of the total sample with which interviews were completed

• Refusals to participate in survey

• Break-offs (terminations) during the interview

• Refusals to answer certain questions (item omissions)

Nonresponse Error…cont.

• CASRO response rate formula (not mathematically correct):
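The formula itself is not reproduced here. As an illustration only, the sketch below computes a simplified response rate (completed interviews divided by all eligible sample units attempted); this is an assumed, simplified definition, not the CASRO formula.

# Simplified response-rate calculation (assumed definition, illustrative counts only).
completed            = 412
refusals             = 96
break_offs           = 37
not_reached_eligible = 155

eligible_units = completed + refusals + break_offs + not_reached_eligible
response_rate = completed / eligible_units
print(f"{response_rate:.1%}")   # 412 / 700 = 58.9%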

Reducing Nonresponse Error

• Mail surveys:

• Advance notification

• Monetary incentives

• Follow-up mailings

• Telephone surveys:

• Callback attempts

Preliminary Questionnaire Screening

• Unsystematic (flip through questionnaire stack and look at some) and systematic (random or systematic sampling procedure to select) checks of completed questionnaires

• What to look for in questionnaire inspection (a small screening sketch follows this list):

Incomplete questionnaires?

Nonresponses to specific questions?

Yea- or nay-saying patterns (use scale extremes only)?

Middle-of-the-road patterns (neutrals on all)?
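A minimal sketch of automated screening for two of the patterns listed above, assuming answers are recorded on 1-5 itemized scales; the function name and rules are illustrative.

# Flag questionnaires showing yea-/nay-saying (extremes only) or all-neutral patterns.
def screening_flags(answers):
    """Return screening flags for one completed questionnaire (1-5 scale answers)."""
    flags = []
    if all(a in (1, 5) for a in answers):
        flags.append("extremes only (yea-/nay-saying)")
    if all(a == 3 for a in answers):
        flags.append("middle-of-the-road (neutral on every item)")
    return flags

print(screening_flags([5, 5, 1, 5, 5]))   # ['extremes only (yea-/nay-saying)']
print(screening_flags([3, 3, 3, 3, 3]))   # ['middle-of-the-road (neutral on every item)']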

Unreliable Responses

• Unreliable responses are found when conducting questionnaire screening, and an inconsistent or unreliable respondent may need to be eliminated from the sample.
