Issues in Customer Satisfaction Research Ed Blair University of Houston

Outline:

Why do we care about satisfaction?
So… satisfaction with what?
Other measurement issues
Sampling issues
Method issues
Issues in using the results
  To improve or develop products
  To evaluate or improve operations
Psychological factors in perceived quality and satisfaction
Why do we care about customer satisfaction?

Satisfaction relates to buying:
  Satisfaction and customer loyalty/defections.
  Satisfaction and referrals/word of mouth.

Satisfaction relates to “market-driven quality”:
  At the front end, market research should be used to set the performance specifications against which quality is measured.
  At the back end, customer satisfaction is the ultimate quality test, so research is needed to measure satisfaction.

Satisfaction relates to product improvement/development opportunities:
  Satisfaction gaps
  Benchmarking
So… satisfaction with what?

Measuring satisfaction with the relationship vs. satisfaction with a transaction:
  Loyalty/defection and referrals relate more to the relationship.
    “I can afford to mess up, I just can’t afford to mess up the first time I do business with a customer.”
  Quality management, product improvement/development, and service improvement opportunities relate more to the transaction.

If we care about loyalty/defection or referrals, should we measure satisfaction… or the direct phenomena of interest?

What aspects of the transaction or relationship to measure?
  Elements are usually chosen based on perceived importance to customers and/or areas of operational responsibility.
  If measuring overall + specifics, measure overall first.
Other measurement issues:

What scale? 3, 4, 5, or 7 points? Labeled or unlabeled? Unipolar or bipolar? With or without a neutral point?
  Use measures that give room for improvement and support a call to action. If measures are near a ceiling, it is difficult to see improvements or to separate them from random error. (Ex: a well-known hospital, a well-known utility, etc.)
  A possible scale: completely satisfied, mostly satisfied, somewhat satisfied, dissatisfied.
When should measures be taken?
  Memory for service events may decay quickly, so measures must come soon after the event. This is less of a problem for products and major services.
  Satisfaction can vary with time. (Ex: hospitals, B & R)
  The best time to measure one dimension can differ from the best time to measure another. (Ex: order processing vs. product reliability)
Sampling issues:

Do we want all customers, or only customers who received service?
  Are you measuring transactions or relationships?
  Problems can be hidden by non-response if alienated customers don't respond. (Ex: course evaluations?)

Do you want customers or transactions? Customers or dollars?
  "The good news is that I only have one dissatisfied customer. The bad news is that it's my largest customer."
  In B2B, oversample key customers and report them separately.

How often can we measure them?
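The customers-versus-dollars question can be made concrete with a small sketch. The customer names, scores, and revenue figures below are invented for illustration; the point is that a per-customer average and a dollar-weighted average of the same satisfaction scores can tell very different stories:

```python
# Hypothetical data: satisfaction on a 1-4 scale plus annual revenue per customer.
customers = [
    {"name": "A", "satisfaction": 4, "revenue": 10_000},
    {"name": "B", "satisfaction": 4, "revenue": 15_000},
    {"name": "C", "satisfaction": 1, "revenue": 500_000},  # largest customer, dissatisfied
]

# Per-customer mean: every customer counts equally.
per_customer = sum(c["satisfaction"] for c in customers) / len(customers)

# Dollar-weighted mean: each customer counts in proportion to revenue.
total_revenue = sum(c["revenue"] for c in customers)
per_dollar = sum(c["satisfaction"] * c["revenue"] for c in customers) / total_revenue

print(f"per-customer mean:    {per_customer:.2f}")  # 3.00 -- looks healthy
print(f"dollar-weighted mean: {per_dollar:.2f}")    # 1.14 -- a serious problem
```

Here the one dissatisfied account is a third of the customer base but about 95% of the revenue, which is exactly the "only one dissatisfied customer" trap in the quote above.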
Method issues:

What method of administration?
  Personal interview? Outbound phone? Outbound web? Inbound phone? Inbound web? Service cards?
  Methods vary in cost, response rate, volunteerism, and the ability of line personnel to tamper. We normally want the highest response and the least volunteerism.

Can the system be gamed?
  “If there is any reason why you won’t be able to rate me a 5, I need to know about it now.”
  Is this necessarily a bad thing?
Issues in using satisfaction research (A)

“Satisfaction gaps” may be used to guide product improvement or product development. Some issues:
  The people who do buy your product can't tell you why people don't buy. Example: nursing homes.
  Data are always relative to the current market context. High satisfaction with current products doesn't mean that they can't be improved. Example: GTO.
  Low-scoring dimensions may not be the best priorities for improvement. Example: UH.
  Consider different interpretations for "satisfiers" and "dissatisfiers."
Issues in using satisfaction research (B)

Satisfaction data also may be used to evaluate or improve (service) operations. This can be frustrating for line managers because:
  Satisfaction data can show fluctuations over time that are unexplainable from an operational perspective (random sampling error).
  Satisfaction may be affected by exogenous factors such as price.
  Operational improvements may not improve satisfaction after they become the norm. Example: pay at pump.
  Haloing can cause confusion. Example: instructor on time.
  Results may inherently differ across business units or operational areas. Examples: hospital wards, hospital cities, medical care vs. billing.
  Knowing that customers are less than completely satisfied doesn't tell you how to improve.
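The random-sampling-error point can be put in numbers. As a rough sketch (the sample size and top-box rate below are hypothetical), the standard error of a sample proportion shows how much period-to-period movement noise alone can produce:

```python
import math

p = 0.70  # hypothetical true share of "completely satisfied" responses (top box)
n = 100   # hypothetical number of survey responses per reporting period

se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
margin = 1.96 * se               # approximate 95% margin of error

print(f"standard error: {se:.3f}")         # ~0.046
print(f"95% margin: +/- {margin:.1%}")     # ~ +/- 9 percentage points
```

With only 100 responses per period, a reported swing from 65% to 74% "completely satisfied" is well within sampling noise, even though a line manager may be asked to explain it as a real operational change.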
Issues in using satisfaction research (C)

To keep line managers from rebelling:
  Position satisfaction data as necessary, but acknowledge that they are soft.
  Use a group approach: “our” results, not “your” results. Example: Minnegasco.
  Focus on opportunity rather than evaluation to minimize defensiveness.
  Translate satisfaction data into an operational action plan with hard measures, and tie any incentives to that plan (rather than to satisfaction). Good example: Continental. Bad examples: …
Psychological factors in perceived quality

Perceived quality and prior beliefs. Examples: Bud, UH.
Perceived quality and inference. Example: appliance repair.
Perceived quality and expectations. Examples: pizza, flight arrival.
Perceived quality and cognitive frames. Example: value in use.
Perceived quality and attention. Examples: elevators, airplanes, restaurants.
Perceived quality and situations. Examples: stockbroker, appliance repair arrival time, restaurant service time.
Q&A