Research In A World Without Questions
By Tom Ewing and Bob Pankauskas
The paper explores the extent to which researchers can uncover insight and meet client
expectations without asking direct questions. It centers on observation, experimentation
and behavioural change, and critically explores a number of methods including mobile
ethnography, social media monitoring, behavioural economics, and mass observation. The
paper first discusses the theoretical need for indirect research, then explores its role in
understanding deep consumer context, before describing how to observe consumer
behaviour and create actionable hypotheses for behavioural change. The paper is not based
on a single project, but includes 8 brief case studies of work done in these and other fields.
A WORLD WITHOUT QUESTIONS
Introduction
Is it possible to do market research without asking direct questions?
The idea may seem like a parlour game – the research equivalent of peeling a satsuma in one go –
but there are excellent reasons to take it seriously. This paper will argue that “research without
questions” is not only possible, it’s often essential – it’s the best way to get at what people actually
do, not just what they say they do.
This paper particularly covers behavioural economics, observational and ethnographic research,
social media research and innovative qualitative techniques, and seeks to show the extraordinary
possibilities of a “world without questions”.
Separately, these areas have been the subject of much interest from researchers in recent years.
This paper offers a review of the field, but also introduces new material in three areas.
The first is a unifying framework for considering behaviour and decision making – one which
particularly lends itself to “research without questions”. This is covered in Section III of the paper.
This is designed to make these new research areas more practically useful and coherent, by giving
buyers a framework for working out what they already know, what they need to observe and what
they can actually do about it.
The second is a series of studies and experiments conducted across these areas by BrainJuicer. These
generally have no client involvement and appear in the paper as “BrainJuicer Experiments”. They are
self-funded experiments using publicly available tools, with the aim of creating research
hypotheses, not business effects.
And the third is a selection of case studies from Allstate Insurance, which has a long track record of
working with innovative techniques. These appear in the paper as “Allstate Case Studies” and offer a
‘client’s-eye view’ of the changing face of research. Not all the case studies involve indirect
observation – they show that real research work will often involve a balance between direct
intervention and observation.
Where appropriate we also look at work done by other research companies, and outside the
industry, in an attempt to offer a rounded picture of “research without questions”. And though we
will focus on research without direct respondent interaction, where direct questioning would suit an
aim better we will explore ways of doing it well.
In a research environment characterised by ever-dwindling response rates and concerns over panel
quality, there are clear pragmatic reasons to move away from direct questioning and task-setting.
What’s exciting is that by making this move you arrive at research which reflects people’s real
decisions and behaviour far better than direct questions. The paper acts as both a guide and a
clarion call for a kind of research based on immersion, observation and experiment.
I: The Case For Research Without Questions
Systems Of Thinking
Before we explain how to reduce our reliance on direct research, it’s worth asking why we might
need to.
The case for moving away from direct research is often couched in pragmatic terms: the threat of
falling response rates, matched against the opportunity created by the enormous increase in social
media and customer data.
But this is not the whole story. There is also a compelling theoretical case, based in how our brains
and decisions work.
According to the Nobel-winning psychologist Daniel Kahneman, the human mind contains two
systems of thinking – system 1 and system 2 – which influence our judgements and decisions. He
explains the systems, and explores their implications, in his essential book Thinking, Fast And Slow.
System 1 led decisions are fast, easy to make, rooted in experience and often implicit or
subconscious. System 2 led decisions are more considered, take more cognitive effort to make, and
we are more conscious of making them: they feel like thinking.
Kahneman offers the following example of the two systems in action – a simple, but famous maths
problem. “A bat and a ball cost $1.10. The bat costs $1 more than the ball. How much does the ball
cost?”
The answer is 5 cents (a 5-cent ball and a $1.05 bat) – but for the vast majority of people, the answer
that springs to mind is 10 cents, because the question primes our brains to deduct $1 from $1.10,
rather than work to solve the equation that forms the real problem. Earlier this year, BrainJuicer
added the bat and ball question to an unrelated questionnaire – only 16% of respondents got it
right, and 79% gave the obvious but wrong “10c” answer.
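For completeness, the algebra that system 2 has to engage is trivial once written down. With b as the ball’s price:

```latex
b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05
```

The intuitive “10 cents” answer fails the check it never receives: a 10-cent ball implies a $1.10 bat and a $1.20 total.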
What’s happening here, Kahneman says, is that our system 1 thinking process is urging us to accept
an apparently obvious answer. To get to the correct answer, we have to engage system 2 and work
through the problem. But very few do this – the default option is more attractive.
In this case, the system 2 answer was the correct one. But system 2 isn’t always more correct,
Kahneman says: what it mostly does is back up judgements made quickly and easily by system 1.
Most of our decisions are – for better or worse – led by system 1, and system 2 can then provide
convincing justification for said decisions if called upon to do so. As Kahneman says, “System 1 is the
Oval Office. System 2 is the Press Office.”
System 1 Decisions Vs System 2 Research
So what does this have to do with market research questions? Well, calling on people to justify their
actions is exactly what direct questioning tends to do. Direct response research often puts
respondents into a situation where it actively encourages them to think through their answers,
engaging their system 2 processing by asking them to actively recall past behaviour, express
preferences between artificial choices, or give number values to dry, formally written attributes.
There would be nothing wrong with this if it accurately captured behaviour, preferences or the
reasons behind them. But it often doesn’t. One example is a test on FMCG packaging carried out by
BrainJuicer to explore how the dual systems of thinking affected choices in a real world context.
BrainJuicer Experiment: Packaging With System 1 Appeal
We conducted this experiment for a client who was concerned that the test performance of
their packaging was not reflecting on-shelf performance. We hypothesised that in test
conditions, system 2 led decisions were taking precedence, and that in a store environment,
faster system 1 led decisions would be more important.
To demonstrate this we tested two packs. One (Brand B) had a smaller picture and gave
more pack space to a set of information including the nutritional benefits of the food. The
other pack (Brand A) was more visually appealing and less cluttered, but contained much less
information. Our hypothesis was that the visually appealing pack would have more intuitive,
“system 1” appeal.
Figure 1: Offline Pack Test Results.
To explore this – and to demonstrate how most research may not access these system 1 led
decisions – we tested the packs in two cells (see figure 1). Respondents in the first cell could
take as long as they liked to answer; the second cell had a time limit, designed to get to the
initial system 1 judgement without giving system 2 time to override it. Sure enough, we found
that respondents under time-pressured conditions preferred the visually appealing pack over
the info-rich pack. What’s more, this matched the experiences of the client – explaining why
their competitors’ more immediate, emotionally-appealing packaging was driving sales
increases in the time-pressured environment of the supermarket.
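To make the two-cell comparison concrete, the sketch below shows how such a result can be checked for significance. It is a minimal Python illustration – the choice counts are invented for the example, not the experiment’s actual data – using a standard chi-square test of independence.

```python
# Minimal illustration: do pack choices differ between two test cells?
# The counts below are invented for the example.
from scipy.stats import chi2_contingency

# Rows: test cells; columns: packs chosen (visual Brand A, info-rich Brand B)
choices = [
    [52, 98],   # untimed cell: leaning to the information-rich pack
    [109, 41],  # time-limited cell: leaning to the visually appealing pack
]

chi2, p_value, dof, expected = chi2_contingency(choices)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Pack preference differs significantly between the cells.")
```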
What do we learn from this? That decisions taken in real-life contexts can be very different from
decisions recalled or simulated in a research environment. The implication is that we need a kind of
research which can better understand system 1 decision making – one which is more observational
and experimental in nature and minimises direct response. And this imperative dovetails with
marketers’ increased access to data and the decline in traditional response rates.
The case for supplementing direct research is clear. But how to actually do it? How can you tap into
emotional drivers and system 1 thinking to really understand consumers?
The Importance Of Experiment
Like most research, “research without questions” is all about understanding people – specifically
your customers. But it focuses on what they do, not what they say they do. It particularly focuses on
the moments and environments in which choices are made, and what triggers these choices.
People can’t necessarily tell us much about what triggers choices. Modern research increasingly
draws inspiration from a literature of psychological and social experiments – which have entered the
popular mind thanks to writers like Malcolm Gladwell or Dan Ariely. These experiments generally
work to reveal the hidden triggers behind people’s decisions, for instance:

• Twice as many train passengers buy pretzels when they see people eating pretzels as they board. (Herrmann, 2011)
• Israeli judges are more likely to grant parole just after they’ve had a break for a meal. (Danziger et al, 2011)
• Playing German music in a wine shop makes German wine outsell French by two to one; playing French music makes French wine outsell German by five to one. (North et al, 1999)
What do these examples have in common? They all involve interventions that change the context of
the decisions. And in no case are the decision makers likely – or even able – to admit the influence of
those interventions. To understand people’s decisions it’s necessary to both explore and experiment
with the decision-making context – rather than assuming direct questioning can get you there.
The problem is that the scientific literature on behavioural change comprises a myriad of
experiments stretching back decades, while the typical insight manager has only weeks to
understand and solve their problems. For a start, it’s only worth experimenting on factors that
are actually within a brand owner’s power to change.
If they’re to explore the role of context, they desperately need a framework within which to operate
– to help them understand quickly which kind of context to change, and to build hypotheses around
what might happen. They need a structure for understanding the context of people’s decisions.
Context, Near And Far
There are two kinds of context for people’s decisions – in this paper we’ll call them near context and
far context.
Near context includes things like the music playing in wine shops, or the rumbling stomachs of Israeli
judges. It’s what can influence a decision in the moment it’s made – and near context is what brands
can more often hope to control and manipulate.
Far context, on the other hand, sits in the background of people’s decisions. It includes things like
cultural and social norms, the personal background of customers, and so on. These can’t necessarily
be altered in the short term, but for behavioural interventions they can and must be understood.
Understanding of the things a brand can’t change is the bedrock for experimentation on the things
they can.
For example, think about the wine shop experiment, where playing German music increased the
sales of German wine. This was a British wine shop – and the mostly British customers would have a
shared cultural knowledge of what German or French music sounded like. In China, without this
cultural knowledge, the intervention would have to be redesigned. The research world is full of
examples where ‘priming’ respondents with different information results in different answers.
Does understanding these underlying factors need direct questioning? No: there are a host of
innovative techniques you can use to get into your customer’s world without needing to ask them
anything. In the next section of the paper we’ll explore ways of understanding background context
without direct research.
II: The Far Context
Far Context And Synthesis
Far context doesn’t change that much – there are cultural shifts and market disruptions to take into
account, but most far context information has quite a long half-life of usefulness. So while speed of
research and freshness of information are important, what’s really vital is how the information is
presented.
Background information on the customer’s world needs to be three things. It has to be easily
spreadable and transferable within an organisation – which is one reason infographics and
information design have become such a hot issue recently. It has to be clear, so that everyone who
receives it understands the same thing (as much as is possible). And it has to be emotionally
satisfying, or else it won’t be paid attention to.
Clearly, these things apply whether you’re dealing with information sourced directly from research
respondents or not. But directly sourced information is rarely enough to understand the far context.
Stan Sthanunathan, head of insight at Coca-Cola, spoke at The Market Research Event in 2012 on the
need for researchers to move from analysis to synthesis – in other words, from interpreting a single
set of data to pulling together multiple strands of knowledge. Good far context information is
inherently this kind of synthesis.
But great synthesis should also lead to an over-arching insight, a surprising and evocative essence
about why consumers do what they do in a certain context.
Allstate Case Study: Insights on Mayhem
Allstate uses a simple ABC model for insights – Attitude, Behaviour, Context. Consumers
behave the way they do because of a specific attitude in a specified context. If you change
the context you can change the behaviour. Allstate used this consumer insight framework to
help develop the successful Mayhem campaign, in which a “roguish character” personifies
the bad things which can happen to your car or home anywhere or anytime. Will your lower
cost insurance firm pay for the damages? The advert puts the focus on the near context –
how you feel when the unthinkable actually happens – as a way of appealing emotionally as
well as simply competing on price. Mayhem has allowed Allstate to engage with a much
broader group than its traditional target – and has even generated substantial ‘talk value’
for the brand, as seen by the more than a million fans the campaign has gained on Facebook.
Synthesis of this kind begins with desk research – a quick refresher on industry data sources and
market conditions, and an understanding of the basic information to be gained from existing
customer databases and prior survey data. In terms of conducting observational and experimental
research, the idea is to cover off anything which might shape or limit people’s choices –
demographics, income statistics, the regulatory environment and so on.
Adding Richness To Far Context
Covering off the basics gives you clear information – and clarifies the gaps you might need to fill. But
it doesn’t always provide much explanatory or emotional richness, and it certainly doesn’t bring your
customers’ world to life. How do you go deeper into your customers’ world?
Market researchers have long drawn inspiration from psychoanalysis, where the role of the
“question” is very different and less straightforward than in research.
Allstate Case Study: Emotional Motivators
Allstate needed to understand the emotional motivators for buying car insurance –
something direct questioning cannot get an accurate picture of. Instead Allstate used the
technique of regressing a person to their first experience with a product or service – the
point where initial attitudes may be imprinted that persist throughout a person’s life.
Allstate commissioned 60 one-on-one, in-depth emotional inquiries to get to these underlying
motivations. Participants were guided through relaxation and visualization exercises to
describe their very first experience with auto insurance. The focus was on real life
experiences, which allowed the researchers to tap into subconscious mental models
embedded in long-term memories. This both circumvents conventional responses to direct
questions and reveals the underlying personal and cultural factors which affect decision
making.
The study generated a rich body of insight about the emotional meanings and metaphors
attached to the mundane world of auto insurance – which, perhaps not surprisingly, does
trigger strong emotions once you get below the surface. Among the insights that can be
shared: insurance was seen as a rite of passage to adulthood, even a badge of independence
from parents, and consumers aspired to be ‘accepted’ by the more reputable, higher-tier
insurance carriers. The information was shared with Allstate’s advertising agency
to help strengthen the emotional appeals in the Dennis Haysbert “Are You in Good Hands”
advertising.
Beyond psychoanalysis, the last twenty years have seen increasing interest from researchers in
structural approaches: broad theoretical structures which can provide lenses for understanding
people and their behaviour. One example is semiotics, which explains consumer behaviour through
the lens of signs in culture and the meaning created by their interaction. Another might be
evolutionary psychology, which explains behaviour by looking at how it might have adapted to
maximise reproductive fitness and survival.
Full discussion of these theoretical techniques lies outside the scope of this paper – and evolutionary
psychology and semiotics in particular are not often seen as complementary theories! But they have
two important things in common: they concern themselves with the deep background context of
people’s behaviour, and they analyse it via applying their theories to observation, rather than
through direct questioning. Semioticians tend to work through the analysis of cultural artefacts;
evolutionary psychologists look for the evolutionary imperatives behind observed behaviour;
psychoanalysts delve into formative experiences – where we know unconscious habits form. Each
might prove useful for filling out the far context of a brand’s customers – particularly the social and
cultural norms they operate under.
But understanding cultural norms or evolutionary drives can create an abstract picture of the
consumer world: it doesn’t “bring the consumer to life”. This is something of a research cliché, but
it’s also the final and most crucial piece of the far context. “Bringing to life” is shorthand for
acquiring the gut, emotional understanding of what your customers typically do and why.
Bringing Customers To Life
Creating a vivid, insightful image of customers in the minds of decision makers is one of research’s
most important roles. This vital job has often been done by segmentation.
Not all segmentations try to ‘bring customers to life’: segmenting the customer base by transactional
data (value, frequency, etc.) is usually more abstract and linked to business planning. But
segmentations which use direct research tend to involve aggregating data and then
anthropomorphising it – turning it into a set of typical customers or potentials.
Because they are intended to create a broad portrait, these segments often lead to boring or
stereotypical images of the consumer, and say almost nothing about their feelings or everyday
experiences. If market research is the voice of the consumer, segmentations are Autotune.
How else can research bring the customer to life? In the world of direct questioning, MROCs –
Market Research Online Communities – offer a way forward: “always-on” access to the consumer,
embedded within an organisation, pioneered by providers like Communispace and more
recently InSites. But while MROCs are certainly inspiring and create a real emotional connection to
consumers, they still rely on direct questions, and so their data can fall into the same traps of
“System 2” led post-rationalisation as other direct techniques. Can we be more observational?
Later in this paper we’ll talk about using mobile ethnography to understand behaviour, but that
works best with a pre-defined behaviour you’re looking for. The point of understanding the far
context for decision making is getting to the “unknown unknowns”: you need to know about
the rest of someone’s life, so you can better fit that behaviour in. And you often won’t know exactly
what you’re looking for.
What about social media monitoring, which we’ll examine in more detail later on? It’s observational,
and it gives us access to apparently spontaneous behaviour – but like segmentation it tends to be
understood in aggregate, which is better for understanding than empathy.
So another part of dealing with far context ought to be getting into consumer lives by disaggregating
data, restoring its specific qualities without removing its usefulness as a broad picture. Berlin firm
Philter Phactory’s Weavrs tool offers a way to do this. It uses keyword-based search algorithms to
create ‘bots’ or digital personae that can exist independently online and manifest interests and
experiences deriving from the moods, locations or other data they were originally programmed with.
BrainJuicer are not the only research firm to have used Weavrs, but their DigiViduals tool, developed
in conjunction with Philter Phactory, uses them for precisely the purpose described above: restoring
emotional richness to consumer understanding.
There are several ESOMAR papers which explain the DigiViduals concept (for example ap Hallstrom
and Shaw, 2012), so we’ll explore them via a case study from Allstate Insurance.
Allstate Case Study: Bringing Segments To Life
Work done with Allstate Insurance shows the power of an approach which looks to
disaggregate social media data in this way. After a major segmentation study, Allstate had
clearly identified several distinct segments. They had detailed descriptions of the consumers
in each segment, providing vivid personas along with their attitudes and insurance
preferences. But they wanted something more meaningful – a wider context they could not
simply get from direct questioning.
To get this they used DigiViduals. DigiViduals are digital personae designed as a virtual
representation of a customer or consumer segment. The methodology begins with creating a
persona: programming the software with demographic information – age, location,
profession – and a set of keywords representing interests and emotions. These keywords are
used to trawl social media sources – like YouTube, Twitter, Flickr, and other platforms – to
create a stream of content including videos, photos, music, places visited, and so on.
This content forms the material for qualitative analysis of the DigiVidual’s life, worldview and
behaviour – creating a picture of the consumer which is often more insightful and
emotionally compelling than one built out of aggregate data.
Allstate’s experience with DigiViduals was a positive one. The DigiViduals added another
layer of meaning on top of the segmentation profiles – providing a visceral picture, using
audio and visuals, of who segment members were as people. DigiViduals delivered
understanding about the emotional motivators in their lives – were they looking back on
their lives or forward into the future? How were they responding to changes and risks in
their lives? They provided a sense of what was important to segment members and
consequently gave clues to what they were likely to prefer and why. Adding this layer of
understanding has helped enhance communications efforts as well as improving the
‘intuitive’ knowledge levels among senior managers.
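For readers who want to picture the mechanics, here is a minimal sketch of keyword-driven persona aggregation in the spirit of the approach described above. It is our illustration, not BrainJuicer’s or Philter Phactory’s actual implementation: the Persona structure and the search_platform stand-in (a placeholder for each platform’s real search API) are hypothetical.

```python
# Hypothetical sketch of a keyword-driven digital persona.
# search_platform() is a stand-in for each platform's real search API.
from dataclasses import dataclass, field

@dataclass
class Persona:
    age: int
    location: str
    profession: str
    keywords: list = field(default_factory=list)

def search_platform(platform: str, query: str) -> list:
    """Placeholder: return content items matching a query on a platform."""
    raise NotImplementedError("Wire up each platform's search API here.")

def content_stream(persona: Persona, platforms: list) -> list:
    """Trawl each platform with the persona's keywords, pooling results
    into a single stream for later qualitative analysis."""
    stream = []
    for platform in platforms:
        for keyword in persona.keywords:
            stream.extend(search_platform(platform, keyword))
    return stream

segment_member = Persona(age=34, location="Chicago", profession="teacher",
                         keywords=["new home", "family budget", "road trip"])
# stream = content_stream(segment_member, ["twitter", "flickr", "youtube"])
```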
Can this vivid emotional context be accessed without using a proprietary technique? Yes: at the
heart of the DigiVidual is qualitative analysis of found online material, which is a skill any good social
media researcher will possess from analysis of brand conversations, for example. DigiViduals allow
for more serendipitous surfacing of this material, though, which makes them – in BrainJuicer’s view
– more surprising and emotionally compelling.
III: The Near Context
Understanding Near Context
We’ve looked at the far context of consumer decisions – the market background; social and cultural
norms; and consumer segments – and we’ve discussed ways to approach those without doing direct
consumer research.
But the ultimate point isn’t simply to understand consumers: the point is to change their behaviour.
And for that you need to understand their immediate decision contexts – the near context. The
problem with understanding the near context is that it can be extremely hard to work out which
behavioural influences to focus on.
It’s tempting to zoom in on traditional marketing triggers – pack, pricing, promotions, and so on –
because these are things under the brand’s direct control. But as case studies across psychology and
behavioural economics show, there are a vast number of other elements that might influence
choices. People make so many decisions, influenced by such a bewildering range of factors – from the
weather, to the music playing, to something a friend said on Facebook – that it can be very difficult
to work out where to focus.
The Behavioural Model
What’s needed first is a way of structuring these contextual factors – creating a framework that
allows us to understand them better. Once this structure is in place, it becomes easier for marketers
and insight managers to focus on the moments of decision and what might influence them.
BrainJuicer’s approach to this has been to develop a ‘behavioural model’ to create this structure. It
emerged out of the company’s interest in the advances in behavioural economics and social
psychology described earlier in the paper, which seemed to promise radical new approaches to
researching consumer decisions. BrainJuicer saw a need for a framework for thinking about
Kahneman’s “system 1 led decisions” and the various biases and heuristics which affect them.
This is not the only way of structuring the findings from behavioural science and psychology. Other
research firms, for instance The Behavioural Architects and Tapestry Works, have models of
behaviour which differ in their emphases but not strongly in the overall content. What is generally
agreed is that a model is necessary to make the vast sea of important behavioural findings more
practically useful to research buyers. BrainJuicer’s approach is offered as an example of how to
structure the material.
Figure 2: BrainJuicer’s behavioural model.
The behavioural model (see figure 2) breaks down the factors that affect decisions into three areas:
environmental, personal and social. Before going into more detail and examples, we’ll offer a brief
summary:
Environmental factors include the decision-making environment: anything from a store, through a
website, to a phone conversation or meeting room. But they also include the choice architecture –
how choices are presented to the decision maker: for instance default options, opt-in vs opt-out
choices, and so on.
Social factors are more self-explanatory. Humans are a species evolved to copy other people, so
social factors in decision making include any evidence of other people’s behaviour. This doesn’t have
to involve directly witnessing behaviour – it can also include evidence of that activity, like a “views”
count on a YouTube video.
And finally, personal factors include an individual’s emotional state – happy, sad, angry, etc. – and
their visceral state – hunger, thirst, etc. – both of which might affect a decision. But they also
include the cognitive biases, common to everybody in some degree, which skew decision making –
like loss aversion and confirmation bias.
All these areas are interlinked – for instance, changes in the choice architecture can be intended to
trigger particular cognitive biases. But thinking of them separately makes it easier to focus on what a
brand owner can change and ought to experiment with.
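One practical way to use the model is as a structured checklist when logging observed decisions, so that each observation records something under all three headings. The sketch below is a hypothetical illustration of that idea – the field names are our own shorthand, not a formal specification of the model.

```python
# Hypothetical sketch: logging an observed decision under the three
# headings of the behavioural model. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class DecisionContext:
    environmental: dict = field(default_factory=dict)  # setting, choice architecture
    social: dict = field(default_factory=dict)         # evidence of others' behaviour
    personal: dict = field(default_factory=dict)       # emotional/visceral state, biases

observation = DecisionContext(
    environmental={"setting": "supermarket", "queue": "long",
                   "default_option": "multipack"},
    social={"others_buying": True, "ratings_visible": False},
    personal={"mood": "rushed", "visceral": "hungry"},
)
```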
It’s important to understand these areas because most of these factors simply couldn’t be measured
through direct questioning. They are the building blocks of effortless, unconscious system 1 led
decisions, and so people – who like to see themselves as autonomous, individual actors – are less
willing or able to assign these factors much importance in decision making. In fact they are crucial –
the cornerstone of “research without questions” and how we can get to what people do, not what
they say they do.
Environmental Factors
You can summarise the way environmental factors work on system 1 led decisions by saying that
people sense, choose and act.
System 1 led decisions are highly influenced by the physical environment they’re taken in: we stress
“sense” because this environment is multisensory – as the music in a wine shop example proves.
Consumers take in sensory information very rapidly, so marketers need to prime decisions by
appealing to those senses: visual cues are vital, and for some categories smell and touch can play an
important part. Think for instance of the way food smells prime consumers to act in a market, or the
way online and mobile app designers use textured images to evoke tactility and sensory pleasure.
Armed with this information people choose from the options presented to them. How this choice is
framed – the choice architecture - matters a great deal. For instance, when the Economist offered
people a straight choice of a web-only subscription or a more expensive print subscription, the
majority picked the cheaper (web-only) option. But when they added a third choice, web and print,
at the same high price as print-only, a majority picked the web and print bundle. The magazine had
successfully traded subscribers up, simply by framing the choice as getting something for free rather
than paying more. (Ariely, 2009)
Manipulating the choice architecture is already a major part of research: new product assessment
has become more predictive by using choice-based questions rather than simple monadic tests,
because it seems to be better aligned with people’s actual behaviours. And framing choices works in a
survey context too: Jon Puleston of GMI has presented many examples where framing questions in a
particular way (for example asking “what clothes would you wear on a first date?” instead of “what’s
your favourite outfit?”) has given much richer data. But of course framing doesn’t just enrich data –
it changes it too.
And finally, people have to act on these choices, and the environment plays a huge role in
determining whether they do. Queue length in stores, unwieldy login pages on websites, daunting
terms and conditions for services – all these can place barriers in the way of converting a choice into
an action. So the final role for marketers is to trigger that action.
Social Factors
One of the results of the rise to prominence of social media has been a renewed focus on how
important the social transmission of behaviour is. As Mark Earls puts it in his book Herd, moving the
focus from the individual to the social is a kind of Copernican revolution for marketing, so used is the
discipline to thinking in terms of the individual making autonomous choices. But most individual
choices are made unconsciously, and – as the pretzel-eaters on the train show – many are made in
response to other people.
Our three word mantra for describing social behaviour is see-share-copy. People see other people
doing things, they broadcast their own decisions, and they copy those made by others. The word
“share” has become familiar as a social media concept, but of course you don’t need Facebook or
Twitter to share your decisions – it’s happening as soon as you leave the house (and often before).
The main difference is that online, social interaction – and popular decisions – can be very easily
measured. “Social proof” – the factor that guides a lot of people’s choices, whereby a popular
decision is a safer and easier one to make – is readily apparent online. A site like Amazon bakes it
into its interface at almost every turn, constantly telling visitors what’s popular, what people making
similar choices also did, and signalling via star ratings whether other users feel a choice is the right
one.
So from a marketer’s perspective the crucial thing is that either the communications around your
brand, or ideally the decision to use it, are seen, shared and copied – to get the benefits of social
proof. This ties in with what we already know about advertising and brand share, as detailed by
Byron Sharp in his book How Brands Grow, where he talks about “fame” and “availability” as the
dynamics behind brand growth: empirical evidence, Sharp says, suggests that attracting new users is
always more important than trying to create brand loyalty. If we are a species of copiers, the
decision to use a brand needs to be as visible and copy-able as possible.
Personal Factors
Finally we have the ‘personal’ factors in a decision – the part of the decision that really does rely on
what’s happening in an individual’s head. Our summary of the personal influence on the decision
making process is that people feel, do and think – and the order is very important!
As the neuroscientist Antonio Damasio puts it, “We are not thinking machines that feel. We are
feeling machines that think.” Feeling tends to come before doing: a lot of system 1 decisions are
made in response to emotional reactions, and in general understanding the emotional response to
something (like a piece of communication, or a brand in general) is a better way to predict behaviour
around it than asking about that behaviour directly. Of course thinking still plays a role – when
decisions are genuinely difficult, for instance. But even then, there is usually a default decision which
people will find easier to take.
So BrainJuicer’s advice to marketers on personal factors is to work to make the decisions you expect
from people fun (emotionally appealing), fast, and perhaps most importantly easy to make.
IV: Data Without Questions
This model of behaviour gives you a framework for identifying behavioural interventions and
thinking about behavioural change. To flesh it out, though, you still need reliable information on
what decisions people are making at the moment.
How do you get behavioural data – and since people are poor witnesses to their own behaviour, how
do you get it without asking questions? This is where many newer research techniques – like social
media monitoring and mobile ethnography – have an opportunity to shine.
But we should approach them with caution. The value of insight to the client lies in the new
information that it provides. If social media monitoring, for instance, reflects opinions and attitudes
around a brand which a research buyer is already aware of, it offers little in the way of extra value.
Meaning From Social Media
In the research industry, the idea of ‘research without questions’ is probably most strongly
associated with social media research. The dominant metaphor for social media among researchers
is as the “ultimate focus group” – an endless source of free insight and information on how
customers relate to a brand. There are various ways of accessing and analysing this information – be
it sampling or scraping, text analysis or sentiment analysis. What they have in common is an
emphasis on social media content – aggregated to reveal overall sentiment or buzz, or drilled into to
understand specific issues.
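At its simplest, the content side of this work is word counting. The sketch below shows lexicon-based sentiment scoring in its most basic form – an orientation aid only, with invented word lists; commercial sentiment engines are considerably more sophisticated.

```python
# Lexicon-based sentiment scoring at its most basic. The word lists are
# invented for illustration; real sentiment engines are far richer.
POSITIVE = {"love", "great", "delicious", "recommend"}
NEGATIVE = {"hate", "awful", "broken", "avoid"}

def sentiment(post: str) -> int:
    """+1 per positive word, -1 per negative word."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = ["love the new pack great taste", "avoid this awful service"]
buzz = len(posts)                                  # volume of mentions
net_sentiment = sum(sentiment(p) for p in posts)   # aggregate sentiment
print(buzz, net_sentiment)
```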
In parallel to this there’s an area of social media research that focuses on the context of messages –
who sends them, who copies them, how they spread across which networks, and so on. This is often
framed in terms of ‘influence’, and non-research start-ups like Klout or PeerIndex have attracted a
lot of attention by looking to score individuals’ influence on social media. But a network is an object
that can be studied independently of the individuals within it.
In his recent book, I’ll Have What She’s Having, Mark Earls talks about the different ways people
copy one another or choose independently. Because people usually think of themselves as choosing
independently rather than copying, it is incredibly difficult to understand the uptake of products or
choices without looking at the market at a network level. Earls has shown that by doing this – for
instance by looking at the diffusion curves of successful products in a market – you can work out
how people copy one another (whether they tend to copy experts or simply imitate peers, for
instance). This knowledge would be very difficult, if not impossible, to get via direct questioning.
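One standard way to formalise a diffusion curve – our example, not necessarily the formulation Earls uses – is the Bass model, where the balance between two coefficients indicates how far adoption is independent versus copied:

```latex
\frac{f(t)}{1 - F(t)} = p + q\,F(t)
```

Here F(t) is the cumulative proportion of adopters at time t, f(t) its rate of change, p the coefficient of innovation (independent choice) and q the coefficient of imitation (copying). A market where q far exceeds p is dominated by copying.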
From a behavioural perspective, social media monitoring can have definite advantages over direct
questioning. While tweets, blog posts, comments and the like are self-reported – and hence suffer from
our inability to witness our own behaviour accurately – they can also offer better access to
emotional reactions to brands and communications. As sentiment analysis improves, they are also a
potential option for brand tracking – though social media research cannot yet offer much sampling
precision.
However, one should remember what social media really is – people talking to other people about
what interests them at the moment. Many product categories are not ‘interesting’, and for these
social media has less value to market researchers. And the most accessible platforms present their
own challenges. Blogs offer the opportunity for well-thought-out comments – but these may well be
the product of system 2 post-rationalisation, rather than a reflection of real decision-making factors.
And on Twitter – the go-to destination for monitoring as long as Facebook remains mostly walled
off – the prevalence of brief, almost superficial tweets makes analysis hard for the opposite reason.
The main issue though is that while comments about brands and behaviour relating to them may be
common, it’s a lot harder to tease out information on the near context of that behaviour. Someone
tweeting that they are buying a can of coke, for instance, is not likely to offer much information
about the environmental or social factors around that choice – even if they mention they’re thirsty.
They will either be unconscious of it or not consider it relevant. For behavioural purposes, social
media runs into the problem of what goes unsaid and unseen.
Mobile Tools
What about mobile tools? Mobile is a crucial tool for capturing the near context of behaviour – it’s
the only research tool which accompanies the consumer in the decision-making moment.
A lot of the early discussion around mobile research revolved around the issue of whether it would
be possible to serve surveys across a range of handheld (and then tablet) devices. This seems to
have been quite the wrong approach. Instead of direct questioning, the most successful mobile
approaches have empowered participants to create research data which is shaped by their lives and
experiences, not by the artificial structure of a questionnaire.
Two examples of these task-based, question-free approaches are mobile ethnography, and a
touchpoint-based diary approach.
Mobile ethnography allows participants to define and capture context themselves. Typically
participants are set tasks. Unlike in traditional ethnography, where perceptions of behaviour are
filtered through the ethnographer, mobile ethnography asks participants to curate their own
behaviour by filming it. This can lead to particularly rich analysis, as the camera captures elements of
the context which participants do not describe or even notice.
BrainJuicer Experiment: Mobile Dog Ethnography
This study is an example of how mobile ethnography, while self-reported, can still capture
the richer context of consumer behaviour.
BrainJuicer co-funded an experiment with Del Monte on dog owners’ attitudes to their pets
and treating – we asked people to film themselves feeding and walking their dogs. This
captured a lot of very vibrant and interesting behaviour, from both owners and dogs. One pet
was filmed splashing its dry food with water from another bowl to make it wetter, for
instance. As is common in mobile ethnography work, what was not described was as
important as what was. One participant stressed the high level of discipline she imposed on
her pet, but was seen in reality being inconsistent, rewarding the dog even though it was
misbehaving.
For this study we used the EthOS ethnography app for iPhone, which is publicly available. It
was developed by Siamack Salari of Virtual Lives and allows researchers to easily put
together task-based ethnographic projects. From the research perspective, the most difficult
part of the methodology remains the analysis stage, which can be time-consuming. From Del
Monte’s perspective, the experiment was a qualified success, and they learned more from it
than from standard ethnographic or observational methods.
Mobile diaries are another useful form of task-based mobile research which gets at real behavioural
context. Pioneered by MESH Planning in the UK (see Blades, 2008), they tend to involve participants
using mobile devices to record every encounter they have with a brand or particular communication
– TV ads, billboards, word-of-mouth and social media experiences, and so on. They provide details
on the context of the encounter and their reaction – which can include photo or video evidence
allowing researchers to tease out details which might have had an impact on behaviour.
Allstate Case Study: Mobile Touchpoints
Allstate has also experimented with mobile diaries. For a period of three months, we
continuously recruited a sample of consumers to provide a 7-day diary covering 20+ consumer
touchpoints for four major insurance brands, including Allstate. This gave us a robust sample
of 2,258 people reporting their brand encounters at the moment of experience. They texted
only three things: what the touchpoint was, which brand it involved, and the quality of the
experience.
Among other findings, this work was useful in identifying the real reach of various vehicles
for a major campaign – moving away from recall-based tasks. It identified the cumulative
reach of television advertising, digital banners, word-of-mouth and social media. (It showed,
not surprisingly, that social media is a low-reach media vehicle relative to other options.)
This research has just been completed, but the work shows promise for reconsidering
media allocations and opening up further discussions on cross-media effectiveness.
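As an illustration of how such diary records yield reach figures, here is a minimal sketch. The record format loosely mirrors the three texted fields described above; the data and panel size are invented.

```python
# Minimal sketch: cumulative campaign reach from touchpoint diary records.
# Records mirror the three texted fields: touchpoint, brand, quality.
records = [
    {"person": 1, "touchpoint": "TV ad",         "brand": "Allstate", "quality": 4},
    {"person": 1, "touchpoint": "social media",  "brand": "Allstate", "quality": 3},
    {"person": 2, "touchpoint": "TV ad",         "brand": "Allstate", "quality": 5},
    {"person": 3, "touchpoint": "word of mouth", "brand": "Allstate", "quality": 4},
]
panel_size = 4  # invented panel size

def reach(touchpoints):
    """Share of the panel encountering the brand via any listed touchpoint."""
    reached = {r["person"] for r in records
               if r["brand"] == "Allstate" and r["touchpoint"] in touchpoints}
    return len(reached) / panel_size

print(reach({"TV ad"}))                                    # TV alone
print(reach({"TV ad", "social media", "word of mouth"}))   # cumulative reach
```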
BrainJuicer Experiment: The Visceral Shopper
In a self-funded experiment, BrainJuicer used mobiles in a similar way in an attempt to better
understand the influence of behavioural factors – personal, social and environmental – on
shopping habits.
The methodology was as follows. BrainJuicer recruited participants who were happy to share
shopping receipts and used mobile devices to deliver a short survey immediately before their
shopping experience, to capture their emotional and visceral states – and some basic
environmental information. After the shopping experience, BrainJuicer collected the
shoppers’ receipts, noted the time taken in-store, and delivered a second survey asking about
emotional and visceral states.
The results were fascinating. People really do spend more when they’re hungry, both in terms
of number of items and average price. When people are anxious they buy less on impulse,
but when people are happy they take less notice of promotions. And environmental factors
play a huge role. If participants took a trolley not a basket – even when only planning a small
shop – they spent more time in the store, while the sight of long queues cut down the
number of items bought.
As a self-funded experiment, this was not designed to meet specific business issues. However,
the results have been interesting to several retail clients in terms of the fresh perspective
they provide on in-store behaviour. Aside from helping us understand the context of
shopping, this experiment proved how important collecting data in the moment can be.
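A minimal sketch of the kind of analysis involved – invented figures standing in for the matched surveys and till receipts – would group basket size and spend by pre-shop visceral state:

```python
# Minimal sketch: average basket by pre-shop visceral state.
# The trip data is invented, standing in for matched surveys and receipts.
from collections import defaultdict

trips = [
    {"state": "hungry",     "items": 24, "spend": 61.40},
    {"state": "not hungry", "items": 15, "spend": 38.90},
    {"state": "hungry",     "items": 19, "spend": 52.10},
    {"state": "not hungry", "items": 12, "spend": 30.75},
]

totals = defaultdict(lambda: {"n": 0, "items": 0, "spend": 0.0})
for t in trips:
    bucket = totals[t["state"]]
    bucket["n"] += 1
    bucket["items"] += t["items"]
    bucket["spend"] += t["spend"]

for state, b in totals.items():
    print(state, b["items"] / b["n"], round(b["spend"] / b["n"], 2))
```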
Both mobile ethnography and diary-based research do involve direct interaction with participants –
though not via traditional surveys and questions. They are an excellent example of hybrid
methodologies emerging where pure surveys and pure observation individually would not meet
business needs.
But mobiles also fit into another growing research trend – the collection of passive data.
Passive Data Collection
Passive data collection means collecting data automatically, without requiring any participant
intervention in what is or isn’t recorded. In some areas of research this has been happening for a
while, particularly online, where the ability to automatically track clicks and URL trails has meant
that web audience measurement has been ‘passive’ for more than a decade. But two developments
have moved passive collection further into the research mainstream.
One is mobile – the ability of mobile devices to track not just in-device activity (apps used, internet
usage, etc.) but also location, temperature, even physical activity via accelerometers. Using and
analysing this data is often easier said than done, but the range of data mobile devices will be able to
capture is only likely to increase.
The second aspect of passive data collection is neuroscience and biometrics – measurement of body
or brain response to stimulus. In its more complex forms (fMRI scanning, for instance) this procedure
is considerably more intrusive than any survey is likely to be! However, the promise of neuroscience
is still unfolding. While it is certainly measuring something – millions and millions of data points
from myriad brain synapses firing – it is often unclear what is being measured, whether we know
what it means and whether we should act on these measurements.
Allstate Case Study: Neuroscience Testing
In 2009, Allstate tested 12 television executions using one of the leading neuromarketing
providers. The test gave mixed results – some consistent with previous testing methods,
some inconsistent but plausible, and some that were inconsistent with other methods and
just didn’t make sense. The most interesting finding was that the
neuroscience test was the only testing system among three examined which correctly
identified a ‘winning’ direct response ad, suggesting its utility in capturing more realistic
consumer behaviour. While neuroscience testing shows promise, more in-market validation
is required to understand which of these measures best ties to business results.
But we are also seeing a generation of “light” versions of biometric techniques emerge – eye-tracking is becoming an important part of communications and web testing, and devices to measure
pulse rate, galvanic skin response and other physical reactions are becoming more common too.
For researchers interested in behavioural triggers, the appeal of these techniques is clear: they
promise an unmediated and more honest understanding of stimulus and reaction, which makes
them very useful for anyone planning behavioural interventions to test particular stimuli which
direct research could probably not assess.
Ethnography and Observation
Passive measurement is an emerging option because of technological advances. But it’s also possible
to draw inspiration from older research forms which fell into disuse because of cost or analysis
issues. In the 1940s and 1950s, for instance, mass observation techniques were at the cutting edge
of research in the UK, used by the governments of the time to get a picture of mass behaviour and
sentiment (a kind of prototype social media monitoring, with pubs and cafes standing in for Twitter
or Facebook).
Observational techniques are invaluable for the researcher of behaviour who wants to avoid direct
questions. For while we are poor predictors of our own behaviour, we are better at assessing that of
others – and spotting possible behavioural triggers.
BrainJuicer Experiment: Behavioural Detectives
In 2011 BrainJuicer conducted an experiment in qualitative observational research, recruiting
a number of “Behavioural Detectives” to investigate behaviour around a certain topic – in
this case, excessive ‘binge drinking’ in British pubs. Using the behavioural model as a guide,
they asked one group to investigate the choice architecture of a pub or club environment,
and another to comment more generally on the environment and the cues it offered drinkers.
A third team observed social interaction around drinks-buying, and a fourth looked at
individual biases and behavioural factors. Each of the teams of “detectives” then prepared a
debrief for the researchers on their findings.
Synthesising the findings let BrainJuicer create a complete picture of the pressures and
triggers drinkers faced on a night out, and helped them formulate a number of
recommendations for behavioural interventions and innovations which might combat the
social problems around excessive drinking in the UK. As a self-funded experiment, this was
done to test a research method rather than meet a business need, though we did offer the
results to the UK Government’s DrinkAware team, tasked with combating binge drinking.
Ideas like beer menus and table service – to introduce breaks in consumption – are already
common in ‘craft beer’ pubs. Others, like signalling the amount consumed by leaving glasses
on the table longer, or removing the “last orders” bell, are less common. But the range and
variety of ideas was very high – proof that a behavioural focus can lead to innovative
thinking.
Using observational techniques also allows brand owners to unlock some of the customer insight
embedded in their own organisation. Frontline staff – salespeople or retail staff, for instance – have
a unique experience in terms of dealing with customers at scale, and are therefore often more
aware of behavioural influences than either individual customers or management-level employees.
Recruiting them as behavioural observers to tap this wisdom can pay dividends.
Big Data
Finally, we should mention what may yet become the ultimate “research without questions” tool –
the large-scale customer databases known colloquially as “Big Data”. Exploiting big data successfully
requires sophisticated data mining skills, powerful software and great analysts. With those in place,
remarkable results can be obtained. A New York Times story in February 2012 explored the
example of how the retailer Target had developed an algorithm which detected – with high
likelihood – when its female customers had become pregnant. The store often knew before the
family did, leading to an incident where a pregnant teen’s father discovered his daughter’s condition
because of ads Target was sending her. More commonly, big data analysis is used to drive
improvements in cross-selling or customer service.
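At its core, an algorithm like Target’s is a propensity model built over purchase signals. The toy sketch below illustrates only the general idea – the features and data are invented, and the real model and its variables are proprietary.

```python
# Toy illustration of a purchase-signal propensity model.
# Features and data are invented; the real model is proprietary.
from sklearn.linear_model import LogisticRegression

# Rows: customers. Columns: recent purchases of [unscented lotion,
# vitamin supplements, cotton wool]. Labels: known pregnancy (e.g. from
# a baby registry), used to train the model.
X = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1]]
y = [0, 0, 1, 1, 1, 0]

model = LogisticRegression().fit(X, y)
new_customer = [[1, 1, 0]]
print(model.predict_proba(new_customer)[0][1])  # estimated propensity
```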
For all the excitement among some researchers about big data, the skillsets needed to work with
and analyse it are very different from traditional research skills. Its integration is a huge challenge
even for experts, let alone researchers, and places the topic outside the scope of this paper. In
theory, ‘big data’ ought to
work well in conjunction with the observational techniques we have discussed here, as a source of
both insight and hypotheses to test.
V: Agents Of Change
Launch, Then Test
Understanding the context of decision making, and discovering how your customers make decisions,
are bound to produce hypotheses on how you might change or influence the decisions they take. So
the endpoint of the observational, research without questions approach is running experiments that
test these hypotheses.
In doing this, we can take inspiration from online firms, who have created cultures of agile decision-making in which their user experience is constantly assessed and alternatives evaluated. “Launch,
then test” has become a common model for the biggest web brands: release a feature then quickly
correct any problems via incremental changes and upgrades. This is how Facebook, Amazon, Twitter
and Google have evolved – not through discrete version launches tested ahead of release, but
through continuous change based on rock-solid effectiveness data. Methods like split testing –
where users of a site are funnelled randomly to slightly different versions of a page, letting sites test
changes for effectiveness “live” – have had a huge impact on online brands’ approaches to user
research.
BrainJuicer Experiment: Split Testing In Question Wording
Split testing can also demonstrate the extent to which question wording can distort response.
In a self-funded experiment, BrainJuicer used Optimizely, a web-based split testing software
package, to set up two polls on the best-selling UK musicians of 2011, simply asking people
to select any they liked. The choices were identical in each case, and the wording of the
questions only differed in one way: one of them described the performers as “acts”, the other
identified them as “musicians” and “artists”.
The Optimizely software automatically routed people to one or other test at random. For
most acts there were few differences. But when characterised as an “artist”, the approval rating
for Amy Winehouse went up, suggesting this more loaded word made people consider her
more favourably.
As a self-funded experiment, this had no connection with any client business. However, the
experiment confirmed split testing as an extremely efficient and fast way of confirming
hypotheses about stimuli and question wording. The whole test was set up and run within a
couple of hours. All researchers need to run split tests is a software package, a web page
they control and a stream of visitors. Given those, and given the importance of wording and
stimuli in framing behavioural choice, we would strongly recommend using split testing in
stimulus and questionnaire design.
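Reading a split test like this ultimately reduces to comparing two proportions. A minimal sketch, with invented counts rather than the poll’s actual data:

```python
# Minimal sketch: two-proportion z-test for a split test result.
# Counts are invented for illustration.
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two sample proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Approval when framed as "act" vs "artist" (invented counts)
z = two_proportion_z(210, 500, 255, 500)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the framing shifted approval
```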
A related source of inspiration comes from the use of design research for rapid prototyping.
Even better than asking consumers what they want is to show them prototypes and observe and
listen to their responses. Many high tech firms show the value of developing rough prototypes early
in the new product development process and then iteratively expose and refine them to get closer
to what consumers really want. This is a world where few questions are asked - but responses are
highly useful. Consumers aren’t very good at telling you what they want – but they certainly know it
when they see it.
But how practical is such agility in an offline or service-based setting? “Launch, then test” can’t easily
be applied to physical products. The principle behind it, though, remains vital – data is only valid
when it applies to a real-world context. You must test hypotheses and behavioural interventions in a
context as close to reality as possible.
For some brands, test sites and real-world interventions are one way of doing this. Retail brands, for
instance, might use a particular store to test a series of alterations to the decision-making context.
BrainJuicer are currently running a series of observation- and intervention-based experiments across
8 stores for a European clothing company. Each of the interventions will be based on a hypothesis
generated using the behavioural model framework – the effect of floor and carpet markings and
colourings on sales, for instance. Each experiment will run for two weeks – one week without the
intervention, one week with it – allowing for day-to-day and week-to-week comparisons.
By dealing with real consumer decisions and actual sales BrainJuicer hope to create a clear path to
demonstrating the ROI of particular interventions.
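A sketch of how one such on/off comparison might be read – the daily sales figures are invented, and a simple two-sample t-test is only one option:

```python
# Hypothetical sketch: baseline week vs intervention week for one store.
# Daily sales figures are invented for illustration.
from scipy.stats import ttest_ind

baseline_week = [4210, 3980, 4105, 4370, 5120, 6045, 5530]       # no intervention
intervention_week = [4460, 4150, 4390, 4620, 5480, 6310, 5900]   # e.g. floor markings

t_stat, p_value = ttest_ind(intervention_week, baseline_week)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# In practice, pairing days of the week (Monday with Monday, etc.)
# controls for the weekly sales cycle.
```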
A World With Better Questions
Clearly, though, for many brands some kind of direct research is still necessary. But where direct research
is used it’s crucial to recreate the context as closely as possible, to avoid nudging people into taking
“system 2” considered decisions where in the real world they might follow their system 1 instincts.
Ways of doing this include putting questions into real contexts, using visual stimuli, testing alternate
wordings for framing effects (as described above), and creating “games” which can shift participants’
emotional or visceral states.
BrainJuicer Experiment: Pack Testing Game
Following the FMCG packaging example referred to in section II of this paper, BrainJuicer
developed an online pack testing module which would be applicable to any product bought in
supermarket conditions. The module takes the form of a kind of “game” where time limits
and distractions are used to change the participants’ contextual state. It puts them into a
situation where they make decisions more quickly and with less conscious cognitive
processing – more system 1, in other words, and closer to the state real shoppers are in.
Pilots of this game have found that it makes a dramatic difference to which brands people
choose. In standard research conditions they pick brands which appeal more to system 2
thinking, with more information and product benefits on the packaging. In the modified
conditions – while distracted and under pressure – they pick brands that have more system 1
appeal, with more attractive visual elements and less information. In other words, they
choose brands which make their decision faster, easier and more fun (see figure 3).
Figure 3: The pack test game as an online module.
As a self-funded experiment this was not designed to have specific business effects. BrainJuicer are currently working to validate the findings against sales data – but we believe this is a strong example of how manipulating research contexts can produce results which reflect real-world conditions more closely.
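The module itself is proprietary, but the kind of comparison it enables is easy to illustrate. The sketch below contrasts brand choice shares under relaxed and timed conditions; the brand names and figures are entirely hypothetical, standing in for “system 2” and “system 1” appealing packs.

```python
# Hypothetical sketch of comparing choice shares across research conditions.
# "InfoBrand" stands in for a pack with heavy product information (system 2
# appeal); "VisualBrand" for a pack with strong visual cues (system 1 appeal).
from collections import Counter

relaxed_choices = ["InfoBrand"] * 58 + ["VisualBrand"] * 42   # invented data
timed_choices   = ["InfoBrand"] * 39 + ["VisualBrand"] * 61   # invented data

def shares(choices):
    """Return each brand's share of total choices."""
    counts = Counter(choices)
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.items()}

for label, data in [("Relaxed (system 2)", relaxed_choices),
                    ("Timed/distracted (system 1)", timed_choices)]:
    print(label, {b: f"{s:.0%}" for b, s in shares(data).items()})
```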
We’ve focused in this paper mostly on behavioural research, but attitudinal research still has an important role to play: conducted properly, it can get closer to the emotions that underpin people’s instinctive decisions. Allstate, for example, have had success in moving away from direct survey questions, instead using photographic prompts to access emotions more directly. People process visuals more rapidly and intuitively than text, so by selecting pictures to express their feelings about a brand, consumers are more likely to give an honest and useful attitudinal response. Visual stimuli also give them a way to express feelings which, left to their own devices, they would be hard pressed to put into words.
Rigour And Validity
Finally, it’s worth briefly exploring how the questions of rigour and validity change in a world of
indirect research. Researchers have asked searching questions of new techniques for a long time –
are they representative? Is their data robust? This will continue to be the case.
The closer an innovative technique comes to direct intervention, the more pertinent we believe
these questions are. Neuroscientific methods, for instance, are routinely criticised for their small
sample sizes. But there is another research tradition many of these new methodologies draw on –
the psychology study, where samples are often highly unrepresentative and may not be statistically
robust.
This, we would argue, is because such studies concern themselves with a different kind of robustness, one market research has paid less attention to. “Validity” and “robustness” in market research generally relate to the sample. But in observational research, robustness of context is just as critical. Is the context representative of a real-life situation? Can you isolate the particular factors which might be causing a change in behaviour, and hence generate useful and testable hypotheses? For research based on observation, these are the critical questions of validity. Observation is meaningless without intervention, but unless your data comes from a representative context you won’t be sure which interventions to make.
Of course a researcher should not have to choose between a valid sample and a valid context. The unrepresentative samples of psychology studies are a function of funding, not of choice. But statistically robust samples certainly matter more when you are attempting to derive meaning from a set of artificially generated survey data than from observed behaviour.
Indirect research means a shift from understanding behaviour to changing it, and this brings with it a shift in ideas of validity: traditional research rigour is still important, but rigour in contextual understanding is even more so. To put it another way, in survey conditions – where you’re relying on recalled or predicted behaviour, elicited in a highly analytical context – all the sampling rigour in the world won’t help you if you haven’t taken the real decision context into account. Robustly wrong is still wrong.
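A toy simulation makes the point concrete. Below, a very large sample drawn from an unrepresentative (survey-like) context converges on a precise estimate that simply does not match behaviour in the real decision context; all the numbers are invented for the illustration.

```python
# Toy illustration of "robustly wrong": a huge sample from the wrong
# context yields a tight but misleading estimate. Numbers are invented.
import random
random.seed(1)

TRUE_PREFERENCE = 0.35    # share choosing the brand in a real shopping context
SURVEY_PREFERENCE = 0.55  # share choosing it in a considered, survey context

survey_sample = [random.random() < SURVEY_PREFERENCE for _ in range(100_000)]
estimate = sum(survey_sample) / len(survey_sample)

print(f"Survey estimate: {estimate:.3f} (very precise)")
print(f"Real-world share: {TRUE_PREFERENCE:.3f} (never observed)")
# The confidence interval around the survey estimate is razor-thin,
# but the number is answering the wrong question.
```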
VI: Summary Points And Conclusions
Summary Points
The key points we want to leave you with are as follows:
• The increased availability of consumer data is liberating the insight industry from its reliance on self-reported and directly obtained information (e.g. from surveys).
• This is happening at the same time as advances in psychology and behavioural economics offer us a new way of understanding consumer decisions.
• People’s decisions are based more on fast, intuitive “system 1” thinking than on high-effort “system 2” thinking. Observational, indirect research can take this into account better than direct research can.
• The marketer’s job is to make consumer decisions fun, fast and easy – more in line with people’s “system 1” thinking.
• So the focus is on behaviour and how to change it, which means understanding both the far and near contexts of behaviour.
• The far context can be assessed via desk research, demographic analysis, theoretical lenses like semiotics, and immersion in consumer lives via social media.
• The near context is easier to intervene in but more chaotic. It’s best understood in terms of environmental, social and personal factors.
• Data on near-context behaviour can be sourced via mobile research, ethnography and mass observation, and the analysis of transactional data.
• This data will suggest hypotheses which should be tested in as near to a real-world context as possible.
• This may involve direct respondent participation – but rather than questions, tasks should be set which encourage “system 1” responses and emotional reactions.
• Think of rigour and validity in terms of robust contexts first, robust samples second.
Conclusion
The market research industry will always need methods for direct questioning. But everything we know about how the human mind works points to the need to minimise direct research. As we learn more about how people really behave, our trusted tools for direct research – from rating scales to MROCs – will need reinvention.
Research without questions is not a novelty or just another option – it’s essential if we’re to truly understand consumer decisions. It’s also more exciting, more innovative, often faster and more actionable than direct research has been. A world entirely without questions is not on the horizon – but the reign of the question in research has ended, and we should celebrate that. Our industry is finally aligning itself around what consumers do, not what they say.
References
Ariely, Dan (2009). Are We In Charge Of Our Own Decisions? TED Conference.
Blades, Fiona (2008). Capturing How A Catchphrase Caught On. MRS Conference.
Damasio, Antonio (2010). Self Comes To Mind: Constructing The Conscious Brain. Pantheon.
Danziger, Shai et al. (2011). Extraneous Factors In Judicial Decisions. Proceedings of the National Academy of Sciences of the United States of America.
Earls, Mark (2007). HERD. John Wiley and Sons.
Earls, Mark et al. (2011). I’ll Have What She’s Having. MIT Press.
ap Hallstrom, Anna and Shaw, Richard (2012). Robots’ Journey To The East. ESOMAR Asia Pacific Conference.
Herrmann, Andreas (2011). The Impact Of Mimicry On Sales. Journal of Economic Psychology.
Kahneman, Daniel (2011). Thinking, Fast And Slow. Allen Lane.
North, Adrian et al. (1999). The Influence Of In-Store Music On Wine Selections. Journal of Applied Psychology.
Sharp, Byron (2010). How Brands Grow. OUP.
Authors
Tom Ewing is Digital Culture Officer at BrainJuicer.
Bob Pankauskas is Director, Market Research at Allstate Insurance Co.
Endnotes
i. Summarised here: http://www.quirks.com/articles/2012/20120601.aspx
ii. http://www.dmd-winchester.org.uk/MA-DMP/JonPulestonGameTheory.pdf offers a summary.
iii. By Charles Duhigg: http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html