A-Z of (Better) Brand Health Tracking
Jenni Romaniuk

About the A-Z of (Better) Brand Health Tracking
Following the 2023 launch of my third book, Better Brand Health: Measures and Metrics for a How Brands Grow World, I posted an A-Z of Brand Health Tracking on LinkedIn each day throughout February, covering the little facts and useful tips around each letter. Here is a compilation of those posts.

"Thank you to everyone who has been on this fantastic ride, and particularly to my Ehrenberg-Bass Institute workmates who have provided such great support. Creating these posts has been much more work, and much more fun, than I ever anticipated!" - Professor Jenni Romaniuk

About the author
Professor Jenni Romaniuk, Associate Director (International), Ehrenberg-Bass Institute, University of South Australia.
Professor Jenni Romaniuk is a Research Professor of Marketing and Associate Director (International) at the Ehrenberg-Bass Institute, the world's largest centre for research into marketing. As the key architect behind the Ehrenberg-Bass approach to Distinctive Asset, Category Entry Point and Mental Availability measurement, Jenni has worked with companies all over the world to help them build stronger brands. Jenni has written three books: Building Distinctive Brand Assets, which helps marketers to future-proof their brand's identity; How Brands Grow Part 2, which builds on the knowledge revolution started in How Brands Grow; and her new book, Better Brand Health, which provides a valuable resource for those looking to get the most out of their brand health tracking. Jenni's expertise spans mental and physical availability, brand equity, brand health tracking, word-of-mouth and advertising effectiveness. She was editor of the Journal of Advertising Research from 2014 to 2016, and now sits on the Journal's Senior Advisory Board.

Ehrenberg-Bass Institute
The Ehrenberg-Bass Institute is the world's largest centre for research into marketing, based at the University of South Australia in Adelaide. The team of 60+ marketing scientists are advancing marketing knowledge, busting pseudo-science and marketing myths, and teaching marketers how marketing really works and how brands grow. We help Ehrenberg-Bass Sponsors all over the world to develop and benefit from a culture of evidence-based marketing.

Specialist Research Services

Distinctive Asset Measurement
Distinctive Assets are the non-brand-name triggers that remind category buyers of your brand. They play an important role in building Mental and Physical Availability and need to be developed and protected over the long term. The Ehrenberg-Bass Institute has an empirically validated approach to assessing the strength of potential Distinctive Assets, and will advise on the opportunities and threats for building a strong long-term brand identity.

Category Entry Point Identification and Prioritisation
Category Entry Points (CEPs) are the building blocks of Mental Availability: they capture the thoughts that category buyers have as they transition into making a category purchase. The Ehrenberg-Bass Institute runs a two-stage project which identifies CEPs for your category, benchmarks your brand's current performance and identifies priority CEPs to develop for the short and long term.

Laws of Growth Analysis
The Ehrenberg-Bass Institute has conducted decades of research into marketing. This large body of research includes the discovery of a number of law-like patterns of buyer behaviour and brand performance. To check if the Laws of Growth apply in your categories and countries, the Ehrenberg-Bass Institute will analyse your data (e.g. standard panel data) to document the fundamental laws-of-growth patterns and highlight any meaningful deviations that may exist. Based on this research, we will tailor recommendations outlining the key steps you should take for profitable brand growth.

Media Planning Review
The Media Planning Review allows you to ensure your media decisions are optimal. Evidence-based media decision making ensures that your audience has the right characteristics and is exposed to your communication when and where it matters most. The Ehrenberg-Bass Institute Media Planning Review provides the practitioner with the established evidence of how media works and brings clarity to the often murky waters of media decision making.

Better Brand Health: The Workshop
This is a full-day event designed to improve your brand health tracker. We cover data collection frequency, sampling issues and Key Performance Indicators (KPIs): all key areas you need to better track brand health. It is an opportunity to improve how you measure, collect and analyse data, and to align your team on this key area of measurement.

A: Attitude
How do I like thee? Let me count the measures.

Valentine's Day* advertisements are here, which means soon thoughts turn to love. Questions such as "How much does he/she/they love me?" swirl in the air. However, for brand health trackers, every day, many times a day, is Valentine's Day. This is great for getting gifts, but not so great for having an efficient and effective brand tracker.

Attitude questions, which ask category buyers how they feel about a brand, can appear in questionnaires as:
∙ rating scales (e.g., rating the brand on a scale from love to hate, like to dislike, or terrible to perfect);
∙ attributes (e.g., is a brand I feel close to, a brand I love, a brand I care about); or
∙ future plans/intentions (e.g., how do you feel about buying the brand in the future?).

Three steps to improve your tracker:
1. Identify likely attitude measures.
2. Analyse category buyer response patterns to test how similar they are.
3. If there is duplication, whittle down the list to one attitude measure.

If you are unsure which one to keep, or are lucky enough to only have one measure, then do a quality check on that measure. Does it capture the full range of attitudes? In particular, does it have a home for those with no attitude at all, which is often a very popular response from a brand's non-buyers (more on this in later posts).

*This post was written in February 2023.

B: B2B
B2B buyers have the same brains as everyone else. Analysis of the responses from B2B respondents to brand health questions reveals they follow the same patterns and suffer from the same biases as responses from B2C customers. For example, see: Romaniuk, J., S. Bogomolova and F. Dall'Olmo Riley (2012). "Brand image and brand usage: Is a forty-year-old empirical generalization still useful?" Journal of Advertising Research 52(2): 243-251.

The challenge with B2B is often reaching a quality sample, so make sure you use a panel provider with this expertise. Given the cost and difficulty of getting a good B2B sample, it's even more important not to waste time on useless measures.
Because B2B brains work the same way, we can use the same question styles, the same response styles and the same analyses. However, the inputs are likely to vary. For example, in Mental Availability measurement you will have different Category Entry Points (attributes) and different brand lists, but you can still use the same 'free choice, pick any' measurement approach. See my paper on Category Entry Points in a B2B world for more on this.

The CEP paper and Better Brand Health both contain alternative approaches to identify Category Entry Points, which is helpful for getting the right inputs if you can't easily survey your B2B customers. Unfortunately, I don't have an easy non-survey solution for actually assessing brand health. In Better Brand Health, we have a chapter on the use of online data scraping for brand health assessment purposes. The challenges we discuss there are even more likely to hold back online data quality in B2B categories.

C: Category
No, this is not about Category Entry Points, but rather about one of the key underlying principles for better brand health tracking, which is to Design for the Category.

Regardless of your brand's current size, you need to understand the whole category. If your brand is bigger, you need to understand not only your brand's performance but also its threats, which are likely to be new and small brands nibbling at your market share, but also other big brands and medium ones too. Also, in the future you might launch a new brand in the category and then suddenly be looking through the smaller-brand lens again. If you are a small or new brand, you need to understand your brand's performance, but you also need to understand where your future sales will come from, which the Duplication of Purchase Law tells us will most likely be the larger brands.

Designing for the category includes:
∙ Recruiting all category buyers to be part of the sample, in line with the typical buying weight distribution.
∙ Including all the major competitors and smaller competitors (or a representative sample of smaller brands if necessary).

A tip to check your brand health tracker is to look at your current measures through the lens of a very different brand in the category (e.g., if you are a big brand, imagine a small brand using the same questionnaire): what questions would you need or want to change? These are the questions that have a bias in their design and that you should rethink now.

D: Distinctive Assets
Yes, this topic is a bit predictable, but perhaps the point I want to make is less so. While Distinctive Assets are really valuable brand memories, they don't need to be part of a brand health tracker.

First, the tracker is a poor place to do the strategic research that you need to identify the brand's longer-term Distinctive Asset Palette. In strategy/benchmarking research you should test many more assets than you will end up building for the brand. Therefore this is better tackled as a stand-alone piece of research. A key outcome of strategy/benchmarking research is a subset of assets for the brand's Distinctive Asset Palette. These assets are either currently close to 100% Fame and 100% Uniqueness, or the assets to build to get to 100% Fame and 100% Uniqueness. Therefore, you only need to monitor this subset of assets, and the (rare) instance where you might want to introduce a new asset.
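As an illustration of those two metrics, here is a minimal sketch of how Fame and Uniqueness might be computed from asset-tracking responses, assuming a common set-up where each category buyer sees the asset with no brand identification and names whichever brand(s) it brings to mind. The data layout, names and the exact formulation below are simplified assumptions for illustration only; the full measurement approach is described in Building Distinctive Brand Assets.

```python
# Hypothetical responses: each category buyer saw the asset (with no brand
# names shown) and listed whichever brands came to mind, possibly none.
responses = [
    {"respondent": 1, "brands_named": {"Our Brand"}},
    {"respondent": 2, "brands_named": {"Our Brand", "Competitor A"}},
    {"respondent": 3, "brands_named": set()},   # nothing came to mind
    {"respondent": 4, "brands_named": {"Competitor A"}},
    {"respondent": 5, "brands_named": {"Our Brand"}},
]

def fame_and_uniqueness(responses, brand):
    n = len(responses)
    linked_to_brand = sum(1 for r in responses if brand in r["brands_named"])
    total_links = sum(len(r["brands_named"]) for r in responses)
    # Fame: share of category buyers who link the asset to the brand at all.
    fame = linked_to_brand / n
    # Uniqueness (one simplified formulation): the brand's share of all the
    # brand links the asset evokes.
    uniqueness = linked_to_brand / total_links if total_links else 0.0
    return fame, uniqueness

fame, uniqueness = fame_and_uniqueness(responses, "Our Brand")
print(f"Fame: {fame:.0%}, Uniqueness: {uniqueness:.0%}")
```

Tracked over successive waves, the question is simply whether both figures are holding close to 100% for assets already in the palette, and climbing for the assets being built.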
Ongoing Distinctive Asset tracking should check you are protecting strong assets (staving off memory decay) and making progress on building the next wave of assets. You will also need to throw in some key competitor assets to mask the brand of interest, but this helps you keep track of key competitor activities. Your first follow-up after the benchmarking/strategy piece is usually better after a year (at minimum), so you have time to remove inconsistencies and execute the asset-building tactics.

If you include Distinctive Assets in your tracker, this section needs to be placed at the start of the questionnaire, before any brand names are revealed. However, it may just be easier to cut down the strategy/benchmarking questionnaire and track this as a stand-alone study. That gives you more flexibility in frequency and timing. Regardless of your tracking approach, do invest in ongoing systems to ensure that you create opportunities to use and build assets. This is better done before marketing activities go into the field.

E: Exposure to Executions
Brand health trackers often help diagnose the longer-term impact of recent past marketing activities on category buyer brains. This makes it useful to assess the exposure of category buyers to the brand's marketing activities.

To measure exposure, there are a range of advertising memorability measures, a common example being: Which brands in <insert category> have you seen advertising for recently? The strong presence of the brand in this advertising retrieval cue introduces such a bias that the results from this measure usually devolve to simply a test of brand awareness, whereby big brands score more while small brands score less. Indeed, some researchers have noticed the correlation to be so strong that they use unprompted advertising awareness as a proxy for unprompted brand awareness, which is detrimental to understanding either concept.

An alternative approach is to use an execution-cued exposure test, whereby you test memory for exposure to the execution stripped of branding first, and then memory for the brand as a second step. While this is more fiddly to do, it has the advantages of:
1. Providing a more specific, and richer, execution-based cue for buyers to access their memory.
2. Allowing you to capture brand memory separate from ad memory. This is useful because marketing can be ineffectual due to lack of reach or lack of branding, but the remedy for each differs.

An execution-cued exposure measure still has a slight brand buyer bias, as brand buyers notice advertising for their brand more than non-buyers, but it has more variability across executions, and so can have greater diagnostic value than brand-cued approaches.

F: Let's Forget Funnels
I don't understand the appeal of purchase funnels because they are:

False – Funnels give a misleading view of the buying process, as they make it seem as if buyers should naturally follow a (similar) path from, say, Awareness to Recommendation/Advocacy. Any observation of cohort buying data over time or word-of-mouth data quashes this assertion. Note: the argument that 'we don't really mean it that way' doesn't hold water with me, as if you don't mean it that way, why have a visual image that shows it that way? Picture, 1000 words, and all that…

Fraught with irrelevance – Funnels make simple data unnecessarily complex by turning numbers into ratios.
Ratio metrics are a blessing for the insignificant, as you can only get high ratios with low incidences.

Futile – The vast majority of measures included in funnels are correlated with brand share/penetration, and so there are easier ways to see if you are higher or lower than expected for your size on any metric (a scatterplot against brand size will often do).

Frittering away your time – Ratios put a barrier between you and insight. The scores can lift or decline due to changes in the numerator or the denominator or both, so if your funnel ratio changes you then have to reverse-engineer the calculation to work out what happened. This just wastes your time.

[Funnel diagram: Awareness, Interest, Desire, Action, Loyalty, Advocacy]

G: Good Measures
If you want to bake a cake you need the ingredients to make a cake. If your ingredients are for a stir fry, you are unlikely to end up with a cake, no matter how good a chef you are*.

Sometimes we are presented with measures that have not been fully tested. How do we know if something new is a good measure to include in a brand health tracker? We can improve our odds of good measurement by focusing on measures that have the raw ingredients to be a brand growth indicator, and discarding the ones that don't. Here are some ingredients to look for:
∙ Measures that cover all category buyers, but particularly a brand's non-buyers, and are not just focused on a brand's current buyers.
∙ Measures that draw on latent and/or nascent brand memories, and not just on strong emotions.
∙ Measures that can change without a great deal of buyer thought, and not just when a buyer has a 'road to Damascus' conversion.
∙ Measures that can capture a small change even when distributed across a wide group of people, and are not only relevant to a small (usually weird) group of buyers.

There might be other filters we can use, but these can get you started. Any measure you see as a leading indicator for growth: how does it stack up on these criteria?

H: Handling the Haters
Legend has it that if a customer is happy with a brand they tell three or four other people, but if they are unhappy with a brand they tell a whopping 10 to 20 other people. By implication, the world is therefore swamped with negative brand sentiment that marketers need to continually find and quash. Online reviews can easily give a similarly distorted view, because people typically only comment when they experience something very good or very bad. This means a large amount of rating data is missing: that of the pretty good, OK, did the job. This is illustrated in Better Brand Health, where we compare the sentiment distribution from an online rating scale with that generated from a survey of category buyers.

This heightened attention on the haters can waste a lot of time and effort, and can (even unconsciously) have a detrimental effect on strategy if marketing activities are always viewed through the lens of avoiding or removing negative WOM.

The good news is that empirical evidence from Professor Robert East shows negative WOM for brands is given by fewer people than positive WOM, but at much the same rate*. The world (and pretty much every brand) has much more positive WOM than negative WOM. The legend is indeed a false tale.

In brand health tracking, it's worthwhile to benchmark WOM to get the full, unbiased picture. You can check if the brand/company's negative WOM (and positive WOM) is normal.
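One simple way to run that benchmark, sketched below under assumed inputs (per-brand survey estimates of the percentage of category buyers reporting positive and negative WOM, plus each brand's penetration; all figures are invented), is to relate each WOM measure to brand size and see how far your brand sits from the size-expected level. A scatterplot of each metric against brand size, or the deviation analysis described in Better Brand Health, does the same job more thoroughly.

```python
# Illustrative per-brand survey estimates (all figures are made up):
# penetration and the % of category buyers reporting positive / negative WOM.
brands = {
    "Brand A": {"penetration": 0.42, "pwom": 0.18, "nwom": 0.05},
    "Brand B": {"penetration": 0.30, "pwom": 0.13, "nwom": 0.04},
    "Brand C": {"penetration": 0.18, "pwom": 0.08, "nwom": 0.02},
    "Brand D": {"penetration": 0.09, "pwom": 0.04, "nwom": 0.01},
}

def size_benchmark(brands, metric, focal):
    # Average metric-per-point-of-penetration across the other brands gives a
    # crude size-expected norm; compare the focal brand's actual score with it.
    others = [b for name, b in brands.items() if name != focal]
    rate = sum(b[metric] for b in others) / sum(b["penetration"] for b in others)
    expected = rate * brands[focal]["penetration"]
    return expected, brands[focal][metric]

for metric in ("pwom", "nwom"):
    expected, actual = size_benchmark(brands, metric, "Brand C")
    flag = "above" if actual > expected else "at or below"
    print(f"{metric.upper()}: expected {expected:.1%}, actual {actual:.1%} ({flag} the size norm)")
```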
Benchmarking like this usually has the side benefit of showing you that negative WOM is insignificant and you should direct your efforts to more useful areas of brand management. Remember, in a world awash with metrics, knowing which metrics are not important is very helpful! This benchmark data can also help determine whether WOM requires close monitoring and so needs ongoing tracking, or is something to monitor only by exception, triggered by an event likely to stimulate WOM.

H: Heavy Buyers (something I wrote but did not post)
No one disputes heavy buyers are important for your brand's current sales. Indeed, it's something of a circular argument, because it is via their past sales that they usually get classified as heavy buyers (despite the fact that in many CPG categories, only around 50% continue to be heavy buyers in a subsequent time period). So heavy buyer responses are important to capture in a Category Buyer Memory tracker.

The issue arises when we mix all buyer groups together, particularly heavy buyers with very light/non-buyers. It becomes like trying to hear someone whispering in one ear when someone else is shouting in the other. Splitting out buyers from non-buyers is useful for most metrics, but splitting out heavy buyers from light buyers is useful for more difficult/more extreme measures. For example, we find heavy buyers have significantly higher top-of-mind awareness scores than lighter category buyers, but not significantly higher spontaneous or prompted awareness (Hogan, 2015).

I: Intentions-to-Buy
Manna from heaven or the road to hell? Intentions-to-buy measures crop up in all sorts of research, from advertising 'brand lift' studies to pack testing to, of course, brand health tracking. This all assumes, of course, a strong relationship between buyers' stated intentions and their future buying behaviour.

A typical intention-to-buy question looks something like this:
Q: How likely are you to choose each of these brands next time you buy <insert category>?
With five response categories:
1. Definitely will buy
2. Probably will buy
3. Might or might not buy
4. Probably will not buy
5. Definitely will not buy

In the 1960s, consumer follow-up studies found that more consumers with no previously stated intention purchased a category than consumers with a previously stated intention. In reaction to this, Thomas Juster created the Juster scale, which captures purchase probabilities rather than intentions, to improve the accuracy of future buying forecasts (Juster 1966), something confirmed by a more recent meta-analysis (Wright & MacRae 2007).

What is the difference? Intentions speak to the consumer having latent future plans, while probabilities are calculated on the spot, by the consumer, based on their currently available knowledge. For example: Do you intend to get sick next winter? You would probably answer "Definitely not". But if I asked you to assign a number to the probability you will get sick next winter, this is probably not zero, and would vary depending on your own immune system, whether you have kids, where you work, etc. As Thomas Juster found, many people with zero plans to act can have a non-zero probability of acting.

If you do want to continue using intentions-to-buy, Better Brand Health outlines key conditions where intentions-to-buy are thought to be more accurate. This could help you improve your current measure. For those looking to upgrade, the chapter also contains more detail on the wording of the Juster Scale.
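To make the distinction concrete, here is a small sketch of how probability-style responses can be turned into a purchase forecast, assuming the 11-point (0 to 10) probability format generally attributed to the Juster scale. The response data and the simple mapping to probabilities are illustrative only; take the exact wording and anchors from the sources cited here, not from this sketch.

```python
# Hypothetical Juster-style responses: each category buyer picks a point on an
# 11-point scale (0 = "no chance" ... 10 = "practically certain") for the brand.
juster_responses = [0, 0, 1, 2, 0, 5, 8, 0, 3, 10, 0, 1]

# Convert scale points to probabilities and average them: the mean probability
# is the forecast share of these buyers expected to buy the brand next period.
probabilities = [r / 10 for r in juster_responses]
forecast_penetration = sum(probabilities) / len(probabilities)

# Contrast with an intention-style read-out, e.g. counting only 'top box'
# respondents (here, scale points 9-10) as future buyers.
top_box_share = sum(1 for r in juster_responses if r >= 9) / len(juster_responses)

print(f"Probability-based forecast: {forecast_penetration:.1%}")
print(f"'Definitely/probably will buy' style count: {top_box_share:.1%}")
```

The point of the contrast is that many respondents with no stated plan still contribute a small, non-zero probability, which is exactly what an intention-style count throws away.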
Now you might say: I know things can interfere between expressing an intention-to-buy and acting on it; I am just measuring the current consumer mindset. That is fine, just then appreciate that this is an attitude-to-buying measure, with more emphasis on the attitude than the buying, and don't kid yourself that it is an accurate measure of future behaviour. Purchase intent is an easy way to 'test' whether something could be linked to higher sales, for example: Did those exposed to this ad have higher purchase intent? Will this pack change lower purchase intent? Is perceiving the brand as innovative linked to higher purchase intent?

J: Jobs To Be Done (JTBD)
A question I am often asked is: what is the difference between JTBD and Category Entry Points (CEPs)? Let me first start with what they have in common. They are both about understanding the category from a category-buyer point of view, rather than the brand's. Therefore, applying either approach should improve your attribute list and avoid the brand-based myopia that often dominates.

A CEP approach draws from how our memory works, under the Associative Network Theories of Memory. It is based upon empirically observed facets of memory that affect the brand's chance of being accessible in category buyer memory. This includes that even existing memories are not fixed, but subject to natural decay unless refreshed. Sometimes all we need to do to grow a brand is to turn a decaying memory into an easily retrieved one. We might not need to change the brand, but rather change category buyers' memories for the brand. When combined with the framework of the W's to give us a multifaceted perspective, a CEP approach can give us a pretty comprehensive view of how category buyers interact with the category.

JTBD and the idea of 'hiring' a brand to do a job can be a useful metaphor to help marketers think creatively about their category, and identify opportunities to innovate. But it is just a metaphor, and if we take it too seriously we can be fooled into thinking of the category buyer as a more logical, thoughtful actor than the real world would show. I did not 'hire' Sushi Train for lunch today. I thought of options my parents would like, on a warm day, that were on the way home from the vet. Visiting Sushi Train was one option, ordering Uber Eats from Sushi Planet was another, but Sushi Train was going to be quicker (better physical availability) and so got selected at this time.

At their best, CEPs provide insight to improve how you manage a brand now, while JTBD provides ideas for innovating the brand to change its future trajectory. But CEPs can stimulate innovation, and JTBD can provide insight into current category interactions. I prefer CEPs because they are based in a real memory process that occurs largely without us knowing. Therefore, in Better Brand Health, there is a chapter on methods to get CEPs. However, I think taking a JTBD approach is going to be better than doing neither.

K: Key Performance Indicators (KPIs)
Now I love a KPI as much as the next marketer, but I think the desire for one magic number is holding back marketers and marketing thinking.
In the red corner we have the 'Emperor' metrics, whereby apparently one metric reigns supreme and will tell you all you need to know about your brand's health. The NPS is the most recent in a long line of such KPIs. I remember when the Conversion Model was all the rage, turning how people convert to religions into a marketing KPI. OK, it might have been the age of 'cult brands', but talk about stretching the metaphor. Often marketers subscribe to this approach by default, and lean on a favourite metric as an oracle, such as spontaneous awareness, without really knowing why. It's risky to rely on one KPI out of one measure, as this one measure needs to capture everything important. Given the variety of possible memory changes, this seems improbable, and indeed every silver bullet metric proposed so far has been quickly unmasked.

In the blue corner we have 'Of the masses' metrics that take, say, a bit of awareness and a bit of image and a touch of attitude, and combine them to create a single magic number. However, if the components all move in the same direction, why do you need them all? In this scenario, most measures are superfluous. And if the components move independently, how do you benefit from combining them? The number is of little value over time if a decrease in one component could cancel out an increase in another component. That destroys the simplicity the one number was supposed to provide.

Instead, how about a middle ground? Let's aim for more than one component part, but not all the measures all mixed up: a dashboard where every measure has earned its place, a wise council where each member has its own area of expertise, and where we have the knowledge to know when to ask each one.

As a start, if brand growth comes predominantly from expanding the customer base, a major shift in buyer behaviour will be in the cohort of category buyers who did not buy the brand this time period, but who will end up buying it next time period. That suggests considerable value lies in KPIs that document changes in the memories of these future potential buyers. For most memory metrics we can only observe non-buyer memories if we analyse the brand's very light/non-buyers separately. This is why, in Better Brand Health's mantra for good brand health measurement, the middle part is 'analyse for the buyer'. Buyers' memories are also important, but perhaps we need different metrics to fully understand them.

To create useful KPIs we need a greater understanding of how marketing activities change brand memories, and how brand memories buttress, or change, brand buying. So I encourage you to review the KPIs you prefer and ask yourself about the evidence as to why and when each metric matters. If you don't really know, then perhaps it's time to learn more.

L: Love (Brand Love)
I just could not resist this one. Given we are so close to Valentine's Day, and in homage to the great songwriter Burt Bacharach who passed recently, here is a song for all marketers to sing (to be sung to the tune of 'What do you get when you fall in love?').

What do you get when you build brand love?
Your buyers will think, you live in a bubble
And won't buy you once, let alone double
Marketers never build brand love again
I'll never track brand love again

What do you get when you build brand love?
You get a media plan that's way too narrow
You won't reach non-buyers today or tomorrow
I'll never build brand love again
No, no, I'll never track brand love again

What do you get when you build brand love?
You waste enough money to annoy the CFO
And of course, your brand won't grow
Marketers never build brand love again
Don't you know, I'll never track brand love again

I'm out of that dogma, that love is what binds sales
I need to remember, that brand love doesn't scale
Don't tell me I need buyers devout
'Cause I've seen the evidence and I'm glad I'm out
Out of that dogma, that love is what binds sales
I need to remember, brand love doesn't scale

What do you get when you build brand love?
You get advertising that doesn't appeal
To the normal buyers, you need to steal
I'll never build brand love again
Don't you know that I'll never build brand love again
I'll never track brand love again

M: Memory Building
In brand health tracking we often test how brand memories have changed over time. For example, has the link between the brand and a key attribute* improved? But it often feels like there is a disconnect between the marketing activity design process and the measurement of its effect on memory. The more we can close this gap, the more useful our brand health tracker will be.

In Better Brand Health, I talk about two different types of memory effects we can see in brand attribute data: (a) messaging effects: a change in a specific brand-attribute link; and (b) mental availability effects: a change in the freshness of the total network. Here I am going to focus on the 'messaging effect'. To build and/or refresh specific brand-attribute links means paying attention to what we say and how we say it.

What we say
So much effort goes into deciding on the advertised message. We isolate important messages we think will help the brand get bought. These messages become the specific memories you are building/refreshing, such as 'value for money', 'a special treat when out with your partner' or 'will make it easier to pay the bills'. To see a messaging effect, the crux of the message needs to be reflected in the attributes in your tracking. If your attribute list is really full of useful, relevant memories, and your communications are trying to build useful, relevant memories, then there should be a match. If there is not a match, then at least one side is not working for you.

How we say it
We want to build brand memories, not just advertising memories. This means we need both the brand and the message. First, we need to assess how well the message translates into memories. The more easily and universally processed the message, the more likely it will result in a change observable in brand tracking. Second, we need to assess if we have excellent quality branding alongside the clear message, so the two are co-presented. The brand anchors the message in the right part of memory.

BTW, I am writing this while watching Super Bowl ads, and thinking about message usefulness and clarity, as well as branding quality.
Here are some interesting contrasts from my viewing so far (you should be able to guess which is the good and which is the poor example):
∙ Branding quality: T-Mobile with Bradley Cooper versus Remy Martin with Serena Williams
∙ Message usefulness: Google's 'Fixed on Pixel' versus Workday's 'Rock Star'
∙ Message clarity: Hellmann's with Jon Hamm and Brie Larson versus Michelob Ultra's Serena Williams ad

Remember, having memory changes that show up in brand health tracking starts with having marketing activities capable of building brand memories.

N: Non-Buyers
This post is about questionnaire wording (please don't stop reading, it will be worth it, I promise!). Linked to the theme of a brand's non-buyers, let's talk about how we can easily (and unintentionally) depress the responses from a brand's non-buyers with just the addition of a few words to our tracker attributes.

Do you have any attributes that are worded as a comparison with other brands? For example, 'is more innovative than other brands' or 'has better service than competitors'. If so, you will get on average 30% fewer brand linkages from a brand's non-buyers than if you had just used the general form of the attribute, such as 'innovative' or 'has good service'.

The responses are lower because comparatively worded attributes encourage category buyers to undertake a two-step cognitive process:
First, think of brands linked to the attribute; and
Second, select a subset of options based on which one is 'better than others'.

While all category buyers undertake the same two-step process, it's more difficult for non-buyers to overcome these two hurdles and be linked to any attribute, because they are buyers of other brands and have no direct brand experience, so their memories for these brands are harder to retrieve. This approach gets you an evaluation of a brand (in an attitudinal sense), not an association with the brand (in a memory sense). Given that we want to see if a brand's non-buyers have been building brand memories, it seems like an unfortunate 'own goal' to track non-buyers using questions that depress their response. We are left with an even more incomplete view of the memory networks of the brand's non-buyers, who are vitally important for growth.

Note: if you really do want buyers to evaluate your brand versus competitors on a quality, then there are better ways to do this than a free choice, pick any, attribute measurement approach. More on this, and the effect of other attribute wording modifications, in Better Brand Health.

BTW, I think we might need to change the name of non-buyers to not-yet buyers or potential buyers, as non-buyers seems to downplay their importance to the future of the brand. But not today, as otherwise I would have to find another N!

O: Ownership
Marketers can be a possessive bunch, always wanting to 'own' something. Here are a few evidence-based thoughts on when ownership matters (and when it doesn't).

You can't (and don't need to) own a customer
Trying to get buyers to buy only your brand is a waste of time and resources. Sole brand loyalty, where people only buy one brand for a category, is rare and is typically linked to light category buying.
As Professor Andrew Ehrenberg said, 'Your customers are really other brands' customers who buy your brand occasionally'.

You can't (and don't need to) own an attribute
Trying to be the one brand known for X quality (e.g., 'top quality service', 'value for money', 'to treat the kids') is that wonderful combination of both difficult and unnecessary. Unique brand linkages are rare (<3%). Instead, we find around half of category buyers (46%) have multiple brands linked to an attribute, and one-third have no links (33%). A brand owning an attribute in the eyes of its category buyers is rare. If an attribute is important to category buyers, they typically link it to multiple brands. This means you can't avoid mental competition, you just need to get better at combatting it. This is why building Mental Availability is so important.

But it is essential to own a Distinctive Asset
There is an owl on the cover of Better Brand Health. This is the same owl that appeared on the cover of Building Distinctive Brand Assets. Hopefully by now, you see the owl and you think of the Ehrenberg-Bass Institute, and only the Ehrenberg-Bass Institute. If you don't know what owl I am talking about, look at a book cover to build this asset in your memory for future reference!

Empirically, owning a Distinctive Asset means you have (close to) 100% Fame and 100% Uniqueness. That is, pretty much every category buyer, when they experience the asset in the absence of the brand, thinks of your brand, and only of your brand. You need 100% Fame because then the Distinctive Asset always does its primary job. You need 100% Uniqueness to avoid evoking competitor brands and working against yourself.

Direct your marketing resources to own the right things rather than trying to own everything.

A little limerick I wrote for Max Winchester, who was disappointed that N was not Net Promoter:
There once was a Score called Net Promoter
Which to lifting, marketers became devote-r
Til research came along
And showed face validity is not strong
And the link to future brand growth is even remote-r

P: Prominence, a Pillar of Physical Availability
In Better Brand Health, Professor Magda Nenycz-Thiel and I wrote a chapter to assess when Physical Availability should be part of a Category Buyer Memory (CBM) tracker. This question arose because I often see facets of Physical Availability get turned into attributes, whereby brand links to these attributes are tracked over time. For example, Prominence, which is about the ability to easily find the brand in retail settings, gets converted to attributes such as 'is easy to find on shelf' or 'has packaging that stands out'.

The issue with these attributes is they tell you very little about the brand's actual prominence or how to improve it. Category buyers respond to these attributes in the same way as they do other attributes: brand buyers score higher than non-buyers, and big brands score more than small brands. But aha, I hear you say, 'This means we can use the approach you show in Chapter 7 to identify brands that score higher or lower than expected!' Yes you can, but then what do you do with this information? How do you go from deviation to explanation, and then turn it into something actionable? You will have no idea if the colour, pack shape, logo placement or the pink cap explains why the packaging stands out. In any event, this is a very torturous way to gain insight that is much more easily obtained via direct measurement of Distinctive Assets.

First, you should do the background work to identify the shopping assets you have or want to build. Then, once you know the brand's current or desired shopping assets, you can monitor them either in brand tracking research or as a stand-alone study (see D for Distinctive Assets). In the Better Brand Health chapter, we talk about all three pillars: Presence, Prominence and Portfolio. This provides some better alternatives to measure these important elements of brand growth.

Q: Quicker Always Better?
It is fashionable in some areas to assess performance with a speed-of-response measure. This can involve directly timing how long it takes someone to respond, or indirectly through focusing on top-of-mind measures. This approach assumes that a quicker response = better performance. But is quicker always better?

There are some times when, yes, it is obvious. Being quicker to find on shelf, or quicker to find in an online marketplace, are easy examples where being quickest found is likely to reap rewards. When looking at factors that might affect the ability of the brand to be quickly found in those environments, timing measures could be a useful dependent (outcome) variable. There are other issues, such as the generalisability of the testing environment(s) to the many, varied, real-world environments (particularly in-store, where even different stores in the same chain can have very different category plan-o-grams), but that is a method rather than a measure issue.

However, when it comes to brand memories, the advantage of being retrieved quicker is not so obvious. If I remember three restaurants for booking a celebratory dinner out, do you want to be the first restaurant I remember? Or perhaps it would be better to be the last? Or maybe it doesn't matter, because I can (like most people) hold multiple items in working memory and so I will contemplate all three. And for Distinctive Assets, does it really matter if the asset triggers the brand in 0.8 or 1.2 seconds? Or is the most important thing that it triggers the brand at all, and not a competitor brand, either as well or instead? (See O for Ownership!)

In the latter two examples, focusing on the timing of responses, or just on the first response, is of no benefit, and can be of great detriment as it can lead you to miss brand memories. There might be more of this in this series if I select Top-of-mind for T, but there is definitely more on this in Better Brand Health.

R: Rating Scales
Two of the most common approaches for assessing brand performance on an attribute are:
∙ Free Choice, Pick Any, where respondents tick brands linked to attributes. They can tick as many or as few as they like, and so provide a binary score (1 = yes, 0 = no/don't know) for every brand on every attribute.
∙ Rating Scales, where respondents rate every brand on every attribute. This is typically on a 5, 7 or 11-point scale, and so they provide each brand with a number on the scale.

Which is better? On the surface it might seem like a simple trade-off between ease of answering (Free Choice, Pick Any) and sensitivity (Rating Scales), but the empirical results tell a different story.
Both approaches rank brands in a similar order, but surprisingly a Free Choice, Pick Any approach has greater discrimination between brands than ratings. There are two key reasons for this:
1. Few category buyers use the negative/disagree part of a brand attribute scale, which makes around half the scale points redundant. More scale points only increase sensitivity if they are needed.
2. Non-buyers who don't know about the brand default to the scale midpoint (say 3 on a 5-point scale). Small brands have many non-buyers who don't know them, and this bumps up their score, which reduces the range of scores from highest to lowest brand for rating scales. In a Free Choice, Pick Any approach, non-buyers who don't know get zero, and so don't add to a brand's score.

A third factor to remember is that we don't store memories as ratings. Therefore, the ratings a brand gets are calculated 'on the spot' and only exist when the question is asked. In contrast, a Free Choice, Pick Any approach can mimic the associative network structure of memory (provided you word the attributes right, see for example N is for Non-Buyers), and so can draw directly from buyer memories.

So if you are measuring brand attributes on rating scales, you can immediately improve your tracker by converting to a Free Choice, Pick Any approach. This collects better quality data in a way that is easier on respondents. BTW, academia loves scales because they are easier for multivariate analyses (such as regression) and you can get statistically significant differences between groups with lower sample sizes. That is why in academic studies even brand awareness often gets turned into a three-item scale!

S: Sample Screening Questions
I will be the first to admit, Screening questions are not a Sexy topic. Today is Sunday, this is letter 19, and I am not inspired to a Song, a Sonnet or even a good Story. But I will draw on an old Saying: rubbish in, rubbish out. Screening questions, at the start of the questionnaire, are really important because expertly crafted brand health questions, or the most sophisticated analysis, won't save your insights if you have a biased sample.

What should you aim for? Remember the first part of the mantra: Design for the category. Your tracker sample's buying weight characteristics should mirror the normal category buying weight distribution. You risk getting an excess of heavy category buyers, and a deficit of light category buyers, if you have:
1. Buying timeframes that are short relative to typical category buying frequency.
2. Added category buying weight requirements.
Excluding light/very infrequent category buyers is particularly damaging to your tracker insights if a category is growing or declining via penetration.

Test different screening options for the impact on your sample. This will highlight which questions have the biggest impact on the composition of your sample and on data quality. It is also useful to use externally collected buying data and run some parallel profiles on the buying weight metrics for your category, to check your sample is pretty close. If there is a big gap between normal buying and your tracker sample's buying, then taking steps to fix this can improve the quality of the data you collect. How well do you know the buying patterns of your category?
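Here is a minimal sketch of the parallel profile check mentioned above, assuming you have (a) category purchase frequencies reported by your tracker sample and (b) an externally sourced benchmark of the share of category buyers falling into light, medium and heavy bands. All figures and band cut-offs below are hypothetical.

```python
# Hypothetical purchase frequencies (category purchases in the screening period)
# reported by the tracker sample, and an external benchmark profile to match.
sample_frequencies = [1, 1, 2, 6, 8, 3, 1, 12, 5, 2, 9, 1, 4, 7, 2, 1, 10, 3, 6, 2]

benchmark_profile = {"light (1-2)": 0.55, "medium (3-6)": 0.30, "heavy (7+)": 0.15}

def band(freq):
    if freq <= 2:
        return "light (1-2)"
    return "medium (3-6)" if freq <= 6 else "heavy (7+)"

# Profile the sample into the same bands and compare share by share.
sample_profile = {b: 0 for b in benchmark_profile}
for f in sample_frequencies:
    sample_profile[band(f)] += 1 / len(sample_frequencies)

for b, expected in benchmark_profile.items():
    gap = sample_profile[b] - expected
    print(f"{b}: sample {sample_profile[b]:.0%} vs benchmark {expected:.0%} (gap {gap:+.0%})")
```

A persistent shortfall in the light band (and surplus in the heavy band) is the typical signature of screening that is too restrictive.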
To identify key category buying knowledge, Better Brand Health includes a four-question checklist that highlights key information to help you make smart decisions on category buying screening questions, as well as on when you want to capture category buying behaviour within the questionnaire.

T: Top-of-Mind Brand Awareness
Top-of-Mind Brand Awareness (TOMA) is the first brand recalled, unprompted, with the category as the retrieval cue. Simple to collect and easy to understand, TOMA is a popular brand health tracking measure: even if suppliers change, TOMA typically remains. This means it is often tracked over a long period of time, and provides a sense of continuity when trackers change. TOMA's perceived value is linked to the idea that retrieving a brand quicker is an indicator of better future performance (see Q for Quicker Always Better?).

When we started researching brand salience/mental availability, TOMA was one of the measures we researched as a possible measure. We rejected it as unsuitable to measure Mental Availability because buyers use multiple cues to enter the category, so it made little sense conceptually that one single category cue can capture retrieval across all the cues used in buying situations, particularly once you factor in how human memory works (see the 2004 paper below). This conjecture is supported by empirical analysis that shows:
∙ When retrieval cues change, so do the brands that are evoked, including which brands are top of mind; and
∙ Relying on the single category cue under-represents the brand's ability to be retrieved, and particularly restricts retrieval of smaller brands (see Better Brand Health for this).

But even as its own metric, TOMA lacks evidence as a measure of brand growth because:
∙ TOM brand awareness disproportionately biases against retrieval of brands by their non-buyers, and against small brands among all buyers.
∙ Over time, changes in TOM awareness are largely concentrated in brand buyers, rather than the brand's non-buyers needed for growth.
Therefore, it fails the 'key audience for growth' test on at least two counts.

In Better Brand Health, I present the evidence for each of these points, plus a few other areas of concern around TOMA and top-of-mind approaches in general. If you do rely on TOMA as a key Brand Performance Indicator, this evidence might be useful to put these readings at the Top of your reading list.

U: U-Shaped Distributions
One of the most useful (nerdy) things to do with any data on a scale is to look at the distribution of responses. This can tell you a lot. Most people are familiar with the normal distribution, which looks like a mountain and has the beautiful property of the mean usually being representative of the typical buyer. But much of the data we deal with in brand health is not distributed normally, so if we don't understand the distribution, we risk making unfounded assumptions about the representativeness of the mean and how much variance there is around it.

Those familiar with the Laws of Growth will know category and brand buying frequencies follow the Negative Binomial Distribution, which usually looks like a reverse J (see How Brands Grow 1 & 2 for more on this). There are many buyers buying rarely, if at all, and a long tail of a few people buying very frequently.
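To see what that reverse-J shape does to summary statistics, here is a small sketch using made-up purchase counts in roughly that shape (not real category data); it simply contrasts the mean with the median buying rate.

```python
from collections import Counter
from statistics import mean, median

# Made-up yearly purchase counts with a reverse-J shape: most buyers buy once
# or twice, a few buy very often.
purchases = [1] * 50 + [2] * 25 + [3] * 12 + [4] * 6 + [6] * 3 + [10] * 2 + [25] * 2

print("Distribution:", dict(sorted(Counter(purchases).items())))
print(f"Mean buying rate:   {mean(purchases):.1f}")
print(f"Median buying rate: {median(purchases):.1f}")
# The mean sits well above what the typical (median) buyer actually does,
# because the long tail of very heavy buyers drags it upwards.
```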
Because of the long tail, the average (mean) buying rate is higher than the typical buying rate, which can lead you to assume normal buyers buy more often than they really do.

Looking at the distribution can also help you judge the usefulness of data from different sources. For example, the underlying distributions help us understand whether the ubiquitous online brand ratings can give a brand manager an accurate depiction of category buyers' attitudes to the brand. A U-shaped distribution, with lots of people either really happy or really unhappy and very few people in the middle, often pops up in online ratings or review data. This occurs largely because we only see responses from people motivated to leave a review, due to a very good or very bad experience.

In Better Brand Health, Professor Anne Sharp and I have a chapter that looks at whether online data scraping can replace brand health surveys. In it we compare the data from a survey and from Yelp reviews for the same restaurant brands, to illustrate how far online ratings data differs from normal. In contrast, brand attitude distributions collected from a normal sample of category buyers follow a slightly positively skewed distribution, with the peak usually around the mildly positive/somewhat agree point of the scale. Online review data can therefore give you the impression that category buyers feel more strongly about the brand than they do. This highlights that online review data has to be interpreted with care, due to its biased approach to sample recruitment. So take care with U-shaped distribution data, such as that from online reviews: its mean is meaningless and its meaning is often unrepresentative.

V: Valence in Word-of-Mouth Effects
When choosing what to write for each letter, sometimes I feel like I am robbing Peter to pay Paul. For example, today when thinking of V options, I thought of Valence in word-of-mouth. Yes, that is a V, but it could also be a W, which is tomorrow's letter. Then, with a heavy sigh, I decided that tomorrow's letter would be future Jenni's problem, and that V for Valence in word-of-mouth effects it is…

Back to Valence in word-of-mouth effects. I am sure we all logically intuit that positively valenced WOM (PWOM) is good, and negatively valenced WOM (NWOM) is bad. And for the most part we would be right. We have benchmarked that about 3% of WOM has counterintuitive effects, where PWOM decreases your chance of buying and NWOM increases your chance of buying. This happens when there is a mismatch in preferences with the giver of the WOM: if they like it, I probably won't, kind of thing.

What we often fail to grasp is that valence also has an impact on WOM effects as it interacts with the probability of buying the brand being talked about. This means whose responses we prioritise for WOM metrics differs by valence. Most people have close to zero probability of buying most brands, mostly because they are not in the market, and (close to) zero probability of buying the category means (close to) zero probability of buying brands within the category. For example, NWOM is most influential when it reaches current buyers, who have sufficient headway in their future buying probability for it to decline in the face of bad news about a brand. However, someone with car insurance due for renewal in November, right now (in February) has a zero probability of buying most car insurance brands.
Therefore, if that person receives NWOM about one of the zero (or very close to zero) probability brands, nothing can happen, as their buying probability can't be lowered. In contrast, PWOM is most influential on buying when it reaches category buyers with a low probability of buying, as these people have the greatest room to improve their chance of buying. PWOM received just before renewal, for a brand the buyer was already going to choose, has only a small effect, as the buyer was already going to buy.

Professor Robert East's modelling (showcased in How Brands Grow Part 2) shows the influence of PWOM declining, and of NWOM increasing, as a category buyer's initial probability of buying a brand gets higher. This means if you are tracking WOM, you want metrics that measure:
1. the reach of NWOM amongst the brand's buyers; and
2. the reach of PWOM amongst the brand's non-buyers.
See the Word-of-Mouth chapter in Better Brand Health for more on this and other WOM measurement issues.

W: Word-of-Mouthish Attributes Are a Waste of Time
Brand attribute lists can get very long. Here is one tip to shorten them. Check if your list includes WOMish attributes, such as:
∙ A brand lots of people are talking about
∙ A brand you would recommend
∙ Is recommended by family and friends
∙ Has a buzz about it
∙ Heard people say positive things

Just ask yourself, what are these attributes really measuring? There is no link to a timeframe or even a specific WOM event, and therefore it is difficult to understand how a category buyer comes up with a response. If you don't know what drives the response, how do you interpret or act upon the results if they change? Therefore these attributes add clutter, not value. If you really want to measure WOM, then do it properly; don't use a vaguely worded abstraction as a proxy.

An exception is the attribute 'Recommended by <insert expert relevant to category, such as Dentists or Vets>', which can be a CEP or a baseline brand competency in high-risk categories (e.g., infant pain relief). In this case you are not measuring actual WOM, but the category buyers' perception that the brand is endorsed by experts. This perception of endorsement can provide the extra confidence to make a risky purchase. When a brand gets unusually high responses, this can usually be traced to the use of experts in advertising or social media.

X: eXcellence in X (and Y) Axes
I was fortunate enough to work on a couple of research projects with Andrew Ehrenberg. One of the most frustrating things about working with Andrew was that he would first edit all the charts and tables of any research paper, and you would have to do those changes before he would even look at the text. His edits typically focused on improving the clarity of data communication. So I quickly learnt that if I wanted him to work on any text, I had to first get the charts and tables right. This was one of the most valuable things I learnt from him!

Charts are both working devices (to see the data) and communication devices (to show the data). Smart use of your X (and Y) axes ensures you can do both well. When preparing charts for other people, it should always be easier for them to come to your conclusions than it was for you. Well designed and labelled charts help you achieve this. I know there are lots of software programs that claim to help with this, but often what looks good can be poor communication.
For example, we sometimes take off axis numbers to improve chart aesthetics, but pretty does not beat useful when it comes to data presentation (ideally you want both, but if you have to pick one...!). It's quite easy to inflate or suppress perceived data variance in a chart by judiciously choosing the highest and lowest X or Y axis figures. This risks giving you, and any reader, a misleading view of the data. Sometimes this is done intentionally, but often it is because the default settings on charting programs are designed to maximise variance. Don't be a slave to the default.

Before you review any chart data from someone else, look at the X (and Y) axes and check they are not designed to magnify or minimise variance. Often I see something that looks like it varies a lot over time, only to discover the X axis is very short or very long, and/or the Y axis runs, say, between 3.0 and 3.5. A simple illustration of this is share price charts for, say, a company like Nestle: look at how the X and Y axes change as you change the time period, and how your perception of the variance in the data also changes. And also remember, sometimes the best data communication device is a well-constructed table. It does not always have to be a chart.

Y: Yesterday to Over a Year Ago - Selecting Timeframes
In repeat-buying categories such as Consumer Packaged Goods or B2B consumables, to identify/classify category buyers you need to set a timeframe, such as having bought <insert category> in the past X months, to identify who to survey or include in analysis. To decide on the timeframe we turn, not to Yoda, but to Goldilocks, for inspiration.

A Goldilocks timeframe is neither too short nor too long; either extreme can be detrimental to data quality. Your timeframe is too short if your category buyer base skews to heavier category buyers, because many light buyers have not yet had a chance to buy. Therefore you miss data from really important category buyers. However, a timeframe can also be too long. This is when you inadvertently include lapsed category buyers, or make it too difficult for heavy category buyers to remember their purchases. Therefore, you get inaccurate data. A 'Goldilocks' timeframe is long enough for category buyers to have bought multiple times if they want to, but short enough for the purchases to be memorable. This means the most useful timeframe for a category varies with the category's purchase frequency and the memorability of the buying event.

Identify your 'Goldilocks' timeframe
If you are unsure, experiment with different timeframes. Check the distributions and brand repertoire sizes (number of brands bought in the time period). At some point you will find that taking a longer time does not mean substantially more brands are bought, which means you have captured the current brand repertoire of most category buyers. In Better Brand Health Chapter 9, there are some timeframes and examples of categories where they could be useful.

BTW, the timeframe decision is why there is no single penetration or Pareto figure for a brand or category; it depends on the timeframe you take. Similar to my last post on how the X and Y axes can distort the data, so can the timeframe. If you, say, calculate Pareto share in Packaged Goods over five years, it can get to around 70% (see Kim et al., 2017), but if you do the same calculation over one year it is more likely to be between 40% and 60% (see Sharp et al., 2019).
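As a toy illustration of how the window changes the figure, here is a sketch that computes the share of category purchases made by the top 20% of buyers over a one-year versus a five-year window, using a small made-up panel. The numbers are invented purely to show the mechanics, not to reproduce the published figures.

```python
# Hypothetical panel: purchases per buyer in each of five years (made-up data).
panel = {
    "p01": [8, 7, 9, 8, 8],  "p02": [5, 6, 4, 5, 6],
    "p03": [2, 3, 2, 2, 3],  "p04": [1, 0, 2, 1, 0],
    "p05": [0, 1, 0, 0, 1],  "p06": [1, 0, 0, 0, 0],
    "p07": [0, 0, 1, 0, 0],  "p08": [0, 1, 0, 1, 0],
    "p09": [0, 0, 0, 0, 1],  "p10": [0, 0, 0, 1, 0],
}

def pareto_share(per_buyer_totals, top_fraction=0.2):
    # Share of purchases made by the top X% of the buyers who bought at all
    # within the chosen window.
    buyers = sorted((t for t in per_buyer_totals if t > 0), reverse=True)
    n_top = max(1, round(top_fraction * len(buyers)))
    return sum(buyers[:n_top]) / sum(buyers)

one_year = [years[0] for years in panel.values()]
five_years = [sum(years) for years in panel.values()]

print(f"Pareto share, 1-year window: {pareto_share(one_year):.0%}")
print(f"Pareto share, 5-year window: {pareto_share(five_years):.0%}")
```

The same households, sliced by different windows, give different Pareto figures, because the longer window pulls many very light buyers into the buyer base.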
Zealotry versus Zest

For the last post in this series, let's first turn to Frank to create the mood:

And now the end is here
And so I face the final letter
Unfortunately, it is a Z
A D, M, or P, would be much better
I posted each day, some short, some long
I even created, an anti-brand love song
In the pursuit, of Better Brand Health
Finally the last posting day

For my last post I want to encourage you to reject the Zealotry of fanatically believing in an idea regardless of the evidence, in favour of a Zest for learning, wanting to know more. Marketing has a history of embracing and championing ideas that seem good on paper, but crumble under scrutiny. You can have a Zeal for a new idea, but ask for evidence, look for evidence, and if necessary, create the environment for evidence to emerge - and be prepared to temper or even turn off your Zeal if the evidence doesn't appear. That makes you stronger as a marketing professional, and strengthens all of us as a discipline.

I hope these posts and Better Brand Health lead you to improve your marketing research. As we get better quality data, we make better decisions now, and lay the foundations to learn even more in the future. Our R&D is ongoing, but the quality of data we have is a big contributor to how much and how quickly we learn.

How you can engage as an individual

How Brands Grow - for Executives
Whether you are a C-Suite marketing executive with a global team, a business owner looking to expand your footprint, or an aspiring young leader - this experience is for those who want to achieve evidence-based growth. How Brands Grow - for Executives equips you, the decision-maker, with the critical evidence and tools to apply world's best-practice marketing techniques to your brands. Gain insights from the best in the business through interactive workshops and dynamic discussions.
Learn more

Delivered by world-class Ehrenberg-Bass Institute marketing experts, this event leverages the most up-to-date findings to show you:
∙ The strategies that will (and won't) lead to sustainable brand growth
∙ The real-world value of various marketing interventions
∙ How to incorporate evidence-based knowledge into your strategic decision-making and planning processes, while bringing this knowledge back with you to upskill your department

Our impact is global.

How you can engage as a company

The Ehrenberg-Bass Institute are considered an authority on marketing by many of the world's biggest brands. They provide companies with the tools and knowledge to grow brands and develop smarter, evidence-based marketing teams.

Ehrenberg-Bass Sponsorship
Ehrenberg-Bass Sponsorship offers exclusive access to a multimillion-dollar R&D program in exchange for an annual financial contribution. With these funds our team of experts investigate the questions that matter most to marketers. The result is a huge body of research translated into practical insights that are scientifically proven to apply to every brand, across all markets, anywhere in the world. Only our Sponsors have access to all of our important research, our latest findings, as well as the support they need to apply it to their business. Joining as a Sponsor will make your marketing team more efficient and effective.
We teach scientific marketing laws, and how to apply that knowledge to grow brands, gain market share and increase sales.

"The Ehrenberg-Bass Institute actually is the language of the C-suite. It does lean more towards the scientific, empirical, economic language that lends more credibility in the boardroom."
- Vice President Marketing ANZ, Unilever

Learn more

References

A is for Attitude
I is for Intentions-to-buy
Romaniuk, J. (2023). Brand Attitude, Chapter 8, Better Brand Health. Australia, Oxford University Press.
Juster, F. T. (1966). "Consumer buying intentions and purchase probability: An experiment in survey design." Journal of the American Statistical Association 61(315): 658-696.

B is for B2B
Wright, M. and M. MacRae (2007). "Bias and variability in purchase intention scales." Journal of the Academy of Marketing Science 35(4): 617-624.
Romaniuk, J. (2023). The rise of the machines?, Chapter 13, Better Brand Health. Australia, Oxford University Press.
Romaniuk, J. (2023). Brand Buying, Chapter 10, Better Brand Health. Australia, Oxford University Press.

C is for Category
J is for Jobs To Be Done (JTBD)
Romaniuk, J. (2023). Applying the Laws of Growth to Brand Health Tracking, Chapter 1, Better Brand Health. Australia, Oxford University Press.
Christensen, C. M., et al. (2016). Know Your Customers' "Jobs to be Done". United States, Harvard Business Press.

D is for Distinctive Assets
Romaniuk, J. (2021). Building Mental Availability. How Brands Grow: Part 2. J. Butler. Victoria, Australia, Oxford University Press: 61-84.
Romaniuk, J. (2023). Brand attribute selection, Chapter 3, Better Brand Health. Australia, Oxford University Press.
Romaniuk, J. (2023). Mental Availability and Category Entry Points, Chapter 5, Better Brand Health. Australia, Oxford University Press.
Romaniuk, J. (2018). Building Distinctive Brand Assets. South Melbourne, Victoria, Oxford University Press.

K is for Key Performance Indicators (KPIs)
E is for Exposure to Executions
Romaniuk, J. (2023). Applying the Laws of Growth to Brand Health Tracking, Chapter 1, Better Brand Health. Australia, Oxford University Press.
Romaniuk, J. (2023). Brand attribute selection, Chapter 3, Better Brand Health. Australia, Oxford University Press.

L is for Love (Brand Love)
Harrison, F. (2013). "Digging Deeper Down into the Empirical Generalization of Brand Recall." Journal of Advertising Research 53(2): 181-185.
Romaniuk, J. (2023). Brand Attitude, Chapter 8, Better Brand Health. Australia, Oxford University Press.
Vaughan, K., V. Beal and J. Romaniuk (2016). "Can brand users really remember advertising more than nonusers? Testing an empirical generalization across six advertising awareness measures." Journal of Advertising Research 56(3): 311-320.

M is for Memory Building
G is for Good Measures
Romaniuk, J. (2023). Applying the Laws of Growth to Brand Health Tracking, Chapter 1, Better Brand Health. Australia, Oxford University Press.

H is for Handling the Haters
East, R., et al. (2007). "The relative incidence of positive and negative word of mouth: a multi-category study." International Journal of Research in Marketing 24(2): 175-184.
Romaniuk, J. (2023). Exposure to Marketing Activity, Chapter 11, Better Brand Health. Australia, Oxford University Press.

N is for Non-buyers
Romaniuk, J. (2023). Brand attribute measurement, Chapter 4, Better Brand Health. Australia, Oxford University Press.

O is for Ownership
Romaniuk, J. and R. East (2021). Word-of-Mouth Facts Worth Talking About. How Brands Grow: Part 2. J. Butler. Victoria, Australia, Oxford University Press: 119-138.
Romaniuk, J. and E. Gaillard (2007). "The relationship between unique brand associations, brand usage and brand performance: Analysis across eight categories." Journal of Marketing Management 23(3): 267-284.
Romaniuk, J. (2023). Word-of-mouth measurement, Chapter 12, Better Brand Health. Australia, Oxford University Press.
Romaniuk, J. (2018). Building Distinctive Brand Assets. South Melbourne, Victoria, Oxford University Press.
Ehrenberg, A. (1988). Repeat-buying: Facts, theory and applications. London, Oxford University Press.

P is for Prominence, a Pillar of Physical Availability
U is for U-shaped distributions
Nenycz-Thiel, M. and J. Romaniuk (2021). Building Physical Availability: Prominence and Portfolio. How Brands Grow: Part 2. J. Butler. Victoria, Australia, Oxford University Press: 159-172.
Ehrenberg, A. S. C. (1959). "The pattern of consumer purchases." Applied Statistics 8(1): 26-41.
Nenycz-Thiel, M. and J. Romaniuk (2023). What about Physical Availability, Chapter 14, Better Brand Health. Australia, Oxford University Press.
Schmittlein, D. C., et al. (1985). "Why does the NBD model work? Robustness in representing product purchases, brand purchases and imperfectly recorded purchases." Marketing Science 4(3): 255-266.

Q is for Is Quicker always better?
Romaniuk, J. (2023). Brand attribute measurement, Chapter 4, Better Brand Health. Australia, Oxford University Press.

R is for Rating Scales
Romaniuk, J. (2023). Brand attribute measurement, Chapter 4, Better Brand Health. Australia, Oxford University Press.
Anderson, J. R. and G. H. Bower (2013). Human associative memory. Psychology Press.
Barnard, N. R. and A. Ehrenberg (1990). "Robust Measures of Consumer Brand Beliefs." Journal of Marketing Research 27(4): 477-484.
Driesener, C. and J. Romaniuk (2006). "Comparing methods of brand image measurement." International Journal of Market Research 48(6): 681-698.
Romaniuk, J. (2008). "Comparing methods of measuring brand personality traits." The Journal of Marketing Theory and Practice 16(2): 153-161.

S is for Sample Screening questions
Romaniuk, J. (2023). Category buying behaviour, Chapter 9, Better Brand Health. Australia, Oxford University Press.
Schoenmueller, V., et al. (2020). "The polarity of online reviews: Prevalence, drivers and implications." Journal of Marketing Research 57(5): 853-877.
Sharp, A. and J. Romaniuk (2023). The rise of the machines?, Chapter 13, Better Brand Health. Australia, Oxford University Press.

V is for Valence in word-of-mouth effects
Romaniuk, J. and R. East (2021). Word-of-Mouth Facts Worth Talking About. How Brands Grow: Part 2. J. Butler. Victoria, Australia, Oxford University Press: 119-138.
East, R., et al. (2008). "Measuring the impact of positive and negative word of mouth on brand purchase probability." International Journal of Research in Marketing 25(3): 215-224.
Romaniuk, J. (2023). Word-of-mouth measurement, Chapter 12, Better Brand Health. Australia, Oxford University Press.

W is for Word-of-mouthish attributes are a Waste of time
Romaniuk, J. (2023). Word-of-mouth measurement, Chapter 12, Better Brand Health. Australia, Oxford University Press.
Romaniuk, J. (2023). Brand attribute measurement, Chapter 4, Better Brand Health. Australia, Oxford University Press.
X is for eXcellence in X (and Y) axes
T is for Top-of-mind Brand Awareness
Ehrenberg, A. (2000). "Data reduction - Analysing and interpreting statistical data." Journal of Empirical Generalisations in Marketing Science 5: 1-391.
Romaniuk, J. (2023). Brand awareness, Chapter 2, Better Brand Health. Australia, Oxford University Press.
Ehrenberg, A. and J. A. Bound (2000). Turning data into knowledge. Marketing research: State of the art perspectives. C. Chakrapani. Chicago, IL, American Marketing Association: 23-46.
Romaniuk, J. and B. Sharp (2004). "Conceptualizing and measuring brand salience." Marketing Theory 4(4): 327-342.
Romaniuk, J. (2021). Building Mental Availability. How Brands Grow: Part 2. J. Butler. Victoria, Australia, Oxford University Press: 61-84.

Y is for Yesterday to Over a Year Ago - Selecting Timeframes
Sharp, B., et al. (2019). "Marketing's 60/20 Pareto Law." Social Science Research Network: 1-5.
Kim, B. J., et al. (2017). "The Pareto rule for frequently purchased packaged goods: an empirical generalization." Marketing Letters 28(4): 1-17.
Romaniuk, J. (2023). Category buying behaviour, Chapter 9, Better Brand Health. Australia, Oxford University Press.

Ehrenberg-Bass Institute
University of South Australia
City West Campus, Level 4, Yungondi Building
70 North Terrace, Adelaide, SA 5000, Australia
www.MarketingScience.info
info@MarketingScience.info
/ehrenbergbass
@ehrenbergbass