Tracking studies in market research
7th December 2018

A client’s perspective on what makes a good tracking study and how to connect it to real business decisions.

What is a tracking study? 

A tracking study’s purpose is to measure the same information repeatedly over time, generating an ongoing measure that enables the identification of trends and comparisons, spikes and dips (versus competitors, versus last month, versus last year and so on). Running a tracker keeps your finger on the pulse of all sorts of important factors, providing an early warning system, a ranking tool, a measure of short-term impact and a ready source of facts for decision making.

Trackers are better at discovering the what and the how than the why. Standard fare for trackers includes brand performance and advertising impact, but if your definition is simply an ongoing quantitative study, then you could reasonably include exit surveys (talking to lost or lapsed customers), customer satisfaction and mystery shopping. When set up correctly they are impossible to ignore – asking the killer business questions of hundreds of customers (yours and your competitors’) over years and years, supported by the science of statistics.

How do tracking studies work? 

Trackers are quantitative by nature, asking enough respondents to create statistically significant results. It’s this statistical nature that makes trackers an invaluable tool for business decisions, placing customer responses firmly in the realms of mathematical analysis. As a rule of thumb, you need a minimum of 400 respondents per wave of the study, enabling the results to be examined not only as a whole, but also by cuts such as gender, age, socio-demographic group and geographical location. Each cut you review should contain a minimum of 100 respondents to give results worthy of action (significance testing, confidence intervals, robust sample sizes and the dark art of statistics can wait for another occasion). You are going to be spending a decent amount of money running your tracker, so it’s key to ensure you get robust results that you can trust (and defend when challenged internally).
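
To make those rules of thumb concrete, here is a minimal sketch (in Python, purely for illustration) of the standard margin-of-error calculation for a proportion, assuming a 95% confidence level and the worst-case 50% response split; it shows why 400 respondents per wave and 100 per cut are sensible floors.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sample proportion at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 100):
    print(f"n = {n}: +/- {margin_of_error(n) * 100:.1f} percentage points")

# n = 400: +/- 4.9 percentage points (a full wave)
# n = 100: +/- 9.8 percentage points (a single cut)
```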

Trackers don’t need to be continuous in the classic sense of the word, running on a monthly or quarterly basis; they can also be tied to key business activity, for example TV advertising campaigns. They do, though, need continuity to enable comparison through time – the same methodology, core questions (and answer framework), respondent make-up and so on.

Timing and frequency of your study should reflect the speed of change of the information you are hoping to measure (there are also financial considerations; you may not be able to afford to run the tracker every month). Awareness of a specific advertising campaign and the linked movement in spontaneous and prompted awareness of your brand will fluctuate at a much faster rate than any changes in brand attributes.

How can your business benefit from tracking studies? 

Trackers have the flexibility to move between questions specific to your business and broader market topics. This makes them great for ranking you against the competitor set on your key attributes and perceptions, bringing a sense of customer reality to internally held beliefs. The questions should flow in a common-sense order, normally moving from broad subjects to specific matters. An example brand & advertising (B&A) tracker may flow like this:

  1. Spontaneous awareness of brands followed by prompted awareness from a given list
  2. Any previous/current usage of these brands and when they were last used – this can include use of the website, what was purchased, how much was spent and how often
  3. Likelihood to use/use again – potentially covering expected future spend and timeframe
  4. Spontaneous awareness of advertising – TV, press, radio (who was advertising and any memory of the creative/message)
  5. Prompted awareness of advertising – usually using debranded versions to see if they know who the advertising is for. The ability to spot misattribution is a key function of B&A trackers. When you are spending millions it is good to know if a significant proportion of viewers believe the advert is for your main competitor
  6. Measurement against key brand attributes – value for money, honest, likeable, expertise, green credentials, trust, modern etc…
  7. Net promoter score (NPS) – likelihood to recommend to a friend or relative (a short worked example of the calculation follows this list)
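
For those unfamiliar with the metric in point 7, NPS is conventionally calculated as the percentage of promoters (scores of 9–10 on a 0–10 likelihood-to-recommend scale) minus the percentage of detractors (scores of 0–6). A minimal Python sketch, using made-up ratings purely for illustration:

```python
def net_promoter_score(ratings):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical wave of answers to "How likely are you to recommend us?"
wave = [10, 9, 8, 7, 10, 6, 9, 3, 8, 10]
print(net_promoter_score(wave))  # 5 promoters, 2 detractors -> 30.0
```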

Because trackers can cover a myriad of interlinked topics they can be long – up to 20 minutes is not unreasonable. They also tend to grow rather than shrink, with internal stakeholders wanting to add questions to cover new initiatives.

This leads to two key problems:

  • Respondent bias (people only willing to complete a 20-minute survey because they are really annoyed or really happy with you)
  • Respondent fatigue (people dropping out of the survey part way through)

You can counteract these issues in a few different ways:

  1. Offer a prize draw for completed surveys. Make it a reasonable prize with mass appeal and a decent chance of winning (for example 3 chances to win £100 of Amazon vouchers)
  2. Make the survey interesting. Web-based trackers work brilliantly with interactive visuals, embedded dynamic content and various methods of getting to the answer beyond the normal radio button
  3. Be ruthless. Get rid of questions that serve no purpose, i.e. questions whose answers no one in the business has looked at for a while

Methodology bias is also a consideration, but not something I plan to dwell on beyond the perceived issues of using web-based surveys. I think we are now in a world where the majority of people have access to and use the web, barring the very old and the very young. The belief that you can only use web-based trackers for young and middle-aged consumers is simply not true anymore (one of my recent prize winners was in their nineties).

Another aspect of flexibility is the ability to add short-term questions, or to target a different subset of people (for example running the tracker in a different region or territory). This can save you money, absorbing questions which would otherwise have required commissioning a separate research study (and, depending on survey frequency, can provide answers very quickly). Do remember, though, that a quantitative study is only as good as the robustness of its results, which is driven by high response volumes – more likely to be garnered from a shorter survey.

Trackers also allow you to augment respondents with recent customers, a real bonus if you are unlikely to find your customers through a nationally representative panel provider. You may want to add your own customer data if your market share is small or the window between purchases is large. You will need to compare this data against the customers found naturally through the panel to understand any significant differences in results that should be factored in when viewing the findings.
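
One simple way to run that comparison – a sketch rather than a prescription – is a two-proportion z-test between the augmented customer sample and the panel sample on any key measure; all counts and group labels below are hypothetical.

```python
import math
from statistics import NormalDist

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between two sample proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 180 of 300 augmented customers agree with a brand attribute,
# versus 220 of 400 respondents sourced naturally from the panel.
z, p = two_proportion_z_test(180, 300, 220, 400)
print(f"z = {z:.2f}, p = {p:.3f}")  # treat p < 0.05 as a difference worth factoring in
```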

To put all this in a few sentences…

  • Run your tracker at a frequency that mirrors the speed of change of the information you are trying to measure
  • There needs to be continuity of core questions and methodology
  • Ensure you get enough responses to provide significant results
  • Don’t be afraid to augment with your own customer data, just be aware of how this impacts results
  • Make the questionnaire fun to complete – keep people interested, keep the length down
  • Add a prize draw to promote completion and reduce bias
  • Ensure the survey flows in a common-sense fashion, from broad to specific

Now that you have this great business tool up and running, you and your internal stakeholders are going to want to interrogate the mountain of data it is generating. In my view, the best trackers are supported by an online analysis tool – something that allows you to cut data, create graphs, set your own analysis time frames and look at variations between genders, age groups and so on. There is no point having such a richness of data if you cannot have it on tap.
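
As a small illustration of the kind of cut such a tool makes easy, here is a pandas sketch over a hypothetical respondent-level export (the column names wave, age_band and prompted_aware are invented for the example):

```python
import pandas as pd

# Hypothetical tracker export: one row per respondent per wave.
responses = pd.DataFrame({
    "wave":           ["2018-Q1", "2018-Q1", "2018-Q1", "2018-Q2", "2018-Q2", "2018-Q2"],
    "age_band":       ["18-34", "35-54", "55+", "18-34", "35-54", "55+"],
    "prompted_aware": [1, 0, 1, 1, 1, 0],
})

# Prompted brand awareness (%) by wave and age band.
cut = (
    responses
    .groupby(["wave", "age_band"])["prompted_aware"]
    .mean()
    .mul(100)
    .unstack("age_band")
)
print(cut.round(1))
```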

Trackers also need to be supported by a research agency that fully understands how to make the data sing. It is not enough to be able to run the survey and then pour the results into an online graph generator. You need their data analysis expertise to create the story for your business, offering not just quarterly presentations, but actionable insights and expert opinions with which to guide confident business decisions.

About our guest blogger Jonathan Solomon

Jonathan Solomon is an experienced Head of CRM and Insights. Having worked for almost 15 years within the marketing and research teams of Vision Express, Citibank and E.ON, Jonathan has a good understanding of how quantitative market research can guide business strategy.

Want these kinds of results?

We’d love to talk with you about how our insights could help your business grow. Drop us an email at hello@clusters.uk.com or call us on +44 (0)20 7842 6830.
