Author Archives: Jonathan Solomon

  1. Quantitative Research Tips


    Guest blogger, Jonathan Solomon:

    There are three truths I have picked up about quantitative research studies over the years, especially in comparison to qualitative studies:

    1. You need to talk to a lot of people, which can make quantitative studies expensive.
    2. They take longer to set up, as you wrestle with the questionnaire design and routing.
    3. They are often taken as gospel by the business, thanks to the mathematical and statistical analysis made possible by the resulting answers.

    With these three things in mind, it’s important to set your quantitative research up correctly before pushing the button and releasing it onto an unsuspecting public.

    Top 10 tips for carrying out quantitative research

    Here are my top ten things to consider:

    1. Does your question fit with quantitative methods?

    This has to be your first consideration. What exactly are you trying to measure? Quant is great for understanding the what, the who and the how, but poor at the why, due to the structured nature of the questions married with a fixed list of answers. Yes, you can have open-ended responses, but these are harder to analyse (increasing cost) and take longer to answer, making the questionnaire longer, which can hurt both completion rates and cost.

    2. How easy is it to replicate?

    It’s very rare that you’ll want to measure something only once, so can you afford to run this survey on a regular basis? You need to consider the methodology for your research: telephone or face-to-face street surveys, for example, will be more expensive than a web survey. You should also factor in how many respondents are available (will you be able to find new respondents for follow-up surveys?).

    3. Timing is key

    Not just for the initial survey, but for any follow-on surveys. Running the survey on a regular or continuous basis will enable tracking of trends, spikes and dips through time. Continuous is the easiest, with respondents constantly sought, but it’s expensive (in money and resource).

    When thinking about timing, consideration should be given to what you’re measuring. For example, if you’re interested solely in the impact of a new advertising campaign (in itself and in comparison to previous/competitor campaigns), then you can just run your survey once the campaign is completed.

    4. Do you have enough respondents for the data cuts required?

    Every data cut reduces the volume of people being measured, which in turn reduces the statistical significance of the results. If you know that you’re going to want to analyse by age band, gender, geographic location and so on, then make sure you have a large enough pool of respondents to start with.
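To make this concrete, here is a minimal sketch of working out how large the total sample must be so that even the smallest planned cut still contains a workable number of respondents. The cut shares below are invented for illustration:

```python
import math

# Hypothetical shares of each age band in the target population (assumed figures)
cuts = {"18-24": 0.12, "25-34": 0.17, "65+": 0.19}
min_per_cut = 100  # smallest cell size worth analysing

# The rarest group dictates the total sample you need to recruit
needed = max(math.ceil(min_per_cut / share) for share in cuts.values())
print(needed)  # ceil(100 / 0.12) = 834
```

In other words, a group making up only 12% of the population forces you to recruit well over 800 respondents in total, before you even consider crossing cuts (e.g. young *and* female).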

    5. The significance of significant

    Linked to the point above: in basic terms, statistical significance measures the likelihood that an observed result is pure chance rather than an actual finding. You’ll usually be measuring the significance of the difference between two or more results (boys versus girls, advert A versus advert B, old versus young). Without getting into the science, the more people you ask, the smaller the difference that can be measured as statistically significant. There is nothing worse than having a ‘key finding’ that cannot be proven and could just be chance.
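For readers who do want a little of the science, the standard check for a difference between two groups’ results is a two-proportion z-test. A minimal sketch, using illustrative figures rather than anything from the text:

```python
import math

def two_proportion_z_test(x1, n1, x2, n2):
    """Z-test for the difference between two independent proportions.
    x = number answering 'yes', n = group size. Returns (z, two-sided p)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal tail
    return z, p_value

# Hypothetical: 110 of 200 boys vs 90 of 200 girls preferred advert A
z, p = two_proportion_z_test(110, 200, 90, 200)
print(round(z, 2), round(p, 3))  # z ≈ 2.0, p ≈ 0.046 – significant at the 5% level
```

With only 50 respondents per group the same 55% vs 45% split would not reach significance, which is exactly the “ask more people to detect smaller differences” point above.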

    6. Completion and responder bias

    Quant surveys can be long, taking 10–20 minutes to complete. This puts people off either starting the survey or actually reaching the end. The motivation for completion can also be an issue, with only the very happy or very annoyed taking up the challenge. Methodology can also cause problems, the classic example being the belief that email and web surveys will not reach older respondents; this is probably still true only for some people in their eighties and above.

    To help avoid these issues…

    • keep your survey as short as possible – be ruthless with questions that do not add value (shorter surveys also tend to cost less to run)
    • run a prize draw – something with mass appeal and a good chance of winning (for example, three chances to win £100 of Amazon vouchers)
    • keep the survey interesting – easier with online surveys (graphics, video, audio etc.)
    • consider your audience when choosing the methodology – you can see the clipboard-wielding researchers on the streets ignoring some passers-by whilst making beelines for others. This is because they have set quotas to fill, which is much easier when you can see the potential respondents.


    7. Data overload

    One thing quant studies are great at is generating data, although this can make them quite daunting to manage. You not only have the answers to the questions, but the ability to cut those answers by age, gender and socio-demographics, or to run time trends, YOY (year-on-year) comparisons, regression and correlation analysis… the list goes on.

    You need to choose your research agency wisely, making sure you fully understand the analytical toolkit they have to offer. This toolkit should include not only the people capable of organising and analysing the data, but also a tool that you can use to interrogate the data yourself, displaying the answers in easy-to-use tables and graphs; online portals are brilliant for this.

    As Einstein reputedly said, “if you can’t explain it simply, you don’t understand it well enough” – and if you can’t explain the findings simply, your internal stakeholders will not have faith in them.


    8. Comparability with other findings

    The best decisions are made from multiple inputs. Your quantitative survey should be one of these inputs, not the only one. Where the answer to the question is fundamental to your business, ensure it’s asked in more than one place. Mirroring questions in multiple surveys will provide additional confidence for big decisions.


    9. Flexibility

    One of the great things about quant surveys is the ability to flex how they run. You can add questions, boost the volume of respondents from certain groups or locations, route people down different question paths and so on. This flexibility can provide answers to new questions cheaply and quickly, without the need to commission a separate piece of research. Work closely with your research agency to ensure you have this flexibility built into your study.


    10. Sample boosting

    The majority of surveys will look to mirror the make-up of the general population, with respondents pulled from national panels. If you’re looking for the views of recent customers, this might not work for you:

    • If you hold a 10% market share, then you can reasonably expect only 10% of respondents to be customers. Consequently, you’ll need to talk to 1,000 people to get 100 customers (and that is before you cut by age, gender etc.)
    • If your repeat purchase cycle is long (white goods for example), then again you may struggle to find a significant number of recent customers

    Boosting the sample with your own data will compensate for this issue, ensuring you have a robust number of customer responses. Your research agency will be able to compare the customer sample to the national representative panel data, flagging any quirks that should be taken into consideration before combining the results.

    Quantitative studies have the potential to become the backbone of your research programme, generating ongoing data for years to come. Setting them up correctly ensures that they continue to add value and statistically measurable customer feedback to empower key business decisions.


    About our guest blogger, Jonathan Solomon

    Jonathan Solomon is an experienced Head of CRM and Insights. Having worked for almost 15 years within the marketing and research teams of Vision Express, Citi Bank and E.ON, Jonathan has a good understanding of how market research can guide business strategy.

    To find out more about how we can help you carry out effective market research for successful business decisions, get in touch with us.

  2. Tracking studies in market research


    A client’s perspective on what makes a good tracking study and how to connect it to real business decisions

    What is a tracking study? 

    So, what are tracking studies? A tracking study’s purpose is to track information over time, generating an ongoing measure that enables the identification of trends, spikes and dips, and comparisons (versus competitors, versus last month, versus last year etc.). Running a tracker keeps your finger on the pulse of all sorts of important factors, providing an early warning system, a ranking tool, a measure of short-term impact and a decision-making fact generator.


    Trackers work better at discovering the what and the how than the why. Standard fare for trackers includes brand performance and advertising impact, but if your definition is basically ‘an ongoing quantitative study’, then you could reasonably include exit surveys (talking to lost/lapsed customers), customer satisfaction and mystery shopping. When set up correctly they are impossible to ignore – asking the killer business questions of hundreds of customers (yours and your competitors’) over years and years, supported by the science of statistics.


    How do tracking studies work? 

    Trackers are quantitative by nature, asking enough respondents to create statistically significant results. It’s this statistical nature that makes trackers an invaluable tool for business decisions, placing customer responses firmly in the realms of mathematical analysis. As a rule of thumb, you need a minimum of 400 respondents per wave of the study, enabling the results to be examined not only as a whole, but also by such cuts as gender, age, socio-demographic and geographical location. The cuts, when reviewed, should contain 100 respondents as a minimum to get results worthy of action (significance testing, confidence intervals, robust sample sizes and the dark art of statistics can wait for another occasion). You are going to be spending a decent amount of money running your tracker, so it’s key to ensure you get robust results that you can trust (and defend when challenged internally).
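The rule-of-thumb figures above (400 per wave, 100 per cut) can be sanity-checked with a simple margin-of-error calculation. A sketch using the usual 95% normal approximation for a proportion, worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents.
    p = 0.5 is the worst (widest) case; z = 1.96 for 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 100):  # a full wave, then a single 100-respondent cut
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
```

A 400-respondent wave gives roughly ±5 percentage points on any headline figure, while a 100-respondent cut widens to roughly ±10 points – which is why cuts much smaller than 100 are rarely worth acting on.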


    Trackers don’t need to be continuous in the classic sense of the word, running on a monthly or quarterly basis; they can instead be linked to key business activity, for example TV advertising campaigns. They do, though, need continuity to enable comparison through time – the same methodology, core questions (and answer framework), respondent make-up and so on.


    Timing and frequency of your study should reflect the speed of change of the information you are hoping to measure (there are also financial considerations; you may not be able to afford to run the tracker every month). Awareness of a specific advertising campaign and the linked movement in spontaneous and prompted awareness of your brand will fluctuate at a much faster rate than any changes in brand attributes.


    How can your business benefit from tracking studies? 

    Trackers have the flexibility to move between questions specific to your business and broader market topics. This makes them great for ranking you against the competitor set on your key attributes and perceptions, bringing a sense of customer reality to internally held beliefs. The questions should flow in a common sense order, normally moving from a broad subject to specific matters. An example brand & advertising (B&A) tracker may flow like this:

    1. Spontaneous awareness of brands followed by prompted awareness from a given list
    2. Any previous/current usage of these brands and timeframe for last used – this can include use of website, what purchased, how much spent, how often
    3. Likelihood to use/use again – potentially covering expected future spend and timeframe
    4. Spontaneous awareness of advertising – TV, press, radio (who was advertising and any memory of the creative/message)
    5. Prompted awareness of advertising – usually using debranded versions to see if respondents know who the advertising is for. The ability to spot misattribution is a key function of B&A trackers: when you are spending millions, it is good to know if a significant proportion of viewers believe the advert is for your main competitor
    6. Measurement against key brand attributes – value for money, honest, likeable, expertise, green credentials, trust, modern etc…
    7. Net promoter score (NPS) – likelihood to recommend to a friend or relative
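The NPS in step 7 has a simple standard formula: the percentage of promoters (scores of 9–10 out of 10) minus the percentage of detractors (scores of 0–6). A minimal sketch with made-up ratings:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 'likelihood to recommend' ratings:
    % promoters (9-10) minus % detractors (0-6); 7-8 are passives."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings) / n
    detractors = sum(r <= 6 for r in ratings) / n
    return round(100 * (promoters - detractors))

# Hypothetical wave of ten ratings: 4 promoters, 3 passives, 3 detractors
print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 4, 10, 2]))  # 40% - 30% = 10
```

Note that NPS can range from -100 to +100, and because passives are discarded the same score can hide quite different rating distributions – another reason to track it over time rather than read a single wave in isolation.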

    Because trackers can cover a myriad of interlinked topics, they can be long – up to 20 minutes is not unreasonable. They also have a tendency to grow rather than shrink, with internal stakeholders wanting to add questions to cover new initiatives.

    This leads to two key problems:

    • Responder bias (people only willing to complete a 20-minute survey because they are really annoyed or really happy with you)
    • Responder fatigue (people dropping out of the survey part way through)


    You can counteract these issues in a few different ways:

    1. Offer a prize draw for completed surveys. Make it a reasonable prize with mass appeal and a decent chance of winning (for example 3 chances to win £100 of Amazon vouchers)
    2. Make the survey interesting. Web based trackers work brilliantly with interactive visuals, embedded dynamic content and various methods of getting to the answer beyond the normal radio button
    3. Be ruthless. Get rid of questions that serve no purpose, i.e. questions whose answers no one in the business has looked at for a while


    Methodology bias is also a consideration, but not something I plan to dwell on beyond the perceived issues of using web-based surveys. I think we are now in a world where the majority of people have access to and use the web, barring the very old and the very young. The belief that you can only use web-based trackers for young and middle-aged consumers is simply not true anymore (one of my recent prize winners was in their nineties).

    Another aspect of flexibility is the ability to add short-term questions, or to target a different subset of people (for example, running the tracker in a different region/territory). This can save you money, absorbing questions which would otherwise have required the commissioning of a separate research study (and, depending on survey frequency, it can provide answers very quickly). Do remember, though, that a quantitative study is only as good as the robustness of its results, which is driven by a high volume of responses – more likely to be garnered from a shorter survey.

    Trackers also give you the ability to augment respondents with recent customers – a real bonus if you are not likely to find your customers through a nationally representative panel provider. You may want to add your own customer data if your market share is small or the window between purchases is large. You will need to compare your data against those customers found naturally through the panel to understand any significant differences in results that should be factored in when viewing the findings.


    To put all this in a few sentences…

    • Run your tracker at a frequency that mirrors the speed of change of the information you are trying to measure
    • There needs to be continuity of core questions and methodology
    • Ensure you get enough responses to provide significant results
    • Don’t be afraid to augment with your own customer data, just be aware of how this impacts results
    • Make the questionnaire fun to complete – keep people interested, keep the length down
    • Add a prize draw to promote completion and reduce bias
    • Ensure the survey flows in a common sense fashion, from broad to specific

    Now you have this great business tool up and running, you and your internal stakeholders are going to want to be able to interrogate the mountain of data it is generating. In my view, the best trackers are supported by an online analysis tool – something that allows you to cut data, create graphs, set your own analysis time frames, and look at variations between gender, age and so on. There is no point having such a richness of data if you cannot have it on tap.

    Trackers also need to be supported by a research agency that fully understands how to make the data sing. It is not enough to be able to run the survey and then pour the results into an online graph generator. You need their data analysis expertise to create the story for your business, offering not just quarterly presentations, but actionable insights and expert opinions with which to guide confident business decisions.

    About our guest blogger, Jonathan Solomon

    Jonathan Solomon is an experienced Head of CRM and Insights. Having worked for almost 15 years within the marketing and research teams of Vision Express, Citi Bank and E.ON, Jonathan has a good understanding of how market research can guide business strategy.

    Read more blogs from Jonathan Solomon on our main blog page or get in touch for more information.

  3. Your CRM: a cost-effective way to enhance your research


    Guest blogger: Jonathan Solomon
    Read the fourth in our series of posts from guest blogger Jonathan Solomon who is an experienced Head of CRM and Insights. Having worked for almost 15 years within the marketing and research teams of Vision Express, Citi Bank and E.ON, Jonathan has a good understanding of how market research can guide business strategy. Here, Jonathan explains how using your existing customer relationship management strategy can be a cost-effective way to enhance your research.


    Customer Relationship Management (CRM) is traditionally a ‘below the line’ affair from the company to the customer. It takes the form of mail, email, SMS, outbound phone calls and more recently social media (social media and web do blur the lines of what classes as below or above the line, but that’s a discussion for another day).

    CRM is the company’s way of directing customers (and prospective customers) along the journey from initial purchase to repeat and additional purchases. For example, the reminder for your next dental appointment or the special customer early sale. It’s not just about bringing in sales today; where there is a natural gap between purchases (e.g. buying cars, getting your eyes tested) it’s a way to keep the relationship alive, create more of an emotional connection to the brand, and enhance the customer’s experience. And importantly, it can help to defend against competitor marketing activity.

    By its very nature, CRM is mass marketing and tangible, with campaigns going directly to thousands (and sometimes millions) of people. This makes it perfect for enhancing research in a number of ways:

    • Plenty of volume to test different creative, timing, offers, channels
    • It can be cheap, with emails and SMS both less than 10p per unit and high-volume DM campaigns (direct mail – letters, postcards etc.) less than 30p per unit
    • Targeted at individuals – including the ability to target existing and prospective customers selectively
    • You can define the window of measurement accurately – land date through to ‘valid until’ date
    • Easy to measure – specific offers and codes enable you to track the volume of responses versus volume mailed (response rate, RR%). If you are fortunate enough to hold customer specific records (one of the benefits of a loyalty card scheme), then this measurement can be at the individual level, adding a further depth of insight i.e. I sent customer A offer Z, they responded within the offer window, but actually purchased using in-store promotion X
    • Easy to measure (email) – the richness of email from an analytical point of view is worth pulling out as a separate comment. Beyond the usual response rate measures, you can also see email open rates, link click through rates (and if there are multiple links which ones are clicked more often), followed naturally by web analytics, such as drop-out rates and where people dropped out between email land and purchase/voucher download
    • You can hold out controls – groups of people whose only difference from the campaign population are that they did not receive the campaign. Controls give you the base level of activity and are a measure of the natural flow of customers into your business. Your campaign should be measured taking the Control into account i.e. if the campaign response rate is 10% and the Control is 3%, then the true uplift from the campaign is a 7% response rate (incremental response rate). For Controls to work you need some way of tracking the interactivity of the Control population with your business
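The uplift arithmetic in the last bullet can be sketched in a few lines. The respondent counts below are invented; only the 10% versus 3% rates come from the text:

```python
def incremental_response(campaign_resp, campaign_n, control_resp, control_n):
    """True uplift of a CRM campaign: campaign response rate minus the
    control group's base rate (the natural flow of customers)."""
    rr_campaign = campaign_resp / campaign_n
    rr_control = control_resp / control_n
    return rr_campaign - rr_control  # incremental response rate

# Hypothetical volumes: 5,000 responses from 50,000 mailed (10%),
# versus 150 responses from a 5,000-person held-out control (3%)
uplift = incremental_response(5000, 50000, 150, 5000)
print(f"{uplift:.0%}")  # 7% incremental response rate
```

Judging the campaign on the raw 10% would overstate its impact by nearly half, since 3 in every 100 of those customers would have turned up anyway.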

    Your CRM cycle can also be used to test results gleaned from other sources of traditional research, such as trackers and segmentation studies. If, for example, your segmentation study has identified segments of people who have different needs from your business, or are triggered by different factors within the experience of interacting with your brand, you can use this information to test different messages and offers. All customers are not the same, both in terms of value to the company and the messages that they find most attractive – convenience, price, quality, experience, offers etc.

    Research may also identify the strength of different messages/offers with regards to propensity for people to frequent and recommend your brand. Again, you can use this insight to manage the order and weight of messages within your customer communication strategy, leading with the big hitters.

    To get the best out of CRM as a research tool, there are a number of things to remember:

        1. You need to set up tests to ensure that you are able to identify what makes the biggest difference. Either test just one thing at a time, or if you do wish to test multiple changes (for example channel and offer), ensure you have enough cells to cover all combinations
          1. Cell 1 – email & £50 off
          2. Cell 2 – letter & £50 off
          3. Cell 3 – email & £30 off
          4. Cell 4 – letter & £30 off
          5. Control
        2. Use a Control, normally 10% of the volume of the campaign and removed from those selected for the campaign. Uplift in response rate above Control is the true measure of success for any campaign


        3. If you are testing within an existing campaign, then any tests should be run against the current material (the champion). The tests (the challengers) should also only be a fraction of the campaign volume (10–20%), just in case they totally fail – especially if this is a critical campaign for business performance


        4. You need to make the campaign as easy as possible to measure. Not all companies have the benefit of customer records and the subsequent ability to tie a campaign sent to a specific customer with a purchase made against said record. Other options to aid with measuring effectiveness are campaign specific barcodes, offers and promotions that are only available via that campaign, specific landing pages or promotional codes (web), getting people to submit data prior to getting the offer etc.


        5. Clash management (air traffic control) – not sending multiple campaigns to the same customer at the same time (good general business practice). Multiple campaigns will muddy the waters as to which campaign actually convinced the customer to interact.
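The channel-and-offer cell structure from point 1 can also be generated programmatically, which scales better as you add test dimensions. A sketch assuming the same two channels and two offers listed above:

```python
import itertools

# Test dimensions from the example above: two channels crossed with two offers
channels = ["email", "letter"]
offers = ["£50 off", "£30 off"]

# One cell per combination (ordered to match the list above),
# plus a held-out control that receives nothing
cells = [f"{channel} & {offer}"
         for offer, channel in itertools.product(offers, channels)]
cells.append("control")

for i, cell in enumerate(cells, 1):
    print(f"Cell {i}: {cell}")
```

Adding a third dimension (say, three send times) multiplies the cell count to 12 plus control, which quickly eats into the volume available per cell – one reason to test only one or two things at a time.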

    In summary, with CRM you are communicating with lots of people at the same time and, providing you put the proper checks in place, you can test all sorts of different things without breaking the bank or business targets – either confirming the results of traditional research or just testing a new hypothesis or two.


    Read more posts from Jonathan Solomon by visiting our main blog page.