Tips on how to manage effective
test & learn programmes


Debbie Oates, Principal Consultant, Analytics, Experian Marketing Services

On a day-to-day basis, organisations make many decisions to implement new ideas with nothing more than intuition to back them up. Some may offer war stories from previous experience but, more often than not, such learnings are based on fundamentally flawed activity that was not planned with the rigour needed to evaluate it properly and draw robust conclusions.

Why is testing so important to our clients?

A formalised approach to testing allows organisations to understand how best to execute customer communications, offer and product strategies, removing the guesswork from evaluating their impact on overall business performance. It enables the roll-out of the strategies that provide the greatest return and avoids the risk of implementing expensive mistakes that damage performance. Test and learn is not a new concept, but many organisations do not embed the approach within their day-to-day activity.

Testing programmes can only be effective when there is the capability to measure their impact on customer response or longer-term behaviour. The growth of digital channels such as digital advertising, email programmes and transactional websites means there are many more opportunities to develop and measure testing programmes that influence customer behaviour. Marketing departments are under growing pressure to prove the return on investment of all activity, and robust testing is the only way to do this, so it is a subject that will keep moving up client agendas.

What types of things should organisations be investigating through testing?

Anything that can be measured can be tested. However, the following are examples of where our clients typically tend to focus:


  • What is the impact of refined data targeting?
  • What is the impact of creative/ promotion/product on response and ongoing value?
  • What is the impact of different cross channel contact strategies on response?
  • How do changes in the conversion process increase the volume of new recruits?
  • What is the impact of personalised email communications/web content?
  • How do changes to the welcome programme in terms of timing and content impact customer engagement?
  • What is the impact of different frequencies of catalogue promotions on cross channel sales?
  • Which reactivation offers should I use and with what frequency?

What are the pitfalls?

  • Making a change and then trying to infer its impact – observation will inevitably invite assessment, but without anything appropriate to compare against, no firm conclusions can be reached.
  • Trying to test too much at one time – tempting, given the number of things that can be varied. To get a clear read on a test there must be an equivalent cell to compare it to; changing multiple dimensions at once leaves uncertainty over which element caused any change in performance.
  • Not measuring the right outcomes – this tends to happen for two reasons: issues with data collection, or a lack of business understanding of what the key performance metrics should be.
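The first two pitfalls come down to the same point: a conclusion requires an equivalent cell to compare against. As a minimal illustrative sketch (not a description of any particular client's methodology), a standard two-proportion z-test shows how a test cell's response rate is compared with a control's; all figures below are hypothetical.

```python
import math

def z_test_response(conv_test, n_test, conv_ctrl, n_ctrl):
    """Two-proportion z-test: is the test cell's response rate
    significantly different from the control cell's?"""
    p1, p2 = conv_test / n_test, conv_ctrl / n_ctrl
    pooled = (conv_test + conv_ctrl) / (n_test + n_ctrl)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_test + 1 / n_ctrl))
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical figures: 5,000 test mailings with 300 responses
# vs 5,000 control mailings with 250 responses.
z, p = z_test_response(300, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05: uplift is significant
```

Without the control cell there is only the 6% test response rate on its own, and no way to say whether the change, seasonality, or competitor activity produced it.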

Top tips:

  1. Invest time in hypothesis generation: understand the current processes and customer journey – through a high-level review or customer analysis. Brainstorm all the areas that could have an impact, and focus testing where your insight suggests the effect is likely to be greatest.
  2. Be clear on what you want to learn and how you will measure it: define what you want to learn by the end of the test – which elements are being tested, e.g. offer A is better than offer B for under-25s. Then define how this will be measured, e.g. response, conversion or profit – each has different data and outcome-period requirements.
  3. Use appropriate control groups: many nuisance factors can affect test performance, e.g. seasonality or competitor activity. To remove them, a control group must be used with a standard offer to give a baseline against which to measure impacts. For customer communications a fallow group should also be operated, which receives no targeted communications – this allows organisations to measure the return on investment of the communication programme as a whole.
  4. Ensure that the test matrix supports your learning objective: each test element requires a comparative cell, which can lead to complex testing structures, but there are methodologies for rationalising these. Additionally, the volume required in each test cell will vary depending on how confident you need to be in the results – this is a statistical calculation!
  5. Create a testing log to collate analysis findings: based on analysis, tests will show a positive impact, a negative impact, or no significant effect (in which case they may need retesting). Only roll out those with a positive impact, but keep a log of all tests to enable ongoing learning – and avoid re-inventing the wheel in the future.
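The "statistical calculation" behind cell volumes in tip 4 can be sketched with the standard two-proportion sample-size formula. Everything below is a hypothetical illustration – the baseline response rate, the uplift to be detected, and the choice of 95% confidence and 80% power are assumptions, not recommendations.

```python
import math

def cell_size(base_rate, uplift, z_alpha=1.96, z_beta=0.8416):
    """Volume needed per test cell to detect `uplift` over `base_rate`,
    at 95% confidence (z_alpha) and 80% power (z_beta) by default."""
    p1, p2 = base_rate, base_rate + uplift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / uplift ** 2)
    return math.ceil(n)

# Hypothetical: 5% baseline response, detect a 1-point uplift to 6%.
print(cell_size(0.05, 0.01))  # roughly 8,000+ records per cell
```

The point the formula makes plain is that small uplifts demand large cells: halving the uplift you want to detect roughly quadruples the volume required, which is why the test matrix and the learning objective have to be designed together.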

And finally why is it important to Experian?

Many organisations look at tests at a high level within a particular channel, e.g. using different subject lines on email or testing page variants to increase the stickiness of websites – but many don't think any wider than this. We can work with clients at a higher level to understand their overall objectives and their current communications and measurement processes. This may be a pure consultancy exercise, or it may involve pulling customer data from across channels and linking it with our own data. Experian can also help organisations at the hypothesis-generation stage, based on customer insight, as well as providing analytical support in deep-dive campaign evaluation.

About the author

Debbie Oates
Principal Consultant - Analytics
Experian Marketing Services

Debbie has over 20 years' experience in leveraging data and insight to drive effective marketing strategies across both client- and agency-side roles.

© 2015 Experian. All rights reserved.