Last month, Noisy Little Monkey hosted Digital Gaggle and we were lucky enough to have Stephen Pavlovich (CEO of conversion.com, by the way) join us to talk about 'Applying Experimentation Across Your Business'. For me, Stephen's talk exemplified two powerful techniques: experimentation and storytelling. At first I was somewhat dubious, as he began by flashing a giant yellow M at the audience - the iconic logo of a brand many of us hold in some disdain. But Stephen was actually using McDonald's as an example of a brand that got it wrong, and here's why...
In 1972, McDonald's had successfully launched a new breakfast phenomenon - the Egg McMuffin - but they didn't want to stop there. They also wanted to take a bite (*pun alert*) out of the dinnertime market, and so developed a 'quick-cook' oven and reworked up to 40% of their restaurants in preparation for putting pizza on the menu.
However, as you can probably tell, the launch was a total flop. No one wanted pizza from McDonald's. The brand had failed to address the most vital thing: actual customer demand for the product.
A big McStake
Stephen used this example to illustrate that, even if you’re a global brand with incredible power and market influence, you can’t just assume that things will work when you undertake a new risk or challenge. You need to evaluate and assess via experimentation first, not just go in all guns blazing and hope for the desired outcome.
The next great anecdote Stephen used to demonstrate the power of experimentation was Claude Hopkins' advertising for Pepsodent toothpaste in the early twentieth century. Hopkins ran newspaper ads that differed in copy and layout, each inviting readers to request a free sample of toothpaste by sending a snippet of the ad back to a different address. By counting which address received the most requests, he could deduce the more effective ad - a wonderfully simple and useful experiment.
Having a background in science, I know the essence of a good experiment boils down to testing a hypothesis: manipulating a single factor to determine its effect on an outcome. The results are invalid, however, if other variables aren't considered and controlled.
To account for these other variables, experiments typically include a 'control' - a parallel condition that is identical in every way BUT for the factor you are manipulating. Fundamentally, this allows the experimenter to compare the two and judge whether the change made a significant difference or not.
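To make "significant difference" concrete, here's a minimal sketch (in Python, with made-up traffic numbers - these aren't from Stephen's talk) of how you might compare a variant's conversion rate against the control's, using a standard two-proportion z-test:

```python
import math

def conversion_significance(control_n, control_conv, variant_n, variant_conv):
    """Two-proportion z-test: does the variant's conversion rate
    differ significantly from the control's?"""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    # Pooled conversion rate under the null hypothesis (no real difference)
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    # Two-tailed p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical test: 1,000 visitors each; control converts 100, variant 130
z, p = conversion_significance(1000, 100, 1000, 130)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the p-value comes out below 0.05, so you'd treat the variant's uplift as real rather than noise; with smaller samples the same uplift would not be significant, which is exactly why the control matters.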
Still with me? Good. Then Stephen's next story will probably make a bit more sense. To quickly summarise - a Dutch NGO was looking to improve educational standards in rural Kenya in the 1990s, but was unsure whether the conventional method of supplying textbooks was working. To test this, they conducted a controlled experiment whereby one school was given textbooks, whilst another wasn't. This revealed that, after a few months, there was no significant difference in performance between the schools, so they formulated a new hypothesis: were the children struggling to read the textbooks because they were written in English?
Hence their second experiment provided one school with flip charts and visual aids (whilst the control group received nothing), but still there was no significant difference. Taking a different approach, the NGO investigated further and found that many students had poor attendance due to parasitic infection. Their next experiment - providing deworming medication to the students - finally yielded significant results. Through trialling and testing, the NGO had identified where its funds were both needed and effective, rather than assuming the original technique was the absolute solution.
Testing radical concepts
Fundamentally, what Stephen likes about experimentation is that it allows you to test radical concepts. Our tendency is to stick with the status quo - to do things the way they've always been done - but experimentation allows us to question it. For example: is this actually the right way to solve the problem? Or would people be more likely to convert if we did X rather than Y?
Testing gives us a framework to say “let’s try a radical concept” - if it doesn’t work, we haven’t lost anything, and if it does, we’ve made a huge step forward. This unlocks a huge competitive advantage - you are doing something different to everyone else!
There are three key questions to consider when testing:
1. What is the hypothesis of how we’re going to affect user behaviour?
Start with the broad questions - e.g. What will motivate customers? What will lead to better performance? Then, get specific. A good hypothesis is clear and concise, backed up with data.
2. What is the data behind the experiment?
Look at your analytics, and talk to your customers and your service team - make sure you're confident in the data before you start testing.
3. How can you measure it and iterate on it?
Do you want to test the demand for a product that isn’t live yet? Then why not give customers the option - see if they click - then inform them this product is “coming soon” or similar. A poor user experience? Perhaps, but likely a sacrifice you’d be willing to make if it meant you weren’t going to invest a huge amount in a new product that people don’t want…
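That "coming soon" trick is often called a fake-door test, and the measurement side is simple arithmetic. A minimal sketch, with entirely hypothetical numbers and a made-up benchmark:

```python
# Hypothetical fake-door test: the product listing is live, but clicking
# "Buy" shows a "coming soon" message instead of a checkout.
page_views = 5000   # visitors who saw the new product listing
buy_clicks = 340    # visitors who clicked the (fake) buy button

demand = buy_clicks / page_views

# Hypothetical benchmark: the click-through rate your existing,
# real products achieve on the same page
benchmark = 0.04

print(f"Demand signal: {demand:.1%} vs benchmark {benchmark:.1%}")
if demand >= benchmark:
    print("Interest matches existing products - worth investigating further")
else:
    print("Weak demand - rethink before investing in the build")
```

The point isn't the code, it's the decision rule: agree up front what level of interest would justify building the product, so the experiment's outcome actually drives the investment call.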
The minimum viable experiment
Perhaps one of Stephen's more obvious points (but still a very valuable one) is not to get 'over-enthused' by an experiment. Making it bigger and more complicated only makes it harder to assess the reasoning behind the outcome. So keep it simple, stupid.
Whether it's organic search, PPC, social media or UX design you're struggling with, applying the principles of an experimentation framework is key to your business. After all, experimenting on your problems is how you progress your strategy - so how about taking testing out of your periphery and into your priorities? The results may surprise you.
A massive thanks to Stephen for such an excellent, insightful, and broadly applicable talk at this year’s Digital Gaggle conference.