An e-commerce manager recently got in touch. She runs the website for a group of 10 hotels and wanted to roll out our Direct Booking Platform. Her goal was to reassure guests that booking direct is best, but she had a problem.
Her senior management wanted to make sure that her investment would deliver a good return, so they asked their marketing team to prepare an A/B test.
Our sales team wanted to help, because they've seen the value our Platform can generate for clients. However, being the data geeks that we are, it was our understanding of statistics and their limitations that stopped us from taking the easy route and running an A/B test.
A/B testing is only effective in certain scenarios, and we knew this hotel group's situation wasn't one of them. Why?
Insufficient data
For this group of hotels, with 2,000 conversion events a month, the A/B test would need to run for over eight months to produce a result with reasonable statistical confidence. Given the impact a full rollout of the platform would have, they'd forgo an estimated £90,000 in revenue during the test, because half of their customers would never see the widget and be reassured enough to book direct.
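To make that concrete, here's a minimal sketch of the standard sample-size arithmetic, using Python's statsmodels. The inputs are illustrative assumptions consistent with the figures above (roughly 100,000 visitors a month converting at 2%, which gives ~2,000 conversion events, and a smallest-worth-detecting uplift of 3%), not the client's actual data.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Illustrative assumptions, not the client's real figures:
# ~100,000 visitors/month at a 2% conversion rate gives the
# ~2,000 conversion events/month mentioned above.
monthly_visitors = 100_000
baseline_rate = 0.02
relative_uplift = 0.03  # smallest uplift worth detecting (3%)
treated_rate = baseline_rate * (1 + relative_uplift)

# Cohen's h effect size for comparing two proportions.
effect_size = proportion_effectsize(treated_rate, baseline_rate)

# Visitors needed per variant for a two-sided test at
# 5% significance with 80% power.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)

# A 50/50 split sends half the monthly traffic to each variant.
months_needed = n_per_variant / (monthly_visitors / 2)
print(f"Visitors per variant: {n_per_variant:,.0f}")
print(f"Test duration: {months_needed:.1f} months")
```

With these assumed inputs, the test needs roughly 430,000 visitors per variant, which at a 50/50 split of 100,000 monthly visitors works out to well over eight months. Shrink the detectable uplift or the traffic and the duration stretches further still.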
We are frequently asked to undertake A/B tests that would suffer the same data issue. The availability of testing tools has driven false confidence in their results across the industry. So we thought it would be helpful to do a quick review of A/B tests: when they are great and when they should be avoided.
So, with special thanks to our Data Science team, here are the real facts.