By Wendy Eichenbaum
A/B testing is an effective and cost-efficient way to test new ideas, create a better customer experience, and meet your metrics. But you must weigh the data against the overall customer experience.
Back in 2007, a presidential campaign experimented with different versions of the main image and button label on the campaign website’s splash page. The team ran a multivariate experiment, showing combinations of six main images and four button labels, 24 combinations in all. Each visitor randomly saw one combination, and the site tracked whether that visitor signed up. The winning combination had a sign-up rate of 11.6%, compared with an 8.26% rate for the original version. That 40.6% improvement, the campaign calculated, yielded an additional 2,880,000 email addresses, and those addresses led to an additional $60 million in donations. Pretty impressive for one image and one button label.
A/B testing is a common tool in marketing, software design, and customer experience. In this testing, a company compares the original (control) design against a new (test) design to determine which version makes a task easier to perform, or which is more likely to inspire users to complete it.
When it comes to software, the A/B testing process is simple and relatively inexpensive. A team identifies a process that it wants to improve. For example, the team wants more users to purchase a product or to contact the company for more information. The team creates the test version. The company then deploys the test version so that X% of users try the test design, while the rest still use the control design. The company collects and analyzes the data, and then implements the more effective version. If a team rotates multiple test designs, the test is called a multivariate test, but the process is the same as for an A/B test.
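The traffic split described above is often implemented by bucketing each user deterministically, so a given visitor sees the same variant on every visit. Here is a minimal sketch in Python; the function name, user IDs, and 10% test fraction are illustrative assumptions, not details from any particular campaign or tool.

```python
import hashlib

def assign_variant(user_id: str, test_fraction: float = 0.10) -> str:
    """Bucket a user into 'test' or 'control'.

    Hashing the user ID (instead of calling random.random() on each
    visit) keeps each user in the same variant across sessions.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # maps the hash to [0, 1]
    return "test" if bucket < test_fraction else "control"

# Example: the same ID always lands in the same bucket.
print(assign_variant("user-123"))
```

A design note: the deterministic hash matters because a user who flips between control and test designs mid-experiment contaminates both groups' data.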
There are several advantages to this testing. You no longer have to rely on hunches. You foster creativity in problem solving, because you have a real-world arena where you can safely test new ideas. And you create a data-driven design culture. Rather than arguing opinions, you can look at the numbers. You are no longer subject to the HiPPO, the highest-paid person’s opinion.
So now, with the winning design, you have an improved customer experience, right? Well, maybe. The problem is that you don’t know whether users enjoyed the experience, or whether they will return to your site or application to perform the task again.
UX guru Jared Spool commented on A/B testing: “Conversion rate has lots of problems as a measure of success, but its big crime is that it focuses purely on the pressing of the purchase button. It doesn’t measure whether the users are happy with that purchase or whether they are delighted with the product they finally received and the way they received it. It’s easy to optimize for conversion while sacrificing a great experience. Conversion ≠ Delight.” 
So it’s extremely important to pick the right areas to test. The best place to begin is to look at your company’s goals and metrics. Do you want 80% of your users to purchase the products they add to their carts? Do you want 50% of your users to select a certain button on the home page? Then create designs to test those goals and see which version comes closest to or exceeds your metrics.
And for best results, keep the following in mind. Test pages with high traffic so you can collect enough data to reach statistical significance. Also, limit the number of changes in each test design so you can identify which change worked; you can test often to compensate for making few changes per round.
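To check whether a difference in conversion rates is statistically significant rather than noise, teams commonly use a two-proportion z-test. The sketch below uses only Python's standard library; the sample sizes are made up for illustration, with conversion rates echoing the 8.26% and 11.6% figures from the campaign example above.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / n_a: conversions and visitors for the control design.
    conv_b / n_b: conversions and visitors for the test design.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10,000 visitors per variant.
z, p = two_proportion_z_test(conv_a=826, n_a=10000, conv_b=1160, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With low-traffic pages, n_a and n_b stay small, the standard error stays large, and even a real improvement may not reach significance; that is why high-traffic pages make better test candidates.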
A/B testing is an effective and cost-efficient way to test new ideas in order to create a better customer experience and meet your metrics. But you must consider this data in the overall customer experience so that customers not only perform the tasks, but also are delighted, return to your site, and tell their friends.
About the Author
Wendy Eichenbaum has been a UX professional since the early 1990s. She began her career as a technical writer. She then earned a Master of Arts in Professional Writing at Carnegie Mellon University, studying both writing and UI design. Over the years, she has worked across verticals, from start-ups to multinational firms, in many areas of UX including research & strategy, information architecture, usability testing, and focus groups. She started her own UX consulting firm, Ucentric Design, in 2008. And she is an adjunct professor at Cal State University, Fullerton, where she teaches a class that she created, User-Centered Design for Web and Mobile Interfaces.