Mobile Marketing Masterclass

Glossary

A/B Testing


What is A/B Testing?

A/B testing is a technique for comparing two or more variations of a metadata element against each other. By identifying the variation that users prefer, app owners can improve their conversion rate. In other words, it is an experiment that helps to improve the product page.

Another word for A/B testing is split testing.

The Process of A/B Testing

Split testing follows a specific process:

1. Selecting a Metadata Element

First, the app owner needs to pick the metadata element they want to optimize. To get meaningful results, only one element should be tested at a time, while the rest of the product page stays unchanged.

2. Formulating a Hypothesis

Next, the app owner should state a hypothesis: which change is expected to produce which result? The hypothesis should name the metadata element, the aspect of it that will be changed, and the expected outcome.

3. Creating Variations

Each A/B test requires at least two variations of the metadata element being tested. Up to four variations can be used; more than that makes the test too complex.

4. Running the Test

To run a successful test, app store visitors need to be split between the variations. The same share of users should be directed to each variation. So, when running a test with four variations, 25% of the overall traffic should be sent to each of them.
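One common way to achieve such an even split is to bucket visitors by a hash of a stable visitor ID. The following is a minimal sketch of that idea, assuming a visitor ID is available; the names `VARIATIONS` and `assign_variation` are illustrative, not part of any particular testing tool.

```python
import hashlib

# Illustrative labels for four variations of one metadata element.
VARIATIONS = ["A", "B", "C", "D"]

def assign_variation(visitor_id: str) -> str:
    """Deterministically map a visitor to one of the variations.

    Hashing the visitor ID keeps the assignment stable across visits
    and spreads traffic roughly evenly (here, 25% per variation).
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIATIONS)
    return VARIATIONS[bucket]

# Example: the same visitor always sees the same variation.
print(assign_variation("visitor-12345"))
```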

5. Evaluating the Data

If the setup described above was used, the variation that generated the most installs is the winner of the test: it delivered the best conversion rate. Before judging, however, sufficient data is needed. Evaluating a test too early, on the basis of insufficient data, can lead to wrong conclusions.
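The glossary entry does not prescribe a specific statistical method, but one common way to check whether a difference in conversion rates is more than noise is a two-proportion z-test. The sketch below compares two variations; the install and visitor counts are hypothetical.

```python
from math import sqrt, erf

def two_proportion_z_test(installs_a, visitors_a, installs_b, visitors_b):
    """Conversion rates for two variations and an approximate two-sided p-value."""
    p_a = installs_a / visitors_a
    p_b = installs_b / visitors_b
    pooled = (installs_a + installs_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical numbers: variation B converts better, but with this sample
# size the difference is not yet statistically significant.
p_a, p_b, p = two_proportion_z_test(120, 5000, 150, 5000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  p-value: {p:.3f}")
```

A high p-value here is exactly the "insufficient data" case mentioned above: the apparent winner may still be the result of chance, so the test should keep running before a decision is made.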

Synonyms:
Split Testing