This guide covers how to leverage A/B testing in Builder. A/B testing is a data-driven approach to testing and measuring different content and experiences to help you make impactful improvements to your site or app.
A/B testing is available for any content created and managed in Builder, but not for content created elsewhere.
Builder's A/B testing advantages
The advantages of doing A/B testing the Builder way are varied. They include:
- Conversions are tracked in the same UI as your Builder content
- Run as many comparisons as you want
- SEO friendly
- No DOM/UI blocking
- Never see flashes of unwanted content
- No coding required
Although multiple pieces of content are sent with the initial HTML, they are heavily duplicative, and gzip compression deduplicates those redundancies. As a result, your page will not be 200% larger for two or more test groups; it will generally be only about 5-10% larger, regardless of the number of test groups, so the page stays optimized and fast with little added weight.
Builder supports this approach for all frameworks, including static frameworks such as Gatsby and Nuxt.
To leverage A/B testing, you first create different variations of a page or piece of content so that Builder can collect metrics for each version.
In the Builder editor, click the A/B Tests tab and select Add A/B Test Variation.
You can rename your new variation (called Variation 1 by default) and adjust the test ratio. Click Add A/B Test Variation again to create more variants and perform multivariate testing.
For A/B testing with reusable components, called Symbols, see Symbols.
Navigating back to the Edit tab, you can toggle between the variations and make adjustments. When ready, click the Publish button to start the test, or schedule it to publish later. You can learn more about scheduling in Targeting and Scheduling Content.
Once a test is published and receiving traffic, you cannot add or remove variants. This ensures that the results of your test aren't impacted by data from previous tests or earlier versions of the content.
👉 Note: The allocation is random based on the test ratio. Builder tracks the test group via cookies; however, users who have disabled tracking are not assigned to a test group.
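Conceptually, ratio-based allocation is a weighted random draw that is then pinned to the visitor via a cookie. This is an illustrative sketch, not Builder's internal code; the group names and `assignGroup` helper are hypothetical:

```javascript
// Illustrative sketch of weighted random assignment to a test group.
// `ratios` maps group name to its share of traffic (weights sum to 1),
// e.g. the sliders in the A/B Tests tab.
function assignGroup(ratios) {
  let r = Math.random();
  for (const [name, weight] of Object.entries(ratios)) {
    r -= weight;
    // In a real setup the chosen group would be stored in a cookie so the
    // visitor keeps seeing the same variation on later visits.
    if (r < 0) return name;
  }
  return Object.keys(ratios)[0]; // guard against floating-point rounding
}

console.log(assignGroup({ original: 0.5, variation1: 0.5 }));
```

Over many visitors, each group receives traffic in proportion to its configured ratio.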
The following video shows the steps for creating an A/B test.
A/B test metrics
To determine how your test is performing, click the A/B Tests tab, and then click the View Results button under the sliders. This takes you to the Insights tab, where you can see results for the test. At first you won't see many visitors or much data, because the test hasn't been running long enough; after a while, you will see statistics on how your tests are doing, like the sample data below.
Note that for conversion metrics on custom tech stacks you'll need to integrate conversion tracking. For Shopify-hosted stores, this integration is done for you automatically.
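On a custom stack, that integration typically amounts to one SDK call on the page where the conversion happens (for example, order confirmation). In the sketch below the Builder SDK object is stubbed so the snippet runs standalone; in a real app you would use the `builder` object imported from Builder's SDK, whose `trackConversion` method accepts an optional conversion amount:

```javascript
// Hedged sketch: the real SDK is stubbed here so the snippet is
// self-contained; with Builder's JS SDK you'd call the same method on the
// imported `builder` object instead of this stand-in.
const builder = {
  events: [],
  trackConversion(amount) {
    // The SDK associates the event with the visitor's test group (via
    // cookie) and reports it to Builder; `amount` is the optional
    // conversion value used for total-conversion-value metrics.
    this.events.push({ type: 'conversion', amount });
  },
};

// e.g. on your order-confirmation page, report the purchase total:
builder.trackConversion(99.99);
console.log(builder.events.length); // 1
```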
👉 Note: Builder calculates conversions based on impressions; therefore, an impression of Builder content is all that's necessary to lead to a trackable conversion.
Here's a look at the Insights tab, showing impressions, clicks, clickthroughs, conversions, and so forth.
You can use your A/B test data to determine which variation is performing better by comparing the conversion stats between the variations. Depending on what is important to you and your store, you might choose to focus on conversion rate, number of conversions, or total conversion value. Or maybe you're just interested in the number of clicks a particular version got. No matter what metric you choose to focus on, make sure you give the test a long enough time to run and get enough visitors.
Once you've determined the winning variation, it's time to roll out the winner to your visitors. If you want to show only the winning variation to new visitors, or want to keep all the variations around to possibly use in the future, you can adjust the sliders in the A/B Tests tab to the desired ratios. If you would rather remove all the variations except for the winning one, click Choose Winning Variation on the A/B Tests tab. This will open a dialog that allows you to select a winning variation and remove all others, thus ending the test.
When your test starts gathering data, Builder calculates the statistical significance of the results. If a variant has a statistically significant impact on conversion, you will see a checkmark next to the variant name in the table. The checkmark is gray if the p-value meets a 90% confidence level and green if it surpasses 95%.
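Builder surfaces these thresholds for you, but the underlying idea can be illustrated with a two-proportion z-test on conversion counts, a standard way to judge whether two conversion rates genuinely differ. This is a generic sketch, not necessarily Builder's exact internal calculation:

```javascript
// Two-sided p-value for the difference between two conversion rates.
// convA/visitsA: original's conversions and visitors; convB/visitsB: variant's.
function twoProportionPValue(convA, visitsA, convB, visitsB) {
  const pA = convA / visitsA;
  const pB = convB / visitsB;
  const pooled = (convA + convB) / (visitsA + visitsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitsA + 1 / visitsB));
  const z = (pB - pA) / se;

  // Standard normal CDF via the Abramowitz-Stegun erf approximation.
  const erf = (x) => {
    const s = Math.sign(x) || 1;
    x = Math.abs(x);
    const t = 1 / (1 + 0.3275911 * x);
    const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
      - 0.284496736) * t + 0.254829592) * t;
    return s * (1 - poly * Math.exp(-x * x));
  };
  const cdf = (x) => 0.5 * (1 + erf(x / Math.SQRT2));

  return 2 * (1 - cdf(Math.abs(z))); // two-sided p-value
}

// 500 of 10,000 visitors converted on the original vs 600 of 10,000 on the
// variant: p is well under 0.05, i.e. significant at the 95% level.
console.log(twoProportionPValue(500, 10000, 600, 10000) < 0.05); // true
```

A p-value below 0.10 corresponds to the 90% confidence threshold (gray checkmark) and below 0.05 to the 95% threshold (green checkmark).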
For more information about hypothesis testing and statistical significance, you can check out this helpful guide.
Ending an A/B test
When you are ready to end your test, navigate back to the A/B Tests tab and click Choose Winning Variation. This will open a dialog, shown below, in which you can select a winning variation and remove all others, ending the test.
👉 Tip: To run a new A/B test, you can duplicate the page and add new variants. If you duplicate a page before ending the test, the variants will be copied to the new page. If you duplicate after a winner has been chosen, the test variants will not be copied.
This screenshot shows how to duplicate a page.
In addition to running A/B tests in Builder, you can also display content based on a schedule, and set who can see which content. For more information, visit Targeting and Scheduling.
Need Expert help?
Submit a project to our partners, BuildQuick, and be matched with a Builder expert.