A/B Testing Best Practices
Summary / TL;DR: This article outlines best practices for A/B testing within a Blueshift campaign.
What is A/B Testing?
A/B testing, also known as split testing, is a method used in marketing, product development, and other fields to compare two versions of a web page, app, email, or other content to determine which one performs better. It's a controlled experiment where two variants, A and B, are presented to different segments of your audience. By measuring the response and behavior of users exposed to each variant, you can identify which version leads to more desirable outcomes, such as higher click-through rates, conversions, or engagement.
A/B Testing in Blueshift
Control Buckets
When a customer is created in Blueshift, a user attribute called bsft_control_bucket is assigned to the customer. Each customer, identified or anonymous, is randomly assigned a control bucket number from 1 to 100.
You can use this user attribute to split up segments into smaller groups or buckets, to create random segments, or for splitting up customers for A/B testing.
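To make the idea concrete, here is a minimal Python sketch of how a stable 1-to-100 bucket could be derived from a customer ID. Blueshift assigns bsft_control_bucket internally, so this hash-based approach is purely illustrative; the function name and hashing scheme are assumptions, not Blueshift's implementation.

```python
import hashlib

def control_bucket(customer_id: str) -> int:
    """Illustrative only: derive a stable bucket from 1 to 100 for a customer.

    Blueshift assigns bsft_control_bucket on its own; this sketch just shows
    the concept of a random-but-stable assignment, where the same customer
    always lands in the same bucket.
    """
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 100 + 1  # maps the hash into the range 1..100

bucket = control_bucket("customer-123")
assert 1 <= bucket <= 100
```

Because the assignment is stable per customer, the same user always falls on the same side of any bucket-based split, which keeps test groups consistent across sends.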
Let’s say you would like to send out an email to users within a segment but you would like to send one version to 25% of the segment and another version to the remaining 75%. You can achieve this using segments or campaign filters:
Segments
In this example, you would use two segments:

- One where bsft_control_bucket is less than or equal to 25
- One where bsft_control_bucket is greater than 25

You would then create two campaigns: one uses the segment targeting 25% of the users with email version 1, and the other uses the segment targeting the remaining 75% with email version 2.
Campaigns
In this example, you would need one segment and one campaign with two triggers:

- One trigger has a filter for bsft_control_bucket less than or equal to 25
- The second trigger has a filter for bsft_control_bucket greater than 25

You would then use email version 1 for the trigger that targets 25% of the users and email version 2 for the trigger that targets the remaining 75%.
A/B Split Feature
Campaigns
From Blueshift’s Help Center:
Using A/B tests in Blueshift, you can see which variation of a template performs better. You can test several templates that may have different forms of personalization or entirely different content.
You can set up the A/B test from the A/B Test tab of the trigger. The type of A/B test available depends on the type of channel selected for the trigger.
| A/B Test | Triggers | Description |
|---|---|---|
| Entire Creative | All | Test the entire template. If your templates are similar to each other, you can use the clone functionality after creating one template and edit the cloned template. |
| Subject Line | Email only | Test different subject lines for the same content. |
| Preheader Text | Email only | Test different preheader text for the same content. |
| Sender | Email and SMS | For email, test the From name, From address, and Reply-to address fields. For SMS, test different sender numbers/codes for the same content. |
Let’s say you would like to send an email to users within a segment, sending one subject line version to 30% of the segment, a second to 50%, and a third to the remaining 20%. You can achieve this using the A/B Split feature when building the campaign.
In this example, you will need 1 segment, 3 Subject Line versions, and 1 campaign:
On the trigger, you will need to create 3 A/B Test Variations:

- Variation 1: Subject Line 1 - 30% split
- Variation 2: Subject Line 2 - 50% split
- Variation 3: Subject Line 3 - 20% split
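The split percentages above can be thought of as cumulative ranges over a random draw. The sketch below shows one way such a 30/50/20 assignment could work; it is a conceptual illustration, not Blueshift's actual mechanism, and the function name is an assumption.

```python
import random

def pick_variation(splits, rng=random):
    """Choose a variation according to split percentages.

    splits: list of (name, percent) pairs that sum to 100.
    A draw of 0-30 lands on the first entry, 30-80 on the second,
    and 80-100 on the third, matching the 30/50/20 example.
    """
    roll = rng.uniform(0, 100)
    cumulative = 0.0
    for name, percent in splits:
        cumulative += percent
        if roll < cumulative:
            return name
    return splits[-1][0]  # guard against floating-point edge cases

splits = [("Subject Line 1", 30), ("Subject Line 2", 50), ("Subject Line 3", 20)]
print(pick_variation(splits))
```

Over many sends, roughly 30%, 50%, and 20% of recipients would see each subject line respectively.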
Automatic Winner Selection
This feature is only available for One-Time Send Campaigns.
From Blueshift’s Help Center:
For one time campaigns, you can enable automatic winner selection on messaging triggers that have multiple templates. Using this feature, you can let the campaign automatically select the best template based on the metric you are trying to optimize for.
Automatic winner selection consists of the following actions:
- During campaign execution, qualifying users for the messaging trigger are split into a test and holdout group.
- Instead of messaging all users, messages are sent only to users in the test group. Each test user is assigned a template in a round robin fashion.
- After the test group has been messaged, the campaign waits for the time specified in the End Test field so that user interactions can occur.
- After the waiting period is up, the campaign selects the winning template based on a metric selected in the Optimize field and messages the holdout group with this template.
- The winner is selected based on the test duration you choose. If the period you select is not long enough, there may not be enough data to select a winning template; in that case, a random template may be selected as the winner.
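The steps above can be sketched in Python. This is a simplified model of the flow (test/holdout split, round-robin template assignment, winner picked by metric), not Blueshift's implementation; all names and the metric callback are assumptions for illustration.

```python
from itertools import cycle

def run_winner_selection(users, templates, test_fraction, metric):
    """Sketch of the automatic-winner flow: split, test, wait, pick, send.

    users: qualifying users for the messaging trigger.
    test_fraction: share of users placed in the test group (e.g. 0.2).
    metric: callable scoring a template from interactions gathered during
            the End Test waiting period (e.g. unique impression rate).
    Returns (winner, assignments, holdout).
    """
    cutoff = int(len(users) * test_fraction)
    test_group, holdout = users[:cutoff], users[cutoff:]

    # Each test user is assigned a template in round-robin fashion.
    assignments = {}
    for user, template in zip(test_group, cycle(templates)):
        assignments[user] = template

    # After the waiting period, the best-scoring template wins and is
    # sent to the holdout group.
    winner = max(templates, key=metric)
    return winner, assignments, holdout

winner, assignments, holdout = run_winner_selection(
    users=list(range(10)),
    templates=["Template A", "Template B"],
    test_fraction=0.2,
    metric=lambda t: {"Template A": 0.12, "Template B": 0.30}[t],  # mock rates
)
print(winner)  # prints Template B
```

In the mock run, 2 of 10 users form the test group, each template is tried once, and the template with the higher (mock) impression rate is sent to the remaining 8 users.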
Let’s say you would like to send an email to users within a segment and determine which of 2 email templates is more likely to be opened, based on the impression rate from a test on 20% of the segment. You can achieve this using the Automatic Winner Selection feature.
In this example, you would need 1 segment, 1 campaign, and 2 email templates:
On the trigger, you will need to create 2 A/B Test Variations, one for each email template.
You will also need to determine how long the test should run. In this example, the test runs for 1 hour and then determines a winner based on the Unique Impression Rate. Once the winner is determined, that email template is sent to the remaining users in the segment.
For automatic winner selection to work as intended, you must choose the inputs carefully. In particular, set the holdout period and metric so that there will be enough conversions on the metric you selected; reviewing your past campaigns can help you gauge sensible values. Read more about Considerations
Read more about how to use A/B Testing for Reminder Messages