A/B Testing
A/B Testing allows advertisers to segment the users they’re reaching on X so that they can understand how best to optimize for campaign performance and gather learnings to inform their marketing strategies. These segments, referred to as user group splits, are randomized and mutually exclusive. With randomization, factors that influence outcomes are equally distributed; in other words, there are no inherent differences between groups or their expected behaviors. Because of this, when a single variation is applied to one user group and not the others, the difference in campaign performance can be attributed to that variation.

While it’s possible to test many variations at once, we strongly recommend testing a single variation at a time. This isolates the causal factor for the observed difference in campaign performance.

Variations are set at either the campaign level or the ad group level; ad groups are represented as line items in the Ads API. As an example of an ad-group-level variation, if the advertiser wants to test the efficacy of a new creative, they should create one campaign with two identical ad groups where the only difference is the creative (a sketch of this setup follows the list of use cases below).

Use Cases
A/B testing is most often used to support (1) optimization use cases for performance customers who want to understand what works best on X in order to optimize their investment and (2) learning use cases for brand advertisers who want to use learnings to inform their marketing strategy. The API will support A/B testing for any campaign variable, including:
- Creative
- Targeting
- Bid type
- Bid unit
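To make the single-variation recommendation concrete, the Python sketch below shows two ad group configurations that are identical except for the creative. The field names and IDs are illustrative placeholders, not actual Ads API line item parameters.

```python
import copy

# Illustrative ad group (line item) configuration; field names and IDs are
# placeholders, not actual Ads API line_items parameters.
base_ad_group = {
    "campaign_id": "8xdpe",            # hypothetical campaign ID
    "objective": "ENGAGEMENTS",
    "bid_amount_local_micro": 1_500_000,
    "creative_id": None,               # the one field we intend to vary
}

variant_a = copy.deepcopy(base_ad_group)
variant_a["creative_id"] = "creative-a"

variant_b = copy.deepcopy(base_ad_group)
variant_b["creative_id"] = "creative-b"

# Sanity check: the two ad groups differ only in the creative, so any
# difference in performance can be attributed to that creative.
differing = {k for k in base_ad_group if variant_a[k] != variant_b[k]}
assert differing == {"creative_id"}
```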
Attributes
A/B Tests are represented as nested structures. There are top-level fields for the A/B Test itself and an array of user group objects, each with a set of fields describing it. At a high level, every A/B Test must include the following information (a sketch of the structure follows this list):
- The test duration, represented by the start_time and end_time fields
- The level at which the split will occur, represented by the entity_type field
- At least two (and at most 30) user groups, each represented as an object in the user_groups array
- The percentage of users that should be allocated to the given user group, represented by the size field
- The campaign IDs/line item IDs that should make up the pool of users for the given user group, represented by the entity_ids array
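As a rough sketch of how these fields nest, here is the shape of an A/B Test expressed as a Python dict mirroring the JSON body. The IDs and dates are placeholders, and only the fields listed above are shown.

```python
# Illustrative A/B Test structure; IDs, dates, and values are placeholders.
ab_test = {
    "start_time": "2024-07-01T00:00:00Z",    # test duration
    "end_time": "2024-07-08T00:00:00Z",
    "entity_type": "LINE_ITEM",              # split level: CAMPAIGN or LINE_ITEM
    "user_groups": [                         # between two and 30 groups
        {"size": "50.00", "entity_ids": ["lg1ab"]},   # percentage of users, as a string
        {"size": "50.00", "entity_ids": ["lg2cd"]},   # IDs forming the group's user pool
    ],
}

# The size values across groups must add up to 100.00.
assert sum(float(g["size"]) for g in ab_test["user_groups"]) == 100.00
```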
Usage
The subsections below describe creating and updating A/B Tests. Reading and deleting work like they do with all other Ads API endpoints.
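Assuming the usual Ads API resource conventions, reads and deletes would look roughly like the sketch below. The host, API version, paths, credentials, and IDs are assumptions to confirm against the endpoint reference.

```python
# Assumed read/delete pattern for A/B Tests; confirm exact paths against the
# Ads API reference. Host, version, credentials, and IDs are placeholders.
import requests
from requests_oauthlib import OAuth1

BASE_URL = "https://ads-api.twitter.com/12"     # placeholder host and version
ACCOUNT_ID = "18ce54d4x5t"                      # hypothetical ads account ID
AB_TEST_ID = "3fauz"                            # hypothetical A/B Test ID
auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# List A/B Tests on the account (assumed collection path).
print(requests.get(f"{BASE_URL}/accounts/{ACCOUNT_ID}/ab_tests", auth=auth).json())

# Fetch and delete a single A/B Test (assumed resource path).
print(requests.get(f"{BASE_URL}/accounts/{ACCOUNT_ID}/ab_tests/{AB_TEST_ID}", auth=auth).json())
requests.delete(f"{BASE_URL}/accounts/{ACCOUNT_ID}/ab_tests/{AB_TEST_ID}", auth=auth)
```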
Creating
Create an A/B Test using the POST accounts/:account_id/ab_tests endpoint. The endpoint only accepts JSON POST bodies, and the Content-Type must be set to application/json. After the advertiser sets up two or more campaigns, an A/B Test can be created. As stated above, an A/B Test must include a test duration, a split level, and at least two user groups. Each user group must declare the percentage of users that should be allocated to it as well as the campaign IDs that should make up its pool of users. Each of these is described in further detail below, and a request sketch follows the list.

Test duration:
- The start_time and end_time values must:
  - Be in the future (relative to when the A/B Test is created)
  - Overlap with the campaign/line item flight dates
- The test must last at least one day for non-app-based campaigns and at least five days for app-based campaigns

Split level:
- The entity_type can be set to CAMPAIGN or LINE_ITEM

User groups:
- Each user group is represented as an object in the user_groups array
  - A minimum of two user groups is required
  - A maximum of 30 user groups is allowed
- The size for each user group is set using a string representation of a numeric value between 1.00 and 99.00
  - Note: The size values across all user groups must add up to 100.00
- The campaign IDs or line item IDs should be specified in each user group’s entity_ids array

Additional constraints apply to line-item-level tests:
- All line items of the A/B testing campaign must be included in the split test
- Only equal splits are allowed at the line item level
- At most five user groups (line items) are allowed in a single split test
- Only one line item is allowed per user group
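As a minimal sketch rather than an official client example, the Python below posts a campaign-level A/B Test with a 50/50 split. The host, API version, credentials, campaign IDs, and dates are placeholders, and OAuth 1.0a user-context authentication is assumed.

```python
# Minimal sketch of creating an A/B Test. Host, version, credentials, IDs,
# and dates are placeholders; the body fields follow the attributes above.
import json
import requests
from requests_oauthlib import OAuth1   # assumed OAuth 1.0a user-context auth

BASE_URL = "https://ads-api.twitter.com/12"     # placeholder host and version
ACCOUNT_ID = "18ce54d4x5t"                      # hypothetical ads account ID

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET",
              "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

body = {
    "start_time": "2024-07-01T00:00:00Z",   # must be in the future and
    "end_time": "2024-07-08T00:00:00Z",     # overlap the flight dates
    "entity_type": "CAMPAIGN",
    "user_groups": [
        # size values are strings and must sum to 100.00
        {"size": "50.00", "entity_ids": ["8xdpe"]},   # hypothetical campaign IDs
        {"size": "50.00", "entity_ids": ["8xdpf"]},
    ],
}

response = requests.post(
    f"{BASE_URL}/accounts/{ACCOUNT_ID}/ab_tests",
    data=json.dumps(body),
    headers={"Content-Type": "application/json"},   # JSON bodies only
    auth=auth,
)
response.raise_for_status()
print(response.json())
```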