Run an Incrementality experiment

Premium

At a glance: Run an Incrementality experiment to understand the true impact of your remarketing campaigns.

KB_toggle_final.jpg

About the experiment

Incrementality testing measures the real impact of your campaigns with A/B testing that accurately measures “incremental lift”—the difference between conversions that happened as a result of your remarketing campaigns, and those that would have happened anyway.

The results are crucial for understanding the actual value generated by your marketing investments. With Incrementality testing, you can objectively see the impact of particular channels and campaigns and make better decisions about budget allocation and campaign optimization.
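As a rough illustration of the arithmetic behind incremental lift (AppsFlyer computes this for you on the Incrementality dashboard), the lift is the relative difference between the test group's conversion rate and the control group's conversion rate. All numbers below are hypothetical:

```python
def incremental_lift(test_conversions, test_users,
                     control_conversions, control_users):
    """Incremental lift: the relative difference between the test group's
    conversion rate and the control group's conversion rate."""
    test_rate = test_conversions / test_users
    control_rate = control_conversions / control_users
    return (test_rate - control_rate) / control_rate

# Hypothetical example: 4,600 of 85,000 exposed users convert,
# vs. 600 of 15,000 held-back (control) users.
lift = incremental_lift(4600, 85000, 600, 15000)
print(f"Incremental lift: {lift:.1%}")  # → Incremental lift: 35.3%
```

A positive lift means the campaign drove conversions beyond what would have happened anyway; a lift near zero means most conversions were not incremental.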

AppsFlyer makes Incrementality testing easy with an automated process that begins at audience creation. The new audience is automatically split between connected media partners and a control group via the enabled “Measure incremental impact” toggle. The control group is the portion of the audience that is held back from partners and is an indispensable part of testing. Once the audience is saved, an Incrementality experiment is automatically created. After launching your campaigns on the partner side and ensuring the correct campaigns are linked, you’ll be able to view the experiment results in the Incrementality dashboard.

Get started

Incrementality is a premium feature. After subscribing, follow these steps to get started. For additional details about each step, see Set up the experiment below.

Steps

1. Know your test objectives

Maximize the actionability of your experiment results by defining clear test objectives and a hypothesis for the outcome.

2. Create the target audience and split groups

In Audiences:

  • Configure the audience.
  • Connect it to ad networks.
  • For Incrementality subscribers, audiences created after January 11, 2023 are automatically split into test and control groups via the enabled toggle.
    AppsFlyer Audiences uploads test group members to the ad networks and holds back control group members. 
3. Launch the campaign and experiment

  • On the ad networks, create and launch a campaign that targets the users in the audience.
  • The Incrementality experiment is created automatically (for Incrementality subscribers). 

Test group members are exposed to ads. The control group is not.

4. Analyze the results

The insights gained from well-run incrementality experiments enable you to optimize your marketing campaigns for maximum positive impact. For example, they can help point you towards the best messaging for a target audience, the best creative or promotion, the most effective media source, and more.

Set up the experiment

Know your test objectives

As in any experiment, it's important to define exactly what you are testing (variables) and what you expect the outcome will be (the hypothesis). The following table provides guidelines and best practices for designing your experiment: 

Variables/Hypothesis – Guidelines and best practices

Audience
Describe the audience you are testing. Choose an audience you have already set up in AppsFlyer Audiences, or create a new one that meets your test criteria.

Audience size
Select an audience with a large user base and an event with high conversion rates. A large sample size is more likely to yield statistically significant results.
Best practice: An audience of at least 50,000 users, split among 1-3 ad networks plus the control group.

Campaign
Describe the campaign (messaging, incentives, etc.).

Control group %
The control group is automatically set to 15% of the audience size (best practice is between 15% and 25%).

Test group
Determine the ad networks you want to test. The split percentage is divided equally between them, but you can adjust the percentage of the audience sent to each.
Best practice: 1-3 ad networks.

Hypothesis

Define the expected outcome of the experiment.

While it may feel somewhat counterintuitive to predict the outcome before you begin, setting out a hypothesis is an important element of experimental design; it's what helps you interpret your results.

Best practice: State a hypothesis that is easily testable by using actual percentages as opposed to vague estimates. For example:

  • "15% more users will purchase an upgrade" is an easily testable hypothesis.
  • "Significantly more users will purchase an upgrade" is more difficult to test.
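To see why a concrete percentage makes a hypothesis testable, here is a sketch (with entirely hypothetical numbers) of a standard two-proportion z-test, one common way to judge whether an observed difference between test and control conversion rates is statistically significant. The Incrementality dashboard performs its own analysis for you; this is only for intuition:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is the difference between two
    conversion rates statistically significant?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return z, p_value

# Hypothetical: 4,600 of 85,000 test users convert vs. 600 of 15,000 control.
z, p = two_proportion_z(4600, 85000, 600, 15000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a large sample, even a modest difference in rates produces a z-score well above 1.96 (the 5% significance threshold), which is why the guidelines above recommend a large audience.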

 Example

Using the guidelines described above, you might define an experiment that looks something like the following.

Variables/Hypothesis – Example

Audience: Users who completed registration but didn't make a purchase
Audience size: 100,000
Campaign: A campaign that provides a discount voucher to users who make a purchase during a specified period
Control group %: 15% of the audience size
Test group: Send remaining audience members to:
  • Network A (42% of audience)
  • Network B (43% of audience)
Hypothesis: The rate of test group users who engage with the remarketing campaign and make a purchase will be 10% higher than the rate among control group users.
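The split in this example can be sanity-checked with a few lines of arithmetic. All numbers are taken from the hypothetical example above; the total split (control plus all test groups) must equal 100%:

```python
# Illustrative arithmetic: 100,000 users split into a 15% control group
# and two test groups (42% and 43%), as in the example above.
audience_size = 100_000
splits = {"Control": 0.15, "Network A": 0.42, "Network B": 0.43}

# The total split percentage must equal 100% (including the control group).
assert abs(sum(splits.values()) - 1.0) < 1e-9

for group, share in splits.items():
    print(f"{group}: {round(audience_size * share):,} users")
```

This prints 15,000 control users held back, with 42,000 and 43,000 users uploaded to the two networks respectively.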

Create the target audience and split groups  

After you have defined your objectives, the next step is to configure the audience you'll be testing:

  1. In AppsFlyer Audiences, create an audience, or split an existing audience that meets the requirements of your experimental design.
  2. In the Connect tab for the selected audience:
    • Connect the audience to 1-3 ad network partners.
    • Make sure the toggle that splits the audience is enabled (it's automatically enabled for Incrementality subscribers, on audiences created after January 11, 2023). If not, click the toggle to enable it.
      Measure_inc_impact_toggle.jpg
    • The control group is automatically set to 15% (recommended).
    • The rest of the audience is automatically split evenly among your selected ad network partners.
      • The total split percentage must equal 100% (including the control group).

Once you have saved your audience with the split enabled, an Incrementality experiment is automatically launched (Incrementality subscribers only). These experiments can be viewed on the Incrementality dashboard with the prefix "[Auto]" before the title.

Auto_before_title.jpg

Important notes:

  • While the experiment is running, don't make any changes to the audience configuration. Changes to partners, audience name, or other settings will cause inaccurate statistical analysis.
    Best practice: Even after an experiment has ended, if you need to make changes to the definition of a previously-tested audience, duplicate the audience first and make changes to the new copy. This ensures the ongoing integrity of the historical data on the Incrementality dashboard.
  • Modifying the split and partner configuration of an existing audience will end the current Incrementality experiment and create a new one.
  • Check for audience expansion features:

Both Meta ads and Google Ads have an optional feature that enables campaigns to target outside a configured audience list in order to expand campaign reach. This feature, if left on, will bias Incrementality experiments and cause inaccurate results. Please verify that this feature is turned off.

In Meta ads, the feature is called “Advantage custom audience”. It appears at the adset level and is enabled by default. 

image__4_.png
In Google Ads, it’s called “Audience expansion”.

Google_Ads_Incrementality_limitation_cropped.jpg

Launch the campaign and experiment

Take the following actions:

1. Configure campaigns on ad networks

The first step is to create a campaign that targets the users in your audience and launch it on the ad network side (for each ad network to which the audience is being uploaded).

  • Important! In order to obtain accurate test results, configure the campaign so that it targets only users in the audience being tested; don't include users from any other source.
  • If the tested audience is a newly-created audience, it can take up to 24 hours for the audience to be available on the ad network platform. 

2. Ensure the correct campaign is linked to the experiment 

A new Incrementality experiment is automatically launched once a new audience is created with the “Measure incremental impact” toggle enabled (or if it's newly enabled on an existing audience). The most relevant campaign is automatically linked to the experiment, but you can change the campaign if you wish.

To link/change a campaign:

1. From the Incrementality dashboard, open the specific experiment and click Edit experiment.

Edit_experiment.jpg

 

Or

From Audiences, locate the specific audience > click + Link campaign in the Incrementality column.

Link_campaign__audiences_status_.jpg

A window opens, enabling you to link campaigns.

Link_campaign_to_experiment_window.jpg

2. Select the relevant campaign(s) for each ad network.

Note: The dropdown menu is populated with campaigns pulled from your media partners. When the data is available, campaigns in the dropdown are ordered by how likely they are to be used to market to the specific audience. The top suggestion is preselected for you.

3. Click Save.

The campaign is now linked to the experiment.

 

Manually create an experiment 

If you’re an Incrementality subscriber who created your audience after January 11, 2023, there's no need to manually create an experiment. Experiments are automatically created for your audiences (and appear on the Incrementality dashboard with "[Auto]" before the title). If you wish to create a manual experiment on an older audience or for some other reason, follow these steps.

To manually create a new Incrementality experiment:

  • In AppsFlyer, from the side menu, select Optimize > Incrementality.
  • From the Incrementality experiments page, click the New experiment button.
  • Follow the steps in the wizard to create the experiment. Detailed instructions are provided as you go through each step.

     Notes

    Wizard step 2 – Audience setup

    In step 2 of the wizard, you are asked to select the date from which audience members should be included in the experiment.

    By default, this is the date on which the audience was initially split with a control group. To ensure accurate experiment results, you should change this date if either of the following situations applies:

    • If you've made changes to the definition of the selected audience (audience segmentation rulesets, user identifier, etc.), change this date to match the last date on which these audience definition changes were made.
    • If you're testing a new campaign on a pre-existing audience, change this date to the date on which you launched the new campaign.

    Wizard step 3 – Set targets

    Step_3_Set_targets.jpg

    Here's where to select the target app and KPIs.

    Note: All other KPIs will be available at the end of the experiment, even if not selected.

     

    Wizard step 4 – Review

    In step 4 of the wizard, you are asked to set the experiment period.

    • By default, an experiment is set to run for 30 days from the time it is created. This is the minimum recommended period for running an experiment.
    • Select the Continue experiment indefinitely option if you wish to let the experiment continue running until you actively choose to end it. 

Analyze the results

Initial experiment results are available on the Incrementality dashboard about 48 hours after connected campaigns are launched on the ad networks, and results are updated daily. You may see alerts on the experiment page that help you identify issues that can compromise the accuracy of the results, along with suggested actions you can take to resolve them.

Incrementality raw data reports are available via Data Locker.

See also