Running an Incrementality experiment

At a glance: Run an incrementality experiment to perform lift analysis of remarketing (retargeting) campaigns.


Related reading | Incrementality guide


The following is an overview of the steps for setting up an Incrementality experiment. For additional details about each step, see Run the experiment, below.


Define test objectives

Design your experiment to define what you are testing (variables) and the expected outcome (hypothesis).


Split an audience into test and control groups

In Audiences:

  • Configure the audience
  • Connect it to ad networks
  • Split it into test and control groups

AppsFlyer Audiences uploads test group members to the ad networks and holds back control group members (in accordance with the split percentages you have defined).


Launch the test

  • In Incrementality, create a new experiment.
  • On the ad networks, create and launch a campaign that targets the users in the audience.

Test group members are exposed to ads. The control group is not exposed to ads.


Analyze the results

View the experiment results on the Incrementality dashboard.

Act on the test results

Based on the findings:

  • Adjust the test and run it again, or
  • Optimize campaigns

Run the experiment

Define test objectives

As in any experiment, it's important to define exactly what you are testing (variables) and what you expect the outcome to be (the hypothesis). The following guidelines and best practices will help you design your experiment:

  • Audience: Describe the audience you are testing. Choose an audience you have already set up in AppsFlyer Audiences, or create a new one that meets your test criteria.
  • Audience size: Select an audience with a large user base and an event with high conversion rates. A large sample size is more likely to yield statistically significant results. Best practice: an audience of at least 50,000 users, split among 1-3 ad networks plus the control group.
  • Campaign: Describe the campaign (messaging, incentives, etc.).
  • Control group %: Best practice: 15-25% of the audience size.
  • Test group: Determine the ad networks you want to test and the percentage of the audience that will be sent to each. Best practice: 1-3 ad networks.
  • Hypothesis: Define the expected outcome of the experiment. While it may feel somewhat counterintuitive to predict the outcome before you begin, setting out a hypothesis is an important element of experimental design; it's what helps you interpret your results. Best practice: state a hypothesis that is easily testable by using actual percentages rather than vague estimates. For example, "15% more users will purchase an upgrade" is easily testable, while "significantly more users will purchase an upgrade" is more difficult to test.


Using the guidelines described above, you might define an experiment that looks something like the following:

  • Audience: Users who completed registration but didn't make a purchase
  • Audience size: 100,000
  • Campaign: A campaign that provides a discount voucher to users who make a purchase during a specified period
  • Control group %: 20% of the audience size
  • Test group: Send the remaining audience members to Network A (40% of the audience) and Network B (40% of the audience)
  • Hypothesis: The rate of test group users who engage with the remarketing campaign and make a purchase will be 10% higher than that of users in the control group.
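To make the example concrete, here is a minimal Python sketch (using the audience size and split percentages from the example above) that computes the size of each group:

```python
# Example split from above: 100,000 users, 20% held back as the
# control group, 40% sent to each of two ad networks.
audience_size = 100_000
splits = {"Control": 0.20, "Network A": 0.40, "Network B": 0.40}

# The split percentages (including the control group) must total 100%.
assert abs(sum(splits.values()) - 1.0) < 1e-9

group_sizes = {name: int(audience_size * pct) for name, pct in splits.items()}
print(group_sizes)  # {'Control': 20000, 'Network A': 40000, 'Network B': 40000}
```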

Split an audience into test and control groups

After you have defined your objectives, the next step is to configure the audience you will be testing:

  1. In AppsFlyer Audiences, add an audience, or split an existing audience that meets the requirements of your experimental design.
  2. In the Connect tab for the selected audience:
    • Connect the audience to 1-3 ad network partners.
    • Turn on the Split audience option.
    • Set the size of the control group.
    • Split the remaining audience among the ad network partners you've connected the audience to.
      • Total split percentage must equal 100% (including the control group)
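The split rules above can be sketched as a quick validation check. This is an illustrative helper, not part of the AppsFlyer product; the function name and best-practice thresholds come from the guidelines earlier in this article:

```python
def validate_split(control_pct, network_pcts):
    """Check an audience split: control + all network percentages must total 100."""
    total = control_pct + sum(network_pcts)
    if total != 100:
        raise ValueError(f"Split totals {total}%, but must equal 100%")
    if not 1 <= len(network_pcts) <= 3:
        print("Best practice: split the test group among 1-3 ad networks")
    if not 15 <= control_pct <= 25:
        print("Best practice: hold back 15-25% of the audience as control")

validate_split(20, [40, 40])  # passes: totals 100%, within best practices
```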

Important! While the experiment is running, don't make any changes to the audience configuration. Changes to partners, audience name, or other settings will cause inaccurate statistical analysis.

  • Best practice: Even after an experiment has ended, if you need to make changes to the definition of a previously-tested audience, duplicate the audience first and make changes to the new copy. This ensures the ongoing integrity of the historical data on the Incrementality dashboard.

Launch the test

Set up the experiment in Incrementality

Follow these steps to create a new experiment in Incrementality:

  1. In AppsFlyer, go to Dashboards > Incrementality.
  2. From the Incrementality experiments page, click the New experiment button.
  3. Follow the steps in the wizard to create the experiment. Detailed instructions are provided as you go through each step.


    Wizard step 2 Audience setup

    In step 2 of the wizard, you are asked to select the date from which audience members should be included in the experiment.

    By default, this is the date on which the audience was initially split with a control group. To ensure accurate experiment results, you should change this date if either of the following situations applies:

    • If you've made changes to the definition of the selected audience (audience segmentation rulesets, user identifier, etc.), change this date to match the last date on which these changes were made.
    • If you're testing a new campaign on a pre-existing audience, change this date to the date on which you launched the new campaign.

    Wizard step 4 Review

    In step 4 of the wizard, you are asked to set the experiment period.

    • By default, an experiment is set to run for 30 days from the time it is created. This is the minimum recommended period for running an experiment.
    • Select the Continue experiment indefinitely option if you wish to let the experiment continue running until you actively choose to end it.


Configure campaigns on ad networks

On each ad network to which the tested audience is being uploaded, configure a campaign that targets the audience.

  • Important! In order to obtain accurate test results, configure the campaign so that it targets only users in the audience being tested; don't include users from any other source.
  • If the tested audience is a newly-created audience, it can take up to 24 hours for the audience to be available on the ad network platform. 

Analyze the results

Initial experiment results are available on the Incrementality dashboard within 24-36 hours after connected campaigns are launched on the ad networks (updated daily).

Incrementality raw data reports are available via Data Locker.
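The dashboard performs the lift analysis for you, but the underlying calculation can be illustrated in Python. The sketch below compares test and control conversion rates, computes relative lift, and checks significance with a standard two-proportion z-test; the conversion counts are hypothetical, not taken from any real experiment:

```python
from math import sqrt, erf

def lift_analysis(test_users, test_conv, ctrl_users, ctrl_conv):
    """Illustrative lift analysis: compare test vs. control conversion rates."""
    p_test = test_conv / test_users
    p_ctrl = ctrl_conv / ctrl_users
    lift = (p_test - p_ctrl) / p_ctrl  # relative (incremental) lift
    # Two-proportion z-test for statistical significance
    p_pool = (test_conv + ctrl_conv) / (test_users + ctrl_users)
    se = sqrt(p_pool * (1 - p_pool) * (1 / test_users + 1 / ctrl_users))
    z = (p_test - p_ctrl) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return lift, p_value

# Hypothetical counts: 80,000 test users (3,600 purchases)
# vs. 20,000 control users (800 purchases)
lift, p = lift_analysis(80_000, 3_600, 20_000, 800)
print(f"lift={lift:.1%}, p-value={p:.4f}")  # 12.5% lift; p well below 0.05
```

With these numbers, the test group converts at 4.5% versus 4.0% for the control group, a 12.5% relative lift that is statistically significant at the conventional 0.05 level.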
