Incrementality overview

Premium

At a glance: Understand how Incrementality measures the incremental lift generated by retargeting campaigns.

What is Incrementality?

Probably the best way to understand incrementality analysis is through an example:

Say you recently ran a remarketing campaign. For the sake of simplicity, we'll say it was with only one ad network. The campaign's conversion rate was 5%.

Now you want to evaluate the success of the campaign, which means answering some important (and notoriously difficult) questions:

  • Is 5% a good conversion rate for this campaign?
  • How do we know that the campaign truly resulted in driving additional conversions?
  • Was the campaign worth the cost of running it?

In order to answer these questions accurately and without bias, we must consider more than just measured conversions. We must take into account the number of users who would have converted organically, even if they had never engaged with a remarketing campaign.

This difference between total conversions and those that would have happened anyway is known as incremental lift (or sometimes simply "lift"), and it is critical to determining the true value of a campaign. It can be quite difficult to measure, but that is precisely what AppsFlyer Incrementality is designed to do.

Incrementality experiments

Incrementality overcomes the challenge of measuring incremental lift by allowing you to create scientifically valid experiments to compare the behavior of similar groups. These groups vary only in that some groups (the "test groups") are exposed to your campaign, and one group (the "control group") is not.

Learn more about creating and running Incrementality experiments

Calculation methodology

The following sections explain the calculations behind the numbers reported on the Incrementality dashboard.

Lift

AppsFlyer Incrementality employs 2 alternative methods for calculating lift:

  • Intent-to-treat
  • Reach-based

While reach-based methodology is preferable for purposes of evaluating campaign performance, its calculation requires the measurement of data points not provided by all ad networks. For these networks, intent-to-treat methodology is used instead.

Note

Prior to December 16, 2021, lift was calculated solely using the intent-to-treat method. However, data for earlier dates has been updated to use reach-based lift where applicable, so you can revisit historical experiments and analyze their results in light of the new methodology.

ITT-based lift

Intent-to-treat (ITT) experimental methodology has its foundations in medical science. Using ITT, researchers randomly split a given population between a control group who are not treated and a test group whom researchers intend to treat. Researchers measure the efficacy of a given treatment by comparing the results of the groups. Whether or not a test group member actually received treatment is not taken into account.

Similarly, in remarketing, the lift metric measures campaign efficacy. It doesn't matter if a user in the test group actually engaged with a campaign (received treatment) because it is the intent that is relevant in calculating incrementality. In other words, results are calculated as if every member of the test group actually engaged with the campaign.
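To make this concrete, here is a minimal sketch (illustrative only, not AppsFlyer's internal implementation) of an ITT-based lift calculation, assuming lift is expressed as the relative difference between the test and control conversion rates; all group sizes and conversion counts below are hypothetical:

```python
# ITT-based lift sketch (illustrative; all numbers are hypothetical).
# The test group denominator is the FULL group size, whether or not
# each member was actually served an ad.
test_size, test_conversions = 10_000, 500
control_size, control_conversions = 10_000, 450

test_cvr = test_conversions / test_size            # 5.0%
control_cvr = control_conversions / control_size   # 4.5% organic baseline

# Relative lift of the test group's rate over the control baseline.
itt_lift = (test_cvr - control_cvr) / control_cvr
print(f"ITT-based lift: {itt_lift:.1%}")           # 11.1%
```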

When is it used?

AppsFlyer Incrementality uses ITT-based lift calculation for SRNs (self-reporting networks).

Although it is not the preferred method for measuring lift in our context, ITT remains a scientifically sound experimental method, and it provides valid results when reach-based calculation is not possible.

Interpreting results

The underlying assumption that all members of the test group were actually reached by a campaign often results in low conversion rates and, as a consequence, low incremental lift.

Reach-based lift

As its name implies, reach-based lift calculation takes into account whether or not a member of the test group was actually reached (served a campaign ad) prior to converting. In addition to requiring additional data, reach-based lift calculation is also more complex in that it requires certain mathematical adjustments to the control group in order to be statistically valid. These adjustments are cumulatively known as the creation of a "counterfactual" control group.
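The exact counterfactual adjustment isn't spelled out here, but the report terms described later in this guide (see "Description of terms in the report") indicate that only reached test-group users are counted, and that the control group's results are adjusted for the test group's reach. Below is a minimal sketch under the assumption that the adjustment scales control results by the test group's reach rate; all numbers are hypothetical:

```python
# Reach-based lift sketch (illustrative; the precise counterfactual
# adjustment AppsFlyer applies is not documented here).
test_size, test_reached = 10_000, 4_000      # 40% reach (served at least one ad)
reached_conversions = 260                    # conversions by reached test users only
control_size, control_conversions = 10_000, 450

reach_rate = test_reached / test_size

# Assumed counterfactual: the same share of the control group "would have
# been reached", so scale its conversions by the test group's reach rate.
counterfactual_control = control_conversions * reach_rate            # 180

test_cvr = reached_conversions / test_reached                        # 6.5%
control_cvr = counterfactual_control / (control_size * reach_rate)   # 4.5%

lift = (test_cvr - control_cvr) / control_cvr
print(f"Reach-based lift: {lift:.1%}")  # 44.4%, amplified relative to ITT
```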

When is it used?

AppsFlyer Incrementality uses reach-based lift calculation for non-SRNs (attribution-link networks).

While it is the preferred method for calculating lift, reach-based calculation requires that ad networks report the following data to AppsFlyer:

  • Impression data
  • Campaign ID (from the attribution link)

If this data is not reported, incrementality measurement is not available (and displays as N/A on the Incrementality dashboard).

Interpreting results

Since reach-based lift calculation takes into account only test group members that were actually reached by a campaign, networks that are measured using reach-based incrementality will usually show amplified results (positive lift will become much higher, and negative lift will become much lower).

Best practices

Because the 2 different lift calculation methodologies can yield significantly different results, the following best practices are recommended in order to make interpretation more meaningful:

  • When creating your experiments, select audiences that are split only among a control group and one or more networks of the same type, either:
    • Attribution-link networks (non-SRNs) for reach-based calculation; or
    • SRNs for ITT-based calculation
  • In your initial experiments, consider using an audience split only between a single network and a control group. This makes it less likely that comparisons will be made among 2 or more networks for which different lift calculations are used.

Statistical significance

When evaluating the validity of Incrementality experiments, sound scientific methodology requires that we consider statistical significance (the level of confidence that the lift result isn't due to random events or chance). 

On the Incrementality dashboard, significance is reported as a percentage for each media source (an illustrative calculation follows this list):

  • The percentage indicates the likelihood that the results for that media source would repeat themselves if the experiment were performed again under similar circumstances.
  • If statistical significance is determined to be below 60%, the dashboard will display Unreliable results instead of a percentage.
  • Significance is affected by audience size, experiment duration, and media source performance.
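The exact statistical test behind the significance percentage isn't documented here. For intuition only, the following sketch applies a standard two-proportion z-test, one common way to estimate the confidence that two conversion rates truly differ; all numbers are hypothetical:

```python
from math import erf, sqrt

# Two-proportion z-test sketch (illustrative; not necessarily the exact
# test AppsFlyer uses for the dashboard's significance percentage).
test_size, test_conversions = 10_000, 500
control_size, control_conversions = 10_000, 450

p1 = test_conversions / test_size
p2 = control_conversions / control_size
p_pool = (test_conversions + control_conversions) / (test_size + control_size)
se = sqrt(p_pool * (1 - p_pool) * (1 / test_size + 1 / control_size))
z = (p1 - p2) / se

# Two-sided confidence that the difference isn't due to chance.
confidence = erf(abs(z) / sqrt(2))
print(f"Confidence: {confidence:.0%}")  # below 60% displays as "Unreliable results"
```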

Scaled control group

For test-group-level metrics (as opposed to per-user metrics), lift is calculated based on a scaled control group. In other words, the size of the control group is adjusted to match the test group size for each media source.

 Example

Assume the following scenario:

  • Test group size for Network A = 100 members
  • Control group size = 90 members
  • Experiment results:
    • Network A had 40 conversions
    • The control group had 20 conversions
  • Total conversions will be adjusted based on an assumed (scaled) control group size of 100 members:
    • (Assumed members - Actual members) / Actual members = (100 - 90) / 90 = 11%
    • Actual control group conversions x (1 + 11%) = 20 x 1.11 ≈ 22

Lift is then calculated based on a scaled control group size of 100 members with 22 conversions, compared against Network A's 40 conversions.
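The same adjustment expressed as a minimal sketch, using the numbers from the example above; the final incremental figure follows the "Total incremental measure" definition described later in this guide:

```python
# Scaled control group sketch, mirroring the example above.
test_size, test_conversions = 100, 40        # Network A
control_size, control_conversions = 90, 20

# Scale the control group up to the test group's size.
scale = (test_size - control_size) / control_size               # 10 / 90 = 11%
scaled_control_conversions = control_conversions * (1 + scale)  # 20 x 1.11 = 22.2

# Incremental conversions: test conversions minus the scaled control baseline.
incremental = test_conversions - scaled_control_conversions     # 40 - 22.2 = 17.8
print(round(scaled_control_conversions), round(incremental))    # 22 18
```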

Incrementality dashboard

[Screenshot: the Incrementality dashboard, with components labeled A-I]

Accessing the dashboard

Incrementality is a premium feature. To get started, reach out to your customer success manager or write to hello@appsflyer.com. After subscribing, you'll be able to access the Incrementality dashboard.

To access the Incrementality dashboard:

  1. In AppsFlyer, from the side menu, select Optimize > Incrementality.
  2. From the experiments list, click the row of the experiment for which you want to see the dashboard.

A tour of the dashboard

Each dashboard component (A-I) is described below.

A. Filter bar

Measure: Select the metric you want to view in the dashboard:

  • Conversions – The number of unique users who performed the selected event
  • Revenue – Revenue generated by converting users of the selected event
  • Events – The number of occurrences of the selected event performed by converting users

Your selection in the Measure filter determines which metrics are displayed throughout the dashboard.

 

Event: Select the in-app event for which you want to view data in the dashboard.

 

Media source: Filter the dashboard display by one or more media sources tested in the experiment.

 

Date range: Filter the incremental analysis of the audience according to the selected date range.

B. Experiment summary

Sums up the whole experiment with quick, at-a-glance results.

 

C. Significance tile

Displays the statistical significance of the results shown in the dashboard.

D. Headline metric tile

The metric displayed is controlled by your selection in the Measure filter, as shown below.

 

  • Conversions/Events: Cost per incremental action (CPiA), the average cost of every incremental event, based on campaign cost
  • Revenue: Incremental return on ad spend (iROAS), the ratio of incremental revenue generated for every dollar spent on the campaign

Note: Calculation of the metrics displayed in the headline tile requires cost data. Therefore, these metrics are available only to ROI360 subscribers. If the required cost and/or revenue data is not available for this or any other reason, the relevant metric displays as N/A.

E. Target size and reach tile

Target size: The number of distinct users, for each media source, who were added to the audience for the first time during the experiment's date range

  • Also represented by the pie chart's inner ring (darker color)

Reach: The percentage of the target that actually saw an ad from the relevant media source's campaign

  • Also represented by the pie chart's outer ring (lighter color)
F. Test-group-level lift tile

Displays the total incremental metric and lift for the test group (by media source):

  • Total incremental lift is calculated using a scaled control group.
  • The metric displayed is controlled by your selection in the Measure filter:
    • Conversions: Total incremental conversions
    • Events: Total incremental events
    • Revenue: Total incremental revenue
G. Per-user lift tile

Displays the incremental metric and lift per user (by media source):

  • Per-user incremental lift is calculated using the actual number of users in the test group and control group.
  • The metric displayed is controlled by your selection in the Measure filter:
    • Conversions: Incremental conversion rate
    • Events: Incremental events per user
    • Revenue: Incremental revenue per user
H. Trend chart

Displays the selected per-user incremental metric/lift over the date range of the experiment:
  • The displayed metric is controlled by your selection in the Measure filter and is the same metric as the one shown in the Per-user lift tile.
  • A dotted line (in the chart view) indicates partial data for the reported date (meaning future events can still affect the data).
  • Select various view and download options using the Trend chart view controls (as further described immediately below).
I. Trend chart view controls

Use the view controls to select options for Trend chart display and download:

  • Cumulative or daily (On-day) values
  • Chart or table format

Use the Export button to download the data as a report in CSV format.

  • The data downloaded is controlled by your selection with the view controls: Cumulative or On-day

Description of terms in the report

  • Raw measure: The total conversions/events/revenue generated by the test group, regardless of impressions (based solely on the network each user was assigned to by the split configuration).
  • Reached measure: The conversions/events/revenue generated only by test group users who had an impression of the selected campaign prior to triggering the underlying event.
  • Unreached measure: Raw measure minus reached measure; the conversions/events/revenue generated by test group users who did not have an impression prior to triggering the underlying event.
  • Per user measure: The test group's reached measure divided by its reach (for networks that support reach), or its raw measure divided by its group size (otherwise).
  • Control group raw measure: The total conversions/events/revenue generated by the control group.
  • Control group reached measure: The conversions/events/revenue generated by the control group, adjusted for the test group's reach. This value is counterfactual and enables an apples-to-apples comparison with the test group's reached measure.
  • Control group unreached measure: Control group raw measure minus the counterfactual reached measure; the portion of the control group's conversions/events/revenue outside the counterfactual reach.
  • Control group per user measure: The control group's counterfactual reached measure divided by its counterfactual reach (for networks that support reach), or its raw measure divided by its group size (otherwise).
  • Total incremental measure: The test group's reached measure minus the control group's counterfactual reached measure.
  • Incremental per user measure: The test group's per user measure minus the control group's per user measure.
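Putting these definitions together, here is a minimal sketch that derives the incremental measures for a single media source. The dictionary keys and input numbers are illustrative, not actual report column names, and scaling the counterfactual by the test group's reach rate is an assumption based on the definitions above:

```python
# Deriving incremental measures per the term definitions above
# (sketch only; names and numbers are illustrative).
test = {"raw": 500.0, "reached": 260.0, "size": 10_000, "reach": 4_000}
control = {"raw": 450.0, "size": 10_000}

reach_rate = test["reach"] / test["size"]

# Counterfactual control measures, adjusted for the test group's reach.
control_reached = control["raw"] * reach_rate        # counterfactual reached measure
control_unreached = control["raw"] - control_reached

# Per-user measures (reach-based path: reached measure / reach).
test_per_user = test["reached"] / test["reach"]
control_per_user = control_reached / (control["size"] * reach_rate)

total_incremental = test["reached"] - control_reached   # 260 - 180 = 80
incremental_per_user = test_per_user - control_per_user # 0.065 - 0.045 = 0.02
print(total_incremental, f"{incremental_per_user:.4f}")
```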

 

Raw data reports

Use Incrementality raw data reports to analyze the interaction of users with remarketing campaigns:

  • Contents: Row-level data of users included in Incrementality experiments. 
  • Availability:
    • Via Data Locker
    • Data freshness: daily 19:00-23:00 UTC

Download example Incrementality raw data files

Available reports

Category | Report name | Data Locker folder
First-seen | First-seen users | incrementality_first_seen_users
In-app events | Organic in-app events | incrementality_organic_inapps
In-app events | In-app events non-organic | incrementality_inapps
In-app events | In-app events re-attributions | incrementality_inapps_reattr
Sessions | Organic sessions | incrementality_organic_sessions
Sessions | Sessions non-organic | incrementality_sessions
Sessions | Sessions re-attributions | incrementality_sessions_reattr
Uninstalls | Uninstalls (not currently populated) | incrementality_uninstalls

Report logic

The basic logic underlying the report structure is as follows (a short code sketch follows this list):

  • Users participating in the experiment (First-seen report):
    • In Audiences, rules are set characterizing the users to include in the experiment.
    • When a given user is identified as matching the rules, the event is recorded in the first-seen report.
      • Users are allocated randomly to a test or control group, indicated by the is_control_group field.
      • Users in the test group are allocated to a media source (pid_destination) for retargeting.
  • User engagement within the app: User engagement with the app during the experiment is recorded in context-specific reports, according to:
    • Engagement type: Session or in-app event
    • User attribution status when the user is first seen: Organic, non-organic, or re-attribution. For example, if a user previously installed the app and was attributed as organic, their attribution status during the experiment is organic.
  • Uninstalls: Users who uninstall the app during the experiment. Uninstall measurement must be active.
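For example, the following sketch splits a first-seen report into its control group and per-network test groups. The file name and pandas usage are illustrative; only the is_control_group and pid_destination fields are taken from the report specification below:

```python
import pandas as pd

# Sketch: split a first-seen report into control and test groups.
# The file name is hypothetical; is_control_group and pid_destination
# are actual first-seen report fields (see the field tables below).
df = pd.read_csv("incrementality_first_seen_users.csv")

is_ctrl = df["is_control_group"].astype(str).str.lower().eq("true")
control, test = df[is_ctrl], df[~is_ctrl]

# Test-group users are allocated to a media source for retargeting.
print(test.groupby("pid_destination").size())
print(f"Control group size: {len(control)}")
```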

Data characteristics and fields

Field availability varies according to report type as indicated below:

Fields unique to Incrementality (per report type)

Field | Description | First seen | In-app events | Sessions | Uninstalls
is_control_group | If true, the user is part of the control group | Y | Y | Y | Y
pid_destination | The media source the user is sent to | Y | Y | Y | Y
audience_id | Unique identifier | Y | Y | Y | Y
joined_audience_date | Date the user first joined the audience | Y | Y | Y | Y
audience_name | Audience name (not unique) | Y | Y | Y | Y
tm | Hour of day | | Y | Y | Y
timestamp | Event time stamp (YYYY-MM-DD HH:MM) | | Y | Y | 
app_ids | App IDs associated with the audience rules | Y | | | 

Other Incrementality report fields (per report type)

Field | Display name* | First seen | In-app events | Sessions | Uninstalls
advertising_id | Advertising ID (GAID) | Y | Y | Y | Y
android_id | Android ID | | Y | Y | 
app_id | App ID | Y | Y | Y | Y
app_name | App name | | Y | Y | 
app_version | App version | | Y | Y | 
appsflyer_id | AppsFlyer ID | | Y | Y | Y
revenue_alt | App-specific currency | | Y | | 
bundle_id | Bundle ID | | Y | Y | 
country | Country code | | Y | Y | Y
currency | Currency code | | Y | Y | Y
customer_user_id | Customer user ID | | Y | Y | 
brand | Device brand | | Y | Y | 
device_category | Device category | | Y | Y | 
model | Device model | | Y | Y | 
device_model | Device model | | Y | Y | 
device_type | Device type | | Y | Y | 
event_name | Event name | | Y | Y | Y
event_revenue | Event revenue | | Y | | 
event_revenue_currency | Event revenue currency | | Y | | 
event_revenue_u_s_d | Event revenue USD | | Y | | 
event_time | Event time | | Y | Y | Y
event_value | Event value | | Y | Y | 
idfa | IDFA | Y | Y | Y | Y
idfv | IDFV | | Y | Y | 
imei | IMEI | | Y | Y | 
is_purchase_validated | Is receipt validated | | Y | | 
os_version | OS version | | Y | Y | 
platform | Platform | | Y | Y | Y
sdk_version | SDK version | | Y | Y | 
* According to raw data specification

Incrementality specifications and limitations

Trait | Remarks
Ad network access | Not available
Agency access | Not available. Incrementality is available to advertiser accounts only.
Agency transparency | You can measure campaigns run by agencies that have agency transparency enabled.
Time zone | UTC
Currency | USD
Organic data | Yes
Non-organic data | Yes
Data freshness | Dashboard: daily at 18:00 UTC for the previous day. Raw data reports in Data Locker: daily, 19:00-23:00 UTC, for the previous day.
Historical data | N/A
Account user access | Yes (according to account permissions)