At a glance: Use the Predict dashboard to obtain insights about the predicted LTV of a campaign's acquired users at its earliest stages. Explore and analyze users' predicted performance metrics (KPIs) and eCPI (effective Cost Per Install), and compare predicted results among user cohorts. Based on these insights, make decisions about setting campaign bids, and which campaigns to stop, boost, or optimize.
Overview
Predict uses predictive analytics to provide accurate LTV-based predictions of campaign success as soon as 48 hours post-install. The Predict dashboard includes 2 primary components:
- An interactive bubble chart that shows the distribution of predicted KPIs for new users. Each bubble represents a group of users ("cohort") grouped by the characteristics you specify: media source, campaign, geo, site ID, adset, and more.
- A full, sortable results table, incorporating additional data fields for each cohort.
Further investigation of specific data points allows you to extract early insights about the quality of your campaigns and helps answer questions such as:
- Will your campaign be successful?
- Which campaigns and media sources provide the best users?
- Will your campaign generate good users, or should you reduce campaign costs?
Based on these insights, you can determine how to set campaign bids and make decisions about which campaigns to continue, stop, boost, or optimize.
Opening the dashboard
To open the dashboard: Go to Labs > Predict.
Dashboard components
The Predict dashboard is made up of two tabs: the Overview tab and the Validation report tab.
Overview tab
The following sections describe the main components of the overview tab.
Filter bar and attribution method selector
Use the filters and attribution method selector to view the information most relevant to you.
Note: Selections apply to both the bubble chart and the results table.
| Filter | Description |
| --- | --- |
| App | The app for which data is displayed |
| Dates | The dates for which data is displayed. Use the date picker within this filter to change the displayed date range. |
| Attribution method | Select whether you want to view predictions based on AF, SKAN, or SSOT attribution. |
| Install type | Select the install types for which you want to view predictions. Note: This filter is relevant only when SKAN attribution is selected. |
| Media source | The ad network attributed with the install. Note: For agency-driven traffic, the actual media source is displayed irrespective of the agency transparency status. Consequently, you may see media sources that are unfamiliar to you. |
| Campaign | Campaign name and ID as set by the ad network |
| Geo | Territory or country (based on data supplied by the ad network or the device's IP address) |
| Site ID | The site ID as reported by the ad network |
| Additional filters | Click the filter icon to apply additional filters. |
Bubble chart
The heart of the Predict dashboard is an interactive bubble chart that shows the distribution of predicted KPIs for new users. Many display options are available, giving you the ability to highlight the data most important to you. These options are described in detail below.
- Data visualization options: Use the data visualization options to select:
  - Predicted KPI: By default, the chart displays bubbles based on pARPU value. You can also select other predicted KPIs, such as pROAS, p-% Retention, and p-% Paying users.
  - Dimension: Each row of the chart displays the bubbles broken down by the dimension you select (such as media source, campaign, geo, or site ID). Hover over the dimension description on the far left side of each row to see the total number of users in all bubbles on the row, including white bubbles (and gray bubbles, if applicable).
  - Characteristics: The fields used to group users into cohorts (for example, media source and campaign).
- Sorting options: Use the sorting options to display the rows in the bubble chart in ascending or descending order.
- Bubbles: Each bubble represents a group ("cohort") of newly acquired users with the same characteristics (see Data visualization options above).
- Distribution scale: The scale of predicted KPI values in the distribution.
- Value filter: Move the endpoints to narrow the range of predicted KPI values for which bubbles are displayed.
- Row average: The average predicted KPI value for all bubbles on the row.
- Legend: A description of the chart's visual elements.
- Minimum cohort size: By default, the chart displays bubbles for all cohorts with 10 or more users. Use this option to select a minimum cohort size higher or lower than 10.
- No-prediction column selector: Select whether to show a column on the far right side of the chart that displays no-prediction and no-cost-data bubbles.
- No-prediction bubbles and no-cost-data bubbles (pROAS only): Displayed in the no-prediction column when the no-prediction column selector is turned on.
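The value filter and minimum cohort size options described above act as simple predicates on each cohort. As a rough illustration of that narrowing behavior (the cohort records and field names below are hypothetical, not the dashboard's internal model):

```python
# Illustrative sketch: how the minimum-cohort-size and value-filter
# options narrow the set of bubbles shown on the chart.
def visible_cohorts(cohorts, min_size=10, value_range=(0.0, float("inf"))):
    """Keep cohorts that meet the minimum size and whose predicted KPI
    falls inside the selected value-filter range."""
    lo, hi = value_range
    return [
        c for c in cohorts
        if c["users"] >= min_size and lo <= c["p_kpi"] <= hi
    ]

cohorts = [
    {"media_source": "network_a", "users": 120, "p_kpi": 0.85},
    {"media_source": "network_b", "users": 8,   "p_kpi": 1.40},  # below min size
    {"media_source": "network_c", "users": 45,  "p_kpi": 2.10},
]

# Default: minimum 10 users, full value range -> two bubbles remain
print(len(visible_cohorts(cohorts)))  # 2
# Narrow the value filter to pKPI <= 1.0 -> one bubble remains
print(len(visible_cohorts(cohorts, value_range=(0.0, 1.0))))  # 1
```

Tightening either predicate only removes bubbles from view; the underlying cohort data is unchanged.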
Bubble details
Click on a bubble in the chart to open a side panel with additional details about the cohort.
- Cohort details: the full description of the bubble based on the characteristics you have chosen (see Data visualization options above).
- Trend chart: a chart showing the predicted value of the selected KPI for cohort users who installed the app each day.
- What does this mean? The chart breaks down the full cohort into groups of users who installed the app each day, and it displays the predicted KPI value for each of these date-based groups.
- This allows you to consider the predicted KPI value for each day's users and how it has changed during the selected date range, making it particularly useful for analyzing the impact of campaign changes (such as different creatives).
- Caution! Don't interpret the chart as showing the change in the predicted KPI value for the whole cohort over time. This is not what it is intended to represent.
- Trend charts by metric:
- pARPU and pROAS are displayed on the same trend chart (entitled pRevenue trend).
- p-% Retention for all 3 retention measurement dates is displayed on the same trend chart (entitled p-% Retention trend).
- p-% Paying users is displayed in its own trend chart (entitled p-% Paying users trend).
- KPIs: predicted KPI values and eCPI for the cohort. Learn more about Predict KPIs and how they are calculated.
- User count: the total number of users in the cohort including percentages for those:
- with prediction
- without prediction as a result of insufficient data
- without prediction as a result of a postback that does not report a conversion value due to SKAN privacy thresholds (relevant for SKAN and SSOT attribution methods only)
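The trend chart's per-install-day breakdown can be sketched in a few lines: users are grouped by install date, and the predicted KPI is computed separately for each date-based group (the per-user records and field names here are hypothetical):

```python
from collections import defaultdict

# Hypothetical per-user records: install date and predicted revenue
users = [
    {"install_date": "2024-05-01", "p_revenue": 1.10},
    {"install_date": "2024-05-01", "p_revenue": 0.90},
    {"install_date": "2024-05-02", "p_revenue": 2.00},
]

# Group users by install date. Each point on the trend chart is the
# predicted KPI for that day's installs, NOT the whole cohort's
# predicted value changing over time.
by_day = defaultdict(list)
for u in users:
    by_day[u["install_date"]].append(u["p_revenue"])

for day in sorted(by_day):
    vals = by_day[day]
    print(day, round(sum(vals) / len(vals), 2))  # per-day pARPU
```

Comparing the per-day values against the dates of campaign changes (a new creative, for instance) shows whether the change shifted the quality of newly acquired users.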
Results table
Scroll down below the chart to view a customizable, downloadable results table, including all data fields for each cohort.
Use the table's grouping options, chart rows, and column headers to customize how the data is displayed.
Validation report tab
Use the Validation report tab to compare the predicted results with the actual campaign results.
Based on the validation insights, you can see how accurate Predict is at giving you campaign forecasts, helping you determine if you should continue, stop, boost, or optimize your campaigns.
The following sections describe the main components of the Validation report tab.
Model information
The Model information section shows when the model was last trained and the install date range of the users the model was trained on. This is useful to know because, if app changes or new features were introduced after the model was last trained, the behavior of new users might differ from that of the users in the training data.
Headline metrics
Use the Headline metrics for a high-level view and comparison of the ARPU, paying users, and rolling retention information between the predicted and actual results.
Metric component layout
- Key metric name: The name of the KPI metric. Four KPI metrics are available.
- Coverage: The coverage ratio between the actual and predicted results, calculated as predicted / actual.
  - Example 1: The predicted result is $20.52 and the actual result is $23.24. The predicted result is lower than the actual result by $2.72 (12%), and the coverage is 88%.
  - Example 2: The predicted result is $19.57 and the actual result is $15.37. The predicted result is higher than the actual result by $4.20 (27%), and the coverage is 127%.
- Bullet graph: A visual bar showing the coverage. The dark blue line represents the predicted value, and the vertical light blue line represents the actual value.
- Numbers: The predicted and actual results. Depending on the metric, these are displayed as a percentage or as a currency.
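The coverage formula (predicted / actual) from the examples above can be reproduced directly. A minimal sketch:

```python
def coverage(predicted, actual):
    """Coverage ratio as a percentage: predicted / actual * 100."""
    return round(100 * predicted / actual)

# Example 1: predicted $20.52 vs. actual $23.24 -> under-prediction
print(coverage(20.52, 23.24))  # 88
# Example 2: predicted $19.57 vs. actual $15.37 -> over-prediction
print(coverage(19.57, 15.37))  # 127
```

A value below 100% means the model under-predicted the metric; above 100% means it over-predicted.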
Bubble chart
The bubble chart shows the distribution of predicted vs actual KPIs.
- Data visualization options: Select the predicted vs. actual KPI, and then select the dimension to group it by.
  - Predicted KPIs: By default, the chart displays bubbles based on ARPU value. You can also display the bubble chart for the other available KPIs.
  - Dimension: By default, the chart is grouped by Media source. You can also group the bubble chart by other available dimensions.
- Bubbles: Each bubble represents a group ("cohort") of users.
- X and Y axes: Show the predicted (y-axis) and actual (x-axis) KPI values.
Results table
Use the table's grouping options, chart rows, and column headers to customize how the data is displayed.
Notes
- The bubble chart and results table show data only for cohorts with more than 100 users.
- The bubble chart and results table are available only when using the AF attribution model.
Putting it all together—a dashboard example
The following example demonstrates some of the capabilities of the Predict dashboard and how you can use these capabilities to efficiently analyze your data and take meaningful action.
Example
Let's say you are a UA manager responsible for running campaigns in the UK, and you want to look at the predicted quality of users being driven by the campaigns you started last week. The main KPI you use to evaluate campaign performance is ROAS.
- Using the filter bar, you set the date range to display the data for installs during the past 7 days, and you set the geo filter to display only data for the UK.
- In addition to your new campaigns, you have many ongoing campaigns. For the moment, you want to focus only on the campaigns you started running last week, so you use the campaign filter to select only these campaigns.
- You change the data visualization options to show the pROAS KPI. For now, you keep the other options set to the default values (Dimension = media source; Characteristics = campaign and media source).
- This means each row represents a media source and each bubble on the row represents a cohort of users attributed to the same media source and campaign.
- Even with the filters you've applied, there are lots of small bubbles that make it harder to analyze your data, so you change the minimum cohort size to 50. This allows you to focus only on the largest cohorts.
- You notice that there are several bubbles on the left side of the distribution, meaning that they have low pROAS in comparison to other cohorts.
- You decide that you want to concentrate now on only those cohorts with pROAS of less than 100%, so you slide the right endpoint of the value filter down to 100%.
- You want to understand if there are specific site IDs that are pulling down the pROAS, so you change the characteristics selected in the data visualization options. You remove the campaign characteristic and add site ID instead.
- After this change, each row still represents a media source, but now each bubble represents a cohort of users attributed to the same media source and site ID.
- With the revised view, you quickly see that it's just a few site IDs that have low pROAS. When you click on these bubbles, the KPIs in the bubble details panel show that these cohorts actually have decent pARPU. The low pROAS is caused primarily by the high cost of acquiring these users (eCPI).
- To further analyze all the data, you scroll down to the results table and set the grouping options to show media source > campaign > site ID. You download the table as a CSV file, to which you can apply your own pivot tables and BI analysis.
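The eCPI effect seen in this walkthrough, and the offline analysis of the downloaded CSV, can both be sketched with a few lines of stdlib Python. Because ROAS relates revenue per user to cost per user, two cohorts with similar pARPU can have very different pROAS when their eCPI differs. The column names and figures below are hypothetical, not the actual export format:

```python
import csv
from io import StringIO

# Hypothetical sample of the downloaded results table
csv_text = """media_source,site_id,p_arpu,ecpi
network_a,site_1,1.20,1.00
network_a,site_2,1.15,2.30
"""

for row in csv.DictReader(StringIO(csv_text)):
    p_arpu, ecpi = float(row["p_arpu"]), float(row["ecpi"])
    # Per-user pROAS: predicted revenue per user / effective cost per install
    p_roas = round(100 * p_arpu / ecpi)
    print(row["site_id"], f"pROAS={p_roas}%")
```

Here site_1 comes out at 120% pROAS, while site_2, with nearly the same pARPU but more than double the eCPI, comes out at 50%: the same user quality, dragged down purely by acquisition cost.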
Based on the insights you obtain from this analysis, you might take one or more of the following actions to optimize your new campaigns:
- Stop running the campaigns on low-performing media sources or lower the bids. (With lower bids, it might make sense to continue running the campaigns on these media sources.)
- Allocate budget to more productive media sources.
- Adjust campaign configuration on the low-performing media sources (for example, by targeting better-performing site IDs) to balance the quality of the users they provide with the cost of acquiring them.