Putting the value into LTV predictions

Over the past two years the mobile marketing community has been focused on finding a measurement solution for iOS under the new user-privacy reality.

To meet that demand, we at AppsFlyer set our sights on building what we believed that solution would be.

In early 2020, AppsFlyer’s Predictive analytics product was in its minimum viable product (MVP) state and was ready to be taken to the next level. However, Apple’s announcement of their upcoming user-privacy changes made us re-examine our original plans. 

The goal: a product capable of providing optimization insights in minimal time, based on very early engagement measurements, that would fit seamlessly into this new privacy-centric reality while fully meeting marketers’ campaign measurement and optimization needs.

We quickly decided to capitalize on the new product’s potential as a privacy-preserving measurement solution, make the necessary changes to fit the new SKAN framework, and give our ecosystem what it was looking for.

The product was delivering on its initial promise, producing predictive scores for key LTV pillars that could paint a picture of a user’s likely future value.

<img width="670" height="332" alt="AppsFlyer’s Predictive analytics MVP dashboard" class="wp-image-131946" src="https://oss.vtcmobile.vn/media/2022/04/13/6438d56ea1694862bf798c1605a4e118.png">
AppsFlyer’s Predictive analytics MVP dashboard

The three performance indicators we aimed to present were user engagement, retention, and monetization, since all three are likely to appear in the majority of app owners’ LTV logic.

With each element measured in a different format (days, frequency, revenue), we decided to represent predictive insights in the form of relative scores, rating each element on a scale of 1 to 9.
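Relative scoring of this kind can be pictured as a quantile-style ranking that puts days, frequency, and revenue on the same 1-9 scale. The sketch below is a hypothetical illustration, not AppsFlyer’s actual model:

```python
# Hypothetical sketch: normalize heterogeneous LTV metrics (days, frequency,
# revenue) onto a shared 1-9 relative scale using rank within the group.
def relative_scores(values, buckets=9):
    """Map raw metric values to 1..buckets by their rank within the group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    scores = [0] * len(values)
    for rank, i in enumerate(order):
        # spread ranks evenly across the 1..buckets range
        scores[i] = 1 + (rank * buckets) // len(values)
    return scores

# Revenue in dollars and retention in days land on the same relative scale:
print(relative_scores([0.0, 0.5, 1.2, 3.4, 0.0, 9.9, 2.2, 0.1, 5.0]))
print(relative_scores([1, 30, 7, 3, 14, 2, 1, 5, 9]))
```

Because the score is relative to the group rather than absolute, metrics with entirely different units become directly comparable.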

Back then, the product appeared capable of ticking all the necessary boxes ahead of the upcoming privacy changes:

  • Predictions did not rely on user identity and could be presented anonymously
  • Accurate predicted scores were delivered within 3 days of measurement
  • The predicted score reflected all measurable LTV events
<img width="670" height="725" alt="AppsFlyer’s Predictive analytics MVP scorecard" class="wp-image-131955" src="https://oss.vtcmobile.vn/media/2022/04/13/f85d85a6e5634cbcb3ad96bb6dc420a3.png">
AppsFlyer’s Predictive analytics MVP scorecard

Several adaptations have taken place since, in order to meet a few mission-critical SKAN requirements, among them cutting our required measurement time frame to 24 hours. By the time SKAN officially became active in the summer of 2021, we were ready to start testing the product in the field.

Testing the waters

A beta process is a traditional pre-launch phase meant to test a product in a safe limited capacity, ahead of its global availability. This helps make sure it meets its necessary requirements and delivers on the value we’ve promised. 

A beta phase is also a great opportunity to field test assumptions that were made during the development stage, flush out anything that doesn’t work, and even change specific elements based on design partners’ feedback.

A key element we aimed to test and get feedback on was the product’s new UI. The new dashboard we created and released once we went into beta was very different from any of AppsFlyer’s existing dashboards, which represented the big shift in perception we were making with PredictSK.

<img width="1701" height="865" alt="AppsFlyer’s predictive analytics beta dashboard" class="wp-image-131964" src="https://www.appsflyer.com/wp-content/uploads/2022/04/Predict-dashboard.gif">
AppsFlyer’s predictive analytics beta dashboard

This new take on data visualization was inspired by several data-science-oriented platforms we studied, which encouraged us to present our data differently, across multiple dimensions.

This highly UX-minded interface allows our users to compare different user cohorts as well as their distributions, taking into account the mean and variance of each distribution and making its outliers easier to recognize.
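The distribution comparison described here boils down to plain summary statistics. A minimal sketch, with an illustrative two-standard-deviation outlier threshold of our own choosing:

```python
# Illustrative sketch: compare user cohorts by the mean and variance of a
# metric's distribution, and flag outliers beyond 2 standard deviations.
import statistics

def summarize(cohort):
    mean = statistics.mean(cohort)
    stdev = statistics.pstdev(cohort)  # population standard deviation
    outliers = [v for v in cohort if stdev and abs(v - mean) > 2 * stdev]
    return {"mean": mean, "stdev": stdev, "outliers": outliers}

cohort_a = [1.0, 1.2, 0.9, 1.1, 1.0, 6.0]  # one heavy spender skews the tail
cohort_b = [2.0, 2.1, 1.9, 2.0, 2.2, 1.8]
print(summarize(cohort_a))
print(summarize(cohort_b))
```

Cohort B has the higher mean, but cohort A hides an outlier that a single headline number would mask, which is exactly why the means and variances are shown side by side.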

We were no longer dealing with specific user-level data, or a specific campaign with a linear form of data analysis — but user groups or clusters, for which different elements were measured, analyzed and ranked. 

We wanted to portray this complex image in the best, most user-friendly way.

Alongside the visual representation of data, we wanted to measure our customers’ ability to produce meaningful insights from the data we delivered. 

Optimizing based on predictive insights is something many app owners already know very well. However, receiving these insights in the form of relative KPI scores — was another thing entirely.

These scores would have to make sense for both advertisers and media partners in order for all sides to communicate and optimize properly.

Another key component we wished to test was the predictive engine’s ability to generate predictions at an accuracy level that would meet our expected thresholds.

This was obviously the product’s main promise. Without a high confidence level, any predictions we produced would be nothing more than educated guesses.

With so many AppsFlyer customers requesting to join the beta, we decided to introduce a qualification process: only app owners who met a very strict model accuracy requirement would be onboarded to the beta.

A selected group of customers also allowed us to offer dedicated training, customer care, and support to all beta participants, which helped streamline their onboarding process.

Others were offered feedback on the measurement improvements required to qualify for the next beta onboarding round.

Making a good thing even better

A successful beta is meant to be full of bug fixes, model adjustments, reprioritizations, and unexpected product feature requirements.

Which means our beta was very successful.

With our customers applying our predictive insights on SKAN for the first time, we gained priceless feedback on the product’s functionality, validating some of our initial assumptions while also challenging others — which is an essential step when developing a new product.

Transition into predictive value buckets

Going into the beta, we believed that the predictive scoring model would suffice for the time being. We had always considered predictive scores a necessary step toward more precise predicted values, but regarded that as a long-term goal rather than an early-stage necessity.

As we evaluated our advertisers’ ability to properly operate and optimize their campaigns with predictive scores, their feedback pointed to a need to rethink the type of insights we delivered.

The score-based output, while accurate and clear to us, was hard to translate into optimization actions in the way that we had expected.

With our customers in mind, we decided to reevaluate our predictive insights format and the way that it appears in the advertisers’ dashboard. 

The goal was to make our results easier to translate into actions, so we decided to shift to a model that presents our insights in more common industry terms.

The three predicted KPIs would still be delivered to SKAN. However, instead of being presented as relative scores, they would now answer the following questions:

  • How much money is the user expected to spend? 
  • Is it likely that this is a paying user? 
  • What is the predicted retention rate of this user?

Each question is answered using specific value ranges or buckets, and delivered to SKAN through the required conversion value of 0-63. The media source information provided in the SKAN postback helps us create a more complete picture of these KPIs when it’s time to display them on the advertiser dashboard.
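One way to picture how three bucketed KPIs can share a single 0-63 conversion value is bit-packing. The split below (3 bits for revenue, 1 for payer likelihood, 2 for retention) is a hypothetical illustration of the technique, not AppsFlyer’s actual mapping:

```python
# Hypothetical sketch: SKAN exposes one 6-bit conversion value (0-63), so
# three bucketed KPIs must share it. Here: 3 bits for a revenue bucket (0-7),
# 1 bit for "likely payer", and 2 bits for a retention bucket (0-3).
def encode_cv(revenue_bucket, likely_payer, retention_bucket):
    assert 0 <= revenue_bucket <= 7 and 0 <= retention_bucket <= 3
    return (revenue_bucket << 3) | (int(likely_payer) << 2) | retention_bucket

def decode_cv(cv):
    return {
        "revenue_bucket": cv >> 3,
        "likely_payer": bool((cv >> 2) & 1),
        "retention_bucket": cv & 0b11,
    }

cv = encode_cv(revenue_bucket=5, likely_payer=True, retention_bucket=2)
print(cv, decode_cv(cv))  # the postback carries only the integer cv
```

The advertiser-side dashboard can then decode each postback’s conversion value back into the three KPI buckets before aggregating by media source and campaign.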

The dashboard provides our advertisers with a predicted KPI data card for each user cohort, containing all relevant predicted performance indicators for a group of users delivered by a specific media source and campaign.

These aggregated KPIs are of course anonymous, with users in each cohort all sharing similar attribution details.

Transitioning our view to predictive values brought us a great step closer to the way our advertisers want to view our insights.

Predicted revenue will now be available through:

  • Predicted return on ad spend (ROAS) 
  • Predicted average revenue per user (ARPU)
  • Predicted percentage of paying users 

The predicted retention rates display the likelihood that users will remain active in the app on day 3, 7, or 30, based on the advertiser’s request.
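Given a cohort’s predicted per-user revenue and its ad spend, the revenue KPIs above reduce to simple aggregates. This sketch uses hypothetical field names of our own:

```python
# Illustrative aggregation: derive predicted ROAS, ARPU, and the share of
# paying users for a cohort from per-user predicted revenue and ad spend.
def cohort_kpis(predicted_revenues, ad_spend):
    users = len(predicted_revenues)
    total = sum(predicted_revenues)
    return {
        "p_roas": total / ad_spend,   # predicted return on ad spend
        "p_arpu": total / users,      # predicted average revenue per user
        "p_paying_pct": 100.0 * sum(r > 0 for r in predicted_revenues) / users,
    }

# A five-user cohort from one campaign, with $10 spent to acquire it:
print(cohort_kpis([0.0, 4.0, 0.0, 6.0, 10.0], ad_spend=10.0))
```

All three numbers are computed over the cohort as a whole, so no individual user’s predicted revenue ever needs to be exposed.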

<img width="670" height="428" alt="AppsFlyer’s Predictive analytics cohort KPI details" class="wp-image-132407" src="https://oss.vtcmobile.vn/media/2022/04/13/08098be72195415ea88a57294cfb206a.png">
AppsFlyer’s Predictive analytics cohort KPI details

This evolution in predictive insights alone could be considered the biggest achievement of our beta, but we didn’t stop there. 

Partner postbacks

When planning the product’s framework for media partner communication, we were debating between two options:

  1. Creating a custom schema that provides ad networks with the conversion value mapping translated into PredictSK scores, using the product’s unique terminology.
  2. Utilizing the existing schema that AppsFlyer’s SKAN team had already built, using events to represent PredictSK scores.

Operating in beta mode offered the opportunity to have an open discussion with our partners to get their perspective and feedback. For example, using a custom API would provide a faster, more tailor-made solution that delivers our pLTV insights directly to the media partners in their original form.

However, opting for AppsFlyer’s existing API would mean working under an existing setup many of our media partners are already familiar with, one that requires minimal modifications to our model.

With our advertisers’ and media partners’ best interests in mind, we eventually opted to rely on AppsFlyer’s existing SKAN API.

Since the majority of partners are understandably not yet confident enough working with SKAN, we wanted to keep things simple and avoid introducing a new reporting API into an already complicated workflow.
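Under an event-based schema like the one described above, score-style insights can travel as synthetic event names that the existing pipeline already knows how to carry. The naming convention and helper below are entirely hypothetical:

```python
# Hypothetical sketch: represent a PredictSK-style pillar score as an event
# name, so an existing event-based SKAN schema can carry it unchanged.
PILLARS = {"engagement", "retention", "monetization"}

def score_event(pillar, score):
    assert pillar in PILLARS and 1 <= score <= 9
    return f"predict_{pillar}_score_{score}"

# Two insights reported as ordinary events through the existing schema:
print([score_event("monetization", 7), score_event("retention", 3)])
```

The trade-off matches the one described in the text: partners parse familiar event payloads instead of learning a new API, at the cost of the insights arriving in a translated rather than native form.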

This process allows both our advertisers and their media partners to have a smoother optimization experience, while we keep exploring ways to make our pLTV insights a main optimization driver.

<img width="1920" height="925" alt="" class="wp-image-132156" src="https://oss.vtcmobile.vn/media/2022/04/13/4b191a73d9824da6ada05edab5674b1f.gif">
PredictSK V2 dashboard

Looking (fast) forward

As mentioned above, the next point in our journey is the expected PredictSK GA, in which the product will officially become widely available to qualified AppsFlyer customers.

But work doesn’t stop there, as we aim to introduce additional product improvements, features, and capabilities that will make AppsFlyer’s predictive analytics experience even better, and Apple’s SKAN more navigation-friendly.