Power Of Personalisation: Driving Customer Growth With Advanced Recommendation Engines

An example of the different levels of sophistication of recommenders used in the telco and banking industries.

Guiding Principles For Experimental Campaign Design

In our context, a treatment could be a campaign offering a specific product at a defined price, delivered in a specified form to a selected audience.
  • An experimental mindset — no matter your team’s experience, ideating new campaigns should be approached from a fresh standpoint and focus on asking what the end customer needs.
  • Validate your designs based on incremental effects — A/B tests by definition require a control group that will allow you to capture the difference between a new campaign and business-as-usual.
  • Be precise with the problem — identifying the key metric and the actual goal of the campaign should occur before the product and campaign ideation.
  • Every test ends with a clear next step — to capture value from experimenting with your campaigns, you need to be able to translate conclusions into actions.
  • Acknowledge that your customers differ — by focusing on a Conditional Average Treatment Effect (CATE), you integrate an appreciation that campaign effect depends on the unique characteristics of each person exposed to the tested treatment.
The key reason for running A/B pilots is the ability to derive a CATE for each customer and each treatment.
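As a minimal sketch, the incremental (average) effect from such a pilot is the difference in mean KPI outcomes between the target and control groups. The data below is simulated purely for illustration; the means, sample sizes and seed are assumptions, not figures from the article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pilot data: a KPI outcome (e.g. monthly spend) for
# customers in the control (business-as-usual) and treated groups.
control = rng.normal(loc=100.0, scale=10.0, size=5000)
treated = rng.normal(loc=103.0, scale=10.0, size=5000)

# Incremental effect of the campaign = average treatment effect (ATE):
# the difference a simple A/B test is designed to capture.
ate = treated.mean() - control.mean()
print(f"Estimated incremental effect: {ate:.2f}")
```

The CATE refines this idea by conditioning the same difference on customer characteristics, which is what the per-customer models in the next section estimate.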

Modeling Campaign Effects

To estimate the Conditional Average Treatment Effects, a machine learning (ML) model was trained for each treatment and each KPI of interest on the corresponding target and control populations. At the customer level, each model predicts the expected net change in the KPI attributable to being targeted by the treatment.

We’ve trained our ML models to identify and prioritise customers with highest CATE.
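One common way to estimate a per-customer CATE from A/B pilot data is a T-learner: fit one model on the treated arm, one on the control arm, and take the difference of their predictions. The article does not specify which meta-learner was used, so this is an illustrative sketch on simulated data (feature, effect sizes and model choice are all assumptions), using scikit-learn:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 4000

# Hypothetical pilot: one customer feature (e.g. tenure, scaled to [0, 1])
# drives both the baseline KPI and the size of the treatment effect.
X = rng.uniform(0, 1, size=(n, 1))
treat = rng.integers(0, 2, size=n)                 # random A/B assignment
y = 50 + 10 * X[:, 0] + treat * (5 * X[:, 0]) + rng.normal(0, 1, n)

# T-learner: one model per arm; CATE = difference of the two predictions.
m_treated = GradientBoostingRegressor().fit(X[treat == 1], y[treat == 1])
m_control = GradientBoostingRegressor().fit(X[treat == 0], y[treat == 0])
cate = m_treated.predict(X) - m_control.predict(X)

# Prioritise the customers with the highest predicted uplift.
top_customers = np.argsort(cate)[::-1][:100]
```

Here the true effect grows with the feature, so customers with high values of `X` should surface at the top of the ranking — the behaviour the prioritisation step relies on.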

Bringing Into Operation — Prioritisation Based On Predicted Treatment Effect At Scale

With customer-level models and KPI-level granularity, selecting the right campaign for each customer was not as simple as matching scored customers to the best predictions. Two layers of prioritisation were needed:

  • Prioritisation between KPIs: describing what business objectives take priority in campaigning (e.g. customer spend vs specific service/product adoption)
  • Prioritisation within KPIs: the direct output of the models. A thresholding mechanism was added to deprioritise treatments that have low expected uplift
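A minimal sketch of this two-layer selection logic; the KPI names, the score structure and the threshold value are hypothetical, chosen only to illustrate the mechanism:

```python
# KPI order encodes the business priority (between-KPI prioritisation);
# the threshold drops treatments with low expected uplift.
KPI_PRIORITY = ["churn_reduction", "revenue_uplift"]   # assumed ordering
UPLIFT_THRESHOLD = 0.5                                 # assumed cut-off

def select_campaign(scores):
    """Pick one treatment for a customer.

    scores: {kpi: {treatment: predicted_uplift}} for a single customer.
    """
    for kpi in KPI_PRIORITY:                  # between-KPI prioritisation
        candidates = {
            t: u for t, u in scores.get(kpi, {}).items()
            if u >= UPLIFT_THRESHOLD          # deprioritise weak treatments
        }
        if candidates:                        # within-KPI: best uplift wins
            return max(candidates, key=candidates.get)
    return None                               # no campaign worth sending

customer = {
    "churn_reduction": {"retention_offer": 0.3},   # below threshold
    "revenue_uplift": {"upsell_bundle": 1.2, "cross_sell": 0.8},
}
print(select_campaign(customer))  # upsell_bundle
```

Note how the threshold lets the engine fall through to a lower-priority KPI rather than send a treatment with negligible expected uplift.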

Driving Value With AI-Powered Personalisation

Maintaining experimentation-driven campaigning across the use case significantly increased the net effect of the campaigns. An initial 15% uplift was effectively doubled by implementing the model-based prioritisations. Depending on the priority KPIs, such engines can increase revenue per user, decrease churn, boost satisfaction and increase user engagement.

Impact seen at the client organisation: the effect identified in A/B testing is amplified through better AI-driven customer prioritisation.
The traditional approach to modeling (Ideation → Experimental Pilot → Model Training) has become the focal point of the overall process.

QuantumBlack, AI by McKinsey

An advanced analytics firm operating at the intersection of strategy, technology and design. www.quantumblack.com @quantumblack