Three Research Papers From 2021 That Highlight Key Data Science Trends

QuantumBlack, AI by McKinsey
Dec 23, 2021

Yousuf Mohamed-Ahmed, Data Scientist, Giulio Morina, Data Scientist, Daniel Herde, Data Scientist, QuantumBlack

2021 has been a transformational year for data science. With the initial model disarray caused by the pandemic’s upheaval in 2020 largely behind us, today’s organisations are even more focused on seizing the advanced analytics opportunity: 56% of companies featured in McKinsey’s recent research have adopted AI, up 6 percentage points from just 12 months ago.

Alongside this progress, a variety of fantastic research papers were published in 2021 that will help inform, articulate and develop ideas across the data science industry in the years ahead. Data science research is an incredibly broad area and, unsurprisingly, often highly complex, so it would take far too long to touch on the full range of concepts and findings published this year. Instead, we wanted to share three papers from 2021 covering research in areas we expect to develop significantly in the near future: reinforcement learning, reducing generalisation error via modified loss functions, and balancing ethical considerations that often compete in practice.

Optimal Stroke Learning with Policy Gradient Approach for Robotic Table Tennis by Yapeng Gao, Jonas Tebbe, and Andreas Zell

In this paper, the authors apply Reinforcement Learning (RL) to teach a robot to play table tennis, a game requiring very fine motor control just to return the ball. In a typical application, it would be far too costly and time-consuming to teach a model through real-world experience alone. A self-driving car cannot learn by driving down a motorway before it even understands steering, both for safety reasons and because machine learning models, particularly those trained with RL, suffer from poor sample efficiency: a model may need to experience millions of real-world examples in order to develop reliable skills.

As a result, most practitioners teach their model in a semi-realistic simulation before adapting it to the real world: a self-driving car that has learned to steer in simulation can transfer its skills to the real world far more easily than a car with no prior knowledge. Retraining the model is still non-trivial, as no simulation can account for all the real-world physical forces the model will have to understand.

This paper proceeds directly along this avenue, teaching the table tennis robot via an RL technique known as policy gradients, in which the model explicitly learns a ‘policy’ via gradient descent. A policy in RL is a function mapping the ‘state’ of the world to the action taken by the model. For example, in table tennis, the state might be the position of the ball and the action the direction in which the robot moves the bat. Having trained the model in simulation, the authors then retrained it on real-world data. In testing, the resulting robot returned 98% of the balls fired at it, within a distance of 24.9 cm of the desired return position. A minimal sketch of the core idea follows below.
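
To make the policy-gradient idea concrete, here is a minimal sketch of the classic REINFORCE update on a toy problem. The environment, reward function and all names here are our own illustrative assumptions; the paper’s actual training setup is considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
theta = np.zeros((n_states, n_actions))  # policy parameters: one logit per (state, action)

def policy(state):
    # Softmax policy: maps the observed state to a probability distribution over actions.
    logits = theta[state]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def reward(state, action):
    # Hypothetical environment: one 'correct' action per state earns a reward of 1.
    return 1.0 if action == state % n_actions else 0.0

lr = 0.5
for _ in range(2000):
    s = rng.integers(n_states)        # observe a state
    p = policy(s)
    a = rng.choice(n_actions, p=p)    # sample an action from the policy
    r = reward(s, a)
    # REINFORCE: for a softmax policy, the gradient of log pi(a|s)
    # with respect to the logits is one_hot(a) - p.
    grad_log_pi = -p
    grad_log_pi[a] += 1.0
    theta[s] += lr * r * grad_log_pi  # ascend the expected-reward gradient
```

After training, the policy concentrates probability on the rewarded action in each state, which is the essence of learning a state-to-action mapping by gradient ascent.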

Accurate simulations are difficult to come by, and real-world training is often infeasible. This paper offers a highly useful vision of the compromise between the two, and adapting simulation-trained models to real-world operation is likely to become the predominant approach as RL grows in popularity as a machine learning technique.

Sharpness-Aware Minimization for Efficiently Improving Generalization by Pierre Foret, Ariel Kleiner, Hossein Mobahi and Behnam Neyshabur — Presented at ICLR 2021

This paper explores one of the major problems in training machine learning models: overfitting, where, after training, the model parameters represent the training dataset well but do not generalise to novel input data. The problem is especially pronounced in modern overparameterised models, which have millions or billions of adjustable parameters.

Many approaches exist to address overfitting, such as activity regularisation, weight constraints, noise injection and early stopping. This paper adds a novel and interesting approach to the catalogue by optimising for both the loss value and the sharpness of the minimum. Simplified, the goal of sharpness-aware minimisation (SAM) is to find a set of model parameters lying in a flat region of parameter space, where the loss is low for all points in the surrounding neighbourhood, rather than in a sharp minimum, which is associated with overfitting to the training set. The authors demonstrate new state-of-the-art performance on a range of benchmark datasets, such as the CIFAR-100 image classification dataset.

The paper describes an exciting approach to improving model performance: SAM can simply replace the stochastic gradient descent optimisation procedure in existing models and yield improved performance with very limited additional hyperparameter tuning. SAM introduces a single additional hyperparameter, the radius of the neighbourhood over which sharpness is measured, for which the authors provide a robust default value. In addition, it offers a new mental model of what a “good” set of model parameters means.
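
As a rough illustration of SAM’s two-step update, here is a minimal NumPy sketch on a toy least-squares problem. The loss, the synthetic data and the function names are our own assumptions; real implementations wrap this same logic around a deep-learning optimiser.

```python
import numpy as np

def loss(w, X, y):
    # Stand-in least-squares loss; in practice this would be a network's training loss.
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)

def sam_step(w, X, y, lr=0.1, rho=0.05):
    # Step 1: approximate the worst-case perturbation within an L2 ball of
    # radius rho by scaling the gradient to the ball's surface.
    g = grad(w, X, y)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: compute the gradient at the perturbed point, but apply the
    # update to the original weights, steering them towards flat regions.
    g_sharp = grad(w + eps, X, y)
    return w - lr * g_sharp

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)
w = np.zeros(5)
for _ in range(200):
    w = sam_step(w, X, y)
print(loss(w, X, y))
```

Note that each SAM step costs two gradient evaluations instead of one, which is the price paid for measuring sharpness around the current parameters.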

Decision Making with Differential Privacy under a Fairness Lens by Cuong Tran, Ferdinando Fioretto, Pascal Van Hentenryck and Zhiyan Yao — Presented at IJCAI 2021

This paper examines the impact of releasing differentially private datasets, asking whether the noise introduced to guarantee differential privacy puts some groups at an unfair disadvantage. Specifically, it explores the trade-off between differential privacy and group fairness: the former focuses on protecting individuals, the latter on the equitable treatment of groups. Indeed, it has been shown that differential privacy can often introduce biases against specific subgroups of the population.

For instance, imagine a system that awards grants to schools based on their percentage of students from disadvantaged backgrounds. If we knew the exact number of children attending each school, we could simply divide the available resources proportionally. However, to protect the identities of children and their families, the data may be released with random noise added to satisfy differential privacy. The paper shows that proportional division of resources based on such noisy counts can disadvantage some schools disproportionately: the added noise affects all school districts, but it can lead to an overestimate of the resource allotment for small districts and an underestimate for larger ones.
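
A minimal sketch of the effect, with entirely made-up numbers: we release counts perturbed with Laplace noise (a standard differential-privacy mechanism), divide a budget proportionally, and compare against the allocation under the true counts.

```python
import numpy as np

rng = np.random.default_rng(0)

true_counts = np.array([40.0, 60.0, 400.0, 500.0])  # disadvantaged pupils per district
budget = 1_000_000.0
epsilon = 0.5                                       # privacy budget; smaller means noisier

def allocate(counts):
    # Proportional division of the budget based on (possibly noisy) counts.
    return budget * counts / counts.sum()

# Differentially private release: add Laplace noise with scale sensitivity/epsilon
# (each child changes a count by at most 1, so the sensitivity is 1).
noisy_counts = true_counts + rng.laplace(scale=1.0 / epsilon, size=len(true_counts))

relative_error = (allocate(noisy_counts) - allocate(true_counts)) / allocate(true_counts)
# The same absolute noise distorts the shares of small districts far more than
# those of large ones, which is the disparate impact the paper analyses.
print(relative_error)
```

Running this repeatedly shows the small districts’ allocations swinging by a much larger fraction than the large districts’, even though every count receives noise of the same magnitude.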

The authors formalise a mechanism by which the maximum unfairness to any one group resulting from a differentially private release can be bounded. For the resource allocation problem above, they show three different ways to achieve a fair decision while maintaining individual privacy. Two rely on extra information being available: either the underlying true data or the sum of the proportions of disadvantaged children. The third introduces a parameter that can be tuned to trade off between mitigating disparities and reducing allotment errors.

Although this paper focuses mostly on a specific type of problem, it is an important contribution to a slowly growing body of literature that aims to better characterise the relationship between privacy and fairness. As is often the case with these types of problems, there is no single, easy answer: the choice of which fairness measure to use and how to calibrate privacy remains in the hands of the model maker. Awareness of these potential issues is the most important step towards a fairer society, as problems related to fairness can be unintentionally overlooked.

We hope you find these papers as interesting and informative as we did. We’re eager to compile an easy-to-access list of insightful papers, so please do leave your own suggestions in the comments.

