Olivier Jeunen, ShareChat, Edinburgh, United Kingdom (jeunen@sharechat.co) and Aleksei Ustimenko, ShareChat, London, United Kingdom (aleksei.ustimenko@sharechat.co)
(2024)
Abstract.
Online controlled experiments are a crucial tool to allow for confident decision-making in technology companies. A North Star metric is defined (such as long-term revenue or user retention), and system variants that statistically significantly improve on this metric in an A/B-test can be considered superior. North Star metrics are typically delayed and insensitive. As a result, the cost of experimentation is high: experiments need to run for a long time, and even then, type-II errors (i.e. false negatives) are prevalent.
We propose to tackle this by learning metrics from short-term signals that directly maximise the statistical power they harness with respect to the North Star. We show that existing approaches are prone to overfitting, in that higher average metric sensitivity does not imply improved type-II errors, and propose to instead minimise the $p$-values a metric would have produced on a log of past experiments. We collect such datasets from two social media applications with over 160 million Monthly Active Users each, totalling over 153 A/B-pairs. Empirical results show that we are able to increase statistical power by up to 78% when using our learnt metrics stand-alone, and by up to 210% when used in tandem with the North Star. Alternatively, we can obtain constant statistical power at a sample size that is down to 12% of what the North Star requires, significantly reducing the cost of experimentation.
A/B-Testing; Evaluation Metrics; Statistical Power
journalyear: 2024; copyright: acmlicensed; conference: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 25-29, 2024, Barcelona, Spain; booktitle: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24), August 25-29, 2024, Barcelona, Spain; doi: 10.1145/3637528.3671512; isbn: 979-8-4007-0490-1/24/08; ccs: General and reference, Experimentation; ccs: Mathematics of computing, Hypothesis testing and confidence interval computation; ccs: Computing methodologies, Machine learning

1. Introduction & Motivation
Modern platforms on the web need to continuously make decisions about their product and user experience, which are often central to the business at hand. These decisions range from design and interface choices to back-end technology adoption and machine learning models that power personalisation. Online controlled experiments, the modern web-based extension of Randomised Controlled Trials (RCTs) (Rubin, 1974), provide an effective tool to allow for confident decision-making in this context (Kohavi et al., 2020), bar some common pitfalls (Kohavi et al., 2022; Jeunen, 2023).
A North Star metric is adopted, such as long-term revenue or user retention, and system variants that statistically significantly improve the North Star metric are considered superior to the tested alternative (Deng and Shi, 2016). Proper use of statistical hypothesis testing tools, such as Welch's $t$-test (Welch, 1947), then allows us to define and measure statistical significance in a mathematically rigorous manner.
However effective this procedure is, it is far from efficient. Indeed, experiments typically need to run for a long time, and statistically significant changes to the North Star are scarce. This can either be due to false negatives (i.e. type-II error), or simply because the North Star is not moved by short-term experiments. In these cases, we need to resort to second-tier metrics (e.g. various types of user engagement signals) to make decisions instead. These problems are common in industry, as evidenced by a wide breadth of related work. A first line of research leverages control variates to reduce the variance of the North Star metric, directly reducing type-II errors by increasing sensitivity (Deng et al., 2013; Xie and Aurisset, 2016; Budylin et al., 2018; Poyarkov et al., 2016; Guo et al., 2021; Baweja et al., 2024). Another focuses on identifying second-tier "proxy" or "surrogate" metrics that are promising to consider instead of the North Star (Wang et al., 2022; Richardson et al., 2023; Tripuraneni et al., 2023), or to predict long-term effects from short-term data (Athey et al., 2019; Tang et al., 2022; Goffrier et al., 2023). Finally, several works learn metric combinations that maximise sensitivity (Deng and Shi, 2016; Kharitonov et al., 2017; Tripuraneni et al., 2023).
This paper synthesises, generalises and extends several of the aforementioned works into a general framework to learn A/B-testing metrics that maximise the statistical power they harness. We specifically extend the work of Kharitonov et al. (2017) to applications beyond web search, where the North Star can be delayed and insensitive. We highlight how their approach of maximising the average $z$-score does not accurately reflect downstream metric utility in our case, in that it does not penalise disagreement with the North Star sufficiently (i.e. type-III/S errors (Mosteller, 1948; Kaiser, 1960; Gelman and Carlin, 2014; Urbano et al., 2019)). Indeed: whilst this approach maximises the mean $z$-score, it does not necessarily improve the median $z$-score, and does not lead to improved statistical power in the form of reduced type-II error as a result.
Alternatively, optimising the learnt metric to minimise $p$-values (either directly or after applying a log-transformation) more equitably distributes gains over multiple experiments, leading to more statistically significant results instead of a few extremely significant results. Furthermore, we emphasise that learnt metrics are not meant to replace existing metrics, but rather to complement them. As such, their evaluation should be done through multiple hypothesis testing (with appropriate corrections (Shaffer, 1995)), assessing whether any of the North Star, available vetted proxies and surrogates, or learnt metrics are statistically significant under the considered treatment variant. We can then either adopt a conservative plug-in Bonferroni correction to temper type-I errors, or analyse synthetic A/A experiments to ensure the final procedure matches the expected confidence level.
We empirically validate these insights through two datasets of past logged A/B-results from large-scale short-video platforms with over 160 million monthly active users each: ShareChat and Moj. Experimental results highlight that our learnt metrics provide significant value to the business: learnt metrics can increase statistical power by up to 78% over the North Star, and up to 210% when used in tandem with it. Alternatively, if we wish to retain the statistical power we obtain under the North Star, we can do so with as little as 12% of the original required sample size. This significantly reduces the cost of online experimentation to the business. Our learnt metrics are currently used for confident, high-velocity decision-making across ShareChat and Moj business units.
2. Background & Problem Setting
We deal with online controlled experiments, where two system variants $A$ and $B$ are each deployed to a properly randomised sub-population of users, adhering to best practices (Kohavi et al., 2020; Jeunen, 2023).
For every system variant, for every experiment, we measure various metrics that describe how users interact with the platform. These metrics include types of implicit engagement (e.g. video-plays and watch time), explicit engagement (e.g. likes and shares), and longer-term retention or revenue signals. For each metric, we log empirical means, variances and covariances (of the sample mean). For metrics $m_i, m_j$ with $i, j \in \{1, \ldots, d\}$, that is:
(1) $\hat{\mu}_i = \frac{1}{n} \sum_{u=1}^{n} m_{i,u}, \qquad \hat{\sigma}_i^2 = \widehat{\operatorname{Var}}\left(\hat{\mu}_i\right), \qquad \hat{\sigma}_{i,j} = \widehat{\operatorname{Cov}}\left(\hat{\mu}_i, \hat{\mu}_j\right),$ where $m_{i,u}$ denotes metric $i$ measured for user $u$ out of $n$.
Superscripts denote measurements pertaining to different variants in an experiment: e.g. $\hat{\mu}_i^A$ and $\hat{\sigma}_i^{2,B}$.
2.1. Statistical Significance Testing
We want to assess whether the mean of metric $m_i$ is statistically significantly higher under variant $A$ compared to variant $B$. To this end, we define a significance level $\alpha$ (often $\alpha = 0.05$), corresponding to the false-positive rate we deem acceptable. Then, we apply Welch's $t$-test. The test statistic (also known as the $z$-score) for metric $m_i$ and the given variants is given by:
(2) $z_i^{A \succ B} = \frac{\hat{\mu}_i^A - \hat{\mu}_i^B}{\sqrt{\hat{\sigma}_i^{2,A} + \hat{\sigma}_i^{2,B}}}$
We then transform this to a $p$-value for a two-tailed test as:
(3) $p_i^{A \neq B} = 2 \left( 1 - \Phi\left( \left| z_i^{A \succ B} \right| \right) \right)$
Here, $\succ$ denotes a partial ordering between variants, implying that $A$ is preferred over $B$. $\Phi$ represents the cumulative distribution function (CDF) for a standard Gaussian. For completeness, this CDF is given by:
(4) $\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-\frac{t^2}{2}} \, \mathrm{d}t$
When $p_i^{A \neq B} \leq \alpha$, we can confidently reject the null hypothesis that $A$ and $B$ are equivalent w.r.t. the mean of metric $m_i$. Note that $z$-scores are signed, whereas two-tailed $p$-values are not. Indeed: relabelling the variants changes the sign of the $z$-score but not the $p$-value, which leaves room for faulty conclusions of directionality, known as type-III errors (Mosteller, 1948; Kaiser, 1960; Urbano et al., 2019) or sign errors (Gelman and Carlin, 2014). We discuss these phenomena in detail, further in this article.
A one-tailed $p$-value for the one-tailed null hypothesis $H_0\!: B \succeq A$ is given by $p_i^{A \succ B} = 1 - \Phi\left(z_i^{A \succ B}\right)$, and rejected when $p_i^{A \succ B} \leq \alpha$. Throughout, we use two-tailed $p$-values unless mentioned otherwise.
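The test statistic and $p$-value transformations above can be sketched with Python's standard library. The summary statistics below are hypothetical stand-ins for logged per-variant means and sample-mean variances:

```python
import math

def welch_z(mean_a, var_a, mean_b, var_b):
    """z-score for the preference A over B (Eq. 2), from the per-variant
    sample means and variances of the sample means."""
    return (mean_a - mean_b) / math.sqrt(var_a + var_b)

def std_normal_cdf(z):
    """Standard Gaussian CDF, Phi (Eq. 4)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_tailed_p(z):
    """Two-tailed p-value (Eq. 3)."""
    return 2.0 * (1.0 - std_normal_cdf(abs(z)))

def one_tailed_p(z):
    """One-tailed p-value for H0: B preferred-or-equal to A."""
    return 1.0 - std_normal_cdf(z)

# Hypothetical logged statistics for a single metric under variants A and B:
z = welch_z(mean_a=10.4, var_a=0.01, mean_b=10.1, var_b=0.01)
p = two_tailed_p(z)  # significant at alpha = 0.05, not at alpha = 0.01
```

Note that for a positive $z$-score, the one-tailed $p$-value is exactly half the two-tailed one.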
2.1.1. $p$-value corrections
The above procedure is valid for a single metric, a single hypothesis, and, importantly, a single decision. Nevertheless, this is not how experiments run in practice. Without explicit corrections on the $p$-values (or corresponding $z$-scores), violations of these assumptions lead to inflated false-positive rates. We consider two common cases: a (conservative) multiple testing correction when an experiment has several treatments, and a sequential testing correction when experiments have no predetermined end-date or sample size at which to conclude. These corrections are applied as experiment-level corrections, to ensure that for any metric and variants $(A, B)$, the obtained $p$-values accurately reflect what they should reflect, yielding the specified coverage at varying confidence levels $1 - \alpha$.
Multiple comparisons
Often, launched experiments will have multiple treatments deployed, leading to the infamous "multiple hypothesis testing" problem (Shaffer, 1995). We apply a Bonferroni correction to deal with this. When there are $k$ treatments, we consider a treatment to be statistically significantly different from control when the two-tailed $p \leq \frac{\alpha}{k}$, instead of the original $\alpha$ threshold.
We can equivalently apply this correction on $z$-scores instead, allowing us to directly compare $z$-scores across experiments with varying numbers of treatments. Recall that the percentile point function $\Phi^{-1}$ is the inverse of the CDF. We obtain a one-tailed $p$-value as $p = 1 - \Phi(z)$, and we reject the one-tailed null hypothesis when $p \leq \alpha$. Now, instead, we reject when $p \leq \frac{\alpha}{k}$. As such, computing corrected $z$-scores as $z_{\mathrm{corr.}} = \Phi^{-1}\left(1 - k\left(1 - \Phi(z)\right)\right)$ controls type-I errors effectively.
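This map-to-$p$, inflate-by-$k$, map-back-to-$z$ trick can be sketched using the standard library's `NormalDist` for $\Phi$ and $\Phi^{-1}$; the clamp away from $p = 1$ is our own guard to keep the inverse CDF finite and is not part of the correction itself:

```python
from statistics import NormalDist

_PHI = NormalDist()  # standard Gaussian

def bonferroni_corrected_z(z: float, k: int) -> float:
    """Bonferroni-correct a z-score for k treatments by inflating its
    one-tailed p-value by k and mapping back through the percentile
    point function, making z-scores comparable across experiments."""
    p_one_tailed = 1.0 - _PHI.cdf(z)
    p_corrected = min(k * p_one_tailed, 1.0 - 1e-12)  # keep inv_cdf finite
    return _PHI.inv_cdf(1.0 - p_corrected)
```

With $k = 1$ the correction is the identity; with $k > 1$ the corrected $z$-score shrinks, reflecting the stricter per-treatment threshold.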
Always-Valid-Inference and peeking
A statistical test should only be performed once, at the end of an experiment. When the treatment effect is large, this implies we may have been able to conclude the experiment earlier. To this end, sequential hypothesis tests have been proposed in the literature (Wald, 1945). Modern versions make use of Always-Valid-Inference (AVI) (Howard et al., 2021) to allow for continuous peeking at intermediate results and making decisions based on them, whilst controlling type-I errors. Here, analogously, we can apply a correction on the $z$-scores as follows:
(5) $z_{\mathrm{corr.}}^{A \succ B} = z^{A \succ B} \cdot \Phi^{-1}(1 - \alpha) \Bigg/ \sqrt{\frac{n + \rho}{n} \log\left(\frac{n + \rho}{\rho\, \alpha^2}\right)}$
where $n$ is the total number of samples over both variants combined, and $\rho > 0$ is a hyper-parameter of the Gaussian mixture boundary. For a detailed motivation, see Schmit and Miller (2022).
These corrections are applied on a per-experiment level, both in the objective functions of methods introduced in the following Sections and when evaluating the metrics that they produce.
2.2. Learning Metrics that Maximise Sensitivity
The observation that we can learn parameters to maximise statistical sensitivity is not new. Yue et al. (2010) apply such ideas specifically for interleaving experiments in web search. Kharitonov et al. (2017) extend this to A/B-testing in web search, aiming to learn combinations of metrics that maximise the average $z$-score. Deng and Shi (2016) discuss "lessons learned" from applying similar techniques. We introduce the approach presented by Kharitonov et al., as our proposed improvements build on their foundations.
We consider new metrics $M_{\boldsymbol{\theta}}$ as linear transformations of the original metrics $\boldsymbol{m} = (m_1, \ldots, m_d)$:
(6) $M_{\boldsymbol{\theta}} = \boldsymbol{\theta}^\top \boldsymbol{m} = \sum_{i=1}^{d} \theta_i m_i$
The advantage of restricting ourselves to linearity is that we can write out the $z$-score of the new metric as a function of its weights:
(7) $z_{M_{\boldsymbol{\theta}}}^{A \succ B} = \frac{\boldsymbol{\theta}^\top \left( \hat{\boldsymbol{\mu}}^A - \hat{\boldsymbol{\mu}}^B \right)}{\sqrt{\boldsymbol{\theta}^\top \left( \widehat{\Sigma}^A + \widehat{\Sigma}^B \right) \boldsymbol{\theta}}}$, where $\widehat{\Sigma}$ collects the (co-)variances $\hat{\sigma}_{i,j}$ of the sample means.
These $z$-scores can be used exactly as before to obtain $p$-values. An intuitive property of the $z$-score is that a relative $z$-score of $\frac{z_M}{z_m} = c$ implies that $M$ requires a factor $c^2$ fewer samples to achieve the same significance level as $m$ (Chapelle et al., 2012). This can directly be translated to the cost of experimentation, as it allows us to run experiments for shorter time periods or on smaller sub-populations.
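Eq. 7 is straightforward to compute from logged summary statistics. The numbers below are hypothetical; the second call illustrates the scale-freeness of the $z$-score in the weights:

```python
import numpy as np

def combined_z(theta, mu_a, mu_b, cov_a, cov_b):
    """z-score of the linear metric theta^T m (Eq. 7), from per-variant
    sample-mean estimates and their covariance matrices."""
    delta = theta @ (mu_a - mu_b)
    scale = np.sqrt(theta @ (cov_a + cov_b) @ theta)
    return delta / scale

# Hypothetical two-metric example:
theta = np.array([0.8, 0.2])
mu_a, mu_b = np.array([1.2, 3.1]), np.array([1.0, 3.0])
cov = np.diag([0.01, 0.04])  # covariance of the sample means, per variant
z = combined_z(theta, mu_a, mu_b, cov, cov)
```

Rescaling `theta` leaves the $z$-score unchanged, which is exactly why only the weight direction matters in what follows.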
As such, it comes naturally to frame the objective as learning the weights $\boldsymbol{\theta}$ that maximise the $z$-score on the training data. This training data consists of a set of experiments with pairs of variants $(A, B)$. We consider three distinct relations between pairs of deployed system variants:
- (1) Known outcomes: $\mathcal{D}_{\succ} = \{(A, B)\}$, where $A \succ B$,
- (2) Unknown outcomes: $\mathcal{D}_{\perp} = \{(A, B)\}$, where $A \perp B$,
- (3) A/A outcomes: $\mathcal{D}_{=} = \{(A, A^\prime)\}$, where $A = A^\prime$.
Here, $A \succ B$ implies that there is a known and vetted preference of variant $A$ over $B$, typically because the North Star or other guardrail metrics showed statistically significant improvements. These experiments are further validated by replicating outcomes, observing long-term holdouts, or because the experiment was part of an intentional degradation test. We denote inconclusive experiments as $A \perp B$, implying statistically insignificant outcomes on the North Star. In rare cases, the North Star might have gone up at the expense of important guardrail metrics, rendering conclusions ambiguous. We only include experiments in the inconclusive set for which we have a very strong intuition that something changed (and we "know" the null hypothesis should be rejected), but we are unable to make a confident directionality decision. This ensures that we can use this set to truly measure type-II errors. Finally, $A = A^\prime$ represents A/A experiments, where we know the null hypothesis to hold true (by design). The first set of experiments is used to measure type-III/S errors. Known and unknown outcomes are used to measure type-II errors, and A/A experiments can inform us about type-I errors. This dataset of past A/B experiments is collected and labelled by hand, from natural experiments occurring on the platform over time.
2.2.1. Optimising Metric Weights with a Geometric Heuristic
Note that $z_{M_{\boldsymbol{\theta}}}$ as a function of $\boldsymbol{\theta}$ is scale-free. That is, the direction of the weight vector matters, but its scale does not. As Kharitonov et al. (2017) write, we can compute the optimal direction of $\boldsymbol{\theta}$ using the method of Lagrange multipliers, to obtain:
(8) $\boldsymbol{\theta}^\star \propto \left( \widehat{\Sigma}^A + \widehat{\Sigma}^B + \lambda I \right)^{-1} \left( \hat{\boldsymbol{\mu}}^A - \hat{\boldsymbol{\mu}}^B \right)$
Here, $\lambda$ is a small number to ensure that the matrix to be inverted is not singular. Kharitonov et al. fix this value at a small constant and never adjust it throughout the paper. We wish to highlight that this technique is known as Ledoit-Wolf shrinkage (Ledoit and Wolf, 2004, 2020), and that it can have substantial influence on the obtained direction. Indeed: it acts as a regularisation term pushing the weights closer to $\hat{\boldsymbol{\mu}}^A - \hat{\boldsymbol{\mu}}^B$. This can be seen by observing that as $\lambda \to \infty$, the inverse becomes $\frac{1}{\lambda} I$, and the solution hence becomes $\boldsymbol{\theta}^\star \propto \frac{1}{\lambda} \left( \hat{\boldsymbol{\mu}}^A - \hat{\boldsymbol{\mu}}^B \right)$. As we only care about the direction, we can ignore the $\frac{1}{\lambda}$ factor. To ensure fair comparison, we fix $\lambda$ to the same value. Exploring the effects of Ledoit-Wolf shrinkage as a regularisation technique where $\lambda$ is a hyper-parameter gives an interesting avenue for future work.
In order to include observations from multiple experiments into a single set of learnt weights, they propose to compute the optimal direction per experiment, normalise, and average the weights:
(9) $\boldsymbol{\theta}^\star = \frac{1}{|\mathcal{D}_{\succ}|} \sum_{(A, B) \in \mathcal{D}_{\succ}} \frac{\boldsymbol{\theta}^\star_{(A, B)}}{\left\lVert \boldsymbol{\theta}^\star_{(A, B)} \right\rVert}$
Whilst this procedure provides no guarantees about the sensitivity of the obtained metric on the overall set of experiments, it is efficient to compute and provides a strong baseline method.
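A minimal sketch of the per-experiment closed-form direction (Eq. 8) and the normalise-and-average heuristic (Eq. 9), with `lam` playing the role of the small shrinkage constant:

```python
import numpy as np

def optimal_direction(delta_mu, cov_sum, lam=1e-6):
    """Closed-form direction maximising Eq. 7 for a single experiment
    (Eq. 8), with a small shrinkage term lam on the pooled covariance."""
    d = delta_mu.shape[0]
    theta = np.linalg.solve(cov_sum + lam * np.eye(d), delta_mu)
    return theta / np.linalg.norm(theta)  # only the direction matters

def averaged_direction(experiments, lam=1e-6):
    """Average of normalised per-experiment directions (Eq. 9).
    `experiments` is a list of (delta_mu, cov_sum) pairs."""
    dirs = [optimal_direction(dm, cs, lam) for dm, cs in experiments]
    return np.mean(dirs, axis=0)
```

With an identity pooled covariance, the optimal direction is simply the normalised mean difference, which makes the shrinkage limit discussed above easy to verify.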
2.2.2. Optimising Metric Weights via Gradient Ascent on $z$-scores
A more principled approach is to cast the above as an optimisation problem. The objective function for this optimisation problem consists of three parts. First, we wish to maximise the $z$-score for all variant pairs with known outcomes:
(10) $\mathcal{L}_{\succ}(\boldsymbol{\theta}) = \sum_{(A, B) \in \mathcal{D}_{\succ}} z_{M_{\boldsymbol{\theta}}}^{A \succ B}$
Second, we wish to maximise the absolute $z$-score for all variant pairs with inconclusive outcomes under the North Star:
(11) $\mathcal{L}_{\perp}(\boldsymbol{\theta}) = \sum_{(A, B) \in \mathcal{D}_{\perp}} \left| z_{M_{\boldsymbol{\theta}}}^{A \succ B} \right|$
Third, we wish to minimise the absolute $z$-score for all variant pairs that are equivalent (i.e. A/A-pairs):
(12) $\mathcal{L}_{=}(\boldsymbol{\theta}) = \sum_{(A, A^\prime) \in \mathcal{D}_{=}} \left| z_{M_{\boldsymbol{\theta}}}^{A \succ A^\prime} \right|$
This gives rise to a combined objective as a weighted average:
(13) $\mathcal{L}(\boldsymbol{\theta}) = \mathcal{L}_{\succ}(\boldsymbol{\theta}) + \beta_{\perp} \mathcal{L}_{\perp}(\boldsymbol{\theta}) - \beta_{=} \mathcal{L}_{=}(\boldsymbol{\theta})$, with weights $\beta_{\perp}, \beta_{=} \geq 0$.
Kharitonov et al. (2017) demonstrate that, for a variety of different metrics in a web search engine, these approaches can exhibit improved sensitivity. We apply this method to learn instantaneously available proxies to a delayed North Star metric in general scenarios, and propose several extensions, detailed in the following Sections.
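A sketch of gradient ascent on the known-outcome part of the objective (Eq. 10), using the analytical gradient of Eq. 7 rather than an autodiff framework. The per-step renormalisation merely exploits the scale-freeness of the objective and is our own simplification, not part of the original formulation:

```python
import numpy as np

def z_and_grad(theta, delta_mu, cov_sum):
    """z-score (Eq. 7) for one experiment, and its gradient w.r.t. theta."""
    s2 = theta @ cov_sum @ theta
    s = np.sqrt(s2)
    num = theta @ delta_mu
    z = num / s
    grad = delta_mu / s - num * (cov_sum @ theta) / (s2 * s)
    return z, grad

def maximise_mean_z(experiments, steps=500, lr=0.1):
    """Gradient ascent on the summed z-score over known-outcome pairs
    (Eq. 10). `experiments` is a list of (delta_mu, cov_sum) pairs."""
    d = experiments[0][0].shape[0]
    theta = np.ones(d)
    for _ in range(steps):
        grad = sum(z_and_grad(theta, dm, cs)[1] for dm, cs in experiments)
        theta = theta + lr * grad
        theta = theta / np.linalg.norm(theta)  # scale-free: renormalise
    return theta
```

On a single experiment with identity pooled covariance, this recovers the closed-form direction of Eq. 8, which provides a convenient sanity check.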
3. Methodology & Contributions
3.1. Learning Metrics that Maximise Power
When directly optimising $z$-scores, an implicit assumption is made that the utility we obtain from increased $z$-scores is linear. This is seldom a truthful characterisation of reality, considering how we wish to actually use these metrics downstream. We provide a toy example in Table 1, reporting $z$-scores and one-tailed $p$-values for two experiments and three possible metrics $M_1, M_2, M_3$, inspired by real data. In this toy example, we know that $A \succ B$, based on a hypothetical North Star metric. Nevertheless, as we do not know this beforehand, we typically test for the null hypothesis $A = B$ with two-tailed $p$-values. In the table, this means that the outcome is statistically significant if the reported one-tailed values are $\leq \frac{\alpha}{2}$ or $\geq 1 - \frac{\alpha}{2}$. We report the power of every metric at varying significance levels $\alpha$, reporting whether (i) the null hypothesis is correctly rejected ($p \leq \frac{\alpha}{2}$, ✓), (ii) the outcome is inconclusive (?), i.e. a type-II error, or (iii) the null hypothesis is rejected, but for the wrong reason ($p \geq 1 - \frac{\alpha}{2}$, ✗).
This latter case is deeply problematic, as it signifies disagreement with the North Star. Such errors have been described as type-III or type-S errors in the statistical literature (Mosteller, 1948; Kaiser, 1960; Gelman and Carlin, 2014; Urbano et al., 2019). Naturally, we would rather have a metric that fails to reject the null than one that confidently declares a faulty variant to be superior. Indeed, Deng and Shi (2016) argue that both directionality and sensitivity are desirable attributes for any metric. Nevertheless, considering the candidate metrics in Table 1, type-III errors are not sufficiently penalised by the average $z$-score: metric $M_3$ maximises this objective despite yielding statistical power that is on par with a coin flip.
Directly maximising power might prove cumbersome, as it is essentially a discrete step-function w.r.t. the $z$-score, dependent on the significance level $\alpha$. Instead, it comes natural to minimise the one-tailed $p$-value reported in Table 1. Indeed, the $p$-value transformation models diminishing returns for high $z$-scores, which allows type-III errors to be sufficiently penalised. When considering this objective, $M_3$ is clearly suboptimal whilst $M_1$ is preferred.
Note that the change in objective would not affect the geometric heuristic as described in Section 2.2.1. As we simply apply a monotonic transformation on the $z$-scores, the weight direction that maximises the $z$-score equivalently minimises its $p$-value. When learning via gradient descent, however, the $p$-value transformation affects how we aggregate and attribute gains over different input samples. This allows us to stop focusing on increasing sensitivity for experiments that are already "sensitive enough", and more equitably consider all experiments in the training data.
|  | $z$-score | $p$-value | Power ($\alpha_1$) | Power ($\alpha_2$) |
|---|---|---|---|---|
| $M_1$, Exp. 1 |  |  | ✓ | ? |
| $M_1$, Exp. 2 |  |  | ✓ | ? |
| $M_1$, Mean |  |  | 100% | 0% |
| $M_2$, Exp. 1 |  |  | ? | ? |
| $M_2$, Exp. 2 |  |  | ✓ | ✓ |
| $M_2$, Mean |  |  | 50% | 50% |
| $M_3$, Exp. 1 |  |  | ✗ | ✗ |
| $M_3$, Exp. 2 |  |  | ✓ | ✓ |
| $M_3$, Mean | 5.29 |  | 50% | 50% |
This change in objective provides an intuitive and efficient extension to existing approaches, allowing us to directly optimise the confidence with which we correctly reject the null hypothesis. For known outcomes, the loss function is given by:
(14) $\mathcal{L}_{\succ}(\boldsymbol{\theta}) = \sum_{(A, B) \in \mathcal{D}_{\succ}} p_{M_{\boldsymbol{\theta}}}^{A \succ B}$, to be minimised,
and analogously extended to unknown and A/A-outcomes $\mathcal{D}_{\perp}$ and $\mathcal{D}_{=}$. Nevertheless, we wish to point out that we only want to maximise $p$-values for A/A-outcomes if type-I error becomes problematic. As we will show empirically in Section 4.3, this is not a problem we encounter. For this reason, we set $\beta_{=} = 0$.
Note that whilst direct optimisation of $p$-values is an improvement over myopic consideration of $z$-scores, there is another caveat: the "worst-case" loss of a type-III/S error is bounded at 1, which does not reflect our true utility function: metrics that disagree with the North Star are far less reliable than those that simply remain inconclusive. As such, we also consider another variant of the objective, where we minimise $-\log\left(1 - p_{M_{\boldsymbol{\theta}}}^{A \succ B}\right)$ instead of the $p$-value itself. Figure 1 provides visual intuition to clarify how this monotonic transformation on the $p$-values more heavily penalises type-III/S errors, whilst retaining the optimum. From a theoretical perspective, this function provides a convex relaxation for minimising the number of type-III/S errors a metric produces. As a result, we expect this surrogate to exhibit strong generalisation. We refer to this objective as minimising the $\log(p)$-value.
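Reading the log-transform as $-\log(1 - p) = -\log \Phi(z)$ for known outcomes (an assumption on our part; the original text only specifies a log-transformation), the contrast with the plain $p$-value loss can be sketched as:

```python
from statistics import NormalDist
import math

_PHI = NormalDist()

def p_loss(z):
    """One-tailed p-value loss for a known-outcome pair (Eq. 14):
    bounded at 1, even for confident disagreement."""
    return 1.0 - _PHI.cdf(z)

def log_p_loss(z):
    """Assumed log-transformed variant, -log(1 - p) = -log(Phi(z)):
    grows without bound as the metric confidently disagrees with the
    North Star (z -> -inf, a type-III/S error)."""
    return -math.log(_PHI.cdf(z))
```

At $z = 3$ (confident agreement) both losses are tiny and nearly identical; at $z = -3$ (a type-III/S error) the $p$-value loss saturates near 1, whilst the log-transformed loss exceeds 6.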
![Figure 1](https://i0.wp.com/arxiv.org/html/2402.03915v2/x1.png)
Note that one could envision further extensions here, where the significance level is directly incorporated into the objective function to maximise statistical power at a given significance level $\alpha$. Nevertheless, we conjecture that the discrete nature of such objectives might hamper effective optimisation and generalisation, compared to the strictly convex and smooth surrogate we obtain from the $\log(p)$-value.
3.2. Accelerated Convergence for Scale-Free Objectives via Spherical Regularisation
The objective functions we describe (either $z$-scores or $p$-values) are scale-free w.r.t. the weights $\boldsymbol{\theta}$ that are being optimised. As a result, out-of-the-box gradient-based optimisation techniques are not well-equipped to handle this efficiently.
Consider a simple toy example where we have two observed metrics for an experiment with a known preference $A \succ B$, and:
(15) $\hat{\boldsymbol{\mu}}^A - \hat{\boldsymbol{\mu}}^B = \begin{pmatrix} \cdot \\ \cdot \end{pmatrix}, \qquad \widehat{\Sigma}^A + \widehat{\Sigma}^B = \begin{pmatrix} \cdot & \cdot \\ \cdot & \cdot \end{pmatrix}$
For this low-dimensional problem, we can visualise the $z$-score as a function of the metric weights in a contour plot, as shown in Figure 2(a).
![Figure 2(a)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x2.png)
![Figure 2(b)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x3.png)
Here, it becomes visually clear that whilst the direction of the vector matters, its scale does not. The consequence is that the gradient vectors w.r.t. the objective on the right-hand plot can lead to slow convergence, even for this concave objective. Indeed, for poor initialisation in the bottom-left quadrant, the gradient direction is perpendicular w.r.t. the optima.
Recent work makes a similar observation for discrete scale-free objectives as they appear in ranking problems (Ustimenko and Prokhorenkova, 2020). They propose to adopt projected gradient descent, normalising the gradients before every update. Whilst effective, in our setting we would prefer to use out-of-the-box optimisation methods for practitioners' ease-of-use. Instead, we introduce a simple regularisation term that represents the distance between the scale of the weight vector and the unit hyper-sphere:
(16) $\mathcal{L}_{\mathrm{sph.}}(\boldsymbol{\theta}) = \mathcal{L}(\boldsymbol{\theta}) - \gamma \left( \lVert \boldsymbol{\theta} \rVert_2 - 1 \right)^2$, with regularisation strength $\gamma > 0$.
All optima of this objective function are also optima of the original function, but the gradient field is more amenable to iterative gradient-based optimisation techniques. Figure 2(b) visualises how this transforms the loss surface. Under this regularised objective, it is visually clear that gradient-based optimisation methods are likely to exhibit faster convergence. Our empirical results confirm this, for a variety of initialisation weights and learning objectives.
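The regularisation term itself is a one-liner; `gamma` is the regularisation strength. The penalty vanishes exactly on the unit sphere, so the optima of the original scale-free objective are preserved:

```python
import numpy as np

def spherical_penalty(theta, gamma=1.0):
    """Penalty on the distance between the weight scale and the unit
    hyper-sphere (Eq. 16); zero whenever ||theta||_2 = 1, so optimal
    directions of the scale-free objective are left unchanged."""
    return gamma * (np.linalg.norm(theta) - 1.0) ** 2
```

Subtracting this penalty from a maximised objective (or adding it to a minimised one) pulls iterates towards the unit sphere, where the gradient field points along the informative, direction-changing component.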
4. Experiments & Discussion
To empirically validate the methods proposed in this work, we require a dataset containing logged metrics (sample means, their variances and covariances), together with preference orderings over competing system variants that were collected from real-world A/B-tests, ideally spanning large user-bases and several weeks.
Existing work on this topic used a private dataset from Yandex focused on web search experiments that ran between 2013-2014 (Kharitonov et al., 2017). They report type-I and -II errors for 8 metrics and a fixed 5% significance level, over 118 A/B-tests and 472 A/A-tests.
In this work, we consider more general metrics that are relevant for use-cases beyond web search (i.a. user retention and various engagement signals). Furthermore, we report type-I/II/III/S errors at varying significance levels, providing insights into the learnt metrics' behaviour. For this, we leverage logs of past A/B-experiments on two large-scale short-video platforms with over 160 million monthly active users each: ShareChat and Moj. The datasets consist of 153 A/B-experiments in total (of which 58 were conclusive) that ran in 2023, and over 25,000 A/A-pairs. In total, we have access to roughly 100 metrics detailing various types of interactions with the platform, engagements, and delayed retention signals. Because our dataset is limited in size (a natural consequence of the problem domain), we are bound to overfit when using all available metrics as input features. As such, we limit ourselves to 10 input metrics to learn from, and evaluate them w.r.t. the delayed North Star. This feature selection step also ensures that our linear model consists of fewer parameters, which increases practitioners' and business stakeholders' trust in its output. We focus on non-delayed signals, including activity metrics such as the number of sessions and active days, and counters for positive and negative feedback engagements of various types. These are selected through an analysis of their type-I/II/III/S errors w.r.t. the North Star, as well as their $z$-scores: focusing on metrics with high sensitivity and limited disagreement. The research questions we wish to answer empirically using this data are the following:
- RQ1:
Do learnt metrics effectively improve on their objectives?
- RQ2:
How do learnt metrics behave in terms of type-III/S errors?
- RQ3:
How do learnt metrics' type-I/II errors behave when considered as stand-alone evaluation metrics?
- RQ4:
How do learnt metrics' type-I/II errors behave when used in conjunction with the North Star and top proxy metrics?
- RQ5:
How do learnt metrics influence required sample sizes when used in conjunction with the North Star and top proxy metrics?
- RQ6:
Do we observe accelerated convergence over varying objectives via the proposed spherical regularisation technique?
We report results for the ShareChat platform in what follows, and provide further empirical results for Moj in Appendix A.
4.1. Effectiveness of Learnt Metrics (RQ1)
We learn and evaluate metrics through leave-one-out cross-validation: for every experiment, we train a model on all other experiments and evaluate the $z$-score (Eq. 7) and $p$-value (Eq. 3) the metric yields for the held-out experiment. We report the mean and median $z$-scores and $p$-values we obtain for all A/B-pairs with known outcomes (i.e. $\mathcal{D}_{\succ}$) in Table 2. Best performers for every column (either maximising $z$-scores or minimising $p$-values) are highlighted in boldface. Empirical observations match our theoretical expectations: whilst the $z$-score objective does effectively maximise the average $z$-score, it is the worst performer for both mean and median $p$-values, and even the median $z$-score. Our proposed $\log(p)$-value objective effectively improves both the median $z$-score and $p$-value over alternatives.
| Objective | $z$-score (Mean) | $z$-score (Median) | $p$-value (Mean) | $p$-value (Median) |
|---|---|---|---|---|
| heuristic |  |  |  |  |
| $z$-score | **7.55** |  |  |  |
| $p$-value |  |  |  |  |
| $\log(p)$-value |  | **3.17** |  |  |
![](https://i0.wp.com/arxiv.org/html/2402.03915v2/x4.png)
![](https://i0.wp.com/arxiv.org/html/2402.03915v2/x5.png)
![](https://i0.wp.com/arxiv.org/html/2402.03915v2/x6.png)
4.2. Agreement with the North Star (RQ2)
From the obtained $z$-scores and $p$-values summarised in Table 2, we can additionally derive (dis-)agreement with the North Star, for varying significance levels $\alpha$. We visualise these results in Figure 3: if the obtained $p$-value under a learnt metric is lower than $\frac{\alpha}{2}$, that metric yields a statistically significant result (agreement). If the obtained $p$-value for the alternative hypothesis (i.e. for $B \succ A$, when we know $A \succ B$) is lower than $\frac{\alpha}{2}$, we have statistically significant disagreement, or type-III error. This is a capital sin we wish to avoid at all costs, as it severely diminishes the trust we can put in the learnt metric. If the $p$-value reveals a statistically insignificant result, we say the result is inconclusive, implying a type-II error. We observe that both the $z$-score-maximising metric and the heuristic approach fail to steer clear of type-III error. Optimising ($\log$-)$p$-values instead alleviates this issue. For this reason, we only consider these metrics for further evaluation. Indeed: an analysis of type-II error is rendered meaningless when type-III errors are present.
Figure 7(a) in Appendix A highlights that, for the Moj platform as well, type-III errors are common in the case of $z$-score-maximising or heuristic metrics. As such, we only consider metrics optimised for ($\log$-)$p$-values to assess increases in statistical power and potential reductions to the cost of running online experiments.
4.3. Power Increase from Learnt Metrics (RQ3โ4)
Until now, we have leveraged experiments with known outcomes to assess sensitivity and agreement with the North Star. Now, we additionally consider A/A-experiments ($\mathcal{D}_{=}$) and experiments with unknown outcomes ($\mathcal{D}_{\perp}$) to measure type-I and type-II error respectively. We measure this for the North Star, for the best-performing proxy metric that serves as input to the learnt metrics, and for learnt metrics that exhibit no empirical disagreement with the North Star. We plot the type-I error (i.e. the fraction of A/A-pairs in $\mathcal{D}_{=}$ that are statistically significant at significance level $\alpha$) and the type-II error (i.e. the fraction of A/B-pairs in $\mathcal{D}_{\perp}$ that are statistically insignificant at significance level $\alpha$) for varying values of $\alpha$ in Figure 4(a). We observe that we are able to significantly reduce type-II errors compared to the North Star (up to 78%), whilst keeping type-I errors at the required level (i.e. $\alpha$). However, we also observe that the type-II error we obtain when using learnt metrics does not significantly improve over the top proxy metric, when considered in isolation.
Nonetheless, this is not how evaluation metrics are used in practice. Indeed, we track several metrics and can draw conclusions if any of them are statistically significant. As such, metrics should be evaluated on their complementary sensitivity. That is, we compute $p$-values for a set of metrics, apply a Bonferroni correction, and assess statistical significance. The statistical power that we obtain through this procedure is visualised in Figure 4(b). We consider either the North Star in isolation, the North Star in conjunction with the top proxy, or a further combination with any learnt metric. Here, we observe that the learnt metric provides a substantial increase in statistical power: statistical power is increased by up to a relative 210% compared to the North Star alone (correspondingly reducing type-II error), and by 25-30% over the North Star plus proxies. Furthermore, as the Bonferroni correction is slightly conservative, we observe lower than expected type-I error for higher significance levels $\alpha$. This implies that a more fine-grained multiple testing correction can further improve statistical power. We empirically observe that this works as expected, but its effects are negligible in practice.
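The Bonferroni-corrected "any metric significant" decision rule described above can be sketched as:

```python
def any_significant(p_values, alpha=0.05):
    """Bonferroni-corrected decision over a set of metrics: conclude the
    experiment if any corrected p-value clears the significance level."""
    k = len(p_values)
    return min(p_values) <= alpha / k
```

Adding a learnt metric to the set tightens the per-metric threshold to $\frac{\alpha}{k}$, so it only helps when its sensitivity is complementary to the metrics already tracked, which is precisely what Figure 4(b) measures.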
4.4. Cost Reduction from Learnt Metrics (RQ5)
So far, we have shown that metrics learnt to minimise p-values are effective at improving sensitivity (Table 2), whilst minimising type-III errors (Figure 3) and improving statistical power (Figure 4).
On the one hand, powerful learnt metrics allow for more confident decisions from statistically significant A/B-test outcomes. On the other hand, we could make the same number of decisions based on fewer data points, as we reach statistical significance with smaller sample sizes. This implies a cost reduction: we can run experiments either on smaller portions of user traffic or for shorter periods of time, directly improving experimentation velocity.
This reduction in required sample size is equal to the square of the relative z-score (Chapelle et al., 2012; Kharitonov et al., 2017). We visualise this quantity in Figure 5, for varying significance levels α, for the same Bonferroni-corrected procedure as in Figure 4(b). To obtain a z-score for a set of metrics, we simply take the maximal score and apply a Bonferroni correction to it, as laid out in Section 2.1.1. Note that this procedure depends on α, explaining the slope in Figure 5. We observe that our learnt metrics can achieve the same level of statistical confidence as the North Star with up to 8 times fewer samples, i.e. a reduction down to 12.5% of the original sample size. This significantly reduces the cost of experimentation for the business, further strengthening the case for our learnt metrics.
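The square-of-the-relative-z-score rule follows from z-scores growing with the square root of the sample size: matching a reference z with a more sensitive metric scales the required n by the squared ratio of the scores. A worked sketch (the function name is ours):

```python
def required_sample_fraction(z_reference, z_metric):
    """Fraction of the reference metric's sample size that the new metric
    needs to reach the same statistical confidence. Since z scales with
    sqrt(n), matching z_reference requires n scaled by the squared ratio."""
    return (z_reference / z_metric) ** 2
```

For example, a metric whose z-score is twice that of the North Star needs only a quarter of the samples; an 8-fold sample reduction corresponds to a relative z-score of sqrt(8) ≈ 2.83.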
![Learning Metrics that Maximise Power for Accelerated A/B-Tests (7) Learning Metrics that Maximise Power for Accelerated A/B-Tests (7)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x7.png)
![Learning Metrics that Maximise Power for Accelerated A/B-Tests (8) Learning Metrics that Maximise Power for Accelerated A/B-Tests (8)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x8.png)
4.5. Spherical Regularisation (RQ6)
Our goal is to assess and quantify the effects of the spherical regularisation method proposed in Section 3.2. We train models on all available data with known outcomes, where we have a vetted preference over variants. We consider three weight initialisation strategies, normalising weights to unit norm: (i) good initialisation, (ii) constant initialisation at the all-one vector, and (iii) bad initialisation. We train models for all learning objectives we deal with in this paper, whilst varying the strength of the spherical regularisation term. As discussed, this term does not affect the optima, but simply transforms the loss to be more amenable to gradient-based optimisation methods. Thus, we expect convergence after fewer training iterations. All models are trained until convergence with the Adam optimiser (Kingma and Ba, 2014), halving the learning rate every 1,000 steps where we do not observe improvements. We use the RAdam variant to avoid convergence issues (Reddi et al., 2018; Liu et al., 2020), and have validated that this choice does not significantly alter our results and conclusions. We consider a model converged if there are no improvements to the learning objective after 10,000 steps. All methods are implemented using Python 3.9 and PyTorch (Paszke et al., 2019).
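The exact regularisation term is defined in Section 3.2; what the experiment exploits is that a linear metric's z-score is invariant to rescaling of its weight vector, so weights can always be kept on the unit sphere without changing the optima. A minimal sketch of that normalisation step (the helper name is ours, not the paper's implementation):

```python
import numpy as np

def project_to_sphere(theta):
    """A linear metric theta . x induces the same z-score under any positive
    rescaling of theta, so we may normalise the weights to unit norm."""
    theta = np.asarray(theta, dtype=float)
    return theta / np.linalg.norm(theta)
```

The three initialisation strategies in the experiment are all normalised this way before training begins.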
Figure 6 visualises the evolution of the learning objective over optimisation steps, for all mentioned learning objectives, initialisation strategies, and regularisation strengths. We observe that the method is robust, significantly improving convergence speed in all settings and requiring up to 40% fewer iterations until convergence. This positively influences the practical utility of the learnt-metric pipeline for researchers and practitioners.
We provide source code to reproduce Figure 2 and our regularisation method at github.com/olivierjeunen/learnt-metrics-kdd-2024.
5. Insights from Learnt Metrics
In this section, we briefly discuss insights that arose through our empirical evaluation of all metrics: the North Star, classical surrogates and proxies, and learnt metrics. These insights are specific to our platforms, but we believe they can contribute to a general intuition and understanding of metrics for online content platforms and broader application areas.
Ratio metrics are easily fooled.
Often, important metrics can be framed as a ratio of the means (or sums) of two existing metrics (Baweja et al., 2024; Budylin et al., 2018). Examples include click-through rate (i.e. clicks / impressions), variants of user retention (i.e. retained users / active users), or general engagement ratios (e.g. likes / video-plays). We observe that, whilst these metrics can be important from a business perspective, they typically exhibit significant type-III/S errors w.r.t. the North Star. Indeed, in the examples above, both the numerator and denominator represent positive signals we wish to increase. Suppose an online experiment increases the number of video-plays by some percentage, and the overall number of likes by a smaller percentage. These two positive signals will lead to a decreasing ratio, whilst we are likely to still prefer the treatment w.r.t. the North Star if both increases are substantial. Similar observations cautioning against the use of ratio metrics have been made by Dmitriev et al. (2017).
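A worked example of the failure mode above, with hypothetical counts: both the numerator and the denominator improve, yet the ratio metric reports a regression.

```python
# Hypothetical counts before and after a treatment: both signals increase,
# yet the ratio (likes per video-play) decreases.
plays_before, likes_before = 1_000, 100
plays_after, likes_after = 1_500, 120  # +50% plays, +20% likes

ratio_before = likes_before / plays_before  # 0.10
ratio_after = likes_after / plays_after     # 0.08

assert plays_after > plays_before and likes_after > likes_before
assert ratio_after < ratio_before  # the ratio metric penalises a positive change
```

A metric tracking either signal cumulatively would correctly register an improvement here; only the per-item framing is fooled.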
We believe this is connected to common offline ranking evaluation metrics prevalent in the recommender systems field (Steck, 2013; Jeunen, 2019). Indeed, such metrics are cumulative in nature, optimising overall value instead of some notion of value-per-item (Jeunen et al., 2024).
User-level aggregations conquer general counters.
In the previous example, we described general count metrics for the number of likes and the number of video-play events. User behaviour on online platforms often follows a power-law distribution: a few "power users" generate the majority of such events (Chi, 2020). As a result, such metrics are easily skewed, and they are not guaranteed to accurately reflect improvements for the full population of users, empirically leading to type-III/S errors w.r.t. the North Star. Aggregating such counters per user (e.g. counting the number of days a user has at least a given number of video-plays), instead of using raw event counters, provides strong and sensitive proxies to the North Star.
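The skew problem can be illustrated with hypothetical per-user counts (the data and the threshold are invented for illustration): a single power user dominates the raw counter and masks a broad improvement, whilst a user-level aggregation recovers it.

```python
import numpy as np

# Hypothetical daily video-play counts per user: one "power user" dominates.
control = np.array([2, 3, 1, 2, 500])
treatment = np.array([4, 5, 3, 4, 300])  # most users improve; the power user drops

# Raw event counter: dominated by the single power user, suggests a regression.
raw_delta = treatment.sum() - control.sum()  # negative

# User-level aggregation: count users with at least n video-plays
# (n = 3 is a hypothetical threshold) -- reflects the broad improvement.
n = 3
user_delta = (treatment >= n).sum() - (control >= n).sum()  # positive
```

In production, the aggregation would additionally be computed per day per user, as described above, before comparing treatment and control.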
Interestingly, this framing is reminiscent of recall, as we effectively measure the coverage of users about whom we have positive signals. Recall metrics are again strongly connected to offline evaluation practices in recommender systems, especially in the first stage of the two-stage systems common in industry (Covington et al., 2016; Ma et al., 2020).
6. Conclusions & Outlook
A/B-testing is a crucial tool for decision-making in online businesses, and it has been widely adopted as a go-to approach for continuous system improvement. Notwithstanding its popularity, online experiments are often expensive to perform. Indeed, many experiments lead to statistically insignificant outcomes, presenting an obstacle to confident decision-making. Experiments that do lead to significant outcomes are costly too: by definition, a portion of user traffic interacts with a sub-optimal system variant. As such, we want to maximise the number of decisions we can make based on the experiments we run, and minimise the sample size required for statistically significant outcomes. In this work, we achieve this by learning metrics that maximise the statistical power they harness. We present novel learning objectives for such metrics, and provide a thorough evaluation of the effectiveness of our proposed approaches. Our learnt metrics are currently used for confident, high-velocity decision-making across the ShareChat and Moj business units.
We believe our work opens several avenues for future research improving the efficacy of learnt metrics, e.g. by relaxing the linearity constraint we rely on. Furthermore, we wish to leverage our learnt metrics as reward signals for personalisation through machine learning models (Jeunen, 2021).
References
- Athey et al. (2019) Susan Athey, Raj Chetty, Guido W. Imbens, and Hyunseung Kang. 2019. The Surrogate Index: Combining Short-Term Proxies to Estimate Long-Term Treatment Effects More Rapidly and Precisely. Working Paper 26463. National Bureau of Economic Research. https://doi.org/10.3386/w26463
- Baweja et al. (2024) Shubham Baweja, Neeti Pokharna, Aleksei Ustimenko, and Olivier Jeunen. 2024. Variance Reduction in Ratio Metrics for Efficient Online Experiments. In Proc. of the 46th European Conference on Information Retrieval (ECIR '24). Springer.
- Budylin et al. (2018) Roman Budylin, Alexey Drutsa, Ilya Katsev, and Valeriya Tsoy. 2018. Consistent Transformation of Ratio Metrics for Efficient Online Controlled Experiments. In Proc. of the Eleventh ACM International Conference on Web Search and Data Mining (WSDM '18). ACM, 55–63. https://doi.org/10.1145/3159652.3159699
- Chapelle et al. (2012) Olivier Chapelle, Thorsten Joachims, Filip Radlinski, and Yisong Yue. 2012. Large-Scale Validation and Analysis of Interleaved Search Evaluation. ACM Trans. Inf. Syst. 30, 1, Article 6 (March 2012), 41 pages. https://doi.org/10.1145/2094072.2094078
- Chi (2020) Ed H. Chi. 2020. From Missing Data to Boltzmann Distributions and Time Dynamics: The Statistical Physics of Recommendation. In Proc. of the 13th International Conference on Web Search and Data Mining (WSDM '20). ACM, 1–2. https://doi.org/10.1145/3336191.3372193
- Covington et al. (2016) Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep Neural Networks for YouTube Recommendations. In Proc. of the 10th ACM Conference on Recommender Systems (RecSys '16). ACM, 191–198. https://doi.org/10.1145/2959100.2959190
- Deng and Shi (2016) Alex Deng and Xiaolin Shi. 2016. Data-Driven Metric Development for Online Controlled Experiments: Seven Lessons Learned. In Proc. of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). ACM, 77–86. https://doi.org/10.1145/2939672.2939700
- Deng et al. (2013) Alex Deng, Ya Xu, Ron Kohavi, and Toby Walker. 2013. Improving the Sensitivity of Online Controlled Experiments by Utilizing Pre-Experiment Data. In Proc. of the Sixth ACM International Conference on Web Search and Data Mining (WSDM '13). ACM, 123–132. https://doi.org/10.1145/2433396.2433413
- Dmitriev et al. (2017) Pavel Dmitriev, Somit Gupta, Dong Woo Kim, and Garnet Vaz. 2017. A Dirty Dozen: Twelve Common Metric Interpretation Pitfalls in Online Controlled Experiments. In Proc. of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '17). ACM, 1427–1436. https://doi.org/10.1145/3097983.3098024
- Gelman and Carlin (2014) Andrew Gelman and John Carlin. 2014. Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors. Perspectives on Psychological Science 9, 6 (2014), 641–651. https://doi.org/10.1177/1745691614551642 PMID: 26186114.
- Goffrier et al. (2023) Graham Van Goffrier, Lucas Maystre, and Ciarán Mark Gilligan-Lee. 2023. Estimating long-term causal effects from short-term experiments and long-term observational data with unobserved confounding. In Proc. of the Second Conference on Causal Learning and Reasoning (Proc. of Machine Learning Research, Vol. 213), Mihaela van der Schaar, Cheng Zhang, and Dominik Janzing (Eds.). PMLR, 791–813. https://proceedings.mlr.press/v213/goffrier23a.html
- Guo et al. (2021) Yongyi Guo, Dominic Coey, Mikael Konutgan, Wenting Li, Chris Schoener, and Matt Goldman. 2021. Machine Learning for Variance Reduction in Online Experiments. In Advances in Neural Information Processing Systems, Vol. 34. Curran Associates, Inc., 8637–8648.
- Howard et al. (2021) Steven R. Howard, Aaditya Ramdas, Jon McAuliffe, and Jasjeet Sekhon. 2021. Time-uniform, nonparametric, nonasymptotic confidence sequences. The Annals of Statistics 49, 2 (2021), 1055–1080. https://doi.org/10.1214/20-AOS1991
- Jeunen (2019) Olivier Jeunen. 2019. Revisiting Offline Evaluation for Implicit-Feedback Recommender Systems. In Proc. of the 13th ACM Conference on Recommender Systems (RecSys '19). ACM, 596–600. https://doi.org/10.1145/3298689.3347069
- Jeunen (2021) Olivier Jeunen. 2021. Offline Approaches to Recommendation with Online Success. Ph.D. Dissertation. University of Antwerp.
- Jeunen (2023) Olivier Jeunen. 2023. A Common Misassumption in Online Experiments with Machine Learning Models. SIGIR Forum 57, 1, Article 13 (December 2023), 9 pages. https://doi.org/10.1145/3636341.3636358
- Jeunen et al. (2024) Olivier Jeunen, Ivan Potapov, and Aleksei Ustimenko. 2024. On (Normalised) Discounted Cumulative Gain as an Offline Evaluation Metric for Top-n Recommendation. In Proc. of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24). arXiv:2307.15053 [cs.IR]
- Kaiser (1960) Henry F. Kaiser. 1960. Directional statistical decisions. Psychological Review 67, 3 (1960), 160.
- Kharitonov et al. (2017) Eugene Kharitonov, Alexey Drutsa, and Pavel Serdyukov. 2017. Learning Sensitive Combinations of A/B Test Metrics. In Proc. of the Tenth ACM International Conference on Web Search and Data Mining (WSDM '17). ACM, 651–659. https://doi.org/10.1145/3018661.3018708
- Kingma and Ba (2014) Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. In Proc. of the 3rd International Conference on Learning Representations (ICLR '14). arXiv:1412.6980 [cs.LG]
- Kohavi et al. (2022) Ron Kohavi, Alex Deng, and Lukas Vermeer. 2022. A/B Testing Intuition Busters: Common Misunderstandings in Online Controlled Experiments. In Proc. of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22). ACM, 3168–3177. https://doi.org/10.1145/3534678.3539160
- Kohavi et al. (2020) Ron Kohavi, Diane Tang, and Ya Xu. 2020. Trustworthy online controlled experiments: A practical guide to A/B testing. Cambridge University Press.
- Ledoit and Wolf (2004) Olivier Ledoit and Michael Wolf. 2004. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis 88, 2 (2004), 365–411. https://doi.org/10.1016/S0047-259X(03)00096-4
- Ledoit and Wolf (2020) Olivier Ledoit and Michael Wolf. 2020. The Power of (Non-)Linear Shrinking: A Review and Guide to Covariance Matrix Estimation. Journal of Financial Econometrics 20, 1 (2020), 187–218. https://doi.org/10.1093/jjfinec/nbaa007
- Liu et al. (2020) Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the Variance of the Adaptive Learning Rate and Beyond. In International Conference on Learning Representations (ICLR '20). https://arxiv.org/abs/1908.03265
- Ma et al. (2020) Jiaqi Ma, Zhe Zhao, Xinyang Yi, Ji Yang, Minmin Chen, Jiaxi Tang, Lichan Hong, and Ed H. Chi. 2020. Off-policy learning in two-stage recommender systems. In Proc. of The Web Conference 2020. 463–473.
- Mosteller (1948) Frederick Mosteller. 1948. A k-Sample Slippage Test for an Extreme Population. The Annals of Mathematical Statistics 19, 1 (1948), 58–65. http://www.jstor.org/stable/2236056
- Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2019/file/bdbca288fee7f92f2bfa9f7012727740-Paper.pdf
- Poyarkov et al. (2016) Alexey Poyarkov, Alexey Drutsa, Andrey Khalyavin, Gleb Gusev, and Pavel Serdyukov. 2016. Boosted Decision Tree Regression Adjustment for Variance Reduction in Online Controlled Experiments. In Proc. of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). ACM, 235–244. https://doi.org/10.1145/2939672.2939688
- Reddi et al. (2018) Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. 2018. On the Convergence of Adam and Beyond. In International Conference on Learning Representations (ICLR '18). https://openreview.net/forum?id=ryQu7f-RZ
- Richardson et al. (2023) Lee Richardson, Alessandro Zito, Dylan Greaves, and Jacopo Soriano. 2023. Pareto optimal proxy metrics. arXiv:2307.01000 [stat.ME]
- Rubin (1974) Donald B. Rubin. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66, 5 (1974), 688.
- Schmit and Miller (2022) Sven Schmit and Evan Miller. 2022. Sequential confidence intervals for relative lift with regression adjustments. (2022).
- Shaffer (1995) Juliet Popper Shaffer. 1995. Multiple Hypothesis Testing. Annual Review of Psychology 46, 1 (1995), 561–584. https://doi.org/10.1146/annurev.ps.46.020195.003021
- Steck (2013) Harald Steck. 2013. Evaluation of recommendations: rating-prediction and ranking. In Proc. of the 7th ACM Conference on Recommender Systems (RecSys '13). ACM, 213–220. https://doi.org/10.1145/2507157.2507160
- Tang et al. (2022) Ziyang Tang, Yiheng Duan, Steven Zhu, Stephanie Zhang, and Lihong Li. 2022. Estimating Long-Term Effects from Experimental Data. In Proc. of the 16th ACM Conference on Recommender Systems (RecSys '22). ACM, 516–518. https://doi.org/10.1145/3523227.3547398
- Tripuraneni et al. (2023) Nilesh Tripuraneni, Lee Richardson, Alexander D'Amour, Jacopo Soriano, and Steve Yadlowsky. 2023. Choosing a Proxy Metric from Past Experiments. arXiv:2309.07893 [stat.ME]
- Urbano et al. (2019) Julián Urbano, Harlley Lima, and Alan Hanjalic. 2019. Statistical Significance Testing in Information Retrieval: An Empirical Analysis of Type I, Type II and Type III Errors. In Proc. of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '19). ACM, 505–514. https://doi.org/10.1145/3331184.3331259
- Ustimenko and Prokhorenkova (2020) Aleksei Ustimenko and Liudmila Prokhorenkova. 2020. StochasticRank: Global Optimization of Scale-Free Discrete Functions. In Proc. of the 37th International Conference on Machine Learning (ICML '20, Vol. 119). PMLR, 9669–9679. https://proceedings.mlr.press/v119/ustimenko20a.html
- Wald (1945) Abraham Wald. 1945. Sequential Tests of Statistical Hypotheses. The Annals of Mathematical Statistics 16, 2 (1945), 117–186. https://doi.org/10.1214/aoms/1177731118
- Wang et al. (2022) Yuyan Wang, Mohit Sharma, Can Xu, Sriraj Badam, Qian Sun, Lee Richardson, Lisa Chung, Ed H. Chi, and Minmin Chen. 2022. Surrogate for Long-Term User Experience in Recommender Systems. In Proc. of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22). ACM, 4100–4109. https://doi.org/10.1145/3534678.3539073
- Welch (1947) Bernard Lewis Welch. 1947. The Generalization of 'Student's' Problem when Several Different Population Variances are Involved. Biometrika 34, 1-2 (1947), 28–35. https://doi.org/10.1093/biomet/34.1-2.28
- Xie and Aurisset (2016) Huizhi Xie and Juliette Aurisset. 2016. Improving the Sensitivity of Online Controlled Experiments: Case Studies at Netflix. In Proc. of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). ACM, 645–654. https://doi.org/10.1145/2939672.2939733
- Yue et al. (2010) Yisong Yue, Yue Gao, Olivier Chapelle, Ya Zhang, and Thorsten Joachims. 2010. Learning More Powerful Test Statistics for Click-Based Retrieval Evaluation. In Proc. of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '10). ACM, 507–514. https://doi.org/10.1145/1835449.1835534
![Learning Metrics that Maximise Power for Accelerated A/B-Tests (9) Learning Metrics that Maximise Power for Accelerated A/B-Tests (9)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x9.png)
![Learning Metrics that Maximise Power for Accelerated A/B-Tests (10) Learning Metrics that Maximise Power for Accelerated A/B-Tests (10)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x10.png)
![Learning Metrics that Maximise Power for Accelerated A/B-Tests (11) Learning Metrics that Maximise Power for Accelerated A/B-Tests (11)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x11.png)
![Learning Metrics that Maximise Power for Accelerated A/B-Tests (12) Learning Metrics that Maximise Power for Accelerated A/B-Tests (12)](https://i0.wp.com/arxiv.org/html/2402.03915v2/x12.png)
Appendix A Additional Experimental Results
To further empirically validate our theoretical insights w.r.t. the proposed methods, we repeat the experiments reported in Section 4 on data collected for the Moj platform, reproducing Figures 3–5. Results are visualised in Figure 7. Observations match our expectations, further strengthening trust in the replicability of our results.
All improvements in sensitivity and statistical power are of a similar order of magnitude as those for ShareChat: learnt metrics that minimise p-values can substantially reduce type-II/III errors without affecting type-I errors. We observe an improvement over the ShareChat data in Figure 7(d): learnt metrics for Moj exhibit a 12-fold reduction in the sample size required to attain the same statistical confidence as the North Star.