Arbitrariness and Social Prediction:
The Confounding Role of Variance in Fair Classification
Abstract
Variance in predictions across different trained models is a significant, under-explored source of error in fair binary classification. In practice, the variance on some data examples is so large that decisions can be effectively arbitrary. To investigate this problem, we take an experimental approach and make four overarching contributions. We: 1) Define a metric called self-consistency, derived from variance, which we use as a proxy for measuring and reducing arbitrariness; 2) Develop an ensembling algorithm that abstains from classification when a prediction would be arbitrary; 3) Conduct the largest to-date empirical study of the role of variance (vis-a-vis self-consistency and arbitrariness) in fair binary classification; and, 4) Release a toolkit that makes the US Home Mortgage Disclosure Act (HMDA) datasets easily usable for future research. Altogether, our experiments reveal shocking insights about the reliability of conclusions on benchmark datasets. Most fair binary classification benchmarks are close-to-fair when taking into account the amount of arbitrariness present in predictions — before we even try to apply any fairness interventions. This finding calls into question the practical utility of common algorithmic fairness methods, and in turn suggests that we should reconsider how we choose to measure fairness in binary classification.
1 Introduction
A goal of algorithmic fairness is to develop techniques that measure and mitigate discrimination in automated decision-making. In fair binary classification, this often involves training a model to satisfy a chosen fairness metric, which typically defines fairness as parity between model error rates for different demographic groups in the dataset [4]. However, even if a model’s classifications satisfy a particular fairness metric, it is not necessarily the case that the model is equally confident in each classification.
To provide an intuition for what we mean by confidence, consider the following experiment: We fit 100 logistic regression models using the same learning process, which draws different subsamples of the training set from the COMPAS prison recidivism dataset [44, 30], and we compare the resulting classifications for two individuals in the test set. Figure 1 shows a difference in the consistency of predictions for both individuals: the 100 models agree completely to classify Individual 1 as “will recidivate” and disagree completely on whether to classify Individual 2 as “will” or “will not recidivate.” If we were to pick one model at random to use in practice, there would be no effect on how Individual 1 is classified;
yet, for Individual 2, the prediction is effectively random. We can interpret this disagreement to mean that the learning process that produced these predictions is not sufficiently confident to justify assigning Individual 2 either decision outcome. In practice, instances like Individual 2 exhibit so little confidence that their classification is effectively arbitrary [16, 15, 18]. Further, this arbitrariness can also bring about discrimination if classification decisions are systematically more arbitrary for individuals in certain demographic groups.
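As a concrete illustration of this experiment, the sketch below trains 100 logistic regression models on resampled training sets and counts how often they agree on each test instance. It is our illustrative code, not the exact experimental setup: the `load_compas` loader is a hypothetical stand-in for COMPAS preprocessing, and bootstrap resampling stands in for the subsampling procedure described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_compas()  # hypothetical loader returning numpy feature/label arrays for COMPAS
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

rng = np.random.default_rng(0)
n = len(X_train)
preds = []
for _ in range(100):  # one logistic regression per resampled training set
    idx = rng.choice(n, size=n, replace=True)
    model = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    preds.append(model.predict(X_test))
preds = np.vstack(preds)  # shape (100, n_test)

# Fraction of the 100 models that predict "will recidivate" for each test individual:
vote_share = preds.mean(axis=0)
# Values near 0 or 1 correspond to Individual 1 (complete agreement);
# values near 0.5 correspond to Individual 2 (an effectively arbitrary decision).
```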
A key aspect of this example is that we use only one model to make predictions. This is the typical setup in fair binary classification: Popular metrics are commonly applied to evaluate the fairness of a single model [34, 50, 40]. However, as is clear from the example learning process in Figure 1, using only a single model can mask the arbitrariness of predictions. Instead, to reveal arbitrariness, we must examine distributions over possible models for a given learning process. With this shift in frame, we ask:
What is the empirical role of arbitrariness in fair binary classification tasks?
To study this question, we make four contributions:
1. Quantify arbitrariness. We formalize a metric called self-consistency, derived from statistical variance, which we use as a quantitative proxy for arbitrariness of model outputs. Self-consistency is a simple yet powerful tool for empirical analyses of fair classification (Section 3).

2. Propose an abstention-based ensembling algorithm. We develop a bagging-style algorithm that abstains from classification when a prediction would be arbitrary, i.e., when self-consistency falls below a user-specified minimum level (Section 4).

3. Perform a comprehensive experimental study of variance in fair binary classification. We conduct the largest-to-date such study, through the lens of self-consistency and its relationship to arbitrariness. Surprisingly, we find that most benchmarks are close-to-fair when taking into account the amount of arbitrariness present in predictions — before we even try to apply any fairness interventions (Section 5). This finding has significant implications for the field: it casts doubt on the reliability of prior work that claims there is baseline unfairness in these benchmarks, in order to demonstrate that methods to improve fairness work in practice. We instead find that such methods are often empirically unnecessary to improve fairness (Section 6).

4. Release a large-scale fairness dataset package. We observe that variance, particularly in small datasets, can undermine the reliability of conclusions about fairness. We therefore open-source a package that makes the large-scale US Home Mortgage Disclosure Act datasets (HMDA) easily usable for future research.
2 Preliminaries on Fair Binary Classification
To analyze arbitrariness in the context of fair binary classification, we first need to establish our background definitions. This material is likely familiar to most readers. Nevertheless, we highlight particular details that are important for understanding the experimental methods that enable our contributions. We present the fair-binary-classification problem formulation and associated empirical approximations, with an emphasis on the distribution over possible models that could be produced from training on different subsets of data drawn from the same data distribution.
2.1 Problem formulation
Consider a distribution $\mathcal{D}$ from which we can sample examples $(x, g, y)$. The $x \in \mathcal{X}$ are feature instances and $g \in \mathcal{G}$ is a group of protected attributes that we do not use for learning (e.g., race, gender). (We examine the common setting in which $\mathcal{G}$ is binary, and abuse notation by treating $g$ like a scalar with $g \in \{0, 1\}$.) The $y$ are the associated observed labels, with $y \in \mathcal{Y} = \{0, 1\}$, where $\mathcal{Y}$ is the label space. From $\mathcal{D}$ we can sample $n$-sized training datasets $\hat{D}_n$, with $\mathcal{D}^n$ representing the set of all $n$-sized datasets. To reason about the possible models $h$ of a hypothesis class $\mathcal{H}$ that could be learned from the different subsampled datasets $\hat{D}_n$, we define a learning process:
Definition 1.
A learning process is a randomized function that runs instances of a training procedure on each training dataset $\hat{D}_n \in \mathcal{D}^n$ and a model specification, in order to produce classifiers $h_{\hat{D}_n} \in \mathcal{H}$. A particular run produces a model $h_{\hat{D}_n}$, where $h_{\hat{D}_n}: \mathcal{X} \to \mathcal{Y}$ is a deterministic mapping from the instance space $\mathcal{X}$ to the label space $\mathcal{Y}$. All such runs over $\mathcal{D}^n$ produce a distribution over possible trained models, $\mu$.
Reasoning about $\mu$, rather than individual models $h_{\hat{D}_n}$, enables us to contextualize arbitrariness in the data, which, in turn, is captured by learned models (Section 3). (Model multiplicity has similar aims, but ultimately relocates the arbitrariness we describe to model selection; Section 6, Appendix C.3.) Each particular model deterministically produces classifications $\hat{y} = h_{\hat{D}_n}(x)$. The classification rule is $h_{\hat{D}_n}(x) = \mathbb{1}[r_{\hat{D}_n}(x) > \tau]$, for some threshold $\tau$, where the regressor $r_{\hat{D}_n}$ computes the probability of positive classification. Executing the learning process produces $h_{\hat{D}_n}$ by minimizing the loss of predictions $\hat{y}$ with respect to their associated observed labels $y$ in $\hat{D}_n$. This loss is computed by a chosen loss function $L$. We compute predictions for a test set of fresh examples and calculate their loss. The loss is an estimate of the error of $h_{\hat{D}_n}$, which is dependent on the specific dataset used for training. To generalize to the error of all possible models produced by a specific learning process (Definition 1), we consider the expected error, $\mathbb{E}_{\hat{D}_n \sim \mathcal{D}^n}\big[\mathbb{E}_{(x, g, y) \sim \mathcal{D}}[L(h_{\hat{D}_n}(x), y)]\big]$.
In fair classification, it is common to use 0-1 loss or cost-sensitive loss, which assigns asymmetric costs $C_{01}$ for false positives (FP) and $C_{10}$ for false negatives (FN) [25]. These costs are related to the classifier threshold $\tau$, with $\tau = \frac{C_{01}}{C_{01} + C_{10}}$ (Appendix A.3). Common fairness metrics, such as Equality of Opportunity [34], further analyze error by computing disparities across group-specific error rates $\text{FPR}_g$ and $\text{FNR}_g$; for example, the disparity $|\text{FNR}_{g=0} - \text{FNR}_{g=1}|$. Model-specific $\text{FPR}$ and $\text{FNR}$ are further conditioned on the dataset $\hat{D}_n$ used in training.
2.2 Empirical approximation of the formulation
We typically only have access to one dataset, not the data distribution . In fair binary classification experiments, it is common to estimate expected error by performing cross validation (CV) on this dataset to produce a small handful of models [11, 38, 17, e.g.]. CV can be unreliable when there is high variance; it can produce error estimates that are themselves high variance, and does not reliably estimate expected error with respect to possible models (Section 5). For more details, see Efron and Tibshirani [23, 24] and Wager [57].
To get around these reliability issues, one can bootstrap. (We could use MCMC [60], but optimization is the standard tool that allows use of standard models in fairness.) Bootstrapping splits the available data into train and test sets, and simulates drawing different training datasets from a distribution by resampling the train set with replacement, generating $B$ replicates $\hat{D}_1, \ldots, \hat{D}_B$. We use these replicates to approximate the learning process (Def. 1), training one model $h_{\hat{D}_b}$ per replicate. We treat the resulting models as our empirical estimate for the distribution $\mu$, and evaluate their predictions on the same reserved test set. This enables us to produce comparisons of classifications across test instances like those in Fig. 1 (Appendix A.4).
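A minimal sketch of this bootstrap procedure, assuming NumPy arrays and any scikit-learn-style classifier; the function name and defaults here are our illustrative choices, not those of our released code.

```python
import numpy as np
from sklearn.base import clone

def bootstrap_predictions(base_model, X_train, y_train, X_test, B=101, seed=0):
    """Train one classifier per bootstrap replicate of the train set and collect
    each classifier's predictions on a single, fixed test set.

    The returned (B, n_test) matrix is the empirical stand-in for the distribution
    over possible models produced by the learning process."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = np.empty((B, len(X_test)), dtype=int)
    for b in range(B):
        idx = rng.choice(n, size=n, replace=True)  # one replicate of the train set
        preds[b] = clone(base_model).fit(X_train[idx], y_train[idx]).predict(X_test)
    return preds
```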
3 Variance, Self-Consistency, and Arbitrariness
From these preliminaries, we can now pin down arbitrariness more precisely. We develop a quantitative proxy for measuring arbitrariness, called self-consistency (Section 3.2), which is derived from a definition of statistical variance between different model predictions (Section 3.1). We then illustrate how self-consistency is a simple-yet-powerful tool for revealing the role of arbitrariness in fair classification (Section 3.3). Next, we will introduce an algorithm to improve self-consistency (Section 4) and compute self-consistency on popular fair binary classification benchmarks (Section 5).
3.1 Arbitrariness resembles statistical variance
In Section 2, we discussed how common fairness metrics analyze error by computing false positive rate (FPR) and false negative rate (FNR). Another common way to formalize error is as a decomposition of different statistical sources: noise-, bias-, and variance-induced error [2, 32]. To understand our metric for self-consistency (Section 3.2), we first describe how the arbitrariness in Figure 1 (almost, but not quite) resembles variance.
Informally, variance-induced error quantifies fluctuations in individual example predictions across different models $h_{\hat{D}_n}$. Variance is the error in the learning process that comes from training on different datasets $\hat{D}_n$. In theory, we measure variance by imagining training all possible models $h_{\hat{D}_n} \sim \mu$, testing them all on the same test instance $(x, g)$, and then quantifying how much the resulting classifications for $x$ deviate from each other. More formally,
Definition 2.
For all pairs of possible models $h_{\hat{D}_1}, h_{\hat{D}_2} \sim \mu$, the variance for a test instance $(x, g)$ is

$\text{var}(\mu, (x, g)) \;\equiv\; \mathbb{E}_{h_{\hat{D}_1},\, h_{\hat{D}_2} \sim \mu}\big[\, L\big(h_{\hat{D}_1}(x),\, h_{\hat{D}_2}(x)\big) \,\big].$

We can approximate variance directly by using the bootstrap method (Section 2.2, Appendix B.1). For 0-1 and cost-sensitive loss with costs $C_{01}$ and $C_{10}$ (Section 2.1), we can generate $B$ replicates to train concrete models $h_{\hat{D}_1}, \ldots, h_{\hat{D}_B}$ that serve as our approximation for the distribution $\mu$. For a test instance $(x, g)$, where $\hat{y}_0$ and $\hat{y}_1$ denote the number of $0$- and $1$-class predictions for $x$ across the $B$ models,

$\widehat{\text{var}}\big((x, g)\big) \;=\; \dfrac{(C_{01} + C_{10})\,\hat{y}_0\,\hat{y}_1}{B(B-1)}.$  (1)
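A direct transcription of (1) in code, paired with the bootstrap sketch above; the helper name and the default unit costs are our illustrative choices.

```python
def empirical_variance(preds_for_x, c01=1.0, c10=1.0):
    """Empirical variance (1) for a single test instance, given the B bootstrapped
    classifications it received and the false-positive / false-negative costs."""
    B = len(preds_for_x)
    n1 = int(sum(preds_for_x))   # number of 1-class predictions (y_hat_1)
    n0 = B - n1                  # number of 0-class predictions (y_hat_0)
    return (c01 + c10) * n0 * n1 / (B * (B - 1))
```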
3.2 Defining self-consistency from variance
It is clear from above that, in general, variance (1) is unbounded: we can always increase the maximum possible $\widehat{\text{var}}$ by increasing the magnitudes of our chosen $C_{01}$ and $C_{10}$. (Because $\tau = \frac{C_{01}}{C_{01}+C_{10}}$, for a given $\tau$ we can scale costs arbitrarily and have the same decision rule (Section 2.1); relative, not absolute, costs affect the number of classifications $\hat{y}_0$ and $\hat{y}_1$.) However, as we can see from our intuition for arbitrariness in Figure 1, the most important takeaway is the amount of (dis)agreement, reflected in the counts $\hat{y}_0$ and $\hat{y}_1$; here, there is no notion of the cost of misclassifications. So, variance (1) does not exactly measure what we want to capture. Instead, we want to focus unambiguously on the (dis)agreement part of variance, which we call the self-consistency of the learning process:
Definition 3.
For all pairs of possible models $h_{\hat{D}_1}, h_{\hat{D}_2} \sim \mu$, the self-consistency of the learning process for a test instance $x$ is

$\text{SC}(\mu, x) \;\equiv\; \mathbb{P}_{h_{\hat{D}_1},\, h_{\hat{D}_2} \sim \mu}\big[\, h_{\hat{D}_1}(x) = h_{\hat{D}_2}(x) \,\big].$  (2)
In words, (2) models the probability that two models produced by the same learning process on different $n$-sized training datasets agree on their predictions for the same test instance. (Equation (2) follows from it being equally likely to draw any two models in a learning process; Appendix B.3.) Like variance, we can derive an empirical approximation of SC. Using the bootstrap method with $B$ replicates,

$\widehat{\text{SC}}(x) \;=\; 1 - \dfrac{2\,\hat{y}_0\,\hat{y}_1}{B(B-1)}.$  (3)
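A corresponding sketch for computing (3) from a matrix of bootstrapped predictions; as before, this is illustrative code, not our released package.

```python
import numpy as np

def empirical_self_consistency(preds_for_x):
    """Empirical self-consistency (3): the probability that two distinct
    bootstrapped models agree on this instance's classification."""
    B = len(preds_for_x)
    n1 = int(sum(preds_for_x))
    n0 = B - n1
    return 1.0 - 2.0 * n0 * n1 / (B * (B - 1))

# Example: per-instance SC across a (B, n_test) prediction matrix `preds`
# (e.g., from bootstrap_predictions above):
# sc = np.apply_along_axis(empirical_self_consistency, 0, preds)
```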
For increasingly large $B$, $\widehat{\text{SC}}$ is defined on $[0.5, 1]$ (Appendix B.3). Throughout, we use the shorthand self-consistency, but it is important to note that Definition 3 is a property of the distribution $\mu$ over possible models produced by the learning process, not of individual models. We summarize other important takeaways below:
[Figure 2: $\widehat{\text{SC}}$ CDFs and total and group-specific error rates for COMPAS (groups: Non-White (NW), White (W)) and Old Adult (groups: Female (F), Male (M)) using random forests; numeric values omitted.]
Terminology. In naming our metric, we intentionally evoke related notions of “consistency” in logic and the law (Fuller [31], Stalnaker [55]; Appendix B.3).
Interpretation. Definition 3 is defined on $[0.5, 1]$, which coheres with the intuition in Figure 1: $0.5$ and $1$ respectively reflect minimal (Individual 2) and maximal (Individual 1) possible SC. SC, unlike FPR and FNR (Section 2.1), does not depend on the observed label $y$. It captures the learning process's confidence in a classification $\hat{y}$, but says nothing directly about $\hat{y}$'s accuracy. By construction, low self-consistency indicates high variance, and vice versa. We derive empirical (3) from (1) by leveraging observations about the definition of variance for 0-1 loss (Appendix B.3). While there are no costs $C_{01}$, $C_{10}$ in computing (3), they still affect empirical measurements of $\widehat{\text{SC}}$: because $C_{01}$ and $C_{10}$ affect the threshold $\tau$ (Section 2.1), they control the concrete numbers of $\hat{y}_0$ and $\hat{y}_1$ predictions, and thus the $\widehat{\text{SC}}$ we measure in experiments.
Empirical focus. Since self-consistency depends on the particular data subsets used in training, conclusions about its relevance vary according to task. This is why we take a practical approach for our main results — running a large-scale experimental study on many different datasets to extract general observations about self-consistency's practical effects (Section 5). In our experiments, we typically use $B = 101$ replicates, which yields a fine-grained range of possible $\widehat{\text{SC}}$ values in practice. (Efron and Tibshirani [24] provide guidance on choosing $B$.)
Relationship to other fairness concepts. Self-consistency is qualitatively different from traditional fairness metrics. Unlike FPR and FNR, SC does not depend on the observed label $y$. This has two important implications. First, while calibration also measures a notion of confidence, it is different: calibration reflects confidence with respect to a single model's predictions $\hat{y}$, but says nothing about the relative confidence in the predictions produced by the possible models that result from the learning process [50]. Second, a common assumption in algorithmic fairness is that there is label bias — that unfairness is due in part to discrimination reflected in recorded, observed decisions [29, 12]. As a result, it is arguably a nice side effect that self-consistency does not depend on $y$. However, it is also possible to be perfectly self-consistent and inaccurate (Section 6).
3.3 Illustrating self-consistency in practice
Self-consistency enables us to evaluate arbitrariness in classification experiments. It is straightforward to compute (3) with respect to multiple test instances — for all instances in a test set, or for all instances conditioned on membership in a group $g$. Therefore, beyond visualizing $\widehat{\text{SC}}$ for individuals (Figure 1), we can also do so across sets of individuals. We plot the cumulative distribution (CDF) of $\widehat{\text{SC}}$ for the groups in the test set (i.e., the $x$-axis shows the range of $\widehat{\text{SC}} \in [0.5, 1]$ for each group $g$). In Figure 2, we provide illustrative examples from two of the most common fair classification benchmarks [26], COMPAS and Old Adult, using random forests (RFs). We split the available data into train and test sets, and bootstrap the train set to train $B = 101$ models (Section 2.2). We repeat this process on 10 train/test splits, and the resulting confidence intervals (shown in the inset) indicate that our estimates are stable. We group observations regarding these examples into two categories:
Individual arbitrariness. Both CDFs show that $\widehat{\text{SC}}$ varies drastically across test instances. For random forests on the COMPAS dataset, about one-half of test instances exhibit relatively low self-consistency, and nearly one-quarter are close to minimally self-consistent ($\widehat{\text{SC}} \approx 0.5$); they resemble Individual 2 in Figure 1, meaning that their predictions are essentially arbitrary. These differences in $\widehat{\text{SC}}$ across the test set persist even though the 101 models exhibit relatively small average disparities in error, FPR, and FNR across groups (Figure 2, bottom; Section 5.2). This supports our motivating claim: it is possible to come close to satisfying fairness metrics while the learning process exhibits very different levels of confidence for the underlying classifications that inform those metrics (Section 1).
Systematic arbitrariness. We can also examine $\widehat{\text{SC}}$ according to groups. The plot for Old Adult shows that it is possible for the degree of arbitrariness to be systematically worse for a particular demographic group (Figure 2). While the lack of self-consistency is not as extreme as it is for COMPAS (Figure 2) — the majority of test instances exhibit relatively high $\widehat{\text{SC}}$ — there is more arbitrariness in the Male subgroup. We can quantify such systematic arbitrariness using a measure of distance between probability distributions. We use the Wasserstein-1 distance ($\mathcal{W}_1$), which has a closed form for CDFs [52]. The distance has an intuitive interpretation for measuring systematic arbitrariness: it computes the total disparity in SC by examining all possible SC levels at once (Appendix B.3). For two groups $g_0$ and $g_1$ with respective SC CDFs $F_{g_0}$ and $F_{g_1}$, $\mathcal{W}_1 = \int_{0.5}^{1} |F_{g_0}(s) - F_{g_1}(s)|\, ds$. For Old Adult, the empirical $\mathcal{W}_1$ is substantially larger than zero; for COMPAS, which does not show systematic arbitrariness, it is close to zero.
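Measuring systematic arbitrariness is simple with SciPy, which computes the Wasserstein-1 distance directly from samples. In the sketch below, the `sc` and `groups` arrays are assumed to come from a computation like the SC sketch in Section 3.2; they are not part of our released code.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# sc: per-instance empirical self-consistency for the test set (shape: n_test,)
# groups: the corresponding protected-attribute values (e.g., 0/1); both assumed precomputed.
sc_g0, sc_g1 = sc[groups == 0], sc[groups == 1]

# Systematic arbitrariness: W1 distance between the two groups' SC distributions.
w1 = wasserstein_distance(sc_g0, sc_g1)

# Points of one group's empirical SC CDF, for plots like Figure 2.
xs, ys = np.sort(sc_g0), np.arange(1, len(sc_g0) + 1) / len(sc_g0)
```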
4 Accounting for Self-Consistency
By definition, low self-consistency signals high variance (Section 3.2). It is therefore natural to use variance reduction techniques to improve $\widehat{\text{SC}}$ (and thus reduce arbitrariness).
As a starting point for improving self-consistency, we perform variance reduction with Breiman [8]'s bootstrap aggregation (bagging) ensembling algorithm. Bagging involves bootstrapping to produce a set of models (Section 2.2), and then, for each test instance, producing an aggregated prediction $\bar{y}$ that takes the majority vote of the underlying classifications. This procedure is practically effective for classifiers with high variance [8, 9]. However, by taking the majority vote, bagging embeds the idea that having slightly-better-than-random classifiers is sufficient for improving the ensembled predictions. Unfortunately, there exist instances like Individual 2 (Figure 1), for which the classifiers in the ensemble are evenly split between classes. This means that bagging alone cannot overcome arbitrariness (Appendix D.1).
To remedy this, we add the option to abstain from prediction if $\widehat{\text{SC}}$ is low (Algorithm 1). A minor adjustment to (3) accounts for abstentions, and a simple proof follows that Algorithm 1 improves $\widehat{\text{SC}}$ (Appendix D).

[Algorithm 1 — Input: training data, the number of bootstrap replicates $B$, a user-specified minimum self-consistency level, and the classification costs. Output: a prediction $\bar{y}$ for each test instance, or Abstain.]

We bootstrap as usual, but produce a prediction for a test instance only if its $\widehat{\text{SC}}$ surpasses the user-specified minimum level; otherwise, we Abstain from predicting. For evaluation, we divide the test set into two subsets: we group together the instances we Abstain on in an abstention set and those we predict on in a prediction set. This method improves self-consistency through two complementary mechanisms: 1) variance reduction (due to bagging; Appendix D) and 2) abstaining from instances that exhibit low $\widehat{\text{SC}}$ (thereby raising the overall level of $\widehat{\text{SC}}$ on the prediction set; Appendix D).
Further, since variance is a component of error (Section 3), variance reduction also tends to improve accuracy [8]. This leads to an important observation: the abstention set, by definition, exhibits high variance; we can therefore expect it to exhibit higher error than the prediction set (Section 5, Appendix E). So, while at first glance it may seem odd that our solution for arbitrariness is to not predict, it is worth noting that we often would have predicted incorrectly on a large portion of the abstention set anyway (Appendix D). In practice, we test two versions of our method:
Simple ensembling. We run Algorithm 1 to build ensembles of typical hypothesis classes in algorithmic fairness. For example, running Algorithm 1 with decision trees produces a bagged classifier that contains $B$ underlying decision trees and abstains from predicting on test instances whose $\widehat{\text{SC}}$ falls below the chosen minimum level. If overall $\widehat{\text{SC}}$ is low, then simple ensembling will lead to a large number of abstentions. For example, almost half of all test instances in COMPAS using random forests would fail to surpass the threshold (Figure 2). The potential for large abstention sets informs our second approach.
Super ensembling. We run Algorithm 1 on bagged models. When there is low $\widehat{\text{SC}}$ (i.e., high variance), it can be beneficial to do an initial pass of variance reduction. We produce bagged classifiers using traditional bagging, but without abstaining (at Algorithm 1, lines 4-5); we then use those bagged classifiers as the underlying models in Algorithm 1. The first round of bagging raises the overall $\widehat{\text{SC}}$ before the second round, which is when we decide whether to Abstain or not. We therefore expect this approach to abstain less; however, it may potentially incur higher error if, by happenstance, simple-majority-vote bagging chooses a classification for instances with very low $\widehat{\text{SC}}$ (Appendix D). (We could recursively super ensemble, but do not in this work.) We also experiment with an aggregation rule that averages the output probabilities of the underlying regressors and then applies the threshold $\tau$ to produce ensembled predictions; we do not observe major differences in results.
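A minimal sketch of the simple-ensembling variant, reusing the prediction matrix from the earlier bootstrap sketch; the function name, the use of -1 to mark Abstain, and the example threshold are our illustrative choices, and the released implementation may differ.

```python
import numpy as np

def predict_or_abstain(preds, min_sc):
    """Simple ensembling with abstention (a sketch of Algorithm 1).

    preds:  (B, n_test) matrix of 0/1 classifications from B bootstrapped models.
    min_sc: user-specified minimum self-consistency level.
    Returns the majority-vote prediction where empirical SC meets the threshold,
    and -1 (Abstain) elsewhere."""
    B = preds.shape[0]
    n1 = preds.sum(axis=0)
    n0 = B - n1
    sc = 1.0 - 2.0 * n0 * n1 / (B * (B - 1))  # empirical SC per test instance, eq. (3)
    majority = (n1 > n0).astype(int)          # bagged prediction (no ties when B is odd)
    return np.where(sc >= min_sc, majority, -1)

# e.g., out = predict_or_abstain(preds, min_sc=0.75)  # illustrative threshold
```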
5 Experiments
We release an extensible package of different methods, with which we trained and compared several million different models (all told, taking on the order of hours of compute). We include results covering common datasets and models: COMPAS, Old Adult, German and Taiwan Credit, and 3 large-scale New Adult - CA tasks on logistic regression (LR), decision trees (DTs), random forests (RFs), MLPs, and SVMs (Appendix E). Our results are shocking: by using Algorithm 1, we happened to observe close-to-fairness in nearly every task. Mitigating arbitrariness leads to fairness, without applying common fairness-improving interventions (Section 5.2, Appendix E).
[Figure: $\widehat{\text{SC}}$ CDFs with corresponding Baseline | Simple | Super metric tables for the illustrative tasks in Section 5.1; numeric values omitted.]
Releasing an HMDA toolkit. A possible explanation is that most fairness benchmarks are small and therefore exhibit high variance. We therefore clean a larger, more diverse, and newer dataset for investigating fair binary classification — the Home Mortgage Disclosure Act (HMDA) 2007-2017 datasets [27] — and release them with a standalone, easy-to-use software package. (It is repeatedly argued that the field needs such datasets [19, e.g.]; HMDA meets this need, but is less commonly used, since it requires engineering effort to manipulate — a barrier we remove.) In this paper, we examine the NY and TX 2017 subsets of HMDA, which are orders of magnitude larger than typical fairness benchmarks, and we still find close-to-fairness (Section 5.1, Appendix E).
Presentation. To visualize Algorithm 1, we plot the CDFs of the $\widehat{\text{SC}}$ of the underlying models used in each ensembling method. We simultaneously plot the results of simple ensembling (dotted curves) and super ensembling (solid curves). Instances to the left of the vertical line (the minimum $\widehat{\text{SC}}$ threshold) form the abstention set. We also provide the corresponding mean ± STD fairness and accuracy metrics for individual models (our expected, but not-necessarily-practically-attainable, baseline) and for both simple and super ensembling. For the ensembling methods, we report these metrics on the prediction set, along with the abstention rate.
We necessarily defer most of our results to Appendix E. In the main text, we exemplify two overarching themes: the effectiveness of both ensembling variants (Section 5.1), and how our results reveal shocking insights about reliability in fair binary classification research (Section 5.2). For all experiments, we illustrate Algorithm 1 with a fixed minimum $\widehat{\text{SC}}$ threshold, but note that the appropriate threshold is task-dependent in practice.
5.1 Validating Algorithm 1
We highlight results for two illustrative examples: Old Adult and HMDA-NY-2017 for ethnicity (Hispanic or Latino (HL), Non-Hispanic or Latino (NHL)). We plot CDFs and show metrics using random forests (RFs). For Old Adult, the RF baseline exhibits a nontrivial expected disparity. The dashed set of curves plots the underlying $\widehat{\text{SC}}$ for these RFs (Figure 3). When we apply simple ensembling to these RFs, overall variance decreases (Appendix E), shown in part by the decrease in FPR and FNR. Fairness also improves: the disparity decreases relative to the baseline. However, the corresponding abstention rate is quite high, especially for the Male subgroup (Figure 4(a)).
As expected, super ensembling improves overall $\widehat{\text{SC}}$ through a first pass of variance reduction (Section 4). The CDF curves are brought down, indicating that a lower proportion of the test set exhibits low $\widehat{\text{SC}}$. The abstention rate is lower and more equal across groups (Figure 4(a)); however, error, while still lower than for the baseline RFs, has gone up for all metrics. There is also a decrease in systematic arbitrariness (Section 3.3): the dark gray $\mathcal{W}_1$ area for super ensembling is smaller than the light gray area for simple ensembling (B.3, E.4).
For HMDA (Figure 3), simple ensembling similarly improves $\widehat{\text{SC}}$, but has a less beneficial effect on fairness. However, note that since the baseline is the empirical expected error over thousands of RF models, the specific baseline disparity is not necessarily attainable by any individual model. In this respect, simple ensembling has the benefit of actually obtaining a specific (ensemble) model that yields its disparity reliably in practice: the reported value is the mean over simple ensembles. Notably, this disparity is extremely low, even without applying traditional fairness techniques. Similar to Old Adult, simple ensembling exhibits a high abstention rate, which decreases with super ensembling at the cost of higher error. $\widehat{\text{SC}}$ still improves for both groups in comparison to the baseline, but the benefits are unequally applied: one group benefits more, so the disparity increases slightly.
Abstention set error. As an example, the average error in the Old Adult simple-ensembling abstention set is substantially higher than the error of the RF baseline and of the simple and super prediction sets (Appendix E.4.2). As expected, beyond reducing arbitrariness, we abstain from predicting on many instances for which we also would have been more inaccurate (Section 4).
A trade-off. Our results support that there is indeed a trade-off between abstention rate and error (Section 4). This is because Algorithm 1 identifies low-$\widehat{\text{SC}}$ instances for which ML prediction does a poor job, and abstains from predicting on them. Nevertheless, it may be infeasible for some applications to tolerate a high abstention rate. Thus the choice of threshold and ensembling method should be considered a context-dependent decision.
Unequal abstention rates. When there is a high degree of systematic arbitrariness, the abstention rate can vary considerably by group (Figure 4). With respect to improving $\widehat{\text{SC}}$, error, and fairness, this may be a reasonable outcome: it is arguably better to abstain unevenly — deferring a final classification to non-ML decision processes — than to predict more inaccurately and arbitrarily for one group. More importantly, we rarely observe systematic arbitrariness; unequal abstention rates are uncommon in practice (Section 6).
5.2 A problem of empirical algorithmic fairness
We also highlight results for COMPAS, one of the three most common fairness datasets [26]. Algorithm 1 is similarly very effective at reducing arbitrariness (Figure 5), and is able to obtain state-of-the-art accuracy [45] at varying abstention rates. Analogous results for German Credit indicate statistical equivalence in fairness metrics (Appendices E.4.3 and E.4.7).
These low-single-digit disparities do not cohere with much of the literature on fair binary classification, which often reports much larger fairness violations [44, notably]. However, most work on fair classification examines individual models, selected via cross-validation with a handful of random seeds (Section 2). Our results suggest that selecting between a few individual models in fair binary classification experiments is unreliable. When we instead estimate expected error by ensembling, we have difficulty reproducing unfairness in practice. Variance in the underlying models seems to be the culprit: the individual models we train on these tasks exhibit radically different group-specific error rates (Appendix E.4.7). Our strategy of shifting focus to the overall behavior of the distribution $\mu$ provides a solution: we not only mitigate arbitrariness, we also improve accuracy and usually average away most underlying, individual-model unfairness (Appendix E.5).
[Figure: $\widehat{\text{SC}}$ CDFs with a Baseline | Simple | Super metric table for COMPAS; numeric values omitted.]
6 Discussion and Related Work
In this paper, we advocate for a shift in thinking from individual models to the distribution over possible models in fair binary classification. This shift surfaces arbitrariness in underlying model decisions. We suggest a metric of self-consistency as a proxy for arbitrariness (Section 3) and an intuitive, simple extension of the classic bagging algorithm to mitigate it (Section 4). Our approach is highly effective with respect to improving self-consistency, accuracy, and fairness metrics in practice (Section 5, Appendix E).
Our findings contradict accepted truths in algorithmic fairness. For example, much work posits that there is an inherent analytical trade-off between fairness and accuracy [17, 48]. Instead, our experiments complement prior work that disputes the practical relevance of this formulation [53]. We show that it is in fact typically possible to achieve high accuracy (via variance reduction) and close-to-fairness, and to do so without using fairness-focused interventions.
Other research also calls attention to the need for metrics beyond fairness and accuracy. Model multiplicity reasons about sets of models that have similar accuracy [10], but differ in underlying properties due to variance in decision rules [7, 47, 58]. This work emphasizes developing criteria for selecting an individual model from that set. Instead, our work uses the distribution over possible models (with no normative claims about model accuracy or other selection criteria) to reason about arbitrariness (Appendix C.3). Some related work considers the role of uncertainty and variance in fairness [3, 11, 39, 5]. Notably, Black et al. [6] concurrently investigates abstention-based ensembling, employing a strategy that (based on their choice of variance definition) ultimately does not address the arbitrariness we describe and mitigate (Appendix C). After our work, Ko et al. [41] built on prior work that studies fairness and variance in deep learning tasks [28, 51], and find that fairness emerges in deep ensembles (Appendix C.4).
Most importantly, we take a comprehensive experimental approach missing from prior work. It is this approach that uncovers our alarming results: almost all tasks and settings demonstrate close-to or complete statistical equality in fairness metrics, after accounting for arbitrariness (Appendix E.4). Old Adult (Figure 3) is one of two exceptions. These results hold for larger, newer datasets like HMDA, which we clean and release. Altogether, our findings indicate that variance is undermining the reliability of conclusions in fair binary classification experiments. It is worth revisiting all prior experiments that depend on cross validation or few models.
What does this mean for fairness research?
While the field has put forth numerous theoretical results about (un)fairness regarding single models — impossibility of satisfying multiple metrics [40], post-processing individual models to achieve a particular metric [34] — these results seem to miss the point. By examining individual models, arbitrariness remains latent; when we account for arbitrariness in practice, most measurements of unfairness vanish.
We are not suggesting that there are no reasons to be concerned with the fairness of machine-learning models. We are not challenging the idea that actual, reliable violations of standard fairness metrics should be of concern. Instead, we are suggesting that common formalisms and methods for measuring fairness can lead to false conclusions about the degree to which such violations are happening in practice (Appendix F). Worse, they can conceal a tremendous amount of arbitrariness, which should itself be an important concern when examining the social impact of automated decision-making.
Ethical Statement
This work raises important ethical concerns regarding the practice of fair-binary-classification research. We organize these concerns into several themes below.
Arbitrariness and legitimacy
On common research benchmarks, we show that many classification decisions are effectively arbitrary. Intuitively, this is unfair, but is a type of unfairness that largely has gone unnoticed in the algorithmic-fairness community. Such arbitrariness raises serious concerns about the legitimacy of automated decision-making. Fully examining these implications is the subject of current work that our team is completing. Complementing prior work on ML and arbitrariness [18, 15], we are working on a law-review piece that clarifies the due process implications of arbitrariness in ML-decision outcomes. For additional notes on future work in this area, see Appendix F.
Misspecification, mismeasurement, and fairness
Much prior work has emphasized theoretical contributions and problem formulations for how to study fairness in ML. A common pattern is to study unequal model error rates between demographic subgroups in the available data. Typically, experimental validation of these ideas has relied on using just a handful of models. Our work shows that this is not empirically sound: it can lead to drawing unreliable conclusions about the degree of unfairness (defined in terms of error rates). Most observable unfairness seems due to inadequately modeling or measuring the role of variance in learned models on common benchmark tasks.
Other than indicating serious concerns about the rigor of experiments in fairness research, our findings suggest ethical issues about the role of mismeasurement in identifying and allocating resources to specific research problems [37]. A lot of resources and research effort have been allocated to the study of these problem formulations. In turn, they have had profound social influence and impact, both in research and in the real world, with respect to how we reason broadly about fairness in automated decision-making.
In response to the heavy investment in these ideas, many have noted that there are normative and ethical reasons why such formulations are inadequate for the task of aligning with more just or equitable outcomes in practice. Our work shows that normative and ethical considerations extend even further. Even if we take these formulations at face value in theory, they are very difficult to replicate in practice on common fairness benchmarks when we account for variance in predictions across trained models. When we perform due diligence with our experiments, we have trouble validating the hypothesis that popular ML-theoretical formulations of fairness are capturing a meaningful practical phenomenon.
This should be an incredibly alarming finding to anyone in the community that is concerned about the practice, not just the theory, of fairness research. Using bootstrapping, we observe serious problems with respect to the reliability of how fairness and accuracy are measured in work that relies on cross-validation of just a few models. We also find little empirical evidence of a trade-off between fairness and accuracy (another common formulation in the community), which complements prior work that has made similar observations [53].
Project aims, reduction of scope
We emphasize that this was an unintended outcome of our original research objectives. We aimed to study arbitrariness as a latent issue in problem formulations that have to do with fair classification. This included broader methodological aims: studying many sources of non-determinism that could impact arbitrariness in practice (e.g., minibatching, example ordering). Instead, our initial results of close-to-fair expected performance for individual models made us refocus our work on issues of mismeasurement and fairness.
We did not expect to find that dealing with arbitrariness would make almost all unfairness (again, as measured by common definitions) vanish in practice. Regardless of our intention, these results indicate the limits of theory in a domain that, by centering social values like fairness, cannot be separated from practice. We believe that problem formulations are only as good as they are useful. Based on our work, it is unclear how useful our existing formulations are if they do not bear out in experiments.
Reproducibility and project aims
In the course of this study, we also tried to reproduce the experiments of many well-cited theory-focused works. We were almost always unable to do so: code was rarely available. We therefore pivoted away from making reproducibility an explicit aspect of the present paper, and will pursue it in future work focused solely on reproducibility and fairness. Nevertheless, our work raises concerns about the validity of some of this past work. At the very least, past work that makes claims about baseline unfairness in fairness benchmarks (in order to demonstrate that proposed methods improve upon these baselines) should be subject to experimental scrutiny.
Further along these lines, in our opinion, this project should not have been possible or necessary in 2022. We believe that the novel findings we present here should have surfaced long ago, and likely would have surfaced if experimental contributions had been more evenly balanced with theoretical work.
The limits of prediction
Lastly, it has not escaped our notice that our results signal severe limits to prediction in social settings. It is true that our method performs reasonably well with respect to both fairness and accuracy metrics; however, arbitrariness is such a rampant problem, it is arguably unreasonable to assign these metrics much value in practice.
Acknowledgments
This work was done as part of an internship at Microsoft Research. A. Feder Cooper is supported by Prof. Christopher De Sa’s NSF CAREER grant, Prof. Baobao Zhang, and Prof. James Grimmelmann. A. Feder Cooper is affiliated with The Berkman Klein Center for Internet & Society at Harvard University. The authors would like to thank The Internet Society Project at Yale Law School, Artificial Intelligence Policy and Practice at Cornell University, Jack Balkin, Emily Black, danah boyd, Sarah Dean, Fernando Delgado, Hoda Heidari, Ken Holstein, Jessica Hullman, Abigail Z. Jacobs, Meg Leta Jones, Michael Littman, Kweku Kwegyir-Aggrey, Rosanne Liu, Emanuel Moss, Kathy Strandburg, Hanna Wallach, and Simone Zhang for their feedback.
References
- Abadi [2012] Daniel Abadi. Consistency Tradeoffs in Modern Distributed Database System Design: CAP is Only Part of the Story. Computer, 45(2):37–42, February 2012.
- Abu-Mostafa et al. [2012] Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin. Learning From Data: A Short Course. AMLbook, USA, 2012.
- Ali et al. [2021] Junaid Ali, Preethi Lahoti, and Krishna P. Gummadi. Accounting for Model Uncertainty in Algorithmic Discrimination. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’21, page 336–345, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450384735. doi: 10.1145/3461702.3462630. URL https://doi.org/10.1145/3461702.3462630.
- Barocas et al. [2019] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning: Limitations and Opportunities. fairmlbook.org, 2019. http://www.fairmlbook.org.
- Bhatt et al. [2021] Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, and Alice Xiang. Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, page 401–413. Association for Computing Machinery, New York, NY, USA, 2021.
- Black et al. [2022a] Emily Black, Klas Leino, and Matt Fredrikson. Selective Ensembles for Consistent Predictions. In International Conference on Learning Representations, 2022a.
- Black et al. [2022b] Emily Black, Manish Raghavan, and Solon Barocas. Model Multiplicity: Opportunities, Concerns, and Solutions. In 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’22, page 850–863, New York, NY, USA, 2022b. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533149. URL https://doi.org/10.1145/3531146.3533149.
- Breiman [1996] Leo Breiman. Bagging Predictors. Machine Learning, 24(2):123–140, August 1996. ISSN 0885-6125. doi: 10.1023/A:1018054314350.
- Breiman [1998] Leo Breiman. Arcing classifiers. Annals of Statistics, 26:801–823, 1998.
- Breiman [2001] Leo Breiman. Statistical Modeling: The Two Cultures. Statistical Science, 16(3):199–215, 2001. ISSN 08834237.
- Chen et al. [2018] Irene Chen, Fredrik D Johansson, and David Sontag. Why Is My Classifier Discriminatory? In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
- Cooper and Abrams [2021] A. Feder Cooper and Ellen Abrams. Emergent Unfairness in Algorithmic Fairness-Accuracy Trade-Off Research. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, page 46–54, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450384735.
- Cooper et al. [2021a] A. Feder Cooper, Karen Levy, and Christopher De Sa. Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML Systems. In Equity and Access in Algorithms, Mechanisms, and Optimization, New York, NY, USA, 2021a. Association for Computing Machinery. ISBN 9781450385534. URL https://doi.org/10.1145/3465416.3483289.
- Cooper et al. [2021b] A. Feder Cooper, Yucheng Lu, Jessica Zosa Forde, and Christopher De Sa. Hyperparameter Optimization Is Deceiving Us, and How to Stop It. In Advances in Neural Information Processing Systems, volume 34. Curran Associates, Inc., 2021b.
- Cooper et al. [2022a] A. Feder Cooper, Jonathan Frankle, and Christopher De Sa. Non-Determinism and the Lawlessness of Machine Learning Code. In Proceedings of the 2022 Symposium on Computer Science and Law, CSLAW ’22, page 1–8, New York, NY, USA, 2022a. Association for Computing Machinery. ISBN 9781450392341. doi: 10.1145/3511265.3550446.
- Cooper et al. [2022b] A. Feder Cooper, Emanuel Moss, Benjamin Laufer, and Helen Nissenbaum. Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. In 2022 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’22, page 864–876, New York, NY, USA, 2022b. Association for Computing Machinery. ISBN 9781450393522. doi: 10.1145/3531146.3533150.
- Corbett-Davies et al. [2017] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’17, page 797–806, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450348874.
- Creel and Hellman [2022] Kathleen Creel and Deborah Hellman. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Canadian Journal of Philosophy, 52(1):26–43, 2022.
- Ding et al. [2021] Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring Adult: New Datasets for Fair Machine Learning. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 6478–6490. Curran Associates, Inc., 2021.
- Domingos [2000a] Pedro Domingos. A Unified Bias-Variance Decomposition and Its Applications. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML ’00, page 231–238, San Francisco, CA, USA, 2000a. Morgan Kaufmann Publishers Inc. ISBN 1558607072.
- Domingos [2000b] Pedro Domingos. A unified bias-variance decomposition. Technical report, University of Washington, 2000b. URL https://homes.cs.washington.edu/~pedrod/bvd.pdf.
- Efron [1979] B. Efron. Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics, 7(1):1 – 26, 1979. doi: 10.1214/aos/1176344552.
- Efron and Tibshirani [1997] Bradley Efron and Robert Tibshirani. Improvements on Cross-Validation: The 632+ Bootstrap Method. Journal of the American Statistical Association, 92(438):548–560, 1997. doi: 10.1080/01621459.1997.10474007.
- Efron and Tibshirani [1993] Bradley Efron and Robert J. Tibshirani. An Introduction to the Bootstrap. Number 57 in Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton, Florida, USA, 1993.
- Elkan [2001] Charles Elkan. The Foundations of Cost-Sensitive Learning. In Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI’01, page 973–978, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
- Fabris et al. [2022] Alessandro Fabris, Stefano Messina, Gianmaria Silvello, and Gian Antonio Susto. Algorithmic Fairness Datasets: the Story so Far. Data Mining and Knowledge Discovery, 36(6):2074–2152, September 2022. doi: 10.1007/s10618-022-00854-z.
- [27] FFIEC 2017. HMDA Data Publication, 2017. URL https://www.consumerfinance.gov/data-research/hmda/historic-data/. Released due to the Home Mortgage Disclosure Act.
- Forde et al. [2021] Jessica Zosa Forde, A. Feder Cooper, Kweku Kwegyir-Aggrey, Chris De Sa, and Michael L. Littman. Model Selection’s Disparate Impact in Real-World Deep Learning Applications, 2021. URL https://arxiv.org/abs/2104.00606.
- Friedler et al. [2016] Sorelle A. Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian. On the (im)possibility of fairness, 2016.
- Friedler et al. [2019] Sorelle A. Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P. Hamilton, and Derek Roth. A Comparative Study of Fairness-Enhancing Interventions in Machine Learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* ’19, page 329–338, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450361255. doi: 10.1145/3287560.3287589.
- Fuller [1965] Lon L. Fuller. The Morality of Law. Yale University Press, New Haven, 1965. ISBN 9780300010701.
- Geman et al. [1992] Stuart Geman, Elie Bienenstock, and René Doursat. Neural Networks and the Bias/Variance Dilemma. Neural Comput., 4(1):1–58, January 1992. ISSN 0899-7667. doi: 10.1162/neco.1992.4.1.1.
- Grömping [2019] Ulrike Grömping. South German Credit Data: Correcting a Widely Used Data Set. Technical report, Beuth University of Applied Sciences Berlin, 2019.
- Hardt et al. [2016] Moritz Hardt, Eric Price, Eric Price, and Nati Srebro. Equality of Opportunity in Supervised Learning. In D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
- Hastie et al. [2009] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer, USA, 2 edition, 2009.
- Hintikka [1962] Jaakko Hintikka. Knowledge and Belief. Cornell University Press, 1962.
- Jacobs and Wallach [2021] Abigail Z. Jacobs and Hanna Wallach. Measurement and Fairness. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, page 375–385, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383097. doi: 10.1145/3442188.3445901.
- Jiang et al. [2020] Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, and Silvia Chiappa. Wasserstein Fair Classification. In Ryan P. Adams and Vibhav Gogate, editors, Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, pages 862–872. PMLR, 22–25 Jul 2020.
- Khan et al. [2023] Falaah Arif Khan, Denys Herasymuk, and Julia Stoyanovich. On Fairness and Stability: Is Estimator Variance a Friend or a Foe?, 2023.
- Kleinberg et al. [2017] Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent Trade-Offs in the Fair Determination of Risk Scores. In Christos H. Papadimitriou, editor, 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, volume 67 of LIPIcs, pages 43:1–43:23. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2017.
- Ko et al. [2023] Wei-Yin Ko, Daniel D’souza, Karina Nguyen, Randall Balestriero, and Sara Hooker. FAIR-Ensemble: When Fairness Naturally Emerges From Deep Ensembling, 2023.
- Kohavi [1996] Ron Kohavi. Scaling up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, KDD’96, page 202–207. AAAI Press, 1996.
- Kong and Dietterich [1995] Eun Bae Kong and Thomas G. Dietterich. Error-Correcting Output Coding Corrects Bias and Variance. In Armand Prieditis and Stuart Russell, editors, Machine Learning, Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, California, USA, July 9-12, 1995, pages 313–321, 1995.
- Larson et al. [2016] Jeff Larson, Surya Mattu, Lauren Kirchner, and Julia Angwin. How We Analyzed the COMPAS Recidivism Algorithm. Technical report, ProPublica, May 2016. URL https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm.
- Lin et al. [2020] Zhiyuan Jerry Lin, Jongbin Jung, Sharad Goel, and Jennifer Skeem. The limits of human predictions of recidivism. Science Advances, 6(7):eaaz0652, 2020. doi: 10.1126/sciadv.aaz0652.
- Mackun et al. [2021] Paul Mackun, Joshua Comenetz, and Lindsay Spell. Around Four-Fifths of All U.S. Metro Areas Grew Between 2010 and 2020. Technical report, United States Census Bureau, August 2021. URL https://www.census.gov/library/stories/2021/08/more-than-half-of-united-states-counties-were-smaller-in-2020-than-in-2010.html.
- Marx et al. [2020] Charles Marx, Flavio Calmon, and Berk Ustun. Predictive Multiplicity in Classification. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 6765–6774. PMLR, 13–18 Jul 2020. URL https://proceedings.mlr.press/v119/marx20a.html.
- Menon and Williamson [2018] Aditya Krishna Menon and Robert C. Williamson. The cost of fairness in binary classification. In Sorelle A. Friedler and Christo Wilson, editors, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Machine Learning Research, pages 107–118. PMLR, 23–24 Feb 2018.
- Pedregosa et al. [2011] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825–2830, November 2011. ISSN 1532-4435.
- Pleiss et al. [2017] Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On Fairness and Calibration. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
- Qian et al. [2021] Shangshu Qian, Hung Pham, Thibaud Lutellier, Zeou Hu, Jungwon Kim, Lin Tan, Yaoliang Yu, Jiahao Chen, and Sameena Shah. Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training. In Advances in Neural Information Processing Systems, volume 34, Red Hook, NY, USA, 2021. Curran Associates, Inc.
- Ramdas et al. [2015] Aaditya Ramdas, Nicolas Garcia, and Marco Cuturi. On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests, 2015. URL https://arxiv.org/abs/1509.02237.
- Rodolfa et al. [2021] K.T. Rodolfa, H. Lamba, and R Ghani. Empirical observation of negligible fairness–accuracy trade-offs in machine learning for public policy. Nature Machine Intelligence, 3, 2021. doi: https://doi.org/10.1038/s42256-021-00396-x.
- Smullyan [1986] Raymond M. Smullyan. Logicians Who Reason about Themselves. In Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge, TARK ’86, page 341–352, San Francisco, CA, USA, 1986. Morgan Kaufmann Publishers Inc. ISBN 0934613049.
- Stalnaker [2006] Robert Stalnaker. On Logics of Knowledge and Belief. Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 128(1):169–199, 2006. ISSN 00318116, 15730883. URL http://www.jstor.org/stable/4321718.
- Tamanaha [2004] Brian Z. Tamanaha. On the Rule of Law: History, Politics, Theory. Cambridge University Press, Cambridge, 2004.
- Wager [2020] Stefan Wager. Cross-Validation, Risk Estimation, and Model Selection: Comment on a Paper by Rosset and Tibshirani. Journal of the American Statistical Association, 115(529):157–160, 2020. doi: 10.1080/01621459.2020.1727235.
- Watson-Daniels et al. [2022] Jamelle Watson-Daniels, David C. Parkes, and Berk Ustun. Predictive Multiplicity in Probabilistic Classification, 2022.
- Yeh and Lien [2009] I.C. Yeh and C.H. Lien. The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications, 36:2473–2480, 2009.
- Zhang et al. [2020] Ruqi Zhang, A. Feder Cooper, and Christopher De Sa. AMAGOLD: Amortized Metropolis Adjustment for Efficient Stochastic Gradient MCMC. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, 2020.
Appendix Overview
The Appendix goes into significantly more detail than the main paper. It is organized as follows:
Appendix A: Extended Preliminaries
Appendix B: Additional Details on Variance and Self-Consistency
Appendix C: Related Work and Alternative Notions of Variance
Appendix D: Additional Details on Our Algorithmic Framework
  • D.1: Self-consistent ensembling with abstention
Appendix A Extended Preliminaries
A.1 Notes on notation and on our choice of terminology
Terminology. Traditionally, what we term “observed labels” are often referred to instead as the “ground truth” or “correct” labels [2, 35, 43, e.g.]. We avoid this terminology because, as the work on label bias has explained, these labels are often unreliable or contested [29, 12].
Sets, random variables, and instances. We use bold non-italic letters to denote random variables (e.g., $\mathbf{x}$), capital block letters to denote sets (e.g., $\mathcal{X}$), lower-case italic letters to denote scalars (e.g., $y$), bold italic lower-case letters to denote vectors (e.g., $\boldsymbol{x}$), and bold italic upper-case letters to denote matrices (e.g., $\boldsymbol{X}$). For a complete example, $\boldsymbol{x}$ is an arbitrary instance's feature vector, $\mathcal{X}$ is the set representing the space of instances ($\boldsymbol{x} \in \mathcal{X}$), and $\mathbf{x}$ is the random variable that can take on specific values of $\boldsymbol{x}$. We use this notation consistently, and thus do not always define all symbols explicitly.
A.2 Constraints on our setup
Our setup, per our definition of the learning process (Definition 1), is deliberately limited to studying the effects of variance due to changes in the underlying training dataset, with such datasets drawn from the same distribution. For this reason, Definition 1 does not include the data collection process or hyperparameter optimization (HPO), which can introduce further non-determinism into machine learning, and which are thus assumed to have already been completed.
Relatedly, variance-induced error can of course have other sources due to such non-determinism. For example, stochastic optimization methods, such as SGD and Adam, can cause fluctuations in test error; as, too, can choices in HPO configurations [14]. While each of these decision points is worthy of investigation with respect to their impact on fair classification outcomes, we aim to fix as many sources of randomness as possible in order to highlight the particular kind of arbitrariness that we describe in Sections 1 and 3. As such, we use the Limited-memory BFGS solver and fix our hyperparameters based on the results of an initial search (Section 5), for which we selected a search space through consulting related work such as Chen et al. [11].
A.3 Costs and the classification decision threshold
For reference, we provide a bit more of the basic background regarding the relationship between the classification decision threshold and the costs of false positives FP ($C_{01}$, the cost of predicting $1$ when the label is $0$) and false negatives FN ($C_{10}$, the cost of predicting $0$ when the label is $1$). We visualize the loss as follows:
| | Predict $\hat{y} = 0$ | Predict $\hat{y} = 1$ |
| Label $y = 0$ | TN: $0$ | FP: $C_{01}$ |
| Label $y = 1$ | FN: $C_{10}$ | TP: $0$ |
0-1 loss treats the cost of different types of errors equally ($C_{01} = C_{10} = 1$): false positives and false negatives are quantified as equivalently bad – they are symmetric. The case for which $C_{01} \neq C_{10}$ is asymmetric or cost-sensitive.
Altering the asymmetry of costs shifts the classification decision threshold $\tau$ applied to the underlying regressor $r$ to produce a classifier. We can see this by examining the behavior of the $r$ that we learn: $r$ estimates the probability of each label $y$ given $\boldsymbol{x}$, i.e., ideally we develop a good approximation of the distribution $p(\mathbf{y} \mid \mathbf{x})$. Ideally, $r$ will be similar to the Bayes optimal classifier, for which the classification rule produces classifications that yield the smallest weighted sum of the loss, where the weights are the probabilities of a particular label $y$ for a given $\boldsymbol{x}$; i.e., the rule sums over the possible labels:
$$\hat{y}^{*} = \arg\min_{\hat{y} \in \{0, 1\}} \sum_{y \in \{0, 1\}} \ell(y, \hat{y}) \, p(y \mid \boldsymbol{x}). \quad (4)$$
For binary classification, the terms of (4) in the sum for a particular $\boldsymbol{x}$ yield two cases: predicting $\hat{y} = 1$ incurs expected loss $C_{01}\, p(y = 0 \mid \boldsymbol{x})$, and predicting $\hat{y} = 0$ incurs expected loss $C_{10}\, p(y = 1 \mid \boldsymbol{x})$.
We can therefore break down the Bayes optimal classifier into the following decision rule, which we hope to approximate through learning. For an arbitrary $\boldsymbol{x}$, predict $\hat{y} = 1$ when
$$C_{01}\, p(y = 0 \mid \boldsymbol{x}) < C_{10}\, p(y = 1 \mid \boldsymbol{x}).$$
That is, to predict label $1$, the expected cost of mis-predicting when the label is $0$ (i.e., the cost of a false positive FP) must be smaller than the expected cost of mis-predicting when the label is $1$ (i.e., the cost of a false negative FN). In binary classification, $p(y = 0 \mid \boldsymbol{x}) = 1 - p(y = 1 \mid \boldsymbol{x})$. So, we can assign $p \equiv p(y = 1 \mid \boldsymbol{x})$ and $1 - p \equiv p(y = 0 \mid \boldsymbol{x})$, and rewrite the above as
$$\hat{y} = \arg\min \big( \underbrace{C_{10}\, p}_{\text{cost of predicting } 0}, \;\; \underbrace{C_{01}\, (1 - p)}_{\text{cost of predicting } 1} \big). \quad (5)$$
The decision boundary is the case for which both of the arguments to $\arg\min$ in (5) are equivalent (i.e., the expected costs of predicting a false positive and a false negative are equal), i.e.,
$$C_{01}\,(1 - p) = C_{10}\, p \;\;\Longrightarrow\;\; \tau = \frac{C_{01}}{C_{01} + C_{10}}.$$
For 0-1 loss, in which $C_{01} = C_{10} = 1$, $\tau$ evaluates to $0.5$. If we want to model asymmetric costs, then we need to change this decision threshold to account for which type of error is more costly. For example, let us say that false negatives are more costly than false positives, with $C_{10} > C_{01}$. This leads to a threshold $\tau = \frac{C_{01}}{C_{01} + C_{10}} < 0.5$, which biases the resulting classifier toward choosing the (generally cheaper to predict/more conservative) positive class.
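To make the threshold arithmetic concrete, here is a minimal sketch (ours, not part of the paper's released code) that converts a pair of illustrative costs into the decision threshold $\tau = \frac{C_{01}}{C_{01} + C_{10}}$ and applies it to predicted probabilities; the cost values in the example are hypothetical.

```python
import numpy as np

def cost_sensitive_threshold(c_fp: float, c_fn: float) -> float:
    """Decision threshold tau = C_FP / (C_FP + C_FN), derived above."""
    return c_fp / (c_fp + c_fn)

def classify(probs: np.ndarray, c_fp: float = 1.0, c_fn: float = 1.0) -> np.ndarray:
    """Threshold estimated P(y = 1 | x) at tau; 0-1 loss recovers tau = 0.5."""
    tau = cost_sensitive_threshold(c_fp, c_fn)
    return (probs >= tau).astype(int)

# Illustrative costs: false negatives twice as costly as false positives
# give tau = 1/3, pushing more instances into the positive class.
probs = np.array([0.2, 0.4, 0.6, 0.9])
print(classify(probs))                      # tau = 0.5  -> [0 0 1 1]
print(classify(probs, c_fp=1.0, c_fn=2.0))  # tau ~ 0.33 -> [0 1 1 1]
```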
A.4 The bootstrap method
In the bootstrap method, we treat each bootstrap-resampled training dataset $\hat{D}_b$ as equally likely. For each set-aside test example $(\boldsymbol{x}, y)$, we can approximate the expected error empirically by averaging the loss over the models trained on the replicates,
$$\widehat{\mathrm{Err}}(\boldsymbol{x}, y) \approx \frac{1}{B} \sum_{b = 1}^{B} \ell\big(y, h_{\hat{D}_b}(\boldsymbol{x})\big), \quad (6)$$
for a concrete number of replicates $B$. We estimate overall error for the test set by additionally averaging over each example instance $(\boldsymbol{x}, y)$ in the test set, which we can further delineate into false positive and false negative rates, or into group-specific error, FPR, and FNR, by computing separate averages according to the group attribute.
The bootstrap method exhibits less variance than cross-validation, but can be biased — in particular, pessimistic — with respect to estimating expected error. To reduce this bias, one can follow our setup in Definition 1, which splits the available data into train and test sets before resampling. For more information comparing the two methods, see Efron and Tibshirani [23, 24]. Further, recent work shows that, in relation to studying individual models, CV is in fact asymptotically uninformative regarding expected error [57].
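As a concrete illustration of this resampling setup, the sketch below trains one logistic regression per bootstrap replicate of a fixed train split and collects the per-test-example predictions reused throughout the rest of this appendix; the estimator settings and the replicate count of 101 are illustrative stand-ins, not the exact configuration in the released repository.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

def bootstrap_predictions(X_train, y_train, X_test, n_replicates=101, seed=0):
    """Train one model per bootstrap replicate of the (fixed) train split and
    return an (n_replicates, n_test) matrix of 0/1 predictions."""
    rng = np.random.RandomState(seed)
    preds = np.empty((n_replicates, X_test.shape[0]), dtype=int)
    for b in range(n_replicates):
        X_b, y_b = resample(X_train, y_train, random_state=rng)
        model = LogisticRegression(solver="lbfgs", max_iter=1000).fit(X_b, y_b)
        preds[b] = model.predict(X_test)
    return preds
```

Averaging the 0-1 losses down the columns of this matrix gives the per-example error estimate in (6); the same matrix can be reused below to compute empirical variance and self-consistency.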
Appendix B Additional Details on Variance and Self-Consistency
In this appendix, we provide more details on other types of statistical error (Appendix B.1), on variance (Appendix B.2) and self-consistency (Appendix B.3). Following this longer presentation of our metrics, we then provide some additional information on other definitions of variance that have been used in work on fair classification, and contextualize issues with these definitions that encouraged us to deviate from them in order to derive our definition of self-consistency (Appendix C).
B.1 Other statistical sources of error
Noise. Noise is traditionally understood as irreducible error; it is due to inherent randomness in the data, which cannot be captured perfectly accurately by a deterministic decision rule. Notably, noise is an aspect of the data collection pipeline, not the learning process (Definition 1). It is irreducible in the sense that it does not depend on our choice of training procedure or on how we draw training datasets, either in theory or in practice. Heteroskedastic noise across demographic groups is often hypothesized to be a source of unfairness in machine learning [12, 11]. Importantly, albeit somewhat confusingly, this is commonly referred to as label bias, where "bias" connotes discrimination, as opposed to the statistical bias that we mention here.
Unlike noise, bias and variance are traditionally understood as sources of epistemic uncertainty. These sources of error are reducible because they are contingent on the modeling choices we make in the learning process; if we knew how to model the task at hand more effectively, in principle, we could reduce bias and variance error.
Bias. Within the amount of reducible error, bias reflects the error associated with the chosen hypothesis class, and is therefore governed by decisions concerning the training procedure in the learning process (Definition 1). This type of error is persistent because it takes effect at the level of all possible models in the hypothesis class; in expectation, all models have the same amount of bias-induced error.
Whereas variance depends on stochasticity in the underlying training data, noise and bias error are traditionally formulated in relation to the Bayes optimal classifier — the best possible classifier that machine learning could produce for a given task [32, 2, 20]. Since the Bayes optimal classifier is typically not available in practice, we often cannot estimate noise or bias directly in experiments.
Of the three types of statistical error, it is only variance that seems to reflect the intuition in Figure 1 concerning the behavior of different possible models. This is because noise is a property of the data distribution; for a learning process (Definition 1), in expectation we can treat noise error as constant. Bias can similarly be treated as constant for the learning process: it is a property of the chosen hypothesis class, and thus is in expectation the same for each possible model. In Figure 1, we are keeping the data distribution and the hypothesis class constant; we are only changing the underlying subset of training data used to produce different models.
B.2 Our variance definition
We first provide a simple proof that explains the simplified version for our empirical approximation for variance in (1).
Proof.
For the $B$ models that we produce, we denote $\hat{Y} = \{h_1(\boldsymbol{x}), \ldots, h_B(\boldsymbol{x})\}$ to be the multiset of their predictions on $\boldsymbol{x}$, with $|\hat{Y}| = B = n_0 + n_1$, where $n_0$ and $n_1$ represent the counts of $0$- and $1$-predictions, respectively. We also set the cost of false positives to be $C_{01}$ and the cost of false negatives to be $C_{10}$.
Looking at the sum in (1) (i.e., the sum of pairwise losses $\ell\big(h_i(\boldsymbol{x}), h_j(\boldsymbol{x})\big)$ over ordered pairs $i \neq j$), each of the $n_0$ $0$-predictions will get compared to the other $n_0 - 1$ $0$-predictions and to the $n_1$ $1$-predictions. By the definition of $\ell$, each of the $n_0 - 1$ computations of $\ell(0, 0)$ evaluates to $0$ and each of the $n_1$ computations of $\ell(0, 1)$ evaluates to $C_{01}$. Therefore, the $0$-predictions contribute
$$n_0 n_1 C_{01}$$
to the sum in (1), and, by similar reasoning, the $1$-predictions contribute $n_1 n_0 C_{10}$. It follows that the total sum in (1) is
$$n_0 n_1 \big(C_{01} + C_{10}\big).$$
∎
The effect of costs on variance. As discussed in Appendix A.3, $C_{01}$ and $C_{10}$ can be related to changing the decision threshold $\tau$ applied to the regressor $r$ to produce the classifier $h$. We analyze the range of minimal and maximal empirical variance by examining the behavior of the simplified form above, i.e.,
$$\frac{\big(C_{01} + C_{10}\big)\, n_0 n_1}{B (B - 1)}. \quad (7)$$
Minimal variance. Either $n_0$ or $n_1$ (exclusively, since $B = n_0 + n_1 > 0$) will be $0$, with the other being $B$, making (7) equivalent to $0$.
Maximal variance. $n_0$ will represent half of $B$, with $n_1$ representing the other half. More particularly, if $B$ is even, $n_0 = n_1 = \frac{B}{2}$; or, without loss of generality, if $B$ is odd, $n_0 = \frac{B - 1}{2}$ and $n_1 = \frac{B + 1}{2}$. This means that (7) becomes
$$\frac{C_{01} + C_{10}}{B (B - 1)} \cdot \frac{B}{2} \cdot \frac{B}{2} = \frac{\big(C_{01} + C_{10}\big)\, B}{4 (B - 1)} \qquad \Big(\text{or, } \frac{C_{01} + C_{10}}{B (B - 1)} \cdot \frac{B - 1}{2} \cdot \frac{B + 1}{2} = \frac{\big(C_{01} + C_{10}\big)(B + 1)}{4 B}; \text{ it will not matter in the limit}\Big).$$
And, therefore,
$$\lim_{B \to \infty} \frac{\big(C_{01} + C_{10}\big)\, n_0 n_1}{B (B - 1)} = \frac{C_{01} + C_{10}}{4}. \quad (8)$$
It follows analytically that variance will be in the range $\big[0, \frac{C_{01} + C_{10}}{4}\big]$ in the limit. However, empirically, for a concrete number of models $B$, the maximal variance is $\frac{C_{01} + C_{10}}{4} + \epsilon$, for smaller positive $\epsilon$ as the number of models increases. The maximal variance will better approximate $\frac{C_{01} + C_{10}}{4}$ as $B$ gets larger, but will exceed it. For example, for 0-1 loss, $\frac{C_{01} + C_{10}}{4} = 0.5$; for $B = 101$, the maximal empirical variance is $\approx 0.505$.
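The closed form above is easy to check numerically; the helper below (a sketch assuming the pairwise-disagreement form of the empirical variance just derived) reproduces the minimal and near-maximal values for an ensemble of 101 models.

```python
import numpy as np

def empirical_variance(preds, c_fp=1.0, c_fn=1.0):
    """Pairwise-disagreement variance over B binary predictions, using the
    simplified closed form (C_fp + C_fn) * n0 * n1 / (B * (B - 1))."""
    preds = np.asarray(preds)
    B = len(preds)
    n1 = int(preds.sum())
    n0 = B - n1
    return (c_fp + c_fn) * n0 * n1 / (B * (B - 1))

# Unanimous predictions give 0; a 50/51 split over B = 101 models gives
# ~0.505 under 0-1 loss, just above the limiting maximum of 0.5.
print(empirical_variance([1] * 101))            # 0.0
print(empirical_variance([0] * 50 + [1] * 51))  # ~0.50495
```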
B.3 Deriving self-consistency from variance
In this appendix, we describe the relationship between variance (Definition 2) and self-consistency (Definition 3) in more detail, and show that the empirical self-consistency lies in $[0.5 - \epsilon,\, 1]$, for small positive $\epsilon$ that shrinks as the number of models increases.
Proof.
Note that, by the definition of 0-1 loss, the indicator of disagreement between two predictions is exactly the 0-1 loss between them, i.e., $\mathbb{1}\big[h_i(\boldsymbol{x}) \neq h_j(\boldsymbol{x})\big] = \ell_{0\text{-}1}\big(h_i(\boldsymbol{x}), h_j(\boldsymbol{x})\big)$, so
$$\widehat{\mathrm{SC}}(\boldsymbol{x}) = 1 - \frac{1}{B (B - 1)} \sum_{i \neq j} \mathbb{1}\big[h_i(\boldsymbol{x}) \neq h_j(\boldsymbol{x})\big]. \quad (9)$$
By the definition of the indicator function $\mathbb{1}$, the sum counts exactly the ordered pairs of disagreeing predictions, i.e., $\sum_{i \neq j} \mathbb{1}\big[h_i(\boldsymbol{x}) \neq h_j(\boldsymbol{x})\big] = 2\, n_0 n_1$.
Therefore, rearranging,
$$\widehat{\mathrm{SC}}(\boldsymbol{x}) = 1 - \frac{2\, n_0 n_1}{B (B - 1)}.$$
∎
We note that (2) is independent of the specific costs $C_{01}$ and $C_{10}$. Nevertheless, the choice of decision threshold $\tau$ will of course impact the values of $n_0$ and $n_1$ in practice. In turn, this will impact the degree of self-consistency that a learning process exhibits empirically. In short, the measured degree of self-consistency in practice will depend on the choice of $\tau$. Further, following an analysis similar to the one for variance above, we can show that $\widehat{\mathrm{SC}}$ will be a value in $[0.5 - \epsilon,\, 1]$, for small positive $\epsilon$. This reality is reflected in the results that we report for our experiments, for which $B = 101$ yields minimal $\widehat{\mathrm{SC}} \approx 0.495$.
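For concreteness, the sketch below evaluates the closed form $\widehat{\mathrm{SC}} = 1 - \frac{2 n_0 n_1}{B(B-1)}$ on a single column of the prediction matrix produced by the earlier bootstrap sketch; the example vote splits are illustrative.

```python
import numpy as np

def self_consistency(preds):
    """Empirical self-consistency: the fraction of ordered pairs of the B
    predictions that agree, i.e., 1 - 2 * n0 * n1 / (B * (B - 1))."""
    preds = np.asarray(preds)
    B = len(preds)
    n1 = int(preds.sum())
    n0 = B - n1
    return 1.0 - 2.0 * n0 * n1 / (B * (B - 1))

print(self_consistency([1] * 101))            # 1.0    (unanimous)
print(self_consistency([0] * 50 + [1] * 51))  # ~0.495 (minimal for B = 101)
```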
Cost-independence of self-consistency
Intuitively, self-consistency of a learning process is a relative metric; it is a quantity that is measured relative to the learning process. We therefore conceive of it as a metric that is normalized with respect to the learning process (Definition 1). Such a process can be maximally self-consistent, but it does not make sense for it to be more than that (reflected by the maximum value of $1$).
In contrast, as discussed in Appendix B, variance can measure much greater than $1$, depending on the magnitude of the sum of the costs $C_{01}$ and $C_{10}$, in particular, for (8). However, it is not necessarily meaningful to compare the magnitude of variance across classifiers. Recall that the effect of changing costs $C_{01}$ and $C_{10}$ corresponds to a change in the binary classification decision threshold, with $\tau = \frac{C_{01}}{C_{01} + C_{10}}$. It is the relative costs that change the decision threshold, not the costs themselves. For example, a classifier with costs $C_{01}$ and $C_{10}$ is equivalent to a classifier with costs $2 C_{01}$ and $2 C_{10}$ (for both, $\tau = \frac{C_{01}}{C_{01} + C_{10}}$), but the latter would measure a larger magnitude for variance.
It is this observation that grounds our cost-independent definition of self-consistency in Section 3 and Appendix B.3. Given the fact that the magnitude of variance measurements can complicate our comparisons of classifiers, as discussed above, we focus on the part of variance that encodes information about arbitrariness in a learning process: its measure of (dis)agreement between classification decisions that result from changing the training dataset. We could alternatively conceive of self-consistency as the additive inverse of normalized variance, but this is more complicated because it would require a computation that depends on the specific costs $C_{01}$ and $C_{10}$.
B.3.1 Additional details on our choice of self-consistency metric
Terminology. In logic, the idea of consistent belief has to do with ensuring that we do not draw conclusions that contradict each other. This is much like the case that we are modeling with self-consistency — the idea that underlying changes in the dataset can lead to predictions that are directly in contradiction [54, 36, 55]. Ideas of consistency in legal rules have a similar flavor: legal rules should not contradict each other, and legal judgments should not contradict each other (this is at least an aspiration for the law, based on common ideas in legal theory [31, 56]). For both of these reasons, the term "consistent" has a natural mapping to our usage of it in this paper. This is especially true in the legal theory case, given that inconsistency in the law is often considered arbitrary and a source of discrimination.
We nevertheless realize that the word "consistent" is overloaded with many meanings in statistics and in different subfields of computer science, like distributed computing [60, 1, e.g.,]. However, due to the clear relationship between our purposes concerning arbitrariness and discrimination and definitions in logic and the law, we believe that it is the most appropriate term for our work.
Quantifying systematic arbitrariness. We depict systematic arbitrariness using the Wasserstein-1 distance [52]. This is the natural distance for us to consider because it has a closed form when being applied to CDFs. For our purposes, it should be interpreted as computing the total disparity in self-consistency by examining all possible self-consistency levels at once.
Formally, for two groups $g$ and $g'$ with respective SC CDFs $F_{g}$ and $F_{g'}$, the Wasserstein-1 distance is
$$W_1\big(F_{g}, F_{g'}\big) = \int_{\mathbb{R}} \big| F_{g}(t) - F_{g'}(t) \big| \, dt.$$
(We consider the Wasserstein distance for one-dimensional distributions. More generally, the $p$-th Wasserstein distance for such distributions requires the inverse CDFs to be well-defined, i.e., the CDFs need to be strictly monotonic. This is fine to assume for our purposes. We have to relax the formal definition of the Wasserstein distance, anyway, when we estimate it in practice with a discrete number of samples.)
For self-consistency, which we have defined on $[0.5, 1]$, this is just
$$W_1\big(F_{g}, F_{g'}\big) = \int_{0.5}^{1} \big| F_{g}(t) - F_{g'}(t) \big| \, dt.$$
Empirically, we can approximate this with a discrete sum over evenly spaced self-consistency levels $t_k$,
$$\hat{W}_1 = \sum_{k} \big| \hat{F}_{g}(t_k) - \hat{F}_{g'}(t_k) \big| \, \Delta t.$$
We typically compute this approximation over the attainable $\widehat{\mathrm{SC}}$ levels, which yields the discrete estimate of systematic arbitrariness that we use to produce our CDF plots.
When measuring systematic arbitrariness with abstention, we re-assign the probability mass of the abstention set when constructing the CDFs. This makes sense because we are effectively re-defining the CDFs to not include instances that exhibit below a minimal amount of self-consistency. This also makes comparing systematic arbitrariness across CDFs for different interventions more interpretable: it allows us to keep the number of experimental samples for the empirical CDF measures constant when computing averages, so abstaining always has the effect of decreasing systematic arbitrariness. If we did not do this, then, because the Wasserstein-1 distance is an average, changing the underlying set of examples would change the measured Wasserstein-1 distance — possibly leading to a relative increase (if there are greater discrepancies between the group-conditional CDF curves at higher self-consistency levels).
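The sketch below estimates this quantity from two lists of per-example $\widehat{\mathrm{SC}}$ values by integrating the absolute difference of their empirical CDFs over a grid on $[0.5, 1]$; the grid size and the synthetic group distributions are illustrative assumptions.

```python
import numpy as np

def systematic_arbitrariness(sc_group_a, sc_group_b, grid_size=1000):
    """Wasserstein-1 distance between two empirical SC distributions,
    computed as the integral of |CDF_a - CDF_b| over [0.5, 1]."""
    grid = np.linspace(0.5, 1.0, grid_size)
    cdf_a = np.searchsorted(np.sort(sc_group_a), grid, side="right") / len(sc_group_a)
    cdf_b = np.searchsorted(np.sort(sc_group_b), grid, side="right") / len(sc_group_b)
    return np.trapz(np.abs(cdf_a - cdf_b), grid)

# Synthetic example: group A is mostly self-consistent, group B is not.
rng = np.random.default_rng(0)
sc_a = np.clip(rng.normal(0.95, 0.03, 500), 0.5, 1.0)
sc_b = np.clip(rng.normal(0.75, 0.10, 500), 0.5, 1.0)
print(systematic_arbitrariness(sc_a, sc_b))  # > 0 indicates a group disparity
```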
Appendix C Related Work and Alternative Notions of Variance
As noted in Section 6, prior work that discusses variance and fair classification often relies on the definition of variance from Domingos [20]. We deviate from prior work and provide our own definition for two reasons: 1) variance in Domingos [20, 21] does not cleanly extend to cost-sensitive loss, and 2) the reference point for measuring variance in Domingos [20, 21] — the main prediction — can be unstable/brittle in practice. We start by explaining the Domingos [20, 21] definitions, and then use these definitions to support our rationale.
C.1 Defining variance in relation to a “main prediction”
To begin, we restate the definitions from Domingos [20, 21] concerning the expected model (called the main predictor). We change the notation from Domingos to align with our own, as we believe these changes provide greater clarity concerning meaning, significance, and consequent takeaways. Nevertheless, these definitions for quantifying error are equivalent to those in Domingos [21], and they fundamentally depend on human decisions for setting up the learning process.
Domingos [20, 21] define predictive variance in relation to this single point of reference. This reference point captures the general, expected behavior of models that could be produced by the chosen learning process. We can think of each prediction of this point of reference as the "central tendency" of the predictions made by all possible models for a given example instance $\boldsymbol{x}$. Formally,
Definition 4.
The main prediction $\hat{y}_{\mathrm{main}}(\boldsymbol{x})$ is the prediction value that generates the minimum average loss with respect to all of the predictions generated by the different possible models trained on different datasets $D$. It is defined as the expectation over training sets for a loss function $\ell$, given an example instance $\boldsymbol{x}$. That is,
$$\hat{y}_{\mathrm{main}}(\boldsymbol{x}) = \arg\min_{y'} \; \mathbb{E}_{D}\Big[ \ell\big(h_{D}(\boldsymbol{x}),\, y'\big) \Big]. \quad (10)$$
The main predictor produces the main prediction $\hat{y}_{\mathrm{main}}(\boldsymbol{x})$ for each $\boldsymbol{x}$.
What (10) evaluates to in practice of course depends on the loss function $\ell$. For squared loss, the main prediction is defined as the mean prediction of all the models' predictions [20, 43]. Following Kong and Dietterich [43], for 0-1 loss Domingos [20] defines the main prediction as the mode/majority vote — the most frequent prediction for an example instance $\boldsymbol{x}$. We provide a more formal discussion of why this is the case when we discuss problems with the main prediction for cost-sensitive loss (Appendix C.2). Domingos [20, 21] then define variance in relation to specific models $h_{D}$ and the main predictor:
Definition 5.
The variance-induced error for a fresh example instance $\boldsymbol{x}$ is
$$\mathrm{var}(\boldsymbol{x}) = \mathbb{E}_{D}\Big[ \ell\big(\hat{y}_{\mathrm{main}}(\boldsymbol{x}),\, h_{D}(\boldsymbol{x})\big) \Big],$$
where $\hat{y}_{\mathrm{main}}(\boldsymbol{x})$ is the main prediction and the $h_{D}(\boldsymbol{x})$ are the predictions for the different models $h_{D}$.
That is, for a specific , it is possible to compare the individual predictions to the main prediction . Using the main prediction as a reference point, one can compute the extent of disagreement of individual predictions with the main prediction as a source of error. It is this definition (Definition 5) that prior work on fair classification tends to reference when discussing variance [11, 6]. However, as we discuss in more detail below (Appendix C.2), many of the theoretical results in Chen et al. [11] follow directly from the definitions in Domingos [20], and the experiments do not actually use those results in practice. Black et al. [6], in contrast, presents results that rely heavily on the main prediction in Domingos [20].
C.2 Why we choose to avoid computing the main prediction
We now compare our definition of variance (Definition 2) to the one in Domingos [20, 21] (Definition 5). This comparison makes clear in detail why we deviate from prior work that relies on Domingos [20, 21].
No decomposition result. Following from above, it is worth noting that by not relying on the main prediction, we lose the applicability of the decomposition result that Domingos [20, 21] develop. However, we believe that this is fine for our purposes, as we are interested in the impact of empirical variance specifically on fair classification outcomes. We do not need to reason about bias or noise in our results to understand the arbitrariness with which we are concerned (Section 3.1). It is also worth noting that prior work on fair classification that leverages Domingos [20] does not leverage the decomposition, either. Chen et al. [11] extends the decomposition to subgroups in the context of algorithmic fairness (this just involves splitting the conditioning on an example instance of features into conditioning on an example instance whose features are split to include the group attribute), and then informally translates the takeaways of the Domingos [20] result to a notion of a "level of discrimination." Moreover, unlike our work, these prior studies do not actually measure variance directly in their experiments.
No need to compute a “central tendency”. In Domingos [20, 21], variance is defined in terms of both the loss function and the main prediction . This assumes that the main prediction is well-defined for the loss function, and that it is well-behaved. While there is a simple interpretation of the main prediction for squared loss (the mean) and for 0-1 loss (the mode/majority vote), it is significantly messier for cost-sensitive loss, which is a more general formulation that includes 0-1 loss. Domingos [20, 21] does not discuss this explicitly, so we derive the main prediction for cost-sensitive loss ourselves below. In summary:
- The behavior of the main prediction for cost-sensitive loss reveals that the decomposition result provided in the extended technical report (Theorem 4, Domingos [21]) is in fact very carefully constructed. We believe that this construction is so specific that it is not practically useful (it is, in our opinion, hardly "unified" in a more general sense, as it is so carefully adapted to specific loss functions and their behavioral special cases).
- By decoupling from the need to compute a main prediction as a reference point, our variance definition is ultimately much simpler and more general, with respect to how it accommodates different loss functions. (This also reveals a subtle ambiguity in the definition of the loss in Domingos [20, 21]: neither paper explicitly defines the signature of the loss function $\ell$, so, for the main prediction (Definition 4) and variance (Definition 5), there is a lack of clarity in what constitutes a valid domain for $\ell$. Computing the main prediction compares candidate values to the models' predictions, and the definition of variance compares the main prediction to the models' predictions, so the arguments to $\ell$ are not guaranteed to lie in the label space. This may be fine in practice, especially for squared loss and 0-1 loss (the losses with which Domingos [20] explicitly contends), but it does arguably present a problem formally with respect to generalizing.)
Brittleness of the main prediction. For high-variance instances, the main prediction can flip-flop from $1$ to $0$ and back. While the strategy in Black et al. [6] is to abstain from prediction in these cases, we believe that a better alternative is to understand that the main prediction is not very meaningful for high-variance examples more generally. That is, for these examples, the ability (and reliability) of breaking close ties to determine the main (simple-majority) prediction is not the right approach. Instead, we should ideally be able to embed more confidence into our process than a simple-majority-vote determination. (This is also another aspect of the simplicity of not needing to define and compute a "central tendency" prediction: we do not need to encode a notion of a tie-breaking vote to determine a "central tendency.") The main prediction can be unclear in cases for which there is no "main outcome" (e.g., Individual 2 in Figure 1), as the vote is split exactly down the middle. By avoiding the need to vote on a main reference point, we also avoid having to ever choose that reference point arbitrarily. Put differently, in cases for which we can reliably estimate the main prediction, but the vote margin is slim, we believe that the main prediction is still uncertain, based on our understanding of variance, intuited in Figure 1. The main prediction can be reliable, but it can still, in this view, be arbitrary (Section 6). With a simple-majority voting scheme, there can be huge differences between predictions that are mostly in agreement and those that are just over the majority reference point. Freeing ourselves of this reference point via our self-consistency metric, we can define thresholds of self-consistency as our criterion for abstention (where simple-majority voting is one instantiation of that criterion). (This problem is worse for cost-sensitive loss, where the main prediction is not always the majority vote; see below.)
C.2.1 The main prediction and cost-sensitive loss
We show here that, for cost-sensitive loss, the main prediction depends on the majority class being predicted, the asymmetry of the costs, and occasional tie-breaking, such that the main prediction can either be the majority vote or the minority vote. Domingos [21] provides an error decomposition in Theorem 4, but does not explain the effects on the main prediction. We do so below, and also call attention to 0-1 loss as a special case of cost-sensitive loss, for which the costs are symmetric (and equal to 1). We first summarize the takeaways of the analysis below:
- Symmetric loss: The main prediction is the majority vote.
- Asymmetric loss: Compute 1) the relative cost difference (i.e., the ratio of the two misclassification costs), 2) the majority class (and, as a result, the minority class) among the ensemble's predictions, and 3) the relative difference in the number of votes in the majority and minority classes (i.e., what we call the vote margin).
  - If the majority class has the lower cost of misclassification, then the main prediction is the majority vote.
  - If the majority class has the higher cost of misclassification, then the main prediction depends on the asymmetry of the costs and the vote margin, i.e.,
    - If the relative cost difference equals the relative vote difference, we can choose the main prediction to be either class (but must make this choice consistently).
    - If the relative cost difference is greater than the relative vote difference, the minority vote is the main prediction.
    - If the relative cost difference is less than the relative vote difference, the majority vote is the main prediction.
Proof.
Let us consider cost-sensitive loss for binary classification, for which the labels are $\{0, 1\}$ and we have potentially-asymmetric loss for misclassifications, i.e., a false-positive cost $C_{01}$ and a false-negative cost $C_{10}$, with $C_{01}, C_{10} > 0$ and zero loss for correct classifications. 0-1 loss is a special case of this type of loss, for which $C_{01} = C_{10} = 1$.
Let us say that the total number of models trained is $B$, which we evaluate on an example instance $\boldsymbol{x}$. Let us set $B = 2k + m$, with $k \geq 0$ and $m \geq 1$. We can think of $k$ as the common number of votes that each class has, and $m$ as the margin of votes between the two classes. Given this setup, this means that $B \geq 1$, i.e., we always have the predictions of at least 1 model to consider, and, since we take $B$ to be odd, $m$ is always odd. This means that there is always a strict majority classification.
Without loss of generality, of these $B$ model predictions on $\boldsymbol{x}$, there are $k + m$ class-$1$ predictions and $k$ class-$0$ predictions (i.e., we do our analysis with class $1$ as the majority prediction). To compute the main prediction, each candidate value $y' \in \{0, 1\}$ gets compared to each of the $B$ predictions. That is, there are two cases to consider:
- Case $y' = 0$: $y' = 0$ will get compared $k$ times to the $0$s in the prediction multiset, for which the loss is $0$; $y' = 0$ will similarly get compared $k + m$ times to the $1$s, for which (by Definition 4) the comparison incurs the false-negative cost $C_{10}$. By definition of expectation, the expected loss is
$$\frac{(k + m)\, C_{10}}{B}. \quad (11)$$
- Case $y' = 1$: Similarly, the label $y' = 1$ will also get compared $k + m$ times to the $1$s in the prediction multiset, for which the comparison is $0$; $y' = 1$ will also be compared $k$ times to the $0$s, for which the comparison incurs the false-positive cost $C_{01}$. The expected loss is
$$\frac{k\, C_{01}}{B}. \quad (12)$$
We need to compare these two cases for different possible values of $C_{01}$ and $C_{10}$ to understand which expected loss is minimal, which will determine the main prediction that satisfies Equation (10). The three different possible relationships are $C_{01} = C_{10}$ (symmetric loss), and $C_{01} < C_{10}$ and $C_{01} > C_{10}$ (asymmetric loss). Since the results of the two cases above share the same denominator $B$, we just need to compare their numerators, (11) and (12).
Symmetric Loss (0-1 Loss). When $C_{01} = C_{10} = C$, the numerators in (11) and (12) yield expected losses proportional to $(k + m)\, C$ and $k\, C$, respectively. We can rewrite the numerator for (12) as $k\, C = (k + m)\, C - m\, C$, which makes the comparison of numerators $(k + m)\, C - m\, C < (k + m)\, C$, i.e., we are in the case (12) $<$ (11). This means that the case of (12) is the minimal one; the expected loss for class $1$, the most frequent class, is the minimum, and thus the most frequent/majority-vote class is the main prediction. An analogous result holds if we instead set the most frequent class to be $0$. More generally, this holds for all symmetric losses, for which $C_{01} = C_{10}$.
For symmetric losses, the main prediction is the majority vote of the predictions in the ensemble.
Asymmetric Loss. For asymmetric/cost-sensitive loss, we need to examine two sub-cases: $C_{01} < C_{10}$ and $C_{01} > C_{10}$.
- Case $C_{01} < C_{10}$: Here $k\, C_{01} < (k + m)\, C_{10}$, given that $k < k + m$ and $C_{01} < C_{10}$. Therefore, since the expected loss in (12) is minimal and associated with class $1$ (the most frequent class in our setup), the majority vote is the main prediction. We can achieve an analogous result if we instead set $0$ as the majority class.
For asymmetric losses, the main prediction is the majority vote of the predictions in the ensemble, if the majority class has a cheaper cost associated with misclassification (i.e., if the majority class is $1$ and $C_{01} < C_{10}$, or if the majority class is $0$ and $C_{10} < C_{01}$).
- Case $C_{01} > C_{10}$: If $C_{01} > C_{10}$, the main prediction depends on how asymmetric the costs are and how large the vote margin ($m$) between class votes is. There are 3 sub-cases:
  - Case of cost equality: We can look at the relative asymmetric cost difference between the minority-class-associated cost (above, $C_{01}$, without loss of generality) and the majority-class-associated cost (above, $C_{10}$, without loss of generality), i.e., the ratio $\frac{C_{01}}{C_{10}}$. If that relative cost difference is equal to the relative difference of the votes between the majority and minority classes (i.e., $\frac{k + m}{k}$), then the expected costs of predicting either $0$ or $1$ are equal. That is, we can rearrange terms as a ratio of costs to votes:
$$k\, C_{01} = (k + m)\, C_{10} \;\;\iff\;\; \frac{C_{01}}{C_{10}} = \frac{k + m}{k}. \quad (13)$$
  For asymmetric loss, when the majority-class-associated cost is less than the minority-class-associated cost and the expected losses are equal, the main prediction is either $0$ or $1$ (and we must make this choice consistently).
  - Case $\frac{C_{01}}{C_{10}} > \frac{k + m}{k}$: If the relative cost difference is greater than the relative difference of the votes between the majority and minority classes, then the minority vote yields the minimum cost and is the main prediction (above, $0$, without loss of generality; an analogous result holds if we had set the majority vote to be $0$ and the minority vote to be $1$). Following (13) above, this is the same as $k\, C_{01} > (k + m)\, C_{10}$.
  For asymmetric loss, when the majority-class-associated cost is less than the minority-class-associated cost, it is possible for the minority class to have a greater associated loss. In this case, the minority vote is the main prediction.
  - Case $\frac{C_{01}}{C_{10}} < \frac{k + m}{k}$: If the relative cost difference is less than the relative difference of the votes between the majority and minority classes, then the majority vote yields the minimum cost and is the main prediction (above, $1$, without loss of generality; an analogous result holds if we had set the majority vote to be $0$ and the minority vote to be $1$). Following (13) above, this is the same as $k\, C_{01} < (k + m)\, C_{10}$.
  For asymmetric loss, when the majority-class-associated cost is less than the minority-class-associated cost, it is possible for the majority class to have a greater associated loss. In this case, the majority vote is the main prediction.
∎
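To make this case analysis concrete, the following sketch (our illustration, using the cost convention from the reconstruction above, in which predicting $1$ against a $0$-vote costs $C_{01}$ and predicting $0$ against a $1$-vote costs $C_{10}$) computes the main prediction from vote counts and shows that an asymmetric cost can hand the main prediction to the minority vote.

```python
def main_prediction(n0, n1, c_fp=1.0, c_fn=1.0):
    """Main prediction over n0 zero-votes and n1 one-votes: pick the label
    whose expected comparison loss against the votes is smallest."""
    loss_if_predict_1 = n0 * c_fp  # each disagreeing 0-vote costs C_01
    loss_if_predict_0 = n1 * c_fn  # each disagreeing 1-vote costs C_10
    if loss_if_predict_0 == loss_if_predict_1:
        return 1  # tie: either label is valid; we break ties consistently
    return 1 if loss_if_predict_1 < loss_if_predict_0 else 0

# Symmetric (0-1) loss: the majority vote wins (51 of 101 votes for class 1).
print(main_prediction(n0=50, n1=51))                      # -> 1
# A large false-positive cost can make the minority vote the main prediction.
print(main_prediction(n0=50, n1=51, c_fp=3.0, c_fn=1.0))  # -> 0
```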
C.3 Putting our work in conversation with research on model multiplicity
A line of related work to ours concerns model multiplicity and fairness [58, 47, 7]. This work builds off of an observation made by Breiman [10] regarding how there are multiple possible models of the same problem that exhibit similar degrees of accuracy. This set of multiple possible models of similar accuracy is referred to as the Rashomon set [10].
Work on model multiplicity has recently become fashionable in algorithmic fairness. In an effort to develop more nuanced model selection metrics beyond looking at just fairness and accuracy for different demographic groups, work at the intersection of model multiplicity and fairness tends to examine other properties of models in the Rashomon set in order to surface additional metrics for determining which model to use in practice.
At first glance, this work may seem similar to what we investigate here, but we observe four key differences (we defer discussion of Black et al. [6] to Appendix C.4):
1. Model multiplicity places conditions on accuracy and fairness in order to determine the Rashomon set. We place no such conditions on the models that a learning process (Definition 1) produces; we simulate the distribution over possible models without making any claims about the associated properties of those models.
2. Model multiplicity makes observations about the Rashomon set with the aim of still ultimately putting forward criteria for helping to select a single model. While the metrics used to inform these criteria include variance, most often work on model multiplicity still aims to choose one model to use in practice.
3. Much of the work on model multiplicity emphasizes theoretical contributions, whereas our emphasis is on more experimental contributions. In conjunction with the first point, of ultimately trying to arrive at a single model, this work is also trying to make claims with respect to the Bayes-optimal model. Given our empirical focus — of what we can actually produce in practice — claims about optimality are not our concern.
4. We focus specifically on variance reduction as a way to mitigate arbitrariness. We rely on other work, coincidentally contributions also made by Breiman, to study arbitrariness [8], and emphasize the importance of using ensemble models to produce predictions or abstention from prediction. We do not study the development of model selection criteria to pick a single model to use in practice; we use self-consistency to give a sense of predictive confidence about when to predict or not. We always select an ensemble model — regardless of whether that model is produced by simple or super ensembling (Section 4) — and then use a user-specified level of self-consistency to determine when that model actually produces predictions.
These differences ultimately lead to very different methods for making observations about fairness. Importantly, we can study the arbitrariness of the underlying learning process with a bit more nuance. For example, it could be the case that a particular task is just impossible to get right for some large subset of the test data (and this would be reflected in the Rashomon set of models), but for some portion of it there is a high amount of self-consistency, for which we may still want to produce predictions.
Further, based on our experimental approach, we highlight completely different normative problems than those highlighted in work on model multiplicity (notably, see Black et al. [7]). So, in short, while model multiplicity deals with themes related to our work — issues of model selection, problem formulation, variance, etc. — the goals of that work are ultimately different from, though potentially complementary to, those in our paper.
For example, a potentially interesting direction for future work would be to measure how metrics from work on model multiplicity behave in practice in light of the ensembling methods we present here. We could run experiments using Algorithm 1 and investigate model multiplicity metrics for the underlying ensembled models. However, we ultimately do not see a huge advantage to doing this. Our empirical results indicate that variance is generally high, and has led to reliability issues regarding conclusions about fairness and accuracy. In fairness settings and available benchmarks, we find that the most important point is that variance has muddled conclusions. Under these circumstances, ensembling with abstention based on self-consistency seems a reasonable solution, in contrast to finding a single best model in the Rashomon set that attains other desired criteria.
C.4 Concurrent work
There are several related papers that either preceded or came after this work’s public posting. Some of this work is clearly concurrent, given the time frame. Other works that came after ours are not necessarily concurrent, but are either independent and unaware of our paper, or build on our work.
Setting the stage in 2021. The present work was scoped in 2021, in direct response to the initial study by Forde et al. [28] and critical review by Cooper and Abrams [12]. Forde et al. [28] was one of the first (if not the first) paper to note that variance is overlooked in problem formulations that consider fairness. However, it was limited in scope and also dealt with deep learning settings, which have multiple sources of non-determinism that can be difficult to tease apart with respect to their effects on variance.
Cooper and Abrams [12] notes important, overlooked normative assumptions in the fairness-accuracy trade-off problem formulation, and suggests that this formulation is tautological. Our work is a natural direction for future research in this respect – to see how, in practice, the fairness-accuracy trade-off behaves after we account for variance. Indeed, we find that there is often no such trade-off, but for different reasons than those suggested by Cooper and Abrams [12]. We expected there to be residual label bias that contributes to noise-induced error, but ultimately did not really observe this in practice. In these respects, our work both strengthens and complements these prior works. We support their claims, and go significantly beyond the work they did in order to provide such support. Further, our results suggest additional conclusions about experimental reliability in algorithmic fairness.
Variance and abstention-based ensembling. Black et al. [6] is concurrent work that slightly preceded our public posting. This work is similarly interested in variance reduction, ensembling, and abstention in fairness settings, but fundamentally studies these topics in a different manner. We address four differences:
1. Black et al. [6] does not take the wide-ranging experimental approach that we take. While we both study variance and fairness, our work also considers the practice of fair classification research as an object of study. It is for these reasons that we do so many experiments on benchmark datasets, and clean and release another dataset for others to use.
2. They rely on the definition of variance from Domingos [20] in their work, likely building on the choice made by Chen et al. [11] to use this definition. Much of this Appendix is devoted to discussing Domingos [20, 21] and his definition of variance. The overarching takeaways from our discussion are that 1) there are technical problems with this definition (which have been noted by others that investigated the bias-variance-noise trade-off for 0-1 loss in the early 2000s), 2) the definition does not naturally extend to cost-sensitive loss, and 3) the main prediction can be unstable in practice and thus should not be the criterion for investigating arbitrariness (indeed, relying on the main prediction just pushes arbitrariness into that definition). While Black et al. [6] observes that variance is an important consideration for fairness, they ultimately focus on reliable estimation of the main prediction as the criterion for abstention in their ensembling method. While this kind of reliability is important, it does not deal with the general problem of arbitrary predictions (i.e., it is possible to have a reliable main prediction that is still effectively arbitrary). As a result, the nature of when and how to abstain is very different from ours. We instead base our criterion on a notion of confidence in the prediction, and we allow for flexibility around when to abstain when predictions are too arbitrary.
3. As a result of the above two differences, the claims and conclusions in both of our works are different. While there are similar terms used in both works (e.g., variance, abstention), which may make the works seem overlapping on a cursory read, our definitions, methods, claims, and conclusions are non-overlapping. For example, as stated in 1., while Black et al. [6]'s use of successful ensembles is intended to address individual-level arbitrariness, by relying on traditional bagging (simple-majority-vote ensembling) and the definition of variance from Domingos [20] that encodes a main prediction, arbitrariness gets pushed into the aggregation rule. If they can estimate the mode prediction reliably, they do not abstain; the mode, however, may still be effectively arbitrary. Our measure of arbitrariness is more direct and more configurable. We can avoid such degenerate situations, as in the example we give for making reliable but arbitrary predictions in Black et al. [6].
4. We also describe a method for recursively ensembling in order to achieve different trade-offs between abstention and prediction. This type of strategy is absent from Black et al. [6].
Deep learning. Qian et al. [51] is work that came after Forde et al. [28]. They, too, conduct a wide-ranging empirical study of variance and fairness, but focus on deep learning settings. As a result, they are not examining the fair classification experimental setup that is most common in the field. They therefore make different claims about reliability, which have a similar flavor to those that we make here. However, because of our setup, we are able to probe these claims much more deeply (due in part to model/problem size and to being able to limit non-determinism solely to sampling the training data). We mention this work because of its close relationship to Forde et al. [28], which in part inspired this study.
Ko et al. [41] is another deep learning fairness paper. It was posted publicly months after our study, and examines non-overlapping settings and tasks. While the results are similar — we find fairness after ensembling — it is again fundamentally different (along the lines of Qian et al. [51] and Forde et al. [28]) because it does not study common non-deep-learning setups. They also do not study arbitrariness, which is one of the main purposes of our paper.
Variance in fair classification. Khan et al. [39] is concurrent work that studies the same problem that we study, but also takes a different approach. For one, they bake a notion of 0-1 loss into their definitions. In this respect, our definition of self-consistency generalizes the definitions in their paper. While they run more types of models than we do (we initially ran more, but ultimately stopped because the results were largely similar for the more common model types), they do not cover as many datasets as we do. They also do not study arbitrariness or abstention-based ensembling to deal with it, and they do not release a dataset. Further, because they study fewer empirical tasks than we do, and because they do not examine abstention-based ensembling, they do not surface or make claims about the experimental reliability issues that we observe. They do not make claims about the fundamental problem that we observe: that variance is the culprit for much observed algorithmic unfairness in classification; in practice, we do not seem to learn very confident decisions for large portions of the datasets we examine, and this is a key problem that has been masked by current common experimental practices in the field. We make notes about this in our Ethics Statement.
Other work. Any other work on variance and fairness comes after the present study. We have made a significant attempt to keep our related work section up-to-date in response to this new work, using a detailed and robust mix of Google alerts and arXiv scraping to find new related work. We used this same procedure to make sure we found (ideally) all related work on fairness and variance when we conducted this project. There are some studies that directly build on ours, which we choose not to cite.
Appendix D Additional Details on Our Algorithmic Framework
A natural question is whether we can improve self-consistency, with the hope that doing so would reduce arbitrariness in the learning process, improve accuracy, and, for the cases in which there is different self-consistency across subgroups, perhaps also improve fairness. To do so, we consider ways of reducing variance, as, based on our definitions (Definitions 2 and 3), doing so should improve self-consistency.
We consider the classic bootstrap aggregation — or, bagging — algorithm [8] as a starting point. It has been well-known since Breiman [8] that bagging can improve the performance of unstable predictors. That is, for models produced by a learning process that is sensitive to the underlying training data, it is (theoretically-grounded) good practice to train an ensemble of models using bootstrapping (Appendix A.4; Efron [22], Efron and Tibshirani [24]). When classifying an example instance, we then leverage the whole ensemble by aggregating the predictions produced by its members. This aggregation process identifies the most common prediction in the ensemble, and returns that label as the classification. Put differently, we have combined the information of a lot of unstable classifiers, and averaged over their behavior in order to generate more stable classifications.
Given the relationship between variance (Definition 2) and self-consistency (Definition 3), reducing variance will improve self-consistency. However, rather than relying on a simple majority vote to decide the aggregated prediction, we also instill a notion of confidence in our predictions by requiring a minimum level of self-consistency, which is described in Algorithm 1.
D.1 Self-consistent ensembling with abstention
We present a framework that alters the semantics of classification outputs to $0$, $1$, and Abstain, and employ ensembling to determine the self-consistency level that guides the output process. We modify bagging away from using a simple majority vote because this type of aggregation rule still allows for arbitrariness. If, for example, we happen to train 101 classifiers, it is possible that 50 of them yield one classification and the other 51 yield the other classification for a particular example. Bagging would select the classification that goes along with the 51 underlying models; however, if we happened to train additional models, it is perhaps the case that the majority vote would flip. In short, the bagging aggregation rule bakes in the idea that simple-majority voting is a sufficient strategy for making decisions. And while this may generally be true for variance reduction in high-variance classifiers, it does not address the problem of arbitrariness that we study. It just encodes arbitrariness in the aggregation rule — it picks classifications, in some cases, that are no better than a coin flip.
Instead, Algorithm 1 is more flexible. It suggests many possible ways to produce bagged classifiers that do not have to rely on simple-majority voting, by allowing for abstentions. For example, we can change the aggregation rule in regular bagging to use a minimum self-consistency level rather than a majority vote. Instead of relying on votes, we can bag the underlying prediction probabilities and then apply a filter. We could take the top-$k$ most self-consistent predictions and let a super-ensemble of underlying bagged classifiers decide whether to abstain or predict.
In the experiments in the paper, we provide two examples: Changing the underlying bagging vote aggregation rule (simple ensembling), and applying a round of regular bagging to do variance reduction and then bagging the bagged outputs (super ensembling) to apply a self-consistency threshold. Our ensemble model will not produce predictions for examples for which the lack of self-consistency is too high. We describe our procedure more formally in Algorithm 1.
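As a minimal sketch of these semantics (ours, not the released Algorithm 1), the function below takes the prediction matrix from the earlier bootstrap sketch and outputs $0$, $1$, or an abstention sentinel depending on whether the ensemble's self-consistency clears a user-specified threshold; the threshold value and the ABSTAIN sentinel are illustrative choices.

```python
import numpy as np

ABSTAIN = -1  # sentinel for "no prediction"

def predict_or_abstain(pred_matrix, kappa=0.75):
    """Given a (B, n_test) matrix of 0/1 ensemble predictions, output the
    majority label where self-consistency >= kappa, and ABSTAIN elsewhere."""
    B = pred_matrix.shape[0]
    n1 = pred_matrix.sum(axis=0)
    n0 = B - n1
    sc = 1.0 - 2.0 * n0 * n1 / (B * (B - 1))
    majority = (n1 > n0).astype(int)
    return np.where(sc >= kappa, majority, ABSTAIN)

# Usage with the earlier sketch:
# preds = bootstrap_predictions(X_train, y_train, X_test)
# outputs = predict_or_abstain(preds, kappa=0.8)
```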
Simple proof that abstention improves self-consistency (by construction). We briefly show the simple proof that any method that meets the semantics of Algorithm 1 will be more self-consistent than its counterpart that cannot Abstain.
We define abstentions to be in agreement with both and predictions. This makes sense intuitively: Algorithm 1 abstains to avoid making predictions that lack self-consistency, so abstaining should not increase disagreement between predictions.
It follows that we can continue to use Definition 3 and the associated empirical approximation (3), but with one small adjustment. Instead of the total number of predictions being $B = n_0 + n_1$, with $n_0$ and $n_1$ corresponding to $0$- and $1$-predictions, respectively, we now allow for $n_0 + n_1 \leq B$, in order to account for a possibly non-zero number of abstentions.
In more detail, let us denote $\hat{Y}$ to be the multiset of the $B$ outputs on $\boldsymbol{x}$, with $|\hat{Y}| = B$. This is where we depart from our typical definition of self-consistency, for which $B = n_0 + n_1$ (Section 3, Appendix B.3). We continue to let $n_0$ and $n_1$ represent the counts of $0$- and $1$-predictions, respectively, and now include $n_A$ to denote the (possibly nonzero) number of abstentions, so that $B = n_0 + n_1 + n_A$. This leads to the following adjustment of (3):
$$\widehat{\mathrm{SC}}(\boldsymbol{x}) = 1 - \frac{2\,\big(n_0 n_1 + 0 \cdot n_0 n_A + 0 \cdot n_1 n_A\big)}{B (B - 1)}. \quad (14)$$
Equation (14) follows from a similar analysis of comparing $0$s, $1$s, and abstentions to the one that led us to derive (3) in Appendix B.3. However, since the costs of $0$-to-Abstain comparisons and $1$-to-Abstain comparisons are both $0$, the corresponding terms in (14) reduce to $0$. As a result, we recover our original definition for self-consistency (3), with the possibility that $n_0 + n_1 < B$ if there is a nonzero number of abstentions $n_A$.
Since $n_0 + n_1 \leq B$ and abstentions add no disagreeing pairs, it is always the case that the option to Abstain is at least as self-consistent as not having the option to do so. This follows from the fact that $B(B - 1) \geq (n_0 + n_1)(n_0 + n_1 - 1)$, which makes the denominator in (14) greater than or equal to the one for the corresponding method that cannot Abstain; when subtracted from $1$, this produces a $\widehat{\mathrm{SC}}$ that is no smaller than the value for the corresponding method that cannot Abstain.
Now, it follows that, given the choice between Abstain and predicting a label that is in disagreement with an existing prediction label, choosing to Abstain will always lead to higher self-consistency. This is because the cost to Abstain is less than the cost of disagreeing, so it will always be the minimal-cost choice, i.e., the one that maximizes $\widehat{\mathrm{SC}}$.
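A small extension of the earlier self-consistency sketch, assuming the adjusted form in (14) above, shows the effect numerically: replacing disagreeing votes with abstentions can only raise $\widehat{\mathrm{SC}}$.

```python
import numpy as np

ABSTAIN = -1

def sc_with_abstentions(outputs):
    """Self-consistency over B outputs in {0, 1, ABSTAIN}, where abstentions
    incur zero cost against either label (Equation (14))."""
    outputs = np.asarray(outputs)
    B = len(outputs)
    n0 = int((outputs == 0).sum())
    n1 = int((outputs == 1).sum())
    return 1.0 - 2.0 * n0 * n1 / (B * (B - 1))

print(sc_with_abstentions([0] * 50 + [1] * 51))        # ~0.495 (no abstentions)
print(sc_with_abstentions([ABSTAIN] * 50 + [1] * 51))  # 1.0
```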
Error and the abstention set. It is very straightforward to see that the abstention set will generally exhibit higher error than the prediction set. When we ensemble and measure $\widehat{\mathrm{SC}}$, the examples that exhibit low $\widehat{\mathrm{SC}}$ contain higher variance-induced error. Let us call $a$ the size of the abstention set (which incurs error rate $\epsilon_a$), $p$ the size of the prediction set (which incurs error rate $\epsilon_p$), and $t = a + p$ the size of the test set (which incurs error rate $\epsilon_t$). We can relate the total number of misclassified examples as $\epsilon_t\, t = \epsilon_a\, a + \epsilon_p\, p$. If we assume the bias and noise are equally distributed across the test and abstention sets (this is a reasonable assumption, on average, in our setup), then splitting off the high-variance instances from the low-variance (high-$\widehat{\mathrm{SC}}$) instances requires that $\epsilon_a \geq \epsilon_p$. The error on the abstention set necessarily has to be larger than the error on the prediction set, in order to retain the above relationship.
Appendix E Additional Experimental Results and Details for Reproducibility
The code for the examples in Sections 1, 3 and 5 can be found in https://github.com/pasta41/variance. This repository also contains necessary and sufficient information concerning reproducibility. At the time of writing, we use Conda to produce environments with associated package-versioning information, so that our results can be exactly replicated and independently verified. We also use the Scikit-Learn [49] toolkit for modeling and optimization. More details on our choice of models and hyperparameter optimization can be found in our code repository, cited above. In brief, we consulted prior related work (e.g., Chen et al. [11]) and performed our own validation for reasonable hyperparameters per model type. We keep these settings fixed to reduce impact on our results, in order to observe in isolation how different training data subsets impact our results.
During these early runs, we collected information on train accuracy, not just test accuracy; while models ultimately have similar test accuracy in most cases for the same task, they can vary significantly in terms of train accuracy (e.g., for logistic regression, COMPAS is in the low .70s; for random forests, it is in the mid .90s). We do not include these results for the sake of space.
This section is organized as follows. We first present information on our datasets, models, and code, including our HMDA toolkit (Appendix E.1). We then provide details on our setup for running experiments on our cluster (Appendix E.2). Appendix E.3 contains more detailed information concerning the experiments performed to produce Figures 1 and 2 in the main paper. In Appendix E.4, we provide more details on the results presented in Section 5, as well as additional experiments. Lastly, in Appendix E.5, we discuss implications of these results for common fairness benchmarks like South German Credit. We conclude that, in many cases, without adequate attention to error estimation, training and post-processing a single model for fairness on these datasets is likely a brittle approach for achieving generalizable fairness (and accuracy) performance. Based on our experiments, high variance can be a significant confounding factor when using a small set of models to draw conclusions about performance — whether fairness or accuracy. There is an urgent need for future work concerning reproducibility. More specifically, our results indicate that it would be useful to revisit key algorithmic strategies in fair classification to see how they perform in context with more reliable expected error estimation and variance reduction.
Note on CDF figures. We show our results in terms of the $\widehat{\mathrm{SC}}$ of the underlying bagged models because doing so conveys how Algorithm 1 makes decisions to predict or abstain. (The CDF of Algorithm 1 itself, computed via a third round of bootstrapping, has nearly all of its mass at $\widehat{\mathrm{SC}} = 1$; it is difficult to visualize.) For both types of ensembling, Algorithm 1 predicts for all examples captured by the area to the right of the reference line, and abstains for all examples on the left.
It is also worth noting (though hopefully obvious) that our CDF plots of $\widehat{\mathrm{SC}}$ are not continuous, yet we choose to plot them as interpolated curves. The curves are discrete because we train a concrete number of models (individual models or bags) — typically 101 of them — that we treat as our approximation for computing $\widehat{\mathrm{SC}}$. This means that there are a finite number of possible values of $\widehat{\mathrm{SC}}$, for which we plot a corresponding concrete number of heights corresponding to the cumulative proportion of the test set. In this respect, it would perhaps be more precise to plot our curves using a step function, exemplified below (see Appendix B.3 for the attainable values of $\widehat{\mathrm{SC}}$):
[Figure: an example $\widehat{\mathrm{SC}}$ CDF drawn as a step function.]
We opted not to do this for two reasons. First, plotting steps for some of our figures, in our opinion, can make the figures more difficult to understand. Second, in experiments for which we increase the number of models used to estimate (e.g., Appendix E.5), we found that the curves for 101 models were a reasonable approximation of the overall CDF. We therefore concluded that plotting the figures without steps was worth the clarity of presentation, with a sacrifice in correctness for the overall takeaways that we intend with these figures.
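For readers who want the more precise rendering, a small matplotlib sketch (illustrative; not the paper's plotting code) draws an empirical $\widehat{\mathrm{SC}}$ CDF both ways:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_sc_cdf(sc_values, label):
    """Empirical CDF of per-example self-consistency, drawn both as the exact
    step function and as an interpolated curve."""
    xs = np.sort(sc_values)
    ys = np.arange(1, len(xs) + 1) / len(xs)
    plt.step(xs, ys, where="post", alpha=0.4, label=f"{label} (steps)")
    plt.plot(xs, ys, label=f"{label} (interpolated)")

rng = np.random.default_rng(0)
plot_sc_cdf(np.clip(rng.normal(0.85, 0.1, 500), 0.5, 1.0), "example group")
plt.xlabel("self-consistency")
plt.ylabel("cumulative proportion of test set")
plt.legend()
plt.show()
```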
A remark on cost. It can be considerably more computationally intensive to train an ensemble of models to compute than to train a handful of models and perform cross-validation, as is the standard practice in fair classification. However, as our empirical analysis demonstrates, this cost comes with a huge benefit: It enables us to improve self-consistency and to root out the arbitrariness of producing predictions that are effectively close-to-random, which is especially important in high-stakes fairness settings [13]. Moreover, for common fair classification datasets, the increased cost on modern hardware is relatively small; (super-) ensembling with confidence takes under an hour to execute (Appendix E.4).
E.1 Hypothesis classes, datasets, and code
Models. According to a comprehensive recent survey study [26], as well as related work like Chen et al. [11], we conclude that some of the most common models used in fair classification are logistic regression, decision tree classifiers, random forest classifiers, SVMs, and MLPs. We opted to include comprehensive results for the first three, since they capture different complexities, and therefore encode different degrees of statistical bias, which we expected to have an impact on the underlying sources of error. We provide some results for SVMs and MLPs, which we include in this Appendix. Since we choose not to use stochastic optimizers in order to reduce the sources of randomness, training MLPs for our results is slower than it could be. We consistently use a decision threshold of 0.5 (i.e., 0-1 loss) for our experiments, though our results can easily be extended to other thresholds, as discussed in Section 3. Depending on the dataset, we reserve between 20% and 30% of the available data for the test set. This is consistent with standard fair classification training settings, which we validated during our initial experiments to explore the space (for which we also did preliminary hyperparameter optimization, before fixing the hyperparameters for our presented results); please refer to https://github.com/pasta41/variance for more details.
Datasets. Also according to Fabris et al. [26], the most common tasks in fair classification are Old Adult [42], COMPAS [44], and South German Credit [33] (technically, Grömping [33] is an updated and corrected version of the dataset from 2019). These three datasets arguably serve as a de facto benchmark in the community, so we felt the need to include them in the present work. In recognition of the fact that these three datasets, however standard, have problems, we also run experiments on tasks in the New Adult dataset, introduced by Ding et al. [19] to replace Old Adult. We subset to the CA (California) portion of the dataset, run on Income, Employment, and Public Coverage, and consider sex and race as protected attributes, which we binarize into {Male, Female} and {White, Non-white}. These are all large-scale tasks, at least in the domain of algorithmic fairness — on the order of hundreds of thousands of example instances. However, the tasks do share example instances and some features. In summary, concerning common tasks in fair classification:
-
•
COMPAS [44]. We run on the commonly-used version of this dataset from Friedler et al. [30], which has 6167 example instances with 404 features. The target is to predict recidivism within 2 years ( corresponding to Yes, and to No). The protected attribute is race, binarized into “Non-white” () and “White” () subgroups.
• Old Adult [42]. We run on the commonly-used version of this dataset from Friedler et al. [30], which has 30,162 examples with 97 features. This version removes instances with missing values from the original dataset and changes the encoding of some of the features (Kohavi [42] has 48,842 example instances with 88 features). The target is to predict whether income is above or below $50K. The protected attribute is sex, binarized into “Female” and “Male” subgroups.
• South German Credit [33]. We download the dataset from UCI (see https://archive.ics.uci.edu/ml/datasets/South+German+Credit+%28UPDATE%29) and process the data ourselves. We use the provided codetable.txt to “translate” the features from German to English. We say “translate” because the authors took some liberties; e.g., the column converted to “credit_history” is labeled “moral” in the German, which is not a translation. There are four categories in the protected attribute “personal_status_sex” column, one of which is used for both “Male (single)” and “Female (non-single).” We therefore remove rows with this value and binarize the remaining three categories into “Female” and “Male.” What results is a dataset with 690 example instances (of the original 1000) with 19 features. The target is “good” versus “bad” credit.
• Taiwan Credit [59]. This task is to predict default on credit card payments or not. There are 30,000 example instances and 24 features. The protected attribute is binary sex. We download this dataset from UCI (see https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients).
• New Adult [19]. This dataset contains millions of example instances from US Census data, which can be used for several different targets/tasks. We select three of them (listed below; see also the code sketch after this list). These tasks share some features, and therefore are not completely independent. Further, given the size of the whole dataset, we subset to CA (California), the most populous state in the US. There are two protected attribute columns that we use: sex, which we binarize into “Female” and “Male” subgroups, and race, which we binarize into “Non-white” and “White.” In future work, we would like to explore extending our results beyond binary subgroups.
– Income. This task is designed to be analogous to Old Adult [42]. As a result, the target is to predict whether income is above or below $50K. In the CA subset, there are 195,665 example instances with 8 features.
– Employment. This task is to predict whether an individual is employed or not. In the CA subset, there are 378,817 example instances with 14 features.
– Public Coverage. This task is to predict whether an individual is on public health insurance or not. In the CA subset, there are 138,554 example instances with 17 features.
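For readers who want to reproduce this setup, the following sketch loads one of these tasks with the folktables package released alongside Ding et al. [19]; the survey year and the binarization shown are illustrative assumptions rather than our exact preprocessing:

```python
import numpy as np
from folktables import ACSDataSource, ACSIncome  # ACSEmployment, ACSPublicCoverage are analogous

# Download the CA subset of the ACS person survey (the survey year shown is illustrative).
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_ca = data_source.get_data(states=["CA"], download=True)

# Features, binary labels, and the task's default protected-attribute column.
X, y, group = ACSIncome.df_to_numpy(acs_ca)

# Illustrative binarization of a protected-attribute column into two groups;
# the particular mapping (e.g., one ACS category vs. all others) is a
# preprocessing choice, not something folktables performs automatically.
group_binary = np.where(group == 1, 1, 0)
```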
E.1.1 The standalone HMDA toolkit
In addition to the above standard tasks, we include experiments that use the NY and TX 2017 subsets of the Home Mortgage Disclosure Act (HMDA) 2007-2017 dataset [27]. These two subsets have 244,107 and 576,978 examples, respectively, with 18 features. The HMDA datasets together contain over 140 million examples of US home mortgage loans from 2007-2017 (newer data exists, but in a different format). We developed a toolkit, described below, to make this dataset easy to use for classification experiments. Similar to New Adult, we enable subsetting by US state. For the experiments in this paper, we run on the NY (New York) and TX (Texas) 2017 subsets, in order to add some geographic diversity to complement our New Adult experiments. We additionally chose NY and TX because they are two of the most populous states in the US, alongside CA (per the 2020 Census, the top-4-most-populous states are CA, TX, FL, and NY [46]).
The target variable, action_taken, concerning loan origination has 8 values, 2 of which do not allow us to meaningfully conclude an approval or denial decision. They are: 1 – Loan originated, 2 – Application approved but not accepted, 3 – Application denied by financial institution, 4 – Application withdrawn by applicant, 5 – File closed for incompleteness, 6 – Loan purchased by the institution, 7 – Preapproval request denied by financial institution, and 8 – Preapproval request approved but not accepted (optional reporting). We filter out the two non-decision values and binarize the rest into grant and reject. There are three protected attributes that we consider: sex, race, and ethnicity:
• sex has 5 possible values, 2 of which correspond to non-missing categories: 1 – Male and 2 – Female. We binarize sex into these two categories.
• race has 8 possible values, 5 of which correspond to categories/non-missing information: 1 – American Indian or Alaska Native, 2 – Asian, 3 – Black or African American, 4 – Native Hawaiian or Other Pacific Islander, and 5 – White. There are 5 fields for applicant race, which model an applicant belonging to more than one racial group. For our experiments, we only look at the first field. We binarize race into “White” and “Non-white” subgroups.
• ethnicity has 5 possible values, 2 of which correspond to categories/non-missing information: 1 – Hispanic or Latino and 2 – Not Hispanic or Latino. We binarize ethnicity into these two categories.
After subsetting to only include examples whose values do not correspond to missing information, HMDA has 18 features. The NY dataset has 244,107 examples; the TX dataset has 576,978 examples, making it the largest dataset in our experiments. As with our experiments using New Adult, we would like to extend our results beyond binary subgroups and binary classification in future work.
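The following pandas sketch illustrates the kind of filtering and binarization described above; the file path is hypothetical, the column names follow the pre-2018 HMDA LAR schema, and the excluded and denial code sets are illustrative placeholders rather than our exact preprocessing rules:

```python
import pandas as pd

# Hypothetical path to one of the state/year subsets produced by our toolkit.
df = pd.read_csv("hmda_2017_ny.csv")

# Keep only rows whose fields of interest are present.
df = df.dropna(subset=["action_taken", "applicant_sex",
                       "applicant_race_1", "applicant_ethnicity"])

# Drop action_taken codes that do not allow an approval/denial conclusion.
# The exact exclusion set is a preprocessing decision (see text); {4, 5} is illustrative.
NON_DECISION_CODES = {4, 5}
df = df[~df["action_taken"].isin(NON_DECISION_CODES)]

# Binarize the target: treat denial codes (3 = denied, 7 = preapproval denied)
# as reject and everything remaining as grant. Illustrative, not our exact rule.
DENY_CODES = {3, 7}
df["grant"] = (~df["action_taken"].isin(DENY_CODES)).astype(int)

# Binarize protected attributes following the standard HMDA category codes.
df = df[df["applicant_sex"].isin([1, 2])]                       # 1 = Male, 2 = Female
df["sex_female"] = (df["applicant_sex"] == 2).astype(int)
df = df[df["applicant_race_1"].isin([1, 2, 3, 4, 5])]
df["race_white"] = (df["applicant_race_1"] == 5).astype(int)    # 5 = White
df = df[df["applicant_ethnicity"].isin([1, 2])]
df["hispanic"] = (df["applicant_ethnicity"] == 1).astype(int)   # 1 = Hispanic or Latino
```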
Releasing a standalone toolkit. These datasets are less commonly used in the current algorithmic fairness literature [26]. We believe this is likely because the over-100-million data examples are only available in bulk files, which are on the order of tens of gigabytes and therefore not easily downloadable or explorable on most personal computers. Following the example of Ding et al. [19], one of our contributions is to pre-process all of these datasets (all locations and years) and release them with a software toolkit. The software engineering effort to produce this toolkit was substantial. Our hope is that wider access to this dataset will further reduce the community’s dependency on small (and dated) datasets. Please refer to https://github.com/pasta41/hmda for the latest information on this standalone software package. Our release aligns with the terms of service for this dataset.
E.2 Cluster environment details
While most of the experiments in this paper can be easily reproduced on a modern laptop, for efficiency, we ran all of our experiments (except the one that produces Figure 1) in a cluster environment. This enabled us to easily execute train/test splits in parallel on different CPUs, serialize our results, and then reconstitute and combine them to produce plots locally. Our cluster environment runs Ubuntu 20.04 and uses Slurm v20.11.8 to manage jobs. We ran all experiments using Anaconda3, and we use Conda environment specifications so that our environments can be easily reproduced.
The experiments using New Adult and HMDA rely on datasets that are (in some cases) orders of magnitude larger than the traditional algorithmic fairness tasks. This is one of the reasons why we recommend running on a cluster, and therefore do not include Jupyter notebooks in our repository for these tasks. We also limit our modeling choices to logistic regression, decision tree classifiers, and random forest classifiers for these results due to the expense of training on the order of thousands of models for each experiment.
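As an illustration of this parallelization pattern, the following sketch uses joblib in place of a cluster scheduler to run train/test splits on separate CPUs and serialize the results; the helper functions, hyperparameters, and stand-in data are ours, not the exact cluster scripts in our repository:

```python
import pickle
import numpy as np
from joblib import Parallel, delayed
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.utils import resample

def fit_bootstrap_model(X_tr, y_tr, seed):
    """Fit one model on a bootstrap resample of the training split."""
    Xb, yb = resample(X_tr, y_tr, random_state=seed)
    return LogisticRegression(max_iter=1000).fit(Xb, yb)

def run_one_split(split_seed, X, y, n_bootstraps=101):
    """Train bootstrapped models for one train/test split and collect their
    test-set predictions, which feed the self-consistency estimates."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=split_seed)
    preds = np.stack([fit_bootstrap_model(X_tr, y_tr, s).predict(X_te)
                      for s in range(n_bootstraps)])
    return {"split": split_seed, "preds": preds, "y_test": y_te}

# Farm the splits out across CPUs and serialize, mirroring what we do with Slurm.
X, y = np.random.rand(500, 5), np.random.randint(0, 2, 500)  # stand-in data
results = Parallel(n_jobs=-1)(delayed(run_one_split)(s, X, y) for s in range(10))
with open("results.pkl", "wb") as f:
    pickle.dump(results, f)
```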
E.3 Details on motivating examples in the main paper
This appendix provides extended results for the experiments associated with Sections 1 and 3, which give an intuition for individual- and subgroup-level consistency. The experimental results in the main paper are for logistic regression. Here, we expand the set of models we examine, along with the associated discussion of how to interpret comparisons between these results.
Reproducing Figure 1. The experiment to produce this figure in Section 1 (also shown in Appendix B.3) trains logistic regression models on the COMPAS dataset (Appendix E.1) using 0-1 loss. We use the bootstrap method to produce each model, and evaluate every model on the same test set. We then search for a maximally consistent and a minimally consistent individual in the test set, i.e., an individual whose predictions all agree and an individual whose predictions are split between the two classes, which we plot in the bar graph. Please refer to the README in https://github.com/pasta41/variance regarding which Jupyter notebook to run to produce the underlying results and figure. This experiment can be easily replicated on a laptop.
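A sketch of the individual-level search described here, assuming a (B, n) matrix of test-set predictions like the one produced by the bootstrap runs; the function name and stand-in data are ours:

```python
import numpy as np

def prediction_agreement(preds: np.ndarray) -> np.ndarray:
    """Fraction of the B models that agree with the majority prediction for
    each test example; 1.0 means complete agreement, ~0.5 a coin flip."""
    B = preds.shape[0]
    ones = preds.sum(axis=0)
    return np.maximum(ones, B - ones) / B

# preds: 0/1 predictions of 100 bootstrapped models on a fixed test set
# (stand-in values shown here in place of the real COMPAS runs).
preds = np.random.randint(0, 2, size=(100, 250))
agree = prediction_agreement(preds)
most_consistent = int(np.argmax(agree))   # analogous to "Individual 1" in Figure 1
least_consistent = int(np.argmin(agree))  # analogous to "Individual 2" in Figure 1
```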
Reproducing Figure 2. These figures were produced by executing runs of bootstrap training replicates to train random forest classifiers for Old Adult and COMPAS. We reproduce these figures below, so that they can be examined in relation to our additional results for decision tree classifiers and logistic regression. For each run, we take a train/test split, bootstrap the train split, and evaluate the resulting models’ classification decisions on the test set. Self-consistency can be estimated from the results across those models. We run this process repeatedly to produce the confidence intervals shown in the figures below. The intervals are not always clearly visible; there is not a lot of variance at the level of comparing whole runs to each other. Please refer to the README in https://github.com/pasta41/variance regarding which Jupyter notebook to run to produce the underlying results and figures. There are also scripted versions of these experiments, which enable them to be run in parallel in a cluster environment.
Self-consistency of incorrectly-classified instances. Last, we include figures that underscore how self-consistency is independent of correctness, where correctness is measured in terms of alignment with the observed label. That is, it is possible for an instance to be self-consistent and classified incorrectly with respect to its observed label. We show this using stacked bar plots. For the above experiments, we find the test examples for which the majority of classifications are incorrect and the test examples for which the majority are correct, and we examine how self-consistent they are. We bucket self-consistency into different levels, and then plot the relative proportion of majority-incorrectly and majority-correctly classified examples according to subgroup. Subgroups in COMPAS exhibit a similar trend, while subgroups in Old Adult exhibit differences, with the heights of the bars corresponding to the trends we plot in our CDF plots. As we note briefly in Section 3, it may be interesting to examine patterns in examples about which learning processes are confident (i.e., highly self-consistent) but wrong in terms of label alignment. If such issues correlate with subgroup, it may be worth testing whether such labels are indicative of label bias. We leave such thoughts to future work.
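A sketch of how these stacked bar plots can be assembled from per-example self-consistency estimates, majority predictions, observed labels, and subgroup membership; the bucket boundaries and stand-in data are illustrative:

```python
import numpy as np
import pandas as pd

# Stand-in inputs: per-example self-consistency, majority-vote predictions
# from the B models, observed labels, and binarized subgroup membership.
rng = np.random.default_rng(0)
sc = rng.uniform(0.5, 1.0, size=1000)
majority_pred = rng.integers(0, 2, size=1000)
y = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

df = pd.DataFrame({
    "sc_bucket": pd.cut(sc, bins=[0.5, 0.6, 0.7, 0.8, 0.9, 1.0], include_lowest=True),
    "majority_correct": majority_pred == y,
    "group": group,
})

# Relative proportion of majority-correct vs. majority-incorrect examples in
# each self-consistency bucket, split by subgroup (the quantity plotted above).
summary = (df.groupby(["group", "sc_bucket"], observed=True)["majority_correct"]
             .value_counts(normalize=True)
             .rename("proportion"))
print(summary)
```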


E.4 Validating our algorithm in practice
E.4.1 COMPAS
CDFs for COMPAS () and associated error metrics on the prediction set. Baseline metrics computed with models. For simple, models; for super, ensemble models, each composed of underlying models. We repeat for test/train splits. We also report abstention rate .

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on COMPAS.]
E.4.2 Old Adult
CDFs for Old Adult () and associated error metrics on the prediction set. Baseline metrics computed with models. For simple, models; for super, ensemble models, each composed of underlying models. We repeat for test/train splits. We also report abstention rate .

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Old Adult.]
E.4.3 South German Credit
CDFs for German Credit () and associated error metrics on the prediction set. Baseline metrics computed with models. For simple, models; for super, ensemble models, each composed of underlying models. We repeat for test/train splits. We also report abstention rate .

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on South German Credit.]
E.4.4 Taiwan Credit
CDFs for Taiwan Credit () and associated error metrics on the prediction set. Baseline metrics computed with models. For simple, models; for super, ensemble models, each composed of underlying models. We repeat for test/train splits. We also report abstention rate .

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Taiwan Credit.]
E.4.5 New Adult - CA
CDFs for three tasks (Income, Employment, Public Coverage) in New Adult - CA, using sex and race as protected attributes, and associated error metrics on the prediction set. Baseline metrics computed with models. For simple, models; for super, ensemble models, each composed of underlying models for Income and Public Coverage; for Employment. We repeat for test/train splits. We also report abstention rate .
Income - by sex.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Income, by sex.]
Income - by race.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Income, by race.]
Employment - by sex.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Employment, by sex.]
Employment - by race.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Employment, by race.]
Public Coverage - by sex.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Public Coverage, by sex.]
Public Coverage - by race.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on Public Coverage, by race.]
E.4.6 HMDA
CDFs for two states (NY, TX) in HMDA 2017, using ethnicity, race, and sex as protected attributes, and associated error metrics on the prediction set. Baseline metrics computed with models. For simple, models; for super, ensemble models, each composed of underlying models for NY; for TX. We repeat for test/train splits. We also report abstention rate .
NY - 2017 - by ethnicity.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on HMDA NY 2017, by ethnicity.]
NY - 2017 - by race.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on HMDA NY 2017, by race.]
NY - 2017 - by sex.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on HMDA NY 2017, by sex.]
TX - 2017 - by ethnicity.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on HMDA TX 2017, by ethnicity.]
TX - 2017 - by race.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on HMDA TX 2017, by race.]
TX - 2017 - by sex.

[Tables: abstention-set metrics (Simple, Super) and prediction-set metrics (Baseline, Simple, Super) for logistic regression, decision tree, and random forest classifiers on HMDA TX 2017, by sex.]
E.4.7 Discussion of extended results for Algorithm 1
Overall, our results support that examining self-consistency and error together provides a much richer picture of model behavior, both with respect to arbitrariness and with respect to fairness metric disparities. For smaller datasets in particular, the learning process produces models with a large degree of variance. As a result, ensembling with confidence can lead to very high abstention rates.
Improving self-consistency by doing a round of variance reduction first and then ensembling with confidence (i.e., super-ensembling) can lead to improvements in error over baselines while keeping the abstention rate lower. These improvements are typically shared across subgroups, but they may not be symmetric; some subgroups may benefit more than others. As a result, even though accuracy increases in absolute terms for both groups, relative fairness metrics can worsen. This is a different instantiation of the fairness-accuracy trade-off than is often written about, which posits a necessary decrease in accuracy for one subgroup in order to improve fairness between binarized groups. Our results suggest that it is worth first tuning for accuracy, and then seeing how fairness interventions can balance the benefits across subgroups. Of course, it is possible that doing this could inject variance back into the model outputs, thereby reducing self-consistency and inducing arbitrariness. We leave this investigation to future work.
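The abstention rule at the core of this discussion can be sketched as follows: predict the majority vote when estimated self-consistency clears a chosen level, and abstain otherwise. This is a simplified rendering in our own notation, not a line-by-line restatement of Algorithm 1; the level 0.75, the sentinel value, and the stand-in data are illustrative:

```python
import numpy as np

ABSTAIN = -1  # sentinel value for abstention

def predict_or_abstain(preds: np.ndarray, kappa: float = 0.75):
    """Majority-vote prediction with abstention.

    preds: (B, n) matrix of 0/1 predictions from an ensemble (or an ensemble
    of ensembles, for the 'super' variant). Examples whose estimated
    self-consistency falls below kappa receive ABSTAIN instead of a label.
    """
    B, _ = preds.shape
    ones = preds.sum(axis=0)
    zeros = B - ones
    sc_hat = (ones * (ones - 1) + zeros * (zeros - 1)) / (B * (B - 1))
    majority = (ones >= zeros).astype(int)
    return np.where(sc_hat >= kappa, majority, ABSTAIN), sc_hat

# Abstention rate and error restricted to the prediction set (stand-in data):
preds = np.random.randint(0, 2, size=(101, 1000))
y = np.random.randint(0, 2, size=1000)
decisions, sc_hat = predict_or_abstain(preds, kappa=0.75)
mask = decisions != ABSTAIN
abstention_rate = 1.0 - mask.mean()
prediction_set_error = (decisions[mask] != y[mask]).mean()
```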
Additionally, our results reinforce that the choice of model matters a great deal. While overall error rates across model types may be similar, the sources of that error are not necessarily the same. This is an obvious point, relating to bias and variance. However, a lot of fair classification work reports similar performance across logistic regression, decision trees, random forests, SVMs, and MLPs (e.g., Chen et al. [11]). Looking at self-consistency makes clear that the underlying error profiles are not the same, with decision trees and random forests in particular exhibiting higher variance, and thus being more amenable to variance reduction and improvements in overall error. As fair classification research transitions to larger benchmarks, it will likely be fruitful to investigate more complex model classes.
We provide run times (HH:MM:SS) on our cluster environment in Table 2. We did not select for CPUs with any particular features, and thus the run times are quite variable.
Dataset | Attribute | Logistic regression | Decision trees | Random forests
South German Credit | sex | 00:42:50 | 00:25:28 | 00:34:42 |
COMPAS | race | 00:57:05 | 00:39:24 | 00:31:47 |
Old Adult | sex | 01:08:37 | 01:23:39 | 00:57:11 |
Taiwan Credit | sex | 00:31:35 | 01:34:57 | 01:53:33 |
New Adult - CA | ||||
Income | 01:39:53 | 02:51:13 | 04:59:07 | |
Employment | 02:20:15 | 02:18:16 | 03:00:15 | |
Public Coverage | 01:13:33 | 02:02:57 | 02:24:08 | |
HMDA - 2017 | ||||
NY | sex, race, ethnicity | 03:50:52 | 05:00:19 | 05:39:44 |
TX | sex, race, ethnicity | 05:18:59 | 04:10:34 | 04:18:59 |
We also provide details on systematic arbitrariness in Tables 3 and 4, which we measure using the Wasserstein-1 distance between subgroup self-consistency CDFs (see the sketch after Table 4). As noted in Appendix B.3, since this metric is an average, its measure necessarily changes if we compute it over a different set of self-consistency levels. To make distances across interventions comparable, we treat self-consistency values below the abstention level as equal to that level, so that all of their probability mass sits at the level. We therefore report two versions of these results: those for no abstention and those that account for abstention. Δ is the difference between the simple and super values. Positive differences indicate cases for which the super-ensembling method decreases the Wasserstein-1 distance between subgroup CDFs; negative differences indicate increases. While in some cases there is an increase, it is worth noting that this aligns with cases for which the distance is very close to 0. Old Adult, highlighted below, is the only dataset that exhibits large amounts of systematic arbitrariness (for decision trees and random forests, in particular; it exhibits the highest amount for logistic regression, but that amount is overall low). Old Adult and New Adult - Employment (by sex) are two of the only tasks that show fairness disparities of any appreciable size.
Dataset | Attribute | Logistic regression | | | Decision trees | | | Random forests | |
| | Simple | Super | Δ | Simple | Super | Δ | Simple | Super | Δ
German Credit | sex | 0.0181 | 0.0101 | 0.0079 | 0.0162 | 0.0244 | -0.0082 | 0.0181 | 0.0175 | 0.0006 |
COMPAS | race | 0.0073 | 0.0043 | 0.0030 | 0.0189 | 0.0170 | 0.0019 | 0.0073 | 0.0031 | 0.0043 |
Old Adult | sex | 0.0206 | 0.0033 | 0.0173 | 0.1386 | 0.0273 | 0.1112 | 0.1266 | 0.0255 | 0.1011 |
Taiwan Credit | sex | 0.0028 | 0.0007 | 0.0020 | 0.0223 | 0.0108 | 0.0115 | 0.0240 | 0.0065 | 0.0175 |
New Adult - CA | ||||||||||
Income | sex | 0.0009 | 0.0003 | 0.0006 | 0.0138 | 0.0055 | 0.0083 | 0.0089 | 0.0018 | 0.0071 |
race | 0.0003 | 0.0001 | 0.0002 | 0.0170 | 0.0073 | 0.0096 | 0.0163 | 0.0055 | 0.0108 | |
Employment | sex | 0.0004 | 0.0002 | 0.0003 | 0.0010 | 0.0011 | -0.0001 | 0.0043 | 0.0031 | 0.0013 |
race | 0.0004 | 0.0001 | 0.0004 | 0.0020 | 0.0007 | 0.0013 | 0.0021 | 0.0015 | 0.0006 | |
Public Coverage | sex | 0.0004 | 0.0001 | 0.0003 | 0.0029 | 0.0024 | 0.0005 | 0.0045 | 0.0023 | 0.0024 |
race | 0.0010 | 0.0003 | 0.0007 | 0.0200 | 0.0089 | 0.0113 | 0.0235 | 0.0089 | 0.0147 | |
HMDA - 2017 | ||||||||||
NY | sex | 0.0002 | 0.0004 | -0.0002 | 0.0096 | 0.0039 | 0.0056 | 0.0080 | 0.0023 | 0.0056 |
race | 0.0009 | 0.0005 | 0.0005 | 0.0433 | 0.0203 | 0.0231 | 0.0409 | 0.0133 | 0.0276 | |
ethnicity | 0.0005 | 0.0005 | 0.0000 | 0.0229 | 0.0156 | 0.0073 | 0.0248 | 0.0108 | 0.0140 | |
TX | sex | 0.0001 | 0.0001 | 0.0000 | 0.0153 | 0.0097 | 0.0055 | 0.0113 | 0.0054 | 0.0058 |
race | 0.0001 | 0.0001 | 0.0000 | 0.0010 | 0.0012 | -0.0002 | 0.0013 | 0.0007 | 0.0006 | |
ethnicity | 0.0007 | 0.0002 | 0.0004 | 0.0509 | 0.0291 | 0.0219 | 0.0379 | 0.0190 | 0.0188 |
Dataset | Attribute | Logistic regression | | | Decision trees | | | Random forests | |
| | Simple | Super | Δ | Simple | Super | Δ | Simple | Super | Δ
German Credit | sex | 0.0113 | 0.0080 | 0.0034 | 0.0090 | 0.0094 | -0.0004 | 0.0084 | 0.0132 | -0.0048 |
COMPAS | race | 0.0035 | 0.0019 | 0.0017 | 0.0039 | 0.0060 | -0.0021 | 0.0041 | 0.0019 | 0.0022 |
Old Adult | sex | 0.0110 | 0.0020 | 0.0090 | 0.0654 | 0.0155 | 0.0500 | 0.0634 | 0.0139 | 0.0494 |
Taiwan Credit | sex | 0.0014 | 0.0005 | 0.0009 | 0.0057 | 0.0059 | -0.0002 | 0.0107 | 0.0040 | 0.0067 |
New Adult - CA | ||||||||||
Income | sex | 0.0005 | 0.0002 | 0.0004 | 0.0051 | 0.0032 | 0.0019 | 0.0047 | 0.0012 | 0.0035 |
race | 0.0002 | 0.0000 | 0.0002 | 0.0073 | 0.0040 | 0.0033 | 0.0082 | 0.0028 | 0.0053 | |
Employment | sex | 0.0002 | 0.0001 | 0.0001 | 0.0005 | 0.0005 | 0.0001 | 0.0020 | 0.0014 | 0.0006 |
race | 0.0002 | 0.0000 | 0.0002 | 0.0012 | 0.0003 | 0.0008 | 0.0008 | 0.0005 | 0.0003 | |
Public Coverage | sex | 0.0002 | 0.0001 | 0.0001 | 0.0006 | 0.0012 | -0.0006 | 0.0011 | 0.0009 | 0.0002 |
race | 0.0006 | 0.0001 | 0.0005 | 0.0068 | 0.0049 | 0.0019 | 0.0106 | 0.0047 | 0.0059 | |
HMDA - 2017 | ||||||||||
NY | sex | 0.0001 | 0.0002 | -0.0001 | 0.0033 | 0.0020 | 0.0012 | 0.0040 | 0.0013 | 0.0028 |
race | 0.0004 | 0.0002 | 0.0002 | 0.0155 | 0.0111 | 0.0044 | 0.0190 | 0.0076 | 0.0114 | |
ethnicity | 0.0002 | 0.0002 | 0.0001 | 0.0055 | 0.0083 | -0.0028 | 0.0081 | 0.0059 | 0.0022 | |
TX | sex | 0.0000 | 0.0000 | 0.0000 | 0.0061 | 0.0050 | 0.0011 | 0.0058 | 0.0029 | 0.0028 |
race | 0.0000 | 0.0001 | 0.0000 | 0.0004 | 0.0005 | -0.0002 | 0.0007 | 0.0004 | 0.0003 | |
ethnicity | 0.0003 | 0.0001 | 0.0003 | 0.0229 | 0.0159 | 0.0070 | 0.01200 | 0.0104 | 0.0095 |
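The following is a minimal sketch of how the systematic-arbitrariness measure reported in Tables 3 and 4 can be computed with scipy from per-example self-consistency estimates split by subgroup; the function name, the `floor` clipping used to mimic the abstention adjustment, and the stand-in data are our own simplifications, not the exact procedure used for the tables:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def systematic_arbitrariness(sc_group_a, sc_group_b, floor=None):
    """Wasserstein-1 distance between the subgroup self-consistency CDFs.

    If `floor` is given (e.g., the abstention level), values below it are
    clipped up to the floor so that abstained-on mass sits at that level,
    mirroring the adjustment described before Tables 3 and 4 (schematically).
    """
    a, b = np.asarray(sc_group_a, float), np.asarray(sc_group_b, float)
    if floor is not None:
        a, b = np.maximum(a, floor), np.maximum(b, floor)
    return wasserstein_distance(a, b)

# Example: Δ between the simple and super interventions for one task (stand-in data).
rng = np.random.default_rng(0)
sc_simple = {"g0": rng.uniform(0.5, 1, 500), "g1": rng.uniform(0.6, 1, 500)}
sc_super = {"g0": rng.uniform(0.8, 1, 500), "g1": rng.uniform(0.85, 1, 500)}
w_simple = systematic_arbitrariness(sc_simple["g0"], sc_simple["g1"])
w_super = systematic_arbitrariness(sc_super["g0"], sc_super["g1"])
delta = w_simple - w_super  # positive: super reduces systematic arbitrariness
```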
E.5 Reliability and fairness metrics in COMPAS and South German Credit
Even before we apply our intervention to improve self-consistency, our results in Section 3 show close-to-parity subgroup error rates in COMPAS (and similarly for South German Credit, below). These results are surprising. We train many models to produce estimates of variance and self-consistency, but of course doing this also has the effect of estimating the expected error more generally (with variance representing a portion of that error). Our estimates of expected error for these tasks indicate that the average model produced by training on COMPAS and South German Credit is in fact close to parity at baseline with respect to popular fairness definitions like Equality of Opportunity and Equalized Odds [4, 34], with no fairness intervention applied. We found this across model types for both datasets, though the story becomes more complicated when we apply techniques to improve self-consistency (see the discussion at the end of Appendix E.4).
Of course, we did not expect this result, as these are two of the de facto standard benchmark datasets in algorithmic fairness; they are used in countless other studies to probe and verify algorithmic fairness interventions [26]. As a result, we initially thought that our results must be incorrect. We therefore inspected the error of the individual models in our bootstrap runs.
We re-ran our baseline experiments across test/train splits for logistic regression. In Figure 56, we plot the bootstrap models that went into these results. For another view of analogous information, in Table 5(f), we provide an excerpt of the results for COMPAS regarding the underlying random forest classifiers used to produce Figure 2.



Overall, we can see that there is a wide range of error disparities that trend in both directions, with a skew toward one direction. These results support our claim that training many models is necessary to get an accurate picture of expected error, with implications both for the reproducibility of experiments that train and analyze only a small handful of models and for generalizability. There are models that exhibit worse degrees of unfairness in both directions, but they are less likely than models that exhibit smaller disparities.
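To make the per-model analysis concrete, the following sketch tabulates subgroup false positive and false negative rates for each bootstrapped model and summarizes their spread across models; the helper names and stand-in data are ours:

```python
import numpy as np
import pandas as pd

def subgroup_rates(y_true, y_pred, group):
    """False positive and false negative rates per binarized subgroup."""
    rows = []
    for g in (0, 1):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        fpr = ((yp == 1) & (yt == 0)).sum() / max((yt == 0).sum(), 1)
        fnr = ((yp == 0) & (yt == 1)).sum() / max((yt == 1).sum(), 1)
        rows.append({"group": g, "FPR": fpr, "FNR": fnr})
    return rows

# preds: (B, n) predictions of the bootstrapped models; stand-in data below.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=(100, 1000))
y = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

per_model = pd.DataFrame(
    [dict(model=b, **row)
     for b in range(preds.shape[0])
     for row in subgroup_rates(y, preds[b], group)]
)
# Spread of subgroup error rates across models vs. their average ("expected") values.
print(per_model.groupby("group")[["FPR", "FNR"]].agg(["mean", "std", "min", "max"]))
```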
We subset the above results to the models that produce the lowest error, as this is often the selection criterion for picking models to post-process. We plot these results below. These top-performing models in fact exhibit (on average) closer-to-parity subgroup error rates.



[Table 5(a)–(f): excerpts of per-run subgroup error metrics for the underlying models; each panel lists three runs (e.g., runs 762, 757, and 328).]
This detailed view provides insight into how such a result is possible. Broadly speaking, individual runs have roughly similar overall error (this should be taken relatively: in general, COMPAS demonstrates high error, and while the error fluctuates depending on the training data, the average error rate across train/test splits is rather tight, despite the fluctuations within the runs of each split); yet, the subgroup-specific error rates that compose the overall error can nevertheless vary widely depending on the underlying training data. This observation aligns with current interest in model multiplicity in the algorithmic fairness community [7, 58], which imports the idea from Breiman [10]. In this case, as suggested by Table 5(f), there are models that demonstrate unfairness toward both subgroups with respect to each error rate metric. When we move away from attempting to find a single model that performs well (accurately or fairly) on COMPAS, and instead consider the information contained across different possible models, we obtain the result that the average, expected behavior smooths over the variance in underlying models such that the result is close to fair: The average of unfair models with high variance in subgroup error rates is essentially fair.
Stability analysis. To verify the stability of this result, we re-execute our experiments for increasing numbers of train/test splits and replicates. While our results for COMPAS are generally tight for small numbers of splits (e.g., Figures 2 and 5), this was not the case for South German Credit, for which it was difficult to estimate self-consistency consistently. As a result, for COMPAS, we did not expect markedly different results for increased numbers of runs. Our results using logistic regression (Figure 58, Table 6) confirm this intuition.
[Figure 58 and Table 6: stability of estimates for COMPAS using logistic regression.]
We provide analogous results for South German Credit using random forests (Figure 59, Table 7). It takes an enormous number of runs to produce stable estimates of error and self-consistency for South German Credit, and those estimates indicate statistical equality across groups. Arguably, even these results are still very high variance (certainly with respect to error metrics). This task simply has too few data points to generalize reliably.
[Figure 59 and Table 7: stability of estimates for South German Credit using random forests.]
Appendix F Brief notes on future research
There are many interesting directions for future work that are out of scope for the present project. We address some topics below.
Novel theory. We do not include extensive novel theory in this project. Nevertheless, our project raises interesting questions for theory in future work. Notably, we could compose our methodology with post-processing [34] for cases in which there is observed empirical unfairness, and then investigate picking group-specific thresholds that take variance into account. We could also recast the fairness-accuracy trade-off formulations in [34] and related work as actually representing multiple trade-off curves, each a function of the different models under consideration. There may be interesting directions for mathematical analysis here.
We could also extend traditional results on bagging and variance reduction for classifiers. While bagging has guarantees for variance reduction in regression, it does not have the same guarantees for classification [8, 9]. It is generally observed to work well in practice for variance reduction when the underlying classifiers are high variance, which is indeed the regime we are in for this paper. However, there are interesting questions regarding abstention that we could investigate with theoretical tools, which could suggest other ways of reasoning about bagging and variance reduction.
Both of these directions are out of scope for the present paper: they are interesting, but they do not bear on our main experimental aims and contributions, and thus do not fit in a conference-length submission. If anything, our work highlights how over-attention to theory can (directly or indirectly) bring about serious problems of mismeasurement in practice. That is a main takeaway of our work, which by design does not involve novel theory.
Arbitrariness beyond algorithmic fairness. Our framework for reasoning about self-consistency and arbitrariness does not inherently have to do with algorithmic fairness; we could apply it to other domains. For example, it would be interesting to ask similar questions in deep learning and generative AI. We think that such work would be interesting, but it is again out of scope for the present study (the first author is in fact pursuing such questions as separate work). This project’s research aims are inherently focused on fairness; the project was designed in response to observations about experimental practices in the fairness community, fairness definitions, and fairness theory.
Experiments on synthetic data. Our results indicate that unfairness (as defined with respect to model error rates) is not frequently observed on common benchmark tasks in fair classification. Of course, there could be other datasets in fairness domains, not currently used as benchmarks, that more clearly demonstrate unfairness in practice. Hypothetically, there could be datasets for which we use Algorithm 1 to reduce arbitrariness and yet still see significant systematic arbitrariness or differences in error rates (and thus unfairness) due to noise or bias. We simply did not observe this for almost all of the tasks we investigate in this paper, which happen to be the ones the fairness community uses for its experiments.
To study Algorithm 1 in light of these other possibilities, we could develop synthetic datasets that retain unfairness after dealing with arbitrariness. We did not do this in the present study because our focus was the practice of fairness research as it currently stands, taking a data-centric approach to the datasets people actually use in their research; synthetic data is therefore out of scope for this project.
However, future theory results that extend our work could be vetted experimentally with synthetic data. The work we mention above regarding composition with post-processing, as well as revisiting impossibility results from a distributional approach over possible models, may be very interesting to examine under data settings that we can control.
How to deal with abstention. Future work could also perform a deeper exploration of the trade-off between abstention rate and error. We could characterize a Pareto-optimal trade-off as a function of the chosen self-consistency level, and examine both experimentally and analytically how abstention leads to improvements in accuracy. Future work could also identify patterns in abstention sets beyond low self-consistency; here, metrics from the model multiplicity literature may be helpful. Further, future work could incorporate human decision-making or other automated elements to examine how arbitrariness can be rooted out.
Reproducibility. As mentioned in our Ethics Statement, we made attempts to reproduce prior work in fair classification, and often could not. We ultimately deemed reproducing specific papers out of scope for the present project, as we could make our contributions about arbitrariness and variance without doing so. It would nevertheless be useful for future work to reproduce prior algorithmic fairness studies and examine whether their conclusions change when Algorithm 1 is applied before the proposed fairness intervention.
Law and policy. As mentioned in our Ethics Statement, our work on arbitrariness raises concrete questions for the law around due process and automated decision-making. Such contributions are also out of scope for the present work, but we are currently developing them for future submission to a law review journal.