
On Comparing Fair Classifiers under Data Bias

Mohit Sharma
IIIT Delhi
&Amit Deshpande
Microsoft Research India
&Rajiv Ratn
IIIT Delhi
Work partly done during an internship at Microsoft Research India. The author is supported by Microsoft Research India Joint PhD fellowship.
All correspondence to [email protected].
Abstract

In this paper, we consider a theoretical model for injecting data bias, namely, under-representation and label bias (Blum & Stangl, 2019). We empirically study the effect of varying data biases on the accuracy and fairness of fair classifiers. Through extensive experiments on both synthetic and real-world datasets (e.g., Adult, German Credit, Bank Marketing, COMPAS), we audit pre-, in-, and post-processing fair classifiers from standard fairness toolkits for their fairness and accuracy by injecting varying amounts of under-representation and label bias into their training data (but not the test data). Our main observations are: 1. The fairness and accuracy of many standard fair classifiers degrade severely as the bias injected in their training data increases, 2. A simple logistic regression model trained on the right data can often outperform, in both accuracy and fairness, most fair classifiers trained on biased training data, and 3. A few simple fairness techniques (e.g., reweighing, exponentiated gradients) seem to offer stable accuracy and fairness guarantees even when their training data is injected with under-representation and label bias. Our experiments also show how to integrate a measure of data bias risk into existing fairness dashboards for real-world deployments.

1 Introduction

Fairness in AI and machine learning is an important problem because automated decision-making in highly sensitive domains such as justice, healthcare, banking, and education can proliferate social and economic harms [17, 29, 4, 45]. Fair and representative data is a foremost requirement to train and validate fair models. However, most collected and labeled datasets invariably contain historical and systemic biases [5], resulting in unfair and unreliable performance after deployment [58, 13, 61]. Even many standard datasets used to evaluate and compare bias mitigation techniques are not perfectly fair and representative [21, 48].

Recent literature in fair classification has studied the failure of fairness and accuracy guarantees under distribution shifts (e.g., covariate shift, label shift) between the training and test distributions and proposed fixes under reasonable assumptions [57, 20, 54, 59, 44, 22, 10, 56]. In this paper, we consider a simple, natural data bias model that incorporates under-representation, over-representation, and label biases [12], and study the change in accuracy and fairness of various classifiers as we inject bias in their training data. The central question we ask is: Among the various fair classifiers available in the standard fairness toolkits (e.g., [7]), which ones are less vulnerable to data bias?

The definitions of fairness prevalent in literature fall into two broad types: individual fairness and group fairness [24, 6, 50, 27, 50, 33, 62, 8]. Individual fairness promises similar treatment to similar individuals. Our focus is group-fair classification, which promises equal or near-equal outcomes across different sensitive groups (e.g., race, gender). Examples of group fairness include Equal Opportunity (equal false negative rates across groups) and Statistical Parity (equal acceptance rates across groups). Various bias mitigation techniques have been proposed for group-fair classification, motivated by different applications and stages of the machine learning pipeline [69, 29, 45]. They are broadly categorized into three types: 1. Pre-Processing, where one transforms the input data, independent of the subsequent stages of training and validation [37, 25, 15], 2. In-Processing, where models are trained by fairness-constrained optimization [14, 39, 64, 63, 66, 1, 16], and 3. Post-Processing, where the output predictions are adjusted afterward to improve fairness [38, 33, 52]. These techniques appear side by side in standard fairness toolkits with little normative prescription about which technique to choose under data bias. Furthermore, it is unclear what happens to such standard fair classifiers when they are trained and exposed to varying amounts of data bias over time in the training or deployment environments. We attempt to close this gap by providing an auditing method to compare different fair classifiers under a simple model of data bias.

To examine the effect of data bias on group-fair classifiers, we use a theoretical model by Blum & Stangl [12] for under-representation and label bias. It is a simple model for demographic under-/over-sampling and implicit label bias seen in real-world data [58, 13, 61, 32, 41]. Given a train-test data split, we create biased training data by injecting under-representation and label bias in the training data (but not the test data). We then study the error rates and (un)fairness metrics of various pre-, in-, and post-processing fair classifiers on the original test data, as we vary the amount of bias in their training data. We inject two types of under-representation bias in the training data: $\beta_{\text{pos}}$-bias, where we undersample the positive-labeled population of the minority group by a multiplicative factor of $\beta_{\text{pos}}$, and $\beta_{\text{neg}}$-bias, where we undersample the negative-labeled population of the minority group by a multiplicative factor of $\beta_{\text{neg}}$. We examine the effect of increasing the parameters $\beta_{\text{pos}}$ and $\beta_{\text{neg}}$ separately as well as together. We also investigate the effect of the label bias parameter $\nu$, where we flip the positive labels of the minority group to negative ones with probability $\nu$. We cannot hope to re-create real-world data bias scenarios via this synthetic pipeline [13, 61], but our findings on the stability/instability of various classifiers can be useful towards designing fair pipelines in the presence of various data biases.
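As a concrete illustration of this bias-injection step, the following is a minimal sketch in Python, assuming the training data is a pandas DataFrame with a binary label column `y` (1 = favorable) and a binary sensitive column `s` (0 = underprivileged/minority group); the column names and the helper name `inject_bias` are illustrative, not the paper's actual implementation.

```python
import numpy as np
import pandas as pd

def inject_bias(train: pd.DataFrame, beta_pos: float, beta_neg: float,
                nu: float, rng: np.random.Generator) -> pd.DataFrame:
    """Return a copy of `train` with under-representation and label bias injected."""
    df = train.copy()

    # Under-representation bias: keep each (Y=1, S=0) row with prob. beta_pos
    # and each (Y=0, S=0) row with prob. beta_neg; other subgroups untouched.
    keep = rng.random(len(df))
    pos_minority = ((df["y"] == 1) & (df["s"] == 0)).values
    neg_minority = ((df["y"] == 0) & (df["s"] == 0)).values
    mask = np.ones(len(df), dtype=bool)
    mask[pos_minority] = keep[pos_minority] < beta_pos
    mask[neg_minority] = keep[neg_minority] < beta_neg
    df = df[mask].copy()

    # Label bias: flip favorable labels of the underprivileged group w.p. nu.
    flip = (df["y"] == 1) & (df["s"] == 0) & (rng.random(len(df)) < nu)
    df.loc[flip, "y"] = 0
    return df
```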

Our Contributions:

Our main contributions and observations can be summarized as follows.

  • Using synthetic and real-world datasets, we show that the fairness and accuracy guarantees of many fair classifiers from standard fairness toolkits are highly vulnerable to under-representation and label bias. In fact, often, an unfair classifier (e.g., logistic regression classifier) trained on the correct data can be more accurate and fairer than fair classifiers trained on biased data.

  • Some fair classification techniques (viz., reweighing [37], exponentiated gradients [1], adaptive reweighing [36]) have stable accuracy and fairness guarantees even when their training data is injected with under-representation and label bias. We provide a theoretical justification to support the empirically observed stability of the reweighing classifier [37]; see Theorem 1.

  • Our experimental pipeline can be leveraged to create audit and test suites to check the vulnerability of fair classifiers to simple models of data bias before their real-world deployment.

The rest of the paper is organized as follows. Section 2 explains related work to set the context for our work. Section 3 explains our experimental setup, and Section 4 discusses our key empirical observations. Section 5 contains some relevant theorems and proof outlines. Section 6 discusses our work in the context of other research directions on data bias in fair machine learning.

2 Related Work

The robustness of machine learning models when there is a mismatch between their training and test distributions has been studied under various theoretical models of distribution shifts, e.g., covariate shift and label shift. Recent works have studied fair classification subject to these distribution shifts and proposed solutions under reasonable assumptions on the data distribution [19, 57, 12, 54, 59, 20, 10, 22, 44, 31]. As pointed out in the introduction, our goal is different, so we defer a detailed discussion of the contributions of these works to Appendix A.1. However, later in Section 6, we discuss different distribution shifts and their relation to the data bias model of Blum & Stangl [12] used in our paper.

Our work builds on the under-representation and label bias model proposed in Blum & Stangl [12]. Under this model, they prove that the optimal fair classifier that maximizes accuracy subject to fairness constraints (equal opportunity constraints, to be precise) on the biased data distribution gives the maximum accuracy on the unbiased data distribution. For under-representation bias, the above result can be achieved by equalized odds and equal opportunity constraints but not by demographic parity constraints. Along similar lines, Maity et al. [44] derive necessary and sufficient conditions about when performance parity constraints in the presence of subpopulation shifts give better accuracy on the test distribution. Jiang et al. [36] have also considered the label bias model, although we look at the aggregate effect over varying amounts of label bias. Our observations on label bias also corroborate various findings by other works investigating the role of label noise in fair classification [43, 28, 60, 42]. Under-representation bias can also be studied as a targeted Sample Selection Bias mechanism [18]. There is some work done on fair sample selection bias [22, 67], which we elaborate on in Appendix A.1.

Parallel to our work, a recent preprint by Akpinar et al. [2] proposes a simulation toolbox based on the under-representation setting from Blum & Stangl [12] to stress test four different fairness interventions with user-controlled synthetic data and data biases. On the other hand, our work uses the under-representation framework from Blum & Stangl [12] to extensively examine the stability of different types of fair classifiers on various synthetic and real-world datasets. While Akpinar et al. [2] focus on proposing a simulation framework to extensively test all findings of Blum & Stangl [12], we focus on using the under-representation framework to highlight the differences in performance between various fair classifiers and theoretically investigating the stability of one of the fair classifiers.

We compare various fair classifiers in the presence of under-representation and label bias, which complements some expository surveys in fair machine learning [69, 45, 49, 26] and some benchmarking on important aspects like the choice of data processing, splits, data errors, and data pre-processing choices [29, 10, 53, 35]. We elaborate on these works in Appendix A.2.

3 Experimental Setup

For our analysis, we select a comprehensive suite of commonly used fair classifiers implemented in the open-source AI Fairness 360 (AIF-360) toolkit [7]. From the available choice of pre-processing classifiers, we include two: 1. Reweighing (‘rew’) [37], and 2. Adaptive Reweighing (‘jiang_nachum’) [36]. Since ‘jiang_nachum’ is not a part of AIF-360, we use the authors’ implementation (https://github.com/google-research/google-research/tree/master/label_bias). Among the in-processing classifiers in AIF-360, we choose three: 1. the Prejudice Remover classifier (‘prej_remover’) [39], 2. the Exponentiated Gradient Reduction classifier (‘exp_grad’) and its deterministic version (‘grid_search’) [1], and 3. the method from Kearns et al. (‘gerry_fair’) [40]. From post-processing classifiers, we choose three: 1. the Reject Option classifier (‘reject’) [38], 2. the Equalized Odds classifier (‘eq’) [33], and 3. the Calibrated Equalized Odds algorithm (‘cal_eq’) [52]. We omitted the use of some classifiers in the AIF-360 toolkit for two reasons: resource constraints and their extreme sensitivity to the choice of hyperparameters.
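For concreteness, below is a minimal sketch of how one of these methods, the pre-processing Reweighing classifier, is typically invoked through AIF-360; the toy DataFrame, column names, and group dictionaries are placeholders, and exact argument names may differ across toolkit versions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing

# Tiny toy frame; 'y' is the binary label and 'sex' the binary sensitive attribute.
df = pd.DataFrame({"feat": [0.1, 0.9, 0.4, 0.7, 0.2, 0.8],
                   "sex":  [0,   0,   1,   1,   0,   1],
                   "y":    [0,   1,   1,   0,   1,   1]})
train = BinaryLabelDataset(df=df, label_names=["y"],
                           protected_attribute_names=["sex"],
                           favorable_label=1, unfavorable_label=0)

rw = Reweighing(unprivileged_groups=[{"sex": 0}],
                privileged_groups=[{"sex": 1}])
train_rw = rw.fit_transform(train)

# Per-example weights to pass as sample_weight when fitting the base model.
print(train_rw.instance_weights)
```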

Each fair classifier uses a base classifier in its optimization routine to mitigate unfairness. We experiment with two base classifiers: a logistic regression (LR) model and a linear SVM classifier with Platt’s scaling [51] for probability outputs. We use the Scikit-learn toolkit to implement the base classifiers [47]. The ‘prej_remover’ and ‘gerry_fair’ classifiers do not support SVM as a base classifier; hence, we only show results for those with LR.
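A sketch of the two base classifiers with scikit-learn is shown below; Platt’s scaling for the linear SVM is obtained via sigmoid calibration, and the specific hyperparameters are illustrative rather than the paper’s exact settings.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

# Base classifier 1: plain logistic regression.
base_lr = LogisticRegression(max_iter=1000)

# Base classifier 2: linear SVM wrapped with Platt's scaling (sigmoid
# calibration) so that it exposes predict_proba like the LR model.
base_svm = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
```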

The fairness metric we use in the main paper is Equalized Odds (EODDS) [33], which we measure as an equally weighted combination of the false negative rate and false positive rate differences between groups: $0.5\,|\Pr(\hat{Y}=0 \mid Y=1, S=1) - \Pr(\hat{Y}=0 \mid Y=1, S=0)| + 0.5\,|\Pr(\hat{Y}=1 \mid Y=0, S=1) - \Pr(\hat{Y}=1 \mid Y=0, S=0)|$, where $\hat{Y} \in \{0,1\}$ denotes the predicted label, $Y \in \{0,1\}$ denotes the binary class label, and $S \in \{0,1\}$ denotes a binary sensitive attribute. Blum & Stangl [12] suggest that equal opportunity and equalized odds constraints on biased data can be more beneficial than demographic parity constraints. Therefore, we use the equalized odds constraint for ‘jiang_nachum’ and ‘exp_grad’. Different classifiers have been proposed for different fairness metrics and definitions; however, in practice, we often compare different methods using one or two standard metrics. Since most of the methods used in this paper were trained for Equalized Odds, we report Equalized Odds results in the main text, and additionally report average Equal Opportunity Difference (EOD) and Statistical Parity Difference (SPD) results over the under-representation bias settings to give a complete picture of the effect of data bias on fairness and error rate. EOD is defined as the true positive rate difference $|\Pr(\hat{Y}=1 \mid Y=1, S=1) - \Pr(\hat{Y}=1 \mid Y=1, S=0)|$, while SPD is defined as $|\Pr(\hat{Y}=1 \mid S=1) - \Pr(\hat{Y}=1 \mid S=0)|$. For both metrics, values closer to $0$ mean better fairness, whereas values closer to $1$ indicate an extreme parity difference between the two sensitive groups $S=0$ and $S=1$.
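These three metrics can be computed directly from predictions; a small NumPy sketch follows, with variable names (`y_hat`, `y`, `s`) and helper names that are illustrative rather than part of any toolkit.

```python
import numpy as np

def group_rate(y_hat, y, s, pred, true, group):
    """P(Y_hat = pred | Y = true, S = group); pass true=None to drop the Y condition."""
    mask = (s == group) if true is None else ((y == true) & (s == group))
    return np.mean(y_hat[mask] == pred)

def eodds(y_hat, y, s):
    # Equally weighted FNR and FPR gaps between S=1 and S=0.
    fnr_gap = abs(group_rate(y_hat, y, s, 0, 1, 1) - group_rate(y_hat, y, s, 0, 1, 0))
    fpr_gap = abs(group_rate(y_hat, y, s, 1, 0, 1) - group_rate(y_hat, y, s, 1, 0, 0))
    return 0.5 * fnr_gap + 0.5 * fpr_gap

def eod(y_hat, y, s):
    # Equal Opportunity Difference: TPR gap between groups.
    return abs(group_rate(y_hat, y, s, 1, 1, 1) - group_rate(y_hat, y, s, 1, 1, 0))

def spd(y_hat, y, s):
    # Statistical Parity Difference: acceptance-rate gap between groups.
    return abs(group_rate(y_hat, y, s, 1, None, 1) - group_rate(y_hat, y, s, 1, None, 0))
```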

We train on multiple datasets to perform our analysis. We consider four standard real-world datasets from the fair classification literature: the Adult Income dataset [23], the Bank Marketing dataset [46], the COMPAS dataset [3], and the German Credit dataset [23]. We also consider a synthetic dataset setup from Zeng et al. [65], which consists of a binary label $y \in \{0,1\}$ and a binary sensitive attribute $s \in \{0,1\}$. We take the original train-test split for a given dataset, or in its absence, a 70%-30% stratified split on the subgroups. Appendix B gives specific details for all datasets.
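When a dataset ships without a canonical split, the 70%-30% split stratified on subgroups can be done as in the following sketch; the toy DataFrame and column names are stand-ins for an actual dataset.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for a real dataset: 'y' is the label, 's' the sensitive attribute.
rng = np.random.default_rng(0)
df = pd.DataFrame({"x1": rng.normal(size=1000),
                   "s": rng.integers(0, 2, size=1000),
                   "y": rng.integers(0, 2, size=1000)})

# Stratify on the (label, sensitive-attribute) pair so that all four subgroups
# keep their original proportions in both the train and test splits.
strata = df["y"].astype(str) + "_" + df["s"].astype(str)
train_df, test_df = train_test_split(df, test_size=0.3, stratify=strata,
                                     random_state=0)
```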

To perform our analysis, we use the data bias model from Blum & Stangl [12] and inject varying amounts of under-representation and label bias into the original training data before giving it to a classifier. We summarize our experimental setup in Algorithm 1, presented in Appendix C. Let $Y \in \{0,1\}$ represent the label and $S \in \{0,1\}$ represent the sensitive attribute. Let $\beta_{pos}$ be the probability of retaining samples from the subgroup defined by the favorable label $Y=1$ and the underprivileged group $S=0$ (the $10$-subgroup). Similarly, let $\beta_{neg}$ be the probability of retaining samples from the subgroup defined by the unfavorable label $Y=0$ and the underprivileged group $S=0$ (the $00$-subgroup). We inject under-representation bias into the training data by retaining samples from the $10$-subgroup and the $00$-subgroup with probability $\beta_{pos}$ and $\beta_{neg}$, respectively. We consider ten different values each for $\beta_{pos}$ and $\beta_{neg}$, varying over $\{0.1, 0.2, \ldots, 1.0\}$. This results in $100$ different settings ($10$ $\beta_{pos}$ factors $\times$ $10$ $\beta_{neg}$ factors) for training data bias. We separately inject ten different levels of label bias by flipping the favorable label of the underprivileged group to an unfavorable one ($10 \rightarrow 00$) with a probability $\nu$ varying over $\{0.0, 0.1, \ldots, 0.9\}$. The test set for all settings is the original test split, either provided at the original dataset source or held out separately beforehand. This results in $110$ different datasets corresponding to different data bias settings, and consequently $110$ different results for each fair classifier. Finally, the training of all classifiers and the procedure to create biased training data using $\beta_{pos}$, $\beta_{neg}$, and $\nu$ is repeated $5$ times to account for randomness in the sampling and optimization procedures.
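The overall sweep can be summarized by the loop sketch below; it is a hypothetical outline, where `inject_bias` is the bias-injection helper sketched in the introduction and `train_and_evaluate` is a caller-supplied function that fits one (fair) classifier on the biased training data and returns its metrics on the untouched test split.

```python
import itertools
import numpy as np

def bias_sweep(train_df, test_df, inject_bias, train_and_evaluate, n_seeds=5):
    """Run the 100 under-representation settings plus 10 label-bias settings."""
    beta_grid = [round(0.1 * k, 1) for k in range(1, 11)]   # 0.1, 0.2, ..., 1.0
    nu_grid = [round(0.1 * k, 1) for k in range(0, 10)]     # 0.0, 0.1, ..., 0.9
    results = []
    for seed in range(n_seeds):
        rng = np.random.default_rng(seed)
        # 100 combined under-representation settings (no label bias).
        for beta_pos, beta_neg in itertools.product(beta_grid, beta_grid):
            biased = inject_bias(train_df, beta_pos, beta_neg, 0.0, rng)
            results.append((seed, beta_pos, beta_neg, 0.0,
                            train_and_evaluate(biased, test_df)))
        # 10 separate label-bias settings (no under-representation).
        for nu in nu_grid:
            biased = inject_bias(train_df, 1.0, 1.0, nu, rng)
            results.append((seed, 1.0, 1.0, nu,
                            train_and_evaluate(biased, test_df)))
    return results
```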

We also note that some fair classifiers in our list, like ‘exp_grad’ and ‘cal_eq’, are randomized classifiers. Therefore, repeating the entire pipeline multiple times and reporting means and standard deviations normalizes the random behavior of these methods to some extent, in addition to the measures already taken in the implementations of these methods in widely used toolkits such as AIF-360 [7]. The code to train and reproduce our results is available at https://github.com/mohitsharma29/comparision_data_bias_fairness.

4 Stability of Fair Classifiers: Under-Representation and Label Bias

In this section, we present our experimental results about the stability of fairness and accuracy guarantees of various fair classifiers after injecting under-representation and label bias in their training data. By stability, we mean whether the error rate and unfairness show high variance/spread in response to increasing or decreasing under-representation or label bias. Stable behavior is indicated by small changes in error rate and unfairness as $\beta_{pos}$, $\beta_{neg}$, or $\nu$ change.

Figure 1: Error Rate-EODDS values for various classifiers on all datasets, with no explicit under-representation and label bias. The blue horizontal and vertical lines denote the error rate and EODDS of a base LR model without any fairness constraints.

We first plot the unfairness (Equalized Odds) and error rates of various fair classifiers on the original training splits. Figure 1 shows the results of various fair classifiers with a logistic regression model on the original dataset splits without any explicitly injected under-representation or label bias. While most fair classifiers exhibit the expected behavior of trading off some accuracy to minimize unfairness, some classifiers perform worse in mitigating unfairness than the unfair LR model. We attribute this to the strong effects of different kinds of data preprocessing on the performance of fair classifiers, as noted in previous works [29, 11]. Other results with SVM as a base classifier, Equal Opportunity Difference, and Statistical Parity Difference are given in Appendix D.

We can also inspect Figure 1 and other figures later in the paper by looking at the four quadrants formed by the blue horizontal and vertical lines corresponding to the unfairness and error rate of the base LR model. Following a Cartesian coordinate system, most fair classifiers, when trained on the original training set without explicit under-representation or label bias, lie in the fourth quadrant, as they trade off some accuracy to mitigate unfairness. Any method lying in the first quadrant is poor, as it is worse than the base LR model in terms of both error rate and unfairness. Finally, any method lying in the third quadrant is excellent, as it improves on both error rate and unfairness compared to the LR model.

[Figure 2, panels (a) and (b): plots omitted in this version; panel (c) is reproduced as the table below.]

Algo. | Base | Err. | SPD | EOD
base | lr | 0.324 ± 0.011 | 0.206 ± 0.011 | 0.126 ± 0.024
base | svm | 0.338 ± 0.013 | 0.186 ± 0.014 | 0.106 ± 0.028
rew | lr | 0.327 ± 0.006 | 0.198 ± 0.026 | 0.123 ± 0.037
rew | svm | 0.343 ± 0.008 | 0.177 ± 0.027 | 0.101 ± 0.04
jiang_nachum | lr | 0.39 ± 0.014 | 0.066 ± 0.017 | 0.018 ± 0.009
jiang_nachum | svm | 0.389 ± 0.011 | 0.068 ± 0.018 | 0.016 ± 0.006
exp_grad | lr | 0.333 ± 0.009 | 0.07 ± 0.025 | 0.029 ± 0.022
exp_grad | svm | 0.354 ± 0.017 | 0.057 ± 0.02 | 0.029 ± 0.021
grid_search | lr | 0.336 ± 0.016 | 0.194 ± 0.107 | 0.163 ± 0.096
grid_search | svm | 0.358 ± 0.025 | 0.163 ± 0.092 | 0.13 ± 0.089
gerry_fair | lr | 0.41 ± 0.037 | 0.113 ± 0.057 | 0.097 ± 0.082
gerry_fair | svm | - | - | -
prej_remover | lr | 0.654 ± 0.024 | 0.303 ± 0.137 | 0.246 ± 0.118
prej_remover | svm | - | - | -
eq | lr | 0.37 ± 0.012 | 0.05 ± 0.016 | 0.044 ± 0.021
eq | svm | 0.381 ± 0.014 | 0.046 ± 0.018 | 0.043 ± 0.022
cal_eq | lr | 0.343 ± 0.022 | 0.251 ± 0.141 | 0.171 ± 0.092
cal_eq | svm | 0.353 ± 0.022 | 0.216 ± 0.114 | 0.144 ± 0.083
reject | lr | 0.415 ± 0.019 | 0.079 ± 0.027 | 0.038 ± 0.019
reject | svm | 0.413 ± 0.021 | 0.081 ± 0.026 | 0.041 ± 0.018

Figure 2: Stability analysis of various fair classifiers on the Compas dataset (the lighter the shade, the more the under-representation and label bias): (a) The spread of error rates and Equalized Odds (EODDS) of various fair classifiers as we vary $\beta_{pos}$, $\beta_{neg}$, and label bias separately. The vertical and horizontal blue reference lines denote the performance of an unfair logistic regression model on the original dataset without any under-representation or label bias. (b) Heatmaps of error rate and EODDS across all different settings of $\beta_{pos}$ and $\beta_{neg}$. Darker values denote lower error rates and unfairness; uniform values across the grid indicate stability to different $\beta_{pos}$ and $\beta_{neg}$. (c) Mean error rates, Statistical Parity Difference (SPD), and Equal Opportunity Difference (EOD) across all $\beta_{pos}$ and $\beta_{neg}$ settings for both kinds of base classifiers (SVM and LR).

Figure 1 shows how various fair classifiers perform without explicit under-representation and label bias. Next, we summarize all the available results for a given dataset in various ways to get a holistic picture. We show the stability of error rates and Equalized Odds unfairness, and look at the average stability of the other two metrics, Equal Opportunity Difference and Statistical Parity Difference, since in practice we often care about the performance of methods against a fixed set of metrics. We now look at the individual effects of both under-representation factors ($\beta_{pos}$, $\beta_{neg}$) and label bias ($\nu$), with the LR model as a reference, denoted by the vertical and horizontal error rate and Equal Opportunity Difference (EOD) lines, respectively. Figure 2(a) shows these results for the Compas dataset. The results for the Adult, Synthetic, Bank Marketing, and Credit datasets are given in Appendix E. We first observe that many fair classifiers exhibit a large variance in the unfairness-error rate plane and often become unfair as the bias factors increase. This is also indicated by how the fair classifiers jump from quadrant four to quadrant one as $\beta_{pos}$, $\beta_{neg}$, or $\nu$ increase. In fact, for the ‘prej_remover’ classifier, the results lie outside the plot limits, indicating its instability. Even when its average SPD is measured in Figure 2(c) (‘prej_remover’ is implicitly optimized for SPD), we observe that it does not perform well.

However, the ‘rew’, ‘jiang_nachum’, and ‘exp_grad’ fair classifiers remain stable across all three kinds of biases for both error rate and EOD, showing minimal spread along both the error rate and unfairness (EOD) axes. They tend to remain in the same quadrant as the bias factors vary. This remains true on other datasets as well (as shown in Appendix E). Furthermore, it is interesting to note that a simple logistic regression model tends to stay relatively stable in the presence of under-representation and label biases, and on many occasions is fairer than many of the fair classifiers in our analysis.

We can also look at the combined effect of $\beta_{pos}$ and $\beta_{neg}$ in Figure 2(b) with the help of heatmaps. Each cell in the heatmap denotes a possible $(\beta_{pos}, \beta_{neg})$ setting, thus covering all $100$ possible settings. Darker cell colors denote lower error rates and unfairness. To examine stability with heatmaps, we look for uniformity of colors across the entire grid, which means that the error rate and unfairness values do not change with different $\beta_{pos}, \beta_{neg}$ values.

Figure 2(b) again confirms the stability of the ‘rew’, ‘jiang_nachum’, and ‘exp_grad’ fair classifiers even when they are trained on datasets with combined $\beta_{pos}$ and $\beta_{neg}$ biases. However, ‘jiang_nachum’ and ‘exp_grad’ emerge as stable and better classifiers in terms of their unfairness values than the ‘rew’ classifier, because they have darker, more uniform colors across the entire error rate and unfairness grid. The ‘eq’ classifier also remains stable for unfairness, except on the Credit dataset (shown in Appendix E). Among the classifiers optimized for Equalized Odds (‘jiang_nachum’, ‘exp_grad’, ‘eq’, and ‘cal_eq’), ‘jiang_nachum’ and ‘exp_grad’ emerge as the most stable in terms of error rate and unfairness.

To quantitatively summarize the findings from the heatmap plots, and to show this statistic with other choices of base classifiers (LR and SVM) and unfairness metrics (SPD and EOD), we look at the average performance across the $100$ different settings of $\beta_{pos}$ and $\beta_{neg}$ in Figure 2(c) for the Compas dataset. Stability here is indicated by whether the mean error rate and unfairness across the $100$ different under-representation settings stay low, and whether this mean performance has a low standard deviation. Because we run the whole experimental setup $5$ times with different random seeds for reproducibility, the reported values are the mean, over the $100$ settings, of the average across the $5$ runs for each $(\beta_{pos}, \beta_{neg})$ setting, and its standard deviation over the $100$ settings. We embolden the top-3 mean and top-3 standard deviation values for each column, where top-3 means the 3 lowest values of the error rate and the unfairness metrics (SPD and EOD).
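A sketch of how this aggregation can be computed from the sweep results, assuming a long-form pandas DataFrame whose column names (`algo`, `base`, `seed`, `beta_pos`, `beta_neg`, `err`, `spd`, `eod`) are illustrative:

```python
import pandas as pd

def summarize(results: pd.DataFrame) -> pd.DataFrame:
    """Mean and std of error rate, SPD, and EOD over the 100 bias settings."""
    # Average over the 5 seeds within each (algo, base, beta_pos, beta_neg) cell,
    # then report mean and standard deviation over the 100 bias settings.
    per_setting = (results
                   .groupby(["algo", "base", "beta_pos", "beta_neg"])[["err", "spd", "eod"]]
                   .mean())
    return per_setting.groupby(["algo", "base"]).agg(["mean", "std"])
```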

For all datasets, the observations across all emboldened values corroborate our findings from the heatmap plots. For both unfairness metrics (SPD and EOD), ‘exp_grad’ and ‘jiang_nachum’ often appear in the top-3 mean performance across all settings, whereas ‘rew’, ‘jiang_nachum’, and ‘exp_grad’ often appear in the top-3 smallest standard deviation values, indicating their stability. Furthermore, ‘rew’ and ‘jiang_nachum’ often appear in the top-3 mean and standard deviation lists for error rate. ‘rew’ often appears in the top-3 mean performance for Statistical Parity Difference (SPD). As expected, the unfair base classifier often appears in the top-3 mean error rate list, since its objective is always to minimize error rates without regard for unfairness. Finally, looking over the results on all datasets, we find that ‘exp_grad’ and ‘jiang_nachum’ often emerge as the most stable classifiers for other metrics like EOD and SPD, despite not being trained to minimize those metrics, further solidifying the evidence for their stable behavior.

Another interesting observation across the results of all datasets is that the deterministic version of the ‘exp_grad’ fair algorithm, the ‘grid_search’ classifier [1], is very unstable compared to the ‘exp_grad’ classifier, which is the most stable among the lot, indicating that randomness might help in the face of data bias. Other results, such as versions of Figure 2(a,b) with SPD as the unfairness metric, could not be included in the manuscript.

Finally, it is worth noting that the stability of ‘rew’ and ‘exp_grad’ has also been reported in another benchmarking study that tests various fair classifiers in the presence of varying sensitive attribute noise [30]. This points towards potentially stable performance of reweighing-based approaches in the presence of various kinds of data biases and noise in the training data.

5 Stability Theorems for Reweighing Classifier

The following theorems give a theoretical justification for the stability of test accuracy when the reweighing classifier [37] is trained even on data injected with extreme under-representation, such as $\beta_{\text{pos}} \rightarrow 0$ or $\beta_{\text{neg}} \rightarrow 0$.

Theorem 1.

Let $D$ be the original data distribution and $\beta \in (0,1]$. Let $D_{\beta}$ be the biased data distribution after injecting under-representation bias in $D$ using $\beta_{\text{pos}} = \beta$ (or, similarly, $\beta_{\text{neg}} = \beta$). Let $f$ be any binary classifier and $L(f)$ be any non-negative loss function on the prediction of $f$ and the true label. Let $L'$ denote the reweighed loss optimized by the reweighing classifier [37]. Then

\[
\frac{\alpha^{2}}{4}\,\mathbb{E}_{D}[L(f)] \;\leq\; \mathbb{E}_{D_{\beta}}[L'(f)] \;\leq\; \frac{4}{\alpha}\,\mathbb{E}_{D}[L(f)],
\]

where $\alpha = \frac{\min_{i,j} \Pr(Y=i, S=j)}{\max_{i,j} \Pr(Y=i, S=j)}$ denotes the subgroup imbalance in the original distribution $D$.

The proof for this theorem is given in Appendix F. It involves bounding subgroup and group/label probabilities using $\alpha$, together with the reasonable assumption that the conditional distribution of $X' \mid Y'=y, S'=s$ under $D_{\beta}$ equals that of $X \mid Y=y, S=s$ under $D$. The main takeaway from Theorem 1 is the following: when we reweigh a given non-negative loss function using the reweighing scheme of Kamiran & Calders [37] on samples from $D_{\beta}$, we always lie within a constant ‘radius’ of the expected loss on the true distribution $D$, for any classifier $f$. This does not tell us anything about the expected loss on $D$ itself, but rather about the ability of the reweighed loss $L'$ to recover from any arbitrary under-representation bias $\beta$. However, the constant factors depend on $\alpha$, the worst-case imbalance between the subgroups for a given distribution.
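For reference, the reweighing scheme of Kamiran & Calders [37] that defines $L'$ weights each example by the ratio of the expected to observed frequency of its (label, group) subgroup; the sketch below computes these weights as we understand the scheme (it is not the AIF-360 implementation, and variable names are illustrative).

```python
import numpy as np

def reweighing_weights(y: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Per-example weights w(y, s) = P(Y=y) * P(S=s) / P(Y=y, S=s)."""
    w = np.empty(len(y), dtype=float)
    for yv in (0, 1):
        for sv in (0, 1):
            mask = (y == yv) & (s == sv)
            # Ratio of expected (independent) to observed subgroup frequency.
            w[mask] = (np.mean(y == yv) * np.mean(s == sv)) / mask.mean()
    return w

# The reweighed loss L'(f) in Theorem 1 is then the weighted empirical loss:
#   (1/n) * sum_i w(y_i, s_i) * L(f(x_i), y_i).
```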

Remark 1.

Whenever $\mathbb{E}_{D}[L(f)]$ is small, Theorem 1 says that the expected reweighed loss on the biased distribution, $\mathbb{E}_{D_{\beta}}[L'(f)]$, will also be small, for any classifier $f$. This means that if we are given a non-negative loss function that also subsumes a given fairness constraint (call it $L_{\text{fair}}$), then reweighing with $L_{\text{fair}}$ can give us classifiers with low unfairness and stable behavior.

Using Theorem 1, we prove the following bound on the expected loss of the reweighing classifier obtained by empirical minimization of the reweighed loss on the biased distribution $D_{\beta}$ but tested on the original distribution $D$.

Theorem 2.

For any $\beta \in (0,1]$, let $\hat{g}_{\beta}$ be the classifier obtained by empirical minimization of the reweighed loss $L'$ on $N$ samples from the biased distribution $D_{\beta}$ defined as in Theorem 1. Then, with probability at least $1-\delta$, we have

\[
\mathbb{E}_{D}[L(\hat{g}_{\beta})] \;\leq\; \frac{16}{\alpha^{3}\beta}\,\sqrt{\frac{\ln(2/\delta)}{2N}} + \frac{16}{\alpha^{3}}\,\mathbb{E}_{D}[L(f^{*})],
\]

where $f^{*}$ is the Bayes optimal classifier on the original distribution $D$, and $\alpha$ is the subgroup imbalance as in Theorem 1.

The proof for this theorem is given in Appendix F. It uses a generalized version of Hoeffding's inequality [34], the proof of Theorem 2 of Zhu et al. [68], and Theorem 1. Note that Theorem 2 implies that, given any $\epsilon > 0$ and $\delta > 0$, we can get the guarantee

\[
\mathbb{E}_{D}[L(\hat{g}_{\beta})] \;\leq\; \frac{16}{\alpha^{3}}\,\mathbb{E}_{D}[L(f^{*})] + \epsilon,
\]

with probability at least $1-\delta$, by simply choosing the number of samples $N$ from $D_{\beta}$ in the empirical reweighed loss minimization to be large enough so that

\[
N \;\geq\; \frac{128\log(2/\delta)}{\alpha^{6}\beta^{2}\epsilon^{2}}.
\]

It is important to note that Theorems 1 and 2 hold even for data without any injected bias, i.e., $\beta_{\text{pos}} = \beta_{\text{neg}} = 1$, for which such theoretical guarantees for the reweighing classifier [37] were not known earlier.
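As a small numerical illustration of the sample-size bound above (the particular values of $\alpha$, $\beta$, $\epsilon$, and $\delta$ below are chosen purely for illustration, and the logarithm is taken to be natural):

```python
import math

def samples_needed(alpha: float, beta: float, eps: float, delta: float) -> int:
    """N >= 128 * ln(2/delta) / (alpha^6 * beta^2 * eps^2), as in Theorem 2."""
    return math.ceil(128 * math.log(2 / delta) / (alpha**6 * beta**2 * eps**2))

# Example: moderate subgroup imbalance (alpha = 0.5), strong under-representation
# (beta = 0.2), accuracy target eps = 0.1, and failure probability delta = 0.05.
# The bound grows quickly as alpha and beta shrink.
print(samples_needed(alpha=0.5, beta=0.2, eps=0.1, delta=0.05))
```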

6 Discussion

In this section, we discuss where our results stand in the context of related works on distribution shifts and data bias in real-world AI/ML models and make a few practical recommendations based on our work.

6.1 Under-Representation and Distribution Shifts

Let $P_{\text{train}}$ and $P_{\text{test}}$ denote the training and test distributions, respectively, of a random data point $(X, S, Y)$ with features $X$, sensitive attribute $S$, and class label $Y$. Let $Y=1$ be the favorable label and $S=0$ be the underprivileged group. A covariate shift in fair classification is defined by the condition $P_{\text{train}}(Y \mid X, S) = P_{\text{test}}(Y \mid X, S)$ [54]. Singh et al. [59] look at fairness and covariate shift from a causal perspective using the notion of a separating set of features that induce a covariate shift. Coston et al. [19] look at covariate shift and domain adaptation when sensitive attributes are not available at the source (train) or target (test).

Distribution shifts can also be induced via class labels. Dai et al. [20] present a more general model for label bias, building upon the work of Blum & Stangl [12]. They present a model for label shift where $P_{\text{train}}(Y) \neq P_{\text{test}}(Y)$ but $P_{\text{train}}(X \mid Y, S) = P_{\text{test}}(X \mid Y, S)$. Biswas et al. [10] study shifts in prior probability, where $P_{\text{train}}(Y=1 \mid S) \neq P_{\text{test}}(Y=1 \mid S)$ but $P_{\text{train}}(X \mid Y=1, S) = P_{\text{test}}(X \mid Y=1, S)$. Our work is also different from sample selection bias [18] (selection of training samples at random vs. sensitive-group-dependent under-sampling and label flipping), but may have some connections to fair sample selection bias [22], where the objective is to train only on the selected samples ($\mathscr{S}=1$) but generalize well for fairness and accuracy on the overall population (both unselected $\mathscr{S}=0$ and selected $\mathscr{S}=1$ samples). An interesting question in this regard is to design fair classifiers that are stable against a range of malicious sample selection biases.

The under-representation model used in our paper comes from Blum & Stangl [12] and can be thought of as the following distribution shift: $P_{\text{train}}(Y=0, S=0) = \beta_{\text{neg}} \cdot P_{\text{test}}(Y=0, S=0)$ and $P_{\text{train}}(Y=1, S=0) = \beta_{\text{pos}} \cdot P_{\text{test}}(Y=1, S=0)$, while the other two subgroups, $(Y=0, S=1)$ and $(Y=1, S=1)$, are left untouched. Overall, the distribution of $(X, S)$ remains unchanged, while the distribution of $Y$ changes only for the underprivileged group $S=0$. The above distribution shift is different from covariate and label shifts in that it affects both the joint marginal $P(X, S)$ and $P(X \mid Y, S)$.

In a broader sense beyond the mathematical definition used in our work, under-representation of a demographic and implicit bias in the labels are known problems observed in real-world data. Shankar et al. [58] highlight geographic over-representation issues in public image datasets and how that can harm use cases concerning prediction tasks involving developing countries. Buolamwini et al. [13] highlight a similar under-representation issue with gender classification systems trained on over-represented lighter-skinned individuals. Wilson et al. [61] highlight similar disparities for pedestrian detection tasks. Biased labels in the training data have also been observed in the context of implicit bias in the literature [32, 41].

6.2 Practical Recommendations

We show that the performance guarantees of commonly used fair classifiers can be highly unstable when their training data has under-representation and label bias. Our experimental setup serves as a template of a dashboard to check the vulnerability of fair classifiers to data bias by injecting bias using simple models similar to Blum & Stangl [12].

Our work motivates the importance of creating test suites for robustness to data bias and incorporating them with modern fairness toolkits such as AIF-360 [7]. Our results complement previous studies on the stability across different train-test splits [29] or different data pre-processing strategies [11]. We experiment with a range of $\beta_{pos}$, $\beta_{neg}$, and $\nu$ factors. In practice, this can be done with a user-supplied range of under-representation and label bias relevant for specific use cases. The ability to measure the stability of chosen fairness metrics across dynamically specified bias factors can be a great first step towards safe deployment of fair classifiers, similar to recent works like Shifty [31].

Our results show that some classifiers, like ‘jiang_nachum’ [36] and ‘exp_grad’ [1], can remain relatively unscathed in the presence of varying under-representation and label biases. The stability is maintained even when they are trained and tested on mismatching fairness metrics. This motivates using such reweighing methodologies when the level of data bias a model might see is unknown.

Motivated by the same setup of Blum & Stangl [12], Akpinar et al. [2] also propose a toolkit and dashboard to stress test fair algorithms under a user-controlled synthetic setup, which allows the user to control various aspects of the simulation. A handy addition to existing fairness toolkits such as AIF-360 [7] and Fairlearn [9] would be to incorporate input distribution stability routines from our work and Akpinar et al. [2], along with some comparative analysis routines from Friedler et al. [29], to create a practical checklist for any fairness algorithm. Finally, our experimental setup can provide an additional tool for checking the robustness of models to data bias, along with the existing robustness test suites, e.g., CheckList [55].

7 Conclusion

We show that many state-of-the-art fair classifiers exhibit a large variance in their performance when their training data is injected with under-representation and label bias. We propose an experimental setup to compare different fair classifiers under data bias. We show how the Bayes optimal unfair and fair classifiers react to under-representation bias and propose a reweighing scheme to recover the true fair Bayes optimal classifiers while optimizing over biased distributions with a known bias. We give a theoretical bound on the accuracy of the reweighing classifier [37] that holds even under extreme data bias. Finally, we discuss our work in the broader context of the data bias literature and make practical recommendations.

A limitation of our work is that we use a simple model for injecting under-representation and label bias, with which one cannot hope to address the root cause behind different kinds of biases in real-world data. Thinking about the causal origin of data bias instead may allow one to model more types of biases and obtain better fixes for them under reasonable assumptions. Theoretically explaining the stability of the ‘exp_grad’ [1] and ‘jiang_nachum’ [36] fair classifiers under under-representation bias is also an interesting direction for future work.

References

  • [1] Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In International Conference on Machine Learning, pages 60–69. PMLR, 2018.
  • [2] Nil-Jana Akpinar, Manish Nagireddy, Logan Stapleton, Hao-Fei Cheng, Haiyi Zhu, Steven Wu, and Hoda Heidari. A sandbox tool to bias (stress)-test fairness algorithms. arXiv preprint arXiv:2204.10233, 2022.
  • [3] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. Machine bias. ProPublica, May, 23(2016):139–159, 2016.
  • [4] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness in machine learning. Nips tutorial, 1:2017, 2017.
  • [5] Solon Barocas and Andrew D Selbst. Big data’s disparate impact. Calif. L. Rev., 104:671, 2016.
  • [6] Yahav Bechavod, Christopher Jung, and Zhiwei Steven Wu. Metric-free individual fairness in online learning. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
  • [7] Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, and Yunfeng Zhang. AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias, October 2018.
  • [8] Reuben Binns. On the apparent conflict between individual and group fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, page 514–524. Association for Computing Machinery, 2020.
  • [9] Sarah Bird, Miro Dudík, Richard Edgar, Brandon Horn, Roman Lutz, Vanessa Milan, Mehrnoosh Sameki, Hanna Wallach, and Kathleen Walker. Fairlearn: A toolkit for assessing and improving fairness in ai. Microsoft, Tech. Rep. MSR-TR-2020-32, 2020.
  • [10] Arpita Biswas and Suvam Mukherjee. Ensuring fairness under prior probability shifts. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 414–424, 2021.
  • [11] Sumon Biswas and Hridesh Rajan. Fair preprocessing: Towards understanding compositional fairness of data transformers in machine learning pipeline. arXiv preprint arXiv:2106.06054, 2021.
  • [12] Avrim Blum and Kevin Stangl. Recovering from biased data: Can fairness constraints improve accuracy? arXiv preprint arXiv:1912.01094, 2019.
  • [13] Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, pages 77–91. PMLR, 2018.
  • [14] Toon Calders and Sicco Verwer. Three naive bayes approaches for discrimination-free classification. Data mining and knowledge discovery, 21(2):277–292, 2010.
  • [15] Flavio P Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 3995–4004, 2017.
  • [16] L Elisa Celis, Lingxiao Huang, Vijay Keswani, and Nisheeth K Vishnoi. Classification with fairness constraints: A meta-algorithm with provable guarantees. In Proceedings of the conference on fairness, accountability, and transparency, pages 319–328, 2019.
  • [17] Alexandra Chouldechova and Aaron Roth. The frontiers of fairness in machine learning. arXiv preprint arXiv:1810.08810, 2018.
  • [18] Corinna Cortes, Mehryar Mohri, Michael Riley, and Afshin Rostamizadeh. Sample selection bias correction theory. In International conference on algorithmic learning theory, pages 38–53. Springer, 2008.
  • [19] Amanda Coston, Karthikeyan Natesan Ramamurthy, Dennis Wei, Kush R Varshney, Skyler Speakman, Zairah Mustahsan, and Supriyo Chakraborty. Fair transfer learning with missing protected attributes. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 91–98, 2019.
  • [20] Jessica Dai and Sarah M Brown. Label bias, label shift: Fair machine learning with unreliable labels. In NeurIPS 2020 Workshop on Consequential Decision Making in Dynamic Environments, volume 12, 2020.
  • [21] Frances Ding, Moritz Hardt, John Miller, and Ludwig Schmidt. Retiring adult: New datasets for fair machine learning. arXiv preprint arXiv:2108.04884, 2021.
  • [22] Wei Du and Xintao Wu. Fair and robust classification under sample selection bias. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 2999–3003, 2021.
  • [23] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017.
  • [24] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science (ITCS) Conference, page 214–226. Association for Computing Machinery, 2012.
  • [25] Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. Certifying and removing disparate impact. In proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pages 259–268, 2015.
  • [26] Qizhang Feng, Mengnan Du, Na Zou, and Xia Hu. Fair machine learning in healthcare: A review. arXiv preprint arXiv:2206.14397, 2022.
  • [27] Will Fleisher. What’s Fair about Individual Fairness?, page 480–490. Association for Computing Machinery, 2021.
  • [28] Riccardo Fogliato, Alexandra Chouldechova, and Max G’Sell. Fairness evaluation in presence of biased noisy labels. In International Conference on Artificial Intelligence and Statistics, pages 2325–2336. PMLR, 2020.
  • [29] Sorelle A Friedler, Carlos Scheidegger, Suresh Venkatasubramanian, Sonam Choudhary, Evan P Hamilton, and Derek Roth. A comparative study of fairness-enhancing interventions in machine learning. In Proceedings of the conference on fairness, accountability, and transparency, pages 329–338, 2019.
  • [30] Avijit Ghosh, Pablo Kvitca, and Christo Wilson. When fair classification meets noisy protected attributes. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pages 679–690, 2023.
  • [31] Stephen Giguere, Blossom Metevier, Bruno Castro da Silva, Yuriy Brun, Philip S Thomas, and Scott Niekum. Fairness guarantees under demographic shift. In International Conference on Learning Representations, 2022.
  • [32] Anthony G Greenwald and Linda Hamilton Krieger. Implicit bias: Scientific foundations. California law review, 94(4):945–967, 2006.
  • [33] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in neural information processing systems, 29:3315–3323, 2016.
  • [34] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American statistical association, 58(301):13–30, 1963.
  • [35] Maliha Tashfia Islam, Anna Fariha, Alexandra Meliou, and Babak Salimi. Through the data management lens: Experimental analysis and evaluation of fair classification. In Proceedings of the 2022 International Conference on Management of Data, pages 232–246, 2022.
  • [36] Heinrich Jiang and Ofir Nachum. Identifying and correcting label bias in machine learning. In International Conference on Artificial Intelligence and Statistics, pages 702–712. PMLR, 2020.
  • [37] Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1):1–33, 2012.
  • [38] Faisal Kamiran, Asim Karim, and Xiangliang Zhang. Decision theory for discrimination-aware classification. In 2012 IEEE 12th International Conference on Data Mining, pages 924–929. IEEE, 2012.
  • [39] Toshihiro Kamishima, Shotaro Akaho, Hideki Asoh, and Jun Sakuma. Fairness-aware classifier with prejudice remover regularizer. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 35–50. Springer, 2012.
  • [40] Michael Kearns, Seth Neel, Aaron Roth, and Zhiwei Steven Wu. Preventing fairness gerrymandering: Auditing and learning for subgroup fairness. In International Conference on Machine Learning, pages 2564–2572. PMLR, 2018.
  • [41] Jon Kleinberg and Manish Raghavan. Selection problems in the presence of implicit bias. arXiv preprint arXiv:1801.03533, 2018.
  • [42] Nikola Konstantinov and Christoph H Lampert. On the impossibility of fairness-aware learning from corrupted data. In Algorithmic Fairness through the Lens of Causality and Robustness workshop, pages 59–83. PMLR, 2022.
  • [43] Alex Lamy, Ziyuan Zhong, Aditya K Menon, and Nakul Verma. Noise-tolerant fair classification. Advances in Neural Information Processing Systems, 32, 2019.
  • [44] Subha Maity, Debarghya Mukherjee, Mikhail Yurochkin, and Yuekai Sun. Does enforcing fairness mitigate biases caused by subpopulation shift? Advances in Neural Information Processing Systems, 34, 2021.
  • [45] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1–35, 2021.
  • [46] Sérgio Moro, Paulo Cortez, and Paulo Rita. A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62:22–31, 2014.
  • [47] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in python. the Journal of machine Learning research, 12:2825–2830, 2011.
  • [48] Kenneth L Peng, Arunesh Mathur, and Arvind Narayanan. Mitigating dataset harms requires stewardship: Lessons from 1000 papers. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
  • [49] Dana Pessach and Erez Shmueli. A review on fairness in machine learning. ACM Computing Surveys (CSUR), 55(3):1–44, 2022.
  • [50] Felix Petersen, Debarghya Mukherjee, Yuekai Sun, and Mikhail Yurochkin. Post-processing for individual fairness. Advances in Neural Information Processing Systems, 34, 2021.
  • [51] John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74, 1999.
  • [52] Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. On fairness and calibration. arXiv preprint arXiv:1709.02012, 2017.
  • [53] Shangshu Qian, Hung Pham, Thibaud Lutellier, Zeou Hu, Jungwon Kim, Lin Tan, Yaoliang Yu, Jiahao Chen, and Sameena Shah. Are my deep learning systems fair? an empirical study of fixed-seed training. Advances in Neural Information Processing Systems, 34, 2021.
  • [54] Ashkan Rezaei, Anqi Liu, Omid Memarrast, and Brian D Ziebart. Robust fairness under covariate shift. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 9419–9427, 2021.
  • [55] Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. Beyond accuracy: Behavioral testing of nlp models with checklist. arXiv preprint arXiv:2005.04118, 2020.
  • [56] Jessica Schrouff, Natalie Harris, Oluwasanmi Koyejo, Ibrahim Alabdulmohsin, Eva Schnider, Krista Opsahl-Ong, Alex Brown, Subhrajit Roy, Diana Mincu, Christina Chen, et al. Maintaining fairness across distribution shift: do we have viable solutions for real-world applications? arXiv preprint arXiv:2202.01034, 2022.
  • [57] Candice Schumann, Xuezhi Wang, Alex Beutel, Jilin Chen, Hai Qian, and Ed H Chi. Transfer of machine learning fairness across domains. arXiv preprint arXiv:1906.09688, 2019.
  • [58] Shreya Shankar, Yoni Halpern, Eric Breck, James Atwood, Jimbo Wilson, and D Sculley. No classification without representation: Assessing geodiversity issues in open data sets for the developing world. arXiv preprint arXiv:1711.08536, 2017.
  • [59] Harvineet Singh, Rina Singh, Vishwali Mhasawade, and Rumi Chunara. Fairness violations and mitigation under covariate shift. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 3–13, 2021.
  • [60] Jialu Wang, Yang Liu, and Caleb Levy. Fair classification with group-dependent label noise. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 526–536, 2021.
  • [61] Benjamin Wilson, Judy Hoffman, and Jamie Morgenstern. Predictive inequity in object detection. arXiv preprint arXiv:1902.11097, 2019.
  • [62] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (WWW), WWW’17, page 1171–1180, 2017.
  • [63] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rogriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics, pages 962–970. PMLR, 2017.
  • [64] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In International conference on machine learning, pages 325–333. PMLR, 2013.
  • [65] Xianli Zeng, Edgar Dobriban, and Guang Cheng. Bayes-optimal classifiers under group fairness. arXiv preprint arXiv:2202.09724, 2022.
  • [66] Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 335–340, 2018.
  • [67] Jiongli Zhu, Sainyam Galhotra, Nazanin Sabri, and Babak Salimi. Consistent range approximation for fair predictive modeling. Proceedings of the VLDB Endowment, 16(11):2925–2938, 2023.
  • [68] Zhaowei Zhu, Tianyi Luo, and Yang Liu. The rich get richer: Disparate impact of semi-supervised learning. arXiv preprint arXiv:2110.06282, 2021.
  • [69] Indre Zliobaite. A survey on measuring indirect discrimination in machine learning. arXiv preprint arXiv:1511.00148, 2015.

Appendix A Detailed Related Work

A.1 Prior Work on Distribution Shifts

The robustness of ML models when there is a mismatch between the training and test distributions has been studied under various theoretical models of distribution shifts, e.g., covariate shift and label shift. Recent works have studied fair classification subject to these distribution shifts and proposed solutions under reasonable assumptions on the data distribution [19, 57, 12, 54, 59, 20, 10, 22, 44, 31].

Coston et al. [19] study the problem of fair transfer learning with covariate shift in two settings, depending on whether protected attributes are available in the source or the target data, and propose to solve these problems by weighted classification using loss functions with twice-differentiable score disparity terms. Schumann et al. [57] present theoretical guarantees for different scenarios of fair transfer learning under equalized odds and equal opportunity through a domain adaptation framework. Rezaei et al. [54] propose an adversarial framework to learn fair predictors under covariate shift, formulating a game between a loss-minimizing fair predictor and an adversary that chooses the worst-case target conditional label distribution; this yields a convex optimization problem that they ultimately solve using batch gradient descent. Singh et al. [59] propose to learn models whose fairness and accuracy are stable under shifts in the data distribution by using a causal graph that models the shift, within the framework of causal domain adaptation, and derive solutions that are worst-case optimal for some kinds of covariate shift. Dai et al. [20] discuss the impact of label bias and label shift on standard fair classification algorithms. They study both through a common framework in which a fair classifier trained on true labels is assumed to be available, and analyze its performance at test time when either the true label distribution changes or the test-time labels are biased. Biswas et al. [10] focus on addressing the effects of prior probability shifts on fairness and propose an algorithm and a metric called Prevalence Difference, which, when minimized, ensures Proportional Equality.

Finally, Du et al. [22] propose a framework to realize classifiers robust to sample selection bias using reweighing and minimax robust estimation. The selection of samples is indicated by $\mathscr{S}$. The aim is to train using the part of the distribution where $\mathscr{S}=1$ but generalize well for fairness and accuracy on the overall population (both $\mathscr{S}=0$ and $\mathscr{S}=1$). Du et al. [22] show that robust and fair classifiers on the overall population can be obtained by solving a reweighed optimization problem. Giguere et al. [31] consider demographic shift, where a certain group can become more or less probable in the test distribution than in the training distribution. They output a classifier that retains fairness guarantees for given user-specified bounds on these probabilities. Zhu et al. [67] study the fair sample selection bias problem and propose the CRAB framework, which constructs a causal graph based on background knowledge about the data collection process and external data sources.

We tackle a problem related to the above works: we study the performance of fair classifiers under varying amounts of under-representation bias (i.e., subgroup imbalance) and label bias, which is a simple special case of distribution shift.

A.2 Prior Work on Comparative Analysis of Fair Methods

We compare various fair classifiers in the presence of under-representation and label bias. Our work complements surveys in fair machine learning that compare different methods either conceptually [69, 45, 49, 26] or empirically, on important aspects such as data processing and preprocessing choices, data splits, and data errors [29, 10, 53, 35]. Friedler et al. [29] compare several fair classification methods in terms of how the data is prepared and preprocessed for training, how different fairness metrics correlate with each other, stability with respect to slight changes in the training data, and the use of multiple sensitive attributes. Islam et al. [35] conceptually compare various fairness metrics and use them to empirically compare 13 fair classification approaches on their fairness-accuracy tradeoffs, scalability with respect to dataset size and dimensionality, robustness against training data errors such as swapped and missing feature values, and variability of results across multiple training data partitions. In contrast, we compare various classifiers with respect to changes in the training distribution while keeping the base classifiers and data preprocessing the same across methods.

Appendix B Dataset Information

Table 1 lists all information pertaining to the datasets used in our experiments. We drop columns that either introduce many empty values or represent another sensitive attribute; dropping the non-primary sensitive attributes avoids unintended disparate treatment. The choice of sensitive attribute for each dataset is motivated by previous work in the fair ML literature [63, 1, 29, 36].

The synthetic dataset setup from Zeng et al. [65] consists of a binary label $y \in \{0,1\}$ and a binary sensitive attribute $s \in \{0,1\}$. We sample data from subgroup-specific 100-dimensional Gaussian distributions $X \sim \mathcal{N}(\mu_{sy}, \sigma^{2} I_{100})$, where $I_{100}$ denotes the $100 \times 100$ identity matrix and all entries of $\mu_{sy}$ are sampled from $U(0,1)$. We sample 20,000 training examples and 5,000 test examples.
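A minimal sketch of this generation process (our own illustration; the noise scale σ and the subgroup proportions are not specified above, so we assume σ = 1 and uniformly random (s, y) pairs):

import numpy as np

def sample_synthetic(n, d=100, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # One d-dimensional mean vector per (s, y) subgroup, entries drawn from U(0, 1).
    mu = {(s, y): rng.uniform(0.0, 1.0, size=d) for s in (0, 1) for y in (0, 1)}
    # Assumed: subgroup memberships are sampled uniformly at random.
    s = rng.integers(0, 2, size=n)
    y = rng.integers(0, 2, size=n)
    # Features x ~ N(mu_{s,y}, sigma^2 I_d) for each example.
    x = np.stack([rng.normal(mu[(si, yi)], sigma) for si, yi in zip(s, y)])
    return x, y, s

# Draw one pool and split it, so train and test share the same subgroup means.
x, y, s = sample_synthetic(25_000)
x_train, y_train, s_train = x[:20_000], y[:20_000], s[:20_000]
x_test, y_test, s_test = x[20_000:], y[20_000:], s[20_000:]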

Dataset         # Data Pts.  Outcome (Y)               Sensitive Attribute (S)              Dropped Columns
Adult [23]      45k          Income (>50k, <=50k)      Sex (Male, Female)                   fnlwgt, education-num, race
Bank [46]       41k          Term Deposit (Yes, No)    Age (<=25, >25)                      poutcome, contact
Compas [3]      6k           Recidivate (Yes, No)      Race (African-American, Caucasian)   sex
Credit [23]     1k           Credit Risk (good, bad)   Sex (Male, Female)                   age, foreign_worker
Synthetic [65]  20k          {0, 1}                    {0, 1}                               NA
Table 1: Details of all the datasets used in our experiments.

Appendix C Algorithm to create different Data Bias scenarios

Pseudocode 1 gives a detailed description of how we inject varying amounts of under-representation and label bias into the training data of the considered fair classifiers.

Input: A fair classifier, or a base classifier (LR or SVM without fairness constraints): cls; original training dataset: data_train; test dataset: data_test. All samples in data_train and data_test are of the form (x, y, s).
Output: Results for cls on the test set under all under-representation and label bias settings, across multiple runs.

all_runs_results = {}
for run = 1 to 5 do
    under_rep_results = {}, label_bias_results = {}
    for β_pos in {0.1, 0.2, ..., 1.0} do
        for β_neg in {0.1, 0.2, ..., 1.0} do
            train_data(β_pos, β_neg) ← Randomly sample a β_pos fraction of the (y=1, s=0) subpopulation and a β_neg fraction of the (y=0, s=0) subpopulation from data_train; include the other subpopulations unchanged from data_train.
            under_rep_results[(β_pos, β_neg)] ← result for cls trained on train_data(β_pos, β_neg) and evaluated on data_test
        end for
    end for
    for ν in {0.0, 0.1, ..., 0.9} do
        train_data(ν) ← Randomly flip a ν fraction of the (y=1, s=0) examples in data_train to (y=0, s=0); include the other subpopulations unchanged from data_train.
        label_bias_results[ν] ← result for cls trained on train_data(ν) and evaluated on data_test
    end for
    all_runs_results[run] = (under_rep_results, label_bias_results)
end for
return all_runs_results
Pseudocode 1: Experimental Pipeline for Under-Representation and Label Bias
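For reference, a compact Python sketch of this pipeline (our own illustration; it assumes the training data is a pandas DataFrame with columns 'y' and 's', and a hypothetical evaluate() helper that trains cls on the biased data and reports error/SPD/EOD on the untouched test set):

import numpy as np

def inject_under_representation(train, beta_pos, beta_neg, rng):
    # Keep a beta_pos fraction of (y=1, s=0) and a beta_neg fraction of (y=0, s=0);
    # keep all other subgroups untouched.
    keep = []
    for (y, s), frac in {(1, 0): beta_pos, (0, 0): beta_neg}.items():
        idx = train.index[(train.y == y) & (train.s == s)]
        keep.append(rng.choice(idx, size=int(frac * len(idx)), replace=False))
    rest = train.index[train.s != 0]
    return train.loc[np.concatenate(keep + [rest.to_numpy()])]

def inject_label_bias(train, nu, rng):
    # Flip a nu fraction of (y=1, s=0) labels to y=0.
    biased = train.copy()
    idx = biased.index[(biased.y == 1) & (biased.s == 0)]
    flip = rng.choice(idx, size=int(nu * len(idx)), replace=False)
    biased.loc[flip, "y"] = 0
    return biased

def run_pipeline(cls, data_train, data_test, evaluate, n_runs=5, seed=0):
    rng = np.random.default_rng(seed)
    all_runs_results = {}
    for run in range(1, n_runs + 1):
        under_rep, label_bias = {}, {}
        for beta_pos in np.round(np.arange(0.1, 1.01, 0.1), 1):
            for beta_neg in np.round(np.arange(0.1, 1.01, 0.1), 1):
                biased = inject_under_representation(data_train, beta_pos, beta_neg, rng)
                under_rep[(beta_pos, beta_neg)] = evaluate(cls, biased, data_test)
        for nu in np.round(np.arange(0.0, 0.91, 0.1), 1):
            label_bias[nu] = evaluate(cls, inject_label_bias(data_train, nu, rng), data_test)
        all_runs_results[run] = (under_rep, label_bias)
    return all_runs_results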

Appendix D Original Split results for other settings

Figures 4, 5 and 6 show the results of all fair classifiers when we use either a Logistic Regression model or an SVM model, and measure unfairness using Statistical Parity or Equal Opportunity Difference as a metric. Note that ‘gerry_fair’ and ‘prej_remover’ are missing from Figures 5 and 6 because these two classifiers do not support SVM as a base classifier for their fairness routines.

Figure 3: Error Rate-EOD values for various classifiers on all datasets, with no explicit under-representation and label bias. The blue horizontal and vertical lines denote the error rate and EOD of a base LR model without any fairness constraints.
Figure 4: Error Rate-SPD values for various classifiers on all datasets, with no explicit under-representation and label bias. The blue horizontal and vertical lines denote the error rate and SPD of a base LR model without any fairness constraints.
Figure 5: Error Rate-EOD values for various classifiers on all datasets, with no explicit under-representation and label bias. The blue horizontal and vertical lines denote the error rate and EOD of a base SVM model without any fairness constraints.
Figure 6: Error Rate-SPD values for various classifiers on all datasets, with no explicit under-representation and label bias. The blue horizontal and vertical lines denote the error rate and SPD of a base SVM model without any fairness constraints.

Appendix E Stability Analysis for remaining datasets

Figures 7, 8, 9, and 10 show the scatter plots, heatmaps, and average results for the Synthetic, Adult, Credit, and Bank Marketing datasets, respectively.

(c)
Algo.         Base  Err.           SPD            EOD
base          lr    0.075 ± 0.009  0.311 ± 0.073  0.206 ± 0.107
              svm   0.074 ± 0.009  0.325 ± 0.074  0.218 ± 0.115
rew           lr    0.077 ± 0.006  0.241 ± 0.018  0.122 ± 0.033
              svm   0.074 ± 0.006  0.258 ± 0.023  0.131 ± 0.04
jiang nachum  lr    0.075 ± 0.006  0.291 ± 0.041  0.086 ± 0.079
              svm   0.083 ± 0.009  0.289 ± 0.046  0.086 ± 0.093
exp grad      lr    0.076 ± 0.003  0.259 ± 0.009  0.029 ± 0.011
              svm   0.079 ± 0.005  0.257 ± 0.01   0.026 ± 0.01
grid search   lr    0.078 ± 0.012  0.301 ± 0.111  0.207 ± 0.149
              svm   0.078 ± 0.013  0.315 ± 0.119  0.22 ± 0.167
gerry fair    lr    0.099 ± 0.032  0.368 ± 0.144  0.354 ± 0.245
              svm   -              -              -
prej remover  lr    0.022 ± 0.001  0.296 ± 0.009  0.022 ± 0.008
              svm   -              -              -
eq            lr    0.192 ± 0.052  0.187 ± 0.029  0.025 ± 0.017
              svm   0.191 ± 0.052  0.188 ± 0.028  0.025 ± 0.017
cal_eq        lr    0.218 ± 0.042  0.534 ± 0.132  0.219 ± 0.11
              svm   0.22 ± 0.042   0.535 ± 0.132  0.22 ± 0.111
reject        lr    0.075 ± 0.008  0.278 ± 0.028  0.168 ± 0.043
              svm   0.075 ± 0.008  0.276 ± 0.028  0.167 ± 0.046
Figure 7: Stability analysis of various fair classifiers on the Synthetic dataset (Lighter the shade, more the under-representation and label bias): (a) The spread of error rates and Equalized Odds (EODDS) of various fair classifiers as we vary β_pos, β_neg and label bias separately. The vertical and horizontal reference blue lines denote the performance of an unfair logistic regression model on the original dataset without any under-representation or label bias. (b) Heatmap for Error Rate and EODDS across all different settings of β_pos and β_neg. Darker values denote lower error rates and unfairness. Uniform values across the grid indicate stability to different β_pos and β_neg. (c) Mean error rates, Statistical Parity Difference (SPD), and Equal Opportunity Difference (EOD) across all β_pos and β_neg settings for both kinds of base classifiers (SVM and LR).

(c)
Algo.         Base  Err.           SPD            EOD
base          lr    0.156 ± 0.003  0.171 ± 0.035  0.134 ± 0.111
              svm   0.155 ± 0.003  0.179 ± 0.036  0.172 ± 0.131
rew           lr    0.16 ± 0.001   0.101 ± 0.012  0.109 ± 0.02
              svm   0.159 ± 0.001  0.103 ± 0.02   0.105 ± 0.029
jiang nachum  lr    0.156 ± 0.001  0.125 ± 0.011  0.019 ± 0.011
              svm   0.159 ± 0.003  0.136 ± 0.012  0.049 ± 0.013
exp grad      lr    0.166 ± 0.002  0.107 ± 0.003  0.016 ± 0.008
              svm   0.168 ± 0.002  0.109 ± 0.003  0.017 ± 0.008
grid search   lr    0.156 ± 0.005  0.176 ± 0.048  0.169 ± 0.131
              svm   0.208 ± 0.041  0.056 ± 0.067  0.051 ± 0.064
gerry fair    lr    0.412 ± 0.287  0.071 ± 0.068  0.064 ± 0.087
              svm   -              -              -
prej remover  lr    0.156 ± 0.005  0.174 ± 0.05   0.172 ± 0.135
              svm   -              -              -
eq            lr    0.184 ± 0.013  0.085 ± 0.019  0.038 ± 0.02
              svm   0.182 ± 0.013  0.085 ± 0.017  0.038 ± 0.021
cal_eq        lr    0.183 ± 0.034  0.125 ± 0.075  0.088 ± 0.063
              svm   0.183 ± 0.036  0.123 ± 0.081  0.091 ± 0.066
reject        lr    0.199 ± 0.008  0.192 ± 0.025  0.027 ± 0.018
              svm   0.203 ± 0.008  0.191 ± 0.024  0.027 ± 0.02
Figure 8: Stability analysis of various fair classifiers on the Adult dataset (Lighter the shade, more the under-representation and label bias): (a) The spread of error rates and Equal Opportunity Difference (EOD) of various fair classifiers as we vary β_pos, β_neg and label bias separately. The vertical and horizontal reference blue lines denote the performance of an unfair logistic regression model on the original dataset without any under-representation or label bias. (b) Heatmap for Error Rate and EOD across all different settings of β_pos and β_neg. Darker values denote lower error rates and unfairness. Uniform values across the grid indicate stability to different β_pos and β_neg. (c) Mean error rates, Statistical Parity Difference (SPD), and Equal Opportunity Difference (EOD) across all β_pos and β_neg settings for both kinds of base classifiers (SVM and LR).

(c)
Algo.         Base  Err.           SPD            EOD
base          lr    0.235 ± 0.012  0.156 ± 0.125  0.103 ± 0.11
              svm   0.235 ± 0.011  0.17 ± 0.143   0.115 ± 0.111
rew           lr    0.234 ± 0.006  0.043 ± 0.012  0.018 ± 0.009
              svm   0.234 ± 0.007  0.071 ± 0.045  0.047 ± 0.024
jiang nachum  lr    0.236 ± 0.008  0.038 ± 0.012  0.023 ± 0.011
              svm   0.236 ± 0.011  0.046 ± 0.025  0.031 ± 0.018
exp grad      lr    0.239 ± 0.008  0.045 ± 0.022  0.027 ± 0.014
              svm   0.261 ± 0.044  0.045 ± 0.019  0.034 ± 0.015
grid search   lr    0.24 ± 0.027   0.153 ± 0.117  0.102 ± 0.105
              svm   0.291 ± 0.029  0.026 ± 0.028  0.021 ± 0.021
gerry fair    lr    0.263 ± 0.031  0.039 ± 0.038  0.024 ± 0.024
              svm   -              -              -
prej remover  lr    0.247 ± 0.022  0.153 ± 0.112  0.112 ± 0.114
              svm   -              -              -
eq            lr    0.318 ± 0.074  0.057 ± 0.032  0.074 ± 0.045
              svm   0.336 ± 0.084  0.045 ± 0.035  0.059 ± 0.05
cal_eq        lr    0.261 ± 0.013  0.255 ± 0.163  0.151 ± 0.124
              svm   0.255 ± 0.011  0.186 ± 0.154  0.106 ± 0.111
reject        lr    0.312 ± 0.031  0.084 ± 0.038  0.098 ± 0.051
              svm   0.317 ± 0.037  0.089 ± 0.04   0.105 ± 0.046
Figure 9: Stability analysis of various fair classifiers on the Credit dataset (Lighter the shade, more the under-representation and label bias): (a) The spread of error rates and Equalized Odds (EODDS) of various fair classifiers as we vary β_pos, β_neg and label bias separately. The vertical and horizontal reference blue lines denote the performance of an unfair logistic regression model on the original dataset without any under-representation or label bias. (b) Heatmap for Error Rate and EODDS across all different settings of β_pos and β_neg. Darker values denote lower error rates and unfairness. Uniform values across the grid indicate stability to different β_pos and β_neg. (c) Mean error rates, Statistical Parity Difference (SPD), and Equal Opportunity Difference (EOD) across all β_pos and β_neg settings for both kinds of base classifiers (SVM and LR).

(c)
Algo.         Base  Err.           SPD            EOD
base          lr    0.116 ± 0.000  0.069 ± 0.021  0.12 ± 0.05
              svm   0.116 ± 0.000  0.069 ± 0.007  0.096 ± 0.014
rew           lr    0.116 ± 0.000  0.053 ± 0.008  0.088 ± 0.011
              svm   0.116 ± 0.000  0.067 ± 0.003  0.092 ± 0.008
jiang nachum  lr    0.116 ± 0.000  0.033 ± 0.016  0.039 ± 0.031
              svm   0.116 ± 0.000  0.05 ± 0.017   0.051 ± 0.023
exp grad      lr    0.117 ± 0.001  0.023 ± 0.007  0.03 ± 0.011
              svm   0.124 ± 0.018  0.022 ± 0.006  0.031 ± 0.012
grid search   lr    0.12 ± 0.012   0.153 ± 0.14   0.255 ± 0.184
              svm   0.116 ± 0.001  0.002 ± 0.01   0.003 ± 0.015
gerry fair    lr    0.117 ± 0.003  0.146 ± 0.215  0.224 ± 0.238
              svm   -              -              -
prej remover  lr    0.116 ± 0.001  0.09 ± 0.123   0.174 ± 0.175
              svm   -              -              -
eq            lr    0.227 ± 0.125  0.026 ± 0.01   0.069 ± 0.027
              svm   0.209 ± 0.093  0.028 ± 0.01   0.067 ± 0.019
cal_eq        lr    0.116 ± 0.000  0.05 ± 0.028   0.099 ± 0.054
              svm   0.116 ± 0.000  0.054 ± 0.019  0.091 ± 0.025
reject        lr    0.394 ± 0.126  0.123 ± 0.025  0.118 ± 0.04
              svm   0.35 ± 0.115   0.116 ± 0.026  0.117 ± 0.035
Figure 10: Stability analysis of various fair classifiers on the Bank dataset (Lighter the shade, more the under-representation and label bias): (a) The spread of error rates and Equalized Odds (EODDS) of various fair classifiers as we vary β_pos, β_neg and label bias separately. The vertical and horizontal reference blue lines denote the performance of an unfair logistic regression model on the original dataset without any under-representation or label bias. (b) Heatmap for Error Rate and EODDS across all different settings of β_pos and β_neg. Darker values denote lower error rates and unfairness. Uniform values across the grid indicate stability to different β_pos and β_neg. (c) Mean error rates, Statistical Parity Difference (SPD), and Equal Opportunity Difference (EOD) across all β_pos and β_neg settings for both kinds of base classifiers (SVM and LR).

Appendix F Proofs

F.1 Proof of Theorem 1

Proof.

We show the proof for a binary label $y$ and a binary sensitive attribute $s$. The proof works out in a similar manner for multiple labels and sensitive attributes.

Assume that the subgroup probabilities under $D$ are denoted by $\Pr(Y=y, S=s) = p_{ys}$, i.e., $p_{00}, p_{01}, p_{10}, p_{11}$.

For the loss $L$, we can write its expectation under the distribution $D$ as

\mathbb{E}_{(X,Y,S)\sim D}[L(f(X),Y)] = \sum_{y,s} \Pr(Y=y, S=s)\, \mathbb{E}_{X|Y=y,S=s}[L(f(X),y)].

Let $m = \min\{p_{00}, p_{01}, p_{10}, p_{11}\}$, $M = \max\{p_{00}, p_{01}, p_{10}, p_{11}\}$, and $\alpha = m/M$.

For the rest of the proof, we assume an under-representation bias with parameter $\beta$ on the $(Y=1, S=0)$ subgroup. The proof for bias on the $(Y=0, S=0)$ subgroup works out similarly.

We denote by $D_{\beta}$ the distribution induced by the under-representation bias on the $(Y=1, S=0)$ subgroup, where $p^{\prime}_{10} \propto \beta p_{10}$ and $p^{\prime}_{ys} \propto p_{ys}$ for the rest of the subgroups.

For the reweighed loss $L^{\prime}$, the reweighing weight $W_{ys}$ is given by the following expression [37]:

W_{ys} = \frac{\Pr(Y^{\prime}=y)\, \Pr(S^{\prime}=s)}{\Pr(Y^{\prime}=y, S^{\prime}=s)}.
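For concreteness, a minimal sketch (our own illustration, not the implementation in [37] or any toolkit) of estimating these reweighing weights from empirical subgroup frequencies, for binary y and s given as NumPy arrays:

import numpy as np

def reweighing_weights(y, s):
    # W[y, s] = Pr(Y=y) * Pr(S=s) / Pr(Y=y, S=s), estimated from empirical frequencies.
    y, s = np.asarray(y), np.asarray(s)
    W = np.zeros((2, 2))
    for yv in (0, 1):
        for sv in (0, 1):
            p_y = np.mean(y == yv)
            p_s = np.mean(s == sv)
            p_ys = np.mean((y == yv) & (s == sv))
            W[yv, sv] = p_y * p_s / p_ys  # undefined if a subgroup is empty
    return W

def per_example_weights(y, s):
    # Per-example weights used in the reweighed loss L'(f(x), y) = W[y, s] * L(f(x), y).
    W = reweighing_weights(y, s)
    return W[np.asarray(y), np.asarray(s)]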

Now we can write

\mathbb{E}_{(X^{\prime},Y^{\prime},S^{\prime})\sim D_{\beta}}[L^{\prime}(f(X^{\prime}),Y^{\prime})] = \sum_{y,s} \Pr(Y^{\prime}=y, S^{\prime}=s)\, \mathbb{E}_{X^{\prime}|Y^{\prime}=y,S^{\prime}=s}[L^{\prime}(f(X^{\prime}),y)]
= \sum_{y,s} \Pr(Y^{\prime}=y, S^{\prime}=s)\, \mathbb{E}_{X^{\prime}|Y^{\prime}=y,S^{\prime}=s}[W_{ys}\, L(f(X^{\prime}),y)]
= \sum_{y,s} \Pr(Y^{\prime}=y, S^{\prime}=s)\, W_{ys}\, \mathbb{E}_{X^{\prime}|Y^{\prime}=y,S^{\prime}=s}[L(f(X^{\prime}),y)]
= \sum_{y,s} \Pr(Y^{\prime}=y)\, \Pr(S^{\prime}=s)\, \mathbb{E}_{X^{\prime}|Y^{\prime}=y,S^{\prime}=s}[L(f(X^{\prime}),y)].

Thus, we have

\mathbb{E}_{D_{\beta}}[L^{\prime}] = \mathbb{E}_{(X^{\prime},Y^{\prime},S^{\prime})\sim D_{\beta}}[L^{\prime}(f(X^{\prime}),Y^{\prime})]
= \sum_{y,s} \Pr(Y^{\prime}=y)\, \Pr(S^{\prime}=s)\, \mathbb{E}_{X^{\prime}|Y^{\prime}=y,S^{\prime}=s}[L(f(X^{\prime}),y)]
= \sum_{y,s} \Pr(Y^{\prime}=y)\, \Pr(S^{\prime}=s)\, \mathbb{E}_{X|Y=y,S=s}[L(f(X),y)],

because the conditional distribution of $X^{\prime}$ given $Y^{\prime}=y, S^{\prime}=s$ under $D_{\beta}$ remains the same as that of $X$ given $Y=y, S=s$ under $D$ even after injecting under-representation.

For the distribution $D_{\beta}$, the normalization constant for the subgroup probabilities is $N^{\prime} = p_{00} + p_{01} + \beta p_{10} + p_{11}$.

We can then write down the individual label and sensitive attribute probabilities under $D_{\beta}$: $\Pr(Y^{\prime}=0) = (p_{00}+p_{01})/N^{\prime}$, $\Pr(Y^{\prime}=1) = (\beta p_{10}+p_{11})/N^{\prime}$, $\Pr(S^{\prime}=0) = (p_{00}+\beta p_{10})/N^{\prime}$, and $\Pr(S^{\prime}=1) = (p_{01}+p_{11})/N^{\prime}$.

We now lower and upper bound

\frac{\Pr(Y^{\prime}=y)\, \Pr(S^{\prime}=s)}{\Pr(Y=y, S=s)}

in terms of $\alpha$, for any $y, s \in \{0,1\}$ and any $\beta \in (0,1]$. Let $N = p_{00}+p_{01}+p_{10}+p_{11} = 1$. For example, for $y=0, s=0$ we get

\frac{\alpha}{2} \leq \frac{p_{00}+p_{01}}{N} \leq \frac{(p_{00}+p_{01})}{N^{\prime}}\, \frac{(p_{00}+\beta p_{10})}{N^{\prime}}\, \frac{N}{p_{00}} \leq \frac{N}{p_{00}} \leq \frac{4}{\alpha},

using $N^{\prime} \leq N$, $p_{00} \leq p_{00}+\beta p_{10}$, and the definitions of $m$, $M$, and $\alpha$. Similarly working out the upper and lower bounds for each of the four probability terms in the expected reweighed loss expression above, we get the common bound

\frac{\alpha^{2}}{4} \leq \frac{\Pr(Y^{\prime}=y)\, \Pr(S^{\prime}=s)}{\Pr(Y=y, S=s)} \leq \frac{4}{\alpha}

for any $y, s \in \{0,1\}$. We can now plug these bounds into the following expression for $\mathbb{E}_{D}[L]$:

\mathbb{E}_{D}[L] = \mathbb{E}_{(X,Y,S)\sim D}[L(f(X),Y)] = \sum_{y,s} \Pr(Y=y, S=s)\, \mathbb{E}_{X|Y=y,S=s}[L(f(X),y)].

Using this sandwich bound with the final form of $\mathbb{E}_{D_{\beta}}[L^{\prime}]$ above, we get

\frac{\alpha^{2}}{4}\, \mathbb{E}_{D}[L] \leq \mathbb{E}_{D_{\beta}}[L^{\prime}] \leq \frac{4}{\alpha}\, \mathbb{E}_{D}[L]. ∎
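As a quick numerical sanity check of this sandwich bound (our own illustration, not part of the proof), one can verify that all four ratios indeed lie in $[\alpha^{2}/4, 4/\alpha]$ for an arbitrary choice of subgroup probabilities and $\beta$:

import itertools

p = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}  # p_{ys} = Pr(Y=y, S=s)
beta = 0.5
alpha = min(p.values()) / max(p.values())

# Biased subgroup probabilities p'_{ys} under D_beta (under-representation of (Y=1, S=0)).
p_biased = {k: (beta * v if k == (1, 0) else v) for k, v in p.items()}
N_prime = sum(p_biased.values())
p_biased = {k: v / N_prime for k, v in p_biased.items()}

for y, s in itertools.product((0, 1), (0, 1)):
    pr_y = sum(v for (yv, sv), v in p_biased.items() if yv == y)
    pr_s = sum(v for (yv, sv), v in p_biased.items() if sv == s)
    ratio = pr_y * pr_s / p[(y, s)]
    assert alpha**2 / 4 <= ratio <= 4 / alpha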

F.2 Proof of Theorem 2

Before proving Theorem 2, we first prove a small proposition that will be useful for later results.

Proposition 1.

Let $L$ be a non-negative loss bounded in $[0, B]$. Let $\hat{f}$ be the empirical risk minimizer of the loss $L$ over $N$ samples, $\hat{f} = \operatorname{arg\,min}_{f\in\mathcal{F}} \frac{1}{N}\sum_{i=1}^{N} L(f(x_{i}), y_{i})$, where $\{(x_{1},y_{1}), (x_{2},y_{2}), \ldots, (x_{N},y_{N})\}$ are i.i.d. samples from $D$. Let $f^{*} = \operatorname{arg\,min}_{f\in\mathcal{F}} \mathbb{E}_{(x,y)\sim D}[L(f(x),y)]$. Then, with probability at least $1-\delta$, we have

\mathbb{E}_{D}[L(\hat{f})] \leq B\, \sqrt{\frac{\log(2/\delta)}{2N}} + \mathbb{E}_{D}[L(f^{*})].
Proof.

The proof follows directly from the proof of Theorem 1 in [68] and a general form of Hoeffding's inequality for any non-negative bounded loss (see http://cs229.stanford.edu/extra-notes/hoeffding.pdf). ∎
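For a rough sense of scale (our own numerical illustration, not part of the original statement): with a loss bounded by $B = 1$, confidence $\delta = 0.05$, and $N = 20{,}000$ samples (the size of our synthetic training set), the deviation term is

B\sqrt{\frac{\log(2/\delta)}{2N}} = \sqrt{\frac{\log 40}{40{,}000}} \approx 0.0096,

so the empirical risk minimizer is within roughly one percentage point of the best classifier in $\mathcal{F}$ with probability at least $0.95$.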

We are now ready to prove Theorem 2.

Proof.

Using the result of Theorem 1, for any $g$ and $\beta$, we can write

\frac{\alpha^{2}}{4}\, \mathbb{E}_{D}[L(g)] \leq \mathbb{E}_{D_{\beta}}[L^{\prime}(g)] \leq \frac{4}{\alpha}\, \mathbb{E}_{D}[L(g)].

For $\hat{g}_{\beta}$, we can use the lower bound from Theorem 1 to get

\mathbb{E}_{D}[L(\hat{g}_{\beta})] \leq \frac{4}{\alpha^{2}}\, \mathbb{E}_{D_{\beta}}[L^{\prime}(\hat{g}_{\beta})].  (1)

We now assume that $L$ is a bounded loss in $[0,1]$; if not, we can rescale it to lie in $[0,1]$. Using $m = \min\{p_{00}, p_{01}, p_{10}, p_{11}\}$, $M = \max\{p_{00}, p_{01}, p_{10}, p_{11}\}$, and $\alpha = m/M$, we can show that the reweighing factor satisfies

W_{ys} = \frac{\Pr(Y^{\prime}=y)\, \Pr(S^{\prime}=s)}{\Pr(Y^{\prime}=y, S^{\prime}=s)} \leq \frac{4}{\alpha\beta}

for all $y, s \in \{0,1\}$. This follows by writing $W_{ys}$ as the product of $\Pr(Y^{\prime}=y)\Pr(S^{\prime}=s)/\Pr(Y=y, S=s)$, which is at most $4/\alpha$ by the upper bound from Theorem 1, and $\Pr(Y=y, S=s)/\Pr(Y^{\prime}=y, S^{\prime}=s)$, which in the worst case ($Y=1, S=0$ here) is at most $1/\beta$.

Thus, the reweighed loss $L^{\prime}$ is bounded in $[0, 4/(\alpha\beta)]$. Now, using Proposition 1 and the fact that $\hat{g}_{\beta}$ is the empirical minimizer of the reweighed loss $L^{\prime}$ over $N$ samples from the biased distribution $D_{\beta}$, we obtain

\mathbb{E}_{D_{\beta}}[L^{\prime}(\hat{g}_{\beta})] \leq \frac{4}{\alpha\beta}\, \sqrt{\frac{\log(2/\delta)}{2N}} + \mathbb{E}_{D_{\beta}}[L^{\prime}(g^{*})],

where $g^{*}$ is the Bayes optimal classifier for the reweighed loss $L^{\prime}$ on the biased distribution $D_{\beta}$. Now let $f^{*}$ be the Bayes optimal classifier for the loss $L$ on the original distribution $D$. Then $\mathbb{E}_{D_{\beta}}[L^{\prime}(g^{*})] \leq \mathbb{E}_{D_{\beta}}[L^{\prime}(f^{*})]$, and we get

\mathbb{E}_{D_{\beta}}[L^{\prime}(\hat{g}_{\beta})] \leq \frac{4}{\alpha\beta}\, \sqrt{\frac{\log(2/\delta)}{2N}} + \mathbb{E}_{D_{\beta}}[L^{\prime}(f^{*})].  (2)

Combining (1) and (2), we have

\mathbb{E}_{D}[L(\hat{g}_{\beta})] \leq \frac{4}{\alpha^{2}} \left( \frac{4}{\alpha\beta}\, \sqrt{\frac{\log(2/\delta)}{2N}} + \mathbb{E}_{D_{\beta}}[L^{\prime}(f^{*})] \right)
= \frac{16}{\alpha^{3}\beta}\, \sqrt{\frac{\log(2/\delta)}{2N}} + \frac{4}{\alpha^{2}}\, \mathbb{E}_{D_{\beta}}[L^{\prime}(f^{*})]
\leq \frac{16}{\alpha^{3}\beta}\, \sqrt{\frac{\log(2/\delta)}{2N}} + \frac{16}{\alpha^{3}}\, \mathbb{E}_{D}[L(f^{*})],

where we use the upper bound in Theorem 1 to get the last inequality. That completes the proof of Theorem 2. ∎