Loss Balancing for Fair Supervised Learning
Abstract
Supervised learning models have been used in various domains such as lending, college admission, face recognition, natural language processing, etc. However, they may inherit pre-existing biases from training data and exhibit discrimination against protected social groups. Various fairness notions have been proposed to address unfairness issues. In this work, we focus on Equalized Loss (EL), a fairness notion that requires the expected loss to be (approximately) equalized across different groups. Imposing EL on the learning process leads to a non-convex optimization problem even if the loss function is convex, and existing fair learning algorithms cannot be readily adapted to find the fair predictor under the EL constraint. This paper introduces an algorithm that can leverage off-the-shelf convex programming tools (e.g., CVXPY (Diamond and Boyd, 2016; Agrawal et al., 2018)) to efficiently find the global optimum of this non-convex optimization problem. In particular, we propose the ELminimizer algorithm, which finds the optimal fair predictor under EL by reducing the non-convex optimization to a sequence of convex optimization problems. We theoretically prove that our algorithm finds the global optimal solution under certain conditions. We then support our theoretical results through several empirical studies.
1 Introduction
As machine learning (ML) algorithms are increasingly being used in applications such as education, lending, recruitment, healthcare, criminal justice, etc., there is a growing concern that the algorithms may exhibit discrimination against protected population groups. For example, speech recognition products such as Google Home and Amazon Alexa were shown to have accent bias (Harwell, 2018). The COMPAS recidivism prediction tool, used by courts in the US in parole decisions, has been shown to have a substantially higher false positive rate for African Americans compared to the general population (Dressel and Farid, 2018). Amazon had been using automated software since 2014 to assess applicants' resumes, which was found to be biased against women (Dastin, 2018). As a result, there have been several works focusing on interpreting machine learning models to understand how features and sensitive attributes contribute to the output of a model (Ribeiro et al., 2016; Lundberg and Lee, 2017; Abroshan et al., 2023).
Various fairness notions have been proposed in the literature to measure and remedy the biases in ML systems; they can be roughly classified into two categories: 1) individual fairness focuses on equity at the individual level and it requires similar individuals to be treated similarly (Dwork et al., 2012; Biega et al., 2018; Jung et al., 2019; Gupta and Kamble, 2019); 2) group fairness requires certain statistical measures to be (approximately) equalized across different groups distinguished by some sensitive attributes (Hardt et al., 2016; Conitzer et al., 2019; Khalili et al., 2020; Zhang et al., 2020; Khalili et al., 2021; Diana et al., 2021; Williamson and Menon, 2019; Zhang et al., 2022).
Several approaches have been developed to satisfy a given definition of fairness; they fall under three categories: 1) pre-processing, by modifying the original dataset such as removing certain features and reweighing, (e.g., (Kamiran and Calders, 2012; Celis et al., 2020; Abroshan et al., )); 2) in-processing, by modifying the algorithms such as imposing fairness constraints or changing objective functions during the training process, (e.g., (Zhang et al., 2018; Agarwal et al., 2018, 2019; Reimers et al., 2021; Calmon et al., 2017)); 3) post-processing, by adjusting the output of the algorithms based on sensitive attributes, (e.g., (Hardt et al., 2016)).
In this paper, we focus on group fairness, and we aim to mitigate unfairness issues in supervised learning using an in-processing approach. This problem can be cast as a constrained optimization problem: minimizing a loss function subject to a group fairness constraint. We are particularly interested in the Equalized Loss (EL) fairness notion proposed by Zhang et al. (2019), which requires the expected loss (e.g., Mean Squared Error (MSE), Binary Cross Entropy (BCE) loss) to be equalized across different groups. (Zhang et al. (2019) propose the EL fairness notion without providing an efficient algorithm for satisfying it.)
The problem of finding fair predictors by solving constrained optimizations has been widely studied. Komiyama et al. (2018) propose the coefficient of determination constraint for learning a fair regressor and develop an algorithm for minimizing the mean squared error (MSE) under their proposed fairness notion. Agarwal et al. (2019) propose an approach to finding a fair regression model under bounded group loss and statistical parity fairness constraints. Agarwal et al. (2018) study classification problems and aim to find fair classifiers under various fairness notions including statistical parity and equal opportunity. In particular, they consider zero-one loss as the objective function and train a randomized fair classifier over a finite hypothesis space. They show that this problem can be reduced to the problem of finding the saddle point of a linear Lagrangian function. Zhang et al. (2018) propose an adversarial debiasing technique to find fair classifiers under equalized odds, equal opportunity, and statistical parity. Unlike the previous works, we focus on the Equalized Loss fairness notion, which has not been well studied. Finding an EL fair predictor requires solving a non-convex optimization. Unfortunately, there is no algorithm in the fair ML literature with a theoretical performance guarantee that can be properly applied to EL fairness (see Section 2 for a detailed discussion).
Our main contributions can be summarized as follows:

• We develop an algorithm with a theoretical performance guarantee for EL fairness. In particular, we propose the ELminimizer algorithm to solve the non-convex constrained optimization problem that finds the optimal fair predictor under the EL constraint. We show that this non-convex optimization problem can be reduced to a sequence of convex constrained optimizations. The proposed algorithm finds the global optimal solution and is applicable to both regression and classification problems. Importantly, it can be easily implemented using off-the-shelf convex programming tools.

• In addition to ELminimizer, which finds the global optimal solution, we develop a simple algorithm for finding a sub-optimal predictor satisfying EL fairness. We prove there is a sub-optimal solution satisfying EL fairness that is a linear combination of the optimal solutions to two unconstrained optimizations, and that it can be found without solving any constrained optimization.

• We conduct a sample complexity analysis and provide a generalization performance guarantee. In particular, we show the sample complexity analysis found in (Donini et al., 2018) is applicable to learning problems under EL.

• We also examine (in the appendix) the relation between Equalized Loss (EL) and Bounded Group Loss (BGL), another fairness notion proposed by Agarwal et al. (2019). We show that under certain conditions, these two notions are closely related and do not contradict each other.
2 Problem Formulation
Consider a supervised learning problem where the training dataset consists of triples $(\mathbf{X}, A, Y)$ from two social groups. (We use bold letters to represent vectors and denote realizations of random variables by lowercase letters, e.g., $\mathbf{x}$ is a realization of $\mathbf{X}$.) Random variable $\mathbf{X}$ is the feature vector (in the form of a column vector), $A \in \{0, 1\}$ is the sensitive attribute (e.g., race, gender) indicating the group membership, and $Y \in \mathcal{Y}$ is the label/output. Feature vector $\mathbf{X}$ may or may not include sensitive attribute $A$. Set $\mathcal{Y}$ can be either $\{0, 1\}$ or $\mathbb{R}$: if $\mathcal{Y} = \{0, 1\}$ (resp. $\mathcal{Y} = \mathbb{R}$), then the problem of interest is a binary classification (resp. regression) problem.
Let $\mathcal{F} = \{f_{\mathbf{w}} : \mathbf{w} \in \mathbb{R}^{d_w}\}$ be a set of predictors parameterized by weight vector $\mathbf{w}$ with dimension $d_w$. (Predictive models such as logistic regression, linear regression, deep learning models, etc., are parameterized by a weight vector.) If the problem is binary classification, then $f_{\mathbf{w}}(\mathbf{X})$ is an estimate of $\Pr(Y = 1 \mid \mathbf{X})$. (Our framework can be easily applied to multi-class classification, where $f_{\mathbf{w}}(\mathbf{X})$ becomes a vector; because this only complicates the notation without providing additional insight about our algorithm, we present the method and algorithm in a binary setting.) Consider loss function $l(Y, f_{\mathbf{w}}(\mathbf{X}))$, which measures the error of $f_{\mathbf{w}}$ in predicting $Y$. We denote the expected loss with respect to the joint probability distribution of $(\mathbf{X}, A, Y)$ by $L(\mathbf{w}) := \mathbb{E}[l(Y, f_{\mathbf{w}}(\mathbf{X}))]$. Similarly, $L_a(\mathbf{w}) := \mathbb{E}[l(Y, f_{\mathbf{w}}(\mathbf{X})) \mid A = a]$ denotes the expected loss of the group with sensitive attribute $A = a$. In this work, we assume that $l$ is differentiable and strictly convex in $\mathbf{w}$ (e.g., binary cross entropy loss). (We do not consider non-differentiable losses, e.g., zero-one loss, as they have already been extensively studied in the literature, e.g., (Hardt et al., 2016; Zafar et al., 2017; Lohaus et al., 2020).)
Without fairness considerations, a predictor that simply minimizes the total expected loss, i.e., $f_{\mathbf{w}}$ with $\mathbf{w} = \arg\min_{\mathbf{w}'} L(\mathbf{w}')$, may be biased against certain groups. To mitigate the risk of unfairness, we consider the Equalized Loss (EL) fairness notion, formally defined below.
Definition 2.1 ($\gamma$-EL (Zhang et al., 2019)).
We say $f_{\mathbf{w}}$ satisfies the equalized loss (EL) fairness notion if $L_0(\mathbf{w}) = L_1(\mathbf{w})$. Moreover, we say $f_{\mathbf{w}}$ satisfies $\gamma$-EL for some $\gamma > 0$ if $|L_0(\mathbf{w}) - L_1(\mathbf{w})| \leq \gamma$.
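To make the definition concrete, the short sketch below (our illustration, not code from the paper's repository) estimates the equalized-loss gap from finite samples, using squared error as the loss; any loss $l$ works the same way.

```python
import numpy as np

def el_gap(y_true, y_pred, group):
    """Empirical equalized-loss gap |L_0 - L_1| under squared error."""
    losses = (y_true - y_pred) ** 2            # per-sample loss l(y, f_w(x))
    l0 = losses[group == 0].mean()             # empirical loss of group A = 0
    l1 = losses[group == 1].mean()             # empirical loss of group A = 1
    return abs(l0 - l1)

# Example with synthetic data: f_w satisfies gamma-EL (empirically)
# if and only if the gap is at most gamma.
rng = np.random.default_rng(0)
y = rng.normal(size=100)
pred = y + rng.normal(scale=0.5, size=100)     # stand-in predictions
a = rng.integers(0, 2, size=100)               # sensitive attribute A
print(el_gap(y, pred, a) <= 0.1)               # check 0.1-EL
```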
Note that if $l(Y, f_{\mathbf{w}}(\mathbf{X}))$ is a (strictly) convex function in $\mathbf{w}$, both $L_0(\mathbf{w})$ and $L_1(\mathbf{w})$ are also (strictly) convex in $\mathbf{w}$. However, $L_0(\mathbf{w}) - L_1(\mathbf{w})$ is not necessarily convex. (As an example, consider the two functions $h_1(x) = x^2$ and $h_2(x) = 2x^2$. Although both $h_1$ and $h_2$ are convex, their difference $h_1(x) - h_2(x) = -x^2$ is not a convex function.) As a result, the following optimization problem for finding a fair predictor under $\gamma$-EL is not a convex program:
$$\min_{\mathbf{w}} \; L(\mathbf{w}) \quad \text{s.t.} \quad |L_0(\mathbf{w}) - L_1(\mathbf{w})| \leq \gamma. \qquad (1)$$
We say a group is the disadvantaged group if it experiences a higher loss than the other group. Before discussing how to find the global optimal solution of the above non-convex optimization problem and train a $\gamma$-EL fair predictor, we first discuss why $\gamma$-EL is an important fairness notion and why the majority of fair learning algorithms in the literature cannot be used to find $\gamma$-EL fair predictors.
2.1 Existing Fairness Notions & Algorithms
Next, we (mathematically) introduce some of the most commonly used fairness notions and compare them with $\gamma$-EL. We also discuss why the majority of proposed fair learning algorithms are not readily applicable to EL fairness.
Overall Misclassification Rate (OMR): This notion was considered by Zafar et al. (2017; 2019) for classification problems. Let $\hat{Y}$ denote the binary prediction induced by $f_{\mathbf{w}}$ (e.g., $\hat{Y} = \mathbb{1}\{f_{\mathbf{w}}(\mathbf{X}) \geq 0.5\}$, where $\mathbb{1}\{\cdot\}$ is the indicator function). OMR requires $\Pr(\hat{Y} \neq Y \mid A = 0) = \Pr(\hat{Y} \neq Y \mid A = 1)$, which is not a convex constraint. As a result, Zafar et al. (2017; 2019) propose a method to relax this constraint using decision boundary covariances. We emphasize that OMR is different from EL fairness in that OMR only equalizes the accuracy of binary predictions across different groups, while EL is also capable of considering fairness in estimating the probability $\Pr(Y = 1 \mid \mathbf{X})$, e.g., by using the binary cross entropy loss function. Note that in many applications such as conversion prediction, click prediction, medical diagnosis, etc., it is critical to estimate $\Pr(Y = 1 \mid \mathbf{X})$ accurately for different groups, beyond the final predictions $\hat{Y}$. Moreover, unlike EL, OMR is not applicable to regression problems. Therefore, the relaxation method proposed by Zafar et al. (2017; 2019) cannot be applied to the EL fairness constraint.
Statistical Parity (SP), Equal Opportunity (EO): For binary classification, Statistical Parity (SP) (Dwork et al., 2012) (resp. Equal Opportunity (EO) (Hardt et al., 2016)) requires the positive classification rates (resp. true positive rates) to be equalized across different groups. Formally,

$$\Pr(\hat{Y} = 1 \mid A = 0) = \Pr(\hat{Y} = 1 \mid A = 1) \;\; \text{(SP)}, \qquad \Pr(\hat{Y} = 1 \mid Y = 1, A = 0) = \Pr(\hat{Y} = 1 \mid Y = 1, A = 1) \;\; \text{(EO)}.$$
Both notions can be re-written in expectation form using an indicator function. Specifically, SP is equivalent to $\mathbb{E}[\mathbb{1}\{\hat{Y} = 1\} \mid A = 0] = \mathbb{E}[\mathbb{1}\{\hat{Y} = 1\} \mid A = 1]$, and EO is equivalent to $\mathbb{E}[\mathbb{1}\{\hat{Y} = 1\} \mid Y = 1, A = 0] = \mathbb{E}[\mathbb{1}\{\hat{Y} = 1\} \mid Y = 1, A = 1]$. Since the indicator function is neither differentiable nor convex, Donini et al. (2018) use a linear relaxation of EO as a proxy. (This linear relaxation is applicable to EL with some modification; we use it as one of our baselines.) However, linear relaxation may negatively affect the fairness of the predictor (Lohaus et al., 2020). To address this issue, Lohaus et al. (2020) and Wu et al. (2019) develop convex relaxation techniques for the SP and EO fairness criteria by convexifying the indicator function. However, these convex relaxation techniques are not applicable to the EL fairness notion, because in our setting the loss $l$ is already convex, not a zero-one function. FairBatch (Roh et al., 2020) is another algorithm that has been proposed to find a predictor under SP or EO. FairBatch adds a sampling bias to the mini-batch selection. However, the bias in the mini-batch sampling distribution leads to a biased estimate of the gradient, and there is no guarantee that FairBatch finds the global optimal solution. FairBatch can be used to find a sub-optimal fair predictor under the EL fairness notion, and we use it as a baseline. Shen et al. (2022) propose an algorithm for EO that adds a penalty term to the objective function, similar to the Penalty Method (Ben-Tal and Zibulevsky, 1997). We use the penalty method as a baseline as well.
Hardt et al. (2016) propose a post-processing algorithm that randomly flips the binary predictions to satisfy EO or SP. However, this method does not guarantee finding an optimal classifier (Woodworth et al., 2017). Agarwal et al. (2018) introduce a reduction approach for SP or EO. However, this method finds a randomized classifier satisfying SP or EO only in expectation. In other words, to satisfy SP, the reduction approach finds a distribution $Q$ over $\mathcal{F}$ such that

$$\mathbb{E}_{\mathbf{w} \sim Q}\left[\Pr(\hat{Y} = 1 \mid A = 0)\right] = \mathbb{E}_{\mathbf{w} \sim Q}\left[\Pr(\hat{Y} = 1 \mid A = 1)\right],$$

where $Q(\mathbf{w})$ is the probability of selecting model $f_{\mathbf{w}}$ under distribution $Q$. Obviously, satisfying a fairness constraint in expectation may lead to unfair predictions, because $Q$ can still assign non-zero probability to unfair models.
In summary, some of the approaches used for SP/EO may be applicable to the EL fairness notion (e.g., linear relaxation or FairBatch). However, they can only find sub-optimal solutions (see Section 6 for more details).
Statistical Parity for Regression: SP can be adjusted to be suitable for regression. As proposed by Agarwal et al. (2019), statistical parity for a regressor $f_{\mathbf{w}}$ is defined as:

$$\Pr\left(f_{\mathbf{w}}(\mathbf{X}) \leq z \mid A = a\right) = \Pr\left(f_{\mathbf{w}}(\mathbf{X}) \leq z\right), \quad \forall z, \; \forall a \in \{0, 1\}. \qquad (2)$$

To find a predictor that satisfies constraint (2), Agarwal et al. (2019) use the reduction approach mentioned above. However, this approach only finds a randomized predictor satisfying SP in expectation and cannot be applied to optimization problem (1). (The appendix includes a detailed discussion of why the reduction approach is not appropriate for EL fairness.)
Bounded Group Loss (BGL): $\gamma$-BGL was introduced by Agarwal et al. (2019) for regression problems. It requires the loss experienced by each group to be bounded by $\gamma$; that is, $L_a(\mathbf{w}) \leq \gamma$ for $a \in \{0, 1\}$. Agarwal et al. (2019) use the reduction approach to find a randomized regression model under $\gamma$-BGL. In addition to the reduction method, if $L(\mathbf{w})$, $L_0(\mathbf{w})$, and $L_1(\mathbf{w})$ are convex in $\mathbf{w}$, then we can directly use convex solvers (e.g., CVXPY (Diamond and Boyd, 2016; Agrawal et al., 2018)) to find a $\gamma$-BGL fair predictor. This is because the following is a convex problem:

$$\min_{\mathbf{w}} \; L(\mathbf{w}) \quad \text{s.t.} \quad L_a(\mathbf{w}) \leq \gamma, \; a \in \{0, 1\}. \qquad (3)$$
However, for non-convex optimization problems such as (1), convex solvers cannot be used directly.
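As an illustration of this contrast, the following sketch (ours, with synthetic data) solves the convex $\gamma$-BGL problem (3) for linear regression with CVXPY; the EL constraint in (1), being a difference of convex functions, cannot be handed to such a solver directly.

```python
import cvxpy as cp
import numpy as np

# Synthetic data for two groups (illustrative only).
rng = np.random.default_rng(0)
X0, y0 = rng.normal(size=(50, 3)), rng.normal(size=50)
X1, y1 = rng.normal(size=(50, 3)), rng.normal(size=50)
gamma = 2.0

w = cp.Variable(3)
L0 = cp.sum_squares(y0 - X0 @ w) / 50         # group-0 empirical MSE (convex)
L1 = cp.sum_squares(y1 - X1 @ w) / 50         # group-1 empirical MSE (convex)
overall = 0.5 * (L0 + L1)                     # overall loss (equal group sizes)
# gamma-BGL: each group's loss is bounded by gamma -- a convex constraint.
problem = cp.Problem(cp.Minimize(overall), [L0 <= gamma, L1 <= gamma])
problem.solve()
print(w.value)
```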
We want to emphasize that even though there are already many fairness notions and algorithms in the literature for finding fair predictors, none of the existing algorithms can efficiently solve the non-convex optimization (1) and find a global optimal fair predictor under the EL notion.
3 Optimal Model under $\gamma$-EL
In this section, we consider optimization problem (1) under EL fairness constraint. Note that this optimization problem is non-convex and finding the global optimal solution is difficult. We propose an algorithm that finds the solution to non-convex optimization (1) by solving a sequence of convex optimization problems. Before presenting the algorithm, we first introduce two assumptions, which will be used when proving the convergence of the proposed algorithm.
Algorithm 1: ELminimizer
Input: $\mathbf{w}^*_0$, $\mathbf{w}^*_1$, $\gamma$, $\epsilon$
[Pseudocode: a bisection over a scalar search interval; each iteration defines and solves the convex constrained problem (4), and the interval shrinks until its length falls below $\epsilon$.]
Output: $\mathbf{w}$
Assumption 3.1.
Expected losses $L(\mathbf{w})$, $L_0(\mathbf{w})$, and $L_1(\mathbf{w})$ are strictly convex and differentiable in $\mathbf{w}$. Moreover, each of them has a unique minimizer.
Let $\mathbf{w}^*_a$ be the optimal weight vector minimizing the loss associated with group $a$. That is,

$$\mathbf{w}^*_a = \arg\min_{\mathbf{w}} \; L_a(\mathbf{w}), \quad a \in \{0, 1\}. \qquad (5)$$
Since problem (5) is an unconstrained convex optimization problem, $\mathbf{w}^*_a$ can be found efficiently by common convex solvers. We make the following assumption about $\mathbf{w}^*_0$ and $\mathbf{w}^*_1$.
Assumption 3.2.
We assume the following holds:

$$L_a(\mathbf{w}^*_a) \leq L_{1-a}(\mathbf{w}^*_a), \quad a \in \{0, 1\}.$$
Algorithm 2: Optimal model under $\gamma$-EL
Input: $\mathbf{w}^*_0$, $\mathbf{w}^*_1$, $\gamma$, $\epsilon$
[Pseudocode: calls the function ELminimizer twice, once for each side of the relaxed constraint $|L_0(\mathbf{w}) - L_1(\mathbf{w})| \leq \gamma$, and returns the better of the two solutions.]
Output: $\mathbf{w}$
Assumption 3.2 implies that when a group experiences its lowest possible loss, this group is not the disadvantaged group. Under Assumptions 3.1 and 3.2, the optimal 0-EL fair predictor can be easily found using our proposed algorithm (i.e., the function ELminimizer with $\gamma = 0$); the complete procedure is shown in Algorithm 1, in which parameter $\epsilon$ specifies the stopping criterion: as $\epsilon \to 0$, the output approaches the global optimal solution.
Intuitively, Algorithm 1 solves the non-convex optimization (1) by solving a sequence of convex constrained optimizations. When $\gamma > 0$ (i.e., relaxed fairness), the optimal $\gamma$-EL fair predictor can be found with Algorithm 2, which calls the function ELminimizer twice. The convergence of Algorithm 1 to the optimal 0-EL fair solution and the convergence of Algorithm 2 to the optimal $\gamma$-EL fair solution are stated in the following theorems.
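To convey the structure of this reduction, here is a schematic sketch (our paraphrase, not the paper's exact pseudocode): `solve_subproblem` stands in for the convex problem (4), and the bracket endpoints and the direction test are illustrative assumptions.

```python
def el_minimizer_sketch(L0, L1, solve_subproblem, lam_lo, lam_hi, epsilon=1e-4):
    """Schematic binary-search reduction behind ELminimizer.

    L0, L1: callables returning each group's loss at a weight vector w.
    solve_subproblem(lam): solves the convex problem (4) at level lam.
    [lam_lo, lam_hi]: initial search interval (e.g., derived from w*_0, w*_1).
    """
    while lam_hi - lam_lo > epsilon:        # O(log(1/epsilon)) iterations
        lam = 0.5 * (lam_lo + lam_hi)
        w = solve_subproblem(lam)           # one convex program per iteration
        # Shrink the interval toward the level at which the two group
        # losses meet (the direction test below is an assumption).
        if L0(w) > L1(w):
            lam_lo = lam
        else:
            lam_hi = lam
    return solve_subproblem(0.5 * (lam_lo + lam_hi))
```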
Theorem 3.3 (Convergence of Algorithm 1 when $\gamma = 0$).
Theorem 3.3 implies that when $\gamma = 0$, as the number of iterations goes to infinity, the solution to the convex problem (4) converges to the solution to (1).
Theorem 3.4 (Convergence of Algorithm 2).
Complexity Analysis. The while loop in Algorithm 1 is a bisection, so it is executed $O(\log(1/\epsilon))$ times; therefore, Algorithm 1 needs to solve a constrained convex optimization problem $O(\log(1/\epsilon))$ times. Note that constrained convex optimization problems can be efficiently solved via sub-gradient methods (Nedić and Ozdaglar, 2009), barrier methods (Wright, 2001), stochastic gradient descent with one projection (Mahdavi et al., 2012), interior point methods (Nemirovski, 2004), etc. For instance, Nemirovski (2004) shows that several classes of convex optimization problems can be solved in polynomial time. Therefore, the time complexity of Algorithm 1 depends on the convex solver: if the time complexity of solving (4) is $O(T)$, then the overall time complexity of Algorithm 1 is $O(T \log(1/\epsilon))$.
Regularization. So far, we have considered a supervised learning model without regularization. Next, we explain how Algorithm 2 can be applied to a regularized problem. Consider the following optimization problem:
$$\min_{\mathbf{w}} \; L(\mathbf{w}) + R(\mathbf{w}) \quad \text{s.t.} \quad |L_0(\mathbf{w}) - L_1(\mathbf{w})| \leq \gamma, \qquad (6)$$
where $R(\mathbf{w})$ is a regularizer function. In this case, we can re-write the optimization problem in the form of (1) by absorbing the regularizer into the losses: defining $\tilde{L}_a(\mathbf{w}) := L_a(\mathbf{w}) + R(\mathbf{w})$ gives $\tilde{L}(\mathbf{w}) = L(\mathbf{w}) + R(\mathbf{w})$ and leaves the constraint unchanged, since $|\tilde{L}_0(\mathbf{w}) - \tilde{L}_1(\mathbf{w})| = |L_0(\mathbf{w}) - L_1(\mathbf{w})|$. When $R$ is convex, the assumptions of this section still hold and Algorithm 2 applies directly.
4 Sub-optimal Model under $\gamma$-EL
In Section 3, we showed that the non-convex optimization problem (1) can be reduced to a sequence of convex constrained optimizations (4), and based on this we proposed Algorithm 2, which finds the optimal $\gamma$-EL fair predictor. However, the proposed algorithm still requires solving a convex constrained optimization in each iteration. In this section, we propose another algorithm that finds a sub-optimal solution to optimization (1) without solving any constrained optimization. The algorithm consists of two phases: (1) finding two weight vectors by solving two unconstrained convex optimization problems; (2) generating a new weight vector satisfying $\gamma$-EL from the two weight vectors found in the first phase.
Phase 1: Unconstrained optimization. We ignore EL fairness and solve the following unconstrained problem:

$$\mathbf{w}^* = \arg\min_{\mathbf{w}} \; L(\mathbf{w}). \qquad (7)$$

Because $L(\mathbf{w})$ is strictly convex in $\mathbf{w}$, the above optimization problem can be solved efficiently using convex solvers. Predictor $f_{\mathbf{w}^*}$ is the optimal predictor without the fairness constraint, and $L(\mathbf{w}^*)$ is the smallest overall expected loss that is attainable. Let $a' = \arg\max_{a} L_a(\mathbf{w}^*)$, i.e., group $a'$ is disadvantaged under predictor $f_{\mathbf{w}^*}$. Then, for the disadvantaged group $a'$, we find $\mathbf{w}^*_{a'}$ by solving optimization (5).
Phase 2: Binary search to find the fair predictor. For $\beta \in [0, 1]$, let $\mathbf{w}_\beta = (1 - \beta)\,\mathbf{w}^* + \beta\,\mathbf{w}^*_{a'}$ and define the following two functions:

$$g(\beta) = L_{a'}(\mathbf{w}_\beta) - L_{1-a'}(\mathbf{w}_\beta), \qquad h(\beta) = L(\mathbf{w}_\beta),$$

where function $g(\beta)$ can be interpreted as the loss disparity between the two demographic groups under predictor $f_{\mathbf{w}_\beta}$, and $h(\beta)$ is the corresponding overall expected loss. Some properties of functions $g$ and $h$ are summarized in the following theorem.
Theorem 4.1.
Theorem 4.1 implies that, in the $d_w$-dimensional weight space, if we start from $\mathbf{w}^*$ and move toward $\mathbf{w}^*_{a'}$ along a straight line, the overall loss increases and the disparity between the two groups decreases until we reach $\beta^*$ with $g(\beta^*) = 0$, at which point 0-EL fairness is satisfied. Note that $\beta^*$ is the unique root of $g$. Since $g(\beta)$ is a strictly decreasing function, $\beta^*$ can be found using binary search.
For approximate $\gamma$-EL fairness, there are multiple values of $\beta$ for which $\mathbf{w}_\beta$ satisfies $\gamma$-EL. Since $h(\beta)$ is strictly increasing in $\beta$, among all $\beta$ that satisfy $\gamma$-EL fairness we choose the smallest one. The method for finding a sub-optimal solution to optimization (1) is described in Algorithm 3, and a sketch follows it.
Algorithm 3: Sub-optimal model under $\gamma$-EL
Input: $\mathbf{w}^*$, $\mathbf{w}^*_{a'}$, $\gamma$, $\epsilon$
[Pseudocode: initializes the binary-search interval over $\beta \in [0, 1]$ and repeatedly bisects it until its length falls below $\epsilon$, returning $\mathbf{w}_\beta$ for the smallest feasible $\beta$.]
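The following sketch (ours, for linear regression with MSE) illustrates the two phases; group 0 is assumed to be the disadvantaged group $a'$, and the endpoints $\mathbf{w}^*$ and $\mathbf{w}^*_{a'}$ come from the unconstrained problems (7) and (5).

```python
import numpy as np

def group_mse(w, X, y):
    """Empirical MSE of one group under the linear predictor w."""
    return np.mean((y - X @ w) ** 2)

def algorithm3_sketch(w_star, w_dis, X0, y0, X1, y1, gamma, eps=1e-6):
    """Binary search over beta in [0, 1] along the segment w* -> w*_{a'}.

    w_star: unconstrained minimizer (phase 1).
    w_dis:  minimizer of the disadvantaged group's loss (problem (5)).
    g(beta) is strictly decreasing (Theorem 4.1), so we return the
    smallest beta whose disparity is at most gamma (smallest overall loss).
    """
    def g(beta):
        w = (1 - beta) * w_star + beta * w_dis
        return group_mse(w, X0, y0) - group_mse(w, X1, y1)  # disadvantaged minus other

    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        beta = 0.5 * (lo + hi)
        if g(beta) > gamma:
            lo = beta           # still too unfair: move toward w*_{a'}
        else:
            hi = beta           # gamma-EL holds: try a smaller beta
    return (1 - hi) * w_star + hi * w_dis
```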
Note that the while loop in Algorithm 3 is a bisection, so it is repeated $O(\log(1/\epsilon))$ times. If the time complexity of the operations in each iteration (i.e., evaluating $g(\beta)$) is $O(T)$, the total time complexity of Algorithm 3 is $O(T \log(1/\epsilon))$; in particular, no constrained optimization needs to be solved. We can formally prove that the output returned by Algorithm 3 satisfies the $\gamma$-EL constraint.
Theorem 4.2.
Note that since $h(\beta)$ is increasing in $\beta$, we only need to find the smallest possible $\beta$ such that $\mathbf{w}_\beta$ satisfies $\gamma$-EL, which is the value characterized in Theorem 4.2. Since Algorithm 3 finds a sub-optimal solution, it is important to investigate the performance of this sub-optimal fair predictor, especially in the worst-case scenario. The following theorem gives an upper bound on the expected loss of the output of Algorithm 3.
Theorem 4.3.
Learning with Finite Samples. So far, we have proposed algorithms for solving optimization (1). In practice, the joint probability distribution of $(\mathbf{X}, A, Y)$ is unknown, and the expected loss needs to be estimated using the empirical loss. Specifically, given $n$ i.i.d. samples $\{(\mathbf{x}_i, a_i, y_i)\}_{i=1}^{n}$ and a predictor $f_{\mathbf{w}}$, the empirical losses of the entire population and of each group are defined as follows:

$$\hat{L}(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n} l\left(y_i, f_{\mathbf{w}}(\mathbf{x}_i)\right), \qquad \hat{L}_a(\mathbf{w}) = \frac{1}{n_a}\sum_{i:\, a_i = a} l\left(y_i, f_{\mathbf{w}}(\mathbf{x}_i)\right),$$

where $n_a$ is the number of samples with $a_i = a$. Because the $\gamma$-EL fairness constraint is defined in terms of expected loss, the optimization problem for finding an optimal $\gamma$-EL fair predictor using empirical losses is as follows:

$$\min_{\mathbf{w}} \; \hat{L}(\mathbf{w}) \quad \text{s.t.} \quad |\hat{L}_0(\mathbf{w}) - \hat{L}_1(\mathbf{w})| \leq \hat{\gamma}. \qquad (8)$$
In this section, we aim to investigate how to determine $\hat{\gamma}$ so that, with high probability, the predictor found by solving problem (8) satisfies $\gamma$-EL fairness and, meanwhile, is a good estimate of the solution to optimization (1). We show how $\hat{\gamma}$ can be chosen as a function of $\gamma$ when the number of samples is sufficiently large. To understand the relation between (8) and (1), we follow the general sample complexity analysis found in (Donini et al., 2018) and show that their analysis is applicable to EL. To proceed, we make the assumption used in (Donini et al., 2018).
Assumption 4.4.
With probability at least $1 - \delta$, the following holds for $a \in \{0, 1\}$:

$$\sup_{\mathbf{w}} \left| L_a(\mathbf{w}) - \hat{L}_a(\mathbf{w}) \right| \leq B(\delta, n_a),$$

where $B(\delta, n_a)$ is a bound that goes to zero as $n_a \to \infty$.
Note that, according to (Shalev-Shwartz and Ben-David, 2014), if the class $\mathcal{F}$ is learnable with respect to loss function $l$, then such a bound always exists and goes to zero as $n_a$ goes to infinity. (As an example, if $\mathcal{F}$ is a compact subset of linear predictors in a Reproducing Kernel Hilbert Space (RKHS) and loss $l$ is Lipschitz in its second argument, then Assumption 4.4 can be satisfied (Bartlett and Mendelson, 2002). The vast majority of linear predictors, such as support vector machines and logistic regression, can be defined in an RKHS.)
Theorem 4.5.
Theorem 4.5 shows that as $n_0$ and $n_1$ go to infinity, $\hat{\gamma}$ approaches $\gamma$, and both the empirical loss and the expected loss satisfy $\gamma$-EL. In addition, as the number of samples goes to infinity, the expected loss of the solution to (8) approaches the minimum expected loss attainable under $\gamma$-EL. Therefore, solving (8) using empirical losses is equivalent to solving (1) if the number of data points from each group is sufficiently large.
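To illustrate the reasoning behind this result (our paraphrase of the proof idea, not the theorem's exact statement): if Assumption 4.4 holds for each group with bound $B(\delta, n_a)$, then by the triangle inequality, with probability at least $1 - 2\delta$,

$$|L_0(\mathbf{w}) - L_1(\mathbf{w})| \leq |\hat{L}_0(\mathbf{w}) - \hat{L}_1(\mathbf{w})| + B(\delta, n_0) + B(\delta, n_1),$$

so any solution of (8) with $\hat{\gamma} \leq \gamma - B(\delta, n_0) - B(\delta, n_1)$ satisfies $\gamma$-EL with high probability, and the slack vanishes as $n_0, n_1 \to \infty$.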
5 Beyond Linear Models
So far, we have assumed that the loss function is strictly convex. This assumption is mainly valid for training linear models (e.g., ridge regression, regularized logistic regression). However, it is known that training deep models leads to minimizing non-convex objective functions. To train a deep model under the equalized loss fairness notion, we can take advantage of Algorithm 2 for fine-tuning under EL as long as the objective function is convex with respect to the parameters of the output layer. (In classification or regression problems with an $\ell_2$ regularizer, the objective function is strictly convex with respect to the parameters of the output layer, regardless of the network structure before the output layer.) To clarify how Algorithm 2 can be used for deep models, consider for simplicity a neural network with one hidden layer for regression. Let $\mathbf{W}_1$ be a $d_x$ by $d_h$ matrix ($d_x$ is the size of the feature vector and $d_h$ is the number of neurons in the first layer) denoting the parameters of the first layer of the neural network, and let $\mathbf{w}_2$ be a vector corresponding to the output layer. To find a neural network satisfying the equalized loss fairness notion, we first train the network without any fairness constraint using common gradient descent algorithms (e.g., stochastic gradient descent). Let $\hat{\mathbf{W}}_1$ and $\hat{\mathbf{w}}_2$ denote the network parameters after training without the fairness constraint. We can now take advantage of Algorithm 2 to fine-tune the parameters of the output layer under the equalized loss fairness notion. Let us define $\mathbf{Z} = \sigma(\hat{\mathbf{W}}_1^{\top}\mathbf{X})$ as the output of the (frozen) hidden layer, where $\sigma$ is the activation function. The problem for fine-tuning the output layer can be written as follows (the $\ell_2$ regularizer keeps the objective strictly convex):
$$\min_{\mathbf{w}_2} \; \mathbb{E}\left[\left(Y - \mathbf{w}_2^{\top}\mathbf{Z}\right)^2\right] + \lambda \lVert \mathbf{w}_2 \rVert_2^2 \quad \text{s.t.} \quad \left|L_0(\mathbf{w}_2) - L_1(\mathbf{w}_2)\right| \leq \gamma. \qquad (9)$$
The objective function of the above optimization problem is strictly convex, and the problem can be solved using Algorithm 2. After solving the above problem, $(\hat{\mathbf{W}}_1, \mathbf{w}_2)$ are the final parameters of the neural network satisfying the equalized loss fairness notion. Note that a similar optimization problem can be written for fine-tuning any deep model with a classification or regression task. (A sketch of this fine-tuning recipe is given below.)
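A minimal sketch of this recipe (ours; `el_fair_linear` is a hypothetical stand-in for Algorithm 2 applied to the frozen features):

```python
import numpy as np

def hidden_features(X, W1, b1):
    """Frozen first layer: z = sigmoid(X @ W1 + b1)."""
    return 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))

def finetune_output_layer(X, a, y, W1, b1, el_fair_linear, gamma):
    """Fine-tune only the output layer under gamma-EL.

    W1, b1 come from unconstrained training and stay fixed, so problem (9)
    over the output weights is convex and can be solved with Algorithm 2
    (el_fair_linear is assumed to implement it, e.g., on top of CVXPY).
    """
    Z = hidden_features(X, W1, b1)   # fixed non-linear features
    return el_fair_linear(Z, a, y, gamma)
```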
6 Experiments
We conduct experiments on two real-world datasets to evaluate the performance of the proposed algorithms. In our experiments, we used a system with the following configuration: 24 GB of RAM, 2 cores of a P100-16GB GPU, and 2 cores of an Intel Xeon CPU. More information about the experiments and instructions for reproducing the empirical results are provided in the appendix. The code is available at https://github.com/KhaliliMahdi/Loss_Balancing_ICML2023.
Baselines. As discussed in Section 2, not all fair learning algorithms are applicable to EL fairness. The following three baselines are applicable to EL fairness.
Penalty Method (PM): The penalty method (Ben-Tal and Zibulevsky, 1997) finds a fair predictor under the $\gamma$-EL fairness constraint by solving the following problem:
$$\min_{\mathbf{w}} \; \hat{L}(\mathbf{w}) + \eta \left(\max\left\{0, \; \left|\hat{L}_0(\mathbf{w}) - \hat{L}_1(\mathbf{w})\right| - \gamma\right\}\right)^2 + R(\mathbf{w}), \qquad (10)$$
where $\eta$ is the penalty parameter and $R(\mathbf{w})$ is the regularizer. The above optimization problem cannot be solved with a convex solver because it is not convex in general. We solve problem (10) using Adam gradient descent (Kingma and Ba, 2014) with a fixed learning rate and the default parameters of the Adam optimizer in PyTorch. We initialize the penalty parameter $\eta$ and increase the penalty coefficient by a constant factor every iteration.
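A sketch of this baseline (ours; the quadratic penalty form, the initial $\eta$, and the growth factor are assumptions, since the paper's exact values are not reproduced here):

```python
import torch

def penalty_method(X0, y0, X1, y1, gamma, epochs=500, eta=1.0, growth=1.01):
    """Train a linear regressor with a growing penalty on the EL violation.

    X0, y0, X1, y1: torch tensors holding each group's features and labels.
    """
    d = X0.shape[1]
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b])                  # PyTorch default parameters
    for _ in range(epochs):
        opt.zero_grad()
        L0 = torch.mean((y0 - (X0 @ w + b)) ** 2)   # group-0 empirical MSE
        L1 = torch.mean((y1 - (X1 @ w + b)) ** 2)   # group-1 empirical MSE
        violation = torch.clamp(torch.abs(L0 - L1) - gamma, min=0.0)
        loss = 0.5 * (L0 + L1) + eta * violation ** 2
        loss.backward()
        opt.step()
        eta *= growth                               # grow the penalty weight
    return w.detach(), b.detach()
```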
Linear Relaxation (LinRe): Inspired by (Donini et al., 2018), for linear regression we relax the EL constraint by replacing each group's empirical loss with a surrogate that is linear in $\mathbf{w}$. For logistic regression, we relax the constraint using each example's linear margin; note that the sign of the margin determines whether the binary classifier makes a correct prediction.
FairBatch (Roh et al., 2020): This method was originally proposed for equal opportunity, statistical parity, and equalized odds. With some modifications (see the appendix for more details), it can be applied to EL fairness. The algorithm measures the loss of each group in each epoch and changes the mini-batch sampling distribution in favor of the group with the higher empirical loss. When implementing FairBatch, we use Adam optimization with its default parameters; the learning rate and batch size are discussed in the appendix.
Linear Regression and Law School Admission Dataset. In the first experiment, we use the law school admission dataset, which includes the information of 21,790 law students studying at 163 different law schools across the United States (Wightman, 1998). This dataset contains entrance exam scores (LSAT), grade-point average (GPA) prior to law school, and the first-year average grade (FYA). Our goal is to train a $\gamma$-EL fair regularized linear regression model to estimate the FYA of students given their LSAT score and GPA. In this study, we consider the Black and White demographic groups. We randomly split the dataset into training and test datasets (70% for training and 30% for testing) and conduct five independent runs of the experiment. A fair predictor is found by solving the following optimization problem:

$$\min_{\mathbf{w}} \; \hat{L}(\mathbf{w}) + R(\mathbf{w}) \quad \text{s.t.} \quad |\hat{L}_0(\mathbf{w}) - \hat{L}_1(\mathbf{w})| \leq \gamma, \qquad (11)$$

with $\hat{L}$ and $\hat{L}_a$ being the overall and the group-specific empirical MSE, respectively, and $R(\mathbf{w})$ the regularizer. We use Algorithm 2 and Algorithm 3 (with a small stopping parameter $\epsilon$) to find the optimal linear regression model under EL, and we adopt the CVXPY Python library (Diamond and Boyd, 2016; Agrawal et al., 2018) as the convex optimization solver in the ELminimizer algorithm.
[Table 1: mean and standard deviation of the test loss and the test loss difference between groups for PM, LinRe, FairBatch, Algorithm 2 (ours), and Algorithm 3 (ours) on the law school dataset, under desired fairness levels $\gamma = 0$ and $\gamma = 0.1$.]
Table 1 reports the means and standard deviations of the empirical loss and the loss difference between Black and White students. The first row specifies the desired fairness level ($\gamma = 0$ or $\gamma = 0.1$) used as the input to each algorithm. Based on Table 1, when the desired fairness level is $\gamma = 0$, the fairness level of the models trained by LinRe and FairBatch is far from $0$. We also observed that the performance of FairBatch depends highly on the random seed. As a result, the fairness level of the model trained by FairBatch has a high variance (0.1933 in this example) over the five independent runs of the experiment, and only in some of those runs does it achieve the desired fairness level. This is because the FairBatch algorithm does not come with any performance guarantee; as stated in (Roh et al., 2020), FairBatch calculates a biased estimate of the gradient in each epoch, and the mini-batch sampling distribution keeps changing from one epoch to the next. We observed that FairBatch performs better with a non-linear model (see Table 3). Both Algorithms 2 and 3 achieve a fairness level close to $0$. However, Algorithm 3 finds a sub-optimal solution and achieves a higher MSE than Algorithm 2. For $\gamma = 0.1$, in addition to Algorithms 2 and 3, the penalty method also achieves a fairness level close to the desired one. Algorithm 2 still achieves the lowest MSE among Algorithm 2, Algorithm 3, and the penalty method. The model trained by FairBatch again suffers from high variance in its fairness level. We emphasize that even though Algorithm 3 has a higher MSE than Algorithm 2, it is much faster, as shown in the complexity analyses of Sections 3 and 4.
We also investigate the trade-off between fairness and overall loss under the different algorithms. Figure 2 illustrates the MSE as a function of the loss difference between Black and White students. Specifically, we run Algorithm 2, Algorithm 3, and the baselines under different values of $\gamma$. For each $\gamma$, we repeat the experiment five times and calculate the average MSE and the average MSE difference over these five runs using the test dataset. Figure 2 shows that the penalty method, linear relaxation, and FairBatch are not sensitive to the input $\gamma$. However, Algorithm 2 and Algorithm 3 are sensitive to $\gamma$: as $\gamma$ increases, the loss difference increases and the MSE decreases.
Logistic Regression and Adult Income Dataset. We consider the adult income dataset containing the information of 48,842 individuals (Kohavi, 1996). Each data point consists of 14 features, including age, education, race, etc. In this study, we consider race (White or Black) as the sensitive attribute. We first pre-process the dataset by removing the data points with a missing value or with a race other than Black or White; this results in 41,961 data points, of which 4,585 belong to the Black group. For each data point, we convert all the categorical features to one-hot vectors and randomly split the dataset into training and test sets (70% of the dataset is used for training). The goal is to predict whether the income of an individual is above \$50K using a $\gamma$-EL fair logistic regression model. In this experiment, we solve optimization problem (11), with $\hat{L}$ and $\hat{L}_a$ being the overall and the group-specific empirical average of the binary cross entropy (BCE) loss, respectively.
[Table 2: mean and standard deviation of the test loss and the test loss difference between groups for PM, LinRe, FairBatch, Algorithm 2 (ours), and Algorithm 3 (ours) on the adult income dataset, under desired fairness levels $\gamma = 0$ and $\gamma = 0.1$.]
The comparison of Algorithm 2, Algorithm 3, and the baselines is shown in Table 2, where we conduct five independent runs of the experiment and calculate the mean and standard deviation of the overall loss and the loss difference between the two demographic groups. The first row in this table shows the value of $\gamma$ used as input to the algorithms. The results show that linear relaxation, Algorithm 2, and Algorithm 3 have very similar performance: all three are able to satisfy $\gamma$-EL with a small test loss. Similar to Table 1, we observe high variance in the performance of FairBatch, which depends highly on the random seed.
In Figure 2, we compare the performance-fairness trade-off, focusing on binary cross entropy on the test dataset. To generate this figure, we run Algorithm 2, Algorithm 3, and the baselines (we do not include the curve for FairBatch due to its large overall loss and high variance in performance) under different values of $\gamma$ five times and calculate the average BCE and the average BCE difference. We observe that Algorithms 2 and 3 and the linear relaxation have a similar trade-off between the BCE and the BCE difference.
Experiment with a Non-linear Model. We repeat our first experiment with non-linear models to demonstrate how our algorithms can be used to fine-tune a non-linear model. We work with the law school admission dataset and train a neural network with one hidden layer consisting of 125 neurons, using sigmoid as the activation function for the hidden layer. We run the following algorithms:
• Penalty Method: We solve optimization problem (10). In this example, $\hat{L}$ and $\hat{L}_a$ are no longer convex. The hyperparameters remain the same as in the first experiment, except for the learning rate.

• FairBatch: We train the whole network using FairBatch with mini-batch Adam optimization and a batch size of 100.

• Linear Relaxation: In order to take advantage of CVXPY, we first train the network without any fairness constraint using batch Adam optimization (i.e., the batch size is equal to the size of the training dataset). Then, we fine-tune the parameters of the output layer. Note that the output layer has 126 parameters, and we fine-tune them under the relaxed EL fairness constraint; in particular, we solve problem (9) after linear relaxation.

• Algorithm 2 and Algorithm 3: We run Algorithm 2 and Algorithm 3 to fine-tune the neural network. After training the network without any constraint using batch Adam optimization, we solve (9) using Algorithm 2 and Algorithm 3.
Table 3 reports the average and standard deviation of the empirical loss and the loss difference between Black and White students. Both Algorithm 2 and Algorithm 3 achieve a fairness level close to the desired level $\gamma$. Also, the MSE of Algorithm 2 and Algorithm 3 under the non-linear model is slightly lower than the MSE under the linear model.
We also investigate how the MSE changes as a function of the fairness level. Figure 3 illustrates the MSE-fairness trade-off. To generate this plot, we repeat the experiment for several values of $\gamma$; for each $\gamma$, we ran the experiment five times and calculated the average MSE and the average MSE difference using the test dataset. Based on Figure 3, we observe that FairBatch and LinRe are not very sensitive to the input $\gamma$; however, FairBatch may sometimes show a better trade-off than Algorithm 2. In this example, PM, Algorithm 2, and Algorithm 3 are very sensitive to $\gamma$: as $\gamma$ increases, the MSE decreases and the loss difference increases.
[Table 3: mean and standard deviation of the test loss and the test loss difference between groups for PM, LinRe, FairBatch, Algorithm 2 (ours), and Algorithm 3 (ours) with the non-linear model on the law school dataset.]
Limitations and Negative Societal Impact. 1) Our theoretical guarantees are valid under the stated assumptions (e.g., the convexity of the loss in $\mathbf{w}$, i.i.d. samples, a binary sensitive attribute); these assumptions have been clearly stated throughout this paper. 2) In this paper, we develop algorithms for finding a fair predictor under EL fairness. However, we do not claim this notion is better than other fairness notions; depending on the scenario, it may or may not be suitable for mitigating unfairness.
7 Conclusion
In this work, we studied supervised learning problems under the Equalized Loss (EL) fairness notion (Zhang et al., 2019), which requires the expected loss to be balanced across different demographic groups. Imposing the EL constraint turns the learning problem into a non-convex optimization problem. We proposed two algorithms with theoretical performance guarantees for finding the global optimal solution and a sub-optimal solution to this non-convex problem, respectively.
Acknowledgment
This work is partially supported by the NSF under grants IIS-2202699, IIS-2301599, and ECCS-2301601.
References
- [1] Mahed Abroshan, Mohammad Mahdi Khalili, and Andrew Elliott. Counterfactual fairness in synthetic data generation. In NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research.
- Abroshan et al. [2023] Mahed Abroshan, Saumitra Mishra, and Mohammad Mahdi Khalili. Symbolic metamodels for interpreting black-boxes using primitive functions. arXiv preprint arXiv:2302.04791, 2023.
- Agarwal et al. [2018] Alekh Agarwal, Alina Beygelzimer, Miroslav Dudík, John Langford, and Hanna Wallach. A reductions approach to fair classification. In International Conference on Machine Learning, pages 60–69. PMLR, 2018.
- Agarwal et al. [2019] Alekh Agarwal, Miroslav Dudik, and Zhiwei Steven Wu. Fair regression: Quantitative definitions and reduction-based algorithms. In International Conference on Machine Learning, pages 120–129. PMLR, 2019.
- Agrawal et al. [2018] Akshay Agrawal, Robin Verschueren, Steven Diamond, and Stephen Boyd. A rewriting system for convex optimization problems. Journal of Control and Decision, 5(1):42–60, 2018.
- Bartlett and Mendelson [2002] Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482, 2002.
- Ben-Tal and Zibulevsky [1997] Aharon Ben-Tal and Michael Zibulevsky. Penalty/barrier multiplier methods for convex programming problems. SIAM Journal on Optimization, 7(2):347–366, 1997.
- Biega et al. [2018] Asia J Biega, Krishna P Gummadi, and Gerhard Weikum. Equity of attention: Amortizing individual fairness in rankings. In The 41st international acm sigir conference on research & development in information retrieval, pages 405–414, 2018.
- Calmon et al. [2017] Flavio P Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. Optimized pre-processing for discrimination prevention. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 3995–4004, 2017.
- Celis et al. [2020] L Elisa Celis, Vijay Keswani, and Nisheeth Vishnoi. Data preprocessing to mitigate bias: A maximum entropy based approach. In International Conference on Machine Learning, pages 1349–1359. PMLR, 2020.
- Conitzer et al. [2019] Vincent Conitzer, Rupert Freeman, Nisarg Shah, and Jennifer Wortman Vaughan. Group fairness for the allocation of indivisible goods. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 1853–1860, 2019.
- Dastin [2018] Jeffrey Dastin. Amazon scraps secret ai recruiting tool that showed bias against women. http://reut.rs/2MXzkly, 2018.
- Diamond and Boyd [2016] Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1–5, 2016.
- Diana et al. [2021] Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, and Aaron Roth. Minimax group fairness: Algorithms and experiments. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pages 66–76, 2021.
- Donini et al. [2018] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. Advances in Neural Information Processing Systems, 31, 2018.
- Dressel and Farid [2018] Julia Dressel and Hany Farid. The accuracy, fairness, and limits of predicting recidivism. Science advances, 4(1):eaao5580, 2018.
- Dwork et al. [2012] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pages 214–226, 2012.
- Gupta and Kamble [2019] Swati Gupta and Vijay Kamble. Individual fairness in hindsight. In Proceedings of the 2019 ACM Conference on Economics and Computation, pages 805–806, 2019.
- Hardt et al. [2016] Moritz Hardt, Eric Price, and Nati Srebro. Equality of opportunity in supervised learning. Advances in neural information processing systems, 29:3315–3323, 2016.
- Harwell [2018] Drew Harwell. The accent gap. http://wapo.st/3pUqZ0S, 2018.
- Jung et al. [2019] Christopher Jung, Michael Kearns, Seth Neel, Aaron Roth, Logan Stapleton, and Zhiwei Steven Wu. Eliciting and enforcing subjective individual fairness. arXiv preprint arXiv:1905.10660, 2019.
- Kamiran and Calders [2012] Faisal Kamiran and Toon Calders. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1):1–33, 2012.
- Khalili et al. [2020] Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, and Somayeh Sojoudi. Improving fairness and privacy in selection problems. arXiv preprint arXiv:2012.03812, 2020.
- Khalili et al. [2021] Mohammad Mahdi Khalili, Xueru Zhang, and Mahed Abroshan. Fair sequential selection using supervised learning models. Advances in Neural Information Processing Systems, 34:28144–28155, 2021.
- Kingma and Ba [2014] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Kohavi [1996] Ron Kohavi. Scaling up the accuracy of naive-bayes classifiers: A decision-tree hybrid. In Kdd, volume 96, pages 202–207, 1996.
- Komiyama et al. [2018] Junpei Komiyama, Akiko Takeda, Junya Honda, and Hajime Shimao. Nonconvex optimization for regression with fairness constraints. In International conference on machine learning, pages 2737–2746. PMLR, 2018.
- Lohaus et al. [2020] Michael Lohaus, Michael Perrot, and Ulrike Von Luxburg. Too relaxed to be fair. In International Conference on Machine Learning, pages 6360–6369. PMLR, 2020.
- Lundberg and Lee [2017] Scott Lundberg and Su-In Lee. A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874, 2017.
- Mahdavi et al. [2012] Mehrdad Mahdavi, Tianbao Yang, Rong Jin, Shenghuo Zhu, and Jinfeng Yi. Stochastic gradient descent with only one projection. Advances in neural information processing systems, 25:494–502, 2012.
- Nedić and Ozdaglar [2009] Angelia Nedić and Asuman Ozdaglar. Subgradient methods for saddle-point problems. Journal of optimization theory and applications, 142(1):205–228, 2009.
- Nemirovski [2004] Arkadi Nemirovski. Interior point polynomial time methods in convex programming. Lecture notes, 42(16):3215–3224, 2004.
- Reimers et al. [2021] Christian Reimers, Paul Bodesheim, Jakob Runge, and Joachim Denzler. Towards learning an unbiased classifier from biased data via conditional adversarial debiasing. arXiv preprint arXiv:2103.06179, 2021.
- Ribeiro et al. [2016] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144, 2016.
- Roh et al. [2020] Yuji Roh, Kangwook Lee, Steven Euijong Whang, and Changho Suh. Fairbatch: Batch selection for model fairness. In International Conference on Learning Representations, 2020.
- Shalev-Shwartz and Ben-David [2014] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014.
- Shen et al. [2022] Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, and Lea Frermann. Optimising equal opportunity fairness in model training. arXiv preprint arXiv:2205.02393, 2022.
- Wightman [1998] Linda F Wightman. LSAC national longitudinal bar passage study. LSAC Research Report Series, 1998.
- Williamson and Menon [2019] Robert Williamson and Aditya Menon. Fairness risk measures. In International Conference on Machine Learning, pages 6786–6797. PMLR, 2019.
- Woodworth et al. [2017] Blake Woodworth, Suriya Gunasekar, Mesrob I Ohannessian, and Nathan Srebro. Learning non-discriminatory predictors. In Conference on Learning Theory, pages 1920–1953. PMLR, 2017.
- Wright [2001] Stephen J Wright. On the convergence of the newton/log-barrier method. Mathematical programming, 90(1):71–100, 2001.
- Wu et al. [2019] Yongkai Wu, Lu Zhang, and Xintao Wu. On convexity and bounds of fairness-aware classification. In The World Wide Web Conference, pages 3356–3362, 2019.
- Zafar et al. [2017] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th international conference on world wide web, pages 1171–1180, 2017.
- Zafar et al. [2019] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez-Rodriguez, and Krishna P Gummadi. Fairness constraints: A flexible approach for fair classification. The Journal of Machine Learning Research, 20(1):2737–2778, 2019.
- Zhang et al. [2018] Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 335–340, 2018.
- Zhang et al. [2019] Xueru Zhang, Mohammadmahdi Khaliligarekani, Cem Tekin, and Mingyan Liu. Group retention when using machine learning in sequential decision making: the interplay between user dynamics and fairness. Advances in Neural Information Processing Systems, 32:15269–15278, 2019.
- Zhang et al. [2020] Xueru Zhang, Mohammad Mahdi Khalili, and Mingyan Liu. Long-term impacts of fair machine learning. Ergonomics in Design, 28(3):7–11, 2020.
- Zhang et al. [2022] Xueru Zhang, Mohammad Mahdi Khalili, Kun Jin, Parinaz Naghizadeh, and Mingyan Liu. Fairness interventions as (dis) incentives for strategic manipulation. In International Conference on Machine Learning, pages 26239–26264. PMLR, 2022.
Appendix A Appendix
A.1 Some notes on the code for reproducibility
In this part, we provide a description of the files provided in our GitHub repository.
• A data-processing file including a function that processes the law school admission dataset and splits it randomly into training and test datasets (we keep 70% of the data points for training). Later, in our experiments, we set the random seed equal to 0, 1, 2, 3, and 4 to get five different splits and repeat our experiments five times.

• A data-processing file including a function that processes the adult income dataset and splits it randomly into training and test datasets. Later, in our experiments, we set the random seed equal to 0, 1, 2, 3, and 4 to get five different splits and repeat our experiments five times.
• A file including the following functions:

  – ELminimizer: implements the ELminimizer algorithm. Its inputs are the training data points belonging to group 0, the training data points belonging to group 1, the fairness level $\gamma$ for the EL constraint, the regularizer parameter, and a flag determining the model to train (linear regression or logistic regression). It returns five variables: the weight vector and bias term of the trained model, the average training loss of group 0, the average training loss of group 1, and the overall training loss.

  – A function implementing Algorithm 2, which calls the ELminimizer algorithm twice. It returns the same five variables defined above.

  – A function implementing Algorithm 3, which finds a sub-optimal solution under EL fairness. It returns the same five variables defined above.

  – A function using the CVXPY package to solve optimization problem (4) in each iteration of Algorithm 1.

  – A function used to compute the test loss: given model parameters (trained by Algorithm 2 or 3), it returns the average loss of group 0, the average loss of group 1, and the overall loss on the given dataset.

  – A function for solving optimization problem (8) after linear relaxation.
• A file including the following functions:

  – A penalty-method training function (for either linear regression or logistic regression): it trains the model under EL using the penalty method and Adam optimization. Its inputs include the maximum number of iterations, the regularization parameter, the learning rate, the fairness level $\gamma$, and a tolerance used for the stopping criterion. It returns the trained model (a torch module), the training losses of group 0 and group 1, and the overall training loss.

  – A function simulating the FairBatch algorithm [Roh et al., 2020]. Its input parameters are similar to those of the penalty-method function, except for one additional parameter that determines how to adjust the sub-sampling distribution for mini-batch formation (see the next section for more details). It returns the trained model (a torch module), the training losses of group 0 and group 1, and the overall training loss.

The script table1_2.py uses the above functions to reproduce the results in Table 1 and Table 2, and figure1_2.py reproduces Figure 1 and Figure 2. We provide comments in these files to make the code more readable. We have also provided code for training non-linear models: use table3.py and figure3.py to generate the results in Table 3 and Figure 3, respectively.
Lastly, use the following commands to reproduce the results:

• Table 1: python3 table1_2.py --experiment=1 --gamma=0.0 and python3 table1_2.py --experiment=1 --gamma=0.1

• Table 2: python3 table1_2.py --experiment=2 --gamma=0.0 and python3 table1_2.py --experiment=2 --gamma=0.1

• Table 3: python3 table3.py --gamma=0.0 and python3 table3.py --gamma=0.1

• Figure 1: python3 figure1_2.py --experiment=1

• Figure 2: python3 figure1_2.py --experiment=2

• Figure 3: python3 figure3.py
Note that you need to install the packages listed in requirements.txt.
A.2 Notes on FairBatch [Roh et al., 2020]
This method was proposed to find a predictor under equal opportunity, equalized odds, or statistical parity. In each epoch, it identifies the disadvantaged group and increases the sub-sampling rate of that group in mini-batch selection for the next epoch. We modify this approach for $\gamma$-EL as follows:
• We initialize the sub-sampling rate $\lambda_a$ of each group $a$ for mini-batch formation, with $\lambda_0 + \lambda_1 = 1$, and form the mini-batches using $\lambda_0$ and $\lambda_1$.

• At epoch $t$, we run gradient descent using the mini-batches formed with $\lambda_0^{(t)}$ and $\lambda_1^{(t)}$, and we obtain new model parameters.

• After epoch $t$, we calculate the empirical loss of each group. We then increase the sub-sampling rate of the group with the higher empirical loss by $\alpha$ (and decrease the other group's rate accordingly), where $\alpha$ is a hyperparameter; a short sketch of this update follows.
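A sketch of this update (ours; the clipping to $[0, 1]$ is an assumption):

```python
def update_sampling_rate(lam0, loss0, loss1, alpha=0.005):
    """Shift mini-batch mass toward the group with higher empirical loss.

    lam0 is group 0's sub-sampling rate; group 1's rate is 1 - lam0.
    alpha is the adjustment hyperparameter from the text.
    """
    if loss0 > loss1:
        lam0 = min(1.0, lam0 + alpha)   # group 0 is disadvantaged
    elif loss1 > loss0:
        lam0 = max(0.0, lam0 - alpha)   # group 1 is disadvantaged
    return lam0
```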
A.3 Details of numerical experiments and additional numerical results
Due to the space limits of the main paper, we provide more details on our experiments here:
• Stopping criteria for the penalty method and FairBatch: We stopped the learning process when the change in the objective function between two consecutive epochs fell below a small threshold; we did not observe any significant change by choosing a smaller threshold.

• Learning rates for the penalty method and FairBatch: We used a fixed learning rate for training the linear models and a different fixed learning rate for the experiment with the non-linear model.

• Stopping criteria for Algorithm 2 and Algorithm 3: As stated in the main paper, we set the stopping parameter $\epsilon$ in ELminimizer and Algorithm 3; choosing a smaller $\epsilon$ did not change the performance significantly.

• Linear Relaxation: Note that problem (8) after linear relaxation is a convex optimization problem. We directly solve this optimization problem using CVXPY.
The experiments were run on a system with the following configuration: 24 GB of RAM, 2 cores of a P100-16GB GPU, and 2 cores of an Intel Xeon CPU. We used the GPUs for training FairBatch.
A.4 Notes on the Reduction Approach [Agarwal et al., 2018, 2019]
Let $Q$ be a distribution over $\mathcal{F}$. In order to find the optimal $Q$ using the reduction approach, we have to solve the following optimization problem:

$$\min_{Q} \; \mathbb{E}_{\mathbf{w} \sim Q}\left[L(\mathbf{w})\right] \quad \text{s.t.} \quad \left|\mathbb{E}_{\mathbf{w} \sim Q}\left[L_0(\mathbf{w})\right] - \mathbb{E}_{\mathbf{w} \sim Q}\left[L_1(\mathbf{w})\right]\right| \leq \gamma.$$

Similar to [Agarwal et al., 2018, 2019], we can re-write the above optimization problem with the absolute value split into two one-sided constraints. Then, the reduction approach forms the Lagrangian, introducing a multiplier for each of the two constraints. Since $f_{\mathbf{w}}$ is parameterized by $\mathbf{w}$, finding a distribution over $\mathcal{F}$ amounts to finding a distribution $Q$ over $\mathbf{w}$, and the problem becomes a saddle-point problem in $(Q, \boldsymbol{\lambda})$. The reduction approach updates $Q$ and $\boldsymbol{\lambda}$ alternately. Looking carefully at Algorithm 1 in [Agarwal et al., 2018], after updating $\boldsymbol{\lambda}$, we need access to an oracle that can solve the best-response problem in each iteration, i.e., minimizing over $\mathbf{w}$ a weighted combination of $L(\mathbf{w})$ and $\pm\left(L_0(\mathbf{w}) - L_1(\mathbf{w})\right)$.

This best-response problem is not convex for all values of the multipliers (it is a difference of convex functions). Therefore, in order to use the reduction approach, we would need access to an oracle that solves this non-convex optimization problem, which is not available. Note that the original problem (1) is non-convex, and using the reduction approach just leads to another non-convex optimization problem.
A.5 Equalized Loss & Bounded Group Loss
In this section, we study the relation between the EL and BGL fairness notions. It is straightforward to see that any predictor satisfying $\gamma$-BGL also satisfies $\gamma$-EL: if both group losses are at most $\gamma$, then (since losses are non-negative) their difference is at most $\gamma$. However, it is unclear to what extent an optimal fair predictor under $\gamma$-EL satisfies the BGL fairness notion. Next, we theoretically study the relation between the BGL and EL fairness notions.
Consider the solution to (1) and the corresponding optimal $\gamma$-EL fair predictor. Theorem A.1 below shows that, under certain conditions, it is impossible for both groups to experience a loss larger than a certain threshold under the optimal $\gamma$-EL fair predictor.
Theorem A.1.
Suppose there exists a predictor that satisfies the $\gamma$-BGL fairness notion; that is, the following optimization problem has at least one feasible point:
$$\min_{\mathbf{w}} \; L(\mathbf{w}) \quad \text{s.t.} \quad L_a(\mathbf{w}) \leq \gamma, \; a \in \{0, 1\}. \qquad (12)$$
Then, the following holds:
Theorem A.1 shows that $\gamma$-EL implies a bounded-group-loss guarantee whenever $\gamma$-BGL is feasible. Therefore, if $\gamma$ is not too small, EL and BGL do not contradict each other.
We emphasize that we are not claiming that EL fairness is better than BGL. Instead, these relations indicate the impacts the two fairness constraints could have on model performance; the results may further provide guidance for policy-makers.
A.6 Proofs
In order to prove Theorem 3.3, we first introduce two lemmas.
Lemma A.2.
Under Assumption 3.2, there exists such that and .
Proof. Let and , and . Note that because is the minimizer of .
First, we show that is an increasing function in , and is a decreasing function in . Note that , and is convex because is convex. This implies that is an increasing function, and . Similarly, we can show that .
Note that under Assumption 3.2, and . Therefore, by the intermediate value theorem, there exists such that . Define . We have,
Lemma A.3.
, where is the solution to (4).
Proof. We proceed by contradiction. Assume that (i.e., is an interior point of the feasible set of (4)). Notice that cannot be in the feasible set of (4) because . As a result, . This is a contradiction, because an interior point of the feasible set of a convex optimization problem cannot be optimal if the gradient of the objective is not zero at that point.
Proof [Theorem 3.3]
Now, we show that for all ( is the solution to (1) when . As a result, ). Note that because is the minimizer of . Moreover, otherwise ( is defined in Lemma A.2) and is not an optimal solution under 0-EL. Therefore, .
Now we proceed by induction. Suppose . We show that as well. We consider two cases.
• . In this case, is a feasible point for (4), and . Therefore, .
• . In this case, we proceed by contradiction to show that . Assume that . Define , where . Note that, by Lemma A.3, . Therefore, . Moreover, under Assumption 3.2, . Therefore, by the intermediate value theorem, there exists such that . Similar to the proof of Lemma A.2, we can show that it is an increasing function for all . As a result, . Define . We have,
(13)
(14)
The last two equations imply that is not a globally optimal fair solution under the $\gamma$-EL fairness constraint and is not the global optimal solution to (1). This is a contradiction. Therefore, if , then . As a result,
By the two cases above and the nested interval theorem, we conclude that,
Therefore, would be the solution to the following optimization problem,
By Lemma A.3, the solution to the above optimization problem (i.e., ) satisfies the following: . Therefore, is the global optimal solution to optimization problem (1).
Proof [Theorem 3.4] Let us assume that does not satisfy $\gamma$-EL. (If it satisfies $\gamma$-EL, it is the optimal predictor under $\gamma$-EL fairness, and there is no need to solve any constrained optimization problem.) Note that is the solution to problem (7). Let be the optimal weight vector under $\gamma$-EL. It is clear that .
Step 1. We show that one of the following holds:
(15)
(16)
Proof by contradiction. Assume . This implies that is an interior point of the feasible set of optimization problem (1). Since , then . As a result, the objective function of (1) can be improved at by moving toward . This is a contradiction. Therefore, .
Step 2. We show that is the solution to the following optimization problem:
(17)
To show the above claim, notice that the solution to optimization problem (A.6) is the same as the following,
Lastly, because , we have,
Proof [Theorem 4.1]
1. Under Assumption 3.2, . Moreover, . Therefore, by the intermediate value theorem, there exists such that .

2. Since is the minimizer of , . Moreover, since is strictly convex, is strictly convex and is a strictly increasing function. As a result, for , and is strictly increasing.

3. Similar to the above argument, is a strictly decreasing function (notice that and is strictly convex).

Since is strictly increasing and is strictly decreasing, we conclude that is strictly increasing. As a result, must be strictly decreasing.
Proof [Theorem 4.2] First, we show that if , then satisfies -EL.
Moreover, because . Therefore, -EL is satisfied.
Now, let . Note that under Assumption 3.2, . Therefore, by the intermediate value theorem, there exists such that . Moreover, based on Theorem 4.1, is a strictly decreasing function. Therefore, the binary search proposed in Algorithm 3 converges to the root of . As a result, satisfies $\gamma$-EL. Note that since is strictly decreasing, and , and is the smallest possible under which satisfies $\gamma$-EL. Since is increasing, the smallest possible yields better accuracy.
Proof [Theorem 4.3] If , then satisfies $\gamma$-EL, and . In this case, it is easy to see that (because is a weighted average of and ).
Now assume that . Note that if we prove this theorem for , then the theorem will hold for . This is because the optimal predictor under -EL satisfies -EL condition as well. In other words, -EL is a stronger constraint than -EL.
Let . In this case, Algorithm 3 finds , where is defined in Theorem 4.1. We have,
Therefore, we have,
(22)
(23)
(24)
Proof [Theorem 4.5]
By the triangle inequality, the following holds:
(25)
(26)
Therefore, with probability at least , we have:
(27)
As a result, with probability , the following holds:
(28)
Now consider the following:
(29)
By (A.6), with probability . Thus, with probability at least , we have:
(30)
Therefore, under Assumption 4.4, we can conclude that, with probability at least , . In addition, by (A.6), with probability at least , we have,
Proof [Theorem A.1] Let be a feasible point of optimization problem (12); then is also a feasible point of (1).
We proceed by contradiction and consider three cases:
•
• If and . This case is similar to the one above: the assumption implies a contradiction, because it implies that is not an optimal solution to (1).
•
Therefore, and .