11email: [email protected] 22institutetext: Naveen Jindal School of Management, University of Texas at Dallas
22email: [email protected]
Group Fairness in Non-monotone Submodular Maximization
Abstract
Maximizing a submodular function has a wide range of applications in machine learning and data mining. One such application is data summarization whose goal is to select a small set of representative and diverse data items from a large dataset. However, data items might have sensitive attributes such as race or gender, in this setting, it is important to design fairness-aware algorithms to mitigate potential algorithmic bias that may cause over- or under- representation of particular groups. Motivated by that, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a set of items that maximizes a non-monotone submodular function, while ensuring that the number of selected items from each group is proportionate to its size, to the extent specified by the decision maker. We develop the first constant-factor approximation algorithms for this problem. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items.
1 Introduction
Submodular function refers to a broad class of functions which satisfy the natural diminishing returns property: adding an additional item to a larger existing subset is less beneficial. A wide range of machine learning and AI problems, including exemplar-based clustering [7], feature selection [6], active learning [12], influence maximization in social networks [20], recommender system [9], and diverse data summarization [19], can be formulated as a submodular maximization problem. This problem, whose goal is to select a set of items to maximize a submodular function, and its variants [14, 18] have been extensively studied in the literature subject to various constraints, including cardinality, matroid, or knapsack-type restrictions.
We notice that in practise, items or individuals are often associated with different groups based on various attributes, such as gender, race, age, religion, or other factors. Existing algorithms might exhibit bias if left unchecked, for example, some of the groups might be over- or under-represented in the final selected subset. Therefore, it becomes increasingly important to design fairness-aware algorithms to mitigate such issues. Towards this end, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a balanced set of items that maximizes a non-monotone submodular function, such that the ratio of selected items from each group to its size is within a desired range, as determined by the decision maker. Non-monotone submodular maximization has multiple compelling applications, such as feature selection [6], profit maximization [21], maximum cut [13] and data summarization [16]. Formally, we consider a set of items (e.g., datapoints) which are partitioned into groups: such that items from the same group share same attributes (e.g., gender). We say that a set of items is -fair if for all groups , it holds that . Using our model, it allows for the decision maker to specify the desired level of fairness by setting appropriate values of and . Specifically, setting leads to the highest level of fairness in that the number of selected items is strictly proportional to its group size; if we set and , there are no fairness constraints. Our goal is to find such a -fair subset of items that maximizes a submodular objective function. Our definition of fairness, which balances solutions with respect to sensitive attributes, has gained widespread acceptance in the academic community, as demonstrated by its frequent use in previous studies [4, 10, 5]. There are several other notations of fairness that can be captured by our formulation such as the -rule [1], statistical parity [8] and proportional representation [17].
1.1 Our Contributions
-
•
Our study breaks new ground by examining the classic (non-monotone) submodular maximization problem under -fairness constraints. Our model offers flexibility in capturing varying degrees of fairness as desired by the decision maker, by adjusting the values of and .
-
•
We develop the first constant-factor approximation algorithm for this problem. We observe that the parameter is closely linked to the complexity of solving the -fair non-monotone submodular maximization problem. In particular, when , we design a -approximation algorithm and when , we develop a -approximation algorithm, where is the approximation ratio of the current best algorithm for matroid-constrained submodular maximization. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items. We provide approximation algorithms that have a constant-factor approximation ratio for this extended model.
1.2 Additional Related Works
In recent years, there has been a growing awareness of the importance of fair and unbiased decision-making systems. This has led to an increased interest in the development of fair algorithms in a wide range of applications, including influence maximization [25], classification [26], voting [4], bandit learning [15], and data summarization [3]. Depending on the specific context and the type of bias that one is trying to mitigate, existing studies adopt different metrics of fairness. This can lead to different optimization problems and different fair algorithms that are tailored to the specific requirements of the application. Our notation of fairness is general enough to capture many existing notations such as the -rule [1], statistical parity [8] and proportional representation [17]. Unlike most of existing studies on fair submodular maximization [4] whose objective is to maximize a monotone submodular function, [10] develop fair algorithms in the context of streaming non-monotone submodular maximization. Their proposed notation of fairness is more general than ours, leading to a more challenging optimization problem which does not admit any constant-factor approximation algorithms. [24, 23] aim to develop randomized algorithms that satisfy average fairness constraints. Very recently, [22] extend the studies of fair algorithms to a more complicated adaptive setting and they propose a new metric of fairness called group equality.
2 Preliminaries and Problem Statement
We consider a set of items. There is a non-negative submodular utility function . Denote by the marginal utility of on top of , i.e., . We say a function is submodular if for any two sets such that and any item ,
Assume is partitioned into disjoint groups: . We assume that there is a given lower and upper bound on the fraction of items of each group that must be contained in a feasible solution. These two bounds, namely and , represent group fairness constraints. The problem of -fair submodular maximization problem (labelled as P.0) can be written as follows.
P.0
subject to:
.
One can adjust the degree of group fairness in a feasible solution through choosing appropriate values of and . I.e., strict group fairness is achieved at in which case every feasible solution must contain the same fraction of items from each group; if we set and , then there is no group fairness constraints. We next present the hardness result of this problem.
Lemma 1
Problem P.0 is NP-hard.
Proof: We prove this through reduction to the classic cardinality constrained submodular maximization problem which we define below.
Definition 1
The input of cardinality constrained submodular maximization problem is a group of items , a submodular function , and a cardinality constraint ; we aim to select a group of items such that is maximized and .
We next show a reduction from cardinality constrained submodular maximization problem to P.0. Consider any given instance of cardinality constrained submodular maximization problem, we construct a corresponding instance of P.0 as follows: Let , , assume there is only one group, i.e., , and let , . It is easy to verify that these two instances are equivalent. This finishes the proof of the reduction.
3 Non-monotone Submodular Maximization with Group Fairness
Warm-up: Monotone Utility Function
If is monotone and submodular, we can easily confirm that P.0 can be simplified to P.1 by removing the lower bound constraints. This is because in this case, increasing the size of a solution by adding more items will not decrease its utility. As a result, the lower bound constraints in P.0, which state that for all , can always be met by adding sufficient items to the solution.
P.1
subject to:
.
Since is a monotone submodular function, P.1 is a well-known problem of maximizing a monotone submodular function subject to matroid constraints111A matroid is a pair where and 1. . 2. .. This problem has a -approximation algorithm.
We then proceed to develop approximation algorithms for non-monotone functions. We will examine two scenarios, specifically when and when .
3.1 The case when
In the scenario where , we use the solution of P.1 as a building block to construct our algorithm. First, it is easy to verify that P.1 is a relaxed version of P.0 with lower bound constraints in P.0 being removed. Because is a submodular function, P.1 is a classic problem of maximizing a (non-monotone) submodular function subject to matroid constraints. There exist effective solutions for P.1. Now we are ready to present the design of our algorithm as below.
-
1.
Apply the state-of-the-art algorithm for matroid constrained submodular maximization to solve P.1 and obtain a solution .
-
2.
Note that is not necessarily a feasible solution to P.0 because it might violate the lower bound constraints for some groups. To make it feasible, we add additional items to . Specifically, for each group such that , our algorithm selects a backup set of size , by randomly sampling items from . Define if .
-
3.
At the end, add to to build the final solution , i.e., .
The pseudocode of this approximation algorithm is given as Algorithm 1. Observe that is a feasible solution to P.1, hence, satisfies upper bound constraints of P.1 and hence P.0, i.e., . According to the construction of , it is easy to verify that adding to does not violate the upper bound constraints because are only supplemented to those groups which do not satisfy the lower bound constraints of P.0, i.e., . Moreover, adding to makes it satisfy lower bound constraints of P.0. Hence, is a feasible solution to P.0.
Lemma 2
is a feasible solution to P.0.
3.1.1 Performance Analysis
Lemma 3
If is submodular and is a random subset of , such that each item in is contained in with probability at most , then .
The next lemma states that if is a -approximate solution of P.1, then is at least fraction of the optimal solution of P.0.
Lemma 4
Suppose is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint. Let denote the optimal solution of P.0, we have .
Proof: Because is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint, we have where denotes the optimal solution of P.1. Moreover, because P.1 is a relaxed version of P.0, we have . Hence, .
We next show that augmenting with items from the random set reduces its utility by a factor of at most in expectation. Here the expectation is taken over the distribution of .
Lemma 5
Suppose , we have where .
Proof: Recall that for all such that , hence, adding those to does not affect its utility. In the rest of the proof we focus on those with
(1) |
Recall that for every such that , is a random set of size that is sampled from . It follows that each item in is contained in with probability at most
(2) |
We next give an upper bound of (2). First,
(3) |
where the second inequality is by the assumption that . Moreover,
(4) | |||
(5) |
where the inequality is by the assumption that .
Hence,
(6) |
where the first inequality is by (5); the second inequality is by (3) and the assumption that (listed in (1)). That is, the probability that each item in is contained in is at most . It follows that the probability that each item in is contained in is at most . Moreover, Lemma 3 states that if is submodular and is a random subset of , such that each item in appears in with probability at most , then . With the above discussion and the observation that is submodular, it holds that .
Theorem 3.1
Suppose is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and , we have .
3.2 The case when
We next consider the case when . We first introduce a new utility function as below:
(7) |
We first present a well-known result, which states that submodular functions maintain their submodularity property when taking their complement.
Lemma 6
If is submodular, then must be submodular.
With utility function , we present a new optimization problem P.2 as below:
P.2
subject to:
.
P.2 is a flipped version of the original problem P.0 in the sense that if there is a -approximate solution to P.2, it can be easily verified that is a -approximate solution to P.0. As a result, we will focus on solving P.2 for the rest of this section.
To solve P.2, we introduce another problem (labeled as P.3) as follows:
P.3
subject to:
.
P.3 is relaxed version of P.2 with lower bound constraints in P.2 being removed. Because is a submodular function, P.3 is a classic problem of maximizing a submodular function subject to matroid constraints. Now we are ready to present the design of our algorithm.
-
1.
Apply the state-of-the-art algorithm for matroid constrained submodular maximization to solve P.3 and obtain a solution .
-
2.
Note that is not necessarily a feasible solution to P.2 because it might violate the lower bound constraints for some groups. We add additional items to to make it feasible. Specifically, for each group such that , our algorithm selects a backup set of size , by randomly sampling items from . Define if .
-
3.
Add to to build , i.e., . Return as the final solution.
The pseudocode of this approximation algorithm is given as Algorithm 2. Observe that satisfies upper bound constraints of P.3 and hence P.2 because is a feasible solution to P.3. According to the construction of , adding to does not violate the upper bound constraints because are added to meet the lower bound constraints of P.2 if necessary. Moreover, adding to makes it satisfy lower bound constraints of P.2. Hence, is a feasible solution to P.2.
Lemma 7
is a feasible solution to P.2.
3.2.1 Performance Analysis
We first introduce a technical lemma which states that if is a -approximate solution of P.3, then is at least fraction of the optimal solution of P.2. This lemma follows from the observation that P.3 is a relaxation of P.2 .
Lemma 8
Suppose is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint. Let denote the optimal solution of P.2, it holds that .
We next show that augmenting with items from reduces its utility by a factor of at most in expectation.
Lemma 9
Suppose , where .
Proof: Recall that for all such that , hence, adding those to does not affect its utility. Therefore, we focus on those groups with in the rest of the proof. Let denote the set containing the indexes of all such groups and we assume to avoid trivial cases. We next show that it is safe to assume without loss of generality, i.e., the smallest group in contains at least two items. To prove this, we consider two cases, depending on the value of . If , then does not hold for any group such that , that is, . If , then according to the group fairness constraints listed in P.0, we are not allowed to select any items from those groups with . Hence, removing all groups with size one from consideration does not affect the quality of the optimal solution.
With the assumption that , we are now in position to prove this lemma. Recall that for every , is a random set of size that is sampled from . It follows that each item in appears in with probability at most
(8) |
We next give an upper bound of (8). Because we assume , we have . This, together with the assumption that , implies that for all ,
(9) |
Moreover,
(10) | |||
(11) | |||
(12) | |||
(13) |
where the inequality is by the observation that . It follows that
(14) |
where the first inequality is by (13) and the second inequality is by (9) and the assumption that . That is, each item in appears in with probability at most . Lemma 3 and the observation that is submodular imply that .
Lemma 8 and Lemma 9 together imply that
By the definition of function , we have
where the last equality is by the observation that P.2 and P.0 share the same value of the optimal solution. Hence, the following main theorem holds.
Theorem 3.2
Suppose is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and , we have .
4 Extension: Incorporating Global Cardinality Constraint
In this section, we extend P.0 to incorporate a global cardinality constraint. A formal definition of this problem is listed in P.A. Our objective is to find a best subject to a group fairness constraint and an additional cardinality constraint .
P.A
subject to:
.
.
4.1 The case when
We first consider the case when . We introduce a new optimization problem P.B as follows:
P.B
subject to:
.
.
It is easy to verify that P.B is a relaxation of P.A in the sense that every feasible solution to P.A is also a feasible solution to P.B. Hence, we have the following lemma.
Lemma 10
Let denote the optimal solution of P.A and denote the optimal solution of P.B, we have .
It has been shown that the constraints in P.B gives rise to a matroid [10]. This, together with the assumption that is a submodular function, implies that P.B is a classic problem of maximizing a submodular function subject to matroid constraints. Now we are ready to present the design of our algorithm.
-
1.
Apply the state-of-the-art algorithm for matroid constrained submodular maximization to solve P.B and obtain a solution .
-
2.
Note that is not necessarily a feasible solution to P.A because it might violate the lower bound constraints for some groups. To make it feasible, we add additional items to . Specifically, for each group such that , our algorithm selects a backup set of size , by randomly sampling items from . Define if .
-
3.
At the end, add to to build the final solution , i.e., .
The pseudocode of this approximation algorithm is given as Algorithm 3. Observe that satisfies the group-wise upper bound constraints of P.A because meets the first set of constraints in P.B. According to the construction of , adding to does not violate the group-wise upper bound constraints of P.A because are added to meet the lower bound constraints of P.A if necessary. Moreover, adding to does not violate the global cardinality constraint of P.A because meets the second set of constraints in P.B. At last, it is easy to verify that adding to makes it satisfy the lower bound constraints of P.A. Hence, is a feasible solution to P.A.
Lemma 11
is a feasible solution to P.A.
Following the same proof of Theorem 3.1, we have the following theorem.
Theorem 4.1
Suppose is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and , we have .
4.2 The case when
We next consider the case when . Recall that . We first present a flipped formation of P.A as below:
P.C
subject to:
.
.
Suppose there is a -approximate solution to P.C, it is easy to verify that is a -approximate solution to P.A. We focus on solving P.C in the rest of this section. We first introduce a new optimization problem (labeled as P.D) as follows:
P.D
subject to:
.
P.D is relaxed version of P.C with both group-wise lower bound constraints and global lower bound constraints in P.C being removed. Hence, we have the following lemma.
Lemma 12
Let denote the optimal solution of P.C and denote the optimal solution of P.D, we have .
Recall that if is submodular, must be submodular (by Lemma 6). Hence, P.D is a classic problem of maximizing a submodular function subject to matroid constraints. We next present the design of our algorithm.
-
1.
Apply the state-of-the-art algorithm for matroid constrained submodular maximization to solve P.D and obtain a solution .
-
2.
Note that is not necessarily a feasible solution to P.C because it might violate the group-wise or the global lower bound constraints of P.C. We add additional items to to make it feasible. Specifically, for each group , our algorithm selects a backup set of size , by randomly sampling items from . Define if .
-
3.
Add to to build , i.e., . Return as the final solution.
The pseudocode of this approximation algorithm is given as Algorithm 4. Observe that adding to ensures that each group contributes exactly number of items to the solution. Because (otherwise P.C does not have a feasible solution), must satisfy all constraints in P.C. Hence, we have the following lemma.
Lemma 13
is a feasible solution to P.C.
We next analyze the performance of . The following lemma states that adding to reduces its utility by a factor of at most in expectation.
Lemma 14
Suppose , we have .
Proof: Observe that for all such that , hence, adding those to does not affect its utility. Therefore, we focus on those groups with in the rest of the proof. Let denote the set containing the indexes all such groups. We assume to avoid trivial cases. We next show that it is safe to assume without loss of generality, i.e., the smallest group in contains at least two items. To prove this, we consider two cases, depending on the value of . If , then does not hold for any group such that . Hence, . If , then according to the group fairness constraints listed in P.A, we are not allowed to select any items from those groups with . Hence, removing all groups with size one from consideration does not affect the quality of the optimal solution.
With the assumption that , we are now ready to prove this lemma. Recall that for every , is a random set of size that is sampled from . It follows each item in appears in with probability at most
(15) |
We next give an upper bound of (15). Because we assume and , it holds that for all ,
(16) |
Moreover,
(17) | |||
(18) | |||
(19) | |||
(20) |
where the inequality is by the assumptions that and . It follows that
(21) |
where the first inequality is by (20) and the second inequality is by (16) and the assumption that . That is, each item in appears in with probability at most . Lemma 3 and the observation that is submodular imply that .
Suppose is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint, we have
where the first inequality is by Lemma 14. This, together with (as proved in Lemma 12), implies that . By the definition of function , we have
where the last equality is by the observation that P.A and P.C share the same value of the optimal solution. Hence, the following main theorem holds.
Theorem 4.2
Suppose is a -approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and , we have .
5 Conclusion
This paper presents a comprehensive investigation of the non-monotone submodular maximization problem under group fairness constraints. Our main contribution is the development of several constant-factor approximation algorithms for this problem. In the future, we plan to expand our research to explore alternative fairness metrics.
References
- [1] Biddle, D.: Adverse impact and test validation: A practitioner’s guide to valid and defensible employment testing. Routledge (2017)
- [2] Buchbinder, N., Feldman, M., Naor, J., Schwartz, R.: Submodular maximization with cardinality constraints. In: Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms. pp. 1433–1452. SIAM (2014)
- [3] Celis, E., Keswani, V., Straszak, D., Deshpande, A., Kathuria, T., Vishnoi, N.: Fair and diverse dpp-based data summarization. In: International Conference on Machine Learning. pp. 716–725. PMLR (2018)
- [4] Celis, L.E., Huang, L., Vishnoi, N.K.: Multiwinner voting with fairness constraints. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence. pp. 144–151 (2018)
- [5] Chierichetti, F., Kumar, R., Lattanzi, S., Vassilvtiskii, S.: Matroids, matchings, and fairness. In: The 22nd International Conference on Artificial Intelligence and Statistics. pp. 2212–2220. PMLR (2019)
- [6] Das, A., Kempe, D.: Algorithms for subset selection in linear regression. In: Proceedings of the fortieth annual ACM symposium on Theory of computing. pp. 45–54 (2008)
- [7] Dueck, D., Frey, B.J.: Non-metric affinity propagation for unsupervised image categorization. In: 2007 IEEE 11th International Conference on Computer Vision. pp. 1–8. IEEE (2007)
- [8] Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (2012)
- [9] El-Arini, K., Guestrin, C.: Beyond keyword search: discovering relevant scientific literature. In: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 439–447 (2011)
- [10] El Halabi, M., Mitrović, S., Norouzi-Fard, A., Tardos, J., Tarnawski, J.M.: Fairness in streaming submodular maximization: algorithms and hardness. Advances in Neural Information Processing Systems 33, 13609–13622 (2020)
- [11] Feldman, M., Naor, J., Schwartz, R.: A unified continuous greedy algorithm for submodular maximization. In: 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. pp. 570–579. IEEE (2011)
- [12] Golovin, D., Krause, A.: Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research 42, 427–486 (2011)
- [13] Gotovos, A., Karbasi, A., Krause, A.: Non-monotone adaptive submodular maximization. In: Twenty-Fourth International Joint Conference on Artificial Intelligence (2015)
- [14] Gu, S., Gao, C., Wu, W.: A binary search double greedy algorithm for non-monotone dr-submodular maximization. In: Algorithmic Aspects in Information and Management: 16th International Conference, AAIM 2022, Guangzhou, China, August 13–14, 2022, Proceedings. pp. 3–14. Springer (2022)
- [15] Joseph, M., Kearns, M., Morgenstern, J.H., Roth, A.: Fairness in learning: Classic and contextual bandits. Advances in neural information processing systems 29 (2016)
- [16] Mirzasoleiman, B., Badanidiyuru, A., Karbasi, A.: Fast constrained submodular maximization: Personalized data summarization. In: ICML. pp. 1358–1367 (2016)
- [17] Monroe, B.L.: Fully proportional representation. American Political Science Review 89(4), 925–940 (1995)
- [18] Shi, G., Gu, S., Wu, W.: k-submodular maximization with two kinds of constraints. Discrete Mathematics, Algorithms and Applications 13(04), 2150036 (2021)
- [19] Sipos, R., Swaminathan, A., Shivaswamy, P., Joachims, T.: Temporal corpus summarization using submodular word coverage. In: Proceedings of the 21st ACM international conference on Information and knowledge management. pp. 754–763 (2012)
- [20] Tang, S., Yuan, J.: Influence maximization with partial feedback. Operations Research Letters 48(1), 24–28 (2020)
- [21] Tang, S., Yuan, J.: Adaptive regularized submodular maximization. In: 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2021)
- [22] Tang, S., Yuan, J.: Group equility in adaptive submodular maximization. arXiv preprint arXiv:2207.03364 (2022)
- [23] Tang, S., Yuan, J.: Beyond submodularity: A unified framework of fair subset selection through randomization. In: Under Review (2023)
- [24] Tang, S., Yuan, J., Mensah-Boateng, T.: Achieving long-term fairness in submodular maximization through randomization. In: Under Review (2023)
- [25] Tsang, A., Wilder, B., Rice, E., Tambe, M., Zick, Y.: Group-fairness in influence maximization. arXiv preprint arXiv:1903.00967 (2019)
- [26] Zafar, M.B., Valera, I., Rogriguez, M.G., Gummadi, K.P.: Fairness constraints: Mechanisms for fair classification. In: Artificial intelligence and statistics. pp. 962–970. PMLR (2017)