
1: Department of Computer Science and Engineering, University of North Texas ([email protected])
2: Naveen Jindal School of Management, University of Texas at Dallas ([email protected])

Group Fairness in Non-monotone Submodular Maximization

Jing Yuan (1, ORCID 0000-0001-6407-834X) and Shaojie Tang (2, ORCID 0000-0001-9261-5210)
Abstract

Maximizing a submodular function has a wide range of applications in machine learning and data mining. One such application is data summarization, whose goal is to select a small set of representative and diverse data items from a large dataset. However, data items might have sensitive attributes such as race or gender; in this setting, it is important to design fairness-aware algorithms that mitigate potential algorithmic bias causing over- or under-representation of particular groups. Motivated by this, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a set of items that maximizes a non-monotone submodular function while ensuring that the number of selected items from each group is proportionate to its size, to the extent specified by the decision maker. We develop the first constant-factor approximation algorithms for this problem. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items.

1 Introduction

Submodular functions are a broad class of functions that satisfy the natural diminishing-returns property: adding an item to a larger existing subset is less beneficial. A wide range of machine learning and AI problems, including exemplar-based clustering [7], feature selection [6], active learning [12], influence maximization in social networks [20], recommender systems [9], and diverse data summarization [19], can be formulated as submodular maximization problems. This problem, whose goal is to select a set of items that maximizes a submodular function, and its variants [14, 18] have been extensively studied in the literature subject to various constraints, including cardinality, matroid, and knapsack-type restrictions.

We notice that in practice, items or individuals are often associated with different groups based on attributes such as gender, race, age, or religion. Existing algorithms might exhibit bias if left unchecked; for example, some groups might be over- or under-represented in the final selected subset. Therefore, it becomes increasingly important to design fairness-aware algorithms to mitigate such issues. Towards this end, we propose and study the classic non-monotone submodular maximization problem subject to novel group fairness constraints. Our goal is to select a balanced set of items that maximizes a non-monotone submodular function, such that the ratio of selected items from each group to its size lies within a desired range, as determined by the decision maker. Non-monotone submodular maximization has multiple compelling applications, such as feature selection [6], profit maximization [21], maximum cut [13], and data summarization [16]. Formally, we consider a set $V$ of items (e.g., datapoints) which is partitioned into $m$ groups: $V_1, V_2, \cdots, V_m$, such that items from the same group share the same attribute (e.g., gender). We say that a set $S\subseteq V$ of items is $(\alpha,\beta)$-fair if for all groups $i\in[m]$, it holds that $\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i| \leq \lfloor\beta|V_i|\rfloor$. Our model allows the decision maker to specify the desired level of fairness by setting appropriate values of $\alpha$ and $\beta$. Specifically, setting $\alpha=\beta$ leads to the highest level of fairness, in that the number of selected items from each group is strictly proportional to its size; if we set $\alpha=0$ and $\beta=1$, there are no fairness constraints. Our goal is to find an $(\alpha,\beta)$-fair subset of items that maximizes a submodular objective function.
Our definition of fairness, which balances solutions with respect to sensitive attributes, has gained widespread acceptance in the academic community, as demonstrated by its frequent use in previous studies [4, 10, 5]. Several other notions of fairness can be captured by our formulation, such as the $80\%$-rule [1], statistical parity [8], and proportional representation [17].
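For concreteness, the $(\alpha,\beta)$-fairness condition can be checked directly in a few lines of code; the helper `is_fair` and the toy groups below are illustrative assumptions, not part of the formal model.

```python
from math import floor

def is_fair(S, groups, alpha, beta):
    """Check whether selection S is (alpha, beta)-fair with respect to a
    partition of the ground set into groups (dict: group id -> item set)."""
    S = set(S)
    for items in groups.values():
        k = len(S & items)
        if not (floor(alpha * len(items)) <= k <= floor(beta * len(items))):
            return False
    return True

# Example: two groups of sizes 4 and 2; alpha = beta = 0.5 forces exactly
# 2 items from the first group and 1 from the second.
groups = {"a": {1, 2, 3, 4}, "b": {5, 6}}
print(is_fair({1, 2, 5}, groups, 0.5, 0.5))  # True
print(is_fair({1, 2, 3}, groups, 0.5, 0.5))  # False
```

Setting `alpha=0, beta=1` makes every subset fair, matching the unconstrained case described above.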

1.1 Our Contributions

  • Our study breaks new ground by examining the classic (non-monotone) submodular maximization problem under $(\alpha,\beta)$-fairness constraints. Our model offers flexibility in capturing varying degrees of fairness, as desired by the decision maker, by adjusting the values of $\alpha$ and $\beta$.

  • We develop the first constant-factor approximation algorithms for this problem. We observe that the parameter $\alpha$ is closely linked to the complexity of solving the $(\alpha,\beta)$-fair non-monotone submodular maximization problem. In particular, when $\alpha\leq 1/2$, we design a $\frac{\gamma}{2}$-approximation algorithm, and when $\alpha>1/2$, we develop a $\frac{\gamma}{3}$-approximation algorithm, where $\gamma$ is the approximation ratio of the current best algorithm for matroid-constrained submodular maximization. We also extend the basic model to incorporate an additional global size constraint on the total number of selected items, and provide constant-factor approximation algorithms for this extended model.

1.2 Additional Related Works

In recent years, there has been a growing awareness of the importance of fair and unbiased decision-making systems. This has led to increased interest in the development of fair algorithms in a wide range of applications, including influence maximization [25], classification [26], voting [4], bandit learning [15], and data summarization [3]. Depending on the specific context and the type of bias one is trying to mitigate, existing studies adopt different metrics of fairness. This can lead to different optimization problems and different fair algorithms tailored to the specific requirements of the application. Our notion of fairness is general enough to capture many existing notions, such as the $80\%$-rule [1], statistical parity [8], and proportional representation [17]. Unlike most existing studies on fair submodular maximization [4], whose objective is to maximize a monotone submodular function, [10] develop fair algorithms in the context of streaming non-monotone submodular maximization. Their proposed notion of fairness is more general than ours, leading to a more challenging optimization problem that does not admit any constant-factor approximation algorithm. [24, 23] aim to develop randomized algorithms that satisfy average fairness constraints. Very recently, [22] extended the study of fair algorithms to a more complicated adaptive setting and proposed a new metric of fairness called group equality.

2 Preliminaries and Problem Statement

We consider a set $V$ of $n$ items. There is a non-negative submodular utility function $f: 2^V \rightarrow \mathbb{R}_+$. Denote by $f(e\mid S)$ the marginal utility of $e\in V$ on top of $S\subseteq V$, i.e., $f(e\mid S)=f(\{e\}\cup S)-f(S)$. We say a function $f: 2^V \rightarrow \mathbb{R}_+$ is submodular if for any two sets $X, Y\subseteq V$ such that $X\subseteq Y$ and any item $e\in V\setminus Y$,

f(eY)f(eX).f(e\mid Y)\leq f(e\mid X).

Assume $V$ is partitioned into $m$ disjoint groups: $V_1, V_2, \cdots, V_m$. We assume that there are given lower and upper bounds on the fraction of items of each group that must be contained in a feasible solution. These two bounds, namely $\alpha$ and $\beta$, represent group fairness constraints. The $(\alpha,\beta)$-fair submodular maximization problem (labelled as P.0) can be written as follows.

P.0: $\max f(S)$
subject to:
$\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i| \leq \lfloor\beta|V_i|\rfloor, \forall i\in[m]$.

One can adjust the degree of group fairness in a feasible solution by choosing appropriate values of $\alpha$ and $\beta$. Specifically, strict group fairness is achieved at $\alpha=\beta$, in which case every feasible solution must contain the same $\alpha$ fraction of items from each group; if we set $\alpha=0$ and $\beta=1$, then there are no group fairness constraints. We next present a hardness result for this problem.

Lemma 1

Problem P.0 is NP-hard.

Proof: We prove this through a reduction from the classic cardinality-constrained submodular maximization problem, which we define below.

Definition 1

The input of the cardinality-constrained submodular maximization problem is a set of items $U$, a submodular function $h: 2^U \rightarrow \mathbb{R}_+$, and a cardinality constraint $b$; we aim to select a set of items $S\subseteq U$ such that $h(S)$ is maximized and $|S|\leq b$.

We next show a reduction from the cardinality-constrained submodular maximization problem to P.0. Given any instance of the cardinality-constrained problem, we construct a corresponding instance of P.0 as follows: let $V=U$, $f=h$, assume there is only one group, i.e., $V=V_1$, and let $\alpha=0$, $\beta=b/|U|$. It is easy to verify that these two instances are equivalent. This finishes the proof of the reduction. $\Box$

3 Non-monotone Submodular Maximization with Group Fairness

Warm-up: Monotone Utility Function

If $f$ is monotone and submodular, we can easily confirm that P.0 simplifies to P.1 by removing the lower bound constraints. This is because, in this case, increasing the size of a solution by adding more items never decreases its utility. As a result, the lower bound constraints in P.0, which state that $\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i|$ for all $i\in[m]$, can always be met by adding sufficient items to the solution.

P.1: $\max f(S)$
subject to:
$|S\cap V_i| \leq \lfloor\beta|V_i|\rfloor, \forall i\in[m]$.

Since $f$ is a monotone submodular function, P.1 is the well-known problem of maximizing a monotone submodular function subject to a matroid constraint. (A matroid is a pair $\mathcal{M}=(V,\mathcal{I})$ where $\mathcal{I}\subseteq 2^V$ such that: 1. $\forall Y\in\mathcal{I}, X\subseteq Y \rightarrow X\in\mathcal{I}$; 2. $\forall X, Y\in\mathcal{I}$ with $|X|<|Y|$, $\exists e\in Y\setminus X$ such that $X\cup\{e\}\in\mathcal{I}$.) This problem admits a $(1-1/e)$-approximation algorithm.

We then proceed to develop approximation algorithms for non-monotone functions. We examine two scenarios: when $\alpha\leq 1/2$ and when $\alpha>1/2$.

3.1 The case when $\alpha\leq 1/2$

In the scenario where $\alpha\leq 1/2$, we use the solution of P.1 as a building block to construct our algorithm. First, it is easy to verify that P.1 is a relaxed version of P.0, with the lower bound constraints $\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i|$ in P.0 removed. Because $f$ is a submodular function, P.1 is a classic problem of maximizing a (non-monotone) submodular function subject to a matroid constraint. There exist effective solutions for P.1. Now we are ready to present the design of our algorithm.

  1. Apply the state-of-the-art algorithm $\mathcal{A}$ for matroid-constrained submodular maximization to solve P.1 and obtain a solution $A^{P.1}$.

  2. Note that $A^{P.1}$ is not necessarily a feasible solution to P.0 because it might violate the lower bound constraints $\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i|$ for some groups. To make it feasible, we add additional items to $A^{P.1}$. Specifically, for each group $i\in[m]$ such that $|A^{P.1}\cap V_i| < \lfloor\alpha|V_i|\rfloor$, our algorithm selects a backup set $B_i$ of size $\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|$ by randomly sampling that many items from $V_i\setminus A^{P.1}$. Define $B_i=\emptyset$ if $|A^{P.1}\cap V_i| \geq \lfloor\alpha|V_i|\rfloor$.

  3. At the end, add $\cup_{i\in[m]}B_i$ to $A^{P.1}$ to build the final solution $A^{approx}$, i.e., $A^{approx} = A^{P.1}\cup(\cup_{i\in[m]}B_i)$.

Algorithm 1 Approximation Algorithm for P.0 when $\alpha\leq 1/2$
1:  Apply $\mathcal{A}$ to solve P.1 and obtain a solution $A^{P.1}$
2:  for every group $i\in[m]$ do
3:     if $|A^{P.1}\cap V_i| < \lfloor\alpha|V_i|\rfloor$ then
4:        select a random backup set $B_i$ of size $\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|$ from $V_i\setminus A^{P.1}$
5:     else
6:        $B_i\leftarrow\emptyset$
7:  $A^{approx}\leftarrow A^{P.1}\cup(\cup_{i\in[m]}B_i)$
8:  return  $A^{approx}$
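The two phases above (solve the relaxation, then top up deficient groups with random backup sets) can be sketched in Python. The greedy subroutine below is only a placeholder for the matroid-constrained algorithm $\mathcal{A}$ and does not carry $\mathcal{A}$'s approximation guarantee; all names and the toy instance in the test are illustrative assumptions.

```python
import random
from math import floor

def algorithm1(groups, f, alpha, beta, solve_P1=None, seed=0):
    """Sketch of Algorithm 1. `groups` maps group ids to disjoint item
    sets; `solve_P1` stands in for the paper's algorithm A applied to the
    relaxation P.1 (the greedy fallback is a placeholder only)."""
    rng = random.Random(seed)

    def greedy_P1():
        # Placeholder: greedily add items with positive marginal gain while
        # respecting the per-group upper bounds floor(beta * |Vi|) of P.1.
        S = set()
        while True:
            cand = [e for Vi in groups.values()
                    if len(S & Vi) < floor(beta * len(Vi))
                    for e in Vi - S if f(S | {e}) - f(S) > 0]
            if not cand:
                return S
            S.add(max(cand, key=lambda e: f(S | {e}) - f(S)))

    A = set((solve_P1 or greedy_P1)())
    for Vi in groups.values():
        # Top up each deficient group to its lower bound with a random backup set.
        deficit = floor(alpha * len(Vi)) - len(A & Vi)
        if deficit > 0:
            A |= set(rng.sample(sorted(Vi - A), deficit))
    return A
```

The returned set satisfies both the upper bounds (inherited from the P.1 solution) and the lower bounds (enforced by the backup sets), mirroring Lemma 2.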

The pseudocode of this approximation algorithm is given as Algorithm 1. Observe that $A^{P.1}$ is a feasible solution to P.1; hence, $A^{P.1}$ satisfies the upper bound constraints of P.1 and thus of P.0, i.e., $|S\cap V_i| \leq \lfloor\beta|V_i|\rfloor, \forall i\in[m]$. By the construction of $B_i$, adding $\cup_{i\in[m]}B_i$ to $A^{P.1}$ does not violate the upper bound constraints, because items are supplemented only to those groups that do not satisfy the lower bound constraints of P.0, i.e., $\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i|$. Moreover, adding $\cup_{i\in[m]}B_i$ to $A^{P.1}$ makes it satisfy the lower bound constraints of P.0. Hence, $A^{approx}$ is a feasible solution to P.0.

Lemma 2

$A^{approx}$ is a feasible solution to P.0.

3.1.1 Performance Analysis

We next analyze the performance of Algorithm 1. We first introduce a useful lemma from [2].

Lemma 3

If $f$ is submodular and $S$ is a random subset of $V$ such that each item in $V$ is contained in $S$ with probability at most $p$, then $\mathbb{E}_S[f(S)] \geq (1-p)f(\emptyset)$.

The next lemma states that if $A^{P.1}$ is a $\gamma$-approximate solution of P.1, then $f(A^{P.1})$ is at least a $\gamma$ fraction of the optimum of P.0.

Lemma 4

Suppose $\mathcal{A}$ is a $\gamma$-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint. Let $OPT$ denote the optimal solution of P.0; we have $f(A^{P.1}) \geq \gamma f(OPT)$.

Proof: Because $\mathcal{A}$ is a $\gamma$-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint, we have $f(A^{P.1}) \geq \gamma f(O^{P.1})$, where $O^{P.1}$ denotes the optimal solution of P.1. Moreover, because P.1 is a relaxed version of P.0, we have $f(O^{P.1}) \geq f(OPT)$. Hence, $f(A^{P.1}) \geq \gamma f(OPT)$. $\Box$

We next show that augmenting $A^{P.1}$ with items from the random set $\cup_{i\in[m]}B_i$ reduces its utility by at most a factor of $1/2$ in expectation. Here the expectation is taken over the distribution of $\cup_{i\in[m]}B_i$.

Lemma 5

Suppose $\alpha\leq 1/2$; we have $\mathbb{E}_{A^{approx}}[f(A^{approx})] \geq \frac{1}{2}f(A^{P.1})$, where $A^{approx} = A^{P.1}\cup(\cup_{i\in[m]}B_i)$.

Proof: Recall that $B_i=\emptyset$ for all $i\in[m]$ such that $|A^{P.1}\cap V_i| \geq \lfloor\alpha|V_i|\rfloor$; hence, adding those $B_i$ to $A^{P.1}$ does not affect its utility. In the rest of the proof we focus on those $B_i$ with

$|A^{P.1}\cap V_i| < \lfloor\alpha|V_i|\rfloor$.   (1)

Recall that for every $i\in[m]$ satisfying (1), $B_i$ is a random set of size $\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|$ sampled from $V_i\setminus A^{P.1}$. It follows that each item in $V_i\setminus A^{P.1}$ is contained in $B_i$ with probability at most

$\frac{\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|}{|V_i\setminus A^{P.1}|}$.   (2)

We next give an upper bound on (2). First,

$\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i| \leq \lfloor\alpha|V_i|\rfloor \leq |V_i|/2$,   (3)

where the second inequality is by the assumption that $\alpha\leq 1/2$. Moreover,

$|V_i\setminus A^{P.1}| = |V_i| - |A^{P.1}\cap V_i| = (\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|) + (|V_i| - \lfloor\alpha|V_i|\rfloor)$   (4)
$\geq (\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|) + |V_i|/2$,   (5)

where the inequality is by the assumption that $\alpha\leq 1/2$.

Hence,

$(2) \leq \frac{\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|}{(\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i|) + |V_i|/2} \leq \frac{|V_i|/2}{|V_i|/2 + |V_i|/2} = 1/2$,   (6)

where the first inequality is by (5), and the second inequality is by (3) and the assumption that $\lfloor\alpha|V_i|\rfloor - |A^{P.1}\cap V_i| > 0$ (listed in (1)). That is, each item in $V_i\setminus A^{P.1}$ is contained in $B_i$ with probability at most $1/2$. It follows that each item in $V\setminus A^{P.1}$ is contained in $\cup_{i\in[m]}B_i$ with probability at most $1/2$. Moreover, Lemma 3 states that if $f$ is submodular and $S$ is a random subset of $V$ such that each item in $V$ appears in $S$ with probability at most $p$, then $\mathbb{E}_S[f(S)] \geq (1-p)f(\emptyset)$. With the above discussion and the observation that $f(A^{P.1}\cup\cdot)$ is submodular, it holds that $\mathbb{E}_{A^{approx}}[f(A^{approx})] = \mathbb{E}_{\cup_{i\in[m]}B_i}[f(A^{P.1}\cup(\cup_{i\in[m]}B_i))] \geq (1-\frac{1}{2})f(A^{P.1}\cup\emptyset) = \frac{1}{2}f(A^{P.1})$. $\Box$
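The key sampling bound in this proof can be sanity-checked numerically; the group size, overlap, and $\alpha$ below are assumed values for illustration only.

```python
import random
from math import floor

# Empirically check the key step of Lemma 5: when alpha <= 1/2, sampling a
# backup set of size floor(alpha*|Vi|) - |A ∩ Vi| from Vi \ A includes any
# fixed item with probability at most 1/2. Sizes below are assumptions.
rng = random.Random(42)
n_i, alpha, in_A = 7, 0.5, 1          # |Vi| = 7, |A^{P.1} ∩ Vi| = 1
pool = list(range(n_i - in_A))        # stands in for Vi \ A^{P.1}
k = floor(alpha * n_i) - in_A         # backup set size = 3 - 1 = 2
hits = sum(0 in rng.sample(pool, k) for _ in range(100_000))
p = hits / 100_000
print(p)  # ≈ k/|pool| = 2/6 ≈ 0.33, below the 1/2 bound of (6)
assert p <= 0.5
```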

Our main theorem below follows from Lemmas 4 and 5.

Theorem 3.1

Suppose $\mathcal{A}$ is a $\gamma$-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and $\alpha\leq 1/2$; we have $\mathbb{E}_{A^{approx}}[f(A^{approx})] \geq \frac{\gamma}{2}f(OPT)$.

One option for $\mathcal{A}$ is the continuous double greedy algorithm proposed in [11], which gives a $(1/e - o(1))$-approximate solution, that is, $\gamma \geq 1/e - o(1)$. This, together with Theorem 3.1, implies that $\mathbb{E}_{A^{approx}}[f(A^{approx})] \geq \frac{1/e - o(1)}{2}f(OPT)$.

3.2 The case when $\alpha>1/2$

We next consider the case when $\alpha>1/2$. We first introduce a new utility function $g: 2^V \rightarrow \mathbb{R}_+$ as below:

$g(\cdot) = f(V\setminus\cdot)$.   (7)
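This complement construction can be expressed directly in code; a minimal sketch, using an assumed toy cut function (cut functions are a standard example of non-monotone submodular functions, not specific to this paper):

```python
def make_complement(f, V):
    """Given f on ground set V, return g(S) = f(V \\ S), as in Eq. (7).
    The complement of a submodular function is again submodular."""
    V = frozenset(V)
    return lambda S: f(V - set(S))

# Illustration with a toy cut function on the path graph 1 - 2 - 3.
edges = [(1, 2), (2, 3)]
def cut(S):
    return sum((u in S) != (v in S) for u, v in edges)

g = make_complement(cut, {1, 2, 3})
print(cut({2}), g({1, 3}))  # both equal 2: g evaluates f on the complement
```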

We first present a well-known result, which states that the complement of a submodular function remains submodular.

Lemma 6

If $f$ is submodular, then $g$ must be submodular.

With utility function $g$, we present a new optimization problem P.2 as below:

P.2: $\max g(S)$
subject to:
$|V_i| - \lfloor\beta|V_i|\rfloor \leq |S\cap V_i| \leq |V_i| - \lfloor\alpha|V_i|\rfloor, \forall i\in[m]$.

P.2 is a flipped version of the original problem P.0 in the sense that if there is a $\gamma$-approximate solution $A^{P.2}$ to P.2, it can be easily verified that $V\setminus A^{P.2}$ is a $\gamma$-approximate solution to P.0. As a result, we focus on solving P.2 for the rest of this section.

To solve P.2, we introduce another problem (labeled as P.3) as follows:

P.3: $\max g(S)$
subject to:
$|S\cap V_i| \leq |V_i| - \lfloor\alpha|V_i|\rfloor, \forall i\in[m]$.

P.3 is a relaxed version of P.2, with the lower bound constraints $|V_i| - \lfloor\beta|V_i|\rfloor \leq |S\cap V_i|$ in P.2 removed. Because $g$ is a submodular function, P.3 is a classic problem of maximizing a submodular function subject to a matroid constraint. Now we are ready to present the design of our algorithm.

  1. Apply the state-of-the-art algorithm $\mathcal{A}$ for matroid-constrained submodular maximization to solve P.3 and obtain a solution $A^{P.3}$.

  2. Note that $A^{P.3}$ is not necessarily a feasible solution to P.2 because it might violate the lower bound constraints $|V_i| - \lfloor\beta|V_i|\rfloor \leq |S\cap V_i|$ for some groups. We add additional items to $A^{P.3}$ to make it feasible. Specifically, for each group $i\in[m]$ such that $|A^{P.3}\cap V_i| < |V_i| - \lfloor\beta|V_i|\rfloor$, our algorithm selects a backup set $B_i$ of size $|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|$ by randomly sampling that many items from $V_i\setminus A^{P.3}$. Define $B_i=\emptyset$ if $|A^{P.3}\cap V_i| \geq |V_i| - \lfloor\beta|V_i|\rfloor$.

  3. Add $\cup_{i\in[m]}B_i$ to $A^{P.3}$ to build $A^{approx}$, i.e., $A^{approx} = A^{P.3}\cup(\cup_{i\in[m]}B_i)$. Return $V\setminus A^{approx}$ as the final solution.

Algorithm 2 Approximation Algorithm for P.0 when $\alpha>1/2$
1:  Apply $\mathcal{A}$ to solve P.3 and obtain a solution $A^{P.3}$
2:  for every group $i\in[m]$ do
3:     if $|A^{P.3}\cap V_i| < |V_i| - \lfloor\beta|V_i|\rfloor$ then
4:        select a random backup set $B_i$ of size $|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|$ from $V_i\setminus A^{P.3}$
5:     else
6:        $B_i\leftarrow\emptyset$
7:  $A^{approx}\leftarrow A^{P.3}\cup(\cup_{i\in[m]}B_i)$
8:  return  $V\setminus A^{approx}$
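The complement trick above can be sketched as follows; `solve_P3` is a caller-supplied stand-in for $\mathcal{A}$ applied to P.3, and the toy instance is an assumption for illustration.

```python
import random
from math import floor

def algorithm2(groups, alpha, beta, solve_P3, seed=0):
    """Sketch of Algorithm 2 for alpha > 1/2: work on the complement
    utility g(S) = f(V \\ S). `solve_P3` must return a set that is
    feasible for P.3, i.e., |A ∩ Vi| <= |Vi| - floor(alpha * |Vi|)
    for every group Vi."""
    rng = random.Random(seed)
    V = set().union(*groups.values())
    A = set(solve_P3())
    for Vi in groups.values():
        # Top up to the flipped lower bound |Vi| - floor(beta * |Vi|) of P.2.
        deficit = (len(Vi) - floor(beta * len(Vi))) - len(A & Vi)
        if deficit > 0:
            A |= set(rng.sample(sorted(Vi - A), deficit))
    return V - A  # complementing maps the P.2 solution back to P.0

# Toy run: one group of four items, alpha = 0.6, beta = 1.0, and a trivial
# stand-in for A that returns {1} (feasible for P.3 since 1 <= 4 - 2).
groups = {"a": {1, 2, 3, 4}}
S = algorithm2(groups, 0.6, 1.0, lambda: {1})
print(S)  # {2, 3, 4}: satisfies floor(0.6*4) = 2 <= |S| <= floor(1.0*4) = 4
```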

The pseudocode of this approximation algorithm is given as Algorithm 2. Observe that $A^{P.3}$ satisfies the upper bound constraints of P.3, and hence of P.2, because $A^{P.3}$ is a feasible solution to P.3. By the construction of $B_i$, adding $\cup_{i\in[m]}B_i$ to $A^{P.3}$ does not violate the upper bound constraints, because items are added only to meet the lower bound constraints of P.2 when necessary. Moreover, adding $\cup_{i\in[m]}B_i$ to $A^{P.3}$ makes it satisfy the lower bound constraints of P.2. Hence, $A^{approx}$ is a feasible solution to P.2.

Lemma 7

$A^{approx}$ is a feasible solution to P.2.

3.2.1 Performance Analysis

We first introduce a technical lemma which states that if $A^{P.3}$ is a $\gamma$-approximate solution of P.3, then $g(A^{P.3})$ is at least a $\gamma$ fraction of the optimum of P.2. This lemma follows from the observation that P.3 is a relaxation of P.2.

Lemma 8

Suppose $\mathcal{A}$ is a $\gamma$-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint. Let $O^{P.2}$ denote the optimal solution of P.2; it holds that $g(A^{P.3}) \geq \gamma g(O^{P.2})$.

We next show that augmenting $A^{P.3}$ with items from $\cup_{i\in[m]}B_i$ reduces its utility by at most a factor of $2/3$ in expectation.

Lemma 9

Suppose $\alpha>1/2$; then $\mathbb{E}_{A^{approx}}[g(A^{approx})] \geq \frac{1}{3}g(A^{P.3})$, where $A^{approx} = A^{P.3}\cup(\cup_{i\in[m]}B_i)$.

Proof: Recall that $B_i=\emptyset$ for all $i\in[m]$ such that $|A^{P.3}\cap V_i| \geq |V_i| - \lfloor\beta|V_i|\rfloor$; hence, adding those $B_i$ to $A^{P.3}$ does not affect its utility. Therefore, we focus on those groups $i\in[m]$ with $|A^{P.3}\cap V_i| < |V_i| - \lfloor\beta|V_i|\rfloor$ in the rest of the proof. Let $M = \{i \mid |A^{P.3}\cap V_i| < |V_i| - \lfloor\beta|V_i|\rfloor\}$ denote the set of indexes of all such groups, and assume $M\neq\emptyset$ to avoid trivial cases. We next show that it is safe to assume $\min_{i\in M}|V_i| > 1$ without loss of generality, i.e., the smallest group in $M$ contains at least two items. To prove this, we consider two cases, depending on the value of $\beta$. If $\beta=1$, then $|A^{P.3}\cap V_i| < |V_i| - \lfloor\beta|V_i|\rfloor$ does not hold for any group $i$ with $|V_i|=1$; that is, $\min_{i\in M}|V_i| > 1$. If $\beta<1$, then according to the group fairness constraints listed in P.0, we are not allowed to select any items from groups with $|V_i|=1$. Hence, removing all groups of size one from consideration does not affect the quality of the optimal solution.

With the assumption that $\min_{i\in M}|V_i| > 1$, we are now in position to prove this lemma. Recall that for every $i\in M$, $B_i$ is a random set of size $|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|$ sampled from $V_i\setminus A^{P.3}$. It follows that each item in $V_i\setminus A^{P.3}$ appears in $B_i$ with probability at most

$\frac{|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|}{|V_i\setminus A^{P.3}|}$.   (8)

We next give an upper bound on (8). Because we assume $\alpha>1/2$, we have $\beta \geq \alpha > 1/2$. This, together with the assumption that $\min_{i\in M}|V_i| > 1$, implies that for all $i\in M$,

$|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i| \leq |V_i| - \lfloor\beta|V_i|\rfloor \leq 2|V_i|/3$.   (9)

Moreover,

$|V_i\setminus A^{P.3}| = |V_i| - |A^{P.3}\cap V_i|$   (10)
$= (|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|) + (|V_i| - (|V_i| - \lfloor\beta|V_i|\rfloor))$   (11)
$= (|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|) + \lfloor\beta|V_i|\rfloor$   (12)
$\geq (|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|) + |V_i|/3$,   (13)

where the inequality is by the observations that $\beta > 1/2$ and $|V_i| > 1$. It follows that

$(8) \leq \frac{|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|}{(|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i|) + |V_i|/3} \leq \frac{2|V_i|/3}{2|V_i|/3 + |V_i|/3} = 2/3$,   (14)

where the first inequality is by (13), and the second inequality is by (9) and the assumption that $|V_i| - \lfloor\beta|V_i|\rfloor - |A^{P.3}\cap V_i| > 0$. That is, each item in $V_i\setminus A^{P.3}$ appears in $B_i$ with probability at most $2/3$. Lemma 3 and the observation that $g(A^{P.3}\cup\cdot)$ is submodular imply that $\mathbb{E}_{A^{approx}}[g(A^{approx})] = \mathbb{E}_{\cup_{i\in[m]}B_i}[g(A^{P.3}\cup(\cup_{i\in[m]}B_i))] \geq (1-\frac{2}{3})g(A^{P.3}\cup\emptyset) = \frac{1}{3}g(A^{P.3})$. $\Box$

Lemma 8 and Lemma 9 together imply that

$\mathbb{E}_{A^{approx}}[g(A^{approx})] \geq \frac{1}{3}g(A^{P.3}) \geq \frac{\gamma}{3}g(O^{P.2})$.

By the definition of function gg, we have

$\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})] = \mathbb{E}_{A^{approx}}[g(A^{approx})] \geq \frac{\gamma}{3}g(O^{P.2}) = \frac{\gamma}{3}f(OPT)$,

where the last equality is by the observation that P.2 and P.0 share the same value of the optimal solution. Hence, the following main theorem holds.

Theorem 3.2

Suppose $\mathcal{A}$ is a $\gamma$-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and $\alpha>1/2$; we have $\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})] \geq \frac{\gamma}{3}f(OPT)$.

If we adopt the continuous double greedy algorithm [11] as $\mathcal{A}$ to compute $A^{P.3}$, it gives a $(1/e - o(1))$-approximate solution, that is, $\gamma \geq 1/e - o(1)$. This, together with Theorem 3.2, implies that $\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})] \geq \frac{1/e - o(1)}{3}f(OPT)$.

4 Extension: Incorporating Global Cardinality Constraint

In this section, we extend P.0 to incorporate a global cardinality constraint. A formal definition of this problem is listed as P.A. Our objective is to find the best $S$ subject to a group fairness constraint $(\alpha,\beta)$ and an additional cardinality constraint $c$.

P.A: $\max f(S)$
subject to:
$\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i| \leq \lfloor\beta|V_i|\rfloor, \forall i\in[m]$;
$|S|\leq c$.

4.1 The case when $\alpha\leq 1/2$

We first consider the case when $\alpha\leq 1/2$. We introduce a new optimization problem P.B as follows:

P.B: $\max f(S)$
subject to:
$|S\cap V_i| \leq \lfloor\beta|V_i|\rfloor, \forall i\in[m]$;
$\sum_{i\in[m]}\max\{\lfloor\alpha|V_i|\rfloor, |S\cap V_i|\} \leq c$.

It is easy to verify that P.B is a relaxation of P.A in the sense that every feasible solution to P.A is also a feasible solution to P.B. Hence, we have the following lemma.
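Since the feasible sets of P.B form a matroid (as noted below, shown in [10]), membership can be tested with a direct independence oracle. The helper below is an illustrative sketch; the function name and toy groups are assumptions.

```python
from math import floor

def pb_independent(S, groups, alpha, beta, c):
    """Independence oracle for the matroid underlying P.B: S must respect
    every per-group upper bound floor(beta*|Vi|), and the per-group counts,
    rounded up to the lower bounds floor(alpha*|Vi|), must sum to at most c."""
    S = set(S)
    total = 0
    for Vi in groups.values():
        k = len(S & Vi)
        if k > floor(beta * len(Vi)):
            return False
        total += max(floor(alpha * len(Vi)), k)
    return total <= c

groups = {"a": {1, 2, 3, 4}, "b": {5, 6}}
print(pb_independent({1, 2}, groups, 0.5, 1.0, 3))     # True:  max(2,2)+max(1,0) = 3
print(pb_independent({1, 2, 3}, groups, 0.5, 1.0, 3))  # False: max(2,3)+max(1,0) = 4 > 3
```

The second constraint reserves room for the lower-bound top-ups, which is why the backup sets added later never break the global budget $c$.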

Lemma 10

Let $OPT$ denote the optimal solution of P.A and $O^{P.B}$ the optimal solution of P.B; we have $f(O^{P.B}) \geq f(OPT)$.

It has been shown that the constraints in P.B give rise to a matroid [10]. This, together with the assumption that $f$ is a submodular function, implies that P.B is a classic problem of maximizing a submodular function subject to a matroid constraint. Now we are ready to present the design of our algorithm.

  1. Apply the state-of-the-art algorithm $\mathcal{A}$ for matroid-constrained submodular maximization to solve P.B and obtain a solution $A^{P.B}$.

  2. Note that $A^{P.B}$ is not necessarily a feasible solution to P.A because it might violate the lower bound constraints $\lfloor\alpha|V_i|\rfloor \leq |S\cap V_i|$ for some groups. To make it feasible, we add additional items to $A^{P.B}$. Specifically, for each group $i\in[m]$ such that $|A^{P.B}\cap V_i| < \lfloor\alpha|V_i|\rfloor$, our algorithm selects a backup set $B_i$ of size $\lfloor\alpha|V_i|\rfloor - |A^{P.B}\cap V_i|$ by randomly sampling that many items from $V_i\setminus A^{P.B}$. Define $B_i=\emptyset$ if $|A^{P.B}\cap V_i| \geq \lfloor\alpha|V_i|\rfloor$.

  3. At the end, add $\cup_{i\in[m]}B_i$ to $A^{P.B}$ to build the final solution $A^{approx}$, i.e., $A^{approx} = A^{P.B}\cup(\cup_{i\in[m]}B_i)$.

Algorithm 3 Approximation Algorithm for P.A when $\alpha\leq 1/2$
1:  Apply $\mathcal{A}$ to solve P.B and obtain a solution $A^{P.B}$
2:  for every group $i\in[m]$ do
3:     if $|A^{P.B}\cap V_i| < \lfloor\alpha|V_i|\rfloor$ then
4:        select a random backup set $B_i$ of size $\lfloor\alpha|V_i|\rfloor - |A^{P.B}\cap V_i|$ from $V_i\setminus A^{P.B}$
5:     else
6:        $B_i\leftarrow\emptyset$
7:  $A^{approx}\leftarrow A^{P.B}\cup(\cup_{i\in[m]}B_i)$
8:  return  $A^{approx}$

The pseudocode of this approximation algorithm is given as Algorithm 3. Observe that $A^{P.B}$ satisfies the group-wise upper bound constraints of P.A because $A^{P.B}$ meets the first set of constraints in P.B. By the construction of $B_i$, adding $\cup_{i\in[m]}B_i$ to $A^{P.B}$ does not violate the group-wise upper bound constraints of P.A, because items are added only to meet the lower bound constraints of P.A when necessary. Moreover, adding $\cup_{i\in[m]}B_i$ to $A^{P.B}$ does not violate the global cardinality constraint of P.A because $A^{P.B}$ meets the second set of constraints in P.B. Finally, it is easy to verify that adding $\cup_{i\in[m]}B_i$ to $A^{P.B}$ makes it satisfy the lower bound constraints of P.A. Hence, $A^{approx}$ is a feasible solution to P.A.

Lemma 11

AapproxA^{approx} is a feasible solution to P.A.

Following the same argument as in the proof of Theorem 3.1, we obtain the following theorem.

Theorem 4.1

Suppose \mathcal{A} is a \gamma-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and \alpha\leq 1/2. Then we have \mathbb{E}_{A^{approx}}[f(A^{approx})]\geq\frac{\gamma}{2}f(OPT).

4.2 The case when α>1/2\alpha>1/2

We next consider the case when \alpha>1/2. Recall that g(\cdot)=f(V\setminus\cdot). We first present a flipped formulation of P.A as below:

P.C maxg(S)\max g(S)
subject to:
|Vi|β|Vi||SVi||Vi|α|Vi|,i[m]|V_{i}|-\lfloor\beta|V_{i}|\rfloor\leq|S\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V_{i}|\rfloor,\forall i\in[m].
|S|nc|S|\geq n-c.

Given a \gamma-approximate solution A^{P.C} to P.C, it is easy to verify that V\setminus A^{P.C} is a \gamma-approximate solution to P.A. Hence, we focus on solving P.C in the rest of this section. We first introduce a new optimization problem (labeled as P.D) as follows:

P.D maxg(S)\max g(S)
subject to:
|SVi||Vi|α|Vi|,i[m]|S\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V_{i}|\rfloor,\forall i\in[m].

P.D is a relaxed version of P.C in which both the group-wise lower bound constraints |V_{i}|-\lfloor\beta|V_{i}|\rfloor\leq|S\cap V_{i}| and the global lower bound constraint |S|\geq n-c of P.C are removed. Hence, we have the following lemma.

Lemma 12

Let O^{P.C} denote the optimal solution of P.C and O^{P.D} denote the optimal solution of P.D. Then g(O^{P.D})\geq g(O^{P.C}).

Recall that if f is submodular, g must be submodular (by Lemma 6). Hence, P.D is a classic problem of maximizing a submodular function subject to a matroid constraint. We next present the design of our algorithm.
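To make this observation concrete, the flipped objective g and the independence oracle of P.D's partition matroid can be sketched as below; this is an illustrative sketch, and both helper names are ours:

```python
import math

def make_flipped_objective(f, V):
    """Return g(S) = f(V \\ S); g is submodular whenever f is."""
    V = set(V)
    return lambda S: f(V - set(S))

def is_independent(S, groups, alpha):
    """Independence oracle of P.D's partition matroid:
    |S cap V_i| <= |V_i| - floor(alpha * |V_i|) for every group."""
    return all(
        len(set(S) & V_i) <= len(V_i) - math.floor(alpha * len(V_i))
        for V_i in groups.values()
    )
```

Any algorithm for matroid-constrained submodular maximization that accesses the matroid through such an independence oracle can serve as \mathcal{A}.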

  1.

    Apply the state-of-the-art algorithm 𝒜\mathcal{A} for matroid constrained submodular maximization to solve P.D and obtain a solution AP.DA^{P.D}.

  2.

    Note that AP.DA^{P.D} is not necessarily a feasible solution to P.C because it might violate the group-wise or the global lower bound constraints of P.C. We add additional items to AP.DA^{P.D} to make it feasible. Specifically, for each group i[m]i\in[m], our algorithm selects a backup set BiB_{i} of size |Vi|α|Vi||AP.DVi||V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|, by randomly sampling |Vi|α|Vi||AP.DVi||V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}| items from ViAP.DV_{i}\setminus A^{P.D}. Define Bi=B_{i}=\emptyset if |Vi|α|Vi||AP.DVi|=0|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|=0.

  3.

    Add i[m]Bi\cup_{i\in[m]}B_{i} to AP.DA^{P.D} to build AapproxA^{approx}, i.e., Aapprox=AP.D(i[m]Bi)A^{approx}=A^{P.D}\cup(\cup_{i\in[m]}B_{i}). Return VAapproxV\setminus A^{approx} as the final solution.

Algorithm 4 Approximation Algorithm for P.A when α>1/2\alpha>1/2
1:  Apply 𝒜\mathcal{A} to solve P.D and obtain a solution AP.DA^{P.D}
2:  for every group i[m]i\in[m] do
3:     if |AP.DVi|<|Vi|α|Vi||A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor then
4:        select a random backup set BiB_{i} of size |Vi|α|Vi||AP.DVi||V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}| from ViAP.DV_{i}\setminus A^{P.D}
5:     else
6:        BiB_{i}\leftarrow\emptyset
7:  AapproxAP.D(i[m]Bi)A^{approx}\leftarrow A^{P.D}\cup(\cup_{i\in[m]}B_{i})
8:  return  VAapproxV\setminus A^{approx}

The pseudocode of this approximation algorithm is given as Algorithm 4. Observe that adding \cup_{i\in[m]}B_{i} to A^{P.D} ensures that each group V_{i} contributes exactly |V_{i}|-\lfloor\alpha|V_{i}|\rfloor items to the solution. Because n-c\leq\sum_{i\in[m]}(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor) (otherwise P.C does not have a feasible solution), A^{P.D}\cup(\cup_{i\in[m]}B_{i}) must satisfy all constraints in P.C. Hence, we have the following lemma.

Lemma 13

AapproxA^{approx} is a feasible solution to P.C.
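The top-up step of Algorithm 4 and the final complement can be sketched as follows; again a minimal sketch with the oracle \mathcal{A} abstracted away and illustrative names of our own:

```python
import math
import random

def algorithm4(A_pd, groups, alpha, seed=0):
    """Given a P.D solution A^{P.D}, top up each group to exactly
    |V_i| - floor(alpha * |V_i|) items, then return the complement
    V \\ A^{approx} as the solution to P.A (case alpha > 1/2)."""
    rng = random.Random(seed)
    A = set(A_pd)
    for V_i in groups.values():
        cap = len(V_i) - math.floor(alpha * len(V_i))
        deficit = cap - len(A & V_i)
        if deficit > 0:
            # backup set B_i sampled from V_i \ A^{P.D}
            A.update(rng.sample(sorted(V_i - A), deficit))
    V = set().union(*groups.values())
    return V - A
```

The returned set contains exactly \lfloor\alpha|V_{i}|\rfloor items from each group, consistent with the feasibility argument above.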

We next analyze the performance of A^{approx}. The following lemma states that adding \cup_{i\in[m]}B_{i} to A^{P.D} reduces its utility by a factor of at most 2/3 in expectation, i.e., at least a 1/3 fraction of the utility of A^{P.D} is preserved.

Lemma 14

Suppose α>1/2\alpha>1/2, we have 𝔼Aapprox[g(Aapprox)]13g(AP.D)\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{1}{3}g(A^{P.D}).

Proof: Observe that B_{i}=\emptyset for all i\in[m] such that |A^{P.D}\cap V_{i}|=|V_{i}|-\lfloor\alpha|V_{i}|\rfloor; adding those B_{i} to A^{P.D} does not affect its utility. Therefore, we focus on those groups i\in[m] with |A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor in the rest of the proof. Let Z=\{i\in[m]\mid|A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\} denote the set of indices of all such groups. We assume Z\neq\emptyset to avoid trivial cases. We next show that it is safe to assume \min_{i\in Z}|V_{i}|>1 without loss of generality, i.e., the smallest group in Z contains at least two items. To prove this, we consider two cases, depending on the value of \alpha. If \alpha=1, then |A^{P.D}\cap V_{i}|<|V_{i}|-\lfloor\alpha|V_{i}|\rfloor does not hold for any group i such that |V_{i}|=1; hence, \min_{i\in Z}|V_{i}|>1. If \alpha<1, then according to the group fairness constraints listed in P.A, we are not allowed to select any items from groups with |V_{i}|=1. Hence, removing all groups of size one from consideration does not affect the quality of the optimal solution.

With the assumption that \min_{i\in Z}|V_{i}|>1 in place, we are now ready to prove the lemma. Recall that for every i\in Z, B_{i} is a random set of size |V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}| sampled from V_{i}\setminus A^{P.D}. It follows that each item in V_{i}\setminus A^{P.D} appears in B_{i} with probability at most

|Vi|α|Vi||AP.DVi||ViAP.D|.\displaystyle\frac{|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|}{|V_{i}\setminus A^{P.D}|}. (15)

We next give an upper bound on (15). Because we assume \alpha>1/2 and \min_{i\in Z}|V_{i}|>1, it holds that for all i\in Z,

|Vi|α|Vi||AP.DVi||Vi|α|Vi|2|Vi|/3.\displaystyle|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|\leq|V_{i}|-\lfloor\alpha|V_{i}|\rfloor\leq 2|V_{i}|/3. (16)

Moreover,

|ViAP.D|=|Vi||AP.DVi|\displaystyle|V_{i}\setminus A^{P.D}|=|V_{i}|-|A^{P.D}\cap V_{i}| (17)
=(|Vi|α|Vi||AP.DVi|)+(|Vi|(|Vi|α|Vi|))\displaystyle=(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+(|V_{i}|-(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor)) (18)
=(|Vi|α|Vi||AP.DVi|)+α|Vi|\displaystyle=(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+\lfloor\alpha|V_{i}|\rfloor (19)
(|Vi|α|Vi||AP.DVi|)+|Vi|/3,\displaystyle\geq(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+|V_{i}|/3, (20)

where the inequality is by the assumptions that α>1/2\alpha>1/2 and |Vi|>1|V_{i}|>1. It follows that

(15)|Vi|α|Vi||AP.DVi|(|Vi|α|Vi||AP.DVi|)+|Vi|/32|Vi|/32|Vi|/3+|Vi|/3=2/3,\displaystyle(\ref{eq:haha31})\leq\frac{|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|}{(|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|)+|V_{i}|/3}\leq\frac{2|V_{i}|/3}{2|V_{i}|/3+|V_{i}|/3}=2/3, (21)

where the first inequality is by (20) and the second inequality is by (16) and the assumption that |Vi|α|Vi||AP.DVi|>0|V_{i}|-\lfloor\alpha|V_{i}|\rfloor-|A^{P.D}\cap V_{i}|>0. That is, each item in ViAP.DV_{i}\setminus A^{P.D} appears in BiB_{i} with probability at most 2/32/3. Lemma 3 and the observation that g(AP.D)g(A^{P.D}\cup\cdot) is submodular imply that 𝔼Aapprox[g(Aapprox)]=𝔼i[m]Bi[g(AP.D(i[m]Bi))](123)g(AP.D)=13g(AP.D)\mathbb{E}_{A^{approx}}[g(A^{approx})]=\mathbb{E}_{\cup_{i\in[m]}B_{i}}[g(A^{P.D}\cup(\cup_{i\in[m]}B_{i}))]\geq(1-\frac{2}{3})g(A^{P.D}\cup\emptyset)=\frac{1}{3}g(A^{P.D}). \Box

Suppose \mathcal{A} is a \gamma-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint. Then we have

𝔼Aapprox[g(Aapprox)]13g(AP.D)γ3g(OP.D)\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{1}{3}g(A^{P.D})\geq\frac{\gamma}{3}g(O^{P.D})

where the first inequality is by Lemma 14. This, together with g(OP.D)g(OP.C)g(O^{P.D})\geq g(O^{P.C}) (as proved in Lemma 12), implies that 𝔼Aapprox[g(Aapprox)]γ3g(OP.C)\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{\gamma}{3}g(O^{P.C}). By the definition of function gg, we have

𝔼Aapprox[f(VAapprox)]=𝔼Aapprox[g(Aapprox)]γ3g(OP.C)=γ3f(OPT)\mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})]=\mathbb{E}_{A^{approx}}[g(A^{approx})]\geq\frac{\gamma}{3}g(O^{P.C})=\frac{\gamma}{3}f(OPT)

where the last equality is by the observation that P.A and P.C share the same value of the optimal solution. Hence, the following main theorem holds.

Theorem 4.2

Suppose \mathcal{A} is a \gamma-approximate algorithm for non-monotone submodular maximization subject to a matroid constraint and \alpha>1/2. Then we have \mathbb{E}_{A^{approx}}[f(V\setminus A^{approx})]\geq\frac{\gamma}{3}f(OPT).

5 Conclusion

This paper presents a comprehensive investigation of the non-monotone submodular maximization problem under group fairness constraints. Our main contribution is the development of several constant-factor approximation algorithms for this problem. In the future, we plan to expand our research to explore alternative fairness metrics.

References

  • [1] Biddle, D.: Adverse impact and test validation: A practitioner’s guide to valid and defensible employment testing. Routledge (2017)
  • [2] Buchbinder, N., Feldman, M., Naor, J., Schwartz, R.: Submodular maximization with cardinality constraints. In: Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms. pp. 1433–1452. SIAM (2014)
  • [3] Celis, E., Keswani, V., Straszak, D., Deshpande, A., Kathuria, T., Vishnoi, N.: Fair and diverse dpp-based data summarization. In: International Conference on Machine Learning. pp. 716–725. PMLR (2018)
  • [4] Celis, L.E., Huang, L., Vishnoi, N.K.: Multiwinner voting with fairness constraints. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence. pp. 144–151 (2018)
  • [5] Chierichetti, F., Kumar, R., Lattanzi, S., Vassilvitskii, S.: Matroids, matchings, and fairness. In: The 22nd International Conference on Artificial Intelligence and Statistics. pp. 2212–2220. PMLR (2019)
  • [6] Das, A., Kempe, D.: Algorithms for subset selection in linear regression. In: Proceedings of the fortieth annual ACM symposium on Theory of computing. pp. 45–54 (2008)
  • [7] Dueck, D., Frey, B.J.: Non-metric affinity propagation for unsupervised image categorization. In: 2007 IEEE 11th International Conference on Computer Vision. pp. 1–8. IEEE (2007)
  • [8] Dwork, C., Hardt, M., Pitassi, T., Reingold, O., Zemel, R.: Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (2012)
  • [9] El-Arini, K., Guestrin, C.: Beyond keyword search: discovering relevant scientific literature. In: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 439–447 (2011)
  • [10] El Halabi, M., Mitrović, S., Norouzi-Fard, A., Tardos, J., Tarnawski, J.M.: Fairness in streaming submodular maximization: algorithms and hardness. Advances in Neural Information Processing Systems 33, 13609–13622 (2020)
  • [11] Feldman, M., Naor, J., Schwartz, R.: A unified continuous greedy algorithm for submodular maximization. In: 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. pp. 570–579. IEEE (2011)
  • [12] Golovin, D., Krause, A.: Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research 42, 427–486 (2011)
  • [13] Gotovos, A., Karbasi, A., Krause, A.: Non-monotone adaptive submodular maximization. In: Twenty-Fourth International Joint Conference on Artificial Intelligence (2015)
  • [14] Gu, S., Gao, C., Wu, W.: A binary search double greedy algorithm for non-monotone dr-submodular maximization. In: Algorithmic Aspects in Information and Management: 16th International Conference, AAIM 2022, Guangzhou, China, August 13–14, 2022, Proceedings. pp. 3–14. Springer (2022)
  • [15] Joseph, M., Kearns, M., Morgenstern, J.H., Roth, A.: Fairness in learning: Classic and contextual bandits. Advances in neural information processing systems 29 (2016)
  • [16] Mirzasoleiman, B., Badanidiyuru, A., Karbasi, A.: Fast constrained submodular maximization: Personalized data summarization. In: ICML. pp. 1358–1367 (2016)
  • [17] Monroe, B.L.: Fully proportional representation. American Political Science Review 89(4), 925–940 (1995)
  • [18] Shi, G., Gu, S., Wu, W.: k-submodular maximization with two kinds of constraints. Discrete Mathematics, Algorithms and Applications 13(04), 2150036 (2021)
  • [19] Sipos, R., Swaminathan, A., Shivaswamy, P., Joachims, T.: Temporal corpus summarization using submodular word coverage. In: Proceedings of the 21st ACM international conference on Information and knowledge management. pp. 754–763 (2012)
  • [20] Tang, S., Yuan, J.: Influence maximization with partial feedback. Operations Research Letters 48(1), 24–28 (2020)
  • [21] Tang, S., Yuan, J.: Adaptive regularized submodular maximization. In: 32nd International Symposium on Algorithms and Computation (ISAAC 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik (2021)
  • [22] Tang, S., Yuan, J.: Group equality in adaptive submodular maximization. arXiv preprint arXiv:2207.03364 (2022)
  • [23] Tang, S., Yuan, J.: Beyond submodularity: A unified framework of fair subset selection through randomization. In: Under Review (2023)
  • [24] Tang, S., Yuan, J., Mensah-Boateng, T.: Achieving long-term fairness in submodular maximization through randomization. In: Under Review (2023)
  • [25] Tsang, A., Wilder, B., Rice, E., Tambe, M., Zick, Y.: Group-fairness in influence maximization. arXiv preprint arXiv:1903.00967 (2019)
  • [26] Zafar, M.B., Valera, I., Rodriguez, M.G., Gummadi, K.P.: Fairness constraints: Mechanisms for fair classification. In: Artificial intelligence and statistics. pp. 962–970. PMLR (2017)