
Fair Hierarchical Clustering

Sara Ahmadian
Google
[email protected]
Alessandro Epasto
Google
[email protected]
Marina Knittel
University of Maryland
[email protected]
Ravi Kumar
Google
[email protected]
Mohammad Mahdian
Google
[email protected]
Benjamin Moseley
Carnegie Mellon University
[email protected]
Philip Pham
Google
[email protected]
Sergei Vassilvitskii
Google
[email protected]
Yuyan Wang
Carnegie Mellon University
[email protected]
Abstract

As machine learning has become more prevalent, researchers have begun to recognize the necessity of ensuring machine learning systems are fair. Recently, there has been an interest in defining a notion of fairness that mitigates over-representation in traditional clustering.

In this paper we extend this notion to hierarchical clustering, where the goal is to recursively partition the data to optimize a specific objective. For various natural objectives, we obtain simple, efficient algorithms to find a provably good fair hierarchical clustering. Empirically, we show that our algorithms can find a fair hierarchical clustering, with only a negligible loss in the objective.

1 Introduction

Algorithms and machine-learned models are increasingly used to assist in decision making on a wide range of issues, from mortgage approval to court sentencing recommendations [26]. It is clearly undesirable, and in many cases illegal, for models to be biased against particular groups, for instance by discriminating on the basis of race or religion. Ensuring that there is no bias is not as easy as removing these protected categories from the data. Even when they are not explicitly listed, the correlation between sensitive features and the rest of the training data may still cause the algorithm to be biased. This has led to an emergent literature on computing provably fair outcomes (see the book [6]).

The prominence of clustering in data analysis, combined with its use for data segmentation, feature engineering, and visualization, makes it critical that efficient fair clustering methods are developed. There has been a flurry of recent results in the ML research community proposing algorithms for fair flat clustering, i.e., partitioning a dataset into a set of disjoint clusters, as captured by the k-center, k-median, k-means, and correlation clustering objectives [2, 3, 5, 7, 8, 13, 17, 23, 24, 27, 28]. However, the same issues affect hierarchical clustering, which is the problem we study.

The input to the hierarchical clustering problem is a set of data points with pairwise similarity or dissimilarity scores. A hierarchical clustering is a tree whose leaves correspond to the individual data points. Each internal node represents a cluster containing all the points in the leaves of its subtree. Naturally, the cluster at an internal node is the union of the clusters given by its children. Hierarchical clustering is widely used in data analysis [20], social network analysis [29, 31], and image/text organization [25].

Hierarchical clustering is frequently used for flat clustering when the number of clusters is unknown ahead of time. A hierarchical clustering yields a set of clusterings at different granularities that are consistent with each other. Therefore, in all clustering problems where fairness is desired but the number of clusters is unknown, fair hierarchical clustering is useful. As concrete examples, consider a collection of news articles organized by a topic hierarchy, where we wish to ensure that no single source or viewpoint is over-represented in a cluster; or a hierarchical division of a geographic area, where the sensitive attribute is gender or race, and we wish to ensure balance in every level of the hierarchy. There are many such problems that benefit from fair hierarchical clustering, motivating the study of this problem area.

Our contributions. We initiate an algorithmic study of fair hierarchical clustering. We build on Dasgupta’s seminal formal treatment of hierarchical clustering [19] and prove our results for the revenue [30], value [18], and cost [19] objectives in his framework.

To achieve fairness, we show how to extend the fairlets machinery, introduced by [15] and extended by [2], to this problem. We then investigate the complexity of finding a good fairlet decomposition, giving both strong computational lower bounds and polynomial time approximation algorithms.

Finally, we conclude with an empirical evaluation of our approach. We show that ignoring protected attributes when performing hierarchical clustering can lead to unfair clusters. On the other hand, adopting the fairlet framework in conjunction with the approximation algorithms we propose yields fair clusters with a negligible objective degradation.

Related work. Hierarchical clustering has received increased attention over the past few years. Dasgupta [19] developed a cost function objective for data sets with similarity scores, where similar points are encouraged to be clustered together lower in the tree. Cohen-Addad et al. [18] generalized these results into a class of optimization functions that possess other desirable properties and introduced their own value objective in the dissimilarity score context. In addition to validating their objective on inputs with known ground truth, they gave a theoretical justification for the average-linkage algorithm, one of the most popular algorithms used in practice, as a constant-factor approximation for value. Contemporaneously, Moseley and Wang [30] designed a revenue objective function based on the work of Dasgupta for point sets with similarity scores and showed the average-linkage algorithm is a constant approximation for this objective as well. This work was further improved by Charikar et al. [12], who gave a tighter analysis of average-linkage for Euclidean data for this objective, and by [1, 4], who improved the approximation ratio in the general case.

In parallel to the new developments in algorithms for hierarchical clustering, there has been tremendous development in the area of fair machine learning. We refer the reader to a recent textbook [6] for a rich overview, and focus here on progress for fair clustering. Chierichetti et al. [15] first defined fairness for k-median and k-center clustering, and introduced the notion of fairlets to design efficient algorithms. Extensive research has focused on two topics: adapting the definition of fairness to broader contexts, and designing efficient algorithms for finding good fairlet decompositions. For the first topic, the fairness definition was extended to multiple values for the protected feature [2, 8, 32]. For the second topic, Backurs et al. [5] proposed a near-linear constant approximation algorithm for finding fairlets for k-median, Kleindessner et al. [27] designed a linear-time constant approximation algorithm for k-center, Bercea et al. [8] developed methods for fair k-means, while Ahmadian et al. [3] defined approximation algorithms for fair correlation clustering. Concurrently with our work, Chhabra et al. [14] introduced a possible approach to ensuring fairness in hierarchical clustering. However, their fairness definition differs from ours (in particular, they do not ensure that all levels of the tree are fair), and the methods they introduce are heuristic, without formal fairness or quality guarantees.

Beyond clustering, the same balance notion that we use has been utilized to capture fairness in other contexts, for instance: fair voting [9], fair optimization [16], as well as other problems [10].

2 Formulation

2.1 Objectives for hierarchical clustering

Let $G=(V,s)$ be an input instance, where $V$ is a set of $n$ data points and $s:V^{2}\rightarrow\mathbb{R}^{\geq 0}$ is a similarity function over vertex pairs. For two sets $A,B\subseteq V$, we let $s(A,B)=\sum_{a\in A,b\in B}s(a,b)$ and $s(A)=\sum_{\{i,j\}\subseteq A}s(i,j)$. For problems where the input is $G=(V,d)$, with $d$ a distance function, we define $d(A,B)$ and $d(A)$ similarly. We also consider vertex-weighted versions of the problem, i.e., $G=(V,s,m)$ (or $G=(V,d,m)$), where $m:V\rightarrow\mathbb{Z}^{+}$ is a weight function on the vertices. The vertex-unweighted version can be interpreted as setting $m(i)=1$ for all $i\in V$. For $U\subseteq V$, we use the notation $m(U)=\sum_{i\in U}m(i)$.

A hierarchical clustering of $G$ is a tree whose leaves correspond to $V$ and whose internal vertices represent the merging of vertices (or clusters) into larger clusters until all data merges at the root. The goal of hierarchical clustering is to build a tree to optimize some objective.

To define these objectives formally, we need some notation. Let $T$ be a hierarchical clustering tree of $G$. For two leaves $i$ and $j$, we write $i\lor j$ for their least common ancestor. For an internal vertex $u$ in $T$, let $T[u]$ be the subtree of $T$ rooted at $u$, and let $\mathrm{leaves}(T[u])$ be the leaves of $T[u]$.

We consider three different objectives—revenue, value, and cost—based on the seminal framework of [19], and generalize them to the vertex-weighted case.

Revenue. Moseley and Wang [30] introduced the revenue objective for hierarchical clustering. Here the input instance is of the form $G=(V,s,m)$, where $s:V^{2}\rightarrow\mathbb{R}^{\geq 0}$ is a similarity function.

Definition 1 (Revenue).

The revenue ($\mathrm{rev}$) of a tree $T$ for an instance $G=(V,s,m)$, where $s(\cdot,\cdot)$ denotes similarity between data points, is:
$$\mathrm{rev}_{G}(T)=\sum_{i,j\in V}s(i,j)\cdot\big(m(V)-m(\mathrm{leaves}(T[i\lor j]))\big).$$

Note that in this definition, each similarity $s(i,j)$ is scaled by the total vertex weight of the points outside the subtree rooted at $i\lor j$. The goal is to find a tree of maximum revenue. It is known that average-linkage is a 1/3-approximation for vertex-unweighted revenue [30]; the state of the art is a 0.585-approximation [4].

As part of the analysis, there is an upper bound for the revenue objective [18, 30], which is easily extended to the vertex-weighted setting:

$$\mathrm{rev}_{G}(T)\leq\Big(m(V)-\min_{u,v\in V,u\neq v}m(\{u,v\})\Big)\cdot s(V). \qquad (1)$$

Note that in the vertex-unweighted case, the upper bound is just $(|V|-2)\,s(V)$.

Value. A different objective was proposed by Cohen-Addad et al. [18], using distances instead of similarities. Let $G=(V,d,m)$, where $d:V^{2}\to\mathbb{R}^{\geq 0}$ is a distance (or dissimilarity) function.

Definition 2 (Value).

The value ($\mathrm{val}$) of a tree $T$ for an instance $G=(V,d,m)$, where $d(\cdot,\cdot)$ denotes distance, is:
$$\mathrm{val}_{G}(T)=\sum_{i,j\in V}d(i,j)\cdot m(\mathrm{leaves}(T[i\lor j])).$$

As in revenue, we aim to find a hierarchical clustering that maximizes value. Cohen-Addad et al. [18] showed that both average-linkage and a locally $\epsilon$-densest cut algorithm achieve a 2/3-approximation for vertex-unweighted value. They also provided an upper bound for value, analogous to (1), which in the vertex-weighted context is:

$$\mathrm{val}_{G}(T)\leq m(V)\cdot d(V). \qquad (2)$$

Cost. The original objective introduced by Dasgupta [19] for analyzing hierarchical clustering algorithms is cost.

Definition 3 (Cost).

The $\mathrm{cost}$ of a tree $T$ for an instance $G=(V,s)$, where $s(\cdot,\cdot)$ denotes similarity, is:
$$\mathrm{cost}_{G}(T)=\sum_{i,j\in V}s(i,j)\cdot|\mathrm{leaves}(T[i\lor j])|.$$

The objective is to find a tree of minimum cost. From a complexity point of view, cost is a harder objective to optimize. Charikar and Chatziafratis [11] showed that cost is not constant-factor approximable under the Small Set Expansion hypothesis, and the current best approximations are $O\left(\sqrt{\log n}\right)$ and require solving SDPs.

Convention. Throughout the paper we adopt the following convention: $s(\cdot,\cdot)$ will always denote similarities and $d(\cdot,\cdot)$ will always denote distances. Thus, the inputs for the cost and revenue objectives will be instances of the form $(V,s,m)$ and inputs for the value objective will be instances of the form $(V,d,m)$. All the missing proofs can be found in the Supplementary Material.
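To make the three objectives concrete, the following Python sketch (ours, for illustration only; the nested-tuple tree encoding and the function names are not from the paper) computes revenue, value, and cost for a small instance. Each pair is counted once, at its least common ancestor, and leaves are assumed to be labeled 0..n-1.

```python
import itertools
import numpy as np

def leaves(tree):
    """Return the list of leaf indices under a (possibly nested) tuple tree."""
    if isinstance(tree, int):
        return [tree]
    return [x for child in tree for x in leaves(child)]

def _lca_terms(tree, weight):
    """Yield (i, j, cluster_weight) for every pair whose least common ancestor is a node of `tree`."""
    if isinstance(tree, int):
        return
    w = sum(weight[x] for x in leaves(tree))
    kids = [leaves(c) for c in tree]
    for a, b in itertools.combinations(range(len(kids)), 2):
        for i in kids[a]:
            for j in kids[b]:
                yield i, j, w
    for child in tree:
        yield from _lca_terms(child, weight)

def revenue(tree, s, weight):
    """rev_G(T) = sum over pairs of s(i,j) * (m(V) - m(leaves(T[i v j])))."""
    total = sum(weight)
    return sum(s[i, j] * (total - w) for i, j, w in _lca_terms(tree, weight))

def value(tree, d, weight):
    """val_G(T) = sum over pairs of d(i,j) * m(leaves(T[i v j]))."""
    return sum(d[i, j] * w for i, j, w in _lca_terms(tree, weight))

def cost(tree, s):
    """cost_G(T) = sum over pairs of s(i,j) * |leaves(T[i v j])| (unweighted)."""
    ones = [1] * len(leaves(tree))
    return sum(s[i, j] * w for i, j, w in _lca_terms(tree, ones))

# toy example: 4 points on a line, a caterpillar tree (((0, 1), 2), 3)
d = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
s = 1.0 / (1.0 + d); np.fill_diagonal(s, 0.0)
tree = (((0, 1), 2), 3)
print(revenue(tree, s, [1, 1, 1, 1]), value(tree, d, [1, 1, 1, 1]), cost(tree, s))
```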

2.2 Notions of fairness

Many definitions have been proposed for fairness in clustering. We consider the setting in which each data point in VV has a color; the color corresponds to the protected attribute.

Disparate impact. This notion is used to capture the fact that decisions (i.e., clusterings) should not be overly favorable to one group versus another. This notion was formalized by Chierichetti et al. [15] for clustering when the protected attribute can take on one of two values, i.e., points have one of two colors. In their setup, the balance of a cluster is the ratio of the minimum to the maximum number of points of any color in the cluster. Given a balance requirement $t$, a clustering is fair if and only if each cluster has a balance of at least $t$.

Bounded representation. A generalization of disparate impact, bounded representation focuses on mitigating the imbalance of the representation of protected classes (i.e., colors) in clusters and was defined by Ahmadian et al. [2]. Given an over-representation parameter $\alpha$, a cluster is fair if the fractional representation of each color in the cluster is at most $\alpha$, and a clustering is fair if each cluster has this property. An interesting special case of this notion is when there are $c$ total colors and $\alpha=1/c$. In this case, we require that every color is equally represented in every cluster. We will refer to this as equal representation. These notions enjoy the following useful property:

Definition 4 (Union-closed).

A fairness constraint is union-closed if for any pair of fair clusters $A$ and $B$, $A\cup B$ is also fair.

This property is useful in hierarchical clustering: given a tree $T$ and an internal node $u$, if each child cluster of $u$ is fair, then $u$ must also be a fair cluster.

Definition 5 (Fair hierarchical clustering).

For any fairness constraint, a hierarchical clustering is fair if all of its clusters (besides the leaves) are fair.

Thus, under any union-closed fairness constraint, this definition is equivalent to restricting the bottom-most clustering (besides the leaves) to be fair. Then, given an objective (e.g., revenue), the goal is to find a fair hierarchical clustering that optimizes the objective. We focus on the bounded representation fairness notion with $c$ colors and an over-representation cap $\alpha$. However, the main ideas for the revenue and value objectives work under any notion of fairness that is union-closed.
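As a concrete check of Definition 5 under bounded representation, here is a small sketch (ours; `is_fair_tree` and the nested-tuple tree encoding are illustrative, not the paper's code) that verifies every internal cluster caps each color at an α fraction. Because the constraint is union-closed, checking only the bottom-most internal clusters would suffice; the sketch checks every node for clarity.

```python
from collections import Counter

def leaves(tree):
    if isinstance(tree, int):
        return [tree]
    return [x for child in tree for x in leaves(child)]

def cluster_is_fair(points, colors, alpha):
    """A cluster is fair if each color makes up at most an alpha fraction of it."""
    counts = Counter(colors[p] for p in points)
    return max(counts.values()) <= alpha * len(points)

def is_fair_tree(tree, colors, alpha):
    """Definition 5: every cluster besides the leaves must be fair."""
    if isinstance(tree, int):
        return True
    return cluster_is_fair(leaves(tree), colors, alpha) and all(
        is_fair_tree(child, colors, alpha) for child in tree)

# toy example: 4 points, two colors, alpha = 1/2 (equal representation)
colors = ["red", "blue", "red", "blue"]
print(is_fair_tree(((0, 1), (2, 3)), colors, 0.5))   # True: every cluster is half red, half blue
print(is_fair_tree(((0, 2), (1, 3)), colors, 0.5))   # False: the bottom clusters are monochromatic
```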

3 Fairlet decomposition

Definition 6 (Fairlet [15]).

A fairlet $Y$ is a fair set of points such that there is no partition of $Y$ into $Y_{1}$ and $Y_{2}$ with both $Y_{1}$ and $Y_{2}$ being fair.

In the bounded representation fairness setting, a set of points is fair if at most an $\alpha$ fraction of the points have the same color. We call such a set an $\alpha$-capped fairlet. For $\alpha=1/t$ with $t$ an integer, the fairlet size will always be at most $2t-1$. We denote the maximum size of a fairlet by $m_{f}$.

Recall that given a union-closed fairness constraint, if the bottom clustering in the tree is a layer of fairlets (which we call a fairlet decomposition of the original dataset), then the hierarchical clustering tree is also fair. This observation gives an immediate two-phase algorithm for finding fair hierarchical clustering trees: (i) find a fairlet decomposition, i.e., partition the input set $V$ into clusters $Y_{1},Y_{2},\ldots$ that are all fairlets; (ii) build a tree on top of all the fairlets. Our goal is to complete both phases in such a way that we optimize the given objective (i.e., revenue or value).

In Section 4, we will see that to optimize for the revenue objective, all we need is a fairlet decomposition with bounded fairlet size. However, the fairlet decomposition required for the value objective is more nuanced. We describe this next.

Fairlet decomposition for the value objective

For the value objective, we need the total distance between pairs of points inside each fairlet to be small. Formally, suppose $V$ is partitioned into fairlets $\mathcal{Y}=\{Y_{1},Y_{2},\ldots\}$ such that each $Y_{i}$ is an $\alpha$-capped fairlet. The cost of this decomposition is defined as:

$$\phi(\mathcal{Y})=\sum_{Y\in\mathcal{Y}}\sum_{\{u,v\}\subseteq Y}d(u,v). \qquad (3)$$
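A minimal sketch of the decomposition cost (3), assuming the decomposition is given as a list of index lists and `d` is a distance matrix (our illustration, not the paper's code):

```python
import itertools
import numpy as np

def fairlet_cost(fairlets, d):
    """phi(Y): sum over fairlets of the intra-fairlet pairwise distances, as in (3)."""
    return sum(d[u, v] for Y in fairlets for u, v in itertools.combinations(Y, 2))

# toy example: four points on a line, decomposed into two fairlets
d = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
print(fairlet_cost([[0, 1], [2, 3]], d))  # 2.0: each fairlet contributes distance 1
```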

Unfortunately, the problem of finding a fairlet decomposition to minimize $\phi(\cdot)$ does not admit any constant-factor approximation unless P = NP.

Theorem 7.

Let $z\geq 3$ be an integer. Then there is no bounded approximation algorithm for finding $\left(\frac{z}{z+1}\right)$-capped fairlets optimizing $\phi(\mathcal{Y})$, which runs in polynomial time, unless P = NP.

The proof proceeds by a reduction from the Triangle Partition problem, which asks whether a graph $G=(V,E)$ on $3n$ vertices can be partitioned into three-element sets, each forming a triangle in $G$. Fortunately, for the purpose of optimizing the value objective, it is not necessary to find an approximately optimal decomposition.

4 Optimizing revenue with fairness

This section considers the revenue objective. We will obtain an approximation algorithm for this objective in three steps: (i) obtain a fairlet decomposition in which the maximum fairlet size is small, (ii) show that any $\beta$-approximation algorithm to (1), combined with such a fairlet decomposition, can be used to obtain a (roughly) $\beta$-approximation for fair hierarchical clustering under the revenue objective, and (iii) use average-linkage, which is known to be a 1/3-approximation to (1). (We note that the recent work [1, 4] on improved approximation algorithms compares to a bound on the optimal solution that differs from (1) and therefore does not fit into our framework.)

First, we address step (ii). Due to space constraints, the proof can be found in Appendix B.

Theorem 8.

Given an algorithm that obtains a $\beta$-approximation to (1), where $\beta\leq 1$, and a fairlet decomposition with maximum fairlet size $m_{f}$, there is a $\beta\left(1-\frac{2m_{f}}{n}\right)$-approximation for fair hierarchical clustering under the revenue objective.

Prior work showed that average-linkage is a 1/3-approximation to (1) in the vertex-unweighted case; this proof can be easily modified to show that it is still a 1/3-approximation even with vertex weights. This accounts for step (iii) in our process.

Combined with the fairlet decomposition methods for the two-color case [15] and for the multi-color case (Supplementary Material) to address step (i), we have the following.

Corollary 9.

There is a polynomial-time algorithm that constructs a fair tree that is a $\frac{1}{3}\left(1-\frac{2m_{f}}{n}\right)$-approximation for the revenue objective, where $m_{f}$ is the maximum size of the fairlets.
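To illustrate the construction behind Theorem 8 and Corollary 9, the sketch below (ours, not the authors' code; `contract_fairlets`, `average_linkage`, and the nested-tuple trees are illustrative choices) contracts each fairlet into a super-node of weight |Y| with similarity s(Y, Y') to other super-nodes, runs weighted average-linkage on the contracted instance, and then attaches each fairlet's points as a subtree under its super-node.

```python
import numpy as np

def contract_fairlets(fairlets, s):
    """Build the weighted instance G_Y: one super-node per fairlet, with
    s(Y, Y') the sum of cross similarities and weight m(Y) = |Y|."""
    k = len(fairlets)
    s_y = np.zeros((k, k))
    for a in range(k):
        for b in range(a + 1, k):
            s_y[a, b] = s_y[b, a] = sum(s[u, v] for u in fairlets[a] for v in fairlets[b])
    weights = np.array([len(Y) for Y in fairlets], dtype=float)
    return s_y, weights

def average_linkage(sim, weights):
    """Weighted average-linkage: repeatedly merge the pair maximizing s(A,B)/(m(A)m(B))."""
    sim = sim.astype(float).copy()
    weights = [float(w) for w in weights]
    trees = list(range(len(weights)))
    while len(trees) > 1:
        a, b = max(((i, j) for i in range(len(trees)) for j in range(i + 1, len(trees))),
                   key=lambda ij: sim[ij[0], ij[1]] / (weights[ij[0]] * weights[ij[1]]))
        merged_row = sim[a] + sim[b]          # similarities of the merged cluster to all others
        sim[a, :] = merged_row
        sim[:, a] = merged_row
        sim[a, a] = 0.0
        weights[a] += weights[b]
        trees[a] = (trees[a], trees[b])
        del trees[b], weights[b]
        sim = np.delete(np.delete(sim, b, axis=0), b, axis=1)
    return trees[0]

def fair_tree_for_revenue(fairlets, s):
    s_y, w = contract_fairlets(fairlets, s)
    top = average_linkage(s_y, w)             # tree over the fairlet super-nodes
    def expand(node):                          # replace super-node i by a subtree on its fairlet's points
        if isinstance(node, int):
            pts = fairlets[node]
            t = pts[0]
            for p in pts[1:]:
                t = (t, p)                     # any internal shape works; each fairlet is a fair set
            return t
        return tuple(expand(c) for c in node)
    return expand(top)

# toy usage: 4 points, fairlets pairing one point of each color
s = np.array([[0, .9, .2, .1], [.9, 0, .1, .2], [.2, .1, 0, .8], [.1, .2, .8, 0]])
print(fair_tree_for_revenue([[0, 2], [1, 3]], s))   # ((0, 2), (1, 3))
```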

5 Optimizing value with fairness

In this section we consider the value objective. As in the revenue objective, we prove that we can reduce fair hierarchical clustering to the problem of finding a good fairlet decomposition for the proposed fairlet objective (3), and then use any approximation algorithm for weighted hierarchical clustering with the decomposition as the input.

Theorem 10.

Given an algorithm that gives a $\beta$-approximation to (2), where $\beta\leq 1$, and a fairlet decomposition $\mathcal{Y}$ such that $\phi(\mathcal{Y})\leq\epsilon\cdot d(V)$, there is a $\beta(1-\epsilon)$-approximation for fair hierarchical clustering under the value objective.

We complement this result with an algorithm that finds a good fairlet decomposition in polynomial time under the bounded representation fairness constraint with cap $\alpha$.

Let $R_{1},\ldots,R_{c}$ be the $c$ colors and let $\mathcal{Y}=\{Y_{1},Y_{2},\ldots\}$ be the fairlet decomposition. Let $n_{i}$ be the number of points colored $R_{i}$ in $V$, and let $r_{i,k}$ denote the number of points colored $R_{i}$ in the $k$th fairlet.

Theorem 11.

There exists a local search algorithm that finds a fairlet decomposition $\mathcal{Y}$ with $\phi(\mathcal{Y})\leq(1+\epsilon)\max_{i,k}\frac{r_{i,k}}{n_{i}}\,d(V)$ in time $\tilde{O}(n^{3}/\epsilon)$.

We can now use the fact that average-linkage and the $\frac{\epsilon}{n}$-locally-densest cut algorithm give a $\frac{2}{3}$- and a $\left(\frac{2}{3}-\epsilon\right)$-approximation, respectively, for vertex-weighted hierarchical clustering under the value objective. Finally, recall that fairlets are intended to be minimal and that their size depends only on the parameter $\alpha$, not on the size of the original input. Therefore, as long as the number of points of each color increases as the input size $n$ grows, the ratio $r_{i,k}/n_{i}$ goes to $0$. These results, combined with Theorem 10 and Theorem 11, yield Corollary 12.

Corollary 12.

Given bounded-size fairlets, the fairlet decomposition computed by local search combined with average-linkage constructs a fair hierarchical clustering that is a $\frac{2}{3}(1-o(1))$-approximation for the value objective. Using the $\frac{\epsilon}{n}$-locally-densest cut algorithm of [18], we get a polynomial-time algorithm for fair hierarchical clustering that is a $\left(\frac{2}{3}-\epsilon\right)(1-o(1))$-approximation under the value objective, for any $\epsilon>0$.

Provided that every fairlet contains at most a small fraction of the points of each color, Corollary 12 states that we can extend the state-of-the-art results for value to the $\alpha$-capped, multi-colored constraint. Note that this precondition is always satisfied, and the extension therefore holds, in the two-color fairness setting and in the multi-colored equal representation fairness setting.

Fairlet decompositions via local search

In this section, we give a local search algorithm to construct a fairlet decomposition, which proves Theorem 11. This is inspired by the $\epsilon$-densest cut algorithm of [18]. To start, recall that for a pair of sets $A$ and $B$ we denote by $d(A,B)$ the sum of interpoint distances, $d(A,B)=\sum_{u\in A,v\in B}d(u,v)$. A fairlet decomposition is a partition of the input $\{Y_{1},Y_{2},\ldots\}$ such that each color composes at most an $\alpha$ fraction of each $Y_{i}$.

Our algorithm will recursively subdivide the cluster of all data to construct a hierarchy by finding cuts. To search for a cut, we will use a swap method.

Definition 13 (Local optimality).

Consider any fairlet decomposition $\mathcal{Y}=\{Y_{1},Y_{2},\ldots\}$ and $\epsilon>0$. Define a swap of $u\in Y_{i}$ and $v\in Y_{j}$ for $j\neq i$ as updating $Y_{i}$ to be $(Y_{i}\setminus\{u\})\cup\{v\}$ and $Y_{j}$ to be $(Y_{j}\setminus\{v\})\cup\{u\}$. We say $\mathcal{Y}$ is $\epsilon$-locally-optimal if any swap with $u,v$ of the same color reduces the objective value by less than a $(1+\epsilon)$ factor.

The algorithm computes an $(\epsilon/n)$-locally-optimal fairlet decomposition and runs in $\tilde{O}(n^{3}/\epsilon)$ time. Consider any given instance $(V,d)$. Let $d_{\max}$ denote the maximum distance, let $m_{f}$ denote the maximum fairlet size, and let $\Delta=d_{\max}\cdot\frac{m_{f}}{n}$. The algorithm begins with an arbitrary decomposition. Then it swaps pairs of monochromatic points until it terminates with a locally optimal solution. By construction we have the following.

Claim 14.

Algorithm 1 finds a valid fairlet decomposition.

We prove two things: that Algorithm 1 approximately minimizes the objective (3), and that it has a small running time. The following lemma gives an upper bound on the objective value (3) of the decomposition $\mathcal{Y}$ found by Algorithm 1.

Algorithm 1 Algorithm for $(\epsilon/n)$-locally-optimal fairlet decomposition.
Input: A set $V$ with distance function $d\geq 0$, parameter $\alpha$, small constant $\epsilon\in[0,1]$.
Output: An $\alpha$-capped fairlet decomposition $\mathcal{Y}$.
1:  Find $d_{\max}$; set $\Delta\leftarrow\frac{m_{f}}{n}d_{\max}$.
2:  Arbitrarily find an $\alpha$-capped fairlet decomposition $\{Y_{1},Y_{2},\ldots\}$ such that each part has at most an $\alpha$ fraction of any color.
3:  while $\exists\, u\in Y_{i},v\in Y_{j},i\neq j$ of the same color such that, for the decomposition $\mathcal{Y}^{\prime}$ obtained by swapping $u$ and $v$, $\frac{\sum_{Y_{k}\in\mathcal{Y}}d(Y_{k})}{\sum_{Y_{k}\in\mathcal{Y}^{\prime}}d(Y_{k})}\geq 1+\epsilon/n$ and $\sum_{Y_{k}\in\mathcal{Y}}d(Y_{k})>\Delta$ do
4:     Swap $u$ and $v$ by setting $Y_{i}\leftarrow(Y_{i}\setminus\{u\})\cup\{v\}$ and $Y_{j}\leftarrow(Y_{j}\setminus\{v\})\cup\{u\}$.
5:  end while
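A compact Python rendering of the swap phase of Algorithm 1 (our sketch, not the authors' released code; it assumes an initial α-capped decomposition is supplied as a list of lists of point indices and that `d` is a numpy distance matrix):

```python
import itertools
import numpy as np

def intra_cost(fairlets, d):
    """phi(Y): total intra-fairlet pairwise distance, i.e., objective (3)."""
    return sum(d[u, v] for Y in fairlets for u, v in itertools.combinations(Y, 2))

def improving_swap(fairlets, d, colors, eps, n):
    """Find and apply one same-color swap that shrinks phi(Y) by a (1 + eps/n) factor."""
    cur = intra_cost(fairlets, d)
    for i, j in itertools.combinations(range(len(fairlets)), 2):
        for u, v in itertools.product(list(fairlets[i]), list(fairlets[j])):
            if colors[u] != colors[v]:
                continue                                    # only same-color swaps keep fairlets valid
            new = (cur
                   - sum(d[u, w] for w in fairlets[i] if w != u)
                   - sum(d[v, w] for w in fairlets[j] if w != v)
                   + sum(d[u, w] for w in fairlets[j] if w != v)
                   + sum(d[v, w] for w in fairlets[i] if w != u))
            if new < cur and cur >= (1 + eps / n) * new:    # the improvement test of line 3
                fairlets[i].remove(u); fairlets[i].append(v)
                fairlets[j].remove(v); fairlets[j].append(u)
                return True
    return False

def local_search_fairlets(fairlets, d, colors, eps=0.1):
    """(eps/n)-locally-optimal refinement of a given alpha-capped fairlet decomposition."""
    fairlets = [list(Y) for Y in fairlets]
    n = sum(len(Y) for Y in fairlets)
    delta = d.max() * max(len(Y) for Y in fairlets) / n     # Delta = d_max * m_f / n
    while intra_cost(fairlets, d) > delta and improving_swap(fairlets, d, colors, eps, n):
        pass
    return fairlets

# toy usage: 6 points on a line, colors alternating, fairlets of one red and one blue point each
pts = np.arange(6.0)
d = np.abs(pts[:, None] - pts[None, :])
colors = ["red", "blue"] * 3
print(local_search_fairlets([[0, 5], [2, 1], [4, 3]], d, colors))
```

The deterministic scan over all pairs mirrors the pseudocode; the experiments in Section 7 instead sample random pairs, as described there.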

Lemma 15.

The fairlet decomposition $\mathcal{Y}$ computed by Algorithm 1 has an objective value for (3) of at most $(1+\epsilon)\max_{i,k}\frac{r_{i,k}}{n_{i}}\,d(V)$.

Finally we bound the running time. The algorithm has much better performance in practice than its worst-case analysis would indicate. We will show this later in Section 7.

Lemma 16.

The running time for Algorithm 1 is $\tilde{O}(n^{3}/\epsilon)$.

Together, Lemma 15, Lemma 16, and Claim 14 prove Theorem 11. This establishes that there is a local search algorithm that can construct a good fairlet decomposition.

6 Optimizing cost with fairness

This section considers the cost objective of [19]. Even without our fairness constraint, the difficulty of approximating cost is clear in its approximation hardness and the fact that all known solutions require an LP or SDP solver. We obtain the result in Theorem 17; extending this result to other fairness constraints, improving its bound, or even making the algorithm practical, are open questions.

Theorem 17.

Consider the two-color case. Given a $\beta$-approximation for cost and a $\gamma_{t}$-approximation for minimum weighted bisection¹ on inputs of size $t$, then for parameters $t$ and $\ell$ such that $n\geq t\ell$ and $n>\ell+108t^{2}/\ell^{2}$, there is a fair $O\left(\frac{n}{t}+t\ell+\frac{n\ell\gamma_{t}}{t}+\frac{nt\gamma_{t}}{\ell^{2}}\right)\beta$-approximation for $\mathrm{cost}(T^{*}_{\mathrm{unfair}})$. (¹The minimum weighted bisection problem is to find a partition of the nodes into two equal-sized subsets so that the sum of the weights of the edges crossing the partition is minimized.)

With proper parameterization, we achieve an $O\left(n^{5/6}\log^{5/4}n\right)$-approximation. We defer our algorithm description, pseudocode, and proofs to the Supplementary Material. While our algorithm is not simple, it is an important (and non-obvious) step to show the existence of an approximation, which we hope will spur future work in this area.

7 Experiments

This section validates our algorithms from Sections 4 and 5 empirically. We adopt the disparate impact fairness constraint [15]; thus each point is either blue or red. In particular, we would like to:

  • Show that running the standard average-linkage algorithm results in highly unfair solutions.

  • Demonstrate that demanding fairness in hierarchical clustering incurs only a small loss in the hierarchical clustering objective.

  • Show that our algorithms, including fairlet decomposition, are practical on real data.

In Appendix G we consider multiple colors and observe the same trends as in the two-color case.

Table 1: Dataset description. Here $(b,r)$ denotes the balance of the dataset.
Name | Sample size | # features | Protected feature | Color (blue, red) | $(b,r)$
CensusGender | 30162 | 6 | gender | (female, male) | (1,3)
CensusRace | 30162 | 6 | race | (non-white, white) | (1,7)
BankMarriage | 45211 | 7 | marital status | (not married, married) | (1,2)
BankAge | 45211 | 7 | age | (<40, ≥40) | (2,3)
Table 2: Impact of Algorithm 1 on $\mathrm{ratio}_{\mathrm{value}}$, in percentage (mean ± std. dev.).
Samples | 400 | 800 | 1600 | 3200 | 6400 | 12800
CensusGender, initial | 88.17 ± 0.76 | 88.39 ± 0.21 | 88.27 ± 0.40 | 88.12 ± 0.26 | 88.00 ± 0.10 | 88.04 ± 0.13
final | 99.01 ± 0.60 | 99.09 ± 0.58 | 99.55 ± 0.26 | 99.64 ± 0.13 | 99.20 ± 0.38 | 99.44 ± 0.23
CensusRace, initial | 84.49 ± 0.66 | 85.01 ± 0.31 | 85.00 ± 0.42 | 84.88 ± 0.43 | 84.84 ± 0.16 | 84.89 ± 0.20
final | 99.50 ± 0.20 | 99.89 ± 0.32 | 100.0 ± 0.21 | 99.98 ± 0.21 | 99.98 ± 0.11 | 99.93 ± 0.31
BankMarriage, initial | 92.47 ± 0.54 | 92.58 ± 0.30 | 92.42 ± 0.30 | 92.53 ± 0.14 | 92.59 ± 0.14 | 92.75 ± 0.04
final | 99.18 ± 0.22 | 99.28 ± 0.33 | 99.59 ± 0.14 | 99.51 ± 0.17 | 99.46 ± 0.10 | 99.50 ± 0.05
BankAge, initial | 93.70 ± 0.56 | 93.35 ± 0.41 | 92.95 ± 0.25 | 93.28 ± 0.13 | 93.36 ± 0.12 | 93.33 ± 0.12
final | 99.40 ± 0.28 | 99.40 ± 0.51 | 99.61 ± 0.13 | 99.64 ± 0.07 | 99.65 ± 0.08 | 99.59 ± 0.06

Datasets. We use two datasets from the UCI data repository.² In each dataset, we use the features with numerical values and leave out samples with empty entries. For value, we use the Euclidean distance as the dissimilarity measure. For revenue, we set the similarity to be $s(i,j)=\frac{1}{1+d(i,j)}$, where $d(i,j)$ is the Euclidean distance. We pick two different protected features for both datasets, resulting in four datasets in total (see Table 1 for details). (²archive.ics.uci.edu/ml/index.php; Census: archive.ics.uci.edu/ml/datasets/census+income; Bank: archive.ics.uci.edu/ml/datasets/Bank+Marketing)

  • Census dataset: We choose gender and race to be the protected feature and call the resulting datasets CensusGender and CensusRace.

  • Bank dataset: We choose marital status and age to be the protected features and call the resulting datasets BankMarriage and BankAge.

In this section, unless otherwise specified, we report results only for the value objective. Results for the revenue objective are qualitatively similar and are omitted here. We do not evaluate our algorithm for the cost objective since it is currently only of theoretical interest.

We sub-sample points of the two colors from the original dataset proportionally, while approximately retaining the original color balance. The sample sizes used are $100\times 2^{i}$, $i=0,\ldots,8$. On each, we do 5 experiments and report the average results. We set $\epsilon$ in Algorithm 1 to $0.1$ in all of the experiments.
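A sketch of this preprocessing (ours; the feature matrix and color labels below are synthetic placeholders for the Census/Bank features): proportional sub-sampling by color, Euclidean distances for value, and the $s(i,j)=1/(1+d(i,j))$ similarities for revenue.

```python
import numpy as np

def subsample_preserving_balance(X, colors, size, seed=0):
    """Sample `size` points, keeping (approximately) the original color proportions."""
    rng = np.random.default_rng(seed)
    idx = []
    for c in np.unique(colors):
        members = np.flatnonzero(colors == c)
        take = int(round(size * len(members) / len(colors)))
        idx.extend(rng.choice(members, size=take, replace=False))
    return np.array(idx)

def distance_and_similarity(X):
    """Euclidean distances for value; s(i, j) = 1 / (1 + d(i, j)) for revenue."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = 1.0 / (1.0 + d)
    np.fill_diagonal(s, 0.0)
    return d, s

# toy usage with random data in place of the real features
X = np.random.default_rng(0).normal(size=(1000, 6))
colors = np.array(["blue"] * 250 + ["red"] * 750)     # balance (1, 3), as in CensusGender
idx = subsample_preserving_balance(X, colors, 400)
d, s = distance_and_similarity(X[idx])
```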

Implementation. The code is available in the Supplementary Material. In the experiments, we use Algorithm 1 for the fairlet decomposition phase, where the fairlet decomposition is initialized by randomly assigning red and blue points to each fairlet. We apply the average-linkage algorithm to create a tree on the fairlets. We further use average-linkage to create subtrees inside of each fairlet.

The algorithm selects a random pair of blue or red points in different fairlets to swap, and checks if the swap sufficiently improves the objective. We do not run the algorithm until all pairs are checked; rather, the algorithm stops once it has made 2n failed attempts to swap a random pair. As we observe empirically, this does not have a material effect on the quality of the overall solution.
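This stopping rule can be sketched as follows (our illustration, not the released code; it assumes the same list-of-lists fairlet encoding and numpy distance matrix as the sketch after Algorithm 1):

```python
import itertools
import random

def phi(fairlets, d):
    """Objective (3): total intra-fairlet pairwise distance."""
    return sum(d[u, v] for Y in fairlets for u, v in itertools.combinations(Y, 2))

def try_random_swap(fairlets, d, colors, eps, n, rng):
    """One attempt: pick two fairlets and one point from each; swap them only if they
    share a color and the swap improves (3) by a (1 + eps/n) factor."""
    i, j = rng.sample(range(len(fairlets)), 2)
    u, v = rng.choice(fairlets[i]), rng.choice(fairlets[j])
    if colors[u] != colors[v]:
        return False
    cur = phi(fairlets, d)
    new = (cur
           - sum(d[u, w] for w in fairlets[i] if w != u)
           - sum(d[v, w] for w in fairlets[j] if w != v)
           + sum(d[u, w] for w in fairlets[j] if w != v)
           + sum(d[v, w] for w in fairlets[i] if w != u))
    if new < cur and cur >= (1 + eps / n) * new:
        fairlets[i].remove(u); fairlets[i].append(v)
        fairlets[j].remove(v); fairlets[j].append(u)
        return True
    return False

def local_search_sampled(fairlets, d, colors, eps=0.1, seed=0):
    """Stop after 2n consecutive failed attempts (or once the cost falls below Algorithm 1's threshold)."""
    rng = random.Random(seed)
    fairlets = [list(Y) for Y in fairlets]
    n = sum(len(Y) for Y in fairlets)
    delta = d.max() * max(len(Y) for Y in fairlets) / n
    failures = 0
    while failures < 2 * n and phi(fairlets, d) > delta:
        failures = 0 if try_random_swap(fairlets, d, colors, eps, n, rng) else failures + 1
    return fairlets
```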

Table 3: Impact of Algorithm 1 on $\mathrm{ratio}_{\text{fairlets}}$.
Samples | 100 | 200 | 400 | 800 | 1600 | 3200 | 6400 | 12800
CensusGender, initial | 2.5e-2 | 1.2e-2 | 6.2e-3 | 3.0e-3 | 1.5e-3 | 7.5e-4 | 3.8e-4 | 1.9e-4
final | 4.9e-3 | 1.4e-3 | 6.9e-4 | 2.5e-4 | 8.5e-5 | 3.6e-5 | 1.8e-5 | 8.0e-6
CensusRace, initial | 6.6e-2 | 3.4e-2 | 1.7e-2 | 8.4e-3 | 4.2e-3 | 2.1e-3 | 1.1e-3 | 5.3e-4
final | 2.5e-2 | 1.2e-2 | 6.2e-3 | 3.0e-3 | 1.5e-3 | 7.5e-4 | 3.8e-4 | 1.9e-5
BankMarriage, initial | 1.7e-2 | 8.2e-3 | 4.0e-3 | 2.0e-3 | 1.0e-3 | 5.0e-4 | 2.5e-4 | 1.3e-4
final | 5.9e-3 | 2.1e-3 | 9.3e-4 | 4.1e-4 | 1.3e-4 | 7.1e-5 | 3.3e-5 | 1.4e-5
BankAge, initial | 1.3e-2 | 7.4e-3 | 3.5e-3 | 1.9e-3 | 9.3e-4 | 4.7e-4 | 2.3e-4 | 1.2e-4
final | 5.0e-3 | 2.2e-3 | 7.0e-4 | 3.7e-4 | 1.3e-4 | 5.7e-5 | 3.0e-5 | 1.4e-5
Figure 1: (i) $\mathrm{ratio}_{\text{fairlets}}$, every 100 swaps. (ii) $\mathrm{ratio}_{\mathrm{value}}$, every 100 swaps. (iii) CensusGender: running time vs. sample size on a log-log scale. (Plots not reproduced here.)

Metrics. We present results for value here; the results for revenue are qualitatively similar. In our experiments, we track the following quantities. Let $G$ be the given input instance and let $T$ be the output of our fair hierarchical clustering algorithm. We consider the ratio $\mathrm{ratio}_{\mathrm{value}}=\frac{\mathrm{value}_{G}(T)}{\mathrm{value}_{G}(T^{\prime})}$, where $T^{\prime}$ is the tree obtained by the standard average-linkage algorithm. For the fairlet objective, where $\mathcal{Y}$ is the fairlet decomposition, we let $\mathrm{ratio}_{\text{fairlets}}=\frac{\phi(\mathcal{Y})}{d(V)}$.

Results. The average-linkage algorithm always constructs unfair trees. For each of the datasets, the algorithm produces monochromatic clusters at some level, strengthening the case for fair algorithms.

In Table 2, we show for each dataset the $\mathrm{ratio}_{\mathrm{value}}$ both at the time of initialization (initial) and after running the local search algorithm (final). We see the change in the ratio as the local search algorithm performs swaps: fairness leads to almost no degradation in the objective value as the swaps proceed. Table 3 shows the $\mathrm{ratio}_{\text{fairlets}}$ for the initial random fairlet decomposition and the final output fairlets. As we see, Algorithm 1 significantly improves the fairlet objective of the initial random fairlet decomposition.

Table 4: Clustering on fairlets found by local search vs. the upper bound, at size 1600 (mean ± std. dev.).
Dataset | CensusGender | CensusRace | BankMarriage | BankAge
Revenue vs. upper bound | 81.89 ± 0.40 | 81.75 ± 0.83 | 61.53 ± 0.37 | 61.66 ± 0.66
Value vs. upper bound | 84.31 ± 0.15 | 84.52 ± 0.22 | 89.17 ± 0.29 | 88.81 ± 0.18

The more the locally-optimal algorithm improves the objective value of (3), the better the tree built on the fairlets performs. Figures 1(i) and 1(ii) show $\mathrm{ratio}_{\mathrm{value}}$ and $\mathrm{ratio}_{\text{fairlets}}$ for every 100 swaps in the execution of Algorithm 1 on a subsample of size 3200 from the Census dataset. The plots show that as the fairlet objective value decreases, the value objective of the resulting fair tree increases. Such correlations are found on subsamples of all sizes.

Now we compare the objective value of the algorithm with the upper bound on the optimum. We report the results for both the revenue and value objectives, using fairlets obtained by local search, in Table 4. On all datasets, we obtain ratios significantly better than the theoretical worst-case guarantee. In Figure 1(iii), we show the average running time on the Census data for both the original average-linkage algorithm and the fair average-linkage algorithm. As the sample size grows, the running time scales almost as well as that of current implementations of average-linkage. Thus, with a modest increase in time, we can obtain a fair hierarchical clustering under the value objective.

8 Conclusions

In this paper we extended the notion of fairness to the classical problem of hierarchical clustering under three different objectives (revenue, value, and cost). Our results show that revenue and value are easy to optimize with fairness, while optimizing cost appears to be more challenging.

Our work raises several questions and research directions. Can the approximations be improved? Can we find better upper and lower bounds for fair cost? Are there other important fairness criteria?

References

  • [1] Sara Ahmadian, Vaggos Chatziafratis, Alessandro Epasto, Euiwoong Lee, Mohammad Mahdian, Konstantin Makarychev, and Grigory Yaroslavtsev. Bisect and conquer: Hierarchical clustering via max-uncut bisection. In AISTATS, 2020.
  • [2] Sara Ahmadian, Alessandro Epasto, Ravi Kumar, and Mohammad Mahdian. Clustering without over-representation. In KDD, pages 267–275, 2019.
  • [3] Sara Ahmadian, Alessandro Epasto, Ravi Kumar, and Mohammad Mahdian. Fair correlation clustering. In AISTATS, 2020.
  • [4] Noga Alon, Yossi Azar, and Danny Vainstein. Hierarchical clustering: a 0.585 revenue approximation. In COLT, 2020.
  • [5] Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, and Tal Wagner. Scalable fair clustering. In ICML, pages 405–413, 2019.
  • [6] Solon Barocas, Moritz Hardt, and Arvind Narayanan. Fairness and Machine Learning. www.fairmlbook.org, 2019.
  • [7] Suman Bera, Deeparnab Chakrabarty, Nicolas Flores, and Maryam Negahbani. Fair algorithms for clustering. In NeurIPS, pages 4955–4966, 2019.
  • [8] Ioana O Bercea, Martin Groß, Samir Khuller, Aounon Kumar, Clemens Rösner, Daniel R Schmidt, and Melanie Schmidt. On the cost of essentially fair clusterings. In APPROX-RANDOM, pages 18:1–18:22, 2019.
  • [9] L. Elisa Celis, Lingxiao Huang, and Nisheeth K. Vishnoi. Multiwinner voting with fairness constraints. In IJCAI, pages 144–151, 2018.
  • [10] L. Elisa Celis, Damian Straszak, and Nisheeth K. Vishnoi. Ranking with fairness constraints. In ICALP, pages 28:1–28:15, 2018.
  • [11] Moses Charikar and Vaggos Chatziafratis. Approximate hierarchical clustering via sparsest cut and spreading metrics. In SODA, pages 841–854, 2017.
  • [12] Moses Charikar, Vaggos Chatziafratis, and Rad Niazadeh. Hierarchical clustering better than average-linkage. In SODA, pages 2291–2304, 2019.
  • [13] Xingyu Chen, Brandon Fain, Charles Lyu, and Kamesh Munagala. Proportionally fair clustering. In ICML, pages 1032–1041, 2019.
  • [14] Anshuman Chhabra and Prasant Mohapatra. Fair algorithms for hierarchical agglomerative clustering. arXiv:2005.03197, 2020.
  • [15] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. In NIPS, pages 5029–5037, 2017.
  • [16] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Matroids, matchings, and fairness. In AISTATS, pages 2212–2220, 2019.
  • [17] Ashish Chiplunkar, Sagar Kale, and Sivaramakrishnan Natarajan Ramamoorthy. How to solve fair k-center in massive data models. In ICML, 2020.
  • [18] Vincent Cohen-Addad, Varun Kanade, Frederik Mallmann-Trenn, and Claire Mathieu. Hierarchical clustering: Objective functions and algorithms. In SODA, pages 378–397, 2018.
  • [19] Sanjoy Dasgupta. A cost function for similarity-based hierarchical clustering. In STOC, pages 118–127, 2016.
  • [20] Richard Dubes and Anil K. Jain. Clustering methodologies in exploratory data analysis. In Advances in Computers, volume 19, pages 113–228. Academic Press, 1980.
  • [21] Uriel Feige and Robert Krauthgamer. A polylogarithmic approximation of the minimum bisection. In FOCS, pages 105–115, 2000.
  • [22] Michael R Garey and David S Johnson. Computers and Intractability. WH Freeman New York, 2002.
  • [23] Lingxiao Huang, Shaofeng H. C. Jiang, and Nisheeth K. Vishnoi. Coresets for clustering with fairness constraints. In NeurIPS, pages 7587–7598, 2019.
  • [24] Matthew Jones, Thy Nguyen, and Huy Nguyen. Fair k-centers via maximum matching. In ICML, 2020.
  • [25] Michael Steinbach, George Karypis, and Vipin Kumar. A comparison of document clustering techniques. In TextMining Workshop at KDD, 2000.
  • [26] Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, and Sendhil Mullainathan. Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1):237–293, 2017.
  • [27] Matthäus Kleindessner, Pranjal Awasthi, and Jamie Morgenstern. Fair k-center clustering for data summarization. In ICML, pages 3448–3457, 2019.
  • [28] Matthäus Kleindessner, Samira Samadi, Pranjal Awasthi, and Jamie Morgenstern. Guarantees for spectral clustering with fairness constraints. In ICML, pages 3448–3457, 2019.
  • [29] Charles F Mann, David W Matula, and Eli V Olinick. The use of sparsest cuts to reveal the hierarchical community structure of social networks. Social Networks, 30(3):223–234, 2008.
  • [30] Benjamin Moseley and Joshua Wang. Approximation bounds for hierarchical clustering: Average linkage, bisecting k-means, and local search. In NIPS, pages 3094–3103, 2017.
  • [31] Anand Rajaraman and Jeffrey David Ullman. Mining of Massive Datasets. Cambridge University Press, 2011.
  • [32] Clemens Rösner and Melanie Schmidt. Privacy preserving clustering with constraints. In ICALP, pages 96:1–96:14, 2018.

Appendix

Appendix A Approximation algorithms for weighted hierarchical clustering

In this section we first prove that running a constant-approximation algorithm on the fairlets gives a good solution for the value objective, and then give constant-factor approximation algorithms for both the revenue and value objectives in the weighted hierarchical clustering problem, as mentioned in Corollaries 9 and 12. Specifically, we analyze a weighted version of average-linkage, for both the weighted revenue and value objectives, and a weighted $(\epsilon/n)$-locally-densest cut algorithm, which works for the weighted value objective. Both proofs are easily adapted from the corresponding proofs in [18] and [30].

A.1 Running constant-approximation algorithms on fairlets

In this section, we prove Theorem 10, which says that if we run any $\beta$-approximation algorithm for the upper bound on weighted value on the fairlet decomposition, we get a fair tree with minimal loss in the approximation ratio. For the remainder of this section, fix any hierarchical clustering algorithm $A$ that is guaranteed, on any weighted input $(V,d,m)$, to construct a hierarchical clustering with objective value at least $\beta\, m(V)\, d(V)$ for the value objective. Recall that we extended the value objective to a weighted variant in Section 2 and that $m(V)=\sum_{u\in V}m_{u}$. Our aim is to show that we can combine $A$ with the fairlet decomposition $\mathcal{Y}$ introduced in the prior section to get a fair hierarchical clustering that is a $\beta(1-\epsilon)$-approximation for the value objective, if $\phi(\mathcal{Y})\leq\epsilon\, d(V)$.

In the following definition, we transform the point set into a new set of weighted points. We will analyze $A$ on this new set of points. We then show how to relate this to the objective value of the optimal tree on the original set of points.

Definition 18.

Let $\mathcal{Y}=\{Y_{1},Y_{2},\ldots\}$ be the fairlet decomposition for $V$ that is produced by the local search algorithm. Define $V(\mathcal{Y})$ as follows:

  • Each set $Y_{i}$ has a corresponding point $a_{i}$ in $V(\mathcal{Y})$.

  • The weight $m_{i}$ of $a_{i}$ is set to be $|Y_{i}|$.

  • For each pair of fairlets $Y_{i},Y_{j}\in\mathcal{Y}$ with $i\neq j$, $d(a_{i},a_{j})=d(Y_{i},Y_{j})$.

We begin by observing that the objective value that $A$ receives on the instance $V(\mathcal{Y})$ is large compared to the weights in the original instance.

Theorem 19.

On the instance $V(\mathcal{Y})$, the algorithm $A$ has a total weighted objective of at least $\beta(1-\epsilon)\cdot n\, d(V)$.

Proof.

Notice that $m(V(\mathcal{Y}))=|V|=n$. Consider the total sum of all the distances in $V(\mathcal{Y})$: this is $\sum_{a_{i},a_{j}\in V(\mathcal{Y})}d(a_{i},a_{j})=\sum_{Y_{i},Y_{j}\in\mathcal{Y}}d(Y_{i},Y_{j})=d(V)-\phi(\mathcal{Y})$. The upper bound on the optimal solution is $\left(\sum_{Y_{i}\in\mathcal{Y}}m_{i}\right)(d(V)-\phi(\mathcal{Y}))=n(d(V)-\phi(\mathcal{Y}))$. Since $\phi(\mathcal{Y})\leq\epsilon\, d(V)$, this upper bound is at least $(1-\epsilon)n\, d(V)$. Theorem 10 follows from the fact that the algorithm $A$ achieves a weighted value of at least a $\beta$ factor of the total weighted distances. ∎

A.2 Weighted hierarchical clustering: Constant-factor approximation

For weighted hierarchical clustering with positive integral weights, we define the weighted average-linkage algorithm for inputs $(V,d,m)$ and $(V,s,m)$. Define the average distance to be $\mathit{Avg}(A,B)=\frac{d(A,B)}{m(A)m(B)}$ for dissimilarity-based inputs, and $\mathit{Avg}(A,B)=\frac{s(A,B)}{m(A)m(B)}$ for similarity-based inputs. In each iteration, weighted average-linkage seeks to merge the pair of clusters that minimizes this value if the input is dissimilarity-based, and maximizes it if the input is similarity-based.

Lemma 20.

Weighted average-linkage is a $\frac{2}{3}$ (resp., $\frac{1}{3}$) approximation for the upper bound on the weighted value (resp., revenue) objective with positive, integral weights.

Proof.

We prove it for weighted value first. This is directly implied by the fact that average-linkage is a $\frac{2}{3}$-approximation for the unweighted value objective, as proved in [18]. We have already seen in the last subsection that an unweighted input $V$ can be converted into a weighted input $V(\mathcal{Y})$. Vice versa, we can convert a weighted input $(V,d,m)$ into an unweighted input with the same upper bound for the value objective.

In weighted hierarchical clustering we treat each point $p$ with integral weight as $m(p)$ duplicates of the point with distance $0$ among themselves; call this set $S(p)$. For two weighted points $(p,m(p))$ and $(q,m(q))$, if $i\in S(p)$ and $j\in S(q)$, let $d(i,j)=\frac{d(p,q)}{m(p)m(q)}$. This unweighted instance, composed of many duplicates, has the same upper bound as the weighted instance. Notice that running average-linkage on the unweighted instance will always choose to put all the duplicates $S(p)$ together first for each $p$, and then do hierarchical clustering on top of the duplicates. Thus running average-linkage on the unweighted input gives a valid hierarchical clustering tree for the weighted input. Since the unweighted value upper bound equals the weighted value upper bound, the approximation ratio is the same.
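The duplicate-point reduction used here can be sketched as follows (our illustration); the toy check at the end confirms that the total pairwise distance, and hence the value upper bound, is preserved.

```python
import numpy as np

def expand_weighted_instance(d, m):
    """Turn a weighted instance (V, d, m) into an unweighted one on sum(m) duplicate points."""
    owner = [p for p in range(len(m)) for _ in range(m[p])]   # which original point each copy comes from
    n = len(owner)
    d_new = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            p, q = owner[i], owner[j]
            if p != q:
                d_new[i, j] = d[p, q] / (m[p] * m[q])         # copies of the same point stay at distance 0
    return d_new, owner

# toy check: total pairwise distance is preserved
d = np.array([[0.0, 4.0], [4.0, 0.0]])
m = [2, 3]
d_new, _ = expand_weighted_instance(d, m)
print(d_new.sum() / 2, d.sum() / 2)    # both 4.0
```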

Now we prove it for weighted revenue. In [30], the fact that average-linkage is a $\frac{1}{3}$-approximation for unweighted revenue is proved as follows. Given any clustering $\mathcal{C}$, if average-linkage chooses to merge $A$ and $B$ in $\mathcal{C}$, we define a local revenue for this merge:

$$\text{merge-rev}(A,B)=\sum_{C\in\mathcal{C}\setminus\{A,B\}}|C||A||B|\,\mathit{Avg}(A,B).$$

And correspondingly, a local cost:

$$\text{merge-cost}(A,B)=\sum_{C\in\mathcal{C}\setminus\{A,B\}}\big(|B||A||C|\,\mathit{Avg}(A,C)+|A||B||C|\,\mathit{Avg}(B,C)\big).$$

Summing up the local revenue and cost over all merges gives the upper bound. [30] used the properties of average-linkage to prove that at every merge, $\text{merge-cost}(A,B)\leq 2\,\text{merge-rev}(A,B)$, which guarantees that the total revenue, i.e., the sum of $\text{merge-rev}(A,B)$ over all merges, is at least $\frac{1}{3}$ of the upper bound. For the weighted case, we define

$$\text{merge-rev}(A,B)=\sum_{C\in\mathcal{C}\setminus\{A,B\}}m(C)m(A)m(B)\,\mathit{Avg}(A,B)$$

and

$$\text{merge-cost}(A,B)=\sum_{C\in\mathcal{C}\setminus\{A,B\}}\big(m(B)m(A)m(C)\,\mathit{Avg}(A,C)+m(A)m(B)m(C)\,\mathit{Avg}(B,C)\big).$$

The rest of the proof works in the same way as in [30], proving that weighted average-linkage is a $\frac{1}{3}$-approximation for weighted revenue. ∎

Next we define the weighted $(\epsilon/n)$-locally-densest cut algorithm. The original algorithm, introduced in [18], defines the density of a cut $(A,B)$ to be $\frac{d(A,B)}{|A||B|}$. It starts with the original set as one cluster and, at every step, seeks the partition of the current set that locally maximizes this value, thus constructing a tree from top to bottom. For a weighted input $(V,d,m)$, we define the density of a cut to be $\frac{d(A,B)}{m(A)m(B)}$, and let $n=m(V)$. For more description of the algorithm, see Algorithm 4 in Section 6.2 of [18].

Lemma 21.

The weighted $(\epsilon/n)$-locally-densest cut algorithm is a $\left(\frac{2}{3}-\epsilon\right)$-approximation for the weighted value objective.

Proof.

Just as in the average-linkage proof, we convert each weighted point $p$ into a set $S$ of $m(p)$ duplicates of $p$. Notice that the converted unweighted hierarchical clustering input has the same upper bound as the weighted hierarchical clustering input, and the $(\epsilon/n)$-locally-densest cut algorithm moves whole duplicate sets $S$ around in the unweighted input, instead of single points as in the original algorithm in [18].

Focus on a split of cluster $A\cup B$ into $(A,B)$. Let $S$ be a duplicate set. For every $S\subseteq A$, where $S$ is a set of duplicates, we must have

$$\left(1+\frac{\epsilon}{n}\right)\frac{d(A,B)}{|A||B|}\geq\frac{d(A\setminus S,B\cup S)}{(|A|-|S|)(|B|+|S|)}.$$

Pick a point $q\in S$; then

$$\begin{aligned}
&\left(1+\frac{\epsilon}{n}\right)d(A,B)|S|(|A|-1)(|B|+1)\\
&=\left(1+\frac{\epsilon}{n}\right)d(A,B)(|A||B|+|A|-|B|-1)|S|\\
&=\left(1+\frac{\epsilon}{n}\right)d(A,B)(|A||B|+|A||S|-|B||S|-|S|)+\left(1+\frac{\epsilon}{n}\right)d(A,B)(|A||B|)(|S|-1)\\
&\geq\left(1+\frac{\epsilon}{n}\right)d(A,B)(|A|-|S|)(|B|+|S|)+d(A,B)|A||B|(|S|-1)\\
&\geq|A||B|\,d(A\setminus S,B\cup S)+d(A,B)|A||B|(|S|-1)\\
&=|A||B|\big(d(A,B)+|S|d(q,A)-|S|d(q,B)\big)+|A||B|(|S|-1)d(A,B)\\
&=|A||B||S|\big(d(A,B)+d(q,A)-d(q,B)\big).
\end{aligned}$$

Rearranging the terms, we get that the following inequality holds for any point $q\in A$:

$$\left(1+\frac{\epsilon}{n}\right)\frac{d(A,B)}{|A||B|}\geq\frac{d(A,B)+d(q,A)-d(q,B)}{(|A|-1)(|B|+1)}.$$

The rest of the proof goes exactly the same as the proof in [18, Theorem 6.5]. ∎

Appendix B Proof of Theorem 8

Proof.

Let $\mathcal{A}$ be the $\beta$-approximation algorithm to (1). For a given instance $G=(V,s)$, let $\mathcal{Y}=\{Y_{1},Y_{2},\ldots\}$ be a fairlet decomposition of $V$; let $m_{f}=\max_{Y\in\mathcal{Y}}|Y|$. Recall that $n=|V|$.

We use $\mathcal{Y}$ to create a weighted instance $G_{\mathcal{Y}}=(\mathcal{Y},s_{\mathcal{Y}},m_{\mathcal{Y}})$. For $Y,Y^{\prime}\in\mathcal{Y}$, we define $s(Y,Y^{\prime})=\sum_{i\in Y,j\in Y^{\prime}}s(i,j)$ and $m_{\mathcal{Y}}(Y)=|Y|$.

We run $\mathcal{A}$ on $G_{\mathcal{Y}}$ and let $T_{\mathcal{Y}}$ be the hierarchical clustering obtained by $\mathcal{A}$. To extend this to a tree $T$ on $V$, we simply place all the points in each fairlet as leaves under the corresponding vertex in $T_{\mathcal{Y}}$.

We argue that $\mathrm{rev}_{G}(T)\geq\beta\left(1-\frac{2m_{f}}{n}\right)(n-2)\,s(V)$.

Since $\mathcal{A}$ obtains a $\beta$-approximation to hierarchical clustering on $G_{\mathcal{Y}}$, we have $\mathrm{rev}_{G_{\mathcal{Y}}}(T_{\mathcal{Y}})\geq\beta\cdot\sum_{Y,Y^{\prime}\in\mathcal{Y}}s(Y,Y^{\prime})(n-m(Y)-m(Y^{\prime}))$.

Notice that, for any pair of points $u,v$ in the same fairlet $Y\in\mathcal{Y}$, the revenue they get in the tree $T$ is $(n-m(Y))\,s(u,v)$. Then, using $\mathrm{rev}_{G}(T)=\sum_{Y\in\mathcal{Y}}(n-m(Y))\,s(Y)+\mathrm{rev}(T_{\mathcal{Y}})$,

$$\begin{aligned}
\mathrm{rev}_{G}(T)&\geq\sum_{Y\in\mathcal{Y}}\beta(n-m(Y))\,s(Y)+\beta\sum_{Y,Y^{\prime}\in\mathcal{Y}}s(Y,Y^{\prime})(n-m(Y)-m(Y^{\prime}))\\
&\geq\beta(n-2m_{f})\left(\sum_{Y\in\mathcal{Y}}s(Y)+\sum_{Y,Y^{\prime}\in\mathcal{Y}}s(Y,Y^{\prime})\right)\geq\beta\left(1-\frac{2m_{f}}{n}\right)(n-2)\,s(V).
\end{aligned}$$

Thus the resulting tree $T$ is a $\beta\left(1-\frac{2m_{f}}{n}\right)$-approximation of the upper bound. ∎

Appendix C Proofs for (ϵ/n)(\epsilon/n)-locally-optimal local search algorithm

In this section, we prove that Algorithm 1 gives a good fairlet decomposition for the fairlet decomposition objective (3), and that it has polynomial running time.

C.1 Proof for a simplified version of Lemma 15

In Subsection C.2, we will prove Lemma 15. For now, we will consider a simpler version of Lemma 15 in the context of the disparate impact problem of [15], where we have red and blue points and strive to preserve their ratio in all clusters. Chierichetti et al. [15] provided a valid fairlet decomposition in this context, where each fairlet has at most $b$ blue points and $r$ red points. Before going deeper into the analysis, we state the following useful proposition.

Proposition 22.

Let $r_{t}=|\mathit{red}(V)|$ be the total number of red points and $b_{t}=|\mathit{blue}(V)|$ the number of blue points. We have $\max\left\{\frac{r}{r_{t}},\frac{b}{b_{t}}\right\}\leq\frac{2(b+r)}{n}$.

Proof.

Recall that $\mathit{balance}(V)=\frac{b_{t}}{r_{t}}\geq\frac{b}{r}$, and assume wlog that $b_{t}\leq r_{t}$. Since the fractions are positive and $\frac{b_{t}}{r_{t}}\geq\frac{b}{r}$, we know that $\frac{b_{t}}{b_{t}+r_{t}}\geq\frac{b}{b+r}$. Since $b_{t}+r_{t}=n$ we conclude that $b_{t}\geq\frac{b}{b+r}n$. Similarly, we conclude that $\frac{r_{t}}{b_{t}+r_{t}}\leq\frac{r}{b+r}$. Therefore $r_{t}\leq\frac{r}{b+r}n$.

Thus, $\frac{r}{r_{t}}\geq\frac{b+r}{n}\geq\frac{b}{b_{t}}$. Moreover, since $b_{t}\leq r_{t}$ and $b_{t}+r_{t}=n$, we have $r_{t}\geq\frac{1}{2}n$, so $\frac{r}{r_{t}}\leq\frac{2r}{n}\leq\frac{2(b+r)}{n}$. ∎

Using this, we can prove the following lemma, which is a simplified version of Lemma 15.

Lemma 23.

The fairlet decomposition $\mathcal{Y}$ computed by Algorithm 1 has an objective value for (3) of at most $(1+\epsilon)\frac{2(b+r)}{n}d(V)$.

Proof.

Let $Y:V\mapsto\mathcal{Y}$ denote the mapping from a point in $V$ to the fairlet it belongs to. Let $d_{R}(X)=\sum_{u\in\mathit{red}(X)}d(u,X)$ and $d_{B}(X)=\sum_{v\in\mathit{blue}(X)}d(v,X)$. Naturally, $d_{R}(X)+d_{B}(X)=2d(X)$ for any set $X$. For a fairlet $Y_{i}\in\mathcal{Y}$, let $r_{i}$ and $b_{i}$ denote the number of red and blue points in $Y_{i}$.

We first bound the total number of intra-fairlet pairs. Let $x_{i}=|Y_{i}|$; we know that $0\leq x_{i}\leq b+r$ and $\sum_{i}x_{i}=n$. The number of intra-fairlet pairs is at most $\sum_{i}x_{i}^{2}\leq\sum_{i}(b+r)x_{i}=(b+r)n$.

The while loop can end in two cases: 1) if $\mathcal{Y}$ is $(\epsilon/n)$-locally-optimal; 2) if $\sum_{Y_{k}\in\mathcal{Y}}d(Y_{k})\leq\Delta$. Case 2 immediately implies the lemma, thus we focus on case 1. By definition of the algorithm, we know that for any pair $u\in Y(u)$ and $v\in Y(v)$, where $u,v$ have the same color and $Y(u)\neq Y(v)$, the swap does not reduce the objective value by a $(1+\epsilon/n)$ factor. (The same trivially holds if the pair are in the same cluster.)

$$\begin{aligned}
\sum_{Y_{k}}d(Y_{k})&\leq\left(1+\frac{\epsilon}{n}\right)\left(\sum_{Y_{k}}d(Y_{k})-d(u,Y(u))-d(v,Y(v))+d(u,Y(v))+d(v,Y(u))-2d(u,v)\right)\\
&\leq\left(1+\frac{\epsilon}{n}\right)\left(\sum_{Y_{k}}d(Y_{k})-d(u,Y(u))-d(v,Y(v))+d(u,Y(v))+d(v,Y(u))\right).
\end{aligned}$$

After moving terms and some simplification, we get the following inequality:

$$\begin{split}
d(u,Y(u))+d(v,Y(v))&\leq d(u,Y(v))+d(v,Y(u))+\frac{\epsilon/n}{1+\epsilon/n}\sum_{Y_{k}\in\mathcal{Y}}d(Y_{k})\\
&\leq d(u,Y(v))+d(v,Y(u))+\frac{\epsilon}{n}\sum_{Y_{k}\in\mathcal{Y}}d(Y_{k}).
\end{split} \qquad (4)$$

Then we sum up (4), d(u,Y(u))+d(v,Y(v))d(u,Y(v))+d(v,Y(u))+ϵnYk𝒴d(Yk)d(u,Y(u))+d(v,Y(v))\leq d(u,Y(v))+d(v,Y(u))+\frac{\epsilon}{n}\sum_{Y_{k}\in\mathcal{Y}}d(Y_{k}), over every pair of points in 𝑟𝑒𝑑(V)\mathit{red}(V) (even if they are in the same partition).

r_t\sum_{Y_i}d_R(Y_i)\leq\Big(\sum_{Y_i}r_i\,d_R(Y_i)\Big)+\Big(\sum_{u\in\mathit{red}(V)}\sum_{Y_i\neq Y(u)}r_i\,d(u,Y_i)\Big)+r_t^2\frac{\epsilon}{n}\sum_{Y_i}d(Y_i).

Divide both sides by $r_t$ and use the fact that $r_i\leq r$ for all $Y_i$:

\sum_{Y_i}d_R(Y_i)\leq\left(\sum_{Y_i}\frac{r}{r_t}d_R(Y_i)\right)+\left(\sum_{u\in\mathit{red}(V)}\sum_{Y_i\neq Y(u)}\frac{r}{r_t}d(u,Y_i)\right)+\frac{r_t\epsilon}{n}\sum_{Y_i}d(Y_i). (5)

For pairs of points in $\mathit{blue}(V)$ we sum (4) to similarly obtain:

\sum_{Y_i}d_B(Y_i)\leq\left(\sum_{Y_i}\frac{b}{b_t}d_B(Y_i)\right)+\left(\sum_{v\in\mathit{blue}(V)}\sum_{Y_i\neq Y(v)}\frac{b}{b_t}d(v,Y_i)\right)+\frac{b_t\epsilon}{n}\sum_{Y_i}d(Y_i). (6)

Now we sum up (5) and (6). The LHS becomes:

\sum_{Y_i}(d_R(Y_i)+d_B(Y_i))=\sum_{Y_i}\sum_{u\in Y_i}d(u,Y_i)=2\sum_{Y_i}d(Y_i).

For the RHS, the last terms in (5) and (6) sum to $\frac{\epsilon(b_t+r_t)}{n}\sum_{Y_i}d(Y_i)=\epsilon\sum_{Y_i}d(Y_i)$.

The other terms give:

\frac{r}{r_t}\sum_{Y_i}d_R(Y_i)+\frac{r}{r_t}\sum_{u\in\mathit{red}(V)}\sum_{Y_i\neq Y(u)}d(u,Y_i)+\frac{b}{b_t}\sum_{Y_i}d_B(Y_i)+\frac{b}{b_t}\sum_{v\in\mathit{blue}(V)}\sum_{Y_i\neq Y(v)}d(v,Y_i)
\leq\max\Big\{\frac{r}{r_t},\frac{b}{b_t}\Big\}\Big\{\sum_{Y_i}\big(d_R(Y_i)+d_B(Y_i)\big)+\sum_{u\in V}\sum_{Y_i\neq Y(u)}d(u,Y_i)\Big\}
=\max\Big\{\frac{r}{r_t},\frac{b}{b_t}\Big\}\Big\{\sum_{Y_i}\sum_{u\in Y_i}d(u,Y_i)+\sum_{Y_i}\sum_{Y_j\neq Y_i}d(Y_i,Y_j)\Big\}
=2\max\Big\{\frac{r}{r_t},\frac{b}{b_t}\Big\}d(V)
\leq\frac{4(b+r)}{n}d(V).

The last inequality follows from Proposition 22. Altogether, this proves that

2\sum_{Y_k}d(Y_k)\leq\frac{4(b+r)}{n}d(V)+\epsilon\sum_{Y_k}d(Y_k).

Then, $\frac{\sum_{Y_k}d(Y_k)}{d(V)}\leq\frac{2(b+r)}{n}\cdot\frac{1}{1-\epsilon/2}\leq(1+\epsilon)\frac{2(b+r)}{n}$. The final step follows from the fact that $(1+\epsilon)(1-\epsilon/2)=1+\frac{\epsilon}{2}(1-\epsilon)\geq 1$. This proves the lemma. ∎

C.2 Proof for the generalized Lemma 15

Next, we prove Lemma 15 for the more general definition of fairness, $\alpha$-capped fairness.

Proof of [Lemma 15] The proof follows the same logic as in the two-color case: we first use the $(\epsilon/n)$-local optimality of the solution, and then sum the resulting inequality over all pairs of points of the same color.

Let $Y:V\mapsto\mathcal{Y}$ denote the mapping from a point in $V$ to the fairlet it belongs to. Let $R_i(X)$ be the set of points of color $R_i$ in a set $X$, and let $d_{R_i}(X)=\sum_{u\in R_i(X)}d(u,X)$. Naturally, $\sum_i d_{R_i}(X)=2d(X)$ for any set $X$, since the weight of every pair of points is counted twice.

The While loop can end in two cases: 1) $\mathcal{Y}$ is $(\epsilon/n)$-locally-optimal; 2) $\sum_{Y_k\in\mathcal{Y}}d(Y_k)\leq\Delta$. Case 2 immediately implies the lemma, thus we focus on case 1.

By the definition of the algorithm, we know that for any pair $u\in Y(u)$ and $v\in Y(v)$ where $u,v$ have the same color and $Y(u)\neq Y(v)$, swapping $u$ and $v$ does not decrease the objective value by more than a $(1+\epsilon/n)$ factor. (The same trivially holds if the pair are in the same cluster.) We get the following inequality as in the two-color case:

d(u,Y(u))+d(v,Y(v))\leq d(u,Y(v))+d(v,Y(u))+\frac{\epsilon}{n}\sum_{Y_k\in\mathcal{Y}}d(Y_k). (7)

For any color $R_i$, we sum (7) over every pair of points in $R_i(V)$ (even if they are in the same partition):

n_i\sum_{Y_k}d_{R_i}(Y_k)\leq\Big(\sum_{Y_k}r_{ik}\,d_{R_i}(Y_k)\Big)+\Big(\sum_{u\in R_i(V)}\sum_{Y_k\neq Y(u)}r_{ik}\,d(u,Y_k)\Big)+n_i^2\frac{\epsilon}{n}\sum_{Y_k}d(Y_k).

Divide both sides by $n_i$ to get:

\sum_{Y_k}d_{R_i}(Y_k)\leq\left(\sum_{Y_k}\frac{r_{ik}}{n_i}d_{R_i}(Y_k)\right)+\left(\sum_{u\in R_i(V)}\sum_{Y_k\neq Y(u)}\frac{r_{ik}}{n_i}d(u,Y_k)\right)+\frac{n_i\epsilon}{n}\sum_{Y_k}d(Y_k). (8)

Now we sum this inequality over all colors $R_i$. The LHS becomes:

\sum_{Y_k}\sum_i d_{R_i}(Y_k)=\sum_{Y_k}\sum_{u\in Y_k}d(u,Y_k)=2\sum_{Y_k}d(Y_k).

For the RHS, the last term sums to $\frac{\epsilon(\sum_i n_i)}{n}\sum_{Y_k}d(Y_k)=\epsilon\sum_{Y_k}d(Y_k)$. Using the fact that $\frac{r_{ik}}{n_i}\leq\max_{i,k}\frac{r_{ik}}{n_i}$, the other terms sum to:

\sum_i\sum_{Y_k}\frac{r_{ik}}{n_i}d_{R_i}(Y_k)+\sum_i\sum_{u\in R_i(V)}\sum_{Y_k\neq Y(u)}\frac{r_{ik}}{n_i}d(u,Y_k)
\leq\max_{i,k}\frac{r_{ik}}{n_i}\Big\{\sum_{Y_k}\sum_i d_{R_i}(Y_k)+\sum_{u\in V}\sum_{Y_k\neq Y(u)}d(u,Y_k)\Big\}
=\max_{i,k}\frac{r_{ik}}{n_i}\Big\{\sum_{Y_k}\sum_{u\in Y_k}d(u,Y_k)+\sum_{Y_k}\sum_{Y_j\neq Y_k}d(Y_j,Y_k)\Big\}
=2\max_{i,k}\frac{r_{ik}}{n_i}\cdot d(V).

Therefore, putting the LHS and RHS together, we get

2\sum_{Y_k}d(Y_k)\leq 2\max_{i,k}\frac{r_{ik}}{n_i}d(V)+\epsilon\sum_{Y_k}d(Y_k).

Then, $\frac{\sum_{Y_k}d(Y_k)}{d(V)}\leq\max_{i,k}\frac{r_{ik}}{n_i}\cdot\frac{1}{1-\epsilon/2}\leq(1+\epsilon)\max_{i,k}\frac{r_{ik}}{n_i}$. The final step follows from the fact that $(1+\epsilon)(1-\epsilon/2)=1+\frac{\epsilon}{2}(1-\epsilon)\geq 1$.

In the two-color case, the ratio $\max_{i,k}\frac{r_{ik}}{n_i}$ becomes $\max\{\frac{r}{r_t},\frac{b}{b_t}\}$, which can be further bounded by $\frac{2(b+r)}{n}$ (see Proposition 22). If there exists a caplet decomposition such that $\max_{i,k}\frac{r_{ik}}{n_i}=o(1)$, Lemma 15 implies we can build a fair hierarchical clustering tree with $o(1)$ loss in the approximation ratio for the value objective.

Assuming that for every color class $R_i$ we have $n_i\rightarrow+\infty$ as $n\rightarrow+\infty$, we now give a possible caplet decomposition of size $O(t)$ for $\alpha=\frac{1}{t}$ (with $t\leq c$ a positive integer), thus guaranteeing $\max_{i,k}\frac{r_{ik}}{n_i}=o(1)$ for every $i$.

Lemma 24.

For any set $P$ of size $p$ that satisfies the fairness constraint with $\alpha=1/t$, there exists a partition of $P$ into sets $(P_1,P_2,\ldots)$ where each $P_i$ satisfies the fairness constraint and $t\leq|P_i|<2t$.

Proof.

Let $p=m\times t+r$ with $0\leq r<t$; the fairness constraint then ensures that there are at most $m$ elements of each color. Consider the partition obtained by the following process: take an ordering of the elements in which points of the same color occupy consecutive positions, and assign points to sets $P_1,P_2,\ldots,P_m$ in a round-robin fashion. Each set $P_i$ gets at least $t$ elements and at most $t+r<2t$ elements. Since there are at most $m$ elements of each color, each set gets at most one point of any color, and hence all sets satisfy the fairness constraint, as $1\leq\frac{1}{t}|P_i|$. ∎
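To make the construction concrete, the following is a minimal Python sketch of the round-robin partition from this proof; the point representation and function name are our own illustration and are not part of the paper's implementation.

```python
from collections import defaultdict

def caplet_decomposition(points, t):
    """Round-robin partition from the proof of Lemma 24.

    `points` is a list of (id, color) pairs forming a set that satisfies the
    alpha = 1/t capped-fairness constraint, i.e. each color appears at most
    floor(len(points) / t) times.  Returns a list of caplets, each of size in
    [t, 2t) and each again 1/t-capped fair."""
    p = len(points)
    m = p // t                       # number of caplets
    assert m >= 1, "need at least t points"

    # Order the points so that points of the same color are consecutive.
    by_color = defaultdict(list)
    for pt in points:
        by_color[pt[1]].append(pt)
    ordered = [pt for color in by_color for pt in by_color[color]]

    # Round-robin assignment: consecutive same-color points land in distinct
    # caplets, so each caplet gets at most one point of any color.
    caplets = [[] for _ in range(m)]
    for i, pt in enumerate(ordered):
        caplets[i % m].append(pt)
    return caplets

# Example: 14 points, 5 colors, t = 4  ->  caplets of size 5, 5, 4.
pts = [(i, c) for i, c in enumerate("aaabbbcccdddee")]
for caplet in caplet_decomposition(pts, t=4):
    print(caplet)
```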

C.3 Proof for the running time of the $(\epsilon/n)$-locally-optimal fairlet decomposition algorithm

Proof of [Lemma 16] Notice that finding the maximum pairwise distance takes $O(n^2)$ time. Thus, we focus on analyzing the time spent in the While loop.

Let $t$ be the total number of swaps. We argue that $t=\tilde{O}(n/\epsilon)$. If $t=0$ the conclusion trivially holds. Otherwise, consider the decomposition $\mathcal{Y}_{t-1}$ before the last swap. Since the While loop does not terminate there, $\sum_{Y_k\in\mathcal{Y}_{t-1}}d(Y_k)\geq\Delta=\frac{b+r}{n}d_{\max}$. At the beginning, we have $\sum_{Y_k\in\mathcal{Y}}d(Y_k)\leq(b+r)n\cdot d_{\max}=n^2\Delta\leq n^2\sum_{Y_k\in\mathcal{Y}_{t-1}}d(Y_k)$. Since every swap decreases the objective by at least a $(1+\epsilon/n)$ factor, it takes at most $\log_{1+\epsilon/n}(n^2)=\tilde{O}(n/\epsilon)$ iterations for the While loop to finish.

It remains to discuss the running time of each iteration. We argue that each iteration can be completed in $O(n^2)$ time. Before the While loop, keep a record of $d(u,Y_i)$ for each point $u$ and each fairlet $Y_i$; this takes $O(n^2)$ time. Given $d(u,Y_i)$ and the objective value from the previous iteration, it takes $O(1)$ time in the current iteration to compute the new objective value after each candidate swap $(u,v)$, and there are at most $n^2$ such computations before the algorithm either finds a pair to swap or determines that no such pair is left. After the swap, updating all the $d(u,Y_i)$ values takes $O(n)$ time. In total, every iteration takes $O(n^2)$ time.

Therefore, Algorithm 1 takes $\tilde{O}(n^3/\epsilon)$ time.
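As an illustration of the bookkeeping used in this argument, here is a minimal Python sketch of the swap loop for the two-color case, maintaining the cached distances $d(u,Y_i)$ so that each candidate swap is evaluated in $O(1)$ time. The data layout and names are ours, and the sketch omits parts of Algorithm 1 such as the initial random decomposition and the early stop at $\Delta$.

```python
import numpy as np

def local_search_fairlets(D, assign, colors, eps):
    """(eps/n)-local-search refinement of a fairlet decomposition (two colors).

    D is an n x n numpy distance matrix, assign[u] is the index of the fairlet
    containing point u, colors[u] is in {0, 1}.  We maintain dist_to[u, k] =
    d(u, Y_k) so each candidate swap is scored in O(1) time."""
    n = len(assign)
    k = max(assign) + 1
    dist_to = np.zeros((n, k))
    for v in range(n):
        dist_to[:, assign[v]] += D[:, v]
    # Objective (3): sum over fairlets of intra-fairlet distances.
    obj = 0.5 * sum(dist_to[u, assign[u]] for u in range(n))

    improved = True
    while improved:
        improved = False
        for u in range(n):
            for v in range(u + 1, n):
                if colors[u] != colors[v] or assign[u] == assign[v]:
                    continue
                yu, yv = assign[u], assign[v]
                # Change in objective if u and v are swapped.
                delta = (dist_to[u, yv] + dist_to[v, yu]
                         - dist_to[u, yu] - dist_to[v, yv] - 2 * D[u, v])
                if obj + delta < obj / (1 + eps / n):   # profitable swap
                    # Update cached distances in O(n) time.
                    dist_to[:, yu] += D[:, v] - D[:, u]
                    dist_to[:, yv] += D[:, u] - D[:, v]
                    assign[u], assign[v] = yv, yu
                    obj += delta
                    improved = True
    return assign, obj
```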

Appendix D Hardness of optimal fairlet decomposition

Before proving Theorem 7, we recall that the PARTITION INTO TRIANGLES (PIT) problem, defined as follows, is known to be NP-complete [22]. In the definition, we call a clique a $k$-clique if it has $k$ nodes. A triangle is a $3$-clique.

Definition 25.

PARTITION INTO TRIANGLES
(PIT).
Given a graph $G=(V,E)$ with $|V|=3n$, determine whether $V$ can be partitioned into $3$-element sets $S_1,S_2,\ldots,S_n$ such that each $S_i$ forms a triangle in $G$.

The NP-hardness of the PIT problem yields a more general statement.

Definition 26.

PARTITION INTO kk-CLIQUES
(PIKC).
For a fixed number $k$ treated as a constant, given a graph $G=(V,E)$ with $|V|=kn$, determine whether $V$ can be partitioned into $k$-element sets $S_1,S_2,\ldots,S_n$ such that each $S_i$ forms a $k$-clique in $G$.

Lemma 27.

For any fixed constant $k\geq 3$, the PIKC problem is NP-hard.

Proof.

We reduce from the PIT problem to the PIKC problem. For any graph $G=(V,E)$ given to the PIT problem, where $|V|=3n$, construct another graph $G'=(V',E')$. Let $V'=V\cup C_1\cup C_2\cup\cdots\cup C_n$, where each $C_i$ is a $(k-3)$-clique and there is no edge between any two cliques $C_i$ and $C_j$ with $i\neq j$. For every $C_i$, connect all points in $C_i$ to all nodes in $V$.

Now let $G'$ be the input to the PIKC problem. We prove that $G$ can be partitioned into triangles if and only if $G'$ can be partitioned into $k$-cliques. If $V$ has a triangle partition $V=\{S_1,\ldots,S_n\}$, then $V'=\{S_1\cup C_1,\ldots,S_n\cup C_n\}$ is a $k$-clique partition. On the other hand, if $V'$ has a $k$-clique partition $V'=\{S_1',\ldots,S_n'\}$, then $C_1,\ldots,C_n$ must belong to different $k$-cliques, since they are not connected to each other. Without loss of generality assume $C_i\subseteq S_i'$; then $V=\{S_1'\setminus C_1,\ldots,S_n'\setminus C_n\}$ is a triangle partition. ∎

We are ready to prove the theorem.

Proof of [Theorem 7] We prove Theorem 7 by showing that, for a given $z\geq 4$, if there exists a polynomial-time $c$-approximation algorithm $\mathcal{A}$ for (3), it can be used to solve any instance of the PIKC problem with $k=z-1$. This holds for any finite $c$.

Given any graph $G=(V,E)$ that is input to the PIKC problem, where $|V|=kn=(z-1)n$, construct a point set $V'$ with distances in the following way:

  1. $V'=V\cup\{C_1,\ldots,C_n\}$, where each $C_i$ is a singleton.

  2. Color the points in $V$ red, and color all the $C_i$'s blue.

  3. For an edge $e=(u,v)$, let $d(u,v)=0$ if one of the following three conditions holds: 1) $e\in E$; 2) $u,v\in C_i$ for some $C_i$; 3) one of $u,v$ is in $V$ while the other belongs to some $C_i$.

  4. All other pairs have distance $1$.

Obviously the blue points make up a $\nicefrac{1}{z}$ fraction of the input, so each fairlet must have exactly $1$ blue point and $z-1$ red points.

We claim that $G$ has a $k$-clique partition if and only if algorithm $\mathcal{A}$ returns a solution of value $0$ for (3). The same argument as in the proof of Lemma 27 shows that $G$ has a $k$-clique partition if and only if the optimal solution to (3) is $0$. This is equivalent to algorithm $\mathcal{A}$ returning a solution of value $0$, since otherwise the approximation ratio would be unbounded.
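For concreteness, the following Python sketch builds the point set, colors, and distance matrix of this reduction from a PIKC instance. The function name and data layout are our own illustration; the paper does not provide code for this construction.

```python
import numpy as np

def reduction_instance(V, E, z):
    """Build the fairlet-decomposition instance from a PIKC instance
    (G = (V, E), k = z - 1), following the construction in the proof of
    Theorem 7.  Returns (n_points, colors, D): points 0..|V|-1 are the red
    copies of V, points |V|..|V|+n-1 are the blue singletons C_1, ..., C_n,
    and D is the 0/1 distance matrix described above."""
    k = z - 1
    assert len(V) % k == 0, "PIKC instance must have |V| = k * n"
    n = len(V) // k
    total = len(V) + n
    idx = {v: i for i, v in enumerate(V)}            # red points
    colors = ["red"] * len(V) + ["blue"] * n         # one blue point per C_i

    D = np.ones((total, total))                      # default distance 1
    np.fill_diagonal(D, 0)
    for (u, v) in E:                                 # condition 1: edges of G
        D[idx[u], idx[v]] = D[idx[v], idx[u]] = 0
    for c in range(len(V), total):                   # condition 3: C_i to all of V
        D[c, :len(V)] = 0
        D[:len(V), c] = 0
    # Condition 2 (u, v inside the same C_i) is vacuous since each C_i is a singleton.
    return total, colors, D
```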

Appendix E Optimizing cost with fairness

In this section, we present our fair hierarchical clustering algorithm that approximates Dasgupta's cost function and satisfies Theorem 17. Most of the proofs can be found in Section E.1. We consider the problem of equal representation, where vertices are red or blue and $\alpha=1/2$. From now on, whenever we use the word “fair”, we are referring to this fairness constraint. Our algorithm also uses parameters $t$ and $\ell$ such that $n\geq t\ell$ and $t>\ell+108t^2/\ell^2$ for $n=|V|$, and it leverages a $\beta$-approximation for cost and a $\gamma_t$-approximation for minimum weighted bisection. We will assume these are fixed and use them throughout the section.

We will ultimately show that we can find a fair solution that is a sublinear approximation of the unfair optimum $T^*_{\mathrm{unfair}}$, whose cost is a lower bound on the cost of the fair optimum. Our main result is Theorem 17, which is stated in the body of the paper.

The current best approximations described in Theorem 17 are $\gamma_t=O(\log^{3/2}n)$ by [21] and $\beta=\sqrt{\log n}$ by both [19] and [11]. If we set $t=\sqrt{n}\log^{3/4}n$ and $\ell=n^{1/3}\sqrt{\log n}$, then we get Corollary 28.

Corollary 28.

Consider the equal representation problem with two colors. There is an $O\left(n^{5/6}\log^{5/4}n\right)$-approximate fair hierarchical clustering under the cost objective.

The algorithm will be centered around a single clustering, which we call $\mathcal{C}$, that is extracted from an unfair hierarchy. We will then adapt this into a similar, fair clustering $\mathcal{C}'$. To formalize what $\mathcal{C}'$ must satisfy to be sufficiently “similar” to $\mathcal{C}$, we introduce the notion of a $\mathcal{C}$-good clustering. Note that this is not an intuitive set of properties; it is simply what $\mathcal{C}'$ must satisfy in order for our analysis (Lemma 30) to go through.

Definition 29 (Good clustering).

Fix a clustering $\mathcal{C}$ whose cluster sizes are at most $t$. A fair clustering $\mathcal{C}'$ is $\mathcal{C}$-good if it satisfies the following two properties:

  1. For any cluster $C\in\mathcal{C}$, there is a cluster $C'\in\mathcal{C}'$ such that all but (at most) an $O(\ell\gamma_t/t+t\gamma_t/\ell^2)$-fraction of the weight of edges in $C$ is also in $C'$.

  2. Any $C'\in\mathcal{C}'$ is not too much bigger, i.e., $|C'|\leq 6t\ell$.

The hierarchy will consist of a $\mathcal{C}$-good clustering $\mathcal{C}'$ (for a specifically chosen $\mathcal{C}$) as its only nontrivial layer.

Lemma 30.

Let $T$ be a $\beta$-approximation for cost and let $\mathcal{C}$ be a maximal clustering in $T$ under the condition that all cluster sizes are at most $t$. Then a fair two-tiered hierarchy $T'$ whose first level consists of a $\mathcal{C}$-good clustering achieves an $O\left(\frac{n}{t}+t\ell+\frac{n\ell\gamma_t}{t}+\frac{nt\gamma_t}{\ell^2}\right)\beta$-approximation for cost.

Proof.

Since $T$ is a $\beta$-approximation, we know that:

\mathrm{cost}(T)\leq\beta\,\mathrm{cost}(T^*_{\mathrm{unfair}}).

We then use an accounting scheme that bounds the cost contributed by each edge in $T'$ relative to its contribution in $T$, in the hope of extending the bound to $T^*_{\mathrm{unfair}}$. There are three types of edges:

  1. An edge $e$ that is merged into a cluster of size $t$ or greater in $T$, thus contributing at least $t\cdot s(e)$ to the cost. At worst, this edge is merged at the top cluster of $T'$, contributing $n\cdot s(e)$. Thus the factor increase in the cost contributed by $e$ is $O(n/t)$. Since the total contribution of all such edges in $T$ is at most $\mathrm{cost}(T)$, the total contribution of all such edges in $T'$ is at most $O(n/t)\cdot\mathrm{cost}(T)$.

  2. An edge $e$ that started in some cluster $C\in\mathcal{C}$ but does not remain in the corresponding cluster $C'$. We are given that the total weight removed from any such $C$ is an $O(\ell\gamma_t/t+t\gamma_t/\ell^2)$-fraction of the weight contained in $C$. Summing the weight over all clusters in $\mathcal{C}$ gives at most $\mathrm{cost}(T)$, so the total amount of weight moved is at most $O(\ell\gamma_t/t+t\gamma_t/\ell^2)\cdot\mathrm{cost}(T)$. Each such edge contributed at least $2s(e)$ in $T$, as the smallest possible cluster size is two. In $T'$, it may be merged at the top of the tree, for a maximum contribution of $n\cdot s(e)$. Therefore, the total cost across all such edges increases by at most a factor of $n/2$, giving a total cost of at most $O(n\ell\gamma_t/t+nt\gamma_t/\ell^2)\cdot\mathrm{cost}(T)$.

  3. An edge $e$ that starts in some cluster $C\in\mathcal{C}$ and remains in the corresponding $C'\in\mathcal{C}'$. Similarly, it contributed at least $2s(e)$ in $T$, but now we know this edge is merged within $C'$ in $T'$, and $|C'|\leq 6t\ell$. Thus its contribution increases by at most a factor of $3t\ell$. By the same reasoning as for the first edge type, all these edges together contribute at most $O(t\ell)\cdot\mathrm{cost}(T)$.

Putting this all together gives a conservative bound:

\mathrm{cost}(T')\leq O\left(\frac{n}{t}+t\ell+\frac{n\ell\gamma_t}{t}+\frac{nt\gamma_t}{\ell^2}\right)\mathrm{cost}(T).

Finally, we use the fact that $T$ is a $\beta$-approximation of $T^*_{\mathrm{unfair}}$:

\mathrm{cost}(T')\leq O\left(\frac{n}{t}+t\ell+\frac{n\ell\gamma_t}{t}+\frac{nt\gamma_t}{\ell^2}\right)\cdot\beta\cdot\mathrm{cost}(T^*_{\mathrm{unfair}}). ∎
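For intuition on how each edge is charged in this argument, the following Python sketch (our own illustration, with hypothetical names) evaluates the cost of a two-tiered hierarchy whose single nontrivial level is a clustering, charging every edge the size of the smallest cluster of the tree containing both endpoints.

```python
def two_level_cost(n, edges, clustering):
    """Dasgupta cost of the two-tiered hierarchy whose single nontrivial level
    is `clustering` (a list of lists of vertex ids covering 0..n-1).
    `edges` is a list of (u, v, s_uv) similarity triples.

    An edge whose endpoints share a cluster is charged that cluster's size;
    every other edge is only merged at the root and is charged n."""
    cluster_of = {}
    for cid, cluster in enumerate(clustering):
        for v in cluster:
            cluster_of[v] = cid
    cost = 0.0
    for (u, v, s) in edges:
        if cluster_of[u] == cluster_of[v]:
            cost += len(clustering[cluster_of[u]]) * s
        else:
            cost += n * s
    return cost

# Tiny example: 4 vertices, clustering {{0,1},{2,3}}.
print(two_level_cost(4, [(0, 1, 1.0), (1, 2, 0.5), (2, 3, 1.0)],
                     [[0, 1], [2, 3]]))   # 2*1.0 + 4*0.5 + 2*1.0 = 6.0
```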

With this proof in hand, the only thing left to do is find a $\mathcal{C}$-good clustering $\mathcal{C}'$ (Definition 29). Specifically, for the clustering $\mathcal{C}$ from Lemma 30, we find a $\mathcal{C}$-good clustering $\mathcal{C}'$ using the following lemma.

Lemma 31.

There is an algorithm that, given a clustering $\mathcal{C}$ with maximum cluster size $t$, creates a $\mathcal{C}$-good clustering.

The proof is deferred to Section E.1. With these two lemmas, we can prove Theorem 17.

Proof.

Consider our graph $G$. We first obtain a $\beta$-approximation for unfair cost, which yields a hierarchy tree $T$. Let $\mathcal{C}$ be the maximal clustering in $T$ under the constraint that cluster sizes must not exceed $t$. We then apply the algorithm from Lemma 31 to get a $\mathcal{C}$-good clustering $\mathcal{C}'$, and construct $T'$ with $\mathcal{C}'$ as its single nontrivial layer. Applying Lemma 30 gives the desired approximation. ∎

From here, we give only a high-level description of the algorithm behind Lemma 31. For precise details and proofs, see Section E.1. To start, we need some terminology.

Definition 32 (Red-blue matching).

A red-blue matching on a graph $G$ is a matching $M$ such that $M(u)=v$ implies that $u$ and $v$ have different colors.

Red-blue matchings are interesting because they help us ensure fairness. For instance, suppose $M$ is a red-blue matching that is also perfect (i.e., touches all nodes). If the lowest level of a hierarchy consists of a clustering such that $v$ and $M(v)$ are in the same cluster for all $v$, then that level of the hierarchy is fair, since there is a bijection between the red and blue vertices within each cluster. When these clusters are merged further up the hierarchy, fairness is preserved.

Our algorithm will modify an unfair clustering to be fair by combining clusters and moving a small number of vertices. To do this, we will use the following notion.

Definition 33 (Red-blue clustering graph).

Given a graph $G$ and a clustering $\mathcal{C}=\{C_1,\ldots,C_k\}$, we can construct a red-blue clustering graph $H_M=(V_M,E_M)$ associated with some red-blue matching $M$: $H_M$ is the graph with $V_M=\mathcal{C}$ and $(C_i,C_j)\in E_M$ if and only if there is a $v_i\in C_i$ with $M(v_i)=v_j\in C_j$.

In other words, we create a graph of clusters, with an edge between two clusters if and only if at least one vertex in one cluster is matched to some vertex in the other. We now show that the red-blue clustering graph can be used to construct a fair clustering from an unfair clustering.

Proposition 34.

Let $H_M$ be a red-blue clustering graph on a clustering $\mathcal{C}$ with a perfect red-blue matching $M$. Let $\mathcal{C}'$ be constructed by merging all the clusters in each connected component of $H_M$. Then $\mathcal{C}'$ is fair.

Proof.

Consider some $C\in\mathcal{C}'$. By construction, it corresponds to a connected component of $H_M$. By the definition of $H_M$, for any vertex $v\in C$ we have $M(v)\in C$. That means $M$, restricted to $C$, defines a bijection between the red and blue nodes of $C$. Therefore, $C$ has an equal number of red and blue vertices and hence is fair. ∎
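A minimal Python sketch of Definition 33 and Proposition 34, assuming a perfect red-blue matching that respects no structure other than the clustering is already given: it builds the components of $H_M$ implicitly with union-find and merges the clusters of each component. The data structures and names are ours.

```python
def merge_by_matching(clustering, matching):
    """Given `clustering` (list of lists of vertex ids) and a perfect red-blue
    `matching` (list of (u, v) pairs), form the red-blue clustering graph H_M
    implicitly via union-find and return the clustering obtained by merging
    the clusters of each connected component (Proposition 34: it is fair)."""
    cluster_of = {v: cid for cid, cluster in enumerate(clustering) for v in cluster}
    parent = list(range(len(clustering)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for (u, v) in matching:                 # each match induces an edge of E_M
        parent[find(cluster_of[u])] = find(cluster_of[v])

    merged = {}
    for cid, cluster in enumerate(clustering):
        merged.setdefault(find(cid), []).extend(cluster)
    return list(merged.values())

# Clusters {0,1,2},{3,4,5} with matches (0,3),(1,4),(2,5) merge into one fair cluster.
print(merge_by_matching([[0, 1, 2], [3, 4, 5]], [(0, 3), (1, 4), (2, 5)]))
```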

We start by extracting a clustering $\mathcal{C}$ from an unfair hierarchy $T$ that approximates cost. We then construct a red-blue clustering graph $H_M$ with a perfect red-blue matching $M$, and use the components of $H_M$ to define the first version of the clustering $\mathcal{C}'$. Achieving a perfect matching, however, requires a non-trivial way of moving vertices between the clusters of $\mathcal{C}$.

We now give an overview of our algorithm in Steps (A)–(G). For a full description, see our pseudocode in Section H.

(A) Get an unfair approximation $T$. We start by running a $\beta$-approximation for cost in the unfair setting. This gives us a tree $T$ such that $\mathrm{cost}(T)\leq\beta\cdot\mathrm{cost}(T^*_{\mathrm{unfair}})$.

(B) Extract a $t$-maximal clustering. Given $T$, we find the maximal clustering $\mathcal{C}$ such that (i) every cluster in the clustering has size at most $t$, and (ii) any cluster above these clusters in $T$ has size more than $t$.

(C) Combine clusters to be of size $t$ to $3t$. We now gradually transform $\mathcal{C}$ into $\mathcal{C}'$ over a number of steps. In the first step, we define $\mathcal{C}_0$ by merging small clusters ($|C|\leq t$) until the merged size is between $t$ and $3t$. Thus clusters in $\mathcal{C}$ are contained within clusters in $\mathcal{C}_0$, and all clusters have size between $t$ and $3t$.

(D) Find cluster excesses. Next, we strive to make our clustering more fair. We do this by trying to find an underlying matching between red and blue vertices that agrees with $\mathcal{C}_0$ (matched pairs lie in the same cluster). If the matching were perfect, then the clusters in $\mathcal{C}_0$ would have equal red and blue representation; however, this is not guaranteed initially. We start by conceptually matching as many red and blue vertices within clusters as we can. Note that we do not actually create this matching; we just reserve the space for it to ensure fairness, since some of these vertices may be moved later on. The remaining unmatched vertices in each cluster are then either entirely red or entirely blue. We call their number the excess and their color the excess color, and we label each cluster with both (see the sketch below).
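A small sketch of Step (D), under the assumption that vertices carry a two-color labelling; the helper name is ours.

```python
from collections import Counter

def cluster_excess(cluster, colors):
    """Step (D): for one cluster (a list of vertex ids), report its excess
    color and excess amount under a labelling colors[v] in {"red", "blue"}.
    The remaining vertices are conceptually matched red-blue within the cluster."""
    counts = Counter(colors[v] for v in cluster)
    red, blue = counts.get("red", 0), counts.get("blue", 0)
    excess_color = "red" if red >= blue else "blue"
    return excess_color, abs(red - blue)

# A cluster with 3 red and 1 blue vertex has red excess 2.
print(cluster_excess([0, 1, 2, 3], {0: "red", 1: "red", 2: "red", 3: "blue"}))
```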

(E) Construct the red-blue clustering graph. Next, we construct $H_M=(V_M,E_M)$, our red-blue clustering graph on $\mathcal{C}_0$. Let $V_M=\mathcal{C}_0$. In addition, let the within-cluster matches from Step (D) be contained in $M$. With this start, we run a matching process that simultaneously constructs $E_M$ and the rest of $M$. Note that the unmatched vertices are precisely the excess vertices in each cluster. We match them with the following iterative process, given our parameter $\ell$:

  1. Select a cluster $C_i\in V_M$ with excess at least $\ell$ to start a new connected component of $H_M$. Without loss of generality, say its excess color is red.

  2. Find a cluster $C_j\in V_M$ whose excess color is blue and whose excess is at least $\ell$. Add $(C_i,C_j)$ to $E_M$.

  3. Say without loss of generality that the excess of $C_i$ is less than that of $C_j$. Then match all the excess of $C_i$ to vertices in the excess of $C_j$. Now $C_j$ has a smaller excess.

  4. If $C_j$ has excess less than $\ell$, or $C_j$ is the $\ell$-th cluster in this component, end this component and start over at (1) with a new cluster.

  5. Otherwise, use $C_j$ as our reference and continue constructing this component at (2).

  6. Stop when there are no more clusters with over $\ell$ excess that are not in a component (or all remaining such clusters have the same excess color).

We would like to construct $\mathcal{C}'$ by merging all clusters in each component. This would be fair if $M$ were a perfect matching; however, this is not true yet. We handle this in the next step.

(F) Fix unmatched vertices. We now want to match the excess vertices that remain unmatched. We do this by bringing vertices from other clusters into the clusters that have unmatched excess, starting with all the small unmatched excess. Note that some clusters were never used in Step (E) because they had small excess to start with, which means they have many internal red-blue matches. We remove $t^2/\ell^2$ of these and put them into clusters in need. For the other vertices, we will later describe a process where $t/\ell$ of the clusters can contribute $108t^2/\ell^2$ vertices to account for the unmatched excess. Thus clusters lose at most $108t^2/\ell^2$ vertices, and we account for all unmatched vertices. Call the new clustering $\mathcal{C}_1$. Now $M$ is perfect and $H_M$ is unchanged.

(G) Define $\mathcal{C}'$. Finally, we create the clustering $\mathcal{C}'$ by merging the clusters in each component of $H_M$. Proposition 34 ensures that $\mathcal{C}'$ is fair. In addition, we will show that cluster sizes in $\mathcal{C}_1$ are at most $6t$, so $\mathcal{C}'$ has the desired upper bound of $6t\ell$ on cluster size. Finally, we removed at most $\ell+t^2/\ell^2$ vertices from each cluster. This is the desired $\mathcal{C}$-good clustering.

Further details, and the proofs that the above sequence of steps achieves the desired approximation, can be found in the next section. While the approximation factor obtained is not as strong as the ones for the revenue or value objectives with fairness, we believe cost is a much harder objective under fairness constraints.

E.1 Proof of Theorem  17

This algorithm contains a number of components; we discuss the claims made in the description step by step. In Step (A), we simply use any $\beta$-approximation for the unfair objective. Step (B) is also quite simple. At this point, all that is left is to show how to find $\mathcal{C}'$, i.e., to prove Lemma 31 (introduced in Section 6). This happens in the steps following Step (B). In Step (C), we apply our first changes to the starting clustering from $T$. We now prove that the cluster sizes can be forced to lie between $t$ and $3t$.

Lemma 35.

Given a clustering $\mathcal{C}$, we can construct a clustering $\mathcal{C}_0$ where each $C\in\mathcal{C}_0$ is a union of clusters in $\mathcal{C}$ and $t\leq|C|<3t$.

Proof.

We iterate over all clusters in $\mathcal{C}$ whose size is less than $t$ and continually merge them until we create a cluster of size $\geq t$. Note that since the last two clusters we merged each had size $<t$, this cluster has size $t\leq|C|<2t$. We then close this cluster and continue merging the rest of the clusters. At the end, if we are left with a single cluster of size $<t$, we simply merge it with any other cluster, which will then have size $t\leq|C|<3t$. ∎
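A minimal Python sketch of this greedy merging, assuming the input clusters are given as lists of vertex ids of size at most $t$; names are ours.

```python
def merge_small_clusters(clusters, t):
    """Lemma 35: greedily merge clusters of size < t (each input cluster has
    size <= t) until every output cluster has size in [t, 3t)."""
    merged, current = [], []
    for c in clusters:
        if len(c) >= t:                  # already size exactly t: keep as is
            merged.append(list(c))
            continue
        current.extend(c)
        if len(current) >= t:            # last two pieces were each < t, so < 2t total
            merged.append(current)
            current = []
    if current:                          # leftover piece of size < t
        if merged:
            merged[-1].extend(current)   # final cluster stays below 3t
        else:
            merged.append(current)
    return merged

# Clusters of sizes 2, 2, 3, 1 with t = 4 become two clusters of size 4.
print([len(c) for c in merge_small_clusters([[0, 1], [2, 3], [4, 5, 6], [7]], t=4)])
```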

Step (D) describes a rather simple process: in each cluster, we count the number of points of each color, find which is larger, and compute the difference. No claims are made here.

Step (E) defines a more careful process. We describe this process and its results here.

Lemma 36.

There is an algorithm that, given a clustering $\mathcal{C}_0$ with $t\leq|C|\leq 3t$ for every $C\in\mathcal{C}_0$, constructs a red-blue clustering graph $H_M=(V_M,E_M)$ on $\mathcal{C}_0$ with an underlying matching $M$ such that:

  1. $H_M$ is a forest, and its maximum component size is $\ell$.

  2. For every $(C_i,C_j)\in E_M$, there are at least $\ell$ matches between $C_i$ and $C_j$ in $M$; in other words, $|M(C_i)\cap C_j|\geq\ell$.

  3. For most $C_i\in V_M$, at most $\ell$ vertices in $C_i$ are unmatched in $M$. The only exceptions to this rule are (1) exactly one cluster in every $\ell$-sized component of $H_M$, and (2) at most $n/\ell^2$ additional clusters.

Proof.

We use precisely the process from Step (E). Let $V_M=\mathcal{C}_0$. $H_M$ will look like a bipartite graph with some entirely isolated nodes. We then construct the components of $H_M$ one by one such that (1) the maximum component size is $\ell$, and (2) every edge represents at least $\ell$ matches in $M$.

Let us show that the three conditions of the lemma are satisfied. For condition 1, note that we always halt the construction of a component once it reaches size $\ell$, so no component can exceed size $\ell$. In addition, for every edge added to the graph, at least one of its endpoints now has small excess and will not be considered later in the process. Thus no cycles can be created, so $H_M$ is a forest.

For condition 2, consider the construction of any edge $(C_i,C_j)\in E_M$. At this point, $C_i$ and $C_j$ are only considered if they are clusters with different-color excess of at least $\ell$ each. In the next part of the algorithm, we match as much excess as we can between the two clusters. Therefore, there must be at least $\ell$ underlying matches.

Finally, condition 3 is achieved by the completion condition: when the process completes, there are no isolated clusters with more than $\ell$ excess (besides possibly leftover clusters of the same excess color). Whenever we add a cluster to a component, either that cluster matches all of its excess, or the cluster it becomes adjacent to matches all of its excess. Therefore, at any time, any component has at most one cluster with any excess at all. A component can end up smaller than $\ell$ (without being the final component) only when, in the final addition, both clusters end up with less than $\ell$ excess; therefore, no cluster in such a component retains $\ell$ or more excess. For an $\ell$-sized component, by the rule mentioned before, only one cluster can remain with $\geq\ell$ excess. When the algorithm completes, we are left with a number of large-excess clusters of the same excess color, say red without loss of generality. Assume for contradiction that there are more than $n/2$ such clusters, so there is at least $n\ell/2$ red excess in total. Since we started with half red and half blue vertices, the remaining excess in the rest of the clusters must match up with the large red excess. Thus the remaining at most $n/2$ clusters must have at least $n\ell/2$ blue excess in total, but this is only achievable if some of them still have large excess, which is a contradiction. Thus we satisfy condition 3. ∎

This concludes Step (E). In Step (F), we transform the underlying clustering $\mathcal{C}_0$ so that we can achieve a perfect matching $M$. This requires removing a small number of vertices from some clusters in $\mathcal{C}_0$ and putting them in clusters that have unmatched vertices. This process at most doubles cluster sizes.

Lemma 37.

There is an algorithm that, given a clustering $\mathcal{C}_0$ with $t\leq|C|\leq 3t$ for every $C\in\mathcal{C}_0$, finds a clustering $\mathcal{C}_1$ and an underlying matching $M'$ such that:

  1. There is a bijection between $\mathcal{C}_0$ and $\mathcal{C}_1$.

  2. For any cluster $C_0\in\mathcal{C}_0$ and its corresponding $C_1\in\mathcal{C}_1$, $|C_0|-|C_1|\leq\ell+108t^2/\ell^2$; that is, at most $\ell+108t^2/\ell^2$ vertices are removed from $C_0$ in the construction of $C_1$.

  3. For all $C_1\in\mathcal{C}_1$, $t-\ell-108t^2/\ell^2\leq|C_1|\leq 6t$.

  4. $M'$ is a perfect red-blue matching.

  5. $H_M$ is a red-blue clustering graph of $\mathcal{C}_1$ with matching $M'$, perhaps with additional edges.

Proof.

Use Lemma 36 to find the red-blue clustering graph $H_M$ and its underlying matching $M$. We then know that only one cluster in every $\ell$-sized component, plus at most one other cluster, can have excess larger than $\ell$. Since cluster sizes are at least $t$, $|V_M|\leq n/t$. This means that at most $n/(t\ell)+1=(n+t\ell)/(t\ell)\leq 2n/(t\ell)$ clusters need more than $\ell$ vertices. Since the excess is upper bounded by the cluster size, which is at most $3t$, there are at most $6n/\ell$ vertices in large excess that need matches.

We start by removing all small-excess vertices from their clusters; this removes at most $\ell$ vertices from any cluster. These vertices are then placed in clusters with large excess of the right color. If we run out of large excess of the right color that needs matches, then, since the total numbers of red and blue vertices are balanced, we can instead transfer the unmatched small-excess red vertices to clusters with a small amount of unmatched blue vertices. In either case, this accounts for all the small unmatched excess. Now all we need to account for is at most $6n/\ell$ unmatched vertices in large-excess clusters. At this point, note that the large excess is balanced between red and blue. From now on, we remove matches from within and between clusters to contribute to this excess. Since breaking a match always contributes the same number of red and blue vertices, we do not have to worry about the balance of colors. We now describe how to distribute these contributions across a large number of clusters.

Consider the clusters that (ignoring the matching $M$) started out with at most $\ell$ excess. Their non-excess portion, which has size at least $t-\ell$, is entirely matched within the cluster. We simply remove $t^2/\ell^2$ of these matches to contribute.

Otherwise, we consider the clusters that started out with large excess. We must break matches without breaking too many incident to a single cluster. For every tree in $H_M$ (recall that $H_M$ is a forest by Lemma 36), start at the root and do a breadth-first search over all internal vertices. At every vertex we visit, break $\ell$ matches between it and one of its children (recall by Lemma 36 that each edge in $H_M$ represents at least $\ell$ inter-cluster matches). Each break thus contributes $2\ell$ vertices. We do this for every internal vertex. Since an edge represents at least $\ell$ matches and the maximum cluster size is $3t$, any vertex can have at most $3t/\ell$ children. Thus the fraction of vertices in $H_M$ that correspond to a contribution of $2\ell$ vertices is at least $\ell/(3t)$.

Clearly, the worst case is when all vertices in $H_M$ have large excess, as this means that fewer clusters are guaranteed to be able to contribute. By Lemma 36, at least $n/2$ of these are part of completed connected components (i.e., of size $\ell$, or with each cluster having small remaining excess). So consider this case. Since $|V_M|\geq n/(3t)$, this process yields $n\ell^2/(18t^2)$ vertices. To reach $6n/\ell$ vertices, we must therefore run $108t^2/\ell^3$ iterations. If an edge no longer represents $\ell$ matches because of an earlier iteration, we treat it as a non-edge for the rest of the process. The only remaining concern is a cluster $C$ that becomes isolated in $H_M$ during the process. We know $C$ began with at least $t$ vertices, and at most $\ell$ were removed when removing small excess. So as long as $t>\ell+108t^2/\ell^2$, we can remove the remaining $108t^2/\ell^2$ vertices from the non-excess part of $C$ (the rest must be non-excess) in the same way as for the vertices that were isolated in $H_M$ to start. Thus, we can account for the entire set of unmatched vertices without removing more than $108t^2/\ell^2$ vertices from any given cluster.

Now we consider the conditions. Condition 1 is obviously satisfied because we only modify clusters in $\mathcal{C}_0$; we do not remove them. The second condition holds because of our accounting scheme, in which we remove at most $\ell+108t^2/\ell^2$ vertices per cluster; the same is true for the lower bound in condition 3. When we add vertices to new clusters, since we only add a vertex to match an unmatched vertex, we at most double the cluster size, so the maximum cluster size is $6t$.

For the fourth condition, note that we explicitly ran this process until all unmatched vertices became matched, and every endpoint of a match we broke was used to create a new match. Thus the new matching, which we call $M'$, is perfect, and it is still red-blue. Finally, note that we did not create any new matches between clusters; therefore, no match in $M'$ can violate $H_M$. Thus condition 5 is met. ∎

Finally, we construct our final clustering in Step (G). However, to satisfy the conditions of Lemma 30, we must first argue about the weight lost from each cluster.

Lemma 38.

Consider any clustering $\mathcal{C}$ with cluster sizes between $t$ and $6t$. Say each cluster has a specified number $r$ of red vertices to remove and a number $b$ of blue vertices to remove such that $r+b\leq x$ for some $x$, and $r$ (resp. $b$) is nonzero only if the number of red (resp. blue) vertices in the cluster is $O(n)$. Then we can remove the desired number of vertices of each color while removing at most an $O((x/t)\gamma_t)$-fraction of the weight originally contained within the cluster.

Proof.

Consider some cluster $C$ with parameters $r$ and $b$. We first focus on removing red vertices. Let $C_r$ be the set of red vertices in $C$. We create a graph $K$ corresponding to this cluster as follows. Let $b_0$ be a vertex representing all blue vertices of $C$, let $b_0'$ be the “complement” vertex to $b_0$, and let $R$ be a set of vertices $r_i$ corresponding to the red vertices of $C$. We also add a set of $2r-|C_r|+2X$ dummy vertices, where $X$ is just some large value that makes $2r-|C_r|+X>0$. Of these, $2r-|C_r|+X$ dummy vertices are connected to $b_0$ with infinite edge weight (denote these $\delta_i$), and the other $X$ are connected to $b_0'$ with infinite edge weight (denote these $\delta_i'$). This ensures that $b_0$ and $b_0'$ end up in the same parts as their corresponding dummies. Let $s_G$ and $s_K$ be the similarity functions in the original graph and the new graph, respectively:

s_K(b_0,\delta_i)=\infty,\qquad s_K(b_0',\delta_i')=\infty.

The vertex $b_0$ is also connected to every $r_i$ with the following weight (where $C_b$ is the set of blue vertices in $C$):

s_K(b_0,r_i)=\sum_{b_j\in C_b}s_G(r_i,b_j)+\frac{1}{2}\sum_{r_j\in R\setminus\{r_i\}}s_G(r_i,r_j).

This edge represents the cumulative edge weight between $r_i$ and all blue vertices. The additional summation term, which contains the edge weights between $r_i$ and all other red vertices, is necessary to ensure that the bisection cut also accounts for the edge weights between pairs of removed red vertices.

Next, the edge weights between red vertices must contain the other half of the corresponding edge weight in the original graph:

s_K(r_i,r_j)=\frac{1}{2}s_G(r_i,r_j).

Now, note that besides $b_0$ and $b_0'$ there are $2r-|C_r|+2X+|C_r|=2r+2X$ vertices in total, so a bisection partitions the graph into two sides of size $r+X+1$. Obviously, in any approximation, $b_0$ must be grouped with all the $\delta_i$ and $b_0'$ must be grouped with all the $\delta_i'$. This means the $b_0$ side must contain $|C_r|-r$ of the $R$ vertices, and the $b_0'$ side must contain the other $r$. These $r$ vertices in the latter side are the ones we select to remove.

Consider any set $S$ of $r$ red vertices in $K$. Together with $b_0'$ and its dummies, it forms a valid bisection side. We now show that the edge weight cut by this bisection is exactly the edge weight lost by removing $S$ from $C$. We can check this algebraically, starting by breaking the weight of the cut into the weight between the red vertices in $S$ and $b_0$, and the weight between the red vertices in $S$ and the red vertices not in $S$:

s_K(S,V(K)\setminus S) = \sum_{r_i\in S}s_K(b_0,r_i)+\sum_{r_i\in S,\,r_j\in R\setminus S}s_K(r_i,r_j)
= \sum_{r_i\in S}\Big(\sum_{b_j\in C_b}s_G(r_i,b_j)+\frac{1}{2}\sum_{r_j\in R\setminus\{r_i\}}s_G(r_i,r_j)\Big)+\sum_{r_i\in S,\,r_j\in R\setminus S}\frac{1}{2}s_G(r_i,r_j)
= \sum_{r_i\in S}\Big(\sum_{b_j\in C_b}s_G(r_i,b_j)+\frac{1}{2}\sum_{r_j\in R\setminus\{r_i\}}s_G(r_i,r_j)+\frac{1}{2}\sum_{r_j\in R\setminus S}s_G(r_i,r_j)\Big).

Notice that the last two summations overlap: each contributes half the edge weight between $r_i$ and the vertices in $R\setminus S$, so those edges contribute their entire edge weight, while the remaining red vertices in $S\setminus\{r_i\}$ contribute only half of theirs. We can then redistribute the summation:

s_K(S,V(K)\setminus S) = \sum_{r_i\in S}\Big(\sum_{b_j\in C_b}s_G(r_i,b_j)+\frac{1}{2}\sum_{r_j\in S\setminus\{r_i\}}s_G(r_i,r_j)+\sum_{r_j\in R\setminus S}s_G(r_i,r_j)\Big)
= \sum_{r_i\in S,\,b_j\in C_b}s_G(r_i,b_j)+\frac{1}{2}\sum_{r_i\in S,\,r_j\in S\setminus\{r_i\}}s_G(r_i,r_j)+\sum_{r_i\in S,\,r_j\in R\setminus S}s_G(r_i,r_j).

In the middle summation, note that every edge $e=(u,v)$ with both endpoints in $S$ is counted twice: once with $r_i=u$, $r_j=v$ and once with $r_i=v$, $r_j=u$. We can therefore rewrite this as:

s_K(S,V(K)\setminus S)=\sum_{r_i\in S,\,b_j\in C_b}s_G(r_i,b_j)+\sum_{\{r_i,r_j\}\subseteq S}s_G(r_i,r_j)+\sum_{r_i\in S,\,r_j\in R\setminus S}s_G(r_i,r_j).

When we remove $S$, we remove the connections between $S$ and the blue vertices, the connections within $S$, and the connections between $S$ and the red vertices not in $S$. This is precisely what the expression above accounts for. Therefore, any bisection of $K$ directly corresponds to removing a set $S$ of $r$ red vertices from $C$. If we have a $\gamma_t$-approximation for minimum weighted bisection, this yields a $\gamma_t$-approximation for the smallest weight loss achievable by removing $r$ red vertices.

Now we must compare the optimal way to remove $r$ red vertices against the total weight in the cluster. Let $\rho=|C_r|$ be the number of red vertices in the cluster. The total number of possible cuts that isolate $r$ red vertices is $\binom{\rho}{r}$; let $\mathcal{S}$ be the set of all such cuts. If we sum the weight of all possible cuts (where the weight of a cut is the weight between the $r$ removed vertices and all vertices, including each other), every red-red and blue-red edge is counted several times. A red-red edge is counted if either of its endpoints is in $S\in\mathcal{S}$, which happens $2\binom{\rho-1}{r-1}-\binom{\rho-2}{r-2}\leq 2\binom{\rho-1}{r-1}$ times. A blue-red edge is counted if its red endpoint is in $S$, which happens $\binom{\rho-1}{r-1}\leq 2\binom{\rho-1}{r-1}$ times. Since no blue-blue edge is counted at all, every edge is counted at most $2\binom{\rho-1}{r-1}$ times. Therefore, summing over all these cuts gives at most $2\binom{\rho-1}{r-1}$ times the weight of all edges in $C$:

\sum_{S\in\mathcal{S}}s(S)\leq 2\binom{\rho-1}{r-1}s(C).

Let $OPT$ be the minimum-weight cut. Since there are $\binom{\rho}{r}$ cuts, the left-hand side is bounded below by $\binom{\rho}{r}s(OPT)$:

\binom{\rho}{r}s(OPT)\leq 2\binom{\rho-1}{r-1}s(C).

We can now simplify:

s(OPT)\leq\frac{2r}{\rho}s(C).

But note that we are given $\rho=O(t)$. So if we have a $\gamma_t$-approximation for the minimum weighted bisection problem, we can find a way to remove $r$ red vertices such that the removed weight is at most an $O(r/t)\gamma_t$-fraction of the cluster weight. We can do the same to bound the removal of the blue vertices, which yields a total weight removal of $O(x/t)\gamma_t$. ∎
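To make the reduction concrete, the following Python sketch builds the gadget graph $K$ of this proof for one cluster. The representation is our own, and a black-box (e.g., $\gamma_t$-approximate) minimum weighted bisection solver is assumed separately rather than implemented here.

```python
import itertools
import numpy as np

INF = float("inf")

def build_bisection_gadget(s, red, blue, r):
    """Build the weighted graph K from the proof of Lemma 38 for a cluster
    with red vertex set `red`, blue vertex set `blue`, similarity function
    `s(u, v)`, and a target of removing r red vertices.

    Returns (labels, W): `labels` lists the gadget vertices (b0, b0', the red
    vertices, then dummies) and W is the symmetric weight matrix.  A minimum
    weighted bisection of (labels, W) isolates, on the b0' side, the r red
    vertices whose removal loses the least intra-cluster weight."""
    X = len(red) + r + 1                              # any X with 2r - |C_r| + X > 0
    n_dummy_b0, n_dummy_b0p = 2 * r - len(red) + X, X
    labels = (["b0", "b0'"] + [("r", v) for v in red]
              + [("d", i) for i in range(n_dummy_b0)]
              + [("d'", i) for i in range(n_dummy_b0p)])
    pos = {lab: i for i, lab in enumerate(labels)}
    W = np.zeros((len(labels), len(labels)))

    def add(a, b, w):
        W[pos[a], pos[b]] = W[pos[b], pos[a]] = w

    for i in range(n_dummy_b0):                       # pin the dummies to b0 / b0'
        add("b0", ("d", i), INF)
    for i in range(n_dummy_b0p):
        add("b0'", ("d'", i), INF)
    for u in red:                                     # b0 aggregates blue weight + half red weight
        w = sum(s(u, b) for b in blue) + 0.5 * sum(s(u, v) for v in red if v != u)
        add("b0", ("r", u), w)
    for u, v in itertools.combinations(red, 2):       # red-red edges keep half their weight
        add(("r", u), ("r", v), 0.5 * s(u, v))
    return labels, W
```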

Finally, we can prove Lemma 31, which satisfies the conditions of Lemma 30.

Proof.

Start by running the procedure of Lemma 35 on $\mathcal{C}$ to yield $\mathcal{C}_0$. Then apply Lemma 37 to yield $\mathcal{C}_1$ with red-blue clustering graph $H_M$ and underlying perfect red-blue matching $M'$. We create $\mathcal{C}'$ by merging the components of $H_M$ into clusters. Since the maximum component size is $\ell$ and the maximum cluster size in $\mathcal{C}_1$ is $6t$, the maximum cluster size in $\mathcal{C}'$ is $6t\ell$. This satisfies condition 2 of being $\mathcal{C}$-good; in addition, $\mathcal{C}'$ is fair by Proposition 34.

Finally, we use the fact that we moved at most $\ell+108t^2/\ell^2$ vertices out of any cluster, and note that we only move vertices of a certain color if we have $O(n)$ of that color in that cluster. Then by Lemma 38, we lose at most an $O(\ell\gamma_t/t+t\gamma_t/\ell^2)$-fraction of the weight of any cluster. This satisfies condition 1, and therefore $\mathcal{C}'$ is $\mathcal{C}$-good. ∎

Appendix F Additional experimental results for revenue

We have conducted experiments on the four datasets for the revenue objective as well. Table 5 shows the ratio of the fair tree built by using average-linkage on different fairlet decompositions. We run Algorithm 1 on the subsamples with Euclidean distances, and then convert distances into similarity scores using the transformation $s(i,j)=\frac{1}{1+d(i,j)}$. We test the performance, for the revenue objective, of the initial random fairlet decomposition and of the final fairlet decomposition found by Algorithm 1, using the converted similarity scores.

Table 5: Impact of different fairlet decompositions on the ratio over original average-linkage, in percentage (mean ± std. dev).
Samples 100 200 400 800 1600
CensusGender, initial 74.12 ± 2.52  76.16 ± 3.42  74.15 ± 1.44  70.17 ± 1.01  65.02 ± 0.79
final 92.32 ± 2.70  95.75 ± 0.74  95.68 ± 0.96  96.61 ± 0.60  97.45 ± 0.19
CensusRace, initial 65.67 ± 7.53  65.31 ± 3.74  61.97 ± 2.50  59.59 ± 1.89  56.91 ± 0.82
final 85.38 ± 1.68  92.98 ± 1.89  94.99 ± 0.52  96.86 ± 0.85  97.24 ± 0.63
BankMarriage, initial 75.19 ± 2.53  73.58 ± 1.05  74.03 ± 1.33  73.68 ± 0.59  72.94 ± 0.63
final 93.88 ± 2.16  96.91 ± 0.99  96.82 ± 0.36  97.05 ± 0.71  97.81 ± 0.49
BankAge, initial 77.48 ± 1.45  78.28 ± 1.75  76.40 ± 1.65  75.95 ± 0.77  75.33 ± 0.28
final 91.26 ± 2.66  95.74 ± 2.17  96.45 ± 1.56  97.31 ± 1.94  97.84 ± 0.92

Appendix G Additional experimental results for multiple colors

We ran experiments with multiple colors; the results are analogous to those in the paper. We tested both the Census and Bank datasets, with age as the protected feature. For both datasets we set 4 age ranges to get 4 colors and used $\alpha=1/3$. We ran the fairlet decomposition from [2] and compared the fair hierarchical clustering's performance to that of average-linkage. The age ranges and the number of data points belonging to each color are reported in Table 6. Colors are named $\{1,2,3,4\}$ in descending order of the number of points of that color. Vanilla average-linkage was found to be unfair: if we take the layer of clusters in the tree that is one level above the leaves, there is always one cluster with $\alpha>\frac{1}{3}$ under the definition of $\alpha$-capped fairness, showing the tree to be unfair.

Table 6: Age ranges for all four colors for Census and Bank.
Dataset Color 1 Color 2 Color 3 Color 4
CensusMultiColor (26,38]: 9796  (38,48]: 7131  (48,+∞): 6822  (0,26]: 6413
BankMultiColor (30,38]: 14845  (38,48]: 12148  (48,+∞): 11188  (0,30]: 7030

As in the main body, in Table 7 we show for each dataset the $\mathrm{ratio}_{\mathrm{value}}$ both at the time of initialization (Initial) and after using the local search algorithm (Final), where $\mathrm{ratio}_{\mathrm{value}}$ is the ratio between the performance of the tree built on top of the fairlets and that of the tree built directly by average-linkage.

Table 7: Impact of Algorithm 1 on $\mathrm{ratio}_{\mathrm{value}}$ in percentage (mean ± std. dev).
Samples 200 400 800 1600 3200 6400
CensusMultiColor, initial 88.55 ± 0.87  88.74 ± 0.46  88.45 ± 0.53  88.68 ± 0.22  88.56 ± 0.20  88.46 ± 0.30
final 99.01 ± 0.09  99.41 ± 0.57  99.87 ± 0.28  99.80 ± 0.27  100.00 ± 0.14  99.88 ± 0.30
BankMultiColor, initial 90.98 ± 1.17  91.22 ± 0.84  91.87 ± 0.32  91.70 ± 0.30  91.70 ± 0.18  91.69 ± 0.14
final 98.78 ± 0.22  99.34 ± 0.32  99.48 ± 0.16  99.71 ± 0.16  99.80 ± 0.08  99.84 ± 0.05

Table 8 shows the performance of trees built by average-linkage on different fairlets, for the revenue objective. As in the main body, the similarity score between any two points $i,j$ is $s(i,j)=\frac{1}{1+d(i,j)}$. The entries in the table are the mean and standard deviation of the ratio between the fair tree performance and the vanilla average-linkage tree performance. This ratio is reported both at the time of initialization (Initial), when the fairlets were randomly found, and after Algorithm 1 terminated (Final).

Table 8: Impact of Algorithm 1 on revenue, in percentage (mean ± std. dev).
Samples 200 400 800 1600 3200
CensusMultiColor, initial 75.76 ± 2.86  73.60 ± 1.77  69.77 ± 0.56  66.02 ± 0.95  61.94 ± 0.61
final 92.68 ± 0.97  94.66 ± 1.66  96.40 ± 0.61  97.09 ± 0.60  97.43 ± 0.77
BankMultiColor, initial 72.08 ± 0.98  70.96 ± 0.69  70.79 ± 0.72  70.77 ± 0.49  69.88 ± 0.53
final 94.99 ± 0.79  95.87 ± 2.07  97.19 ± 0.81  97.93 ± 0.59  98.43 ± 0.14

Table 9 shows the run time of Algorithm 1 with multiple colors.

Table 9: Average running time of Algorithm 1 in seconds.
Samples 200 400 800 1600 3200 6400
CensusMultiColor 0.43 1.76 7.34 35.22 152.71 803.59
BankMultiColor 0.43 1.45 6.77 29.64 127.29 586.08

Appendix H Pseudocode for the cost objective

Algorithm 2 Fair hierarchical clustering for cost objective.
  Input: Graph GG, edge weight w:Ew:E\to\mathbb{R}, color c:V{red, blue}c:V\to\{\text{red, blue}\}, parameters tt and \ell
  
  {Step (A)}
  TUnfairHC(G,w)T\leftarrow\textsc{UnfairHC}(G,w) {Blackbox unfair clustering that minimizes cost}
  
  {Step (B)}
  Let 𝒞\mathcal{C}\leftarrow\emptyset
  Do a BFS of TT, placing visited cluster CC in 𝒞\mathcal{C} if |C|t|C|\leq t, and not proceeding to CC’s children
  
  {Step (C)}
  𝒞0,C\mathcal{C}_{0},C^{\prime}\leftarrow\emptyset
  for CC in 𝒞\mathcal{C} do
     CCCC^{\prime}\leftarrow C^{\prime}\cup C
     if |C|t|C^{\prime}|\geq t then
        Add CC^{\prime} to 𝒞0\mathcal{C}_{0}
        Let CC^{\prime}\leftarrow\emptyset
     end if
  end for
  If |C|>0|C^{\prime}|>0, merge CC^{\prime} into some cluster in 𝒞0\mathcal{C}_{0}
  
  {Step (D)}
  for CC in 𝒞0\mathcal{C}_{0} do
     Let exc(C)exc(C)\leftarrow majority color in CC
     Let ex(C)ex(C)\leftarrow difference between majority and minority colors in CC
  end for
  
  {Step (E)}
  HMH_{M}\leftarrow BuildClusteringGraph(𝒞0,ex,exc)(\mathcal{C}_{0},ex,exc)
  
  {Step (F)}
  𝑓𝑉\mathit{fV}\leftarrow FixUnmatchedVertices(𝒞0,HM,ex,exc)(\mathcal{C}_{0},H_{M},ex,exc)
  
  {Step (G)}
  𝒞\mathcal{C}^{\prime}\leftarrow ConstructClustering(𝒞0,ex,exc,𝑓𝑉)(\mathcal{C}_{0},ex,exc,\mathit{fV})
  return  𝒞\mathcal{C}^{\prime}
Algorithm 3 BuildClusteringGraph (𝒞0,ex,exc)(\mathcal{C}_{0},ex,exc)
  HM(VM=𝒞0,EM=)H_{M}\leftarrow(V_{M}=\mathcal{C}_{0},E_{M}=\emptyset)
  Let CiVMC_{i}\in V_{M} be any vertex
  Let n1/3logn\ell\leftarrow n^{1/3}\sqrt{\log n}
  while \exists an unvisited CjVMC_{j}\in V_{M} such that exc(Cj)exc(Ci)exc(C_{j})\neq exc(C_{i}) do
     Add (Ci,Cj)(C_{i},C_{j}) to EME_{M}
     Swap labels CiC_{i} and CjC_{j} if ex(Cj)>ex(Ci)ex(C_{j})>ex(C_{i})
     Let ex(Ci)ex(Ci)ex(Cj)ex(C_{i})\leftarrow ex(C_{i})-ex(C_{j})
     if ex(Ci)<ex(C_{i})<\ell or |𝑐𝑜𝑚𝑝𝑜𝑛𝑒𝑛𝑡(Ci)||\mathit{component}(C_{i})|\geq\ell then
        Reassign starting point CiC_{i} to an unvisited vertex in VMV_{M}
     end if
  end while
  return  HMH_{M}
Algorithm 4 FixUnmatchedVertices(𝒞0,HM,ex,exc)(\mathcal{C}_{0},H_{M},ex,exc)
  Let n1/3logn\ell\leftarrow n^{1/3}\sqrt{\log n}
  for C𝒞0VMC\in\mathcal{C}_{0}\setminus V_{M} do
     Let 𝑓𝑉(C,red),𝑓𝑉(C,blue)t2/2\mathit{fV}(C,\text{red}),\mathit{fV}(C,\text{blue})\leftarrow t^{2}/\ell^{2}
  end for
  for ii from 11 to 108t2/3108t^{2}/\ell^{3} do
     for each kk component in HMH_{M} do
        for pp in a BFS of kk do
           Let chch\leftarrow some child of pp
           𝑓𝑉(p,exc(p))𝑓𝑉(p,exc(p))+\mathit{fV}(p,exc(p))\leftarrow\mathit{fV}(p,exc(p))+\ell
           ex(p)ex(p)ex(p)\leftarrow ex(p)-\ell
           𝑓𝑉(ch,exc(ch))𝑓𝑉(ch,exc(ch))+\mathit{fV}(ch,exc(ch))\leftarrow\mathit{fV}(ch,exc(ch))+\ell
           ex(ch)ex(ch)ex(ch)\leftarrow ex(ch)-\ell
           if # matches between pp and chch <<\ell then
              Remove (p,ch)(p,ch) from EME_{M} {This creates a new component}
           end if
        end for
     end for
  end for
  return  𝑓𝑉\mathit{fV}
Algorithm 5 ConstructClustering(𝒞0,ex,exc,𝑓𝑉)(\mathcal{C}_{0},ex,exc,\mathit{fV})
  Let 𝒞,R\mathcal{C}^{\prime},R\leftarrow\emptyset
  for CC in 𝒞0\mathcal{C}_{0} do
     for cc in {red, blue}\{\text{red, blue}\} do
        Let f=𝑓𝑉(C,c)f=\mathit{fV}(C,c)
        Let Cf={vC:c(v)=c}C_{f}=\{v\in C:c(v)=c\}
        Create the transformed graph LL from CfC_{f} {Described in the proof of Lemma 38}
        CMinWeightBisection(L)C^{\prime}\leftarrow\textsc{MinWeightBisection}(L) {Blackbox, returns isolated CfC_{f} vertices}
        CCCC\leftarrow C\setminus C^{\prime}
        RRCR\leftarrow R\cup C^{\prime}
        ex(C)ex(C)|C|ex(C)\leftarrow ex(C)-|C^{\prime}|
     end for
  end for
  for C𝒞0C\in\mathcal{C}_{0} do
     Let SRS\subset R such that |S|=ex(C)|S|=ex(C) with no vertices of color exc(C)exc(C)
     C=CSC=C\cup S
     RRSR\leftarrow R\setminus S
     Add CC to 𝒞\mathcal{C}^{\prime}
  end for
  return  𝒞\mathcal{C}^{\prime}