
LLM-Assisted Content Conditional Debiasing for Fair Text Embedding

Wenlong Deng1,2, Blair Chen2, Beidi Zhao1, Chiyu Zhang1,
Xiaoxiao Li1, Christos Thrampoulidis1†
Department of Electrical and Computer Engineering1,
The University of British Columbia, Vancouver, BC, Canada
{dwenlong,beidiz,chiyuzh,xiaoxiao.li,cthrampo}@ece.ubc.ca
Google Cloud2, USA
{chenblair}@google.com
Work done at Google. Equal Corresponding Author.
Abstract

Mitigating biases in machine learning models has become an increasing concern in Natural Language Processing (NLP), particularly in developing fair text embeddings, which are crucial yet challenging for real-world applications like search engines. In response, this paper proposes a novel method for learning fair text embeddings. First, we define a novel content-conditional equal distance (CCED) fairness for text embeddings, ensuring content-conditional independence between sensitive attributes and text embeddings. Building on CCED, we introduce a content-conditional debiasing (CCD) loss to ensure that embeddings of texts with different sensitive attributes but identical content maintain the same distance from the embedding of their corresponding neutral text. Additionally, we tackle the issue of insufficient training data by using Large Language Models (LLMs) with instructions to fairly augment texts into different sensitive groups. Our extensive evaluations show that our approach effectively enhances fairness while maintaining the utility of embeddings. Furthermore, our augmented dataset, combined with the CCED metric, serves as a new benchmark for evaluating fairness.


1 Introduction

Embedding text into dense representations is a widely used technique in modern NLP, powering applications such as sentiment analysis Dang et al. (2020), recommendation systems Zhang et al. (2016), and search engines Palangi et al. (2016). However, these embeddings carry inherent biases that can propagate to the applications built on them Packer et al. (2018); Baeza-Yates (2018); Zerveas et al. (2022); Rabelo et al. (2022). For instance, search engines Huang et al. (2020) preprocess all text content and search queries into embeddings to optimize storage and enable efficient similarity matching; inherent biases in text embeddings can then skew similarity computations and, in turn, the filtering of numerous documents to find pertinent ones. Moreover, text embeddings are directly employed in other applications such as zero-shot classification Yin et al. (2019); Radford et al. (2021) and clustering John et al. (2023). Unfortunately, various forms of biases, including gender, racial, and religious biases, have been identified in text embeddings generated by pre-trained language models (PLMs), as reported in several studies Bolukbasi et al. (2016); Nissim et al. (2020); Liang et al. (2020); May et al. (2019). Consequently, attaining fairness in text embedding models is crucial.

Figure 1: Pipeline of our method with gender as the sensitive attribute. (a) Graphical demonstration of the fairness issue. (b) The debiasing procedure achieves content-conditioned equal distance to improve fairness. (c) Overview of the data augmentation strategy, including the prompt template used to replace sensitive words with their equivalents from all sensitive groups. (d) Prompt search module: augmented texts are sent to the demographic polarity checking block; incorrectly augmented samples are then manually labeled and added to the prompts.

Recent debiasing techniques Liang et al. (2020); Kaneko and Bollegala (2021) for text embeddings use post-training to address biases, avoiding the inefficiency of retraining sentence encoders for each new bias. When removing bias, projection-based methods Liang et al. (2020); Kaneko and Bollegala (2021) reduce an embedding's projection onto each bias subspace. The distance-based method Yang et al. (2023) constructs embeddings for sensitive groups and equalizes distances to text embeddings across these groups. Nevertheless, these methods pursue full independence between sensitive attributes and text embeddings, which results in the complete removal of sensitive information. As a result, these approaches do not effectively navigate the trade-off between fairness and utility Zhao and Gordon (2022); Deng et al. (2023); Zliobaite (2015).

Recent studies Mary et al. (2019); Deng et al. (2023); Pogodin et al. (2022) suggest that using datasets labeled with sensitive information to achieve conditional independence — specifically, conditioning on the content class to preserve semantic information within the text — provides a more effective approach to achieving fairness while preserving utility. Yet, the scarcity of text datasets with sensitive labels Gallegos et al. (2023) limits the practical application of these findings. To create such datasets, Counterfactual Data Augmentation (CDA) Zhao et al. (2018) collects sensitive-related words and employs a rule-based method to augment the data, but this approach requires an extensive list of words. Finally, while Large Language Models (LLMs) Schick and Schütze (2021); Shao et al. (2023) have offered new methods for data generation thanks to their rich contextual knowledge, they still struggle with inherent systematic biases Yu et al. (2023).

In this paper, we improve text embedding fairness by defining a fairness notion with theoretical analysis, designing a novel debiasing loss, and proposing an LLM-based data strategy for dataset generation. Our contributions include:

  • Introducing CCED fairness for text embeddings, ensuring equal sensitive information and conditional independence between sensitive attributes and embeddings.

  • Proposing CCD loss to achieve the desired CCED fairness by ensuring that texts with varied sensitive attributes but identical content have embeddings equidistant from their neutral counterparts.

  • Employing LLMs to augment datasets fairly, representing diverse sensitive groups within the same content for effective training with CCD. Proposing polarity-guided prompting to ensure the LLM-generated data quality and minimize the potential biases from LLMs.

  • Establishing CCED fairness as a benchmark for evaluating fairness in text embeddings.

  • Extensive evaluations on debiasing benchmarks and downstream tasks demonstrate CCD’s effectiveness in promoting fairness while preserving utility.

2 Related Work

Debias Text Embedding: Bias in text embeddings (also known as sentence embeddings) is a significant issue that arises when these models reflect or amplify societal stereotypes and prejudices found in their training data. To resolve the issue, Liang et al. (2020) contextualizes predefined sets of bias attribute words into sentences and applies a hard-debias algorithm Bolukbasi et al. (2016). Contextualized debiasing methods Kaneko and Bollegala (2021); Yang et al. (2023) debias pretrained contextualized embeddings and can be applied at the token or sentence level Kaneko and Bollegala (2021). However, all the above methods aim to strictly achieve independence between text embeddings and sensitive attributes, which may not balance fairness and utility well. While Shen et al. (2021, 2022) employ contrastive learning losses to mitigate biases in language representations for text classification, their approach relies on supervised data, which is often scarce and expensive to obtain, and primarily focuses on fairness in the subsequent task. Additionally, although Leteno et al. (2023); Shen et al. (2022) observe that representational fairness and group fairness in subsequent tasks are either not correlated or only partially correlated, it is important to note that fairness in subsequent tasks and fairness in text embeddings are distinct areas, with the latter being crucial for various applications. A detailed discussion of these differences can be found in Appendix A.2. In this paper, we utilize LLMs to augment training data for learning fair text embeddings with the proposed CCD loss.

LLMs for Dataset Generation: Leveraging the success of LLMs, researchers have begun using them to generate various forms of training data, such as tabular data Borisov et al. (2022), relation triplets Chia et al. (2022), sentence pairs Schick and Schütze (2021); Zhang et al. (2024), and instruction data Shao et al. (2023); Wu et al. (2024). As we focus on obtaining data with sensitive attribute information, data generation for text classification is the most closely related of these applications. Recent efforts in generating data for text classification Meng et al. (2022); Ye et al. (2022); Wang et al. (2019) primarily employ simple class-conditional prompts while focusing on mitigating low generation quality. However, these efforts encounter the challenge of inherent systematic biases present in LLMs Yu et al. (2023). While Yu et al. (2023) considers generated data bias, it focuses only on the diversity of topics and overlooks the inherent bias within words in a text (e.g., 'child' occurs more frequently with 'mother'). In this paper, we instruct the LLM to locate only the gendered words and replace them with counterparts from other groups, and we propose polarity-guided prompt searching to minimize biases from LLMs and ensure the quality of the augmented data.

3 Method

3.1 Problem Setting

This section outlines the problem of fairness in text embeddings. We define several key variables: $S \in \mathcal{D}$ represents the input text from the data distribution, $C$ denotes the content of the text (for instance, the texts 'he is a teacher' and 'she is a teacher' both convey the same content $C$ = 'is a teacher'), and $A = [a_1, \ldots, a_{|A|}]$ represents the sensitive attributes (e.g., gender and age). The symbol $n$ indicates neutral, meaning no sensitive information is present. A text with content $C$ is considered neutral, $S_C^n$, if it contains no sensitive information, whereas a text $S_C^{a_i}$ is associated with the sensitive attribute $a_i$ if its sensitive polarity Wang et al. (2023) is $a_i$; see Eq. (6). The text embedding model $f$ maps a text to a $d$-dimensional embedding $Z \in \mathbb{R}^d$. The embedding of a neutral text encodes the content information $C'$ (for a well-trained model, $C' \approx C$), while the embedding of a sensitive text additionally encodes sensitive information. Words in a text related to attribute $a_i$ are denoted as $X^{a_i}$, and neutral words are denoted as $X^n$. For clarity, we provide detailed notations in Table 8 in the Appendix.
Fairness Issue: Fig. 1 (a) shows that there is an association between attributes $A$ and the content variable $C$. If the model $f$ superficially treats $A$ as a proxy for $C$ (for instance, raising children is frequently associated with women in the training corpus, producing this proxy effect), the encoded content $C'$ ends up being represented by $A$, so the embedding $Z$ mainly contains sensitive information, which leads to fairness issues.
Fairness Goal: Mitigating unfairness is not trivial, as we need to address bias mitigation while protecting the model's representation ability. As shown in Fig. 1 (a), our method aims to (1) break the association between the content $C$ and the sensitive attribute $A$, and (2) preserve useful sensitive information in the text embedding. For example, for a text about a father raising a child, its embedding should retain information about the father.

3.2 Content Conditional Debiasing

To break the superficial association, we propose to achieve conditional independence between sensitive attributes and the predicted content, $A \perp C' \mid C$. This conditional independence allows the prediction $C'$ to depend on $A$ only through the content variable $C$, prohibiting the use of $A$ as a proxy for $C$ and thus mitigating the fairness issue while preserving utility. To protect utility, our objective is not to completely remove sensitive information but to ensure that text embeddings from different sensitive groups with identical content contain an equal amount of sensitive information.

3.2.1 Fairness Definition

First, we propose a novel content conditional equal distance notion of fairness for text embeddings:

Definition 3.1.

(Content Conditional Equal Distance (CCED) Fairness.) Let $S_C^n$ be a neutral text with content $C$, and let $S_C^A = [S_C^{a_1}, S_C^{a_2}, \ldots, S_C^{a_{|A|}}]$ be a set of texts from all sensitive groups with the same content $C$. Then the embedding model $f$ is content conditional equal distance fair with respect to attributes $A$ if, for any $a_i, a_j \in A$:

$\|f(S_C^{a_i}) - f(S_C^n)\| = \|f(S_C^{a_j}) - f(S_C^n)\|,$   (1)

where $\|\cdot\|$ is the $L_2$ norm.

As shown in Fig. 1 (b), CCED fairness requires that texts with the same content from different sensitive groups have equal distance to their corresponding neutral text in the embedding space. This text embedding fairness definition has two merits:
Equal sensitive information: The equal distance to the neutral embedding ensures an equitable encoding of sensitive information across diverse groups, allowing fair usage of sensitive information and preserving the utility of embeddings.
Content Conditional Independence: Echoing the methodologies in previous research Hinton and Roweis (2002); Yang et al. (2023), the conditional independence $A \perp C' \mid C$ can be represented as CCED in the embedding space:

Assumption 3.2.

(Equal Probability) Within a content $C$, the likelihood $P(a_i \mid C)$ over all sensitive attributes $a_i \in A$ is uniform: $P(a_1 \mid C) = \cdots = P(a_{|A|} \mid C)$.

Theorem 3.3.

When the equal probability assumption holds, achieving content conditioned equal distance fairness is equivalent to achieving conditional independence between sensitive attributes and content, $A \perp C' \mid C$.

Assumption 3.2 holds for a fair dataset that has balanced texts from all groups within each content $C$ (which can be obtained through our data augmentation strategy in Section 3.3). Theorem 3.3 demonstrates the merit of CCED fairness (Definition 3.1) in achieving embedding fairness. A detailed proof can be found in Appendix A.5.

3.2.2 Content Conditional Debiasing Loss

Based on the defined CCED fairness, we design a loss function $L_{bias}$ that aims to mitigate biases while preserving the representation ability of PLMs. For a sample pair $[S_C^{a_1}, \ldots, S_C^{a_{|A|}}, S_C^n]$:

$L_{bias} = \sum_{i \in [A]} \sum_{j \neq i} \big| dist(f(S_C^{a_i}), f(S_C^n)) - dist(f(S_C^{a_j}), f(S_C^n)) \big|,$   (2)

where $dist(A,B) = \exp\left(-\frac{\lVert A - B \rVert^2}{2\rho^2}\right)$ measures the distance on the embedding manifold Yang et al. (2023); Hinton and Roweis (2002) (details in Appendix A.5), and $\rho$ is selected as the variance of the distance over the training dataset for normalization. To further preserve the valuable information encoded in the model and achieve efficient debiasing, we design $L_{rep}$ to enforce high similarity between the neutral texts' embeddings produced by the fine-tuned model $f$ and those produced by the original model $f^{org}$:

$L_{rep} = \|f(S^n) - f^{org}(S^n)\|.$   (3)

Ensuring that neutral embeddings remain unchanged offers two benefits: preserving the model’s representational capability and maintaining neutral embeddings as a consistent reference point in the debiasing loss, ensuring stable equal distance to embeddings with various sensitive attributes. Thus, the overall training objective is:

$L_{all} = L_{bias} + \beta \cdot L_{rep},$   (4)

where $\beta$ is a hyper-parameter used to balance the two terms. An ablation study for setting $\beta$ is detailed in Table 7.
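To make the objective concrete, the following is a minimal PyTorch sketch of Eqs. (2)-(4), assuming sentence embeddings have already been computed in batches; the function names, the batch-mean reduction, and the default value of $\beta$ are our own choices rather than the authors' released implementation.

```python
import torch

def embed_dist(a, b, rho):
    # Gaussian-kernel "distance" on the embedding manifold:
    # dist(A, B) = exp(-||A - B||^2 / (2 * rho^2)), as in Eq. (2).
    return torch.exp(-((a - b) ** 2).sum(dim=-1) / (2 * rho ** 2))

def ccd_bias_loss(group_embs, neutral_emb, rho):
    # group_embs: list of [batch, d] tensors, one per sensitive group a_i.
    # neutral_emb: [batch, d] tensor for the matching neutral texts.
    dists = [embed_dist(e, neutral_emb, rho) for e in group_embs]
    loss = 0.0
    for i in range(len(dists)):
        for j in range(len(dists)):
            if i != j:
                loss = loss + (dists[i] - dists[j]).abs().mean()
    return loss

def rep_loss(neutral_emb, neutral_emb_orig):
    # L_rep (Eq. 3): keep neutral embeddings close to the frozen original model's.
    return (neutral_emb - neutral_emb_orig).norm(dim=-1).mean()

def ccd_total_loss(group_embs, neutral_emb, neutral_emb_orig, rho, beta=1.0):
    # L_all = L_bias + beta * L_rep (Eq. 4).
    return ccd_bias_loss(group_embs, neutral_emb, rho) + beta * rep_loss(neutral_emb, neutral_emb_orig)
```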

3.3 LLM-Assisted Content Conditional Data Augmentation

We leverage the rich contextual knowledge of LLMs with few-shot prompting to obtain a dataset that (1) fulfills Assumption 3.2, enabling the goal in Definition 3.1, and (2) avoids introducing the LLM's inherent bias into the augmented data. The data augmentation algorithm is shown in Alg. 1 and explained in detail below.

Algorithm 1 Data Augmentation Algorithm

Input: Dataset $\mathcal{D}$, sensitive word lists $V$, pretrained LLM $h$, task description $T$, example prompts $P$.

1: for $k$ in $1, \ldots, K$ do   ▷ $K = 10$ in this work
2:     Block I: Augment Texts into Different Sensitive Groups
3:     for $S \in \mathcal{D}$ do
4:         $h(S, T, P) \rightarrow [S^{a_1}, \ldots, S^{a_{|A|}}, S^n], c$; add the result to $\mathcal{D}'$
5:     end for
6:     if $k = K$ then
7:         return augmented dataset $\mathcal{D}'$
8:     end if
9:     Block II: Polarity-Guided Prompt Searching
10:     for $[S^{a_1}, \ldots, S^{a_{|A|}}, S^n] \in \mathcal{D}'$ do
11:         Polarity checking, Eq. (6)
12:     end for
13:     Manually augment the incorrectly augmented sample with the highest confidence $c$ and add it to $P$
14:end for

Augment Text into Different Sensitive Groups: As shown in Fig. 1 (c), our task description $T$ instructs the LLM to locate only the gendered words and replace them with counterparts from other groups, leaving the rest of the content unchanged and thus avoiding fairness issues in text generation. Specifically, for sensitive words $X^A = [X^{a_i}, \ldots, X^{a_j}], a_i, a_j \in A$ in the text $S$, the LLM $h$ substitutes $X^A$ with words from different sensitive groups and neutral terms, thus obtaining augmented texts from all sensitive groups (as shown in Table 1):

$h(S, T, P) = [S^{a_1}, \ldots, S^{a_{|A|}}, S^n],\; c$   (5)

where $c$ is the confidence score and $P$ denotes the example prompts (detailed in Table 10 in the Appendix). After augmentation, the dataset contains an equal number of texts from each sensitive group with identical content, satisfying the equal probability Assumption 3.2.
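As an illustration of Eq. (5), the sketch below shows one way to wrap an LLM call for this augmentation step; `llm_call`, the prompt layout, and the line-based output parsing are hypothetical placeholders, not the exact interface or prompts used in the paper (see Table 10 for the actual examples).

```python
from typing import Callable, List, Tuple

def augment_text(llm_call: Callable[[str], str], text: str, task_description: str,
                 example_prompts: List[str], groups: List[str]) -> Tuple[List[str], float]:
    # Sketch of Eq. (5): ask the LLM h to locate the sensitive words in `text` and
    # rewrite it once per sensitive group plus a neutral version, returning the
    # rewrites and a confidence score c. `llm_call` stands in for whatever
    # chat/completion client is available; the output format below is an assumption.
    prompt = "\n".join([
        task_description,
        *example_prompts,
        f"Rewrite the text once for each of: {', '.join(groups + ['neutral'])}.",
        "Return one rewrite per line, followed by a confidence score in [0, 1].",
        f"Text: {text}",
    ])
    response = llm_call(prompt)
    *rewrites, confidence = [line for line in response.strip().splitlines() if line]
    return rewrites, float(confidence)
```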
Polarity-Guided Prompt Searching: To ensure the quality of augmented texts and the effectiveness of few-shot prompt tuning on LLMs, finding appropriate prompts $P$ is crucial. We propose identifying difficult samples from incorrectly augmented texts to use as prompts. First, these incorrectly augmented samples are detected through a sensitive polarity check as described by Wang et al. (2023) and illustrated in Fig. 1 (d). By counting the occurrences of words in predefined sensitive word lists $V = [V^{a_i}, \ldots, V^{a_j}], a_i, a_j \in A$, the polarities of a series of sentences are determined as follows:

$g(S) = \arg\max_{a_i \in A} occ(S, V^{a_i}),$   (6)

where $occ$ denotes the number of times words from the list $V^{a_i}$ appear in the augmented sentences $S$. For a properly augmented sentence $S^{a_i}$, its polarity should match the sensitive attribute $a_i$; if $g(S^{a_i}) \neq a_i$, the sentence is considered inaccurately augmented. Our prompt searching strategy is given in Algorithm 1: in each iteration, the algorithm identifies the incorrectly augmented sample with the highest confidence $c$, manually augments it, and adds it to the example prompts $P$. This rule-guided prompt search is repeated $K$ times (with $K = 10$) to prepare examples for the few-shot prompting of the LLM.
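A minimal Python sketch of the polarity check in Eq. (6) could look as follows; the tokenization and the toy word lists are illustrative assumptions, whereas the paper relies on predefined lists $V$.

```python
import re

def polarity(sentence: str, word_lists: dict) -> str:
    # Eq. (6): g(S) = argmax_{a_i} occ(S, V^{a_i}), i.e. the attribute whose
    # word list occurs most often in the sentence.
    tokens = re.findall(r"[a-z']+", sentence.lower())
    counts = {attr: sum(tokens.count(w) for w in words)
              for attr, words in word_lists.items()}
    return max(counts, key=counts.get)

# Toy word lists for illustration only; the paper uses its predefined lists V.
word_lists = {
    "male": ["he", "him", "his", "father", "son"],
    "female": ["she", "her", "hers", "mother", "daughter"],
}

def is_correctly_augmented(augmented: str, target_attr: str) -> bool:
    # A sentence S^{a_i} is considered correctly augmented only if g(S^{a_i}) = a_i.
    return polarity(augmented, word_lists) == target_attr
```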

Gender Generated Text
Male But because Rumsfeld wanted to prove a point about transforming strategy.
After championing the continuation of his hardline policy, his current strategy of negotiation is risky.
He has been very vocal in voicing discontent with the rule of Kirchner and that of his husband and predecessor, Néstor Kirchner.
Neutral But because the individual wanted to prove a point about transforming strategy.
After championing the continuation of their hardline policy, the current strategy of negotiation is risky.
They have been very vocal in voicing discontent with the rule of Kirchner and that of their spouse and predecessor, Néstor Kirchner.
Female But because Rachel wanted to prove a point about transforming strategy.
After championing the continuation of her hardline policy, her current strategy of negotiation is risky.
She has been very vocal in voicing discontent with the rule of Kirchner and that of her wife and predecessor, Néstor Kirchner.

Table 1: We utilize an LLM to augment text into three gender categories: Male, Female, and Neutral. Shown above are sample examples of the generated data; in the original paper, words containing gender information are highlighted in colors: red for male, blue for neutral, and orange for female.

4 Experiments

In this paper, we take gender bias as an example due to its broad impact on society.

Datasets: We utilize the News-Commentary-v15 corpus Tiedemann (2012) as source samples to generate our training data with LLMs. For gender bias evaluation, we follow Yang et al. (2023) and use SEAT May et al. (2019), CrowS-Pairs Nangia et al. (2020), and the StereoSet-Intrasentence data Nadeem et al. (2020). We additionally assess fairness on longer texts via the Bias-IR dataset Krieg et al. (2023). To evaluate whether the debiased models' representation ability is maintained, we follow Kaneko and Bollegala (2021); Yang et al. (2023) and select four small-scale subsequent tasks from the GLUE benchmark: the Stanford Sentiment Treebank (SST-2 Socher et al. (2013)), the Microsoft Research Paraphrase Corpus (MRPC Dolan and Brockett (2005)), Recognizing Textual Entailment (RTE Bentivogli et al. (2009)), and the Winograd Schema Challenge (WNLI Levesque et al. (2012)). More dataset information is provided in Appendix A.3.

Backbone and Baseline Methods: For the selection of PLMs, we choose BERT-large-uncased Devlin et al. (2018) and RoBERTa-base Liu et al. (2019). To assess debiasing performance, we compare our algorithm with finetuning-based methods DPCE Kaneko and Bollegala (2021) and ADEPT-F Yang et al. (2023). To assess the effectiveness of our data augmentation strategy, we compare our approach with CDA Zhao et al. (2018).

LLM-Assisted Data Augmentation: We leverage ChatGPT (i.e., gpt-3.5-turbo) and Gemini Team et al. (2023) to generate our training data, obtaining a dataset with texts of each content $C$ from all groups in $A$ as well as neutral versions. Using Gemini and ChatGPT for data augmentation resulted in datasets with 43,221 and 42,930 sample pairs, respectively. Examples of data augmented through our method are presented in Table 1, and the quality of the augmented dataset is assessed in Section 4.1.

Hyperparameters: We use Adam to optimize the objective function. During debiasing training, the learning rate is 5e-5, the batch size is 32, and $\beta$ is 1. Our method requires training for only a single epoch, selecting the checkpoint with the lowest validation loss (validated every 500 steps). The results for DPCE and ADEPT-F are obtained using the originally reported hyperparameters from Kaneko and Bollegala (2021); Yang et al. (2023). Consistent with these studies, we set the random seed to 42 to ensure a fair comparison. All experiments are conducted on an NVIDIA A100 GPU.
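For concreteness, the following is a hedged sketch of the debiasing fine-tuning loop with the reported hyperparameters (Adam, learning rate 5e-5, batch size 32, $\beta = 1$, one epoch); it reuses `ccd_total_loss` from the earlier sketch, and the [CLS] pooling, toy data loader, and placeholder $\rho$ are assumptions rather than the authors' exact setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModel.from_pretrained("bert-large-uncased")            # f (fine-tuned)
frozen = AutoModel.from_pretrained("bert-large-uncased").eval()    # f^{org} (kept frozen)
for p in frozen.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)
rho, beta = 1.0, 1.0  # the paper sets rho to the distance variance over the training set; 1.0 is a placeholder

def encode(m, texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return m(**batch).last_hidden_state[:, 0]   # [CLS] pooling (an assumption)

# Toy stand-in for the augmented dataset: aligned (male, female, neutral) batches.
dataloader = [(["He is a teacher."], ["She is a teacher."], ["They are a teacher."])]

for male_texts, female_texts, neutral_texts in dataloader:   # single epoch
    emb_m = encode(model, male_texts)
    emb_f = encode(model, female_texts)
    emb_n = encode(model, neutral_texts)
    with torch.no_grad():
        emb_n_orig = encode(frozen, neutral_texts)
    loss = ccd_total_loss([emb_m, emb_f], emb_n, emb_n_orig, rho=rho, beta=beta)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```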

4.1 Augmentation Quality Checking

To demonstrate the quality of our augmented data on gender, we quantitatively assess the fairness of our augmented dataset using the union gender polarity accuracy metric, formulated as follows:

$g_i^u = \big( g(S_i^n) = n \,\cap\, g(S_i^m) = a_m \,\cap\, g(S_i^f) = a_f \big),$
$Acc = \frac{\sum_{i=1}^{N} g_i^u}{N},$   (7)

where $[S_i^n, S_i^m, S_i^f]$ are the augmented texts for the $i$-th sample, $N$ denotes the size of the augmented dataset, and $g(\cdot)$ is the polarity checking function defined in Eq. (6). The union gender polarity accuracy metric measures the proportion of text triples (neutral, male, female) that are accurately augmented in alignment with their respective gender polarities. Both Gemini and GPT achieve high accuracy, reaching 83.4% and 82.2%, respectively. This suggests that our data augmentation process has effectively produced a fair dataset. Incorporating polarity checking as a post-processing step further ensures the fairness of our augmented data.
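One possible implementation of the union accuracy in Eq. (7) is sketched below; it reuses the `polarity` helper from the earlier sketch and treats "neutral" as an additional word list, which is only one reading of $g(S_i^n) = n$, not necessarily the authors' exact check.

```python
# Extend the earlier toy lists with a neutral list so polarity() can label S^n.
word_lists_with_neutral = {**word_lists,
                           "neutral": ["they", "them", "their", "individual", "person"]}

def union_polarity_accuracy(triples, word_lists):
    # Eq. (7): a triple (S^n, S^m, S^f) counts as correct only when every member
    # receives its expected polarity.
    expected = ("neutral", "male", "female")
    hits = sum(
        all(polarity(text, word_lists) == attr for text, attr in zip(triple, expected))
        for triple in triples
    )
    return hits / max(len(triples), 1)

# Usage: union_polarity_accuracy([(s_neutral, s_male, s_female), ...], word_lists_with_neutral)
```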

Method | SEAT (0.00 is best): 6  6-b  7  7-b  8  8-b  AVG (abs)↓ | StereoSet:gender: LMS↑  SS (50.00)  ICAT↑ | StereoSet:all: LMS↑  SS (50.00)  ICAT↑ | CrowS-Pairs: SS (50.00)
BERT | 0.37  0.20  0.42  0.22  -0.26  0.71  0.36 | 86.34  59.66  69.66 | 84.16  58.24  70.29 | 55.73
DPCE | -0.21  0.27  0.44  0.07  0.25  0.21  0.24 | 81.19  56.72  65.41 | 64.06  \underline{52.96}  60.26 | 52.29
ADEPT-F | 0.83  -0.14  0.63  1.24  0.43  1.28  0.76 | 86.45  61.70  66.21 | 85.09  57.52  72.26 | 51.91
DPCE-Gemini | -0.63  0.41  0.00  -0.01  0.19  0.17  \mathbf{0.23} | 82.63  60.68  64.98 | 64.08  54.91  57.78 | 51.53
ADEPT-F-Gemini | 0.71  -0.23  0.21  0.92  0.35  0.99  0.57 | 86.80  61.72  66.44 | 85.47  58.50  71.71 | 51.91
CCD-CDA | 0.16  0.03  0.43  0.38  0.47  0.22  0.29 | 80.34  \mathbf{53.53}  74.69 | 79.10  53.46  73.62 | 46.95
CCD-GPT | 0.35  -0.11  -0.17  -0.15  0.57  0.06  \mathbf{0.23} | 81.47  \underline{53.60}  \mathbf{75.60} | 80.22  \mathbf{52.83}  \mathbf{75.97} | \underline{47.71}
CCD-Gemini | 0.47  -0.00  -0.02  -0.72  -0.30  0.07  \underline{0.26} | 82.91  54.93  \underline{74.72} | 82.97  55.00  \underline{74.67} | \mathbf{48.85}
Table 2: Comparison of debiasing performance on BERT. We test the debiased models on SEAT, CrowS-Pairs, and filtered StereoSet-Intrasentence, with the best and second best results in bold and underline respectively.

4.2 Results and Analysis

Method | SEAT (0.00 is best): 6  6-b  7  7-b  8  8-b  AVG (abs)↓ | StereoSet:gender: LMS↑  SS (50.00)  ICAT↑ | StereoSet:all: LMS↑  SS (50.00)  ICAT↑ | CrowS-Pairs: SS (50.00)
RoBERTa | 0.92  0.21  0.98  1.46  0.81  1.26  0.94 | 89.79  66.17  60.74 | 88.91  62.22  67.17 | 60.15
DPCE | 0.40  0.11  0.73  0.98  0.03  0.75  0.50 | 82.93  61.80  64.11 | 61.30  55.14  54.99 | 54.79
ADEPT-F | 1.23  -0.14  0.99  1.09  0.93  1.11  0.92 | 89.81  63.10  66.27 | 90.03  61.31  69.68 | 55.56
CCD-CDA | 0.29  -0.07  0.87  0.94  0.58  0.85  0.60 | 88.52  60.29  \underline{70.29} | 88.88  59.12  72.66 | \underline{50.57}
CCD-GPT | 0.40  0.08  0.41  0.85  0.57  0.63  \underline{0.49} | 87.21  \underline{59.51}  \mathbf{70.63} | 88.33  \underline{57.61}  \mathbf{74.89} | 48.66
CCD-Gemini | 0.27  -0.18  -0.13  0.82  0.08  0.81  \mathbf{0.38} | 81.35  \mathbf{58.15}  68.10 | 84.68  \mathbf{56.65}  \underline{73.41} | \mathbf{49.54}
Table 3: Comparison of debiasing performance on RoBERTa. We test the debiased models on SEAT, CrowS-Pairs, and filtered StereoSet-Intrasentence, with the best and second best results in bold and underline respectively.
Method | GLUE↑: SST-2↑  MRPC↑  RTE↑  WNLI↑  AVG↑ | Bias-IR (Male Ratio, 0.50 is best): Appearance  Child  Cognitive  Domestic  Career  Physical  Relationship  AVG-DEV↓
BERT | 92.9  84.6  \underline{72.5}  38.0  72.0 | 0.71  0.50  0.75  0.46  0.75  0.68  0.61  0.16
DPCE | 92.8  69.6  53.4  49.3  66.3 | 0.86  0.79  1.00  0.47  0.70  0.84  0.61  0.24
ADEPT-F | 93.2  \underline{85.5}  69.9  56.3  76.2 | 0.50  0.50  0.75  0.53  0.80  0.68  0.65  0.13
DPCE-Gemini | 93.2  81.4  60.6  46.5  70.4 | 0.29  0.36  0.17  0.20  0.10  0.32  0.35  0.24
ADEPT-F-Gemini | 92.7  81.4  71.5  56.3  75.5 | 0.71  0.43  0.83  0.53  0.65  0.74  0.65  0.17
CCD-CDA | 92.8  \mathbf{86.3}  65.3  50.7  73.8 | 0.79  0.79  0.83  0.80  0.70  0.79  0.83  0.29
CCD-GPT | \mathbf{93.6}  85.1  70.4  \underline{56.3}  \underline{76.4} | 0.78  0.78  0.50  0.73  0.50  0.63  0.52  \underline{0.13}
CCD-Gemini | \underline{93.5}  83.6  \mathbf{72.9}  \mathbf{56.3}  \mathbf{76.6} | 0.57  0.64  0.58  0.60  0.70  0.42  0.65  \mathbf{0.11}
Table 4: Evaluation results on the GLUE dataset and the Bias-IR dataset with BERT, we calculate the average deviation to 0.5 for Bias-IR as AVG-DEV. The bold and underline represent the best and second-best respectively.
Method | GLUE↑: SST-2↑  MRPC↑  RTE↑  WNLI↑  AVG↑ | Bias-IR (Male Ratio, 0.50 is best): Appearance  Child  Cognitive  Domestic  Career  Physical  Relationship  AVG-DEV↓
RoBERTa | 93.8  88.2  70.8  56.3  76.9 | 0.28  0.28  0.66  0.40  0.60  0.42  0.70  0.16
DPCE | 78.1  81.6  53.8  56.3  67.5 | 0.43  0.93  0.42  0.60  0.50  0.58  0.43  0.12
ADEPT-F | 93.9  \mathbf{89.2}  66.8  56.3  76.6 | 0.57  0.50  0.83  0.60  0.85  0.68  0.74  0.18
CCD-CDA | \underline{94.3}  \underline{88.2}  68.2  56.3  76.7 | 0.29  0.50  0.58  0.13  0.35  0.21  0.56  0.16
CCD-GPT | 93.1  86.5  \underline{71.5}  56.3  \underline{76.9} | 0.43  0.36  0.58  0.33  0.55  0.53  0.61  \mathbf{0.09}
CCD-Gemini | \mathbf{94.6}  86.5  \mathbf{72.9}  56.3  \mathbf{77.6} | 0.43  0.50  0.67  0.53  0.65  0.58  0.69  \underline{0.10}
Table 5: Evaluation results on the GLUE dataset and the Bias-IR dataset with RoBERTa, we calculate the average deviation to 0.5 for Bias-IR as AVG-DEV. The bold and underline represent the best and second-best respectively.
Method CCED \downarrow
BERT 0.339
DPCE 0.212
ADEPT-F 0.324
CCD-CDA 0.081
CCD-GPT 0.056
CCD-Gemini 0.077
(a) CCED on BERT.
Method CCED \downarrow
RoBERTa 0.438
DPCE 0.177
ADEPT-F 0.159
CCD-CDA 0.166
CCD-GPT 0.143
CCD-Gemini 0.052
(b) CCED on RoBERTa.
Table 6: Debiasing performance in terms of CCED.

We evaluate four models on all benchmarks, namely the original model (pre-trained with no explicit debiasing), the DPCE model, the ADEPT-F model, and our CCD.

Reducing Gender Biases: In Table 2 and Table 3, our experiments demonstrate that CCD with GPT and Gemini data strategies excels in debiasing, consistently outperforming baselines in the StereoSet and CrowS-Pairs datasets for both BERT and RoBERTa backbones. On SEAT, both CCD and DPCE achieve good performance, with CCD-Gemini achieving the best overall performance on SEAT across both backbones. Notably, our method attains a high ICAT score in the StereoSet dataset, indicating an excellent balance between performance and fairness. However, while DPCE maintains great fairness, it adversely affects its representation capability, as evidenced by a significantly lower LMS score in the StereoSet dataset.

Preserving Representation Ability: In Table 4 and Table 5, the GLUE results demonstrate that CCD-Gemini achieves the highest average performance with both BERT and RoBERTa backbones, suggesting that CCD can even enhance the model's representation capabilities. Conversely, DPCE, which strictly separates gender attributes from neutral text embeddings, harms the model's utility.

Bias in Information Retrieval: Since search is a crucial subsequent application of text embeddings, we evaluate bias in information retrieval using the Bias-IR dataset. For the BERT model, Table 4 shows that CCD-Gemini achieves the best fairness, with CCD-GPT ranking second. For the RoBERTa model, Table 5 shows that CCD-GPT achieves the best fairness, with CCD-Gemini ranking second. Overall, CCD with the GPT and Gemini data strategies outperforms the baselines in fairness across individual fields as well as on average.

Figure 2: T-SNE plots of embeddings that are processed by different methods. Our approach maintains embedding positions similar to BERT while mixing male and female embeddings thus achieving fairness.

CCED as Fairness Metric: We use our CCED fairness from Definition 3.1 to evaluate fairness. Specifically, we calculate the CCED gap for all methods on our Gemini-augmented dataset as $\frac{1}{N}\sum_{i=1}^{N}\big|\,\|f(S_i^{a_i}) - f(S_i^n)\| - \|f(S_i^{a_j}) - f(S_i^n)\|\,\big|$. Table 6 shows that CCD achieves the best fairness on the CCED metric, with DPCE being the fairest baseline. The CCED results align well with the results on the other benchmarks in Table 2 and Table 3, indicating that CCED serves as a new benchmark for text embedding fairness.
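The CCED gap above is straightforward to compute; a short PyTorch sketch, assuming aligned male, female, and neutral embedding tensors, is given below.

```python
import torch

def cced_gap(emb_a, emb_b, emb_n):
    # CCED gap (Definition 3.1), averaged over N aligned triples:
    # (1/N) * sum_i | ||f(S_i^a) - f(S_i^n)|| - ||f(S_i^b) - f(S_i^n)|| |
    d_a = (emb_a - emb_n).norm(dim=-1)
    d_b = (emb_b - emb_n).norm(dim=-1)
    return (d_a - d_b).abs().mean().item()
```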

Comparison of Data Strategy: To demonstrate the effectiveness of our proposed data strategy, we conduct comparisons with CDA, as shown in Table 2 to Table 5. Integrating our debiasing loss with any of the data strategies improves fairness. However, CDA consistently performs worse than GPT and Gemini on fairness due to its limited sensitive word list. This highlights the superiority of our LLM-based augmentation method in leveraging the rich contextual knowledge of LLMs. Regarding the choice of LLM, both ChatGPT and Gemini achieve strong performance.

Baseline with augmented data: In this section, we study the baseline methods with our Gemini-augmented data, denoted DPCE-Gemini and ADEPT-F-Gemini. Table 2 shows that our augmented dataset marginally improves their fairness on certain metrics, though the overall performance remains similar to that with the original dataset; we arrive at the same conclusion that our CCD surpasses these baseline approaches. Regarding representation capability and Bias-IR performance, the results are reported in Table 4: DPCE shows an improvement in GLUE average performance, while ADEPT-F shows a slight decline. Despite these variations, both DPCE-Gemini and ADEPT-F-Gemini still exhibit a significant performance gap compared to the CCD methods, as detailed in Table 4. In summary, even with our augmented dataset, CCD still outperforms the baseline methods.

Influence of $\beta$: We perform an ablation study of $\beta$ on CCD-Gemini using the StereoSet dataset with BERT, which provides comprehensive evaluation metrics for performance (LMS), fairness (SS), and their trade-off (ICAT). Increasing $\beta$ amplifies the impact of $L_{rep}$ in Eq. (4), ensuring that neutral embeddings remain unchanged. This provides two key benefits: preserving the model's representational capability and maintaining neutral embeddings as a consistent reference point in the debiasing loss. We vary $\beta$ from 0 to 1.5, with the results presented in Table 7.

Method β LMS SS ICAT
CCD-Gemini 0.0 64.37 51.03 63.02
0.5 73.67 53.69 68.22
1.0 82.91 54.93 74.72
1.5 84.28 57.64 71.39
Table 7: Influence of β\beta on StereoSet dataset with BERT.

As $\beta$ increases, the LMS score rises from 64.37 to 84.28, indicating improved model utility, while the SS score moves further from the ideal 50 (from 51.03 to 57.64), indicating reduced fairness and a shift towards prioritizing utility over fairness. Setting $\beta = 1$ results in the optimal ICAT score, balancing fairness and utility.

Embedding Visualization: (1) Fairness Improvement: Fig. 2.a shows the T-SNE of the original BERT model, where male (blue dots) and female (red dots) embeddings form distinct clusters, indicating fairness issues Peltonen et al. (2023). In contrast, baseline methods and our CCD mix male and female embeddings, thus improving fairness. (2) Utility Preservation: DPCE (Fig. 2.b) separates gendered (blue and red) and neutral (yellow) embeddings, completely removing sensitive information. This disrupts the original embedding geometry and significantly reduces performance (Tables 2 and 4). ADEPT (Fig. 2.c) also causes a performance drop and worsens fairness, as shown in Tables 2 and 4. Notably, our approach (Fig. 2.d) maintains an embedding geometry similar to BERT while mixing male and female embeddings, achieving fairness without compromising utility.

5 Conclusion

In conclusion, we introduce CCED fairness for text embeddings, ensuring conditional independence and equal sensitive information between attributes and embeddings. We propose the CCD loss to achieve this fairness by ensuring that texts with varied sensitive attributes but identical content have embeddings equidistant from their neutral counterparts. By employing LLMs to fairly augment datasets, we enable effective training with CCD. We establish CCED fairness as a benchmark for evaluating text embedding fairness. Extensive evaluations on debiasing benchmarks and downstream tasks demonstrate CCD's effectiveness in promoting fairness while preserving utility.

6 Limitations

In this study, we utilize gender bias to demonstrate the efficacy of our method. As our approach constitutes a general pipeline, we plan to extend our methodology to address other types of biases (e.g., race, age) in the future. Moreover, we discuss the application of our method in a binary gender setting, which generally does not reflect the real world where gender (and other biases) may not be strictly binary. Fortunately, our method is readily extensible to any number of dimensions. We consider this extension as part of our future work.

7 Ethical Consideration

Our work pioneers in mitigating biases in text embeddings, crucial for fairness and inclusivity in NLP applications. We introduce a method that ensures fair representation by achieving conditional independence between sensitive attributes and text embeddings, aiming to reduce societal biases. Employing LLMs for data augmentation represents ethical advancement in tackling inherent biases, moving towards equitable technology and inspiring future bias-aware research. Our contribution significantly advances AI fairness by validating a method that minimizes bias in text embeddings, promoting inclusivity in machine learning.

References

  • Baeza-Yates (2018) Ricardo Baeza-Yates. 2018. Bias on the web. Communications of the ACM, 61(6):54–61.
  • Bentivogli et al. (2009) Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. TAC, 7(8):1.
  • Bolukbasi et al. (2016) Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. Advances in neural information processing systems, 29.
  • Borisov et al. (2022) Vadim Borisov, Kathrin Seßler, Tobias Leemann, Martin Pawelczyk, and Gjergji Kasneci. 2022. Language models are realistic tabular data generators. arXiv preprint arXiv:2210.06280.
  • Chia et al. (2022) Yew Ken Chia, Lidong Bing, Soujanya Poria, and Luo Si. 2022. Relationprompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction. arXiv preprint arXiv:2203.09101.
  • Dang et al. (2020) Nhan Cach Dang, María N Moreno-García, and Fernando De la Prieta. 2020. Sentiment analysis based on deep learning: A comparative study. Electronics, 9(3):483.
  • Deng et al. (2023) Wenlong Deng, Yuan Zhong, Qi Dou, and Xiaoxiao Li. 2023. On fairness of medical image classification with multiple sensitive attributes via learning orthogonal representations. In International Conference on Information Processing in Medical Imaging, pages 158–169. Springer.
  • Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
  • Dolan and Brockett (2005) William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
  • Gallegos et al. (2023) Isabel O Gallegos, Ryan A Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, and Nesreen K Ahmed. 2023. Bias and fairness in large language models: A survey. arXiv preprint arXiv:2309.00770.
  • Hinton and Roweis (2002) Geoffrey E Hinton and Sam Roweis. 2002. Stochastic neighbor embedding. Advances in neural information processing systems, 15.
  • Hu et al. (2016) Renjun Hu, Charu C Aggarwal, Shuai Ma, and Jinpeng Huai. 2016. An embedding approach to anomaly detection. In 2016 IEEE 32nd International Conference on Data Engineering (ICDE), pages 385–396. IEEE.
  • Huang et al. (2020) Jui-Ting Huang, Ashish Sharma, Shuying Sun, Li Xia, David Zhang, Philip Pronin, Janani Padmanabhan, Giuseppe Ottaviano, and Linjun Yang. 2020. Embedding-based retrieval in facebook search. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2553–2561.
  • John et al. (2023) Jeen Mary John, Olamilekan Shobayo, and Bayode Ogunleye. 2023. An exploration of clustering algorithms for customer segmentation in the uk retail market. Analytics, 2(4):809–823.
  • Kaneko and Bollegala (2021) Masahiro Kaneko and Danushka Bollegala. 2021. Debiasing pre-trained contextualised embeddings. arXiv preprint arXiv:2101.09523.
  • Krieg et al. (2023) Klara Krieg, Emilia Parada-Cabaleiro, Gertraud Medicus, Oleg Lesota, Markus Schedl, and Navid Rekabsaz. 2023. Grep-biasir: A dataset for investigating gender representation bias in information retrieval results. In Proceedings of the 2023 Conference on Human Information Interaction and Retrieval, pages 444–448.
  • Leteno et al. (2023) Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Rémi Emonet, and Christophe Gravier. 2023. Fair text classification with wasserstein independence. arXiv preprint arXiv:2311.12689.
  • Levesque et al. (2012) Hector Levesque, Ernest Davis, and Leora Morgenstern. 2012. The winograd schema challenge. In Thirteenth international conference on the principles of knowledge representation and reasoning.
  • Liang et al. (2020) Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2020. Towards debiasing sentence representations. arXiv preprint arXiv:2007.08100.
  • Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.
  • Mary et al. (2019) Jérémie Mary, Clément Calauzenes, and Noureddine El Karoui. 2019. Fairness-aware learning for continuous attributes and treatments. In International Conference on Machine Learning, pages 4382–4391. PMLR.
  • May et al. (2019) Chandler May, Alex Wang, Shikha Bordia, Samuel R Bowman, and Rachel Rudinger. 2019. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561.
  • Mehrabi et al. (2021) Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM computing surveys (CSUR), 54(6):1–35.
  • Meng et al. (2022) Yu Meng, Jiaxin Huang, Yu Zhang, and Jiawei Han. 2022. Generating training data with language models: Towards zero-shot language understanding. Advances in Neural Information Processing Systems, 35:462–477.
  • Nadeem et al. (2020) Moin Nadeem, Anna Bethke, and Siva Reddy. 2020. Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456.
  • Nangia et al. (2020) Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. arXiv preprint arXiv:2010.00133.
  • Nissim et al. (2020) Malvina Nissim, Rik van Noord, and Rob van der Goot. 2020. Fair is better than sensational: Man is to doctor as woman is to doctor. Computational Linguistics, 46(2):487–497.
  • Packer et al. (2018) Ben Packer, Yoni Halpern, Mario Guajardo-Cspedes, and Margaret Mitchell. 2018. Text embedding models contain bias. here’s why that matters. Google Developers.
  • Palangi et al. (2016) Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. 2016. Deep sentence embedding using long short-term memory networks: Analysis and application to information retrieval. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 24(4):694–707.
  • Peltonen et al. (2023) Jaakko Peltonen, Wen Xu, Timo Nummenmaa, and Jyrki Nummenmaa. 2023. Fair neighbor embedding. In International Conference on Machine Learning, pages 27564–27584. PMLR.
  • Pogodin et al. (2022) Roman Pogodin, Namrata Deka, Yazhe Li, Danica J Sutherland, Victor Veitch, and Arthur Gretton. 2022. Efficient conditionally invariant representation learning. arXiv preprint arXiv:2212.08645.
  • Rabelo et al. (2022) Juliano Rabelo, Randy Goebel, Mi-Young Kim, Yoshinobu Kano, Masaharu Yoshioka, and Ken Satoh. 2022. Overview and discussion of the competition on legal information extraction/entailment (coliee) 2021. The Review of Socionetwork Strategies, 16(1):111–133.
  • Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR.
  • Schick and Schütze (2021) Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. arXiv preprint arXiv:2104.07540.
  • Shao et al. (2023) Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Synthetic prompting: Generating chain-of-thought demonstrations for large language models. arXiv preprint arXiv:2302.00618.
  • Shen et al. (2021) Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2021. Contrastive learning for fair representations. CoRR, abs/2109.10645.
  • Shen et al. (2022) Aili Shen, Xudong Han, Trevor Cohn, Timothy Baldwin, and Lea Frermann. 2022. Does representational fairness imply empirical fairness? In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 81–95, Online only. Association for Computational Linguistics.
  • Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
  • Team et al. (2023) Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805.
  • Tiedemann (2012) Jörg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Lrec, volume 2012, pages 2214–2218. Citeseer.
  • Wang et al. (2023) Rui Wang, Pengyu Cheng, and Ricardo Henao. 2023. Toward fairness in text generation via mutual information minimization based on importance sampling. In International Conference on Artificial Intelligence and Statistics, pages 4473–4485. PMLR.
  • Wang et al. (2019) Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. 2019. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE/CVF international conference on computer vision, pages 322–330.
  • Wu et al. (2024) Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, and Alham Fikri Aji. 2024. LaMini-LM: A diverse herd of distilled models from large-scale instructions. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 944–964, St. Julian’s, Malta. Association for Computational Linguistics.
  • Yang et al. (2023) Ke Yang, Charles Yu, Yi R Fung, Manling Li, and Heng Ji. 2023. Adept: A debiasing prompt framework. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 10780–10788.
  • Ye et al. (2022) Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, and Lingpeng Kong. 2022. Zerogen: Efficient zero-shot learning via dataset generation. arXiv preprint arXiv:2202.07922.
  • Yin et al. (2019) Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. arXiv preprint arXiv:1909.00161.
  • Yu et al. (2023) Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, and Chao Zhang. 2023. Large language model as attributed training data generator: A tale of diversity and bias. arXiv preprint arXiv:2306.15895.
  • Zerveas et al. (2022) George Zerveas, Navid Rekabsaz, Daniel Cohen, and Carsten Eickhoff. 2022. Mitigating bias in search results through contextual document reranking and neutrality regularization. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2532–2538.
  • Zhang et al. (2024) Chiyu Zhang, Honglong Cai, Yuezhang Li, Yuexin Wu, Le Hou, and Muhammad Abdul-Mageed. 2024. Distilling text style transfer with self-explanation from LLMs. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop), pages 200–211, Mexico City, Mexico. Association for Computational Linguistics.
  • Zhang et al. (2016) Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. 2016. Collaborative knowledge base embedding for recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 353–362.
  • Zhao and Gordon (2022) Han Zhao and Geoffrey J Gordon. 2022. Inherent tradeoffs in learning fair representations. The Journal of Machine Learning Research, 23(1):2527–2552.
  • Zhao et al. (2018) Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. arXiv preprint arXiv:1804.06876.
  • Zliobaite (2015) Indre Zliobaite. 2015. On the relation between accuracy and fairness in binary classification. arXiv preprint arXiv:1505.05723.

Appendix A Algorithm Details

A.1 Notation

Basic Variables
$L$ ≜ loss function.
$f$, $f^{org}$ ≜ fine-tuned and original text embedding models.
$h$ ≜ large language model.
$\theta_p$ ≜ few-shot prompts used to empower an LLM.
$A$, $a_i$ ≜ sensitive attribute set and $i$-th sensitive attribute.
$S^{a_i}$, $S^n$ ≜ text related to sensitive attribute $a_i$ and neutral text.
$C$, $C'$ ≜ content variable and predicted content.
$X^{a_i}$, $X^n$ ≜ words from group $a_i$ and neutral words in a text.
$V^{a_i}$ ≜ word list containing all collected words related to attribute $a_i$.
Table 8: Main notations used in this paper.

A.2 The significance of text embedding fairness and its distinction from subsequent task fairness

Recently, Shen et al. (2021, 2022) applied contrastive learning losses to mitigate biases in language representations for text classification, and Leteno et al. (2023); Shen et al. (2022) found that representational fairness and subsequent-task group fairness are not, or only partially, correlated. However, subsequent-task fairness and text embedding fairness represent two distinct areas that are both important and need to be distinguished:

The importance of embedding fairness: Recent efforts, as highlighted in the introduction of our paper, emphasize the significance of text embedding fairness. The fairness of embeddings is essential due to their widespread application across various systems. For instance, search engines Huang et al. (2020) preprocess all content, including documents, videos, and audio, into embeddings to save on storage. When a search query is submitted, it is converted into an embedding to retrieve the most relevant results, especially during the recall phase, where embedding similarity is used to filter through numerous documents to find pertinent ones. Moreover, embeddings are directly used in other applications such as zero-shot classification Yin et al. (2019); Radford et al. (2021), clustering John et al. (2023), and anomaly detection Hu et al. (2016), among others. Given the critical role that embeddings play in these and other applications, addressing fairness issues within the embeddings themselves is crucial.

Difference between embedding fairness and subsequent task group fairness: This paper focuses on the intrinsic fairness of text embeddings. However, the group fairness of subsequent tasks extends beyond this, incorporating additional modules that take embeddings as input for predictions, which are influenced by other sources of bias. For instance, in a medical report dataset where only females are depicted as having a cold, even if the embedding captures information about gender equally (as defined in Definition 3.1), subsequent modules in the system might still incorrectly associate women with having colds. As a result, it is important to distinguish the difference between the fairness of subsequent tasks and the intrinsic fairness of embeddings.

What we explored and can explore in the future: In this paper, we focus on text embedding fairness and study its influence on information retrieval tasks, as shown in Table 4 and Table 5. Creating fair text embeddings directly improves the fairness of information retrieval. While group fairness of subsequent tasks falls outside the scope of this paper, exploring the relationship between embedding fairness and group fairness in future work could be valuable. This exploration would involve selecting an appropriate metric Mehrabi et al. (2021) for representation fairness and disentangling the fairness of subsequent-task modules from the intrinsic fairness of embeddings.

Considering the widespread use of embeddings and the differences between group fairness and embedding fairness, we believe the fairness of text embeddings is an important research topic in itself.

A.3 Dataset Details

We generated training data using the News-Commentary-v15 corpus Tiedemann (2012), focusing on gender bias. By employing Gemini and ChatGPT for data augmentation, we obtained datasets comprising 43,221 and 42,930 sample pairs, respectively. Each pair contains texts with identical content from male, female, and neutral perspectives. We use the last 1,000 samples as the validation set and the remaining data as the training set.
For the bias evaluation dataset, we provide detailed statistics in Table 9. Our augmented dataset sets a new benchmark, featuring an extensive dataset size that enhances the robustness and comprehensiveness of bias assessment.

Evaluation Data Level Data Size
Sentence Encoder Association Test (SEAT) Text 5172
CrowS-Pairs Text 1508
StereoType Analysis Text 8497
Gender-Bias-IR Query-Doc 236
CCD-GPT (ours) Text 42,930
CCD-Gemini (ours) Text 43,221
Table 9: Dataset Statistics on various bias evaluation benchmarks.

A.4 Data Augmentation Prompts

The prompt template can be found in Figure 1. To provide a clearer demonstration, we also list the examples we used. Notably, to save computational costs, we shortened the examples and merged the selected 10 examples into 8, as shown in Table 10.

A.5 Omitted Proofs

In this section, we give a detailed proof of Theorem 3.3.

Proof.

First, the conditional independence $A \perp C' \mid C$ means that for any $a_i, a_j \in A$:

$P(C' \mid A = a_i, C) = P(C' \mid A = a_j, C)$   (8)

where $C'$ represents the content embedding. Assuming equal probabilities for different sensitive attributes, $P(a_1 \mid C) = \cdots = P(a_{|A|} \mid C)$, we can rewrite Eq. (8) as:

$P(C' \mid A = a_i, C)\, P(a_i \mid C) = P(C' \mid A = a_j, C)\, P(a_j \mid C)$
$P(C', a_i \mid C) = P(C', a_j \mid C)$   (9)

According to Section 3.1, $f(S_C^{a_i})$ encodes both content and sensitive information, allowing us to obtain:

$P(f(S_C^{a_i}) \mid C) = P(f(S_C^{a_j}) \mid C)$   (10)

Because a fair and well-trained embedding model $f$ can effectively extract the content $C$ from the neutral text $S_C^n$ without introducing bias, we can approximate Eq. (10) as:

$P(f(S_C^{a_i}) \mid f(S_C^n)) = P(f(S_C^{a_j}) \mid f(S_C^n))$   (11)

Following Hinton and Roweis (2002); Yang et al. (2023), the conditional probability $P(f(S_C^{a_i}) \mid f(S_C^n))$ can be represented as the similarity between $f(S_C^{a_i})$ and $f(S_C^n)$ and modeled using a Gaussian distribution. We thus measure $P(f(S_C^{a_i}) \mid f(S_C^n))$ by calculating:

$P(f(S_C^{a_i}) \mid f(S_C^n)) = \frac{\exp\left(-\frac{\lVert f(S_C^{a_i}) - f(S_C^n)\rVert^2}{2\rho^2}\right)}{\sum_{a_i \in A} \exp\left(-\frac{\lVert f(S_C^{a_i}) - f(S_C^n)\rVert^2}{2\rho^2}\right)}$   (12)

where $\rho$ controls the falloff of $P$ with respect to distance and is set by hand. Eq. (12) can be interpreted as follows: (1) Consider placing a Gaussian distribution with covariance matrix equal to $\rho$ times the identity matrix at the embedding of a neutral text with content $C$, denoted $f(S_C^n)$. Then, a text with the same content but containing sensitive information $a_i$ appears in this distribution with probability proportional to $\exp\left(-\frac{\lVert f(S_C^{a_i}) - f(S_C^n)\rVert^2}{2\rho^2}\right)$, the numerator. (2) The denominator aggregates these probabilities across all sensitive groups $a_i \in A$ and serves as the normalization factor. Combining Eq. (11) and Eq. (12), we obtain:

$\frac{\exp\left(-\frac{\lVert f(S_C^{a_i}) - f(S_C^n)\rVert^2}{2\rho^2}\right)}{\sum_{a_i \in A} \exp\left(-\frac{\lVert f(S_C^{a_i}) - f(S_C^n)\rVert^2}{2\rho^2}\right)} = \frac{\exp\left(-\frac{\lVert f(S_C^{a_j}) - f(S_C^n)\rVert^2}{2\rho^2}\right)}{\sum_{a_j \in A} \exp\left(-\frac{\lVert f(S_C^{a_j}) - f(S_C^n)\rVert^2}{2\rho^2}\right)}$
$\exp\left(-\frac{\lVert f(S_C^{a_i}) - f(S_C^n)\rVert^2}{2\rho^2}\right) = \exp\left(-\frac{\lVert f(S_C^{a_j}) - f(S_C^n)\rVert^2}{2\rho^2}\right)$
$\lVert f(S_C^{a_i}) - f(S_C^n)\rVert^2 = \lVert f(S_C^{a_j}) - f(S_C^n)\rVert^2$   (13)

Thus we obtain Theorem 3.3. As a result, achieving conditional independence between sensitive attributes and content embeddings is equivalent to achieving content-conditioned equal distance. ∎

Example Original passage Neutral passage Male passage Female passage
Example 1 The high popularity of the current president (Socialist Michelle Bachelet, Chile’s first female chief executive) The high popularity of the current president (A Socialist, Chile’s first chief executive) The high popularity of the current president (Socialist Mike Bachelet, Chile’s first male chief executive) The current president (Socialist Michelle Bachelet, Chile’s first female chief executive)
Example 2 Rwanda has the highest female legislators in the world. Rwanda has the highest legislators in the world. Rwanda has the highest male legislators in the world. Rwanda has the highest female legislators in the world.
Example 3 When a kid arrived, accompanied by a doting father, the prophet’s son. When a kid arrived, accompanied by a doting parent, the prophet’s child. When a kid arrived, accompanied by a doting father, the prophet’s son. When a kid arrived, accompanied by a doting mother, the prophet’s daughter.
Example 4 wizards Hunt people, poor paternal nutrition. People Hunt people, poor nutrition. wizards Hunt people, poor paternal nutrition. Witch Hunt people, poor maternal nutrition.
Example 5 Bruni’s life path become opera divo, barman and actress. A people’s life path become opera performer, bar staff and acting. Michael’s life path become opera diva, barwoman and actor. Bruni’s life path become opera divo, barman and actress.
Example 6 Ally is marchioness, Bride for Sarkozy. they are noble, partner of someone. Alexandria is marquis, Groom for Sara. Ally is marchioness, Bride for Sarkozy.
Example 7 Mike embarked on a fascinating experiment with sons. Leader embarked on a fascinating experiment with offsprings. Mike embarked on a fascinating experiment with sons. Merkel embarked on a fascinating experiment with daughters.
Example 8 Orban and Tomy appointed a police as his secretary, most strong-minded male Democrat. They appointed a police as their secretary, most strong-minded Democrat. Orban and Tomy appointed a police as his secretary, most strong-minded male Democrat. Olivia and Michelle appointed a police as her secretary, most strong-minded female Democrat.
Table 10: Task template and prompt examples for gender-neutral, male, and female passages.