Advocating for the Silent: Enhancing Federated Generalization for Non-Participating Clients
Abstract
Federated Learning (FL) has surged in prominence due to its capability of collaborative model training without direct data sharing. However, the vast disparity in local data distributions among clients, often termed the Non-Independent Identically Distributed (Non-IID) challenge, poses a significant hurdle to FL’s generalization efficacy. The scenario becomes even more complex when not all clients participate in the training process, a common occurrence due to unstable network connections or limited computational capacities. This can greatly complicate the assessment of the trained models’ generalization abilities. While a plethora of recent studies has centered on the generalization gap pertaining to unseen data from participating clients with diverse distributions, the distinction between the training distributions of participating clients and the testing distributions of non-participating ones has been largely overlooked. In response, our paper unveils an information-theoretic generalization framework for FL. Specifically, it quantifies generalization errors by evaluating the information entropy of local distributions and discerning discrepancies across these distributions. Inspired by our deduced generalization bounds, we introduce a weighted aggregation approach and a duo of client selection strategies. These innovations are designed to strengthen FL’s ability to generalize and thus ensure that trained models perform better on non-participating clients by incorporating a more diverse range of client data distributions. Our extensive empirical evaluations reaffirm the potency of our proposed methods, aligning seamlessly with our theoretical construct.
Index Terms:
Federated learning, non-participating clients, information theory, generalization theory
I Introduction
Federated Learning (FL) offers a collaborative paradigm to train a shared global model across distributed clients, ensuring data privacy by eliminating the need for direct data transfers [1, 2, 3]. Therefore, FL provides a secure architecture to bolster confidentiality and effectively manage diverse sensitive data [4, 5], ranging from financial to healthcare information. Consequently, FL applications span various domains, encompassing finance [6], healthcare [4], recommendation systems [7], the Internet-of-Things (IoT) [8], and more. However, the inherent heterogeneity among clients—often due to their distinct operational environments—gives rise to Non-Independent and Identically Distributed (Non-IID) scenarios [9, 10, 11]. This distinctness complicates the assessment of FL's generalization capabilities, setting it apart from traditional centralized learning [12, 13, 14].

While prevailing research on FL generalization predominantly concentrates on actively participating clients [13, 15, 16], it offers a limited view: it addresses the model's adaptability to observed local distributions without reconciling the divergence between observed data distributions in actively participating clients and unobserved data distributions from passive non-participating clients. In real-world settings, numerous clients may remain detached from the training process due to unstable network connectivity or other constraints [10, 17]. For example, in cross-device federated edge learning, IoT devices, although reliant on robust deep learning models for edge inference [18, 19], often abstain from FL owing to computational and communication constraints [20]. Figure 1 depicts an FL system in which participating clients gather data to train a global model designed to provide services for non-participating clients. This landscape raises a pivotal question: Can models, honed by active participants, cater effectively to passive clients that do not participate in the training process?
At the heart of this question is the assurance of optimal performance on non-participating clients for the model trained by FL. To properly evaluate the generalization performance of models, some recent endeavors quantify the performance disparity between models tested on active versus passive clients, proposing a dual-level framework that evaluates both out-of-sample and out-of-distribution generalization gaps, grounded in the premise that client data distributions stem from a meta-distribution [10, 12]. Yet, the approach of [12] primarily sketches the contours of the FL generalization gap, substantiating it with empirical evidence but without a comprehensive theoretical underpinning. Conversely, the approach of [10] accentuates theoretical insights on meta-distribution-based errors but leaves practical algorithmic solutions largely unexplored, particularly those that might uplift the global model's adaptability for passive clients.
Motivation. Compared with traditional FL deploying a global model for actively participating clients and assuming an alignment between training and testing data distributions, our investigation spotlights the global model’s out-of-distribution generalization capabilities when trained by participating clients. In essence, we seek to cultivate a model via FL across active clients that reliably serves even the passive ones. Specifically, a novel contribution of our paper is the introduction of the self-information weighted expected risk – a metric to gauge model generalization. Our hypothesis is grounded in the conception that a model exhibiting proficiency with low-probability examples from training distributions might demonstrate adaptability to unfamiliar testing distributions. Such examples are anticipated to hold a greater significance in these unseen datasets. Operationalizing this concept within FL, our framework leverages the self-information of examples to craft a generalization boundary. Delving into the resulting information entropy-aware and distribution discrepancy-aware generalization disparities, a revelation emerges: data sources tend to manifest informational redundancy. This implies that certain data sources, characterized by diminished informational weight, can be seamlessly supplanted by others. Essentially, only a fraction of the client base holds substantial sway over FL’s generalization. This redundancy is especially pronounced in the IoT scenario, where a myriad of edge devices—often operating in overlapping zones—partake in FL [21]. Consider drones, serving as FL clients, and amassing spatial data for model training. Drones constrained by limited flight spans might be rendered redundant due to their shared operational territories with other drones [22]. Informed by these insights, our paper further proposes strategies to enhance the generalization capabilities of FL, enabling trained models to better serve non-participating clients.
Contributions. Our paper presents several key contributions.
• We introduce a novel theoretical framework to scrutinize the generalization error in FL. Distinctly, our approach shines a light on the distribution discrepancy, an aspect largely glossed over in preceding research. This framework adeptly harnesses the information entropy of data sources, coupled with the distribution variances among them, to provide a more refined insight into the generalization potential of models.
• Drawing from our theoretical results, we devise a weighted aggregation approach alongside a duo of client selection methods. These are designed to amplify the generalization prowess of FL, so that trained models can provide better service for non-participating clients.
• Empirical evaluations on three widely referenced datasets underscore the efficacy of our methods, which consistently eclipse the benchmarks, in line with our theoretical findings.
II Related Work
Data heterogeneity is a major challenge in federated learning [23, 24, 25]. Despite numerous studies investigating the generalization error in the presence of data heterogeneity [13, 15, 26], most of these have focused only on scenarios where a global model is trained on distributed data sources and tested on unseen data sampled from these same sources. While [10, 12, 16] also consider this generalization problem in FL, they neither account for local distribution characteristics nor propose methods to enhance the out-of-distribution generalization performance of models. In essence, they cannot ensure that models trained via FL will perform effectively for non-participating clients. In contrast, our work is motivated by the need to design algorithms that train a global model with good generalization performance on passive clients (those that do not participate in the training process) with unknown data distributions. Our framework takes the local distribution properties into account and provides methods for improving generalization performance in this setting.
Additionally, several studies propose information-theoretic generalization analyses for FL [27, 28, 29, 30]. For instance, [27] developed a framework for generalization bounds that also accounts for privacy leakage in FL. [28] presented generalization bounds for FL problems with Bregman divergence or Lipschitz continuous losses. [30] derived an algorithm-dependent generalization bound for federated edge learning. However, these works focus on bounding the generalization error via the mutual information between models and training samples, ignoring the information stored in the data sources themselves. Moreover, they only consider the in-distribution generalization error, under the assumption that the source distribution is identical to the target distribution.
Besides, compared with Federated Domain Generalization (FedDG) [14, 31, 32], which deals with the challenge of domain distribution shift under FL settings, our focus is on a fixed collection of data sources within FL. Our objective is to train a global model on the seen distributions of participating clients and generalize it to the unseen distributions of non-participating clients. Additionally, FedDG commonly assumes that each domain carries equally informative data and that no domain can be represented by another; in other words, FedDG does not consider the inherent distribution correlation or information redundancy across data sources. In contrast, we account for the presence of information redundancy among data sources, under which only a subset of clients contributes to the generalization of FL; our focus is thus to identify representative data sources. In terms of developed algorithms, most DG studies concentrate on learning an invariant representation across different domains [33, 34, 35]. Conversely, our focus is to design proper weighting aggregation and client selection methods to mitigate the generalization error in FL.
III Theoretical Framework
Similar to previous studies [10, 12], we model each data source as a random variable with its corresponding distribution. In order to better evaluate the generalization performance of FL to ensure trained models perform well for non-participating clients, we focus on the information-theoretic generalization gap in FL. Furthermore, similar to [10, 13], we also utilize uniform convergence-based generalization analysis in this study. This technique, as discussed by [36], is algorithm-independent, making it applicable to a wide range of federated optimization algorithms [2, 37, 38]. To conserve space, we have provided proofs for all our theorems in the appendix.
III-A Preliminaries
Let the sample space $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ be the set of all possible outcomes (e.g., image-label pairs) considered in this paper, where $\mathcal{X}$ is the feature space and $\mathcal{Y}$ is the label space. Let $\mathcal{C}$ be the index set of all possible clients. The total number of clients in $\mathcal{C}$ is $K$, which is possibly infinite. We assume that only the clients in a finite subset $\mathcal{P} \subseteq \mathcal{C}$ practically participate in FL, and the number of these clients is $N$. Following [10], $K$ is commonly much larger than $N$ due to unreliable network links. Additionally, we can only select a subset $\mathcal{S} \subseteq \mathcal{P}$ for data collection and local model training in each round of FL. This approach offers the advantage of reducing both the computational load and communication burden on participating clients. We denote the number of selected clients by $m$. Therefore, we have $\mathcal{S} \subseteq \mathcal{P} \subseteq \mathcal{C}$ and $m \le N \ll K$.
In this paper, we assume that each participating client $i \in \mathcal{P}$ is associated with a local data source $Z_i$, where $Z_i$ is a discrete random variable with probability mass function $P_i$ supported on $\mathcal{Z}$. The sequence of participating data sources is denoted by $Z_{1:N} = (Z_1, \dots, Z_N)$, and the corresponding joint distribution is denoted by $P_{1:N}$ in the following. The local distribution of each data source differs from the others, i.e., $P_i \neq P_j$ for $i \neq j$, which is common in FL [1, 10, 11].
The local training set $S_i$ stored on participating client $i$ consists of $n_i$ i.i.d. realizations from the local data source $Z_i$. Referring to [12], the objective of federated generalization is to train a global model on $S = \bigcup_{i=1}^{N} S_i$ such that all possible clients in $\mathcal{C}$ will be provided satisfactory service by this global model trained by the participating clients. Let $\mathcal{H}$ be a hypothesis class on $\mathcal{Z}$. The loss function $\ell: \mathcal{H} \times \mathcal{Z} \to \mathbb{R}_{\ge 0}$ is non-negative, and we assume that $\ell$ is bounded by $M$ and Lipschitz continuous. For simplicity, we write $\ell(h, z)$ for the loss of model $h$ on outcome $z$ in the following.
To understand the proposed framework better, we first present some definitions as follows:
Definition 1 (Self-information weighted expected risk).
The self-information of outcome $z \in \mathcal{Z}$ under distribution $P_i$ is denoted by $I_i(z) = -\log P_i(z)$. Then, the self-information weighted expected risk of model $h$ on data source $Z_i$ is defined by

$$R_i^{\mathrm{SI}}(h) \;=\; \mathbb{E}_{z \sim P_i}\bigl[\, I_i(z)\, \ell(h, z) \,\bigr] \;=\; -\sum_{z \in \mathcal{Z}} P_i(z) \log P_i(z)\, \ell(h, z), \tag{1}$$

where $h$ is a specific model in $\mathcal{H}$ and $\ell(h, z)$ is the loss of model $h$ on sample $z$.
The rationale behind the risk formulated in (1) is that target distributions are unknown at training time under the OOD setting. The proposed loss $I_i(z)\,\ell(h, z)$ is indicative of the requisite focus on outcomes with lower probabilities under source distributions, since they may have higher probabilities under the unknown target distributions. Additionally, it is reasonable to assume that unknown data sources have maximum-entropy distributions, signifying greater uncertainty, exemplified by a uniform distribution over discrete labels. For example, IoT devices may only collect data in specific areas, while the global model trained by these devices is expected to provide spatial-related services for all devices across the entire area [21]. Hence, the self-information of each outcome is indispensable for measuring the expected risk in such a situation. Moreover, applying FL in the healthcare field [39] should focus on silos containing rare disease cases, which underscores the significance of rare yet informative samples, thereby supporting our viewpoint.
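As a toy numerical illustration of this weighting (the numbers are our own, not drawn from the paper's experiments; logarithms taken base 2):

$$I(z_{\text{common}}) = -\log_2 0.9 \approx 0.152 \text{ bits}, \qquad I(z_{\text{rare}}) = -\log_2 0.1 \approx 3.32 \text{ bits},$$

so, for the same loss value, an outcome observed with probability $0.1$ contributes roughly $3.32 / 0.152 \approx 22$ times the weight of an outcome observed with probability $0.9$ in the risk (1).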
Definition 2 (Joint self-information weighted expected risk).
The self-information of one particular combination of outcomes $z_{1:N} = (z_1, \dots, z_N)$ in the product space $\mathcal{Z}^N$ is denoted by $I(z_{1:N}) = -\log P_{1:N}(z_{1:N})$. We use the term $\bar{\ell}(h, z_{1:N}) = \frac{1}{N} \sum_{i=1}^{N} \ell(h, z_i)$ as the loss below; it denotes the average loss of model $h$ over the combination $z_{1:N}$. Then, the joint self-information weighted expected risk on multiple data sources is defined by

$$R_{1:N}^{\mathrm{SI}}(h) \;=\; \mathbb{E}_{z_{1:N} \sim P_{1:N}}\bigl[\, I(z_{1:N})\, \bar{\ell}(h, z_{1:N}) \,\bigr], \tag{2}$$

where $h$ is a specific model in the hypothesis space $\mathcal{H}$. Similarly, the joint self-information reflects the uncertainty of the event that the outcomes $z_1, \dots, z_N$ are sampled from $Z_1, \dots, Z_N$ respectively.
Analogously, the motivation for defining the risk in Eq. (2) is that the distributions of non-participating clients may differ significantly from those of participating clients. We must account for the self-information of every possible combination of outcomes to ensure that a model performing well on participating clients will also do well on non-participating clients with distinct distributions.
Moving forward, we introduce the general training objective in FL, as well as the self-information weighted semi-empirical risk proposed in this paper. Let $S_i = \{ z_i^{(j)} \}_{j=1}^{n_i}$ be the local training set of the $i$-th participating client, consisting of $n_i$ i.i.d. realizations of the data source $Z_i$, where $n_i$ denotes the size of $S_i$. $S = \bigcup_{i=1}^{N} S_i$ is the whole training set over all participating clients. The empirical risk minimization (ERM) objective in federated learning [10] is formulated as follows:

$$\hat{R}_S(h) \;=\; \sum_{i=1}^{N} \lambda_i \, \frac{1}{n_i} \sum_{j=1}^{n_i} \ell\bigl(h, z_i^{(j)}\bigr), \tag{3}$$

where $z_i^{(j)}$ denotes the $j$-th training sample of the $i$-th client and $\lambda_i$ is the weighting factor of client $i$, with $\sum_{i=1}^{N} \lambda_i = 1$. The empirical risk minimizer is defined by $\hat{h} = \arg\min_{h \in \mathcal{H}} \hat{R}_S(h)$.
Motivated by the definition of the semi-empirical risk in [10], we further propose the self-information weighted semi-empirical risk rooted in the data sources $Z_{1:N}$:

$$\tilde{R}_{1:N}^{\mathrm{SI}}(h) \;=\; \sum_{i=1}^{N} \lambda_i \, R_i^{\mathrm{SI}}(h). \tag{4}$$

The proposed self-information weighted semi-empirical risk measures the average information-theoretic performance of $h$ on the participating data sources $Z_{1:N}$.
III-B Federated Generalization
We now formally introduce our proposed information-theoretic generalization framework for FL. We first define the information-theoretic generalization gap in FL as follows.
Definition 3 (Information-theoretic generalization gap in federated learning).

$$\Delta_{1:N}(h) \;=\; R_{1:N}^{\mathrm{SI}}(h) - \hat{R}_S(h), \tag{5}$$

where $h \in \mathcal{H}$.
The motivation for defining the gap in (5) is that we want to know the performance gap between the empirical risk evaluated on the training set $S$ and the joint self-information weighted expected risk evaluated under $P_{1:N}$. Based on our aforementioned analysis, this generalization gap reflects how well the trained model will perform on unknown data sources.
Furthermore, we decompose the original generalization gap in (5) as follows,

$$R_{1:N}^{\mathrm{SI}}(h) - \hat{R}_S(h) \;=\; \underbrace{R_{1:N}^{\mathrm{SI}}(h) - \tilde{R}_{1:N}^{\mathrm{SI}}(h)}_{\text{distributed learning gap}} \;+\; \underbrace{\tilde{R}_{1:N}^{\mathrm{SI}}(h) - \hat{R}_S(h)}_{\text{semi-generalization gap}}. \tag{6}$$
We conduct our theoretical analysis using the following assumption.
Assumption 1 (Limited Independence).
Among the participating data sources, each single data source $Z_i$ is independent of the sequence of the other participating data sources, i.e., $P_{1:N} = \prod_{i=1}^{N} P_i$. In addition, the data sources of the selected clients $\{Z_i\}_{i \in \mathcal{S}}$ are independent of the unselected data sources $\{Z_i\}_{i \in \mathcal{P} \setminus \mathcal{S}}$.
Building on the above assumption, we can derive two lemmas introduced in the appendix and further obtain the following theorem about the information-theoretic generalization gap in FL.
Theorem 1 (Information entropy-aware generalization gap in FL).
Let $\mathcal{F}$ be a family of functions related to the hypothesis space $\mathcal{H}$ with VC dimension $d$. For any $\delta \in (0, 1)$, if $\ell$ is bounded by $M$, then with probability at least $1 - \delta$,

(7)

where $c$ is a constant.
Remark 1.
Theorem 1 asserts that increasing the weighted entropy sum $\sum_{i=1}^{N} \lambda_i H(Z_i)$ will reduce this generalization bound. This suggests that models trained by FL will exhibit enhanced performance on unknown data sources when a greater weighting factor is assigned to data sources with richer information. In addition, model complexity and sample complexity also affect the generalization capacity of FL. Notice that alternative metrics, such as the Rademacher complexity and covering numbers, can be employed to refine VC-dimension-based generalization bounds [10, 40], but delving into these is beyond the scope of this paper. Our emphasis lies in devising algorithms by leveraging insights from the proposed information-theoretic generalization framework.
We then consider assigning an identical weighting factor $\lambda_i = 1/N$ to each client. To explore the impact on each individual client of the distributed learning performed on decentralized data sources, we examine the average information-theoretic distributed learning gap, i.e., the distributed learning gap in (6) averaged over clients, and show that its upper bound is related to the entropy rate of the stochastic process $\{Z_i\}_{i \ge 1}$.
Corollary 1.
Assume the entropy rate of the stochastic process $\{Z_i\}_{i \ge 1}$ exists. Letting the weighting factor be $\lambda_i = 1/N$ for each client, we have

(8)
Remark 2.
Corollary 1 indicates that if the entropy rate of exists, the average information rate or uncertainty associated with the considered stochastic process influences the generalization of FL.
In the following, we consider leveraging only a selected subset $\mathcal{S} \subseteq \mathcal{P}$, with an identical weighting factor $1/m$ for each selected client, to measure the generalization gap in FL. In other words, we turn to the following generalization gap,

(9)

where the notation parallels that of (5), restricted to the selected subset $\mathcal{S}$.
Similarly, we can decompose this gap as follows,

(10)
Referring to the derivation presented in the proof of Theorem 1 in the appendix, we first derive the upper bound of the proposed information-theoretic selection gap below.
Theorem 2 (Correlation-aware selection gap in FL).
If $\ell$ is bounded by $M$, we have

(11)

where $I(\{Z_i\}_{i \in \mathcal{S}}\,;\, \{Z_i\}_{i \in \mathcal{P} \setminus \mathcal{S}})$ is the mutual information between the selected and unselected data sources, measuring the correlation between these two groups of data sources.
Remark 3.
Theorem 2 demonstrates that lower mutual information results in a smaller selection gap. This implies that the performance of models trained by the selected clients will be enhanced if the unselected data sources share little information with the selected ones. In other words, Theorem 2 reveals that participating data sources contain redundant information, and a subset of them is adequate to represent the entirety.
Based on this result, we can further establish another theorem that pertains to the information-theoretical generalization gap in FL under the considered client selection scenario.
Theorem 3 (Distribution discrepancy-aware generalization gap in FL).
Let $\mathcal{F}$ be a family of functions related to the hypothesis space $\mathcal{H}$ with VC dimension $d$. For any $\delta \in (0, 1)$, if $\ell$ is bounded by $M$, then with probability at least $1 - \delta$,

(12)

where $c$ is a constant and $H(P_i, P_j)$ denotes the cross entropy between two local distributions $P_i$ and $P_j$, measuring the dissimilarity between them.
Remark 4.
Theorem 3 indicates that lower dissimilarity between the unselected distributions and the other participating distributions reduces the generalization gap. Notice that we do not need to compute the cross entropy in practice; the derived bound only inspires the design of client selection algorithms that enhance federated generalization.
IV Methods
This section introduces a weighting aggregation approach, along with two client selection methods in FL, rooted in the theoretical findings above. The objective of these methods is to enhance the generalization performance of FL.
IV-A Maximum Entropy Aggregation
Inspired by Theorem 1, it is easy to see that minimizing the information-theoretic generalization gap amounts to maximizing the term

$$\sum_{i=1}^{N} \lambda_i H(Z_i), \tag{13}$$

where $H(Z_i) = -\sum_{z \in \mathcal{Z}} P_i(z) \log P_i(z)$ is the information entropy of data source $Z_i$.
The true entropy of data source $Z_i$ in (13) is inaccessible in practice, so we cannot assign the weighting factors $\lambda_i$ directly from it. Instead, we design proper weighting factors for the local gradients in federated aggregation to maximize this term.
Empirical entropy-based weighting: Based on the above analysis, the weighting factor of the local gradients or local models should increase proportionally to the information entropy of the data source $Z_i$. This paper considers only the label distribution skew scenario for verifying the proposed empirical entropy-based weighting method. Therefore, we design the aggregation weighting factor as follows:

$$\lambda_i \;=\; \frac{\hat{H}(Z_i)}{\sum_{j=1}^{N} \hat{H}(Z_j)}. \tag{14}$$

In this paper, the empirical entropy is calculated via $\hat{H}(Z_i) = -\sum_{y \in \mathcal{Y}} \hat{p}_i(y) \log \hat{p}_i(y)$ with $\hat{p}_i(y) = \frac{1}{n_i} \sum_{j=1}^{n_i} \mathbb{1}\{ y_i^{(j)} = y \}$, where $y_i^{(j)}$ is the label of the $j$-th sample of the local dataset of client $i$ and $\mathbb{1}\{\cdot\}$ denotes the indicator function. The proposed maximum entropy aggregation can be applied to other distribution shift scenarios whenever the empirical entropy of the data sources can be estimated [41]. How to leverage this aggregation method to benefit federated generalization in those scenarios is beyond the scope of this paper.
The detailed workflow of the proposed empirical entropy-based weighting method is as follows. Before FL starts, each participating client calculates the empirical entropy in Eq. (14) based on its local dataset and uploads this empirical entropy to the server. The server can thus assign the aggregation weighting factors for all the clients based on the received empirical entropies prior to the first round of FL.
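For concreteness, the following is a minimal sketch of the empirical entropy computation and the weighting in Eq. (14) for the label-skew setting; the function names and toy data are our own illustration, not the paper's implementation.

```python
import numpy as np

def empirical_label_entropy(labels: np.ndarray) -> float:
    """Plug-in (empirical) entropy of a client's label distribution, in nats."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def entropy_weights(client_labels: list[np.ndarray]) -> np.ndarray:
    """Aggregation weights proportional to each client's empirical entropy, Eq. (14)."""
    ent = np.array([empirical_label_entropy(y) for y in client_labels])
    return ent / ent.sum()

# Toy usage: a near-uniform client receives a larger weight than a skewed one.
clients = [np.array([0] * 50 + [1] * 50), np.array([0] * 95 + [1] * 5)]
print(entropy_weights(clients))  # approximately [0.78, 0.22]
```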
In the following, we will discuss the effect of privacy computation, noisy data and computation-communication costs on our proposed method.
Privacy Computation: Privacy computation techniques can be integrated into the proposed methods: clients upload the empirical entropy of local data sources via homomorphic encryption or secure multi-party computation with low computation overhead since the empirical entropy is a scalar value.
Noisy Data Challenges: If local models are trained on severely noisy data, we can regard such a challenge as a form of Byzantine attack [42] and address it with corresponding solutions. We can also identify outliers with extremely large entropy and expel them from FL. Additionally, replacing entropy with a relative metric that measures the similarity between the local label distribution and the uniform distribution can mitigate the effect of noisy data, as sketched below. Besides, the Noisy Clients Detection and Noisy Robust Aggregation methods proposed in [43] can be integrated with our approaches to alleviate the influence of noisy data.
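One possible instantiation of such a relative metric (a sketch of our own, not an algorithm from the paper) lets a client's weight decay with the KL divergence of its empirical label distribution from uniform:

```python
import numpy as np

def uniformity_weight(labels: np.ndarray, n_classes: int) -> float:
    """Weight based on closeness of the empirical label distribution to uniform.

    KL(p || uniform) = sum_y p(y) * log(p(y) * C); the weight is 1.0 for an
    exactly uniform distribution and shrinks as the distribution grows skewed.
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    p = counts / counts.sum()
    kl_to_uniform = float(np.sum(p[p > 0] * np.log(p[p > 0] * n_classes)))
    return float(np.exp(-kl_to_uniform))
```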
Computation and Communication Costs: In this study, we assume that the data sources of participating clients are stationary during the whole FL process. Hence, each client only needs to upload its empirical entropy to the server once, which reduces the communication cost.
IV-B Gradient similarity-based client selection
Below, we will outline the proposed client selection methods based on the theory presented. Before delving into the specifics of these methods, we first introduce an assumption that underpins the forthcoming discussion.
Assumption 2 (Bounded dissimilarity).
The dissimilarity between two local gradients is bounded by the divergence of the corresponding local data distributions, i.e.,

$$\bigl\| \nabla F_i(w) - \nabla F_j(w) \bigr\| \;\le\; \kappa_{ij}, \tag{15}$$

where $\nabla F_i(w) = \mathbb{E}_{z \sim P_i}[\nabla \ell(w, z)]$ denotes the practical local gradient calculated by client $i$ without considering the self-information of samples, $w$ denotes the global model, and $Z_i$ denotes the $i$-th data source. The bound $\kappa_{ij}$ depends on the dissimilarity of the data distributions: a higher level of distribution similarity between $P_i$ and $P_j$ incurs a lower $\kappa_{ij}$ [44, 37]. This is because the gradient gap can be bounded via the total variation distance and then, using the Lipschitz continuity of the loss together with Pinsker's inequality, via the KL divergence, as spelled out below. Hence, $\kappa_{ij}$ is related to the term $\mathrm{KL}(P_i \,\|\, P_j)$.
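One way to make this chain explicit, as a sketch under the additional assumption (ours) that the per-sample gradients are uniformly bounded, $\|\nabla \ell(w, z)\| \le G$:

$$\bigl\| \nabla F_i(w) - \nabla F_j(w) \bigr\| \;=\; \Bigl\| \sum_{z \in \mathcal{Z}} \nabla \ell(w, z)\, \bigl( P_i(z) - P_j(z) \bigr) \Bigr\| \;\le\; 2 G \, \mathrm{TV}(P_i, P_j) \;\le\; 2 G \sqrt{\tfrac{1}{2}\, \mathrm{KL}(P_i \,\|\, P_j)},$$

where the last step is Pinsker's inequality, so $\kappa_{ij}$ can be taken proportional to $\sqrt{\mathrm{KL}(P_i \,\|\, P_j)}$.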
Assumption 2 and Theorem 3 can help us to design client selection methods for improving the generalization performance of FL in the following.
We begin by introducing the general procedure of the client selection methods. In the following, we use $g_i^{(t)}$ to denote the stochastic gradient calculated by the $i$-th client at the $t$-th round. The server repeats the following steps in each round of FL:
• (Update the gradient table): After updating the global model based on the local gradients uploaded by the selected clients, the server updates and maintains a table $\mathcal{T}$ that stores the latest local gradient uploaded by each client selected in any round. More specifically, the server sets $\mathcal{T}[i] \leftarrow g_i^{(t)}$ for every client $i$ selected at round $t$, leaving the other entries unchanged (see the sketch after this list).
• (Execute the selection algorithm): The server applies one of the client selection algorithms described below, utilizing the local gradients stored in the gradient table $\mathcal{T}$, in order to determine the clients that will participate in the next round of FL.
Notice that the server can maintain a dynamic table containing the client IDs, their respective entropy values, and the latest local gradients. This table empowers the server to effectively handle dynamic clients and execute proposed algorithms. Hence, the proposed method is robust to client churn, such as clients frequently joining, leaving, and rejoining the system.
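A minimal sketch of this server-side bookkeeping, assuming gradients are flattened into vectors; the class and method names are ours, purely illustrative:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class GradientTable:
    """Server-side record of each client's latest gradient and empirical entropy."""
    grads: dict[int, np.ndarray] = field(default_factory=dict)
    entropy: dict[int, float] = field(default_factory=dict)

    def update(self, client_id: int, grad: np.ndarray) -> None:
        # Overwrite the entry with the newest round's gradient.
        self.grads[client_id] = grad

    def drop(self, client_id: int) -> None:
        # Handle churn: forget clients that leave the system.
        self.grads.pop(client_id, None)
        self.entropy.pop(client_id, None)
```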
IV-B1 Minimax gradient similarity-based client selection
Recall Theorem 3: minimizing the distribution discrepancy-aware generalization gap essentially amounts to selecting clients whose distributions differ significantly from the other participating distributions, which can improve the generalization performance of FL. Furthermore, with reference to Assumption 2, this objective can be formulated as a tractable optimization problem,

$$\min_{\mathcal{S} \subseteq \mathcal{P},\, |\mathcal{S}| = m} \;\; \max_{i \in \mathcal{S}} \;\; \max_{j \in \mathcal{P},\, j \neq i} \; \mathrm{sim}\bigl( g_i, g_j \bigr), \tag{16}$$

where $\mathrm{sim}(g_i, g_j)$ represents the similarity between the gradients $g_i$ and $g_j$ uploaded by client $i$ and client $j$ respectively, evaluated using the cosine similarity in this work. This choice is motivated by the fact that the distance between bounded local gradients can be bounded by the distribution discrepancy between the corresponding local distributions (Assumption 2). Besides, the cosine similarity is related to the Euclidean distance—for normalized gradients, $\|g_i - g_j\|^2 = 2\bigl(1 - \mathrm{sim}(g_i, g_j)\bigr)$—and it provides a more stable measure in FL [45]. We propose a feasible approximate method for solving this optimization problem in Algorithm 1. Additionally, in Algorithm 1, we define the concept of a "similarity set" $\mathcal{D}_i$, which collects the gradient similarities between the local gradient of participating client $i$ and the gradients calculated by the other participating clients. We now provide the workflow of the proposed minimax gradient similarity-based client selection:
• (Constructing the similarity set): First, the server builds a "similarity set" $\mathcal{D}_i$ for each stored gradient $g_i$. This set contains the cosine similarities between $g_i$ and all the other stored gradients, i.e., $\mathcal{D}_i = \{ \mathrm{sim}(g_i, g_j) : j \neq i \}$.
• (Calculating the maximum similarity): Next, the server builds another set that stores the maximum similarity in each $\mathcal{D}_i$; this maximum similarity measures the degree to which one data source resembles its closest counterpart among the other data sources.
• (Selecting clients with the smallest maximum similarity): Finally, the server selects the clients whose maximum similarities are the smallest in the maximum-similarity set.
The core idea behind the aforementioned operations is to identify data sources that are “distant” from other data sources, thereby approximately achieving the objective stated in (16).
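A compact sketch of this minimax selection rule (our own illustrative implementation of the three steps above, not the paper's Algorithm 1):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def minimax_similarity_selection(grads: dict[int, np.ndarray], m: int) -> list[int]:
    """Select the m clients whose *maximum* cosine similarity to any other
    stored gradient is smallest, i.e. the most 'isolated' data sources."""
    ids = list(grads)
    max_sim = {}
    for i in ids:
        sims = [cosine_sim(grads[i], grads[j]) for j in ids if j != i]
        max_sim[i] = max(sims)  # similarity to the closest other client
    return sorted(ids, key=lambda i: max_sim[i])[:m]
```

Selecting the m smallest entries of the maximum-similarity set is a greedy surrogate for the combinatorial problem in (16), which keeps the per-round cost at a quadratic number of similarity computations.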
IV-B2 Convex hull construction-based client selection
In the following, we provide another client selection method to enhance federated generalization based on our theoretical findings. Notice that we can also minimize the generalization gap in Theorem 3 by maximizing the following term,

(17)

The above objective suggests that the larger the difference between the unselected distributions and the other participating distributions, the smaller the generalization gap in Theorem 3. Following the insights gained from our analysis and Assumption 2, a convex hull construction-based client selection policy is proposed to enhance the generalization of FL. The key idea of this method is to identify the vertices of the convex hull of the local gradients and then select the corresponding clients, whose gradients are located on the vertices of the constructed convex hull, to participate in FL.
Prior to delving into the specifics of this client selection method, we first provide the definition of the convex hull. The convex hull of a set $\mathcal{A}$, denoted $\mathrm{Conv}(\mathcal{A})$, is the set of all convex combinations of points in $\mathcal{A}$ [46]:

$$\mathrm{Conv}(\mathcal{A}) \;=\; \Bigl\{ \sum_{k=1}^{q} \theta_k x_k \;\Bigm|\; x_k \in \mathcal{A}, \; \theta_k \ge 0, \; \sum_{k=1}^{q} \theta_k = 1 \Bigr\}. \tag{18}$$

Notice that the convex hull of a point set $\mathcal{A}$ is the smallest convex set containing all the points in $\mathcal{A}$. The definition of the convex hull is visually depicted in Figure 2.
The formal workflow of the proposed convex hull construction-based client selection is as follows. a) The server executes the quickhull algorithm proposed in [47] to construct the convex hull of the local gradients stored in the gradient table $\mathcal{T}$. b) Once the vertices of the considered point set, which comprises all the gradients stored in $\mathcal{T}$, are identified, the server selects the corresponding clients whose gradients are located on these vertices. These selected clients are then required to participate in the subsequent round of FL.
Intuitively, the distances between points located on the vertices of the convex hull and other points tend to be larger. This observation served as inspiration for our client selection method. Potential gradients generated by unseen non-participating clients can be considered as "random points" occurring within or around a given point set. By utilizing the vertices of the convex hull, we can more effectively "cover" these "random points", providing a geometric perspective to further explain our method.
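The sketch below illustrates the vertex-selection idea. Since qhull is only tractable in low dimension, we add a random projection of the high-dimensional gradients; this projection is a simplification of our own rather than part of the quickhull procedure in [47].

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_selection(grads: dict[int, np.ndarray], dim: int = 3, seed: int = 0) -> list[int]:
    """Select clients whose (projected) gradients are vertices of the convex hull."""
    ids = list(grads)
    G = np.stack([grads[i] for i in ids])              # (n_clients, n_params)
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((G.shape[1], dim)) / np.sqrt(dim)
    hull = ConvexHull(G @ P)                           # quickhull on projected points
    return [ids[v] for v in hull.vertices]             # vertex indices -> client IDs
```

Note that the number of selected clients is then determined by the hull geometry rather than fixed in advance; if a fixed budget is required, the projection dimension can be tuned accordingly.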
We outline the general procedure of the two proposed client selection methods in Algorithm 2. We then provide some insights into reducing the communication and computation costs of these client selection methods.
Communication and Computation cost: To further reduce the communication and computation cost of proposed methods, we can employ the event-triggered communication techniques [48, 49] widely used in distributed optimization to diminish the overhead caused by frequent communication of local gradients. Moreover, based on event-triggered communication, the server can utilize historical gradient similarities stored in another table to avoid redundant computations.
V Experiment
In this section, we first evaluate the proposed methods on three datasets commonly used in FL, in order to verify our theoretical results. We then compare the proposed methods with additional baselines on the CIFAR-100 dataset to further validate their generalization performance.
Experiment setting: We first consider three datasets commonly used in FL: i) image classification on EMNIST-10 and CIFAR-10 with a CNN model, and ii) next-character prediction on Shakespeare with an RNN model. For the image classification tasks, we split each dataset into different clients using the Dirichlet distribution splitting method, and we compare the proposed weighting aggregation method and client selection methods with the baselines listed below. For the Shakespeare task, each speaking role in each play is set as a local dataset [50], and we only compare the proposed client selection methods with baselines on this dataset. We split each dataset across a pool of clients and randomly select a subset of them as participating clients; the remaining clients are considered non-participating. We evaluate the global model on two metrics: In-Distribution (ID) performance and Out-of-Distribution (OOD) performance. The ID performance evaluates the global model on the local test sets of the selected clients, while the OOD performance evaluates the global model on a standard test set whose distribution matches the total dataset over all clients. All selected clients perform several local epochs before sending their updates. We use a fixed batch size and tune the local learning rate over a grid in all experiments.
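For reference, a common form of the Dirichlet splitting procedure reads as follows (a sketch under our own naming; the paper does not spell out its exact implementation):

```python
import numpy as np

def dirichlet_partition(labels: np.ndarray, n_clients: int, alpha: float, seed: int = 0):
    """Split sample indices across clients with per-class proportions ~ Dir(alpha).

    Smaller alpha => more skewed (more Non-IID) local label distributions.
    """
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```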
We now introduce the baseline methods: a) Random selection: participating clients are selected randomly with equal probability. b) Maximum gradient similarity-based selection (MaxSim): an ablation of the minimax gradient similarity-based client selection method in which the server selects the clients with the most similar local gradients. c) Interior selection: an ablation of the convex hull construction-based selection method in which the server constructs the convex hull of the local gradients and then randomly selects clients whose gradients lie in its interior. d) Full sampling: all participating clients are selected in each round. e) Power-of-Choice selection: the server selects the clients with the largest loss values in the current round [51]. f) Data size-based weighting: the weighting factor of each client is proportional to the size of its local dataset. g) Equality weighting: the weighting factor is identical for every client in aggregation.
Experiment Results: Table I reports the ID and OOD test accuracy of client selection methods on three datasets. The results in Table I indicate that, with respect to OOD test accuracy, the proposed client selection methods outperform random selection, Power-of-Choice selection, and other baselines. Table II demonstrates that the proposed empirical entropy-based weighting method surpasses the data size-based weighting and equality weighting method on OOD test accuracy. This outcome aligns with our theoretical results and confirms that local models trained on distributions with greater information entropy contribute more significantly to federated generalization.
TABLE I: ID and OOD test accuracy (%) of client selection methods on three datasets.

| Method | EMNIST-10 ID | EMNIST-10 OOD | CIFAR-10 ID | CIFAR-10 OOD | Shakespeare ID | Shakespeare OOD |
|---|---|---|---|---|---|---|
| Convex Hull (ours) | 91.8±0.8 | 82.1±4.6 | 50.7±3.7 | 42.9±1.1 | 56.1±2.0 | 43.6±1.0 |
| MiniMaxSim (ours) | 95.5±0.7 | 82.3±4.7 | 49.9±3.0 | 42.0±1.0 | 55.0±1.9 | 43.5±0.8 |
| Random Selection | 95.9±0.9 | 69.4±5.4 | 49.9±2.6 | 38.9±1.0 | 45.2±0.6 | 37.3±1.1 |
| MaxSim | 95.3±0.5 | 59.8±6.3 | 60.9±2.1 | 32.2±3.1 | 30.2±0.8 | 22.2±1.4 |
| Interior | 96.6±0.5 | 73.8±6.5 | 50.4±5.7 | 40.8±0.9 | 33.5±3.6 | 25.4±0.8 |
| Full Sampling | 98.7±0.3 | 80.1±5.5 | 61.4±6.8 | 41.6±1.2 | 53.7±2.3 | 43.2±1.8 |
| Power-of-Choice | 97.3±0.6 | 76.3±4.5 | 60.3±6.7 | 39.2±1.3 | 56.7±2.6 | 42.6±1.7 |
TABLE II: ID and OOD test accuracy (%) of weighting aggregation methods.

| Method | EMNIST-10 ID | EMNIST-10 OOD | CIFAR-10 ID | CIFAR-10 OOD |
|---|---|---|---|---|
| Entropy (ours) | 96.7±0.2 | 74.9±10.5 | 56.5±0.7 | 35.7±1.0 |
| Data size | 94.7±1.8 | 53.1±12.1 | 53.3±3.2 | 33.2±0.3 |
| Equality | 96.2±0.6 | 70.7±13.2 | 60.8±3.9 | 34.9±0.9 |
We then perform a convergence analysis of the weighting aggregation methods on EMNIST-10 and CIFAR-10, considering the label distribution skew setting of FL. More specifically, we split the total training set into different clients via Dirichlet distribution splitting, with the concentration parameter set separately for EMNIST-10 and CIFAR-10. The convergence behavior of the proposed empirical entropy-based weighting method compared with the other baselines on EMNIST-10 and CIFAR-10 is presented in Figure 6. For EMNIST-10, the proposed empirical entropy-based weighting method converges faster than the other two baselines, and it also maintains the highest OOD test accuracy among these weighting methods after the early communication rounds. For CIFAR-10, both the proposed empirical entropy-based weighting method and the equality weighting method converge faster than the data size-based weighting method, while the proposed weighting method converges more stably than the other baselines. These results show that assigning higher aggregation weights to local gradients trained on data sources with greater information entropy improves the generalization performance of models, which matches our theoretical basis in Theorem 1.
We then compare the convergence behavior of the different client selection methods and conduct ablation studies for our proposed client selection methods on the three datasets. The experimental results are shown in Figures 3, 4, and 5, respectively. For EMNIST-10 and CIFAR-10, we again split the total training set via Dirichlet distribution splitting, with the concentration parameter set separately for the two datasets in this part. For the Shakespeare dataset, each speaking role in each play is set as a local dataset.
We first focus on the Shakespeare and EMNIST-10 datasets. The two proposed client selection methods converge at almost the same rate, and both converge faster than full sampling, Power-of-Choice selection, and random selection. This shows the superiority of our proposed methods and indicates that the empirical results match the presented theoretical findings. It can also be seen that the full sampling scheme converges fastest and most stably on the EMNIST-10 dataset. However, it achieves worse OOD test accuracy than the proposed methods, since the randomness induced by selection can even improve the out-of-distribution performance of the global model. From another perspective, the nature of the proposed methods is to "compress" the information from participating data sources, i.e., to remove the redundant information from data sources that contribute little to generalization. For CIFAR-10, the proposed methods perform much better than random selection and achieve better OOD test accuracy than Power-of-Choice selection in most rounds, although they are less stable than full sampling. The reason why the full sampling scheme performs more stably on CIFAR-10 is that training a model that performs well on unseen distributions is hardest for CIFAR-10 among the three tasks; consequently, more participating clients help the global model generalize better to unseen data sources.
Furthermore, we carry out ablation studies for the two proposed client selection methods on the three datasets. The results in Figure 5 show that both proposed client selection methods converge faster than their ablation baselines. This indicates that selecting clients with more dissimilar local gradients, and selecting clients whose local gradients lie on the convex hull rather than in its interior, both improve the out-of-distribution generalization performance of the global model. According to Assumption 2 on the relationship between gradient dissimilarity and distribution discrepancy, we immediately obtain the following conclusion: selecting clients with more diverse local distributions enhances the generalization capacity of FL, so the trained model can provide better service for non-participating clients.
Additional experiments on the capability of the proposed methods, combined with robust algorithms, to address noisy label challenges: In the following, we conduct some simple experiments to validate the effectiveness of the proposed methods combined with robust algorithms in coping with noisy labels. In this part, we assess the OOD performance of a CNN model on CIFAR-10. As before, we partition the dataset into clients using Dirichlet distribution splitting and select a subset of clients to join FL, while the remaining ones act as non-participating clients. For local training, we use a fixed batch size, learning rate, and number of local epochs.
Next, we introduce the noisy label setting considered in this part. Following [43], we adopt the common Bernoulli-distribution method to determine whether a local training set is affected by label noise. To inject label noise, we employ the symmetric noise method [52, 53], which manually introduces noise into the datasets using a label transition matrix: each true label is flipped into every other label with equal probability.
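A minimal sketch of this symmetric noise injection (the function naming is ours):

```python
import numpy as np

def symmetric_noise(labels: np.ndarray, n_classes: int, rate: float, seed: int = 0):
    """Flip each label with probability `rate`, uniformly to one of the
    other classes, i.e. a symmetric label-transition matrix."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < rate
    for i in np.where(flip)[0]:
        choices = [c for c in range(n_classes) if c != labels[i]]
        noisy[i] = rng.choice(choices)
    return noisy
```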
We integrate two prevalent Byzantine-robust algorithms into our proposed approaches to tackle the noisy label issue. Firstly, we combine the two proposed client selection methods with Trimmed-mean [54], which removes outliers for each model parameter and computes the mean of the remaining values to update the global model. Secondly, we combine the proposed empirical entropy-based weighting method with the total variation (TV) norm penalty method [55], which adds a TV-norm-based proximal regularization term to each local objective.
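For completeness, a sketch of the coordinate-wise Trimmed-mean aggregation of [54], under our own naming conventions:

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim: int) -> np.ndarray:
    """Coordinate-wise trimmed mean: for every parameter, drop the `trim`
    largest and `trim` smallest client values, then average the rest.

    `updates` has shape (n_clients, n_params); requires 2 * trim < n_clients.
    """
    sorted_vals = np.sort(updates, axis=0)
    kept = sorted_vals[trim: updates.shape[0] - trim]
    return kept.mean(axis=0)
```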
Figure 7 illustrates the effectiveness of the proposed approaches, in combination with the robust algorithms mentioned above, in addressing label noise. The baseline denoted "Noise free" runs the proposed methods on datasets without noisy labels, while the baseline denoted "FedAvg" runs the presented methods on datasets affected by label noise without any robust algorithm to mitigate the degradation. In Figure 7 (a), we observe that the proposed empirical entropy-based weighting method is significantly impacted by noisy labels; however, once combined with the TV-norm penalty method, its OOD test accuracy shows some improvement. In Figure 7 (b) and (c), we find that the two proposed client selection methods, when combined with the Trimmed-mean method, effectively cope with noisy labels and achieve OOD test accuracy comparable to that of the noise-free setting in the later stages of FL. These experiments demonstrate that the proposed methods can be effectively integrated with robust algorithms to alleviate the impact of noisy labels on FL.
Additional experiments on CIFAR-100 with ResNet-18: To further assess the generalization performance of the proposed methods, we proceed with additional experiments, comparing our approaches with recent baselines using the CIFAR-100 dataset [56] and a ResNet-18 model [57]. Initially, we partition the dataset into 100 clients using a Dirichlet distribution split with a parameter set to 0.03. We then randomly choose 40 clients from this set to participate in FL. Each selected client conducts local training with a batch size of 16, runs for 3 local epochs, and uses a learning rate of 0.01. Our focus remains exclusively on the OOD performance of the global model.
We compare the proposed empirical entropy-based weighting method with two baselines, namely FedALRC [40] and FedGAMMA [38]. Additionally, we compare two presented client selection methods with active selection proposed in [58]. The experimental results are depicted in Figure 8.
In Figure 8 (a), we employ a third-order moving average to present the results. We observe that the proposed convex hull construction-based client selection method performs worse than active selection in the later stage of FL, whereas the proposed minimax gradient similarity-based selection method outperforms this baseline. This difference can be attributed to the sensitivity of convex hull construction over the high-dimensional gradients of complex models such as ResNets, while measuring gradient similarity with the cosine similarity is more robust in this situation. Figure 8 (b) illustrates that the proposed empirical entropy-based weighting method outperforms the other baseline methods. This superiority stems from the fact that these baselines only consider enhancing I.I.D. generalization performance and overlook potential non-participating clients. In contrast, the presented empirical entropy-based weighting method improves the generalization capacity of models by identifying representative data sources.
Additional experiments on the NICO dataset with ResNet-18: To further validate the generalizability of our proposed methods, we perform more experiments using a more realistic dataset. Specifically, we compare our approaches with the previously mentioned baselines (FedALRC and FedGAMMA) on the Non-I.I.D. Image dataset with Contexts (NICO) [59] with ResNet-18. The NICO dataset, which encompasses the two superclasses Animal and Vehicle, is tailored for Non-IID image classification. It labels each image with both a primary concept (e.g., dog) and the context (e.g., on grass) in which the visual concept appears, with per-class sample sizes akin to ImageNet. For this evaluation, we focus solely on the Animal superclass.
We employ experimental settings akin to the previous experiments on CIFAR-100, with the Dirichlet concentration parameter set as before. The experimental results are presented in Figure 9. From Figure 9 (a), we find that the proposed minimax client selection method still outperforms the other methods. However, the method based on convex hull construction demonstrates similar or even inferior performance compared to active selection. This may be attributed to the bias introduced when constructing convex hulls over gradients from complex datasets like NICO. Figure 9 (b) illustrates that the proposed empirical entropy-based weighting method also surpasses the other baselines, owing to its effective leveraging of the intrinsic category diversity within NICO, thereby enhancing the OOD performance of trained models.
VI Conclusion
This paper addresses the generalization issue in FL by exploring whether a global model trained by participating clients can perform well for non-participating clients in the presence of heterogeneous data. To capture the generalization gap in FL, we propose an information-theoretic generalization framework that accounts for both the information entropy of local distributions and the discrepancy between different distributions. Leveraging this framework, we identify the generalization gap and further propose an empirical entropy-based weighting aggregation method, as well as two gradient similarity-based client selection methods. These methods aim to enhance federated generalization, providing better service for non-participating clients through distribution diversification. Numerical results corroborate our theoretical findings, demonstrating that the proposed approaches surpass the baselines. In the future, we aim to develop more comprehensive theories and approaches for balancing attention between rare examples and the overall distribution; the emphasis of the proposed methods on rare cases must not compromise performance on common cases in practical scenarios.
Appendix A Proof of theorems in federated generalization
In this section, we will present detailed proofs of theorems in the theoretical framework section.
A-A Proof of Theorem 1
Proof.
Let us recall the decomposed generalization gap in FL,

$$R_{1:N}^{\mathrm{SI}}(h) - \hat{R}_S(h) \;=\; \bigl( R_{1:N}^{\mathrm{SI}}(h) - \tilde{R}_{1:N}^{\mathrm{SI}}(h) \bigr) + \bigl( \tilde{R}_{1:N}^{\mathrm{SI}}(h) - \hat{R}_S(h) \bigr). \tag{19}$$
We bound the two terms of this decomposition via two separate lemmas and present their detailed proofs in the following.
Lemma 1 (Distributed learning gap).
(20)
Proof.
In the following, we show that the "distributed learning gap" term can be bounded as,
(21)
∎
Lemma 2 (Semi-generalization gap).
Let $\mathcal{F}$ be a family of functions related to the hypothesis space $\mathcal{H}$ with VC dimension $d$. The distributed training sets $S_1, \dots, S_N$ are constructed from i.i.d. realizations sampled from the different data sources $Z_1, \dots, Z_N$. If the loss function $\ell$ is bounded by $M$, then for any $\delta \in (0, 1)$, with probability at least $1 - \delta$,

(23)

where $c$ is a constant. The additional term in the bound represents the gap between the self-information weighted expected risk and the vanilla expected risk that does not consider the self-information of outcomes.
Before starting the detailed proof of Lemma 2, we introduce a theoretical result about the generalization bound for participating clients in IID federated learning in [10] into our paper as a lemma.
Lemma 3 (Generalization bound for participating clients in IID FL).
[10] Let $\mathcal{F}$ be a family of functions related to the hypothesis space $\mathcal{H}$ with VC dimension $d$. The distributed training sets $S_1, \dots, S_N$ are constructed from i.i.d. realizations sampled from the different data sources $Z_1, \dots, Z_N$. If the loss function $\ell$ is bounded by $M$, then for any $\delta \in (0, 1)$, with probability at least $1 - \delta$,

(24)

where $c$ is a constant.
Based on this lemma, we can complete the proof of Lemma 2, which we now begin.
Proof.
(25)
Rooted in Lemma 3, we immediately have the following result with probability at least $1 - \delta$,

(26)
For simplicity, we temporarily denote the relevant term by a shorthand.
Under the stated boundedness assumption, we have

(27)

where the two steps follow from the boundedness assumption and a fundamental inequality, respectively.
To sum up, we eventually have

(28)

where the last term represents the gap between the self-information weighted expected risk and the vanilla expected risk. ∎
On the basis of the three lemmas introduced above, Theorem 1 follows immediately. ∎
A-B Proof of Corollary 1
Proof.
A-C Proof of Theorem 2 and Theorem 3
Proof.
Following the proof of Theorem 1, we can also decompose and amplify the gap as,
(31)
We can use the theoretical results derived in Theorem 1 to bound the semi-generalization gap directly, so we mainly focus on the selection gap. The selection gap can be decomposed and amplified as,

(32)
Similar to the proof of Theorem 1, the "distributed learning gap" term can be bounded as,

(33)
Then we turn to bounding the "distribution gap" term,

(34)
To sum up, we can bound the selection gap as follows,

(36)
The mutual information term can be further bounded as follows,

(37)

where the first step follows from the chain rule of entropy, the second from the fact that conditioning reduces entropy, and the last makes use of the relationship $I(X; Y) = H(X) - H(X \mid Y)$ between the mutual information and the entropies of two random variables $X$ and $Y$. Based on the above results, we can derive the result in Theorem 2.
Notice that the mutual information can be rewritten in the form of a KL divergence, i.e., $I(X; Y) = \mathrm{KL}(P_{XY} \,\|\, P_X \otimes P_Y)$; we thus have,

(38)

where the first step follows from the chain rule of entropy and the second from the fact that the cross entropy satisfies $H(P, Q) = H(P) + \mathrm{KL}(P \,\|\, Q)$, where $P$ and $Q$ are two probability distributions.
References
- [1] H. Zhu, J. Xu, S. Liu, and Y. Jin, “Federated learning on non-iid data: A survey,” Neurocomputing, vol. 465, pp. 371–390, 2021.
- [2] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA (A. Singh and X. J. Zhu, eds.), vol. 54 of Proceedings of Machine Learning Research, pp. 1273–1282, PMLR, 2017.
- [3] Y. Yan, X. Tong, and S. Wang, “Clustered federated learning in heterogeneous environment,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2023.
- [4] K. K. Coelho, M. Nogueira, A. B. Vieira, E. F. Silva, and J. A. M. Nacif, “A survey on federated learning for security and privacy in healthcare applications,” Comput. Commun., vol. 207, pp. 113–127, 2023.
- [5] Y. Zhang, D. Zeng, J. Luo, X. Fu, G. Chen, Z. Xu, and I. King, “A survey of trustworthy federated learning: Issues, solutions, and challenges,” ACM Trans. Intell. Syst. Technol., jul 2024.
- [6] Q. Mao, S. Wan, D. Hu, J. Yan, J. Hu, and X. Yang, “Leveraging federated learning for unsecured loan risk assessment on decentralized finance lending platforms,” in ICDM (Workshops), pp. 663–670, IEEE, 2023.
- [7] Z. Sun, Y. Xu, Y. Liu, W. He, L. Kong, F. Wu, Y. Jiang, and L. Cui, “A survey on federated recommendation systems,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–15, 2024.
- [8] Z. Wu, X. Wu, and Y. Long, “Joint scheduling and robust aggregation for federated localization over unreliable wireless D2D networks,” IEEE Trans. Netw. Serv. Manag., vol. 20, no. 3, pp. 3359–3379, 2023.
- [9] Y. Wang, Q. Shi, and T.-H. Chang, “Why batch normalization damage federated learning on non-iid data?,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–15, 2023.
- [10] X. Hu, S. Li, and Y. Liu, “Generalization bounds for federated learning: Fast rates, unparticipating clients and unbounded losses,” in International Conference on Learning Representations, 2023.
- [11] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated learning with non-iid data,” arXiv preprint arXiv:1806.00582, 2018.
- [12] H. Yuan, W. Morningstar, L. Ning, and K. Singhal, “What do we mean by generalization in federated learning?,” arXiv preprint arXiv:2110.14216, 2021.
- [13] M. Mohri, G. Sivek, and A. T. Suresh, “Agnostic federated learning,” in Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA (K. Chaudhuri and R. Salakhutdinov, eds.), vol. 97 of Proceedings of Machine Learning Research, pp. 4615–4625, PMLR, 2019.
- [14] W. Huang, M. Ye, Z. Shi, G. Wan, H. Li, B. Du, and Q. Yang, “Federated learning for generalization, robustness, fairness: A survey and benchmark,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
- [15] Z. Qu, X. Li, R. Duan, Y. Liu, B. Tang, and Z. Lu, “Generalized federated learning via sharpness aware minimization,” in International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA (K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvári, G. Niu, and S. Sabato, eds.), vol. 162 of Proceedings of Machine Learning Research, pp. 18250–18280, PMLR, 2022.
- [16] B. Wei, J. Li, Y. Liu, and W. Wang, “Non-iid federated learning with sharper risk bound,” IEEE Transactions on Neural Networks and Learning Systems, 2022.
- [17] W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y.-C. Liang, Q. Yang, D. Niyato, and C. Miao, “Federated learning in mobile edge networks: A comprehensive survey,” IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2031–2063, 2020.
- [18] M. Wang, Y. Pan, X. Yang, G. Li, and Z. Xu, “Tensor networks meet neural networks: A survey,” CoRR, vol. abs/2302.09019, 2023.
- [19] Y. Pan, Z. Su, A. Liu, J. Wang, N. Li, and Z. Xu, “A unified weight initialization paradigm for tensorial convolutional neural networks,” in ICML, vol. 162 of Proceedings of Machine Learning Research, pp. 17238–17257, PMLR, 2022.
- [20] A. Tak and S. Cherkaoui, “Federated edge learning: Design issues and challenges,” IEEE Network, vol. 35, no. 2, pp. 252–258, 2020.
- [21] Z. Wu, Z. Xu, D. Zeng, J. Li, and J. Liu, “Topology learning for heterogeneous decentralized federated learning over unreliable d2d networks,” IEEE Transactions on Vehicular Technology, pp. 1–6, 2024.
- [22] Y. Wang, Z. Su, T. H. Luan, R. Li, and K. Zhang, “Federated learning with fair incentives and robust aggregation for uav-aided crowdsensing,” IEEE Transactions on Network Science and Engineering, vol. 9, no. 5, pp. 3179–3196, 2021.
- [23] W. Huang, M. Ye, Z. Shi, and B. Du, “Generalizable heterogeneous federated cross-correlation and instance similarity learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 2, pp. 712–728, 2024.
- [24] A. Reisizadeh, F. Farnia, R. Pedarsani, and A. Jadbabaie, “Robust federated learning: The case of affine distribution shifts,” in Advances in Neural Information Processing Systems (NeurIPS), 2020.
- [25] X. Ma, J. Zhu, Z. Lin, S. Chen, and Y. Qin, “A state-of-the-art survey on solving non-iid data in federated learning,” Future Generation Computer Systems, vol. 135, pp. 244–258, 2022.
- [26] D. Caldarola, B. Caputo, and M. Ciccone, “Improving generalization in federated learning by seeking flat minima,” in European Conference on Computer Vision (ECCV), pp. 654–672, Springer, 2022.
- [27] S. Yagli, A. Dytso, and H. V. Poor, “Information-theoretic bounds on the generalization error and privacy leakage in federated learning,” in IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1–5, IEEE, 2020.
- [28] L. P. Barnes, A. Dytso, and H. V. Poor, “Improved information theoretic generalization bounds for distributed and federated learning,” in IEEE International Symposium on Information Theory (ISIT), pp. 1465–1470, IEEE, 2022.
- [29] M. Sefidgaran, R. Chor, and A. Zaidi, “Rate-distortion theoretic bounds on generalization error for distributed learning,” in Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [30] Z. Wu, Z. Xu, H. Yu, and J. Liu, “Information-theoretic generalization analysis for topology-aware heterogeneous federated edge learning over noisy channels,” IEEE Signal Processing Letters, pp. 1–5, 2023.
- [31] R. Zhang, Q. Xu, J. Yao, Y. Zhang, Q. Tian, and Y. Wang, “Federated domain generalization with generalization adjustment,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3954–3963, IEEE, 2023.
- [32] A. T. Nguyen, P. H. S. Torr, and S. N. Lim, “FedSR: A simple and effective domain generalization method for federated learning,” in Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [33] A. T. Nguyen, P. H. S. Torr, and S. N. Lim, “FedSR: A simple and effective domain generalization method for federated learning,” in Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [34] B. Li, Y. Shen, Y. Wang, W. Zhu, C. Reed, D. Li, K. Keutzer, and H. Zhao, “Invariant information bottleneck for domain generalization,” in AAAI Conference on Artificial Intelligence (AAAI), pp. 7399–7407, AAAI Press, 2022.
- [35] K. Muandet, D. Balduzzi, and B. Schölkopf, “Domain generalization via invariant feature representation,” in International Conference on Machine Learning (ICML), pp. 10–18, PMLR, 2013.
- [36] S. Park, U. Simsekli, and M. A. Erdogdu, “Generalization bounds for stochastic gradient descent via localized $\varepsilon$-covers,” in Advances in Neural Information Processing Systems (NeurIPS), 2022.
- [37] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine Learning and Systems, vol. 2, pp. 429–450, 2020.
- [38] R. Dai, X. Yang, Y. Sun, L. Shen, X. Tian, M. Wang, and Y. Zhang, “Fedgamma: Federated learning with global sharpness-aware minimization,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2023.
- [39] Y. Chen, W. Lu, X. Qin, J. Wang, and X. Xie, “Metafed: Federated learning among federations with cyclic knowledge distillation for personalized healthcare,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–12, 2023.
- [40] B. Wei, J. Li, Y. Liu, and W. Wang, “Non-iid federated learning with sharper risk bound,” IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 5, pp. 6906–6917, 2024.
- [41] L. Paninski, “Estimation of entropy and mutual information,” Neural Computation, vol. 15, no. 6, pp. 1191–1253, 2003.
- [42] Z. Luan, W. Li, M. Liu, and B. Chen, “Robust federated learning: Maximum correntropy aggregation against byzantine attacks,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2024.
- [43] K. Tam, L. Li, B. Han, C. Xu, and H. Fu, “Federated noisy client learning,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2023.
- [44] Y. Zou, Z. Wang, X. Chen, H. Zhou, and Y. Zhou, “Knowledge-guided learning for transceiver design in over-the-air federated learning,” IEEE Transactions on Wireless Communications, vol. 22, no. 1, pp. 270–285, 2023.
- [45] D. Zeng, X. Hu, S. Liu, Y. Yu, Q. Wang, and Z. Xu, “Stochastic clustered federated learning,” arXiv preprint arXiv:2303.00897, 2023.
- [46] S. P. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
- [47] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa, “The quickhull algorithm for convex hulls,” ACM Transactions on Mathematical Software (TOMS), vol. 22, no. 4, pp. 469–483, 1996.
- [48] M. Xiong, B. Zhang, D. W. C. Ho, D. Yuan, and S. Xu, “Event-triggered distributed stochastic mirror descent for convex optimization,” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 9, pp. 6480–6491, 2023.
- [49] X. He, X. Yi, Y. Zhao, K. H. Johansson, and V. Gupta, “Asymptotic analysis of federated learning under event-triggered communication,” IEEE Transactions on Signal Processing, vol. 71, pp. 2654–2667, 2023.
- [50] S. Caldas, P. Wu, T. Li, J. Konečný, H. B. McMahan, V. Smith, and A. Talwalkar, “LEAF: A benchmark for federated settings,” arXiv preprint arXiv:1812.01097, 2018.
- [51] Y. J. Cho, J. Wang, and G. Joshi, “Client selection in federated learning: Convergence analysis and power-of-choice selection strategies,” arXiv preprint arXiv:2010.01243, 2020.
- [52] H. Song, M. Kim, D. Park, Y. Shin, and J.-G. Lee, “Learning from noisy labels with deep neural networks: A survey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 11, pp. 8135–8153, 2022.
- [53] S. Liang, J. Huang, D. Zeng, J. Hong, J. Zhou, and Z. Xu, “FedNoisy: Federated noisy label learning benchmark,” arXiv preprint arXiv:2306.11650, 2023.
- [54] D. Yin, Y. Chen, R. Kannan, and P. Bartlett, “Byzantine-robust distributed learning: Towards optimal statistical rates,” in International Conference on Machine Learning (ICML), pp. 5650–5659, PMLR, 2018.
- [55] J. Peng and Q. Ling, “Byzantine-robust decentralized stochastic optimization,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5935–5939, 2020.
- [56] A. Krizhevsky, G. Hinton, et al., “Learning multiple layers of features from tiny images,” Technical Report, University of Toronto, 2009.
- [57] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, 2016.
- [58] H. Huang, W. Shi, Y. Feng, C. Niu, G. Cheng, J. Huang, and Z. Liu, “Active client selection for clustered federated learning,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–15, 2023.
- [59] Y. He, Z. Shen, and P. Cui, “Towards non-iid image classification: A dataset and baselines,” Pattern Recognition, vol. 110, p. 107383, 2021.
Zheshun Wu is a Ph.D. student at the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China. He received his Bachelor’s degree from South China University of Technology, China, and his Master’s degree from Sun Yat-sen University, China. His research interests include federated learning and wireless communications.
Zenglin Xu (Senior Member, IEEE) is currently a full professor at Fudan University. He received the Ph.D. degree in computer science and engineering from The Chinese University of Hong Kong. He has worked at Michigan State University, the Cluster of Excellence at Saarland University and the Max Planck Institute for Informatics, Purdue University, and the University of Electronic Science & Technology of China. His research interests include machine learning and its applications in computer vision, natural language processing, and health informatics.
Dun Zeng is a Ph.D. candidate at the School of Computer Science and Engineering, University of Electronic Science and Technology of China, focusing on federated learning and distributed optimization.
Qifan Wang received his B.S. and M.S. degrees from Tsinghua University and his Ph.D. degree from Purdue University. He is a research scientist at Meta AI, leading a team that builds innovative deep learning and natural language processing models for recommendation systems. Before joining Meta, he worked at Google Research and Intel Labs. His research interests include deep learning, natural language processing, information retrieval, data mining, and computer vision. He has co-authored more than 100 publications in top-tier conferences and journals, including NeurIPS, SIGKDD, WWW, SIGIR, AAAI, IJCAI, ACL, EMNLP, WSDM, CIKM, ECCV, TPAMI, TKDE, and TOIS. He also serves as an area chair, program committee member, editorial board member, and reviewer for academic conferences and journals.
Jie Liu (Fellow, IEEE) is a Chair Professor at Harbin Institute of Technology, Shenzhen (HIT Shenzhen), China, and the Dean of its AI Research Institute. Before joining HIT, he spent 18 years at Xerox PARC and Microsoft. He was a Principal Research Manager at Microsoft Research, Redmond, and a partner of the company. His research interests are cyber-physical systems, AI for IoT, and energy-efficient computing. He received the IEEE TCCPS Distinguished Leadership Award and seven Best Paper Awards from top conferences. He is an IEEE Fellow, an ACM Distinguished Scientist, and the founding Chair of ACM SIGBED China.