Non-IID Quantum Federated Learning with One-shot Communication Complexity
Abstract
Federated learning refers to machine learning based on decentralized data from multiple clients while preserving data privacy. Recent studies show that quantum algorithms can be exploited to boost its performance. However, when the clients' data are not independent and identically distributed (IID), the performance of conventional federated algorithms is known to deteriorate. In this work, we explore the non-IID issue in quantum federated learning with both theoretical and numerical analysis. We further prove that a global quantum channel can be exactly decomposed into local channels trained by each client with the help of local density estimators. This observation leads to a general framework for quantum federated learning on non-IID data with one-shot communication complexity. Numerical simulations show that the proposed algorithm outperforms the conventional ones significantly under non-IID settings.
I Introduction
Recent advances in artificial intelligence [1] and quantum computing [2] have given birth to the emerging field of quantum machine learning [3, 4]. By leveraging quantum advantages in machine learning tasks, quantum machine learning has demonstrated unprecedented power in solving a wide range of problems. Notable examples include solving linear equations [5], quantum supervised learning [6, 7, 8, 9], and quantum generative learning [10, 11, 12]. This line of research mostly focuses on developing quantum algorithms that can showcase quantum speed-ups in machine learning problems.
One of the most important ingredients in machine learning is the data. In real-world applications, data are often distributed among multiple clients and cannot be gathered into a single joint dataset for various reasons. For example, medical data from patients across multiple hospitals [13] or cyber-physical attack data from Internet of Things devices [14] are sensitive and private, and cannot be directly shared without anonymization. Obstacles in data transmission may also forbid us from constructing a joint dataset. For example, the data size might be too large and therefore the transmission is too expensive, or we need quantum data that suffer from decoherence and are hard to preserve or transmit.
Machine learning algorithms with such decentralized data have been developed under the name of federated learning [15, 16]. The conventional solution is the federated averaging algorithm [16], or FedAvg, in which multiple clients jointly train a global model by sharing only the model parameters/updates while keeping their data private. Its quantum extensions, dubbed qFedAvg, have been proposed to incorporate quantum features such as quantum speed-up and blind computing [17, 18, 19, 20, 21].
However, the classical FedAvg algorithm has three shortcomings already identified in the literature. Firstly, it suffers from the non-IID quagmire [22, 23]: its performance deteriorates when the local data of different clients are not independent and identically distributed (IID), whereas real data are often heterogeneous or even multi-modal across clients. Secondly, the joint training involves gradient sharing, so data privacy is threatened by attacks based on gradient inversion [24, 25]. Thirdly, it requires many rounds of communication, which is often the bottleneck in real-world applications. To reduce the communication burden, multiple one-shot alternatives have been proposed [26, 27, 28, 29].
In this work, we discuss whether the non-IID quagmire exists in quantum federated learning. The answer is yes, and we support it with both theoretical analysis and numerical experiments. Then we move on to propose a solution to this problem. We prove that a global quantum channel can be exactly decomposed into channels trained by each client with the help of local density estimators. It provides a general framework, dubbed qFedInf, for quantum federated learning on non-IID data. Meanwhile, qFedInf is one-shot in terms of communication complexity and is free from gradient sharing. We further identify its connection to mixture of experts (MoE) and ensemble learning. Numerical experiments in highly non-IID settings demonstrate that the proposed framework outperforms the conventional algorithm significantly with only one communication round.
II Main Results
II.1 Decentralized Quantum Data
We begin by setting up a typical decentralized quantum dataset. Classical datasets can be regarded as a special case of it, where the data samples are orthogonal to each other.
In federated learning tasks, we are given a set of $K$ clients, and we can use a set of orthonormal basis elements $\{|k\rangle\}_{k=1}^{K}$ to denote them. Each client $k$ has access to its own dataset $D_k$ containing $N_k$ samples from the data Hilbert space $\mathcal{H}$. The dataset can be statistically represented by the density matrix $\rho_k$. Note that the density matrices of different clients may vary dramatically (i.e. non-IID). For example, in a multi-class classification problem, each client may have only seen samples from two or three classes.
Since the construction of entanglement between macroscopic objects at distant places is in general very difficult and is not expected to be realized in the near future [2], we focus on the situation where there is no entanglement among the clients. Then the joint density matrix of both the data and the clients is represented by $\rho = \sum_k p_k\, \rho_k \otimes |k\rangle\langle k|$, where $p_k = N_k/N$ is the statistical weight of client $k$, and $N = \sum_k N_k$ is the total number of samples. For a classical dataset, the joint density matrix is diagonal, and its matrix elements correspond to the joint probability distribution $p(x, k)$.
The averaged data density matrix is obtained by tracing out the clients: $\rho_D = \mathrm{Tr}_C[\rho] = \sum_k p_k \rho_k$. Let $\Pi_k$ be the projector onto the subspace of client $k$. Then we can introduce the conditional density matrix [30, 31] $\rho|_k = \Pi_k \rho \Pi_k / \mathrm{Tr}[\Pi_k \rho]$, which characterizes the data of a given client $k$. Tracing out the clients recovers the local dataset: $\mathrm{Tr}_C[\rho|_k] = \rho_k$. We summarize these concepts in Figure 1. For supervised learning tasks such as classification, there will also be a label $y_i$ associated with each data sample. We have omitted the labels here for simplicity.
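To make the bookkeeping concrete, the following minimal numpy sketch builds these objects for a toy decentralized dataset. The Hilbert-space dimension, client count, and variable names are illustrative assumptions and not part of the formal setup above.

```python
import numpy as np

def density_matrix(states):
    """Average |psi><psi| over a list of pure-state vectors."""
    return sum(np.outer(psi, psi.conj()) for psi in states) / len(states)

rng = np.random.default_rng(0)
dim, n_clients = 4, 3                         # toy data dimension and number of clients K
datasets = [[v / np.linalg.norm(v) for v in rng.normal(size=(n_k, dim))]
            for n_k in (5, 3, 2)]             # client datasets D_k with N_k samples each

N_k = np.array([len(D) for D in datasets])
p_k = N_k / N_k.sum()                          # statistical weights p_k = N_k / N
rho_k = [density_matrix(D) for D in datasets]  # local data density matrices

# Joint density matrix of data and clients (no entanglement between the two registers):
# rho = sum_k p_k * rho_k (x) |k><k|
rho_joint = sum(p * np.kron(r, np.diag(np.eye(n_clients)[k]))
                for k, (p, r) in enumerate(zip(p_k, rho_k)))

# Averaged data density matrix: trace out the client register.
rho_data = sum(p * r for p, r in zip(p_k, rho_k))

# Conditional density matrix of client k = 0 via the projector onto its subspace.
k = 0
Pi_k = np.kron(np.eye(dim), np.diag(np.eye(n_clients)[k]))
rho_cond = Pi_k @ rho_joint @ Pi_k / np.trace(Pi_k @ rho_joint)

# Tracing out the client register recovers the local data state rho_k[0].
rho_recovered = rho_cond.reshape(dim, n_clients, dim, n_clients).trace(axis1=1, axis2=3)
assert np.allclose(rho_recovered, rho_k[0])
```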

II.2 Quantum Federated Averaging
Here we briefly review the quantum version of FedAvg [16]. In a general supervised quantum machine learning problem, we aim to find a quantum channel $\mathcal{E}_{\theta}$ that takes an input state and transforms it into the desired result (e.g. which class the image belongs to). We achieve this by tuning the variational parameters $\theta$ of the channel to minimize the average of some loss function $\ell$ on a given training dataset $D$:
$\min_{\theta} L(\theta) = \frac{1}{N} \sum_{(\rho_i, y_i) \in D} \ell\big(\mathcal{E}_{\theta}(\rho_i), y_i\big)$    (1)
where $\rho_i$ is the input and $y_i$ is the corresponding label. We use gradient descent with learning rate $\eta$ to iteratively solve this optimization problem: at time step $t$,
$\theta_{t+1} = \theta_t - \eta\, \nabla_{\theta} L(\theta_t)$    (2)
For decentralized data, the total dataset is divided into local datasets from multiple clients: $D = \bigcup_k D_k$. Then we can decompose the update rule of Equation (2) into three steps: (1) local updates at time step $t$ for each client $k$, with $L_k$ denoting the local loss averaged over $D_k$,
$\theta^{(k)}_{t+1} = \theta^{(k)}_t - \eta\, \nabla_{\theta} L_k(\theta^{(k)}_t)$    (3)
(2) a global average every $T$ steps, e.g. at time step $t = mT$,
$\theta_{mT} = \sum_k \frac{N_k}{N}\, \theta^{(k)}_{mT}$    (4)
and (3) broadcast the averaged weights to all clients as the initial parameters for the next iteration. This is the basic protocol of qFedAvg, the common ground of all the existing quantum federated learning algorithms [17, 18, 19, 20, 21]. It recovers the centralized update rule when $T = 1$, i.e. when the parameters are averaged after every full-batch gradient step. However, in practice this cannot be exactly achieved, since mini-batch training strategies are used.
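As a schematic illustration of the three steps, the following minimal sketch runs one qFedAvg communication round on plain numpy parameter vectors. The function and argument names are our own, and the local gradients are abstract callables; in a full simulation they would be parameter-shift gradients of each client's quantum circuit.

```python
import numpy as np

def qfedavg_round(theta_global, local_grads, client_weights, lr=0.01, local_steps=10):
    """One qFedAvg communication round (schematic sketch).

    theta_global   : current global parameter vector (broadcast to all clients)
    local_grads    : list of callables, local_grads[k](theta) -> gradient of L_k
    client_weights : list of p_k = N_k / N for each client
    """
    local_params = []
    for grad_k in local_grads:                 # step (1): local gradient updates
        theta = theta_global.copy()
        for _ in range(local_steps):
            theta -= lr * grad_k(theta)
        local_params.append(theta)
    # step (2): weighted global average of the client parameters
    theta_avg = sum(p * th for p, th in zip(client_weights, local_params))
    # step (3): the averaged parameters are broadcast as the start of the next round
    return theta_avg
```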
II.3 The Non-IID Quagmire of Quantum FedAvg
As pointed out in [22, 23], FedAvg faces the non-IID quagmire: its performance deteriorates when the data of different clients are non-IID. Does this phenomenon also exist in the quantum regime?
We follow the steps of [22] and quantify the performance difference between FedAvg and the centralized case by the weight divergence $\|\theta^{\mathrm{fed}} - \theta^{\mathrm{cent}}\|$, where $\theta^{\mathrm{fed}}$ and $\theta^{\mathrm{cent}}$ are the weights given by qFedAvg and by centralized training, respectively. On the other hand, the level of non-IID can be quantified by the earth mover's distance (EMD) [32] between the label distribution of client $k$, $p_k(y)$, and the centralized distribution $p(y)$: $\mathrm{EMD}_k = \sum_y \|p_k(y) - p(y)\|$. Then, as a direct quantum extension of Proposition 3.1 in [22], we have the following proposition:
Proposition 1
For a loss function that decomposes over the label classes (in a typical classification problem, $y$ labels the different classes and the standard cross-entropy loss is used, with the model outputting the predicted probability of each class), if the gradient of the loss with respect to $\theta$ is $\lambda_y$-Lipschitz for every possible value of the label $y$, then the following inequality holds for the weight divergence of qFedAvg:
(5)
where the remaining quantities on the right-hand side are specified in the proof in Appendix A.
Therefore, EMD is indeed a source of the weight divergence when the models are synchronized only every $T > 1$ steps. This provides a theoretical explanation for the existence of the non-IID quagmire in quantum federated learning, similar to its classical counterpart [22]. We provide a detailed proof of Proposition 1 in Appendix A. Numerical experiments are conducted in Section III.3 to empirically illustrate this phenomenon in quantum federated learning.
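For intuition, the non-IID measure over a discrete label space reduces to a sum of absolute differences between label distributions, following the form used in [22]. The sketch below computes it for an illustrative cycle-2-style client; the class counts and function names are our own.

```python
import numpy as np

def label_distribution(labels, n_classes=8):
    """Empirical label distribution p(y) from a list of integer labels."""
    counts = np.bincount(labels, minlength=n_classes)
    return counts / counts.sum()

def non_iid_distance(p_client, p_global):
    """EMD-style non-IID measure: sum_y |p_k(y) - p(y)|, following [22]."""
    return np.abs(p_client - p_global).sum()

# Illustrative example: under a cycle-2-style partition of 8 balanced classes,
# a client sees only 2 classes while the global distribution is uniform.
p_global = np.full(8, 1 / 8)
p_client_cycle2 = label_distribution(np.array([0] * 50 + [1] * 50))
print(non_iid_distance(p_client_cycle2, p_global))  # 2*(1/2 - 1/8) + 6*(1/8) = 1.5
```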
II.4 Federated Decomposition of Quantum Channels
Now that we have identified the non-IID quagmire in qFedAvg, we aim to find a different approach to tackle Equation (1) when the data are decentralized. We consider the case where the loss function depends only on the output of the quantum channel and the label, i.e. $\ell(\sigma, y)$ with $\sigma = \mathcal{E}_{\theta}(\rho)$.
We begin by noting that when the data are decentralized, each client $k$ can still train a local channel $\mathcal{E}_k$ on its own data $D_k$:
$\mathcal{E}_k = \arg\min_{\mathcal{E}_{\theta}} \frac{1}{N_k} \sum_{(\rho_i, y_i) \in D_k} \ell\big(\mathcal{E}_{\theta}(\rho_i), y_i\big)$    (6)
To make use of the local channels, we can decompose the original problem, Equation (1), as
$\min_{\theta} L(\theta) = \min_{\theta} \sum_k \frac{N_k}{N}\, L_k(\theta)$    (7)
We can solve the whole minimization problem if we can solve all the individual sub-problems simultaneously, which is exactly what the local channels $\mathcal{E}_k$ do. Therefore, if we can construct a global channel $\mathcal{E}$ such that it coincides with all the local channels on their own data, i.e.
$\mathcal{E}(\rho_i) = \mathcal{E}_k(\rho_i), \quad \forall \rho_i \in D_k$    (8)
then the goal is achieved. (The fact that identical samples have identical labels guarantees that the global channel is well defined: if two identical samples with the same label come from different clients, the local minimization problems of Equation (6) force the corresponding local channels to agree on that sample.) This can be rephrased backwards, as demanding that each local channel coincides with the global channel on condition that the input data come from its own dataset $D_k$, which is statistically represented by $\rho_k$. That is,
(9)
where $P_k$ denotes the corresponding projector. (To avoid confusion, we deliberately use different notations for the local data state $\rho_k$, which can be loaded into a circuit, and the projection operator $P_k$, to emphasize their different physical meanings.)
To combine these local channels into a global one, we also need to capture the information of the local datasets $D_k$. To achieve this, we introduce the local density estimator $\tilde{p}_k$, which is trained to output the probability density of the input state within the local dataset:
$\tilde{p}_k(|\psi\rangle) \approx \langle\psi|\rho_k|\psi\rangle$    (10)
In fact, the local channels and density estimators are enough to give an explicit construction of the global channel $\mathcal{E}$. This is provided by the following theorem (proved in Appendix B):
Theorem 1
(Federated Decomposition of Quantum Channels)
For each client $k$, which only has access to its own data $D_k$ with $N_k$ samples, a local channel $\mathcal{E}_k$ and a local density estimator $\tilde{p}_k$ can be trained. Assuming that there is no entanglement among the clients, the global channel $\mathcal{E}$ can be decomposed into
$\mathcal{E}(|\psi\rangle\langle\psi|) = \sum_k w_k\, \mathcal{E}_k(|\psi\rangle\langle\psi|), \qquad w_k = \frac{p_k\, \tilde{p}_k(|\psi\rangle)}{\sum_{k'} p_{k'}\, \tilde{p}_{k'}(|\psi\rangle)}$    (11)
where $w_k$ is the decomposition weight, $|\psi\rangle$ is any pure input state, and $p_k = N_k/N$. Extension to mixed input states follows from direct linear superposition.
We note that the classical special case of this theorem was first introduced in [36]. As a result, for any input state $|\psi\rangle$, if we randomly apply a local channel $\mathcal{E}_k$ with probability $w_k$, the result will be statistically the same as applying the global channel $\mathcal{E}$. This fact leads to the following framework for quantum federated learning.
II.5 A One-shot Quantum Federated Learning Framework for Non-IID Data
Theorem 1 provides a framework for quantum federated learning. The specific protocol goes as follows. Firstly, each client $k$ trains a local channel $\mathcal{E}_k$ and a local density estimator $\tilde{p}_k$ with its own data $D_k$. This step is completely distributed and concludes the whole training phase. Secondly, the trained channels and density estimators are sent to the server for inference according to Equation (11). That is, when a new input $|\psi\rangle$ comes, the server computes the weights $w_k$ via the density estimators. Then it randomly loads the parameters of $\mathcal{E}_k$ with probability $w_k$ and gathers the outcomes. We call this framework quantum federated inference, or qFedInf for short. The detailed algorithms are summarized in Algorithms 1 and 2.
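The server-side inference step can be summarized by the following minimal sketch. The local channels and density estimators are abstract callables here, and the function and argument names are illustrative rather than taken from Algorithms 1 and 2.

```python
import numpy as np

def qfedinf_predict(psi, local_channels, density_estimators, client_weights, rng):
    """Server-side qFedInf inference for a single input (schematic sketch).

    psi                : the new input (state vector or raw classical sample)
    local_channels     : list of callables, local_channels[k](psi) -> prediction
    density_estimators : list of callables, density_estimators[k](psi) -> local density
    client_weights     : list of p_k = N_k / N
    """
    densities = np.array([est(psi) for est in density_estimators])
    w = np.array(client_weights) * densities
    w = w / w.sum()                              # decomposition weights of Equation (11)
    k = rng.choice(len(local_channels), p=w)     # load E_k with probability w_k
    return local_channels[k](psi)

# Usage (with trained channels/estimators and rng = np.random.default_rng()):
#   prediction = qfedinf_predict(psi, channels, estimators, p_k, rng)
# Averaging the outcomes over many draws (or weighting the class probabilities by w_k
# directly) reproduces the global channel of Theorem 1 statistically.
```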
In practice, there is a wide range of choices for the channels $\mathcal{E}_k$ and the density estimators $\tilde{p}_k$. For channels, we can use classical, quantum, or hybrid algorithms at will [37, 1, 38]. For density estimators, we can use classical ones like Gaussian mixture models [37] and normalizing flows [39], quantum-inspired ones like [40], or quantum ones such as classical shadow tomography [41] and quantum state diagonalization algorithms [42, 43]. Each kind of density estimator comes with a different training strategy; a minimal sketch of a Gaussian-mixture local density estimator is given after the list below. We can also classify the possible scenarios based on the classical/quantum nature of the data and the channels:
Classical Data & Classical Channel: This is a purely classical problem already discussed in [36].
Classical Data & Quantum Channel: To apply quantum channels to classical data, one needs to choose an appropriate encoding scheme, e.g. amplitude encoding or gate encoding, to encode the data into quantum states. This gives rise to the problem of whether to use a classical density estimator on the original data or a quantum one on the encoded states. Note that before encoding, different samples are orthogonal to each other. However, the encoded quantum samples are in general overlapped, meaning that there’s a possibility of mistaking one sample for another. Nevertheless, quantum density estimators offer a potential exponential speed-up. So there’s a trade-off between accuracy and efficiency.
Quantum Data & Classical Channel: In general, trying to apply classical channels to process quantum data is not very efficient, as the tomography and representation of a quantum state cost exponentially large classical computational resources. Nevertheless, proposals [44] have been made to use the classical shadows [41] of a quantum state as the input to a classical machine learning algorithm. We defer to future works to investigate the performance of these proposals in a federated learning context.
Quantum Data & Quantum Channel: For quantum data, we need a quantum density estimator to estimate the local densities $\tilde{p}_k$. This task can be solved by classical shadow tomography, which is proved to saturate the information-theoretic bound [41].
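As referenced above, here is a minimal sketch of a classical Gaussian-mixture local density estimator fitted on a client's flattened images, of the kind used in our experiments (5 modes). The use of scikit-learn, the diagonal covariances, and the random stand-in data are illustrative assumptions, not the exact implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_local_density_estimator(local_images, n_modes=5, seed=0):
    """Fit a Gaussian mixture model to one client's (flattened) classical data."""
    X = local_images.reshape(len(local_images), -1)
    gmm = GaussianMixture(n_components=n_modes, covariance_type="diag",
                          random_state=seed).fit(X)
    # score_samples returns log-densities; exponentiate to obtain the density p~_k(x).
    return lambda x: np.exp(gmm.score_samples(x.reshape(1, -1)))[0]

# Example with random stand-in data for two clients:
rng = np.random.default_rng(0)
client_data = [rng.random((100, 256)), rng.random((80, 256))]
estimators = [train_local_density_estimator(D) for D in client_data]
x_new = rng.random(256)
densities = [est(x_new) for est in estimators]
```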
Compared to qFedAvg, the proposed framework qFedInf shares several merits with its classical counterpart [36]. It's one-shot in the sense that only one communication round between the server and each client is required. Meanwhile, there's no need for a global public dataset or data synthesis/distillation, which are common requirements of existing one-shot algorithms [28, 29]. Moreover, since no gradient information is transmitted, it's automatically immune to attacks based on gradient inversion [24, 25]. Finally, the density estimators can capture possible data heterogeneity, providing a new way to perform federated learning with non-IID data.
Though we have mainly focused on supervised learning throughout this paper, we also note that Theorem 1 holds for generic quantum channels, which is the most general form of quantum information processing. So the proposed framework qFedInf may be applied to machine learning tasks beyond classification. In Appendix C we provide an example of applying qFedInf to perform quantum generative learning [10, 11, 12].
Finally, we give a brief discussion on the complexity of the proposed framework. Suppose that the number of clients is $K$, the number of iterations needed for training is $T_{\mathrm{train}}$, and the numbers of parameters in each of the quantum channels and density estimators are $M_c$ and $M_d$, respectively. Then the communication complexity [15] of the proposed qFedInf is $O(K(M_c + M_d))$. In comparison, the communication complexity of qFedAvg is $O(K T_{\mathrm{train}} M_c)$, which is much less efficient when $T_{\mathrm{train}}$ is large. As for the circuit complexity [45, 46], it depends on the specific choice of the quantum circuit used as the channels. If the channels have a quantum speed-up, so will qFedInf.
II.6 Connection to MoE and Ensemble Learning
In this section, we discuss the connection between the proposed framework qFedInf and mixture of experts (MoE) [47, 48, 49], which is an important strategy in ensemble learning. The idea of ensemble learning is to combine several models to make a joint prediction [50, 51]. Different models are expected to compensate for each other, leading to better performance. MoE, as a special kind of ensemble learning method, consists of two parts: a set of functions serving as the judge that decides the relative weight of different models, called the gating function; and an ensemble of specialized models, called the experts, each expected to perform well only on a subset of the total input space. In practice, MoE, along with other ensemble approaches such as bagging and boosting, has given birth to many state-of-the-art solutions to a wide range of machine learning problems [50, 51, 47].
In hindsight, we can see that qFedInf also consists of two parts: a set of density estimators $\tilde{p}_k$ that decides the probabilistic weights of the different local channels, as in Equation (11); and a set of local channels $\mathcal{E}_k$, each performing well only on its client's own data. In the language of MoE, the density estimators are the gating functions, and the local channels correspond to the experts. Therefore, qFedInf is exactly an MoE working in the one-shot federated learning context. In Section III, numerical experiments show that qFedInf can indeed combine the knowledge of many weak classifiers (shallow circuits only capable of binary classification) to achieve the capability of large models.
III Numerical Experiments

Table 1: Top-1 test accuracies (%) of the centralized benchmark, qFedInf, and qFedAvg on MNIST and Fashion-MNIST under the star and cycle-2 structures.

| Top-1 Accuracy (%) | Centralized | qFedInf (star) | qFedAvg (star) | qFedInf (cycle-2) | qFedAvg (cycle-2) |
|---|---|---|---|---|---|
| MNIST | 91.2 ± 0.6 | 92.4 ± 0.3 | 86.2 ± 1.0 | 92.7 ± 0.2 | 88.4 ± 0.8 |
| Fashion-MNIST | 77.2 ± 0.5 | 74.0 ± 0.3 | 61.4 ± 1.4 | 75.4 ± 0.3 | 66.7 ± 1.3 |


III.1 Constructing Non-IID Datasets
We observe that the existing quantum federated learning algorithms can already achieve high accuracies on binary classification tasks with synthetic and common classical/quantum datasets [20, 17]. Therefore, to better illustrate the performance differences between qFedInf and qFedAvg, we devise a highly heterogeneous federated dataset based on 8 classes (“0” through “7”) from the MNIST handwritten digits dataset. To produce heterogeneity, we adopt the star structure and cycle-m structure settings from [36] as follows.
Star structure: Each client only has access to the data of two classes, with one of the classes fixed. That is, client $k$ only has access to the data of digits “0” and “$k$”.
Cycle-m structure: Each client only has access to the data of $m$ classes in a cyclic way. That is, client $k$ has access to the data of digits “$k$”, …, “$(k + m - 1) \bmod 8$”.
Datasets with the same structures are also prepared for the Fashion-MNIST dataset, which is composed of images of fashion items and is regarded as a harder version of MNIST.
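A minimal sketch of the two partition schemes is given below. It only assigns sample indices to clients based on their integer labels; the client counts (7 for the star structure, 8 for the cycle-m structure) follow naturally from the 8-class setup but are our own reading of it.

```python
import numpy as np

def star_partition(labels, n_classes=8):
    """Star structure: client k (k = 1..n_classes-1) holds digits {0, k}."""
    return [np.where((labels == 0) | (labels == k))[0]
            for k in range(1, n_classes)]

def cycle_partition(labels, m, n_classes=8):
    """Cycle-m structure: client k holds digits {k, ..., (k + m - 1) mod n_classes}."""
    clients = []
    for k in range(n_classes):
        classes = [(k + j) % n_classes for j in range(m)]
        clients.append(np.where(np.isin(labels, classes))[0])
    return clients

# Example: per-client sample indices for a toy label array.
labels = np.random.default_rng(0).integers(0, 8, size=1000)
star_clients = star_partition(labels)         # 7 clients, 2 classes each
cycle2_clients = cycle_partition(labels, 2)   # 8 clients, 2 classes each
```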
III.2 Details of the Quantum Classifiers
We parameterize the quantum classifier as a quantum circuit of $L$ layers. Each layer contains a set of controlled-NOT gates on adjacent qubits, followed by a parameterized rotation on each qubit. The detailed circuit is shown in Figure 2.
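A minimal TensorCircuit sketch of such an ansatz is shown below. The specific rotation axes (RY and RZ) and the exact gate layout are assumptions on our part, since they are only specified in Figure 2; the overall structure (adjacent CNOTs followed by parameterized single-qubit rotations, with per-qubit expectation-value readout) follows the description above.

```python
import numpy as np
import tensorcircuit as tc

tc.set_backend("jax")   # the simulations in this paper use the JAX backend

def ansatz(params, inputs, n_qubits=8, n_layers=3):
    """L-layer ansatz: adjacent CNOTs followed by parameterized rotations.

    params : array of shape (n_layers, n_qubits, 2) with rotation angles (assumed axes)
    inputs : normalized length-2**n_qubits amplitude-encoded state vector
    """
    c = tc.Circuit(n_qubits, inputs=inputs)
    for layer in range(n_layers):
        for q in range(n_qubits - 1):          # entangling CNOTs on adjacent qubits
            c.cnot(q, q + 1)
        for q in range(n_qubits):              # parameterized single-qubit rotations
            c.ry(q, theta=params[layer, q, 0])
            c.rz(q, theta=params[layer, q, 1])
    # read out the Z expectation value on every qubit
    return np.array([np.real(c.expectation_ps(z=[q])) for q in range(n_qubits)])
```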
To load the data into an 8-qubit quantum circuit, we interpolate and resize each image to a size of 16 × 16 and use amplitude encoding to transform it into a quantum state [3, 52]:
$|x\rangle = \frac{1}{\|x\|} \sum_{i=0}^{255} x_i |i\rangle$    (12)
where $x_i$ are the pixel values of the image and $|i\rangle$ denote the computational basis states. In experiments, amplitude encoding can be implemented using quantum random access memory [53, 54] or universal gate decomposition [46, 55, 56].
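The classical preprocessing amounts to resizing each 28 × 28 image to 16 × 16 and normalizing the 256 pixel values into an amplitude vector. A minimal numpy sketch is given below; the specific interpolation scheme is an illustrative choice, not necessarily the one used in our implementation.

```python
import numpy as np

def amplitude_encode(image_28x28):
    """Resize a 28x28 grayscale image to 16x16 and amplitude-encode it on 8 qubits."""
    # Simple separable linear interpolation via coordinate sampling.
    idx = np.linspace(0, 27, 16)
    x0 = np.floor(idx).astype(int)
    frac = idx - x0
    x1 = np.minimum(x0 + 1, 27)
    rows = (1 - frac)[:, None] * image_28x28[x0] + frac[:, None] * image_28x28[x1]
    small = (1 - frac)[None, :] * rows[:, x0] + frac[None, :] * rows[:, x1]
    amp = small.flatten().astype(np.float64)
    amp = amp / (np.linalg.norm(amp) + 1e-12)   # normalize so that sum_i |x_i|^2 = 1
    return amp                                   # length 256 = 2**8 amplitude vector
```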
To perform inference, we measure the expectation values on each qubit, amplify the outcomes by a factor of 10, and feed them into the softmax function to predict the probability of each class. We use the Adam optimizer [57] with a fixed learning rate and batch size to minimize the standard cross-entropy loss function. The gradients used in the optimization can be computed via the parameter-shift rule [58, 17, 59].
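The readout and loss can be written compactly in JAX as below. This is a sketch of the post-processing only: the eight expectation values are scaled by 10 and passed through a softmax, and the cross-entropy is taken against the integer label; in the full pipeline the gradient is taken with respect to the circuit parameters (via autodiff on the simulator or the parameter-shift rule), not the expectation values as in this toy call.

```python
import jax
import jax.numpy as jnp

def predict_probs(expectations, scale=10.0):
    """Map the 8 per-qubit expectation values to 8 class probabilities."""
    return jax.nn.softmax(scale * jnp.asarray(expectations))

def cross_entropy(expectations, label, scale=10.0):
    """Standard cross-entropy loss for an integer class label."""
    log_probs = jax.nn.log_softmax(scale * jnp.asarray(expectations))
    return -log_probs[label]

# Toy example: gradient with respect to the expectation values themselves.
grads = jax.grad(cross_entropy)(jnp.zeros(8), 3)
```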
All the numerical simulations are conducted with JAX [60] and TensorCircuit [61] on one NVIDIA Tesla V100 GPU. The source code is available at https://github.com/JasonZHM/quantum-fed-infer.
III.3 Performance of qFedAvg and qFedInf on Non-IID Data


We begin by demonstrating the non-IID quagmire of qFedAvg discussed in Section II.3 with numerical simulations. Specifically, we train and test qFedAvg on datasets of the cycle-m structure. The parameter $m$ serves as a good controller of the level of non-IID: as $m$ increases, each client has access to more classes, and the level of non-IID decreases.
We train the channel described in Section III.2 using qFedAvg for 5 epochs. The global synchronization frequency of qFedAvg is set to once per batch step. The resulting test accuracies on a test set of size 1024 are plotted in Figure 3. In line with the theoretical analysis, we find that the top-1 accuracy increases as the level of non-IID drops. When the data are highly heterogeneous (small $m$), qFedAvg suffers from a clear loss in accuracy on both MNIST and Fashion-MNIST compared to the benchmark trained on the centralized data with the same circuit structure. Nevertheless, when the data heterogeneity is mild (large $m$), qFedAvg achieves performance comparable to the centralized classifier. We also plot the test loss and accuracy curves for the star structure in Figure 4. As expected, qFedAvg converges much more slowly and to a significantly lower accuracy.
To test the performance of qFedInf on non-IID data, we train and test qFedInf on the most heterogeneous settings, namely the star structure and the cycle-2 structure, and compare it with qFedAvg and the centralized benchmark. In such settings, each client's local classifier only needs to perform a binary classification, and thus in practice we find that circuits with only a few layers suffice to achieve good performance. For comparison with qFedAvg and the centralized benchmark, we choose a small number of layers for the local classifiers, so that the total number of variational parameters across the clients remains the same. We train the local classifiers for 5 epochs and plot the loss and accuracy curves in Figure 5. We adopt Gaussian mixture models with 5 modes as the local density estimators. The combined global model achieves a top-1 accuracy similar to the centralized benchmark and significantly higher than that of qFedAvg in both settings on both datasets. The detailed accuracies are listed in Table 1. Meanwhile, the performance of qFedInf is roughly unaffected by the number of classes per client, demonstrating its robustness against the level of non-IID.
We note that in the training process, qFedAvg requires a total of 500 communication rounds, while qFedInf only needs one. Moreover, the local classifiers used in qFedInf are much shallower than those of qFedAvg and the centralized benchmark. If we reduce the number of layers of the centralized classifier to 6, its test accuracy drops to a much lower value. To rule out the possibility that the barren plateau [62] causes this performance difference, we also test qFedInf with a different number of layers, and the resulting performance is unchanged, which suggests that the barren plateau issue is not significant in our settings. Therefore, putting the federated setting aside, qFedInf can indeed utilize the collective knowledge of many small models to achieve the capability of a large model, in line with MoE and ensemble learning as discussed in Section II.6.


IV Conclusions
In this work, we tackle the problem of non-IID data in quantum federated learning. We give a theoretical analysis of the non-IID quagmire in qFedAvg and support it with numerical experiments. We prove that a global quantum channel can be exactly decomposed into local channels trained by each client with the help of local density estimators. This leads to a general framework, qFedInf, for quantum federated learning on non-IID data. It's one-shot in terms of communication complexity and immune to attacks based on gradient inversion.
We conduct numerical experiments on multi-class classification tasks to demonstrate the proposed framework. We devise highly heterogeneous federated datasets based on MNIST and Fashion-MNIST. Experiments show that qFedInf achieves performance comparable to the centralized benchmark and outperforms qFedAvg with significantly fewer communication rounds.
The non-IID issue has been regarded as a major challenge and is under active research in the classical federated learning literature. Future works may focus on a more thorough analysis of this issue in the quantum regime, e.g. developing more challenging quantum federated datasets to demonstrate the quantum non-IID quagmire and testing the performance of different density estimators. On the other hand, as quantum channels are the most general form of quantum information processing, we expect that more quantum machine learning algorithms can be made federated through the proposed framework. Moreover, more quantum features such as quantum speed-up and universal blind quantum computation [63, 17] may also be incorporated into qFedInf in the future.
Acknowledgements.
We thank Weikang Li, Jingyi Zhang, Yuxuan Yan, Rebing Wu, and Yuchen Guo for their insightful discussions. We thank the anonymous reviewers for their constructive suggestions on the manuscript. We also acknowledge the Tsinghua Astrophysics High-Performance Computing platform for providing computational and data storage resources. This work is financially supported by Zhili College, Tsinghua University.
Appendix A Proof of Proposition 1
Proposition 1 is a quantum generalization of its classical counterpart, Proposition 3.1 in [22]. Below, we provide a detailed proof following the ideas introduced in [22]. Based on the definition of the weight divergence and the update rules, Equations (2), (3) and (4), we have
(13)
Now we apply the triangle inequality and the Lipschitz conditions. Together with the definitions introduced above, we have
(14)
Then we continue going backwards in the time steps. With the triangle inequality and the same definitions, we have
(15)
By induction and the broadcast rule (every client starts a round from the averaged weights), we arrive at
(16)
Plug it into Equation (14), and we finally reach the desired result:
(17)
Appendix B Proof of Theorem 1
With the definitions in Sections II.1 and II.4, for any pure input state $|\psi\rangle$, the global channel $\mathcal{E}$ can be decomposed into
(18)
where the second line utilizes the fact that the joint density matrix is diagonal in the client basis $\{|k\rangle\}$, and the last equality follows from
(19)
As for mixed states, we note that they can always be decomposed into a linear combination of pure states. Thus, by the linearity of quantum channels, the formula for $\mathcal{E}$ acting on mixed states follows from direct linear superposition. This completes the proof of Theorem 1.
Appendix C A Proposal of Quantum Generative Learning with qFedInf
We mentioned in the main text that the proposed framework qFedInf may be applied to machine learning tasks beyond classification. Here we provide a specific example of performing quantum generative learning [10, 11, 12] with qFedInf. This only serves as a preliminary proposal and we leave the detailed study of its performance and implications to future works.
In quantum generative learning, we aim to learn a generative model that can reconstruct some target quantum state $\rho_{\mathrm{target}}$. In a federated learning context, each client $k$ only has access to a small proportion of the total data, which statistically forms a quantum state $\rho_k$. Thus the whole target state can be written as $\rho_{\mathrm{target}} = \sum_k p_k \rho_k$, where $p_k$ is the proportion of data accessible to client $k$. The notations here are the same as in Section II.1.
We take the quantum generative adversarial network (qGAN) [11] as our quantum channel to perform the learning task. It's a quantum circuit that takes some fixed initial state, for example $|0\rangle^{\otimes n}$, as its input, and outputs a quantum state parameterized by the circuit parameters. Adversarial learning strategies are applied to train the circuit, and the output state after training is expected to approximate the target state. Here we omit the training details, as our focus is on the federated learning aspect.
In the qFedInf framework, each client $k$ trains its own qGAN, denoted as the local channel $\mathcal{E}_k$, with its own data $D_k$. After training, we expect $\mathcal{E}_k(|0\rangle\langle 0|^{\otimes n}) \approx \rho_k$. As for the density estimation part, we note that the input states are fixed to be $|0\rangle^{\otimes n}$, so the density estimators become trivial and the decomposition weights reduce to $w_k = p_k$. Plug these into Equation (11) and we arrive at
$\mathcal{E}(|0\rangle\langle 0|^{\otimes n}) = \sum_k p_k\, \mathcal{E}_k(|0\rangle\langle 0|^{\otimes n}) \approx \sum_k p_k \rho_k = \rho_{\mathrm{target}}$    (20)
which is exactly our goal. This is a concrete proposal of quantum federated generative learning which has not appeared in the literature so far.
Declarations
Competing interests The author declares no competing interests.
Funding This work is financially supported by Zhili College, Tsinghua University.
Availability of data and materials All the data and materials used in this work can be accessed at https://github.com/JasonZHM/quantum-fed-infer.
References
- Goodfellow et al. [2016] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016) http://www.deeplearningbook.org.
- Nielsen and Chuang [2010] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
- Biamonte et al. [2017] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549, 195 (2017).
- Das Sarma et al. [2019] S. Das Sarma, D.-L. Deng, and L.-M. Duan, Machine learning meets quantum physics, Physics Today 72, 48 (2019).
- Harrow et al. [2009] A. W. Harrow, A. Hassidim, and S. Lloyd, Quantum algorithm for linear systems of equations, Physical review letters 103, 150502 (2009).
- Lloyd et al. [2014] S. Lloyd, M. Mohseni, and P. Rebentrost, Quantum principal component analysis, Nature Physics 10, 631 (2014).
- Schuld and Killoran [2019] M. Schuld and N. Killoran, Quantum machine learning in feature hilbert spaces, Physical review letters 122, 040504 (2019).
- Havlíček et al. [2019] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, Supervised learning with quantum-enhanced feature spaces, Nature 567, 209 (2019).
- Rebentrost et al. [2014] P. Rebentrost, M. Mohseni, and S. Lloyd, Quantum support vector machine for big data classification, Physical review letters 113, 130503 (2014).
- Gao et al. [2018] X. Gao, Z.-Y. Zhang, and L.-M. Duan, A quantum machine learning algorithm based on generative models, Science advances 4, eaat9004 (2018).
- Lloyd and Weedbrook [2018] S. Lloyd and C. Weedbrook, Quantum generative adversarial learning, Physical review letters 121, 040502 (2018).
- Liu and Wang [2018] J.-G. Liu and L. Wang, Differentiable learning of quantum circuit born machines, Physical Review A 98, 062324 (2018).
- Rieke et al. [2020] N. Rieke, J. Hancox, W. Li, F. Milletari, H. R. Roth, S. Albarqouni, S. Bakas, M. N. Galtier, B. A. Landman, K. Maier-Hein, et al., The future of digital health with federated learning, NPJ digital medicine 3, 1 (2020).
- Khraisat and Alazab [2021] A. Khraisat and A. Alazab, A critical review of intrusion detection systems in the internet of things: techniques, deployment strategy, validation strategy, attacks, public datasets and challenges, Cybersecurity 4, 1 (2021).
- Konečnỳ et al. [2016] J. Konečnỳ, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, Federated learning: Strategies for improving communication efficiency, arXiv preprint arXiv:1610.05492 (2016).
- McMahan et al. [2017] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y. Arcas, Communication-Efficient Learning of Deep Networks from Decentralized Data, in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, Vol. 54, edited by A. Singh and J. Zhu (PMLR, 2017) pp. 1273–1282.
- Li et al. [2021a] W. Li, S. Lu, and D.-L. Deng, Quantum federated learning through blind quantum computing, Science China Physics, Mechanics & Astronomy 64, 1 (2021a).
- Xia and Li [2021] Q. Xia and Q. Li, Quantumfed: A federated learning framework for collaborative quantum training, in 2021 IEEE Global Communications Conference (GLOBECOM) (IEEE, 2021) pp. 1–6.
- Chen and Yoo [2021] S. Y.-C. Chen and S. Yoo, Federated quantum machine learning, Entropy 23, 460 (2021).
- Chehimi and Saad [2022] M. Chehimi and W. Saad, Quantum federated learning with quantum data, in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2022) pp. 8617–8621.
- Yun et al. [2022] W. J. Yun, J. P. Kim, S. Jung, J. Park, M. Bennis, and J. Kim, Slimmable quantum federated learning, arXiv preprint arXiv:2207.10221 (2022).
- Zhao et al. [2018] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, Federated learning with non-iid data, arXiv preprint arXiv:1806.00582 (2018).
- Hsieh et al. [2020] K. Hsieh, A. Phanishayee, O. Mutlu, and P. Gibbons, The non-iid data quagmire of decentralized machine learning, in International Conference on Machine Learning (PMLR, 2020) pp. 4387–4398.
- Zhu et al. [2019] L. Zhu, Z. Liu, and S. Han, Deep leakage from gradients, Advances in neural information processing systems 32 (2019).
- Geiping et al. [2020] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, Inverting gradients-how easy is it to break privacy in federated learning?, Advances in Neural Information Processing Systems 33, 16937 (2020).
- Guha et al. [2019] N. Guha, A. Talwalkar, and V. Smith, One-shot federated learning, arXiv preprint arXiv:1902.11175 (2019).
- Salehkaleybar et al. [2021] S. Salehkaleybar, A. Sharif-Nassab, and S. J. Golestani, One-shot federated learning: Theoretical limits and algorithms to achieve them., J. Mach. Learn. Res. 22, 189 (2021).
- Zhou et al. [2020] Y. Zhou, G. Pu, X. Ma, X. Li, and D. Wu, Distilled one-shot federated learning, arXiv preprint arXiv:2009.07999 (2020).
- Kasturi et al. [2020] A. Kasturi, A. R. Ellore, and C. Hota, Fusion learning: A one shot federated learning, in International Conference on Computational Science (Springer, 2020) pp. 424–436.
- J. M. Swart [2020] J. M. Swart, Introduction to quantum probability (2020), http://staff.utia.cas.cz/swart/lecture_notes/qua20_04_27.pdf, Last accessed on 2022-1-7.
- González et al. [2021] F. A. González, V. Vargas-Calderón, and H. Vinck-Posada, Classification with quantum measurements, Journal of the Physical Society of Japan 90, 044002 (2021), https://doi.org/10.7566/JPSJ.90.044002 .
- Rubner et al. [2000] Y. Rubner, C. Tomasi, and L. J. Guibas, The earth mover’s distance as a metric for image retrieval, International journal of computer vision 40, 99 (2000).
- Note [1] In a typical classification problem, denotes the different classes and the standard cross entropy loss is given by , where is the predicted probability of belonging to class .
- Note [2] The fact that identical samples have identical labels guarantees that the global channel is well defined. For example, suppose we have two identical samples with the same label , but are from different clients . Then, in order to fulfill the local minimization problems, Equation (6), we must have .
- Note [3] To avoid confusion, we insist on using different notations for to emphasize their differences in physical meaning: we use to denote the quantum state that you can load into your circuit, while we use to denote the projection operator.
- Liu et al. [2022] J. Liu, Y. Tang, H. Zhao, X. Wang, F. Li, and J. Zhang, Cps attack detection under limited local information in cyber security: A multi-node multi-class classification ensemble approach (2022), arXiv:2209.00170 [cs.CR] .
- Bishop [2006] C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics) (Springer-Verlag, Berlin, Heidelberg, 2006).
- Li and Deng [2021] W. Li and D.-L. Deng, Recent advances for quantum classifiers, Science China Physics, Mechanics & Astronomy 65, 10.1007/s11433-021-1793-6 (2021).
- Rezende and Mohamed [2015] D. Rezende and S. Mohamed, Variational inference with normalizing flows, in International conference on machine learning (PMLR, 2015) pp. 1530–1538.
- González et al. [2022] F. A. González, A. Gallego, S. Toledo-Cortés, and V. Vargas-Calderón, Learning with density matrices and random features, Quantum Machine Intelligence 4, 1 (2022).
- Huang et al. [2020] H.-Y. Huang, R. Kueng, and J. Preskill, Predicting many properties of a quantum system from very few measurements, Nature Physics 16, 1050 (2020).
- LaRose et al. [2019] R. LaRose, A. Tikku, É. O’Neel-Judy, L. Cincio, and P. J. Coles, Variational quantum state diagonalization, npj Quantum Information 5, 1 (2019).
- Xin et al. [2021] T. Xin, L. Che, C. Xi, A. Singh, X. Nie, J. Li, Y. Dong, and D. Lu, Experimental quantum principal component analysis via parametrized quantum circuits, Physical Review Letters 126, 110502 (2021).
- Huang et al. [2022] H.-Y. Huang, R. Kueng, G. Torlai, V. V. Albert, and J. Preskill, Provably efficient machine learning for quantum many-body problems, Science 377, eabk3333 (2022).
- Li et al. [2021b] H.-S. Li, P. Fan, H. Peng, S. Song, and G.-L. Long, Multilevel 2-d quantum wavelet transforms, IEEE Transactions on Cybernetics (2021b).
- Barenco et al. [1995] A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. A. Smolin, and H. Weinfurter, Elementary gates for quantum computation, Physical review A 52, 3457 (1995).
- Masoudnia and Ebrahimpour [2014] S. Masoudnia and R. Ebrahimpour, Mixture of experts: a literature survey, Artificial Intelligence Review 42, 275 (2014).
- Jordan and Jacobs [1994] M. I. Jordan and R. A. Jacobs, Hierarchical mixtures of experts and the em algorithm, Neural computation 6, 181 (1994).
- Jacobs et al. [1991] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton, Adaptive mixtures of local experts, Neural computation 3, 79 (1991).
- Zhou [2021] Z.-H. Zhou, Machine learning (Springer Nature, 2021).
- Sagi and Rokach [2018] O. Sagi and L. Rokach, Ensemble learning: A survey, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8, e1249 (2018).
- Li et al. [2014] H.-S. Li, Q. Zhu, M.-C. Li, H. Ian, et al., Multidimensional color image storage, retrieval, and compression based on quantum amplitudes and phases, Information Sciences 273, 212 (2014).
- Giovannetti et al. [2008a] V. Giovannetti, S. Lloyd, and L. Maccone, Quantum random access memory, Physical review letters 100, 160501 (2008a).
- Giovannetti et al. [2008b] V. Giovannetti, S. Lloyd, and L. Maccone, Architectures for a quantum random access memory, Physical Review A 78, 052310 (2008b).
- Long and Sun [2001] G.-L. Long and Y. Sun, Efficient scheme for initializing a quantum register with an arbitrary superposed state, Physical Review A 64, 014303 (2001).
- Plesch and Brukner [2011] M. Plesch and Č. Brukner, Quantum-state preparation with universal gate decompositions, Physical Review A 83, 032302 (2011).
- Kingma and Ba [2017] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization (2017), arXiv:1412.6980 [cs.LG] .
- Mitarai et al. [2018] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, Quantum circuit learning, Physical Review A 98, 032309 (2018).
- Li et al. [2017] J. Li, X. Yang, X. Peng, and C.-P. Sun, Hybrid quantum-classical approach to quantum optimal control, Physical review letters 118, 150503 (2017).
- Bradbury et al. [2018] J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. VanderPlas, S. Wanderman-Milne, and Q. Zhang, JAX: composable transformations of Python+NumPy programs (2018).
- Zhang et al. [2022] S.-X. Zhang, J. Allcock, Z.-Q. Wan, S. Liu, J. Sun, H. Yu, X.-H. Yang, J. Qiu, Z. Ye, Y.-Q. Chen, et al., Tensorcircuit: a quantum software framework for the nisq era, arXiv preprint arXiv:2205.10091 (2022).
- McClean et al. [2018] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature communications 9, 1 (2018).
- Broadbent et al. [2009] A. Broadbent, J. Fitzsimons, and E. Kashefi, Universal blind quantum computation, in 2009 50th Annual IEEE Symposium on Foundations of Computer Science (IEEE, 2009) pp. 517–526.