Quantum Federated Learning for Distributed
Quantum Networks
Abstract
Federated learning is a framework for learning from distributed networks. It attempts to build a global model based on virtually fused data without sharing the actual data. Nevertheless, the traditional federated learning process encounters two challenges: high computational cost and the security of message transmission. To address them, we propose a quantum federated learning framework for distributed quantum networks that utilizes quantum characteristics. First, we give two methods to extract data information into quantum states, which can cope with different acquisition frequencies of the data. Next, a quantum gradient descent algorithm is provided to help clients in the distributed quantum networks train local models in parallel. Compared with its classical counterpart, the proposed algorithm achieves exponential acceleration in the dataset scale and a quadratic speedup in the data dimensionality. In addition, a quantum secure multi-party computation protocol based on the Chinese remainder theorem is designed, which avoids the errors and overflow problems that may occur when operating with large numbers. Security analysis shows that the protocol can resist common external and internal attacks. Finally, to demonstrate the effectiveness of the proposed framework, we use it to train a federated linear regression model and simulate the essential computation steps on the IBM Qiskit simulator.
Index Terms:
Quantum algorithm, federated learning, distributed networks, quantum gradient descent, quantum secure multi-party computation.

I Introduction
With the development of information networks, more and more data are generated and stored in distributed network systems [1]. Integrating data from distributed networks enables the extraction of valuable information. Nevertheless, a significant proportion of the data contains sensitive and private information, making data owners hesitant to share it [2]. This situation has led to the emergence of federated learning (FL), a distributed machine learning (ML) method [3]. FL improves data privacy by keeping data local and training on it without sharing the raw data. This not only makes effective use of distributed data resources, but also facilitates the development of information network technology. However, the volume of locally trained data can be huge, and the computing power of traditional computers then faces great challenges. Furthermore, the transmission of training results poses a threat to user privacy, as it can provide an opportunity for attackers to infer sensitive information. While classical cryptographic schemes have been used to safeguard communication security, advances in hardware pose a persistent threat to their security.
Quantum information processing (QIP) is an emerging field that explores the interaction between quantum mechanics and information technology, and it continues to attract the attention of scholars. In 1984, Bennett and Brassard proposed the famous BB84 protocol [4], which achieves key distribution between two remote parties. Subsequently, scholars utilized quantum information processing to ensure information security and proposed a series of quantum cryptography protocols. In contrast to classical cryptography protocols, whose security is based on assumptions of computational complexity, the security of these protocols relies on physical properties such as the Heisenberg uncertainty principle, which makes them unconditionally secure in theory. Quantum cryptography has developed into a significant application of quantum information processing, including quantum key distribution [5, 6, 7], quantum secret sharing [8, 9, 10], quantum secure direct communication [11, 12], and so on. Another exciting application of quantum information processing is quantum computing, which provides quantum speedups for certain classes of problems that are intractable on classical computers. For example, the factorization of large numbers via Shor's algorithm [13] enjoys an exponential speedup. Furthermore, quantum computing has also made advances in machine learning, such as quantum linear system solvers [14, 15, 16], quantum regression [17, 18], quantum neural networks [19, 20], variational quantum algorithms (VQA) [21, 22, 23], and so on.
Motivated by the advantages shown by quantum cryptography and quantum computing in improving transmission security and computing speed, respectively, scholars have attempted to utilize QIP to address the challenges faced by FL. In 2021, Li et al. focused on the security issue of FL [24]. They proposed a private single-party delegated training protocol based on blind quantum computing for a variational quantum classifier, and then extended the protocol to quantum FL combined with differential privacy. This protocol can exploit the computing advantage of remote quantum servers while preserving the privacy of sensitive data. In 2024, Ren et al. proposed a quantum FL scheme to address privacy preservation in dynamic security assessment for smart cyber-physical grids [25]. Moreover, Chen and Yoo proposed a quantum FL scheme with a hybrid quantum-classical machine learning model, focusing on improving the efficiency of local training [26]. In their scheme, a classical convolutional network extracts data features and compresses them into vectors, which are fed into variational quantum circuits for training. Compared with the classical process, this method can achieve the same level of accuracy more quickly. In 2022, Huang et al. utilized a variational quantum algorithm to estimate the gradient of the local model, avoiding the high cost of computing the gradient analytically [27]. As variational quantum algorithms approximate target results using parameterized circuits, they differ from quantum algorithms that compute target results through the evolution of quantum gates. Therefore, we further explore the realization of FL with quantum resources.
In this paper, we focus on quantum algorithms running on ordinary quantum computers and present a quantum federated learning framework based on gradient descent (QFLGD). It aims to provide a unified, secure, and effective gradient estimation scheme for distributed quantum networks. In QFLGD, we propose two data preparation methods by analyzing the different acquisition frequencies of static data (the local training data) and dynamic data (the parameters that need to be updated during iteration). This reduces the requirements that QFLGD places on the performance of quantum random access memory. At the same time, two main processes of FL are implemented in QFLGD by exploiting quantum properties. The first is a quantum gradient descent (QGD) algorithm, which accelerates the training of the gradient for the client. QGD provides the client with a classical gradient at each iteration, which can be directly used to learn classical model parameters. Compared with its classical counterpart, this quantum process achieves exponential acceleration in the data scale and a quadratic speedup in the data dimensionality. The other is a quantum secure multi-party computation (QSMC) protocol, which allows the aggregation of gradients to be performed securely over quantum communication networks. That is, the server is able to calculate the federated gradient without the clients sharing their local gradients. Furthermore, the application of the Chinese remainder theorem in QSMC avoids the errors and overflow problems that may occur during calculations with large numbers. The proposed quantum federated learning framework can improve the local computing efficiency and data privacy of FL. We also apply QFLGD to train a federated linear regression (FLR) model and present a numerical experiment to verify its correctness.
The remainder of this paper is organized as follows. The classical FL is reviewed in Sec. II. In Sec. III, we propose the framework for QFLGD. In Sec. IV, we analyze the time complexity and the security of QFLGD. Furthermore, an application to train the FLR and the numerical experiment are shown in Sec. V. In Sec. VI, we give the conclusion of our work.
II Review of classical FL
To clarify the framework of QFLGD in distributed quantum networks, this section offers an overview of the fundamental ideas and processes of traditional FL. FL is a collaborative ML approach in which multiple clients train a shared model without exchanging raw data. A popular learning framework is FL based on gradient descent [3], which is depicted in Fig. 1. It mainly includes the following parts.

A) Data preparation and model initialization. In the FL framework, data is derived from various clients in a distributed network, such as hospital medical information, preference options in business surveys, and other sensitive data [28]. We consider general federated learning with K clients participating in the model training. The server (Alice) initializes a global model whose parameters require training and distributes it to the clients. The client P_k (k = 1, 2, …, K) collects and preprocesses M_k data samples x_i^{(k)}, where i = 1, …, M_k and y_i^{(k)} is the corresponding label.
B) Local training. To train the model, clients use standard ML algorithms without sharing raw data. The trained model is evaluated by minimizing a cost function, such as the mean square error (MSE) loss function
L(θ) = (1/(2 M_k)) ∑_{i=1}^{M_k} ( f(θ·x_i^{(k)}) − y_i^{(k)} )²   (1)
where f is the activation function. The loss expresses the difference between the model output and the expected output. In this case, model optimization amounts to finding the gradient of L with respect to θ to adjust the model parameters. The client P_k can obtain
g_j^{(k)} = ∂L/∂θ_j = (1/M_k) ∑_{i=1}^{M_k} ( f(θ·x_i^{(k)}) − y_i^{(k)} ) f′(θ·x_i^{(k)}) x_{ij}^{(k)}   (2)
with his data. Here, f′ is the derivative of the activation function, g_j^{(k)} denotes the jth element of the local gradient g^{(k)}, and x_{ij}^{(k)} is the jth element of the sample x_i^{(k)}.
C) Model aggregation and update. The server (Alice) collects the gradients trained by all clients and calculates the federated gradient
g_j = (1/M) ∑_{k=1}^{K} M_k g_j^{(k)}   (3)
where M = ∑_{k=1}^{K} M_k and g_j is the jth element of the federated gradient g. Then, Alice updates the global model parameters. Specifically, she adjusts the parameter θ_j to
θ_j^{(t+1)} = θ_j^{(t)} − α g_j^{(t)}   (4)
for j = 1, 2, …, d in the tth iteration. In Eq. (4), α is a learning rate [29].
D) Model evaluation and distribution. The server (Alice) evaluates the performance of the global model and sends the global model parameters to the clients for further local training if it has not yet converged (i.e., the norm of the federated gradient still exceeds a given threshold). Once the convergence condition is satisfied, Alice announces that the training stops and distributes the model.
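To make steps A)-D) concrete, the following is a minimal classical sketch of one training round, assuming the identity activation f(z) = z in Eqs. (1)-(4); the data sizes, learning rate, and function names are illustrative rather than taken from the paper.

```python
import numpy as np

def local_gradient(X, y, theta):
    # Step B (Eq. (2)) with identity activation f(z) = z, so f'(z) = 1:
    # the k-th client's MSE gradient over its M_k local samples.
    residual = X @ theta - y
    return X.T @ residual / len(y)

def federated_round(clients, theta, alpha=0.1):
    # Step C (Eq. (3)): aggregate local gradients weighted by dataset sizes,
    # then take one gradient-descent step (Eq. (4)).
    M = sum(len(y) for _, y in clients)
    fed_grad = sum(len(y) * local_gradient(X, y, theta) for X, y in clients) / M
    return theta - alpha * fed_grad, fed_grad

# Toy run: two clients holding random 4-dimensional datasets.
rng = np.random.default_rng(7)
clients = [(rng.normal(size=(8, 4)), rng.normal(size=8)) for _ in range(2)]
theta = np.zeros(4)
for _ in range(200):
    theta, fed_grad = federated_round(clients, theta)
    if np.linalg.norm(fed_grad) < 1e-8:   # Step D: stop once the gradient is small
        break
```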
It is notable that the time consumption of FL is mainly in the calculation of the local gradient. For the client P_k, each gradient element requires estimating an inner product for every sample and summing over the M_k samples; repeating this for all d elements, it takes O(M_k d) time to calculate the local gradient on a classical computer. In the era of big data, this is surely a very expensive calculation. Moreover, the security of federated learning may be compromised during local gradient aggregation. Traditional encryption methods can improve the security of this process. However, with the development of quantum technology, such encryption methods face new threats.
III Quantum Federated Learning based on Gradient Descent Algorithm
In this section, we present QFLGD, which focuses on parallel and private computing architectures for data in distributed quantum networks. Such a distributed quantum network typically consists of a server and several clients with quantum computing capabilities. We first give ways to extract the data information into quantum states. Subsequently, we propose a QGD algorithm that clients use to estimate the gradient locally. A QSMC protocol is designed to perform a private calculation of the global gradient when the server aggregates the training results of the clients. Finally, the server updates the global parameters and shares the results with the clients. The schematic diagram of the QFLGD framework is presented in Fig. 2.

III-A Quantum data preparation and model initialization
Similar to classical FL, a dataset is chosen by each client in quantum FL, where each sample has a corresponding label. For convenience, we assume that the dimension is a power of two; otherwise, some zeros are padded into the vector. Furthermore, the server initializes an ML model with parameters. The learnable parameters of the model are represented by a vector, which can be optimized using gradient descent. The ability of quantum computers to effectively solve practical problems depends on encoding this information into quantum states as input to a quantum algorithm. Here, we give methods to extract the data and parameter information into quantum states.
Considering the quantum oracles
(5) |
and
(6) |
are provided, where represents the th element of the th vector of the data set . These two oracles can respectively access the entries of , in time and [17, 30], when the data are stored in quantum random access memory (QRAM) [31] with an appropriate data structure [32]. In addition, the operation
(7) |
is required, which gives access to the -norm of the vector . Inspired by Ref. [33], it can be implemented in time employing controlled rotation [34] and quantum phase estimation (QPE) [35]. The details are shown in Appendix A. Under these assumptions, the processes of quantum data preparation are described as follows.
In this step, the data information is extracted into the state . Firstly, three quantum registers are prepared in the state , where the subscript numbers denote different registers. The second register contains enough qubits to store the information about the elements of the data, i.e., . After that, is applied on the second register to generate the state
(8) |
Secondly, the quantum oracle is performed on the three registers. These registers are in a state
(9) |
Subsequently, a qubit in the state is added and rotated in a controlled manner conditioned on , where . The system becomes
(10) |
Finally, the inverse operation is applied on the third register. The quantum state
(11) |
could be obtained by discarding the third register. The whole process is denoted as , which generates the state in time .
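QRAM oracles and the controlled rotations above are not available on present-day simulators; as a minimal stand-in under that caveat, the Qiskit sketch below prepares the same normalized amplitude-encoded state directly with the library routine StatePreparation. The 8-dimensional vector is illustrative.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import StatePreparation
from qiskit.quantum_info import Statevector

# Illustrative data vector, padded to a power-of-two length as in Sec. III-A.
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0])
amplitudes = x / np.linalg.norm(x)   # amplitude encoding: entry j becomes x_j / ||x||

qc = QuantumCircuit(3)
qc.append(StatePreparation(amplitudes), [0, 1, 2])  # stand-in for oracle + rotation + uncompute

# The prepared statevector matches the normalized data vector.
assert np.allclose(np.real(Statevector(qc).data), amplitudes)
```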
In order to train the gradient, the parameter should be introduced in the th iteration. Thus, it is necessary to generate a quantum state which contains the information of . Because the parameter is different in each iteration, there are two methods to prepare this quantum state.
One way is based on the assumption that QRAM allows frequent reading and writing. If the information of () is written into QRAM in time, the quantum state
(12) |
can be produced by the processes similar to step with the help of the oracle (), where and is denoted as the th element of the parameter vector in the th iteration. This way can be implemented in time .
Alternatively, the parameter can be extracted into the quantum state by the operation , inspired by Ref. [36]. In this way, the parameter is not required to be written into QRAM. The process is described as follows.
Assume that it is easy to get the () angle parameters () from the updated after the last iteration. The angles satisfy
(13) |
for , where and . In particular, for . Based on these angles, we define
(14) |
where denotes the gate applied on the corresponding qubits.
After that, a quantum state
(15) |
is generated in time by applying the operation for . Furthermore, a register in state is appended. The overall system is in the state
(16) |
To further interpret this method, an example is given in the appendix B.
According to Eq. (16), the state in Eq. (12) can be rewritten as
(17) |
It means that the above two methods both allow us to extract the parameter information into the quantum state . On the basis of current quantum technology, we choose the second method, which is more feasible, and denote the process as .
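As the exact form of Eq. (13) is not reproduced above, the sketch below computes a standard binary tree of R_y rotation angles whose controlled cascade prepares the normalized parameter vector; it assumes non-negative entries for simplicity (signs require an extra convention), and all names are illustrative.

```python
import numpy as np

def ry_angle_tree(w):
    """Binary tree of R_y angles for preparing |w>/||w||, in the spirit of
    Eq. (13): a node with child norms (a, b) gets the angle theta with
    cos(theta/2) = a/r and sin(theta/2) = b/r, where r = sqrt(a^2 + b^2)."""
    norms = np.abs(np.asarray(w, dtype=float))
    levels = []
    while len(norms) > 1:
        pairs = norms.reshape(-1, 2)
        levels.append(2 * np.arctan2(pairs[:, 1], pairs[:, 0]))
        norms = np.linalg.norm(pairs, axis=1)   # parent-node norms
    return levels[::-1]   # root level first; level l drives l-fold controlled R_y gates

print(ry_angle_tree([1.0, 1.0, 1.0, 1.0]))
# -> [array([1.5708]), array([1.5708, 1.5708])], i.e. a uniform superposition
```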
III-B Local training by quantum parallel computing (QGD algorithm)
Now, we propose a QGD algorithm. It enables clients to estimate the gradient of the model in parallel based on their respective local data. According to Eq. (2), with the help of the two operations and of quantum data preparation, the process of the QGD algorithm is described as follows.
Generate an intermediate quantum state.
The task of computing the gradient involves an inner product computation, which is costly on classical computers in the era of big data. Here, we generate an intermediate state that contains the information of . This state facilitates subsequent parallel estimation.
A quantum state is initialized as
(18) |
The Hadamard gate is performed on the fifth register. Then, a controlled operation is applied to produce a state
(19) |
Subsequently, the Hadamard gate is implemented on the fifth register to get
(20) |
where
(21) |
The state can be rewritten as
(22) |
where . It is easy to verify that
(23) |
and . By observing Eq. (22) and Eq. (23), it can be found that the essential information is provided by the system when its fourth and fifth registers are both in the state . It means that the superposition of does not affect the extraction of the required information when choosing the state of Eq. (17). Thus, the first method (in step ) is also suitable for our algorithm.
Calculate the in parallel.
The approximation of should be estimated and stored in a quantum state. To achieve this goal, quantum phase estimation is applied, in which the unitary operation is defined as
(24) |
where , and . Mathematically, the eigenvalues of are and the corresponding eigenvectors are ), respectively. Based on the set of its eigenvectors, can be rewritten as . The procedure of estimating the is displayed as follows.
Performing the QPE on with the state for some , an approximate state
(25) |
is obtained, where satisfies . Then, the quantum state
(26) |
is generated by using the sine gate. This holds owing to the fact that .
According to Eq. (23), it is needed to access to compute . Combining with the operation and the quantum arithmetic operations [37], we can get
(27) |
An oracle is assumed that can implement any function which has a convergent Taylor series [34]. Combining it with , the function can be implemented (a simple example is described in Sec. V). The state becomes
(28) |
Next, a register in the state is appended as the last register and rotated to in a controlled manner, where . This results in the overall state
(29) |
The inverse operations of steps are performed on . Afterwards, a register in the state is added to obtain
(30) |
For convenience, is marked as the sequence of operations which achieves . Its schematic quantum circuit is given in Fig. 3.

Estimate the gradient with swap test.
Three registers in the state are prepared. is performed on them to generate the state
(31) |
The controlled rotation operation () is implemented to get
(32) |
The inverse operation of is performed. After that, we can obtain the state
(33) |
by uncomputing the register .
In order to obtain the gradient, the swap test technique [38] is utilized. Combining the processes of generating the states and , a quantum state can be constructed. Then, the first register is measured to see whether it is in the state . The measurement succeeds with probability
(34) |
According to Eq. (2), the gradient element can then be calculated. Hence, each client can obtain the local gradient by repeating the steps of the above algorithm on his data.
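As a toy illustration of the swap test used in this step, the Qiskit sketch below estimates the squared overlap of two single-qubit states from P(ancilla = 0) = 1/2 + |&lt;a|b&gt;|^2/2; in the algorithm the same test acts on the multi-qubit states prepared above. The states and shot count are illustrative.

```python
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def swap_test(a, b, shots=20000):
    # Swap test [38]: P(ancilla = 0) = 1/2 + |<a|b>|^2 / 2.
    a = np.asarray(a, dtype=float); a = a / np.linalg.norm(a)
    b = np.asarray(b, dtype=float); b = b / np.linalg.norm(b)
    qc = QuantumCircuit(3, 1)
    qc.initialize(a, [1])      # toy single-qubit input states
    qc.initialize(b, [2])
    qc.h(0)
    qc.cswap(0, 1, 2)
    qc.h(0)
    qc.measure(0, 0)
    sim = AerSimulator()
    counts = sim.run(transpile(qc, sim), shots=shots).result().get_counts()
    return 2 * counts.get("0", 0) / shots - 1   # estimate of |<a|b>|^2

print(swap_test([1, 0], [1, 1]))   # ideal value |<0|+>|^2 = 0.5
```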
III-C Model aggregation with QSMC protocol and update
In this section, we design a protocol to compute the federated gradient securely, i.e., to calculate it without revealing the local gradients. The server Alice is assumed to be semi-honest: she may misbehave on her own but cannot conspire with others. Moreover, the federated gradient is required to be accurate to . This means that . For simplicity, the is marked as , and we suppose that . The further details are described as follows.
Preparation for multi-party quantum communication.
Alice announces and the global dataset scale . At the same time, the participants (the server and the clients) choose numbers () which are pairwise coprime and satisfy . Subsequently, each client calculates his secret
(35) |
Alice produces a -level -particle GHZ state
(36) |
and marks the particles by .
Distribution of quantum pairs.
For the sake of checking for the presence of eavesdroppers, Alice prepares sets of decoy states, where each decoy photon is randomly in one of the states from the set and , where denotes the quantum Fourier transform [39]. These sets are denoted as , respectively. Then Alice inserts into at random positions and sends them to for .
Security checking of quantum channel.
After receiving the particles, each client sends an acknowledgement to Alice. Subsequently, the positions and the bases of the decoy photons are announced by Alice. Each client measures the decoy photons and returns the measurement results to Alice, who then calculates the error rate by comparing the measurement results with the initial states. If the error rate is higher than the threshold determined by the channel noise, Alice cancels this protocol and restarts it. Otherwise, the protocol continues.
Measurement of particles and encoding of transmission information.
Each client extracts all the decoy photons and discards them. Then, the server and the clients perform a measurement on their remaining particles, respectively. The measurement results are recorded as and satisfy . Subsequently, each client encodes his data and sends it to Alice.
Computation of federated gradient by server.
At this stage, Alice accumulates all the results to compute
(37) |
For , Alice can obtain equations such as Eq. (37). According to the Chinese remainder theorem, Alice computes the summation
(38) |
And it is easy to get the federated gradient
(39) |
After similar processes, the whole federated gradient could be obtained by Alice, and she updates the global model parameters. In order to exhibit the process of model aggregation more clearly, a concrete example is presented in Appendix C.
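A small classical sketch of the reconstruction step may help: in the protocol, Alice learns the residues of the (integer-scaled) gradient sum modulo each coprime modulus via Eq. (37); here the residues are computed directly, and the summation of Eq. (38) is recovered with the Chinese remainder theorem. The secrets and moduli are illustrative.

```python
from math import prod

def crt(residues, moduli):
    # Chinese remainder theorem (Eq. (38)): recover x mod prod(moduli)
    # from the residues x mod m_i, for pairwise coprime m_i.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

# Toy aggregation: three clients' gradient elements, scaled to integers.
secrets = [314, 159, 265]
moduli = [7, 11, 13, 15]                        # pairwise coprime; product 15015 > any sum
residues = [sum(secrets) % m for m in moduli]   # what Alice learns, one residue per modulus
assert crt(residues, moduli) == sum(secrets)    # recovers 738 without any single secret
```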
III-D Model evaluation and distribution via classical communication networks
The server (Alice) needs to evaluate whether the model should be further optimized after one round of training in QFLGD. Similar to classical FL, Alice utilizes the magnitude of the gradient to evaluate the model performance. Specifically, the server sends a termination signal and announces the global parameters when . Otherwise, she distributes the updated model parameters to the clients for new training.
IV Analysis
In this section, we provide a brief analysis of the proposed framework. As discussed previously, the QGD algorithm (shown in Sec. III-B) enables clients to accelerate the training gradients on a local quantum computer. The QSMC protocol (shown in Sec. III-C) gives a method to securely update the federated parameters to protect the privacy of clients’ data. Therefore, two main aspects are considered in the analysis. One is the time complexity of local training (the QGD algorithm). The other is the security of model aggregation (the QSMC protocol).
IV-A Time complexity of local training (the QGD algorithm)
In the QFLGD framework, the waiting time for the distributed gradient training is determined by the largest local dataset, since all clients need to accomplish their gradient training before the federated gradient is calculated. In the following, the time complexity of the QGD algorithm is analyzed for a dataset of this scale.
In the data preparation period (Sec. III-A), the time consumption is caused by the processes of and , which generate the states and containing the data information. They can be implemented in time with the help of the and the controlled rotation operation [14, 30]. Here, denotes the number of qubits which store the data information. Afterwards, and are applied to produce the state in step of the local training (Sec. III-B). Hence, step can be implemented in time .
In step , we first consider the complexity of the unitary operation . It contains , , and which take time . Then, the QPE block needs applications of to estimate the within error [35]. Therefore, the time complexity of step is . The runtime [37] of implementing the sine gate can be ignored, which is much smaller than the QPE.
Next, the time complexities of step and step are discussed. The main operations of the two steps include , , and the quantum arithmetic operation, which are performed to calculate in time . In step , the time complexity of the controlled rotation is . Step takes time to implement the inverse operations of steps -. Putting all the steps together, the time complexity of step is .
In step , the processes of generating the (described in steps -) are accomplished in time . According to step , a copy of the quantum state is produced in time . The swap test is applied times to get the result within error in step [40]. And each swap test should prepare a copy of and . Therefore, the runtime is in step , that is the complexity of obtaining the desired result.
For convenience, we assume that , ; then , . Therefore, qubits suffice to store the data information. In addition, we take , , and equal to . After that, the complexity of the entire quantum algorithm to obtain in each iteration can be further simplified to
(40) |
This means that the time complexity of training the gradient is when , achieving exponential acceleration in the number of data samples. Furthermore, the elements of can also be accessed in time if they are written into QRAM in time. In this case, the proposed algorithm achieves exponential acceleration in the number of samples and a quadratic speedup in the dimensionality , compared with the classical algorithm whose runtime is .
IV-B Security analysis of model aggregation (the QSMC protocol)
In this section, the security of model aggregation (the QSMC protocol) is analyzed. For secure multi-party computation, attacks from outside and from the participants are the challenges that have to be dealt with. In the following, we show that these attacks are ineffective against our protocol.
Firstly, the outside attacks are discussed. In this protocol, decoy photons are used to prevent eavesdropping. This idea is derived from the BB84 protocol [4], which has been proven unconditionally secure. Here, we take the intercept-resend attack as an example. If an outside eavesdropper Eve attempts to intercept the particles sent from Alice and replace them with her own fake particles, she will introduce an extra error rate . Therefore, Eve will be detected in step through the security-checking analysis.
Secondly, the participant attacks are analyzed. In the proposed protocol, the participants include the server (Alice) and the clients (, ), who can access more information than outside attackers. Therefore, participant attacks from dishonest clients or the server should be considered.
For the participant attack from dishonest clients, only the extreme case of clients colluding to steal the secret from is considered here, because such colluding clients are the most powerful. In this case, even if the dishonest clients share their information, they cannot deduce without the help of Alice. That means they cannot obtain the secret . Thus, our algorithm can resist the collusion attack of dishonest clients.
For the attack from Alice, the semi-honest Alice may try to steal the private information of the clients without conspiring with anyone. In step (C4), Alice collects for . However, she still cannot learn , because she lacks knowledge of which comes from which client.
V Application: Training the Federated Linear Regression Model
V-A Quantum federated linear regression algorithm
Linear regression (LR) is an important supervised learning algorithm, which builds a model of the relationship between the variables and the observations. It has wide application in scientific fields such as biology and finance [41]. LR models are also usually fitted by minimizing the function in Eq. (1) and choosing ( is a migration, i.e., bias, parameter).
In this section, we apply the QFLGD framework to train the LR model. In the training process, we need to implement the function
(41) |
The state about can be generated according to the QGD. Then, the state is produced in the following steps.
(S1) The oracle is applied on the state to get
(42) |
in time .
(S2) After obtaining and , we implement the on to result in
(43) |
where for .
(S3) Subsequently, the controlled rotation operation () is performed on and (), and we can get
(44) |
where the is defined as and .
(S4) The inverse of is applied on the register , and the state becomes
(45) |
Thus, the state can be obtained from the register A. Similarly, we can implement addition by changing the operation of step (S3) to . Thus, the state could be obtained, and its quantum circuit is presented in Fig. 4. The operations of these processes are labeled as , and are implemented in time . Combined with the QFLGD framework, the quantum federated linear regression (QFLR) model can be constructed by Algorithm 1.
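As a classical reference for the quantity Algorithm 1 estimates per client, the following minimal sketch evaluates the MSE gradient of a linear model with a migration (bias) parameter, assuming the identity activation; the data and function names are illustrative.

```python
import numpy as np

def flr_local_gradient(X, y, theta, b):
    # MSE gradient for the linear model y_hat = theta . x + b (cf. Eq. (41)):
    # the residual theta . x_i + b - y_i drives both parameter updates.
    r = X @ theta + b - y
    return X.T @ r / len(y), r.mean()   # gradients w.r.t. theta and w.r.t. b

X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([1.0, 2.0])
print(flr_local_gradient(X, y, np.zeros(2), 0.0))
# -> (array([-3.5, -5. ]), -1.5)
```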

V-B Numerical Simulation
In this section, the numerical simulation of the QFLR algorithm is presented. In our simulation, two clients (, ) trained the QFLR model with a server (Alice). The experiment is implemented on the IBM Qiskit simulator. The initial weight and the migration parameter are selected by Alice. One client chooses an input vector corresponding to the observation . Another client selects an input vector and the corresponding observation .
In the process of training the federated linear regression model, the main task is to calculate accurately. That is, the quantum computed values are required to be stored in quantum registers with small error. An experiment on this step is presented with the data of . For convenience, we set , , and the error of the quantum phase estimation to . Substituting these into Eq. (41), the result can be obtained. It can also be computed by
(46) |
according to Eq. (23).
With the fact of and the most probable result (see Fig. 6(a)) from the QPE, Eq. (46) can be rewritten as
(47) |
The corresponding circuit is designed and encoded via Qiskit (see Fig. 5). In Fig. 5(e), the matrix form of is
(48) |
With the help of IBM's simulator (aersimulator), the measurement results shown in Fig. 6(b) can be obtained. In Fig. 6(b), two values ( and ) stand out with a much higher measurement probability than the rest. Based on the analysis of the phase estimation results, we select the result with the high probability of . It means . Compared with the theoretical result (shown in Eq. (46)), the experimental result has an error of , which is tolerable. Subsequently, the client can estimate and by performing the swap test. At the same time, another client estimates , , and of his data via a similar experiment.
Following a process analogous to the example shown in Appendix C, Alice calculates the federated gradient via the QSMC protocol. Theoretical analysis shows that the error is within of the actual solution obtained in the example. Thus, the training algorithm is successful.
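The exact circuits of Fig. 5 are not reproduced here; as a minimal, self-contained stand-in, the sketch below runs quantum phase estimation on Qiskit's AerSimulator and reads off the most probable outcome as the phase estimate, mirroring the readout described above. The phase value 1/3 and the four counting qubits are assumptions for illustration.

```python
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator
from qiskit.circuit.library import QFT

phi, n = 1 / 3, 4                        # illustrative phase; n counting qubits
qc = QuantumCircuit(n + 1, n)
qc.x(n)                                  # eigenstate |1> of the phase gate
qc.h(range(n))
for j in range(n):
    qc.cp(2 * np.pi * phi * 2**j, j, n)  # controlled-U^(2^j)
qc.append(QFT(n, inverse=True), range(n))
qc.measure(range(n), range(n))

sim = AerSimulator()
counts = sim.run(transpile(qc, sim), shots=4096).result().get_counts()
best = max(counts, key=counts.get)       # the most probable outcome, as read from Fig. 6
print(int(best, 2) / 2**n)               # 0.3125 = 5/16, close to 1/3
```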
VI Conclusions
This work focuses on the design of QFLGD for distributed quantum networks, which can securely implement FL over an exponentially large dataset. We first gave two methods of quantum data preparation, which can extract static data information and dynamic parameter information into logarithmically many qubits. Then, we put forward the QGD algorithm to allow the time-consuming gradient calculation to be performed on a quantum computer. In this way, the clients can estimate the required results of gradient training in parallel based on quantum superposition. The time complexity analysis shows that our algorithm is exponentially faster than its classical counterpart in the number of data samples when the error . Furthermore, the QGD algorithm can also achieve a quadratic speedup in the dimensionality of the data samples if the parameters are stored in QRAM in time. In addition, we proposed a QSMC protocol to calculate the federated gradient securely. We demonstrated that the proposed protocol can resist common outside and participant attacks, such as the intercept-resend attack. Finally, we showed how to apply the framework to train a federated linear regression model and simulated the essential steps with the help of the IBM Qiskit simulator. The results also showed the effectiveness of QFLGD. In summary, the presented framework demonstrates the intriguing potential of achieving large-scale private distributed learning with quantum technologies and provides a valuable guide for exploring quantum advantages in real-life machine learning applications from the security perspective.
We hope the proposed framework can further be realized on a quantum platform with the gradual maturity of quantum technology. For example, how to implement the whole QFLGD process on the noisy intermediate-scale quantum (NISQ) devices is worth further exploration, and we will make more efforts.
Acknowledgments
This work was supported by National Natural Science Foundation of China (Grants No. 62171131, 61976053, and 61772134), Fujian Province Natural Science Foundation (Grant No. 2022J01186 and 2023J01533), and Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302901).
Appendix A Implement the Unitary Operation
In this appendix, we describe the implementation of a unitary operation , which generates a state encoding the -norm of . Its steps are as follows.
(1) A quantum state is initialized as
(49) |
(2) The oracle is performed to obtain
(50) |
(3) A register in the state is appended as the last register and rotated to . After that, the system becomes
(51) |
where . We can observe the ancilla register in the state with probability . The state can be rewritten as
(52) |
where
(53) |
and
(54) |
(4) Appending a register in state . Then, the quantum phase estimation of is performed to obtain
(55) |
with the help of the square root circuit [42]. Here, denotes a tolerance error of the QPE, , and .
(5) The inverse operations of steps (2)-(3) are applied to generate the state
(56) |
Therefore, the could be implemented by the above steps, and its running time is mainly determined by the quantum phase estimation in step (4), which takes time . Moreover, could be estimated similarly.
Appendix B An example of extracting
the parameter information
In Sec. III-A, a way to prepare a quantum state of without the help of QRAM is shown in step . To further demonstrate it, an example is given in this appendix.
For convenience, suppose that . Then, we can get angle parameters , , and , which satisfy
(57) |
according to Eq. (13). The values of () are shown in Fig. 7, such as for .

Then, the operations are defined as
(58) |
based on . It is easy to verify that
(59) |
Thus, the quantum state of can be obtained.
Appendix C An example of the model aggregation
In this appendix, an example is presented to exhibit the model aggregation. Consider the model trained by two clients (, ), who respectively hold a -scale dataset, with the help of a server (Alice). The gradients , , , and are assumed to have been obtained by the QGD algorithm. For simplicity, the eavesdropping-check phase is ignored.
Firstly, Alice announces that the accuracy of the parameters is and that the global dataset scale is . She chooses and together with the clients. After that, each client calculates his secret
(60) |
, , and . At the same time, can get , , , and .
Secondly, Alice prepares a -level -particle GHZ state and gives a particle to each client respectively. Then these participants perform the measurement to get , , and . Each client () encodes his secret by using () and sends it to Alice. The result
(61) |
could be computed by Alice.
Finally, the equations
(62) |
and
(63) |
could be obtained through a similar procedure. According to the Chinese remainder theorem, the federated gradient (3.5, 6.06) is easily obtained.
References
- [1] Q. Jia, L. Guo, Y. Fang, and G. Wang, “Efficient privacy-preserving machine learning in hierarchical distributed system,” IEEE Transactions on Network Science and Engineering, vol. 6, pp. 599–612, 2019.
- [2] B. Gu, A. Xu, Z. Huo, C. Deng, and H. Huang, “Privacy-preserving asynchronous vertical federated learning algorithms for multiparty collaborative learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, pp. 6103–6115, 2022.
- [3] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, 2017, pp. 1273–1282.
- [4] C. H. Bennett and G. Brassard, “Quantum cryptography: Public key distribution and coin tossing,” in Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing. IEEE New York, 1984, pp. 175–179.
- [5] N. Gisin, G. Ribordy, W. Tittel, and H. Zbinden, “Quantum cryptography,” Reviews of Modern Physics, vol. 74, pp. 145–195, 2002.
- [6] V. Scarani, H. Bechmann-Pasquinucci, N. J. Cerf, M. Dušek, N. Lütkenhaus, and M. Peev, “The security of practical quantum key distribution,” Reviews of Modern Physics, vol. 81, p. 1301, 2009.
- [7] R. Schwonnek, K. T. Goh, I. W. Primaatmaja, E. Y.-Z. Tan, R. Wolf, V. Scarani, and C. C.-W. Lim, “Device-independent quantum key distribution with random key basis,” Nature Communications, vol. 12, pp. 1–8, 2021.
- [8] M. Hillery, V. Bužek, and A. Berthiaume, “Quantum secret sharing,” Physical Review A, vol. 59, pp. 1829–1834, 1999.
- [9] A. Karlsson, M. Koashi, and N. Imoto, “Quantum entanglement for secret sharing and secret splitting,” Physical Review A, vol. 59, pp. 162–168, 1999.
- [10] R. Cleve, D. Gottesman, and H.-K. Lo, “How to share a quantum secret,” Physical Review Letters, vol. 83, pp. 648–651, 1999.
- [11] K. Boström and T. Felbinger, “Deterministic secure direct communication using entanglement,” Physical Review Letters, vol. 89, p. 187902, 2002.
- [12] F. G. Deng, G. L. Long, and X. S. Liu, “Two-step quantum direct communication protocol using the Einstein-Podolsky-Rosen pair block,” Physical Review A, vol. 68, p. 042317, 2003.
- [13] P. W. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,” SIAM Review, vol. 41, pp. 303–332, 1999.
- [14] A. W. Harrow, A. Hassidim, and S. Lloyd, “Quantum algorithm for linear systems of equations,” Physical Review Letters, vol. 103, p. 150502, 2009.
- [15] L.-C. Wan, C.-H. Yu, S.-J. Pan, S.-J. Qin, F. Gao, and Q.-Y. Wen, “Block-encoding-based quantum algorithm for linear systems with displacement structures,” Physical Review A, vol. 104, p. 062414, 2021.
- [16] H.-L. Liu, L.-C. Wan, C.-H. Yu, S.-J. Pan, S.-J. Qin, F. Gao, and Q.-Y. Wen, “A quantum algorithm for solving eigenproblem of the Laplacian matrix of a fully connected weighted graph,” Advanced Quantum Technologies, vol. 6, p. 2300031, 2023. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/qute.202300031
- [17] C.-H. Yu, F. Gao, and Q.-Y. Wen, “An improved quantum algorithm for ridge regression,” IEEE Transactions on Knowledge and Data Engineering, vol. 33, pp. 858–866, 2021.
- [18] M.-H. Chen, C.-H. Yu, J.-L. Gao, K. Yu, S. Lin, G.-D. Guo, and J. Li, “Quantum algorithm for Gaussian process regression,” Physical Review A, vol. 106, p. 012406, 2022.
- [19] F. Scala, A. Ceschini, M. Panella, and D. Gerace, “A general approach to dropout in quantum neural networks,” Advanced Quantum Technologies, p. 2300220, 2023. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/qute.202300220
- [20] Y.-D. Wu, Y. Zhu, G. Bai, Y. Wang, and G. Chiribella, “Quantum similarity testing with convolutional neural networks,” Physical Review Letters, vol. 130, p. 210601, May 2023.
- [21] R. LaRose, A. Tikku, É. O’Neel-Judy, L. Cincio, and P. J. Coles, “Variational quantum state diagonalization,” npj Quantum Information, vol. 5, p. 57, 2019.
- [22] H.-L. Liu, Y.-S. Wu, L.-C. Wan, S.-J. Pan, S.-J. Qin, F. Gao, and Q.-Y. Wen, “Variational quantum algorithm for the poisson equation,” Physical Review A, vol. 104, p. 022418, 2021.
- [23] S.-X. Zhang, Z.-Q. Wan, C.-Y. Hsieh, H. Yao, and S. Zhang, “Variational quantum-neural hybrid error mitigation,” Advanced Quantum Technologies, vol. 6, p. 2300147, 2023. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/qute.202300147
- [24] W. Li, S. Lu, and D. L. Deng, “Quantum federated learning through blind quantum computing,” Science China Physics, Mechanics &amp; Astronomy, vol. 64, pp. 1–8, 2021.
- [25] C. Ren, R. Yan, M. Xu, H. Yu, Y. Xu, D. Niyato, and Z. Y. Dong, “Qfdsa: A quantum-secured federated learning system for smart grid dynamic security assessment,” IEEE Internet of Things Journal, vol. 11, pp. 8414–8426, 2024.
- [26] S. Y.-C. Chen and S. Yoo, “Federated quantum machine learning,” Entropy, vol. 23, p. 460, 2021.
- [27] R. Huang, X. Tan, and Q. Xu, “Quantum federated learning with decentralized data,” IEEE Journal of Selected Topics in Quantum Electronics, vol. 28, pp. 1–10, 2022.
- [28] S. Wang, L. Huang, Y. Nie, X. Zhang, P. Wang, H. Xu, and W. Yang, “Local differential private data aggregation for discrete distribution estimation,” IEEE Transactions on Parallel and Distributed Systems, vol. 30, pp. 2046–2059, 2019.
- [29] R. Xue, K. Xue, B. Zhu, X. Luo, T. Zhang, Q. Sun, and J. Lu, “Differentially private federated learning with an adaptive noise mechanism,” IEEE Transactions on Information Forensics and Security, vol. 19, pp. 74–87, 2024.
- [30] L. Wossnig, Z. Zhao, and A. Prakash, “Quantum linear system algorithm for dense matrices,” Physical Review Letters, vol. 120, p. 050502, 2018.
- [31] V. Giovannetti, S. Lloyd, and L. Maccone, “Quantum random access memory,” Physical Review Letters, vol. 100, p. 160501, 2008.
- [32] I. Kerenidis and A. Prakash, “Quantum recommendation systems,” arXiv preprint arXiv:1603.08675, 2016.
- [33] K. Mitarai, M. Kitagawa, and K. Fujii, “Quantum analog-digital conversion,” Physical Review A, vol. 99, p. 012301, 2019.
- [34] I. Cong and L. Duan, “Quantum discriminant analysis for dimensionality reduction and classification,” New Journal of Physics, vol. 18, p. 073011, 2016.
- [35] G. Brassard, P. Høyer, M. Mosca, and A. Tapp, “Quantum amplitude amplification and estimation,” Contemporary Mathematics, vol. 305, pp. 53–74, 2002.
- [36] C. P. Shao, “Fast variational quantum algorithms for training neural networks and solving convex optimizations,” Physical Review A, vol. 99, p. 042325, 2019.
- [37] S. S. Zhou, T. Loke, J. A. Izaac, and J. Wang, “Quantum Fourier transform in computational basis,” Quantum Information Processing, vol. 16, pp. 1–19, 2017.
- [38] H. Buhrman, R. Cleve, J. Watrous, and R. De Wolf, “Quantum fingerprinting,” Physical Review Letters, vol. 87, p. 167902, 2001.
- [39] M. A. Nielsen and I. L. Chuang, Quantum computation and quantum information. Cambridge University Press, 2010.
- [40] P. Rebentrost, M. Mohseni, and S. Lloyd, “Quantum support vector machine for big data classification,” Physical Review Letters, vol. 113, p. 130503, 2014.
- [41] A. Géron, Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow. O’Reilly Media, Inc., 2022.
- [42] M. K. Bhaskar, S. Hadfield, A. Papageorgiou, and I. Petras, “Quantum algorithms and circuits for scientific computing,” arXiv preprint arXiv:1511.08253, 2015.