[email protected]
School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China.
College of Information Science and Engineering, ZaoZhuang University, ZaoZhuang Shandong 277160, China.
School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China.
Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China.
Guizhou Provincial Key Laboratory of Public Big Data, Guizhou University, Guiyang, Guizhou 550025, China.
Quantum adversarial metric learning model based on triplet loss function
Abstract
Metric learning plays an essential role in image analysis and classification, and it has attracted increasing attention. In this paper, we propose a quantum adversarial metric learning (QAML) model based on the triplet loss function, where samples are embedded into a high-dimensional Hilbert space and the optimal metric is obtained by minimizing the triplet loss function. The QAML model employs entanglement and interference to build superposition states for triplet samples, so that only one parameterized quantum circuit is needed to calculate the sample distances, which reduces the demand for quantum resources. Since the QAML model is vulnerable to adversarial attacks, an adversarial sample generation strategy is designed based on the quantum gradient ascent method, effectively improving the robustness against functional adversarial attacks. Simulation results show that the QAML model can effectively distinguish samples of the MNIST and Iris datasets and achieves higher ε-robust accuracy than the general quantum metric learning model. Metric learning is a fundamental research problem of machine learning. As a subroutine of classification and clustering tasks, the QAML model opens an avenue for exploring quantum advantages in machine learning.
Keywords:
Metric learning, hybrid quantum-classical algorithm, quantum machine learning
1 Introduction
Machine learning has developed rapidly in recent years and is widely used in artificial intelligence and big data fields. Quantum computing can efficiently process data in an exponentially large Hilbert space and is expected to achieve dramatic speedups in solving some classically hard computational problems. Quantum machine learning, as the interplay between machine learning and quantum physics, brings unprecedented promise to both disciplines. On the one hand, machine learning methods have been extended to the quantum world and applied to data analysis in quantum physics [1]. On the other hand, quantum machine learning exploits quantum properties, such as entanglement and superposition, to revolutionize classical machine learning algorithms and achieve computational advantages over classical algorithms [2]. Metric learning is a core component of many machine learning tasks [3], such as k-nearest neighbors, support vector machines, radial basis function networks, and k-means clustering. Its central task is to construct an appropriate distance metric that maximizes the similarity between samples of the same class and minimizes the similarity between samples from different classes. Both linear and nonlinear methods can be used to implement metric learning. In general, linear models have a limited number of parameters and are unsuitable for characterizing high-order features of samples. Recently, neural networks have been adopted to establish nonlinear metric learning models, and promising results have been achieved in face recognition and feature matching.
Classical metric learning models usually extract low-dimensional representations of samples, which lose some details of the samples. Quantum states live in high-dimensional Hilbert spaces whose dimensions grow exponentially with the number of qubits. This property enables quantum models to learn high-dimensional representations of samples without explicitly invoking a kernel function. As the dimension increases, this advantage becomes more pronounced and is expected to yield exponential growth in computing speed. In recent years, researchers have begun to study how to adopt quantum methods to implement metric learning. Lloyd et al. [4] first proposed a quantum metric learning model based on hybrid quantum-classical algorithms. A parameterized quantum circuit is used to map samples into a high-dimensional Hilbert space, and the optimal metric model is obtained by optimizing a loss function based on Hilbert-Schmidt distances. This model achieves good results in classification tasks. Nghiem et al. [5] introduced quantum explicit and implicit metric learning approaches from the perspective of whether the target space is known or not, and established the relationship between quantum metric learning and other quantum supervised learning models. The above two algorithms mainly focus on classification tasks. Metric learning is a fundamental problem in machine learning, which can be applied not only to classification but also to clustering, face recognition, and other problems. In our research, we are devoted to constructing a quantum metric learning model that can serve various machine learning tasks.
Angular distance is a vital metric that quantifies the included angle between normalized samples [6]. It focuses on the difference in the direction of samples and is more robust to variations of local features [7, 8]. Considering the similarity between angular distances of classical data and inner products of quantum states, we design a quantum adversarial metric learning (QAML) model based on inner-product distances, which is more suitable for image-related tasks. Unlike other quantum metric learning models, the QAML model maps samples from different classes into quantum superposition states and utilizes simple interference circuits to compute metric distances for multiple sample pairs in parallel. Furthermore, quantum systems in high-dimensional Hilbert space have counter-intuitive geometrical properties [9]. A QAML model trained only on natural samples is vulnerable to adversarial attacks, under which some samples move closer to the false class, so the model easily makes wrong judgements [10]. To solve this issue, we construct adversarial samples based on natural samples. The model's robustness is improved by alternately training on natural and adversarial samples. Our work has two main contributions: (i) We explore a quantum method to compute the triplet loss function, which utilizes quantum superposition states to calculate sample distances in parallel and reduces the demand for quantum resources. (ii) We design an adversarial sample generation strategy based on quantum gradient ascent, and the robustness of the QAML model is significantly improved by alternately training on the generated adversarial samples and natural samples. Simulation results show that the QAML model separates samples by a larger margin and has better robustness against functional adversarial attacks than general quantum metric learning models.
The paper is organized as follows. Section 2 presents the basic method of the QAML model. Section 3 verifies the performance of the QAML model. Finally, we conclude and discuss future research directions.
2 Quantum adversarial metric learning
2.1 Preliminary theory
The triplet loss function is a widely used strategy for metric learning [11], commonly used in image retrieval and face recognition. A triplet set $(x^a, x^p, x^n)$ consists of three samples from two classes, where the anchor sample $x^a$ and the positive sample $x^p$ belong to the same class, and the negative sample $x^n$ comes from another class. The goal of metric learning based on the triplet loss function is to find the optimal embedded representation space, in which positive sample pairs are pulled together and negative sample pairs are pushed apart. Fig. 1 shows how the sample space changes during the metric learning process. As we can see, samples from different classes become linearly separable through metric learning. Fig. 2 shows the schematic of the metric learning model based on the triplet loss function. Firstly, the model prepares multiple triplet sets, and one triplet set is sent to convolutional neural networks (CNNs), where three CNNs with the same structure and parameters are needed. Each CNN acts on one sample of the triplet set to extract its features. The triplet loss function is obtained by computing metric distances for the sample pairs of the triplet sets. In the learning process, the optimal parameters of the CNNs are obtained by minimizing the triplet loss function.


Let one batch of samples include $N$ triplet sets $\{(x_i^a, x_i^p, x_i^n)\}_{i=1}^{N}$. The triplet loss function is

$$L = \sum_{i=1}^{N} \max\left\{ D\big(f(x_i^a), f(x_i^p)\big) - D\big(f(x_i^a), f(x_i^n)\big) + \mu,\; 0 \right\}, \tag{1}$$

where $f(\cdot)$ represents the function mapping input samples to the embedded representation space, $D(\cdot,\cdot)$ denotes the distance between a sample pair in the embedded representation space, and $\max\{\cdot, 0\}$ represents the hinge loss function. The goal of metric learning is to learn a metric that makes the distances between negative sample pairs greater than the distances between the corresponding positive sample pairs by at least the specified margin $\mu$ [6]. In the triplet loss function, the term $D\big(f(x_i^a), f(x_i^p)\big)$ penalizes a positive sample pair that is too far apart, and the term $-D\big(f(x_i^a), f(x_i^n)\big)$ penalizes a negative sample pair whose distance is less than the margin $\mu$.
Metric learning can adopt various distance metrics. The angular distance metric is robust to image illumination and contrast variations [7], which makes it an efficient choice for metric learning tasks. In this method, samples need to be normalized to unit vectors in advance. The distance between a positive sample pair is

$$D\big(f(x^a), f(x^p)\big) = 1 - \frac{\big\langle f(x^a), f(x^p) \big\rangle}{\big\| f(x^a) \big\|_2 \, \big\| f(x^p) \big\|_2}, \tag{2}$$

where $\| f(x^a) \|_2$ and $\| f(x^p) \|_2$ represent the $\ell_2$-norms of $f(x^a)$ and $f(x^p)$, respectively, and $\langle \cdot, \cdot \rangle$ denotes the inner product of two vectors. The distance between a negative sample pair can be calculated in the same way.
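As a concrete illustration, the following sketch evaluates the inner-product (angular) distance of Eq. (2) and the batch triplet loss of Eq. (1) for classical vectors with NumPy; the margin value and the toy data are illustrative assumptions, not part of the model.

```python
import numpy as np

def angular_distance(u, v):
    # Eq. (2): one minus the normalized inner product of two vectors
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def triplet_loss(anchors, positives, negatives, margin=0.2):
    # Eq. (1): hinge loss accumulated over a batch of triplet sets
    loss = 0.0
    for a, p, n in zip(anchors, positives, negatives):
        d_pos = angular_distance(a, p)  # pull positive pairs together
        d_neg = angular_distance(a, n)  # push negative pairs apart
        loss += max(d_pos - d_neg + margin, 0.0)
    return loss

# toy usage with three 2-dimensional triplets
rng = np.random.default_rng(0)
anchors, positives, negatives = rng.normal(size=(3, 3, 2))
print(triplet_loss(anchors, positives, negatives))
```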
2.2 Framework of quantum metric learning model
For most machine learning tasks, it is often challenging to adopt simple linear functions to distinguish samples of different classes. According to kernel theory [12], samples in a high-dimensional feature space have better distinguishability. Classical machine learning algorithms usually adopt kernel methods to map samples to a high-dimensional feature space, where the mapped samples can be separated by simple linear functions. Quantum states of $n$ qubits live in a $2^n$-dimensional Hilbert space, where quantum systems characterize the nonlinear features of data and efficiently process data through a series of linear unitary operations.
In the QAML model, samples are first mapped into quantum systems by qubit encoding. The Hilbert space after encoding usually does not correspond to the optimal space for separating samples of different classes. To search for the optimal Hilbert space, the QAML model performs a parameterized quantum circuit $U(\theta)$ on the encoded states [13]. As different variable parameters $\theta$ correspond to different mapping spaces, we can search for the optimal space by modifying the parameters $\theta$. As long as $U(\theta)$ has strong expressivity, we can find the optimal Hilbert space by optimizing the loss function of metric learning [14, 15]. Circuits $U(\theta)$ with different structures and numbers of layers have different expressivity. The more layers $U(\theta)$ has, the stronger its expressivity, and the easier it is to find the optimal metric space.
The classical metric learning model based on the triplet loss function requires three identical CNNs to map triplet sets into the novel Hilbert space. To reduce the demand for quantum resources, we construct a quantum superposition state to represent one triplet set, so that a triplet set only needs one circuit $U(\theta)$ to transform it into the new Hilbert space. The core step in building the loss function is to compute inner products between sample pairs, but $U(\theta)$ and the subsequent conjugate operation $U^\dagger(\theta)$ counteract each other's effects in this computation. To solve this issue, we add a repeated encoding operation after $U(\theta)$. It is worth mentioning that the repeated encoding operation is also conducive to the construction of high-dimensional features of samples.
The QAML model is mathematically represented as the minimization of the loss function with respect to the parameters $\theta$. The triplet loss function consists of metric distances for positive and negative sample pairs, so the kernel work of the QAML model is constructing the metric distances for sample pairs in the transformed Hilbert space. The mapped samples $f(x^a)$ and $f(x^p)$ of Eq. (2) are replaced by the quantum states of $x^a$ and $x^p$, and the second term of Eq. (2) is converted to the inner product between the quantum states of the positive sample pair, which can be obtained by the Hadamard classifier method [12]. The triplet loss function can be viewed as the weighted sum of the inner products of positive sample pairs and the inner products of negative sample pairs. With the help of ancilla registers, the triplet set can be prepared in superposition-state form. According to the entanglement property of superposition states, the triplet loss function can be implemented with one parameterized quantum circuit. Then, the triplet loss function value is transmitted to a classical optimizer, and the parameters $\theta$ are optimized until the optimal metric is obtained. The QAML model constructs adversarial samples according to the gradients of natural samples and alternately trains on natural and adversarial samples to improve the model's robustness against adversarial attacks. The schematic of the QAML model is shown in Fig. 3.

2.3 Quantum embedding
In the QAML model, classical samples are first mapped into quantum states by qubit encoding, where each element is encoded as a Pauli rotation angle of one qubit. The number of qubits required by qubit encoding is equal to the dimension of the input sample. Still, the dimension of the quantum state grows exponentially with the input dimension, and $N$-dimensional samples are mapped into a $2^N$-dimensional Hilbert space. Qubit encoding cannot represent classical samples with a number of qubits logarithmic in the input dimension. However, easy state preparation and low circuit depth make qubit encoding more suitable for implementation on near-term quantum devices.
Samples in practical applications are usually in real space. Applying $R_x$ and $R_z$ rotations to quantum states would introduce imaginary terms, so the QAML model adopts $R_y$ rotations to prepare the initial mapped states, where the classical samples determine the rotation angles of the qubits. Let $x_j$ denote the $j$th element of the sample $x$, scaled to an appropriate range; its corresponding qubit encoding is
$$|x_j\rangle = R_y(x_j)|0\rangle = \cos\frac{x_j}{2}|0\rangle + \sin\frac{x_j}{2}|1\rangle. \tag{3}$$
Then, the qubit encoding of the $N$-dimensional sample $x$ corresponds to the tensor product state
$$|x\rangle = \bigotimes_{j=1}^{N} |x_j\rangle = \bigotimes_{j=1}^{N}\left(\cos\frac{x_j}{2}|0\rangle + \sin\frac{x_j}{2}|1\rangle\right). \tag{4}$$
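A minimal PennyLane sketch of this $R_y$ qubit encoding is given below; the qubit count and the sample values are illustrative.

```python
import numpy as np
import pennylane as qml

n_qubits = 4  # one qubit per feature
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def encode(x):
    # Eqs. (3)-(4): each scaled feature x_j is the RY rotation angle of one qubit
    for j in range(n_qubits):
        qml.RY(x[j], wires=j)
    return qml.state()

x = np.array([0.3, 1.1, 0.7, 2.0])  # a hypothetical sample, already scaled
print(encode(x).shape)              # (16,) -- the 2**4-dimensional statevector
```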
In the QAML model, the parameterized quantum circuit is responsible for transforming the Hilbert space of samples. The variable parameters $\theta$ are continuously optimized in iterations to obtain the optimal Hilbert space for separating samples of different classes. The parameterized quantum circuit, also called an ansatz, generally adopts a multi-layer circuit structure, where each layer contains a series of unitary operations depending on variable parameters. The ansatz can embed samples into Hilbert spaces that classical metric learning methods cannot represent. The hardware-efficient ansatz, one of the common ansatzes, has strong expressivity with few layers [16], and it is widely applied on Noisy Intermediate-Scale Quantum (NISQ) devices. The hardware-efficient ansatz adopts a layered circuit layout [17], where each layer consists of interleaved two-qubit unitary modules. Let $U_{i,i+1}^{(l)}$ denote the unitary module acting on the neighboring qubit pair $(i, i+1)$ in the $l$th layer. The unitary operation of the $l$th layer can be written as
$$U_l(\theta_l) = \prod_{(i,i+1)\in S_{\mathrm{even}}} U_{i,i+1}^{(l)} \prod_{(i,i+1)\in S_{\mathrm{odd}}} U_{i,i+1}^{(l)}, \tag{5}$$

where $S_{\mathrm{odd}}$ and $S_{\mathrm{even}}$ represent the odd and even subsets of neighboring qubit pairs. For an $L$-layer structure, the ansatz can be written as $U(\theta) = U_L(\theta_L)\cdots U_2(\theta_2)U_1(\theta_1)$.
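The sketch below shows one possible hardware-efficient layer of the kind described above (parameterized single-qubit rotations followed by entangling modules on odd and even neighboring pairs); the concrete gate choices ($R_y$ rotations and CZ entanglers) are assumptions, since the paper does not fix them here.

```python
import pennylane as qml

def hardware_efficient_layer(thetas, wires):
    # parameterized single-qubit rotations
    for theta, w in zip(thetas, wires):
        qml.RY(theta, wires=w)
    # entangling modules on "odd" neighboring pairs, then on "even" pairs (Eq. (5))
    for i in range(0, len(wires) - 1, 2):
        qml.CZ(wires=[wires[i], wires[i + 1]])
    for i in range(1, len(wires) - 1, 2):
        qml.CZ(wires=[wires[i], wires[i + 1]])

def ansatz(params, wires):
    # params has shape (n_layers, n_qubits); layers are applied in sequence
    for thetas in params:
        hardware_efficient_layer(thetas, list(wires))
```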
The dimension of the mapped quantum state is exponential in the input dimension. As the input dimension increases, the dimension of the mapped quantum states becomes much larger than the input dimension. In some machine learning tasks, the QAML model may be expected to have a smaller output dimension to facilitate the execution of subsequent subroutines, so the QAML model needs to add some unitary modules to adjust the output dimension. A basic strategy is to add a dimension reduction operation following the repeated encoding layer to reduce the output dimension [1]. The dimension reduction operation is shown in Fig. 3(b). Firstly, alternating two-qubit unitary modules act on neighboring qubits to entangle the mapped features. Then, one qubit of each module is measured, and the measurement result is used to control a unitary operation acting on the other qubit; the product of these measurement-controlled modules constitutes the dimension reduction operation of one layer. By stacking several such layers, the output state can be reduced to the desired lower-dimensional Hilbert space.
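A possible PennyLane realization of one reduction layer, using mid-circuit measurements and measurement-conditioned rotations, is sketched below; the specific gates (a CRY entangler followed by a conditioned RY) are illustrative assumptions.

```python
import pennylane as qml

def reduction_layer(params, wires):
    # entangle each neighboring pair, measure the first qubit of the pair, and
    # rotate the second qubit conditioned on the outcome; only the second survives
    kept = []
    for i in range(0, len(wires) - 1, 2):
        a, b = wires[i], wires[i + 1]
        qml.CRY(params[i], wires=[a, b])             # two-qubit entangling module
        m = qml.measure(a)                           # mid-circuit measurement
        qml.cond(m, qml.RY)(params[i + 1], wires=b)  # measurement-controlled unitary
        kept.append(b)
    return kept  # the surviving wires span the reduced Hilbert space
```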
Classical metric learning based on the triplet loss function needs three identical CNNs to extract the features of the triplet set $(x^a, x^p, x^n)$. To reduce the requirement on parameterized quantum circuits, the QAML model encodes the triplet set on a two-qubit basis and then interferes the positive and negative sample pairs by a Hadamard gate. The inner products for the positive and negative sample pairs are obtained in parallel by measuring the expectations of observables with respect to the two ancilla qubits. Let $|x^a\rangle$, $|x^p\rangle$, and $|x^n\rangle$ represent the states of the anchor sample $x^a$, the positive sample $x^p$, and the negative sample $x^n$, respectively. Firstly, the QAML model prepares the superposition state

$$|\psi_0\rangle = \frac{1}{2}\Big( |0\rangle_1 \big( |0\rangle_2 |x^a\rangle_s + |1\rangle_2 |x^p\rangle_s \big) + |1\rangle_1 \big( |0\rangle_2 |x^a\rangle_s + |1\rangle_2 |x^n\rangle_s \big) \Big) \tag{6}$$
for the triplet set $(x^a, x^p, x^n)$, where $s$ denotes the sample register, and 1 and 2 denote the ancilla registers spanning the basis states. Metric learning based on the triplet loss function requires a specific margin between samples of different classes. To construct the margin, we replace $|x^p\rangle$ with
(7) |
and $|x^n\rangle$ with
(8) |
where the parameter appearing in Eqs. (7) and (8) determines the margin. The encoded states $|x^a\rangle$, $|x^p\rangle$, and $|x^n\rangle$ may not lie in the optimal Hilbert space for separating samples of different classes. Then, the parameterized quantum circuit $I_1 \otimes I_2 \otimes U(\theta)$ acts on $|\psi_0\rangle$, where $I_1$ and $I_2$ denote the identity operations acting on ancilla registers 1 and 2, and $U(\theta)$ represents the ansatz acting on the sample register $s$. The system evolves into the state
$$|\psi_1\rangle = \frac{1}{2}\Big( |0\rangle_1 \big( |0\rangle_2 |\phi^a\rangle_s + |1\rangle_2 |\phi^p\rangle_s \big) + |1\rangle_1 \big( |0\rangle_2 |\phi^a\rangle_s + |1\rangle_2 |\phi^n\rangle_s \big) \Big), \tag{9}$$

where $|\phi^a\rangle = U(\theta)|x^a\rangle$, and $|\phi^p\rangle$ and $|\phi^n\rangle$ are obtained by applying $U(\theta)$ to the margin-adjusted states of Eqs. (7) and (8), respectively.
Since $U^\dagger(\theta)U(\theta) = I$, the inner product between the state pairs $(|\phi^a\rangle, |\phi^p\rangle)$ or $(|\phi^a\rangle, |\phi^n\rangle)$ causes $U(\theta)$ and its conjugate $U^\dagger(\theta)$ to counteract each other's effects. An effective strategy is to perform the repeated encoding operation on $|\psi_1\rangle$, which not only solves the problem of the unitary operation and its conjugate counteracting each other in the inner product calculation but also extends the addressable Hilbert space. After the repeated encoding operation, the system reaches the state
$$|\psi_2\rangle = \frac{1}{2}\Big( |0\rangle_1 \big( |0\rangle_2 |\varphi^a\rangle_s + |1\rangle_2 |\varphi^p\rangle_s \big) + |1\rangle_1 \big( |0\rangle_2 |\varphi^a\rangle_s + |1\rangle_2 |\varphi^n\rangle_s \big) \Big), \tag{10}$$

where $|\varphi^a\rangle$, $|\varphi^p\rangle$, and $|\varphi^n\rangle$ denote the branch states of the anchor, positive, and negative samples after the repeated encoding operation.
2.4 Triplet loss function
A simple method of computing inner products between sample pairs is the Hadamard classifier method [12]. In this method, two samples are first projected into orthogonal subspaces spanned by the standard basis states of one ancilla register. Then, a Hadamard gate acts on the ancilla register to interfere the two samples in the two-dimensional subspaces. Finally, the inner product between the two samples is obtained by measuring the expectation value of $\sigma_z$ on the ancilla register. The triplet loss function, consisting of inner products for positive and negative sample pairs, needs the weighted sum of the inner products for sample pairs, where the weight of the positive sample pair is $+1$ and the weight of the negative sample pair is $-1$. The states of the triplet sets have been prepared on the two-qubit standard basis, as shown in Eq. (10). The QAML model contains two ancilla registers. Ancilla register 2 is used to build the inner products of sample pairs: the QAML model applies one Hadamard gate to ancilla register 2 to interfere the sample pairs. If only the expectation of the observable $\sigma_z$ on ancilla register 2 is measured, the QAML model obtains the sum of the inner products of the positive and negative sample pairs. Ancilla register 1 is used to distinguish the two sample pairs, and measuring the expectation with respect to $\sigma_z$ on this register assigns the weights $\pm 1$ to the two pairs. So the QAML model measures not only the expectation of the observable with respect to ancilla register 2 but also that with respect to ancilla register 1. The joint expectation on the two ancilla registers is
(11) |
where $\mu$ represents the margin for separating positive and negative samples. With the help of classical post-processing, one obtains the triplet loss function
(12) |
In practical applications, one batch of samples may contain multiple triplet sets, so the QAML model needs to add an index register to distinguish different triplet sets. Let one batch of samples include $N$ triplet sets. The states $|x^a\rangle$, $|x^p\rangle$, and $|x^n\rangle$ of Eq. (6) are replaced by the superposition states $\frac{1}{\sqrt{N}}\sum_{i=1}^{N}|i\rangle_m|x_i^a\rangle$, $\frac{1}{\sqrt{N}}\sum_{i=1}^{N}|i\rangle_m|x_i^p\rangle$, and $\frac{1}{\sqrt{N}}\sum_{i=1}^{N}|i\rangle_m|x_i^n\rangle$ to construct the loss function for this batch, where the subscript $m$ denotes the index register. The QAML model then performs Eqs. (10)-(12) and yields the expectation value of the observable with respect to ancilla registers 1 and 2 as
(13) |
which corresponds to the weighted sum of the inner products over one batch of samples.
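For intuition, the sketch below reproduces the measured quantity at the level of statevector simulation: each member of a triplet is embedded by the encoding-ansatz-repeated-encoding structure, and the weighted difference of inner products (weight +1 for the positive pair, -1 for the negative pair) is formed classically. It bypasses the superposition-state construction and the Hadamard interference of the actual circuit, and the gate choices follow the earlier sketches rather than the paper's exact layout.

```python
import numpy as np
import pennylane as qml

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

def layer(thetas, wires):
    # one illustrative hardware-efficient layer: RY rotations plus CZ entanglers
    for t, w in zip(thetas, wires):
        qml.RY(t, wires=w)
    for i in range(len(wires) - 1):
        qml.CZ(wires=[wires[i], wires[i + 1]])

@qml.qnode(dev)
def embed(x, params):
    for j in range(n_qubits):
        qml.RY(x[j], wires=j)                 # qubit encoding
    for thetas in params:
        layer(thetas, list(range(n_qubits)))  # parameterized ansatz U(theta)
    for j in range(n_qubits):
        qml.RY(x[j], wires=j)                 # repeated encoding
    return qml.state()

def inner_product(x1, x2, params):
    # overlap of the two embedded states (real-valued for RY/CZ circuits)
    return float(np.real(np.vdot(embed(x1, params), embed(x2, params))))

def weighted_inner_products(anchor, pos, neg, params):
    return inner_product(anchor, pos, params) - inner_product(anchor, neg, params)

# toy usage
params = np.random.uniform(0, 2 * np.pi, size=(2, n_qubits))
print(weighted_inner_products(np.array([0.2, 0.4]),
                              np.array([0.3, 0.5]),
                              np.array([2.0, 1.5]), params))
```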
2.5 Adversarial samples generation
Metric learning is vulnerable to adversarial attacks. Attackers usually add small, imperceptible perturbations to natural samples to generate adversarial samples for deceiving metric learning models. Adversarial attacks make metric learning models unable to accurately distinguish positive and negative samples and give rise to misclassification. Miyato et al. [18] proposed an adversarial training method, where ambiguous but critical adversarial samples are generated based on the gradients of natural samples and added to the training set [9]. This method effectively fights against white-box attacks and improves the robustness of the model. Inspired by this method, we develop a quantum adversarial sample generation method. Considering the efficiency of the triplet loss function, we do not create adversarial samples corresponding to all natural samples. Anchor samples in the triplet loss function are used twice, to compute the inner products of both the positive and the negative sample pairs. The adversarial samples corresponding to anchor samples can therefore provide more valuable information for adversarial training, so the QAML model only builds adversarial samples corresponding to anchor samples.
Let $\tilde{x}^a$ denote the adversarial sample corresponding to the anchor sample $x^a$. According to the characteristics of adversarial attacks, $\tilde{x}^a$ is far from the positive sample $x^p$ but close to the negative sample $x^n$, and this characteristic makes it hard for the QAML model to build accurate metric distances. According to Ref. [19], adversarial attacks generated along the direction of gradient ascent produce the strongest disturbance to metric learning, so we develop a quantum gradient ascent method to generate adversarial samples. Let $g^a$ denote the gradient vector of the loss function with respect to the anchor sample $x^a$, where the element $g_j^a = \partial L / \partial x_j^a$ is the partial derivative of the loss function with respect to the $j$th element of $x^a$.
The QAML model may encounter many kinds of attacks. One of the most common is the white-box attack, under which the attackers have complete information about the QAML model, including the loss function implemented by the parameterized quantum circuit, so that they can compute the gradients of the loss function with respect to the gate parameters. Let the QAML model suffer from the functional adversarial attack [20] (one kind of white-box attack), under which each element of the quantum state is influenced by the attack independently. According to the idea of gradient ascent, the adversarial anchor sample can be written as
$$\tilde{x}^a = x^a + \epsilon \odot \operatorname{sign}(g^a), \tag{14}$$
where $\epsilon$ is a constant vector used to control the disturbance within a specified bound. Usually, $\epsilon$ is determined by the problem to be solved, and its norm has a specified upper bound.
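A classical-level sketch of this gradient-ascent step, assuming an elementwise sign update bounded by a small constant in the spirit of Eq. (14); `grad` stands for the gradient of the loss with respect to the anchor, e.g. obtained via the parameter-shift estimate described below.

```python
import numpy as np

def adversarial_anchor(anchor, grad, eps=0.05):
    # move the anchor along the direction of steepest loss increase;
    # the perturbation of every element is bounded by eps
    return anchor + eps * np.sign(grad)
```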
Let $U_{adv} = \bigotimes_j u_j$ denote the unitary acting on the anchor sample to generate the adversarial sample $\tilde{x}^a$, where $u_j$ represents the unitary operation acting on the qubit encoding the $j$th element of $x^a$. It is expected that $U_{adv}$ has a small impact on the state $|x^a\rangle$, so $u_j$ is close to the identity operator $I$. $u_j$ can be implemented by the rotation operation
$$u_j = R_y\big(\epsilon_j \operatorname{sign}(g_j^a)\big), \tag{15}$$

where $\epsilon_j$ and $g_j^a$ denote the $j$th elements of $\epsilon$ and $g^a$, respectively. As the QAML model only adopts anchor samples to generate adversarial samples, we define the unitary operation generating the adversarial sample as
$$U_{gen} = I_1 \otimes \big(|0\rangle\langle 0|_2 \otimes U_{adv} + |1\rangle\langle 1|_2 \otimes I_s\big), \tag{16}$$

where $U_{adv}$ acts on the sample register $s$ only when ancilla register 2 is in the state $|0\rangle$, and $I_s$ and $I_1$ denote the identity unitaries acting on registers $s$ and 1, respectively. Fig. 3(c) shows the schematic of generating adversarial samples, where $U_{gen}$ is inserted after the encoding of the anchor sample to generate the adversarial sample. In the QAML training process, the parameters $\theta$ are optimized by alternately minimizing the loss functions evaluated on natural and adversarial samples, which respectively serve as input.
The core work of generating adversarial samples is to compute the partial derivatives $\partial L / \partial x_j^a$. Many methods can be adopted to calculate them, such as the finite difference scheme and the parameter shift rule [21, 22, 23]. The parameter shift rule has faster convergence in the training process, making it more suitable for NISQ devices. The partial derivative $\partial L / \partial x_j^a$ is evaluated using the parameter shift rule as
$$\frac{\partial L}{\partial x_j^a} = \frac{1}{2}\left[ L\Big(x^a + \frac{\pi}{2}e_j\Big) - L\Big(x^a - \frac{\pi}{2}e_j\Big) \right], \tag{17}$$
where $e_j$ is the unit vector whose $j$th element is 1. According to Eq. (17), one partial derivative can be obtained by evaluating the loss function twice.
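A sketch of the parameter-shift estimate of Eq. (17), assuming the loss is available as a function of the anchor's encoding angles:

```python
import numpy as np

def partial_derivative(loss_fn, x, j, shift=np.pi / 2):
    # Eq. (17): two loss evaluations with the j-th encoding angle shifted by +/- pi/2
    e_j = np.zeros_like(x)
    e_j[j] = shift
    return 0.5 * (loss_fn(x + e_j) - loss_fn(x - e_j))

def anchor_gradient(loss_fn, x):
    # full gradient of the loss with respect to the anchor sample
    return np.array([partial_derivative(loss_fn, x, j) for j in range(len(x))])
```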
3 Numerical simulations and discussions
In this section, we adopt the PennyLane software framework [24] to demonstrate the performance of the QAML model. The QAML model is implemented by a hybrid quantum-classical algorithm, where the quantum device and a classical optimizer cooperate to optimize the parameters. The RMSProp optimizer [25] serves as the classical optimizer, with a learning rate of 0.01. Our first experiment demonstrates the performance of the QAML model on the MNIST dataset, which consists of 28x28-pixel grayscale images of handwritten digits 0-9. The QAML model focuses on binary classification tasks, so only two categories of handwritten digits are chosen to form the data sets. As NISQ devices have limited circuit depth and qubit counts, the QAML model first reduces samples to 2-dimensional vectors using the principal component analysis (PCA) method. The training and test sets each contain 100 samples, with 50 samples from each of the two classes.
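A small preprocessing sketch of the PCA reduction and angle scaling described above; the rescaling interval and the random stand-in data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def preprocess(samples, n_components=2, angle_range=np.pi):
    # reduce raw vectors to n_components dimensions with PCA, then rescale each
    # feature to a fixed interval so it can serve as an RY rotation angle
    reduced = PCA(n_components=n_components).fit_transform(samples)
    lo, hi = reduced.min(axis=0), reduced.max(axis=0)
    return (reduced - lo) / (hi - lo) * angle_range

# stand-in for 100 flattened 28x28 images (the real experiment uses MNIST digits)
images = np.random.rand(100, 784)
print(preprocess(images).shape)  # (100, 2)
```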
Fig. 4 shows the distributions of the test samples in the Hilbert space. Simulation results show that samples from different classes are pushed apart with a larger margin and become linearly separable after performing the QAML model. Fig. 5 shows the inner products between test sample pairs (the larger the inner product, the smaller the distance). The QAML model without adversarial samples can be viewed as a general quantum metric learning model, referred to as the QML model. Panel (a) shows the inner products of sample pairs before performing the QML or QAML model. Panel (b) shows the inner products of sample pairs after performing the QML model, where the training set only includes natural samples. Panel (c) shows the inner products of sample pairs after performing the QAML model, where the training set consists of natural and adversarial samples. Before training, the inner products for sample pairs of the same and different classes differ little. This means that samples from different categories are close to each other and difficult to separate. After performing the QML model, the inner products for negative sample pairs become smaller (close to 0), indicating that the distances between samples from different categories begin to grow. After performing the QAML model, the inner products for negative sample pairs approach -1, smaller than the values obtained with the QML model. This result indicates that the distance between samples of different categories after executing the QAML model is greater than that after executing the QML model. Table 1 reports the average inner product of all sample pairs from the same class, and Table 2 reports the average inner product of all sample pairs from different classes. The results show that the average inner product between samples of different classes is smaller when adversarial samples are added than without them, for both the training and test sets. This means that the QAML model obtains a larger separation margin than the QML model. We can also find that the average inner products for the training and test samples differ only slightly, indicating that the QAML model generalizes well to unseen test data.





Table 1: Average inner product of sample pairs from the same class (MNIST).
Samples | Training | Test | Training+adv | Test+adv |
---|---|---|---|---|
Before | 0.8280 | 0.8168 | 0.8280 | 0.8168 |
After | 0.8348 | 0.8021 | 0.8537 | 0.8249 |
Table 2: Average inner product of sample pairs from different classes (MNIST).
Samples | Training | Test | Training+adv | Test+adv |
---|---|---|---|---|
Before | 0.3040 | 0.4787 | 0.3040 | 0.4787 |
After | -0.7971 | -0.6968 | -0.8326 | -0.7696 |
To further verify the separation effect on other data sets, we simulate the performance of the QML and QAML models on the Iris dataset. The Iris dataset contains 150 samples with 4-dimensional features, with 50 samples in each of classes 1, 2, and 3. Samples from classes 2 and 3 are difficult to separate by simple linear functions, so we select them to build a binary data set, where 30 samples of each category are used to construct the training set and the other 20 samples serve as the test set. Fig. 6 shows the average inner products of test sample pairs for the Iris dataset. Panels (a), (b), and (c) show the inner products of test sample pairs before performing the QML or QAML model, after performing the QML model, and after performing the QAML model, respectively. Simulation results show that the QAML model also has a good separation effect on the Iris dataset, superior to the QML model. Tables 3 and 4 show the average inner products of same-class and different-class sample pairs for the Iris dataset, respectively. The same-class values are similar in all cases, indicating that samples from the same class keep relatively stable distances regardless of whether the QAML model is performed. Before performing the QML or QAML model, the average inner product between different-class pairs is large, which means that samples from different classes are close to each other and difficult to separate. After performing the QML and QAML models, these average inner products become smaller, and those of the QAML model are smaller than those of the QML model. We can conclude that the QAML model yields a better separation effect than the QML model, consistent with the conclusion obtained on the MNIST dataset.



Table 3: Average inner product of sample pairs from the same class (Iris).
Samples | Training | Test | Training+adv | Test+adv |
---|---|---|---|---|
Before | 0.5065 | 0.5909 | 0.5065 | 0.5909 |
After | 0.5473 | 0.6109 | 0.5549 | 0.6544 |
Table 4: Average inner product of sample pairs from different classes (Iris).
Samples | Training | Test | Training+adv | Test+adv |
---|---|---|---|---|
Before | 0.3377 | 0.4787 | 0.3377 | 0.4787 |
After | -0.6314 | -0.3424 | -0.6752 | -0.4653 |
Furthermore, we evaluate the robustness of the QAML model based on the ε-robust accuracy proposed in Ref. [26]. Given a test sample set $T$ and a small threshold, let $|\phi\rangle$ represent the quantum state of a test sample of $T$. If $|\phi\rangle$ and another state belong to different classes and the inner product between them is larger than the threshold, the latter is viewed as an adversarial sample of $|\phi\rangle$. If $|\phi\rangle$ has no adversarial samples within $T$, then $|\phi\rangle$ is an ε-robust state. The ε-robust accuracy of $T$ is defined as the proportion of ε-robust states in the sample set $T$. With the threshold fixed, the ε-robust accuracy of the QAML model is higher than that of the QML model on both the MNIST and Iris datasets. Compared with the QML model, the QAML model improves robustness by adding adversarial samples to the training set.
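A small sketch of this robustness measure, following the description above (a test state counts as ε-robust if no state from a different class has an inner product with it above the threshold); the function name and threshold handling are illustrative.

```python
import numpy as np

def epsilon_robust_accuracy(states, labels, threshold):
    # fraction of test states that have no adversarial counterpart, i.e. no state
    # of a different class whose inner product with them exceeds the threshold
    robust = 0
    for s, y in zip(states, labels):
        overlaps = [abs(np.vdot(s, t)) for t, z in zip(states, labels) if z != y]
        if not overlaps or max(overlaps) <= threshold:
            robust += 1
    return robust / len(states)
```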
References
- (1) I. Cong, S. Choi, M.D. Lukin, Nature Physics 15(12), 1273 (2019)
- (2) M. Benedetti, E. Lloyd, S. Sack, M. Fiorentini, Quantum Science and Technology 4(4), 043001 (2019)
- (3) S. Chen, C. Gong, J. Yang, X. Li, Y. Wei, J. Li, arXiv preprint arXiv:1802.03170 (2018)
- (4) S. Lloyd, M. Schuld, A. Ijaz, J. Izaac, N. Killoran, arXiv preprint arXiv:2001.03622 (2020)
- (5) N.A. Nghiem, S.Y.C. Chen, T.C. Wei, arXiv preprint arXiv:2010.13186 (2020)
- (6) C. Mao, Z. Zhong, J. Yang, C. Vondrick, B. Ray, Advances in Neural Information Processing Systems 32 (2019)
- (7) J. Wang, F. Zhou, S. Wen, X. Liu, Y. Lin, in Proceedings of the IEEE international conference on computer vision (2017), pp. 2593–2601
- (8) Y. Duan, W. Zheng, X. Lin, J. Lu, J. Zhou, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2780–2789
- (9) N. Liu, P. Wittek, Physical Review A 101(6), 062331 (2020)
- (10) A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu, arXiv preprint arXiv:1706.06083 (2017)
- (11) R. Salakhutdinov, G. Hinton, in Artificial Intelligence and Statistics (PMLR, 2007), pp. 412–419
- (12) C. Blank, D.K. Park, J.K.K. Rhee, F. Petruccione, npj Quantum Information 6(1), 1 (2020)
- (13) E. Grant, M. Benedetti, S. Cao, A. Hallam, J. Lockhart, V. Stojevic, A.G. Green, S. Severini, npj Quantum Information 4(1), 1 (2018)
- (14) A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster, J.I. Latorre, Quantum 4, 226 (2020)
- (15) M. Schuld, R. Sweke, J.J. Meyer, Physical Review A 103(3), 032430 (2021)
- (16) C. Zoufal, A. Lucchi, S. Woerner, npj Quantum Information 5(1), 1 (2019)
- (17) A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J.M. Chow, J.M. Gambetta, Nature 549(7671), 242 (2017)
- (18) T. Miyato, S.i. Maeda, M. Koyama, S. Ishii, IEEE transactions on pattern analysis and machine intelligence 41(8), 1979 (2018)
- (19) A. Kurakin, I. Goodfellow, S. Bengio, arXiv preprint arXiv:1611.01236 (2016)
- (20) J.R. McClean, M.E. Kimchi-Schwartz, J. Carter, W.A. De Jong, Physical Review A 95(4), 042308 (2017)
- (21) G.E. Crooks, arXiv preprint arXiv:1905.13311 (2019)
- (22) M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, N. Killoran, Physical Review A 99(3), 032331 (2019)
- (23) K. Mitarai, M. Negoro, M. Kitagawa, K. Fujii, Physical Review A 98(3), 032309 (2018)
- (24) V. Bergholm, J. Izaac, M. Schuld, C. Gogolin, M.S. Alam, S. Ahmed, J.M. Arrazola, C. Blank, A. Delgado, S. Jahangiri, et al., arXiv preprint arXiv:1811.04968 (2018)
- (25) M.C. Mukkamala, M. Hein, in International conference on machine learning (PMLR, 2017), pp. 2545–2553
- (26) J. Guan, W. Fang, M. Ying, CoRR (2020)