
🖂 Jian Li
[email protected]

¹ School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China.
² College of Information Science and Engineering, Zaozhuang University, Zaozhuang, Shandong 277160, China.
³ School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China.
⁴ Information Security Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China.
⁵ Guizhou Provincial Key Laboratory of Public Big Data, Guizhou University, Guiyang, Guizhou 550025, China.

Quantum adversarial metric learning model based on triplet loss function

Yan-Yan Hou¹,² · Jian Li³ · Xiu-Bo Chen⁴,⁵ · Chong-Qiang Ye¹
(Received: date / Accepted: date)
Abstract

Metric learning plays an essential role in image analysis and classification, and it has attracted more and more attention. In this paper, we propose a quantum adversarial metric learning (QAML) model based on the triplet loss function, where samples are embedded into a high-dimensional Hilbert space and the optimal metric is obtained by minimizing the triplet loss function. The QAML model employs entanglement and interference to build superposition states for triplet samples, so that only one parameterized quantum circuit is needed to calculate sample distances, which reduces the demand for quantum resources. Considering that the QAML model is fragile to adversarial attacks, an adversarial sample generation strategy is designed based on the quantum gradient ascent method, effectively improving the robustness against functional adversarial attacks. Simulation results show that the QAML model can effectively distinguish samples of the MNIST and Iris datasets and achieves higher $\epsilon$-robust accuracy than general quantum metric learning. Metric learning is a fundamental research problem of machine learning; as a subroutine of classification and clustering tasks, the QAML model opens an avenue for exploring quantum advantages in machine learning.

Keywords:
Metric learning · Hybrid quantum-classical algorithm · Quantum machine learning

1 Introduction

Machine learning has developed rapidly in recent years and is widely used in artificial intelligence and big data fields. Quantum computing can efficiently process data in an exponentially large Hilbert space and is expected to achieve dramatic speedups in solving some classical computational problems. Quantum machine learning, as the interplay between machine learning and quantum physics, brings unprecedented promise to both disciplines. On the one hand, machine learning methods have been extended to the quantum world and applied to data analysis in quantum physics [1]. On the other hand, quantum machine learning exploits quantum properties, such as entanglement and superposition, to revolutionize classical machine learning algorithms and achieve computational advantages over classical algorithms [2]. Metric learning is the core problem of several machine learning tasks [3], such as $k$-nearest neighbors, support vector machines, radial basis function networks, and $k$-means clustering. Its core work is to construct an appropriate distance metric that maximizes the similarities of samples of the same class and minimizes the similarities of samples from different classes. Both linear and nonlinear methods can be used to implement metric learning. In general, linear models have a limited number of parameters and are unsuitable for characterizing high-order features of samples. Recently, neural networks have been adopted to establish nonlinear metric learning models, and promising results have been achieved in face recognition and feature matching.

Classical metric learning models usually extract low-dimensional representations of samples, which lose some details of the samples. Quantum states live in high-dimensional Hilbert spaces whose dimensions grow exponentially with the number of qubits. This property enables quantum models to learn high-dimensional representations of samples without explicitly invoking a kernel function. As the input dimension increases, this speed-up advantage becomes more and more pronounced, and computing speeds are expected to grow exponentially. In recent years, researchers have begun to study how to adopt quantum methods to implement metric learning. Lloyd et al. [4] first proposed a quantum metric learning model based on hybrid quantum-classical algorithms, where a parameterized quantum circuit maps samples into a high-dimensional Hilbert space and the optimal metric model is obtained by optimizing a loss function based on Hilbert-Schmidt distances. This model achieves good results in classification tasks. Nghiem et al. [5] introduced quantum explicit and implicit metric learning approaches from the perspective of whether the target space is known or not, and established the relationship between quantum metric learning and other quantum supervised learning models. The above two algorithms mainly focus on classification tasks. Metric learning, however, is a fundamental problem in machine learning that can be applied not only to classification but also to clustering, face recognition, and other issues. In our research, we are devoted to constructing a quantum metric learning model that can serve various machine learning tasks.

Angular distance is a vital metric that quantifies the included angle between normalized samples [6]. It focuses on the difference in the direction of samples and is more robust to the variation of local features [7, 8]. Considering the similarity between angular distances of classical data and inner products of quantum states, we design a quantum adversarial metric learning (QAML) model based on inner product distances, which is well suited to image-related tasks. Unlike other quantum metric learning models, the QAML model maps samples from different classes into quantum superposition states and utilizes simple interference circuits to compute metric distances for multiple sample pairs in parallel. Furthermore, quantum systems in high-dimensional Hilbert space have counter-intuitive geometrical properties [9]. A QAML model trained only on natural samples is vulnerable to adversarial attacks, under which some samples move closer to the false class, so the model easily makes wrong judgements [10]. To solve this issue, we construct adversarial samples based on natural samples, and the model's robustness is improved by alternately training on natural and adversarial samples. Our work has two main contributions: (i) We explore a quantum method to compute the triplet loss function, which utilizes quantum superposition states to calculate sample distances in parallel and reduces the demand for quantum resources. (ii) We design an adversarial sample generation strategy based on quantum gradient ascent, and the robustness of the QAML model is significantly improved by alternately training on generated adversarial samples and natural samples. Simulation results show that the QAML model separates samples by a larger margin and has better robustness against functional adversarial attacks than general quantum metric learning models.

The paper is organized as follows. Section 2 presents the basic method of the QAML model. Section 3 verifies the performance of the QAML model. Finally, we draw conclusions and discuss future research directions.

2 Quantum adversarial metric learning

2.1 Preliminary theory

The triplet loss function is a widely used strategy for metric learning [11], commonly applied in image retrieval and face recognition. A triplet set $(x_{i}^{a},x_{i}^{p},x_{i}^{n})$ consists of three samples from two classes, where the anchor sample $x_{i}^{a}$ and the positive sample $x_{i}^{p}$ belong to the same class, and the negative sample $x_{i}^{n}$ comes from another class. The goal of metric learning based on the triplet loss function is to find the optimal embedded representation space, in which positive sample pairs $(x_{i}^{a},x_{i}^{p})$ are pulled together and negative sample pairs $(x_{i}^{a},x_{i}^{n})$ are pushed apart. Fig. 1 shows how the sample space changes in the metric learning process. As we can see, samples from different classes become linearly separable through metric learning. Fig. 2 shows the schematic of the metric learning model based on the triplet loss function. Firstly, the model prepares multiple triplet sets, and one triplet set $(x_{i}^{a},x_{i}^{p},x_{i}^{n})$ is sent to convolutional neural networks (CNNs), where three CNNs with the same structure and parameters are needed. Each CNN acts on one sample of the triplet set to extract its features. The triplet loss function is obtained by computing metric distances for multiple sample pairs of triplet sets. In the learning process, the optimal parameters of the CNNs are obtained by minimizing the triplet loss function.

Figure 1: Sample space change in the metric learning process. Before metric learning, the distances between negative sample pairs are small, and samples from different classes are difficult to separate by linear functions. After metric learning, the distances between negative sample pairs become larger, and a large margin separates samples from different classes, so linear functions can easily separate positive and negative samples.
Figure 2: The schematic of the metric learning model based on the triplet loss function. A triplet set includes an anchor sample, a positive sample, and a negative sample. The input consists of a batch of triplet sets, and only one triplet set serves as input in each iteration. Three CNNs with the same structure and parameters map the triplet set into the embedded representation space. Each CNN, consisting of multiple convolution, pooling, and fully connected layers, is responsible for extracting the features of one sample. The triplet loss function is then constructed from the extracted features.

Let one batch of samples include $N_{1}$ triplet sets. The triplet loss function is

$$L=\frac{1}{N_{1}}\sum_{i=1}^{N_{1}}\big[D(g(x^{a}_{i}),g(x^{p}_{i}))-D(g(x^{a}_{i}),g(x^{n}_{i}))+\mu\big]_{+},\qquad(1)$$

where $g(\cdot)$ represents the function mapping input samples to the embedded representation space, $D(\cdot,\cdot)$ denotes the distance between a sample pair in the embedded representation space, and $[\,\cdot\,]_{+}=\max(0,\,\cdot\,)$ represents the hinge loss function. The goal of metric learning is to learn a metric that makes the distances between negative sample pairs greater than the distances between the corresponding positive sample pairs by at least the specified margin $\mu\in\mathbb{R}^{+}$ [6]. In the triplet loss function, $D(g(x^{a}_{i}),g(x^{p}_{i}))$ penalizes the positive sample pair $(x^{a}_{i},x^{p}_{i})$ that is too far apart, and $D(g(x^{a}_{i}),g(x^{n}_{i}))$ penalizes the negative sample pair $(x^{a}_{i},x^{n}_{i})$ whose distance is less than the margin $\mu$.

Metric learning can adopt various distance metrics. The angular distance metric is robust to image illumination and contrast variation [7], which makes it an efficient choice for metric learning tasks. In this method, samples are normalized to unit vectors in advance. The distance between a positive sample pair is

$$D(g(x^{a}_{i}),g(x^{p}_{i}))=1-\frac{|g(x^{a}_{i})\cdot g(x^{p}_{i})|}{\|g(x^{a}_{i})\|_{2}\,\|g(x^{p}_{i})\|_{2}},\qquad(2)$$

where $|\cdot|$ denotes the absolute value, $\|\cdot\|_{2}$ denotes the $l_{2}$-norm, and $\cdot$ denotes the inner product of two vectors. The distance between negative sample pairs is calculated in the same way.
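To make Eqs. (1)-(2) concrete, the following minimal NumPy sketch evaluates the angular-distance triplet loss on classical vectors; the function names and the margin value are illustrative choices, not part of the original model.

```python
import numpy as np

def angular_distance(u, v):
    """Angular distance of Eq. (2): 1 - |u.v| / (||u||_2 ||v||_2)."""
    return 1.0 - abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))

def triplet_loss(anchors, positives, negatives, margin=0.2):
    """Triplet loss of Eq. (1) with the hinge [.]_+ = max(0, .)."""
    losses = [
        max(0.0, angular_distance(a, p) - angular_distance(a, n) + margin)
        for a, p, n in zip(anchors, positives, negatives)
    ]
    return float(np.mean(losses))
```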

2.2 Framework of quantum metric learning model

For most machine learning tasks, it is often challenging to adopt simple linear functions to distinguish samples of different classes. According to kernel theory [12], samples in a high-dimensional feature space have better distinguishability. Classical machine learning algorithms usually adopt kernel methods to map samples to a high-dimensional feature space, where the mapped samples can be separated by simple linear functions. Quantum states of $n$ qubits live in a $2^{n}$-dimensional Hilbert space, where quantum systems characterize the nonlinear features of data and efficiently process data through a series of linear unitary operations.

In the QAML model, samples are firstly mapped into quantum systems by qubit encoding. The Hilbert space after encoding usually does not correspond to the optimal space for separating samples of different classes. To search for the optimal Hilbert space, the QAML model performs a parameterized quantum circuit $W(\theta)$ on the encoded states [13]. As different variable parameters $\theta$ correspond to different mapping spaces, we can search for the optimal space by modifying the parameters $\theta=(\theta_{1}^{1},...,\theta_{i}^{j})$. As long as $W(\theta)$ has strong expressivity, we can find the optimal Hilbert space by optimizing the loss function of metric learning [14, 15]. $W(\theta)$ with different structures and numbers of layers has different expressivity: the more layers $W(\theta)$ has, the stronger the expressivity, and the easier it is to find the optimal metric space.

The classical metric learning model based on the triplet loss function requires three identical CNNs to map triplet sets $(x_{i}^{a},x_{i}^{p},x_{i}^{n})$ into the novel Hilbert space. To reduce the demand for quantum resources, we construct a quantum superposition state to represent one triplet set, so that a triplet set needs only one $W(\theta)$ to transform it into the Hilbert space. The core work in building the loss function is to compute inner products between sample pairs, but $W(\theta)$ and the subsequent conjugate operation $W^{\dagger}(\theta)$ counteract each other's effects. To solve this issue, we add a repeated encoding operation after $W(\theta)$. It is worth mentioning that the repeated encoding operation is also conducive to the construction of high-dimensional features of samples.

The QAML model is mathematically represented as the minimization of the loss function with respect to the parameters $\theta$. The triplet loss function consists of metric distances for positive and negative sample pairs, so the kernel work of the QAML model is constructing the metric distances for sample pairs in the transformed Hilbert space. The mapped samples $h(x_{i}^{a})/\|h(x_{i}^{a})\|_{2}$ and $h(x_{i}^{p})/\|h(x_{i}^{p})\|_{2}$ of Eq. (2) are replaced by the quantum states of $x_{i}^{a}$ and $x_{i}^{p}$; the second term of Eq. (2) is then converted to the inner product between the quantum states of the positive sample pair $(x_{i}^{a},x_{i}^{p})$, which can be obtained by the method of the Hadamard classifier [12]. The triplet loss function can be viewed as the weighted sum of the inner product of the sample pair $(x_{i}^{a},x_{i}^{p})$ and the inner product of the sample pair $(x_{i}^{a},x_{i}^{n})$. With the help of ancilla registers, the triplet set can be prepared in superposition state form. Owing to the entanglement property of superposition states, the triplet loss function can be implemented with one parameterized quantum circuit. Then, the triplet loss function value is transmitted to a classical optimizer, and the parameters are optimized until the optimal metric is obtained. The QAML model constructs adversarial samples according to the gradients of natural samples and alternately trains on natural and adversarial samples to improve the model's robustness against adversarial attacks. The schematic of the QAML model is shown in Fig. 3.

Figure 3: Overview of the quantum adversarial metric learning (QAML) model. Panel (a) shows the framework of quantum adversarial metric learning. $Reg.s$ is the sample register that stores triplet sets, and $Reg.1$ and $Reg.2$ are ancilla registers used to distinguish different samples. The model firstly adopts principal component analysis (PCA) to reduce the input dimension. Subsequently, anchor, negative, and positive samples are encoded into a quantum superposition state by controlled qubit encoding. The transformation of the Hilbert space is implemented by the parameterized quantum circuit $W(\theta)$ and the subsequent qubit encoding $U_{1}(x_{i})$. Finally, Hadamard and measurement operations act on the ancilla registers to simultaneously compute the inner products for the positive and negative sample pairs, and the triplet loss function is further obtained. In each iteration, the parameters $\theta$ are updated by optimizing the triplet loss function with a classical optimizer. Panel (b) shows the quantum dimension reduction circuit that reduces the number of output qubits. In each module, only one qubit is measured, and a controlled unitary based on its measurement result acts on the other qubit. Panel (c) shows another case of the QAML model, where adversarial samples are built and added to the training process. $V^{\prime}(\lambda\nabla_{i}^{a})$ is the unitary operation based on the gradient of the anchor sample $x_{i}^{a}$; it acts on the encoded quantum states to produce the corresponding adversarial sample. In the QAML model training process, natural and adversarial samples alternately serve as inputs.

2.3 Quantum embedding

In the QAML model, classical samples are firstly mapped into quantum states by qubit encoding, where each element is encoded as a Pauli rotation angle of one qubit. The number of qubits required by qubit encoding is equal to the dimension of the input sample. Still, the dimension of the quantum state grows exponentially with the input dimension, and $N$-dimensional samples are mapped into a $2^{N}$-dimensional Hilbert space. The qubit encoding method cannot represent classical samples with a number of qubits logarithmic in the input dimension. However, easy state preparation and low circuit depth make qubit encoding more suitable for implementation on near-term quantum devices.

Samples in practical applications are usually in real space. Applying $R_{X}$ and $R_{Z}$ rotations to quantum states would introduce imaginary terms, so the QAML model adopts $R_{Y}$ rotations to prepare the initial mapped states, where the classical samples determine the rotation angles of the qubits. Let $x_{i}^{j}$ denote the $j$th element of the sample $x_{i}$, scaled to the range $[-1,1]$; its corresponding qubit encoding is

$$|\varphi(x_{i}^{j})\rangle=\cos\big(\tfrac{\pi}{2}x_{i}^{j}\big)|0\rangle+\sin\big(\tfrac{\pi}{2}x_{i}^{j}\big)|1\rangle.\qquad(3)$$

Then, the qubit encoding of $x_{i}$ corresponds to the tensor product state

$$|\varphi_{i}\rangle=|\varphi(x_{i}^{1})\rangle\otimes|\varphi(x_{i}^{2})\rangle\otimes\cdots\otimes|\varphi(x_{i}^{N})\rangle.\qquad(4)$$
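A minimal PennyLane sketch of this encoding is given below. Since $R_{Y}(\theta)|0\rangle=\cos(\theta/2)|0\rangle+\sin(\theta/2)|1\rangle$, rotating by $\pi x_{i}^{j}$ reproduces Eq. (3); the two-feature setting is only an illustrative assumption.

```python
import numpy as np
import pennylane as qml

n_qubits = 2  # one qubit per (PCA-reduced) feature
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def encode(x):
    # R_Y(pi * x_j)|0> = cos(pi x_j / 2)|0> + sin(pi x_j / 2)|1>, matching Eq. (3)
    for j in range(n_qubits):
        qml.RY(np.pi * x[j], wires=j)
    return qml.state()

print(encode(np.array([0.3, -0.7])))  # tensor-product state of Eq. (4)
```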

In the QAML model, the parameterized quantum circuit is responsible for transforming the Hilbert space of the samples. The variable parameters are continuously optimized across iterations to obtain the optimal Hilbert space for separating samples of different classes. A parameterized quantum circuit, also called an ansatz, generally adopts a multi-layer circuit structure, where each layer contains a series of unitary operations depending on variable parameters. An ansatz can embed samples into Hilbert spaces that classical metric learning methods cannot represent. The hardware-efficient ansatz, one of the common ansatzes, has strong expressivity with few layers [16] and is widely applied on Noisy Intermediate-Scale Quantum (NISQ) devices. It adopts a layered circuit layout [17], where each layer consists of interleaved two-qubit unitary modules. Let $W^{k}_{ij}(\theta)$ denote the unitary module acting on the neighboring qubit pair $(i,j)$ in the $k$th layer. The unitary operation in the $k$th layer can be written as

$$W^{k}(\theta)=\prod_{i\in N_{1}}W_{i,(i+1)}^{k}(\theta)\prod_{j\in N_{2}}W_{j,(j+1)}^{k}(\theta),\qquad(5)$$

where $N_{1}$ and $N_{2}$ represent the odd and even subsets of $[0,N-1]$. For an $l_{1}$-layer structure, the ansatz can be written as $W(\theta)=\prod_{k=1}^{l_{1}}W^{k}(\theta)$.
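The following sketch is one possible PennyLane realization of such a layered ansatz; the choice of $R_{Y}$ rotations plus a CNOT as each two-qubit module $W_{i,(i+1)}^{k}$ is an assumption for illustration, not the specific circuit used in this paper.

```python
import pennylane as qml

def two_qubit_module(theta, wires):
    # An assumed form of W_{i,(i+1)}^k: parameterized rotations plus an entangler
    qml.RY(theta[0], wires=wires[0])
    qml.RY(theta[1], wires=wires[1])
    qml.CNOT(wires=wires)

def hardware_efficient_ansatz(theta, n_qubits):
    # theta has shape (l_1, n_qubits - 1, 2); each layer applies the odd-pair
    # modules (subset N_1) and then the even-pair modules (subset N_2), as in Eq. (5)
    for layer in theta:
        for i in range(0, n_qubits - 1, 2):
            two_qubit_module(layer[i], wires=[i, i + 1])
        for j in range(1, n_qubits - 1, 2):
            two_qubit_module(layer[j], wires=[j, j + 1])
```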

The dimension of the mapped quantum state is exponential in the input dimension. As the input dimension increases, the dimension of the mapped quantum states becomes much larger than the input dimension. In some machine learning tasks, the QAML model may be expected to have a smaller output dimension to facilitate the execution of subsequent subroutines, so the QAML model needs to add some unitary modules to adjust the output dimension. A primary strategy is to add a dimension reduction operation following the repeated encoding layer $U_{1}(x_{i})$ [1]. The dimension reduction operation is shown in Fig. 3(b). Firstly, alternating two-qubit unitary modules act on neighboring qubit pairs to entangle the mapped features. Then, one qubit of each module is measured, and the measurement result is used to control a unitary operation acting on the other qubit. Let $Q_{ij}^{k}=tr_{i}(P^{k}_{ij})$ denote the operation acting on the $(i,j)$ qubit pair in the $k$th layer, where $tr_{i}$ represents the partial trace over the $i$th qubit. $P_{ij}^{k}=|0\rangle\langle 0|\otimes P^{0}_{ij}+|1\rangle\langle 1|\otimes P^{1}_{ij}$ is the controlled unitary, which performs the single-qubit unitary $P^{0}_{ij}$ or $P^{1}_{ij}$ on the second qubit according to the measurement result of the first qubit; $Q^{k}=\prod_{i,j}Q_{ij}^{k}$ then represents the dimension reduction operation of the $k$th layer. Assume the dimension reduction operation includes $l_{2}$ layers; the output state is then reduced to a $2^{N/2^{l_{2}}}$-dimensional Hilbert space.
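One such module can be sketched with PennyLane's mid-circuit measurement facilities; treating $P^{0}$ and $P^{1}$ as $R_{Y}$ rotations is a hypothetical concrete choice made only for illustration.

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def reduction_module(phi):
    # Entangle a neighboring qubit pair, measure the first qubit, and apply a
    # measurement-controlled single-qubit unitary (P^0 or P^1) on the second qubit
    qml.RY(phi[0], wires=0)
    qml.RY(phi[1], wires=1)
    qml.CNOT(wires=[0, 1])
    m = qml.measure(0)                         # mid-circuit measurement
    qml.cond(m == 0, qml.RY)(phi[2], wires=1)  # P^0 branch
    qml.cond(m == 1, qml.RY)(phi[3], wires=1)  # P^1 branch
    return qml.expval(qml.PauliZ(1))
```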

The classical metric learning model based on the triplet loss function needs three identical CNNs to extract the features of the triplet set $(x_{i}^{a},x_{i}^{p},x_{i}^{n})$. To reduce the requirement for parameterized quantum circuits, the QAML model encodes the triplet set on a two-qubit basis and then interferes the positive and negative sample pairs with a Hadamard gate. The inner products for the positive and negative sample pairs are obtained in parallel by measuring the expectation of $\sigma_{z}$ observables with respect to the two ancilla qubits of the basis state. Let $|\varphi^{a}_{i}\rangle$, $|\varphi^{p}_{i}\rangle$, and $|\varphi^{n}_{i}\rangle$ represent the states of the anchor sample $x^{a}_{i}$, positive sample $x^{p}_{i}$, and negative sample $x^{n}_{i}$, respectively. Firstly, the QAML model prepares the superposition state

$$|\varphi_{i}\rangle=\frac{1}{2}|\varphi_{i}^{a}\rangle_{s}|0\rangle_{1}|0\rangle_{2}+\frac{1}{2}|\varphi_{i}^{a}\rangle_{s}|1\rangle_{1}|0\rangle_{2}+\frac{1}{2}|\varphi_{i}^{n}\rangle_{s}|0\rangle_{1}|1\rangle_{2}+\frac{1}{2}|\varphi_{i}^{p}\rangle_{s}|1\rangle_{1}|1\rangle_{2}\qquad(6)$$

for the triplet set $(x^{a}_{i},x^{p}_{i},x^{n}_{i})$, where $s$ denotes the sample register, and 1 and 2 denote the ancilla registers holding the basis states. Metric learning based on the triplet loss function requires a specific margin between the samples of different classes. To construct the margin, we replace $|\varphi_{i}^{n}\rangle_{s}|0\rangle_{1}|1\rangle_{2}$ with

$$|\varphi_{i}^{n}\rangle_{s}|0\rangle_{1}\Big(\frac{\alpha}{\sqrt{\alpha^{2}+1}}|0\rangle_{2}+\frac{1}{\sqrt{\alpha^{2}+1}}|1\rangle_{2}\Big)\qquad(7)$$

and $|\varphi_{i}^{p}\rangle_{s}|1\rangle_{1}|1\rangle_{2}$ with

$$|\varphi_{i}^{p}\rangle_{s}|1\rangle_{1}\Big(-\frac{\alpha}{\sqrt{\alpha^{2}+1}}|0\rangle_{2}+\frac{1}{\sqrt{\alpha^{2}+1}}|1\rangle_{2}\Big),\qquad(8)$$

where $\alpha$ is the parameter determining the margin. $|\varphi_{i}^{a}\rangle$, $|\varphi_{i}^{p}\rangle$, and $|\varphi_{i}^{n}\rangle$ may not lie in the optimal Hilbert space for separating samples of different classes. The parameterized quantum circuit $W(\theta)_{s}\otimes I_{1}\otimes I_{2}$ therefore acts on $|\varphi_{i}\rangle$, where $I_{1}$ and $I_{2}$ denote identity operations acting on ancilla registers 1 and 2, and $W(\theta)_{s}$ represents the ansatz acting on the sample register $s$. The system arrives at the state

$$|\varphi_{i}^{\prime}\rangle=\frac{\sqrt{2\alpha^{2}+1}}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{00}\rangle_{s}|0\rangle_{1}|0\rangle_{2}+\frac{\sqrt{2\alpha^{2}+1}}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{10}\rangle_{s}|1\rangle_{1}|0\rangle_{2}+\frac{1}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{01}\rangle_{s}|0\rangle_{1}|1\rangle_{2}+\frac{1}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{11}\rangle_{s}|1\rangle_{1}|1\rangle_{2},\qquad(9)$$

where $|\varphi_{i}^{00}\rangle_{s}=W(\theta)_{s}\big(\frac{\sqrt{\alpha^{2}+1}}{\sqrt{2\alpha^{2}+1}}|\varphi_{i}^{a}\rangle_{s}+\frac{\alpha}{\sqrt{2\alpha^{2}+1}}|\varphi_{i}^{n}\rangle_{s}\big)$, $|\varphi_{i}^{10}\rangle_{s}=W(\theta)_{s}\big(\frac{\sqrt{\alpha^{2}+1}}{\sqrt{2\alpha^{2}+1}}|\varphi_{i}^{a}\rangle_{s}-\frac{\alpha}{\sqrt{2\alpha^{2}+1}}|\varphi_{i}^{p}\rangle_{s}\big)$, $|\varphi_{i}^{01}\rangle_{s}=W(\theta)_{s}|\varphi_{i}^{n}\rangle_{s}$, and $|\varphi_{i}^{11}\rangle_{s}=W(\theta)_{s}|\varphi_{i}^{p}\rangle_{s}$.

As $W(\theta)_{s}W^{\dagger}(\theta)_{s}=I$, computing the inner products between the state pairs $|\varphi_{i}^{00}\rangle$ and $|\varphi_{i}^{01}\rangle$, or $|\varphi_{i}^{10}\rangle$ and $|\varphi_{i}^{11}\rangle$, would let $W(\theta)$ and $W^{\dagger}(\theta)$ cancel each other. An effective strategy is to perform the repeated encoding operation $U_{1}(x_{i})$ on $|\varphi_{i}^{\prime}\rangle$, which not only prevents the unitary operation and its conjugate from canceling each other in the inner product calculation but also extends the addressable Hilbert space. After the repeated encoding operation $U_{1}(x_{i})$, the system arrives at the state

$$|\varphi_{i}^{\prime\prime}\rangle=\frac{\sqrt{2\alpha^{2}+1}}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{00^{\prime}}\rangle_{s}|0\rangle_{1}|0\rangle_{2}+\frac{\sqrt{2\alpha^{2}+1}}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{10^{\prime}}\rangle_{s}|1\rangle_{1}|0\rangle_{2}+\frac{1}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{01^{\prime}}\rangle_{s}|0\rangle_{1}|1\rangle_{2}+\frac{1}{2\sqrt{\alpha^{2}+1}}|\varphi_{i}^{11^{\prime}}\rangle_{s}|1\rangle_{1}|1\rangle_{2},\qquad(10)$$

where $|\varphi_{i}^{00^{\prime}}\rangle_{s}=U_{1}(x_{i})|\varphi_{i}^{00}\rangle_{s}$, $|\varphi_{i}^{10^{\prime}}\rangle_{s}=U_{1}(x_{i})|\varphi_{i}^{10}\rangle_{s}$, $|\varphi_{i}^{01^{\prime}}\rangle_{s}=U_{1}(x_{i})|\varphi_{i}^{01}\rangle_{s}$, and $|\varphi_{i}^{11^{\prime}}\rangle_{s}=U_{1}(x_{i})|\varphi_{i}^{11}\rangle_{s}$.

2.4 Triplet loss function

A simple method for computing inner products between sample pairs is the Hadamard classifier method [12]. In this method, two samples are first projected into orthogonal subspaces spanned by the standard basis states of one ancilla register. Then, a Hadamard gate acts on the standard basis states to interfere the two samples in the 2-dimensional subspaces. Finally, the inner product between the two samples is obtained by measuring the expectation value of $\sigma_{z}$ for the ancilla register. The triplet loss function, consisting of inner products for positive and negative sample pairs, requires the weighted sum of the inner products for sample pairs, where the weight of positive sample pairs is $+1$ and the weight of negative sample pairs is $-1$. The states of the triplet sets have been prepared on the two-qubit standard basis, as shown in Eq. (10). The QAML model contains two ancilla registers. Ancilla register 2 is used to build the inner products of sample pairs: the QAML model applies one Hadamard gate to ancilla register 2 to interfere the sample pairs. If only the expectation of the observable $\sigma_{z}$ for ancilla register 2 were measured, the QAML model would obtain the sum of the inner products for positive and negative sample pairs. The QAML model therefore uses the other register (ancilla register 1) to distinguish between different sample pairs, and measuring the expectation with respect to its $\sigma_{z}$ operator supplies the weights of the sample pairs. So the QAML model measures not only the expectation of the observable $\sigma_{z}$ with respect to ancilla register 1 but also the expectation for ancilla register 2. The expectation on the two ancilla registers is

$$\langle\sigma^{1}_{z},\sigma_{z}^{2}\rangle=\frac{\sqrt{2\alpha^{2}+1}}{4\sqrt{\alpha^{2}+1}}\langle\varphi_{i}^{00^{\prime}}|\varphi_{i}^{01^{\prime}}\rangle-\frac{\sqrt{2\alpha^{2}+1}}{4\sqrt{\alpha^{2}+1}}\langle\varphi_{i}^{10^{\prime}}|\varphi_{i}^{11^{\prime}}\rangle=\frac{1}{4\sqrt{\alpha^{2}+1}}\Big(\langle\varphi_{i}^{n}|W^{\dagger}(\theta)U_{1}^{\dagger}(x_{i}^{n})U_{1}(x_{i}^{a})W(\theta)|\varphi_{i}^{a}\rangle-\langle\varphi_{i}^{p}|W^{\dagger}(\theta)U_{1}^{\dagger}(x_{i}^{p})U_{1}(x_{i}^{a})W(\theta)|\varphi_{i}^{a}\rangle-\frac{\alpha}{\sqrt{\alpha^{2}+1}}\Big),\qquad(11)$$

where $\frac{\alpha}{\sqrt{\alpha^{2}+1}}$ represents the margin separating positive and negative samples. With the help of classical post-processing, one obtains the triplet loss function

$$L_{l}(\theta,|\varphi_{i}^{a}\rangle,|\varphi_{i}^{p}\rangle,|\varphi_{i}^{n}\rangle)=\big[4\sqrt{\alpha^{2}+1}\,\langle\sigma_{z}^{1},\sigma_{z}^{2}\rangle\big]_{+}.\qquad(12)$$
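The circuit below is a deliberately simplified PennyLane sketch of this construction for single-feature samples (length-1 feature vectors): it prepares the branch structure of Eq. (6) with branch-controlled $R_{Y}$ encodings, applies stand-ins for $W(\theta)$ and $U_{1}(x_{i})$, interferes the branches with a Hadamard on ancilla 2, and post-processes the two-ancilla expectation as in Eq. (12). The margin amplitudes of Eqs. (7)-(8) are omitted, and all gate choices and function names are illustrative assumptions.

```python
import pennylane as qml
from pennylane import numpy as np

# 1 sample qubit (register s) + 2 ancillas; samples are length-1 feature vectors
dev = qml.device("default.qubit", wires=["s", "a1", "a2"])

@qml.qnode(dev)
def triplet_expectation(theta, xa, xp, xn):
    qml.Hadamard(wires="a1")  # the two ancillas index the four branches of Eq. (6)
    qml.Hadamard(wires="a2")
    # anchor on the |0>_2 branches, negative on |0>_1|1>_2, positive on |1>_1|1>_2
    qml.ctrl(qml.RY, control="a2", control_values=[0])(np.pi * xa[0], wires="s")
    qml.ctrl(qml.RY, control=["a1", "a2"], control_values=[0, 1])(np.pi * xn[0], wires="s")
    qml.ctrl(qml.RY, control=["a1", "a2"], control_values=[1, 1])(np.pi * xp[0], wires="s")
    qml.RY(theta, wires="s")          # stand-in for the ansatz W(theta)
    qml.RY(np.pi * xa[0], wires="s")  # stand-in for the repeated encoding U_1(x_i)
    qml.Hadamard(wires="a2")          # interfere the sample pairs
    return qml.expval(qml.PauliZ("a1") @ qml.PauliZ("a2"))

def quantum_triplet_loss(theta, xa, xp, xn, alpha=1.0):
    # classical post-processing of Eq. (12): hinge on the rescaled expectation
    return np.maximum(0.0, 4 * np.sqrt(alpha**2 + 1) * triplet_expectation(theta, xa, xp, xn))
```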

In practical applications, one batch of samples may contain multiple triplet sets, so the QAML model needs to add an index register to distinguish different triplet sets. Let one batch of samples include $m$ triplet sets. $|\varphi_{i}^{a}\rangle_{s}$, $|\varphi_{i}^{p}\rangle_{s}$, and $|\varphi_{i}^{n}\rangle_{s}$ of Eq. (6) are replaced by the superposition states $|\widetilde{\varphi}_{i}^{a}\rangle_{s,d}=\frac{1}{\sqrt{m}}\Sigma_{j=im}^{(i+1)m-1}|\varphi_{j}^{a}\rangle_{s}|j\rangle_{d}$, $|\widetilde{\varphi}_{i}^{p^{\prime}}\rangle_{s,d}=\frac{1}{\sqrt{m}}\Sigma_{j=im}^{(i+1)m-1}|\varphi^{p}_{j}\rangle_{s}|j\rangle_{d}$, and $|\widetilde{\varphi}_{i}^{n^{\prime}}\rangle_{s,d}=\frac{1}{\sqrt{m}}\Sigma_{j=im}^{(i+1)m-1}|\varphi^{n}_{j}\rangle_{s}|j\rangle_{d}$ to construct the loss function for this batch, where the subscript $d$ denotes the index register. The QAML model performs Eqs. (10)-(12) and yields the expectation value of the observable $\sigma_{z}$ with respect to ancilla registers 1 and 2 as

$$\langle\sigma^{1}_{z},\sigma^{2}_{z}\rangle=-\frac{1}{4m\sqrt{\alpha^{2}+1}}\sum_{i=1}^{m}\Big(\langle\varphi_{i}^{n^{\prime}}|W^{\dagger}(\theta)U_{1}^{\dagger}(x_{i}^{n})U_{1}(x_{i}^{a})W(\theta)|\varphi_{i}^{a^{\prime}}\rangle-\langle\varphi_{i}^{p^{\prime}}|W^{\dagger}(\theta)U_{1}^{\dagger}(x_{i}^{p})U_{1}(x_{i}^{a})W(\theta)|\varphi_{i}^{a^{\prime}}\rangle-\frac{\alpha}{\sqrt{\alpha^{2}+1}}\Big),\qquad(13)$$

which corresponds to the weighted sum of the inner products for one batch of samples.

2.5 Adversarial samples generation

Metric learning is vulnerable to adversarial attacks. Attackers usually add small and imperceptible perturbations to natural samples to generate adversarial samples that deceive metric learning models. Adversarial attacks make metric learning models unable to accurately distinguish positive and negative samples and give rise to misclassification. Miyato et al. [18] proposed an adversarial training method, where ambiguous but critical adversarial samples are generated based on the gradients of natural samples and added to the training set [9]. This method effectively fights against white-box attacks and improves the robustness of the model. Inspired by this method, we develop a quantum adversarial sample generation method. Considering the efficiency of the triplet loss function, we do not create adversarial samples for all natural samples. Anchor samples in the triplet loss function are used twice, to compute the inner products of both positive and negative sample pairs; the adversarial samples corresponding to anchor samples can therefore provide more valuable information for adversarial training, so the QAML model only builds adversarial samples corresponding to anchor samples.

Let $|\varphi_{a}^{*}\rangle$ denote the adversarial sample corresponding to the anchor sample $|\varphi_{a}\rangle$. According to the characteristics of adversarial attacks, $|\varphi_{a}^{*}\rangle$ is far from the positive sample $|\varphi_{p}\rangle$ but close to the negative sample $|\varphi_{n}\rangle$, and this characteristic makes it hard for the QAML model to build accurate metric distances. According to Ref. [19], adversarial attacks generated along the direction of gradient ascent produce the strongest disturbance to metric learning, so we develop a quantum gradient ascent method to generate adversarial samples. Let $\nabla_{i}^{a}=((\nabla_{i}^{a})^{1},(\nabla_{i}^{a})^{2},...,(\nabla_{i}^{a})^{N})$ denote the gradient vector of the loss function $L_{l}(\theta,|\varphi_{i}^{a}\rangle,|\varphi_{i}^{p}\rangle,|\varphi_{i}^{n}\rangle)$ with respect to the anchor sample $|\varphi_{i}^{a}\rangle$, where the element $(\nabla_{i}^{a})^{j}=\partial L_{l}(\theta,|\varphi_{i}^{a}\rangle,|\varphi_{i}^{p}\rangle,|\varphi_{i}^{n}\rangle)/\partial(|\varphi_{i}^{a}\rangle^{j})$ is the partial derivative of the loss function with respect to the $j$th element of $|\varphi_{i}^{a}\rangle$.

The QAML model may encounter many attacks. One of the most common is the white-box attack, under which the attackers have complete information about the QAML model, including the loss function implemented by the parameterized quantum circuit, so that they can compute the gradients of the loss function with respect to the gate parameters. Suppose the QAML model suffers from the functional adversarial attack [20] (a kind of white-box attack), under which each element of the quantum state is influenced by the attack independently. According to the idea of gradient ascent, the adversarial anchor sample $|\varphi_{i}^{a*}\rangle$ can be written as

$$|\varphi^{a*}_{i}\rangle=\frac{1}{\sqrt{1+\lambda^{2}\|\nabla_{i}^{a}\|_{2}^{2}}}\big(|\varphi^{a}_{i}\rangle+\lambda\nabla_{i}^{a}|\varphi_{i}^{a}\rangle\big),\qquad(14)$$

where $\lambda=(\lambda_{1},\lambda_{2},...,\lambda_{N})$ is a constant vector used to keep the disturbance within a specified bound. Usually, $\lambda$ is determined by the problem to be solved, and its upper bound is $\|\lambda\|_{p}\leq\varepsilon$, where $\|\cdot\|_{p}$ denotes the $l_{p}$-norm.

Let $V(\lambda\nabla_{i}^{a})=v(\lambda_{1}(\nabla_{i}^{a})^{1})\otimes...\otimes v(\lambda_{N}(\nabla_{i}^{a})^{N})$ denote the unitary acting on the anchor sample $|\varphi_{i}^{a}\rangle$ to generate the adversarial sample $|\varphi_{i}^{a*}\rangle$, where $v(\lambda_{j}(\nabla_{i}^{a})^{j})$ represents the unitary operation acting on the $j$th element of $|\varphi_{i}^{a}\rangle$. It is expected that $v(\lambda_{j}(\nabla_{i}^{a})^{j})$ has a small impact on the state $|\varphi_{i}^{a}\rangle$, so $V(\lambda\nabla_{i}^{a})$ is close to the identity operator $I$. $v(\lambda_{j}(\nabla_{i}^{a})^{j})$ can be implemented by the rotation operation

$$R_{y}(2\beta)=\begin{bmatrix}\cos\beta&-\sin\beta\\ \sin\beta&\cos\beta\end{bmatrix},\qquad(15)$$

where $\beta=\arccos(1+\lambda_{j}(\nabla_{i}^{a})^{j})$. As the QAML model only adopts anchor samples to generate adversarial samples, we define the unitary operation generating the adversarial sample as

$$V^{\prime}(\lambda\nabla_{i}^{a})=V(\lambda\nabla_{i}^{a})_{s}\otimes I_{1}\otimes\Pi_{2}^{0}+I_{s}\otimes I_{1}\otimes\Pi_{2}^{1},\qquad(16)$$

where $V(\lambda\nabla^{a}_{i})$ acts on the sample register $s$ only when ancilla register 2 is $|0\rangle$ (with $\Pi_{2}^{0}=|0\rangle\langle 0|$ and $\Pi_{2}^{1}=|1\rangle\langle 1|$ the projectors on ancilla register 2), and $I_{s}$ and $I_{1}$ denote identity operations acting on registers $s$ and 1, respectively. Fig. 3(c) shows the schematic of generating adversarial samples, where $U^{\prime}_{1}(x_{i}^{a})=V^{\prime}(\lambda\nabla_{i}^{a})U_{1}(x_{i}^{a})$ replaces $U_{1}(x_{i}^{a})$ to generate the adversarial sample $|\varphi_{i}^{a*}\rangle$. In the QAML training process, the parameters $\theta$ are optimized by alternately minimizing the loss functions $L_{l}(\theta,|x_{i}^{a}\rangle,|x_{i}^{p}\rangle,|x_{i}^{n}\rangle)$ and $L_{l}(\theta,|x_{i}^{a*}\rangle,|x_{i}^{p}\rangle,|x_{i}^{n}\rangle)$, where natural and adversarial samples respectively serve as inputs.

The core work of generating adversarial samples is to compute the partial derivative $(\nabla_{i}^{a})^{j}$. Many methods can be adopted to calculate $(\nabla_{i}^{a})^{j}$, such as the finite difference scheme and the parameter shift rule [21, 22, 23]. The parameter shift rule has faster convergence in the training process, making it more suitable for NISQ devices. $(\nabla_{i}^{a})^{j}$ is evaluated using the parameter shift rule

$$\partial L_{l}(\theta,|x_{i}^{a}\rangle,|x_{i}^{p}\rangle,|x_{i}^{n}\rangle)/\partial(x_{i}^{a})^{j}=\frac{1}{2}\big(L_{l}(\theta,|x_{i,j}^{a+}\rangle,|x_{i}^{p}\rangle,|x_{i}^{n}\rangle)-L_{l}(\theta,|x_{i,j}^{a-}\rangle,|x_{i}^{p}\rangle,|x_{i}^{n}\rangle)\big),\qquad(17)$$

where $x_{i,j}^{a\pm}=x_{i}^{a}\pm\frac{\pi}{2}e^{j}$, and $e^{j}$ is the unit vector whose $j$th element is 1. According to Eq. (17), one partial derivative is obtained by evaluating the loss function twice.
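Below is a sketch of this procedure, pairing the parameter-shift gradient of Eq. (17) with the gradient-ascent perturbation of Eq. (14). Here loss_fn stands for any callable evaluating $L_{l}$ on a quantum device (for instance the hypothetical quantum_triplet_loss sketched above), and clipping back to $[-1,1]$ is our own assumption to keep the perturbed angles in the valid encoding range.

```python
import numpy as np

def parameter_shift_grad(loss_fn, theta, xa, xp, xn):
    # Eq. (17): each partial derivative costs two loss evaluations
    grad = np.zeros_like(xa)
    for j in range(len(xa)):
        shift = np.zeros_like(xa)
        shift[j] = np.pi / 2
        grad[j] = 0.5 * (loss_fn(theta, xa + shift, xp, xn)
                         - loss_fn(theta, xa - shift, xp, xn))
    return grad

def adversarial_anchor(loss_fn, theta, xa, xp, xn, lam=0.05):
    # move the anchor's encoding angles along the gradient-ascent direction, cf. Eq. (14)
    grad = parameter_shift_grad(loss_fn, theta, xa, xp, xn)
    return np.clip(xa + lam * grad, -1.0, 1.0)
```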

3 Numerical simulations and discussions

In this section, we adopt the PennyLane software framework [24] to demonstrate the performance of the QAML model. The QAML model is implemented by a hybrid quantum-classical algorithm, where a quantum device and a classical optimizer cooperate to optimize the parameters. An RMSProp optimizer [25] serves as the classical optimizer, with a learning rate of 0.01. Our first task is to demonstrate the performance of the QAML model on the MNIST dataset, which consists of $28\times 28$-dimensional grayscale images of handwritten digits $0\sim 9$. The QAML model focuses on binary classification tasks, so only two categories of handwritten digits, '0' and '1', are chosen to form the data sets. As NISQ devices have limited circuit depth and qubits, the QAML model first reduces the samples to 2-dimensional vectors using the principal component analysis (PCA) method. The training and test sets each contain 100 samples, where 50 samples come from class '0' and 50 samples come from class '1'.
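A skeleton of this hybrid loop is sketched below, assuming the hypothetical quantum_triplet_loss and adversarial_anchor helpers from the previous sections and a single trainable parameter for readability; PennyLane's built-in RMSPropOptimizer with stepsize 0.01 matches the setting described here, while the placeholder batch is an illustrative assumption.

```python
import pennylane as qml
from pennylane import numpy as np

opt = qml.RMSPropOptimizer(stepsize=0.01)
theta = np.array(0.1, requires_grad=True)

# placeholder batch of PCA-reduced, [-1,1]-scaled triplets (anchor, positive, negative)
triplets = [(np.array([0.3]), np.array([0.4]), np.array([-0.6]))]

for epoch in range(1000):
    for xa, xp, xn in triplets:
        # natural step
        theta = opt.step(lambda t: quantum_triplet_loss(t, xa, xp, xn), theta)
        # adversarial step: perturb the anchor along the gradient, then train on it
        xa_adv = adversarial_anchor(quantum_triplet_loss, theta, xa, xp, xn)
        theta = opt.step(lambda t: quantum_triplet_loss(t, xa_adv, xp, xn), theta)
```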

Fig. 4 shows the distributions of test samples in the Hilbert space. Simulation results show that samples from different classes are pushed apart with a larger margin and become linearly separable after performing the QAML model. Fig. 5 (colorbar figure) shows the inner products between test sample pairs (the larger the inner product, the smaller the distance). The QAML model without added adversarial samples can be viewed as a general quantum metric learning model, referred to as the QML model. Panel (a) shows the inner products of sample pairs before performing the QML or QAML models. Panel (b) shows the inner products of sample pairs after performing the QML model, where the training set only includes natural samples. Panel (c) shows the inner products of sample pairs after performing the QAML model, where the training set consists of natural and adversarial samples. Before training, the inner products for sample pairs of the same and different classes differ little. This phenomenon means that samples from different categories are close to each other and difficult to separate. After performing the QML model, the inner products for negative sample pairs become smaller (close to 0), indicating that the distances between samples from different categories begin to grow. After performing the QAML model, the inner products for negative sample pairs approach $-1$, smaller than the values obtained with the QML model. This result indicates that the distance between samples of different categories after executing the QAML model is greater than that after executing the QML model. Let $d_{i}$ represent the average inner product of all sample pairs from the same class, shown in Table 1, and let $d_{o}$ denote the average inner product of all sample pairs from different classes, shown in Table 2. The results show that the average inner product $d_{o}$ in the case of added adversarial samples is smaller than that without adversarial samples, regardless of training or test sets. This also means that the QAML model obtains a larger separation margin than the QML model. We can also find that the average inner products $d_{o}$ for test and training samples differ little, indicating that the QAML model generalizes well to unseen test data.

(a) Before QAML
(b) After QAML
Figure 4: The distributions of samples in the Hilbert space. 'Star' denotes samples from class '0', and 'circle' denotes samples from class '1'. Panel (a) shows the distribution of samples before performing the QAML model. Panel (b) shows the distribution of samples after performing the QAML model.
(a) Before QML (QAML)
(b) After QML
(c) After QAML
Figure 5: The inner products between test sample pairs of MNIST. The horizontal and vertical axes represent the indexes of samples. Indexes 0-49 denote the samples from class '0', and indexes 50-99 denote the samples from class '1'. Panel (a) shows the inner products between all sample pairs before performing the QML model, which also corresponds to the inner products before performing the QAML model. Panel (b) shows the inner products of test sample pairs after the QML model is trained for 1000 epochs, without adversarial samples added to the training set. Panel (c) shows the inner products of test sample pairs after 1000 training epochs with adversarial samples added to the training set.
Table 1: The average inner products $d_{i}$ of sample pairs from the same class (MNIST dataset). The first row describes the average inner products for sample pairs before training, and the second row after training. The first two columns represent the average inner products for training and test sample pairs, respectively, where adversarial samples are not added to the training set. The last two columns represent the average inner products for training and test sample pairs, respectively, where adversarial samples are added to the training set.
Samples Training Test Training+adv Test+adv
Before 0.8280 0.8168 0.8280 0.8168
After 0.8348 0.8021 0.8537 0.8249
Table 2: The average inner products $d_{o}$ of sample pairs from different classes (MNIST dataset). The description of rows and columns is the same as in Table 1.
Samples Training Test Training+adv Test+adv
Before 0.3040 0.4787 0.3040 0.4787
After -0.7971 -0.6968 -0.8326 -0.7696

To further verify the separation effects on other data sets, we simulate the performance of the QML and QAML models on the Iris dataset. The Iris dataset contains 150 samples with 4-dimensional features, where samples $0\sim 49$ belong to class 1, samples $50\sim 99$ belong to class 2, and samples $100\sim 149$ belong to class 3. Samples from classes 2 and 3 are difficult to separate by simple linear functions, so we select them to build a binary data set, where 30 samples of each category are used to construct the training set and the other 20 samples serve as the test set. Fig. 6 shows the average inner products of test sample pairs for the Iris dataset. Panels (a), (b), and (c) show the inner products for test sample pairs before performing the QML or QAML model, after performing the QML model, and after performing the QAML model, respectively. Simulation results show that the QAML model also has good separation effects on the Iris dataset, superior to the QML model. Tables 3 and 4 show the average inner products $d_{i}$ and $d_{o}$ for the Iris dataset, respectively. Simulation results show that all $d_{i}$ have similar values, indicating that samples from the same class keep relatively stable distances regardless of whether the QAML model is performed. Before performing the QML or QAML model, $d_{o}$ has a large value, which means that samples from different classes are close to each other and difficult to separate. After performing the QML and QAML models, the average inner products $d_{o}$ take smaller values, and $d_{o}$ of the QAML model is smaller than that of the QML model. We find that the QAML model yields a better separation effect than the QML model, consistent with the conclusion obtained on the MNIST dataset.

(a) Before QML (QAML)
(b) After QML
(c) After QAML
Figure 6: The inner products for sample pairs of the Iris dataset. Indexes 0-19 denote test samples from class 2, and indexes 20-39 denote test samples from class 3. Panel (a) shows the inner products of test sample pairs before performing the QML (QAML) model. Panel (b) shows the inner products of test sample pairs after performing the QML model. Panel (c) shows the inner products of test sample pairs after performing the QAML model.
Table 3: The average inner products $d_{i}$ of samples from the same class (Iris dataset). The description of rows and columns is the same as in Table 1.
Samples Training Test Training+adv Test+adv
Before 0.5065 0.5909 0.5065 0.5909
After 0.5473 0.6109 0.5549 0.6544
Table 4: The average inner products $d_{o}$ of samples from different classes (Iris dataset). The description of rows and columns is the same as in Table 1.
Samples Training Test Training+adv Test+adv
Before 0.3377 0.4787 0.3377 0.4787
After -0.6314 -0.3424 -0.6752 -0.4653

Furthermore, we evaluate the robustness of the QAML model using the $\epsilon$-robust accuracy proposed in Ref. [26]. Given a test sample set $\mathcal{S}$ and a small threshold $\epsilon$, let $\rho\in\mathcal{S}$ represent the quantum state of a test sample of $\mathcal{S}$. If $\rho$ and another state $\sigma$ belong to different classes and the inner product between them is larger than the threshold $\epsilon$, then $\sigma$ is viewed as an adversarial sample of $\rho$. If $\rho$ has no adversarial samples within $\epsilon$, $\rho$ is an $\epsilon$-robust state. Let $\mu_{\epsilon}$ denote the $\epsilon$-robust accuracy of $\mathcal{S}$, which equals the proportion of $\epsilon$-robust states in the sample set $\mathcal{S}$. Let the threshold be $\epsilon=0.02$. The $\epsilon$-robust accuracies of the QML and QAML models on the MNIST dataset are 92% and 100%, respectively. The $\epsilon$-robust accuracies of the QML and QAML models on the Iris dataset are 91% and 95%, respectively. Compared with the QML model, the QAML model improves robustness by adding adversarial samples to the training set.
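Under this definition, $\mu_{\epsilon}$ can be computed directly from the embedded test states; the sketch below assumes real-valued state vectors (as produced by the $R_{Y}$-only circuits above) and uses illustrative variable names.

```python
import numpy as np

def epsilon_robust_accuracy(states, labels, eps=0.02):
    # a state is epsilon-robust if no state of a different class has an
    # inner product with it exceeding the threshold eps
    robust = 0
    for rho, yi in zip(states, labels):
        others = (sigma for sigma, yj in zip(states, labels) if yj != yi)
        if all(np.real(np.vdot(rho, sigma)) <= eps for sigma in others):
            robust += 1
    return robust / len(states)
```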

References

1. I. Cong, S. Choi, M.D. Lukin, Nature Physics 15(12), 1273 (2019)
2. M. Benedetti, E. Lloyd, S. Sack, M. Fiorentini, Quantum Science and Technology 4(4), 043001 (2019)
3. S. Chen, C. Gong, J. Yang, X. Li, Y. Wei, J. Li, arXiv preprint arXiv:1802.03170 (2018)
4. S. Lloyd, M. Schuld, A. Ijaz, J. Izaac, N. Killoran, arXiv preprint arXiv:2001.03622 (2020)
5. N.A. Nghiem, S.Y.C. Chen, T.C. Wei, arXiv preprint arXiv:2010.13186 (2020)
6. C. Mao, Z. Zhong, J. Yang, C. Vondrick, B. Ray, Advances in Neural Information Processing Systems 32 (2019)
7. J. Wang, F. Zhou, S. Wen, X. Liu, Y. Lin, in Proceedings of the IEEE International Conference on Computer Vision (2017), pp. 2593–2601
8. Y. Duan, W. Zheng, X. Lin, J. Lu, J. Zhou, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2018), pp. 2780–2789
9. N. Liu, P. Wittek, Physical Review A 101(6), 062331 (2020)
10. A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu, arXiv preprint arXiv:1706.06083 (2017)
11. R. Salakhutdinov, G. Hinton, in Artificial Intelligence and Statistics (PMLR, 2007), pp. 412–419
12. C. Blank, D.K. Park, J.K.K. Rhee, F. Petruccione, npj Quantum Information 6(1), 1 (2020)
13. E. Grant, M. Benedetti, S. Cao, A. Hallam, J. Lockhart, V. Stojevic, A.G. Green, S. Severini, npj Quantum Information 4(1), 1 (2018)
14. A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster, J.I. Latorre, Quantum 4, 226 (2020)
15. M. Schuld, R. Sweke, J.J. Meyer, Physical Review A 103(3), 032430 (2021)
16. C. Zoufal, A. Lucchi, S. Woerner, npj Quantum Information 5(1), 1 (2019)
17. A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J.M. Chow, J.M. Gambetta, Nature 549(7671), 242 (2017)
18. T. Miyato, S.i. Maeda, M. Koyama, S. Ishii, IEEE Transactions on Pattern Analysis and Machine Intelligence 41(8), 1979 (2018)
19. A. Kurakin, I. Goodfellow, S. Bengio, arXiv preprint arXiv:1611.01236 (2016)
20. J.R. McClean, M.E. Kimchi-Schwartz, J. Carter, W.A. De Jong, Physical Review A 95(4), 042308 (2017)
21. G.E. Crooks, arXiv preprint arXiv:1905.13311 (2019)
22. M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, N. Killoran, Physical Review A 99(3), 032331 (2019)
23. K. Mitarai, M. Negoro, M. Kitagawa, K. Fujii, Physical Review A 98(3), 032309 (2018)
24. V. Bergholm, J. Izaac, M. Schuld, C. Gogolin, M.S. Alam, S. Ahmed, J.M. Arrazola, C. Blank, A. Delgado, S. Jahangiri, et al., arXiv preprint arXiv:1811.04968 (2018)
25. M.C. Mukkamala, M. Hein, in International Conference on Machine Learning (PMLR, 2017), pp. 2545–2553
26. J. Guan, W. Fang, M. Ying, CoRR (2020)