
Detecting the steerability bounds of the generalized Werner states via a BackPropagation neural network

Jun Zhang [email protected] College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China    Kan He [email protected] College of Mathematics, Taiyuan University of Technology, Taiyuan, 030024, China College of Information and Computer Science, Taiyuan University of Technology, Taiyuan, 030024, China College of software, Taiyuan University of Technology, Taiyuan, 030024, China    Ying Zhang College of Information and Computer Science, Taiyuan University of Technology, Taiyuan, 030024, China    Yu-yang Hao College of software, Taiyuan University of Technology, Taiyuan, 030024, China    Jin-chuan Hou College of Mathematics, Taiyuan University of Technology, Taiyuan, 030024, China    Fang-Peng Lan College of Information and Computer Science, Taiyuan University of Technology, Taiyuan, 030024, China    Bao-Ning Niu [email protected] College of Information and Computer Science, Taiyuan University of Technology, Taiyuan, 030024, China
Abstract

We use an error BackPropagation (BP) neural network to determine whether an arbitrary two-qubit quantum state is steerable and to optimize the steerability bounds of the generalized Werner states. The results show that, no matter which features of the quantum states we choose, the BP neural network can be used to construct several models that realize higher-performance quantum steering classifiers than the support vector machine (SVM). In addition, we predict the steerability bounds of the generalized Werner states using the classifiers newly constructed by the BP neural network; the predicted steerability bounds are closer to the theoretical bounds. In particular, we obtain high-performance classifiers that use only partial information of the quantum states, for which measurements in only three fixed directions are needed.

pacs:
03.67.Mn, 03.65.Ud, 03.67.-a

I Introduction

In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics on the basis of the EPR paradox in their famous paper EPR . The EPR argument led to long-lasting discussions. Schrödinger introduced the concept of quantum steering Schrodinger in order to formalize the “spooky action at a distance” of the EPR paper. In a quantum steering scenario, Alice can steer Bob’s quantum state by choosing a proper local measurement. This work attracted little attention at first; it was not until 2007, when Wiseman, Jones and Doherty proposed a precise definition and systematic criteria Wiseman , that steering began to receive wide attention. In the modern view, steering steer is the concept incompatible with a local hidden state (LHS) model. It is a quantum correlation intermediate between entanglement entangle1 ; entangle2 and Bell nonlocality nonlocal . Steering has now been applied to various quantum information processing tasks Wiseman ; app2 ; app3 , such as one-sided device-independent quantum key distribution Branciard ; qkd2 ; qkd3 ; qkd4 ; qkd5 ; qkd6 , channel discrimination Piani ; channel , randomness certification Passaro ; random1 ; random2 and teleamplification He .

The set of EPR steerable states is a proper subset of the entangled states: an entangled state is not necessarily steerable, but a steerable state must be entangled. Thus, it is important to detect the steerability of a given state shared by Alice and Bob Wiseman ; app2 ; app3 ; Piani ; 13 ; 14 ; 15 ; 16 ; 17 ; 18 ; 19 ; 20 ; 22 ; 21 ; 30 ; 23 ; 24 . So far, various criteria and inequalities for steering have been proposed 13 ; 14 ; 15 ; 16 ; 17 ; 18 ; 19 ; 20 ; 22 ; 24 , but it is often difficult to determine whether an arbitrary unknown two-qubit state is steerable using these criteria and inequalities. Fortunately, steerability can be computed by a semidefinite program (SDP) SDP , in which one must find Alice’s optimal measurements for a given state shared by Alice and Bob in order to determine whether Alice can steer Bob’s state. This requires many measurements, however, and the computation becomes very hard as the number of Alice’s measurements increases steer . The situation gets worse when dealing with the steerability of a rapidly generated series of different states.

Machine learning can quickly make predictions on new data with reasonable accuracy by learning from a large amount of existing data. Recently, machine learning has been applied in quantum physics, for instance to entanglement engtanglement , nonlocality nonlocality1 ; nonlocality2 , phase-transition identification phase1 ; phase2 , quantum state tomography Tomolography , Markovianity Markovianity and steering 32 ; 30 . Compared with the SDP method, machine learning requires far fewer resources and can quickly determine whether a quantum state is steerable. Ren and Chen proposed a machine learning method based on the support vector machine (SVM) to detect the steerability of an arbitrary two-qubit state, and demonstrated the validity and efficiency of the resulting steering classifiers 30 . As is well known, the SVM maps samples from the original space to a higher-dimensional feature space through a kernel function, so that the samples become linearly separable in that feature space and the global optimal solution can be found. However, when predicting the steerability bounds of the generalized Werner states, the quantum steering classifiers trained by the SVM misjudge some steerable states as unsteerable, and vice versa. The main reason is that the data generated by SDP contain errors: since only a finite number of measurements is used, an SDP that fails to detect steering for a given state does not prove that the state is unsteerable; if we try more measurements and more measurement settings, the SDP may conclude that the state is in fact steerable.

Thus, we explore a machine learning method based on the BackPropagation (BP) neural network to reduce the misjudgment rate, even though the erroneous data generated by SDP cannot be avoided. The BP neural network has the advantage that its parameters are easily adjusted, and it performs better when processing large amounts of data with continuous features. Hence, no matter which features of the quantum states are used, the BP neural network realizes higher-performance quantum steering classifiers than the SVM method, i.e., it achieves higher classification accuracy. In addition, we use the BP neural network to construct several new classifiers to optimize the steerability bounds of the generalized Werner states. Although the steerability bounds of the generalized Werner states obtained by the SVM are better than those obtained by SDP, they are not very close to the theoretical bounds; the bounds obtained via the BP neural network are closer to the theoretical bounds than those of the SVM approach.

This article is organized as follows: In Sec. II, we introduce the basic concepts, principles and algorithms used in this article. In Sec. III, we describe the collection of our datasets in detail, present several classifiers constructed by the BP neural network based on four feature sets, and compare the performance of these four classes of classifiers with the classifiers trained by the SVM method. We also obtain better steerability bounds for the generalized Werner states. Finally, we summarize our results in Sec. IV.

II Preliminaries

II.1 Quantum steering

Firstly, we briefly introduce the concept of quantum steering. Suppose that Alice and Bob share an unknown quantum state $\rho$, and Alice performs the quantum measurements $\{M_{x}=\{M_{a|x}\}\}$ on her subsystem, with the corresponding measurement outcomes denoted by $a$. According to quantum theory, for the chosen measurement $M_{a|x}$ with outcome $a$, the assemblage of unnormalized conditional states of Bob's subsystem $\rho_{B}$ is $\{p(a|x),\rho_{a|x}\}$, where

$\rho_{a|x}=\mathrm{tr}_{A}[(M_{a|x}\otimes I^{B})\rho]$,  (1)

and the probability distribution

$p(a|x)=\mathrm{tr}[(M_{a|x}\otimes I^{B})\rho]$.  (2)
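As a concrete illustration, the following is a minimal Python sketch of Eqs. (1) and (2) for a two-qubit state. It assumes, purely for illustration, that Alice measures the spin projectors along a unit Bloch vector $n$ with outcomes $a\in\{0,1\}$; the function name and conventions are our own.

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def assemblage_element(rho, n, a):
    """Unnormalized rho_{a|x} of Eq. (1); p(a|x) of Eq. (2) is its trace."""
    # Projector onto outcome a of the spin measurement along unit vector n.
    M = (np.eye(2) + (-1) ** a * sum(ni * si for ni, si in zip(n, PAULI))) / 2
    X = (np.kron(M, np.eye(2)) @ rho).reshape(2, 2, 2, 2)
    sigma = np.einsum('ikil->kl', X)  # partial trace over Alice's qubit
    return sigma, np.real(np.trace(sigma))
```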

Quantum steering describes the following scenario: Bob has full control of his measurements and can access the conditional states $\rho_{a|x}$ without any characterization of Alice's measurements. Namely, Bob performs tomography to reconstruct the conditional assemblage $\{\rho_{a|x}\}$, and none of the results depend on any particular information about how Alice's measurements work. In other words, Alice can steer Bob's local state by performing local measurements on the particle she owns and communicating classically. Using the LHS model, quantum steering can be defined as the impossibility of the remotely generated ensembles being produced by an LHS model. That is, suppose that a source sends a classical message $\lambda$ to Alice and a corresponding state $\sigma_{\lambda}$ to Bob. If the measurement applied by Alice is $x$, the classical variable $\lambda$ determines the probability $p(a|x,\lambda)$ of her obtaining the outcome $a$. The probability distribution of the classical message $\lambda$ is $p(\lambda)$; since Bob cannot access the classical variable $\lambda$, the final assemblage Bob observes is composed of the elements

$\rho_{a|x}=\int\mathrm{d}\lambda\, p(\lambda)p(a|x,\lambda)\sigma_{\lambda}$.  (3)

If the conditional assemblages $\rho_{a|x}$ arising from the quantum state $\rho$ can be generated from an LHS model, then Alice cannot steer Bob's state; otherwise, the quantum state $\rho$ is steerable from Alice to Bob. As an example, the generalized Werner state has been identified as a simple family of one-way steerable two-qubit states 31 . It is given by

$\rho(\alpha,\chi)=\alpha|\psi_{\chi}\rangle\langle\psi_{\chi}|+(1-\alpha)\frac{I^{A}}{2}\otimes\rho^{B}$,  (4)

where $|\psi_{\chi}\rangle=\cos\chi|00\rangle+\sin\chi|11\rangle$ and $\rho^{B}=\mathrm{tr}_{A}|\psi_{\chi}\rangle\langle\psi_{\chi}|$, with $0\leqslant\alpha\leqslant 1$ and $0<\chi\leqslant\frac{\pi}{4}$. It has been proved that $\rho(\alpha,\chi)$ is unsteerable from Alice to Bob if and only if the inequality $\cos^{2}2\chi\geqslant\frac{2\alpha-1}{(2-\alpha)\alpha^{3}}$ holds; in part of the parameter range the state $\rho(\alpha,\chi)$ is therefore one-way steerable. However, it is difficult to efficiently determine whether an arbitrary quantum state is steerable.
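For illustration, here is a minimal numpy sketch of Eq. (4) together with the analytic criterion just quoted. It assumes $0<\alpha\leqslant 1$ (to avoid division by zero in the criterion); the function names are our own conventions.

```python
import numpy as np

def werner_state(alpha, chi):
    """rho(alpha, chi) of Eq. (4)."""
    psi = np.cos(chi) * np.kron([1, 0], [1, 0]) \
        + np.sin(chi) * np.kron([0, 1], [0, 1])
    proj = np.outer(psi, psi)  # |psi_chi><psi_chi|
    rho_B = proj.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # tr_A
    return alpha * proj + (1 - alpha) * np.kron(np.eye(2) / 2, rho_B)

def unsteerable_A_to_B(alpha, chi):
    """Analytic unsteerability criterion quoted above (Ref. 31); alpha > 0."""
    return np.cos(2 * chi) ** 2 >= (2 * alpha - 1) / ((2 - alpha) * alpha ** 3)
```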

Subsequently, a numerical quantum steering criterion was given in terms of the SDP techniques of L. Vandenberghe and S. Boyd SDP , as follows.

Suppose that Alice performs $m$ measurements with $w$ outcomes each, i.e., $x=0,\ldots,m-1$ and $a=0,\ldots,w-1$, and let $\lambda^{\prime}$ be a function from $\{0,\ldots,m-1\}$ to $\{0,\ldots,w-1\}$. We can identify every $\lambda^{\prime}$ with a string of outcomes $\lambda^{\prime}=(a_{x=0},a_{x=1},\ldots,a_{x=m-1})$; obviously, there are $d=w^{m}$ such strings. Define the deterministic probability distributions $D(a|x,\lambda^{\prime})=\delta_{a,\lambda^{\prime}(x)}$; hence there are $d$ such distributions. Then Eq. (3) can be rewritten as

$\rho_{a|x}=\sum_{\lambda^{\prime}=1}^{d}D(a|x,\lambda^{\prime})\rho_{\lambda^{\prime}}$,  (5)

where $p(a|x,\lambda)=\sum_{\lambda^{\prime}=1}^{d}p(\lambda^{\prime}|\lambda)D(a|x,\lambda^{\prime})$, and $\rho_{\lambda^{\prime}}\equiv\int \mathrm{d}\lambda\, p(\lambda)p(\lambda^{\prime}|\lambda)\sigma_{\lambda}$.

Now we write the SDP that determines whether Alice can steer Bob SDP . Given the assemblage $\{\rho_{a|x}\}$ and the deterministic probability distributions $\{D(a|x,\lambda^{\prime})\}_{\lambda^{\prime}}$, consider the minimum value of the objective function

$\min_{\{F_{a|x}\}}\mathrm{tr}\sum_{ax}F_{a|x}\rho_{a|x}$,  (6)

where the Hermitian matrices $\{F_{a|x}\}$ satisfy $\sum_{ax}F_{a|x}D(a|x,\lambda^{\prime})\geqslant 0~\forall\lambda^{\prime}$ and $\mathrm{tr}\sum_{ax\lambda^{\prime}}F_{a|x}D(a|x,\lambda^{\prime})=1$. If the minimum value of the objective function (6) is negative, then the quantum state $\rho$ is steerable from Alice to Bob; if it is nonnegative, then $\rho$ is unsteerable from Alice to Bob.
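To make the criterion concrete, the following is a minimal sketch of the SDP (6) in Python, assuming the cvxpy package is available. The input format (a dictionary of conditional states indexed by $(a,x)$, computed for example with the assemblage sketch after Eq. (2)) is our own convention; this is an illustration, not the authors' code.

```python
import itertools
import numpy as np
import cvxpy as cp

def steering_sdp(assemblage, m, w, dim=2):
    """Solve Eq. (6); a negative optimal value certifies steering A -> B."""
    # Deterministic strategies lambda' = (a_{x=0}, ..., a_{x=m-1}); d = w^m.
    strategies = list(itertools.product(range(w), repeat=m))
    F = {(a, x): cp.Variable((dim, dim), hermitian=True)
         for a in range(w) for x in range(m)}
    # Objective: tr sum_{a,x} F_{a|x} rho_{a|x}.
    obj = cp.real(sum(cp.trace(F[a, x] @ assemblage[a, x])
                      for a in range(w) for x in range(m)))
    # sum_{a,x} F_{a|x} D(a|x, lambda') = sum_x F_{lambda'(x)|x} >= 0.
    cons = [sum(F[lam[x], x] for x in range(m)) >> 0 for lam in strategies]
    # Normalization: tr sum_{a,x,lambda'} F_{a|x} D(a|x, lambda') = 1.
    total = sum(sum(F[lam[x], x] for x in range(m)) for lam in strategies)
    cons.append(cp.real(cp.trace(total)) == 1)
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return prob.value
```

Note that the number of positivity constraints grows as $d=w^{m}$, which is the practical reason the SDP becomes hard as the number of Alice's measurements increases.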

II.2 BP artificial neural network model

The BP neural network usually refers to a multi-layer feedforward network trained by the error BackPropagation algorithm, and it is one of the most widely used neural network models. The term BackPropagation and its general use in neural networks were announced in BP1 , and a modern overview is given in the textbook BP2 .

A BP neural network can learn and store a large number of input-output mappings without knowing the mathematical equations describing these mappings. A BP neural network model consists of an input layer, several hidden layers and an output layer. To minimize the accumulated error of the network, its learning rule uses gradient descent to continuously adjust the connection weights and thresholds (biases) of the network through back propagation.

For convenience of discussion, we show a typical BP neural network model with only 3 layers in Fig. 1. Given a training set $D=\{(\bm{x}_{1},\bm{y}_{1}),(\bm{x}_{2},\bm{y}_{2}),\ldots,(\bm{x}_{d},\bm{y}_{d})\}$, where $\bm{x}_{t}\in\mathbb{R}^{m},\bm{y}_{t}\in\mathbb{R}^{n}$, the BP neural network model has $m$ input-layer neurons, $n$ output-layer neurons and $h$ hidden-layer neurons. The thresholds of the $j$th neuron in the hidden layer and the $k$th neuron in the output layer are denoted by $\gamma_{j}$ and $\theta_{k}$ respectively; the connection weight between the $i$th neuron in the input layer and the $j$th neuron in the hidden layer is denoted by $w_{ij}$, and the connection weight between the $j$th neuron in the hidden layer and the $k$th neuron in the output layer is denoted by $v_{jk}$. All the weights and thresholds are initialized from the normal distribution.

Figure 1: 3-layer BP neural network model.

By calculation, the input received by the $j$th neuron in the hidden layer is $\alpha_{j}=\sum_{i=1}^{m}w_{ij}x_{i}$, and the input received by the $k$th neuron in the output layer is $\beta_{k}=\sum_{j=1}^{h}v_{jk}z_{j}$. Suppose the activation functions from the input layer to the hidden layer and from the hidden layer to the output layer are $f$ and $g$ respectively, and the output of the network for the training example $(\bm{x}_{t},\bm{y}_{t})$ is $\hat{\bm{y}}_{t}=(\hat{y}_{1}^{t},\hat{y}_{2}^{t},\ldots,\hat{y}_{n}^{t})$. Then the output of the $j$th neuron in the hidden layer is $z_{j}=f(\alpha_{j}-\gamma_{j})$, and the output of the $k$th neuron in the output layer is $\hat{y}_{k}^{t}=g(\beta_{k}-\theta_{k})$. In this paper, every activation function is chosen to be the ReLU function

$\mathrm{ReLU}(x)=\begin{cases}x,&\text{if }x>0,\\ 0,&\text{if }x\leqslant 0.\end{cases}$  (7)

So the mean square error of the network on $(\bm{x}_{t},\bm{y}_{t})$ is

$E_{t}=\frac{1}{2}\sum_{k=1}^{n}(\hat{y}_{k}^{t}-y_{k}^{t})^{2}$,  (9)

and the accumulated error (cost function) of the network on the training set $D$ is

$E=\frac{1}{d}\sum_{t=1}^{d}E_{t}$.  (11)

Note that the goal of the BP algorithm is to minimize the function (11).

There are $(m+n+1)h+n$ parameters to be determined in the network of Fig. 1: $mh$ connection weights between the input layer and the hidden layer, $hn$ connection weights between the hidden layer and the output layer, $h$ thresholds of the hidden-layer neurons, and $n$ thresholds of the output-layer neurons. The BP algorithm is an iterative learning algorithm in which the parameters are updated in each round of iteration; the update formula for an arbitrary parameter $\omega$ is

$\omega\leftarrow\omega+\Delta\omega$.  (13)

Based on the gradient descent strategy, the BP algorithm adjusts every parameter in the direction of the negative gradient of the target in order to minimize the cost function $E$. Given a learning rate $\eta$, each parameter $\omega$ is adjusted along the gradient descent direction at every step, that is,

$\Delta\omega=-\eta\frac{\partial E}{\partial\omega}$.  (15)

After repeated iterations, the error $E$ can be minimized.

In summary, we show the workflow of the BP algorithm in Table 1. For each training example, the BP algorithm performs the following operations. First, the algorithm provides the input example to the input-layer neurons and propagates the signal forward layer by layer until the output layer generates a result. Then the network calculates the error of the output layer (lines 4-5), propagates the error back to the hidden-layer neurons (line 6), and adjusts the connection weights and thresholds according to the error of the hidden-layer neurons (line 7). Finally, this iterative process loops until the cost function reaches its minimum.

Input: Training set $D=\{(\bm{x}_{t},\bm{y}_{t})\}_{t=1}^{d}$;
Learning rate $\eta$.
Process:
1: Randomly initialize all connection weights and thresholds in the network within the range of (0,1)
2: repeat
3:  for all $(\bm{x}_{t},\bm{y}_{t})\in D$ do
4:   Calculate the current output $\hat{\bm{y}}_{t}$;
5:   Calculate the gradient term of the neurons in the output layer;
6:   Calculate the gradient term of the neurons in the hidden layer;
7:   Update the connection weights $w_{ij},v_{jk}$ and thresholds $\gamma_{j},\theta_{k}$
8:  end for
9: until the cost function reaches the minimum
Output: Multi-layer feedforward neural network determined by its connection weights and thresholds.
Table 1: Error BackPropagation Algorithm
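As an illustration of Table 1, the following is a minimal numpy sketch of the 3-layer network of Fig. 1 with ReLU activations. For brevity it performs full-batch gradient descent on the accumulated error of Eq. (11) rather than the per-example loop of lines 3-8; the layer width, epoch count and normal initialization are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def train_bp(X, Y, h=50, eta=0.001, epochs=1000):
    """X: (d, m) inputs, Y: (d, n) targets. Returns W, V, gamma, theta."""
    d, m = X.shape
    n = Y.shape[1]
    W = rng.normal(size=(m, h))   # weights w_ij, input -> hidden
    V = rng.normal(size=(h, n))   # weights v_jk, hidden -> output
    gamma = rng.normal(size=h)    # hidden-layer thresholds gamma_j
    theta = rng.normal(size=n)    # output-layer thresholds theta_k
    for _ in range(epochs):
        # forward pass: z_j = f(alpha_j - gamma_j), yhat_k = g(beta_k - theta_k)
        Z = relu(X @ W - gamma)
        Yhat = relu(Z @ V - theta)
        # backward pass: gradients of the accumulated error E of Eq. (11)
        G_out = (Yhat - Y) * (Yhat > 0) / d
        G_hid = (G_out @ V.T) * (Z > 0)
        # parameter updates of Eqs. (13) and (15): omega <- omega - eta dE/domega
        V -= eta * (Z.T @ G_out)
        theta += eta * G_out.sum(axis=0)
        W -= eta * (X.T @ G_hid)
        gamma += eta * G_hid.sum(axis=0)
    return W, V, gamma, theta
```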

III Detecting the steerability by BP neural networks

The SVM maps samples from the original space to a higher-dimensional feature space through a kernel function, so that the samples become linearly separable in that feature space and the global optimal solution can be found. However, the data generated by SDP inevitably contain errors, so the models trained by SVM inherit these errors. Because the parameters of a neural network are easy to adjust, and because it performs better when processing large amounts of data with continuous features, we use the BP neural network to detect quantum steerability.

III.1 Datasets

In order to train the quantum steering classifiers, we need to collect data on quantum states and select features for the data. Inspired by Ref. 30 , we generate two random $4\times 4$ real matrices $A$ and $B$. We then use the two matrices to form a Hermitian matrix $H\equiv(A+\mathrm{i}B)(A+\mathrm{i}B)^{\dagger}$, where $\dagger$ denotes the conjugate transpose. Finally, the density matrix is obtained as $\rho\equiv H/\mathrm{tr}(H)$. We use SDP to determine whether each sample state is steerable or not, and label it "$+1$" or "$-1$", respectively. In this paper, we use the datasets collected in 30 , for which we select the features in the following four cases.
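A minimal sketch of this sampling step is given below; drawing the entries of $A$ and $B$ from a standard normal distribution is our assumption, since the distribution is not specified in the text.

```python
import numpy as np

rng = np.random.default_rng()

def random_density_matrix(dim=4):
    """rho = H / tr(H) with H = (A + iB)(A + iB)^dagger."""
    A = rng.normal(size=(dim, dim))
    B = rng.normal(size=(dim, dim))
    C = A + 1j * B
    H = C @ C.conj().T          # Hermitian and positive semidefinite
    return H / np.trace(H)      # unit-trace density matrix
```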

F1: Every feature vector in F1 is composed of 15 features, namely $\rho_{ii},i\in\{1,2,3\}$, and the real and imaginary parts of $\rho_{ij},i>j$.

F2: Every feature vector in F2 is composed of 9 features, namely $\mathrm{tr}[(\sigma_{k}\otimes\sigma_{l})\rho],\ k,l\in\{1,2,3\}$.

An arbitrary two-qubit density operator $\rho$ in the Bloch representation can be written as

$\rho=\frac{1}{4}\Big(I+\sum_{i=1}^{3}x_{i}\sigma_{i}\otimes I^{B}+\sum_{j=1}^{3}y_{j}I^{A}\otimes\sigma_{j}+\sum_{k,l=1}^{3}t_{kl}\sigma_{k}\otimes\sigma_{l}\Big)$,  (16)

where the coefficients $t_{kl}=\mathrm{tr}[(\sigma_{k}\otimes\sigma_{l})\rho]$ constitute the correlation matrix, which represents a certain quantum correlation. The partial information is extracted by computing $\mathrm{tr}[(\sigma_{k}\otimes\sigma_{l})\rho]$ as features. If a high-performance machine learning model can be trained from these 9 features alone, we can judge whether an arbitrary unknown two-qubit state is steerable by measuring in only three fixed directions $x,y,z$.
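A minimal sketch of this feature extraction (the function name is our own convention):

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def f2_features(rho):
    """The nine correlation coefficients t_kl of Eq. (16)."""
    return np.array([np.real(np.trace(np.kron(sk, sl) @ rho))
                     for sk in PAULI for sl in PAULI])
```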

F3: Every feature vector in F3 is composed of 9 features, namely $\mathrm{tr}[(\sigma_{k}\otimes\sigma_{l})\rho^{\prime}],\ k,l\in\{1,2,3\}$, where $\rho^{\prime}\equiv(I^{A}\otimes\sqrt{\rho^{B}})\rho(I^{A}\otimes\sqrt{\rho^{B}})$.

To further explore a high-performance machine learning model with partial information, we convert the state $\rho$ into a canonical form $\rho^{\prime}$, which preserves the steerability of $\rho$. As proved in Ref. 31 , the map is given by

$\rho\to\rho^{\prime}\equiv(I^{A}\otimes\sqrt{\rho^{B}})\rho(I^{A}\otimes\sqrt{\rho^{B}})$,  (17)

where $\rho^{B}=\mathrm{tr}_{A}\rho$.

Similarly, we extract the coefficients of the correlation terms of $\rho^{\prime}$, $t^{\prime}_{kl}$, to form the feature vector. As in the case of F2, we only need to measure an arbitrary two-qubit state in the three fixed directions $x,y,z$ to predict its steerability.
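A minimal sketch of the map (17), assuming scipy is available for the matrix square root; the final renormalization to unit trace is our own addition, since Eq. (17) leaves the filtered operator unnormalized.

```python
import numpy as np
from scipy.linalg import sqrtm

def canonical_form(rho):
    """The local filter of Eq. (17) applied to a two-qubit state rho."""
    rho_B = rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)  # tr_A rho
    K = np.kron(np.eye(2), sqrtm(rho_B))
    rho_p = K @ rho @ K
    return rho_p / np.trace(rho_p)  # renormalize (our assumption)
```

The F3 feature vector is then f2_features(canonical_form(rho)), with f2_features as in the sketch above.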

F4: Every feature vector in F4 is composed of 6 features, namely the features of F3 except for the terms $\{\sigma_{2}\otimes\sigma_{1},\sigma_{3}\otimes\sigma_{1},\sigma_{3}\otimes\sigma_{2}\}$.

To explore a high-performance machine learning model of steering with even less information, we drop, according to symmetry, the coefficients of the correlation terms $\{\sigma_{2}\otimes\sigma_{1},\sigma_{3}\otimes\sigma_{1},\sigma_{3}\otimes\sigma_{2}\}$.

According to the numbers of measurements $m=2,3,\ldots,8$, the whole dataset can be divided into 28 datasets after selecting the feature vectors. For each $m$, there are at least 5000 examples with the label "$+1$" and 5000 examples with the label "$-1$" in the corresponding dataset. From each dataset, we randomly select 1000 positive examples and 1000 negative examples as the test set, and use the rest as the training set. We employ the BP neural network to train a model for every dataset.

III.2 Training and testing

The neural networks used in this article all have 2 hidden layers, and every hidden layer has 200-1000 neurons. We choose ReLU as the activation function and Adam as the gradient descent method. The batch size of each model is 200 and the learning rate equals 0.001.
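For concreteness, a network matching this description might be set up as follows. The framework (PyTorch), the particular hidden width of 500, and the use of a two-unit output with a cross-entropy loss are our own assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

def make_classifier(n_features, hidden=500):
    # two hidden layers with 200-1000 neurons each; 500 is an example choice
    return nn.Sequential(
        nn.Linear(n_features, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 2),  # two classes: steerable (+1) / unsteerable (-1)
    )

model = make_classifier(n_features=9)  # e.g. the nine F2 features
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()
# training would iterate over mini-batches of size 200, as stated above
```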

After a neural network model is trained, we test its performance by creating a set of feature vectors of new quantum states, different from the dataset used for training. For a test dataset with known labels, the classification accuracy of the learned model on the test set is the percentage of examples that are correctly predicted. Figs. 2, 3, 4 and 5 show the classification accuracies of the machine learning models with F1, F2, F3 and F4 features, respectively. In the four figures, we select each test set either as a subset of the training set or as a new dataset different from the training set in order to demonstrate the generalization ability, and we compare the three kinds of accuracies of the classifiers trained by the BP neural network with those of the classifiers trained by the SVM 30 .

Figure 2: Classification accuracy of machine learning with F1 features. The first and second columns (diagonal-filled green and diagonal-filled blue) depict the cross-validation accuracy of the classifiers trained by SVM and the BP neural network, respectively. The third and fourth columns (grid-filled green and grid-filled blue) depict the classification accuracy on random states of the classifiers trained by SVM and the BP neural network, respectively. The fifth and sixth columns (solid green and solid blue) depict the classification accuracy on the Werner state of the classifiers trained by SVM and the BP neural network, respectively.
Figure 3: Classification accuracy of machine learning with F2 features. The first and second columns (diagonal-filled green and diagonal-filled blue) depict the cross-validation accuracy of the classifiers trained by SVM and the BP neural network, respectively. The third and fourth columns (grid-filled green and grid-filled blue) depict the classification accuracy on random states of the classifiers trained by SVM and the BP neural network, respectively. The fifth and sixth columns (solid green and solid blue) depict the classification accuracy on the Werner state of the classifiers trained by SVM and the BP neural network, respectively.
Figure 4: Classification accuracy of machine learning with F3 features. The first and second columns (diagonal-filled green and diagonal-filled blue) depict the cross-validation accuracy of the classifiers trained by SVM and the BP neural network, respectively. The third and fourth columns (grid-filled green and grid-filled blue) depict the classification accuracy on random states of the classifiers trained by SVM and the BP neural network, respectively. The fifth and sixth columns (solid green and solid blue) depict the classification accuracy on the Werner state of the classifiers trained by SVM and the BP neural network, respectively.
Figure 5: Classification accuracy of machine learning with F4 features. The first and second columns (diagonal-filled green and diagonal-filled blue) depict the cross-validation accuracy of the classifiers trained by SVM and the BP neural network, respectively. The third and fourth columns (grid-filled green and grid-filled blue) depict the classification accuracy on random states of the classifiers trained by SVM and the BP neural network, respectively. The fifth and sixth columns (solid green and solid blue) depict the classification accuracy on the Werner state of the classifiers trained by SVM and the BP neural network, respectively.

Firstly, in order to verify whether the models are well trained, we select a subset of the training set as the test set. Inspired by Ref. 30 , in our experiments we adopt fourfold cross validation: we randomly divide the entire training set into four equal parts, take each part in turn as the test set, and use the rest as the training set. Based on the four trained classifiers, we obtain the cross-validation accuracy, which is the average accuracy of the four classifiers. This accuracy information is used to generate the final classifier. In Figs. 2-5, the final cross-validation accuracies of the classifiers trained by SVM are illustrated by the first (diagonal-filled green) columns, and those of the classifiers trained by the BP neural network by the second (diagonal-filled blue) columns.

Secondly, the classification accuracies of the models trained by SVM and the BP neural network on the random test set, formed by the reserved 2000 examples, are illustrated by the third (grid-filled green) and fourth (grid-filled blue) columns, respectively. All accuracies are higher than 0.9, which clearly shows that the models are well trained.

Finally, the classification accuracies of the models trained by SVM and the BP neural network on the test set generated from the Werner state ($\chi=\frac{\pi}{4}$) are illustrated by the solid green and solid blue columns, respectively, in Figs. 2-5.

As discussed in Sec. II.1, the generalized Werner state is unsteerable from Alice to Bob if and only if the following condition holds,

$\cos^{2}2\chi\geqslant\frac{2\alpha-1}{(2-\alpha)\alpha^{3}}$.  (18)

Obviously, the bound on the parameter $\alpha$ beyond which Alice can steer Bob's state is determined by Eq. (18). We now construct generalized Werner states based on uniform distributions of $\alpha$ and $\chi$. For each $\chi\in\{\frac{\pi}{4},\frac{\pi}{6},\frac{\pi}{8},\frac{\pi}{12}\}$, we generate 10000 states and use them to create a dataset for every feature set (F1, F2, F3, F4), where each example has a feature vector F$i$ ($i=1,2,3,4$) and a label ($+1$ for steerable and $-1$ for unsteerable) determined by Eq. (18). Notice that these states have completely correct labels.
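A sketch of this labeling step for fixed $\chi$, with Eq. (18) read as the unsteerability condition discussed in Sec. II.1; sampling $\alpha$ in $(0,1]$ (excluding zero to avoid division by zero) is our own convention, and the feature vectors themselves would be computed from werner_state as in the sketch of Eq. (4).

```python
import numpy as np

rng = np.random.default_rng()

def werner_labels(chi, n=10000):
    """Uniformly sampled alpha with labels from Eq. (18):
    -1 (unsteerable) when the inequality holds, +1 (steerable) otherwise."""
    alphas = rng.uniform(1e-6, 1.0, size=n)
    bound = (2 * alphas - 1) / ((2 - alphas) * alphas ** 3)
    labels = np.where(np.cos(2 * chi) ** 2 >= bound, -1, 1)
    return alphas, labels
```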

Interestingly, for all feature sets, the classification accuracies on the Werner states of the models trained by the BP neural network are all higher than those of the models trained by SVM. Some fourfold cross-validation accuracies and accuracies on the random datasets of the models trained by the BP neural network have increased, and the others have decreased only slightly. This shows that our models trained by the BP neural network have good generalization abilities. The reason is that some states labeled unsteerable by SDP may be misjudged: when the number of measurement settings reaches 100 without Eq. (6) returning a negative objective value, the SDP is stopped and the quantum state is marked as unsteerable; in other words, if the number of measurement settings were increased further, a negative value might be returned and the state would be labeled as steerable.

III.3 Optimizing the steerability bounds

In this section, we use the quantum steering classifiers trained by the BP neural network to predict the steerability bounds of the generalized Werner states, and compare them with the bounds computed by SDP and predicted by the SVM classifiers 30 .

As shown in Figs. 6-9, the four figures correspond to the classifiers trained with features F1, F2, F3 and F4, respectively. In Fig. 6, the four subfigures correspond to $\chi=\frac{\pi}{4},\frac{\pi}{6},\frac{\pi}{8},\frac{\pi}{12}$, respectively. In each subfigure, the blue-circle lines stand for the results predicted by the classifiers trained by SVM for $m=2,\ldots,8$; the black-plus lines are the results predicted by the classifiers trained by the BP neural network for $m=2,\ldots,8$; the red-star lines are the results computed by SDP with $m=2,\ldots,8$; and the yellow-cross lines are the theoretical steerability bounds from Alice to Bob determined by Eq. (18). Notice that the steering bounds predicted by SVM (blue-circle lines) and the BP neural network (black-plus lines) are all lower than the bounds calculated by SDP (red-star lines). In particular, when $\chi=\frac{\pi}{4}$, where the generalized Werner state reduces to the Werner state, the bounds predicted by the classifiers are always higher than the theoretical bounds (yellow-cross lines), and the bounds predicted by the BP neural network are lower than those predicted by SVM and SDP, i.e., closer to the theoretical bounds. In addition, Figs. 7-9 show that the bounds predicted by the BP neural network are clearly very close to the theoretical bounds.

Figure 6: The predictions of steerability for generalized Werner states by learned classifiers and SDP. The blue-circle line is the result predicted by the classifiers trained by SVM with F1 features, the black-plus line is the result predicted by the classifiers trained by the BP neural network with F1 features, the red-star line is the result computed by SDP, and the yellow-cross line is the theoretical steerability bound from Alice to Bob.
Figure 7: The predictions of steerability for generalized Werner states by learned classifiers and SDP. The blue-circle line is the result predicted by the classifiers trained by SVM with F2 features, the black-plus line is the result predicted by the classifiers trained by the BP neural network with F2 features, the red-star line is the result computed by SDP, and the yellow-cross line is the theoretical steerability bound from Alice to Bob.
Figure 8: The predictions of steerability for generalized Werner states by learned classifiers and SDP. The blue-circle line is the result predicted by the classifiers trained by SVM with F3 features, the black-plus line is the result predicted by the classifiers trained by the BP neural network with F3 features, the red-star line is the result computed by SDP, and the yellow-cross line is the theoretical steerability bound from Alice to Bob.
Figure 9: The predictions of steerability for generalized Werner states by learned classifiers and SDP. The blue-circle line is the result predicted by the classifiers trained by SVM with F4 features, the black-plus line is the result predicted by the classifiers trained by the BP neural network with F4 features, the red-star line is the result computed by SDP, and the yellow-cross line is the theoretical steerability bound from Alice to Bob.

When predicting the bounds of the generalized Werner states, the bounds predicted by our models are closer to the theoretical bounds than those predicted by SVM, although a few of the predicted bounds lie below the theoretical bounds. This shows that the learning classifiers may be better than SDP, but they still have the possibility of predicting steerability bounds below the theoretical bounds, which almost never happens for SDP. The reason is that the errors of SDP occur mainly in the generation of the datasets: the erroneous datasets lead the machine learning models to predict positives as negatives, and negatives as positives. As $\chi$ decreases, the prediction errors of the learning classifiers and of SDP all increase, because the prediction of the marginal states becomes more and more difficult. As shown in Figs. 6-9, the bounds predicted by the classifiers trained by the BP neural network remain better than those obtained with the SVM method.

The above results clearly show that the BP neural network is effective for steerability detection. Compared with the SVM method of Ref. 30 , our method not only avoids any significant reduction of the classifier accuracies, but also brings the predictions of the bounds of the generalized Werner states closer to the theoretical bounds. Moreover, the time consumption of our method is roughly the same as that of the SVM method. Taking $m=8$ as an example, the learning classifiers trained by these two machine learning methods take about $10^{-2}$ seconds to predict an unknown state, while the SDP program takes about $10^{2}$ seconds. This also illustrates the time advantage of machine learning classifiers.

III.4 Comparing the classifiers trained with four features

From Fig. 3 to Fig. 5, we explore classifiers using partial information of a quantum state with the F2, F3 and F4 features. Since the accuracies in Fig. 3 are lower than the others, it is clear that F3 and F4 are both better than F2. Next, let us compare the performance of the classifiers trained with the features F$i$, $i=1,2,3,4$, on different datasets.

As we know, as the number of measurements $m$ increases, SDP can determine more accurately whether a quantum state is steerable or unsteerable. Thus, it is reasonable to expect that the steerability classifiers are better trained as $m$ increases. Next, we verify the validity of the classifiers trained on the datasets for $m=8$, and use these classifiers to test the datasets for different $m$. In Fig. 10, the blue-circle line, the orange-plus line, the green-star line and the red-cross line stand for the different features F1, F2, F3 and F4, respectively. The test accuracy grows rapidly with $m$, in general agreement with the theoretical prediction. However, the curves have inflection points at $m=6$, and there is a second inflection point for the steerability classifier with feature F2 at $m=4$; this phenomenon comes from imperfections in the training process. On the whole, although the range of accuracies is relatively large, their tendency is correct, and the steerability classifier with feature F3 presents the highest performance.

Figure 10: Accuracy of classification on random test data of $m=8$ with different classifiers.
Figure 11: Classification accuracy for generalized Werner states by each machine learning classifier with the different features: F1, F2, F3, and F4, respectively.

Fig. 11 shows the classification accuracy for the generalized Werner states ($\chi=\frac{\pi}{4},\frac{\pi}{6},\frac{\pi}{8},\frac{\pi}{12}$) of each machine learning classifier with the different features F1, F2, F3 and F4, respectively. The classification accuracy of the steerability classifier with feature F1, shown in the first subfigure, remains at a high level; as the parameter $\chi$ decreases, the accuracy decreases. In the second subfigure, for the steerability classifier with feature F2, the curves trend downward, which demonstrates that its ability to predict the bounds of the generalized Werner states is poor. The third and fourth subfigures show similar trends: the more measurements, the higher the classification accuracy, and the accuracy decreases with decreasing $\chi$. That is, the steerability classifiers with features F3 and F4 present relatively higher performance in predicting the bounds of the generalized Werner states.

IV Conclusion

In this work, we have applied the BP neural network to construct several classifiers that identify the steerability of two-qubit quantum states and optimize the steerability bounds of the generalized Werner states. Our main purpose is to construct machine learning models with stronger generalization abilities. Firstly, we find that no matter which features (F1, F2, F3, F4) of the quantum states are used, the BP neural network can construct models that realize higher-performance quantum steering classifiers than the SVM approach. Secondly, we use the BP neural network to construct several new classifiers to predict the steerability bounds of the generalized Werner states; the predicted bounds are closer to the theoretical bounds, i.e., the method is very effective for testing or predicting the steerability of a large number of generalized Werner states. Finally, we construct high-performance classifiers using only partial information, for which measurements in only three fixed directions are needed, to validly detect the steerability of arbitrary states. In conclusion, the BP neural network can be very effective for identifying the steerability of a large number of arbitrary states in quantum information processing.

Acknowledgement.—This work was supported by the National Natural Science Foundation of China (Grant No. 11771011), the Natural Science Foundation of Shanxi Province, China (Grants No. 201801D221032 and No. 201801D121016) and the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (Grant No. 2019L0178).

References

  • (1) A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
  • (2) E. Schrödinger, Math. Proc. Camb. Phil. Soc. 31, 4 (1935).
  • (3) H. M. Wiseman, S. J. Jones, and A. C. Doherty, Phys. Rev. Lett. 98, 140402 (2007).
  • (4) D. Cavalcanti and P. Skrzypczyk, Rep. Prog. Phys. 80, 024001 (2017).
  • (5) O. Gühne and G. Tóth, Phys. Rep. 474, 1 (2009).
  • (6) R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Rev. Mod. Phys. 81, 865 (2009).
  • (7) N. Brunner, D. Cavalcanti, S. Pironio, V. Scarani, and S. Wehner, Rev. Mod. Phys. 86, 419 (2014).
  • (8) S. J. Jones, H. M. Wiseman, and A. C. Doherty, Phys. Rev. A 76, 052116 (2007).
  • (9) P. Skrzypczyk, M. Navascués, and D. Cavalcanti, Phys. Rev. Lett. 112, 180404 (2014).
  • (10) C. Branciard, E. G. Cavalcanti, S. P. Walborn, V. Scarani, and H. M. Wiseman, Phys. Rev. A 85, 010301(R) (2012).
  • (11) T. Gehring, V. Händchen, J. Duhme, F. Furrer, T. Franz, C. Pacher, R. F. Werner, and R. Schnabel, Nat. Commun. 6, 8795 (2015).
  • (12) N. Walk et al., Optica 3, 634 (2016).
  • (13) Y. Wang, W. S. Bao, H. W. Li, C. Zhou, and Y. Li, Phys. Rev. A 88, 052322 (2013).
  • (14) C. Zhou, P. Xu, W. S. Bao, Y. Wang, Y. Y. Zhang, M. S. Jiang, and H. W. Li, Opt. Express 25, 16971 (2017).
  • (15) E. Kaur, M. M. Wilde, and A. Winter, New J. Phys. 22, 023039 (2020).
  • (16) M. Piani and J. Watrous, Phys. Rev. Lett. 114, 060404 (2015).
  • (17) K. Sun, X. J. Ye, Y. Xiao, X. Y. Xu, Y. C. Wu, J. S. Xu, J. L. Chen, C. F. Li, and G. C. Guo, npj Quantum Inf. 4, 12 (2018).
  • (18) E. Passaro, D. Cavalcanti, P. Skrzypczyk, and A. Acín, New J. Phys. 17, 113010 (2015).
  • (19) P. Skrzypczyk and D. Cavalcanti, Phys. Rev. Lett. 120, 260401 (2018).
  • (20) B. Coyle, M. J. Hoban, and E. Kashefi, arXiv:1806.10565v2 [quant-ph]
  • (21) Q. He, L. Rosales-Zárate, G. Adesso, and M. D. Reid, Phys. Rev. Lett. 115, 180502 (2015).
  • (22) M. D. Reid, Phys. Rev. A 40, 913 (1989).
  • (23) M. D. Reid, P. D. Drummond, W. P. Bowen, E. G. Cavalcanti, P. K. Lam, H. A. Bachor, U. L. Andersen, and G. Leuchs, Rev. Mod. Phys. 81, 1727 (2009).
  • (24) E. G. Cavalcanti, S. J. Jones, H. M. Wiseman, and M. D. Reid, Phys. Rev. A 80, 032112 (2009).
  • (25) S. P. Walborn, A. Salles, R. M. Gomes, F. Toscano, and P. H. Souto Ribeiro, Phys. Rev. Lett. 106, 130402 (2011).
  • (26) J. Schneeloch, C. J. Broadbent, S. P. Walborn, E. G. Cavalcanti, and J. C. Howell, Phys. Rev. A 87, 062103 (2013).
  • (27) M. F. Pusey, Phys. Rev. A 88, 032313 (2013).
  • (28) T. Pramanik, M. Kaplan, and A. S. Majumdar, Phys. Rev. A 90, 050305(R) (2014).
  • (29) I. Kogias, P. Skrzypczyk, D. Cavalcanti, A. Acín, and G. Adesso, Phys. Rev. Lett. 115, 210401 (2015).
  • (30) E. G. Cavalcanti, C. J. Foster, M. Fuwa, and H. M. Wiseman, J. Opt. Soc. Am. B 32, A74 (2015).
  • (31) C. Ren, C. Chen, Phys. Rev. A 100, 022314 (2019).
  • (32) I. Kogias, A. R. Lee, S. Ragy, and G. Adesso, Phys. Rev. Lett. 114, 060403 (2015).
  • (33) H. Zhu, M. Hayashi, and L. Chen, Phys. Rev. Lett. 116, 070403 (2016).
  • (34) H. C. Nguyen and T. Vu, Europhys. Lett. 115, 10003 (2016).
  • (35) L. Vandenberghe and S. Boyd, SIAM Rev. 38, 49 (1996).
  • (36) S. Lu, S. Huang, K. Li, J. Li, J. Chen, D. Lu, Z. Ji, Y. Shen, D. Zhou, and B. Zeng, Phys. Rev. A 98, 012315 (2018).
  • (37) A. Canabarro, S. Brito, R. Chaves, Phys. Rev. Lett. 122, 200401 (2019).
  • (38) D. L. Deng, Phys. Rev. Lett. 120, 240402 (2018).
  • (39) K. Ch’ng, J. Carrasquilla, R. G. Melko, and E. Khatami, Phys. Rev. X 7, 031038 (2017).
  • (40) N. Yoshioka, Y. Akagi, and H. Katsura, Phys. Rev. B 97, 205110 (2018).
  • (41) M. Neugebauer, L. Fischer, A. Jäger, S. Czischek, S. Jochim, M. Weidemüller, and M. Gärttner, Phys. Rev. A 102, 042604 (2020).
  • (42) F. F. Fanchini, G. Karpat, D. Z. Rossatto, A. Norambuena, and R. Coto, arXiv:2009.03946v1 [quant-ph]
  • (43) Y. Q. Zhang, L. J. Yang, Q. L. He, and L. Chen Zhang, Quantum Inf. Process. 19, 263 (2020).
  • (44) J. Bowles, T. Vértesi, M. T. Quintino, and N. Brunner, Phys. Rev. A 93, 022121 (2016).
  • (45) D. E. Rumelhart, G. E. Hinton, and R. J. Williams, Nature 323, 533 (1986).
  • (46) I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (The MIT Press, Cambridge, USA, 2016).