
Quantum Kernel t-Distributed Stochastic Neighbor Embedding

Yoshiaki Kawase [email protected] Graduate School of Information Science and Technology, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
Graduate School of Engineering Science, Osaka University
1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
   Kosuke Mitarai [email protected] Graduate School of Engineering Science, Osaka University
1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
Center for Quantum Information and Quantum Biology, Osaka University 560-0043, Japan.
   Keisuke Fujii [email protected] Graduate School of Engineering Science, Osaka University
1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
Center for Quantum Information and Quantum Biology, Osaka University 560-0043, Japan. RIKEN Center for Quantum Computing, Wako Saitama 351-0198, Japan.
Abstract

Data visualization is important in understanding the characteristics of data that are difficult to see directly. It is used to visualize loss landscapes and optimization trajectories to analyze optimization performance. A popular way of analyzing optimization is to visualize a loss landscape around the reached local or global minimum using principal component analysis. However, this visualization depends on the variational parameters of a quantum circuit rather than on quantum states, which makes it difficult to understand the mechanism of the optimization process through the properties of quantum states. Here, we propose a quantum data visualization method using quantum kernels, which enables fast and highly accurate visualization of quantum states. In our numerical experiments, we visualize the hand-written digits dataset and apply the k-nearest neighbor algorithm to the low-dimensional data to quantitatively evaluate our proposed method against a classical kernel method. As a result, our proposed method achieves accuracy comparable to the state-of-the-art classical kernel method, meaning that the proposed visualization method based on quantum machine learning does not degrade the separability of the input higher-dimensional data. Furthermore, we visualize the optimization trajectories of finding the ground states of the transverse-field Ising model and successfully identify the trajectory characteristics. Since quantum states are higher-dimensional objects that can only be seen via observables, our visualization method, which inherits the similarity of quantum data, would be useful in understanding the behavior of quantum circuits and algorithms.

data visualization, quantum kernel, quantum machine learning, visualizing optimization trajectories
preprint: APS/123-QED

I Introduction

Data visualization is a popular and important application of dimensionality-reduction methods such as principal component analysis (PCA), autoencoders [1], and variational autoencoders [2]. Data visualization is performed by mapping high-dimensional data into a two-dimensional space. The visualized data are useful for finding trends and understanding characteristics of the data that are difficult to see directly from the raw high-dimensional data. PCA and t-distributed stochastic neighbor embedding (t-SNE) [3] are popular choices for data visualization in classical machine learning. For example, they are used to visualize loss landscapes [4] and optimization processes [5, 6] to gain intuitive insights into complex optimization processes.

In quantum computing, there is a similar demand for intuitive insight into the optimization of quantum circuit architecture design and variational quantum algorithms. More concretely, variational quantum algorithms [7], such as the variational quantum eigensolver [8, 9], the quantum approximate optimization algorithm [10], and quantum neural networks [11, 12, 13], are promising routes to using noisy intermediate-scale quantum devices for practically important problems. However, it is difficult to design a parameterized quantum circuit for a variational ansatz because different tasks require different expressibility [14, 15]. Furthermore, once a parameterized quantum circuit is fixed, there remain challenges in parameter optimization, such as the vanishing gradient problem [16, 17, 18] and getting trapped in local minima [19]. Quantum data visualization [20, 21] can be used to address these issues in variational quantum algorithms. Indeed, visualizing loss landscapes and optimization trajectories by principal component analysis has been performed in quantum machine learning [20]. However, this approach only visualizes changes in the variational parameters and does not directly reflect information about the quantum states during the optimization process.

To visualize quantum states, we have previously proposed a variational quantum algorithm that visualizes classical and quantum data using a quantum neural network [21] based on t-SNE. Specifically, we map quantum states representing high-dimensional data onto a two-dimensional plane by optimizing a parameterized quantum circuit so that the similarities between the high- and low-dimensional data are preserved. However, this method requires too many accesses to a quantum computer to optimize the design and parameters of the quantum circuit applied to the quantum states. The algorithm sometimes does not finish within a practical timescale, even on a noiseless simulator with a few qubits. Additionally, we must design a quantum circuit that avoids barren plateaus and local minima. These issues make it difficult to visualize quantum states.

To resolve these issues and realize faster and easier quantum data visualization, we here propose a visualization method based on a quantum kernel method [22, 23], which uses inner products between quantum states representing data to perform machine learning tasks, e.g., regression or classification. Specifically, we employ quantum kernels instead of classical kernels in kernel t-SNE [24, 25, 26], which is a kernel-based extension of the original t-SNE. Since kernel t-SNE only uses kernel values between data during its visualization process, we can fix the number of quantum circuit executions before starting the algorithm, as opposed to our previous approach, which iteratively optimizes a circuit for better visualization. Consequently, our proposed method requires fewer accesses to a quantum computer in most cases than our previous quantum neural network approach [21]. Moreover, because a parameterized quantum circuit applied to quantum states is not required, the proposed method visualizes quantum states more directly than our previous one [21].

To verify our proposed method in terms of visualization performance, we first visualize quantum states generated through a quantum feature map from the hand-written digits data set [27] on a two-dimensional plane. To quantitatively evaluate our model against our previous and classical machine learning models, we apply the k-nearest neighbor algorithm to the low-dimensional data and compare the classification accuracy. We confirm that the prediction by our proposed method is more accurate than that by our previous quantum neural network approach [21]. Moreover, our method achieves accuracy mostly equivalent to kernel t-SNE using Gaussian kernels.

Secondly, we use the method to visualize quantum states generated during the optimization process of the variational quantum eigensolver (VQE) [8, 7] applied to the transverse-field Ising model, which we could not perform with the previous method [21]. This visualization reveals the effect of parameter initialization and shows that part of a trajectory is shared with other trajectories starting from different initial points. We believe that visualizing the optimization process of variational quantum algorithms will be helpful in improving optimization strategies.

The rest of this paper is organized as follows. In Sec. II, we describe the data visualization method t-SNE and its extensions. In Sec. III, we explain our proposed method to visualize data with quantum kernels. In Sec. IV, to verify our proposed method, we visualize a hand-written digits data set embedded in quantum states via a quantum feature map and compare the prediction accuracy obtained by applying the k-nearest neighbor algorithm to the low-dimensional data. Additionally, we visualize the optimization trajectories of a variational quantum algorithm. We describe our conclusion and future work in Sec. V.

II Preliminary

In this section, we give a rough overview of existing data visualization methods. First, we introduce t-SNE in Sec. II.1. Then, we explain extensions of t-SNE that use neural networks, quantum neural networks, and kernel methods in Secs. II.2, II.3, and II.4, respectively.

II.1 t-SNE

t-SNE [28] is a non-parametric unsupervised learning method to visualize data. More concretely, t-SNE maps high-dimensional data $\bm{x}_{i}$ to low-dimensional data $\bm{y}_{i}$ while preserving the similarities between high- and low-dimensional data. The high-dimensional data similarities $p_{ij}$ between data $\bm{x}_{i}$ and $\bm{x}_{j}$ are defined using a Gaussian distribution:

p_{j|i} = \frac{\exp(-\|\bm{x}_{i}-\bm{x}_{j}\|^{2}/2\sigma_{i}^{2})}{\sum_{k\neq i}\exp(-\|\bm{x}_{i}-\bm{x}_{k}\|^{2}/2\sigma_{i}^{2})},
p_{ij} = \frac{p_{j|i}+p_{i|j}}{2N},   (1)
p_{ii} = 0,

where $N$ denotes the number of data points, and $\sigma_{i}$ is a parameter determined by making a quantity called the perplexity, $\mathrm{Perp}_{i}=2^{-\sum_{j}p_{j|i}\log_{2}p_{j|i}}$, equal to a user-specified value. The low-dimensional data similarities are defined using a t-distribution with one degree of freedom:

q_{ij} = \frac{(1+\|\bm{y}_{i}-\bm{y}_{j}\|^{2})^{-1}}{\sum_{k}\sum_{l\neq k}(1+\|\bm{y}_{k}-\bm{y}_{l}\|^{2})^{-1}},   (2)
q_{ii} = 0.

In t-SNE, the initial positions of the low-dimensional data are determined at random. The low-dimensional data are then iteratively moved to minimize the cost function $C$, defined as the Kullback-Leibler (KL) divergence between the similarity distributions in the high- and low-dimensional spaces:

C = \mathrm{KL}(P\|Q) = \sum_{i}\sum_{j}p_{ij}\log\frac{p_{ij}}{q_{ij}}.   (3)

After the optimization, we can visualize data by plotting the resulting low-dimensional data. A caveat of t-SNE is that we cannot add data after the visualization; to add data to the visualization, we must redo the whole process from the beginning. This shortcoming is resolved by modified versions of t-SNE, which are discussed in the following subsections.
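
As an illustration, a minimal NumPy sketch of Eqs. (1)-(3) is given below, assuming a fixed bandwidth $\sigma_{i}$ for every point (in practice each $\sigma_{i}$ is calibrated by a binary search so that the perplexity matches the user-specified value); the function names are our own illustrative choices.

import numpy as np

def high_dim_similarities(X, sigma=1.0):
    # Symmetrized similarities p_ij of Eq. (1) with a fixed bandwidth sigma_i = sigma.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    logits = -sq_dists / (2.0 * sigma ** 2)
    np.fill_diagonal(logits, -np.inf)              # enforce p_{i|i} = 0
    P_cond = np.exp(logits)
    P_cond /= P_cond.sum(axis=1, keepdims=True)    # conditional p_{j|i}
    N = X.shape[0]
    return (P_cond + P_cond.T) / (2.0 * N)

def low_dim_similarities(Y):
    # Student-t similarities q_ij of Eq. (2).
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)
    np.fill_diagonal(inv, 0.0)                     # q_ii = 0
    return inv / inv.sum()

def kl_cost(P, Q, eps=1e-12):
    # Kullback-Leibler divergence of Eq. (3), summed over pairs with p_ij > 0.
    mask = P > 0
    return np.sum(P[mask] * np.log(P[mask] / (Q[mask] + eps)))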

II.2 Parametric t-SNE

Parametric t-SNE [29] is a variant of t-SNE that can be applied to unseen data. It explicitly constructs a map from a high-dimensional space to a low-dimensional space with a neural network. Let us denote the weights of the neural network as $\bm{w}$. The neural network $\bm{y}=f(\bm{x}|\bm{w})$ is trained to minimize the cost function $C$ in Eq. (3). The trained network allows us to visualize data, including unseen ones. In our numerical experiments, we use parametric t-SNE as a baseline against which to compare the accuracy of our proposed method.

II.3 Parametric t-SNE with a quantum neural network

We have proposed to use a quantum neural network instead of a neural network in parametric t-SNE to visualize both classical and quantum data [21]. Specifically, we apply a parametrized quantum circuit to a set of quantum states $\{\ket{\psi_{i}}\}$ to visualize them. The states $\{\ket{\psi_{i}}\}$ can either be intrinsically quantum, e.g., quantum states provided from physical experiments or from other quantum circuits, or states that encode classical data $\{\bm{x}_{i}\}$ through some quantum feature map. In this method, the high-dimensional data similarities are defined as

p_{j|i} = \frac{\exp(-d(\bm{x}_{i},\bm{x}_{j})^{2}/2\sigma_{i}^{2})}{\sum_{k\neq i}\exp(-d(\bm{x}_{i},\bm{x}_{k})^{2}/2\sigma_{i}^{2})},
p_{ij} = \frac{p_{j|i}+p_{i|j}}{2N},   (4)
p_{ii} = 0,

where

d(\bm{x}_{i},\bm{x}_{j})^{2} = \|\bm{x}_{i}-\bm{x}_{j}\|^{2} \quad \text{(for classical data)},
d(\ket{\psi_{i}},\ket{\psi_{j}})^{2} = 1-|\langle\psi_{i}|\psi_{j}\rangle|^{2} \quad \text{(for quantum data)},   (5)

and $\|\cdot\|$ is the Euclidean norm. On the other hand, the low-dimensional data $y_{\mu}$ are given by

y_{\mu}(\bm{x}_{i},\bm{\theta}) = \beta_{\mu}\bra{0}U^{\dagger}(\bm{x}_{i},\bm{\theta})O_{\mu}U(\bm{x}_{i},\bm{\theta})\ket{0}

for classical data, and

y_{\mu}(\ket{\psi_{i}},\bm{\theta}) = \beta_{\mu}\bra{\psi_{i}}U^{\dagger}(\bm{\theta})O_{\mu}U(\bm{\theta})\ket{\psi_{i}}

for quantum data. Note that $U(\bm{x}_{i},\bm{\theta})$ or $U(\bm{\theta})$ is a parameterized quantum circuit, $\{O_{\mu}\}_{\mu=1}^{d}$ are observables, and $d$ is the dimension of the low-dimensional space. Also note that, in our numerical experiments, $\beta_{\mu}$ is treated as a trainable parameter rather than a hyper-parameter as in Ref. [21], to increase expressibility.

II.4 Kernel t-SNE

Kernel t-SNE is a variant of t-SNE that can be applied to unseen data [24, 25]. Given a training dataset $\{\bm{x}_{i}\}$, the task here is to visualize a high-dimensional vector $\bm{x}$ in a $d$-dimensional space along with $\{\bm{x}_{i}\}$. Kernel t-SNE maps $\bm{x}$ to the $d$-dimensional (low-dimensional) data $\bm{y}$ as

\bm{y}(\bm{x}) = \sum_{i=1}^{N}\bm{\alpha}_{i}k(\bm{x}_{i},\bm{x}),   (6)

where $N$ is the number of training data, $k$ is a kernel function, and $\bm{\alpha}_{i}\in\mathbb{R}^{d}$ are trainable parameters. In classical machine learning, a Gaussian kernel $k(\bm{x}_{i},\bm{x})=\exp(-\|\bm{x}_{i}-\bm{x}\|^{2}/(2\sigma'^{2}))$ is often employed as the kernel function. The parameters $\bm{\alpha}_{i}$ are determined by minimizing the t-SNE cost function [24, 25]. This method is expected to accelerate the mapping of all data points by learning the t-SNE map on a subset of the data. In this paper, we use this method to reduce the number of accesses to a quantum computer required for visualization. Specifically, we propose to use quantum kernels defined by inner products of quantum states, which allows us to visualize data without optimizing a parametrized quantum circuit, in contrast to our previous work [21].
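
As a concrete illustration, the map in Eq. (6) with a Gaussian kernel can be sketched as follows; the helper names are ours, and for brevity we treat the bandwidth $\sigma'$ as a fixed hyper-parameter.

import numpy as np

def gaussian_kernel(X_train, X, sigma=1.0):
    # k(x_i, x) = exp(-||x_i - x||^2 / (2 sigma'^2)) for every training point x_i.
    sq_dists = np.sum((X_train[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))  # shape (N_train, N_new)

def kernel_tsne_map(alpha, K):
    # Eq. (6): y(x) = sum_i alpha_i k(x_i, x), with alpha of shape (N_train, d).
    return K.T @ alpha                             # shape (N_new, d)

The parameters $\bm{\alpha}_{i}$ are fitted on the training set by minimizing the cost of Eq. (3); an unseen point is then embedded simply by evaluating its kernel values against the training data and applying the same linear map.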

III Quantum Kernel t-SNE

Here, we describe our proposal, a visualization method using quantum kernels, and its advantages over the previous one [21].

Figure 1: The outline of our proposed method. After the fidelities between quantum states representing data are evaluated for all data pairs on a quantum computer, the rest of the optimization process is performed entirely on a classical computer. The optimization minimizes the cost function by updating $\bm{\alpha}_{i}$. After the optimization, we visualize data by plotting the low-dimensional data $\bm{y}$.

III.1 Algorithm

Before we describe our proposed method, we first introduce our notation. We are interested in visualizing a quantum state $\ket{\psi}$ along with a set of quantum states $\{\ket{\psi_{i}}\}_{i=1}^{N}$ provided as a training dataset. As in Sec. II.3, the states $\{\ket{\psi_{i}}\}$ and $\ket{\psi}$ can either be intrinsically quantum, e.g., quantum states provided from physical experiments or from other quantum circuits, or states that encode classical data $\{\bm{x}_{i}\}$ and $\bm{x}$ through some quantum feature map.

The outline of our proposed method is shown in Fig. 1. The procedure of our proposed method is described as follows:

  1. Calculate the high-dimensional data similarities $p_{ij}$ for all data pairs by Eqs. (4) and (5).

  2. Calculate the low-dimensional data $\bm{y}$ through Eq. (6) with $k(\ket{\psi_{i}},\ket{\psi})=|\langle\psi_{i}|\psi\rangle|^{2}$.

  3. Calculate the low-dimensional data similarities $q_{ij}$ and the cost function $C$ defined in Eq. (3).

  4. Update $\bm{\alpha}_{i}$ to minimize the cost function $C$.

After the optimization, we can visualize data by plotting the low-dimensional data $\bm{y}$ using the optimized $\{\bm{\alpha}_{i}\}$.
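
A minimal end-to-end sketch of steps 2-4 is given below, assuming the quantum states are available as statevectors (e.g., from a noiseless simulator) so that the fidelities $|\langle\psi_{i}|\psi_{j}\rangle|^{2}$ can be computed directly; for brevity it optimizes $\bm{\alpha}$ with SciPy's L-BFGS-B rather than the Adam optimizer used in our experiments, and the function names are illustrative.

import numpy as np
from scipy.optimize import minimize

def quantum_kernel_matrix(states):
    # K_ij = |<psi_i|psi_j>|^2 for statevectors stacked as rows of shape (N, 2^n).
    return np.abs(states.conj() @ states.T) ** 2

def fit_alpha(K, P, d=2, seed=0):
    # Steps 2-4: optimize alpha so that y = K alpha minimizes the KL cost of Eq. (3).
    N = K.shape[0]
    alpha0 = np.random.default_rng(seed).uniform(0.0, 1.0, size=(N, d))

    def cost(alpha_flat):
        Y = K @ alpha_flat.reshape(N, d)                             # Eq. (6)
        sq = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        inv = 1.0 / (1.0 + sq)
        np.fill_diagonal(inv, 0.0)
        Q = inv / inv.sum()                                          # Eq. (2)
        mask = P > 0
        return np.sum(P[mask] * np.log(P[mask] / (Q[mask] + 1e-12)))  # Eq. (3)

    result = minimize(cost, alpha0.ravel(), method="L-BFGS-B")
    return result.x.reshape(N, d)

# Usage: P from Eqs. (4) and (5), K = quantum_kernel_matrix(states),
# alpha = fit_alpha(K, P), and the embedding is Y = K @ alpha.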

III.2 Advantage over the previous method

In this section, we explain the advantages of our proposed method compared to our previous approach [21]. Specifically, we compare the number of accesses to a quantum computer. Let us define $p$ as the number of parameters in a quantum circuit and $m$ as the number of iterations required to optimize the parameters within a circuit in our previous approach.

First, we compare the number of accesses to a quantum computer when visualizing classical data. In the present method, we need to evaluate $O(N^{2})$ fidelities to calculate the kernel function during training. In our previous approach [21], we need $O(Np)$ accesses per iteration to evaluate the cost function and its gradient, so $O(Npm)$ accesses are required for the whole optimization. The present method is therefore advantageous roughly when $mp>N$ in terms of the number of accesses to a quantum computer. This condition is satisfied in common situations. For example, Ref. [30] uses a quantum neural network with 15 or 60 parameters and optimizes the parameters over 400 iterations to classify 360 data points into two labels; there, $mp=400\times 15=6000\gg 360=N$. If we visualize such data, the present method performs visualization more efficiently than the previous one, provided the latter requires roughly the same number of iterations to optimize the quantum neural network as in Ref. [30].

Next, we compare the number of accesses when visualizing quantum data. We need $O(N^{2})$ accesses in the present method. On the other hand, the previous one [21] needs $O(N^{2})$ accesses to evaluate the fidelities used in the high-dimensional data similarities. Additionally, it needs $O(Np)$ accesses per iteration to evaluate the cost function and its gradient. In total, $O(N^{2})+O(Nmp)$ accesses are required for the whole optimization. Therefore, for the visualization of quantum data, our proposed method is more efficient than our previous work [21] without any assumptions, in contrast to the case of classical data. We summarize the results in Table 1.

Table 1: The number of accesses to a quantum computer. In the table, $N$, $m$, and $p$ denote the number of data points, the number of iterations required to optimize the parameters of a quantum circuit, and the number of parameters in the parametrized quantum circuit, respectively.
Data type | classical | quantum
Parametric t-SNE with QNN [21] | $O(Nmp)$ | $O(N^{2})+O(Nmp)$
This work | $O(N^{2})$ | $O(N^{2})$

IV Numerical Experiments

In this section, we first perform numerical experiments to validate our proposed method by visualizing classical data. We apply the k-nearest neighbor algorithm to the low-dimensional data to quantitatively evaluate the visualization performance. Then, we present a visualization of the optimization trajectories of a variational quantum eigensolver. From the visualization, we explore the characteristics of the ansatz's optimization trajectories. We use Qulacs [31] to classically simulate quantum circuits in our numerical experiments.

IV.1 Visualizing classical data

Here, we visualize the hand-written digits data set provided in scikit-learn [27] as an example of visualizing classical data. Specifically, we split 80% of the given data into training data and the remaining 20% into test data. We employ the quantum feature map defined by the quantum circuit described in Fig. 2 with 12 qubits. As a preprocessing step, we reduce the dimension of the data to 12 by PCA. Additionally, we normalize each principal component to the range from 0 to $\pi/2$ so that the data can be input as the angles of quantum gates. As shown in Fig. 2, we input the $i$th principal component $x_{i}$ as the angle of a single-qubit rotation acting on qubit $i$ for $i=0,\ldots,n-1$, followed by $n$ CZ gates with control qubit $i$ and target qubit $(i+1) \bmod n$ to entangle the quantum states. Furthermore, we input the principal components through the single-qubit rotation angles again, namely data re-uploading, to improve the visualization performance. We optimize the parameters $\bm{\alpha}_{i}$ in Eq. (6) with Adam [32]. The perplexity is set to 30, and the initial values of $\bm{\alpha}$ are drawn uniformly at random between 0 and 1. We show the resulting low-dimensional data in Fig. 3. The left panel visualizes the training data, and the right one visualizes the test data. We see that the present method nicely forms clusters of the data belonging to each digit class.
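
A minimal Qulacs sketch of this feature map is shown below; the specific rotation gate ($R_y$) is our assumption about Fig. 2, and the fidelity kernel is evaluated from statevector inner products as one would on a noiseless simulator.

import numpy as np
from qulacs import QuantumCircuit, QuantumState
from qulacs.state import inner_product

def feature_map_state(x, n_qubits=12):
    # Encode the 12 principal components (scaled to [0, pi/2]), with one re-uploading.
    circuit = QuantumCircuit(n_qubits)
    for _ in range(2):                               # encoding layer applied twice
        for i in range(n_qubits):
            circuit.add_RY_gate(i, x[i])             # assumed single-qubit rotation
        for i in range(n_qubits):
            circuit.add_CZ_gate(i, (i + 1) % n_qubits)
    state = QuantumState(n_qubits)                   # initialized to |0...0>
    circuit.update_quantum_state(state)
    return state

def fidelity_kernel(x1, x2):
    # k(x1, x2) = |<psi(x1)|psi(x2)>|^2, used in Eq. (6).
    return np.abs(inner_product(feature_map_state(x1), feature_map_state(x2))) ** 2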

Next, we quantitatively evaluate the visualization performance of our proposed method. Specifically, we investigate it through the classification accuracy of the low-dimensional data generated by quantum and classical methods. If the visualization works well, the low-dimensional data form clusters according to their classes, and, in addition, the clusters are separated from one another. In such a case, a low-dimensional point is surrounded by points belonging to the same class; therefore, the k-nearest neighbor algorithm is effective for evaluating the visualization performance. In Refs. [29, 33], the visualization performance is evaluated by the accuracy of the k-nearest neighbor algorithm on the low-dimensional data points. In our numerical experiments, we classify the low-dimensional data using the k-nearest neighbor algorithm and compare the 5-fold cross-validation accuracy with four conventional models. Specifically, the models to be compared are a kernel t-SNE with Gaussian kernels (t-sne(gauss)), a parametric t-SNE with a quantum neural network (t-sne(qnn)), a parametric t-SNE with a neural network with a single hidden layer (t-sne(nn1)), and one with three hidden layers (t-sne(nn3)). We describe their details in Appendix A.
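
A sketch of this evaluation using scikit-learn, assuming Y holds the two-dimensional embedding and labels the corresponding digit classes, might look as follows.

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_cv_accuracy(Y, labels, k):
    # Mean 5-fold cross-validation accuracy of k-NN on the low-dimensional data Y.
    knn = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(knn, Y, labels, cv=5).mean()

# Example: the values of k reported in Table 2.
# for k in (1, 10, 20):
#     print(k, knn_cv_accuracy(Y, labels, k))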

The accuracies are shown in Table 2. The accuracy of t-sne(qnn) is the worst for all values of k, because we could not use many parameters due to the long optimization time they would cause. From Table 2, our model achieves almost the same accuracy as t-sne(gauss) but a lower accuracy than t-sne(nn1) and t-sne(nn3). While the present method could not beat the methods based on neural networks, we believe that achieving performance comparable to the Gaussian kernel, which can be seen as a basic traditional technique, is an encouraging result; in fact, the state-of-the-art result in recognizing hand-written digits using quantum machine learning techniques is at this level [34]. This showcases the effectiveness of our approach.

Figure 2: The encoding circuit for the hand-written digits data set. In the figure, $x_{i}$ is the $i$th principal component of a data point $\bm{x}$.
Figure 3: Visualization of the hand-written digits data set, where we split 80% of the given data into training data and the remaining 20% into test data. The left panel visualizes the training data, and the right one visualizes the test data.
Table 2: The accuracy of 5-fold cross-validation by the k-nearest neighbor algorithm.
Model | k=1 | k=10 | k=20
[proposed method] quantum kernel | 0.792 | 0.824 | 0.825
[t-sne(gauss)] classical Gaussian kernel | 0.788 | 0.816 | 0.814
[t-sne(nn1)] parametric t-SNE | 0.787 | 0.836 | 0.838
[t-sne(nn3)] parametric t-SNE | 0.870 | 0.884 | 0.883
[t-sne(qnn)] quantum neural network | 0.683 | 0.728 | 0.722

IV.2 Visualizing quantum data

In this section, we present an example of visualizing quantum data. More concretely, we visualize quantum states generated during the optimization process of the VQE applied to the transverse-field Ising model:

H = J\sum_{i=1}^{n-1}Z_{i}Z_{i+1}+h\sum_{i=1}^{n}X_{i},

where we set $n=8$, $J=-1.0$, and $h=-0.75$. The task of the VQE here is to find the ground state of $H$, which we denote by $\ket{\psi_{\text{g}}}$. We use the so-called hardware-efficient type ansatz [9, 15], shown in Fig. 4. We set the depth $d=6$ and use three different sets of initial parameters drawn from a uniform distribution between 0 and $2\pi$. The parameters are optimized by the BFGS algorithm implemented in SciPy [35] with 100 iterations. We visualize the quantum state at each iteration of the optimization by our kernel t-SNE method. The optimization of the parameters in Eq. (6) is performed by Adam [32], setting the perplexity to 10 and the initial values of $\bm{\alpha}$ to uniform random values between 0 and 0.1.
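
A compact sketch of this setup with Qulacs and SciPy is shown below; the exact layer structure of the hardware-efficient ansatz in Fig. 4 (here, $R_y$ layers followed by a CNOT ladder) is our assumption, and the quantum state is recorded at every cost evaluation as a stand-in for the per-iteration states that we visualize.

import numpy as np
from qulacs import Observable, QuantumCircuit, QuantumState
from scipy.optimize import minimize

n, depth, J, h = 8, 6, -1.0, -0.75

# Transverse-field Ising Hamiltonian H = J sum_i Z_i Z_{i+1} + h sum_i X_i.
hamiltonian = Observable(n)
for i in range(n - 1):
    hamiltonian.add_operator(J, f"Z {i} Z {i + 1}")
for i in range(n):
    hamiltonian.add_operator(h, f"X {i}")

def ansatz_state(params):
    # Hardware-efficient-style ansatz (assumed: Ry layers followed by a CNOT ladder).
    circuit = QuantumCircuit(n)
    angles = params.reshape(depth, n)
    for layer in range(depth):
        for i in range(n):
            circuit.add_RY_gate(i, angles[layer, i])
        for i in range(n - 1):
            circuit.add_CNOT_gate(i, i + 1)
    state = QuantumState(n)
    circuit.update_quantum_state(state)
    return state

snapshots = []  # statevectors along the optimization trajectory, to be visualized

def energy(params):
    state = ansatz_state(np.asarray(params))
    snapshots.append(state.get_vector())
    return hamiltonian.get_expectation_value(state)

theta0 = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, size=n * depth)
result = minimize(energy, theta0, method="BFGS", options={"maxiter": 100})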

We show the visualization result in Fig. 5. The initial states appear around the origin of the two-dimensional plane because Eq. (6) returns values close to zero when a quantum state has low similarity to all other quantum states in the data, unless $\bm{\alpha}_{i}$ is large. They move away from the origin as the optimizations proceed. The trajectories of the hardware-efficient ansatz converge to the same final state $\ket{\psi_{\text{g}}}$. Trajectory0 and trajectory2 first diverge but then meet each other after 30 iterations; their fidelities are actually larger than 0.90 in this range. Trajectory1, in contrast to the above two, passes near the 1st excited state. Making this kind of observation has been impossible with existing works. For instance, our previous approach [21] demands too many computational resources. Also, the approach proposed in Ref. [20] only looks at the parameters of the circuit and does not visualize the quantum states themselves. With further experiments of this kind, we might be able to extract useful information for characterizing the optimization process of variational quantum algorithms, which may then lead to improved optimizers and ansatz designs.

Figure 4: The hardware-efficient ansatz used in our numerical experiments. This ansatz consists of single-qubit rotation gates and CNOT gates. We set $n=8$ and $d=6$, repeating the block for $d^{\prime}=0,1,\ldots,d-1$.
Figure 5: The optimization process of a variational quantum eigensolver finding the ground state of the transverse-field Ising model. Here, we use the hardware-efficient ansatz shown in Fig. 4 to find the ground state. We visualize three trajectories that start from different initial values and label them trajectory$i$.

V Conclusion

In this paper, we have proposed a data visualization method using quantum kernels. Through numerical experiments, we confirmed that our proposed method could visualize both classical and quantum data.

For classical data, we visualized a hand-written digits dataset and applied the k-nearest neighbor algorithm to the low-dimensional data to validate our proposed method. The accuracy of our proposed method is almost the same as that of a kernel t-SNE with Gaussian kernels but lower than that of a parametric t-SNE with a neural network. Note that this result depends strongly on the choice of quantum kernel employed in the method. Because the quantum kernel used in Sec. IV.1 may not be the best choice, we might be able to improve the visualization performance by using a data re-uploading circuit [36, 37] or by optimizing the circuit architecture. For quantum data, we visualized the optimization process of the VQE. We expect this kind of visualization to be helpful in improving initialization and optimization strategies for the VQE.

One interesting future application of our visualization method is quantum architecture search. We might be able to construct better quantum circuits more intuitively by visualizing the quantum states generated by different architectures. We believe that our visualization method contributes to the development of quantum algorithms by providing a means to gain insight into the design of quantum circuits and optimization strategies.

Acknowledgements.
K.M. is supported by JST PRESTO Grant No. JPMJPR2019 and JSPS KAKENHI Grant No. 20K22330. This work is supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Nos. JPMXS0118067394 and JPMXS0120319794. We also acknowledge support from the JST COI-NEXT program Grant No. JPMJPF2014.

Appendix A The models used in numerical experiments

Here, we explain our models used in numerical experiments. Since we described our proposed method in Sec. III.1, we explain the other models below.

Kernel t-SNE with Gaussian kernels

As explained in Sec. II.4, kernel t-SNE is a method of mapping high-dimensional data to a low-dimensional space using kernel functions so that the similarities between the low-dimensional data preserve those between the high-dimensional data. Specifically, we adopt a Gaussian kernel $k(\bm{x}_{l},\bm{x})=\exp(-\|\bm{x}_{l}-\bm{x}\|^{2}/2)$ as the kernel function in Eq. (6). Then, we optimize the $\bm{\alpha}_{l}$ to minimize the cost function in Eq. (3).

Parametric t-SNE

As explained in Sec. II.2, parametric t-SNE with a neural network is a method of mapping high-dimensional data to a low-dimensional space using a neural network in such a way that the similarities between the low-dimensional data are almost equal to those between the high-dimensional data. First, we explain the architecture of the single-hidden-layer neural network used in the numerical experiments. As shown in Fig. 6, the architecture consists of an input layer, a hidden layer, the ReLU activation function, and an output layer. We set the number of nodes in the hidden layer to 1800, which is approximately the same as the number of data points. Next, we explain the architecture of the three-hidden-layer neural network used in the numerical experiments. As shown in Fig. 7, the architecture consists of an input layer; hidden layers with 1500, 1000, and 1500 nodes, each followed by the ReLU activation function and a dropout layer with rate 0.25; and an output layer. Note that, as input, we reduce the dimension of the given data to 12 by PCA and standardize each feature.
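
For reference, a PyTorch sketch of the three-hidden-layer network described above (12-dimensional PCA input, hidden widths 1500/1000/1500, ReLU, dropout rate 0.25, and a two-dimensional output) is given below; the single-hidden-layer variant simply replaces the hidden stack with one 1800-unit layer.

import torch.nn as nn

def make_three_hidden_layer_net(in_dim=12, out_dim=2, dropout=0.25):
    # Three hidden layers (1500, 1000, 1500 nodes), ReLU, and dropout, as in Fig. 7.
    return nn.Sequential(
        nn.Linear(in_dim, 1500), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(1500, 1000), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(1000, 1500), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(1500, out_dim),
    )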

Parametric t-SNE with a quantum neural network

As explained in Sec. II.3, parametric t-SNE with a quantum neural network is a method of mapping high-dimensional data to a low-dimensional space using a quantum neural network so that the similarities between the low-dimensional data preserve those between the high-dimensional data [21]. As shown in Fig. 8, the architecture of the quantum neural network consists of data encoding by the circuit described in Fig. 2 and a unitary transformation including parameterized quantum gates. Specifically, we alternately repeat a circuit block consisting of $U_{\text{in}}$ and $U_{\text{ent}}$ $d=3$ times, where $U_{\text{in}}$ and $U_{\text{ent}}$ are described in Fig. 8. We input data through $U_{\text{in}}$, consisting of $R_{x}$, $R_{y}$, and CZ gates, and transform them by repeating $U_{\text{ent}}$, consisting of CZ and $R_{y}$ gates, $m=4$ times. Note that we normalize each feature of the data to fit within $[0.0,0.5]$ so that the principal components are ranged within $[-1.0,1.0]$. Then, we optimize the angles $\bm{\theta}$ of the $R_{y}$ gates in $U_{\text{ent}}$. Denoting the map of the quantum neural network as $U(\bm{x}_{i},\bm{\theta})$, the low-dimensional data are described by

y_{i,0}(\bm{x}_{i},\bm{\theta}) = \sum_{j=1}^{n/2}\beta_{j}\bra{0}U^{\dagger}(\bm{x}_{i},\bm{\theta})X_{j}U(\bm{x}_{i},\bm{\theta})\ket{0},
y_{i,1}(\bm{x}_{i},\bm{\theta}) = \sum_{j=n/2+1}^{n}\beta_{j}\bra{0}U^{\dagger}(\bm{x}_{i},\bm{\theta})X_{j}U(\bm{x}_{i},\bm{\theta})\ket{0},

where we optimize both $\beta_{j}$ and $\bm{\theta}$ to minimize the cost function.
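
These two outputs can be sketched with Qulacs as follows, assuming the state $U(\bm{x}_{i},\bm{\theta})\ket{0}$ has already been prepared; the helper name is illustrative.

import numpy as np
from qulacs import Observable

def qnn_outputs(state, beta, n_qubits):
    # y_{i,0} = sum_{j <= n/2} beta_j <X_j>, y_{i,1} = sum_{j > n/2} beta_j <X_j>.
    expectations = np.empty(n_qubits)
    for j in range(n_qubits):
        observable = Observable(n_qubits)
        observable.add_operator(1.0, f"X {j}")
        expectations[j] = observable.get_expectation_value(state)
    half = n_qubits // 2
    return np.dot(beta[:half], expectations[:half]), np.dot(beta[half:], expectations[half:])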

Figure 6: The architecture of the single-hidden-layer neural network used in Sec. IV.1. The model consists of an input layer, a hidden layer, the ReLU activation function, and an output layer.
Figure 7: The architecture of the three-hidden-layer neural network used in Sec. IV.1. The model consists of an input layer, hidden layers with the ReLU activation function, dropout layers with rate 0.25, and an output layer.
Figure 8: The architecture of the quantum neural network used in Sec. IV.1. We encode data by $U_{\text{in}}(\bm{x})$ as in Fig. 8(a), and transform them by repeating $U_{\text{ent}}(\bm{\theta})$, consisting of CZ and $R_{y}$ gates, $m=4$ times. The angles of the $R_{y}$ gates in $U_{\text{ent}}(\bm{\theta})$ are the parameters to be optimized. We alternately repeat $U_{\text{in}}(\bm{x})$ and $U_{\text{ent}}(\bm{\theta})$ $d=3$ times.

References

  • Hinton and Salakhutdinov [2006] G. E. Hinton and R. R. Salakhutdinov, Science 313, 504 (2006).
  • Kingma and Welling [2013] D. P. Kingma and M. Welling, arXiv preprint arXiv:1312.6114 (2013).
  • Van der Maaten and Hinton [2008] L. Van der Maaten and G. Hinton, Journal of Machine Learning Research 9 (2008).
  • Li et al. [2018] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein, Advances in Neural Information Processing Systems 31 (2018).
  • Erhan et al. [2010] D. Erhan, A. Courville, Y. Bengio, and P. Vincent, in Proceedings of the thirteenth international conference on artificial intelligence and statistics (JMLR Workshop and Conference Proceedings, 2010) pp. 201–208.
  • Fort et al. [2019] S. Fort, H. Hu, and B. Lakshminarayanan, arXiv preprint arXiv:1912.02757  (2019).
  • Cerezo et al. [2021a] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, et al., Nature Reviews Physics 3, 625 (2021a).
  • Peruzzo et al. [2014] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’brien, Nature Communications 5, 1 (2014).
  • Kandala et al. [2017] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, Nature 549, 242 (2017).
  • Farhi et al. [2014] E. Farhi, J. Goldstone, and S. Gutmann, arXiv preprint arXiv:1411.4028  (2014).
  • Mitarai et al. [2018] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, Physical Review A 98, 032309 (2018).
  • Farhi and Neven [2018] E. Farhi and H. Neven, arXiv preprint arXiv:1802.06002  (2018).
  • Schuld et al. [2014] M. Schuld, I. Sinayskiy, and F. Petruccione, Quantum Information Processing 13, 2567 (2014).
  • Sim et al. [2019] S. Sim, P. D. Johnson, and A. Aspuru-Guzik, Advanced Quantum Technologies 2, 1900070 (2019).
  • Nakaji and Yamamoto [2021] K. Nakaji and N. Yamamoto, Quantum 5, 434 (2021).
  • Cerezo et al. [2021b] M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, Nature Communications 12, 1 (2021b).
  • McClean et al. [2018] J. R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush, and H. Neven, Nature Communications 9, 1 (2018).
  • Wang et al. [2021] S. Wang, E. Fontana, M. Cerezo, K. Sharma, A. Sone, L. Cincio, and P. J. Coles, Nature Communications 12, 1 (2021).
  • Bittel and Kliesch [2021] L. Bittel and M. Kliesch, Physical Review Letters 127, 120502 (2021).
  • Rudolph et al. [2021] M. S. Rudolph, S. Sim, A. Raza, M. Stechly, J. R. McClean, E. R. Anschuetz, L. Serrano, and A. Perdomo-Ortiz, arXiv preprint arXiv:2111.04695  (2021).
  • Kawase et al. [2022] Y. Kawase, K. Mitarai, and K. Fujii, Phys. Rev. Res. 4, 043199 (2022).
  • Havlíček et al. [2019] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, Nature 567, 209 (2019).
  • Schuld and Killoran [2019] M. Schuld and N. Killoran, Physical Review Letters 122, 040504 (2019).
  • Gisbrecht et al. [2012] A. Gisbrecht, W. Lueks, B. Mokbel, and B. Hammer, in ESANN (2012).
  • Bunte et al. [2012] K. Bunte, M. Biehl, and B. Hammer, Neural Computation 24, 771 (2012).
  • Gisbrecht et al. [2015] A. Gisbrecht, A. Schulz, and B. Hammer, Neurocomputing 147, 71 (2015).
  • Dua and Graff [2017] D. Dua and C. Graff, UCI machine learning repository (2017).
  • van der Maaten and Hinton [2008] L. van der Maaten and G. E. Hinton, Journal of Machine Learning Research 9, 2579 (2008).
  • Van Der Maaten [2009] L. Van Der Maaten, in Artificial intelligence and statistics (PMLR, 2009) pp. 384–391.
  • Du et al. [2021] Y. Du, M.-H. Hsieh, T. Liu, S. You, and D. Tao, PRX Quantum 2, 040337 (2021).
  • Suzuki et al. [2021] Y. Suzuki, Y. Kawase, Y. Masumura, Y. Hiraga, M. Nakadai, J. Chen, K. M. Nakanishi, K. Mitarai, R. Imai, S. Tamiya, et al., Quantum 5, 559 (2021).
  • Kingma and Ba [2014] D. P. Kingma and J. Ba, arXiv preprint arXiv:1412.6980  (2014).
  • McInnes et al. [2018] L. McInnes, J. Healy, and J. Melville, arXiv preprint arXiv:1802.03426  (2018).
  • Haug et al. [2023] T. Haug, C. N. Self, and M. S. Kim, Machine Learning: Science and Technology 4, 015005 (2023).
  • Virtanen et al. [2020] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors, Nature Methods 17, 261 (2020).
  • Pérez-Salinas et al. [2020] A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster, and J. I. Latorre, Quantum 4, 226 (2020).
  • Lloyd et al. [2020] S. Lloyd, M. Schuld, A. Ijaz, J. Izaac, and N. Killoran, arXiv preprint arXiv:2001.03622  (2020).