
Quantum-Train with Tensor Network
Mapping Model and Distributed Circuit Ansatz

Chen-Yu Liu¹, Chu-Hsuan Abraham Lin³, Kuan-Cheng Chen³,⁴
¹ Graduate Institute of Applied Physics, National Taiwan University, Taipei, Taiwan
³ Department of Electrical and Electronic Engineering, Imperial College London, London, UK
⁴ Centre for Quantum Engineering, Science and Technology (QuEST), Imperial College London, London, UK
Email: [email protected], [email protected], [email protected]
Abstract

In the Quantum-Train (QT) framework, mapping quantum state measurements to classical neural network weights is a critical challenge that affects the scalability and efficiency of hybrid quantum-classical models. The traditional QT framework employs a multi-layer perceptron (MLP) for this task, but it struggles with scalability and interpretability. To address these issues, we propose replacing the MLP with a tensor network-based model and introducing a distributed circuit ansatz designed for large-scale quantum machine learning with multiple small quantum processing unit nodes. This approach enhances scalability, efficiently represents high-dimensional data, and maintains a compact model structure. Our enhanced QT framework retains the benefits of reduced parameter count and independence from quantum resources during inference. Experimental results on benchmark datasets demonstrate that the tensor network-based QT framework achieves competitive performance with improved efficiency and generalization, offering a practical solution for scalable hybrid quantum-classical machine learning.

Index Terms:
Quantum Machine Learning, Quantum Neural Networks, Distributed Quantum Computing, Model Compression

I Introduction

In recent years, Quantum Computing and Quantum Machine Learning (QML) have shown substantial potential in enhancing learning efficiency and flexibility by integrating diverse computational architectures [1, 2]. Leveraging the unique properties of quantum systems, such as superposition and entanglement, QML can perform parallel computations across multiple basis states simultaneously [3]. This capability has enabled a broad spectrum of applications, including classification tasks [4], reinforcement learning [5], time-series forecasting [6], and the incorporation of quantum algorithms into various computational frameworks [7, 8, 9].

In conventional QML approaches, data is typically introduced into quantum circuits through encoding techniques like gate-angle encoding and amplitude encoding [10]. However, encoding large datasets into quantum circuits remains a significant challenge due to the limited number of available qubits and the constraints on circuit depth imposed by the coherence times of current quantum systems. Beyond data encoding limitations, once a QML model is trained, the inference phase often necessitates the use of quantum hardware (commonly accessed via cloud-based platforms), where hybrid computations occur layer-by-layer. This dependency can result in inefficiencies, particularly for time-sensitive applications such as real-time decision-making in autonomous vehicles.

To address these challenges, the Quantum-Train (QT) framework has been proposed as an innovative “learning-wise” hybrid quantum-classical architecture. The central concept is to decouple the quantum processing from direct data interaction by utilizing a quantum neural network (QNN) to generate the weights of a classical machine learning (ML) model. This approach effectively circumvents data encoding issues, as the data is processed entirely within the classical model, bypassing the need for direct quantum data input. Moreover, it eliminates the requirement for quantum hardware during the inference stage, making the final model fully classical and independent of quantum resources post-training. These characteristics position QT as a highly practical and efficient framework for near-term QML applications [11, 12, 13, 6].

Figure 1: Conceptual diagram of the distributed Quantum-Train architecture applied in Quantum High-Performance Computing (HPC) for large-scale quantum machine learning problems.

A critical step in the QT framework is mapping the measured probabilities of quantum basis states to the weights of the target machine learning model. In previous implementations, this mapping has primarily been achieved using a multi-layer perceptron (MLP) architecture. However, due to the high-dimensional nature of the quantum states generated by the QNN, MLPs may struggle to efficiently capture the complex structure of these probabilities. To address this, Tensor Networks (TN) present a compelling alternative due to their ability to efficiently represent weakly interacting quantum states and their widespread use in quantum circuit simulations [14, 15].

Another significant challenge in QNNs is the extensive qubit requirement, which limits scalability for both simulations and real quantum hardware. To address these limitations, a distributed circuit ansatz can be employed [16, 17]. Additionally, the possible integration of distributed quantum computing with hybrid quantum HPC architectures has been explored in recent work [18]. As illustrated in Fig. 1, the concept of large-scale distributed Quantum-Train (QT) for a Quantum HPC center aims to tackle large-scale machine learning problems. This approach involves partitioning the quantum circuit into smaller sub-circuits, thereby reducing overall qubit usage and allowing the results to be efficiently combined through tensor products after computation.

In this work, we propose two key enhancements to the QT framework: (1) substituting the MLP mapping model with a Tensor Network structure to improve expressive power while reducing the number of parameters, and (2) designing the QNN ansatz in a distributed manner, which not only accelerates simulations but also decreases qubit requirements for real quantum hardware applications.

II Quantum-Train

Figure 2: The scheme of the QT framework with a TN mapping model and a distributed circuit ansatz. The process begins with two distributed quantum circuits, whose results are combined via a tensor product. The basis information and the corresponding measurement probabilities are then input into the MPS structure, which subsequently generates the weights of the target CNN model.

The traditional QT framework is outlined as follows: Consider a classical neural network (NN) with parameters $\vec{\omega}\in\mathbb{R}^{m}$, defined as

$$\vec{\omega}=(\omega_{1},\omega_{2},\ldots,\omega_{m}). \quad (1)$$

A QNN with $N=\lceil\log_{2}m\rceil$ qubits is then constructed using a real-amplitude ansatz, expressed as:

$$|\psi(\theta)\rangle=\left(\prod_{i}\text{CNOT}^{i,i+1}\prod_{j}R_{y}^{j}\right)^{L}|0\rangle^{\otimes N}. \quad (2)$$

Rather than updating all $m$ parameters, as is done in conventional ML, the QT approach utilizes the quantum state $|\psi(\theta)\rangle$ to produce $2^{N}$ distinct measurement probabilities $|\langle\phi_{i}|\psi(\theta)\rangle|^{2}$, where $i\in\{1,2,\ldots,2^{N}\}$ and $|\phi_{i}\rangle$ represents the $i$-th basis state. These probabilities are passed through a mapping model $G_{\beta}$, a classical MLP network with parameters $\beta$.

The first $m$ measurement probabilities, along with their corresponding basis states $|\phi_{i}\rangle$, are transformed from values in the interval $[0,1]$ to the range $(-\infty,\infty)$ using the following relation:

$$G_{\beta}(|\phi_{i}\rangle,|\langle\phi_{i}|\psi(\theta)\rangle|^{2})=\omega_{i},\quad i=1,2,\ldots,m. \quad (3)$$

This shows that the parameters $\omega_{i}$ of the target model are derived from the QNN state $|\psi(\theta)\rangle$ and the mapping model $G_{\beta}$. Crucially, the number of tunable parameters in both $\theta$ and $\beta$ scales as $O(\mathrm{polylog}(m))$ [12], enabling efficient training of the target model with only $O(\mathrm{polylog}(m))$ parameters, rather than updating all $m$ parameters directly.
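For concreteness, the listing below sketches this traditional QT pipeline end to end. It is a minimal illustration only, assuming PennyLane and PyTorch as the simulation and mapping-model backends (the framework itself is library-agnostic); the toy sizes $m=100$, $L=2$, the 16-unit hidden layer, and helper names such as `qt_generate_weights` are hypothetical choices, not part of the original implementation.

```python
import math
import pennylane as qml
import torch
import torch.nn as nn

m = 100                      # number of target-model weights (toy size)
N = math.ceil(math.log2(m))  # qubits needed so that 2^N >= m
L = 2                        # ansatz repetitions

dev = qml.device("default.qubit", wires=N)

@qml.qnode(dev, interface="torch")
def qnn_probs(theta):
    """Real-amplitude ansatz of Eq. 2: L blocks of Ry rotations + a CNOT chain."""
    for l in range(L):
        for j in range(N):
            qml.RY(theta[l, j], wires=j)
        for i in range(N - 1):
            qml.CNOT(wires=[i, i + 1])
    return qml.probs(wires=range(N))

# Mapping model G_beta: maps (N basis bits, 1 probability) to one weight omega_i.
G_beta = nn.Sequential(nn.Linear(N + 1, 16), nn.ReLU(), nn.Linear(16, 1))

def qt_generate_weights(theta):
    probs = qnn_probs(theta)                          # 2^N measurement probabilities
    basis = torch.tensor(
        [[(i >> b) & 1 for b in range(N - 1, -1, -1)] for i in range(m)],
        dtype=torch.float32,
    )                                                 # first m basis states |phi_i>
    features = torch.cat([basis, probs[:m].reshape(-1, 1).to(basis.dtype)], dim=1)
    return G_beta(features).squeeze(1)                # omega_1 ... omega_m

theta = torch.randn(L, N, requires_grad=True)         # only O(polylog(m)) parameters
omega = qt_generate_weights(theta)                     # weights for the classical model
print(omega.shape)                                     # torch.Size([100])
```

In this sketch only the rotation angles $\theta$ and the small MLP weights $\beta$ are trainable; the $m$ target weights $\vec{\omega}$ are generated on the fly rather than stored as free parameters.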

II-A Tensor Network Mapping Model

TNs have achieved significant success in quantum many-body physics, machine learning tasks, and even quantum circuit simulations. Due to their efficiency in representing high-dimensional data, TNs are expected to represent the mapping between quantum measurement probabilities and classical NN weights more effectively. Following the formulation in [19], the matrix product state (MPS) decomposition of the mapping model’s weight tensor $W$ is expressed as:

$$W_{s_{1},s_{2},\ldots,s_{N+1}}=\sum_{\{\alpha\}}A^{\alpha_{1}}_{s_{1}}A^{\alpha_{1}\alpha_{2}}_{s_{2}}\cdots A^{\alpha_{N}}_{s_{N+1}}, \quad (4)$$

where $N$ corresponds to the number of qubits, as mentioned in the previous section, since the vector representation of $(|\phi_{i}\rangle,|\langle\phi_{i}|\psi(\theta)\rangle|^{2})$ has length $N+1$. With the feature map

$$\Xi^{s_{1},s_{2},\ldots,s_{N+1}}(\mathbf{x})=\xi^{s_{1}}(x_{1})\otimes\xi^{s_{2}}(x_{2})\otimes\cdots\otimes\xi^{s_{N+1}}(x_{N+1}), \quad (5)$$

and

$$\xi^{s_{j}}(x_{j})=\begin{bmatrix}x_{j}\\ 1-x_{j}\end{bmatrix}, \quad (6)$$

the mapping model is now given by

$$G(|\phi\rangle,|\langle\phi|\psi(\theta)\rangle|^{2})=W\cdot\Xi(|\phi\rangle,|\langle\phi|\psi(\theta)\rangle|^{2})=\vec{\omega}. \quad (7)$$

In this formulation, the MPS tensors $A^{\alpha_{j}}_{s}$ serve as the tunable parameters of the TN mapping model, where each virtual index $\alpha_{j}$ is associated with a bond dimension $r$.
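As an illustration of Eqs. 4–7, the NumPy sketch below contracts a randomly initialized MPS of bond dimension $r$ with the feature map of Eq. 6 for a single input vector $(|\phi_{i}\rangle,|\langle\phi_{i}|\psi(\theta)\rangle|^{2})$, yielding one weight $\omega_{i}$; applying it to all $m$ inputs produces the vector $\vec{\omega}$ of Eq. 7. The tensor shapes and the name `mps_map` are illustrative choices, not prescribed by the paper.

```python
import numpy as np

N = 7          # number of qubits, so the input vector has length N + 1
r = 4          # MPS bond dimension
rng = np.random.default_rng(0)

# MPS tensors A: the first has shape (2, r), the middle ones (2, r, r), the last (2, r)
A_first = rng.normal(size=(2, r))
A_mid = [rng.normal(size=(2, r, r)) for _ in range(N - 1)]
A_last = rng.normal(size=(2, r))

def feature_map(x):
    """Eq. 6: map each entry x_j in [0, 1] to the 2-vector [x_j, 1 - x_j]."""
    return np.stack([x, 1.0 - x], axis=1)            # shape (N + 1, 2)

def mps_map(x):
    """Eqs. 4, 5, 7: contract the MPS weight tensor W against the feature map of x."""
    xi = feature_map(x)
    v = xi[0] @ A_first                               # open bond index, shape (r,)
    for j, A in enumerate(A_mid, start=1):
        v = np.einsum("a,s,sab->b", v, xi[j], A)      # carry the bond index along the chain
    return float(v @ (xi[-1] @ A_last))               # scalar weight omega_i

# Example input: basis bits of |phi_i> followed by its measurement probability
x = np.array([0, 1, 1, 0, 1, 0, 0, 0.0123])
print(mps_map(x))
```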

II-B Distributed Circuit Ansatz

The number of qubits required for a QNN can sometimes become too large for efficient evaluation, both in classical simulations and on real quantum hardware, depending on the specific task. For instance, the CIFAR-10 classification task within the QT proposal in [12] requires $\lceil\log_{2}285226\rceil=19$ qubits. Although 19 qubits is a feasible size, evaluating a quantum circuit of this scale is significantly slower than using circuits with 9 or 10 qubits.
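The qubit count follows directly from the target-model parameter count; a purely illustrative arithmetic check:

```python
import math

m = 285226                   # CNN parameter count for CIFAR-10 in [12]
N = math.ceil(math.log2(m))  # qubits needed so that 2^N >= m
print(N, 2 ** N >= m)        # 19 True
```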

Since the goal of the QNN in this case is to generate $2^{19}$ measurement probabilities, the circuit can be decomposed into two sub-circuits with 9 and 10 qubits, respectively, given that the Hilbert space dimension satisfies $2^{19}=2^{10}\times 2^{9}$. This decomposition is expressed as:

$$|\psi\rangle_{N=19}=|\psi\rangle_{N=9}\otimes|\psi\rangle_{N=10}, \quad (8)$$

where the measurement probabilities for the full system are given by:

$$|\langle s_{1}s_{2}\ldots s_{19}|\psi\rangle_{N=19}|^{2}=|\langle s_{1}s_{2}\ldots s_{9}|\psi\rangle_{N=9}|^{2}\,|\langle s_{10}\ldots s_{19}|\psi\rangle_{N=10}|^{2}. \quad (9–10)$$

The distributed ansatz is then written as:

$$|\psi(\theta)\rangle_{N}=|\psi(\theta^{(1)})\rangle_{N_{1}}\otimes|\psi(\theta^{(2)})\rangle_{N_{2}}, \quad (11)$$

where $|\psi(\theta^{(1)})\rangle_{N_{1}}$ and $|\psi(\theta^{(2)})\rangle_{N_{2}}$ correspond to the components as defined in Eq. 2. Following the training procedure outlined in the original QT framework, the QT method with a TN mapping model and a distributed circuit ansatz is illustrated in Fig. 2.
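A minimal sketch of this distributed evaluation, again assuming PennyLane purely for illustration: each sub-circuit is the real-amplitude ansatz of Eq. 2 acting on its own register, and the full $2^{19}$-dimensional probability vector of Eqs. 8–11 is recovered as the Kronecker product of the two sub-circuit distributions, so that no 19-qubit circuit is ever simulated directly. The parameter values and helper name `real_amplitude_probs` are arbitrary.

```python
import numpy as np
import pennylane as qml

def real_amplitude_probs(theta, n_qubits, L):
    """Probabilities of the real-amplitude ansatz (Eq. 2) on one sub-circuit."""
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def circuit():
        for l in range(L):
            for j in range(n_qubits):
                qml.RY(theta[l, j], wires=j)
            for i in range(n_qubits - 1):
                qml.CNOT(wires=[i, i + 1])
        return qml.probs(wires=range(n_qubits))

    return circuit()

L = 2
theta1 = np.random.uniform(0, np.pi, size=(L, 9))     # parameters of sub-circuit 1
theta2 = np.random.uniform(0, np.pi, size=(L, 10))    # parameters of sub-circuit 2

p1 = real_amplitude_probs(theta1, 9, L)                # 2^9 probabilities
p2 = real_amplitude_probs(theta2, 10, L)               # 2^10 probabilities

# Eqs. 9-10: probabilities of the product state factorize, so the full 2^19-dim
# distribution is the Kronecker product of the two sub-circuit distributions.
p_full = np.kron(p1, p2)
print(p_full.shape, np.isclose(p_full.sum(), 1.0))     # (524288,) True
```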

III Numerical Results and Discussion

Figure 3: The comparison between the TN mapping model with the distributed circuit ansatz and the MLP mapping model with the conventional circuit ansatz is presented. The blue line represents the results obtained in [12].

The effectiveness of the TN mapping model and the distributed circuit ansatz is evaluated using the following setup: the target model is the same convolutional neural network (CNN) used in the original QT paper [12]. For the CIFAR-10 classification task, this model contains 285226 parameters. The results for the QT framework with the MLP mapping model, shown as the blue line in Fig. 3 (QT-MLP), are taken from [12] as the baseline for comparison; its QNN repetition counts $L$, ranging from small to large parameter counts, are $\{19,95,171,247,323\}$. In comparison, the QT-TN distributed-ansatz results with $L\in\{19,38,76\}$ are displayed, with the parameter count controlled by the bond dimension of the MPS, $r\in\{2,4,8,16,24\}$. It is evident that the QT-TN results outperform those of the MLP mapping model: at a testing accuracy of approximately $60\%$, the QT-TN model requires slightly more than 10000 parameters, whereas the MLP mapping model requires approximately 23000. Nevertheless, both approaches offer significant compression compared to the original parameter count of 285226.

An interesting property can also be observed in Fig. 3. Comparing how testing accuracy grows with parameter count, the QT-MLP curve is noticeably steeper than the QT-TN curve. The QT-TN results are obtained by increasing the “classical” TN parameters, specifically the bond dimension, while the QT-MLP results reflect an increase in “quantum” parameters, specifically the QNN block repetition $L$, as in [12]. This indicates that increasing classical and quantum parameters is not equivalent: increasing quantum parameters yields a greater improvement in accuracy per added parameter. However, in the current noisy intermediate-scale quantum (NISQ) era, increasing the number of QNN layers requires longer qubit coherence times on real quantum hardware, as well as extended circuit simulation time on classical simulators. In this context, although increasing the bond dimension of the TN mapping model yields a smaller performance improvement per added parameter, it remains a cost-effective way to map quantum state measurement probabilities to classical NN weights at the current stage.

IV Conclusion and Future Work

In this work, we introduced significant enhancements to the QT framework by replacing the conventional MLP mapping model with a TN mapping model and implementing a distributed circuit ansatz to mitigate the challenges posed by large qubit requirements. These improvements enable more efficient parameter representation and reduce the computational overhead in both quantum hardware and classical simulation. Our results, validated on the CIFAR-10 classification task, show that the QT-TN model offers superior parameter efficiency compared to the QT-MLP baseline, requiring fewer parameters to reach comparable levels of accuracy.

A key observation from our findings is the different impact of increasing quantum versus classical parameters on model performance. While increasing quantum parameters, such as the QNN block repetition, provides greater accuracy gains per parameter unit, it also imposes significant demands on quantum coherence time and circuit simulation. In contrast, increasing classical parameters via the bond dimension of the tensor network provides a more scalable and practical solution in the NISQ era, where quantum resources are limited and expensive to deploy.

From a broader perspective, this work demonstrates that hybrid quantum-classical frameworks like QT are highly adaptable. By introducing TNs, we open the possibility of leveraging classical techniques from quantum many-body physics to improve the scalability and performance of QML models. Additionally, our distributed circuit ansatz aligns with current trends toward modular and distributed quantum computation, positioning the QT framework as a forward-looking solution that can evolve alongside advancements in quantum hardware.

The significance of our contribution lies not only in improving the scalability of the QT framework but also in demonstrating a path forward for hybrid architectures that balance quantum and classical computation. By decoupling inference from quantum hardware and optimizing the training phase using TNs, we enable more flexible deployment of QML models in real-world applications. This adaptability makes the QT-TN framework particularly suitable for use cases in ML tasks that demand scalability, efficiency, and minimal reliance on quantum resources.

Our future work will focus on exploring more advanced tensor network architectures to further improve the efficiency and scalability of the QT framework. We aim to investigate alternative tensor decompositions, such as tree tensor networks (TTN) and projected entangled pair states (PEPS), which may offer better compression rates and accuracy trade-offs. Another area of interest is optimizing the distributed circuit ansatz to further reduce the qubit count without compromising model performance. Furthermore, testing the proposed framework on larger datasets and more complex tasks, such as reinforcement learning and natural language processing, will provide a broader assessment of its generalizability. Finally, integrating error mitigation techniques and optimizing the TN mapping model for real quantum hardware will be crucial in advancing the framework toward practical, real-world applications.

References

  • [1] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, “Quantum machine learning,” Nature, vol. 549, no. 7671, pp. 195–202, 2017.
  • [2] V. Dunjko, J. M. Taylor, and H. J. Briegel, “Quantum-enhanced machine learning,” Physical Review Letters, vol. 117, no. 13, p. 130501, 2016.
  • [3] H.-K. Lau, R. Pooser, G. Siopsis, and C. Weedbrook, “Quantum machine learning over infinite dimensions,” Physical Review Letters, vol. 118, no. 8, p. 080501, 2017.
  • [4] K.-C. Chen, X. Xu, H. Makhanov, H.-H. Chung, and C.-Y. Liu, “Quantum-enhanced support vector machine for large-scale multi-class stellar classification,” in International Conference on Intelligent Computing.   Springer, 2024, pp. 155–168.
  • [5] S. Y.-C. Chen, C.-H. H. Yang, J. Qi, P.-Y. Chen, X. Ma, and H.-S. Goan, “Variational quantum circuits for deep reinforcement learning,” IEEE Access, vol. 8, pp. 141007–141024, 2020.
  • [6] C.-H. A. Lin, C.-Y. Liu, and K.-C. Chen, “Quantum-train long short-term memory: Application on flood prediction problem,” arXiv preprint arXiv:2407.08617, 2024.
  • [7] C.-Y. Liu, C.-H. A. Lin, and K.-C. Chen, “Learning quantum phase estimation by variational quantum circuits,” arXiv preprint arXiv:2311.04690, 2023.
  • [8] C.-Y. Liu, “Practical quantum search by variational quantum eigensolver on noisy intermediate-scale quantum hardware,” in 2023 International Conference on Computational Science and Computational Intelligence (CSCI).   IEEE, 2023, pp. 397–403.
  • [9] C.-Y. Liu and H.-S. Goan, “Reinforcement learning quantum local search,” in 2023 IEEE International Conference on Quantum Computing and Engineering (QCE), vol. 2.   IEEE, 2023, pp. 246–247.
  • [10] H.-Y. Huang, M. Broughton, M. Mohseni, R. Babbush, S. Boixo, H. Neven, and J. R. McClean, “Power of data in quantum machine learning,” Nature Communications, vol. 12, no. 1, p. 2631, 2021.
  • [11] C.-Y. Liu, E.-J. Kuo, C.-H. A. Lin, S. Chen, J. G. Young, Y.-J. Chang, and M.-H. Hsieh, “Training classical neural networks by quantum machine learning,” arXiv preprint arXiv:2402.16465, 2024.
  • [12] C.-Y. Liu, E.-J. Kuo, C.-H. A. Lin, J. G. Young, Y.-J. Chang, M.-H. Hsieh, and H.-S. Goan, “Quantum-train: Rethinking hybrid quantum-classical machine learning in the model compression perspective,” arXiv preprint arXiv:2405.11304, 2024.
  • [13] C.-Y. Liu, C.-H. A. Lin, C.-H. H. Yang, K.-C. Chen, and M.-H. Hsieh, “Qtrl: Toward practical quantum reinforcement learning via quantum-train,” arXiv preprint arXiv:2407.06103, 2024.
  • [14] F. Pan and P. Zhang, “Simulation of quantum circuits using the big-batch tensor network method,” Physical Review Letters, vol. 128, no. 3, p. 030501, 2022.
  • [15] K.-C. Chen, T.-Y. Li, Y.-Y. Wang, S. See, C.-C. Wang, R. Willie, N.-Y. Chen, A.-C. Yang, and C.-Y. Lin, “cuTN-QSVM: cuTensorNet-accelerated quantum support vector machine with cuQuantum SDK,” arXiv preprint arXiv:2405.02630, 2024.
  • [16] D. Ferrari, S. Carretta, and M. Amoretti, “A modular quantum compilation framework for distributed quantum computing,” IEEE Transactions on Quantum Engineering, 2023.
  • [17] F. Burt, K.-C. Chen, and K. Leung, “Generalised circuit partitioning for distributed quantum computing,” arXiv preprint arXiv:2408.01424, 2024.
  • [18] K.-C. Chen, X. Li, X. Xu, Y.-Y. Wang, and C.-Y. Liu, “Quantum-classical-quantum workflow in quantum-hpc middleware with gpu acceleration,” in 2024 International Conference on Quantum Communications, Networking, and Computing (QCNC).   IEEE, 2024, pp. 304–311.
  • [19] E. Stoudenmire and D. J. Schwab, “Supervised learning with tensor networks,” Advances in Neural Information Processing Systems, vol. 29, 2016.