
Feasible Architecture for Quantum Fully Convolutional Networks

Yusui Chen [email protected] Physics Department, New York Institute of Technology, Old Westbury, NY 11568, USA    Wenhao Hu School of Computing Science, University of Glasgow, Glasgow G12 8QQ, UK    Xiang Li QuantumX Technologies Inc., 100 Wall Street No. 1602, New York, NY 10005
Abstract

Fully convolutional networks are robust in performing semantic segmentation, with many applications from signal processing to computer vision. Starting from the fundamental principles of variational quantum algorithms, we propose a feasible pure quantum architecture that can be operated on noisy intermediate-scale quantum devices. In this work, a parameterized quantum circuit consisting of three layers, convolutional, pooling, and upsampling, is characterized by generic one-qubit and two-qubit gates and driven by a classical optimizer. This architecture supplies a solution for realizing dynamical programming on a one-way quantum computer and maximally takes advantage of quantum computing throughout the calculation. Moreover, our algorithm works on many physical platforms; in particular, the upsampling layer can use either conventional qubits or multiple-level systems. Through numerical simulations, our study demonstrates the successful training of a pure quantum fully convolutional network and discusses its advantages by comparison with the hybrid solution.

Quantum computing, Neural network, Machine learning, Deep learning, FCN

I Introduction

Deep learning as a method of data analysis allows computers to discover and improve models from data and to perform automatically with minimal human intervention. Convolutional neural networks (CNNs) [1, 2], as one type of deep learning algorithm, have many successful applications in science and technology, e.g., high-energy particle physics, condensed matter physics, biological and chemical systems, image recognition, and natural language processing [3, 4, 5, 6, 7]. Through multiple designed convolutional and pooling layers, the original data is coarse-grained and fully connected. As a result, CNNs provide a practical method to capture spatial and temporal dependencies.

However, the success of deep learning algorithms highly depends on the volume of data and the available computational power. Quantum machine learning (QML) is a feasible approach to computational problems involving large data because quantum computing has a natural advantage over classical computing: it can turn dense classical computation into a series of measurements on a quantum system and speed up the process exponentially [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]. Due to the lack of quantum error correction [20, 21], current quantum computers cannot implement generic quantum algorithms, e.g., Shor's algorithm and Grover's algorithm, on noisy intermediate-scale quantum (NISQ) devices [22]. However, recent works have demonstrated that variational quantum algorithms (VQAs), which use the variational principle to provide approximate solutions to a computational problem, can be implemented on NISQ devices. In general, a VQA can be mapped to a fully-parameterized quantum circuit (PQC) and driven by a classical optimizer [23, 24, 25, 26, 27, 28, 29]. Quantum convolutional neural networks (QCNNs), as an example of QML, have emerged at the overlap of classical convolutional networks (CNNs) and QML, and provide a potential solution for speeding up data processing and increasing the capability of handling data on NISQ devices [30, 31, 32, 33]. In addition, QCNNs open the door to further quantum-advantage applications in the deep learning area, because CNNs are the base architecture of many advanced neural networks. Although the CNN/QCNN architecture provides a coarse-grained model and succeeds in pattern recognition, it has limited ability to perform dense recognition of multiple data patterns. Fully convolutional networks (FCNs) [34, 35, 36, 37, 38], as a natural extension of the CNN architecture to an encoding-decoding framework, can realize complex data pattern discovery and supply a practical method to perform dense prediction that labels each unit of data with a specific class, e.g., semantic segmentation, image segmentation, carrier signal detection in broadband power spectra, and time series classification. A compromise solution for performing FCNs in the framework of quantum computing is a hybrid model in which the outcomes of a QCNN are fed into a classical FCN. But the hybrid solution cannot avoid the bottlenecks of classical algorithms. As a result, a pure quantum model is necessary, in which a PQC consists of a pure quantum CNN and a quantum upsampling stage (QFCN). Moreover, the quantum solution can be performed on a one-way quantum computer, which allows dynamical programming and reuses the measured qubits to perform the decoding process. Particularly, in the domain of computer vision, video recognition with deep learning is built to handle data that is continuous in time.

Inspired by QCNNs and the multi-scale entanglement renormalization ansatz (MERA) [39, 40], we present a fully parameterized quantum circuit for the QFCN model to perform semantic segmentation on classical data. Although VQA-based algorithms have been proposed to address classically hard computational problems on quantum computers, some key features such as the trainability and efficiency of VQAs are still under debate. In this work, we also compare the performance of the hybrid model and the pure quantum one, and discuss the impacts of the volume of the data set and of fine adjustments to the decoding algorithm.

In this work, we start by introducing the basic PQC for the QFCN model based on the architecture of VQAs. We then focus on numerical simulations of the algorithm in different setups, including the hybrid style and the pure quantum structure, as well as the numerical stability on different data sets. In the last section, we present predictions for quantum advantage on NISQ devices and conclude with possible applications in other relevant quantum computing fields.

II Theory

II.1 Quantum variational learning

Figure 1: Scheme of generic quantum and classical variational learning algorithms.

Generally, variational learning trains a fully-parameterized map $f:X\mapsto Y$ that minimizes the distance between the trained model and the true one. Starting from a set of randomly chosen parameters, the optimizer evaluates the quality of the model by measuring the distance between the output and the labeled data in the test group, and then updates the parameters of the model. Once the distance is less than the error tolerance, the model is considered an approximation of the real model. With current techniques, the most time-consuming components are: (1) calculating the output variable, due to the large size of the data; and (2) finding the global minimum on the high-dimensional parameter hypersurface [41, 18].

A typical variational quantum algorithm (VQA) consists of three steps: (1) preparing the raw data in the initial state $|\psi_{in}(\vec{x})\rangle$; (2) measuring the output state $|\psi_{out}\rangle=\hat{U}(\theta)|\psi_{in}\rangle$ to compute the required data $Y_{i}=\mathrm{Tr}(\hat{Y}_{i}|\psi_{out}\rangle\langle\psi_{out}|)$, where the operator $\hat{U}(\theta)$ characterizes the whole quantum circuit and $\{\theta_{i}\}$ in $\theta$ are all tunable parameters; (3) updating the parameters based on the outcome of the classical optimizer, $\hat{U}(\theta)\rightarrow\hat{U}(\theta^{\prime})$. As shown in Fig. 1, VQAs can speed up the computing process by turning the classical computing process into a series of quantum measurements. As a result, this advantage becomes dominant when the size of the data exceeds a threshold. Some practical applications of quantum advantage over classical supercomputers have been explored, e.g., variational algorithms and machine learning problems.
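To make the three steps concrete, the following minimal sketch (in Python with Cirq, not the code used for the results below) encodes a toy data vector by angle encoding, applies an illustrative one-layer ansatz in place of $\hat{U}(\theta)$, and reads out a Pauli-$Z$ expectation value. The encoding, ansatz, and readout operator are all assumptions for illustration.

import numpy as np
import cirq
import sympy

# Step (1): prepare |psi_in(x)> by angle encoding, one RX rotation per
# qubit (an illustrative choice; the simulations below also use RX encoding).
def encode(qubits, x):
    return cirq.Circuit(cirq.rx(xi)(q) for xi, q in zip(x, qubits))

# A placeholder one-layer ansatz standing in for U(theta): parameterized
# RY rotations followed by nearest-neighbour CZ entanglers.
def ansatz(qubits, symbols):
    circuit = cirq.Circuit(cirq.ry(s)(q) for s, q in zip(symbols, qubits))
    circuit += (cirq.CZ(a, b) for a, b in zip(qubits, qubits[1:]))
    return circuit

qubits = cirq.LineQubit.range(4)
theta = sympy.symbols('theta0:4')
x = np.random.uniform(0, np.pi, size=4)        # toy input data

circuit = encode(qubits, x) + ansatz(qubits, theta)

# Step (2): Y = Tr(Y_hat |psi_out><psi_out|) with Y_hat = Z on the last
# qubit; step (3) would feed this value back to a classical optimizer.
resolver = cirq.ParamResolver({s: 0.1 for s in theta})
state = cirq.Simulator().simulate(circuit, resolver).final_state_vector
qmap = {q: i for i, q in enumerate(qubits)}
print(cirq.Z(qubits[-1]).expectation_from_state_vector(state, qmap).real)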

In the classical optimizer, the distance between the trained data $Y_{i}$ and the labeled data $\mathcal{Y}_{i}$ is characterized by the cost function. In this work, we use the mean squared error (MSE) [42] as the loss function $\mathcal{L}(\theta)$,

\mathcal{L}(\theta)=\frac{1}{N}\sum_{j=1}^{N}(\mathcal{Y}_{j}-Y_{j})^{2}. \qquad (1)

Meanwhile, the minimum of the loss function can be located by tweaking the parameters $\theta$ iteratively,

\theta^{k+1}=\theta^{k}-\delta\,\nabla_{\theta}\mathcal{L}(\theta^{k}),

where $\delta$ is the step size, until the loss converges within the error tolerance $\epsilon$,

|\mathcal{L}(\theta^{k+1})-\mathcal{L}(\theta^{k})|\leq\epsilon.
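The classical optimizer loop above can be summarized in a short NumPy sketch. Here `quantum_model` is a stand-in for the circuit evaluation of step (2), and the finite-difference gradient is a placeholder for whatever gradient estimator (e.g., the parameter-shift rule) a given platform provides.

import numpy as np

def mse_loss(theta, X, Y_label, quantum_model):
    # Eq. (1): L(theta) = (1/N) sum_j (Y_label_j - Y_j)^2
    Y = np.array([quantum_model(x, theta) for x in X])
    return np.mean((Y_label - Y) ** 2)

def train(theta, X, Y_label, quantum_model, delta=0.1, eps=1e-4, h=1e-3):
    """Iterate theta^{k+1} = theta^k - delta * grad L(theta^k) until
    |L(theta^{k+1}) - L(theta^k)| <= eps."""
    loss = mse_loss(theta, X, Y_label, quantum_model)
    while True:
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            step = np.zeros_like(theta)
            step[i] = h
            grad[i] = (mse_loss(theta + step, X, Y_label, quantum_model)
                       - mse_loss(theta - step, X, Y_label, quantum_model)) / (2 * h)
        theta = theta - delta * grad
        new_loss = mse_loss(theta, X, Y_label, quantum_model)
        if abs(new_loss - loss) <= eps:
            return theta, new_loss
        loss = new_loss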

II.2 Basic quantum CNN

Quantum convolutional neural networks (QCNNs) were originally designed to classify the quantum phase of a given state in spin-chain models, a notoriously difficult question in many-body physics. In more recent proposals, QCNNs refer to fully-parameterized quantum algorithms that can be trained by a classical optimizer. In those models, as shown in Fig. 2, the neural network structure is a parameterized quantum circuit, which contains multiple quantum gates to build up a quantum computing mission. By naturally mapping the classical convolution computation onto a many-body Hamiltonian, QCNNs resolve the difficulty and enhance the efficiency when dealing with large data sets as the system size increases.

A generic QCNN consists of two types of layers: the convolutional layer $\hat{U}_{j}$ and the pooling layer $\hat{V}_{j}$. Inside each layer, the operator $\hat{U}_{i}$ or $\hat{V}_{j}$ can be decomposed into a set of gates connecting all engaged qubits, where the quantum gates can be properly chosen depending on the physical realization. Theoretically, every quantum gate operated on qubits is equivalent to a combination of two fundamental types of gates: single-qubit gates and two-qubit entangling gates. The major difference between the two types of layers is that the convolutional layer does not change the dimension of the data, while the pooling layer decreases the size of the data. In quantum computing, decreasing the dimension of data is naturally realized by performing a partial measurement,

\rho_{R} = \mathrm{Tr}_{M}(\rho_{R\otimes M})
= \sum_{n,j,k,l,m}\langle\psi_{M}^{n}|\left(\rho_{jklm}\,|\psi_{R}^{j}\rangle|\psi_{M}^{k}\rangle\langle\psi_{R}^{l}|\langle\psi_{M}^{m}|\right)|\psi_{M}^{n}\rangle
= \sum_{j,l}\Big(\sum_{n}\rho_{jnln}\Big)|\psi_{R}^{j}\rangle\langle\psi_{R}^{l}|, \qquad (2)

where $\rho_{R\otimes M}$ is the total quantum state of the system before the measurement, which can be explicitly written in the product basis of the remaining qubits $\{|\psi_{R}^{j}\rangle\}$ and the to-be-measured qubits $\{|\psi_{M}^{k}\rangle\}$, with coefficients $\{\rho_{jklm}\}$. After performing the partial trace, the matrix element of the outcome state is $\rho_{jl}=\sum_{n}\rho_{jnln}$. The final classification is read out by a measurement on the last qubit.
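As a sanity check of Eq. (2), the pooling step reduces to the index contraction $\rho_{jl}=\sum_{n}\rho_{jnln}$ on the reshaped density matrix, which the following NumPy sketch implements; the Bell-state test case is purely illustrative.

import numpy as np

# Pooling as in Eq. (2): trace out the measured subsystem M from the
# joint density matrix of (remaining R) x (measured M). Reshaping gives
# indices (j, k, l, m) = (R-ket, M-ket, R-bra, M-bra), so the outcome
# element is rho_{jl} = sum_n rho_{jnln}.
def pool_partial_trace(rho, dim_r, dim_m):
    rho = rho.reshape(dim_r, dim_m, dim_r, dim_m)
    return np.einsum('jnln->jl', rho)

# Illustrative test: tracing one qubit out of a Bell state leaves the
# maximally mixed single-qubit state I/2.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(bell, bell.conj())
print(pool_partial_trace(rho, 2, 2))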

Figure 2: Sketch of multi-layer (L layers) quantum convolutional neural networks (QCNNs). All $\{U_{i},V_{j}\}$ are linear combinations of two-qubit quantum gates which fully connect all engaged qubits. The size is progressively decreased by measuring some qubits. The network output is read out via measurements on the final qubit.

II.3 Quantum fully convolutional networks

Figure 3: Sketch of two potential solutions to perform FCN based on the data obtained from the QCNN.

After extracting convoluted features and pooling, the coarse-grained data is fed into the FCN to perform semantic segmentation. Different from a traditional CNN, the readout is used to learn semantics and location jointly, where the key difficulty is resolving the inherent coupling between semantics and location. As a result, the next step is to upscale the coarse-grained data back to the original size. There are two potential solutions: (1) the hybrid solution and (2) the pure quantum solution, as shown in Fig. 3. The advantages of the pure quantum solution over the hybrid one are similar to those of QCNNs over traditional CNNs: the natural parallelism of a quantum circuit can exponentially speed up the computing process and extend the ability to deal with large-sized data. Moreover, from the previous discussion, the pure quantum solution can bring more global coherence into the learning process, which is important and helpful for increasing the quality of machine learning algorithms.

Figure 4: Sketch of quantum fully convolutional networks (QFCNs). All $\{W_{i}\}$ are linear combinations of two-qubit quantum gates which fully connect all engaged qubits. The size is increased by entangling measured qubits back into the circuit. The network output is read out by measuring all qubits.

In this work, we propose a parameterized quantum circuit reproducing pure quantum fully convolutional networks (QFCNs), as shown in Fig. 4. In the classical FCN, the transposed convolution is realized by integrating the semantic vectors through the so-called upsampling layers, which extend the dimension of the data back to the original size of the input. As a result, the outcome can skip some intermediate steps in deep learning and perform dense predictions.

In the context of quantum computing, increasing the dimension of states can be implemented by operating controlled gates on ancillary qubits. Such operations extend the size of the Hilbert space of the entire system. Depending on the quantum system and the problem, the outcome of the QCNN is no longer restricted to the last readout qubit. As shown in Fig. 4, the readout can involve multiple qubits, and the upsampling layer can be realized as

|\psi_{out}\rangle\rightarrow\begin{bmatrix}I_{ctrl}&0\\ 0&\hat{W}_{j}\end{bmatrix}\begin{bmatrix}|\psi_{out}\rangle\\ |\psi_{an}\rangle\end{bmatrix}, \qquad (3)

where $|\psi_{an}\rangle$ is the prepared initial state of the ancillary qubits (we use the cluster state in our work). Following the original idea of VQAs, the upsampling layer $\hat{W}_{j}$ can also be composed of single-qubit gates and non-local two-qubit gates. Without loss of generality, we train a unique $\hat{W}_{j}$ for every ancillary qubit. In some cases, the model can be simplified by using a limited number of $\hat{W}$ gates, lowering the cost of computation. The to-be-determined parameters in the $\hat{W}$ gates are trained over multiple epochs.
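A minimal Cirq sketch of one reading of Eq. (3) as a controlled operation is given below, assuming for illustration that $\hat{W}_{j}$ is a single parameterized rotation on one ancilla controlled by one readout qubit; the cluster-state-style preparation and the specific gate choice are assumptions, not the trained gates used in our simulations.

import cirq
import sympy

# A hypothetical two-qubit instance of Eq. (3): the ancilla is prepared
# cluster-state style (|+> plus a CZ link), and W_j is a parameterized
# RY rotation applied to the ancilla, controlled by a QCNN readout
# qubit, i.e. the block-diagonal unitary diag(I_ctrl, W_j).
readout, ancilla = cirq.LineQubit.range(2)
w = sympy.Symbol('w0')

upsample = cirq.Circuit(
    cirq.H(ancilla),
    cirq.CZ(readout, ancilla),
    cirq.ry(w)(ancilla).controlled_by(readout),
)
print(upsample)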

Our QFCN architecture mirrors the classical FCNs in the quantum context: it redistributes the global information stored in the outcome state of the QCNN circuit into a new system of the same size as the original data, and preserves the inherent tension between global and local information via entanglement. By measuring the outcome state, the learning process consists of initializing all parameters and progressively optimizing them until convergence,

\mathcal{L}(\theta)=\frac{1}{N}\sum_{j=1}^{N}\left|\vec{\mathcal{Y}}_{j}-\vec{Y}_{j}\right|^{2}, \qquad (4)

where $\vec{Y}_{j}$ is the readout of all qubits at the end of the upsampling layer and $\vec{\mathcal{Y}}_{j}$ is the column vector of the labeled data.
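For reference, the dense readout $\vec{Y}_{j}$ entering Eq. (4) amounts to collecting $\langle Z\rangle$ on every qubit of the final state, as in the short sketch below; the circuit and the choice of observable are placeholders.

import numpy as np
import cirq

# Sketch of the dense readout entering Eq. (4): Y_j collects <Z> on
# every qubit of the final state. `final_circuit` is a placeholder for
# the full QFCN circuit with its parameters already resolved.
def dense_readout(final_circuit, qubits):
    state = cirq.Simulator().simulate(final_circuit).final_state_vector
    qmap = {q: i for i, q in enumerate(qubits)}
    return np.array([cirq.Z(q).expectation_from_state_vector(state, qmap).real
                     for q in qubits])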

III Results

We simulate a quantum circuit consisting of 8 qubits using the Google TensorFlow Quantum package [43]. The 8 qubits are initially prepared in the cluster state. In our simulations, each set of training data, which carries noise and consists of two patterns, is loaded into the 8 qubits as the rotation angles of $RX(\theta)$ gates. In the convolutional and pooling layers, we employ 15 elementary parameterized quantum gates to build every fully parameterized two-qubit gate [30]. For simplicity, all gates acting on every pair of qubits in the same layer are chosen to be the same. To perform upsampling, the output of the last pooling layer consists of two qubits. In Figs. 5(a) and (b), the results from the hybrid solution and the pure quantum solution are compared. The pure quantum solution converges faster than the hybrid solution. In addition, the accuracy of the model trained via the pure quantum circuit is higher than that of the hybrid model. The comparison between the two potential solutions indicates that the quantum solution better supplies the global coherence between non-local qubits, which leads to higher accuracy in validation. However, the trade-off is that the loss is not as low as that of the hybrid solution, as shown in Fig. 5(a), which can be explained by the fact that classical FCNs can better expose local patterns through fitting the training data. (Here, the overfitting issue is not considered.) The comparisons validate that the overall QFCN architecture performs as intended: both the hybrid and the pure QFCN models successfully realize the learning process.

Figure 5: The performance of the pure quantum FCN (QFCN) model compared to the hybrid model.
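For readers reproducing this setup, one standard way to build a fully parameterized two-qubit gate from 15 elementary gates, following the decomposition popularized by the TensorFlow Quantum QCNN tutorial, is sketched below in Cirq. This is a common construction consistent with the description above, not necessarily the exact gate ordering used in our code.

import cirq
import sympy

def one_qubit_unitary(q, symbols):
    # General single-qubit rotation built from 3 parameterized gates.
    return cirq.Circuit(
        cirq.X(q) ** symbols[0],
        cirq.Y(q) ** symbols[1],
        cirq.Z(q) ** symbols[2])

def two_qubit_unitary(q0, q1, symbols):
    # Fully parameterized two-qubit gate from 15 elementary gates:
    # 3 + 3 local parameters, 3 XX/YY/ZZ interactions, then 3 + 3 more.
    circuit = one_qubit_unitary(q0, symbols[0:3])
    circuit += one_qubit_unitary(q1, symbols[3:6])
    circuit += [cirq.XX(q0, q1) ** symbols[6],
                cirq.YY(q0, q1) ** symbols[7],
                cirq.ZZ(q0, q1) ** symbols[8]]
    circuit += one_qubit_unitary(q0, symbols[9:12])
    circuit += one_qubit_unitary(q1, symbols[12:15])
    return circuit

q0, q1 = cirq.GridQubit.rect(1, 2)
print(two_qubit_unitary(q0, q1, sympy.symbols('p0:15')))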

In our experiment, we also compare two possible ways to build the pure quantum upsampling layers. First, we operate a single shared two-qubit gate to extend the size of the Hilbert space of the system, as in the QCNN setup. Second, we instead use a different two-qubit gate for every pair of qubits. In Figs. 6(a) and 6(b), the comparisons between these two setups are displayed. The fully-parameterized solution outperforms the shared-gate one in every respect: it converges faster, has better accuracy in validation, and achieves lower error in training. In the classical algorithm, the upsampling kernels are initially the same for every element in the data, which is similar to integrating an ancillary qubit into the system. The next step in the classical algorithm is to learn the variables in each kernel via gradient descent. It is worth noting that each kernel is independent in this process. This explains why the fixed and uniform ancillary-qubit model does not work: it wipes out the independence of qubits at different locations. Moreover, in this experiment we use ancillary qubits to extend the size of the Hilbert space, but this is not the only solution. Other ancillary systems, e.g., multi-level systems and continuous systems, can also serve as the upsampling kernel in the deep learning process. This makes our QFCN architecture workable on various quantum platforms for solving particular computational questions.

Figure 6: The performance of the unique upsampling gate model compared to the fully parameterized quantum gates model.

IV Conclusion

In this work, we propose a quantum fully convolutional network to perform semantic segmentation on NISQ devices. Based on VQAs, the QFCN can be characterized by a parameterized quantum circuit that is trained by converging the loss function under a classical optimizer. QFCNs can provide a promising, dynamical, and scalable quantum machine learning application to speed up the solution of real-world problems. Our simulations demonstrate the feasibility of performing classical FCNs on NISQ devices, and indicate that within a typical deep neural network architecture, QFCNs can increase the accuracy of the network. Moreover, our results present some potential advantages of quantum algorithms: (1) pure quantum solutions can maximally speed up the computing process in the convolutional, pooling, and upsampling layers; (2) quantum upsampling kernels can bring in global coherence between non-local qubits, which better fits large-sized data when there are weak couplings between separate parts. In addition, our algorithm offers a potential way to prepare measured qubits in a new initial state to continue the QFCN, a dynamical architecture that can increase the efficiency of the entire system. Lastly, our algorithm is open to arbitrarily sized systems serving as the upsampling kernel. As a result, we can freely choose atomic systems or continuous systems to realize the upsampling, and use various quantum control strategies to operate the upsampling circuits.

Although this research does not show a quantum advantage over the classical counterpart in deep machine learning, the results do show that the pure quantum solution achieves convergence faster than the hybrid solution. A number of practical questions need further discussion to make QFCNs fully functional, e.g., increasing the efficiency of preparing the initial states, performing measurements on qubits, and operating multiple two-qubit gates in the context of open quantum systems. These discussions will be included in our future work.

Acknowledgements.
We acknowledge grant support from the NYIT’s Institutional Support for Research and Creativity (ISRC) Grants.

References

  • Vapnik et al. [1994] V. Vapnik, E. Levin, and Y. L. Cun, Measuring the vc-dimension of a learning machine, Neural Computation 6, 851 (1994).
  • Indolia et al. [2018] S. Indolia, A. K. Goswami, S. Mishra, and P. Asopa, Conceptual understanding of convolutional neural network- a deep learning approach, Procedia Computer Science 132, 679 (2018), international Conference on Computational Intelligence and Data Science.
  • Bhimji et al. [2017] W. Bhimji, S. A. Farrell, T. Kurth, M. Paganini, Prabhat, and E. Racah, Deep neural networks for physics analysis on low-level whole-detector data at the LHC, J. Phys.: Conf. Ser. 1085, 042034 (2017), arXiv:1711.03573.
  • Cao et al. [2019] Z. Cao, Y. Dan, Z. Xiong, C. Niu, X. Li, S. Qian, and J. Hu, Convolutional neural networks for crystal material property prediction using hybrid orbital-field matrix and magpie descriptors, Crystals 9, 10.3390/cryst9040191 (2019).
  • Ma et al. [2021] H. Ma, T. W. Tan, and K. H. K. Ban, A multi-task CNN learning model for taxonomic assignment of human viruses, BMC Bioinformatics 22, 194 (2021).
  • LeCun et al. [1999] Y. LeCun, P. Haffner, L. Bottou, and Y. Bengio, Object recognition with gradient-based learning, in Shape, Contour and Grouping in Computer Vision (Springer Berlin Heidelberg, Berlin, Heidelberg, 1999) pp. 319–345.
  • Wang and Gang [2018] W. Wang and J. Gang, Application of convolutional neural network in natural language processing, in 2018 International Conference on Information Systems and Computer Aided Education (ICISCAE) (2018) pp. 64–70.
  • Nielsen and Chuang [2010] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition (Cambridge University Press, 2010).
  • Giovannetti et al. [2011] V. Giovannetti, S. Lloyd, and L. Maccone, Advances in quantum metrology, Nature Photonics 5, 222 (2011).
  • Biamonte et al. [2017] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549, 195 (2017).
  • Orús et al. [2019] R. Orús, S. Mugel, and E. Lizaso, Quantum computing for finance: Overview and prospects, Reviews in Physics 4, 100028 (2019).
  • Houssein et al. [2021] E. H. Houssein, Z. Abohashima, M. Elhoseny, and W. M. Mohamed, Hybrid quantum convolutional neural networks model for covid-19 prediction using chest x-ray images (2021), arXiv:2102.06535 [eess.IV] .
  • Zhong et al. [2020] H.-S. Zhong, H. Wang, Y.-H. Deng, M.-C. Chen, L.-C. Peng, Y.-H. Luo, J. Qin, D. Wu, X. Ding, Y. Hu, P. Hu, X.-Y. Yang, W.-J. Zhang, H. Li, Y. Li, X. Jiang, L. Gan, G. Yang, L. You, Z. Wang, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, Quantum computational advantage using photons, Science 370, 1460 (2020), https://www.science.org/doi/pdf/10.1126/science.abe8770.
  • Carolan et al. [2020] J. Carolan, M. Mohseni, J. P. Olson, M. Prabhu, C. Chen, D. Bunandar, M. Y. Niu, N. C. Harris, F. N. C. Wong, M. Hochberg, S. Lloyd, and D. Englund, Variational quantum unsampling on a quantum photonic processor, Nature Physics 16, 322 (2020).
  • Debnath et al. [2016] S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe, Demonstration of a small programmable quantum computer with atomic qubits, Nature 536, 63 (2016).
  • Kokail et al. [2019] C. Kokail, C. Maier, R. van Bijnen, T. Brydges, M. K. Joshi, P. Jurcevic, C. A. Muschik, P. Silvi, R. Blatt, C. F. Roos, and P. Zoller, Self-verifying variational quantum simulation of lattice models, Nature 569, 355 (2019).
  • Hempel et al. [2018] C. Hempel, C. Maier, J. Romero, J. McClean, T. Monz, H. Shen, P. Jurcevic, B. P. Lanyon, P. Love, R. Babbush, A. Aspuru-Guzik, R. Blatt, and C. F. Roos, Quantum chemistry calculations on a trapped-ion quantum simulator, Phys. Rev. X 8, 031022 (2018).
  • Peruzzo et al. [2014] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’Brien, A variational eigenvalue solver on a photonic quantum processor, Nature Communications 5, 4213 (2014).
  • Chatterjee and Yu [2017] R. Chatterjee and T. Yu, Generalized coherent states, reproducing kernels, and quantum support vector machines, Quantum information and computation 17, 1292 (2017).
  • Chen et al. [2014] Y. Chen, J. Q. You, and T. Yu, Exact non-markovian master equations for multiple qubit systems: Quantum-trajectory approach, Physical Review A 90, 052104 (2014).
  • Ma et al. [2014] T. Ma, Y. Chen, T. Chen, S. R. Hedemann, and T. Yu, Crossover between non-markovian and markovian dynamics induced by a hierarchical environment, Phys. Rev. A 90, 042108 (2014).
  • Preskill [2018] J. Preskill, Quantum Computing in the NISQ era and beyond, Quantum 2, 79 (2018).
  • Wei et al. [2021] S. Wei, Y. Chen, Z. Zhou, and G. Long, A quantum convolutional neural network on nisq devices (2021), arXiv:2104.06918 [quant-ph] .
  • Cerezo et al. [2021] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, Variational quantum algorithms, Nature Reviews Physics 3, 625 (2021).
  • Chen et al. [2020] S. Y. Chen, T. Wei, C. Zhang, H. Yu, and S. Yoo, Quantum convolutional neural networks for high energy physics data analysis, CoRR abs/2012.12177 (2020), arXiv:2012.12177.
  • Abbas et al. [2021] A. Abbas, D. Sutter, C. Zoufal, A. Lucchi, A. Figalli, and S. Woerner, The power of quantum neural networks, Nature Computational Science 1, 403 (2021).
  • McClean et al. [2016] J. R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik, The theory of variational hybrid quantum-classical algorithms, New Journal of Physics 18, 023023 (2016).
  • Liu et al. [2020] D. Liu, Z. Yao, and Q. Zhang, Quantum-classical machine learning by hybrid tensor networks (2020), arXiv:2005.09428 [cs.LG] .
  • Zhu et al. [2019] D. Zhu, N. M. Linke, M. Benedetti, K. A. Landsman, N. H. Nguyen, C. H. Alderete, A. Perdomo-Ortiz, N. Korda, A. Garfoot, C. Brecque, L. Egan, O. Perdomo, and C. Monroe, Training of quantum circuits on a hybrid quantum computer, Science Advances 5, 10.1126/sciadv.aaw9918 (2019).
  • Cong et al. [2019] I. Cong, S. Choi, and M. D. Lukin, Quantum convolutional neural networks, Nature Physics 15, 1273 (2019).
  • Oh et al. [2020] S. Oh, J. Choi, and J. Kim, A tutorial on quantum convolutional neural networks (qcnn), in 2020 International Conference on Information and Communication Technology Convergence (ICTC) (2020) pp. 236–239.
  • Pesah et al. [2020] A. Pesah, M. Cerezo, S. Wang, T. Volkoff, A. T. Sornborger, and P. J. Coles, Absence of barren plateaus in quantum convolutional neural networks (2020), arXiv:2011.02966 [quant-ph] .
  • Franken and Georgiev [2020] L. Franken and B. Georgiev, Explorations in quantum neural networks with intermediate measurements, in 28th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2020, Bruges, Belgium, October 2-4, 2020 (2020) pp. 297–302.
  • Long et al. [2015] J. Long, E. Shelhamer, and T. Darrell, Fully convolutional networks for semantic segmentation, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015).
  • Huang et al. [2020] H. Huang, J.-Q. Li, J. Wang, and H. Wang, Fcn-based carrier signal detection in broadband power spectrum, IEEE Access 8, 113042 (2020).
  • Guo et al. [2019] Y. Guo, Z. Xiao, L. Geng, J. Wu, F. Zhang, Y. Liu, and W. Wang, Fully convolutional neural network with gru for 3d braided composite material flaw detection, IEEE Access 7, 151180 (2019).
  • Karim et al. [2018] F. Karim, S. Majumdar, H. Darabi, and S. Chen, Lstm fully convolutional networks for time series classification, IEEE Access 6, 1662 (2018).
  • Rosafalco et al. [2020] L. Rosafalco, A. Manzoni, S. Mariani, and A. Corigliano, Fully convolutional networks for structural health monitoring through multivariate time series classification, Advanced Modeling and Simulation in Engineering Sciences 7, 38 (2020).
  • Vidal [2009] G. Vidal, Entanglement renormalization: an introduction, Understanding Quantum Phase Transitions  (2009).
  • Vidal [2008] G. Vidal, Class of quantum many-body states that can be efficiently simulated, Phys. Rev. Lett. 101, 110501 (2008).
  • Benenti et al. [2018] G. Benenti, G. Casati, D. Rossini, and G. Strini, Principles of Quantum Computation and Information (World Scientific, 2018), https://www.worldscientific.com/doi/pdf/10.1142/10909.
  • Sammut and Webb [2010] C. Sammut and G. I. Webb, eds., Mean squared error, in Encyclopedia of Machine Learning (Springer US, Boston, MA, 2010) pp. 653–653.
  • Abadi et al. [2015] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems (2015), software available from tensorflow.org.