A Quantum Information Theoretic View On A Deep Quantum Neural Network
Abstract
We discuss a quantum version of an artificial deep neural network where the role of neurons is taken over by qubits and the role of weights is played by unitaries. The role of the non-linear activation function is taken over by subsequently tracing out layers (qubits) of the network. We study two examples and discuss the learning from a quantum information theoretic point of view. In detail, we show that the quantity defining the lower bound of the Heisenberg uncertainty relation governs the change of the parameters in the gradient descent of the learning process. We raise the question whether the limit set by Nature for two non-commuting observables, quantified in the Heisenberg uncertainty relation, rules the optimization of the quantum deep neural network. We find a negative answer.
I Introduction

Machine Learning, be it supervised, unsupervised, reinforcement learning or GANs (Generative Adversarial Networks), has shown impressive successes in recent years (e.g. Ref. [1] for an application to photovoltaic systems, Ref. [2] for utilizing machine learning for the identification of particles, or Ref. [3] utilizing deep networks for automatic cleaning of data). Here we want to raise the question whether we can do better with quantum systems. In general there are several approaches and claims, but no clear candidate. We focus on quantum artificial neural networks that utilize qubits as perceptrons. For a general overview of the current perspective of quantum algorithms on a quantum computer in the noisy intermediate-scale quantum (NISQ) era the reader may e.g. be referred to Ref. [4].
Classical neural networks started their success story once hidden layers were introduced in addition to a non-linear activation function. The basic working of classical neural networks is shown in Fig. 1. Weights and biases at the different layers are the parameters that have to be learnt from the training pairs provided in the learning process. In addition, an activation function has to be chosen which, as it turned out, has to be non-linear in order to guarantee the universal approximation theorem, i.e. that any function can be efficiently approximated by the neural network. The learning of the classical network proceeds by defining a cost function and utilizing backward propagation, which allows one to update the weights and biases via a gradient descent such that the cost function improves, i.e. the desired output is approached better and better.
We focus on a quantum version of a classical neural network that interchanges each perceptron with a qubit. The weights and possibly biases are realized by different unitary matrices. The main challenge is to introduce an activation function in the quantum version, since quantum theory is manifestly linear and the unitary evolution is reversible and non-dissipative. This is in strong contrast to classical neural networks, which have non-linear activation functions and dissipative dynamics at their heart. It is generally open which properties of classical artificial neural networks should be met in order to call something a meaningful quantum artificial neural network. But this question goes deeper, since it generally asks what the difference between classical and quantum information and its processing is.
In this paper we discuss these issues by considering a particular example of a deep quantum artificial neural network.
II A quantum artificial deep neural network

A minimal deep quantum artificial neural network is sketched in Fig. 2. A unitary acts upon the two input qubits and the first qubit of the hidden layer; this is followed by a unitary that acts upon the two input qubits and the second qubit of the hidden layer. Obviously, the ordering of these two unitaries is important. Then a partial trace is applied to the input layer (the first two qubits), resulting in a two-qubit state for which the same process is repeated with two new unitaries, followed by a partial trace over the hidden layer, which then yields the two-qubit output state of the quantum neural network. The partial trace may be interpreted as the activation function and the parameters of the four unitary operators as the weights or biases.
For the minimal network consisting of a two-qubit input layer, a two-qubit hidden layer and a two-qubit output layer, the four unitaries each act on a $2^3=8$-dimensional Hilbert space, which implies $8^2=64$ free parameters per unitary and in total $4\cdot 64=256$ parameters. As a cost function we will define the fidelity, which is a measure of the "closeness" of two quantum states. It expresses the probability that one state will pass a test to identify it as the other. It is generally defined by

$F(\rho,\sigma) = \Big(\operatorname{Tr}\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\Big)^{2}$   (1)

which reduces in the special case of pure states to the overlap of those two states. For single-qubit states the fidelity also reduces to the closed form $F(\rho,\sigma)=\operatorname{Tr}(\rho\,\sigma)+2\sqrt{\det\rho\,\det\sigma}$. The fidelity takes values in $[0,1]$ and in the case $F=1$ the states can be considered as equivalent. The problems that we will consider have a desired output state that is chosen to be pure. This simplifies the loss function to $\langle\phi^{\text{out}}|\rho^{\text{out}}|\phi^{\text{out}}\rangle$, which equals $1$ if the output state of the network perfectly overlaps with the desired state and is smaller than $1$ otherwise.
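To make the cost function concrete, the following is a minimal numerical sketch (in Python/NumPy, not the Wolfram Mathematica implementation used for the results below) of the general fidelity and of the simplified loss for a pure target state; the function names are ours.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    """General fidelity F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2, Eq. (1)."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2

def loss_pure_target(phi, rho_out):
    """Simplified loss <phi|rho_out|phi> for a pure desired output state |phi>."""
    return np.real(np.conj(phi) @ rho_out @ phi)

# example: pure target |00> against a slightly mixed two-qubit output state
phi = np.array([1, 0, 0, 0], dtype=complex)
rho_out = 0.9 * np.outer(phi, phi.conj()) + 0.1 * np.eye(4) / 4
print(fidelity(np.outer(phi, phi.conj()), rho_out), loss_pure_target(phi, rho_out))
```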
II.1 Feed Forward Propagation
The two-qubit output state of the minimal network of "two qubits - two qubits - two qubits" is given by

$\rho^{\text{out}} = \operatorname{Tr}_{\text{hid}}\Big[ U_4 U_3 \Big( \operatorname{Tr}_{\text{in}}\big[\, U_2 U_1 \big(\rho^{\text{in}} \otimes |00\rangle\langle 00|_{\text{hid}}\big) U_1^{\dagger} U_2^{\dagger} \big] \otimes |00\rangle\langle 00|_{\text{out}} \Big) U_3^{\dagger} U_4^{\dagger} \Big]$   (2)

where $\rho^{\text{in}}$ is a given input state, the initial states of the hidden and output layers have been chosen (w.l.o.g.) to be $|00\rangle\langle 00|$, and the unitaries $U_1,\dots,U_4$ only address the subspaces described before.
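As an illustration, the feed-forward map of Eq. (2) can be sketched numerically. The following rough Python/NumPy sketch assumes the qubit ordering [input layer, new layer] and the unitaries given as 8x8 matrices acting on the two input qubits plus one qubit of the new layer; the helper names are ours and do not appear in the original implementation.

```python
import numpy as np

I2, I4 = np.eye(2, dtype=complex), np.eye(4, dtype=complex)
# two-qubit SWAP, used to let the second perceptron act on the second qubit of the new layer
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def ptrace_first_two(rho):
    """Trace out the first two qubits of a 4-qubit density matrix (16x16 -> 4x4)."""
    return np.einsum('abad->bd', rho.reshape(4, 4, 4, 4))

def layer(rho_in, Ua, Ub):
    """Couple the 2-qubit state rho_in to a fresh |00> layer via Ua (input qubits + first
    new qubit) and Ub (input qubits + second new qubit), then trace out the input qubits."""
    zero2 = np.zeros((4, 4), dtype=complex)
    zero2[0, 0] = 1.0                                   # |00><00|
    rho = np.kron(rho_in, zero2)                        # 16 x 16, ordering [in1, in2, new1, new2]
    V1 = np.kron(Ua, I2)                                # Ua acts on qubits 1, 2, 3
    S = np.kron(I4, SWAP)                               # exchanges new1 <-> new2
    V2 = S @ np.kron(Ub, I2) @ S                        # Ub acts on qubits 1, 2, 4
    rho = V2 @ V1 @ rho @ V1.conj().T @ V2.conj().T
    return ptrace_first_two(rho)

def network_output(rho_in, U1, U2, U3, U4):
    """Feed-forward of Eq. (2): input layer -> hidden layer -> output layer."""
    return layer(layer(rho_in, U1, U2), U3, U4)
```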
II.2 Cost Function Optimization - Backward Propagation
The cost function for our problem is then defined by the fidelity averaged over the $N$ training pairs $(\rho^{\text{in}}_x, |\phi^{\text{out}}_x\rangle)$,

$C = \frac{1}{N}\sum_{x=1}^{N} \langle \phi^{\text{out}}_x |\, \rho^{\text{out}}_x \,| \phi^{\text{out}}_x \rangle\;.$   (3)
In Ref. [5] a composite parametrization was introduced which will allow us to compute the derivatives of the unitaries, needed for the optimization of the network, analytically. For any unitary operation $U$ acting on a $d$-dimensional Hilbert space spanned by the orthonormal basis $\{|1\rangle,\dots,|d\rangle\}$ there exist real values $\lambda_{m,n}$ with $\lambda_{m,m}\in[0,2\pi]$, $\lambda_{m,n}\in[0,\tfrac{\pi}{2}]$ for $m<n$ and $\lambda_{m,n}\in[0,2\pi]$ for $m>n$ such that $U$ can be written as

$U = \Bigg[\prod_{m=1}^{d-1}\Bigg(\prod_{n=m+1}^{d} \exp\big(i\,P_n\,\lambda_{n,m}\big)\,\exp\big(i\,\sigma_{m,n}\,\lambda_{m,n}\big)\Bigg)\Bigg]\cdot\Bigg[\prod_{l=1}^{d} \exp\big(i\,P_l\,\lambda_{l,l}\big)\Bigg]\;.$   (4)

The sequence of the products is defined by $\prod_{m=1}^{K} A_m = A_1\, A_2 \cdots A_K$. Here, the $P_n = |n\rangle\langle n|$ are one-dimensional projectors and the $\sigma_{m,n}$ are the generalized anti-symmetric Pauli matrices with $\sigma_{m,n} = -i\,|m\rangle\langle n| + i\,|n\rangle\langle m|$.
The parameters $\lambda_{m,n}$ can be gathered in a "parameterization matrix"

$\Lambda = \begin{pmatrix} \lambda_{1,1} & \lambda_{1,2} & \cdots & \lambda_{1,d} \\ \lambda_{2,1} & \lambda_{2,2} & \cdots & \lambda_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_{d,1} & \lambda_{d,2} & \cdots & \lambda_{d,d} \end{pmatrix}$   (10)

where the diagonal entries represent global phase transformations, the upper right entries are related to rotations in the subspaces spanned by $|m\rangle$ and $|n\rangle$, while the lower left entries are relative phases in these subspaces (with respect to the basis states $|n\rangle$). Note that for the optimization one does not need to restrict the parameters to the intervals given above.
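A sketch of how a unitary can be built from such a parameterization matrix is given below (Python/SciPy). It follows Eq. (4) as written above; the exact ordering convention of the product should be taken from Ref. [5], and the function name is ours.

```python
import numpy as np
from scipy.linalg import expm

def composite_unitary(lam):
    """Build a d x d unitary from a d x d parameter matrix `lam`, following the composite
    parametrization of Ref. [5]: upper-right entries drive rotations (generators sigma_{m,n}),
    lower-left entries relative phases and diagonal entries global phases (generators P_n)."""
    d = lam.shape[0]
    P = [np.diag((np.arange(d) == n).astype(complex)) for n in range(d)]   # projectors |n><n|
    U = np.eye(d, dtype=complex)
    for m in range(d - 1):
        for n in range(m + 1, d):
            sigma = np.zeros((d, d), dtype=complex)
            sigma[m, n], sigma[n, m] = -1j, 1j          # generalized anti-symmetric Pauli matrix
            U = U @ expm(1j * P[n] * lam[n, m]) @ expm(1j * sigma * lam[m, n])
    for l in range(d):
        U = U @ expm(1j * P[l] * lam[l, l])
    return U

# sanity check: the construction is unitary for any real parameter matrix
lam = np.random.uniform(0, 2 * np.pi, size=(8, 8))
U = composite_unitary(lam)
print(np.allclose(U.conj().T @ U, np.eye(8)))
```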
Now we want to change the unitaries of the neural network in order to maximize the cost function, and this for each parameter $\lambda_{m,n}$, i.e. we can consider a Taylor expansion

$U(\lambda_{m,n}+\epsilon) = e^{\,i\,\epsilon\,\tilde G_{m,n}}\; U(\lambda_{m,n}) = U(\lambda_{m,n}) + i\,\epsilon\;\tilde G_{m,n}\, U(\lambda_{m,n}) + \mathcal{O}(\epsilon^{2})$   (11)

with

$\tilde G_{m,n} = U_{(1)}\; G_{m,n}\; U_{(1)}^{\dagger}$   (15)

and

$G_{m,n} = \begin{cases} \sigma_{m,n} & \text{for } m<n\\ P_{m} & \text{for } m>n\\ P_{m} & \text{for } m=n \end{cases}$   (18)

where $U_{(1)}$ denotes the product of all factors in Eq. (4) standing to the left of the exponential containing $\lambda_{m,n}$.
Here we have used the results of Ref. [5]. Note that the $\tilde G_{m,n}$ are Hermitian, thus the unitarity condition holds for every order in the expansion. Moreover, note that $\tilde G_{m,n}$ depends, through $U_{(1)}$, on the other parameters of the unitary but not on $\lambda_{m,n}$ itself.
Thus the parameters of the unitaries are changed by $\lambda_{m,n} \rightarrow \lambda_{m,n} + \epsilon\,\Delta\lambda_{m,n}$ with $\Delta\lambda_{m,n} = \partial C/\partial\lambda_{m,n}$, which is given for the first perceptron by the term

$\Delta\lambda^{U_1}_{m,n} = \frac{1}{N}\sum_{x=1}^{N} \big\langle \phi^{\text{out}}_x \big|\, \operatorname{Tr}_{\text{hid}}\Big[ U_4 U_3 \Big( \operatorname{Tr}_{\text{in}}\big[\, U_2\; i\big[\tilde G_{m,n},\, U_1\big(\rho^{\text{in}}_x\otimes|00\rangle\langle 00|\big)U_1^{\dagger}\big]\; U_2^{\dagger}\,\big] \otimes |00\rangle\langle 00| \Big) U_3^{\dagger} U_4^{\dagger} \Big]\, \big| \phi^{\text{out}}_x \big\rangle$   (19)

and for the second perceptron by the term

$\Delta\lambda^{U_2}_{m,n} = \frac{1}{N}\sum_{x=1}^{N} \big\langle \phi^{\text{out}}_x \big|\, \operatorname{Tr}_{\text{hid}}\Big[ U_4 U_3 \Big( \operatorname{Tr}_{\text{in}}\big[\, i\big[\tilde G_{m,n},\, U_2 U_1\big(\rho^{\text{in}}_x\otimes|00\rangle\langle 00|\big)U_1^{\dagger}U_2^{\dagger}\big]\,\big] \otimes |00\rangle\langle 00| \Big) U_3^{\dagger} U_4^{\dagger} \Big]\, \big| \phi^{\text{out}}_x \big\rangle$   (20)

and so on. Here $\epsilon$ can be chosen arbitrarily, and in principle differently for each unitary, and plays the role of a learning parameter in a classical network, i.e. chosen too low the cost function will only increase slowly, but chosen too high we may miss the optimum. It is a hyperparameter of the learning process. In our applications we chose it to be the same for all four unitaries.
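For readers who want to experiment without the analytic derivatives of Eqs. (19)-(20), the update rule can be mimicked with a finite-difference gradient. The sketch below is our own generic Python illustration, not the procedure used for the results below: it takes the list of four parameterization matrices and any cost function (e.g. one built from the sketches above) and performs one gradient-ascent step with learning parameter $\epsilon$.

```python
import numpy as np

def update_step(lams, cost, eps, delta=1e-6):
    """One gradient-ascent step lambda -> lambda + eps * dC/dlambda for every parameter
    of every unitary, using a finite-difference estimate of the gradient instead of the
    analytic expressions of Eqs. (19)-(20).
    lams: list of parameterization matrices; cost: callable mapping such a list to C."""
    base = cost(lams)
    new_lams = [lam.copy() for lam in lams]
    for k, lam in enumerate(lams):
        for idx in np.ndindex(lam.shape):
            shifted = [l.copy() for l in lams]
            shifted[k][idx] += delta
            new_lams[k][idx] += eps * (cost(shifted) - base) / delta
    return new_lams
```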
Let us emphasize here that each of these expectation values of a commutator is the quantity appearing in a Heisenberg uncertainty relation, in the so-called Robertson version [8], i.e.

$\Delta_{\rho} A\;\Delta_{\rho} B \;\geq\; \tfrac{1}{2}\,\big|\operatorname{Tr}\big(\rho\,[A,B]\big)\big|$   (21)

where $\Delta_{\rho} A$ is the standard deviation of the operator $A$ with respect to the state $\rho$. Clearly, there are only two ways in which the lower bound of a Heisenberg uncertainty relation can vanish: either the two observables commute or the expectation value of the commutator vanishes for the given state. The first way is the general foundational limit provided by Nature: if two observables are not commuting, for instance the famous position operator $X$ and momentum operator $P$, we have $[X,P]=i\hbar$ and therefore, for all possible states, the lower bound is $\hbar/2$. Differently stated, there exists no state for which the product of the standard deviations of position and momentum can be smaller than this value.
On the other hand, if we consider e.g. the Pauli operators $\sigma_x$ and $\sigma_y$, then the commutator is $[\sigma_x,\sigma_y]=2\,i\,\sigma_z$ and thus the lower bound gives

$\Delta_{\rho}\sigma_x\;\Delta_{\rho}\sigma_y \;\geq\; \big|\operatorname{Tr}\big(\rho\,\sigma_z\big)\big|$   (22)

which may be a non-zero value for a general $\rho$, but by choosing an appropriate $\rho$ it may still vanish even though the Pauli operators do not commute. This property of the Robertson version of the Heisenberg uncertainty relation was criticized, and an entropic version overcoming this issue was found, which we discuss in the conclusions. The question we want to discuss first is whether these fundamental limits are utilized in the optimization process of the quantum net.
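The vanishing of the Robertson bound for non-commuting Pauli operators can be verified directly; the following short Python check (our own illustration) evaluates both sides of Eq. (21) for $\sigma_x$, $\sigma_y$ and an eigenstate of $\sigma_x$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def robertson_sides(rho, A, B):
    """Return (Delta_rho A * Delta_rho B, 0.5 * |Tr(rho [A,B])|), cf. Eq. (21)."""
    mean = lambda O: np.real(np.trace(rho @ O))
    std = lambda O: np.sqrt(max(mean(O @ O) - mean(O) ** 2, 0.0))
    comm = A @ B - B @ A
    return std(A) * std(B), 0.5 * abs(np.trace(rho @ comm))

# eigenstate of sigma_x: <sigma_z> = 0, so both sides vanish although [sigma_x, sigma_y] != 0
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(robertson_sides(np.outer(plus, plus.conj()), sx, sy))
```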
III Examples and Results
Here we present two different examples of increasing complexity.
III.1 Example A: Learning A Single Unitary
Let us start with a simple example, namely learning a particular two-qubit unitary, a task that was first considered in Ref. [6]. In Ref. [13] this quantum neural network was applied to the real data of the Iris flower dataset and its performance was compared with other networks. The ground truth is given by choosing arbitrary input states $|\psi^{\text{in}}_x\rangle$ and computing the desired output $|\phi^{\text{out}}_x\rangle$ by applying this unitary to them. The goal is that the network learns this unitary (generally only $16$ parameters) by optimizing the $256$ parameters of the network utilizing the cost function. One can fix $\epsilon$, which may be interpreted as a learning parameter, in each round, but we optimize $\epsilon$ by taking the maximum of the cost function over the computed corrections of all unitaries. We randomly chose training pairs and used one part of them for the optimization and the other part for the validation. The implementation was done in Wolfram Mathematica.
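The generation of the ground truth for this example can be sketched as follows (Python/NumPy, our own illustration): a fixed Haar-random two-qubit unitary V (a hypothetical name) defines the desired outputs for randomly drawn pure input states; the number of 100 pairs is an arbitrary choice here.

```python
import numpy as np

rng = np.random.default_rng()

def random_pure_state(dim=4):
    """Haar-random pure state vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def random_unitary(dim=4):
    """Haar-random unitary via QR decomposition of a Ginibre matrix."""
    Z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, R = np.linalg.qr(Z)
    return Q @ np.diag(np.diag(R) / np.abs(np.diag(R)))

# ground truth: one fixed two-qubit unitary V and training pairs (rho_in, desired pure output)
V = random_unitary()
pairs = [(np.outer(psi, psi.conj()), V @ psi)
         for psi in (random_pure_state() for _ in range(100))]
```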
In Fig. 3 we plot the cost function and the validation function for different training rounds. At each training round the cost function is evaluated as a function of $\epsilon$ and its maximum is taken. As can be seen, the curves are monotonically increasing at each round, but the convergence is slow.
A typical update of the parameter matrix looks like (example for one of the unitaries at a given training round)

(31)

or, for another case,

(40)

which is the sum over all training states defining the lower bound in the Heisenberg relation and does not vanish. The zeros are due to the fact that we have chosen the hidden layer and output layer states to be $|00\rangle$. In Fig. 4 we show how the cost function typically changes with the learning parameter $\epsilon$. It is quite constrained if all four unitaries are included, but for a single one the parameter space is quite flat. This suggests that the interplay of all four unitaries is relevant for the problem, whereas the constraint due to each single unitary alone does not do the job.
Even though we are close to the maximum value of the cost function, it seems that the derivatives do not vanish. To see whether this is due to the average over the training pairs, we picked out a single pair and optimized it alone; the correction terms are still of the same order as above. This suggests that the neural network does not optimize towards a vanishing of the uncertainty relation but optimizes the parameters of the unitaries, which are not unique. Let us now choose a non-trivial problem.


III.2 Example B: A State Learning Its Own Quantum Properties?!
Now we create pairs such that the desired output state encodes the quantum properties of the input state, i.e. its purity and its entanglement. For that purpose we choose the concurrence [7], a computable entanglement measure for bipartite qubit systems. The concurrence is defined as the maximum of zero and the largest eigenvalue minus the sum of the other three eigenvalues of the quantity $\sqrt{\sqrt{\rho}\,\tilde\rho\,\sqrt{\rho}}$ with $\tilde\rho = (\sigma_y\otimes\sigma_y)\,\rho^{*}\,(\sigma_y\otimes\sigma_y)$ and $\sigma_y$ being the $y$-Pauli matrix. For pure states it simplifies to $C(|\psi\rangle) = |\langle\psi|\,\sigma_y\otimes\sigma_y\,|\psi^{*}\rangle|$.
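For completeness, a compact numerical implementation of the concurrence and the purity (Python/NumPy, our own sketch following the definition above) reads:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix [7]."""
    Y = np.kron(sy, sy)
    rho_tilde = Y @ rho.conj() @ Y
    # square roots of the eigenvalues of rho * rho_tilde, sorted in decreasing order
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def purity(rho):
    return np.real(np.trace(rho @ rho))

# maximally entangled state (|00> + |11>)/sqrt(2): concurrence 1, purity 1
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(concurrence(rho), purity(rho))
```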
Our desired output states are chosen to be
(41)

This means that each pair is again connected by a unitary (if we assume only pure input states), $|\phi^{\text{out}}_x\rangle = U_x\,|\psi^{\text{in}}_x\rangle$, but the unitary $U_x$ is chosen according to the quantum properties of the input state. Thus the net needs to learn a set of unitaries defined by the quantum properties (entanglement & purity) of the arbitrary input state. Consequently, the question is whether the neural net also processes the properties of the state itself, or whether only the information of the training pair is exploited, as would be the case in a classical neural network.
We tried different sets for the training and here we discuss the result for one fixed set of training pairs. The convergence is even slower than for the problem of a single unitary; we typically find a cost function value well below the optimum, with a non-negligible standard deviation over the training pairs. If we use only a small number of pairs for the validation, the cost function value was found to be higher. This shows a high statistical fluctuation with the randomly chosen set, meaning the general problem is not (yet) fully learnt. As a further test we interchanged output and input and evaluated the cost function on this reversed set. We also tried random inputs, which gave in general very low cost function values. Consequently, the net is indeed learning some features of the training set, which also apply to an arbitrary set.
In Fig. 5 we visualize how well the quantum properties per se are learnt. The first graphs correspond to an early stage of the optimization process. We see that during the optimization the net learns e.g. certain symmetries among the states (Fig. 5(b)), but the range of the errors does not get smaller when compared to a later stage of the optimization. From Fig. 5(a) we can deduce that the error in the purity is significantly reduced (having only pure states in the training), but the system also predicts purities greater than $1$, which is of course unphysical. This could be compensated by adding a Lagrange multiplier to the cost function. In general we observe that the training and validation pairs distribute quite similarly. The range of the error in the concurrence (Fig. 5(a)), however, is not reduced.
The correction terms obtained by back propagation are always of a size similar to those of the simple example discussed in the previous subsection; they are visualized in Fig. 6. The dependence on the learning parameter $\epsilon$ is depicted in Fig. 7. In conclusion, the net learns partial properties of the set, but the cost function converges only slowly. In the following we discuss whether the learning exploits the limits set by the Heisenberg uncertainty relation.
IV Conclusion & Discussion
In this contribution we have analysed a minimal deep quantum neural network, i.e. a net taking a two-qubit state as input and producing a two-qubit state as output, with one hidden layer of two qubits in between. For that we performed two case studies. In the first case, each training pair is connected by one particular, randomly chosen unitary matrix. In the second example each pair is connected by a unitary that allows one to deduce from the output state the concurrence, a measure of entanglement, and the purity of the input state. Hence here we asked whether the net also learns those implicit properties of the input state, which is obviously classically impossible. In both cases, at each round of optimization, the cost function was always strictly increasing, but typically not by a huge amount. Consequently, some learning of the net has always been observed.
The unitaries involved in the net have been parameterized in a composite way, which allows a quantum information theoretic view into the working of the net. In particular it shows that the corrections to such parameterized unitaries are of the form entering a Heisenberg uncertainty relation, Eq. (21). One striking feature of the quantum nature of our world is that two non-commuting observables lead in general to a universal limit by Nature on the measurement outcomes. The most famous example is the uncertainty in momentum and position, $\Delta X\,\Delta P \geq \hbar/2$. The fact that the lower bound, the universal limit by Nature, is independent of the state is a special property of those two observables. In general one obtains the state-dependent quantity $\tfrac{1}{2}\,|\operatorname{Tr}(\rho\,[A,B])|$ as the lower bound. This is the Robertson form [8] of the Heisenberg uncertainty relation, and it was criticized because, by choosing an appropriate state, it can vanish even if the two observables do not commute.
Furthermore, it was shown that there exists an information theoretic formulation of the uncertainty principle [10] which does not suffer from this problem of state dependence. It puts a limit on the extent to which two observables can be simultaneously peaked. This entropic uncertainty relation for two non-degenerate observables $A$ and $B$ is given by (introduced by D. Deutsch [10], improved in Ref. [11] and proven in Ref. [12])
$H(A) + H(B) \;\geq\; -2\,\log_2\Big(\max_{i,j}\big|\langle a_i|b_j\rangle\big|\Big)$   (42)
where

$H(A) = -\sum_{i} p_i\,\log_2 p_i$   (43)

is the entropy for, e.g., a certain prepared pure state $|\psi\rangle$, with $|a_i\rangle$ and $|b_j\rangle$ the eigenstates of $A$ and $B$, and $p_i = |\langle a_i|\psi\rangle|^{2}$ is the probability associated with the measurement outcome $a_i$ of $A$, hence $\sum_i p_i = 1$. Thus in general there is a universal limit for any two observables if they are non-commuting.
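The entropic relation (42)-(43) is easy to evaluate numerically; the following Python sketch (our own illustration) computes the two entropies and the Maassen-Uffink bound for $\sigma_x$ and $\sigma_z$ measured on an eigenstate of $\sigma_z$, where both sides equal one bit.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def measurement_entropy(psi, O):
    """Shannon entropy (in bits) of the outcome distribution of observable O on |psi>, Eq. (43)."""
    _, eigvecs = np.linalg.eigh(O)
    p = np.abs(eigvecs.conj().T @ psi) ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def maassen_uffink_bound(A, B):
    """-2 log2 max_{i,j} |<a_i|b_j>| for two non-degenerate observables, Eq. (42)."""
    _, va = np.linalg.eigh(A)
    _, vb = np.linalg.eigh(B)
    return -2 * np.log2(np.max(np.abs(va.conj().T @ vb)))

psi = np.array([1, 0], dtype=complex)                      # eigenstate of sigma_z
print(measurement_entropy(psi, sx) + measurement_entropy(psi, sz), maassen_uffink_bound(sx, sz))
```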
Coming back to our quantum neural network, we observed that those lower bounds never vanish, not even for a single generator. From that we can conclude that the net does not optimize the unitaries involved such that all or some commutators vanish. Consequently, we can conjecture that the universal limit is not exploited in the optimization. Rather, the fact that the parameters oscillate shows the similarity to the optimization of classical neural networks. From that we infer that the optimization process does not exploit a particular quantum phenomenon.
In summary, these preliminary results have to be taken with care since we only used a minimal version of a net (e.g. no deeper nets), only one example of a gradient descent-based optimization and a limited set of problems. Moreover, there are several more techniques that could be applied to optimize the learning process. Utilizing a gradient descent-based optimization, our findings are also strongly related to other works, e.g. Refs. [14, 15, 16], discussing e.g. barren plateau landscapes and how to avoid them. Further detailed studies are necessary to confirm these findings. However, for the minimal setting discussed here, Heisenberg's uncertainty relation is not a guiding principle.
Acknowledgements.
BCH thanks the organizers of the workshop "International Workshop on Machine Learning and Quantum Computing, Applications in Medicine and Physics (WMLQ2022)" for putting together an inspiring, state-of-the-art programme and a vivid environment for discussions. BCH also acknowledges gratefully that this research was funded in whole, or in part, by the Austrian Science Fund (FWF) project P36102.

References
- [1] H. Behrends, D. Millinger, W. Weihs-Sedivy, A. Javornik, G. Roolfs and St. Geißendörfer. Analysis Of Residual Current Flows In Inverter Based Energy Systems Using Machine Learning Approaches, Energies 15, 582 (2022).
- [2] LHCb collaboration, A new algorithm for identifying the flavour of $B_s^0$ mesons at LHCb, Journal of Instrumentation 11, P05010 (2016).
- [3] G. Angloher et al., Towards an automated data cleaning with deep learning in CRESST, https://doi.org/10.48550/arXiv.2211.00564.
- [4] F. Leymann and J. Barzen, The Bitter Truth About Quantum Algorithms in the NISQ Era, Quantum Sci. Technol. 5, 044007 (2020).
- [5] Ch. Spengler, M. Huber and B.C. Hiesmayr, Composite parameterization and Haar measure for all unitary and special unitary groups, J. Math. Phys. 53, 013501 (2012).
- [6] K. Beer, D. Bondarenko, T. Farrelly, T. J. Osborne, R. Salzmann, D. Scheiermann, and R. Wolf, Training deep quantum neural networks, Nature Communications 11, 808 (2020).
- [7] S. Hill and W. K. Wootters, Entanglement of a Pair of Quantum Bits, Phys. Rev. Lett. 78, 5022 (1997).
- [8] H.P. Robertson, The Uncertainty Principle, Phys. Rev. 34, 163 (1929).
- [9] I. Bialynicki-Birula and Ł. Rudnicki, Entropic Uncertainty Relations in Quantum Physics, in: Statistical Complexity, Ed. K. D. Sen, Springer, 2011, Ch. 1, https://arxiv.org/abs/1001.4668.
- [10] D. Deutsch, Uncertainty in quantum measurements, Phys. Rev. Lett. 50, 631 (1983).
- [11] K. Kraus, Complementary observables and uncertainty relations, Phys. Rev. D 35, 3070 (1987).
- [12] H. Maassen and J.B.M. Uffink, Generalized Entropy Uncertainty Relation, Phys. Rev. Lett. 60, 1103 (1988).
- [13] S. Wilkinson and M. Hartmann, Evaluating the performance of sigmoid quantum perceptrons in quantum neural networks, arXiv:2208.06198, https://doi.org/10.48550/arXiv.2208.06198 (2022).
- [14] A. Kulshrestha and I. Safro, Avoiding Barren Plateaus in Variational Quantum Algorithms, arXiv:2204.13751, https://doi.org/10.48550/arXiv.2204.13751 (2022).
- [15] J.R. McClean, S. Boixo, V. N. Smelyanskiy, R. Babbush and H. Neven, Barren plateaus in quantum neural network training landscapes, Nature Communications 9, 4812 (2018).
- [16] D. Heimann, G. Schönhoff, F. Kirchner, Learning capability of parametrized quantum circuits, arXiv:2209.10345, https://doi.org/10.48550/arXiv.2209.10345 (2022).