Classical and Quantum Algorithms for Orthogonal Neural Networks
Abstract
Orthogonal neural networks have recently been introduced as a new type of neural networks imposing orthogonality on the weight matrices. They can achieve higher accuracy and avoid vanishing or exploding gradients for deep architectures. Several classical gradient descent methods have been proposed to preserve orthogonality while updating the weight matrices, but these techniques suffer from long running times and provide only approximate orthogonality. In this paper, we introduce a new type of neural network layer called Pyramidal Circuit, which implements an orthogonal matrix multiplication. It allows for gradient descent with perfect orthogonality with the same asymptotic running time as a standard layer. This algorithm is inspired by quantum computing and can therefore be applied on a classical computer as well as on a near term quantum computer. It could become the building block for quantum neural networks and faster orthogonal neural networks.
1 Introduction
In the evolution of neural network structures, adding constraints to the weight matrices has often been an effective path. Recently, orthogonal neural networks (OrthoNNs) have been proposed [3, 4, 5, 6] as a new type of neural networks for which, at each layer, the weight matrix should remain orthogonal. This property is useful to reach higher accuracy and to avoid vanishing or exploding gradients for deep architectures. Several classical gradient descent methods have been proposed to preserve orthogonality while updating the weight matrices. However, these techniques suffer from longer running times and sometimes only approximate orthogonality. In particular, the main method for achieving orthogonality during training is to first perform the usual gradient descent update on the weight matrix (which is then no longer orthogonal) and then perform a Singular Value Decomposition to orthogonalize, or almost orthogonalize, the weight matrix. One can see why achieving orthogonality hinders fast training: an SVD computation needs to be performed at every step (see Section 1.1).
In the emergent field of quantum machine learning, several proposals have been made to implement neural networks. Some algorithms rely on long term and perfect quantum computers [7, 8], while others try to harness the existing quantum devices using variational circuits [9, 10]. As in classical neural networks, they use gradient descent methods to update the quantum parameters of the circuits. Such quantum neural networks have been trained for very small sizes, however there is still a need to understand how such architectures will scale and whether they will provide efficient and accurate training.
In this work, we present a new training method for neural networks that preserves perfect orthogonality while having the same running time as usual gradient descent methods without the orthogonality condition, thus achieving the best of both worlds: efficient training and perfect orthogonality.
The main idea comes from the quantum world, where we know that any quantum circuit corresponds to an operation described by a unitary matrix, which if we only use gates with real amplitudes is an orthogonal matrix. In particular, we propose a novel special-architecture quantum circuit, for which there is an efficient way to map the elements of the orthogonal weight matrix to the parameters of the gates of the quantum circuit and vice versa. In other words, while performing a gradient descent on the elements of the weight matrix individually does not preserve orthogonality, performing a gradient descent on the parameters of the quantum circuit preserves orthogonality (since any quantum circuit with real parameters corresponds to an orthogonal matrix) and is equivalent to updating the weight matrix. We also prove that performing gradient descent on the parameters of the quantum circuit can be done efficiently classically (with constant update cost per parameter) thus concluding that there exists a quantum-inspired, but fully classical way of efficiently training perfectly orthogonal neural networks.
Moreover, the special-architecture quantum circuit we defined has many properties that make it a good candidate for NISQ implementation: it uses only one type of quantum gates, requires simple connectivity between the qubits, has depth linear in the input and output node sizes, and benefits from powerful error mitigation techniques that make it resilient to noise. This allows us to also propose an inference method running the quantum circuit on data which might offer a faster running time, given the shallow depth of the quantum circuit.
Our main contributions are summarized in Table 1, where we have considered the time to perform a feedforward pass, or one gradient descent step. A single neural network layer is considered, with input and output of size $n$; $\delta$ denotes the precision parameter of the quantum tomography procedure (see Section 3.2).
Algorithm | Feedforward Pass | Weight Matrix Update
---|---|---
Quantum Pyramidal Circuit (This work) | $O(n/\delta^2)$ | $O(n^2)$
Classical Pyramidal Circuit (This work) | $O(n^2)$ | $O(n^2)$
Classical Approximated OrthoNN (SVB) [3] | $O(n^2)$ | $O(n^3)$
Classical Strict OrthoNN (Stiefel Manifold) [3] | $O(n^2)$ | $O(n^3)$
Standard Neural Network (non orthogonal) | $O(n^2)$ | $O(n^2)$
1.1 Related Work on Classical OrthoNNs
The idea behind Orthogonal Neural Networks (OrthoNNs) is to add a constraint to the weight matrices corresponding to the layers of a neural network. Imposing orthogonality on these matrices has theoretical and practical benefits for the generalization error [3]. Orthogonality ensures low redundancy in the weights and preserves the magnitude of the weight matrix's eigenvalues to avoid vanishing gradients. In terms of complexity, for a single layer, the feedforward pass of an OrthoNN is simply a matrix multiplication, hence has a running time of $O(n^2)$ if $n$ is the size of the orthogonal matrix. It is also interesting to note that OrthoNNs have been generalized to Convolutional Neural Networks [4].
The main difficulty of OrthoNNs is to preserve the orthogonality of the matrices while updating them during gradient descent. Several algorithms have been proposed to this end [4, 6, 11], but they all indicate that strict orthogonality is computationally expensive to maintain. Therefore, previous works allow for approximations: strict orthogonality is no longer required, and the matrices are often pushed toward orthogonality using regularization techniques during the weight update.
We present two algorithms from [3] for updating orthogonal matrices.
The first algorithm is an approximate one, called Singular Value Bounding (SVB). It starts by applying the usual gradient descent update on the matrix, therefore making it not orthogonal anymore. Then, the singular values of the new matrix are extracted using Singular Value Decomposition (SVD), their values are manually pushed to be close to 1, and the matrix is recomposed, hence enforcing orthogonality. This method shows less advantage in practical experiments [3]. It has a complexity of $O(n^3)$ due to the SVD, and in practice runs faster than the second algorithm described below. Note that this running time is still asymptotically worse than $O(n^2)$, the running time to perform standard gradient descent.
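For illustration, here is a minimal NumPy sketch of an SVB-style update, assuming a square weight matrix; the bounding interval `eps` is a hypothetical parameter, not the exact interval used in [3].

```python
import numpy as np

def svb_update(W, grad, lr, eps=0.05):
    # Usual gradient step: W is no longer orthogonal afterwards, O(n^2).
    W = W - lr * grad
    # SVD-based re-orthogonalization, the O(n^3) bottleneck of the update.
    U, s, Vt = np.linalg.svd(W)
    # Push the singular values back toward 1 ("bounding"); eps is a
    # hypothetical bound, not necessarily the one used in [3].
    s = np.clip(s, 1.0 - eps, 1.0 + eps)
    return U @ np.diag(s) @ Vt
```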
The second algorithm ensures perfect orthogonality by performing the gradient descent in the manifold of orthogonal matrices, called the Stiefel Manifold. In practice [3], this method showed substantially better classification results on standard datasets, indicating that perfect orthogonality can be a much more desirable property than an approximate one. This algorithm requires $O(n^3)$ operations and is quite prohibitive in practice. We give an informal step-by-step description of this algorithm; a minimal numerical sketch follows the list:
1. Compute the gradient $G$ of the cost $C$ with respect to the weight matrix $W$.
2. Project the gradient matrix $G$ onto the tangent space (the space tangent to the manifold at the point $W$): multiply $G$ by matrices constructed from $W$. This requires several matrix-matrix multiplications; in the case of square matrices, each has complexity $O(n^3)$. The result of this projection is called the manifold gradient $\Pi$.
3. Update $W' = W - \eta \Pi$, where $\eta$ is the chosen learning rate.
4. Perform a retraction from the tangent space back to the manifold. To do so, we multiply $W'$ by the $Q$ factor of its QR decomposition, obtained using Gram-Schmidt orthonormalization, which has complexity $O(n^3)$.
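The NumPy sketch below illustrates the flavor of such an update, assuming a square matrix and using one standard choice of tangent-space projection; it is only an illustration of the projection/retraction steps and their $O(n^3)$ cost, not the exact procedure of [3].

```python
import numpy as np

def stiefel_step(W, G, lr):
    # 1. G is the Euclidean gradient of the cost with respect to W.
    # 2. Project G onto the tangent space of the Stiefel manifold at W
    #    (one standard choice of projection); each product is O(n^3).
    A = W.T @ G
    manifold_grad = G - W @ (A + A.T) / 2
    # 3. Gradient step in the tangent space.
    W_new = W - lr * manifold_grad
    # 4. Retraction back to the manifold via the Q factor of a QR
    #    decomposition, again O(n^3).
    Q, R = np.linalg.qr(W_new)
    return Q * np.sign(np.diag(R))   # sign fix makes the retraction unique
```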
In conclusion, previous work points to the two following statements. First, perfect orthogonality can indeed increase classification accuracy. Second, previously known algorithms for training orthogonal neural networks were not as efficient as a standard training algorithm.
2 A parametrized quantum circuit for orthogonal neural networks
In this section, we define a special-architecture parametrized quantum circuit that will be useful for performing training and inference on orthogonal neural networks. As we said, the training will be completely classical in the end, but the intuition of the new method comes from this quantum circuit, while the inference can happen both classically or by applying this quantum circuit. A basic introduction to quantum computing concepts necessary for this work is given in the Appendix (Section A.1).
2.1 The RBS Gate
The quantum circuit proposed in this work (see Fig.1), which implements a fully connected neural network layer with an orthogonal weight matrix, uses only one type of quantum gate, the Reconfigurable Beam Splitter (RBS) gate. This two-qubit gate is parametrizable with one angle $\theta$. Its matrix representation is given as:

$$RBS(\theta) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \qquad (1)$$

We can think of this gate as a rotation in the two-dimensional subspace spanned by the basis $\{|01\rangle, |10\rangle\}$, while it acts as the identity in the remaining subspace $\{|00\rangle, |11\rangle\}$. Equivalently, starting with two qubits, one in the state $|1\rangle$ and the other one in the state $|0\rangle$, the qubits can be swapped or not in superposition. The $|1\rangle$ stays on its wire with amplitude $\cos\theta$, or switches to the other wire with amplitude $+\sin\theta$ if the new wire is below ($|10\rangle \mapsto |01\rangle$) or $-\sin\theta$ if the new wire is above ($|01\rangle \mapsto |10\rangle$). Note that in the two other cases ($|00\rangle$ and $|11\rangle$) the gate acts as the identity.
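As a quick numerical check of Eq.(1) and of the swap behaviour described above, here is a small NumPy sketch (illustrative only):

```python
import numpy as np

def rbs(theta):
    # Matrix of Eq.(1) in the basis (|00>, |01>, |10>, |11>).
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

R = rbs(0.3)
print(np.allclose(R @ R.T, np.eye(4)))     # real unitary, i.e. orthogonal
print(R @ np.array([0, 0, 1, 0]))          # |10> -> sin(0.3)|01> + cos(0.3)|10>
```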
This gate can be implemented rather easily, either natively, as demonstrated in [12], or using four Hadamard gates, two rotation gates, and two two-qubit CZ gates:
2.2 Quantum Pyramidal Circuit
We now propose a quantum circuit that implements an orthogonal layer of a neural network. The circuit is a pyramidal structure of RBS gates, each with an independent angle, as represented in Fig.3(a). In Sections 2.3 and 3, more details are provided concerning, respectively, the input loading and the equivalence with a neural network's orthogonal layer.
[Fig. 3: the pyramidal quantum circuit made of RBS gates, implementing an orthogonal layer.]
To mimic a given classical layer with a quantum circuit, the number of output qubits should be the size of the classical layer’s output. We refer to the square case when the input and output sizes are equal, and to the rectangular case otherwise (Fig.4(a)).
The important property to note is that the number of parameters of the quantum pyramidal circuit corresponding to a neural network layer of size $n \times n$ is exactly the same as the number of degrees of freedom of an orthogonal matrix of dimension $n \times n$.
[Fig. 4: square and rectangular versions of the pyramidal circuit.]
For simplicity, we pursue our analysis using only the square case, but everything can be easily extended to the rectangular case. As we said, the full pyramidal structure of the quantum circuit described above imposes the number of free parameters to be $n(n-1)/2$, the exact number of free parameters needed to specify an $n \times n$ orthogonal matrix.
In Section 3 we will show how the parameters of the gates of this pyramidal circuit can be easily related to the elements of the orthogonal matrix of size $n \times n$ that describes it. We note that alternative architectures can be imagined, as long as the number of gate parameters equals the number of parameters of the orthogonal weight matrix and a simple mapping between them and the elements of the weight matrix can be found.
Note finally that this circuit has linear depth and is convenient for near-term quantum hardware platforms with restricted connectivity. Indeed, the distribution of the RBS gates requires only nearest-neighbor connectivity between qubits.
2.3 Loading the Data
Before applying the quantum pyramidal circuit, we will need to upload the classical data into the quantum circuit. We will use one qubit per feature of the data. For this, we use a unary amplitude encoding of the input data. Let's consider an input sample $x = (x_0, \dots, x_{n-1})$, such that $\|x\|_2 = 1$. We will encode it in a superposition of unary states:
$$|x\rangle = x_0|10\cdots 0\rangle + x_1|010\cdots 0\rangle + \cdots + x_{n-1}|0\cdots 01\rangle \qquad (2)$$
We can also rewrite the previous state as $|x\rangle = \sum_{i=0}^{n-1} x_i |e_i\rangle$, where $|e_i\rangle$ represents the $i$th unary state, with a 1 in the $i$th position ($|0\cdots 010\cdots 0\rangle$). Recent work [13] proposed a logarithmic-depth data loader circuit for loading such states. Here we will use a much simpler circuit. It is a linear-depth cascade of $n-1$ RBS gates which, due to the particular structure of our quantum pyramidal circuit, only adds 2 extra steps to our circuit.
[Fig. 5: the unary data loader circuit, a cascade of RBS gates.]
The circuit starts in the all-zero state $|0\cdots 0\rangle$ and flips the first qubit using an $X$ gate, in order to obtain the unary state $|10\cdots 0\rangle$, as shown in Fig.5. Then a cascade of RBS gates creates the state $|x\rangle$ using a set of $n-1$ angles $\alpha_0, \dots, \alpha_{n-2}$. Using Eq.(1), we choose the angles such that, after the first RBS gate of the loader, the qubits are in the state $x_0|10\cdots 0\rangle + \sin(\alpha_0)|010\cdots 0\rangle$, after the second one in the state $x_0|10\cdots 0\rangle + x_1|010\cdots 0\rangle + \sin(\alpha_0)\sin(\alpha_1)|0010\cdots 0\rangle$, and so on, until obtaining $|x\rangle$ as in Eq.(2). To this end, we simply perform a classical preprocessing to compute recursively the $n-1$ loading angles, in time $O(n)$. We choose $\alpha_0 = \arccos(x_0)$, $\alpha_1 = \arccos(x_1/\sin\alpha_0)$, and so on, as sketched below.
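A minimal NumPy sketch of this classical preprocessing is given below; the function names are illustrative, and for simplicity the sketch assumes the last component of $x$ is non-negative (its sign can otherwise be absorbed into the last angle).

```python
import numpy as np

def loader_angles(x):
    # Recursive O(n) preprocessing: alpha_0 = arccos(x_0),
    # alpha_1 = arccos(x_1 / sin(alpha_0)), etc.
    angles, r = [], 1.0
    for xi in x[:-1]:
        a = np.arccos(np.clip(xi / r, -1.0, 1.0))
        angles.append(a)
        r *= np.sin(a)
    return np.array(angles)

def loaded_amplitudes(angles):
    # Amplitudes produced on the unary basis by the RBS cascade.
    amps, prod = [], 1.0
    for a in angles:
        amps.append(prod * np.cos(a))
        prod *= np.sin(a)
    amps.append(prod)            # last component (assumed non-negative here)
    return np.array(amps)

x = np.array([0.5, -0.5, 0.5, 0.5])                           # unit-norm input
print(np.allclose(loaded_amplitudes(loader_angles(x)), x))    # True
```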
The ability to load data in this way relies on the assumption that each input vector is normalized, i.e. $\|x\|_2 = 1$. This normalization constraint could seem arbitrary and impact the ability to learn from the data. In fact, in the case of an orthogonal neural network, this normalization shouldn't degrade the training because orthogonal weight matrices are norm-preserving. Hence, changing the norm of the input vector, by dividing each component by $\|x\|_2$, in both classical and quantum settings is not a problem. The normalization imposes that each input has the same norm, or the same "luminosity" in the context of images, which can be helpful or harmful depending on the case.
3 OrthoNNs Feedforward Pass
In this section, we will detail the effect of the quantum pyramidal circuit on an input encoded in a unary basis, as in Eq.(2). We will also see in the end how to simulate this quantum circuit classically with a small overhead and thus be able to provide a fully classical scheme.
Let's first consider one pure unary input, where only the $j$th qubit is in state $|1\rangle$ (e.g. $|00000010\rangle$). This unary input will be transformed into a superposition of unary states, each with an amplitude. If we consider again only one of these possible unary outputs, where only the $i$th qubit is in state $|1\rangle$, its amplitude can be interpreted as a conditional amplitude to transfer the $|1\rangle$ from qubit $j$ to qubit $i$. Intuitively, this value is the sum of the quantum amplitudes associated with each possible path that connects qubit $j$ to qubit $i$, as shown in Fig.6.
[Fig. 6: the three possible paths from the input qubit (7th unary state) to the output qubit (6th unary state) through the pyramidal circuit.]
Using this image of connectivity between input and output qubits, we can construct a matrix $W$, where each element $W_{ij}$ is the overall conditional amplitude to transfer the $|1\rangle$ from qubit $j$ to qubit $i$.
Fig.6 shows an example where exactly three paths can be taken to map the input qubit $j=7$ (the 7th unary state) to the output qubit $i=6$ (the 6th unary state). Each path comes with a certain amplitude. For instance, one of the paths (the red one in Fig.6) moves up at the first gate, and then stays put in the next three gates, with a resulting amplitude equal to the product of one $-\sin\theta$ factor (for the move up) and three $\cos\theta$ factors (for staying put). The sum of the amplitudes of all possible paths gives us the element $W_{67}$ of the matrix $W$ (where, for simplicity, $s$ and $c$ respectively stand for $\sin(\theta_i)$ and $\cos(\theta_i)$): each gate crossed by a path contributes a factor $c$ if the $|1\rangle$ stays on its wire, and $\pm s$ if it switches wire, so that

$$W_{67} = \sum_{\text{paths } P:\, 7 \to 6}\ \prod_{\text{gates } g \in P} a_g, \qquad a_g \in \{c_g,\ +s_g,\ -s_g\} \qquad (3)$$
In fact, the matrix $W$ can be seen as the unitary matrix of our quantum circuit restricted to the unary basis, and it is fully specified by the parameters of the quantum gates. In general a quantum circuit corresponds to a complex unitary matrix, but in our case, with only real operations, the matrix is simply orthogonal. This shows the correspondence between any pyramidal quantum circuit and an orthogonal matrix $W$.
The full unitary $U_W$ in the Hilbert space of our $n$-qubit quantum circuit is a $2^n \times 2^n$ matrix with the $n \times n$ matrix $W$ embedded in it as a submatrix on the unary basis. This is achieved by loading the data as unary states and by using only RBS gates, which keep the number of 0s and 1s constant.
For instance, as shown in Fig.7, a 3-qubit pyramidal circuit is described by a unique $3 \times 3$ matrix that can easily be verified to be orthogonal.
[Fig. 7: a 3-qubit pyramidal circuit and its equivalent orthogonal matrix.]
In Fig.6, we considered the case of a single unary state for both the input and the output. But with actual data, as seen in Section 2.3, input and output states are in fact superpositions of unary states. Thanks to the linearity of quantum mechanics in the absence of measurements, the previous description remains valid and can be applied to a linear combination of unary states.
[Fig. 8: the pyramidal circuit maps the unary encoding of the input x to the unary encoding of the output Wx.]
Let's consider an input vector $x$ encoded as a quantum state $|x\rangle = \sum_{i=0}^{n-1} x_i |e_i\rangle$, where $|e_i\rangle$ represents the $i$th unary state (see Section 2.3). By definition of $W$, each unary state $|e_i\rangle$ undergoes the evolution $|e_i\rangle \mapsto W|e_i\rangle$. This yields, by linearity, the following mapping

$$|x\rangle = \sum_i x_i |e_i\rangle \ \mapsto\ \sum_i x_i\, W|e_i\rangle = \sum_j \Big(\sum_i W_{ji}\, x_i\Big) |e_j\rangle = |Wx\rangle \qquad (4)$$
As explained above, our quantum circuit is equivalently described by the sparse unitary $U_W$ on the full Hilbert space, or by the matrix $W$ on the unary basis. This can be summarized with

$$U_W |x\rangle = |Wx\rangle \qquad (5)$$
We see from Eq.(4) and Eq.(5) that the output is in fact $|y\rangle = |Wx\rangle$, the unary encoding of the vector $y = Wx$, which is the output of a matrix multiplication between the orthogonal matrix $W$ and the input $x$. As expected, each element of $y$ is given by $y_k = \sum_j W_{kj} x_j$. See Fig.8 for a diagram representation of this mapping.
Therefore, for any given neural network’s orthogonal layer, there is a quantum pyramidal circuit that reproduces it. On the other hand, any quantum pyramidal circuit is implementing an orthogonal layer of some sort.
As a side note, we can ask whether a circuit with only $\log n$ qubits could also implement an orthogonal matrix multiplication of size $n \times n$. Indeed, it would correspond to an $n \times n$ unitary matrix, but since the circuit should also have $n(n-1)/2$ free parameters to tune, this would come at the cost of a large depth, potentially unsuitable for NISQ devices.
3.1 Error Mitigation
It is important to notice that with our restriction to unary states, strong error mitigation techniques become available. Indeed, as we expect to obtain only quantum superpositions of unary states at every layer, we can post-process our measurements and discard the ones that present non-unary states (i.e. states with more than one qubit in state $|1\rangle$, or the all-zero state). The most likely error is a bit flip between $|1\rangle$ and $|0\rangle$. The case where two bit flips happen at the same time, which would change a unary state into a different unary state and would thus pass through our error mitigation, is much less probable. This error mitigation procedure can be applied efficiently to the results of a hardware demonstration and it has been used in the results presented in this paper.
3.2 Extracting the classical output
As shown in Fig.8, when using the quantum circuit, the output is a quantum state $|y\rangle = |Wx\rangle$. As is often the case in quantum machine learning [14], it is important to consider the cost of retrieving the classical outputs, using a procedure called tomography. In our case this is even more crucial since, between each layer, the quantum output must be converted into a classical one in order to apply a non-linear function, and then reloaded for the next layer.
Retrieving the amplitudes of a quantum state comes at the cost of multiple measurements, which requires running the circuit multiple times, hence adding a multiplicative overhead in the running time. A finite number of samples is also a source of approximation error in the final result. In this work, we will allow for errors in $\ell_\infty$ norm [7]. The tomography on a quantum state with unary encoding on $n$ qubits requires $O(n/\delta^2)$ measurements, where $\delta$ is the error threshold allowed. Each component $y_j$ will be obtained with an absolute error $\delta$, and if $|y_j| < \delta$, it will most probably not be measured, hence set to 0. In practice, one would perform as many measurements as is convenient during the experiment, and deduce the equivalent precision $\delta$ from the number of measurements made.
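The sampling overhead can be illustrated with a small simulation, where measurement outcomes in the unary basis are drawn from the squared amplitudes (a toy sketch with arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0.6, -0.3, 0.1, np.sqrt(0.54)])   # amplitudes of |y> (unit norm)
shots = 100_000                                  # number of circuit runs
counts = rng.multinomial(shots, y ** 2)          # unary-basis measurement outcomes
est = np.sqrt(counts / shots)                    # estimates of |y_j|; signs need Sec. 3.2
print(np.max(np.abs(est - np.abs(y))))           # l_inf error; shrinks as shots grow
```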
Note that it is important to obtain the amplitudes of the quantum state, which in our case are positive or negative real numbers, and not just the probabilities of the outcomes, which are the squares of the amplitudes. There are different ways of obtaining the signs of the amplitudes, and we present two of them below.
Indeed, a simple measurement in the computational basis will only provide us with estimations of the probabilities, which are the squares of the quantum amplitudes (see Section A.1). In the case of neural networks, it is important to obtain the sign of the layer's components in order to apply certain types of non-linearities. For instance, the ReLU activation function is often used to set all negative components to 0.
[Fig. 9: the three circuits used for the sign retrieval procedure.]
In Fig.9, we propose a specific enhancement to our circuit to obtain the signs of the vector’s components at low cost. The sign retrieval procedure consists of three parts.
a) The circuit is first applied as described above, allowing us to retrieve each squared amplitude $y_j^2$ with precision $\delta$ using the tomography. The probability of measuring the unary state $|e_j\rangle$ (i.e. a 1 on the $j$th qubit) is $p(e_j) = y_j^2$.

b) We apply the same steps a second time on a modified circuit. It has additional RBS gates with angle $\pi/4$ at the end, which mix the amplitudes pair by pair. The probabilities to measure $|e_1\rangle$ and $|e_2\rangle$ are now given by $p(e_1) = \frac{1}{2}(y_1+y_2)^2$ and $p(e_2) = \frac{1}{2}(y_1-y_2)^2$. Therefore if $p(e_1) > p(e_2)$, we have $\mathrm{sign}(y_1) = \mathrm{sign}(y_2)$, and if $p(e_1) < p(e_2)$, we have $\mathrm{sign}(y_1) \neq \mathrm{sign}(y_2)$. The same holds for the pairs $(y_3, y_4)$, and so on.

c) We finally perform the same steps a third time, with the extra RBS gates shifted by one position below. We then compare the signs of the pairs $(y_2, y_3)$, $(y_4, y_5)$, and so on.
At the end, we are able to recover each value $y_j$ with its sign, assuming for instance that $y_1 > 0$. This procedure has the benefit of not adding depth to the original circuit, but requires 3 times more runs. The overall cost of the tomography procedure with sign retrieval is $O(n/\delta^2)$.
In Fig.10 we propose another method to obtain the values of the amplitudes and their signs, which is in fact what we used for the hardware demonstrations. Compared to the above procedure, it relies on one circuit only, but requires an extra qubit and a deeper circuit.
[Fig. 10: sign retrieval circuit using an extra ancilla qubit.]
This circuit performs a Hadamard gate on an extra ancilla qubit followed by a CNOT gate, in order to initialize the qubits in the state $\frac{1}{\sqrt{2}}\big(|0\rangle|0\cdots 0\rangle + |1\rangle|10\cdots 0\rangle\big)$, where the second register corresponds to the qubits that will be processed by the pyramidal circuit and the loaders.
Next, applying the data loader for the normalized input vector $x$ (see Section 2.3) and the pyramidal circuit will, according to Eq.(4), map the state to

$$\frac{1}{\sqrt{2}}\big(|0\rangle|0\cdots 0\rangle + |1\rangle|Wx\rangle\big) \qquad (6)$$
In other words, we performed the pyramidal circuit controlled on the first qubit being in state $|1\rangle$. Then, we flip the first qubit with an X gate and perform a controlled loading of the uniform norm-1 vector $\big(\tfrac{1}{\sqrt{n}}, \dots, \tfrac{1}{\sqrt{n}}\big)$. For this, we add the adjoint data loader for this uniform state, a CNOT gate, and the data loader a second time. Recall that a circuit followed by its adjoint is equivalent to the identity. Therefore, this will load the uniform state only when the first qubit is in state $|1\rangle$:

$$\frac{1}{\sqrt{2}}\Big(|0\rangle|Wx\rangle + |1\rangle \frac{1}{\sqrt{n}}\sum_{j=1}^{n} |e_j\rangle\Big) \qquad (7)$$
Finally, a Hadamard gate on the extra qubit will mix both parts of the amplitudes to give us the desired state:

$$\frac{1}{2}\sum_{j=1}^{n}\Big((Wx)_j + \tfrac{1}{\sqrt{n}}\Big)|0\rangle|e_j\rangle + \frac{1}{2}\sum_{j=1}^{n}\Big((Wx)_j - \tfrac{1}{\sqrt{n}}\Big)|1\rangle|e_j\rangle \qquad (8)$$
On this final state, we can see that the difference between the probabilities of measuring the extra qubit in state $|0\rangle$ or $|1\rangle$ with the rest in the unary state $|e_j\rangle$ is given by $p(0, e_j) - p(1, e_j) = (Wx)_j/\sqrt{n}$. Therefore, for each $j$, we can deduce the sign of $(Wx)_j$ by looking at the most frequent outcome of the measurement of the first qubit. To deduce the value of $(Wx)_j$ as well, we simply use $p(0, e_j)$ or $p(1, e_j)$ depending on the sign found before. For instance, if $(Wx)_j > 0$, we have $(Wx)_j = 2\sqrt{p(0, e_j)} - \tfrac{1}{\sqrt{n}}$.
Combined with the tomography and the non-linearity, the overall cost of this procedure is $O(n/\delta^2)$ as well.
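As a sanity check of Eqs.(6)-(8), the following NumPy sketch computes the ideal outcome probabilities of the final state for a toy output vector and recovers it, signs included (an illustration of the formulas, not the hardware pipeline):

```python
import numpy as np

def recover_from_ancilla_stats(y):
    # Recover y = Wx (signs included) from the probabilities of Eq.(8).
    n = len(y)
    u = 1.0 / np.sqrt(n)
    p0 = 0.25 * (y + u) ** 2                 # P(ancilla = 0, unary state e_j)
    p1 = 0.25 * (y - u) ** 2                 # P(ancilla = 1, unary state e_j)
    signs = np.where(p0 >= p1, 1.0, -1.0)    # p0 - p1 = y_j / sqrt(n)
    return np.where(signs > 0, 2 * np.sqrt(p0) - u, u - 2 * np.sqrt(p1))

y = np.array([0.6, -0.3, 0.1, np.sqrt(0.54)])          # toy unit-norm output
print(np.allclose(recover_from_ancilla_stats(y), y))   # True
```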
3.3 Multiple Quantum Layers
In the previous sections, we have seen how to implement a quantum circuit to perform the evolution of one orthogonal layer. In classical deep learning, such layers are stacked to gain in expressivity and accuracy. Between each layer, a non-linear function is applied to the resulting vector. The presence of these non-linearities is key in the ability of the neural network to learn any function [15].
The benefit of using our quantum pyramidal circuit is the ability to simply concatenate such circuits to mimic a multi-layer neural network. After each layer, a tomography of the output state $|z\rangle$ is performed to retrieve each component $z_j$, corresponding to its quantum amplitude (see Section 3.2). A non-linear function $\sigma$ is then applied classically to obtain $a = \sigma(z)$. The next layer starts with a new unary data loader (see Section 2.3). This hybrid scheme also allows us to keep the depth of the quantum circuits reasonable for NISQ devices, by applying the neural network layer by layer.
Note that the quantum neural networks we propose here are close in behaviour to classical neural networks, so we can control and understand the quantum mapping and implement each layer and its non-linearities in a modular way. They differ in the training strategy, which is close to the classical one but uses a different optimization landscape that can provide different models (see Section 4.2 for details). It would be interesting to compare our pyramidal circuit to a quantum variational circuit with $n$ qubits and gates of any type, as usually seen in the literature. Using such circuits we would explore among all $2^n \times 2^n$ unitary matrices instead of $n \times n$ classical orthogonal matrices, but so far there is no theoretical ground to explain why this should provide an advantage.
As an open outlook, one could imagine incorporating additional entangling gates after each pyramid layer (composed, for instance, of CZ or CNOT gates). This would mark a step out of the unary basis and could effectively allow exploring more interactions in the Hilbert space.
Classical implementation
[Fig. 12: the equivalent classical pyramidal circuit made of planar rotations.]
While we presented the quantum pyramidal circuit as the inspiration for the new methods for orthogonal neural networks, it is not hard to see that these quantum circuits can be simulated classically with a small overhead, thus yielding classical methods for orthogonal neural networks. This classical algorithm is simply the simulation of the quantum pyramidal circuit, where each RBS gate is replaced by a planar rotation between its two inputs.
As shown in Fig.12, we propose a similar classical pyramidal circuit, where each layer is constituted of $n(n-1)/2$ planar rotations, for a total of $O(n^2)$ basic operations. Therefore our single-layer feedforward pass has the same complexity $O(n^2)$ as the usual matrix multiplication.
One may still have an advantage in performing the inference with the quantum circuit, since the quantum circuit has depth $O(n)$, instead of the $O(n^2)$ complexity of the classical matrix-vector multiplication. In addition, as we will see below, the main advantage of our method is that we can also now train orthogonal weight matrices classically in time $O(n^2)$, instead of the previously best-known $O(n^3)$. A minimal classical sketch of this pyramidal feedforward pass is given below.
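The sketch uses one illustrative triangular arrangement of the $n(n-1)/2$ rotations; the exact pyramid layout of Fig.12 may order the gates differently, but any such arrangement yields an orthogonal matrix.

```python
import numpy as np

def pyramid_gates(n):
    # One triangular arrangement of the n(n-1)/2 planar rotations
    # (illustrative layout; Fig.12 groups the same gates into timesteps).
    return [j - 1 for i in range(1, n) for j in range(i, 0, -1)]

def pyramid_forward(x, wires, angles):
    # Classical O(n^2) feedforward: apply each rotation in place,
    # with the same sign convention as the RBS matrix of Eq.(1).
    y = np.array(x, dtype=float)
    for w, th in zip(wires, angles):
        c, s = np.cos(th), np.sin(th)
        y[w], y[w + 1] = c * y[w] + s * y[w + 1], -s * y[w] + c * y[w + 1]
    return y

n = 4
rng = np.random.default_rng(1)
wires = pyramid_gates(n)
angles = rng.uniform(-np.pi, np.pi, size=len(wires))
# The equivalent matrix W (circuit applied to each basis vector) is orthogonal.
W = np.column_stack([pyramid_forward(e, wires, angles) for e in np.eye(n)])
print(np.allclose(W.T @ W, np.eye(n)))    # True
```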
4 Backpropagation
4.1 Classical Backpropagation Algorithm
The backpropagation in a fully connected neural network is a well-known and efficient procedure to update the weight matrix at each layer [16, 17]. At layer $\ell$, we denote its weight matrix by $W^\ell$ and its bias vector by $b^\ell$. Each layer is followed by a non-linear function $\sigma$, and can therefore be written as

$$a^{\ell+1} = \sigma\big(W^\ell a^\ell + b^\ell\big) = \sigma\big(z^{\ell+1}\big) \qquad (9)$$
After the last layer, one can define a cost function $C$ that compares the output to the ground truth. The goal is to calculate the gradient of $C$ with respect to each weight and bias, namely $\frac{\partial C}{\partial W^\ell}$ and $\frac{\partial C}{\partial b^\ell}$. In the backpropagation, we start by calculating these gradients for the last layer, then propagate back to the first layer.
We will need the error vector at layer $\ell$ defined by $\Delta^\ell = \frac{\partial C}{\partial z^\ell}$. One can show the backward recursive relation $\Delta^\ell = \big((W^\ell)^T \Delta^{\ell+1}\big) \odot \sigma'(z^\ell)$, where $\odot$ symbolizes the Hadamard product, or entry-wise multiplication. Note that this computation simply requires applying the layer (i.e. the matrix multiplication) in reverse. We can then show that each element of the weight gradient matrix at layer $\ell$ is given by $\frac{\partial C}{\partial W^\ell_{jk}} = \Delta^{\ell+1}_j a^\ell_k$. Similarly, the gradient with respect to the biases is $\frac{\partial C}{\partial b^\ell_j} = \Delta^{\ell+1}_j$.
Once these gradients are computed, we update the parameters using the gradient descent rule, with learning rate $\lambda$:

$$W^\ell \leftarrow W^\ell - \lambda \frac{\partial C}{\partial W^\ell}, \qquad b^\ell \leftarrow b^\ell - \lambda \frac{\partial C}{\partial b^\ell} \qquad (10)$$
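For concreteness, here is a toy NumPy version of Eqs.(9)-(10) for a single dense layer with a squared-error cost (sizes and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, lr = 4, 0.1
W, b = rng.normal(size=(n, n)), np.zeros(n)        # one toy layer
x, target = rng.normal(size=n), rng.normal(size=n)
sigma, dsigma = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

z = W @ x + b                     # pre-activation, Eq.(9)
a = sigma(z)
delta = (a - target) * dsigma(z)  # error vector for a squared-error cost
grad_W = np.outer(delta, x)       # dC/dW_jk = delta_j * x_k
grad_b = delta                    # dC/db_j  = delta_j
W -= lr * grad_W                  # gradient descent step, Eq.(10)
b -= lr * grad_b
```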
4.2 OrthoNN training: Angle’s Gradient Calculation and Orthogonal Matrix Update
Looking through the prism of our pyramidal quantum circuit, the parameters to update are no longer the individual elements of the weight matrices directly, but the angles of the RBS gates that give rise to these matrices. Thus, we need to adapt the backpropagation method to our setting based on the angles. We start by introducing some notation for a single layer $\ell$; the layer index will be kept implicit for simplicity. We assume we have as many output bits as input bits, but this can easily be extended to the rectangular case.
We first introduce the notion of timesteps inside each layer, which correspond to the computational steps in the pyramidal structure of the circuit (see Fig.13). It is easy to show that for $n$ inputs, there will be $2n-3$ such timesteps, each one indexed by an integer $\lambda$. Applying a timestep consists in applying the matrix $w^\lambda$, made of all the RBS gates aligned vertically at this timestep ($w^\lambda$ is the unitary restricted to the unary basis; see Section 3 for details). Each time a timestep is applied, the resulting state is a vector in the unary basis named inner layer and noted $\zeta^\lambda$. This evolution can be written as $\zeta^{\lambda+1} = w^\lambda \zeta^\lambda$. This notation mirrors that of the actual layer $\ell$, with its weight matrix $W^\ell$ and resulting vector $z^{\ell+1}$ (see Section 4.1).
[Fig. 13: decomposition of a pyramidal layer into timesteps.]
In fact we have the correspondences $\zeta^0 = a^\ell$ for the first inner layer, which is the input of the actual layer, and $\zeta^{2n-3} = z^{\ell+1}$ for the last one. We also have $W^\ell = w^{2n-4} \cdots w^1 w^0$.
We use the same kind of notation for the backpropagation errors. At each timestep $\lambda$ we define an inner error $\delta^\lambda = \frac{\partial C}{\partial \zeta^\lambda}$. This definition is similar to the layer error $\Delta^\ell = \frac{\partial C}{\partial z^\ell}$. In fact we will use the same backpropagation formulas, without non-linearities, to retrieve each inner error vector: $\delta^\lambda = (w^\lambda)^T \delta^{\lambda+1}$. In particular, for the last timestep, the first to be calculated, we have $\delta^{2n-3} = \Delta^{\ell+1}$. Finally, we can retrieve the error at the previous layer $\ell$ using the correspondence $\Delta^\ell = \delta^0 \odot \sigma'(z^\ell)$.
The reason for this breakdown into timesteps is the ability to efficiently obtain the gradient with respect to each angle. Let's consider the timestep $\lambda$ and one of its gates with angle noted $\theta_i$, acting on qubits $i$ and $i+1$ (note that this numbering is different from Fig.13). We will decompose the gradient $\frac{\partial C}{\partial \theta_i}$ using each component, indexed by the integer $k$, of the inner layer and inner error vectors: $\frac{\partial C}{\partial \theta_i} = \sum_k \frac{\partial C}{\partial \zeta^{\lambda+1}_k}\frac{\partial \zeta^{\lambda+1}_k}{\partial \theta_i} = \sum_k \delta^{\lambda+1}_k \frac{\partial (w^\lambda_k \cdot \zeta^\lambda)}{\partial \theta_i}$, where $w^\lambda_k$ is the $k$th row of the matrix $w^\lambda$.
Since timestep $\lambda$ is only composed of disjoint RBS gates, the matrix $w^\lambda$ consists of diagonally arranged $2 \times 2$ block submatrices of the form given in Eq.(1). Only one of these submatrices depends on the angle $\theta_i$ considered here, at positions $i$ and $i+1$ in the matrix. We can thus rewrite the above gradient as

$$\frac{\partial C}{\partial \theta_i} = \delta^{\lambda+1}_i \frac{\partial}{\partial \theta_i}\big(\cos\theta_i\, \zeta^\lambda_i + \sin\theta_i\, \zeta^\lambda_{i+1}\big) + \delta^{\lambda+1}_{i+1} \frac{\partial}{\partial \theta_i}\big(-\sin\theta_i\, \zeta^\lambda_i + \cos\theta_i\, \zeta^\lambda_{i+1}\big) \qquad (11)$$

or:

$$\frac{\partial C}{\partial \theta_i} = \delta^{\lambda+1}_i \big(-\sin\theta_i\, \zeta^\lambda_i + \cos\theta_i\, \zeta^\lambda_{i+1}\big) + \delta^{\lambda+1}_{i+1} \big(-\cos\theta_i\, \zeta^\lambda_i - \sin\theta_i\, \zeta^\lambda_{i+1}\big) \qquad (12)$$
Therefore we have shown a way to compute each angle gradient: during the feedforward pass, one must apply each timestep sequentially and store the resulting vectors, the inner layers $\zeta^\lambda$. During the backpropagation, one obtains the inner errors $\delta^\lambda$ by applying the timesteps in reverse. One can finally use gradient descent on each angle $\theta_i$, while preserving the orthogonality of the overall equivalent weight matrix $W^\ell$. Since the optimization is performed in the angle landscape, and not in the equivalent weight landscape, it can potentially be different and produce different models. We leave open the study of the properties of both landscapes.
As one can see from the above description, this is in fact a classical algorithm to obtain the angles' gradients, which allows us to train our OrthoNN efficiently classically while preserving strict orthogonality. To obtain the angles' gradients, one needs to store the inner layers $\zeta^\lambda$ during the feedforward pass. Next, given the error at the following layer, we perform a backward loop on each timestep (see Fig.12). At each timestep, we obtain the gradient of each angle parameter by simply applying Eq.(12), which requires $O(1)$ operations per angle. Since there are at most $n/2$ angles per timestep, estimating the gradients of one timestep has complexity $O(n)$. After each timestep, the next inner error is computed as well, using at most $O(n)$ operations owing to the sparsity of $w^\lambda$.
In the end, our classical algorithm allows us to compute the gradients of all $n(n-1)/2$ angles in time $O(n^2)$, thus performing a gradient descent step that respects strict orthogonality of the weight matrix in the same time. This is considerably faster than previous methods based on Singular Value Decomposition and provides a training method that is asymptotically as fast as for standard neural networks, while providing the extra property of orthogonality. A short code sketch of this forward and backward pass follows.
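Below is a minimal NumPy sketch of this procedure for one layer, applying Eq.(12) gate by gate; the gate list and function names are illustrative, and grouping the gates into timesteps exactly as in Fig.13 does not change the result. The last lines check the analytical gradients against finite differences for a cost that is linear in the layer output.

```python
import numpy as np

def forward_with_trace(x, gates):
    # gates: ordered list of (wire, theta) planar rotations (the RBS gates).
    # Returns every inner layer zeta^lambda; the last one is the layer output.
    inner = [np.array(x, dtype=float)]
    for w, th in gates:
        z = inner[-1].copy()
        c, s = np.cos(th), np.sin(th)
        z[w], z[w + 1] = c * z[w] + s * z[w + 1], -s * z[w] + c * z[w + 1]
        inner.append(z)
    return inner

def angle_gradients(inner, delta_out, gates):
    # Backward loop over the gates in reverse order, applying Eq.(12)
    # and propagating the inner error through the transposed rotation.
    delta = np.array(delta_out, dtype=float)
    grads = np.zeros(len(gates))
    for t in reversed(range(len(gates))):
        w, th = gates[t]
        c, s = np.cos(th), np.sin(th)
        zi, zj = inner[t][w], inner[t][w + 1]
        grads[t] = delta[w] * (-s * zi + c * zj) + delta[w + 1] * (-c * zi - s * zj)
        delta[w], delta[w + 1] = c * delta[w] - s * delta[w + 1], s * delta[w] + c * delta[w + 1]
    return grads, delta      # delta is now the error at the layer input

gates = [(0, 0.3), (1, -0.7), (0, 1.1)]                 # toy 3-qubit pyramid
x, delta_out = np.array([0.6, 0.8, 0.0]), np.array([1.0, -2.0, 0.5])
inner = forward_with_trace(x, gates)
grads, delta_in = angle_gradients(inner, delta_out, gates)

# Finite-difference check of Eq.(12) for the cost C = delta_out . output
for t, (w, th) in enumerate(gates):
    bumped = gates.copy()
    bumped[t] = (w, th + 1e-6)
    C1 = delta_out @ forward_with_trace(x, bumped)[-1]
    C0 = delta_out @ inner[-1]
    print(abs((C1 - C0) / 1e-6 - grads[t]) < 1e-4)      # True
```

The angles can then be updated as $\theta_i \leftarrow \theta_i - \lambda \frac{\partial C}{\partial \theta_i}$, and the equivalent weight matrix remains orthogonal by construction.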
5 Numerical Experiments
Table 2: Inference accuracy on the MNIST 6-vs-9 classification task.

Network Architecture | Classical Pyramidal Circuit | IBM Simulator | IBM Quantum Computer
---|---|---|---
[4, 2] | 98.4% | 98.4% | 98.0%
[8, 2] | 97.4% | 97.4% | 95.0%
[4, 4, 2] | 98.2% | 98.2% | 82.8%
We performed basic numerical experiments to verify the abilities of our pyramidal circuit, on the standard MNIST dataset [18]. Note that current quantum hardware and software are not yet suited for larger experiments. We first compared the training of our classical OrthoNN to the SVB algorithm from [3] (see Section 1.1). Results are reported in Fig.14. These small-scale tests confirmed that the pyramidal circuits and the corresponding gradient descent on the angles are efficient for learning a classification task.
Then, we implemented the quantum circuit on a real quantum computer provided by IBM. We used a 16-qubit and a 5-qubit device to perform respectively an [8, 2] and a [4, 2] orthogonal layer. We also branched two layers to perform a [4, 4, 2] network with non-linearity. A pyramidal OrthoNN was trained classically, and the resulting angles were transferred to test the quantum circuit on a classification task on classes 6 and 9 of the MNIST dataset, over 500 samples. We compared the real experiment with a simulated one, and with the classical pyramidal circuit as well. Results are reported in Table 2.
[Fig. 14: training comparison between the classical pyramidal circuit and the SVB-based OrthoNN on the MNIST dataset.]
6 Conclusion and Outlook
In this work, we have proposed for the first time training methods for orthogonal neural networks (OrthoNNs) that run in quadratic time, a significant improvement over previous methods based on Singular Value Decomposition. The main idea of our method is to replace the usual orthogonal weight matrices with equivalent pyramidal circuits made of two-dimensional rotations. Each rotation is parametrized by an angle, and the gradient descent takes place in the angles' optimization landscape. This type of gradient backpropagation ensures perfect orthogonality of the weight matrices while substantially improving the running time compared to previous work. Moreover, we propose both classical and quantum methods for inference, where the forward pass on a near-term quantum computer would provide a provable advantage in the running time. This work expands the field of quantum deep learning by introducing new tools, concepts, and equivalences with classical deep learning theory. We have highlighted open questions regarding the construction of such pyramidal circuits for neural networks and their potential new advantages in terms of execution time, accuracy, and learning properties.
6.0.1 Acknowledgements
The authors thank Kui Jia and Shuai Li for their help and comments. This work was supported by ANR quBIC, quData, and QuantERA project QuantAlgo. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team.
References
- [1] N. Mathur, J. Landman, Y. Y. Li, M. Strahm, S. Kazdaghli, A. Prakash, and I. Kerenidis, “Medical image classification via quantum neural networks,” arXiv preprint arXiv:2109.01831, 2021.
- [2] J. Landman, N. Mathur, Y. Y. Li, M. Strahm, S. Kazdaghli, A. Prakash, and I. Kerenidis, “Quantum Methods for Neural Networks and Application to Medical Image Classification,” Quantum, vol. 6, p. 881, Dec. 2022. [Online]. Available: https://doi.org/10.22331/q-2022-12-22-881
- [3] K. Jia, S. Li, Y. Wen, T. Liu, and D. Tao, “Orthogonal deep neural networks,” IEEE transactions on pattern analysis and machine intelligence, 2019.
- [4] J. Wang, Y. Chen, R. Chakraborty, and S. X. Yu, “Orthogonal convolutional neural networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11505–11515.
- [5] B. Nosarzewski, “Deep orthogonal neural networks,” 2018.
- [6] N. Bansal, X. Chen, and Z. Wang, “Can we gain more from orthogonality regularizations in training deep cnns?” arXiv preprint arXiv:1810.09102, 2018.
- [7] I. Kerenidis, J. Landman, and A. Prakash, “Quantum algorithms for deep convolutional neural networks,” in Proceedings of the International Conference on Learning Representations (ICLR), 2020.
- [8] J. Allcock, C.-Y. Hsieh, I. Kerenidis, and S. Zhang, “Quantum algorithms for feedforward neural networks,” arXiv preprint arXiv:1812.03089, 2018.
- [9] I. Cong, S. Choi, and M. D. Lukin, “Quantum convolutional neural networks,” Nature Physics, vol. 15, no. 12, pp. 1273–1278, 2019.
- [10] E. Farhi and H. Neven, “Classification with quantum neural networks on near term processors,” arXiv preprint arXiv:1802.06002, 2018.
- [11] M. Lezcano-Casado and D. Martínez-Rubio, “Cheap orthogonal constraints in neural networks: A simple parametrization of the orthogonal and unitary group,” in International Conference on Machine Learning. PMLR, 2019, pp. 3794–3803.
- [12] B. Foxen, C. Neill, A. Dunsworth, P. Roushan, B. Chiaro, A. Megrant, J. Kelly, Z. Chen, K. Satzinger, R. Barends et al., “Demonstrating a continuous set of two-qubit gates for near-term quantum algorithms,” Physical Review Letters, vol. 125, no. 12, p. 120504, 2020.
- [13] I. Kerenidis, “A method for loading classical data into quantum states for applications in machine learning and optimization,” U.S. Patent Application No. 16/986,553 and 16/987,235, 2020.
- [14] S. Aaronson, “Read the fine print,” Nature Physics, vol. 11, no. 4, pp. 291–293, 2015.
- [15] M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken, “Multilayer feedforward networks with a nonpolynomial activation function can approximate any function,” Neural networks, vol. 6, no. 6, pp. 861–867, 1993.
- [16] R. Hecht-Nielsen, “Theory of the backpropagation neural network,” in Neural networks for perception. Elsevier, 1992, pp. 65–93.
- [17] R. Rojas, “The backpropagation algorithm,” in Neural networks. Springer, 1996, pp. 149–182.
- [18] Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist/
- [19] M. A. Nielsen and I. Chuang, “Quantum computation and quantum information,” 2002.
Appendix A Appendix
A.1 Preliminaries in Quantum Computing
We present a succinct broad-audience quantum information background necessary for this work. See [19] for a detailed course.
Qubits:
In classical computing, a bit can be either 0 or 1. From a quantum information perspective, a quantum bit or qubit can be in state $|0\rangle$ or $|1\rangle$. We use the braket notation $|\cdot\rangle$ to specify the quantum nature of the bit. A qubit can also be in a superposition of both states, $\alpha|0\rangle + \beta|1\rangle$, where $\alpha, \beta \in \mathbb{C}$ are such that $|\alpha|^2 + |\beta|^2 = 1$. The coefficients $\alpha$ and $\beta$ are called amplitudes. The probabilities of observing either 0 or 1 when measuring the qubit are linked to the amplitudes:
$$p(0) = |\alpha|^2, \qquad p(1) = |\beta|^2 \qquad (13)$$
As quantum physics teaches us, any superposition is possible before the measurement, which gives special abilities in terms of computation. With $n$ qubits, $2^n$ possible binary combinations (e.g. $|011\cdots 01\rangle$) can exist simultaneously, each with its own amplitude.
An $n$-qubit system can be represented as a normalized vector in a $2^n$-dimensional Hilbert space. A multiple-qubit system is called a quantum register. If $|p\rangle$ and $|q\rangle$ are two quantum states or quantum registers, the whole system can be represented as a tensor product $|p\rangle \otimes |q\rangle$, also written as $|p\rangle|q\rangle$ or $|pq\rangle$.
Quantum Computation:
As with logical gates in classical circuits, qubits or quantum registers are processed using quantum gates. A quantum gate is a unitary mapping in the Hilbert space, preserving the unit norm of the quantum state vector. Therefore, a quantum gate acting on $n$ qubits is a $2^n \times 2^n$ matrix $U$ such that $UU^\dagger = U^\dagger U = I$, with $U^\dagger$ being the adjoint, or conjugate transpose, of $U$.
Common single-qubit gates include the Hadamard gate, which maps $|0\rangle \mapsto \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ and $|1\rangle \mapsto \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle)$, creating quantum superposition; the NOT (or $X$) gate, which permutes $|0\rangle$ and $|1\rangle$; and the rotation gate $R_y(\theta)$, parametrized by an angle $\theta$ and given by $R_y(\theta) = \begin{pmatrix} \cos(\theta/2) & -\sin(\theta/2) \\ \sin(\theta/2) & \cos(\theta/2) \end{pmatrix}$.
Common two-qubit gates include the CNOT gate, which applies a NOT gate on the second qubit only if the first one is in state $|1\rangle$, and similarly the CZ gate, which applies a $Z$ gate ($|1\rangle \mapsto -|1\rangle$) on the second qubit only if the first one is in state $|1\rangle$.
The main advantage of quantum gates is their ability to be applied to a superposition of inputs. Indeed, given a gate $U$ such that $U|x\rangle = |f(x)\rangle$, we can apply it to all possible combinations of $x$ at once: $U\big(\sum_x \alpha_x |x\rangle\big) = \sum_x \alpha_x |f(x)\rangle$. A small numerical illustration is given below.
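A minimal NumPy sketch of these objects, representing gates as unitary matrices acting on amplitude vectors (illustrative only):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
state = np.array([0.6, 0.8])                     # alpha|0> + beta|1>
print(np.allclose(H.conj().T @ H, np.eye(2)))    # unitarity: U^dagger U = I
out = H @ state                                  # the gate acts on the whole superposition
print(out, np.abs(out) ** 2)                     # amplitudes and probabilities, Eq.(13)
```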