Jet Discrimination with Quantum Complete Graph Neural Network
Abstract
Machine learning, particularly deep neural networks, has been widely used in high-energy physics, demonstrating remarkable results in various applications. Furthermore, the extension of machine learning to quantum computers has given rise to the emerging field of quantum machine learning. In this paper, we propose the Quantum Complete Graph Neural Network (QCGNN), a model based on variational quantum algorithms designed for learning on complete graphs. With sufficiently deep parametrized operators, QCGNN offers a polynomial speedup over its classical and quantum counterparts by leveraging quantum parallelism. We investigate the application of QCGNN to the challenging task of jet discrimination, where jets are represented as complete graphs. Additionally, we conduct a comparative analysis with classical models to establish a performance benchmark. The code is available at https://github.com/NTUHEP-QML/QCGNN.
I Introduction
The proton-proton collisions at the Large Hadron Collider (LHC) produce jets from hard scattering events. Jets are collimated sprays of particles formed through the hadronization of elementary particles. Jet discrimination, i.e., identifying the type of elementary particle that initiates the jet, is one of the most challenging tasks in particle physics.
Deep neural networks (DNNs), celebrated for their architectural flexibility and expressive power, have been widely adopted in high-energy physics (HEP) [1, 2, 3]. Designing a DNN model tailored for jet discrimination poses a significant challenge due to the variable number of constituent particles within jets. Various data representations and DNN models have been proposed for jet discrimination, including images [4, 5, 6, 7, 8, 9, 10, 11], sequences [12, 13, 14, 15, 16, 17, 18], trees [19, 20], graphs [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33], and sets [34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. Jet images are two-dimensional representations (e.g., pseudorapidity $\eta$ versus azimuthal angle $\phi$) where particle information is encoded in a discretized grid. Sequences or trees order particles according to specific criteria (e.g., by transverse momentum or distance parameter). Despite the simplicity of these representations, they often lose information about individual particles, lack translational or rotational invariance, or disregard permutation invariance. To preserve the information of each constituent particle and the relevant symmetries, graphs or sets are widely used for jet representation, with each particle represented as a node in a graph or an element of a set.
In the upcoming high-luminosity LHC (HL-LHC), the data volume is expected to increase by several orders of magnitude compared to the LHC. The increased luminosity and event complexity due to pile-up will make data analysis even more challenging. Consequently, efficient methodologies and novel technologies in data analysis, such as parallel computing and machine learning, are in high demand. Furthermore, quantum computing has made significant strides in recent decades, leading to the development of quantum machine learning (QML) [45, 46, 47, 48, 49]. QML leverages the unique properties of quantum systems, such as superposition and entanglement, to potentially achieve learning capabilities unattainable with classical computers. QML has been explored in several HEP analyses [50], including reconstruction [51, 52, 53], classification [54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67], anomaly detection [68, 69, 70, 71, 72], and data generation [73, 74, 75, 76, 77].
In this paper, we introduce the Quantum Complete Graph Neural Network (QCGNN), a model based on variational quantum algorithms [78, 79, 80] specifically designed for learning on complete graphs [81]. With sufficiently deep parametrized operators, QCGNN achieves a polynomial speedup over its classical counterparts by exploiting quantum parallelism. The application of QCGNN is studied through jet discrimination using two public datasets: the Top dataset [82, 83] for binary classification and the JetNet dataset [84, 85] for multi-class classification.
The structure of this paper is as follows: In Sec. II, we describe the architectures of QCGNN and the classical graph neural networks used for benchmarking, as well as discuss the computational complexity of learning with classical and quantum models. Sec. III details the experimental setup for jet discrimination, with the results presented in Sec. IV. Finally, we summarize our findings in Sec. V.
II Methodology
II.1 Graph Neural Network
Graphs are ubiquitous data structures that represent relationships and connections between entities. Analyzing and extracting valuable information from graph-structured data is a fundamental challenge in modern data science and machine learning. To address this challenge, Graph Neural Networks (GNNs) have emerged as a powerful and versatile framework for learning from graph-structured data [86, 87, 88].
A graph is described by its set of nodes and edges. Let $N$ denote the number of nodes, and $e_{ij}$ the edge from the $i$-th node to the $j$-th node. Throughout this paper, the graphs are assumed to be undirected ($e_{ij} = e_{ji}$) and unweighted (all edges have equal weights). Furthermore, a graph is considered complete [81] if all pairs of nodes are connected. A complete graph that is undirected and unweighted can also be viewed as a set. Each node is associated with a feature vector $x_i \in \mathbb{R}^D$, where $D$ is the dimension of the features.
Graphs are permutation invariant, meaning that reordering the indices of the nodes does not alter the information contained in the graph. The general structure of permutation-invariant models has been studied in [89, 34]. In this work, we focus on a specific type of GNN, the Message Passing Graph Neural Network (MPGNN) [90], which provides a simple and intuitive approach to designing GNNs. The MPGNN is constructed by iterating the following formula (adapted from the PyTorch Geometric documentation, https://pytorch-geometric.readthedocs.io/en/latest/tutorial/create_gnn.html; the edge-feature term is omitted here, as we only consider undirected and unweighted graphs):
$$x_i^{(k)} = \gamma^{(k)}\!\left(x_i^{(k-1)},\; \bigoplus_{j \in \mathcal{N}(i)} \phi^{(k)}\!\left(x_i^{(k-1)}, x_j^{(k-1)}\right)\right), \qquad (1)$$
where $\phi^{(k)}$ extracts information from the neighboring nodes $\mathcal{N}(i)$ of the $i$-th node, and $\gamma^{(k)}$ updates the node features in the $k$-th iteration. As pointed out in [89], summation over a set is the sufficient and necessary condition for satisfying permutation invariance. This concept can be generalized by the aggregation function $\bigoplus$, which is typically chosen as SUM, MEAN, MAX, or MIN, among others (note that MAX and MIN can be expressed as summation over the $\infty$-norm, which can be absorbed into $\phi$). A brief discussion on the relation between state-of-the-art models and the MPGNN is provided in Appendix A.
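As an illustration, below is a minimal PyTorch sketch of one such message-passing layer on a complete graph, assuming SUM aggregation, $\gamma$ taken as the identity on the aggregate, and $\phi$ a small feed-forward network (the class name and layer sizes are our own illustrative choices):

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One MPGNN iteration of Eq. (1) on a complete graph."""

    def __init__(self, dim_in=3, dim_out=6):
        super().__init__()
        # phi acts on the concatenated feature pair (x_i, x_j)
        self.phi = nn.Sequential(
            nn.Linear(2 * dim_in, dim_out), nn.ReLU(),
            nn.Linear(dim_out, dim_out),
        )

    def forward(self, x):  # x: (N, dim_in) node features of one graph
        n = x.size(0)
        xi = x.unsqueeze(1).expand(n, n, -1)          # x_i for every pair (i, j)
        xj = x.unsqueeze(0).expand(n, n, -1)          # x_j for every pair (i, j)
        msg = self.phi(torch.cat([xi, xj], dim=-1))   # phi(x_i, x_j)
        mask = ~torch.eye(n, dtype=torch.bool)        # neighbors j != i
        return (msg * mask.unsqueeze(-1)).sum(dim=1)  # SUM over j: updated x_i
```

Summing the per-node outputs over $i$ additionally yields a permutation-invariant graph-level feature, which is the form used for jet discrimination in Sec. III.2.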
II.2 Quantum Complete Graph Neural Network
[Fig. 1: Example of a QCGNN circuit, with the index register (IR) and the neural network register (NR); the dashed box indicates one data re-uploading block.]
In the noisy intermediate-scale quantum (NISQ) era [92], variational quantum algorithms [78, 79, 80] present an intuitive approach to implementing quantum neural networks. These networks utilize a variational quantum circuit (VQC) with tunable parameters updated via classical optimization routines. Typically, an $n$-qubit VQC can be expressed as
$$f(x; \theta) = \langle 0|^{\otimes n}\, U^{\dagger}(x, \theta)\, \mathcal{O}\, U(x, \theta)\, |0\rangle^{\otimes n},$$
for some Pauli string observable $\mathcal{O}$ and unitary operator $U(x, \theta)$. The unitary operator encodes the data $x$ into the quantum state and includes several tunable parameters $\theta$. These parameters are optimized via gradient descent [93, 94, 95, 96] using an appropriate loss function. Typically, $U(x, \theta)$ can be divided into encoding and parametrized operators, denoted as $U_{\mathrm{ENC}}(x)$ and $U_{\mathrm{PARAM}}(\theta)$, respectively.
The QCGNN architecture comprises two qubit registers: an index register (IR) and a neural network register (NR), with $n_I$ and $n_Q$ qubits, respectively. Fig. 1 illustrates an example of a QCGNN circuit. To encode the information of an undirected and unweighted complete graph with $N$ nodes, we set $n_I = \lceil \log_2 N \rceil$. For graphs with different numbers of nodes, $n_I$ can differ, e.g., a 4-particle jet needs $n_I = 2$ qubits, while a 5-particle jet requires $n_I = 3$. We will show that the form of the QCGNN output allows it to handle graphs of variable size. The quantum state in the IR is initialized to a uniform superposition of basis states through a uniform state oracle (USO) and evolves as
$$|\Psi_0\rangle = \left(U_{\mathrm{USO}} \otimes I\right) |0\rangle^{\otimes n_I} \otimes |0\rangle^{\otimes n_Q} = \frac{1}{\sqrt{N}} \sum_{i=0}^{N-1} |i\rangle \otimes |0\rangle^{\otimes n_Q}, \qquad (2)$$
where $|0\rangle^{\otimes n_Q}$ is the initial quantum state in the NR with all qubits in the $|0\rangle$ state. The decimal representation, e.g., $|0\rangle, |1\rangle, |2\rangle, |3\rangle$, is used here, equivalent to the binary representation $|00\rangle, |01\rangle, |10\rangle, |11\rangle$. The context should clarify whether the decimal or binary representation is being used. If $N$ is a power of 2, i.e., $N = 2^{n_I}$, the USO can be constructed using Hadamard gates only. Otherwise, other methods described in [97, 98] can be employed.
The node features $x_i$ of the $i$-th node are encoded through a series of unitary operators $U_{\mathrm{ENC}}(x_i)$, controlled by the corresponding basis state in the IR, where the control condition is the binary representation of $i$ with $n_I$ digits, denoted as $b(i)$. This controlled operator acts on the quantum state as follows:
$$\left(C_{b(i)}\, U_{\mathrm{ENC}}(x_i)\right) |j\rangle \otimes |\psi\rangle = |j\rangle \otimes \left[U_{\mathrm{ENC}}(x_i)\right]^{\delta_{ij}} |\psi\rangle, \qquad (3)$$
where $\delta_{ij}$ is the Kronecker delta. The controlled operator $C_{b(i)}\, U_{\mathrm{ENC}}(x_i)$ on the left-hand side acts on both the IR and NR, while $U_{\mathrm{ENC}}(x_i)$ on the right-hand side acts only on the NR. After encoding the information of all nodes, the quantum state evolves as
$$|\Psi_1\rangle = \left(\prod_{i=0}^{N-1} C_{b(i)}\, U_{\mathrm{ENC}}(x_i)\right) |\Psi_0\rangle = \frac{1}{\sqrt{N}} \sum_{i=0}^{N-1} |i\rangle \otimes U_{\mathrm{ENC}}(x_i)\, |0\rangle^{\otimes n_Q}, \qquad (4)$$
where we use the fact that each basis state $|i\rangle$ in the IR is acted upon by exactly one of the $C_{b(i)}\, U_{\mathrm{ENC}}(x_i)$ as $i$ ranges from $0$ to $N-1$. For example, for the circuit illustrated in Fig. 1, the quantum state can be expressed as
$$|\Psi_1\rangle = \frac{1}{\sqrt{N}} \left( |b(0)\rangle \otimes U_{\mathrm{ENC}}(x_0)\, |0\rangle^{\otimes n_Q} + \cdots + |b(N-1)\rangle \otimes U_{\mathrm{ENC}}(x_{N-1})\, |0\rangle^{\otimes n_Q} \right), \qquad (5)$$
where the binary ket $|b(i)\rangle$ is equivalent to the decimal ket $|i\rangle$. At first glance, Eq. 4 and Eq. 5 appear to depend on the node ordering. However, as we will demonstrate, by selecting the appropriate observables, the final output of QCGNN is permutation invariant.
Next, a series of parametrized unitary operators $U_{\mathrm{PARAM}}(\theta)$ with tunable parameters $\theta$ is applied to the NR, evolving the quantum state to
$$|\Psi_2\rangle = \left(I \otimes U_{\mathrm{PARAM}}(\theta)\right) |\Psi_1\rangle = \frac{1}{\sqrt{N}} \sum_{i=0}^{N-1} |i\rangle \otimes U_{\mathrm{PARAM}}(\theta)\, U_{\mathrm{ENC}}(x_i)\, |0\rangle^{\otimes n_Q}. \qquad (6)$$
To increase the expressive power of VQCs, the data re-uploading technique [91] can be employed. The idea of data re-uploading is to encode the data multiple times, allowing the quantum state to interact with the data in a more complex manner. It is implemented by alternately applying the encoding and parametrized operators several times, each time with different parameters. As the encoding operators $U_{\mathrm{ENC}}(x_i)$ are controlled by the node indices in the IR, the data re-uploading technique can be implemented straightforwardly without entangling the information of different particles. After re-uploading $R$ times, the final quantum state evolves to
$$|\Psi_f\rangle = \frac{1}{\sqrt{N}} \sum_{i=0}^{N-1} |i\rangle \otimes \left[\prod_{r=1}^{R} U_{\mathrm{PARAM}}(\theta^{(r)})\, U_{\mathrm{ENC}}(x_i)\right] |0\rangle^{\otimes n_Q}, \qquad (7)$$
where the parameters may differ across each operator, as denoted by the superscript $(r)$. We denote the quantum state of the NR as
$$|\psi_i\rangle = \left[\prod_{r=1}^{R} U_{\mathrm{PARAM}}(\theta^{(r)})\, U_{\mathrm{ENC}}(x_i)\right] |0\rangle^{\otimes n_Q}, \quad \text{so that} \quad |\Psi_f\rangle = \frac{1}{\sqrt{N}} \sum_{i=0}^{N-1} |i\rangle \otimes |\psi_i\rangle.$$
Given a Pauli string observable $\mathcal{O}$ applied to the NR, the expectation value of the measurement is given by
$$\langle \Psi_f |\, (I \otimes \mathcal{O})\, | \Psi_f \rangle = \frac{1}{N} \sum_{i=0}^{N-1} \langle \psi_i | \mathcal{O} | \psi_i \rangle, \qquad (8)$$
which results in a sum over each node. Now, consider an additional Hermitian matrix $J$ filled with ones, used as an observable of the IR. Observing that
$$\langle \Psi_f |\, (J \otimes \mathcal{O})\, | \Psi_f \rangle = \frac{1}{N} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \langle \psi_i | \mathcal{O} | \psi_j \rangle = \frac{1}{N} \sum_{i,j} f(x_i, x_j), \qquad (9)$$
where $J_{ij} = 1$ for all $i, j$, and we define
$$f(x_i, x_j) = \mathrm{Re}\, \langle \psi_i | \mathcal{O} | \psi_j \rangle,$$
which is symmetric, i.e., $f(x_i, x_j) = f(x_j, x_i)$. Notice that Eq. 9 computes the average value over all possible pairs, leading to the permutation invariance of the final output. In practice, the observable $J$ can be decomposed as
$$J = \bigotimes_{m=1}^{n_I} \left(I_m + X_m\right), \qquad (10)$$
where $X_m$ refers to the Pauli-X observable of the $m$-th qubit in the IR, and $I_m$ is the identity matrix. The expansion of the right-hand side of Eq. 10 yields $2^{n_I}$ different combinations of Pauli string observables. The value of Eq. 9 can be obtained by summing the expectation values of all combinations of Pauli strings in Eq. 10, corresponding to the SUM operation in the classical MPGNN's aggregation function. In certain cases, one might wish to exclude the contributions from the nodes themselves; these self-pair terms can be removed by considering
$$\frac{1}{N} \sum_{i \neq j} f(x_i, x_j) = \frac{1}{N} \sum_{i,j} f(x_i, x_j) - \frac{1}{N} \sum_{i} f(x_i, x_i),$$
which is equivalent to subtracting the value of Eq. 8 from Eq. 9.
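The full construction can be condensed into a short PennyLane sketch (a minimal illustration under simplifying assumptions: $N = 2^{n_I}$ so that the USO reduces to Hadamard gates, single-qubit Rot gates as $U_{\mathrm{ENC}}$, and strongly entangling layers as $U_{\mathrm{PARAM}}$; the function names are ours):

```python
import pennylane as qml
import numpy as np

n_I, n_Q, N = 2, 3, 4                      # N = 2^{n_I}: the USO is just Hadamards
wires_ir = list(range(n_I))                # index register (IR)
wires_nr = list(range(n_I, n_I + n_Q))     # neural network register (NR)
dev = qml.device("default.qubit", wires=n_I + n_Q)

def encode(x):
    # U_ENC(x): one single-qubit rotation per NR qubit (one possible choice)
    for w in wires_nr:
        qml.Rot(x[0], x[1], x[2], wires=w)

@qml.qnode(dev)
def qcgnn(xs, thetas):                     # xs: (N, 3), thetas: (R, L, n_Q, 3)
    for w in wires_ir:                     # uniform state oracle, Eq. (2)
        qml.Hadamard(wires=w)
    for theta in thetas:                   # R rounds of data re-uploading, Eq. (7)
        for i, x in enumerate(xs):         # controlled encoding, Eqs. (3)-(4)
            ctrl_vals = [int(b) for b in f"{i:0{n_I}b}"]
            qml.ctrl(encode, control=wires_ir, control_values=ctrl_vals)(x)
        qml.StronglyEntanglingLayers(theta, wires=wires_nr)  # U_PARAM, Eq. (6)
    # decompose J = (I + X)^{\otimes n_I} into 2^{n_I} Pauli strings, Eq. (10)
    obs = [qml.PauliZ(wires_nr[0])]
    for w in wires_ir:
        obs = [o @ p for o in obs for p in (qml.Identity(w), qml.PauliX(w))]
    return [qml.expval(o) for o in obs]

xs = np.random.uniform(-np.pi, np.pi, (N, 3))            # toy node features
thetas = np.random.uniform(0, 2 * np.pi, (2, 1, n_Q, 3)) # R = 2, L = 1
pairwise_sum = sum(qcgnn(xs, thetas))      # Eq. (9), up to the 1/N normalization
```

Summing all $2^{n_I}$ expectation values reproduces the pairwise sum of Eq. 9, while the all-identity Pauli string alone reproduces Eq. 8, so the self-pair subtraction above requires no additional circuits.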
II.3 Computational Complexity
In this subsection, we examine the computational complexity of both MPGNN and QCGNN in the context of complete graphs. We focus on the simplest case where both models map a set of $D$-dimensional features to a scalar. For instance, the MPGNN might employ a feed-forward neural network terminating in a single neuron, while the QCGNN could utilize one qubit in the NR, measured in the computational basis ($|0\rangle$ and $|1\rangle$). Additionally, we assume that the computational cost of obtaining a scalar output in classical models is roughly equivalent to the cost of measuring a Pauli string observable in quantum models.
To compute all pairwise information, the MPGNN requires $O(N^2)$ computations, with each pair passing through the neural network once. In contrast, owing to the quantum parallelism inherent in QCGNN, all pairs of nodes are processed simultaneously. To aggregate the final result, the QCGNN requires measuring only $2^{n_I} \sim O(N)$ Pauli string observables, as indicated by Eq. 10. This suggests that QCGNN could offer a polynomial speedup over MPGNN. However, in the case of QCGNN, additional costs associated with multi-controlled operators and the USO should be taken into account. Although various methods exist for decomposing multi-controlled operators, such as those discussed in [99, 100, 101], for simplicity, we adhere to a basic approach outlined in [102], which requires $O(n_I)$ additional ancilla qubits and Toffoli gates. Based on the results in [97, 98], preparing a uniform quantum state necessitates $O(n_I)$ gates. If the parametrized operators are sufficiently deep, these additional costs become negligible, enabling QCGNN to achieve an $O(N)$ speedup over MPGNN.
Certain traditional VQC ansätze, such as quantum kernel methods [103, 104], also share similarities with QCGNN. Quantum kernel methods compute the kernel function of each pair of data points and are typically constructed using $U_{\mathrm{ENC}}(x_i)$ and $U_{\mathrm{ENC}}^{\dagger}(x_j)$. These methods have a computational complexity of $O(N^2)$, as they still require computing each pair individually. Again, if the parametrized operators are sufficiently deep, the additional costs from data encoding may be negligible, allowing QCGNN to achieve an $O(N)$ speedup over quantum kernel methods. This advantage arises because QCGNN computes pairwise information simultaneously, with only the additional measurement costs given by Eq. 10.
II.4 Extending QCGNN to General Graphs
QCGNN can also be extended to weighted graphs, but the additional cost might render it impractical. Consider a simple case, where an undirected, weighted graph has an adjacency matrix that can be expressed as the outer product of a vector $w = (w_0, \dots, w_{N-1})$, with edge weight $e_{ij} = w_i w_j$. Instead of initializing the IR uniformly, we initialize the quantum state as
$$|\Psi_0\rangle = \frac{1}{\sqrt{\sum_k w_k^2}} \sum_{i=0}^{N-1} w_i\, |i\rangle \otimes |0\rangle^{\otimes n_Q},$$
so that the terms in Eq. 9 are modified as
$$f(x_i, x_j) \;\to\; w_i\, w_j\, f(x_i, x_j).$$
Instead of initializing with a uniform state, the quantum state can be initialized using AMPLITUDE EMBEDDING, where the information of the weights is embedded in the amplitudes of the IR basis states.
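A minimal sketch of this weighted initialization with PennyLane's amplitude embedding (the weight values below are hypothetical):

```python
import pennylane as qml
import numpy as np

n_I = 2
w = np.array([0.5, 1.0, 1.5, 2.0])          # hypothetical node weights w_i
dev = qml.device("default.qubit", wires=n_I)

@qml.qnode(dev)
def weighted_ir():
    # IR amplitudes proportional to w_i, normalized to a valid quantum state
    qml.AmplitudeEmbedding(w, wires=range(n_I), normalize=True)
    return qml.state()

print(weighted_ir())                        # amplitudes w_i / sqrt(sum_k w_k^2)
```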
To generalize to directed and weighted graphs, note that any matrix can be decomposed into a symmetric and a skew-symmetric part. Since both parts are normal matrices, they are diagonalizable according to the spectral theorem. One can apply the method described above for each eigenbasis individually and multiply by a factor proportional to the corresponding eigenvalue. However, the additional computational cost associated with diagonalizing matrices and AMPLITUDE EMBEDDING might be substantial, potentially negating the advantages of QCGNN.
For practical applications, we primarily consider the use of QCGNN for undirected, unweighted, and complete graphs. The added complexity of handling weighted, directed, and incomplete graphs may diminish the computational benefits of QCGNN, making it less feasible for real-world applications without further optimizations.
III Experimental Setup
III.1 Dataset for Jet Discrimination
We demonstrate the feasibility of QCGNN using two publicly available Monte Carlo simulated datasets for jet discrimination: the Top dataset [82] and the JetNet dataset [84]. (In the first published version of this work, we used a dataset generated by ourselves with [105, 106, 107, 108, 109]; in the revised version, we switched to existing public datasets, which provide substantially more data.) The jets in both datasets are clustered using the anti-$k_T$ algorithm [110, 111].
The Top dataset [82] is used for binary classification, distinguishing signal jets from top quarks (Top) and background jets from mixed quark-gluon interactions (QCD). The transverse momentum of the jets is in the range of 550 to 650 GeV. The dataset is divided into 1.2 million training samples, 400 thousand validation samples, and 400 thousand testing samples. Further details of the Top dataset can be found in [83].
The JetNet dataset [84] is used for multi-class classification, with jets originating from gluons (g), top quarks (t), light quarks (q), W bosons (w), and Z bosons (z). Each class of jet has a transverse momentum of approximately 1 TeV, with around 170 thousand samples. For each jet event, only the top 30 particles with the highest transverse momentum are retained if the number of particles exceeds 30. Further details of the JetNet dataset can be found in [85].
In our approach, each jet is represented as a complete graph. Each node corresponds to a particle in the jet, with node features related to particle flow information. For the $i$-th particle, the input features include the transverse momentum fraction $f_{p_T}^i = p_T^i / p_T^{\mathrm{jet}}$, the relative pseudorapidity $\Delta\eta_i$, and the relative azimuthal angle $\Delta\phi_i$. For QCGNN, these input features are further preprocessed as follows:
$$\left(f_{p_T}^i,\ \Delta\eta_i,\ \Delta\phi_i\right) \;\longmapsto\; \left(\tilde{f}_{p_T}^i,\ \Delta\tilde{\eta}_i,\ \Delta\tilde{\phi}_i\right), \qquad (11)$$
i.e., each feature is rescaled to a rotation angle, since rotation gates are used for data encoding (see Sec. III.2). Note that the indices of the particles are arbitrary due to the use of permutation-invariant models for graphs.
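For concreteness, the raw inputs can be computed as in the following sketch (the jet axis is approximated here by the $p_T$-weighted centroid, one common convention; the exact rescaling of Eq. 11 is left to the original implementation):

```python
import numpy as np

def particle_flow_features(pt, eta, phi):
    """Per-particle (f_pT, delta_eta, delta_phi) for one jet.

    pt, eta, phi: 1D arrays over the jet constituents; the jet axis is
    approximated by the pT-weighted centroid (an assumed convention).
    """
    jet_pt = pt.sum()
    f_pt = pt / jet_pt                              # transverse momentum fraction
    d_eta = eta - (pt * eta).sum() / jet_pt         # relative pseudorapidity
    d_phi = phi - (pt * phi).sum() / jet_pt         # relative azimuthal angle
    d_phi = (d_phi + np.pi) % (2 * np.pi) - np.pi   # wrap into (-pi, pi]
    return np.stack([f_pt, d_eta, d_phi], axis=-1)  # shape (N, 3)
```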
III.2 Classical and Quantum Models
[Fig. 2: Ansatz for encoding the $i$-th particle features with rotation gates.]
[Fig. 3: Strongly entangling layers [112] used in the parametrized operators.]
[Fig. 4: Full setup of the classical (MPGNN) and quantum (QCGNN) models.]
The classical model for benchmarking is based on the MPGNN of Eq. 1, with the aggregation function chosen to be SUM. The function $\phi$ is implemented as a feed-forward neural network consisting of linear layers and ReLU activation functions [113], while $\gamma$ is simply the summation itself, i.e., $x_i' = \sum_{j \neq i} \phi(x_i, x_j)$, where $j$ ranges from $0$ to $N-1$ except $i$. The input to $\phi$ is simply the concatenation of $x_i$ and $x_j$, requiring only 6 neurons in the input layer of $\phi$. Consequently, the graph feature, denoted as $\mathcal{G}$, is computed through
$$\mathcal{G} = \sum_{i=0}^{N-1} \sum_{\substack{j=0 \\ j \neq i}}^{N-1} \phi(x_i, x_j). \qquad (12)$$
The structure of the MPGNN is similar to the Particle Flow Network (PFN) in [34], with the distinction that PFN calculates a latent representation for each particle individually, whereas the MPGNN calculates the pairwise information between particles.
The quantum model is based on QCGNN, which consists of encoding operators and parametrized operators. The data re-uploading technique [91] is employed 2 times before the final measurements (indicated by the dashed box in Fig. 1, with $R = 2$). For simplicity, we use single-angle rotation gates, defined as
$$R_j(\alpha) = \exp\!\left(-i\,\frac{\alpha}{2}\,\sigma_j\right), \quad j \in \{x, y, z\}, \qquad (13)$$
and triple-angle rotation gates, defined as
$$R(\alpha, \beta, \gamma) = R_z(\gamma)\, R_y(\beta)\, R_z(\alpha), \qquad (14)$$
to encode the particle flow information. The parametrized operators are constructed with strongly entangling layers [112] using rotation gates and CNOT gates. The ansatz for encoding the $i$-th particle features and the strongly entangling layers are shown in Fig. 2 and Fig. 3, respectively. The $k$-th component of the graph feature is computed by
$$\mathcal{G}_k = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \langle \psi_i |\, Z_k\, | \psi_j \rangle, \qquad (15)$$
where the observable $Z_k$ refers to the Pauli-Z measurement of the $k$-th qubit in the NR only, and the summations are computed through Eq. 8 and Eq. 9. To clarify the notation, the subscript of $\psi_i$ corresponds to the $i$-th node, while the subscript of $Z_k$ refers to the $k$-th component of the QCGNN output. Note that all qubits in the NR can be measured simultaneously, but the measurement output from the other qubits is ignored when calculating the expectation value over $Z_k$. This setup can be thought of as a classical feed-forward neural network with $n_Q$ neurons in the output layer. Notice how Eq. 15 resembles Eq. 12, indicating the permutation invariance of the final output.
Eventually, both the classical graph feature $\mathcal{G}$ and the quantum graph feature $\mathcal{G}_k$ are fed into another feed-forward neural network consisting of linear layers and ReLU activation functions. The full setup of the classical and quantum models is depicted in Fig. 4. For binary classification using the Top dataset, the output layer has a single neuron followed by a Sigmoid function and is trained with the binary cross-entropy loss. For multi-class classification using the JetNet dataset, the output layer has five neurons followed by a Softmax function and is trained with the multi-class cross-entropy loss.
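A sketch of this shared classifier head is given below (layer widths are illustrative; note that nn.CrossEntropyLoss applies the Softmax internally):

```python
import torch
import torch.nn as nn

dim_g, n_classes = 6, 5                    # e.g., n_Q = 6 graph features; JetNet: 5 classes
head = nn.Sequential(
    nn.Linear(dim_g, dim_g), nn.ReLU(),    # feed-forward network after the graph feature
    nn.Linear(dim_g, n_classes),
)

g = torch.randn(64, dim_g)                 # a batch of graph features G
loss = nn.CrossEntropyLoss()(head(g), torch.randint(0, n_classes, (64,)))
loss.backward()                            # gradients flow back into the head
```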
III.3 Training Setup and Model Hyperparameters
[Fig. 5: Histograms of the number of particles per jet for the original and preprocessed Top and JetNet datasets.]
The complete training process was conducted with 5 different random seeds, each for 30 epochs. The Top and JetNet datasets comprise 2 and 5 classes, respectively. For each class, we selected 25,000 training samples, 2,500 validation samples, and 2,500 testing samples. This limited data selection is due to the extensive training time required for the QCGNN, as discussed in Sec. III.4. For each random seed, the data were randomly sampled from the original dataset. To balance demonstrating the training performance against the computational demands, particles with transverse momentum less than 2.5% of the jet transverse momentum, i.e., $p_T^i < 0.025\, p_T^{\mathrm{jet}}$, were discarded, so that the majority of the distribution of the number of particles per jet lies between 4 and 16. The histograms of the number of particles per jet for the original and preprocessed Top and JetNet datasets are shown in Fig. 5. To mitigate the extensive training time associated with simulating QML, events with fewer than 4 or more than 16 particles were also discarded. These choices strike a balance between performance and the amount of training data, as discussed in Appendix B.
Due to limited computational resources, the number of qubits in the NR was tested with $n_Q = 3$ and $n_Q = 6$, with a fixed number of strongly entangling layers in each parametrized operator. Given that the maximum number of particles in a jet is 16, we used $n_I = 4$ qubits for the IR. The number of hidden neurons in the MPGNN was set to 3 or 6 for comparison with the QCGNN, ensuring that both models have a comparable number of parameters, as discussed in Appendix C.
We also evaluated the performance of classical state-of-the-art models, including the Particle Flow Network (PFN) [34], Particle Net (PNet) [35], and Particle Transformer (ParT) [36]. The structure and hyperparameters of PFN, PNet, and ParT were configured according to their respective original publications. Notably, we excluded mass information from the interaction matrix of ParT, as only particle flow information was used. The $k$-nearest-neighbor method used in the original PNet was configured with a correspondingly smaller $k$, given that the minimum number of particles per jet is 4.
The classical models were implemented using PyTorch [114] and PyTorch Geometric [115], while the quantum circuit of QCGNN was simulated using PennyLane [116]. The cross-entropy loss was optimized using the Adam optimizer [117] with the same learning rate for all models. The batch size was set to 64, the maximum allowable under our memory constraints, as simulating quantum circuits requires substantial memory resources.
III.4 Implementing QCGNN with Simulators
Unlike classical models, parameter gradients on real quantum computers are impractical to obtain with traditional methods such as finite differences. Instead, the parameter-shift rule (PSR) [93, 94, 95, 96] can be employed to calculate gradients. However, applying the PSR on quantum computers necessitates extensive requests and long queue times on actual quantum devices. Furthermore, the noise in current quantum computers is not sufficiently low to enable stable training of quantum neural networks, often resulting in training failures.
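As a toy illustration of the PSR (a minimal single-qubit example, not the QCGNN circuit itself):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, diff_method="parameter-shift")
def circuit(theta):
    qml.RY(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.3, requires_grad=True)
grad = qml.grad(circuit)(theta)
# for this circuit <Z> = cos(theta), so the PSR gradient is -sin(theta)
print(grad, -np.sin(0.3))
```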
To circumvent these issues during the NISQ era [92], we trained the QCGNNs on classical computers using PennyLane [116] quantum circuit simulators with zero noise. Nonetheless, simulating quantum circuits is highly time-consuming, even for a few qubits. Although PennyLane supports QML on GPUs, speed improvements over CPUs are significant only with many qubits, typically more than 20 (see the benchmark in PennyLane's blog post "Lightning-fast simulations with PennyLane and the NVIDIA cuQuantum SDK"). For this study, we used CPUs to train the QCGNNs. Training a 10-qubit ($n_I = 4$ and $n_Q = 6$) QCGNN with 10,000 samples and a batch size of 64 takes approximately 1,000 seconds per epoch. Consequently, the training over 5 random seeds for 30 epochs required nearly a month.
IV Results
IV.1 Performance of Classical and Quantum Models
Table 1: Performance of the classical and quantum models on the Top dataset (2 classes) and the JetNet dataset (5 classes).

| Model | # params (Top) | AUC (Top) | Accuracy (Top) | # params (JetNet) | AUC (JetNet) | Accuracy (JetNet) |
|---|---|---|---|---|---|---|
| Particle Transformer | 2.2M | 0.946±0.005 | 0.868±0.009 | 2.2M | 0.889±0.002 | 0.656±0.006 |
| Particle Net | 177K | 0.953±0.003 | 0.885±0.006 | 178K | 0.896±0.003 | 0.669±0.004 |
| Particle Flow Network | 72.3K | 0.954±0.004 | 0.885±0.005 | 72.7K | 0.900±0.003 | 0.675±0.005 |
| MPGNN (64 hidden neurons) | 13K | 0.961±0.003 | 0.896±0.003 | 13.3K | 0.903±0.002 | 0.683±0.007 |
| MPGNN (6 hidden neurons) | 255 | 0.924±0.006 | 0.866±0.006 | 323 | 0.865±0.004 | 0.615±0.010 |
| MPGNN (3 hidden neurons) | 126 | 0.922±0.005 | 0.864±0.006 | 194 | 0.757±0.110 | 0.475±0.141 |
| QCGNN ($n_Q = 6$) | 201 | 0.932±0.004 | 0.868±0.005 | 269 | 0.822±0.003 | 0.543±0.006 |
| QCGNN ($n_Q = 3$) | 99 | 0.919±0.006 | 0.864±0.005 | 167 | 0.796±0.009 | 0.505±0.014 |
The performance of the classical and quantum models on the Top and JetNet datasets is summarized in Table 1. The inference scores of the MPGNN and QCGNN are comparable when the numbers of parameters are of roughly the same order of magnitude. We anticipate that QCGNN has the potential to achieve performance on par with the MPGNN as the number of qubits increases. When training with a smaller number of parameters on the multi-class JetNet dataset, we observe that QCGNN is more stable than MPGNN, with the latter exhibiting a larger standard deviation.
Among all the models considered, the MPGNN with 64 hidden neurons achieves the highest inference scores. This is likely because part of the jet information is lost during data preprocessing, where only 4 to 16 particles per jet are utilized. When training with the full information from the original jet dataset, i.e., without discarding soft particles, the other state-of-the-art models can compete with the MPGNN, or even surpass it. Details of the state-of-the-art models trained on the original jet dataset are provided in Appendix B.
IV.2 Executing Pre-trained QCGNN on IBMQ
[Fig. 6: Noise extrapolation of QCGNN performance using simulators.]
Although implementing the full training on quantum computers is impractical in the NISQ era, we can still evaluate the performance of the pre-trained QCGNN on IBMQ real devices [118]. To minimize the noise effects caused by real quantum gates, we select events with only four particles from the Top dataset, i.e., using $n_I = 2$ qubits in the IR, thereby reducing the number of gates required for initial state preparation and data encoding. In this setup, the USO can be efficiently implemented with a Hadamard gate on each qubit in the IR. On IBMQ real devices, only 1-qubit and 2-qubit gates are available, and the multi-controlled gates used in data encoding are decomposed using the methods described in [102].
We selected ibm_brussels with 1024 shots to test the performance of QCGNN on an IBMQ real device. However, quantum computers in the NISQ era are currently too noisy to yield usable results. For binary classification, the inference of QCGNN on ibm_brussels results in an AUC and accuracy of approximately 0.5, which equates to random guessing. To assess how noise affects the performance of QCGNN, we perform an extrapolation over noise using PennyLane simulators, with the results shown in Fig. 6. We simulate quantum noise, including depolarizing error and amplitude damping, occurring after each quantum operation with a certain probability. As indicated in Fig. 6, the noise probability must be reduced substantially to achieve reliable results (the noise probability here corresponds to the simulated noise in the quantum circuit simulation).
IV.3 QCGNN Runtime on IBMQ
Table 2: Runtime of the QCGNN components on IBMQ backends, where $\tau_{\mathrm{ENC}}$ is the runtime of the encoding operators per re-uploading round and $\tau_{\mathrm{PARAM}}$ is the runtime per strongly entangling layer (both in seconds).

| IBMQ Backend | N | $\tau_{\mathrm{ENC}}$ (s) | $\tau_{\mathrm{PARAM}}$ (s) |
|---|---|---|---|
| ibm_nazca | 2 | 2.567 | 0.209 |
| ibm_nazca | 4 | 5.352 | 0.197 |
| ibm_nazca | 8 | 10.551 | 0.219 |
| ibm_strasbourg | 2 | 2.595 | 0.217 |
| ibm_strasbourg | 4 | 5.416 | 0.197 |
| ibm_strasbourg | 8 | 11.085 | 0.211 |
To validate the time complexity analysis discussed in Sec. II.3, we initialized untrained QCGNNs and executed them on various IBMQ backends, including ibm_nazca and ibm_strasbourg, with 1024 shots. We set the number of nodes $N$ to 2, 4, and 8, such that only Hadamard gates are required for the initial state preparation. To determine the quantum gate runtime for the encoding and parametrized operators, we first ran QCGNN without any operators to measure the runtime $T_0$ for quantum state initialization and measurement. We then applied the encoding operators with 10 times of re-uploading to obtain the runtime $T_{\mathrm{ENC}}$. Finally, we applied the parametrized operators, constructed with 10 strongly entangling layers and 10 times of re-uploading (resulting in 100 strongly entangling layers in total), to measure the runtime $T_{\mathrm{PARAM}}$. Each runtime measurement was averaged over 10 executions. The runtime of the encoding operators was computed as
$$\tau_{\mathrm{ENC}} = \frac{T_{\mathrm{ENC}} - T_0}{10},$$
and the runtime of each strongly entangling layer in the parametrized operators as
$$\tau_{\mathrm{PARAM}} = \frac{T_{\mathrm{PARAM}} - T_{\mathrm{ENC}}}{100}.$$
The results presented in Table 2 indicate that the runtime of the encoding operators scales approximately linearly with the number of particles per jet, while the runtime of each strongly entangling layer remains approximately constant, as expected. As discussed in Sec. II.3, when the parametrized operators are sufficiently deep, the runtime is dominated by these operators, making the additional computational cost associated with data encoding negligible.
V Summary
The representation of jets as graphs, leveraging the property of permutation invariance, has been widely utilized in particle physics. However, constructing graphs from particle jets in a physically meaningful manner remains an unresolved challenge. In the absence of specific physical assumptions, we adopt a straightforward approach by representing jets as complete graphs with undirected, unweighted edges. Motivated by the structure of complete graphs, we propose the Quantum Complete Graph Neural Network (QCGNN) for learning through aggregation using SUM or MEAN operations. When training on $N$ particles, QCGNN exhibits $O(N)$ computational complexity if the parametrized operators are sufficiently deep, offering a polynomial speedup over classical models that require $O(N^2)$.
To demonstrate the practicality of QCGNN, we conduct experiments on jet discrimination. Sec. IV.1 shows that QCGNN performs comparably to classical models with a similar number of parameters. Moreover, QCGNN displays a more stable training process across different random seeds. Although the pre-trained QCGNN has been tested on IBMQ real devices, the noise in quantum circuits remains too significant to yield reliable results. To assess the impact of noise in the NISQ era, we perform noise extrapolation using simulators, as detailed in Sec. IV.2. We also conducted a series of executions on IBMQ quantum devices to estimate the runtime of QCGNN, as discussed in Sec. IV.3. The time costs of the encoding and parametrized operators scale approximately linearly and remain approximately constant with the number of particles per jet, respectively.
In conclusion, QCGNN provides a more efficient method for learning on unstructured jet data with QML. The additional computational costs associated with quantum state initialization and data encoding are negligible when the parametrized operators are sufficiently deep, as discussed in Sec. II.3. However, it remains an open question whether QML provides a definitive quantum advantage in HEP. Moreover, developing more expressive and suitable methods for HEP data encoding continues to be an intriguing and ongoing area of research.
Acknowledgement
The authors thank Chiao-Hsuan Wang for helpful discussions and suggestions about quantum computation. The accessibility of IBMQ resources is supported by the IBM Quantum Hub at National Taiwan University.
Appendix A Relation Between State-of-the-Art Models and MPGNN
In Sec. II.1, we introduce the MPGNN. Here, we show that some state-of-the-art models can be considered special cases of the MPGNN, i.e., of the form
$$x_i' = \gamma\!\left(x_i,\; \bigoplus_{j \in \mathcal{N}(i)} \phi(x_i, x_j)\right). \qquad (16)$$
A.1 Particle Flow Networks (PFN) as MPGNN
The PFN introduced in [34] first transforms the particle features into a latent space via a feed-forward neural network $\Phi$, followed by a summation. Another feed-forward neural network $F$ is then applied to obtain the final score for jet discrimination. In the form of the MPGNN, the PFN can be written as
$$\mathrm{PFN}(\{x_i\}) = F\!\left(\sum_{i} \Phi(x_i)\right),$$
i.e., $\phi(x_i, x_j) = \Phi(x_i)$ depends on a single particle only, with SUM aggregation and $\gamma$ absorbed into $F$.
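A deep-sets-style sketch of this structure (layer widths are illustrative):

```python
import torch
import torch.nn as nn

# PFN: per-particle Phi, permutation-invariant SUM, then F on the summed latent.
Phi = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
F = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

x = torch.randn(30, 3)            # one jet with 30 particles (f_pT, d_eta, d_phi)
score = F(Phi(x).sum(dim=0))      # invariant under any reordering of the particles
```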
A.2 Particle Net (PNet) as MPGNN
The PNet introduced in [35] turns jets into graphs by dynamically determining the edges through distances in feature space or latent space. The EdgeConv operation of PNet can be written as
$$x_i' = \gamma\!\left(x_i,\; \bigoplus_{j \in \mathcal{N}(i)} \phi(x_i,\, x_j - x_i)\right),$$
where the neighbors $\mathcal{N}(i)$ are dynamically determined through the $k$-nearest-neighbor method. The EdgeConv also calculates the difference between features, which then passes through either a convolutional neural network or a feed-forward neural network, captured by $\phi$ and $\gamma$.
A.3 Particle Transformer (ParT) as MPGNN
The ParT introduced in [36] uses the transformer architecture to learn the jet features. The structure of the transformer is rather complicated, but each attention block can still be written in the form of the MPGNN. As ParT considers all pairs of particle information without positional embedding, it can be seen as operating on complete graphs. The queries (Q) and keys (K) in the attention mechanism are captured by $\phi$, with the aggregation function chosen to be SOFTMAX (which can be seen as a summation over a particular transformation that can be absorbed into $\phi$), and the values (V) in the attention mechanism are likewise captured by $\phi$. Note that functions in the transformer such as GeLU or LayerNorm can also be absorbed into $\phi$ and $\gamma$.
Appendix B Performance of Classical Models on Different Number of Training Samples
[Fig. 7: Performance of the classical models versus the number of training samples per class.]
As described in Sec. III.1, we selected 25,000 training samples with a maximum of 16 particles per jet for each class. In this appendix, we justify that this setup is sufficient to evaluate the performance of each model. We trained state-of-the-art classical models, including the Particle Flow Network (PFN) [34], Particle Net (PNet) [35], and Particle Transformer (ParT) [36], as well as the MPGNN with 64 hidden neurons.
The performance of each model on both the Top and JetNet datasets is obtained by training with varying numbers of samples per class, across 5 different random seeds. The results are presented in Fig. 7. The training samples were preprocessed as outlined in Sec. III.1, using only events with at least 4 and at most 16 particles. The performance of each state-of-the-art model saturates between 25,000 and 50,000 training samples, indicating that the choice of 25,000 samples in Sec. III.1 is largely adequate for demonstrating model performance. We also conducted experiments with the full-particle jets from the original dataset, without applying the transverse momentum cutoff, using 100,000 samples per class. We found that, when training on jets with only a few particles, the simplest MPGNN model performs better than the other models. However, when using the full original dataset, the ParT outperforms the other models.
Appendix C Number of Parameters in MPGNN and QCGNN
In this appendix, we compute the number of parameters for the MPGNN and QCGNN models based on the structures outlined in Sec. III.2. It is important to distinguish these calculations from the total number of parameters reported in Table 1, which includes the parameters of the final feed-forward network in both MPGNN and QCGNN.
For the MPGNN with $d$ hidden neurons in both the hidden and output layers, and an input dimension of 6 (since the features of two particles are concatenated), if there are $h$ hidden layers, the total number of parameters is given by
$$\#\mathrm{params}_{\mathrm{MPGNN}} = (6d + d) + h\,(d^2 + d), \qquad (17)$$
where the $+d$ in each parenthesis accounts for the bias terms in the linear layers.
For the QCGNN, suppose there are $n_Q$ qubits in the NR with the strongly entangling layers ansatz depicted in Fig. 3. Each strongly entangling layer consists of $n_Q$ rotation gates, with each gate having 3 parameters. If there are $L$ strongly entangling layers and $R$ rounds of re-uploading, the total number of parameters is
$$\#\mathrm{params}_{\mathrm{QCGNN}} = 3\, n_Q\, L\, R. \qquad (18)$$
To ensure that both models have the same output dimension, we set $d = n_Q$. Assuming $d$ is a multiple of 3, the numbers of parameters for the MPGNN and QCGNN can be written as
$$\#\mathrm{params}_{\mathrm{MPGNN}} = h\,d^2 + (7 + h)\,d, \qquad \#\mathrm{params}_{\mathrm{QCGNN}} = 3\,d\,LR. \qquad (19)$$
It is evident that by choosing $LR \sim h d / 3$, the leading term for both models scales as $d^2$. In this study, we kept $L$ and $R$ small (cf. Sec. III.2 and Sec. III.3). To match the linear term in the MPGNN as well, one could further enlarge $LR$, resulting in equal parameter counts. However, this approach was not considered in this study due to the increased simulation time required for longer circuits.
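The counting of Eqs. (17)-(18) can be checked with a few lines (a sketch; $d$: hidden neurons, $h$: hidden layers, $n_Q$: NR qubits, $L$: strongly entangling layers, $R$: re-uploading rounds):

```python
def n_params_mpgnn(d: int, h: int) -> int:
    # Eq. (17): input layer (6 -> d) plus h subsequent (d -> d) layers, with biases
    return (6 * d + d) + h * (d * d + d)

def n_params_qcgnn(n_q: int, L: int, R: int) -> int:
    # Eq. (18): 3 parameters per rotation gate, n_q gates per layer, L layers, R rounds
    return 3 * n_q * L * R
```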
References
- [1] HEP ML Community, A Living Review of Machine Learning for Particle Physics.
- Feickert and Nachman [2021] M. Feickert and B. Nachman, A living review of machine learning for particle physics (2021), arXiv:2102.02770 [hep-ph] .
- Radovic et al. [2018] A. Radovic, M. Williams, D. Rousseau, M. Kagan, D. Bonacorsi, A. Himmel, A. Aurisano, K. Terao, and T. Wongjirad, Machine learning at the energy and intensity frontiers of particle physics, Nature 560, 41 (2018).
- Chen and Chien [2020] K.-F. Chen and Y.-T. Chien, Deep learning jet substructure from two-particle correlations, Phys. Rev. D 101, 114025 (2020).
- Kheddar et al. [2024] H. Kheddar, Y. Himeur, A. Amira, and R. Soualah, Image classification in high-energy physics: A comprehensive survey of applications to jet analysis (2024), arXiv:2403.11934 [hep-ph] .
- Lee et al. [2019a] J. S. H. Lee, I. Park, I. J. Watson, and S. Yang, Quark-gluon jet discrimination using convolutional neural networks, Journal of the Korean Physical Society 74, 219 (2019a).
- Li and Sun [2020] J. Li and H. Sun, An attention based neural network for jet tagging (2020), arXiv:2009.00170 [hep-ph] .
- Choi et al. [2023] S. K. Choi, J. Li, C. Zhang, and R. Zhang, Automatic detection of boosted higgs boson and top quark jets in an event image, Phys. Rev. D 108, 116002 (2023).
- Kasieczka et al. [2017] G. Kasieczka, T. Plehn, M. Russell, and T. Schell, Deep-learning top taggers or the end of qcd?, Journal of High Energy Physics 2017, 6 (2017).
- Komiske et al. [2017] P. T. Komiske, E. M. Metodiev, and M. D. Schwartz, Deep learning in color: towards automated quark/gluon jet discrimination, Journal of High Energy Physics 2017, 110 (2017).
- Baldi et al. [2016] P. Baldi, K. Bauer, C. Eng, P. Sadowski, and D. Whiteson, Jet substructure classification in high-energy physics with deep neural networks, Phys. Rev. D 93, 094034 (2016).
- jet [2017] Identification of Jets Containing $b$-Hadrons with Recurrent Neural Networks at the ATLAS Experiment, Tech. Rep. (CERN, Geneva, 2017); all figures including auxiliary figures are available at https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PUBNOTES/ATL-PHYS-PUB-2017-003.
- de Lima [2021] R. T. de Lima, Sequence-based machine learning models in jet physics (2021), arXiv:2102.06128 [physics.data-an] .
- Bols et al. [2020] E. Bols, J. Kieseler, M. Verzetti, M. Stoye, and A. Stakia, Jet flavour classification using deepjet, Journal of Instrumentation 15 (12), P12012.
- Guest et al. [2016] D. Guest, J. Collado, P. Baldi, S.-C. Hsu, G. Urban, and D. Whiteson, Jet flavor classification in high-energy physics with deep neural networks, Phys. Rev. D 94, 112002 (2016).
- Lee et al. [2019b] J. S. H. Lee, S. M. Lee, Y. Lee, I. Park, I. J. Watson, and S. Yang, Quark gluon jet discrimination with weakly supervised learning, Journal of the Korean Physical Society 75, 652 (2019b).
- Egan et al. [2017] S. Egan, W. Fedorko, A. Lister, J. Pearkes, and C. Gay, Long short-term memory (lstm) networks with jet constituents for boosted top tagging at the lhc (2017), arXiv:1711.09059 [hep-ex] .
- Pearkes et al. [2017] J. Pearkes, W. Fedorko, A. Lister, and C. Gay, Jet constituents for deep neural network based top quark tagging (2017), arXiv:1704.02124 [hep-ex] .
- Cheng [2018] T. Cheng, Recursive neural networks in quark/gluon tagging, Computing and Software for Big Science 2, 3 (2018).
- Louppe et al. [2019] G. Louppe, K. Cho, C. Becot, and K. Cranmer, Qcd-aware recursive neural networks for jet physics, Journal of High Energy Physics 2019, 57 (2019).
- Henrion et al. [2017] I. Henrion, J. Brehmer, J. Bruna, K. Cho, K. Cranmer, G. Louppe, and G. Rochette, Neural message passing for jet physics (2017).
- Moreno et al. [2020] E. A. Moreno, O. Cerri, J. M. Duarte, H. B. Newman, T. Q. Nguyen, A. Periwal, M. Pierini, A. Serikova, M. Spiropulu, and J.-R. Vlimant, Jedi-net: a jet identification algorithm based on interaction networks, The European Physical Journal C 80, 58 (2020).
- Chakraborty et al. [2019] A. Chakraborty, S. H. Lim, and M. M. Nojiri, Interpretable deep learning for two-prong jet classification with jet spectra, Journal of High Energy Physics 2019, 135 (2019).
- Chakraborty et al. [2020] A. Chakraborty, S. H. Lim, M. M. Nojiri, and M. Takeuchi, Neural network-based top tagger with two-point energy correlations and geometry of soft emissions, Journal of High Energy Physics 2020, 111 (2020).
- Shlomi et al. [2020] J. Shlomi, P. Battaglia, and J.-R. Vlimant, Graph neural networks in particle physics, Machine Learning: Science and Technology 2, 021001 (2020).
- Ju and Nachman [2020] X. Ju and B. Nachman, Supervised jet clustering with graph neural networks for lorentz boosted bosons, Phys. Rev. D 102, 075014 (2020).
- Dreyer and Qu [2021] F. A. Dreyer and H. Qu, Jet tagging in the lund plane with graph networks (2021), arXiv:2012.08526 [hep-ph] .
- Gong et al. [2022] S. Gong, Q. Meng, J. Zhang, H. Qu, C. Li, S. Qian, W. Du, Z.-M. Ma, and T.-Y. Liu, An efficient lorentz equivariant graph neural network for jet tagging, Journal of High Energy Physics 2022, 30 (2022).
- Ma et al. [2023] F. Ma, F. Liu, and W. Li, Jet tagging algorithm of graph network with haar pooling message passing, Phys. Rev. D 108, 072007 (2023).
- Mokhtar et al. [2022] F. Mokhtar, R. Kansal, and J. Duarte, Do graph neural networks learn traditional jet substructure? (2022), arXiv:2211.09912 [hep-ex] .
- Murnane [2023] D. Murnane, Graph structure from point clouds: Geometric attention is all you need (2023), arXiv:2307.16662 [cs.LG] .
- Thais et al. [2022] S. Thais, P. Calafiura, G. Chachamis, G. DeZoort, J. Duarte, S. Ganguly, M. Kagan, D. Murnane, M. S. Neubauer, and K. Terao, Graph neural networks in particle physics: Implementations, innovations, and challenges (2022), arXiv:2203.12852 [hep-ex] .
- Guo et al. [2021] J. Guo, J. Li, T. Li, and R. Zhang, Boosted higgs boson jet reconstruction via a graph neural network, Phys. Rev. D 103, 116025 (2021).
- Komiske et al. [2019] P. T. Komiske, E. M. Metodiev, and J. Thaler, Energy flow networks: deep sets for particle jets, Journal of High Energy Physics 2019, 121 (2019).
- Qu and Gouskos [2020] H. Qu and L. Gouskos, Jet tagging via particle clouds, Phys. Rev. D 101, 056019 (2020).
- Qu et al. [2024] H. Qu, C. Li, and S. Qian, Particle transformer for jet tagging (2024), arXiv:2202.03772 [hep-ph] .
- Dolan and Ore [2021] M. J. Dolan and A. Ore, Equivariant energy flow networks for jet tagging, Phys. Rev. D 103, 074022 (2021).
- jet [2020] Deep Sets based Neural Networks for Impact Parameter Flavour Tagging in ATLAS, Tech. Rep. (CERN, Geneva, 2020) all figures including auxiliary figures are available at https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PUBNOTES/ATL-PHYS-PUB-2020-014.
- Käch et al. [2022] B. Käch, D. Krücker, and I. Melzer-Pellmann, Point cloud generation using transformer encoders and normalising flows (2022), arXiv:2211.13623 [hep-ex] .
- Athanasakos et al. [2023] D. Athanasakos, A. J. Larkoski, J. Mulligan, M. Ploskon, and F. Ringer, Is infrared-collinear safe information all you need for jet classification? (2023), arXiv:2305.08979 [hep-ph] .
- Käch and Melzer-Pellmann [2023] B. Käch and I. Melzer-Pellmann, Attention to mean-fields for particle cloud generation (2023), arXiv:2305.15254 [hep-ex] .
- Mondal et al. [2023] S. Mondal, G. Barone, and A. Schmidt, Paired jet: A multi-pronged resonance tagging strategy across all lorentz boosts (2023), arXiv:2311.11011 [hep-ex] .
- Odagiu et al. [2024] P. Odagiu, Z. Que, J. Duarte, J. Haller, G. Kasieczka, A. Lobanov, V. Loncar, W. Luk, J. Ngadiuba, M. Pierini, P. Rincke, A. Seksaria, S. Summers, A. Sznajder, A. Tapper, and T. K. Aarrestad, Sets are all you need: Ultrafast jet classification on fpgas for hl-lhc (2024), arXiv:2402.01876 [hep-ex] .
- Gambhir et al. [2024] R. Gambhir, A. Osathapan, and J. Thaler, Moments of clarity: Streamlining latent spaces in machine learning using moment pooling (2024), arXiv:2403.08854 [hep-ph] .
- Biamonte et al. [2017] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549, 195 (2017).
- Zeguendry et al. [2023] A. Zeguendry, Z. Jarir, and M. Quafafou, Quantum machine learning: A review and case studies, Entropy 25, 10.3390/e25020287 (2023).
- García et al. [2022] D. P. García, J. Cruz-Benito, and F. J. García-Peñalvo, Systematic literature review: Quantum machine learning and its applications (2022), arXiv:2201.04093 [quant-ph] .
- Tychola et al. [2023] K. A. Tychola, T. Kalampokas, and G. A. Papakostas, Quantum machine learning—an overview, Electronics 12, 10.3390/electronics12112379 (2023).
- Schuld and Petruccione [2021] M. Schuld and F. Petruccione, Machine Learning with Quantum Computers (2021).
- Guan et al. [2021] W. Guan, G. Perdue, A. Pesah, M. Schuld, K. Terashi, S. Vallecorsa, and J.-R. Vlimant, Quantum machine learning in high energy physics, Machine Learning: Science and Technology 2, 011003 (2021).
- Araz and Spannowsky [2021] J. Y. Araz and M. Spannowsky, Quantum-inspired event reconstruction with tensor networks: Matrix product states, Journal of High Energy Physics 2021, 112 (2021).
- Duckett et al. [2024] P. Duckett, G. Facini, M. Jastrzebski, S. Malik, T. Scanlon, and S. Rettie, Reconstructing charged particle track segments with a quantum-enhanced support vector machine, Phys. Rev. D 109, 052002 (2024).
- Tüysüz, Cenk et al. [2020] Tüysüz, Cenk, Carminati, Federico, Demirköz, Bilge, Dobos, Daniel, Fracas, Fabio, Novotny, Kristiane, Potamianos, Karolos, Vallecorsa, Sofia, and Vlimant, Jean-Roch, Particle track reconstruction with quantum algorithms, EPJ Web Conf. 245, 09013 (2020).
- Blance and Spannowsky [2021a] A. Blance and M. Spannowsky, Quantum machine learning for particle physics using a variational quantum classifier, Journal of High Energy Physics 2021, 212 (2021a).
- Terashi et al. [2021] K. Terashi, M. Kaneda, T. Kishimoto, M. Saito, R. Sawada, and J. Tanaka, Event classification with quantum machine learning in high-energy physics, Computing and Software for Big Science 5, 2 (2021).
- Chen et al. [2020] S. Y.-C. Chen, T.-C. Wei, C. Zhang, H. Yu, and S. Yoo, Quantum convolutional neural networks for high energy physics data analysis (2020), arXiv:2012.12177 [cs.LG] .
- Wu et al. [2021a] S. L. Wu, J. Chan, W. Guan, S. Sun, A. Wang, C. Zhou, M. Livny, F. Carminati, A. D. Meglio, A. C. Y. Li, J. Lykken, P. Spentzouris, S. Y.-C. Chen, S. Yoo, and T.-C. Wei, Application of quantum machine learning using the quantum variational classifier method to high energy physics analysis at the lhc on ibm quantum computer simulator and hardware with 10 qubits, Journal of Physics G: Nuclear and Particle Physics 48, 125003 (2021a).
- Chen et al. [2021] S. Y.-C. Chen, T.-C. Wei, C. Zhang, H. Yu, and S. Yoo, Hybrid quantum-classical graph convolutional network (2021), arXiv:2101.06189 [cs.LG] .
- Blance and Spannowsky [2021b] A. Blance and M. Spannowsky, Unsupervised event classification with graphs on classical and photonic quantum computers, Journal of High Energy Physics 2021, 170 (2021b).
- Heredge et al. [2021] J. Heredge, C. Hill, L. Hollenberg, and M. Sevior, Quantum support vector machines for continuum suppression in b meson decays (2021), arXiv:2103.12257 [quant-ph] .
- Wu et al. [2021b] S. L. Wu, S. Sun, W. Guan, C. Zhou, J. Chan, C. L. Cheng, T. Pham, Y. Qian, A. Z. Wang, R. Zhang, M. Livny, J. Glick, P. K. Barkoutsos, S. Woerner, I. Tavernelli, F. Carminati, A. Di Meglio, A. C. Y. Li, J. Lykken, P. Spentzouris, S. Y.-C. Chen, S. Yoo, and T.-C. Wei, Application of quantum machine learning using the quantum kernel algorithm on high energy physics analysis at the lhc, Phys. Rev. Res. 3, 033221 (2021b).
- Belis et al. [2021] V. Belis, S. González-Castillo, C. Reissel, S. Vallecorsa, E. F. Combarro, G. Dissertori, and F. Reiter, Higgs analysis with quantum classifiers, EPJ Web of Conferences 251, 03070 (2021).
- Gianelle et al. [2022] A. Gianelle, P. Koppenburg, D. Lucchesi, D. Nicotra, E. Rodrigues, L. Sestini, J. de Vries, and D. Zuliani, Quantum machine learning for b-jet charge identification, Journal of High Energy Physics 2022, 14 (2022).
- Abel et al. [2022] S. Abel, J. C. Criado, and M. Spannowsky, Completely quantum neural networks, Phys. Rev. A 106, 022601 (2022).
- Araz and Spannowsky [2022] J. Y. Araz and M. Spannowsky, Classical versus quantum: Comparing tensor-network-based quantum circuits on large hadron collider data, Physical Review A 106, 10.1103/physreva.106.062423 (2022).
- Peixoto et al. [2023] M. C. Peixoto, N. F. Castro, M. Crispim Romão, M. G. J. Oliveira, and I. Ochoa, Fitting a collider in a quantum computer: tackling the challenges of quantum machine learning for big datasets, Frontiers in Artificial Intelligence 6, 10.3389/frai.2023.1268852 (2023).
- Hammad et al. [2023] A. Hammad, K. Kong, M. Park, and S. Shim, Quantum metric learning for new physics searches at the lhc (2023), arXiv:2311.16866 [hep-ph] .
- Ngairangbam et al. [2022] V. S. Ngairangbam, M. Spannowsky, and M. Takeuchi, Anomaly detection in high-energy physics using a quantum autoencoder, Phys. Rev. D 105, 095004 (2022).
- Alvi et al. [2023] S. Alvi, C. W. Bauer, and B. Nachman, Quantum anomaly detection for collider physics, Journal of High Energy Physics 2023, 220 (2023).
- Araz and Spannowsky [2023] J. Y. Araz and M. Spannowsky, Quantum-probabilistic hamiltonian learning for generative modeling and anomaly detection, Phys. Rev. A 108, 062422 (2023).
- Woźniak et al. [2023] K. A. Woźniak, V. Belis, E. Puljak, P. Barkoutsos, G. Dissertori, M. Grossi, M. Pierini, F. Reiter, I. Tavernelli, and S. Vallecorsa, Quantum anomaly detection in the latent space of proton collision events at the lhc (2023), arXiv:2301.10780 [quant-ph] .
- Schuhmacher et al. [2023] J. Schuhmacher, L. Boggia, V. Belis, E. Puljak, M. Grossi, M. Pierini, S. Vallecorsa, F. Tacchino, P. Barkoutsos, and I. Tavernelli, Unravelling physics beyond the standard model with classical and quantum anomaly detection, Machine Learning: Science and Technology 4, 045031 (2023).
- Bravo-Prieto et al. [2022] C. Bravo-Prieto, J. Baglio, M. Cè, A. Francis, D. M. Grabowska, and S. Carrazza, Style-based quantum generative adversarial networks for Monte Carlo events, Quantum 6, 777 (2022).
- Delgado and Hamilton [2022] A. Delgado and K. E. Hamilton, Unsupervised quantum circuit learning in high energy physics, Physical Review D 106, 10.1103/physrevd.106.096006 (2022).
- Rousselot and Spannowsky [2024] A. Rousselot and M. Spannowsky, Generative invertible quantum neural networks, SciPost Phys. 16, 146 (2024).
- Rehm et al. [2023] F. Rehm, S. Vallecorsa, K. Borras, D. Krücker, M. Grossi, and V. Varo, Precise image generation on current noisy quantum computing devices, Quantum Science and Technology 9, 015009 (2023).
- Hoque et al. [2024] S. Hoque, H. Jia, A. Abhishek, M. Fadaie, J. Q. Toledo-Marín, T. Vale, R. G. Melko, M. Swiatlowski, and W. T. Fedorko, Caloqvae : Simulating high-energy particle-calorimeter interactions using hybrid quantum-classical generative models (2024), arXiv:2312.03179 [hep-ex] .
- Cerezo et al. [2021] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, Variational quantum algorithms, Nature Reviews Physics 3, 625 (2021).
- Peruzzo et al. [2014] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’Brien, A variational eigenvalue solver on a photonic quantum processor, Nature Communications 5, 4213 (2014).
- McClean et al. [2016] J. R. McClean, J. Romero, R. Babbush, and A. Aspuru-Guzik, The theory of variational hybrid quantum-classical algorithms, New Journal of Physics 18, 023023 (2016).
- [81] E. W. Weisstein, "Complete Graph." From MathWorld—A Wolfram Web Resource, last visited on 2/11/2023.
- Kasieczka et al. [2019a] G. Kasieczka, T. Plehn, J. Thompson, and M. Russel, Top quark tagging reference dataset, 10.5281/zenodo.2603256 (2019a).
- Kasieczka et al. [2019b] G. Kasieczka, T. Plehn, A. Butter, K. Cranmer, D. Debnath, B. M. Dillon, M. Fairbairn, D. A. Faroughy, W. Fedorko, C. Gay, L. Gouskos, J. F. Kamenik, P. Komiske, S. Leiss, A. Lister, S. Macaluso, E. Metodiev, L. Moore, B. Nachman, K. Nordström, J. Pearkes, H. Qu, Y. Rath, M. Rieger, D. Shih, J. Thompson, and S. Varma, The machine learning landscape of top taggers, SciPost Physics 7, 10.21468/scipostphys.7.1.014 (2019b).
- Kansal et al. [2022a] R. Kansal, J. Duarte, H. Su, B. Orzari, T. Tomei, M. Pierini, M. Touranakou, J.-R. Vlimant, and D. Gunopulos, Jetnet, 10.5281/zenodo.6975118 (2022a).
- Kansal et al. [2022b] R. Kansal, J. Duarte, H. Su, B. Orzari, T. Tomei, M. Pierini, M. Touranakou, J.-R. Vlimant, and D. Gunopulos, Particle cloud generation with message passing generative adversarial networks (2022b), arXiv:2106.11535 [cs.LG] .
- Fan et al. [2019] W. Fan, Y. Ma, Q. Li, Y. He, E. Zhao, J. Tang, and D. Yin, Graph neural networks for social recommendation (2019), arXiv:1902.07243 [cs.IR] .
- Zhang et al. [2021] X.-M. Zhang, L. Liang, L. Liu, and M.-J. Tang, Graph neural networks and their current applications in bioinformatics, Frontiers in Genetics 12, 10.3389/fgene.2021.690049 (2021).
- Reiser et al. [2022] P. Reiser, M. Neubert, A. Eberhard, L. Torresi, C. Zhou, C. Shao, H. Metni, C. van Hoesel, H. Schopmans, T. Sommer, and P. Friederich, Graph neural networks for materials science and chemistry, Communications Materials 3, 93 (2022).
- Zaheer et al. [2018] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. Salakhutdinov, and A. Smola, Deep sets (2018), arXiv:1703.06114 [cs.LG] .
- Gilmer et al. [2017] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl, Neural message passing for quantum chemistry (2017), arXiv:1704.01212 [cs.LG] .
- Pérez-Salinas et al. [2020] A. Pérez-Salinas, A. Cervera-Lierta, E. Gil-Fuster, and J. I. Latorre, Data re-uploading for a universal quantum classifier, Quantum 4, 226 (2020).
- Preskill [2018] J. Preskill, Quantum Computing in the NISQ era and beyond, Quantum 2, 79 (2018).
- Mitarai et al. [2018] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii, Quantum circuit learning, Phys. Rev. A 98, 032309 (2018).
- Schuld et al. [2019] M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, and N. Killoran, Evaluating analytic gradients on quantum hardware, Phys. Rev. A 99, 032331 (2019).
- Wierichs et al. [2022] D. Wierichs, J. Izaac, C. Wang, and C. Y.-Y. Lin, General parameter-shift rules for quantum gradients, Quantum 6, 677 (2022).
- Crooks [2019] G. E. Crooks, Gradients of parameterized quantum gates using the parameter-shift rule and gate decomposition (2019), arXiv:1905.13311 [quant-ph] .
- Shukla and Vedula [2024] A. Shukla and P. Vedula, An efficient quantum algorithm for preparation of uniform quantum superposition states, Quantum Information Processing 23, 10.1007/s11128-024-04258-4 (2024).
- Babbush et al. [2018] R. Babbush, C. Gidney, D. W. Berry, N. Wiebe, J. McClean, A. Paler, A. Fowler, and H. Neven, Encoding electronic spectra in quantum circuits with linear t complexity, Physical Review X 8, 10.1103/physrevx.8.041015 (2018).
- Vale et al. [2023] R. Vale, T. M. D. Azevedo, I. C. S. Araújo, I. F. Araujo, and A. J. da Silva, Decomposition of multi-controlled special unitary single-qubit gates (2023), arXiv:2302.06377 [quant-ph] .
- Saeedi and Pedram [2013] M. Saeedi and M. Pedram, Linear-depth quantum circuits for $n$-qubit Toffoli gates with no ancilla, Phys. Rev. A 87, 062318 (2013).
- da Silva and Park [2022] A. J. da Silva and D. K. Park, Linear-depth quantum circuits for multiqubit controlled gates, Physical Review A 106, 10.1103/physreva.106.042602 (2022).
- Nielsen and Chuang [2007] M. A. Nielsen and I. L. Chuang, Controlled operations, in Quantum Computation and Quantum Information (Cambridge University Press, 2007) Chap. 4.3 Controlled operations - Figure 4.10.
- Havlíček et al. [2019] V. Havlíček, A. D. Córcoles, K. Temme, A. W. Harrow, A. Kandala, J. M. Chow, and J. M. Gambetta, Supervised learning with quantum-enhanced feature spaces, Nature 567, 209 (2019).
- Schuld [2021] M. Schuld, Supervised quantum machine learning models are kernel methods (2021), arXiv:2101.11020 [quant-ph] .
- Alwall et al. [2014] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, JHEP 07, 079, arXiv:1405.0301 [hep-ph] .
- Ovyn et al. [2010] S. Ovyn, X. Rouby, and V. Lemaitre, Delphes, a framework for fast simulation of a generic collider experiment (2010), arXiv:0903.2225 [hep-ph] .
- de Favereau et al. [2014] J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi (DELPHES 3), DELPHES 3, A modular framework for fast simulation of a generic collider experiment, JHEP 02, 057, arXiv:1307.6346 [hep-ex] .
- Bierlich et al. [2022] C. Bierlich, S. Chakraborty, N. Desai, L. Gellersen, I. Helenius, P. Ilten, L. Lönnblad, S. Mrenna, S. Prestel, C. T. Preuss, T. Sjöstrand, P. Skands, M. Utheim, and R. Verheyen, A comprehensive guide to the physics and usage of pythia 8.3 (2022), arXiv:2203.11601 [hep-ph] .
- Pappadopulo et al. [2014] D. Pappadopulo, A. Thamm, R. Torre, and A. Wulzer, Heavy Vector Triplets: Bridging Theory and Data, JHEP 09, 060, arXiv:1402.4431 [hep-ph] .
- Cacciari et al. [2008] M. Cacciari, G. P. Salam, and G. Soyez, The anti-kt jet clustering algorithm, Journal of High Energy Physics 2008, 063 (2008).
- Cacciari et al. [2012] M. Cacciari, G. P. Salam, and G. Soyez, Fastjet user manual, The European Physical Journal C 72, 1896 (2012).
- Schuld et al. [2020] M. Schuld, A. Bocharov, K. M. Svore, and N. Wiebe, Circuit-centric quantum classifiers, Phys. Rev. A 101, 032308 (2020).
- Agarap [2019] A. F. Agarap, Deep learning using rectified linear units (relu) (2019), arXiv:1803.08375 [cs.NE] .
- [114] PyTorch Team, PyTorch.
- Fey and Lenssen [2019] M. Fey and J. E. Lenssen, Fast graph representation learning with PyTorch Geometric, in ICLR Workshop on Representation Learning on Graphs and Manifolds (2019).
- Bergholm et al. [2022] V. Bergholm, J. Izaac, M. Schuld, C. Gogolin, S. Ahmed, V. Ajith, M. S. Alam, G. Alonso-Linaje, B. AkashNarayanan, A. Asadi, J. M. Arrazola, U. Azad, S. Banning, C. Blank, T. R. Bromley, B. A. Cordier, J. Ceroni, A. Delgado, O. D. Matteo, A. Dusko, T. Garg, D. Guala, A. Hayes, R. Hill, A. Ijaz, T. Isacsson, D. Ittah, S. Jahangiri, P. Jain, E. Jiang, A. Khandelwal, K. Kottmann, R. A. Lang, C. Lee, T. Loke, A. Lowe, K. McKiernan, J. J. Meyer, J. A. Montañez-Barrera, R. Moyard, Z. Niu, L. J. O’Riordan, S. Oud, A. Panigrahi, C.-Y. Park, D. Polatajko, N. Quesada, C. Roberts, N. Sá, I. Schoch, B. Shi, S. Shu, S. Sim, A. Singh, I. Strandberg, J. Soni, A. Száva, S. Thabet, R. A. Vargas-Hernández, T. Vincent, N. Vitucci, M. Weber, D. Wierichs, R. Wiersema, M. Willmann, V. Wong, S. Zhang, and N. Killoran, Pennylane: Automatic differentiation of hybrid quantum-classical computations (2022), arXiv:1811.04968 [quant-ph] .
- Kingma and Ba [2017] D. P. Kingma and J. Ba, Adam: A method for stochastic optimization (2017), arXiv:1412.6980 [cs.LG] .
- Javadi-Abhari et al. [2024] A. Javadi-Abhari, M. Treinish, K. Krsulich, C. J. Wood, J. Lishman, J. Gacon, S. Martiel, P. D. Nation, L. S. Bishop, A. W. Cross, B. R. Johnson, and J. M. Gambetta, Quantum computing with Qiskit (2024), arXiv:2405.08810 [quant-ph] .