Energy-Efficient Large-Scale Random Quantum Circuit Simulation
Abstract
The random quantum circuit sampling problem serves as a practical benchmark for demonstrating quantum computational advantage. The performance of state-of-the-art classical computing on this problem is crucial for establishing and verifying the quantum advantage. Recent progress on classical algorithms, especially those based on tensor network methods, has significantly reduced the classical simulation time and challenged the claims of the first-generation quantum advantage experiments. In this study, we introduce an efficient classical post-processing technique to reduce the overall computational effort. We also integrate state-of-the-art hardware-efficient tensor processors to achieve an order of magnitude lower energy consumption than previous works. Our results push the baseline of classical computing, in both algorithmic time and energy, for the characterization of quantum advantage experiments.
I Introduction.
Quantum computers are commonly described in terms of three stages of development: quantum advantage, quantum simulators, and universal quantum computers [Preskill2018quantumcomputingin]. Quantum advantage, a milestone on the road toward more valuable applications, is reached when a quantum computer outperforms classical computers on a particular task [Preskill2012]. In 2019, Google claimed the first realization of quantum advantage with a quantum processor named Sycamore, which has 53 qubits and 20 layers of gates [Arute2019]. In 2020, the University of Science and Technology of China (USTC) reached quantum advantage using a photonic quantum computer prototype named Jiuzhang [doi:10.1126/science.abe8770]. USTC also achieved quantum advantage with the superconducting processors Zuchongzhi 2.0 and 2.1, containing 56 qubits with 20 layers of gates and 60 qubits with 24 layers of gates, respectively [Zuchongzhi2.0_PhysRevLett.127.180501, zuchongzhi2.12022240]. Sycamore performed the random quantum circuit sampling task, in which a circuit (as shown in Fig. 1) composed of random single-qubit and two-qubit gates produces a highly entangled state whose classical simulation is highly complex [aaronson_et_al:LIPIcs:2017:7527]. The quantum hardware produced one million samples with a fidelity of 0.2% in 200 s, whereas Google estimated that the same task would take 10,000 years on the Summit supercomputer, and that there is not enough random access memory to store such an entangled state [Arute2019]. The fidelity in the experiment is estimated by the linear cross-entropy benchmark (XEB). The XEB quantifies how close the distribution of bitstrings from a hardware experiment or a classical simulation is to the theoretical ideal distribution; under the white-noise assumption, which is believed to describe superconducting circuits well, the XEB is commonly used as an estimate of the fidelity. In quantum experiments, the XEB is believed to be a good approximation of the fidelity, whereas in classical simulation, high-XEB samples can be generated from low-fidelity states [gao2021limitations], which reduces the complexity of the simulation.

However, the classical algorithm used for Google's estimate was the Schrödinger–Feynman algorithm, which is not the most powerful classical approach. Tensor network algorithms have since been applied to the sampling problem in order to close the quantum advantage gap, and some of them achieve performance comparable to that of the quantum hardware [10.1145/3458817.3487399, Pan2022PhysRevLett.129.090502, PhysRevLett.128.030501]. For example, Liu et al. [10.1145/3458817.3487399] used the new Sunway supercomputer to simulate Sycamore and calculate one million correlated amplitudes in 304 s, work that was awarded the 2021 Gordon Bell Prize. Pan et al. [Pan2022PhysRevLett.129.090502] presented a tensor network algorithm that samples one million uncorrelated bitstrings from the same random quantum circuit as Sycamore in 15 hours using 512 GPUs, with a fidelity of 0.0037. With this algorithm, however, classical simulation is still about two orders of magnitude slower than the quantum computer. Gao et al. [gao2021limitations] used a different algorithm to solve the task in 0.6 s, but with only a small XEB; their algorithm applies only to a simplified version of Sycamore and cannot be used to simulate the same circuit as Sycamore.
Previous simulations of Sycamore thus have high computational complexity and consume a large amount of electric energy: Pan's work [Pan2022PhysRevLett.129.090502] requires 15 hours and 2688 kWh, and Liu's work [10.1145/3458817.3487399] requires 304 s and 2955 kWh. From this point of view, the quantum computer still holds the advantage. Here we use a new algorithm to realize a lower-complexity and lower-energy simulation, which provides stronger evidence that Sycamore did not achieve quantum advantage.
II Results.
In this article, we propose a new algorithm that simulates the Sycamore circuit in 1477 s with XEB = 0.002. By contrast, Sycamore needed 600 s in total to obtain 3 million samples with the same XEB. In terms of electricity consumption, Liu et al. [10.1145/3458817.3487399] ran the Sunway supercomputer at 35 MW for 304 s, whereas we use 1024 NVIDIA A100 GPUs with a total power of 409.6 kW for 1477 s, so their total electricity consumption is about 17.59 times ours. The total average power consumption of Sycamore, including chilled water production, is 26 kW, which means that the energy consumed during the 600 s required to acquire 3 million samples in their experiment is about 4.3 kWh [Arute2019]. Our energy consumption is therefore much closer to that of Sycamore than previous classical simulations. We summarize and compare the energy consumption and running time of recent works in Fig. 2. Our algorithm uses the cuTENSOR package developed by NVIDIA, which accelerates the GPU tensor contractions by roughly a factor of 8 compared with the PyTorch implementation used in Pan's work [Pan2022PhysRevLett.129.090502]. In addition, our algorithm has better parallelism, and we use twice as many GPUs as Pan's work [Pan2022PhysRevLett.129.090502]. Altogether, the simulation time decreases from the 15 hours of Pan's work to 1477 s.
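The energy figures above follow from simple power-times-time arithmetic; a minimal Python check, using only the power and duration values quoted in the text, is:

```python
# Energy figures quoted above, recomputed as power x time.
def energy_kwh(power_kw: float, seconds: float) -> float:
    """Electric energy in kWh consumed at constant power over a given duration."""
    return power_kw * seconds / 3600.0

sunway   = energy_kwh(35_000.0, 304.0)   # Liu et al.: 35 MW for 304 s    -> ~2956 kWh
ours     = energy_kwh(409.6, 1477.0)     # this work: 409.6 kW for 1477 s -> ~168 kWh
sycamore = energy_kwh(26.0, 600.0)       # Sycamore:  26 kW for 600 s     -> ~4.3 kWh

print(f"ratio Sunway / ours = {sunway / ours:.2f}")   # ~17.59
```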

Our algorithm consists of two main steps: approximate simulation followed by post-processing. We use the tensor network method to simulate the quantum circuit; that is, we map the random quantum circuit onto a tensor network, and the final-state amplitudes are obtained by contracting the network. In the first step, we compute 3 million bitstring groups, each containing 512 correlated bitstrings: we leave 9 qubits open, and the bitstrings within a group run over all possible configurations of those 9 qubits. In the second step, we post-process each correlated group by choosing the bitstring with the highest probability. Finally, we obtain 3 million uncorrelated samples with XEB = 0.002.
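To make the post-processing step concrete, the following is a minimal sketch of the per-group selection; the array names, random stand-in amplitudes, and toy sizes are illustrative only, whereas the actual run uses the amplitudes produced by the tensor network contraction for 3 million groups:

```python
import numpy as np

# Toy illustration of the post-processing step: from every group of 512 correlated
# bitstrings (all assignments of the 9 open qubits), keep the single bitstring whose
# approximate probability is largest.
rng = np.random.default_rng(0)
n_groups, group_size = 1000, 512              # the real run uses 3 million groups

# Hypothetical stand-in for the contraction output: one amplitude per open-qubit assignment.
amplitudes = rng.normal(size=(n_groups, group_size)) + 1j * rng.normal(size=(n_groups, group_size))
approx_probs = np.abs(amplitudes) ** 2

chosen = approx_probs.argmax(axis=1)          # index of the selected bitstring in each group
open_qubit_bits = [np.binary_repr(i, width=9) for i in chosen]   # 9 open-qubit values per sample
```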
The approximate simulation follows the idea of Pan's work [Pan2022PhysRevLett.129.090502]: we contract only a fraction of the tensor network. This significantly reduces the computational complexity at the expense of a decrease in fidelity. In the random quantum circuit sampling problem, the fidelity is estimated by the XEB, and the XEB ($\chi$) is defined as:
$$\chi = \frac{N}{M}\sum_{s\in\{s\}} p(s) - 1, \qquad (1)$$
where $N$ is the size of the total state space; for an $n$-qubit circuit, $N = 2^n$. $\{s\}$ is the set of sample bitstrings drawn directly from the probability distribution of the approximate final state, and $M$ is the size of this set. $p(s)$ is the ideal probability of bitstring $s$. If the sample set comes from the uniform random distribution, then $\chi = 0$; if the sample set comes from the probability distribution of the ideal random quantum circuit final state, then $\chi = 1$. In this sense, $\chi$ can be regarded as an estimate of the fidelity [boixo_characterizing_2018].
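A minimal sketch of this estimator, with a toy Porter-Thomas-like distribution standing in for the ideal circuit distribution, illustrates the two limiting cases (uniform sampling gives $\chi \approx 0$, ideal sampling gives $\chi \approx 1$):

```python
import numpy as np

def linear_xeb(ideal_probs_of_samples: np.ndarray, n_qubits: int) -> float:
    """Linear cross-entropy benchmark of Eq. (1): chi = (N / M) * sum_s p(s) - 1."""
    N = 2 ** n_qubits                        # size of the Hilbert space
    M = ideal_probs_of_samples.size          # number of samples
    return N / M * float(ideal_probs_of_samples.sum()) - 1.0

# Toy 10-qubit "ideal" distribution (Porter-Thomas-like, for illustration only).
rng = np.random.default_rng(1)
N = 2 ** 10
p = rng.exponential(1.0 / N, size=N)
p /= p.sum()

uniform_samples = rng.integers(N, size=100_000)       # sampling uniformly      -> chi ~ 0
ideal_samples   = rng.choice(N, size=100_000, p=p)    # sampling from p itself  -> chi ~ 1
print(linear_xeb(p[uniform_samples], 10), linear_xeb(p[ideal_samples], 10))
```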
One way to improve $\chi$ is to post-process (pp) the bitstring set $\{s\}$ by selecting the bitstrings that have higher probabilities in the ideal distribution $p$. However, we cannot directly access $p$ in the first step, because we only compute a small fraction of the whole tensor network and only have an approximate probability distribution $\hat{p}$ that is positively correlated with $p$. This correlation allows us to identify the bitstrings with higher values of $p$, and with this post-processing algorithm we improve $\chi$ by a factor of 6.24. We explain the relation between $\hat{p}$ and $p$ in more detail later.
For the first time, our study delves into the correlation between the approximate probability $\hat{p}$ and the ideal probability $p$, leveraging this insight to introduce a post-processing method for the random quantum circuit sampling problem. Our approach combines approximate simulation with post-processing: the former reduces the computational and memory complexity, and the latter increases the XEB. Consequently, we achieve the simulation of large-scale quantum circuits with significantly reduced simulation time and electric energy.
III Methods.
There exists a positive correlation between the approximate probability $\hat{p}$ and the ideal probability $p$ of the bitstrings. To demonstrate this correlation, we simulate a quantum circuit consisting of 30 qubits and 14 layers of gates using both approximate and exact simulation methods. Our theoretical analysis confirms that a higher fidelity of $\hat{p}$ results in a closer approximation to $p$, which agrees with the numerical results. We calculate the Pearson linear correlation coefficient $r$ [Pearson1895] to quantify the correlation. When simulating 1/2 of the circuit, we obtain a fidelity close to 1/2 together with the corresponding correlation coefficient; similarly, when simulating 1/256 of the circuit, we obtain a fidelity close to 1/256 and its corresponding correlation coefficient, and in both cases $r$ is clearly positive. These findings indicate a positive correlation between $\hat{p}$ and $p$, implying that selecting bitstrings with higher approximate probability increases the likelihood of higher exact probability. This property can be effectively exploited to enhance the XEB. We provide empirical evidence supporting this proposition in Fig. 3(a) and (b), which depict the correlation.
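The correlation itself is computed in the standard way; the sketch below uses a toy mixture model for the approximate probability, an assumption made only for illustration and not the actual relation between $\hat{p}$ and $p$ in our simulation:

```python
import numpy as np

def pearson_r(p_hat: np.ndarray, p: np.ndarray) -> float:
    """Pearson linear correlation coefficient between approximate and exact probabilities."""
    return float(np.corrcoef(p_hat, p)[0, 1])

# Toy mixture model (assumption): the approximate probability keeps a weight f of the
# ideal Porter-Thomas probability and mixes in an independent component with weight 1 - f.
rng = np.random.default_rng(2)
N = 2 ** 20
p_exact = rng.exponential(1.0 / N, size=N)
for f in (1 / 2, 1 / 256):
    p_hat = f * p_exact + (1 - f) * rng.exponential(1.0 / N, size=N)
    print(f"f = {f:.5f}  ->  r = {pearson_r(p_hat, p_exact):.4f}")
```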
In Fig. 3(c), we observe a remarkable and consistent improvement of the XEB through the post-processing step. The ratio between the post-processed XEB(pp) and the fidelity, obtained by selecting the top-$K$ bitstrings with the highest approximate probabilities from a total of $N$ bitstrings, closely matches the theoretical value of $\ln(N/K)$, which indicates how much the XEB can be improved. Moreover, the improvement exhibits a minimal fluctuation of only 2.517%. We provide a more detailed analysis of the relationship between XEB(pp) and the post-process coefficient in subsequent sections. In addition, the fidelity of the numerical results is highly consistent with the estimated fidelity, which corresponds to the fraction of the tensor network taken into consideration. We study several cases with estimated fidelity $2^{-n_b}$, where $n_b$ represents the number of broken edges. It is important to note that a broken edge is similar to a sliced edge, except that only a single value of the index is kept, without summation over the other values. This consistency implies that the tensor network can be divided into distinct parts, with each part contributing equally to the fidelity; consequently, the total fidelity is determined by the number of contracted parts [Pan2022PhysRevLett.129.090502].
The XEB improvement through post-processing can be quantified as follows. The ideal probability $p$ follows the Porter-Thomas distribution [boixo_characterizing_2018] given by $\Pr(p) = N e^{-Np}$. By choosing the top-$K$ largest probabilities for calculating the XEB, we obtain (as shown in the Supplemental Material) $\chi_{\mathrm{pp}} = \ln k$ for the ideal distribution, where $N$ represents the total number of bitstrings (also the size of the Hilbert space), and $k = N/K$ is referred to as the post-process coefficient.
Considering that the approximate probability $\hat{p}$ approximates the ideal probability $p$ with fidelity $f$, the XEB of the $K$ largest values is given by:
$$\chi_{\mathrm{pp}} = f \ln k, \qquad k = \frac{N}{K}. \qquad (2)$$
This equation holds statistically, indicating that $\ln k$ serves as the scale factor between the top-$K$ XEB and the fidelity. Moreover, this factor is greater than 1 when $k > e$, where $e$ is the natural constant. For instance, in the case where $N = 512$ and $K = 1$ (i.e., $k = 512$), Equation 2 predicts that post-processing improves the XEB by a factor of $\ln 512 \approx 6.24$, a result consistent with the numerical outcome observed in Fig. 3(c). To further verify Equation 2, we calculate XEB(pp) for different top-$K$ values, as shown in Fig. 4.
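The ideal-case statement behind Equation 2 can also be checked with a quick Monte Carlo experiment over Porter-Thomas distributed probabilities; the sizes below are toy values chosen only for illustration:

```python
import numpy as np

# Monte Carlo check of Eq. (2) with f = 1: the XEB of the top-K out of N
# Porter-Thomas distributed probabilities approaches ln(k), with k = N / K.
rng = np.random.default_rng(3)
N, K = 2 ** 20, 2 ** 11                      # toy sizes; k = N / K = 512
k = N / K

xeb_pp = []
for _ in range(20):
    p = rng.exponential(1.0 / N, size=N)     # Porter-Thomas: Pr(p) = N exp(-N p)
    top = np.partition(p, N - K)[-K:]        # the K largest probabilities
    xeb_pp.append(N * top.mean() - 1.0)      # Eq. (1) restricted to the selected bitstrings

print(f"numerical XEB(pp) = {np.mean(xeb_pp):.3f},  ln(k) = {np.log(k):.3f}")   # both ~6.24
```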

For small values of $k$, the numerical results deviate from the theoretical curve due to the limited sample size, which leads to significant statistical fluctuations. However, when $k$ is larger than 8, the numerical results align closely with the theoretical curve. Therefore, Equation 2 holds once the sample size exceeds 256. We define the minimum value of $k$ that ensures the validity of Equation 2 as the critical post-process coefficient $k_c$. In summary, the post-process coefficient $k$ should fall within the range between $k_c$ and $N$.
In the approximate simulation step for Sycamore (53 qubits and 20 layers of gates), we employ a tensor network algorithm that builds upon previous work [Pan2022PhysRevLett.129.090502]. The key enhancement lies in our improved capacity for parallel computation on classical computers, achieved by dividing the overall task into smaller subtasks. We use the slicing method to partition the large task: specific indices of the tensor network are fixed, and we sum over all their possible values. Each subtask has only a modest computational cost in FLOPS, and although there are a large number of subtasks, they are all independent and can be computed concurrently. On an NVIDIA A100 GPU, the computation of a single subtask requires 3.9 seconds. Furthermore, we observe an inverse proportionality between the total computation time and the amount of computational resources, as depicted in Fig. 5. This highlights the scalability and efficiency of our approach when additional computing resources are allocated to the task.
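A back-of-the-envelope wall-time model for this embarrassingly parallel structure follows directly from the 3.9 s per-subtask time; the subtask count below is a hypothetical stand-in, since the true count is fixed by the slicing scheme:

```python
import math

def wall_time_seconds(n_subtasks: int, n_gpus: int, t_subtask: float = 3.9) -> float:
    """Ideal wall time: independent ~3.9 s subtasks spread evenly over the available GPUs."""
    return math.ceil(n_subtasks / n_gpus) * t_subtask

# n_subtasks is a hypothetical stand-in, chosen only to illustrate the inverse scaling.
for gpus in (256, 512, 1024, 10_000):
    print(gpus, wall_time_seconds(n_subtasks=400_000, n_gpus=gpus))
```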

The slicing algorithm is a key method employed to reduce the memory requirements of the tensor network contraction. In addition, we implement the Einstein summation with the cuTENSOR package, which dynamically selects the optimal tensor contraction algorithm to maximize memory utilization. As a result of these optimizations, we are able to simulate the random circuit sampling experiment with a post-processed XEB(pp) of 0.002 in just 156.1 seconds using 10,000 NVIDIA A100 GPUs. The complexity of this task is summarized in Tab. 1. In summary, our algorithm exhibits a space complexity of $O(d\,2^{w})$, where $d$ represents the size of the data type and $w$ corresponds to the targeted contraction tree width, which can be specified in our slicing algorithm. The time complexity is characterized by the number of gate layers, the number of qubits, the fidelity $f$, and the post-process coefficient $k$. One notable advantage of our algorithm is the reduction in time complexity achieved through the approximate simulation and the post-processing algorithm. Additionally, our algorithm allows complete control over the space complexity.
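To illustrate the slicing idea on a toy network (not the actual Sycamore contraction), the sketch below contracts a small tensor network either in one shot or by fixing a single index and summing the independent slices:

```python
import numpy as np

# Toy illustration of slicing: contract sum_{ijkl} A_ij B_jk C_kl D_li either in one
# shot, or by fixing ("slicing") the index j and summing the independent partial results.
rng = np.random.default_rng(4)
d = 8                                        # toy bond dimension
A, B, C, D = (rng.normal(size=(d, d)) for _ in range(4))

full = np.einsum("ij,jk,kl,li->", A, B, C, D)

sliced = 0.0
for j in range(d):                           # each slice is a smaller, independent contraction
    sliced += np.einsum("i,k,kl,li->", A[:, j], B[j, :], C, D)

print(np.allclose(full, sliced))             # True: slicing trades memory for more subtasks
```

A broken edge, as used in the approximate simulation above, corresponds to keeping only one term of such a loop instead of summing over all of them, which is what reduces the fidelity.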
To verify that the fidelity corresponds to the fraction of the tensor network considered during contraction, we use the exact amplitudes from a previous study [liu2022validating] to calculate the numerical fidelity of the simulation of the 53-qubit, 20-layer circuit. Comparing the results, we find that the numerical fidelity closely matches the estimated fidelity. This close agreement provides empirical evidence supporting the correctness of our algorithm.
In this study, we employed a computational infrastructure comprising 1024 NVIDIA A100 GPUs provided by the Shanghai Artificial Intelligence Laboratory. With this setup, we completed the simulation of the random circuit sampling experiment in 1477.18 seconds, as depicted in Fig. 5. Based on our estimates, employing 10,000 A100 GPUs would further reduce the simulation time to 156.1 seconds. This indicates that our classical simulation surpasses the performance of the Sycamore quantum computer, suggesting that establishing quantum advantage requires larger-scale quantum experiments.
Table 1. Summary of the computational cost of each subtask, the fidelity, and the efficiency of the full simulation.
IV Discussion.
Quantum advantage is a significant milestone in the field of quantum computing. While advancing quantum experimental techniques, it is equally crucial to develop classical algorithms tailored to practical experimental setups in order to truly establish the boundary of classical computation. With the rapid development of classical algorithms such as tensor networks, the need for more powerful quantum computers becomes increasingly evident if a solid quantum advantage is to be achieved. Establishing a reliable criterion for determining quantum advantage remains an important open question in the field. The development of such a standard is crucial for accurately assessing the capabilities and potential applications of quantum systems, as well as for guiding future research and technological advancements.
Appendix A The theoretical value of the XEB after post-processing
We assume that the bitstring probability follows the Porter-Thomas distribution
$$\Pr(p) = \frac{1}{p_0} e^{-p/p_0},$$
with $p_0 = 1/N$, $N = 2^n$, and $n$ the number of qubits. Let us set $x = Np$; then we have $\Pr(x) = e^{-x}$. If we select the $K$ bitstrings with the top-$K$ probabilities out of all $N$ bitstrings, we can determine the minimum rescaled probability $x_{\min}$ among the selected bitstrings from
$$N \int_{x_{\min}}^{\infty} e^{-x}\, \mathrm{d}x = N e^{-x_{\min}} = K,$$
which gives $x_{\min} = \ln(N/K)$; we set $k = N/K$. It is easy to check that the expected fraction of the total probability carried by the selected bitstrings is
$$\mathbb{E}\Big[\sum_{s \in \mathcal{T}_K} p(s)\Big] = N \int_{x_{\min}}^{\infty} \frac{x}{N}\, e^{-x}\, \mathrm{d}x = \frac{K}{N}\,(1 + \ln k),$$
where $\mathcal{T}_K$ denotes the indices of the top-$K$ bitstrings. Then we can calculate the XEB of these top-$K$ bitstrings,
$$\chi_{\mathrm{pp}} = \frac{N}{K} \sum_{s \in \mathcal{T}_K} p(s) - 1 = (1 + \ln k) - 1 = \ln k.$$
If the state is an approximation with fidelity $f$, then we expect that the XEB is
$$\chi_{\mathrm{pp}} = f \ln k,$$
so we have proven Equation 2.