Received 17 August 2023; revised 24 November 2023; accepted 11 December 2023; date of publication 15 December 2023;
date of current version 12 January 2024.
Digital Object Identifier 10.1109/TQE.2023.3343625
The work of Travis LeCompte was supported by a Louisiana Board of Regents Graduate Fellowship. This work was supported in part by the National Science Foundation (NSF) under Grant OIA-2019511, in part by Enabling Practical-scale Quantum Computing (EPiQC), an NSF Expedition in Computing, under Award CCF-1730449, in part by Software-Tailored Architecture for Quantum (STAQ) under Award NSF Phy-1818914, in part by the US Department of Energy (DOE) Office of Advanced Scientific Computing Research, Accelerated Research for Quantum Computing Program, in part by the NSF Quantum Leap Challenge Institute for Hybrid Quantum Architectures and Networks under NSF Award 2016136, in part by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, in part by the Army Research Office under Grant W911NF-23-1-0077, and in part by the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
Corresponding author: Lu Peng (email: [email protected]).
Quantum Vulnerability Analysis to Guide Robust Quantum Computing System Design
Abstract
While quantum computers provide exciting opportunities for information processing, they currently suffer from noise during computation that is not fully understood. Incomplete noise models have led to discrepancies between quantum program success rate (SR) estimates and actual machine outcomes. For example, the estimated probability of success (ESP) is the state-of-the-art metric used to gauge quantum program performance. ESP suffers from poor prediction accuracy because it fails to account for the unique combination of circuit structure, quantum state, and quantum computer properties specific to each program execution. Thus, an urgent need exists for a systematic approach that can elucidate various noise impacts and accurately and robustly predict quantum computer success rates, with an emphasis on application and device scaling. In this article, we propose quantum vulnerability analysis (QVA) to systematically quantify the error impact on quantum applications and address the gap between current SR estimators and real quantum computer results. The QVA determines the cumulative quantum vulnerability (CQV) of the target quantum computation, which quantifies the quantum error impact based on the entire algorithm applied to the target quantum machine. When evaluated with well-known benchmarks on three 27-qubit quantum computers, CQV-based success estimation outperforms the state-of-the-art ESP prediction technique, achieving on average six times lower relative prediction error, with best cases of 30 times, for benchmarks with a real SR above 0.1%. A direct application of QVA is also provided that helps researchers choose a promising compiling strategy at compile time.
Index Terms:
Quantum computing, resilience, success rate (SR), vulnerability analysis.
I Introduction
Excitement surrounds quantum computation due to the great theoretical potential both fault-tolerant [44, 16] and near-term [30] quantum computers have to solve high-impact problems. By carefully leveraging quantum superposition, interference, and entanglement, quantum computers are projected to be applied to computational tasks that are currently intractable on today’s most powerful supercomputers. Recent progress in quantum hardware has allowed many prototype quantum computers to emerge, and superconducting circuits [5, 6, 45] are gaining popularity as one of the forefront quantum computing technologies. Compared to other quantum hardware, superconducting quantum computers have advantages in scalability, microwave control, and nanosecond-scale gate operation [10, 9, 7].
While promising, superconducting quantum architectures are currently too error-prone to support programs targeted for large-scale applications. Near-term superconducting quantum computers suffer from various noise channels that degrade both quantum information and computation. This noise is difficult to fully characterize and causes retention and operational errors that vary significantly both across a chip and between quantum computers [46]. Quantum error correction was developed to accommodate occasional errors during quantum computation, but current noisy intermediate-scale quantum (NISQ) era machines do not have the operator precision or device scale to implement this routine [40]. Therefore, NISQ quantum machines perform noisy operations, as errors can happen on any physical qubit at any time during program execution according to error rates characterized by randomized benchmarking. Fig. 1 shows the full quantum computing flow that transforms a research idea into an experiment on a real quantum computer.
Improving circuit success rate (SR) is a popular research topic, but few studies examine the quantum computer and circuit-dependent errors that result in specific SR. A robust noise model is essential for accurate SR prediction and developing SR boost technologies such as error reduction, bypass, and compression. Unfortunately, existing methods like statistical fault injection [41, 15, 37] and estimated probability of success (ESP) [47] have accuracy and scaling challenges. These models neglect the impact of circuit composition on reported error rates from randomized benchmarking. Therefore, this article proposes a systematic approach to explain different errors and provide accurate SR prediction.
While studying ESP, we found that there are gates whose impact on the output is much more significant than their error rates suggest. We propose quantum vulnerability analysis (QVA) to address these inconsistent gate error behaviors. QVA is a systematic methodology that performs error modeling based on randomized benchmarking to determine the success of a compiled circuit on a targeted quantum computer: the vulnerability metric cumulative quantum vulnerability (CQV) quantifies the compiled circuit performance, and 1-CQV provides an SR estimation that closely models actual quantum computer performance. By performing the QVA during compilation, researchers have a clear and direct view of how gate error behaviors couple to the circuit structure and influence runtime performance. The blue box in Fig. 1 reveals the brief structure of the QVA and how CQV would be used in an experimental flow to estimate the quantum computer SR before real machine evaluation.
We extensively validated the accuracy and stability of QVA in predicting the SR of any given compiled circuits. This assertion is backed by over 160 K experiments conducted on three state-of-the-art 27-qubit IBM quantum computers [1], spanning six distinguished algorithms with varying qubit sizes. The compiled circuits were produced using a variety of compilation strategies and incorporated multiple error-mitigation techniques to address the diverse noise profiles encountered over months of experimentation. This comprehensive approach bolsters the extendibility of our analysis to a broader range of circuits.
All results show that QVA maintains a stable SR estimation via 1-CQV, providing on average 6x improvements (30x in the best case) in relative prediction error over the widely implemented ESP estimator. Additionally, we conduct a case study to provide the quantum community with a direct application of the presented method by choosing promising compiling strategies at compile time. Further, our QVA module provides instructions for reconstructing the model based on a particular quantum device’s topology and error behavior. It ensures its compatibility with various superconducting quantum computers, irrespective of their vendor or technology. Below are the contributions of our article:
•

We design and build a lightweight error-modeling scheme based on QVA and are the first to quantify the impact of error rates propagating across CNOT gates using the error rates reported by randomized benchmarking.
•

We implement and evaluate a framework that calculates the unique CQV for an algorithm/machine pairing. We show that the proposed SR estimator, 1-CQV, outperforms the current state-of-the-art SR estimator, ESP, with on average 6x lower relative prediction error.
•

We highlight the scaling potential of CQV: as an algorithm reaches and surpasses the quantum volume of a device, CQV-based methods achieve more than 10x improvement in relative prediction error compared with the state of the art.
II Background and Related Work
II-A NISQ-Era Quantum Basics and Error Characterization
The flow for generating a quantum executable from a given algorithm and evaluating it on a quantum chip is illustrated in Fig. 1. The quantum compiler will be given information about the target quantum chip and compiler strategy, such as optimization levels, initial layout method, mapping method, etc. Based on that input, the compiler will follow all intermediate compiling steps to generate a compiled circuit for execution on a quantum computer.
When operating a superconducting quantum computer, operations may fail due to poor environmental conditions, inaccurate control, state decoherence, and more. The error rate associated with an operation (Operational Error Rate) is closely estimated by randomized benchmarking, described in detail below, which approximates the extent of failures without revealing the exact source. The lifetime of a qubit, or its ability to retain a quantum state, is determined by its relaxation time (T1) and decoherence time (T2); failures of this kind are called Retention Errors. The relaxation and decoherence times represent the qubit's average time to retain its energized and superposed states, respectively.
Randomized Benchmarking: Operator performance must be accurately characterized to use a quantum computer effectively. Unfortunately, quantum computer noise models are complex, and it is unscalable to completely characterize system noise via process tomography [39]. In addition to scaling considerations, characterization procedures must separate noise associated with quantum gates from errors stemming from state preparation and measurement to ensure that computation quality can be adequately estimated. Randomized benchmarking [27, 26] is a method of assessing quantum computer hardware that achieves an average error rate for operations through a process known as twirling. At a high level, twirling implements long sequences of random gate operations and fits the resulting data to a curve to determine the average error. Because randomized benchmarking considers only the exponential decay of sequences of random gates, sensitivity to measurement noise in the resulting average error is minimized. Meanwhile, the T1 and T2 errors on gate operation are included as part of the gate's error rate reported by randomized benchmarking [36]. Randomized benchmarking, while applicable to systems of any dimension, is predominantly employed for single-qubit or two-qubit gates [11]. This preference arises from the exponential growth in the number of required gates as the system's dimension increases, making the method less practical for larger systems. Despite this, the technique can be adapted to pinpoint errors due to unintended crosstalk [12]. Notably, IBM utilizes randomized benchmarking in each calibration cycle to ascertain the error rates of their quantum computers' single- and two-qubit gates, as reflected in the system properties.
F(m) = A · p^m + B    (1)

Here, m is the length of the random gate sequence, the constants A and B absorb state-preparation and measurement errors, and the fitted depolarizing parameter p determines the reported average error rate.
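As a sketch, the average error rate can be recovered from measured sequence fidelities by a log-linear fit of the standard decay model F(m) = A · p^m + B. The helper below is illustrative only (it assumes the fully depolarized baseline B = 1/d and noiseless fit data) and is not the calibration software's actual implementation:

```python
import numpy as np

def fit_rb_decay(seq_lengths, survival_probs, n_qubits=1):
    """Fit the RB decay F(m) = A * p**m + B and return the average
    error rate r = (1 - p) * (d - 1) / d for system dimension d.
    Assumes the fully depolarized baseline B = 1/d."""
    d = 2 ** n_qubits
    B = 1.0 / d
    m = np.asarray(seq_lengths, dtype=float)
    # log(F - B) = log(A) + m * log(p): a straight line in m
    slope, _ = np.polyfit(m, np.log(np.asarray(survival_probs) - B), 1)
    p = np.exp(slope)
    return (1.0 - p) * (d - 1) / d
```

Because only the slope of the log-decay is used, the estimate is insensitive to the state-preparation and measurement constants A and B, which is the key property of randomized benchmarking described above.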
II-B Related Work on Quantum SR Estimation
Early quantum computing research was focused on designing quantum hardware [28], instruction set architecture [8], and quantum computer microarchitecture [10, 9, 32, 33]. Afterward, the temporal and spatial noise variation challenges of superconducting quantum computers were studied to discover mapping and allocation-enhanced compilations that make algorithm execution more robust to diverse errors [46, 31, 20, 21, 22]. Additionally, works such as [18] contribute to the understanding of how noise, fidelity, and computational cost interplay in quantum processing, enriching the broader discourse on quantum system performance. Currently, the focus of quantum computing is on optimizing the success rate by applying different compiler strategies, such as mitigating the effect of errors by enhancing the quantum instructions [43, 14], decreasing measurement errors [48, 13], mitigating crosstalk errors [34, 7], combining pre- and post-execution software approaches to improve performance [47, 19], and compiling with specific constraints [23, 7].
While many studies have improved quantum program performance, two areas have been underexplored: 1) accurate SR prediction for specific compiled circuits and 2) better modeling of error/algorithm relationships in current quantum systems. Regarding SR prediction methods, popular alternatives to noisy quantum computer simulations include using machine learning for SR prediction, which treats the entire computation as a black box [25], developing detailed noise models, and methods like statistical fault injection [17, 37, 41] and estimated success probability (ESP) [47, 46, 36].
s_g = 1 − e_g,   s_m = 1 − e_m    (2)

ESP = ∏_{i=1}^{N_g} s_{g_i} × ∏_{j=1}^{N_m} s_{m_j}    (3)

Here, e_g and e_m are the gate and measurement error rates reported by calibration, s_g and s_m are the corresponding success rates, and N_g and N_m are the numbers of gate and measurement operations in the compiled circuit.
III Motivation
III-A Limitation of Current SR Estimator
Machine Learning-based: The Machine Learning-based Success Rate (SR) prediction method, referenced in [25], simplifies quantum computations into a black box model. This approach can blur distinctions between circuits with varied gate parameters and requires retraining when adapting to different quantum machine sizes. Data collection for larger machines is resource-intensive, especially given the method’s requirement to gather data within a single calibration period. Furthermore, its validation, limited to specific compiled circuits, may not account for the complexities of larger machines or diverse error-mitigation strategies.
Fault Injection-based: The statistical fault injection method employs classical computers to simulate full-state quantum computations. During this process, errors are systematically injected into each basis gate based on specific triggering probabilities [15]. A very recent work [37] employs fault injection methods to evaluate quantum vulnerabilities. This study proposes the use of quantum fault injection to scrutinize circuit vulnerabilities, with a specific emphasis on radiation-induced errors. We note, however, that the fault injection approach may face challenges when applied to large machines. This complexity is accentuated when sampling a broad spectrum of circuits that exhibit variations in qubit size, circuit depth, and concurrent fault counts, especially in the context of larger machines and intricate algorithms. Moreover, the quantum vulnerability factor (QVF) metric proposed in [37] can face challenges when assessing circuits approaching the quantum supremacy regime, typically seen on machines with 50+ qubits [4]. The QVF calculation hinges on the contrast function, which necessitates prior knowledge of P(A), the expected correct state. In the absence of this knowledge, the correct state must be determined through a noise-free simulation. Relying on classical computing for full-state quantum circuit simulation poses a significant challenge, as such methods approach the limits of classical computational capabilities. Alternatively, our research introduces a different approach, formulating a noise model addressing one- and two-qubit gate errors, measurement errors, and crosstalk errors. Our methodology is crafted with a focus on scalability.
Estimated Success Probability-based: The estimated success probability (ESP), shown in Equation 3, predicts the probability of a correct output trial by multiplying the success rates, or fidelities, of all gate (s_g) and measurement (s_m) operations, each obtained as one minus the corresponding gate (e_g) or measurement (e_m) error rate via Equation 2. While ESP considers all circuit operations, the product treats every gate error as contributing equally to the final SR estimation, even though some gate errors influence the final circuit outcome more than others. As a basic demonstration of the inaccuracy of ESP modeling, the gate success products in Equation 3 commute, whereas most operations in quantum circuits are fixed in order [29]. Because of such positional differences of gates in the compiled circuit, gate errors contribute differently depending on their propagation path to the measurement, influencing the estimated success rate differently than their raw error rates suggest. The detailed analysis is presented in Sec. III-B. On the other hand, the simplicity of the ESP metric has made it frequently applied in quantum compiler design and circuit optimization efforts as a method to predict quantum program success on quantum computers [47, 2, 38, 3, 32, 36, 24]. However, if a better SR estimator were available, the effectiveness of the aforementioned quantum computer optimizations could potentially improve significantly.
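For concreteness, a minimal sketch of the ESP computation of Equations 2 and 3 (illustrative code; the error-rate lists stand in for values read from device calibration data):

```python
def esp(gate_errors, meas_errors):
    """Estimated success probability (Eq. 3): the product of one minus
    each gate error rate and each measurement error rate (Eq. 2).
    Note the product is order-independent, which is exactly the
    limitation discussed above."""
    prob = 1.0
    for e_g in gate_errors:
        prob *= (1.0 - e_g)   # gate success rate s_g = 1 - e_g
    for e_m in meas_errors:
        prob *= (1.0 - e_m)   # measurement success rate s_m = 1 - e_m
    return prob
```

For example, two gates with 1% and 2% error plus a 3% measurement error give ESP = 0.99 · 0.98 · 0.97 ≈ 0.941, regardless of where those gates sit in the circuit.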
We used statistical fault injection, ESP_CP [46], and ESP, as illustrated in Fig. 2, to estimate the SR of the Bernstein-Vazirani (BV) and Quantum Fourier Transform (QFT) algorithms across varying scales on three distinct quantum computers. The BV algorithm was selected due to its relatively shallow depth, allowing us to demonstrate the effects of significant algorithm size increments. Conversely, the QFT, characterized by deeper circuits, exhibits only modest size increases. For a comprehensive evaluation, both algorithms were tested with three different input sizes, yielding actual success rates ranging from 70% to 5%. ESP_CP, a variant of ESP, focuses solely on multiplying the success rates of gates situated on the critical path. Unfortunately, the predicted outcomes from both methods show significant deviation from the actual SR, with discrepancies ranging from 25% to 60% and relative error rates ranging between 70% and 470%. Further compounding the issue, both methods produce SR estimations that diverge sharply from the real machine results as the circuits increase in size, indicating that they scale poorly.
III-B Some Errors Matter, While Others Do Not
The ESP model, upon closer examination, presents potential sources of inaccuracy in SR prediction. As illustrated in Fig. 3, the ESP model accurately predicts the SR only for the middle circuit. For a compiled circuit, the final SR is the product of the individual success rates of the measured qubits. Therefore, only errors that propagate to a measurement gate influence the final output.
For scenarios akin to the first circuit, the ESP model tends to overestimate error impact. Here, the error from the red-boxed gate does not propagate to any measurement gate, so it only degrades the state of its own qubit and is never captured by a subsequent measurement.
In contrast, for situations resembling the third circuit, the ESP model tends to underestimate error impact. Errors originating from the two red-boxed gates not only affect the measurements of their associated qubits but also influence other qubits via the CNOT gate. Despite this, the gate-error-to-success-rate transformation outlined in Equation 2 accounts for each error only once. Any additional impact on other qubit measurements arising from error propagation is disregarded by the ESP model.
Such observations underscore that certain errors exert more influence on the final output than others, highlighting the need for a refined approach. This analysis emphasizes the importance of taking into account circuit structure and architectural vulnerabilities when estimating SR on actual quantum computers.
IV Quantum Vulnerability Analysis
IV-A QVA Overview
In Section III-B, we showed that the SR of a quantum computation is the product of the success rates of the measured qubits. Each measured qubit's correctness is influenced by the gate errors propagated to it. Our proposed quantum vulnerability analysis (QVA) is a systematic methodology that follows this error propagation. The QVA estimates the vulnerability of the compiled circuit by performing error modeling based on the error rates from randomized benchmarking calibration.
The QVA generates the cumulative quantum vulnerability (CQV) metric. Definition of CQV: CQV captures the compiled circuit's vulnerability by predicting its failure rate (FR). We emphasize that the CQV does not predict the correct result but rather the probability of an incorrect result at runtime. Accordingly, 1 − CQV represents the estimated success rate of a compiled circuit on the target machine. In Fig. 4.a, we present the complete workflow of QVA.
IV-B CQV Determination
Circuit Array Generator: To understand the error propagation path within a quantum circuit during runtime, a connection between when an error occurs and how much it affects the compiled circuit must be established. For more granularity, we quantify the compiled circuit to a finer degree by representing the algorithm at the cycle level. Cycle-level representation for a quantum circuit is analogous to a classical electrical circuit diagram and replaces the previous analysis at the level of the complete compiled circuit. The Circuit Array Generator block further transforms the compiled circuit into a two-dimensional (physical qubit × cycle) array in which each element represents the attributes of a physical qubit at a given cycle, as shown in Fig. 4.b. The circuit array records each physical qubit's attributes in every cycle, including the gate type, gate error, associated virtual qubit, cumulative success rate, etc. Based on the cycle-level compiled circuit, a snapshot of the operating quantum chip at a given cycle can be linked with the corresponding cycle in the compiled circuit.
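A minimal sketch of such a circuit array follows (a hypothetical list-of-dicts layout; the attributes actually tracked by QVA are richer, and a real implementation would extract the gate schedule from a compiled Qiskit circuit):

```python
def build_circuit_array(n_qubits, n_cycles, gates):
    """Cycle-level representation: array[q][c] holds the attributes of
    physical qubit q at cycle c (gate name, error rate, cumulative
    success rate). `gates` is a list of (cycle, qubits, name, error)
    tuples; untouched cells default to an ideal identity gate."""
    array = [[{"gate": "id", "error": 0.0, "csr": 1.0}
              for _ in range(n_cycles)] for _ in range(n_qubits)]
    for cycle, qubits, name, err in gates:
        for q in qubits:
            array[q][cycle] = {"gate": name, "error": err, "csr": 1.0,
                               "partners": [p for p in qubits if p != q]}
    return array
```

The `partners` field links the two halves of a CNOT so that error propagation between paired qubits can be modeled cycle by cycle.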
Calculating 1 − CQV: The CQV methodology aims to predict the failure rate of a given compiled circuit on a designated quantum chip by effectively modeling errors based on their propagation. The Calculating block of Fig. 4.a first receives a circuit array with its attributes filled. Next, we perform a crosstalk error calibration based on [34] and fold the result into the circuit array by multiplying the affected entries' success rates according to Equation 2. To determine the success rate estimation, 1 − CQV, we introduce Algorithm 1, which progressively updates the cumulative success rate (CSR) of each physical qubit based on the circuit array, considering the propagation of errors.
The algorithm initiates by setting the CSR for all entries in the Circuit Array to a perfect score, i.e., CSR = 1 (100% success), as depicted in line 3. Subsequent steps, from lines 3 to 16, loop through all the gates, updating CSR values. For single-qubit gates, lines 6 and 7 modify the CSR for that gate by multiplying its total success rate with the preceding CSR value of the same qubit from the last cycle.
Complex operations like the CNOT gate necessitate a deeper treatment. Here, errors from one qubit can cascade to itself and to the paired qubit. In lines 8 to 11, for every occurrence of such a two-qubit interaction in a given compiled circuit, we introduce a weight w. This weight, which lies between 0 and 1, signifies the fraction of cumulative error originating from the paired qubit that might propagate via the CNOT gate.
This error propagation model stems from the constraints imposed by the error rates disclosed through randomized benchmarking. The intricacy lies in the fact that these reported error rates cannot be disaggregated into individual types, such as phase, bit, or decoherence errors. Each type behaves differently when channeled through the CNOT gate.
For a target qubit involved in a CNOT gate, its CSR is computed as the product of its own gate success rate (s_g), its preceding CSR, and the success rate inherited from its paired qubit. This inherited success rate factors in only the weighted portion of the paired qubit's cumulative error. It is worth remembering that the success rate (or the associated error rate) of any quantum element (qubit, gate, or circuit) can be deduced using Equation 2, which is based on the complementary relationship of success and error rates.
For idle cycles in the quantum circuit, we attribute an error value of zero. In the case of repeated gates, either compiler-introduced or manually inserted by the programmer, we appoint the error value based on the known error rates of such gates in the target machine.
In conclusion, the 1 − CQV value, which represents the overall success rate prediction, is derived by multiplying the CSRs of all measurement gates. Upon examining Algorithm 1, it becomes evident that its complexity is O(G), where G represents the total gate count across all gate types. This complexity arises because the algorithm iteratively checks each physical qubit's gate in every cycle to update the corresponding cumulative success rate. Consequently, the algorithm scales linearly with the gate count, encompassing all gate types, including identity (idle) gates. This linear scalability of our noise model offers a significant advancement, facilitating both current NISQ-era and future quantum computing endeavors without being constrained by the limitations of classical computational power.
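Under these rules, a minimal reconstruction of Algorithm 1 might look as follows. This is an illustrative sketch, not the authors' code: it assumes a simple list-of-dicts circuit array and, for simplicity, applies the inherited-error weighting to both qubits of a CNOT.

```python
def one_minus_cqv(circuit_array, weight):
    """Propagate cumulative success rates (CSR) cycle by cycle.
    circuit_array[q][c] is a dict with keys "gate", "error", and
    (for two-qubit gates) "partners". A CNOT inherits a `weight`
    fraction of its partner's cumulative error; measurement gates
    fold their qubit's CSR into the final 1 - CQV estimate."""
    n_qubits = len(circuit_array)
    n_cycles = len(circuit_array[0])
    csr = [1.0] * n_qubits            # initialize every CSR to 100%
    result = 1.0
    for c in range(n_cycles):
        new_csr = list(csr)
        for q in range(n_qubits):
            cell = circuit_array[q][c]
            s_gate = 1.0 - cell["error"]          # Eq. 2: s = 1 - e
            partners = cell.get("partners", [])
            if cell["gate"] == "measure":
                result *= s_gate * csr[q]         # multiply measured CSRs
            elif partners:                        # two-qubit (CNOT) gate
                inherited = 1.0 - weight * (1.0 - csr[partners[0]])
                new_csr[q] = s_gate * csr[q] * inherited
            else:                                 # single-qubit or idle
                new_csr[q] = s_gate * csr[q]
        csr = new_csr
    return result
```

Setting the weight to 0 or 1 reproduces the no-propagation and full-propagation extremes studied in Sec. V.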
CNOT Weight Selection: In Sec. V, we provide a study of the weight selection impact for various compiled circuits with different error profiles. We then use a machine learning-based method to learn the weight value and infer the proper weight for the CQV calculation. For more details, please refer to Section V.
V Determining CNOT Weight
To accurately predict the real success rate, QVA requires a proper weight value, between zero and one, at compile time to assist the CQV calculation. Before identifying the value of the weight, we first demonstrate the prediction performance when the weight is set to zero or one, representing no error or full error crossover through the CNOT gate, respectively. The compiled circuits of the experiments are generated from different combinations of compiler settings for four benchmarks at two different algorithm sizes. The CQV calculation is performed using the calibration error and execution results from IBMQ_Montreal on Apr. 1, 2022. As shown in Fig. 5, though the CQV results with the weight set to zero are closer to the real success rate than ESP, there is still a nontrivial gap between them and the real SR, meaning that some errors are not well represented. Meanwhile, setting the weight to one makes the predictions close to zero at all times and loses track of the real SR, meaning the errors are overestimated. The experimental results show that, for those benchmarks, using zero or one as the weight leads to inaccurate predictions. When brute-force performing the CQV prediction for all weights at 1% granularity, we found that the correlation between the weight value and its corresponding 1-CQV prediction is approximately −1. In other words, among all the weight values, there will always be one and only one weight value that returns a success rate prediction closest to the real SR, which we label the best weight.
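The brute-force search can be sketched as follows (a hypothetical helper; `predictor` stands in for the 1-CQV computation, and the monotone relationship noted above guarantees a unique minimizer):

```python
def best_weight(circuit_array, real_sr, predictor, granularity=0.01):
    """Brute-force sweep of the CNOT weight in [0, 1] at the given
    granularity (1% by default), returning the weight whose predicted
    success rate is closest to the measured real SR. `predictor` is
    any callable mapping (circuit_array, weight) -> estimated SR."""
    steps = int(round(1.0 / granularity))
    candidates = [i * granularity for i in range(steps + 1)]
    return min(candidates,
               key=lambda w: abs(predictor(circuit_array, w) - real_sr))
```

Because the prediction decreases monotonically with the weight (correlation ≈ −1), the sweep behaves like a one-dimensional search and could equally be replaced by bisection.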
Based on such observation, we calculated the best weight for all compiled circuits generated from combining all the different algorithms, target machines, and compiled strategies. As shown in Fig. 6, we have plotted the best weight against the CNOT count of the compiled circuit. The result is consistent with our expectation – the best weight will be very arbitrary when the CNOT count is low, but as the CNOT count of the compiled circuit increases, the best weight begins to approach zero and shows an overall decreasing trend.
After analysis, we found that many factors, such as machine error properties, compiled circuit properties, etc., influence the best weight value. To take the graph-structured machine information and circuit features into consideration, we choose a graph neural network (GNN) [42] combined with feed-forward networks to perform best-weight prediction, as shown in Fig. 7. The Node Matrix represents the information for every physical qubit, including single-qubit operation error rates. The Edge Matrix in the figure presents the CNOT error rates for each physical qubit pair. The circuit matrix contains each qubit's operation count, measurement information, and two-qubit operation count. As shown in Fig. 8, by leveraging the GNN model, we can provide a weight value within an average 2% difference of the best weight, which strongly supports an accurate SR prediction in later execution. The trained model is used to infer the best weight for the CQV calculation.
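As an illustrative sketch of this architecture (random, untrained parameters; the paper's actual layer sizes and training setup are not reproduced here), one message-passing round over the device coupling graph followed by a dense sigmoid head can be written as:

```python
import numpy as np

def predict_weight(node_feats, edges, edge_feats, circuit_feats, rng=None):
    """One message-passing round over the coupling graph, then a
    feed-forward head with a sigmoid so the output lies in (0, 1),
    matching the valid range of the CNOT weight. Parameters are
    random here; the real model is trained on (circuit, best-weight)
    pairs collected over the first ten days of data."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, f = node_feats.shape
    W_msg = rng.normal(size=(f, f)) * 0.1
    # aggregate neighbor features, scaled by each edge's CNOT error rate
    agg = np.zeros_like(node_feats)
    for (u, v), e in zip(edges, edge_feats):
        agg[u] += e * node_feats[v]
        agg[v] += e * node_feats[u]
    h = np.tanh(node_feats + agg @ W_msg)           # updated node embeddings
    graph_emb = h.mean(axis=0)                      # pool nodes to one vector
    x = np.concatenate([graph_emb, circuit_feats])  # append circuit features
    W_out = rng.normal(size=x.shape[0]) * 0.1
    return 1.0 / (1.0 + np.exp(-(x @ W_out)))       # sigmoid -> (0, 1)
```

The sigmoid output guarantees a legal weight regardless of the input scale; a trained model would learn W_msg and W_out by regressing against the brute-force best weights.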
VI Implementation
We employ Qiskit [15], a renowned open-source framework for quantum computing, as the foundation for implementing and assessing our QVA methodology. Our work extends Qiskit version 0.34.2, enabling it to execute QVA and compute the 1-CQV for any specified compiled circuit. Our QVA approach is meticulously crafted to provide an accurate and efficient estimation of the success rate for compiled circuits on specific quantum machines. To ensure a diverse set of compiled circuits, we utilize various combinations of compiling policies for each benchmark-machine pairing. This approach integrates a spectrum of error-mitigation techniques at every stage, ensuring comprehensive and robust evaluations. Table I describes the backends and benchmarks used in our evaluation. We repeatedly performed all the algorithms, ranging in scale from 4 to 15 input qubits, over months of experimentation, which generated 160 K distinct compiled circuits captured under diverse noise calibration profiles. To focus on meaningful results, we ignore experiments that return a real SR below 0.1%. We chose the state-of-the-art SR estimator ESP as the baseline and excluded ESP_CP since it underestimates error.
In the experimental flow, we first run the given compiled circuits on their target quantum computers for the default 8192 trials and log the outputs. Then, we use the Qiskit simulator on a classical computer to capture the correct result and compute the success rate of the experiment from the logged output. For scenarios approaching quantum supremacy, where classical computers struggle with full-state quantum circuit simulations, we introduce an alternative method to ascertain correct results, as detailed in Sec. VII-B. The experiment's measured performance is denoted the real SR. For each machine, we used the first ten days of data to train the GNN for weight selection and to infer the weight that assists CQV prediction for the remaining experiment data. This offline training is a one-time procedure; in our experiments, it took approximately an hour on an NVIDIA 3060 GPU. We then compute ESP and 1-CQV, which estimate the real SR from the calibrated errors, on a classical computer. The estimated SRs generated by ESP and 1-CQV are compared with the real SR. In addition to quantifying estimation accuracy directly as the absolute difference between the real and estimated SR, we also use the Relative Prediction Error metric, the absolute difference divided by the real SR, to present the accuracy trend of each method while compensating for the decrease in real SR as algorithm size increases.
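The two accuracy metrics reduce to a few lines (sketch):

```python
def prediction_errors(real_sr, estimated_sr):
    """Absolute prediction error and Relative Prediction Error:
    the latter divides the absolute difference by the real SR to
    compensate for the real SR shrinking as algorithms grow."""
    absolute = abs(real_sr - estimated_sr)
    return absolute, absolute / real_sr
```

For instance, a 0.4 estimate against a 0.5 real SR gives a 0.1 absolute error but a 20% relative error; the same absolute gap against a 0.11 real SR would be a 91% relative error, which is why the relative metric matters for larger circuits.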
Table I: Benchmarks and quantum backends used in the evaluation.

Item | Description
---|---
BV | Bernstein-Vazirani
DJ | Deutsch-Jozsa
HS | Hidden Shift
GHZ | Greenberger-Horne-Zeilinger
QFT | Quantum Fourier Transform
QPE | Quantum Phase Estimation
ibmq_montreal | 27 qubits, hexagon topology
ibmq_toronto | 27 qubits, hexagon topology
ibmq_mumbai | 27 qubits, hexagon topology
VII Results
VII-A The CQV Accuracy
Algorithm size within Quantum Volume: Here, we present the CQV prediction performance for all six benchmarks on the quantum machines IBMQ_Montreal, IBMQ_Toronto, and IBMQ_Mumbai. As shown in Fig. 9, we present the 1-CQV prediction accuracy compared with ESP for varying compiled circuits of a five-qubit QPE algorithm on IBMQ_Montreal on Apr. 5, 2022. The different configurations are guidelines for the compiler to generate the final compiled circuit based on the given logical circuit and target device.
We observe that 1-CQV is much closer to the real SR than the ESP. As shown on the right of Fig. 9, the average absolute error rate of ESP over all configurations relative to the ground-truth SR is 29.8%, while 1-CQV achieves an average error of 4.8%. This corresponds to an 84% error reduction compared to ESP, indicating that the CQV calculation adapts to the variation of errors across different calibration periods and provides excellent predictions.
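The 84% figure follows directly from the two average absolute error rates, as the one-line check below shows:

```python
# Error reduction of 1-CQV (4.8% avg. absolute error) relative to ESP (29.8%),
# reproducing the 84% figure quoted above.
esp_err, cqv_err = 0.298, 0.048
reduction = (esp_err - cqv_err) / esp_err   # (0.298 - 0.048) / 0.298 ~ 0.839
```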
As shown in Fig. 10, we present the 1-CQV prediction for all the benchmarks on the single machine IBMQ_Montreal. We include experiments with an SR higher than 0.1%. The relative prediction error is produced by dividing the absolute prediction error by the real SR. A clear trend emerges: the 1-CQV prediction outperforms ESP, achieving on average six times lower relative prediction error, with a best-case improvement of 30 times. Meanwhile, the relative prediction error rises sharply as algorithm size increases, which follows from the nature of the benchmarks, since larger instances employ significantly more two-qubit gates. As algorithm size grows, not only does the number of two-qubit gates increase, but so does the number of swap operations. As shown in Fig. 11, 1-CQV yields a stable and accurate average relative prediction error across all machines and benchmarks. Based on these results, we conclude that CQV achieves the goal of a more precise SR estimator than the state-of-the-art, consistently across different dates, machines, and algorithms.
Algorithm size beyond quantum volume: The results in Figs. 9, 10, and 11 show that 1-CQV predicts SR with a smaller gap to the real SR even when the real SR falls in the 10% to 0.1% range. The primary reason for the low success rate is that the size of the compiled circuit equals or exceeds the quantum volume of the target quantum machine; here, we take quantum volume as the product of the number of virtual qubits and the maximum circuit depth supported by the machine.
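Under the working definition above, checking whether a compiled circuit falls beyond a machine's quantum volume is a simple size comparison. The machine limit and circuit dimensions below are hypothetical, chosen only to illustrate the check:

```python
# Does a compiled circuit exceed the machine's quantum volume, taking quantum
# volume as (virtual qubits) x (max supported depth)?  Values are hypothetical.
def exceeds_quantum_volume(circ_qubits, circ_depth, machine_qv):
    return circ_qubits * circ_depth >= machine_qv

machine_qv = 128                                      # hypothetical limit
small = exceeds_quantum_volume(5, 20, machine_qv)     # 100 < 128
large = exceeds_quantum_volume(10, 40, machine_qv)    # 400 >= 128
```

Circuits for which the check fires are exactly the low-SR regime in which 1-CQV retains its accuracy advantage.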
After a full-spectrum comparison across all backends and benchmarks, we observe that CQV models errors better than the ESP noise model. Fig. 10 demonstrates that the CQV prediction is stable across different algorithms with a much lower relative error rate. Additionally, CQV performs much better when the algorithm reaches or exceeds the quantum volume, a valuable property when limited quantum volume is the bottleneck of the current NISQ era. We therefore conclude that the prediction is accurate across the full spectrum of algorithm sizes and success rates.
Algorithm | CX count | ESP time | Qiskit time | CQV time |
---|---|---|---|---|
QFT_5 | 59 | 0.00299 | 1.31142 | 0.01830 |
QFT_10 | 408 | 0.02004 | 1.86264 | 0.05310 |
QFT_20 | 2657 | 0.09802 | minutes | 0.25227 |
QFT_50 | 26408 | 0.71115 | N/P | 6.01701 |
QFT_100 | 157428 | 1.83842 | N/P | 7.82150 |
QFT_120 | 208260 | 3.10630 | N/P | 20.4157 |
VII-B Scalability Analysis of CQV
In the rapidly evolving landscape of quantum computing, the intricacy of quantum circuits is escalating, bringing us closer to addressing real-world challenges. As we navigate this frontier, the need for prediction models that are both accurate and scalable becomes paramount. Our study, therefore, focuses on the scalability of the Quantum Vulnerability Analysis (QVA), examining its efficiency across a spectrum of quantum circuit sizes.
The Quantum Fourier Transform (QFT) was chosen as the benchmark for our evaluation. With input sizes spanning from 5 to 120 qubits, it is noteworthy that machines operating in this qubit range are at the threshold of quantum supremacy [4]. Our scalability experiments for QVA were conducted using the topology and error profile of the IBMQ_Washington machine, a state-of-the-art 127-qubit quantum computer.
To ensure our execution-time assessment was both transparent and unbiased, we considered only the inference time of the Graph Neural Network (GNN), which provides the CNOT weight values, and the execution time of the CQV noise model. This choice was made because the GNN training is an offline process, executed just once. As shown in Table II, our CQV approach demonstrates linear scalability, with execution time increasing proportionally with the CNOT gate count. It is worth mentioning that CQV executes entirely on the CPU, including the pre-trained GNN used to infer the weight values. This linear trajectory underscores the potential of our model to efficiently handle complex quantum circuits in the future.
However, full-state quantum circuit simulator-based approaches face scalability challenges, particularly for circuits exceeding 32 qubits. This limitation highlights the simulator's constraints when tasked with emulating large-scale, real-world quantum systems. In contrast, our approach requires only a fraction of the computational power to estimate success rates. To both validate our predictions and refine our noise model, particularly for circuits approaching quantum supremacy, we have devised a novel method: by appending the inverse of the original circuit to the circuit itself and using the input state as the reference output, we can effectively measure the circuit's performance and fine-tune our noise model. It is pertinent to note potential overfitting for specific circuits due to the reliance on reversed circuits. Nevertheless, we believe our approach adds a valuable technique to the toolbox for exploring circuit vulnerabilities.
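The circuit-plus-inverse idea above can be sketched with a toy state-vector model: compose a unitary with its inverse, then ask how often the (noisy) composition returns the input state. This sketch is illustrative only; it uses NumPy rather than any particular quantum SDK, and it models noise as a simple depolarizing mix rather than our full noise model.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_unitary(dim):
    # Random unitary via QR decomposition of a complex Gaussian matrix,
    # with the column phases fixed to make the distribution well behaved.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n = 3                       # toy 3-qubit "circuit"
dim = 2 ** n
u = random_unitary(dim)
state0 = np.zeros(dim, dtype=complex)
state0[0] = 1.0             # |000> input state

# Ideal mirror circuit: U followed by U^dagger returns the input exactly,
# so the probability of measuring |000> again is 1 up to floating point.
ideal = u.conj().T @ (u @ state0)
p_ideal = abs(ideal[0]) ** 2

# Toy depolarizing noise: with probability p the output is scrambled
# uniformly over all basis states, depressing the return probability.
p_noise = 0.2
p_noisy = (1 - p_noise) * p_ideal + p_noise / dim
```

Measuring how far the observed return probability falls below 1 gives a ground truth against which a noise model can be fitted, even at sizes where full-state simulation of the forward circuit alone is infeasible.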
Furthermore, while our preliminary results on QVA’s scalability are promising, they represent just the tip of the iceberg. Our model’s design is inherently versatile, unencumbered by specific error rates, gate types, or circuit structures. This adaptability not only allows for the integration of partial error correction techniques in larger devices but also hints at the vast potential for future optimization strategies, further refining the success rate estimation process.
VIII Case Study: Choosing Compiling Strategy
The current access modes for quantum computers are either limited free access to small machines or expensive hourly institutional subscriptions to large devices. Naturally, users want the highest SR with as few executions as possible to save time, money, and access. However, finding the best combination among all the available machines and compiler configurations is challenging, and the search space grows enormously when compilation optimizations are also considered. Without a brute-force execution of all combinations, identifying the best strategy is difficult. Since CQV is more accurate than ESP, a natural question arises: can CQV be used to suggest the compiler configuration with the optimal performance? To answer this, we computed both ESP and CQV for all combinations of compiler configurations at compile time for the HS benchmark and picked the two configurations with the highest estimated SR under each estimator. Based on the results, CQV outperforms ESP across all algorithm sizes. One example of HS at level-2 optimization with six input states is shown in Fig. 12. The two configurations with the highest 1-CQV estimate not only have lower prediction error but also achieve the highest real SR. In contrast, the top two configurations ranked by ESP achieve only the 4th- and 5th-best real SR out of nine combinations; conversely, the configurations that actually achieve the highest real SR rank only 7th and 8th under ESP. The computational overhead of CQV prediction is acceptable since the calculation is done on a classical computer. Furthermore, since the SR predictions are independent of one another, different compiler strategies can be evaluated in parallel; the per-prediction overhead is discussed in Sec. VII-B.
In this case, the CQV can guide a user to choose a more effective and reliable compiler strategy with a higher SR than ESP. No prediction can be perfect (that would be computationally intractable), but CQV improves prediction enough to be usable for compiler decisions.
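The selection rule used in this case study amounts to ranking candidate compiler configurations by their estimated SR and submitting only the top choices. The configuration names and estimates below are hypothetical placeholders, not values from our experiments:

```python
# Rank hypothetical compiler configurations by estimated SR (1 - CQV) and
# keep only the top two to actually execute, as in the case study above.
estimates = {
    "sabre/level2":   0.42,
    "noise_adaptive": 0.37,
    "dense/level1":   0.31,
    "trivial/level0": 0.12,
}
best_two = sorted(estimates, key=estimates.get, reverse=True)[:2]
```

Because each estimate is computed independently on a classical machine, the whole ranking trivially parallelizes across configurations.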
ACKNOWLEDGMENTS
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
IX Conclusion
In the rapidly evolving field of quantum computing, predicting the success rate (SR) of a quantum circuit remains a challenging task. Existing methodologies often fall short, either by oversimplifying the error model or by not adequately accounting for the intricacies of error propagation within complex quantum circuits. Recognizing this gap, we present the Quantum Vulnerability Analysis (QVA) in this paper, a robust systematic approach to determining a given computation’s Cumulative Quantum Vulnerability (CQV). The QVA offers a nuanced, detailed method to estimate the failure rate of a given compiled circuit, considering the effects of individual gate errors, their cumulative influence, and the unique properties of quantum gates such as CNOT. To establish the efficacy of QVA, we subjected it to rigorous validation on cutting-edge quantum machines using well-known benchmarks. The results demonstrated that QVA consistently outperformed the prevalent success rate estimator, ESP, and showcased linear scalability. On average, our model exhibited a six-fold reduction in the relative prediction error rate compared to ESP. Such accuracy not only bolsters confidence in our method but also has far-reaching implications at both the hardware and software levels.
References
- [1] “IBM Quantum systems,” https://quantum-computing.ibm.com/services?systems=all, accessed: 2022-07-28.
- [2] N. Acharya and S. M. Saeed, “A lightweight approach to detect malicious/unexpected changes in the error rates of NISQ computers,” in 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD). IEEE, 2020, pp. 1–9.
- [3] N. Acharya and S.-M. Saeed, “Automated flag qubit insertion for reliable quantum circuit output,” in 2021 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). IEEE, 2021, pp. 431–436.
- [4] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. Brandao, D. A. Buell et al., “Quantum supremacy using a programmable superconducting processor,” Nature, vol. 574, no. 7779, pp. 505–510, 2019.
- [5] J. Clarke and F. K. Wilhelm, “Superconducting quantum bits,” Nature, vol. 453, no. 7198, pp. 1031–1042, 2008.
- [6] L. DiCarlo, J. M. Chow, J. M. Gambetta, L. S. Bishop, B. R. Johnson, D. Schuster, J. Majer, A. Blais, L. Frunzio, S. Girvin et al., “Demonstration of two-qubit algorithms with a superconducting quantum processor,” Nature, vol. 460, no. 7252, pp. 240–244, 2009.
- [7] Y. Ding, P. Gokhale, S. F. Lin, R. Rines, T. Propson, and F. T. Chong, “Systematic crosstalk mitigation for superconducting qubits via frequency-aware compilation,” in 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2020, pp. 201–214.
- [8] X. Fu, L. Riesebos, M. Rol, J. van Straten, J. Van Someren, N. Khammassi, I. Ashraf, R. Vermeulen, V. Newsum, K. Loh, J. C. de Sterke, W. J. Vlothuizen, R. N. Schouten, C. G. Almudever, L. DiCarlo, and K. Bertels, “eQASM: An executable quantum instruction set architecture,” in 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2019, pp. 224–237.
- [9] X. Fu, M. Rol, C. Bultink, J. van Someren, N. Khammassi, I. Ashraf, R. Vermeulen, J. de Sterke, W. Vlothuizen, R. Schouten, L. DiCarlo, and K. Bertels, “A microarchitecture for a superconducting quantum processor,” IEEE Micro, vol. 38, no. 3, pp. 40–47, 2018.
- [10] X. Fu, M. A. Rol, C. C. Bultink, J. Van Someren, N. Khammassi, I. Ashraf, R. Vermeulen, J. De Sterke, W. Vlothuizen, R. Schouten, C. G. Almudever, L. DiCarlo, and K. Bertels, “An experimental microarchitecture for a superconducting quantum processor,” in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, 2017, pp. 813–825.
- [11] J. P. Gaebler, A. M. Meier, T. R. Tan, R. Bowler, Y. Lin, D. Hanneke, J. D. Jost, J. Home, E. Knill, D. Leibfried et al., “Randomized benchmarking of multiqubit gates,” Physical Review Letters, vol. 108, no. 26, p. 260503, 2012.
- [12] J. M. Gambetta, A. D. Córcoles, S. T. Merkel, B. R. Johnson, J. A. Smolin, J. M. Chow, C. A. Ryan, C. Rigetti, S. Poletto, T. A. Ohki et al., “Characterization of addressability by simultaneous randomized benchmarking,” Physical Review Letters, vol. 109, no. 24, p. 240504, 2012.
- [13] P. Gokhale, O. Angiuli, Y. Ding, K. Gui, T. Tomesh, M. Suchara, M. Martonosi, and F. T. Chong, “Optimization of simultaneous measurement for variational quantum eigensolver applications,” in 2020 IEEE International Conference on Quantum Computing and Engineering (QCE). IEEE, 2020, pp. 379–390.
- [14] P. Gokhale, A. Javadi-Abhari, N. Earnest, Y. Shi, and F. T. Chong, “Optimized quantum compilation for near-term algorithms with OpenPulse,” in 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2020, pp. 186–200.
- [15] I. Q. Group, “Open-source quantum development,” https://qiskit.org/, retrieved on 04-16-2021.
- [16] L. K. Grover, “A fast quantum mechanical algorithm for database search,” in Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, 1996, pp. 212–219.
- [17] IBM, “IBM Quantum,” https://quantum-computing.ibm.com/, retrieved on 04-16-2021.
- [18] K. Kechedzhi, S. Isakov, S. Mandrà, B. Villalonga, X. Mi, S. Boixo, and V. Smelyanskiy, “Effective quantum volume, fidelity and computational cost of noisy quantum processing experiments,” arXiv preprint arXiv:2306.15970, 2023.
- [19] R. LaRose, A. Mari, S. Kaiser, P. J. Karalekas, A. A. Alves, P. Czarnik, M. El Mandouh, M. H. Gordon, Y. Hindy, A. Robertson et al., “Mitiq: A software package for error mitigation on noisy quantum computers,” Quantum, vol. 6, p. 774, 2022.
- [20] T. LeCompte, F. Qi, and L. Peng, “Robust cache-aware quantum processor layout,” in 2020 International Symposium on Reliable Distributed Systems (SRDS). IEEE, 2020, pp. 276–287.
- [21] T. LeCompte, F. Qi, X. Yuan, N.-F. Tzeng, M. H. Najaf, and L. Peng, “Graph neural network assisted quantum compilation for qubit allocation,” in ACM Great Lakes Symposium on VLSI (GLSVLSI). ACM, 2023.
- [22] T. LeCompte, F. Qi, X. Yuan, N.-F. Tzeng, M. H. Najafi, and L. Peng, “Machine learning-based qubit allocation for error reduction in quantum circuits,” IEEE Transactions on Quantum Engineering, 2023.
- [23] G. Li, Y. Ding, and Y. Xie, “Tackling the qubit mapping problem for NISQ-era quantum devices,” in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019, pp. 1001–1014.
- [24] G. Li, A. Wu, Y. Shi, A. Javadi-Abhari, Y. Ding, and Y. Xie, “Paulihedral: A generalized block-wise compiler optimization framework for quantum simulation kernels,” in Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2022, pp. 554–569.
- [25] J. Liu and H. Zhou, “Reliability modeling of NISQ-era quantum computers,” in 2020 IEEE International Symposium on Workload Characterization (IISWC). IEEE, 2020, pp. 94–105.
- [26] E. Magesan, J. Gambetta, and J. Emerson, “Robust randomized benchmarking of quantum processes,” arXiv preprint arXiv:1009.3639, 2010.
- [27] E. Magesan, J. M. Gambetta, and J. Emerson, “Characterizing quantum gates via randomized benchmarking,” Physical Review A, vol. 85, no. 4, p. 042311, 2012.
- [28] J. Majer, J. Chow, J. Gambetta, J. Koch, B. Johnson, J. Schreier, L. Frunzio, D. Schuster, A. A. Houck, A. Wallraff et al., “Coupling superconducting qubits via a cavity bus,” Nature, vol. 449, no. 7161, pp. 443–447, 2007.
- [29] A. Matsuo, W. Hattori, and S. Yamashita, “Reducing the overhead of mapping quantum circuits to IBM Q system,” in 2019 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 2019, pp. 1–5.
- [30] N. Moll, P. Barkoutsos, L. S. Bishop, J. M. Chow, A. Cross, D. J. Egger, S. Filipp, A. Fuhrer, J. M. Gambetta, M. Ganzhorn et al., “Quantum optimization using variational algorithms on near-term quantum devices,” Quantum Science and Technology, vol. 3, no. 3, p. 030503, 2018.
- [31] P. Murali, J. M. Baker, A. Javadi-Abhari, F. T. Chong, and M. Martonosi, “Noise-adaptive compiler mappings for noisy intermediate-scale quantum computers,” in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019, pp. 1015–1029.
- [32] P. Murali, N. M. Linke, M. Martonosi, A. J. Abhari, N. H. Nguyen, and C. H. Alderete, “Architecting noisy intermediate-scale quantum computers: A real-system study,” IEEE Micro, vol. 40, no. 3, pp. 73–80, 2020.
- [33] P. Murali, N. M. Linke, M. Martonosi, A.-J. Abhari, N. H. Nguyen, and C. H. Alderete, “Full-stack, real-system quantum computer studies: Architectural comparisons and design insights,” in 2019 ACM/IEEE 46th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2019, pp. 527–540.
- [34] P. Murali, D. C. McKay, M. Martonosi, and A. Javadi-Abhari, “Software mitigation of crosstalk on noisy intermediate-scale quantum computers,” in Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, 2020, pp. 1001–1016.
- [35] M. A. Nielsen and I. Chuang, “Quantum computation and quantum information,” 2002.
- [36] S. Nishio, Y. Pan, T. Satoh, H. Amano, and R. V. Meter, “Extracting success from IBM’s 20-qubit machines using error-aware compilation,” ACM Journal on Emerging Technologies in Computing Systems (JETC), vol. 16, no. 3, pp. 1–25, 2020.
- [37] D. Oliveira, E. Giusto, B. Baheri, Q. Guan, B. Montrucchio, and P. Rech, “A systematic methodology to compute the quantum vulnerability factors for quantum circuits,” IEEE Transactions on Dependable and Secure Computing, 2023.
- [38] T. Patel, B. Li, R. B. Roy, and D. Tiwari, “UREQA: Leveraging Operation-Aware error rates for effective quantum circuit mapping on NISQ-Era quantum computers,” in 2020 USENIX Annual Technical Conference (USENIX ATC 20), 2020, pp. 705–711.
- [39] J. Poyatos, J. I. Cirac, and P. Zoller, “Complete characterization of a quantum process: the two-bit quantum gate,” Physical Review Letters, vol. 78, no. 2, p. 390, 1997.
- [40] J. Preskill, “Quantum computing in the nisq era and beyond,” Quantum, vol. 2, p. 79, 2018.
- [41] S. Resch, S. Tannu, U. R. Karpuzcu, and M. Qureshi, “A day in the life of a quantum error,” IEEE Computer Architecture Letters, vol. 20, no. 1, pp. 13–16, 2020.
- [42] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The graph neural network model,” IEEE Transactions on Neural Networks, vol. 20, no. 1, pp. 61–80, 2008.
- [43] Y. Shi, N. Leung, P. Gokhale, Z. Rossi, D. I. Schuster, H. Hoffmann, and F. T. Chong, “Optimized compilation of aggregated instructions for realistic quantum computers,” in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019, pp. 1031–1044.
- [44] P. W. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,” SIAM Review, vol. 41, no. 2, pp. 303–332, 1999.
- [45] R. Stassi, M. Cirio, and F. Nori, “Scalable quantum computer with superconducting circuits in the ultrastrong coupling regime,” npj Quantum Information, vol. 6, no. 1, pp. 1–6, 2020.
- [46] S. Tannu and M. Qureshi, “Not all qubits are created equal: A case for variability-aware policies for nisq-era quantum computers,” in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019, pp. 987–999.
- [47] S. S. Tannu and M. Qureshi, “Ensemble of diverse mappings: Improving reliability of quantum computers by orchestrating dissimilar mistakes,” in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019, pp. 253–265.
- [48] S. S. Tannu and M. K. Qureshi, “Mitigating measurement errors in quantum computers by exploiting state-dependent bias,” in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019, pp. 279–290.