

Concatenating Binomial Codes with the Planar Code

Juliette Soule [email protected] Centre for Engineered Quantum Systems, School of Physics, University of Sydney, Sydney, NSW 2006, Australia.    Andrew C. Doherty [email protected] Centre for Engineered Quantum Systems, School of Physics, University of Sydney, Sydney, NSW 2006, Australia.    Arne L. Grimsmo Centre for Engineered Quantum Systems, School of Physics, University of Sydney, Sydney, NSW 2006, Australia. AWS Center for Quantum Computing, Pasadena, CA 91125, USA California Institute of Technology, Pasadena, CA 91125, USA
Abstract

Rotation symmetric bosonic codes are an attractive encoding for qubits into oscillator degrees of freedom, particularly in superconducting qubit experiments. While these codes can tolerate considerable loss and dephasing, they will need to be combined with higher level codes to achieve large-scale devices. We investigate concatenating these codes with the planar code in a measurement-based scheme for fault-tolerant quantum computation. We focus on binomial codes as the base level encoding, and estimate break-even points for such encodings under loss for various types of measurement protocol. These codes are more resistant to photon loss errors, but require both higher mean photon numbers and higher phase resolution for gate operations and measurements. We find that it is necessary to implement adaptive phase measurements, maximum likelihood quantum state inference, and weighted minimum weight decoding to obtain good performance for a planar code using binomial code qubits.

I Introduction

One of the major challenges in building a quantum computer is implementing error correction. Error correction, the encoding of quantum information to protect it against environmental noise, is crucial in order to perform large-scale calculations. In this work we are interested in the performance of bosonic codes in which qubits are encoded into the infinite-dimensional Hilbert space of a harmonic oscillator. Bosonic codes can provide protection against typical sources of noise such as loss and dephasing [1, 2, 3] and can be implemented in a variety of physical architectures, including electromagnetic modes, for example microwave modes controlled by superconducting qubits [4, 5, 6], or mechanical degrees of freedom such as trapped ion motional modes [7, 8].

Experiments in the context of superconducting qubits have shown that bosonic codes are amongst the most promising approaches to practical quantum computing in that platform. Specifically, it has been possible to operate these experiments beyond the memory break-even point, at which the lifetime of an encoded qubit equals that of an unencoded qubit [4, 9]. In addition, it is possible to perform logical gates and fault-tolerant measurements in such experiments [5, 10, 11, 12, 13, 14, 15].

No matter how good bosonic codes become, however, there will be residual errors. Large scale quantum computation will require bosonic codes to be concatenated with a higher level code, such as a surface code [16, 17], that allows fault tolerant quantum computation [2]. The bosonic code must be able to suppress the noise below the fault tolerance threshold of the higher level code. For the so-called GKP codes [2] there have been studies of how to achieve this [18, 19, 20] but for widely used codes such as the so-called cat codes [21, 22, 23] and binomial codes [24] there is not a quantitative understanding of the performance required for large-scale computation.

In this work we describe and implement simulations to determine the performance of an architecture that concatenates the surface code with a class of bosonic codes that Grimsmo and collaborators termed rotational symmetric codes [25]. They provided a unified analysis of these codes, which include both cat codes and binomial codes, as well as a theoretical scheme for logical gates, measurements, and state preparation [25]. Our approach uses this gate set to study the performance of a specific choice of higher level code and architecture for quantum computation.

The gate set proposed in [25] suggests that a natural approach to realising fault-tolerant computation with concatenated bosonic codes is Measurement Based Quantum Computation (MBQC) with code foliation [26, 27, 28]. In particular, for rotation symmetric codes the natural "easy" operations include a \bar{C}_{Z}=\mathrm{diag}(1,1,1,-1) gate, realised with a cross-Kerr interaction between two modes, and X basis measurement, realised as a phase measurement [25]. Together with preparation of |+\rangle states, this forms the basis for MBQC. These considerations motivate our scheme, which uses bosonic encoded qubits, in particular binomial qubits, as the 'physical qubits' of a 2D surface code realised by MBQC.

The aim of this work is to conduct a preliminary investigation of this quantum computation scheme, taking into account a realistic model of photon loss which is the primary source of noise for bosonic codes. Furthermore, the X basis measurements that we require for our scheme will be realised as phase measurements of a bosonic mode. The phase measurement itself, as well as the procedure for inferring the qubit state, are imperfect processes which will introduce inaccuracy to our scheme. We will compare different methods of phase measurement such as heterodyne and adaptive homodyne measurement [29], combined with different procedures for qubit state inference.

We obtain threshold values for varying orders of discrete rotational symmetry. We find that whilst increasing the order of discrete rotational symmetry generally improves the threshold value, thresholds for binomial encoded qubits tend to be lower than for the trivial Fock space encoding, |0\rangle_{L}=|0\rangle, |1\rangle_{L}=|1\rangle, when using the most naive measurement and qubit state inference techniques. By using more sophisticated measurement and state inference methods, as well as incorporating the soft information from measurements into decoding, we are able to find schemes that beat the trivial encoding when the qubit state measurement error for the trivial encoding is not too small.

The point at which the lifetime of a bosonic encoded qubit outperforms the trivial encoding is referred to as break-even [30]. The competition between the bosonic codes and the trivial encoding arises because the rotationally symmetric codes achieve tolerance to loss errors at the expense of increased Fock number. In comparison, the trivial encoding has no intrinsic robustness to loss but has a very low average photon number. We find that optimal measurement and state inference is required for the rotation symmetric codes to outperform the trivial encoding. Finally, we examine the sub-threshold performance of binomial codes and find that below 50\% of threshold the loss tolerance of a binomial code is more beneficial, and codes with increased rotational symmetry perform better than the trivial code and better than codes with reduced symmetry.

This paper is structured as follows. In Sec. II we review rotation symmetric bosonic codes and measurement based quantum computation. In Sec. III we introduce the measurement models and qubit state inference techniques used to realise the computational scheme. Sec. III.4 then quantifies the performance of these methods. In Sec. III.5 we introduce methods used for the 2D planar code, including an X basis Pauli twirling approximation and modified MWPM decoding algorithm. Sec. III.6 compares the code thresholds obtained using these techniques, as well as examining the subthreshold scaling of the code. Finally, in Sec. IV we review our findings and highlight the main conclusions.

II Setup and Description of Cluster State Model

In this section we will define the various elements of the error correction scheme we investigate. This scheme involves encoding qubits in bosonic modes using a rotation symmetric encoding, specifically the binomial encoding. These encoded qubits are then entangled using C_{Z} gates to form a specific two dimensional many-body entangled resource state, also known as a cluster state, which maps onto the two dimensional surface code [26]. In this work, we will restrict ourselves to the 2D surface code. The rotation symmetric bosonic (RSB) encoded qubits are then measured and subject to error correction in order to enact the identity gate on the code. Note that the scheme could be extended to non-trivial gates; however, this will not be considered in this work. Though the 2D surface code is not fault tolerant, it is a more straightforward setting in which to conduct a preliminary investigation of our concatenated scheme.

II.1 Rotation Symmetric Bosonic Codes

Encoding a qubit in a single bosonic mode involves defining a two dimensional subspace of the mode’s Hilbert space as the span of the logical codewords. The remaining space can be used to detect and correct errors. The most prevalent error sources to which bosonic modes are subject are loss and dephasing. Rotation-symmetric encodings have been tailored specifically to be able to correct loss and can exactly correct up to a certain order of photon loss. This property is tightly connected to the discrete rotational symmetry of these codes.

Rotation-Symmetric Bosonic (RSB) codes [25] have logical Z operators of the form

\hat{Z}_{N} = e^{i\pi\hat{n}/N}, (1)

where \hat{n}=\hat{a}^{\dagger}\hat{a} is the Fock space number operator, and \hat{a},\hat{a}^{\dagger} are the Fock space annihilation and creation operators, respectively. Such codes have discrete rotational symmetry of order N, as the projector onto the codespace commutes with the discrete rotation operator

\hat{R}_{N} = e^{i2\pi\hat{n}/N}. (2)

Indeed, \hat{R}_{2N}=\hat{Z}_{N}. The choice of the logical Z operator determines the form of the logical codewords as finite rotated superpositions of a "primitive" state [25] |\theta\rangle, where different types of rotation symmetric codes, such as cat and binomial codes, differ only in the choice of primitive. Explicitly, for \mu\in\{0,1\}

|\mu\rangle_{N} = \frac{1}{\sqrt{\mathcal{N}_{\mu}}}\sum_{m=0}^{2N-1}(-1)^{\mu m}e^{i\pi m\hat{n}/N}|\theta\rangle. (3)

The \hat{Z}_{N} operator also enforces a specific Fock grid structure of the logical codewords. In particular, |\mu\rangle_{N} for \mu\in\{0,1\} has support on every |(2k+\mu)N\rangle Fock state, as per Fig. 1. This structure results in a Fock grid distance d_{n}=N between logical zero and logical one states.
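The Fock-grid structure of Eq. (3) is easy to verify numerically. The sketch below is our own illustration (not code from the paper): it builds the two codewords from a truncated coherent-state primitive, an arbitrary choice that corresponds to a cat code, and checks the support pattern described above.

```python
import numpy as np
from math import factorial

def rsb_codewords(N, primitive):
    """Build |0>_N and |1>_N from a primitive state via the sum over
    2N discrete rotations in Eq. (3); normalisation is done numerically."""
    dim = len(primitive)
    n = np.arange(dim)
    words = []
    for mu in (0, 1):
        psi = np.zeros(dim, dtype=complex)
        for m in range(2 * N):
            psi += (-1) ** (mu * m) * np.exp(1j * np.pi * m * n / N) * primitive
        words.append(psi / np.linalg.norm(psi))
    return words

# coherent-state primitive (a cat code) in a truncated Fock space
dim, alpha, N = 40, 2.0, 2
n = np.arange(dim)
prim = alpha ** n / np.sqrt(np.array([factorial(k) for k in n], dtype=float))
prim /= np.linalg.norm(prim)
zero_N, one_N = rsb_codewords(N, prim)
```

The rotation sum interferes destructively except on the (2k+\mu)N grid, so the two codewords have disjoint Fock support and are exactly orthogonal.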

Dual basis codewords are defined in the usual way

|\pm\rangle_{N} = \frac{1}{\sqrt{2}}\left(|0\rangle_{N}\pm|1\rangle_{N}\right). (4)

In contrast to primal codewords, they have support on every Fock state |kN\rangle. They are separated by a distance in phase space d_{\theta}=\pi/N. Note that d_{n}\propto N whilst d_{\theta}\propto 1/N. Therefore, whilst increasing N is advantageous for the resilience of |0\rangle_{N},|1\rangle_{N} codewords to loss and gain errors, it is detrimental to the capacity of |\pm\rangle_{N} codewords to tolerate dephasing errors.

Figure 1: Fock and phase space structure of an N=2 RSB code. Fock states supporting |0\rangle_{N} are offset by N from those supporting |1\rangle_{N}, whilst consecutive Fock states supporting |0\rangle_{N} (or |1\rangle_{N}) are offset by 2N. |\pm\rangle_{N} states (orange and blue respectively) are separated by d_{\theta}=\pi/N in phase space.

Binomial codes are so called due to the binomial coefficient weighting of their Fock state coefficients. In particular

|\mu_{N}\rangle_{\mathrm{bin}} = \sum_{k=0}^{\lceil K/2\rceil-\mu}\sqrt{\frac{1}{2^{K-1}}\binom{K}{2k+\mu}}\,|(2k+\mu)N\rangle (5)

where the parameter KK relates to the number of loss and dephasing errors that are correctable by the code, as detailed in [24].
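As a concrete check of Eq. (5), this short sketch (ours, using the conventions above) confirms that the codewords are normalised, mutually orthogonal, and have mean photon number \bar{n}=KN/2, the value used on the figure axes later in the paper.

```python
import numpy as np
from math import comb

def binomial_codeword(mu, N, K, dim):
    """Binomial codeword of Eq. (5): binomial weights on the (2k+mu)N Fock grid."""
    psi = np.zeros(dim)
    for k in range((K + 1) // 2 - mu + 1):   # k = 0 .. ceil(K/2) - mu
        j = 2 * k + mu
        psi[j * N] = np.sqrt(comb(K, j) / 2 ** (K - 1))
    return psi

N, K = 2, 4
dim = (K + 1) * N + 1            # enough to hold the largest occupied Fock state
zero = binomial_codeword(0, N, K, dim)
one = binomial_codeword(1, N, K, dim)
nbar = zero @ (np.arange(dim) * zero)   # mean photon number of |0>_bin
```

Normalisation follows from the fact that the even (odd) binomial coefficients of order K each sum to 2^{K-1}.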

In the following we will omit the N subscript in our notation and simply use |\psi\rangle to denote an RSB qubit encoded in the state \psi. When referring to 'logical' operations and states, we will mean operations and states at the level of the surface code.

We study a quantum computing scheme which requires the preparation of RSB |+\rangle states, destructive X basis measurement, and an RSB C_{Z} gate. These three elements are sufficient to enact all Clifford gates [25]. Universality may be achieved for this scheme by injecting magic states, but non-trivial gates are beyond the scope of this work.

II.1.1 RSB C_{Z} Gate

As discussed above, the only logical gate we need to enact on RSB encoded qubits in our computational scheme is the controlled phase C_{Z} gate. The C_{Z} gate may be realised by a controlled rotation at the physical level, generated by a cross-Kerr interaction between the modes [21, 31]

H_{\textsc{CROT}_{ij}} = -\Omega\,\hat{n}_{i}\otimes\hat{n}_{j}
\textsc{CROT}_{ij} = e^{-iH_{\textsc{CROT}_{ij}}t_{\rm gate}} = e^{i\pi\hat{n}_{i}\otimes\hat{n}_{j}/NM} (6)

where modes i,j carry RSB qubits with discrete rotational symmetry of order N,M respectively. Loss is the dominant imperfection in our model, and if the cross-Kerr gate is slow the effect of loss is exacerbated. Both the strength of the loss and the time required for the gate are captured by the parameter \gamma=\kappa t_{\rm gate}, which we will use to characterize the noise level. The gate time is chosen such that \Omega t_{\rm gate}=\pi/NM, thus \gamma is proportional to the photon loss rate and inversely proportional to the cross-Kerr nonlinearity. In all that follows we only consider CROT gates between identical encodings, so N=M.
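To see why the controlled rotation of Eq. (6) acts as a C_{Z} gate on the codespace, note that on the Fock grid n=(2k+\mu)N the phase \pi n_{1}n_{2}/N^{2} is an odd multiple of \pi exactly when both qubits are in |1\rangle. A small numerical check (our own sketch, reusing the binomial codewords of Eq. (5) with arbitrary small parameters):

```python
import numpy as np
from math import comb

def bin_word(mu, N, K, dim):
    """Binomial codeword of Eq. (5), as a Fock-amplitude vector."""
    psi = np.zeros(dim)
    for k in range((K + 1) // 2 - mu + 1):
        j = 2 * k + mu
        psi[j * N] = np.sqrt(comb(K, j) / 2 ** (K - 1))
    return psi

N, K, dim = 2, 2, 12
words = [bin_word(mu, N, K, dim) for mu in (0, 1)]
n = np.arange(dim)
# CROT is diagonal in the two-mode Fock basis: phase pi * n1 * n2 / N^2 (N = M here)
crot_diag = np.exp(1j * np.pi * np.outer(n, n) / N ** 2).ravel()

def apply_crot(a, b):
    """Apply the diagonal CROT phase to the product state |a>|b>."""
    return crot_diag * np.kron(a, b)
```

Applied to the four codeword products, the gate returns each state unchanged except for a sign -1 on |1\rangle|1\rangle, i.e. it acts as diag(1,1,1,-1) on the codespace.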

Note that low cross-Kerr nonlinearity in practical devices might lead to a different coupling nonlinearity being preferred, such as a SNAIL [32]. In that case, too, the effect of loss will depend on the per-photon loss probability during a gate. So while the details of code performance would differ, we expect qualitatively similar results for such a gate.

The other RSB operation we need is a destructive single qubit X measurement, which we investigate in Sec. III.1.

II.2 Noise Model

A primary noise source plaguing bosonic modes is photon loss. We consider a model in which photon loss occurs during the implementation of each CROT gate.

II.2.1 Photon Loss Noise

To describe the noise due to photon loss, we use the framework of quantum trajectory theory [33] to analyse a master equation describing losses occurring on each qubit during the implementation of the CROT gate. The master equation describing the combined effects of the gate and loss is

\dot{\rho} = -i[H_{\textsc{CROT}_{12}},\rho] + \kappa\sum_{i=1}^{2}\mathcal{D}[\hat{a}_{i}]\rho (7)

where \hat{a}_{1}=\hat{a}\otimes\mathbb{I}, \hat{a}_{2}=\mathbb{I}\otimes\hat{a}, and \mathcal{D}[L]\rho=L\rho L^{\dagger}-L^{\dagger}L\rho/2-\rho L^{\dagger}L/2 is the Lindblad superoperator. We will describe the effects of loss using the parameter \gamma=\kappa t_{\rm gate}.

In the trajectory approach the density matrix can be written as a sum over the number of photon emission events that occur during the gate. Each term involves an integral over the emission time of the photons. As described in more detail in Appendix A we commute the loss past the CROT gate in each such term so that we are left with an effective error operator acting on each qubit.

We will introduce a notation that generalizes to the case of M modes. Suppose that the times at which losses occur on mode 1 are listed in a vector \mathbf{t}_{1}. We will suppose that these emission times are arranged in time order, such that for times t_{m},t_{m+1}\in\mathbf{t}_{1} we have t_{m+1}>t_{m}. Similarly, the emission times for mode 2 are \mathbf{t}_{2}. We will sometimes need to refer to the emission times for all modes, and we do this by defining an array of emission time vectors \mathbf{t}=\{\mathbf{t}_{1},\mathbf{t}_{2}\}. The number of photon emissions for mode a is j_{a}=|\mathbf{t}_{a}|, the number of entries in the vector \mathbf{t}_{a}. The total number of photon emissions k is the sum of the j_{a}, so for two modes we have j_{1}+j_{2}=k. Then the noisy CROT gate with a fixed number of photon emissions can be expressed as

\widetilde{\textsc{CROT}}_{12,\mathbf{t}} = \hat{E}_{1,\mathbf{t}}\,\hat{E}_{2,\mathbf{t}}\,\textsc{CROT}_{12} (8)

where \hat{E}_{1,\mathbf{t}},\hat{E}_{2,\mathbf{t}} are error operators acting on modes 1, 2 respectively, given by

\hat{E}_{1,\mathbf{t}} = \sqrt{\kappa}^{\,j_{1}}\,e^{\kappa\tau(\mathbf{t}_{1})/2}\,e^{i\Omega\tau(\mathbf{t}_{2})\hat{n}_{1}}\,\hat{a}_{1}^{j_{1}}\,e^{-\kappa t_{\rm gate}\hat{n}_{1}/2} (9a)
\hat{E}_{2,\mathbf{t}} = \sqrt{\kappa}^{\,j_{2}}\,e^{\kappa\tau(\mathbf{t}_{2})/2}\,e^{i\Omega\tau(\mathbf{t}_{1})\hat{n}_{2}}\,\hat{a}_{2}^{j_{2}}\,e^{-\kappa t_{\rm gate}\hat{n}_{2}/2}. (9b)

This expression is derived in Appendix A, specifically Eq. A.1. The effective delay parameters \tau(\mathbf{t}_{a}) appearing in these expressions are given by

\tau(\mathbf{t}_{a}) = j_{a}t_{\rm gate} - \sum_{t_{m}\in\mathbf{t}_{a}}t_{m}. (10)

The various factors in \hat{E}_{1,\mathbf{t}} have a natural interpretation. \hat{E}_{1,\mathbf{t}} removes j_{1} photons from mode 1, hence the factor \hat{a}_{1}^{j_{1}}. The probability of an event with this number of photon emissions, and the resulting conditioning of the wavefunction, are described by the constant factor \sqrt{\kappa}^{\,j_{1}}\exp(\kappa\tau(\mathbf{t}_{1})/2) and the non-unitary operator \exp(-\kappa t_{\rm gate}\hat{n}_{1}/2). Finally, the action of the gate during the photon loss means that photon loss events on the neighbouring mode 2 lead to a phase shift on mode 1. This correlated noise process is described by the unitary factor \exp[i\Omega\tau(\mathbf{t}_{2})\hat{n}_{1}]. As we will see, it has very significant consequences for the performance of error correction.
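The single-mode error operator can be assembled directly in a truncated Fock space. The sketch below (ours; the parameter values are arbitrary) implements Eqs. (9a) and (10) and checks two limiting cases: with no emissions anywhere it reduces to pure no-jump damping e^{-\kappa t_{\rm gate}\hat{n}/2}, while an emission on the neighbouring mode contributes only the number-dependent phase e^{i\Omega\tau\hat{n}}.

```python
import numpy as np

def error_op(ts_self, ts_other, kappa, Omega, t_gate, dim):
    """Effective error operator of Eq. (9a) for one mode, given jump times.

    ts_self / ts_other are the emission times on this mode and on the
    neighbouring mode; tau is the effective delay of Eq. (10).
    """
    n = np.arange(dim)
    j = len(ts_self)
    tau_self = j * t_gate - sum(ts_self)
    tau_other = len(ts_other) * t_gate - sum(ts_other)
    a = np.diag(np.sqrt(n[1:]), k=1)        # annihilation operator, truncated
    return (np.sqrt(kappa) ** j
            * np.exp(kappa * tau_self / 2)
            * np.diag(np.exp(1j * Omega * tau_other * n))
            @ np.linalg.matrix_power(a, j)
            @ np.diag(np.exp(-kappa * t_gate * n / 2)))

dim, kappa, Omega, t_gate = 8, 0.01, 1.0, 0.5
no_jump = error_op([], [], kappa, Omega, t_gate, dim)
neighbour_jump = error_op([], [0.2], kappa, Omega, t_gate, dim)
```

The neighbour-jump case makes the correlated dephasing explicit: the operator is diagonal in the Fock basis, so a loss next door leaves the photon number intact but scrambles the phase, which is exactly what the RSB dual codewords are sensitive to.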

We turn now to the multimode case, where we will need to perform many simultaneous CROT gates. Specifically, the gates to be performed are represented by a graph G=(V,E), where the vertices V represent qubits and the edges E represent the locations of the gates. We will also use the notion of neighbourhoods: a qubit i is said to be in the neighbourhood of qubit j, i\in\mathcal{N}(j), if qubits i,j share an edge. We consider applying all CROT gates simultaneously. Using the same notation as for the two-qubit case, we can express the net noise operator for a qubit a as

\hat{E}_{a,\mathbf{t}} = \sqrt{\kappa}^{\,j_{a}}\,e^{\kappa\tau(\mathbf{t}_{a})/2}\left(\prod_{b\in\mathcal{N}(a)}e^{i\Omega\tau(\mathbf{t}_{b})\hat{n}_{a}}\right)\hat{a}_{a}^{j_{a}}\,e^{-\kappa t_{\rm gate}\hat{n}_{a}/2}. (11)

Note that mode a acquires a phase shift for a photon emission on any neighbouring qubit.

The overall effect of the noise for a given set of photon emission times \mathbf{t} is given by the operator

\tilde{U}_{\mathbf{t}} = \left(\prod_{a=1}^{M}\hat{E}_{a,\mathbf{t}}\right)\prod_{e\in E}\textsc{CROT}_{e_{1},e_{2}} (12)

This equation, derived in Appendix A as Eq. A.1, expresses the noise as an ideal gate followed by a modified photon loss noise operator. We can think of the overall effect of this noise as defining a noise map that acts on the ideal cluster state as follows

\tilde{\Gamma}_{\mathbf{t}}(\rho_{CS}) = \tilde{U}_{\mathbf{t}}\left(|+\rangle\langle+|\right)^{\otimes M}\tilde{U}^{\dagger}_{\mathbf{t}} (13)

where

\rho_{CS} = U\left(|+\rangle\langle+|\right)^{\otimes M}U^{\dagger} (14)

and U is the ideal gate with \kappa=0 and no photon loss events.

This leaves us to determine how to correctly sample a set of emission times \mathbf{t}. We show in Appendix A that the probability distribution for photon emission times is unaffected by the application of the CROT gates. This greatly simplifies our simulations, since the photon emission times can be determined independently for each qubit according to a probability distribution that is independent of the particular choice of G. The details of the sampling of emission times \mathbf{t} are given in Appendix A.2.

II.3 Measurement Based Quantum Computation

Measurement Based Quantum Computation (MBQC) [34, 26] is an alternative to the circuit-based model of quantum computing. In MBQC, computation is performed by preparing an entangled many-body resource state upon which single qubit measurements are performed, realising quantum gates. The resource state, often referred to as a cluster state, is entangled exclusively using C_{Z} gates. The single qubit measurements on the cluster state effectively consume the qubits and drive the computation.

In this work we will use MBQC to implement the 2D surface code [35]. Whilst the 2D surface code is not fault-tolerant, it provides a simpler setting to explore the implications of realistic noise and measurement on code performance than a full 3D simulation.

In this section we will first describe some preliminary notation used for cluster states, before detailing the construction of 1D cluster states. We will then explain how 1D cluster states can be entangled to realise the 2D surface code as a foliation of the repetition code.

II.3.1 Cluster States

A cluster state can be associated to a graph G=(V,E) where vertices represent qubits. Two vertices share an edge if the two corresponding qubits of the cluster state are entangled via a C_{Z} gate. In our model, each vertex of the graph represents an RSB qubit, and vertices share an edge if the qubit pair is entangled via a CROT gate. The cluster state provides the resource for the quantum computation. Computation will be carried out by single qubit measurements, which will be elaborated upon in Sec. II.3.

II.3.2 1D Cluster State Description

We begin by constructing a 1D cluster state and using it to teleport a qubit along the line. For a length M 1D cluster state, we prepare a single qubit in an arbitrary state |\psi\rangle, and the remaining qubits in the |+\rangle state. We then entangle qubits in a line by applying C_{Z} between adjacent qubits. Measuring each of the first M-1 qubits in the X basis then teleports the logical state from the first qubit down the chain to the final qubit.

We can understand the 1D cluster state in terms of its stabilisers, as in [36] for example. Initially, the logical operators describing the 1D chain are simply X_{L}=X_{1}, Z_{L}=Z_{1}, and before the C_{Z} gates the stabiliser group is generated by S=\{X_{2},\dots,X_{M}\}, since qubits 2 through M are prepared in |+\rangle. After applying C_{Z} gates, operators transform as

Z_{i} \rightarrow Z_{i}
X_{i} \rightarrow X_{i}\prod_{j\in\mathcal{N}_{i}}Z_{j} (15)

where \mathcal{N}_{i} denotes all qubits which are entangled to qubit i by a C_{Z} gate. This causes logicals and stabilisers to transform as

Z_{L} = Z_{1}
X_{L} = X_{1}Z_{2}
S = \{Z_{1}X_{2}Z_{3},\dots,Z_{M-1}X_{M}\}. (16)

At the conclusion of the X measurements the first M-1 qubits are left in X eigenstates, and by multiplication with stabilisers we find that the logicals become Z_{L}=Z_{M} and X_{L}=X_{M}, up to signs determined by the measurement outcomes, reflecting teleportation along the chain.
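The transformation rules of Eqs. (15)-(16) can be reproduced with a few lines of binary symplectic bookkeeping. The following is a standalone sketch of ours (signs are ignored, which is safe here since C_{Z} conjugation introduces no sign on these generators):

```python
import numpy as np

def conjugate_by_cz(x, z, i, j):
    """Conjugate the Pauli with X/Z bit vectors (x, z) by CZ on qubits i, j:
    X_i -> X_i Z_j, X_j -> X_j Z_i, and Z unchanged (Eq. 15)."""
    z = z.copy()
    z[j] ^= x[i]
    z[i] ^= x[j]
    return x, z

M = 4
edges = [(i, i + 1) for i in range(M - 1)]    # 1D chain

def propagate(x, z):
    """Push a Pauli through all the C_Z gates of the chain."""
    for a, b in edges:
        x, z = conjugate_by_cz(x, z, a, b)
    return x, z

# push each initial single-qubit X operator through the chain of CZ gates
transformed = []
for i in range(M):
    x = np.zeros(M, dtype=np.uint8)
    z = np.zeros(M, dtype=np.uint8)
    x[i] = 1
    transformed.append(propagate(x, z))
```

Reading off the results for a chain of four qubits: X_{1} becomes X_{1}Z_{2} (the logical X_{L}), an interior X_{i} becomes Z_{i-1}X_{i}Z_{i+1}, and X_{M} becomes Z_{M-1}X_{M}, matching Eq. (16).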

It is useful to consider how loss at any given location on the chain affects the performance of the scheme. Loss on qubit i propagates through the controlled rotation (C_{Z}) to act as dephasing on qubits i\pm 1. In our simulations of this teleportation scheme we assess performance by assuming that the final qubit in the chain has no noise and applying a noise-free recovery to assign a fidelity to the teleportation procedure.

II.3.3 2D Surface Code

In order to construct the 2D surface code in Fig. 2, we start with parallel 1D cluster states, prepared as before, and entangle every odd (primal) qubit of each 1D cluster with the corresponding primal qubit of its neighbouring 1D cluster state, via a dual qubit prepared in the |+\rangle state. Every second qubit of each 1D cluster state is also a dual qubit. The resulting cluster state is depicted in Fig. 4. Measuring every qubit in the X basis realises a measurement of the parity checks of primal stabilisers, given by a product of X on the qubits around a primal plaquette. A primal plaquette stabiliser checks for Z type errors on that plaquette. More explicitly, consider the stabilisers of the cluster state. For a plaquette p of the cluster state, the associated stabiliser is S_{p}=\prod_{e\in p}X_{e}, where e denotes the primal qubits represented by the edges of the given plaquette, as in Fig. 3. To teleport logical information across the 2D foliated state, we measure both primal and dual qubits in the X basis. This also provides stabiliser outcomes, from which we extract the error syndrome.

Figure 2: 1D cluster states foliated into the 2D planar code. Blue qubits are primal, red qubits are dual.
Figure 3: Primal plaquette stabilisers for the 2D planar code for the bulk and rough boundary, respectively. Blue ’X’ indicates X measurements of primal qubits that contribute to the stabiliser. Red circles are dual qubits which do not contribute to the stabiliser measurement.

Having obtained the error syndrome, error correction can be carried out in the usual way [16, 17]. We use a minimum weight perfect matching (MWPM) algorithm to match up pairs of violated stabilisers in the error syndrome in a way that minimises the total distance between pairs of violated stabilisers. The paths between violated stabilisers given by MWPM determine which qubits have experienced errors and therefore give the Pauli correction to which the final logical state will be subject. After the correction has been applied, the simulation can perform a hypothetical logical measurement that determines if the correction has succeeded, or if it has failed and the code has suffered a logical error.
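As an illustration of the matching step, the toy sketch below (ours, using networkx; not the production decoder, and real planar-code matching also includes boundary nodes) pairs up defects by negating distances so that a maximum-weight perfect matching minimises the total pairwise distance:

```python
import networkx as nx

def match_defects(positions):
    """Pair up violated stabilisers so the summed pairwise distance is minimal.

    Negating the weights turns minimum-weight perfect matching into the
    maximum-weight matching problem networkx solves; maxcardinality=True
    forces every defect to be paired.
    """
    g = nx.Graph()
    for i, p in enumerate(positions):
        for j, q in enumerate(positions):
            if i < j:
                g.add_edge(i, j, weight=-abs(p - q))
    matching = nx.max_weight_matching(g, maxcardinality=True)
    return {tuple(sorted(pair)) for pair in matching}

pairs = match_defects([0, 1, 7, 8])   # two well-separated defect pairs
```

In the weighted-decoding variant discussed later, the edge weights would instead be log-likelihood ratios built from the soft measurement information rather than bare distances.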

The model we simulate can either be viewed as the foliated 1D repetition code or, equivalently, a single 2D timeslice of plaquette stabilisers of the surface code. The noise model we adopt for this scheme includes photon loss occurring during gates and measurement errors on single qubit X measurements, the latter of which arise naturally from the realistic measurement model employed. In the commonly used phenomenological error models for the surface code, both these noise sources map to gate noise rather than phenomenological measurement noise, so that we are still able to observe an error correction threshold for this scheme. Note however that there is only a single logical operator that can be corrected. Therefore, while it is possible to learn a lot about the interplay between loss and measurement errors in this code concatenation scheme, a larger scale simulation corresponding to a 2+1D surface code simulation would still be desirable.

Figure 4: A cluster state used to implement the 2D surface code using MBQC. The green oval represents |\psi\rangle_{L}, the initial logical state on the first layer of qubits. The remaining qubits are initialised in the |+\rangle_{L} state.

III Analysis and Numerics

III.1 Single Qubit Phase Measurement

We require X basis measurements of individual qubits for our quantum computation scheme. This measurement translates to a phase-estimation problem based on measurements of a single bosonic mode. Feasible methods for conducting phase measurements are heterodyne measurement and adaptive homodyne (AHD) measurement [29]. We compare the performance of heterodyne and AHD measurement to an ideal canonical phase measurement which would be the optimal choice of measurement if it could be implemented.

All phase measurements can be defined using the general positive operator-valued measure (POVM) [29]

\hat{F}(\phi) = \frac{1}{2\pi}\sum_{n,m=0}^{\infty}e^{i\phi(m-n)}H_{mn}|m\rangle\langle n| (17)

where \phi\in[0,2\pi) is the measurement outcome and H is a Hermitian matrix with real positive entries and H_{mm}=1 for all m, defined explicitly in Appendix B. The quality of the phase measurement depends on the choice of H.

III.1.1 Canonical Phase Measurement

The ideal realisation of a phase measurement of a bosonic mode is simply the projector onto an (unnormalised) phase eigenstate [37]. This measurement is very hard to implement; however, we include it in simulations as a benchmark for the best possible measurement, against which we can compare the other schemes implemented. It has POVM elements

\hat{F}_{\mathrm{c}}(\phi) = \frac{1}{2\pi}|\phi\rangle\langle\phi| (18)

where the phase eigenstate is |\phi\rangle=\sum_{n=0}^{\infty}e^{in\phi}|n\rangle.
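For a pure state with Fock amplitudes c_{n}, the canonical POVM of Eq. (18) gives the phase distribution Pr(\phi)=|\sum_{n}c_{n}e^{-in\phi}|^{2}/2\pi. A short numerical sketch (ours; the example state is an arbitrary uniform superposition):

```python
import numpy as np

def canonical_phase_dist(psi, phis):
    """Pr(phi) = |<phi|psi>|^2 / (2 pi) with |phi> = sum_n e^{i n phi} |n>."""
    n = np.arange(len(psi))
    overlaps = np.exp(-1j * np.outer(phis, n)) @ psi   # <phi|psi> on a grid
    return np.abs(overlaps) ** 2 / (2 * np.pi)

n_phi = 400
phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
psi = np.ones(6) / np.sqrt(6)        # state sharply peaked at phase 0
p = canonical_phase_dist(psi, phis)
```

On a uniform grid the distribution sums to one exactly (the cross terms average out), which makes this a convenient exact reference against which to benchmark the noisier measurements below.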

III.1.2 Heterodyne Measurement

Heterodyne measurement involves simultaneous homodyne measurements of orthogonal quadratures of a bosonic mode, at the cost of adding noise to both quadratures. Heterodyne measurement projects the mode onto the coherent state |\alpha\rangle\langle\alpha|. We can then obtain the phase as \phi=\arg(\alpha). The POVM for heterodyne measurement with outcome \alpha is

\hat{F}_{\mathrm{coh}}(\alpha) = \frac{1}{\pi}|\alpha\rangle\langle\alpha| (19)

and

\hat{F}_{\mathrm{h}}(\phi) = \int_{0}^{\infty}\hat{F}_{\mathrm{coh}}(re^{i\phi})\,r\,dr. (20)
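The heterodyne phase distribution can be evaluated by integrating the coherent-state POVM over the radial direction, keeping the factor of r from the measure d^{2}\alpha = r\,dr\,d\phi that makes the POVM complete. A sketch of ours (grid sizes and the test state are arbitrary choices):

```python
import numpy as np
from math import factorial

def heterodyne_phase_dist(psi, phis, rmax=8.0, nr=2000):
    """Pr(phi) = int_0^rmax |<r e^{i phi}|psi>|^2 / pi * r dr, on a grid.

    Uses <alpha|psi> = e^{-r^2/2} sum_n c_n r^n e^{-i n phi} / sqrt(n!).
    """
    n = np.arange(len(psi))
    fact = np.array([factorial(k) for k in n], dtype=float)
    r = np.linspace(0.0, rmax, nr)
    radial = np.exp(-r[:, None] ** 2 / 2) * r[:, None] ** n / np.sqrt(fact)
    phase = psi[None, :] * np.exp(-1j * np.outer(phis, n))   # (n_phi, dim)
    amp = radial @ phase.T                                   # <alpha|psi>
    integrand = np.abs(amp) ** 2 / np.pi * r[:, None]
    return integrand.sum(axis=0) * (r[1] - r[0])

n_phi = 300
phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
psi = np.ones(4) / 2.0          # simple test state on Fock states 0..3
p_het = heterodyne_phase_dist(psi, phis)
```

Comparing this distribution to the canonical one for the same state makes the extra heterodyne noise visible as a broadened phase peak.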

III.1.3 Adaptive Homodyne Measurement

Adaptive Homodyne Measurement (AHD) [29] is a better-performing alternative to heterodyne measurement. Ordinary homodyne measurement involves the measurement of a single quadrature of the harmonic oscillator mode. It has lower noise than heterodyne measurement but cannot by itself determine the phase of the bosonic mode. In AHD measurement, however, the phase \theta of the local oscillator field is continuously updated based on prior measurement results. This rotates the quadrature being measured, and the scheme is designed to lock in the phase of the field. Various adaptive schemes are possible; we use the Mark II scheme from [29]. The AHD POVM elements are determined by a specific choice of the matrix H_{mn} that appears in Eq. 17, which is defined in Appendix B.

A detailed discussion of both AHD and canonical phase measurement schemes can be found in [29]. An experimental realisation of AHD measurement is described in [38].

III.1.4 Measurement Error

In our scheme phase measurements are used to implement X basis measurements of an RSB code. The phase measurement must be able to resolve angles less than \pi/N in order to distinguish qubit states. Thus, for a fixed phase measurement, it becomes harder to perform qubit measurement as N increases. Neither AHD nor heterodyne measurements provide ideal projective qubit measurements. Both have inherent measurement error which depends on the mean photon number, in addition to the requirement to resolve angles of order \pi/N.

Since in this setting we only require the ability to distinguish the two qubit X-eigenstates based on phase measurement, we will use a Qubit State Inference (QSI) procedure to achieve this. If the inferred value of the qubit state differs from the ‘actual’ state of the qubit determined by our simulation, then that constitutes a qubit measurement error. As we will explain in greater detail below, this QSI procedure can be chosen to partially compensate for the phase errors that occur correlated with photon losses on neighbouring qubits.

In Figs. 5,6 the data for the measurement error rate as a function of mean photon number is generated numerically. We assume γ=0\gamma=0 in order to decouple the measurement error from the noise due to photon loss. We then simulate many shots of qubit measurement and calculate the fraction of shots where the logical state as given by the Qubit State Inference, see Sec. III.2, differs from the actual answer found according to Eq. 39. By varying KK, for fixed N,N, we vary the mean photon number.

Fig. 5 shows the measurement error qq as a function of n¯\bar{n} for heterodyne measurement, for binomial codes of different N.N. We see that for a given qq, codes of higher NN will have a higher mean photon number.

Refer to caption
Figure 5: Measurement error vs \bar{n}=KN/2 for binomial codes with different values of N, using heterodyne measurement.
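The qualitative trend in Fig. 5 can be reproduced with a small calculation. The sketch below is an illustration, not the paper's code: it assumes the common convention |\pm\rangle\propto\sum_{k}\sqrt{\binom{K}{k}}(\pm 1)^{k}|kN\rangle for the binomial X-basis states (which gives \bar{n}=KN/2), the heterodyne phase POVM, and a binning QSI that assigns outcome +1 when \cos(N\phi)>0.

```python
import numpy as np
from math import gamma, factorial, comb

def binomial_plus(N, K):
    """|+> of an (N, K) binomial code in the assumed convention:
    |+> = sum_k sqrt(C(K, k) / 2^K) |kN>, giving nbar = K*N/2."""
    psi = np.zeros(K * N + 1)
    for k in range(K + 1):
        psi[k * N] = np.sqrt(comb(K, k) / 2**K)
    return psi

def heterodyne_binning_error(N, K, grid=4000):
    """Probability that the heterodyne phase of |+> lands in a 'minus' bin."""
    psi = binomial_plus(N, K)
    dim = len(psi)
    idx = np.arange(dim)
    # heterodyne phase POVM matrix H_mn = Gamma((m+n)/2+1)/sqrt(m! n!)
    H = np.array([[gamma((m + n) / 2 + 1) / np.sqrt(factorial(m) * factorial(n))
                   for n in range(dim)] for m in range(dim)])
    rho = np.outer(psi, psi)
    phis = np.arange(grid) * 2 * np.pi / grid
    p = np.array([np.real(np.sum(H * np.exp(1j * (idx[:, None] - idx[None, :]) * f) * rho))
                  for f in phis]) / (2 * np.pi)
    # binning QSI: outcome +1 iff phi is closest to a multiple of 2*pi/N,
    # i.e. cos(N*phi) > 0; the mass in the complementary region is the error
    return np.sum(p[np.cos(N * phis) < 0]) * 2 * np.pi / grid
```

For fixed N, increasing K (and hence \bar{n}) reduces the binning error, matching the downward trend of the curves in Fig. 5.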

Fig. 6 shows measurement error as a function of mean photon number for AHD measurement. Not only is the overall measurement error much lower than in the heterodyne case, as expected, but also the measurement error is more consistent across NN values. As we will see in Sec. III.6, this has a significant impact on threshold values.

Refer to caption
Figure 6: Measurement error vs \bar{n}=KN/2 for binomial codes with different values of N, using AHD measurement.

It is important to note that the ‘measurement error’ we are referring to is measurement error at the level of the bosonic mode. This manifests as Pauli errors of the binomial code qubits, rather than as errors in the binomial code single qubit XX measurement outcomes. Correcting binomial code qubit measurement outcomes would require multiple measurement rounds. Measurement error in our context contributes to Pauli noise independently from noise due to photon loss.

III.2 Qubit State Inference

Once we have obtained the phase of an RSB encoded qubit, we must infer how the phase of the bosonic mode maps onto a \pm 1 X-basis qubit measurement outcome. In this section, we will quantify the performance of such qubit state inference (QSI) techniques in terms of the fidelity of teleportation along a 3-qubit 1D cluster state as described in Sec. II.3.2, using heterodyne measurement for the phase measurement.

III.2.1 Binning Algorithm

The binning algorithm exploits the discrete rotational symmetry of RSB codes to straightforwardly bin the phase measurement outcome in the complex plane, according to the positions of the logical plus and minus states in phase space. Fig. 7 below illustrates the binning algorithm QSI for an N=2 RSB code.

Refer to caption
Figure 7: Plot of binning regions in the complex plane for an N = 2 code.

III.2.2 Maximum Likelihood QSI

For a three qubit 1D cluster state, the maximum likelihood QSI uses the phase measurement outcomes of the first two qubits to simultaneously infer the states of these qubits. It takes the full noise channel into account for each qubit, and therefore is the most accurate of the QSI techniques we examine. The maximum likelihood QSI works by calculating the probability of the two qubits being in states i,ji,j given their phase measurement outcomes ϕ1,ϕ2\phi_{1},\phi_{2} and choosing i,ji,j which maximise this probability.

To design our QSI techniques we analyse their performance relative to a simplified noise model in which the photon loss occurs before the CROT gates. The loss may be commuted past the CROT gates and spreads as dephasing noise on adjacent qubits. This is in contrast to how we model the noise to which the bosonic modes are subject in our full simulations, where loss occurs during the CROT gates, as described in Eq. 14. We use a simpler noise model to design our QSI techniques in order to make them tractable to implement and analyse numerically. Consequently, for the QSI techniques we model photon loss using the order-k Kraus operators

A^k=1k!(1eγ)k/2eγn^2a^k.\displaystyle\hat{A}_{k}=\frac{1}{\sqrt{k!}}(1-e^{-\gamma})^{k/2}e^{\frac{-\gamma\hat{n}}{2}}\hat{a}^{k}. (21)

Here the parameter \gamma describes the loss probability for each photon. It corresponds to \kappa t_{\rm gate} in the full model discussed previously, in which photon losses can occur at any time. The dephasing operators acting on qubit i, obtained by commuting loss on qubit j past \textsc{CROT}_{ij}, are

C^k=eiπkn^iN2.\displaystyle\hat{C}_{k}=e^{\frac{-i\pi k\hat{n}_{i}}{N^{2}}}. (22)
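As a quick numerical check on Eq. 21, the loss Kraus operators can be built from the annihilation operator on a truncated Fock space (the cutoff `dim` below is an arbitrary choice) and verified to satisfy the completeness relation \sum_{k}\hat{A}_{k}^{\dagger}\hat{A}_{k}=\hat{I}:

```python
import numpy as np
from math import factorial

def loss_kraus(gamma, dim):
    """A_k from Eq. 21 as matrices on a truncated Fock space."""
    n = np.arange(dim)
    a = np.diag(np.sqrt(n[1:].astype(float)), k=1)  # annihilation operator
    damp = np.diag(np.exp(-gamma * n / 2))          # e^{-gamma n/2}
    return [(1 - np.exp(-gamma)) ** (k / 2) / np.sqrt(factorial(k))
            * damp @ np.linalg.matrix_power(a, k) for k in range(dim)]
```

Completeness holds exactly on the truncated space because \hat{A}_{k}^{\dagger}\hat{A}_{k} is diagonal, with entries \binom{n}{k}(1-e^{-\gamma})^{k}e^{-\gamma(n-k)} that sum to 1 by the binomial theorem.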

Suppose that the first two qubits in the cluster have measurement outcomes ϕ1,ϕ2\phi_{1},\phi_{2} and we aim to infer qubit states i1,i2{±1}i_{1},i_{2}\in\{\pm 1\} in order to obtain the Pauli correction we need to apply to qubit three. We will define the likelihood function (i1,i2|ϕ1,ϕ2)\mathcal{L}(i_{1},i_{2}|\phi_{1},\phi_{2}) which is given by the conditional probability p(ϕ1,ϕ2|i1,i2)p(\phi_{1},\phi_{2}|i_{1},i_{2}) for the phase measurement outcomes given the initial qubit states.

Each qubit of the cluster is subject to noise due to photon loss, as well as dephasing from photon loss occurring on neighbouring qubits. Therefore

\mathcal{L}(i_{1},i_{2}|\phi_{1},\phi_{2})=\sum_{k,k^{\prime}=0}^{\infty}{\rm Tr}\left[\left(\hat{F}(\phi_{1})\otimes\hat{F}(\phi_{2})\right)\left(\hat{C}_{k^{\prime}}\hat{A}_{k}|i_{1}\rangle\langle i_{1}|\hat{A}^{\dagger}_{k}\hat{C}^{\dagger}_{k^{\prime}}\otimes\hat{C}_{k}\hat{A}_{k^{\prime}}|i_{2}\rangle\langle i_{2}|\hat{A}_{k^{\prime}}^{\dagger}\hat{C}^{\dagger}_{k}\right)\right]. (23)

Explicitly, maximum likelihood QSI chooses i,ji,j giving

argmaxi,j(i,j|ϕ1,ϕ2).\displaystyle\mathrm{argmax}_{i,j}\mathcal{L}(i,j|\phi_{1},\phi_{2}). (24)

For schemes with more qubits the exact maximum likelihood QSI can be formulated in the same way. However it becomes intractable as the number of qubits in the cluster state is increased. For this reason we consider an approximation that can be performed efficiently.

III.2.3 Local Maximum Likelihood QSI

We define a local maximum likelihood QSI that uses a local approximation of the noise channel acting on any given qubit to obtain an efficient approximation for the likelihood function.

The form of the likelihood function \mathcal{L}(i_{a}|\phi), for qubit a being in state i_{a} conditioned upon its phase measurement outcome \phi, depends upon the number of qubits to which the qubit in question is entangled. Suppose that the qubits entangled to a specific qubit a are denoted by l\in\mathcal{N}(a). Let the state of qubit l be i_{l}, and let the state of all the qubits be indicated by \vec{i}. Let \mathcal{S} denote the set containing all vectors \vec{j} of length |\mathcal{N}(a)| with entries 0\leq j_{l}\leq u indicating the number of photon emissions on mode l. Let

ρia\displaystyle{\rho_{i_{a}}} =k=0uA^k|iaia|A^k\displaystyle=\sum_{k=0}^{u}\hat{A}_{k}|i_{a}\rangle\langle i_{a}|\hat{A}_{k}^{\dagger} (25)

and

𝒪a,i\displaystyle\mathcal{O}_{a,\vec{i}} =j𝒮pi(j)C^jρiaC^j\displaystyle=\sum_{\vec{j}\in\mathcal{S}}p_{\vec{i}}(\vec{j})\hat{C}_{\vec{j}}\rho_{i_{a}}\hat{C}^{\dagger}_{\vec{j}} (26)

where the a priori photon emission probabilities are

pi(j)\displaystyle p_{\vec{i}}(\vec{j}) =l𝒩(a)pil(jl)\displaystyle=\prod_{l\in\mathcal{N}(a)}p_{i_{l}}(j_{l})
=l𝒩(a)Tr(A^jl|ilil|A^jl)\displaystyle=\prod_{l\in\mathcal{N}(a)}\mathrm{Tr}(\hat{A}_{j_{l}}|i_{l}\rangle\langle i_{l}|\hat{A}_{j_{l}}^{\dagger}) (27)

and

C^j=l𝒩(a)C^jl\hat{C}_{\vec{j}}=\prod_{l\in\mathcal{N}(a)}\hat{C}_{j_{l}} (28)

Then we define the approximate likelihood function to be

LL(i|ϕ)\displaystyle\mathcal{L}_{LL}(\vec{i}|\phi) =a=1MTr[F^(ϕ)𝒪a,i]\displaystyle=\prod_{a=1}^{M}\mathrm{Tr}\left[\hat{F}(\phi)\mathcal{O}_{a,\vec{i}}\right] (29)

where we truncate the sums over the number of photon losses at u for numerical tractability. Given \mathcal{L}_{LL}(\vec{i}|\phi), the local maximum likelihood QSI works by choosing \vec{i} as follows

argmaxiLL(i|ϕ).\displaystyle\mathrm{argmax}_{\vec{i}}\mathcal{L}_{LL}(\vec{i}|\phi). (30)
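To make Eqs. 25-30 concrete, the following sketch implements the local maximum likelihood decision for a single qubit with one neighbour. It is an illustration only: the heterodyne choice of F(\phi), the binomial-state convention |\pm\rangle\propto\sum_{k}\sqrt{\binom{K}{k}}(\pm 1)^{k}|kN\rangle, and the parameters N=2, K=6, \gamma=0.05, u=2 are all assumptions, not the settings used for the results below.

```python
import numpy as np
from math import gamma, factorial, comb

N, K, GAMMA, U = 2, 6, 0.05, 2   # assumed code and noise parameters
DIM = K * N + 1                  # Fock cutoff (support on |0>..|KN>)
nvec = np.arange(DIM)

def binom_state(sign):
    """|±> = sum_k sqrt(C(K,k)/2^K) (±1)^k |kN> (assumed convention)."""
    psi = np.zeros(DIM)
    for k in range(K + 1):
        psi[k * N] = sign**k * np.sqrt(comb(K, k) / 2**K)
    return psi

# loss Kraus operators A_k (Eq. 21) and dephasing operators C_k (Eq. 22)
aop = np.diag(np.sqrt(nvec[1:].astype(float)), 1)
damp = np.diag(np.exp(-GAMMA * nvec / 2))
A = [(1 - np.exp(-GAMMA)) ** (k / 2) / np.sqrt(factorial(k))
     * damp @ np.linalg.matrix_power(aop, k) for k in range(U + 1)]
C = [np.diag(np.exp(-1j * np.pi * k * nvec / N**2)) for k in range(U + 1)]

# heterodyne phase POVM (an assumed choice of F(phi))
H = np.array([[gamma((m + n) / 2 + 1) / np.sqrt(factorial(m) * factorial(n))
               for n in range(DIM)] for m in range(DIM)])

def F(phi):
    return H * np.exp(1j * (nvec[:, None] - nvec[None, :]) * phi) / (2 * np.pi)

def likelihood(ia, il, phi):
    """Eqs. 25-29 for one qubit a with a single neighbour l in state il."""
    rho = sum(Ak @ np.outer(binom_state(ia), binom_state(ia)) @ Ak.conj().T
              for Ak in A)                                        # Eq. 25
    neigh = np.outer(binom_state(il), binom_state(il))
    O = sum(np.real(np.trace(Aj @ neigh @ Aj.conj().T))           # Eq. 27
            * (Cj @ rho @ Cj.conj().T) for Aj, Cj in zip(A, C))   # Eq. 26
    return np.real(np.trace(F(phi) @ O))

def infer(phi, il=+1):
    """Choose the state of qubit a maximising the local likelihood."""
    return max((+1, -1), key=lambda ia: likelihood(ia, il, phi))
```

With the neighbour taken to be in the +1 state, phases near 0 are decoded as +1 and phases near \pi/2 (the nearest minus-state peak for N=2) as -1.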

III.3 1D Performance Metric

Having used a QSI procedure to determine the states of qubits in the chain up to the final qubit MM, we apply a Pauli operation that depends on these outcomes. In the ideal case by applying the recovery operator to the final qubit, we obtain the original logical state on that qubit. We can write the effect of both the QSI procedure and the recovery operation as a CPTP map (ϕ)\mathcal{R}(\vec{\phi}) where ϕ\vec{\phi} is the set of all M1M-1 measurement outcomes.

The quantum channel representing the overall teleportation operation, including encoding into a linear cluster state chain, photon loss, phase measurement QSI and recovery is

(|ψψ|)=𝑑ϕ(ϕ)Tr1M1[F(ϕ)Γ~(ρψ)](ϕ)\displaystyle\mathcal{E}(|\psi\rangle\langle\psi|)=\int d\vec{\phi}\mathcal{R}(\vec{\phi})\mathrm{Tr}_{1\ldots M-1}\left[F(\vec{\phi})\tilde{\Gamma}(\rho_{\psi})\right]\mathcal{R}^{\dagger}(\vec{\phi}) (31)

where

ρψ=U(|ψψ||++|M1)U\displaystyle\rho_{\psi}=U\left(|\psi\rangle\langle\psi|\otimes|+\rangle\langle+|^{\otimes M-1}\right)U^{\dagger}

and

U=i=1M1CROTi,i+1.U=\prod_{i=1}^{M-1}\textsc{CROT}_{i,i+1}.

The quantum channel \mathcal{E} acts on an initial qubit state |\psi\rangle and maps it to an output qubit state encoded in the final RSB qubit of the cluster. \tilde{\Gamma} is the noise channel commuted past the CROT gates, as defined in Eq. 13, and F(\vec{\phi})=\otimes_{i=1}^{M-1}F(\phi_{i}) are the measurement operators.
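As a small consistency check on the circuit above (assuming the convention \textsc{CROT}_{ij}=e^{i\pi\hat{n}_{i}\hat{n}_{j}/N^{2}} for equal-order codes; sign conventions vary in the literature), the CROT gate reduces to the usual controlled-Z gate for N=1 Fock qubits:

```python
import numpy as np

def crot(N, dim):
    """Two-mode CROT, assuming the convention exp(i pi n1 n2 / N^2)."""
    n = np.arange(dim)
    phases = np.exp(1j * np.pi * np.outer(n, n) / N**2)  # phase on |n1, n2>
    return np.diag(phases.ravel())  # row-major ravel matches kron ordering
```

For N=1 and a two-level truncation, crot(1, 2) is diag(1, 1, 1, -1), i.e. CZ, consistent with the usual qubit cluster state construction.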

We will use entanglement fidelity as the metric to quantify the performance of the 1D cluster state. We follow the standard definition of the entanglement fidelity [39] of a quantum channel by supposing that our system of interest Q, in our case the final qubit of the cluster state, is entangled with a reference ancilla qubit R, and letting

F(ρ,)\displaystyle F(\rho,\mathcal{E}) =RQ|(R)(|RQRQ|)|RQ\displaystyle=\langle RQ|(\mathcal{I}_{R}\otimes\mathcal{E})(|RQ\rangle\langle RQ|)|RQ\rangle (32)

where R\mathcal{I}_{R} denotes the identity channel on the ancilla qubit. |RQ|RQ\rangle may be any maximally entangled state but we choose |RQ=(|00+|11)/2|RQ\rangle=(|00\rangle+|11\rangle)/\sqrt{2}.
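For a channel \mathcal{E} specified by Kraus operators, Eq. 32 can be evaluated directly. The sketch below is an illustration using a generic Kraus-operator representation, not the full teleportation channel of Eq. 31:

```python
import numpy as np

def entanglement_fidelity(kraus):
    """Eq. 32 for a single-qubit channel given by its Kraus operators,
    with |RQ> = (|00> + |11>)/sqrt(2) and the identity on the ancilla."""
    bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    rho = np.outer(bell, bell)
    out = sum(np.kron(np.eye(2), K) @ rho @ np.kron(np.eye(2), K).conj().T
              for K in kraus)
    return float(np.real(bell @ out @ bell))
```

For example, the identity channel gives fidelity 1, while a bit-flip channel with flip probability p gives 1-p.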

III.4 1D Cluster State Results

III.4.1 Numerical Performance of Phase Measurements

We performed simulations of the minimal instance of the binomial encoded 1D cluster state scheme, a 3-qubit cluster state. Fig. 8 below quantifies the performance of the different measurement schemes. We see that both canonical phase and AHD measurement schemes significantly outperform heterodyne measurement. Furthermore, there does not appear to be a notable difference between the performance of AHD and canonical phase measurements, indicating that AHD is a good alternative to the optimal canonical phase measurement. Note that the nonzero infidelity at zero \gamma is due to the inherent measurement error described in Sec. III.1.4: even in the absence of photon loss, the finite mean photon number of the binomial encoded qubits means that measurement error will lead to qubit-level errors.

Refer to caption
Figure 8: Comparison of measurement schemes using log of Infidelity vs γ\gamma, strength of photon loss, for a 3-qubit cluster state with binomial encoded qubits, N=2,K=6N=2,K=6.

III.4.2 Qubit State Inference Comparison

We now examine a numerical comparison of the performance of the binning algorithm QSI, maximum likelihood QSI and local maximum likelihood QSI for the three qubit cluster state telecorrection scheme. These qubit state inference techniques are described in detail in Sec. III.2.

Fig. 9 shows the performance of the three QSI techniques for a 3-qubit cluster state with binomial code qubits, as a function of the loss parameter \gamma. As expected, for no loss the QSI techniques converge to the same value of infidelity. However, as \gamma is increased, the binning algorithm performs significantly worse than both the maximum likelihood and local maximum likelihood QSI techniques. Notably, the performance of maximum likelihood and local maximum likelihood is statistically identical here, indicating that the local maximum likelihood QSI is a good replacement for the full maximum likelihood QSI.

Refer to caption
Figure 9: Comparison of QSI techniques for a 3-qubit cluster state with binomial code qubits with N=2, K=6.

Together these results indicate that adaptive phase measurement combined with a local maximum likelihood QSI is a considerable improvement on the straightforward heterodyne-and-binning approach to qubit measurement for the binomial code.

III.5 2D Surface Code Methods

To enable simulations of the full 2D surface code we need to introduce some additional methods. In the following we will discuss the twirling methods that we used in order to have tractable simulations as well as the details of the surface code decoder that we implemented.

III.5.1 X-Basis Pauli Twirl

For the purpose of numerical tractability when simulating these larger systems, we introduce an X-basis Pauli twirl for the RSB qubits. We show in Appendix C that in the limit \gamma=0 the twirl has no effect on the measurement statistics, and the approximation becomes exact. For non-zero photon loss the twirled noise model is only approximately the same as the physical photon loss model. By removing certain coherences in the photon loss error model, the twirl enables simulations to find the threshold of the planar code. We do not expect that the twirl qualitatively changes our results; avoiding it would require much more expensive simulations, for example based on tensor networks [40].

We define the twirl as a map \mathcal{P}_{X_{i}} applied to each qubit i of the RSB code cluster state. The X-basis Pauli twirl acting on a state \rho is given by

𝒫Xa(ρ)\displaystyle\mathcal{P}_{X_{a}}(\rho) =|++|a+|ρ|+a+||a|ρ|a.\displaystyle=|+\rangle\langle+|_{a}\otimes\langle+|\rho|+\rangle_{a}+|-\rangle\langle-|_{a}\otimes\langle-|\rho|-\rangle_{a}. (33)

Note that while the twirl defined here affects qubit aa, the density matrix ρ\rho describes the state of the whole set of qubits, not all of which are affected by the twirl. The operator |ρ|a\langle-|\rho|-\rangle_{a} acts on the qubits that are not affected by the twirl.
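A direct implementation of Eq. 33 at the qubit level makes the action of the twirl explicit (a sketch with a hypothetical helper, not the simulation code):

```python
import numpy as np

def x_twirl(rho, a, nqubits):
    """Eq. 33: keep the X-basis populations of qubit a and discard its
    X-basis coherences, acting as the identity on all other qubits."""
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    minus = np.array([1.0, -1.0]) / np.sqrt(2)
    out = np.zeros_like(rho, dtype=complex)
    for v in (plus, minus):
        ops = [np.outer(v, v) if i == a else np.eye(2) for i in range(nqubits)]
        proj = ops[0]
        for op in ops[1:]:
            proj = np.kron(proj, op)
        out = out + proj @ rho @ proj
    return out
```

The twirl leaves X-basis populations untouched (an X eigenstate is unchanged) while destroying X-basis coherences: for example, the twirl of |0\rangle\langle 0| is the maximally mixed state.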

Consider the first X measurement, performed on qubit a, with the times and locations of the photon emissions labelled by \mathbf{t}. The state of the remaining qubits for a given phase measurement outcome \phi_{a} will be

ρϕa,𝐭=Tra[F^a(ϕa)Γ~𝐭(ρ)]\rho_{\phi_{a},\mathbf{t}}=\mathrm{Tr}_{a}[\hat{F}_{a}(\phi_{a})\tilde{\Gamma}_{\mathbf{t}}(\rho)] (34)

The norm of this state relates to the probability of the measurement outcome, and in fact we will only be interested in integrals over values of ϕa\phi_{a} that map to a given outcome ±\pm for qubit aa.

In our simulations we replace these states with states where a XX-basis twirl is applied prior to the measurement:

ρ¯ϕa,𝐭=Tra{F^a(ϕa)Γ~𝐭[𝒫Xa(ρ)]}\bar{\rho}_{\phi_{a},\mathbf{t}}=\mathrm{Tr}_{a}\{\hat{F}_{a}(\phi_{a})\tilde{\Gamma}_{\mathbf{t}}[\mathcal{P}_{X_{a}}(\rho)]\} (35)

To see why this replacement could be justified, let M±M_{\pm} denote the POVM elements of the noise-plus-measurement-plus-QSI process that is applied to the ideal RSB code qubits and aggregates the phase measurements into the ±1\pm 1 outcomes for the measurement. The application of the Pauli twirl 𝒫X\mathcal{P}_{X} to this encoded ideal initial state is justified so long as M±M_{\pm} is diagonal in the X-basis of the RSB code qubit since in this case 𝒫X[M±]=M±\mathcal{P}^{\dagger}_{X}[M_{\pm}]=M_{\pm}. For the case that γ=0\gamma=0 we provide a proof that 𝒫X[M±]=M±\mathcal{P}^{\dagger}_{X}[M_{\pm}]=M_{\pm} in the case of the binning QSI in Appendix C.

Note that this replacement is valid only in the case where, as here, we are interested in some averaged quantity for the scheme, such as the average entanglement fidelity of a logical qubit channel. If we inspected the data conditioned on the phase measurements in some more fine-grained way, then this twirling approximation would not necessarily be justified.

This twirl dramatically simplifies our numerical implementation. Suppose that the ideal initial cluster state of the scheme is \rho, so that in particular \rho is in the codespace of the RSB code. Measuring qubit a of an M-qubit cluster state as before will result in the state

ρϕa,𝐭=\displaystyle\rho_{\phi_{a},\mathbf{t}}= Tra{F^a(ϕa)Γ~𝐭[𝒫Xa(ρ)]}\displaystyle\mathrm{Tr}_{a}\{\hat{F}_{a}(\phi_{a})\tilde{\Gamma}_{\mathbf{t}}[\mathcal{P}_{X_{a}}(\rho)]\}
=+|Γa,𝐭[F^a(ϕa)]|+Γ~𝐭/a[+|ρ|+a]\displaystyle=\langle+|\Gamma^{\dagger}_{a,\mathbf{t}}[\hat{F}_{a}(\phi_{a})]|+\rangle\tilde{\Gamma}_{\mathbf{t}/a}[\langle+|\rho|+\rangle_{a}]
+|Γa,𝐭[F^a(ϕa)]|Γ~𝐭/a[|ρ|a]\displaystyle+\langle-|\Gamma^{\dagger}_{a,\mathbf{t}}[\hat{F}_{a}(\phi_{a})]|-\rangle\tilde{\Gamma}_{\mathbf{t}/a}[\langle-|\rho|-\rangle_{a}] (36)

This is a state on M1M-1 qubits. The adjoint noise map on qubit aa is

Γ~a,𝐭()\displaystyle\tilde{\Gamma}_{a,\mathbf{t}}^{\dagger}(\cdot) =E^a,𝐭E^a,𝐭\displaystyle=\hat{E}_{a,\mathbf{t}}^{\dagger}\cdot\hat{E}_{a,\mathbf{t}} (37)

where E^a,𝐭\hat{E}_{a,\mathbf{t}} denotes the loss operator acting on qubit aa with the times and locations of photon emissions given by 𝐭\mathbf{t}. The noise map on all qubits other than aa is

Γ~𝐭/a()\displaystyle\tilde{\Gamma}_{\mathbf{t}/a}(\cdot) =(iaE^i,𝐭)(iaE^i,𝐭)\displaystyle=\left(\prod_{i\neq a}\hat{E}_{i,\mathbf{t}}\right)\cdot\left(\prod_{i\neq a}\hat{E}_{i,\mathbf{t}}\right)^{\dagger} (38)

Eq. III.5.1 simply states that, after the measurement, the remaining cluster state is a mixture of “qubit a was in the logical plus state when measured” and “qubit a was in the logical minus state when measured”, with probabilities

p(\pm|\phi_{a},\mathbf{t})=\frac{\langle\pm|\Gamma^{\dagger}_{a,\mathbf{t}}[\hat{F}_{a}(\phi_{a})]|\pm\rangle}{\mathrm{tr}\{\tilde{\Gamma}^{\dagger}_{\mathbf{t}}[\hat{F}_{a}(\phi_{a})]\tilde{\rho}\}} (39)

where ρ~=𝒫Xa(ρ),\tilde{\rho}=\mathcal{P}_{X_{a}}(\rho), and the denominator of the above expression can be inferred by the normalisation requirement. Note that to obtain this we have used the fact that Tr+|ρ|+a=Tr|ρ|a\mathrm{Tr}\langle+|\rho|+\rangle_{a}=\mathrm{Tr}\langle-|\rho|-\rangle_{a} which is a readily verified property of the ideal cluster state ρ\rho. While we have focussed here on a single measurement, clearly each successive measurement can be treated in the same way since after the first measurement the states ±|ρ|±a\langle\pm|\rho|\pm\rangle_{a} are again ideal cluster states in the RSB code subspace.

In our numerical simulations we sample using these probabilities to determine which state |±±|a|\pm\rangle\langle\pm|_{a} the qubit was in when measured. The Pauli X-basis twirl is implicit in this step of the algorithm. We then compare this outcome to the outcome we get by inferring the qubit state from the phase measurement outcome ϕ\phi. A bit flip error is placed on the qubit if the ‘actual’ state of the qubit disagrees with the inferred measurement outcome.

III.5.2 Modified MWPM Decoder

We can further optimise the performance of RSB codes in our concatenated model by modifying the standard MWPM decoder. The standard MWPM decoder works by taking the error syndromes and forming a complete graph, where edge weights between nodes are given by the Manhattan distance; that is, each edge of the lattice is assigned unit weight. We can use a modified MWPM decoder to assign edge weights based on phase measurement results. We follow the approach of [41] by utilising the information contained in realistic phase measurements, which is discarded when mapping to a binary X-basis measurement outcome. As in [41], we adopt the concepts of hard and soft measurement. In our model, the phase measurement of the RSB qubit gives the ‘soft’ measurement outcome \phi, which is then processed by a QSI technique and mapped onto an observed ‘hard’ outcome \hat{\mu}\in\{+1,-1\}, corresponding to the states |\pm\rangle respectively. Recall the discussion in Sec. III.5.1, in which we stated that after subjecting a qubit of the cluster to the X-basis Pauli twirl, we sample probabilistically according to Eq. 39 to determine whether the qubit was in the state |\pm\rangle\langle\pm|_{i} when measured. We define the ‘ideal’ hard outcome \bar{\mu}\in\{\pm 1\} to be this sampled outcome, which is kept track of in our numerical simulation but is not accessible to the decoder.

During the MWPM algorithm, a complete graph is created from the nodes of the syndrome graph, and distances are then calculated between all possible pairs of nodes. If, for a given node pair, edge weights of the syndrome graph are calculated using soft measurement outcomes, distances cannot be precomputed and must instead be evaluated by a shortest-path computation; this computation must be repeated for each node pair in the complete graph. In contrast, if edges are given fixed unit weight, the distances need only be computed once. We use a hybrid model: for node pairs separated by Manhattan distance greater than \log_{2}(L), edge weights are calculated using observed hard measurement outcomes, whilst for node pairs with d_{\mathrm{Manhattan}}\leq\log_{2}(L), edge weights are calculated using soft measurement outcomes. This provides the advantage of exploiting soft information for a more accurate syndrome decoder, whilst avoiding the long runtime of exhaustively calculating shortest paths between every possible node pair.

Let the soft outcome observed be \phi, the probability distribution function of which, conditioned on the ideal hard outcome \bar{\mu}=\pm 1, is f^{\bar{\mu}}(\phi). For the maximum likelihood and local maximum likelihood QSI techniques, f^{\bar{\mu}}(\phi) is given by Eqs. 23 and 29 respectively. The soft outcome is then mapped onto the observed hard outcome \hat{\mu} according to the hardening map

μ^={+1,f+1(ϕ)>f1(ϕ)1,f1(ϕ)>f+1(ϕ).\displaystyle\hat{\mu}=\begin{cases}&+1,f^{+1}(\phi)>f^{-1}(\phi)\\ &-1,f^{-1}(\phi)>f^{+1}(\phi).\end{cases} (40)

The QSI technique we choose gives the explicit form of the hardening map. In our 2D surface code simulations, the two QSI techniques we employ are the binning algorithm QSI and local maximum likelihood QSI. The full maximum likelihood QSI would be too computationally expensive.

We say that a bit flip occurs on the qubit if the observed hard outcome disagrees with the ideal hard outcome. Again, this information is accessible only to the simulation and not to the decoder.

As mentioned, a standard MWPM decoder proceeds by assigning each edge of the surface code lattice unit weight, w_{e}=1. This is equivalent to assuming that each qubit is equally likely to have experienced an error. However, this assumption is not accurate in general, and the information contained in the soft measurement outcomes can be used to weight the qubits (edges) according to the probability that the observed hard outcome disagrees with the ideal hard outcome, i.e. the probability that the qubit experienced an error. We define the likelihood ratio for a given qubit as

L(ϕ)\displaystyle L(\phi) =fμ^(ϕ)fμ^(ϕ)\displaystyle=\frac{f^{\hat{\mu}^{\prime}}(\phi)}{f^{\hat{\mu}}(\phi)} (41)

where \phi is the soft measurement outcome, \hat{\mu} is the observed hard measurement outcome, and \hat{\mu}^{\prime} is the ‘other’ value of the hard outcome, i.e. \hat{\mu}=+1\implies\hat{\mu}^{\prime}=-1. Edges for which the soft measurement outcome is used to determine the weight are then weighted according to

w\displaystyle w =log[L(ϕ)]\displaystyle=-\log[L(\phi)] (42)

whilst edges whose weights are determined by the observed hard measurement outcome are weighted as

w=log[pe/(1pe)]\displaystyle w=-\log[p_{e}/(1-p_{e})] (43)

where p_{e} is the physical error rate. In our model, the physical error rate is given by the effective bit flip rate due to measurement error and photon loss, and can be calculated numerically: for a given loss rate, we simulate the code subject to loss and average the number of qubits suffering a bit flip to obtain an effective bit flip rate.
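The hardening map of Eq. 40 and the edge weights of Eqs. 42 and 43 can be packaged as follows (a sketch; the densities f^{\pm 1} would come from the chosen QSI technique, and the toy densities used below for illustration are assumptions):

```python
import numpy as np

def hard_outcome_and_weight(f_plus, f_minus, phi):
    """Eqs. 40-42: harden the soft outcome phi and return the soft edge
    weight w = -log[ f^{mu'}(phi) / f^{mu_hat}(phi) ]."""
    a, b = f_plus(phi), f_minus(phi)
    mu_hat = +1 if a > b else -1
    return mu_hat, -np.log(min(a, b) / max(a, b))

def hard_weight(pe):
    """Eq. 43: weight for edges using only the hard outcome."""
    return -np.log(pe / (1 - pe))
```

A soft outcome near the decision boundary gives a likelihood ratio near 1 and hence a weight near zero, so the decoder treats that edge as likely to be in error; p_{e}=0.5 likewise gives a hard weight of zero.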

III.5.3 2D Performance Metrics

We quantify the behaviour of the 2D foliated surface code in terms of its threshold value.

The threshold of a code refers to the error rate below which increasing the size of the code decreases the logical error rate. For realistic computations, it will be necessary to operate in the sub-threshold regime. Therefore, having a high threshold is a desirable property as it reflects a code’s capacity to tolerate noise whilst still being able to do computations.

Finally, we note that we benchmark the performance of the binomial code against the trivial Fock space encoding, as well as binomial codes of different orders of discrete rotational symmetry.

III.5.4 Overview of Numerical Model

We now provide a sketch of the numerical model used.

Implementation of noise: As described in Eq. 14 and in Appendix A, the photon emission times for each qubit can be sampled independently according to a probability distribution that ignores the CROT gates. The outcome of this sampling is an array \mathbf{t} describing the times and locations of photon emissions. Given this, the error operator \hat{E}_{a,\mathbf{t}} for each qubit is determined, and the overall error operation is simply the product of these single-qubit error operators.

Phase measurement: Iterating over the lattice, each qubit is subject to a phase measurement modelled by a rejection sampling algorithm with \phi\in[0,2\pi]. Assuming a phase measurement with POVM \hat{F}(\phi), the probability distribution used in rejection sampling is p_{a}(\phi)=\mathrm{tr}[\hat{F}_{a}(\phi)(\hat{E}_{a,\mathbf{t}}\rho\hat{E}_{a,\mathbf{t}}^{\dagger})], where the error operator for the qubit a being measured is \hat{E}_{a,\mathbf{t}} as defined in Eq. 13, and \rho=\frac{1}{2}(|+\rangle\langle+|+|-\rangle\langle-|) is the maximally mixed state. The initial state of the qubit is taken to be the maximally mixed state, which is the reduced density matrix of any single qubit in a cluster state, as can be shown by a simple stabilizer argument. This probability distribution corresponds to the X-basis twirled phase measurement probability distribution implied by Eq. III.5.1. If the qubit being measured is primal, the measurement outcome is stored in a primal measurement outcome array, and similarly for dual qubits.
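The phase-sampling step above can be sketched with a generic rejection sampler (an illustration; `pmax` is any upper bound on the unnormalised density p_a(\phi)):

```python
import numpy as np

def sample_phase(p, pmax, rng, nsamples):
    """Rejection-sample phi in [0, 2*pi) from an unnormalised density p
    bounded above by pmax: accept a uniform candidate with prob. p/pmax."""
    out = np.empty(nsamples)
    filled = 0
    while filled < nsamples:
        phi = rng.uniform(0.0, 2 * np.pi)
        if rng.uniform(0.0, pmax) < p(phi):
            out[filled] = phi
            filled += 1
    return out
```

Each candidate \phi is drawn uniformly from [0, 2\pi) and accepted with probability p(\phi)/p_{\max}, so the accepted samples follow the normalised version of p.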

Single-qubit decoding: The binning algorithm QSI requires no input other than the soft measurement outcome of the qubit in question, and uses the discrete rotational symmetry of RSB codes to bin the measurement outcome in the complex plane, as previously described. The local maximum likelihood QSI uses the (soft) measurement outcome of each qubit, as well as the (hard) inferred measurement outcomes of neighbouring qubits, to determine the explicit form of the probability distribution p(\phi|i=\pm 1) to be maximised. Hard measurement outcomes are inferred sequentially across the lattice, left to right and top to bottom, so that each qubit already has the inferred outcomes of the neighbours ‘above’ it in the lattice available as input for its own QSI. Once the hard measurement outcome of the qubit has been inferred, for dual qubits we are done. For primal qubits we determine the ‘actual state’ of the qubit as described in Sec. III.5.1. The ‘actual state’ of the primal qubit is compared to the inferred measurement outcome. If they disagree, a bit flip error is placed on that qubit, which is represented by a -1. Otherwise, no error has occurred and the qubit is assigned the value +1.

Error correction: We begin by measuring logical Z before error correction. We choose one of the smooth boundaries; if an odd number of boundary qubits have errors, the logical Z measurement is -1, otherwise it is +1. Error correction proceeds as in Sec. II.3. Stabiliser measurements are modelled by, for each plaquette, multiplying together the stored values of all primal qubits comprising the plaquette. If a plaquette has an odd number of qubits with errors, there will be an odd number of -1 values and the result of the multiplication will be -1; otherwise, it will be +1. Stabilisers with -1 measurement outcomes form the error syndrome. The BlossomV implementation of the MWPM algorithm is then used to pair up nodes in the error syndrome. As we are modelling the planar code, there can be an odd number of -1 stabiliser outcomes, so it is also possible to pair syndrome nodes with the boundary. If the modified MWPM decoder is being used, it calculates weights between pairs of syndrome nodes as described in Sec. III.5.2 to feed into MWPM; otherwise, each edge of the lattice is assigned weight one. The paired syndrome nodes are then used to determine whether a logical error has occurred. We pick the same smooth boundary used for our initial logical Z measurement and, for each pair of syndrome nodes, check whether the path between them crosses this boundary. If an odd number of crossings occur over all pairs, the logical Z measurement is -1, otherwise it is +1. We compare the logical Z measurement before correction to the new logical Z measurement; if they disagree, a logical error has occurred.
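The stabiliser-measurement step described above amounts to multiplying the stored \pm 1 values over each plaquette; a minimal sketch (with a toy plaquette layout, not the actual lattice indexing):

```python
import numpy as np

def syndromes(values, plaquettes):
    """Multiply the stored ±1 values of the primal qubits in each
    plaquette; a product of -1 marks a syndrome node."""
    return [int(np.prod([values[q] for q in plaq])) for plaq in plaquettes]

# toy layout (illustrative indexing only): qubit 2 sits on both plaquettes
plaqs = [(0, 1, 2), (2, 3, 4)]
vals = {q: +1 for q in range(5)}
vals[2] = -1  # a single bit flip lights up both adjacent plaquettes
```

A single flipped qubit lights up exactly the plaquettes that contain it, which is what makes the syndrome graph matchable by MWPM.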

III.6 2D Surface Code Results

In this section we will discuss the results of our simulations of the planar code concatenated with binomial codes under loss errors.

III.6.1 Threshold curves

Refer to caption
Figure 10: Threshold curve for loss errors \gamma=\kappa t_{\rm gate} for heterodyne phase measurement and binning QSI (Method 1). The trivial encoding with measurement error probability q and amplitude damping noise \gamma is compared against the binomial encodings, for different values of N and K. The threshold \gamma for each value of N and K is plotted at the corresponding effective qubit measurement error rate. There is no apparent advantage to the binomial code, and each value of N and K has roughly the same performance.
Refer to caption
Figure 11: The thresholds from Fig. 10 are now plotted against the quantity \bar{n}\cdot\gamma, at the corresponding measurement error q. The probability of a single photon loss scales as \gamma\cdot\bar{n}, showing that higher-N codes do indeed correct more loss errors. The trivial encoding is compared against the binomial encodings, for different values of N and K, and by this measure the trivial encoding is inferior to the binomial codes.

In this section we will refer to codes using heterodyne phase measurement and the binning algorithm QSI as Method 1 Codes, and codes using AHD measurement and local ML QSI as Method 2 Codes. Figs. 10,12,13,14 show the threshold for different codes as a function of measurement error and γ,\gamma, the photon loss rate. Measurement error is related to mean photon number as per Figs. 5,6, and is determined by the code parameters N,KN,K through n¯=(1/2)KN.\bar{n}=(1/2)KN. The thresholds in the plots are determined for each code (fixed by K,NK,N) by sweeping γ\gamma for different code distances.

Fig. 10 shows the threshold for a range of binomial codes against photon loss γ=κtgate\gamma=\kappa t_{\rm gate}. Binomial codes with varying values of NN are compared to the trivial encoding. The data point for a given value of NN and KK is plotted at the effective measurement error that is determined for this code in Sec. III. This allows a comparison of the threshold for the trivial encoding with a given qubit measurement error. In this initial example, Method 1 Codes are used. The most striking feature of this plot is that the trivial encoding significantly outperforms the binomial codes, for all values of N.N. Furthermore, increasing NN does not improve the threshold, as might have been expected. Finally, varying the binomial code parameter K,K, and therefore varying the effective measurement error (for a code of fixed NN), also does not appear to significantly change the threshold.

These results can be understood as follows. Firstly examining Fig. 5 we see that for heterodyne measurement of binomial codes, increasing NN causes a significant jump in the measurement error, for a given KK. This is likely because each higher value of NN requires increased phase resolution for the phase measurement, going as π/N\pi/N. Moreover the total photon loss probability is proportional to mean photon number n¯\bar{n}, which in turn is proportional to NN for binomial codes. Finally the phase uncertainty of the heterodyne phase measurement is proportional to 1/n¯1/\bar{n}. Therefore, the improved tolerance to loss we expect to see when increasing NN for a binomial code is counteracted by the increase in measurement error and the increase in the total photon loss probability, meaning that we do not see an improvement in threshold. As KK decreases, the measurement error increases, meaning that the decreased mean photon number and therefore reduced probability of photon loss is counteracted by the increase in measurement error. This explains why the threshold values for the binomial codes of varying KK roughly lie on a straight line in Fig. 10.

These effects are strongly suggested by Fig. 11, which plots the same data as Fig. 10, except against γn¯\gamma\bar{n} rather than γ\gamma. This effectively normalises the data against mean photon number. We see that binomial codes of higher NN do better by this measure, and furthermore that binomial codes outperform the trivial encoding. Thus if we look at the thresholds against the probability of a single photon emission occurring, γn¯\gamma\bar{n}, rather than against the probability for each photon to be lost, γ\gamma, then indeed there is increased tolerance to photon loss errors for the higher NN codes, although it is the original thresholds measured against γ\gamma that matter in practice.

Figure 12: Threshold for photon loss errors γ\gamma with improved quantum state inference based on heterodyne phase measurement. The trivial encoding for photon loss γ\gamma and measurement error qq is compared against the binomial encoding for different values of NN and KK with each data point being plotted at the effective qubit measurement error rate for that value of NN and KK. Local maximum likelihood quantum state inference is used to process heterodyne phase measurements, showing a performance improvement from switching QSI techniques.

We can improve upon the threshold results from Fig. 10 by using the Local Maximum Likelihood QSI rather than the Binning Algorithm QSI for single qubit measurements. Fig. 12 shows that using the Local ML QSI, whilst still using heterodyne measurement, results in improved threshold values for codes of lower NN. Local ML QSI takes into account an approximation of the local noise experienced by a qubit: the loss to which the qubit itself is subject, and also the dephasing errors that are propagated from loss errors on neighbouring qubits. Crucially, the dephasing errors are proportional to π/N,\pi/N, which is just enough to cause a measurement error on neighbouring qubits. Therefore, the Local ML QSI can be expected to have a significant effect on threshold compared to the Binning Algorithm QSI.

Figure 13: Threshold for photon loss errors γ\gamma with improved phase measurements as well as quantum state inference. The trivial encoding for photon loss γ\gamma and measurement error qq is compared against the binomial encoding for different values of NN and KK with each data point being plotted at the effective qubit measurement error rate for that value of NN and KK. Method 2, which involves both local ML QSI and adaptive phase measurement, is used. Again, we see a performance improvement from changing phase measurement methods.

We can further improve threshold results by using AHD measurement rather than heterodyne measurement. As shown in Fig. 6, for AHD measurement, not only is the overall measurement error much lower than for heterodyne measurement, but also increasing NN increases the measurement error by a much smaller margin. The effects of this on threshold values are shown in Fig. 13. We see significant overall improvements in threshold using AHD measurement, and in particular increasing NN corresponds to an increase in threshold, for a given value of measurement error. As increasing NN does not cause a large increase in measurement error for AHD measurement, the improvement in threshold with NN reflects the increased capacity of higher NN codes to tolerate loss.

Figure 14: Threshold for photon loss errors γ\gamma with modified MWPM decoding. The trivial encoding for photon loss γ\gamma and measurement error qq is compared against the binomial encoding for different values of NN and KK with each data point being plotted at the effective qubit measurement error rate for that value of NN and KK. Method 2, which involves both local ML QSI and adaptive phase measurement, is used. Thresholds of the binomial codes are now competitive with the trivial encoding at comparable measurement error levels.

The final improvement to threshold results comes from using modified MWPM decoding. Modified MWPM decoding utilises the continuous variable nature of bosonic codes to weight error paths and improve the accuracy of the matching step of the MWPM algorithm. Fig. 14 shows that using modified MWPM decoding significantly increases threshold values relative to unweighted MWPM decoding. Interestingly, we see the N=3N=3 and N=2N=2 codes outperforming the N=4N=4 code. For codes of higher NN, it is more difficult to distinguish between |±L|\pm\rangle_{L} codewords as their angular separation in phase space is smaller and, in addition, they have higher mean photon numbers n¯\bar{n}, leading to more photon loss events. Despite the increased loss tolerance of the high NN codes, it appears that N=3N=3 is the optimal NN value when subject to these competing effects.

As we noted above, the measurement errors resulting from imperfect phase measurement depend on the parameters of the binomial code. Higher values of KK offer enhanced phase resolution at the price of increased n¯\bar{n}, as shown in Figs. 5 and 6. For this reason the best choice of code balances better phase resolution with increased loss due to higher mean photon number. For both regular and modified MWPM decoding, this optimum appears to be at a measurement error of approximately 1%1\%. This corresponds to an optimal choice of KK for each value of NN. This behaviour is quite distinct from what we saw in Fig. 10, where there was only a very weak dependence of the threshold on KK. This is likely because the phase resolution in Method 1 is dominated by the poor performance of heterodyne phase measurement, which has phase uncertainty scaling like 1/n¯1/\bar{n}, relative to the AHD phase measurement, which has uncertainty scaling like 1/n¯3/21/\bar{n}^{3/2}. This opens a window over which it is possible to improve performance by increasing phase resolution at the price of increased n¯\bar{n}.

III.6.2 Sub-threshold Behaviour

While we have found that the thresholds against photon loss of the binomial codes are at best only comparable to the trivial encoding, this reflects the behaviour of these codes at high levels of photon loss. Far below threshold, the increased loss tolerance of the binomial codes could result in reduced overhead. This motivates us to look at the sub-threshold performance of these codes by investigating the scaling of logical errors with the distance of the planar code.

Below the code threshold, it is well known [42] that the logical error rate of the code will scale as

pLeαd\displaystyle p_{L}\propto e^{-\alpha d} (44)

where dd is the code distance. We may obtain α\alpha as minus the slope of a plot of log(pL)\mathrm{log}(p_{L}) against d.d. We quantify the performance of the codes in the subthreshold regime by this parameter α.\alpha. A steeper slope corresponds to a faster decay of the logical failure rate with increased code distance. This is desirable as it means that for a fixed target logical failure rate, a code with a larger α\alpha would require a smaller code size.
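As a concrete illustration, α\alpha can be extracted by a least-squares fit of log(pL)\mathrm{log}(p_{L}) against dd. The data below are synthetic, generated with a known slope; they are not our simulation results:

```python
import numpy as np

def subthreshold_alpha(distances, logical_error_rates):
    """Fit log(p_L) = -alpha*d + const and return alpha, the sub-threshold
    decay rate of the logical error rate with code distance d."""
    slope, _intercept = np.polyfit(np.asarray(distances, dtype=float),
                                   np.log(logical_error_rates), 1)
    return -slope

# Synthetic logical error rates generated with alpha = 0.5:
d = np.array([5, 7, 9, 11, 13])
p_L = 0.3 * np.exp(-0.5 * d)
alpha = subthreshold_alpha(d, p_L)   # recovers 0.5
```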

Fig. 15 shows α\alpha vs γ\gamma for binomial codes of N=2,3,4N=2,3,4 for both Method 1 and Method 2 Codes. For Method 1 Codes, we see α\alpha increase as γ\gamma decreases, as expected. We also see that higher NN corresponds to higher α,\alpha, an effect which grows as γ\gamma decreases. This is expected as codes with higher NN are better able to correct loss errors. In the sub-threshold regime where γ\gamma is low, this property of higher NN codes is evident as it is not outweighed by the detrimental effect of higher mean photon number. For Method 2 Codes, we see a similar pattern. In both cases, having higher NN appears to be advantageous. For a given value of γ\gamma, Method 2 Codes have higher α\alpha values than Method 1 Codes, leading us to the conclusion that Method 2 Codes with a higher discrete rotational symmetry number are the best performing codes to use in the sub-threshold regime. Note that when Method 2 is used, the codes have a higher threshold compared to Method 1. Therefore, for a given γ,\gamma, Method 1 Codes will be at a lower fraction of the threshold value than Method 2 Codes. The trivial encoding outperforms both Method 1 and Method 2 Codes in the subthreshold regime.

However again we note that the threshold of the trivial encoding is significantly higher than the binomial codes; therefore, for a given γ\gamma the trivial encoding is at a lower proportion of threshold, so better performance is unsurprising.

The comparison to the trivial encoding in the sub-threshold regime in particular should be interpreted as a helpful benchmark rather than a rigorous prediction of how the trivial encoding would actually perform in this regime. We have not attempted to use a realistic measurement model for the trivial encoding but rather ideal projective measurements with no measurement error. In contrast, we have subjected the binomial codes to realistic models of measurement. This effect is particularly important to note in the sub-threshold regime, where the strength of noise due to photon loss is low and therefore the impact of measurement error is more significant.

Figs. 16 and 17 correct for this by plotting α\alpha against γ\gamma normalised by the threshold value γc.\gamma_{c}. We are most interested in the right hand side of these plots, which is the region in which we are further away from γc,\gamma_{c}, i.e. further below threshold. Sufficiently below threshold, Method 2 Codes outperform their Method 1 counterparts with the same NN values. For N=2,N=2, Method 2 Codes outperform Method 1 at all fractions of threshold. For N=3,4N=3,4, Method 2 Codes do better once more than 30%30\% below threshold. Realistic quantum computers are expected to operate far below threshold, and Method 2 Codes have superior subthreshold scaling in this regime.

Furthermore, codes of higher NN outperform their lower NN counterparts sufficiently far below threshold. For Method 2 Codes, below 50%50\% of threshold it is advantageous to have higher NN. For Method 1 Codes this is true below 80%80\% of threshold. For instance, for the N=4N=4 Method 1 code at γ=0.035,\gamma=0.035, we have α=0.2,\alpha=0.2, whilst for the N=4N=4 Method 2 code, α=0.8\alpha=0.8. This difference would scale up to a significant reduction in overhead in a realistic setting. Consider, for instance, using these codes to achieve a target logical error rate of pL=1010,p_{L}=10^{-10}, which is standard for quantum chemistry algorithms [43]. For the Method 1 N=4N=4 code, this would require a code distance of 115, whilst for the Method 2 code it would require a code distance of 28. Given that the codes are 2D, this amounts to saving over 12000 qubits.
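The overhead estimate can be reproduced up to rounding if the O(1)O(1) prefactor in Eq. (44) is taken to be one (an assumption on our part; the distances quoted above presumably used the fitted prefactor):

```python
import math

def required_distance(alpha, p_target, prefactor=1.0):
    """Smallest code distance d with prefactor * exp(-alpha * d) <= p_target."""
    return math.ceil(math.log(prefactor / p_target) / alpha)

d_small_alpha = required_distance(0.2, 1e-10)   # 116 with unit prefactor
d_large_alpha = required_distance(0.8, 1e-10)   # 29 with unit prefactor
saving = d_small_alpha**2 - d_large_alpha**2    # planar code uses O(d^2) qubits
```

With unit prefactor this gives distances 116 and 29, consistent with the quoted 115 and 28, and a saving of well over 12000 qubits.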

Compared to the trivial encoding we see N=4N=4 codes do better for Method 1 and N=3,4N=3,4 codes do better for Method 2 when the location of threshold is taken into account. This reinforces the advantage obtained by using higher NN codes in the sub-threshold regime, especially for Method 2 codes.

In this comparison we have attempted to factor out the location of the threshold in order to study sub-threshold performance, but note that the location of the threshold will be a crucial consideration in practice. Namely, for codes with a higher threshold, it will be easier to access the sub-threshold regime. Overall the main point here is that codes with lower mean photon number can be highly advantageous, and that the choice of RSB code in a large-scale computation needs careful consideration. The best choice likely depends on details that we have not attempted to capture in this preliminary investigation.

Figure 15: Sub-threshold scaling parameter α\alpha against γ\gamma for both Method 1 and Method 2 binomial codes and the trivial encoding. Higher α\alpha indicates more rapid reduction of logical errors with planar code distance. Method 2 codes significantly outperform Method 1 codes in this regime. Furthermore, codes of higher NN perform better than their lower NN counterparts.
Figure 16: Subthreshold scaling parameter α\alpha against (γcγ)/γc(\gamma_{c}-\gamma)/\gamma_{c} for Method 1 Codes. In this plot the threshold of all codes is at the left and the far-below threshold regime is to the right. The logical errors of higher NN codes are seen to drop more rapidly as γ\gamma falls below threshold.
Figure 17: Subthreshold scaling parameter α\alpha against (γcγ)/γc(\gamma_{c}-\gamma)/\gamma_{c} for Method 2 Codes. In this plot the threshold of all codes is at the left and the far-below threshold regime is to the right. The logical errors of higher NN codes are seen to drop more rapidly as γ\gamma falls below threshold. The Method 2 codes outperform the Method 1 codes.

IV Discussion and Conclusion

In this paper, we have investigated a scheme for quantum computation in which the binomial code was concatenated with the surface code and realised by MBQC. We subjected this scheme to a realistic model of photon loss and compared several methods of phase measurement and qubit state inference. We also incorporated the soft information available from phase measurements into the decoding process by using a hybrid modified MWPM algorithm decoder. Using these techniques, we found phase diagrams of code thresholds as a function of loss γ\gamma and measurement error qq, for codes of varying orders of discrete rotational symmetry NN. Finally, we investigated the performance of these codes in the subthreshold regime.

A key conclusion of our analysis is the impact of the quality of phase measurement on the performance of binomial codes. As shown in Fig. 5, increasing the order of discrete rotational symmetry NN, and therefore increasing the mean photon number, significantly increases the effective measurement error for heterodyne measurement. This counteracts the advantage of increased tolerance to photon loss possessed by codes of higher NN. This effect can be clearly seen in Figs. 10 and 12, in which, for a fixed N,N, varying the measurement error by changing the binomial code parameter KK gives a roughly flat line in γ.\gamma. This effect arises because the probability of a photon loss event is roughly linear in NN while the phase resolution of the heterodyne measurement is roughly inversely proportional to NN. The phase resolution of heterodyne measurement does not grow sufficiently with increased mean photon number to counteract the increased rate of photon loss events in that limit.

It is therefore necessary to use an adaptive homodyne phase measurement to realise the full power of the binomial codes. The improved phase resolution of adaptive homodyne schemes enables the increased loss tolerance of higher NN codes to ‘win out,’ resulting in higher thresholds against loss.

The other main performance improvement came from using modified MWPM decoding rather than unweighted MWPM decoding. By using the soft information of the continuous variable measurement outcomes, we were able to achieve a threshold against loss of approximately 20%20\%, compared to approximately 14%14\% using unweighted MWPM. This is not only a significant improvement, but a notably high threshold against loss.

An important consideration is our comparison of the performance of binomial codes to the trivial encoding. We find that even our best scheme for phase measurement and quantum state inference results in thresholds that are at best comparable to those of the trivial encoding with some measurement error probability. The trivial encoding has a maximum of a single photon per qubit, and therefore performs relatively well against loss because it has a very low mean photon number. We believe the conclusion that should be drawn is that which error correcting code is best in a given situation will depend on details of the noise model of the physical system that are beyond the scope of this work. When modelling the trivial encoding we assumed ideal measurements and added the measurement error as classical noise on the measurement outcome. A more accurate comparison would simulate realistic measurement for the trivial encoding. Whether or not the trivial encoding does actually outperform the binomial code would depend on the noise of such a detailed implementation. One way of making a comparison between the trivial and binomial codes in these simulations is to note that the threshold against loss that we found corresponds to a trivial encoding with a measurement error rate of around 1%1\%.

Finally, we studied the suppression of logical errors below threshold. In this case we do see that the binomial codes are able to suppress logical errors more rapidly with increasing NN when the photon loss rate is low enough.

Our analysis has shown that for the binomial code concatenated with the surface code, the performance is highly dependent upon accurate phase measurement, qubit state inference and decoding. All of these features need to be well chosen for the apparent advantage of binomial codes in tolerating photon loss errors to be realised in practice.

Acknowledgements

We thank Ben Brown for his contributions to the initial stages of this project. We acknowledge support from Australian Research Council via the Centre of Excellence in Engineered Quantum Systems (CE170100009). We acknowledge the traditional owners of the land on which this work was undertaken at the University of Sydney, the Gadigal people of the Eora Nation.

Appendix A Quantum Trajectory Simulation of Loss

A.1 Justification of independent sampling of cluster qubits

We aim to produce the state corresponding to the graph G=(V,E)G=(V,E), which has MM vertices in total. This can be produced by applying a CROT gate to all pairs of qubits that share an edge of GG:

U=eECROTe1,e2=eiHCROTtgate\displaystyle U=\prod_{e\in E}\textsc{CROT}_{e_{1},e_{2}}=e^{-iH_{\textsc{CROT}}t_{\rm gate}} (45)

where

HCROT\displaystyle H_{\textsc{CROT}} =ΩeEn^e1n^e2\displaystyle=-\Omega\sum_{e\in E}\hat{n}_{e_{1}}\otimes\hat{n}_{e_{2}} (46)

and tgate=π/N2Ω.t_{\rm gate}=\pi/N^{2}\Omega.

We aim to simulate photon losses occurring at random times during the implementation of these gates using a quantum trajectories approach, as per [44]. Having found an analytic expression for these quantum trajectories we show by analysing the norm of the full evolution of the cluster that loss times can be sampled independently for each qubit of the cluster.

According to quantum trajectory theory the evolution of the quantum state between photon emissions is given by the non-Hermitian Hamiltonian

ddt|ψ~(t)\displaystyle\frac{d}{dt}|\tilde{\psi}(t)\rangle =iHeff|ψ~(t)\displaystyle=-iH_{\rm eff}|\tilde{\psi}(t)\rangle
=(iΩeGn^e1n^e2κ2a=1Mn^a)|ψ~(t)\displaystyle=\left(-i\Omega\sum_{e\in G}\hat{n}_{e_{1}}\otimes\hat{n}_{e_{2}}-\frac{\kappa}{2}\sum_{a=1}^{M}\hat{n}_{a}\right)|\tilde{\psi}(t)\rangle (47)

where the tilde indicates that the state is unnormalised.

Suppose that kk photons in total have been emitted during the gate time tgatet_{\rm gate} of the CROT gate. The jjth photon emission removes a photon from the mode αj\alpha_{j} with j=1,,kj=1,\ldots,k. The time between the jjth and (j+1)(j+1)th photon emissions is Δj\Delta_{j}. (Δk=tgatetk\Delta_{k}=t_{\rm gate}-t_{k} where tkt_{k} is the final photon emission time.)

As in the main text the vector 𝐭a\mathbf{t}_{a} is a temporally ordered list of photon emission times for mode aa. The number of emissions for mode aa is ja=|𝐭a|j_{a}=|\mathbf{t}_{a}|. The full set of emission times is recorded in the array 𝐭={𝐭1,,𝐭M}\mathbf{t}=\{\mathbf{t}_{1},\ldots,\mathbf{t}_{M}\}. We have k=ajak=\sum_{a}j_{a}.

Trajectory theory states that the non-Hermitian time evolution operator corresponding to the noisy gate with the specified photon emission times is

U~𝐭=κkeiHeffΔkaαkeiHeffΔ1aα1eiHefft1.\tilde{U}_{\mathbf{t}}=\sqrt{\kappa}^{k}e^{-iH_{\rm eff}\Delta_{k}}a_{\alpha_{k}}...e^{-iH_{\rm eff}\Delta_{1}}a_{\alpha_{1}}e^{-iH_{\rm eff}t_{1}}. (48)

We will make use of the following identities [25] to simplify this expression,

ecn^a^k\displaystyle e^{c\hat{n}}\hat{a}^{k} =ecka^kecn^\displaystyle=e^{-ck}\hat{a}^{k}e^{c\hat{n}} (49)
eicn^1n^2a^1k\displaystyle e^{ic\hat{n}_{1}\otimes\hat{n}_{2}}\hat{a}_{1}^{k} =eickn^2a^1keicn^1n^2.\displaystyle=e^{-ick\hat{n}_{2}}\hat{a}_{1}^{k}e^{ic\hat{n}_{1}\otimes\hat{n}_{2}}. (50)
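These identities can be checked numerically on a truncated Fock space. All the exponentials involved are of operators diagonal in the number basis, so plain NumPy suffices, and the truncation is harmless because a^k\hat{a}^{k} only lowers photon number (this check is ours, not part of the derivation):

```python
import numpy as np

dim, d2, c, k = 8, 5, 0.7, 2
a = np.diag(np.sqrt(np.arange(1, dim)), 1)     # truncated annihilation operator
ak = np.linalg.matrix_power(a, k)
Ecn = np.diag(np.exp(c * np.arange(dim)))      # exp(c n), diagonal

# Eq. (49): exp(c n) a^k = exp(-c k) a^k exp(c n)
eq49_holds = np.allclose(Ecn @ ak, np.exp(-c * k) * ak @ Ecn)

# Eq. (50): exp(ic n1 (x) n2) a1^k = exp(-ic k n2) a1^k exp(ic n1 (x) n2)
n1n2 = np.kron(np.arange(dim), np.arange(d2))  # spectrum of n1 (x) n2
n2 = np.kron(np.ones(dim), np.arange(d2))      # spectrum of I (x) n2
E12 = np.diag(np.exp(1j * c * n1n2))
En2k = np.diag(np.exp(-1j * c * k * n2))
a1k = np.kron(ak, np.eye(d2))                  # a1^k = a^k (x) I
eq50_holds = np.allclose(E12 @ a1k, En2k @ a1k @ E12)
```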

One can show using induction that the time evolution operator corresponding to this sequence of photon loss events is

U~𝐭\displaystyle\tilde{U}_{\mathbf{t}} =κkeκm=1kΔmm/2eiΩm=1k(l=mkΔl)b𝒩(αm)n^b(m=1ka^αm)eiHefftgate\displaystyle=\sqrt{\kappa}^{k}e^{\kappa\sum_{m=1}^{k}\Delta_{m}m/2}e^{i\Omega\sum_{m=1}^{k}(\sum_{l=m}^{k}\Delta_{l})\sum_{b\in\mathcal{N}(\alpha_{m})}\hat{n}_{b}}\left(\prod_{m=1}^{k}\hat{a}_{\alpha_{m}}\right)e^{-iH_{\rm eff}t_{\rm gate}}
=κka=1M[eκτ(𝐭a)/2(b𝒩(a)eiΩtm𝐭b(tm+1tm)mn^a)a^aja]eiHefftgate\displaystyle=\sqrt{\kappa}^{k}\prod_{a=1}^{M}\left[e^{\kappa\tau(\mathbf{t}_{a})/2}\left(\prod_{b\in\mathcal{N}(a)}e^{i\Omega\sum_{t_{m}\in\mathbf{t}_{b}}(t_{m+1}-t_{m})m\hat{n}_{a}}\right)\hat{a}_{a}^{j_{a}}\right]e^{-iH_{\rm eff}t_{\rm gate}}
=(a=1ME^a,𝐭)U.\displaystyle=\left(\prod_{a=1}^{M}\hat{E}_{a,\mathbf{t}}\right)U. (51)

The operators E^a,𝐭\hat{E}_{a,\mathbf{t}} are as defined in the main text. We have used the fact that tm𝐭a(tm+1tm)m=jatgatetm𝐭atm=τ(𝐭a)\sum_{t_{m}\in\mathbf{t}_{a}}(t_{m+1}-t_{m})m=j_{a}t_{\rm gate}-\sum_{t_{m}\in\mathbf{t}_{a}}t_{m}=\tau(\mathbf{t}_{a}).

This expression gives the error process as the product of ideal CROT gates followed by photon losses on each qubit. We used this representation of the errors in the main text and also in our simulations. However for the simulations it is necessary to be able to sample accurately from the distribution over photon loss events 𝐭\mathbf{t}. To compute the probability density for this pattern of emissions 𝐭\mathbf{t} it is helpful to also be able to write the noise operator as photon emissions followed by the ideal gate. We can rearrange the expression above as follows

U~𝐭=κkeκa=1Mτ(𝐭a)/2U[a=1M(b𝒩(a)eiΩ[τ(𝐭b)jbtgate]n^a)](a=1Ma^aja)eκtgateaMn^a/2\displaystyle\tilde{U}_{\mathbf{t}}=\sqrt{\kappa}^{k}e^{\kappa\sum_{a=1}^{M}\tau(\mathbf{t}_{a})/2}U\left[\prod_{a=1}^{M}\left(\prod_{b\in\mathcal{N}(a)}e^{i\Omega\left[\tau(\mathbf{t}_{b})-j_{b}t_{\rm gate}\right]\hat{{n}}_{a}}\right)\right]\left(\prod_{a=1}^{M}\hat{a}_{a}^{j_{a}}\right)e^{-\kappa t_{\rm gate}\sum_{a}^{M}\hat{n}_{a}/2} (52)

We now consider the inner product

+|MU~𝐭U~𝐭|+M\displaystyle\langle+|^{\otimes M}\tilde{U}_{\mathbf{t}}^{\dagger}\tilde{U}_{\mathbf{t}}|+\rangle^{\otimes M} =κkeκa=1Mτ(𝐭a)+|MeκtgateaMn^a/2(a=1Ma^aja)(a=1Ma^aja)eκtgateaMn^a/2|+M\displaystyle=\kappa^{k}e^{\kappa\sum_{a=1}^{M}\tau(\mathbf{t}_{a})}\langle+|^{\otimes M}e^{-\kappa t_{\rm gate}\sum_{a}^{M}\hat{n}_{a}/2}\left(\prod_{a=1}^{M}\hat{a}^{\dagger j_{a}}_{a}\right)\left(\prod_{a=1}^{M}\hat{a}_{a}^{j_{a}}\right)e^{-\kappa t_{\rm gate}\sum_{a}^{M}\hat{n}_{a}/2}|+\rangle^{\otimes M}
=κkeκa=1Mτ(𝐭a)+|MeκtgateaMn^a/2[a=1M:(a^aa^a)ja:]eκtgateaMn^a/2|+M\displaystyle=\kappa^{k}e^{\kappa\sum_{a=1}^{M}\tau(\mathbf{t}_{a})}\langle+|^{\otimes M}e^{-\kappa t_{\rm gate}\sum_{a}^{M}\hat{n}_{a}/2}\left[\prod_{a=1}^{M}:\left(\hat{a}_{a}^{\dagger}\hat{a}_{a}\right)^{j_{a}}:\right]e^{-\kappa t_{\rm gate}\sum_{a}^{M}\hat{n}_{a}/2}|+\rangle^{\otimes M}
=a=1Mκjaeκτ(𝐭a)+|eκtgaten^a/2:(a^aa^a)ja:eκtgaten^a/2|+\displaystyle=\prod_{a=1}^{M}\kappa^{j_{a}}e^{\kappa\tau(\mathbf{t}_{a})}\langle+|e^{-\kappa t_{\rm gate}\hat{n}_{a}/2}:(\hat{a}^{\dagger}_{a}\hat{a}_{a})^{j_{a}}:e^{-\kappa t_{\rm gate}\hat{n}_{a}/2}|+\rangle (53)

where :::: indicates the normally ordered product of operators and so

:(a^a^)k:\displaystyle:(\hat{a}^{\dagger}\hat{a})^{k}: =(a^)ka^k\displaystyle=(\hat{a}^{\dagger})^{k}\hat{a}^{k} (54)

Eq. (53) shows that the probability for obtaining a given pattern of photon emissions is a simple product distribution over the modes. This greatly simplifies the task of drawing random samples 𝐭\mathbf{t} of photon emission events during the CROT gate. Moreover, it is clear that the probability distribution has no dependence on Ω\Omega. A simple calculation shows that in fact the photon emission probabilities are the same as for uncoupled modes experiencing photon loss at rate κ\kappa for a time tgatet_{\rm gate} and no other dynamics. This justifies the approach of independently sampling loss times for the cluster qubits that is described in detail in the following subsection.

A.2 Sampling Algorithm

Given the result on the statistics of the photon emissions, we simplify the numerical recipe by iterating across the qubits of the cluster and sampling from each qubit independently as follows:
1. Generate a uniform random number RR.
2. Iteratively solve the Schrödinger equation

ddt|ψ~(t)\displaystyle\frac{d}{dt}|\tilde{\psi}(t)\rangle =κ2n^|ψ~(t).\displaystyle=-\frac{\kappa}{2}\hat{n}|\tilde{\psi}(t)\rangle. (55)

3. Continue step 2 until a time TT at which

ψ~(T)|ψ~(T)\displaystyle\langle\tilde{\psi}(T)|\tilde{\psi}(T)\rangle =+|en^μκT|+<R.\displaystyle=\langle+|e^{-\hat{n}_{\mu}\kappa T}|+\rangle<R.

Apply the jump operator c^μ=κa^μ\hat{c}_{\mu}=\sqrt{\kappa}\hat{a}_{\mu}, renormalise the state.
4. Repeat steps 1-3 until t=tgate.t=t_{\rm gate}.

The output of this sampling is a particular set of photon emissions 𝐭\mathbf{t}. The noisy state is then obtained by applying the noise operator E^a,𝐭\hat{E}_{a,\mathbf{t}} to each qubit aa.
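A sketch of this per-qubit sampler (the function names are ours; we track only the Fock occupation probabilities of the mode, which suffices here because the no-jump evolution in step 2 is diagonal in photon number):

```python
import numpy as np

def sample_emission_times(fock_probs, kappa, t_gate, rng):
    """Sample photon-emission times for one mode on [0, t_gate) by the recipe
    above: draw R, evolve until the squared norm drops below R, record a jump,
    condition the state, and repeat until the gate time is exhausted."""
    p = np.asarray(fock_probs, dtype=float).copy()
    n = np.arange(len(p))
    times, t = [], 0.0
    while True:
        R = rng.uniform()
        survival = lambda T: float(np.sum(p * np.exp(-n * kappa * T)))
        if survival(t_gate - t) >= R:          # norm never drops below R
            break
        lo, hi = 0.0, t_gate - t               # bisect for <psi|psi>(T) = R
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if survival(mid) >= R else (lo, mid)
        t += hi
        times.append(t)
        p *= np.exp(-n * kappa * hi)           # no-jump evolution up to the jump
        p = np.roll(n * p, -1)                 # jump: P(n) -> (n+1) P(n+1) ...
        p[-1] = 0.0
        p /= p.sum()                           # ... then renormalise
    return times
```

As shown above, the resulting emission statistics are those of an uncoupled mode decaying at rate κ\kappa, independent of Ω\Omega.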

Appendix B POVMs for AHD Measurement

We follow the presentation of [30] in giving the theoretical details of AHD POVMs as follows

HmnAHD\displaystyle H_{mn}^{AHD} =p=0m2q=0n2γm,pγn,qCp,q(m,n)\displaystyle=\sum_{p=0}^{\left\lfloor{\frac{m}{2}}\right\rfloor}\sum_{q=0}^{\left\lfloor{\frac{n}{2}}\right\rfloor}\gamma_{m,p}\gamma_{n,q}C^{(m,n)}_{p,q} (56)

where

γm,p\displaystyle\gamma_{m,p} =m!2p(m2p)!p!\displaystyle=\frac{\sqrt{m!}}{2^{p}(m-2p)!p!} (57)
Cp,q(n,m)\displaystyle C^{(n,m)}_{p,q} =l=0l=0(nm2l)(mn2l)Mp+l,q+l\displaystyle=\sum_{l=0}^{\infty}\sum_{l^{\prime}=0}^{\infty}\binom{\frac{n-m}{2}}{l}\binom{\frac{m-n}{2}}{l^{\prime}}M_{p+l,q+l^{\prime}} (58)

and

(αn)\displaystyle\binom{\alpha}{n} =k=1nαk+1k\displaystyle=\prod_{k=1}^{n}\frac{\alpha-k+1}{k} (59)

are generalised binomial coefficients. The Mm,nM_{m,n} are recursively defined

Mm,n\displaystyle M_{m,n} =nMn1,m+mMn,m12(nm)2+n+m\displaystyle=\frac{nM_{n-1,m}+mM_{n,m-1}}{2(n-m)^{2}+n+m} (60)
Mn,0\displaystyle M_{n,0} =M0,n=1(2n+1)!!.\displaystyle=M_{0,n}=\frac{1}{(2n+1)!!}. (61)
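The recursion in Eqs. (60) and (61) can be implemented directly with memoisation (the helper `double_factorial` is ours):

```python
from functools import lru_cache

def double_factorial(n):
    """n!! = n * (n-2) * ... down to 1 or 2."""
    out = 1
    while n > 1:
        out *= n
        n -= 2
    return out

@lru_cache(maxsize=None)
def M(m, n):
    """Matrix elements M_{m,n} of Eqs. (60)-(61).  Note the index order in the
    recursion: M_{m,n} is built from M_{n-1,m} and M_{n,m-1}."""
    if m == 0 or n == 0:
        return 1.0 / double_factorial(2 * max(m, n) + 1)
    return (n * M(n - 1, m) + m * M(n, m - 1)) / (2 * (n - m) ** 2 + n + m)
```

For example `M(1, 1) == 1/3`, and the matrix is symmetric, since the recursion preserves the symmetry of the boundary condition.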

Appendix C X Basis Pauli Twirl

In the following we will show the invariance of the probability distribution of qubit measurement outcomes under an X-basis Pauli twirl in the case γ=0\gamma=0.

We start by defining the measurement operators M±M_{\pm} as described in the main text in the limit of no photon loss. Given a phase measurement F^(ϕ)\hat{F}(\phi) the POVM elements for qubit measurement are as follows

M±,a,𝐭=ϕ±𝑑ϕF^(ϕ)\displaystyle M_{\pm,a,\mathbf{t}}=\int_{\phi\rightarrow\pm}d\phi\hat{F}(\phi) (62)

The integral over ϕ\phi is determined by the QSI technique resulting in the measurement outcome either being binned as +1+1 or 1-1. We will restrict our attention here only to a pure binning QSI.

We will show that

+|M±|=0=|M±|+\displaystyle\langle+|M_{\pm}|-\rangle=0=\langle-|M_{\pm}|+\rangle (63)

where |±|\pm\rangle are the XX-eigenstates of our RSB code.

If we perform this measurement on a qubit aa that is part of an RSB code cluster state ρ\rho, then this property is sufficient to ensure that the conditioned state satisfies

ρa,±=Tra[Ma,±ρ]=Tra[Ma,±𝒫Xa(ρ)].\rho_{a,\pm}=\mathrm{Tr}_{a}[M_{a,\pm}\rho]=\mathrm{Tr}_{a}[M_{a,\pm}\mathcal{P}_{X_{a}}(\rho)]. (64)

This identity means performing an XX-basis twirl on qubit aa does not affect either the probability of the measurement outcome or the post-measurement state.

Before making this calculation we recall the following definitions. An arbitrary phase measurement can be represented by the general POVM [29]

F^(ϕ)\displaystyle\hat{F}(\phi) =12πn,m=0ei(mn)ϕHmn|mn|\displaystyle=\frac{1}{2\pi}\sum_{n,m=0}^{\infty}e^{i(m-n)\phi}H_{mn}|m\rangle\langle n| (65)

where HmnH_{mn} is a Hermitian matrix with real positive entries, Hmm=1H_{mm}=1 for all mm and ϕ[0,2π].\phi\in[0,2\pi]. For a rotation symmetric bosonic code [25]

|+\displaystyle|+\rangle =12k=0fk|kN\displaystyle=\frac{1}{\sqrt{2}}\sum_{k=0}f_{k}|kN\rangle
|\displaystyle|-\rangle =12k=0fk(1)k|kN\displaystyle=\frac{1}{\sqrt{2}}\sum_{k=0}f_{k}(-1)^{k}|kN\rangle (66)

where the fkf_{k} are real coefficients. The fkf_{k} satisfy the following normalisation conditions

kevenfk2=1=koddfk2.\sum_{k\ \mathrm{even}}f_{k}^{2}=1=\sum_{k\ \mathrm{odd}}f_{k}^{2}.

Suppose that we are using the binning algorithm QSI. There are 2N2N binning regions centered at πl/N\pi\cdot l/N for l{0,1,,2N1}.l\in\{0,1,...,2N-1\}. Each binning region spans the angles π(l1/2)/Nπ(l+1/2)/N\pi(l-1/2)/N\rightarrow\pi(l+1/2)/N where even ll corresponds to +1+1 bins and odd ll corresponds to 1-1 bins. The POVM for ‘measuring a ±1\pm 1 X basis outcome’ is therefore given by the sum over binning regions for even (odd) ll, respectively. We will now integrate the term with ϕ\phi dependence from Eq. 62, ei(kk)Nϕ,e^{i(k-k^{\prime})N\phi}, over the binning regions

ϕ+1𝑑ϕei[(kk)N]ϕ\displaystyle\int_{\phi\rightarrow+1}d\phi e^{i[(k-k^{\prime})N]\phi} =l=0l even2N1π(l1/2)/Nπ(l+1/2)/Nei(kk)Nϕ𝑑ϕ\displaystyle=\sum_{l=0\textrm{, $l$ even}}^{2N-1}\int_{\pi(l-1/2)/N}^{\pi(l+1/2)/N}e^{i(k-k^{\prime})N\phi}d\phi (67)
ϕ1𝑑ϕei[(kk)N]ϕ\displaystyle\int_{\phi\rightarrow-1}d\phi e^{i[(k-k^{\prime})N]\phi} =l=0l odd2N1π(l1/2)/Nπ(l+1/2)/Nei(kk)Nϕ𝑑ϕ.\displaystyle=\sum_{l=0\textrm{, $l$ odd}}^{2N-1}\int_{\pi(l-1/2)/N}^{\pi(l+1/2)/N}e^{i(k-k^{\prime})N\phi}d\phi. (68)

Consider the case kkk\neq k^{\prime}

c_{k,k^{\prime},l} = \int_{\pi(l-1/2)/N}^{\pi(l+1/2)/N} e^{i(k-k^{\prime})N\phi}\, d\phi = \frac{2}{(k-k^{\prime})N}\cos[(k-k^{\prime})\pi l]\sin[(k-k^{\prime})\pi/2]. \qquad (69)

This is zero whenever $k-k^{\prime}$ is even, and is an even function of $k-k^{\prime}$ when $k-k^{\prime}$ is odd.
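These closed forms, Eqs. (69) and (70), can be spot-checked numerically against direct integration of $e^{i(k-k^{\prime})N\phi}$ over a bin. The sketch below (the values of $N$ and the index ranges are arbitrary test choices) uses a simple midpoint rule:

```python
import numpy as np

# Spot-check of Eqs. (69)-(70): closed form for c_{k,k',l} versus direct
# numerical integration over the bin [pi(l-1/2)/N, pi(l+1/2)/N].
# N and the tested index ranges are illustrative.
def c_closed(k, kp, l, N):
    if k == kp:
        return np.pi / N                     # Eq. (70)
    d = k - kp
    return 2.0 / (d * N) * np.cos(d * np.pi * l) * np.sin(d * np.pi / 2)

def c_numeric(k, kp, l, N, steps=50000):
    lo, hi = np.pi * (l - 0.5) / N, np.pi * (l + 0.5) / N
    phi = lo + (np.arange(steps) + 0.5) * (hi - lo) / steps  # midpoint rule
    return np.sum(np.exp(1j * (k - kp) * N * phi)) * (hi - lo) / steps

N = 3
for k in range(3):
    for kp in range(3):
        for l in range(2 * N):
            assert abs(c_numeric(k, kp, l, N) - c_closed(k, kp, l, N)) < 1e-6
            if (k - kp) % 2 == 0 and k != kp:
                assert abs(c_closed(k, kp, l, N)) < 1e-12  # even difference vanishes
print("closed form matches numerical integration")
```

Note that the integral is real even before taking the closed form: the phase factor $e^{i(k-k^{\prime})\pi l}$ reduces to $(-1)^{(k-k^{\prime})l}$ for integer indices.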

In the case $k=k^{\prime}$ we have

c_{k,k,l} = \int_{\pi(l-1/2)/N}^{\pi(l+1/2)/N} d\phi = \frac{\pi}{N}. \qquad (70)

Now we can evaluate the overlap $\langle+|M_{+}|-\rangle$:

\langle+|M_{+}|-\rangle = \frac{1}{4\pi}\sum_{l=0,\ l\ \mathrm{even}}^{2N-1}\sum_{k,k^{\prime}} \left(\int_{\pi(l-1/2)/N}^{\pi(l+1/2)/N} e^{i(k-k^{\prime})N\phi}\, d\phi\right) f_{k} f_{k^{\prime}} (-1)^{k^{\prime}} H_{kN,k^{\prime}N}
= \frac{1}{4\pi}\sum_{l=0,\ l\ \mathrm{even}}^{2N-1}\sum_{k,k^{\prime}} c_{k,k^{\prime},l}\, f_{k} f_{k^{\prime}} (-1)^{k^{\prime}} H_{kN,k^{\prime}N}
= \frac{1}{4}\sum_{k} f_{k}^{2} (-1)^{k} H_{kN,kN} + \frac{1}{4\pi}\sum_{l=0,\ l\ \mathrm{even}}^{2N-1}\sum_{k\neq k^{\prime}} c_{k,k^{\prime},l}\, f_{k} f_{k^{\prime}} (-1)^{k^{\prime}} H_{kN,k^{\prime}N} \qquad (71)

The first term here is zero because $H_{mm}=1$ for all $m$, so the normalisation conditions on the $f_{k}$ give $\sum_{k\ \mathrm{even}} f_{k}^{2} - \sum_{k\ \mathrm{odd}} f_{k}^{2} = 0$. The second term is also zero. When $k-k^{\prime}$ is even the coefficients $c_{k,k^{\prime},l}$ vanish. When $k-k^{\prime}$ is odd, the terms with $(k,k^{\prime})$ and $(k^{\prime},k)$ cancel pairwise: the factor $c_{k,k^{\prime},l} f_{k} f_{k^{\prime}} H_{kN,k^{\prime}N}$ is unchanged when $k$ and $k^{\prime}$ are swapped, since $c_{k,k^{\prime},l}$ is an even function of $k-k^{\prime}$ and $H$ is symmetric, while the factor $(-1)^{k^{\prime}}$ changes sign, because exactly one of $k,k^{\prime}$ is odd when $k-k^{\prime}$ is odd. Thus we have $\langle+|M_{+}|-\rangle = 0$ as required.

This gives the result for $M_{+}$; the argument for $M_{-}$ is exactly the same.
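The full cancellation argument can also be checked numerically. The sketch below (an illustrative check, not part of the derivation) evaluates the right-hand side of Eq. (71) for a random real symmetric $H$ with positive entries and unit diagonal, and random $f_{k}$ normalised separately over even and odd $k$; all of these choices are assumptions for the test.

```python
import numpy as np

# Check of the claim <+|M_+|-> = 0, Eq. (71), for a random admissible H
# and random f_k. H is a random real symmetric matrix with positive
# entries and unit diagonal (H[k, kp] stands for H_{kN,k'N}); N and K
# are illustrative.
rng = np.random.default_rng(7)
N, K = 2, 6

A = rng.uniform(0.1, 1.0, (K, K))
H = (A + A.T) / 2                    # symmetric, real positive entries
np.fill_diagonal(H, 1.0)             # H_mm = 1

f = rng.uniform(0.1, 1.0, K)
f[0::2] /= np.linalg.norm(f[0::2])   # sum_{k even} f_k^2 = 1
f[1::2] /= np.linalg.norm(f[1::2])   # sum_{k odd}  f_k^2 = 1

def c(k, kp, l):                     # Eqs. (69)-(70)
    if k == kp:
        return np.pi / N
    d = k - kp
    return 2.0 / (d * N) * np.cos(d * np.pi * l) * np.sin(d * np.pi / 2)

overlap = sum(
    c(k, kp, l) * f[k] * f[kp] * (-1) ** kp * H[k, kp]
    for l in range(0, 2 * N, 2)      # sum over the even-l (+1) bins
    for k in range(K)
    for kp in range(K)
) / (4 * np.pi)
print(abs(overlap))                  # ≈ 0 to machine precision
```

Both cancellation mechanisms are exercised here: the diagonal ($k=k^{\prime}$) contribution vanishes through the normalisation conditions, and the odd-difference off-diagonal terms cancel pairwise under the $k\leftrightarrow k^{\prime}$ swap.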

References

  • Chuang et al. [1997] I. L. Chuang, D. W. Leung, and Y. Yamamoto, Bosonic quantum codes for amplitude damping, Phys. Rev. A 56, 1114 (1997).
  • Gottesman et al. [2001] D. Gottesman, A. Kitaev, and J. Preskill, Encoding a qubit in an oscillator, Phys. Rev. A 64, 012310 (2001).
  • Cochrane et al. [1999a] P. T. Cochrane, G. J. Milburn, and W. J. Munro, Macroscopically distinct quantum-superposition states as a bosonic code for amplitude damping, Physical Review A 59, 2631 (1999a).
  • Ofek et al. [2016] N. Ofek, A. Petrenko, R. Heeres, P. Reinhold, Z. Leghtas, B. Vlastakis, Y. Liu, L. Frunzio, S. Girvin, L. Jiang, et al., Extending the lifetime of a quantum bit with error correction in superconducting circuits, Nature 536, 441 (2016).
  • Hu et al. [2019] L. Hu, Y. Ma, W. Cai, X. Mu, Y. Xu, W. Wang, Y. Wu, H. Wang, Y. Song, C.-L. Zou, et al., Quantum error correction and universal gate set operation on a binomial bosonic logical qubit, Nature Physics 15, 503 (2019).
  • Campagne-Ibarcq et al. [2020] P. Campagne-Ibarcq, A. Eickbusch, S. Touzard, E. Zalys-Geller, N. E. Frattini, V. V. Sivak, P. Reinhold, S. Puri, S. Shankar, R. J. Schoelkopf, L. Frunzio, M. Mirrahimi, and M. H. Devoret, Quantum error correction of a qubit encoded in grid states of an oscillator, Nature 584, 368 (2020).
  • Flühmann et al. [2019] C. Flühmann, T. L. Nguyen, M. Marinelli, V. Negnevitsky, K. Mehta, and J. Home, Encoding a qubit in a trapped-ion mechanical oscillator, Nature 566, 513 (2019).
  • De Neeve et al. [2022] B. De Neeve, T.-L. Nguyen, T. Behrle, and J. P. Home, Error correction of a logical grid state qubit by dissipative pumping, Nature Physics 18, 296 (2022).
  • Sivak et al. [2022] V. V. Sivak, A. Eickbusch, B. Royer, S. Singh, I. Tsioutsios, S. Ganjam, A. Miano, B. L. Brock, A. Z. Ding, L. Frunzio, S. M. Girvin, R. J. Schoelkopf, and M. H. Devoret, Real-time quantum error correction beyond break-even (2022).
  • Rosenblum et al. [2018a] S. Rosenblum, P. Reinhold, M. Mirrahimi, L. Jiang, L. Frunzio, and R. J. Schoelkopf, Fault-tolerant detection of a quantum error, Science 361, 266 (2018a), https://www.science.org/doi/pdf/10.1126/science.aat3996.
  • Rosenblum et al. [2018b] S. Rosenblum, P. Reinhold, M. Mirrahimi, L. Jiang, L. Frunzio, and R. J. Schoelkopf, Fault-tolerant detection of a quantum error, Science 361, 266 (2018b), https://www.science.org/doi/pdf/10.1126/science.aat3996.
  • Ma et al. [2020] W.-L. Ma, M. Zhang, Y. Wong, K. Noh, S. Rosenblum, P. Reinhold, R. J. Schoelkopf, and L. Jiang, Path-independent quantum gates with noisy ancilla, Phys. Rev. Lett. 125, 110503 (2020).
  • Reinhold et al. [2020] P. Reinhold, S. Rosenblum, W.-L. Ma, L. Frunzio, L. Jiang, and R. J. Schoelkopf, Error-corrected gates on an encoded qubit, Nature Physics 16, 822 (2020).
  • Ma et al. [2021] W.-L. Ma, S. Puri, R. J. Schoelkopf, M. H. Devoret, S. Girvin, and L. Jiang, Quantum control of bosonic modes with superconducting circuits, Science Bulletin 66, 1789 (2021).
  • Vlastakis et al. [2013] B. Vlastakis, G. Kirchmair, Z. Leghtas, S. E. Nigg, L. Frunzio, S. M. Girvin, M. Mirrahimi, M. H. Devoret, and R. J. Schoelkopf, Deterministically encoding quantum information using 100-photon Schrödinger cat states, Science 342, 607 (2013), https://www.science.org/doi/pdf/10.1126/science.1243289.
  • Dennis et al. [2002] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, Topological quantum memory, Journal of Mathematical Physics 43, 4452–4505 (2002).
  • Kitaev [2003] A. Kitaev, Fault-tolerant quantum computation by anyons, Annals of Physics 303, 2–30 (2003).
  • Noh and Chamberland [2020] K. Noh and C. Chamberland, Fault-tolerant bosonic quantum error correction with the surface-Gottesman-Kitaev-Preskill code, Phys. Rev. A 101, 012316 (2020).
  • Noh et al. [2022] K. Noh, C. Chamberland, and F. G. Brandão, Low-overhead fault-tolerant quantum error correction with the surface-GKP code, PRX Quantum 3, 010315 (2022).
  • Vuillot et al. [2019] C. Vuillot, H. Asasi, Y. Wang, L. P. Pryadko, and B. M. Terhal, Quantum error correction with the toric Gottesman-Kitaev-Preskill code, Phys. Rev. A 99, 032344 (2019).
  • Cochrane et al. [1999b] P. T. Cochrane, G. J. Milburn, and W. J. Munro, Macroscopically distinct quantum-superposition states as a bosonic code for amplitude damping, Phys. Rev. A 59, 2631 (1999b).
  • Leghtas et al. [2013] Z. Leghtas, G. Kirchmair, B. Vlastakis, R. J. Schoelkopf, M. H. Devoret, and M. Mirrahimi, Hardware-efficient autonomous quantum memory protection, Phys. Rev. Lett. 111, 120501 (2013).
  • Mirrahimi et al. [2014] M. Mirrahimi, Z. Leghtas, V. V. Albert, S. Touzard, R. J. Schoelkopf, L. Jiang, and M. H. Devoret, Dynamically protected cat-qubits: a new paradigm for universal quantum computation, New Journal of Physics 16, 045014 (2014).
  • Michael et al. [2016] M. H. Michael, M. Silveri, R. Brierley, V. V. Albert, J. Salmilehto, L. Jiang, and S. Girvin, New class of quantum error-correcting codes for a bosonic mode, Phys. Rev. X 6, 031006 (2016).
  • Grimsmo et al. [2020] A. L. Grimsmo, J. Combes, and B. Q. Baragiola, Quantum computing with rotation-symmetric bosonic codes, Phys. Rev. X 10, 011058 (2020).
  • Raussendorf et al. [2002] R. Raussendorf, D. Browne, and H. Briegel, The one-way quantum computer–a non-network model of quantum computation, Journal of Modern Optics 49, 1299 (2002).
  • Bolt et al. [2016] A. Bolt, G. Duclos-Cianci, D. Poulin, and T. Stace, Foliated quantum error-correcting codes, Phys. Rev. Lett. 117, 070501 (2016).
  • Brown and Roberts [2020] B. J. Brown and S. Roberts, Universal fault-tolerant measurement-based quantum computation, Phys. Rev. Research 2, 033305 (2020).
  • Wiseman and Killip [1998] H. M. Wiseman and R. B. Killip, Adaptive single-shot phase measurements: The full quantum theory, Physical Review A 57, 2169–2185 (1998).
  • Hillmann et al. [2021] T. Hillmann, F. Quijandría, A. L. Grimsmo, and G. Ferrini, Performance of teleportation-based error correction circuits for bosonic codes with noisy measurements (2021), arXiv:2108.01009 [quant-ph] .
  • Zhang et al. [2017] Y. Zhang, X. Zhao, Z.-F. Zheng, L. Yu, Q.-P. Su, and C.-P. Yang, Universal controlled-phase gate with cat-state qubits in circuit QED, Phys. Rev. A 96, 052317 (2017).
  • Frattini et al. [2017] N. E. Frattini, U. Vool, S. Shankar, A. Narla, K. M. Sliwa, and M. H. Devoret, 3-wave mixing Josephson dipole element, Appl. Phys. Lett. 110, 222603 (2017).
  • Wiseman and Milburn [2009] H. M. Wiseman and G. J. Milburn, Quantum measurement and control (Cambridge university press, 2009).
  • Raussendorf et al. [2007] R. Raussendorf, J. Harrington, and K. Goyal, Topological fault-tolerance in cluster state quantum computation, New Journal of Physics 9, 199 (2007).
  • Raussendorf and Harrington [2007] R. Raussendorf and J. Harrington, Fault-tolerant quantum computation with high threshold in two dimensions, Phys. Rev. Lett. 98, 190504 (2007).
  • Claes et al. [2023] J. Claes, J. E. Bourassa, and S. Puri, Tailored cluster states with high threshold under biased noise, npj Quantum Information 9, doi:10.1038/s41534-023-00677-w (2023).
  • Leonhardt et al. [1995] U. Leonhardt, J. A. Vaccaro, B. Böhmer, and H. Paul, Canonical and measured phase distributions, Phys. Rev. A 51, 84 (1995).
  • Martin et al. [2020] L. S. Martin, W. P. Livingston, S. Hacohen-Gourgy, H. M. Wiseman, and I. Siddiqi, Implementation of a canonical phase measurement with quantum feedback, Nature Physics 16, 1046 (2020).
  • Nielsen and Chuang [2010] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition (Cambridge University Press, 2010).
  • Darmawan and Poulin [2017] A. S. Darmawan and D. Poulin, Tensor-network simulations of the surface code under realistic noise, Physical Review Letters 119, 040502 (2017).
  • Pattison et al. [2021] C. A. Pattison, M. E. Beverland, M. P. da Silva, and N. Delfosse, Improved quantum error correction using soft information (2021), arXiv:2107.13589 [quant-ph] .
  • Bravyi and Vargo [2013] S. Bravyi and A. Vargo, Simulation of rare events in quantum error correction, Phys. Rev. A 88, 062308 (2013).
  • Kim et al. [2021] I. H. Kim, E. Lee, Y.-H. Liu, S. Pallister, W. Pol, and S. Roberts, Fault-tolerant resource estimate for quantum chemical simulations: Case study on li-ion battery electrolyte molecules (2021), arXiv:2104.10653 [quant-ph] .
  • Carmichael [2007] H. Carmichael, Statistical Methods in Quantum Optics 2: Non-Classical Fields (2007).