

Sample-efficient adaptive calibration of quantum networks using Bayesian optimization

Cristian L. Cortes, Center for Nanoscale Materials, Argonne National Laboratory, Lemont, Illinois 60439, USA

Pascal Lefebvre, Institute for Quantum Science and Technology, and Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4, Canada

Nikolai Lauk, Division of Physics, Mathematics and Astronomy, and Alliance for Quantum Technologies (AQT), California Institute of Technology, 1200 E. California Boulevard, Pasadena, California 91125, USA

Michael J. Davis, Chemical Sciences and Engineering Division, Argonne National Laboratory, Lemont, Illinois 60439, USA

Neil Sinclair, Division of Physics, Mathematics and Astronomy, and Alliance for Quantum Technologies (AQT), California Institute of Technology, 1200 E. California Boulevard, Pasadena, California 91125, USA; John A. Paulson School of Engineering and Applied Sciences, Harvard University, 29 Oxford Street, Cambridge, Massachusetts 02138, USA

Stephen K. Gray, Center for Nanoscale Materials, Argonne National Laboratory, Lemont, Illinois 60439, USA

Daniel Oblak, Institute for Quantum Science and Technology, and Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4, Canada
Abstract

Indistinguishable photons are imperative for advanced quantum communication networks. Indistinguishability is difficult to obtain because of environment-induced photon transformations and loss imparted by communication channels, especially in noisy scenarios. Strategies to mitigate these transformations often require hardware or software overhead that is restrictive (e.g. adding noise), infeasible (e.g. on a satellite), or time-consuming for deployed networks. Here we propose and develop resource-efficient Bayesian optimization techniques to rapidly and adaptively calibrate the indistinguishability of individual photons for quantum networks using only information derived from their measurement. To experimentally validate our approach, we demonstrate the optimization of Hong-Ou-Mandel interference between two photons, a central task in quantum networking, finding rapid, efficient, and reliable convergence towards maximal photon indistinguishability in the presence of high loss and shot noise. We expect our resource-optimized and experimentally friendly methodology will allow fast and reliable calibration of indistinguishable quanta, a necessary task in distributed quantum computing, communications, and sensing, as well as for fundamental investigations.

Introduction

Algorithmic optimization of quantum systems plays a key role in quantum computing, simulation, and sensing (e.g. see Carleo and Troyer (2017); Havlíček et al. (2019); Bogaerts et al. (2020); Kokail et al. (2019); Mott et al. (2017); Peruzzo et al. (2014); Ding et al. (2020); Bonato et al. (2016); Lumino et al. (2018); Preskill (2018)), as well as in quantum system characterization Gao et al. (2018); Tachella et al. (2019); Wittler et al. (2021); Simmerman et al. (2020). Yet, there has been little effort on algorithmic optimization of quantum communications and networks Wallnöfer et al. (2020); Ismail et al. (2019); Melnikov et al. (2020); Kottmann et al. (2021). In particular, little work has used such methods to overcome unavoidable channel-induced variations of the properties (degrees of freedom) of photons, along with the well-known impacts of loss and noise, which restrict demonstrations of advanced, multi-qubit quantum networks, especially those that crucially rely on interference Kimble (2008); Pirandola et al. (2015); Moehring et al. (2007); Delteil et al. (2016); Pompili et al. (2021); Bhaskar et al. (2020); Daiss et al. (2021); Guccione et al. (2020); Lucamarini et al. (2018); Herbst et al. (2015); Ren et al. (2017); Du et al. (2021); Valivarthi et al. (2016); Lago-Rivera et al. (2021); Liu et al. (2021). These variations, which originate from changes in the environment, render photons distinguishable, thereby restricting their ability to interfere Gisin et al. (2002). This precludes crucial tasks in a multi-node quantum network Kimble (2008), like that schematized in Fig. 1, including two-photon Bell-state measurements Michler et al. (1996), which underpin, for example, measurement-device-independent quantum key distribution (MDI-QKD) Lo et al. (2012) and quantum repeaters Briegel et al. (1998). The required indistinguishability for deployed networks is obtained by calibrating all degrees of freedom of a photon, i.e. its polarization, temporal, spectral, and spatial modes. This process requires additional hardware and software, and relies on (often “brute-force”) methods that either restrict the communication rate, add noise, do not scale to multi-photon or multi-node networks, or are physically or financially infeasible for remote network nodes Lucamarini et al. (2018); Herbst et al. (2015); Ren et al. (2017); Du et al. (2021); Valivarthi et al. (2016); Lago-Rivera et al. (2021); Liu et al. (2021).

Figure 1: High-level overview of quantum network calibration using Bayesian optimization. Quantum network nodes emit single or entangled photons into fibers, and two-photon Bell-state measurements, whose fidelities are determined by Hong-Ou-Mandel interference, are performed to facilitate quantum network protocols. Within the two-photon measurement node, a feedback loop between the two-photon measurement apparatus and a Bayesian optimizer is used to automate the calibration of photon indistinguishability using the objective function f(\mathbf{x}^{(i)}), which we introduce in the main text and which is plotted on the diagrammatic computer screen. The value \mathbf{x}^{(i)} denotes the variable experimental degrees of freedom determined by the experimental apparatus during the i-th iteration of the algorithm, with two variables of \mathbf{x} defining the axes of the plot on the screen.

Here we employ a Gaussian Process (GP) Bayesian optimization algorithm Rasmussen and Williams (2006) to rapidly calibrate the degrees of freedom of photons for quantum networks. Our method operates with minimal resources: it requires only direct measurements, using threshold detectors, of the low-rate streams of photons inherent to quantum communications in order to overcome the impact of sampling noise and network channel-induced photon variations. While Gaussian process modeling has been successfully used in many physical applications, for example to describe the optical response of plasmonic systems Miller et al. (2010, 2012), its use in the quantum networking context, here for calibration, is an exciting new direction. Specifically, we present and experimentally demonstrate an adaptive calibration algorithm that maximizes two-photon Hong-Ou-Mandel (HOM) Hong et al. (1987) interference to efficiently render photons indistinguishable despite their low probability of detection (see Fig. 1, right). Our proposed methodology for low-cost autonomous calibration leverages the advantages of the Bayesian optimization framework: it is model-agnostic, sample-efficient (i.e., it converges with minimal samples), and robust to shot noise, and is accordingly well-suited to the conditions of quantum communications. We also test the Bayesian optimization algorithm with respect to the kernel, initial sampling strategy, and acquisition function to improve its robustness, and show that it provides the best performance compared to other approaches. We expect that our experimentally friendly method will accelerate the development of high-fidelity quantum networks and reduce the complexity of implementing workable quantum technology.

Results

Bayesian Optimization

We envision using a GP Bayesian optimization algorithm Rasmussen and Williams (2006) to maximize the indistinguishability of photons generated at a quantum node and, after traveling through deployed fiber-optic cable, arriving at a two-photon measurement station, as sketched on the right-hand side of Fig. 1. In addition to being a powerful optimization framework, the final GP surrogate model allows for a deep analysis of a high-dimensional parameter space which, as described below, corresponds to the degrees of freedom of the photons in this work. The calibration problem consists of optimizing an expensive objective function, f(\mathbf{x}), corresponding to a measure of the change in coincidence detections due to two-photon interference, as described in the next section. GP Bayesian optimization is based on the assumption that the likelihood and prior distributions correspond to multivariate Gaussian distributions, where the objective function is assumed to be a random variable sampled from a Gaussian Process,

f(\mathbf{x}) \sim \mathcal{GP}[\mu(\mathbf{x}), K(\mathbf{x},\mathbf{x}^{\prime})], (1)

which is defined by the mean function, \mu(\mathbf{x})=\mathbb{E}[f(\mathbf{x})], and the covariance (or kernel) function, K(\mathbf{x},\mathbf{x}^{\prime})=\mathbb{E}[(f(\mathbf{x})-\mu(\mathbf{x}))(f(\mathbf{x}^{\prime})-\mu(\mathbf{x}^{\prime}))], respectively. The Bayesian optimization algorithm proceeds by feeding the optimizer a single function call, equivalent to a single measurement result, f(\mathbf{x}^{(i)}), during the i-th iteration of the algorithm. The variable \mathbf{x}^{(i)}=(x_{1}^{(i)},x_{2}^{(i)},\cdots,x_{D}^{(i)}) is a D-dimensional vector corresponding to D experimental parameters. For two-photon interference, these parameters control the photonic degrees of freedom that are adjusted to compensate for channel transformations. For our proof-of-principle laboratory demonstration described in the next section, these correspond to a time delay controlling the mutual arrival time of the photons, a half-waveplate angle controlling the polarization of the photons, and a spectral filter pass-band determining the frequency detuning (offset) of the photons. Once the i-th measurement result has been fed into the Bayesian optimizer, it builds a Gaussian Process surrogate model by performing an optimization with respect to the kernel hyperparameters (see Appendix .1). An acquisition function, such as the lower confidence bound, which we found worked best in this work (see Appendix .5), is then used to suggest the next measurement, \mathbf{x}^{(i+1)}. Various initial sampling strategies, kernel functions, and hyperparameter optimization strategies are also tested, with details of the algorithmic fine-tuning provided in Appendices .4-.6.
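As an illustration of this measure-fit-suggest loop, the following sketch uses the scikit-optimize package, assuming its gp_minimize interface with a lower-confidence-bound acquisition function and Latin-hypercube initialization; measure_g2 is a hypothetical stand-in for a single experimental acquisition, not part of our control software.

```python
# Minimal sketch of the calibration loop, assuming the scikit-optimize
# package. measure_g2() is a hypothetical stand-in for one experimental
# acquisition: set the hardware to x, integrate for T seconds, return g2(0).
import numpy as np
from skopt import gp_minimize

def measure_g2(x):
    delay_mm, hwp_deg = x
    # Placeholder: a noisy synthetic HOM dip standing in for the apparatus.
    dip = 1.0 - 0.5 * np.exp(-(delay_mm / 0.5) ** 2) \
              * np.cos(np.radians(2.0 * hwp_deg)) ** 2
    return dip + np.random.normal(scale=0.02)  # shot-noise-like jitter

result = gp_minimize(
    measure_g2,
    dimensions=[(-2.0, 2.0), (0.0, 90.0)],  # delay (mm), waveplate angle (deg)
    acq_func="LCB",                  # lower confidence bound acquisition
    n_initial_points=12,             # Latin-hypercube warm-up, as in the text
    initial_point_generator="lhs",
    n_calls=30,                      # total measurement budget
)
print(result.x, result.fun)          # predicted optimal settings and g2 value
```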

Figure 2: Experimental setup used for HOM interference. A diode laser generates 766.35 nm wavelength light that goes through a polarization controller (PC) before passing through a periodically-poled lithium niobate crystal to generate a 1532.7 nm wavelength photon pair by Type-0 spontaneous parametric down conversion (SPDC). A 50 GHz-bandwidth bandpass filter (BPF) selects the degenerate photon pairs around 1532.7 nm wavelength, and a beam splitter (BS) probabilistically separates the photons, directing them to different output paths. The top path, in free space, directs the photon through half- and quarter-waveplates for control of the polarization, with the path length (i.e. time delay) controlled by a translation stage (TS), before the photon passes through a fiber-based polarizing beam splitter (PBS). The bottom path, entirely in fiber, has a phase modulator (PM) plus a polarization controller (PC) to maximize transmission through the PBS. The two paths meet back at a fiber-based beam splitter (BS), which performs HOM interference, and the photons are detected by superconducting nanowire single-photon detectors (SNSPDs, identified as D1 and D2). Both paths also contain an optional 12 GHz-bandwidth tunable bandpass filter (TBPF). The components within the dashed line are those adjusted by the Bayesian optimizer.
Figure 3: Real-time Bayesian optimization results for calibrating the photons using two degrees of freedom. (a) Normalized coincidence detections as a function of relative half-waveplate angle measured in degrees and relative translation stage position measured in millimeters (acquired over 13 h). This experimental data set is considered as the baseline function for all benchmark results. The results are symmetric about zero for negative relative polarization angles. (b) Full benchmark results using the baseline data set in (a) as a black-box function for the Bayesian optimizer. The dark blue line is the average benchmark result over one hundred simulated trials. The true minimum corresponds to the minimum value from the baseline data in (a). (c) Live demonstration results using the Bayesian optimizer in four different trials, with detector integration times in parentheses. Left: convergence plot. Right: final surrogate model prediction. The black dots represent the sampled parameter settings and the red star shows the optimal settings predicted by the GP algorithm.

Experimental setup

To test the algorithm, we performed a HOM interference measurement of two photons. When two indistinguishable independent photons are sent to different input ports of a beam splitter, they will bunch at one of the output ports Hong et al. (1987). This results in a minimum (zero without imperfections) of the zero-delay-time normalized second-order correlation function Beck (2007),

g^{(2)}(0) = \frac{C_{12} T}{S_{1} S_{2} \Delta\tau}, (2)

where S_{1} and S_{2} (C_{12}) denote the total number of photons detected at each (in coincidence at both) of the outputs of the beamsplitter within a time interval \Delta\tau over a total measurement time T. This quantity is related to the commonly used HOM interference visibility V=(C_{12}^{\mathrm{max}}-C_{12}^{\mathrm{min}})/C_{12}^{\mathrm{max}}, where C_{12}^{\mathrm{max}} (C_{12}^{\mathrm{min}}) denotes the maximum (minimum) number of photons detected in coincidence as a degree of freedom of a photon is varied (e.g. its time of arrival) Hong et al. (1987); Valivarthi et al. (2020). Either V or g^{(2)}(0) is used to quantify the impact of imperfections in HOM interference, which ultimately impacts Bell-state measurement fidelities, and hence the fidelity of qubit distribution in quantum networks Nielsen and Chuang (2002); Metcalf et al. (2014); Valivarthi et al. (2020).
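Equation (2) reduces to simple count arithmetic; a minimal illustration, with hypothetical count values chosen only for demonstration, is:

```python
def g2_zero(s1, s2, c12, T, delta_tau):
    """Normalized second-order correlation from raw counts, Eq. (2).

    s1, s2    : total detections S1, S2 at the two beamsplitter outputs
    c12       : coincidence detections C12 within the window
    T         : total acquisition time (s)
    delta_tau : coincidence window (s)
    """
    return c12 * T / (s1 * s2 * delta_tau)

# Hypothetical numbers for illustration only: 60 s of data, a 2 ns window.
print(g2_zero(s1=5.1e5, s2=4.8e5, c12=205, T=60.0, delta_tau=2e-9))  # ~25
```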

In our experiment, which is depicted in Fig. 2, a continuous-wave laser emits near-visible-wavelength light that pumps a periodically-poled lithium niobate crystal waveguide to create a photon pair at telecommunication wavelength through spontaneous parametric down conversion (SPDC) Burnham and Weinberg (1970). The leftover pump light is removed with a 50 GHz bandpass filter, and the photon pair is sent to an initial beam splitter to probabilistically separate the photons. Here we employ photons originating from the same source to demonstrate our principle, even though photons would originate from independent sources in a quantum network. To avoid first-order interference between the correlated photons generated by the SPDC process, one of them passes through a fiber-based phase modulator used to randomize the phase difference between the photon pair Jin et al. (2013). The second photon goes through a free-space optics setup in which a set of half- and quarter-waveplates adjusts the photon polarization while mirrors on a translation stage vary the path-length difference (corresponding to a mutual time delay) between the photons. Each photon then passes through an optional, independent, tunable 12 GHz-bandwidth bandpass filter. The waveplates, translation stage, and tunable filters, surrounded by a dashed outline in Fig. 2, are adjusted according to the algorithm. Next, each photon passes through a fiber-based polarizing beam splitter (PBS), which ensures that the polarizations of both photons are identical and that any polarization rotations are converted into intensity variations. Then, each photon is directed into a different input port of a second fiber-based beamsplitter at which two-photon interference occurs. Finally, the output photons are guided to two cryogenically cooled superconducting nanowire threshold single-photon detectors. A time-to-digital converter records the time of arrival of the photons at the detectors.

Optimization using two degrees of freedom

In our first measurement, we remove the tunable band-pass filters such that the path-length difference and the half-waveplate angle correspond to the experimental parameters \mathbf{x} that are fed into the Bayesian optimization algorithm. It is important to point out that there are several choices of objective function f(\mathbf{x}) one may use for the optimization algorithm. Perhaps unsurprisingly, we find that

f(\mathbf{x}) \equiv g^{(2)}(0)[\mathbf{x}] (3)

provides the best performance compared to the alternatives and is what we employ in our scheme. For example, C_{12} alone can vary for experimental settings that do not yield quantum interference, while V requires adjustment of experimental parameters to assess both C_{12}^{\mathrm{min}} and C_{12}^{\mathrm{max}}. Note here that the zero in g^{(2)}(0) indicates that we always measure about the same relative time of arrival of the photons (within about \Delta\tau=2 ns) when we vary the degrees of freedom, specifically a time that corresponds to a minimum g^{(2)}(0) when the photons are rendered indistinguishable.

Before discussing the results of our optimization, note that we developed a theoretical model for g^{(2)}(0) to validate our experimental measurements and the predictions of the Bayesian optimization algorithm (see Appendix Fig. 4). The degenerate output of our Type-0 SPDC crystal is described by a squeezed vacuum state, which is a Gaussian state, i.e. it is completely characterized by a displacement vector and a covariance matrix. Since all optical operations in the experiment, including the photon detection, can be described as Gaussian operations, that is, they map Gaussian states to other Gaussian states, we can apply a characteristic function formalism Lauk et al. (manuscript in preparation); Takeoka et al. (2015) to determine the final displacement vector and the final covariance matrix, which allow us to predict S_{1}, S_{2}, and C_{12}. In this model, the experimental degrees of freedom are modeled as virtual beam splitters with variable transmittances \eta_{u}, \eta_{d}, and \zeta, where \eta_{u/d} corresponds to the overall photon coupling efficiency in the upper/lower path (see Fig. 2) and \zeta=\exp(-x^{2}) parameterizes the mode overlap between the two photons. Details of the model are provided in Appendix .9, along with the theoretical plot of g^{(2)}(0) in Appendix Fig. 5a.

Our optimization using two degrees of freedom is depicted in Fig. 3a, presenting a two-dimensional map of the measured g^{(2)}(0)[\mathbf{x}] as a function of the full range of settings \mathbf{x} of the translation stage and half-waveplate angle, using 60 s for changing the parameter settings and T=60 s of data collection per setting. Our result shows good agreement with that predicted by the theoretical model, as plotted in Appendix Fig. 5a. In particular, g^{(2)}(0)[\mathbf{x}] displays V\leq 0.5, as expected for a squeezed vacuum state at the input of the initial beam splitter, which splits the pair probabilistically. Unlike the g^{(2)}(0)[\mathbf{x}] predicted by the theoretical model, the experimental data exhibit random variations due to sampling noise.

To benchmark the Bayesian optimization algorithm, we used the baseline data in Fig. 3a as the source of values for our objective function g^{(2)}(0)[\mathbf{x}], which are fed to the algorithm. Starting with a set of twelve measurement settings [\mathbf{x}^{(1)},\mathbf{x}^{(2)},\cdots,\mathbf{x}^{(12)}] selected based on Latin hypercube sampling (see Appendix .6 for details), the algorithm proceeds to select the next setting \mathbf{x}^{(13)}, and then the next and so on, for a total number of measurements, n, that it deems optimal for establishing the minimum of g^{(2)}(0)[\mathbf{x}]. As shown in Fig. 3b, we find that the Bayesian optimization algorithm reliably converges to the minimum g^{(2)}(0)[\mathbf{x}]=20.2 in fewer than n=30 measurements on average (dark blue line). The light blue lines correspond to single instances of the Bayesian optimization algorithm, which converge more slowly or more quickly depending on the random instance of initially sampled points. By benchmarking the algorithm over one hundred independent trials, we remove the effect of this randomness, resulting in the average (expected) convergence shown by the dark blue line.
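A sketch of this benchmarking procedure, again assuming the scikit-optimize interface, is given below; the grid and baseline arrays are hypothetical stand-ins for the tabulated Fig. 3a data set.

```python
# Sketch of the benchmarking procedure: rerun the optimizer many times on the
# tabulated baseline map and average the best-seen value per iteration.
# `grid` (N x 2 array of settings) and `baseline` (the corresponding measured
# g2 values) are hypothetical stand-ins for the Fig. 3a data set.
import numpy as np
from skopt import gp_minimize

def lookup_g2(x, grid, baseline):
    # Snap the suggested setting to the nearest tabulated grid point.
    i = np.argmin(np.linalg.norm(grid - np.asarray(x), axis=1))
    return baseline[i]

def benchmark(grid, baseline, n_trials=100, n_calls=30):
    traces = []
    for seed in range(n_trials):
        res = gp_minimize(lambda x: lookup_g2(x, grid, baseline),
                          dimensions=[(0.0, 15.0), (0.0, 90.0)],
                          acq_func="LCB", n_initial_points=12,
                          initial_point_generator="lhs",
                          n_calls=n_calls, random_state=seed)
        traces.append(np.minimum.accumulate(res.func_vals))  # best-so-far
    return np.mean(traces, axis=0)  # averaged convergence (dark blue line)
```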

While this benchmark confirms the validity of the approach, it is important to test the efficacy of our algorithm in a live setting where every measurement is performed in real time, as opposed to using the tabulated baseline data from Fig. 3a. The results for four different real-time trials are shown in Fig. 3c. The plots for trials 1-3, for which data was acquired for T=60 s (the same as for the baseline data), show that the minimum value of g^{(2)}(0)[\mathbf{x}] is found within n=15 measurements in all three cases. The maps on the right of Fig. 3c show the final Gaussian Process surrogate model prediction made after n=30 measurements. It is worth noting that the surrogate models do not, and indeed are not intended to, accurately reproduce g^{(2)}(0)[\mathbf{x}] over the entire parameter range of the input variables. The surrogate models need only accurately predict the location of the minimum of g^{(2)}(0)[\mathbf{x}]. In fact, this pursuit of incomplete but sufficient information is the underlying reason why GP Bayesian optimization is more efficient than brute-force mapping of g^{(2)}(0)[\mathbf{x}]. In the fourth trial we reduce T to just 1 s, which significantly increases the level of shot noise. Yet, the GP optimization still finds the minimum value after just n=23 measurements, albeit with the polarization setting slightly off the optimum value. Since the PBS transmission follows a cosine of the half-waveplate angle, g^{(2)}(0)[\mathbf{x}] becomes less sensitive to small variations near the optimum and, thus, the optimization becomes more difficult. However, the consequence of a slight offset of the polarization angle is also less significant due to the same cosine dependence of the loss. On the other hand, without the PBS the distinguishability would be directly proportional to the offset of the half-waveplate angle.
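This reduced sensitivity can be made explicit with a first-order expansion; assuming, for illustration, that the PBS transmission varies as T(\theta)=\cos^{2}(2\theta), with \theta the half-waveplate offset from the optimum,

T(\delta\theta) \approx 1 - 4\,\delta\theta^{2}, \qquad \left.\frac{dT}{d\theta}\right|_{\theta=0} = 0,

so a small angular offset enters the loss, and hence g^{(2)}(0)[\mathbf{x}], only at second order.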

Our results using two degrees of freedom validate the effectiveness of the algorithm for a realistic signal that is subject to various sources of noise including shot noise, while demonstrating fast convergence. In comparison, the full experimental map (shown in Fig. 3a) consists of 350 data points acquired over 13 h.

Figure 4: Real-time Bayesian optimization results using three degrees of freedom. (a) Convergence plot of the minimum g^{(2)}(0)[\mathbf{x}] as a function of the total number of measurements, n. (b) Partial dependence plots for each degree of freedom (see main text for a description). (c) Benchmarking of the optimization using a simulated objective function derived from the theoretical model. Note that the theoretical model is based on the same experimental parameters as those in Fig. 3a; however, the additional spectral filtering for the three-dimensional optimization lowers the mean photon number and thus tends to increase g^{(2)}(0)[\mathbf{x}].

Optimization using three degrees of freedom

Our next measurements test the Bayesian optimization algorithm in a higher-dimensional setting. Specifically, we add the 12 GHz-bandwidth tunable filters to the experimental setup so as to include the frequency offset (between 0 and 6 GHz) of the photons as a third degree of freedom. At 0 GHz offset, both filters are resonant with the degenerate spectral mode of the photon pair, while at the maximum detuning of 6 GHz between the two photons, the spectral filters transmit correlated but partially non-degenerate photons. Compared to the previous setup, a broader region of interference with respect to the varied path-length difference is expected due to the spectral filtering. Since the parameter space of settings for three degrees of freedom is too large to acquire a baseline data set like that in Fig. 3a, we proceed directly to a live demonstration, with results shown in Fig. 4a. We find excellent convergence towards the minimum value within n=25 measurements. Two-dimensional partial-dependence plots of the thirty-point surrogate model are shown in Fig. 4b. The partial dependence plots illustrate the surrogate model prediction with respect to two degrees of freedom, with the remaining degree of freedom averaged out. We note that the first two degrees of freedom, corresponding to the half-waveplate angle and path-length difference, display a minimum near 85 degrees and 7.5 mm, respectively, as expected. The final degree of freedom, corresponding to the frequency offset, displays a near-constant dependence with a small slope predicting a minimum at X_{2}=0 (zero frequency offset).

Using our theoretical model, we generate a map of the predicted g^{(2)}(0) for all possible parameter settings. Note that our model captures the spectral offset of the two photons as a transition between degenerate squeezed vacuum and non-degenerate two-mode squeezed vacuum states as inputs to the final beamsplitter, as shown in Appendix Fig. 5a,b. Thus, our theoretical model, instead of an experimentally acquired baseline map, is used as the source of the g^{(2)}(0)[\mathbf{x}] values employed for systematically benchmarking the Bayesian optimization algorithm on the three-dimensional parameter set. In Fig. 4c we plot the value of the predicted g^{(2)}(0)[\mathbf{x}] as a function of the total number of measurements n, averaged over one hundred trials. A set of fifteen measurement settings based on Latin hypercube sampling is used as the initial sampling points. The results show excellent convergence towards the predicted minimum within n=40 measurement calls, showing the efficacy of the approach in the higher-dimensional setting while also demonstrating the utility of the theoretical model.

Optimization for simulated thermal input states

For deployed quantum networks, where photon sources are separated, one will not interfere correlated photons as in our demonstration. Instead, two-photon measurements are typically performed on two uncorrelated coherent photons, as in the case of MDI-QKD Lo et al. (2012), or on two photons, each entangled with another photon not partaking in the measurement, as in the case of quantum repeaters Briegel et al. (1998). For the latter case, photons from separate SPDC sources will be independently calibrated in order to maximize the degree of photon indistinguishability. When one photon from an SPDC source is observed without its correlated partner, it follows a thermal photon-number distribution Burnham and Weinberg (1970). Hence, we provide additional benchmarks with thermal states at the input of the beamsplitter, corresponding to two independent members of separate photon pairs, using our theoretical model for the predicted g^{(2)}(0); see Appendix Fig. 5c. We simulate the effect of finite sampling by performing Poisson sampling of the theoretical model, which we define as the baseline data. The independent degrees of freedom correspond to \zeta=\exp(-x^{2}) (due to the path-length difference) and \eta_{u} (loss due to polarization misalignment), which in an experimental setting would be proportional to the cosine of the polarization angle; see Appendix .9 for more details. The result is shown in Fig. 5a. The effect of finite sampling is evident in the random noise, which reflects what a real acquired signal would look like, e.g. as in Fig. 3a. The baseline objective function is, thus, used to test the Bayesian optimizer under experimental conditions for which the photon detection rate can be very limited. The results shown in Fig. 5 provide the final benchmark results for Bayesian optimization using thermal input states, averaged over 150 different instances. As before, a set of twelve measurement settings based on Latin hypercube sampling is used as the initial sampling points. The results shown in Fig. 5c show that the Bayesian optimizer finds the expected minimum (located near \eta_{u}=0.2 and x=0) within n=30 measurements. These results are promising for in situ quantum network calibration.
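The finite-sampling procedure can be sketched as follows; model_rates is a toy placeholder, not the actual Gaussian-state model of Appendix .9.

```python
# Sketch of the finite-sampling simulation: Poisson-sample the count rates
# predicted by the model before forming g2(0). model_rates() is a toy
# placeholder, not the actual Gaussian-state model of Appendix .9.
import numpy as np

rng = np.random.default_rng(0)

def model_rates(zeta, eta_u):
    # Placeholder singles/coincidence rates (Hz), for illustration only.
    s1, s2 = 8e3 * eta_u, 8e3
    c12 = 0.5 * (2.0 - zeta * eta_u)  # toy coincidence rate with a HOM-like dip
    return s1, s2, c12

def noisy_g2(zeta, eta_u, T=60.0, delta_tau=2e-9):
    s1_rate, s2_rate, c12_rate = model_rates(zeta, eta_u)
    s1 = rng.poisson(s1_rate * T)    # shot-noise-limited singles
    s2 = rng.poisson(s2_rate * T)
    c12 = rng.poisson(c12_rate * T)  # shot-noise-limited coincidences
    return c12 * T / (s1 * s2 * delta_tau)
```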

Figure 5: Simulation results for Bayesian optimization using thermal input states. (a) Simulated baseline data using Poisson sampling. (b) Final surrogate model prediction after 40 measurement calls. (c) Convergence plot averaged over 150 instances (dark blue).

Discussion

It is worth discussing how another widespread optimization approach, gradient-based optimization, would operate and compare to the Bayesian optimization framework presented in this manuscript. First-order gradient-descent methods, such as conjugate gradient descent, find a local or global minimum by using the update rule \mathbf{x}^{(i+1)}=\mathbf{x}^{(i)}-\gamma\nabla f(\mathbf{x}^{(i)}), where \gamma is the step size and \nabla f(\mathbf{x}^{(i)}) is the gradient of the objective function. While gradient descent scales well to high dimensions, it becomes very time-consuming for expensive functions where gradient information is not available. In principle, it is possible to obtain gradient information for the current set-up, but it would require an additional measurement at each point in order to use the finite-difference formula, \tfrac{\partial f}{\partial\theta}\big|_{a}=\lim_{\delta\theta\rightarrow 0}(f(a+\delta\theta)-f(a))/\delta\theta, resulting in a doubling of the total number of measurements. In addition, vanilla gradient-descent methods typically require hundreds of function calls to converge; they would therefore be much more time-consuming than the Bayesian optimization presented here. Similar reasoning applies to second-order methods, which require second-order derivatives to construct the Hessian. While second-order methods converge much faster than first-order methods, they come at the expense of additional function calls to construct the Hessian at each optimization step.
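To make the measurement-cost comparison concrete, a minimal sketch of first-order descent with forward finite differences is given below (our illustration, with hypothetical step sizes); each iteration spends one baseline acquisition plus one extra acquisition per dimension.

```python
# Sketch contrasting the measurement cost of first-order descent with forward
# finite differences: each iteration costs one baseline acquisition plus one
# extra acquisition per dimension (step sizes are illustrative).
import numpy as np

def fd_gradient_descent(f, x0, step=0.1, delta=0.05, n_iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        fx = f(x)                        # baseline measurement
        grad = np.empty_like(x)
        for d in range(x.size):          # one extra measurement per dimension
            e = np.zeros_like(x)
            e[d] = delta
            grad[d] = (f(x + e) - fx) / delta  # forward finite difference
        x = x - step * grad              # first-order update rule
    return x
```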

The computational bottleneck of Bayesian optimization is also an important consideration for high-dimensional problems. The computational cost of the inference step in Bayesian optimization is dominated by solving the linear system of equations, (K+\sigma^{2}I)^{-1}\mathbf{y}, which scales as \mathcal{O}(n^{3}) with \mathcal{O}(n^{2}) storage, where n is the total number of sampling points. The predictive mean and variance scale as \mathcal{O}(n) and \mathcal{O}(n^{2}), respectively, per test point. This implies that for a large number of iterations, the computational cost can become prohibitive. To reduce the computational complexity, many approximation schemes have been proposed Wilson and Nickisch (2015). For example, it may be possible to use the KISS-GP approach of Wilson and Nickisch Wilson and Nickisch (2015), which reduces the inference complexity to \mathcal{O}(n) and the test-point prediction complexity to \mathcal{O}(1). This approach decomposes covariance matrices in terms of Kronecker products of Toeplitz matrices, which naturally occur on regularly spaced 1D grids and which are then approximated by circulant matrices. By performing local kernel interpolation, it then becomes possible to speed up Bayesian optimization, resulting in the much-improved computational cost mentioned above. Other methods such as covariance matrix decomposition, likelihood approximations, penalized likelihood, low-rank approximations, and graphical representations may also be used for scalability and will be explored in future work Geoga et al. (2020); Stein et al. (2004); Bilionis et al. (2013); Krock et al. (2021); Constantinescu and Anitescu (2013); Constantinescu et al. (2006).
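The \mathcal{O}(n^{3}) bottleneck can be seen directly in a minimal sketch of exact GP inference via a Cholesky factorization; the radial-basis-function kernel and its hyperparameters are our assumptions for illustration.

```python
# The O(n^3) bottleneck made explicit: exact GP inference with a Cholesky
# factorization of (K + sigma^2 I). The RBF kernel is our illustrative choice.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf_kernel(A, B, lengthscale=1.0):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, X_test, sigma=0.1, lengthscale=1.0):
    K = rbf_kernel(X, X, lengthscale)                # n x n covariance matrix
    L = cho_factor(K + sigma ** 2 * np.eye(len(X)))  # O(n^3) factorization
    alpha = cho_solve(L, y)                          # (K + sigma^2 I)^{-1} y
    Ks = rbf_kernel(X_test, X, lengthscale)
    mean = Ks @ alpha                                # O(n) per test point
    v = cho_solve(L, Ks.T)
    # RBF prior variance is 1 on the diagonal; O(n^2) work per test point.
    var = np.ones(len(X_test)) - np.sum(Ks * v.T, axis=1)
    return mean, var
```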

The advantage we have demonstrated in using Bayesian optimization for two-photon interference-based quantum communications is for the case where the parameters that we adjust are mutually independent. However, our demonstration could be extended to more complex scenarios, like linear quantum repeaters based on two-photon interference and entanglement swapping, which may not allow for mutually independent adjustment, depending on the approach. For instance, the emission wavelength of light from a laser may drift and will need to be adjusted. Thus, if photon pairs are created using SPDC, such adjustment will vary the wavelengths of both photons (or reduce the distribution rate, depending on the filters used). More crucially, the use of entangled states fundamentally links the properties of two photons together by way of the relative phase of the qubit bases Franson (1989), a property that is not considered in our demonstration, and one that is also relevant for single-photon-based repeaters Cabrillo et al. (1999) or twin-field QKD Lucamarini et al. (2018). For instance, the wavelength and phase of photons in an entangled pair are coupled, and thus an optimization algorithm will need to account for the increased complexity when more links are employed to reach greater distances, as well as for calibration of the measurement bases at the end of the communication channel (e.g. the relative phase of an interferometer when using time-bin encoding). Investigation of our method as well as other optimization methods, including their scaling with communication distance and how they apply to other network protocols, topologies, or encodings (e.g. all-photonic repeaters Azuma et al. (2015), larger Hilbert spaces Bergmann and van Loock (2019), or continuous variables Pirandola et al. (2015)), or to other quantum tasks Nielsen and Chuang (2002), will be considered in future work.

Conclusion

We have presented and demonstrated a GP Bayesian optimization framework for both accelerating and automating the calibration of photonic degrees of freedom in order to maximize photon indistinguishability, an inescapable task in future quantum networks. The methodology is sample-efficient, easy to use, and has small computational overhead for a small number of optimization dimensions (ranging from two to ten). It is also robust to noise and experimental imperfections, making it suitable as a plug-and-play approach for calibrating quantum network experiments from scratch. We have also implemented a theoretical model based on Gaussian operations to validate our optimization and generate baseline data. We envision that our optimization approach should be applicable to a wide variety of quantum optics experiments beyond the quantum networking focus of the current manuscript. For example, this approach could be useful for calibrating quantum optical computational devices Metcalf et al. (2014); Qiang et al. (2018); Spring et al. (2013); Arrazola et al. (2021), in particular in distributed computing architectures, as well as quantum imaging and spectroscopic methods Brida et al. (2010); Lemos et al. (2014), especially in prototype systems where loss and noise play a large role. In particular, as quantum networks become more complex and face greater demands on performance, efficient and practical optimization methods must be considered to overcome the deleterious impact of the environment on qubit transmission rates and fidelities.

Acknowledgements

We thank E. M. Constantinescu for discussions on Gaussian process modeling and R. Valivarthi for discussions in the early stages of this work. We thank Jean-Roch Vlimant for reading and commenting on the manuscript. The detectors were provided by V. B. Verma, F. Marsili, M. Shaw, and S.W. Nam. The SPDC waveguides were provided by L. Oesterling through a collaboration with W. Tittel. This work was performed, in part, at the Center for Nanoscale Materials, a U.S. Department of Energy Office of Science User Facility, and supported by the U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. This work is supported by the U. S. Department of Energy, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences operating under Contract Number DE-AC02-06CH11357. N.L. and N.S. acknowledge support from the AQT Intelligent Quantum Networks and Technologies (INQNET) research program and DOE/HEP QuantISED program grant, QCCFP/Quantum Machine Learning and Quantum Computation Frameworks (QCCFP-QMLQCF) for HEP, Grant No. DE-SC0019219. N.S. further acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC). P.L. and D.O. acknowledge support of the Natural Sciences and Engineering Research Council of Canada through the Discovery Grant program and the CREATE QUANTA training program, and the Alberta Ministry for Jobs, Economy and Innovation through the Major Innovation Fund Quantum Technologies Project (QMP).

References

  • Carleo and Troyer (2017) Giuseppe Carleo and Matthias Troyer, “Solving the quantum many-body problem with artificial neural networks,” Science 355, 602–606 (2017).
  • Havlíček et al. (2019) Vojtěch Havlíček, Antonio D Córcoles, Kristan Temme, Aram W Harrow, Abhinav Kandala, Jerry M Chow,  and Jay M Gambetta, “Supervised learning with quantum-enhanced feature spaces,” Nature 567, 209–212 (2019).
  • Bogaerts et al. (2020) Wim Bogaerts, Daniel Pérez, José Capmany, David AB Miller, Joyce Poon, Dirk Englund, Francesco Morichetti,  and Andrea Melloni, “Programmable photonic circuits,” Nature 586, 207–216 (2020).
  • Kokail et al. (2019) Christian Kokail, Christine Maier, Rick van Bijnen, Tiff Brydges, Manoj K Joshi, Petar Jurcevic, Christine A Muschik, Pietro Silvi, Rainer Blatt, Christian F Roos, et al., “Self-verifying variational quantum simulation of lattice models,” Nature 569, 355–360 (2019).
  • Mott et al. (2017) Alex Mott, Joshua Job, Jean-Roch Vlimant, Daniel Lidar,  and Maria Spiropulu, “Solving a higgs optimization problem with quantum annealing for machine learning,” Nature 550, 375–379 (2017).
  • Peruzzo et al. (2014) Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik,  and Jeremy L O’brien, “A variational eigenvalue solver on a photonic quantum processor,” Nature communications 5, 1–7 (2014).
  • Ding et al. (2020) Yongcheng Ding, José D Martín-Guerrero, Mikel Sanz, Rafael Magdalena-Benedicto, Xi Chen,  and Enrique Solano, “Retrieving quantum information with active learning,” Physical review letters 124, 140504 (2020).
  • Bonato et al. (2016) Cristian Bonato, Machiel S Blok, Hossein T Dinani, Dominic W Berry, Matthew L Markham, Daniel J Twitchen,  and Ronald Hanson, “Optimized quantum sensing with a single electron spin using real-time adaptive measurements,” Nature nanotechnology 11, 247–252 (2016).
  • Lumino et al. (2018) Alessandro Lumino, Emanuele Polino, Adil S Rab, Giorgio Milani, Nicolò Spagnolo, Nathan Wiebe,  and Fabio Sciarrino, “Experimental phase estimation enhanced by machine learning,” Physical Review Applied 10, 044033 (2018).
  • Preskill (2018) John Preskill, “Quantum computing in the nisq era and beyond,” Quantum 2, 79 (2018).
  • Gao et al. (2018) Jun Gao, Lu-Feng Qiao, Zhi-Qiang Jiao, Yue-Chi Ma, Cheng-Qiu Hu, Ruo-Jing Ren, Ai-Lin Yang, Hao Tang, Man-Hong Yung,  and Xian-Min Jin, “Experimental machine learning of quantum states,” Physical review letters 120, 240501 (2018).
  • Tachella et al. (2019) Julián Tachella, Yoann Altmann, Nicolas Mellado, Aongus McCarthy, Rachael Tobin, Gerald S Buller, Jean-Yves Tourneret,  and Stephen McLaughlin, “Real-time 3d reconstruction from single-photon lidar data using plug-and-play point cloud denoisers,” Nature communications 10, 1–6 (2019).
  • Wittler et al. (2021) Nicolas Wittler, Federico Roy, Kevin Pack, Max Werninghaus, Anurag Saha Roy, Daniel J Egger, Stefan Filipp, Frank K Wilhelm,  and Shai Machnes, “Integrated tool set for control, calibration, and characterization of quantum devices applied to superconducting qubits,” Physical Review Applied 15, 034080 (2021).
  • Simmerman et al. (2020) Emma M Simmerman, H-H Lu, Andrew M Weiner,  and Joseph M Lukens, “Efficient compressive and bayesian characterization of biphoton frequency spectra,” Optics letters 45, 2886–2889 (2020).
  • Wallnöfer et al. (2020) Julius Wallnöfer, Alexey A Melnikov, Wolfgang Dür,  and Hans J Briegel, “Machine learning for long-distance quantum communication,” PRX Quantum 1, 010301 (2020).
  • Ismail et al. (2019) Yaseera Ismail, Ilya Sinayskiy,  and Francesco Petruccione, “Integrating machine learning techniques in quantum communication to characterize the quantum channel,” JOSA B 36, B116–B121 (2019).
  • Melnikov et al. (2020) Alexey A Melnikov, Pavel Sekatski,  and Nicolas Sangouard, “Setting up experimental bell tests with reinforcement learning,” Physical Review Letters 125, 160401 (2020).
  • Kottmann et al. (2021) Jakob Kottmann, Mario Krenn, Thi Ha Kyaw, Sumner Alperin-Lea,  and Alán Aspuru-Guzik, “Quantum computer-aided design of quantum optics hardware,” Quantum Science and Technology  (2021).
  • Kimble (2008) H Jeff Kimble, “The quantum internet,” Nature 453, 1023–1030 (2008).
  • Pirandola et al. (2015) Stefano Pirandola, Jens Eisert, Christian Weedbrook, Akira Furusawa,  and Samuel L Braunstein, “Advances in quantum teleportation,” Nature photonics 9, 641–652 (2015).
  • Moehring et al. (2007) David L Moehring, Peter Maunz, Steve Olmschenk, Kelly C Younge, Dzmitry N Matsukevich, L-M Duan,  and Christopher Monroe, “Entanglement of single-atom quantum bits at a distance,” Nature 449, 68–71 (2007).
  • Delteil et al. (2016) Aymeric Delteil, Zhe Sun, Wei-bo Gao, Emre Togan, Stefan Faelt,  and Ataç Imamoğlu, “Generation of heralded entanglement between distant hole spins,” Nature Physics 12, 218–223 (2016).
  • Pompili et al. (2021) Matteo Pompili, Sophie LN Hermans, Simon Baier, Hans KC Beukers, Peter C Humphreys, Raymond N Schouten, Raymond FL Vermeulen, Marijn J Tiggelman, Laura dos Santos Martins, Bas Dirkse, et al., “Realization of a multinode quantum network of remote solid-state qubits,” Science 372, 259–264 (2021).
  • Bhaskar et al. (2020) Mihir K Bhaskar, Ralf Riedinger, Bartholomeus Machielse, David S Levonian, Christian T Nguyen, Erik N Knall, Hongkun Park, Dirk Englund, Marko Lončar, Denis D Sukachev, et al., “Experimental demonstration of memory-enhanced quantum communication,” Nature 580, 60–64 (2020).
  • Daiss et al. (2021) Severin Daiss, Stefan Langenfeld, Stephan Welte, Emanuele Distante, Philip Thomas, Lukas Hartung, Olivier Morin,  and Gerhard Rempe, “A quantum-logic gate between distant quantum-network modules,” Science 371, 614–617 (2021).
  • Guccione et al. (2020) Giovanni Guccione, Tom Darras, Hanna Le Jeannic, Varun B Verma, Sae Woo Nam, Adrien Cavaillès,  and Julien Laurat, “Connecting heterogeneous quantum networks by hybrid entanglement swapping,” Science Advances 6, eaba4508 (2020).
  • Lucamarini et al. (2018) Marco Lucamarini, Zhiliang L Yuan, James F Dynes,  and Andrew J Shields, “Overcoming the rate–distance limit of quantum key distribution without quantum repeaters,” Nature 557, 400–403 (2018).
  • Herbst et al. (2015) Thomas Herbst, Thomas Scheidl, Matthias Fink, Johannes Handsteiner, Bernhard Wittmann, Rupert Ursin,  and Anton Zeilinger, “Teleportation of entanglement over 143 km,” Proceedings of the National Academy of Sciences 112, 14202–14205 (2015), https://www.pnas.org/content/112/46/14202.full.pdf.
  • Ren et al. (2017) Ji-Gang Ren, Ping Xu, Hai-Lin Yong, Liang Zhang, Sheng-Kai Liao, Juan Yin, Wei-Yue Liu, Wen-Qi Cai, Meng Yang, Li Li, et al., “Ground-to-satellite quantum teleportation,” Nature 549, 70–73 (2017).
  • Du et al. (2021) Dounan Du, Paul Stankus, Olli-Pentti Saira, Mael Flament, Steven Sagona-Stophel, Mehdi Namazi, Dimitrios Katramatos,  and Eden Figueroa, “An elementary 158 km long quantum network connecting room temperature quantum memories,” arXiv preprint arXiv:2101.12742  (2021).
  • Valivarthi et al. (2016) Raju Valivarthi, Qiang Zhou, Gabriel H Aguilar, Varun B Verma, Francesco Marsili, Matthew D Shaw, Sae Woo Nam, Daniel Oblak, Wolfgang Tittel, et al., “Quantum teleportation across a metropolitan fibre network,” Nature Photonics 10, 676–680 (2016).
  • Lago-Rivera et al. (2021) Dario Lago-Rivera, Samuele Grandi, Jelena V Rakonjac, Alessandro Seri,  and Hugues de Riedmatten, “Telecom-heralded entanglement between multimode solid-state quantum memories,” Nature 594, 37–40 (2021).
  • Liu et al. (2021) Xiao Liu, Jun Hu, Zong-Feng Li, Xue Li, Pei-Yun Li, Peng-Jun Liang, Zong-Quan Zhou, Chuan-Feng Li,  and Guang-Can Guo, “Heralded entanglement distribution between two absorptive quantum memories,” Nature 594, 41–45 (2021).
  • Gisin et al. (2002) Nicolas Gisin, Grégoire Ribordy, Wolfgang Tittel,  and Hugo Zbinden, “Quantum cryptography,” Reviews of modern physics 74, 145 (2002).
  • Michler et al. (1996) Markus Michler, Klaus Mattle, Harald Weinfurter,  and Anton Zeilinger, “Interferometric bell-state analysis,” Phys. Rev. A 53, R1209–R1212 (1996).
  • Lo et al. (2012) Hoi-Kwong Lo, Marcos Curty,  and Bing Qi, “Measurement-device-independent quantum key distribution,” Physical Review Letters 108, 130503 (2012).
  • Briegel et al. (1998) H-J Briegel, Wolfgang Dür, Juan I Cirac,  and Peter Zoller, “Quantum repeaters: the role of imperfect local operations in quantum communication,” Physical Review Letters 81, 5932 (1998).
  • Rasmussen and Williams (2006) C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning, Adaptive Computation and Machine Learning (MIT Press, Cambridge, MA, USA, 2006) p. 248.
  • Miller et al. (2010) Ryan L. Miller, Zhen Xie, Sven Leyffer, Michael J. Davis,  and Stephen K. Gray, “Surrogate-Based Modeling of the Optical Response of Metallic Nanostructures,” Journal of Physical Chemistry C 114, 20741–20748 (2010).
  • Miller et al. (2012) Ryan L. Miller, Lawrence B. Harding, Michael J. Davis,  and Stephen K. Gray, “Bi-fidelity fitting and optimization,” Journal of Chemical Physics 136 (2012).
  • Hong et al. (1987) Chong-Ki Hong, Zhe-Yu Ou,  and Leonard Mandel, “Measurement of subpicosecond time intervals between two photons by interference,” Physical review letters 59, 2044 (1987).
  • Beck (2007) M. Beck, “Comparing measurements of g^{(2)}(0) performed with different coincidence detection techniques,” Journal of the Optical Society of America B 24, 2972–2978 (2007).
  • Valivarthi et al. (2020) Raju Valivarthi, Samantha I Davis, Cristián Peña, Si Xie, Nikolai Lauk, Lautaro Narváez, Jason P Allmaras, Andrew D Beyer, Yewon Gim, Meraj Hussein, et al., “Teleportation systems toward a quantum internet,” PRX Quantum 1, 020317 (2020).
  • Nielsen and Chuang (2002) Michael A Nielsen and Isaac Chuang, “Quantum computation and quantum information,”  (2002).
  • Metcalf et al. (2014) Benjamin J Metcalf, Justin B Spring, Peter C Humphreys, Nicholas Thomas-Peter, Marco Barbieri, W Steven Kolthammer, Xian-Min Jin, Nathan K Langford, Dmytro Kundys, James C Gates, et al., “Quantum teleportation on a photonic chip,” Nature photonics 8, 770–774 (2014).
  • Burnham and Weinberg (1970) David C Burnham and Donald L Weinberg, “Observation of simultaneity in parametric production of optical photon pairs,” Physical Review Letters 25, 84 (1970).
  • Jin et al. (2013) Jeongwan Jin, Joshua A Slater, Erhan Saglamyurek, Neil Sinclair, Mathew George, Raimund Ricken, Daniel Oblak, Wolfgang Sohler,  and Wolfgang Tittel, “Two-photon interference of weak coherent laser pulses recalled from separate solid-state quantum memories,” Nature communications 4, 1–7 (2013).
  • Lauk et al. (in preparation) Nikolai Lauk et al., manuscript in preparation.
  • Takeoka et al. (2015) Masahiro Takeoka, Rui-Bo Jin,  and Masahide Sasaki, “Full analysis of multi-photon pair effects in spontaneous parametric down conversion based photonic quantum information processing,” New Journal of Physics 17, 043030 (2015).
  • Wilson and Nickisch (2015) Andrew Wilson and Hannes Nickisch, “Kernel interpolation for scalable structured gaussian processes (kiss-gp),” in International Conference on Machine Learning (PMLR, 2015) pp. 1775–1784.
  • Geoga et al. (2020) Christopher J Geoga, Mihai Anitescu,  and Michael L Stein, “Scalable gaussian process computations using hierarchical matrices,” Journal of Computational and Graphical Statistics 29, 227–237 (2020).
  • Stein et al. (2004) Michael L Stein, Zhiyi Chi,  and Leah J Welty, “Approximating likelihoods for large spatial data sets,” Journal of the Royal Statistical Society: Series B (Statistical Methodology) 66, 275–296 (2004).
  • Bilionis et al. (2013) Ilias Bilionis, Nicholas Zabaras, Bledar A Konomi,  and Guang Lin, “Multi-output separable gaussian process: Towards an efficient, fully bayesian paradigm for uncertainty quantification,” Journal of Computational Physics 241, 212–239 (2013).
  • Krock et al. (2021) Mitchell Krock, William Kleiber, Dorit Hammerling,  and Stephen Becker, “Modeling massive multivariate spatial data with the basis graphical lasso,” arXiv preprint arXiv:2101.02404  (2021).
  • Constantinescu and Anitescu (2013) Emil M Constantinescu and Mihai Anitescu, “Physics-based covariance models for gaussian processes with multiple outputs,” International Journal for Uncertainty Quantification 3 (2013).
  • Constantinescu et al. (2006) Emil M Constantinescu, Tianfeng Chai, Adrian Sandu,  and Gregory R Carmichael, Autoregressive models of background errors for chemical data assimilation, Tech. Rep. (Department of Computer Science, Virginia Polytechnic Institute & State …, 2006).
  • Franson (1989) James D Franson, “Bell inequality for position and time,” Physical review letters 62, 2205 (1989).
  • Cabrillo et al. (1999) Carlos Cabrillo, J Ignacio Cirac, Pablo Garcia-Fernandez,  and Peter Zoller, “Creation of entangled states of distant atoms by interference,” Physical Review A 59, 1025 (1999).
  • Azuma et al. (2015) Koji Azuma, Kiyoshi Tamaki,  and Hoi-Kwong Lo, “All-photonic quantum repeaters,” Nature communications 6, 1–7 (2015).
  • Bergmann and van Loock (2019) Marcel Bergmann and Peter van Loock, “Hybrid quantum repeater for qudits,” Physical Review A 99, 032349 (2019).
  • Qiang et al. (2018) Xiaogang Qiang, Xiaoqi Zhou, Jianwei Wang, Callum M Wilkes, Thomas Loke, Sean O’Gara, Laurent Kling, Graham D Marshall, Raffaele Santagati, Timothy C Ralph, et al., “Large-scale silicon quantum photonics implementing arbitrary two-qubit processing,” Nature photonics 12, 534–539 (2018).
  • Spring et al. (2013) Justin B Spring, Benjamin J Metcalf, Peter C Humphreys, W Steven Kolthammer, Xian-Min Jin, Marco Barbieri, Animesh Datta, Nicholas Thomas-Peter, Nathan K Langford, Dmytro Kundys, et al., “Boson sampling on a photonic chip,” Science 339, 798–801 (2013).
  • Arrazola et al. (2021) JM Arrazola, V Bergholm, K Brádler, TR Bromley, MJ Collins, I Dhand, A Fumagalli, T Gerrits, A Goussev, LG Helt, et al., “Quantum circuits with many photons on a programmable nanophotonic chip,” Nature 591, 54–60 (2021).
  • Brida et al. (2010) Giorgio Brida, Marco Genovese,  and I Ruo Berchera, “Experimental realization of sub-shot-noise quantum imaging,” Nature Photonics 4, 227–230 (2010).
  • Lemos et al. (2014) Gabriela Barreto Lemos, Victoria Borish, Garrett D Cole, Sven Ramelow, Radek Lapkiewicz,  and Anton Zeilinger, “Quantum imaging with undetected photons,” Nature 512, 409–412 (2014).

Appendix

.1 Bayesian optimization overview

Bayesian optimization is a sample-efficient technique that performs sequential optimization of time-consuming black-box functions. There are two main steps to any Bayesian optimization algorithm: (1) the construction of a conditional probabilistic model for the objective function based on a set of observations, and (2) the construction of an acquisition function that uses this model to predict future observations that optimize the objective function of interest. The first step requires an understanding of Bayesian inference, which quantifies how we can update our belief of a particular hypothesis of the objective function based on the current set of observations. The second step requires defining an effective criterion that may be used to predict new observation points that will (most likely) be close to the optimum. In the following section, we will introduce basic concepts related to Bayesian inference. The second section will define Gaussian Processes (GP) which are often used as computationally tractable surrogate models for Bayesian optimization. The third section will present various kernel functions relevant to Bayesian optimization. The fourth section will define and discuss various acquisition functions which are often used in practice. The fifth section will define the quantum network black-box objective used for this study.

.2 Bayesian Inference

Bayesian inference aims to construct a conditional probability distribution p(\mathcal{H}|\mathcal{D}) for a hypothesis \mathcal{H} based on observed data \mathcal{D}. For our purposes, the hypothesis represents the real values of an objective function f(\mathbf{x}), where \mathbf{x} is a set of experimental knobs/parameters within the quantum optical experiment. In the context of quantum networks, we are interested in maximizing photon indistinguishability in order to maximize the performance of a wide variety of quantum network operations and protocols. A single-measurement objective function able to quantify the degree of photon indistinguishability is the normalized second-order photon correlation function, f(\mathbf{x})\equiv g^{(2)}(0)[\mathbf{x}], measured with the Hong-Ou-Mandel experimental set-up shown in the main text. Here, the goal is to minimize the second-order correlation function in order to maximize indistinguishability. For general Bayesian optimization, the experimental knobs can be categorical, integer, or real-valued quantities, but for our purposes we will only consider real-valued quantities such as the polarization rotation angle, the delay stage position, and the frequency filter bandwidth.

In the following, we define the basic concepts of Bayesian inference without reference to the photon correlation function; later, we tailor the likelihood and kernel functions of the Gaussian Process in order to adapt the Bayesian optimization algorithm to the quantum network problem.

Bayes Theorem

Let X=[\mathbf{x}^{(1)},\cdots,\mathbf{x}^{(n)}] represent the matrix of observed input vectors \mathbf{x}^{(i)}=(x_{1}^{(i)},x_{2}^{(i)},\cdots,x_{D}^{(i)}) defined in a D-dimensional space, and let \mathbf{y}=[y^{(1)},\cdots,y^{(n)}]^{T} represent the vector of observed outputs y^{(i)}. The posterior distribution p(\mathcal{H}|\mathcal{D}) conditioned on the observed data \mathcal{D}=\{X,\mathbf{y}\} is computed using Bayes’ theorem,

p(\mathcal{H}|\mathcal{D}) = \frac{p(\mathcal{D}|\mathcal{H})\, p(\mathcal{H})}{p(\mathcal{D})}, (A.1)

where p(\mathcal{D}|\mathcal{H}) is the likelihood function and p(\mathcal{H}) is the prior probability distribution for the hypothesis \mathcal{H}. Bayesian inference uses Bayes’ theorem (A.1) to update the probability of our hypothesis \mathcal{H} as more information becomes available. The Bayesian approach provides several advantages: (i) it provides a full probabilistic description of our hypothesis (such as parameter estimates) rather than point estimates, (ii) it is generally more robust to noise and outliers, (iii) it allows for the inclusion of prior knowledge, and (iv) it is straightforward to use in the small-sample-size limit. For a given posterior distribution, the posterior predictive distribution for a new data point \mathcal{D}^{*}=\{\mathbf{x}^{*},y^{*}\} is defined as:

p(\mathcal{D}^{*}|\mathcal{D})=\int p(\mathcal{D}^{*}|\mathcal{H})p(\mathcal{H}|\mathcal{D})d\mathcal{H}. (A.2)

This quantity predicts the distribution of new, unobserved data. Our hypothesis and prior knowledge control the convergence of the Bayesian optimization algorithm; we discuss Gaussian Process priors below. See Ref. rasmussen2006 for a more detailed discussion of Bayesian optimization and Gaussian processes.
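To make Eqs. (A.1) and (A.2) concrete, consider the textbook conjugate-Gaussian case, where the posterior and posterior predictive distributions are available in closed form. The following minimal Python sketch, with a hypothetical noise level, prior, and observations chosen purely for illustration, performs the update:

import numpy as np

# Infer an unknown mean theta from noisy observations y_i ~ N(theta, sigma^2),
# with conjugate prior theta ~ N(m0, s0^2); all numbers are hypothetical.
sigma2, m0, s02 = 0.5**2, 0.0, 1.0
y = np.array([0.9, 1.1, 1.3])
n = len(y)

s2_post = 1.0 / (1.0 / s02 + n / sigma2)           # posterior variance
m_post = s2_post * (m0 / s02 + y.sum() / sigma2)   # posterior mean, via Eq. (A.1)
# Posterior predictive for a new observation, Eq. (A.2): N(m_post, s2_post + sigma2)
print(m_post, s2_post)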

.3 Gaussian Processes

For arbitrary likelihood and prior distributions, exact calculation of the posterior distribution is not tractable. In particular, the denominator, which normalizes the posterior so that it remains a valid probability distribution, requires the calculation of a high-dimensional integral that is generally intractable. Monte-Carlo-based methods can be used, but they are computationally expensive and ultimately become intractable in high dimensions. By assuming that the likelihood and prior are multivariate Gaussian distributions, the posterior and predictive posterior can be calculated exactly. In the Gaussian Process framework, we assume that the training and test data have Gaussian noise and may be written in the form,

y^{(i)}=f(\mathbf{x}^{(i)})+\epsilon^{(i)}, (A.3)

where \epsilon^{(i)} corresponds to Gaussian-distributed noise with zero mean, p(\epsilon)=\mathcal{N}(0,\sigma^{2}). The likelihood of the observed data \mathcal{D} is then given by:

p(\mathbf{y}|X,\mathbf{f})=\mathcal{N}(\mathbf{f},\sigma^{2}I). (A.4)

The “noiseless” signal \mathbf{f}=[f(\mathbf{x}^{(1)}),f(\mathbf{x}^{(2)}),\cdots,f(\mathbf{x}^{(n)})]^{T} plays the role of the hypothesis \mathcal{H} and corresponds to the parameters we wish to estimate. In this regard, the Gaussian Process approach is model-agnostic, which may be advantageous in scenarios where parametric models are not available, or are undesirable because they would introduce model bias. Since the noiseless vector \mathbf{f} plays the role of the effective parameters, we must define a prior distribution over these parameters. The Gaussian Process framework assumes the prior is a Gaussian Process, written as f(\mathbf{x})\sim\mathcal{GP}(\mu(\mathbf{x}),k(\mathbf{x},\mathbf{x}^{\prime})), where the symbol “\sim” is read as “is sampled from” or “is distributed as.” This means we treat the noiseless outputs f(\mathbf{x}^{(i)}) as random variables sampled from the Gaussian process distribution. Note that a Gaussian process is a distribution over functions completely defined by the mean function \mu(\mathbf{x})=\mathbb{E}[f(\mathbf{x})] and the covariance function k(\mathbf{x},\mathbf{x}^{\prime})=\mathbb{E}[(f(\mathbf{x})-\mu(\mathbf{x}))(f(\mathbf{x}^{\prime})-\mu(\mathbf{x}^{\prime}))]. The prior distribution for \mathbf{f} can therefore be written as a multivariate Gaussian,

p(\mathbf{f})=\mathcal{N}(\mu(\mathbf{x}),K), (A.5)

where K is the kernel covariance matrix,

K \equiv K(X,X) = \begin{bmatrix} k(\mathbf{x}^{(1)},\mathbf{x}^{(1)}) & k(\mathbf{x}^{(1)},\mathbf{x}^{(2)}) & \cdots & k(\mathbf{x}^{(1)},\mathbf{x}^{(n)}) \\ k(\mathbf{x}^{(2)},\mathbf{x}^{(1)}) & k(\mathbf{x}^{(2)},\mathbf{x}^{(2)}) & \cdots & k(\mathbf{x}^{(2)},\mathbf{x}^{(n)}) \\ \vdots & \vdots & \ddots & \vdots \\ k(\mathbf{x}^{(n)},\mathbf{x}^{(1)}) & k(\mathbf{x}^{(n)},\mathbf{x}^{(2)}) & \cdots & k(\mathbf{x}^{(n)},\mathbf{x}^{(n)}) \end{bmatrix}, (A.6)

and must be a positive semi-definite matrix. Given the likelihood (A.4) and prior distribution (A.5), it is possible to calculate the posterior as:

p(\mathbf{f}|X,\mathbf{y}) = \frac{p(\mathbf{y}|X,\mathbf{f})\,p(\mathbf{f})}{p(\mathbf{y}|X)} = \mathcal{N}\left(K(K+\sigma^{2}I)^{-1}\mathbf{y},\;\sigma^{2}(K+\sigma^{2}I)^{-1}K\right), (A.7)

where, as an aside, it is worth noting that \sigma^{2}(K+\sigma^{2}I)^{-1}K=K-K(K+\sigma^{2}I)^{-1}K; the latter form is the one often found in textbooks. Let us now calculate the predictive distribution for unobserved data, \mathcal{D}^{*}=\{X^{*},\mathbf{y}^{*}\}. The joint Gaussian distribution of the observed outputs and the unobserved function values is written as:

p\begin{pmatrix}\mathbf{y}\\ \mathbf{f}^{*}\end{pmatrix}=\mathcal{N}\left[\begin{pmatrix}\mu(\mathbf{x})\\ \mu(\mathbf{x}^{*})\end{pmatrix},\begin{pmatrix}K_{x,x}+\sigma^{2}I&K_{x,x^{*}}\\ K_{x^{*},x}&K_{x^{*},x^{*}}\end{pmatrix}\right]. (A.8)

The predictive distribution, p(\mathbf{f}^{*}|\mathcal{D})=\int p(\mathbf{f}^{*}|\mathbf{f})p(\mathbf{f}|\mathcal{D})d\mathbf{f}, is then found to be:

p(\mathbf{f}^{*}|\mathcal{D})=\mathcal{N}(\mathbf{f}^{*}|\mu_{p}(\mathbf{x}^{*}),\Sigma_{p}(\mathbf{x}^{*})), (A.9)

with \mu_{p}(\mathbf{x}^{*}) and \Sigma_{p}(\mathbf{x}^{*}) corresponding to the predicted mean and variance of the model at point \mathbf{x}^{*},

\mu_{p}(\mathbf{x}^{*}) = \mu(\mathbf{x}^{*}) + k_{*}^{T}(K+\sigma^{2}I)^{-1}(\mathbf{y}-\mu(\mathbf{x})), (A.10)
\Sigma_{p}(\mathbf{x}^{*}) = k(\mathbf{x}^{*},\mathbf{x}^{*}) - k_{*}^{T}(K+\sigma^{2}I)^{-1}k_{*}, (A.11)

where k_{*}=k(\mathbf{x},\mathbf{x}^{*}) is an (n\times m) matrix corresponding to the n training points \mathbf{x} and m test points \mathbf{x}^{*}. Equations (A.10) and (A.11) are the key equations describing Gaussian-Process-based Bayesian inference.
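As an illustration of Eqs. (A.10) and (A.11), the following minimal numpy sketch implements the Gaussian Process predictive equations. It assumes a kernel callable kernel(A, B) that returns the Gram matrix of Eq. (A.6); it is an illustrative sketch, not the in-house code used for the benchmarks below.

import numpy as np

def gp_predict(X, y, X_star, kernel, sigma2, mu=0.0):
    # X: (n, D) observed inputs, y: (n,) noisy outputs, X_star: (m, D) test inputs,
    # kernel: callable k(A, B), sigma2: noise variance, mu: constant prior mean.
    K = kernel(X, X)                         # Gram matrix, Eq. (A.6)
    k_star = kernel(X, X_star)               # (n, m) cross-covariance
    A = K + sigma2 * np.eye(len(X))
    alpha = np.linalg.solve(A, y - mu)       # avoids forming the matrix inverse
    mean = mu + k_star.T @ alpha             # predictive mean, Eq. (A.10)
    cov = kernel(X_star, X_star) - k_star.T @ np.linalg.solve(A, k_star)  # Eq. (A.11)
    return mean, np.diag(cov)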

.4 Kernel functions

The covariance function encodes information about the shape and structure of the objective function, and the choice of kernel ultimately affects the convergence rate of the Bayesian optimization algorithm. Below, we list several well-known kernel functions from the literature. For example, the squared exponential kernel is written as:

k(\mathbf{x},\mathbf{x}^{\prime})=\exp\left(-\frac{||\mathbf{x}-\mathbf{x}^{\prime}||^{2}}{2\ell^{2}}\right), (A.12)

which ensures that nearby points have similar function values within the length scale given by \ell. The periodic kernel is given by:

k(\mathbf{x},\mathbf{x}^{\prime})=\exp\left(-\frac{2\sin^{2}(\pi||\mathbf{x}-\mathbf{x}^{\prime}||/p)}{\ell^{2}}\right), (A.13)

where \ell is a length scale parameter and p is the periodicity of the kernel. Finally, we also consider the Matern kernel,

k(\mathbf{x},\mathbf{x}^{\prime})=\frac{1}{\Gamma(\nu)2^{\nu-1}}\left(\tfrac{\sqrt{2\nu}}{\ell}||\mathbf{x}-\mathbf{x}^{\prime}||\right)^{\nu}K_{\nu}\!\left(\tfrac{\sqrt{2\nu}}{\ell}||\mathbf{x}-\mathbf{x}^{\prime}||\right), (A.14)

where K_{\nu} is a modified Bessel function of the second kind and \Gamma is the gamma function. The parameter \nu controls the smoothness of the function: the smaller \nu, the less smooth the sampled functions. A comparison of the kernels is shown in Appendix Fig. 1, demonstrating that the Matern kernel provides the best performance among the kernels tested.
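For concreteness, the three kernels can be sketched in a few lines of Python; the Matern expression follows Eq. (A.14) directly, with the zero-distance limit handled explicitly. Any of these callables can be passed as the kernel argument of the gp_predict sketch above.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import gamma, kv   # kv is the modified Bessel function K_nu

def squared_exponential(A, B, ell=1.0):               # Eq. (A.12)
    return np.exp(-cdist(A, B, "sqeuclidean") / (2.0 * ell**2))

def periodic(A, B, ell=1.0, p=1.0):                   # Eq. (A.13)
    d = cdist(A, B, "euclidean")
    return np.exp(-2.0 * np.sin(np.pi * d / p)**2 / ell**2)

def matern(A, B, ell=1.0, nu=2.5):                    # Eq. (A.14)
    d = cdist(A, B, "euclidean")
    s = np.sqrt(2.0 * nu) * d / ell
    K = s**nu * kv(nu, s) / (gamma(nu) * 2.0**(nu - 1.0))
    K[d == 0.0] = 1.0                                 # limiting value as d -> 0
    return K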

Learning the hyperparameters of the kernel

Given a data set \mathcal{D}, the kernel hyperparameters such as \ell and p (collectively denoted \theta) strongly affect the final GP surrogate model predictions. How do we choose the hyperparameter values used in the posterior p(\mathbf{f}|\mathcal{D})? The idea is to calculate the probability of observing the given data under our prior, p(\mathbf{y}|X,\theta)=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|X,\theta)d\mathbf{f}, which is referred to as the marginal likelihood. The hyperparameters \theta can be determined by maximum likelihood estimation of the marginal likelihood p(\mathbf{y}|X,\theta)=\mathcal{N}(\mu(\mathbf{x}),k_{\theta}(\mathbf{x},\mathbf{x}^{\prime})+\sigma^{2}I). Taking the logarithm of this function, we obtain

\ln p(\mathbf{y}|X,\theta) = -\frac{1}{2}\ln\det\left(k_{\theta}(\mathbf{x},\mathbf{x}^{\prime})+\sigma^{2}I\right) - \frac{1}{2}(\mathbf{y}-\mathbf{\mu})^{T}\left(k_{\theta}(\mathbf{x},\mathbf{x}^{\prime})+\sigma^{2}I\right)^{-1}(\mathbf{y}-\mathbf{\mu}), (A.15)

where we have dropped terms that do not depend on the kernel hyperparameters \theta. The first term corresponds to the volume of the prior and becomes large when that volume is small (i.e. when the model is simple), while the second term becomes large when the data fit the model well. Assuming that the posterior distribution over the hyperparameters \theta is well-concentrated, we can approximate the predictive posterior as

p(\mathbf{f}^{*}|\mathcal{D})\approx p(\mathbf{f}^{*}|\mathcal{D},\theta_{MLE}), (A.16)

where \theta_{MLE} corresponds to the hyperparameters that maximize the log-likelihood (A.15),

\theta_{MLE}=\text{argmax}_{\theta}\;p(\mathbf{y}|X,\theta). (A.17)
Appendix Fig. 1: Convergence plots for different kernels including the radial basis function (RBF) kernel (also known as the squared exponential kernel), the periodic kernel, and the Matern kernel (\nu=5/2), using the 3D baseline model for testing purposes.
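The maximization in Eq. (A.17) can be carried out with a generic optimizer. The sketch below minimizes the negative of Eq. (A.15) over a log-parametrized length scale; it assumes the matern kernel sketched above and hypothetical training arrays X_train and y_train.

import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_theta, X, y, kernel, sigma2):
    # Negative of Eq. (A.15) with zero prior mean; the log-parametrization
    # keeps the length scale positive during the optimization.
    ell = np.exp(log_theta[0])
    K = kernel(X, X, ell=ell) + sigma2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * logdet + 0.5 * y @ np.linalg.solve(K, y)

# Eq. (A.17): X_train and y_train are hypothetical measurement arrays.
res = minimize(neg_log_marginal_likelihood, x0=[0.0],
               args=(X_train, y_train, matern, 1e-4), method="L-BFGS-B")
ell_mle = np.exp(res.x[0])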

.5 Acquisition Functions

An acquisition function a(\mathbf{x}) evaluates the expected loss associated with evaluating f(\mathbf{x}) at point \mathbf{x}, and the point with the lowest expected loss is selected. In the following, we compare three acquisition functions that are often used in the literature.

(i) Probability of improvement

Let f^{\prime} denote the minimum value of the objective function that has been observed so far. The probability of improvement aims to evaluate f at the location most likely to improve upon this value. Here, we define the utility function,

u(x)=\begin{cases}0 & f(x)>f^{\prime}\\ 1 & f(x)\leq f^{\prime},\end{cases} (A.18)

This utility function assigns a unit reward if f(x) is less than f^{\prime}, and no reward otherwise. The probability of improvement is the expected utility as a function of x:

a_{PI}(x^{*})=\mathbb{E}\left[u(x^{*})|x^{*},\mathcal{D}\right]=\int_{-\infty}^{f^{\prime}}\mathcal{N}(f;\mu(x^{*}),K(x^{*},x^{*}))\,\mathrm{d}f=\Phi(f^{\prime};\mu(x^{*}),K(x^{*},x^{*})). (A.19)

The point with the highest probability of improvement is selected. Note that this acquisition function provides a reward regardless of the size of improvement.

(ii) Expected improvement

The expected improvement improves on the previous criterion by making the reward depend on the size of the improvement, via the utility function:

u(x)=\max(0,f^{\prime}-f(x)). (A.20)

The acquisition function for the expected improvement is then written as:

a_{EI}(x^{*})=\mathbb{E}\left[u(x^{*})|x^{*},\mathcal{D}\right]=\int_{-\infty}^{f^{\prime}}(f^{\prime}-f)\,\mathcal{N}(f;\mu(x^{*}),K(x^{*},x^{*}))\,\mathrm{d}f=[f^{\prime}-\mu(x^{*})]\,\Phi(f^{\prime};\mu(x^{*}),K(x^{*},x^{*}))+K(x^{*},x^{*})\,\mathcal{N}(f^{\prime};\mu(x^{*}),K(x^{*},x^{*})). (A.21)

The first term depends on the size of the improvement, so it tends to be large for points whose predicted mean lies well below f^{\prime}. The second term depends on the variance at the point x^{*}; points with large variance carry large uncertainty, so it makes sense to explore them in order to reduce the uncertainty of the surrogate model. This acquisition function therefore encodes a trade-off between exploitation (the first term) and exploration (the second term).

(iii) Lower confidence bound

Here, the lower confidence bound is written as:

a_{LCB}(x^{*})=\mu(x^{*})-\beta\sigma(x^{*}), (A.22)

where \beta>0 is a trade-off parameter and \sigma(x^{*})=\sqrt{K(x^{*},x^{*})} is the standard deviation at point x^{*}. Unlike the previous two acquisition functions, this quantity cannot be interpreted as the expectation of a utility function; nevertheless, strong theoretical results imply that this acquisition function converges to the true global minimum of f under certain conditions. A benchmark comparison of the different acquisition functions is shown in Appendix Fig. 2, demonstrating that the lower confidence bound acquisition function, with trade-off parameter \beta=2, outperforms the other two.

Appendix Fig. 2: Convergence plots for different acquisition functions, where LCB refers to lower confidence bound, PI to probability of improvement, and EI to expected improvement.
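All three acquisition functions reduce to closed-form expressions in the GP predictive mean and standard deviation. A minimal sketch, written for minimization with the conventions of Eqs. (A.19), (A.21), and (A.22), is:

import numpy as np
from scipy.stats import norm

def probability_of_improvement(mu, sigma, f_best):    # Eq. (A.19); maximize
    return norm.cdf((f_best - mu) / sigma)

def expected_improvement(mu, sigma, f_best):          # Eq. (A.21); maximize
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def lower_confidence_bound(mu, sigma, beta=2.0):      # Eq. (A.22); minimize
    return mu - beta * sigma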

.6 Initial Sampling Scheme Comparison

While Bayesian optimization can start with zero knowledge of the objective function (i.e. an empty training set), convergence can be accelerated by using a set of judiciously chosen initial sampling points. Many sampling techniques are conventionally available, such as uniform or random sampling, Sobol sampling, Halton sampling, and Latin hypercube sampling (LHS). Using the skopt initial sampling package, we performed additional benchmarks with respect to the number of initial sampling points as well as the type of sampling used. In Appendix Fig. 3, we compare Sobol, Halton, random, conventional Latin hypercube, and maximin Latin hypercube sampling, showing that the maximin and conventional Latin hypercube schemes provided the best performance. For live testing purposes, we opted to use the conventional Latin hypercube sampling scheme; further work will be required to determine optimal initial sampling methods for other response functions of interest in quantum network calibration experiments.

Appendix Fig. 3: Convergence plots with different initial sampling schemes including Sobol, Halton, random, maximin Latin hypercube sampling, and conventional Latin hypercube sampling, using the 3D baseline model for testing purposes.
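For reference, low-discrepancy initial designs of the kind compared above can be generated with scipy's quasi-Monte-Carlo module, as in the sketch below; the knob bounds are hypothetical, and the maximin-optimized Latin hypercube variant used in our benchmarks is not built into scipy and requires a dedicated package.

from scipy.stats import qmc

d, n = 3, 10                                          # three knobs, ten points
samplers = {"lhs": qmc.LatinHypercube(d=d, seed=0),
            "sobol": qmc.Sobol(d=d, seed=0),
            "halton": qmc.Halton(d=d, seed=0)}
designs = {name: s.random(n) for name, s in samplers.items()}  # points in [0,1]^d
lower, upper = [0.0, -5.0, 2.0], [1.0, 5.0, 12.0]     # hypothetical knob ranges
X0 = qmc.scale(designs["lhs"], lower, upper)          # rescale to physical units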

.7 Summary of Bayesian Optimization Algorithm

The proposed Bayesian optimization algorithm can be summarized by the following steps; a minimal code sketch follows the list:

  1. Perform n initial measurements \mathbf{y}=(y_{1},\cdots,y_{n}) with respect to the experimental degrees of freedom, X=[\mathbf{x}_{1},\cdots,\mathbf{x}_{n}], using maximin Latin hypercube sampling.

  2. Build a Gaussian Process surrogate model based on the existing measurements.

  3. Suggest a new measurement location \mathbf{x}_{i+1} based on the acquisition function prediction.

  4. Repeat steps 2-3 until a stopping criterion is met.
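A minimal sketch of this loop, reusing the gp_predict, matern, and lower_confidence_bound helpers sketched earlier, is given below. For brevity it optimizes the acquisition function by random candidate search and omits the per-iteration hyperparameter re-optimization described above.

import numpy as np

def bayesian_optimization(objective, X0, bounds, n_iter, kernel, sigma2):
    X, y = list(X0), [objective(x) for x in X0]       # step 1: initial design
    for _ in range(n_iter):
        Xa, ya = np.asarray(X), np.asarray(y)
        # step 2: GP surrogate conditioned on all measurements so far
        cand = np.random.uniform(bounds[:, 0], bounds[:, 1],
                                 size=(2000, bounds.shape[0]))
        mu, var = gp_predict(Xa, ya, cand, kernel, sigma2)
        # step 3: the next measurement minimizes the lower confidence bound
        x_next = cand[np.argmin(lower_confidence_bound(mu, np.sqrt(var)))]
        X.append(x_next)
        y.append(objective(x_next))                   # step 4: measure and repeat
    best = int(np.argmin(y))
    return X[best], y[best]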

The Gaussian Process model built in step 2 of the Bayesian optimization algorithm above can be further decomposed into the following steps:

  1. Build the Gram matrix, equation (A.6), using the Matern kernel (\nu=5/2) of equation (A.14).

  2. Optimize the kernel hyperparameters based on the marginal likelihood, equations (A.15) and (A.17).

  3. Provide the predictive mean and variance based on equations (A.10) and (A.11).

.8 Numerical Implementation

The numerical implementation and benchmarking of the Bayesian optimization algorithm was done with in-house code written in Python using standard numpy and scipy linear algebra and optimization packages. However, a wide variety of excellent open-source implementations are available on GitHub. We recommend skopt (Bayesian optimization) and GPyTorch (Gaussian Process regression) Gpytorch, which provide an excellent starting point for initial testing. It is worth noting that we found customization of the open-source software difficult (for example, adding custom acquisition functions, kernel functions, and other functionality), which is why we performed the benchmarking shown in this Appendix with our in-house code. The live testing was performed with a modified version of the skopt Bayesian optimization algorithm.
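For initial testing with skopt, the entire loop can be driven by gp_minimize. In the sketch below, measure_g2 is a hypothetical stand-in for the experimental acquisition of g^{(2)}(0), and the knob ranges are illustrative only:

from skopt import gp_minimize

def g2_objective(knobs):
    # Hypothetical interface to the experiment: returns the measured g2(0)
    # for the given polarization angle, delay position, and filter bandwidth.
    return measure_g2(*knobs)

result = gp_minimize(g2_objective,
                     dimensions=[(0.0, 180.0), (-5.0, 5.0), (2.0, 12.0)],
                     acq_func="LCB", kappa=2.0,       # LCB with beta = 2
                     n_initial_points=10, initial_point_generator="lhs",
                     n_calls=50, random_state=0)
print(result.x, result.fun)                           # best knobs, best g2(0)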

.9 Hong-Ou-Mandel interference model

To model the experiment we use methods of phase-space quantum optics, in particular the characteristic function formalism LaukInPrep ; Takeoka2015 . This method allows us to describe many experimental imperfections, such as coupling losses, non-perfect detector efficiencies, higher photon number contributions, and photon number statistics, and is best suited to dealing with Gaussian quantum optical states. Gaussian states are states whose characteristic function, or in fact any other phase-space representation, is given by a multidimensional Gaussian function

\chi(\xi)=\exp\left(-\frac{1}{4}\xi\gamma\xi-id\xi\right), (A.23)

with d and \gamma being the displacement vector and covariance matrix of the system, respectively, and \xi\in\mathbb{R}^{2n} with n the number of independent bosonic modes of the system. Examples of Gaussian states include the vacuum state, coherent states, as well as squeezed vacuum and two-mode squeezed vacuum states. The latter two are directly relevant to our experimental studies since they describe the output states of degenerate and non-degenerate spontaneous parametric down-conversion, respectively. It is known that linear optic operations, such as beam splitters and phase shifters, preserve Gaussian states Weedbrook2012 ; Braunstein2005 , i.e. they map Gaussian states onto Gaussian states. For the characteristic function this means that for any linear optic operation there exists a symplectic matrix S that transforms the initial displacement vector and covariance matrix to their output forms, \gamma^{\prime}=S^{T}\gamma S and d^{\prime}=Sd.
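As a minimal illustration of this transformation rule, the sketch below builds the symplectic matrix of a beam splitter with transmittance T (quadrature ordering x_1, p_1, x_2, p_2) and applies it to an example input covariance matrix; the input state here is an arbitrary example, not a state from the experiment.

import numpy as np

def beamsplitter_symplectic(T):
    # Beam splitter with transmittance T acting on two modes (x1, p1, x2, p2).
    t, r = np.sqrt(T), np.sqrt(1.0 - T)
    I2 = np.eye(2)
    return np.block([[t * I2, r * I2], [-r * I2, t * I2]])

gamma_in = np.diag([1.0, 1.0, 3.0, 3.0])   # vacuum (gamma = I) and a thermal mode
S = beamsplitter_symplectic(0.5)
gamma_out = S.T @ gamma_in @ S             # gamma' = S^T gamma S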

Appendix Fig. 4: Representation of the model. The output of the type-0 SPDC crystal, described by a squeezed vacuum state, is mixed with vacuum at a beam splitter. The coupling efficiencies of the two arms are modeled by virtual beamsplitters with transmittances \eta_{u} and \eta_{d} for the upper and lower arm, respectively. Assuming equal coupling efficiencies, we can simulate the polarization match by varying \eta_{u} from 0 to \eta_{d}, corresponding to crossed and aligned polarizations, respectively. Another virtual beamsplitter with transmittance \zeta models the mode overlap of the two modes. Only the transmitted parts, corresponding to the indistinguishable proportion of the two photons, interfere with each other on the following beamsplitter, while the reflected parts are mixed with vacuum. A controllable phase shift \phi is also present in the bottom arm.

The model representation of the experimental setup is shown in Appendix Fig. 4. The output of the SPDC crystal is mixed with a vacuum input on the first beam splitter BS. The overall coupling efficiencies in the upper and lower arms are modelled by virtual beam splitters with transmittances \eta_{u} and \eta_{d}, respectively. The polarization degree of freedom can be modeled by a variable transmittance, e.g. \eta_{u}, that takes values from zero, corresponding to crossed polarizations, to some finite value \eta_{u}^{f}, corresponding to parallel polarizations. Another virtual beam splitter with a transmittance \zeta is used to model the path length, or time of arrival, degree of freedom. \zeta corresponds to the mode overlap of the arriving photons and takes values from 0, corresponding to completely distinguishable photon wave packets with zero overlap, to 1, corresponding to completely indistinguishable photons. To resemble the experimental data more closely we parametrize \zeta as a Gaussian function, \zeta=\exp(-x^{2}), reflecting the fact that the overlap of two Gaussian pulses is again a Gaussian function. As can be seen from the figure, only the transmitted parts interfere at the following beam splitter, whereas the reflected parts do not interfere and are mixed with a vacuum input instead. To model the phase averaging we introduce a phase shifter in the lower arm that shifts the relative phase between the two arms by an amount \phi. To obtain the experimental values of the (phase-dependent) quantities we average over \phi from 0 to 2\pi. In the experimental setup this averaging is achieved by the phase modulator in one of the arms continuously sweeping over a 2\pi phase shift.

After determining the individual symplectic transformations and combining them, we obtain the covariance matrix that completely describes the final state. Using the characteristic function description we can now calculate the probabilities for single and coincidence photon detection as follows:

P_{D1/D2} = \mathrm{Tr}\{\hat{\rho}\,(1-\ket{0}\bra{0}_{1/2})\} = 1-\mathrm{Tr}\{\hat{\rho}\,(\ket{0}\bra{0})_{1/2}\}, (A.24)
P_{D1D2} = \mathrm{Tr}\{\hat{\rho}\,(1-\ket{0}\bra{0}_{1})(1-\ket{0}\bra{0}_{2})\} = 1-\mathrm{Tr}\{\hat{\rho}\,(\ket{0}\bra{0})_{1}\}-\mathrm{Tr}\{\hat{\rho}\,(\ket{0}\bra{0})_{2}\}+\mathrm{Tr}\{\hat{\rho}\,(\ket{0}\bra{0})_{1}(\ket{0}\bra{0})_{2}\}, (A.25)

where \ket{0}_{1/2} corresponds to the vacuum state of all modes impinging on detector D1/D2. From these we can now easily determine the normalized g^{(2)}(0) correlation function as:

g^{(2)}(0)=\frac{P_{D1D2}}{P_{D1}P_{D2}}. (A.26)

We studied the g^{(2)}(0) correlation function for different input states corresponding to different physical situations. If the underlying nonlinear process is degenerate down-conversion, i.e. two identical photons are created, the input state is described by a squeezed vacuum state with the corresponding covariance matrix

\gamma_{SV}=\begin{pmatrix}\mathrm{e}^{-2r}&0\\ 0&\mathrm{e}^{2r}\end{pmatrix}, (A.27)

where r is the squeezing parameter. This input corresponds to all of the experimental two-dimensional parameter space scans, i.e. the data in Fig. 3 of the main text. In Appendix Fig. 5a, g^{(2)} is plotted as a function of \zeta=\exp(-x^{2}) (path length difference) and \eta_{u} (loss due to polarization misalignment of the upper arm).

Appendix Fig. 5: Normalized g^{(2)}(0) correlation function for (a) a degenerate single-mode squeezed vacuum input state, (b) a non-degenerate two-mode squeezed vacuum input state, and (c) two thermal input states.

If the down-conversion process is non-degenerate, i.e. two different photons are created, for example at different frequencies, the input state is a two-mode squeezed vacuum state with the corresponding covariance matrix

\gamma_{TMSV}=\begin{pmatrix}\cosh^{2}(r)+\sinh^{2}(r)&0&2\cosh(r)\sinh(r)&0\\ 0&\cosh^{2}(r)+\sinh^{2}(r)&0&-2\cosh(r)\sinh(r)\\ 2\cosh(r)\sinh(r)&0&\cosh^{2}(r)+\sinh^{2}(r)&0\\ 0&-2\cosh(r)\sinh(r)&0&\cosh^{2}(r)+\sinh^{2}(r)\end{pmatrix}. (A.28)

This input corresponds to the case in the three-dimensional parameter space for which the spectral filters in the two arms have been shifted with respect to each other by an amount much greater than the bandwidth of the filters. However, in this case the photons in each arm are clearly distinguishable, so the final HOM interference will always be between distinguishable modes. We can tweak the model to accommodate this feature by introducing another virtual beamsplitter with reflectivity \zeta^{\prime}; essentially the same result can be achieved by fixing \zeta=0. The predicted g^{(2)}(0) is plotted as a function of \zeta=\exp(-x^{2}) (path length difference) and \eta_{u} (loss due to polarization misalignment of the upper arm) in Appendix Fig. 5b. The cross-correlation function does not depend on \zeta, but only on \eta_{u}, since the latter induces actual loss in the PBS, i.e. it changes the number of photons reaching the detector. In reality we do not fully achieve the scenario in which the filters are detuned by more than their bandwidth, as the filters are maximally shifted by 6 GHz with respect to each other while their bandwidth is about 12 GHz. Hence, the actual expected g^{(2)} will be a combination of the squeezed vacuum input case (Appendix Fig. 5a) and the non-degenerate two-mode squeezed vacuum case (Appendix Fig. 5b). The relative weight of the two contributions can be found by calculating the spectral overlap of the photons after the filters, which, for Gaussian filter pass-bands, again yields a Gaussian-shaped weight parameter as a function of the spectral separation of the two filters.

Finally, in a real-world implementation of a quantum network, the Bell state measurements will occur between two different entangled photon sources. This means that the two partaking photons will originate from different SPDC sources and thus will not be correlated. The input state in this case corresponds to two thermal states with the covariance matrix

\gamma_{\mathrm{th}}=\begin{pmatrix}1+2\mu&0&0&0\\ 0&1+2\mu&0&0\\ 0&0&1+2\mu^{\prime}&0\\ 0&0&0&1+2\mu^{\prime}\end{pmatrix}, (A.29)

where \mu and \mu^{\prime} are the mean photon numbers at the two inputs. Appendix Fig. 5c shows the g^{(2)}(0) function for the thermal input state scenario.
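Putting the pieces together, the sketch below evaluates Eqs. (A.24)-(A.26) for the two-thermal-state input of Eq. (A.29) mixed on a single beam splitter. It uses the fact that, in this convention (vacuum covariance matrix equal to the identity), the vacuum-projection probability of a zero-mean n-mode Gaussian state is 2^{n}/\sqrt{\det(\gamma+I)}; it reuses beamsplitter_symplectic from the sketch above and, for brevity, omits the loss, mode-overlap, and phase-averaging beamsplitters of the full model.

import numpy as np

def p_vacuum(gamma):
    # Probability of projecting a zero-mean Gaussian state onto vacuum.
    n = gamma.shape[0] // 2
    return 2.0**n / np.sqrt(np.linalg.det(gamma + np.eye(2 * n)))

def g2_thermal(mu1, mu2, T=0.5):
    # g2(0) of Eq. (A.26) for two uncorrelated thermal inputs, Eq. (A.29).
    gamma = np.diag([1 + 2*mu1, 1 + 2*mu1, 1 + 2*mu2, 1 + 2*mu2])
    S = beamsplitter_symplectic(T)
    g = S.T @ gamma @ S
    p0_1, p0_2 = p_vacuum(g[:2, :2]), p_vacuum(g[2:, 2:])   # marginal no-click
    p00 = p_vacuum(g)                                       # joint no-click
    p_d1, p_d2 = 1 - p0_1, 1 - p0_2                         # Eq. (A.24)
    p_d1d2 = 1 - p0_1 - p0_2 + p00                          # Eq. (A.25)
    return p_d1d2 / (p_d1 * p_d2)                           # Eq. (A.26)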

References

  • (1) C. E. Rasmussen and C. K. I. Williams, “Gaussian Processes for Machine Learning” (MIT Press, Cambridge, MA, 2006).
  • (2) Nikolai Lauk et al., manuscript in preparation.
  • (3) Masahiro Takeoka, Rui-Bo Jin, and Masahide Sasaki, “Full analysis of multi-photon pair effects in spontaneous parametric down conversion based photonic quantum information processing,” New J. Phys. 17, 043030 (2015).
  • (4) Christian Weedbrook, Stefano Pirandola, Raúl García-Patrón, Nicolas J. Cerf, Timothy C. Ralph, Jeffrey H. Shapiro, and Seth Lloyd, “Gaussian quantum information,” Rev. Mod. Phys. 84, 621 (2012).
  • (5) Samuel L. Braunstein and Peter van Loock, “Quantum information with continuous variables,” Rev. Mod. Phys. 77, 513 (2005).
  • (6) J. R. Gardner, G. Pleiss, D. Bindel, K. Q. Weinberger, and A. G. Wilson, “GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration,” Advances in Neural Information Processing Systems (2018).