Rapid localization of gravitational wave hosts with FIGARO
Abstract
The copious scientific literature produced after the detection of the electromagnetic counterpart of GW170817 demonstrated the importance of a prompt and accurate localization of gravitational wave sources within the comoving volume. In this letter, we present figaro, a ready-to-use and publicly available software package that relies on Bayesian non-parametrics. figaro is designed to run in parallel with parameter estimation algorithms to provide updated three-dimensional volume localization information. Unlike existing algorithms, the analytical nature of the figaro reconstruction allows a ranking of the entries of galaxy catalogues by their probability of hosting a gravitational wave event, hence providing an additional tool for a prompt electromagnetic follow-up of gravitational waves. We illustrate the features of figaro on binary black holes as well as on GW170817. Finally, we demonstrate the robustness of figaro by producing so-called pp-plots, and we present a method based on information entropy to assess when, during a parameter estimation run, it is reasonable to begin releasing skymaps.
keywords: methods: data analysis – methods: statistical – gravitational waves – stars: black holes

1 Introduction
The discovery of the binary neutron star (BNS) merger GW170817 (Abbott et al., 2017a) by the LIGO (Aasi et al., 2015) and Virgo (Acernese et al., 2015) detectors and the simultaneous detection of the short gamma-ray burst (GRB) GRB 170817A (Goldstein et al., 2017; Abbott et al., 2017d) led to an intense follow-up campaign (Abbott et al., 2017c) that ultimately identified the kilonova transient AT2017gfo (Coulter et al., 2017) and NGC 4993 as the host galaxy.
This multi-messenger observation was a scientific goldmine: it confirmed that BNS mergers are progenitors of short GRBs, as predicted decades earlier (e.g. Goodman, 1986; Paczynski, 1986; Eichler et al., 1989), showed that the aftermath of the merger hosts intense r-process nucleosynthesis (Pian et al., 2017), and allowed a first independent determination of the Hubble constant (Abbott et al., 2017b).
These results, a selected subset of the vast literature that followed, were made possible by decade-long methodological and technological efforts. On the GW side, which is our area of expertise, the discovery of GW170817 was enabled not only by the detectors, but also by the development of online search pipelines, such as MBTA (Aubin et al., 2021), pycbc (Usman et al., 2016), gstlal (Sachdev et al., 2019) and cWB (Klimenko et al., 2016), that were able to identify the GW trigger almost instantly. The trigger then activated rapid localization algorithms such as bayestar (Singer & Price, 2016) to produce a first estimate of the source location to release to the astronomical community, so that the hunt for a counterpart could begin. At the same time, more in-depth parameter estimation algorithms, such as LALInference (Veitch et al., 2015) or Bilby (Ashton et al., 2019; Romero-Shaw et al., 2020), started to update and refine the initial bayestar volume maps and hence ease the astronomers' hunt for a counterpart (Singer et al., 2016b, a).
This whole process happened – and still happens – on timescales that range from seconds to minutes, from the online search to the bayestar initial volume map, and to hours for a first accurate parameter estimation release of the GW source position in the volume. GW170817 demonstrated that the whole process can be extremely successful. However, the hours-long gap between the first bayestar map and the final Markov Chain Monte Carlo (MCMC) results is, in our opinion, still an unacceptably long time for the astronomical community to wait for updated localization information.
In this letter, we propose a solution to this problem. Our solution is based on Bayesian non-parametrics and led to the development of an algorithm which we named figaro, for Fast Inference for GW Astronomy, Research & Observations. figaro is publicly available at https://github.com/sterinaldi/figaro. The main features of figaro, and its points of departure from previous efforts, are the reconstruction of the three-dimensional posterior distribution over sky position and luminosity distance of a GW source from partial MCMC results in terms of analytical functions (a mixture of multivariate normal distributions) that can be used to (i) estimate credible regions in the three- and two-dimensional localization space; (ii) assess the convergence of the underlying MCMC by looking at the information entropy of the reconstructed distribution; (iii) evaluate the probability that any galaxy is the host of the GW event without relying on a pixelisation (e.g. Górski et al., 2005) of the sky. All of the above happens simultaneously with an MCMC run and adds negligible overhead to the sampler itself.
The rest of the letter is thus organised as follows: in Sec. 2 we briefly review the Dirichlet process Gaussian mixture model that is the backbone of figaro. In Sec. 3 we describe our solution to the volume localization of a GW source in real time (with respect to a MCMC run) and the generation of the probability ranked list of potential galaxy hosts. We conclude with some considerations in Sec. 4. The more technical points, such as the use of the information entropy to assess the convergence of figaro as well as an analysis of its robustness via so-called pp-plots are presented in Supplementary Material, Appendix A and B, respectively.
2 Density estimate
In this section we briefly review the Dirichlet process (DP) and Dirichlet process Gaussian mixture model (DPGMM) to provide the reader with the necessary background for figaro. However, a comprehensive description of the model and computational method is beyond the scope of this letter, hence we refer the interested reader to Del Pozzo et al. (2018), where a similar idea was first explored, and to Rinaldi & Del Pozzo (2022) where the details of the method (and its hierarchical generalisation) and of the sampling algorithm are described.
To develop an intuition for a DP and the DPGMM, begin by considering an experiment whose outcomes can assume a finite number of values, say $K$, and imagine drawing a certain number $N$ of samples $\{x_1,\ldots,x_N\}$. Each of the $K$ possible outcomes can be thought of as a separate, known a priori, “class” of outcomes. If the samples are exchangeable, the order in which they are drawn does not matter; the only relevant information is how many times $n_i$ we observed each of the classes, with $\sum_{i=1}^{K} n_i = N$.
Given $\{n_1,\ldots,n_K\}$, we wish to infer the probabilities $p_i$ associated with each of the $K$ classes. This is done via Bayes’ theorem:

$$p(p_1,\ldots,p_K \mid n_1,\ldots,n_K) \propto p(n_1,\ldots,n_K \mid p_1,\ldots,p_K)\, p(p_1,\ldots,p_K)\,, \qquad (1)$$
where the likelihood for the observed counts is a multinomial distribution

$$p(n_1,\ldots,n_K \mid p_1,\ldots,p_K) = \frac{N!}{\prod_{i=1}^{K} n_i!} \prod_{i=1}^{K} p_i^{n_i}\,, \qquad (2)$$
and the prior distribution for the probabilities is a Dirichlet distribution (DD)

$$p(p_1,\ldots,p_K \mid \alpha, q_1,\ldots,q_K) = \frac{\Gamma(\alpha)}{\prod_{i=1}^{K} \Gamma(\alpha q_i)} \prod_{i=1}^{K} p_i^{\alpha q_i - 1}\,, \qquad (3)$$
where $\mathbf{q} = (q_1,\ldots,q_K)$ is a discrete probability distribution, hence $\sum_{i=1}^{K} q_i = 1$. The vector $\mathbf{q}$ and the parameter $\alpha$ completely characterise the DD: $\mathbf{q}$ is the mean – expected value – for $(p_1,\ldots,p_K)$ and $\alpha$ quantifies how concentrated the distribution is around $\mathbf{q}$. Being conjugate to the multinomial distribution, the posterior on $(p_1,\ldots,p_K)$ is still a DD:

$$p(p_1,\ldots,p_K \mid \mathbf{n}, \alpha, \mathbf{q}) = \mathrm{Dir}(\alpha q_1 + n_1, \ldots, \alpha q_K + n_K)\,. \qquad (4)$$
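The conjugate update of Eq. (4) is easy to verify numerically. The following sketch (an illustration, not figaro code; the class counts and prior values are made up) samples from the posterior DD with numpy and compares the Monte Carlo mean to the analytic posterior mean $(\alpha q_i + n_i)/(\alpha + N)$:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3
alpha = 2.0                      # concentration parameter
q = np.full(K, 1.0 / K)          # base probabilities (prior mean), symmetric here
n = np.array([5, 1, 0])          # hypothetical observed counts per class

# Conjugacy (Eq. 4): the posterior is again a Dirichlet with parameters alpha*q + n.
posterior_params = alpha * q + n
samples = rng.dirichlet(posterior_params, size=10_000)

# The posterior mean of p_i is (alpha*q_i + n_i) / (alpha + N).
analytic_mean = posterior_params / posterior_params.sum()
print(samples.mean(axis=0), analytic_mean)
```

Each draw is itself a probability vector, so the rows of `samples` sum to one by construction.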
Consider now the case in which the number of “classes” is unknown: a priori, they could be countably infinite. The problem can still be solved by introducing the Dirichlet process (DP), the infinite-dimensional generalisation of the DD. Assigning probabilities to an infinite set might seem a hopeless task; however, an observer will always record a finite number of outcomes. This finite set of outcomes will then be described by a DD, while the DP takes into account the possibility that there is a formally infinite set of outcomes that have not been recorded yet. Similarly to the DD, the DP is parametrized by a (continuous) base distribution $H$, which plays the role of $\mathbf{q}$, and a concentration parameter $\alpha$, as before. A DP is particularly useful in the definition of the Dirichlet process Gaussian mixture model (Escobar & West, 1995). The purpose is to represent arbitrary probability distributions as an infinite mixture of multivariate Gaussian distributions (Rasmussen, 2000):
$$p(x) = \sum_{i=1}^{\infty} w_i\, \mathcal{N}(x \mid \mu_i, \Sigma_i)\,, \qquad (5)$$
where the $w_i$ are the mixing proportions, with $\sum_i w_i = 1$, and $\mu_i$ and $\Sigma_i$ are the mean vector and covariance matrix of a multivariate Gaussian distribution, denoted by $\mathcal{N}(\cdot \mid \mu_i, \Sigma_i)$. The DP enters the picture by providing a practical way of inferring the mixing proportions $\mathbf{w}$ and the corresponding means $\boldsymbol{\mu}$ and covariances $\boldsymbol{\Sigma}$ from a set of observations $\mathbf{x}$. Applying Bayes’ theorem:

$$p(\mathbf{w}, \boldsymbol{\mu}, \boldsymbol{\Sigma} \mid \mathbf{x}) \propto p(\mathbf{x} \mid \mathbf{w}, \boldsymbol{\mu}, \boldsymbol{\Sigma})\, p(\mathbf{w}, \boldsymbol{\mu}, \boldsymbol{\Sigma})\,. \qquad (6)$$
Drawing samples from a Gaussian mixture is done via Gibbs sampling: draw one of the components with probability $w_i$ and then draw a sample from the corresponding Gaussian distribution. Each sample is therefore associated with a specific component. This information is stored in an indicator variable $c_j$ containing the label of the component the $j$-th sample has been drawn from. Given a set of $N$ samples, the vector $\mathbf{c} = (c_1,\ldots,c_N)$ and assuming a symmetric – uniform in the components – DP prior on $\mathbf{w}$, it is possible to write the probability distribution for $\mathbf{c}$,

$$p(\mathbf{c} \mid \alpha) = \frac{\Gamma(\alpha)}{\Gamma(\alpha + N)}\, \alpha^{K} \prod_{j=1}^{K} (n_j - 1)!\,, \qquad (7)$$
where $n_j$ denotes the number of samples associated with component $j$ and $K$ is the number of occupied components; for $i = 1,\ldots,N$ and $j = 1,\ldots,K$:

$$n_j = \sum_{i=1}^{N} \delta_{c_i, j}\,. \qquad (8)$$
It is also possible to derive the conditional probability that a new data point is drawn from one of the already observed mixture components
$$p(c_{N+1} = j \mid \mathbf{c}, \alpha) = \frac{n_j}{N + \alpha}\,, \qquad \text{(9a)}$$

as well as from a previously unobserved component

$$p(c_{N+1} = K + 1 \mid \mathbf{c}, \alpha) = \frac{\alpha}{N + \alpha}\,. \qquad \text{(9b)}$$
The practical power of a DPGMM is this: there is no need to assign a priori an expected number of components to explain the observed data; we can let the data tell us how many are needed. Therefore, to get a draw from Eq. (6) given a set of samples $\{x_1,\ldots,x_N\}$ from the target distribution:
1. Draw a sample of $\mathbf{c}$: starting with no samples at all, iteratively add one sample at a time to the mixture, drawing from Eqs. (9) conditioned only on the previously added samples;

2. conditioned on the sampled $\mathbf{c}$, draw the weights $\mathbf{w}$ and the means $\boldsymbol{\mu}$ and covariances $\boldsymbol{\Sigma}$ of the occupied components from their posterior distributions;

3. repeat these two steps, always starting from an empty mixture and shuffling the order in which the samples are added to it, until the desired number of $\mathbf{w}$, $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ draws is accumulated.
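Step 1 can be sketched in a few lines. The snippet below is an illustrative toy, not figaro's implementation: it assigns samples to components sequentially using only the prior predictive probabilities of Eqs. (9a)–(9b); in the full algorithm each weight would additionally be multiplied by the likelihood of the sample under the corresponding component.

```python
import numpy as np

def assign_components(x, alpha, rng):
    # Sequentially assign each sample to an existing or new component,
    # following the Chinese-restaurant-process probabilities of Eqs. (9).
    counts = []                       # n_j: number of samples per component
    labels = []
    for n_seen, _ in enumerate(x):
        # Existing components have weight n_j; a new component has weight alpha.
        weights = np.array(counts + [alpha], dtype=float)
        probs = weights / (n_seen + alpha)
        j = rng.choice(len(probs), p=probs)
        if j == len(counts):
            counts.append(1)          # open a new component
        else:
            counts[j] += 1
        labels.append(j)
    return labels, counts

rng = np.random.default_rng(1)
labels, counts = assign_components(np.zeros(100), alpha=1.0, rng=rng)
print(len(counts), sum(counts))       # number of components, total samples
```

Shuffling the input order and repeating, as in step 3, yields different assignments and hence different mixture draws.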
Since the probability distributions for $\mathbf{w}$, $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ are conditioned on the observed samples, they are updated every time a new sample is added to the pool. Hence, if the samples are drawn by some sampling algorithm – an MCMC algorithm, for instance, or any other sampler able to produce independent samples during its run – a DPGMM can be updated while the sampler is running. We can therefore estimate the posterior distribution at any stage of the run itself to provide up-to-date estimates of credible regions, as well as an analytical representation of the yet unknown MCMC target distribution.
In a nutshell, this is what figaro does: by reading the current state of any MCMC sampler, it provides a DPGMM representation of the posterior distribution for the sky position and luminosity distance of a GW source at the current stage of the run. It is therefore possible to produce intermediate volume maps and update the localization information on a shorter time scale than having to wait for the expensive MCMC runs to finish.
3 Volume reconstruction and sky localization
While in principle figaro could be applied to the full parameter space, for this work we focus on the sky position (right ascension $\alpha$ and declination $\delta$) and luminosity distance $D_L$. Our algorithm proceeds as follows: as an MCMC is running, figaro reads the current set of posterior samples for $(\alpha, \delta, D_L)$ and, using the density estimation scheme described in the previous section, reconstructs the current snapshot of the full three-dimensional posterior distribution as a mixture of multivariate Gaussian distributions. figaro then evaluates the 3D map on a user-defined grid, which can be marginalised to obtain a two-dimensional sky map for the location of the GW host. The procedure is repeated after every fixed number of new MCMC samples, until we assess the convergence of the density estimate via the information entropy, see Supplementary Material, Appendix A. We decided to work in terms of number of MCMC samples since the actual wall time depends quite dramatically on the waveform model adopted for the analysis as well as on the sampler used (e.g. Veitch et al., 2015, Table VI). In the most optimistic case, one can adopt a wall time per independent sample of 3 s to get an idea of the actual time. The computational time added by figaro per sample is negligible for a three-dimensional distribution.
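The marginalisation from the 3D volume map to the 2D sky map is particularly cheap for a Gaussian mixture, since the marginal of each multivariate normal is obtained by keeping the relevant sub-vector of the mean and sub-block of the covariance. A minimal sketch with made-up mixture parameters (figaro's actual data structures will differ):

```python
import numpy as np
from scipy.stats import multivariate_normal

def marginal_sky_density(ra, dec, weights, means, covs):
    # Marginalise a 3D (ra, dec, D_L) Gaussian mixture over distance:
    # each component's marginal keeps the (ra, dec) sub-mean and sub-covariance.
    dens = 0.0
    for w, mu, cov in zip(weights, means, covs):
        dens += w * multivariate_normal(mu[:2], cov[:2, :2]).pdf([ra, dec])
    return dens

# Hypothetical two-component mixture (angles in radians, distance in Mpc).
weights = [0.7, 0.3]
means = [np.array([1.0, -0.5, 100.0]), np.array([1.2, -0.4, 120.0])]
covs = [np.diag([0.01, 0.01, 25.0]), np.diag([0.02, 0.02, 50.0])]
print(marginal_sky_density(1.0, -0.5, weights, means, covs))
```

Because the marginal is again analytic, the sky map can be evaluated on any grid without numerically integrating over distance.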
It is important, however, to point out that the samples from an MCMC must be handled with care. The initial part of a chain, the so-called burn-in, is not representative of the target distribution: the information entropy of the reconstructed distribution (see Appendix A), however, fluctuates heavily during the burn-in phase, indicating that the map is still unreliable. A second, more important, issue to address is the correlation among samples produced by the MCMC – one of the founding assumptions of the DPGMM is that the samples are independent. We recommend ensuring, e.g. by means of an autocorrelation computation, that the samples fed to figaro are independent. However, we observed that even in the case of correlated samples, the main features of the target distribution, such as 90 per cent credible regions, are approximately correct.
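As an example of the recommended preprocessing, the sketch below estimates the integrated autocorrelation time of a toy correlated chain and thins it accordingly. This is an illustrative estimator, not a prescription; dedicated sampler packages provide more robust implementations.

```python
import numpy as np

def autocorrelation_time(chain, c=5.0):
    # Integrated autocorrelation time with a self-consistent truncation window:
    # stop summing lags once the window exceeds c times the running estimate.
    x = np.asarray(chain) - np.mean(chain)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    tau = 1.0
    for lag in range(1, len(acf)):
        tau += 2.0 * acf[lag]
        if lag > c * tau:
            break
    return max(tau, 1.0)

rng = np.random.default_rng(2)
# Toy AR(1) chain mimicking correlated MCMC output (true tau = 19 for phi = 0.9).
chain = np.zeros(5_000)
for i in range(1, len(chain)):
    chain[i] = 0.9 * chain[i - 1] + rng.normal()

tau = autocorrelation_time(chain)
thinned = chain[::int(np.ceil(tau))]   # roughly independent samples
print(tau, len(thinned))
```

Feeding `thinned` rather than `chain` to the density estimate keeps the independence assumption approximately valid.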
To demonstrate the efficacy of our proposed scheme, we analysed the publicly available posterior samples released on GWOSC (Abbott et al., 2021c) and simulated the above approach by producing results from figaro using an incremental number of samples. An example of the marginal sky position posterior for GW190909 can be seen in Fig. 1, where we show the evolution of the 90 per cent and 50 per cent credible regions as a function of the number of samples accumulated by the MCMC. We obtain a 90 per cent credible region of 4619.4 deg², whereas Abbott et al. (2021b) reports, for the same event, a 90 per cent credible region of 4700 deg². As a further comparison, Figure 2 shows the figaro skymap for GW150914 (Abbott et al., 2016) as reconstructed using the posterior samples from the gravitational wave catalogue GWTC–3 (Abbott et al., 2021a).
[Figure 1: Evolution of the 50 and 90 per cent credible regions of the marginal sky position posterior for GW190909 as a function of the number of accumulated MCMC samples.]

[Figure 2: figaro skymap for GW150914, reconstructed from the GWTC–3 posterior samples.]
3.1 Exploiting galaxy catalogues
Current strategies for EM follow-up, see for instance Coughlin et al. (2018) and references therein, involve the adoption of tiling algorithms, aimed at scheduling telescope observations so as to optimise some pre-defined metric – e.g. minimise the number of pointings or maximise the probability of observing a given counterpart at fixed magnitude – given the time allocation on each telescope. As pointed out earlier, figaro's three-dimensional density estimates are analytical, hence they can be evaluated efficiently at any point in the parameter space to provide the value of the probability density at that point, including on the standard pixelisation schemes required by tiling algorithms.
We would like to highlight that, since search optimization pipelines such as gwemopt, presented in Coughlin et al. (2018), are triggered by skymaps provided by the GW localisation algorithms, as long as the format in which these skymaps are provided is homogeneous, the way in which they are produced should not affect these pipelines. Looking at the flowchart in Figure 1 of the aforementioned paper, figaro is positioned upstream of the GraceDB branch.
We propose to exploit the analytical form of the DPGMM posterior to maximise the probability of identifying the potential host of a GW event, as we shall see in this section. The rapid decay of the faint light curve of GW170817 (Drout et al., 2017) and the similar behaviour expected for other kilonova signals (Chase et al., 2022) suggest that a successful EM follow-up campaign should aim to identify the host of the GW event as soon as possible, ideally within the first 36 hours after the GW transient. Considering that the detection range of the LVK network in O4 and O5 is expected to reach O(100) Mpc for BNS, rapid host identification becomes even more critical, given the limited sensitivity of rapid EM follow-up telescopes.
Call $p(\alpha, \delta, D_L)$ the DPGMM inferred by figaro; given a list of known galaxy positions $\{(\alpha_i, \delta_i, D_{L,i})\}$, we can compute the probability densities $p(\alpha_i, \delta_i, D_{L,i})$ and sort them in decreasing order to provide a probability-ranked list of potential hosts of a GW. The possibility of doing so in real time is a major departure from existing algorithms. Moreover, the list can be updated every time figaro releases a new density estimate from an on-going MCMC run. We applied this method to GW170817 using the GLADE+ catalogue (Dálya et al., 2021).
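The ranking itself reduces to evaluating the analytical mixture at each catalogue entry and sorting. A minimal sketch, with a hypothetical single-component mixture and made-up catalogue entries (real figaro mixtures and GLADE+ columns differ):

```python
import numpy as np
from scipy.stats import multivariate_normal

def rank_hosts(galaxies, weights, means, covs):
    """galaxies: dict name -> (ra, dec, D_L). Returns names sorted by density."""
    def density(pos):
        # Evaluate the analytic Gaussian mixture at a galaxy's position.
        return sum(w * multivariate_normal(mu, cov).pdf(pos)
                   for w, mu, cov in zip(weights, means, covs))
    return sorted(galaxies, key=lambda name: density(galaxies[name]), reverse=True)

# Toy mixture centred near "GalaxyA" (angles in radians, distances in Mpc; all
# names and numbers are invented for illustration).
weights = [1.0]
means = [np.array([3.45, -0.41, 40.0])]
covs = [np.diag([1e-4, 1e-4, 16.0])]
catalogue = {
    "GalaxyA": (3.45, -0.41, 41.0),
    "GalaxyB": (3.47, -0.40, 40.0),
    "GalaxyC": (3.30, -0.50, 80.0),
}
ranking = rank_hosts(catalogue, weights, means, covs)
print(ranking)
```

Because no pixelisation is involved, the density is evaluated exactly at each galaxy's coordinates, however finely the catalogue samples the sky.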
We followed the procedure outlined in the previous section, progressively including more and more MCMC samples in our DPGMM inference, and produced a list of potential galaxy hosts together with the two- and three-dimensional credible regions. The result of our experiment is shown in Fig. 3, which displays the evolution of the galaxies within the 90 per cent credible volume for GW170817, coloured according to the probability of being the host. NGC 4993, the host galaxy, is highlighted with a green circle.
[Figure 3: Galaxies within the 90 per cent credible volume for GW170817 after 40, 160, 1280 and 5042 MCMC samples, coloured according to the probability of being the host. NGC 4993 is highlighted with a green circle.]
Table 1: The five most probable hosts of GW170817 for each of the four skymaps shown in Figure 3.

40 samples | 160 samples | 1280 samples | 5042 samples
---|---|---|---
LEDA 803966 | ESO 575-55 | ESO 575-55 | ESO 575-55 |
ESO 575-55 | LEDA 803966 | LEDA 803966 | NGC 4993 |
NGC 4993 | NGC 4993 | NGC 4993 | LEDA 803966 |
ESO 575-53 | ESO 575-53 | ESO 575-53 | ESO 575-53 |
LEDA 169663 | 6dFGS gJ130832.3-232050 | 6dFGS gJ130832.3-232050 | 6dFGS gJ130832.3-232050 |
Since GW170817 is a very well localized event, the number of potential hosts is quite small, some tens of galaxies. The true host turns out to be among these galaxies even with a small number of samples. We report the 5 most probable galaxies for each of the four skymaps presented in Figure 3 in Table 1. The true host (NGC 4993) is always in the list of the top 5 most probable hosts, even with only 40 MCMC samples. Assuming a reasonable wall time per sample of 5 s, this implies that with figaro, after just 200 s of MCMC run-time, the true host would have been the third galaxy to point at for a successful identification.
4 Discussion and conclusions
We presented figaro, a fast and reliable volume reconstruction method based on a DPGMM. We demonstrated the efficacy of the method in correctly reconstructing the two- and three-dimensional probability density functions, and showed that it is able to consistently estimate credible regions, as shown in Supplementary Material, Appendix B, even when declaring early convergence based on the information entropy. Furthermore, we have shown what we regard as the main advantage of figaro compared to other rapid localization algorithms such as the previously mentioned bayestar: using the analytical form of the probability density function, we can evaluate it at any available galaxy within the reconstructed volume and provide a list of potential hosts ranked by probability based on the information encoded in the GW signal. In this letter we focused exclusively on localization information from the GW alone and disregarded any other information about galaxy luminosities or class. This information is, however, easily folded into the host probability calculation, e.g. by assuming an additional weight based on a Schechter luminosity function.
This latter point also helps address one of the possible issues with our proposed ranking model for O4, O5 and subsequent observation runs. As the global network of GW detectors expands with the inclusion of KAGRA (Akutsu et al., 2021) and the foreseen LIGO-India (Iyer et al., 2011), and its sensitivity increases, the typical distance at which potential EM counterparts are detectable also increases (Abbott et al., 2018; Pankow et al., 2020), and we end up in regions where the current galaxy catalogues suffer from incompleteness. This is a potential problem, although independent of our algorithm, but we do not regard it as a show stopper. It is in fact plausible to assume that if a GW source resides in a galaxy, the probability that any given galaxy is the host is proportional to the galaxy stellar mass, hence to the galaxy luminosity (Artale et al., 2020a; Artale et al., 2020b). In other words, more luminous galaxies are more likely to host a GW event. More luminous galaxies are also visible out to greater distances for a fixed survey sensitivity, hence are more likely to be included in galaxy catalogues: this strategy is investigated in Gehrels et al. (2016). We therefore conclude that providing a list of potential hosts ranked by probability will still be a very valuable tool for astronomers, even when the baseline catalogue is deemed incomplete at the GW event distance.
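As an illustration of folding luminosity information into the ranking, the sketch below (all names and numbers are hypothetical) weights each galaxy's GW localisation probability density by a prior proportional to its luminosity and renormalises:

```python
def luminosity_weighted_rank(candidates):
    """candidates: dict name -> (gw_density, luminosity_Lsun).

    Weight each galaxy's GW localisation density by a luminosity-proportional
    prior, then normalise to obtain host probabilities sorted in decreasing order.
    """
    scores = {name: dens * lum for name, (dens, lum) in candidates.items()}
    total = sum(scores.values())
    return sorted(((name, s / total) for name, s in scores.items()),
                  key=lambda t: t[1], reverse=True)

# Invented example: a bright galaxy at moderate density can outrank a fainter
# galaxy sitting at slightly higher GW probability density.
candidates = {
    "GalaxyA": (2.0e-3, 1.0e10),   # moderate density, bright galaxy
    "GalaxyB": (3.0e-3, 1.0e9),    # higher density, fainter galaxy
}
ranked = luminosity_weighted_rank(candidates)
print(ranked)
```

A Schechter-function weight, as mentioned above, would simply replace the proportional-to-luminosity prior in `scores`.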
Data availability
figaro is available at https://github.com/sterinaldi/figaro. Examples and instructions on how to use the code are available on the repository. GWTC–3 data are available in GWOSC at https://www.gw-openscience.org/. The F2Y dataset is available at https://www.ligo.org/scientists/first2years/.
Acknowledgments
The authors would like to acknowledge the discussions and interactions with the Parameter Estimation group and the Low-Latency group of the LIGO-Virgo-Kagra Collaboration that improved this work. We would also like to thank Christopher P. L. Berry, Marica Branchesi, Edoardo Milotti, Veronica Roccatagliata, Steven N. Shore, Leo P. Singer and the anonymous referee for useful comments and discussions.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan.
The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions. The National Geographic Society - Palomar Observatory Sky Atlas (POSS-I) was made by the California Institute of Technology with grants from the National Geographic Society. The Second Palomar Observatory Sky Survey (POSS-II) was made by the California Institute of Technology with funds from the National Science Foundation, the National Geographic Society, the Sloan Foundation, the Samuel Oschin Foundation, and the Eastman Kodak Corporation. The Oschin Schmidt Telescope is operated by the California Institute of Technology and Palomar Observatory. The UK Schmidt Telescope was operated by the Royal Observatory Edinburgh, with funding from the UK Science and Engineering Research Council (later the UK Particle Physics and Astronomy Research Council), until 1988 June, and thereafter by the Anglo-Australian Observatory. The blue plates of the southern Sky Atlas and its Equatorial Extension (together known as the SERC-J), as well as the Equatorial Red (ER), and the Second Epoch [red] Survey (SES) were all taken with the UK Schmidt. Supplemental funding for sky-survey work at the ST ScI is provided by the European Southern Observatory.
References
- Aasi et al. (2015) Aasi J., et al., 2015, Class. Quant. Grav., 32, 074001
- Abbott et al. (2016) Abbott B. P., et al., 2016, Phys. Rev. Lett., 116, 061102
- Abbott et al. (2017a) Abbott B. P., et al., 2017a, Phys. Rev. Lett., 119, 161101
- Abbott et al. (2017b) Abbott B. P., et al., 2017b, Nature, 551, 85
- Abbott et al. (2017c) Abbott B. P., et al., 2017c, ApJ, 848, L12
- Abbott et al. (2017d) Abbott B. P., et al., 2017d, ApJ, 848, L13
- Abbott et al. (2018) Abbott B. P., et al., 2018, Living Reviews in Relativity, 21, 3
- Abbott et al. (2019) Abbott B. P., et al., 2019, Physical Review X, 9, 031040
- Abbott et al. (2021a) Abbott R., et al., 2021a, arXiv e-prints, p. arXiv:2111.03606
- Abbott et al. (2021b) Abbott R., et al., 2021b, Physical Review X, 11, 021053
- Abbott et al. (2021c) Abbott R., et al., 2021c, SoftwareX, 13, 100658
- Acernese et al. (2015) Acernese F., et al., 2015, Class. Quant. Grav., 32, 024001
- Akutsu et al. (2021) Akutsu T., et al., 2021, Progress of Theoretical and Experimental Physics, 2021, 05A101
- Artale et al. (2020a) Artale M. C., Mapelli M., Bouffanais Y., Giacobbo N., Pasquato M., Spera M., 2020a, MNRAS, 491, 3419
- Artale et al. (2020b) Artale M. C., Bouffanais Y., Mapelli M., Giacobbo N., Sabha N. B., Santoliquido F., Pasquato M., Spera M., 2020b, MNRAS, 495, 1841
- Ashton et al. (2019) Ashton G., et al., 2019, ApJS, 241, 27
- Aubin et al. (2021) Aubin F., et al., 2021, Class. Quant. Grav., 38, 095004
- Chase et al. (2022) Chase E. A., et al., 2022, ApJ, 927, 163
- Coughlin et al. (2018) Coughlin M. W., et al., 2018, Monthly Notices of the Royal Astronomical Society, 478, 692
- Coulter et al. (2017) Coulter D. A., et al., 2017, Science, 358, 1556
- Dálya et al. (2021) Dálya G., et al., 2021, arXiv e-prints, p. arXiv:2110.06184
- Del Pozzo et al. (2018) Del Pozzo W., Berry C. P. L., Ghosh A., Haines T. S. F., Singer L. P., Vecchio A., 2018, MNRAS, 479, 601
- Drout et al. (2017) Drout M. R., et al., 2017, Science, 358, 1570
- Eichler et al. (1989) Eichler D., Livio M., Piran T., Schramm D. N., 1989, Nature, 340, 126
- Escobar & West (1995) Escobar M. D., West M., 1995, Journal of the American Statistical Association, 90, 577
- Gehrels et al. (2016) Gehrels N., Cannizzo J. K., Kanner J., Kasliwal M. M., Nissanke S., Singer L. P., 2016, ApJ, 820, 136
- Goldstein et al. (2017) Goldstein A., et al., 2017, ApJ, 848, L14
- Goodman (1986) Goodman J., 1986, ApJ, 308, L47
- Górski et al. (2005) Górski K. M., Hivon E., Banday A. J., Wandelt B. D., Hansen F. K., Reinecke M., Bartelmann M., 2005, ApJ, 622, 759
- Iyer et al. (2011) Iyer B., Souradeep T., Unnikrishnan C., Dhurandhar S., Raja S., Anand S., 2011, LIGO-India, Proposal of the Consortium for Indian Initiative in Gravitational-wave Observations (IndIGO), https://dcc.ligo.org/LIGO-M1100296/public
- Klimenko et al. (2016) Klimenko S., et al., 2016, Phys. Rev. D, 93, 042004
- Paczynski (1986) Paczynski B., 1986, ApJ, 308, L43
- Pankow et al. (2020) Pankow C., Rizzo M., Rao K., Berry C. P. L., Kalogera V., 2020, ApJ, 902, 71
- Pian et al. (2017) Pian E., et al., 2017, Nature, 551, 67
- Rasmussen (2000) Rasmussen C., 2000, in Solla S., Leen T., Müller K., eds, Vol. 12, Advances in Neural Information Processing Systems. MIT Press, https://proceedings.neurips.cc/paper/1999/file/97d98119037c5b8a9663cb21fb8ebf47-Paper.pdf
- Rinaldi & Del Pozzo (2022) Rinaldi S., Del Pozzo W., 2022, MNRAS, 509, 5454
- Romero-Shaw et al. (2020) Romero-Shaw I. M., et al., 2020, MNRAS, 499, 3295
- Sachdev et al. (2019) Sachdev S., et al., 2019, arXiv e-prints
- Singer & Price (2016) Singer L. P., Price L. R., 2016, Phys. Rev. D, 93, 024013
- Singer et al. (2016a) Singer L. P., et al., 2016a, ApJS, 226, 10
- Singer et al. (2016b) Singer L. P., et al., 2016b, ApJ, 829, L15
- Usman et al. (2016) Usman S. A., et al., 2016, Class. Quant. Grav., 33, 215004
- Veitch et al. (2015) Veitch J., et al., 2015, Phys. Rev. D, 91, 042003