
PNet – A Deep Learning Based Photometry and Astrometry Bayesian Framework

Rui Sun College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China Peng Jia College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China Peng Cheng Lab, Shenzhen, 518066, China Department of Physics, Durham University, DH1 3LE, UK Yongyang Sun College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China Zhimin Yang College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China Qiang Liu College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China Hongyan Wei College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
Abstract

Time domain astronomy has emerged as a vibrant research field in recent years, focusing on celestial objects that exhibit variable magnitudes or positions. Given the urgency of conducting follow-up observations for such objects, the development of an algorithm capable of detecting them and determining their magnitudes and positions has become imperative. Leveraging the advancements in deep neural networks, we present the PNet, an end-to-end framework designed not only to detect celestial objects and extract their magnitudes and positions but also to estimate photometry uncertainty. The PNet comprises two essential steps. First, it detects stars and retrieves their positions, magnitudes, and calibrated magnitudes. Second, it estimates the uncertainty associated with the photometry results, serving as a valuable reference for light curve classification algorithms. Our algorithm has been tested using both simulated and real observation data, demonstrating the PNet’s ability to deliver consistent and reliable outcomes. Integration of the PNet into data processing pipelines for time-domain astronomy holds significant potential for enhancing response speed and improving the detection capabilities for celestial objects with variable positions and magnitudes.

Time domain astronomy (2109) - Photographic astrometry (1227) – Bayesian statistics (1900) – CCD photometry (208) – Neural Networks (1933)
software: Skymaker (Bertin, 2009), Bayesian-Torch (Krishnan et al., 2022), Uncertainty Toolbox (Chung et al., 2021), PyTorch (Paszke et al., 2019), Astropy (Robitaille et al., 2013), Matplotlib (Hunter, 2007), Numpy (Harris et al., 2020), Scipy (Virtanen et al., 2020), Pandas (McKinney et al., 2011), tqdm (da Costa-Luis, 2019), photutils (Bradley et al., 2016), OpenCV (Bradski & Kaehler, 2008), Pillow (Lundh & Contributors, 1995–2021)

1 Introduction

In recent years, time-domain astronomy has emerged as an active research field. With the availability of telescopes possessing a wide field of view and high image quality, it has become feasible to capture images of celestial objects at regular intervals, yielding a substantial amount of observational data on a daily basis. Among this vast dataset, there are numerous celestial objects that necessitate frequent or immediate follow-up observations, such as tidal disruption events, near-earth objects, super flares, and microlensing events. Consequently, there is a pressing need to develop an algorithm capable of swiftly detecting these events. Since these events primarily involve changes in the positions and magnitudes of celestial objects, the algorithm must possess the capability to detect celestial objects and conduct precise photometry and astrometry measurements. Furthermore, given that images of celestial objects are susceptible to various sources of noise, the algorithm should also be able to estimate the uncertainties associated with the photometry results, enabling further in-depth analysis.

Numerous pipelines have been proposed to meet these requirements, typically comprising the following key steps:

  • Target detection: The positions of potential celestial object candidates are determined.

  • Target classification: True celestial objects are identified from the pool of candidates, and further categorized into different types.

  • Target information extraction: Magnitudes, positions, and distributions of the celestial objects are obtained.

Previous studies have introduced a variety of algorithms to establish the conventional data processing pipeline. Typically, target detection algorithms such as SExtractor or simplexy have been utilized to identify potential celestial objects from the original observational images (Lang et al., 2010; Bertin & Arnouts, 1996). Subsequently, these identified targets undergo classification algorithms that aim to distinguish true celestial objects from the candidate pool (Cabrera-Vives et al., 2017; Duev et al., 2019; Jia et al., 2019; Turpin et al., 2020; Agarwal et al., 2020). The resulting information regarding these celestial objects is then processed through photometry, astrometry, morphology classification, or segmentation algorithms (Khramtsov et al., 2019; Boucaud et al., 2020; Hausen & Robertson, 2020; Domínguez Sánchez et al., 2022; Casetti-Dinescu et al., 2023). However, the classical data processing pipeline follows a sequential structure, wherein all the processes are executed in sequence. Consequently, the overall performance of the data processing pipeline is limited by the performance of each individual algorithm used. For example, if celestial objects are not detected by the source detection algorithm, it becomes impossible to extract information related to those targets. It should be noted that contemporary source detection algorithms possess numerous adjustable parameters, requiring the expertise of experienced scientists to properly set them. In addition, since the detection results are sensitive to environmental conditions, frequent human intervention is required to obtain effective results. Therefore, it is necessary to develop an end-to-end framework that could not only detect celestial objects, but also extract their information automatically and robustly.

Deep neural network-based algorithms for celestial object detection have attracted considerable attention in recent years. One key advantage is that these algorithms enable end-to-end learning, allowing the neural network to directly acquire the ability to detect celestial objects. Different tasks can be addressed by designing and deploying deep neural networks with specific architectures (Ren et al., 2015; Ge et al., 2021; Liu et al., 2021b). In this study, our focus is on detecting point-like celestial objects in sparse star fields and extracting their positions and magnitudes, as this is a crucial prerequisite for studying such objects in time-domain astronomy. It is worth noting that extended targets and dense star fields (where the distance between stars is less than two times the full width at half maximum of the point spread function) may benefit from multicolour images and other relevant neural networks for better detection and classification (González et al., 2018; Farias et al., 2020; Cheng et al., 2021; Jia et al., 2022, 2023b; Yu et al., 2022; Andrew, 2023), or from methods specifically designed for dense star fields (Liu et al., 2021a; Hansen et al., 2022). Moreover, transients, such as supernovae in galaxies, can be further processed using image difference-based methods or techniques developed based on temporal sequences of images (Wright et al., 2015; Kessler et al., 2015; Zackay et al., 2016; Sánchez et al., 2019; Mong et al., 2020; Gómez et al., 2020; Hu et al., 2022; Makhlouf et al., 2022). A previous study by Jia et al. (2020) has introduced a Faster-RCNN based framework for detecting point-like celestial objects, which was successfully applied to images captured by wide field optical telescopes and the Lobster-Eye telescope (Jia et al., 2023a). However, in real applications, there are three key challenges that need to be addressed:
1. The current framework does not provide apparent magnitudes as part of its output. Apparent magnitudes play a crucial role in various tasks, such as exoplanet observations or studying super-flares from stars. Therefore, it is essential to integrate a photometry algorithm into the detection framework to accurately estimate the apparent magnitudes of the celestial targets.
2. Contemporary deep neural network (DNN) based target detection algorithms define the positions of celestial objects using bounding boxes, which are rectangular boxes that approximate the shape of the objects. However, to match these celestial objects with catalogs and perform further analysis, we need to determine the precise centers of the celestial object images. Hence, it is necessary to incorporate an astrometry method into the detection framework to accurately estimate the centers of different celestial objects.
3. The current framework lacks the ability to estimate uncertainties associated with magnitude estimation. Since most neural networks provide point estimates for a given input, they directly output regression values without accounting for uncertainties. However, uncertainties are vital for subsequent tasks, such as light curve classification. Therefore, it is crucial to develop a method that can estimate the uncertainties introduced by magnitude estimations.

To enhance the suitability of our Faster-RCNN based astronomical detection algorithm for integration into time domain astronomy data processing pipelines, further improvements are necessary. In this study, we present our endeavour to develop a novel framework called the deep learning based photometry and astrometry Bayesian Neural Network (PNet). The PNet includes an advanced architecture that excels in detecting, performing photometry, and carrying out astrometry for point-like celestial objects. Notably, the PNet leverages the Bayesian Neural Network (BNN) for estimating both the photometry results and their associated uncertainties. The subsequent sections of this paper will delve into various aspects of our work. Section 2 will discuss the properties of the data and the methods employed for data reprocessing. In Section 3, we will present the structure of the PNet. In Section 4, we will assess the PNet’s performance using both simulated data and real observation data. These results will be compared to those obtained using SExtractor (Bertin & Arnouts, 1996) to show the advantages of the PNet. Finally, in Section 5, we will conclude our findings and outline our future research directions.

2 The Data

In this paper, we train and evaluate the performance of our framework using simulated and real observation data. Simulated data allows us to have control over the observation conditions, enabling a more precise assessment of our framework’s performance. On the other hand, real observation data encompass various unknown factors that better reflect the actual performance of our framework. To generate the simulated data, we use Skymaker, a widely used tool for generating synthetic images based on specified observation conditions (Bertin, 2009). Skymaker generates simulated images by considering parameters such as point spread functions, noise levels, and input star catalogs, which are obtained by the Stuff for galaxies and manually generated catalogs for stars. The distribution of photons emitted by celestial objects follows a Poisson distribution, with the point spread function serving as the prior distribution function. Additionally, the simulation includes the generation of Poisson-distributed sky background photons. To account for readout noise, Gaussian noise is simulated, and effects like blooming/bleeding are also considered in Skymaker. By carefully controlling parameters in the Skymaker, we ensure that the PSF size and noise level closely resemble those of real observation data, thus facilitating a thorough investigation of our framework’s performance.

The real observation data employed in this paper is derived from the Sloan Digital Sky Survey (SDSS) DR17 (Abdurro’uf et al., 2022). The SDSS data is collected using a wide-field 2.5m telescope (Gunn et al., 2006) located at the Apache Point Observatory in New Mexico (York et al., 2000). This data undergoes meticulous processing through a specialized data processing pipeline (Lupton et al., 2005), which includes precise astrometric calibration (Pier et al., 2003) using the USNO CCD Astrograph Catalog (UCAC) (Zacharias et al., 2000), as well as photometric calibration (Padmanabhan et al., 2008) with the aid of numerous standard stars (Smith et al., 2002). Consequently, the data and catalogue obtained from the SDSS serve as reliable references for both training and testing purposes. The SDSS employs five filters, namely the u, g, r, i, and z filters, which correspond to central wavelengths of 3543, 4770, 6231, 7625, and 9134 Å, respectively (https://skyserver.sdss.org/dr1/en/proj/advanced/color/sdssfilters.asp). In this study, we focus on images obtained using the g-band filter due to their relatively higher signal-to-noise ratio. Additionally, the parameters run, camcol, field, and rerun define the observation period of the telescope, the camera column number, the area number on the camera, and the version number during data processing, respectively. To ensure the generalization of our algorithm across different sky zones and time periods, we avoid specifying specific values for the first three parameters. However, we do specify version number 301 for the rerun parameter.

Given that our framework is designed to specifically detect point-like celestial objects, the dataset exclusively consists of such objects. The magnitudes and positions of these point-like celestial objects are determined as regression values within the framework. The resulting positions are provided in camera coordinates, using pixels as a unit of measurement, while the magnitudes are represented as flux f. To calculate the apparent magnitude, we utilize the equation defined in (Stoughton et al., 2002) as follows:

mag = 22.5-2.5\log_{10}f, (1)

The apparent magnitude (mag) is related to the flux (f) according to the equation provided. In this study, we adopt a magnitude zero point of 22.5 and estimate the magnitudes of stars within the range of 13 to 20. Figure 1 illustrates the histogram depicting the distribution of stars by different magnitudes in the real data set. The observed distribution aligns with our experience.
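For reference, the conversion in Equation 1 can be written in a few lines; the sketch below assumes fluxes are already in the linearized units used by the SDSS pipeline and adopts the 22.5 mag zero point used in this paper.

```python
import numpy as np

def flux_to_mag(flux, zero_point=22.5):
    """Convert linear flux to apparent magnitude following Eq. 1.

    `zero_point` follows the 22.5 mag zero point adopted in this paper;
    it should be treated as a configurable assumption for other surveys.
    """
    flux = np.asarray(flux, dtype=float)
    return zero_point - 2.5 * np.log10(flux)

# Example: a star 100 times brighter is 5 magnitudes brighter.
print(flux_to_mag([1.0, 100.0]))  # [22.5, 17.5]
```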

Figure 1: The histogram of stars with different apparent magnitude in the real data set.

Taking into account the impact of input image size on GPU memory requirements, we divide the original SDSS images into patches with a size of 512×512 pixels. This approach helps reduce the hardware demands. Additionally, we apply certain criteria to remove specific stars that would otherwise necessitate additional processing steps (a minimal filtering sketch is given after the list). These criteria include:

  • Stars located at the image’s edge within a 10-pixel distance.

  • Stars situated near galaxies or other objects.

  • Stars positioned in close proximity to one another within a 10-pixel range.
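A minimal sketch of how such a catalog-level filter could be applied to one 512×512 patch is shown below; the array names and the nearest-neighbour separation check are illustrative assumptions rather than the exact implementation used to build the training set.

```python
import numpy as np

def select_training_stars(x, y, is_star, img_size=512, edge=10, min_sep=10):
    """Apply the selection cuts described above to one patch catalog.

    x, y    : pixel coordinates of all detected objects in the patch
    is_star : boolean array, True for point-like objects (placeholder flag;
              the real catalog columns differ)
    Returns a boolean mask of stars kept for training.
    """
    x, y, is_star = map(np.asarray, (x, y, is_star))
    keep = is_star.copy()

    # 1. Drop stars within `edge` pixels of the patch border.
    keep &= (x > edge) & (x < img_size - edge) & (y > edge) & (y < img_size - edge)

    # 2./3. Drop stars closer than `min_sep` pixels to *any* other object
    #       (galaxies, other stars, ...).
    dist = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    np.fill_diagonal(dist, np.inf)
    keep &= dist.min(axis=1) > min_sep
    return keep
```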

3 The Method

The flowchart in Figure 2 illustrates the structure of the proposed framework. Initially, we identify and mask out any defective pixels present in the original image. Following that, we divide the image into patches with a size of 512×512 pixels. Subsequently, we employ the Photometry-Detection Net to detect point-like targets within these image patches and conduct photometry to determine the flux of these identified targets. Using the obtained flux values, we perform photometry calibration to derive the magnitudes of the stars. Lastly, we utilize the Bayesian Photometry Neural Network (BPNN) to re-evaluate the magnitudes of the stars and estimate the associated uncertainty in the magnitude measurements. The decision to separate the star detection and the BNN for magnitude measurement is based on the DNN’s extensive parameterization, which necessitates multiple sampling during the prediction phase, thereby requiring significant computational resources. In this section, we will first introduce several performance evaluation metrics. We will then provide a concise overview of the Bayesian Neural Network (BNN) and subsequently discuss the implementation details and training procedures of the framework. All the neural networks described in this paper are constructed using PyTorch and executed on a computer equipped with an RTX 3090Ti GPU.

Figure 2: The flowchart of the PNet proposed in this paper. The green block indicates the data preprocessing part, the purple block indicates the Photometry-Detection Net part, the blue block indicates the Bayesian Photometry Neural Network part, and the orange block indicates the output of the whole algorithm.

3.1 The Performance Evaluation Criterion

Choosing appropriate performance evaluation criteria is crucial for properly assessing the performance of a framework. It is essential to select evaluation criteria that align with the objectives of developing the framework. Our algorithm focuses on four key aspects: detection of point-like targets, regression of their positions, regression of their magnitudes, and estimation of photometric uncertainties. Hence, the following performance evaluation criteria have been selected for our algorithm:

  • The recall rate, the precision rate, and the mean Average Precision (mAP) are chosen to evaluate the performance of the target detection results.

  • The astrometry accuracy in pixels is used to assess the accuracy of the astrometry results.

  • The photometry accuracy in magnitudes is used to evaluate the accuracy of the photometry results.

  • The outlier fraction (η), the Normalized Median Absolute Deviation (NMAD), the Median Absolute Deviation (MAD), and the mean value of the 1σ photometry uncertainties (Ē) are utilized to evaluate the uncertainty of the photometry results.

In the following, we will provide a detailed description of the aforementioned performance evaluation criteria. When evaluating the detection results, we consider four possible scenarios:

  • True Positive (TP): This occurs when a point-like celestial object is correctly identified as a point-like celestial object.

  • True Negative (TN): This occurs when targets other than point-like celestial objects are correctly identified as non-point-like celestial objects.

  • False Positive (FP): This occurs when targets other than point-like celestial objects are wrongly identified as point-like celestial objects.

  • False Negative (FN): This occurs when point-like celestial objects are wrongly identified as non-point-like celestial objects.

Given that our framework is capable of directly outputting the center coordinates of the point-like celestial objects, we assess the detection results by calculating the Euclidean distance between the predicted position and the corresponding position in the label. The Euclidean distance can be defined using the following equation 2:

EuclideanDistance = \sqrt{\sum_{i=1}^{n}{(x_{i}-y_{i})^{2}}}, (2)

where the variables x_i represent the predicted positions, y_i represent the label positions, and n represents the number of coordinates used to describe the positions (which is 2 in this paper). If the Euclidean distance between the predicted positions and the label positions is below a certain threshold and the classification result is in accordance with the label, it will be considered a true positive (TP) or true negative (TN). Otherwise, it will be classified as a false positive (FP) or false negative (FN).

Based on the aforementioned definition, we can assess the performance of our framework in target detection by utilizing the Precision and Recall metrics, as defined in the following Equation 3:

Precision = \frac{TP}{TP+FP}, (3)
Recall = \frac{TP}{TP+FN}.

Precision and Recall are widely used metrics for evaluating target detection results. Precision represents the percentage of true positives among all positive detection results, indicating the performance of the detection algorithm in minimizing false alarms. Recall, on the other hand, represents the percentage of true positives among all actual targets, describing the ability of the detection algorithm to identify all positive instances. Precision and Recall are both influenced by the chosen detection threshold. A higher threshold leads to higher precision, but lower recall, and vice versa. To comprehensively evaluate the performance of a detection algorithm, we can vary the detection threshold and generate a precision-recall curve (P-R curve). The area under the P-R curve is known as the average precision (AP). The mean average precision (mAP) is calculated by averaging the AP values across all categories, providing an overall assessment of the detection algorithm’s performance. In this paper, the mAP is obtained using Equation 4, where n represents the number of categories (which is 1 in this case). Furthermore, it is worth noting that the astrometry accuracy can be measured by calculating the distance between the predicted position and the corresponding position in the label for all true positive (TP) detection results.

mAP = \frac{\sum{AP}}{n} (4)
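The sketch below illustrates how precision, recall, and AP can be computed for a single class from the detection results; the greedy matching by Euclidean distance threshold and the confidence-sorted sweep are bookkeeping assumptions, not the exact evaluation code used in this work.

```python
import numpy as np

def match_detections(pred_xy, true_xy, max_dist=2.0):
    """Greedily match each detection to the nearest unmatched catalog star
    within `max_dist` pixels; returns a true-positive flag per detection."""
    true_xy = np.asarray(true_xy, dtype=float)
    tp = np.zeros(len(pred_xy), dtype=bool)
    used = set()
    for i, p in enumerate(np.asarray(pred_xy, dtype=float)):
        d = np.hypot(true_xy[:, 0] - p[0], true_xy[:, 1] - p[1])
        for j in np.argsort(d):
            if d[j] > max_dist:
                break
            if j not in used:
                tp[i] = True
                used.add(int(j))
                break
    return tp

def precision_recall_ap(scores, tp, n_true):
    """Precision-recall curve and AP obtained by sweeping the confidence threshold."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(tp)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / n_true
    ap = np.trapz(precision, recall)  # area under the (recall, precision) curve
    return precision, recall, ap
```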

The Bayesian neural network is employed to estimate the uncertainty inherent in the data. When a star image is fed into the Bayesian neural network, it generates a predictive posterior distribution for the star’s magnitude, along with the mean value of that distribution. The mean and standard deviation of this distribution, which cannot be directly obtained, are typically estimated through multiple Monte Carlo sampling iterations. The standard deviation σ_mag of this distribution can be interpreted as a quantitative measure of uncertainty. Consequently, we consider two aspects of the output: the deviation of the true value from the mean of the predicted distribution and the level of uncertainty in the prediction results. To evaluate the deviation of the true value from the mean, we employ the relative error RelativeError and the absolute error δ_mag, defined in Equation 5:

\delta_{mag} = mag_{true}-mag_{pred}, (5)
RelativeError = \frac{\delta_{mag}}{1+mag_{true}}.

By considering the relative error for all targets, we can determine the fraction of outliers (η) by establishing a threshold value for the relative error and calculating the proportion of prediction results exceeding this threshold (σ_mag in this study). Additionally, we employ the normalized median absolute deviation (NMAD) to evaluate the relative errors obtained. The median absolute deviation (MAD) is a statistical measure that characterizes the sample bias of one-dimensional numerical data. To obtain the NMAD, we normalize the MAD by multiplying it by a factor of 1.4826 (Rousseeuw & Croux, 1993), as demonstrated in Equation 6,

\delta_{NMAD} = 1.48\times median\left(\frac{abs(\delta_{mag}-median(\delta_{mag}))}{1+mag_{true}}\right). (6)

The absolute error is assessed through the Mean Absolute Error (MAE), which represents the average of the absolute differences between the mean of the predictive posterior distribution and the corresponding true value. This metric provides a visual indication of the error level. Regarding uncertainty, we initially estimate the magnitude distribution for each star using the Bayesian neural network. We then compute the mean of the 1σ uncertainties of these distributions, denoted as Ē, which serves as a reference indicator for the uncertainty distribution.
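A compact implementation of these uncertainty metrics might look as follows; it assumes per-star predicted means and 1σ values from the BPNN and, following the definition above, uses the per-star σ_mag as the outlier threshold.

```python
import numpy as np

def photometry_metrics(mag_true, mag_pred, sigma_pred):
    """Outlier fraction, NMAD, MAE, and mean 1-sigma uncertainty (Section 3.1)."""
    mag_true, mag_pred, sigma_pred = map(np.asarray, (mag_true, mag_pred, sigma_pred))
    delta = mag_true - mag_pred                       # Eq. 5
    rel_err = delta / (1.0 + mag_true)

    # Outliers: relative error larger than the per-star 1-sigma prediction,
    # following the threshold definition given in the text.
    eta = np.mean(np.abs(rel_err) > sigma_pred)

    nmad = 1.48 * np.median(np.abs(delta - np.median(delta)) / (1.0 + mag_true))  # Eq. 6
    mae = np.mean(np.abs(delta))
    mean_sigma = sigma_pred.mean()                    # reference indicator Ē
    return {"eta": eta, "nmad": nmad, "mae": mae, "mean_sigma": mean_sigma}
```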

3.2 The Principles of the Bayesian Photometry Neural Network

The Bayesian photometry neural network is utilized to estimate both the magnitude of stars and the associated photometry uncertainties. In traditional neural networks, the weights are fixed, leading to a lack of uncertainty estimation and excessive confidence in the predicted results. To address this issue, Bayesian neural networks employ the Variational Bayes (VB) method (Blundell et al., 2015) to introduce uncertainty into the network weights. However, before delving into the methods for capturing uncertainty, it is crucial to comprehend the origins of uncertainty.

In the field of machine learning, two main types of uncertainty are commonly recognized: aleatoric uncertainty (also known as data uncertainty) and epistemic uncertainty (also known as model uncertainty) (Kiureghian & Ditlevsen, 2009). Aleatoric uncertainty (AU) stems from the inherent noise present in the dataset itself (Gal et al., 2016). Since this noise is natural and unpredictable, aleatoric uncertainty cannot be eliminated. On the other hand, epistemic uncertainty (EU) arises from insufficient training of the network, resulting in a lack of knowledge about the system’s behavior. In principle, this uncertainty can be reduced as the training data approaches infinity (Hora, 1996). As mentioned earlier, the predictive uncertainty (PU) we aim to capture can be expressed as the combination of AU and EU (Abdar et al., 2021), as illustrated in Equation 7:

PU = AU + EU (7)

By defining the PU, we can establish the underlying principle of the Bayesian Photometry Neural Network (BPNN). Initially, we define the weights of the BPNN as ω ∈ Ω, where Ω represents the parameter space of the BPNN. The training dataset is denoted as D, and within this dataset, we have data pairs X and Y. Similarly, in the test dataset, we have data pairs x and y. The distribution of weights ω learned by the network from the dataset D is represented as p(ω|D). Additionally, p(y|x,ω) signifies the probability that the neural network yields output y when given input x and weight ω. Lastly, p(ω) represents the prior weight distribution of the network. With these definitions, we can derive the probability distribution of the output y given the input x when the neural network is trained using dataset D, as illustrated in Equation 8:

p(y|x,D) = \int p(y|x,\omega)p(\omega|D)d\omega. (8)

The prediction uncertainty captured by the BPNN is represented by p(y|x,D). However, obtaining p(ω|D) through analytical calculations is challenging in real applications, often requiring approximation methods to perform the inference task (Hortúa et al., 2020). In this study, we employ variational inference to approximate the solution for p(ω|D). Initially, we assume a variational distribution q(ω|θ), where θ denotes a set of variational parameters. Subsequently, we calculate the Kullback-Leibler (KL) divergence between the variational distributions q(ω|θ) and p(ω|D). Finally, we determine a set of variational parameters θ* that minimizes the KL divergence, as shown in Equation 9:

\theta^{\ast} = \mathop{\arg\min}\limits_{\theta}KL\left[q(\omega|\theta)\|p(\omega|D)\right] (9)
= \mathop{\arg\min}\limits_{\theta}\int q(\omega|\theta)\ln{\frac{q(\omega|\theta)}{p(\omega|D)}}d\omega

Since p(ω|D) cannot be obtained analytically, we further introduce the Bayesian formula below:

p(\omega|D) = \frac{p(D|\omega)p(\omega)}{p(D)}. (10)

We could obtain the KL divergence according to Equation 9 and Equation 10, as shown in Equation 11:

KL\left[q(\omega|\theta)\|p(\omega|D)\right] = \ln{p(D)}+KL\left[q(\omega|\theta)\|p(\omega)\right]-\int q(\omega|\theta)\ln{p(D|\omega)}d\omega. (11)

Since ln p(D) is only related to the properties of the data, we can minimize the KL divergence by minimizing the following Equation 12:

F(D,\theta) = KL\left[q(\omega|\theta)\|p(\omega)\right]-\int q(\omega|\theta)\ln{p(D|\omega)}d\omega (12)
= E_{q(\omega|\theta)}\left[\ln{q(\omega|\theta)}-\ln{p(D,\omega)}\right],

It is important to note that the term F(D,θ) in Equation 12 corresponds to the negative value of the Evidence Lower Bound (ELBO), as discussed in Blei et al. (2017). By employing Equation 12, we can transform the analytically challenging calculation into a practical optimization problem for the variational parameter θ. For a comprehensive understanding of the approximation methodology, we refer readers to the work of Hortúa et al. (2020), while providing a concise overview here.

To begin, let us assume that we can obtain a set of variational parameters θ̂ that minimizes F(D,θ) through an optimization process. In this study, we perform such optimization using the backpropagation algorithm (Rumelhart et al., 1986) and employ the Adam gradient descent algorithm (Kingma & Ba, 2014). With the obtained θ̂, we can derive the predictive distribution q_θ̂. By combining this with Equation 8, we can determine the predictive distribution of the output variable y given the input x:

q_{\hat{\theta}}(y|x) = \int p(y|x,\omega)q(\omega|\hat{\theta})d\omega, (13)

Although Equation 13 provides an analytical solution for the predictive distribution, calculating the integral can be challenging in practical applications. Hence, we employ Monte Carlo sampling (Gal et al., 2016) to approximate the integral with a finite sum and obtain the final results:

q_{\hat{\theta}}(y|x) \approx \frac{1}{N}\sum_{n=1}^{N}p(y|x,\hat{\omega}_{n}), (14)

where N denotes the number of samples, and ω̂_n represents the n-th sampled value of the weights obtained from q(ω|θ̂). Equation 14 is equivalent to Equation 13 as the number of samples N tends to infinity. This equation indicates that we can obtain a Bayesian estimation through Monte Carlo sampling of the weights of a predefined neural network.

In the study by Hortúa et al. (2020), the authors use the total variation principle to derive the analytical results for the variance of the predicted distribution on a fixed input x. They further simplify these results to obtain:

\hat{Var}(y|x) \approx \frac{1}{T}\sum_{t=1}^{T}\sigma_{t}^{2}+\frac{1}{T}\sum_{t=1}^{T}(\mu_{t}^{2}-\bar{\mu}^{2}), (15)

In the equation above, T denotes the total number of forward passes of the network. The terms σ_t and μ_t refer to the standard deviation and mean of the distribution obtained during the t-th forward pass, respectively, while μ̄ represents the mean of all μ_t values. The first term in Equation 15 corresponds to the aleatoric uncertainty discussed earlier, whereas the second term corresponds to the epistemic uncertainty. In real applications, we build a BPNN with the flipout method discussed in Wen et al. (2018) to generate pseudo-independent weight perturbations on minibatches, which can simulate the Bayesian inference process. Then the distribution of the predicted results can be evaluated against the prior distribution to provide a reference for the photometry uncertainty. We will discuss the details of this method in Section 3.3.2.
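In practice, the Monte Carlo estimates of Equations 14 and 15 reduce to repeated stochastic forward passes. The sketch below assumes a hypothetical `bpnn` module that returns a (mean, standard deviation) pair per pass with flipout-perturbed weights; it is illustrative rather than the exact PNet code.

```python
import torch

@torch.no_grad()
def bayesian_predict(bpnn, stamps, n_samples=50):
    """Monte Carlo estimate of the predictive mean and total uncertainty (Eqs. 14-15)."""
    mus, sigmas = [], []
    for _ in range(n_samples):
        mu_t, sigma_t = bpnn(stamps)          # one stochastic forward pass
        mus.append(mu_t)
        sigmas.append(sigma_t)
    mu = torch.stack(mus)                      # shape (T, batch)
    sigma = torch.stack(sigmas)

    pred_mean = mu.mean(dim=0)                 # MC mean of the posterior predictive
    aleatoric = (sigma ** 2).mean(dim=0)       # first term of Eq. 15
    epistemic = (mu ** 2).mean(dim=0) - pred_mean ** 2   # second term of Eq. 15
    return pred_mean, torch.sqrt(aleatoric + epistemic)
```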

3.3 The Structure of Neural Networks in the PNet

In this subsection, we will provide a comprehensive overview of the neural network structure employed in our framework. The neural network comprises three interconnected components that collaboratively predict the final outcomes. The first component, known as the Photometry-Detection Net, is designed to detect point-like celestial objects, accurately determine their subpixel positions, and calculate their flux. The second component is responsible for the estimation of the photometry results and their uncertainties. Lastly, we carry out the photometry calibration, utilizing reference stars to ensure precise calibration of the photometric measurements derived from the flux values. This framework allows us to obtain reliable magnitudes and positions for point-like celestial objects. It is important to emphasize that the primary focus of this paper is not the detection of various celestial objects such as galaxies or quasars. Neural networks specifically tailored for the detection of celestial objects from multicolor images are more suitable for these targets (Jia et al., 2023b). Once these targets have been detected by the aforementioned methods, we proceed to mask them and carry out point-like celestial object detection, astrometry, and photometry using single-band images with our framework. As a result, the final output of our framework includes the positions of point-like celestial objects, their corresponding magnitudes, and the associated photometry uncertainties, which can be directly used to analyze celestial objects with light variation and moving celestial objects.

3.3.1 The Photometry-Detection Neural Network

Since point-like celestial objects possess a relatively simple structure and smaller size compared to other natural images, there is no need to employ highly complex backbone neural networks specifically designed for natural image target detection. Furthermore, our goal is to determine the center of point-like celestial objects instead of utilizing the bounding box approach commonly used in deep neural network-based target detection algorithms. Therefore, we must modify the position regression strategy within the neural networks. With these considerations in mind, we propose a novel structure known as the Photometry-Detection Net, which directly provides the position and flux of point-like celestial objects. Training data for the Photometry-Detection Net consists of original observation images in a single band, and the corresponding labels in the training data indicate the position and flux of these point-like celestial objects.

First and foremost, the Photometry-Detection Net integrates the CenterNet structure for the detection of celestial objects. CenterNet is a straightforward, efficient and accurate neural network designed specifically to detect small targets by regressing their key points (Zhou et al., 2019). The CenterNet has gained widespread recognition and has been applied in various domains (Ahmed et al., 2021; Guo et al., 2021). The structure of the CenterNet is depicted in Figure 3. In practical scenarios, the CenterNet begins by performing regression to identify the center point of a target, followed by feature extraction in the vicinity of the center point. Subsequently, the CenterNet provides the target’s position and confidence score. Building upon the detection results, we introduce the photometry neural network branch connected to the CenterNet, enabling us to obtain the flux of the point-like celestial objects. To optimize computational resources, we set the input size of the CenterNet to 512×512 pixels, and the output of the CenterNet consists of the detected targets, their corresponding positions, and their flux values.

Figure 3: The structure of CenterNet. The CenterNet includes three modules: the Downsample module, the Extractor module, and the PreHead module. Residual1 and Residual2 represent two types of residual structures. The network takes an image with a resolution of 512×512 pixels as input and outputs the center position, flux value, size of the target, and a heatmap that corresponds to the input image. The center position and the flux value will be used for further processing. In the Extractor module, we propose to use the Hourglass neural network for feature extraction.

As depicted in Figure 3, the CenterNet comprises three distinct modules: the Downsample module, the Extractor module, and the PreHead module. In this study, we employ a 2D convolutional neural network as the Downsample module, using a convolutional layer with a kernel size of 7×7, a padding of 3, and a stride of 2 to downsample the image to a size of 256×256 pixels. Subsequently, the resulting image is further downsampled to 128×128 pixels using the residual1 component, as illustrated in Figure 3. These down-sampled images are then passed through the Extractor module to extract their features (Springenberg et al., 2014). For the Extractor module, we adopt the Hourglass neural network in this paper, as it is well-suited for capturing features from small objects. When the downsampled images are inputted into the hourglass network, a sequence of residual modules with convolutional layers having a stride of 2 is employed to iteratively downsample the image four times, reducing its size by a factor of two each time. Following downsampling, four upsampling operations are performed using the nearest-neighbor interpolation algorithm. The spatial information from each downsampling scale is preserved through skip connections and fused with the upsampled feature maps during the upsampling process (Newell et al., 2016). The hourglass network ultimately produces a feature map of the same size as the input. By stacking multiple hourglasses, the detection capability of the neural network can be enhanced, as subsequent hourglasses refine the detection results based on the output of the previous ones, which proves more effective than employing a single detection network. For instance, in a system containing multiple stars, a star that has been missed in the initial detection round may become easier to detect in subsequent hourglasses.
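The following PyTorch sketch illustrates the Downsample stem and one recursive hourglass stage with nearest-neighbour upsampling and skip connections; the channel widths and block details are assumptions, since the paper does not list them explicitly.

```python
import torch.nn as nn
import torch.nn.functional as F

class Downsample(nn.Module):
    """CenterNet stem: 512x512 -> 256x256 via a 7x7/stride-2 convolution, then a
    stride-2 block (a stand-in for "residual1") brings the map to 128x128."""
    def __init__(self, in_ch=1, mid_ch=64, out_ch=128):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True))
        self.residual = nn.Sequential(
            nn.Conv2d(mid_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.residual(self.stem(x))        # (B, out_ch, 128, 128)

class Hourglass(nn.Module):
    """Recursive hourglass: downsample `depth` times, upsample with
    nearest-neighbour interpolation, and fuse skip connections."""
    def __init__(self, ch=128, depth=4):
        super().__init__()
        self.skip = nn.Conv2d(ch, ch, 3, padding=1)
        self.down = nn.Conv2d(ch, ch, 3, stride=2, padding=1)
        self.inner = Hourglass(ch, depth - 1) if depth > 1 else nn.Conv2d(ch, ch, 3, padding=1)
        self.up_conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        skip = self.skip(x)
        y = self.inner(F.relu(self.down(x)))
        y = F.interpolate(self.up_conv(y), scale_factor=2, mode="nearest")
        return skip + y                            # skip-connection fusion
```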

Finally, the feature maps extracted from the Extractor module are passed to the PreHead module to perform regression on various attributes of the targets in the image under processing. In this study, the PreHead module consists of four sub-PreHeads: heatmap, offset, flux, and size. All four PreHeads are Convolutional Neural Networks (CNNs) comprising 2D convolutional layers and Rectified Linear Unit (ReLU) layers. The heatmap PreHead generates a heatmap of size batchsize × classnum × height × width. In this particular study, only star detection is considered, so the class num is set to 1. The heatmap divides the original image into a grid of 128×128 patches, serving two purposes: providing confidence scores for different classes at various positions of the target and indicating the approximate position of the detected targets within the grid. The offset and size PreHeads predict different properties of the targets but share the same CNN structure, producing output sizes of batchsize × 2 × width × height, where 2 represents the two parameters predicted by each PreHead. The Offset PreHead estimates the deviation between the actual center position of the target and the position indicated by the heatmap, which allows us to calculate the actual center position. The Size PreHead provides the height and width of the target. The Flux PreHead has a structure similar to the Offset PreHead, but with a reduced number of output channels (1) and prediction of a single parameter, which represents the flux. The structures of the four PreHeads are depicted in Figure 4. Additionally, it is worth noting that we have incorporated several intermediate supervision steps within the PreHead module. These supervisions are added directly after each hourglass neural network to assess its performance during the training phase. After training, these neural networks will not be utilized, and we will solely employ the main structure of the PreHead module.
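A simplified PyTorch sketch of the four PreHeads is given below; the hidden channel width and the exact convolution stack are assumptions consistent with the description above, not the verbatim PNet implementation.

```python
import torch
import torch.nn as nn

class PreHead(nn.Module):
    """One regression head: a 3x3 conv + ReLU followed by a 1x1 conv that sets
    the number of output channels (a sketch of the blocks in Figure 4)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1))

    def forward(self, x):
        return self.net(x)

class PreHeads(nn.Module):
    """The four heads used by the Photometry-Detection Net."""
    def __init__(self, in_ch=128, num_classes=1):
        super().__init__()
        self.heatmap = PreHead(in_ch, num_classes)   # per-class confidence grid
        self.offset = PreHead(in_ch, 2)              # sub-pixel centre offset (dx, dy)
        self.size = PreHead(in_ch, 2)                # target height and width
        self.flux = PreHead(in_ch, 1)                # instrumental flux

    def forward(self, feat):                          # feat: (B, in_ch, 128, 128)
        return {"heatmap": torch.sigmoid(self.heatmap(feat)),
                "offset": self.offset(feat),
                "size": self.size(feat),
                "flux": self.flux(feat)}
```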

Figure 4: The figure illustrates the structure diagram of the PreHeads. The blue blocks share the same structure, and their detailed configuration is presented in the figure. The yellow 1x1 convolution within the PreHead module is used to adjust the number of channels to align with the desired output parameter count. The input to the PreHead is the feature map extracted by the Hourglass, while the output consists of the parameters indicated in the purple blocks of the figure.

3.3.2 The Bayesian Photometry Neural Network

The Bayesian Photometry Neural Network (BPNN) is employed to determine the magnitude and uncertainty of photometry results, serving as a reference for subsequent light curve classification. Based on the detection results from the Photometry-Detection Net, all detection results are cropped into stamp images with a size of 9×9 pixels. This size is suitable for star images of moderate brightness, but the stamp images can be adjusted to smaller or larger sizes depending on the actual observation conditions. The stamp images are then used as input for the BPNN, which estimates the magnitude multiple times using the flipout method to approximate the posterior distribution of the neural networks in magnitude estimation. The BPNN comprises two main components: the Feature Extraction Layer and the Bayesian Layer, as illustrated in Figure 5.

Figure 5: The figure illustrates the architecture of the BPNN. The Feature Extraction Layer is depicted in purple, while the Bayesian Layer is represented in red. The blue blocks indicate the convolutional layers, while the green blocks correspond to the Bayesian fully connected layers. The network takes a star image of dimensions 9×9 pixels as input and produces a predicted distribution for photometry as the output.

The Feature Extraction Layer is built upon the ResNet50 architecture (He et al., 2016). ResNet50 tackles the challenges of increased computational time and diminished accuracy in deep network structures by incorporating "shortcut connections". Since the input image is relatively small (9×9 pixels), we have adjusted the size of the convolution kernel to 3×3 to better accommodate these small-scale images. Additionally, we have removed the pooling layers in the network to reduce information loss and improve prediction accuracy. Lastly, we have removed the fully connected layer and use the remaining network as the Feature Extraction Layer.
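One way to realize such a modified trunk with a recent torchvision release is sketched below; the single-band input, the retained strides, and the use of an identity head are assumptions made for illustration rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_feature_extractor(in_channels=1):
    """Sketch of the BPNN feature extractor: a ResNet50 trunk adapted to
    9x9 single-band stamps (illustrative choices, not the exact PNet code)."""
    net = models.resnet50(weights=None)
    # 3x3 stem instead of the default 7x7/stride-2, to suit tiny stamps.
    net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=3, stride=1, padding=1, bias=False)
    # Drop the stem max-pooling to avoid losing information on 9x9 inputs.
    net.maxpool = nn.Identity()
    # Remove the classification head; the trunk now returns a 2048-d feature
    # vector that feeds the Bayesian (flipout) layers.
    net.fc = nn.Identity()
    return net

features = build_feature_extractor()(torch.randn(8, 1, 9, 9))  # -> shape (8, 2048)
```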

In Section 3.2, we have demonstrated that the Variational Bayesian (VB) method can approximate the posterior distribution by minimizing the Evidence Lower Bound (ELBO) as depicted in Equation 12. Typically, this procedure is accomplished using the gradient descent method. The gradient of F(D,θ)F(D,\theta) in Equation 12 can be computed by considering the density function q(ω|θ)q(\omega|\theta), which is also parameterized by θ\theta:

\nabla_{\theta}E_{q(\omega|\theta)}[f_{\theta}(\omega)] = \int\nabla_{\theta}q(\omega|\theta)f_{\theta}(\omega)d\omega+E_{q(\omega|\theta)}[\nabla_{\theta}f_{\theta}(\omega)]. (16)

The specific form of f_θ(ω) is:

f_{\theta}(\omega) = \ln{q(\omega|\theta)}-\ln{p(D,\omega)}. (17)

As depicted in the first term of Equation 16, computing the gradient necessitates obtaining analytical solutions for expectations involving the approximate posterior distribution. However, this task often proves challenging in real-world applications. Consequently, when attempting to directly compute Equation 12 using a neural network via forward propagation, calculating the gradient becomes generally infeasible, and the computation process lacks differentiability, impeding backpropagation. To address this issue, ”The reparameterization trick” has been introduced by Kingma & Welling (2013). By employing this trick, a straightforward and differentiable unbiased estimator for the ELBO can be generated, enabling the use of gradient descent algorithms for ELBO optimization.

However, the reparameterization trick is not without its limitations. One issue arises from the fact that all sample weights within the same batch are identical, resulting in correlated gradients across different examples in the batch (Hortúa et al., 2020). To address this problem, a technique known as ”flipout” was introduced by Wen et al. (2018). Flipout facilitates the generation of efficient pseudo-independent weight perturbations on mini-batches. In a comparative study conducted by Hortúa et al. (2020), several methods, including Dropout, Dropconnect, Reparameterization Trick (RT), and flipout, have been evaluated, with flipout demonstrating superior performance. As a result, for the implementation of the Bayesian Neural Network in the BPNN, we have opted to utilize flipout. Specifically, we employ the method proposed in Krishnan et al. (2022) to construct a 5-layer flipout linear layer, thereby enabling the realization of Bayesian layers. In the flipout linear layers, multiple sets of weights are sampled during each forward pass. During the training step, the weights are randomly flipped or flipped out based on a Bernoulli distribution. In the deployment step, these weights are treated as random variables and can be used for uncertainty estimation. By sampling multiple sets of weights, flipout introduces stochasticity into the network, resulting in different predictions for the same input. This characteristic allows for the quantification of uncertainty and enhances the model’s robustness and generalization capabilities.
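A sketch of the resulting Bayesian head built from bayesian-torch flipout layers is shown below; the layer widths, the (mean, log-variance) output convention, and the handling of the layer's return value are assumptions that may need adjustment for a specific bayesian-torch version.

```python
import torch
import torch.nn as nn
from bayesian_torch.layers import LinearFlipout  # Krishnan et al. (2022)

class BayesianHead(nn.Module):
    """Sketch of the 5-layer flipout head mapping the 2048-d feature vector to a
    predicted magnitude distribution (mean and standard deviation); widths are
    placeholders, not values reported in the paper."""
    def __init__(self, in_features=2048, widths=(512, 128, 64, 32)):
        super().__init__()
        dims = (in_features,) + widths
        self.hidden = nn.ModuleList(
            [LinearFlipout(dims[i], dims[i + 1]) for i in range(len(widths))])
        self.out = LinearFlipout(dims[-1], 2)        # -> (mu, log sigma^2)

    @staticmethod
    def _apply(layer, x):
        y = layer(x)
        return y[0] if isinstance(y, tuple) else y   # tolerate (output, kl) returns

    def forward(self, x):
        for layer in self.hidden:
            x = torch.relu(self._apply(layer, x))
        mu, log_var = self._apply(self.out, x).chunk(2, dim=-1)
        return mu.squeeze(-1), torch.exp(0.5 * log_var).squeeze(-1)
```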

3.3.3 The Photometry Calibration Part

With above neural networks, we are able to estimate the flux of celestial objects based solely on the grayscale counts within the images of these objects. However, in order to identify flares and variable stars, it becomes crucial to calibrate the flux of all detected targets by referencing the flux from reference stars. For this calibration process, we employ the method proposed by Stoughton et al. (2002) to convert flux to magnitudes, which is represented by the Equation 18:

mag = mag_{0}+mag_{zero}+k(t)\times x+f(i) (18)

The equation provided calculates the calibrated magnitude mag based on various variables. The instrumental magnitude mag_0 is derived from the photometry branch, and mag_zero represents the zero-point magnitude set by the user. The primary extinction coefficient is denoted as k, and x represents the air mass. Additionally, f(i) characterizes the flat field of the CCD's i-th column. Consequently, when working with SDSS data, it is necessary to obtain the run, camcol, and field parameters for magnitude calibration. On the other hand, for data obtained by different telescopes, we first use the PNet to obtain flux and detection results. Then, we use 50 to 100 isolated reference stars on the same image with known magnitudes to fit the calibration function by either least squares or minimizing the chi-square distance between the flux and the magnitudes. In the above steps, we filter out abnormal points where the error between the observed value and the expected value exceeds 1σ.
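For telescopes without the SDSS calibration metadata, the fit described above reduces to estimating an offset (and, when available, extinction and flat-field terms) from reference stars with iterative 1σ clipping. A simplified sketch assuming only a constant zero-point term is given below.

```python
import numpy as np

def fit_zero_point(mag_inst, mag_ref, n_iter=3):
    """Fit a constant photometric offset from isolated reference stars with
    iterative 1-sigma outlier rejection (a simplified stand-in for Eq. 18 when
    the airmass and flat-field terms are not modelled)."""
    mag_inst, mag_ref = np.asarray(mag_inst, float), np.asarray(mag_ref, float)
    keep = np.ones_like(mag_inst, dtype=bool)
    zp = 0.0
    for _ in range(n_iter):
        zp = np.mean(mag_ref[keep] - mag_inst[keep])   # least-squares offset
        resid = mag_ref - (mag_inst + zp)
        keep = np.abs(resid) < resid[keep].std()       # 1-sigma clipping
    return zp

# Calibrated magnitude of any detection on the same image:
#   mag = mag_inst + fit_zero_point(ref_inst, ref_catalog)
```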

4 Training and Performance Evaluation

In this section, we will utilize two types of data to train and evaluate the performance of our framework: the g-band data from SDSS DR17 and simulated data generated by Skymaker. It is important to highlight that while the datasets differ, all other computational procedures in the framework remain consistent. In addition, we select SExtractor (Bertin & Arnouts, 1996), a commonly used astrometry and photometry tool, to process the observation data at the same time for comparison. As a benchmark, we will employ the magnitudes and coordinates provided by the official catalogue from the SDSS for real observation data and the catalog provided by Skymaker for simulated data. Regarding the Photometry-Detection Net, we will showcase the detection results as well as preliminary photometry results. Subsequently, we will focus on the Bayesian Photometry Neural Network and present the estimation of the photometry uncertainty.

4.1 Training and Performance Evaluation of the Photometry-Detection Net

During the training phase, certain targets that are not utilized in the training data, such as dense star fields and stars close to galaxies, are masked. As previously mentioned, the detection of these targets from multiple color images requires specialized processing methods, which are beyond the scope of this paper. For optimization, we employ the Adam Optimizer (Kingma & Ba, 2014) and evaluate the classification results using the Focal Loss (Lin et al., 2017). The Focal Loss dynamically adjusts the impact of each training example on the overall loss, ensuring a balanced approach to the detection results of celestial objects with varying magnitudes. The Focal Loss is formulated by introducing a modulating factor into the Cross Entropy (CE) loss, as originally proposed by Lin et al. (2017). In this study, we employ an alpha-balanced version of the focal loss, as defined in Equation 19:

FocalLoss(p_{t}) = -\alpha_{t}(1-p_{t})^{\gamma}\log(p_{t}), (19)

where α_t is the weighting factor used to adjust the weights of positive and negative samples, p_t is the output of the network, and γ > 0 is an adjustable focusing parameter (2 in this paper). The Mean Absolute Error (MAE) loss is utilized to assess the position error, while the Mean Square Error (MSE) loss is employed to evaluate the flux prediction outcomes; the aggregate of these three losses constitutes the loss function employed to train the Photometry-Detection Net. The computation of MAE and MSE is demonstrated in Equation 20, where n represents the total number of targets, ŷ_i denotes the predicted value of the i-th target, and y_i denotes the corresponding true value:

MAE = \frac{1}{n}\sum_{i=1}^{n}\lvert\hat{y}_{i}-y_{i}\rvert (20)
MSE = \frac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i}-y_{i})^{2}.
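A sketch of how the three loss terms could be combined during training is given below; the α value, the loss weights, and the direct use of the heads' dense outputs are assumptions (the paper only specifies γ = 2, and in the full pipeline the regression losses are typically evaluated only at object centres).

```python
import torch
import torch.nn.functional as F

def focal_loss(pred, target, alpha=0.25, gamma=2.0):
    """Alpha-balanced focal loss on the heatmap (Eq. 19); `alpha` is a common
    default and is not specified in the paper."""
    pos = target > 0.5                               # treat peaks as positives
    p_t = torch.where(pos, pred, 1 - pred)
    a_t = torch.where(pos, torch.full_like(pred, alpha), torch.full_like(pred, 1 - alpha))
    return (-a_t * (1 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-6))).mean()

def detection_loss(pred, target, w_pos=1.0, w_flux=1.0):
    """Total training loss: focal loss for classification, MAE (L1) for the
    centre offsets, and MSE for the flux (Eq. 20); weights are assumptions."""
    cls = focal_loss(pred["heatmap"], target["heatmap"])
    pos = F.l1_loss(pred["offset"], target["offset"])
    flux = F.mse_loss(pred["flux"], target["flux"])
    return cls + w_pos * pos + w_flux * flux
```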

It takes approximately 324 seconds to complete a single epoch when training the Photometry-Detection Net on a computer equipped with a 3090Ti GPU. The batch size consists of 40 images, each with dimensions of 512×512 pixels. After approximately 40 to 50 epochs of training with randomly initialized weights, the Photometry-Detection Net starts to converge. The convergence process is accelerated when using pre-trained weights. Since the instrumental magnitude measurements exhibit a relatively consistent pattern, when employing pre-trained weights and training on new data batches, it is common practice to freeze the feature extraction network (the Hourglass Network) and the photometry branch network. After training for a specific number of epochs, these two networks are then unfrozen, and the overall network undergoes fine-tuning.

After training, we have tested the performance of the Photometry-Detection Net in detecting star targets and measuring magnitudes using a batch of 512×512 pixel SDSS astronomical images and simulated images generated by Skymaker. When the network outputs classification results, it provides a confidence level, and when this level is higher than a certain threshold, we consider that a star target exists. Generally, the lower the threshold, the higher the recall, but the lower the precision, and vice versa. We have shown the performance of the Photometry-Detection Net using different confidence thresholds, as shown in Figure 6. When the confidence threshold is set to 0.3, the specific results obtained from testing on SDSS data and simulated data are shown in Table 1. The table also presents the detection results from SExtractor. It is evident that our method achieves a higher level of precision and recall compared to SExtractor.

(a) The relation between the confidence and the precision and the recall for the Photometry-Detection Net in the SDSS data.
(b) The relation between the confidence and the precision and the recall for the Photometry-Detection Net in the simulated data.
(c) The precision-recall curve for the Photometry-Detection Net in the SDSS data.
(d) The precision-recall curve for the Photometry-Detection Net in the simulated data.
Figure 6: In figures a and b, the horizontal axis represents the confidence level of the predicted targets by the Photometry-Detection Net, while the vertical axis represents the corresponding precision and recall at that confidence level. The red dashed line represents the confidence threshold of 0.3 that has been used in this study. In figures c and d, the horizontal axis represents recall and the vertical axis represents precision. The purple curve illustrates the precision-recall (p-r) curve when the confidence threshold is set to 0. The area between the two red dashed lines depicts the p-r curve when the confidence threshold is set to 0.3, and the region between this curve and the horizontal axis is visually represented as the green area in the figure. This provides insights into the Photometry-Detection Net's performance in detecting star targets. As explained in this section, the area between the p-r curve and the horizontal axis corresponds to the average precision (AP). A higher AP value indicates a better target detection performance by the network. As shown in this figure, our method could obtain an AP of more than 98% in both simulated and real observation data.
Table 1: Detection Results
             SExtractor_SDSS   PNet_SDSS   PNet_Simulation
Precision         84.24%         92.64%        99.43%
Recall            96.12%         98.20%        99.98%

During the star target detection process, the Photometry-Detection Net also conducts photometry and astrometry. For the SDSS data, the photometry branch utilizes the calibration method described in Section 3.3.3 to calibrate the magnitudes. However, since the simulated data does not account for airmass effects or other effects, calibration is not necessary for these data. When using SExtractor to process SDSS data, a fixed offset between the photometry and astrometry results and their ground truth values may exist. This offset can be determined by statistically analyzing the measurement results. In our study, we have adjusted the results obtained from SExtractor by subtracting this offset. Aside from this adjustment, no further processing has been performed on the measurement results derived from SExtractor for the SDSS data. The photometry and astrometry results are depicted in Figure 7 and Figure 8.

(a) The photometry error of the SExtractor for the SDSS data
(b) Histogram of the photometry error of the SExtractor for the SDSS data
(c) The photometry error of the PNet for the SDSS data
(d) Histogram of the photometry error of the PNet for the SDSS data
(e) The photometry error of the PNet for the simulated data
(f) Histogram of the photometry error of the PNet for the simulated data
Figure 7: Figures a, c, and e display the photometric error of the SExtractor on SDSS data, the Photometry-Detection Net on SDSS data, and the Photometry-Detection Net on simulated data, respectively. In these figures, the x-axis represents the magnitude, while the y-axis represents the error between the predicted and true magnitudes. The proximity of the scatter points to the y=0 line indicates the accuracy of the network's photometry results. The figure illustrates that our algorithm exhibits smaller errors in regions with lower magnitudes compared to those with higher magnitudes. Figures b, d, and f display histograms of magnitude errors. The horizontal axis signifies the magnitude error range, while the vertical axis indicates the percentage of stars falling into each error range. A larger concentration of stars within smaller magnitude error ranges on the horizontal axis indicates a more robust capability of the algorithm to measure target magnitudes. As depicted in these figures, the PNet exhibits superior photometry results compared to those of SExtractor.

Figure 7 reveals that the magnitude error is mostly within 0.5 magnitudes for the majority of stars, with approximately 50% of the stars exhibiting magnitude measurement errors within 2.5%. These measurement errors tend to increase gradually as the magnitude of the stars increases. This behavior aligns with theoretical analysis, as noise of the same level has a smaller effect on stars with lower magnitudes (brighter stars). For every 5-magnitude decrease, the flux of the star increases by a factor of 100. Additionally, compared to the errors observed in the SDSS data, the measurement errors of star magnitudes in the simulated data are generally smaller and more concentrated. This difference primarily stems from the presence of noise in real data, which not only interferes with the algorithm's flux measurements but also affects the process of calibrating star magnitudes. Additionally, as illustrated in Figure 7, it is worth noting that the photometric error of our framework when applied to SDSS data exhibits some asymmetry, primarily due to the calibration process. However, when looking at the overall performance, our framework shows more consistent and reliable photometric results, outperforming those obtained by SExtractor on the SDSS data.

Meanwhile, we compare the astrometry results obtained by the Photometry-Detection Net and SExtractor in Figure 8. The astrometry error forms a roughly symmetrical circle. The contour lines labeled 3 correspond to a diameter of 0.15 pixels, while those labeled 4 correspond to a diameter of 0.21 pixels. In contrast to SExtractor, our framework achieves better astrometry accuracy on the SDSS data, with a larger number of stars exhibiting astrometry errors smaller than 0.1 pixels. Nevertheless, the astrometry error for the SDSS data remains somewhat larger than that for the simulated data, primarily because of additional noise introduced during real observations.

Figure 8: The left panel displays the astrometry error of SExtractor on the SDSS data, the middle panel illustrates the astrometry error of the Photometry-Detection Net on the SDSS data, and the right panel shows the astrometry error of the Photometry-Detection Net on the simulated data. Assuming the true positions of all targets are centered at (0,0), the predicted positions are scattered across the figure. Each small area within the figure, defined as a 0.02 × 0.02 pixel region, tallies the number of stars with astrometry errors falling within that area. The resulting count is logarithmically transformed and visualized as a heatmap. Both the horizontal and vertical axes are measured in pixels, while the color bar denotes the logarithm of the star count. The blue contour corresponds to an astrometry error of 0.15 pixels, the green contour corresponds to an astrometry error of 0.21 pixels, and the red dashed line represents the limit of the astrometry error in both the x and y directions. When the centers of the green and blue contours are closer to the (0,0) point, and the contours are smaller and more symmetrical, the astrometry accuracy of the algorithm is better.
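The binning procedure described in the caption can be reproduced with a short sketch like the following, assuming arrays dx and dy hold per-star astrometry residuals in pixels; the array names, bin range, and synthetic residuals are illustrative and not taken from the original code.

```python
import numpy as np

# Hypothetical per-star astrometry residuals in pixels (prediction - truth).
rng = np.random.default_rng(0)
dx = rng.normal(0.0, 0.05, size=10000)
dy = rng.normal(0.0, 0.05, size=10000)

# Count stars in 0.02 x 0.02 pixel cells, as described in the Figure 8 caption.
bins = np.arange(-0.5, 0.5 + 0.02, 0.02)
counts, xedges, yedges = np.histogram2d(dx, dy, bins=[bins, bins])

# Logarithmic transform of the counts for the heatmap; log10(1 + N) avoids log(0).
log_counts = np.log10(1.0 + counts)
```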

4.2 Training and Performance Evaluation of the Bayesian Photometry Neural Network

As mentioned earlier, the detected results in the images are segmented into smaller patches to estimate photometry results and uncertainties. We assume a Gaussian distribution as the prior distribution for the photometry results, and the loss function can be derived directly from Equation 12. For optimization during the training stage, we utilize the Adam optimizer (Kingma & Ba, 2014). However, the random weight sampling can cause exploding gradients, leading to unstable training. To address this problem, we implement gradient clipping with a threshold: if the gradient exceeds this threshold, it is truncated to a specified value. On average, training one epoch takes approximately 12 seconds for a batch of 2000 small images.
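A minimal PyTorch sketch of this training step is given below, assuming a network that outputs a predicted mean and standard deviation per patch and a Gaussian negative log-likelihood loss in the spirit of Equation 12; the toy model, clipping threshold, and random batch are placeholders rather than the configuration actually used in the paper.

```python
import torch
import torch.nn as nn

def gaussian_nll(mu, sigma, target, eps=1e-6):
    # Negative log-likelihood of the target under N(mu, sigma^2).
    sigma = sigma.clamp(min=eps)
    return (torch.log(sigma) + 0.5 * ((target - mu) / sigma) ** 2).mean()

# Placeholder model: any network that returns (mu, sigma) for each input patch.
class ToyPhotometryNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())
        self.head = nn.Linear(64, 2)

    def forward(self, x):
        out = self.head(self.backbone(x))
        mu, log_sigma = out[:, 0], out[:, 1]
        return mu, torch.exp(log_sigma)

model = ToyPhotometryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clip_threshold = 1.0  # illustrative value, not the one used in the paper

# One illustrative training step on a random batch of 32x32 patches.
patches = torch.randn(2000, 1, 32, 32)
true_mag = torch.randn(2000)

optimizer.zero_grad()
mu, sigma = model(patches)
loss = gaussian_nll(mu, sigma, true_mag)
loss.backward()
# Gradient clipping: truncate gradients whose norm exceeds the threshold.
torch.nn.utils.clip_grad_norm_(model.parameters(), clip_threshold)
optimizer.step()
```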

Upon completing the training phase, we assess the performance of the Bayesian Photometry Neural Network using a dataset of 14,000 target samples. To obtain the distribution of estimated magnitudes for each star target, we employ Monte Carlo sampling with 50 discrete samples. It is worth noting that, due to the large number of parameters involved and the challenges inherent in optimizing Bayesian Neural Networks, the estimated uncertainty distribution may not closely align with the true distribution. To address this, we calibrate the predicted uncertainties with the method introduced by Chung et al. (2021). This calibration algorithm, based on isotonic regression (Kuleshov et al., 2018), adjusts the uncertainty, represented by the standard deviation, by determining an appropriate calibration coefficient. After measuring the magnitudes of all star targets, the Bayesian Photometry Neural Network provides corresponding mean values and standard deviations to characterize the uncertainty distribution. We employ Equation 21 to standardize these distributions:

\mathrm{NormalDistribution} = \frac{y_{true} - \mu}{\sigma}. \qquad (21)

Here, $\mu$ represents the mean of the predicted distribution, $y_{true}$ represents the true value of the corresponding target, and $\sigma$ denotes the standard deviation of the predicted distribution. Using this method, we obtain the distribution of all targets on a standardized scale. We then compare the standardized distribution of magnitudes for each target with a standard normal distribution with a mean of 0 and a standard deviation of 1. This comparison allows us to assess the accuracy of our Bayesian network's quantification of uncertainty. The results are depicted in Figure 9.
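A self-contained sketch of this evaluation step is shown below: 50 Monte Carlo forward passes give a per-target mean and standard deviation, which are then standardized following Equation 21. The stochastic stand-in model uses dropout kept active purely for illustration; the actual PNet uses Bayesian-Torch layers whose forward passes sample new weights, and all names and shapes here are hypothetical.

```python
import torch
import torch.nn as nn

# Stand-in for a trained Bayesian photometry network: dropout kept active at
# inference so that each forward pass is stochastic (illustrative only).
class ToyStochasticNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(),
            nn.Dropout(p=0.2), nn.Linear(64, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = ToyStochasticNet()
patches = torch.randn(128, 1, 32, 32)   # hypothetical star patches
true_mag = torch.randn(128)             # hypothetical true magnitudes

n_samples = 50  # Monte Carlo forward passes, as in the paper
model.train()   # keep dropout active so repeated passes differ
with torch.no_grad():
    draws = torch.stack([model(patches) for _ in range(n_samples)], dim=0)

mu = draws.mean(dim=0)     # per-target predicted mean magnitude
sigma = draws.std(dim=0)   # per-target predicted uncertainty (standard deviation)

# Equation 21: standardize each target with its own predicted mean and sigma.
z = (true_mag - mu) / sigma
```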

Figure 9: Panel a displays the photometry results and uncertainties obtained by the Bayesian Photometry Neural Network for the SDSS data, and panel b shows the corresponding results for the simulated data. In the "Average Calibration" plot (Tran et al., 2020), the horizontal axis represents the expected proportion, while the vertical axis represents the actual observed proportion. The "Miscalibration area" is the area between the curve and the dashed diagonal line with a slope of one; a smaller value indicates more accurate uncertainty calibration. In the "Confidence Band" plot, the horizontal axis corresponds to the target index, and the vertical axis represents the error between the mean of the target distribution computed by the Bayesian Photometry Neural Network and the true value. To align all mean values of the target distributions with zero, we shift them accordingly, as indicated by the blue line at y = 0. The orange dots illustrate the discrepancy between the true values and the means of the target distributions, while the light blue region denotes the 95% confidence interval predicted by the Bayesian Photometry Neural Network. A favorable uncertainty prediction is one in which the light blue region encompasses the orange dots.

More specifically, we utilize the standard normal distribution as the probability density function (PDF) and conduct a symmetrical search for boundaries from the center outward. This search identifies the boundaries at which the ratio of the integral of the PDF within the boundaries to its integral over the whole real line equals the value on the horizontal axis of the Average Calibration plot in Figure 9. The vertical axis represents the ratio of the number of standardized targets falling within those boundaries to the total number of targets. Since the PDF is a standard normal distribution, its integral over the whole real line is 1. In an ideal scenario where the uncertainty is accurately quantified, these two ratios coincide, producing a diagonal line with a slope of 1 in the Average Calibration plot of Figure 9. When the curve bends downwards, the predicted proportion in an interval exceeds the observed proportion, indicating an overconfident uncertainty estimate (the predicted uncertainty is too small). Conversely, when the curve bends upwards, the predicted proportion in an interval falls below the observed proportion, indicating an under-confident uncertainty estimate (the predicted uncertainty is too large).
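The average-calibration curve just described can be computed directly from the standardized residuals. The sketch below writes it out explicitly with numpy/scipy under the assumption that the curve is built from symmetric central intervals of the per-target Gaussian predictions; the synthetic data and the simple grid approximation of the miscalibration area are illustrative rather than the exact Uncertainty Toolbox implementation.

```python
import numpy as np
from scipy.stats import norm

def average_calibration(mu, sigma, y_true, n_levels=100):
    """Expected vs. observed coverage of central prediction intervals."""
    z = (y_true - mu) / sigma                     # standardized residuals (Eq. 21)
    expected = np.linspace(0.0, 1.0, n_levels)
    # Half-width (in sigma units) of the central interval containing probability p.
    half_width = norm.ppf(0.5 + expected / 2.0)
    observed = np.array([(np.abs(z) <= h).mean() for h in half_width])
    # Miscalibration area: mean gap to the ideal diagonal over a uniform grid,
    # which approximates the integral of |observed - expected| on [0, 1].
    miscal_area = np.mean(np.abs(observed - expected))
    return expected, observed, miscal_area

# Illustrative use with synthetic, deliberately overconfident predictions.
rng = np.random.default_rng(1)
y_true = rng.normal(20.0, 1.0, size=5000)
mu = y_true + rng.normal(0.0, 0.05, size=5000)
sigma = np.full(5000, 0.03)                       # too small -> curve bends downwards
exp_p, obs_p, area = average_calibration(mu, sigma, y_true)
```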

Overall, the Bayesian Photometry Neural Network is able to predict the uncertainty distribution of the magnitude estimation results. Once we have obtained the magnitude distribution predictions for all targets with this network, we can assess them using the evaluation criteria described in Section 3.1. Defining the boundary of the relative error as $\pm 3\sigma_{relative}$, we obtain the results presented in Table 2. Note that $\sigma_{relative}$ is computed by first calculating the relative error of all targets following Equation 6 and then taking the standard deviation of these relative errors; it therefore differs from the $\sigma$ that represents the uncertainty computed from the distribution of an individual target. Based on the data presented in Table 2, the outlier fraction obtained from the simulated data is lower than that from the real data, suggesting that the algorithm is more stable on simulated data. Furthermore, the $\sigma_{NMAD}$ and MAE values, assessed for both simulated and real data, show that the dispersion of the predictions is lower and the overall errors are smaller for the simulated data. The parameter $\bar{E}$ indicates that the Bayesian Photometry Neural Network expresses higher confidence in magnitude measurements for the simulated data. In general, the results obtained from simulated data are more accurate and more confident than those from real data. This discrepancy can be attributed to the noise present in real observational data. On one hand, noisy pixels interfere with the ResNet50 model's extraction of image features, and this interference intensifies when stars have low signal-to-noise ratios. On the other hand, noise also disrupts the statistical photometry results obtained by the Bayesian layer when leveraging the image information; this manifests as increased uncertainty in the output results and, consequently, lower reliability and credibility.

Table 2: Uncertainty of the photometry measurement results on the SDSS and simulated data

Metric              SDSS                      Simulation
$\eta$              0.3143%                   0.1879%
$\sigma_{NMAD}$     $6.047\times10^{-4}$      $1.573\times10^{-4}$
MAE                 $9.697\times10^{-3}$      $2.105\times10^{-3}$
$\bar{E}$           $7.004\times10^{-2}$      $1.682\times10^{-2}$
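For reference, the quantities in Table 2 can be computed along the following lines. This is a sketch under explicit assumptions, since Equation 6 and Section 3.1 are not reproduced here: the relative error is taken as (m_pred − m_true)/m_true, $\sigma_{NMAD}$ as the normalized median absolute deviation of the magnitude error, $\eta$ as the fraction of targets outside $\pm 3\sigma_{relative}$, MAE as the mean absolute magnitude error, and $\bar{E}$ as the mean predicted standard deviation.

```python
import numpy as np

def photometry_uncertainty_metrics(m_pred, m_true, pred_sigma):
    """Illustrative versions of the Table 2 quantities (assumed definitions,
    not taken verbatim from the paper)."""
    err = m_pred - m_true
    rel_err = err / m_true                                          # assumed form of Eq. 6
    sigma_relative = rel_err.std()
    eta = np.mean(np.abs(rel_err) > 3.0 * sigma_relative)           # outlier fraction
    sigma_nmad = 1.4826 * np.median(np.abs(err - np.median(err)))   # NMAD of the error
    mae = np.mean(np.abs(err))                                      # mean absolute error
    e_bar = np.mean(pred_sigma)                                     # mean predicted uncertainty
    return eta, sigma_nmad, mae, e_bar

# Illustrative use with synthetic predictions for 14,000 targets.
rng = np.random.default_rng(2)
m_true = rng.uniform(16.0, 22.0, size=14000)
m_pred = m_true + rng.normal(0.0, 0.01, size=14000)
pred_sigma = np.abs(rng.normal(0.05, 0.01, size=14000))
eta, sigma_nmad, mae, e_bar = photometry_uncertainty_metrics(m_pred, m_true, pred_sigma)
```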

5 Conclusions and future works

In this paper, we introduce the PNet, a novel approach for star detection, photometry, and estimation of photometry uncertainties. By combining the Photometry-Detection Net and the Bayesian Photometry Neural Network, the PNet offers a comprehensive solution for photometry and astrometry of point-like celestial objects. To evaluate its performance, we conduct tests using both SDSS data and simulated data. The results indicate that our algorithm performs consistently and reliably on the simulated data. When applied to real data, the presence of noise or undisclosed data processing steps may introduce certain errors; nonetheless, the overall results remain satisfactory.

There are several additional points that need to be addressed. First, it is crucial to investigate and adopt data preprocessing methods proposed by other teams for magnitude and position calibration. Given the prevalent use of CMOS cameras, which differ significantly from CCD cameras, we must also explore data pre-processing approaches suited to CMOS cameras. Furthermore, neural networks have advanced rapidly in recent years; exploring alternative methods such as neural architecture search or meta-learning could yield improved network architectures. Lastly, integrating the results obtained from the Bayesian Photometry Neural Network with the light curve classification algorithm is essential for the development of new techniques in transient discovery. These techniques will prove valuable for future sky survey projects, such as the Large Synoptic Survey Telescope (LSST) (Željko Ivezić et al., 2008), the Chinese Space Station Telescope (CSST) (Gong et al., 2019), and the SiTian Project (LIU et al., 2021).

Acknowledgments

First, we express our gratitude to the reviewer, whose valuable feedback and guidance over the course of more than two years have significantly contributed to the improvement of our method. Peng Jia would like to thank Professor Zhaohui Shang from National Astronomical Observatories, Professor Jian Ge from Shanghai Astronomical Observatory, Professor Rongyu Sun from Purple Mountain Observatory, Professor Huigen Liu from Nanjing University, and Professor Chengyuan Li and Professor Bo Ma from Sun Yat-Sen University, who provided very helpful suggestions for this paper. Furthermore, the code utilized in this article will be made available in the PaperData repository, powered by China-VO, ensuring easy access for interested researchers.

Furthermore, we express our gratitude for the generous financial support provided by the National Natural Science Foundation of China (NSFC) under grant numbers 12173027 and 12173062, as well as the Major Key Project of PCL. We also acknowledge the science research grants received from the China Manned Space Project with NO. CMS-CSST-2021-A01 and the Square Kilometre Array (SKA) Project with NO. 2020SKA0110102. Additionally, we extend our appreciation to the Civil Aerospace Technology Research Project (D050105) and the French National Research Agency (ANR) for their support in the form of the ANR APPLY grant (ANR-19-CE31-0011) coordinated by B. Neichel.

Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss4.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics — Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.

References

  • Abdar et al. (2021) Abdar, M., Pourpanah, F., Hussain, S., et al. 2021, Information Fusion, 76, 243, doi: https://doi.org/10.1016/j.inffus.2021.05.008
  • Abdurro’uf et al. (2022) Abdurro’uf, Accetta, K., Aerts, C., et al. 2022, The Astrophysical Journal Supplement Series, 259, 35, doi: 10.3847/1538-4365/ac4414
  • Agarwal et al. (2020) Agarwal, D., Aggarwal, K., Burke-Spolaor, S., Lorimer, D. R., & Garver-Daniels, N. 2020, Monthly Notices of the Royal Astronomical Society, 497, 1661
  • Ahmed et al. (2021) Ahmed, I., Ahmad, M., Rodrigues, J. J., & Jeon, G. 2021, Applied Soft Computing, 107, 107489, doi: https://doi.org/10.1016/j.asoc.2021.107489
  • Andrew (2023) Andrew, Y. 2023, arXiv preprint arXiv:2305.00002
  • Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
  • Bertin (2009) Bertin, E. 2009, Memorie della Società Astronomica Italiana, 80, 422
  • Bertin & Arnouts (1996) Bertin, E., & Arnouts, S. 1996, Astronomy and astrophysics supplement series, 117, 393
  • Blei et al. (2017) Blei, D. M., Kucukelbir, A., & McAuliffe, J. D. 2017, Journal of the American Statistical Association, 112, 859, doi: 10.1080/01621459.2017.1285773
  • Blundell et al. (2015) Blundell, C., Cornebise, J., Kavukcuoglu, K., & Wierstra, D. 2015, in Proceedings of Machine Learning Research, Vol. 37, Proceedings of the 32nd International Conference on Machine Learning, ed. F. Bach & D. Blei (Lille, France: PMLR), 1613–1622. https://proceedings.mlr.press/v37/blundell15.html
  • Boucaud et al. (2020) Boucaud, A., Huertas-Company, M., Heneka, C., et al. 2020, Monthly Notices of the Royal Astronomical Society, 491, 2481
  • Bradley et al. (2016) Bradley, L., Sipocz, B., Robitaille, T., et al. 2016, Photutils: Photometry tools, Astrophysics Source Code Library, record ascl:1609.011. http://ascl.net/1609.011
  • Bradski & Kaehler (2008) Bradski, G., & Kaehler, A. 2008, Learning OpenCV: Computer Vision with the OpenCV Library (O'Reilly Media, Inc.)
  • Cabrera-Vives et al. (2017) Cabrera-Vives, G., Reyes, I., Förster, F., Estévez, P. A., & Maureira, J.-C. 2017, The Astrophysical Journal, 836, 97
  • Casetti-Dinescu et al. (2023) Casetti-Dinescu, D. I., Girard, T. M., Baena-Gallé, R., Martone, M., & Schwendemann, K. 2023, Publications of the Astronomical Society of the Pacific, 135, 054501
  • Chandra et al. (2023) Chandra, R., Chen, R., & Simmons, J. 2023, arXiv preprint arXiv:2304.02595
  • Cheng et al. (2021) Cheng, T.-Y., Conselice, C. J., Aragón-Salamanca, A., et al. 2021, Monthly Notices of the Royal Astronomical Society, 507, 4425
  • Chung et al. (2021) Chung, Y., Char, I., Guo, H., Schneider, J., & Neiswanger, W. 2021, arXiv preprint arXiv:2109.10254
  • da Costa-Luis (2019) da Costa-Luis, C. O. 2019, Journal of Open Source Software, 4, 1277
  • Domínguez Sánchez et al. (2022) Domínguez Sánchez, H., Margalef, B., Bernardi, M., & Huertas-Company, M. 2022, Monthly Notices of the Royal Astronomical Society, 509, 4024
  • Duev et al. (2019) Duev, D. A., Mahabal, A., Masci, F. J., et al. 2019, Monthly Notices of the Royal Astronomical Society, 489, 3582
  • Farias et al. (2020) Farias, H., Ortiz, D., Damke, G., Arancibia, M. J., & Solar, M. 2020, Astronomy and Computing, 33, 100420
  • Flaugher et al. (2015) Flaugher, B., Diehl, H. T., Honscheid, K., et al. 2015, The Astronomical Journal, 150, 150, doi: 10.1088/0004-6256/150/5/150
  • Gal et al. (2016) Gal, Y., et al. 2016
  • Ge et al. (2021) Ge, Z., Liu, S., Wang, F., Li, Z., & Sun, J. 2021, arXiv preprint arXiv:2107.08430
  • Goan & Fookes (2020) Goan, E., & Fookes, C. 2020, Case Studies in Applied Bayesian Data Science: CIRM Jean-Morlet Chair, Fall 2018, 45
  • Gómez et al. (2020) Gómez, C., Neira, M., Hernández Hoyos, M., Arbeláez, P., & Forero-Romero, J. E. 2020, Monthly Notices of the Royal Astronomical Society, 499, 3130
  • Gong et al. (2019) Gong, Y., Liu, X., Cao, Y., et al. 2019, The Astrophysical Journal, 883, 203, doi: 10.3847/1538-4357/ab391e
  • González et al. (2018) González, R. E., Munoz, R. P., & Hernández, C. A. 2018, Astronomy and computing, 25, 103
  • Gunn et al. (2006) Gunn, J. E., Siegmund, W. A., Mannery, E. J., et al. 2006, The Astronomical Journal, 131, 2332, doi: 10.1086/500975
  • Guo et al. (2021) Guo, H., Yang, X., Wang, N., & Gao, X. 2021, Pattern Recognition, 112, 107787, doi: 10.1016/j.patcog.2020.107787
  • Hansen et al. (2022) Hansen, D. L., Mendoza, I., Liu, R., et al. 2022, in Machine Learning for Astrophysics, 27, doi: 10.48550/arXiv.2207.05642
  • Harris et al. (2020) Harris, C. R., Millman, K. J., Van Der Walt, S. J., et al. 2020, Nature, 585, 357
  • Hausen & Robertson (2020) Hausen, R., & Robertson, B. E. 2020, The Astrophysical Journal Supplement Series, 248, 20
  • He et al. (2016) He, K., Zhang, X., Ren, S., & Sun, J. 2016, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • Hora (1996) Hora, S. C. 1996, Reliability Engineering & System Safety, 54, 217, doi: https://doi.org/10.1016/S0951-8320(96)00077-4
  • Hortúa et al. (2020) Hortúa, H. J., Volpi, R., Marinelli, D., & Malagò, L. 2020, Phys. Rev. D, 102, 103509, doi: 10.1103/PhysRevD.102.103509
  • Hu et al. (2022) Hu, L., Wang, L., Chen, X., & Yang, J. 2022, The Astrophysical Journal, 936, 157
  • Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
  • Jia et al. (2020) Jia, P., Liu, Q., & Sun, Y. 2020, The Astronomical Journal, 159, 212
  • Jia et al. (2023a) Jia, P., Liu, W., Liu, Y., & Pan, H. 2023a, The Astrophysical Journal Supplement Series, 264, 43
  • Jia et al. (2022) Jia, P., Sun, R., Li, N., et al. 2022, The Astronomical Journal, 165, 26
  • Jia et al. (2019) Jia, P., Zhao, Y., Xue, G., & Cai, D. 2019, The Astronomical Journal, 157, 250
  • Jia et al. (2023b) Jia, P., Zheng, Y., Wang, M., & Yang, Z. 2023b, Astronomy and Computing, 100687
  • Kessler et al. (2015) Kessler, R., Marriner, J., Childress, M., et al. 2015, The Astronomical Journal, 150, 172
  • Khramtsov et al. (2019) Khramtsov, V., Dobrycheva, D., Vasylenko, M. Y., & Akhmetov, V. 2019, Odessa Astronomical Publications, 32, 21
  • Kingma & Ba (2014) Kingma, D. P., & Ba, J. 2014, CoRR, abs/1412.6980. https://arxiv.org/abs/1412.6980
  • Kingma & Welling (2013) Kingma, D. P., & Welling, M. 2013, arXiv e-prints, arXiv:1312.6114, doi: 10.48550/arXiv.1312.6114
  • Kiureghian & Ditlevsen (2009) Kiureghian, A. D., & Ditlevsen, O. 2009, Structural Safety, 31, 105, doi: https://doi.org/10.1016/j.strusafe.2008.06.020
  • Krishnan et al. (2022) Krishnan, R., Esposito, P., & Subedar, M. 2022, Bayesian-Torch: Bayesian neural network layers for uncertainty estimation, v0.2.0, Zenodo, doi: 10.5281/zenodo.5908307
  • Kuleshov et al. (2018) Kuleshov, V., Fenner, N., & Ermon, S. 2018, in Proceedings of Machine Learning Research, Vol. 80, Proceedings of the 35th International Conference on Machine Learning, ed. J. Dy & A. Krause (PMLR), 2796–2804. https://proceedings.mlr.press/v80/kuleshov18a.html
  • Lang et al. (2010) Lang, D., Hogg, D. W., Mierle, K., Blanton, M., & Roweis, S. 2010, The astronomical journal, 139, 1782
  • Lin et al. (2017) Lin, T.-Y., Goyal, P., Girshick, R., He, K., & Dollar, P. 2017, in Proceedings of the IEEE International Conference on Computer Vision (ICCV)
  • LIU et al. (2021) LIU, J., SORIA, R., WU, X.-F., WU, H., & SHANG, Z. 2021, Anais da Academia Brasileira de Ciências, 93, e20200628, doi: 10.1590/0001-3765202120200628
  • Liu et al. (2021a) Liu, R., McAuliffe, J. D., & Regier, J. 2021a, arXiv preprint arXiv:2102.02409
  • Liu et al. (2021b) Liu, Z., Lin, Y., Cao, Y., et al. 2021b, in Proceedings of the IEEE/CVF international conference on computer vision, 10012–10022
  • Lundh & Contributors (1995–2021) Lundh, F., & Contributors. 1995–2021, Pillow Documentation, https://pillow.readthedocs.io/en/stable/
  • Lupton et al. (2005) Lupton, R. H., Ivezic, Z., Gunn, J. E., et al. 2005, SDSS image processing II: The photo pipelines, Technical Report, Princeton Univ. Preprint. Available at https://www. astro …
  • Makhlouf et al. (2022) Makhlouf, K., Turpin, D., Corre, D., et al. 2022, Astronomy & Astrophysics, 664, A81
  • McKinney et al. (2011) McKinney, W., et al. 2011, Python for high performance and scientific computing, 14, 1
  • Mong et al. (2020) Mong, Y.-L., Ackley, K., Galloway, D., et al. 2020, Monthly Notices of the Royal Astronomical Society, 499, 6009
  • Newell et al. (2016) Newell, A., Yang, K., & Deng, J. 2016, in Computer Vision – ECCV 2016, ed. B. Leibe, J. Matas, N. Sebe, & M. Welling (Cham: Springer International Publishing), 483–499
  • Padmanabhan et al. (2008) Padmanabhan, N., Schlegel, D. J., Finkbeiner, D. P., et al. 2008, The Astrophysical Journal, 674, 1217, doi: 10.1086/524677
  • Paszke et al. (2019) Paszke, A., Gross, S., Massa, F., et al. 2019, Advances in neural information processing systems, 32
  • Pier et al. (2003) Pier, J. R., Munn, J. A., Hindsley, R. B., et al. 2003, The Astronomical Journal, 125, 1559, doi: 10.1086/346138
  • Ren et al. (2015) Ren, S., He, K., Girshick, R., & Sun, J. 2015, Advances in neural information processing systems, 28
  • Robitaille et al. (2013) Robitaille, T. P., Tollerud, E. J., Greenfield, P., et al. 2013, Astronomy & Astrophysics, 558, A33
  • Rousseeuw & Croux (1993) Rousseeuw, P. J., & Croux, C. 1993, Journal of the American Statistical Association, 88, 1273, doi: 10.1080/01621459.1993.10476408
  • Rumelhart et al. (1986) Rumelhart, D. E., Hinton, G. E., & Williams, R. J. 1986, Nature, 323, 533, doi: 10.1038/323533a0
  • Sánchez et al. (2019) Sánchez, B., Lares, M., Beroiz, M., et al. 2019, Astronomy and Computing, 28, 100284
  • Smith et al. (2002) Smith, J. A., Tucker, D. L., Kent, S., et al. 2002, The Astronomical Journal, 123, 2121, doi: 10.1086/339311
  • Springenberg et al. (2014) Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. 2014, arXiv e-prints, arXiv:1412.6806, doi: 10.48550/arXiv.1412.6806
  • Stoughton et al. (2002) Stoughton, C., Lupton, R. H., Bernardi, M., et al. 2002, The Astronomical Journal, 123, 485, doi: 10.1086/324741
  • Tran et al. (2020) Tran, K., Neiswanger, W., Yoon, J., et al. 2020, Machine Learning: Science and Technology, 1, 025006
  • Turpin et al. (2020) Turpin, D., Ganet, M., Antier, S., et al. 2020, Monthly Notices of the Royal Astronomical Society, 497, 2641
  • Turski et al. (2023) Turski, C., Bilicki, M., Dálya, G., Gray, R., & Ghosh, A. 2023, arXiv preprint arXiv:2302.12037
  • Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature methods, 17, 261
  • Wen et al. (2018) Wen, Y., Vicol, P., Ba, J., Tran, D., & Grosse, R. 2018, arXiv e-prints, arXiv:1803.04386, doi: 10.48550/arXiv.1803.04386
  • Wright et al. (2015) Wright, D., Smartt, S., Smith, K., et al. 2015, Monthly Notices of the Royal Astronomical Society, 449, 451
  • York et al. (2000) York, D. G., Adelman, J., John E. Anderson, J., et al. 2000, The Astronomical Journal, 120, 1579, doi: 10.1086/301513
  • Yu et al. (2022) Yu, P.-p., Sun, R.-y., Yu, S.-x., et al. 2022, Advances in Space Research, 70, 3311
  • Zacharias et al. (2000) Zacharias, N., Urban, S. E., Zacharias, M. I., et al. 2000, The Astronomical Journal, 120, 2131, doi: 10.1086/301563
  • Zackay et al. (2016) Zackay, B., Ofek, E. O., & Gal-Yam, A. 2016, The Astrophysical Journal, 830, 27
  • Zhou et al. (2019) Zhou, X., Wang, D., & Krähenbühl, P. 2019, arXiv e-prints, arXiv:1904.07850, doi: 10.48550/arXiv.1904.07850
  • Željko Ivezić et al. (2008) Željko Ivezić, Kahn, S. M., Tyson, J. A., et al. 2008, doi: 10.3847/1538-4357/ab042c