
Earthquake Location and Magnitude Estimation
with Graph Neural Networks

Abstract

We solve the traditional problems of earthquake location and magnitude estimation through a supervised learning approach, where we train a Graph Neural Network to predict estimates directly from input pick data, and each input can correspond to a distinct seismic network with a variable number of stations and station positions. We train the model using synthetic simulations from assumed travel-time and amplitude-distance attenuation models. The architecture uses one graph to represent the station set and another to represent the model space. The input includes theoretical predictions of the data given model parameters, and the adjacency matrices of the graphs are defined to link spatially local elements. As we show, graph convolutions on this combined representation are highly effective at inference, data fusion, and outlier suppression. We compare our results with traditional methods and observe favorable performance.

Index Terms:
Seismic Networks, Graphs, Earthquake Characterization

I Introduction

Earthquake location and magnitude estimation are two longstanding challenges in seismology. These parameters are estimated for millions of earthquakes every year using data from many regional monitoring networks around the world. Solutions are usually framed in well-understood signal processing and Bayesian inverse theory contexts [16]. By their nature, ‘traditional’ inverse methods have certain subjective biases, such as assuming fixed parameter likelihood functions (e.g., Gaussian kernels), stacking estimates over stations, or assuming uncorrelated noise between stations (i.e., neglecting epistemic uncertainties due to physical model errors and local correlated sources of noise). These weaknesses induce errors and biases in the resulting catalogs of point-estimate source parameters, which are generally weakly correlated in space and time [8].

Copyright disclaimer: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. Conference proceedings limit manuscript length to 4 pages. Correspondence address: [email protected].


When applied to this same type of traditional inverse problem, Graph Neural Networks (GNN), and graph signal processing in general [13], offer a powerful new approach for arriving at stable global estimates of model parameters (i.e., location and magnitude) by processing complex, noisy data on irregularly distributed station networks. A graph is defined as a set of nodes together with an adjacency matrix that specifies which pairs of nodes are linked by a (directed) edge. When the nodes represent data points, the adjacency matrix gives access to an object that behaves much like a covariance matrix (with off-diagonal terms included) encountered in traditional signal processing and Bayesian inverse theory. Since the adjacency matrix links subsets of nearby nodes, and algorithms ‘collect and aggregate’ information between local neighborhoods of the graph, these tools exploit local coherency and redundancy between nodes to, for example: denoise and stabilize measurement estimates, hierarchically cluster and weight data to obtain global estimates of model parameters, or suppress the effect of data outliers [4].

We use Graph Neural Networks to solve the traditional point-estimation problems of location and magnitude from arrival time and amplitude data. This approach combines the strengths of graphs with the capability of deep learning to predict the target quantities directly from input data, rather than relying on globally optimizing an explicitly posed objective function [2, 11, 18, 12]. Training a ‘surrogate model’ to predict solutions from data has many advantages: (i) estimates are available immediately, without the need to apply an optimizer; (ii) estimates can be more robust in the presence of high noise and outlier data points; and (iii) the model can be trained to account for variable seismic network geometry.

Finally, as a major component of our architecture, we introduce an important type of graph: the Cartesian product graph of Stations × Model Space. This graph represents data measured over all stations and over all points of model space, where the model space is the Source Region or the Magnitude Axis. It effectively represents information about all aspects of the problem (observed data, and predicted data given model parameters). By giving these graphs structure, through local edge schemes that connect nearby points in model space and nearby stations, we allow local interaction between both model space points and station data during iterative graph convolutions. As we show, the pairwise structure Stations × Model Space is the natural object on which observed data can be parsed, and thus convolving directly on this structure should improve inference.

II Methods

II-A Graph Message Passing

The basic computational unit of a standard GNN is the generic message passing layer. The message passing layer captures the powerful notion of graph convolution (e.g., [6, 1]), wherein the latent states of nodes on graphs are updated based on (learned) mappings from the latent states of neighboring nodes. The operator is generically defined as $\mathcal{G}(\boldsymbol{h}_{\mathcal{A}},\mathcal{E}_{\mathcal{A}^{\prime}\leftarrow\mathcal{A}}):\mathcal{A}\longrightarrow\mathcal{A}^{\prime}$, and is given as

$$\mathcal{G}(\boldsymbol{h}_{\mathcal{A}},\mathcal{E}_{\mathcal{A}^{\prime}\leftarrow\mathcal{A}}) = \phi^{agg}\bigl(\boldsymbol{h}_{i},\ \text{POOL}\{\phi^{msg}(\boldsymbol{h}_{j},\boldsymbol{e}_{ij},\boldsymbol{z}) \mid j\in\mathcal{N}(i)\}\bigr) \quad (1)$$

which maps a latent signal $\boldsymbol{h}_{\mathcal{A}}\in\mathbb{R}^{|\mathcal{A}|\times K}$ measured on graph $\mathcal{A}$ (of feature dimension $K$) to a latent signal $\boldsymbol{h}_{\mathcal{A}^{\prime}}\in\mathbb{R}^{|\mathcal{A}^{\prime}|\times K^{\prime}}$ measured on graph $\mathcal{A}^{\prime}$ (of feature dimension $K^{\prime}$), with directed edges $\mathcal{E}_{\mathcal{A}^{\prime}\leftarrow\mathcal{A}}$ pointing nodes from $\mathcal{A}$ into $\mathcal{A}^{\prime}$. When $\mathcal{A}^{\prime}\neq\mathcal{A}$, this is a bipartite mapping between two distinct graphs; when $\mathcal{A}^{\prime}=\mathcal{A}$, it is regular graph convolution within a single graph (with $\mathcal{E}_{\mathcal{A}}\equiv\mathcal{E}_{\mathcal{A}^{\prime}\leftarrow\mathcal{A}}$); either case is easily represented by this operator.

In the forward call (1), on a per-node basis $(i)$, the latent features of each neighboring node $j\in\mathcal{N}(i)$ are collected and transformed by the learnable Fully Connected Network (FCN) layer $\phi^{msg}$, prior to being pooled over the entire neighborhood $\mathcal{N}(i)$, where the pooling is over the node axis ($\text{POOL}:\mathbb{R}^{|\mathcal{N}(i)|\times K}\longrightarrow\mathbb{R}^{K}$) and is any global pooling operator such as sum-, mean-, or max-pool. The pooled aggregate message is then concatenated with the self state of node $\boldsymbol{h}_{i}$ and transformed by an additional learnable FCN, $\phi^{agg}$, resulting in the updated state on node $i$. Inside the message passing layer $\phi^{msg}$, the optional term $\boldsymbol{e}_{ij}$ represents edge data between two nodes $(i,j)$, such as Cartesian offsets of node coordinates (if nodes represent spatially located points), and $\boldsymbol{z}=\text{POOL}\{\phi^{glb}(\boldsymbol{h}_{j})\}$ is an optional ‘global’ summary feature that can be extracted from the entire graph at each step using an additional learnable FCN, $\phi^{glb}$. By combining these facets, the graph convolution cell learns local feature transformations that work well for a wide range of nodes, while also exploiting local information transfer through the adjacency matrix.
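As a concrete illustration, the following sketch shows how a message passing cell of the form (1) can be written with the MessagePassing base class of PyTorch Geometric (the framework used for our implementation [5]). The layer names, feature sizes, and the omission of the optional global state z are simplifications for illustration, not the exact implementation.

import torch
import torch.nn as nn
from torch_geometric.nn import MessagePassing

class GraphConvCell(MessagePassing):
    """Minimal sketch of the message passing operator in Eq. (1), for a single
    graph and without the optional global summary feature z."""
    def __init__(self, in_dim, edge_dim, out_dim, hidden=30):
        super().__init__(aggr='max')  # POOL = max-pool over the neighborhood
        # phi^msg: transforms neighbor state + edge data before pooling
        self.phi_msg = nn.Sequential(nn.Linear(in_dim + edge_dim, hidden), nn.PReLU())
        # phi^agg: combines the self state with the pooled aggregate message
        self.phi_agg = nn.Sequential(nn.Linear(in_dim + hidden, out_dim), nn.PReLU())

    def forward(self, x, edge_index, edge_attr):
        # x: [N, in_dim] node states; edge_index: [2, E]; edge_attr: [E, edge_dim]
        return self.propagate(edge_index, x=x, edge_attr=edge_attr)

    def message(self, x_j, edge_attr):
        # phi^msg(h_j, e_ij) for each edge (j -> i)
        return self.phi_msg(torch.cat([x_j, edge_attr], dim=-1))

    def update(self, aggr_out, x):
        # phi^agg(h_i, POOL{...}) -> updated state on node i
        return self.phi_agg(torch.cat([x, aggr_out], dim=-1))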

II-B Adjacency Matrices

In our method there is one component (or graph) that specifies the station set, $\mathcal{S}$, and another that represents the model space, $\mathcal{H}$, where we consider either the source region $\mathcal{H}=\mathcal{X}$ or the magnitude axis $\mathcal{H}=\mathcal{M}$. For either model space, we simply need a (regular or irregular) grid that spans the region of interest. For $\mathcal{X}$, we take a collection of ~100’s of sparsely distributed points covering the source region of interest, and for the magnitude axis we take a fine spacing of nodes in magnitude (e.g., [-3, 7] M with 0.1 or 0.25 M increments).

Rather than leave the station set and model spaces as unordered sets, we give these sets structure by interpreting them as graphs. Specifically, $(\mathcal{S},\mathcal{E}_{\mathcal{S}})$ is the station graph, $(\mathcal{X},\mathcal{E}_{\mathcal{X}})$ is the source graph, and $(\mathcal{M},\mathcal{E}_{\mathcal{M}})$ is the magnitude graph, where we define edge sets, $\mathcal{E}_{\mathcal{S}},\mathcal{E}_{\mathcal{X}},\mathcal{E}_{\mathcal{M}}$, or equivalently adjacency matrices, that specify which elements in the respective sets are linked. While many edge construction schemes are possible [3], GNN’s often work well for a wide range of adjacency matrix choices [20], because the GNN can learn how to use whichever distribution of graphs it is trained over. For point cloud data, such as represented in $\mathcal{S}$ and $\mathcal{X}$, a natural choice of edge construction scheme is to use $K$-nearest-neighbor ($K$-NN) graphs or $\epsilon$-distance graphs [3], since these favor connecting nearby elements to one another. For $\mathcal{M}$, while it is simply a linear grid of nodes, $K$-NN and $\epsilon$-distance graphs effectively represent 1D convolution kernels over the $\mathcal{M}$ axis, and are hence also a natural choice.

For our purposes we use $K$-NN instead of $\epsilon$-distance graphs for all of $\mathcal{S}$, $\mathcal{X}$, $\mathcal{M}$, since these limit the incoming degree of all nodes to $K$, whereas $\epsilon$-distance graphs can have substantially different degrees across different nodes of the graph. While we did not optimize these choices fully, modest trial and error led to

$$\mathcal{E}_{\mathcal{S}}=\{(i,j)\mid\boldsymbol{s}_{j}\in K\text{-NN}_{8}(\boldsymbol{s}_{i})\} \quad (2a)$$
$$\mathcal{E}_{\mathcal{X}}=\{(i,j)\mid\boldsymbol{x}_{j}\in K\text{-NN}_{15}(\boldsymbol{x}_{i})\} \quad (2b)$$
$$\mathcal{E}_{\mathcal{M}}=\{(i,j)\mid\boldsymbol{m}_{j}\in K\text{-NN}_{10}(\boldsymbol{m}_{i})\} \quad (2c)$$

where $K\text{-NN}_{K}(\cdot)$ represents the $K$-nearest neighbors of the input node. Note that these ‘edge sets’ specify which entries of all possible $N\times N$ pairs for a graph of $N$ nodes are linked, and that the relationship is generally not symmetric (i.e., $(i,j)\in\mathcal{E}$ does not necessarily imply $(j,i)\in\mathcal{E}$). Edge sets are hence equivalent to adjacency matrices, containing the same data in a more compact form.
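A minimal sketch of constructing such directed $K$-NN edge sets, in the (source, target) convention used by most graph libraries, is given below; a brute-force distance computation is used for clarity, and the coordinate array names are hypothetical.

import numpy as np

def knn_edge_set(coords, k):
    """Directed K-NN edge set as in Eqs. (2a)-(2c): for each node i, add edges
    from its k nearest neighbors j. Returns a [2, N*k] array with rows
    (source j, target i), i.e., messages flow from neighbors into each node."""
    coords = np.asarray(coords, dtype=float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)             # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]     # k nearest neighbors per node
    targets = np.repeat(np.arange(len(coords)), k)
    sources = nbrs.ravel()
    return np.stack([sources, targets])

# Example usage (station_xyz and source_xyz are hypothetical coordinate arrays):
# edges_S = knn_edge_set(station_xyz, k=8)    # Eq. (2a)
# edges_X = knn_edge_set(source_xyz, k=15)    # Eq. (2b)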

We still need to define one essential graph type: the Cartesian product graph $\mathcal{S}\times\mathcal{H}$, which has as nodes all pairs $(\boldsymbol{s},\boldsymbol{h})$ of station and model space coordinates. As discussed earlier, and as highlighted in the next section, it is most naturally all pairs $(\boldsymbol{s},\boldsymbol{h})\in\mathcal{S}\times\mathcal{H}$ of stations and model space coordinates on which we parse the data. The graphs already defined can be used directly to construct the Cartesian product graph with two edge types, given by

$$\mathcal{E}_{\mathcal{H}\leftarrow\mathcal{H},\mathcal{S}}=\bigl\{(i,j)\mid\boldsymbol{h}_{j}\in\mathcal{N}(\boldsymbol{h}_{i})\wedge(\boldsymbol{s}_{j}=\boldsymbol{s}_{i})\bigr\} \quad (3a)$$
$$\mathcal{E}_{\mathcal{S}\leftarrow\mathcal{S},\mathcal{H}}=\bigl\{(i,j)\mid\boldsymbol{s}_{j}\in\mathcal{N}(\boldsymbol{s}_{i})\wedge(\boldsymbol{h}_{j}=\boldsymbol{h}_{i})\bigr\}. \quad (3b)$$

As can be seen in (3), these edge types capture two different types of relationships between nodes on the Cartesian product: connecting either (i) a fixed station and two neighboring source nodes (or magnitude nodes), or (ii) a fixed source node (or fixed magnitude node) and two neighboring station nodes. Intuitively, the alternative relationships offer complementary insights into the local and global features between different nodes on the graph. While the cardinality of $\mathcal{S}\times\mathcal{H}$ is high (e.g., 250 stations × 200 spatial nodes = 50,000), the incoming degree of each node (for either edge type) is limited by the choices of $K$ for the component graphs $\mathcal{S},\mathcal{X},\mathcal{M}$, and hence these graphs are highly sparse.
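The two edge types in (3) can be assembled directly from the component edge sets; the sketch below assumes product-graph nodes are flattened as idx = s·n_H + h, which is an arbitrary but convenient indexing convention of ours.

import numpy as np

def product_graph_edges(edges_S, edges_H, n_S, n_H):
    """Build the two edge types of Eq. (3) on the Cartesian product S x H.
    edges_S, edges_H: [2, E] arrays of (source, target) component edges."""
    s = np.arange(n_S)
    h = np.arange(n_H)

    # Edge type (3a): same station, neighboring model-space nodes.
    src_h, dst_h = edges_H                      # h_j -> h_i
    e_HH = np.stack([
        (s[:, None] * n_H + src_h[None, :]).ravel(),
        (s[:, None] * n_H + dst_h[None, :]).ravel(),
    ])

    # Edge type (3b): same model-space node, neighboring stations.
    src_s, dst_s = edges_S                      # s_j -> s_i
    e_SS = np.stack([
        (src_s[:, None] * n_H + h[None, :]).ravel(),
        (dst_s[:, None] * n_H + h[None, :]).ravel(),
    ])
    return e_HH, e_SS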

II-C Input Features

The input features we pass to the GNN are defined for arbitrary input pick data sets, where for location the input includes pick times (with assigned phase types), and for magnitude it includes amplitude measurements (with assigned phase types). Each input can use any given station set $\mathcal{S}$, and individual stations in this set can still have missing or false picks. For the source location problem, our input tensor is defined analogously to the ‘pre-stack’ back-projection metric (e.g., [14]), which, for each spatial coordinate $\boldsymbol{x}\in\mathcal{X}$, measures how close the nearest arrival, $\tau_{i}^{k}\in\mathcal{D}_{i}$, of phase type $k$ on station $\boldsymbol{s}_{i}\in\mathcal{S}$, is to the theoretical arrival time from that source coordinate. Hence, we compute

$$\boldsymbol{h}_{k}^{\mathcal{X}}(\boldsymbol{s}_{i},\boldsymbol{x})=\exp\Biggl(-\frac{\bigl(t_{0}+T_{k}(\boldsymbol{s}_{i},\boldsymbol{x})-\tau_{i}^{k}\bigr)^{2}}{2\sigma_{t}^{2}}\Biggr) \quad (4)$$

where $T_{k}(\boldsymbol{s}_{i},\boldsymbol{x})$ is the theoretical travel time calculator for phase type $k$, and $\sigma_{t}$ is a fixed kernel width, chosen wide enough to capture a range of misfit times appropriate to the scale of the application (e.g., $\sigma_{t}$ = 5 s). Travel times are computed with the Fast Marching Method [15] using the regional 3D velocity model of [17].

For the magnitude input, a similar input scheme is defined. We compute

$$\boldsymbol{h}_{k}^{\mathcal{M}}(\boldsymbol{s}_{i},\boldsymbol{m})=\exp\Biggl(-\frac{\bigl(A_{k}(\boldsymbol{s}_{i},\boldsymbol{m},\boldsymbol{x})-\log_{10}(a_{i}^{k})\bigr)^{2}}{2\sigma_{a}^{2}}\Biggr) \quad (5)$$

where $A_{k}(\boldsymbol{s}_{i},\boldsymbol{m},\boldsymbol{x})$ predicts the theoretical $\log_{10}$ amplitude on station $\boldsymbol{s}_{i}$, from source $\boldsymbol{x}\in\mathcal{X}$ with magnitude $\boldsymbol{m}\in\mathcal{M}$, for phase type $k$. Hence, this feature measures, for a given magnitude, how close the observed log-amplitude is to the theoretical log-amplitude. The Gaussian kernel width, $\sigma_{a}$, is application dependent, but is chosen to be large enough to capture a range of misfits in the feature space (e.g., $\sigma_{a}$ = 0.5). For our purposes we use a locally calibrated linear magnitude scale [7], fit to predict magnitudes consistent with the USGS in northern California, using maximum amplitude measurements around the times of P- and S-waves from the EHZ component of NC seismic network stations.
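Once the theoretical travel times and log-amplitudes are tabulated, the features (4) and (5) are simple Gaussian kernels of the misfits; a minimal sketch for a single phase type, with hypothetical array layouts, is given below.

import numpy as np

def location_features(pick_times, T, t0, sigma_t=5.0):
    """Back-projection-style feature of Eq. (4). pick_times[i] holds the picks
    at station i (possibly empty); T is an [n_sta, n_src] array of theoretical
    travel times T_k(s_i, x); t0 is the trial origin time."""
    n_sta, n_src = T.shape
    h = np.zeros((n_sta, n_src))
    for i in range(n_sta):
        picks = np.asarray(pick_times[i], dtype=float)
        if picks.size == 0:
            continue                                   # stations with no picks stay zero
        pred = t0 + T[i]                               # predicted arrival times, [n_src]
        resid = np.abs(pred[:, None] - picks[None, :]).min(axis=1)   # nearest pick misfit
        h[i] = np.exp(-resid ** 2 / (2.0 * sigma_t ** 2))
    return h

def magnitude_features(log_amp_obs, log_amp_pred, sigma_a=0.5):
    """Amplitude feature of Eq. (5): log_amp_pred is an [n_sta, n_mag] array of
    theoretical log10 amplitudes A_k(s_i, m, x); log_amp_obs holds the observed
    log10(a_i^k) per station."""
    resid = log_amp_pred - log_amp_obs[:, None]
    return np.exp(-resid ** 2 / (2.0 * sigma_a ** 2))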

II-D Architecture

The GNN architecture we propose is designed to be a simple, effective way to map the very high-dimensional input tensors $\boldsymbol{h}_{k}^{\mathcal{X}}(\boldsymbol{s}_{i},\boldsymbol{x})$ and $\boldsymbol{h}_{k}^{\mathcal{M}}(\boldsymbol{s}_{i},\boldsymbol{m})$, which are feature vectors measured on all pairs of station-model space points, to explicit predictions of source likelihood $p(\boldsymbol{x})\subset\Omega_{\mathcal{X}}$ and magnitude likelihood $p(\boldsymbol{m})\subset\Omega_{\mathcal{M}}$ at any query points within the full continuous Euclidean model space, $\Omega_{\mathcal{H}}$. Each input can also use a different number of stations and a different station graph.

To achieve this overall mapping, we first (i) apply repeated graph convolutions on the Cartesian product $\mathcal{S}\times\mathcal{H}$, then (ii) embed into the model space by stacking over stations (in the latent space, $\mathcal{S}\times\mathcal{H}\rightarrow\mathcal{H}$). Then, (iii) we repeat graph convolutions on $\mathcal{H}$, and finally (iv) we make predictions of likelihood at arbitrary coordinates $p(\boldsymbol{h}_{q})\subset\Omega_{\mathcal{H}}$ by using local masked attention (i.e., effectively graph-based interpolation [19]) of the queries $\boldsymbol{h}_{q}$ with respect to the fixed nodes in $\mathcal{H}$. This design is straightforward, and in a natural, physically motivated way transforms the raw data measured over arbitrary station networks into predictions over arbitrary model space domains. The architecture is given in more detail in Algorithm 1.

Algorithm 1 The Architecture of the GNN
1: procedure Input
2:     Create graphs: $\mathcal{S}$, $\mathcal{H}$, and $\mathcal{S}\times\mathcal{H}$.
3:     Measure features: $\boldsymbol{h}^{\mathcal{H}}(\boldsymbol{s}_{i},\boldsymbol{h})$.
4:     Choose queries: $\{\boldsymbol{h}_{q}\in\Omega_{\mathcal{H}}\}$.
5: procedure Forward Pass
6:     $\boldsymbol{h}_{\mathcal{S}\times\mathcal{H}}^{(l+1)}=\phi\Bigl(\mathcal{G}_{1}\bigl(\boldsymbol{h}_{\mathcal{S}\times\mathcal{H}}^{(l)},\mathcal{E}_{\mathcal{H}\leftarrow\mathcal{H},\mathcal{S}}\bigr),\ \mathcal{G}_{2}\bigl(\boldsymbol{h}_{\mathcal{S}\times\mathcal{H}}^{(l)},\mathcal{E}_{\mathcal{S}\leftarrow\mathcal{S},\mathcal{H}}\bigr)\Bigr)$  [Repeat 3]
7:     $\boldsymbol{h}_{\mathcal{H}}^{(l+1)}=\mathcal{G}\bigl(\boldsymbol{h}_{\mathcal{S}\times\mathcal{H}}^{(l)},\mathcal{E}_{\mathcal{H}\leftarrow\mathcal{S}\times\mathcal{H}}\bigr)$  [Apply 1]
8:     $\boldsymbol{h}_{\mathcal{H}}^{(l+1)}=\mathcal{G}\bigl(\boldsymbol{h}_{\mathcal{H}}^{(l)},\mathcal{E}_{\mathcal{H}\leftarrow\mathcal{H}}\bigr)$  [Repeat 3]
9: procedure Prediction
10:    $\boldsymbol{h}_{q}^{pred}=\mathcal{G}\bigl(\boldsymbol{h}_{\mathcal{H}}^{(l)},\mathcal{E}_{\boldsymbol{h}_{q}\leftarrow\mathcal{H}}\bigr)$  [Apply 1]

Inside each $\mathcal{G}$, the learnable FCN’s use feature dimensions between 15 and 30, and PReLU activations. The graph convolutions in steps [7, 8, 10] include edge data, $\boldsymbol{e}$, of Cartesian offsets between nodes, and in step [8] a global state $\boldsymbol{z}\in\mathbb{R}^{5}$ is also included. The edge set $\mathcal{E}_{\mathcal{H}\leftarrow\mathcal{S}\times\mathcal{H}}$ points nodes in the Cartesian product $\mathcal{S}\times\mathcal{H}$ into $\mathcal{H}$ by linking all stations for each fixed $\boldsymbol{h}\in\mathcal{H}$. $\mathcal{E}_{\boldsymbol{h}_{q}\leftarrow\mathcal{H}}$ is the $K$-NN graph which, for each query $\boldsymbol{h}_{q}$, has edges linked to the 10 nearest nodes in $\mathcal{H}$. The total number of free parameters of the GNN is ~28,000.
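To make the data flow of Algorithm 1 concrete, the sketch below implements steps (i)-(iv) with dense, row-normalized adjacency matrices and plain FCNs. It omits the edge attributes, the global state, and the masked-attention query readout (predictions are made only at the fixed $\mathcal{H}$ nodes), so it illustrates the structure rather than reproducing the actual sparse implementation.

import torch
import torch.nn as nn

class GNNSketch(nn.Module):
    """Schematic Algorithm 1. A_S [n_S, n_S] and A_H [n_H, n_H] are
    row-normalized dense adjacency matrices of the component graphs."""
    def __init__(self, in_dim=2, dim=30, out_dim=1):
        super().__init__()
        self.prod = nn.ModuleList([
            nn.Sequential(nn.Linear((in_dim if i == 0 else dim) * 2, dim), nn.PReLU())
            for i in range(3)])
        self.pool = nn.Sequential(nn.Linear(dim, dim), nn.PReLU())
        self.model_convs = nn.ModuleList([
            nn.Sequential(nn.Linear(dim * 2, dim), nn.PReLU()) for _ in range(3)])
        self.out = nn.Linear(dim, out_dim)

    def forward(self, x, A_S, A_H):
        # x: [n_S, n_H, K] input features on the Cartesian product S x H
        for phi in self.prod:
            msg_H = torch.einsum('hg,sgk->shk', A_H, x)   # aggregate along edge type (3a)
            msg_S = torch.einsum('st,thk->shk', A_S, x)   # aggregate along edge type (3b)
            x = phi(torch.cat([msg_H, msg_S], dim=-1))    # (i) product-graph convolution
        h = self.pool(x.max(dim=0).values)                # (ii) S x H -> H, pool over stations
        for phi in self.model_convs:
            h = phi(torch.cat([h, A_H @ h], dim=-1))      # (iii) convolutions on H
        return torch.sigmoid(self.out(h))                 # (iv) likelihood at the H nodes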

II-E Training Details

To train the models, we generate a diverse set of synthetic data for the assumed physical travel time and local magnitude scale models. In either case, for a batch of 150 samples, for each sample we (i) sample an arbitrary subset of 50-450 stations of the NC network, (ii) pick a random source coordinate uniformly over the domain (and a random magnitude uniformly over the axis), (iii) compute theoretical arrival times and amplitude measurements, (iv) remove arrivals with source-receiver distances greater than $d+\epsilon$, with $d\sim$ Uniform[10, 1000] km per event and $\epsilon\sim\mathcal{N}$(0, 30 km) per event and per station, and (v) add per-pick Laplacian noise with standard deviation 1-5% of the quantity for travel time and 25-250% of the quantity for amplitude (hence, uncertainties increase in proportion to the observed quantity). Finally, we (vi) randomly delete 10-80% of arrivals, and randomly corrupt 10-30% of the remaining arrivals to have anomalous (uniformly random) arrival time and amplitude measurements. These simple steps produce complex input datasets (including strong noise and anomalous picks) for variable seismic networks, with data that are still physically tied to the assumed travel time and magnitude scale models.
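A minimal sketch of one iteration of this simulation loop is shown below. The travel time and amplitude model calls, coordinate conventions, and exact noise conventions (e.g., the ranges from which the ‘uniformly random’ corrupted values are drawn) are hypothetical placeholders rather than the exact generator.

import numpy as np

rng = np.random.default_rng(0)

def make_synthetic_sample(sta_xyz, lo, hi, travel_time, log_amp):
    """One synthetic sample following steps (i)-(vi); travel_time(sta, src) and
    log_amp(sta, src, mag) are hypothetical stand-ins for the assumed models,
    and lo/hi bound the source domain."""
    n = rng.integers(50, 451)                                   # (i) random station subset
    sta = sta_xyz[rng.choice(len(sta_xyz), size=min(n, len(sta_xyz)), replace=False)]
    src = rng.uniform(lo, hi)                                   # (ii) random source coordinate
    mag = rng.uniform(-3.0, 7.0)                                #      and random magnitude
    t = travel_time(sta, src)                                   # (iii) theoretical arrivals
    a = log_amp(sta, src, mag)                                  #       and log10 amplitudes
    dist = np.linalg.norm(sta - src, axis=1)                    # (iv) distance cutoff d + eps
    keep = dist < rng.uniform(10.0, 1000.0) + rng.normal(0.0, 30.0, size=len(sta))
    t = t + rng.laplace(0.0, rng.uniform(0.01, 0.05) * np.abs(t))   # (v) proportional noise
    a = a + rng.laplace(0.0, rng.uniform(0.25, 2.50) * np.abs(a))
    keep &= rng.random(len(sta)) > rng.uniform(0.1, 0.8)        # (vi) random deletions
    bad = keep & (rng.random(len(sta)) < rng.uniform(0.1, 0.3)) #      and corrupted picks
    t[bad] = rng.uniform(t.min(), t.max(), size=bad.sum())      #      (uniformly random values)
    a[bad] = rng.uniform(a.min(), a.max(), size=bad.sum())
    return sta[keep], t[keep], a[keep], src, mag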

For the labels, we represent each target solution as a continuous Gaussian, wherein the target spatial prediction is a multivariate Gaussian of fixed width $\sigma_{\mathcal{X}}$ (= 25 km) localized at the correct source coordinate, and, similarly, the target magnitude prediction is a 1D Gaussian localized on the true magnitude, with width $\sigma_{\mathcal{M}}$ (= 0.5 M). If a particular input sample has ≤ 3 non-corrupted arrivals following the synthetic scheme above, the labels are set to all zeros for that sample. Models are implemented in PyTorch Geometric [5], and we train by computing L2-norm losses of the output with respect to the target over the batch, optimizing with the Adam optimizer [10] for ~10,000 update steps with a learning rate of $10^{-3}$. Across ~10’s of training runs, the GNN performed similarly and converged at similar rates, indicating that training was stable.
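The label construction and a single L2/Adam update can be sketched as follows, under the assumptions of the schematic architecture above (batching and the full data pipeline are omitted, and the batch iterator is hypothetical).

import torch

def gaussian_labels(nodes, truth, sigma):
    """Target likelihood: a Gaussian of fixed width centered on the true source
    coordinate (sigma_X = 25 km) or true magnitude (sigma_M = 0.5 M)."""
    diff = nodes - truth
    d2 = (diff ** 2).sum(-1) if diff.ndim > 1 else diff ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

# One training step (model and feature construction as in the sketches above;
# the full model trains for ~10,000 Adam steps at a learning rate of 1e-3):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# x, A_S, A_H, y = next(batches)                              # hypothetical synthetic batch
# loss = ((model(x, A_S, A_H).squeeze(-1) - y) ** 2).mean()   # L2 loss vs. Gaussian target
# optimizer.zero_grad(); loss.backward(); optimizer.step()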

III Results

To evaluate the performance of our method, we apply the trained GNN to locate and compute magnitudes for 1,500 new events generated through the synthetic data construction scheme described above. This allows measuring the bulk performance of the GNN as the available station network changes, source coordinates vary, and the pick data have highly variable levels of noise, numbers of picks, and anomalous pick rates. We compare the results with traditional methods. For location, we minimize the unweighted least squares residual, using the average best solution of five particle swarm global optimization runs [9]. For magnitude, we compute magnitude estimates for all picks using our calibrated magnitude scale and take the median of the set of points within the 10-90% quantile range of the distribution.
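For reference, the baseline magnitude estimate described above reduces to a trimmed median over the per-pick magnitude estimates; a minimal sketch (with hypothetical input) is:

import numpy as np

def baseline_magnitude(station_mags):
    """Median of the per-pick magnitudes lying within the 10-90% quantile range."""
    m = np.asarray(station_mags, dtype=float)
    lo, hi = np.quantile(m, [0.1, 0.9])
    inside = m[(m >= lo) & (m <= hi)]
    return np.median(inside) if inside.size else np.median(m)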

Fig. 1: Summary of test results of the trained GNN compared against the traditional location and magnitude scale methods. In each sub-panel, the $R^{2}$ correlation scores, standard deviations of residuals of matched events, and the rate of recovered events (locations matched within ≤ 0.5°, and magnitudes matched within ≤ 0.5 M) are reported.

The results of the location and magnitude analysis are summarized in Fig. 1, where we observe comparable, and slightly favorable, performance of the GNN compared with the traditional method in nearly all bulk statistics. Notably, the $R^{2}$ scores for the GNN are >0.95 in latitude and longitude, and improve upon the traditional method’s performance by ~0.015 and ~0.05 units, respectively. Also, by declaring ‘matched events’ as those where the predicted event is within 0.5° epicentral distance of the true location, we measure a recovery rate of 0.968 for the GNN and 0.933 for the travel time method, highlighting the GNN’s improved tolerance to outliers. Both methods also estimate event depths; however, because the station spacing is large compared to the range of seismogenic depths ([0, 40] km) and all observations are from the Earth’s surface, depths are more poorly constrained by the noisy input data than horizontal coordinates, with $R^{2}$ = 0.29 and $R^{2}$ = 0.22 for the GNN and travel time method, respectively. For the magnitude application, we find comparable performance between both methods, with the GNN having slightly reduced residuals and rates of missed events. The magnitude residuals of the traditional method are systematically positive by ~0.16 M, while the GNN residuals are well localized around a mean of zero for both the magnitude and location predictions.

IV Conclusion

By making use of the local structure in data through information passing along the adjacency matrix, graph-based methods have the potential to parse subtle but important feature interactions in datasets. Our application demonstrates the potential of GNNs to solve traditional seismological inverse problems, while directly incorporating the physics of the problem through both the defined input features and the adjacency matrices. The framework we propose can be easily adapted and extended to other similar inverse problems, and the model can also be trained or applied on real data.

Acknowledgment

This work is supported by the Department of Energy (Basic Energy Sciences; Award DE-SC0020445).

References

  • [1] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner et al., “Relational inductive biases, deep learning, and graph networks,” arXiv preprint arXiv:1806.01261, 2018.
  • [2] K. J. Bergen, P. A. Johnson, V. Maarten, and G. C. Beroza, “Machine learning for data-driven discovery in solid earth geoscience,” Science, vol. 363, no. 6433, 2019.
  • [3] L. Berton, A. de Andrade Lopes, and D. A. Vega-Oliveros, “A comparison of graph construction methods for semi-supervised learning,” in 2018 International Joint Conference on Neural Networks (IJCNN).   IEEE, 2018, pp. 1–8.
  • [4] P. Djuric and C. Richard, Cooperative and graph signal processing: principles and applications.   Academic Press, 2018.
  • [5] M. Fey and J. E. Lenssen, “Fast graph representation learning with PyTorch Geometric,” arXiv preprint arXiv:1903.02428, 2019.
  • [6] W. L. Hamilton, R. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” arXiv preprint arXiv:1706.02216, 2017.
  • [7] L. Hutton and D. M. Boore, “The ML scale in southern California,” Bulletin of the Seismological Society of America, vol. 77, no. 6, pp. 2074–2094, 1987.
  • [8] Y. Y. Kagan, “Accuracy of modern global earthquake catalogs,” Physics of the Earth and Planetary Interiors, vol. 135, no. 2-3, pp. 173–209, 2003.
  • [9] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings of ICNN’95-international conference on neural networks, vol. 4.   IEEE, 1995, pp. 1942–1948.
  • [10] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
  • [11] S. M. Mousavi and G. C. Beroza, “Bayesian-deep-learning estimation of earthquake location from single-station observations,” IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 11, pp. 8211–8224, 2020.
  • [12] J. Münchmeyer, D. Bindi, U. Leser, and F. Tilmann, “Earthquake magnitude and location estimation from real time seismic waveforms with a transformer network,” Geophysical Journal International, vol. 226, no. 2, pp. 1086–1104, 2021.
  • [13] A. Ortega, P. Frossard, J. Kovačević, J. M. Moura, and P. Vandergheynst, “Graph signal processing: Overview, challenges, and applications,” Proceedings of the IEEE, vol. 106, no. 5, pp. 808–828, 2018.
  • [14] F. Ringdal and T. Kværna, “A multi-channel processing approach to real time network detection, phase association, and threshold monitoring,” Bulletin of the Seismological Society of America, vol. 79, no. 6, pp. 1927–1940, 1989.
  • [15] J. A. Sethian, “A fast marching level set method for monotonically advancing fronts,” Proceedings of the National Academy of Sciences, vol. 93, no. 4, pp. 1591–1595, 1996.
  • [16] A. Tarantola and B. Valette, “Generalized nonlinear inverse problems solved using the least squares criterion,” Reviews of Geophysics, vol. 20, no. 2, pp. 219–232, 1982.
  • [17] C. Thurber, H. Zhang, T. Brocher, and V. Langenheim, “Regional three-dimensional seismic velocity model of the crust and uppermost mantle of northern California,” Journal of Geophysical Research: Solid Earth, vol. 114, no. B1, 2009.
  • [18] M. P. van den Ende and J.-P. Ampuero, “Automated seismic source characterization using deep graph neural networks,” Geophysical Research Letters, vol. 47, no. 17, p. e2020GL088690, 2020.
  • [19] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, “Graph attention networks,” arXiv preprint arXiv:1710.10903, 2017.
  • [20] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, “Graph neural networks: A review of methods and applications,” AI Open, vol. 1, pp. 57–81, 2020.