
Deep Spatial Domain Generalization

Dazhou Yu, Guangji Bai, Yun Li, Liang Zhao Department of Computer Science
Emory University
Atlanta, USA
{Dazhou.Yu, Guangji.Bai, Yun.Li, Liang.Zhao}@emory.edu
Abstract

Spatial autocorrelation and spatial heterogeneity are pervasive in spatial data and cause traditional machine learning models to perform poorly. Spatial domain generalization is a spatial extension of domain generalization, which can generalize to unseen spatial domains in continuous 2D space: it learns a model under varying data distributions that generalizes to unseen domains. Although tremendous success has been achieved in domain generalization, very little work exists on spatial domain generalization. Progress in this area is challenged by: 1) difficulty in characterizing spatial heterogeneity, and 2) difficulty in obtaining predictive models for unseen locations without training data. To address these challenges, this paper proposes a generic framework for spatial domain generalization. Specifically, we develop the spatial interpolation graph neural network (code available at https://github.com/dyu62/Deep-domain-generalization), which handles spatial data as a graph and learns the spatial embedding of each node and the relationships between nodes. The spatial interpolation graph neural network infers the spatial embedding of an unseen location during the test phase; the spatial embedding of the target location is then used to decode the parameters of the downstream-task model directly at that location. Finally, extensive experiments on ten real-world datasets demonstrate the proposed method's strength.

Index Terms:
unseen domain generalization, spatial, GNN, edge embedding, interpolation

I Introduction

Traditional machine learning models typically rely on the independent and identically distributed (i.i.d.) assumption, meaning the data samples are independent of each other and follow the same distribution. However, this assumption generally does not hold for spatial data, which exhibit spatial autocorrelation and heterogeneity. Spatial autocorrelation makes the spatial location of a sample and the corresponding spatial attributes informative, so the samples are not independent and identically distributed (non-i.i.d.). Spatial heterogeneity includes spatial non-stationarity and spatial anisotropy. Spatial non-stationarity means that the sample distribution varies across locations; spatial anisotropy means that the spatial dependency between sample locations is non-uniform along different directions. For example, the air pollution concentration at a location is usually a complex function of various independent variables, but the relative importance of these variables changes with location: population density and distance from emission sources play an essential role in PM2.5 pollution concentration in urban built-up areas, whereas in rural areas relative humidity contributes greatly to the diffusion of PM2.5. This requires customizing models for different locations. However, the training set usually contains observations from only a limited number of locations, so it is common to need to execute prediction tasks at locations unseen in the training set. This results in a very challenging task in which we must predict the model at a new location without any training data. This paper focuses on this new problem, which we call spatial domain generalization, a spatial extension of domain generalization [1].

Domain generalization learns a model under varying data distributions that generalizes to unseen domains. It is derived from, and goes beyond, domain adaptation, which builds a bridge between source and target domains by characterizing the transformation between the data from these domains [2]. Current domain generalization covers only domains with categorical indices [1] or time-sequential domains [3]; it has not covered spatial domains, which require considering unique problems such as spatial autocorrelation and spatial heterogeneity. Another thread of research comes from the spatial data mining area, where techniques such as geographically weighted regression (GWR) [4] have been proposed to handle spatial heterogeneity. Most of the time, prescribed models are used in which the underlying spatial distribution and correlation must be presumed and predefined by the model designer, which may not reflect the true spatial process, as that process is usually complex and unknown. In particular, these models consider only distances and ignore other spatial information such as direction. Moreover, they share the feature extractor across all locations and only generate different coefficients in the last layer, so they cannot capture complex heterogeneity within the data.

Spatial domain generalization is challenged by several critical bottlenecks: 1) Difficulty in characterizing spatial heterogeneity. The data distribution is not identical across the entire space; it changes with each location's confounding factors and characteristics. A single global model cannot explain the relationships between variables, so the nature of the model must vary over space to reflect the structure within the data. Modeling these spatially changing relationships requires making the model location-sensitive. Feeding the coordinate values as part of the input features is intuitive, but such a method cannot leverage the dependency of the other features on location, nor the confounding factors that vary among locations. It is necessary yet difficult to quantitatively determine how spatial heterogeneity impacts the models, as there is no "one-fits-all" rule; techniques that can automatically learn this from data are highly imperative yet challenging to design. 2) Difficulty in obtaining predictive models for unseen locations without training data. Due to spatial heterogeneity, the local models at different locations can differ greatly in order to capture the relationships between the predictors and the target variable. When no training data is provided at some locations, the method must have the capacity to generalize to these unseen locations, which is as difficult as zero-shot learning.

To address the above challenges, we propose a generic framework for deep spatial domain generalization that generates predictive models for any unseen spatial domain. More specifically, to address the first challenge, we propose a novel spatial interpolation graph neural network (SIGNN) that learns the spatial embedding of each location and the relationships between them on the training set, and infers the spatial embedding of unseen locations during the test phase. The spatial embedding of the target location is then used to decode the parameterized model directly, without training data at the target location, which solves the second challenge. Our contributions include:

  • We propose a framework for spatial domain generalization. The framework does not assume a particular data distribution and learns the spatial embeddings of all locations in the training set in an end-to-end manner. It is also compatible with general predictive task models such as regression models and multi-layer perceptrons (MLPs).

  • We develop the spatial interpolation graph neural network. It handles spatial data as a graph and uses edge representations to learn the spatial embedding of each node and the relationships between nodes through graph convolution operations. It can also interpolate the spatial embedding at any location, so our method generalizes to unseen locations.

  • We conduct extensive experiments. We validate the efficacy of our method on ten real-world datasets for classification and regression tasks. Our method outperforms state-of-the-art models on most of the tasks.

II Related Work

In this section, we summarize work in the fields of domain adaptation and domain generalization. Machine learning systems often assume that training and test data follow the same distribution, which usually cannot be satisfied in practice. Domain Adaptation (DA), which has received great attention from researchers in the past decade, aims to build a bridge between source and target domains by characterizing the transformation between the data from these domains [2, 5, 6]. Under the big umbrella of DA, continuous domain adaptation considers the problem of adapting to target domains where the domain index is a continuous variable (temporal DA is a special case where the domain index is 1D). Approaches to such problems can be broadly classified into three categories: (1) biasing the training loss towards future data via transportation of past data [7], (2) using time-sensitive network parameters and explicitly controlling their evolution over time [8], and (3) learning time-invariant representations using adversarial methods [9]. The first category augments the training data, the second reparameterizes the model, and the third redesigns the training objective. However, data may not be available for the target domain, or it may not be possible to adapt the base model, which calls for Domain Generalization.

A diversity of DG methods has been proposed in recent years. According to [10], existing DG methods can be categorized into three groups: (1) Data manipulation: these methods manipulate the inputs to assist in learning general representations. Two popular techniques exist along this line: a) data augmentation [11], which is mainly based on augmentation, randomization, and transformation of input data; and b) data generation [12], which generates diverse samples to help generalization. (2) Representation learning: these methods are the most popular in domain generalization. Two representative techniques are: a) domain-invariant representation learning [5], which performs kernel methods, adversarial training, explicit feature alignment between domains, or invariant risk minimization to learn domain-invariant representations; and b) feature disentanglement [13], which tries to disentangle features into domain-shared and domain-specific parts for better generalization. (3) Learning strategy: these methods exploit general learning strategies to promote generalization capability.

III Methodology

Figure 1: Illustration of the proposed framework. The unseen location’s spatial embedding is interpolated by SIGNN. The edge representation contains both the distance and direction information. The spatial embedding is decoded to the weights of the downstream-task model.

In this section, we first provide the problem formulation and its challenges; we then introduce our proposed framework and how it addresses those challenges.

III-A Problem formulation

In this paper, we denote a geo-location by its 2D coordinate values $s \in \mathbb{R}^2$, and each $s$ is associated with a spatial domain $(\mathcal{X}_s \times \mathcal{Y}_s)$, from which we have a set of samples $(\mathbf{x}_s, \mathbf{y}_s) = \{(x_i, y_i) \in (\mathcal{X}_s \times \mathcal{Y}_s)\}_{i=1}^{N_s}$, where $x_i \in \mathcal{X}_s$ is the $i$-th input sample from the domain $\mathcal{X}_s$, while $y_i \in \mathcal{Y}_s$ is the $i$-th output sample from the domain $\mathcal{Y}_s$. For the classification problem, $y_i$ can be further narrowed to a binary value.

In opposition to the assumption that the relationship $f$ between independent variables $x_i \in \mathcal{X}_s$ and dependent variables $y_i \in \mathcal{Y}_s$ remains unchanged over the space $\mathbb{R}^2$, spatial heterogeneity describes a condition in which the relationships between some sets of variables $\{x_i, y_i\}$ are heterogeneous throughout space, i.e., $f_s \neq f_{s'}$ if $s \neq s'$. A static global model cannot capture these changing relationships; thus, Domain Generalization (DG) models that can reflect the heterogeneous relationships within the data play a vital role in spatial analysis.

Our goal in this paper is to build a model that proactively captures the data concept drift across different geo-locations. Given a set of data samples $\{(\mathbf{x}_s, \mathbf{y}_s)\}_{s \in S_0}$ from seen domains, where $S_0$ denotes the set of seen locations, we aim to learn the predictive mapping functions $f_s: \mathcal{X}_s \rightarrow \mathcal{Y}_s$ for downstream tasks such as classification or regression at location $s$. Here the location can be either seen (i.e., $s \in S_0$) or unseen (i.e., $s \in (\mathbb{R}^2 - S_0)$). The former is spatial multitask learning while the latter is spatial domain generalization. Therefore, our problem is a generalization of both of them.

III-B Proposed Method

III-B1 Spatial domain generalization

We propose a bi-level framework, shown in Fig. 1, that generates the predictive models for any unseen spatial domain. Generally speaking, we propose a novel spatial interpolation graph neural network (SIGNN) to learn the target location's spatial embedding. The spatial embedding of the target location is then used to decode the parameterized model directly, without training data at the target location. The general procedures of unseen domain generalization and model training are outlined below and detailed in Section III-B2.

Spatial KK-nearest neighbor graph

For any location $s$, we first build a spatial $K$-nearest-neighbor graph upon $s$ and the seen locations $S_0$, defined as $G(s, S_0; Z) = (V(s, S_0), E(s, S_0); Z)$, where the node set $V(s, S_0) = S_0 \cup \{s\}$ is simply the union of the current location $s$ and the seen locations $S_0$ defined before. In the case that $s$ is a seen location, $V$ reduces to $S_0$. $E(s, S_0) \subseteq V \times V$ denotes the relationships among all the locations, which will be detailed in Section III-B3. For simplicity, we omit the input and use $V$ and $E$ directly in the following. Let $\mathcal{N}_i^{(K)}$ denote node $v_i$'s $K$ nearest neighbors, i.e., the nodes whose Euclidean distance from $v_i$ is less than or equal to the $K$-th smallest Euclidean distance between any node and $v_i$. Specifically, for each node $v_j \in \mathcal{N}_i^{(K)}$, a directed edge $(v_j, v_i)$ exists from $v_j$ to $v_i$, so exactly $K$ nodes point to $v_i$. $Z = \{z_{s_0}\}_{s_0 \in S_0}$ denotes the spatial embeddings of all the locations other than the current location $s$, i.e., of $S_0 - \{s\}$, where $z_{s_0}$ is the spatial embedding vector of location $s_0$. These spatial embeddings also serve as the node features.
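To make the graph construction concrete, below is a minimal Python/numpy sketch of the directed $K$-nearest-neighbor edge set described above; the function name build_knn_edges and the dense pairwise-distance computation are our own illustrative choices, not the paper's implementation.

    import numpy as np

    def build_knn_edges(coords: np.ndarray, k: int) -> list:
        """Directed K-nearest-neighbor edges over 2D locations.

        coords: (n, 2) array of locations (seen locations, plus the current
                target location appended when it is unseen).
        Returns a list of directed edges (j, i), meaning v_j -> v_i, so that
        exactly k edges point to every node.
        """
        diff = coords[:, None, :] - coords[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)    # pairwise Euclidean distances
        np.fill_diagonal(dist, np.inf)          # a node is not its own neighbor
        edges = []
        for i in range(coords.shape[0]):
            for j in np.argsort(dist[i])[:k]:   # the k closest nodes to v_i
                edges.append((int(j), i))       # directed edge v_j -> v_i
        return edges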

Unseen domain model generation

In spatial domain generalization, we are interested in generating the predictive model for an unseen location $s' \in \mathbb{R}^2 - S_0$. The spatial embedding of location $s'$ is spatially interpolated by our SIGNN via our newly proposed spatial interpolation graph convolutions $a(s'; E, Z)$, which refer to the spatial embeddings of all seen spatial locations $S_0$. The spatial embedding of $s'$ is then fed into the model generator to generate the parameterized function $f_{s'}$, namely the downstream task's model:

$f_{s'} = d_\varphi(g_\theta(s'; E, Z))$,  (1)

where $d_\varphi$ denotes the downstream-task-model generator parameterized by $\varphi$, and $g_\theta$ denotes SIGNN parameterized by $\theta$. The downstream task can be any classification or regression task at location $s'$, such as weather classification, air pollution prediction, and so forth. We elaborate on the details of transferring $a$'s output to a specific task's model in Section III-B2.

Model Training

The above model generation for an unseen location requires learning the spatial embeddings $Z = \{z_s\}_{s \in S_0}$, the model parameters $\theta$ of SIGNN $g_\theta$, and the parameters $\varphi$ of the model generator $d_\varphi$. In the following, we introduce how to learn them jointly in the training phase. For each seen location, as mentioned in Section III-A, we know the input and output data of the downstream task. Hence our training objective is to maximize the likelihood given the prior $p(Z)$, by learning the unknown spatial embeddings and model parameters:

$\arg\max_{Z, \varphi, \theta} \{p(\mathbf{Y}|\mathbf{X}, \varphi, \theta, Z)\, p(Z)\}$,  (2)

which is equivalent to minimizing the negative logarithm of the likelihood:

$\arg\min_{Z, \varphi, \theta} \{-\ln p(\mathbf{Y}|\mathbf{X}, \varphi, \theta, Z) - \ln p(Z)\}$,  (3)

where $\mathbf{Y}$ and $\mathbf{X}$ denote the predictions and inputs for all samples from all domains (i.e., $\{\{(x_i, y_i) \in (\mathcal{X}_s \times \mathcal{Y}_s)\}_{i=1}^{N_s}\}_{s \in S_0}$), respectively. Since $Z$ can take any continuous value, its prior distribution $p(Z)$ can be assumed to be an isotropic Gaussian, so we have

$\arg\min_{Z, \varphi, \theta} \{-\ln p(\mathbf{Y}|\mathbf{X}, \varphi, \theta, Z) + \frac{1}{2}\|Z\|^2\}$.  (4)

Hence the first term is a downstream-task-specific prediction loss and the second term is an $\ell_2$ norm that regularizes $Z$. The first term can thus be expressed more specifically as $\sum_{s \in S_0} \mathrm{loss}(f_s(\mathbf{x}_s), \mathbf{y}_s)$, where the parameters $W_s$ of each location $s$'s downstream predictive function $f_s$ are calculated as

$W_s = d_\varphi(z_s) = d_\varphi(g_\theta(s; E, Z))$.  (5)
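To illustrate how these pieces fit together, the following Python sketch assembles the training loss of Eq. (4) under simplifying assumptions: the downstream model is a plain linear regressor, and signn and model_generator are stand-in callables for $g_\theta$ and $d_\varphi$ with hypothetical signatures.

    import torch

    def apply_linear_model(W_s, x):
        # A generated linear task model: W_s packs the weight vector and,
        # as its last entry, the bias.
        w, b = W_s[:-1], W_s[-1]
        return x @ w + b

    def total_loss(batches, Z, signn, model_generator, edges,
                   task_loss=torch.nn.functional.mse_loss):
        # Eq. (4): task losses summed over seen locations, plus the l2
        # penalty that -ln p(Z) reduces to under an isotropic Gaussian prior.
        loss = torch.zeros((), dtype=Z.dtype)
        for s, (x_s, y_s) in batches.items():
            z_s = signn(s, edges, Z)     # g_theta: embedding of location s
            W_s = model_generator(z_s)   # d_phi: downstream weights (Eq. 5)
            loss = loss + task_loss(apply_linear_model(W_s, x_s), y_s)
        return loss + 0.5 * Z.pow(2).sum()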

In the following, we more concretely introduce the prediction and model-parameter training of our overall framework. In Section III-B2, we detail our SIGNN model and the generator that produces the downstream-task model. Lastly, in Section III-B3, we drill down into our edge representation.

III-B2 Unseen domain model generator

In this subsection, we first introduce the details of our SIGNN model, which performs spatial embedding interpolation for unseen locations, and then elaborate on the generator that produces the downstream-task model from the interpolated spatial embedding.

Spatial interpolation graph neural network (SIGNN)

As mentioned above, our SIGNN model $g_\theta(s; E, Z)$ aims to infer the spatial embedding of a given location $s$ based on the other locations' ($S_0$'s) spatial embeddings and their spatial correlation with $s$. A key challenge unique to spatial interpolation, beyond general message passing in graph neural networks, is how to comprehensively represent this correlation among locations. Existing works typically consider only the distances among locations to represent their correlation, and thus cannot capture more comprehensive spatial information, such as the orientation of neighbors, which is indispensable for spatial interpolation.

To achieve this, we propose in our SIGNN a novel edge representation $E(s, S_0)$, which is detailed in Section III-B3; here we first introduce SIGNN and its convolution operations built upon the edge representations and spatial embeddings.

SIGNN is a stack of $U$ spatial interpolation graph convolutional layers $a_u$, $u = 1, 2, \ldots, U$, namely $g_\theta = a_U \circ a_{U-1} \circ \dots \circ a_1$, where the input to each spatial interpolation graph convolutional layer is the target location, the set of our novel edge representations, and the spatial embeddings, namely $(s; E, Z)$. Each spatial interpolation graph convolutional layer interpolates the spatial embedding $z_s$ as its output, while the edge representations remain the same across layers.

To perform the interpolation, the spatial interpolation graph convolutional layer $a_u$ generates a pairwise weight $\omega_{ji}^{(u)}$ for each node $v_i$ and each of its neighbors $v_j \in \mathcal{N}_i^{(K)}$; the spatial embedding of each node is then updated as the weighted sum of the spatial embeddings of its neighboring nodes, namely $z_i^{(u+1)} = \sum_{j=1}^{K} \omega_{ji}^{(u)} z_j^{(u)}$, where $\omega_{ji}^{(u)}$ equals

$\frac{\exp(\sigma(\vec{\alpha}^T[m_1(e_{ji}) \,\|\, m_2(z_i^{(u)}) \,\|\, m_2(z_j^{(u)})]))}{\sum_{k, v_k \in \mathcal{N}_i^{(K)}} \exp(\sigma(\vec{\alpha}^T[m_1(e_{ki}) \,\|\, m_2(z_i^{(u)}) \,\|\, m_2(z_k^{(u)})]))}$,

where $e_{ji} \in E$ denotes the edge representation of edge $(v_j, v_i)$; $z_i^{(u)}$ and $z_j^{(u)}$ denote the spatial embeddings of nodes $v_i$ and $v_j$ at layer $a_u$, respectively; $m_1$ and $m_2$ denote two MLP models that augment the edge representation and the spatial embedding, respectively; $\|$ denotes the concatenation operation; $\sigma$ denotes the LeakyReLU nonlinear activation function; and $\vec{\alpha}$ denotes a parameter vector that transforms the concatenated vector into a scalar. We also use the softmax function to normalize the weights. Finally, we select the spatial embedding $z_s$ of location $s$ as the output.
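A minimal PyTorch sketch of one such layer is given below, assuming dense (n, K) neighbor-index and (n, K, edge_dim) edge-representation tensors; the hidden sizes and module layout are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SpatialInterpolationConv(nn.Module):
        # One layer a_u: each node's embedding is updated to an
        # attention-weighted sum of its K neighbors' embeddings, with the
        # weights computed from edge representations and endpoint embeddings.
        def __init__(self, edge_dim, embed_dim, hidden_dim=32):
            super().__init__()
            self.m1 = nn.Sequential(nn.Linear(edge_dim, hidden_dim), nn.ReLU(),
                                    nn.Linear(hidden_dim, hidden_dim))  # edge MLP m1
            self.m2 = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU(),
                                    nn.Linear(hidden_dim, hidden_dim))  # node MLP m2
            self.alpha = nn.Linear(3 * hidden_dim, 1, bias=False)       # vector alpha

        def forward(self, z, neighbors, edge_repr):
            # z: (n, d) embeddings; neighbors: (n, K) indices of each node's
            # neighbors; edge_repr: (n, K, edge_dim) for the edges v_j -> v_i.
            n, K = neighbors.shape
            h_i = self.m2(z).unsqueeze(1).expand(-1, K, -1)  # m2(z_i), repeated
            h_j = self.m2(z[neighbors])                      # m2(z_j) per neighbor
            h_e = self.m1(edge_repr)                         # m1(e_ji)
            score = F.leaky_relu(self.alpha(torch.cat([h_e, h_i, h_j], dim=-1)))
            w = torch.softmax(score.squeeze(-1), dim=1)      # omega_ji over N_i^(K)
            return (w.unsqueeze(-1) * z[neighbors]).sum(dim=1)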

Downstream-task model generator

Many shallow models, such as linear regression, logistic regression, and support vector machines, manipulate the input vector with matrix operations such as multiplication between input and weight vectors. Such a matrix operation can be considered a fully-connected layer, or another type of layer, with or without a nonlinear activation function. When models go deep, multiple layers are stacked into deep neural networks. Hence each of these shallow or deep models for location $s$ can be represented by its parameters, which are network-structured. Formally, we define such a network as $\mathcal{G} = (\mathcal{V}, \mathcal{E}; W_s)$, where $\mathcal{V}$ are the neurons, $\mathcal{E}$ are the links between neurons, and $W_s$ are the link weights of the model at location $s$. The model parameters $W_s$ are exactly the output of our model generator $d_\varphi$: $W_s = d_\varphi(z_s)$. To be specific, a neural network can be represented as an edge-weighted graph $\mathcal{G}$, where each node $v \in \mathcal{V}$ corresponds to a neuron and each edge $e \in \mathcal{E}$ corresponds to the connection weight between two neurons. Following [14], we use a three-layer MLP to generate the downstream-task model's weights $W_s$; the model then loads the weights and performs the task.
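As a concrete illustration of this hypernetwork-style generation, the sketch below maps a spatial embedding to the flat parameter vector of a small two-layer downstream MLP and then runs that MLP functionally; the class name, layer sizes, and parameter packing are our own assumptions rather than the paper's exact design.

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ModelGenerator(nn.Module):
        # d_phi: a three-layer MLP mapping a spatial embedding z_s to the
        # flat weight vector W_s of a downstream MLP (in -> hidden -> out).
        def __init__(self, embed_dim, in_dim, hidden_dim, out_dim, gen_hidden=64):
            super().__init__()
            self.shapes = [(hidden_dim, in_dim), (hidden_dim,),
                           (out_dim, hidden_dim), (out_dim,)]
            n_params = sum(math.prod(s) for s in self.shapes)
            self.net = nn.Sequential(nn.Linear(embed_dim, gen_hidden), nn.ReLU(),
                                     nn.Linear(gen_hidden, gen_hidden), nn.ReLU(),
                                     nn.Linear(gen_hidden, n_params))

        def forward(self, z_s):
            return self.net(z_s)  # W_s as one flat vector

        def run_task_model(self, W_s, x):
            # Unpack W_s into the downstream MLP's weights and run the task.
            params, offset = [], 0
            for shape in self.shapes:
                size = math.prod(shape)
                params.append(W_s[offset:offset + size].view(shape))
                offset += size
            w1, b1, w2, b2 = params
            return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)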

III-B3 Edge representation for spatial interpolation

In this section, we introduce the edge representation for spatial interpolation, inspired by [15]. The proposed edge representation $e_{ji}$ for an edge $(v_j, v_i)$, where $v_i$ is the target node and $v_j$ is the source node, can be expressed as

$e_{ji} = (l_{ij}, \lambda_{ijk})$,  (6)

where $v_k$ is the neighbor of $v_j$ that forms the smallest $\lambda_{ijk} \in [-\pi, \pi)$, with

$\begin{split}
\lambda_{ijk} &= \mathrm{Parity} \cdot \bar{\lambda}_{ijk}, \qquad
\bar{\lambda}_{ijk} = \arccos\left(\left\langle \frac{\mathbf{s}_{ij}}{l_{ij}}, \frac{\mathbf{s}_{jk}}{l_{jk}} \right\rangle\right), \\
l_{ij} &= \|\mathbf{s}_{ij}\|_2, \qquad
\mathbf{s}_{ij} = \bar{s}_j - \bar{s}_i, \\
\mathrm{Parity} &= \langle \mathbf{n}_{ijk}, \mathbf{n}_{xy} \rangle, \qquad
\mathbf{n}_{ijk} = \frac{\mathbf{s}_{ij} \times \mathbf{s}_{jk}}{\|\mathbf{s}_{ij} \times \mathbf{s}_{jk}\|_2}, \qquad
\mathbf{n}_{xy} = \mathbf{u}_x \times \mathbf{u}_y,
\end{split}$  (7)

where $\bar{s}_i$ and $\bar{s}_j$ denote the coordinate values of the two locations, $\mathbf{u}_x$ and $\mathbf{u}_y$ are unit vectors along the horizontal and vertical axes of the coordinate system on the plane of interest, $\mathbf{n}_{xy}$ is the normal of that plane, and $\times$ denotes the cross product operation.
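For concreteness, the following numpy sketch computes $e_{ji} = (l_{ij}, \lambda_{ijk})$ in the 2D case; here the parity $\langle \mathbf{n}_{ijk}, \mathbf{n}_{xy} \rangle$ reduces to the sign of the scalar 2D cross product, and skipping degenerate (zero-length) edges is our own simplification.

    import numpy as np

    def edge_representation(s_i, s_j, neighbors_of_j):
        # e_ji = (l_ij, lambda_ijk) following Eqs. (6)-(7), specialized to 2D.
        # s_i, s_j: coordinates of target node v_i and source node v_j;
        # neighbors_of_j: coordinates of v_j's neighbors (candidates for v_k).
        s_ij = s_j - s_i
        l_ij = np.linalg.norm(s_ij)
        best = None
        for s_k in neighbors_of_j:
            s_jk = s_k - s_j
            l_jk = np.linalg.norm(s_jk)
            if l_ij == 0 or l_jk == 0:
                continue                         # degenerate case: skip
            cos = np.clip(np.dot(s_ij / l_ij, s_jk / l_jk), -1.0, 1.0)
            angle = np.arccos(cos)               # unsigned angle in [0, pi]
            # Sign of the scalar 2D cross product plays the role of Parity.
            parity = np.sign(s_ij[0] * s_jk[1] - s_ij[1] * s_jk[0])
            signed = parity * angle              # lambda_ijk in [-pi, pi)
            if best is None or signed < best:
                best = signed                    # keep the smallest lambda_ijk
        return l_ij, best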

IV Experiment

In this section, we first introduce the experimental settings, then compare the effectiveness of the proposed model against the comparison methods on ten real-world datasets. All experiments are conducted on a 64-bit machine with an NVIDIA A5000 GPU.

IV-A Experiment setting

Dataset

We evaluate our method on ten real-world datasets, including seven civil unrest event prediction datasets and one influenza outbreak event prediction dataset extracted from Twitter data for the classification task, and two environmental datasets collected by in-situ monitoring sensors and satellites for the regression task.

  • Civil unrest Twitter datasets Seven civil unrest event datasets from Brazil, Chile, Colombia, Ecuador, El Salvador, Uruguay, and Venezuela are used to evaluate the performance of the proposed model. Details of these datasets can be found in [16, 17].

  • Influenza outbreak Twitter dataset Flu activity data are collected from 48 states in the U.S. in this dataset. Details of this dataset can be found in [18, 19, 20]. We call this dataset Flu in the following sections.

  • PM2.5 concentration dataset PM2.5 data in the Los Angeles region derived from the fusion of data collected by PurpleAir sensors and the Moderate Resolution Imaging Spectroradiometer (MODIS) TERRA and AQUA satellites [21], as well as meteorological data from the MERRA-2 reanalysis [22]. The dataset contains latitude, longitude, meteorological values such as humidity, surface pressure, and wind speed, and the corresponding ambient PM2.5 value at each location.

  • Ambient temperature dataset In-situ air temperature was downloaded from Weather Underground, a network of weather stations. Satellite-based land surface temperature (LST) products derived from MODIS satellite observations and meteorological variables were collocated to estimate ambient temperature.

Comparison method

To the best of our knowledge, little work has handled unseen spatial domains. The following methods are included for performance comparison on the collected datasets.

  • ERM: A space-oblivious model trained on all training domains using empirical risk minimization (ERM).

  • IncFinetune: A global model trained incrementally by finetuning on the training domains one at a time.

  • GTWNN [23]: A geographically and temporally weighted neural network consisting of two artificial neural networks, where the first network estimates the spatial weight of each independent variable from the coordinate values.

IV-B Experimental performance

We adopt Area under the ROC Curve (AUC) score and mean absolute error (MAE) as the metrics for classification and regression tasks, respectively.
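As a small illustration of this evaluation protocol, the sketch below computes both metrics with scikit-learn on hypothetical held-out predictions; the arrays are made-up placeholders, not data from the paper.

    import numpy as np
    from sklearn.metrics import roc_auc_score, mean_absolute_error

    y_true_cls = np.array([0, 1, 1, 0])            # event / no-event labels
    y_score_cls = np.array([0.2, 0.7, 0.6, 0.4])   # predicted event probabilities
    auc = roc_auc_score(y_true_cls, y_score_cls)   # classification metric (AUC)

    y_true_reg = np.array([10.0, 12.5, 8.1])       # observed PM2.5 / temperature
    y_pred_reg = np.array([9.5, 13.0, 8.4])        # model predictions
    mae = mean_absolute_error(y_true_reg, y_pred_reg)  # regression metric (MAE)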

TABLE I: Comparison of our proposed method against existing methods on all ten datasets, in terms of MAE for the first two datasets and AUC score for the others. The standard deviation over three runs follows the ± mark. We observe that our proposed method outperforms almost all the baselines.
Dataset ERM IncFinetune GTWNN SIGNN-G SIGNN
PM2.5 12.44 ± 4.64 13.73 ± 4.07 10.00 ± 0.58 9.66 ± 0.48 9.40 ± 0.46
Temperature 8.74 ± 1.23 11.13 ± 4.93 12.29 ± 7.81 7.41 ± 0.30 7.33 ± 0.28
Flu 0.84 ± 0.03 0.80 ± 0.03 0.75 ± 0.02 0.74 ± 0.05 0.84 ± 0.06
Brazil 0.53 ± 0.03 0.52 ± 0.03 0.59 ± 0.04 0.61 ± 0.08 0.65 ± 0.07
Chile 0.46 ± 0.04 0.44 ± 0.12 0.49 ± 0.07 0.57 ± 0.08 0.55 ± 0.05
Colombia 0.52 ± 0.08 0.44 ± 0.06 0.55 ± 0.07 0.56 ± 0.04 0.56 ± 0.11
Ecuador 0.47 ± 0.08 0.38 ± 0.13 0.47 ± 0.03 0.52 ± 0.08 0.52 ± 0.18
El Salvador 0.50 ± 0.07 0.51 ± 0.08 0.46 ± 0.07 0.52 ± 0.07 0.53 ± 0.20
Uruguay 0.48 ± 0.08 0.50 ± 0.10 0.39 ± 0.12 0.40 ± 0.17 0.54 ± 0.01
Venezuela 0.51 ± 0.03 0.55 ± 0.04 0.56 ± 0.05 0.60 ± 0.03 0.54 ± 0.03

IV-B1 Effectiveness results

Table I summarizes the performance comparison between the proposed methods and the competing models on civil unrest event forecasting, influenza outbreak prediction, ambient PM2.5 concentration estimation, and temperature estimation tasks. The results show that the proposed method achieves the best performance on most datasets and comparable performance on the others. This indicates that a method that adapts to different locations can better model the heterogeneous relationships between the independent and dependent variables as location changes. For example, on the seven civil unrest event datasets, the proposed model has the highest AUC scores in most countries except Venezuela. In particular, the AUC scores of our model in Chile and Brazil are much higher than those of the baseline models.

IV-B2 Ablation study

We further conduct an ablation study on all ten datasets to evaluate the effectiveness of the different components of our proposed model. First, we remove the interpolation function from SIGNN, train a single global spatial embedding for all locations, and use this embedding to generate the weights of the downstream-task model. We name this version of our method SIGNN-G; its results are included in Table I.

As we can see, the interpolation function provided by SIGNN contributes significantly to overall model performance. The difference in performance between SIGNN-G and SIGNN is an indicator of the extent of heterogeneity in the spatial data. This further implies that spatial heterogeneity exists in almost all the datasets except Colombia and Ecuador, on which the average performances of SIGNN and SIGNN-G are the same.

V Conclusion

Spatial autocorrelation and spatial heterogeneity widely exist in spatial data, causing traditional machine learning models to perform poorly. Spatial domain generalization is a spatial extension of domain generalization that can generalize to unseen spatial domains in continuous 2D space: it learns a model under varying data distributions that generalizes to unseen domains. Although tremendous success has been achieved in domain generalization, very little work exists on spatial domain generalization. This paper proposes a generic framework for spatial domain generalization. Specifically, we develop a spatial interpolation graph neural network that handles spatial data as a graph and learns the spatial embedding of each node and the relationships between nodes. The spatial interpolation graph neural network infers the spatial embedding of an unseen location during the test phase; the spatial embedding of the target location is then decoded into the parameters of the downstream-task model at that location. Extensive experiments on ten real-world datasets demonstrate the proposed method's strength: SIGNN achieves the best performance on most of the datasets and comparable performance on the others. The difference in performance between SIGNN-G and SIGNN validates our assumption that spatial heterogeneity exists in most spatial datasets.

Acknowledgment

This work was supported by the National Science Foundation (NSF) under Grant No. 1755850, No. 1841520, No. 2007716, No. 2007976, No. 1942594, No. 1907805, a Jeffress Memorial Trust Award, an Amazon Research Award, an NVIDIA GPU Grant, and Design Knowledge Company (subcontract number: 10827.002.120.04).

References

  • [1] K. Muandet, D. Balduzzi, and B. Schölkopf, “Domain generalization via invariant feature representation,” in International Conference on Machine Learning.   PMLR, 2013, pp. 10–18.
  • [2] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan, “A theory of learning from different domains,” Machine learning, vol. 79, no. 1, pp. 151–175, 2010.
  • [3] A. Nasery, S. Thakur, V. Piratla, A. De, and S. Sarawagi, “Training for the future: A simple gradient interpolation loss to generalize along time,” Advances in Neural Information Processing Systems, vol. 34, 2021.
  • [4] D. C. Wheeler and A. Páez, “Geographically weighted regression,” in Handbook of applied spatial analysis.   Springer, 2010, pp. 461–486.
  • [5] Y. Ganin, E. Ustinova, H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, M. Marchand, and V. Lempitsky, “Domain-adversarial training of neural networks,” The journal of machine learning research, vol. 17, no. 1, pp. 2096–2030, 2016.
  • [6] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial discriminative domain adaptation,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 7167–7176.
  • [7] J. Hoffman, T. Darrell, and K. Saenko, “Continuous manifold based adaptation for evolving visual domains,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 867–874.
  • [8] M. Mancini, S. R. Bulo, B. Caputo, and E. Ricci, “Adagraph: Unifying predictive and continuous domain adaptation through graphs,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 6568–6577.
  • [9] H. Wang, H. He, and D. Katabi, “Continuously indexed domain adaptation,” arXiv preprint arXiv:2007.01807, 2020.
  • [10] J. Wang, C. Lan, C. Liu, Y. Ouyang, W. Zeng, and T. Qin, “Generalizing to unseen domains: A survey on domain generalization,” arXiv preprint arXiv:2103.03097, 2021.
  • [11] J. Tobin, R. Fong, A. Ray, J. Schneider, W. Zaremba, and P. Abbeel, “Domain randomization for transferring deep neural networks from simulation to the real world,” in 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS).   IEEE, 2017, pp. 23–30.
  • [12] F. Qiao, L. Zhao, and X. Peng, “Learning to learn single domain generalization,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12556–12565.
  • [13] W. Li, Z. Xu, D. Xu, D. Dai, and L. Van Gool, “Domain generalization and adaptation using low rank exemplar svms,” IEEE transactions on pattern analysis and machine intelligence, vol. 40, no. 5, pp. 1114–1127, 2017.
  • [14] K.-H. N. Bui, J. Cho, and H. Yi, “Spatial-temporal graph neural network for traffic forecasting: An overview and open research issues,” Applied Intelligence, pp. 1–12, 2021.
  • [15] Z. Zhang and L. Zhao, “Representation learning on spatial networks,” Advances in Neural Information Processing Systems, vol. 34, pp. 2303–2318, 2021.
  • [16] L. Zhao, Q. Sun, J. Ye, F. Chen, C.-T. Lu, and N. Ramakrishnan, “Multi-task learning for spatio-temporal event forecasting,” in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp. 1503–1512.
  • [17] S. Muthiah, P. Butler, R. P. Khandpur, P. Saraf, N. Self, A. Rozovskaya, L. Zhao, J. Cadena, C.-T. Lu, A. Vullikanti et al., “Embers at 4 years: Experiences operating an open source indicators forecasting system,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 205–214.
  • [18] L. Zhao, Q. Sun, J. Ye, F. Chen, C.-T. Lu, and N. Ramakrishnan, “Feature constrained multi-task learning models for spatiotemporal event forecasting,” IEEE Transactions on Knowledge and Data Engineering, vol. 29, no. 5, pp. 1059–1072, 2017.
  • [19] L. Zhao, J. Chen, F. Chen, W. Wang, C.-T. Lu, and N. Ramakrishnan, “Simnest: Social media nested epidemic simulation via online semi-supervised deep learning,” in 2015 IEEE international conference on data mining.   IEEE, 2015, pp. 639–648.
  • [20] L. Zhao, “Event prediction in the big data era: A systematic survey,” ACM Computing Surveys (CSUR), vol. 54, no. 5, pp. 1–37, 2021.
  • [21] R. Levy, S. Mattoo, L. Munchak, L. Remer, A. Sayer, F. Patadia, and N. Hsu, “The collection 6 modis aerosol products over land and ocean,” Atmospheric Measurement Techniques, vol. 6, no. 11, pp. 2989–3034, 2013.
  • [22] R. Gelaro, W. McCarty, M. J. Suárez, R. Todling, A. Molod, L. Takacs, C. A. Randles, A. Darmenov, M. G. Bosilovich, R. Reichle et al., “The modern-era retrospective analysis for research and applications, version 2 (merra-2),” Journal of climate, vol. 30, no. 14, pp. 5419–5454, 2017.
  • [23] L. Feng, Y. Wang, Z. Zhang, and Q. Du, “Geographically and temporally weighted neural network for winter wheat yield prediction,” Remote Sensing of Environment, vol. 262, p. 112514, 2021.