
On a Bernoulli Autoregression Framework
for Link Discovery and Prediction

Xiaohan Yan    Avleen S. Bijral
Microsoft, One Microsoft Way, Redmond, WA 98052
{xiaoya,avbijral}@microsoft.com
Both authors contributed equally to this research.
Abstract

We present a dynamic prediction framework for binary sequences that is based on a Bernoulli generalization of the auto-regressive process. Our approach lends itself easily to variants of the standard link prediction problem for a sequence of time-dependent networks. Focusing on this dynamic network link prediction/recommendation task, we propose a novel problem that exploits additional information via a much larger sequence of auxiliary networks and has important real-world relevance. To allow discovery of links that do not exist in the available data, our model estimation framework introduces a regularization term that presents a trade-off between the conventional link prediction task and this discovery task. In contrast to existing work, our stochastic gradient based estimation approach is highly efficient and can scale to networks with millions of nodes. We present extensive empirical results on time-dependent networks derived from actual product usage, as well as on a Reddit-based data set of time-dependent sentiment sequences.

1 Introduction

Predicting and recommending links in a future network is a problem of significant interest in many communities, with applications ranging from recommending friends Aiello et al., (2012) and patent partners Wu et al., (2013) to disease propagation Myers and Leskovec, (2010) and detection of abnormal communication Huang and Lin, (2009). At a high level, link prediction is not markedly different from matrix completion when the network is represented as an adjacency matrix. In many cases, however, past link information is predictive of the future. In one such application, Richard et al., (2010) proposed an approach for time-dependent graphs where past adjacency matrices and a forecast of a network feature sequence are used to estimate links at a future time. This is a dynamic extension of the static link prediction problem Liben-Nowell and Kleinberg, (2007), which originated in using network properties of a static snapshot to predict links. As a simple example, consider predicting communication between two users over a network: past interactions are clearly predictive.

In this paper we discuss one variant of the dynamic link prediction problem and, in contrast to Richard et al., (2010), present a highly scalable model with an efficient estimation approach. The underlying idea behind our approach is to exploit a sequence of auxiliary networks with feature data to inform links in the network sequence of interest. This is a fairly common scenario in a large corporation with multiple products and services, where it can be immensely useful to use network information gathered from a product A to inform usage (via link prediction) on a product B. In the online retail setting (example dataset: https://www.kaggle.com/retailrocket/ecommerce-dataset), such as Amazon, a variety of user-item interactions are possible, including “view”, “add-to-cart” and “purchase”. One could generate networks at a user-item level for each type of event, and leverage the “view” network or the “add-to-cart” network to infer connections on the “purchase” network, i.e., when a user eventually buys an item. In such scenarios we have two network sequences: a main sequence that is expected to be sparse, and a relatively dense auxiliary sequence with additional feature information that is predictive of a link in the main network. The introduction of the auxiliary sequence serves two purposes: it provides candidate links to predict from and a feature set that is possibly predictive of these links. However, the model we propose is general and holds even for a single network sequence with feature information. In the standard matrix completion framework, the observed entries are directly modeled with low rank constraints on the matrix. Instead, we model link probabilities using a Bernoulli parametrization, incorporate an auto-regressive structure to allow for dependence on past data, and also include an additional dependence on logit-transformed feature data. The analogue of a low rank constraint in this framework is a relatively under-parametrized model, as we shall see in Section 3.

2 Related Work

Existing work on link prediction has primarily focused on static networks, with methods ranging from matrix factorization Koren, (2008) and node neighborhood measures Sarkar et al., (2011); Liben-Nowell and Kleinberg, (2007) to network diffusion Myers and Leskovec, (2010). For an extensive survey see Wang et al., (2015). In contrast, (dynamic) link prediction in the presence of a sequence of networks is relatively unexplored. Vu et al., (2011) propose a longitudinal network evolution approach that models edges at time $t$ as a multivariate counting process where the intensity function incorporates edge level features (network derived) using either the multiplicative Cox or additive Aalen form. Our setting is different in that we have two sequences of networks that we can exploit, and the design of our estimation method enables us to discover new links, which is not possible in their framework. Moreover, their estimation method requires computation of large weight matrices and it is not clear whether it can scale to networks with millions of nodes. Additionally, many real world networks tend to have low rank structure due to similar local node characteristics, and explicit introduction of these constraints is a desirable feature. Closer in structure to our model is the nonparametric approach described in Sarkar et al., (2012), where the edge probabilities are assumed to be Bernoulli distributed with a link function connecting the edge/node probabilities to the node/edge level features. Besides the difference in problem setting, Sarkar et al., (2012) use a nonparametric estimator for the link function based on a kernel weighted local neighborhood approach (in time and feature space). This requires a search over edge features that are similar to the edge being modeled and can be quite expensive; even with fast search techniques it is unlikely to scale to extremely large graphs.

In the matrix completion line of work, Richard et al., (2010) propose a graph feature tracking (GFT) framework for dynamic networks where the entire network at a future step is estimated by solving an appropriately regularized optimization problem. The optimization reflects a trade-off between minimizing the error between the estimated and the last adjacency matrix, a low rank constraint, and a term involving the features. This last term attempts to minimize the gap between the mapped features for the estimated matrix and the unknown adjacency matrix at the future time, the main intuition being that the evolving features are indicative of links in the sequence of networks. This combination of optimization terms allows the method to uncover links that were not previously observed. See Section 4 for more details of the GFT model considered in the simulation study. In this work we also introduce a feature regularization term to allow for link discovery, but unlike Richard et al., (2010) our approach is highly scalable. While their method involves an expensive SVD operation at every iteration, we propose a stochastic gradient based algorithm in which each iteration involves only simple operations. A later work, Richard et al., (2014), extended the GFT framework to allow for features that evolve as a vector auto-regressive process and as such is also not computationally feasible for very large networks.

Perhaps most importantly, our problem setup is more general and applies to arbitrary binary sequences with mapped features. In Section 5.2 we apply our approach to a hyperlink network from Reddit and show its improved performance over a baseline approach. The Reddit hyperlink network represents a single network sequence, and the applicability of our approach demonstrates its potential for wide usage in modeling time-dependent binary sequences. The rest of the work is organized as follows. In Section 3 we propose the BAR model, with its estimation problem in Section 3.1. We compare the BAR model with baselines (GFT, logistic) in a simulation study in Section 4. On two real datasets in Section 5, one from actual products A and B (we do not reveal the actual product names to maintain anonymity) and the other from Reddit, we apply the proposed model and show its superior performance in link prediction and discovery. Finally, we conclude the work with discussions in Section 6.

3 The BAR Model

Let the adjacency matrices $\mathbf{A}^{(t)}\in\{0,1\}^{n\times n}$ and $\mathbf{B}^{(t)}\in\{0,1\}^{N\times N}$ be the sequences of main and auxiliary networks for $t\in\{1,\dots,T\}$. Corresponding to the edges in the auxiliary sequence we also have a sequence of feature tensors $\mathbf{F}^{(t)}\in\Re^{N\times N\times d}$ such that, for all $(i,j)$ with $\mathbf{B}^{(t)}_{ij}=1$, we have $\mathbf{F}^{(t)}_{ij}\in\Re^{d}$. We assume that an edge in the main network sequence exists at least once in the auxiliary sequence and that the numbers of nodes ($N$ and $n$) are fixed over time. This is not an issue in practice since we can always set $N$ to be the largest value over $t$ and pad with all-zero rows at other times.

There are two important aspects of our model definition. First, we believe that the feature sequence and the history of connections are predictive of a link in the future. For example, in the context of an email network, an increased exchange between two nodes is possibly indicative of a future calendar event. Second, the probability of link formation evolves with time and, in that sense, depends on the entire sequence of edge level features. For any link $\mathbf{A}^{(t)}_{ij}$ we have

\begin{gathered}
\mathbf{A}^{(t)}_{ij}\ \big|\ \mathbf{F}^{(1)},\ldots,\mathbf{F}^{(t)} \stackrel{\text{ind}}{\sim} \text{Bernoulli}\big(\mathbf{Q}^{(t)}_{ij}\big)\\
\mathbf{Q}^{(t)}_{ij}=\lambda\,\mathbf{Q}^{(t-1)}_{ij}+(1-\lambda)\,\mathbf{P}^{(t)}_{ij}
\qquad\text{and}\qquad
\mathbf{P}^{(t)}_{ij}=\begin{cases}\dfrac{1}{1+\exp\big(-\boldsymbol{\beta}'\mathbf{F}^{(t)}_{ij}\big)}&\text{if } \mathbf{B}^{(t)}_{ij}=1\\ 0&\text{otherwise}\end{cases}
\end{gathered}   (1)

where $\lambda\in[0,1]$ controls the trade-off between the dependence on the past and on the current feature vector for a link $(i,j)$. The model expresses our belief that, given the features and link history, a connection at time $t$ is independent of (though not identically distributed with) all other links, and that its probability evolves as an auto-regressive model that weights the probability of connection at time $t-1$ and a logit-transformed probability induced by the features. We refer to the proposed model as Bernoulli Auto-Regressive (BAR). Note that the recursion of $\mathbf{Q}^{(t)}_{ij}$ in Model (1) allows us to express the probability of connection at a given time as a weighted sum of past probabilities. This weighted sum, along with a non-linear dependence on the features, provides flexibility in modeling arbitrary sequences with observation level features, and is more general than modeling the probabilities as a logit transform of lagged feature vectors. In the choice of $\mathbf{P}^{(t)}_{ij}$ we can also use different link functions to extend our model to sequences of discrete outcomes with $K>2$ categories (e.g., by using the softmax function), provided that $\mathbf{P}^{(t)}_{ij}$ remains valued between 0 and 1 so that $\mathbf{Q}^{(t)}_{ij}$ is a valid Bernoulli parameter.
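To make the recursion concrete, the following minimal Python sketch evolves $\mathbf{Q}^{(t)}_{ij}$ for a single pair $(i,j)$ according to Model (1); the function and variable names are illustrative placeholders rather than part of any released implementation.

    import numpy as np

    def bar_probabilities(features, aux_edge, beta, lam, q0):
        """Evolve Q^{(t)}_{ij} for one (i, j) pair following Model (1).

        features : (T, d) array of edge features F^{(t)}_{ij}
        aux_edge : (T,) binary array, 1 if B^{(t)}_{ij} = 1
        beta     : (d,) coefficient vector
        lam      : autoregressive weight lambda in [0, 1]
        q0       : initial probability Q^{(0)}_{ij}
        """
        q, qs = q0, []
        for f_t, b_t in zip(features, aux_edge):
            # P^{(t)}_{ij} is a logistic function of the features when the
            # auxiliary edge is present, and 0 otherwise.
            p_t = 1.0 / (1.0 + np.exp(-beta @ f_t)) if b_t else 0.0
            q = lam * q + (1.0 - lam) * p_t          # BAR recursion
            qs.append(q)
        return np.array(qs)

    # Example: d = 3 features over T = 5 periods.
    rng = np.random.default_rng(0)
    probs = bar_probabilities(rng.poisson(2.0, size=(5, 3)),
                              np.ones(5), rng.uniform(size=3), lam=0.5, q0=0.2)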

For certain applications BAR may be under-parametrized. We can extend the model by using different coefficient vectors in the link probability for different sets of connections; these sets can be determined by clustering the features as a pre-processing step. More precisely, for $\mathbf{P}^{(t)}_{ij}$ we can have

\mathbf{P}^{(t)}_{ij} = \begin{cases}\dfrac{1}{1+\exp\left(-\boldsymbol{\beta}'_{k}\mathbf{F}^{(t)}_{ij}\right)}&\text{if } \mathbf{B}^{(t)}_{ij}=1 \text{ and } C^{(t)}_{ij}=k\\ 0&\text{otherwise}\end{cases}   (4)

where $C^{(t)}_{ij}$ is the cluster membership of link $(i,j)$ at time $t$. For the rest of this work we assume a single cluster, i.e., a common coefficient vector for all edges.

The decaying contribution of historical features is more obvious once we express the recursion of $\mathbf{Q}^{(t)}_{ij}$ in Model (1) equivalently as

\mathbf{Q}^{(t)}_{ij} = \lambda^{t}\,\mathbf{Q}^{(0)}_{ij} + \sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\,\mathbf{P}^{(s)}_{ij}.   (5)

The more time has elapsed since time $s$, the smaller the weight $\lambda^{t-s}(1-\lambda)$ applied to the probability $\mathbf{P}^{(s)}_{ij}$ and its underlying features from time $s$. A diminishing contribution from features over elapsed time is a valid assumption in many applications (e.g., attribution modeling). The choice of $\lambda$ governs the decay rate: a larger $\lambda$ makes $\mathbf{Q}^{(t)}_{ij}$ more dependent on historical probabilities, whereas a smaller $\lambda$ magnifies the dependence on current features. To illustrate the relation between $\mathbf{Q}^{(t)}_{ij}$ and $\lambda$, we consider a simplified scenario: we let $\mathbf{Q}^{(0)}_{ij}$ be 0.2 and $\mathbf{P}^{(t)}_{ij}$ be 0.8 for all $t\in\{1,\ldots,15\}$. In Figure 1 we show how $\mathbf{Q}^{(t)}_{ij}$ evolves with $t$ at various $\lambda$ values. At the extremes, $\mathbf{Q}^{(t)}_{ij}$ takes the constant value $\mathbf{Q}^{(0)}_{ij}$ (when $\lambda=1$) or $\mathbf{P}^{(t)}_{ij}$ (when $\lambda=0$). For $0<\lambda<1$, the larger $\lambda$ is, the more slowly the contribution of historical features to $\mathbf{Q}^{(t)}_{ij}$ decays and the more slowly $\mathbf{Q}^{(t)}_{ij}$ moves from its initial value (0.2) towards the probability derived from current features (0.8).
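The closed form (5) and the behavior shown in Figure 1 are easy to verify numerically; the short sketch below reproduces the setting above ($\mathbf{Q}^{(0)}_{ij}=0.2$, $\mathbf{P}^{(t)}_{ij}=0.8$, $t=1,\ldots,15$) and checks that the closed form matches the recursive update in Model (1). The variable names are ours.

    import numpy as np

    q0, p, T = 0.2, 0.8, 15
    for lam in (0.0, 0.3, 0.6, 0.9, 1.0):
        # Closed form (5); since P is constant the sum is a geometric series.
        t = np.arange(1, T + 1)
        q_closed = lam ** t * q0 + (1 - lam ** t) * p
        # Recursive form from Model (1).
        q_rec, q = [], q0
        for _ in range(T):
            q = lam * q + (1 - lam) * p
            q_rec.append(q)
        assert np.allclose(q_closed, q_rec)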

Figure 1: Evolution of $\mathbf{Q}^{(t)}_{ij}$ over $t$ at various $\lambda$, with $\mathbf{Q}^{(0)}_{ij}=0.2$ and $\mathbf{P}^{(t)}_{ij}=0.8$.

3.1 Estimation

Model estimation is performed by maximizing the log-likelihood function of Model (1), since for each $(i,j)$ we can obtain a closed form expression for the probability of connection $\mathbf{Q}^{(t)}_{ij}$ at time $t$ using the recursive formula (5). However, as mentioned before, the main network can be very sparse, so we cannot hope to learn much from a small number of observations. To cast our network to a wider set of connections we add a regularization term to the log-likelihood and propose to minimize the following problem

\underset{\boldsymbol{\beta}\in\Re^{d}}{\text{minimize}}\quad -\text{Log-Likelihood} + \alpha\sum_{t=1}^{T}\left\|\left(\mathbf{Q}^{(t)}-\mathbf{B}^{(t)}\right)\mathbf{\Phi}^{(t)}\right\|^{2}_{F}   (6)

where $\mathbf{\Phi}^{(t)} = \mathbf{F}^{(t)}\otimes\mathbf{1}_{N}\in\Re^{N\times d}$ are features aggregated at the sender level, i.e., $\mathbf{\Phi}^{(t)}_{i\ell} = \sum_{\{j:\mathbf{B}^{(t)}_{ij}=1\}}\mathbf{F}^{(t)}_{ij\ell}$ encodes the $\ell$th feature aggregated over the recipients of sender $i$.
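For concreteness, the sender-level aggregation $\mathbf{\Phi}^{(t)}$ can be computed directly from the feature tensor and the auxiliary adjacency matrix; a minimal sketch (with our own variable names) is given below.

    import numpy as np

    def sender_aggregate(F_t, B_t):
        """Phi^{(t)}_{i,l} = sum over j with B^{(t)}_{ij} = 1 of F^{(t)}_{ijl}.

        F_t : (N, N, d) feature tensor for time t
        B_t : (N, N) binary auxiliary adjacency matrix for time t
        """
        return np.einsum('ij,ijl->il', B_t, F_t)

    N, d = 4, 3
    rng = np.random.default_rng(1)
    B_t = (rng.random((N, N)) < 0.5).astype(float)
    Phi_t = sender_aggregate(rng.poisson(2.0, size=(N, N, d)), B_t)   # shape (N, d)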

The regularization term provides a way to minimize the deviation of first order statistics (of the features) between the network probabilities of interest and the observed auxiliary network. In other words, we want the network-aggregated features to be similar for the main and auxiliary sequences. The regularization and the probabilities $\mathbf{P}^{(t)}_{ij}$ are precisely what allow us to learn from auxiliary data, and Problem (6) represents a trade-off between the tasks of link prediction and link discovery. If we have ample observed data in the main network sequence and the interest is in predicting the prevalence of existing connections in the future, we should give more weight to the log-likelihood term (choose a small $\alpha$); if instead our goal is to discover newer connections, we should focus on minimizing the regularizer (choose a larger $\alpha$). If there is only one network sequence with feature information, we simply take $\alpha=0$ to accommodate the missing $\mathbf{B}^{(t)}$ and make $\mathbf{P}^{(t)}$ depend on $\mathbf{A}^{(t)}$ instead (see Section 5.2 for an example). These adjustments make our model generalizable to a single network sequence with relevant features.

We propose a stochastic gradient descent (SGD) algorithm to solve Problem (6). Our SGD algorithm exploits data sparsity in the main and auxiliary networks and is scalable to networks with millions of nodes. We refer readers to Appendix A for details of the algorithm.

4 Simulation Study

We simulate data from the generative model (1) and perform several experiments to gain insight into its behavior and its performance relative to competing models. Moreover, since the network sequences and edge features are highly likely to be statistically dependent, we employ settings that are indicative of real-world applications. To generate the first network in the auxiliary sequence we sample Erdős-Rényi graphs with different combinations of $N$ and edge generation probabilities $p$. For subsequent time steps ($T=15$) we evolve the edges as

\mathbf{B}^{(t)}_{ij} = \begin{cases}\mathbf{B}^{(t-1)}_{ij}+1 & \text{w.p. } p_{\text{add}} \text{ if } \mathbf{B}^{(t-1)}_{ij}=0\\ \mathbf{B}^{(t-1)}_{ij}-1 & \text{w.p. } p_{\text{del}} \text{ if } \mathbf{B}^{(t-1)}_{ij}=1.\end{cases}   (9)

For every edge $\mathbf{B}^{(t)}_{ij}$ we sample the features $\mathbf{F}^{(t)}_{ij}\in\Re^{10}$ such that each element is $\text{Poisson}(\boldsymbol{\mu}_{t})$ distributed, where $\boldsymbol{\mu}_{t} = \boldsymbol{\mu}_{t-1} + \boldsymbol{\epsilon}_{t}$ with $\boldsymbol{\epsilon}_{t}\sim\mathcal{N}_{10}(\mathbf{0}, 0.05\times\mathbf{I}_{10})$ is a simple random walk. The coefficient vector $\boldsymbol{\beta}\sim\text{Unif}[0,1]^{10}$ is normalized to have unit Euclidean norm. Finally, the main network sequence $\mathbf{A}^{(t)}_{ij}$ is generated by Bernoulli trials with probability $\mathbf{Q}^{(t)}_{ij}$ computed using the recursion in (1), where $\mathbf{Q}^{(0)}_{ij}$ is initialized as the maximum likelihood estimate of the edge probability.
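A compact sketch of this generative procedure, for a small network and with our own variable names (and with the Poisson means clipped at zero, a detail the description above leaves implicit), could look as follows.

    import numpy as np

    rng = np.random.default_rng(0)
    N, d, T, lam = 200, 10, 15, 0.5
    p, p_add, p_del = 5e-3, 5e-6, 5e-5                 # the "Standard" setting of Table 1

    beta = rng.uniform(size=d)
    beta /= np.linalg.norm(beta)                       # unit Euclidean norm

    B = (rng.random((N, N)) < p).astype(int)           # Erdos-Renyi starting graph
    mu = rng.uniform(1.0, 3.0, size=d)                 # initial Poisson means (assumed)
    Q = np.full((N, N), B.mean())                      # Q^{(0)}: MLE of the edge probability
    A_seq = []
    for t in range(T):
        # Evolve the auxiliary network per (9).
        add = (B == 0) & (rng.random((N, N)) < p_add)
        dele = (B == 1) & (rng.random((N, N)) < p_del)
        B = B + add.astype(int) - dele.astype(int)
        # Random-walk Poisson means (clipped at zero) and edge features.
        mu = np.maximum(mu + rng.normal(0.0, np.sqrt(0.05), size=d), 0.0)
        F = rng.poisson(mu, size=(N, N, d))
        # BAR recursion (1) and Bernoulli draw of the main network.
        P = B / (1.0 + np.exp(-F @ beta))
        Q = lam * Q + (1 - lam) * P
        A_seq.append((rng.random((N, N)) < Q).astype(int))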

We compare the BAR model with two competing models. The first is graph feature tracking (GFT) from Richard et al., (2010), which incorporates a feature prediction and regularization procedure for estimating $\mathbf{A}^{(T+1)}$. Specifically, Richard et al., (2010) define an $n\times k$ matrix of features $\mathbf{\Omega}$ that can capture the dynamics of $\mathbf{A}^{(t)}$, and project $\mathbf{A}^{(t)}$ with a linear feature map $\mathbf{G}^{(t)} = \mathbf{A}^{(t)}\mathbf{\Omega}$. With the historical feature maps $\{\mathbf{G}^{(1)},\ldots,\mathbf{G}^{(T)}\}$, Richard et al., (2010) propose using ridge regression to estimate $\hat{\mathbf{G}}$ as a substitute for $\mathbf{A}^{(T+1)}\mathbf{\Omega}$. In the directed graph setting, we let the features be $\mathbf{\Omega} = \mathbf{V}_{(:,1:k)}\mathbf{\Sigma}^{-1}_{(1:k,1:k)}$, where $\mathbf{V}_{(:,1:k)}$ and $\sigma_{1:k}$ are the $k$ leading singular vectors and singular values from the SVD $\mathbf{A}^{(T)} = \mathbf{U}\mathbf{\Sigma}\mathbf{V}'$. We solve Problem (10) to obtain the GFT estimator of $\mathbf{A}^{(T+1)}$:

\min_{\mathbf{S}\in\Re^{n\times n}} \frac{1}{2}\left\|\mathbf{S}-\mathbf{A}^{(T)}\right\|_{F}^{2} + \frac{1}{2}\nu\left\|\mathbf{S}\mathbf{\Omega}-\hat{\mathbf{G}}\right\|_{F}^{2} + \tau\left\|\mathbf{S}\right\|_{*}   (10)

where $\|\mathbf{S}\|_{*}$ is the nuclear norm, which induces low rankness in $\mathbf{S}$. Since the GFT estimator is not guaranteed to have its values between 0 and 1, we threshold any value of $\mathbf{S}$ outside $[0,1]$ to the closer boundary. In addition to GFT, we compare the BAR model to a logistic model whose features are the averaged historical features up to the current time:

\mathbf{A}^{(t)}_{ij}\ \big|\ \mathbf{F}^{(1)},\ldots,\mathbf{F}^{(t)} \stackrel{\text{ind}}{\sim} \text{Bernoulli}\left(\frac{1}{1+\exp\left(-\boldsymbol{\beta}'\mathbf{F}^{\text{avg}(t)}_{ij}\right)}\right)   (11)

where the features averaged across historical time, $\mathbf{F}^{\text{avg}(t)}$, are defined as follows:

\mathbf{F}^{\text{avg}(1)}_{ij} = \mathbf{F}^{(1)}_{ij} \quad\text{and for } t\geq 2,\quad
\mathbf{F}^{\text{avg}(t)}_{ij} = \begin{cases}\mathbf{F}^{\text{avg}(t-1)}_{ij}\cdot(t-1)/t & \text{if } \mathbf{F}^{(t)}_{ij} \text{ does not exist}\\ \left(\mathbf{F}^{\text{avg}(t-1)}_{ij}\cdot(t-1)+\mathbf{F}^{(t)}_{ij}\right)/t & \text{if } \mathbf{F}^{(t)}_{ij} \text{ exists}.\end{cases}   (12)
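In code, this running average is a one-line update per time step; a minimal sketch with our own variable names is shown below.

    def update_avg_features(f_avg_prev, f_t, t):
        """Running average (12) for one edge at time t >= 2.

        f_avg_prev : averaged features up to time t-1 (numpy array)
        f_t        : current features, or None if the edge is absent at time t
        """
        if f_t is None:                       # feature vector does not exist at time t
            return f_avg_prev * (t - 1) / t
        return (f_avg_prev * (t - 1) + f_t) / t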

All three models under comparison have access to historical data, though they leverage it differently.
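For completeness, we note that Problem (10) can be solved with, for example, proximal gradient descent, where the proximal operator of the nuclear norm is singular value soft-thresholding; the sketch below is our own illustrative implementation choice, not necessarily the solver used by Richard et al., (2010).

    import numpy as np

    def svt(M, thresh):
        """Singular value soft-thresholding: prox of thresh * nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - thresh, 0.0)) @ Vt

    def gft_estimate(A_T, Omega, G_hat, nu, tau, step=1e-3, n_iter=500):
        """Proximal gradient descent on Problem (10)."""
        S = A_T.astype(float).copy()
        for _ in range(n_iter):
            grad = (S - A_T) + nu * (S @ Omega - G_hat) @ Omega.T   # gradient of the smooth part
            S = svt(S - step * grad, step * tau)                    # proximal step for tau * ||S||_*
        return np.clip(S, 0.0, 1.0)                                 # threshold values to [0, 1]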

Table 1: For each simulation scenario (Task, Parameter), Node Degree (No. Links/No. Nodes) for the main network in the test period and AUC ROC for each model (averaged over 10 repetitions).

Task | Parameter | Node Degree | BAR | GFT | Logistic
Standard | $N=n=10\text{k}$, $p=5\times 10^{-4}$, $p_{\text{add}}=10^{-3}p$, $p_{\text{del}}=10^{-2}p$ | 1.8 | 0.972 | 0.936 | 0.505
Standard | $N=n=10\text{k}$, $p=5\times 10^{-3}$, $p_{\text{add}}=10^{-3}p$, $p_{\text{del}}=10^{-2}p$ | 17.6 | 0.971 | 0.930 | 0.506
Varying Network Dependence | $N=n=10\text{k}$, $p=5\times 10^{-3}$, $p_{\text{add}}=10^{-3}p$, $p_{\text{del}}=0.0005$ | 17.6 | 0.970 | 0.929 | 0.508
Varying Network Dependence | $N=n=10\text{k}$, $p=5\times 10^{-3}$, $p_{\text{add}}=10^{-3}p$, $p_{\text{del}}=0.005$ | 17.1 | 0.967 | 0.926 | 0.527
Varying Network Dependence | $N=n=10\text{k}$, $p=5\times 10^{-3}$, $p_{\text{add}}=10^{-3}p$, $p_{\text{del}}=0.05$ | 13.3 | 0.941 | 0.893 | 0.648
Varying Network Dependence | $N=n=10\text{k}$, $p=5\times 10^{-3}$, $p_{\text{add}}=10^{-3}p$, $p_{\text{del}}=0.2$ | 6.7 | 0.901 | 0.799 | 0.725
Varying Network Dependence | $N=n=10\text{k}$, $p=5\times 10^{-3}$, $p_{\text{add}}=10^{-3}p$, $p_{\text{del}}=0.5$ | 3.0 | 0.855 | 0.700 | 0.670

The comparisons are made over two tasks: a standard setting where networks of different densities are considered, and a setting with varying network dependence in which the variation between adjacent networks is gradually increased. For each scenario, we fix $N$ and $n$ to 10k and $p_{\text{add}}$ to $10^{-3}p$, but vary $p$ and $p_{\text{del}}$ over a range of values. We include all simulation settings in Table 1. For the task with varying network dependence, we gradually increase $p_{\text{del}}$ while fixing all other parameters. A larger $p_{\text{del}}$ increases the probability of deleting an existing edge, injecting more variability into the networks over time. By varying these parameters we highlight the advantages of our approach over a more traditional matrix completion framework (e.g., GFT from Richard et al., (2010)) and also gain insight into the workings of our method. Under each parameter setting, we generate 10 data samples and fit each model to the samples. We finely tune each model with training data from the first 14 periods and test on the last period. We report the AUC ROC averaged over the 10 repetitions for each model in Table 1. On all tasks BAR significantly outperforms GFT and the logistic model by achieving higher AUC ROC. As the network becomes more varied and more sparse with increasing $p_{\text{del}}$, the challenge increases for all models; BAR still consistently does better than the other two methods in this scenario.

5 Experiments

We evaluate the performance of the BAR model in two real experiments. The first experiment uses product B as an auxiliary network to predict and discover connections on product A (the main network). The superior performance of the BAR model demonstrates the connections between the two underlying products; moreover, the BAR model can fully leverage such connections to infer links on the product of interest (A). Our second experiment focuses on sentiment classification with a Reddit hyperlink network. We construct temporal network sequences for hyperlinks and use the BAR model (along with the baselines) to classify sentiment for Reddit posts. We show that the BAR model performs at least as well as the logistic baselines even when minimal time dependence is present in the network sequence.

5.1 Link Prediction and Discovery with Real Products

Products from large corporations often provide a connected ecosystem for users, where the modality of connection is defined by a specific product or feature. For example, there could be a communication/messaging network and a social network. The different networks within such companies can have varying levels of connectivity; in our case product B's network is much bigger and denser than A's. Such an application is perfectly aligned with the framework we present here.

In the experiment we let product A's graph correspond to the main network and B's to the auxiliary network. The underlying intuition is that the network defining activity on product B can be indicative of users who might be interested in the feature provided by A. To prepare the data, we extract directed links among users from A and B at the weekly level. The network data cover a period of 32 weeks, where the last week is reserved as a test set. We down-sample B's network for computational reasons and use 15 weekly edge-level features corresponding only to activity. Note that we are unable to detail the exact data and size descriptions due to privacy concerns. Finally, we tune the BAR model over a grid of $\alpha$ values between 0 and 1. For comparison, we use the same logistic model described in the simulation study in Section 4 that accounts for time dependence through historical features. Since GFT does not scale to large networks, we are unable to include it in the comparison. We evaluate model performance on the hold-out test set.

Figure 2: Decomposition of the test set. (Left) The set of 1s (i.e., links) from the test period is partitioned into two subsets, depending on whether the link existed in the train period ($1^{\text{existed}}$) or not ($1^{\text{new}}$). (Right) The set of 0s (i.e., non-link pairs) from the test period is partitioned into four subsets by user membership: between main network users ($0^{\text{main2main}}$), between auxiliary network users ($0^{\text{aux2aux}}$), and from main network users to auxiliary ones and vice versa ($0^{\text{main2aux}}$ and $0^{\text{aux2main}}$).

We have two tasks at hand: link prediction, which aims at accurately predicting recurrent links, and link discovery, which targets new links that never existed before. We split the test set to properly evaluate performance on the two tasks. The test set is made up of A's links from the last week (i.e., the set of ones) and pairs of users from A or B that did not connect on A in that week (i.e., the set of zeros). The test set is highly imbalanced: the set of zeros is larger than the set of ones by orders of magnitude. From a practical standpoint, a predicted link on A is more likely to hold if one of the two users already uses A; for link discovery, such an edge prediction can be directly translated into a recommendation setting. Hence, restricting to zero entries with exactly one user being an existing A user helps improve prediction accuracy for any model and alleviates the imbalance in the test set. As shown in the left panel of Figure 2, we partition the set of ones into links that existed in training (indexed as $1^{\text{existed}}$) and those that never existed before (indexed as $1^{\text{new}}$). The set $1^{\text{existed}}$ is suitable for gauging model performance on link prediction, whereas the set $1^{\text{new}}$ allows us to assess the link discovery task. In the right panel of Figure 2, we partition the set of zeros into four segments according to the memberships of the sender and receiver: $0^{\text{main2main}}$, $0^{\text{aux2aux}}$, $0^{\text{main2aux}}$ and $0^{\text{aux2main}}$. We refer readers to Figure 2 for details of the partition.

Table 2: Performance on link prediction and link discovery on A's network over the test period. AUC ROC is reported for each model.

Task | Set of Ones | Set of Zeros | BAR | Logistic
Link Prediction | $1^{\text{existed}}$ | $0^{\text{main2aux}}\cup 0^{\text{aux2main}}$ | 0.98 | 0.60
Link Discovery | $1^{\text{new}}$ | $0^{\text{main2aux}}\cup 0^{\text{aux2main}}$ | 0.67 | 0.63

Figure 3: Recall among top $N$ predictions for (Left) link prediction and (Right) link discovery. The top $N$ predictions are selected among $0^{\text{main2aux}}\cup 0^{\text{aux2main}}$ and the corresponding set of ones by task ($1^{\text{existed}}$ for link prediction and $1^{\text{new}}$ for link discovery).

In general, link discovery is more challenging than link prediction, since performance on the link discovery task is difficult to measure in a real setting, specifically on offline data: the generative mechanism of new links is not known, and our model assumption bases them on B usage. A more reliable testing mechanism is perhaps a randomized control trial. The contrast in the difficulty of the two tasks is reflected in Table 2: the BAR model achieves almost perfect AUC ROC for link prediction, while the corresponding metric for link discovery is much lower. Moreover, on both tasks the BAR model outperforms the logistic baseline. We also evaluate model performance with recall, which is particularly suitable for evaluating link discovery. For each task, we compute recall among the top $N$ predictions drawn from the corresponding set of ones together with $0^{\text{main2aux}}\cup 0^{\text{aux2main}}$. As $N$ increases, we include more candidates, which improves recall for any model. In both panels of Figure 3, the BAR model achieves significantly better recall than the baseline at every $N$. However, link discovery is more challenging, which is reflected in the upper bound of the metric ($\sim$0.4) at large enough $N$ (see the right panel of Figure 3). For link prediction (see the left panel of Figure 3), the BAR model with $\alpha=0.8$ quickly converges to perfect recall. The very significant gap between the BAR curves and the logistic baseline curve in the left panel of Figure 3 points to the superiority of the BAR model.
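For reference, recall among the top $N$ predictions can be computed as in the following sketch, where the scores are assumed to be the predicted link probabilities and all names are ours.

    import numpy as np

    def recall_at_n(scores, labels, n):
        """Fraction of true links (labels == 1) recovered among the n highest-scored pairs."""
        scores, labels = np.asarray(scores), np.asarray(labels)
        top = np.argsort(-scores)[:n]          # indices of the n largest scores
        return labels[top].sum() / labels.sum()

    # Example: 2 of the 3 true links are ranked within the top 4 predictions.
    print(recall_at_n([0.9, 0.1, 0.8, 0.7, 0.2, 0.6], [1, 1, 0, 1, 0, 0], n=4))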

5.2 Sentiment Classification on Reddit Hyperlink Network

In the previous experiment we demonstrated a significant improvement of the BAR model over a more standard approach in predicting and discovering links on networks with millions of nodes. Here we show that the proposed model is easily generalizable to any dynamic binary classification task that spans over time. The Reddit hyperlink network (https://snap.stanford.edu/data/soc-RedditHyperlinks.html) Kumar et al., (2018) encodes the directed connections between subreddit communities on Reddit from January 2014 to April 2017. The subreddit-to-subreddit hyperlink network is extracted from Reddit posts containing hyperlinks from one subreddit to another. The hyperlinks are time-stamped, directed, and have attributes (a.k.a. features). Using crowd-sourcing and a text-based classifier, Kumar et al., (2018) assigned a binary label to each hyperlink, where label 1 indicates that the source expresses a positive sentiment towards the target and -1 indicates that the source sentiment towards the target is negative.

Figure 4: Variation of the number of nodes (Top Row) and the number of links (Bottom Row) in the Reddit hyperlink network over time at different aggregation levels: (Left) daily, (Middle) weekly and (Right) monthly.

Our goal for the experiment is to classify sentiment for the hyperlinks with the BAR model and competing ones. The BAR model is designed for discrete times, whereas the time stamps in the hyperlink network are continuous. To discretize the time stamps, we aggregate the hyperlinks at various granularities: daily, weekly and monthly. At each granularity we average the feature vectors corresponding to the same (source subreddit, target subreddit) pair within each time window. Among the 86 features (see Kumar et al., (2018) for details) we omit the 11 count features, because after averaging they are likely to be dominated by longer posts. For the sentiment labels within each time window, we take the majority vote as the label for the (source subreddit, target subreddit) pair. We experiment on the first 14 months of data (or, roughly equivalently, the first 60 weeks or 425 days), from January 2014 to February 2015. Figure 4 shows how the number of nodes and the number of links vary over time at different time granularities. At all three levels (daily, weekly and monthly) the numbers of nodes and links have an overall increasing trend over time. From the bottom row of Figure 4 we observe the high imbalance of the sentiment labels: there are many more positive hyperlinks than negative ones at any of the time granularities. As we value positive and negative hyperlinks equally, we use AUC ROC to evaluate model performance over the test period, which is reserved as the last two months (or, roughly equivalently, the last 8 weeks or 60 days of data).
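A sketch of this preprocessing with pandas might look as follows; the column names (SOURCE_SUBREDDIT, TARGET_SUBREDDIT, TIMESTAMP, LINK_SENTIMENT, PROPERTIES) are our reading of the published file layout, and the loading and aggregation details are illustrative assumptions rather than our exact pipeline.

    import pandas as pd

    # Assumed layout of the SNAP file: tab-separated, with a PROPERTIES column
    # holding the 86 comma-separated post features (see the dataset page).
    df = pd.read_csv('soc-redditHyperlinks-body.tsv', sep='\t')
    feats = df['PROPERTIES'].str.split(',', expand=True).astype(float)
    feats.columns = [f'f{i}' for i in range(feats.shape[1])]
    df = pd.concat([df, feats], axis=1)
    df['window'] = pd.to_datetime(df['TIMESTAMP']).dt.to_period('W')   # weekly aggregation

    grouped = df.groupby(['SOURCE_SUBREDDIT', 'TARGET_SUBREDDIT', 'window'])
    # Average the feature vectors and take the majority vote of the sentiment labels.
    agg_feats = grouped[feats.columns.tolist()].mean()
    agg_label = grouped['LINK_SENTIMENT'].agg(
        lambda s: 1 if (s == 1).sum() >= (s == -1).sum() else -1)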

Table 3: Performance of different models (AUC ROC) on Reddit hyperlink sentiment classification over the test period. Of the two network statistics, Node Degree (No. Links/No. Nodes) reflects the density of the network and % Recurrent Links (percentage of links that appear more than once over time) reflects the time dependency structure.

Aggregation Level | Node Degree | % Recurrent Links | $T_{\text{train}}$ | $T_{\text{test}}$ | BAR | Logistic (Raw) | Logistic
Daily | 5.29 | 21.60% | 365 | 60 | 0.761 | 0.756 | 0.744
Weekly | 4.96 | 20.99% | 52 | 8 | 0.760 | 0.755 | 0.743
Monthly | 4.34 | 19.29% | 12 | 2 | 0.756 | 0.748 | 0.745

We modify the BAR model to adapt to the single network scenario. Let $\mathbf{A}^{(t)}_{ij}\in\{-1,1\}$ represent the sentiment from the $i$th subreddit towards the $j$th subreddit in time window $t$. We take the edge-level features $\mathbf{F}^{(t)}_{ij}$ from this network itself and update the dependence of $\mathbf{P}^{(t)}_{ij}$ on $\mathbf{A}^{(t)}_{ij}$:

\mathbf{P}^{(t)}_{ij} = \begin{cases}\dfrac{1}{1+\exp\big(-\boldsymbol{\beta}'\mathbf{F}^{(t)}_{ij}\big)}&\text{if } \mathbf{A}^{(t)}_{ij}\in\{1,-1\}\\ 0&\text{otherwise}.\end{cases}   (13)

With a single network we turn off the link discovery option by setting $\alpha$ to zero in Problem (6). For comparison we consider two variations of the logistic model. The first variation uses averaged historical features to classify sentiment in the current time window, as in Section 4 and Section 5.1; we label it Logistic in Table 3. The second variation, which we label Logistic (Raw) in Table 3, simply uses the features from the current time window. Table 3 compares the performance on the corresponding test period across aggregation levels. As we aggregate over longer time windows (from daily to monthly), the network becomes sparser, with the node degree continuously decreasing. Comparing the two variations of the logistic model, it performs better without averaging historical features, which indicates that there is limited time dependence in the sentiment classification task. Indeed, our approach performs best with daily aggregation, where there is more time dependence, as evidenced by the highest percentage of recurrent links. At all aggregation levels, the BAR model outperforms the logistic ones, showing its strength on binary classification tasks even with limited time dependence. As mentioned before, the competing GFT approach does not scale to large datasets; moreover, the sentiment classification problem is not directly solvable with GFT's matrix completion framework in (10). Thus, we leave the GFT approach out of this comparison.

6 Conclusions

We proposed a generalized auto-regressive model for link prediction and discovery with network sequences. The proposed BAR model can exploit signals from an auxiliary network to infer connections on the main network. We also developed an optimization framework for estimating the model, and a stochastic gradient descent algorithm that is scalable to networks with millions of nodes. To demonstrate the superior performance of our model, we applied the BAR model along with baselines on both simulated and real data, and showed significant improvements of BAR over the baselines. Our experiment on product A's network shows practical use of the BAR model at scale, while the Reddit hyperlink experiment validates the generalizability of BAR. For a future line of research, an extension of the BAR model to continuous time would be useful for many real-world applications. We also plan to investigate the theoretical properties of our model.

Acknowledgements

The authors thank Sushama Murthy and Richard Johnston for supporting the work.

Appendix

A Algorithm for BAR Model Optimization

We derive the log-likelihood part and the regularizer part of Problem (6) as follows.

\text{Log-Likelihood} = \sum_{t=1}^{T}\left(\sum_{\{(i,j):\mathbf{A}^{(t)}_{ij}=1\}}\log\left(\mathbf{Q}^{(t)}_{ij}\right) + \sum_{\{(i,j):\mathbf{A}^{(t)}_{ij}=0\}}\log\left(1-\mathbf{Q}^{(t)}_{ij}\right)\right)   (14)
\text{Regularizer} = \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{\ell=1}^{d}\left(\sum_{j=1}^{N}\left(\mathbf{Q}^{(t)}_{ij}-\mathbf{B}^{(t)}_{ij}\right)\mathbf{\Phi}^{(t)}_{j\ell}\right)^{2}   (15)

For the log-likelihood part (14), we represent the summands using $f(\boldsymbol{\beta},t,i,j)$ and $g(\boldsymbol{\beta},t,i,j)$ defined as follows:

  • $f(\boldsymbol{\beta},t,i,j) := \log\left(\mathbf{Q}^{(t)}_{ij}\right) = \log\left(\lambda^{t}\mathbf{Q}^{(0)}_{ij} + \sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\mathbf{P}^{(s)}_{ij}\right)$ when $\mathbf{A}^{(t)}_{ij}=1$;

  • $g(\boldsymbol{\beta},t,i,j) := \log\left(1-\mathbf{Q}^{(t)}_{ij}\right) = \log\left(1 - \lambda^{t}\mathbf{Q}^{(0)}_{ij} - \sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\mathbf{P}^{(s)}_{ij}\right)$ when $\mathbf{A}^{(t)}_{ij}=0$.

The gradients of $f(\boldsymbol{\beta},t,i,j)$ and $g(\boldsymbol{\beta},t,i,j)$ can be derived by the chain rule:

  • $\nabla_{\boldsymbol{\beta}}f(\boldsymbol{\beta},t,i,j) = \dfrac{1}{\lambda^{t}\mathbf{Q}^{(0)}_{ij} + \sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\mathbf{P}^{(s)}_{ij}}\cdot\sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\nabla_{\boldsymbol{\beta}}\mathbf{P}^{(s)}_{ij}$;

  • $\nabla_{\boldsymbol{\beta}}g(\boldsymbol{\beta},t,i,j) = \dfrac{-1}{1-\lambda^{t}\mathbf{Q}^{(0)}_{ij} - \sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\mathbf{P}^{(s)}_{ij}}\cdot\sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\nabla_{\boldsymbol{\beta}}\mathbf{P}^{(s)}_{ij}$;

where $\nabla_{\boldsymbol{\beta}}\mathbf{P}^{(s)}_{ij} = \mathbf{P}^{(s)}_{ij}\left(1-\mathbf{P}^{(s)}_{ij}\right)\mathbf{F}^{(s)}_{ij}$ if $\mathbf{B}^{(s)}_{ij}=1$ and $\mathbf{0}$ otherwise. The gradient of the log-likelihood is

\nabla_{\boldsymbol{\beta}}\text{Log-Likelihood} = \sum_{t=1}^{T}\left(\sum_{\{(i,j):\mathbf{A}^{(t)}_{ij}=1\}}\nabla_{\boldsymbol{\beta}}f(\boldsymbol{\beta},t,i,j) + \sum_{\{(i,j):\mathbf{A}^{(t)}_{ij}=0\}}\nabla_{\boldsymbol{\beta}}g(\boldsymbol{\beta},t,i,j)\right).

For the regularizer part (15), we represent the summand using $h(\boldsymbol{\beta},t,i,\ell) = \left(\sum_{j=1}^{N}\left(\mathbf{Q}^{(t)}_{ij}-\mathbf{B}^{(t)}_{ij}\right)\mathbf{\Phi}^{(t)}_{j\ell}\right)^{2}$, of which the gradient can be derived by the chain rule as

\nabla_{\boldsymbol{\beta}}h(\boldsymbol{\beta},t,i,\ell) = 2\left(\sum_{j=1}^{N}\left(\mathbf{Q}^{(t)}_{ij}-\mathbf{B}^{(t)}_{ij}\right)\mathbf{\Phi}^{(t)}_{j\ell}\right)\left(\sum_{j=1}^{N}\mathbf{\Phi}^{(t)}_{j\ell}\sum_{s=1}^{t}\lambda^{t-s}(1-\lambda)\nabla_{\boldsymbol{\beta}}\mathbf{P}^{(s)}_{ij}\right).

The gradient of the regularizer is

\nabla_{\boldsymbol{\beta}}\text{Regularizer} = \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{\ell=1}^{d}\nabla_{\boldsymbol{\beta}}h(\boldsymbol{\beta},t,i,\ell).

A gradient descent update at the $k$th iteration with step size $\eta$ has the form

\boldsymbol{\beta}^{k}\leftarrow\boldsymbol{\beta}^{k-1} - \eta\left(-\nabla_{\boldsymbol{\beta}}\text{Log-Likelihood} + \alpha\cdot\nabla_{\boldsymbol{\beta}}\text{Regularizer}\right).

We propose a computationally feasible substitute for full gradient descent. At the $k$th iteration, we randomly sample a time stamp $t$, a sender node $i_{\text{lk}}$ for the log-likelihood gradient, and a sender node $i_{\text{reg}}$ for the regularizer gradient. The stochastic gradient descent (SGD) algorithm is detailed in Algorithm 1. To efficiently compute $\texttt{grad}_{\texttt{lk}}$ and $\texttt{grad}_{\texttt{reg}}$ in Lines 8-9 of the algorithm, we leverage the sparsity of the auxiliary sequence $\mathbf{B}^{(s)}$: for a given $i$ there are only a few $j$ with $\mathbf{B}^{(s)}_{ij}=1$. Hence, only a small number of non-zero $\nabla_{\boldsymbol{\beta}}\mathbf{P}^{(s)}_{ij}$ terms need to be accounted for in computing the gradients of $f(\cdot)$, $g(\cdot)$ and $h(\cdot)$.

Algorithm 1 Stochastic Gradient Descent Algorithm for Solving Problem (6)
Require: $\mathbf{A}^{(t)}, \mathbf{B}^{(t)}, \mathbf{F}^{(t)}, \mathbf{\Phi}^{(t)}, \lambda, \alpha$ and $\eta$
1: Initialize $\boldsymbol{\beta}^{0}$
2: $k\leftarrow 0$
3: while $\boldsymbol{\beta}^{k}$ has not converged do
4:     $k\leftarrow k+1$
5:     Sample $t$ from $\{1,\dots,T\}$
6:     Sample $i_{\text{lk}}$ from $\left\{i:\mathbf{A}^{(t)}_{ij}=1\text{ for some }j\right\}$
7:     Sample $i_{\text{reg}}$ from $\left\{i:\mathbf{B}^{(t)}_{ij}=1\text{ for some }j\right\}$
8:     $\texttt{grad}_{\texttt{lk}}\leftarrow\sum_{\{j:\mathbf{A}^{(t)}_{i_{\text{lk}}j}=1\}}\nabla_{\boldsymbol{\beta}}f\left(\boldsymbol{\beta}^{k-1},t,i_{\text{lk}},j\right)+\sum_{\{j:\mathbf{A}^{(t)}_{i_{\text{lk}}j}=0\}}\nabla_{\boldsymbol{\beta}}g\left(\boldsymbol{\beta}^{k-1},t,i_{\text{lk}},j\right)$
9:     $\texttt{grad}_{\texttt{reg}}\leftarrow\sum_{\ell=1}^{d}\nabla_{\boldsymbol{\beta}}h\left(\boldsymbol{\beta}^{k-1},t,i_{\text{reg}},\ell\right)$
10:    $\boldsymbol{\beta}^{k}\leftarrow\boldsymbol{\beta}^{k-1}-\eta\left(-\texttt{grad}_{\texttt{lk}}+\alpha\cdot\texttt{grad}_{\texttt{reg}}\right)$
11: end while
12: return $\boldsymbol{\beta}^{k}$
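As an illustration, a minimal Python sketch of the log-likelihood part of one SGD iteration (Line 8 of Algorithm 1, followed by the update in Line 10 with the regularizer gradient omitted for brevity) is given below; all names, the dense loop over receivers, and the numerical clipping are our own simplifications.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def loglik_gradient(beta, t, i, A, B, F, Q0, lam):
        """grad_lk of Algorithm 1 for a sampled sender i at 0-based time index t.

        A, B : length-T lists of (N, N) adjacency matrices (index s = model time s+1)
        F    : length-T list of (N, N, d) feature tensors
        Q0   : (N, N) array of initial probabilities Q^{(0)}
        """
        d = F[0].shape[-1]
        grad = np.zeros(d)
        for j in range(A[t].shape[1]):
            # Q^{(t)}_{ij} and its gradient via the closed form (5).
            q, dq = lam ** (t + 1) * Q0[i, j], np.zeros(d)
            for s in range(t + 1):
                if B[s][i, j] == 1:                    # only edges present in B contribute
                    p = sigmoid(F[s][i, j] @ beta)
                    w = lam ** (t - s) * (1 - lam)
                    q += w * p
                    dq += w * p * (1 - p) * F[s][i, j]
            q = np.clip(q, 1e-12, 1 - 1e-12)           # numerical safety
            grad += dq / q if A[t][i, j] == 1 else -dq / (1 - q)
        return grad

    # One SGD step with the regularizer omitted (alpha = 0):
    # beta = beta - eta * (-loglik_gradient(beta, t, i_lk, A, B, F, Q0, lam))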

References

  • Aiello et al., (2012) Aiello, L. M., Barrat, A., Schifanella, R., Cattuto, C., Markines, B., and Menczer, F. (2012). Friendship prediction and homophily in social media. ACM Transactions on the Web (TWEB), 6(2):1–33.
  • Huang and Lin, (2009) Huang, Z. and Lin, D. K. (2009). The time-series link prediction problem with applications in communication surveillance. INFORMS Journal on Computing, 21(2):286–303.
  • Koren, (2008) Koren, Y. (2008). Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 426–434.
  • Kumar et al., (2018) Kumar, S., Hamilton, W. L., Leskovec, J., and Jurafsky, D. (2018). Community interaction and conflict on the web. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 933–943. International World Wide Web Conferences Steering Committee.
  • Liben-Nowell and Kleinberg, (2007) Liben-Nowell, D. and Kleinberg, J. (2007). The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019–1031.
  • Myers and Leskovec, (2010) Myers, S. and Leskovec, J. (2010). On the convexity of latent social network inference. In Advances in neural information processing systems, pages 1741–1749.
  • Richard et al., (2010) Richard, E., Baskiotis, N., Evgeniou, T., and Vayatis, N. (2010). Link discovery using graph feature tracking. In Lafferty, J. D., Williams, C. K. I., Shawe-Taylor, J., Zemel, R. S., and Culotta, A., editors, Advances in Neural Information Processing Systems 23, pages 1966–1974. Curran Associates, Inc.
  • Richard et al., (2014) Richard, E., Gaïffas, S., and Vayatis, N. (2014). Link prediction in graphs with autoregressive features. Journal of Machine Learning Research, 15(1):565–593.
  • Sarkar et al., (2012) Sarkar, P., Chakrabarti, D., and Jordan, M. (2012). Nonparametric link prediction in dynamic networks. arXiv preprint arXiv:1206.6394.
  • Sarkar et al., (2011) Sarkar, P., Chakrabarti, D., and Moore, A. W. (2011). Theoretical justification of popular link prediction heuristics. In Twenty-Second International Joint Conference on Artificial Intelligence.
  • Vu et al., (2011) Vu, D. Q., Hunter, D., Smyth, P., and Asuncion, A. U. (2011). Continuous-time regression models for longitudinal networks. In Advances in neural information processing systems, pages 2492–2500.
  • Wang et al., (2015) Wang, P., Xu, B., Wu, Y., and Zhou, X. (2015). Link prediction in social networks: the state-of-the-art. Science China Information Sciences, 58(1):1–38.
  • Wu et al., (2013) Wu, S., Sun, J., and Tang, J. (2013). Patent partner recommendation in enterprise social networks. In Proceedings of the sixth ACM international conference on Web search and data mining, pages 43–52.