Fast Decentralized Linear Functions Over
Edge Fluctuating Graphs
Abstract
Implementing linear transformations is a key task in the decentralized signal processing framework, which performs learning tasks on data sets distributed over multi-node networks. Such networks can be represented by graphs. Recently, several decentralized methods have been proposed to compute linear transformations by leveraging the notion of graph shift operator, which captures the local structure of the graph. However, existing approaches have drawbacks, such as considering only special instances of linear transformations, or restricting the family of transformations by assuming that a shift matrix is given such that a subset of its eigenvectors spans the subspace of interest. In contrast, this paper develops a framework for computing a wide class of linear transformations in a decentralized fashion by relying on the notion of graph shift operator. The main goal of the proposed method is to compute the desired linear transformation in a small number of iterations. To this end, a set of successive graph shift operators is employed, and a new optimization problem is proposed whose goal is to compute the desired transformation as fast as possible. In addition, the topology of such networks, especially wireless sensor networks, often changes randomly because of node failures or random links. This paper studies the effect of edge fluctuations on the performance of the proposed method. To mitigate the negative effect of edge fluctuations, an online kernel-based method is proposed that enables nodes to estimate the missing values from their locally available information. The proposed method can also be employed to sparsify the network graph or to reduce the number of local exchanges between nodes, which saves sensor power in wireless sensor networks.
Index Terms:
graph signal processing, decentralized linear transformations, graph filters.
I Introduction
Processing the data sets defined over non-regular domains is the main application of the graph signal processing (GSP) field. Those data sets are characterized by graphs, and traditional signal processing notions such as frequency analysis and sampling have been extended to process signals defined over graphs (graph signals), i.e. data associated with the nodes of the network [1, 2, 3]. Graph signals exist in different fields such as social networks and wireless sensor networks (WSN).
Recently, a number of decentralized methods have been proposed to compute some special instances of linear transformations, as a common application of decentralized signal processing. Projecting signals onto a low-dimensional subspace [4, 5] or interpolating bandlimited signals [6] are examples of those linear transformations. For instance, the authors in [7] propose a decentralized approach to implement the subspace projection task, which encompasses a number of learning tasks in WSN, such as estimation and denoising [8]. In the aforementioned approach [7], each node at every iteration linearly combines its iterate with the ones of its neighbors. Then, by optimizing a criterion that quantifies asymptotic convergence, the coefficients of those linear combinations are obtained. However, this algorithm can only accommodate a limited set of topologies [9], its convergence is asymptotic, and in general requires a relatively large number of local information exchanges. Those issues are addressed in [10, 11, 4] by proposing some decentralized GSP-based approaches for the average consensus task, which is a special case of subspace projection.
Moreover, some decentralized GSP-based approaches employ graph filters (GFs) to compute the linear transformations. GFs are expressed by polynomials of the so-called graph shift matrices, which capture the local structure of the graph [2]. Applying GFs gives rise to decentralized implementations, which means that the nodes of the network can implement the graph filter by exchanging only local information with their neighbors. Graph filters have been used for different applications such as graph signal analysis [12, 2], reconstruction [13, 14, 6, 15], and denoising [16, 17, 5, 18].
In [17], GFs are designed to compute pre-specified linear transformations. Nevertheless, the GF design in [17] applies only to rank-1 projections or to cases where the projection shares its eigenvectors with a given graph shift matrix, such as the Laplacian or adjacency matrix, which limits the possible linear functions that can be implemented. Moreover, the aforementioned method considers the design of GFs only for symmetric topologies. On the other hand, due to interference and background noise, WSNs are often characterized by asymmetric packet losses, leading to network topologies that can be viewed as asymmetric graphs [19, 20]. To approximate linear transformations, a decentralized approach has been proposed based on the so-called edge-variant (EV) GFs in [21]. This method uses a different graph shift matrix in each iteration of the graph filtering, which enables nodes to weigh the signals of their neighbors with different values. Furthermore, in [5] a decentralized subspace projection method via GFs for symmetric topologies has been proposed by optimizing a criterion that yields convergence to the subspace projection in a nearly minimal number of iterations.
In this paper, we propose an approach to compute linear transformations as fast as possible, i.e., in a small number of local exchanges. In our approach, a sequence of different shift operators is employed to compute the desired transformation matrix, and the difference between the product of the sequence of shift operators and the transformation matrix at each round is minimized. To decrease the required number of rounds, which leads to fast convergence, a weighted-sum method is proposed, where larger weights are assigned to the last rounds to enforce the errors of those rounds to be smaller.
In addition, edge fluctuation is a critical issue in decentralized GSP-based approaches, and it negatively affects their performance. For instance, in WSNs, a number of network edges are missed when edge fluctuations occur due to node failures. In this case, some nodes miss a portion of their neighbors’ information, which creates a deviation in the nodes’ information exchange procedure. The effect of graph perturbations on finite impulse response (FIR) graph filters has been investigated in [22, 23, 24, 25]. Those works analyze the effect of graph perturbation based on the random edge sampling (RES) graph model [23], where the activation probability of graph edges is assumed to be known.
Our contributions can be summarized as follows:
• In contrast with [26, 5], where only symmetric topologies have been considered, the proposed method designs shift operators to compute linear transformations over different topologies, including both symmetric and asymmetric graphs. Compared to [21], the proposed method needs a noticeably smaller number of local exchanges, leading to faster convergence, and computes the desired linear transformation exactly in scenarios where the method in [21] only approximates it.
• Furthermore, this work considers the effect of edge fluctuations on the problem of computing linear transformations, which is not considered by existing works [26, 5, 21]. The effect of edge fluctuations on the proposed approach is studied. Then, to deal with their negative effect, an efficient online method is proposed in which each node estimates the missing values (due to the fluctuation of the edges) from its locally available information by employing an online kernel-based learning approach. In the proposed method, each node estimates the missing values by applying only simple local calculations.
• Finally, the proposed estimator enables us to randomly remove a number of edges at each iteration to sparsify the graph, which saves power at the sensor nodes in WSNs and increases their lifetime, since they are usually powered by batteries. By using our method, the sensor nodes spend less power relaying their information to their neighbours, because the number of graph edges used in each iteration is decreased. Indeed, for each node in each iteration, we can randomly remove some edges, and then, by applying the proposed estimator, nodes can estimate online their neighbours’ information corresponding to the removed edges. This advantage can be further enhanced if the proposed estimators are trained during a certain number of iterations, after which the local exchanges between nodes stop and the nodes instead update their information based on estimates of their neighbours’ information. With this approach, node power is saved since the number of local exchanges is decreased.
Notation: Bold uppercase (lowercase) letters denote matrices (column vectors), while $(\cdot)^{\top}$ stands for matrix transposition. $[\mathbf{A}]_{i,j}$ and $[\mathbf{a}]_i$ denote the $(i,j)$-th entry of $\mathbf{A}$ and the $i$-th entry of $\mathbf{a}$, respectively. Also, $\mathbf{1}$ is the all-ones vector and $\mathbf{e}_i$ is the basis vector whose entries are all zero except the $i$-th one, which equals one. The indicator function and the Hadamard product are denoted by $\mathbb{1}\{\cdot\}$ and $\odot$, respectively. Moreover, $\prod$ and $\mathrm{vec}(\cdot)$ represent the product of a collection of terms and the vectorization operation. Finally, the trace and the Frobenius norm are denoted by $\mathrm{tr}(\cdot)$ and $\|\cdot\|_F$, respectively.
II Preliminaries
II-A System Model
Consider networked sensor nodes that are able to exchange messages with the neighbour nodes to which they are connected. The network is modeled as a directed connected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where the vertex (node) set $\mathcal{V}$ corresponds to the network nodes and $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ represents the set of edges. The $i$-th vertex is connected to the $j$-th vertex if there is a directed edge from $j$ to $i$, but this does not imply that the $j$-th vertex is connected to the $i$-th one unless $(i,j) \in \mathcal{E}$. The in-neighborhood of the $i$-th node is defined as the set of nodes connected to it, denoted $\mathcal{N}_i = \{ j : (j,i) \in \mathcal{E} \}$. In the GSP context, a vector $\mathbf{x}$ is referred to as a graph signal, where the $i$-th component of $\mathbf{x}$ represents the signal value at the $i$-th node of $\mathcal{G}$.
II-B Decentralized linear transformations via graph filters
In this section, we briefly review the implementation of linear transformations via GFs. Finite impulse response (FIR) graph filtering [2] involves two steps. In the first step, a sequence of graph signals $\{\mathbf{x}^{(l)}\}_{l=0}^{L}$ is obtained via $L$ rounds of local exchanges between nodes, where $\mathbf{x}^{(0)} = \mathbf{x}$. In particular, the graph signal at the $l$-th iteration, i.e., $\mathbf{x}^{(l)}$, is computed from the graph signal of the previous iteration, i.e., $\mathbf{x}^{(l-1)}$, such that $\mathbf{x}^{(l)} = \mathbf{S} \mathbf{x}^{(l-1)}$, where the entries of $\mathbf{S}$ are the coefficients used for aggregation of information between the nodes and $[\mathbf{S}]_{i,j} = 0$ if $(j,i) \notin \mathcal{E}$ and $i \neq j$.
In the GSP context, $\mathbf{S}$ is called the shift operator [2, 27]. The graph shift operator is a decentralized operator because each node updates its information only via the information of its neighbours. Examples of the graph shift operator are the adjacency and Laplacian matrices of $\mathcal{G}$ [17]. In the second step, the outputs of all iterations computed in the first step are linearly aggregated as follows
$\mathbf{y} = \sum_{l=0}^{L} c_l \, \mathbf{S}^{l} \mathbf{x}$   (1)
where $c_l$ are the filter coefficients and $L$ is the order of the filter [2]. The graph filter in (1) has been used in [17] to implement a pre-specified transformation matrix $\mathbf{T}$. However, the graph filter (1) in [17] is designed only for symmetric topologies, and only for rank-1 projections or for scenarios where $\mathbf{T}$ and the given shift matrix, such as the Laplacian or the adjacency matrix, are simultaneously diagonalizable, i.e., they share the same eigenvectors.
The concept of FIR graph filters is generalized in [21] by introducing the edge-variant (EV) graph filters where a different set of weights is used in each iteration. The edge weighting matrices share the same support with the adjacency matrix of the network graph. In [21], the edge-variant graph filter is used to approximate linear transformations.
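The two-step FIR filtering procedure reviewed above can be sketched numerically as follows; the 4-node directed ring and the filter coefficients are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Illustrative 4-node directed ring; its adjacency matrix serves as the shift.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
S = A

def fir_graph_filter(S, x, coeffs):
    """Order-L FIR graph filter output, y = sum_{l=0}^{L} c_l S^l x.

    Each multiplication by S is one round of local exchanges: a node
    combines only the values received over its incoming edges (and its
    own value, when the diagonal of S is nonzero)."""
    y = np.zeros_like(x)
    z = x.copy()              # z holds S^l x
    for c in coeffs:
        y += c * z
        z = S @ z             # next round of local exchanges
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
y = fir_graph_filter(S, x, coeffs=[0.5, 0.3, 0.2])
```

Note that the filter is computed with a single pass of `L` local exchanges: the powers of `S` are never formed explicitly.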
III Proposed method: Successive shift operators
In this section, by using the notion of graph shift operator, we propose a new approach to implement a given linear transformation $\mathbf{T}$ in a decentralized manner as fast as possible. To this end, a sequence of different shift operators $\mathbf{S}_1, \ldots, \mathbf{S}_K$ is applied so that $\mathbf{S}_K \mathbf{S}_{K-1} \cdots \mathbf{S}_1 = \mathbf{T}$, where, for $k = 1, \ldots, K$, we have $[\mathbf{S}_k]_{i,j} = 0$ if $(j,i) \notin \mathcal{E}$ and $i \neq j$.
As stated before, the proposed method leads to a decentralized approach. In fact, after the first round of information exchange among nodes, the nodes compute $\mathbf{S}_1 \mathbf{x}$. Then, at the second round, they compute $\mathbf{S}_2 \mathbf{S}_1 \mathbf{x}$. The procedure is repeated for $K$ iterations. By properly designing the shift operators, $\mathbf{T}\mathbf{x}$ is computed exactly or obtained with a small error.
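The round-by-round computation above can be sketched as follows; the 3-node chain graph and random shift entries are illustrative assumptions.

```python
import numpy as np

def successive_apply(shifts, x):
    """Return y^(K), where y^(k) = S_k y^(k-1) and y^(0) = x.

    Each pass of the loop is one round of local exchanges: every node
    combines its current value with those of its in-neighbors only."""
    y = x.copy()
    for S in shifts:
        y = S @ y
    return y

# Support mask: entry (i, j) of a shift may be nonzero only if the edge
# (j, i) exists or i == j (self-loop).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
mask = (A != 0) | np.eye(3, dtype=bool)
rng = np.random.default_rng(0)
S1, S2 = (rng.standard_normal((3, 3)) * mask for _ in range(2))
x = np.array([1.0, -2.0, 0.5])
y = successive_apply([S1, S2], x)
```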
To compute $\mathbf{T}\mathbf{x}$ as efficiently as possible, i.e., with the smallest number of iterations, we design the sequence of feasible graph shift matrices that minimizes the number of local exchanges, i.e., by solving the following problem:
$\min_{K, \{\mathbf{S}_k\}_{k=1}^{K}} \; K$   (2a)
s. t. $\; [\mathbf{S}_k]_{i,j} = 0 \;\; \text{if} \; (j,i) \notin \mathcal{E} \;\text{and}\; i \neq j, \quad k = 1, \ldots, K,$   (2b)
$\; \mathbf{S}_K \mathbf{S}_{K-1} \cdots \mathbf{S}_1 = \mathbf{T}.$   (2c)
where constraint (2b) is added to the problem because the shift operators must satisfy the topological constraints of the graph connectivity.
Optimization problem (2) is intractable. To address this issue, an upper limit on the number of iterations $K$ is fixed. Then, to design the shift operators that compute $\mathbf{T}$ via $\mathbf{S}_K \cdots \mathbf{S}_1$ as fast as possible, the error at each round, i.e., the error between $\mathbf{S}_k \cdots \mathbf{S}_1$ and $\mathbf{T}$, can be minimized, which is attained by solving the following optimization problem:
$\min_{\{\mathbf{S}_k\}_{k=1}^{K}} \; \sum_{k=1}^{K} \left\| \mathbf{S}_k \mathbf{S}_{k-1} \cdots \mathbf{S}_1 - \mathbf{T} \right\|_F^2$   (3a)
s. t. $\; [\mathbf{S}_k]_{i,j} = 0 \;\; \text{if} \; (j,i) \notin \mathcal{E} \;\text{and}\; i \neq j, \quad k = 1, \ldots, K.$   (3b)
In optimization problem (3), we put the same emphasis on all rounds, which means that all of them are treated equally. However, if we put more emphasis on the later rounds, we expect to obtain a faster approach. In this way, we accept larger errors in the earlier rounds to obtain smaller errors in the later ones, which can be implemented by assigning different weights to the terms in cost function (3a). To this end, larger weights are assigned to the later rounds. Thus, we have:
$\min_{\{\mathbf{S}_k\}_{k=1}^{K}} \; \sum_{k=1}^{K} w_k \left\| \mathbf{S}_k \mathbf{S}_{k-1} \cdots \mathbf{S}_1 - \mathbf{T} \right\|_F^2$   (4a)
s. t. $\; [\mathbf{S}_k]_{i,j} = 0 \;\; \text{if} \; (j,i) \notin \mathcal{E} \;\text{and}\; i \neq j, \quad k = 1, \ldots, K.$   (4b)
where $\mathbf{w} = [w_1, \ldots, w_K]^{\top}$ is the weight vector, whose entries are non-negative and increasing, i.e., $0 \leq w_1 \leq w_2 \leq \cdots \leq w_K$.
Optimization problem (4) is non-convex with respect to all its variables; however, cost function (4a) is strongly convex with respect to each shift matrix separately, i.e., when the other blocks of variables are fixed. Therefore, we can apply the block coordinate descent (BCD) algorithm to solve optimization problem (4), whose convergence is guaranteed for this problem [28]. In this approach, all blocks of variables are fixed except one, and the optimization problem is solved with respect to the free block. This procedure continues until each of the shift matrices has been treated as the block variable of the optimization problem. The BCD algorithm is repeated until a stopping criterion is satisfied.
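To make the block updates concrete, the following NumPy sketch implements one possible BCD loop for the weighted problem, using the identity $\mathrm{vec}(\mathbf{B}\mathbf{X}\mathbf{A}) = (\mathbf{A}^{\top} \otimes \mathbf{B})\,\mathrm{vec}(\mathbf{X})$ (column-major vec) so that each block update is an exact least-squares solve over the free (edge-supported) entries. The graph, weights, and initialization are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def chain_product(mats, N):
    """S_k ... S_1 with mats[0] applied first."""
    P = np.eye(N)
    for M in mats:
        P = M @ P
    return P

def weighted_error(S_list, T, weights):
    N = T.shape[0]
    return sum(w * np.linalg.norm(chain_product(S_list[:k + 1], N) - T, 'fro') ** 2
               for k, w in enumerate(weights))

def bcd_design(T, mask, K, weights, n_sweeps=20, seed=0):
    """BCD sketch for min sum_k w_k ||S_k...S_1 - T||_F^2 over masked shifts."""
    rng = np.random.default_rng(seed)
    N = T.shape[0]
    free = mask.ravel(order='F')              # free entries of vec(S), column-major
    S = [rng.standard_normal((N, N)) * mask for _ in range(K)]
    for _ in range(n_sweeps):
        for m in range(K):                    # block m holds S_{m+1} (1-based)
            A_m = chain_product(S[:m], N)     # S_m ... S_1 (I if m == 0)
            rows, rhs = [], []
            B = np.eye(N)                     # B_k = S_k ... S_{m+2}
            for k in range(m, K):
                if k > m:
                    B = S[k] @ B
                w = np.sqrt(weights[k])
                # term k: ||B S_block A_m - T||^2 -> weighted LS rows
                rows.append(w * np.kron(A_m.T, B)[:, free])
                rhs.append(w * T.ravel(order='F'))
            theta, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs),
                                        rcond=None)
            s = np.zeros(N * N)
            s[free] = theta
            S[m] = s.reshape(N, N, order='F')
    return S
```

Because each block update exactly minimizes the full weighted objective over that block, the cost is non-increasing across sweeps.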
Here, we derive the closed-form solution of each round of the BCD algorithm. When one of the shift matrices is the variable of optimization problem (4) and the rest of the shift matrices are fixed, the following optimization problem is solved:
(5)
where the fixed shift matrices satisfy constraint (4b), and the optimization variable is identified by the entry that lies in the $i$-th row and the $j$-th column of the shift matrix being optimized. Thus, we have:
(6)
The term inside the Frobenius norm in (6) can be expressed in vector form by applying the vectorization operator as follows:
(7)
where in (7) we use the fact that $\mathrm{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = (\mathbf{B}^{\top} \otimes \mathbf{A})\,\mathrm{vec}(\mathbf{X})$. Moreover, we have:
(8)
where . Therefore, optimization problem (6) can be represented as follows:
(9)
Consequently, from (III), optimization problem (4) can be re-written as follows:
(10)
where .
Finally, the vectorized form of (10) equals:
(11)
The following algorithm summarizes the approach for solving (11). The stopping criterion $|f^{(r)} - f^{(r-1)}| \leq \epsilon$ is used for a sufficiently small $\epsilon > 0$, where $f^{(r)}$ equals cost function (3a) and $r$ denotes the $r$-th iteration of the BCD algorithm.
IV Edge fluctuation
IV-A Edge Fluctuation Model
In the previous section, we have assumed that all edges are fixed; however, in practical scenarios, edges fluctuate for different reasons, such as random links or node failures in WSNs, which leads to a random graph topology. In this subsection, the effect of edge fluctuations on the proposed (successive) method is studied. We consider that edge fluctuations occur randomly and independently across edges. Note that in this section, to provide an analysis of the effect of edge fluctuations, we assume that an estimate of the link activation probabilities is available. However, knowing the link activation probabilities is not required for our proposed method to deal with the effect of edge fluctuations. Moreover, we assume that the spectral norm of all graph shift operators is upper-bounded. This assumption is practical because it implies that the graph of interest has finite edge weights [24].
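The random edge sampling model described in this section can be simulated directly; the sketch below draws perturbed shifts and checks that their empirical mean approaches the nominal shift weighted entrywise by the activation probabilities. The dense shift, unit self-loops, and uniform activation probability of 0.8 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
S = rng.uniform(0.5, 1.0, (N, N))    # illustrative nominal shift
np.fill_diagonal(S, 1.0)
P = np.full((N, N), 0.8)             # link activation probabilities
np.fill_diagonal(P, 1.0)             # a node never loses its own value

def sample_perturbed_shift(S, P, rng):
    """One draw of the perturbed shift under random edge sampling: each edge
    survives independently with its activation probability, so the nonzero
    entries of the perturbation are entries of S with a negative sign."""
    active = rng.random(S.shape) < P
    return S * active

# The empirical mean of the perturbed shift approaches S ⊙ P.
mean_S = np.mean([sample_perturbed_shift(S, P, rng) for _ in range(20000)],
                 axis=0)
```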
When an edge is missed in a certain iteration of the successive method, the destination node of the missing edge loses the information of its neighbor, which generates a deviation. The shift operator corresponding to the perturbed graph can be modeled as the sum of the nominal shift operator and a perturbation matrix created by the edge fluctuations, whose non-zero entries equal the corresponding entries of the nominal shift operator with a negative sign. Thus, the signal output for the first iteration of the successive method can be expressed as
(12)
where the perturbation of the first iteration is assumed to be a realization of a random process with known mean and covariance matrix. Note that the mean can be modeled by using the link activation probability matrix: since the nominal shift operator is fixed, the mean of the perturbed shift equals the entrywise product of the nominal shift operator and the activation probability matrix. In addition, for the covariance of the perturbation, we have
(13)
The -th element of equals , where and . From the cyclic property of trace, and the fact that trace is a linear operator, we have
(14)
where is the cross-correlation between . Consequently:
(15)
where . In the next iteration, we have:
(16)
where , which implies that is correlated with . Indeed, and
(17)
From and (IV-A), we have that
(18)
where . By taking expectations and the linearity property of the expectation operator, we have:
(19)
Similar to (IV-A) and (15), for each entry of , we have
(20)
where and is the cross-correlation between . By using a similar approach to the one used in (IV-A), we have that .
After $K$ iterations, the output of the successive method under the edge fluctuation model equals:
(21)
where the overall perturbation accumulates across iterations. The first and second order statistics of the output can be obtained by an approach similar to the one used for the first iterations.
Next, we show that, under the edge fluctuation model, the mean square error (MSE) of the successive approach output after $K$ iterations, expressed as follows, is upper bounded.
(22)
where .
Proposition 1: The MSE of the output of the successive approach in (21) under the edge fluctuation model is upper-bounded by:
(23)
where .
Proof.
Please see Appendix A. ∎
From Proposition 1, the MSE of the successive approach output under the edge fluctuation model is upper-bounded by a quantity that depends on the statistics of the perturbations and on the shift matrices.
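The existence of such a bound can be sanity-checked numerically: the sketch below estimates the MSE of the successive output by Monte Carlo under the random edge-sampling model. The shifts and activation probabilities are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 5, 3
# Illustrative shifts (scaled down) and a uniform activation probability.
S_list = [rng.standard_normal((N, N)) / N for _ in range(K)]
T = np.linalg.multi_dot(S_list[::-1])    # ideal operator S_K ... S_1
P = np.full((N, N), 0.9)
np.fill_diagonal(P, 1.0)                 # a node always keeps its own value

def mc_mse(S_list, T, P, x, n_trials=2000, rng=rng):
    """Monte Carlo estimate of E||y_K - T x||^2 when every iteration uses an
    independently perturbed shift (entries of missed edges are zeroed)."""
    err = 0.0
    for _ in range(n_trials):
        y = x.copy()
        for S in S_list:
            y = (S * (rng.random(S.shape) < P)) @ y
        err += np.sum((y - T @ x) ** 2)
    return err / n_trials

x = rng.standard_normal(N)
mse = mc_mse(S_list, T, P, x)
```

With all activation probabilities equal to one the MSE vanishes, matching the fixed-topology case of the previous section.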
IV-B Proposed method: the missing values estimator
In this section, to deal with the effect of edge fluctuations, we focus on estimating the missing values via a kernel-based estimator. The main task can be expressed as follows: given the locally available information, the goal is to estimate the values missing due to the edge fluctuations. To this end, an online kernel-based method is proposed such that a node whose neighbor's information is missed estimates the missing value based on its own available information.
Let us consider that in the $k$-th iteration of the successive approach, the $i$-th node misses the information from the $j$-th node, which is given by
(24)
From (24), we can conclude that the information already available at the $i$-th node is the most relevant for estimating the missing value. The rest of the $i$-th node's information could also be informative, since the $i$-th and $j$-th nodes can have one-hop or two-hop common neighbors. Moreover, because the network is connected, all nodes are connected to each other through a number of hops. Consequently, in our method, the $i$-th node utilizes the information of its remaining neighbors, i.e., all neighbors except the $j$-th one, to estimate the missing value (the information of the $j$-th node in the $k$-th iteration).
To estimate the missing values, in supervised learning, an estimator function is sought via the training samples. To this end, flexible non-parametric models are needed for good results. A general class of models is based on functions of the form [29]
$f(\mathbf{x}) = \sum_{n=1}^{N} \alpha_n \, \phi(\mathbf{x}, \mathbf{x}_n)$   (25)
where $\phi$ is a non-linear function, $\{\alpha_n\}$ are coefficients, $\{\mathbf{x}_n\}$ are the training samples, and $N$ is the number of samples. Kernel methods are a well-known example of an approach using functions of the form (25) [30].
In kernel-based learning, it is assumed that the estimator function belongs to a reproducing kernel Hilbert space (RKHS) expressed as [30]
$\mathcal{H} = \Big\{ f : f(\mathbf{x}) = \sum_{n=1}^{\infty} \alpha_n \, \kappa(\mathbf{x}, \mathbf{x}_n) \Big\}$   (26)
where $\kappa(\cdot,\cdot)$ is a pre-selected symmetric and positive definite kernel function. Given the training samples and their target values, the kernel-based estimate is usually obtained by solving the following problem:
$\hat{f} = \arg\min_{f \in \mathcal{H}} \; \sum_{n=1}^{N} \mathcal{C}\big(f(\mathbf{x}_n), y_n\big) + \lambda \| f \|_{\mathcal{H}}^2$   (27)
where $\mathcal{C}(\cdot,\cdot)$ is a pre-selected cost function; for regression, the least-squares cost is utilized. Moreover, $\| \cdot \|_{\mathcal{H}}$ denotes the RKHS norm, used as a regularizer to deal with overfitting, and $\lambda > 0$ is a regularization parameter.
Because $\mathcal{H}$ is infinite-dimensional, the estimator cannot be obtained directly by solving (27). However, based on the representer theorem [31], (27) admits a solution of the form
$\hat{f}(\mathbf{x}) = \sum_{n=1}^{N} \alpha_n \, \kappa(\mathbf{x}, \mathbf{x}_n)$   (28)
where $\boldsymbol{\alpha} = [\alpha_1, \ldots, \alpha_N]^{\top}$. By substituting (28) into (27), $\boldsymbol{\alpha}$ is obtained by solving the following problem:
$\min_{\boldsymbol{\alpha}} \; \big\| \mathbf{y} - \mathbf{K} \boldsymbol{\alpha} \big\|_2^2 + \lambda \, \boldsymbol{\alpha}^{\top} \mathbf{K} \boldsymbol{\alpha}$   (29)
where $\mathbf{K}$ is the kernel matrix whose $(n, n')$-th entry is $\kappa(\mathbf{x}_n, \mathbf{x}_{n'})$.
In our proposed method, each node employs a function estimator for each of its neighbours to estimate the missing value. This can be obtained by using estimators of the form (28), such that the $i$-th node employs the following estimator for each of its neighbours:
(30)
where the training samples are collected from previous iterations. The estimators are trained by solving optimization problem (29), which for kernel ridge regression admits the following closed-form solution:
$\hat{\boldsymbol{\alpha}} = \left( \mathbf{K} + \lambda \mathbf{I} \right)^{-1} \mathbf{y}$   (31)
From (31), to obtain the coefficients, each node should invert a matrix for each of its neighbors, which is computationally heavy, especially when the number of samples is high. Indeed, when an edge fluctuation occurs, each node has to invert a matrix, whose computational complexity is cubic in the number of training samples. Also, the nodes have to store a large number of scalars, proportional to the number of samples per neighbor.
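For concreteness, a minimal NumPy sketch of the closed-form estimator is given below, assuming a Gaussian kernel and the regularized form of (31); the data and hyperparameters are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Pairwise Gaussian kernel matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam=1e-2, sigma=1.0):
    """Closed-form kernel ridge regression, alpha = (K + lam I)^{-1} y.
    The matrix inversion is what makes per-node training costly: O(N^3)
    in the number of samples, motivating the random feature approach."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    """Evaluate the estimator of (28) at new inputs."""
    return gaussian_kernel(X_new, X_train, sigma) @ alpha
```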
To tackle this issue, we employ the random feature approach [32], which approximates the kernel for shift-invariant kernels, i.e., kernels satisfying $\kappa(\mathbf{x}, \mathbf{y}) = \kappa(\mathbf{x} - \mathbf{y})$. In this paper, we employ shift-invariant and bounded kernels; examples of such kernels are the Gaussian, Laplacian, and Cauchy kernels [32]. The Fourier transforms of shift-invariant kernels, denoted by $p(\mathbf{v})$, exist in closed form as follows:
$\kappa(\mathbf{x} - \mathbf{y}) = \int p(\mathbf{v}) \, e^{j \mathbf{v}^{\top} (\mathbf{x} - \mathbf{y})} \, d\mathbf{v}$   (32)
From the definition of the expected value, (32) equals $\mathbb{E}_{\mathbf{v}}\big[ e^{j \mathbf{v}^{\top} (\mathbf{x} - \mathbf{y})} \big]$. The main idea of the random feature approach is that by drawing a sufficient number of samples from $p(\mathbf{v})$, $\kappa(\mathbf{x} - \mathbf{y})$ can be approximated by
$\kappa(\mathbf{x} - \mathbf{y}) \approx \hat{\kappa}(\mathbf{x}, \mathbf{y}) = \mathbf{z}^{\top}(\mathbf{x}) \, \mathbf{z}(\mathbf{y})$   (33)
where $\{\mathbf{v}_i\}_{i=1}^{V}$ are drawn i.i.d. from $p(\mathbf{v})$ and
$\mathbf{z}(\mathbf{x}) = \frac{1}{\sqrt{V}} \big[ \sin(\mathbf{v}_1^{\top} \mathbf{x}), \ldots, \sin(\mathbf{v}_V^{\top} \mathbf{x}), \cos(\mathbf{v}_1^{\top} \mathbf{x}), \ldots, \cos(\mathbf{v}_V^{\top} \mathbf{x}) \big]^{\top}$   (34)
From (32), by taking expectations on both sides of (33) with respect to $p(\mathbf{v})$, we can conclude that $\hat{\kappa}$ is unbiased. Moreover, the variance of $\hat{\kappa}$ is inversely proportional to the number of random features, which implies that it tends to zero as that number tends to infinity [33].
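This approximation is easy to verify numerically. The sketch below builds the random feature map of (34) for a Gaussian kernel, for which $p(\mathbf{v})$ is a zero-mean Gaussian, and compares $\mathbf{z}^{\top}(\mathbf{x})\,\mathbf{z}(\mathbf{y})$ with the exact kernel value; the dimension and number of features are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def rff_map(X, W):
    """Random feature map z(x) = (1/sqrt(V)) [sin(v_i^T x); cos(v_i^T x)],
    so that z(x)^T z(y) is an unbiased estimate of the kernel value."""
    proj = X @ W.T
    V = W.shape[0]
    return np.hstack([np.sin(proj), np.cos(proj)]) / np.sqrt(V)

rng = np.random.default_rng(0)
d, V, sigma = 3, 5000, 1.0
# For the Gaussian kernel, p(v) is N(0, I / sigma^2).
W = rng.standard_normal((V, d)) / sigma
x = rng.uniform(-1, 1, d)
y = rng.uniform(-1, 1, d)
zx, zy = rff_map(np.vstack([x, y]), W)
approx = float(zx @ zy)
exact = gaussian_kernel(x, y, sigma)
```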
By substituting (33) into (30), we have the following estimator
(35)
From (IV-B), the parameter vector is obtained by solving the following problem [33]:
(36)
where denotes the -th training sample. Also, we have:
(37)
where the matrix of random features is defined accordingly. The random feature approach can be employed in an online manner, where the nodes estimate the missing values on the fly. In this approach, in each iteration of the successive approach, each node applies online gradient descent [34] to optimization problem (IV-B) to update its parameters, which is expressed as follows:
(38)
where the step-size of gradient descent is used in (IV-B). Then, if the $i$-th node misses the $j$-th node's information, it estimates the missing value via (IV-B). At the end of each iteration, the complexity of (IV-B) is linear in the number of random features.
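A minimal sketch of such an online estimator follows, assuming Gaussian-kernel random features and a synthetic smooth target in place of a node's neighbor values; the step-size, number of features, and regularization are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
d, V, sigma, lam, eta = 3, 100, 1.0, 1e-3, 0.3
W = rng.standard_normal((V, d)) / sigma      # fixed random feature directions

def z(x):
    """Random feature map; note ||z(x)||^2 = 1, which keeps updates stable."""
    p = W @ x
    return np.concatenate([np.sin(p), np.cos(p)]) / np.sqrt(V)

theta = np.zeros(2 * V)

def ogd_step(theta, x, y, eta=eta, lam=lam):
    """One online gradient descent step on (z(x)^T theta - y)^2 + lam||theta||^2.
    Cost is O(V) per sample: no matrix inversion, no sample storage."""
    zx = z(x)
    grad = 2.0 * (zx @ theta - y) * zx + 2.0 * lam * theta
    return theta - eta * grad

# Streaming samples from a smooth synthetic target (an assumption for the
# sketch); in the paper's setting, x would hold a node's locally available
# values and y the neighbor value to reconstruct.
for _ in range(5000):
    x = rng.uniform(-1, 1, d)
    theta = ogd_step(theta, x, np.sin(x.sum()))
```

After training, a missing value at input `x` would be estimated as `z(x) @ theta`.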
Please note that, for the scenario where edge fluctuations happen in the first iteration, the estimators are trained offline just to find initial parameters, and the offline-trained parameters are used for initialization. For the remaining iterations, the parameters of the estimators are updated online via (IV-B). Finally, the approach proposed in this section can also be used with different types of graph filters, such as FIR [2] or edge-variant graph filters [21].
V Conclusion
This paper proposes a new approach to compute linear transformations as fast as possible by leveraging a set of successive different shift operators. To compute the desired transformation in a small number of iterations, a new optimization problem is proposed to design the shift operators. An exhaustive simulation study demonstrates that the proposed approach can compute the desired transformation markedly faster than existing algorithms. Furthermore, this work studies the effect of edge fluctuations, which is overlooked by existing works and is a common issue in wireless sensor networks due to random links or node failures. To deal with that issue, an online learning method is proposed that enables nodes to estimate the values missing due to edge fluctuations.
Appendix A Proof of Proposition 1
Proof.
From (22) and the fact that , we have:
(39)
By using the cyclic property of the trace and the fact that trace is a linear operator, we have:
(41)
where the expectation terms are defined as above. Finally, by applying the inequality $\mathrm{tr}(\mathbf{A}\mathbf{B}) \leq \|\mathbf{A}\|_2 \, \mathrm{tr}(\mathbf{B})$ (which holds for any square matrix $\mathbf{A}$ and positive semi-definite matrix $\mathbf{B}$ [35]) and the submultiplicativity and triangle-inequality properties of the spectral norm, we have
(42)
From and (A), we have:
(43)
∎
References
- [1] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst, “The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains,” IEEE Signal Processing Magazine, vol. 30, pp. 83–98, May 2013.
- [2] A. Sandryhaila and J. M. F. Moura, “Discrete signal processing on graphs: Frequency analysis,” IEEE Trans. on Signal Processing, vol. 61, pp. 1644–1656, Apr 2013.
- [3] S. Chen, R. Varma, A. Sandryhaila, and J. Kovačević, “Discrete signal processing on graphs: Sampling theory,” IEEE Transactions on Signal Processing, vol. 63, no. 24, pp. 6510–6523, 2015.
- [4] S. Safavi and U. A. Khan, “Revisiting finite-time distributed algorithms via successive nulling of eigenvalues,” vol. 22, pp. 54–57, Jan. 2015.
- [5] T. Weerasinghe, D. Romero, C. Asensio-Marco, and B. Beferull-Lozano, “Fast decentralized subspace projection via graph filtering,” in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), (Calgary,Canada), pp. 4639–4643, Apr. 2018.
- [6] S. Segarra, A. G. Marques, G. Leus, and A. Ribeiro, “Reconstruction of graph signals through percolation from seeding nodes,” IEEE Transactions on Signal Processing, vol. 64, no. 16, pp. 4363–4378, 2016.
- [7] S. Barbarossa, G. Scutari, and T. Battisti, “Distributed signal subspace projection algorithms with maximum convergence rate for sensor networks with topological constraints,” in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), (Taipei,Taiwan), pp. 2893–2896, Apr. 2009.
- [8] P. Di Lorenzo, S. Barbarossa, and S. Sardellitti, “Distributed signal processing and optimization based on in-network subspace projections,” IEEE Transactions on Signal Processing, vol. 68, pp. 2061–2076, 2020.
- [9] C. A.-M. F. Camaro-Nogues, D. Alonso-Roman and B. Beferull-Lozano, “Reducing the observation error in a wsn through a consensus-based subspace projection,” in WCNC, (Shanghai,China), pp. 3643–3648, Apr. 2013.
- [10] A. Y. Kibangou, “Graph laplacian based matrix design for finite-time distributed average consensus,” in 2012 American Control Conference (ACC), pp. 1901–1906, 2012.
- [11] A. Sandryhaila, S. Kar, and J. M. F. Moura, “Finite-time distributed consensus through graph filters,” in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), (Florence, Italy), pp. 1080–1084, May 2014.
- [12] D. I. Shuman, B. Ricaud, and P. Vandergheynst, “Vertex-frequency analysis on graphs,” Applied and Computational Harmonic Analysis, vol. 40, no. 2, pp. 260–291, 2016.
- [13] S. K. Narang and A. Ortega, “Perfect reconstruction two-channel wavelet filter banks for graph structured data,” IEEE Transactions on Signal Processing, vol. 60, no. 6, pp. 2786–2799, 2012.
- [14] B. Girault, Signal Processing on Graphs - Contributions to an Emerging Field. PhD thesis, École Normale Supérieure de Lyon, 2015.
- [15] D. Romero, M. Ma, and G. B. Giannakis, “Kernel-based reconstruction of graph signals,” IEEE Transactions on Signal Processing, vol. 65, no. 3, pp. 764–778, 2017.
- [16] M. Onuki, S. Ono, M. Yamagishi, and Y. Tanaka, “Graph signal denoising via trilateral filter on graph spectral domain,” IEEE Transactions on Signal and Information Processing over Networks, vol. 2, no. 2, pp. 137–148, 2016.
- [17] S. Segarra, A. G. Marques, and A. Ribeiro, “Optimal graph-filter design and applications to distributed linear network operators,” IEEE Trans. Sig. Process., vol. 65, pp. 4117–4131, May 2017.
- [18] S. Mollaebrahim, C. Asensio-Marco, D. Romero, and B. Beferull-Lozano, “Decentralized subspace projection in large networks,” in Global Conference on Signal and Information Processing (GlobalSIP), (Anaheim,USA), pp. 788–792, Feb. 2018.
- [19] M. Zamalloa and B. Krishnamachari, “An analysis of unreliability and asymmetry in low-power wireless links,” ACM Trans. Sen. Netw., vol. 3, p. 1–34, June 2007.
- [20] L. Sang, A. Arora, and H. Zhang, “On link asymmetry and one-way estimation in wireless sensor networks,” ACM Trans. Sen. Netw., vol. 6, p. 12–25, Mar. 2010.
- [21] M. Coutino, E. Isufi, and G. Leus, “Advances in distributed graph filtering,” IEEE Transactions on Signal Processing, vol. 67, pp. 2320–2333, May 2019.
- [22] E. Isufi and G. Leus, “Distributed sparsified graph filters for denoising and diffusion tasks,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5865–5869, 2017.
- [23] E. Isufi, A. Loukas, A. Simonetto, and G. Leus, “Filtering random graph processes over random time-varying graphs,” IEEE Transactions on Signal Processing, vol. 65, no. 16, pp. 4406–4421, 2017.
- [24] F. Gama, E. Isufi, A. Ribeiro, and G. Leus, “Controllability of bandlimited graph processes over random time varying graphs,” IEEE Transactions on Signal Processing, vol. 67, no. 24, pp. 6440–6454, 2019.
- [25] L. B. Saad and B. Beferull-Lozano, “Accurate graph filtering in wireless sensor networks,” IEEE Internet of Things Journal, pp. 1–1, 2020.
- [26] S. Segarra, A. G. Marques, G. Mateos, and A. Ribeiro, “Network topology inference from spectral templates,” IEEE Trans. on Signal and Information Processing over Networks,, vol. 3, p. 467–483, Sep 2017.
- [27] A. Sandryhaila and J. M. F. Moura, “Discrete signal processing on graphs: Frequency analysis,” IEEE Trans. on Signal Processing, vol. 62, pp. 3042–3054, Jun 2014.
- [28] Y. Xu and W. Yin, “A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion,” SIAM Journal on Imaging Sciences, vol. 6, no. 3, pp. 1758–1789, 2013.
- [29] V. N. Vapnik, Statistical Learning Theory. Wiley-Interscience, 1998.
- [30] B. Scholkopf and A. J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
- [31] B. Schölkopf, R. Herbrich, and A. J. Smola, “A generalized representer theorem,” in Proceedings of the 14th Annual Conference on Computational Learning Theory, p. 416–426, 2001.
- [32] A. Rahimi and B. Recht, “Random features for large-scale kernel machines,” in Advances in Neural Information Processing Systems 20, pp. 1177–1184, 2008.
- [33] Y. Shen, T. Chen, and G. B. Giannakis, “Random feature-based online multi-kernel learning in environments with unknown dynamics,” J. Mach. Learn. Res., vol. 20, p. 773–808, Jan. 2019.
- [34] E. Hazan, “Introduction to online convex optimization,” Found. Trends Optim., vol. 2, p. 157–325, Aug. 2016.
- [35] Sheng-De Wang, Te-Son Kuo, and Chen-Fa Hsu, “Trace bounds on the solution of the algebraic matrix riccati and lyapunov equation,” IEEE Transactions on Automatic Control, vol. 31, no. 7, pp. 654–656, 1986.