
Decentralized Inference with Graph Neural Networks in Wireless Communication Systems

Mengyuan Lee,  Guanding Yu,  and Huaiyu Dai The work of Guanding Yu was supported by the National Key Research and Development Program of China under Grant 2018YFB1802302. The work of Huaiyu Dai was supported in part by the US National Science Foundation under grant CNS-1824518. M. Lee is with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China, and also with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27695, USA. e-mail: [email protected]. G. Yu is with the College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China. e-mail: [email protected]. (Corresponding author: Guanding Yu) H. Dai is with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27695, USA. e-mail: [email protected]
Abstract

The graph neural network (GNN) is an efficient neural network model for graph data and is widely used in different fields, including wireless communications. Different from other neural network models, GNN can be implemented in a decentralized manner during the inference stage with information exchanges among neighbors, making it a potentially powerful tool for decentralized control in wireless communication systems. The main bottleneck, however, is wireless channel impairments that deteriorate the prediction robustness of GNN. To overcome this obstacle, we analyze and enhance the robustness of the decentralized GNN during the inference stage in different wireless communication systems in this paper. Specifically, using a GNN binary classifier as an example, we first develop a methodology to verify whether the predictions are robust. Then, we analyze the performance of the decentralized GNN binary classifier in both uncoded and coded wireless communication systems. To remedy imperfect wireless transmission and enhance the prediction robustness, we further propose novel retransmission mechanisms for the above two communication systems, respectively. Through simulations on synthetic graph data, we validate our analysis, verify the effectiveness of the proposed retransmission mechanisms, and provide some insights for practical implementation.

Index Terms:
Machine learning, graph neural network, decentralized algorithm, retransmission, robustness.

1 Introduction

With the explosion of data traffic and the enhanced capability of big data processing, the machine learning (ML) technique has been regarded as a potential enabler for the sixth-generation (6G) wireless communications [1]. Recently, a large number of works have turned to various ML techniques to facilitate different applications in wireless communication systems, such as physical layer design [2, 3, 4, 5], resource allocation [6, 7, 8, 9, 10, 11], and networking [12, 13]. In these works, ML is generally utilized for two purposes. On the one hand, it has been employed as a fast approximation of heavy computations in existing algorithms. For example, the authors in [7, 8] have made use of the imitation learning technique to replace the pruning strategy in the branch-and-bound algorithm, a widely-used algorithm for mixed-integer wireless resource allocation problems with exponential complexity in the worst case. On the other hand, ML has also been used to develop novel low-complexity heuristic algorithms. For example, the authors in [9] and [10] have proposed ML-based link scheduling algorithms for device-to-device networks by using deep neural networks (DNNs) and the graph embedding technique, respectively.

Amongst other ML techniques, the graph neural network (GNN) has recently attracted much attention from both academia and industry. As the name implies, GNN is a kind of neural network model proposed especially for graph data [14, 15]. It can learn the representation of an input graph at the node/graph level via message passing between nodes. Because of its satisfactory performance and good interpretability, GNN has become a widely applied graph analytical tool in various fields, such as computer vision [16], traffic engineering [17], and the chemical industry [18]. GNN has several properties that match well with the characteristics and requirements of applications in wireless communication systems. First, GNN can efficiently exploit graph data for different learning tasks, while wireless networks can be naturally modeled as graphs, where nodes and edges correspond to wireless devices and links, respectively. Second, GNN can process input graphs of different sizes. This generalization ability is in accord with the dynamic nature of wireless communication systems. Finally, once a well-trained GNN model is obtained (the training process of the GNN can be conducted in either a centralized or a decentralized manner by using different training techniques; since using a well-trained GNN model, i.e., the inference stage of the GNN, is the focus of this paper, we assume that a well-trained GNN model is available throughout), it can be used in a decentralized manner for inference to achieve decentralized management/control. Meanwhile, decentralized control and resource management often serve as an appealing alternative, especially for large-scale wireless communication systems [47, 48, 49]: they do not depend on a centralized controller and thus avoid concurrent communication, possibly severe traffic jams, computation limitations, low scalability and flexibility, and a single point of failure. Note that most standard neural networks widely used in existing works, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), require central processing during the inference stage.

To run the inference stage of a well-trained GNN model in a decentralized manner in wireless communication systems, information exchanges among neighboring nodes over wireless channels are inevitable. However, wireless transmission over noisy and fading channels is generally imperfect, which leads to stochastic errors in received signals, eventually decreasing the prediction accuracy and deteriorating the robustness of the GNN model. This is the main bottleneck concerning decentralized inference with GNN in wireless systems. Therefore, the goal of this paper is to analyze and enhance the robustness of the decentralized GNN during the inference stage in different wireless communication systems, making the prediction results not only accurate but also robust to transmission errors.

1.1 Related Work

1.1.1 Related Work in Machine Learning Field

GNNs, as the generalization of traditional DNNs to graph data, inherit both the advantages and disadvantages of DNNs. Specifically, GNNs have powerful representation ability, but are vulnerable to adversarial attacks during the inference stage (adversarial attacks can also happen during the training stage of GNNs, which is out of the scope of this paper). Note that adversarial attacks refer to manually designed perturbations on the node/edge features or the topology of the graph data, which aim at fooling GNNs into incorrect predictions. Therefore, there has been a surge of works in the ML field on enhancing the robustness of GNNs against adversarial attacks during the inference stage [19, 20, 21, 22, 23, 24, 25]. Existing adversarial defense methods can be categorized into three types [19]: (i) adversarial training, which injects adversarial samples into the training set so that the GNN model can correctly predict the labels of testing samples under adversarial attacks [20, 21], (ii) adversary detection, which explores the intrinsic differences of adversarial edges/nodes and then cleans edges/nodes to get correct predictions [22, 23], and (iii) model improvement, which focuses on improving the GNN model itself to make it robust against adversarial attacks [24, 25]. There also exist some works that theoretically analyze the robustness of GNNs under adversarial attacks [26, 27]. However, existing adversarial attacks in the literature are manually designed based on certain criteria, which are quite different from the random and stochastic errors caused by imperfect wireless transmission. Therefore, the aforementioned works on defending against adversarial attacks may not be directly applied to the decentralized GNN in wireless communication systems. This motivates us to study and enhance the robustness of the decentralized GNN in wireless communication systems.

1.1.2 Related Work in Wireless Communication Field

In the wireless communication field, related works can be divided into two categories. The first category focuses on using GNNs for different problems in wireless communication systems, such as link scheduling [10], power allocation [28, 29], and user association [30]. In these works, GNNs are generally used in a centralized manner. Recently, some works have used GNNs for decentralized control [31, 32, 33]. However, they generally assume perfect information exchanges between nodes. In contrast, the second category examines how wireless transmission affects ML performance [34, 35, 36, 37, 38, 39]. To be more specific, these works focus on the training process of ML models and analyze the convergence of training algorithms, such as the stochastic gradient descent (SGD) algorithm, over noisy wireless channels. Based on this analysis, resource management algorithms [35, 36], learning parameter setting methods [37, 38], and scheduling mechanisms [39] have been proposed to achieve different goals, such as trading off communication and computation resources, faster convergence, or better ML performance. These existing methods are also applicable to the decentralized training of GNNs in wireless communication systems. However, the characteristics of the training and inference stages of ML are very different. Specifically, training an ML model with algorithms such as SGD is a stochastic process, which is inherently tolerant of noisy data. On the contrary, using an ML model for prediction is a deterministic process and is more vulnerable to noise. Therefore, the aforementioned works are not applicable to our considered problem, which motivates the study in this paper.

1.2 Contributions and Outline

As far as we know, we are among the first to study robust decentralized inference with GNNs in wireless communication systems. To summarize, our main contributions are as follows.

  • We formulate and solve a robustness verification problem to check whether the predictions of the decentralized GNN binary classifier in wireless communication systems are robust.

  • We analyze the performance of the decentralized GNN binary classifier in both uncoded and coded communication systems by examining the robustness verification problem.

  • We develop two novel retransmission mechanisms based on the combining technique to deal with the impairments of wireless transmission and enhance the robustness of the decentralized GNN binary classifier in uncoded and coded communication systems, respectively.

  • We conduct extensive simulations through which we reveal the effectiveness of our proposed retransmission mechanisms and shed light on practical implementation.

The rest of this paper is organized as follows. In Section 2, we give a brief overview of GNNs. In Section 3, we introduce the binary classification problem in wireless communication systems and discuss a corresponding GNN binary classifier as well as its decentralized implementation. In Section 4, we formulate and solve the robustness verification problem. Based on this, we analyze and enhance the robustness of the decentralized GNN binary classifier in uncoded and coded communication systems in Sections 5 and 6, respectively. Extensive testing results are presented in Section 7. Finally, we conclude the paper and provide directions for future work in Section 8.

1.3 Notations

Throughout this paper, vectors are denoted by bold lower-case letters, such as $\boldsymbol{a}$, while matrices are denoted by bold upper-case letters, such as $\boldsymbol{A}$. Moreover, we use $\boldsymbol{a}[i]$ to denote the $i$-th entry of vector $\boldsymbol{a}$ and $\boldsymbol{A}[i,j]$ to denote the entry in the $i$-th row and $j$-th column of matrix $\boldsymbol{A}$. We also use $\boldsymbol{A}[i,:]$ and $\boldsymbol{A}[:,j]$ to denote the $i$-th row and the $j$-th column of $\boldsymbol{A}$, respectively. Given two sets, $S_1$ and $S_2$, $\boldsymbol{A}_{\{S_1,S_2\}}$ denotes a sliced matrix of $\boldsymbol{A}$ that contains the rows indexed by $S_1$ and the columns indexed by $S_2$. Furthermore, given two matrices $\boldsymbol{A}$ and $\boldsymbol{B}$ with the same number of rows, we can stack them by columns and denote the result as $[\boldsymbol{A},\boldsymbol{B}]$. Similarly, if they have the same number of columns, we can stack them by rows and denote the result as $[\boldsymbol{A};\boldsymbol{B}]$. For example, assuming that $\boldsymbol{A}$ and $\boldsymbol{B}$ are both $2\times 2$, we have

$[\boldsymbol{A},\boldsymbol{B}]=\begin{bmatrix} A[1,1] & A[1,2] & B[1,1] & B[1,2] \\ A[2,1] & A[2,2] & B[2,1] & B[2,2] \end{bmatrix}, \qquad [\boldsymbol{A};\boldsymbol{B}]=\begin{bmatrix} A[1,1] & A[1,2] \\ A[2,1] & A[2,2] \\ B[1,1] & B[1,2] \\ B[2,1] & B[2,2] \end{bmatrix}.$

2 Overview of Graph Neural Networks

GNNs, to some extent, are inspired by the success of CNNs, which reduce the number of parameters in fully-connected DNNs with convolution filters. However, the convolution operation in CNNs only works for grid-like image data over the Euclidean space. Therefore, GNNs were proposed by redefining the convolution operation for graphs [14, 15]. To discuss the typical architecture of GNNs, we consider a graph $G(\mathcal{V},\mathcal{E})$ with node set $\mathcal{V}$ and edge set $\mathcal{E}$. The goal of a GNN is to learn a state embedding vector $\boldsymbol{h}_v\in\mathbb{R}^{D}$ for each node $v\in\mathcal{V}$ by taking the graph adjacency matrix, node features, and edge features (if available) as inputs. Note that $D$ is a hyperparameter called the hidden state dimension, whose value is manually set. The state embedding vector $\boldsymbol{h}_v$ is then used to generate the output of node $v$, i.e., $\boldsymbol{o}_v$. Generally, a GNN is constructed as a sequence of $K$ layers. In the $k$-th layer, the hidden state of node $v$, denoted as $\boldsymbol{h}_v^{(k)}$, is first updated by collecting the neighborhood information as follows

$\boldsymbol{h}_v^{(k)} = f_t^{(k)}(\boldsymbol{x}_v, \boldsymbol{x}_{co[v]}, \boldsymbol{x}_{ne[v]}, \boldsymbol{h}_{ne[v]}^{(k-1)}; \boldsymbol{\theta}_t^{(k)}),$ (1)

where $f_t^{(k)}$ is called the local transition function in the $k$-th layer. Note that $f_t^{(k)}$ is the same for all nodes in $\mathcal{V}$ and is generally realized through a multi-layer perceptron (MLP), whose learning parameters are denoted as $\boldsymbol{\theta}_t^{(k)}$. Meanwhile, $\boldsymbol{x}_v$, $\boldsymbol{x}_{co[v]}$, and $\boldsymbol{x}_{ne[v]}$ are the features of node $v$, its edges, and its neighbors, respectively. Also, $\boldsymbol{h}_{ne[v]}^{(k-1)}$ denotes the hidden states of the neighbors of node $v$ in the $(k-1)$-th layer. After the update in (1), the output of node $v$ in the $k$-th layer, denoted as $\boldsymbol{o}_v^{(k)}$, is computed as

$\boldsymbol{o}_v^{(k)} = f_o^{(k)}(\boldsymbol{x}_v, \boldsymbol{h}_v^{(k)}; \boldsymbol{\theta}_o^{(k)}),$ (2)

where $f_o^{(k)}$ is called the local output function in the $k$-th layer, which defines how the output of each node is produced. Similar to $f_t^{(k)}$, $f_o^{(k)}$ is the same for all nodes in $\mathcal{V}$ and is generally realized through an MLP with learning parameters $\boldsymbol{\theta}_o^{(k)}$. (Eqs. (1) and (2) are general formulas of the local transition and output functions; the specific design of these two functions can differ across GNN models by omitting some input components, adding extra components, choosing different techniques to combine input components, or selecting different MLP structures.)

As mentioned above, the GNN has several special properties that match the characteristics and requirements of wireless communication systems. One important property is that the inference stage of the GNN can be naturally implemented in a decentralized manner. Specifically, the operations in (1) and (2) only need the information of node $v$ ($\boldsymbol{x}_v$, $\boldsymbol{x}_{co[v]}$, and $\boldsymbol{h}_v^{(k)}$) and of its one-hop neighbors ($\boldsymbol{x}_{ne[v]}$ and $\boldsymbol{h}_{ne[v]}^{(k-1)}$), where the one-hop neighbors' information can be obtained by local information exchanges. In other words, the operations in each layer of the GNN are entirely local. However, in wireless communication systems, the information exchanges among neighbors occur over noisy and fading channels, whose impairments lead to imperfect transmission and deteriorate the robustness of the decentralized prediction results. This is the main bottleneck for decentralized inference with the GNN. Analyzing the performance of the decentralized GNN in different wireless communication systems and overcoming this obstacle to achieve robust predictions are the focus of this paper.
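To make the per-node computation in (1) and (2) concrete, the following minimal Python sketch implements one layer's update for a single node, using summation to aggregate neighbor information and a single linear-plus-ReLU map in place of a general MLP; the function name, shapes, and the aggregation rule are our own illustrative choices, not a specification from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_layer_node_update(x_v, X_ne, H_ne, theta_t, theta_o):
    """One illustrative layer update for a single node v, in the spirit of
    Eqs. (1)-(2). Edge features are omitted for brevity.
    x_v:     (p,)   features of node v
    X_ne:    (m, p) features of its m one-hop neighbors
    H_ne:    (m, D) neighbors' hidden states from layer k-1
    theta_t: (2p + D, D)     transition parameters (illustrative shape)
    theta_o: (p + D, D_out)  output parameters (illustrative shape)
    """
    # Aggregate neighborhood information by summation, then apply the
    # local transition function f_t, realized here as a linear map + ReLU.
    z = np.concatenate([x_v, X_ne.sum(axis=0), H_ne.sum(axis=0)])
    h_v = relu(z @ theta_t)                                   # Eq. (1)
    # Local output function f_o, again a linear map + ReLU for illustration.
    o_v = relu(np.concatenate([x_v, h_v]) @ theta_o)          # Eq. (2)
    return h_v, o_v

# Tiny usage example with random parameters.
p, D, D_out, m = 4, 8, 2, 3
rng = np.random.default_rng(0)
h_v, o_v = gnn_layer_node_update(
    rng.integers(0, 2, p), rng.integers(0, 2, (m, p)), rng.standard_normal((m, D)),
    rng.standard_normal((2 * p + D, D)), rng.standard_normal((p + D, D_out)))
```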

3 System Model and GNN Binary Classifier

For simplicity, we focus on a decentralized GNN binary classifier in the subsequent discussion (the proposed analysis and retransmission mechanisms can be generalized to more complicated problems by modifying the GNN structure and the formulation of the robustness verification problem accordingly). In this section, we first introduce the binary classification problem in wireless communication systems. Then, we discuss the architecture of a GNN binary classifier for this problem as well as its decentralized implementation in wireless communication systems.

3.1 Binary Classification Problem

As shown in Fig. 1, we consider a binary classification problem in a wireless communication system that can be widely found in the literature [6, 9, 10, 49] and modeled as a graph $G(\mathcal{V},\mathcal{E})$ with node set $\mathcal{V}$ and edge set $\mathcal{E}$. Specifically, the system has $N$ nodes. These nodes can be base stations, access points, edge servers, or mobile devices, depending on the specific application. To enable local computation, we assume that each device is equipped with sufficient storage and computing capability. Meanwhile, two nodes are neighbors and connected by an edge if they can communicate with each other through the wireless channel. In this paper, we assume that each node knows its neighbors in the system, which can be achieved by neighbor discovery algorithms such as [50]. Moreover, each node in $\mathcal{V}$ can be classified into one of two classes and correspondingly labeled as 1 or -1. The label refers to a binary indicator in wireless communication systems, such as the power allocation indicator in the power control problem [6], the link activation indicator in the wireless link scheduling problem [9], or the subcarrier assignment indicator in the subcarrier assignment problem [49]. Furthermore, each node $v\in\mathcal{V}$ keeps a vector of the internal characteristics needed for the classification task, such as the transmit power limit or the QoS requirement in the power control problem [6]. For the sake of storage and digital communication, this vector is quantized into a binary node feature vector $\boldsymbol{x}_v\in\{0,1\}^{p}$, where $p$ is called the node feature dimension and is set manually based on the required quantization precision. In this paper, we do not consider the detailed quantization process and assume that $\boldsymbol{x}_v$ is already available at each node.

Figure 1: A general wireless communication system with $N=6$ nodes and its corresponding graph model.

3.2 Wireless Communication Model

Assume that nodes $v$ and $u$ are neighbors. When node $u$ transmits a $p$-bit signal $\boldsymbol{x}_u$ to node $v$, the transmission takes $p$ time slots, which we define as a symbol block. Furthermore, we assume that the transmission of $\boldsymbol{x}_u$ is not only contaminated by additive white Gaussian noise (AWGN) that is independent and identically distributed (i.i.d.) over different time slots, but also impaired by pathloss, shadowing, and block-Rayleigh fading, which means that the channel gain remains static within a symbol block but is i.i.d. over different blocks. Following [39], we further assume that the receiver $v$ can estimate the signal-to-interference-plus-noise ratio (SINR) $\gamma_{vu}$ of the received signal from each neighbor $u\in N(v)$.

Two kinds of communication systems, uncoded and coded, are considered in this paper. If the transmitted signal from node $u$, i.e., $\boldsymbol{x}_u$, is uncoded, node $v$ can use $\gamma_{vu}$ to estimate the bit error rate (BER) $\epsilon_{vu}$ of the received signal. On the other hand, if $\boldsymbol{x}_u$ is coded before transmission, we assume that it can be correctly decoded by node $v$ if $\log(1+\gamma_{vu})\geq R_u$, where $R_u$ is the preset transmit data rate of node $u$. Otherwise, the received signal is said to be in outage and the whole packet is discarded.

3.3 Architecture of GNN Binary Classifier

The aforementioned binary classification problem can be solved by a GNN binary classifier. The operations of the GNN binary classifier are as follows. First, the hidden states of all nodes are updated as

$\boldsymbol{H} = \sigma^{(r)}(\boldsymbol{\hat{A}}\boldsymbol{X}\boldsymbol{\theta}) \triangleq F(\boldsymbol{\hat{A}},\boldsymbol{X};\boldsymbol{\theta}),$ (3)

where $\boldsymbol{X}\in\{0,1\}^{N\times p}$ is the node feature matrix, obtained by row-stacking the node features of all nodes in $\mathcal{V}$. Also, $\boldsymbol{\theta}\in\mathbb{R}^{p\times D}$ is the matrix of learning parameters, $\sigma^{(r)}(\cdot)$ is the ReLU function, and $\boldsymbol{H}\in\mathbb{R}^{N\times D}$ is the embedding output matrix, obtained by row-stacking the embedding output vectors of all nodes, i.e., $\{\boldsymbol{h}_v,\forall v\in\mathcal{V}\}$. Furthermore, $\boldsymbol{\hat{A}}\in\mathbb{R}^{N\times N}$ is called the graph filter, which defines how neighborhood information is integrated. It is generally a variant of the graph adjacency matrix $\boldsymbol{A}\in\{0,1\}^{N\times N}$. Three popular graph filters for GNNs are listed below (a small construction sketch follows the list):

  • Unnormalized graph filter [40]: $\boldsymbol{\hat{A}} = \boldsymbol{A} + \boldsymbol{I}_N$, where $\boldsymbol{I}_N$ is the identity matrix.

  • Normalized graph filter [41]: $\boldsymbol{\hat{A}} = \boldsymbol{D}^{-1/2}\boldsymbol{A}\boldsymbol{D}^{-1/2} + \boldsymbol{I}_N$, where $\boldsymbol{D}$ is the degree matrix.

  • Random walk graph filter [42]: $\boldsymbol{\hat{A}} = \boldsymbol{D}^{-1}\boldsymbol{A} + \boldsymbol{I}_N$.
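A minimal numpy sketch of the three filters above, assuming a binary adjacency matrix without self-loops; the function name and the guard against zero-degree nodes are our own choices.

```python
import numpy as np

def graph_filters(A):
    """Construct the three graph filters listed above from a binary
    adjacency matrix A (no self-loops). Isolated nodes are guarded
    against division by zero."""
    N = A.shape[0]
    I = np.eye(N)
    deg = np.maximum(A.sum(axis=1), 1)              # node degrees
    D_inv = np.diag(1.0 / deg)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    unnormalized = A + I                            # [40]
    normalized = D_inv_sqrt @ A @ D_inv_sqrt + I    # [41]
    random_walk = D_inv @ A + I                     # [42]
    return unnormalized, normalized, random_walk
```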

As mentioned in Section 3.1, the neighborhood information is available for each node. Therefore, the needed graph filter information can be assumed to be available for each node as well.

After the operation in (3), the hidden states of all nodes are input into a fully-connected layer for binary classification. The specific operation is given by

$\boldsymbol{y} = \sigma^{(s)}(\boldsymbol{H}\boldsymbol{w} + b) \triangleq G(\boldsymbol{H};\boldsymbol{w},b),$ (4)

where $\boldsymbol{y}\in[0,1]^{N}$ is the predicted output vector of all nodes, with each entry indicating the probability that the corresponding node is labeled as 1. Also, $\sigma^{(s)}(\cdot)$ is the sigmoid function, $\boldsymbol{w}\in\mathbb{R}^{D}$ is the vector of learning weights, and $b$ is the bias. We use $\mathcal{W}=\{\boldsymbol{\theta},\boldsymbol{w},b\}$ to denote the set of all learning parameters of the GNN binary classifier.

Based on the predicted output $\boldsymbol{y}$, the predicted binary label vector of all nodes, denoted as $\boldsymbol{c}\in\{-1,1\}^{N}$, is obtained as

$\boldsymbol{c} = \mathrm{sgn}(\boldsymbol{y} - 0.5).$ (5)

By integrating the operations in (3), (4), and (5), the overall operation of the GNN binary classifier can be denoted as

$\boldsymbol{c} = J(\boldsymbol{\hat{A}},\boldsymbol{X};\mathcal{W}).$ (6)
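For concreteness, the following short sketch evaluates (3)-(6) in the centralized form, assuming the graph filter, node features, and trained parameters are all gathered in one place; the names and the tie-breaking rule at $y=0.5$ are ours.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def centralized_gnn_classifier(A_hat, X, theta, w, b):
    """Centralized forward pass of the binary classifier, Eqs. (3)-(6).
    A_hat: (N, N) graph filter, X: (N, p) binary features,
    theta: (p, D), w: (D,), b: scalar."""
    H = np.maximum(A_hat @ X @ theta, 0.0)   # Eq. (3): ReLU embedding
    y = sigmoid(H @ w + b)                   # Eq. (4): per-node probability of label 1
    c = np.where(y >= 0.5, 1, -1)            # Eq. (5): ties broken toward label 1
    return c                                 # Eq. (6): c = J(A_hat, X; W)
```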

3.4 Decentralized Implementation of GNN Binary Classifier

As mentioned above, decentralized management/control is preferred in wireless communication systems. Therefore, in this part, we discuss the decentralized implementation of the above GNN binary classifier in the wireless communication system. As a premise, we assume that the GNN binary classifier is already well-trained and that each node $v\in\mathcal{V}$ keeps a copy of it, i.e., $J(\cdot,\cdot;\mathcal{W})$, which means that the operations and parameters of the GNN model are available at each node. In the remainder of this subsection, we introduce how each node uses $J(\cdot,\cdot;\mathcal{W})$ to predict its label in a decentralized manner.

Note that the operations (3)-(5) of the GNN binary classifier introduced in the previous subsection are given in the centralized form, where the graph filter $\boldsymbol{\hat{A}}$ and the node feature matrix $\boldsymbol{X}$ of all nodes are needed. To better illustrate the decentralized implementation of the GNN binary classifier, we manifest the operations in (3)-(5) for each single node $v\in\mathcal{V}$. First, we denote the set of one-hop neighbors of node $v$ as $N(v)$. We further define the sliced graph filter $\boldsymbol{\hat{a}}_v\in\mathbb{R}^{1\times|N(v)|}$ as $\boldsymbol{\hat{a}}_v = \boldsymbol{\hat{A}}[v,:]_{\{N(v)\}}$, which keeps the graph filter entries needed by node $v$ to integrate information from its neighbors. Similarly, we define the sliced node feature matrix $\boldsymbol{X}_v\in\{0,1\}^{|N(v)|\times p}$ as $\boldsymbol{X}_v = \boldsymbol{X}_{\{N(v),:\}}$, which contains the node features of the nodes in $N(v)$, i.e., $\{\boldsymbol{x}_u,\forall u\in N(v)\}$. With these notations, the operations in (3)-(5) can be rewritten for node $v$ as

$\boldsymbol{h}_v = F([\boldsymbol{\hat{a}}_v, \hat{a}_v], [\boldsymbol{X}_v; \boldsymbol{x}_v]; \boldsymbol{\theta}) = \sigma^{(r)}([\boldsymbol{\hat{a}}_v, \hat{a}_v][\boldsymbol{X}_v; \boldsymbol{x}_v]\boldsymbol{\theta}) = \sigma^{(r)}(\hat{a}_v\boldsymbol{x}_v\boldsymbol{\theta} + \boldsymbol{\hat{a}}_v\boldsymbol{X}_v\boldsymbol{\theta}),$ (7)

$y_v = G(\boldsymbol{h}_v; \boldsymbol{w}, b) = \sigma^{(s)}(\boldsymbol{h}_v\boldsymbol{w} + b),$ (8)

$c_v = \mathrm{sgn}(y_v - 0.5),$ (9)

where $\hat{a}_v = \boldsymbol{\hat{A}}[v,v]$, $\boldsymbol{h}_v\in\mathbb{R}^{1\times D}$ is the embedding output vector of node $v$, $y_v\in[0,1]$ is the predicted output value of node $v$, and $c_v\in\{-1,1\}$ is the binary predicted label of node $v$. Thus, the overall operation at node $v$ is given by

$c_v = J([\boldsymbol{\hat{a}}_v, \hat{a}_v], [\boldsymbol{X}_v; \boldsymbol{x}_v]; \mathcal{W}).$ (10)

From (10), the graph filter information, the node feature $\boldsymbol{x}_v$, and the neighbors' node features $\boldsymbol{X}_v = \{\boldsymbol{x}_u,\forall u\in N(v)\}$ are needed for node $v$ to obtain its predicted label. As mentioned above, the graph filter information and $\boldsymbol{x}_v$ are already available at node $v$. Given that $\{\boldsymbol{x}_u,\forall u\in N(v)\}$ can be obtained by information exchanges with its neighbors, no central unit is needed and node $v$ can obtain its predicted label in a decentralized manner.

The specific process of the decentralized prediction at each node $v\in\mathcal{V}$ is as follows. First, node $v$ sends transmission requests to each node $u$ in $N(v)$. After receiving the request from node $v$, each node $u$ in $N(v)$ transmits its node feature $\boldsymbol{x}_u$ to node $v$ through the wireless channel, as shown in Fig. 1. Given that $\boldsymbol{x}_u$ is binary and has length $p$, we adopt binary phase shift keying (BPSK) modulation, and the transmission of $\boldsymbol{x}_u$ takes $p$ time slots, i.e., one symbol block. We denote the signal received by node $v$ from node $u$ as $\boldsymbol{\hat{x}}_{vu}$, whose SINR is denoted as $\gamma_{vu}$. Due to imperfect wireless transmission, $\boldsymbol{\hat{x}}_{vu}$ is generally not equal to $\boldsymbol{x}_u$. By row-stacking $\boldsymbol{\hat{x}}_{vu}$ for all nodes in $N(v)$, node $v$ obtains an estimated sliced node feature matrix $\boldsymbol{\hat{X}}_v$. Then node $v$ inputs the graph filter information, $\boldsymbol{x}_v$, and $\boldsymbol{\hat{X}}_v$ into its local GNN copy $J(\cdot,\cdot;\mathcal{W})$. By the operations in (10), the decentrally predicted label of node $v$ is finally obtained as $\hat{c}_v = J([\boldsymbol{\hat{a}}_v, \hat{a}_v], [\boldsymbol{\hat{X}}_v; \boldsymbol{x}_v]; \mathcal{W})$.
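The per-node computation in (7)-(10) is small enough to run locally; a sketch, reusing the shapes above ($a\_hat\_v$ holds the neighbor filter entries and $a\_v\_self$ the self entry $\boldsymbol{\hat{A}}[v,v]$; both names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decentralized_node_prediction(a_hat_v, a_v_self, x_v, X_hat_v, theta, w, b):
    """Label prediction at node v from locally available quantities,
    Eqs. (7)-(10). X_hat_v row-stacks the (possibly corrupted) neighbor
    features received over the wireless channel.
    a_hat_v: (m,), x_v: (p,), X_hat_v: (m, p), theta: (p, D), w: (D,)."""
    h_v = np.maximum(a_v_self * (x_v @ theta) + a_hat_v @ X_hat_v @ theta, 0.0)  # Eq. (7)
    y_v = sigmoid(h_v @ w + b)                                                   # Eq. (8)
    return 1 if y_v >= 0.5 else -1                                               # Eq. (9)
```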

4 Robustness Verification Problem

As mentioned above, the received sliced node feature matrix $\boldsymbol{\hat{X}}_v$ is generally not equal to the true sliced node feature matrix $\boldsymbol{X}_v$ due to imperfect wireless transmission. This implies that the predicted label of node $v$, $\hat{c}_v$, may not be robust. Therefore, we need a methodology to verify whether a predicted label is robust before studying the performance of the decentralized GNN binary classifier in wireless communication systems. To this end, we investigate the robustness verification problem in this section.

4.1 Problem Formulation

Given that $\boldsymbol{X}_v$ and $c_v$ are not available during the inference stage, we cannot directly compare $\hat{c}_v$ with $c_v$ for robustness verification. However, for each received signal $\boldsymbol{\hat{x}}_{vu}$, its SINR is available at node $v$, from which the possible number of errors in $\boldsymbol{\hat{x}}_{vu}$ can be estimated. Therefore, we introduce a new vector $\boldsymbol{q}_v = [q_{vu},\forall u\in N(v)]$ as the local error bound vector of node $v$, where each entry $q_{vu}$ represents the maximum number of tolerable errors in $\boldsymbol{\hat{x}}_{vu}$. Obviously, the value of $\boldsymbol{q}_v$ is closely tied to the specific communication model. In the remainder of this section, we assume that $\boldsymbol{q}_v$ is available; estimating $\boldsymbol{q}_v$ is discussed in Section 5. Under this assumption, the true sliced node feature matrix $\boldsymbol{X}_v$ must satisfy the following constraints

$\begin{cases} ||\boldsymbol{X}_v[u,:] - \boldsymbol{\hat{X}}_v[u,:]||_0 \leq q_{vu}, & \forall u\in N(v), \\ \boldsymbol{X}_v[u,j]\in\{0,1\}, & \forall u\in N(v),\ j\in\{1,2,\ldots,p\}, \end{cases}$ (11)

where $||\cdot||_0$ denotes the L0 norm of the input vector. Therefore, we can enumerate all possible matrices satisfying the constraints in (11) and pass them through the GNN model. If the predicted labels for all possible matrices are always the same as $\hat{c}_v$, then the true predicted label $c_v$ must be equal to $\hat{c}_v$. In this sense, $\hat{c}_v$ is considered robust.

Accordingly, the robustness verification problem (the proposed "robustness verification problem" does not belong to the robust optimization field, where the objective and constraints are uncertain [43]) can be formulated as follows

$z(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v) \triangleq \min_{\boldsymbol{\tilde{X}}_v} \hat{c}_v J([\boldsymbol{\hat{a}}_v, \hat{a}_v], [\boldsymbol{\tilde{X}}_v; \boldsymbol{x}_v]; \mathcal{W}),$ (12)

subject to

$||\boldsymbol{\tilde{X}}_v[u,:] - \boldsymbol{\hat{X}}_v[u,:]||_0 \leq q_{vu}, \forall u\in N(v),$ (12a)
$\boldsymbol{\tilde{X}}_v[u,j]\in\{0,1\}, \forall u\in N(v),\ j\in\{1,2,\ldots,p\},$ (12b)

where $\boldsymbol{\tilde{X}}_v$ is the optimization variable. If $z(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v) > 0$, the predicted label $\hat{c}_v$ is robust.

Remark 1: If $z(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v) > 0$, then $J([\boldsymbol{\hat{a}}_v, \hat{a}_v], [\boldsymbol{\tilde{X}}_v; \boldsymbol{x}_v]; \mathcal{W})$ always has the same sign as $\hat{c}_v$ and is in fact equal to $\hat{c}_v$, since the predicted label can only be 1 or -1.

4.2 Problem Transformation

Obviously, Problem (12) is intractable for two main reasons. On the one hand, it is a combinatorial optimization problem due to constraint (12b). On the other hand, even if we relax the binary variables into continuous ones in $[0,1]$, the optimization problem is still non-convex due to the non-linear activation functions in the GNN binary classifier, i.e., the ReLU function and the sigmoid function. In this subsection, we transform Problem (12) into a more tractable form with different approximation and relaxation techniques.

4.2.1 Transformation on Sigmoid Function

According to the definition of the sigmoid function and the binary label mapping rule, the predicted label $c_v$ always has the same sign as $(\boldsymbol{h}_v\boldsymbol{w} + b)$. Therefore, Problem (12) can be simplified as

$\hat{z}(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v) \triangleq \min_{\{\boldsymbol{\tilde{X}}_v, \boldsymbol{h}_v\}} \hat{c}_v(\boldsymbol{h}_v\boldsymbol{w} + b),$ (13)

subject to (12a), (12b), and

$\boldsymbol{h}_v = \sigma^{(r)}(\hat{a}_v\boldsymbol{x}_v\boldsymbol{\theta} + \boldsymbol{\hat{a}}_v\boldsymbol{\tilde{X}}_v\boldsymbol{\theta}),$ (13a)

where the non-linear sigmoid function is removed. Obviously, $\hat{z}(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v)$ always has the same sign as $z(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v)$. Therefore, by solving Problem (13), we can also verify that the predicted label $\hat{c}_v$ is robust if $\hat{z}(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v) > 0$.

4.2.2 Approximation on ReLU Function

To deal with the non-linearity brought by the ReLU function, we adopt the linear approximation of the ReLU function proposed in [44].

First, we denote the input to the ReLU function in (13a) as $\boldsymbol{\hat{h}}_v$ and rewrite (13a) as

$\boldsymbol{\hat{h}}_v = \hat{a}_v\boldsymbol{x}_v\boldsymbol{\theta} + \boldsymbol{\hat{a}}_v\boldsymbol{\tilde{X}}_v\boldsymbol{\theta},$ (14)
$\boldsymbol{h}_v = \sigma^{(r)}(\boldsymbol{\hat{h}}_v).$ (15)

Then we bound the value of each entry $\boldsymbol{\hat{h}}_v[i]$ of the vector $\boldsymbol{\hat{h}}_v$. Specifically, for $\boldsymbol{\hat{h}}_v[i]$, we can find its upper bound $\boldsymbol{u}_v[i]$ according to [26] as

$\boldsymbol{u}_v[i] = (\hat{a}_v\boldsymbol{x}_v\boldsymbol{\theta})[i] + (\boldsymbol{\hat{a}}_v\boldsymbol{\hat{X}}_v\boldsymbol{\theta})[i] + \sum_{u\in N(v)}\sum_{k=1}^{q_{vu}} \boldsymbol{\hat{a}}_v[u]\,\boldsymbol{\hat{u}}_v[u,i,k],$ (16)
$\boldsymbol{\hat{u}}_v[u,i,k] = k\text{-th\_largest}\{(1-\boldsymbol{\hat{X}}_v[u,:])\odot[\boldsymbol{\theta}[:,i]]_+ + \boldsymbol{\hat{X}}_v[u,:]\odot[\boldsymbol{\theta}[:,i]]_-\},$ (17)

where $k\text{-th\_largest}(\cdot)$ returns the $k$-th largest entry of the input vector, $\odot$ denotes the element-wise product, $[x]_+ = \max\{x,0\}$, and $[x]_- = -\min\{x,0\}$. The first two terms in (16) correspond to the value of $\boldsymbol{\hat{h}}_v[i]$ if $\boldsymbol{\tilde{X}}_v = \boldsymbol{\hat{X}}_v$. Meanwhile, the last term in (16) is the maximal increase of $\boldsymbol{\hat{h}}_v[i]$ over all feasible $\boldsymbol{\tilde{X}}_v$ constrained by (12a) and (12b), which can be explained as follows. First, the value of $\boldsymbol{\hat{h}}_v[i]$ depends on the aggregation over all neighbors, which accounts for the summation over $u\in N(v)$. Moreover, for each neighbor $u$ and each bit $\boldsymbol{\hat{X}}_v[u,j]$ in $\boldsymbol{\hat{X}}_v[u,:]$, if (i) $\boldsymbol{\hat{X}}_v[u,j]=0$ and $\boldsymbol{\theta}[j,i]>0$, or (ii) $\boldsymbol{\hat{X}}_v[u,j]=1$ and $\boldsymbol{\theta}[j,i]<0$, flipping $\boldsymbol{\hat{X}}_v[u,j]$ increases $\boldsymbol{\hat{h}}_v[i]$. These two conditions can be jointly written as $(1-\boldsymbol{\hat{X}}_v[u,:])\odot[\boldsymbol{\theta}[:,i]]_+ + \boldsymbol{\hat{X}}_v[u,:]\odot[\boldsymbol{\theta}[:,i]]_-$, which appears as the input of $k\text{-th\_largest}(\cdot)$ in (17). According to constraint (12a), we can flip at most $q_{vu}$ bits in $\boldsymbol{\hat{X}}_v[u,:]$. Therefore, we pick the top-$q_{vu}$ possible increases to obtain the upper bound, which accounts for the summation over $k$. Based on the analysis in [26], the upper bound $\boldsymbol{u}_v[i]$ defined in (16) and (17) is the tightest under constraints (12a) and (12b). Similarly, we can find the tightest lower bound $\boldsymbol{l}_v[i]$ for $\boldsymbol{\hat{h}}_v[i]$ as

$\boldsymbol{l}_v[i] = (\hat{a}_v\boldsymbol{x}_v\boldsymbol{\theta})[i] + (\boldsymbol{\hat{a}}_v\boldsymbol{\hat{X}}_v\boldsymbol{\theta})[i] - \sum_{u\in N(v)}\sum_{k=1}^{q_{vu}} \boldsymbol{\hat{a}}_v[u]\,\boldsymbol{\hat{l}}_v[u,i,k],$ (18)
$\boldsymbol{\hat{l}}_v[u,i,k] = k\text{-th\_largest}\{(1-\boldsymbol{\hat{X}}_v[u,:])\odot[\boldsymbol{\theta}[:,i]]_- + \boldsymbol{\hat{X}}_v[u,:]\odot[\boldsymbol{\theta}[:,i]]_+\}.$ (19)
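A vectorized sketch of the bounds in (16)-(19), computing all hidden dimensions at once; the function name and the loop structure are ours, and the graph filter entries are assumed non-negative, as in the three filters of Section 3.3.

```python
import numpy as np

def relu_pre_activation_bounds(a_hat_v, a_v_self, x_v, X_hat_v, theta, q_v):
    """Entrywise bounds l_v <= h_hat_v <= u_v from Eqs. (16)-(19) for the
    received features X_hat_v and per-neighbor error bounds q_v.
    Shapes: a_hat_v (m,), x_v (p,), X_hat_v (m, p), theta (p, D), q_v (m,)."""
    base = a_v_self * (x_v @ theta) + a_hat_v @ X_hat_v @ theta   # value at X_tilde = X_hat
    theta_pos = np.maximum(theta, 0.0)       # [theta]_+
    theta_neg = np.maximum(-theta, 0.0)      # [theta]_-
    D = theta.shape[1]
    inc = np.zeros(D)                        # maximal total increase, Eqs. (16)-(17)
    dec = np.zeros(D)                        # maximal total decrease, Eqs. (18)-(19)
    for u, q in enumerate(np.asarray(q_v, dtype=int)):
        if q == 0:
            continue
        x_hat_u = X_hat_v[u]                                              # (p,)
        # per-bit gain of flipping bit j, for every hidden dimension i: (p, D)
        gain_up = (1 - x_hat_u)[:, None] * theta_pos + x_hat_u[:, None] * theta_neg
        gain_dn = (1 - x_hat_u)[:, None] * theta_neg + x_hat_u[:, None] * theta_pos
        # keep only the q largest per column (the k-th_largest sums in (17)/(19))
        inc += a_hat_v[u] * np.sort(gain_up, axis=0)[-q:].sum(axis=0)
        dec += a_hat_v[u] * np.sort(gain_dn, axis=0)[-q:].sum(axis=0)
    return base - dec, base + inc            # (l_v, u_v)
```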

After bounding $\boldsymbol{\hat{h}}_v[i]$ as $\boldsymbol{l}_v[i]\leq\boldsymbol{\hat{h}}_v[i]\leq\boldsymbol{u}_v[i]$, we can rewrite or approximate the ReLU function in (15) based on the signs of $\boldsymbol{l}_v[i]$ and $\boldsymbol{u}_v[i]$. Specifically, if $\boldsymbol{u}_v[i]$ and $\boldsymbol{l}_v[i]$ have the same sign, i.e., $0<\boldsymbol{l}_v[i]<\boldsymbol{u}_v[i]$ or $\boldsymbol{l}_v[i]<\boldsymbol{u}_v[i]<0$, the ReLU function reduces to the following linear functions

$\boldsymbol{h}_v[i] = \boldsymbol{\hat{h}}_v[i],\ i\in\mathcal{I}_v^+, \qquad \boldsymbol{h}_v[i] = 0,\ i\in\mathcal{I}_v^-,$ (20)

where $\mathcal{I}_v^+$ and $\mathcal{I}_v^-$ are the index sets of the entries of $\boldsymbol{\hat{h}}_v$ whose upper and lower bounds are both larger or both smaller than 0, respectively. On the other hand, if $\boldsymbol{l}_v[i]<0<\boldsymbol{u}_v[i]$, the ReLU function cannot be directly simplified. To avoid the non-linearity, we replace it with its convex envelope proposed in [44], and the operation $\boldsymbol{h}_v[i]=\sigma^{(r)}(\boldsymbol{\hat{h}}_v[i])$ is approximated as

$\begin{cases} \boldsymbol{h}_v[i]\geq\boldsymbol{\hat{h}}_v[i], \\ \boldsymbol{h}_v[i]\geq 0, \\ \boldsymbol{h}_v[i](\boldsymbol{u}_v[i]-\boldsymbol{l}_v[i])\leq\boldsymbol{u}_v[i](\boldsymbol{\hat{h}}_v[i]-\boldsymbol{l}_v[i]), \end{cases} \quad i\in\mathcal{I}_v,$ (21)

where $\mathcal{I}_v$ is the index set of the entries of $\boldsymbol{\hat{h}}_v$ whose upper and lower bounds differ in sign. A graphical illustration of the ReLU function's convex envelope can be found in Fig. 2.

Figure 2: Illustration of the ReLU function's convex envelope: (a) ReLU function; (b) convex envelope.

Based on the linear approximations in (20) and (21) for the ReLU function, Problem (13) can be transformed into

$\tilde{z}(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v) \triangleq \min_{\{\boldsymbol{\tilde{X}}_v, \boldsymbol{h}_v, \boldsymbol{\hat{h}}_v\}} \hat{c}_v(\boldsymbol{h}_v\boldsymbol{w} + b),$ (22)

subject to (12a), (12b), (14), (20), and (21).

By observation, the feasible region of Problem (13) is a subset of that of Problem (22), which indicates that the minimum of the objective function in Problem (22) is a lower bound on that of Problem (13), i.e., $\tilde{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)\leq\hat{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)$. If $\tilde{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)>0$, then $\hat{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)>0$. Therefore, by solving Problem (22), we can also verify that the predicted label $\hat{c}_v$ is robust if $\tilde{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)>0$.

4.2.3 Relaxation on Binary Variables

Up to this point, Problem (22) is still intractable because $\boldsymbol{\tilde{X}}_v$ is binary (see (12b)). Therefore, we relax it into continuous variables between 0 and 1. Problem (22) is then further relaxed as

$\bar{z}(\boldsymbol{q}_v, \boldsymbol{\hat{X}}_v) \triangleq \min_{\{\boldsymbol{\tilde{X}}_v, \boldsymbol{h}_v, \boldsymbol{\hat{h}}_v\}} \hat{c}_v(\boldsymbol{h}_v\boldsymbol{w} + b),$ (23)

subject to (14), (20), (21),

$||\boldsymbol{\tilde{X}}_v[u,:] - \boldsymbol{\hat{X}}_v[u,:]||_1 \leq q_{vu}, \forall u\in N(v),$ (23a)
$\boldsymbol{\tilde{X}}_v[u,j]\in[0,1], \forall u\in N(v),\ j\in\{1,2,\ldots,p\},$ (23b)

where $||\cdot||_1$ denotes the L1 norm of the input vector. Problem (23) is clearly a linear programming problem, and the minimum of its objective function is a lower bound on that of Problem (22), i.e., $\bar{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)\leq\tilde{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)$. Therefore, the predicted label $\hat{c}_v$ is robust if $\bar{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)>0$. Note that Problem (23) is not equivalent to the original Problem (12). However, we only care about the sign, rather than the value, of the objective function, so the above series of approximations and relaxations meets our needs. By solving Problem (23), we can rule out any non-robust predicted labels and thus guarantee robust predictions of the decentralized GNN binary classifier, which accords with the goal of this paper. In the following, we solve Problem (23).

4.3 Dual Problem and Feasible Solution

As mentioned above, Problem (23) is a linear programming problem. Therefore, we can use duality theory to solve it. Specifically, the dual problem of Problem (23) is given by

$\max_{\{\boldsymbol{\phi}_v,\boldsymbol{\beta}_v\}} -\sum_{i=1}^{D}\boldsymbol{\alpha}_v[i](\hat{a}_v\boldsymbol{x}_v\boldsymbol{\theta})[i] - \mathrm{tr}[\boldsymbol{\hat{X}}_v^{T}(\boldsymbol{\hat{a}}_v^{T}\boldsymbol{\alpha}_v\boldsymbol{\theta}^{T})] - ||\boldsymbol{\Omega}_v||_1 - \sum_{u\in N(v)} q_{vu}\boldsymbol{\beta}_v[u] + \text{const},$ (24)

subject to

$\boldsymbol{\phi}_v\in[0,1]^{D},\ \boldsymbol{\beta}_v\in\mathbb{R}^{|N(v)|},\ \boldsymbol{\beta}_v\succeq 0,$ (24a)
$\boldsymbol{\Omega}_v[u,j] = \max\{\boldsymbol{\varepsilon}_v[u,j]-\boldsymbol{\beta}_v[u],\,0\}, \forall u\in N(v),\ j\in\{1,2,\ldots,p\},$ (24b)
$\boldsymbol{\varepsilon}_v[u,j] = (1-\boldsymbol{\hat{X}}_v[u,j])[(\boldsymbol{\hat{a}}_v^{T}\boldsymbol{\alpha}_v\boldsymbol{\theta}^{T})[u,j]]_+ + \boldsymbol{\hat{X}}_v[u,j][(\boldsymbol{\hat{a}}_v^{T}\boldsymbol{\alpha}_v\boldsymbol{\theta}^{T})[u,j]]_-, \forall u\in N(v),\ j\in\{1,2,\ldots,p\},$ (24c)
$\boldsymbol{\alpha}_v[i] = 0, \forall i\in\mathcal{I}_v^-, \quad \boldsymbol{\alpha}_v[i] = (-\hat{c}_v\boldsymbol{w}^{T})[i], \forall i\in\mathcal{I}_v^+,$ (24d)
$\boldsymbol{\alpha}_v[i] = \frac{\boldsymbol{u}_v[i]}{\boldsymbol{u}_v[i]-\boldsymbol{l}_v[i]}[\hat{c}_v\boldsymbol{w}[i]]_- - \boldsymbol{\phi}_v[i][\hat{c}_v\boldsymbol{w}[i]]_+, \forall i\in\mathcal{I}_v,$ (24e)
$\text{const} = \hat{c}_v b + \sum_{i\in\mathcal{I}_v}\frac{\boldsymbol{u}_v[i]\boldsymbol{l}_v[i]}{\boldsymbol{u}_v[i]-\boldsymbol{l}_v[i]}[\hat{c}_v\boldsymbol{w}[i]]_-,$ (24f)

where $\mathrm{tr}(\cdot)$ denotes the trace of the input matrix. The proof is available in Appendix A.

Note that the above dual problem has a similar form to the problems in [26] and [44]. Following these two works, we can easily find a good suboptimal solution to Problem (24). As mentioned above, since we only care about whether the objective function in Problem (24) is positive, it is not necessary to obtain its exact optimum. Therefore, instead of optimizing over all possible values of $\boldsymbol{\phi}_v$, we can set $\boldsymbol{\phi}_v$ as

$\boldsymbol{\phi}_v[i] = \begin{cases} \frac{\boldsymbol{u}_v[i]}{\boldsymbol{u}_v[i]-\boldsymbol{l}_v[i]}, & \forall i\in\mathcal{I}_v, \\ 0, & \text{otherwise}. \end{cases}$ (27)

According to the simulation results in [26], the choice in (27) yields a good suboptimal solution of the original Problem (12) and can efficiently help us verify the robustness of the predicted labels. Once $\boldsymbol{\phi}_v$ is fixed, $\boldsymbol{\alpha}_v$, $\boldsymbol{\varepsilon}_v$, and $\mathrm{tr}[\boldsymbol{\hat{X}}_v^{T}(\boldsymbol{\hat{a}}_v^{T}\boldsymbol{\alpha}_v\boldsymbol{\theta}^{T})]$ all become constants. Then, Problem (24) is rewritten as

$\max_{\boldsymbol{\beta}_v\succeq 0} -||\boldsymbol{\Omega}_v||_1 - \sum_{u\in N(v)} q_{vu}\boldsymbol{\beta}_v[u] + \text{const},$ (28)

subject to

$\boldsymbol{\Omega}_v[u,j] = \max\{\boldsymbol{\varepsilon}_v[u,j]-\boldsymbol{\beta}_v[u],\,0\}, \forall u\in N(v),\ j\in\{1,2,\ldots,p\},$ (28a)
$\text{const} = \hat{c}_v b + \sum_{i\in\mathcal{I}_v}\frac{\boldsymbol{u}_v[i]\boldsymbol{l}_v[i]}{\boldsymbol{u}_v[i]-\boldsymbol{l}_v[i]}[\hat{c}_v\boldsymbol{w}[i]]_- - \sum_{i=1}^{D}\boldsymbol{\alpha}_v[i](\hat{a}_v\boldsymbol{x}_v\boldsymbol{\theta})[i] - \mathrm{tr}[\boldsymbol{\hat{X}}_v^{T}(\boldsymbol{\hat{a}}_v^{T}\boldsymbol{\alpha}_v\boldsymbol{\theta}^{T})].$ (28b)

Again, by using duality theory, the optimal solution for $\boldsymbol{\beta}_v$ is given by

$\boldsymbol{\beta}_v[u] = q_{vu}\text{-th\_largest}(\boldsymbol{\varepsilon}_v[u,:]), \forall u\in N(v),$ (29)

whose detailed proof can be found in Appendix B. With the closed-form solutions in (27) and (29), the value of the objective function in Problem (24), denoted as $\dot{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)$, can be obtained with low complexity. If $\dot{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)>0$, the predicted label $\hat{c}_v$ is robust.
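Putting (16)-(19), (24), (27), and (29) together, the certificate value can be evaluated in closed form; below is a sketch under the paper's notation (variable names and the handling of neighbors with $q_{vu}=0$ are ours), reusing the bounds $\boldsymbol{l}_v$, $\boldsymbol{u}_v$ computed by the earlier sketch.

```python
import numpy as np

def dual_objective(a_hat_v, a_v_self, x_v, X_hat_v, theta, w, b, c_hat_v, q_v, l_v, u_v):
    """Closed-form value z_dot(q_v, X_hat_v) of the dual (24) under the
    choices (27) and (29); l_v and u_v are the bounds from Eqs. (16)-(19).
    The predicted label c_hat_v is certified robust if the value is positive."""
    D = theta.shape[1]
    pos = (l_v > 0)                      # index set I_v^+ (both bounds positive)
    mix = (l_v < 0) & (u_v > 0)          # index set I_v (bounds differ in sign)
    cw = c_hat_v * w
    alpha = np.zeros(D)
    alpha[pos] = -cw[pos]                                   # Eq. (24d)
    ratio = u_v[mix] / (u_v[mix] - l_v[mix])
    alpha[mix] = -ratio * cw[mix]                           # Eq. (24e) with phi from (27)
    M = np.outer(a_hat_v, alpha) @ theta.T                  # a_hat^T alpha theta^T: (m, p)
    eps = (1 - X_hat_v) * np.maximum(M, 0.0) + X_hat_v * np.maximum(-M, 0.0)   # Eq. (24c)
    obj = 0.0
    for u, q in enumerate(np.asarray(q_v, dtype=int)):
        if q == 0:
            continue                      # such a neighbor contributes nothing
        beta_u = np.sort(eps[u])[-q]      # q_vu-th largest entry, Eq. (29)
        obj -= np.maximum(eps[u] - beta_u, 0.0).sum() + q * beta_u
    const = c_hat_v * b + np.sum(u_v[mix] * l_v[mix] / (u_v[mix] - l_v[mix])
                                 * np.maximum(-cw[mix], 0.0))               # Eq. (24f)
    obj += const - alpha @ (a_v_self * (x_v @ theta)) - np.sum(X_hat_v * M)
    return obj
```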

5 Robustness Verification in Wireless Communication Systems

In the sections above, we have introduced the decentralized GNN binary classifier and formulated a robustness verification problem to verify whether its predicted label is robust. Note that the premise of employing the robustness verification problem is that the local error bound vector $\boldsymbol{q}_v$ is available; it can be estimated from the received SINRs and depends on the specific communication system. Therefore, in this section, we study how to obtain $\boldsymbol{q}_v$ and verify the robustness of the decentralized GNN binary classifier in uncoded and coded communication systems, respectively.

5.1 Robustness Verification in Uncoded Communication Systems

In uncoded communication systems, node $v$ is able to estimate the BER $\{\epsilon_{vu}, u\in N(v)\}$ of each received signal from its SINR, as described in Section 3.2. Given that BPSK modulation is adopted, the BER of the received signal $\boldsymbol{\hat{x}}_{vu}$ is given by [45]

$\epsilon_{vu} = Q(\sqrt{2\gamma_{vu}}),$ (30)

where $Q(\cdot)$ is the Q-function defined as $Q(x)=\int_{x}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-t^{2}/2}dt$ (our analysis can be generalized to other modulation schemes by changing (30) accordingly). Under the i.i.d. assumption on the channel impairments, the number of bit errors $q_{vu}$ follows the Binomial distribution $q_{vu}\sim B(p,\epsilon_{vu})$. On the other hand, the error tolerance of the decentralized GNN binary classifier is monotonic: if it is robust to a received signal with more errors (large $q_{vu}$), it is also robust to a received signal with fewer errors (small $q_{vu}$). Based on the statistical characteristics of $q_{vu}$ and this error-tolerance monotonicity, we propose a new metric, the robustness probability, to measure the robustness of the decentralized GNN binary classifier in uncoded communication systems.

As the name implies, the robustness probability of node $v$, denoted as $p_v^{(r)}$, is the probability that $\hat{c}_v$ is robust given the received signals $\boldsymbol{\hat{X}}_v = \{\boldsymbol{\hat{x}}_{vu},\forall u\in N(v)\}$. To give the formula of $p_v^{(r)}$, we further introduce a new variable $q_v^{U}$, which is the solution of the following optimization problem

$q_v^{U} \triangleq \max_{q} q,$ (31)

subject to

$\dot{z}(\boldsymbol{q}'_v, \boldsymbol{\hat{X}}_v) > 0, \forall q'_{vu}\leq q,\ u\in N(v),$ (31a)

where $\dot{z}(\cdot,\cdot)$ is defined at the end of Section 4.3. Based on the error-tolerance monotonicity, Problem (31) can be solved by the bisection algorithm. From the definition of $q_v^{U}$, if $q_{vu}\leq q_v^{U}$ holds for all neighbors of node $v$, $\hat{c}_v$ is robust. Therefore, we define $p_v^{(r)}$ as $P(q_{vu}\leq q_v^{U},\forall u\in N(v))$. Combined with the fact that $q_{vu}\sim B(p,\epsilon_{vu})$, the robustness probability of node $v$ is given by

$p_v^{(r)} = \prod_{u\in N(v)}\left[\sum_{i=0}^{q_v^{U}}\binom{p}{i}(\epsilon_{vu})^{i}(1-\epsilon_{vu})^{p-i}\right].$ (32)
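A sketch of solving Problem (31) by bisection and evaluating (32); `certify(q)` is a hypothetical callable that returns True when the dual value $\dot{z}$ is positive with every entry of $\boldsymbol{q}_v$ set to $q$ (e.g., built from the dual_objective sketch above).

```python
from math import comb

def q_upper_bound(certify, p):
    """Largest uniform error bound q in {0, ..., p} with certify(q) True
    (Problem (31)); bisection is valid because error tolerance is monotonic in q."""
    if not certify(0):
        return -1                    # not certified even without errors
    lo, hi = 0, p
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if certify(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

def robustness_probability(ber_v, q_U, p):
    """Eq. (32): probability that every neighbor's bit-error count,
    distributed as Binomial(p, ber_vu), does not exceed q_U."""
    prob = 1.0
    for eps in ber_v:
        prob *= sum(comb(p, i) * eps**i * (1.0 - eps)**(p - i) for i in range(q_U + 1))
    return prob
```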

In practice, a specific robustness requirement is generally imposed on node $v$. In other words, the robustness probability of node $v$ is supposed to be no smaller than a given target robustness probability $p_v^{(t)}$. If $p_v^{(r)}\geq p_v^{(t)}$, the robustness requirement is satisfied. Otherwise, retransmission is needed to enhance the prediction robustness, which will be further discussed in Section 6.1.

5.2 Robustness Verification in Coded Communication Systems

As mentioned in Section 3.2, in coded communication systems, a transmitted signal can be correctly decoded by the receiver if it is not in outage. Therefore, $\boldsymbol{\hat{x}}_{vu} = \boldsymbol{x}_u$ and $q_{vu} = 0$ if $\log(1+\gamma_{vu})\geq R_u$. Otherwise, the whole transmission packet corresponding to $\boldsymbol{x}_u$ is discarded. In this case, we estimate $\boldsymbol{\hat{x}}_{vu}$ as a zero vector for the computation of $\hat{c}_v$ (if outages happen, the corresponding signal needs to be estimated, even by random guess, to facilitate the robustness verification process), and the corresponding maximum number of tolerable errors is $p$, i.e., $q_{vu} = p$. Overall, the value of $q_{vu}$ in coded communication systems is given by

$q_{vu} = \begin{cases} p, & \text{if } \log(1+\gamma_{vu}) < R_u, \\ 0, & \text{otherwise}. \end{cases}$ (35)

After obtaining $\boldsymbol{q}_v$ based on (35), we solve the robustness verification problem (12) and check whether $\dot{z}(\boldsymbol{q}_v,\boldsymbol{\hat{X}}_v)>0$. If so, $\hat{c}_v$ is robust and the robustness indicator of node $v$ is set to 1, where the robustness indicator is a binary metric measuring the robustness of the decentralized GNN binary classifier in coded communication systems. Otherwise, the robustness indicator of node $v$ is set to 0 and retransmission is needed, which will be discussed in Section 6.2.
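As a small illustration, the outage-based rule (35) maps received SINRs directly to error bounds; using the base-2 logarithm for the rate comparison is our assumption.

```python
import numpy as np

def coded_error_bounds(gamma_v, R_v, p):
    """Eq. (35): per-neighbor error bounds in a coded system. A packet in
    outage is discarded and replaced by a zero vector, so up to p errors
    must be tolerated; otherwise it is decoded perfectly (q_vu = 0).
    gamma_v: received SINRs; R_v: preset transmit data rates."""
    outage = np.log2(1.0 + np.asarray(gamma_v, dtype=float)) < np.asarray(R_v, dtype=float)
    return np.where(outage, p, 0)
```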

6 Retransmission Mechanism for Robustness Enhancement

When the given robustness requirement is not satisfied, retransmission is needed to enhance the prediction robustness. As mentioned in Section 3.2, the channel gain remains static within a symbol block but is i.i.d. across different blocks. This channel diversity over different blocks can be exploited by the maximal-ratio combining (MRC) technique. Specifically, if \mbox{\boldmath{$x$}}_{u} is transmitted for T rounds (one initial transmission and T-1 retransmissions) from node u to node v, the T received copies of \mbox{\boldmath{$x$}}_{u} can be combined by MRC at node v, yielding a received signal denoted as \mbox{\boldmath{$\hat{x}$}}_{vu}(T). The effective received SINR of \mbox{\boldmath{$\hat{x}$}}_{vu}(T) is

\gamma_{vu}(T)=\sum_{t=1}^{T}\gamma_{vu}(t), (36)

where \gamma_{vu}(t) is the SINR of the signal received by node v from node u in the t-th transmission and is assumed to be known at node v according to Section 3.2. Obviously, retransmission helps decrease the BER in uncoded communication systems and the outage probability in coded communication systems, which further enhances the robustness of the decentralized GNN binary classifier.
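As a toy illustration of (36), the snippet below accumulates per-round SINRs under MRC and shows how the (BPSK-assumed) BER and the outage indicator improve after each round; the numerical SINR values and the rate R = 1 bit/s/Hz are arbitrary examples, not results from the paper.

```python
import numpy as np
from scipy.stats import norm

per_round_sinr = np.array([0.8, 1.1, 0.6])   # gamma_vu(t), t = 1, 2, 3 (example values)
gamma_eff = np.cumsum(per_round_sinr)        # effective SINR after each round, (36)
ber = norm.sf(np.sqrt(2.0 * gamma_eff))      # BER after MRC, assuming BPSK as in (30)
in_outage = np.log2(1.0 + gamma_eff) < 1.0   # outage check against R = 1 bit/s/Hz
print(ber)        # decreases with every combined round
print(in_outage)  # eventually False once log2(1 + gamma_eff) >= R
```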

However, traditional retransmission mechanisms aim at reliable transmission and terminate only when the target BER is met (uncoded systems) or no neighbor is in outage (coded systems). This inevitably induces high communication overhead, especially in dense networks. To alleviate the communication burden of achieving robust predictions, in the following we adopt the MRC technique and re-design the retransmission mechanisms with novel retransmission criteria and stopping rules for the decentralized GNN binary classifier in uncoded and coded communication systems, respectively.

6.1 Retransmission in Uncoded Communication Systems

In this part, we develop the retransmission mechanism for uncoded communication systems, used when the predicted label of node v does not satisfy the robustness requirement, i.e., p_{v}^{(r)}<p_{v}^{(t)}. First, we need a criterion to decide whether a neighbor of node v needs to retransmit. According to (30), node v can estimate the BER \epsilon_{vu} of each received signal from its received SINR. Generally, when \epsilon_{vu} is large, the probability of q_{vu}\leq q_{v}^{U} is small, which in turn leads to a small robustness probability for node v. Hence, we need to identify the received signals with large BER and ask the corresponding neighbors to retransmit. To this end, we introduce a new variable, \epsilon_{v}^{U}, called the BER bound, which is the solution of the following equation

\sqrt[|N(v)|]{p_{v}^{(t)}}=\sum_{i=0}^{q_{v}^{U}}\tbinom{p}{i}(\epsilon_{v}^{U})^{i}(1-\epsilon_{v}^{U})^{p-i}. (37)

Based on the above definition, \epsilon_{v}^{U} can be interpreted as the maximum BER of each received signal that guarantees that the robustness probability of \hat{c}_{v} is no smaller than p_{v}^{(t)}. In this way, we can use \epsilon_{v}^{U} as a threshold to decide whether retransmission is needed: if \epsilon_{vu}>\epsilon_{v}^{U}, node u needs to retransmit \mbox{\boldmath{$x$}}_{u} to node v.
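Since the right-hand side of (37) is the Binomial(p, \epsilon) CDF evaluated at q_{v}^{U}, which decreases monotonically in \epsilon, the BER bound can be obtained with a scalar root finder. Below is a minimal sketch using SciPy's brentq; the edge-case handling (returning 1 when the requirement is satisfied for any BER) is our own convention, not specified in the paper.

```python
from scipy.stats import binom
from scipy.optimize import brentq

def ber_bound(p_target, n_neighbors, q_upper, p):
    # Solve (37) for eps_v^U: the per-neighbor requirement is p_target**(1/|N(v)|),
    # and the RHS of (37) is the Binomial(p, eps) CDF at q_v^U, which equals 1 at
    # eps = 0 and decreases towards 0 as eps grows.
    lhs = p_target ** (1.0 / n_neighbors)
    f = lambda eps: binom.cdf(q_upper, p, eps) - lhs
    if f(1.0 - 1e-12) >= 0.0:
        return 1.0          # requirement met for any BER (e.g., when q_v^U = p)
    return brentq(f, 0.0, 1.0 - 1e-12)
```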

With the above retransmission criterion and the MRC technique, the retransmission mechanism for the decentralized GNN binary classifier in uncoded communication systems is summarized in Algorithm 1 and described as follows. After the initial transmission and prediction, if p_{v}^{(r)}<p_{v}^{(t)}, node v requests each neighbor u with \epsilon_{vu}>\epsilon_{v}^{U} to retransmit its node features. Using the MRC technique, node v coherently combines the received copies of each retransmitted signal, computes the corresponding effective SINR as in (36), and updates the received sliced feature matrix \hat{\mbox{\boldmath{$X$}}}_{v}. Then, node v re-predicts its label \hat{c}_{v} and re-computes the robustness probability p_{v}^{(r)}. The whole transmission process ends when p_{v}^{(r)} becomes no smaller than p_{v}^{(t)}.

Algorithm 1 Decentralized GNN Binary Classifier in Uncoded Communication Systems
1:  for node vv in 𝒱\mathcal{V} do
2:     Initial transmission:
3:     Send transmission request to each neighbor in N(v)N(v).
4:     Receive signals from neighbors: {𝒙^vu,uN(v)}\{\hat{\mbox{\boldmath{$x$}}}_{vu},u\in N(v)\} (or say 𝑿^v\mbox{\boldmath{$\hat{X}$}}_{v}).
5:     Compute BERs for all received signals using (30): {ϵvu,uN(v)}\{\epsilon_{vu},u\in N(v)\}.
6:     Use local GNN copy and get the predicted label c^v\hat{c}_{v}.
7:     Compute pv(r)p_{v}^{(r)} using (32).
8:     Retransmission:
9:     while pv(r)<pv(t)p_{v}^{(r)}<p_{v}^{(t)} do
10:        Get qvUq_{v}^{U} by solving Problem (31).
11:        Get retransmission threshold ϵvU\epsilon_{v}^{U} using (37).
12:        Send retransmission request to each neighbor uu with ϵvu>ϵvU\epsilon_{vu}>\epsilon_{v}^{U}.
13:        for neighbor uu with ϵvu>ϵvU\epsilon_{vu}>\epsilon_{v}^{U} do
14:           Retransmit 𝒙u\mbox{\boldmath{$x$}}_{u} to node vv.
15:        end for
16:        Use the MRC technique to combine all received copies for each neighbor with ϵvu>ϵvU\epsilon_{vu}>\epsilon_{v}^{U}.
17:        Re-compute BERs and update 𝑿^v\mbox{\boldmath{$\hat{X}$}}_{v}.
18:        Re-compute c^v\hat{c}_{v} and pv(r)p_{v}^{(r)}.
19:     end while
20:  end for
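A compact Python rendering of the control loop in Algorithm 1 is sketched below. draw_sinr and predict_label are placeholders for the channel model and the local GNN forward pass, while solve_q_upper, robustness_probability, and ber_bound play the roles of Problem (31), (32), and (37) (e.g., the helpers sketched earlier; their exact signatures here are illustrative). BPSK is again assumed when mapping SINR to BER.

```python
import numpy as np
from scipy.stats import norm

def run_node_uncoded(neighbors, draw_sinr, predict_label,
                     solve_q_upper, robustness_probability, ber_bound,
                     p, p_target):
    # One node v in an uncoded system: initial transmission, then MRC-based
    # retransmissions until the robustness probability reaches p_target.
    gamma = {u: draw_sinr(u) for u in neighbors}        # initial transmission
    rounds = 1
    while True:
        # BER of each (combined) received signal, BPSK assumed as in (30)
        eps = {u: float(norm.sf(np.sqrt(2.0 * g))) for u, g in gamma.items()}
        label = predict_label(gamma)                    # local GNN copy
        q_upper = solve_q_upper(gamma)                  # Problem (31)
        p_r = robustness_probability(list(gamma.values()), q_upper, p)  # (32)
        if p_r >= p_target:                             # stopping rule
            return label, rounds
        eps_upper = ber_bound(p_target, len(neighbors), q_upper, p)     # (37)
        for u in neighbors:
            if eps[u] > eps_upper:                      # retransmission criterion
                gamma[u] += draw_sinr(u)                # MRC combining, (36)
        rounds += 1
```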

6.2 Retransmission in Coded Communication Systems

In this part, we develop the retransmission mechanism for coded communication systems, used when the predicted label of node v is not robust, i.e., \dot{z}(\mbox{\boldmath{$q$}}_{v},\mbox{\boldmath{$\hat{X}$}}_{v})<0. Obviously, not all received signals but only those in outage need to be retransmitted. In other words, if \log(1+\gamma_{vu})<R_{u}, node u needs to retransmit \mbox{\boldmath{$x$}}_{u} to node v. Using the MRC technique, all received copies of \mbox{\boldmath{$x$}}_{u} are coherently combined and the effective SINR is given by (36), which is used to check whether the combined received signal is in outage. Specifically, after T transmissions, if \log(1+\gamma_{vu}(T))\geq R_{u}, then \mbox{\boldmath{$\hat{x}$}}_{vu}(T)=\mbox{\boldmath{$x$}}_{u} and q_{vu}(T)=0. Otherwise, \mbox{\boldmath{$\hat{x}$}}_{vu}(T)=\mbox{\boldmath{$0$}} and q_{vu}(T)=p. In this way, \mbox{\boldmath{$\hat{X}$}}_{v} and \mbox{\boldmath{$q$}}_{v} are updated, which enables node v to re-predict its label and re-solve Problem (12) to check the robustness of the newly-obtained predicted label. If it is robust, the whole transmission process ends. The retransmission mechanism for the decentralized GNN binary classifier in coded communication systems is summarized in Algorithm 2.

Algorithm 2 Decentralized GNN Binary Classifier in Coded Communication Systems
1:  for node vv in 𝒱\mathcal{V} do
2:     Initial transmission:
3:     Send transmission request to each neighbor in N(v)N(v).
4:     Receive signals from neighbors: {𝒙^vu,uN(v)}\{\hat{\mbox{\boldmath{$x$}}}_{vu},u\in N(v)\} (or say 𝑿^v\mbox{\boldmath{$\hat{X}$}}_{v}).
5:     Check whether an outage occurs for each received signal and get 𝒒v\mbox{\boldmath{$q$}}_{v} using (35).
6:     Use local GNN copy and get the predicted label c^v\hat{c}_{v}.
7:     Solve Problem (12) and get z˙(𝒒v,𝑿^v)\dot{z}(\mbox{\boldmath{$q$}}_{v},\mbox{\boldmath{$\hat{X}$}}_{v}).
8:     Retransmission:
9:     while z˙(𝒒v,𝑿^v)<0\dot{z}(\mbox{\boldmath{$q$}}_{v},\mbox{\boldmath{$\hat{X}$}}_{v})<0 do
10:        Send retransmission request to each neighbor uu in outage.
11:        for neighbor uu in outage do
12:           Retransmit 𝒙u\mbox{\boldmath{$x$}}_{u} to node vv.
13:        end for
14:        Use the MRC technique to combine all received copies for each neighbor in outage.
15:        Re-check whether an outage occurs for each newly received signal.
16:        Update 𝑿^v\mbox{\boldmath{$\hat{X}$}}_{v} and 𝒒v\mbox{\boldmath{$q$}}_{v}.
17:        Re-compute c^v\hat{c}_{v} and z˙(𝒒v,𝑿^v)\dot{z}(\mbox{\boldmath{$q$}}_{v},\mbox{\boldmath{$\hat{X}$}}_{v}).
18:     end while
19:  end for
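Analogously, a minimal sketch of Algorithm 2 is given below. predict_and_verify stands for running the local GNN copy and solving the robustness verification problem (12) for a given \mbox{\boldmath{$q$}}_{v}, returning the predicted label together with \dot{z}; draw_sinr again models the channel. Both callables, as well as the base-2 rate convention, are illustrative assumptions.

```python
import numpy as np

def run_node_coded(neighbors, draw_sinr, predict_and_verify, R, p):
    # One node v in a coded system: retransmit only to neighbors in outage and
    # combine copies with MRC until the prediction is verified to be robust.
    gamma = {u: draw_sinr(u) for u in neighbors}            # initial transmission
    rounds = 1
    while True:
        # q_vu per (35): p if the (combined) signal is in outage, 0 otherwise
        q = {u: (p if np.log2(1.0 + g) < R else 0) for u, g in gamma.items()}
        label, z_dot = predict_and_verify(q)                # local GNN + Problem (12)
        if z_dot > 0:                                       # prediction is robust
            return label, rounds
        for u in neighbors:
            if q[u] == p:                                   # neighbor u is in outage
                gamma[u] += draw_sinr(u)                    # MRC combining, (36)
        rounds += 1
```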

7 Simulation Results

In this section, we use simulation results to show the performance of the decentralized GNN binary classifier in wireless communication systems and to validate the effectiveness of the proposed retransmission mechanisms. We also examine the impact of several key parameters related to the network design and the retransmission mechanisms to shed light on practical implementation.

7.1 Simulation Setup

We conduct simulations on synthetic random geometric graph data. Specifically, we consider a 2,000 m by 2,000 m two-dimensional square area with N nodes uniformly distributed in the area. The communication radius is set to 500 m, which means that two nodes are connected if their distance is smaller than 500 m. Each graph corresponds to a binary classification problem in a wireless communication system and a specific GNN binary classifier J(\cdot;\mathcal{W}) is generated to solve it. Following [46], each entry of the parameters in set \mathcal{W} is independently drawn from the Gaussian distribution \mathcal{N}(0,10^{2}). Similarly, for all nodes in the graph, each entry of the node features \{\mbox{\boldmath{$x$}}_{v}\}_{v\in\mathcal{V}} is independently drawn from the Bernoulli distribution with success probability 0.3. The true labels of all nodes \{c_{v}\}_{v\in\mathcal{V}} are then obtained by applying the GNN binary classifier in (6), which implies that the misclassification rate would be 0 if the wireless transmission were perfect. For simplicity, we assume that all nodes have the same target robustness probability and transmit at the same data rate and power, i.e., p_{v}^{(t)}=p^{(t)}, R_{v}=R, P_{v}=P, \forall v\in\mathcal{V}, and we do not consider interference among neighbors in the following simulation. Unless otherwise stated, these three parameters are set to 80%, 1 bit/s/Hz, and 0.1 W, respectively. Throughout this section, the presented results are the average performance over all nodes in 200 testing graphs. Our simulation parameters are summarized in Table I.

TABLE I: Simulation Parameters
Parameter | Value
Square Area | 2,000 m × 2,000 m
Communication Radius | 500 m
Bandwidth | 10 MHz
Noise Spectral Density | -174 dBm/Hz
Path Loss Model | 128.1 + 37.6log(d[km]) dB
Shadowing Standard Deviation, std | 8 dB
Target Robustness Probability, p^{(t)} | 80%
Transmit Data Rate, R | 1 bit/s/Hz
Transmit Power, P | 0.1 W
Node Feature Dimension, p | 32
Hidden State Dimension, D | 32
Graph Filter, \hat{A} | Unnormalized graph filter, \hat{A} = A + I_N [40]
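As an illustration of the setup above, the following sketch generates one synthetic random geometric graph, the Bernoulli(0.3) node features, and the unnormalized graph filter, together with the path-loss model from Table I. The GNN parameter generation and label computation via (6) are omitted, and the shadowing draw is our own simplification.

```python
import numpy as np

def generate_graph(n_nodes, area=2000.0, radius=500.0, p_feat=32, seed=0):
    # Drop nodes uniformly in an (area x area) square and connect any two
    # nodes whose distance is below the communication radius.
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, area, size=(n_nodes, 2))
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = ((dist < radius) & (dist > 0.0)).astype(float)     # adjacency matrix A
    features = rng.binomial(1, 0.3, size=(n_nodes, p_feat))  # Bernoulli(0.3) node features
    graph_filter = adj + np.eye(n_nodes)                     # A_hat = A + I_N [40]
    return pos, adj, features, graph_filter

def pathloss_db(d_km, shadow_std=8.0, rng=np.random.default_rng(0)):
    # Path loss in dB: 128.1 + 37.6*log10(d [km]), plus log-normal shadowing
    # with an 8 dB standard deviation as in Table I.
    return 128.1 + 37.6 * np.log10(d_km) + rng.normal(0.0, shadow_std)
```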

7.2 Simulation Results in Uncoded Communication Systems

7.2.1 The Performance without Retransmission

First, we test the performance of the decentralized GNN binary classifier in uncoded communication systems without retransmission. Specifically, we focus on the misclassification rate and the percentage of robust nodes without retransmission. The results are summarized in Figs. 3(a) and 3(b), where “w/o” is the abbreviation for “without”. From the figures, when the transmit power is small, imperfect transmission leads to about 11%-13% incorrect predictions. As the transmit power increases, the received SINR increases, which in turn decreases the BER of the received signals. Therefore, the misclassification rate decreases and the percentage of robust nodes increases. Moreover, both metrics change at a decreasing rate as the transmit power increases, indicating that increasing the power only brings a limited performance enhancement for the decentralized GNN binary classifier. Without other mechanisms, extremely high power consumption would be needed to achieve the robustness requirement, which highlights the necessity of the retransmission mechanism, especially for power-limited systems.

Figure 3: Simulation results for the decentralized GNN binary classifier in uncoded communication systems. (a) Misclassification rate w/o and w/ retransmission. (b) Percentage of robust nodes w/o retransmission. (c) Average needed transmission rounds.
TABLE II: #Transmission Ratio (Ratio_T) between the Traditional and the Proposed Retransmission Mechanisms for Scenarios with N=200
Transmit Power (W) | 0.1 | 0.5 | 1.0 | 1.5 | 2.0
Uncoded | 1.52 | 1.47 | 1.42 | 1.42 | 1.42
Coded | 2.02 | 2.00 | 1.94 | 1.99 | 1.59

7.2.2 The Performance with Retransmission

To show the performance of the decentralized GNN in uncoded communication systems with retransmission, we focus on the misclassification rate with retransmission and the average number of transmission rounds needed to achieve the robustness requirement. The results are summarized in Figs. 3(a) and 3(c), where “w/” is the abbreviation for “with”. From Fig. 3(a), after each node achieves the preset robustness requirement p^{(t)}=80%, the misclassification rate is around 0, which suggests that the retransmission mechanism proposed in Section 6.1 can efficiently reduce incorrect predictions and enhance the prediction robustness. Furthermore, Fig. 3(c) shows that the average number of transmission rounds needed to achieve the robustness requirement decreases with the transmit power. The reason has been discussed in Section 7.2.1: larger transmit powers bring fewer errors, and therefore fewer transmission rounds are needed to achieve the preset p^{(t)}. Undoubtedly, both metrics are closely related to the value of p^{(t)}, which will be further discussed later.

To further validate the advantage of the proposed retransmission mechanism, we compare it with the traditional retransmission mechanism, which stops only when the BER of the received signal from every neighbor is smaller than a given threshold that is manually set in advance. For a fair comparison, the BER threshold of the traditional mechanism should be comparable with that of the proposed mechanism, i.e., \epsilon_{v}^{U}. However, the value of \epsilon_{v}^{U} changes from transmission to transmission; we observe that it varies between 9×10^{-5} and 3×10^{-4} in the above simulation. Therefore, to be conservative about our performance gain, we set the BER threshold of the traditional retransmission mechanism to 3×10^{-4} in the following simulation. Specifically, we focus on the gain in terms of the average number of needed transmission rounds, #Transmission, and the results are summarized in Table II, where Ratio_T refers to the ratio between the #Transmission of the traditional and the proposed retransmission mechanisms. The results suggest that our proposed retransmission mechanism is superior to the traditional one, especially when the transmit power is small.

7.2.3 The Impact of Target Robustness Probability

In uncoded communication systems, the performance highly depends on the value of the target robustness probability, as mentioned above. Therefore, in this part, we study the impact of p^{(t)} with fixed transmit power P=0.1 W, and the simulation results are summarized in Fig. 4. Obviously, with the increase of p^{(t)}, the misclassification rate with retransmission decreases and finally converges to 0 (for p^{(t)}≥80%), as shown in Fig. 4(a). Also, fewer nodes can achieve the robustness requirement without retransmission, as indicated by Fig. 4(b), and eventually more transmission rounds are needed, as shown in Fig. 4(c). Furthermore, all three metrics change very sharply when p^{(t)} increases from 0 to 10%, which indicates that even a low robustness requirement already helps substantially to enhance the prediction accuracy. Another important observation is that the average number of needed transmission rounds increases very quickly when p^{(t)}≥90%. Given that the misclassification rate converges to 0 when p^{(t)} reaches 80%, there is no need to set p^{(t)} larger than 80% for robust predictions, as doing so does not enhance the prediction accuracy but leads to extra communication overhead. Obviously, the threshold value (80% in this test setting) varies with the application, but the key insight concerning the trade-off between prediction accuracy and communication overhead when selecting p^{(t)} always holds. In practice, it should be chosen reasonably according to the specific accuracy requirement.

Figure 4: Simulation results in uncoded communication systems with different target robustness probabilities. (a) Misclassification rate w/ retransmission. (b) Percentage of robust nodes w/o retransmission. (c) Average needed transmission rounds.

7.3 Simulation Results in Coded Communication Systems

7.3.1 The Performance without Retransmission

Similar to Section 7.2.1, we use the misclassification rate and the percentage of robust nodes without retransmission as two metrics to show the performance of the decentralized GNN binary classifier in coded communication systems without retransmission. The results are summarized in Figs. 5(a) and 5(b). From the figures, it can be seen that the impact of the transmit power is similar to that in uncoded systems. However, by comparing the simulation results in Figs. 3 and 5, with the same transmit power, the decentralized GNN in coded communication systems has a smaller misclassification rate and a larger percentage of robust nodes, which indicates the benefits of coding before transmission.

Figure 5: Simulation results for the decentralized GNN binary classifier in coded communication systems. (a) Misclassification rate w/o retransmission. (b) Percentage of robust nodes w/o retransmission. (c) Average needed transmission rounds.

7.3.2 The Performance with Retransmission

In this part, we study the performance of the decentralized GNN binary classifier in coded systems with retransmission. Different from uncoded communication systems, where retransmission can only guarantee that the predicted label is robust with probability p^{(t)}, the predicted label in coded communication systems with retransmission is definitely robust, i.e., it equals the true label. This indicates that the misclassification rate with retransmission is always 0 in coded communication systems. Therefore, we only use the average number of transmission rounds needed to achieve robustness as a metric in this part, and the results are summarized in Fig. 5(c). Meanwhile, the comparison with the traditional retransmission mechanism, which stops only when no neighbor is in outage, is presented in Table II.

Similar to the analysis in Section 7.2.2, the average number of needed transmission rounds decreases with the transmit power and is only about 50% of that under the traditional retransmission mechanism. Moreover, with the same transmit power, coded communication systems need fewer transmission rounds to achieve the robustness requirement than uncoded communication systems. This again validates the superiority of coded systems in terms of prediction robustness.

7.3.3 The Impact of Transmit Data Rate

The transmit data rate is an important parameter in coded communication systems, as it directly influences the outage probability and the time consumption. Therefore, we study its impact on the robustness of coded systems. The results are summarized in Fig. 6, where the effective data rate per transmission round in Fig. 6(d) is defined as the ratio between the transmit data rate and the number of transmission rounds needed to achieve robustness. It is therefore negatively related to the time the decentralized GNN binary classifier needs to achieve robustness (Algorithm 2).

Figure 6: Simulation results in coded communication systems with different transmit data rates. (a) Misclassification rate w/o retransmission. (b) Percentage of robust nodes w/o retransmission. (c) Average needed transmission rounds. (d) Effective data rate per transmission round.

From Figs. 6(a) and 6(b), a larger transmit data rate deteriorates the prediction robustness (larger misclassification rates and smaller percentages of robust nodes without retransmission). With the same preset transmit power (0.1 W), a larger transmit data rate leads to a higher outage probability of the received signals. Accordingly, more transmission rounds are needed to achieve robustness with a larger transmit data rate, as shown in Fig. 6(c). Furthermore, the effective data rate per transmission round first increases and then decreases with the transmit data rate, as can be seen in Fig. 6(d). From this figure, a suitable value of R (one that maximizes the effective data rate) is 3.5 bit/s/Hz for this test setting, and it varies across applications. The results indicate that the selection of the transmit data rate also involves a trade-off (similar to the conclusion about p^{(t)} in Section 7.2.3) and should be made carefully for practical implementation.

7.4 The Impact of Network Parameters

To gain more insights into the wireless network design and practical implementation, we study the impact of two important network parameters, i.e., the node density and the node feature dimension, in this section.

7.4.1 The Impact of Node Density

First, we study the impact of the node density on the decentralized GNN binary classifier in wireless communication systems. To this end, we conduct simulations on graphs with different numbers of nodes located in a fixed area (2,000 m × 2,000 m). The results are summarized in Figs. 3, 4, 5, and 6. From these figures, increasing the node density deteriorates the performance of the decentralized GNN binary classifier. Specifically, in both uncoded and coded communication systems, larger node densities lead to larger misclassification rates and smaller percentages of robust nodes without retransmission. When the retransmission mechanism is used, the predicted results must satisfy the target robustness probability. Therefore, the misclassification rate with retransmission is always around 0 regardless of the node density, and the corresponding curves are close to each other. However, the corresponding communication overhead differs: more transmission rounds are needed for networks with larger node densities to achieve a given robustness requirement. This can be explained as follows. The node density directly influences the number of neighbors of each node in the graph, i.e., the node degree |N(v)|. As shown in Table III, larger node densities correspond to larger node degrees. For uncoded communication systems, larger node degrees lead to a smaller robustness probability for a given SINR or BER of the received signals, according to (32). For coded communication systems, larger node degrees may increase the number of neighbors in outage, which deteriorates the prediction robustness. Overall, networks with small node densities are preferred in terms of prediction robustness. Nowadays, however, ultra-dense networks (UDNs) are prevalent, where the neighborhood sampling technique can be utilized to enhance the prediction robustness of the decentralized GNN.

TABLE III: Average Node Degree for Graphs with Different Node Densities
Number of Nodes, N | 50 | 100 | 150 | 200
Average Node Degree | 7.61 | 15.56 | 23.33 | 31.15

7.4.2 The Impact of Node Feature Dimension

Besides the node density mentioned above, the node feature dimension is also a key parameter for the practical network design, which directly corresponds to the length of the packet to be transmitted. In the following, we study the impact of the node feature dimension on the decentralized GNN binary classifier in wireless communication systems with N=50N=50. The simulation results are summarized in Figs. 7 and 8.

Figure 7: Simulation results in uncoded communication systems with different node feature dimensions. (a) Misclassification rate w/o and w/ retransmission. (b) Percentage of robust nodes w/o retransmission. (c) Average needed transmission rounds.
Figure 8: Simulation results in coded communication systems with different node feature dimensions. (a) Misclassification rate w/o retransmission. (b) Percentage of robust nodes w/o retransmission. (c) Average needed transmission rounds.

As can be seen, the impact of the node feature dimension is different for uncoded and coded communication systems. From Fig. 7, larger node feature dimensions deteriorate the robustness of the decentralized GNN binary classifier in uncoded communication systems (larger misclassification rates and smaller percentages of robust nodes without retransmission). Again, this can be explained by the definition of the robustness probability in (32): when the SINR or BER of the received signals is fixed, increasing p leads to a smaller robustness probability. Accordingly, more transmission rounds are needed for networks with larger node feature dimensions to achieve a given robustness requirement, as depicted in Fig. 7(c).

However, the robustness of the decentralized GNN binary classifier in coded communication systems does not change with p. The reason is that packet-level errors are considered in coded communication systems and the length of each packet does not matter (as long as its transmission can be completed within the coherence time). Based on the above simulation results, for uncoded communication systems, we need to constrain the node feature dimension. As mentioned in Section 3.1, the node feature dimension is related to the quantization precision. Higher-precision quantization of the internal characteristics leads to a larger node feature dimension but alleviates the difficulty of learning a good GNN model. Therefore, there exists a trade-off between the training difficulty and the prediction robustness for the decentralized GNN binary classifier in uncoded communication systems.

7.5 Comparison between Uncoded and Coded Communication Systems

Up to this point, we have analyzed and tested the performance of the decentralized GNN binary classifier and the corresponding retransmission mechanisms in both uncoded and coded communication systems. As a summary of the discussion in this section, we highlight the major differences between the two communication systems in Table IV. Overall, coded communication systems are superior to uncoded communication systems for the robust implementation of the GNN binary classifier. Undoubtedly, transmission with coded signals induces overhead in other aspects, such as the decoding delay. The transmission mode should therefore be chosen according to the specific wireless application.

TABLE IV: Comparison between Uncoded and Coded Communication Systems
System Model | Uncoded | Coded
Error | Bit-level (BER) | Packet-level (Outage)
Robustness Metric | Robustness Probability | Robustness Indicator
Misclassification Rate | Larger | Smaller
Percentage of Robust Nodes | Smaller | Larger
Average Needed Retransmission Rounds | More | Fewer
Scalability on Node Feature Dimension | Limited | Strong

8 Conclusion

GNN is a potentially powerful tool for a wide range of applications in wireless communication systems. Different from other widely used neural network models, the inference stage of a GNN can be naturally implemented in a decentralized manner with information exchanges among neighbors over wireless channels. However, wireless transmission is generally imperfect due to noise and fading, which is the main bottleneck of the decentralized GNN. Therefore, using the GNN binary classifier as an example, this paper analyzes the performance of the decentralized GNN during the inference stage in wireless communication systems and develops retransmission mechanisms to enhance the prediction robustness. We first develop a methodology to check whether the predicted labels are robust. Then we analyze the performance of the decentralized GNN binary classifier in both uncoded and coded communication systems. Furthermore, we develop novel retransmission mechanisms to enhance the prediction robustness in the above two communication systems. To verify our analysis and the effectiveness of the proposed retransmission mechanisms, we conduct simulations on synthetic graph data. The results suggest that the proposed retransmission mechanisms efficiently enhance the robustness of the decentralized GNN. We also provide insights into the related wireless network design and practical implementation. The proposed analysis, retransmission mechanisms, and corresponding insights are applicable to any GNN-based decentralized binary control/management task in wireless communication systems.

To the best of our knowledge, we are among the first to study robust decentralized inference with GNNs over noisy wireless channels. The current work represents a first step towards this goal and several open problems still need further investigation. First, this paper focuses on a single-layer GNN binary classifier, where nodes only need to exchange node features with each other. For multi-layer GNNs, however, hidden states at different layers also need to be shared among neighbors through wireless channels, for which the synchronization among neighbors cannot be neglected. Therefore, designing an effective synchronization protocol is an important future direction to extend the analysis in this paper. Furthermore, we assume the existence of a well-trained GNN model whose accuracy under perfect transmission is 100% to simplify the analysis. This assumption may not hold in practice: besides imperfect transmission, incorrect predictions may also be caused by the limited generalizability of the GNN model on unseen samples. How to analyze the robustness of the decentralized GNN with limited generalizability to testing samples is worth future investigation. Moreover, in our proposed retransmission mechanisms, all unqualified neighbors are asked to retransmit and the robustness is re-checked after all retransmissions are done. In practice, retransmission is generally performed serially; it is possible that robustness is achieved after retransmissions from some (but not all) unqualified neighbors, so the order of retransmission may matter. Investigating a reasonable retransmission order to reduce the communication overhead is an interesting topic. Finally, we only consider uncoded and simplified coded communication systems in this paper for theoretical analysis. However, error correction codes, cyclic redundancy checks, or other more sophisticated coding mechanisms that can detect and correct errors are usually adopted in practical systems, and these mechanisms certainly influence the robustness of the decentralized GNN. One can also consider other techniques, such as joint source-channel coding and error correction mechanisms, instead of retransmission to further enhance the robustness of the decentralized GNN in wireless communication systems.

References

  • [1] W. Saad, M. Bennis, and M. Chen, “A vision of 6G wireless systems: Applications, trends, technologies, and open research problems,” IEEE Netw., vol. 34, no. 3, pp. 134–142, May/Jun. 2020.
  • [2] T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 4, pp. 563–575, Dec. 2017.
  • [3] F. Liang et al., “An iterative BP-CNN architecture for channel decoding,” IEEE J. Sel. Topics Signal Process., vol. 12, no. 1, pp. 144–159, Feb. 2018.
  • [4] Z. Qin et al., “Deep learning in physical layer communications,” IEEE Wireless Commun., vol. 26, no. 2, pp. 93–99, Apr. 2019.
  • [5] Q. Hu et al., “Iterative algorithm induced deep-unfolding neural networks: Precoding design for multiuser MIMO systems,” IEEE Trans. Wireless Commun., vol. 20, no. 2, pp. 1394–1410, Feb. 2021.
  • [6] H. Sun et al., “Learning to optimize: Training deep neural networks for interference management,” IEEE Trans. Signal Process., vol. 66, no. 20, pp. 5438–5453, Oct. 2018.
  • [7] Y. Shen et al., “LORM: Learning to optimize for resource management in wireless networks with few training samples,” IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 665–679, Jan. 2020.
  • [8] M. Lee, G. Yu, and G. Y. Li, “Learning to branch: Accelerating resource allocation in wireless networks,” IEEE Trans. Veh. Technol., vol. 69, no. 1, pp. 958–970, Jan. 2020.
  • [9] W. Cui et al., “Spatial deep learning for wireless scheduling,” IEEE J. Sel. Areas Commun., vol. 37, no. 6, pp. 1248–1261, Jun. 2019.
  • [10] M. Lee et al., “Graph embedding based wireless link scheduling with few training samples,” IEEE Trans. Wireless Commun., vol. 20, no. 4, pp. 2282–2294, Apr. 2021.
  • [11] H. Ye, G. Y. Li, and B.-H. F. Juang, “Deep reinforcement learning based resource allocation for V2V communications,” IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3163–3173, Apr. 2019.
  • [12] M. Wang, Y. Cui, X. Wang, S. Xiao, and J. Jiang, “Machine learning for networking: WorkFlow, advances and opportunities,” IEEE Netw., vol. 32, no. 2, pp. 92–99, Mar./Apr. 2018.
  • [13] E. Zeydan et al., “Big data caching for networking: Moving from cloud to edge,” IEEE Commun. Mag., vol. 54, no. 9, pp. 36–42, Sep. 2016.
  • [14] Z. Wu et al., “A comprehensive survey on graph neural networks,” arXiv preprint arXiv:1901.00596, Dec. 2019.
  • [15] J. Zhou et al., “Graph neural networks: A review of methods and applications,” arXiv preprint arXiv:1812.08434, Jul. 2019.
  • [16] D. Xu et al., “Scene graph generation by iterative message passing,” in Proc. of CVPR, vol. 2, 2017.
  • [17] H. Yao et al., “Deep multi-view spatial-temporal network for taxi demand prediction,” in Proc. of AAAI, 2018, pp. 2588–2595.
  • [18] A. Fout et al., “Protein interface prediction using graph convolutional networks,” in Proc. of NIPS, 2017, pp. 6530–6539.
  • [19] W. Jin et al., “Adversarial attacks and defenses on graphs: A review and empirical study,” arXiv preprint arXiv:2003.00653, 2020.
  • [20] H. Dai et al., “Adversarial attack on graph structured data,” in Proc. of ICML, Jul. 2018, pp. 1115–1124.
  • [21] K. Xu et al., “Topology attack and defense for graph neural networks: An optimization perspective,” in Proc. of IJCAI, 2019, pp. 3961–3967.
  • [22] V. N. Ioannidis et al., “Graphsac: Detecting anomalies in large-scale graphs,” arXiv preprint arXiv:1910.09589, 2019.
  • [23] Y. Zhang et al., “Comparing and detecting adversarial attacks for graph deep learning,” in Proc. Int. Conf. Learning Representation, May 2019.
  • [24] D. Zhu et al., “Robust graph convolutional networks against adversarial attacks,” in Proc. of KDD, Aug. 2019, pp. 1399–1407.
  • [25] M. Jin et al., “Power up! robust graph convolutional network against evasion attacks based on graph powering,” arXiv preprint arXiv:1905.10029, 2019.
  • [26] D. Zügner and S. Günnemann, “Certifiable robustness and robust training for graph convolutional networks,” in Proc. of KDD, Aug. 2019, pp. 246–256.
  • [27] A. Bojchevski and S. Günnemann, “Certifiable robustness to graph perturbations,” in Proc. of NIPS, 2019, pp. 8317–8328.
  • [28] M. Eisen and A. R. Ribeiro, “Optimal wireless resource allocation with random edge graph neural networks,” IEEE Trans. Signal Process., vol. 68, pp. 2977–2991, Apr. 2020.
  • [29] Y. Shen et al., “Graph neural networks for scalable radio resource management: Architecture design and theoretical analysis,” IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 101–115, Jan. 2021.
  • [30] R. Liu et al., “User association for millimeter-wave networks: A machine learning approach,” IEEE Trans. Commun., vol. 67, no. 7, pp. 4162–4174, Jul. 2020.
  • [31] E. Tolstaya et al., “Learning decentralized controllers for robot swarms with graph neural networks,” in Proc. Conf. Robot Learn., 2020, pp. 671–682.
  • [32] Q. Li, F. Gama, A. Ribeiro, and A. Prorok, “Graph neural networks for decentralized multi-robot path planning,” in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2020, pp. 1–8.
  • [33] F. Gama, E. Tolstaya, and A. Ribeiro, “Graph neural networks for decentralized controllers,” arXiv preprint arXiv:2003.10280v2, 2020.
  • [34] W. Y. B. Lim et al., “Federated learning in mobile edge networks: A comprehensive survey,” IEEE Commun. Surveys Tuts., vol. 22, no. 3, pp. 2031–2063, Apr. 2020.
  • [35] M. Chen et al., “Convergence time optimization for federated learning over wireless networks,” IEEE Trans. Wireless Commun., vol. 20, no. 4, pp. 2457–2471, Apr. 2021.
  • [36] C. Dinh et al., “Federated learning over wireless networks: Convergence analysis and resource allocation,” arXiv preprint arXiv:1910.13067, 2019.
  • [37] J. Ren, G. Yu, and G. Ding, “Accelerating DNN training in wireless federated edge learning system,” IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 219 –232, Jan. 2021.
  • [38] S. Wang et. al., “Adaptive federated learning in resource constrained edge computing systems,” IEEE J. Sel. Areas Commun., vol. 37, no. 6, pp. 1205–1221, Jun. 2019.
  • [39] D. Liu et.al., “Wireless data acquisition for edge learning: Data-importance aware retransmission,” IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 406–420, Jan. 2021.
  • [40] K. Xu et.al., “How powerful are graph neural networks,” in Proc. Int. Conf. Learning Representation, 2019, pp. 1–17.
  • [41] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in Proc. Int. Conf. Learning Representation, Apr. 2017.
  • [42] G. Puy et al., “Unifying local and non-local signal processing with graph CNNs,” arXiv preprint arXiv:1702.07759, Feb. 2017.
  • [43] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski, Robust Optimization, ser. Princeton Series in Applied Mathematics. Princeton, NJ: Princeton University Press, 2009.
  • [44] E. Wong and J. Z. Kolter, “Provable defenses against adversarial examples via the convex outer adversarial polytope,” in Proc. Int. Conf. Mach. Learn., 2018, pp. 5283–5292.
  • [45] A. Goldsmith, Wireless Communications. New York, NY, USA: Cambridge Univ. Press, 2005.
  • [46] S. Zhang et al., “Fast learning of graph neural networks with guaranteed generalizability: One-hidden-layer case,” in Proc. Int. Conf. Mach. Learn., 2020, pp. 11268–11277.
  • [47] S. Agarwal, R. H. Katz, S. V. Krishnamurthy, and S. K. Dao, “Distributed power control in ad hoc wireless networks,” in Proc. IEEE Int. Symp. Pers., Indoor, and Mobile Radio Commun., vol. 2, Oct. 2001, pp. 59–66.
  • [48] C. W. Sung and K. K. Leung, “A generalized framework for distributed power control in wireless networks,” IEEE Trans. Info. Theory, vol. 51, no. 7, pp. 2625–2635, July 2005.
  • [49] A. Zappone, E. Jorswieck, and A. Leshem, “Distributed resource allocation for energy efficiency in MIMO OFDMA wireless networks,” IEEE J. Sel. Areas Commun., vol. 34, no. 12, pp. 3451–3465, Dec 2016.
  • [50] W. Sun, Z. Yang, X. Zhang, and Y. Liu, “Energy-efficient neighbor discovery in mobile ad hoc and wireless sensor networks: A survey,” IEEE Commun. Surveys Tuts., vol. 16, no. 3, pp. 1448–1459, Mar. 2014.

Appendix A Dual Problem for Problem (23)

We reformulate Problem (23) as follows

min{𝑿~v,𝒉v,𝒉^𝒗,c~v,𝚫v}c^v×c~v,\min_{\{\mbox{\boldmath{$\tilde{X}$}}_{v},\mbox{\boldmath{$h$}}_{v},\mbox{\boldmath{$\hat{h}_{v}$}},\tilde{c}_{v},\mbox{\boldmath{$\Delta$}}_{v}\}}\hat{c}_{v}\times\tilde{c}_{v}, (38)

subject to

c~v=𝒉v𝒘+b,ρv\displaystyle\tilde{c}_{v}=\mbox{\boldmath{$h$}}_{v}\mbox{\boldmath{$w$}}+b,\Rightarrow\rho_{v}\in\mathbb{R} (38a)
𝒉^𝒗=a^v𝒙v𝜽+𝒂^v𝑿~v𝜽,𝜶v1×D\displaystyle\mbox{\boldmath{$\hat{h}_{v}$}}=\hat{a}_{v}\mbox{\boldmath{$x$}}_{v}\mbox{\boldmath{$\theta$}}+\mbox{\boldmath{$\hat{a}$}}_{v}\mbox{\boldmath{$\tilde{X}$}}_{v}\mbox{\boldmath{$\theta$}},\Rightarrow\mbox{\boldmath{$\alpha$}}_{v}\in\mathbb{R}^{1\times D} (38b)
0𝑿~v[u,j]1,uN(v),j{1,2,..,p},\displaystyle 0\leq\mbox{\boldmath{$\tilde{X}$}}_{v}[u,j]\leq 1,\forall u\in N(v),j\in\{1,2,..,p\},
𝚿v,𝚿v+|N(v)|×p\displaystyle\Rightarrow\mbox{\boldmath{$\Psi$}}_{v}^{-},\mbox{\boldmath{$\Psi$}}_{v}^{+}\in\mathbb{R}^{|N(v)|\times p} (38c)
𝑿~v[u,j]𝑿^v[u,j]𝚫v[u,j],uN(v),\displaystyle||\mbox{\boldmath{$\tilde{X}$}}_{v}[u,j]-\mbox{\boldmath{$\hat{X}$}}_{v}[u,j]||\leq\mbox{\boldmath{$\Delta$}}_{v}[u,j],\forall u\in N(v),
j{1,2,..,p},𝚼v+,𝚼v|N(v)|×p\displaystyle j\in\{1,2,..,p\},\Rightarrow\mbox{\boldmath{$\Upsilon$}}_{v}^{+},\mbox{\boldmath{$\Upsilon$}}_{v}^{-}\in\mathbb{R}^{|N(v)|\times p} (38d)
j=1p𝚫v[u,j]qvu,uN(v),𝜷v|N(v)|\displaystyle\sum_{j=1}^{p}\mbox{\boldmath{$\Delta$}}_{v}[u,j]\leq q_{vu},\forall u\in N(v),\Rightarrow\mbox{\boldmath{$\beta$}}_{v}\in\mathbb{R}^{|N(v)|} (38e)
𝒉v[i]=𝒉^𝒗[i],iv+,𝒉v[i]=0,iv,\displaystyle\mbox{\boldmath{$h$}}_{v}[i]=\mbox{\boldmath{$\hat{h}_{v}$}}[i],\forall i\in\mathcal{I}_{v}^{+},\quad\mbox{\boldmath{$h$}}_{v}[i]=0,\forall i\in\mathcal{I}_{v}^{-}, (38f)
𝒉v[i]0,iv,𝝉v1×D\displaystyle\mbox{\boldmath{$h$}}_{v}[i]\geq 0,\forall i\in\mathcal{I}_{v},\Rightarrow\mbox{\boldmath{$\tau$}}_{v}\in\mathbb{R}^{1\times D} (38g)
𝒉v[i]𝒉^𝒗[i],iv,𝝁v1×D\displaystyle\mbox{\boldmath{$h$}}_{v}[i]\geq\mbox{\boldmath{$\hat{h}_{v}$}}[i],\forall i\in\mathcal{I}_{v},\Rightarrow\mbox{\boldmath{$\mu$}}_{v}\in\mathbb{R}^{1\times D} (38h)
𝒉v[i](𝒖v[i]𝒍v[i])𝒖v[i](𝒉^𝒗[i]𝒍v[i]),iv.\displaystyle\mbox{\boldmath{$h$}}_{v}[i](\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$l$}}_{v}[i])\leq\mbox{\boldmath{$u$}}_{v}[i](\mbox{\boldmath{$\hat{h}_{v}$}}[i]-\mbox{\boldmath{$l$}}_{v}[i]),\forall i\in\mathcal{I}_{v}.
𝝀v1×D\displaystyle\Rightarrow\mbox{\boldmath{$\lambda$}}_{v}\in\mathbb{R}^{1\times D} (38i)

Note that constraint (38f) can be simply eliminated from the optimization. Then the dual problem of Problem (38) is given by

max{ρv,𝜶v,𝚿v+,𝚿v,𝚼v+,𝚼v,𝜷v,𝝉v,𝝁v,𝝀v}ρvb\displaystyle\max_{\{\rho_{v},\mbox{\boldmath{$\alpha$}}_{v},\mbox{\boldmath{$\Psi$}}_{v}^{+},\mbox{\boldmath{$\Psi$}}_{v}^{-},\mbox{\boldmath{$\Upsilon$}}_{v}^{+},\mbox{\boldmath{$\Upsilon$}}_{v}^{-},\mbox{\boldmath{$\beta$}}_{v},\mbox{\boldmath{$\tau$}}_{v},\mbox{\boldmath{$\mu$}}_{v},\mbox{\boldmath{$\lambda$}}_{v}\}}-\rho_{v}b
i=1D𝜶v[i](a^v𝒙v𝜽)[i]uN(v)j=1p𝚿v+[u,j]\displaystyle-\sum_{i=1}^{D}\mbox{\boldmath{$\alpha$}}_{v}[i](\hat{a}_{v}\mbox{\boldmath{$x$}}_{v}\mbox{\boldmath{$\theta$}})[i]-\sum_{u\in N(v)}\sum_{j=1}^{p}\mbox{\boldmath{$\Psi$}}_{v}^{+}[u,j]
+uN(v)j=1p𝑿^v[u,j](𝚼v[u,j]𝚼v+[u,j])\displaystyle+\sum_{u\in N(v)}\sum_{j=1}^{p}\mbox{\boldmath{$\hat{X}$}}_{v}[u,j](\mbox{\boldmath{$\Upsilon$}}_{v}^{-}[u,j]-\mbox{\boldmath{$\Upsilon$}}_{v}^{+}[u,j])
uN(v)qvu𝜷v[u]+iv𝝀v[i]𝒖v[i]𝒍v[i],\displaystyle-\sum_{u\in N(v)}q_{vu}\mbox{\boldmath{$\beta$}}_{v}[u]+\sum_{i\in\mathcal{I}_{v}}\mbox{\boldmath{$\lambda$}}_{v}[i]\mbox{\boldmath{$u$}}_{v}[i]\mbox{\boldmath{$l$}}_{v}[i], (39)

subject to

𝜷v[u]𝚼v+[u,j]+𝚼v[u,j],uN(v),j{1,2,..,p},\displaystyle\mbox{\boldmath{$\beta$}}_{v}[u]\geq\mbox{\boldmath{$\Upsilon$}}_{v}^{+}[u,j]+\mbox{\boldmath{$\Upsilon$}}_{v}^{-}[u,j],\forall u\in N(v),j\in\{1,2,..,p\}, (39a)
𝒂^vT𝜶v𝜽T=𝚿v+𝚿v+𝚼v+𝚼v,\displaystyle\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T}=\mbox{\boldmath{$\Psi$}}_{v}^{+}-\mbox{\boldmath{$\Psi$}}_{v}^{-}+\mbox{\boldmath{$\Upsilon$}}_{v}^{+}-\mbox{\boldmath{$\Upsilon$}}_{v}^{-}, (39b)
ρv=c^v,\displaystyle\rho_{v}=-\hat{c}_{v}, (39c)
𝜶v[i]=𝝀v[i]𝒖v[i]𝝁v[i],iv,\displaystyle\mbox{\boldmath{$\alpha$}}_{v}[i]=\mbox{\boldmath{$\lambda$}}_{v}[i]\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$\mu$}}_{v}[i],\forall i\in\mathcal{I}_{v}, (39d)
(ρv𝒘T)[i]=𝝀v[i](𝒖v[i]𝒍v[i])𝝉v[i]𝝁v[i],iv\displaystyle(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]=\mbox{\boldmath{$\lambda$}}_{v}[i](\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$l$}}_{v}[i])-\mbox{\boldmath{$\tau$}}_{v}[i]-\mbox{\boldmath{$\mu$}}_{v}[i],\forall i\in\mathcal{I}_{v} (39e)
𝜶v[i]=0,iv,𝜶v[i]=(ρv𝒘T)[i],iv+,\displaystyle\mbox{\boldmath{$\alpha$}}_{v}[i]=0,\forall i\in\mathcal{I}_{v}^{-},\quad\mbox{\boldmath{$\alpha$}}_{v}[i]=(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i],\forall i\in\mathcal{I}_{v}^{+}, (39f)
𝚿v+,𝚿v,𝚼v+,𝚼v,𝜷v,𝝉v,𝝁v,𝝀v0.\displaystyle\mbox{\boldmath{$\Psi$}}_{v}^{+},\mbox{\boldmath{$\Psi$}}_{v}^{-},\mbox{\boldmath{$\Upsilon$}}_{v}^{+},\mbox{\boldmath{$\Upsilon$}}_{v}^{-},\mbox{\boldmath{$\beta$}}_{v},\mbox{\boldmath{$\tau$}}_{v},\mbox{\boldmath{$\mu$}}_{v},\mbox{\boldmath{$\lambda$}}_{v}\succeq 0. (39g)

We know that either \mbox{\boldmath{$\lambda$}}_{v} or \mbox{\boldmath{$\tau$}}_{v}+\mbox{\boldmath{$\mu$}}_{v} must be zero, corresponding to the upper and lower bounds of \mbox{\boldmath{$h$}}_{v}, respectively. According to (39e), if \mbox{\boldmath{$\lambda$}}_{v}[i]=0, then we have

(ρv𝒘T)[i]=𝝉v[i]𝝁v[i]𝝉v[i]+𝝁v[i]=[(ρv𝒘T)[i]].(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]=-\mbox{\boldmath{$\tau$}}_{v}[i]-\mbox{\boldmath{$\mu$}}_{v}[i]\Rightarrow\mbox{\boldmath{$\tau$}}_{v}[i]+\mbox{\boldmath{$\mu$}}_{v}[i]=[(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]]_{-}.\vspace{-0.5em}

Similarly, if 𝝉v[i]+𝝁v[i]=0\mbox{\boldmath{$\tau$}}_{v}[i]+\mbox{\boldmath{$\mu$}}_{v}[i]=0,

𝝀v[i](𝒖v[i]𝒍v[i])=[(ρv𝒘T)[i]]+.\mbox{\boldmath{$\lambda$}}_{v}[i](\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$l$}}_{v}[i])=[(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]]_{+}.

Therefore, we have \mbox{\boldmath{$\lambda$}}_{v}[i]=[(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]]_{+}/(\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$l$}}_{v}[i]). Combined with (39c), the following term in the objective function of Problem (A) can be rewritten as \mbox{\boldmath{$\lambda$}}_{v}[i]\mbox{\boldmath{$u$}}_{v}[i]\mbox{\boldmath{$l$}}_{v}[i]=\frac{\mbox{\boldmath{$u$}}_{v}[i]\mbox{\boldmath{$l$}}_{v}[i]}{\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$l$}}_{v}[i]}[\hat{c}_{v}\mbox{\boldmath{$w$}}[i]]_{-}, which is constant.

To further simplify the objective function of Problem (A), we introduce a new vector, ϕv[0,1]D\mbox{\boldmath{$\phi$}}_{v}\in[0,1]^{D}, which satisfies 𝝁v[i]=ϕv[i][(ρv𝒘T)[i]]\mbox{\boldmath{$\mu$}}_{v}[i]=\mbox{\boldmath{$\phi$}}_{v}[i][(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]]_{-}. Therefore, constraint (39d) can be rewritten as

𝜶v[i]\displaystyle\mbox{\boldmath{$\alpha$}}_{v}[i] =𝝀v[i]𝒖v[i]𝝁v[i]\displaystyle=\mbox{\boldmath{$\lambda$}}_{v}[i]\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$\mu$}}_{v}[i] (40)
=𝒖v[i]𝒖v[i]𝒍v[i][(ρv𝒘T)[i]]+ϕv[i][(ρv𝒘T)[i]].\displaystyle=\frac{\mbox{\boldmath{$u$}}_{v}[i]}{\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$l$}}_{v}[i]}[(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]]_{+}-\mbox{\boldmath{$\phi$}}_{v}[i][(\rho_{v}\mbox{\boldmath{$w$}}^{T})[i]]_{-}.

Similarly, we introduce a new matrix, 𝚯v[0,1]|N(v)|×p\mbox{\boldmath{$\Theta$}}_{v}\in[0,1]^{|N(v)|\times p}, which satisfies the following equations

{𝚿v+[u,j]=𝚯v[u,j][(𝒂^vT𝜶v𝜽T)[u,j]]+,𝚿v[u,j]=𝚯v[u,j][(𝒂^vT𝜶v𝜽T)[u,j]],𝚼v+[u,j]=(1𝚯v[u,j])[(𝒂^vT𝜶v𝜽T)[u,j]]+,𝚼v[u,j]=(1𝚯v[u,j])[(𝒂^vT𝜶v𝜽T)[u,j]].\left\{\begin{array}[]{lr}\mbox{\boldmath{$\Psi$}}_{v}^{+}[u,j]=\mbox{\boldmath{$\Theta$}}_{v}[u,j][(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]]_{+},&\\ \mbox{\boldmath{$\Psi$}}_{v}^{-}[u,j]=\mbox{\boldmath{$\Theta$}}_{v}[u,j][(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]]_{-},&\\ \mbox{\boldmath{$\Upsilon$}}_{v}^{+}[u,j]=(1-\mbox{\boldmath{$\Theta$}}_{v}[u,j])[(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]]_{+},&\\ \mbox{\boldmath{$\Upsilon$}}_{v}^{-}[u,j]=(1-\mbox{\boldmath{$\Theta$}}_{v}[u,j])[(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]]_{-}.&\\ \end{array}\right.

In this way, constraint (39a) can be rewritten as 𝜷v[u](1𝚯v[u,j])|(𝒂^vT𝜶v𝜽T)[u,j]|.\mbox{\boldmath{$\beta$}}_{v}[u]\geq(1-\mbox{\boldmath{$\Theta$}}_{v}[u,j])|(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]|. Based on the above transformation, we know that 𝚯v[u,j]1𝜷v[u]/|(𝒂^vT𝜶v𝜽T)[u,j]|,\mbox{\boldmath{$\Theta$}}_{v}[u,j]\geq 1-\mbox{\boldmath{$\beta$}}_{v}[u]/|(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]|, and the following term in the objective function of Problem (A) can be rewritten as

𝑿^v[u,j](𝚼v[u,j]𝚼v+[u,j])𝚿v+[u,j]=\displaystyle\mbox{\boldmath{$\hat{X}$}}_{v}[u,j](\mbox{\boldmath{$\Upsilon$}}_{v}^{-}[u,j]-\mbox{\boldmath{$\Upsilon$}}_{v}^{+}[u,j])-\mbox{\boldmath{$\Psi$}}_{v}^{+}[u,j]= (41)
𝑿^v[u,j](𝒂^vT𝜶v𝜽T)[u,j]𝚯v[u,j]𝜺v[u,j],\displaystyle-\mbox{\boldmath{$\hat{X}$}}_{v}[u,j](\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]-\mbox{\boldmath{$\Theta$}}_{v}[u,j]\mbox{\boldmath{$\varepsilon$}}_{v}[u,j],

where

𝜺v[u,j]=(1𝑿^v[u,j])[(𝒂^vT𝜶v𝜽T)[u,j]]+\displaystyle\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]=(1-\mbox{\boldmath{$\hat{X}$}}_{v}[u,j])[(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]]_{+} (42)
+𝑿^v[u,j][(𝒂^vT𝜶v𝜽T)[u,j]].\displaystyle+\mbox{\boldmath{$\hat{X}$}}_{v}[u,j][(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]]_{-}.

In fact, (41) can be transformed into a more tractable form. When (𝒂^vT𝜶v𝜽T)[u,j]>0(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]>0, if 𝑿^v[u,j]=1\mbox{\boldmath{$\hat{X}$}}_{v}[u,j]=1, 𝜺v[u,j]=0\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]=0 holds. Similarly, if 𝑿^v[u,j]=0\mbox{\boldmath{$\hat{X}$}}_{v}[u,j]=0, 𝜺v[u,j]=(𝒂^vT𝜶v𝜽T)[u,j]\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]=(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]. Under this condition, to maximize the objective function, 𝚯v[u,j]\mbox{\boldmath{$\Theta$}}_{v}[u,j] should be set as its lower bound and

𝜺v[u,j]𝚯v[u,j]\displaystyle\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]\mbox{\boldmath{$\Theta$}}_{v}[u,j] =(𝒂^vT𝜶v𝜽T)[u,j]{1𝜷v[u]|(𝒂^vT𝜶v𝜽T)[u,j]|}\displaystyle=(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]\{1-\frac{\mbox{\boldmath{$\beta$}}_{v}[u]}{|(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]|}\} (43)
=𝜺v[u,j]𝜷v[u].\displaystyle=\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u].

Similar analysis can be conducted when (𝒂^vT𝜶v𝜽T)[u,j]<0(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]<0. Overall, we have 𝜺v[u,j]𝚯v[u,j]=max{𝜺v[u,j]𝜷v[u],0}𝛀v[u,j].\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]\mbox{\boldmath{$\Theta$}}_{v}[u,j]=\max\{\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u],0\}\triangleq\mbox{\boldmath{$\Omega$}}_{v}[u,j]. Therefore, (41) can be rewritten as

𝑿^v[u,j](𝚼v[u,j]𝚼v+[u,j])𝚿v+[u,j]\displaystyle\mbox{\boldmath{$\hat{X}$}}_{v}[u,j](\mbox{\boldmath{$\Upsilon$}}_{v}^{-}[u,j]-\mbox{\boldmath{$\Upsilon$}}_{v}^{+}[u,j])-\mbox{\boldmath{$\Psi$}}_{v}^{+}[u,j]
=𝑿^v[u,j](𝒂^vT𝜶v𝜽T)[u,j]max{𝜺v[u,j]𝜷v[u],0},\displaystyle=-\mbox{\boldmath{$\hat{X}$}}_{v}[u,j](\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})[u,j]-\max\{\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u],0\},
=tr[𝑿^vT(𝒂^vT𝜶v𝜽T)]𝛀v1,\displaystyle=-\text{tr}[\mbox{\boldmath{$\hat{X}$}}_{v}^{T}(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})]-||\mbox{\boldmath{$\Omega$}}_{v}||_{1},

where tr()\text{tr}(\cdot) indicates the trace of the input matrix. Based on the above transformations, Problem (A) finally can be transformed into Problem (24) in Section 4.3.

Appendix B Solution for Problem (26)

Problem (26) can be reformulated as

min{𝜷v0,𝛀^v0}uN(v)j=1p𝛀^v[u,j]+uN(v)qvu𝜷v[u]const,\min_{\{\mbox{\boldmath{$\beta$}}_{v}\succeq 0,\mbox{\boldmath{$\hat{\Omega}$}}_{v}\succeq 0\}}\sum_{u\in N(v)}\sum_{j=1}^{p}\mbox{\boldmath{$\hat{\Omega}$}}_{v}[u,j]+\sum_{u\in N(v)}q_{vu}\mbox{\boldmath{$\beta$}}_{v}[u]-\text{const}, (44)

subject to

𝛀^v[u,j]𝜺v[u,j]𝜷v[u],uN(v),j{1,2,..,p},\displaystyle\mbox{\boldmath{$\hat{\Omega}$}}_{v}[u,j]\geq\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u],\forall u\in N(v),j\in\{1,2,..,p\}, (44a)
const=\displaystyle\text{const}= cvb+iv𝒖v[i]𝒍v[i]𝒖v[i]𝒍v[i][cv𝒘[i]]\displaystyle c_{v}b+\sum_{i\in\mathcal{I}_{v}}\frac{\mbox{\boldmath{$u$}}_{v}[i]\mbox{\boldmath{$l$}}_{v}[i]}{\mbox{\boldmath{$u$}}_{v}[i]-\mbox{\boldmath{$l$}}_{v}[i]}[c_{v}\mbox{\boldmath{$w$}}[i]]_{-}
i=1D𝜶v[i](a^v𝒙v𝜽)[i]tr[𝑿vT(𝒂^vT𝜶v𝜽T)].\displaystyle-\sum_{i=1}^{D}\mbox{\boldmath{$\alpha$}}_{v}[i](\hat{a}_{v}\mbox{\boldmath{$x$}}_{v}\mbox{\boldmath{$\theta$}})[i]-\text{tr}[\mbox{\boldmath{$X$}}_{v}^{T}(\mbox{\boldmath{$\hat{a}$}}_{v}^{T}\mbox{\boldmath{$\alpha$}}_{v}\mbox{\boldmath{$\theta$}}^{T})]. (44b)

The dual problem of Problem (44) is given by

max𝑩vuN(v)j=1p𝜺v[u,j]𝑩v[u,j]const,\max_{\mbox{\boldmath{$B$}}_{v}}\sum_{u\in N(v)}\sum_{j=1}^{p}\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]\mbox{\boldmath{$B$}}_{v}[u,j]-\text{const}, (45)

subject to (44b),

0𝑩v[u,j]1,uN(v),j{1,2,..,p},\displaystyle 0\leq\mbox{\boldmath{$B$}}_{v}[u,j]\leq 1,\forall u\in N(v),j\in\{1,2,..,p\}, (45a)
j=1p𝑩v[u,j]qvu,uN(v).\displaystyle\sum_{j=1}^{p}\mbox{\boldmath{$B$}}_{v}[u,j]\leq q_{vu},\forall u\in N(v). (45b)

The optimal solution of the above problem is straightforward. Specifically, for \mbox{\boldmath{$B$}}_{v}[u,:], the optimal solution is obtained by setting the entries \mbox{\boldmath{$B$}}_{v}[u,j] corresponding to the top-q_{vu} largest entries in \mbox{\boldmath{$\varepsilon$}}_{v}[u,:] to 1 and all other entries to 0.

Because 𝑩v\mbox{\boldmath{$B$}}_{v} is the dual variable corresponding to constraint (44a), the following equation holds

𝑩v[u,j](𝜺v[u,j]𝜷v[u]𝛀^v[u,j])=0,\displaystyle\mbox{\boldmath{$B$}}_{v}[u,j](\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u]-\mbox{\boldmath{$\hat{\Omega}$}}_{v}[u,j])=0, (46)
uN(v),j{1,2,..,p}.\displaystyle\forall u\in N(v),j\in\{1,2,..,p\}.

If \mbox{\boldmath{$B$}}_{v}[u,j]=1, then \mbox{\boldmath{$\hat{\Omega}$}}_{v}[u,j]=\mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u] holds, which indicates that \mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u]\geq 0. Meanwhile, \mbox{\boldmath{$B$}}_{v}[u,j]=1 also means that \mbox{\boldmath{$\varepsilon$}}_{v}[u,j] is among the top-q_{vu} largest entries in \mbox{\boldmath{$\varepsilon$}}_{v}[u,:]. Therefore, \mbox{\boldmath{$\beta$}}_{v}[u] is no larger than any of the top-q_{vu} largest entries in \mbox{\boldmath{$\varepsilon$}}_{v}[u,:]. Similarly, if \mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u]-\mbox{\boldmath{$\hat{\Omega}$}}_{v}[u,j]\neq 0, it must be the case that \mbox{\boldmath{$\hat{\Omega}$}}_{v}[u,j]=0 and \mbox{\boldmath{$\varepsilon$}}_{v}[u,j]-\mbox{\boldmath{$\beta$}}_{v}[u]<0. According to (46), \mbox{\boldmath{$B$}}_{v}[u,j] must equal 0, which suggests that \mbox{\boldmath{$\varepsilon$}}_{v}[u,j] is not among the top-q_{vu} largest entries in \mbox{\boldmath{$\varepsilon$}}_{v}[u,:] in this case. Therefore, \mbox{\boldmath{$\beta$}}_{v}[u] should be no smaller than every \mbox{\boldmath{$\varepsilon$}}_{v}[u,j] that is not among the top-q_{vu} largest entries in \mbox{\boldmath{$\varepsilon$}}_{v}[u,:]. Overall, the solution for \mbox{\boldmath{$\beta$}}_{v}[u] is the q_{vu}-th largest entry in \mbox{\boldmath{$\varepsilon$}}_{v}[u,:].
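A small NumPy sketch of this closed-form solution is given below; the handling of the degenerate case q_{vu}=0 (where \mbox{\boldmath{$\beta$}}_{v}[u] does not affect the objective) follows our own convention and is not specified in the paper.

```python
import numpy as np

def solve_row(eps_row, q_vu):
    # Closed-form solution of Problem (45) for one neighbor u: B_v[u, :] puts 1
    # on the q_vu largest entries of eps_v[u, :] and 0 elsewhere, and beta_v[u]
    # equals the q_vu-th largest entry of eps_v[u, :].
    eps_row = np.asarray(eps_row, dtype=float)
    B_row = np.zeros_like(eps_row)
    if q_vu == 0:
        return B_row, float(eps_row.max())   # any beta >= max(eps) is optimal here
    order = np.argsort(eps_row)[::-1]        # indices sorted by decreasing eps
    B_row[order[:q_vu]] = 1.0
    return B_row, float(eps_row[order[q_vu - 1]])
```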