Learning Optimal Fronthauling and Decentralized Edge Computation in
Fog Radio Access Networks
Abstract
Fog radio access networks (F-RANs), which consist of a cloud and multiple edge nodes (ENs) connected via fronthaul links, have been regarded as promising network architectures. The F-RAN entails a joint optimization of cloud and edge computing as well as fronthaul interactions, which is challenging for traditional optimization techniques. This paper proposes a Cloud-Enabled Cooperation-Inspired Learning (CECIL) framework, a structural deep learning mechanism for handling a generic F-RAN optimization problem. The proposed solution mimics cloud-aided cooperative optimization policies by including centralized computing at the cloud, distributed decisions at the ENs, and their uplink-downlink fronthaul interactions. A group of deep neural networks (DNNs) is employed for characterizing the computations of the cloud and ENs. The forward pass of the DNNs is carefully designed such that the impacts of practical fronthaul links, such as channel noise and signaling overheads, can be included in the training step. As a result, the operations of the cloud and ENs can be jointly trained in an end-to-end manner, whereas their real-time inferences are carried out in a decentralized manner by means of the fronthaul coordination. To facilitate fronthaul cooperation among multiple ENs, optimal fronthaul multiple access schemes are designed. Training algorithms robust to practical fronthaul impairments are also presented. Numerical results validate the effectiveness of the proposed approaches.
Index Terms:
Deep learning, fog radio access networks, fronthaul interaction.
I Introduction
Centralized coordination of distributed edge nodes (ENs) has brought great success in wireless communication networks [1, 2]. Such an architecture is realized with a cloud unit that schedules the communication and computation of ENs by leveraging fronthaul interfaces. A particular example is a cloud radio access network (C-RAN) [3, 4, 5, 6, 7] where a cloud centrally performs the baseband signal processing, while radio-frequency (RF) functionalities are carried out by ENs, e.g., remote radio heads. The performance can be further enhanced by fog radio access networks (F-RANs) [8, 9, 10] where ENs are equipped with individual computing units. Measurements of RF propagation environments, e.g., channel state information (CSI), are available only at the ENs due to the absence of RF circuitry at the cloud. To perform centralized computations with distributed data, the cloud collects the local measurements through uplink fronthaul links. The computation results of the cloud, which contain information regarding the networking policies of the ENs, e.g., beamforming vectors, are forwarded via downlink fronthaul links. In the F-RAN systems, the data received from the cloud can be further processed at the ENs using local computing units. Hence, to optimize the F-RAN properly, we need to jointly design centralized cloud computing strategies, uplink-downlink fronthaul coordination, and distributed edge processing rules.
Recent studies [3, 4, 5, 6, 7, 8, 9, 10] have addressed various optimization tasks in the C-RAN and F-RAN systems. The works in [3, 4, 5, 6, 7, 8] have investigated a joint optimization of downlink fronthauling schemes at the cloud and multi-antenna signal processing at the ENs. Iterative algorithms are presented for tackling the nonconvexity of particular formulations. Assuming capacity-constrained fronthauls, compression strategies for the cloud computing results are determined along with the beamforming vectors at the ENs. Although the downlink fronthaul interactions from the cloud to the ENs are adequately studied, these works do not consider the imperfections occurring in the uplink fronthaul coordination, such as the CSI update steps from the ENs to the cloud. Therefore, existing research is suitable only for an ideal scenario where the global network state, e.g., the network CSI, is perfectly known to the cloud. Practical C-RAN systems should involve the joint optimization of downlink-uplink fronthauling protocols and centralized cloud computing strategies. This is, however, not trivial for traditional model-based optimization techniques [3, 4, 5, 6, 7, 8] since the fronthaul interactions typically invoke intractable features including random noise and fronthaul signaling designs.
For the F-RAN architecture, we need to additionally identify decentralized edge computation rules for individual ENs. Distributed optimization methods in the F-RAN have been studied in cache-enabled networks [9] and tactile Internet applications [10]. Message-passing algorithms are employed in [9] to determine a decentralized cache deployment policy. Each EN iteratively updates messages for the interactions with other ENs. These messages should be carefully designed for each network setup, thereby lacking the adaptivity as a general optimization framework. The alternating direction method of multipliers (ADMM) approach can be exploited for the design of distributed and cooperative fog computing [10]. To facilitate iterative interactions among the ENs and the cloud, a proper reformulation technique is necessary to split a global optimization variable. These model-based decentralized algorithms cannot be straightforwardly applied to other types of optimization formulations. In addition, they do not take the practical fronthaul design issues into account such as quantization, noisy channels, and signaling overheads.
To overcome the drawbacks of traditional model-based algorithms, a learning-to-optimize paradigm has been intensively examined in various wireless networking scenarios [11, 12, 13, 14, 15, 16, 17, 18, 19]. Deep neural networks (DNNs) are employed to replace unknown computation rules for solving network optimization problems. Arbitrarily formulated objectives can be maximized in a data-driven manner without handcrafted models, e.g., the convexity of functions and prior information about the optimal solution. The DNNs are exploited to learn efficient power control mechanisms [12, 13] and user association policies [14] in interfering wireless networks. Beamforming optimization problems for multi-antenna systems are addressed in [15]. These results reveal that deep learning (DL) approaches outperform existing suboptimal solutions with much reduced computational complexity. However, they are confined to centralized executions which are not suitable for the F-RAN systems.
Decentralized optimizations have been investigated via unsupervised DL [17, 18, 19] and reinforcement learning (RL) techniques [20]. In [17, 18, 19], a distributed network setup is considered where direct interactions among ENs are allowed by leveraging backhaul interfaces. An interaction policy is autonomously optimized along with distributed computation rules. However, the setup in [17, 18, 19] differs from the F-RAN architecture, where the ENs can only be controlled by the cloud, and thus these methods cannot optimize the role of the cloud in the F-RAN systems. A cloud-aided distributed RL strategy is presented in [20]. To succeed in the learning task, the RL framework requires a careful determination of state variables and rewards for individual ENs. The optimization of these hyperparameters typically incurs trial-and-error-based grid search procedures for each network setup. In addition, the backhaul imperfections and signaling overheads are not addressed in [17, 18, 19, 20]. Federated learning (FL) algorithms have recently been studied for handling distributed machine learning problems [17, 21]. The FL focuses on the training of a common DNN at the cloud with the aid of ENs having individual training datasets. Thus, the FL would not be suitable for the design of decentralized optimization inferences in wireless networks where the ENs desire to identify their own networking solutions with partially observable statistics.
This paper proposes an unsupervised DL method for designing a generic optimization framework in the F-RAN systems. Distributed ENs observe their local states, e.g., the CSI for local wireless links, and desire to determine individual solutions, e.g., transmit power and beamforming vectors, for maximizing the network performance. Since the ENs are typically deployed in a wide cell coverage area, the locally observable information of a certain EN is not directly available to others. A network cloud connecting the ENs through imperfect fronthaul links schedules decentralized edge processing. To optimize the operations at the cloud and the ENs jointly, we propose a Cloud-Enabled Cooperation-Inspired Learning (CECIL) mechanism, which is a structural DL solution developed for the F-RAN systems. The proposed method consists of three consecutive steps: uplink fronthauling at ENs, centralized computation and downlink fronthauling at a cloud, and distributed decision at ENs. A group of DNN units is employed for characterizing the operations of the cloud and ENs. A joint training algorithm of the DNNs is presented with arbitrary given fronthaul imperfections.
The uplink and downlink fronthaul interaction steps incur inter-EN interference signals. To handle this issue, we design multiple access fronthauling schemes that can be autonomously optimized by the DNNs. Two different protocols are investigated. First, following conventional distributed DL approaches [17, 18, 19, 20], an orthogonal multiple access (OMA) scheme is presented which assigns distinct fronthaul resources to each EN. Second, we propose a non-orthogonal multiple access (NOMA) fronthauling strategy where all ENs share the identical fronthaul resources. Non-orthogonal interaction policies among ENs have not yet been investigated in existing DL studies [17, 18, 19, 20], and thus their optimality in the design of cooperative DNN inferencing steps is not guaranteed a priori. To this end, we rigorously prove the effectiveness of the OMA and NOMA schemes and analyze the amount of fronthaul resources required to achieve optimality. The superiority of the NOMA method is verified in terms of the fronthaul signaling overheads. In addition, for the imperfect fronthaul link case, we present a robust learning policy that trains the DNNs in the presence of practical fronthaul impairments such as additive noise and finite-capacity constraints. Finally, numerical results verify the effectiveness of the proposed framework in various F-RAN applications. Our main contributions are summarized as follows.
• We propose the CECIL framework, a model-driven DL-based optimization mechanism for the F-RAN structure, which jointly determines the decentralized edge computations, centralized cloud calculations, and uplink-downlink fronthaul coordination strategies.
• For managing inter-edge interference, fronthaul multiple access schemes are designed which bridge the computations of DNNs at the cloud and ENs. The optimality of the proposed fronthauling strategies is verified rigorously.
• To combat the fronthaul channel imperfections, robust training policies are presented which optimize DNNs in the presence of additive noise and fronthaul capacity constraints.
• Extensive numerical results validating the optimality of the proposed method are provided in interfering networks. Efficient fronthaul resource allocation methods are identified from the numerical results.
The rest of the paper is organized as follows. Section II describes a generic F-RAN system. The inference of the CECIL framework is explained in Section III, and its training process is presented in Section IV. Optimal fronthaul interaction strategies are designed in Section V, and in Section VI, robust training policies for imperfect fronthaul channels are studied. Section VII assesses the performance of the proposed CECIL approach from numerical simulations. Finally, concluding remarks are given in Section VIII.
II Network Model and Problem Formulation

Fig. 1 illustrates an F-RAN architecture which exploits both cloud and edge computing processes for an efficient management of wireless networks. A cloud is regarded as a central unit that coordinates multiple, say , ENs by means of fronthaul links. The ENs are equipped with RF modules to provide networking services. We maximize a generic nonconvex network utility function by optimizing network policies at the ENs. Without loss of generality, states of the F-RAN are represented by a vector of length . The global state can be any measurement values such as a set of CSIs between the ENs and their intended mobile users. The ENs equipped with the RF processors are responsible for the estimation of the global state vector. The ENs are, in general, distributed over coverage areas to support reliable communication services. Therefore, a locally observable state at each EN () denoted by becomes a subset of and it is not known to other ENs and the cloud. Then, the global information vector can be represented by with size .
The fronthaul interface supports the cooperation among the cloud and ENs. For notational simplicity, the cloud is denoted as the -th node. Let and be the numbers of uplink and downlink fronthaul resource blocks (RBs) assigned to EN , respectively. Without loss of generality, one fronthaul RB is assumed to be occupied for conveying a real-valued scalar number. In practice, the RBs correspond to orthogonal time-frequency channels, e.g., a resource element in LTE systems which consists of one data symbol occupying 15 kHz of bandwidth. Thus, and reflect the fronthaul signaling overheads and the fronthaul resource constraints. The capacity constraints on the fronthaul RBs are addressed in Section VI-B. The total available RBs for the F-RAN are limited by as and . The number of the RBs can be optimized in advance by the network operator and is assumed to be fixed. The RB allocation schemes are discussed in Section VII-A. Both time division duplexing (TDD) and frequency division duplexing (FDD) protocols can be exploited for implementing the fronthaul coordination. For TDD systems, we have since the quality of the uplink and downlink fronthaul channels is the same due to channel reciprocity. A more general case of represents FDD systems where the uplink and downlink fronthaul transmissions experience different radio propagation environments. As a result, the DL method presented in the subsequent sections can be applied to arbitrary duplexing systems including both TDD and FDD.
A decision of EN is characterized by a solution vector of length which includes resource management policy and beamforming vector of EN . The performance of the F-RAN is generally affected by both the global state and a set of solutions . Thus, the utility function can be written by . We focus on a maximization task of the utility averaged over the global state vector expressed by
where is the expectation operation over a random variable and stands for a solution set of EN . To tackle (P1) in the F-RAN system, along with the solution vector , we need to identify the fronthaul interaction policy subject to the fronthaul RB constraints and , . The effect of the fronthaul noise and inter-EN interference can be included in (P1). These are distinct features of our formulation (P1) compared to existing studies on decentralized multi-agent architectures [20] which do not consider the resource constraints on the coordination links.
In this paper, we develop an efficient solution for the generic formulation (P1) whose computational inferences can be realized in the F-RAN systems. Major challenges for (P1) arise from the distinctly available observations and imperfect fronthaul interfaces. We need to perfectly know to solve (P1). One possible approach is to let the ENs upload their local measurements to the cloud through the uplink fronthaul links. Then, the cloud can calculate the network solution . The local decision variables are transferred to the individual ENs by leveraging the downlink fronthaul links. Such a strategy is only applicable to an ideal scenario where the fronthaul links are perfect and have sufficient RBs, i.e., and for exchanging and , respectively. Conventional approaches [3, 4, 5, 6, 7, 8] only focus on the downlink or the uplink compression, but not their joint design. The existing FL algorithms [17, 21], where the ENs cooperatively find a common solution at the cloud via model-based iteration rules, are not suitable for addressing the F-RAN problem (P1) since it requires optimizing the fronthaul interaction policies. To efficiently solve (P1) in the F-RAN system, we need a joint design of fronthaul communication policies and individual decisions at the ENs that can be applied to arbitrary utility functions.
III Cooperative Learning Mechanism
This section presents a CECIL inference which designs cooperative optimization mechanisms for the F-RAN system. The CECIL is exploited as forward pass computations of a DNN-based optimization framework in Section IV. We characterize interactions among the cloud and ENs by leveraging abstracted computational inferences to be replaced by DNNs. Fig. 2 describes the proposed cooperative inference structure which consists of three sequential steps: uplink message generation at the ENs, downlink message generation at the cloud, and the decentralized decision at the ENs. In what follows, we describe the details of each step.
III-A Uplink message generation at ENs
As shown in Fig. 2(a), EN first sends the information regarding its local observation to the cloud using the uplink fronthaul link assigned with RBs. A straightforward transmission of the -dimensional raw data would not be possible when we have insufficient fronthaul resources as . Thus, EN needs to identify a low-dimensional representation of without any direct interactions with other ENs. This can be viewed as a decentralized edge compression step. The resulting representation of length is referred to as an uplink message that carries the local knowledge of EN to the cloud via fronthaul RBs. Let be a computational inference performing the uplink message generation of EN , i.e.,
(1) |
In (1), only the local observation is accepted as an input for characterizing fully decentralized processing. As discussed in Section IV, the inference is modeled by a DNN to be optimized for maximizing the utility.
III-B Downlink message generation at cloud
Practical fronthaul links are corrupted by channel impairments such as noise, and thus the cloud obtains a noisy observation of the uplink messages. To capture this, we introduce a channel transfer function for the uplink fronthaul link from EN to the cloud which can include any channel imperfection encountered in the uplink communication. Then, the received signal at the cloud depends on all the noisy uplink messages , . It is written as
(2) |
where the function defined over the set of the noisy messages describes an uplink transmission strategy of the ENs. The choice of relies on the fronthaul resource sharing policy. For instance, if each EN occupies distinct fronthaul RBs, is simply given by the concatenation operation. On the other hand, becomes the summation when all ENs share the entire uplink fronthaul RBs. The dimension of depends on the number of the uplink fronthaul RBs and the uplink signaling strategy . These are specified in Section V.
From (2), we can observe that the received signal conveys distorted information of the global observation with the piecewise edge processing . A standard approach to process with is to decompose the computations of the cloud into the following sequential steps. The cloud first recovers the global state from the received signal . Then, the solution to (P1) is determined by centralized cloud computing strategies. The resulting solution is sent back to EN via the downlink fronthaul links with RBs. To handle the practical case with , is encoded into a downlink message whose dimension fits the number of the downlink fronthaul RBs assigned to EN . As illustrated in Fig. 2(b), we integrate such cascaded procedures into a single computation inference that creates a set of the downlink messages from . This can be written by
(3) |
It is inferred from (3) that the downlink message encapsulates the local observations of other nodes , , as well as an intermediate decision taken at the cloud. The inference is also modeled by a DNN whose parameters are determined to maximize the utility function.
Remark 1.
The inference in (3) can be viewed as a two-way relaying strategy [22, 23] where the cloud relays the signals received from the ENs after an appropriate signal processing . Classical relaying protocols are dependent on man-made signaling strategies, e.g., amplify-and-forward and decode-and-forward [24], which might not be the optimum cooperation policy. The proposed DL approach can identify the optimal relaying protocol, provided that a relaying inference is approximated by a properly constructed DNN.
III-C Distributed decision at ENs
The distributed decision process shown in Fig. 2(c) is described. The cloud broadcasts the downlink messages to EN with a pre-designed downlink signaling strategy denoted by . Similar to the uplink signaling strategy in (2), is defined over the set of the downlink messages and becomes a design factor to be specified in Section V. The downlink signal intended to EN , which is denoted by , can be written by
(4) |
Defining as the downlink fronthaul transfer function from the cloud to EN , the received signal at EN is given by
(5) |
The dimensions of and rely on the message broadcasting strategy to be designed in Section V. Combining (1), (3), and (5), we can see that the received message of EN contains the local statistics of all the ENs. This implies that all the sufficient, but possibly corrupted, information for solving (P1) is now available at each EN. Thereby, the solution of EN can be attained individually by means of a node-centric decision inference . The proposed solution computation rule at EN is expressed as
(6) |
We use the local observation as the side information to refine the received signal dedicated to EN . This additional input forms a residual shortcut which leads to an efficient training strategy of very deep networks [25].
Algorithm 1 summarizes the inference of the CECIL framework. The uplink messages generated at the ENs are first transmitted to the cloud. Receiving the noisy signal , the centralized cloud computing yields the downlink messages to be broadcast to the ENs. The decision is then taken at each EN individually. The proposed inference relies only on locally observable information, i.e., the local measurement and the received messages, but not on the instantaneous states of other network entities. As a result, Algorithm 1 can be implemented in a distributed manner with optimized and .
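As a rough illustration, the three-step forward pass of Algorithm 1 can be sketched in numpy. All dimensions, the random linear layers standing in for the trained DNNs, the perfect (identity) fronthaul channels, and the OMA-style concatenation at the cloud are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, OBS, MSG_UL, MSG_DL, SOL = 3, 4, 2, 2, 1  # toy sizes: ENs, obs/message/solution dims

# Toy one-layer "DNNs" standing in for the uplink encoders, the cloud
# computation, and the edge decision rules (random, untrained weights).
F = [rng.standard_normal((MSG_UL, OBS)) for _ in range(K)]       # uplink message DNNs
G = rng.standard_normal((K * MSG_DL, K * MSG_UL))                # cloud DNN
D = [rng.standard_normal((SOL, MSG_DL + OBS)) for _ in range(K)] # decision DNNs

obs = [rng.standard_normal(OBS) for _ in range(K)]  # local observations at the ENs

# Step 1: uplink message generation at each EN (cf. (1)).
ul = [np.tanh(F[k] @ obs[k]) for k in range(K)]
# Step 2: the cloud receives the messages (here concatenated over perfect
# links) and produces downlink messages for all ENs (cf. (3)).
dl = np.tanh(G @ np.concatenate(ul)).reshape(K, MSG_DL)
# Step 3: each EN decides locally from its downlink message plus its own
# observation used as side information (cf. (6)).
sol = [D[k] @ np.concatenate([dl[k], obs[k]]) for k in range(K)]
```

Note that step 3 uses only quantities available at EN k, which is what makes the real-time inference decentralized.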
IV Deep Learning Formulation
Based on the formulations in (1), (3), and (6), the original problem (P1) can be transformed as
The targets of the optimization are given by unstructured functions in (1), in (3), and in (6), which cannot be tackled by traditional optimization techniques requiring analytical formulas. To this end, we employ the learning to optimize approach [11, 12, 13, 14, 15, 16, 17, 18, 19] which employs DNNs for replacing unknown mappings and . Let be a -layer fully-connected DNN with a trainable parameter . For an input vector of length , the output of is written as
(7) |
where is an activation at layer and accounts for the collection of weight matrices and bias vectors for all layers.
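A minimal sketch of the fully-connected forward pass in (7); the tanh hidden activation, linear output layer, and initialization scale are illustrative choices, not prescribed by the text:

```python
import numpy as np

def dnn_forward(x, params, act=np.tanh):
    """Forward pass of an L-layer fully connected DNN as in (7).

    params is a list of (W, b) pairs; the hidden activation is applied
    at every layer except the last.
    """
    h = x
    for i, (W, b) in enumerate(params):
        h = W @ h + b
        if i < len(params) - 1:  # no activation on the output layer
            h = act(h)
    return h

def init_params(dims, rng):
    """Random initialization for layer widths dims = [in, h1, ..., out]."""
    return [(rng.standard_normal((dims[i + 1], dims[i])) * 0.1,
             np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]
```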
We replace the mappings in (1), (3), and (6) with DNNs as
(8) | ||||
(9) | ||||
(10) |
where stands for a concatenation operation of two vectors and . The output dimensions of , , and are respectively set to the lengths of the desired outputs. The optimality of this DNN approximation is guaranteed by the universal approximation theorem [26]. It states that for any continuous mapping defined on a compact set , there exist a finite and an arbitrarily small such that
(11)
with being an arbitrarily small number. From (11), we can identify a DNN close to any continuous function in terms of the worst-case Euclidean distance. Note that (11) also holds for the unknown optimal mappings and . Therefore, the DNN approximations in (8)-(10) can provide a tractable formulation of (P2) without loss of optimality.

IV-A Training and Implementation
Fig. 3 illustrates the CECIL-based F-RAN systems where the computations of the ENs and cloud are carried out by the DNNs in (8)-(10). The forward pass computations of the CECIL are provided in Algorithm 1. Plugging (8)-(10) to (P2) results in
where accounts for the set of learnable parameters of the DNNs in (8)-(10) defined as
(12) |
To remove the constraint of (P2), the output activation of can be designed as the projection operator for a layer input . For a convex feasibility set , this projection activation is given by a convex quadratic program (QP) whose gradient-based training rules can be obtained with the backpropagation algorithm [27]. A nonconvex projection problem can be tackled by the successive convex approximation mechanism [28], which solves a series of approximated convex QPs. The gradients of such an iterative procedure can be obtained by integrating the gradients of the approximated convex QPs. As a consequence, (P2) is readily solved by the gradient descent method and its variants for stochastic optimization, e.g., the Adam algorithm [29]. We adopt the mini-batch stochastic gradient descent (SGD) method [30] where the expectations over the distribution of are estimated as the sample mean evaluated on the mini-batch sets . The SGD update at the -th training epoch is given by
(13) |
where indicates a variable attained at the -th epoch, is a learning rate, and denotes the gradient operator with respect to . The sample gradient can be numerically calculated by the backpropagation algorithm [30], provided that the gradients of the channels and as well as the transmission strategies and are available.
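For concreteness, the projection activation discussed above admits simple closed forms for common feasible sets. The box and simplex sets below are illustrative stand-ins (e.g., per-antenna power limits and a power-splitting constraint), not the paper's specific constraints:

```python
import numpy as np

def project_box(z, lo, hi):
    """Projection of z onto the box [lo, hi]^n: the closed-form solution
    of min_x ||x - z||^2 s.t. lo <= x <= hi (a trivial convex QP)."""
    return np.clip(z, lo, hi)

def project_simplex(z):
    """Projection onto the probability simplex {x >= 0, sum(x) = 1},
    via the standard sorting-based algorithm."""
    u = np.sort(z)[::-1]                       # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(z) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)       # shift that enforces sum = 1
    return np.maximum(z - tau, 0.0)
```

Both operators are almost-everywhere differentiable, so gradients can pass through them during training.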
The full knowledge of the global observation is required for computing the gradient of the utility function. This can be achieved by a centralized training procedure in the offline domain before real-time optimization inferences [16, 17, 18, 19]. To this end, we can collect training samples, i.e., a set of the local observation vectors , from the ENs in advance. No labels, such as information regarding the optimal solution to (P1), are needed in the training. Thus, the proposed training strategy (13) is performed in a fully unsupervised manner. Once the parameter set is determined, it is readily implemented at the cloud and ENs. As discussed, the forward pass in Algorithm 1 can be carried out only with locally measurable statistics, thereby leading to a distributed realization of the online computations (8)-(10). Compared to existing decentralized F-RAN optimization algorithms [9, 10] that require iterative procedures, the proposed CECIL does not need any repetitions in the real-time inference step. Hence, the proposed approach can save both fronthaul signaling and computation overheads.
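The unsupervised update in (13) can be sketched end-to-end on a toy problem. Everything here is an illustrative assumption: a scalar rate-minus-power utility stands in for the network utility, a two-parameter softplus policy stands in for the DNNs, and finite differences stand in for backpropagation. The loop ascends the sampled mini-batch utility exactly as (13) prescribes, with no labels involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(h, p):
    """Toy stand-in for the network utility in (P1): rate minus power cost."""
    return np.log1p(h * p) - 0.5 * p

def policy(theta, h):
    """Two-parameter 'DNN': p = softplus(w*h + b), which keeps p >= 0."""
    w, b = theta
    return np.log1p(np.exp(w * h + b))

def batch_objective(theta, batch):
    """Sample-mean estimate of the expected utility over a mini-batch."""
    return np.mean([utility(h, policy(theta, h)) for h in batch])

theta = np.zeros(2)
lr, eps = 0.1, 1e-5
for epoch in range(200):
    batch = rng.uniform(0.5, 2.0, size=32)  # mini-batch of sampled states
    grad = np.zeros_like(theta)             # finite-difference gradient estimate
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (batch_objective(theta + d, batch)
                   - batch_objective(theta - d, batch)) / (2 * eps)
    theta += lr * grad                      # gradient ASCENT on the utility
```

A practical implementation would replace the finite differences with automatic differentiation through the DNNs and the channel models, as the backpropagation discussion above requires.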
V Message Multiple Access Design
The uplink and downlink interaction steps involve the transmission of the multiple messages over the fronthaul links, incurring inter-message interference both at the cloud and ENs. To handle this issue, we propose efficient fronthaul multiple accessing schemes that design the uplink and downlink signaling strategies in (2) and in (4), respectively.
V-A OMA fronthauling
We first develop an OMA method where distinct fronthaul resources are assigned to each of the uplink and downlink messages to avoid inter-message interference. The uplink messages for occupy bundles of the fronthaul RBs where the -th resource bundle containing RBs is dedicated to the uplink message transmission of EN . In this setup, the uplink signaling strategy in (2) becomes the concatenation operation. Then, the received signal at the cloud in (2) is rewritten as
(14) |
where defines the concatenation of vectors for . The dimension of becomes where indicates the total number of the uplink fronthaul RBs.
In the downlink, is sent on orthogonal downlink fronthaul links each having RBs. Hence, the downlink signaling strategy in (4) can be specified as a masking operation extracting from the downlink message set , i.e., . Combining this with in (3), the downlink message generation of the OMA system can be refined as the procedure that creates the concatenation of downlink messages. It follows
(15) |
The downlink message is then received by EN through the corresponding downlink fronthaul channel . Hence, we refine the received signal at EN in (5) as
(16) |
Since RBs are allocated for the transmission of , the length of is given by , resulting in downlink fronthaul RBs. Therefore, the total number of the RBs denoted by is written by .
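The OMA bookkeeping above can be sketched as follows; the numbers of ENs and RBs per EN are illustrative, and perfect fronthaul channels are assumed:

```python
import numpy as np

K, MU, MD = 3, 2, 2  # illustrative: K ENs, MU uplink / MD downlink RBs per EN
rng = np.random.default_rng(0)
ul_msgs = [rng.standard_normal(MU) for _ in range(K)]

# Uplink OMA: each EN occupies its own bundle of RBs, so the uplink
# signaling is a plain concatenation (cf. (14)).
y_cloud = np.concatenate(ul_msgs)   # length K*MU = total uplink RBs

# Downlink OMA: the cloud emits one concatenated vector, and the
# signaling for EN k is a masking operation extracting its slice
# (cf. (15)-(16)).
dl_all = rng.standard_normal(K * MD)
def mask(k):
    return dl_all[k * MD:(k + 1) * MD]
```

The total RB count is then the sum over ENs of the per-EN uplink and downlink bundles, which is what makes OMA costly when K grows.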
Remark 2.
The orthogonal interaction concept has been adopted in various decentralized optimization techniques such as the message-passing algorithms [9, 31], the ADMM framework [10, 32], and the distributed learning systems [17, 18, 19, 20]. However, they do not consider the effects of practical fronthaul links, including channel imperfections and signaling overheads. Also, the effectiveness of the OMA interaction policy is not clearly addressed in the DNN-based optimization approaches [17, 18, 19, 20]. In the following sections, we investigate the optimality of the proposed CECIL approach implemented with the OMA fronthauling scheme.
V-B NOMA fronthauling
The OMA strategy may waste fronthaul resources by allocating distinct RBs to each EN. To address this, we propose a non-orthogonal message transmission scheme where all ENs share the same fronthaul resources. Provided that RBs are assigned for the uplink message transmission, EN obtains its messages from (1) by setting , i.e., utilizing all uplink fronthaul RBs. Then, the uplink transmission strategy is obtained as the superposition of all the uplink messages since they interfere with each other. Therefore, the cloud receives the superposed signal of length expressed as
(17) |
In the downlink, the cloud multicasts a common downlink message of length to all the ENs by leveraging all the available downlink fronthaul RBs. Then, the downlink signaling in (4) is simply fixed as , , such that the cloud directly transmits the output of the cloud computation in (18). We thus modify (3) for the NOMA scheme as
(18) |
Accordingly, the received signal of length at EN can be rewritten by
(19) |
V-C Discussions
We discuss the effectiveness of the OMA and NOMA schemes for the perfect fronthaul link case, i.e., and are given by the identity functions. The received signals of the OMA and NOMA systems are respectively recast to
(20) | |||
(21) |
which simplifies (15) and (18) as
(22) | ||||
(23) |
V-C1 Optimality of NOMA fronthauling
We first focus on the NOMA system. To construct a successful decision inference in (6), the optimal downlink message denoted by needs to properly encode all the local observations , . Also, since the NOMA downlink message is common to all ENs, it should not be affected by permutations of the input features. In other words, the computation of the downlink message has to be independent of the ordering of , , so that individual ENs can leverage the downlink message for their individual decisions without knowing the order indexed by the network. Notice that such a permutation-invariant property indeed holds for (23) due to the superposition signaling in (21).
Based on this intuition, we can model the optimal downlink message of the NOMA by using a generic set operator , which is defined over a set of the local observations , to satisfy the permutation-invariant property. The corresponding formulation can be written as
(24) |
It is easy to see that (24) does not change with the ordering of the ENs since the input feature is given by the set. We may lose the optimality in the NOMA system if the downlink message calculation strategy (23) cannot approximate the optimal one in (24) accurately. The following proposition states that (23) can be the universal approximator for an arbitrary set function.
Proposition 1.
Suppose that each local observation is drawn from a compact set and that all observations have the identical dimension. Let be any continuous set function with the permutation-invariance property that maps the local observations to a -dimensional output vector. Then, for an arbitrarily small , there exist an outer mapping and an inner mapping satisfying
(25)
Proof:
Let be the -th element of a vector . Suppose an arbitrary set function whose output is given by a scalar number. From [33, Thm. 9] and the Stone–Weierstrass theorem [34], there exist a continuous mapping and an arbitrarily small that fulfill
(26)
By setting in (26), we conclude that the resulting mapping forms a universal approximator for the -th element of the optimal message vector . Stacking the element-wise mappings for leads to (25) with and . This completes the proof. ∎
Notice that the optimal downlink message generation (24) cannot be implemented in practical F-RAN systems since the cloud would need to know the local statistics of the ENs perfectly. Nevertheless, thanks to Proposition 1, it can alternatively be executed through the proposed computation rule in (23). Thus, although the uplink messages are independently created at the ENs, the superposition signaling and resource sharing policies of the uplink NOMA fronthauling strategy lead to successful distributed decisions at the ENs. Since Proposition 1 holds for any continuous functions and , the universal approximation property is satisfied in the DL formulation with well-designed DNNs (8) and (9). As a result, the unknown optimal downlink message can be obtained by optimizing the DNNs with the end-to-end training policy (13).
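As a concrete illustration of this sum-decomposition structure, the sketch below builds a toy permutation-invariant downlink message in NumPy; `inner_map` and `outer_map` are hypothetical stand-ins for the trained DNNs in (8) and (9), not the paper's actual networks.

```python
import numpy as np

def inner_map(x):
    # hypothetical per-EN encoder psi: maps a local observation to an
    # uplink message (stand-in for the EN-side DNN in (8))
    return np.concatenate([x, x**2, np.sin(x)])

def outer_map(s):
    # hypothetical cloud decoder rho: maps the superimposed uplink messages
    # to a common downlink message (stand-in for the cloud DNN in (9))
    return np.tanh(s)

def noma_downlink_message(observations):
    # superposition (sum) over ENs makes the result permutation-invariant
    return outer_map(sum(inner_map(x) for x in observations))

obs = [np.array([0.3]), np.array([1.2]), np.array([-0.7])]
m1 = noma_downlink_message(obs)
m2 = noma_downlink_message(obs[::-1])  # reorder the ENs
assert np.allclose(m1, m2)  # same message regardless of EN ordering
```

Because the uplink messages enter only through their sum, any reordering of the ENs leaves the cloud computation unchanged, which is exactly the property Proposition 1 exploits.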
V-C2 Impact of
We analyze the number of the uplink fronthaul RBs required for achieving the universal approximation property (25). For a scalar input , a simple inner mapping of length achieves the element-wise universal approximation property (26) [33, Thm. 7]. This implies that uplink fronthaul RBs are sufficient if all the local observations , , are given by scalar numbers. An extension to the general vector input case is challenging. Instead, we may consider a trivial modification of (24) as
(27)
where the observation vector is decoupled into its elements for . A modified operator now converts a set of elements into a -dimensional downlink message vector. This preserves the optimality since the resulting message still involves the global state essential for the individual decisions of the ENs. To implement (27), EN can employ different operators , . Then, (23) can be recast as
(28)
The NOMA strategy in (28) is achieved with uplink fronthaul RBs. Although (28) is proven to be effective, we adopt the vector-valued operator as in (23) since it includes (28) as a special case by restricting the weight matrices of the DNN in (8) to diagonal matrices. Numerical results confirm that (23) requires a much smaller number of the uplink fronthaul RBs than the analytical result .
V-C3 Optimality of OMA fronthauling
We now discuss the optimality of the OMA scheme in (22). Thanks to the orthogonal transmission, the cloud can separate the uplink messages , . Nevertheless, the universal approximation theorem (11) cannot be established for (22) since a simple concatenation of DNNs is far from the fully-connected DNN assumed in (11). To this end, we present a suitable transformation of (22) that removes the concatenation operations. Let of length be a zero-padded version of . All elements of are zeros except the -th to the -th elements, which are replaced with . Similarly, the corresponding message generation operator can also be defined as the zero-padded version of the original inference . Then, (22) can be refined as
(29)
where is the concatenation of the downlink messages. Unlike the NOMA case (23), due to the concatenation operation, the ordering of the ENs affects the downlink message computations of the OMA. Therefore, the optimal OMA downlink message is modeled as a generic inference with the stacked local observation vectors, i.e., , rather than the permutation-invariant set function in (24). Proposition 1, which is based on the permutation-invariance of the target set function, cannot be straightforwardly applied to the OMA method.
To address this, we leverage the Kolmogorov–Arnold representation theorem [35] which states that any continuous mapping can be represented as a superposition of continuous functions. Assuming and scalar local observations , , a continuous function has the following representation [33, Thm. 8]
(30)
with some mappings and . The uplink message generation operator of the OMA requires uplink fronthaul RBs for the universal approximation property. With approaches similar to those presented in Section V-C2, extensions of (30) to the general case with and vector inputs , , result in uplink RBs, which is about twice as large as that of the NOMA case in (28) achieved with . Thus, although the performance of the OMA method could reach that of the NOMA system, it might need more uplink fronthaul resources. This is verified by the numerical results.
VI Imperfect Fronthaul Links
This section investigates the imperfect fronthaul link cases with random noise and finite capacity constraints. The robust training strategy of the CECIL framework is proposed for each scenario. The details are explained in the following.
VI-A Noisy fronthaul links
The imperfection of the wireless fronthauls can be modeled by random additive noise. We specify the fronthaul channel functions as , where stands for the noise vector with an arbitrary distribution. In the OMA system, the received messages at the cloud (14) and at EN (16) are respectively written as
(31)
where for denotes the noise at node . We obtain similar formulations for the NOMA system as
(32)
The noise hinders successful decisions at the ENs, thereby requiring robust message generation strategies both at the cloud and the ENs. To this end, we modify the training update in (13) by taking the noise into account. We include numerous realizations of the random noise vectors in the training set. A mini-batch then becomes a set of tuples of the global observation and a collection of the uplink and downlink noise vectors . The DNN parameter is adjusted in the ascent direction of the gradient averaged over the noise distribution. Such a data-driven optimization enables a robust design of the CECIL framework by observing numerous noisy messages (31) and (32) in the training step.
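A minimal sketch of this noise-injection step, assuming Gaussian fronthaul noise and exponentially distributed channel gains as in the later experiments (all names and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_minibatch(batch_size, num_ens, obs_dim, noise_std):
    # each training tuple pairs a global observation with fresh uplink and
    # downlink fronthaul noise realizations, so the SGD update in (13)
    # averages the gradient over the noise distribution
    batch = []
    for _ in range(batch_size):
        h = rng.exponential(1.0, size=(num_ens, obs_dim))       # local CSI
        n_ul = rng.normal(0.0, noise_std, size=(num_ens, obs_dim))
        n_dl = rng.normal(0.0, noise_std, size=(num_ens, obs_dim))
        batch.append((h, n_ul, n_dl))
    return batch

batch = make_minibatch(batch_size=4, num_ens=2, obs_dim=3, noise_std=0.1)
assert len(batch) == 4
```

Drawing fresh noise per sample is what makes the trained messages robust: the DNNs never see the same noisy fronthaul realization twice.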
VI-B Finite-capacity fronthaul links
Up to Section V, we assumed lossless fronthaul interactions where each RB can convey a real-valued scalar number without any distortion. In a practical wired fronthaul link setup, however, the resolution of the message is limited by the fronthaul capacity. To this end, in this subsection, we design a robust training policy of the CECIL for the general case where the fronthaul links are subject to finite transmission capacity. The fronthaul channels and can be given as rounding functions that output the nearest integer of the transmitted messages. In this configuration, only lossy coordination sharing discrete-valued messages is allowed. To accommodate capacity-limited fronthaul links, we present a message quantization process that creates discrete representations of continuous-valued messages. We focus on the quantization of the uplink message , but the proposed techniques are readily applied to the downlink message quantization. Let be the quantization output of . The capacity of the uplink fronthaul link connecting EN and the cloud is modeled by a set of integers , , each of which indicates the alphabet size, or equivalently, the modulation level allowed for transferring the -th element . It is expressed as
(33)
where stands for the quantization function with the quantization level . It maps a continuous-valued input into a discrete set . The received signals in (2) and (5) can then be refined as and , respectively.
The quantization operator is viewed as an activation function appended to the output of , i.e., the DNN in (8). Our target is to design the activation such that acts as an accurate estimate of the original message . In this way, the cloud and ENs can successfully recover the original messages from their quantized observations. One naive approach would be to employ the rounding function. However, the simple rounding activation exhibits zero gradient over the entire input domain, thereby prohibiting the DNN parameters from being optimized using the SGD method in (13). This is the well-known vanishing gradient problem, where the performance of the DNNs is no longer improved and possibly gets stuck at an unsatisfactory point [30]. In our case, the DNNs in (8) and (9) would not be trained properly. To handle this difficulty, a novel quantization method has been provided in [19, 36], but it is only applicable to the special case of .
We propose an integerization technique regarded as an extension of the binarization method in [19] to the general case of . The -th element of the continuous-valued message is assumed to lie in a bounded region . This can be achieved by applying a bounding activation, e.g., the sigmoid function, to the output layer of the DNN . The proposed quantization function in (33) carries out a randomized rounding operation. It first identifies the two nearest integers and , , of the input , i.e., , as candidates of the quantization. For notational simplicity, we denote and . Provided that , the rounding output can be either or with probabilities
(34)
(35)
The probabilities in (34) and (35) can be interpreted as the distances from the continuous input to the target quantization points and , respectively. The probability increases as gets closer to , and the resulting quantization is more likely to be .
The proposed quantization activation for an input is given as
(36)
where denotes the indicator function, which equals if the condition is true and otherwise. The following proposition states the quality of the quantization in terms of its estimation property for the unavailable information .
Proposition 2.
The quantization output in (36) is an unbiased estimate of the continuous-valued input, i.e., the conditional expectation of given equals .
Proof:
To prove the unbiased estimation property, it suffices to show that the conditional expectation of given , denoted by , is equal to . It follows
(37)
(38)
(39)
where (39) is obtained since
(40)
(41)
We thus have . This completes the proof. ∎
Proposition 2 reveals that the cloud and ENs can accurately recover the continuous-valued messages by taking expectations over the received quantized messages. This can be realized with numerous quantization samples observed in the training step. Therefore, the DNN at the cloud in (9), which processes the quantized uplink messages , can be trained to decode the original information successfully.
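The randomized rounding of (36) and the unbiasedness claimed in Proposition 2 can be checked empirically with a short NumPy sketch (our own illustration, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_round(x):
    # round up with probability equal to the fractional part, as in (36),
    # so that the conditional mean of the output equals x (Proposition 2)
    lo = np.floor(x)
    return lo + (rng.random(np.shape(x)) < (x - lo))

x = 2.3
samples = stochastic_round(np.full(100_000, x))
assert set(np.unique(samples)) <= {2.0, 3.0}  # only the two nearest integers
assert abs(samples.mean() - x) < 0.01         # empirically unbiased
```

Averaging many quantized samples recovers the continuous-valued message, which is exactly how the receiving DNNs learn to decode the original information during training.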
Now, we discuss an efficient training strategy for the DNNs implemented with the probabilistic activation (36), which, in general, has no closed-form expression for the gradient . To address this, the gradient estimation techniques of [37, 38, 19, 36] are adopted, which approximate an intractable gradient with its average evaluated over the randomized operations. By leveraging Proposition 2, the gradient can be approximated as
(42)
It is inferred from (42) that the gradient of the proposed quantization activation can simply be replaced with that of the input continuous-valued message . Since is obtained with a bounding activation, e.g., the sigmoid function, whose derivative is well-defined over the entire input domain, the parameter set can be efficiently trained with the SGD algorithm.
Combining (36) and (42), we can conclude that the proposed quantization activation exhibits different behaviors in the forward pass and backward pass. The actual quantized messages are computed in the forward pass with the randomized rounding operations (36), and the resulting quantization is forwarded through the capacity-limited fronthaul links. On the contrary, to optimize the DNN parameter set , we need to calculate the gradients through the backpropagation algorithm [30]. In this backward pass computation, the quantization activation yields its input variable directly.
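This forward/backward asymmetry is the standard straight-through trick; a minimal sketch (our own, outside any DL framework) makes the two passes explicit:

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize_forward(x):
    # forward pass: the actual randomized rounding of (36) is transmitted
    lo = np.floor(x)
    return lo + (rng.random(np.shape(x)) < (x - lo))

def quantize_backward(upstream_grad):
    # backward pass: the quantizer's gradient is replaced by the identity,
    # i.e., the straight-through approximation in (42)
    return upstream_grad

x = np.array([0.2, 1.7, 3.5])
y = quantize_forward(x)                  # integer messages on the fronthaul
g = quantize_backward(np.ones_like(x))   # gradient flows as if q(x) = x
assert np.all(y == np.round(y))
assert np.allclose(g, 1.0)
```

In a framework such as TensorFlow this is typically realized by forwarding the quantized value while letting backpropagation see the identity, so the DNN parameters receive non-zero gradients despite the discrete fronthaul signaling.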
VII Performance Evaluation
This section assesses the performance of the proposed CECIL framework for power control applications in the F-RAN systems. EN () sends data symbols to its intended mobile receiver referred to as user . The ENs share the identical time-frequency resources for the data transmission. To mitigate the multi-user interference, an appropriate power allocation mechanism is required at individual ENs. The decision variable of EN becomes the transmit power with equal to the maximum allowable power budget. Let () be the channel gain from EN to user . EN can only observe an -dimensional local CSI vector that is reported from the corresponding user [12, 19]. The global network CSI is then defined as .
Two different utility functions are considered: average sum rate utility and average sum energy-efficiency (EE) utility. Defining as the feasible set of the concatenated solution vector , the sum rate maximization (SRMax) and the sum EE maximization (EEMax) problems are respectively formulated as
(43)
where stands for the rate of user and is the static power consumption at the ENs [39]. The power of the proposed DL-based cooperative mechanisms and the intuitions presented in Section V can be assessed via the power control problems in (43), which have been popular applications of DNN-assisted cooperative optimization studies [16, 19, 18, 20].
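For concreteness, the two utilities can be evaluated as follows for given channel gains and powers; the noise power and static power values here are placeholders, not the paper's settings:

```python
import numpy as np

def sum_rate(G, p, noise=1.0):
    # G[j, k]: gain from EN j to user k; p[k]: transmit power of EN k
    sig = np.diag(G) * p
    interf = G.T @ p - sig  # interference at each user from the other ENs
    return np.sum(np.log2(1.0 + sig / (interf + noise)))

def sum_ee(G, p, noise=1.0, p_static=1.0):
    # sum energy efficiency: each EN's rate over its total consumed power
    sig = np.diag(G) * p
    interf = G.T @ p - sig
    rates = np.log2(1.0 + sig / (interf + noise))
    return np.sum(rates / (p + p_static))

G = np.array([[1.0, 0.2],
              [0.3, 0.8]])
p = np.array([1.0, 1.0])
assert sum_rate(G, p) > 0 and sum_ee(G, p) > 0
```

Both utilities are non-convex in the powers because of the cross-interference terms, which is what motivates the learned and iterative solvers compared below.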
The channel gains are generated as the exponential random variables with unit mean. The transmit power constraint is set to , and the static power consumption is fixed as . A five-layer DNN with 100 hidden neurons is employed at the cloud DNN in (9). The DNNs (8) and (10) at the ENs are constructed with three layers each with 50 neurons. The batch normalization technique [40] followed by the rectified linear unit (ReLU) activation is adopted at the hidden layers. Unless stated otherwise, we use the linear activations at the output layers of the message generating DNNs at the cloud (9) and at the ENs (8). For creating a feasible power level , the sigmoid function multiplied by is utilized at the output layer of the distributed optimizing DNN in (10). Each training epoch consists of 50 mini-batches each of which contains independently generated random channel gains . The Adam algorithm [29] with learning rate is exploited. The test performance is evaluated with test samples. The training and testing steps are implemented with Tensorflow 1.15.0 on a PC with an Intel i7-9700K CPU, 32 GB of RAM, and a GEFORCE RTX 2080 GPU.
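The EN-side optimizing DNN described above can be sketched as a plain NumPy forward pass (random stand-in weights; batch normalization omitted; the input dimension is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp_forward(x, dims, out_act):
    # ReLU hidden layers with the widths used in the experiments
    for i in range(len(dims) - 1):
        W = rng.normal(0.0, 0.1, size=(dims[i], dims[i + 1]))  # stand-in weights
        z = x @ W
        x = out_act(z) if i == len(dims) - 2 else np.maximum(z, 0.0)
    return x

p_max = 1.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# distributed optimizing DNN (10): 50-neuron hidden layers with a
# sigmoid output scaled by p_max to guarantee a feasible power level
x = rng.normal(size=(1, 4))
p = mlp_forward(x, [4, 50, 50, 1], lambda z: p_max * sigmoid(z))
assert 0.0 <= p.item() <= p_max
```

The scaled sigmoid at the output layer enforces the power constraint by construction, so no projection step is needed at inference time.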
VII-A Perfect Fronthaul Link Case
We first focus on the perfect fronthaul link case where the messages can be exchanged via the noiseless fronthaul channels (20) and (21). In this ideal scenario, we validate the optimality of the NOMA and OMA fronthauling methods. The following baseline schemes are considered.
• Ideal cooperation (IC): The cloud is assumed to obtain the global CSI vector perfectly. The cloud centrally computes the solution via a DNN with 12 layers and 100 hidden neurons, which has a similar number of trainable variables to the proposed CECIL. The resulting solution is then assumed to be perfectly known to the ENs.
• No cooperation (NC): No message exchange is allowed. Each EN needs to decide the power control solution with an individual DNN, which accepts only the local CSI as input.
• Projected gradient descent (PGD): The power control solution is optimized via the PGD method [41] under the feasible set . To facilitate GPU-enabled parallel computations, we utilize the Adam optimizer in Tensorflow with the precision . The PGD generates a locally optimal solution for the SRMax and EEMax.
To implement the IC and PGD methods, EN uploads an -dimensional local CSI vector to the cloud by using RBs, resulting in total uplink fronthaul RBs. Also, the cloud forwards the local decision variable to EN through the downlink fronthaul links with RB, requiring downlink RBs. Therefore, the total number of the fronthaul RBs is given as . On the other hand, the NC baseline allows no interaction between the cloud and the ENs, i.e., .
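A minimal sketch of the PGD baseline for the SRMax (using numerical gradients for brevity instead of the Adam optimizer actually employed; dimensions and step sizes are illustrative):

```python
import numpy as np

def pgd_power_control(G, p_max, steps=500, lr=0.05, noise=1.0):
    # projected gradient ascent on the sum rate over the box [0, p_max]^N
    def rate(p):
        sig = np.diag(G) * p
        interf = G.T @ p - sig
        return np.sum(np.log2(1.0 + sig / (interf + noise)))

    p = np.full(G.shape[0], p_max / 2)
    eps = 1e-6
    for _ in range(steps):
        grad = np.array([(rate(p + eps * e) - rate(p - eps * e)) / (2 * eps)
                         for e in np.eye(len(p))])  # central differences
        p = np.clip(p + lr * grad, 0.0, p_max)      # projection step
    return p

G = np.array([[1.0, 0.1],
              [0.1, 1.0]])
p = pgd_power_control(G, p_max=1.0)
assert np.all((p >= 0.0) & (p <= 1.0))
```

The per-iteration gradient and projection steps are what make the PGD inference iterative, in contrast to the single forward pass of the trained DNNs compared in Table I.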
Fig. 4 exhibits the average sum rate performance obtained by changing the number of the uplink fronthaul RBs for various choices of the total number of the RBs . For a fair comparison with the IC and PGD methods, the maximum of in the simulations is set to . For fixed and , the number of the downlink fronthaul RBs is determined as . The OMA system evenly allocates the uplink and downlink RBs to each EN, i.e., and . Fig. 4(a) depicts the performance with ENs. We can see that the proposed CECIL outperforms the NC benchmark for all simulated and , even with a small number of the uplink fronthaul RBs, e.g., . The CECIL with the NOMA fronthauling performs better than that with the OMA scheme. With sufficient , the CECIL is superior to the locally optimal PGD method. As increases, the proposed schemes reach the upper-bound performance of the IC method. For a fixed , the performance of the proposed schemes does not improve with increasing , or, equivalently, with an increasing number of the downlink RBs . This means that the uplink coordination, which uploads the encoding of the local CSI from the ENs to the cloud, is more crucial than the downlink interaction that forwards the results of the cloud computing to the ENs. Thus, for fixed , the optimum fronthaul resource allocation policy is to assign as small a as possible, e.g., , and utilize the remaining RBs for the uplink coordination as . For the NOMA, RBs with the allocation scheme and are sufficient to achieve the performance of the IC requiring RBs, thereby saving RBs. As expected in Section V-C, more RBs are needed for the OMA as , which is the same as the IC baseline.
Similar observations are made from Figs. 4(b) and 4(c), which present the sum rate with and ENs, respectively. We numerically find that RBs with the allocation and suffice for the NOMA method to get close to the upper-bound IC performance. This is much smaller than obtained from the analysis in Section V-C. Compared to the IC and PGD methods requiring RBs, the proposed CECIL with the NOMA fronthauling can save total RBs while achieving the same sum rate performance. Still, the OMA method needs RBs with and . We can conclude that the NOMA fronthauling is more efficient than the OMA for any given , both in terms of the performance and the fronthaul signaling overheads.
The EEMax problem is examined in Fig. 5, which presents the average sum EE with respect to . Phenomena similar to the SRMax results are observed. The proposed approaches work well also in the EEMax formulation and outperform the other baselines. It is still beneficial to allocate more RBs to the uplink fronthaul interactions. The NOMA system with RBs for and achieves performance identical to the IC method. We can conclude that the CECIL generally performs well for arbitrary utility functions.
             | SRMax  | EEMax | SRMax  | EEMax | SRMax  | EEMax
PGD          | 6.142  | 1.210 | 10.812 | 1.641 | 14.134 | 2.348
CECIL (NOMA) | 0.541  | 0.541 | 0.858  | 0.858 | 1.310  | 1.310
CECIL (OMA)  | 0.539  | 0.539 | 0.859  | 0.859 | 1.308  | 1.308
Table I compares the online time complexity in terms of the GPU running time for parallel executions of test samples. Both the proposed and PGD methods are implemented in the identical Tensorflow environment to exploit GPU-enabled parallel computations. Both the NOMA and OMA systems employ RBs with and , which is the same setting as the PGD method. The proposed approaches significantly reduce the GPU running time compared to the traditional PGD algorithm, which requires iterative calculations in the real-time inference. The execution time of the PGD varies across the formulations since its convergence speed highly relies on the structure of the utility functions. The SRMax generally needs a higher computational complexity than the EEMax. On the contrary, the proposed schemes show identical time complexity regardless of the formulation since their online computations depend only on the structure of the DNNs. The result implies that the CECIL framework outperforms the traditional optimization algorithm in terms of the performance, signaling overhead, and computational complexity.
VII-B Imperfect Fronthaul Link Case
The rest of this section demonstrates the proposed CECIL method in the imperfect fronthaul link case. For simplicity, we focus on the SRMax with ENs. The noisy fronthaul channel model in Section VI-A is considered first. The noise vectors in (31) and (32) are generated as Gaussian random vectors with zero mean and covariance . The peak power constraint is imposed for the message transmission on each RB. The elements of the message vectors are designed to lie in the bounded range by applying the hyperbolic tangent activation to the output layers of the DNNs in (8) and (9). Then, the fronthaul signal-to-noise ratio (SNR) can be defined as .
Fig. 6 illustrates the average sum rate performance with RBs by changing the fronthaul SNR. For comparison, the performance of the CECIL trained and tested without the noise, i.e., , is plotted. The non-robust scheme stands for the case where the CECIL is trained with the perfect fronthaul links as but its test performance is evaluated in the presence of the noise. Two naive power control policies, i.e., the max power scheme with and the random power method with uniformly generated power , are also depicted. Both in the NOMA (Fig. 6(a)) and OMA (Fig. 6(b)) scenarios, the proposed method converges to the performance of the perfect cooperation case of as the SNR grows. For all simulated , the robust CECIL trained with the random noise presents a remarkable performance gain over the NC baseline even in the low SNR regime. This implies that the proposed cloud-aided coordination policy is beneficial for the practical noisy fronthaul channels. The non-robust design exhibits a fairly degraded performance. In the low SNR regime, the performance of the non-robust design becomes worse than the NC method, meaning that the fronthaul cooperation is not helpful if the DNNs are not carefully trained. This verifies the importance of the proposed robust learning strategy which includes random fronthaul noises in the training data set.
Fig. 7 provides the sum rate performance as a function of for the fronthaul SNRs of and . The NOMA system is still superior to the OMA method in the presence of the noise. In the high SNR regime (), it is efficient to allocate more RBs to the uplink fronthaul link as in the perfect fronthaul case. This is however not true at . For fixed , the increase in would lead to the performance degradation. There would be a nontrivial tradeoff in the uplink-downlink fronthaul RB allocation for the imperfect fronthaul link scenario.
In Fig. 8, we examine the adaptivity of the CECIL framework in a more realistic setup where the fronthaul interactions undergo asymmetric channel gains. Each element of the message vectors is multiplied by a random channel coefficient drawn from the uniform distribution within [0.1, 1]. The NOMA fronthauling scheme still performs better than the OMA method, demonstrating the effectiveness of the resource sharing nature of the NOMA [42]. Both the NOMA and OMA require a fairly high fronthaul SNR to achieve the performance of the centralized PGD algorithm. A more sophisticated interaction policy would be needed at the cloud and ENs to capture the impact of the asymmetric fronthaul channel gains.
Next, we investigate the finite-capacity fronthaul link case of Section VI-B. The capacity of each fronthaul link is fixed as , , where reflects the fronthaul capacity in bits. Fig. 9 exhibits the sum rate performance of the finite-capacity fronthaul link case with respect to for in the NOMA (Fig. 9(a)) and OMA (Fig. 9(b)) systems. The performance for the perfect fronthaul link case, i.e., infinite , is shown as a reference. The proposed quantization activation in (36) is not included in the non-robust design. Hence, it trains the DNNs under the perfect fronthaul link assumption, and its test performance is measured with the rounding channel functions with finite . The performance of the proposed message quantization constantly grows as gets larger and significantly outperforms the non-robust design. We can see that is sufficient for the NOMA system to achieve the upper-bound performance of the IC baseline with infinite , whereas the OMA fails to get close to it even with bits.
We plot the average sum rate of the finite-capacity fronthaul case in Fig. 10 as a function of . Similar to the additive noise scenario in Fig. 7, in the finite-capacity case, the optimal fronthaul resource allocation strategy is not trivial if is small, i.e., the F-RAN suffers from the inaccurate fronthaul interactions. Regardless of and , the NOMA outperforms the OMA fronthauling scheme in the finite-capacity fronthaul link case. Therefore, we can conclude that the NOMA system is robust to the imperfections incurred in the fronthaul interaction steps.
VIII Concluding Remarks
This paper studies a DL solution for addressing generic F-RAN optimization tasks where a cloud schedules decentralized computations of ENs through fronthaul links. A structural learning inference termed the CECIL framework is proposed, which mimics a cloud-aided cooperative optimization strategy. Three different types of DNN modules are applied to the cloud and the individual ENs, each responsible for uplink coordination, downlink coordination, or distributed optimization. We design multiple access schemes for the fronthaul messages to facilitate the multi-EN fronthaul interactions. Robust training policies are presented for the practical imperfect fronthaul link scenarios. Numerical simulations validate the superiority of the proposed DL framework over existing optimization algorithms in terms of the performance, fronthaul signaling overheads, and computational complexity. To combat wireless fading fronthaul channels, it would be an interesting future work to adopt channel autoencoder techniques [43, 44, 36] for the message-generating inferences. Also, extensions to more complicated application scenarios such as multi-antenna coordinated beamforming problems are worth pursuing.
References
- [1] S.-H. Park, O. Simeone, and S. Shamai (Shitz), “Fronthaul compression for cloud radio access networks: signal processing advances inspired by network information theory,” IEEE Signal Process. Mag., vol. 31, no. 6, pp. 69–79, Nov. 2014.
- [2] R. Tandon and O. Simeone, “Harnessing cloud and edge synergies: toward an information theory of fog radio access networks,” IEEE Communications Magazine, vol. 54, no. 8, pp. 44–50, Aug. 2016.
- [3] S. Park, O. Simeone, O. Sahin, and S. Shamai, “Joint precoding and multivariate backhaul compression for the downlink of cloud radio access networks,” IEEE Trans. Signal Process., vol. 61, no. 22, pp. 5646–5658, Nov. 2013.
- [4] M. Tao, E. Chen, H. Zhou, and W. Yu, “Content-centric sparse multicast beamforming for cache-enabled cloud RAN,” IEEE Trans. Wireless Commun., vol. 15, no. 9, pp. 6118–6131, Sep. 2016.
- [5] W. Lee, O. Simeone, J. Kang, and S. Shamai (Shitz), “Multivariate fronthaul quantization for downlink C-RAN,” IEEE Trans. Signal Process., vol. 64, no. 19, pp. 5025–5037, Oct. 2016.
- [6] S.-H. Park, O. Simeone, and S. Shamai (Shitz), “Multi-tenant C-RAN with spectrum pooling: Downlink optimization under privacy constraints,” IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 10492–10503, Nov. 2018.
- [7] J. Kim, S. Park, O. Simeone, I. Lee, and S. Shamai (Shitz), “Joint design of fronthauling and hybrid beamforming for downlink C-RAN systems,” IEEE Trans. Commun., vol. 67, no. 6, pp. 4423–4434, Jun. 2019.
- [8] S.-H. Park, O. Simeone, and S. Shamai (Shitz), “Joint optimization of cloud and edge processing for fog radio access networks,” IEEE Trans. Wireless Commun., vol. 15, no. 11, pp. 7621–7632, Nov. 2016.
- [9] J. Liu, B. Bai, J. Zhang, and K. B. Letaief, “Cache placement in fog-RANs: from centralized to distributed algorithms,” IEEE Trans. Wireless Commun., vol. 16, no. 11, pp. 7039–7051, Nov. 2017.
- [10] Y. Xiao and M. Krunz, “Distributed optimization for energy-efficient fog computing in the tactile internet,” IEEE J. Sel. Areas Commun., vol. 36, no. 11, pp. 2390–2400, Nov. 2018.
- [11] H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: training deep neural networks for interference management,” IEEE Trans. Signal Process., vol. 66, no. 20, pp. 5438–5453, Oct. 2018.
- [12] W. Lee, D.-H. Cho, and M. Kim, “Deep power control: transmit power control scheme based on convolutional neural network,” IEEE Commun. Lett., vol. 22, no. 6, pp. 1276–1279, Jun. 2018.
- [13] W. Lee, O. Jo, and M. Kim, “Intelligent resource allocation in wireless communications systems,” IEEE Commun. Mag., vol. 58, no. 1, pp. 100–105, Jan. 2020.
- [14] D. Liu, C. Sun, C. Yang, and L. Hanzo, “Optimizing wireless systems using unsupervised and reinforced-unsupervised deep learning,” IEEE Netw., vol. 34, no. 4, pp. 270–277, Jul. 2020.
- [15] J. Kim, H. Lee, S.-E. Hong, and S.-H. Park, “Deep learning methods for universal MISO beamforming,” IEEE Wireless. Commun. Lett., vol. 9, no. 11, pp. 1894–1898, Nov. 2020.
- [16] P. de Kerret, D. Gesbert, and M. Filippone, “Team deep neural networks for interference channels,” in Proc. IEEE Int. Conf. Commun. (ICC), pp. 1–6, May 2018.
- [17] D. Gunduz, P. de Kerret, N. D. Sidiropoulos, D. Gesbert, C. R. Murthy, and M. van der Schaar, “Machine learning in the air,” IEEE J. Sel. Areas Commun., vol. 37, no. 10, pp. 2184–2199, Sept. 2019.
- [18] M. Kim, P. de Kerret, and D. Gesbert, “Learning to cooperate in decentralized wireless networks,” in Proc. IEEE Asilomar Conf. Signals, Syst. Comput. (ACSSC), Oct. 2018, pp. 281–285.
- [19] H. Lee, S. H. Lee, and T. Q. S. Quek, “Deep learning for distributed optimization: applications to wireless resource management,” IEEE J. Sel. Areas Commun., vol. 37, no. 10, pp. 2251–2266, Oct. 2019.
- [20] Y. S. Nasir and D. Guo, “Multi-agent deep reinforcement learning for dynamic power allocation in wireless networks,” IEEE J. Sel. Areas Commun., vol. 37, no. 10, pp. 2239–2250, Oct. 2019.
- [21] J. Park, S. Samarakoon, M. Bennis, and M. Debbah, “Wireless network intelligence at the edge,” Proc. IEEE, vol. 107, no. 11, pp. 2204–2239, Nov. 2019.
- [22] R. Zhang, Y.-C. Liang, C. C. Chai, and S. Cui, “Optimal beamforming for two-way multi-antenna relay channel with analogue network coding,” IEEE J. Sel. Areas Commun., vol. 27, no. 5, pp. 699–712, Jun. 2009.
- [23] K. Lee, H. Sung, E. Park, and I. Lee, “Joint optimization for one and two-way MIMO AF multiple-relay systems,” IEEE Trans. Wireless Commun., vol. 9, no. 12, pp. 3671–3681, Dec. 2010.
- [24] J. N. Laneman, D. N. C. Tse, and G. W. Wornell, “Cooperative diversity in wireless networks: efficient protocols and outage behavior,” IEEE Trans. Inf. Theory, vol. 50, no. 12, pp. 3062–3080, Dec. 2004.
- [25] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), pp. 1–9, Jun. 2016.
- [26] K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Netw., vol. 2, no. 5, pp. 359–366, Jan. 1989.
- [27] B. Amos and J. Z. Kolter, “OptNet: differentiable optimization as a layer in neural networks,” in Proc. Int. Conf. Mach. Learn. (ICML), 2017.
- [28] Y. Sun, P. Babu, and D. P. Palomar, “Majorization-minimization algorithms in signal processing, communications, and machine learning,” IEEE Trans. Signal Process., vol. 65, no. 3, pp. 794–816, Feb. 2017.
- [29] D. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2015.
- [30] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
- [31] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, Feb. 2001.
- [32] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, 2010.
- [33] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola, “Deep sets,” in Proc. Adv. Neural Inf. Process. Syst. (NeurIPS), pp. 3391–3401, Dec. 2017, [Online] Available: https://arxiv.org/abs/1703.06114.
- [34] N. E. Cotter, “The Stone-Weierstrass theorem and its application to neural networks,” IEEE Trans. Neural Netw., vol. 1, no. 4, pp. 290–295, Dec. 1990.
- [35] V. Kůrková, “Kolmogorov’s theorem and multilayer neural networks,” Neural Netw., vol. 5, no. 3, pp. 501–506, 1992.
- [36] H. Lee, T. Q. S. Quek, and S. H. Lee, “A deep learning approach to universal binary visible light communication transceiver,” IEEE Trans. Wireless Commun., vol. 19, no. 2, pp. 956–969, Feb. 2020.
- [37] Y. Bengio, N. Léonard, and A. Courville, “Estimating or propagating gradients through stochastic neurons for conditional computation,” arXiv preprint arXiv:1308.3432, Aug. 2013.
- [38] T. Raiko, M. Berglund, G. Alain, and L. Dinh, “Techniques for learning binary stochastic feedforward neural networks,” in Proc. Int. Conf. Learn. Represent. (ICLR), 2015.
- [39] C. Isheden, Z. Chong, E. Jorswieck, and G. Fettweis, “Framework for link-level energy efficiency optimization with informed transmitter,” IEEE Trans. Wireless Commun., vol. 11, no. 8, pp. 2946–2957, Aug. 2012.
- [40] S. Ioffe and C. Szegedy, “Batch normalization: accelerating deep network training by reducing internal covariate shift,” in Proc. Int. Conf. Mach. Learn. (ICML), pp. 448–456, Jul. 2015.
- [41] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.
- [42] W. Shin, M. Vaezi, B. Lee, D. J. Love, J. Lee, and H. V. Poor, “Non-orthogonal multiple access in multi-cell networks: Theory, performance, and practical challenges,” IEEE Commun. Mag., vol. 55, no. 10, pp. 176–183, Oct. 2017.
- [43] T. O’Shea and J. Hoydis, “An introduction to deep learning for the physical layer,” IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 4, pp. 563–575, Dec. 2017.
- [44] H. Lee, S. H. Lee, T. Q. S. Quek, and I. Lee, “Deep learning framework for wireless systems: applications to optical wireless communications,” IEEE Commun. Mag., vol. 57, no. 3, pp. 35–41, Mar. 2019.