Deep Reinforcement Learning for Multi-Agent Power Control in Heterogeneous Networks
Abstract
We consider a typical heterogeneous network (HetNet), in which multiple access points (APs) are deployed to serve users by reusing the same spectrum band. Since different APs and users may cause severe interference to each other, advanced power control techniques are needed to manage the interference and enhance the sum-rate of the whole network. Conventional power control techniques first collect instantaneous global channel state information (CSI) and then calculate sub-optimal solutions. Nevertheless, it is challenging to collect instantaneous global CSI in the HetNet, in which the global CSI typically changes rapidly. In this paper, we exploit deep reinforcement learning (DRL) to design a multi-agent power control algorithm for the HetNet. To be specific, by treating each AP as an agent with a local deep neural network (DNN), we propose a multiple-actor-shared-critic (MASC) method to train the local DNNs separately in an online trial-and-error manner. With the proposed algorithm, each AP can independently use its local DNN to control the transmit power with only local observations. Simulation results show that the proposed algorithm outperforms conventional power control algorithms in terms of both the converged average sum-rate and the computational complexity.
Index Terms:
DRL, multi-agent, power control, MASC, HetNet.
I Introduction
Driven by ubiquitous wireless devices such as smart phones and tablets, wireless data traffic has been increasing dramatically in recent years [1]-[3]. This traffic lays a heavy burden on conventional cellular networks, in which a macro base station (BS) is deployed to provide wireless access services for all the users within the macro-cell. As an alternative, the heterogeneous network (HetNet) has been proposed, in which small cells are deployed within the macro-cell. Typical small cells include pico-cells and femto-cells, which are able to provide flexible wireless access services for users with heterogeneous demands. It has been demonstrated that small cells can effectively offload wireless traffic from the macro BS, provided that the small cells are well coordinated [4].
Due to the spectrum scarcity issue, it is inefficient to assign orthogonal spectrum resources to all the cells (including the macro-cell and the small cells). As a result, different cells may reuse the same spectrum resource, which leads to severe inter-cell interference. To suppress the inter-cell interference and enhance the sum-rate of the cells reusing the same spectrum resource, power control algorithms are usually adopted [5]-[11]. Two conventional power control algorithms for maximizing the sum-rate are the weighted minimum mean square error (WMMSE) algorithm [7] and the fractional programming (FP) algorithm [8]. By assuming that the instantaneous global (including both intra-cell and inter-cell) channel state information (CSI) is available, both the WMMSE and FP algorithms can be used to calculate power allocation policies for the access points (APs) in different cells simultaneously.
It is known that the solutions of conventional power control algorithms are generally sub-optimal. In fact, the solutions can be completely invalid when the coherence time of the wireless channels is shorter than the processing time, which is the total time needed to estimate the channels, run the power control algorithm, and update the transmit power. In a HetNet, the radio environment is highly dynamic and the coherence time of the wireless channels is typically shorter than the processing time. As a result, the output solutions of conventional power control algorithms are typically invalid.
Motivated by the appealing performance of machine learning in computer science, e.g., computer vision and natural language processing, machine learning has recently been advocated for wireless network design. One typical application of machine learning lies in dynamic power control for sum-rate maximization in wireless networks [13], [14]. The work in [13] designs a deep learning (DL)-based algorithm to accelerate power allocation in a general interference-channel scenario. By collecting a large number of global CSI sets, [13] uses the WMMSE algorithm to generate power allocation labels. Then, [13] trains a deep neural network (DNN) with these global CSI sets and the corresponding power allocation labels. With the trained DNN, power allocation policies can be directly calculated by feeding in the instantaneous global CSI. To avoid requiring instantaneous global CSI and meanwhile eliminate the computational cost of generating power allocation labels with the WMMSE algorithm, [14] considers a homogeneous network and assumes that neighboring transceivers (i.e., APs and users) can exchange their local information through certain cooperation. Then, [14] develops a deep reinforcement learning (DRL)-based algorithm, which optimizes power allocation policies in a trial-and-error manner and can converge to the performance of the WMMSE algorithm after sufficient trials.
I-A Contributions
In this paper, we study the power control problem for sum-rate maximization in a typical HetNet, in which multi-tier cells coexist by reusing the same spectrum band. Different from the single-tier scenario, both the maximum transmit power and the coverage of each AP in the multi-tier scenario are typically heterogeneous. The main contributions of this paper are summarized as follows:
1. We exploit DRL to design a multi-agent power control algorithm for the HetNet. First, we establish a local DNN at each AP and treat each AP as an agent. The input and the output of each local DNN are the local state information and the adopted local action, i.e., the transmit power of the corresponding AP, respectively. Then, we propose a novel multiple-actor-shared-critic (MASC) method to train each local DNN separately in an online trial-and-error manner.
2. The MASC training method is composed of multiple actor DNNs and a shared critic DNN. In particular, we first establish an actor DNN in the core network for each local DNN, where each actor DNN has the same structure as the corresponding local DNN. Then, we establish a shared critic DNN in the core network for these actor DNNs. By feeding historical global information into the critic DNN, the output of the critic DNN can accurately evaluate whether the output (i.e., transmit power) of each actor DNN is good or not from a global view. By training each actor DNN with the evaluation of the critic DNN, the weight vector of each actor DNN can be updated towards the global optimum. The weight vector of each local DNN is then periodically replaced by that of the associated actor DNN until convergence.
3. The proposed algorithm has two main advantages over existing power control algorithms. First, compared with [7], [8], [13], and [14], each AP in the proposed algorithm can independently control its transmit power and enhance the sum-rate based on only local state information, in the absence of instantaneous global CSI and without any cooperation with other cells. Second, compared with [14], [15], and [16], the reward function of each agent in the proposed algorithm is simply the transmission rate between the corresponding AP and its served user, which avoids hand-crafted reward designs for each AP. This may ease the transfer of the proposed algorithm framework to other resource management problems in wireless communications.
4. By considering both two-layer and three-layer HetNets in the simulations, we demonstrate that the proposed algorithm can rapidly converge to an average sum-rate higher than those of the WMMSE and FP algorithms. Simulation results also show that the proposed algorithm outperforms the WMMSE and FP algorithms in terms of computational complexity.
I-B Related literature
DRL originates from RL, which has been widely used in the design of wireless communications [17]-[22], e.g., user/AP handoff, radio access technology selection, energy-efficient transmission, user scheduling and resource allocation, and spectrum sharing. In particular, RL estimates the long-term reward of each state-action pair and stores it in a two-dimensional table. For a given state, RL chooses the action with the maximum long-term reward to enhance the performance. It has been demonstrated that RL performs well in decision-making scenarios in which the state and action spaces of the wireless system are relatively small. However, the effectiveness of RL diminishes when the state and action spaces become large.
To choose proper actions in scenarios with large state and action spaces, DRL has been proposed by properly integrating DL and RL [23]. By adopting DNNs, which have a strong representation capability, DL has been applied in different areas of wireless communications [24], for example, power control [25], channel access [26], and link scheduling [27]. The basic idea of DRL is as follows: instead of storing the long-term reward of each state-action pair in a tabular manner, DRL uses a DNN to represent the long-term reward as a function of the state and action. Thanks to the strong representation capability of the DNN, the long-term reward of any state-action pair can be properly approximated. Applications of DRL in wireless networks include interference control among small cells [15], power control in a HetNet [16], resource allocation in V2V communications [28], caching policy optimization in content distribution networks [29], multiple access optimization in a HetNet [30], modulation and coding scheme selection in a cognitive HetNet [31], joint beamforming/power control/interference coordination in 5G networks [32], UAV navigation [33], and spectrum sharing [34]. More applications of DRL in wireless communications can be found in [35].
I-C Organizations of the paper
The remainder of this paper is organized as follows. We provide the system model in Sec. II. In Sec. III, we provide the problem formulation and analysis. In Sec. IV, we give preliminaries by overviewing related DRL algorithms. In Sec. V, we elaborate the proposed power control algorithm. Simulation results are shown in Sec. VI. Finally, we conclude the paper in Sec. VII.
II System Model

As shown in Fig. 1, we consider a typical HetNet, in which multiple APs share the same spectrum band to serve users and may cause interference to each other. In particular, we denote an AP as AP $n$, $n \in \mathcal{N} = \{1, 2, \ldots, N\}$, where $N$ is the number of APs. Accordingly, we denote the user served by AP $n$ as user equipment (UE) $n$. Next, we provide the channel model and the signal transmission model, respectively.
II-A Channel model
The channel between an AP and a UE has two components, i.e., large-scale attenuation (including path-loss and shadowing) and small-scale block Rayleigh fading. If we denote $\beta_{m,n}$ as the large-scale attenuation and $h_{m,n}(t)$ as the small-scale block Rayleigh fading from AP $m$ to UE $n$ in time slot $t$, the corresponding channel gain is $g_{m,n}(t) = \beta_{m,n} |h_{m,n}(t)|^2$. In particular, the large-scale attenuation is highly related to the locations of the AP and the UE, and typically remains constant for a long time. The small-scale block Rayleigh fading remains constant within a single time slot and changes between time slots.
According to [36], we adopt the Jakes model to represent the relationship between the small-scale Rayleigh fadings in two successive time slots, i.e.,

$$h_{m,n}(t) = \rho\, h_{m,n}(t-1) + \sqrt{1-\rho^2}\, e_{m,n}(t), \qquad (1)$$

where $\rho$ ($0 \le \rho \le 1$) denotes the correlation coefficient of two successive small-scale Rayleigh fading realizations, the initial fading $h_{m,n}(1)$ is a random variable drawn from the distribution $\mathcal{CN}(0,1)$, and the innovation $e_{m,n}(t)$ is a random variable produced by the distribution $\mathcal{CN}(0,1)$. It should be noted that the Jakes model can be reduced to the independent and identically distributed (IID) channel model if $\rho$ is zero.
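For illustration, the following sketch simulates the correlated small-scale fading of Eq. (1) with NumPy. It is a minimal sketch, assuming $\mathcal{CN}(0,1)$ initial fading and innovations as reconstructed above; the numbers of links and time slots and the value of $\rho$ are arbitrary illustrative choices rather than the paper's settings.

```python
import numpy as np

def simulate_rayleigh_fading(num_links, num_slots, rho, rng=None):
    """Correlated small-scale Rayleigh fading per Eq. (1).

    h(t) = rho * h(t-1) + sqrt(1 - rho^2) * e(t), with h(1), e(t) ~ CN(0, 1).
    Setting rho = 0 reduces the model to IID block fading.
    """
    rng = np.random.default_rng() if rng is None else rng
    # CN(0, 1): real and imaginary parts are N(0, 1/2).
    cn = lambda size: (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    h = np.empty((num_slots, num_links), dtype=complex)
    h[0] = cn(num_links)                                  # initial fading realization
    for t in range(1, num_slots):
        h[t] = rho * h[t - 1] + np.sqrt(1 - rho ** 2) * cn(num_links)
    return h

# Example: 5 links over 1000 time slots with correlation coefficient 0.5.
h = simulate_rayleigh_fading(num_links=5, num_slots=1000, rho=0.5)
print(np.mean(np.abs(h) ** 2))  # average channel power stays close to 1
```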
II-B Signal transmission model
If we denote $s_n(t)$ as the downlink signal from AP $n$ to UE $n$ with unit power in time slot $t$, the received signal at UE $n$ is

$$y_n(t) = \sqrt{p_n(t)\, g_{n,n}(t)}\, s_n(t) + \sum_{m \in \mathcal{N}, m \ne n} \sqrt{p_m(t)\, g_{m,n}(t)}\, s_m(t) + z_n(t), \qquad (2)$$

where $p_n(t)$ is the transmit power of AP $n$, $z_n(t)$ is the noise at UE $n$ with power $\sigma^2$, the first term on the right-hand side is the received desired signal from AP $n$, and the second term on the right-hand side is the received interference from the other APs. By considering that all the downlink transmissions from APs to UEs are synchronized, the signal-to-interference-plus-noise ratio (SINR) at UE $n$ can be written as

$$\mathrm{SINR}_n(t) = \frac{p_n(t)\, g_{n,n}(t)}{\sum_{m \in \mathcal{N}, m \ne n} p_m(t)\, g_{m,n}(t) + \sigma^2}. \qquad (3)$$
Accordingly, the downlink transmission rate (in bps) from AP $n$ to UE $n$ is

$$C_n(t) = W \log_2\!\big(1 + \mathrm{SINR}_n(t)\big), \qquad (4)$$

where $W$ is the bandwidth of the downlink transmission.
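As a concrete check of Eqs. (3)-(4), the sketch below computes the per-UE SINR and the network sum-rate from a channel-gain matrix and a power vector. The matrix layout (entry [m, n] is the gain from AP m to UE n), the noise power, and the bandwidth are illustrative assumptions.

```python
import numpy as np

def sum_rate(p, g, noise_power, bandwidth):
    """Sum-rate of Eq. (4) for transmit powers p (in W) and gain matrix g.

    g[m, n] is the channel gain from AP m to UE n, so the desired power at
    UE n is p[n] * g[n, n] and the interference is sum_{m != n} p[m] * g[m, n].
    """
    received = p[:, None] * g                       # received power from each AP at each UE
    desired = np.diag(received)                     # p_n * g_{n,n}
    interference = received.sum(axis=0) - desired   # sum over interfering APs
    sinr = desired / (interference + noise_power)   # Eq. (3)
    return bandwidth * np.sum(np.log2(1.0 + sinr))  # Eq. (4) summed over all UEs

# Example with 3 links, -114 dBm noise power, and 10 MHz bandwidth (illustrative values).
g = np.abs(np.random.randn(3, 3)) * 1e-10
p = np.array([1.0, 0.5, 0.25])
print(sum_rate(p, g, noise_power=10 ** (-114 / 10) * 1e-3, bandwidth=10e6))
```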
III Problem description and analysis
Our goal is to optimize the transmit power of all APs to maximize the sum-rate of all the downlink transmissions. Then, we can formulate the optimization problem as

$$\max_{\{p_n(t)\}}\ \sum_{n \in \mathcal{N}} C_n(t) \qquad \text{s.t.}\ \ 0 \le p_n(t) \le P_n^{\max},\ \forall n \in \mathcal{N}, \qquad (5)$$

where $P_n^{\max}$ is the maximum transmit power constraint of AP $n$. Note that the APs of different cells (e.g., the macro-cell base station, pico-cell base stations, and femto-cell base stations) typically have distinct maximum transmit power constraints.
Since wireless channels are typically dynamic, the optimal transmit power that maximizes the sum-rate differs significantly across time slots. In other words, the optimal transmit power should be determined at the beginning of each time slot to guarantee optimality. Nevertheless, there are two main challenges in determining the optimal transmit power of the APs. First, according to [37], this problem is generally NP-hard and it is difficult to find the optimal solution. Second, the optimal transmit power should be determined at the beginning of each time slot, and it is demanding to find the optimal solution (if possible at all) within such a short time period.
As mentioned above, conventional power control algorithms (e.g., the WMMSE algorithm and the FP algorithm) can output sub-optimal solutions for this problem by implicitly assuming a quasi-static radio environment, in which wireless channels change slowly. For dynamic radio environments, DL [13] and DRL [14] have recently been adopted to obtain sub-optimal solutions, by assuming the availability of instantaneous global CSI or cooperation among neighboring APs. In fact, these algorithms are inapplicable in the scenario considered in this paper due to the following two main constraints:
• Instantaneous global CSI is unavailable.
• Neighboring APs are unwilling or even unable to cooperate with each other.
In the rest of the paper, we will first provide preliminaries by overviewing related DRL algorithms and then develop a DRL based multi-agent power control algorithm to solve the above problem.
IV Preliminaries
In this section, we provide an overview of two DRL algorithms, i.e., deep Q-network (DQN) and deep deterministic policy gradient (DDPG), both of which are important for developing the power control algorithm in this paper. In general, DRL algorithms mimic human learning to solve sequential decision-making problems via trial and error. By observing the current environment state $s(t)$ and adopting an action $a(t)$ based on a policy $\pi$, the DRL agent obtains an immediate reward $r(t)$ and observes a new environment state $s(t+1)$. By repeating this process and treating each tuple $(s(t), a(t), r(t), s(t+1))$ as an experience, the DRL agent can continuously learn the optimal policy from the experiences to maximize the long-term reward.
IV-A Deep Q-network (DQN) [23]
DQN establishes a DNN $Q(s, a; \boldsymbol{\theta})$ with weight vector $\boldsymbol{\theta}$ to represent the expected cumulative discounted (long-term) reward obtained by executing action $a$ in environment state $s$. Then, $Q(s, a; \boldsymbol{\theta})$ can be rewritten in a recursive form (Bellman equation) as

$$Q(s, a; \boldsymbol{\theta}) = r(s, a) + \gamma \sum_{s'} P_{s \to s'}^{a} \max_{a'} Q(s', a'; \boldsymbol{\theta}), \qquad (6)$$

where $r(s, a)$ is the immediate reward obtained by executing action $a$ in environment state $s$, $\gamma \in [0, 1)$ is the discount factor representing the discounted impact of future rewards, and $P_{s \to s'}^{a}$ is the transition probability of the environment state from $s$ to $s'$ when executing action $a$. The DRL agent aims to find the optimal weight vector $\boldsymbol{\theta}^{*}$ that maximizes the long-term reward for each state-action pair. With the optimal weight vector $\boldsymbol{\theta}^{*}$, (6) can be rewritten as

$$Q(s, a; \boldsymbol{\theta}^{*}) = r(s, a) + \gamma \sum_{s'} P_{s \to s'}^{a} \max_{a'} Q(s', a'; \boldsymbol{\theta}^{*}). \qquad (7)$$
Accordingly, the optimal action policy is

$$\pi^{*}(s) = \arg\max_{a} Q(s, a; \boldsymbol{\theta}^{*}). \qquad (8)$$
However, it is challenging to obtain the optimal $\boldsymbol{\theta}^{*}$ directly from (7), since the transition probability is typically unknown to the DRL agent. Instead, the DRL agent updates $\boldsymbol{\theta}$ in an iterative manner. On the one hand, to balance exploitation and exploration, the DRL agent adopts an $\epsilon$-greedy algorithm to choose an action for each environment state: for a given state $s$, the DRL agent executes the action $\arg\max_{a} Q(s, a; \boldsymbol{\theta})$ with probability $1-\epsilon$, and randomly executes an action with probability $\epsilon$. On the other hand, by executing actions with the $\epsilon$-greedy algorithm, the DRL agent continuously accumulates experiences and stores them in an experience replay buffer in a first-in-first-out (FIFO) fashion. After sampling a mini-batch of experiences with length $D$ from the experience replay buffer, the DRL agent can update $\boldsymbol{\theta}$ by adopting the stochastic gradient descent (SGD) method to minimize the expected prediction error (loss function) of the sampled experiences, i.e.,

$$L(\boldsymbol{\theta}) = \mathbb{E}\Big[\big(r + \gamma \max_{a'} Q(s', a'; \boldsymbol{\theta}^{-}) - Q(s, a; \boldsymbol{\theta})\big)^2\Big], \qquad (9)$$

where $Q(s, a; \boldsymbol{\theta}^{-})$ is the target DNN and is established with the same structure as $Q(s, a; \boldsymbol{\theta})$. In particular, the weight vector $\boldsymbol{\theta}^{-}$ is updated periodically by $\boldsymbol{\theta}^{-} \leftarrow \boldsymbol{\theta}$ to stabilize the training of $Q(s, a; \boldsymbol{\theta})$.
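A minimal sketch of the DQN update described above, assuming a small discrete action set and a Keras Q-network; the layer sizes, discount factor, and $\epsilon$ are illustrative placeholders rather than tuned hyperparameters.

```python
import numpy as np
import tensorflow as tf

def build_q_net(state_dim, num_actions):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(state_dim,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_actions),          # Q(s, a; theta) for each discrete action
    ])

q_net, target_net = build_q_net(7, 10), build_q_net(7, 10)
target_net.set_weights(q_net.get_weights())          # theta^- <- theta (periodic copy)
optimizer = tf.keras.optimizers.Adam(1e-3)
gamma, epsilon = 0.5, 0.1

def select_action(state):
    """Epsilon-greedy: random action with prob. epsilon, greedy otherwise."""
    if np.random.rand() < epsilon:
        return np.random.randint(10)
    return int(np.argmax(q_net(state[None, :])[0]))

def train_step(s, a, r, s_next):
    """One SGD step on the loss of Eq. (9) for a sampled mini-batch."""
    y = r + gamma * tf.reduce_max(target_net(s_next), axis=1)       # Bellman target
    with tf.GradientTape() as tape:
        q = tf.gather(q_net(s), a, axis=1, batch_dims=1)            # Q(s, a; theta)
        loss = tf.reduce_mean(tf.square(y - q))
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
```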
IV-B Deep deterministic policy gradient (DDPG) [38]
It should be noted that DQN needs to estimate the long-term reward of each state-action pair to obtain the optimal policy. When the action space is continuous, DQN is inapplicable and DDPG can be an alternative. DDPG has an actor-critic architecture, which includes an actor DNN $\mu(s; \boldsymbol{\theta}^{\mu})$ and a critic DNN $Q(s, a; \boldsymbol{\theta}^{Q})$. In particular, the actor DNN is the policy network and is responsible for outputting a deterministic action from a continuous action space for an environment state $s$, and the critic DNN is the value-function network (similar to DQN) and is responsible for estimating the long-term reward obtained by executing action $a$ in environment state $s$. Similar to DQN, to stabilize the training of the actor DNN and the critic DNN, DDPG establishes a target actor DNN $\mu(s; \boldsymbol{\theta}^{\mu'})$ and a target critic DNN $Q(s, a; \boldsymbol{\theta}^{Q'})$, respectively.
Similar to DQN, the DDPG agent samples experiences from the experience replay buffer to train the actor DNN and the critic DNN. Nevertheless, DDPG accumulates experiences in a unique way. In particular, the executed action in each time slot is a noisy version of the output of the actor DNN, i.e., $a = \mu(s; \boldsymbol{\theta}^{\mu}) + \Delta a$, where $\Delta a$ is a random variable that guarantees continuous exploration of the action space around the output of the actor DNN. Note that the $\epsilon$-greedy method is usually used for action selection in discrete action spaces. When the action space is continuous, the noisy version of the output of the actor DNN is preferable for action selection [38], [39].
The training procedure of the critic DNN is similar to that of DQN: by sampling a mini-batch of experiences with length $D$ from the experience replay buffer, the DDPG agent can update $\boldsymbol{\theta}^{Q}$ by adopting the SGD method to minimize the expected prediction error of the sampled experiences, i.e.,

$$L(\boldsymbol{\theta}^{Q}) = \mathbb{E}\Big[\big(r + \gamma\, Q(s', \mu(s'; \boldsymbol{\theta}^{\mu'}); \boldsymbol{\theta}^{Q'}) - Q(s, a; \boldsymbol{\theta}^{Q})\big)^2\Big]. \qquad (10)$$

Then, the DDPG agent adopts a soft update method to update the weight vector of the target critic DNN, i.e., $\boldsymbol{\theta}^{Q'} \leftarrow \tau \boldsymbol{\theta}^{Q} + (1-\tau)\boldsymbol{\theta}^{Q'}$, where $\tau$ is the learning rate of the target DNN.
The goal of training the actor DNN is to maximize the expected value-function with respect to the environment state $s$, i.e., $J(\boldsymbol{\theta}^{\mu}) = \mathbb{E}_{s}\big[Q(s, \mu(s; \boldsymbol{\theta}^{\mu}); \boldsymbol{\theta}^{Q})\big]$. Then, the weight vector of the actor DNN can be updated in the sampled gradient direction of $J(\boldsymbol{\theta}^{\mu})$, i.e.,

$$\nabla_{\boldsymbol{\theta}^{\mu}} J \approx \mathbb{E}\Big[\nabla_{a} Q(s, a; \boldsymbol{\theta}^{Q})\big|_{a=\mu(s; \boldsymbol{\theta}^{\mu})}\, \nabla_{\boldsymbol{\theta}^{\mu}} \mu(s; \boldsymbol{\theta}^{\mu})\Big]. \qquad (11)$$

Similar to the target critic DNN, the DDPG agent updates the weight vector of the target actor DNN by $\boldsymbol{\theta}^{\mu'} \leftarrow \tau \boldsymbol{\theta}^{\mu} + (1-\tau)\boldsymbol{\theta}^{\mu'}$.
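The sketch below condenses the DDPG updates, i.e., the critic loss of Eq. (10), the sampled policy gradient of Eq. (11), and the soft target updates, into two functions. It assumes that Keras actor and critic models (the critic taking the pair [state, action] as input) have already been built; the soft-update rate and discount factor are illustrative values.

```python
import tensorflow as tf

TAU = 0.005                                    # illustrative soft-update rate

def soft_update(target, source):
    """theta' <- tau * theta + (1 - tau) * theta'."""
    for t_var, s_var in zip(target.variables, source.variables):
        t_var.assign(TAU * s_var + (1.0 - TAU) * t_var)

def ddpg_step(actor, critic, target_actor, target_critic,
              actor_opt, critic_opt, batch, gamma=0.5):
    s, a, r, s_next = batch
    r = tf.reshape(tf.cast(r, tf.float32), (-1, 1))
    # Critic update: minimize the TD error of Eq. (10).
    y = r + gamma * target_critic([s_next, target_actor(s_next)])
    with tf.GradientTape() as tape:
        critic_loss = tf.reduce_mean(tf.square(y - critic([s, a])))
    grads = tape.gradient(critic_loss, critic.trainable_variables)
    critic_opt.apply_gradients(zip(grads, critic.trainable_variables))
    # Actor update: ascend the sampled policy gradient of Eq. (11).
    with tf.GradientTape() as tape:
        actor_loss = -tf.reduce_mean(critic([s, actor(s)]))
    grads = tape.gradient(actor_loss, actor.trainable_variables)
    actor_opt.apply_gradients(zip(grads, actor.trainable_variables))
    # Target networks follow slowly, which stabilizes the training.
    soft_update(target_critic, critic)
    soft_update(target_actor, actor)
```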
V DRL for multi-agent power control algorithm
In this section, we exploit DRL to design a multi-agent power control algorithm for the APs in the HetNet. In the following, we will first introduce the algorithm framework and then elaborate the algorithm design.
V-A Algorithm framework
It is known that the optimal transmit power of the APs is highly related to the instantaneous global CSI. However, the instantaneous global CSI is unavailable to the APs. Thus, it is impossible for each AP to optimize its transmit power through conventional power control algorithms, e.g., the WMMSE algorithm or the FP algorithm. According to [13] and [14], the historical wireless data (e.g., global CSI, transmit power of the APs, the mutual interference, and the achieved sum-rates) of the whole network contain useful information that can be utilized to optimize the transmit power of the APs. Thus, we aim to leverage DRL to develop an intelligent power control algorithm that can fully utilize the historical wireless data of the whole network. From the perspective of practical implementation, the intelligent power control algorithm should have two basic functionalities:
• Functionality I: Each AP can use the algorithm to complete the optimization of its transmit power at the beginning of each time slot, in order to guarantee the timeliness of the optimization.
• Functionality II: Each AP can independently optimize its transmit power to enhance the sum-rate with only local observations, in the absence of global CSI and any cooperation among APs.

To realize Functionality I, we adopt a centralized-training-distributed-execution architecture as the basic algorithm framework, as shown in Fig. 2. To be specific, a local DNN is established at each AP, and the input and the output of each local DNN are the local state information and the adopted local action, i.e., the transmit power of the corresponding AP, respectively. In this way, each AP can feed local observations into its local DNN to calculate the transmit power in a real-time fashion. The weight vector of each local DNN is trained in the core network, which has abundant historical wireless data of the whole network. Since there are $N$ APs, we denote the corresponding local DNNs as $\mu_n(s_n; \boldsymbol{\theta}_n^{L})$ ($n \in \mathcal{N}$), in which $\boldsymbol{\theta}_n^{L}$ is the weight vector of the $n$-th local DNN and $s_n$ is the local state of AP $n$.
To realize Functionality II, we develop a MASC training method based on DDPG to update the weight vectors of the local DNNs. To be specific, as shown in Fig. 2, $N$ actor DNNs together with $N$ target actor DNNs are established in the core network and associated with the $N$ local DNNs, respectively. Each actor DNN has the same structure as the associated local DNN, such that the trained weight vector of the actor DNN can be used to update the associated local DNN. Meanwhile, a shared critic DNN together with the corresponding target critic DNN is established in the core network to guide the training of the actor DNNs. The inputs of the critic DNN are the global state information and the adopted global action, i.e., the transmit power of each AP, and the output of the critic DNN is the long-term sum-rate of the global state-action pair. It should be noted that there are two main benefits of including the global state and global action in the input of the critic DNN. First, by doing so, the non-stationarity that the critic DNN faces is caused only by the time-varying nature of wireless channels. This is the major difference from the case in which a single-agent algorithm is directly applied to solve a multi-agent problem and the non-stationarity is caused by both the time-varying nature of wireless channels and the unknown actions of other agents. The adverse impact of the non-stationary radio environment on the system performance can thus be reduced. Second, by doing so, we can train the critic DNN with historical global state-action pairs together with the achieved sum-rates in the core network, such that the critic DNN has a global view of the relationship between the global state-action pairs and the long-term sum-rate. Then, the critic DNN can evaluate whether the output of an actor DNN is good or not in terms of the long-term sum-rate. By training each actor DNN with the evaluation of the critic DNN, the weight vector of each actor DNN can be updated towards the global optimum. To this end, the weight vector of each local DNN is periodically replaced by that of the associated actor DNN until convergence.
We denote the $N$ actor DNNs as $\mu_n(s_n; \boldsymbol{\theta}_n^{\mu})$ ($n \in \mathcal{N}$), where $\boldsymbol{\theta}_n^{\mu}$ is the weight vector of actor DNN $n$. Accordingly, we denote the $N$ target actor DNNs as $\mu_n(s_n; \boldsymbol{\theta}_n^{\mu'})$ ($n \in \mathcal{N}$), where $\boldsymbol{\theta}_n^{\mu'}$ is the corresponding weight vector. Then, we denote the critic DNN and the target critic DNN as $Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q})$ and $Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q'})$, respectively, where $s^{g}$ is referred to as the global state of the whole network, including all the local states $s_n$ ($n \in \mathcal{N}$) and additional global information of the whole network, $a^{g}$ is referred to as the global action of the whole network, including all the actions $a_n$ ($n \in \mathcal{N}$), and $\boldsymbol{\theta}^{Q}$ and $\boldsymbol{\theta}^{Q'}$ are the corresponding weight vectors.
Next, we detail the experience accumulation procedure followed by the MASC training method.
V-A1 Experience accumulation
At the beginning of time slot $t$, AP $n$ ($n \in \mathcal{N}$) observes a local state $s_n(t)$ and optimizes its action (i.e., transmit power) as $a_n(t) = \mu_n(s_n(t); \boldsymbol{\theta}_n^{L}) + \Delta a$, where the action noise $\Delta a$ is a random variable that guarantees continuous exploration of the action space around the output of the local DNN. By executing the action (i.e., transmit power) within this time slot, each AP obtains a reward (i.e., transmission rate) $r_n(t)$ at the end of time slot $t$. At the beginning of the next time slot, AP $n$ ($n \in \mathcal{N}$) observes a new local state $s_n(t+1)$. To this end, AP $n$ ($n \in \mathcal{N}$) obtains a local experience $e_n(t) = \big(s_n(t), a_n(t), r_n(t), s_n(t+1)\big)$ and meanwhile uploads it to the core network via a bi-directional backhaul link with a delay of $T_d$ time slots. Upon receiving $e_n(t)$ ($n \in \mathcal{N}$), a global experience is constructed as $\big(s^{g}(t), a^{g}(t), r^{g}(t), s^{g}(t+1)\big)$, where $r^{g}(t) = \sum_{n \in \mathcal{N}} r_n(t)$ is the global reward of the whole network, and each global experience is stored in the experience replay buffer, which has a capacity of $B$ global experiences and works in an FIFO fashion. By repeating this procedure, the experience replay buffer continuously accumulates new global experiences.
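A minimal sketch of how the core network could assemble the uploaded local experiences into one global experience and store it in a FIFO replay buffer. The field names and the choice to build the global state by concatenating the local states with the flattened channel-gain matrix are illustrative assumptions consistent with the description above, not the paper's exact data structures.

```python
import collections
import random
import numpy as np

GlobalExperience = collections.namedtuple(
    "GlobalExperience", ["state", "action", "reward", "next_state"])

class ReplayBuffer:
    """FIFO experience replay buffer with fixed capacity B."""
    def __init__(self, capacity):
        self.buffer = collections.deque(maxlen=capacity)
    def add(self, experience):
        self.buffer.append(experience)
    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

def build_global_experience(local_experiences, gain_matrix, next_gain_matrix):
    """Combine per-AP tuples (s_n, a_n, r_n, s_n') into one global experience.

    The global state concatenates all local states with the flattened channel
    gain matrix of the whole network; the global reward is the sum of the
    per-AP rewards, i.e., the sum-rate.
    """
    s = np.concatenate([e[0] for e in local_experiences] + [gain_matrix.ravel()])
    a = np.array([e[1] for e in local_experiences], dtype=np.float32)
    r = float(sum(e[2] for e in local_experiences))
    s_next = np.concatenate([e[3] for e in local_experiences] + [next_gain_matrix.ravel()])
    return GlobalExperience(s, a, r, s_next)
```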
V-A2 MASC training method
To train the critic DNN, a mini-batch of $D$ global experiences is sampled from the experience replay buffer in each time slot. Then, the SGD method can be adopted to minimize the expected prediction error (loss function) of the sampled experiences, i.e.,

$$L(\boldsymbol{\theta}^{Q}) = \mathbb{E}\Big[\big(y(t) - Q(s^{g}(t), a^{g}(t); \boldsymbol{\theta}^{Q})\big)^2\Big], \qquad (12)$$

where the target value $y(t)$ can be calculated by

$$y(t) = r^{g}(t) + \gamma\, Q\big(s^{g}(t+1), a^{g}(t+1); \boldsymbol{\theta}^{Q'}\big), \quad a^{g}(t+1) = \big(\mu_1(s_1(t+1); \boldsymbol{\theta}_1^{\mu'}), \ldots, \mu_N(s_N(t+1); \boldsymbol{\theta}_N^{\mu'})\big). \qquad (13)$$

Then, a soft update method is adopted to update the weight vector of the target critic DNN, i.e.,

$$\boldsymbol{\theta}^{Q'} \leftarrow \tau_c\, \boldsymbol{\theta}^{Q} + (1-\tau_c)\, \boldsymbol{\theta}^{Q'}, \qquad (14)$$

where $\tau_c$ is the learning rate of the target critic DNN.
Since each AP aims to optimize the sum-rate of the whole network, the training of actor DNN $\mu_n(s_n; \boldsymbol{\theta}_n^{\mu})$ ($n \in \mathcal{N}$) can be designed to maximize the expected long-term global reward $J(\boldsymbol{\theta}_n^{\mu})$, which is defined as the expectation of $Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q})$ with respect to the global state $s^{g}$, i.e.,

$$J(\boldsymbol{\theta}_n^{\mu}) = \mathbb{E}_{s^{g}}\Big[Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q})\big|_{a_n = \mu_n(s_n; \boldsymbol{\theta}_n^{\mu})}\Big]. \qquad (15)$$

By taking the partial derivative of $J(\boldsymbol{\theta}_n^{\mu})$ with respect to $\boldsymbol{\theta}_n^{\mu}$, we have

$$\nabla_{\boldsymbol{\theta}_n^{\mu}} J \approx \mathbb{E}\Big[\nabla_{a_n} Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q})\big|_{a_n = \mu_n(s_n; \boldsymbol{\theta}_n^{\mu})}\, \nabla_{\boldsymbol{\theta}_n^{\mu}} \mu_n(s_n; \boldsymbol{\theta}_n^{\mu})\Big]. \qquad (16)$$

Then, $\boldsymbol{\theta}_n^{\mu}$ can be updated in the direction of $\nabla_{\boldsymbol{\theta}_n^{\mu}} J$, which is the direction that is most likely to increase $J(\boldsymbol{\theta}_n^{\mu})$.
Similar to the target critic DNN, a soft update method is adopted to adjust the weight vector of the target actor DNN, i.e.,

$$\boldsymbol{\theta}_n^{\mu'} \leftarrow \tau_a\, \boldsymbol{\theta}_n^{\mu} + (1-\tau_a)\, \boldsymbol{\theta}_n^{\mu'}, \qquad (17)$$

where $\tau_a$ is the learning rate of the corresponding target actor DNN.
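Putting Eqs. (12)-(17) together, the sketch below shows one MASC training step with a shared critic and per-AP actors in TensorFlow/Keras. The way the global state is split into 7-element local states, the substitution of the other APs' actions from the sampled global action, and the learning rates are assumptions made for illustration.

```python
import tensorflow as tf

LOCAL_STATE_DIM = 7      # each local state s_n has seven elements, see Eq. (18)

def masc_train_step(actors, target_actors, critic, target_critic,
                    actor_opts, critic_opt, batch, gamma=0.5, tau=0.005):
    s_g, a_g, r_g, s_g_next = batch                          # mini-batch of global experiences
    s_g, a_g, s_g_next = (tf.cast(x, tf.float32) for x in (s_g, a_g, s_g_next))
    r_g = tf.reshape(tf.cast(r_g, tf.float32), (-1, 1))
    num_aps = len(actors)
    # Assumption: the first 7N entries of the global state are the stacked local states.
    local = lambda s, n: s[:, n * LOCAL_STATE_DIM:(n + 1) * LOCAL_STATE_DIM]

    # Critic update: the target of Eq. (13) uses the target actors' next actions.
    a_next = tf.concat([target_actors[n](local(s_g_next, n)) for n in range(num_aps)], axis=1)
    y = r_g + gamma * target_critic([s_g_next, a_next])
    with tf.GradientTape() as tape:
        critic_loss = tf.reduce_mean(tf.square(y - critic([s_g, a_g])))   # Eq. (12)
    grads = tape.gradient(critic_loss, critic.trainable_variables)
    critic_opt.apply_gradients(zip(grads, critic.trainable_variables))

    # Actor updates: each actor ascends the shared critic's evaluation, Eq. (16),
    # while the other APs' actions are taken from the sampled global action.
    for n in range(num_aps):
        with tf.GradientTape() as tape:
            a_n = actors[n](local(s_g, n))
            a_mix = tf.concat([a_g[:, :n], a_n, a_g[:, n + 1:]], axis=1)
            actor_loss = -tf.reduce_mean(critic([s_g, a_mix]))
        grads = tape.gradient(actor_loss, actors[n].trainable_variables)
        actor_opts[n].apply_gradients(zip(grads, actors[n].trainable_variables))

    # Soft target updates of Eqs. (14) and (17).
    for target, source in [(target_critic, critic)] + list(zip(target_actors, actors)):
        for t_var, s_var in zip(target.variables, source.variables):
            t_var.assign(tau * s_var + (1.0 - tau) * t_var)
```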
V-B Algorithm designs
In this part, we first design the experience and the structure of each actor DNN and local DNN. Then, we design the global experience and the structure of the critic DNN. Finally, we elaborate on the algorithm, followed by some related discussions.

V-B1 Experience of actor DNNs
The local information at AP $n$ can be divided into historical local information from the previous time slot and instantaneous local information at the beginning of the current time slot. The historical local information includes the channel gain between AP $n$ and UE $n$, the transmit power of AP $n$, the sum-interference from APs $m$ ($m \ne n$), the received SINR, and the corresponding transmission rate. The instantaneous local information includes the channel gain between AP $n$ and UE $n$ and the sum-interference from APs $m$ ($m \ne n$). It should be noted that the sum-interference from APs $m$ ($m \ne n$) at the beginning of the current time slot is generated as follows: at the beginning of the current time slot, the new transmit power of each AP has not yet been determined, and each AP still uses the transmit power of the previous time slot, although the CSI of the whole network has already changed. Thus, the state $s_n(t)$ in time slot $t$ is designed as

$$s_n(t) = \big\{ g_{n,n}(t-1),\ p_n(t-1),\ I_n(t-1),\ \mathrm{SINR}_n(t-1),\ C_n(t-1),\ g_{n,n}(t),\ I_n(t) \big\}, \qquad (18)$$

where $I_n(\cdot)$ denotes the sum-interference from APs $m$ ($m \ne n$) measured at UE $n$, and $I_n(t)$ is measured at the beginning of time slot $t$ while each AP still uses the transmit power of the previous time slot. Besides, the action $a_n(t)$ and the reward $r_n(t)$ can be respectively designed as the transmit power $p_n(t)$ and the corresponding achievable rate $C_n(t)$ calculated by (4). Consequently, the experience can be constructed as

$$e_n(t) = \big(s_n(t),\ a_n(t),\ r_n(t),\ s_n(t+1)\big). \qquad (19)$$
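The sketch below shows how an AP could assemble the seven-element local state of Eq. (18) and the local experience of Eq. (19) from its local measurements; the argument names are illustrative.

```python
import numpy as np

def build_local_state(prev_gain, prev_power, prev_interf, prev_sinr, prev_rate,
                      curr_gain, curr_interf):
    """Seven-element local state s_n(t) of Eq. (18).

    Historical part: channel gain, transmit power, sum-interference, SINR, and
    rate of the previous time slot. Instantaneous part: the current channel gain
    and the sum-interference measured while all APs still use their previous powers.
    """
    return np.array([prev_gain, prev_power, prev_interf, prev_sinr, prev_rate,
                     curr_gain, curr_interf], dtype=np.float32)

def build_local_experience(state, power, rate, next_state):
    """Local experience e_n(t) = (s_n(t), a_n(t), r_n(t), s_n(t+1)) of Eq. (19)."""
    return (state, np.float32(power), np.float32(rate), next_state)
```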
V-B2 Structure of local/actor DNNs
The designed structure of the local/actor DNN includes five fully-connected layers, as illustrated in Fig. 3-(A). In particular, the first layer is the input layer for $s_n(t)$ and has seven neurons corresponding to the seven elements of $s_n(t)$. The second layer and the third layer are hidden layers, whose neuron numbers used in the simulations are given in Sec. VI. The fourth layer has one neuron with the sigmoid activation function, which outputs a value between zero and one. The fifth layer has one neuron, which linearly scales the value from the fourth layer to a value between zero and $P_n^{\max}$. With this structure, each local/actor DNN takes the local state as input and outputs a transmit power satisfying the maximum transmit power constraint.
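A possible Keras realization of the five-layer local/actor DNN described above: a seven-neuron input layer, two hidden layers (the sizes used here are placeholders, since the paper's values are given in the simulation tables), a one-neuron sigmoid layer, and a final linear scaling to the interval $[0, P_n^{\max}]$.

```python
import tensorflow as tf

def build_actor(p_max, hidden1=128, hidden2=64):
    """Five-layer local/actor DNN mapping s_n(t) to a transmit power in [0, p_max].

    hidden1/hidden2 are placeholder sizes; the sigmoid output is scaled linearly
    so that the power always satisfies the per-AP maximum power constraint.
    """
    state = tf.keras.Input(shape=(7,))                        # seven local-state elements
    x = tf.keras.layers.Dense(hidden1, activation="relu")(state)
    x = tf.keras.layers.Dense(hidden2, activation="relu")(x)
    x = tf.keras.layers.Dense(1, activation="sigmoid")(x)     # value between zero and one
    power = tf.keras.layers.Lambda(lambda v: p_max * v)(x)    # scale to (0, p_max)
    return tf.keras.Model(inputs=state, outputs=power)

# Example: an AP with a 1 W (30 dBm) maximum transmit power (illustrative value).
actor = build_actor(p_max=1.0)
```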
V-B3 Global experience
By considering the delay of $T_d$ time slots for the core network to obtain the local information of the APs, we design the global experience available in time slot $t$ as

$$E(t) = \big(s^{g}(t-T_d),\ a^{g}(t-T_d),\ r^{g}(t-T_d),\ s^{g}(t-T_d+1)\big). \qquad (20)$$

In particular, $s_n(t-T_d)$, $a_n(t-T_d)$, $r_n(t-T_d)$, and $s_n(t-T_d+1)$ ($n \in \mathcal{N}$) can be directly obtained from $e_n(t-T_d)$ ($n \in \mathcal{N}$), and $r^{g}(t-T_d)$ can be directly calculated from the local rewards in $e_n(t-T_d)$ ($n \in \mathcal{N}$). Here, we construct $s^{g}(t-T_d)$ and $s^{g}(t-T_d+1)$ by augmenting the local states with the channel gain matrices of the whole network in time slot $t-T_d$ and time slot $t-T_d+1$, respectively. Since the direct channel gains $g_{n,n}$ ($n \in \mathcal{N}$) are already available in the local states, we focus on the derivation of the interference channel gains $g_{m,n}$ ($m \ne n$). Here, we take the derivation of $g_{m,n}(t)$ ($m \ne n$) as an example. At the beginning of time slot $t$, the APs transmit orthogonal pilots to the corresponding UEs for channel and interference estimation, and UE $n$ can locally measure the received power from AP $m$ ($m \ne n$). Then, UE $n$ delivers this auxiliary information together with the local state $s_n(t)$ to local DNN $n$ at the beginning of time slot $t$. Then, by simultaneously collecting the local experiences and the auxiliary information from local DNN $n$ ($n \in \mathcal{N}$), the core network can calculate each interference channel gain $g_{m,n}(t)$ ($m \ne n$) from the transmit power of AP $m$ contained in the local experiences and the corresponding received power contained in the auxiliary information. In this way, the interference channel gains $g_{m,n}(t)$ ($m \ne n$) can be obtained to construct the channel gain matrix of time slot $t$. Following a similar procedure, the core network can construct the channel gain matrix of the whole network in each time slot.
V-B4 Structure of the critic DNN
The designed structure of the critic DNN is illustrated in Fig. 3-(B) and includes three modules, i.e., a state module, an action module, and a mixed state-action module. Each module contains several fully-connected layers. The state module has three fully-connected layers. Its first layer is the input layer for the global state $s^{g}$; since each $s_n$ ($n \in \mathcal{N}$) has seven elements, the number of neurons in this layer equals the dimension of $s^{g}$, i.e., the $7N$ local-state elements plus the elements of the channel gain matrices. The second and third layers of the state module are hidden layers. The action module has two fully-connected layers. Its first layer is the input layer for the global action $a^{g}$; since each $a_n$ ($n \in \mathcal{N}$) is a one-dimensional scalar, this layer has $N$ neurons. The second layer of the action module is a hidden layer. The mixed state-action module has three fully-connected layers. Its first layer is formed by concatenating the last layers of the state module and the action module, and thus has as many neurons as those two layers combined. The second layer of the mixed state-action module is a hidden layer, and the third layer has one neuron, which outputs the long-term reward $Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q})$. The hidden-layer sizes used in the simulations are given in Sec. VI.
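A possible Keras realization of the critic's three-module structure: a state module, an action module, and a mixed module whose first layer concatenates the last layers of the other two. The hidden-layer sizes are placeholders, while the input dimensions follow the global state/action definitions above and the activations follow the layer-wise pattern listed in the simulation tables.

```python
import tensorflow as tf

def build_critic(global_state_dim, num_aps, h_state=256, h_joint=128):
    """Critic Q(s^g, a^g) with state, action, and mixed state-action modules."""
    # State module: input layer plus two hidden layers.
    s_in = tf.keras.Input(shape=(global_state_dim,))
    s = tf.keras.layers.Dense(h_state, activation="relu")(s_in)
    s = tf.keras.layers.Dense(h_joint)(s)                       # linear hidden layer
    # Action module: an input layer with one neuron per AP and one hidden layer.
    a_in = tf.keras.Input(shape=(num_aps,))
    a = tf.keras.layers.Dense(h_joint)(a_in)                    # linear hidden layer
    # Mixed module: concatenate the two modules, then one hidden layer and a
    # single output neuron giving the long-term reward.
    x = tf.keras.layers.Concatenate()([s, a])
    x = tf.keras.layers.Dense(h_joint, activation="relu")(x)
    q = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model(inputs=[s_in, a_in], outputs=q)
```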
V-B5 Proposed algorithm
The proposed algorithm is illustrated in Algorithm 1, which includes three stages, i.e., Initializations, Random experience accumulation, and Repeat.
In the stage of Initializations, the $N$ local DNNs, $N$ actor DNNs, $N$ target actor DNNs, the critic DNN, and the target critic DNN need to be properly constructed and initialized. In particular, local DNN $\mu_n(s_n; \boldsymbol{\theta}_n^{L})$ ($n \in \mathcal{N}$) is established at AP $n$ by adopting the structure in Fig. 3-(A), actor DNN $\mu_n(s_n; \boldsymbol{\theta}_n^{\mu})$ and the corresponding target actor DNN $\mu_n(s_n; \boldsymbol{\theta}_n^{\mu'})$ are established in the core network and associated with local DNN $n$ by adopting the structure in Fig. 3-(A), and the critic DNN $Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q})$ and the corresponding target critic DNN $Q(s^{g}, a^{g}; \boldsymbol{\theta}^{Q'})$ are established in the core network by adopting the structure in Fig. 3-(B). Then, $\boldsymbol{\theta}_n^{L}$, $\boldsymbol{\theta}_n^{\mu}$, and $\boldsymbol{\theta}^{Q}$ are randomly initialized, and $\boldsymbol{\theta}_n^{\mu'}$ and $\boldsymbol{\theta}^{Q'}$ are initialized with $\boldsymbol{\theta}_n^{\mu}$ and $\boldsymbol{\theta}^{Q}$, respectively.

In the stage of Random experience accumulation, UE $n$ ($n \in \mathcal{N}$) observes the local state $s_n(t)$ and the auxiliary information, and transmits them to AP $n$ at the beginning of time slot $t$. AP $n$ chooses a random action (i.e., transmit power) and meanwhile uploads the local experience $e_n(t)$ and the auxiliary information to the core network through the bi-directional backhaul link with a delay of $T_d$ time slots. After collecting all the local experiences and the corresponding auxiliary information from the $N$ APs, the core network constructs a global experience and stores it into the experience replay buffer with capacity $B$. As illustrated in Fig. 4, once sufficient global experiences have been accumulated in the experience replay buffer, the core network begins to sample a mini-batch of $D$ global experiences from the buffer in each time slot to train the critic DNN, the target critic DNN, the actor DNNs, and the target actor DNNs, i.e., it updates $\boldsymbol{\theta}^{Q}$ to minimize (12), updates $\boldsymbol{\theta}^{Q'}$ with (14), updates $\boldsymbol{\theta}_n^{\mu}$ with (16), and updates $\boldsymbol{\theta}_n^{\mu'}$ with (17). After the training has started, every $T_u$ time slots the core network transmits the latest weight vector $\boldsymbol{\theta}_n^{\mu}$ to AP $n$ through the bi-directional backhaul link with a delay of $T_d$ time slots. AP $n$ then uses the received $\boldsymbol{\theta}_n^{\mu}$ to replace the weight vector $\boldsymbol{\theta}_n^{L}$ of its local DNN.
In the stage of Repeat, UE $n$ ($n \in \mathcal{N}$) observes the local state $s_n(t)$ and the auxiliary information, and transmits them to AP $n$ at the beginning of time slot $t$. AP $n$ sets its transmit power to $\mu_n(s_n(t); \boldsymbol{\theta}_n^{L}) + \Delta a$ and meanwhile uploads the local experience $e_n(t)$ and the auxiliary information to the core network through the bi-directional backhaul link with a delay of $T_d$ time slots. After collecting all the local experiences and the corresponding auxiliary information from the $N$ APs, the core network constructs a global experience and stores it into the experience replay buffer. Then, a mini-batch of $D$ global experiences is sampled from the experience replay buffer to train the critic DNN, the target critic DNN, the actor DNNs, and the target actor DNNs. Every $T_u$ time slots, AP $n$ receives the latest $\boldsymbol{\theta}_n^{\mu}$ and uses it to replace the weight vector $\boldsymbol{\theta}_n^{L}$.
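The three stages of Algorithm 1 can be condensed into the loop skeleton below, which reuses the helper sketches given earlier. The environment object, the core-network trainer, the stage lengths, and the update period are hypothetical stand-ins, and the backhaul delay $T_d$ is omitted for brevity.

```python
import numpy as np

def run_training(env, local_actors, buffer, core_network,
                 num_slots, random_slots=500, batch_size=128, update_period=50):
    """Skeleton of Algorithm 1: random accumulation, then train and repeat.

    `env`, `core_network`, and the attribute/method names are illustrative
    stand-ins for the radio environment and the centralized trainer.
    """
    states = env.reset()
    for t in range(num_slots):
        if t < random_slots:             # Stage 2: random experience accumulation
            powers = [np.random.uniform(0.0, ap.p_max) for ap in local_actors]
        else:                            # Stage 3: local DNN output plus exploration noise
            powers = [ap.act(s) + np.random.normal(0.0, ap.noise_std)
                      for ap, s in zip(local_actors, states)]
        next_states, rewards, aux_info = env.step(powers)
        buffer.add(core_network.build_global_experience(
            states, powers, rewards, next_states, aux_info))
        if len(buffer.buffer) >= batch_size:
            core_network.train(buffer.sample(batch_size))       # MASC step, Eqs. (12)-(17)
        if t >= random_slots and t % update_period == 0:
            core_network.push_actor_weights(local_actors)       # theta_n^L <- theta_n^mu
        states = next_states
```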
V-B6 Discussions on the computational complexity
The computational complexity at each AP is dominated by the forward calculation of its local DNN, and is therefore determined by the number of neurons in that DNN. The computational complexity at the core network is dominated by the training of the critic DNN and the $N$ actor DNNs. In particular, the critic DNN is trained first, and then the $N$ actor DNNs can be trained simultaneously. In the simulations with the designed DNNs, we show that the average time needed to calculate the transmit power at each AP is on the order of milliseconds, which is much less than that of the WMMSE and FP algorithms, and that the average times needed to train the critic DNN and an actor DNN are also on the order of milliseconds. It should be pointed out that the computational capability of the nodes in practical networks is much stronger than that of the computer used in our simulations. Thus, the average time needed to train a DNN and to calculate the transmit power with a DNN can be further reduced in practical networks.
V-B7 Discussions on the overheads
Note that the proposed architecture involves two kinds of overhead: the local information uploaded from each AP to the core network, and the DNN weight vectors sent from the core network to each AP. The algorithm in [14] requires three kinds of overhead: the local information from each AP to the core network, the DNN weight vectors from the core network to each AP, and the local information exchanged among neighboring APs. This difference means that the proposed algorithm does not need cooperation among APs for power control and is easier than the algorithm in [14] to implement in practical situations. Besides, as the number of APs increases, the number of actor DNNs increases and the resulting overhead may affect the scalability of the proposed architecture. Nevertheless, increasing the number of APs may also enhance the spectrum utilization efficiency. Therefore, the number of APs needs to be properly chosen to balance the overhead and the spectrum utilization efficiency in practical situations.
V-B8 Discussions on the Implementations
It is worth noting that the proposed algorithm takes three engineering aspects into account to facilitate implementation. First, the core network initially does not have any data for learning in practical situations. Therefore, the proposed algorithm allows each AP to select its transmit power randomly to interact with the environment and accumulate useful data. Second, the information exchange between the APs and the core network typically takes some time in practice. Therefore, the proposed algorithm takes the corresponding transmission latency into consideration, and simulation results demonstrate the robustness of the proposed algorithm to this latency. Third, it is demanding for the core network to transmit the weight vectors of the actor DNNs to the APs for updating the corresponding local DNNs in every time slot. Therefore, the proposed algorithm lets the core network transmit the weight vectors periodically, and simulation results show that this design does not affect the long-term performance of the proposed algorithm.
VI Simulation results
In this section, we provide simulation results to evaluate the performance of the proposed algorithm. For comparison, we consider four benchmark algorithms, namely, the WMMSE algorithm, the FP algorithm, the full-power algorithm, and the random-power algorithm. In particular, the maximum transmit power is used to initialize the WMMSE algorithm and the FP algorithm, which stop iterating if the difference of the sum-rates per link between two successive iterations is smaller than a pre-defined threshold or if the number of iterations exceeds a pre-defined maximum. In the following, we first provide the simulation settings and then demonstrate the performance of the proposed algorithm as well as the four benchmark algorithms. We implement the proposed algorithm with the open-source Keras library on top of TensorFlow, on a computer with an Intel Core i5-8250U CPU.
VI-A Simulation settings
| Layers | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Neuron number | | | | | |
| Activation function | Linear | ReLU | ReLU | Sigmoid | Linear |
| Action noise | Zero-mean normal distribution | | | | |
| Layers | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Neuron number | | | | | |
| Activation function | Linear | ReLU | ReLU | Sigmoid | Linear |
| Optimizer | Adam optimizer | | | | |
| Mini-batch size | | | | | |
| Learning rate | | | | | |
| Layers | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| Neuron number | | | | | | | |
| Activation function | Linear | ReLU | Linear | Linear | Linear | ReLU | Linear |
| Optimizer | Adam optimizer | | | | | | |
| Mini-batch size | | | | | | | |
| Learning rate | | | | | | | |
| Discount factor | | | | | | | |
To begin with, we provide the hyperparameters of the DNNs in Table I, Table II, and Table III, which are determined by cross-validation [13], [14]. Note that the adopted hyperparameters may not be optimal. Since the proposed algorithm performs well with these hyperparameters, we use them to demonstrate the achievable performance rather than the optimal performance of the proposed algorithm. Besides, we pre-process the local/global state information of the proposed algorithm to reduce its variance with the following procedure: we first use the noise power to normalize each channel gain, and then use a variance-reducing mapping function to process the data related to the transmit power, channel gain, interference, and SINR.
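A sketch of this pre-processing step is given below: channel gains are normalized by the noise power and, as an assumption on our part since the exact mapping function is not reproduced here, the power-, gain-, interference-, and SINR-related quantities are compressed with a logarithmic mapping to reduce their variance.

```python
import numpy as np

def preprocess(value, noise_power=None, eps=1e-12):
    """Normalize by the noise power (if given) and compress with log10.

    The logarithmic mapping is an illustrative choice for the unspecified
    variance-reducing function; it keeps quantities that span many orders of
    magnitude (powers, gains, interference, SINR) on a comparable scale.
    """
    value = np.asarray(value, dtype=np.float64)
    if noise_power is not None:
        value = value / noise_power
    return np.log10(np.maximum(value, eps))
```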
In the simulations, we consider both a two-layer HetNet scenario and a three-layer HetNet scenario:
• Two-layer HetNet scenario: In this scenario, there are five APs deployed at fixed locations and divided between a first layer and a second layer. Each AP has a disc-shaped service coverage defined by a minimum distance and a maximum distance from the AP to the served UE. The minimum distance is the same for all APs, while the maximum distance and the maximum transmit power of the AP in the first layer differ from those of each AP in the second layer. The UE served by an AP is randomly located within the service coverage of the AP.
• Three-layer HetNet scenario: In this scenario, there are nine APs deployed at fixed locations and divided among a first, a second, and a third layer. The minimum distance is the same for all APs, while the maximum distance and the maximum transmit power differ among the APs in the first, second, and third layers. The UE served by an AP is randomly located within the service coverage of the AP.
Furthermore, the transmission bandwidth $W$ (in MHz), the path-loss model (in dB, as a function of the transmitter-receiver distance in kilometers [40]), the log-normal shadowing standard deviation (in dB), the noise power at each UE (in dBm), the delay $T_d$ (in time slots) of the data transmission between the core network and each AP, the period $T_u$ (in time slots) for updating the weight vector of each local DNN, and the capacity $B$ of the experience replay buffer are fixed throughout the simulations.

VI-B Performance comparison and analysis
In this part, we provide the performance comparison and analysis of the proposed algorithm and the four benchmark algorithms in the two simulation scenarios. In particular, the simulation of the proposed algorithm has two stages, i.e., a training stage and a testing stage. In the training stage, the DNNs are trained with the proposed Algorithm 1 during an initial block of time slots. In the testing stage, the well-trained DNNs are used to optimize the transmit power of each AP in the subsequent time slots. Each curve is the average of ten trials, in which the location of the UE served by each AP is randomly generated within the service coverage of the AP.

Fig. 6 provides the average sum-rate performance of the proposed algorithm and the four benchmark algorithms in the two-layer HetNet scenario in the training stage. The channel correlation factor $\rho$ is set to zero, i.e., IID channels. In the figure, the average sum-rate of the proposed algorithm is the same as that of the random-power algorithm at the beginning of the data transmissions, since the proposed algorithm has to choose the transmit power of each AP randomly to accumulate experiences. Then, the average sum-rate of the proposed algorithm increases rapidly, exceeds the sum-rates achieved by the WMMSE and FP algorithms, and finally converges. This can be explained as follows. On the one hand, both the WMMSE algorithm and the FP algorithm can only output sub-optimal solutions of the power allocation problem in each single time slot, meaning that the average sum-rate performance of both algorithms is also sub-optimal. On the other hand, the proposed algorithm continuously explores different power control strategies and accumulates global experiences. By learning from these global experiences, the critic DNN obtains a global view of the impact of different power control strategies on the sum-rate. Then, the critic DNN can guide each actor DNN (and hence each local DNN) to update its weight vector towards the global optimum. Thus, it is reasonable that the proposed algorithm outperforms both the WMMSE algorithm and the FP algorithm in terms of the average sum-rate.

Fig. 7 provides the corresponding sum-rate performance of the proposed algorithm in the two-layer HetNet in the testing stage. The figure demonstrates the effectiveness of the proposed algorithm in the two-layer HetNet. It should be noted that the sum-rate of the proposed algorithm in the testing stage is also higher than that in the training stage. This is because, in the training stage, the proposed algorithm needs to continuously explore the transmit power allocation policy and train the DNNs until convergence, and this exploration may degrade the sum-rate performance even after the algorithm has converged. In contrast, by fully exploiting the well-trained DNNs to optimize the transmit power, the proposed algorithm can further enhance the sum-rate performance in the testing stage.


Fig. 8 provides the average sum-rate performance of the proposed algorithm and the four benchmark algorithms in the three-layer HetNet scenario in the training stage. The channel correlation factor $\rho$ is set to zero, i.e., IID channels. In the figure, the average sum-rate of the proposed algorithm also increases rapidly, exceeds the average sum-rates achieved by the WMMSE and FP algorithms, and then converges. This phenomenon can be explained in a similar way to that in Fig. 6. In addition, Fig. 9 provides the corresponding sum-rate performance of the proposed algorithm in the three-layer HetNet in the testing stage. From the figure, we observe a phenomenon similar to that in Fig. 7.

Fig. 10 provides the sum-rate performance of the proposed algorithm and the four benchmark algorithms in the two-layer HetNet scenario with a random channel correlation factor $\rho$. In Fig. 10-(a), the average sum-rate of the proposed algorithm in the training stage increases rapidly, exceeds the average sum-rates achieved by the WMMSE and FP algorithms, and then converges. Meanwhile, in Fig. 10-(b), the sum-rate of the proposed algorithm in the testing stage is generally higher than those of the benchmark algorithms. This demonstrates that the proposed algorithm outperforms the benchmark algorithms in terms of sum-rate in the two-layer HetNet scenario even with a random $\rho$.

Fig. 11 provides the sum-rate performance of the proposed algorithm and the four benchmark algorithms in the three-layer HetNet scenario with a random channel correlation factor $\rho$. In Fig. 11-(a), the average sum-rate of the proposed algorithm in the training stage increases rapidly and converges to the average sum-rates achieved by the WMMSE and FP algorithms. Meanwhile, in Fig. 11-(b), the sum-rate of the proposed algorithm in the testing stage is almost the same as those of the WMMSE and FP algorithms. This demonstrates the effectiveness of the proposed algorithm in the three-layer HetNet scenario even with a random channel correlation factor $\rho$. Note that the performance advantage of the proposed algorithm in the three-layer HetNet scenario diminishes compared with that in the two-layer HetNet scenario. This is because the complexity of the network topology increases exponentially as the network size scales up, and it is generally more difficult to learn to maximize the sum-rate in the three-layer HetNet scenario than in the two-layer HetNet scenario.
Table IV: Average time complexity of the different algorithms (in ms), including the proposed DNN-based computations, the WMMSE algorithm, and the FP algorithm.
Table IV shows the average time complexity of the different algorithms in the simulation. From the table, we observe that the average time needed to calculate a transmit power with a local DNN is much less than those of the WMMSE and FP algorithms. Besides, the average time needed to train the critic DNN and the actor DNNs is below ten milliseconds. In fact, the computational capability of the nodes in practical networks is much stronger than that of the computer used in our simulations. Thus, the average time needed to train a DNN and to calculate the transmit power with a DNN will be further reduced in practical networks.
VII Conclusions
In this paper, we exploited DRL to design a multi-agent power control algorithm for the HetNet. In particular, a deep neural network (DNN) was established at each AP, and a MASC method was developed to effectively train the DNNs. With the proposed algorithm, each AP can independently learn to optimize its transmit power and enhance the sum-rate with only local information. Simulation results demonstrated the superiority of the proposed algorithm over conventional power control algorithms, e.g., the WMMSE algorithm and the FP algorithm, in terms of both average sum-rate and computational complexity. In fact, the proposed algorithm framework can also be applied to other resource management problems in which instantaneous global CSI is unavailable and cooperation among users is unavailable or costly.
References
- [1] L. Zhang, Y.-C. Liang, and D. Niyato, “6G visions: Mobile ultra-broadband, super Internet-of-Things, and artificial intelligence,” China Communications, vol. 16, no. 8, pp. 1-14, Aug. 2019.
- [2] J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. K. Soong, and J. C. Zhang, “What will 5G be?” IEEE J. Select. Areas Commun., vol. 32, no. 6, pp. 1065-1082, Jun. 2014.
- [3] L. Zhang, M. Xiao, G. Wu, M. Alam, Y.-C. Liang, and S. Li, “A survey of advanced techniques for spectrum sharing in 5G networks,” IEEE Wireless Commun., vol. 24, no. 5, pp. 44-51, Oct. 2017.
- [4] M. Agiwal, A. Roy, and N. Saxena, “Next generation 5G wireless networks: A comprehensive survey,” IEEE Commun. Surv. Tutor., vol. 18, no. 3, pp. 1617-1655, Third quarter 2016.
- [5] C. Yang, J. Li, M. Guizani, A. Anpalagan, and M. Elkashlan, “Advanced spectrum sharing in 5G cognitive heterogeneous networks,” IEEE Wireless Commun., vol. 23, no. 2, pp. 94-101, Apr. 2016.
- [6] S. Singh and J. G. Andrews, “Joint resource partitioning and offloading in heterogeneous cellular networks,” IEEE Wireless Commun., vol. 13, no. 2, pp. 888-901, Feb. 2014.
- [7] Q. Shi, M. Razaviyayn, Z.-Q. Luo, and C. He, “An iteratively weighted MMSE approach to distributed sum-utility maximization for a MIMO interfering broadcast channel,” IEEE Trans. Signal Process., vol. 59, no. 9, pp. 4331-4340, Sep. 2011.
- [8] K. Shen and W. Yu, “Fractional programming for communication systems-Part I: Power control and beamforming,” IEEE Trans. Signal Process., vol. 66, no. 10, pp. 2616-2630, May 2018.
- [9] R. Q. Hu and Y. Qian, “An energy efficient and spectrum efficient wireless heterogeneous network framework for 5G systems,” IEEE Commun. Mag., vol. 52, no. 5, pp. 94-101, May 2014.
- [10] J. Huang, R. A. Berry, and M. L. Honig, “Distributed interference compensation for wireless networks,” IEEE J. Sel. Areas Commun., vol. 24, no. 5, pp. 1074-1084, May 2006.
- [11] H. Zhang, L. Venturino, N. Prasad, P. Li, S. Rangarajan, and X. Wang, “Weighted sum-rate maximization in multi-cell networks via coordinated scheduling and discrete power control,” IEEE J. Sel. Areas Commun., vol. 29, no. 6, pp. 1214-1224, Jun. 2011.
- [12] L. B. Le and E. Hossain, “Resource allocation for spectrum underlay in cognitive radio networks,” IEEE Trans. Wireless Commun., vol. 7, no. 12, pp. 5306-5315, Dec. 2008.
- [13] H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: Training deep neural networks for interference management,” IEEE Trans. Signal Process., vol. 66, no. 20, pp. 5438-5453, Oct. 2018.
- [14] Y. S. Nasir and D. Guo, “Multi-agent deep reinforcement learning for dynamic power allocation in wireless networks,” IEEE J. Sel. Areas in Commun., vol. 37, no. 10, pp. 2239-2250, Oct. 2019.
- [15] L. Xiao, H. Zhang, Y. Xiao, X. Wan, S. Liu, L.-C. Wang, and H. V. Poor, “Reinforcement learning-based downlink interference control for ultra-dense small cells,” IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 423-434, Jan. 2020.
- [16] R. Amiri, M. A. Almasi, J. G. Andrews, and H. Mehrpouyan, “Reinforcement learning for self organization and power Control of two-tier heterogeneous networks,” IEEE Trans. Wireless Commun., vol. 18, no. 8, pp. 3933-3947, Aug. 2019.
- [17] Y. Sun, G. Feng, S. Qin, Y.-C. Liang, and T.-S. P. Yum, “The SMART handoff policy for millimeter wave heterogeneous cellular networks,” IEEE Trans. Mobile Commun., vol. 17, no. 6, pp. 1456-1468, Jun. 2018.
- [18] D. D. Nguyen, H. X. Nguyen, and L. B. White, “Reinforcement learning with network-assisted feedback for heterogeneous RAT selection,” IEEE Trans. Wireless Commun., vol. 16, no. 9, pp. 6062-6076, Sep. 2017.
- [19] Y. Wei, F. R. Yu, M. Song, and Z. Han, “User scheduling and resource allocation in HetNets with hybrid energy supply: an actor-critic reinforcement learning approach,” IEEE Trans. Wireless Commun., vol. 17, no. 1, pp. 680-692, Jan. 2018.
- [20] N. Morozs, T. Clarke, and D. Grace, “Heuristically accelerated reinforcement learning for dynamic secondary spectrum sharing,” IEEE Access, vol. 3, pp. 2771-2783, 2015.
- [21] V. Raj, I. Dias. T. Tholeti, and S. Kalyani, “Spectrum access in cognitive radio using a two-stage reinforcement learning approach,” IEEE J. Sel. Topics Signal Process., vol. 12, no. 1, pp. 20-34, Feb. 2018.
- [22] O. Iacoboaiea, B. Sayrac, S. B. Jemaa, and P. Bianchi, “SON coordination in heterogeneous networks: a reinforcement learning framework,” IEEE Trans. Wireless Commun., vol. 15, no. 9, pp. 5835-5847, Sep. 2016.
- [23] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529-533, 2015.
- [24] C. Zhang, P. Patras, and H. Haddadi,“Deep Learning in Mobile and Wireless Networking: A Survey,” IEEE Commun. Surveys Tuts., vol. 21, no. 3, pp. 2224-2287, 3rd Quart., 2019.
- [25] T. V. Chien, T. N. Canh, E. Bjornson, and E. G. Larsson, “Power control in cellular massive MIMO with varying user activity: A deep learning solution,” IEEE Trans. Wireless Commun., DOI: 10.1109/TWC.2020.2996368.
- [26] R. Mennes, F. A. P. D. Figueiredo, and S. Latre, “Multi-agent deep learning for multi-channel access in slotted wireless networks,” IEEE Access, vol. 8, pp. 95032-95045, 2020.
- [27] W. Cui, K. Shen, and W. Yu, “Spatial deep learning for wireless scheduling,” IEEE J. Select. Areas Commun., vol. 37, no. 6, pp. 1248-1261, Jun. 2019.
- [28] H. Ye, G. Y. Li, and B.-H. F. Juang, “Deep reinforcement learning based resource allocation for V2V communications,” IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3163-3173, Apr. 2019.
- [29] Y. Yu, T. Wang, and S. C. Liew, “Deep-reinforcement learning multiple access for heterogeneous wireless networks,” IEEE J. Sel. Areas Commun., vol. 37, no. 6, pp. 1277-1290, Jun. 2019.
- [30] Y. He, Z. Zhang, F. R. Yu, N. Zhao, H. Yin, V. C. M. Leung, and Y. Zhang, “Deep-reinforcement-learning-based optimization for cache-enabled opportunistic interference alignment wireless networks,” IEEE Trans. Veh. Technol., vol. 66, no. 11, pp. 10433-10445, Sep. 2017.
- [31] L. Zhang, J. Tan, Y.-C. Liang, G. Feng, and D. Niyato, “Deep reinforcement learning-Based modulation and coding scheme selection in cognitive heterogeneous networks,” IEEE Trans. Wireless Commun., vol. 18, no. 6, pp. 3281-3294, Jun. 2019.
- [32] F. B. Mismar, B. L. Evans, and A. Alkhateeb, “Deep reinforcement learning for 5G networks: Joint beamforming, power control, and interference coordination,” IEEE Trans. Commun., vol. 68, no. 3, pp. 1581-1592, Mar. 2020.
- [33] H. Huang, Y. Yang, H. Wang, Z. Ding, H. Sari, and F. Adachi, “Deep reinforcement learning for UAV navigation through massive MIMO technique,” IEEE Trans. Veh. Technol., vol. 69, no. 1, pp. 1117-1121, Jan. 2020.
- [34] H. Zhang, N. Yang, W. Huangfu, K. Long, and V. C. M. Leung, “Power control based on deep reinforcement learning for spectrum sharing,” IEEE Trans. Wireless Commun., vol. 19, no. 6, pp. 4209-4219, Jun. 2020.
- [35] N. C. Luong et al., “Applications of deep reinforcement learning in communications and networking: A survey,” IEEE Commun. Surveys, vol. 21, no. 4, pp. 3133-3174, 2019.
- [36] T. Kim, D. J. Love, B. Clerckx, “Does frequent low resolution feedback outperform infrequent high resolution feedback for multiple antenna beamforming systems?”, IEEE Trans. Signal Process., vol. 59, no. 4, pp. 1654-1669, Apr. 2011.
- [37] Z.-Q. Luo and S. Zhang, “Dynamic spectrum management: Complexity and duality,” IEEE J. Sel. Topics Signal Process., vol. 2, no. 1, pp. 57-73, Feb. 2008.
- [38] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller, “Deterministic policy gradient algorithms,” in Proc. ICML, Jun. 2014.
- [39] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” in Proc. ICLR, 2016.
- [40] Radio Frequency (RF) System Scenarios, document 3GPP TR 25.942, v.14.0.0, 2017.