DRL-based Distributed Resource Allocation for Edge Computing in Cell-Free Massive MIMO Network
Abstract
In this paper, with the aim of addressing the stringent computing and quality-of-service (QoS) requirements of recently introduced advanced multimedia services, we consider a cell-free massive MIMO-enabled mobile edge network. In particular, benefiting from reliable cell-free links to offload intensive computation to the edge server, resource-constrained end-users can augment on-board (local) processing with edge computing. To this end, we formulate a joint communication and computing resource allocation (JCCRA) problem to minimize the total energy consumption of the users while meeting the respective user-specific deadlines. To tackle the problem, we propose a fully distributed solution approach based on a cooperative multi-agent reinforcement learning framework, wherein each user is implemented as a learning agent that makes joint resource allocation decisions relying on local information only. The simulation results demonstrate that the proposed distributed approach outperforms the heuristic baselines and converges to a centralized target benchmark, without resorting to large overhead. Moreover, we show that the proposed algorithm performs significantly better in a cell-free system than in cellular MEC systems, e.g., a small-cell-based MEC system.
Index Terms:
cell-free massive MIMO, joint communication and computing resource allocation, deep reinforcement learning (DRL), multi-agent reinforcement learning, edge computing
I Introduction
Over the past few years, there has been a surge in computation-hungry, and yet highly delay-intolerant, multimedia applications. To support these applications on resource-limited end-user devices, it is essential to augment on-board (local) processing by offloading part of the intensive computation to more powerful computing platforms at the network edge. To this end, the ubiquitous radio and computing resources at the mobile edge network should be jointly optimized so as to improve system objectives such as execution delay and energy consumption. In this regard, a plethora of joint communication and computing resource allocation (JCCRA) schemes based on conventional optimization methods has been investigated. However, most of these approaches either assume accurate knowledge of network-wide information, which is difficult to obtain in practice, or are limited to quasi-static settings, e.g., deterministic task sizes and time-invariant channel conditions. Thus, their applicability to time-sensitive applications in a dynamic mobile edge network may be critically limited.
Driven by the recent advances in machine learning and deep reinforcement learning (DRL), several learning-based schemes have been proposed to provide flexible joint resource allocation. For instance, deep Q-networks (DQN) are employed in [1, 2] to design joint offloading and resource allocation decisions, while [3] presents a deep deterministic policy gradient (DDPG)-based scheme for heterogeneous networks with multiple users. However, these schemes rely on central entities to make joint resource allocation decisions, and thus suffer from considerable signaling overhead and scalability issues. [4] investigates a distributed JCCRA problem in a single-cell mobile edge computing (MEC) system; however, the adopted computation model is not applicable to tasks with extreme reliability and ultra-low hard-deadline requirements. We note that most of the effort on optimizing joint resource allocation in MEC networks focuses on cellular systems. However, as we move towards the next decade, current cellular MEC systems may not keep up with the diverse and more stringent requirements of envisaged advanced applications, such as holographic displays, multi-sensory extended reality, and others.
In contrast, there are only limited works in light of the recently proposed cell-free massive MIMO architecture, envisioned for beyond-5G and 6G networks. Notably, the cell-free architecture can provide sufficiently fast and reliable access links without cell edges by serving a relatively small number of users from several geographically distributed access points (APs) [5]. It therefore opens up a new opportunity to support energy-efficient and consistently low-latency computational task offloading, in contrast to cellular MEC systems. [6] presents several performance analyses of a cell-free MEC system considering the coverage radius of the APs, while the issues of active user detection and channel estimation are discussed in [7]. In this paper, we consider a JCCRA problem that minimizes the total energy consumption of the users while meeting the user-specific application deadlines by jointly optimizing the allocation of local processor clock speed and uplink transmission power for each user. We then present a fully distributed solution approach based on cooperative multi-agent reinforcement learning, alleviating the signaling and communication overheads associated with centralized implementations. In sharp contrast to our work, neither [6] nor [7] deals with the JCCRA problem. To the best of our knowledge, this is the first attempt to solve the JCCRA problem in a fully distributed fashion for a cell-free massive MIMO-enabled mobile edge network. The fully distributed and intelligent JCCRA in our framework, combined with the reliable access links of the cell-free massive MIMO architecture, can be a promising means to handle the stringent requirements of newly introduced multimedia services.
The rest of the paper is organized as follows: In Section II, we discuss the system model and problem formulation for the JCCRA framework. Section III presents the proposed distributed JCCRA scheme based on cooperative multi-agent reinforcement learning (MARL) framework. Simulation results are presented in Section IV, followed by some concluding remarks in Section V.
II System Model and Problem Formulation
II-A User-centric Cell-free Massive MIMO: Overview
We consider a cell-free massive MIMO-enabled mobile edge network, depicted in Fig. 1, where a central processing unit (CPU) with an edge server of finite capacity $F^{\mathrm{edge}}$ (in cycles per second) is connected to $M$ single-antenna access points (APs) via ideal fronthaul links. Furthermore, the APs serve $K$ single-antenna users, such that $M \gg K$, following the widely known system model for cell-free massive MIMO [5]. Let $\mathcal{M} = \{1, \dots, M\}$ and $\mathcal{K} = \{1, \dots, K\}$ denote the sets of AP and user indices, respectively. Besides, each user $k$ is also equipped with a limited local processor of maximum capability $f_k^{\max}$ (in cycles per second).
Let the channel between the $m$-th AP and the $k$-th user be given as $g_{mk} = \sqrt{\beta_{mk}}\, h_{mk}$, where $\beta_{mk}$ is a large-scale channel gain coefficient that models the effects of path loss and shadowing, while $h_{mk} \sim \mathcal{CN}(0, 1)$ represents small-scale channel fading. Let $\tau_c$ denote the channel coherence period (in samples) in which $g_{mk}$ remains the same. The coherence period is divided into a pilot transmission interval of $\tau_p$ samples and an uplink data transmission interval of ($\tau_c - \tau_p$) samples for offloading computation to the edge server in the CPU.

At the beginning of each coherence interval, every user $k$ simultaneously transmits a pilot sequence $\boldsymbol{\varphi}_k \in \mathbb{C}^{\tau_p \times 1}$, such that $\|\boldsymbol{\varphi}_k\|^2 = 1$, which is used to estimate the respective channel at each AP $m$. Assuming pairwise orthogonal sequences, i.e., $\boldsymbol{\varphi}_i^{H} \boldsymbol{\varphi}_k = 0$ for $i \neq k$, we ignore the pilot contamination problem in the current model. Then, the received pilot vector at the $m$-th AP, $\mathbf{y}_m^{p} \in \mathbb{C}^{\tau_p \times 1}$, can be represented as
$\mathbf{y}_m^{p} = \sqrt{\tau_p \rho_p} \sum_{k=1}^{K} g_{mk}\, \boldsymbol{\varphi}_k + \mathbf{n}_m$   (1)
where $\rho_p$ is the pilot transmit power, and $\mathbf{n}_m$ is a $\tau_p$-dimensional additive noise vector with independent and identically distributed entries of $\mathcal{CN}(0, \sigma^2)$. Based on the received vector $\mathbf{y}_m^{p}$, the least-square (LS) channel estimate can be expressed as:
$\hat{g}_{mk} = \frac{1}{\sqrt{\tau_p \rho_p}}\, \boldsymbol{\varphi}_k^{H}\, \mathbf{y}_m^{p}$   (2)
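For illustration, a minimal NumPy sketch of the per-AP LS estimation in (1)-(2) is given below; the function and array names are our own assumptions, not part of the original formulation.

```python
import numpy as np

def ls_channel_estimates(y_p, Phi, tau_p, rho_p):
    """Least-square channel estimates at a single AP.

    y_p : (tau_p,) complex received pilot vector at AP m, cf. eq. (1)
    Phi : (tau_p, K) matrix whose k-th column is pilot sequence phi_k
    Returns a length-K vector with the LS estimates g_hat[m, :], cf. eq. (2).
    """
    # Correlate the received vector with each (orthogonal, unit-norm) pilot
    return (Phi.conj().T @ y_p) / np.sqrt(tau_p * rho_p)
```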
The channel estimates are used to decode the uplink data transmitted by the users. After pilot transmission, the users transmit the offloaded data to the APs. Let $s_k$, with $\mathbb{E}\{|s_k|^2\} = 1$, denote the uplink data symbol of user $k$, transmitted with power $p_k$, which is determined as $p_k = \eta_k\, p_k^{\max}$, where $\eta_k \in [0, 1]$ and $p_k^{\max}$ represent the power control coefficient and the maximum uplink power of the $k$-th user, respectively. Then, the received signal at the $m$-th AP is represented as
$y_m = \sum_{k=1}^{K} \sqrt{p_k}\, g_{mk}\, s_k + n_m$   (3)
It is important to ensure the scalability of the cell-free massive MIMO system in terms of the complexity of pilot detection and data processing. To this end, we limit the number of APs that serve the $k$-th user to $M_k \leq M$, and form a user-centric cluster of APs $\mathcal{M}_k \subseteq \mathcal{M}$ by including the APs with the largest $\beta_{mk}$ until the maximum allowable cluster size is reached, i.e., $|\mathcal{M}_k| = M_k$ (a brief sketch of this clustering rule is given after (4)). Thus, the data transmitted by user $k$ is only decoded by the APs in $\mathcal{M}_k$. Then, after performing maximum ratio combining (MRC) at each AP $m \in \mathcal{M}_k$, the quantity $\hat{g}_{mk}^{*}\, y_m$ is transmitted to the CPU via a fronthaul link. The received soft estimates are combined at the CPU to decode the data transmitted by user $k$ as follows:
$\hat{s}_k = \sum_{m \in \mathcal{M}_k} \hat{g}_{mk}^{*}\, y_m$   (4)
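As a concrete illustration of the user-centric clustering rule described above, the following Python sketch selects, for each user, the APs with the largest large-scale gains; the function and variable names are illustrative assumptions.

```python
import numpy as np

def form_user_centric_clusters(beta, cluster_size):
    """beta: (M, K) array of large-scale gains beta[m, k].
    Returns, for each user k, the indices of the cluster_size APs
    with the largest large-scale gains towards that user."""
    M, K = beta.shape
    clusters = []
    for k in range(K):
        # Sort APs by descending large-scale gain and keep the strongest ones
        strongest = np.argsort(beta[:, k])[::-1][:cluster_size]
        clusters.append(set(strongest.tolist()))
    return clusters
```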
Then, the uplink signal-to-interference-plus-noise ratio (SINR) of user $k$ can be expressed as:
$\gamma_k = \dfrac{p_k \left| \sum_{m \in \mathcal{M}_k} \hat{g}_{mk}^{*}\, g_{mk} \right|^2}{\sum_{i \in \mathcal{K} \setminus \{k\}} p_i \left| \sum_{m \in \mathcal{M}_k} \hat{g}_{mk}^{*}\, g_{mi} \right|^2 + \sigma^2 \sum_{m \in \mathcal{M}_k} \left| \hat{g}_{mk} \right|^2}$   (5)
Given the system bandwidth $B$, the uplink rate of user $k$ is given as $R_k = B \log_2(1 + \gamma_k)$ by Shannon's theory.
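The following NumPy sketch computes the MRC-based SINR and rate corresponding to (4)-(5) as reconstructed above; the signature and names are assumptions for illustration only.

```python
import numpy as np

def uplink_rate(g, g_hat, p, cluster_k, k, noise_var, bandwidth):
    """Uplink SINR and Shannon rate of user k under MRC at the CPU, cf. (4)-(5).

    g, g_hat  : (M, K) arrays of true and LS-estimated channels
    p         : (K,) uplink transmit powers
    cluster_k : iterable of indices of the APs serving user k
    """
    m = np.array(sorted(cluster_k))
    w = g_hat[m, k].conj()                              # MRC combining weights
    desired = p[k] * np.abs(w @ g[m, k]) ** 2
    interference = sum(p[i] * np.abs(w @ g[m, i]) ** 2
                       for i in range(g.shape[1]) if i != k)
    noise = noise_var * np.sum(np.abs(w) ** 2)
    sinr = desired / (interference + noise)
    return bandwidth * np.log2(1.0 + sinr)              # bits per second
```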
II-B Parallel Computation Model
We assume each user $k$ has a computationally intensive task of $L_k(t)$ bits at the beginning of every discrete time step $t$, whose duration is $\Delta t$. The task sizes in different time steps are independent and uniformly distributed over $[L^{\min}, L^{\max}]$ for every user $k$. Let $T_k$ denote the maximum tolerable delay of user $k$ to complete execution of the given task. Considering the delay constraint and energy consumption, the $k$-th user processes $l_k^{\mathrm{loc}}(t)$ bits locally while offloading the remaining $L_k(t) - l_k^{\mathrm{loc}}(t)$ bits to the edge server at the CPU.
II-B1 Local computation
Let $f_k(t) \in [0, f_k^{\max}]$ denote the $k$-th user's processor speed (in cycles per second) allocated for local task execution. The entire task is offloaded to the edge server if $f_k(t) = 0$, while $f_k(t) = f_k^{\max}$ implies that the whole local processing capability is fully utilized and the remaining task bits are offloaded to the edge server. Accordingly, the locally computed task size at time step $t$ is expressed as $l_k^{\mathrm{loc}}(t) = \frac{f_k(t)\, T_k}{c}$, where $c$ corresponds to the number of cycles required to process one task bit. Then, the time taken for local execution at time step $t$ is expressed as
$t_k^{\mathrm{loc}}(t) = \dfrac{c\, l_k^{\mathrm{loc}}(t)}{f_k(t)}$   (6)
The corresponding energy consumption is given by
$E_k^{\mathrm{loc}}(t) = \kappa\, c\, l_k^{\mathrm{loc}}(t)\, f_k^2(t)$   (7)
where $\kappa$ corresponds to the effective switched capacitance.
II-B2 Computation Offloading
The remaining $l_k^{\mathrm{off}}(t) = L_k(t) - l_k^{\mathrm{loc}}(t)$ bits are offloaded to the edge server at the CPU for parallel computation. While offloading computation to the edge server, the experienced latency can be broken down into the transmission delay, the computing delay at the edge server, and the transmission delay for retrieving the processed result. Assuming the retrieved data size after computation is much smaller than the offloaded data size, we ignore the retrieval delay. Then, the transmission delay to offload $l_k^{\mathrm{off}}(t)$ bits is expressed as $t_k^{\mathrm{tx}}(t) = \frac{l_k^{\mathrm{off}}(t)}{R_k(t)}$, where $R_k(t)$ is the uplink rate of user $k$ at time step $t$. The computing time at the edge server is given by $t_k^{\mathrm{exe}}(t) = \frac{c\, l_k^{\mathrm{off}}(t)}{F_k(t)}$, where $F_k(t)$ is the computation resource at the CPU allocated to user $k$, which is in proportion to $l_k^{\mathrm{off}}(t)$. The total offloading delay for edge computation is then given as
$t_k^{\mathrm{off}}(t) = t_k^{\mathrm{tx}}(t) + t_k^{\mathrm{exe}}(t) = \dfrac{l_k^{\mathrm{off}}(t)}{R_k(t)} + \dfrac{c\, l_k^{\mathrm{off}}(t)}{F_k(t)}$   (8)
The corresponding energy consumption for offloading computation is then expressed as
$E_k^{\mathrm{off}}(t) = p_k(t)\, t_k^{\mathrm{tx}}(t)$   (9)
where $p_k(t) = \eta_k(t)\, p_k^{\max}$. Then, the overall latency experienced by user $k$ to execute $L_k(t)$ bits locally and at the edge is given as:
$t_k(t) = \max\left\{ t_k^{\mathrm{loc}}(t),\; t_k^{\mathrm{off}}(t) \right\}$   (10)
Similarly, the energy consumption of user $k$ at time step $t$ can be expressed as
$E_k(t) = E_k^{\mathrm{loc}}(t) + E_k^{\mathrm{off}}(t)$   (11)
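To summarize the parallel computation model, a minimal Python sketch of the per-user latency and energy in (6)-(11) is given below; the helper function and its argument names are hypothetical and follow the notation reconstructed above.

```python
def latency_and_energy(L_k, l_loc, f_k, rate_k, F_k, p_k, c, kappa):
    """Per-step latency (10) and energy (11) of a single user.

    L_k   : task size [bits], l_loc: bits processed locally
    f_k   : local processor speed [cycles/s], rate_k: uplink rate [bit/s]
    F_k   : edge-server share allocated to the user [cycles/s]
    p_k   : uplink transmit power [W], c: cycles per bit
    kappa : effective switched capacitance
    """
    l_off = L_k - l_loc
    t_loc = c * l_loc / f_k if l_loc > 0 else 0.0              # eq. (6)
    e_loc = kappa * c * l_loc * f_k ** 2                        # eq. (7)
    t_tx = l_off / rate_k                                       # uplink transmission delay
    t_off = t_tx + (c * l_off / F_k if l_off > 0 else 0.0)      # eq. (8)
    e_off = p_k * t_tx                                          # eq. (9)
    return max(t_loc, t_off), e_loc + e_off                     # eqs. (10)-(11)
```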
II-C JCCRA Problem Formulation
According to the aforementioned communication and parallel computation models, we intend to jointly optimize the local processor speed $f_k(t)$ and uplink transmission power $p_k(t)$ for every user $k \in \mathcal{K}$ at each time step $t$, in order to minimize the total energy consumption of all users subject to the respective user-specific delay requirements. The JCCRA problem can mathematically be formulated to jointly determine $\{f_k(t), p_k(t)\}_{k \in \mathcal{K}}$ as follows:
$\min_{\{f_k(t),\, p_k(t)\}_{k \in \mathcal{K}}} \; \sum_{k \in \mathcal{K}} E_k(t)$
s.t. $t_k(t) \leq T_k, \quad \forall k \in \mathcal{K}$,
$0 \leq f_k(t) \leq f_k^{\max}, \quad \forall k \in \mathcal{K}$,
$0 \leq p_k(t) \leq p_k^{\max}, \quad \forall k \in \mathcal{K}$   (12)
(12) is a stochastic optimization problem in which the objective function and the first constraint involve constantly changing random variables, i.e., the task sizes and channel conditions. Consequently, the problem must be solved frequently, i.e., at every time step $t$, and the solution must converge rapidly within the ultra-low delay constraint. Therefore, it is challenging to solve the problem using iterative optimization algorithms with reasonable complexity. Furthermore, relying on one-shot optimization, it is difficult to guarantee steady long-term performance with conventional algorithms since they cannot adapt to the changes in the mobile edge network.
III The Proposed Multi-Agent Reinforcement Learning-based Distributed JCCRA
To cope with the dynamics of the MEC system while supporting efficient and adaptive joint resource allocation for every user, we propose a fully distributed JCCRA scheme based on cooperative multi-agent reinforcement learning (MARL). We note that solving the JCCRA problem centrally at the CPU might allow for efficient management of the available radio and computing resources owing to the global processing of network-wide information. However, the associated signaling and communication overheads are prohibitively large, especially given the ultra-low delay constraints. Accordingly, we transform the JCCRA problem into a partially observable cooperative Markov game that consists of $K$ learning agents. One of the main challenges in the multi-agent domain is the non-stationarity of the environment, since the agents update their policies concurrently, which might lead to learning instability. To deal with this challenge, the agents are trained with the state-of-the-art multi-agent deep deterministic policy gradient (MADDPG) algorithm under the framework of centralized training and decentralized execution [8].
III-A DRL Formulation for JCCRA
Each user is implemented as a DRL agent that learns to map local observations of the environment to efficient JCCRA actions by sequentially interacting with the mobile edge network in discrete time steps. Let $\mathcal{O}_k$ and $\mathcal{S}$ denote the local observation space of agent $k$ and the complete environment state space, respectively. At the beginning of each step $t$, agent $k$ perceives a local observation $o_k(t) \in \mathcal{O}_k$ of the environment state $s(t) \in \mathcal{S}$, and takes an action $a_k(t)$ from its action space $\mathcal{A}_k$ according to the current JCCRA policy $\mu_k$. The interaction with the environment produces the next observation $o_k(t+1)$ of the agent and a real-valued scalar reward $r(t)$ that guides the agent to constantly improve its policy until it converges. The goal of the agent is, therefore, to derive an optimal JCCRA policy that maximizes the expected long-term discounted cumulative reward, defined as $\mathbb{E}\big[\sum_{t=0}^{T} \gamma^{t} r(t)\big]$, where $\gamma \in [0, 1]$ is a discount factor and $T$ is the total number of time steps (horizon). In the sequel, we define the local observation, action, and reward of agent $k$ for the JCCRA at a given time step $t$.
III-A1 Local observation
At the beginning of each time step $t$, the local observation of agent $k$ is defined as $o_k(t) = \{L_k(t), T_k, R_k(t-1)\}$, where $L_k(t)$, $T_k$, and $R_k(t-1)$ are the incoming task size, the user-specific application deadline, and the uplink rate from the previous time step, respectively. Note that the local observation is only a partial view of the environment, i.e., it does not include information about the other agents.
III-A2 Action
Based on the local observation of the environment, agent $k$ takes an action $a_k(t) = \{f_k(t), p_k(t)\}$, i.e., the local processor speed $f_k(t) \in [0, f_k^{\max}]$ and the uplink transmit power $p_k(t) \in [0, p_k^{\max}]$.
III-A3 Reward
To reflect the design objective of the JCCRA problem and enforce cooperative behavior among the agents, we define the joint reward as
$r(t) = -\sum_{k \in \mathcal{K}} \big( E_k(t) + \lambda_k(t) \big)$   (13)
where $\lambda_k(t) = 0$ if $t_k(t) \leq T_k$, and otherwise $\lambda_k(t)$ is set to a large positive penalty to punish potentially selfish behavior of the agents that leads to failure in meeting the delay constraint.
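A minimal Python sketch of this cooperative reward, under the penalty structure reconstructed above, is shown below; the penalty value is an illustrative assumption.

```python
def joint_reward(energies, latencies, deadlines, penalty=10.0):
    """Cooperative reward shared by all agents, cf. eq. (13) (sketch).

    energies, latencies, deadlines: per-user sequences of E_k(t), t_k(t), T_k.
    penalty is an illustrative constant added whenever a deadline is missed.
    """
    reward = 0.0
    for e, t, T in zip(energies, latencies, deadlines):
        violation = 0.0 if t <= T else penalty   # punish missed deadlines
        reward -= e + violation
    return reward
```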
III-B MADDPG Algorithm for Distributed JCCRA
During the offline training phase, the agents can be trained in a central unit, which presents an opportunity to share extra information among the agents for easing and accelerating the training process. Specifically, agent $k$ now has access to the joint action $a(t) = \{a_1(t), \dots, a_K(t)\}$ and the full observation of the environment state $x(t) = \{o_1(t), \dots, o_K(t)\}$, which includes the local observations of the other agents at time step $t$. The extra information endowed to the agents, however, is discarded during the execution phase, meaning that the agents rely entirely on their local observations to make JCCRA decisions in a fully distributed manner.
As an extension of the single-agent deep deterministic policy gradient (DDPG) algorithm [9], MADDPG is an actor-critic policy gradient algorithm. Specifically, each MADDPG agent $k$ employs two main neural networks: an actor network with parameters $\theta_k$ to approximate a JCCRA policy $\mu_k(o_k \mid \theta_k)$, and a critic network, parametrized by $\phi_k$, to approximate a centralized action-value function $Q_k(x, a_1, \dots, a_K \mid \phi_k)$, along with their respective time-delayed copies, $\theta_k'$ and $\phi_k'$, which serve as targets. At time step $t$, the actor deterministically maps the local observation $o_k(t)$ to a specific continuous action, and then a random noise process $\mathcal{N}_t$ is added to generate an exploratory policy such that $a_k(t) = \mu_k(o_k(t) \mid \theta_k) + \mathcal{N}_t$. The environment, which is shared among the agents, collects the joint action and returns the immediate reward $r(t)$ and the next observation $o_k(t+1)$ to the respective agents. To make efficient use of the experience in later decision-making steps and improve the stability of training, the agent's transition, along with the extra information, is saved in the replay buffer $\mathcal{D}_k$ of the agent.
To train the main networks, a mini-batch of $S$ samples $\{(x^{j}, a^{j}, r^{j}, x'^{j})\}_{j=1}^{S}$ is randomly drawn from the replay buffer $\mathcal{D}_k$, where $j$ denotes a sample index. The critic network is updated to minimize the following loss function:
$\mathcal{L}(\phi_k) = \frac{1}{S} \sum_{j=1}^{S} \Big( y^{j} - Q_k\big(x^{j}, a_1^{j}, \dots, a_K^{j} \mid \phi_k\big) \Big)^2$   (14)
where $y^{j}$ is the target value expressed as $y^{j} = r^{j} + \gamma\, Q_k\big(x'^{j}, a_1', \dots, a_K' \mid \phi_k'\big)\big|_{a_i' = \mu_i'(o_i'^{j})}$. On the other hand, the parameters of the actor network are updated along the deterministic policy gradient given as
$\nabla_{\theta_k} J \approx \frac{1}{S} \sum_{j=1}^{S} \nabla_{\theta_k} \mu_k\big(o_k^{j} \mid \theta_k\big)\, \nabla_{a_k} Q_k\big(x^{j}, a_1^{j}, \dots, a_k, \dots, a_K^{j} \mid \phi_k\big)\Big|_{a_k = \mu_k(o_k^{j})}$   (15)
The target parameters of both the actor and critic networks are updated as $\theta_k' \leftarrow \tau\, \theta_k + (1 - \tau)\, \theta_k'$ and $\phi_k' \leftarrow \tau\, \phi_k + (1 - \tau)\, \phi_k'$, respectively, where $\tau$ is a constant close to zero.
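The update rules in (14)-(15) and the soft target update can be condensed into the following PyTorch-style sketch of one MADDPG training step; the agent attributes (actor, critic, target networks, optimizers, index) and buffer layout are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def maddpg_update(agent_k, agents, batch, gamma=0.99, tau=0.001):
    """One critic/actor update for agent_k, cf. eqs. (14)-(15) (sketch).

    batch: (obs, acts, rews, next_obs) with per-agent tensors of shape
    (batch_size, dim); agents is the list of all agents.
    """
    obs, acts, rews, next_obs = batch
    x = torch.cat(obs, dim=-1)            # full state (all local observations)
    x_next = torch.cat(next_obs, dim=-1)
    a = torch.cat(acts, dim=-1)

    # Critic: centralized Q-target built with all target actors, eq. (14)
    with torch.no_grad():
        a_next = torch.cat([ag.target_actor(o) for ag, o in zip(agents, next_obs)], dim=-1)
        y = rews + gamma * agent_k.target_critic(x_next, a_next)
    critic_loss = F.mse_loss(agent_k.critic(x, a), y)
    agent_k.critic_opt.zero_grad(); critic_loss.backward(); agent_k.critic_opt.step()

    # Actor: deterministic policy gradient, eq. (15)
    acts_k = list(acts)
    acts_k[agent_k.index] = agent_k.actor(obs[agent_k.index])
    actor_loss = -agent_k.critic(x, torch.cat(acts_k, dim=-1)).mean()
    agent_k.actor_opt.zero_grad(); actor_loss.backward(); agent_k.actor_opt.step()

    # Soft update of the target networks
    for net, target in [(agent_k.actor, agent_k.target_actor),
                        (agent_k.critic, agent_k.target_critic)]:
        for p, p_t in zip(net.parameters(), target.parameters()):
            p_t.data.copy_(tau * p.data + (1 - tau) * p_t.data)
```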
It is worth mentioning that the proposed JCCRA scheme is trained offline. Once it converges, real-time decision making can be conducted in a fully distributed fashion by simply performing inference with the trained actor network of each agent, relying on the respective local observation only.
IV Performance Analysis
We consider $M$ APs and $K$ users uniformly distributed at random over an area of 1 km². The APs are connected to the CPU via ideal fronthaul links. We assume only 30% of the APs are clustered to serve each user, i.e., $M_k = 0.3M$. The system bandwidth is set to $B = 5$ MHz, and all active users share this bandwidth without channelization. Denoting the standard deviation of the shadow fading by $\sigma_{sh}$, the large-scale channel gain is given as $\beta_{mk} = 10^{\frac{\mathrm{PL}_{mk}}{10}}\, 10^{\frac{\sigma_{sh} z_{mk}}{10}}$, where $z_{mk} \sim \mathcal{N}(0, 1)$. The path loss $\mathrm{PL}_{mk}$ is given according to the three-slope model [10] as follows:
$\mathrm{PL}_{mk} = \begin{cases} -L - 35 \log_{10}(d_{mk}), & \text{if } d_{mk} > d_1 \\ -L - 15 \log_{10}(d_1) - 20 \log_{10}(d_{mk}), & \text{if } d_0 < d_{mk} \leq d_1 \\ -L - 15 \log_{10}(d_1) - 20 \log_{10}(d_0), & \text{if } d_{mk} \leq d_0 \end{cases}$   (16)
where $d_{mk}$ is the distance between user $k$ and AP $m$, $d_0$ and $d_1$ are the breakpoint distances of the three-slope model, and $L$ is
$L = 46.3 + 33.9 \log_{10}(f) - 13.82 \log_{10}(h_{\mathrm{AP}}) - \big(1.1 \log_{10}(f) - 0.7\big) h_u + \big(1.56 \log_{10}(f) - 0.8\big)$   (17)
Here, $f$ is the carrier frequency (in MHz), $h_{\mathrm{AP}}$ is the AP antenna height (in m), and $h_u$ denotes the user antenna height (in m). Further, no shadowing is considered unless $d_{mk} > d_1$. The edge server at the CPU has computation capacity $F^{\mathrm{edge}}$, while the users are equipped with local processors of capability $f_k^{\max}$. The length of a time step is set to 1 ms, and each user generates a task of a random size that is uniformly distributed in the range of 2.5 to 7.5 kbits at the beginning of every time step $t$. Furthermore, the remaining simulation parameters, such as the processing density $c$, the effective switched capacitance $\kappa$, and the maximum uplink and pilot transmit powers, are set accordingly.
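For illustration, the large-scale fading model in (16)-(17) can be coded as follows; the breakpoint distances used as defaults are assumed values, not taken from the paper.

```python
import numpy as np

def three_slope_pathloss_db(d_mk, L_const, d0=0.01, d1=0.05):
    """Three-slope path loss in dB, cf. eq. (16); distances in km
    (the breakpoints d0, d1 below are illustrative values)."""
    if d_mk > d1:
        return -L_const - 35.0 * np.log10(d_mk)
    elif d_mk > d0:
        return -L_const - 15.0 * np.log10(d1) - 20.0 * np.log10(d_mk)
    return -L_const - 15.0 * np.log10(d1) - 20.0 * np.log10(d0)

def large_scale_gain(pl_db, sigma_sh_db, apply_shadowing=True):
    """beta_mk = 10^(PL/10) * 10^(sigma_sh * z / 10), z ~ N(0, 1), cf. Sec. IV."""
    z = np.random.randn() if apply_shadowing else 0.0
    return 10.0 ** (pl_db / 10.0) * 10.0 ** (sigma_sh_db * z / 10.0)
```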
For the proposed MADDPG-based JCCRA implementation, the actor and critic networks of each agent are implemented with fully connected neural networks, which consist of three hidden layers with 128, 64, and 64 nodes, respectively. All the hidden layers are activated by the ReLU function, while the outputs of the actor are activated by the sigmoid function. The parameters of the critic and actor networks are updated with the adaptive moment (Adam) optimizer with learning rates of 0.001 and 0.0001, respectively. Further, we set the discount factor $\gamma$, the target update parameter $\tau$, and the mini-batch size $S$ accordingly. In our simulations, the agents are trained for several episodes, each consisting of 100 steps.
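A compact PyTorch sketch of the actor and critic architecture described above (128-64-64 hidden nodes, ReLU hidden activations, sigmoid actor output later scaled to the action bounds) is shown below; the class names and input dimensions are assumptions.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a local observation to normalized actions in [0, 1], which are
    then scaled to [0, f_k_max] and [0, p_k_max]."""
    def __init__(self, obs_dim, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Sigmoid(),
        )

    def forward(self, obs):
        return self.net(obs)

class Critic(nn.Module):
    """Centralized critic: scores the full state together with all agents' actions."""
    def __init__(self, full_obs_dim, full_act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(full_obs_dim + full_act_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=-1))
```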
To evaluate the performance of the proposed MADDPG-based JCCRA, we compare it with three benchmarks:
IV-1 Centralized DDPG-based JCCRA scheme
This approach refers to a DDPG-based centralized resource allocation scheme implemented at the CPU. We adopt the same neural network structure and other hyperparameters as the MADDPG-based scheme to train the actor and critic in the single DDPG agent. Since global information of the entire network is processed centrally at the CPU to make JCCRA decisions for all users, we can potentially obtain the most efficient resource allocation. Consequently, this baseline serves as a target performance benchmark. However, as it involves significant signaling and communication overheads, this scheme might be infeasible for supporting time-sensitive applications.


IV-2 Offloading-first with an uplink fractional power control (FPC) scheme
This approach preferentially offloads the computation to the edge server with the aim of aggressively exploiting the reliable access links provided by cell-free massive MIMO while saving local processing energy. The uplink transmit power of the $k$-th user is determined by the standard fractional power control (FPC) [11] as follows:
$p_k = \min\left( p_k^{\max},\; p_0\, \beta_k^{-\alpha} \right)$   (18)
where $p_0$ is a power offset parameter, $\alpha \in [0, 1]$ is the fractional compensation factor, and $\beta_k = \sum_{m \in \mathcal{M}_k} \beta_{mk}$ is the aggregate large-scale gain of user $k$.
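Under the notation reconstructed for (18), the FPC rule amounts to the following one-line sketch; parameter names are assumptions.

```python
def fractional_power_control(beta_k, p_max, p0, alpha):
    """Uplink FPC, cf. eq. (18) (sketch): partially compensate the aggregate
    large-scale gain beta_k of user k, capped at the maximum transmit power."""
    return min(p_max, p0 * beta_k ** (-alpha))
```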
IV-3 Local execution-first with an uplink fractional power control (FPC) scheme
The entire local processing capability is fully utilized, i.e., $f_k(t) = f_k^{\max}$, and the remaining task bits are offloaded to the edge server with the uplink transmit power given according to (18).
In Fig. 2, we investigate the convergence and performance of the proposed MADDPG-based JCCRA scheme by evaluating the total reward accumulated during the training process. It can be observed that the performance of the proposed distributed scheme converges to the target performance benchmark, i.e., the centralized DDPG-based JCCRA, while relieving the large overhead of the latter. This implies that, by relying on local observations and the additional information provided during the offline centralized training phase, the agents manage to learn efficient joint resource allocation. Moreover, the proposed scheme significantly outperforms the heuristic baselines in terms of the total reward accumulated.
In Fig. 3, we compare the average success rate of the users in attaining the delay constraint of the time-sensitive applications. One can observe that the proposed MADDPG-based JCCRA achieves more than 99% success rate in accomplishing the tasks within the deadline as the training episodes progress. Even though the local execution-first policy achieves a 100% success rate, this performance comes at the cost of high energy consumption, as reflected in Fig. 2. Moreover, the fact that the policy of the learning agents outperforms the offloading-first baseline implies that, even if cell-free massive MIMO can provide reliable access links to the users, aggressive computation offloading can result in performance degradation due to ineffective use of the limited resources.


We then implemented the proposed MADDPG-based JCCRA scheme in two different cellular MEC architectures, namely a small-cell system and a single-cell system with co-located massive MIMO, to further investigate the gain obtained from the cell-free architecture. For the small-cell MEC system, each user is served by the single AP with the largest large-scale fading coefficient, following the methodology in [5], whereas in the co-located massive MIMO case, each user connects to a base station equipped with the same total number of antennas, $M$. From Figs. 4(a) and 4(b), we can observe that the proposed algorithm over the cell-free system significantly outperforms the cellular MEC systems in terms of average success rate and total reward accumulated. This is attributed to the channel quality degradation caused by co-channel interference from nearby cells in the small-cell system, while in the single-cell MEC system, users are subjected to large interference from all co-located antennas under MRC, hence limiting the uplink rate for computation offloading.
V Conclusion and Future Works
In this paper, we presented a novel distributed joint communication and computing resource allocation (JCCRA) scheme based on a cooperative multi-agent reinforcement learning framework for a cell-free massive MIMO-enabled mobile edge network. More specifically, each trained agent can make intelligent and adaptive joint resource allocation decisions in real time, based on local observation only, in order to minimize the total energy consumption while meeting the respective delay constraints of the dynamic MEC system. The performance of the proposed distributed JCCRA scheme is shown to converge to the centralized target baseline without resorting to large overhead. Finally, we have shown that the proposed algorithm in the cell-free system outperforms two cellular MEC systems by wide margins. In the future, it would be interesting to extend the current formulation to determine a set of serving APs that adapts to dynamic user mobility.
Acknowledgment
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.2020R1A2C100998413).
References
- [1] J. Li, H. Gao, T. Lv, and Y. Lu, “Deep reinforcement learning based computation offloading and resource allocation for MEC,” 2018 IEEE Wireless Communications and Networking Conference (WCNC), Barcelona, 2018.
- [2] L. Huang, X. Feng, C. Zhang, L. Qian, and Y. Wu, “Deep reinforcement learning-based joint task offloading and bandwidth allocation for multi-user mobile edge computing,” Digit. Commun. Netw., vol. 5, no. 1, pp. 10–17, 2019.
- [3] Y. Dai, K. Zhang, S. Maharjan, and Y. Zhang, “Edge Intelligence for Energy-Efficient Computation Offloading and Resource Allocation in 5G Beyond,” in IEEE Transactions on Vehicular Technology, vol. 69, no. 10, pp. 12175-12186, Oct. 2020.
- [4] Z. Chen and X. Wang, “Decentralized computation offloading for multi-user mobile edge computing: A deep reinforcement learning approach,” EURASIP J. Wireless Commun. Netw., vol. 2020, no. 188, 2020.
- [5] H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson and T. L. Marzetta, “Cell-Free Massive MIMO Versus Small Cells,” in IEEE Transactions on Wireless Communications, vol. 16, no. 3, pp. 1834-1850, March 2017.
- [6] S. Mukherjee and J. Lee, “Edge Computing-Enabled Cell-Free Massive MIMO Systems,” in IEEE Transactions on Wireless Communications, vol. 19, no. 4, pp. 2884-2899, April 2020.
- [7] M. Ke, Z. Gao, Y. Wu, X. Gao, and K. -K. Wong, “Massive Access in Cell-Free Massive MIMO-Based Internet of Things: Cloud Computing and Edge Computing Paradigms,” in IEEE Journal on Selected Areas in Communications, vol. 39, no. 3, pp. 756-772, March 2021.
- [8] R. Lowe et al., “Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments”, arXiv preprint arXiv:1706.02275, 2020.
- [9] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” in Proc. International Conference on Learning Representations (ICLR), 2016.
- [10] A. Tang, J. Sun, and K. Gong, “Mobile propagation loss with a low base station antenna for NLOS street microcells in urban area,” in Proc. IEEE VTS 53rd Vehicular Technology Conference (VTC Spring), Rhodes, Greece, 2001, pp. 333-336.
- [11] R. Nikbakht, R. Mosayebi, and A. Lozano, “Uplink Fractional Power Control and Downlink Power Allocation for Cell-Free Networks,” in IEEE Wireless Communications Letters, vol. 9, no. 6, pp. 774-777, June 2020.