
UAV-Assisted Enhanced Coverage and Capacity in Dynamic MU-mMIMO IoT Systems: A Deep Reinforcement Learning Approach

MohammadMahdi Ghadaksaz, Mobeen Mahmood, Tho Le-Ngoc Department of Electrical and Computer Engineering McGill University, Montreal, QC, Canada
Email: mohammad.ghadaksaz@mail.mcgill.ca, mobeen.mahmood@mail.mcgill.ca, tho.le-ngoc@mcgill.ca
Abstract

This study focuses on a multi-user massive multiple-input multiple-output (MU-mMIMO) system by incorporating an unmanned aerial vehicle (UAV) as a decode-and-forward (DF) relay between the base station (BS) and multiple Internet-of-Things (IoT) devices. Our primary objective is to maximize the overall achievable rate (AR) by introducing a novel framework that integrates joint hybrid beamforming (HBF) and UAV localization in dynamic MU-mMIMO IoT systems. Particularly, HBF stages for BS and UAV are designed by leveraging slow time-varying angular information, whereas a deep reinforcement learning (RL) algorithm, namely deep deterministic policy gradient (DDPG) with continuous action space, is developed to train the UAV for its deployment. By using a customized reward function, the RL agent learns an optimal UAV deployment policy capable of adapting to both static and dynamic environments. The illustrative results show that the proposed DDPG-based UAV deployment (DDPG-UD) can achieve approximately 99.5% of the sum-rate capacity achieved by particle swarm optimization (PSO)-based UAV deployment (PSO-UD), while requiring a significantly reduced runtime at approximately 68.50% of that needed by PSO-UD, offering an efficient solution in dynamic MU-mMIMO environments.

I Introduction

In next-generation wireless communications, the expectation of connectivity anywhere and anytime poses a formidable challenge, particularly in dynamic Internet-of-Things (IoT) environments where users are continuously on the move and change their locations frequently. These IoT environments, spanning from ever-changing urban areas to critical emergencies, necessitate a network infrastructure that is both adaptable and robust. In this case, several networking strategies have been explored, such as direct transmission and relay-based configurations. However, direct transmission over large distances can be impractical and result in excessive power consumption. Moreover, the traditional static base station (BS) structure frequently fails to meet the dynamic demands and changing conditions of these areas. Under these conditions, employing mobile relay nodes emerges as a more energy-efficient solution [1].

Recent developments in unmanned aerial vehicles (UAVs), commonly referred to as drones, have positioned them as an essential element of the future wireless communications networks. When used as relays, UAVs offer several advantages over traditional static relay systems. Specifically, mobile, on-demand relay systems are highly suitable for unforeseen or transient events, like emergencies or network offloading tasks, due to their ability to be quickly and economically deployed [2]. The mobility of UAVs enables them to operate at relatively high elevations, set up a line-of-sight connection with users on the ground, and prevent signal interference caused by obstacles, which makes UAVs practically appealing for dynamic communications systems [3]. Despite the significant propagation challenges faced by millimeter-wave (mmWave) signals, including free-space path loss, atmospheric and molecular absorption, and attenuation from rain, their substantial bandwidth presents a promising solution for meeting the high-throughput and low-latency requirements of diverse UAV application scenarios [4]. To address these challenges, massive multiple-input multiple-output (mMIMO) technology is employed, utilizing large antenna arrays to generate robust beam signals and thereby extending the transmission range. Compared to fully-digital beamforming (FDBF), the hybrid beamforming (HBF) architecture, which consists of a radio frequency (RF) stage and a baseband (BB) stage, can minimize power consumption by reducing the number of energy-consuming RF chains while achieving a performance close to FDBF [5]-[7].

The deployment of the UAV plays an important role in enhancing the performance of UAV-assisted wireless communications systems. Thus, recent research has shown an increased interest in optimizing UAV locations, with particular emphasis on HBF solutions to maximize the achievable rate (AR) or minimize transmit power [8]-[11]. In particular, [8] investigates the joint optimization of UAV deployment while considering HBF at the BS and UAV for maximum AR. An amplify-and-forward UAV relay with an analog beamforming architecture is considered in [9] to maximize the capacity in a dual-hop mMIMO IoT system. Similarly, [12] considers the optimization problem for UAV location, user clustering, and HBF design to maximize AR under a minimum rate constraint for each user. The authors in [13] study the joint optimization of the UAV's flying altitude, position, transmit power, antenna beamwidth, and users' allocated bandwidth. Most of the existing research works (e.g., [8]-[13]) tackle the issue of UAV deployment in a static environment where users are situated at fixed locations. However, these works tend to overlook the dynamic nature of real-world environments, where ground IoT users/devices exhibit mobility, leading to rapidly changing conditions.

As a cornerstone of artificial intelligence (AI), reinforcement learning (RL) has been extensively researched in wireless communications and UAV applications [14]-[16]. RL is a decision-making approach that emphasizes learning through interaction with an environment. Inspired by behavioral psychology, RL is akin to the learning process in humans and animals, where actions are taken based on past experiences and their outcomes. Within the spectrum of RL methods, the deep deterministic policy gradient (DDPG) has emerged as a notable technique [17]. DDPG is particularly adept at handling continuous action spaces, which are common in real-world scenarios. This capability makes DDPG highly relevant for complex, dynamic environments where actions need to be precise and varied, as is often the case with UAV operations. Different RL-based solutions have been studied for UAV positioning (e.g., [18]-[20]). However, the design of HBF jointly with UAV deployment using RL-based solutions is an unaddressed research problem, presenting a significant opportunity to advance the field of UAV-assisted mMIMO IoT communications networks in dynamic environments.

To address this issue, we propose a joint HBF and UAV deployment framework using a DDPG-based algorithmic solution to maximize AR in dynamic MU-mMIMO IoT systems. In particular, the RF beamforming stages for the BS and UAV are designed based on the slow time-varying angle-of-departure (AoD)/angle-of-arrival (AoA) information, and the BB stages are formulated using the reduced-dimensional effective channel matrices. Then, a novel DDPG-based algorithmic solution is proposed for UAV deployment with the primary objective of not only maximizing the overall AR in MU-mMIMO IoT systems but also significantly reducing computational complexity, particularly the runtime, compared to nature-inspired (NI) optimization methods. It is worthwhile to mention that the proposed DDPG algorithm exhibits a notable advantage in dynamic scenarios. The knowledge gained from training on the initial user positions is effectively transferred to subsequent positions, resulting in reduced learning time, akin to transfer learning. The illustrative results depict the efficacy of the proposed DDPG-based deployment scheme by reducing the runtime by up to 31.5% as compared to NI-based solutions.

The rest of this paper is organized in the following manner. Section II defines the system and channel model for UAV-relay MU-mMIMO systems. In Section III, we introduce the joint HBF and DDPG-based UAV deployment framework. Section IV presents the illustrative results. Finally, Section V concludes the paper.

II System & Channel Model

II-A System Model

The current research explores a scenario in which multiple users are linked to a gateway via both wired and wireless connections. This configuration is located in a remote region that is hard to reach directly from the BS because of several obstacles, such as buildings and mountains. Therefore, a UAV is employed as a dual-hop decode-and-forward (DF) relay to connect with the users, as illustrated in Fig. 1. Let (x_{b},y_{b},z_{b}), (x_{u},y_{u},z_{u}), and (x_{k},y_{k},z_{k}) denote the locations of the BS, the UAV relay, and the k^{th} IoT user, respectively. We establish the 3D distances for the UAV-assisted mmWave MU-mMIMO IoT system as follows:

\tau_{1}=\sqrt{(x_{u}-x_{b})^{2}+(y_{u}-y_{b})^{2}+(z_{u}-z_{b})^{2}}
\tau_{2,k}=\sqrt{(x_{u}-x_{k})^{2}+(y_{u}-y_{k})^{2}+(z_{u}-z_{k})^{2}}  (1)
\tau_{k}=\sqrt{(x_{b}-x_{k})^{2}+(y_{b}-y_{k})^{2}+(z_{b}-z_{k})^{2}}

where \tau_{1}, \tau_{2,k}, and \tau_{k} are the 3D distances between the UAV and the BS, between the UAV and the k^{th} IoT user, and between the BS and the k^{th} IoT user, respectively.
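The following is a minimal sketch (assuming NumPy and purely illustrative coordinates) of how the 3D distances in (1) between the BS, the UAV relay, and an IoT user could be evaluated; the positions used are hypothetical placeholders, loosely consistent with the simulation setup described later.

```python
import numpy as np

def distance_3d(p, q):
    """Euclidean distance between two 3D points given as (x, y, z)."""
    return np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))

# Hypothetical positions (in meters).
bs   = (0.0, 0.0, 10.0)    # (x_b, y_b, z_b)
uav  = (50.0, 50.0, 20.0)  # (x_u, y_u, z_u)
user = (95.0, 92.0, 0.0)   # (x_k, y_k, z_k)

tau_1  = distance_3d(uav, bs)    # BS-UAV link
tau_2k = distance_3d(uav, user)  # UAV-user link
tau_k  = distance_3d(bs, user)   # direct BS-user distance
print(tau_1, tau_2k, tau_k)
```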

Figure 1: UAV-assisted mmWave MU-mMIMO dynamic environment

In this system model, we consider a BS equipped with N_{T} antennas and a UAV relay with N_{r} receive antennas and N_{t} transmit antennas serving K single-antenna IoT users scattered in G groups, where the g^{th} group has K_{g} IoT users such that K=\sum_{g=1}^{G}K_{g}. Both the BS and the UAV relay utilize the HBF architecture. In this setup, the BS includes an RF beamforming stage \mathbb{F}_{b}\in\mathbb{C}^{N_{T}\times N_{RF_{b}}} and a BB stage \mathbb{B}_{b}\in\mathbb{C}^{N_{RF_{b}}\times K}, where N_{RF_{b}} is the number of RF chains such that N_{s}\leq N_{RF_{b}}\leq N_{T} to guarantee multi-stream transmission. Considering half-duplex (HD) DF relaying, the BS sends K data streams \mathbb{d}=[d_{1},d_{2},\cdots,d_{K}]^{T} through the channel \mathbb{H}_{1}\in\mathbb{C}^{N_{r}\times N_{T}} in the first time slot. Using N_{r} antennas, the UAV receives the signals with RF stage \mathbb{F}_{u,r}\in\mathbb{C}^{N_{RF_{u}}\times N_{r}} and BB stage \mathbb{B}_{u,r}\in\mathbb{C}^{K\times N_{RF_{u}}}. We assume the UAV relay transmits the data in the second time slot using the RF beamformer \mathbb{F}_{u,t}=[\mathbb{f}_{u,t,1},\cdots,\mathbb{f}_{u,t,N_{RF_{u}}}]\in\mathbb{C}^{N_{t}\times N_{RF_{u}}} and BB stage \mathbb{B}_{u,t}=[\mathbb{b}_{u,t,1},\cdots,\mathbb{b}_{u,t,K}]\in\mathbb{C}^{N_{RF_{u}}\times K} via the channel \mathbb{H}_{2}\in\mathbb{C}^{K\times N_{t}}. The HBF design significantly cuts down the number of RF chains, reducing them from N_{T} to N_{RF_{b}} at the BS and from N_{t} (N_{r}) to N_{RF_{u}} at the UAV, while satisfying the following conditions: 1) K\leq N_{RF_{b}}\ll N_{T}; and 2) K\leq N_{RF_{u}}\ll N_{r} (N_{t}). In this case, considering that the environment noise follows the distribution \mathcal{CN}(0,\sigma_{n}^{2}), the AR of the BS-UAV link can be expressed as follows [21]:

\mathrm{R}_{1}(\mathbb{F}_{b},\mathbb{B}_{b},\mathbb{F}_{u,r},\mathbb{B}_{u,r})=\log_{2}\left|\mathbb{I}_{K}+\mathbb{Q}_{1}^{-1}\mathbb{B}_{u,r}\bm{\mathcal{H}}_{1}\mathbb{B}_{b}\mathbb{B}_{b}^{H}\bm{\mathcal{H}}_{1}^{H}\mathbb{B}_{u,r}^{H}\right|,  (2)

where \mathbb{Q}_{1}^{-1}=(\sigma_{n}^{2}\mathbb{B}_{u,r}\mathbb{F}_{u,r})^{-1}\mathbb{F}_{u,r}^{H}\mathbb{B}_{u,r}^{H}, \bm{\mathcal{H}}_{1}=\mathbb{F}_{u,r}\mathbb{H}_{1}\mathbb{F}_{b}, and \mathbb{E}\{\mathbb{d}\mathbb{d}^{H}\}=\mathbb{I}_{K}\in\mathbb{C}^{K\times K}. In the same manner, the AR between the UAV and the IoT users is calculated based on the instantaneous signal-to-interference-plus-noise ratio (SINR), as demonstrated by the following expression [21]:

\text{SINR}_{g_{k}}=\frac{|\mathbb{h}_{2,k}^{H}\mathbb{F}_{u,t}\mathbb{b}_{u,t,g_{k}}|^{2}}{\sum_{\hat{k}\neq k}^{K_{g}}|\mathbb{h}_{2,k}^{H}\mathbb{F}_{u,t}\mathbb{b}_{u,t,g_{\hat{k}}}|^{2}+\sum_{q\neq g}^{G}\sum_{\hat{k}\neq k}^{K_{g}}|\mathbb{h}_{2,k}^{H}\mathbb{F}_{u,t}\mathbb{b}_{u,t,q_{\hat{k}}}|^{2}+\sigma_{n}^{2}},  (3)

where g_{k}=k+\sum_{g^{\prime}=1}^{g-1}K_{g^{\prime}} represents the IoT user index and \mathbb{h}_{2,g_{k}}\in\mathbb{C}^{N_{t}} is the channel vector between the UAV and the respective IoT user. Utilizing the instantaneous SINR, the ergodic AR of the second link, \mathrm{R}_{2}, in UAV-assisted mmWave MU-mMIMO systems can be written as:

\mathrm{R}_{2}(\mathbb{F}_{u,t},\mathbb{B}_{u,t},x_{u},y_{u})=\mathbb{E}\left\{\sum_{g=1}^{G}\sum_{k=1}^{K_{g}}\mathbb{E}\left[\log_{2}(1+\text{SINR}_{g_{k}})\right]\right\}.  (4)
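Below is a minimal NumPy sketch (with random placeholder matrices and the single-group case used in the later simulations) of how the per-user SINR in (3) and the sum rate in (4) could be evaluated for a given channel \mathbf{H}_{2}, RF stage \mathbb{F}_{u,t}, and BB stage \mathbb{B}_{u,t}; the dimensions and the phase-only RF stage are illustrative assumptions, and the ergodic rate in (4) would average this quantity over channel realizations.

```python
import numpy as np

def sum_rate_second_link(H2, F_ut, B_ut, sigma_n2=1.0):
    """H2: K x Nt channel, F_ut: Nt x N_RF RF stage, B_ut: N_RF x K BB stage."""
    K = H2.shape[0]
    eff = H2 @ F_ut @ B_ut                     # K x K effective gains
    P = np.abs(eff) ** 2                       # |h_k^H F_ut b_j|^2 for all (k, j)
    rate = 0.0
    for k in range(K):
        signal = P[k, k]
        interference = P[k, :].sum() - signal  # streams intended for other users
        rate += np.log2(1.0 + signal / (interference + sigma_n2))
    return rate

rng = np.random.default_rng(0)
K, Nt, N_RF = 4, 144, 4
H2   = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
F_ut = np.exp(1j * 2 * np.pi * rng.random((Nt, N_RF))) / np.sqrt(Nt)   # phase-only RF stage
B_ut = rng.standard_normal((N_RF, K)) + 1j * rng.standard_normal((N_RF, K))
print(sum_rate_second_link(H2, F_ut, B_ut))
```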

II-B Channel Model

We consider mmWave channels for both links. Using the Saleh-Valenzuela channel model, the channel between the BS and the UAV can be written as follows [22]:

\mathbf{H}_{1}=\sum_{c=1}^{C}\sum_{l=1}^{L}z_{1_{cl}}\tau_{1_{cl}}^{-\eta}\mathbf{a}_{1}^{(r)}(\theta_{cl}^{(r)},\phi_{cl}^{(r)})\mathbf{a}_{1}^{(t)T}(\theta_{cl}^{(t)},\phi_{cl}^{(t)})=\mathbf{A}_{1}^{(r)}\mathbf{Z}_{1}\mathbf{A}_{1}^{(t)},  (5)

where C is the total number of clusters, L is the total number of paths from the BS to the UAV, z_{1_{cl}}\sim\mathcal{CN}(0,\frac{1}{L}) is the complex gain of the l^{th} path in the c^{th} cluster, and \eta is the path loss exponent. In addition, \mathbb{a}_{1}^{(k)}(\cdot,\cdot) represents the respective transmit or receive array steering vector for a uniform rectangular array (URA), which is defined as [7]:

\mathbb{a}_{1}^{(k)}(\theta,\phi)=[1,e^{-j2\pi d\sin(\theta)\cos(\phi)},\cdots,e^{-j2\pi d(N_{x}-1)\sin(\theta)\cos(\phi)}]\otimes[1,e^{-j2\pi d\sin(\theta)\sin(\phi)},\cdots,e^{-j2\pi d(N_{y}-1)\sin(\theta)\sin(\phi)}],  (6)

where k=\{r,t\}, N_{x} and N_{y} are the horizontal and vertical sizes of the respective antenna array at the BS and UAV, d is the inter-element spacing, \mathbb{Z}_{1}=\mathrm{diag}(z_{1,1}\tau_{1,1}^{-\eta},\ldots,z_{1,L}\tau_{1,L}^{-\eta})\in\mathbb{C}^{L\times L} represents the diagonal gain matrix, and \mathbb{A}_{1}^{(r)}\in\mathbb{C}^{N_{r}\times L} and \mathbb{A}_{1}^{(t)}\in\mathbb{C}^{L\times N_{T}} are the receive and transmit phase response matrices, respectively. Also, the angles \theta_{cl}^{(t)}\in[\theta_{c}^{(t)}-\delta_{c}^{\theta(t)},\theta_{c}^{(t)}+\delta_{c}^{\theta(t)}] and \phi_{cl}^{(t)}\in[\phi_{c}^{(t)}-\delta_{c}^{\phi(t)},\phi_{c}^{(t)}+\delta_{c}^{\phi(t)}] denote the elevation AoD (EAoD) and azimuth AoD (AAoD) of the l^{th} path in channel \mathbb{H}_{1}, respectively, where \theta_{c}^{(t)} represents the mean EAoD with spread \delta_{c}^{\theta(t)}, and \phi_{c}^{(t)} is the mean AAoD with spread \delta_{c}^{\phi(t)}. In a similar fashion, the angles \theta_{cl}^{(r)}\in[\theta_{c}^{(r)}-\delta_{c}^{\theta(r)},\theta_{c}^{(r)}+\delta_{c}^{\theta(r)}] and \phi_{cl}^{(r)}\in[\phi_{c}^{(r)}-\delta_{c}^{\phi(r)},\phi_{c}^{(r)}+\delta_{c}^{\phi(r)}] correspond to the elevation AoA (EAoA) and azimuth AoA (AAoA), respectively, where \theta_{c}^{(r)} and \phi_{c}^{(r)} represent the mean EAoA and AAoA, with \delta_{c}^{\theta(r)} and \delta_{c}^{\phi(r)} denoting the angular spreads of the elevation and azimuth angles. The channel vector between the UAV and the k^{th} IoT user is written as:

\mathbf{h}_{2,k}^{T}=\sum_{q=1}^{Q}z_{2,k_{q}}\tau_{2,k_{q}}^{-\eta}\mathbb{a}(\theta_{k_{q}},\phi_{k_{q}})=\mathbf{z}_{2,k}^{T}\mathbf{A}_{2,k}\in\mathbb{C}^{N_{t}},  (7)

where Q is the total number of downlink paths from the UAV to the users, z_{2,k_{q}}\sim\mathcal{CN}(0,\frac{1}{Q}) is the complex path gain of the q^{th} path in the second link, and \mathbb{a}(\cdot,\cdot)\in\mathbb{C}^{N_{t}} is the UAV downlink array phase response vector. As given in (7), the obtained downlink channel is comprised of two distinct parts: 1) a fast time-varying path gain vector \mathbf{z}_{2,k}=[z_{2,k_{1}}\tau_{2,k_{1}}^{-\eta},\ldots,z_{2,k_{Q}}\tau_{2,k_{Q}}^{-\eta}]^{T}\in\mathbb{C}^{Q}; and 2) a slow time-varying downlink array phase response matrix \mathbb{A}_{2,k}\in\mathbb{C}^{Q\times N_{t}}, where each row is constituted by \mathbb{a}(\theta_{k_{q}},\phi_{k_{q}}). Afterward, the channel matrix for the second link can be expressed as follows:

\mathbf{H}_{2}=[\mathbf{h}_{2,1},\cdots,\mathbf{h}_{2,K}]^{T}=\mathbf{Z}_{2}\mathbf{A}_{2}\in\mathbb{C}^{K\times N_{t}},  (8)

where \mathbf{Z}_{2}=[\mathbf{z}_{2,1},\cdots,\mathbf{z}_{2,K}]^{T}\in\mathbb{C}^{K\times Q} is the complete path gain matrix for all downlink IoT users.
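As a concrete illustration of (6)-(8), the following is a minimal NumPy sketch of the URA phase response and of assembling the UAV-to-user channel from its slow time-varying steering matrices and fast time-varying path gains; the half-wavelength spacing, angles, distances, and path statistics are illustrative assumptions rather than the exact simulation setup.

```python
import numpy as np

def ura_steering(theta, phi, Nx, Ny, d=0.5):
    """URA phase response a(theta, phi) as the Kronecker product in (6); d in wavelengths."""
    ax = np.exp(-1j * 2 * np.pi * d * np.arange(Nx) * np.sin(theta) * np.cos(phi))
    ay = np.exp(-1j * 2 * np.pi * d * np.arange(Ny) * np.sin(theta) * np.sin(phi))
    return np.kron(ax, ay)                     # length Nx * Ny vector

def downlink_channel(angles, taus, Nx, Ny, eta=3.6, rng=None):
    """angles: per user, a list of Q (theta, phi) pairs; taus: K x Q distances tau_{2,k_q}."""
    if rng is None:
        rng = np.random.default_rng()
    K, Q = len(angles), len(angles[0])
    H2 = np.zeros((K, Nx * Ny), dtype=complex)
    for k in range(K):
        # Slow time-varying phase response matrix A_{2,k} (Q x Nt), one row per path.
        A2k = np.stack([ura_steering(th, ph, Nx, Ny) for th, ph in angles[k]])
        # Fast time-varying path gains z_{2,k_q} * tau_{2,k_q}^(-eta).
        z2k = (rng.standard_normal(Q) + 1j * rng.standard_normal(Q)) / np.sqrt(2 * Q)
        z2k = z2k * taus[k] ** (-eta)
        H2[k] = z2k @ A2k                      # h_{2,k}^T = z_{2,k}^T A_{2,k}, as in (7)
    return H2                                  # K x Nt matrix, as in (8)

angles = [[(np.deg2rad(60.0), np.deg2rad(21.0 + 5.0 * q)) for q in range(4)] for _ in range(4)]
taus = np.full((4, 4), 60.0)                   # illustrative 3D path distances in meters
print(downlink_channel(angles, taus, Nx=12, Ny=12).shape)   # (4, 144)
```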

III Joint HBF & DDPG-Based UAV Deployment

In this section, our objective is to jointly optimize the UAV location and the HBF stages at the BS and UAV to reduce the channel state information (CSI) overhead while maximizing the total AR of UAV-assisted MU-mMIMO IoT systems. First, we design the RF stages \mathbf{F}_{b},\mathbf{F}_{u,r},\mathbf{F}_{u,t} based on the slow time-varying AoD and AoA. Then, the BB stages \mathbf{B}_{b},\mathbf{B}_{u,r},\mathbf{B}_{u,t} are developed by using singular value decomposition (SVD).

III-A HBF Design

The RF and BB stages for the BS and UAV are designed with the following considerations: 1) maximize the beamforming gain in the desired directions based on the slow time-varying AoD and AoA; 2) reduce the number of power-hungry RF chains; 3) reduce the CSI overhead; and 4) mitigate multi-user interference (MU-I). The details of the HBF design for the BS and UAV can be found in [21].

III-B DDPG: Preliminaries

DDPG is a sophisticated RL algorithm that combines elements of deep Q-networks (DQN) and policy gradient techniques. Unlike DQN, which is designed for discrete action spaces, DDPG is tailored for continuous action spaces, common in real-world situations. This makes DDPG ideal for complex tasks that demand a spectrum of continuous action values, increasing its effectiveness in diverse and changing environments.

DDPG utilizes an actor-critic approach [17], where the actor, denoted as \mu(\mathbf{s}|\theta^{\mu}), outputs a deterministic action \mathbf{a} given a state \mathbf{s}, and \theta^{\mu} represents the weights of the actor network. In the same manner, the critic, expressed as Q(\mathbf{s},\mathbf{a}|\theta^{Q}), predicts the expected return (value) of taking an action \mathbf{a} in a state \mathbf{s}, and it is parameterized by the weights \theta^{Q}. DDPG uses target networks for both the actor and the critic, represented as \mu^{\prime}(\mathbf{s}|\theta^{\mu^{\prime}}) and Q^{\prime}(\mathbf{s},\mathbf{a}|\theta^{Q^{\prime}}), respectively, where \theta^{\mu^{\prime}} and \theta^{Q^{\prime}} denote the weights of the target actor and target critic networks. These target networks are slowly updated copies of the primary actor and critic networks, ensuring a more stable and consistent learning process. Moreover, to break the correlation between consecutive experiences (\mathbf{s}_{t},\mathbf{a}_{t},r_{t},\mathbf{s}_{t+1}), we use a replay buffer D, where \mathbf{s}_{t}, \mathbf{a}_{t}, and r_{t} are the state, action, and reward at timestep t, respectively, and \mathbf{s}_{t+1} is the next state. These transitions are stored in the replay buffer, and at each timestep, the actor and critic are updated by sampling a minibatch of transitions (\mathbf{s}_{i},\mathbf{a}_{i},r_{i},\mathbf{s}_{i+1}) uniformly from the buffer. During training, the weights of the critic are updated by minimizing the mean square error (MSE) loss function:

\mathcal{L}=\frac{1}{N}\sum_{i}(y_{i}-Q(\mathbf{s}_{i},\mathbf{a}_{i}|\theta^{Q}))^{2},  (9)

where N is the batch size and y_{i} is defined as:

y_{i}=r_{i}+\gamma Q^{\prime}(\mathbf{s}_{i+1},\mu^{\prime}(\mathbf{s}_{i+1}|\theta^{\mu^{\prime}})|\theta^{Q^{\prime}}).  (10)

Here, \gamma\in[0,1] represents the discount factor. Then, the actor network is updated using the policy gradient method as:

\nabla_{\theta^{\mu}}J\approx\frac{1}{N}\sum_{i}\nabla_{\mathbf{a}}Q(\mathbf{s},\mathbf{a}|\theta^{Q})|_{\mathbf{s}=\mathbf{s}_{i},\mathbf{a}=\mu(\mathbf{s}_{i})}\nabla_{\theta^{\mu}}\mu(\mathbf{s}|\theta^{\mu})|_{\mathbf{s}_{i}}.  (11)

Afterward, the target actor and target critic networks are updated using the soft update approach:

\theta^{Q^{\prime}}\leftarrow\tau\theta^{Q}+(1-\tau)\theta^{Q^{\prime}},  (12)
\theta^{\mu^{\prime}}\leftarrow\tau\theta^{\mu}+(1-\tau)\theta^{\mu^{\prime}},  (13)

where \tau is a small coefficient that determines how fast the target actor and critic networks are updated. As in every RL algorithm, exploration is needed to achieve the best policy. In this regard, we add a zero-mean Gaussian noise \mathcal{N} to the actor policy as follows:

\mathbf{a}_{t}=\mu(\mathbf{s}_{t}|\theta_{t}^{\mu})+\mathcal{N},  (14)

where \mathcal{N} follows the distribution \mathcal{N}(0,\sigma^{2}).

1  Randomly initialize the actor \mu(\mathbf{s}|\theta^{\mu}) and critic Q(\mathbf{s},\mathbf{a}|\theta^{Q}) networks with weights \theta^{\mu} and \theta^{Q}.
2  Initialize the target networks Q^{\prime} and \mu^{\prime} with weights \theta^{Q^{\prime}}\leftarrow\theta^{Q}, \theta^{\mu^{\prime}}\leftarrow\theta^{\mu}.
3  Initialize the replay buffer D.
4  for episode = 1:M do
5      Receive initial observation state \mathbf{s}_{1}.
6      for t = 1:T do
7          Select action \mathbf{a}_{t} using (14).
8          Execute action \mathbf{a}_{t} and observe reward r_{t} and new state \mathbf{s}_{t+1}.
9          Store transition (\mathbf{s}_{t},\mathbf{a}_{t},r_{t},\mathbf{s}_{t+1}) in D.
10         Sample a random minibatch of N transitions (\mathbf{s}_{i},\mathbf{a}_{i},r_{i},\mathbf{s}_{i+1}) from D.
11         Formulate y_{i} using (10).
12         Update the critic by minimizing the loss function \mathcal{L} in (9).
13         Update the actor using the policy gradient in (11).
14         Update the target networks via (12) and (13).
15     end for
16 end for
Algorithm 1: DDPG-Based UAV Deployment
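To make Algorithm 1 concrete, the following is a minimal PyTorch sketch of the action selection in (14) and one update step covering (9)-(13); the layer sizes and learning rates mirror Table II, while the optimizer choice (Adam), discount factor, noise level, and batch size are assumptions for illustration rather than the exact training configuration.

```python
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, s_dim=2, a_dim=2, hidden=20, a_max=1.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, a_dim), nn.Tanh())
        self.a_max = a_max                         # bounds actions to [-a_max, a_max]
    def forward(self, s):
        return self.a_max * self.net(s)

class Critic(nn.Module):
    def __init__(self, s_dim=2, a_dim=2, hidden=20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))   # linear output: Q-value
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
target_actor, target_critic = Actor(), Critic()
target_actor.load_state_dict(actor.state_dict())         # step 2 of Algorithm 1
target_critic.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=2e-3)
buffer, gamma, tau = deque(maxlen=60000), 0.99, 0.01

def select_action(state, sigma=0.1):
    """Noisy action a_t = mu(s_t) + N, as in (14); sigma is an assumed noise level."""
    with torch.no_grad():
        a = actor(torch.as_tensor(state, dtype=torch.float32))
    return (a + sigma * torch.randn_like(a)).clamp(-actor.a_max, actor.a_max).numpy()

def update(batch_size=64):
    """One DDPG update: critic loss (9)-(10), actor gradient (11), soft updates (12)-(13)."""
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    s, a, r, s2 = (torch.as_tensor(np.asarray(x), dtype=torch.float32) for x in zip(*batch))
    with torch.no_grad():
        y = r.unsqueeze(-1) + gamma * target_critic(s2, target_actor(s2))   # targets (10)
    critic_loss = nn.functional.mse_loss(critic(s, a), y)                   # MSE loss (9)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    actor_loss = -critic(s, actor(s)).mean()                                # ascent direction of (11)
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    for tgt, src in ((target_actor, actor), (target_critic, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)                       # soft updates (12)-(13)
```

In the full loop of Algorithm 1, select_action and update would be invoked once per time step after the observed transition is appended to the buffer.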

III-C DDPG-Based UAV Deployment

To apply the DDPG algorithm to our problem, we need to define appropriate states, actions, and rewards for the agent. Moreover, the design of the actor and critic network can significantly influence the performance of the algorithm. Thus, first, we introduce the suitable state, action, and reward for UAV deployment, and then, we discuss the configuration of the actor and critic network.

III-C1 States

At each time step t, the UAV agent will observe the state \mathbf{s}_{t} as follows:

\mathbf{s}_{t}=[\hat{x}_{u,t},\hat{y}_{u,t}]^{T}\in\mathbb{R}^{2},  (15)

where \hat{x}_{u,t}=\frac{x_{u,t}}{x_{\text{max}}} and \hat{y}_{u,t}=\frac{y_{u,t}}{y_{\text{max}}} denote the 2D normalized location of the UAV at time step t. Here, we assume the UAV is deployed at a fixed height z_{u,t}. For simplicity, we consider a scenario with a fixed UAV height; however, the proposed DDPG-based solution can also be applied to 3D UAV deployment, which is left as our future work.

III-C2 Actions

We consider the action \mathbf{a}_{t} at time step t for the UAV agent as follows:

\mathbf{a}_{t}=[a_{t,x},a_{t,y}]^{T}\in\mathbb{R}^{2},\quad a_{t,x}\in[-a_{x,\text{max}},a_{x,\text{max}}],\ a_{t,y}\in[-a_{y,\text{max}},a_{y,\text{max}}].  (16)

Here, a_{x,\text{max}} (a_{y,\text{max}}) represents the maximum movement step for the UAV on the x-axis (y-axis). When a_{t,x} (a_{t,y}) is positive, the movement is to the East (North), and when a_{t,x} (a_{t,y}) is negative, the movement is to the West (South).

III-C3 Reward

The reward function effectively communicates the objectives of the task to the agent. Designing the reward function correctly is pivotal since it not only shapes the learning trajectory but also influences the convergence speed and the overall effectiveness of the learned policy. Considering this, we define the reward function r_{t} at time step t as:

r_{t}=\begin{cases}\mathrm{R}_{2} & \text{for}\ \mathrm{R}_{2}\geq\eta_{0}\\ -1 & \text{for}\ \mathrm{R}_{2}<\eta_{0}\\ -5 & \text{for}\ x_{u,t}>x_{\text{max}}\ \text{or}\ y_{u,t}>y_{\text{max}}\\ -5 & \text{for}\ x_{u,t}<x_{\text{min}}\ \text{or}\ y_{u,t}<y_{\text{min}}\end{cases}  (17)

where \mathrm{R}_{2} denotes the achievable rate of the second link as given in (4), \eta_{0} is an adjustable threshold, x_{\text{max}} (x_{\text{min}}) is the maximum (minimum) allowable position on the x-axis, and y_{\text{max}} (y_{\text{min}}) is the maximum (minimum) permitted position on the y-axis.
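A minimal sketch of one environment step is given below, assuming the 100 m x 100 m area of Table I, an illustrative threshold \eta_{0}, and a placeholder achievable_rate function standing in for the evaluation of \mathrm{R}_{2} at the candidate UAV position; whether the UAV is clipped back inside the area after an out-of-bounds move is also an assumption.

```python
import numpy as np

X_MIN, X_MAX, Y_MIN, Y_MAX = 0.0, 100.0, 0.0, 100.0
ETA_0 = 5.0                                   # adjustable rate threshold (assumed value)

def step(uav_xy, action, achievable_rate):
    """Apply action (a_x, a_y) in meters and return the normalized next state and reward (17)."""
    x, y = uav_xy[0] + action[0], uav_xy[1] + action[1]
    if x > X_MAX or y > Y_MAX or x < X_MIN or y < Y_MIN:
        reward = -5.0                         # out-of-bounds penalty
        x, y = np.clip(x, X_MIN, X_MAX), np.clip(y, Y_MIN, Y_MAX)
    else:
        r2 = achievable_rate(x, y)            # R_2 at the candidate position, per (4)
        reward = r2 if r2 >= ETA_0 else -1.0
    next_state = np.array([x / X_MAX, y / Y_MAX])   # normalized state, per (15)
    return next_state, reward

# Placeholder rate model peaking near an assumed user cluster, only for demonstration.
dummy_rate = lambda x, y: 20.0 - 0.1 * abs(x - 85.0) - 0.1 * abs(y - 75.0)
print(step(np.array([50.0, 50.0]), np.array([1.0, -1.0]), dummy_rate))
```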

III-C4 Actor and Critic Networks

We employ a fully connected deep neural network (DNN) architecture with two hidden layers, as depicted in Fig. 2, for both the (target) actor and (target) critic networks. Here, the actor network predicts suitable actions as described in (16) based on the input states given in (15), whereas the critic network determines the Q-value of the input state-action pair. We consider L_{i}^{a} neurons in the i^{th} hidden layer of the actor network with i=\{1,2\}. Similarly, we have L_{j}^{c} neurons in the j^{th} hidden layer of the critic network with j=\{1,2\}.

TABLE I: Simulation Parameters
Number of antennas (N_{T}, N_{t}, N_{r}): 144
BS height: 10 m; UAV height: 20 m
UAV x-axis range [x_{\text{min}}, x_{\text{max}}]: [0, 100] m; UAV y-axis range [y_{\text{min}}, y_{\text{max}}]: [0, 100] m
UAV x-axis movement [a_{x,\text{min}}, a_{x,\text{max}}]: [-1, 1] m; UAV y-axis movement [a_{y,\text{min}}, a_{y,\text{max}}]: [-1, 1] m
Number of user groups: G = 1; Number of users per group: K_{g} = K/G
Number of paths: L = 10; Path loss exponent: 3.6
Noise PSD: -174 dBm/Hz; Reference path loss \alpha: 61.34 dB
Frequency: 28 GHz; Channel bandwidth: 100 MHz
Mean AAoD/AAoA (1^{st} link): 120^{\circ}; Mean AAoD (2^{nd} link): \phi_{g} = 21^{\circ} + 120^{\circ}(g-1)
Mean EAoD/EAoA (1^{st} link): 60^{\circ}; Mean EAoD (2^{nd} link): \theta_{g} = 60^{\circ}
Azimuth/elevation angle spread: \pm 10^{\circ}; Number of network realizations: 2000
Figure 2: (Target) actor and (target) critic DNN architecture

To perform non-linear operations, we utilize the rectified linear unit (ReLU) as the activation function in the hidden layers of both the actor and critic networks (i.e., f_{r}(z)=\max(0,z)). To ensure that the actions predicted by the actor network lie within [-a_{x,\text{max}},a_{x,\text{max}}] ([-a_{y,\text{max}},a_{y,\text{max}}]), we must use a function that can output both negative and positive values. Thus, we apply the tanh activation function in the output layer of the actor network (i.e., f_{t}(z)=\frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}). However, since the critic network estimates the Q-value function, its output range is large or unbounded. Therefore, we use the linear activation function for the output layer of the critic network (i.e., f_{l}(z)=z). The summary of the DDPG algorithm is outlined in Algorithm 1.

IV Illustrative results

In this section, we present illustrative results on the performance of the proposed DDPG-based UAV deployment (DDPG-UD) algorithm in different scenarios. For benchmark comparison, we compare the proposed DDPG-UD with the following solutions: 1) particle swarm optimization (PSO)-based UAV deployment (PSO-UD); and 2) deep learning (DL)-based UAV deployment (DL-UD), i.e., the supervised learning (SL) approach of [21]. Table I outlines the simulation setup based on the 3D micro-cell scenario [7], whereas the hyper-parameters of the DDPG algorithm are given in Table II. In the following, we compare the performance in both static (fixed user locations) and dynamic (changing user locations) environments in MU-mMIMO IoT systems.

TABLE II: Networks’ Parameters
(Target) Actor Network Architecture
Input Shape L0a=2L_{0}^{a}=2 1st1^{st} hidden layer L1a=20L_{1}^{a}=20
2nd2^{nd} hidden layer L2a=20L_{2}^{a}=20 Output layer L3a=2L_{3}^{a}=2
(Target) Critic Network Architecture
Input Shape L0c=4L_{0}^{c}=4 1st1^{st} hidden layer L1c=20L_{1}^{c}=20
2nd2^{nd} hidden layer L2c=20L_{2}^{c}=20 Output layer L3c=1L_{3}^{c}=1
Network Parameters
Reply buffer size 60000 Critic learning rate 0.002
Actor learning rate 0.001 Target networks learning rate 0.01

IV-A Static Environment (Fixed Users Location)

In this section, we consider IoT users with fixed locations and discuss two scenarios: 1) narrow-range user distribution; and 2) wide-range user distribution. For the narrow-range user distribution, we consider that the BS is located at (x_{b},y_{b},z_{b})=(0,0,10), the UAV initial position is (x_{u},y_{u},z_{u})=(50,50,20), and K=4 IoT users are distributed randomly at a far distance from the BS (i.e., (x_{k},y_{k})\in[90,100]). The UAV starts at its initial location (50,50,20) at the beginning of each episode and then explores the environment during the time steps. For the wide-range user distribution, we consider the same BS location and initial UAV position, while assuming that K=4 IoT users are randomly scattered with (x_{k},y_{k})\in[50,100]. Fig. 3 shows the achieved rates for DDPG-UD, PSO-UD, DL-UD, and fixed deployment (FD) (i.e., no optimization). Numerical results reveal that the proposed DDPG-UD achieves 98.63% of the PSO-UD performance for the narrow-range user distribution while providing 16 times higher AR than FD. Furthermore, DDPG-UD enhances the performance of PSO-UD by 2.23% for the wide-range user distribution while providing 3.14 times higher AR than FD. Fig. 3 also demonstrates that although DL-UD achieves an acceptable performance for the narrow-range user distribution, reaching 89.62% of the PSO-UD AR, it fails to find the optimal location for the wide-range user distribution, attaining only 36.94% of the DDPG-UD AR. This means that for more complex user distributions, DL-UD is not a promising solution. In the next step, we consider that IoT users may change their locations during the observation.

IV-B Dynamic Environment (Changing Users Locations)

In this section, we consider a more practical scenario where IoT users can change their locations during the training. This scenario represents dynamic, real-world environments. We assume that the BS is located at (x_{b},y_{b},z_{b})=(0,0,10). Then we set the IoT users' locations randomly according to six different distributions l\in\{l_{1},l_{2},\ldots,l_{6}\} as follows:

(x_{k,l},y_{k,l})=\begin{cases}x_{k,l}\in[60,70],\; y_{k,l}\in[60,70] & \text{for}\ l_{1}\\ x_{k,l}\in[60,70],\; y_{k,l}\in[70,80] & \text{for}\ l_{2}\\ x_{k,l}\in[70,80],\; y_{k,l}\in[80,90] & \text{for}\ l_{3}\\ x_{k,l}\in[80,90],\; y_{k,l}\in[80,90] & \text{for}\ l_{4}\\ x_{k,l}\in[80,90],\; y_{k,l}\in[70,80] & \text{for}\ l_{5}\\ x_{k,l}\in[80,90],\; y_{k,l}\in[60,70] & \text{for}\ l_{6}\end{cases}  (18)

where x_{k,l} and y_{k,l} denote the x-axis and y-axis ranges of the k^{th} IoT user under the l^{th} distribution, respectively. Furthermore, the UAV starts at its initial location (x_{u},y_{u},z_{u})=(50,50,20) and finds the optimal deployment \mathbf{x}_{o}^{(1)}=\{x_{o}^{(1)},y_{o}^{(1)}\} for l_{1}, which is then used as its initial location for the next user distribution (i.e., l_{2}). Fig. 4 shows the accumulated reward and average accumulated reward of the DDPG agent over the six different user distributions. This figure shows that after the first user distribution, the DDPG agent maintains the reward and finds the optimal location in fewer episodes. The reason is that the networks pre-trained on previous user locations contain information about the environment, thus helping the DDPG agent find the optimal UAV location for changing user locations in a shorter time. Fig. 5 shows the achieved rates for DDPG-UD, PSO-UD, DL-UD, and FD, which shows that DDPG-UD achieves a performance close to PSO-UD. For instance, DDPG-UD accomplishes 99.62% of the PSO-UD AR for l_{3}, and it provides 18.54 bps/Hz AR at l_{6}, achieving 99.42% of the PSO-UD performance. Additionally, the capacity is improved by 36.99% and 16.73% over FD for l_{3} and l_{6}, respectively. Fig. 5 also supports the earlier deduction that DL-UD is not a promising solution for complex scenarios, achieving only 63.84% and 66.65% of the PSO-UD performance for l_{3} and l_{6}, respectively.

Figure 3: Achieved Rates versus user distribution
Figure 4: Accumulated reward vs episode for DDPG
Figure 5: Achieved rates in different locations

Fig. 6 displays the runtime comparison between DDPG-UD and PSO-UD. Here, we provide the runtime for a single user distribution (i.e., l=l_{1}), three different user distributions (l\in\{l_{1},l_{2},l_{3}\}), and six distinct user distributions (l\in\{l_{1},\cdots,l_{6}\}). This figure shows that although DDPG-UD takes 18.04% more time than PSO-UD to find the optimal location for a single user distribution, for multiple user distributions it takes only 76.45% and 68.50% of the PSO-UD runtime, respectively. This means that as the number of user locations increases, DDPG-UD takes less time to find the optimal location than PSO-UD, which makes DDPG-UD an efficient solution for dynamic and fast-changing environments in MU-mMIMO IoT systems.

V Conclusion

In this work, a novel DDPG-based UAV deployment (DDPG-UD) and hybrid beamforming (HBF) technique has been proposed for AR maximization in dynamic MU-mMIMO IoT systems. First, we introduced the HBF design for both the base station (BS) and the UAV. Afterward, we applied the deep deterministic policy gradient (DDPG), a reinforcement learning approach, to UAV deployment in dynamic environments. Illustrative results show that the proposed DDPG-UD closely approaches the rate achieved by particle swarm optimization (PSO)-based UAV deployment (PSO-UD), while reducing the runtime by 31.5%, which makes DDPG-UD a more appropriate solution for real-world dynamic applications in UAV-assisted mMIMO IoT systems.

Figure 6: Runtime for different numbers of locations

References

  • [1] Y. Zeng et al., “Wireless communications with unmanned aerial vehicles: Opportunities and challenges,” IEEE Commun. Mag., vol. 54, no. 5, pp. 36-42, 2016.
  • [2] M. Mozaffari et al., “A tutorial on UAVs for wireless networks: Applications, challenges, and open problems,” IEEE Commun. Surveys Tuts., vol. 21, no. 3, pp. 2334–2360, 2019.
  • [3] Z. Xiao, P. Xia, and X.-G. Xia, “Enabling UAV cellular with millimeterwave communication: Potentials and approaches,” IEEE Commun. Mag., vol. 54, no. 5, pp. 66–73, 2016.
  • [4] W. Roh et al., “Millimeter-wave beamforming as an enabling technology for 5G cellular communications: theoretical feasibility and prototype results,” IEEE Commun. Mag., vol. 52, no. 2, pp. 106–113, 2014.
  • [5] M. Mahmood et al., “2D antenna array structures for hybrid massive MIMO precoding,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), 2020, pp. 1-6.
  • [6] M. Mahmood, A. Koc, and T. Le-Ngoc, “3-D antenna array structures for millimeter wave multi-user massive MIMO hybrid precoder design: A performance comparison,” IEEE Commun. Lett., vol. 26, no. 6, pp. 1393–1397, 2022.
  • [7] M. Mahmood et al., “Energy-efficient MU-massive-MIMO hybrid precoder design: Low-resolution phase shifters and digital-to-analog converters for 2D antenna array structures,” IEEE Open J. Commun. Soc., vol. 2, no. 5, pp. 1842-1861, 2021.
  • [8] M. Mahmood et al., “PSO-Based Joint UAV Positioning and Hybrid Precoding in UAV-Assisted Massive MIMO Systems” in Proc. IEEE 96th Veh. Technol. Conf. (VTC-Fall), 2022, pp. 1-6.
  • [9] M. Mahmood et al., “Spherical Array-Based Joint Beamforming and UAV Positioning in Massive MIMO Systems” in Proc. IEEE 97th Veh. Technol. Conf. (VTC-Spring), 2023, pp. 1-5.
  • [10] X. Xi et al., “Joint user association and UAV location optimization for UAV-aided communications,” IEEE Wireless Commun. Lett., vol. 8, no. 6, pp. 1688–1691, 2019.
  • [11] M. Alzenad et al., “3-D placement of an unmanned aerial vehicle base station (UAV-BS) for energy-efficient maximal coverage,” IEEE Wireless Commun. Lett., vol. 6, no. 4, pp. 434–437, 2017.
  • [12] L. Zhu et al., “Multi-UAV aided millimeter-wave networks: Positioning, clustering, and beamforming,” IEEE Trans. Wireless Commun., vol. 21, no. 7, pp. 4637–4653, 2022.
  • [13] Z. Yang et al., “Joint altitude, beamwidth, location, and bandwidth optimization for UAV-enabled communications,” IEEE Commun. Lett., vol. 22, no. 8, pp. 1716–1719, 2018.
  • [14] NC. Luong et al., “Applications of deep reinforcement learning in communications and networking: A survey,” IEEE Commun. Surveys Tuts., vol. 21, no. 4, pp. 3133-3174, 2019.
  • [15] A. Feriani et al., “Single and multi-agent deep reinforcement learning for AI-enabled wireless networks: A tutorial,” IEEE Commun. Surveys Tuts., vol. 23, no. 2, pp. 1226-1252, 2021.
  • [16] PS. Bithas et al., “A survey on machine-learning techniques for UAV-based communications,” Sensors., vol. 19, no. 23, pp. 5170, 2019.
  • [17] TP. Lillicrap et al., “Continuous control with deep reinforcement learning,” arXiv:1509.02971, 2015.
  • [18] H. Huang, Y. Yang, H. Wang, Z. Ding, H. Sari and F. Adachi, ”Deep Reinforcement Learning for UAV Navigation Through Massive MIMO Technique,” IEEE Transactions on Vehicular Technology, vol. 69, no. 1, pp. 1117–1121, Jan. 2020, doi: 10.1109/TVT.2019.2952549.
  • [19] O. Bouhamed, H. Ghazzai, H. Besbes and Y. Massoud, ”Autonomous UAV Navigation: A DDPG-Based Deep Reinforcement Learning Approach,” 2020 IEEE International Symposium on Circuits and Systems (ISCAS), Seville, Spain, 2020, pp. 1-5, doi: 10.1109/ISCAS45731.2020.9181245.
  • [20] C. Wang, J. Wang, J. Wang and X. Zhang, ”Deep-Reinforcement-Learning-Based Autonomous UAV Navigation With Sparse Rewards,” in IEEE Internet of Things Journal, vol. 7, no. 7, pp. 6180-6190, July 2020, doi: 10.1109/JIOT.2020.2973193.
  • [21] M. Mahmood, M. Ghadaksaz, A. Koc and T. Le-Ngoc, ”Deep Learning Meets Swarm Intelligence for UAV-Assisted IoT Coverage in Massive MIMO,” in IEEE Internet of Things Journal, vol. 11, no. 5, pp. 7679-7696, 1 March1, 2024, doi: 10.1109/JIOT.2023.3318529
  • [22] R. Méndez-Rial et al., “Hybrid MIMO architectures for millimeter wave communications: Phase shifters or switches?” IEEE Access, vol. 4, pp. 247–267, 2016.