
AoI and Energy Consumption Oriented Dynamic Status Updating in Caching Enabled IoT Networks

Chao Xu†,§, Xijun Wang∗,♢, Howard H. Yang, Hongguang Sun†,§, and Tony Q. S. Quek This paper is supported by National Natural Science Foundation of China (61701372 and 61701372), Talents Special Foundation of Northwest A & F University (Z111021801 and Z111021801), Key Research and Development Program of Shaanxi (2019ZDLNY07-02-01), Fundamental Research Funds for the Central Universities of China (19lgpy79), and Research Fund of the Key Laboratory of Wireless Sensor Network & Communication (20190912). School of Information Engineering, Northwest A&F University, Yangling, Shaanxi, China
§Key Laboratory for Agricultural Internet of Things, Ministry of Agriculture and Rural Affair, Yangling, China
School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou, China
Key Laboratory of Wireless Sensor Network & Communication,
Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, China
Information System Technology and Design Pillar, Singapore University of Technology and Design, Singapore
Abstract

Caching has been regarded as a promising technique to alleviate the energy consumption of sensors in Internet of Things (IoT) networks by responding to users’ requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching enabled IoT networks, it is essential to develop dynamic status update strategies that strike a balance between the information freshness experienced by users and the energy consumed by the sensor, which, however, has not been well addressed. In this paper, we first depict the evolution of information freshness, in terms of age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long-term accumulative cost, which jointly considers the users’ AoI and the sensor’s energy consumption. To solve this problem, the status updating procedure is cast as a Markov Decision Process (MDP), and a model-free reinforcement learning algorithm is proposed to address the challenge posed by the unknown dynamics of the formulated MDP. Finally, simulations are conducted to validate the convergence of our proposed algorithm and its effectiveness compared with the zero-wait baseline policy.

Index Terms:
Internet of Things, AoI, reinforcement learning, dynamic status updating.

I Introduction

Acting as a critical and integrated infrastructure, the Internet of Things (IoT) enables ubiquitous connections for billions of things in our physical world, ranging from tiny, resource-constrained sensors to more powerful smartphones and networked vehicles [1]. In general, the sensors are powered by batteries with limited capacities rather than a fixed power supply. Thus, to exploit the benefits promised by IoT networks, it is essential to address the energy consumption issue faced by sensors. Recently, caching has been proposed as a promising solution to lower the energy consumption of sensors by reducing the frequency of environmental sensing and data transmissions [2, 3, 4]. Particularly, by caching the data packets generated by sensors in the edge caching node (ECN), e.g., an access point (AP) or mobile edge server, users can retrieve the cached data directly from the ECN instead of frequently activating sensors for status sensing and data transmission, thereby significantly lowering their energy consumption.

Caching multimedia contents at the edge of wireless networks has been regarded as a promising technology for the 5th Generation (5G) wireless networks and hence, has been well studied in existing work [5, 6, 7, 8]. However, compared with multimedia contents (e.g., music, video, etc.) in traditional wireless networks, the data packets in IoT networks have two distinct features: 1) The sizes of data packets generated by IoT applications are generally much smaller than those of multimedia contents. Therefore, for IoT networks, the storage capacity of each ECN is large enough to store the latest status updates generated by all the sensors. 2) For many real-time IoT applications, the staleness of information obtained by users may significantly deteriorate the accuracy and reliability of their subsequent decisions. Therefore, the main concern for edge caching enabled IoT networks is how to properly update the cached data to simultaneously lower the energy consumption of sensors and improve the information freshness at users.

In order to quantify information freshness, the age of information (AoI) has been proposed, which measures the time elapsed since the latest received packet was generated by the sensor [9, 10, 11]. Based on this metric, some recent studies [12, 13] have begun to design status update strategies that optimize the AoI in caching enabled IoT networks. Specifically, the authors in [12] proposed a status update algorithm to minimize the popularity-weighted average of the AoI values at the cache, each of which is associated with one sensor, where the users’ popularity for each sensor is assumed to be known. Study [13] further extended [12] by considering the relation between the time consumed by one update and the corresponding update interval. However, in these studies, the AoI is evaluated from the ECN’s (e.g., AP’s) perspective instead of from the perspective of individual users. In fact, users are the real data consumers and final decision makers and hence, it is more reasonable to evaluate and optimize the AoI at the users in caching enabled IoT networks. Besides, by focusing on the AoI at the users, it is possible to decrease the energy consumption of each sensor by avoiding useless status updates and, meanwhile, to reduce the AoI experienced by users. This is because a user can only observe the information freshness of a cached data packet after requesting and receiving it from the ECN.

In this paper, we consider a time-slotted IoT network consisting of an ECN, a sensor, and multiple users, where the ECN stores the latest data packet generated by the sensor. Here, each time slot is divided into two phases, termed the data delivery phase (DDP) and the status update phase (SUP), respectively. At the beginning of each time slot, the ECN obtains the data query profile of users and responds to the requests with its locally cached data during the DDP. Then, the ECN decides whether to ask the sensor to update its status and, if so, the update is conducted during the SUP. To strike a balance between the average AoI experienced by users and the energy consumed by the sensor, we first depict the evolution of the AoI at both the ECN and each user and then formulate a dynamic status update optimization problem to minimize the expectation of a long-term accumulative cost, jointly considering the effects of the AoI and the energy consumption. To solve this problem, we cast the corresponding status updating procedure into a Markov Decision Process (MDP), and design an Energy consumption and AoI oriented dynamic data Update (EAU) algorithm to solve the originally formulated problem by resorting to reinforcement learning (RL). The proposed algorithm does not require any prior knowledge of the dynamics of the MDP, and its effectiveness is further verified via simulation results.

The organization of this paper is as follows. In Section II, the description of the system model and the formulation of the dynamic status update optimization problem are presented. In Section III, we cast the dynamic status updating procedure as an MDP and then, develop a model-free RL algorithm to solve it. Simulation results are presented to show the effectiveness of our proposed scheme in Section IV and finally, conclusions are drawn in Section V.

II System model and problem formulation

II-A Network model

We consider an IoT network consisting of $N$ users, one ECN, and one sensor, which serves users with time-sensitive information. The user set is denoted by $\mathcal{N}=\{1,2,\cdots,N\}$. We assume a time-slotted system as illustrated in Fig. 1, where each slot is divided into two phases, i.e., the DDP and the SUP, whose time durations are fixed and denoted by $D_{d}$ and $D_{u}$, respectively. In each slot $t$, users may request the cached data packet from the ECN, and the corresponding query profile can be expressed as $\mathbf{r}(t)=\left(r_{1}(t),r_{2}(t),\cdots,r_{N}(t)\right)$, where $r_{n}(t)\in\{0,1\},\forall n\in\mathcal{N}$, with $r_{n}(t)=1$ if user $n$ requests the data, and $r_{n}(t)=0$ otherwise. In this paper, we consider that the ECN obtains the data query profile $\mathbf{r}(t)$ at the beginning of each time slot $t$ and has no prior knowledge of the users' random request patterns.
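As a concrete illustration of the query profile $\mathbf{r}(t)$, the following minimal Python sketch draws one slot's requests; the independent Bernoulli arrivals and the parameter name `p_request` are illustrative assumptions borrowed from the simulation setup in Section IV, not part of the model itself.

```python
import numpy as np

def sample_request_profile(n_users, p_request, rng):
    """Draw r(t) = (r_1(t), ..., r_N(t)) for one slot, with r_n(t) = 1
    meaning user n requests the cached packet in this slot."""
    return (rng.random(n_users) < p_request).astype(int)

rng = np.random.default_rng(0)
r_t = sample_request_profile(n_users=3, p_request=0.6, rng=rng)  # a length-3 0/1 vector
```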

Figure 1: Illustration of the time slot structure.

As shown in Fig. 1, the ECN first transmits the required data packets to the corresponding users during the DDP. Then, the ECN decides whether or not to ask the sensor to update its status and, if so, receives the status update during the SUP. Let $A(t)\in\{0,1\}$ denote the ECN's update decision made in each time slot $t$, i.e., $A(t)=1$ if the sensor is asked to sense the underlying environment and update the status, and $A(t)=0$ otherwise. We consider that all data packets requested by users can be successfully transmitted from the ECN. However, when an update is transmitted from the sensor to the ECN, there is a potential update failure due to the limited transmission power of the sensor, which is presented in detail in the following subsection.

II-B Status updating model

During the SUP in time slot $t$, if the sensor is asked to update its status, it has to sense the underlying environment and transmit the information packet to the ECN. The energy consumed by the sensor for each environmental sensing is assumed to be constant and denoted by $E_{s}$. Besides, we assume the time spent on environmental sensing is negligible and hence, the transmission time of each update packet is $D_{u}$. The spectral bandwidth available for the data update is $B$ and the transmit power of the sensor is fixed as $p$. Accordingly, the overall energy consumption of one status update is $E=E_{s}+pD_{u}$. Let $g(t)$ denote the channel power gain from the sensor to the ECN in time slot $t$, and $\delta^{2}$ the AWGN power density at the ECN. It is assumed that the channel between the sensor and the ECN follows quasi-static Rayleigh fading with mean power gain $\bar{g}$, i.e., its power gain remains the same over one SUP and changes independently across time slots.

The size of each update packet is considered to be constant and denoted by $F$. To complete the data transmission in one SUP, the required transmission rate is set to $R=\frac{F}{D_{u}}$. In this light, if the sensor is asked to transmit its update packet, a transmission failure occurs when the required SNR threshold cannot be reached. Particularly, according to Shannon's formula, to meet the transmission rate requirement $R$, the SNR threshold should be set as

\gamma^{T}=2^{\frac{R}{B}}-1. \qquad (1)

Furthermore, the update failure probability $P_{f}$ for one update transmission, if there is any, can be expressed as [14, 15]

P_{f}=\mathsf{P}\left(\gamma(t)<\gamma^{T}\right)=1-\exp\left(-\frac{\gamma^{T}}{\bar{\gamma}}\right) \qquad (2)

where $\gamma(t)$ denotes the SNR of the corresponding transmission, and $\bar{\gamma}=\frac{p\bar{g}}{B\delta^{2}}$ the average SNR.

For ease of expression, we use $Z(t)\in\{0,1\}$ to denote whether the requested update is successfully transmitted to the ECN, i.e., $Z(t)=1$ if the update is successfully delivered and $Z(t)=0$ otherwise. Since the sensor only generates and transmits the update packet when it is asked by the ECN, we further have $Z(t)\leq A(t)$ in each time slot $t$.
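To make the update-failure model concrete, the following minimal Python sketch evaluates Eqs. (1)-(2) and samples the delivery indicator $Z(t)$; the function and parameter names are ours for illustration, and all quantities are assumed to be in linear (not dB) units.

```python
import math
import random

def update_failure_prob(F_bits, D_u, B_hz, p_watt, g_mean, noise_psd):
    """Failure probability of a single status update, per Eqs. (1)-(2)."""
    R = F_bits / D_u                                  # rate needed to finish within one SUP
    gamma_T = 2.0 ** (R / B_hz) - 1.0                 # Eq. (1): SNR threshold from Shannon's formula
    gamma_bar = p_watt * g_mean / (B_hz * noise_psd)  # average received SNR
    return 1.0 - math.exp(-gamma_T / gamma_bar)       # Eq. (2): Rayleigh-fading outage

def sample_Z(A_t, P_f, rng=random):
    """Z(t) = 1 only if an update was requested (A(t) = 1) and not lost."""
    return int(A_t == 1 and rng.random() >= P_f)
```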

II-C Performance metric and problem formulation

In this paper, we consider the AoI at both the users and the ECN just before the decision moment in each time slot, i.e., the beginning of the status update phase. Before formally depicting the evolution of the AoI at each user, we first give the dynamics of the AoI of the stored data packet from the ECN's perspective. Particularly, we denote the AoI at the ECN in slot $t$ by $\Delta_{0}(t)$, whose evolution can be expressed as

\Delta_{0}(t)=\begin{cases}D_{u}+D_{d},&\textrm{if}\ Z(t-1)=1\\ \Delta_{0}(t-1)+D_{u}+D_{d},&\textrm{otherwise}\end{cases} \qquad (3)

where $Z(t-1)=1$ means that in the previous slot $t-1$ an update packet was successfully delivered to the ECN.

Similarly, let $\Delta_{n}(t)$ denote the AoI at each user $n$ in slot $t$. Then, its dynamics can be expressed as

\Delta_{n}(t)=\begin{cases}D_{u}+D_{d},&\textrm{if}\ r_{n}(t)Z(t-1)=1\\ \Delta_{0}(t-1)+D_{u}+D_{d},&\textrm{if}\ r_{n}(t)=1\ \&\ Z(t-1)=0\\ \Delta_{n}(t-1)+D_{u}+D_{d},&\textrm{otherwise}\end{cases}
\overset{(a)}{=}\begin{cases}\Delta_{0}(t),&\textrm{if}\ r_{n}(t)=1\\ \Delta_{n}(t-1)+D_{u}+D_{d},&\textrm{otherwise}\end{cases} \qquad (4)

where the product $r_{n}(t)Z(t-1)=1$ means that the packet required by user $n$ was successfully updated at the ECN in the previous slot $t-1$, and (a) holds because of Eq. (3). Without loss of generality, we initialize $\Delta_{0}(1)=\Delta_{n}(1)=D_{u}+D_{d},\forall n\in\mathcal{N}$. Based on the above analyses, we note that, even when obtaining data from the same ECN, different users may experience different AoIs, which would be ignored if we only considered the dynamics of the AoI at the ECN.
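The per-slot AoI dynamics of Eqs. (3)-(4) can be simulated with a few lines of Python; the sketch below is ours for illustration and leaves out the truncation at $\Delta_{\max}$ that is introduced later for the MDP state space.

```python
def step_aoi(delta_ecn, delta_users, r_t, Z_prev, D_u, D_d):
    """Advance the AoI at the ECN and at every user by one slot.

    delta_ecn  : Delta_0(t-1), AoI at the ECN in the previous slot
    delta_users: [Delta_1(t-1), ..., Delta_N(t-1)]
    r_t        : request profile r(t) of the current slot
    Z_prev     : Z(t-1), whether last slot's update reached the ECN
    """
    D = D_u + D_d
    # Eq. (3): the ECN's AoI resets if the previous update was delivered.
    new_ecn = D if Z_prev == 1 else delta_ecn + D
    # Eq. (4), simplified form: a requesting user inherits Delta_0(t),
    # every other user simply ages by one slot duration D.
    new_users = [new_ecn if r_n == 1 else d_n + D
                 for r_n, d_n in zip(r_t, delta_users)]
    return new_ecn, new_users
```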

To strike a balance between the average AoI experienced by users and the energy consumed by the sensor, we define the overall cost associated with the update decision made by the ECN in each slot $t$ as

L(t)=\beta_{1}\sum\nolimits_{n=1}^{N}\omega_{n}\Delta_{n}(t)+\beta_{2}A(t)E \qquad (5)

where $\omega_{n}$ denotes the weight allocated to user $n$, i.e., $\omega_{n}\in(0,1),\forall n\in\mathcal{N}$ and $\sum\nolimits_{n=1}^{N}\omega_{n}=1$, $E$ represents the energy consumed to complete one status update, and $\beta_{1}$ and $\beta_{2}$ are parameters used to nondimensionalize the equation and, meanwhile, to trade off the experienced AoI against the consumed energy in the network.

In this work, we aim at designing a dynamic status updating strategy to minimize the expectation of a long term accumulative cost, i.e.,

\textbf{P}:\quad \min\limits_{\mathbf{A}}\ \lim\limits_{T\to\infty}\mathbb{E}\big[\sum\nolimits_{t=1}^{T}\lambda^{t-1}L(t)\big] \qquad (6)
\textrm{s.t.}\quad \mathbf{A}=\left(A(1),A(2),\cdots,A(T)\right) \qquad (7)

where $\mathbf{A}$ denotes the sequence of update decisions made by the ECN from slot $1$ to $T$, and $\lambda$ the discount rate introduced to determine the importance of the present cost and, meanwhile, to guarantee that the long-term accumulative cost is finite. The formulated dynamic optimization problem is solved in the following section.
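As a sanity check on the objective, the following minimal Python sketch evaluates the per-slot cost of Eq. (5) and a finite-horizon estimate of the discounted sum in Eq. (6) for a given decision sequence; the function names and the finite horizon are illustrative assumptions.

```python
def slot_cost(delta_users, weights, A_t, E, beta1, beta2):
    """Per-slot cost L(t) of Eq. (5): weighted AoI plus update energy."""
    weighted_aoi = sum(w * d for w, d in zip(weights, delta_users))
    return beta1 * weighted_aoi + beta2 * A_t * E

def discounted_cost(per_slot_costs, lam):
    """Finite-horizon estimate of Eq. (6): sum over t of lam^(t-1) * L(t)."""
    return sum(lam ** t > 0 and lam ** t * L for t, L in enumerate(per_slot_costs)) if False else \
        sum(lam ** t * L for t, L in enumerate(per_slot_costs))
```

Here `per_slot_costs` is the list $[L(1),\cdots,L(T)]$ produced by a candidate decision sequence, so `discounted_cost` returns $\sum_{t=1}^{T}\lambda^{t-1}L(t)$.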

III Reinforcement learning algorithm design

In this section, we first formulate the dynamic status updating procedure as an MDP and then develop a model-free RL algorithm, with which the challenge posed by the unknown transition probabilities of the formulated MDP can be addressed.

III-A MDP formulation

The concerned dynamic status updating can be formulated as an MDP consisting of a tuple $\mathcal{M}=\Gamma\left(\mathbb{S},\mathbb{A},U\left({\cdot,\cdot}\right)\right)$, which is depicted as follows:

  • State space $\mathbb{S}$: At each time slot $t$, the state $\mathcal{S}(t)$ is defined as the combination of the AoIs at the ECN and the users just before the data updating phase, i.e., $\mathcal{S}(t)=\left(S_{0}(t),S_{1}(t),S_{2}(t),\cdots,S_{N}(t)\right)$, where $S_{m}(t)=\Delta_{m}(t),\forall m\in\{0\}\cup\mathcal{N}$. We set the maximum AoI at the ECN and users to $\Delta_{\max}=M_{\max}D$, where $D=D_{u}+D_{d}$ and $M_{\max}$ is an integer. In this light, the state space $\mathbb{S}$ is finite and can be expressed as

    \mathbb{S}=\mathbb{S}_{0}\times\mathbb{S}_{1}\times\mathbb{S}_{2}\times\cdots\times\mathbb{S}_{N} \qquad (8)

    with $\mathbb{S}_{m}=\{D,2D,\cdots,M_{\max}D\},\forall m\in\{0\}\cup\mathcal{N}$.

  • Action space $\mathbb{A}$: At each time slot $t$, the ECN can ask the sensor to sense the underlying environment and update its generated data packet. Therefore, the action space can be expressed as $\mathbb{A}=\{0,1\}$, where $A=0$ ($A\in\mathbb{A}$) means the ECN does not ask the sensor to conduct a status update.

  • Reward function $U\left({\cdot,\cdot}\right)$: The reward function is a mapping from a state-action pair in $\mathbb{S}\times\mathbb{A}$ to a real number, which quantifies the reward obtained by choosing an action $A\in\mathbb{A}$ at a state $\mathcal{S}\in\mathbb{S}$. In this work, at each time slot $t$, given a state $\mathcal{S}(t)$ and choosing action $A(t)$, we define the reward function as

    U\left({\mathcal{S}(t),A(t)}\right)=C-\left(\beta_{1}\sum\nolimits_{n=1}^{N}\omega_{n}\left(\Delta_{n}(t)+D_{u}\right)+\beta_{2}A(t)E\right)\overset{(a)}{=}C-\beta_{1}D_{u}-L(t)=C_{1}-L(t) \qquad (9)

    where $C$ is a constant used to ensure that the reward is positive, and (a) follows from Eq. (5). Here, $D_{u}$ is introduced to evaluate the effect of the chosen action after it is conducted, i.e., after the SUP is over.
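To make the tabular representation used later concrete, the following Python sketch maps a state to a flat index of the finite space $\mathbb{S}$ and evaluates the reward of Eq. (9) in its simplified form $C_{1}-L(t)$; the indexing scheme and names are our own illustrative choices, and $C_{1}$ is the constant fixed later in Eq. (13).

```python
def state_index(aoi_tuple, D, M_max):
    """Map S(t) = (Delta_0, ..., Delta_N), each in {D, 2D, ..., M_max*D},
    to a flat index in {0, ..., M_max^(N+1) - 1} for a tabular Q vector."""
    idx = 0
    for delta in aoi_tuple:
        level = min(int(round(delta / D)) - 1, M_max - 1)  # 0 .. M_max - 1
        idx = idx * M_max + level
    return idx

def reward(delta_users, weights, A_t, E, beta1, beta2, C1):
    """Reward of Eq. (9), written as U = C_1 - L(t) with L(t) from Eq. (5)."""
    L_t = beta1 * sum(w * d for w, d in zip(weights, delta_users)) + beta2 * A_t * E
    return C1 - L_t
```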

For an MDP, the goal is generally to find a deterministic stationary policy $\pi:\mathbb{S}\to\mathbb{A}$ that maximizes the long-term expected discounted return. Particularly, a policy is said to be deterministic and stationary if: 1) given the state, a single action is chosen with certainty; 2) the policy does not change over time. In this work, we aim at deriving a deterministic stationary policy $\pi^{*}$ that maximizes the long-term expected discounted reward with the initial state $\mathcal{S}(0)$, i.e.,

\pi^{*}=\arg\max\limits_{\pi}\lim\limits_{T\to\infty}\mathbb{E}\big[\sum\nolimits_{t=0}^{T}\lambda^{t}U\left({\mathcal{S}(t),A(t)}\right)\big|\mathcal{S}(0)\big]\overset{(a)}{=}\arg\min\limits_{\pi}\lim\limits_{T\to\infty}\mathbb{E}\big[\sum\nolimits_{t=1}^{T}\lambda^{t-1}L(t)\big] \qquad (10)

where (a) holds when all the elements in $\mathcal{S}(0)$ are set to $D$. Comparing Eq. (6) with Eq. (10), we note that $\pi^{*}$ can also be used to derive a solution of the original Problem P.

As shown in Eq. (10), in each time slot $t$, the immediately obtained reward $U\left({\mathcal{S}(t),A(t)}\right)$ affects the cumulative reward in the long run. Therefore, to find the optimal strategy $\pi^{*}$, it is essential to accurately estimate the long-term effect of each decision, which is nontrivial because of this causality. In this work, we resort to RL and design a model-free algorithm, named EAU, to solve the problem, which is presented in detail in the following subsection.

III-B EAU algorithm

Q_{\pi}(\mathcal{S},A)=\mathbb{E}_{\pi}\big[\sum\nolimits_{l=0}^{\infty}\lambda^{l}U\left({\mathcal{S}(t+l),A(t+l)}\right)\big|\mathcal{S}(t)=\mathcal{S},A(t)=A\big] \qquad (11)
Q_{\pi^{*}}(\mathcal{S},A)=\mathbb{E}_{\pi^{*}}\big[U\left({\mathcal{S}(t),A(t)}\right)+\lambda\max\limits_{A^{\prime}\in\mathbb{A}}Q_{\pi^{*}}(\mathcal{S}(t+1),A^{\prime})\big|\mathcal{S}(t)=\mathcal{S},A(t)=A\big]
=U\left({\mathcal{S}(t),A(t)}\right)+\lambda\sum\nolimits_{\mathcal{S}^{\prime}\in\mathbb{S}}\mathsf{P}\left(\mathcal{S}^{\prime}\big|\mathcal{S},A\right)\max\limits_{A^{\prime}\in\mathbb{A}}Q_{\pi^{*}}(\mathcal{S}^{\prime},A^{\prime}) \qquad (12)

For each deterministic stationary policy $\pi$, we define the action-value function as shown in Eq. (11), with $(\mathcal{S},A)$ denoting the initial state-action pair, and the corresponding Bellman optimality equation can be expressed as in Eq. (12) [16], where $\mathsf{P}\left(\mathcal{S}^{\prime}\big|\mathcal{S},A\right)$ denotes the transition probability from state $\mathcal{S}$ to $\mathcal{S}^{\prime}$ under action $A$. Essentially, if this probability is known, the Bellman optimality equations form a system of equations with $\left|\mathbb{S}\times\mathbb{A}\right|$ unknowns, where $\left|\cdot\right|$ represents the cardinality of a set. In that case, for a finite MDP, we could derive the unique solution by utilizing model-based RL algorithms, e.g., dynamic programming [16].

However, in this work, the transition probability cannot be regarded as prior knowledge, since the users' request patterns and the mean channel gain from the sensor to the ECN are unknown. To deal with the corresponding challenge, we develop a model-free algorithm by resorting to expected Sarsa, a temporal-difference (TD) RL algorithm. We note that, compared with traditional TD algorithms, e.g., classic Sarsa and Q-learning, expected Sarsa may achieve better performance at a small additional computational cost for many problems [16]. Our developed EAU algorithm is shown in Algorithm 1, where $\hat{\mathbf{Q}}(t)$ is a row vector with $\left|\mathbb{S}\times\mathbb{A}\right|$ elements, denoting the estimate of the action-value function obtained after the $t$-th iteration. In this algorithm, the loop is repeated until the number of iterations reaches the maximum value $T_{\max}$.

Algorithm 1 Energy consumption and AoI oriented dynamic data Update (EAU) algorithm.
1:  Initialization:
2:  Set $t=0$, $\hat{\mathbf{Q}}(t)=\mathbf{0}$, $C_{1}$ with Eq. (13), and $\mathcal{S}(t)=\{D,D,\cdots,D\}$ as the initial state. Randomly select an action $A(t)$.
3:  Go into a loop:
4:  for $t<T_{\max}$ do
5:     Action selection: Choose an action $A(t)$ according to the probability distribution shown in Eq. (14), i.e., $\varepsilon$-greedy.
6:     Acting and observing: Take action $A(t)$, obtain a reward $U\left({\mathcal{S}(t),A(t)}\right)$, and observe a new state $\mathcal{S}(t+1)$.
7:     $\hat{\mathbf{Q}}(t)$ updating: Update $\hat{\mathbf{Q}}(t+1)$ with Eqs. (15)-(17) and set $t=t+1$.
8:  end for
9:  Output: $\hat{\mathbf{Q}}(t)$.

At the beginning of EAU, the estimate of the action-value vector $\hat{\mathbf{Q}}(0)$ is initialized as an all-zero vector, and all elements of the initial state $\mathcal{S}(0)$ are set to $D$. We note that the constant $C_{1}$ in Eq. (9) can be chosen arbitrarily and, as an instance, we set it as

C_{1}=\beta_{1}\sum\nolimits_{n=1}^{N}\omega_{n}\left(M_{\max}D\right)+\beta_{2}E \qquad (13)

with which the immediately obtained reward in each iteration $t$ is guaranteed to be positive.

When the initialization is completed, the algorithm goes into a loop. At each iteration $t$, we first choose an action $A(t)$ from the action space $\mathbb{A}$ based on the current state $\mathcal{S}(t)$ by looking up the action-value vector $\hat{\mathbf{Q}}(t)$. To balance exploration and exploitation, we adopt the $\varepsilon$-greedy policy, i.e., we choose an action from the space $\mathbb{A}$ according to the following probability distribution

\mathsf{P}_{\hat{\mathbf{Q}}(t)}\left(A(t)\big|\mathcal{S}(t)\right)=\begin{cases}\varepsilon,&A(t)=\arg\max\limits_{A\in\mathbb{A}}\{\hat{Q}\left({\mathcal{S}(t),A}\right)\in\hat{\mathbf{Q}}(t)\}\\ 1-\varepsilon,&\textrm{otherwise}\end{cases} \qquad (14)

where $\hat{Q}\left({\mathcal{S}(t),A}\right)$ denotes the estimated state-action value for the pair $\left({\mathcal{S}(t),A}\right)$ in the vector $\hat{\mathbf{Q}}(t)$, and $\varepsilon$ belongs to $\left(0,1\right)$. After that, we conduct the action $A(t)$, obtain a reward $U\left({\mathcal{S}(t),A(t)}\right)$, as shown in Eq. (9), and then observe a new state $\mathcal{S}(t+1)$.
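A minimal Python sketch of this action-selection rule is given below; note that, following Eq. (14), $\varepsilon$ here is the probability of taking the greedy action (exploitation), which is the opposite of the more common convention, and the binary action space $\mathbb{A}=\{0,1\}$ is assumed.

```python
import random

def epsilon_greedy_action(q_row, eps, rng=random):
    """Pick an action per Eq. (14): the currently greedy action with
    probability eps, and the remaining action otherwise.

    q_row: [Q_hat(S(t), 0), Q_hat(S(t), 1)] read out of the Q vector.
    """
    greedy = max(range(len(q_row)), key=lambda a: q_row[a])
    if rng.random() < eps:
        return greedy
    return rng.choice([a for a in range(len(q_row)) if a != greedy])
```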

Learning from the new experience (i.e., $\mathcal{S}(t)$, $A(t)$ and $\mathcal{S}(t+1)$), we update the estimate of the action-value vector from $\hat{\mathbf{Q}}(t)$ to $\hat{\mathbf{Q}}(t+1)$ by only updating the element associated with the pair ($\mathcal{S}(t)$, $A(t)$). Particularly, if we denote the element associated with ($\mathcal{S}(t)$, $A(t)$) in $\hat{\mathbf{Q}}(t)$ and that in $\hat{\mathbf{Q}}(t+1)$ by $\hat{Q}\left({\mathcal{S}(t),A(t)}\right)$ and $\hat{Q}^{\prime}\left({\mathcal{S}(t),A(t)}\right)$, respectively, we have

\hat{Q}^{\prime}\left({\mathcal{S}(t),A(t)}\right)=\hat{Q}\left({\mathcal{S}(t),A(t)}\right)+\alpha\Xi(t) \qquad (15)

where $\alpha\in(0,1]$ is the learning step size and $\Xi(t)$ denotes the obtained TD error. Here, $\Xi(t)$ can be expressed as

\Xi(t)=U\left({\mathcal{S}(t),A(t)}\right)+\lambda\mathbb{E}_{\varepsilon}\big[\hat{\mathbf{Q}}(t)\big|\mathcal{S}(t+1)\big]-\hat{Q}\left({\mathcal{S}(t),A(t)}\right) \qquad (16)

where $\lambda$ denotes the discount rate and the expectation

\mathbb{E}_{\varepsilon}\big[\hat{\mathbf{Q}}(t)\big|\mathcal{S}(t+1)\big]=\sum\nolimits_{A\in\mathbb{A}}\mathsf{P}_{\hat{\mathbf{Q}}(t)}\left(A\big|\mathcal{S}(t+1)\right)\hat{Q}\left({\mathcal{S}(t+1),A}\right) \qquad (17)

represents the expected action-value we would have if we started from the state $\mathcal{S}(t+1)$, still evaluated the effect of actions according to the action-value vector $\hat{\mathbf{Q}}(t)$, and chose the action with the $\varepsilon$-greedy policy shown in Eq. (14).
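The full update of Eqs. (15)-(17) amounts to one tabular expected-Sarsa step, as in the minimal Python sketch below; the dictionary-based Q table and the helper names are our own illustrative choices, and the binary action space is again assumed.

```python
def eps_greedy_probs(q_row, eps):
    """Action probabilities of Eq. (14) at a given state (two actions)."""
    greedy = max(range(len(q_row)), key=lambda a: q_row[a])
    return [eps if a == greedy else (1.0 - eps) / (len(q_row) - 1)
            for a in range(len(q_row))]

def expected_sarsa_update(Q, s, a, u, s_next, eps, alpha, lam, n_actions=2):
    """One tabular expected-Sarsa step, per Eqs. (15)-(17).

    Q: dict mapping (state, action) -> estimated value, 0 for unseen pairs;
    u: reward U(S(t), A(t)); lam: discount rate; alpha: learning step size.
    """
    q_next = [Q.get((s_next, b), 0.0) for b in range(n_actions)]
    expectation = sum(p * q for p, q in zip(eps_greedy_probs(q_next, eps), q_next))  # Eq. (17)
    td_error = u + lam * expectation - Q.get((s, a), 0.0)                            # Eq. (16)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error                                # Eq. (15)
```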

When the algorithm terminates, we obtain an estimated action-value function mapping each state-action pair to a positive number, i.e., $\hat{\mathbf{Q}}(T_{\max})$. With $\hat{\mathbf{Q}}(T_{\max})$ and the initial state $\mathcal{S}=\{D,D,\cdots,D\}$, we can obtain an approximate solution of Problem P by repeatedly looking up $\hat{\mathbf{Q}}(T_{\max})$ and choosing the action with the maximum action-value at each decision time.

IV Simulation results

In this section, we conduct simulations to evaluate the performance of our proposed strategy. We consider an IoT network consisting of 1 sensor, 1 ECN, and 3 users, where $D_{u}$ and $D_{d}$ are both set to 1 s. To simulate the data requests of users, we assume the requests of each user arrive at the ECN according to an independent Bernoulli distribution with parameter $P_{n}$, i.e., in each time slot $t$, we have $\mathsf{P}(r_{n}(t)=1)=P_{n},\forall n\in\mathcal{N}$, and $\mathsf{P}\left(r_{n}(t)=0\right)=1-P_{n}$. Here, we set $P_{n}=0.6$, $\forall n\in\{1,2,3\}$. For the sensor, the generated data packet is 200 Kbits, the adopted transmit power $p$ is 10 mW, and the energy consumption for sensing $E_{s}$ is half of that for data transmission. The channel bandwidth is 100 KHz, the mean channel power gain between the sensor and the ECN $\bar{g}$ is $-120$ dB, and the AWGN power density $N_{0}$ is set to $-174$ dBm/Hz. Meanwhile, $M_{\max}$, the parameter related to the maximum AoI, is set to 20, and the user weight factor $\omega_{n}=1/3$, $\forall n\in\{1,2,3\}$. For our proposed algorithm EAU, the learning time $T_{\max}$ is set to $10^{8}$, and the learning step size $\alpha$ is set to 0.1. Meanwhile, as in [17], we set the discount rate $\lambda$ to $0.99$ to strengthen the effects of rewards obtained in the future. Besides, the number of immediate rewards used to calculate the long-term reward, $T$ (in Eqs. (6) and (10)), is set to 600. To balance exploration and exploitation, we gradually increase $\varepsilon$ from 0.5 to 0.999 with step size $\frac{0.499}{T_{\max}}$. All simulation results are obtained by averaging over $10^{3}$ independent runs.
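For reference, the update failure probability implied by these parameters can be computed directly from Eqs. (1)-(2); the short Python check below assumes the AWGN power density $\delta^{2}$ of Section II-B is the $N_{0}=-174$ dBm/Hz stated here, with all quantities converted to linear units.

```python
import math

F = 200e3                        # update packet size (bits)
D_u = 1.0                        # SUP duration (s)
B = 100e3                        # bandwidth (Hz)
p = 10e-3                        # transmit power (W)
g_mean = 10 ** (-120 / 10)       # mean channel power gain (-120 dB)
N0 = 10 ** (-174 / 10) * 1e-3    # noise power density (-174 dBm/Hz -> W/Hz)

R = F / D_u                                # required rate: 200 kbit/s
gamma_T = 2 ** (R / B) - 1                 # Eq. (1): 2^2 - 1 = 3
gamma_bar = p * g_mean / (B * N0)          # average SNR, roughly 25 (about 14 dB)
P_f = 1 - math.exp(-gamma_T / gamma_bar)   # Eq. (2): roughly 0.11
print(f"gamma_T = {gamma_T:.1f}, gamma_bar = {gamma_bar:.1f}, P_f = {P_f:.3f}")
```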

Figure 2: Convergence of our proposed learning algorithm EAU with $\beta_{1}=\beta_{2}=1$ and the energy consumption $E$ evaluated in units of mJ.

Before delving into the performance of the learning algorithm EAU, we first investigate its convergence behavior by setting the allowed learning time $T_{\max}$ to different values, as shown in Fig. 2. Here, for the concerned objective function shown in Eq. (6), the energy consumption $E$ is evaluated in units of mJ, and the parameters $\beta_{1}$ and $\beta_{2}$ are both set to 1, i.e., the AoI and the energy consumption are treated equally. It can be seen that our algorithm converges when $T_{\max}$ is larger than $5\times10^{7}$. We note that the low convergence rate results from the large number of state-action pairs, i.e., $\left|\mathbb{S}\times\mathbb{A}\right|=3.2\times10^{5}$. Besides, the fluctuations are due to the randomness in both the users' behaviors and the update transmission failures.

Figure 3: Performance comparison in terms of the long-term discounted reward, where the parameter $\beta_{1}$ varies from 1 to 7.
Figure 4: Performance comparison in terms of the long-term discounted average AoI and energy consumption, where the parameter $\beta_{1}$ varies from 1 to 7.

To evaluate the effectiveness of our proposed algorithm, we compare its performance with the zero-wait baseline policy, i.e., the sensor updates its status in every slot, which minimizes the AoI values at both the ECN and the users. The simulation results are shown in Fig. 3, where the parameter $\beta_{1}$ varies from 1 to 7. It can be observed that when the value of $\beta_{1}$ is small, our proposed algorithm EAU performs much better than the zero-wait baseline policy in terms of the achieved long-term discounted reward. However, the improvement gradually becomes less pronounced as $\beta_{1}$ increases. This is because when $\beta_{1}$ is larger, as shown in Eq. (5), reducing the weighted AoI becomes more essential to lowering the overall cost. To show this, we further present the long-term discounted average AoI and energy consumption (EC) achieved by EAU and the zero-wait policy in Fig. 4. It can be seen that, by implementing EAU, the achieved long-term discounted average AoI decreases and the long-term discounted average EC increases as the effect of the AoI on the overall cost becomes more pronounced, i.e., as $\beta_{1}$ grows. However, the performance gap between EAU and the zero-wait policy becomes smaller as $\beta_{1}$ increases.

V Conclusions

This work presents a fresh view, by evaluating the AoI at users, for effectively striking the balance between the average AoI experienced by users and energy consumed by the sensor in caching enabled IoT networks. We formulated a dynamic status update optimization problem to minimize the expectation of a long term accumulative cost, and developed a model-free reinforcement learning algorithm to solve it. We have shown that, compared with the zero-wait policy minimizing the AoI value at both the ECN and users, our proposed scheme could achieve a higher or equal long-term discounted reward in different cases.

Based on this work, several extensions are possible. For instance, in the more realistic scenario with a large number of sensors, an interesting problem is how to design the dynamic status updating policy facing the state space and action space with extremely large sizes. Another future direction is to consider the scenario where there are correlations among the status updates from different sensors. Then, to design an efficient status update strategy for sensors, it is essential to simultaneously consider their sensing correlations as well as transmission interactions.

References

  • [1] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, “Internet of Things: A survey on enabling technologies, protocols, and applications,” IEEE Commun. Surveys Tuts., vol. 17, no. 4, pp. 2347–2376, Fourth Quarter 2015.
  • [2] M. Amadeo, C. Campolo, J. Quevedo, D. Corujo, A. Molinaro, A. Iera, R. L. Aguiar, and A. V. Vasilakos, “Information-centric networking for the Internet of Things: challenges and opportunities,” IEEE Network, vol. 30, no. 2, pp. 92–100, Mar. 2016.
  • [3] D. Niyato, D. I. Kim, P. Wang, and L. Song, “A novel caching mechanism for Internet of Things (IoT) sensing service with energy harvesting,” in Proc. IEEE ICC, May 2016, pp. 1–6.
  • [4] Y. He, F. R. Yu, N. Zhao, V. C. M. Leung, and H. Yin, “Software-defined networks with mobile edge computing and caching for smart cities: A big data deep reinforcement learning approach,” IEEE Commun. Magazine, vol. 55, no. 12, pp. 31–37, Dec. 2017.
  • [5] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire, “Femtocaching: Wireless content delivery through distributed caching helpers,” IEEE Trans. Inf. Theory, vol. 59, no. 12, pp. 8402–8413, Dec. 2013.
  • [6] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. Leung, “Cache in the air: Exploiting content caching and delivery techniques for 5G systems,” IEEE Commun. Magazine, vol. 52, no. 2, pp. 131–139, Feb. 2014.
  • [7] M. Sheng, C. Xu, J. Liu, J. Song, X. Ma, and J. Li, “Enhancement for content delivery with proximity communications in caching enabled wireless networks: Architecture and challenges,” IEEE Commun. Magazine, vol. 54, no. 8, pp. 70–76, Aug. 2016.
  • [8] J. Tang, T. Q. Quek, T.-H. Chang, and B. Shim, “Systematic resource allocation in cloud RAN with caching as a service under two timescales,” IEEE Trans. Commun., Aug. 2019, accepted for publication.
  • [9] S. Kaul, R. Yates, and M. Gruteser, “Real-time status: How often should one update?” in Proc. IEEE INFOCOM, Mar. 2012, pp. 2731–2735.
  • [10] A. Kosta, N. Pappas, and V. Angelakis, “Age of information: A new concept, metric, and tool,” Found. Trends Netw., vol. 12, no. 3, pp. 162–259, 2017.
  • [11] C. Xu, H. H. Yang, X. Wang, and T. Q. Quek, “Optimizing information freshness in computing enabled IoT networks,” IEEE Internet Things J., vol. 7, no. 2, pp. 971–985, Feb. 2020.
  • [12] R. D. Yates, P. Ciblat, A. Yener, and M. Wigger, “Age-optimal constrained cache updating,” in Proc. IEEE ISIT.   IEEE, Jun. 2017, pp. 141–145.
  • [13] H. Tang, P. Ciblat, J. Wang, M. Wigger, and R. Yates, “Age of information aware cache updating with file-and age-dependent update durations,” arXiv preprint arXiv:1909.05930, Sep. 2019.
  • [14] A. Goldsmith, Wireless communications.   Cambridge university press, 2005.
  • [15] Y. Gu, H. Chen, Y. Zhou, Y. Li, and B. Vucetic, “Timely status update in Internet of Things monitoring systems: An age-energy tradeoff,” IEEE Internet Things J., vol. 6, no. 3, pp. 5324–5335, Jun. 2019.
  • [16] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction.   MIT press, 2018.
  • [17] P. Wu, J. Li, L. Shi, M. Ding, K. Cai, and F. Yang, “Dynamic content update for wireless edge caching via deep reinforcement learning,” IEEE Commun. Lett., vol. 23, no. 10, pp. 1773–1777, Jul. 2019.