AoI and Energy Consumption Oriented Dynamic Status Updating in Caching Enabled IoT Networks
Abstract
Caching has been regarded as a promising technique to alleviate the energy consumption of sensors in Internet of Things (IoT) networks by responding to users’ requests with the data packets stored in the edge caching node (ECN). For real-time applications in caching enabled IoT networks, it is essential to develop dynamic status update strategies that strike a balance between the information freshness experienced by users and the energy consumed by the sensor, an issue that has not been well addressed. In this paper, we first depict the evolution of information freshness, in terms of age of information (AoI), at each user. Then, we formulate a dynamic status update optimization problem to minimize the expectation of a long term accumulative cost, which jointly considers the users’ AoI and the sensor’s energy consumption. To solve this problem, we cast the status updating procedure as a Markov Decision Process (MDP) and propose a model-free reinforcement learning algorithm, which addresses the challenge posed by the unknown dynamics of the formulated MDP. Finally, simulations are conducted to validate the convergence of our proposed algorithm and its effectiveness compared with the zero-wait baseline policy.
Index Terms:
Internet of Things, AoI, reinforcement learning, dynamic status updating.
I Introduction
Acting as a critical and integrated infrastructure, the Internet of Things (IoT) enables ubiquitous connections for billions of things in our physical world, ranging from tiny, resource-constrained sensors to more powerful smartphones and networked vehicles [1]. In general, the sensors are powered by batteries with limited capacities rather than a fixed power supply. Thus, to exploit the benefits promised by IoT networks, it is essential to properly address the energy consumption issue faced by sensors. Recently, caching has been proposed as a promising solution to lower the energy consumption of sensors by reducing the frequency of environmental sensing and data transmissions [2, 3, 4]. Particularly, by caching the data packets generated by sensors in the edge caching node (ECN), e.g., an access point (AP) or a mobile edge server, users can retrieve the cached data directly from the ECN instead of frequently activating sensors for status sensing and data transmission, thereby significantly lowering the sensors’ energy consumption.
Caching multimedia contents at the edge of wireless networks has been regarded as a promising technology for the 5th Generation (5G) wireless networks and hence, has been well studied in existing work [5, 6, 7, 8]. However, compared with multimedia contents (e.g., music, video, etc.) in traditional wireless networks, the data packets in IoT networks have two distinct features: 1) The sizes of data packets generated by IoT applications are generally much smaller than those of multimedia contents. Therefore, for IoT networks, the storage capacity of each ECN is large enough to store the latest status updates generated by all the sensors. 2) For many real-time IoT applications, the staleness of information obtained by users may significantly deteriorate the accuracy and reliability of their subsequent decisions. Therefore, the main concern for edge caching enabled IoT networks is how to properly update the cached data to simultaneously lower the energy consumption of sensors and improve the information freshness at users.
In order to quantify information freshness, the age of information (AoI) has been proposed, which measures the time elapsed since the latest received packet was generated by the sensor [9, 10, 11]. Based on this metric, some recent studies [12, 13] have begun to design status update strategies that optimize the AoI in caching enabled IoT networks. Specifically, the authors in [12] proposed a status update algorithm to minimize the popularity-weighted average of the AoI values at the cache, each of which is associated with one sensor, where the users’ popularity for each sensor is assumed to be known. Study [13] further extended [12] by considering the relation between the time consumed by one update and the corresponding update interval. However, in these studies, the AoI is evaluated from the ECN’s (e.g., AP’s) perspective instead of from the perspective of individual users. In fact, users are the real data consumers and final decision makers, and hence it is more reasonable to evaluate and optimize the AoI at the users in caching enabled IoT networks. Besides, by focusing on the AoI at the users, it is possible to decrease the energy consumption of each sensor by avoiding useless status updates and, meanwhile, to reduce the AoI experienced by users. This is because each user can observe the information freshness of a cached data packet only after she has requested and received it from the ECN.
In this paper, we consider a time-slotted IoT network consisting of an ECN, a sensor, and multiple users, where the ECN stores the latest data packet generated by the sensor. Here, each time slot is divided into two phases, termed the data delivery phase (DDP) and the status update phase (SUP), respectively. At the beginning of each time slot, the ECN obtains the data query profile of the users and responds to the requests with its locally cached data during the DDP. Then, the ECN decides whether to ask the sensor to update its status and, if so, the data update is conducted during the SUP. To strike a balance between the average AoI experienced by users and the energy consumed by the sensor, we first depict the evolution of the AoI at both the ECN and each user and then formulate a dynamic status update optimization problem to minimize the expectation of a long term accumulative cost, by jointly considering the effects of the AoI and the energy consumption. To solve this problem, we cast the corresponding status updating procedure into a Markov Decision Process (MDP), and design an Energy consumption and AoI oriented dynamic data Update (EAU) algorithm to solve the originally formulated problem by resorting to reinforcement learning (RL). The proposed algorithm does not require any a priori knowledge of the dynamics of the MDP, and its effectiveness is further verified via simulation results.
The organization of this paper is as follows. In Section II, the description of the system model and the formulation of the dynamic status update optimization problem are presented. In Section III, we cast the dynamic status updating procedure as an MDP and then, develop a model-free RL algorithm to solve it. Simulation results are presented to show the effectiveness of our proposed scheme in Section IV and finally, conclusions are drawn in Section V.
II System model and problem formulation
II-A Network model
We consider an IoT network consisting of users, one ECN, and one sensor, which serves the users with time sensitive information. The user set is denoted by . We assume a time-slotted system as illustrated in Fig. 1, where each slot is divided into two phases, i.e., the DDP and the SUP, whose time durations are fixed and denoted by and , respectively. In each slot , users may request the ECN for the cached data packet, and the corresponding query profile can be expressed as , where , with if user requests the data, and otherwise. In this paper, we consider that the ECN obtains the data query profile at the beginning of each time slot , and it has no prior knowledge of the users’ request patterns, which are random.
As shown in Fig. 1, the ECN first transmits the required data packet to the corresponding users during the DDP. Then, the ECN has to decide whether or not to ask the sensor to update its status and, if so, receives the status update during the SUP. Let denote the ECN’s update decision made in each time slot , i.e., if the sensor is asked to sense the underlying environment and update the status, and otherwise. We consider that all data packets requested by users can be successfully transmitted from the ECN. However, when an update is transmitted from the sensor to the ECN, a potential update failure may occur due to the limited transmission power of the sensor, which will be presented in detail in the following subsection.
II-B Status updating model
During the SUP in time slot , if the sensor is asked to update its status, it has to sense the underlying environment and transmit the information packet to the ECN. The energy consumed by the sensor for each environmental sensing is assumed to be constant and denoted by . Besides, we assume the time spent on the environmental sensing is negligible and hence, the transmission time of each update packet is . The spectral bandwidth available for the data update is and the transmit power of the sensor is fixed as . Accordingly, the overall energy consumption of one status update is . Let denote the channel power gain from the sensor to the ECN in time slot , and the AWGN power density at the ECN. It is assumed that the channel between the sensor and the ECN follows quasi-static Rayleigh fading with mean , i.e., its power gain remains the same over one SUP and changes independently across time slots.
The size of each update packet is considered to be constant and denoted by . To complete the data transmission in one SUP, the required transmission rate is set to . In this light, if the sensor is asked to transmit its update packet, a transmission failure can occur when the required SNR threshold cannot be reached. Particularly, according to Shannon’s formula, to meet the transmission rate requirement , the SNR threshold should be set as
(1)
Furthermore, the update failure probability for one update transmission, if there is any, can be expressed as [14, 15]
(2)
where denotes the SNR with the corresponding transmission, and the average SNR.
For ease of expression, we use to denote whether the asked update is successfully transmitted to the ECN, i.e., if the update is successfully delivered and otherwise. Since the sensor only generates and transmits the update packet when it is asked by the ECN, we further have in each time slot .
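As a concrete illustration of this updating model, the following Python sketch computes the SNR threshold implied by Shannon’s formula and the resulting update failure probability under quasi-static Rayleigh fading, and then samples one success indicator. The function and parameter names are our own placeholders (the paper’s notation is not reproduced here), and the average SNR in the example call is an assumed value, not one taken from the paper.

```python
import math
import random

def update_failure_probability(packet_bits, bandwidth_hz, sup_duration_s, avg_snr):
    """Failure probability of one status update under Rayleigh fading (sketch of Eqs. (1)-(2))."""
    rate_bps = packet_bits / sup_duration_s          # rate needed to finish the packet in one SUP
    snr_th = 2.0 ** (rate_bps / bandwidth_hz) - 1.0  # Shannon: rate = W * log2(1 + snr_th)
    # Rayleigh fading => the received SNR is exponentially distributed with mean avg_snr.
    return 1.0 - math.exp(-snr_th / avg_snr)

def sample_update_success(p_fail, rng=random):
    """Return 1 if the requested update reaches the ECN, 0 otherwise."""
    return 0 if rng.random() < p_fail else 1

# 200-Kbit packet, 100-kHz bandwidth, 1-s SUP (Section IV values); avg_snr is a placeholder.
p_f = update_failure_probability(200e3, 100e3, 1.0, avg_snr=10.0)
print(f"failure probability: {p_f:.3f}, one draw: {sample_update_success(p_f)}")
```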
II-C Performance metric and problem formulation
In this paper, we consider the AoI at both the users and the ECN just before the decision moment in each time slot, i.e., the beginning of the status update phase. Before formally depicting the evolution of the AoI at each user, we first give the dynamics of the AoI of the stored data packet from the ECN’s perspective. Particularly, we denote the AoI at the ECN in slot by , whose evolution can be expressed as
(3)
where means that an update packet was successfully delivered to the ECN in the previous slot .
Similarly, let denote the AoI at each user in slot . Then, its dynamics can be expressed as
(4)
where the product means that the packet required by user was just successfully updated to the ECN in the previous slot , and (a) holds because of Eq. (3). Without loss of generality, we initialize . Based on the above analyses, we note that, even when obtaining data from the same ECN, different users may experience different AoIs, which would be ignored if we only considered the dynamics of the AoI at the ECN.
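To make the evolution rules concrete, the Python sketch below implements one plausible reading of Eqs. (3) and (4): AoI is counted in slots, the ECN’s AoI resets after a successful update, a user’s copy inherits the age of the ECN’s cached packet in the slot after she requests and receives it, and all AoI values are capped at a maximum. The function names, the reset value, and the cap are our own assumptions; the paper’s exact expressions are not reproduced here.

```python
def next_ecn_aoi(aoi_ecn, updated, delivered, aoi_max):
    """ECN-side AoI in the next slot (one plausible reading of Eq. (3))."""
    if updated and delivered:
        return 1                          # a fresh packet was received during this SUP
    return min(aoi_ecn + 1, aoi_max)      # otherwise the cached packet ages by one slot

def next_user_aoi(aoi_user, aoi_ecn, requested, aoi_max):
    """User-side AoI in the next slot (one plausible reading of Eq. (4)).

    A user who requested in this slot leaves the DDP holding the ECN's cached
    packet, so her copy is as old as the ECN's was, plus the slot that elapses.
    """
    if requested:
        return min(aoi_ecn + 1, aoi_max)
    return min(aoi_user + 1, aoi_max)

print(next_ecn_aoi(aoi_ecn=4, updated=1, delivered=1, aoi_max=20))    # -> 1
print(next_user_aoi(aoi_user=6, aoi_ecn=4, requested=1, aoi_max=20))  # -> 5
```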
To strike a balance between the average AoI experienced by users and the energy consumed by the sensor, we define the overall cost associated with the update decision made by the ECN in each slot as
(5)
where denotes the weight allocated to user , i.e., and , represents the energy consumed to complete one status update, and as well as are parameters used to nondimensionalize the equation and, meanwhile, to trade off the experienced AoI against the consumed energy in the network.
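The sketch below assembles the per-slot cost in the spirit of Eq. (5): a weighted sum of the users’ AoIs plus the energy spent if an update was requested, with two scaling coefficients trading the two terms off. The symbol names (weights, coefficients beta1/beta2, sensing and transmission energies) are placeholders for the paper’s notation, and the example values are purely illustrative.

```python
def slot_cost(user_aois, weights, updated, e_sense, e_tx, beta1=1.0, beta2=1.0):
    """Per-slot cost combining weighted user AoIs and update energy (sketch of Eq. (5))."""
    assert abs(sum(weights) - 1.0) < 1e-9, "user weights are assumed to sum to 1"
    aoi_term = sum(w * a for w, a in zip(weights, user_aois))
    energy_term = updated * (e_sense + e_tx)   # energy is spent only if an update is asked for
    return beta1 * aoi_term + beta2 * energy_term

# Example: three equally weighted users, update requested in this slot.
print(slot_cost([3, 5, 2], [1 / 3, 1 / 3, 1 / 3], updated=1, e_sense=5.0, e_tx=10.0))
```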
In this work, we aim at designing a dynamic status updating strategy to minimize the expectation of a long term accumulative cost, i.e.,
(6)
s.t. (7)
where denotes the sequence of update decisions made by the ECN from slot to , and the discount rate introduced to determine the importance of the present cost and, meanwhile, to guarantee that the long term accumulative cost is finite. The formulated dynamic optimization problem will be solved in the following section.
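For reference, accumulating the per-slot costs with a discount rate, as in the objective of Problem P, can be written as the short helper below; the cost sequence would come from simulating a candidate update policy, and the expectation in Eq. (6) would be estimated by averaging this quantity over many independent runs.

```python
def discounted_sum(per_slot_costs, gamma):
    """Discounted accumulation of a per-slot cost sequence (sketch of the objective in Eq. (6))."""
    return sum((gamma ** t) * c for t, c in enumerate(per_slot_costs))

print(discounted_sum([2.0, 3.5, 1.0], gamma=0.9))  # 2.0 + 0.9*3.5 + 0.81*1.0
```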
III Reinforcement learning algorithm design
In this section, we first formulate the dynamic status updating procedure as an MDP and then develop a model-free RL algorithm, with which the challenge posed by the unknown transition probabilities of the formulated MDP can be addressed.
III-A MDP formulation
The concerned dynamic status updating procedure can be formulated as an MDP consisting of a tuple , whose elements are described as follows:
• State space : At each time slot , the state is defined as the combination of the AoIs at the ECN and the users just before the status update phase, i.e., , where . We set the maximum AoI at the ECN and the users to be , where and is an integer. In this light, the state space is finite and can be expressed as (8), with .
• Action space : At each time slot , the ECN could ask the sensor to sense the underlying environment and update its generated data packet. Therefore, the action space can be expressed as , where () means the ECN would not ask the sensor to conduct its status update.
• Reward function : The reward function is a mapping from a state-action pair to a real number, which quantifies the reward obtained by choosing an action at a state . In this work, at each time slot , given a state and choosing an action , we define the reward function as (9), where is a constant used to ensure that the reward is positive, and (a) follows from Eq. (5). Here, is introduced to evaluate the effect of the chosen action after it has been conducted, i.e., after the SUP is over. A minimal sketch that wires these elements into a simulation environment is given after this list.
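The sketch below combines the state, action, and reward definitions of this subsection into a small toy environment. It assumes Bernoulli user requests, a fixed update failure probability, AoI counted in slots and capped, and a reward of the form “constant minus the per-slot cost”; all class, method, and parameter names, as well as the default parameter values, are our own assumptions rather than the paper’s specification.

```python
import random

class CachingAoIEnv:
    """Toy environment for the status-updating MDP described above (sketch)."""

    def __init__(self, n_users=3, request_probs=(0.6, 0.5, 0.4), weights=None,
                 aoi_max=20, p_fail=0.26, e_update=15.0, beta1=1.0, beta2=1.0,
                 reward_offset=None, seed=0):
        self.n = n_users
        self.request_probs = request_probs
        self.weights = weights or [1.0 / n_users] * n_users
        self.aoi_max = aoi_max
        self.p_fail = p_fail                    # update failure probability, cf. Eq. (2)
        self.e_update = e_update                # sensing + transmission energy per update
        self.beta1, self.beta2 = beta1, beta2
        # Constant offset keeping the reward positive (role of the constant in Eq. (9)).
        self.reward_offset = (reward_offset if reward_offset is not None
                              else beta1 * aoi_max + beta2 * e_update)
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # State: AoI at the ECN followed by the AoI at each user, all initialised to 1.
        self.aoi_ecn = 1
        self.aoi_users = [1] * self.n
        return self._state()

    def _state(self):
        return (self.aoi_ecn, *self.aoi_users)

    def step(self, action):
        """action = 1: ask the sensor to update during the SUP; action = 0: stay idle."""
        # DDP: each user requests the cached packet with her Bernoulli probability.
        requests = [1 if self.rng.random() < p else 0 for p in self.request_probs]
        # SUP: if asked, the update reaches the ECN with probability 1 - p_fail.
        delivered = action and (self.rng.random() >= self.p_fail)
        # AoI evolution (same plausible reading of Eqs. (3)-(4) as sketched earlier).
        next_users = [min(self.aoi_ecn + 1, self.aoi_max) if r else min(a + 1, self.aoi_max)
                      for a, r in zip(self.aoi_users, requests)]
        self.aoi_ecn = 1 if delivered else min(self.aoi_ecn + 1, self.aoi_max)
        self.aoi_users = next_users
        # Per-slot cost in the spirit of Eq. (5); positive reward in the spirit of Eq. (9).
        cost = (self.beta1 * sum(w * a for w, a in zip(self.weights, next_users))
                + self.beta2 * action * self.e_update)
        return self._state(), self.reward_offset - cost

# Quick smoke test: always asking for an update for a few slots.
env = CachingAoIEnv()
state = env.reset()
for _ in range(3):
    state, reward = env.step(1)
    print(state, round(reward, 2))
```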
For an MDP, the goal is generally to find a deterministic stationary policy that maximizes the long term expected discounted return. Particularly, a policy is said to be deterministic and stationary if: 1) given the state, exactly one action is chosen; and 2) the policy does not change over time. In this work, we aim at deriving a deterministic stationary policy that maximizes the long term expected discounted reward with the initial state , i.e.,
(10)
where (a) holds when all the elements in are set to . Comparing Eq. (6) with Eq. (10), we note that can also be used to derive a solution of the original Problem P.
As shown in Eq. (10), in each time slot , the immediately obtained reward affects the cumulative reward in the long run. Therefore, to find the optimal strategy , it is essential to accurately estimate the long-term effect of each decision, which is nontrivial because of causality. In this work, we resort to RL and design a model-free algorithm, named EAU, to solve the problem, which will be presented in detail in the following subsection.
III-B EAU algorithm
(11)
(12)
For each deterministic stationary policy , we define the action-value function as shown in Eq. (11), with denoting the initial state-action pair, and the corresponding Bellman optimality equation can be expressed as in Eq. (12) [16], where denotes the transition probability from one state to another state by conducting an action . Essentially, if this probability is known, the Bellman optimality equations are a system of equations with unknowns, where denotes the cardinality of a set. In that case, for a finite MDP, we could derive the unique solution by utilizing model-based RL algorithms, e.g., dynamic programming [16].
However, in this work, the transition probability cannot be regarded as prior knowledge, because the users’ request patterns and the mean channel gain from the sensor to the ECN are unknown. To deal with this challenge, we develop a model-free algorithm by resorting to expected Sarsa, a temporal-difference (TD) RL algorithm. We note that, compared with traditional TD algorithms, e.g., classic Sarsa and Q-learning, expected Sarsa may achieve better performance at a small additional computational cost for many problems [16]. Our developed EAU algorithm is shown in Algorithm 1, where is a row vector with elements, denoting the estimate of the action-value function obtained after the -th iteration. In this algorithm, the loop is repeated until the number of iterations reaches the maximum value .
At the beginning of EAU, the estimate of the action-value vector is initialized as a vector with all elements being 0, and all elements of the initial state are set to . We note that the constant in Eq. (9) can be chosen arbitrarily; as an instance, we set it as
(13)
with which the immediately obtained reward in each iteration is guaranteed to be positive.
When the initialization is completed, the algorithm enters a loop. At each iteration , we first choose an action from the action space based on the current state by looking up the action-value vector . To balance exploration and exploitation, we adopt the -greedy policy here, i.e., an action is chosen from the action space with the following probability distribution
(14)
where denotes the estimated state-action value for the pair with the vector , and belongs to . After that, we conduct the action , obtain a reward as given in Eq. (9), and then observe a new state .
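A minimal sketch of the action selection in Eq. (14) is given below: with some probability the currently best-valued action is exploited, and otherwise an action is drawn uniformly at random. Since the original symbol for the exploration parameter is not reproduced here, the parameter below simply denotes the probability of acting greedily, and the tabular Q representation (a dict keyed by state-action pairs) is our own choice.

```python
import random

def epsilon_greedy_action(q_values, state, actions, greedy_prob, rng=random):
    """Pick an action for `state` from a tabular Q (sketch of Eq. (14)).

    With probability `greedy_prob` the highest-valued action is exploited,
    otherwise an action is drawn uniformly at random.
    """
    if rng.random() < greedy_prob:
        return max(actions, key=lambda a: q_values.get((state, a), 0.0))
    return rng.choice(list(actions))

# Example: two actions (0 = idle, 1 = update) and a tiny Q-table.
Q = {((2, 3, 4, 5), 0): 1.2, ((2, 3, 4, 5), 1): 0.7}
print(epsilon_greedy_action(Q, (2, 3, 4, 5), actions=(0, 1), greedy_prob=0.9))
```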
Learning from the new experiences (i.e., , and ), we would update the estimate of the action-value vector from to by just updating the element associated with the pair (, ). Particularly, if we denote the element with (, ) in and that in by and , respectively, we have
(15)
where is the learning step-size and denotes the obtained TD error. Here, can be expressed as
(16)
where denotes the discount rate and the expectation
(17)
represents the expected action-value we will have, if we start from the state , still evaluate the effect of actions according to the action-value vector , and choose the action with the -greedy policy as shown in Eq. (14).
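The expected Sarsa update in Eqs. (15)-(17) can be sketched as follows: the TD target uses the expectation of the next state’s action values under the behaviour policy, rather than the value of the single sampled next action (classic Sarsa) or the maximum (Q-learning). Variable and function names mirror the sketches above and are not the paper’s notation; the default step-size, discount rate, and greedy probability are assumed values.

```python
def expected_q(q_values, state, actions, greedy_prob):
    """Expected action value of `state` under the greedy-with-exploration policy (sketch of Eq. (17))."""
    vals = [q_values.get((state, a), 0.0) for a in actions]
    best = max(vals)
    n_best = sum(1 for v in vals if v == best)       # split the greedy mass among ties
    explore = (1.0 - greedy_prob) / len(actions)     # uniform exploration mass per action
    return sum((explore + (greedy_prob / n_best if v == best else 0.0)) * v for v in vals)

def expected_sarsa_update(q_values, s, a, reward, s_next, actions,
                          alpha=0.1, gamma=0.95, greedy_prob=0.9):
    """One tabular expected-Sarsa step: TD target, TD error, Q update (sketch of Eqs. (15)-(16))."""
    td_target = reward + gamma * expected_q(q_values, s_next, actions, greedy_prob)
    td_error = td_target - q_values.get((s, a), 0.0)
    q_values[(s, a)] = q_values.get((s, a), 0.0) + alpha * td_error
    return td_error

# One illustrative step on an empty Q-table with two actions.
Q = {}
print(expected_sarsa_update(Q, s=(2, 3, 4, 5), a=1, reward=4.0, s_next=(1, 2, 5, 6), actions=(0, 1)))
print(Q)
```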
When the algorithm terminates, we obtain an estimated action-value function mapping each state-action pair to a positive number, i.e., . With and the initial state , we can obtain an approximate solution of Problem P by repeatedly looking up and choosing the action with the maximum action-value at each decision time.
IV Simulation results
In this section, we conduct simulations to evaluate the performance of our proposed strategy. We consider an IoT network consisting of 1 sensor, 1 ECN, and 3 users, where and are both set to 1 s. To simulate the data requests of users, we assume the requests of each user arrive at the ECN according to an independent Bernoulli distribution with parameter , i.e., in each time slot , we have , and . Here, we set , . For the sensor, each generated data packet is 200 Kbits, the adopted transmit power is 10 mW, and the energy consumption for sensing is half of that for data transmission. The channel bandwidth is 100 KHz, the mean channel power gain between the sensor and the ECN is dB, and the AWGN power density is set to dBm/Hz. Meanwhile, , the parameter related to the maximum AoI, is set to 20, and the user weight factor , . For our proposed algorithm EAU, the learning time is set to , and the learning step-size is set to 0.1. Meanwhile, as in [17], we set the discount rate to to strengthen the effects of rewards obtained in the future. Besides, the number of immediate rewards used to calculate the long term reward (in Eqs. (6) and (10)) is set to 600. To balance exploration and exploitation, we gradually increase from 0.5 to 0.999 with step-size . All simulation results are obtained by averaging over independent runs.
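For reproducibility of the setup described above, the snippet below shows one way the random user requests and the gradually increasing greedy probability can be generated. The per-user request probabilities and the schedule step-size are placeholders of our own, since their exact values are not reproduced in the text.

```python
import random

rng = random.Random(42)

def request_profile(request_probs, rng=rng):
    """One slot's query profile: 1 if the corresponding user asks for the cached packet."""
    return [1 if rng.random() < p else 0 for p in request_probs]

def greedy_prob_schedule(start=0.5, end=0.999, step=1e-5):
    """Exploitation probability gradually increased from `start` to `end` (assumed step size)."""
    p = start
    while True:
        yield p
        p = min(p + step, end)

probs = (0.6, 0.5, 0.4)   # placeholder per-user Bernoulli parameters
schedule = greedy_prob_schedule()
for t in range(3):
    print(f"slot {t}: requests={request_profile(probs)}, greedy_prob={next(schedule):.5f}")
```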
Before delving into the performance of the learning algorithm EAU, we first investigate its convergence behavior by setting the allowed number of learning iterations to different values, as shown in Fig. 2. Here, for the concerned objective function shown in Eq. (6), the energy consumption is evaluated in units of mJ, and the parameters and are both set to 1, i.e., the AoI and the energy consumption are treated equally. It can be seen that our algorithm converges when is larger than . We note that the low convergence rate results from the large number of state-action pairs, which is . Besides, the fluctuations are caused by the randomness in both the users’ behaviors and the update transmission failures.
To evaluate the effectiveness of our proposed algorithm, we compare its performance with the zero-wait baseline policy, i.e., the sensor updates its status in every slot, with which the AoI values at both the ECN and the users are minimized. The simulation results are demonstrated in Fig. 3, where the parameter varies from 1 to 7. It can be seen that, when the value of is small, our proposed algorithm EAU performs much better than the zero-wait baseline policy in terms of the achieved long-term discounted reward. However, the improvement gradually becomes less obvious as increases. This is because, when is larger, as shown in Eq. (5), reducing the weighted AoI becomes more essential to lowering the overall cost. To show this, we further present the long-term discounted average AoI and energy consumption (EC) achieved by EAU and the zero-wait policy in Fig. 4. It can be seen that, with EAU, the achieved long-term discounted average AoI decreases and the long-term discounted average EC increases as the effect of the AoI on the overall cost becomes more pronounced, i.e., as becomes larger. However, the performance gap between EAU and the zero-wait policy becomes smaller as increases.
V Conclusions
This work presents a fresh perspective, by evaluating the AoI at the users, on effectively striking a balance between the average AoI experienced by users and the energy consumed by the sensor in caching enabled IoT networks. We formulated a dynamic status update optimization problem to minimize the expectation of a long term accumulative cost, and developed a model-free reinforcement learning algorithm to solve it. We have shown that, compared with the zero-wait policy, which minimizes the AoI values at both the ECN and the users, our proposed scheme achieves a higher or equal long-term discounted reward in different cases.
Based on this work, several extensions are possible. For instance, in the more realistic scenario with a large number of sensors, an interesting problem is how to design a dynamic status updating policy when the state and action spaces are extremely large. Another future direction is to consider the scenario where there are correlations among the status updates from different sensors. Then, to design an efficient status update strategy for the sensors, it is essential to simultaneously consider their sensing correlations as well as their transmission interactions.
References
- [1] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, “Internet of Things: A survey on enabling technologies, protocols, and applications,” IEEE Commun. Surveys Tuts., vol. 17, no. 4, pp. 2347–2376, Fourth Quarter 2015.
- [2] M. Amadeo, C. Campolo, J. Quevedo, D. Corujo, A. Molinaro, A. Iera, R. L. Aguiar, and A. V. Vasilakos, “Information-centric networking for the Internet of Things: challenges and opportunities,” IEEE Network, vol. 30, no. 2, pp. 92–100, Mar. 2016.
- [3] D. Niyato, D. I. Kim, P. Wang, and L. Song, “A novel caching mechanism for Internet of Things (IoT) sensing service with energy harvesting,” in Proc. IEEE ICC, May 2016, pp. 1–6.
- [4] Y. He, F. R. Yu, N. Zhao, V. C. M. Leung, and H. Yin, “Software-defined networks with mobile edge computing and caching for smart cities: A big data deep reinforcement learning approach,” IEEE Commun. Magazine, vol. 55, no. 12, pp. 31–37, Dec. 2017.
- [5] K. Shanmugam, N. Golrezaei, A. G. Dimakis, A. F. Molisch, and G. Caire, “Femtocaching: Wireless content delivery through distributed caching helpers,” IEEE Trans. Inf. Theory, vol. 59, no. 12, pp. 8402–8413, Dec. 2013.
- [6] X. Wang, M. Chen, T. Taleb, A. Ksentini, and V. C. Leung, “Cache in the air: Exploiting content caching and delivery techniques for 5G systems,” IEEE Commun. Magazine, vol. 52, no. 2, pp. 131–139, Feb. 2014.
- [7] M. Sheng, C. Xu, J. Liu, J. Song, X. Ma, and J. Li, “Enhancement for content delivery with proximity communications in caching enabled wireless networks: Architecture and challenges,” IEEE Commun. Magazine, vol. 54, no. 8, pp. 70–76, Aug. 2016.
- [8] J. Tang, T. Q. Quek, T.-H. Chang, and B. Shim, “Systematic resource allocation in cloud RAN with caching as a service under two timescales,” IEEE Trans. Commun., Aug. 2019, accepted for publication.
- [9] S. Kaul, R. Yates, and M. Gruteser, “Real-time status: How often should one update?” in Proc. IEEE INFOCOM, Mar. 2012, pp. 2731–2735.
- [10] A. Kosta, N. Pappas, and V. Angelakis, “Age of information: A new concept, metric, and tool,” Found. Trends Netw., vol. 12, no. 3, pp. 162–259, 2017.
- [11] C. Xu, H. H. Yang, X. Wang, and T. Q. Quek, “Optimizing information freshness in computing enabled IoT networks,” IEEE Internet Things J., vol. 7, no. 2, pp. 971–985, Feb. 2020.
- [12] R. D. Yates, P. Ciblat, A. Yener, and M. Wigger, “Age-optimal constrained cache updating,” in Proc. IEEE ISIT, Jun. 2017, pp. 141–145.
- [13] H. Tang, P. Ciblat, J. Wang, M. Wigger, and R. Yates, “Age of information aware cache updating with file-and age-dependent update durations,” arXiv preprint arXiv:1909.05930, Sep. 2019.
- [14] A. Goldsmith, Wireless Communications. Cambridge University Press, 2005.
- [15] Y. Gu, H. Chen, Y. Zhou, Y. Li, and B. Vucetic, “Timely status update in Internet of Things monitoring systems: An age-energy tradeoff,” IEEE Internet Things J., vol. 6, no. 3, pp. 5324–5335, Jun. 2019.
- [16] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. MIT press, 2018.
- [17] P. Wu, J. Li, L. Shi, M. Ding, K. Cai, and F. Yang, “Dynamic content update for wireless edge caching via deep reinforcement learning,” IEEE Commun. Lett., vol. 23, no. 10, pp. 1773–1777, Jul. 2019.