
Age- and Deviation-of-Information of Time-Triggered and Event-Triggered Systems

Mahsa Noroozi, Markus Fidler
Institute of Communications Technology, Leibniz Universität Hannover
Abstract

Age-of-information is a metric that quantifies the freshness of information obtained by sampling a remote sensor. In signal-agnostic sampling, sensor updates are triggered at certain times without being conditioned on the actual sensor signal. Optimal update policies have been researched and it is accepted that periodic updates achieve smaller age-of-information than random updates. We contribute a study of a signal-aware policy, where updates are triggered by a random sensor event. By definition, this implies random updates and, as a consequence, inferior age-of-information. Considering a notion of deviation-of-information as a signal-aware metric, our results show, however, that event-triggered systems can perform as well as time-triggered systems while causing lower mean network utilization.

I Introduction

We consider a system where a remote sensor is sampled and the samples are transmitted via a network to a monitor. A model of the system is shown in Fig. 1. The signal $C(t)$ generated by the sensor changes randomly over time $t$ and the $n$th sample is taken and sent to the network at time $A(n)$. We investigate two different sampling policies. In a time-triggered system, the sampling process is agnostic to the signal and samples are taken after a certain amount of time has elapsed. In an event-triggered system, the sampler is signal-aware and whenever the signal change with respect to the last sample exceeds a threshold, a new sample is generated. Sample $n$ has network service requirement $S_{i}(n)$ at queue $i$ and it departs from the network to the monitor at time $D(n)$. The monitor does not have a priori knowledge of the distribution and parameters of the sensor signal $C(t)$. Hence, it relies only on the most recent update received, i.e., at time $t$ sample $n^{*}=\max\{n:D(n)<t\}$ provides the sensor reading $C(A(n^{*}))$ generated at time $A(n^{*})$.

A key performance metric of such systems is the age-of-information (AoI) that quantifies the freshness of information at the monitor. The AoI is defined as $\Delta(t)=t-A(n^{*})$. An example of the progression of the AoI over time is shown in Fig. 2 [1]. The information of sample $n$ generated at time $A(n)$ ages with slope one in $t$. The monitor selects the most recent sample $n^{*}$ that it has received. This leads to the linear increase of $\Delta(t)$ with discontinuities whenever a fresher sample becomes available at the monitor and the AoI is reset to the network delay.
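As a small illustration (our own sketch, not part of the paper; all names and values are made up), the AoI trajectory of Fig. 2 can be computed directly from the arrival and departure time stamps:

```python
# Minimal sketch: instantaneous AoI Delta(t) = t - A(n*), n* = max{n : D(n) < t}.
import numpy as np

def aoi(t, A, D):
    """Return Delta(t) given arrival times A(n) and departure times D(n)."""
    A, D = np.asarray(A, float), np.asarray(D, float)
    received = A[D < t]                     # samples delivered to the monitor before t
    return t - received.max() if received.size else np.nan

# toy example: three updates, each with some network delay
A = [1.0, 2.0, 3.0]
D = [1.5, 3.5, 4.0]
print([round(aoi(t, A, D), 2) for t in (2.0, 3.0, 3.6)])   # [1.0, 2.0, 1.6]
```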

The notion of AoI has been introduced in vehicular networks [1, 2, 3, 4]. It has emerged as a very active area of research, being of general importance for a variety of applications in the areas of cyber-physical systems and the Internet of Things. There, particular challenges arise in networked feedback control systems [5, 6, 7]. Recent surveys are [8, 9].

Figure 1: System model. At time $A(n)$ the $n$th sample of the sensor signal $C(t)$ arrives at a network of queues, with service times $S_{i}(n)$. At time $D(n)$ the sample departs from the network to a monitor, conveying the signal $C(A(n))$.

A general objective of AoI research is to find update policies that minimize the AoI. Common policies are periodic sampling, random sampling, and zero-wait sampling [2, 10, 11]. The effects of periodic and random sampling on the AoI have been studied in depth using models of D|M|1 and M|M|1 queues and variants thereof [2, 12, 13, 14], and it is universally accepted that periodic sampling outperforms exponential, random sampling. Zero-wait sampling uses $A(n+1)=D(n)$ for all $n\geq 1$, i.e., reception of sample $n$ by the monitor triggers generation of sample $n+1$. This avoids queueing in the network entirely and achieves good but not necessarily optimal AoI [10, 11]. Zero-wait sampling differs, however, from our system in Fig. 1 as it requires feedback of network state information.

Different from these signal-agnostic policies, we consider a signal-aware policy [8, 15, 16], where samples are generated in case of a defined, random sensor event. At first sight, this brings about random updates, which may be assumed to have worse AoI performance than time-triggered, periodic updates. Noticing that AoI is a signal-agnostic metric, this may not be unexpected. We define a deviation-of-information (DoI) metric $\Phi(t)=C(t)-C(A(n^{*}))$ that matches the definition of the AoI $\Delta(t)=t-A(n^{*})$, but replaces the age by the actual deviation of the monitor's signal estimate from the sensor signal $C(t)$.

We employ a max-plus queueing model and stochastic methods of the network calculus to derive bounds of tail delays [17, 18, 19, 20]. We contribute solutions for AoI and DoI of time- and event-triggered systems. Simulation results that confirm the tail decay rates of our analytical bounds are included. Our results enable finding update rates that minimize the AoI or DoI, respectively. Interestingly, the optimal update rate may differ with respect to the goal of AoI or DoI minimization. While the event-triggered system has larger AoI, our evaluation shows that it requires a lower average update rate to achieve DoI performance similar to the time-triggered system.

The remainder of this work is structured as follows. In Sec. II we give an overview of related works. Our basic model of a system that is triggered by sensor events is developed in Sec. III where we also define suitable performance metrics. In Sec. IV we derive a lemma that is essential for our investigation of DoI. As an immediate corollary this lemma provides tail bounds of delay and AoI of time-triggered and event-triggered systems. We obtain our main result for the DoI in Sec. V. Brief conclusions are presented in Sec. VI.

Figure 2: Progression of the age-of-information $\Delta(t)$ over time $t$. $A(n)$ and $D(n)$ denote the network arrival and departure time stamps of sample $n$.

II Related Work

The notion of AoI as a performance metric and its relevance to a wide range of systems have attracted significant research. During the past decade, AoI results for a catalogue of queueing systems have been obtained [12, 8, 9]. Commonly, the time-average of the AoI, which can be visualized by the area under the curve in Fig. 2, is derived. Further, the peak AoI [21, 14], that is, the maximal AoI observed immediately before an update is received, and the tail distribution of the AoI [22, 23, 24, 25, 26] have been studied. In this work, we consider the peak AoI and, like [22, 23, 25], we employ techniques from the stochastic network calculus [17, 18, 19, 20] to estimate tail probabilities.

The starting point of our work is a number of studies that compare the impact of periodic versus exponential sampling on the AoI. Optimal update rates that minimize the average AoI are considered in [2] for M|M|1, D|M|1, and M|D|1 queues. It is observed that the random arrivals of the M|M|1 queue lead to a 50% increase of the AoI compared to the D|M|1 queue. For last-come first-served queues with and without preemption, [12] reports accordingly that the AoI of the D|M|1 queue outperforms that of the M|M|1 queue. The AoI of GI|GI|1|1 and GI|GI|1|2* queues is investigated in [13] and results are presented for deterministic arrivals and deterministic service, respectively. A comparison of periodic arrivals and Bernoulli arrivals in wireless networks [14] shows that periodic arrivals outperform Bernoulli arrivals considering average AoI and peak AoI. These results indicate that random sampling may in general perform worse than periodic sampling. A plausible implication is that event-triggered systems may be inferior to time-triggered systems.

While the AoI of a sample increases linearly with time, the actual validity period of that sample depends on the future progression of the sensor signal. Taking this aspect into account appears essential for the evaluation of event-triggered systems. A number of works employ a non-linear aging function to represent the value-of-information over time, see the survey [8]. The evolution of a random sensor signal cannot, however, be modeled by a deterministic function.

Sampling governed by an external random process is considered in energy-harvesting systems, where random energy arrivals trigger sensor updates, see [8] for an overview. Different from these works, the event-triggered systems that we consider are signal-aware, i.e., the progression of the signal itself triggers sensor updates.

More closely related to our work are a number of studies on remote estimation of the state of a linear plant with Gaussian disturbance via a network [6, 5, 7]. In [5] geometric transmission times with success probability $p$ are assumed, whereas [7] considers an erasure channel with loss probability $1-p$ and unit service time, and [6] investigates scheduling for a cellular network. The common target is to minimize the mean-square norm of the state error at the monitor. It is shown that this can be expressed by a non-decreasing function of the AoI, referred to as an age-penalty function in [7] and expressed as value-of-information in [6]. The result is an equivalent AoI minimization problem [5, 7] that is signal-agnostic. AoI minimization is studied in [2, 10, 11].

Remote estimation of Wiener processes using signal-aware sampling is analyzed in [15] and generalized to Ornstein-Uhlenbeck processes in [16]. Samples are generated whenever the instantaneous estimation error exceeds a threshold. The policy is proven to minimize the time-average mean-square error of the estimate. For signal-agnostic sampling it is shown that the problem can be recast as AoI minimization. Generally, the policies that are investigated include an adapted zero-wait condition, where a new sample is generated only after the previous sample is delivered, i.e., $A(n+1)\geq D(n)$ for all $n\geq 1$. This avoids the problem of waiting times in network queues but requires feedback information that is not included in our system model, see Fig. 1.

III Sensor Model and Performance Metrics

We model the sensor signal as a random process and define the performance metrics peak AoI and DoI at the monitor.

III-A Sensor Model

We consider a sensor that detects the occurrence of defined, random events indexed $n\in\mathbb{N}$ in order. Time $t\in\mathbb{R}_{0+}$ is continuous and non-negative. We denote $E(n)$ the time of occurrence of event $n\geq 1$, and define $E(0)=0$. For all $n\geq 1$ it holds that $E(n)\geq E(n-1)$, and $I(n)=E(n)-E(n-1)$ are the inter-event times. The event count

$C(t)=\max\{n\geq 0:E(n)\leq t\},$   (1)

denotes the cumulative number of events that occurred in $(0,t]$. By definition $C(t)\in\mathbb{N}_{0}$, $C(0)=0$, and $C(t)$ is non-decreasing and right-continuous.

The sensor is part of the system model in Fig. 1. Depending on a defined trigger, time or event, the sensor is sampled and an update message that contains the current event count $C(t)$ is sent. The update messages are indexed $n\in\mathbb{N}$ and we denote $A(n)$ and $D(n)$ their arrival time to the network and departure time from the network, respectively. For convenience, we define $A(0)=D(0)=0$, $A(\nu,n)=A(n)-A(\nu)$, and $D(\nu,n)=D(n)-D(\nu)$ for $n\geq\nu\geq 0$. Generally, for all $n\geq 1$ it holds that $D(n)\geq A(n)$ for causality.

In a time-triggered system, update messages are sent by the sensor at times $A(n)=nw$ for $n\geq 1$, where $w\in\mathbb{R}_{+}$ is the width of the update interval. In an event-triggered system, update messages are sent whenever the number of events since the last update exceeds a threshold $\alpha\in\mathbb{N}$. This happens at times $A(n)=E(n\alpha)$ for $n\geq 1$. We assume that the monitor does not have any other, a priori knowledge of the random sensor process. In particular, it does not know the distribution nor any moments of the sensor process.
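For illustration, the arrival times of both policies can be generated as follows for a Poisson sensor process (a sketch with assumed parameter values, not taken from the paper):

```python
# Sketch: update arrival times under both policies for Poisson sensor events (rate lam).
import numpy as np

rng = np.random.default_rng(1)
lam, w, alpha, N = 0.5, 2.0, 1, 10           # event rate, update interval, threshold, updates

I = rng.exponential(1.0 / lam, N * alpha)    # inter-event times I(n)
E = np.cumsum(I)                             # event times E(n), n = 1, 2, ...

A_time = w * np.arange(1, N + 1)             # time-triggered: A(n) = n * w
A_event = E[alpha - 1::alpha]                # event-triggered: A(n) = E(n * alpha)
print(A_time, A_event, sep="\n")
```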

Practical examples of our system range from networked leak or overflow sensors, alert counters and alert aggregation in cloud and network operations, to people-counting sensors, e.g., at emergency exits. More general sensor models may include processes $C(t)$ that are not non-decreasing. Examples include Gaussian noise and Wiener processes in [6, 5, 7, 15] or Markovian random walks. These may cause additional difficulties when defining a condition on the process $C(t)$ that triggers generation of update messages $A(n)$.

III-B Definition of Performance Metrics

The network delay, respectively the sojourn time, of message $n\geq 1$ can be written as

$T(n)=D(n)-A(n).$   (2)

A common definition of the AoI at time $t>D(1)$ is $\Delta(t)=t-\max_{n\geq 1}\{A(n):D(n)<t\}$. This definition matches [23] with the minor difference that we define $\Delta(t)$ as a left-continuous function. Thus, the peak AoI of update $n\geq 1$ follows as

$\Delta(n)=D(n+1)-A(n).$   (3)

Complementary to the AoI that is signal-agnostic, we define a signal-aware deviation-of-information (DoI) metric $\Phi(t)=C(t)-\max_{n\geq 1}\{C(A(n)):D(n)<t\}$ for $t>D(1)$. The DoI is the deviation of the current sensor signal from the latest value received by the monitor. The peak DoI of update $n\geq 1$ is

$\Phi(n)=C(D(n+1))-C(A(n)),$   (4)

that is attained at the departure time of update message $n+1$, when the monitor uses the information of update $n$ for the last time.
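Both peak metrics are easily computed from a trace: (3) needs only the time stamps, while (4) additionally needs the event count $C(t)$. A minimal sketch (our own helper names, purely illustrative):

```python
# Sketch: peak AoI (3) and peak DoI (4) from time stamps A(n), D(n) and event times E(m).
import numpy as np

def event_count(E, t):
    # C(t) = number of events with E(m) <= t
    return np.searchsorted(np.asarray(E), t, side="right")

def peak_aoi_doi(A, D, E):
    A, D = np.asarray(A, float), np.asarray(D, float)
    peak_aoi = D[1:] - A[:-1]                                   # Delta(n) = D(n+1) - A(n)
    peak_doi = event_count(E, D[1:]) - event_count(E, A[:-1])   # Phi(n) = C(D(n+1)) - C(A(n))
    return peak_aoi, peak_doi
```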

IV Delay and AoI Statistics

In this section, we define the queueing model and its statistical characterization. We derive a lemma for delay and AoI that is key to our later analysis of the DoI. This lemma also provides statistical delay bounds $T_{\varepsilon}$ and AoI bounds $\Delta_{\varepsilon}$ that satisfy $\mathsf{P}[T(n)>T_{\varepsilon}]\leq\varepsilon$ and $\mathsf{P}[\Delta(n)>\Delta_{\varepsilon}]\leq\varepsilon$, respectively.

IV-A Queueing Model

We model queueing systems and networks thereof using a definition of a max-plus server [27, Def. 1] that is adapted from the definition of the g-server in [17, Def. 6.3.1].

Definition 1 (Max-Plus Server).

A system with arrival process $A(n)$ and departure process $D(n)$ is a max-plus server with service process $S(\nu,n)$ if it holds for all $n\geq 1$ that

$D(n)\leq\max_{\nu\in[1,n]}\{A(\nu)+S(\nu,n)\}.$

The general class of work-conserving, lossless, first-in first-out (fifo) queueing systems satisfies the definition of a max-plus server with service process $S(\nu,n)=\sum_{m=\nu}^{n}L(m)$, where $L(m)\in\mathbb{R}_{+}$ is the service time of message $m\geq 1$ [27, Lem. 1]. This includes G|G|1 queues [17, Ex. 6.2.3]. Since any tandem of max-plus servers is a max-plus server, too, the model extends naturally to networks of queues.

By insertion of Def. 1 into the definition of the network delay (2) it follows readily for $n\geq 1$ that

$T(n)\leq\max_{\nu\in[1,n]}\{S(\nu,n)-A(\nu,n)\}.$   (5)

Similarly, for the peak AoI (3) we obtain for $n\geq 1$ that

$\Delta(n)\leq\max\Bigl\{\max_{\nu\in[1,n]}\{S(\nu,n+1)-A(\nu,n)\},\; S(n+1,n+1)+A(n,n+1)\Bigr\}.$   (6)

IV-B Statistical Characterization

We derive statistical tail bounds using Chernoff’s theorem

$\mathsf{P}[X\geq x]\leq e^{-\theta x}\mathsf{M}_{X}(\theta),$   (7)

for any $\theta>0$, where $\mathsf{M}_{X}(\theta)=\mathsf{E}[e^{\theta X}]$ is the moment generating function (MGF) of the random variable $X$ and $x$ is an arbitrary threshold parameter. We will frequently use that $\mathsf{M}_{X+Y}(\theta)=\mathsf{M}_{X}(\theta)\mathsf{M}_{Y}(\theta)$ for statistically independent random variables $X$ and $Y$.
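For instance, for an exponential random variable with rate $\mu$ the MGF is $\mu/(\mu-\theta)$ for $\theta<\mu$, and (7) can be optimized over $\theta$ numerically. The following sketch (our own code with illustrative values) compares the optimized Chernoff bound to the exact tail:

```python
# Sketch of (7): Chernoff bound for an exponential random variable with rate mu,
# optimized over theta on a grid and compared to the exact tail P[X >= x] = e^{-mu x}.
import numpy as np

mu, x = 0.25, 40.0
thetas = np.linspace(1e-4, mu * 0.999, 1000)
bounds = np.exp(-thetas * x) * mu / (mu - thetas)   # e^{-theta x} * M_X(theta)
print(bounds.min(), np.exp(-mu * x))                # optimized bound vs. exact tail
```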

We characterize the MGF of arrival and service processes by $(\sigma,\rho)$-envelopes defined in [17, Def. 7.2.1]. These are adapted to max-plus servers in [27, Def. 2]. We use arrival processes with independent and identically distributed (iid) increments $A(n-1,n)$ for $n\geq 1$, including deterministic increments as a special case. For iid increments the parameter $\sigma_{A}=0$ and the arrival process is characterized by an envelope rate $\rho_{A}>0$.

Definition 2 (Service and Arrival Envelopes).

Each of the following statements holds for all $n\geq\nu\geq 1$ and $\theta>0$. A service process $S(\nu,n)$ has a $(\overline{\sigma}_{S}(\theta),\overline{\rho}_{S}(\theta))$-upper envelope if

$\mathsf{E}\bigl[e^{\theta S(\nu,n)}\bigr]\leq e^{\theta(\overline{\sigma}_{S}(\theta)+\overline{\rho}_{S}(\theta)(n-\nu+1))}.$

An arrival process $A(\nu,n)$ has a $\underline{\rho}_{A}(\theta)$-lower envelope if

$\mathsf{E}\bigl[e^{-\theta A(\nu,n)}\bigr]\leq e^{-\theta\underline{\rho}_{A}(-\theta)(n-\nu)},$

and a $\overline{\rho}_{A}(\theta)$-upper envelope if

$\mathsf{E}\bigl[e^{\theta A(\nu,n)}\bigr]\leq e^{\theta\overline{\rho}_{A}(\theta)(n-\nu)}.$

Next, we obtain bounds of the MGF of delay and AoI that are an essential building block of the following derivations.

Lemma 1 (MGF bounds of delay and AoI).

Given arrivals $A(n)$ with iid increments and envelope parameters $(\underline{\rho}_{A},\overline{\rho}_{A})$ at a max-plus server $S(\nu,n)$ with envelope parameters $(\overline{\sigma}_{S},\overline{\rho}_{S})$. For any $\theta>0$ that satisfies $\underline{\rho}_{A}(-\theta)>\overline{\rho}_{S}(\theta)$ it holds for the MGF of the delay $T(n)$ for any $n\geq 1$ that

$\mathsf{M}_{T}(\theta)\leq\frac{e^{\theta(\overline{\sigma}_{S}(\theta)+\overline{\rho}_{S}(\theta))}}{1-e^{-\theta(\underline{\rho}_{A}(-\theta)-\overline{\rho}_{S}(\theta))}},$

and for the MGF of the AoI $\Delta(n)$ for any $n\geq 1$ that

$\mathsf{M}_{\Delta}(\theta)\leq\frac{e^{\theta(\overline{\sigma}_{S}(\theta)+2\overline{\rho}_{S}(\theta))}}{1-e^{-\theta(\underline{\rho}_{A}(-\theta)-\overline{\rho}_{S}(\theta))}}+e^{\theta(\overline{\sigma}_{S}(\theta)+\overline{\rho}_{S}(\theta)+\overline{\rho}_{A}(\theta))}.$
Proof.

We first show the derivation of the MGF of the delay. The MGF of the AoI follows similarly.

Delay

We estimate the MGF of the sojourn time using the approach from [17, 28]. It follows from (5) for $n\geq 1$ and $\theta>0$ that

$\mathsf{M}_{T}(\theta,n)\leq\mathsf{E}\bigl[e^{\theta\max_{\nu\in[1,n]}\{S(\nu,n)-A(\nu,n)\}}\bigr]$
$=\mathsf{E}\bigl[\max_{\nu\in[1,n]}\bigl\{e^{\theta(S(\nu,n)-A(\nu,n))}\bigr\}\bigr]$
$\leq\mathsf{E}\Bigl[\sum_{\nu=1}^{n}e^{\theta(S(\nu,n)-A(\nu,n))}\Bigr]$
$=\sum_{\nu=1}^{n}\mathsf{E}\bigl[e^{\theta S(\nu,n)}\bigr]\mathsf{E}\bigl[e^{-\theta A(\nu,n)}\bigr],$

where we used independence of $S(\nu,n)$ and $A(\nu,n)$. By insertion of the envelope parameters we have

$\mathsf{M}_{T}(\theta,n)\leq e^{\theta(\overline{\sigma}_{S}(\theta)+\overline{\rho}_{S}(\theta))}\sum_{\nu=1}^{n}\Bigl(e^{-\theta(\underline{\rho}_{A}(-\theta)-\overline{\rho}_{S}(\theta))}\Bigr)^{n-\nu}$
$\leq e^{\theta(\overline{\sigma}_{S}(\theta)+\overline{\rho}_{S}(\theta))}\sum_{\nu=0}^{\infty}\Bigl(e^{-\theta(\underline{\rho}_{A}(-\theta)-\overline{\rho}_{S}(\theta))}\Bigr)^{\nu},$

where $\sum_{\nu=0}^{\infty}x^{\nu}=1/(1-x)$ if $x<1$ concludes the proof, implying the stability condition $\underline{\rho}_{A}(-\theta)>\overline{\rho}_{S}(\theta)$.

AoI

We use the same essential steps to estimate the MGF of the AoI. From (6) we have for $n\geq 1$ and $\theta>0$ that

$\mathsf{M}_{\Delta}(\theta,n)\leq\sum_{\nu=1}^{n}\mathsf{E}\bigl[e^{\theta S(\nu,n+1)}\bigr]\mathsf{E}\bigl[e^{-\theta A(\nu,n)}\bigr]+\mathsf{E}\bigl[e^{\theta S(n+1,n+1)}\bigr]\mathsf{E}\bigl[e^{\theta A(n,n+1)}\bigr]$
$\leq e^{\theta(\overline{\sigma}_{S}(\theta)+2\overline{\rho}_{S}(\theta))}\sum_{\nu=1}^{n}\Bigl(e^{-\theta(\underline{\rho}_{A}(-\theta)-\overline{\rho}_{S}(\theta))}\Bigr)^{n-\nu}+e^{\theta(\overline{\sigma}_{S}(\theta)+\overline{\rho}_{S}(\theta)+\overline{\rho}_{A}(\theta))}.$

Again, $\underline{\rho}_{A}(-\theta)>\overline{\rho}_{S}(\theta)$ achieves convergence if $n\rightarrow\infty$. ∎
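For later numerical evaluation, the two MGF bounds of Lem. 1 can be coded directly from the envelope parameters. A minimal sketch (our own helper functions; the stability condition is assumed to hold for the given $\theta$):

```python
# Sketch: MGF bounds of Lem. 1 as functions of the envelope parameters.
import numpy as np

def mgf_delay(theta, sig_S, rho_S, rho_A_lo):
    assert rho_A_lo > rho_S, "stability condition violated"
    return np.exp(theta * (sig_S + rho_S)) / (1.0 - np.exp(-theta * (rho_A_lo - rho_S)))

def mgf_aoi(theta, sig_S, rho_S, rho_A_lo, rho_A_up):
    assert rho_A_lo > rho_S, "stability condition violated"
    geo = 1.0 - np.exp(-theta * (rho_A_lo - rho_S))
    return (np.exp(theta * (sig_S + 2.0 * rho_S)) / geo
            + np.exp(theta * (sig_S + rho_S + rho_A_up)))
```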

IV-C Statistical Performance Bounds

Figure 3: Sojourn time bounds of the time-triggered system and the event-triggered system with exponential inter-event times, exponential service times, and parameter $\alpha=1$. In this case, the time-triggered system is a D|M|1 queue and corresponding simulation results are shown for comparison, and the event-triggered system is an M|M|1 queue that has a known tail distribution.

Statistical delay and AoI bounds follow as an immediate corollary of Lem. 1 and Chernoff's theorem (7). Specifically, we have for the delay for any $n\geq 1$ and $\theta>0$ that

$\mathsf{P}[T(n)\geq T_{\varepsilon}]\leq e^{-\theta T_{\varepsilon}}\mathsf{M}_{T}(\theta)=:\varepsilon.$

Solving for $T_{\varepsilon}$ we have that

$T_{\varepsilon}(\theta)=\frac{\ln\mathsf{M}_{T}(\theta)-\ln\varepsilon}{\theta},$   (8)

and similarly for the AoI

$\Delta_{\varepsilon}(\theta)=\frac{\ln\mathsf{M}_{\Delta}(\theta)-\ln\varepsilon}{\theta},$   (9)

are statistical upper bounds of delay and AoI, respectively, that are exceeded at most with probability $\varepsilon$. Since $T_{\varepsilon}(\theta)$ and $\Delta_{\varepsilon}(\theta)$ are valid upper bounds for any $\theta>0$, we can optimize $\theta>0$ to find the smallest upper bounds. Next, we evaluate these bounds for time-triggered and event-triggered systems, respectively.

IV-C1 Time-triggered systems

For a time-triggered system where update messages are generated at times $A(n)=nw$ for $n\geq 1$ and $w\in\mathbb{R}_{+}$ is the width of the update interval, the envelope parameters in Def. 2 for all $\theta>0$ are simply

$\underline{\rho}_{A}=w,\qquad\overline{\rho}_{A}=w.$   (10)

IV-C2 Event-triggered systems

Figure 4: Sojourn time and AoI bounds for $\varepsilon=10^{-6}$ for the time-triggered and the event-triggered system. Inter-event times are exponential with parameter $\lambda$, with (a) $\lambda=0.25$, (b) $\lambda=0.5$, and (c) $\lambda=1$. The update interval $w$ of the time-triggered system and the event threshold $\alpha$ of the event-triggered system are varied, where $\alpha=\lambda w$ achieves the same mean network utilization for both systems.

For an event-triggered system, $A(n)=E(n\alpha)$ for $n\geq 1$ and $\alpha\in\mathbb{N}$ is a threshold parameter. We assume that inter-event times $I(n)$ are iid with MGF $\mathsf{M}_{I}(\theta)$. With $A(n)=\sum_{\nu=1}^{n\alpha}I(\nu)$, it follows for $\theta>0$ that

$\underline{\rho}_{A}(-\theta)=-\frac{\alpha}{\theta}\ln(\mathsf{M}_{I}(-\theta)),\qquad\overline{\rho}_{A}(\theta)=\frac{\alpha}{\theta}\ln(\mathsf{M}_{I}(\theta)).$   (11)

If the time between events is exponential with parameter $\lambda>0$ we have for $\theta<\lambda$ that

$\mathsf{M}_{I}(\theta)=\frac{\lambda}{\lambda-\theta}.$

In this case, the sensor signal $C(t)$ is a Poisson counting process with parameter $\lambda$. Further, the time between two event-triggered update messages is iid Erlang with parameters $\alpha$ and $\lambda$.

IV-C3 Service times

We consider messages of variable length and denote $L(n)$ the service time of message $n\geq 1$. It holds that $S(\nu,n)=\sum_{m=\nu}^{n}L(m)$ [27, Lem. 1] and considering iid service times it follows for $\theta>0$ that $\overline{\sigma}_{S}=0$ and

$\overline{\rho}_{S}(\theta)=\frac{1}{\theta}\ln(\mathsf{M}_{L}(\theta)).$   (12)

Considering exponential service times with parameter $\mu>0$ we have for $\theta<\mu$ that

$\mathsf{M}_{L}(\theta)=\frac{\mu}{\mu-\theta}.$

We will also consider the case of deterministic message service times $L(n)=l$ for $n\geq 1$ and $l>0$, which gives $\overline{\rho}_{S}=l$.
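Collecting the exponential cases, the envelope rates that enter Lem. 1 can be computed as follows (a sketch with our own function names, valid for $0<\theta<\min(\lambda,\mu)$):

```python
# Sketch: envelope rates (11) and (12) for exponential inter-event times (rate lam)
# and exponential service times (rate mu).
import numpy as np

def rho_A_lower(theta, lam, alpha):
    # (11) with M_I(-theta) = lam / (lam + theta)
    return -alpha / theta * np.log(lam / (lam + theta))

def rho_A_upper(theta, lam, alpha):
    # (11) with M_I(theta) = lam / (lam - theta), theta < lam
    return alpha / theta * np.log(lam / (lam - theta))

def rho_S(theta, mu):
    # (12) with M_L(theta) = mu / (mu - theta), theta < mu
    return np.log(mu / (mu - theta)) / theta

print(rho_A_lower(0.1, lam=0.5, alpha=2), rho_S(0.1, mu=0.25))
```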

IV-C4 Numerical results

Statistical delay and AoI bounds follow from (8) and (9), respectively, by insertion of the envelope parameters (10) or (11), and (12) into Lem. 1. We optimize the free parameter $\theta$ numerically to obtain the smallest upper bound.
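As an illustration of this step, the following sketch (our own code, illustrative parameters) evaluates the delay bound (8) for the time-triggered D|M|1 case, where $\underline{\rho}_{A}=w$, $\overline{\sigma}_{S}=0$, and $\overline{\rho}_{S}(\theta)=\ln(\mu/(\mu-\theta))/\theta$, optimizing $\theta$ on a grid:

```python
# Sketch: statistical delay bound (8) for the time-triggered D|M|1 system,
# with the free parameter theta optimized on a grid.
import numpy as np

def delay_bound(eps, w, mu, grid=2000):
    thetas = np.linspace(1e-4, mu * 0.999, grid)
    rho_S = np.log(mu / (mu - thetas)) / thetas            # envelope rate (12)
    stable = w > rho_S                                     # stability condition per theta
    thetas, rho_S = thetas[stable], rho_S[stable]
    mgf_T = np.exp(thetas * rho_S) / (1.0 - np.exp(-thetas * (w - rho_S)))  # Lem. 1
    return ((np.log(mgf_T) - np.log(eps)) / thetas).min()  # tightest bound (8)

print(delay_bound(eps=1e-6, w=2.0, mu=1.0))
```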

The time-triggered system is a D|G|1 queue, or in the case of exponential service times a D|M|1 queue, respectively. The event-triggered system is of type G|G|1, respectively Erlang-$\alpha$|M|1 in the case of exponential inter-event times and exponential service times. For $\alpha=1$ it becomes a basic M|M|1 queue. For reference, the exact tail distribution of $T_{\varepsilon}$ of the M|M|1 queue is known [29] as

$\varepsilon=e^{-\mu\left(1-\frac{\lambda}{\mu}\right)T_{\varepsilon}}.$   (13)

In Fig. 3 we display the tail decay of sojourn time bounds of the time-triggered and the event-triggered system with exponential inter-event times with parameter $\lambda=0.5$ and exponential service times with parameter $\mu=1$. We consider the case $\alpha=1$ for the event-triggered system. For the time-triggered system we choose parameter $w=2$, which achieves the same average network utilization. For comparison, we include empirical quantiles from $10^{9}$ sojourn time samples obtained by simulation of a D|M|1 queue and the tail distribution of the M|M|1 queue (13). The tail bounds exhibit the correct speed of tail decay and show the expected accuracy [20].
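The simulation reference can be reproduced with the standard FIFO recursion for periodic arrivals; a sketch (our own code, with a smaller sample size than used in the paper):

```python
# Sketch: empirical sojourn times of a D|M|1 queue (periodic arrivals, spacing w;
# exponential service, rate mu) via the FIFO recursion.
import numpy as np

rng = np.random.default_rng(0)
w, mu, N = 2.0, 1.0, 10**6

L = rng.exponential(1.0 / mu, N)           # service times L(n)
T = np.empty(N)                            # sojourn times T(n)
T[0] = L[0]
for n in range(1, N):
    T[n] = max(T[n - 1] - w, 0.0) + L[n]   # leftover waiting time + own service

print(np.quantile(T, 1.0 - 1e-3))          # empirical 10^-3 tail quantile
```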

In Fig. 4 we compare delay and AoI bounds with probability $\varepsilon=10^{-6}$ of the time-triggered and the event-triggered system. Service times and inter-event times are exponential, where the service rate is $\mu=0.25$ and different sensor event rates $\lambda\in\{0.25,0.5,1\}$ are used. While the arrival process of the time-triggered system is not affected by $\lambda$, the arrival process of the event-triggered system is Erlang with parameters $\alpha$ and $\lambda$. We show results for different update intervals $w$ and we set the event threshold $\alpha=\lambda w$, that is, the mean number of events during an interval of duration $w$, to achieve the same average utilization for the time-triggered and the event-triggered system.

Figure 5: Same parameters as in Fig. 4(b) but deterministic service times.

It can be observed that all curves in Fig. 4 show a tremendous increase if $w$ and $\alpha$ become small. This corresponds to high network utilization that induces queueing delays. In the case of large $w$ and $\alpha$, the network delay converges to the service time quantile of a message, whereas the AoI grows almost linearly due to increasingly rare update messages. Generally, it can be observed that the event-triggered system shows worse delay and AoI performance than the time-triggered system. Similar observations have also been made for periodic versus random arrivals in [2, 12, 13, 14]. This is a consequence of the variability of the arrival process of the event-triggered system, which leads to two different effects: bursts of update messages cause queueing delays in the network, an effect that is dominant in the left of the graphs in Fig. 4; or the absence of update messages causes idle waiting, dominant in the right of the graphs. With increasing $\lambda$ and $\alpha$ the arrival process becomes smoother and the performance of the event-triggered system approaches that of the time-triggered system, see Fig. 4(c).

Fig. 5 uses the same parameters as Fig. 4(b) with the exception that the network service times are deterministic, i.e., the queue is served with a constant service rate of $0.25$. In this case, the time-triggered system is a D|D|1 queue and the bounds obtained from Lem. 1 correctly identify the delay $T_{\varepsilon}=4$ and the AoI $\Delta_{\varepsilon}=4+w$ for all $w>4$. The event-triggered system is an Erlang-$\alpha$|D|1 queue. For small $\alpha$, corresponding to high utilization, the burstiness of the arrivals causes large queueing delays. With increasing $\alpha$ the queueing delays diminish quickly and the system switches sharply to a regime where the AoI is dominated by idle waiting due to too infrequent update messages.

V DoI Bounds

In this section, we investigate how event-triggered systems perform compared to time-triggered systems if we consider the signal-aware DoI metric. We derive statistical bounds of the DoI of time-triggered and event-triggered systems and show numerical as well as simulation results.

V-A Analysis

We derive statistical bounds of the peak DoI $\Phi_{\varepsilon}$ that satisfy $\mathsf{P}[\Phi(n)>\Phi_{\varepsilon}]\leq\varepsilon$. The analysis of the DoI is more involved due to the use of the doubly stochastic processes $C(A(n))$ and $C(D(n))$. As before, we consider time-triggered systems, where update messages are generated at times $A(n)=nw$ for $n\geq 1$ and $w\in\mathbb{R}_{+}$ is the width of the update interval, and event-triggered systems, where update messages are generated at times $A(n)=E(n\alpha)$ and $\alpha\in\mathbb{N}$ is the event threshold, respectively. The following theorem uses Lem. 1 to state our main result.

Theorem 1 (DoI bounds).

Given the assumptions of Lem. 1. Consider events with iid inter-event times $I(n)$ for $n\geq 1$ and denote $J(t)$ the residual inter-event time at time $t\geq 0$.

For the DoI $\Phi(n)$ of a time-triggered system with update interval $w$ and envelope parameters (10), it holds for all $n\geq 1$, $\theta>0$, and $\Phi_{\varepsilon}\in\mathbb{N}_{0}$ that

$\mathsf{P}[\Phi(n)>\Phi_{\varepsilon}]\leq\mathsf{M}_{\Delta}(\theta)\,\mathsf{M}_{J(A(n))}(-\theta)\,(\mathsf{M}_{I}(-\theta))^{\Phi_{\varepsilon}}.$

For the DoI $\Phi(n)$ of an event-triggered system with threshold $\alpha$ and envelope parameters (11), it holds for all $n\geq 1$, $\theta>0$, and $\Phi_{\varepsilon}\in\mathbb{N}_{0}$ with $\Phi_{\varepsilon}\geq\alpha-1$ that

$\mathsf{P}[\Phi(n)>\Phi_{\varepsilon}]\leq\mathsf{M}_{T}(\theta)\,(\mathsf{M}_{I}(-\theta))^{\Phi_{\varepsilon}-\alpha+1}.$

The MGF of the residual inter-event time can be estimated as $\mathsf{M}_{J(t)}(-\theta)\leq 1$ for $\theta>0$. For a memoryless distribution we also have $\mathsf{M}_{J(t)}(-\theta)=\mathsf{M}_{I}(-\theta)$.

Equating the bound for time-triggered systems in Th. 1 with $\varepsilon$ and considering a memoryless inter-event distribution, we can solve for

$\Phi_{\varepsilon}=\biggl\lceil\frac{\ln\varepsilon-\ln\mathsf{M}_{\Delta}(\theta)}{\ln\mathsf{M}_{I}(-\theta)}\biggr\rceil-1,$

and for event-triggered systems

$\Phi_{\varepsilon}=\biggl\lceil\frac{\ln\varepsilon-\ln\mathsf{M}_{T}(\theta)}{\ln\mathsf{M}_{I}(-\theta)}\biggr\rceil+\alpha-1.$
Proof.

We start with the proof for event-triggered systems, since time-triggered systems pose some additional difficulties.

Event-triggered system

By definition of the event-triggered system we have $C(A(n))=n\alpha$. Using (1), we also have $C(D(n+1))=\max\{\nu\geq 0:E(\nu)\leq D(n+1)\}$. Further, for the last expression we know that $\nu\geq(n+1)\alpha$, since $D(n+1)\geq A(n+1)$ and hence $C(D(n+1))\geq C(A(n+1))=(n+1)\alpha$. By insertion into (4) it holds for $n\geq 0$ that

$\Phi(n)=\max\{\nu\geq(n+1)\alpha:E(\nu)\leq D(n+1)\}-n\alpha.$

With a variable substitution it follows that

$\Phi(n)=\alpha+\max\{\nu\geq 0:E((n+1)\alpha+\nu)\leq D(n+1)\}.$

We use $D(n+1)=A(n+1)+T(n+1)$ and $A(n+1)=E((n+1)\alpha)=\sum_{m=1}^{(n+1)\alpha}I(m)$ to obtain

$\Phi(n)=\alpha+\max\Bigl\{\nu\geq 0:\sum_{m=(n+1)\alpha+1}^{(n+1)\alpha+\nu}I(m)\leq T(n+1)\Bigr\}.$

Now, choose some $\Phi_{\varepsilon}\in\mathbb{N}_{0}$ with $\Phi_{\varepsilon}\geq\alpha-1$. The case $\Phi(n)>\Phi_{\varepsilon}$ occurs iff $\nu=\Phi_{\varepsilon}-\alpha+1$ satisfies the condition above, i.e., $\sum_{m=(n+1)\alpha+1}^{(n+1)\alpha+\nu}I(m)\leq T(n+1)$. It follows that

$\mathsf{P}[\Phi(n)>\Phi_{\varepsilon}]=\mathsf{P}\Bigl[T(n+1)-\sum_{m=(n+1)\alpha+1}^{(n+1)\alpha+\Phi_{\varepsilon}-\alpha+1}I(m)\geq 0\Bigr].$

With Chernoff's theorem (7) we have $\mathsf{P}[X\geq 0]\leq\mathsf{M}_{X}(\theta)$ for $\theta>0$, so that

$\mathsf{P}[\Phi(n)>\Phi_{\varepsilon}]\leq\mathsf{M}\Bigl[T(n+1)-\sum_{m=(n+1)\alpha+1}^{(n+1)\alpha+\Phi_{\varepsilon}-\alpha+1}I(m)\Bigr](\theta).$

The result of Th. 1 follows for iid inter-event times $I(m)$. Note that for iid inter-event times $T(n+1)$ is independent of events that occur after $A(n+1)=E((n+1)\alpha)$.

Figure 6: AoI and DoI bounds for $\varepsilon=10^{-6}$ for the time-triggered and the event-triggered system: (a) deterministic events, exponential service; (b) exponential events, exponential service; (c) exponential events, deterministic service. The update interval width $w$ and the event threshold $\alpha$ are adjusted to achieve the desired network utilization.
Time-triggered systems

For time-triggered systems, we have the additional difficulty that the generation of messages is not synchronized with the occurrence of events. Instead, at time $t\geq 0$, e.g., $t=A(n)$, we only know that the last event occurred at time $E(C(t))$ and the next event occurs at time $E(C(t)+1)=E(C(t))+I(C(t)+1)$. We denote $J(t)$ the residual inter-event time at time $t\geq 0$ until the next event occurs, i.e., $J(t)=E(C(t)+1)-t$. It follows that

$J(t)=E(C(t))+I(C(t)+1)-t.$   (14)

First, we formalize an intermediate result. Consider some times $t,\tau\geq 0$. From (1) we have

$C(t+\tau)=\max\{\nu\geq C(t):E(\nu)\leq t+\tau\}.$   (15)

For $\nu\geq C(t)+1$ we can write

$E(\nu)=E(C(t))+\sum_{m=C(t)+1}^{\nu}I(m)=t+J(t)+\sum_{m=C(t)+2}^{\nu}I(m),$   (16)

where we use (14) in the second step. By insertion of (16) for $\nu\geq C(t)+1$ into (15) and noting that the case $\nu=C(t)$ is trivial, we obtain that

$C(t+\tau)=\max\Bigl\{\nu\geq C(t):J(t)1_{\nu\geq C(t)+1}+\sum_{m=C(t)+2}^{\nu}I(m)\leq\tau\Bigr\}$
$=C(t)+\max\Bigl\{\nu\geq 0:J(t)1_{\nu\geq 1}+\sum_{m=C(t)+2}^{C(t)+\nu}I(m)\leq\tau\Bigr\},$

where $1_{(\cdot)}$ is the indicator function that is one if the argument is true and zero otherwise.

Next, we insert $D(n+1)=A(n)+\Delta(n)$ from (3) into (4) and with the previous result we obtain, by substitution of $t=A(n)$ and $\tau=\Delta(n)$, for $n\geq 1$ that

$\Phi(n)=C(A(n)+\Delta(n))-C(A(n))=\max\Bigl\{\nu\geq 0:J(A(n))1_{\nu\geq 1}+\sum_{m=C(A(n))+2}^{C(A(n))+\nu}I(m)\leq\Delta(n)\Bigr\}.$

Now, choose some $\Phi_{\varepsilon}\in\mathbb{N}_{0}$. The case $\Phi(n)>\Phi_{\varepsilon}$ occurs iff $\nu=\Phi_{\varepsilon}+1$ satisfies the condition above. It follows that

$\mathsf{P}[\Phi(n)>\Phi_{\varepsilon}]=\mathsf{P}\Bigl[\Delta(n)-J(A(n))-\sum_{m=C(A(n))+2}^{C(A(n))+\Phi_{\varepsilon}+1}I(m)\geq 0\Bigr].$

With Chernoff's theorem (7) we have for $\theta>0$ that

$\mathsf{P}[\Phi(n)>\Phi_{\varepsilon}]\leq\mathsf{M}\Bigl[\Delta(n)-J(A(n))-\sum_{m=C(A(n))+2}^{C(A(n))+\Phi_{\varepsilon}+1}I(m)\Bigr](\theta).$

The result of Th. 1 follows for iid inter-event times $J(A(n))$ and $I(m)$. We note that in a time-triggered system $\Delta(n)$ is independent of the occurrence of events. ∎
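For the memoryless case, the two closed forms stated below Th. 1 are straightforward to evaluate once the MGF bounds of Lem. 1 are available; a minimal sketch with illustrative function names and inputs:

```python
# Sketch: DoI bounds from the closed forms below Th. 1 for memoryless inter-event
# times, where M_I(-theta) = lam / (lam + theta). Inputs are MGF-bound values.
import numpy as np

def doi_time_triggered(eps, mgf_aoi_val, lam, theta):
    # Phi_eps = ceil((ln eps - ln M_Delta(theta)) / ln M_I(-theta)) - 1
    return int(np.ceil((np.log(eps) - np.log(mgf_aoi_val))
                       / np.log(lam / (lam + theta)))) - 1

def doi_event_triggered(eps, mgf_delay_val, lam, theta, alpha):
    # Phi_eps = ceil((ln eps - ln M_T(theta)) / ln M_I(-theta)) + alpha - 1
    return int(np.ceil((np.log(eps) - np.log(mgf_delay_val))
                       / np.log(lam / (lam + theta)))) + alpha - 1

print(doi_time_triggered(1e-6, mgf_aoi_val=5.0, lam=0.5, theta=0.1))
```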

V-B Numerical Results

Figure 7: AoI and DoI distribution for the systems in Fig. 6(b): (a) empirical distribution; (b) tail decay, time-triggered system; (c) tail decay, event-triggered system. For the time-triggered system the update interval width is $w=13$ and for the event-triggered system the event threshold is $\alpha=8$. These parameters correspond to utilizations of $0.325$ and $0.25$, respectively, which minimize the $\varepsilon=10^{-6}$ DoI bounds.

In Fig. 6 we show tail bounds of the AoI and DoI for $\varepsilon=10^{-6}$. The bounds are derived using Lem. 1 and Th. 1. The free parameter $\theta$ is optimized numerically. We consider a range of relevant time-triggered and event-triggered systems. In all cases, the mean rate of sensor events is $\lambda=0.5$ and the mean service rate of the network queue is $\mu=0.25$. The width of the update interval $w$ of the time-triggered system and the event threshold $\alpha$ of the event-triggered system are varied in unison so that both cause the same network utilization, that is, $1/(w\mu)$ and $\lambda/(\alpha\mu)$, respectively. We use the network utilization as the abscissa. For reasons of presentability, we mostly ignore the integer constraints of $\alpha$ and $\Phi_{\varepsilon}$ in the figures.

Deterministic events, exponential service

In Fig. 6(a) we consider exponential network service times with parameter $\mu$ and a deterministic sensor signal, i.e., periodic events with deterministic inter-event times $1/\lambda=2$. This degenerate case serves as a reference. In this case both the time-triggered and the event-triggered system send updates periodically. We choose $\alpha=\lambda w$ to ensure the same network utilization, resulting in identical delay and AoI bounds.

The DoI bounds differ slightly since update messages are synchronized with the occurrence of sensor events in the event-triggered system but not in the time-triggered system. This is reflected by the residual inter-event time $J(t)$ in Th. 1. Since deterministic inter-event times are not memoryless, we estimate $\mathsf{M}_{J(t)}(-\theta)<1$ for $\theta>0$ by $1$ and obtain with Th. 1 for the time-triggered system that

$\Phi_{\varepsilon}=\frac{\ln\varepsilon-\ln\mathsf{M}_{\Delta}(\theta)}{\ln\mathsf{M}_{I}(-\theta)}=\frac{\lambda(\ln\mathsf{M}_{\Delta}(\theta)-\ln\varepsilon)}{\theta}=\lambda\Delta_{\varepsilon},$

where we inserted the MGF $\mathsf{M}_{I}(-\theta)=e^{-\theta/\lambda}$ of the deterministic inter-event time $1/\lambda$ and ignored integer constraints. In the final step, we substituted the AoI bound $\Delta_{\varepsilon}$ (9). This implies that the update rate that achieves the minimal AoI also minimizes the DoI in this case. As can be observed in Fig. 6(a), the minimal AoI bound $\Delta_{\varepsilon}=88$ and the minimal DoI bound $\Phi_{\varepsilon}=44$, corresponding to $\lambda=0.5$, are achieved for the same network utilization of about 0.3.

Exponential events, exponential service

The direct correspondence of AoI and DoI, $\Phi_{\varepsilon}=\lambda\Delta_{\varepsilon}$, observed in Fig. 6(a) is, however, not given in the case of a random sensor signal. In Fig. 6(b) we show results for exponential instead of deterministic inter-event times. All other parameters are unchanged. The same set of parameters has also been used for Fig. 4(b).

For the time-triggered system, which is signal-agnostic, the AoI is generally unaffected by the choice of the sensor model. Consequently, the AoI in Fig. 6(b) is identical to Fig. 6(a). The DoI increases, however, since a varying number of sensor events may occur during any update interval.

In case of the event-triggered system, the AoI in Fig. 6(b) is larger than in Fig. 6(a) since the arrivals to the network are now a random process. Due to the randomness, the AoI of the event-triggered system is generally larger than the AoI of the time-triggered system, as also observed in Fig. 4.

Regarding the DoI, the event-triggered system has the advantage that it is signal-aware and sends update messages only if needed. Interestingly, both systems, time-triggered and event-triggered, show comparable minimal DoI. For an intuitive explanation consider a burst of sensor events. In this case, the event-triggered system samples the sensor more frequently with the goal to improve the DoI. The increased rate of update messages may, however, cause network congestion and queueing delays that are detrimental to the DoI and outweigh the advantage. Overall, this appears to cause similar minimal DoI, however at a lower average network utilization for the event-triggered system. Concluding, the u-shaped DoI curves in Fig. 6(b) show that both systems are feasible and robust to variations of the network utilization. Configured optimally, the event-triggered system uses fewer network resources. It generates, however, more bursty network traffic.

A related finding in [5, 7] is that the problem of minimizing the mean-square norm of the state error at the monitor is equivalent to a signal-agnostic AoI minimization problem. In case of our event-triggered and hence signal-aware system, Fig. 6(b) does not confirm a similar result. Here, the network utilization that achieves the minimal tail bounds is different for the AoI and DoI, respectively.

Exponential events, deterministic service

Fig. 6(c) shows results for the same system as in Fig. 6(b) but with deterministic service times $1/\mu=4$ as also used in Fig. 5. In this case the time-triggered system is purely deterministic and achieves a very small AoI that is determined as the sum of the network service time and the width of the update interval. Hence, the AoI is minimal in case of full network utilization. The same applies for the DoI bound.

The event-triggered system shows a much larger AoI that is due to the randomness of the update messages. For low utilization, corresponding to a large threshold $\alpha$, the AoI is large due to infrequent updates if the sensor signal does not change much. In case of high utilization, i.e., small $\alpha$, queueing delays start to dominate and the AoI bends sharply upwards.

Despite the large AoI, the event-triggered system achieves a similarly good minimal DoI bound as the time-triggered system. Specifically at low utilization, the DoI bound of the event-triggered system is much smaller. This is a consequence of the deterministic network service, where the delivery of an update message within one message service time $1/\mu=4$ is almost guaranteed, given the utilization is low and queueing delays are avoided. This is particularly favorable for the event-triggered system since, once the sensor signal changes by more than the threshold $\alpha$, an update message can be delivered with high probability within a short time.

Decay of tail probabilities

In Fig. 6(b) the minimal DoI bound of the time-triggered system is achieved for $w\approx 13$ and that of the event-triggered system for $\alpha=8$, corresponding to utilizations of $0.325$ and $0.25$, respectively. We investigate these parameters in more detail in Fig. 7, where we show AoI and DoI bounds as well as empirical quantiles from $10^{8}$ samples of the AoI and DoI obtained by simulation. While the minimal DoI bounds of the time-triggered and event-triggered systems in Fig. 6(b) are about the same for $\varepsilon=10^{-6}$, we see in Fig. 7(a) that the DoI quantiles differ if $\varepsilon$ is not small. Particularly, the DoI approaches $\alpha$ for $\varepsilon\rightarrow 1$ in case of the event-triggered system and zero in case of the time-triggered system. Conversely, the AoI approaches $w$ for $\varepsilon\rightarrow 1$ in case of the time-triggered system and zero in case of the event-triggered system. We do not display tail bounds for the range of $\varepsilon$ in Fig. 7(a). We include the bounds in Fig. 7(b) and Fig. 7(c), where we show the tail decay. It can be noticed that the DoI bounds and the empirical DoI quantiles of the time-triggered and the event-triggered system exhibit the same speed of tail decay. This dominates the DoI if $\varepsilon$ is small, causing similar DoI performance for both systems.
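The empirical quantiles can be reproduced with a short simulation of the event-triggered system; a sketch (our own code, with a smaller sample size than the $10^{8}$ samples used in the paper):

```python
# Sketch: empirical peak AoI and peak DoI of the event-triggered system
# (Poisson events, rate lam; FIFO queue with exponential service, rate mu).
import numpy as np

rng = np.random.default_rng(2)
lam, mu, alpha, N = 0.5, 0.25, 8, 10**5

I = rng.exponential(1.0 / lam, (N + 1) * alpha)
E = np.cumsum(I)                               # event times E(m)
A = E[alpha - 1::alpha]                        # arrivals A(n) = E(n * alpha)
L = rng.exponential(1.0 / mu, N + 1)           # service times
D = np.empty(N + 1)
D[0] = A[0] + L[0]
for n in range(1, N + 1):
    D[n] = max(A[n], D[n - 1]) + L[n]          # FIFO departure recursion

C = lambda t: np.searchsorted(E, t, side="right")   # event count C(t)
peak_aoi = D[1:] - A[:-1]                      # Delta(n) = D(n+1) - A(n)
peak_doi = C(D[1:]) - C(A[:-1])                # Phi(n)   = C(D(n+1)) - C(A(n))
print(np.quantile(peak_aoi, 0.999), np.quantile(peak_doi, 0.999))
```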

In Fig. 8 we include simulation results for non-optimal parameters $w$ and $\alpha$. For smaller $w$ and $\alpha$ we see an improvement of the AoI and DoI if $\varepsilon$ is not small. This is due to more frequent update messages. At the same time, this causes increased network utilization and a smaller speed of tail decay. This consumes the initial advantage when $\varepsilon$ becomes small and leads to worse tail performance. In the case of larger than optimal $w$ and $\alpha$, update messages are sent less frequently so that the AoI and DoI increase. This also brings about a reduction of the network utilization, which can, however, only achieve a small improvement of the speed of the tail decay that is not relevant for $\varepsilon=10^{-6}$.

Figure 8: Empirical AoI and DoI distribution for the system in Fig. 6(b): (a) time-triggered system; (b) event-triggered system. Parameters $w=13$ and $\alpha=8$ minimize the $\varepsilon=10^{-6}$ DoI bound. It can be seen how smaller or larger parameters are sub-optimal for $\varepsilon=10^{-6}$.

VI Conclusions

We considered remote monitoring of a sensor via a network. The sampling policy of the sensor is either time-triggered or event-triggered. Correspondingly, sampling is either signal-agnostic or signal-aware. We derived tail bounds of the delay and peak age-of-information that show advantages of the time-triggered system. These metrics do, however, not take the estimation error at the monitor into account, motivating a complementary definition of deviation-of-information. Despite inferior age-of-information, we find that the event-triggered system achieves similar deviation-of-information as the time-triggered system. Sending update messages only in case of certain sensor events, the event-triggered system operates optimally at a lower network utilization and saves network resources.

References

  • [1] S. Kaul, M. Gruteser, V. Rai, and J. Kenney, “Minimizing age of information in vehicular networks,” in Proc. of IEEE SECON, Jun. 2011, pp. 350–358.
  • [2] S. Kaul, R. Yates, and M. Gruteser, “Real-time status: How often should one update?” in Proc. of IEEE INFOCOM Mini-Conference, Mar. 2012, pp. 2731–2735.
  • [3] T. Zinchenko, H. Tchoankem, L. Wolf, and A. Leschke, “Reliability analysis of vehicle-to-vehicle applications based on real world measurements,” in Proc. of ACM VANET Workshop, Jun. 2013, pp. 11–20.
  • [4] H. Tchoankem, T. Zinchenko, and H. Schumacher, “Impact of buildings on vehicle-to-vehicle communication at urban intersections,” in Proc. of ICCC CCNC, Jan. 2015.
  • [5] J. P. Champati, M. Mamduhi, K. Johansson, and J. Gross, “Performance characterization using AoI in a single-loop networked control system,” in Proc. of IEEE INFOCOM AoI Workshop, Apr. 2019, pp. 197–203.
  • [6] O. Ayan, M. Vilgelm, M. Klügel, S. Hirche, and W. Kellerer, “Age-of-information vs. value-of-information scheduling for cellular networked control systems,” in Proc. of ACM/IEEE ICCPS, Apr. 2019.
  • [7] M. Klügel, M. H. Mamduhi, S. Hirche, and W. Kellerer, “AoI-penalty minimization for networked control systems with packet loss,” in Proc. of IEEE INFOCOM AoI Workshop, Apr. 2019, pp. 189–196.
  • [8] R. D. Yates, Y. Sun, D. R. Brown, S. K. Kaul, E. Modiano, and S. Ulukus, “Age of information: An introduction and survey,” IEEE J. Sel. Areas Commun., vol. 39, no. 5, pp. 1183–1210, May 2021.
  • [9] A. Kosta, N. Pappas, and V. Angelakis, “Age of information: A new concept, metric, and tool,” Found. Trends Netw., vol. 12, no. 3, pp. 162–259, 2017.
  • [10] R. D. Yates, “Lazy is timely: Status updates by an energy harvesting source,” in Proc. of IEEE ISIT, Jun. 2015, pp. 3008–3012.
  • [11] Y. Sun, E. Uysal-Biyikoglu, R. D. Yates, C. E. Koksal, and N. B. Shroff, “Update or wait: How to keep your data fresh,” IEEE Trans. Inf. Theory, vol. 63, no. 11, pp. 7492–7508, Nov. 2017.
  • [12] Y. Inoue, H. Masuyama, T. Takine, and T. Tanaka, “A general formula for the stationary distribution of the age of information and its application to single-server queues,” IEEE Trans. Inf. Theory, vol. 65, no. 12, pp. 8305–8324, Dec. 2019.
  • [13] J. P. Champati, H. Al-Zubaidy, and J. Gross, “On the distribution of AoI for the GI/GI/1/1 and GI/GI/1/2* systems: Exact expressions and bounds,” in Proc. of IEEE INFOCOM, Apr. 2019, pp. 37–45.
  • [14] R. Talak, S. Karaman, and E. Modiano, “Optimizing information freshness in wireless networks under general interference constraints,” IEEE/ACM Trans. Netw., vol. 28, no. 1, pp. 15–28, Feb. 2020.
  • [15] Y. Sun, Y. Polyanskiy, and E. Uysal, “Sampling of the Wiener process for remote estimation over a channel with random delay,” IEEE Trans. Inf. Theory, vol. 66, no. 2, pp. 1118–1135, Feb. 2020.
  • [16] T. Z. Ornee and Y. Sun, “Sampling and remote estimation for the Ornstein-Uhlenbeck process through queues: Age of information and beyond,” IEEE/ACM Trans. Netw., vol. 29, no. 5, pp. 1962–1975, Oct. 2021.
  • [17] C.-S. Chang, Performance Guarantees in Communication Networks.   Springer-Verlag, 2000.
  • [18] F. Ciucu, A. Burchard, and J. Liebeherr, “Scaling properties of statistical end-to-end bounds in the network calculus,” IEEE/ACM Trans. Netw., vol. 14, no. 6, pp. 2300–2312, Jun. 2006.
  • [19] Y. Jiang and Y. Liu, Stochastic Network Calculus.   Springer-Verlag, Sep. 2008.
  • [20] M. Fidler and A. Rizk, “A guide to the stochastic network calculus,” IEEE Commun. Surveys Tuts., vol. 17, no. 1, pp. 92–105, Mar. 2015.
  • [21] L. Huang and E. Modiano, “Optimizing age-of-information in a multi-class queueing system,” in Proc. of IEEE ISIT, Jun. 2015, pp. 1681–1685.
  • [22] N. Pappas and M. Kountouris, “Delay violation probability and age of information interplay in the two-user multiple access channel,” in Proc. of IEEE SPAWC Workshop, Jul. 2019, pp. 1–5.
  • [23] J. P. Champati, H. Al-Zubaidy, and J. Gross, “Statistical guarantee optimization for AoI in single-hop and two-hop FCFS systems with periodic arrivals,” IEEE Trans. Commun., vol. 69, no. 1, pp. 365–381, Jan. 2021.
  • [24] ——, “Statistical guarantee optimization for age of information for the D/G/1 queue,” in Proc. of IEEE INFOCOM AoI Workshop, Apr. 2018, pp. 130–135.
  • [25] M. Noroozi and M. Fidler, “A min-plus model of age-of-information with worst-case and statistical bounds,” in Proc. of IEEE ICC, May 2022.
  • [26] A. Rizk and J.-Y. L. Boudec, “A Palm calculus approach to the distribution of the age of information,” Tech. Rep. arXiv:2204.04643, Apr. 2022.
  • [27] M. Fidler, B. Walker, and Y. Jiang, “Non-asymptotic delay bounds for multi-server systems with synchronization constraints,” IEEE Trans. Parallel Distrib. Syst., vol. 29, no. 7, pp. 1545–1559, Jul. 2018.
  • [28] M. Fidler, “An end-to-end probabilistic network calculus with moment generating functions,” in Proc. of IWQoS, Jun. 2006, pp. 261–270.
  • [29] G. Bolch, S. Greiner, H. de Meer, and K. S. Trivedi, Queueing Networks and Markov Chains, 2nd ed.   Wiley, 2006.