
Completion Delay of Random Linear Network Coding in Full-Duplex Relay Networks

Rina Su, Qifu Tyler Sun, Zhongshan Zhang, Zongpeng Li
* This paper was presented in part at the 2021 IEEE International Symposium on Information Theory. R. Su and Q. T. Sun are with the University of Science and Technology Beijing, Z. Zhang is with the Beijing Institute of Technology, and Z. Li is with Tsinghua University. Q. T. Sun (Email: [email protected]) is the corresponding author.
Abstract

As next-generation wireless networks thrive, full-duplex and relay techniques are combined to improve network performance. Random linear network coding (RLNC) is another popular technique to enhance the efficiency and reliability of wireless communications. In this paper, in order to explore the potential of RLNC in full-duplex relay networks, we investigate two fundamental perfect RLNC schemes and theoretically analyze their completion delay performance. The first scheme is a straightforward application of the conventional perfect RLNC studied in wireless broadcast, so it involves no additional process at the relay. Its performance serves as an upper bound for all perfect RLNC schemes. The other scheme allows a sufficiently large buffer and unconstrained linear coding at the relay. It attains the optimal performance and serves as a lower bound for all RLNC schemes. For both schemes, closed-form formulae characterizing the expected completion delay at a single receiver as well as for the whole system are derived. Numerical results are also presented to validate the theoretical characterizations and to compare the two fundamental schemes with the existing one.

Index Terms:
Full-duplex, relaying, random linear network coding (RLNC), completion delay, throughput

I Introduction

Among the emerging techniques for promoting the evolution of next-generation (5G) wireless networks, relay techniques [2] are widely considered for the purpose of catering to the ever-growing demand for throughput and coverage. As an up-and-coming paradigm, network coding (NC), and particularly random linear network coding (RLNC) [3], has shown great capability to realize higher transmission efficiency and throughput in wireless communications. Different NC schemes have shown significant gains in a multitude of wireless transmission scenarios, from wireless broadcast [4, 5, 6, 7, 8, 9] and wireless sensor networks [10] to D2D [11] and Wi-Fi Direct transmission [12]. A considerable amount of research (e.g., [13, 14, 15, 16, 17]) has investigated the combination of relay techniques with RLNC for two-hop relay networks, which is the network model proposed in IEEE 802.16j and 3GPP LTE-Advanced [18, 19] for the sake of simplicity and explicitness of system design. In particular, in the two-hop relay network depicted in Fig. 1 with the relay station operating in the half-duplex mode, Ref. [17] proposed an RLNC scheme with online scheduling that can achieve near-optimal performance in terms of completion delay, a key metric for transmission efficiency (e.g., [9], [13, 14, 15, 16, 17], [20], [21]).

Compared with the traditional half-duplex relay, the full-duplex relay can potentially double the throughput by simultaneously fetching and forwarding data [22, 23]. Moreover, with recent developments in self-interference cancellation technology [24], the full-duplex relay has been considered a key component for constructing versatile networks in 5G. In the literature, there have been studies on combining full-duplex relaying with NC for performance enhancement over broadcast networks. For example, physical-layer NC has been applied in full-duplex relay networks with multiple receivers [25]. An NC-based Automatic Repeat Request (ARQ) scheme [26] was proposed to enhance the downlink throughput for a two-way full-duplex relay network. When the relay station in Fig. 1 operates in the full-duplex mode, the recent work [27] proposed an RLNC scheme with scheduling, known as FBPF (Fewest Broadcast Packet First), which demonstrated better throughput (equivalently, completion delay) performance than the ARQ scheme. FBPF assumes full linear independence among the packets generated by the source, so it belongs to perfect RLNC, which is always considered in analyzing the optimal performance RLNC can achieve (e.g., [4, 28, 29, 13, 17]). Even though FBPF is a type of perfect RLNC, it does not shed light on the best possible completion delay performance that RLNC can achieve, because it does not fully utilize the packets buffered at the relay: it invokes no coding at the relay, merely storing and forwarding packets via an unnecessarily large buffer.

Figure 1: The system model of the relay network, in which the relay station is considered to be half-duplex in [17], and to be full-duplex in [27] as well as in this paper.

In this paper, in order to further explore the potential of RLNC in the full-duplex relay network in Fig. 1, we are inspired to investigate two fundamental perfect RLNC schemes and study their completion delay performance. The main contributions of this paper are summarized as follows.

  • We first investigate perfect RLNC without buffer, which does not involve any buffer or additional process at the relay. As a result, this scheme provides a fundamental performance guarantee among all possible perfect RLNC schemes. For this scheme, explicit formulae of expected completion delay at a single receiver as well as for the system are deduced.

  • We then investigate perfect RLNC with buffer, which allows a sufficiently large buffer and unconstrained linear coding at the relay. It turns out that it attains the best completion delay performance among all RLNC schemes. For this scheme, by modeling the transmission process as a Markov chain, we deduce a closed-form formula for the expected completion delay at a single receiver, which involves combinatorial numbers related to Schroeder paths. In order to compute the expected completion delay at a single receiver more conveniently, we further derive a recursive formula for it.

  • For perfect RLNC with buffer, we also model the transmission process as a Markov chain, so that the expected system completion delay can be calculated by a formula built upon its 1-step transition probability matrix, whose size grows exponentially with the number of receivers. Furthermore, we characterize a non-trivial closed-form lower bound, which is the maximum of two individual bounds that can be explicitly and recursively computed. The first stems from the expected system completion delay in wireless broadcast, and the other is the expected completion delay at the single receiver with the worst channel condition.

For full-duplex relay networks, the formulae of the expected completion delay of perfect RLNC without buffer obtained in this paper serve as upper bounds on the completion delay performance of all perfect RLNC schemes, including FBPF [27], the relay-centered RLNC schemes [13], and the one adapted from [17]. The formulae of the expected completion delay of perfect RLNC with buffer obtained in this paper serve as lower bounds on the completion delay performance of all RLNC schemes, including Fulcrum codes [30], DSEP Fulcrum codes [31], sparse RLNC [32], and Telescopic codes [33]. The study of this paper provides a theoretical guideline for future works on the detailed design of RLNC-based transmission.

We would like to remark that when the relay station (RS) in the system operates in the half-duplex mode, the completion delay performance of perfect RLNC was analyzed in [17]; however, that analysis does not shed light on the completion delay analyses in this paper, because its focus is to determine, at every timeslot, whether the RS should fetch a packet from the base station (BS) or broadcast a packet to the receivers.

The remainder of this paper is organized as follows. Section II introduces the system model and two fundamental perfect RLNC schemes. Section III theoretically analyzes the expected completion delay of two fundamental schemes. Section IV provides extensive numerical analyses to justify the theoretical characterizations and compare the two fundamental schemes’ performance with FBPF. Section V concludes the paper.

Throughout the paper, we shall use $\mathbf{I}$ and $\mathbf{1}$ to respectively represent an identity matrix and an all-1 matrix, where the matrix size, if not explained, can be inferred from the context.

II System Model and Two Fundamental Perfect RLNC Schemes

II-A System Model

We consider the two-hop full-duplex relay network depicted in Fig. 1, where a base station (BS) attempts to deliver $P$ packets to a set of $R$ receivers via a full-duplex relay station (RS) with a limited buffer size. The network transmission is considered to be time-slotted, that is, at every timeslot, the BS can deliver one packet to the RS, while the full-duplex RS can simultaneously fetch a coded packet from the BS and broadcast a coded packet to all receivers. The memoryless channel between the BS and the RS, together with the channel between the RS and every receiver $r$, are subject to independent random packet erasures with erasure probabilities $1-p_{0}$ and $1-p_{r}$, respectively. Every receiver is interested in recovering all $P$ original packets. Herein, the completion delay refers to the total number of packets transmitted by the BS before every receiver is able to recover all $P$ original packets. Notice that this definition of completion delay is the same as the one in [13, 27], which takes the initial $P$ packets transmitted by the BS into account, so it is slightly different from the one in [6]. The packet number $P$ divided by the completion delay is set as a measurement of throughput in [27].

In practice, at every timeslot, even though the BS and the RS transmit simultaneously, the RS obtains what the BS transmits only at the end of the timeslot. Thus, for the RS, the broadcast process is always one timeslot behind the reception process, i.e., what the RS broadcasts at timeslot $j$ has nothing to do with what it receives at timeslot $j$. Same as the consideration in [13], in this paper we shall also ignore this constant timeslot lag at the RS; this assumption does not affect the analysis of the completion delay performance.

For the full-duplex relay network, Ref. [27] has proposed the FBPF scheduling scheme, in which, if the RS buffer is not empty, the RS selects from the unlimited buffer a packet that has been broadcast the fewest number of times, and broadcasts the selected packet to the receivers. Since perfect RLNC is adopted in the design of the FBPF scheme, any $P$ different packets received by a receiver are assumed to be linearly independent.

II-B Two Fundamental Perfect RLNC Schemes

Since the FBPF perfect RLNC scheme permits an unlimited buffer at the RS for its additional scheduling procedure, it is not the perfect RLNC scheme with the simplest setting at the RS. Thus, it does not reflect a fundamental performance guarantee provided by perfect RLNC in the full-duplex relay network; that is, there exist other perfect RLNC schemes with simpler settings at the RS whose completion delay performance is not as good as that of the FBPF scheme. On the other hand, the FBPF perfect RLNC scheme cannot yield the best possible performance gain in terms of completion delay either, as it does not involve NC at the RS.

In order to study the fundamental completion delay performance of perfect RLNC in the full-duplex relay network, we consider two basic perfect RLNC schemes, one without buffering at the RS and the other with buffering and recoding at the RS.

The first scheme, called perfect RLNC without buffer, requires no buffer, and thus there is no recoding at the RS: the role of the RS is just to directly forward the received packets. At every timeslot, the BS delivers to the RS one coded packet, which is a random linear combination of the $P$ original packets. If the RS successfully receives the coded packet, then it broadcasts it to all receivers; otherwise, the RS does not transmit anything. Notice that the full-duplex RS can simultaneously fetch the coded packet from the BS and broadcast it to all receivers. As perfect RLNC is considered, any $P$ coded packets generated by the BS are assumed linearly independent and sufficient to recover the $P$ original packets. As a result, once every receiver obtains $P$ packets, the transmission completes. Since this scheme does not involve any extra operation at the RS, it is the most straightforward and simplest application of perfect RLNC in the full-duplex relay network. As a result, it provides a fundamental performance guarantee, in terms of completion delay, for all perfect RLNC schemes designed for the full-duplex relay network.

The other scheme, called perfect RLNC with buffer, assumes buffer size $P$ and no coding constraints at the RS. At each timeslot, the BS transmits a coded packet to the RS, and if the RS receives the packet and its buffer is not full, it stores the received packet in its buffer. Meanwhile, no matter whether the RS successfully receives the packet from the BS or not, it broadcasts a packet that is a random linear combination of all the packets stored in the buffer. Due to the causality at the RS, which first buffers received packets and then broadcasts a linear combination of buffered packets to receivers, the number of linearly independent packets obtained at a receiver is always no larger than the number of packets buffered at the RS. Once every receiver obtains $P$ linearly independent packets, the transmission completes. As perfect RLNC is considered, the $P$ original packets can be recovered from any $P$ coded packets generated by the BS. Moreover, there is no coding constraint at the RS in this scheme, so as long as the RS stores $P$ coded packets transmitted from the BS, the $P$ original packets can be recovered from any $P$ coded packets randomly generated at the RS. This justifies why setting the buffer size to $P$ is sufficient; in fact, once the RS has successfully stored $P$ coded packets from the BS in its buffer, there is no need for the BS to transmit new coded packets to the RS at all. To sum up, the scheme perfect RLNC with buffer attains the best completion delay performance among all perfect RLNC schemes in the full-duplex relay network.

III Completion Delay Analysis

One of the main contributions of this paper is to theoretically analyze the expected completion delay of the two benchmark schemes introduced in Sec. II-B, namely perfect RLNC without buffer and perfect RLNC with buffer.

III-A Perfect RLNC without Buffer

In perfect RLNC without buffer, the completion delay at a single receiver $r$, denoted by $D_{0,r}$, is defined to be the number of packets the BS transmits until receiver $r$ is able to recover all $P$ original packets, and the completion delay for the system, denoted by $D_{0}$, is defined as

$D_{0}=\max\{D_{0,1},D_{0,2},\ldots,D_{0,R}\}.$

The completion delay $D_{0,r}$ at a single receiver $r$ follows the negative binomial distribution with probability mass function $\mathrm{Pr}(D_{0,r}=P+d)=\binom{P+d-1}{P-1}(p_{0}p_{r})^{P}(1-p_{0}p_{r})^{d}$, $d\geq 0$, so that the expectation of $D_{0,r}$ is equal to

$\mathbb{E}[D_{0,r}]=P/(p_{0}p_{r}).$ (1)
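As a quick numerical sanity check (ours, not part of the paper's derivation), the mean in (1) can be recovered by truncating the negative binomial PMF above; the parameter values in the comment are illustrative only:

```python
from math import comb

# Sanity check of (1): truncate the negative binomial PMF of D_{0,r}.
def expected_D0r(P, p0, pr, d_max=5000):
    q = p0 * pr  # per-timeslot success probability of the two-hop path
    return sum((P + d) * comb(P + d - 1, P - 1) * q ** P * (1 - q) ** d
               for d in range(d_max + 1))

# For P=4, p0=0.8, pr=0.5, the truncated sum approaches P/(p0*pr) = 10.
```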
Proposition 1.

The expected completion delay of the perfect RLNC scheme without buffer is

$\mathbb{E}[D_{0}]=\frac{1}{p_{0}}\left(P+\sum\nolimits_{d\geq 0}\left(1-\prod\nolimits_{1\leq r\leq R}I_{p_{r}}(P,d+1)\right)\right),$ (2)

where $I_{p_{r}}(P,d+1)=\sum_{j=0}^{d}\binom{P+j-1}{P-1}p_{r}^{P}(1-p_{r})^{j}$ is the regularized incomplete beta function.

Proof:

Let $\hat{D}_{P}$ denote the number of packets broadcast from the RS until every receiver is able to decode all $P$ original packets. Since the considered perfect RLNC scheme does not have any buffer at the RS, any $P$ out of the $\hat{D}_{P}$ packets broadcast from the RS are linearly independent. Thus, $\hat{D}_{P}$ can be regarded as the completion delay for the wireless broadcast system with $P$ original packets, $R$ receivers and independent erasure probabilities $1-p_{r}$. By Theorem 1 in [6],

$\mathbb{E}[\hat{D}_{P}]=P+\sum\nolimits_{d\geq 0}\left(1-\prod\nolimits_{1\leq r\leq R}I_{p_{r}}(P,d+1)\right).$ (3)

On the other hand, as the RS immediately broadcasts a packet it successfully receives from the BS, $\hat{D}_{P}$ also represents the number of packets successfully received from the BS at the RS, and $D_{0}$ just represents the number of transmissions from the BS until the RS successfully receives $\hat{D}_{P}$ packets. As it takes on average $1/p_{0}$ transmissions to successfully receive one packet at the RS from the BS, $\mathbb{E}[D_{0}]=\mathbb{E}[\hat{D}_{P}]/p_{0}$, which implies (2) based on (3). ∎

For the special case where the channel from the RS to every receiver $r$ is perfect, i.e., $p_{r}=1$, the full-duplex relay network becomes essentially the same as a point-to-point transmission, so the system completion delay of both perfect RLNC schemes considered herein follows the negative binomial distribution with expected value $\frac{P}{p_{0}}$. For the other special case $p_{0}=1$, the full-duplex relay network degenerates to wireless broadcast with erasure probabilities $1-p_{r}$, so $\mathbb{E}[D_{0}]$ is given by $P+\sum\nolimits_{d\geq 0}(1-\prod\nolimits_{1\leq r\leq R}I_{p_{r}}(P,d+1))$, same as the one obtained in [6].
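Formula (2) can be evaluated numerically by truncating the sum over $d$ once the summand becomes negligible. A sketch (ours; the function names are illustrative, and `I_reg(p, P, d)` implements $I_{p}(P,d+1)$ through the finite-sum form given after (2)):

```python
from math import comb

def I_reg(p, P, d):
    # regularized incomplete beta I_p(P, d+1), i.e., the probability that
    # a link with per-slot success probability p delivers P packets
    # within P + d slots
    return sum(comb(P + j - 1, P - 1) * p ** P * (1 - p) ** j
               for j in range(d + 1))

def expected_D0(P, p0, p_list, tol=1e-12):
    # evaluate (2): E[D0] = (P + sum_{d>=0}(1 - prod_r I_{p_r}(P, d+1))) / p0
    total, d = 0.0, 0
    while True:
        prod = 1.0
        for p in p_list:
            prod *= I_reg(p, P, d)
        term = 1.0 - prod
        total += term
        if term < tol:
            break
        d += 1
    return (P + total) / p0

# With R = 1, (2) collapses to the negative binomial mean P/(p0*pr).
```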

III-B Perfect RLNC with Buffer, Single Receiver Case

In perfect RLNC with buffer, the completion delay at receiver $r$ is denoted by $D_{P,r}$, where $P$ is the number of original packets generated by the source and to be recovered at every receiver. The system completion delay, denoted by $D_{P}$, is defined as

$D_{P}=\max\{D_{P,1},D_{P,2},\ldots,D_{P,R}\}.$

In order to characterize the expected completion delay $\mathbb{E}[D_{P,r}]$, we first recall the following combinatorial number:

$T_{i,j}=\frac{1}{j+1}\binom{i+j}{i}\binom{i}{j},\quad 0\leq j\leq i.$ (4)

A Schroeder path (see, e.g., [34]) from $(0,0)$ to $(i,i)$ is a path with possible movements $(+1,0)$, $(0,+1)$, $(+1,+1)$ in every step transition and with $x\geq y$ for every point $(x,y)$ in the path. Then, $T_{i,j}$ represents the number of Schroeder paths from $(0,0)$ to $(i,i)$ with $i+j$ step transitions.

Theorem 2.

At a single receiver $r$, the expected completion delay of the perfect RLNC scheme with buffer is

$\mathbb{E}[D_{P,r}]=\frac{P}{p_{0}}+\frac{P}{p_{r}}-1+\sum_{i=0}^{P-2}\sum_{j=0}^{i}\frac{(P-i-1)T_{i,j}(p_{0}p_{r})^{i}}{(p_{0}p_{r}-p_{0}-p_{r})^{i+j+1}}.$ (5)
Proof:

Model the transmission process as a Markov chain $\mathcal{M}_{P}$ consisting of $\frac{(P+1)(P+2)}{2}$ states. Every state, labeled as $(i,j)$, represents the scenario that the RS and the receiver have respectively obtained $i$ and $j$ packets. Due to the causality at the RS, which first buffers packets received from the BS and then broadcasts a random linear combination of the buffered packets to receivers, all states $(i,j)$ in $\mathcal{M}_{P}$ have $0\leq j\leq i\leq P$, and the only absorbing state in $\mathcal{M}_{P}$ is $(P,P)$ (assuming $p_{0},p_{1},\ldots,p_{R}\neq 0$). The 1-step transition probability $p_{ij,i^{\prime}j^{\prime}}$ from a transient state $(i,j)$ to another state $(i^{\prime},j^{\prime})$ is given by

  • for $0\leq j=i<P$: $p_{ij,ij}=1-p_{0}$, $p_{ij,(i+1)j}=p_{0}(1-p_{r})$, $p_{ij,(i+1)(j+1)}=p_{0}p_{r}$;

  • for $0\leq j<i<P$: $p_{ij,ij}=(1-p_{0})(1-p_{r})$, $p_{ij,(i+1)j}=p_{0}(1-p_{r})$, $p_{ij,i(j+1)}=(1-p_{0})p_{r}$, $p_{ij,(i+1)(j+1)}=p_{0}p_{r}$;

  • for $0\leq j<i=P$: $p_{ij,ij}=1-p_{r}$, $p_{ij,i(j+1)}=p_{r}$.

Let $\mathbf{P}$ denote the matrix of 1-step transition probabilities among all $\frac{(P+1)(P+2)}{2}-1$ transient states. Assume the states are ordered lexicographically, so that the first row/column in $\mathbf{P}$ is indexed by the state $(0,0)$. By the standard technique to calculate the expected number of transitions from a transient state to an absorbing one (see, e.g., Sec. 4.6 in [35]), the expected completion delay $\mathbb{E}[D_{P,r}]$ can be formulated as

$\mathbb{E}[D_{P,r}]=(1,0,\ldots,0)(\mathbf{I}-\mathbf{P})^{-1}\mathbf{1},$ (6)

where $(1,0,\ldots,0)$ represents the $\left(\frac{(P+1)(P+2)}{2}-1\right)$-dimensional row unit vector with the first entry equal to 1. The details of deriving (5) based on (6) are provided in Appendix-A. ∎
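As a numerical cross-check (ours, not part of the paper), the closed form (5) can be compared with the absorption-time computation (6); since the chain $\mathcal{M}_P$ only moves "up and to the right", its expected absorption times can be solved state by state backwards instead of inverting $\mathbf{I}-\mathbf{P}$. Function names below are illustrative:

```python
from math import comb

def T(i, j):
    # T_{i,j} in (4): the number of Schroeder paths from (0,0) to (i,i)
    # with i+j step transitions (always an integer)
    return comb(i + j, i) * comb(i, j) // (j + 1)

def E_DPr_closed(P, p0, pr):
    # the closed-form expression (5)
    q, denom = p0 * pr, p0 * pr - p0 - pr
    tail = sum((P - i - 1) * T(i, j) * q ** i / denom ** (i + j + 1)
               for i in range(P - 1) for j in range(i + 1))
    return P / p0 + P / pr - 1 + tail

def E_DPr_chain(P, p0, pr):
    # expected absorption time of M_P as in (6): each transient state
    # satisfies E[s] = 1 + sum_{s'} p_{s,s'} E[s'], solved backwards
    E = {(P, P): 0.0}
    for i in range(P, -1, -1):
        for j in range(i, -1, -1):
            if (i, j) == (P, P):
                continue
            if i == P:                         # 0 <= j < i = P
                E[(i, j)] = 1 / pr + E[(i, j + 1)]
            elif i == j:                       # 0 <= j = i < P
                E[(i, j)] = (1 + p0 * (1 - pr) * E[(i + 1, j)]
                             + p0 * pr * E[(i + 1, j + 1)]) / p0
            else:                              # 0 <= j < i < P
                stay = (1 - p0) * (1 - pr)
                E[(i, j)] = (1 + p0 * (1 - pr) * E[(i + 1, j)]
                             + (1 - p0) * pr * E[(i, j + 1)]
                             + p0 * pr * E[(i + 1, j + 1)]) / (1 - stay)
    return E[(0, 0)]
```

For instance, with $P=2$, $p_0=0.8$, $p_r=0.5$, both functions return $79/18\approx 4.389$.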

Remark.

In the literature, Refs. [9, 13, 37, 30, 36] also studied the completion delay performance from a Markov chain approach in different network settings. For example, [9] characterized the system completion delay for wireless broadcast with feedback by means of a moment generating function. For wireless broadcast with 2 receivers, Ref. [36] characterized the cumulative distribution function of the system completion delay of RLNC by means of a Markov chain model. For the relay-centered network with a single receiver, Ref. [13] established a Markov chain model to obtain an implicit recursive formula for the expected completion delay of perfect RLNC. However, none of the above references provides an explicit expression for the completion delay studied. In comparison, we also adopt the Markov chain approach to model the transmission process in full-duplex relay networks, and by careful analysis we obtain a closed-form formula for the completion delay at a single receiver. $\blacksquare$

With $P$ increasing, the calculation of $\mathbb{E}[D_{P,r}]$ according to (5) becomes tedious because it involves the combinatorial number $T_{i,j}$, which can be extremely large. Stemming from (5), we next deduce an equivalent recursive expression for $\mathbb{E}[D_{P,r}]$.

For $i\geq 0$, define

$B(i)=\sum\nolimits_{j=0}^{i}\frac{T_{i,j}(p_{0}p_{r})^{i}}{(p_{0}p_{r}-p_{0}-p_{r})^{i+j+1}}.$ (7)

Thus, (5) can be rewritten as

$\mathbb{E}[D_{P,r}]=\frac{P}{p_{0}}+\frac{P}{p_{r}}-1+\sum_{i=0}^{P-2}(P-i-1)B(i).$ (8)

Moreover, one can readily see that

$\mathbb{E}[D_{P+1,r}]-\mathbb{E}[D_{P,r}]=\frac{1}{p_{0}}+\frac{1}{p_{r}}+\sum_{i=0}^{P-1}B(i).$ (9)
Corollary 3.

$B(i)$ can be recursively expressed as

$B(i)=-\frac{p_{0}p_{r}}{\Delta}\left(B(i-1)+\sum_{j=0}^{i-1}B(j)B(i-j-1)\right),$ (10)

where $\Delta=p_{0}+p_{r}-p_{0}p_{r}$, with the initial value $B(0)=-\frac{1}{\Delta}$.

Proof:

As in the proof of Theorem 2, write $\Delta=p_{0}+p_{r}-p_{0}p_{r}=1-(1-p_{0})(1-p_{r})$ for short. Thus, $B(i)$ can be expressed as

$B(i)=-\frac{1}{\Delta}\sum\nolimits_{j=0}^{i}T_{i,j}\left(-\frac{p_{0}p_{r}}{\Delta}\right)^{i-j}\left(\frac{p_{0}p_{r}}{\Delta^{2}}\right)^{j}.$ (11)

First, it is trivial to see that $B(0)=-1/\Delta$.

Recall that $T_{i,j}$ represents the number of Schroeder paths from $(0,0)$ to $(i,i)$ with exactly $i+j$ step transitions. Moreover, for every such Schroeder path, the numbers of step transitions of the form $(i^{\prime},j^{\prime})$ to $(i^{\prime}+1,j^{\prime}+1)$, $(i^{\prime},j^{\prime})$ to $(i^{\prime}+1,j^{\prime})$, and $(i^{\prime},j^{\prime})$ to $(i^{\prime},j^{\prime}+1)$ are respectively equal to $i-j$, $j$ and $j$. Now assign a weight to every step transition as follows. If the step transition is from $(i^{\prime},j^{\prime})$ to $(i^{\prime}+1,j^{\prime}+1)$, then its weight is $-\frac{p_{0}p_{r}}{\Delta}$; if the step transition is from $(i^{\prime},j^{\prime})$ to $(i^{\prime}+1,j^{\prime})$ or from $(i^{\prime},j^{\prime})$ to $(i^{\prime},j^{\prime}+1)$, then its weight is $\frac{\sqrt{p_{0}p_{r}}}{\Delta}$. For every Schroeder path, define its weight as the product of the weights of all step transitions in the path. Thus, based on (11), $-\Delta B(i)$ can be regarded as the sum of the weights of all Schroeder paths from $(0,0)$ to $(i,i)$.

All Schroeder paths from $(0,0)$ to $(i,i)$ can be partitioned into $i+1$ categories. The first category consists of all those Schroeder paths that contain the step transition from $(0,0)$ to $(1,1)$, which has weight $-\frac{p_{0}p_{r}}{\Delta}$. Thus, the sum of the weights of all Schroeder paths in the first category is equal to $-\frac{p_{0}p_{r}}{\Delta}(-\Delta B(i-1))=p_{0}p_{r}B(i-1)$. For $0\leq j\leq i-1$, the $(j+2)^{nd}$ category consists of all those Schroeder paths that satisfy the following:

  • the step transitions from $(0,0)$ to $(1,0)$ and from $(j+1,j)$ to $(j+1,j+1)$ are contained;

  • the sequence of step transitions from $(1,0)$ to $(j+1,j)$ is equivalent to a Schroeder path from $(0,0)$ to $(j,j)$;

  • the sequence of step transitions from $(j+1,j+1)$ to $(i,i)$ is equivalent to a Schroeder path from $(0,0)$ to $(i-j-1,i-j-1)$.

As a result, the sum of the weights of all Schroeder paths in the $(j+2)^{nd}$ category is equal to $\frac{p_{0}p_{r}}{\Delta^{2}}(-\Delta B(j))(-\Delta B(i-j-1))=p_{0}p_{r}B(j)B(i-j-1)$. Adding up the weights of all Schroeder paths in all $i+1$ categories, we obtain

$-\Delta B(i)=p_{0}p_{r}B(i-1)+\sum\nolimits_{j=0}^{i-1}p_{0}p_{r}B(j)B(i-j-1),$

that is, (10) holds. ∎

Based on (9) and (10), $\mathbb{E}[D_{P+1,r}]$ can be recursively computed for arbitrarily large $P$. It is also interesting to observe from Eqs. (1), (5), (9) and (10) that the parameters $p_{0}$ and $p_{r}$ have the same impact on the expected completion delay at a single receiver $r$, both for perfect RLNC without buffer and for perfect RLNC with buffer.
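Computationally, the recursion (10) sidesteps the huge combinatorial numbers $T_{i,j}$ in (5). A sketch of the resulting evaluation of (8) (ours; the function name is illustrative):

```python
# Evaluate (8) with B(i) generated by the recursion (10) instead of
# the direct sum (7), avoiding the large T_{i,j} values.
def expected_delay_recursive(P, p0, pr):
    delta = p0 + pr - p0 * pr          # Delta = 1 - (1-p0)(1-pr)
    q = p0 * pr
    B = [-1.0 / delta]                 # initial value B(0) = -1/Delta
    for i in range(1, P - 1):
        conv = sum(B[j] * B[i - j - 1] for j in range(i))
        B.append(-(q / delta) * (B[i - 1] + conv))
    return P / p0 + P / pr - 1 + sum((P - i - 1) * B[i] for i in range(P - 1))
```

For $P=1$ this reduces to $\frac{1}{p_0}+\frac{1}{p_r}-1$, in agreement with the text after (16).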

III-C Perfect RLNC with Buffer, Multiple-Receiver Case

The transmission process of the perfect RLNC scheme with buffer for multiple receivers over the full-duplex relay network can be modeled as a Markov chain, denoted by $\mathcal{M}_{P,R}$, in the following way. Every state in $\mathcal{M}_{P,R}$ is labeled by an $(R+1)$-tuple $\mathbf{s}=(s_{0},s_{1},s_{2},\ldots,s_{R})$, where $s_{0}$ represents the number of packets successfully received by the RS, and $s_{r}$ represents the number of packets successfully received by receiver $r$. Notice that $0\leq s_{r}\leq s_{0}\leq P$ for every receiver $r$. Thus, by conditioning on $s_{0}$, we can count the number of states in the Markov chain $\mathcal{M}_{P,R}$ as $\sum_{s_{0}=0}^{P}(s_{0}+1)^{R}$. Except for the state $(P,P,\ldots,P)$, which is absorbing, all other $\sum_{s_{0}=0}^{P}(s_{0}+1)^{R}-1$ states are transient (assuming $p_{0},p_{1},\ldots,p_{R}\neq 0$). There is a 1-step transition in the Markov chain once the BS broadcasts a new packet in a timeslot. An illustration of the 1-step transition diagram for the case $R=P=2$ is given in Fig. 2. We next define the 1-step transition probability from state $\mathbf{s}=(s_{0},s_{1},s_{2},\ldots,s_{R})$ to state $\mathbf{s}^{\prime}=(s_{0}^{\prime},s_{1}^{\prime},s_{2}^{\prime},\ldots,s_{R}^{\prime})$ for the Markov chain.

Figure 2: An illustration of the 1-step transition diagram for the Markov chain $\mathcal{M}_{P,R}$ with $R=P=2$. There are $1^{2}+2^{2}+3^{2}=14$ states; all states except $(2,2,2)$ are transient, and there is a state transition from every transient state to itself. For brevity, every state $(s_{0},s_{1},s_{2})$ is labeled as $s_{0}s_{1}s_{2}$, the transition from every transient state to itself is not depicted, and the edge direction is not marked. Specifically, whenever there is an edge between $\mathbf{s}=(s_{0},s_{1},s_{2})$ and $\mathbf{s}^{\prime}=(s_{0}^{\prime},s_{1}^{\prime},s_{2}^{\prime})$ with $s_{j}\leq s_{j}^{\prime}$ for all $0\leq j\leq 2$, it represents a transition from $\mathbf{s}$ to $\mathbf{s}^{\prime}$.

Let $\mathcal{R}$ denote the set of receivers that have not yet obtained $s_{0}$ packets at state $\mathbf{s}$, that is, $\mathcal{R}=\{1\leq r\leq R:s_{r}<s_{0}\}$. In addition, denote by $\mathcal{R}^{\prime}$ the set of receivers that obtain a new packet in the 1-step transition from $\mathbf{s}$ to $\mathbf{s}^{\prime}$, that is, $\mathcal{R}^{\prime}=\{1\leq r\leq R:s_{r}^{\prime}=s_{r}+1\}$. The 1-step transition probability from $\mathbf{s}$ to $\mathbf{s}^{\prime}$ can be formulated in the following 3 cases, depending on the values of $s_{0}$ and $s_{0}^{\prime}$.

  • Case 1: $s_{0}^{\prime}=s_{0}<P$. In this case $\mathcal{R}^{\prime}\subseteq\mathcal{R}$. We have

    $p_{\mathbf{s},\mathbf{s}^{\prime}}=(1-p_{0})\left(\prod\nolimits_{r\in\mathcal{R}^{\prime}}p_{r}\right)\left(\prod\nolimits_{r\in\mathcal{R}\backslash\mathcal{R}^{\prime}}(1-p_{r})\right).$ (12)

  • Case 2: $s_{0}^{\prime}=s_{0}=P$. In this case $\mathcal{R}^{\prime}\subseteq\mathcal{R}$. We have

    $p_{\mathbf{s},\mathbf{s}^{\prime}}=\left(\prod\nolimits_{r\in\mathcal{R}^{\prime}}p_{r}\right)\left(\prod\nolimits_{r\in\mathcal{R}\backslash\mathcal{R}^{\prime}}(1-p_{r})\right).$ (13)

  • Case 3: $s_{0}^{\prime}=s_{0}+1$. Notice that $\mathcal{R}^{\prime}$ is not necessarily contained in $\mathcal{R}$ in this case. We have

    $p_{\mathbf{s},\mathbf{s}^{\prime}}=p_{0}\left(\prod\nolimits_{r\in\mathcal{R}^{\prime}}p_{r}\right)\left(\prod\nolimits_{r\notin\mathcal{R}^{\prime}}(1-p_{r})\right).$ (14)

Analogous to (6), the expected completion delay for the system can be expressed as

$\mathbb{E}[D_{P}]=(1,0,\ldots,0)(\mathbf{I}-\mathbf{P})^{-1}\mathbf{1},$ (15)

where $\mathbf{P}$ denotes the matrix of 1-step transition probabilities among all transient states, with the first row/column indexed by the state $(0,\ldots,0)$.
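For small $P$ and $R$, (15) can be evaluated directly by enumerating the states of $\mathcal{M}_{P,R}$ together with the transition rules (12)-(14). A numpy sketch (ours; the function name and the convention that $(0,\ldots,0)$ comes first in the state order are assumptions):

```python
import numpy as np
from itertools import product

def expected_system_delay(P, p0, ps):
    # Build M_{P,R} over states (s0, s1, ..., sR) with 0 <= sr <= s0 <= P,
    # then evaluate (15) by solving (I - P) t = 1 over transient states.
    R = len(ps)
    states = [(s0,) + t for s0 in range(P + 1)
              for t in product(range(s0 + 1), repeat=R)]
    absorbing = (P,) * (R + 1)
    transient = [s for s in states if s != absorbing]
    idx = {s: k for k, s in enumerate(transient)}
    n = len(transient)
    M = np.zeros((n, n))
    for s in transient:
        s0, rest = s[0], s[1:]
        # cases 1/2: s0' = s0 (factor 1-p0 only while s0 < P);
        # case 3:    s0' = s0 + 1 (factor p0), then every receiver may gain
        heads = [(s0, 1.0 if s0 == P else 1 - p0)]
        if s0 < P:
            heads.append((s0 + 1, p0))
        for s0p, ph in heads:
            opts = []
            for r, sr in enumerate(rest):
                if sr < s0p:   # an innovative broadcast packet is available
                    opts.append(((sr, 1 - ps[r]), (sr + 1, ps[r])))
                else:
                    opts.append(((sr, 1.0),))
            for combo in product(*opts):
                sp = (s0p,) + tuple(c[0] for c in combo)
                prob = ph
                for c in combo:
                    prob *= c[1]
                if sp != absorbing:
                    M[idx[s], idx[sp]] += prob
    t = np.linalg.solve(np.eye(n) - M, np.ones(n))
    return t[idx[(0,) * (R + 1)]]
```

With $R=1$ this reproduces the single-receiver values of Theorem 2, which is a convenient consistency check.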

Since the number of states in the Markov chain $\mathcal{M}_{P,R}$ increases exponentially with $R$, it may not be convenient to compute $\mathbb{E}[D_{P}]$ based on (15). We next provide an alternative way to analyze $\mathbb{E}[D_{P}]$ by exploring its connection with $\mathbb{E}[\hat{D}_{P}]$, which is given by (3) and denotes the expected system completion delay for the special case $p_{0}=1$, or equivalently, the expected system completion delay in wireless broadcast. For this purpose, we first focus on a single receiver $r$ and compare $\mathbb{E}[D_{P,r}]$ with $\mathbb{E}[\hat{D}_{P,r}]=P/p_{r}$, which represents the expected completion delay at receiver $r$ for the special case $p_{0}=1$. For $1\leq j\leq P$, let $S_{j}$ and $T_{j,r}$ respectively denote the number of timeslots the RS and receiver $r$ take to receive the $j^{th}$ packet. Thus, $S_{1}<\ldots<S_{P}$, $T_{1,r}<\ldots<T_{P,r}$, and $S_{j}\leq T_{j,r}$ for all $1\leq j\leq P$. By conditioning on the relation between $S_{P+1}$ and $T_{P,r}$, we can deduce the following.

  • SP+1TP,rS_{P+1}\leq T_{P,r}, which means the RS has received the (P+1)st(P+1)^{st} packet upon the reception of the PthP^{th} packet by receiver rr. In this case, it takes on average 1pr\frac{1}{p_{r}} timeslots for receiver rr to further obtain the (P+1)st(P+1)^{st} packet so that 𝔼[DP+1,r]𝔼[DP,r]=1pr=𝔼[D^P+1,r]𝔼[D^P,r]\mathbb{E}[D_{P+1,r}]-\mathbb{E}[D_{P,r}]=\frac{1}{p_{r}}=\mathbb{E}[\hat{D}_{P+1,r}]-\mathbb{E}[\hat{D}_{P,r}];

  • $S_{P+1}>T_{P,r}$, which means the RS has not yet received the $(P+1)^{st}$ packet upon the reception of the $P^{th}$ packet by receiver $r$. In this case, it takes on average $\frac{1}{p_{0}}$ additional timeslots for the RS to receive the $(P+1)^{st}$ packet, and then $\frac{1}{p_{r}}-1$ further timeslots for receiver $r$ to receive it. Thus, $\mathbb{E}[D_{P+1,r}]-\mathbb{E}[D_{P,r}]=\frac{1}{p_{0}}+\frac{1}{p_{r}}-1=\mathbb{E}[\hat{D}_{P+1,r}]-\mathbb{E}[\hat{D}_{P,r}]+\frac{1}{p_{0}}-1$.

In all, we have

$\mathbb{E}[D_{P+1,r}]-\mathbb{E}[D_{P,r}]=\mathbb{E}[\hat{D}_{P+1,r}]-\mathbb{E}[\hat{D}_{P,r}]+\mathrm{Pr}(S_{P+1}>T_{P,r})\left(\frac{1}{p_{0}}-1\right).$ (16)

Since $\mathbb{E}[D_{1,r}]=\frac{1}{p_{0}}+\frac{1}{p_{r}}-1$ and $\mathbb{E}[\hat{D}_{1,r}]=\frac{1}{p_{r}}$, we have

$\mathbb{E}[D_{P,r}]=\mathbb{E}[\hat{D}_{P,r}]+\left(\frac{1}{p_{0}}-1\right)\left(1+\sum\nolimits_{j=1}^{P-1}\mathrm{Pr}(S_{j+1}>T_{j,r})\right).$ (17)
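The quantities $S_{j}$ and $T_{j,r}$ can also be estimated empirically. Below is a minimal Monte Carlo sketch in Python, assuming the slot dynamics implied by the definitions above: in each timeslot the RS receives a packet with probability $p_{0}$, after which receiver $r$ receives one with probability $p_{r}$ whenever the RS holds more packets (so that $S_{j}\leq T_{j,r}$, with equality allowed). The parameter values are arbitrary.

```python
import random

def simulate_delay(P, p0, pr, rng):
    """One run of the single-receiver chain: returns the number of
    timeslots until the receiver holds P packets.  Per slot the RS
    receives a packet w.p. p0; the receiver then receives one w.p. pr
    provided the RS is ahead (so S_j <= T_{j,r}, equality allowed)."""
    rs = rx = t = 0
    while rx < P:
        t += 1
        if rs < P and rng.random() < p0:
            rs += 1
        if rx < rs and rng.random() < pr:
            rx += 1
    return t

rng = random.Random(1)
P, p0, pr = 1, 0.8, 0.6
n = 200_000
mean = sum(simulate_delay(P, p0, pr, rng) for _ in range(n)) / n
```

For $P=1$ the sample mean should be close to $\mathbb{E}[D_{1,r}]=1/p_{0}+1/p_{r}-1$, in agreement with the derivation above.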

Moreover, due to (9), (16) and $\mathbb{E}[\hat{D}_{P+1,r}]-\mathbb{E}[\hat{D}_{P,r}]=\frac{1}{p_{r}}$, the following lemma can be obtained.

Lemma 4.

When $p_{0}<1$, $\mathrm{Pr}(S_{P+1}>T_{P,r})=\frac{1}{1-p_{0}}+\frac{p_{0}}{1-p_{0}}\sum\nolimits_{i=0}^{P-1}B(i)$.

We next make a connection between $\mathbb{E}[D_{P}]$ and $\mathbb{E}[\hat{D}_{P}]$ similar to that between $\mathbb{E}[D_{P,r}]$ and $\mathbb{E}[\hat{D}_{P,r}]$ in (17). For $R$ independent geometrically distributed random variables $N_{1},\ldots,N_{R}$ (starting from $1$) with respective parameters $p_{1},\ldots,p_{R}$, define

$E_{\max}=\mathbb{E}[\max\{N_{1},\ldots,N_{R}\}].$ (18)

$E_{\max}$ can be explicitly computed by the min-max identity (see, e.g., [35]). Based on $E_{\max}$,

$\mathbb{E}[D_{1}]=1/p_{0}+E_{\max}-1,\quad\mathbb{E}[\hat{D}_{1}]=E_{\max},\quad\mathbb{E}[D_{1}]=\mathbb{E}[\hat{D}_{1}]+1/p_{0}-1.$ (19)
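The min-max identity mentioned above amounts to inclusion-exclusion: the minimum of independent geometric random variables over a subset $S$ is again geometric with parameter $1-\prod_{r\in S}(1-p_{r})$, so $E_{\max}$ is an alternating sum over nonempty subsets. A short Python sketch (function name and parameters are our own):

```python
from itertools import combinations

def e_max(ps):
    """E[max{N_1,...,N_R}] for independent geometric N_r (support 1,2,...)
    with parameters ps, via the min-max (inclusion-exclusion) identity:
    E[max] = sum over nonempty subsets S of (-1)^(|S|+1) * E[min over S],
    and E[min over S] = 1 / (1 - prod_{r in S} (1 - p_r))."""
    total = 0.0
    for k in range(1, len(ps) + 1):
        for subset in combinations(ps, k):
            q = 1.0
            for p in subset:
                q *= 1.0 - p
            total += (-1) ** (k + 1) / (1.0 - q)
    return total
```

For $R=2$ this reduces to the closed form $1/p_{1}+1/p_{2}-1/(p_{1}+p_{2}-p_{1}p_{2})$.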

For $P\geq 2$, the connection between $\mathbb{E}[D_{P}]$ and $\mathbb{E}[\hat{D}_{P}]$ can be analyzed by conditioning on the relation among $S_{P+1}$, $T_{P+1,r}$ and $T_{P,r}$, $1\leq r\leq R$. To ease the presentation, we first elaborate on the case $R=2$, and then give a general conclusion for $R\geq 2$.

Lemma 5.

For the case $R=2$,

$\begin{split}&\mathbb{E}[D_{P+1}]-\mathbb{E}[D_{P}]\\ =&\mathrm{Pr}(T_{P+1,1}\leq T_{P,2})\frac{1}{p_{2}}+\mathrm{Pr}(T_{P+1,2}\leq T_{P,1})\frac{1}{p_{1}}+\mathrm{Pr}(T_{P+1,1}>T_{P,2},T_{P+1,2}>T_{P,1})E_{\max}\\ &+\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})\left(\frac{1}{p_{0}}-1\right).\end{split}$ (20)
Proof:

We analyze $\mathbb{E}[D_{P+1}]-\mathbb{E}[D_{P}]$ by conditioning on the following $4$ different cases:

  • Let $A$ represent the case $T_{P+1,1}\leq T_{P,2}$ and $T_{P+1,2}>T_{P,1}$, which is equivalent to $T_{P+1,1}\leq T_{P,2}$. In this case, when both receivers have obtained $P$ packets, receiver $1$ has obtained the $(P+1)^{st}$ packet as well. Thus, it only takes an additional $1/p_{2}$ timeslots on average for receiver $2$ to get the $(P+1)^{st}$ packet, that is, $\mathbb{E}[D_{P+1}-D_{P}~{}|~{}A]=1/p_{2}$.

  • Let $B$ represent the case $T_{P+1,2}\leq T_{P,1}$ and $T_{P+1,1}>T_{P,2}$, which is equivalent to $T_{P+1,2}\leq T_{P,1}$. By a similar argument to case $A$, we have $\mathbb{E}[D_{P+1}-D_{P}~{}|~{}B]=1/p_{1}$.

  • Let $C$ represent the case $T_{P+1,2}>T_{P,1}$ and $T_{P+1,1}>T_{P,2}$. We further divide $C$ into two subcases, namely $S_{P+1}>\max\{T_{P,1},T_{P,2}\}$ and $S_{P+1}\leq\max\{T_{P,1},T_{P,2}\}$. In the first subcase $S_{P+1}>\max\{T_{P,1},T_{P,2}\}$, when both receivers have obtained $P$ packets, it takes an extra $1/p_{0}$ timeslots on average for the RS to get the $(P+1)^{st}$ packet and then $E_{\max}-1$ timeslots on average for both receivers to obtain the $(P+1)^{st}$ packet, that is,

    $\mathbb{E}[D_{P+1}-D_{P}~{}|~{}C,S_{P+1}>\max\{T_{P,1},T_{P,2}\}]=1/p_{0}+E_{\max}-1.$

    In the second subcase $S_{P+1}\leq\max\{T_{P,1},T_{P,2}\}$, when both receivers have obtained $P$ packets, the RS has already received the $(P+1)^{st}$ packet, so it takes an extra $E_{\max}$ timeslots on average for both receivers to obtain the $(P+1)^{st}$ packet, that is,

    $\mathbb{E}[D_{P+1}-D_{P}~{}|~{}C,S_{P+1}\leq\max\{T_{P,1},T_{P,2}\}]=E_{\max}.$

    In all,

    $\begin{split}&\mathbb{E}[D_{P+1}-D_{P}~{}|~{}C]\\ =&(1/p_{0}+E_{\max}-1)\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\}~{}|~{}C)+E_{\max}\mathrm{Pr}(S_{P+1}\leq\max\{T_{P,1},T_{P,2}\}~{}|~{}C)\\ =&E_{\max}+(1/p_{0}-1)\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\}~{}|~{}C),\end{split}$

    and consequently

    $\begin{split}&\mathbb{E}[D_{P+1}-D_{P}~{}|~{}C]\mathrm{Pr}(C)\\ =&E_{\max}\mathrm{Pr}(C)+(1/p_{0}-1)\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\}~{}|~{}C)\mathrm{Pr}(C)\\ =&E_{\max}\mathrm{Pr}(C)+(1/p_{0}-1)\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\}),\end{split}$

    where the last equality holds because the event $S_{P+1}>\max\{T_{P,1},T_{P,2}\}$ implies $C$.
  • The last case $T_{P+1,1}\leq T_{P,2}$ and $T_{P+1,2}\leq T_{P,1}$ has probability $0$ to occur, since it would imply $T_{P+1,1}\leq T_{P,2}<T_{P+1,2}\leq T_{P,1}<T_{P+1,1}$, a contradiction.

Eq. (20) now follows from $\mathbb{E}[D_{P+1}-D_{P}]=\mathbb{E}[D_{P+1}-D_{P}~{}|~{}A]\mathrm{Pr}(A)+\mathbb{E}[D_{P+1}-D_{P}~{}|~{}B]\mathrm{Pr}(B)+\mathbb{E}[D_{P+1}-D_{P}~{}|~{}C]\mathrm{Pr}(C)$. ∎

Let $\hat{T}_{j,r}$ denote the number of timeslots receiver $r$ takes to receive the $j^{th}$ packet in the special case $p_{0}=1$. Thus, (20) implies

$\begin{split}&\mathbb{E}[\hat{D}_{P+1}]-\mathbb{E}[\hat{D}_{P}]\\ =&\mathrm{Pr}(\hat{T}_{P+1,1}\leq\hat{T}_{P,2})\frac{1}{p_{2}}+\mathrm{Pr}(\hat{T}_{P+1,2}\leq\hat{T}_{P,1})\frac{1}{p_{1}}+\mathrm{Pr}(\hat{T}_{P+1,1}>\hat{T}_{P,2},\hat{T}_{P+1,2}>\hat{T}_{P,1})E_{\max}.\end{split}$ (21)

Let $\varepsilon_{P}$ denote the following quantity:

$\begin{split}\varepsilon_{P}=&\left(\mathrm{Pr}(T_{P,1}=T_{P,2})-\mathrm{Pr}(\hat{T}_{P,1}=\hat{T}_{P,2})\right)\frac{(1-p_{1})(1-p_{2})(p_{1}+p_{2})}{p_{1}p_{2}(p_{1}+p_{2}-p_{1}p_{2})}\\ &+\left(\mathrm{Pr}(\hat{T}_{P,1}>\hat{T}_{P,2})-\mathrm{Pr}(T_{P,1}>T_{P,2})\right)\frac{p_{1}(1-p_{2})}{p_{2}(p_{1}+p_{2}-p_{1}p_{2})}\\ &+\left(\mathrm{Pr}(\hat{T}_{P,2}>\hat{T}_{P,1})-\mathrm{Pr}(T_{P,2}>T_{P,1})\right)\frac{(1-p_{1})p_{2}}{p_{1}(p_{1}+p_{2}-p_{1}p_{2})}.\end{split}$ (22)
Theorem 6.

For the case $R=2$,

$\mathbb{E}[D_{P+1}]-\mathbb{E}[D_{P}]=\mathbb{E}[\hat{D}_{P+1}]-\mathbb{E}[\hat{D}_{P}]+\varepsilon_{P+1}+\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})\left(\frac{1}{p_{0}}-1\right).$ (23)
Proof:

Please refer to Appendix B. ∎

Eq. (23) in Theorem 6 extends (16) from the single-receiver case to the case $R=2$. It implies an approximation, which can also serve as a lower bound, for $\mathbb{E}[D_{P}]$ based on $\mathbb{E}[\hat{D}_{P}]$. First, because $\mathrm{Pr}(S_{P+1}>T_{P,1}\,|\,S_{P+1}>T_{P,2})\geq\mathrm{Pr}(S_{P+1}>T_{P,1})$, we have

$\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})\geq\mathrm{Pr}(S_{P+1}>T_{P,1})\mathrm{Pr}(S_{P+1}>T_{P,2}).$ (24)

Since $\mathrm{Pr}(S_{P+1}>T_{P,1})$ and $\mathrm{Pr}(S_{P+1}>T_{P,2})$ can be explicitly computed based on Lemma 4 together with the recursive formula (10), we shall adopt $\mathrm{Pr}(S_{P+1}>T_{P,1})\mathrm{Pr}(S_{P+1}>T_{P,2})$ as an explicitly computable lower bound on $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})$. Second, we shall omit $\varepsilon_{P+1}$ in the approximation, whose accuracy will be justified below. For brevity, let $\tilde{D}_{P}$ denote

$\tilde{D}_{P}=\left(\frac{1}{p_{0}}-1\right)\left(1+\sum\nolimits_{j=1}^{P-1}\mathrm{Pr}(S_{j+1}>T_{j,1})\mathrm{Pr}(S_{j+1}>T_{j,2})\right).$ (25)
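Although the factors $\mathrm{Pr}(S_{j+1}>T_{j,r})$ in (25) are computed exactly via Lemma 4 and (10), they can also be estimated by simulation, which is convenient for a quick numerical check. The sketch below assumes the same slot dynamics as before (the RS succeeds with probability $p_{0}$ per slot; a receiver behind the RS succeeds with probability $p_{r}$, possibly in the same slot); all names and parameter values are illustrative.

```python
import random

def exceed_probs(P, p0, pr, n, rng):
    """Monte Carlo estimates of Pr(S_{j+1} > T_{j,r}) for j = 1..P-1.
    Per slot the RS receives w.p. p0, then the receiver receives w.p. pr
    whenever the RS is ahead (same-slot reception allowed)."""
    hits = [0] * (P - 1)
    for _ in range(n):
        rs = rx = t = 0
        S, T = [], []  # S[k-1] / T[k-1]: slot of the k-th packet at RS / receiver
        while rx < P:
            t += 1
            if rs < P and rng.random() < p0:
                rs += 1
                S.append(t)
            if rx < rs and rng.random() < pr:
                rx += 1
                T.append(t)
        for j in range(1, P):
            if S[j] > T[j - 1]:  # event S_{j+1} > T_{j,r}
                hits[j - 1] += 1
    return [h / n for h in hits]

rng = random.Random(3)
P, p0, p1, p2 = 4, 0.8, 0.7, 0.7
pr1 = exceed_probs(P, p0, p1, 100_000, rng)
pr2 = exceed_probs(P, p0, p2, 100_000, rng)
# Plug the estimated factors into Eq. (25)
d_tilde = (1 / p0 - 1) * (1 + sum(a * b for a, b in zip(pr1, pr2)))
```

Under these dynamics, the $j=1$ factor admits the closed form $\mathrm{Pr}(S_{2}>T_{1,r})=p_{r}/(p_{0}+p_{r}-p_{0}p_{r})$, which the estimate should reproduce.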
Theorem 7.

For the case $R=2$ and $P\geq 2$,

$\begin{split}\mathbb{E}[D_{P}]=&\ \mathbb{E}[\hat{D}_{P}]+\sum\nolimits_{j=2}^{P}\varepsilon_{j}+\left(\frac{1}{p_{0}}-1\right)\left(1+\sum\nolimits_{j=1}^{P-1}\mathrm{Pr}(S_{j+1}>\max\{T_{j,1},T_{j,2}\})\right)\\ \geq&\ \mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}.\end{split}$ (26)
Proof:

The first equality is a direct consequence of (23) and (19). By (24),

$\mathbb{E}[D_{P}]\geq\mathbb{E}[\hat{D}_{P}]+\sum\nolimits_{j=2}^{P}\varepsilon_{j}+\tilde{D}_{P}.$

It remains to prove $\sum\nolimits_{j=2}^{P}\varepsilon_{j}\geq 0$, the proof of which can be found in Appendix C. ∎

We can now consider $\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}$ as an approximation as well as a lower bound for $\mathbb{E}[D_{P}]$ when $R=2$ and $P\geq 2$. Notice that $\mathbb{E}[\hat{D}_{P}]$ and $\tilde{D}_{P}$ can be explicitly computed by (3), and by Lemma 4 together with the recursive formula (10), respectively.

Before proceeding to generalize the approximation $\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}$ of $\mathbb{E}[D_{P}]$ to the case $R>2$, we briefly discuss the approximation accuracy, which depends on how closely $\tilde{D}_{P}$ approximates the difference $\mathbb{E}[D_{P}]-\mathbb{E}[\hat{D}_{P}]$. In the process of obtaining the approximation value $\tilde{D}_{P}$ in Theorem 7, we neglect the term $\sum\nolimits_{j=2}^{P}\varepsilon_{j}\geq 0$, approximate $\sum\nolimits_{j=1}^{P-1}\mathrm{Pr}(S_{j+1}>\max\{T_{j,1},T_{j,2}\})$ by $\sum\nolimits_{j=1}^{P-1}\mathrm{Pr}(S_{j+1}>T_{j,1})\mathrm{Pr}(S_{j+1}>T_{j,2})$, and include the term $1/p_{0}-1$, which represents the expected number of extra timeslots the RS takes to obtain the first packet. Observe that with increasing $P$, both $\mathrm{Pr}(S_{P+1}>T_{P,1})$ and $\mathrm{Pr}(S_{P+1}>T_{P,2})$ decrease by Lemma 4, and so does $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})$. Without loss of generality, we next analyze the loss incurred by approximating $\mathbb{E}[D_{P}]-\mathbb{E}[\hat{D}_{P}]$ as $\tilde{D}_{P}$ via the following $3$ cases.

  • Case 1: $p_{0}>\max\{p_{1},p_{2}\}$, so that $\mathrm{Pr}(S_{P+1}>T_{P,1})$, $\mathrm{Pr}(S_{P+1}>T_{P,2})$ and $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})$ are small. Thus, $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})-\mathrm{Pr}(S_{P+1}>T_{P,1})\mathrm{Pr}(S_{P+1}>T_{P,2})$ tends to zero fast with increasing $P$. Moreover, $\varepsilon_{P}$ also converges to $0$ fast with increasing $P$. Actually, with increasing $j$, when a receiver obtains the $j^{th}$ new packet, there is a higher probability that the RS has obtained at least $j+1$ packets from the BS. It turns out that approximating the transmission scenario from the RS to the receivers as a wireless broadcast is accurate, that is, $\tilde{D}_{P}/P$ is a close approximation of $(\mathbb{E}[D_{P}]-\mathbb{E}[\hat{D}_{P}])/P$.

  • Case 2: $p_{1}<p_{0}\leq p_{2}$, so that both $\mathrm{Pr}(S_{P+1}>T_{P,1})$ and $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})$ are relatively small. The gap between $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})$ and $\mathrm{Pr}(S_{P+1}>T_{P,1})\mathrm{Pr}(S_{P+1}>T_{P,2})$ is negligible for small $P$. However, for larger $P$, the approximation of $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})$ by $\mathrm{Pr}(S_{P+1}>T_{P,1})\mathrm{Pr}(S_{P+1}>T_{P,2})$ becomes less accurate than in Case 1. Moreover, $\varepsilon_{P}$ tends to $0$ with increasing $P$ (but not as fast as in Case 1). Hence, the approximation of $(\mathbb{E}[D_{P}]-\mathbb{E}[\hat{D}_{P}])/P$ by $\tilde{D}_{P}/P$ in this case is slightly less accurate than in Case 1 (with the same $p_{1}$, $p_{2}$).

  • Case 3: $p_{0}\leq\min\{p_{1},p_{2}\}$. In this case, with increasing $P$, $\mathrm{Pr}(S_{P+1}>T_{P,1})$, $\mathrm{Pr}(S_{P+1}>T_{P,2})$ and $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})$ decrease more slowly than in Cases 1 and 2 (which have the same $p_{1}$, $p_{2}$ but larger $p_{0}$). Consequently, $\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})-\mathrm{Pr}(S_{P+1}>T_{P,1})\mathrm{Pr}(S_{P+1}>T_{P,2})$ is not negligible even for large $P$. Similarly, $\varepsilon_{P}$ does not converge to $0$ with increasing $P$. As a result, for large $P$, the approximation of $(\mathbb{E}[D_{P}]-\mathbb{E}[\hat{D}_{P}])/P$ by $\tilde{D}_{P}/P$ is not as accurate as in Cases 1 and 2 (with the same $p_{1}$, $p_{2}$).

Notice that for every receiver $r$, the expected completion delay $\mathbb{E}[D_{P,r}]$, which has been explicitly characterized in Theorem 2 and can be efficiently computed according to (9), is naturally a lower bound of $\mathbb{E}[D_{P}]$. When $p_{1}$ is much smaller than $p_{2}$, or $p_{1}\leq p_{2}$ with large enough $P$, the probability $\mathrm{Pr}(D_{P,2}\leq D_{P,1})$ that the completion delay at receiver $2$ is no larger than that at receiver $1$ is high. As a result, the following simple lower bound for $\mathbb{E}[D_{P}]$

$\mathbb{E}[D_{P}]\geq\max\{\mathbb{E}[D_{P,1}],\mathbb{E}[D_{P,2}]\}$ (27)

may provide a better approximation than (7) when the difference between $p_{1}$ and $p_{2}$ is large or $P$ is relatively large, particularly for Cases 2 and 3 discussed above.

Corollary 8.

For the case $R=2$ and $P\geq 2$,

$\mathbb{E}[D_{P}]\geq\max\{\mathbb{E}[D_{P,1}],\mathbb{E}[D_{P,2}],\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}\}.$ (28)
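The bound (27) can be checked empirically for $R=2$. The following Python sketch simulates the with-buffer dynamics for both receivers jointly (same assumed slot model as earlier: the RS succeeds with probability $p_{0}$, and each receiver lagging behind the RS succeeds with probability $p_{r}$, possibly in the same slot) and compares the sample mean of the system completion delay with the single-receiver means; parameters are arbitrary.

```python
import random

def system_delay(P, p0, ps, rng):
    """One run with R = len(ps) receivers sharing the RS process:
    per slot the RS receives w.p. p0, then each receiver r behind the
    RS receives w.p. ps[r].  Returns (system delay, per-receiver delays)."""
    R = len(ps)
    rs, rx, done, t = 0, [0] * R, [0] * R, 0
    while min(rx) < P:
        t += 1
        if rs < P and rng.random() < p0:
            rs += 1
        for r in range(R):
            if rx[r] < rs and rng.random() < ps[r]:
                rx[r] += 1
                if rx[r] == P:
                    done[r] = t
    return t, done

rng = random.Random(2)
P, p0, ps, n = 5, 0.9, [0.6, 0.6], 50_000
runs = [system_delay(P, p0, ps, rng) for _ in range(n)]
sys_mean = sum(t for t, _ in runs) / n
single_mean = max(sum(d[r] for _, d in runs) / n for r in range(2))
```

Since the system delay is the maximum of the two (correlated) per-receiver delays, its sample mean dominates each single-receiver mean, illustrating (27).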

We now generalize the approximation of $\mathbb{E}[D_{P}]$ as $\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}$ from $R=2$ to $R\geq 2$. Write

$\hat{\Psi}_{P+1}=\mathbb{E}[\hat{D}_{P+1}]-\mathbb{E}[\hat{D}_{P}].$ (29)

Because $\mathbb{E}[D_{P+1}-D_{P}\,|\,S_{P+1}>\max\nolimits_{1\leq r\leq R}T_{P,r}]=\frac{1}{p_{0}}-1$, we can express

$\mathbb{E}[D_{P+1}]-\mathbb{E}[D_{P}]=\Psi_{P+1}+\mathrm{Pr}(S_{P+1}>\max\nolimits_{1\leq r\leq R}T_{P,r})\left(\frac{1}{p_{0}}-1\right),$ (30)

where $\Psi_{P+1}=\mathbb{E}[D_{P+1}-D_{P}\,|\,S_{P+1}\leq\max_{1\leq r\leq R}T_{P,r}]\mathrm{Pr}(S_{P+1}\leq\max_{1\leq r\leq R}T_{P,r})$. Based on (29), (30) and (19), we have

$\mathbb{E}[D_{P}]=\mathbb{E}[\hat{D}_{P}]+\sum\nolimits_{j=2}^{P}(\Psi_{j}-\hat{\Psi}_{j})+\left(\frac{1}{p_{0}}-1\right)\left(1+\sum\nolimits_{j=1}^{P-1}\mathrm{Pr}(S_{j+1}>\max\nolimits_{1\leq r\leq R}T_{j,r})\right).$ (31)

For $R=2$, it has been proved in Theorem 7 that $\sum\nolimits_{j=2}^{P}(\Psi_{j}-\hat{\Psi}_{j})=\sum\nolimits_{j=2}^{P}\varepsilon_{j}\geq 0$. For $R>2$, further analysis of $\sum\nolimits_{j=2}^{P}(\Psi_{j}-\hat{\Psi}_{j})$ involves many more details which are beyond the scope of this paper, so we directly ignore this term in our approximation. Similar to (24), $\mathrm{Pr}(S_{P+1}>\max_{1\leq r\leq R}T_{P,r})\geq\prod_{1\leq r\leq R}\mathrm{Pr}(S_{P+1}>T_{P,r})$. Consequently, we obtain the following approximation of $\mathbb{E}[D_{P}]$:

$\mathbb{E}[D_{P}]\approx\mathbb{E}[\hat{D}_{P}]+\left(\frac{1}{p_{0}}-1\right)\left(1+\sum\nolimits_{j=1}^{P-1}\mathrm{Pr}(S_{j+1}>\max\nolimits_{1\leq r\leq R}T_{j,r})\right)\geq\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P},$ (32)

where $\tilde{D}_{P}$ is generalized from (25) to denote $\tilde{D}_{P}=\left(\frac{1}{p_{0}}-1\right)\left(1+\sum_{j=1}^{P-1}\prod_{r=1}^{R}\mathrm{Pr}(S_{j+1}>T_{j,r})\right)$. Recall that $\mathbb{E}[\hat{D}_{P}]$ represents the expected system completion delay for the special case $p_{0}=1$, which degenerates to wireless broadcast. According to (3), we have the following explicit expression:

$\mathbb{E}[\hat{D}_{P}]=P+\sum\nolimits_{d\geq 0}\left(1-\prod\nolimits_{1\leq r\leq R}\sum\nolimits_{j=0}^{d}\binom{P+j-1}{P-1}p_{r}^{P}(1-p_{r})^{j}\right).$ (33)
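Formula (33) is straightforward to evaluate numerically: the inner sum over $j$ is the CDF of a negative binomial number of failures before the $P^{th}$ success, and the outer sum over $d$ can be truncated once its summand is negligible. A Python sketch (truncation tolerance and function name are our own):

```python
from math import comb

def broadcast_delay(P, ps, tol=1e-12):
    """Evaluate Eq. (33): P plus sum_{d>=0} (1 - prod_r F_r(d)), where
    F_r(d) = sum_{j<=d} C(P+j-1, P-1) p_r^P (1-p_r)^j is the probability
    that receiver r needs at most d failed slots before its P-th success."""
    total = float(P)
    cdfs = [0.0] * len(ps)  # cdfs[r] accumulates F_r(d) incrementally
    d = 0
    while True:
        for r, p in enumerate(ps):
            cdfs[r] += comb(P + d - 1, P - 1) * p**P * (1 - p)**d
        prod = 1.0
        for c in cdfs:
            prod *= c
        term = 1.0 - prod
        total += term
        if term < tol:  # truncate the infinite sum once the tail is negligible
            return total
        d += 1
```

For $R=1$ the value reduces to $P/p_{r}$, which provides a simple sanity check.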

Based on Lemma 4 and the definition of $B(i)$ in (7), we can also express $\tilde{D}_{P}$ in the following closed form, as a function of $p_{0}$ and $p_{r}$:

$\begin{split}\tilde{D}_{P}=&\left(\frac{1}{p_{0}}-1\right)\left(1+\sum_{j=1}^{P-1}\prod_{r=1}^{R}\mathrm{Pr}(S_{j+1}>T_{j,r})\right)\\ =&\left(\frac{1}{p_{0}}-1\right)\left(1+\sum_{j=1}^{P-1}\prod_{r=1}^{R}\left(\frac{1}{1-p_{0}}+\frac{p_{0}}{1-p_{0}}\sum\nolimits_{i=0}^{j-1}\sum\nolimits_{k=0}^{i}\frac{\frac{1}{k+1}\binom{i+k}{i}\binom{i}{k}(p_{0}p_{r})^{i}}{(p_{0}p_{r}-p_{0}-p_{r})^{i+k+1}}\right)\right).\end{split}$ (34)

Moreover, notice that $\tilde{D}_{P}$ can be explicitly computed by the following recursive procedure:

  • $\tilde{D}_{1}=\mathbb{E}[D_{1}]-\mathbb{E}[\hat{D}_{1}]=1/p_{0}-1$.

  • For $P\geq 1$, $\tilde{D}_{P+1}=\tilde{D}_{P}+\prod_{r=1}^{R}\mathrm{Pr}(S_{P+1}>T_{P,r})\left(\frac{1}{p_{0}}-1\right)=\tilde{D}_{P}+\prod_{r=1}^{R}\left(\frac{1}{p_{0}}+\sum_{i=0}^{P-1}B_{r}(i)\right)$, where $B_{r}(i)$ denotes $B(i)$ in (7) for given $p_{r}$, and it can be recursively computed by (10).

For fixed $p_{r}$, $\mathbb{E}[\hat{D}_{P}]$ remains the same, so the accuracy of the approximation $\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}$ of $\mathbb{E}[D_{P}]$ depends on $\tilde{D}_{P}$. In particular, both $\frac{1}{p_{0}}-1$ and $\mathrm{Pr}(S_{P+1}>T_{P,r})$ in $\tilde{D}_{P}$ decrease with increasing $p_{0}$, and $\tilde{D}_{P}$ converges to $0$ as $p_{0}$ increases to $1$, which represents the case $\mathbb{E}[D_{P}]=\mathbb{E}[\hat{D}_{P}]$. Moreover, as $\mathrm{Pr}(S_{P+1}>T_{P,r})$ can be explicitly computed by Lemma 4, the use of $\prod_{r=1}^{R}\mathrm{Pr}(S_{P+1}>T_{P,r})$ in $\tilde{D}_{P}$ is to approximate the joint probability $\mathrm{Pr}(S_{P+1}>\max_{1\leq r\leq R}T_{P,r})$, which is a key component in characterizing $\mathbb{E}[D_{P}]$ in (31). Analogous to the discussion above Corollary 8 for the case $R=2$, we further argue that for sufficiently large $P$, when $p_{0}>\max_{1\leq r\leq R}p_{r}$, $\prod_{r=1}^{R}\mathrm{Pr}(S_{P+1}>T_{P,r})$ is a closer approximation of $\mathrm{Pr}(S_{P+1}>\max_{1\leq r\leq R}T_{P,r})$ than when $p_{0}\leq\max_{1\leq r\leq R}p_{r}$, while the approximation performs worst for the case $p_{0}\leq\min_{1\leq r\leq R}p_{r}$.

In (32), we use the sign "$\approx$" instead of "$\geq$" because we have not proved the term $\sum\nolimits_{j=2}^{P}(\Psi_{j}-\hat{\Psi}_{j})$ in the explicit characterization (31) of $\mathbb{E}[D_{P}]$ to be non-negative for $R>2$, though we conjecture this to be the case. By further taking the simple lower bound $\mathbb{E}[D_{P}]\geq\max_{1\leq r\leq R}\mathbb{E}[D_{P,r}]$ into account, we obtain the following approximation of $\mathbb{E}[D_{P}]$ for $R\geq 2$:

$\mathbb{E}[D_{P}]\gtrsim\max\{\max\nolimits_{1\leq r\leq R}\mathbb{E}[D_{P,r}],\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}\},$ (35)

where $\gtrsim$ is rigorously proved to be $\geq$ in Theorem 7 and Corollary 8 for $R=2$. The tightness of the above approximation will be numerically analyzed in the next section.

It is worth noting that when $R$ is relatively small and there is a receiver whose successful reception probability $p_{r}$ is much smaller than the others', $\mathbb{E}[D_{P,r}]$ is a better lower bound of $\mathbb{E}[D_{P}]$ than $\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}$. Otherwise, $\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}$ is a better approximation of $\mathbb{E}[D_{P}]$, because it is built upon the expected system completion delay of wireless broadcast, which already takes all receivers into consideration. In the full-duplex relay network modeled in this paper, as $\mathbb{E}[D_{P}]$ is a lower bound for the expected system completion delay of an arbitrary RLNC scheme, so is $\max\{\max\nolimits_{1\leq r\leq R}\mathbb{E}[D_{P,r}],\mathbb{E}[\hat{D}_{P}]+\tilde{D}_{P}\}$.

IV Numerical Analysis

In this section, we numerically analyze the average completion delay of the two fundamental perfect RLNC schemes and compare the numerical results with the theoretical characterizations obtained in the previous section. Moreover, we compare the performance of the two fundamental schemes with the one proposed in [27], from the perspective of the average completion delay and the average buffer size required at the RS, both normalized by $P$. We adopt the following abbreviations in the figure legends. The FBPF perfect RLNC scheme, the perfect RLNC scheme without buffer, and the perfect RLNC scheme with buffer are respectively labeled as "FBPF", "PRLNC w/t buffer", and "PRLNC w/ buffer". The results obtained by simulation are labeled "simu", and the theoretical results from Proposition 1 and Theorem 2 are labeled "th". The performance for a single receiver and for the system is labeled "single r" and "system", respectively.

Figure 3: Average completion delay per packet of different schemes with: (a) fixed $P=10$, $R=10$, $p_{0}=0.75$ and varying $p_{r}$; (b) fixed $P=10$, $R=10$, $p_{r}=0.75$ and varying $p_{0}$.

Under the settings $P=10$, $R=10$ and $p_{0}=0.75$, Fig. 3(a) depicts the average completion delay per packet at a single receiver as well as for the system with varying $p_{r}$. One may observe the following. First, the average completion delay of every scheme decreases with increasing $p_{r}$, and converges to $1/p_{0}=1.33$. Second, the average completion delay of FBPF is upper bounded by that of perfect RLNC without buffer and lower bounded by that of perfect RLNC with buffer. Most importantly, the simulation results numerically verify the theoretical derivations in Proposition 1 and Theorem 2, and validate the lower bound (35) for the expected system completion delay of perfect RLNC with buffer.

Under the settings $P=10$, $R=10$ and $p_{r}=0.75$, Fig. 3(b) depicts the average completion delay per packet at a single receiver as well as for the system with increasing $p_{0}$. In addition to observations similar to those for Fig. 3(a), one may further conclude the following. The average system completion delay of all three schemes converges to $1+\frac{1}{P}\sum\nolimits_{d\geq 0}(1-\prod\nolimits_{1\leq r\leq R}I_{p_{r}}(P,d+1))=1.69$. Moreover, the plots for the average completion delay at a single receiver in Fig. 3(a) and Fig. 3(b) are almost identical, which implies that for all three schemes, exchanging the values of $p_{0}$ and $p_{r}$ does not affect the completion delay performance at a single receiver. Theoretically, (1) and (5) justify this observation for perfect RLNC without buffer and with buffer, respectively. Lastly, with increasing $p_{0}$, the lower bound (35) becomes tighter, which is in line with the discussion in the previous section.

Table I lists the average buffer size per packet needed at the RS for the FBPF scheme under the settings $P=10$, $R=10$, $p_{r}=0.75$ and different $p_{0}$. It is interesting to notice that the required buffer size increases with increasing $p_{0}$. In comparison, the perfect RLNC scheme without buffer demands no buffer, and the buffer size per packet of the perfect RLNC scheme with buffer is always $1$, which is $34\%$ smaller than that of the FBPF scheme when $p_{0}=p_{r}=0.75$.

Figure 4: Average system completion delay per packet and its lower bounds for perfect RLNC with buffer with fixed $R=2$, $p_{1}=0.75$, $p_{2}=0.85$ and different $p_{0}$, $P$.
TABLE I: The average buffer size per packet of the FBPF scheme with fixed $P=10$, $R=10$, $p_{r}=0.75$ and varying $p_{0}$

  $p_{0}$:       0.5     0.6     0.7     0.8     0.9     1
  Buffer Size:   1.293   1.383   1.474   1.559   1.63    1.698

Fig. 3 has demonstrated the tightness of the lower bound (35) for the expected system completion delay $\mathbb{E}[D_{P}]$ of perfect RLNC with buffer. Recall that the bound (35) consists of two parts, both of which can be explicitly computed. One is $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$, a lower bound deduced from the perspective of wireless broadcast, and the other is $\max_{1\leq r\leq R}\mathbb{E}[D_{P,r}]/P$, the expected completion delay at the single receiver with the worst channel condition. Above Corollary 8, we discussed, for $R=2$, the accuracy of the lower bound $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ and concluded that $\max_{1\leq r\leq R}\mathbb{E}[D_{P,r}]/P$ is necessary to form the tighter lower bound (28). In the remainder of this section, we further numerically compare the accuracy of the two lower bounds $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ and $\max_{1\leq r\leq R}\mathbb{E}[D_{P,r}]/P$.

Under the settings $R=2$, $p_{1}=0.75$, $p_{2}=0.85$, Fig. 4 compares the average system completion delay per packet with the two lower bounds $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ (labeled "lower bound 1: Eq. (7)") and $\mathbb{E}[D_{P,1}]/P$ (labeled "lower bound 2: Eq. (27)"). The comparison is conducted under $3$ different choices of $p_{0}$, that is, $p_{0}\in\{0.95,0.8,0.65\}$. We can draw the following observations from Fig. 4 about the two lower bounds.

  • First, the average system completion delay per packet and its two lower bounds decrease with fixed $P$ and increasing $p_{0}$. They also decrease and converge to certain values with fixed $p_{0}$ and increasing $P$.

  • Second, for all $3$ choices of $p_{0}$, $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ is tighter than $\mathbb{E}[D_{P,1}]/P$ for small $P$. This is because $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ is obtained from the perspective of wireless broadcast, so it takes all receivers' completion delay into account. In particular, the approximation of $\mathbb{E}[D_{P}]-\mathbb{E}[\hat{D}_{P}]$ by $\tilde{D}_{P}$ is relatively accurate for small $P$, and when $P=1$, the bound $\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}]$ is exactly equal to $\mathbb{E}[D_{P}]$, that is, $\tilde{D}_{1}=\mathbb{E}[D_{1}]-\mathbb{E}[\hat{D}_{1}]=1/p_{0}-1$ by (19).

  • Moreover, in the case $p_{0}>\max\{p_{1},p_{2}\}$ (i.e., $p_{0}=0.95$), $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ is always better than $\mathbb{E}[D_{P,1}]/P$, and it converges to the average system completion delay per packet $\mathbb{E}[D_{P}]/P$ with increasing $P$. However, when $p_{0}\leq\max\{p_{1},p_{2}\}$, $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ decreases faster with increasing $P$, so that $\mathbb{E}[D_{P,1}]/P$ outperforms it. This justifies the usefulness of supplementing $\mathbb{E}[D_{P,1}]/P$ in the tighter lower bound (28).

  • Last, by comparing the three curves related to $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$, we find that approximating $\mathbb{E}[D_{P}]/P$ by merely $\mathbb{E}[\hat{D}_{P}]/P$ becomes much looser with decreasing $p_{0}$, because it does not take $p_{0}$ into consideration. Therefore, the additional term $\tilde{D}_{P}$ we introduce in the lower bound (7) is indispensable in estimating $\mathbb{E}[D_{P}]$.

In summary, the observations from Fig. 4 are consistent with the $3$-case discussion about the accuracy of approximating $\mathbb{E}[D_{P}]-\mathbb{E}[\hat{D}_{P}]$ by $\tilde{D}_{P}$ in the previous section (above Corollary 8), and they affirm that (28) is a tighter lower bound than the individual use of (7) or (27).

Figure 5: Average system completion delay per packet and its lower bounds for perfect RLNC with buffer.

For the multi-receiver case $R\in\{20,100\}$, Fig. 5 compares the average system completion delay per packet $\mathbb{E}[D_{P}]/P$ with $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ (labeled "approx") and $\max\nolimits_{1\leq r\leq R}\mathbb{E}[D_{P,r}]/P$ (labeled "single r"), which constitute the lower bound in (35) as an extension of (28). The comparison is conducted under three different settings of $p_{0}$ and $p_{r}$: (a) $p_{0}=0.75$ and $p_{r}$ evenly distributed over $\{0.9,0.8,0.7,0.6\}$; (b) $p_{0}=0.7$ and $p_{r}=0.75$ for each receiver $r$; (c) $p_{0}=0.95$ and $p_{r}=0.9$ for each receiver $r$. The following can be observed.

  • For all $3$ settings, there is a noticeable gap between $\max\nolimits_{1\leq r\leq R}\mathbb{E}[D_{P,r}]/P$ and the average system completion delay per packet $\mathbb{E}[D_{P}]/P$, even for large $P$ (compare with Fig. 4, in which the gap is very small for $P=50$). This is mainly because the number of receivers considered herein is much larger than that in Fig. 4, so the approximation accuracy from the perspective of a single receiver declines.

  • In Settings (a) and (c), the curve of $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ is close to the average system completion delay per packet $\mathbb{E}[D_{P}]/P$ for both choices of $R$. However, the tightness of $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ in Setting (a) is slightly worse than that in Setting (c), mainly because half of the receivers' $p_{r}$ in Setting (a) are larger than $p_{0}$.

  • In Setting (b), the accuracy of $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ in approximating $\mathbb{E}[D_{P}]/P$ is not as good as in Settings (a) and (c), because $p_{0}<p_{r}$ herein. Meanwhile, the approximation $\max\nolimits_{1\leq r\leq R}\mathbb{E}[D_{P,r}]/P$ becomes more accurate for large $P$, affirming that (35) is a tighter lower bound of $\mathbb{E}[D_{P}]$ than $(\tilde{D}_{P}+\mathbb{E}[\hat{D}_{P}])/P$ or $\max\nolimits_{1\leq r\leq R}\mathbb{E}[D_{P,r}]/P$ alone.

V Concluding Remarks

In this paper, for full-duplex relay networks, we consider two fundamental perfect RLNC schemes to investigate the essential benefits RLNC can provide, and derive closed-form formulae for their expected completion delay, both at a single receiver and for the whole system. The expected completion delay of the two schemes can respectively serve as an upper bound for all perfect RLNC schemes and a lower bound for all RLNC schemes. These results provide a theoretical guideline for future work on the detailed design of RLNC-based transmission schemes in full-duplex relay networks. As a follow-up work, by adapting recently proposed efficient RLNC schemes such as Fulcrum [30] or circular-shift RLNC [38], we will further design practical RLNC schemes with a small buffer as well as low coding complexity at the RS, and with completion delay performance closer to the theoretical limit characterized in this paper. Another follow-up direction is to study the completion delay performance of perfect RLNC in a more general network model which contains direct links between the BS and the receivers as well as multiple full-duplex relays.

-A Proof of Eq. (40)

For brevity, write Δ=p0+prp0pr=1(1p0)(1pr)\Delta=p_{0}+p_{r}-p_{0}p_{r}=1-(1-p_{0})(1-p_{r}). Let qij,ijq_{ij,i^{\prime}j^{\prime}} denote the total probability to enter state (i,j)(i^{\prime},j^{\prime}) starting from state (i,j)(i,j) in the Markov chain under the constraint that the transitions from (i,j)(i,j) to (i,j)(i^{\prime},j^{\prime}) are not allowed to visit states (i+l,j+l)(i+l,j+l^{\prime}) for all 0l<ii0\leq l<i^{\prime}-i and l>ll^{\prime}>l. E.g., q10,21=p0prΔ2q_{10,21}=\frac{p_{0}p_{r}}{\Delta^{2}} when P3P\geq 3. For brevity, q00,ijq_{00,ij} will be written as qijq_{ij}.

To reflect the number of original packets in the notation, write EP=𝔼[DP,r]E_{P}=\mathbb{E}[D_{P,r}]. Stemming from (6), we can obtain the following equivalent characterization

EP=(1,0,,0)k0𝐏k𝟏=0jiP11pij,ijqij.E_{P}=(1,0,\ldots,0)\sum\nolimits_{k\geq 0}\mathbf{P}^{k}\mathbf{1}=\sum\nolimits_{0\leq j\leq i\leq P}\frac{1}{1-p_{ij,ij}}q_{ij}. (36)
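The first equality in (36) is the generic expected-absorption-time identity for an absorbing Markov chain. As an illustrative sanity check (on a hypothetical two-state transient chain, not the chain P\mathcal{M}_{P} itself), the series (1,0,…,0) Σ_k P^k 1 agrees with the fundamental-matrix computation:

```python
import numpy as np

# Hypothetical two-state transient chain (not the chain M_P of the paper):
# state 0 stays with prob. 1-p and moves to state 1 with prob. p;
# state 1 stays with prob. 1-q and gets absorbed with prob. q.
p, q = 0.3, 0.6
P_mat = np.array([[1 - p, p],
                  [0.0, 1 - q]])

# Expected absorption time from state 0 via the fundamental matrix (I - P)^{-1}.
E_fund = np.linalg.inv(np.eye(2) - P_mat).sum(axis=1)[0]

# Same quantity via the series (1,0) * sum_k P^k * 1, truncated.
E_series, v = 0.0, np.ones(2)
for _ in range(2000):
    E_series += v[0]
    v = P_mat @ v

assert abs(E_fund - (1 / p + 1 / q)) < 1e-9   # toy chain: E = 1/p + 1/q
assert abs(E_series - E_fund) < 1e-6
```

For this toy chain the answer 1/p + 1/q is available by inspection, which makes the agreement of the two computations easy to confirm.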

Let qijq_{ij}^{\prime} represent the total probability that state (i,j)(i,j) can be entered starting from (0,0)(0,0) in the Markov chain P1\mathcal{M}_{P-1}, so that qij=qijq_{ij}=q_{ij}^{\prime} for all 0jiP20\leq j\leq i\leq P-2. For jP2j\leq P-2, because qjjpr+qPj+prΔi=j+1P1qij=1q_{jj}p_{r}+q_{Pj}+\frac{p_{r}}{\Delta}\sum_{i=j+1}^{P-1}q_{ij}=1 and qjjpr+q(P1)j+prΔi=j+1P2qij=1q_{jj}^{\prime}p_{r}+q_{(P-1)j}^{\prime}+\frac{p_{r}}{\Delta}\sum_{i=j+1}^{P-2}q_{ij}^{\prime}=1, we have qPj+prΔq(P1)j=q(P1)jq_{Pj}+\frac{p_{r}}{\Delta}q_{(P-1)j}=q_{(P-1)j}^{\prime} and thus

qPjpr=q(P1)jprq(P1)jΔ.\frac{q_{Pj}}{p_{r}}=\frac{q_{(P-1)j}^{\prime}}{p_{r}}-\frac{q_{(P-1)j}}{\Delta}. (37)

Since 1pr1-p_{r} is the 11-step transition probability from state (P,j)(P,j) to itself in P\mathcal{M}_{P} as well as from state (P1,j)(P-1,j) to itself in P1\mathcal{M}_{P-1}, and 1Δ1-\Delta is the 11-step transition probability from state (P1,j)(P-1,j) to itself in P\mathcal{M}_{P}, based on (36) and (37), we obtain the following recursive expression

EP=EP1+q(P1)(P1)p0+qP(P1)pr.E_{P}=E_{P-1}+\frac{q_{(P-1)(P-1)}}{p_{0}}+\frac{q_{P(P-1)}}{p_{r}}. (38)

As 1=qPP=q(P1)(P1)pr+qP(P1)1=q_{PP}=q_{(P-1)(P-1)}p_{r}+q_{P(P-1)}, we have qP(P1)=1q(P1)(P1)prq_{P(P-1)}=1-q_{(P-1)(P-1)}p_{r} and

EP=EP1+1pr+1p0p0q(P1)(P1).E_{P}=E_{P-1}+\frac{1}{p_{r}}+\frac{1-p_{0}}{p_{0}}q_{(P-1)(P-1)}. (39)

As E1=1/p0+1/pr1E_{1}=1/p_{0}+1/p_{r}-1, in order to prove (5) based on (39) for P2P\geq 2, it remains to show

(1p0)q(P1)(P1)1p0=i=0P2j=0iTi,j(p0pr)i(Δ)i+j+1,\frac{(1-p_{0})q_{(P-1)(P-1)}-1}{p_{0}}=\sum\nolimits_{i=0}^{P-2}\sum\nolimits_{j=0}^{i}\frac{T_{i,j}(p_{0}p_{r})^{i}}{(-\Delta)^{i+j+1}}, (40)

which is equivalent to proving

1p0p0(q(P1)(P1)q(P2)(P2))=j=0P2TP2,j(p0pr)P2(Δ)P+j1\frac{1-p_{0}}{p_{0}}(q_{(P-1)(P-1)}-q_{(P-2)(P-2)})=\sum\nolimits_{j=0}^{P-2}\frac{T_{P-2,j}(p_{0}p_{r})^{P-2}}{(-\Delta)^{P+j-1}} (41)

In the remaining proof, we shall first make a connection between qiiq_{ii} and q10,(i+1)iq_{10,(i+1)i} for 0<i<P0<i<P. Notice that for every 0j<i<P0\leq j<i<P, the 11-step transition probabilities out of state (i,j)(i,j) remain the same, and qij,(i+1)j=p0Δqii,(i+1)iq_{ij,(i+1)j}=\frac{p_{0}}{\Delta}q_{i^{\prime}i^{\prime},(i^{\prime}+1)i^{\prime}}, qij,(i+1)(j+1)=p0Δqii,(i+1)(i+1)q_{ij,(i+1)(j+1)}=\frac{p_{0}}{\Delta}q_{i^{\prime}i^{\prime},(i^{\prime}+1)(i^{\prime}+1)} for all 0i<P0\leq i^{\prime}<P. Thus, q11=Δp0q10,21q_{11}=\frac{\Delta}{p_{0}}q_{10,21}. Moreover, by making use of the property

pr(1p0)p0qij,(i+1)j=qii,(i+1)iqij,(i+1)j,pr(1p0)p0qij,(i+1)(j+1)=qii,(i+1)(i+1)qij,(i+1)(j+1)\frac{p_{r}(1-p_{0})}{p_{0}}q_{ij,(i+1)j}=q_{i^{\prime}i^{\prime},(i^{\prime}+1)i^{\prime}}-q_{ij,(i+1)j},\frac{p_{r}(1-p_{0})}{p_{0}}q_{ij,(i+1)(j+1)}=q_{i^{\prime}i^{\prime},(i^{\prime}+1)(i^{\prime}+1)}-q_{ij,(i+1)(j+1)}

for 0j<i<P0\leq j<i<P and 0i<P0\leq i^{\prime}<P, one may readily verify that for 1<i<P1<i<P,

qii=Δp0q10,(i+1)i+pr(1p0)p0j=1i1qjjq10,(ij+1)(ij)q_{ii}=\frac{\Delta}{p_{0}}q_{10,(i+1)i}+\frac{p_{r}(1-p_{0})}{p_{0}}\sum\nolimits_{j=1}^{i-1}q_{jj}q_{10,(i-j+1)(i-j)} (42)

On the other hand, for 0<i<P10<i<P-1, q(i+1)(i+1)q_{(i+1)(i+1)} can also be expressed as

q(i+1)(i+1)=\displaystyle q_{(i+1)(i+1)}= prΔqii+j=0i1pr(1p0)(1pr)Δqjjq(j+1)j,(i+1)i\displaystyle\frac{p_{r}}{\Delta}q_{ii}+\sum\nolimits_{j=0}^{i-1}\frac{p_{r}(1-p_{0})(1-p_{r})}{\Delta}q_{jj}q_{(j+1)j,(i+1)i}
=\displaystyle= prΔqii+j=0i1pr(1p0)(1pr)Δqjjq10,(ij+1)(ij)\displaystyle\frac{p_{r}}{\Delta}q_{ii}+\sum\nolimits_{j=0}^{i-1}\frac{p_{r}(1-p_{0})(1-p_{r})}{\Delta}q_{jj}q_{10,(i-j+1)(i-j)} (43)

where the last equality holds due to qij,(i+i)(j+j)=q10,(i+1)jq_{ij,(i+i^{\prime})(j+j^{\prime})}=q_{10,(i^{\prime}+1)j^{\prime}} for all i,i,j,ji,i^{\prime},j,j^{\prime} subject to 0j<i0\leq j<i and j+j<i+i<Pj+j^{\prime}<i+i^{\prime}<P. Then, the addition of (43) and (42) multiplied by p0(1pr)Δ-\frac{p_{0}(1-p_{r})}{\Delta} on both sides yields q(i+1)(i+1)qii=p0(1pr)Δq10,(i+1)iq_{(i+1)(i+1)}-q_{ii}=-\frac{p_{0}(1-p_{r})}{\Delta}q_{10,(i+1)i}, which implies for 1<i<P1<i<P,

1p0p0(qiiq(i1)(i1))=(11Δ)q10,i(i1).\frac{1-p_{0}}{p_{0}}(q_{ii}-q_{(i-1)(i-1)})=(1-\frac{1}{\Delta})q_{10,i(i-1)}. (44)

Lastly, we characterize q10,i(i1)q_{10,i(i-1)} given that 1<i<P1<i<P. Notice that qij,(i+1)(j+1)=p0pr(1p0)(1pr)Δ2+p0prΔ=p0prΔ2q_{ij,(i+1)(j+1)}=\frac{p_{0}p_{r}(1-p_{0})(1-p_{r})}{\Delta^{2}}+\frac{p_{0}p_{r}}{\Delta}=\frac{p_{0}p_{r}}{\Delta^{2}} for 0j<i0\leq j<i. It turns out that

q10,i(i1)=(p0pr)i1Δ2(i1)j=1i1Ni1,j[(1p0)(1pr)]ij1=(p0pr)i1Δ2(i1)j=1i1Ni1,j(1Δ)ij1q_{10,i(i-1)}=\frac{(p_{0}p_{r})^{i-1}}{\Delta^{2(i-1)}}\sum\nolimits_{j=1}^{i-1}N_{i-1,j}[(1-p_{0})(1-p_{r})]^{i-j-1}=\frac{(p_{0}p_{r})^{i-1}}{\Delta^{2(i-1)}}\sum\nolimits_{j=1}^{i-1}N_{i-1,j}(1-\Delta)^{i-j-1} (45)

where Ni1,jN_{i-1,j} represents the number of all those transitions from (1,0)(1,0) to (i,i1)(i,i-1) that contain

  • exactly jj 11-step transitions in the form (i,j)(i+1,j+1)(i^{\prime},j^{\prime})\rightarrow(i^{\prime}+1,j^{\prime}+1), ij1i-j-1 11-step transitions in the form (i,j)(i+1,j)(i^{\prime},j^{\prime})\rightarrow(i^{\prime}+1,j^{\prime}), and ij1i-j-1 11-step transitions in the form (i,j)(i,j+1)(i^{\prime},j^{\prime})\rightarrow(i^{\prime},j^{\prime}+1);

  • no 22-step transitions in the form (i,j)(i+1,j)(i+1,j+1)(i^{\prime},j^{\prime})\rightarrow(i^{\prime}+1,j^{\prime})\rightarrow(i^{\prime}+1,j^{\prime}+1).

Under this constraint, Ni1,jN_{i-1,j} coincides with the Narayana number (see, e.g., Chapter 2 in [39]) with parameters 1ji11\leq j\leq i-1, so Ni1,j=1i1(i1j)(i1j1)N_{i-1,j}=\frac{1}{i-1}\binom{i-1}{j}\binom{i-1}{j-1}. As Ni1,j=Ni1,ijN_{i-1,j}=N_{i-1,i-j},

j=1i1Ni1,j(1Δ)ij1\displaystyle\sum\nolimits_{j=1}^{i-1}N_{i-1,j}(1-\Delta)^{i-j-1}
=\displaystyle= j=1i1Ni1,j(1Δ)j1\displaystyle\sum\nolimits_{j=1}^{i-1}N_{i-1,j}(1-\Delta)^{j-1}
=\displaystyle= j=1i1Ni1,jj=1j(Δ)j1(j1j1)\displaystyle\sum\nolimits_{j=1}^{i-1}N_{i-1,j}\sum\nolimits_{j^{\prime}=1}^{j}(-\Delta)^{j^{\prime}-1}\binom{j-1}{j^{\prime}-1}
=\displaystyle= j=1i1(Δ)j1j=ji1(j1j1)Ni1,j\displaystyle\sum\nolimits_{j^{\prime}=1}^{i-1}(-\Delta)^{j^{\prime}-1}\sum\nolimits_{j=j^{\prime}}^{i-1}\binom{j-1}{j^{\prime}-1}N_{i-1,j}
=\displaystyle= j=1i1(Δ)j1i1(i1j1)(2ij1i)\displaystyle\sum\nolimits_{j^{\prime}=1}^{i-1}\frac{(-\Delta)^{j^{\prime}-1}}{i-1}\binom{i-1}{j^{\prime}-1}\binom{2i-j^{\prime}-1}{i}
=\displaystyle= j=1i1(Δ)ij1i1(i1j)(i+j1i),\displaystyle\sum\nolimits_{j=1}^{i-1}\frac{(-\Delta)^{i-j-1}}{i-1}\binom{i-1}{j}\binom{i+j-1}{i}, (46)

where the second-to-last equality can be readily verified based on the combinatorial equation (n+mn+1)=j=nm+1n(nj)(mn+1j)\binom{n+m}{n+1}=\sum_{j=n-m+1}^{n}\binom{n}{j}\binom{m}{n+1-j} for 1mn+11\leq m\leq n+1. By plugging (46) back into (45),

q10,i(i1)=(p0prΔ2)i1j=1i1(Δ)ij1i1(i1j)(i+j1i).q_{10,i(i-1)}=\left(\frac{p_{0}p_{r}}{\Delta^{2}}\right)^{i-1}\sum_{j=1}^{i-1}\frac{(-\Delta)^{i-j-1}}{i-1}\binom{i-1}{j}\binom{i+j-1}{i}.
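The combinatorial equation used to verify the second-to-last equality in (46) can be checked mechanically over a range of parameters; a minimal sketch:

```python
from math import comb

# Check: C(n+m, n+1) = sum_{j=n-m+1}^{n} C(n, j) C(m, n+1-j) for 1 <= m <= n+1
# (a variant of the Vandermonde convolution; out-of-range terms vanish).
for n in range(1, 12):
    for m in range(1, n + 2):
        rhs = sum(comb(n, j) * comb(m, n + 1 - j) for j in range(n - m + 1, n + 1))
        assert comb(n + m, n + 1) == rhs
```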

Thus, the right-hand side of (44) can be expressed as

(11Δ)q10,i(i1)\displaystyle(1-\frac{1}{\Delta})q_{10,i(i-1)}
=\displaystyle= j=1i1(p0pr)i1(i1)(Δ)i+j1(i1j)(i+j1i)+j=1i1(p0pr)i1(i1)(Δ)i+j(i1j)(i+j1i)\displaystyle\sum\nolimits_{j=1}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{(i-1)(-\Delta)^{i+j-1}}\binom{i-1}{j}\binom{i+j-1}{i}+\sum\nolimits_{j=1}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{(i-1)(-\Delta)^{i+j}}\binom{i-1}{j}\binom{i+j-1}{i} (47)

As (i1j)=i1j(i2j1)\binom{i-1}{j}=\frac{i-1}{j}\binom{i-2}{j-1}, the first term in (47) can be expressed as

j=1i1(p0pr)i1(i1)(Δ)i+j1(i1j)(i+j1i)\displaystyle\sum\nolimits_{j=1}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{(i-1)(-\Delta)^{i+j-1}}\binom{i-1}{j}\binom{i+j-1}{i}
=\displaystyle= j=1i1(p0pr)i1j(Δ)i+j1(i2j1)(i+j1i)\displaystyle\sum\nolimits_{j=1}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{j(-\Delta)^{i+j-1}}\binom{i-2}{j-1}\binom{i+j-1}{i}
=\displaystyle= (p0pr)i1(Δ)i+j=2i1(p0pr)i1j(Δ)i+j1(i2j1)(i+j1i)\displaystyle\frac{(p_{0}p_{r})^{i-1}}{(-\Delta)^{i}}+\sum\nolimits_{j=2}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{j(-\Delta)^{i+j-1}}\binom{i-2}{j-1}\binom{i+j-1}{i}

Moreover, by shifting the index jj and using the fact 1i1(2i2i)=1i(2i2i1)\frac{1}{i-1}\binom{2i-2}{i}=\frac{1}{i}\binom{2i-2}{i-1}, we can express the second term in (47) as

j=1i1(p0pr)i1(i1)(Δ)i+j(i1j)(i+j1i)\displaystyle\sum\nolimits_{j=1}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{(i-1)(-\Delta)^{i+j}}\binom{i-1}{j}\binom{i+j-1}{i}
=\displaystyle= j=2i1(p0pr)i1(i1)(Δ)i+j1(i1j1)(i+j2i)+(p0pr)i1i(Δ)2i1(2i2i1).\displaystyle\sum\nolimits_{j=2}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{(i-1)(-\Delta)^{i+j-1}}\binom{i-1}{j-1}\binom{i+j-2}{i}+\frac{(p_{0}p_{r})^{i-1}}{i(-\Delta)^{2i-1}}\binom{2i-2}{i-1}.

Consequently,

(11Δ)q10,i(i1)\displaystyle(1-\frac{1}{\Delta})q_{10,i(i-1)}
=\displaystyle= (p0pr)i1(Δ)i+j=2i1(p0pr)i1j(Δ)i+j1(i2j1)(i+j1i)\displaystyle\frac{(p_{0}p_{r})^{i-1}}{(-\Delta)^{i}}+\sum\nolimits_{j=2}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{j(-\Delta)^{i+j-1}}\binom{i-2}{j-1}\binom{i+j-1}{i}
+j=2i1(p0pr)i1(i1)(Δ)i+j1(i1j1)(i+j2i)+(p0pr)i1i(Δ)2i1(2i2i1)\displaystyle+\sum\nolimits_{j=2}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{(i-1)(-\Delta)^{i+j-1}}\binom{i-1}{j-1}\binom{i+j-2}{i}+\frac{(p_{0}p_{r})^{i-1}}{i(-\Delta)^{2i-1}}\binom{2i-2}{i-1}
=\displaystyle= (p0pr)i1(Δ)i+j=2i1(p0pr)i1j(Δ)i+j1(i+j2i1)(i1j1)+(p0pr)i1i(Δ)2i1(2i2i1)\displaystyle\frac{(p_{0}p_{r})^{i-1}}{(-\Delta)^{i}}+\sum\nolimits_{j=2}^{i-1}\frac{(p_{0}p_{r})^{i-1}}{j(-\Delta)^{i+j-1}}\binom{i+j-2}{i-1}\binom{i-1}{j-1}+\frac{(p_{0}p_{r})^{i-1}}{i(-\Delta)^{2i-1}}\binom{2i-2}{i-1}
=\displaystyle= j=1i(p0pr)i1j(Δ)i+j1(i+j2i1)(i1j1)\displaystyle\sum\nolimits_{j=1}^{i}\frac{(p_{0}p_{r})^{i-1}}{j(-\Delta)^{i+j-1}}\binom{i+j-2}{i-1}\binom{i-1}{j-1}
=\displaystyle= j=0i1(p0pr)i1Ti1,j(Δ)i+j.\displaystyle\sum\nolimits_{j=0}^{i-1}\frac{(p_{0}p_{r})^{i-1}T_{i-1,j}}{(-\Delta)^{i+j}}. (48)

where the second equality holds because 1j(i2j1)(i+j1i)+1i1(i1j1)(i+j2i)=1j(i+j2i1)(i1j1)\frac{1}{j}\binom{i-2}{j-1}\binom{i+j-1}{i}+\frac{1}{i-1}\binom{i-1}{j-1}\binom{i+j-2}{i}=\frac{1}{j}\binom{i+j-2}{i-1}\binom{i-1}{j-1}.
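The binomial identity justifying this second equality can be verified with exact rational arithmetic; a small sketch:

```python
from fractions import Fraction as F
from math import comb

# Exact check of the identity:
# (1/j) C(i-2,j-1) C(i+j-1,i) + (1/(i-1)) C(i-1,j-1) C(i+j-2,i)
#   = (1/j) C(i+j-2,i-1) C(i-1,j-1),  for 2 <= j <= i-1.
for i in range(3, 16):
    for j in range(2, i):
        lhs = (F(comb(i - 2, j - 1) * comb(i + j - 1, i), j)
               + F(comb(i - 1, j - 1) * comb(i + j - 2, i), i - 1))
        rhs = F(comb(i + j - 2, i - 1) * comb(i - 1, j - 1), j)
        assert lhs == rhs
```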

Eq. (41) can now be verified based on (44) and (48) by setting i=P1i=P-1.
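As an additional sanity check on the combinatorial ingredient of the proof, the constrained path count N_{i-1,j} described before (46) can be enumerated by brute force (for small i only; an illustrative sketch) and compared with the Narayana-number formula:

```python
from itertools import permutations
from math import comb

# Brute-force check: the number of step sequences with j diagonal (D),
# i-j-1 right (R) and i-j-1 up (U) steps from (1,0) to (i,i-1) that never
# touch the diagonal and never take R immediately followed by U equals the
# Narayana number N_{i-1,j} = (1/(i-1)) C(i-1,j) C(i-1,j-1).
def count_paths(i, j):
    steps = "D" * j + "R" * (i - j - 1) + "U" * (i - j - 1)
    count = 0
    for perm in set(permutations(steps)):
        x, y, ok = 1, 0, True
        for k, s in enumerate(perm):
            if s == "U" and k > 0 and perm[k - 1] == "R":
                ok = False  # forbidden 2-step pattern: R immediately followed by U
                break
            x += s in "DR"
            y += s in "DU"
            if y >= x:
                ok = False  # path may not touch or cross the diagonal
                break
        count += ok
    return count

for i in range(2, 7):
    for j in range(1, i):
        assert count_paths(i, j) == comb(i - 1, j) * comb(i - 1, j - 1) // (i - 1)
```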

-B Proof of Theorem 6

Suppose there are R=2R=2 receivers and P+1P+1 source packets in the network. For the parameter EmaxE_{\max} defined in (18), by the min-max identity, Emax=1/p1+1/p2EminE_{\max}=1/p_{1}+1/p_{2}-E_{\min}, where Emin=𝔼[min{N1,N2}]=1/(1(1p1)(1p2))=1/(p1+p2p1p2)E_{\min}=\mathbb{E}[\min\{N_{1},N_{2}\}]=1/(1-(1-p_{1})(1-p_{2}))=1/(p_{1}+p_{2}-p_{1}p_{2}). Moreover, we have

Pr(TP+1,2TP,1)+Pr(TP+1,1>TP,2,TP+1,2>TP,1)=Pr(TP+1,1>TP,2),Pr(TP+1,1TP,2)+Pr(TP+1,1>TP,2,TP+1,2>TP,1)=Pr(TP+1,2>TP,1),\begin{split}&\mathrm{Pr}(T_{P+1,2}\leq T_{P,1})+\mathrm{Pr}(T_{P+1,1}>T_{P,2},T_{P+1,2}>T_{P,1})=\mathrm{Pr}(T_{P+1,1}>T_{P,2}),\\ &\mathrm{Pr}(T_{P+1,1}\leq T_{P,2})+\mathrm{Pr}(T_{P+1,1}>T_{P,2},T_{P+1,2}>T_{P,1})=\mathrm{Pr}(T_{P+1,2}>T_{P,1}),\end{split}

and taking a summation on each side of the above two equations yields

1+Pr(TP+1,1>TP,2,TP+1,2>TP,1)=Pr(TP+1,1>TP,2)+Pr(TP+1,2>TP,1).\begin{split}&1+\mathrm{Pr}(T_{P+1,1}>T_{P,2},T_{P+1,2}>T_{P,1})=\mathrm{Pr}(T_{P+1,1}>T_{P,2})+\mathrm{Pr}(T_{P+1,2}>T_{P,1}).\end{split} (49)

As a result, the recursive expression of 𝔼[DP+1]\mathbb{E}[D_{P+1}] in Lemma 5 and (21) can be respectively rewritten as

𝔼[DP+1]𝔼[DP]=\displaystyle\mathbb{E}[D_{P+1}]-\mathbb{E}[D_{P}]= Pr(TP+1,1>TP,2)(1p1Emin)+Pr(TP+1,2>TP,1)(1p2Emin)+Emin+\displaystyle\mathrm{Pr}(T_{P+1,1}>T_{P,2})(\frac{1}{p_{1}}-E_{\min})+\mathrm{Pr}(T_{P+1,2}>T_{P,1})(\frac{1}{p_{2}}-E_{\min})+E_{\min}+
Pr(SP+1>max{TP,1,TP,2})(1p01),\displaystyle\mathrm{Pr}(S_{P+1}>\max\{T_{P,1},T_{P,2}\})(\frac{1}{p_{0}}-1),
𝔼[D^P+1]𝔼[D^P]=Pr(T^P+1,1>T^P,2)(1p1Emin)+Pr(T^P+1,2>T^P,1)(1p2Emin)+Emin.\mathbb{E}[\hat{D}_{P+1}]-\mathbb{E}[\hat{D}_{P}]=\mathrm{Pr}(\hat{T}_{P+1,1}>\hat{T}_{P,2})(\frac{1}{p_{1}}-E_{\min})+\mathrm{Pr}(\hat{T}_{P+1,2}>\hat{T}_{P,1})(\frac{1}{p_{2}}-E_{\min})+E_{\min}.

By the above two equations and (22), in order to prove (23), it suffices to show

εP+1=(Pr(TP+1,1>TP,2)Pr(T^P+1,1>T^P,2))(1p1Emin)+(Pr(TP+1,2>TP,1)Pr(T^P+1,2>T^P,1))(1p2Emin)\begin{split}\varepsilon_{P+1}=&(\mathrm{Pr}(T_{P+1,1}>T_{P,2})-\mathrm{Pr}(\hat{T}_{P+1,1}>\hat{T}_{P,2}))(\frac{1}{p_{1}}-E_{\min})+\\ &(\mathrm{Pr}(T_{P+1,2}>T_{P,1})-\mathrm{Pr}(\hat{T}_{P+1,2}>\hat{T}_{P,1}))(\frac{1}{p_{2}}-E_{\min})\end{split} (50)

Observe that

Pr(TP+1,1>TP,2)=\displaystyle\mathrm{Pr}(T_{P+1,1}>T_{P,2})= 1Pr(TP,2TP+1,1)\displaystyle 1-\mathrm{Pr}(T_{P,2}\geq T_{P+1,1})
=\displaystyle= 1Pr(TP+1,2>TP+1,1)+Pr(TP+1,2>TP+1,1>TP,2).\displaystyle 1-\mathrm{Pr}(T_{P+1,2}>T_{P+1,1})+\mathrm{Pr}(T_{P+1,2}>T_{P+1,1}>T_{P,2}). (51)

Due to the memoryless property of the geometric distribution, under the condition TP+1,2>TP,1T_{P+1,2}>T_{P,1} and TP+1,1>TP,2T_{P+1,1}>T_{P,2}, the events TP+1,2>TP+1,1T_{P+1,2}>T_{P+1,1} and TP+1,2=TP+1,1T_{P+1,2}=T_{P+1,1} are independent of the arrival timeslot SP+1S_{P+1} of packet P+1P+1 at the RS, and they occur with respective probabilities p1(1p2)p1+p2p1p2\frac{p_{1}(1-p_{2})}{p_{1}+p_{2}-p_{1}p_{2}} and p1p2p1+p2p1p2\frac{p_{1}p_{2}}{p_{1}+p_{2}-p_{1}p_{2}}. Hence,

Pr(TP+1,2>TP+1,1>TP,2)=Pr(TP+1,2>TP+1,1|TP+1,2>TP,1,TP+1,1>TP,2)Pr(TP+1,2>TP,1,TP+1,1>TP,2)=Pr(TP+1,2>TP,1,TP+1,1>TP,2)p1(1p2)p1+p2p1p2,Pr(TP+1,2=TP+1,1)=Pr(TP+1,2>TP,1,TP+1,1>TP,2)p1p2p1+p2p1p2.\begin{split}&\mathrm{Pr}(T_{P+1,2}>T_{P+1,1}>T_{P,2})\\ =&\mathrm{Pr}(T_{P+1,2}>T_{P+1,1}|T_{P+1,2}>T_{P,1},T_{P+1,1}>T_{P,2}){Pr}(T_{P+1,2}>T_{P,1},T_{P+1,1}>T_{P,2})\\ =&\mathrm{Pr}(T_{P+1,2}>T_{P,1},T_{P+1,1}>T_{P,2})\frac{p_{1}(1-p_{2})}{p_{1}+p_{2}-p_{1}p_{2}},\\ &\mathrm{Pr}(T_{P+1,2}=T_{P+1,1})=\mathrm{Pr}(T_{P+1,2}>T_{P,1},T_{P+1,1}>T_{P,2})\frac{p_{1}p_{2}}{p_{1}+p_{2}-p_{1}p_{2}}.\end{split}

It turns out that Pr(TP+1,2>TP+1,1>TP,2)=Pr(TP+1,2=TP+1,1)1p2p2\mathrm{Pr}(T_{P+1,2}>T_{P+1,1}>T_{P,2})=\mathrm{Pr}(T_{P+1,2}=T_{P+1,1})\frac{1-p_{2}}{p_{2}}. By plugging the above expression back into (51), we obtain

Pr(TP+1,1>TP,2)=1Pr(TP+1,2>TP+1,1)+Pr(TP+1,2=TP+1,1)1p2p2,\mathrm{Pr}(T_{P+1,1}>T_{P,2})=1-\mathrm{Pr}(T_{P+1,2}>T_{P+1,1})+\mathrm{Pr}(T_{P+1,2}=T_{P+1,1})\frac{1-p_{2}}{p_{2}}, (52)

and similarly

Pr(TP+1,2>TP,1)=1Pr(TP+1,1>TP+1,2)+Pr(TP+1,1=TP+1,2)1p1p1.\mathrm{Pr}(T_{P+1,2}>T_{P,1})=1-\mathrm{Pr}(T_{P+1,1}>T_{P+1,2})+\mathrm{Pr}(T_{P+1,1}=T_{P+1,2})\frac{1-p_{1}}{p_{1}}. (53)

Based on (52), (53), and Emin=1/(p1+p2p1p2)E_{\min}=1/(p_{1}+p_{2}-p_{1}p_{2}), the correctness of (50) can be verified.
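The expression for E_min and the min-max identity E_max = 1/p1 + 1/p2 − E_min used in this proof can be checked by a quick Monte Carlo sketch (the values of p1 and p2 below are illustrative):

```python
import random

# Monte Carlo check of E_min = E[min{N1,N2}] = 1/(p1 + p2 - p1*p2) and of
# E_max = 1/p1 + 1/p2 - E_min for independent geometric random variables
# (number of trials until the first success). Illustrative parameter values.
random.seed(1)
p1, p2 = 0.6, 0.8

def geom(p):
    n = 1
    while random.random() >= p:
        n += 1
    return n

trials = 200_000
s_min = s_max = 0
for _ in range(trials):
    n1, n2 = geom(p1), geom(p2)
    s_min += min(n1, n2)
    s_max += max(n1, n2)

E_min_exact = 1 / (p1 + p2 - p1 * p2)
E_max_exact = 1 / p1 + 1 / p2 - E_min_exact
assert abs(s_min / trials - E_min_exact) < 0.02
assert abs(s_max / trials - E_max_exact) < 0.02
```

The first assertion also reflects that the minimum of two independent geometric variables is itself geometric with success probability 1 − (1−p1)(1−p2).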

-C Proof of Theorem 7

It remains to prove j=2Pεj0\sum\nolimits_{j=2}^{P}\varepsilon_{j}\geq 0. To ease the analysis, we make use of the following Markov chain \mathcal{M} consisting of (P+1)2(P+1)^{2} states, in which (i) state (i,j)(i,j), 0i,jP0\leq i,j\leq P, represents the scenario that receivers 11 and 22 have respectively obtained ii and jj packets; (ii) there is a 11-step transition once at least one receiver obtains a new packet. In the Markov chain, let pij,ijp_{ij,i^{\prime}j^{\prime}} represent the 11-step transition probability from state (i,j)(i,j) to (i,j)(i^{\prime},j^{\prime}), and qijq_{ij} denote the total probability to visit state (i,j)(i,j) among all paths from (0,0)(0,0) to (P,P)(P,P). Notice that the Markov chain \mathcal{M} defined herein is different from the one modeled at the beginning of Sec. III-C and illustrated in Fig. 2, because it does not involve the number of received packets at the RS in the state description. Observe that for 1i<P1\leq i<P,

qii=Pr(Ti+1,1>Ti,2,Ti+1,2>Ti,1).q_{ii}=\mathrm{Pr}(T_{i+1,1}>T_{i,2},T_{i+1,2}>T_{i,1}). (54)

For brevity, write Q0i=q(i1)(i1)p(i1)(i1),ii,Q1i=j=0i1qij,Q2i=j=0i1qjiQ_{0}^{i}=q_{(i-1)(i-1)}p_{(i-1)(i-1),ii},~{}Q_{1}^{i}=\sum\nolimits_{j=0}^{i-1}q_{ij},~{}Q_{2}^{i}=\sum\nolimits_{j=0}^{i-1}q_{ji} for 1iP1\leq i\leq P and write Δij=1(1pi)(1pj)=pi+pjpipj\Delta_{ij}=1-(1-p_{i})(1-p_{j})=p_{i}+p_{j}-p_{i}p_{j} for 0i<j20\leq i<j\leq 2. In this way,

Q0i=Pr(Ti,1=Ti,2),Q1i=Pr(Ti,2>Ti,1),Q2i=Pr(Ti,1>Ti,2),Q0i+Q1i+Q2i=1,Q_{0}^{i}=\mathrm{Pr}(T_{i,1}=T_{i,2}),~{}Q_{1}^{i}=\mathrm{Pr}(T_{i,2}>T_{i,1}),~{}Q_{2}^{i}=\mathrm{Pr}(T_{i,1}>T_{i,2}),~{}Q_{0}^{i}+Q_{1}^{i}+Q_{2}^{i}=1, (55)
εi=(Q0iQ^0i)(1p1)(1p2)(p1+p2)p1p2Δ12+(Q^2iQ2i)p1(1p2)p2Δ12+(Q^1iQ1i)(1p1)p2p1Δ12\varepsilon_{i}=(Q_{0}^{i}-\hat{Q}_{0}^{i})\frac{(1-p_{1})(1-p_{2})(p_{1}+p_{2})}{p_{1}p_{2}\Delta_{12}}+(\hat{Q}_{2}^{i}-Q_{2}^{i})\frac{p_{1}(1-p_{2})}{p_{2}\Delta_{12}}+(\hat{Q}_{1}^{i}-Q_{1}^{i})\frac{(1-p_{1})p_{2}}{p_{1}\Delta_{12}} (56)

for 1iP1\leq i\leq P. We next characterize the 11-step transition probability in \mathcal{M}.

For 0i<P0\leq i<P, by the memoryless property of a geometric distribution, the 11-step transition probability starting from state (i,i)(i,i) is not affected by the arrival time Si+1S_{i+1} of the (i+1)st(i+1)^{st} packet at the RS and thus does not depend on ii. Specifically,

pii,(i+1)(i+1)=p1p2Δ12,pii,(i+1)i=p1(1p2)Δ12,pii,i(i+1)=(1p1)p2Δ12.p_{ii,(i+1)(i+1)}=\frac{p_{1}p_{2}}{\Delta_{12}},\quad p_{ii,(i+1)i}=\frac{p_{1}(1-p_{2})}{\Delta_{12}},\quad p_{ii,i(i+1)}=\frac{(1-p_{1})p_{2}}{\Delta_{12}}. (57)
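The success-pattern interpretation behind (57), i.e., conditioning on at least one receiver succeeding in a slot, can be checked by Monte Carlo simulation; a sketch with illustrative p1, p2:

```python
import random

# Check that the 1-step transition probabilities from state (i,i) in (57)
# are the joint success patterns of receivers 1 and 2 conditioned on at
# least one success in a slot (by memorylessness they do not depend on i).
random.seed(2)
p1, p2 = 0.6, 0.8
delta12 = p1 + p2 - p1 * p2

counts = {"both": 0, "only1": 0, "only2": 0}
trials = 200_000
for _ in range(trials):
    while True:  # draw slots until at least one receiver succeeds
        s1, s2 = random.random() < p1, random.random() < p2
        if s1 or s2:
            break
    counts["both" if (s1 and s2) else "only1" if s1 else "only2"] += 1

assert abs(counts["both"] / trials - p1 * p2 / delta12) < 0.01
assert abs(counts["only1"] / trials - p1 * (1 - p2) / delta12) < 0.01
assert abs(counts["only2"] / trials - (1 - p1) * p2 / delta12) < 0.01
```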

For 0j<i<P0\leq j<i<P, the 11-step transition probability starting from state (i,j)(i,j) is affected by the arrival time Si+1S_{i+1} of the (i+1)st(i+1)^{st} packet at the RS. Notice that when \mathcal{M} is in state (i,j)(i,j), it implies the occurrence of the joint event of Ti+1,1>Tj,2T_{i+1,1}>T_{j,2} and Tj+1,2>Ti,1T_{j+1,2}>T_{i,1}, which will be denoted by AijA_{ij}. Thus, we can respectively express pij,i(j+1)=Pr(Tj+1,2<Ti+1,1|Aij)p_{ij,i(j+1)}=\mathrm{Pr}(T_{j+1,2}<T_{i+1,1}|A_{ij}), pij,(i+1)(j+1)=Pr(Tj+1,2=Ti+1,1|Aij)p_{ij,(i+1)(j+1)}=\mathrm{Pr}(T_{j+1,2}=T_{i+1,1}|A_{ij}), and pij,(i+1)j=Pr(Tj+1,2>Ti+1,1|Aij)p_{ij,(i+1)j}=\mathrm{Pr}(T_{j+1,2}>T_{i+1,1}|A_{ij}) as

pij,i(j+1)=\displaystyle p_{ij,i(j+1)}= Pr(Tj+1,2<Si+1|Si+1>Ti,1,Aij)Pr(Si+1>Ti,1|Aij)+\displaystyle\mathrm{Pr}(T_{j+1,2}<S_{i+1}|S_{i+1}>T_{i,1},A_{ij})\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})+
Pr(Si+1Tj+1,2<Ti+1,1|Si+1>Ti,1,Aij)Pr(Si+1>Ti,1|Aij)+\displaystyle\mathrm{Pr}(S_{i+1}\leq T_{j+1,2}<T_{i+1,1}|S_{i+1}>T_{i,1},A_{ij})\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})+
Pr(Tj+1,2<Ti+1,1|Si+1Ti,1,Aij)Pr(Si+1Ti,1|Aij)\displaystyle\mathrm{Pr}(T_{j+1,2}<T_{i+1,1}|S_{i+1}\leq T_{i,1},A_{ij})\mathrm{Pr}(S_{i+1}\leq T_{i,1}|A_{ij})
=\displaystyle= (1p1)p2Δ12+(1p0)p1p2Δ02Δ12Pr(Si+1>Ti,1|Aij)\displaystyle\frac{(1-p_{1})p_{2}}{\Delta_{12}}+\frac{(1-p_{0})p_{1}p_{2}}{\Delta_{02}\Delta_{12}}\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})
=\displaystyle= (1p1)p2Δ12(1+p11p1αij),\displaystyle\frac{(1-p_{1})p_{2}}{\Delta_{12}}(1+\frac{p_{1}}{1-p_{1}}\alpha_{ij}), (58)
pij,(i+1)(j+1)=\displaystyle p_{ij,(i+1)(j+1)}= Pr(Si+1Tj+1,2=Ti+1,1|Si+1>Ti,1,Aij)Pr(Si+1>Ti,1|Aij)+\displaystyle\mathrm{Pr}(S_{i+1}\leq T_{j+1,2}=T_{i+1,1}|S_{i+1}>T_{i,1},A_{ij})\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})+
Pr(Tj+1,2=Ti+1,1|Si+1Ti,1,Aij)Pr(Si+1Ti,1|Aij)\displaystyle\mathrm{Pr}(T_{j+1,2}=T_{i+1,1}|S_{i+1}\leq T_{i,1},A_{ij})\mathrm{Pr}(S_{i+1}\leq T_{i,1}|A_{ij})
=\displaystyle= p1p2Δ12(1p0)p1p22Δ02Δ12Pr(Si+1>Ti,1|Aij)\displaystyle\frac{p_{1}p_{2}}{\Delta_{12}}-\frac{(1-p_{0})p_{1}p_{2}^{2}}{\Delta_{02}\Delta_{12}}\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})
=\displaystyle= p1p2Δ12(1p2αij),\displaystyle\frac{p_{1}p_{2}}{\Delta_{12}}(1-p_{2}\alpha_{ij}), (59)
pij,(i+1)j=\displaystyle p_{ij,(i+1)j}= Pr(Si+1Ti+1,1<Tj+1,2|Si+1>Ti,1,Aij)Pr(Si+1>Ti,1|Aij)+\displaystyle\mathrm{Pr}(S_{i+1}\leq T_{i+1,1}<T_{j+1,2}|S_{i+1}>T_{i,1},A_{ij})\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})+
Pr(Tj+1,2>Ti+1,1|Si+1Ti,1,Aij)Pr(Si+1Ti,1|Aij)\displaystyle\mathrm{Pr}(T_{j+1,2}>T_{i+1,1}|S_{i+1}\leq T_{i,1},A_{ij})\mathrm{Pr}(S_{i+1}\leq T_{i,1}|A_{ij})
=\displaystyle= p1(1p2)Δ12(1p0)p1p2(1p2)Δ02Δ12Pr(Si+1>Ti,1|Aij)\displaystyle\frac{p_{1}(1-p_{2})}{\Delta_{12}}-\frac{(1-p_{0})p_{1}p_{2}(1-p_{2})}{\Delta_{02}\Delta_{12}}\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})
=\displaystyle= p1(1p2)Δ12(1p2αij),\displaystyle\frac{p_{1}(1-p_{2})}{\Delta_{12}}(1-p_{2}\alpha_{ij}), (60)

where αij={1p0Δ02Pr(Si+1>Ti,1|Aij)ifi>j1p0Δ01Pr(Sj+1>Tj,2|Aij)ifi<j\alpha_{ij}=\left\{\begin{matrix}\frac{1-p_{0}}{\Delta_{02}}\mathrm{Pr}(S_{i+1}>T_{i,1}|A_{ij})&\mathrm{if}~{}i>j\\ \frac{1-p_{0}}{\Delta_{01}}\mathrm{Pr}(S_{j+1}>T_{j,2}|A_{ij})&\mathrm{if}~{}i<j\\ \end{matrix}\right..

Similarly, for 0i<jP0\leq i<j\leq P,

pij,(i+1)j=p1(1p2)Δ12(1+p21p2αij),pij,(i+1)(j+1)=p1p2Δ12(1p1αij),pij,i(j+1)=(1p1)p2Δ12(1p1αij).\begin{split}p_{ij,(i+1)j}&=\frac{p_{1}(1-p_{2})}{\Delta_{12}}(1+\frac{p_{2}}{1-p_{2}}\alpha_{ij}),p_{ij,(i+1)(j+1)}=\frac{p_{1}p_{2}}{\Delta_{12}}(1-p_{1}\alpha_{ij}),\\ p_{ij,i(j+1)}&=\frac{(1-p_{1})p_{2}}{\Delta_{12}}(1-p_{1}\alpha_{ij}).\end{split} (61)

For the special case p0=1p_{0}=1, let p^ij,ij\hat{p}_{ij,i^{\prime}j^{\prime}}, q^ij\hat{q}_{ij} and Q^ji\hat{Q}^{i}_{j} respectively represent pij,ijp_{ij,i^{\prime}j^{\prime}}, qijq_{ij} and QjiQ^{i}_{j}. Based on the above derivation of 11-step transition probabilities in \mathcal{M}, we obtain the following comparisons. For 0i<P0\leq i<P,

pii,(i+1)i=p^ii,(i+1)i,pii,i(i+1)=p^ii,i(i+1),pii,(i+1)(i+1)=p^ii,(i+1)(i+1).p_{ii,(i+1)i}=\hat{p}_{ii,(i+1)i},\quad p_{ii,i(i+1)}=\hat{p}_{ii,i(i+1)},\quad p_{ii,(i+1)(i+1)}=\hat{p}_{ii,(i+1)(i+1)}. (62)

For 0j<i<P0\leq j<i<P,

pij,i(j+1)p^ij,i(j+1)0,pij,(i+1)(j+1)p^ij,(i+1)(j+1)0,pij,(i+1)jp^ij,(i+1)j0.p_{ij,i(j+1)}-\hat{p}_{ij,i(j+1)}\geq 0,~{}p_{ij,(i+1)(j+1)}-\hat{p}_{ij,(i+1)(j+1)}\leq 0,~{}p_{ij,(i+1)j}-\hat{p}_{ij,(i+1)j}\leq 0. (63)

For 0i<j<P0\leq i<j<P,

pij,(i+1)jp^ij,(i+1)j0,pij,(i+1)(j+1)p^ij,(i+1)(j+1)0,pij,i(j+1)p^ij,i(j+1)0.p_{ij,(i+1)j}-\hat{p}_{ij,(i+1)j}\geq 0,~{}p_{ij,(i+1)(j+1)}-\hat{p}_{ij,(i+1)(j+1)}\leq 0,~{}p_{ij,i(j+1)}-\hat{p}_{ij,i(j+1)}\leq 0. (64)

Assume 2iP2\leq i\leq P and p0<1p_{0}<1. Eqs. (62)-(64) together imply that the diagonal state (i1,i1)(i-1,i-1) is visited with higher probability along the paths from (0,0)(0,0) to (P,P)(P,P) compared with the case p0=1p_{0}=1, that is, q(i1)(i1)>q^(i1)(i1)q_{(i-1)(i-1)}>\hat{q}_{(i-1)(i-1)}. Since p(i1)(i1),ii=p^(i1)(i1),iip_{(i-1)(i-1),ii}=\hat{p}_{(i-1)(i-1),ii}, Q0i>Q^0iQ_{0}^{i}>\hat{Q}_{0}^{i}. Since Q0i+Q1i+Q2i=1Q_{0}^{i}+Q_{1}^{i}+Q_{2}^{i}=1,

Q^1i+Q^2i>Q1i+Q2i.\hat{Q}_{1}^{i}+\hat{Q}_{2}^{i}>Q_{1}^{i}+Q_{2}^{i}. (65)

When p1=p2p_{1}=p_{2}, we have Q1i=Q2iQ_{1}^{i}=Q_{2}^{i}, so (65) implies Q^1i>Q1i\hat{Q}_{1}^{i}>Q_{1}^{i} and Q^2i>Q2i\hat{Q}_{2}^{i}>Q_{2}^{i}. Therefore, εi>0\varepsilon_{i}>0 and 2iPεi>0\sum_{2\leq i\leq P}\varepsilon_{i}>0.

Assume p1<p2p_{1}<p_{2}. In this case, Q0i>Q^0iQ_{0}^{i}>\hat{Q}_{0}^{i}, Q^2i>Q2i>Q1i\hat{Q}_{2}^{i}>Q_{2}^{i}>Q_{1}^{i}, and Q^2i\hat{Q}_{2}^{i} converges to 11 while Q^0i\hat{Q}_{0}^{i} and Q^1i\hat{Q}_{1}^{i} converge to 0 with increasing ii (assuming PP is sufficiently large). For relatively small ii, Q^1i\hat{Q}_{1}^{i} is still larger than Q1iQ_{1}^{i} due to the effect of (63), so we have εi>0\varepsilon_{i}>0. However, with increasing ii, Q^1i<Q1i\hat{Q}_{1}^{i}<Q_{1}^{i} will occur because Q^1i\hat{Q}_{1}^{i} decreases faster than Q1iQ_{1}^{i} (Q1iQ_{1}^{i} may or may not converge to 0 depending on whether p0p_{0} is larger than p1p_{1}). As a result, for large PP, (65) is insufficient to imply εi>0\varepsilon_{i}>0, so further manipulation of the expression of εi\varepsilon_{i} is needed.

For 1i<P1\leq i<P, write Q10i=qi(i1)pi(i1),ii,Q20i=q(i1)ip(i1)i,iiQ_{10}^{i}=q_{i(i-1)}p_{i(i-1),ii},Q_{20}^{i}=q_{(i-1)i}p_{(i-1)i,ii}, and let Q^10i\hat{Q}_{10}^{i}, Q^20i\hat{Q}_{20}^{i} respectively represent Q10iQ_{10}^{i}, Q20iQ_{20}^{i} for the special case p0=1p_{0}=1. In terms of Q10iQ_{10}^{i}, we can express Q1i+1Q_{1}^{i+1}, 1i<P1\leq i<P, recursively as

Q1i+1=Q1iQ10i+qiipii,(i+1)i=Q1iQ10i+qiip1(1p2)Δ12=Q1iQ10i+Q0i+11p2p2,Q_{1}^{i+1}=Q_{1}^{i}-Q_{10}^{i}+q_{ii}p_{ii,(i+1)i}=Q_{1}^{i}-Q_{10}^{i}+q_{ii}\frac{p_{1}(1-p_{2})}{\Delta_{12}}=Q_{1}^{i}-Q_{10}^{i}+Q_{0}^{i+1}\frac{1-p_{2}}{p_{2}}, (66)

where the last equality holds by (57) and the definition of Q0iQ_{0}^{i}. Similarly,

Q2i+1=Q2iQ20i+qii(1p1)p2Δ12=Q2iQ20i+Q0i+11p1p1.Q_{2}^{i+1}=Q_{2}^{i}-Q_{20}^{i}+q_{ii}\frac{(1-p_{1})p_{2}}{\Delta_{12}}=Q_{2}^{i}-Q_{20}^{i}+Q_{0}^{i+1}\frac{1-p_{1}}{p_{1}}. (67)

By plugging (66), (67) back to (56), we can deduce the following recursive expression of εi+1\varepsilon_{i+1},

εi+1=(Q^2iQ2i+Q20iQ^20i)p1(1p2)p2Δ12+(Q^1iQ1i+Q10iQ^10i)(1p1)p2p1Δ12=εi+(Q20iQ^20i)p1(1p2)p2Δ12+(Q10iQ^10i)(1p1)p2p1Δ12(Q0iQ^0i)(1p1)(1p2)(p1+p2)p1p2Δ12,\begin{split}&\varepsilon_{i+1}\\ =&(\hat{Q}_{2}^{i}-Q_{2}^{i}+Q_{20}^{i}-\hat{Q}_{20}^{i})\frac{p_{1}(1-p_{2})}{p_{2}\Delta_{12}}+(\hat{Q}_{1}^{i}-Q_{1}^{i}+Q_{10}^{i}-\hat{Q}_{10}^{i})\frac{(1-p_{1})p_{2}}{p_{1}\Delta_{12}}\\ =&\varepsilon_{i}+(Q_{20}^{i}-\hat{Q}_{20}^{i})\frac{p_{1}(1-p_{2})}{p_{2}\Delta_{12}}+(Q_{10}^{i}-\hat{Q}_{10}^{i})\frac{(1-p_{1})p_{2}}{p_{1}\Delta_{12}}-(Q_{0}^{i}-\hat{Q}_{0}^{i})\frac{(1-p_{1})(1-p_{2})(p_{1}+p_{2})}{p_{1}p_{2}\Delta_{12}},\\ \end{split}

Here, write C1i=Q10iQ0i1p2p2=Q10iq(i1)(i1)p1(1p2)Δ12C_{1}^{i}=Q_{10}^{i}-Q_{0}^{i}\frac{1-p_{2}}{p_{2}}=Q_{10}^{i}-q_{(i-1)(i-1)}\frac{p_{1}(1-p_{2})}{\Delta_{12}}, C2i=Q20iQ0i1p1p1=Q20iq(i1)(i1)(1p1)p2Δ12C_{2}^{i}=Q_{20}^{i}-Q_{0}^{i}\frac{1-p_{1}}{p_{1}}=Q_{20}^{i}-q_{(i-1)(i-1)}\frac{(1-p_{1})p_{2}}{\Delta_{12}}. In this way, for 1i<P1\leq i<P, we have εi+1=εi+(C1iC^1i)(1p1)p2p1Δ12+(C2iC^2i)p1(1p2)p2Δ12\varepsilon_{i+1}=\varepsilon_{i}+(C_{1}^{i}-\hat{C}_{1}^{i})\frac{(1-p_{1})p_{2}}{p_{1}\Delta_{12}}+(C_{2}^{i}-\hat{C}_{2}^{i})\frac{p_{1}(1-p_{2})}{p_{2}\Delta_{12}}, where C^1i\hat{C}_{1}^{i}, C^2i\hat{C}_{2}^{i} respectively refer to C1iC_{1}^{i}, C2iC_{2}^{i} in the case of p0=1p_{0}=1. Because

C1i+C2i=\displaystyle C_{1}^{i}+C_{2}^{i}= Q10i+Q20iq(i1)(i1)p1+p22p1p2Δ12=qiiq(i1)(i1),\displaystyle Q_{10}^{i}+Q_{20}^{i}-q_{(i-1)(i-1)}\frac{p_{1}+p_{2}-2p_{1}p_{2}}{\Delta_{12}}=q_{ii}-q_{(i-1)(i-1)}, (68)

we have εi+1=(qiiq^ii)p1(1p2)p2Δ12+j=1i(C1jC^1j)(1p11p2)\varepsilon_{i+1}=(q_{ii}-\hat{q}_{ii})\frac{p_{1}(1-p_{2})}{p_{2}\Delta_{12}}+\sum_{j=1}^{i}(C_{1}^{j}-\hat{C}_{1}^{j})(\frac{1}{p_{1}}-\frac{1}{p_{2}}). When p0=1p_{0}=1 or p1=p2=1p_{1}=p_{2}=1, qii=q^iiq_{ii}=\hat{q}_{ii} and C1j=C^1jC_{1}^{j}=\hat{C}_{1}^{j}, so that εi+1=0\varepsilon_{i+1}=0 for all 1i<P1\leq i<P.

Assume p0,p1,p21p_{0},p_{1},p_{2}\neq 1. Based on (58)-(60) and (61), we have

q11q^11=p1p2(1p1)(1p2)Δ122(p11p1α01+p21p2α10)>0,\displaystyle q_{11}-\hat{q}_{11}=\frac{p_{1}p_{2}(1-p_{1})(1-p_{2})}{\Delta_{12}^{2}}(\frac{p_{1}}{1-p_{1}}\alpha_{01}+\frac{p_{2}}{1-p_{2}}\alpha_{10})>0,
C11C^11=Q101Q^101=p1p2(1p1)(1p2)Δ122p21p2α10>0,\displaystyle C_{1}^{1}-\hat{C}_{1}^{1}=Q_{10}^{1}-\hat{Q}_{10}^{1}=\frac{p_{1}p_{2}(1-p_{1})(1-p_{2})}{\Delta_{12}^{2}}\frac{p_{2}}{1-p_{2}}\alpha_{10}>0,

so that ε2>0\varepsilon_{2}>0. With increasing ii, both qiiq_{ii} and q^ii\hat{q}_{ii} decrease and qiiq^iiq_{ii}-\hat{q}_{ii} converges to a nonnegative constant value (for sufficiently large PP). We can then deduce, based on (68), that both C1iC_{1}^{i} and C2iC_{2}^{i} also converge to zero as ii increases. As a result, even though C1iC^1i<0C_{1}^{i}-\hat{C}_{1}^{i}<0 may occur, |C1iC^1i||C_{1}^{i}-\hat{C}_{1}^{i}| is negligible compared with j=1i(qjjq^jj)>0\sum_{j=1}^{i}(q_{jj}-\hat{q}_{jj})>0. We can now assert j=2Pεj>0\sum\nolimits_{j=2}^{P}\varepsilon_{j}>0.

Acknowledgement

The authors would like to thank the editor and the anonymous reviewers for their valuable suggestions, which helped improve the quality of this paper.

References

  • [1] R. Su, Q. T. Sun and Z. Zhang, “On the delay of random linear network coding in full-duplex relay networks,” IEEE ISIT, 2021.
  • [2] G. Liu, F. R. Yu, H. Ji, V. C. M. Leung and X. Li, “In-band full-duplex relaying: a survey, research issues and challenges,” IEEE Commun. Surv., vol. 17, no. 2, 2015.
  • [3] T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi and B. Leong, “A random linear network coding approach to multicast,” IEEE Trans. Inf. Theory, vol. 52, no. 10, 2006.
  • [4] A. Eryilmaz, A. Ozdaglar, M. Medard and E. Ahmed, “On the delay and throughput gains of coding in unreliable networks,” IEEE Trans. Inf. Theory, vol. 54, no. 12, 2008.
  • [5] E. Tsimbalo, A. Tassi and R. J. Piechocki, “Reliability of multicast under random linear network coding,” IEEE Trans. Commun., vol. 66, no. 6, 2018.
  • [6] R. Su, Q. T. Sun and Z. Zhang, “Delay-complexity trade-off of random linear network coding in wireless broadcast,” IEEE Trans. Commun., vol. 68, no. 9, 2020.
  • [7] I. Chatzigeorgiou and A. Tassi, “Decoding delay performance of random linear network coding for broadcast,” IEEE Trans. Veh. Technol., vol. 66, no. 8, 2017.
  • [8] B. T. Swapna, A. Eryilmaz, and N. B. Shroff, “Throughput-delay analysis of random linear network coding for wireless broadcasting,” IEEE Trans. Inf. Theory, vol. 59, no. 10, 2013.
  • [9] D. E. Lucani, M. Medard and M. Stojanovic, “On coding for delay — network coding for time-division duplexing,” IEEE Trans. Inf. Theory, vol. 58, no. 4, 2012.
  • [10] L. Wang, Y. Yang and W. Zhao, “Network coding-based multipath routing for energy efficiency in wireless sensor networks,” EURASIP J. Wireless Commun. and Netw., vol. 2012, 2012.
  • [11] J. Huang, H. Gharavi, H. Yan and C. Xing, “Network coding in relay-based device-to-device communications,” IEEE Netw., vol. 31, no. 4, 2017.
  • [12] S. Chieochan and E. Hossain, “Wireless fountain coding with IEEE 802.11E block ACK for media streaming in wireline-cum-Wifi networks: a performance study,” IEEE Trans. Mob. Comput., vol. 10, no. 10, 2011.
  • [13] G. Giacaglia, X. Shi, M. Kim, D. E. Lucani and M. Médard, “Systematic network coding with the aid of a full-duplex relay,” IEEE ICC, 2013.
  • [14] H. Khamfroush, P. Pahlevani, D. E. Lucani, M. Hundebøll and F. H. P. Fitzek, “On the coded packet relay network in the presence of neighbors: benefits of speaking in a crowded room,” IEEE ICC, 2014.
  • [15] Y. Li, S. Blostein and W.-Y. Chan, “Systematic network coding for two-hop lossy transmissions,” EURASIP J. Adv. Signal Process, vol. 2015, 2015.
  • [16] B. Maboudi and P. Pahlevani, “Security overhead of random linear network coding in multicast relay networks,” Trans. Emerg. Telecommun. Technol., 2020.
  • [17] C. Chen, S. J. Baek and G. de Veciana, “Opportunistic scheduling of randomly coded multicast transmissions at half-duplex relay stations,” IEEE Trans. Inf. Theory, vol. 62, no. 10, 2016.
  • [18] K. Loa, et al., “IMT-advanced relay standards [WiMAX/LTE update],” IEEE Commun. Mag., vol. 48, no. 8, 2010.
  • [19] A. Damnjanovic, et al., “A survey on 3GPP heterogeneous networks,” IEEE Wireless Commun., vol. 18, no. 3, 2011.
  • [20] P. Sadeghi, R. Shams and D. Traskov, “An optimal adaptive network coding scheme for minimizing decoding delay in broadcast erasure channels,” EURASIP J. Wireless Commun. and Netw., vol. 2010, 2010.
  • [21] S. Sorour and S. Valaee, “Completion delay minimization for instantly decodable network codes,” IEEE/ACM Trans. Netw., vol. 23, no. 5, 2015.
  • [22] Z. Zhang, X. Chai, K. Long, A. V. Vasilakos and L. Hanzo, “Full duplex techniques for 5G networks: self-interference cancellation, protocol design, and relay selection,” IEEE Commun. Mag., vol. 53, no. 5, 2015.
  • [23] Z. Zhang, K. Long, A. V. Vasilakos and L. Hanzo, “Full-duplex wireless communications: challenges, solutions, and future research directions,” Proc. IEEE, vol. 104, no. 7, 2016.
  • [24] Y. Zhang, M. Xiao, S. Han, M. Skoglund and W. Meng, “On precoding and energy efficiency of full-duplex millimeter-wave relays,” IEEE Trans. Wirel. Commun., vol. 18, no. 3, 2019.
  • [25] S. Han, Y. Zhang, W. Meng and H. Chen, “Self-interference-cancellation-based SLNR precoding design for full-duplex relay-assisted system,” IEEE Trans. Veh. Technol., vol. 67, no. 9, 2018.
  • [26] H. Zhu, B. Smida and D. J. Love, “An efficient network-coded ARQ scheme for two-way wireless communication with full-duplex relaying,” IEEE Access, vol. 7, 2019.
  • [27] C. Chen, Z. Meng, S. J. Baek, X. Yu, C. Li and R. Yin, “Low-complexity coded transmission without CSI for full-duplex relay networks,” IEEE GLOBECOM, 2020.
  • [28] B. Shrader and A. Ephremides, “Queueing delay analysis for multicast with random linear coding,” IEEE Trans. Inf. Theory, vol. 58, no. 1, 2011.
  • [29] H. Khamfroush, D. E. Lucani and J. Barros, “Network coding for wireless cooperative networks: simple rules, near-optimal delay,” in Proc. IEEE ICC Workshops, 2014.
  • [30] D. E. Lucani, M. V. Pedersen, D. Ruano, C. W. Sørensen, F. H. P. Fitzek, J. Heide, O. Geil, V. Nguyen and M. Reisslein, “Fulcrum: flexible network coding for heterogeneous devices,” IEEE Access, vol. 6, 2018.
  • [31] V. Nguyen, E. Tasdemir, G. T. Nguyen, D. E. Lucani, F. H. P. Fitzek and M. Reisslein, “DSEP Fulcrum: dynamic sparsity and expansion packet for Fulcrum network coding,” IEEE Access, vol. 8, 2020.
  • [32] A. Tassi, I. Chatzigeorgiou and D. E. Lucani, “Analysis and optimization of sparse random linear network coding for reliable multicast services,” IEEE Trans. Commun., vol. 64, no. 1, 2016.
  • [33] J. Heide and D. Lucani, “Composite extension finite fields for low overhead network coding: telescopic codes,” IEEE ICC, 2015.
  • [34] N. J. A. Sloane, The On-Line Encyclopedia of Integer Sequences. Accessed: Jan. 22, 2021. [Online]. Available: https://oeis.org/A088617.
  • [35] S. M. Ross, Introduction to Probability Models, Academic Press, 11th ed., 2014.
  • [36] M. Nistor, D. E. Lucani, T. T. V. Vinhoza, R. A. Costa and J. Barros, “On the delay distribution of random linear network coding,” IEEE J. Sel. Areas Commun., vol. 29, no. 5, 2011.
  • [37] J. Heide, M. V. Pedersen, F. H. P. Fitzek and M. Medard, “On code parameters and coding vector representation for practical RLNC,” IEEE ICC, 2011.
  • [38] H. Tang, Q. T. Sun, Z. Li, X. Yang and K. Long, “Circular-shift linear network coding,” IEEE Trans. Inf. Theory, vol. 65, no. 1, 2019.
  • [39] T. K. Petersen, Eulerian Numbers, Birkhäuser Basel, 1st ed., 2015.