
Fundamental Limits of Communication-Assisted Sensing in ISAC Systems

Fuwang Dong1, Fan Liu1, Shihang Lu1, Yifeng Xiong2, Weijie Yuan1, Yuanhao Cui1
1. Southern University of Science and Technology, China
2. Beijing University of Posts and Telecommunications, China
Email: [email protected]
Abstract

In this paper, we introduce a novel communication-assisted sensing (CAS) framework that explores the potential coordination gains offered by the integrated sensing and communication technique. The CAS system endows users with beyond-line-of-sight sensing capabilities, supported by a dual-functional base station that enables simultaneous sensing and communication. To delve into the system’s fundamental limits, we characterize the information-theoretic framework of the CAS system in terms of rate-distortion theory. We reveal the achievable overall distortion between the target’s state and its reconstruction at the end-user, referred to as the sensing quality of service, within a special case where the distortion metric is separable for the sensing and communication processes. As a case study, we employ a typical application to demonstrate distortion minimization under the ISAC signaling strategy, showcasing the potential of CAS in enhancing sensing capabilities.

Index Terms:
Communication-assisted sensing, ISAC, rate-distortion theory, fundamental limits.

I Introduction

I-A Background and Motivations

The integrated sensing and communication (ISAC) system is widely acknowledged for its potential to enhance sensing and communications (S&C) performance by sharing hardware, spectrum, and signaling strategies [1]. Recently, ISAC has been officially approved by the International Telecommunication Union (ITU) as one of the six critical usage scenarios of 6G. Over the past few decades, substantial research efforts have been directed toward signal processing and waveform design (cf. [2, 3] and the references therein). However, the fundamental limits and the resulting performance tradeoff between S&C have remained longstanding challenges within the research community [4].

Several groundbreaking studies have recently delved into the fundamental limits of various ISAC system configurations. The authors of [5, 6] consider a scenario where the transmitter (Tx) communicates with a user through a memoryless state-dependent channel while simultaneously estimating the state from generalized feedback. The capacity-distortion-cost tradeoff of this channel is characterized to illustrate the optimal achievable rate for reliable communication while adhering to a preset state-estimation distortion. Concurrently, the studies in [7, 8] focus on a more general scenario where the Tx senses arbitrary targets rather than the specified communication state, namely, with separated S&C channels. The deterministic-random tradeoff between S&C under the ISAC signaling strategy is unveiled by characterizing the Cramér-Rao bound (CRB)-communication rate region [7]. Subsequently, the work in [8] extended this tradeoff to arbitrary well-defined sensing distortion metrics.

Unfortunately, there is limited research exploring the coordination gains offered by the mutual support of S&C functionalities in ISAC systems. Notably, for intelligent 6G applications requiring high-quality sensing services, a natural question arises as to how sensing performance can be enhanced by incorporating communication functionality, and what the fundamental limits of such a system are. To fill this research gap, we introduce a novel system setup in this paper, referred to as communication-assisted sensing (CAS). As illustrated in Fig. 1, the CAS system can endow users with beyond-line-of-sight (BLoS) sensing capability without the need for additional sensors. Specifically, the base station (BS), with favorable visibility, illuminates the targets and captures observations containing the parameters of interest through device-free wireless sensing, then conveys the relevant information to the users. Thus, the users can acquire the parameters of interest of the BLoS targets.

Figure 1: The applications of the proposed novel CAS system.

The main contributions of this paper are summarized below. First, we establish a novel CAS system framework, delving deeper into the coordination gain facilitated by the ISAC technique. Second, we analyze the information-theoretic aspects of the CAS framework, characterizing the fundamental limits of the achievable sensing distortion at the end-user. Finally, we illustrate a practical transmission scheme, designing an ISAC waveform to achieve the minimum sensing distortion.

Figure 2: The information-theoretic framework for the CAS systems.

II System Model

II-A CAS Process

The information-theoretic framework of the proposed CAS system is depicted in Fig. 2. The Tx employs an ISAC signal to simultaneously implement the S&C tasks. The state random variable $S$ of the target, the S&C channel input $X$, the sensing channel output $Z$, and the communication channel output $Y$ take values in the sets $\mathcal{S}$, $\mathcal{X}$, $\mathcal{Z}$, and $\mathcal{Y}$, respectively. Here, the state sequence $\{S_i\}_{i\geq 1}$ is independent and identically distributed (i.i.d.) with prior distribution $P_S$. The CAS process mainly contains the following two parts.

$\bullet$ Sensing Process: The sensing channel output $Z_i$ at the sensing receiver (Rx) at a given time $i$ is generated by the sensing channel law $Q_{Z|SX}(\cdot|x_i,s_i)$ given the $i$th channel input $X_i=x_i$ and the state realization $S_i=s_i$. We assume that $Z_i$ is independent of the past inputs, outputs, and states. Let $\tilde{\mathcal{S}}$ denote the estimate alphabet; the state estimator is a map from the acquired observations $\mathcal{Z}^n$ to $\tilde{\mathcal{S}}^n$. Thus, the expected average per-block estimation distortion in the sensing process is defined by

\Delta_{s}^{(n)} := \mathbb{E}\big[d(S^{n},\tilde{S}^{n})\big] = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[d(S_{i},\tilde{S}_{i})\big], \qquad (1)

where $d(\cdot,\cdot)$ is a distortion function bounded by $d_{\max}$.

$\bullet$ Communication Process: The Tx encodes the estimate $\tilde{S}_{i-1}$ from the previous epoch into the communication channel input $X_i$. The communication Rx receives the channel output $Y_i$ according to the channel law $Q_{Y|X}$. The decoder is a map from $\mathcal{Y}^n$ to $\hat{\mathcal{S}}^n$, where $\hat{\mathcal{S}}$ denotes the reconstruction alphabet. Thus, the expected average per-block distortion in the communication process is defined by

\Delta_{c}^{(n)} := \mathbb{E}\big[d(\tilde{S}^{n},\hat{S}^{n})\big] = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[d(\tilde{S}_{i},\hat{S}_{i})\big]. \qquad (2)

The overall sensing quality of service (QoS) may be measured by the distortion between the source and its reconstruction at the user-end, i.e.,

\Delta^{(n)} := \mathbb{E}\big[d(S^{n},\hat{S}^{n})\big] = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}\big[d(S_{i},\hat{S}_{i})\big]. \qquad (3)
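Concretely, for squared-error distortion the per-block averages (1)-(3) can be sketched numerically; the sequences and noise levels below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Illustrative sketch of the per-block distortions (1)-(3) under squared
# error d(a, b) = (a - b)^2; sequences and noise levels are assumptions.
rng = np.random.default_rng(0)
n = 10_000

s = rng.standard_normal(n)                       # state sequence S^n
s_tilde = s + 0.1 * rng.standard_normal(n)       # BS-side estimate
s_hat = s_tilde + 0.1 * rng.standard_normal(n)   # user-side reconstruction

delta_s = np.mean((s - s_tilde) ** 2)       # sensing distortion, (1)
delta_c = np.mean((s_tilde - s_hat) ** 2)   # communication distortion, (2)
delta = np.mean((s - s_hat) ** 2)           # overall distortion, (3)
```

Because the reconstruction error here is independent of the estimation error, `delta` lands close to `delta_s + delta_c`, previewing the separable-distortion case formalized in Theorem 1 below.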

In the CAS process, a $(2^{n\mathsf{R}}, n)$ coding scheme consists of

1) a state parameter estimator $h$: $\mathcal{X}^{n}\times\mathcal{Z}^{n}\to\tilde{\mathcal{S}}^{n}$;

2) a message set (also the estimate set) $\tilde{\mathcal{S}}=[1:2^{n\mathsf{R}}]$;

3) an encoder $\phi$: $\tilde{\mathcal{S}}^{n}\to\mathcal{X}^{n}$;

4) a decoder $\psi$: $\mathcal{Y}^{n}\to\hat{\mathcal{S}}^{n}$.

For a practical system, the S&C channel input $X$ may be restricted by the limited system resources. Define the cost function $b(x):\mathcal{X}\to\mathbb{R}^{+}$ to characterize the S&C channel cost. Then, a rate-distortion-cost tuple $(\mathsf{R},D,B)$ is said to be achievable if there exists a sequence of $(2^{n\mathsf{R}}, n)$ codes satisfying

\varlimsup_{n\to\infty}\Delta^{(n)}\leq D, \qquad \varlimsup_{n\to\infty}\mathbb{E}[b(X^{n})]\leq B. \qquad (4)

Remark: We emphasize that while our CAS setup shares some similarities with the conventional remote estimation [9, 10, 11] and remote source coding [12, 13, 14] problems, they are essentially distinct. In remote estimation, a sensor measures the target’s state and transmits its observations to a remote estimator via a wireless channel. In remote source coding, the encoder cannot access the original source but only its noisy observations. Note that in these schemes, the state observations (which may be acquired by additional sensors) are independent of the communication channel input. In contrast, the S&C channels in CAS are coupled due to the employment of the ISAC signaling strategy. For instance, $S \leftrightarrow Z \leftrightarrow X \leftrightarrow Y \leftrightarrow \hat{S}$ forms a Markov chain in remote source coding, whereas it is no longer a Markov chain in the CAS framework, owing to the dependency of the observation $Z$ on the dual-functional channel input $X$. As a result, the overall distortion may not be well-defined by directly applying remote source coding theory.

II-B Sensing Estimator and Constrained Capacity

In this subsection, we analyze the optimal estimator in the sensing process and the communication channel capacity constrained by the sensing distortion and resource limitation.

Lemma 1 [6]: Recalling that $S \leftrightarrow (X,Z) \leftrightarrow \tilde{S}$ forms a Markov chain, the sensing distortion $\Delta_{s}^{(n)}$ is minimized by the deterministic estimator

h^{\star}(x^{n},z^{n}) := \big(\tilde{s}^{\star}(x_{1},z_{1}),\tilde{s}^{\star}(x_{2},z_{2}),\ldots,\tilde{s}^{\star}(x_{n},z_{n})\big), \qquad (5)

where

\tilde{s}^{\star}(x,z) := \arg\min_{s^{\prime}\in\tilde{\mathcal{S}}}\sum_{s\in\mathcal{S}}Q_{S|XZ}(s|x,z)\,d(s,s^{\prime}), \qquad (6)

which is independent of the choice of encoder and decoder.
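For finite alphabets, the per-symbol rule (6) simply minimizes the posterior expected distortion. A minimal sketch is below; the posterior vector and distortion matrix are illustrative assumptions:

```python
import numpy as np

def bayes_estimate(posterior, d):
    """Per-symbol estimator (6) for finite alphabets.

    posterior: vector Q_{S|XZ}(.|x, z) over S; d: |S| x |S~| distortion
    matrix. Returns the index of the candidate estimate s' minimizing the
    posterior expected distortion."""
    expected = posterior @ d          # expected distortion of each candidate
    return int(np.argmin(expected))

# Illustrative binary state under Hamming distortion (reduces to a MAP rule).
d_hamming = np.array([[0.0, 1.0],
                      [1.0, 0.0]])
posterior = np.array([0.3, 0.7])      # Q_{S|XZ}(0|x,z)=0.3, Q_{S|XZ}(1|x,z)=0.7
```

Under Hamming distortion the rule picks the posterior mode; a non-symmetric distortion matrix would bias the decision away from costlier errors.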

The detailed proof of Lemma 1 can be found in [6]. By applying the estimator (6), we may define the estimate-cost function as $e(x)=\mathbb{E}[d(S,\tilde{s}^{\star}(X,Z))|X=x]$. Then, a given estimation distortion $D_s$ satisfies

\varlimsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[e(X_{i})]\leq D_{s}. \qquad (7)

Within the ISAC signaling strategy, the S&C channel input $X$ may be constrained by both the estimate-cost and resource-cost functions. Namely, the feasible set of probability distributions of $X$ is described by the intersection of the following two sets.

\mathcal{P}_{B}=\left\{P_{X}\,|\,\mathbb{E}[b(X)]\leq B\right\}, \quad \mathcal{P}_{D_{s}}=\left\{P_{X}\,|\,\mathbb{E}[e(X)]\leq D_{s}\right\}. \qquad (8)

Definition 1: In the communication process, the information-theoretic capacity constrained by the estimation and resource costs is defined by

C^{\text{IT}}(D_{s},B)=\max_{P_{X}\in\mathcal{P}_{D_{s}}\cap\mathcal{P}_{B}} I(X;Y), \qquad (9)

where $I(X;Y)$ denotes the mutual information (MI) between the communication channel input and output.
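For a discrete memoryless channel, the unconstrained counterpart of (9) can be computed with the classical Blahut-Arimoto iteration. The sketch below omits the estimate-cost and resource-cost sets of (8), which would enter the update as Lagrange-multiplier penalties on $e(x)$ and $b(x)$; the BSC example is illustrative:

```python
import numpy as np

def blahut_arimoto_capacity(Q, iters=200):
    """Q[x, y]: channel law Q_{Y|X}. Returns (capacity in bits, optimal P_X).

    Unconstrained Blahut-Arimoto; the cost constraints of (9) are omitted."""
    nx = Q.shape[0]
    p = np.full(nx, 1.0 / nx)          # start from the uniform input
    for _ in range(iters):
        q = p @ Q                      # induced output distribution P_Y
        ratio = np.divide(Q, q, out=np.ones_like(Q), where=Q > 0)
        kl = np.sum(Q * np.log2(ratio), axis=1)   # D(Q(.|x) || P_Y)
        w = p * np.power(2.0, kl)
        p = w / w.sum()                # multiplicative update of P_X
    q = p @ Q
    ratio = np.divide(Q, q, out=np.ones_like(Q), where=Q > 0)
    kl = np.sum(Q * np.log2(ratio), axis=1)
    return float(p @ kl), p

# Binary symmetric channel with crossover 0.1: capacity = 1 - H2(0.1) bits.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
cap, p_opt = blahut_arimoto_capacity(bsc)
```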

From (9), we can observe the coupling between the S&C processes, namely, the achieved sensing distortion concurrently impacts the communication channel capacity. Moreover, the achieved communication distortion $D_c$ is determined by the channel capacity through rate-distortion theory under a lossy transmission strategy.

Definition 2: The information-theoretic rate-distortion function is defined by

R^{\text{IT}}(D_{c})=\min_{P(\hat{s}|\tilde{s}):\,\mathbb{E}[d(\tilde{S},\hat{S})]\leq D_{c}} I(\tilde{S};\hat{S}). \qquad (10)

According to the optimal sensing estimator in Lemma 1, we have $I(\tilde{S};\hat{S})=I(X,Z;\hat{S})=I(X;\hat{S})$, due to the fact that $\hat{S}$ is conditionally independent of $Z$ given $X$.
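A point on the rate-distortion curve (10) can likewise be computed by the Blahut-Arimoto iteration, parameterized by a slope $\beta\geq 0$ that trades rate against distortion. A minimal sketch for finite alphabets, with an illustrative uniform binary source under Hamming distortion:

```python
import numpy as np

def blahut_arimoto_rd(p_s, d, beta, iters=200):
    """One point on the rate-distortion curve (10).

    p_s: source pmf; d[s, s_hat]: distortion matrix; beta >= 0 is the slope
    parameter. Returns (rate in bits, distortion)."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])   # reproduction marginal
    A = np.exp(-beta * d)
    for _ in range(iters):
        cond = q * A                            # unnormalized P(s_hat | s)
        cond /= cond.sum(axis=1, keepdims=True)
        q = p_s @ cond                          # updated reproduction marginal
    joint = p_s[:, None] * cond
    D = float(np.sum(joint * d))                # achieved distortion
    R = float(np.sum(joint * np.log2(cond / q)))  # mutual information I(S~;S^)
    return R, D

# Uniform binary source, Hamming distortion, beta = ln 9  ->  D = 0.1.
R_pt, D_pt = blahut_arimoto_rd(np.array([0.5, 0.5]),
                               np.array([[0.0, 1.0], [1.0, 0.0]]),
                               beta=np.log(9.0))
```

For this symmetric example the iteration recovers the closed form $R(D)=1-H_2(D)$ bits at $D=0.1$.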

III Main Results

Theorem 1: In the CAS framework, for a separable distortion metric $d(\cdot,\cdot)$ satisfying $\mathbb{E}[d(S,\hat{S})]=\mathbb{E}[d(S,\tilde{S})]+\mathbb{E}[d(\tilde{S},\hat{S})]$, it is possible to design a coding scheme such that the overall sensing QoS is achieved by the sum of the estimation and communication distortions, $D=D_s+D_c$, if and only if

R^{\text{IT}}(D_{c})\leq C^{\text{IT}}(D_{s},B). \qquad (11)
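As a hedged scalar-Gaussian illustration of condition (11), with toy values that are assumptions rather than quantities derived in the paper: for a complex Gaussian estimate with variance $\tilde{\sigma}^2$, the rate-distortion function is $R(D_c)=\log_2(\tilde{\sigma}^2/D_c)$, so feasibility pins the smallest communication distortion at $D_c=\tilde{\sigma}^2\,2^{-C}$:

```python
# Toy scalar-Gaussian illustration of Theorem 1 (all values are assumptions).
# R(Dc) = log2(var_tilde / Dc) <= C  ==>  Dc_min = var_tilde * 2**(-C),
# and the overall sensing QoS is D = Ds + Dc.
var_tilde = 1.0   # variance of the BS-side estimate
D_s = 0.05        # sensing distortion achieved at the BS
C = 3.0           # constrained capacity C^IT(Ds, B), bits per channel use

D_c = var_tilde * 2.0 ** (-C)   # tightest Dc allowed by (11)
D = D_s + D_c                   # overall sensing QoS, D = Ds + Dc
```

Raising $C$ (e.g., by relaxing the sensing or resource constraints in (9)) shrinks only the communication part $D_c$ of the overall distortion.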

III-A Converse

We start with a converse showing that any achievable coding scheme must satisfy (11). Consider any $(2^{n\mathsf{R}}, n)$ coding scheme defined by the encoding and decoding functions $\phi$ and $\psi$. Let $\tilde{S}^n=h^{\star}(X^n,Z^n)$ be the estimate sequence given in (5), and let $\hat{S}^n=\psi(\phi(\tilde{S}^n))$ be the corresponding reconstruction sequence.

Recalling the proof of the converse for lossy source coding, we have

\mathsf{R} \geq \frac{1}{n}\sum_{i=1}^{n}I(\tilde{S}_{i};\hat{S}_{i}) \overset{(a)}{\geq} \frac{1}{n}\sum_{i=1}^{n}R^{\text{IT}}\big(\mathbb{E}[d(\tilde{S}_{i},\hat{S}_{i})]\big) \overset{(b)}{\geq} R^{\text{IT}}\Big(\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[d(\tilde{S}_{i},\hat{S}_{i})]\Big) \overset{(c)}{\geq} R^{\text{IT}}(D_{c}), \qquad (12)

where $(a)$ follows from Definition 2, and $(b)$ and $(c)$ are due to the convexity and non-increasing property of the rate-distortion function.

On the other hand, recalling the proof of the converse for channel coding, we have

\mathsf{R} \leq \frac{1}{n}\sum_{i=1}^{n}I(X_{i};Y_{i}) \overset{(d)}{\leq} \frac{1}{n}\sum_{i=1}^{n}C^{\text{IT}}\Big(\sum_{x\in\mathcal{X}}P_{X_{i}}(x)e(x),\,\sum_{x\in\mathcal{X}}P_{X_{i}}(x)b(x)\Big) \overset{(e)}{\leq} C^{\text{IT}}\Big(\frac{1}{n}\sum_{i=1}^{n}\sum_{x\in\mathcal{X}}P_{X_{i}}(x)e(x),\,\frac{1}{n}\sum_{i=1}^{n}\sum_{x\in\mathcal{X}}P_{X_{i}}(x)b(x)\Big) \overset{(f)}{\leq} C^{\text{IT}}(D_{s},B), \qquad (13)

where $(d)$ follows from Definition 1, and $(e)$ and $(f)$ are due to the concavity and non-decreasing property of the capacity constrained by the estimation and resource costs [6]. Additionally, the data processing inequality $I(\tilde{S};\hat{S})=I(X;\hat{S})\leq I(X;Y)$ holds thanks to the Markov chain $X \leftrightarrow Y \leftrightarrow \hat{S}$. Combining (12) and (13) completes the proof of the converse.

III-B Achievability

Let $S^n$ be drawn i.i.d. $\sim P_S$. We will show that, if (11) holds, there exists a coding scheme of rate $\mathsf{R}$ for sufficiently large $n$ such that the distortion $\Delta^{(n)}$ achieves $D$. The core idea follows the well-known random coding argument and the source-channel separation theorem with distortion.

1) Codebook Generation: For the rate-distortion code, randomly generate a codebook $\mathcal{C}_s$ consisting of $2^{n\mathsf{R}}$ sequences $\hat{S}^n$ drawn i.i.d. $\sim P_{\hat{S}}$, where $P_{\hat{S}}=\sum_{\tilde{S}}P_{\tilde{S}}P_{\hat{S}|\tilde{S}}$ and $P_{\hat{S}|\tilde{S}}$ achieves the minimum in (10). For the channel code, randomly generate a codebook $\mathcal{C}_c$ consisting of $2^{n\mathsf{R}}$ sequences $X^n$ drawn i.i.d. $\sim P_X$, where $P_X$ is chosen to satisfy the estimation- and resource-cost constrained capacity in (9). Index the codewords $\hat{S}^n$ and $X^n$ by $w\in\{1,2,\ldots,2^{n\mathsf{R}}\}$.

2) Encoding: Encode $\tilde{S}^n$ into the index $w$ such that

(\tilde{S}^{n},\hat{S}^{n}(w))\in\mathcal{T}^{(n)}_{d,\epsilon_{s}}(P_{\tilde{S},\hat{S}}), \qquad (14)

where $\mathcal{T}^{(n)}_{d,\epsilon_{s}}(P_{\tilde{S},\hat{S}})$ represents the distortion typical set [15] with joint probability distribution $P_{\tilde{S},\hat{S}}=P_{\tilde{S}}P_{\hat{S}|\tilde{S}}$. To send the message $w$, the encoder transmits $x^n(w)$.

3) Decoding: The decoder observes the communication channel output $Y^n=y^n$ and looks for an index $\hat{w}$ such that

(x^{n}(\hat{w}),y^{n})\in\mathcal{T}^{(n)}_{\epsilon_{c}}(P_{X,Y}), \qquad (15)

where $\mathcal{T}^{(n)}_{\epsilon_{c}}(P_{X,Y})$ represents the typical set with joint probability distribution $P_{X,Y}=P_{X}P_{Y|X}$. If such a $\hat{w}$ exists, the decoder declares $\hat{S}^n=\hat{s}^n(\hat{w})$; otherwise, it declares an error.

4) Estimation: The encoder observes the sensing channel output $Z^n=z^n$ and computes the estimate sequence, with knowledge of the channel input $x^n$, via the estimator $\tilde{s}^n=h^{\star}(x^n,z^n)$ given in (5).

5) Distortion Analysis: We start by analyzing the expected communication distortion (averaged over the random codebooks, states, and channel noise). In lossy source coding, for a fixed codebook $\mathcal{C}_s$ and choice of $\epsilon_s>0$, the sequences $\tilde{s}^n\in\tilde{\mathcal{S}}^n$ can be divided into two categories:

$\bullet$ $(\tilde{s}^{n},\hat{s}^{n}(w))\in\mathcal{T}^{(n)}_{d,\epsilon_{s}}$: we have $d(\tilde{s}^{n},\hat{s}^{n}(w))<D_{c}+\epsilon_{s}$;

$\bullet$ $(\tilde{s}^{n},\hat{s}^{n}(w))\notin\mathcal{T}^{(n)}_{d,\epsilon_{s}}$: we denote by $P_{e_{s}}$ the total probability of these sequences. They contribute at most $P_{e_{s}}d_{\max}$ to the expected distortion, since the distortion for any individual sequence is bounded by $d_{\max}$.

According to the achievability part of lossy source coding [15, Theorem 10.2.1], $P_{e_{s}}$ tends to zero for sufficiently large $n$ whenever $\mathsf{R}\geq R^{\text{IT}}(D_{c})$.

In channel coding, the decoder declares an error when either of the following events occurs:

$\bullet$ $(x^{n}(w),y^{n})\notin\mathcal{T}^{(n)}_{\epsilon_{c}}$;

$\bullet$ $(x^{n}(w^{\prime}),y^{n})\in\mathcal{T}^{(n)}_{\epsilon_{c}}$ for some $w^{\prime}\neq w$. We denote by $P_{e_{c}}$ the probability of a decoding error, which contributes at most $P_{e_{c}}d_{\max}$ to the expected distortion. Similarly, $P_{e_{c}}\to 0$ as $n\to\infty$ whenever $\mathsf{R}\leq C^{\text{IT}}(D_{s},B)$, according to the channel coding theorem [15].

On the other hand, the expected estimation distortion can be upper bounded by

\Delta_{s}^{(n)} = \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[d(S_{i},\tilde{S}_{i})|\hat{W}\neq w]P(\hat{W}\neq w) + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[d(S_{i},\tilde{S}_{i})|\hat{W}=w]P(\hat{W}=w) \leq P_{e_{c}}d_{\max} + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[d(S_{i},\tilde{S}_{i})|\hat{W}=w](1-P_{e_{c}}). \qquad (16)

Noting that $(s^{n},x^{n}(w),\tilde{s}^{n})\in\mathcal{T}^{(n)}_{\epsilon_{e}}(P_{S,X,\tilde{S}})$, where $P_{S,X,\tilde{S}}$ denotes the joint marginal distribution of $P_{S,X,Z,\tilde{S}}=P_{S}P_{X}Q_{Z|SX}\mathbb{I}\{\tilde{s}=\tilde{s}^{\star}(x,z)\}$ with $\mathbb{I}(\cdot)$ being the indicator function, we have

\varlimsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}[d(S_{i},\tilde{S}_{i})|\hat{W}=w]\leq(1+\epsilon_{e})\mathbb{E}[d(S,\tilde{S})], \qquad (17)

according to the typical average lemma [6]. In summary, the overall sensing QoS can be bounded as

\Delta^{(n)} \overset{(g)}{=} \Delta_{s}^{(n)}+\Delta_{c}^{(n)} \overset{(h)}{\leq} D_{c}+(P_{e_{s}}+2P_{e_{c}})d_{\max}+(1+\epsilon_{e})(1-P_{e_{c}})D_{s}, \qquad (18)

where $(g)$ follows from the definition of the separable distortion metric in Theorem 1, and in $(h)$ we omit the terms containing the product of $P_{e_{s}}$ and $P_{e_{c}}$. Consequently, letting $n\to\infty$ and $P_{e_{s}},P_{e_{c}},\epsilon_{c},\epsilon_{e},\epsilon_{s}\to 0$, the expected sensing distortion (averaged over the random codebooks, states, and channel noise) tends to $D=D_{c}+D_{s}$ whenever $R^{\text{IT}}(D_{c})\leq\mathsf{R}\leq C^{\text{IT}}(D_{s},B)$.

IV Example

In this section, we take an ISAC waveform design scheme as an example, where the overall sensing QoS $D$ is minimized by appropriately choosing the S&C channel input. The associated optimization problem can be stated as follows. (Here, we use the symbol $I(D_s,B)$ instead of $C(D_s,B)$ to emphasize that the optimal distribution does not necessarily achieve the channel capacity.)

\min_{P_{X}} \; D=D_{c}+D_{s} \quad \text{subject to} \quad R(D_{c})\leq I(D_{s},B). \qquad (19)

The widely employed Gaussian linear model (GLM) for the S&C received signals is given by

\mathbf{Z}=\mathbf{H}_{s}\mathbf{X}+\mathbf{N}_{s}, \quad \mathbf{Y}=\mathbf{H}_{c}\mathbf{X}+\mathbf{N}_{c}, \qquad (20)

where $\mathbf{Z}\in\mathbb{C}^{M_{s}\times T}$ and $\mathbf{Y}\in\mathbb{C}^{M_{c}\times T}$ represent the received signals at the S&C Rxs, and $\mathbf{X}\in\mathbb{C}^{N\times T}$ is the transmitted ISAC signal with $T$ being the number of symbols. $N$, $M_{s}$, and $M_{c}$ are the numbers of antennas at the Tx, sensing Rx, and user Rx, respectively. $\mathbf{H}_{s}\in\mathbb{C}^{M_{s}\times N}$ and $\mathbf{H}_{c}\in\mathbb{C}^{M_{c}\times N}$ denote the S&C channels. $\mathbf{N}_{s}$ and $\mathbf{N}_{c}$ are the corresponding channel noises, whose entries follow the complex Gaussian distributions $\mathcal{CN}(0,\sigma_{s}^{2})$ and $\mathcal{CN}(0,\sigma_{c}^{2})$, respectively.

Let us consider the scenario of target response matrix (TRM) estimation, where the user wants to acquire the sensing channel information $\mathbf{s}=\mathrm{vec}(\mathbf{H}_{s})$ through the CAS process. Vectorizing the Hermitian of the sensing received signal yields

\mathbf{z}_{s}=\tilde{\mathbf{X}}\mathbf{s}+\mathbf{n}_{s}, \qquad (21)

where $\mathbf{z}_{s}=\mathrm{vec}(\mathbf{Z}_{s}^{H})$, $\tilde{\mathbf{X}}=\mathbf{I}_{M_{s}}\otimes\mathbf{X}^{H}$, and $\mathbf{n}_{s}=\mathrm{vec}(\mathbf{N}_{s}^{H})$. For convenience of discussion, we assume that the parameter vector $\mathbf{s}$ follows the Gaussian distribution $\mathcal{CN}(0,\mathbf{I}_{M_{s}}\otimes\bm{\Sigma}_{s})$, where $\bm{\Sigma}_{s}$ denotes the covariance matrix of each column of $\mathbf{H}_{s}$ [16].

IV-A Sensing Process

By applying the estimator (6), the mean squared error (MSE) between $\mathbf{s}$ and its estimate $\tilde{\mathbf{s}}$ may be obtained as [17]

D_{s}=\mathbb{E}\big[\|\mathbf{s}-\tilde{\mathbf{s}}\|^{2}\big]=M_{s}\mathrm{Tr}\Big[\Big(\frac{T}{\sigma^{2}_{s}}\mathbf{X}\mathbf{X}^{H}+\bm{\Sigma}_{s}^{-1}\Big)^{-1}\Big]. \qquad (22)

Fortunately, when we adopt MSE, a widely used metric in parameter estimation, as the distortion metric, the condition of separable distortion in the CAS process holds. Namely, the overall distortion DD can be equivalently decomposed into the sum of the estimation and communication distortions,

D=\mathbb{E}\big[\|\mathbf{s}-\hat{\mathbf{s}}\|_{2}^{2}\big]=\mathbb{E}\big[\|\mathbf{s}-\tilde{\mathbf{s}}+\tilde{\mathbf{s}}-\hat{\mathbf{s}}\|_{2}^{2}\big] \overset{(i)}{=} \mathbb{E}\big[\|\mathbf{s}-\tilde{\mathbf{s}}\|_{2}^{2}\big]+\mathbb{E}\big[\|\tilde{\mathbf{s}}-\hat{\mathbf{s}}\|_{2}^{2}\big] \overset{\Delta}{=} D_{s}+D_{c}, \qquad (23)

where $(i)$ holds from the properties of the conditional expectation:

\mathbb{E}\big[(\mathbf{s}-\tilde{\mathbf{s}})^{T}(\tilde{\mathbf{s}}-\hat{\mathbf{s}})\big]=\mathbb{E}\big[(\mathbf{s}-\mathbb{E}[\mathbf{s}|\mathbf{z},\mathbf{x}])^{T}f(\mathbf{z},\mathbf{x})\big]=0.

Here, $\tilde{\mathbf{s}}=\mathbb{E}[\mathbf{s}|\mathbf{z},\mathbf{x}]$ is the minimum mean squared error (MMSE) estimator, and $f(\mathbf{z},\mathbf{x})$ represents an arbitrary function of $\mathbf{z}$ and $\mathbf{x}$. Furthermore, the MMSE estimate $\tilde{\mathbf{s}}$ can be expressed as [17]

\tilde{\mathbf{s}}=\Big(\mathbf{I}_{M_{s}}\otimes\big(\bm{\Sigma}_{s}\mathbf{X}\mathbf{R}_{z}^{-1}\big)\Big)\mathbf{z}, \qquad (24)

where $\mathbf{I}_{M_{s}}\otimes\mathbf{R}_{z}$ is the covariance matrix of the observation $\mathbf{z}$, with $\mathbf{R}_{z}=\mathbf{X}^{H}\bm{\Sigma}_{s}\mathbf{X}+\sigma^{2}_{s}\mathbf{I}_{T}$.
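As a numerical sanity check on random illustrative instances (sizes, noise power, and the prior are assumptions; the symbol-count normalization $T/\sigma_s^2$ of (22) is absorbed into $1/\sigma_s^2$ here), the per-column error covariance of the MMSE rule (24) coincides, via the matrix inversion lemma, with the inverse form appearing inside the trace of (22):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, sigma2 = 3, 8, 0.5   # illustrative sizes and noise power (assumptions)

# Random ISAC waveform and a Hermitian positive-definite prior covariance.
X = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Sigma_s = A @ A.conj().T + np.eye(N)

# Covariance of one column of the observation z, as below (24):
Rz = X.conj().T @ Sigma_s @ X + sigma2 * np.eye(T)

# Error covariance of the MMSE rule (24) vs. the inverse form inside (22):
E_mmse = Sigma_s - Sigma_s @ X @ np.linalg.inv(Rz) @ X.conj().T @ Sigma_s
E_info = np.linalg.inv(X @ X.conj().T / sigma2 + np.linalg.inv(Sigma_s))
# Per (22), Ds is Ms times the (real) trace of this common error covariance.
```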

IV-B Communication Process

The estimate $\tilde{\mathbf{s}}$ also follows a complex Gaussian distribution $\mathcal{CN}(\mathbf{0},\mathbf{R}_{\tilde{\mathbf{s}}})$, with the covariance matrix

\mathbf{R}_{\tilde{\mathbf{s}}}=\mathbf{I}_{M_{s}}\otimes\bm{\Sigma}_{s}\mathbf{X}\mathbf{R}_{z}^{-1}\mathbf{X}^{H}\bm{\Sigma}_{s}^{H}. \qquad (25)

Based on the rate-distortion function for a Gaussian source, we have

R(D_{c})=\sum_{i=1}^{NM_{s}}\log\frac{\lambda_{i}(\mathbf{R}_{\tilde{\mathbf{s}}})}{D_{c_{i}}}, \qquad (26)

which takes the reverse water-filling form

D_{c}=\sum_{i=1}^{NM_{s}}D_{c_{i}}=\sum_{i=1}^{NM_{s}}\lambda_{i}(\mathbf{R}_{\tilde{\mathbf{s}}})-\big(\lambda_{i}(\mathbf{R}_{\tilde{\mathbf{s}}})-\xi\big)^{+}, \qquad (27)

where $\lambda_{i}(\mathbf{R}_{\tilde{\mathbf{s}}})$ represents the $i$th eigenvalue of $\mathbf{R}_{\tilde{\mathbf{s}}}$, and $\xi$ is the reverse water-filling factor.
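The factor $\xi$ in (27) can be found by bisection, since $\sum_i \min(\lambda_i,\xi)$ is nondecreasing in $\xi$. A sketch under the assumption that rate is measured in bits (the base of the log in (26) is left unspecified in the text), with illustrative eigenvalues:

```python
import numpy as np

def reverse_waterfill(lams, Dc, tol=1e-12):
    """Bisect the water level xi in (27) so that sum_i min(lam_i, xi) = Dc.

    lams: eigenvalues of R_s~; Dc: total communication distortion budget.
    Returns (rate per (26) in bits, per-mode distortions D_ci)."""
    lams = np.asarray(lams, dtype=float)
    lo, hi = 0.0, float(lams.max())
    while hi - lo > tol:
        xi = 0.5 * (lo + hi)
        if np.minimum(lams, xi).sum() > Dc:
            hi = xi        # too much distortion: lower the water level
        else:
            lo = xi
    xi = 0.5 * (lo + hi)
    D_i = np.minimum(lams, xi)              # D_ci = lam_i - (lam_i - xi)^+
    R = float(np.sum(np.log2(lams / D_i)))  # rate (26), in bits
    return R, D_i

# Illustrative example: eigenvalues {4, 1} with budget Dc = 2 give xi = 1.
R_ex, D_modes = reverse_waterfill([4.0, 1.0], Dc=2.0)
```

Modes with $\lambda_i \leq \xi$ are not described at all ($D_{c_i}=\lambda_i$, zero rate), while stronger modes are all quantized down to the common level $\xi$.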

Our aim is to optimize the probability distribution $P_X$. However, there may be no explicit expression for the communication channel capacity under an arbitrary input distribution. To circumvent numerical computation, we temporarily restrict the waveform to the Gaussian distribution $\mathcal{CN}(\mathbf{0},\mathbf{R}_{x})$, which is widely adopted in communication systems. It should be noted that although the Gaussian distribution is optimal for the communication process, it does not necessarily achieve the optimal estimation performance, leading to an uncertain overall sensing QoS in the CAS system. Evidently, this also exhibits the deterministic-random tradeoff discussed in [7, 8].

The statistical covariance matrix $\mathbf{R}_{x}$ can be approximated by the sample covariance $\frac{1}{T}\mathbf{X}\mathbf{X}^{H}$. Thus, the MI between the received and transmitted signals, conditioned on the communication channel $\mathbf{H}_{c}$, can be approximately characterized by

I(\mathbf{Y};\mathbf{X}|\mathbf{H}_{c})=\log\Big|\frac{T}{\sigma^{2}_{c}}\mathbf{H}_{c}\mathbf{X}\mathbf{X}^{H}\mathbf{H}_{c}^{H}+\mathbf{I}_{N}\Big|. \qquad (28)

The original problem (19) can thus be reformulated as

\min_{\mathbf{X}} \; D_{s}+D_{c} \qquad (29)
\text{s.t.} \quad R(D_{c})\leq\log\Big|\frac{T}{\sigma^{2}_{c}}\mathbf{H}_{c}\mathbf{X}\mathbf{X}^{H}\mathbf{H}_{c}^{H}+\mathbf{I}_{N}\Big|,
\qquad D_{s}=M_{s}\mathrm{Tr}\Big[\Big(\frac{T}{\sigma^{2}_{s}}\mathbf{X}\mathbf{X}^{H}+\bm{\Sigma}_{s}^{-1}\Big)^{-1}\Big],
\qquad \text{(26)}, \quad \mathrm{Tr}(\mathbf{X}\mathbf{X}^{H})\leq TP_{T},

where $P_T$ is the power budget. We have derived explicit expressions for the MI and the rate-distortion function in the TRM estimation scenario. However, it remains challenging to solve the non-convex problem (29), since the imposed reverse water-filling constraint introduces an unknown nuisance parameter (the factor $\xi$), which in general can only be determined through a numerical search algorithm. The solution of (29) is omitted in this paper due to limited space; we refer the interested reader to [18] for more details.

IV-C Numerical Results

We conduct a comparative analysis of the performance of two signaling strategies: the separated S&C waveforms (SW) scheme and the ISAC scheme. In the SW scheme, optimal S&C waveforms are independently selected for the S&C processes. The SW scheme resembles remote estimation owing to the decoupling of the sensing observations from the communication channel input. Unlike remote estimation, which typically employs additional sensors for observation collection, the SW scheme detects the target through wireless sensing. Consequently, S&C resource competition exists, captured in this paper by $\mathrm{Tr}(\mathbf{X}_{s}\mathbf{X}_{s}^{H})+\mathrm{Tr}(\mathbf{X}_{c}\mathbf{X}_{c}^{H})\leq TP_{T}$, due to the shared use of hardware platforms.

We can observe from Fig. 3 that the power competition between the S&C processes dominates the overall sensing QoS in the low-SNR regime. As anticipated, the ISAC scheme outperforms the SW scheme there, benefiting from resource multiplexing gains. Conversely, in the high-SNR regime, where the quality of both S&C channels is favorable, the SW scheme exhibits superior performance. As the resource multiplexing gains approach saturation, the optimal waveform structure takes precedence in influencing the sensing QoS; specifically, striking a balance between the S&C channels over the eigenspace of the waveform becomes crucial.

Figure 3: The comparison of the SW and ISAC schemes.

V Conclusion

In this paper, we present a novel communication-assisted sensing (CAS) system framework that endows users with beyond-line-of-sight sensing capabilities. By evaluating the CAS system from an information-theoretic perspective, we characterize the achievable overall distortion between the target’s state and its reconstruction at the end-user, in a special case where the adopted distortion is separable for the sensing and communication processes. We also provide an example of ISAC waveform design, showcasing its superiority over separated sensing and communication waveforms thanks to the power multiplexing gain, especially in the low-SNR regime.

References

  • [1] Y. Cui, F. Liu, X. Jing, and J. Mu, “Integrating sensing and communications for ubiquitous IoT: Applications, trends, and challenges,” IEEE Network, vol. 35, no. 5, pp. 158–167, 2021.
  • [2] J. A. Zhang, F. Liu, C. Masouros, R. W. Heath, Z. Feng, L. Zheng, and A. Petropulu, “An overview of signal processing techniques for joint communication and radar sensing,” IEEE Journal of Selected Topics in Signal Processing, vol. 15, no. 6, pp. 1295–1315, 2021.
  • [3] F. Liu, Y. Cui, C. Masouros, J. Xu, T. X. Han, Y. C. Eldar, and S. Buzzi, “Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond,” IEEE Journal on Selected Areas in Communications, vol. 40, no. 6, pp. 1728–1767, 2022.
  • [4] A. Liu, Z. Huang, M. Li, Y. Wan, W. Li, T. X. Han, C. Liu, R. Du, D. K. P. Tan, J. Lu, Y. Shen, F. Colone, and K. Chetty, “A survey on fundamental limits of integrated sensing and communication,” IEEE Communications Surveys & Tutorials, vol. 24, no. 2, pp. 994–1034, 2022.
  • [5] M. Kobayashi, G. Caire, and G. Kramer, “Joint state sensing and communication: Optimal tradeoff for a memoryless case,” in 2018 IEEE International Symposium on Information Theory (ISIT), 2018, pp. 111–115.
  • [6] M. Ahmadipour, M. Kobayashi, M. Wigger, and G. Caire, “An information-theoretic approach to joint sensing and communication,” IEEE Transactions on Information Theory, pp. 1–1, 2022.
  • [7] Y. Xiong, F. Liu, Y. Cui, W. Yuan, T. X. Han, and G. Caire, “On the fundamental tradeoff of integrated sensing and communications under Gaussian channels,” IEEE Transactions on Information Theory, vol. 69, no. 9, pp. 5723–5751, 2023.
  • [8] F. Liu, Y. Xiong, K. Wan, T. X. Han, and G. Caire, “Deterministic-random tradeoff of integrated sensing and communications in Gaussian channels: A rate-distortion perspective,” in 2023 IEEE International Symposium on Information Theory (ISIT), 2023, pp. 2326–2331.
  • [9] V. Kostina and B. Hassibi, “Rate-cost tradeoffs in scalar lqg control and tracking with side information,” in 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton).   IEEE, 2018, pp. 421–428.
  • [10] X. Gao, E. Akyol, and T. Başar, “Optimal communication scheduling and remote estimation over an additive noise channel,” Automatica, vol. 88, pp. 57–69, 2018.
  • [11] X. Ren, J. Wu, K. H. Johansson, G. Shi, and L. Shi, “Infinite horizon optimal transmission power control for remote state estimation over fading channels,” IEEE Transactions on Automatic Control, vol. 63, no. 1, pp. 85–100, 2018.
  • [12] H. Viswanathan and T. Berger, “The quadratic Gaussian CEO problem,” IEEE Transactions on Information Theory, vol. 43, no. 5, pp. 1549–1559, 1997.
  • [13] Y. Oohama, “Indirect and direct Gaussian distributed source coding problems,” IEEE Transactions on Information Theory, vol. 60, no. 12, pp. 7506–7539, 2014.
  • [14] K. Eswaran and M. Gastpar, “Remote source coding under Gaussian noise: Dueling roles of power and entropy power,” IEEE Transactions on Information Theory, vol. 65, no. 7, pp. 4486–4498, 2019.
  • [15] T. M. Cover and J. A. Thomas, Elements of Information Theory.   Wiley-Interscience, 2006.
  • [16] B. Tang and J. Li, “Spectrally constrained MIMO radar waveform design based on mutual information,” IEEE Transactions on Signal Processing, vol. 67, no. 3, pp. 821–834, 2019.
  • [17] S. M. Kay, Fundamentals of statistical signal processing: Estimation theory.   Englewood Cliffs, NJ: Prentice-Hall, 1993.
  • [18] F. Dong, F. Liu, S. Lu, Y. Xiong, Q. Zhang, Z. Feng, and F. Gao, “Communication-assisted sensing in 6G networks,” arXiv preprint arXiv:2311.07157, 2023.