
On the Capacity of Generalized Quadrature Spatial Modulation

Kein Yukiyoshi and Naoki Ishikawa K. Yukiyoshi and N. Ishikawa are with the Faculty of Engineering, Yokohama National University, 240-8501 Kanagawa, Japan (e-mail: [email protected]). This work was partially supported by Support Center for Advanced Telecommunications Technology Research, Japan.
Abstract

In this letter, the average mutual information (AMI) of generalized quadrature spatial modulation (GQSM) is first derived for continuous-input continuous-output channels. Our mathematical analysis shows that the calculation error induced by Monte Carlo integration increases exponentially with the signal-to-noise ratio. This issue is resolved by deriving a closed-form expression. The derived AMI is compared with those of related SM schemes and evaluated for different antenna activation patterns. Our results show that an equiprobable antenna selection method slightly decreases the AMI of the data symbols, while significantly improving the total AMI.

Index Terms:
Capacity, mutual information, spatial modulation (SM), quadrature spatial modulation (QSM), generalized quadrature spatial modulation (GQSM).
Accepted for publication in IEEE Wireless Communications Letters. This is the author's version, which has not been fully edited, and content may change prior to final publication. Citation information: DOI 10.1109/LWC.2023.3310586

I Introduction

Spatial modulation (SM) is a technique that modulates information by assigning it to an index of active transmit antennas, in addition to data symbols [1]. SM has been extensively studied as a potential solution for addressing the fundamental trade-off between performance and complexity in wireless communications [2].

The transmission rate of SM is given by $R_{\mathrm{SM}}=\log_{2}L+\lfloor\log_{2}N_{t}\rfloor$, where $L$ is the constellation size and $N_{t}$ is the number of transmit antennas. To improve the spectral efficiency of SM, a number of extensions have been proposed. Among representative schemes, generalized spatial modulation (GSM) [3] extends the number of data symbols from $1$ to a general integer $K$, resulting in an improved transmission rate $R_{\mathrm{GSM}}=K\log_{2}L+\lfloor\log_{2}\binom{N_{t}}{K}\rfloor$. In contrast, quadrature spatial modulation (QSM) [4] defines different activation patterns (APs) independently for the real and imaginary parts of the codeword, resulting in an improved transmission rate $R_{\mathrm{QSM}}=\log_{2}L+2\lfloor\log_{2}N_{t}\rfloor$. A hybrid of the above two SM extensions, generalized quadrature spatial modulation (GQSM) [5], has been proposed. Currently, GQSM is considered to be the most advanced SM scheme, offering the highest transmission rate $R_{\mathrm{GQSM}}=K\log_{2}L+2\lfloor\log_{2}\binom{N_{t}}{K}\rfloor$. Additionally, other equivalent techniques have been proposed in the context of orthogonal frequency division multiplexing (OFDM), generally termed index modulation (IM), such as OFDM-IM [6, 7] and OFDM-I/Q-IM [8].
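For concreteness, the following Python snippet (an illustrative sketch, not part of the original letter) evaluates the four rate expressions above; with $(N_{t},K,L)=(4,2,4)$, for instance, it returns $R_{\mathrm{SM}}=4$, $R_{\mathrm{GSM}}=6$, $R_{\mathrm{QSM}}=6$, and $R_{\mathrm{GQSM}}=8$ bits per channel use.

```python
from math import comb, floor, log2

def rate_sm(L, Nt):       # R_SM = log2 L + floor(log2 Nt)
    return log2(L) + floor(log2(Nt))

def rate_gsm(L, Nt, K):   # R_GSM = K log2 L + floor(log2 C(Nt, K))
    return K * log2(L) + floor(log2(comb(Nt, K)))

def rate_qsm(L, Nt):      # R_QSM = log2 L + 2 floor(log2 Nt)
    return log2(L) + 2 * floor(log2(Nt))

def rate_gqsm(L, Nt, K):  # R_GQSM = K log2 L + 2 floor(log2 C(Nt, K))
    return K * log2(L) + 2 * floor(log2(comb(Nt, K)))

if __name__ == "__main__":
    Nt, K, L = 4, 2, 4    # 4 transmit antennas, 2 data symbols, QPSK
    print(rate_sm(L, Nt), rate_gsm(L, Nt, K), rate_qsm(L, Nt), rate_gqsm(L, Nt, K))
```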

GQSM requires the APs to be designed carefully, where $Q$ APs are selected out of $\binom{N_{t}}{K}^{2}$ possible candidates. This AP selection determines the achievable performance, leading to studies on the efficient design of APs [7, 9, 10]. One approach, known as combinatorial design, was proposed in [7]; it is equivalent to selecting $Q$ APs from the $\binom{N_{t}}{K}^{2}$ candidates in lexicographic order. Another approach, known as equiprobable design, was proposed in [9]; APs are constructed such that each transmit antenna is activated with an equal probability. In addition, an integer linear programming (ILP) design was proposed in [10], where equiprobable antenna selection is formulated as an ILP problem and compared with the other design methods [7, 9] for discrete-input channels. The open-source implementations of [7, 9, 10] are provided in [10].
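The combinatorial (lexicographic) design of [7] can be sketched in a few lines of Python; the function name and zero-based antenna indices below are illustrative choices, not taken from [7]. The uneven activation counts printed by the example illustrate why the equiprobable [9] and ILP [10] designs were subsequently proposed.

```python
from itertools import combinations, islice
from collections import Counter

def lexicographic_aps(Nt, K, Q):
    """First Q activation patterns (K-subsets of {0,...,Nt-1}) in lexicographic order."""
    return list(islice(combinations(range(Nt), K), Q))

if __name__ == "__main__":
    # Per-dimension example: 8 patterns for (Nt, K) = (8, 3); with identical real and
    # imaginary designs, the joint GQSM set would then contain 8 * 8 = 64 APs.
    aps = lexicographic_aps(Nt=8, K=3, Q=8)
    counts = Counter(antenna for ap in aps for antenna in ap)
    print(aps)
    print(counts)  # antenna 0 is active in all 8 patterns, antenna 7 in only one
```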

The channel capacity $C$ is an essential metric for evaluating a communication system, and many studies have addressed the capacity analysis of SM. Although the distribution of the channel input that maximizes the average mutual information (AMI) of SM is unknown [11], the AMI of SM and GSM has been derived under the assumption of a Gaussian input distribution [12]. Under the same assumption, it was shown in [13] that, when $N_{t}\rightarrow\infty$, the AMI of QSM is equal to the channel capacity of MIMO [14]. To the best of our knowledge, the AMI of QSM or GQSM for a specific number of transmit antennas has yet to be derived, although it is an important metric for predicting an upper bound on achievable rates in coded scenarios.

In this letter, we newly derive the AMI of GQSM assuming continuous-input channels and compare it with those of SM and GSM, where a non-trivial problem of calculation errors is solved by our partially closed-form expressions. In addition, using the derived AMI, we investigate the differences between the three AP design methods [7, 9, 10] and clarify that the difference is maximized at medium signal-to-noise ratios (SNRs).

II System Model

Consider an $N_{t}\times N_{r}$ multiple-input multiple-output (MIMO) system and assume independent and identically distributed (i.i.d.) frequency-flat Rayleigh fading channels, in which each element of a channel matrix $\mathbf{H}\in\mathbb{C}^{N_{r}\times N_{t}}$ and a noise vector $\mathbf{n}\in\mathbb{C}^{N_{r}\times 1}$ independently follows the complex Gaussian distributions $\mathcal{CN}(0,1)$ and $\mathcal{CN}(0,\sigma_{n}^{2})$, respectively. Given the noise variance $\sigma_{n}^{2}$ and the transmission power $\sigma_{s}^{2}$, the SNR is defined by $\rho=\sigma_{s}^{2}/\sigma_{n}^{2}$. Let $x_{i}$ be the $i$-th element of a GQSM codeword $\mathbf{x}\in\mathbb{C}^{N_{t}\times 1}$. The real and imaginary parts of the classic symbols $\mathbf{s}\in\mathbb{C}^{K\times 1}$ are denoted by $\mathbf{s}_{\mathbb{R}}$ and $\mathbf{s}_{\mathbb{I}}$, while the $k$-th elements of $\mathbf{s}_{\mathbb{R}}$ and $\mathbf{s}_{\mathbb{I}}$ are denoted by $s_{\mathbb{R}}^{(k)}$ and $s_{\mathbb{I}}^{(k)}$, respectively. Similarly, the $(i,k)$ element of a matrix $\mathbf{A}$ is denoted by $a^{(i,k)}$. Then, the APs corresponding to the real and imaginary parts of the codeword are defined by

\mathbb{A}_{\mathbb{R}}=\bigl\{\mathbf{A}_{\mathbb{R}}\in\{0,1\}^{N_{t}\times K}\mid\forall k=1,\cdots,K,\ \textstyle\sum_{i=1}^{N_{t}}a_{\mathbb{R}}^{(i,k)}=1\bigr\} \quad (1)

and

\mathbb{A}_{\mathbb{I}}=\bigl\{\mathbf{A}_{\mathbb{I}}\in\{0,1\}^{N_{t}\times K}\mid\forall k=1,\cdots,K,\ \textstyle\sum_{i=1}^{N_{t}}a_{\mathbb{I}}^{(i,k)}=1\bigr\}, \quad (2)

where we have the relationships $a_{\mathbb{R}}^{(i,k)}=1\Leftrightarrow\mathrm{Re}(x_{i})=s_{\mathbb{R}}^{(k)}$ and $a_{\mathbb{I}}^{(i,k)}=1\Leftrightarrow\mathrm{Im}(x_{i})=s_{\mathbb{I}}^{(k)}$. Denoting all APs as $\mathbb{A}=\mathbb{A}_{\mathbb{R}}\times\mathbb{A}_{\mathbb{I}}$ using the Cartesian product $\times$, the number of APs, $Q=|\mathbb{A}|$, satisfies the constraint $2\leq Q\leq 2^{\lfloor\log_{2}\binom{N_{t}}{K}^{2}\rfloor}\leq\binom{N_{t}}{K}^{2}$ for the additional bit allocation of GQSM. Here, $Q$ is not limited to the maximum value; it can be adjusted to achieve additional gain at the expense of a reduced transmission rate [10]. The received signal $\mathbf{y}$ is represented as

\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{n}\in\mathbb{C}^{N_{r}\times 1} \quad (3)

and the codeword $\mathbf{x}$ is constructed by

\mathbf{x}=\mathbf{A}_{\mathbb{R}}\mathbf{s}_{\mathbb{R}}+\mathrm{j}\mathbf{A}_{\mathbb{I}}\mathbf{s}_{\mathbb{I}}\in\mathbb{C}^{N_{t}\times 1}, \quad (4)

where $\mathrm{j}=\sqrt{-1}$ denotes the imaginary unit. Note that this generalized system model can represent QSM by imposing the constraint $K=1$ and GSM by imposing the condition $\mathbf{A}_{\mathbb{R}}=\mathbf{A}_{\mathbb{I}}$ for all APs $(\mathbf{A}_{\mathbb{R}},\mathbf{A}_{\mathbb{I}})\in\mathbb{A}$. In addition, by setting the off-diagonal elements of the channel matrix $\mathbf{H}$ to $0$, it becomes equivalent to an idealized system model of OFDM-I/Q-IM.
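The following numpy sketch (illustrative only, not part of the letter) instantiates the system model (1)-(4) for one activation-pattern pair; the per-real-dimension symbol variance follows the $\mathcal{CN}(0,\sigma_{s}^{2}/2K)$ assumption stated in Section III.

```python
# Minimal sketch of the GQSM system model: one AP pair (A_R, A_I), Gaussian
# symbols, and the received signal y = Hx + n. Names follow the letter's notation.
import numpy as np

rng = np.random.default_rng(0)

def one_hot_columns(Nt, active_rows):
    """Build an Nt x K 0/1 matrix with exactly one 1 per column (an AP)."""
    A = np.zeros((Nt, len(active_rows)))
    for k, i in enumerate(active_rows):
        A[i, k] = 1.0
    return A

Nt, Nr, K = 4, 4, 2
sigma_s2, sigma_n2 = 1.0, 0.1                 # transmit power and noise variance

A_R = one_hot_columns(Nt, [0, 2])             # real-part AP
A_I = one_hot_columns(Nt, [1, 3])             # imaginary-part AP

# Each complex symbol is assumed CN(0, sigma_s^2 / 2K), so each real dimension
# has variance sigma_s^2 / (4K).
s_R = rng.normal(0, np.sqrt(sigma_s2 / (4 * K)), K)
s_I = rng.normal(0, np.sqrt(sigma_s2 / (4 * K)), K)

x = A_R @ s_R + 1j * (A_I @ s_I)              # GQSM codeword, Eq. (4)

H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
n = (rng.normal(size=Nr) + 1j * rng.normal(size=Nr)) * np.sqrt(sigma_n2 / 2)
y = H @ x + n                                 # received signal, Eq. (3)
print(x)
print(y)
```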

III Capacity Analysis

The AMI $I(\mathbf{x};\mathbf{y}|\mathbf{H})$ is defined as the expected value of the maximum number of bits that can be conveyed without error at a given SNR, and the ergodic capacity $C=\max_{p(\mathbf{x})}I(\mathbf{x};\mathbf{y}|\mathbf{H})$ is defined as the maximum value of the AMI over the distribution of the codeword $\mathbf{x}$ [14]. In general, the ergodic capacity of MIMO is achieved when each element of the codeword $\mathbf{x}$ independently follows a complex Gaussian distribution with the same variance [14]. However, a distribution that achieves the capacity of SM has not yet been found [11]. Therefore, in this study, we derive the AMI of GQSM when the input symbols independently follow a complex Gaussian distribution with the same variance, as in previous studies [12], and use it as an evaluation metric instead of the actual capacity.

III-A AMI of Discrete-Input Channel [15]

If the input symbols follow a discrete probability distribution, the AMI of a general MIMO scheme is expressed as [15]

I(\mathbf{x};\mathbf{y}|\mathbf{H})=R-\frac{1}{2^{R}}\sum_{i=1}^{2^{R}}\mathbb{E}_{\mathbf{H},\mathbf{n}}\left[\log_{2}\sum_{j=1}^{2^{R}}\exp\eta[i,j]\right], \quad (5)

where

\eta[i,j]=\frac{-\|\mathbf{H}(\mathbf{x}_{i}-\mathbf{x}_{j})+\mathbf{n}\|^{2}+\|\mathbf{n}\|^{2}}{\sigma_{n}^{2}}, \quad (6)

$\mathbf{x}_{i}$ is the $i$-th element of a codebook $\{\mathbf{x}_{1},\cdots,\mathbf{x}_{2^{R}}\}$, and $R$ is the transmission rate.
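A direct Monte Carlo implementation of (5)-(6) may be sketched as follows (illustrative code, not from [15]); the toy SM codebook in the example is an arbitrary choice.

```python
# Monte Carlo estimator of the discrete-input AMI in (5)-(6), averaging over H and n
# for an equiprobable codebook.
import numpy as np

rng = np.random.default_rng(1)

def discrete_ami(codebook, sigma_n2, Nr, trials=2000):
    """Estimate I(x; y | H) of (5) for equiprobable codewords."""
    X = np.asarray(codebook)                  # shape (2^R, Nt)
    M, Nt = X.shape
    R = np.log2(M)
    acc = 0.0
    for _ in range(trials):
        H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
        n = (rng.normal(size=Nr) + 1j * rng.normal(size=Nr)) * np.sqrt(sigma_n2 / 2)
        D = (H @ X.T).T                       # D[i] = H x_i, shape (M, Nr)
        for i in range(M):
            diff = D[i] - D + n               # H(x_i - x_j) + n for all j, Eq. (6)
            eta = (-np.sum(np.abs(diff) ** 2, axis=1) + np.sum(np.abs(n) ** 2)) / sigma_n2
            acc += np.log2(np.sum(np.exp(eta)))
    return R - acc / (M * trials)

if __name__ == "__main__":
    # Toy SM codebook: BPSK symbol on one of two antennas (R = 2 bits).
    codebook = [[+1, 0], [-1, 0], [0, +1], [0, -1]]
    print(discrete_ami(codebook, sigma_n2=0.1, Nr=2))
```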

III-B AMI of Continuous-Input Channel

In the following, the AMI of GQSM is newly derived for continuous-input continuous-output channels. If the input symbols follow a continuous probability distribution, similar to [12, Eq. (31)], the AMI of GQSM can be divided into that of the symbols, $I_{\mathbf{s}}$, and that of the APs, $I_{\mathbf{A}}$, expressed as

I(\mathbf{A},\mathbf{s};\mathbf{y}|\mathbf{H})=I(\mathbf{s};\mathbf{y}|\mathbf{A},\mathbf{H})+I(\mathbf{A};\mathbf{y}|\mathbf{H})=I_{\mathbf{s}}+I_{\mathbf{A}}. \quad (7)

First, we derive $I_{\mathbf{s}}$. From (7), we obtain

\begin{aligned}
I_{\mathbf{s}} &= I(\mathbf{s};\mathbf{y}|\mathbf{A},\mathbf{H})=h(\mathbf{y}|\mathbf{A},\mathbf{H})-h(\mathbf{y}|\mathbf{A},\mathbf{H},\mathbf{s})\\
&= -\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}\left[\log_{2}p(\mathbf{y}|\mathbf{A},\mathbf{H})\right]-h(\mathbf{n})\\
&= \mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}\left[\log_{2}p(\mathbf{y}|\mathbf{A},\mathbf{H})^{-1}\right]-N_{r}\log_{2}(\pi e\sigma_{n}^{2}),
\end{aligned} \quad (8)

where $h(\cdot)$ denotes the entropy of a random variable. Supposing $d\mathbf{s}=ds_{\mathbb{R}}^{(1)}\cdots ds_{\mathbb{R}}^{(K)}\,ds_{\mathbb{I}}^{(1)}\cdots ds_{\mathbb{I}}^{(K)}$, the argument of the expectation $\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}[\cdot]$ can be transformed into a Monte Carlo integration [1, 11] of

\begin{aligned}
\log_{2}p(\mathbf{y}|\mathbf{A},\mathbf{H})^{-1} &= -\log_{2}\int_{\mathbb{R}^{2K}}p(\mathbf{s})\,p(\mathbf{n}=\mathbf{y}-\mathbf{H}\mathbf{x})\ d\mathbf{s}\\
&= -\log_{2}\left(\mathbb{E}_{\mathbf{s}}\left[p(\mathbf{n}=\mathbf{y}-\mathbf{H}\mathbf{x})\right]\right)\\
&= -\log_{2}\left(\mathbb{E}_{\mathbf{s}}\left[\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}\|^{2}}{\sigma_{n}^{2}}\right)\right]\right)+N_{r}\log_{2}\left(\pi\sigma_{n}^{2}\right).
\end{aligned} \quad (9)

Substituting (9) into (8) yields

I_{\mathbf{s}}=\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}\left[-\log_{2}\mathbb{E}_{\mathbf{s}}\left[\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}\|^{2}}{\sigma_{n}^{2}}\right)\right]\right]-N_{r}\log_{2}(e). \quad (10)

Second, we derive $I_{\mathbf{A}}$. From (7), we obtain

\begin{aligned}
I_{\mathbf{A}} &= I(\mathbf{A};\mathbf{y}|\mathbf{H})=h(\mathbf{A}|\mathbf{H})-h(\mathbf{A}|\mathbf{H},\mathbf{y})\\
&= \log_{2}Q-\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}\left[\log_{2}p(\mathbf{A}|\mathbf{y},\mathbf{H})^{-1}\right].
\end{aligned} \quad (11)

Using Bayes' theorem, $p(\mathbf{A}|\mathbf{y},\mathbf{H})$ is expressed as

\begin{aligned}
p(\mathbf{A}|\mathbf{y},\mathbf{H}) &= \frac{p(\mathbf{y}|\mathbf{A},\mathbf{H})\,p(\mathbf{A}|\mathbf{H})}{\sum_{i=1}^{Q}p(\mathbf{y}|\mathbf{A}_{i},\mathbf{H})\,p(\mathbf{A}_{i}|\mathbf{H})}\\
&= \frac{p(\mathbf{y}|\mathbf{A},\mathbf{H})\,p(\mathbf{A})}{\sum_{i=1}^{Q}p(\mathbf{y}|\mathbf{A}_{i},\mathbf{H})\,p(\mathbf{A}_{i})}.
\end{aligned} \quad (12)

Assuming that APs are chosen uniformly at random, i.e., $p(\mathbf{A}_{i})=1/Q$, we obtain

\begin{aligned}
p(\mathbf{A}|\mathbf{y},\mathbf{H}) &= \frac{p(\mathbf{y}|\mathbf{A},\mathbf{H})}{\sum_{i=1}^{Q}p(\mathbf{y}|\mathbf{A}_{i},\mathbf{H})}\\
&= \frac{\mathbb{E}_{\mathbf{s}}\left[\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}\|^{2}}{\sigma_{n}^{2}}\right)\right]}{\sum_{i=1}^{Q}\mathbb{E}_{\mathbf{s}}\left[\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}_{i}\|^{2}}{\sigma_{n}^{2}}\right)\right]}.
\end{aligned} \quad (13)

Substituting (13) into (11) yields

\begin{aligned}
I_{\mathbf{A}} &= \log_{2}Q\\
&\quad-\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}\left[\log_{2}\frac{\sum_{i=1}^{Q}\mathbb{E}_{\mathbf{s}}\left[\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}_{i}\|^{2}}{\sigma_{n}^{2}}\right)\right]}{\mathbb{E}_{\mathbf{s}}\left[\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}\|^{2}}{\sigma_{n}^{2}}\right)\right]}\right].
\end{aligned} \quad (14)

Overall, from (10) and (14), the AMI of GQSM for continuous-input channels is derived as

\begin{aligned}
I(\mathbf{x};\mathbf{y}|\mathbf{H}) &= I_{\mathbf{s}}+I_{\mathbf{A}}\\
&= -\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}\left[\log_{2}\sum_{i=1}^{Q}p(\mathbf{y}|\mathbf{A}_{i},\mathbf{H})\right]+\log_{2}Q-N_{r}\log_{2}\left(\pi e\sigma_{n}^{2}\right)\\
&= -\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}\left[\log_{2}\sum_{i=1}^{Q}\mathbb{E}_{\mathbf{s}}\left[\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}_{i}\|^{2}}{\sigma_{n}^{2}}\right)\right]\right]+\log_{2}Q-N_{r}\log_{2}\left(e\right),
\end{aligned} \quad (15)

where the Monte Carlo method is used to calculate the expected values $\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}[\cdot]$ and $\mathbb{E}_{\mathbf{s}}[\cdot]$.
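A nested Monte Carlo estimator of (15) may be sketched as follows (illustrative code, not the authors' implementation); the outer loop samples $(\mathbf{A},\mathbf{H},\mathbf{y})$ and the inner loop samples $\mathbf{s}$. The default sample sizes are arbitrary, and Section III-C explains why the inner sample size becomes critical at high SNRs.

```python
import numpy as np

rng = np.random.default_rng(2)

def crandn(*shape):
    """i.i.d. CN(0, 1) samples."""
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

def gqsm_ami(aps, Nt, Nr, K, snr_db, outer=500, inner=1000, sigma_s2=1.0):
    """Estimate (15). `aps` is a list of (A_R, A_I) pairs of Nt x K 0/1 arrays."""
    Q = len(aps)
    sigma_n2 = sigma_s2 / 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(outer):
        H = crandn(Nr, Nt)
        A_R, A_I = aps[rng.integers(Q)]                 # AP drawn uniformly at random
        s = crandn(K) * np.sqrt(sigma_s2 / (2 * K))     # assumed CN(0, sigma_s^2 / 2K)
        x = A_R @ s.real + 1j * (A_I @ s.imag)
        y = H @ x + crandn(Nr) * np.sqrt(sigma_n2)
        # Inner expectations E_s[exp(-||y - H x_i||^2 / sigma_n^2)] for every AP
        acc = 0.0
        for A_Ri, A_Ii in aps:
            si = crandn(inner, K) * np.sqrt(sigma_s2 / (2 * K))
            xi = si.real @ A_Ri.T + 1j * (si.imag @ A_Ii.T)   # (inner, Nt)
            d = y - xi @ H.T                                   # (inner, Nr)
            acc += np.mean(np.exp(-np.sum(np.abs(d) ** 2, axis=1) / sigma_n2))
        total += -np.log2(acc)
    return total / outer + np.log2(Q) - Nr * np.log2(np.e)

if __name__ == "__main__":
    Nt, Nr, K = 2, 2, 1
    e = np.eye(Nt)
    aps = [(e[:, [i]], e[:, [j]]) for i in range(Nt) for j in range(Nt)]  # QSM, Q = 4
    print(gqsm_ami(aps, Nt, Nr, K, snr_db=0.0))
```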

III-C Error Analysis of Monte Carlo Integration

Figure 1: Relationship between SNR and $I_{\mathbf{s}}$ calculated from (10).

The expression in (15) appears straightforward; however, it is actually difficult to calculate with high accuracy. Fig. 1 shows $I_{\mathbf{s}}$ calculated from (10) using Monte Carlo integration for different sample sizes $N=10,\cdots,10^{7}$, where $(N_{t},N_{r},K,\mathbf{A}_{\mathbb{R}},\mathbf{A}_{\mathbb{I}})=(2,2,1,[0\ 1]^{\mathrm{T}},[1\ 0]^{\mathrm{T}})$, and each element of the input symbol independently follows the complex Gaussian distribution $\mathcal{CN}(0,\sigma_{s}^{2}/2K)$. $I_{\mathbf{s}}$ calculated using the closed-form expression of (9) is also shown, which will be discussed in Section III-D. As shown in Fig. 1, the approximate value of $I_{\mathbf{s}}$ increases exponentially beyond a certain SNR if the sample size is relatively small. This result indicates that the calculation error increases exponentially with the SNR, which is analyzed below.

First, the expectation in (9) is transformed into another form of

\begin{aligned}
\log_{2}\bar{p}(\mathbf{y}|\mathbf{A},\mathbf{H})^{-1} &= -\log_{2}\left(\frac{1}{N}\sum_{i=1}^{N}\frac{1}{\left(\pi\sigma_{n}^{2}\right)^{N_{r}}}\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}^{(i)}\|^{2}}{\sigma_{n}^{2}}\right)\right)\\
&= -\log_{2}\left(\sum_{i=1}^{N}\exp\left(-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}^{(i)}\|^{2}}{\sigma_{n}^{2}}\right)\right)+\log_{2}\left(N\pi^{N_{r}}\sigma_{n}^{2N_{r}}\right).
\end{aligned} \quad (16)

Here, the variables $\mathbf{y}$, $\mathbf{A}$, and $\mathbf{H}$ in (9), other than $\mathbf{s}$, can be regarded as given constants. Therefore, because of the reproductive property of the Gaussian distribution, if each element of the input symbol independently follows a complex Gaussian distribution, then $\mathbf{y}-\mathbf{H}\mathbf{x}$ is also a random vector whose elements follow complex Gaussian distributions with different variances. As a result, its squared norm $\|\mathbf{y}-\mathbf{H}\mathbf{x}\|^{2}$ is a random variable defined by the sum of the squares of Gaussian random variables. To investigate the asymptotic properties of (9) with respect to the SNR, we consider the following random variable $Y$ as a simplified model of (16), expressed by

Y=-\log_{2}\left(\sum_{i=1}^{N}\exp\left(-X_{i}^{2}\right)\right), \quad (17)

where $X_{i}$ is a random variable following the Gaussian distribution $\mathcal{N}(0,\sigma_{x}^{2})$. Now, we introduce a new random variable $X_{\mathrm{min}}^{2}=\min\{X_{1}^{2},\cdots,X_{N}^{2}\}$. Its expected value $\sigma_{X_{\mathrm{min}}^{2}}$ is given by

\sigma_{X_{\mathrm{min}}^{2}}=\int_{0}^{\infty}x^{2}\frac{d}{dx}\left\{1-\left(1-\Phi\left(\frac{x}{\sigma_{x}}\right)\right)^{N}\right\}dx=\sigma_{x}^{2}\,g(N) \quad (18)

and $g(N)=\int_{0}^{\infty}t^{2}\frac{d}{dt}\left\{1-\left(1-\Phi(t)\right)^{N}\right\}dt$, where $\Phi(\cdot)$ is the cumulative distribution function of a half-Gaussian distribution with unit variance. From (18), $\sigma_{X_{\mathrm{min}}^{2}}$ increases linearly with the SNR. Here, the following inequality holds:

\begin{aligned}
Y &= \log_{2}(e)X_{\mathrm{min}}^{2}-\log_{2}\left(\sum_{i=1}^{N}\exp\left(-X_{i}^{2}+X_{\mathrm{min}}^{2}\right)\right)\\
&\geq \frac{X_{\mathrm{min}}^{2}}{\log(2)}-\log_{2}(N).
\end{aligned} \quad (19)

Since $\log_{2}\left(\sum_{i=1}^{N}\exp\left(-X_{i}^{2}+X_{\mathrm{min}}^{2}\right)\right)\geq 0$, we have

\frac{X_{\mathrm{min}}^{2}}{\log(2)}-\log_{2}(N)\leq Y\leq\frac{X_{\mathrm{min}}^{2}}{\log(2)}. \quad (20)

If the sample size $N$ is constant, then $Y\sim X_{\mathrm{min}}^{2}/\log(2)$ asymptotically holds because $\sigma_{x}\gg 0\Rightarrow X_{\mathrm{min}}^{2}\gg\log_{2}(N)$. Also, for the expected value of $Y$, the following approximation holds:

\sigma_{Y}\sim\frac{\sigma_{X_{\mathrm{min}}^{2}}}{\log(2)}=\sigma_{x}^{2}\cdot\frac{g(N)}{\log(2)}. \quad (21)

Thus, asymptotically, $\sigma_{Y}$ increases linearly with the SNR. In other words, $\sigma_{Y}$ increases exponentially with the SNR in units of decibels.

The above analysis explains the phenomenon observed in Fig. 1. The expectation inside the log-sum-exp structure, as in (9), requires a sufficient sample size because the error increases with the SNR. That is, at high SNRs, accurately calculating the AMI of GQSM becomes computationally challenging.
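The following sketch (illustrative only) reproduces the behavior of the simplified model (17)-(21): for a fixed sample size $N$, the empirical mean of $Y$ is sandwiched by the bounds in (20) and grows roughly linearly with $\sigma_{x}^{2}$.

```python
# Simplified model of Section III-C: Y = -log2(sum_i exp(-X_i^2)) with fixed N.
import numpy as np

rng = np.random.default_rng(3)

def mean_Y_and_bounds(sigma_x, N, trials=5000):
    X = rng.normal(0.0, sigma_x, size=(trials, N))
    Y = -np.log2(np.sum(np.exp(-X ** 2), axis=1))       # Eq. (17)
    xmin2 = np.min(X ** 2, axis=1)                      # X_min^2
    lower = xmin2 / np.log(2) - np.log2(N)              # lower bound of (20)
    upper = xmin2 / np.log(2)                           # upper bound of (20)
    return Y.mean(), lower.mean(), upper.mean()

if __name__ == "__main__":
    N = 10
    for sigma_x in [1.0, 2.0, 4.0, 8.0, 16.0]:
        # E[Y] grows roughly like sigma_x^2 * g(N) / log(2), per (21)
        print(sigma_x, mean_Y_and_bounds(sigma_x, N))
```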

III-D Closed-Form Expression of (9)

The calculation error in (9) induced by the Monte Carlo integration increases exponentially with the SNR. Here, a closed-form expression of (9) can be derived, which eliminates this calculation error and provides a more accurate value of the AMI. Assuming that each element of the input symbol independently follows the complex Gaussian distribution $\mathcal{CN}(0,\sigma_{s}^{2}/2K)$, the probability density $p(\mathbf{y}|\mathbf{A},\mathbf{H})$ in (9) can be expressed as

p(\mathbf{y}|\mathbf{A},\mathbf{H})=\frac{\int_{\mathbb{R}^{2K}}\exp\left(-\frac{\|\mathbf{s}\|^{2}}{\sigma_{s}^{2}/K}-\frac{\|\mathbf{y}-\mathbf{H}\mathbf{x}\|^{2}}{\sigma_{n}^{2}}\right)d\mathbf{s}}{\left(\pi\sigma_{s}^{2}/K\right)^{K}\left(\pi\sigma_{n}^{2}\right)^{N_{r}}}. \quad (22)
\begin{aligned}
p(\mathbf{y}|\mathbf{A},\mathbf{H}) &= \frac{1}{\pi^{3}\sigma_{s}^{2}\sigma_{n}^{4}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\left(-\frac{s_{\mathbb{R}}^{2}+s_{\mathbb{I}}^{2}}{\sigma_{s}^{2}}-\frac{\|\mathbf{y}-\mathbf{H}\left(\mathbf{A}_{\mathbb{R}}\mathbf{s}_{\mathbb{R}}+\mathrm{j}\mathbf{A}_{\mathbb{I}}\mathbf{s}_{\mathbb{I}}\right)\|^{2}}{\sigma_{n}^{2}}\right)ds_{\mathbb{R}}\,ds_{\mathbb{I}}\\
&= \frac{1}{\pi^{3}\sigma_{s}^{2}\sigma_{n}^{4}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp\left(-\frac{p_{1}s_{\mathbb{R}}^{2}}{\sigma_{s}^{2}\sigma_{n}^{2}}+\frac{2s_{\mathbb{R}}}{\sigma_{n}^{2}}\left(\alpha s_{\mathbb{I}}+\beta\right)-\frac{1}{\sigma_{n}^{2}}\left(\|\mathbf{y}\|^{2}+2\gamma s_{\mathbb{I}}+\frac{p_{2}}{\sigma_{s}^{2}}s_{\mathbb{I}}^{2}\right)\right)ds_{\mathbb{R}}\,ds_{\mathbb{I}}
\end{aligned} \quad (23)

In general, (22) can be expressed in closed form by repeating the Gaussian integral $2K$ times. By substituting the closed-form expression of (22) into (15), the AMI of GQSM for continuous-input channels can be obtained with minimal error. Although the closed form of the expected value $\mathbb{E}_{\mathbf{s}}[\cdot]$ in (15) can be obtained in this manner, it is still necessary to calculate $\mathbb{E}_{\mathbf{A},\mathbf{H},\mathbf{y}}[\cdot]$ by the Monte Carlo method, which induces no significant calculation errors. In repeating the Gaussian integral $2K$ times, we used symbolic computation with the computer algebra system GiNaC [16] and the linear algebra library Eigen [17].

As an example, in the case of $(N_{t},N_{r},K,\mathbf{A}_{\mathbb{R}},\mathbf{A}_{\mathbb{I}})=(2,2,1,[0\ 1]^{\mathrm{T}},[1\ 0]^{\mathrm{T}})$, (22) is given by (23), where $\mathbf{h}_{i}$ is the $i$-th column vector of $\mathbf{H}$, and $h_{\mathbb{R}}^{(i,k)}$ and $h_{\mathbb{I}}^{(i,k)}$ are the real and imaginary parts of the $(i,k)$ element of $\mathbf{H}$. The real and imaginary parts of the input symbol $s$ are denoted by $s_{\mathbb{R}}$ and $s_{\mathbb{I}}$, while the real and imaginary parts of the $i$-th element of the received signal $\mathbf{y}$ are denoted by $y_{\mathbb{R}}^{(i)}$ and $y_{\mathbb{I}}^{(i)}$, respectively. The constant parameters in (23) are defined as

p_{1}=\sigma_{s}^{2}\|\mathbf{h}_{1}\|^{2}+\sigma_{n}^{2},\quad p_{2}=\sigma_{s}^{2}\|\mathbf{h}_{2}\|^{2}+\sigma_{n}^{2}, \quad (24)
\alpha=\sum_{i=1}^{2}\left(h_{\mathbb{R}}^{(i,1)}h_{\mathbb{I}}^{(i,2)}-h_{\mathbb{I}}^{(i,1)}h_{\mathbb{R}}^{(i,2)}\right), \quad (25)
\beta=\sum_{i=1}^{2}\left(h_{\mathbb{R}}^{(i,1)}y_{\mathbb{R}}^{(i)}+h_{\mathbb{I}}^{(i,1)}y_{\mathbb{I}}^{(i)}\right),\ \mathrm{and} \quad (26)
\gamma=\sum_{i=1}^{2}\left(h_{\mathbb{I}}^{(i,2)}y_{\mathbb{R}}^{(i)}-h_{\mathbb{R}}^{(i,2)}y_{\mathbb{I}}^{(i)}\right). \quad (27)

Integrating (23) with respect to $s_{\mathbb{R}}$ gives

p(\mathbf{y}|\mathbf{A},\mathbf{H})=\frac{1}{\pi^{\frac{5}{2}}\sigma_{s}\sigma_{n}^{3}\sqrt{p_{1}}}\int_{-\infty}^{\infty}\exp\left(Z_{1}\right)ds_{\mathbb{I}}, \quad (28)

where

\begin{aligned}
Z_{1} &= -\frac{s_{\mathbb{I}}^{2}}{\sigma_{n}^{2}}\left(\frac{p_{2}}{\sigma_{s}^{2}}-\frac{\sigma_{s}^{2}\alpha^{2}}{p_{1}}\right)+\frac{2s_{\mathbb{I}}}{\sigma_{n}^{2}}\left(\frac{\sigma_{s}^{2}\alpha\beta}{p_{1}}-\gamma\right)\\
&\quad+\frac{1}{\sigma_{n}^{2}}\left(\frac{\sigma_{s}^{2}\beta^{2}}{p_{1}}-\|\mathbf{y}\|^{2}\right).
\end{aligned} \quad (29)

Finally, integrating (28) with respect to $s_{\mathbb{I}}$ gives the closed-form expression of (22) as

p(\mathbf{y}|\mathbf{A},\mathbf{H})=\frac{\exp\left(Z_{2}\right)}{\pi^{2}\sigma_{n}^{2}\sqrt{p_{1}p_{2}-\sigma_{s}^{4}\alpha^{2}}}, \quad (30)

where

Z_{2}=\frac{1}{\sigma_{n}^{2}}\left(\frac{p_{2}}{\sigma_{s}^{2}}-\frac{\sigma_{s}^{2}\alpha^{2}}{p_{1}}\right)^{-1}\left(\frac{\sigma_{s}^{2}\alpha\beta}{p_{1}}-\gamma\right)^{2}+\frac{\sigma_{s}^{2}\beta^{2}}{p_{1}\sigma_{n}^{2}}-\frac{\|\mathbf{y}\|^{2}}{\sigma_{n}^{2}}. \quad (31)
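As a numerical sanity check of the Gaussian marginalization behind (22)-(30), the sketch below (not the authors' GiNaC/Eigen implementation) evaluates $p(\mathbf{y}|\mathbf{A},\mathbf{H})$ through the equivalent real-valued multivariate Gaussian density of $\mathbf{y}$ given $(\mathbf{A},\mathbf{H})$, and compares it with a brute-force Monte Carlo average of $p(\mathbf{n}=\mathbf{y}-\mathbf{H}\mathbf{x})$; the per-real-dimension symbol variance `sigma_r2` is a parameter of the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def closed_form_pdf(y, H, A_R, A_I, sigma_r2, sigma_n2):
    """p(y | A, H) for x = A_R s_R + j A_I s_I with s_R, s_I ~ N(0, sigma_r2 I):
    y is then a zero-mean Gaussian in the stacked real domain [Re y; Im y]."""
    Nr = H.shape[0]
    G = np.block([[H.real @ A_R, -H.imag @ A_I],
                  [H.imag @ A_R,  H.real @ A_I]])        # 2Nr x 2K real mixing matrix
    C = sigma_r2 * G @ G.T + (sigma_n2 / 2) * np.eye(2 * Nr)
    yr = np.concatenate([y.real, y.imag])
    quad = yr @ np.linalg.solve(C, yr)
    return np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** (2 * Nr) * np.linalg.det(C))

def monte_carlo_pdf(y, H, A_R, A_I, sigma_r2, sigma_n2, N=200000):
    """Monte Carlo estimate of E_s[p(n = y - Hx)] as in (9)."""
    Nr, K = H.shape[0], A_R.shape[1]
    sR = rng.normal(0, np.sqrt(sigma_r2), (N, K))
    sI = rng.normal(0, np.sqrt(sigma_r2), (N, K))
    x = sR @ A_R.T + 1j * (sI @ A_I.T)                   # (N, Nt)
    d = y - x @ H.T                                      # (N, Nr)
    pdf_n = np.exp(-np.sum(np.abs(d) ** 2, axis=1) / sigma_n2) / (np.pi * sigma_n2) ** Nr
    return pdf_n.mean()

if __name__ == "__main__":
    Nt, Nr, K = 2, 2, 1
    A_R, A_I = np.array([[0.0], [1.0]]), np.array([[1.0], [0.0]])
    H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)
    y = rng.normal(size=Nr) + 1j * rng.normal(size=Nr)   # arbitrary test point
    print(closed_form_pdf(y, H, A_R, A_I, 0.5, 0.1))
    print(monte_carlo_pdf(y, H, A_R, A_I, 0.5, 0.1))     # should be close
```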

IV Numerical Results

Figure 2: AMI of SM and QSM, where $(N_{t},N_{r},K,Q)=(4,4,1,16)$. QPSK signaling was considered for discrete-input QSM.
Figure 3: AMI of GSM and GQSM, where $(N_{t},N_{r},K,Q)=(4,4,2,36)$. QPSK signaling was considered for discrete-input GQSM.
Figure 4: AMI of GQSM for different activation patterns [7, 9, 10], where $(N_{t},N_{r},K,Q)=(8,8,3,64)$.
Figure 5: AMI gaps of GQSM for different activation patterns [7, 9, 10], where $(N_{t},N_{r},K,Q)=(8,8,3,64)$.

In this section, we compare the AMI of QSM and GQSM with those of related SM schemes. The AMI of QSM derived in [13] is omitted here since it is equal to the MIMO channel capacity [14] when $N_{t}\rightarrow\infty$. All the results were obtained through Monte Carlo simulations over $10^{6}$ independent channel realizations. Each element of the input symbols was assumed to independently follow a complex Gaussian distribution.

First, Fig. 2 shows the AMI of QSM and SM, where $(N_{t},N_{r},K,Q)=(4,4,1,16)$. The dashed and dotted lines in the figure correspond to $I_{\mathbf{s}}$ and $I_{\mathbf{A}}$ in (15), respectively. The figure shows that the difference in AMI between QSM and SM can be mainly attributed to $I_{\mathbf{A}}$, and there was little difference in $I_{\mathbf{s}}$. Since $I_{\mathbf{A}}$ is based on a finite number of APs, it converged to $\log_{2}Q$ at high SNRs.

Next, Fig. 3 shows the AMI of GSM and GQSM, where $(N_{t},N_{r},K,Q)=(4,4,2,36)$. Note that $I_{\mathbf{s}}$ of GSM is equal to the ergodic capacity of $K\times N_{r}$ MIMO. Although the results were basically the same as those shown in Fig. 2, $I_{\mathbf{s}}$ of GQSM was slightly smaller than that of GSM. This can be attributed to the fact that GQSM is more susceptible to channel fading than GSM when the APs of the real and imaginary parts do not match, i.e., $\mathbf{A}_{\mathbb{R}}\neq\mathbf{A}_{\mathbb{I}}$. Specifically, GSM is affected by $K$ channel elements, while GQSM can be affected by up to $2K$ channel elements. Interestingly, in the SNR range below $7.5$ dB, the decrease in $I_{\mathbf{s}}$ exceeded the increase in $I_{\mathbf{A}}$, resulting in the AMI of GQSM being lower than that of GSM. Since $Q=\binom{N_{t}}{K}^{2}$ holds in Figs. 2 and 3, the number of APs is equal to the cardinality of all candidates, and there is no room for designing APs.

Finally, Fig. 4 shows the AMI of GQSM for different AP designs [7, 9, 10], where $(N_{t},N_{r},K,Q)=(8,8,3,64)$. For simplicity, the same APs were used for both the real and imaginary parts of the codewords, i.e., $\mathbb{A}_{\mathbb{R}}=\mathbb{A}_{\mathbb{I}}$. As shown, the differences in AMI appeared to be small. To further analyze these differences, in Fig. 5, we focus on the gaps in AMI between the three AP designs, which shows that the differences in AMI depended on the SNR and were maximized at medium SNRs. The ILP [10] and equiprobable [9] designs maximized the equiprobability of the active transmit antennas and also increased the probability of being affected by channel fading, leading to a slight decrease in $I_{\mathbf{s}}$. However, the decrease in $I_{\mathbf{s}}$ was negligible compared with the increase in $I_{\mathbf{A}}$, resulting in overall improvements in AMI compared with the combinatorial design [7].

V Conclusions

In this letter, we derived the AMI of GQSM for continuous-input channels, which clarified a significant difference in AMI between GQSM and GSM at high SNRs. Additionally, the impact of AP designs on AMI was maximized at medium SNRs, and the maximum AMI was achieved by the ILP design. The analyses given in this letter are applicable to the schemes subsumed by GQSM, such as QSM and OFDM-I/Q-IM.

References

  • [1] N. Ishikawa, S. Sugiura, and L. Hanzo, “50 years of permutation, spatial and index modulation: From classic RF to visible light communications and data storage,” IEEE Communications Surveys & Tutorials, vol. 20, no. 3, pp. 1905–1938, 2018.
  • [2] M. Wen et al., “A survey on spatial modulation in emerging wireless systems: Research progresses and applications,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 9, pp. 1949–1972, 2019.
  • [3] J. Jeganathan, A. Ghrayeb, and L. Szczecinski, “Generalized space shift keying modulation for MIMO channels,” in IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Cannes, France, Sep. 2008, pp. 1–5.
  • [4] R. Mesleh, S. S. Ikki, and H. M. Aggoune, “Quadrature spatial modulation,” IEEE Transactions on Vehicular Technology, vol. 64, no. 6, pp. 2738–2742, 2015.
  • [5] F. R. Castillo-Soria et al., “Generalized quadrature spatial modulation scheme using antenna grouping,” ETRI Journal, vol. 39, no. 5, pp. 707–717, 2017.
  • [6] R. Abu-alhiga and H. Haas, “Subcarrier-index modulation OFDM,” in 2009 IEEE 20th International Symposium on Personal, Indoor and Mobile Radio Communications, Sep. 2009, pp. 177–181.
  • [7] E. Başar et al., “Orthogonal frequency division multiplexing with index modulation,” IEEE Transactions on Signal Processing, vol. 61, no. 22, pp. 5536–5549, 2013.
  • [8] B. Zheng et al., “Low-complexity ML detector and performance analysis for OFDM with in-phase/quadrature index modulation,” IEEE Communications Letters, vol. 19, no. 11, pp. 1893–1896, 2015.
  • [9] M. Wen et al., “Equiprobable subcarrier activation method for OFDM with index modulation,” IEEE Communications Letters, vol. 20, no. 12, pp. 2386–2389, 2016.
  • [10] N. Ishikawa, “IMToolkit: An open-source index modulation toolkit for reproducible research based on massively parallel algorithms,” IEEE Access, vol. 7, pp. 93 830–93 846, 2019.
  • [11] D. A. Basnayaka, M. Di Renzo, and H. Haas, “Massive but few active MIMO,” IEEE Transactions on Vehicular Technology, vol. 65, no. 9, pp. 6861–6877, 2016.
  • [12] B. Shamasundar and A. Nosratinia, “On the capacity of index modulation,” IEEE Transactions on Wireless Communications, vol. 21, no. 11, pp. 9114–9126, 2022.
  • [13] A. Younis et al., “Quadrature spatial modulation for 5G outdoor millimeter–wave communications: Capacity analysis,” IEEE Transactions on Wireless Communications, vol. 16, no. 5, pp. 2882–2890, 2017.
  • [14] E. Telatar, “Capacity of multi-antenna Gaussian channels,” European Transactions on Telecommunications, vol. 10, no. 6, pp. 585–595, 1999.
  • [15] S. X. Ng and L. Hanzo, “On the MIMO channel capacity of multidimensional signal sets,” IEEE Transactions on Vehicular Technology, vol. 55, no. 2, pp. 528–536, 2006.
  • [16] J. Vollinga, “GiNaC—symbolic computation with C++,” Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 559, no. 1, pp. 282–284, 2006.
  • [17] G. Guennebaud, B. Jacob et al., “Eigen v3,” 2010.