
Explainable Semantic Communication for Text Tasks

Chuanhong Liu, Caili Guo, Senior Member, IEEE, Yang Yang, Wanli Ni, Yanquan Zhou, Lei Li,
and Tony Q.S. Quek, Fellow, IEEE
This work was supported in part by the Fundamental Research Funds for the Central Universities (No. 2021XD-A01-1), and in part by the BUPT Excellent Ph.D. Students Foundation (No. CX2022101). An earlier version of this paper [1] was presented in part at the 2023 IEEE Wireless Communications and Networking Conference (WCNC) [DOI: 10.1109/WCNC55385.2023.10118916]. (Corresponding author: Caili Guo)
Chuanhong Liu and Caili Guo are with the Beijing Key Laboratory of Network System Architecture and Convergence, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: [email protected]; [email protected]).
Yang Yang is with the Beijing Laboratory of Advanced Information Networks, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: [email protected]).
Wanli Ni is with the Department of Electronic Engineering, Tsinghua University, Beijing 100084, China (e-mail: [email protected]).
Yanquan Zhou and Lei Li are with the School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: [email protected]; [email protected]).
Tony Q. S. Quek is with the Department of Information Systems Technology and Design, Singapore University of Technology and Design, Singapore 487372 (e-mail: [email protected]).
Abstract

Task-oriented semantic communication has gained increasing attention due to its ability to reduce the amount of transmitted data without sacrificing task performance. Although some prior efforts have been dedicated to developing semantic communications, the semantics in these works remain unexplainable. Challenges related to explainable semantic representation and knowledge-based semantic compression have yet to be explored. In this paper, we propose a triplet-based explainable semantic communication (TESC) scheme for representing text semantics efficiently. Specifically, we develop a semantic extraction method to convert text into triplets while using syntactic dependency analysis to enhance semantic completeness. Then, we design a semantic filtering method to further compress the duplicate and task-irrelevant triplets based on prior knowledge. The filtered triplets are encoded and transmitted to the receiver for completing intelligent tasks. Furthermore, we apply the proposed TESC scheme to two emblematic text tasks: sentiment analysis and question answering, in which the semantic codec is meticulously customized for each task. Experimental results demonstrate that 1) the TESC scheme outperforms benchmarks in terms of Top-1 accuracy and transmission efficiency, and 2) the TESC scheme enjoys about a 150% performance gain compared to the traditional communication method.

Index Terms:
task-oriented communication, explainable semantic representation, triplets, natural language processing.

I Introduction

The advancement of 6G network technology is increasingly intertwined with artificial intelligence (AI) [2, 3, 4]. As the number of connected intelligent devices skyrockets and wireless data traffic grows exponentially, one of the primary emerging challenges is spectrum scarcity. Traditional communication technologies typically focus on the accurate transmission of symbols but often overlook the actual purpose and meaning of the data being transmitted. This leads to inefficient use of wireless communication resources and makes it difficult to meet the demands of future large-scale communications. Semantic communication, recognized as a key technology in 6G networks, offers a promising solution [5, 6].

Unlike traditional communication that transmits raw data, semantic communication emphasizes transmitting task-oriented semantics[7, 8]. This paradigm shift has multiple advantages. First, it reduces redundant or irrelevant content, significantly improving communication efficiency and alleviating the spectrum scarcity issue. Second, it enhances the robustness of communication systems against channel noise by allowing bit-level errors. However, the unexplainable nature of existing semantic communication hinders its practical application to some extent. Therefore, it is necessary to improve its explainability so as to enhance the reliability of semantic communication systems.

I-A Related Works

Recently, there have been a number of studies on semantic communications [9, 10, 11, 12]. In particular, for text, the authors in [9] proposed a semantic communication system based on the Transformer, in which the concept of semantic information was clarified at the sentence level. Based on [9], the authors in [10] further proposed a lite distributed semantic communication system, making the model easier to deploy on Internet of Things (IoT) devices. For images, the authors in [11] presented a joint source-channel coding scheme based on convolutional neural networks (CNN) to transmit image data over a wireless channel, which can jointly optimize the various modules of the communication system. For speech, the authors in [12] proposed an attention mechanism-based semantic communication system for speech signals, in which a general model that copes with various channel conditions without retraining was developed. All of these works concentrate solely on reconstructing the source data at the receiver, neglecting the specific task at hand. This oversight may significantly restrict the data compression efficiency.

Focusing on specific tasks, task-oriented semantic communication has been proposed in [13, 14, 15, 16, 17]. The authors in [13] considered image-based re-identification of persons or cars as the communication task, where two schemes were proposed to improve the retrieval accuracy. The authors in [14] designed a joint transmission-classification system for images, in which the receiver outputs image classification results directly. The authors in [15] proposed an adaptable semantic compression method for an image classification-oriented semantic communication system, and investigated resource allocation for performance optimization. The authors in [16] presented a novel federated semantic learning framework to collaboratively train the semantic-channel encoders of multiple devices with the coordination of a base station-based semantic-channel decoder, aiming at the semantic knowledge graph construction task. The authors in [17] proposed a multi-user semantic communication system serving the visual question answering task, in which long short-term memory was used for the text transmitter and a CNN for the image transmitter. All of the aforementioned studies employed deep neural networks to encode data and utilized the output features as the semantic representation of the data. However, the output of neural networks, which consists of a vector of numerical values, is often unexplainable and lacks the logic of human language, as shown in Fig. 1(a).

(a) Feature-based unexplainable semantic representation
(b) Triplet-based explainable semantic representation
Figure 1: Unexplainable vs. explainable semantic representation.

In [18, 19, 20, 21, 22, 23], the authors proposed a number of methods for explainable semantic communications. The authors in [18] and [19] proposed semantic communication frameworks tailored for real-time control systems, where the control signals were treated as the semantic information of the data. However, it should be noted that utilizing control signals as semantic information may not be suitable for transmitting text or image data, as control signals are unable to capture the content associated with textual or visual information. The authors in [20] proposed an explainable and robust semantic communication framework that disentangles features into independent and semantically interpretable features. However, this work is specifically dedicated to the field of image transmission and does not encompass text-related applications. Moreover, the authors in [21, 22, 23] proposed to use knowledge graphs instead of features to represent text semantics, which could decompose texts into multiple triplets. The entities in such a representation method were well-defined without ambiguity, since knowledge graphs are usually used as generic databases, and the information contained in them is clear and general. However, the textual statements in daily language depend on each other, and most words are only applicable to a specific context. A large number of ambiguous words could not be extracted completely via the existing methods in [21, 22, 23]. In addition, all of these works aimed at recovering the original text at the receiver, rather than implementing a specific task, which may lead to some task-irrelevant semantic redundancy.

I-B Challenges and Contributions

The development of task-oriented explainable semantic communication systems faces a number of challenges. The first one is to represent the task-oriented semantics of text in an interpretable way, as shown in Fig. 1(b). This involves not only recognizing the meaning of the text, but also ensuring that this representation is understandable. Furthermore, it is crucial to design an efficient method to extract these interpretable semantics. This process must be robust and efficient, enabling the system to extract the semantics accurately. Another challenge involves refining the extracted semantics, especially in terms of removing semantic redundancy. It is crucial for communication systems to transmit only the critical semantics as this helps to improve the overall efficiency of the system.

In this paper, we propose a novel triplet-based explainable semantic communication (TESC) scheme for text tasks, which can effectively filter the redundant information to reduce the amount of transmitted data. The main contributions of this paper are summarized as follows:

  •

    We propose a TESC scheme for text tasks, where the transmitter characterizes the semantic information of the text by means of triplets, which enhances the interpretability of the semantic information while efficiently completing the downstream tasks. Specifically, the triplet-based semantic representation method includes semantic extraction and semantic filtering. The former consists of an out-of-the-box triplet extraction tool and a complementary triplet extraction method that employs syntactic dependency analysis. This dual approach ensures a comprehensive capture of semantic information. The latter is a two-step method for compressing the extracted triplets, thereby reducing the amount of transmitted data and improving the efficiency of semantic communication.

  •

    We apply the proposed TESC scheme to two emblematic text tasks: sentiment analysis and question answering (QA), underscoring its effectiveness and versatility. The semantic codec is meticulously customized for each task, optimizing for nuanced understanding and processing, thereby exemplifying the scheme’s potential across diverse text tasks.

  •

    Experimental results show that the proposed TESC scheme can significantly reduce the amount of transmitted data and improve task performance, compared to traditional communication and existing semantic communication methods. Specifically, we observe that 1) the proposed TESC scheme shows 80.5% and 150% performance gains over traditional communication methods on sentiment analysis and QA, respectively; 2) the number of transmission symbols of TESC is only 8% of that of traditional methods in the QA task; 3) the proposed semantic filtering method notably decreases the average number of triplets by as much as 76.1% and the total word count by 83.8% while maintaining task performance.

The remainder of this paper is organized as follows. The proposed TESC scheme is described in Section II. The proposed TESC scheme is applied to the sentiment analysis and QA tasks in Section III. Finally, experimental results are provided in Section IV, which is followed by conclusions in Section V.

II Proposed TESC Scheme

II-A Scheme Design

As shown in Fig. 2, the structure of the proposed TESC scheme consists of a semantic representation module, a semantic codec module, a channel codec module, and knowledge bases. Similar to [24], the knowledge base primarily contains well-trained model parameters, alongside prior task-specific knowledge. In this paper, the channel codec is the same as that of the conventional communication system; hence, we mainly focus on the design of the semantic representation method and the semantic codec.

The transmitter aims at gathering data locally and performing an inference task with the assistance of the receiver, which is the goal of semantic communication. In particular, the transmitter first extracts the triplets from original text and further filters the redundant triplets. Then, the triplets are encoded and transmitted to the receiver in a scheduled manner. Finally, the receiver performs semantic decoding based on the received semantics and returns the result of tasks to the transmitter. In the following, we detail the system design of the proposed TESC scheme.

Figure 2: An illustration of the proposed TESC scheme for text tasks.

As shown in Fig. 2, the transmitter maps the input text into complex symbol streams, which then pass through the wireless channel with fading and noise. Particularly, the input of the system is denoted by $\boldsymbol{I} = [i_1, i_2, \cdots, i_N]$, where $i_n$ represents the $n$-th word in the text and $N$ is the number of words in $\boldsymbol{I}$. The transmitter first extracts triplets from the original text $\boldsymbol{I}$ to represent its semantics, which can be denoted by

$\boldsymbol{A} = R_{\boldsymbol{T}}(\boldsymbol{I})$,   (1)

where $R_{\boldsymbol{T}}(\cdot)$ denotes the semantic extraction method, and $\boldsymbol{T}$ is the oriented task. The output $\boldsymbol{A}$ is a series of triplets, consisting of entities and relations.

In particular, each entity in the triplets refers to an object or a concept in the real world. Hereinafter, we define entity $i$ in text $\boldsymbol{I}$ as $en_i$, which consists of a subset of words in $\boldsymbol{I}$. For example, "New York City" and "the United States" are two entities consisting of three words each in the text "New York City is a beautiful city, which belongs to the United States". Various methods, such as named entity recognition (NER) models, can be used to extract the entities in text $\boldsymbol{I}$. Given a pair of extracted entities $(en_i, en_j)$, the semantic extractor has to extract the relation $r_{ij}$ between them. For example, the relation between the entity "New York City" and the entity "the United States" can be denoted by "belongs to". Based on the extracted entities and relations, the triplets of text $\boldsymbol{I}$ can be expressed as

$\boldsymbol{A} = [\boldsymbol{a}_1, \ldots, \boldsymbol{a}_k, \ldots, \boldsymbol{a}_K]$,   (2)

where $\boldsymbol{a}_k = (en_{k,i}, r_{k,ij}, en_{k,j})$ is the $k$-th triplet and $K$ is the total number of triplets extracted from $\boldsymbol{I}$. Note that the relations are directional, and hence we have $r_{k,ij} \neq r_{k,ji}$. The triplet-based representation can effectively remove a significant amount of semantic redundancy from the original text, and thus the data size of the extracted triplets is smaller than that of the original text $\boldsymbol{I}$. The reduction of data size is advantageous for applications that require efficient transmission and understanding of large amounts of text-based data. It is worth noting that the semantics of the original text is extracted in an explainable manner. This represents a fundamental distinction between our work and existing feature-based semantic representations.
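For illustration, the triplet set $\boldsymbol{A}$ of Eq. (2) can be held as a plain list of (head entity, relation, tail entity) tuples. The sketch below (names are illustrative; strings are taken from the running example) also makes the directionality $r_{k,ij} \neq r_{k,ji}$ explicit.

```python
from typing import List, Tuple

# Sketch of Eq. (2): a triplet a_k = (head entity en_i, relation r_ij, tail entity en_j).
Triplet = Tuple[str, str, str]

# Triplet extracted from the running example sentence.
A: List[Triplet] = [
    ("New York City", "belongs to", "the United States"),
]

def reverse(a: Triplet) -> Triplet:
    """Swap head and tail entities. Relations are directional, so the
    reversed triplet is NOT semantically equivalent to the original."""
    en_i, r, en_j = a
    return (en_j, r, en_i)
```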

Different from the conventional communication system, in which bits or symbols are treated equally, triplets may have different semantic importance for accomplishing tasks, and there may still be some redundancy in the extracted triplets [25]. Therefore, we can further filter the extracted triplets according to their contribution to the downstream task. By retaining only the essential triplets under the guidance of task $\boldsymbol{T}$, the task-irrelevant information is eliminated, which can be especially valuable in scenarios where data transmission is limited by bandwidth or other constraints. The semantic filtering process can be expressed as

$\boldsymbol{X} = C_{\boldsymbol{T}}(\boldsymbol{A})$,   (3)

where $C_{\boldsymbol{T}}(\cdot)$ denotes the semantic filtering function, which highly depends on the final task $\boldsymbol{T}$. The term $\boldsymbol{X}$ is the set of task-oriented triplets after filtering, which is a subset of the original triplets $\boldsymbol{A}$, i.e., $\boldsymbol{X} \subset \boldsymbol{A}$. Obviously, semantic filtering offers two major benefits by removing semantic redundancy. First, it reduces the amount of transmitted data and consequently reduces the transmission delay and the demand for bandwidth. Second, it lessens the transmission energy consumption, which is significant for lightweight devices.

The filtered triplets are encoded via a semantic encoder, which can further compress the transmitted data. The implementation details of the semantic encoder are closely related to the downstream task $\boldsymbol{T}$. The encoded semantics can be denoted by

$\boldsymbol{M} = E_{\boldsymbol{\alpha}_{\boldsymbol{T}}}(\boldsymbol{X})$,   (4)

where $E_{\boldsymbol{\alpha}_{\boldsymbol{T}}}(\cdot)$ denotes the semantic encoder with parameter set $\boldsymbol{\alpha}_{\boldsymbol{T}}$ for accomplishing task $\boldsymbol{T}$. The above semantic representation and semantic encoder serve a purpose akin to traditional source coding, which involves compressing the source to enhance wireless communication efficiency.

Next, the features are encoded by the channel encoder to generate symbols for transmission, which can be denoted by

$\boldsymbol{S} = Q_{\boldsymbol{\sigma}}(\boldsymbol{M})$,   (5)

where $Q_{\boldsymbol{\sigma}}(\cdot)$ denotes the channel encoder network with parameter set $\boldsymbol{\sigma}$.

Then, the encoded symbols are transmitted via a wireless channel, and the received signal is expressed as

$\boldsymbol{Y} = h\boldsymbol{S} + \boldsymbol{n}$,   (6)

where $h$ is the channel gain and $\boldsymbol{n}$ is the additive white Gaussian noise (AWGN) sampled from $\mathcal{CN}(0, \boldsymbol{\sigma}^2\boldsymbol{I})$. For end-to-end (E2E) training of the semantic transceiver, the channel must allow the backpropagation of gradients, and thus the physical channel in this work is modeled as a non-trainable fully connected layer, in line with [9, 10, 11], to bolster robustness.
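To make Eq. (6) concrete, a minimal NumPy sketch of the fading-plus-AWGN channel is given below. In the actual system this is realized as a non-trainable layer inside the network so that gradients can flow through it; the SNR parameterization and function name here are illustrative assumptions.

```python
import numpy as np

def awgn_channel(S: np.ndarray, h: complex, snr_db: float, rng=None) -> np.ndarray:
    """Apply flat fading h and complex AWGN to symbols S, as in Eq. (6).

    Illustrative sketch only; noise power is derived from the requested SNR
    relative to the measured signal power.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    signal_power = np.mean(np.abs(S) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    # circularly symmetric complex Gaussian noise n ~ CN(0, noise_power)
    n = np.sqrt(noise_power / 2) * (rng.standard_normal(S.shape)
                                    + 1j * rng.standard_normal(S.shape))
    return h * S + n
```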

As shown in Fig. 2, the receiver includes a channel decoder and a semantic decoder to recover the transmitted semantics and complete the final task. The received symbols are decoded to recover semantics via the channel decoder, which can be expressed as

$\boldsymbol{M}^{\prime} = Q_{\boldsymbol{\chi}}^{-1}(\boldsymbol{Y})$,   (7)

where $Q_{\boldsymbol{\chi}}^{-1}(\cdot)$ denotes the channel decoder with parameter set $\boldsymbol{\chi}$.

Finally, the receiver inputs the recovered semantics $\boldsymbol{M}^{\prime}$ into the semantic decoder to complete the intelligent tasks. Specifically, the output is

$\boldsymbol{p} = E_{\boldsymbol{\mu}_{\boldsymbol{T}}}^{-1}(\boldsymbol{M}^{\prime})$,   (8)

where $\boldsymbol{p}$ is the task result, which will be returned to the transmitter, and $E_{\boldsymbol{\mu}_{\boldsymbol{T}}}^{-1}(\cdot)$ denotes the semantic decoder with parameter set $\boldsymbol{\mu}_{\boldsymbol{T}}$. Notably, the entire semantic communication network can be trained offline and subsequently deployed online, resulting in substantial resource savings for devices.

As stated in [26], accurate and efficient recognition and extraction of semantic information, including identifying different types of entities and their potential relationships, is crucial for effective semantic communication. However, the effective representation of semantic information in TESC poses two primary challenges. One is effectively extracting the triplets from the input text. This requires techniques that can identify and disambiguate the semantic relationships between words and phrases in the text, as well as handle the inherent ambiguity in natural language. Another challenge is filtering the extracted triplets to ensure that they are relevant and informative for the downstream task. Different downstream tasks may require different levels of granularity and specificity in the extracted triplets, and it can be challenging to identify the most useful and relevant triplets for a given task. This requires techniques that can filter out irrelevant or noisy triplets based on the downstream task. In the next subsection, we will present the proposed semantic representation method in detail.

Figure 3: The process of the proposed semantic extraction method.

II-B Triplet-Based Semantic Representation Method

In this subsection, we delve into the proposed triplet-based semantic representation method, which comprises two key components: semantic extraction and semantic filtering. First, we introduce the syntactic dependency analysis-based semantic extraction method. Then, we elaborate on the details of the semantic filtering method under task guidance.

II-B1 Semantic Extraction Method

The proposed semantic extraction method, depicted in Fig. 3, leverages both an established out-of-the-box triplet extraction tool and a novel complementary triplet extraction method predicated on syntactic dependency analysis. In particular, the out-of-the-box triplet extraction tool used in this work is the Open Information Extraction (OpenIE) annotator [27]. However, existing triplet extraction methods still exhibit moderate performance, and this limitation could potentially become the performance bottleneck of semantic communication. To overcome this problem, we further propose a complementary triplet extraction method based on syntactic dependency analysis to supplement the missing semantics. Syntactic dependency analysis offers a comprehensive depiction of the dependency relationships between linguistic elements at the sentence level, making it particularly advantageous for extracting implicit and abstract information. For example, OpenIE can only extract (a bird, flies in, sky) from the text "A red bird flies in sky", while the proposed complementary triplet extraction method can additionally extract the more specific triplet (bird, is, red).

The complementary triplet extraction method can be divided into two phases: syntactic dependency analysis ($\boldsymbol{S}_{\rm d}$) and rule-based triplet extraction ($\boldsymbol{E}_{\rm r}$), as shown in the lower part of Fig. 3. In the first phase, we employ spaCy to deconstruct the sentence syntactically, outputting a tree whose branches represent the syntactic dependencies. spaCy is a powerful open-source natural language processing (NLP) library renowned for its efficiency and accuracy in linguistic analysis. Mathematically, this can be represented as

$\boldsymbol{A}_{\rm d} = \boldsymbol{S}_{\rm d}(\boldsymbol{I})$,   (9)

where $\boldsymbol{I}$ denotes the input text, and $\boldsymbol{A}_{\rm d}$ represents the dependency tree. The nodes of $\boldsymbol{A}_{\rm d}$ represent the words in the sentence, and the edges represent the syntactic relationships between the words.

In the rule-based triplet extraction phase, we apply a set of predefined rules to $\boldsymbol{A}_{\rm d}$ to extract triplets $\boldsymbol{A}_{\rm c}$, which can be denoted by

$\boldsymbol{A}_{\rm c} = \boldsymbol{E}_{\rm r}(\boldsymbol{A}_{\rm d})$,   (10)

where $\boldsymbol{A}_{\rm c}$ is the set of complementary triplets.

These rules are designed to capture various sentence structures, including, but not limited to, the classic subject-verb-object and subject-link verb-predicative structures. For instance, in the case of a subject-verb-object structure, we identify all the verbs in the syntactic dependency tree and assign them as the relationship in the extracted triplets. The subject node and object node connected to a specific verb are then labeled as the head entity and tail entity of the triplet, respectively. Additionally, we expand the sub-tree of the extracted node and include all its modifiers, which can be abstract and non-generic, in the corresponding triplets to ensure semantic completeness.

Finally, the extracted triplets of the two parts will be combined and the duplicate triplets will be sifted out. Based on the proposed method, the semantics of the raw text, including the abstract and non-generic information, can be fully represented.
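To make the rule-based phase concrete, the sketch below implements a simplified version of the subject-verb-object and adjectival-modifier rules over a spaCy-style dependency parse. The parse here is hand-built, the rule set is far smaller than the one actually used, and all function and type names are illustrative.

```python
from typing import List, Tuple

# A token is (text, dependency label, head index), following spaCy conventions.
Token = Tuple[str, str, int]

def extract_triplets(tokens: List[Token]) -> List[Tuple[str, str, str]]:
    """Toy rule-based extraction E_r over a dependency parse A_d."""
    triplets = []
    for i, (word, dep, _head) in enumerate(tokens):
        if dep != "ROOT":
            continue
        # subject-verb-object rule: the root verb becomes the relation
        subj = next((t for t, d, h in tokens if d == "nsubj" and h == i), None)
        obj = None
        for t, d, h in tokens:
            if d in ("dobj", "obj") and h == i:
                obj = t
            elif d == "pobj" and tokens[h][1] == "prep" and tokens[h][2] == i:
                obj = t  # object reached through a preposition attached to the verb
        if subj and obj:
            triplets.append((subj, word, obj))
    # modifier rule: adjectival modifiers yield triplets such as (bird, is, red)
    for word, dep, head in tokens:
        if dep == "amod":
            triplets.append((tokens[head][0], "is", word))
    return triplets
```

On the paper's example "A red bird flies in sky", these two rules recover both the main triplet and the modifier triplet that OpenIE misses.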

II-B2 Semantic Filtering Method

During the triplet extraction process, redundant triplets that are either semantically identical or irrelevant to the task may be generated, presenting a challenge to communication efficiency. This redundancy can lead to an extensive quantity of extracted triplets. Leveraging the explainability of triplets, we propose a two-step explicit semantic filtering method to address this issue. Table I presents an example of how the proposed two-step semantic filtering method works.

In the first step, we apply a unique-relationship filter, $\boldsymbol{F}_{\rm u}$, to remove semantically redundant triplets. This approach is grounded in the principle that the relationship between two entities is unique [28]. Specifically, we begin with an empty set initialized for entity pairs. As we extract each triplet, we examine whether the pair of its head entity and tail entity is already included in the set. If this specific entity pair is found in the set, we discard the corresponding triplet. Conversely, if the pair is novel, we incorporate it into our list of triplets. This preliminary processing step helps to significantly reduce the duplicate semantics transmitted, which can be denoted by

$\boldsymbol{X}_{\rm u} = \boldsymbol{F}_{\rm u}(\boldsymbol{A})$,   (11)

where $\boldsymbol{X}_{\rm u}$ is the set of remaining triplets after filtering, and $\boldsymbol{A}$ is the set of originally extracted triplets. As depicted in Table I, once the triplet ("China", "capital city", "Beijing") is extracted, the entity pair ("China", "Beijing") is added to the entity pair set. Hence, when the triplet ("China", "contain", "Beijing") appears, it will be discarded, as the entity pair ("China", "Beijing") already exists in the set.
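The first filtering step of Eq. (11) can be sketched in a few lines (the function name is illustrative; the input follows the Table I example):

```python
def unique_relationship_filter(triplets):
    """Step 1 of semantic filtering (F_u): keep only the first triplet seen
    for each (head entity, tail entity) pair, following the principle that
    the relationship between two entities is unique."""
    seen_pairs = set()
    kept = []
    for head, rel, tail in triplets:
        if (head, tail) not in seen_pairs:
            seen_pairs.add((head, tail))
            kept.append((head, rel, tail))
    return kept
```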

In the second step, we employ a task-specific filter, $\boldsymbol{F}_{\rm t}$, informed by the knowledge base, to discard triplets irrelevant to the task, which can be represented as

$\boldsymbol{X}_{\rm t} = \boldsymbol{F}_{\rm t}(\boldsymbol{X}_{\rm u})$,   (12)

where $\boldsymbol{X}_{\rm t}$ is the final set of transmitted semantic triplets. The knowledge-based semantic filtering step is closely tied to the task at hand, which we can illustrate through the sentiment analysis and QA tasks. In sentiment analysis, we can leverage the knowledge that triplets with longer entities and richer relationships tend to include more verbs or adjectives expressing sentiment. These triplets are more valuable for analyzing the sentiment of the original text. Therefore, after filtering redundant triplets in the first step, there is a preference for retaining triplets with longer entities and richer relationships. In addition, for the QA task, we possess prior knowledge that questions and answers typically involve specific types of entities, such as position, size, time, etc. Thus, unmatched or irrelevant triplets have a higher likelihood of being filtered out. Mathematically, we employ a selection score $S_{\rm QA}(\boldsymbol{a}_i)$ calculated as follows

$S_{\rm QA}(\boldsymbol{a}_i) = \sum_{t \in \mathcal{T}} \delta(\boldsymbol{a}_i, t)$,   (13)

where $\mathcal{T}$ is an entity set of a specific type, and $\delta(\cdot)$ is an indicator function that returns 1 if the triplet $\boldsymbol{a}_i$ contains an entity belonging to $\mathcal{T}$ and 0 otherwise. Triplets with a selection score of 0 will be filtered out, while the others will be retained. This scoring mechanism ensures that triplets directly pertinent to the question's context are prioritized for retention. As depicted in step 2 of Table I, when the question is about a country, the selection score of the triplet ("Bob", "born in", "Beijing") is 0 and it will be removed.
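A minimal sketch of the selection score of Eq. (13) and the resulting task-specific filter, assuming exact entity membership as the indicator function (function names and the entity set are illustrative):

```python
def qa_selection_score(triplet, entity_set):
    """Selection score of Eq. (13): counts the task-relevant entities of
    type set T that appear in the triplet (membership used as delta)."""
    head, _rel, tail = triplet
    return sum(1 for t in entity_set if t in (head, tail))

def task_specific_filter(triplets, entity_set):
    """Step 2 of semantic filtering (F_t): drop triplets scoring 0."""
    return [a for a in triplets if qa_selection_score(a, entity_set) > 0]
```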

The semantic filtering method allows for a nuanced and precise refinement of data, tailored to the specific requirements of intelligent tasks, thereby ensuring that the TESC scheme not only minimizes data transmission but does so without compromising the integrity and applicability of the retained information. Moreover, the experiments conducted in Section IV substantiate the effectiveness of the proposed semantic filtering method.

TABLE I: Example of Semantic Extraction and Semantic Filtering.

Original text:
Beijing is the capital city of China, and Bob was born in there.

Extracted results:
Subject | Relation     | Object
China   | capital city | Beijing
China   | contain      | Beijing
Bob     | born in      | Beijing

Filtered results (Step 1):
Subject | Relation     | Object
China   | capital city | Beijing
Bob     | born in      | Beijing

Filtered results (Step 2):
Subject | Relation     | Object
China   | capital city | Beijing

III Application of TESC Scheme in Sentiment Analysis and Question Answering

In this section, we offer a detailed exploration of how our proposed TESC scheme is applied across various text tasks, with a particular focus on sentiment analysis [29] and QA [30]. These tasks are chosen for their widespread importance in NLP and their unique attributes, which can effectively showcase the superiority of the proposed scheme [31]. In a smart home scenario, for example, sentiment analysis can enhance the interaction between users and smart home devices by interpreting emotional states, while question answering systems can facilitate seamless dialogue with various IoT devices, thus enriching the user experience. The TESC scheme can be easily extended to other intelligent tasks by redesigning the semantic codec. Additionally, all models can be trained in the cloud and then broadcast to users, making the implementation process more efficient and scalable.

III-A Sentiment Analysis

In this subsection, we focus on sentiment analysis-oriented TESC. Sentiment analysis is a technique used to analyze subjective texts with various emotions, such as positive or negative, to determine the text's opinions, preferences, and emotional tendencies. Sentiment analysis can be divided into binary classification and multi-classification tasks, with the latter providing a finer granularity in the division of emotions. For the scope of this paper, we focus on binary sentiment classification. However, it is worth noting that the proposed TESC scheme can be easily applied to multi-classification tasks as well.

Figure 4: The process of sentiment analysis-oriented TESC.

The proposed sentiment analysis-oriented TESC is shown in Fig. 4. It should be noted that since channel coding and decoding are not the primary focus of this work, they are omitted in the figure. First, we extract the semantics of the raw text $\boldsymbol{I}$ to obtain the triplets $\boldsymbol{X}$, based on the semantic extraction and filtering methods proposed in Sec. II. The triplets are separated by commas and input to the encoder network.

Then, we encode the triplets $\boldsymbol{X}$ into the embedding $\boldsymbol{E}$, which can be represented as

\boldsymbol{E} = \boldsymbol{E}_T + \boldsymbol{E}_P + \boldsymbol{E}_S, \qquad (14)

where $\boldsymbol{E}_T$ is the token embedding, $\boldsymbol{E}_P$ is the positional embedding, and $\boldsymbol{E}_S$ is the segmentation embedding. The token embedding represents the index of the token in the vocabulary, which is built manually in advance. The positional embedding corresponds to the position of the token in the sentence. The segmentation embedding distinguishes which sentence the token belongs to, which is useful when there are multiple input sentences.
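The summation in Eq. (14) can be sketched as a small PyTorch module; the vocabulary size, maximum length, and segment count below are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class TripletEmbedding(nn.Module):
    """Sum of token, positional, and segment embeddings, as in Eq. (14).

    vocab_size, max_len, and num_segments are illustrative values,
    not taken from the paper.
    """
    def __init__(self, vocab_size=30522, d_model=768, max_len=512, num_segments=2):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)    # E_T
        self.pos = nn.Embedding(max_len, d_model)       # E_P
        self.seg = nn.Embedding(num_segments, d_model)  # E_S

    def forward(self, token_ids, segment_ids):
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        # broadcasting adds the (seq_len, d_model) positional table to each batch
        return self.tok(token_ids) + self.pos(positions) + self.seg(segment_ids)

emb = TripletEmbedding()
tokens = torch.randint(0, 30522, (1, 16))           # one sequence of 16 token ids
segments = torch.zeros(1, 16, dtype=torch.long)     # all tokens in segment 0
E = emb(tokens, segments)                           # E of Eq. (14)
print(E.shape)
```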

Upon obtaining the embedding $\boldsymbol{E}$, we use Transformer encoder layers [32] as the semantic encoder to further compress the data for transmission. The semantic encoder incorporates a multi-head attention mechanism, allowing it to merge each input embedding with contextual information and thereby capture a more comprehensive range of semantic details. Within each attention head, self-attention compares every piece of the input with every other piece, so the model can weigh the importance of each part of the input relative to the rest; by scaling and normalizing these attention scores, the model determines which parts of the input are most relevant in a given context. The output of the Transformer encoder layers is a semantically enriched representation $\boldsymbol{M}$, which can be expressed as

\boldsymbol{M} = E_{\boldsymbol{\alpha}_T}(\boldsymbol{X} \,|\, T=\mathrm{SA}) = \mathrm{Trans}(\boldsymbol{E}), \qquad (15)

where $\mathrm{Trans}(\cdot)$ denotes the Transformer encoders, and $T=\mathrm{SA}$ indicates that the oriented task is sentiment analysis. The encoded semantics $\boldsymbol{M}$ are further encoded by the channel encoder and transmitted over the wireless channel.
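As a concrete sketch, the semantic encoder of Eq. (15) can be instantiated with PyTorch's stock Transformer encoder; the layer count and dimensions here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Semantic encoder of Eq. (15): a stack of Transformer encoder layers
# applied to the triplet embeddings E. Layer count (4) is illustrative.
d_model, n_heads, n_layers = 768, 12, 4
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   batch_first=True)
semantic_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

E = torch.randn(1, 16, d_model)   # embeddings from Eq. (14)
M = semantic_encoder(E)           # encoded semantics M = Trans(E)
print(M.shape)                    # same shape as E
```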

Considering the sentiment analysis task here, we use a classifier as the semantic decoder, which can be realized via a multi-layer perceptron (MLP). Thus, the sentiment analysis result can be obtained via

\boldsymbol{p}_l = \mathrm{MLP}(\boldsymbol{M}^{\prime}), \qquad (16)

where $\boldsymbol{M}^{\prime}$ is the received semantics and $\boldsymbol{p}_l$ is the predicted probability vector of the $l$-th sample.

The cross-entropy loss function is adopted in this work, which can be expressed as

\mathcal{L}_{\mathrm{SA}} = -\frac{1}{N}\sum_{l=1}^{N}\sum_{m=1}^{M} q_{lm}\log\left(p_{lm}\right), \qquad (17)

where $N$ is the number of samples and $M$ is the number of classes. $q_{lm}$ and $p_{lm}$ are the $m$-th elements of the label vector $\boldsymbol{q}_l$ and the predicted vector $\boldsymbol{p}_l$, respectively.
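The MLP decoder of Eq. (16) and the cross-entropy loss of Eq. (17) can be sketched as follows; the 192-unit hidden layer follows the layout of Table III, while the mean-pooling over tokens is an assumption.

```python
import torch
import torch.nn as nn

# Semantic decoder for sentiment analysis (Eq. (16)) trained with
# cross-entropy (Eq. (17)). Dense(192) -> Dense(2) follows Table III;
# the mean-pooling over token positions is an assumption.
d_model, n_classes = 768, 2
decoder = nn.Sequential(nn.Linear(d_model, 192), nn.ReLU(),
                        nn.Linear(192, n_classes))

M_recv = torch.randn(8, 16, d_model)          # batch of received semantics M'
logits = decoder(M_recv.mean(dim=1))          # pool tokens, then classify
labels = torch.randint(0, n_classes, (8,))
loss = nn.CrossEntropyLoss()(logits, labels)  # Eq. (17), averaged over the batch
print(logits.shape, float(loss))
```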

III-B Question Answering

QA is a critical component of many human-computer interaction systems and is used in fields such as information retrieval and knowledge management. However, in a QA-oriented communication system, the transmitter needs to transmit a large amount of original text to the receiver to search for the answer. This results in a significant transmission burden and may degrade task performance in resource-constrained scenarios. To address this issue, we apply the proposed TESC scheme to the QA task to reduce the transmission burden and improve task performance.

Refer to caption
Figure 5: The structure of STM model.

The proposed QA-oriented TESC comprises semantic representation (described in Sec. III), a semantic encoder, and a semantic decoder. In this subsection, we focus on the semantic codec used specifically for QA-oriented TESC. Given the limited computation and storage resources of the transmitter, it is necessary to simplify the structure of the semantic encoder to strike a balance between the amount of transmitted data and the resource consumption of the transmitter. To this end, we use a simple semantic encoder consisting primarily of an embedding layer that maps the triplets into embedding vectors and can be optimized during end-to-end training. This design allows the proposed semantic communication scheme to be easily deployed on various lightweight devices (e.g., IoT devices). The encoded semantics $\boldsymbol{E}$ can be denoted by

\boldsymbol{E} = E_{\boldsymbol{\alpha}_T}(\boldsymbol{X} \,|\, T=\mathrm{QA}) = \mathrm{EMB}(\boldsymbol{X}), \qquad (18)

where $\mathrm{EMB}(\cdot)$ denotes the embedding layer and $\boldsymbol{X}$ is the transmitted triplets. $\boldsymbol{E} = \{\boldsymbol{e}_1, \boldsymbol{e}_2, \ldots, \boldsymbol{e}_l, \ldots, \boldsymbol{e}_L\}$, where $L$ is the number of triplets in $\boldsymbol{X}$ and $\boldsymbol{e}_l$ is the encoded semantics corresponding to the $l$-th triplet.

Then, the encoded semantics $\boldsymbol{E}$ are further encoded by the channel encoder and transmitted over the wireless channel. After obtaining the received signal, the receiver decodes it via the channel decoder to recover the semantics $\boldsymbol{E}^{\prime} = \{\boldsymbol{e}^{\prime}_1, \boldsymbol{e}^{\prime}_2, \ldots, \boldsymbol{e}^{\prime}_l, \ldots, \boldsymbol{e}^{\prime}_L\}$.
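The lightweight QA encoder of Eq. (18) reduces to a single embedding lookup; the 64-dimensional output follows Table III, while the vocabulary size below is a placeholder.

```python
import torch
import torch.nn as nn

# Lightweight QA semantic encoder of Eq. (18): a single embedding layer
# mapping triplet token ids to 64-dim vectors (Table III). The vocabulary
# size is an illustrative placeholder, not the paper's value.
vocab_size, d_emb = 200, 64
EMB = nn.Embedding(vocab_size, d_emb)

X = torch.randint(0, vocab_size, (1, 10))  # ids for L = 10 triplets
E = EMB(X)                                  # one 64-dim vector per triplet
print(E.shape)
```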

The semantic decoder utilizes a self-attentive associative memory (SAM)-based two-memory model (STM)[33], which has demonstrated high performance on memory and inference tasks. Inspired by the item and relational memory systems in the human brain, STM separates the relational memory from the item memory. To maintain a detailed representation of the item relationships, the relational memory has a higher-order capacity than item memory, storing multiple relationships represented by matrices instead of scalar or vector quantities. Additionally, the two separate memories interact with each other to enhance their representations. SAM is applied in STM to transform a second-order item memory into a third-order relational representation.

The basic structure of the STM model is shown in Fig. 5. $\mathcal{M}_t^i \in \mathbb{R}^{d\times d}$ is a memory unit for items and $\mathcal{M}_t^r \in \mathbb{R}^{n_q\times d\times d}$ is for relationships. From a high-level view, at the $t$-th step, we input $\boldsymbol{e}^{\prime}_t$ and the previous state of the memories $\{\mathcal{M}_{t-1}^i, \mathcal{M}_{t-1}^r\}$ to produce the output $\boldsymbol{a}_t$ and the new state of the memories $\{\mathcal{M}_t^i, \mathcal{M}_t^r\}$.

At each step, the item memory $\mathcal{M}_t^i$ is updated with the new input $\boldsymbol{e}^{\prime}_t$ using gating mechanisms. For an input $\boldsymbol{e}^{\prime}_t$, we update the item memory as

\mathcal{M}_t^i = F_t\left(\mathcal{M}_{t-1}^i, \boldsymbol{e}^{\prime}_t\right) \odot \mathcal{M}_{t-1}^i + G_t\left(\mathcal{M}_{t-1}^i, \boldsymbol{e}^{\prime}_t\right) \odot E_t, \qquad (19)

where $E_t = f_1(\boldsymbol{e}^{\prime}_t) \otimes f_2(\boldsymbol{e}^{\prime}_t)$, $f_1$ and $f_2$ are feed-forward neural networks that output $d$-dimensional vectors, and $F_t$ and $G_t$ are the forget and input gates, respectively.
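The tensor shapes in the gated update of Eq. (19) can be checked with a minimal sketch; the gates $F_t$, $G_t$ are replaced by random placeholders rather than the trained STM networks.

```python
import torch

# Shape sketch of the item-memory update of Eq. (19). The forget/input
# gates F_t, G_t are random placeholders here, not learned networks.
d = 8
M_item = torch.zeros(d, d)              # M_{t-1}^i
e_t = torch.randn(d)                    # received triplet semantics e'_t

f1, f2 = torch.nn.Linear(d, d), torch.nn.Linear(d, d)
E_t = torch.outer(f1(e_t), f2(e_t))     # E_t = f1(e'_t) ⊗ f2(e'_t), a d×d matrix
F_t = torch.sigmoid(torch.randn(d, d))  # placeholder forget gate
G_t = torch.sigmoid(torch.randn(d, d))  # placeholder input gate
M_item = F_t * M_item + G_t * E_t       # element-wise (⊙) gated update
print(M_item.shape)
```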

The associative memories used to store relationships in $\mathcal{M}_t^r$ allow us to reconstruct previously stored items by reading the relational memory. This involves a two-step contraction process, where $\boldsymbol{v}_t^r$ is computed as follows

\boldsymbol{v}_t^r = \operatorname{softmax}\left(f_3\left(\boldsymbol{e}^{\prime}_t\right)^{\top}\right) \mathcal{M}_{t-1}^r f_2\left(\boldsymbol{e}^{\prime}_t\right), \qquad (20)

where $f_3$ is a feed-forward neural network that outputs an $n_q$-dimensional vector.
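The two-step contraction of Eq. (20) amounts to two tensor contractions over the third-order relational memory, which a single `einsum` makes explicit; $f_2$ and $f_3$ are stand-in linear maps here.

```python
import torch

# Two-step contraction of Eq. (20): read an item vector v_t^r out of the
# third-order relational memory using attention weights from f_3(e'_t).
# f_2 and f_3 are stand-in linear maps, not the trained networks.
d, n_q = 8, 4
M_rel = torch.randn(n_q, d, d)          # M_{t-1}^r
e_t = torch.randn(d)
f2, f3 = torch.nn.Linear(d, d), torch.nn.Linear(d, n_q)

attn = torch.softmax(f3(e_t), dim=0)    # softmax over the n_q relational slots
# contract over the n_q slots (q) and the last dimension (j), leaving i
v_t = torch.einsum('q,qij,j->i', attn, M_rel, f2(e_t))
print(v_t.shape)
```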

The item memory, together with the read-out from the relational memory, is passed to the SAM module to generate a new relational representation used to update the relational memory $\mathcal{M}_t^r$, as follows

\mathcal{M}_t^r = \mathcal{M}_{t-1}^r + \alpha_1 \operatorname{SAM}\left(\mathcal{M}_t^i + \alpha_2 \boldsymbol{v}_t^r \otimes f_2\left(\boldsymbol{e}^{\prime}_t\right)\right), \qquad (21)

where $\alpha_1$ and $\alpha_2$ are hyper-parameters. The input to SAM is a combination of the current item memory $\mathcal{M}_t^i$ and the association between the item $\boldsymbol{v}_t^r$ extracted from the previous relational memory and the current input data $\boldsymbol{e}^{\prime}_t$.

The relational memory transfers its knowledge to the item memory by using a high-dimensional transformation

\mathcal{M}_t^i = \mathcal{M}_t^i + \alpha_3 \mathcal{B}_1 \circ \mathcal{V}_f \circ \mathcal{M}_t^r, \qquad (22)

where $\mathcal{V}_f$ is a function that flattens the first two dimensions of its input tensor, $\mathcal{B}_1$ is a feed-forward neural network that maps $\mathbb{R}^{(n_q d)\times d} \rightarrow \mathbb{R}^{d\times d}$, and $\alpha_3$ is a hyper-parameter.

As for the answer to the question, at each time step we distill the relational memory into an output answer vector $\boldsymbol{a}_t \in \mathbb{R}^{n_o}$ by alternately flattening and applying high-dimensional transformations as follows

\boldsymbol{a}_t = \mathcal{B}_3 \circ \mathcal{V}_l \circ \mathcal{B}_2 \circ \mathcal{V}_l \circ \mathcal{M}_t^r, \qquad (23)

where $\mathcal{V}_l$ is a function that flattens the last two dimensions of its input tensor. $\mathcal{B}_2$ and $\mathcal{B}_3$ are two feed-forward neural networks that map $\mathbb{R}^{n_q\times(d \cdot d)} \rightarrow \mathbb{R}^{n_q\times n_r}$ and $\mathbb{R}^{n_q n_r} \rightarrow \mathbb{R}^{n_o}$, respectively, where $n_r$ is a hyper-parameter.
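The flatten-and-transform chain of Eq. (23) can be sketched in a few lines; the sizes $n_r$ and $n_o$ below are illustrative, and $\mathcal{B}_2$, $\mathcal{B}_3$ are untrained linear maps.

```python
import torch
import torch.nn as nn

# Answer distillation of Eq. (23): alternately flatten the relational
# memory and apply the feed-forward maps B_2, B_3 to get the answer
# vector a_t. n_r and n_o are illustrative sizes.
d, n_q, n_r, n_o = 8, 4, 16, 10
M_rel = torch.randn(n_q, d, d)      # M_t^r

B2 = nn.Linear(d * d, n_r)          # R^{n_q x (d*d)} -> R^{n_q x n_r}
B3 = nn.Linear(n_q * n_r, n_o)      # R^{n_q*n_r}     -> R^{n_o}

h = B2(M_rel.reshape(n_q, d * d))   # V_l then B_2
a_t = B3(h.reshape(n_q * n_r))      # V_l then B_3
print(a_t.shape)
```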

IV Experiments

IV-A Experimental Settings

1) Datasets: The dataset adopted for sentiment analysis is the SST-2 dataset [34], which is widely used for predicting sentiment from movie reviews. The dataset adopted for QA is the bAbI dataset [35], which consists of 20 subtasks; each subtask has 1000 questions for training and 1000 for testing. Although the SST-2 and bAbI datasets are primarily used for sentiment analysis and QA, respectively, the underlying NLP capabilities are critical for intelligent IoT systems.

TABLE II: Training Parameters.
| Parameter     | Sentiment Analysis  | QA                  |
|---------------|---------------------|---------------------|
| Epochs        | 5                   | 300                 |
| Batch size    | 8                   | 128                 |
| Optimizer     | Adam                | Adam                |
| Learning rate | $1\times 10^{-5}$   | $6\times 10^{-4}$   |
| Dropout       | 0.3                 | 0.5                 |
TABLE III: Settings of TESC Network for Different Tasks.
| Task               | Module                | Layer Name           | Units          | Activation |
|--------------------|-----------------------|----------------------|----------------|------------|
| Sentiment Analysis | Transmitter (Encoder) | Embedding Layer      | 768            | Linear     |
|                    |                       | Transformer Encoders | 256 (12 heads) | Gelu       |
|                    |                       | Dense                | 192            | Relu       |
|                    | Channel               | AWGN/Rayleigh        | None           | None       |
|                    | Receiver (Decoder)    | Dense                | 192            | Relu       |
|                    |                       | Dense                | 2              | Softmax    |
| Question Answering | Transmitter (Encoder) | Embedding Layer      | 64             | Linear     |
|                    | Channel               | AWGN/Rayleigh        | None           | None       |
|                    | Receiver (Decoder)    | STM Model            | 64             | Relu       |
|                    |                       | Dense                | 90             | Relu       |
|                    |                       | Dense                | 177            | Softmax    |

2) Baselines: As baselines, we adopt a task-oriented joint source-channel coding system, a deep learning-based semantic communication system [9], and a traditional communication system with separate source and channel coding.

  • Error-free Transmission: The full, noiseless texts are delivered to the receiver, which serves as the upper bound. We label this as “Error_free” in the simulation figures.

  • Task-oriented DNN-based joint source-channel coding (JSCC): For the sentiment analysis task, the joint codec network consists of Bidirectional Long Short-Term Memory (BiLSTM) layers [36]. For the QA task, the joint codec network consists of an End-to-End Memory Network (E2EMN) [37]. With task-oriented JSCC, the receiver can directly execute the task from the received semantic features without reconstructing the original text. We label this benchmark as “DeepJSCC” in the simulation figures.

  • Deep semantic communication based on Transformer: The text is first transmitted via the DeepSC method [9] and reconstructed at the receiver. The reconstructed text is then input into the downstream task network to obtain the desired results. We label this benchmark as “DeepSC” in the simulation figures.

  • Traditional methods: With source and channel coding performed separately, we use Huffman coding for source coding, Reed-Solomon (RS) coding for channel coding, and 16-QAM for modulation, and then execute the task on the recovered text. We label this as “Huffman+RS” in the simulation figures. It is worth mentioning that these are not the only possible choices: the source coding could also use Lempel-Ziv or other coding methods, while the channel coding could use turbo, LDPC, or other coding methods.
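The source-coding step of the Huffman+RS baseline can be sketched with a standard heap-based Huffman construction; channel coding and modulation are omitted, and the sample sentence is only illustrative.

```python
import heapq
from collections import Counter

def huffman_code_lengths(text):
    """Per-symbol Huffman code lengths for `text` (a sketch of the source
    coding step of the Huffman+RS baseline; RS channel coding and 16-QAM
    modulation are omitted)."""
    freq = Counter(text)
    if len(freq) == 1:                     # degenerate single-symbol alphabet
        return {next(iter(freq)): 1}
    # heap entries: (weight, tiebreaker, {symbol: depth_so_far})
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)     # merge the two lightest subtrees;
        w2, _, b = heapq.heappop(heap)     # every contained symbol gains depth 1
        merged = {s: depth + 1 for s, depth in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

text = "the hallway is east of the bathroom"
lengths = huffman_code_lengths(text)
bits = sum(lengths[c] for c in text)       # total source-coded length in bits
print(bits, "vs", len(text) * 8, "bits uncoded")
```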

It is noteworthy that we have adopted 8-bit quantization for the TESC network, following the approach recommended in [10]. This choice enhances the compatibility of TESC with resource-constrained devices. Top-1 accuracy is used to measure performance. Experiments are performed on a computer running Ubuntu 16.04 with CUDA 11.0, using the PyTorch deep learning framework. The training parameters are summarized in Table II, and the settings of the TESC network structure for the different tasks can be found in Table III.
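A minimal sketch of symmetric 8-bit post-training quantization illustrates the storage saving behind TESC's small footprint; the exact quantization scheme of [10] may differ from this simplified version.

```python
import numpy as np

# Symmetric 8-bit post-training quantization of a weight tensor: a sketch
# of the 4x storage saving of int8 over fp32. The exact scheme of [10]
# may differ.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)    # fp32 weights

scale = np.abs(w).max() / 127.0                           # map to the int8 range
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_hat = w_q.astype(np.float32) * scale                    # dequantized view

print(w.nbytes // w_q.nbytes)                             # storage ratio (4x)
print(float(np.abs(w - w_hat).max()))                     # error below one step
```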

IV-B Experimental Results

Refer to caption
Figure 6: An example of the proposed task-oriented TESC scheme.

Fig. 6 shows an example of the proposed task-oriented TESC scheme. In Fig. 6, the transmitter needs to transmit a text, shown in Fig. 6(a), to the receiver to accomplish intelligent tasks. During the communication process, the transmitter first extracts the semantics of the original text, represented as triplets, as shown in Fig. 6(b); the entities and their corresponding relations are represented as nodes and edges, respectively. Based on the final task, the transmitter then filters out the redundant and task-irrelevant triplets to generate the transmitted task-relevant semantics. Fig. 6(c) illustrates the filtered results corresponding to various tasks. From Fig. 6(b) and Fig. 6(c), we can see that the selected triplets vary across tasks, since different tasks focus on different semantic information. This means that the proposed TESC scheme can discard task-irrelevant triplets to reduce the amount of transmitted data. At the receiver, the received semantics can be directly used to implement the intelligent task, as shown in Fig. 6(d).

Refer to caption
(a) AWGN Channel
Refer to caption
(b) Rayleigh Channel
Figure 7: Sentiment analysis accuracy versus SNRs for AWGN and Rayleigh fading channels.

Fig. 7 shows the relationship between sentiment analysis accuracy and signal-to-noise ratio (SNR) in both AWGN and Rayleigh fading channels. As shown in these figures, the accuracy increases with SNR and gradually converges to a certain threshold. This trend is attributable to higher SNR enhancing communication quality and reducing transmission errors, thus boosting task performance at the receiving end. Notably, the proposed TESC scheme consistently outperforms DeepJSCC across the entire SNR region. This superior performance is linked to TESC's ability to capture complete semantics through triplets, representing text semantics more accurately, and to compress semantics tailored to the specific task, effectively leveraging task-related prior information. It is also evident that both TESC and DeepJSCC surpass DeepSC and the traditional communication method, particularly in low-SNR environments, affirming the efficiency of task-oriented communication strategies. For the AWGN channel at 5 dB, the TESC scheme demonstrates a significant 80.5% improvement in accuracy over the traditional communication method. This is because deep learning-based methods take the channel condition into consideration during training and are therefore more robust to channel noise. When the SNR exceeds 15 dB, the traditional communication method attains optimal performance, since at high SNR levels traditional techniques achieve error-free transmission, ensuring peak accuracy.

Refer to caption
(a) AWGN Channel
Refer to caption
(b) Rayleigh Channel
Figure 8: QA accuracy versus SNRs for AWGN and Rayleigh fading channels.

The accuracy of the QA task versus SNR over different channels is depicted in Fig. 8. Echoing the trends observed in Fig. 7, Fig. 8 demonstrates that accuracy improves with increasing SNR, as higher SNR reduces transmission distortions. In addition, the proposed TESC scheme outperforms DeepJSCC, DeepSC, and the traditional communication method, approaching the upper bound in the high-SNR regime. Specifically, at an SNR of 5 dB in the Rayleigh channel, the TESC scheme shows a 7.5%, 20.6%, and 150% increase in accuracy compared to DeepJSCC, DeepSC, and the traditional method, respectively. This enhanced performance is attributed to TESC's advanced semantic extraction capabilities, which efficiently preserve the most relevant information for the QA task, even in challenging communication environments. Conversely, DeepSC and the traditional method exhibit poorer performance, likely due to the loss of task-relevant semantic information in the recovered text, leading to task failure. Moreover, the traditional method suffers from the cliff effect, which results in a sharp decrease in performance when the channel condition falls below a threshold. The findings from Figs. 7 and 8 lead to the conclusion that the TESC scheme not only excels in performance but also possesses broad applicability across various intelligent tasks.

Refer to caption
Figure 9: The number of transmitted symbols comparison between TESC and various baselines.

The numbers of transmitted symbols for the different methods and tasks are compared in Fig. 9. All of the displayed results are the average number of symbols transmitted per sentence, computed across the entire dataset. It is noteworthy that all of the deep learning-based methods achieve relatively low numbers of transmitted symbols, demonstrating the powerful compression capability of neural networks and highlighting the advantages of semantic communications. Furthermore, the figure clearly shows that the TESC scheme transmits the fewest symbols in the QA task, compared with DeepJSCC and DeepSC. This is attributed to the proposed semantic filtering approach, which efficiently discards redundant and irrelevant semantics, demonstrating the effectiveness of the proposed semantic representation method. In the sentiment analysis task, DeepSC attains the lowest number of transmitted symbols; however, this comes at the cost of increased computational complexity. Consequently, the proposed TESC scheme is particularly suitable for scenarios with limited communication resources.

TABLE IV: Effectiveness Evaluation of Semantic Filtering Method.
|                           | Average Number of Triplets | Average Number of Words | Performance Decay |
|---------------------------|----------------------------|-------------------------|-------------------|
| Before Semantic Filtering | 70.8                       | 517.3                   | -                 |
| After Semantic Filtering  | 16.9                       | 83.6                    | 0.4%              |

Table IV illustrates the effectiveness of the proposed semantic filtering method. The data shows that semantic filtering notably decreases the average number of triplets by as much as 76.1% and the total word count by 83.8%. Impressively, this reduction results in only a minor 0.4% dip in task performance, which is essentially negligible. This is attributed to the method's proficiency in eliminating triplets that are either redundant or irrelevant to the task at hand. Consequently, these results demonstrate that semantic filtering is highly effective in reducing the volume of transmitted data, thereby enhancing communication efficiency without compromising task performance.
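The reduction rates above follow directly from the averages reported in Table IV:

```python
# Reduction rates of Table IV, recomputed from the reported averages.
triplets_before, triplets_after = 70.8, 16.9
words_before, words_after = 517.3, 83.6

triplet_reduction = (triplets_before - triplets_after) / triplets_before
word_reduction = (words_before - words_after) / words_before
print(f"{triplet_reduction:.1%}, {word_reduction:.1%}")  # 76.1%, 83.8%
```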

TABLE V: Comparison Between the Proposed TESC and Other Communication Methods in Terms of FLOPs, Parameters, and Size.
| Method       | FLOPs             | Parameters | Size    |
|--------------|-------------------|------------|---------|
| TESC         | $4.6\times 10^7$  | 1,196,078  | 1.14 MB |
| DeepJSCC     | $4.8\times 10^7$  | 1,366,274  | 5.21 MB |
| DeepSC       | $8.3\times 10^7$  | 3,333,120  | 12.3 MB |
| Huffman + RS | $9.3\times 10^7$  | -          | -       |

Table V provides a comprehensive comparison between the QA-oriented TESC and the baseline communication methods, evaluating computational complexity, parameters, and storage size. As evidenced by the data in this table, all deep learning-based methods demonstrate superior computational efficiency compared to the traditional communication method, with TESC's complexity reduced by more than 50%. Furthermore, both TESC and DeepJSCC outperform DeepSC and the conventional method in terms of computational complexity. This distinction arises because the former are designed directly for intelligent tasks, while the latter require additional processing after text recovery, separating transmission from understanding. Notably, the proposed TESC scheme attains the lowest complexity among all the systems under consideration. Additionally, the proposed TESC scheme, requiring only 1.14 MB, achieves substantial space savings through 8-bit parameter quantization, in notable contrast to the existing methods, which rely on 32-bit parameters. In summary, these results collectively affirm the suitability of the proposed TESC scheme for resource-constrained devices.

TABLE VI: Part of the Results for the QA Task at 10 dB Over the Rayleigh Channel.

Original Text: The hallway is east of the bathroom. The bedroom is west of the bathroom.
Question: What is the bathroom east of? (Label: bedroom)
| Method       | Recovered Text                                                             | Task Result |
|--------------|----------------------------------------------------------------------------|-------------|
| TESC         | ——                                                                         | bedroom ✓   |
| DeepJSCC     | ——                                                                         | hallway ✗   |
| DeepSC       | the hallway is west of the bathroom. the bathroom is west of the bathroom. | hallway ✗   |
| Huffman + RS | The hallway is east of the bathroom. The bedroom is west of the bathroom.  | bedroom ✓   |

Original Text: The bedroom is west of the kitchen. The hallway is west of the bedroom.
Question: What is west of the kitchen? (Label: bedroom)
| Method       | Recovered Text                                                             | Task Result |
|--------------|----------------------------------------------------------------------------|-------------|
| TESC         | ——                                                                         | bedroom ✓   |
| DeepJSCC     | ——                                                                         | bedroom ✓   |
| DeepSC       | the bedroom is west of the kitchen. the bedroom is west of the hallway.    | bedroom ✓   |
| Huffman + RS | The bedroohils west of r saeechen. The hallway is west of the bedroom.     | garden ✗    |

Original Text: The bathroom is north of the garden. The hallway is north of the bathroom.
Question: What is north of the garden? (Label: bathroom)
| Method       | Recovered Text                                                             | Task Result |
|--------------|----------------------------------------------------------------------------|-------------|
| TESC         | ——                                                                         | bathroom ✓  |
| DeepJSCC     | ——                                                                         | bathroom ✓  |
| DeepSC       | the kitchen is south of the bathroom. the kitchen is south of the bedroom. | bedroom ✗   |
| Huffman + RS | The bathroom is north of the garden. Tdehallway is north of the bathrodi.  | office ✗    |

Some of the visualized results for the QA task are displayed in Table VI, which presents both the reconstructed text and the QA task results for the TESC scheme and the baseline methods over a Rayleigh channel at 10 dB. The results clearly demonstrate that the TESC scheme outperforms the others, accurately answering all questions without needing to reconstruct the original text. DeepJSCC ranks second to TESC, highlighting the advantage of task-oriented semantic communication, as it more effectively preserves the essential semantic information pertinent to the task despite not recovering the original text. While methods like DeepSC and the traditional communication approach can partially recover the original text, they often fail to capture key semantic details critical for the task, as evidenced by their performance in the QA task.

V Conclusion

In this paper, we have proposed TESC, an explainable task-oriented semantic communication scheme that leverages triplets to represent semantics, enabling the system to handle various text tasks. We have developed a novel semantic representation method that comprises semantic extraction and semantic filtering to transform texts into triplets. The semantic filtering process eliminates redundant or task-irrelevant semantics based on task-specific knowledge. To evaluate the effectiveness of the TESC scheme, we have applied it to sentiment analysis and question answering tasks. Simulation results demonstrate that the TESC scheme outperforms various benchmarks, particularly in low-SNR environments. Consequently, we believe that the TESC scheme is a promising candidate for future task-oriented semantic communication systems. Future extensions of this work include 1) exploring explainable semantic communication for computer vision tasks, and 2) developing a unified semantic representation method for multimodal data.

References

  • [1] C. Liu, C. Guo, S. Wang, Y. Li, and D. Hu, “Task-oriented semantic communication based on semantic triplets,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), Glasgow, United Kingdom, Mar. 2023, pp. 1–6.
  • [2] W. Saad, M. Bennis, and M. Chen, “A vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems,” IEEE Netw., vol. 34, no. 3, pp. 134–142, June 2020.
  • [3] M. Chen, D. Gündüz, K. Huang, W. Saad, M. Bennis, A. V. Feljan, and H. V. Poor, “Distributed Learning in Wireless Networks: Recent Progress and Future Challenges,” IEEE J. Sel. Areas Commun., vol. 39, no. 12, pp. 3579 – 3605, Dec. 2021.
  • [4] K. B. Letaief, W. Chen, Y. Shi, J. Zhang, and Y. A. Zhang, “The Roadmap to 6G: AI Empowered Wireless Networks,” IEEE Commun. Mag., vol. 57, no. 8, pp. 84–90, Aug. 2019.
  • [5] Z. Qin, X. Tao, J. Lu, and G. Y. Li, “Semantic communications: Principles and challenges,” arXiv preprint arXiv:2201.01389, Jan. 2022.
  • [6] W. Tong and G. Y. Li, “Nine challenges in artificial intelligence and wireless communications for 6G,” IEEE Wireless Commun., pp. 1–10, 2022.
  • [7] Y. Yang, C. Guo, F. Liu, C. Liu, L. Sun, Q. Sun, and J. Chen, “Semantic communications with artificial intelligence tasks: reducing bandwidth requirements and improving artificial intelligence task performance,” IEEE Ind. Electron. Mag., Early Access, May 2022.
  • [8] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication, Champaign, Il, USA: Univ. Illinois Press, 1949.
  • [9] H. Xie, Z. Qin, G. Y. Li, and B. Juang, “Deep Learning Enabled Semantic Communication Systems,” IEEE Trans. Signal Process., vol. 69, no. 1, pp. 2663–2675, Apr. 2021.
  • [10] H. Xie and Z. Qin, “A Lite Distributed Semantic Communication System for Internet of Things,” IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 142–153, Jan. 2021.
  • [11] E. Bourtsoulatze, D. Burth Kurka, and D. Gunduz, “Deep joint source-channel coding for wireless image transmission,” IEEE Trans. Cognit. Commun. Netw., vol. 5, no. 3, pp. 567–579, Sep. 2019.
  • [12] Z. Weng and Z. Qin, “Semantic communication systems for speech transmission,” IEEE J. Sel. Areas Commun., vol. 39, no. 8, pp. 2434–2444, 2021.
  • [13] M. Jankowski, D. Gündüz, and K. Mikolajczyk, “Wireless image retrieval at the edge,” IEEE J. Sel. Areas Commun., vol. 39, no. 1, pp. 89–100, 2021.
  • [14] C.-H. Lee, J.-W. Lin, P.-H. Chen, and Y.-C. Chang, “Deep learning-constructed joint transmission-recognition for Internet of Things,” IEEE Access, vol. 7, pp. 76547–76561, Jun. 2019.
  • [15] C. Liu, C. Guo, Y. Yang, and N. Jiang, “Adaptable semantic compression and resource allocation for task-oriented communications,” IEEE Trans. Cognit. Commun. Netw., Early Access, 2023.
  • [16] H. Wei, W. Ni, W. Xu, F. Wang, D. Niyato, and P. Zhang, “Federated semantic learning driven by information bottleneck for task-oriented communications,” IEEE Commun. Lett., 2023.
  • [17] H. Xie, Z. Qin, and G. Y. Li, “Task-oriented multi-user semantic communications for VQA,” IEEE Wireless Commun. Lett., Dec. 2021.
  • [18] E. Uysal, O. Kaya, A. Ephremides, J. Gross, M. Codreanu, P. Popovski, M. Assaad, G. Liva, A. Munari, and B. Soret, “Semantic communications in networked systems: A data significance perspective,” IEEE Netw., vol. 36, no. 4, pp. 233–240, 2022.
  • [19] M. Kountouris and N. Pappas, “Semantics-empowered communication for networked intelligent systems,” IEEE Commun. Mag., vol. 59, no. 6, pp. 96–102, 2021.
  • [20] S. Ma, W. Qiao, Y. Wu, H. Li, G. Shi, D. Gao, Y. Shi, S. Li, and N. Al-Dhahir, “Task-oriented explainable semantic communications,” IEEE Trans. Wireless Commun., Early Access, Apr. 2023.
  • [21] S. Jiang, Y. Liu, Y. Zhang, P. Luo, K. Cao, J. Xiong, H. Zhao, and J. Wei, “Reliable semantic communication system enabled by knowledge graph,” Entropy, vol. 22, no. 6, pp. 846, 2022.
  • [22] L. Hu, Y. Li, H. Zhang, L. Yuan, F. Zhou, and Q. Wu, “Robust semantic communication driven by knowledge graph,” in 9th International Conference on Internet of Things: Systems, Management and Security (IOTSMS), Milan, Italy, Nov. 2022, pp. 1–5.
  • [23] Y. Wang, M. Chen, T. Luo, W. Saad, D. Niyato, H. V. Poor, and S. Cui, “Performance Optimization for Semantic Communications: An Attention-based Reinforcement Learning Approach,” IEEE J. Sel. Areas Commun., vol. 40, no. 9, pp. 2598 – 2613, Sept. 2022.
  • [24] K. Niu, J. Dai, S. Yao, S. Wang, Z. Si, X. Qin, and P. Zhang, “A paradigm shift toward semantic communications,” IEEE Commun. Mag., vol. 60, no. 11, pp. 113–119, 2022.
  • [25] M. Banko, M. J. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni, “Open information extraction from the web,” Communications of the ACM, vol. 51, no. 12, pp. 68–74, 2008.
  • [26] G. Shi, Y. Xiao, Y. Li, and X. Xie, “From semantic communication to semantic-aware networking: Model, architecture, and open problems,” IEEE Commun. Mag., vol. 59, no. 8, pp. 44–50, 2021.
  • [27] G. Angeli, M. J. J. Premkumar, and C. D. Manning, “Leveraging linguistic structure for open domain information extraction,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015, pp. 344–354.
  • [28] Y. Che, H. Xiong, S. Han, and X. Xu, “Cache-enabled Knowledge Base Construction Strategy in Semantic Communications,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Rio de Janeiro, Brazil, Dec. 2022.
  • [29] W. Medhat, A. Hassan, and H. Korashy, “Sentiment analysis algorithms and applications: A survey,” Ain Shams engineering journal, vol. 5, no. 4, pp. 1093–1113, 2014.
  • [30] L. Hirschman and R. Gaizauskas, “Natural language question answering: the view from here,” natural language engineering, vol. 7, no. 4, pp. 275–300, 2001.
  • [31] E. Cambria and B. White, “Jumping nlp curves: A review of natural language processing research,” IEEE Computational intelligence magazine, vol. 9, no. 2, pp. 48–57, 2014.
  • [32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Advances Neural Info. Process. Systems (NIPS’17), Long Beach, CA, USA, Dec. 2017, pp. 5998–6008.
  • [33] H. Le, T. Tran, and S. Venkatesh, “Self-Attentive Associative Memory,” in International Conference on Machine Learning (ICML), online, Jul. 2020.
  • [34] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts, “Recursive deep models for semantic compositionality over a sentiment treebank,” in Proceedings of the 2013 conference on empirical methods in natural language processing, 2013, pp. 1631–1642.
  • [35] J. Weston, A. Bordes, S. Chopra, A. M. Rush, B. van Merriënboer, A. Joulin, and T. Mikolov, “Towards Ai-Complete Question Answering: A Set of Prerequisite Toy Tasks,” arXiv preprint arXiv:1502.05698, Mar. 2015.
  • [36] S. Zhang, D. Zheng, X. Hu, and M. Yang, “Bidirectional long short-term memory networks for relation classification,” in Proceedings of the 29th Pacific Asia conference on language, information and computation, 2015, pp. 73–78.
  • [37] S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus, “End-to-end memory networks,” Advances in neural information processing systems, vol. 28, 2015.