Trigger-GNN: A Trigger-Based Graph Neural Network for Nested Named Entity Recognition
Abstract
Nested named entity recognition (NER) aims to identify entity boundaries and recognize the categories of named entities in a complex hierarchical sentence. Previous work has relied on character-level, word-level, or lexicon-level models. However, such research ignores the role of complementary annotations. In this paper, we propose a trigger-based graph neural network (Trigger-GNN) to alleviate nested NER. It obtains complementary annotation embeddings through entity trigger encoding and semantic matching, and tackles nested entities using an efficient graph message passing architecture, the aggregation-update mode. We posit that using entity triggers as external annotations adds complementary supervision signals over whole sentences. This helps the model learn and generalize more efficiently and cost-effectively. Experiments show that Trigger-GNN consistently outperforms the baselines on four public NER datasets and can effectively alleviate nested NER.
Index Terms:
Nested named entity recognition, recursive graph neural network, entity trigger.
I Introduction
Named entity recognition (NER) aims to identify entity boundaries and recognize the categories of named entities in a sentence [1, 2]. The categories belong to pre-defined semantic types, such as person, location, and organization [3]. With applications ranging from AI-based dialogue systems to combining natural language and the semantic web in learning environments, NER plays a fundamental role.
However, nested NER remains a thorny challenge due to its complex hierarchical structure. Fig. 1 illustrates two examples of nested entity strings. The upper example shows that "Thomas Jefferson, third president of the United States" should be labeled jointly to constitute a complete entity statement, expressed as a person entity (PER). However, it also involves two distinct entities, "the United States" and "third president of the United States", which can separately be expressed as a geopolitical entity (GEO) or a PER. The nesting problem hampers the judgment of entity boundaries. Sometimes, it appears as overlapping text. As shown in the bottom example of Fig. 1, "Pennsylvania radio station" is overlapped: "Pennsylvania" can be treated as a PER, or concatenated with the following two words as an organization institute (ORI), "Pennsylvania radio station".
Generally, one intuitive way to handle nested NER is to stack flat NER layers [4, 5, 6, 7]. Ju et al. [5] proposed to recognize nested entities by stacking flat NER layers dynamically. Their model concatenates the output of each LSTM structure in the current NER layer and subsequently feeds it into the next flat NER layer. It makes full use of the information encoded in the corresponding internal entities (entities in the internal layers) in an inside-out manner. However, this kind of layered model cannot reuse outside information: information is transferred in a single direction, from the inner layers outward. Hence, jointly learning flat entities and their inner dependencies has attracted research attention. Luo et al. [8] proposed a bipartite flat-graph network that considers the bidirectional delivery of information from the innermost layers to the outer ones. It indeed addresses the missing dependencies of inner entities. However, it still lacks clear rationales to guide the delivery process. Nested NER therefore remains a nontrivial task due to its complex structure.

Recent advances in nested NER mainly focus on training a superior neural network model on different levels of the semantic hierarchy, e.g., character-level [9], word-level [2], and lexicon-level [10, 11]. However, such research ignores the role of the complementary annotation (the supplementary explanation representing the subtle reasons why humans label an entity in a sentence; for example, in "Tom traveled a lot last year in Silicon.", 'Silicon' can be tagged as a location (LOC) entity because of the cue phrase "travel in", whose semantics reveal that an LOC should appear right after the word 'in'; this is how the complementary annotation works). In recent years, some works have used this external supplementary explanation as compensation [12]. Lin et al. [12] first proposed the entity trigger as a complementary annotation for facilitating NER models via label-efficient learning. It defines a set of words from the sentence as entity triggers, which provide the complementary annotation for recognizing the entities in the sentence. For example, given the sentence "We had a fantastic lunch at Rumble Fish yesterday, where the food is my favorite.", "had lunch at" and "where the food" are two distinct triggers associated with the restaurant entity "Rumble Fish". These triggers (note that an entity trigger should follow some rules: (1) a trigger should contain a necessary and sufficient cue for the entity recognition process; (2) modifier words such as "fantastic" should be removed) explicitly indicate the location of the candidate entities and help to better anchor them. In addition, the defined entity triggers can be reused: statements with a similar phrase structure can be linked to an equivalence class and reuse the same entity trigger. This means that most commonly used sentences can be covered by a set of entity triggers rather than labeling every sentence manually, which makes model training more cost-effective.
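For concreteness, the following minimal sketch (in Python, with purely illustrative field names that are not taken from any released annotation format) shows how a trigger-annotated sentence such as the example above could be represented:

```python
# A minimal, hypothetical representation of a trigger-annotated sentence.
# Field names are illustrative and not taken from the paper's released data.
example = {
    "tokens": ["We", "had", "a", "fantastic", "lunch", "at", "Rumble", "Fish",
               "yesterday", ",", "where", "the", "food", "is", "my", "favorite", "."],
    "entities": [
        {"span": (6, 8), "type": "RESTAURANT"},       # tokens [6, 8) -> "Rumble Fish"
    ],
    "triggers": [
        {"token_ids": [1, 4, 5], "entity_id": 0},     # "had ... lunch at"
        {"token_ids": [10, 11, 12], "entity_id": 0},  # "where the food"
    ],
}

# Sentences with a similar phrase structure ("had lunch at <X>") can reuse the
# same trigger annotation, which is the source of the claimed cost-effectiveness.
```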
Our intuition is that these triggers can add complementary supervision over whole sentences and thus help nested-NER models learn and generalize more efficiently and cost-effectively. However, the existing trigger-based model [12] utilizes recurrent neural networks (RNNs) to encode sentences sequentially, whereas the underlying structure of a sentence is not strictly serial. In fact, RNN-based models process words in a strictly sequential order, so each word can only be processed after the word to its left. More seriously, these methods label candidate entities using only the preceding partial sequence, without seeing the remaining words. They therefore lack the capacity to capture long-term dependencies and high-level features within the sentence.
To this end, we introduce a trigger-based graph neural network (Trigger-GNN). It casts nested NER as a node classification task and breaks the serialized processing of RNNs using a recursive graph neural network. In nested NER, we have to tackle the uncertain rules for discontinuous token sequences and the confusion arising when multiple rules apply to the same input instance. We propose an aggregation-update training manner to address this issue: the node representation is updated by recursively aggregating the representations of its adjacent edges and of the graph-level node. Multiple iterations of aggregation enable Trigger-GNN to continuously verify the nested words based on global context information. The key contributions of this paper can be summarized as follows:
• We develop a trigger-based graph neural network for the nested NER task in a cost-effective manner, casting the problem as a graph node classification task.
• We propose to capture global context information and local compositions to tackle nested NER through a recursive aggregation mechanism.
• Experiments show that Trigger-GNN is cost-effective and efficient on four public NER datasets.
II Related Work
II-A Graph Neural Networks on Texts
Graph neural networks have been successfully applied to several text classification tasks [13, 14, 15, 16, 17]. Liu et al. [16] proposed a tensor graph convolutional network called TensorGCN, which uses a text graph tensor to describe semantic, syntactic, and sequential contextual information. The model combines intra-graph propagation, which aggregates information from neighboring nodes within a single graph, and inter-graph propagation, which harmonizes heterogeneous information between graphs. Huang et al. [17] proposed to build a graph for each input text with shared parameters, instead of training on a single graph for the whole corpus. This removes the dependence of an individual text on the entire corpus while still preserving global information. Yao et al. [14] developed a single text graph convolutional network (Text-GCN) based on word co-occurrence and document-word relations. It jointly learns the embeddings of both words and documents, supervised by the documents' class labels.
II-B Nested Named Entity Recognition
There has been a growing amount of effort on NER, explored in several directions including rule-based, statistics-based, and deep neural network-based approaches [1, 9, 18, 6, 19, 20]. However, nested named entity recognition remains a thorny issue because of its complex hierarchical structure.
Towards alleviating nested NER, recent works have mainly focused on stacking flat NER layers [4, 5, 6, 7]. Ju et al. [5] proposed to recognize nested entities by stacking flat NER layers dynamically. The model concatenates the output of each LSTM structure in the current flat NER layer to build a new representation for the detected entities, and subsequently feeds it into the next flat NER layer. It makes full use of the encoded information in the corresponding internal entities in an inside-out manner. However, the layered model cannot reuse outside information: information is transferred in a single direction, from the inner layers outward.
Another line of research combines a bipartite graph with the flat NER layers. Luo et al. [8] proposed a bipartite flat-graph network to learn flat entities and their inner dependencies jointly. It constructs a graph module to deliver bidirectional information from the innermost graph layer to the outermost one. The learned information carries the dependencies of inner entities and can be exploited to improve outermost entity predictions. Thanks to the graph module, the transfer of information is bidirectional, and the layered model can reuse outside information as well. However, this method lacks clear and confident rationales to guide the annotation process, whereas entity triggers can bring complementary annotation to support the labelling process.
Inspired by the achievements above (e.g., graph neural networks on texts [10] and learning with entity triggers [12]), we propose a graph neural network that integrates the complementary annotation. Trigger-GNN offers two significant benefits: (1) multiple graph-based interactions among the words, the entity triggers, and the semantics of the whole sentence; and (2) using triggers instead of a lexicon [10] brings complementary information as supervision signals, thus helping the model learn and generalize more efficiently.
III Trigger-Based Graph Neural Network
In this section, we detail the Trigger-GNN model; the overall idea is briefly illustrated in Fig. 2. Trigger-GNN obtains the complementary annotation embeddings through entity trigger encoding and semantic matching, and it uses an efficient graph message passing architecture [21], the aggregation-update mode, to better capture the interactions among words, sentences, and complementary annotations.

III-A Trigger Encoding and Semantic Matching
In this section, we propose to train an encoder architecture for entity trigger learning, and to match each entity trigger with its corresponding sentence using attention-based representations. Our intuition is that the desired representation of the entity triggers should share semantics with the hidden states of the tokens from the sentence in a shared embedding space. Specifically, for a sentence $s$ with multiple entities $e_i$, we assume that there is a set of triggers $T = \{t_1, t_2, \dots\}$. We reformat the input of our model to enable efficient batch training: each entity $e_i$ is linked to one of its corresponding triggers $t_j$, forming an instance $(s, e_i, t_j)$, where $s$ is the tokenized word sequence of the sentence. For each reformed training batch, we first apply a bidirectional LSTM to the sequence of word vectors of $s$, using GloVe word embeddings [22]. It returns the hidden state $h_i$ of each token; $H$ denotes the matrix containing the representations of all tokens, and $Z$ denotes the matrix containing the representations of all trigger tokens. We utilize the self-attention method introduced by [12] to obtain the representations of both the triggers and the sentences:
$$\alpha_{sent} = \mathrm{SoftMax}\big(W_2 \tanh(W_1 H^{\top})\big), \quad g_s = \alpha_{sent} H, \qquad \alpha_{trig} = \mathrm{SoftMax}\big(W_2 \tanh(W_1 Z^{\top})\big), \quad g_t = \alpha_{trig} Z \tag{1}$$
where $W_1$ and $W_2$ are two trainable parameters of the model. $g_s$ denotes the final sentence representation, the weighted sum of all the token representations in the whole sentence. Similarly, $g_t$ denotes the final trigger representation.
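A minimal PyTorch sketch of this self-attentive pooling, following the general form of the attention in [12] (the dimensions and module layout are illustrative assumptions), is given below:

```python
import torch
import torch.nn as nn

class AttentivePool(nn.Module):
    """Self-attentive pooling that turns a sequence of token states into one vector."""
    def __init__(self, hidden_dim: int, attn_dim: int = 100):
        super().__init__()
        self.w1 = nn.Linear(hidden_dim, attn_dim, bias=False)  # W_1
        self.w2 = nn.Linear(attn_dim, 1, bias=False)           # W_2

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (seq_len, hidden_dim) hidden vectors of tokens (H or Z)
        scores = self.w2(torch.tanh(self.w1(states)))           # (seq_len, 1)
        alpha = torch.softmax(scores, dim=0)                    # attention weights
        return (alpha * states).sum(dim=0)                      # weighted sum: g_s or g_t

# Usage: H holds BiLSTM states of the sentence tokens, Z those of the trigger tokens.
pool = AttentivePool(hidden_dim=300)
H = torch.randn(17, 300)   # sentence token states (shapes are illustrative)
Z = torch.randn(3, 300)    # trigger token states
g_s, g_t = pool(H), pool(Z)
```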
We train the trigger encoder with the help of the type of the trigger's associated entity. We use a simple multi-class classifier to predict the type of this entity; the predicted type of each entity is denoted as $\mathrm{type}(e)$. The loss function of the classifier is as follows:
$$L_{TC} = -\sum \log P\big(\mathrm{type}(e) \mid g_t;\, W_{type}\big) \tag{2}$$
where $W_{type}$ is a trainable parameter in the model.
We match the triggers and sentences using their representations $g_s$ and $g_t$ obtained in Eq. (1). During training, we randomly mix triggers and sentences to sample negative pairs, which tackles the imbalance of positive and negative samples. We expect a margin $m$ between the embeddings of a negative pair. The loss function of the semantic matching is as follows:
$$L_{SM} = \sum \Big( d\,\lVert g_s - g_t \rVert_2^{2} + (1-d)\,\max\!\big(0,\; m - \lVert g_s - g_t \rVert_2\big)^{2} \Big) \tag{3}$$
where $d$ indicates whether the trigger originally appears in the sentence: when $d$ is set to 1, the trigger is originally in the sentence; when it is set to 0, it is not. The joint loss function of the trigger encoding and semantic matching is $L = L_{TC} + \lambda L_{SM}$, where $\lambda$ is a hyper-parameter for fine-tuning, which we keep fixed in our experiments.
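The two losses can be sketched in PyTorch as follows; the contrastive form of the matching loss and the default values of the margin and of the weighting factor `lam` are assumptions consistent with the description above, not values reported here:

```python
import torch
import torch.nn.functional as F

def trigger_losses(g_s, g_t, entity_type, type_logits_layer, matched,
                   margin=0.5, lam=1.0):
    """Sketch of the joint trigger-encoding / semantic-matching loss.

    g_s, g_t          : pooled sentence and trigger vectors, shape (batch, hidden)
    entity_type       : gold entity-type ids for the triggers, shape (batch,)
    type_logits_layer : linear layer mapping g_t to entity-type logits (e.g. nn.Linear)
    matched           : 1.0 if the trigger originally appears in the sentence, else 0.0
    margin, lam       : margin m and weighting hyper-parameter (values assumed)
    """
    # Trigger classification loss: predict the associated entity type from g_t.
    loss_tc = F.cross_entropy(type_logits_layer(g_t), entity_type)

    # Contrastive semantic-matching loss: pull matched pairs together,
    # push mismatched pairs at least `margin` apart in L2 distance.
    dist = torch.norm(g_s - g_t, p=2, dim=-1)
    loss_sm = (matched * dist.pow(2)
               + (1 - matched) * torch.clamp(margin - dist, min=0).pow(2)).mean()

    return loss_tc + lam * loss_sm
```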
III-B Text Graph Construction
In this section, we detail how to convert the whole sentence into a directed graph. We define each word in the sentence as a node, and add edges between nodes according to the corresponding lexicon (to this end, we maintain a lexicon list during our experiments; more details are given in the Experiment Analysis section). We also design a graph-level node to gather all the information from the nodes and edges in the text graph. The graph-level node helps the node representations by removing ambiguity.
Formally, let $s = \{w_1, w_2, \dots, w_n\}$ denote a sentence, where $w_i$ denotes the $i$-th word. A potential lexicon entry matching a word sub-sequence can be formulated as $w_{b,e}$, where the indices of its first and last words are $b$ and $e$. In this work, we denote directed and labeled multi-graphs as $G = (V, E, g)$ with nodes $V$ (the words $w_i$), labeled edges $E$ (relations $(w_b, w_e, r)$, where $r$ is a relation type according to the entity trigger), and a graph-level node (global attribute $g$). Once a word sub-sequence matches a candidate lexicon entry $w_{b,e}$, we add one edge $(w_b, w_e, r)$ from the beginning word $w_b$ to the ending word $w_e$. The graph-level node captures the global information of the entire text graph; formally, it is represented as the sum of the representations of all the nodes and edges of the graph. For a graph with $n$ word nodes and $m$ edges, there are $n + m$ relations linking each node and edge to the shared graph-level node representation. In addition, we construct the transpose of the text graph according to [10]: another directed graph, denoted $G^{\top}$, with the same set of nodes but all edges reversed compared to $G$. Similar to a bidirectional LSTM, we compose $G$ and $G^{\top}$ as a bidirectional text graph, and concatenate the hidden states from $G$ and $G^{\top}$ as the final outputs.
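A minimal sketch of this construction is given below; the maximum matched phrase length `max_len` is an illustrative parameter, not a value stated in the paper:

```python
from typing import List, Set, Tuple

def build_text_graph(tokens: List[str], lexicon: Set[str], max_len: int = 5):
    """Sketch of the directed text-graph construction described above.

    Each token becomes a node; every word sub-sequence found in the lexicon adds a
    directed edge from its first to its last token, and a graph-level node is linked
    to every node and edge.
    """
    n = len(tokens)
    edges: List[Tuple[int, int, str]] = []            # (begin, end, relation/lexicon entry)
    for b in range(n):
        for e in range(b + 1, min(n, b + max_len)):
            phrase = " ".join(tokens[b:e + 1])
            if phrase in lexicon:
                edges.append((b, e, phrase))

    # The graph-level node is connected to all n nodes and all |edges| edges.
    graph = {"nodes": list(range(n)), "edges": edges,
             "global_links": n + len(edges)}
    # The transposed graph simply reverses every edge (for the bidirectional pass).
    graph_T = {"nodes": list(range(n)),
               "edges": [(e, b, r) for (b, e, r) in edges],
               "global_links": n + len(edges)}
    return graph, graph_T

# Example:
tokens = ["Pennsylvania", "radio", "station", "reported", "the", "news"]
lexicon = {"Pennsylvania radio station", "radio station"}
g, g_T = build_text_graph(tokens, lexicon)
```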
III-C Recursive Graph Neural Networks
In this part, we detail the structure of the recursive graph neural network in three subsections: the update module, the aggregation module, and trigger-enhanced decoding and tagging.
III-C1 Update Module
The hidden vectors of the tokens obtained in Section III-A are used as the representations of the nodes in the graph, and the hidden vectors of the trigger tokens are used as the representations of the edges. Formally, the hidden state of the text graph at the $l$-th layer is denoted as:
$$H^{l} = \{\, h_1^{l}, h_2^{l}, \dots, h_n^{l}, g^{l} \,\} \tag{4}$$
where $h_i^{l}$ is the hidden state of node $i$ and $g^{l}$ represents the graph-level node.
For the initial state of the text graph, the hidden state of the $i$-th node is set to its token embedding, i.e., $h_i^{0} = x_i$. The transition from $H^{l-1}$ to $H^{l}$ is calculated as follows:
$$i_i^{l},\, f_i^{l},\, o_i^{l} = \sigma\big(W\,[\,x_i ; \chi_i^{l-1}\,] + b\big), \quad u_i^{l} = \tanh\big(W_u\,[\,x_i ; \chi_i^{l-1}\,] + b_u\big), \quad c_i^{l} = f_i^{l} \odot c_i^{l-1} + i_i^{l} \odot u_i^{l}, \quad h_i^{l} = o_i^{l} \odot \tanh(c_i^{l}) \tag{5}$$
where $x_i$ in each layer is used to introduce the original meaning of the token; $\chi_i^{l-1}$ represents the aggregated hidden vector of the neighbors of node $i$; $[\,\cdot\, ;\, \cdot\,]$ represents the concatenation of the vectors; and $i$, $f$, $o$ represent the input, forget, and output gate structures, respectively.
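The following PyTorch sketch illustrates this kind of gated node update; the concrete parameterization (a single linear layer producing the three gates from the concatenated inputs) is an assumption rather than the exact form used by the model:

```python
import torch
import torch.nn as nn

class NodeUpdate(nn.Module):
    """Sketch of an LSTM-style node update: gates are computed from the original
    token embedding concatenated with the aggregated neighbor message."""
    def __init__(self, dim: int):
        super().__init__()
        self.gates = nn.Linear(2 * dim, 3 * dim)   # input / forget / output gates
        self.cand = nn.Linear(2 * dim, dim)        # candidate cell state

    def forward(self, x, chi, c_prev):
        # x: token embedding, chi: aggregated neighbor vector, c_prev: previous cell state
        z = torch.cat([x, chi], dim=-1)
        i, f, o = torch.sigmoid(self.gates(z)).chunk(3, dim=-1)
        u = torch.tanh(self.cand(z))
        c = f * c_prev + i * u                     # new cell state
        h = o * torch.tanh(c)                      # new node hidden state
        return h, c

# Example shapes:
upd = NodeUpdate(dim=150)
h_new, c_new = upd(torch.randn(150), torch.randn(150), torch.zeros(150))
```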
III-C2 Aggregation Module
We assume that node $i$ has $|\mathcal{N}(i)|$ neighbors in the text graph. The aggregated hidden vector $\chi_i^{l}$ is calculated as follows:
$$a_{ik}^{l} = \underset{k \in \mathcal{N}(i)}{\mathrm{SoftMax}}\Big( w^{\top} \tanh\big(W_a\,[\,h_i^{l-1} ; h_k^{l-1} ; p_k\,]\big) \Big), \quad \chi_i^{l} = \sum_{k \in \mathcal{N}(i)} a_{ik}^{l}\, h_k^{l-1} \tag{6}$$
where $p_k$ represents the positional vector of node $k$ and $h_k^{l-1}$ represents the hidden state of the $k$-th neighbor of node $i$. The positional vectors make it easier for the model to be aware of the position information of each word. The combination of the gated graph neural network and this aggregation enables our model to gather information from long-term dependencies as the number of layers increases, by determining which part of the information should be passed to higher layers.
The graph-level node $g^{l}$ is updated according to the hidden states of the previous layer $H^{l-1}$. Its candidate $\hat{g}^{l}$ is calculated using an attention mechanism:
$$\beta_i^{l} = \underset{i}{\mathrm{SoftMax}}\Big( w_g^{\top} \tanh\big(W_g\,[\,g^{l-1} ; h_i^{l-1}\,]\big) \Big), \quad \hat{g}^{l} = \sum_{i=1}^{n} \beta_i^{l}\, h_i^{l-1} \tag{7}$$
For each layer, a forget gate decides which information should be discarded according to the vector of the graph-level node, and the candidate cell vector of the graph-level node is updated from the cell vector of the previous layer, denoted as $c_g^{l-1}$:
$$f_g^{l} = \sigma\big(W_f\,[\,g^{l-1} ; \hat{g}^{l}\,] + b_f\big), \quad c_g^{l} = f_g^{l} \odot c_g^{l-1} + (1 - f_g^{l}) \odot \hat{g}^{l}, \quad g^{l} = \tanh(c_g^{l}) \tag{8}$$
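The sketch below illustrates, again under assumed parameterizations, how the attention-based neighbor aggregation with positional vectors and the graph-level node update could be written in PyTorch:

```python
import torch
import torch.nn as nn

class NeighborAggregate(nn.Module):
    """Sketch of attention-based neighbor aggregation with positional vectors."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(3 * dim, 1)

    def forward(self, h_i, neigh_h, neigh_pos):
        # h_i: (dim,); neigh_h / neigh_pos: (k, dim) neighbor states and positional vectors
        q = h_i.expand_as(neigh_h)
        a = torch.softmax(self.score(torch.cat([q, neigh_h, neigh_pos], dim=-1)), dim=0)
        return (a * neigh_h).sum(dim=0)            # aggregated message chi_i

def update_global_node(g_prev, node_states, score_layer):
    """Sketch: the graph-level node attends over all node states of the previous layer."""
    q = g_prev.expand_as(node_states)
    beta = torch.softmax(score_layer(torch.cat([q, node_states], dim=-1)), dim=0)
    return (beta * node_states).sum(dim=0)         # candidate global vector g_hat

# Example shapes:
dim = 150
agg = NeighborAggregate(dim)
chi = agg(torch.randn(dim), torch.randn(4, dim), torch.randn(4, dim))
g_hat = update_global_node(torch.randn(dim), torch.randn(10, dim), nn.Linear(2 * dim, 1))
```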
III-C3 Trigger-Enhanced Decoding and Tagging
Given the vector of the whole graph and the vectors of each node, we use the module previously trained in III-A to compute the mean trigger vector $\hat{g}_t$ corresponding to the sentence. We incorporate the weighted sum of all the token representations from $H$, denoted $H'$, with this trigger representation:
$$\hat{\alpha} = \mathrm{SoftMax}\Big( v^{\top} \tanh\big(U_1 H^{\top} + U_2\, \hat{g}_t^{\top}\big) \Big), \qquad H' = \hat{\alpha}\, H \tag{9}$$
where $U_1$, $U_2$, and $v$ are trainable parameters. We concatenate $H$ with the trigger-enhanced $H'$ as the input $[H ; H']$ to the final CRF tagger. The CRF tagger is constructed conventionally to predict the tag for each token, according to [16]. It should be noted that when processing unlabeled sentences, we do not know the corresponding triggers of the sentence. Instead, we use the trigger encoding and semantic matching module to compute the similarities between the sentence representation and the trigger representations according to their L2-norm distances. The triggers with the highest scores are used as additional inputs to the final tagging process.
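At inference time, trigger retrieval therefore amounts to a nearest-neighbor search in the shared embedding space. A minimal sketch follows; the number of retrieved triggers `k` and the use of their mean vector are assumptions:

```python
import torch

def retrieve_triggers(g_s: torch.Tensor, trigger_bank: torch.Tensor, k: int = 2):
    """Sketch of inference-time trigger retrieval for an unlabeled sentence:
    rank all known trigger representations by L2 distance to the sentence
    representation and keep the k closest ones."""
    dists = torch.cdist(g_s.unsqueeze(0), trigger_bank).squeeze(0)   # (num_triggers,)
    topk = torch.topk(-dists, k=k).indices                           # smallest distances
    return topk, trigger_bank[topk].mean(dim=0)                      # ids and mean trigger vector

# The mean trigger vector is then used to re-weight the token states before
# concatenation and CRF tagging, as described above.
g_s = torch.randn(300)                    # sentence representation (illustrative shape)
trigger_bank = torch.randn(1000, 300)     # representations of all annotated triggers
ids, g_t_hat = retrieve_triggers(g_s, trigger_bank)
```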
IV Experiment Analysis
IV-A Datasets
We first introduce the NER datasets used in the experiments. All of them are public and popular benchmarks for the NER task.
• CoNLL-2002 & 2003: Two widely used general-domain NER datasets from the CoNLL shared tasks [23, 24].
• JNLPBA: A molecular biology dataset for identifying technical terms [25]. It contains various types of nested entities, such as the names of proteins, genes, and their locations of activity such as cells or organisms. We use the standard split of [26].
• BC5CDR: Another bio-medical domain dataset, well-studied and popular for evaluating the performance of nested NER [27]. We use the standard split.
Both the JNLPBA and BC5CDR datasets involve a large number of nested entities in the bio-medical domain.
IV-B Lexicon
We use the lexicon generated on a corpus of article pairs from Gigaword (https://catalog.ldc.upenn.edu/LDC2003T05), consisting of around 4 million articles; we use the TensorFlow release of the corpus (https://www.tensorflow.org/datasets/catalog/gigaword). The embeddings of the lexicon words are pre-trained with GloVe word embeddings [22] and fine-tuned during training.
IV-C Baselines
Several state-of-the-art NER algorithms (listed in chronological order) are used for the effectiveness evaluation:
• Neural layered model: merges the output of stacked LSTM layers to build new representations for detected entities [5].
• Lattice LSTM: a lattice-structured LSTM model that encodes a sequence of input characters as well as all potential words matching a lexicon [11].
• Boundary-aware neural model: a boundary-aware neural model for nested NER using sequence labelling [28].
• Biaffine-NER: applies graph-based dependency parsing to provide a global view of the sentence [29].
• BiFlaG: first uses the entities recognized by a flat NER module to construct an entity graph [8].
• LGN: a lexicon-based graph network with global semantics, which proposes a schema to connect characters with external lexicon words [10].
• Trigger-NER: first introduced the concept of the "entity trigger" and proposed an RNN-based trigger matching network for NER [12].
• ACE document-context: automated concatenation of embeddings (ACE), which automates the process of finding better concatenations of embeddings for structured prediction tasks, NER being one of them [30].
IV-D Experiment Settings
IV-D1 Annotating Entity Triggers as Complementary Supervision
We follow [12] to crowd-source the entity triggers, and use LEAN-LIFE, developed by [31], for annotation. Annotators are required to label a set of words in a sentence that are useful for entity recognition. We mask the entities with their corresponding types so that human annotators pay more attention to the non-entity words in the sentence. We merge the multiple triggers for each entity by taking the intersection of all annotators' results. We reuse the 14K triggers from [12] and release another 8K triggers in the bio domain for future trigger-enhanced NER research, which is also one of the contributions of our work.
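As a small illustration of the merging step, the following sketch (with a toy input) takes the intersection of the word sets labeled by different annotators for one entity:

```python
from typing import Iterable, List

def merge_annotations(annotator_sets: Iterable[Iterable[str]]) -> List[str]:
    """Sketch: merge several annotators' trigger words for one entity by intersection."""
    merged = set.intersection(*map(set, annotator_sets))
    return sorted(merged)

# Only the words all annotators agree on are kept as the trigger.
merge_annotations([{"had", "lunch", "at"}, {"had", "lunch", "at", "fantastic"}])
# -> ['at', 'had', 'lunch']
```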
IV-D2 Hyper-parameter Settings
We develop Trigger-GNN based on PyTorch, with the same learning rate setting as [10]. We use Dropout [32] with a rate of 0.4 for all the embedding layers and a rate of 0.3 for the aggregation module to reduce overfitting. The embedding vector size and the hidden state size are both set to 150. The initial word vectors are based on GloVe word embeddings [22]. The number of message passing steps is tuned as analyzed in detail in IV-E1. We use the standard metrics for evaluation: Precision (P), Recall (R), and F1 score (F1).
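For reference, the reported settings can be collected into a single configuration object; values not stated above (such as the exact learning rate and step count) are placeholders:

```python
# Hyper-parameters reported above; entries marked "assumed" are not given in the text.
config = {
    "embedding_dim": 150,
    "hidden_dim": 150,
    "dropout_embedding": 0.4,
    "dropout_aggregation": 0.3,
    "word_vectors": "glove",        # GloVe initialization [22]
    "message_passing_steps": 3,     # assumed; at least 3 per the analysis in IV-E1
    "learning_rate": None,          # follows [10]; exact value not stated here
}
```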
IV-E Evaluation and Discussion
We demonstrate the performance of Trigger-GNN in both the general and bio domains, with a particular focus on nested NER. Results on the CoNLL-2002 & 2003, JNLPBA, and BC5CDR datasets are shown in Tables I, II, and III, respectively. Compared with recent methods, Trigger-GNN obtains the best results by a large margin. In particular, Trigger-GNN obtains 1.93% and 3.52% improvements over its baseline model LGN on the general-domain datasets CoNLL-2002 & 2003. As shown in Tables II and III, Trigger-GNN obtains a 3.41% improvement over LGN on JNLPBA and 5.32% on BC5CDR. The main results on these four datasets demonstrate that our proposed model adapts to both the general domain and the bio domain, and performs better than recent methods.
Table I: Results (F1 score) on the CoNLL-2002 and CoNLL-2003 datasets.

| Models | CoNLL-2002 | CoNLL-2003 |
|---|---|---|
| Neural layered model [5] | 85.23% | 85.13% |
| Lattice LSTM [11] | 90.34% | 91.28% |
| Boundary-aware model [28] | 87.58% | 91.58% |
| Biaffine-NER [29] | 91.38% | 91.63% |
| BiFlaG [8] | 92.54% | 92.67% |
| ACE+document-context [30] | 93.95% | 94.63% |
| Trigger-NER [12] | 86.83% | 86.95% |
| LGN [10] | 92.19% | 91.86% |
| Trigger-GNN | 94.92% | 95.38% |
Table II: Results on the JNLPBA dataset.

| Models | P | R | F1 |
|---|---|---|---|
| Neural layered model [5] | 81.75% | 81.24% | 81.49% |
| Lattice LSTM [11] | 83.25% | 80.25% | 81.72% |
| Boundary-aware model [28] | 80.12% | 75.12% | 77.53% |
| ACE+document-context [30] | 85.75% | 83.62% | 84.67% |
| Trigger-NER [12] | 79.12% | 76.34% | 77.70% |
| LGN [10] | 84.12% | 82.14% | 83.12% |
| Trigger-GNN | 87.75% | 85.34% | 86.53% |
Table III: Results on the BC5CDR dataset.

| Models | P | R | F1 |
|---|---|---|---|
| Neural layered model [5] | 87.12% | 86.14% | 86.62% |
| Lattice LSTM [11] | 89.34% | 87.89% | 88.61% |
| Boundary-aware model [28] | 86.12% | 85.12% | 85.62% |
| ACE+document-context [30] | 93.65% | 92.32% | 92.98% |
| Trigger-NER [12] | 85.75% | 83.62% | 84.67% |
| LGN [10] | 90.12% | 86.21% | 88.13% |
| Trigger-GNN | 94.19% | 92.73% | 93.45% |
IV-E1 Steps of Message Passing
To investigate the impact of the number of message passing steps during the update process, we analyze the performance of Trigger-GNN with different values of the step number, as shown in Fig. 3. The results show that the number of update steps has an important impact on the performance of Trigger-GNN. When the step number is less than 3, the F1 score drops by 4.1% on average; specifically, the F1 scores on the JNLPBA and BC5CDR datasets drop by around 3.63% and 3.53%, respectively. Several rounds of updates yield competitive results, revealing that Trigger-GNN benefits from the update process. Empirically, as the process iterates, graph nodes aggregate more information from their neighbors, and the graph-level node aggregates information from both the nodes and the edges at every update step.

IV-E2 Ablation Studies
To evaluate the contribution of each component in Trigger-GNN, we conduct an ablation study; the results are illustrated in Table IV. The model's performance drops when the graph-level node is removed, which indicates that the global connections in the graph structure are essential. We can also observe that the entity triggers play a vital role: CoNLL-2003, BC5CDR, and JNLPBA suffer serious performance drops of more than 3% without entity triggers. Also, removing the edge/lexicon component results in a further performance loss of about 1.5% on average.
Table IV: Ablation study (F1 score) of Trigger-GNN and LGN.

| Models | CoNLL-2003 | JNLPBA | BC5CDR |
|---|---|---|---|
| Trigger-GNN | 95.38% | 86.53% | 93.45% |
| - graph-level node | 94.31% | 85.87% | 92.34% |
| - trigger | 91.86% | 83.12% | 88.13% |
| - edge/lexicon | 90.42% | 81.74% | 87.83% |
| - bidirectional | 88.12% | 76.51% | 84.38% |
| - CRF | 86.87% | 74.93% | 82.80% |
| LGN | 91.86% | 83.12% | 88.13% |
| - graph-level node | 89.73% | 81.56% | 87.32% |
To better demonstrate the advantage of our model, we compare the trigger-based GNN with LGN, which uses a lexicon instead. The results show that, compared to LGN, Trigger-GNN achieves an average F1 score that is 2.68% higher. In addition, there is a distinct performance gap when the global node is removed from either model. This is because the trigger-based GNN adds complementary supervision over the words, the entity triggers, and the whole sentence: without graph-level nodes, the F1 score of LGN decreases by 1.83% on average across the datasets, whereas Trigger-GNN drops by only 0.89%, which shows that Trigger-GNN is better at modeling sentences.
IV-E3 Performance Against Labeled Data
Fig. 4 illustrates the performance of Trigger-GNN and several baseline methods on the CoNLL-2003 dataset with different amounts of labeled data. By using only 20-30% of the trigger-annotated data for training, the Trigger-GNN model delivers performance comparable to the baseline LGN trained with 50-70% of the conventional (lexicon-annotated) training data. This indicates the cost-effectiveness of using triggers as an additional source of supervision.

IV-E4 Case Studies
To further validate that Trigger-GNN can alleviate nested NER, we perform a case study on BC5CDR, as illustrated in Table V. It demonstrates that Trigger-GNN performs well on nested NER cases. In particular, Trigger-GNN can not only identify "selegiline" as a chemical, but also integrate "supine systolic and diastolic blood pressures" as a single disease entity. In contrast, models like LGN and Trigger-NER can only capture part of the entities or incorrectly split the diseases, which illustrates the superiority of our proposed model.
Table V: Case study on the BC5CDR dataset, comparing the predictions of LGN, Trigger-NER, and Trigger-GNN.
V Conclusion
In this work, we investigate a trigger-based graph neural network approach to alleviate nested NER. Entity triggers are used to provide more explicit supervision. Trigger-GNN offers two significant benefits: (1) multiple graph-based interactions among the words, the entity triggers, and the semantics of the whole sentence; and (2) using triggers instead of a lexicon [10] adds complementary supervision signals and thus helps the model learn and generalize more efficiently. The experiments demonstrate the strong performance of our proposed model on four real-world datasets in both the general and bio domains. The explanatory experiments also illustrate the efficiency and cost-effectiveness of our proposed model.
VI Acknowledgment
This work is supported by the National Natural Science Foundation of China (62002207, 62072290, 12075142, 62073201), the Shandong Provincial Natural Science Foundation (ZR2020MA102) and Shandong Provincial Key Laboratory for Novel Distributed Computer Software Technology.
References
- [1] V. Yadav and S. Bethard, “A survey on recent advances in named entity recognition from deep learning models,” CoRR, vol. abs/1910.11470, 2019. [Online]. Available: http://arxiv.org/abs/1910.11470
- [2] J. P. C. Chiu and E. Nichols, “Named entity recognition with bidirectional lstm-cnns,” Trans. Assoc. Comput. Linguistics, vol. 4, pp. 357–370, 2016. [Online]. Available: https://transacl.org/ojs/index.php/tacl/article/view/792
- [3] Q. Tran, A. MacKinlay, and A. Jimeno-Yepes, “Named entity recognition with stack residual LSTM and trainable bias decoding,” in Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Taipei, Taiwan, November 27 - December 1, 2017 - Volume 1: Long Papers, G. Kondrak and T. Watanabe, Eds. Asian Federation of Natural Language Processing, 2017, pp. 566–575. [Online]. Available: https://aclanthology.org/I17-1057/
- [4] M. Ju, M. Miwa, and S. Ananiadou, “A neural layered model for nested named entity recognition,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), M. A. Walker, H. Ji, and A. Stent, Eds. Association for Computational Linguistics, 2018, pp. 1446–1459. [Online]. Available: https://doi.org/10.18653/v1/n18-1131
- [5] ——, “A neural layered model for nested named entity recognition,” in Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), M. A. Walker, H. Ji, and A. Stent, Eds. Association for Computational Linguistics, 2018, pp. 1446–1459. [Online]. Available: https://doi.org/10.18653/v1/n18-1131
- [6] Q. Wang and M. Iwaihara, “Deep neural architectures for joint named entity recognition and disambiguation,” in IEEE International Conference on Big Data and Smart Computing, BigComp 2019, Kyoto, Japan, February 27 - March 2, 2019. IEEE, 2019, pp. 1–4. [Online]. Available: https://doi.org/10.1109/BIGCOMP.2019.8679233
- [7] T. H. Nguyen, A. Sil, G. Dinu, and R. Florian, “Toward mention detection robustness with recurrent neural networks,” CoRR, vol. abs/1602.07749, 2016. [Online]. Available: http://arxiv.org/abs/1602.07749
- [8] Y. Luo and H. Zhao, “Bipartite flat-graph network for nested named entity recognition,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, D. Jurafsky, J. Chai, N. Schluter, and J. R. Tetreault, Eds. Association for Computational Linguistics, 2020, pp. 6408–6418. [Online]. Available: https://doi.org/10.18653/v1/2020.acl-main.571
- [9] Z. Huang, W. Xu, and K. Yu, “Bidirectional LSTM-CRF models for sequence tagging,” CoRR, vol. abs/1508.01991, 2015. [Online]. Available: http://arxiv.org/abs/1508.01991
- [10] T. Gui, Y. Zou, Q. Zhang, M. Peng, J. Fu, Z. Wei, and X. Huang, “A lexicon-based graph neural network for chinese NER,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, K. Inui, J. Jiang, V. Ng, and X. Wan, Eds. Association for Computational Linguistics, 2019, pp. 1040–1050. [Online]. Available: https://doi.org/10.18653/v1/D19-1096
- [11] Y. Zhang and J. Yang, “Chinese NER using lattice LSTM,” in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers, I. Gurevych and Y. Miyao, Eds. Association for Computational Linguistics, 2018, pp. 1554–1564. [Online]. Available: https://aclanthology.org/P18-1144/
- [12] B. Y. Lin, D.-H. Lee, M. Shen, R. Moreno, X. Huang, P. Shiralkar, and X. Ren, “TriggerNER: Learning with entity triggers as explanations for named entity recognition,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, Jul. 2020, pp. 8503–8511. [Online]. Available: https://aclanthology.org/2020.acl-main.752
- [13] W. Li, S. Li, S. Ma, Y. He, D. Chen, and X. Sun, “Recursive graphical neural networks for text classification,” ArXiv, vol. abs/1909.08166, 2019.
- [14] L. Yao, C. Mao, and Y. Luo, “Graph convolutional networks for text classification,” ArXiv, vol. abs/1809.05679, 2019.
- [15] F. Lei, X. Liu, Z. Li, Q. Dai, and S. Wang, “Multihop neighbor information fusion graph convolutional network for text classification,” Mathematical Problems in Engineering, vol. 2021, pp. 1–9, 2021.
- [16] X. Liu, X. You, X. Zhang, J. Wu, and P. Lv, “Tensor graph convolutional networks for text classification,” ArXiv, vol. abs/2001.05313, 2020.
- [17] L. Huang, D. Ma, S. Li, X. Zhang, and H. WANG, “Text level graph neural network for text classification,” 2019.
- [18] Y. Luo and H. Zhao, “Bipartite flat-graph network for nested named entity recognition,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, D. Jurafsky, J. Chai, N. Schluter, and J. R. Tetreault, Eds. Association for Computational Linguistics, 2020, pp. 6408–6418. [Online]. Available: https://doi.org/10.18653/v1/2020.acl-main.571
- [19] S. Zheng, F. Wang, H. Bao, Y. Hao, P. Zhou, and B. Xu, “Joint extraction of entities and relations based on a novel tagging scheme,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, R. Barzilay and M. Kan, Eds. Association for Computational Linguistics, 2017, pp. 1227–1236. [Online]. Available: https://doi.org/10.18653/v1/P17-1113
- [20] L. Yao, C. Mao, and Y. Luo, “KG-BERT: BERT for knowledge graph completion,” CoRR, vol. abs/1909.03193, 2019. [Online]. Available: http://arxiv.org/abs/1909.03193
- [21] W. Li, S. Li, S. Ma, Y. He, D. Chen, and X. Sun, “Recursive graphical neural networks for text classification,” CoRR, vol. abs/1909.08166, 2019. [Online]. Available: http://arxiv.org/abs/1909.08166
- [22] J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, A. Moschitti, B. Pang, and W. Daelemans, Eds. ACL, 2014, pp. 1532–1543. [Online]. Available: https://doi.org/10.3115/v1/d14-1162
- [23] E. F. Tjong Kim Sang and S. Buchholz, “Introduction to the CoNLL-2000 shared task chunking,” in Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop, 2000. [Online]. Available: https://aclanthology.org/W00-0726
- [24] E. F. Tjong Kim Sang and F. De Meulder, “Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition,” in Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 2003, pp. 142–147. [Online]. Available: https://aclanthology.org/W03-0419
- [25] N. Collier and J. Kim, “Introduction to the bio-entity recognition task at JNLPBA,” in Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications, NLPBA/BioNLP 2004, Geneva, Switzerland, August 28-29, 2004, N. Collier, P. Ruch, and A. Nazarenko, Eds., 2004. [Online]. Available: https://aclanthology.org/W04-1213/
- [26] M. Habibi, L. Weber, M. L. Neves, D. L. Wiegandt, and U. Leser, “Deep learning with word embeddings improves biomedical named entity recognition,” Bioinform., vol. 33, no. 14, pp. i37–i48, 2017. [Online]. Available: https://doi.org/10.1093/bioinformatics/btx228
- [27] J. Li, Y. Sun, R. J. Johnson, D. Sciaky, C. Wei, R. Leaman, A. P. Davis, C. J. Mattingly, T. C. Wiegers, and Z. Lu, “Biocreative V CDR task corpus: a resource for chemical disease relation extraction,” Database J. Biol. Databases Curation, vol. 2016, 2016. [Online]. Available: https://doi.org/10.1093/database/baw068
- [28] C. Zheng, Y. Cai, J. Xu, H. Leung, and G. Xu, “A boundary-aware neural model for nested named entity recognition,” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, K. Inui, J. Jiang, V. Ng, and X. Wan, Eds. Association for Computational Linguistics, 2019, pp. 357–366. [Online]. Available: https://doi.org/10.18653/v1/D19-1034
- [29] J. Yu, B. Bohnet, and M. Poesio, “Named entity recognition as dependency parsing,” in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, D. Jurafsky, J. Chai, N. Schluter, and J. R. Tetreault, Eds. Association for Computational Linguistics, 2020, pp. 6470–6476. [Online]. Available: https://doi.org/10.18653/v1/2020.acl-main.577
- [30] X. Wang, Y. Jiang, N. Bach, T. Wang, Z. Huang, F. Huang, and K. Tu, “Automated concatenation of embeddings for structured prediction,” 2021.
- [31] D.-H. Lee, R. Khanna, B. Y. Lin, J. Chen, S. Lee, Q. Ye, E. Boschee, L. Neves, and X. Ren, “Lean-life: A label-efficient annotation framework towards learning from explanation,” 2020.
- [32] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: A simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, vol. 15, no. 56, pp. 1929–1958, 2014. [Online]. Available: http://jmlr.org/papers/v15/srivastava14a.html