Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.190
Unified Semantic Typing with Meaningful Label Inference

Abstract: Semantic typing aims at classifying tokens or spans of interest in a textual context into semantic categories such as relations, entity types, and event types. The inferred labels of semantic categories meaningfully interpret how machines understand components of text. In this paper, we present UNIST, a unified framework for semantic typing that captures label semantics by projecting both inputs and labels into a joint semantic embedding space. To formulate different lexical and relational semantic typing task…
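The abstract's core idea, scoring a mention in context against candidate labels in a shared embedding space, can be illustrated with a short sketch. This is a minimal illustration rather than the paper's implementation: the checkpoint, the [CLS] pooling, the [E]…[/E] mention markers, and cosine scoring are all assumptions.

```python
# Minimal sketch: embed the mention (in context) and each label string
# into a shared space, then rank labels by cosine similarity.
# Model choice, mention markers, and pooling are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Encode a batch of strings; use the [CLS] vector as the embedding."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] pooling (an assumption)

mention = "The [E] Mars rover [/E] landed in Jezero crater."
labels = ["spacecraft", "person", "location", "organization"]

m = embed([mention])                                   # (1, hidden)
l = embed(labels)                                      # (num_labels, hidden)
scores = torch.nn.functional.cosine_similarity(m, l)  # (num_labels,)
print(labels[int(scores.argmax())])                    # highest-scoring label
```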

Cited by 6 publications (5 citation statements). References 31 publications.
“…Following the original paper, we report macro-averaged precision, recall and F1. We compare our model with Box4Types (Onoe et al., 2021), LRN (Liu et al., 2021b), MLMET (Dai et al., 2021), DenoiseFET (Pan et al., 2022), UNIST (Huang et al., 2022), NPCRF (Jiang et al., 2022) and LITE. The baseline results were obtained from the original papers.…”
Section: Results (citation type: mentioning; confidence: 99%)
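The metrics named in this excerpt, macro-averaged precision, recall, and F1, are the standard entity-typing scores. As a reference point, here is a small sketch of example-level macro averaging as commonly used for multi-label entity typing; the exact averaging convention varies across the cited papers, so treat this as an assumption.

```python
# Sketch of example-level macro precision/recall/F1: per-example P and R
# averaged over the dataset, with F1 as their harmonic mean.
# The averaging convention is an assumption, not taken from any one paper.
def macro_prf1(gold_sets, pred_sets):
    precisions, recalls = [], []
    for gold, pred in zip(gold_sets, pred_sets):
        hit = len(gold & pred)
        precisions.append(hit / len(pred) if pred else 0.0)
        recalls.append(hit / len(gold) if gold else 0.0)
    p = sum(precisions) / len(precisions)
    r = sum(recalls) / len(recalls)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

gold = [{"person", "politician"}, {"location"}]
pred = [{"person"}, {"location", "city"}]
print(macro_prf1(gold, pred))  # (0.75, 0.75, 0.75)
```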
“…Pan et al. (2022) initialise the scoring function for each label based on that label's representation in the decoder of the language model (see Section 3). Rather than using pre-trained embeddings, Huang et al. (2022) encode the labels using a language model, which is fine-tuned together with the entity mention encoder. A similar approach was used by Ma et al. (2022) in the context of few-shot entity typing.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
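The approach attributed to Huang et al. (2022) above, encoding labels with a language model that is fine-tuned together with the mention encoder, roughly corresponds to a trainable bi-encoder. A hedged sketch of one joint training step follows; the single shared BERT encoder and the margin ranking loss are illustrative assumptions, not details from the paper.

```python
# Sketch: one training step where the label encoder is fine-tuned together
# with the mention encoder (here a single shared BERT, an assumption),
# pulling correct mention-label pairs together and pushing wrong ones apart.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")  # trained end to end
optimizer = torch.optim.AdamW(encoder.parameters(), lr=2e-5)

def cls_embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return encoder(**batch).last_hidden_state[:, 0]

mentions = ["[E] Ada Lovelace [/E] wrote the first program."]
pos_label, neg_label = "person", "location"

m = cls_embed(mentions)
pos = cls_embed([pos_label])
neg = cls_embed([neg_label])

# Margin ranking loss: the gold label must outscore the negative by 0.5.
pos_score = torch.nn.functional.cosine_similarity(m, pos)
neg_score = torch.nn.functional.cosine_similarity(m, neg)
loss = torch.clamp(0.5 - pos_score + neg_score, min=0).mean()

loss.backward()   # gradients flow into BOTH the label and mention encodings
optimizer.step()
optimizer.zero_grad()
```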
“…

Model                           P     R     F1
BiLSTM (Choi et al., 2018)      47.1  24.2  32.0
BERT (Onoe and Durrett, 2019)   51.6  33.0  40.2
Box4Types (Onoe et al., 2021)   52.8  38.8  44.8
MLMET (Dai et al., 2021)        53.6  45.3  49.1
UniST (Huang et al., 2022)      50.2  49.6  49.9
LITE                            52.4  48.9  50.6
CASENT (Ours)                   53.3  49.5  51.3

…the F1 score. Among the fully-supervised models, cross-encoder models demonstrate superior performance over both bi-encoder methods and multi-label classifier-based models.…”
Section: Supervised Methods (citation type: mentioning; confidence: 99%)
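The excerpt's contrast between cross-encoders and bi-encoders: a cross-encoder feeds the mention context and a candidate label through the model together, letting attention relate them, instead of comparing independently computed embeddings. Below is a minimal sketch in the style of NLI-based scorers such as LITE; the checkpoint, hypothesis template, and entailment-logit index are assumptions.

```python
# Sketch: cross-encoder scoring of a (mention context, type label) pair.
# Both texts go through the model together, so attention can relate them.
# The checkpoint and hypothesis template are illustrative assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "cross-encoder/nli-deberta-base"  # any NLI cross-encoder would do
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

context = "The Mars rover landed in Jezero crater."
for label in ["spacecraft", "person", "location"]:
    hypothesis = f"The mentioned entity is a {label}."
    inputs = tokenizer(context, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Use the entailment logit as the label score (index is model-specific).
    print(label, logits[0, 1].item())
```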
“…Baselines We consider two categories of competitive entity typing models as baselines: 1) methods capturing the example-label and label-label relations, e.g., BiLSTM (Choi et al., 2018) that concatenates the context representation learned by a bidirectional LSTM and the mention representation learned by a CNN, LabelGCN (Xiong et al., 2019) that learns to encode global label co-occurrence statistics and their word-level similarities, LRN (Liu et al., 2021a) that models the coarse-to-fine label dependency as causal chains, Box4Types (Onoe et al., 2021) that captures hierarchies of types as topological relations of boxes, and UniST (Huang et al., 2022a) that conducts name-based label ranking; 2) methods leveraging inductive bias from pre-trained models for entity typing, e.g., MLMET (Dai et al., 2021) that utilizes the pre-trained BERT to predict the most probable words for "[MASK]" tokens incorporated around the mention as type labels, and LITE and Context-TE that both leverage indirect supervision from pre-trained natural language inference.…”
Section: Datasets and Metrics (citation type: mentioning; confidence: 99%)
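The MLMET idea summarized in the excerpt, prompting a masked language model to propose type words near the mention, can be sketched as follows; the prompt template here is an illustrative assumption, not MLMET's exact pattern.

```python
# Sketch of MLMET-style type-word elicitation: place a "[MASK]" near the
# mention and read off the masked LM's most probable words as candidate
# type labels. The prompt wording is an illustrative assumption.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "Ada Lovelace, a famous [MASK], wrote the first computer program."
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits

top = logits[0, mask_pos].topk(5).indices
print(tokenizer.convert_ids_to_tokens(top.tolist()))  # candidate type words
```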