Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
DOI: 10.18653/v1/2021.acl-long.160

Modeling Fine-Grained Entity Types with Box Embeddings

Abstract: Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types' complex interdependencies. We study the ability of box embeddings, which embed concepts as d-dimensional hyperrectangles, to capture hierarchies of types even when these relationships are not defined explicitly in the ontology. Our model represents both types and entity mentions as boxes. Each mention and its context are fed into a BERT-based model…
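To make the geometry concrete, here is a minimal sketch (illustrative names, not the authors' implementation) of how axis-aligned boxes yield a typing probability: each type and each encoded mention is a box given by per-dimension lower and upper corners, and P(type | mention) is the volume of their intersection normalized by the mention box's volume. The softplus smoothing below stands in for the Gumbel-box machinery the paper actually uses.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of box-embedding scoring (illustrative, not the authors'
# code). Each box is a pair (lower, upper) of d-dimensional corner tensors.

def soft_volume(lower, upper, beta=10.0):
    # Softplus-smoothed side lengths keep the volume differentiable even
    # when boxes barely overlap; the paper uses Gumbel boxes instead.
    return F.softplus(upper - lower, beta=beta).prod(dim=-1)

def intersect(lo1, hi1, lo2, hi2):
    # The intersection of two axis-aligned boxes is again a box.
    return torch.maximum(lo1, lo2), torch.minimum(hi1, hi2)

def p_type_given_mention(type_box, mention_box):
    # P(type | mention) = Vol(type ∩ mention) / Vol(mention)
    i_lo, i_hi = intersect(*type_box, *mention_box)
    return soft_volume(i_lo, i_hi) / soft_volume(*mention_box)

# Toy check in d=2: a mention box inside the "person" type box gets
# probability ~1, while a disjoint box gets probability ~0.
person   = (torch.tensor([0.0, 0.0]), torch.tensor([1.0, 1.0]))
artist   = (torch.tensor([0.2, 0.2]), torch.tensor([0.6, 0.6]))
stranger = (torch.tensor([1.5, 1.5]), torch.tensor([1.9, 1.9]))
print(p_type_given_mention(person, artist))    # ~1.0: contained box
print(p_type_given_mention(person, stranger))  # ~0.0: disjoint boxes
```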

Cited by 30 publications (30 citation statements) | References 38 publications

“…Onoe and Durrett (2019) trained a filtering and relabeling model on human-annotated data to denoise the automatically generated training data. Onoe et al. (2021) introduced box embeddings (Vilnis et al., 2018) to represent the dependencies among multiple levels of type labels as the topology of axis-aligned hyperrectangles (boxes). To further cope with insufficient training data, Dai et al. (2021) used a pretrained language model to augment the (noisy) training data with masked entity generation.…”
Section: Related Work (mentioning)
Confidence: 99%
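As a concrete illustration of that "dependency as topology" point (an assumed toy setup, not the cited implementation): if training pushes the box of a finer type inside the box of a coarser one, the subtype relation can be read directly off the coordinates, even when the ontology never states it explicitly.

```python
import torch

# Toy illustration (assumed setup): hierarchy emerges as box containment.
def contains(outer, inner):
    # outer/inner are (lower, upper) corner pairs of axis-aligned boxes.
    (o_lo, o_hi), (i_lo, i_hi) = outer, inner
    return bool(torch.all(o_lo <= i_lo) and torch.all(i_hi <= o_hi))

person = (torch.tensor([0.0, 0.0]), torch.tensor([1.0, 1.0]))
artist = (torch.tensor([0.2, 0.2]), torch.tensor([0.6, 0.6]))
print(contains(person, artist))  # True: "artist" behaves as a subtype of "person"
print(contains(artist, person))  # False: the relation is not symmetric
```
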
“…Moreover, to handle the dependency between type labels of different granularities, we also utilize the inference ability of an NLI model to learn that the finer label hypothesis of an entity mention entails its more general label hypothesis. Experimental results on the UFET benchmark (Choi et al., 2018) show that LITE drastically outperforms recent state-of-the-art (SOTA) systems (Dai et al., 2021; Onoe et al., 2021; Liu et al., 2021) without any need for the distantly supervised data they use. In addition, LITE also yields the best performance on traditional (less fine-grained) entity typing tasks.…”
Section: Introduction (mentioning)
Confidence: 96%
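The typing-as-entailment recipe described in that excerpt can be sketched with an off-the-shelf MNLI model via the zero-shot classification pipeline, which is built on exactly this premise/hypothesis scheme. The hypothesis template below is an assumption for illustration, not LITE's exact prompt.

```python
from transformers import pipeline

# Sketch of entity typing as NLI (assumed template, not LITE's code):
# the mention's sentence is the premise; each candidate type label is
# turned into a hypothesis and scored for entailment.
clf = pipeline("zero-shot-classification", model="roberta-large-mnli")

sentence = "Miles Davis recorded Kind of Blue in 1959."
types = ["person", "musician", "organization", "location"]
result = clf(
    sentence,
    candidate_labels=types,
    hypothesis_template="Miles Davis is a {}.",  # assumed template
    multi_label=True,  # fine-grained types are not mutually exclusive
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```

A finer label such as "musician" entailing the coarser "person" is precisely the label dependency the quoted work enforces during training.
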
“…Entity typing A task closely related to our work is entity typing, i.e., predicting the set of types for a mention (e.g., 31, 16, 37). A key difference is that entity typing methods often learn explicit type embeddings to perform type classification, whereas TABi only learns query and entity embeddings.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Representing relations between the nodes of a hierarchy is useful for various NLP and machine learning tasks such as natural language inference (Wang et al., 2019; Sharma et al., 2019), entity typing (Onoe et al., 2021), multi-label classification (Chatterjee et al., 2021), and question answering (Jin et al., 2019; Fang et al., 2020). For example, in Figure 2, knowing the hypernym relationship between the pairs (herb, basil), (herb, thyme), and (herb, rosemary) can help paraphrase the sentence "This dish requires basil, thyme and rosemary" as "This dish requires several herbs."…”
Section: Representing Hierarchical Graph (mentioning)
Confidence: 99%
“…Various applications of probabilistic box embeddings (e.g., modeling joint hierarchies (Patel et al., 2020), uncertain knowledge graph representation (Chen et al., 2021), or fine-grained entity typing (Onoe et al., 2021)) have relied on bespoke implementations, adding unnecessary difficulty and implementation differences when applying box embeddings to new tasks. To mitigate this issue and make applying and extending box embeddings easier, we saw the need to introduce a reusable, unified, stable library that provides the basic functionalities needed to study box embeddings.…”
Section: Introduction (mentioning)
Confidence: 99%