Fine-Grained Entity Recognition
DOI: 10.1609/aaai.v26i1.8122
Abstract: Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to producing labels from a small set of entity classes, e.g., person, organization, location, or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set…

Cited by 273 publications (90 citation statements)
References 21 publications
“…this kind of NER tasks are called coarse-grained NER [11,12]. In contrast, the NER tasks whose categories of named entities are much larger are called fine-grained NER [13][14][15][16].…”
Section: Named Entity Recognition (NER) (mentioning, confidence: 99%)
“…The definition of the label space L varies across literature: some focused on predicting free-form words (e.g., place, hotel, etc.) (Ling and Weld, 2012; Choi et al., 2018), while others aimed at predicting labels organized into a type hierarchy (e.g., /location/hotel) to allow domain knowledge incorporation and ensure label consistency (Gillick et al., 2014). Considering the unclear hierarchy in the studied large-scale label space, we adopt the free-form types without ontologies in this work.…”
Section: Problem Definition (mentioning, confidence: 99%)
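The quote above contrasts two ways of representing a fine-grained label space. A minimal sketch of the distinction, using hypothetical entity names and type labels (not drawn from any of the cited systems):

```python
# Two common representations of a fine-grained entity-type label space.

# Free-form labels: an unordered set of type words per mention.
free_form = {"Hilton": {"place", "hotel", "organization"}}

# Hierarchical labels: slash-delimited paths into a type ontology.
hierarchical = {"Hilton": {"/location/hotel", "/organization/company"}}

def ancestors(path):
    """Expand a path label to the set containing it and all its ancestor
    types; this recoverability is how hierarchies help enforce label
    consistency (assigning /location/hotel implies /location)."""
    parts = path.strip("/").split("/")
    return {"/" + "/".join(parts[:i + 1]) for i in range(len(parts))}

print(sorted(ancestors("/location/hotel")))
# ['/location', '/location/hotel']
```

With free-form labels no such expansion is possible, which is why the quoted work adopts them only when the hierarchy itself is unclear.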
“…Different from a traditional entity-typing task that typically classifies entities into coarse-grained types (e.g., person, location, organization) [3,4], FET aims to assign an entity with more specific types [5,6], which usually follow a hierarchical structure that can provide more semantic information about the entity [7,8], such as /person/politician, /book/author, etc. FET is a significant subtask of named-entity recognition (NER) [9] for downstream natural language processing (NLP) applications, such as relation extraction [10,11], question answering [12,13], knowledge base population [14], and recommendation [15,16].…”
Section: Introduction (mentioning, confidence: 99%)
“…In FET, knowledge graphs (KGs) usually play an important role. For example, given large-scale KGs, FET systems resort to distant supervision [10] to generate large training corpora [9,17,18] (i.e., to label entity mentions in the training corpus with all types associated with the entity in KGs). Although distant supervision can eliminate the high cost of labeling training data with KGs, how to efficiently encode a KG’s typing knowledge into a FET model is still underexplored.…”
Section: Introduction (mentioning, confidence: 99%)
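The distant-supervision scheme described in the quote can be sketched as follows. This is an illustrative toy, not any cited system's pipeline: the knowledge base, entity names, and type paths are all hypothetical, and matching is naive verbatim string lookup.

```python
# Toy distant supervision for FET training-data generation: every verbatim
# mention of a KB entity is labeled with ALL types the KB associates with
# that entity, regardless of which type the sentence context supports.
kb_types = {
    "Barack Obama": ["/person", "/person/politician", "/book/author"],
    "Chicago": ["/location", "/location/city"],
}

def distant_label(sentence, kb):
    """Return (mention, types) pairs for each KB entity found in the
    sentence, copying the entity's full KB type set as the label."""
    return [(entity, types) for entity, types in kb.items()
            if entity in sentence]

sent = "Barack Obama gave a speech in Chicago."
print(distant_label(sent, kb_types))
```

Note that the sentence supports /person/politician but not /book/author, yet both are assigned; this over-labeling is the well-known noise cost of trading away manual annotation.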