2019
DOI: 10.48550/arxiv.1909.00164
Preprint

Named Entity Recognition Only from Word Embeddings

Abstract: Deep neural network models have helped named entity (NE) recognition achieve impressive performance without handcrafted features. However, existing systems require large amounts of human-annotated training data. Efforts have been made to replace human annotations with external knowledge (e.g., NE dictionaries, part-of-speech tags), but obtaining such effective resources is itself a challenge. In this work, we propose a fully unsupervised NE recognition model which only needs to take informative clues from pre-…
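The abstract is cut off, but the citation statement below confirms the model relies only on pretrained word embeddings. As a rough illustration of that embeddings-only idea (not the paper's actual model, which goes beyond simple clustering), the following Python sketch groups a handful of words by entity type using nothing but off-the-shelf GloVe vectors; the specific vector set and word list are assumptions made for the example.

```python
# Illustrative sketch only: clustering pretrained word embeddings to group
# candidate entity words by type. This is NOT the paper's actual model; it
# merely shows that embeddings alone carry a usable entity-type signal.
# "glove-wiki-gigaword-100" is an assumed, publicly downloadable vector set.
import gensim.downloader as api
from sklearn.cluster import KMeans

kv = api.load("glove-wiki-gigaword-100")  # pretrained 100-d GloVe vectors

# A few words whose types we would like to recover without any labels.
words = ["london", "paris", "tokyo",        # locations
         "google", "microsoft", "amazon",   # organizations
         "einstein", "newton", "darwin"]    # persons

vectors = [kv[w] for w in words]            # dense feature vectors only

# Unsupervised grouping: words of the same entity type tend to land in the
# same cluster because their embeddings are close in vector space.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for word, label in zip(words, labels):
    print(f"{word:>10s} -> cluster {label}")
```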

Cited by 2 publications (2 citation statements)
References 39 publications
“…Some works have also performed NER without labeled data or pretrained models. For example, Luo et al. propose a fully unsupervised NER model that only relies on pretrained word embeddings [17]. Another approach uses natural language prompts to guide LLMs to perform NER without fine-tuning or labeling data [18].…”
Section: Entity Extraction
confidence: 99%
“…Learning a vector representation is, in many cases, a preprocessing step to facilitate another task. For example, the vector representation learned by Word2Vec [52] has been used to map words to dense feature vectors to solve tasks such as Named Entity Recognition [38,48].…”
Section: Learnable Vector Representations
confidence: 99%
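The statement above describes Word2Vec purely as a preprocessing step that maps words to dense feature vectors for a downstream NER system. A minimal sketch of that step is given below; the pretrained vector file (the Google News Word2Vec binary) and the helper name embed_sentence are assumptions for illustration, and the NER tagger itself is left out.

```python
# Minimal sketch of the preprocessing step described above: mapping the words
# of a sentence to dense Word2Vec vectors that a downstream NER tagger would
# consume as features. Training the tagger is out of scope here.
# The vector file path is an assumption made for illustration.
import numpy as np
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
                                        binary=True)  # pretrained Word2Vec
dim = kv.vector_size  # 300 for this model

def embed_sentence(tokens):
    """Return one dense feature vector per token; unknown words get zeros."""
    return np.stack([kv[t] if t in kv.key_to_index else np.zeros(dim)
                     for t in tokens])

features = embed_sentence(["Barack", "Obama", "visited", "Berlin"])
print(features.shape)  # (4, 300): one 300-d feature vector per token
```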