2023
DOI: 10.3390/ijgi12100394

ChineseCTRE: A Model for Geographical Named Entity Recognition and Correction Based on Deep Neural Networks and the BERT Model

Wei Zhang,
Jingtao Meng,
Jianhua Wan
et al.

Abstract: Social media is widely used to share real-time information and report accidents during natural disasters. Named entity recognition (NER) is a fundamental task of geospatial information applications that aims to extract location names from natural language text, and identifying location names in social media posts has accordingly become a growing demand. Named entity correction (NEC), a complementary task to NER, plays a crucial role in ensuring the accuracy of location names and further imp…

Cited by 2 publications (2 citation statements)
References 55 publications
“…Since a character or word in nested entities can be annotated with multiple distinct labels, conventional sequence labeling models cannot directly recognize nested entities. Most learning models use Bidirectional Encoder Representations from Transformers (BERT) and Bidirectional Long Short-Term Memory (Bi-LSTM) to extract word- and character-level features, obtaining the contextual semantic information of the target word or character [3,7,14–18]. Tang et al. [15] constructed a multi-task BERT-Bi-LSTM-AM-CRF intelligent processing model to classify the observation annotation sequence and obtain the named entities in a Chinese text.…”
Section: Related Work
confidence: 99%
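To make the BERT + Bi-LSTM feature-extraction pattern described in the statement above concrete, here is a minimal Python sketch. It is an assumed illustration, not the cited papers' exact pipeline: the "bert-base-chinese" checkpoint, the hidden sizes, and the example sentence are placeholders, and the attention/CRF tagging head (the "AM-CRF" part) is omitted.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

# Assumed checkpoint and layer sizes, for illustration only.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")
bilstm = nn.LSTM(input_size=768, hidden_size=256,
                 batch_first=True, bidirectional=True)

text = "济南市突降暴雨"  # hypothetical example: "A rainstorm suddenly hit Jinan City"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Character-level contextual embeddings from BERT: (1, seq_len, 768)
    char_feats = bert(**inputs).last_hidden_state
    # The Bi-LSTM adds forward and backward context: (1, seq_len, 512)
    context_feats, _ = bilstm(char_feats)

# In a full NER model, context_feats would feed a CRF layer that
# decodes BIO label sequences marking the location-name entities.
print(context_feats.shape)
```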
“…Next, the attention weights from the h_n heads are concatenated and undergo another linear transformation, resulting in the tag-aware representation vector T_awared ∈ R^(n×(P·d_tag)×d_att), as shown in Equation (18), where W_o ∈ R^((h_n·d_head)×d_att) is a learnable parameter matrix.…”
Section: Label Representation Embedding
confidence: 99%
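The quoted step is the standard multi-head output projection (concatenate the per-head outputs, then apply W_o), so a minimal PyTorch sketch may help. All dimension values below (h_n, d_head, d_att, n, P, d_tag) are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not the paper's values).
h_n, d_head, d_att = 8, 64, 256   # heads, per-head width, output width
n, P, d_tag = 32, 4, 16           # n sequences, P * d_tag tag positions

# One output tensor per attention head: (n, P*d_tag, d_head)
head_outputs = [torch.randn(n, P * d_tag, d_head) for _ in range(h_n)]

# Concatenate the h_n heads along the feature axis: (n, P*d_tag, h_n*d_head)
concat = torch.cat(head_outputs, dim=-1)

# Learnable projection W_o in R^((h_n*d_head) x d_att)
W_o = nn.Linear(h_n * d_head, d_att, bias=False)

# Tag-aware representation T_awared in R^(n x (P*d_tag) x d_att)
T_awared = W_o(concat)
print(T_awared.shape)  # torch.Size([32, 64, 256])
```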