2021
DOI: 10.1007/978-981-16-1964-9_3

FGN: Fusion Glyph Network for Chinese Named Entity Recognition

Abstract: Chinese NER is a challenging task. As pictographs, Chinese characters contain latent glyph information, which is often overlooked. In this paper, we propose FGN, a Fusion Glyph Network for Chinese NER. Besides adding glyph information, this method may also add extra interactive information via the fusion mechanism. The major innovations of FGN include: (1) a novel CNN structure called CGS-CNN is proposed to capture both glyph information and interactive information between glyphs from neighboring char…
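
The truncated abstract describes CGS-CNN only at a high level. As a rough, hypothetical sketch of the underlying idea (a convolution whose receptive field spans the glyph bitmaps of neighboring characters), the following PyTorch snippet encodes a character sequence with a 3D convolution; the class name, kernel sizes, and pooling are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class GlyphSeqEncoder(nn.Module):
    # Hypothetical stand-in for CGS-CNN: the depth-3 kernel slides along the
    # character axis, so each output position mixes glyph features from its
    # neighboring characters (the "interactive information" in the abstract).
    def __init__(self, out_dim: int = 64):
        super().__init__()
        self.conv = nn.Conv3d(1, out_dim, kernel_size=(3, 5, 5), padding=(1, 2, 2))
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # keep seq axis, squeeze H/W

    def forward(self, glyphs: torch.Tensor) -> torch.Tensor:
        # glyphs: (batch, seq_len, H, W) grayscale glyph bitmaps in [0, 1]
        x = glyphs.unsqueeze(1)                    # (batch, 1, seq_len, H, W)
        x = torch.relu(self.conv(x))               # (batch, out_dim, seq_len, H, W)
        x = self.pool(x).squeeze(-1).squeeze(-1)   # (batch, out_dim, seq_len)
        return x.transpose(1, 2)                   # per-character glyph vectors

encoder = GlyphSeqEncoder()
out = encoder(torch.rand(2, 10, 24, 24))  # e.g. 24x24 bitmaps -> (2, 10, 64)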

Cited by 19 publications (12 citation statements)
References 9 publications
“…Furthermore, due to the development of artificial intelligence and knowledge graphs, much research has explored how to extract knowledge from various forms of data such as plain text, images, and other unstructured data [26–29]. In this paper, knowledge takes the form of $({e}_{1}, r, {e}_{2})$ triples, where ${e}_{1}$ and ${e}_{2}$ are two named entities and $r$ is a directed relation between ${e}_{1}$ and ${e}_{2}$.…”
Section: Literature Review (mentioning)
confidence: 99%
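
The $({e}_{1}, r, {e}_{2})$ representation above is straightforward to mirror in code. A minimal sketch, with illustrative entity and relation strings that are not drawn from the cited papers:

from typing import NamedTuple

class Triple(NamedTuple):
    e1: str  # head entity
    r: str   # directed relation from e1 to e2
    e2: str  # tail entity

# Hypothetical example fact, purely for illustration.
fact = Triple(e1="Beijing", r="capital_of", e2="China")
print(fact)  # Triple(e1='Beijing', r='capital_of', e2='China')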
“…For example, BERT [31] is the first PLM that uses deep bidirectional transformers to learn representations from unlabelled text, and it performs significantly better across a wide range of tasks. Various studies have exploited pre-trained LMs to extract knowledge from unstructured data in different domains, such as entity recognition [26–28], relation extraction [29, 34], and joint extraction [35–37]. FLAT [26] uses a lattice structure and an elaborate transformer to perform Chinese named entity recognition (NER).…”
Section: Literature Review (mentioning)
confidence: 99%
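
As a hedged illustration of the PLM-based entity recognition mentioned above, the snippet below runs BERT as a token classifier with Hugging Face transformers. The checkpoint name and the five-label BIO scheme are assumptions for the example, and the classification head is randomly initialized, so it would need fine-tuning before producing meaningful tags.

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-chinese", num_labels=5  # e.g. BIO tags: O, B-PER, I-PER, B-LOC, I-LOC
)

inputs = tokenizer("北京是中国的首都", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)
tags = logits.argmax(-1).squeeze(0)  # one predicted label id per token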
“…FGN was proposed by Xuan et al. On the one hand, it introduces a new CNN structure called CGS-CNN; on the other hand, it provides a method that uses a sliding window and Slice-Attention to fuse the BERT representation and the glyph representation of each character [28]. Recently, some scholars have paid attention not only to the glyph features but also to the pinyin features of Chinese characters.…”
Section: Related Work (mentioning)
confidence: 99%
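
The exact Slice-Attention mechanism is defined in the FGN paper; as a rough sketch of the general idea only (splitting two character vectors into aligned slices and attending over them), consider the hypothetical function below. The slice count, scoring, and fusion rule are assumptions and likely differ from the published method.

import torch
import torch.nn.functional as F

def slice_fuse(bert_vec: torch.Tensor, glyph_vec: torch.Tensor,
               n_slices: int = 8) -> torch.Tensor:
    # bert_vec, glyph_vec: (dim,) vectors for one character; dim % n_slices == 0
    b = bert_vec.view(n_slices, -1)          # (n_slices, dim // n_slices)
    g = glyph_vec.view(n_slices, -1)
    scores = (b * g).sum(-1)                 # one similarity score per slice
    weights = F.softmax(scores, dim=0)       # attention weights over slices
    fused = weights.unsqueeze(-1) * (b + g)  # weight each fused slice
    return fused.flatten()                   # (dim,) fused character vector

fused = slice_fuse(torch.randn(64), torch.randn(64))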
“…In recent years, research on NER has concentrated on either enriching input text representations (Zhang and Yang, 2018; Nie et al., 2020b; Ma et al., 2020) or refining model architectures with various external knowledge (Zhang and Yang, 2018; Ye and Ling, 2018; Li et al., 2020a; Xuan et al., 2020; Li et al., 2020b; Shen et al., 2021). In particular, NER models, with the aid of large pre-trained language models (Peters et al., 2018; Devlin et al., 2019), have achieved impressive performance gains.…”
Section: Related Work (mentioning)
confidence: 99%