2020
DOI: 10.1017/s1351324919000627
Transfer learning for Turkish named entity recognition on noisy text

Abstract: In this article, we investigate using deep neural networks with different word representation techniques for named entity recognition (NER) on Turkish noisy text. We argue that valuable latent features for NER can, in fact, be learned without using any hand-crafted features and/or domain-specific resources such as gazetteers and lexicons. In this regard, we utilize character-level, character n-gram-level, morpheme-level, and orthographic character-level word representations. Since noisy data with NER annotation…

Cited by 14 publications (5 citation statements)
References 31 publications
“…Transfer learning refers to a learning process that uses existing knowledge and the similarity between data and models to solve problems in different but related domains [26]. This technique can alleviate the problem of sparse training samples in deep learning and has been successfully applied in image recognition [27,28], speech recognition [29,30], and text recognition [31,32]. Transfer learning is a machine-learning paradigm that generalizes the commonalities between different tasks or domains and applies models from the source domain to the target domain.…”
Section: Model-based Transfer-learning Methods
confidence: 99%
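A minimal sketch of the model-based transfer idea described in that statement, assuming a PyTorch-style setup (the `Encoder`/`TargetModel` names, layer sizes, and the freezing choice are illustrative, not taken from the cited works): an encoder trained on a data-rich source domain is reused, and only a fresh task head is fine-tuned on the sparse target-domain data.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared feature extractor, assumed pretrained on the source domain."""
    def __init__(self, vocab_size=10_000, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq)
        out, _ = self.lstm(self.embed(tokens))  # (batch, seq, hidden)
        return out

class TargetModel(nn.Module):
    """Transferred encoder plus a fresh head for the target task."""
    def __init__(self, encoder, hidden=128, n_labels=9):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, tokens):
        return self.head(self.encoder(tokens))

# Hypothetical source-domain encoder; in practice its weights would be
# loaded from a checkpoint saved after source-domain training.
source_encoder = Encoder()

# Transfer: reuse the source encoder, optionally freeze it, and
# fine-tune only the new head on the (small) target-domain data.
target = TargetModel(source_encoder)
for p in target.encoder.parameters():
    p.requires_grad = False  # freeze transferred layers (optional)

optimizer = torch.optim.Adam(
    (p for p in target.parameters() if p.requires_grad), lr=1e-3)

tokens = torch.randint(0, 10_000, (2, 6))  # toy target-domain batch
labels = torch.randint(0, 9, (2, 6))
logits = target(tokens)                    # (2, 6, 9)
loss = nn.functional.cross_entropy(logits.reshape(-1, 9),
                                   labels.reshape(-1))
loss.backward()
optimizer.step()
```

Freezing the encoder is one common choice when target data is very scarce; with more data, unfreezing and fine-tuning the whole network at a lower learning rate is also typical.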
“…We use a BiLSTM-CRF model, where each word is encoded through a BiLSTM and decoded with a CRF layer, to learn the named entities in a given text (Kagan Akkaya and Can, 2021). We feed the BiLSTM with character-level (learned through a character-level BiLSTM), character n-gram-level (fastText), morpheme-level (morph2vec), and word-level embeddings (word2vec), as well as orthographic embeddings that are learned either with a CNN or a BiLSTM by encoding alphabetic characters, similar to Aguilar et al. (2017).…”
Section: Named Entity Recognition (NER)
confidence: 99%
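A minimal sketch of the BiLSTM-CRF tagger that statement describes, assuming the per-token embeddings (character, character n-gram, morpheme, word, and orthographic) have already been computed and concatenated into one feature vector per token; the CRF layer here comes from the third-party pytorch-crf package, and all names and sizes are illustrative rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, input_dim=300, hidden=256, n_tags=9):
        super().__init__()
        # BiLSTM contextualizes each token's concatenated embeddings.
        self.bilstm = nn.LSTM(input_dim, hidden // 2, batch_first=True,
                              bidirectional=True)
        self.emit = nn.Linear(hidden, n_tags)      # per-token tag scores
        self.crf = CRF(n_tags, batch_first=True)   # learns tag transitions

    def loss(self, feats, tags, mask):
        out, _ = self.bilstm(feats)
        # CRF returns the log-likelihood; negate it for a training loss.
        return -self.crf(self.emit(out), tags, mask=mask)

    def predict(self, feats, mask):
        out, _ = self.bilstm(feats)
        return self.crf.decode(self.emit(out), mask=mask)  # best tag paths

# Toy batch: 2 sentences, 5 tokens, 300-dim concatenated embeddings.
feats = torch.randn(2, 5, 300)
tags = torch.randint(0, 9, (2, 5))
mask = torch.ones(2, 5, dtype=torch.bool)

model = BiLSTMCRF()
model.loss(feats, tags, mask).backward()  # one training step's gradients
print(model.predict(feats, mask))         # decoded tag sequences
```

The CRF on top of the BiLSTM is what enforces valid label transitions (e.g., an I- tag following the matching B- tag in BIO schemes), which per-token softmax decoding cannot guarantee.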
“…They reached an F1 score of 91.30% without using any external resources. Akkaya and Can (2021) tackled the Turkish NER problem with transfer learning and additional CRF layers, reaching an entity-level F1 score of 67.39% on a non-domain-specific noisy corpus.…”
Section: NER Models Developed for Turkish
confidence: 99%