2021
DOI: 10.1080/00051144.2021.1922150
Advancing natural language processing (NLP) applications of morphologically rich languages with bidirectional encoder representations from transformers (BERT): an empirical case study for Turkish

Abstract: Language model pre-training architectures have been shown to be useful for learning language representations. Bidirectional encoder representations from transformers (BERT), a recent deep bidirectional self-attention representation learned from unlabelled text, has achieved remarkable results in many natural language processing (NLP) tasks with fine-tuning. In this paper, we demonstrate the efficiency of BERT for a morphologically rich language, Turkish. Traditionally, morphologically difficult languages require …
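The fine-tuning setup the abstract refers to can be pictured with a minimal sketch, assuming the Hugging Face transformers library and a publicly available Turkish BERT checkpoint (dbmdz/bert-base-turkish-cased is used purely as an illustration; it is not necessarily the model trained or evaluated in the paper, and the toy sentences and labels are invented):

```python
# Minimal sketch: fine-tuning a pre-trained Turkish BERT for text classification.
# The checkpoint, toy data, and hyperparameters are illustrative assumptions only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "dbmdz/bert-base-turkish-cased"   # assumed public Turkish BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["Bu film harikaydı.", "Hizmet çok kötüydü."]   # toy Turkish examples
labels = torch.tensor([1, 0])                           # 1 = positive, 0 = negative

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                       # a few fine-tuning steps on the toy batch
    optimizer.zero_grad()
    out = model(**enc, labels=labels)    # cross-entropy loss computed internally
    out.loss.backward()
    optimizer.step()
```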

Cited by 23 publications (15 citation statements); references 54 publications.
“…The transformer system in BERT identifies contextual relationships between words in text [32], using the encoder and decoder parts to make predictions based on the input data. The encoder-decoder system of the transformer leverages the attention mechanism to attain state-of-the-art effectiveness on the majority of NLP tasks [33]. The input embeddings are generated by adding a positional encoding of each word to a pre-trained embedding vector.…”
Section: XLNet (mentioning)
confidence: 99%
“…The transformer architecture has two key components: an encoder and a decoder. The encoder-decoder system leverages the attention mechanism to attain state-of-the-art effectiveness on the majority of natural language tasks [45].…”
Section: BERT: Bidirectional Encoder Representations from Transformers (mentioning)
confidence: 99%
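A minimal sketch of the attention mechanism that statement refers to, here in the standard scaled dot-product formulation with purely illustrative random inputs:

```python
# Scaled dot-product attention: Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V.
# Shapes and random inputs are illustrative only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

seq_len, d_k = 4, 8
Q = np.random.randn(seq_len, d_k)
K = np.random.randn(seq_len, d_k)
V = np.random.randn(seq_len, d_k)
print(scaled_dot_product_attention(Q, K, V).shape)   # (4, 8)
```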
“…Moreover, to handle the morphological richness of the Persian language, we use the BERT language model. Özçift et al. [17] demonstrated that BERT can overcome the problem of morphological richness. The following research questions were explored in this article: Does using a native dataset for the answer selection task show better performance than using a translated dataset for the Persian language?…”
Section: Introduction (mentioning)
confidence: 99%
“…PerAnSel deploys a transformer-based language model to process sentences with differing word orders. The transformer-based model is composed of fully connected neural networks and an attention mechanism [19], which enable it to address the morphological richness of the Persian language [17]. In order to address the answer selection problem for the Persian language, we use transformer-based models and sequential models in parallel.…”
Section: Introduction (mentioning)
confidence: 99%
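As a purely illustrative sketch, not the actual PerAnSel architecture, the idea of running a transformer-based encoder and a sequential (recurrent) encoder in parallel and combining their representations might look like this in PyTorch; all layer sizes and pooling choices are assumptions:

```python
# Illustrative parallel combination of a transformer encoder and a GRU encoder.
# Not the PerAnSel model; sizes and pooling are arbitrary assumptions.
import torch
import torch.nn as nn

class ParallelEncoder(nn.Module):
    def __init__(self, d_model=64, n_heads=4, hidden=64):
        super().__init__()
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=1)
        self.gru = nn.GRU(d_model, hidden, batch_first=True)

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        t = self.transformer(x).mean(dim=1)   # pooled transformer representation
        _, h = self.gru(x)                    # final GRU hidden state
        return torch.cat([t, h[-1]], dim=-1)  # parallel features concatenated

x = torch.randn(2, 10, 64)                    # toy batch of embedded sentences
print(ParallelEncoder()(x).shape)             # torch.Size([2, 128])
```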