Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.86
Persian Ezafe Recognition Using Transformers and Its Role in Part-Of-Speech Tagging

Abstract: Ezafe is a grammatical particle in some Iranian languages that links two words together. Despite the important information it conveys, it is almost never indicated in Persian script, resulting in mistakes in reading complex sentences and errors in natural language processing tasks. In this paper, we experiment with different machine learning methods to achieve state-of-the-art results in the task of ezafe recognition. Transformer-based methods, BERT and XLM-RoBERTa, achieve the best results, the latter…
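The abstract frames ezafe recognition as a word-level tagging task solved with BERT and XLM-RoBERTa. Below is a minimal sketch of that framing, not the authors' released code: it assumes a binary per-word label (ezafe follows the word or not), the Hugging Face transformers token-classification API, and a hypothetical three-word example.

```python
# Minimal sketch (assumption, not the paper's code): ezafe recognition
# as binary token classification with an XLM-RoBERTa backbone.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "xlm-roberta-base"  # one of the backbones named in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(
    model_name, num_labels=2  # 0 = no ezafe, 1 = ezafe after this word
)

# Hypothetical example: "ketâb-e bozorg-e man" (my big book);
# the ezafe particle follows the first two words but is unwritten.
words = ["کتاب", "بزرگ", "من"]
labels = [1, 1, 0]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")

# Align word-level labels to subword tokens: label only each word's
# first subword; mark special tokens and continuation subwords with
# -100 so the cross-entropy loss ignores them.
aligned = []
prev = None
for wid in enc.word_ids():
    if wid is None or wid == prev:
        aligned.append(-100)
    else:
        aligned.append(labels[wid])
    prev = wid

out = model(**enc, labels=torch.tensor([aligned]))
out.loss.backward()  # an optimizer step would follow in a real training loop
```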

Cited by 2 publications (2 citation statements)
References 30 publications (26 reference statements)
“…In research by Doostmohammadi et al., Transformer-based methods, BERT and XLM-RoBERTa, were exploited, achieving the best results relative to previous works [58]. In another study, Ansari et al. tackled the problem of ezafe recognition using the ParsBERT transformer.…”
Section: Ezafe Recognition Using Transformers
Citation type: mentioning
Confidence: 99%
“…Using the transformers library (Wolf et al., 2020), we fine-tune different pretrained transformer-based models for the task of chunking, because earlier works have shown their superior ability on shallow parsing tasks (Tran et al., 2020; Doostmohammadi et al., 2020; Li et al., 2021). Each word embedding is obtained by taking an unweighted average of the embeddings of its subwords.…”
Section: Model
Citation type: mentioning
Confidence: 99%
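The quoted passage describes pooling subword embeddings into word embeddings by an unweighted average. A minimal sketch of that step follows, assuming the Hugging Face transformers API; the backbone name and example sentence are hypothetical and not taken from the citing paper.

```python
# Minimal sketch (assumption): build word embeddings as the unweighted
# mean of each word's subword embeddings, as the quoted passage describes.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "bert-base-cased"  # hypothetical backbone for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

words = ["The", "chunker", "averages", "subword", "embeddings"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)

word_ids = enc.word_ids()  # maps each subword token to its source word
word_embeddings = []
for w in range(len(words)):
    # Gather all subword positions belonging to word w and average them.
    idx = [i for i, wid in enumerate(word_ids) if wid == w]
    word_embeddings.append(hidden[idx].mean(dim=0))

word_embeddings = torch.stack(word_embeddings)  # (num_words, hidden_dim)
```

The unweighted mean treats every subword of a word equally; a common alternative, also compatible with this setup, is to keep only the first subword's embedding per word.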