2019
DOI: 10.13053/cys-23-3-3247
Joint Learning of Named Entity Recognition and Dependency Parsing using Separate Datasets

Cited by 5 publications (6 citation statements)
References 11 publications
“…The conventional model requires a single dataset annotated with labels for both tasks, which is a delimiting constraint for less resourced languages. Instead, we proposed using separate datasets for each task (Akdemir and Güngör, 2019b) which allows the model to be trained on a larger dataset.…”
Section: Discussion
Confidence: 99%
“…Specifically, we will apply this method to leverage our previously proposed joint learner for Dependency Parsing and Named Entity Recognition. Part-of-speech tags strongly correlate with named entities and dependencies (Hashimoto et al, 2017;Akdemir and Güngör, 2019b). Thus, we argue that learning joint label embeddings of these tasks can help to further capture the relations between them.…”
Section: Research Plan
Confidence: 93%
“…Moreover, this work also encapsulates the most recent NER results for four different morphologically rich languages, which are Czech (81.05% F1), Hungarian (96.11% F1), Finnish (84.34% F1), and Spanish (86.95% F1). Concurrently, Akdemir (2018) combined the same input configuration with joint learning of dependency parsing and NER, which was initially described by Finkel and Manning (2009). This model reached up to a 90.9% F1 score.…”
Section: NER Models Developed for Turkish
Confidence: 99%
“…As Yeniterzi, Tür, and Oflazer (2018) wrapped up, the first adaptations of Turkish NER again consisted mainly of handcrafted rules (Küçük and Yazıcı 2009; Dalkılıç, Gelişli, and Diri 2010) and statistical models (Hakkani-Tür, Oflazer, and Tür 2002; Tür, Hakkani-Tür, and Oflazer 2003). Later, machine learning approaches dominated contemporary applications with conditional random fields (CRFs) (Yeniterzi 2011; Şeker and Eryiğit 2012; Küçük and Steinberger 2014) and neural network-based models (Demir and Özgür 2014; Akdemir 2018; Güngör et al. 2019).…”
Section: Introduction
Confidence: 99%
“…For this reason, studies are being carried out to collect dedicated data for social media texts (Okur et al., 2016). The most recent studies in the Turkish NER field utilize Transformer- and BERT-based models (Yıldırım, 2019; Akdemir and Güngör, 2019; Aras et al., 2021).…”
Section: Introduction
Confidence: 99%