2022
DOI: 10.1007/978-3-031-21756-2_7
Ensembling Transformers for Cross-domain Automatic Term Extraction

Cited by 6 publications (2 citation statements)
References 27 publications
“…])' (Tran et al., 2022b) or 'The results [of ATE] can either be used directly to facilitate term management for, e.g., terminologists and translators, or as a preprocessing step for other tasks within natural language processing (NLP) […]'…”
Section: Discussion
confidence: 99%
“…In particular, we found six papers that used the same dataset, the Annotated Corpora for Term Extraction Research (ACTER) dataset. The authors of Tran et al. (2022a) and Tran et al. (2022b) compare multilingual learning to monolingual learning in the cross-domain sequence-labeling term extraction task (Gooding and Kochmar, 2019). They examine the cross-lingual effect of a resource-rich training language on lower-resource languages, such as Slovenian.…”
Section: Discussion
confidence: 99%