2022
DOI: 10.1007/s43674-021-00024-6

How to generate data for acronym detection and expansion

Cited by 3 publications (3 citation statements)
References 13 publications
“…However, in practice, the BERT model yielded poor results, largely due to the lack of definitions retained in the model. In a previous paper [19], we explored the use of BERT on acronym-type abbreviations, and it achieved a 94% acceptance rate. Using the same method, but querying for contractions, the BERT model scored 0% acceptance.…”
Section: Ad Hoc Abbreviations (mentioning)
confidence: 99%
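The querying method this statement refers to, asking BERT to produce part of an abbreviation's expansion, can be illustrated with a short sketch. This is a minimal illustration assuming the HuggingFace transformers fill-mask pipeline; the prompt template and the example acronym are placeholders, not the cited authors' exact setup or acceptance-rate protocol.

```python
# Minimal sketch: querying a BERT masked-language model for one word of an
# acronym's expansion. The prompt template and the example acronym ("CNN")
# are illustrative assumptions, not the setup used in the cited papers.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# BERT predicts a single masked token; a multi-word expansion would be
# recovered by masking and predicting one word at a time.
prompt = "CNN stands for convolutional [MASK] network."
for candidate in fill(prompt, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))
```

The 0% result reported for contractions is consistent with this design: the model can only fill a mask with tokens it associates with the surrounding context, so abbreviation types poorly represented in pretraining data yield no usable completions.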
“…Essentially, BERT is an upgrade over recurrent neural network models, as it is able to take in context bidirectionally. Previously, we demonstrated some of the power of BERT in its ability to generate abbreviation definitions [19] and further successfully tested it on ambiguous definitions [13]. Daza et al. utilized a SloBERTa model with an additional single neural layer to tackle abbreviation disambiguation for Slovenian biographical lexicons [20].…”
Section: Introduction (mentioning)
confidence: 99%
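The "additional single neural layer" attributed to Daza et al. [20] matches a standard pattern: a pretrained encoder topped with one linear classification layer, one output per candidate sense of the abbreviation. A minimal sketch, assuming HuggingFace's AutoModelForSequenceClassification with a generic BERT checkpoint and placeholder sense labels (not the actual SloBERTa configuration or training data):

```python
# Sketch of abbreviation disambiguation as sequence classification: a
# pretrained encoder with one added linear classification layer. The model
# name, the acronym "MR", and its candidate senses are placeholders; the
# added head is randomly initialized here and would need fine-tuning on
# labeled in-context examples before its predictions mean anything.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

senses = ["magnetic resonance", "mendelian randomization"]  # hypothetical senses
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(senses)
)

inputs = tok("The MR scan showed no abnormality.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(senses[int(logits.argmax())])  # meaningful only after fine-tuning
```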
“…Moreover, Wikipedia pages are used as a source for their experiments. But a drawback of this method is that it cannot find alternative meanings for the same acronym, even in the same passage (Choi et al., 2022). Ambiguity in the acronym's meaning exists across the categories.…”
Section: Related Work (mentioning)
confidence: 99%