2021
DOI: 10.1007/s12530-021-09377-2
A transfer learning approach to cross-domain authorship attribution

Cited by 11 publications
(9 citation statements)
References 34 publications
“…F1-score, a weighted harmonic mean of Precision and Recall. As can be seen, the worst results over the three shuffles are obtained consistently by Emil Gârleanu (4), Mihai Oltean (6), and Liviu Rebreanu (8), while the best results are obtained by Mihai Eminescu (2), Emilia Plugaru (7), and Petre Ispirescu (5).…”
Section: Discussion
confidence: 95%
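The F1-score referred to in this statement is the harmonic mean of precision and recall. A minimal sketch, using made-up per-author confusion counts purely for illustration:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts for one author class (not from the paper).
print(round(f1_score(tp=8, fp=2, fn=4), 3))
```

Because it is a harmonic mean, F1 is pulled toward the weaker of the two components, which is why it is a common single-number summary for per-author attribution quality.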
“…Deep neural learning, already in use for Natural Language Processing (NLP), was adopted later for authorship identification. More recently, pre-trained language models (such as BERT and GPT-2) have been fine-tuned to improve accuracy [2,4,5].…”
Section: Introduction
confidence: 99%
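The fine-tuning idea in this statement can be illustrated without any NLP library: features from a frozen "pretrained" encoder stay fixed while only a small classifier head is trained on the target authorship task. This is a generic sketch of head-only transfer learning, not the cited paper's actual method; the encoder, data, and dimensions below are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained encoder: a fixed random projection.
W_pretrained = rng.normal(size=(100, 16))

def encode(x):
    """Frozen 'pretrained' features; never updated during fine-tuning."""
    return np.tanh(x @ W_pretrained)

# Toy target-domain data: two 'authors' with shifted feature means.
X = np.vstack([rng.normal(1.0, 1.0, (50, 100)),
               rng.normal(-1.0, 1.0, (50, 100))])
y = np.array([0] * 50 + [1] * 50)

# Trainable head: logistic regression on top of the frozen features.
H = encode(X)
w = np.zeros(16)
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(H @ w + b)))  # sigmoid probabilities
    grad = p - y                            # gradient of log-loss w.r.t. logits
    w -= lr * H.T @ grad / len(y)           # update head weights only
    b -= lr * grad.mean()

acc = (((1.0 / (1.0 + np.exp(-(H @ w + b)))) > 0.5) == y).mean()
print(f"head-only accuracy: {acc:.2f}")
```

In practice the "encoder" would be a pretrained transformer (e.g., BERT), and fine-tuning may also update some or all encoder layers; the head-only variant shown here is just the simplest form of the transfer.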
“…As a particular example, it is important to note that this technology has the very promising ability to bridge the gap between generic and patient-specific computational models. An instant classic is the example of ChatGPT’s ability to ‘learn’ a language and then tailor sentences to the writing style of a given author [63], which is a form of transfer learning from the generic to the particular. Such an approach is already being applied to, for example, training a transformer on numerous examples of brain structure to then detect a patient-specific anomaly (i.e., a tumor) in spite of inter-subject variability [64].…”
Section: Modeling for Personalization
confidence: 99%
“…However, the drawback of n-grams is that they produce sparse features as n increases [47]. Traditional feature representations such as Tf-Idf suffer from data sparsity and high dimensionality when representing n-grams and have difficulty grasping the semantic meaning of texts [15].…”
Section: Reviews of AI Work in SMF
confidence: 99%
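The dimensionality blow-up this statement describes is easy to see by counting distinct character n-grams as n grows; a minimal sketch on a made-up two-document corpus:

```python
def char_ngrams(text: str, n: int):
    """All overlapping character n-grams of `text`."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

# Toy corpus, invented for illustration only.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
]

# The vocabulary (feature dimensionality of a bag-of-n-grams or Tf-Idf
# representation) grows quickly with n, while each document contains
# only a small fraction of it -- hence increasingly sparse vectors.
for n in (1, 2, 3, 4):
    vocab = set()
    for doc in corpus:
        vocab.update(char_ngrams(doc, n))
    print(n, len(vocab))
```

With realistic corpora the effect is far more pronounced: word-level n-gram vocabularies can reach millions of dimensions for n ≥ 3, which is the sparsity problem the quoted passage attributes to Tf-Idf-style representations.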