2019
DOI: 10.48550/arxiv.1912.08259
Preprint

Open Set Authorship Attribution toward Demystifying Victorian Periodicals

Abstract: Existing research in computational authorship attribution (AA) has primarily focused on attribution tasks with a limited number of authors in a closed-set configuration. This restricted set-up is far from realistic for highly entangled real-world AA tasks that involve a large number of candidate authors at test time. In this paper, we study AA in historical texts using a new data set compiled from Victorian literature. We investigate the predictive capacity of most com…

Cited by 1 publication (1 citation statement)
References 15 publications (18 reference statements)
“…Nevertheless, limited research has been conducted on open-set attribution. Badirli et al (2019) discussed the limitations of using standard machine learning techniques for open-set authorship attribution problems. Their experiments suggest that linear classifiers can achieve near-perfect attribution accuracy under closed-set assumptions; however, a more robust approach is required once a large candidate pool is considered as in open-set classification.…”
Section: Introduction
confidence: 99%
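The citation statement above contrasts closed-set attribution, where every test document belongs to a known candidate, with open-set attribution, where a document may come from an author outside the candidate pool. A minimal sketch of that distinction, assuming a simple nearest-centroid scorer over toy stylometric features with a hypothetical rejection threshold (not the method of Badirli et al.):

```python
# Illustrative sketch only: a closed-set nearest-centroid attributor
# extended to the open set by rejecting low-confidence (distant) samples.
# The features, threshold, and author labels here are invented for the demo.
import numpy as np

rng = np.random.default_rng(0)

# Toy feature vectors (e.g., function-word frequencies) for 3 known authors.
X_train = np.vstack([rng.normal(loc=m, scale=0.3, size=(20, 4))
                     for m in (0.0, 2.0, 4.0)])
y_train = np.repeat([0, 1, 2], 20)

# One centroid per known author; decision boundaries between them are linear.
centroids = np.vstack([X_train[y_train == c].mean(axis=0) for c in range(3)])

def attribute(x, threshold=1.5):
    """Return the closest known author, or -1 ('unknown') in the open set."""
    dists = np.linalg.norm(centroids - x, axis=1)
    best = int(dists.argmin())
    return best if dists[best] <= threshold else -1

known = rng.normal(loc=2.0, scale=0.3, size=4)     # resembles author 1
unknown = rng.normal(loc=10.0, scale=0.3, size=4)  # far from every centroid
print(attribute(known), attribute(unknown))
```

With the threshold removed (`threshold=np.inf`), the same scorer behaves as a closed-set classifier and would confidently (and wrongly) assign the out-of-set sample to its nearest known author, which is the failure mode the quoted passage attributes to standard closed-set techniques.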