2020
DOI: 10.48550/arxiv.2006.13816
Preprint

Document Classification for COVID-19 Literature

Cited by 3 publications (7 citation statements: 1 supporting, 6 mentioning, 0 contrasting)
References 8 publications
“…The observations are similar when comparing LITMC-BERT with Linear BERT: e.g., its macro-F1 and accuracy are up to 2% and 4% higher on the HoC dataset, respectively. In terms of comparing Binary BERT with Linear BERT, Binary BERT achieved overall better performance on the LitCovid BioCreative dataset, which is consistent with the literature [6,15], whereas Linear BERT achieved overall better performance on the HoC dataset.…”
Section: Statistic Test and Reporting Standards (supporting)
confidence: 86%
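A minimal sketch of how the metrics compared in this statement are computed for multi-label predictions, using scikit-learn; the label count and arrays are illustrative assumptions, not data from the cited papers:

```python
import numpy as np
from sklearn.metrics import f1_score, accuracy_score

# Hypothetical predictions for a 7-label task (e.g., topic labels);
# rows are documents, columns are binary label indicators.
y_true = np.array([[1, 0, 1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 1, 0, 0],
                   [1, 1, 0, 0, 0, 0, 1]])
y_pred = np.array([[1, 0, 1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0, 0, 0],
                   [1, 1, 0, 0, 0, 0, 1]])

# Macro-F1 averages the per-label F1 scores, so rare labels count
# as much as frequent ones.
macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)

# In the multi-label setting, sklearn's accuracy_score is exact-match
# (subset) accuracy: every label of a document must be predicted correctly.
accuracy = accuracy_score(y_true, y_pred)
print(f"macro-F1: {macro_f1:.3f}, exact-match accuracy: {accuracy:.3f}")
```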
“…Such methods have achieved promising performance in a range of multi-label classification tasks [13,22]. Indeed, existing studies have shown that binary relevance BERT achieved the best performance for topic annotation in LitCovid [6,15]. However, it is computationally expensive, and transforming multi-label classification tasks into binary classification may ignore the correlations among labels.…”
Section: Multi-label Classification (mentioning)
confidence: 99%
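The binary-relevance transformation this statement refers to is easy to sketch: fit one independent binary classifier per label, so a prediction for one label never sees the others. The sketch below uses scikit-learn with toy data; the shapes and features are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                 # toy document features
Y = (rng.random((200, 7)) < 0.3).astype(int)   # toy 7-label targets

# Binary relevance: MultiOutputClassifier clones the base estimator and
# fits one LogisticRegression per label column, independently. This is
# the "ignores correlations among labels" caveat from the quote above,
# and with one BERT model per label it is also why the approach is
# computationally expensive.
br = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)

pred = br.predict(X[:5])
print(pred.shape)  # (5, 7): one independent binary decision per label
```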
“…Future studies may involve changing the SciBERT model to a […] There are a few previous studies that share the same objective as our study: classification on the CORD-19 dataset. A study from 2020 [51] used the LitCovid dataset to train the machine learning models. In that article, they trained Logistic Regression and Support Vector Machine as the traditional machine learning models, an LSTM as the neural network model, and fine-tuned BioBERT and Longformer [52] as the BERT-based models.…”
Section: Discussion (mentioning)
confidence: 99%
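A minimal sketch of the BERT-family fine-tuning setup this statement describes, using the Hugging Face transformers API; the checkpoint name, label count, and example text are illustrative assumptions, not the cited study's code:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "dmis-lab/biobert-base-cased-v1.1"  # assumed BioBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=7,                               # e.g., LitCovid topic labels
    problem_type="multi_label_classification",  # sigmoid + BCE loss head
)

batch = tokenizer(
    ["Remdesivir trial results in hospitalized patients."],
    return_tensors="pt", truncation=True, padding=True,
)
labels = torch.zeros((1, 7))
labels[0, 2] = 1.0  # hypothetical positive topic for the example document

# One illustrative training step (optimizer and loop omitted);
# with problem_type set, the model uses BCEWithLogitsLoss internally.
loss = model(**batch, labels=labels).loss
loss.backward()
```

Longformer differs mainly in accepting much longer inputs; the same multi-label head and loss apply.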