2020
DOI: 10.2196/preprints.26628
Preprint
Machine Learning–Based Prediction of Growth in Confirmed COVID-19 Infection Cases in 114 Countries Using Metrics of Nonpharmaceutical Interventions and Cultural Dimensions: Model Development and Validation (Preprint)

Abstract: BACKGROUND National governments worldwide have implemented nonpharmaceutical interventions to control the COVID-19 pandemic and mitigate its effects. OBJECTIVE The aim of this study was to investigate the prediction of future daily national confirmed COVID-19 infection growth—the percentage change in total cumulative cases—across 14 days for 114 countries using nonpharmaceutical intervention me…
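The abstract's target variable, as far as it is visible here, is the percentage change in total cumulative confirmed cases over a 14-day window. A minimal sketch of one plausible reading of that definition (the authors' exact formula is not shown in this excerpt, and `growth_14d` is a hypothetical name):

```python
# Hedged sketch: percentage change in cumulative confirmed cases
# across a 14-day window, one plausible reading of the abstract's
# "future daily national confirmed COVID-19 infection growth".

def growth_14d(cumulative_cases: list[float], day: int) -> float:
    """Percentage growth in cumulative cases from `day` to `day + 14`."""
    start = cumulative_cases[day]
    end = cumulative_cases[day + 14]
    if start == 0:
        raise ValueError("no cases at start of window")
    return 100.0 * (end - start) / start

# Toy series: cases rise by 50 per day starting from 100.
cases = [100 + 50 * d for d in range(20)]  # 100, 150, 200, ...
print(growth_14d(cases, 0))  # cases[14] = 800 -> 700.0
```

The window length (14 days) is taken directly from the abstract; everything else in the sketch is an assumption.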

Cited by 1 publication (1 citation statement)
References 70 publications
“…Human simulatability (Doshi-Velez and Kim, 2017) has a rich history in machine learning interpretability as a reliable measure of rationale quality from the lens of utility to an end-user (Kim et al., 2016; Chandrasekaran et al., 2018; Yeung et al., 2020; Poursabzi-Sangdeh et al., 2021; Rajagopal et al., 2021, i.a.). Rather than computing word-level overlap with a ground-truth explanation, simulatability measures the additional predictive ability towards the predicted label that a rationale provides over the input, computed as the difference between task performance when a rationale is given as input vs. when it is not (IR→Ô minus I→Ô).…”
Section: Tasks and Datasets
confidence: 99%
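The metric quoted above can be sketched as a difference of two task-performance scores. A minimal sketch, assuming accuracy as the task metric; `simulatability`, `predict_with_rationale`, and `predict_without_rationale` are hypothetical names standing in for a simulator model's two input conditions:

```python
# Hedged sketch of the simulatability metric from the quote:
# performance when input I and rationale R are given (IR -> Ô)
# minus performance when only the input is given (I -> Ô).
# Accuracy is used as the task metric here for illustration.

def simulatability(examples, predict_with_rationale, predict_without_rationale):
    """Accuracy(I+R -> label) minus Accuracy(I -> label)."""
    n = len(examples)
    acc_with = sum(predict_with_rationale(x, r) == y for x, r, y in examples) / n
    acc_without = sum(predict_without_rationale(x) == y for x, r, y in examples) / n
    return acc_with - acc_without

# Toy usage: the rationale reveals the label; the input alone does not.
examples = [("doc1", "label is 1", 1), ("doc2", "label is 0", 0)]
f_with = lambda x, r: int(r[-1])  # reads the label off the rationale
f_without = lambda x: 1           # constant guess from input alone
print(simulatability(examples, f_with, f_without))  # 1.0 - 0.5 = 0.5
```

A positive score means the rationale carries predictive information beyond the input itself, which is the intuition the quoted passage describes.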