2021
DOI: 10.1109/access.2021.3095222
From Hume to Wuhan: An Epistemological Journey on the Problem of Induction in COVID-19 Machine Learning Models and its Impact Upon Medical Research

Abstract: Advances in computer science have transformed the way artificial intelligence is employed in academia, with Machine Learning (ML) methods easily available to researchers from diverse areas thanks to intuitive frameworks that yield extraordinary results. Notwithstanding, current trends in the mainstream ML community tend to emphasise wins over knowledge, putting the scientific method aside, and focusing on maximising metrics of interest. Methodological flaws lead to poor justification of method choice, which in…

Cited by 8 publications (5 citation statements)
References 53 publications (52 reference statements)
“…Nobody questions the huge opportunities that Artificial Intelligence (AI) and ML bring to bioinformatics and computer-aided diagnosis [43], but these opportunities come with challenges [44][45][46][47]. The first is data, which needs to adhere to high-quality standards that vary from area to area.…”
Section: Data-centric AI
confidence: 99%
“…As Goyal et al note, the decision flows of computer-aided diagnosis solutions often differ from those of clinicians, hampering interpretability and inspection of the results [49] due to the black-box nature of ML models. From a design point of view, dividing the task into several sub-tasks (e.g., (1) detecting pathologies, then (2) diagnosing the disease from the pathologies) can ease both interpretability and maintenance [47].…”
Section: Models Interpretability
confidence: 99%
“…Often, models are designed in an end-to-end way that attempts to map input data to the final result with a single model. Alternatively, a medical imaging CAD system can be designed as a chain of several models, with the first dedicated to finding pathologies and the subsequent models mapping pathologies to diseases or conditions (e.g., through several one-class classifiers) (Vega, 2021). This approach eases solution maintenance and increases interpretability, allowing inspection of the intermediate results.…”
Section: Limitations Associated With Machine Learning/Deep Learning
confidence: 99%
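The staged design described in the two statements above — a first model that detects pathologies, feeding later models that map pathologies to diseases via one-class classifiers — can be sketched in a few lines. None of this code comes from the cited works; it is a minimal illustration assuming scikit-learn, with toy synthetic data standing in for medical images, and all names (`detector`, `disease_model`) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stage 1: a pathology detector trained on raw features.
# (Toy data: 200 cases, 8 features; the "pathology" label is synthetic.)
X = rng.normal(size=(200, 8))
pathology = (X[:, 0] + X[:, 1] > 0).astype(int)
detector = LogisticRegression().fit(X, pathology)

# Stage 2: a one-class classifier per disease, trained only on the
# pathology scores of known cases of that disease. The intermediate
# pathology score remains inspectable between the two stages.
path_scores = detector.predict_proba(X)[:, 1].reshape(-1, 1)
disease_model = OneClassSVM(gamma="auto").fit(path_scores[pathology == 1])

# Inference: the clinician can inspect the pathology score before
# the final disease call, rather than facing a single black box.
new_x = rng.normal(size=(1, 8))
p = detector.predict_proba(new_x)[:, 1].reshape(-1, 1)
is_disease = disease_model.predict(p)[0] == 1
print(f"pathology score: {p[0, 0]:.2f}, disease flagged: {is_disease}")
```

Because each stage is a separate model, either can be retrained or audited independently, which is the maintenance and interpretability benefit both citing papers attribute to the staged design.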
“…DL is expected to provide more accurate, faster and objective (in that it reports quantitative analysis) diagnosis [12]. However, these systems might fail to translate into real-world scenarios, presenting multiple challenges for safe applications [19]. It has been reported that ML-based health systems produce systematic errors on patient subgroup classification, consequently generating wrong predictions and flawed risk estimations [21].…”
Section: Introduction
confidence: 99%