2022
DOI: 10.1086/721797

A Falsificationist Account of Artificial Neural Networks

Cited by 2 publications (2 citation statements)
References 23 publications

“…26,[121][122][123][130][131][132] Although the iterative nature of this process may not make ML models "universally" generalizable, 26,125,128 it would certainly boost their learning capabilities by leveraging their ability to falsify prediction rules that lack empirical adequacy (as postulated by Buchholz and Raidl). 133 If some major technical challenges are overcome, [134][135][136][137] and these steps can be done automatically, 122,132,138,139 a site-specific, autonomous, endless self-learning process could eventually be developed.…”
Section: After ML Models' Deployment
confidence: 99%
“…Even if the best methods to capture the ground truth remain debatable, and the problems of induction explained by Lauc 128 are ignored, the intrinsic ability of ML to falsify prediction rules that lack empirical adequacy, 133 strengthened by the increasing availability of big data, 13,[164][165][166]229 could be leveraged to develop ML models that continuously integrate and assign specific weights (i.e., importance) to personal data (e.g., clinical, radiological, histopathological, laboratory medicine, multi-omics, self-reported, and collected with wearable devices) and population-based empirical data (e.g., related to "social determinants of health") 97,[167][168][169][170][171]229 to predict health outcomes dynamically. 122,123,139,[172][173][174] In some of these models, hidden information extracted with ML models from WSIs, which has been shown to be valuable for prediction purposes, 41,91,92,[94][95][96][97][98] will most likely obtain high weights.…”
Section: Opportunities
confidence: 99%