2011
DOI: 10.1098/rsta.2011.0227
A tentative taxonomy for predictive models in relation to their falsifiability

Abstract: The growing importance of predictive models in biomedical research raises some concerns about the correct methodological approach to the falsification of such models, as they are developed in interdisciplinary research contexts between physics, biology and medicine. In each of these research sectors, there are established methods to develop cause-effect explanations for observed phenomena, which can be used to predict: epidemiological models, biochemical models, biophysical models, Bayesian models, neural network…

Cited by 14 publications (9 citation statements) · References 51 publications
“…Second, a key use for multi‐resolution modeling is generating fast and accurate predictions, and the focus is on inductive and deductive reasoning. Our interest is different: models and methods were needed to enable mechanistic exploration and subsequent development of improved explanations of phenomena including improved phenotype overlap (Figure ). Improving insight requires shifting the aspect of focus of new virtual experiments, i.e., resolution tuning, in ways not easily anticipated.…”
Section: Evolving Principles For Building Tuneable Resolution Models
Confidence: 99%
“…Improving insight requires shifting the aspect of focus of new virtual experiments, i.e., resolution tuning, in ways not easily anticipated. Those activities require expanding the range of reasoning methods used to include analogical and abductive reasoning (Figure ).…”
Section: Evolving Principles For Building Tuneable Resolution Models
Confidence: 99%
“…For phenomenological models, such as machine learning, applicability is framed in terms of the generalisation error [17]. For mechanistic models it is related to the concept of "limit of validity" of the theory used to develop the model [18].…”
Section: Applicability
Confidence: 99%
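The generalisation error mentioned in the statement above can be made concrete with a minimal sketch. This is an illustration only, not code from any of the cited papers: a toy phenomenological model (ordinary least squares on synthetic data, all names and values invented here) whose applicability is gauged by the gap between its error on training data and on held-out data.

```python
# Illustrative sketch only (not from the cited papers): for a
# phenomenological model, "applicability" can be framed via the
# generalisation error -- the gap between the model's error on
# training data and its error on new, held-out data.
import random

def fit_linear(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mse(xs, ys, a, b):
    """Mean squared error of the fitted line on (xs, ys)."""
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
# Synthetic ground truth y = 2x + 1 with Gaussian noise.
data = [(x, 2 * x + 1 + random.gauss(0, 0.5))
        for x in (i / 10 for i in range(100))]
random.shuffle(data)
train, test = data[:70], data[70:]

a, b = fit_linear(*zip(*train))
train_err = mse(*zip(*train), a, b)
test_err = mse(*zip(*test), a, b)
gen_gap = test_err - train_err  # empirical estimate of the generalisation error

print(f"train MSE={train_err:.3f}  test MSE={test_err:.3f}  gap={gen_gap:.3f}")
```

A small gap suggests the model applies to data like the held-out set; a large gap signals the model has been fitted beyond its region of applicability, which is the phenomenological analogue of exceeding a mechanistic theory's limit of validity.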
“…Patterson and Whelan (2017) describe a broad concept of validation of biological models, which includes but is more expansive than the engineering/DoD understanding of validation. Viceconti (2011) refers to model “falsification,” rather than validation, based on the contention that models can only be invalidated (falsified). One common feature of most of the different interpretations of validation is that validation must involve new data not used in the construction of the model, i.e., “calibration is not validation” (Roache, 2009).…”
Section: Why Trust a Computational Model?
Confidence: 99%