2020
DOI: 10.1371/journal.pcbi.1007593
Computational optimization of associative learning experiments

Abstract: With computational biology striving to provide more accurate theoretical accounts of biological systems, use of increasingly complex computational models seems inevitable. However, this trend engenders a challenge of optimal experimental design: due to the flexibility of complex models, it is difficult to intuitively design experiments that will efficiently expose differences between candidate models or allow accurate estimation of their parameters. This challenge is well exemplified in associative learning re…

Cited by 8 publications (6 citation statements)
References 53 publications
“…To what extent these quantities differ between CS+ and CS− trials can depend on the experimental design, such that some designs may favor one CR and other designs a different CR. Regarding the influence of experimental design on inference in associative learning experiments, we refer the reader to Melinscak and Bach (2020).…”
Section: Discussion
confidence: 99%
“…These approaches rely on increased transparency in data reporting and analysis, and we maintain that decisions during data reduction and analysis should be reported and justified (Lonsdorf, Klingelhöfer‐Jens, et al, 2019; Ney et al., 2018). It is also possible that reproducibility may be improved by computational optimization of paradigm design, which may aid in determining how to vary experimental parameters to answer specific research questions (Melinscak & Bach, 2020).…”
Section: Discussion
confidence: 99%
“…Similarly, the cause for inadequate robustness between trial‐by‐trial and averaged data should be systematically investigated. It is possible that the failure of these methods to replicate is due to lack of power, in which case methods that seek to improve power via experimental design and SCR scoring are highly desirable (Bach & Melinscak, 2020; Melinscak & Bach, 2020). Specifically, improving SCR scoring can provide better estimates of responses relevant to extinction paradigms by reducing measurement error (Bach & Melinscak, 2020), and optimization of experimental designs based on statistical requirements can make analyses more amenable to experimental data (Melinscak & Bach, 2020).…”
Section: Discussion
confidence: 99%
“…Estimation noise is the inverse of the (relative or absolute) agreement between the true and recovered parameters. Alterations to experimental design can improve parameter recovery and estimation noise, and multiple frameworks have been proposed for testing and improving experimental designs to aid parameter recovery [63,64]. Parameter recovery can also be affected by the model estimation method [34,65].…”
Section: Reducing Parameter Estimation Noise
confidence: 99%
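The parameter-recovery procedure described in the last citation statement can be illustrated with a minimal sketch: simulate behavior from a known parameter, refit the model to the noisy simulated data, and compare the recovered value against the true one. The sketch below is a generic illustration, not the method of the cited paper; it assumes a simple Rescorla-Wagner delta-rule model with a single learning rate and a hypothetical grid-search fitting routine.

```python
import numpy as np

def simulate_rw(alpha, outcomes, v0=0.0):
    """Trial-by-trial Rescorla-Wagner value estimates for a sequence of outcomes."""
    v, values = v0, []
    for o in outcomes:
        values.append(v)
        v = v + alpha * (o - v)  # delta-rule update toward the observed outcome
    return np.array(values)

def recover_alpha(outcomes, observed, grid=np.linspace(0.01, 1.0, 100)):
    """Grid search for the learning rate minimizing squared error to the data."""
    errors = [np.mean((simulate_rw(a, outcomes) - observed) ** 2) for a in grid]
    return grid[int(np.argmin(errors))]

rng = np.random.default_rng(0)
outcomes = rng.binomial(1, 0.7, size=200)   # e.g. a CS+ reinforced on 70% of trials
true_alpha = 0.30
# Add measurement noise to mimic an observed conditioned-response time series
observed = simulate_rw(true_alpha, outcomes) + rng.normal(0, 0.05, size=200)

alpha_hat = recover_alpha(outcomes, observed)
print(f"true alpha = {true_alpha}, recovered alpha = {alpha_hat:.2f}")
```

In a full recovery study this simulate-and-refit loop would be repeated across many parameter values and designs; designs for which recovered parameters track the true ones closely are the ones the cited frameworks aim to identify.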