2019
DOI: 10.1007/s42113-019-00049-8

Robust Standards in Cognitive Science

Abstract: Recent discussions within the mathematical psychology community have focused on how Open Science practices may apply to cognitive modelling. Lee et al. (2019) sketched an initial approach for adapting Open Science practices that have been developed for experimental psychology research to the unique needs of cognitive modelling. While we welcome the general proposal of Lee et al. (2019), we believe a more fine-grained view is necessary to accommodate the adoption of Open Science practices in the diverse areas o…

Citations: cited by 34 publications (50 citation statements)
References: 80 publications (136 reference statements)
“…Subfields of psychology and neighbouring disciplines in which non-confirmatory research activities are common practice have already begun to tackle these issues (see e.g. Crüwell et al., 2019; Jacobs, 2020; Moravcsik, 2014). Drawing on existing expertise in these fields, exchanging resources, and starting broader discussions about underutilised methods may help us overcome our unhealthy fixation on hypothesis tests.…”
Section: Discussion (mentioning)
confidence: 99%
“…We note that our approach here is one of model application, rather than model validation or comparison (Crüwell, Stefan, & Evans, 2019). We assume that there is value in using the DMC as a theoretical framework and use its parameters to inform our question about whether common mechanisms exist.…”
Section: Overview of the Paper (mentioning)
confidence: 99%
“…However, we wish to note that the aim of our study was only to distinguish between and compare different theories of speeded decision-making, and that our study does not have implications for the measurement properties of EAMs. Importantly, one of the most common uses of EAMs is as "measurement tools" of the decision-making process, where researchers use EAMs to estimate the latent parameters of the decision-making process, such as drift rate and threshold, and make inferences about how they vary across experimental conditions and/or groups (Crüwell et al., 2019). Having measurement tools that are theoretically accurate is important, as inaccuracies in the theoretical underpinning of the measurement tools may make the parameters estimated from them meaningless.…”
Section: Discussion (mentioning)
confidence: 99%
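
The statement above treats drift rate and threshold as the latent parameters of evidence accumulation models (EAMs) that researchers estimate and interpret. As a purely illustrative aid, and not code, models, or parameter values from the cited papers, the following minimal Python sketch shows how these two parameters jointly generate choices and response times in a simple diffusion-style accumulator; the function name and all numbers are hypothetical.

    import numpy as np

    def simulate_diffusion(drift, threshold, n_trials=1000, dt=0.001,
                           noise_sd=1.0, non_decision_time=0.3, seed=0):
        """Simulate choices and response times from a simple diffusion process."""
        rng = np.random.default_rng(seed)
        choices = np.empty(n_trials, dtype=int)
        rts = np.empty(n_trials)
        for i in range(n_trials):
            evidence, t = 0.0, 0.0
            # Accumulate noisy evidence until it reaches the upper (+threshold)
            # or lower (-threshold) response boundary.
            while abs(evidence) < threshold:
                evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
                t += dt
            choices[i] = int(evidence >= threshold)
            rts[i] = t + non_decision_time  # add time for encoding and motor output
        return choices, rts

    choices, rts = simulate_diffusion(drift=1.5, threshold=1.0)
    print(f"P(upper response) = {choices.mean():.2f}, mean RT = {rts.mean():.2f} s")

In this kind of model a higher drift rate yields faster, more accurate responses while a higher threshold trades speed for caution, which is why estimates of these parameters are used as measurements of the underlying decision process.
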
“…Importantly, the choice response time distributions alone were unable to qualitatively distinguish between these models, meaning that the additional constraint provided by the double response proportions created this distinction. However, our previous analyses only provided limited insight into why lateral inhibition helped in accounting for the double response proportions (see Evans, 2019c; Evans & Servant, 2020; Crüwell, Stefan, & Evans, 2019, for discussions on the importance of which and why in model assessment), with the only insight being that models without lateral inhibition provided a large over-prediction for the double response proportions, particularly for trials where an error was made for the initial response.…”
Section: Response Time (mentioning)
confidence: 98%
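
The statement above contrasts accumulator models with and without lateral inhibition and notes that models lacking it over-predict double responses. As a purely illustrative aid, and not the models fitted in the cited study, the sketch below simulates two racing accumulators in which each is suppressed in proportion to the other's activation; the race_trial helper and all parameter values are hypothetical.

    import numpy as np

    def race_trial(drifts, threshold=1.0, inhibition=0.0, dt=0.001,
                   noise_sd=1.0, max_time=3.0, rng=None):
        """Run one trial of two racing accumulators with mutual (lateral) inhibition."""
        rng = rng if rng is not None else np.random.default_rng()
        x = np.zeros(2)                 # accumulator activations
        crossing_times = [None, None]   # when (if ever) each accumulator hits threshold
        t = 0.0
        while t < max_time and any(c is None for c in crossing_times):
            # Each accumulator is driven by its own drift and suppressed in
            # proportion to the other accumulator's current activation.
            x += (drifts - inhibition * x[::-1]) * dt \
                 + noise_sd * np.sqrt(dt) * rng.standard_normal(2)
            x = np.maximum(x, 0.0)      # activations stay non-negative
            t += dt
            for j in range(2):
                if crossing_times[j] is None and x[j] >= threshold:
                    crossing_times[j] = t
        return crossing_times

    rng = np.random.default_rng(1)
    n_double = sum(
        all(c is not None for c in race_trial(np.array([2.0, 1.5]), inhibition=1.0, rng=rng))
        for _ in range(200)
    )
    print(f"Trials where both accumulators crossed (double responses): {n_double}/200")

Setting inhibition to 0.0 removes the coupling, so the losing accumulator is free to keep rising and is more likely to also reach threshold, i.e. to produce a double response; with inhibition, the leading accumulator suppresses the other, reducing that proportion.
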