2018
DOI: 10.3758/s13428-018-1036-5

Comparability, stability, and reliability of internet-based mental chronometry in domestic and laboratory settings

Abstract: The internet-based assessment of response time (RT) and error rate (ERR) has recently become a well-validated alternative to traditional laboratory-based assessment, because methodological research has provided evidence for negligible setting- and setup-related differences in RT and ERR measures of central tendency. However, corresponding data on potential differences in the variability of such performance measures are still lacking, to date. Hence, the aim of this study was to conduct internet-based mental ch…

Cited by 32 publications (27 citation statements)
References 41 publications

“…One important consideration is the online nature of the present experiments. While online experiments can be robust and lead to reliable results that are comparable to lab-based experiments (Bridges et al., 2020; de Leeuw & Motz, 2016; Miller et al., 2018), and similar biases have been found in screen-based experiments using narratives or ostensible interaction (Cikara et al., 2014), the players in the present experiments were disembodied agents presented on a screen. Embodiment is a crucial factor that drives engagements with other humans (Schilbach et al., 2013) and robots (Wykowska et al., 2016).…”
Section: Discussion (supporting)
confidence: 68%
“…Participants gave written informed consent and received either monetary rewards (18€) or course credit for their participation. Participants worked on several questionnaires and three executive function tasks measuring inhibition, shifting, and working memory updating (for task descriptions, see below), once in the laboratory and once at home, with one week in between and in randomized order, because a further goal was to measure whether task performance differed at home compared to in the lab, which is reported elsewhere (Miller et al., 2018). There were no substantial differences between the lab and home contexts.…”
Section: Methods (mentioning)
confidence: 99%
“…If not, they had to press the key “M” (non-target). Each position was presented for 2.5 s. Each of the cognitive tasks started with a short training period and lasted about 10 min (for a more detailed task description, see Miller et al., 2018).…”
Section: Methods (mentioning)
confidence: 99%
“…In fact, several researchers have provided evidence that response times are comparable between browser-based applications and local applications (Barnhoorn, Haasnoot, Bocanegra, & Steenbergen, 2015), even in poorly standardized domestic environments, i.e., at home (Miller, Schmidt, Kirschbaum, & Enge, 2018).…”
Section: Current State of the Art (mentioning)
confidence: 99%
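
To make the "browser-based" point above concrete, the sketch below shows one way a keypress response time can be captured in a browser using the high-resolution performance.now() clock. This is a minimal illustration under assumed conditions (a standard DOM environment), not code from any of the cited studies; the target key "x" is a hypothetical placeholder, since the quoted task description only specifies "M" for non-targets.

```typescript
// Minimal sketch of browser-based response-time capture (TypeScript).
// Stimulus rendering and trial sequencing are omitted for brevity.
function collectResponseTime(validKeys: string[]): Promise<{ key: string; rt: number }> {
  // performance.now() returns sub-millisecond timestamps relative to page load.
  const stimulusOnset = performance.now();
  return new Promise((resolve) => {
    const onKeyDown = (event: KeyboardEvent) => {
      if (!validKeys.includes(event.key)) return; // ignore irrelevant keys
      window.removeEventListener("keydown", onKeyDown);
      resolve({ key: event.key, rt: performance.now() - stimulusOnset });
    };
    window.addEventListener("keydown", onKeyDown);
  });
}

// Example: wait for "m" (non-target, as in the task described above) or the
// hypothetical target key "x", then log the response time in milliseconds.
collectResponseTime(["m", "x"]).then(({ key, rt }) => {
  console.log(`Response "${key}" after ${rt.toFixed(1)} ms`);
});
```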