2020
DOI: 10.7717/peerj.9414
The timing mega-study: comparing a range of experiment generators, both lab-based and online

Abstract: Many researchers in the behavioral sciences depend on research software that presents stimuli, and records response times, with sub-millisecond precision. There are a large number of software packages with which to conduct these behavioral experiments and measure response times and performance of participants. Very little information is available, however, on what timing performance they achieve in practice. Here we report a wide-ranging study looking at the precision and accuracy of visual and auditory stimul…

Cited by 347 publications (339 citation statements)
References 24 publications
“…Unfortunately, this means that any data reported in a paper will almost certainly reflect an older version of a software by the time of publication. All packages assessed here will likely have improved timing at some point in the future, so we encourage users who really need acute timing accuracy to gather external chronometrics themselves, as others also suggest (Bridges et al., 2020).…”
Section: Discussion (mentioning, confidence: 99%)
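True external chronometrics require hardware such as a photodiode or a Black Box ToolKit; a purely software-side check can still reveal gross timing jitter on a given machine. The sketch below is a minimal, hypothetical self-test (not a method from the cited studies): it times repeated short sleeps at roughly one 60 Hz frame interval and reports the mean and spread of the measured durations.

```python
import time
import statistics

def measure_sleep_jitter(target_ms=16.7, n=100):
    """Software-only timing check: repeatedly sleep for target_ms and
    measure how long the sleep actually took with a high-resolution clock.
    This is a rough proxy only; real stimulus-onset validation needs
    external hardware (e.g., a photodiode on the display)."""
    deltas = []
    for _ in range(n):
        t0 = time.perf_counter()
        time.sleep(target_ms / 1000.0)
        deltas.append((time.perf_counter() - t0) * 1000.0)  # elapsed in ms
    return statistics.mean(deltas), statistics.stdev(deltas)

mean_ms, sd_ms = measure_sleep_jitter()
print(f"mean interval: {mean_ms:.2f} ms, SD: {sd_ms:.2f} ms")
```

On a lightly loaded machine the mean should sit slightly above the requested interval (sleeps never return early but often return late), and the SD gives a crude estimate of scheduler-induced jitter.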
“…These differences are potential reasons for the different data we report. Researchers should keep these instances of larger delays in mind when conducting reaction-time-sensitive studies, by ensuring relative RTs are used (Pronk et al., 2019; Bridges et al., 2020). When timing sensitivity is crucial, we recommend employing within-participant designs where possible to avoid having to make comparisons between participants with different devices, operating systems, and browsers.…”
Section: Discussion (mentioning, confidence: 99%)
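The recommendation to use relative RTs rests on a simple point: a constant device-specific delay adds to every trial for a given participant, so it cancels when conditions are subtracted within that participant. A minimal sketch with made-up data (participant IDs, condition names, and RT values are all illustrative):

```python
# Hypothetical per-participant RTs in ms; p2's device adds a constant
# delay, inflating all absolute RTs, but not the within-person contrast.
raw_rts = {
    "p1": {"congruent": [450, 470, 460], "incongruent": [520, 540, 530]},
    "p2": {"congruent": [610, 630, 620], "incongruent": [680, 700, 690]},
}

def relative_effect(trials):
    """Within-participant RT effect: mean incongruent minus mean congruent.
    Any constant hardware/browser latency appears in both terms and cancels."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(trials["incongruent"]) - mean(trials["congruent"])

effects = {p: relative_effect(t) for p, t in raw_rts.items()}
print(effects)  # both participants show a 70 ms effect despite ~160 ms offset
```

Absolute RTs differ by about 160 ms between the two (hypothetical) participants, yet the condition contrast is identical, which is why between-device comparisons of raw RTs are the risky case.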
“…In general, latencies and variabilities are higher in web-based compared to lab environments. Several studies have assessed the quality of timing in online studies, with encouraging results (Anwyl-Irvine, Dalmaijer, et al., 2020; Bridges et al., 2020; Pronk et al., 2019; Reimers & Stewart, 2015). An online evaluation of a masked priming experiment showed that very short stimulus durations (i.e., under 50 ms) can be problematic (but see Barnhoorn et al., 2014), but other classic experimental psychology paradigms that rely on reaction times (e.g., Stroop, flanker, and Simon tasks)…”
Section: Frequently Asked Questions (mentioning, confidence: 99%)
“…Additionally, modern screen refresh rates are almost exclusively set to 60 Hz (de facto standard). Recently, two large studies investigated timing precision of several online and offline solutions and found good precision with only minor exceptions [47,48], most notably with audio playback. In addition to timing, there could be concerns that participants might be distracted more often when they sit at home and are not directly observed by the experimenter, but several studies have shown that this is not necessarily the case [49,50] and data quality is comparable to lab-based studies [51-57].…”
Section: Data Quality Concerns (mentioning, confidence: 99%)
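The 60 Hz de facto standard has a practical consequence worth spelling out: a display can only change on a refresh, so achievable visual stimulus durations are quantized to multiples of one frame (1000/60 ≈ 16.67 ms). A small illustrative sketch (the rounding model is a simplification of what a real display pipeline does):

```python
REFRESH_HZ = 60
FRAME_MS = 1000 / REFRESH_HZ  # one frame at 60 Hz, ~16.67 ms

def achievable_duration(requested_ms, refresh_hz=REFRESH_HZ):
    """Quantize a requested stimulus duration to whole frames, as a
    refresh-locked display does; returns (frame count, actual ms)."""
    frame_ms = 1000 / refresh_hz
    frames = max(1, round(requested_ms / frame_ms))
    return frames, frames * frame_ms

for req in (16, 50, 100):
    n_frames, actual = achievable_duration(req)
    print(f"requested {req} ms -> {n_frames} frames ({actual:.1f} ms)")
```

This is one reason sub-frame durations (e.g., the problematic sub-50 ms masked-priming presentations mentioned above) are hard to guarantee in browsers: a 50 ms request is exactly 3 frames at 60 Hz, but a dropped frame turns it into ~66.7 ms.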