2019
DOI: 10.1016/j.jneumeth.2019.108374
USE: An integrative suite for temporally-precise psychophysical experiments in virtual environments for human, nonhuman, and artificially intelligent agents

Abstract: This document describes the tests performed to characterize USE system latencies relating to the USE I/O Box. Test methods and results are summarized.

Cited by 43 publications (59 citation statements)
References 41 publications
“…The experiment was controlled by USE (Unified Suite for Experiments) using the Unity 3D game engine for behavioral control and visual display (Watson et al, 2019b). Four animals performed the experiment in cage-based touchscreen Kiosk Testing Station described in (Womelsdorf et al, in preparation), while two animals performed the experiment in a sound attenuating experimental booth.…”
Section: Methods (citation type: mentioning; confidence: 99%)
“…This intelligent visual search and selective attention is constrained by the “bandwidth” of both the visual media as well as the learners themselves (in terms of efficiency of visual processing). The digital media and VR have offered excellent means to simulate the richer learning environment and, hence, to improve the practice of scientific experiments and learning under peer pressure with appropriate social interactions (Watson, Voloh, Thomas, Hasan, & Womelsdorf, ; Zhou, Han, Liang, Hu, & Kuai, ). However, the basic visual processing principles could be fundamentally updated within the new type of spaces and temporal contexts to better address the new learning styles.…”
Section: Limitations and Outstanding Questions (citation type: mentioning; confidence: 99%)
“…Horstmann et al (2019) found that the average number of fixations on visual targets (about 1.55) was higher compared to the average number of fixations on similar looking distractors (about 1.20) during a search task with static images. Watson et al (2019) reported that the number of fixations on targets ranged from about 3.3 to 4 compared to around 2.8 to 3.8 fixations on distractors, during a free visual search and reward learning task in a virtual environment. In terms of dwell time, Draschkow et al (2014) found subjects looked about 0.6 seconds longer at targets as compared to distractors during visual search of static natural scenes.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…Since eye-tracking systems can now be readily integrated with 3D rendering software (i.e. game engines), researchers can conduct eye movement studies in more realistic and immersive environments (Watson et al, 2019). Virtual environments also allow for research designs that may otherwise not be practical for a real-world implementation.…”
Section: Introduction (citation type: mentioning; confidence: 99%)