2022
DOI: 10.22541/au.166939383.32711096/v1
Preprint

Action Affordance Affects Proximal and Distal Goal-oriented Planning

Abstract: Visual attention is mainly goal-directed and allocated based on the upcoming action to be performed. However, it is unclear how far this feature of gaze behavior generalizes to more naturalistic settings. The present study investigates active inference processes revealed by eye movements during interaction with familiar and novel tools, with two levels of realism of the action affordance. In a between-subject design, a cohort of participants interacted with a VR controller in a low realism environment; another …

Cited by 1 publication (6 citation statements)
References 40 publications (1 reference statement)
“…Using our eye movement classification algorithm, we showed that we could accurately classify eye movements of three-dimensional free-exploration data and that we can generate fERPs and fERSPs, proving that combining EEG and free-viewing virtual reality setups is possible. We investigated the classification quality using our modified version of a velocity-based classification algorithm (Dar et al., 2021; Keshava et al., 2023; Voloh et al., 2020), correcting for subject movement in the virtual environment. Furthermore, we compared two data-segmentation methods dealing with varying noise levels across a long recording.…”
Section: Discussion
confidence: 99%
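As a rough illustration of what a velocity-based classifier of this kind does (a minimal sketch, not the cited implementation), the code below thresholds the angular velocity between consecutive gaze direction vectors; the function name, data layout, and threshold value are assumptions made for the example.

```python
import numpy as np

def classify_gaze_velocity(gaze_dirs, timestamps, saccade_threshold_deg_s=75.0):
    """Label gaze samples as 'fixation' or 'saccade' using a simple velocity
    threshold (I-VT style). gaze_dirs: (n, 3) unit gaze direction vectors,
    assumed here to be expressed in head coordinates; timestamps: (n,) seconds.
    """
    gaze_dirs = np.asarray(gaze_dirs, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)

    # Angle between consecutive gaze directions, in degrees.
    dots = np.clip(np.sum(gaze_dirs[:-1] * gaze_dirs[1:], axis=1), -1.0, 1.0)
    angles_deg = np.degrees(np.arccos(dots))

    # Angular velocity in degrees per second.
    velocity = angles_deg / np.diff(timestamps)

    labels = np.where(velocity > saccade_threshold_deg_s, "saccade", "fixation")
    # The first sample has no preceding sample; reuse its neighbour's label.
    return np.concatenate([labels[:1], labels])
```

A fixed threshold in the tens of degrees per second is a common starting point for separating saccades from fixations, though published pipelines typically use adaptive thresholds and additional smoothing.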
“…There are a number of tools available to classify eye movements for static 2D images (e.g., Dar et al., 2021); however, we found that dynamic scenes created by the 3D environment, as well as allowing subjects to move, caused a problem. So, step by step, we modified and appended existing algorithms to this more demanding scenario (Dar et al., 2021; Keshava et al., 2023; Voloh et al., 2020). Specifically, adding EEG into the mix considerably increased the demand for precision (Luck, 2014).…”
Section: Discussion
confidence: 99%
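The problem with moving subjects described above can be pictured as a change of reference frame: expressed in world coordinates, gaze velocity mixes eye rotation with head and body motion. The sketch below, assuming a hypothetical data layout with one 3x3 head-to-world rotation matrix per sample, rotates world-frame gaze vectors back into the head frame before classification; it illustrates the general idea under those assumptions and is not the algorithm of Dar et al. (2021) or the cited modification of it.

```python
import numpy as np

def world_to_head_gaze(gaze_dirs_world, head_rotations):
    """Rotate world-frame gaze direction vectors into the head frame using a
    per-sample 3x3 head-to-world rotation matrix, so that later velocity
    thresholding reflects eye-in-head movement rather than the subject
    walking or turning.
    """
    gaze_dirs_world = np.asarray(gaze_dirs_world, dtype=float)  # (n, 3)
    head_rotations = np.asarray(head_rotations, dtype=float)    # (n, 3, 3)

    # The inverse of a rotation matrix is its transpose; applying it removes
    # the head's contribution from the combined eye-plus-head gaze signal.
    return np.einsum("nji,nj->ni", head_rotations, gaze_dirs_world)
```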