Previc (1990) postulated that most peri-personal space interactions occur in the lower visual field (LVF), leading to a performance advantage over the upper visual field (UVF). It is not clear whether extensive practice can affect the difference between interactions in the LVF and UVF. We tested male and female varsity basketball athletes and non-athletes on a DynaVision D2 visuomotor reaction task. We recruited basketball players because their training requires them to spend a significant amount of time processing UVF information. We found an LVF advantage in all participants, but this advantage was significantly reduced in the athletes. The results suggest that training can be a powerful modulator of visuomotor function.
Everyday tasks such as catching a ball appear effortless, but in fact require complex interactions and tight temporal coordination between the brain's visual and motor systems. What makes such interceptive actions particularly impressive is the capacity of the brain to account for temporal delays in the central nervous system, a limitation that can be mitigated by making predictions about the environment as well as one's own actions. Here, we assessed how well human participants can plan an upcoming movement based on a dynamic, predictable stimulus that is not the target of action. A central stationary or rotating stimulus determined the probability that each of two potential targets would be the eventual target of a rapid reach-to-touch movement. We examined the extent to which reach movement trajectories convey internal predictions about the future state of the dynamic probabilistic information conveyed by the rotating stimulus. We show that movement trajectories reflect the target probabilities determined at movement onset, suggesting that humans rapidly and accurately integrate visuospatial predictions and estimates of their own reaction times to effectively guide action.
Artificial agents have often been compared to humans in their ability to categorize images or play strategic games. However, comparisons between human and artificial agents are frequently based on overall performance on a particular task, and not necessarily on the specifics of how each agent behaves. In this study, we directly compared human behaviour with a reinforcement learning (RL) model. Human participants and an RL agent navigated through different grid world environments with high- and low-value targets. The artificial agent consisted of a deep neural network trained with RL to map pixel input of a 27x27 grid world onto cardinal directions. An epsilon-greedy policy was used to maximize reward. Behaviour of both agents was evaluated under four different conditions. Results showed that both humans and RL agents consistently chose the higher reward over the lower reward, demonstrating an understanding of the task. Though both humans and RL agents weigh movement cost against reward, the RL agent weights movement costs more heavily, trading off effort against reward differently than humans. We found that humans and RL agents both consider long-term rewards as they navigate through the world, yet unlike humans, the RL model completely disregards limitations on movement (e.g., the total number of moves allowed). Finally, we rotated pseudorandom grid arrangements to study how decisions change with visual differences. We unexpectedly found that the RL agent changed its behaviour under visual rotations, yet remained less variable than humans. Overall, the similarities between humans and the RL agent show the potential of RL agents to serve as adequate models of human behaviour. Additionally, the differences between human and RL agents suggest refinements to RL methods that may improve their performance. This research compares the human mind with artificial intelligence, creating the opportunity for future innovation.
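The epsilon-greedy policy mentioned above can be sketched in a few lines. This is a minimal illustration of the general technique, not the authors' implementation; the action set, Q-value dictionary, and epsilon value are assumptions for the example.

```python
import random

# Cardinal directions, matching the grid-world action space described above.
ACTIONS = ["up", "down", "left", "right"]

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon, explore a uniformly random action;
    otherwise exploit the action with the highest estimated value.
    q_values: dict mapping each action to its current Q estimate."""
    if rng.random() < epsilon:
        return rng.choice(ACTIONS)          # explore
    return max(ACTIONS, key=lambda a: q_values[a])  # exploit

# Hypothetical Q estimates for one state of the grid world:
q = {"up": 0.2, "down": 0.9, "left": 0.1, "right": 0.4}
print(epsilon_greedy(q, epsilon=0.0))  # epsilon=0 always exploits -> "down"
```

During training, epsilon is typically annealed toward zero so the agent explores early and exploits its learned value estimates later.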