A recent study [1] showed that different attention cues (social and non-social) produce qualitatively different learning effects. The mechanisms underlying such differences, however, were unclear. Here, we present a novel computational model of audio-visual learning that combines two competing processes: habituation and association. The model's parameters were trained to best reproduce each infant's individual looking behavior from trial to trial during training and testing. We then isolated each infant's learning function to explain the variance found in preferential looking tests. The model allowed us to rigorously examine the relationship between infants' looking behavior and their learning mechanisms. By condition, the model revealed that 8-month-olds learned faster from the social cue (i.e., a face) than from the non-social cue (i.e., flashing squares), as evidenced by the parameters of their learning functions. In general, 4-month-olds learned more slowly than 8-month-olds. The parameters for attention to the cue revealed that infants at both ages who weighted the social cue highly learned quickly. With non-social cues, 8-month-olds' learning was impaired because the cue competed for attention with the target visual event. By using explicit models to link looking and learning, we can draw firm conclusions about infants' cognitive development from eye-movement behavior.
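The abstract above describes a model in which habituation and association compete trial by trial, with per-infant parameters governing learning rate and attention to the cue. The paper's actual equations are not given here, so the following is only an illustrative sketch under assumed update rules: novelty (driving looking time) decays geometrically, while association strength grows toward an asymptote at a rate gated by attention to the cue. All function and parameter names (`simulate_infant`, `alpha_hab`, `alpha_assoc`, `cue_weight`) are hypothetical.

```python
def simulate_infant(n_trials, alpha_hab=0.2, alpha_assoc=0.3, cue_weight=0.8):
    """Toy simulation of per-trial looking and audio-visual association.

    alpha_hab   -- habituation (decay) rate of stimulus novelty (assumed form)
    alpha_assoc -- association learning rate; higher = faster learner
    cue_weight  -- share of attention allocated to the cue vs. the target event
    """
    novelty, assoc = 1.0, 0.0
    looking, learning = [], []
    for _ in range(n_trials):
        # looking time is driven by remaining stimulus novelty
        looking.append(novelty)
        # association grows toward 1, gated by attention to the cue
        assoc += alpha_assoc * cue_weight * (1.0 - assoc)
        # novelty habituates toward 0
        novelty *= 1.0 - alpha_hab
        learning.append(assoc)
    return looking, learning

# An infant who weights the cue highly ends the same number of trials
# with a stronger learned association than one who weights it weakly.
fast = simulate_infant(10, cue_weight=0.9)[1][-1]
slow = simulate_infant(10, cue_weight=0.3)[1][-1]
```

In this toy form, fitting `alpha_hab`, `alpha_assoc`, and `cue_weight` to each infant's looking-time curve would play the role the abstract describes: recovering an individual learning function from trial-to-trial looking behavior.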
Physical interactions between objects, or between an object and the ground, are among the most biologically relevant for living beings. Prior knowledge of Newtonian physics may help disambiguate an object's movement, as may foveation, which increases the spatial resolution of the visual input. Observers were shown a virtual 3D scene representing an ambiguously rotating ball translating on the ground. The ball was perceived as rotating congruently with friction, but only when gaze was located at the point of contact. Inverting or even removing the visual context had little influence on congruent judgements compared with the effect of gaze. Counterintuitively, gaze at the point of contact determines the resolution of the perceptual ambiguity, yet does so independently of visual context. We suggest this constitutes a frugal strategy by which the brain infers dynamics locally when faced with a foveated input that is ambiguous.