Through theoretical discussion, literature review, and a computational model, this paper poses a challenge to the notion that perspective-taking involves a fixed architecture in which particular processes have priority. For example, some research suggests that egocentric perspectives can arise more quickly, with other perspectives (such as those of task partners) emerging only secondarily. This theoretical dichotomy, between fast egocentric and slow other-centric processes, is challenged here. We propose a general view of perspective-taking as an emergent phenomenon governed by the interplay among cognitive mechanisms that accumulate information at different timescales. We first describe the pervasive relevance of perspective-taking to cognitive science. A dynamical systems model is then introduced that explicitly formulates the proposed timescale interaction. This model illustrates that, rather than having a rigid time course, perspective-taking can be fast or slow depending on factors such as task context. Implications are discussed, along with ideas for future empirical research.
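The timescale interplay described in this abstract can be sketched as two coupled leaky accumulators with different time constants. This is an illustrative reconstruction, not the authors' actual model; all function names, parameter names, and values here are hypothetical.

```python
import numpy as np

def simulate_perspective(tau_ego=0.05, tau_other=0.5, drive_ego=1.0,
                         drive_other=1.0, inhibition=0.6, dt=0.01, t_max=5.0):
    """Two coupled leaky accumulators with different time constants.

    A fast 'egocentric' unit and a slow 'other-centric' unit each
    integrate task input and mutually inhibit one another; which
    perspective dominates depends on the input drives (task context),
    not on a fixed processing priority. All parameters illustrative.
    """
    steps = int(t_max / dt)
    ego = other = 0.0
    trace = np.zeros((steps, 2))
    for i in range(steps):
        ego += dt / tau_ego * (-ego + drive_ego - inhibition * other)
        other += dt / tau_other * (-other + drive_other - inhibition * ego)
        trace[i] = ego, other
    return trace

# With stronger contextual support for the partner's perspective, the
# slow unit still comes to dominate despite its longer time constant.
trace = simulate_perspective(drive_ego=0.4, drive_other=1.2)
```

The point of the sketch is the paper's claim in miniature: neither unit has built-in priority, and the time course of perspective-taking falls out of the drives and timescales jointly.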
Most existing studies of overt social behavior in individuals with autism spectrum disorder (ASD) have relied on informants' evaluations through questionnaires and behavioral coding techniques. This study instead aimed to quantify the complex movements produced during social interactions, in order to test for differences in ASD movement dynamics and in their convergence, or lack thereof, during social interactions. Twenty children with ASD and twenty-three children with typical development (TD) were videotaped while engaged in a face-to-face conversation with an interviewer. An image-differencing technique was used to extract movement time series, and spectral analyses were conducted to quantify the average power and fractal scaling of movement. The degree of complexity matching was calculated to capture the level of behavioral coordination between the interviewer and the children. The average power was significantly higher (p < 0.01), and the fractal scaling steeper (p < 0.05), in children with ASD, suggesting excessive and less complex movement compared to their TD peers. Complexity matching occurred between children and interviewers, but there was no reliable difference in the strength of matching between the ASD and TD groups. Descriptive trends in the interviewer's behavior suggest that her movements adapted to match ASD and TD movements equally well. These findings may inform the search for novel behavioral markers of ASD and the development of automatic screening techniques for use during everyday social interactions.
Lay Summary: Using an objective technique for quantifying behavior, our study showed that children with autism moved more during face-to-face conversation, and that they moved in a less complex way. Current autism diagnosis relies heavily on clinicians' experience. These findings suggest that autism might eventually be screened automatically during everyday social interactions.
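The frame-differencing and spectral-scaling steps described above might look like the following minimal sketch. It assumes grayscale video frames and a simple log-log regression for the scaling exponent; it is not the authors' exact pipeline, and the frame rate and data shapes are assumptions.

```python
import numpy as np

def movement_series(frames):
    """Sum of absolute pixel changes between consecutive grayscale
    frames, yielding one movement value per frame transition."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))

def spectral_slope(x, fs=30.0):
    """Power-spectrum scaling exponent from a log-log regression of
    power on frequency; steeper (more negative) slopes indicate less
    complex, more persistent fluctuations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    keep = freqs > 0  # drop the DC component before taking logs
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(psd[keep]), 1)
    return slope

# Toy check: a single changed pixel produces one unit of "movement",
# and integrated (browner) noise has a steeper spectral slope.
frames = np.zeros((3, 4, 4))
frames[1, 0, 0] = 1.0
motion = movement_series(frames)
rng = np.random.default_rng(0)
noise = rng.normal(size=4096)
slope_white = spectral_slope(noise)
slope_brown = spectral_slope(np.cumsum(noise))
```

In this framing, "steeper fractal scaling" in the ASD group corresponds to a more negative spectral slope of the movement series.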
The mechanisms by which infant-directed (ID) speech and song support language development in infancy are poorly understood, with most prior investigations focused on the auditory components of these signals. However, the visual components of ID communication are also of fundamental importance for language learning: over the first year of life, infants' visual attention to caregivers' faces during ID speech shifts from the eyes to the mouth, which provides synchronous visual cues that support speech and language development. Caregivers' facial displays during ID song are highly effective for sustaining infants' attention. Here we investigate whether ID song specifically enhances infants' attention to caregivers' mouths. A sample of 299 typically developing infants watched clips of female actors engaging them with ID song and speech longitudinally at six time points from 3 to 12 months of age while eye-tracking data were collected. Infants' mouth-looking increased significantly over the first year of life, with a significantly greater increase during ID song than during ID speech. This difference emerged early (evident in the first 6 months of age) and was sustained over the first year. Follow-up analyses indicated that properties inherent to ID song (e.g., slower tempo, reduced rhythmic variability) contribute in part to infants' increased mouth-looking, with effects increasing with age. The exaggerated and expressive facial features that naturally accompany ID song may make it a particularly effective context for modulating infants' visual attention and supporting speech and language development, both in typically developing infants and in those with or at risk for communication challenges. A video abstract of this article can be viewed at https://youtu.be/SZ8xQW8h93A.
Research Highlights: Infants' visual attention to adults' mouths during infant-directed speech has been found to support speech and language development. Infant-directed (ID) song promotes mouth-looking by infants to a greater extent than does ID speech across the first year of life. Features characteristic of ID song, such as slower tempo, increased rhythmicity, increased audiovisual synchrony, and increased positive affect, all increase infants' attention to the mouth. The effects of song on infants' attention to the mouth are more prominent during the second half of the first year of life.
Communication is a multimodal phenomenon, yet the cognitive mechanisms supporting it remain understudied. We explored a natural dataset of academic lectures to determine how communication modalities are used and coordinated during the presentation of complex information. Using automated and semi-automated techniques, we extracted and analyzed, from videos of 30 speakers, measures capturing the dynamics of their body movement, their slide change rate, and various aspects of their speech (speech rate, articulation rate, fundamental frequency, and intensity). There were consistent but statistically subtle patterns in the use of speech rate, articulation rate, intensity, and body motion across the course of a presentation. Principal component analysis also revealed patterns of system-like covariation among modalities. These findings, although tentative, suggest that the cognitive system integrates body, slides, and speech in a coordinated manner during natural language use. Further research is needed to clarify the specific coordination patterns that occur between the different modalities.
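The covariation analysis described above can be sketched with an SVD-based principal component analysis. The data layout (rows as time windows, columns as modality measures) and the toy data are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

def pca_loadings(measures):
    """PCA via SVD on z-scored columns (rows = time windows, columns =
    modality measures such as speech rate, intensity, body motion).
    Returns component loadings and the proportion of variance
    explained by each component."""
    X = np.asarray(measures, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    var_explained = s ** 2 / np.sum(s ** 2)
    return Vt, var_explained

# Toy data: two measures tracking a shared signal plus one independent
# measure; the first component captures the system-like covariation.
rng = np.random.default_rng(0)
shared = rng.normal(size=500)
X = np.column_stack([
    shared + 0.1 * rng.normal(size=500),
    shared + 0.1 * rng.normal(size=500),
    rng.normal(size=500),
])
loadings, var_explained = pca_loadings(X)
```

A dominant first component with high loadings across several modalities is the kind of pattern the abstract describes as "system-like covariation."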
How are the domains of space and time related? One approach, A Theory of Magnitude (ATOM), proposes an undifferentiated system of magnitude representation in the brain, predicting that space and time are symmetrically related, while Conceptual Metaphor Theory (CMT) proposes that we represent time using spatial metaphors, predicting asymmetrical interactions between the domains. The Tau and Kappa effects are perceptual phenomena that arise when observers judge the distance/duration between consecutive stimuli in a sequence: timing affects the perception of space (Tau), and spacing affects the perception of time (Kappa). The bidirectionality of this interference has been taken as evidence for ATOM, while CMT proponents argue that the interference may result from the perceiver imputing velocity to the stimuli. Here, the Tau and Kappa paradigm was modified to reduce the illusion of imputed velocity by manipulating stimulus parameters. In favor of CMT, we found that when the illusion of imputed velocity is reduced, asymmetrical interference arises: the Tau effect is eliminated while the Kappa effect remains intact.
Timing is critical to successful social interactions. This study investigated the temporal structure of vocal interactions longitudinally in parent-child dyads of typically developing (TD) infants (n=49; 9-18 months; 48% male; 81.6% White) and toddlers with ASD (n=23; 27.2±5.0 months; 91.3% male; 65.2% White). Acoustic hierarchical temporal structure (HTS; event clustering across multiple timescales), which reflects temporal complexity and variability, was measured in free-play interactions using the Allan Factor. Child expressive language significantly predicted HTS (β=-0.2) longitudinally across TD infants: greater dyadic HTS was associated with lower child language. ASD dyads exhibited greater HTS than nonverbal-matched (d=0.41) and expressive-language-matched TD dyads (d=0.28). These results provide a new window into how language development and social attunement shape parent-child interaction dynamics.
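The Allan Factor measure mentioned above can be sketched as follows. The window-count normalization follows the standard definition (mean squared difference of successive counts over twice the mean count), but the function and variable names are illustrative and this is not the authors' exact implementation.

```python
import numpy as np

def allan_factor(event_times, window_sizes):
    """Allan Factor across timescales: for each window size T, count
    events in consecutive windows of length T and normalize the mean
    squared difference of successive counts by twice the mean count.
    AF values rising with T indicate event clustering (hierarchical
    temporal structure) at multiple timescales."""
    t = np.sort(np.asarray(event_times, dtype=float))
    duration = t[-1] - t[0]
    af = []
    for T in window_sizes:
        edges = np.arange(t[0], t[0] + duration + T, T)
        counts, _ = np.histogram(t, bins=edges)
        diffs = np.diff(counts)
        af.append(np.mean(diffs ** 2) / (2.0 * np.mean(counts)))
    return np.array(af)

# Sanity check: a Poisson (memoryless) event train has AF near 1 at
# every timescale; clustered trains push AF above 1 at large T.
rng = np.random.default_rng(1)
poisson_times = np.cumsum(rng.exponential(scale=1.0, size=20000))
af = allan_factor(poisson_times, window_sizes=[1.0, 8.0, 64.0])
```

In this framing, "greater HTS" in the ASD dyads corresponds to AF rising more steeply with window size for their acoustic event trains.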