Inspired by a theory of embodied music cognition, we investigate whether music can entrain the speed of beat-synchronized walking. If human walking is in synchrony with the beat and all musical stimuli have the same duration and the same tempo, then differences in walking speed can only be the result of music-induced differences in stride length, thus reflecting the vigor or physical strength of the movement. Participants walked in an open field in synchrony with the beat of 52 different musical stimuli, all having a tempo of 130 beats per minute and a meter of 4 beats. Walking speed was measured as the distance walked during a time interval of 30 seconds. The results reveal that some music is 'activating' in the sense that it increases the speed, and some music is 'relaxing' in the sense that it decreases the speed, compared to the spontaneous walking speed in response to metronome stimuli. Participants were consistent in their observation of qualitative differences between the relaxing and activating musical stimuli. Using regression analysis, it was possible to set up a predictive model using only four sonic features that explain 60% of the variance. The sonic features capture variation in loudness and pitch patterns at periods of three, four, and six beats, suggesting that expressive patterns in music are responsible for the effect. The mechanism may be attributed to an attentional shift, a subliminal audio-motor entrainment mechanism, or an arousal effect, but further study is needed to disentangle these possibilities. Overall, the study supports the hypothesis that recurrent patterns of fluctuation affecting the binary meter strength of the music may entrain the vigor of the movement. The study opens up new perspectives for understanding the relationship between entrainment and expressiveness, with the possibility of developing applications in domains such as sports and physical rehabilitation.
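The predictive model described above (a regression from a handful of sonic features onto walking speed, judged by explained variance) can be sketched as follows. This is a minimal illustration only: the feature values and walking speeds are synthetic placeholders, since the abstract does not report the study's data; only the modeling approach (ordinary least squares with an R² fit measure over 52 stimuli and four features) is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the 52 stimuli: four sonic features per
# stimulus (e.g. loudness/pitch periodicities at 3, 4 and 6 beats)
# and one walking-speed measurement per stimulus.
n_stimuli = 52
X = rng.normal(size=(n_stimuli, 4))
true_w = np.array([0.5, -0.3, 0.2, 0.4])          # assumed effect sizes
speed = X @ true_w + rng.normal(scale=0.5, size=n_stimuli)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n_stimuli), X])
coef, *_ = np.linalg.lstsq(A, speed, rcond=None)

# R^2: proportion of variance in walking speed explained by the model
# (the abstract reports 60% for the real four-feature model).
pred = A @ coef
ss_res = np.sum((speed - pred) ** 2)
ss_tot = np.sum((speed - speed.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```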
THE PRESENT STUDY AIMS TO GAIN BETTER INSIGHT into the connection between music and dance by examining the dynamic effects of the bass drum on a dancing audience in a club-like environment. One hundred adult participants moved freely in groups of five to a musical sequence that comprised six songs. Each song consisted of one section that was repeated three times, each time with a different sound pressure level of the bass drum. Hip and head movements were recorded using motion capture and motion sensing. The study demonstrates that people modify their bodily behavior according to the dynamic level of the bass drum when moving to contemporary dance music in a social context. Participants moved more actively and displayed a higher degree of tempo entrainment as the sound pressure level of the bass drum increased. These results indicate that the prominence of the bass drum in contemporary dance music is not merely a stylistic element; it has a strong influence on dancing itself.
This study explores whether musical affect attribution can be predicted by a linear combination of acoustical structural cues. To that aim, a database of sixty musical audio excerpts was compiled and analyzed at three levels: judgments of affective content by subjects; judgments of structural content by musicological experts (i.e., 'manual structural cues'); and extraction of structural content by an auditory-based computer algorithm (i.e., 'acoustical structural cues'). In Study I, an affect space was constructed with Valence (gay-sad), Activity (tender-bold), and Interest (exciting-boring) as the main dimensions, using the responses of a hundred subjects. In Study II, manual and acoustical structural cues were analyzed and compared. Manual structural cues such as loudness and articulation could be accounted for in terms of a combination of acoustical structural cues. In Study III, the subjective responses of eight individual subjects were analyzed using the affect space obtained in Study I, and modeled in terms of the structural cues obtained in Study II, using linear regression modeling. This worked better for the Activity dimension than for the Valence dimension, while the Interest dimension could not be accounted for. Overall, manual structural cues worked better than acoustical structural cues. In a final assessment study, a selected set of acoustical structural cues was used for building prediction models. The results indicate that musical affect attribution can partly be predicted using a combination of acoustical structural cues. Future research may focus on non-linear approaches, expansion of the dataset and subject pool, and refinement of acoustical structural cue extraction.
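The per-dimension regression of Study III (predicting each affect dimension from structural cues and comparing how well each can be accounted for) can be sketched with a small helper. The cue matrix and the ratings below are synthetic stand-ins, not the study's data; the synthetic ratings are merely constructed so that Activity tracks the cues closely, Valence only weakly, and Interest not at all, mirroring the pattern of results the abstract reports.

```python
import numpy as np

def r_squared(X, y):
    """Fit ordinary least squares (with intercept) and return R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
cues = rng.normal(size=(60, 5))      # 60 excerpts, 5 hypothetical cues

# Synthetic affect ratings per excerpt (placeholder construction):
activity = cues @ rng.normal(size=5) + rng.normal(scale=0.2, size=60)
valence = (cues @ rng.normal(size=5)) * 0.2 + rng.normal(size=60)
interest = rng.normal(size=60)       # unrelated to the cues

for name, y in [("Activity", activity), ("Valence", valence),
                ("Interest", interest)]:
    print(f"{name}: R^2 = {r_squared(cues, y):.2f}")
```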