Musicians often make gestures and move their bodies to express a musical intention. In order to explore to what extent emotional intentions can be conveyed through musicians' movements, participants watched and rated silent video clips of musicians performing the emotional intentions Happy, Sad, Angry, and Fearful. In the first experiment, participants rated the emotional expression and movement character of marimba performances. The results showed that the intentions Happiness, Sadness, and Anger were well communicated, whereas Fear was not. Showing selected parts of the player only slightly influenced the identification of the intended emotion. In the second experiment, participants rated the same emotional intentions and movement character for performances on bassoon and soprano saxophone. The ratings from the second experiment confirmed that Fear was not communicated, whereas Happiness, Sadness, and Anger were recognized. The rated movement cues were similar in the two experiments and were analogous to their audio counterparts in music performance.
Timing is an exceedingly important aspect of acoustic communication. The just noticeable difference (JND) for small perturbations of an isochronous sequence of sounds is particularly important in music, where such sequences frequently occur. This article reviews the literature in the area and presents an experiment designed to resolve conflicting results regarding the tempo dependence at fast tempi and the relevance of musical experience. The JND for a perturbation of the timing of a tone in an isochronous sequence was examined by the method of adjustment. Thirty listeners of varied musical background were asked to adjust the position of the fourth tone in a sequence of six such that they heard the sequence as perfectly isochronous. The tones were presented at a constant interonset time that was varied between 100 and 1000 ms. The absolute JND was found to be approximately constant at 6 ms for tone interonset intervals shorter than about 240 ms, and the relative JND constant at 2.5% of the tone interonset interval above 240 ms. Subjects' musical training did not affect these values. Comparison with previous work showed that a constant absolute JND below 250 ms and a constant relative JND above 250 ms tend to appear regardless of the perturbation type, at least if the sequence is relatively short.
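The reported breakpoint reduces to a simple piecewise rule, since 2.5% of 240 ms is exactly 6 ms. The following Python sketch is only an illustration of that rule as stated above, not code or a model from the study itself:

```python
def timing_jnd_ms(ioi_ms: float) -> float:
    """Approximate JND (ms) for a displaced tone in an isochronous sequence.

    Illustrative only: roughly 6 ms (absolute) below ~240 ms inter-onset
    interval (IOI) and roughly 2.5% of the IOI (relative) above it; the two
    regimes meet at the breakpoint because 0.025 * 240 ms = 6 ms.
    """
    return max(6.0, 0.025 * ioi_ms)

if __name__ == "__main__":
    for ioi in (100, 240, 500, 1000):
        print(f"IOI {ioi:4d} ms -> JND ~ {timing_jnd_ms(ioi):.1f} ms")
```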
The aim of this study was to manipulate musical cues systematically in order to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or an interactive fashion, and whether the cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and their ranked importance was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result was that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of the variance in ratings. Quadratic encoding of the cues led to minor but significant improvements in the models (0–8%). Finally, interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
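To make the modeling approach concrete, the sketch below fits an additive (main-effects-only) linear model and a version augmented with quadratic terms, then compares the explained variance. Everything in it is an illustrative assumption (synthetic ratings, arbitrary cue coding and weights); only the model structure (linear main effects, optionally with quadratic terms, no interactions) follows the description above:

```python
import numpy as np

# Illustrative sketch with synthetic data (not the study's ratings or design):
# six cues coded on a few discrete levels, ratings generated additively, then
# a linear model vs. a linear-plus-quadratic model compared by R^2.
rng = np.random.default_rng(0)
n_examples = 200                                 # 200 rated examples, as in the study
cues = rng.integers(0, 3, size=(n_examples, 6))  # hypothetical 3-level coding of 6 cues

true_weights = np.array([2.0, 1.5, 1.0, 0.8, 0.5, 0.3])  # arbitrary illustrative weights
ratings = cues @ true_weights + rng.normal(0.0, 1.0, n_examples)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """Proportion of variance explained by an ordinary least-squares fit."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    return 1.0 - residuals.var() / y.var()

r2_linear = r_squared(cues, ratings)
r2_quadratic = r_squared(np.column_stack([cues, cues**2]), ratings)
print(f"linear R^2 = {r2_linear:.2f}, with quadratic terms R^2 = {r2_quadratic:.2f}")
```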
The timing in jazz ensemble performances was investigated in order to approach the question of what makes the music "swing." One well-known aspect of swing is that consecutive eighth notes are performed as long-short patterns. The exact duration ratio (the swing ratio) of the long-short pattern has been largely unknown. In this study, the swing ratio produced by drummers on the ride cymbal was measured. Three well-known jazz recordings and a play-along record were used. A substantial and gradual variation of the drummers' swing ratio with respect to tempo was observed. At slow tempi, the swing ratio was as high as 3.5:1, whereas at fast tempi it reached 1:1. The often-mentioned "triple feel," that is, a ratio of 2:1, was present only at a certain tempo. The absolute duration of the short note in the long-short pattern was constant at about 100 ms for medium to fast tempi, suggesting a practical limit on tone duration that may be due to perceptual factors. Another aspect of swing is the soloist's timing in relation to the accompaniment. For example, a soloist can be characterized as playing "behind the beat." In the second part, the swing ratio of the soloists and its relation to the cymbal accompaniment was measured from the same recordings. At slow tempi, the soloists mostly played their downbeats after the cymbal but were synchronized with the cymbal at the off-beats. This implies that the swing ratio of the soloists was considerably smaller than that of the cymbal accompaniment at slow tempi. This may give an impression of "playing behind" while at the same time keeping synchrony with the accompaniment at the off-beat positions. Finally, the possibilities of using computer tools in jazz pedagogy are discussed.
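The arithmetic linking tempo, a roughly constant 100-ms short note, and the swing ratio can be sketched as follows. This is a back-of-the-envelope illustration of the relationship reported above, not a model from the paper; the floor at 1:1 simply encodes that the long note cannot become shorter than the short one:

```python
def implied_swing_ratio(tempo_bpm: float, short_ms: float = 100.0) -> float:
    """Swing ratio implied by a fixed short-note duration (illustrative assumption).

    One beat is split into a long and a short eighth note; if the short note is
    held near 100 ms, the ratio falls toward 1:1 (even eighths) as tempo rises.
    """
    beat_ms = 60_000.0 / tempo_bpm
    long_ms = beat_ms - short_ms
    return max(long_ms / short_ms, 1.0)

if __name__ == "__main__":
    for bpm in (150, 200, 250, 300):
        print(f"{bpm:3d} bpm -> swing ratio ~ {implied_swing_ratio(bpm):.1f}:1")
```

With these assumed numbers, a 2:1 ratio falls out near 200 bpm and 1:1 near 300 bpm; the measured curves in the study need not match these exact tempi.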
This investigation explores the common assumption that music and motion are closely related by comparing the stopping of running and the termination of a piece of music. Video recordings were made of professional dancers' stopping from running under different deceleration conditions, and instant values of body velocity, step frequency, and step length were estimated. In decelerations that were highly rated for aesthetic quality by a panel of choreographers, the mean body velocity could be approximated by a square-root function of time, which is equivalent to a cubic-root function of position. This implies a linear relationship between kinetic energy and time, i.e., a constant braking power. The mean body velocity showed a striking similarity with the mean tempo pattern of final ritardandi in music performances. The constant braking power was used as the basis for a model describing both the changes of tempo in final ritardandi and the changes of velocity in runners' decelerations. The translation of physical motion to musical tempo was realized by assuming that velocity and musical tempo are equivalent. Two parameters were added to the model to account for the variation observed in individual ritardandi and in individual decelerations: (1) the parameter q controlling the curvature, q = 3 corresponding to the runners' deceleration, and (2) the parameter v_end for the final velocity and tempo value, respectively. A listening experiment was carried out presenting music examples with final ritardandi according to the model with different q values or to an alternative function. Highest ratings were obtained for the model with q = 2 and q = 3. Out of three functions, the model produced the best fit to individual measured ritardandi as well as to individual decelerations. A function previously used for modeling phrase-related tempo variations (interonset duration as a quadratic function of score position) produced the lowest ratings and the poorest fits to individual ritardandi. The results thus seem to substantiate the commonly assumed analogies between motion and music.
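Under the constant-braking-power assumption, v**q (with q = 3) decreases linearly with normalized position, which suggests a tempo curve of the form sketched below. The normalization of score position to the unit interval and the default v_end value are assumptions made for illustration; the abstract only specifies the roles of q and v_end:

```python
import numpy as np

def ritardando_curve(x: np.ndarray, v_end: float = 0.3, q: float = 3.0) -> np.ndarray:
    """Relative tempo (or runner velocity) over normalized position x in [0, 1].

    Sketch consistent with the description above: v**q decreases linearly from
    1 at x = 0 to v_end**q at x = 1, so q = 3 corresponds to constant braking
    power (velocity as a cube-root function of position). The default v_end and
    the unit-interval normalization are illustrative assumptions.
    """
    return (1.0 + (v_end**q - 1.0) * x) ** (1.0 / q)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 6)
    print(np.round(ritardando_curve(x), 3))  # decreases monotonically from 1.0 to v_end
```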
Somatosensation plays an important role in the motor control of vocal functions, yet its neural correlates and relation to vocal learning are not well understood. We used fMRI in 17 trained singers and 12 nonsingers to study the effects of vocal-fold anesthesia on the vocal-motor singing network as a function of singing expertise. Tasks required participants to sing musical target intervals under normal conditions and after anesthesia. At the behavioral level, anesthesia altered pitch accuracy in both groups, but singers were less affected than nonsingers, indicating an experience-dependent effect of the intervention. At the neural level, this difference was accompanied by distinct patterns of decreased activation in singers (cortical and subcortical sensory and motor areas) and nonsingers (subcortical motor areas only), suggesting that anesthesia affected the higher-level voluntary (explicit) motor and sensorimotor integration network more in experienced singers, and the lower-level (implicit) subcortical motor loops more in nonsingers. The right anterior insular cortex (AIC) was identified as the principal area dissociating the effect of expertise as a function of anesthesia by three separate sources of evidence. First, it responded differently to anesthesia in singers (decreased activation) and nonsingers (increased activation). Second, functional connectivity between the AIC and bilateral A1, M1, and S1 was reduced in singers but augmented in nonsingers. Third, increased BOLD activity in the right AIC in singers was correlated with larger pitch deviation under anesthesia. We conclude that the right AIC and sensorimotor areas play a role in the experience-dependent modulation of feedback integration for vocal motor control during singing.