In Experiment 1, six cyclically repeating interonset interval patterns (1, 2:1, 2:1:1, 3:2:1, 3:1:2, and 2:1:1:2) were each presented at six different note rates (very slow to very fast). Each trial began at a random point in the rhythmic cycle. Listeners were asked to tap along with the underlying beat or pulse. The number of times a given pulse (period, phase) was selected was taken as a measure of its perceptual salience. Responses gravitated toward a moderate pulse period of about 700 ms. At faster tempi, taps coincided more often with events followed by longer interonset intervals. In Experiment 2, listeners heard the same set of rhythmic patterns, plus a single sound in a different timbre, and were asked whether the extra sound fell on or off the beat. The position of the downbeat was found to be quite ambiguous. A quantitative model was developed from the following assumptions. The phenomenal accent of an event depends on the interonset interval that follows it, saturating for interonset intervals greater than about 1 s. The salience of a pulse sensation depends on the number of events matching a hypothetical isochronous template, and on the period of the template—pulse sensations are most salient in the vicinity of roughly 100 events per minute (moderate tempo). The metrical accent of an event depends on the saliences of pulse sensations including that event. Calculated pulse saliences and metrical accents according to the model agree well with experimental results (r > 0.85). The model may be extended to cover perceived meter, perceptible subdivisions of a beat, categorical perception, expressive timing, temporal precision and discrimination, and primacy/recency effects. The sensation of pulse may be the essential factor distinguishing musical rhythm from nonrhythm.
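The model's assumptions can be sketched in code. This is a minimal illustration, not the published implementation: the saturation constant (0.5 s), the preferred period (0.7 s), the log-period spread, and the matching tolerance are all assumed values chosen only to reproduce the qualitative behavior the abstract describes.

```python
import math

def phenomenal_accent(ioi):
    """Accent of an event as a function of the interonset interval (s)
    that follows it; grows with IOI and saturates above ~1 s.
    The time constant 0.5 s is an assumed value."""
    return 1.0 - math.exp(-ioi / 0.5)

def tempo_weight(period, preferred=0.7, sigma=1.0):
    """Gaussian weight on log-period, peaking at the preferred pulse
    period (~0.7 s, a moderate tempo); sigma (in octaves) is assumed."""
    return math.exp(-(math.log2(period / preferred) ** 2) / (2 * sigma ** 2))

def pulse_salience(onsets, period, phase, tol=0.05):
    """Salience of the pulse (period, phase): summed phenomenal accents
    of events that fall on beats of an isochronous template, scaled by
    the tempo weight for that period."""
    total = 0.0
    for i, t in enumerate(onsets):
        # IOI following this event; the last event has none, so fall
        # back on the template period (an assumed simplification).
        ioi = onsets[i + 1] - t if i + 1 < len(onsets) else period
        offset = (t - phase) % period
        if min(offset, period - offset) < tol:  # onset near a template beat?
            total += phenomenal_accent(ioi)
    return tempo_weight(period) * total
```

For an isochronous sequence with 0.7 s spacing, the sketch assigns the highest salience to the 0.7 s pulse and progressively less to pulses an octave faster or slower, matching the abstract's "moderate tempo" peak.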
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit.
This study investigates the effect of four variables (tonal hierarchies, sensory chordal consonance, horizontal motion, and musical training) on perceived musical tension. Participants were asked to evaluate the tension created by a chord X in sequences of three chords (C major → X → C major) in a C major context key. The X chords could be major or minor triads, major-minor seventh chords, or minor seventh chords built on the 12 notes of the chromatic scale. The data were compared with Krumhansl's (1990) harmonic hierarchy and with predictions of Lerdahl's (1988) cognitive theory, Hutchinson and Knopoff's (1978) and Parncutt's (1989) sensory-psychoacoustical theories, and the model of horizontal motion defined in the paper. As a main outcome, it appears that judgments of tension arose from a convergence of several cognitive and psychoacoustic influences, whose relative importance varies depending on musical training.

Music and spoken language are complex auditory sequences of events that evolve through time. In both, it is striking that listeners usually perceive events as progressing in a coherent, dynamic way. In spoken language, this temporal coherence is due to semantics and to syntactic and contextual information; it also results from the fact that language usually refers to a well-identified external reality. Such information has no clear equivalent in music (Clarke, 1989). In contrast, a number of music theorists have considered that the intuition of coherent progression through time is mainly determined by the tension-relaxation relations that exist among musical events (Lerdahl & Jackendoff, 1983; Meyer, 1956; Schenker, 1935). In the Western tonal system, these tension-relaxation relations are in part determined by the harmonic relations that exist among chords. "Chord" designates the simultaneous sounding of three or more notes. In the present study, all chords contained four notes. Following standard usage, we refer to them as soprano, alto, tenor, and bass voices.
We would like to thank F. Madurell for assisting with the experimentation and the anonymous reviewers for their insightful comments, which greatly improved the manuscript.
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography, and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10–40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89–98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities.
The predictions of Terhardt's octave-generalized model of the root of a musical chord occasionally disagree with music theory (notably, in the case of the minor triad). The model is improved by assigning appropriate weights to the intervals used in the model's "subharmonic matching" routine. These intervals, called "root-supports," include the P8 (unison), P5, M3, m7, M9 (M2), and m3. The new model calculates the salience of each pitch class (C, C#/Db, ..., B) as an absolute value. The most likely candidate for the root of a chord corresponds to the most salient pitch class in all cases where the root is unambiguously defined in music theory. The model also calculates a "root ambiguity" value for each chord, a measure of its dissonance. Effects of voicing (inversion, spacing, and doubling) and context on the root are considered.
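The subharmonic-matching idea can be sketched as follows. The weights below are hypothetical — the abstract names the root-support intervals but not their values — and follow only the rough ordering P8 > P5 > M3 > m7 > M9 > m3; the ambiguity index is likewise a simple stand-in for the model's actual measure.

```python
# Hypothetical root-support weights, keyed by interval in semitones
# above a candidate root: P8/unison, P5, M3, m7, M9 (M2), m3.
ROOT_SUPPORTS = {0: 10, 7: 5, 4: 3, 10: 2, 2: 1, 3: 1}

def pc_salience(chord_pcs, candidate):
    """Salience of a candidate root pitch class: summed weights of
    chord tones lying at root-support intervals above the candidate."""
    return sum(ROOT_SUPPORTS.get((pc - candidate) % 12, 0)
               for pc in set(chord_pcs))

def root(chord_pcs):
    """The most salient pitch class is the most likely root."""
    return max(range(12), key=lambda c: pc_salience(chord_pcs, c))

def root_ambiguity(chord_pcs):
    """A simple ambiguity index: total salience over all candidates
    divided by the maximum; higher values mean a less clear root."""
    saliences = [pc_salience(chord_pcs, c) for c in range(12)]
    return sum(saliences) / max(saliences)
```

With these assumed weights the sketch already reproduces the behavior the abstract highlights: the minor triad (e.g., C-Eb-G as pitch classes 0, 3, 7) receives C as its root, because the m3 interval now contributes support, and a diminished triad comes out more ambiguous than a major one.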
We attempted to predict perceived musical tension in longer chord sequences by hierarchic and sequential models based on Lerdahl and Jackendoff's and Lerdahl's cognitive theories and on Parncutt's sensory-psychoacoustical theory. Musicians and nonmusicians were asked to rate the perceived tension of chords that were drawn either from a piece composed for the study (Exp. 1) or from a Chopin Prelude (Exps. 2-4). In Exps. 3 and 4, several experimental manipulations were made to emphasize either the global or the local structure of the piece and to verify how these manipulations would affect the respective contribution of the models in the ratings. In all experiments, musical tension was only weakly influenced by global harmonic structure. Instead, it mainly seemed to be determined locally, by harmonic cadences. The hierarchic model of Lerdahl and Jackendoff provided the best fit to tension ratings, not because it accounted for global hierarchic effects, but because it captured the local effect of cadences. By reacting to these local structures, tension ratings fit quite well with a hierarchic model, even though the participants were relatively insensitive to the global structure of the pieces. As a main outcome, it is argued that musical events were perceived through a short perceptual window sliding from cadence to cadence along a sequence.
The fingerings used by keyboard players are determined by a range of ergonomic (anatomic/motor), cognitive, and music-interpretive constraints. We have attempted to encapsulate the most important ergonomic constraints in a model. The model, which is presently limited to isolated melodic fragments, begins by generating all possible fingerings, limited only by maximum practical spans between finger pairs. Many of the fingerings generated in this way seldom occur in piano performance. In the next stage of the model, the difficulty of each fingering is estimated according to a system of rules. Each rule represents a specific ergonomic source of difficulty. The model was subjected to a preliminary test by comparing its output with fingerings written by pianists on the scores of a selection of short Czerny studies. Most fingerings recommended by pianists were among those fingerings predicted by the model to be least difficult; but the model also predicted numerous fingerings that were not recommended by pianists. A variety of suggestions for improving the predictive power of the model are explored.
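The two-stage architecture — generate span-feasible fingerings, then score each by rule-based penalties — can be sketched in a few lines. This is an illustration only: the span table, the choice of rules, and the penalty weights are all assumed values, not the model's actual parameters (fingers are numbered 1 = thumb through 5 = little finger, as in the paper).

```python
from itertools import product

# Assumed maximum comfortable spans (in semitones) between finger
# pairs of one hand; illustrative values, not the model's table.
MAX_SPAN = {(1, 2): 10, (1, 3): 12, (1, 4): 14, (1, 5): 15,
            (2, 3): 5, (2, 4): 7, (2, 5): 9,
            (3, 4): 4, (3, 5): 7, (4, 5): 4}

def span_ok(f1, f2, interval):
    """Can the finger pair f1 -> f2 cover `interval` semitones?"""
    if f1 == f2:
        return False  # same finger on two successive notes is excluded
    pair = (min(f1, f2), max(f1, f2))
    return abs(interval) <= MAX_SPAN[pair]

def fingerings(pitches):
    """Stage 1: all finger sequences for a melodic fragment (MIDI
    pitches), pruned only by the maximum-span constraint."""
    for fs in product(range(1, 6), repeat=len(pitches)):
        if all(span_ok(fs[i], fs[i + 1], pitches[i + 1] - pitches[i])
               for i in range(len(fs) - 1)):
            yield fs

def difficulty(fs, pitches):
    """Stage 2: rule-based difficulty; each rule adds a penalty
    (two toy rules with assumed weights)."""
    d = 0
    for i in range(len(fs) - 1):
        if fs[i + 1] < fs[i] and pitches[i + 1] > pitches[i]:
            d += 2  # a lower-numbered finger crossing on an ascending step
        if 4 in (fs[i], fs[i + 1]):
            d += 1  # use of the relatively weak fourth finger
    return d
```

Ranking the generated fingerings by this difficulty score mirrors the model's prediction that pianists' written fingerings should cluster among the least difficult candidates; for an ascending C-D-E fragment, 1-2-3 incurs no penalty under these toy rules.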