There is empirical support for the bottom-up rules of melodic expectation (Krumhansl, 1995a, 1995b; Schellenberg, 1996, 1997). However, there is also evidence that the implemented bottom-up rules constitute too inflexible a model to account for the influence of the listener's musical experience and the melodic context in which expectations are elicited. A theory is presented according to which both bottom-up and top-down descriptions of observed patterns of melodic expectation may be accounted for in terms of the induction of statistical regularities in existing musical repertoires. A computational model embodying this theory is developed and used to reanalyze existing experimental data on melodic expectancy. The results of three experiments with increasingly complex melodic stimuli demonstrate that this model accounts for listeners' expectations as well as or better than the two-factor model of Schellenberg (1997).
University of California Press
Following in a psychological and musicological tradition beginning with Leonard Meyer, and continuing through David Huron, we present a functional, cognitive account of the phenomenon of expectation in music, grounded in computational, probabilistic modeling. We summarize a range of evidence for this approach, from psychology, neuroscience, musicology, linguistics, and creativity studies, and argue that simulating expectation is an important part of understanding a broad range of human faculties, in music and beyond.
We present the results of a study testing the often-theorized role of musical expectations in inducing listeners' emotions in a live flute concert experiment with 50 participants. Using an audience response system developed for this purpose, we measured subjective experience and peripheral psychophysiological changes continuously. To confirm the existence of the link between expectation and emotion, we used a threefold approach. (1) On the basis of an information-theoretic cognitive model, melodic pitch expectations were predicted by analyzing the musical stimuli used (six pieces of solo flute music). (2) A continuous rating scale was used by half of the audience to measure their experience of unexpectedness toward the music heard. (3) Emotional reactions were measured using a multicomponent approach: subjective feeling (valence and arousal rated continuously by the other half of the audience members), expressive behavior (facial EMG), and peripheral arousal (the latter two being measured in all 50 participants). Results confirmed the predicted relationship between high-information-content musical events, the violation of musical expectations (in corresponding ratings), and emotional reactions (psychologically and physiologically). Musical structures leading to expectation reactions were manifested in emotional reactions at different emotion component levels (increases in subjective arousal and autonomic nervous system activations). These results emphasize the role of musical structure in emotion induction, leading to a further understanding of the frequently experienced emotional effects of music.
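The information-theoretic measure underlying this approach can be illustrated with a toy model: a note's information content is the negative log probability the model assigns it in its melodic context, so improbable continuations carry high information content and register as unexpected. The bigram model, smoothing scheme, and function names below are illustrative assumptions for a minimal sketch, not the statistical model actually used in the study.

```python
import math
from collections import defaultdict

def train_bigram(sequences):
    """Count pitch-to-pitch transitions across training melodies."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def information_content(counts, context, pitch, alpha=1.0, vocab_size=128):
    """IC(pitch | context) = -log2 P(pitch | context), add-alpha smoothed
    over a MIDI-sized pitch vocabulary so unseen events get nonzero mass."""
    c = counts[context]
    total = sum(c.values()) + alpha * vocab_size
    p = (c[pitch] + alpha) / total
    return -math.log2(p)

# Toy corpus: a frequently heard stepwise continuation is predictable
# (low IC); a transition never seen in training is surprising (high IC).
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 62, 60]]
model = train_bigram(corpus)
ic_expected = information_content(model, 62, 64)    # frequent transition
ic_surprising = information_content(model, 62, 71)  # unseen transition
```

With this measure in hand, the continuous unexpectedness ratings described above can be compared event by event against the model's predicted information content.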
In this paper we give an overview of four algorithms that we have developed for pattern matching, pattern discovery and data compression in multidimensional datasets. We show that these algorithms can fruitfully be used for processing musical data. In particular, we show that our algorithms can discover instances of perceptually significant musical repetition that cannot be found using previous approaches. We also describe results that suggest the possibility of using our data-compression algorithm for modelling expert motivic-thematic music analysis.
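The core idea behind this style of multidimensional pattern discovery can be sketched briefly: represent notes as points (eg onset, pitch), and collect, for every translation vector, the set of points that the vector maps onto other points in the set; each such set is a maximal translatable pattern. The point representation and function name below are illustrative assumptions, and this is a simplified sketch of the idea rather than the authors' actual algorithms.

```python
from collections import defaultdict

def translatable_patterns(points):
    """Group points by the translation vectors that map them onto other
    points in the set. Each group is the maximal translatable pattern
    (MTP) for its vector. Points are (onset, pitch) pairs."""
    pts = sorted(set(points))
    mtps = defaultdict(list)
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:
            v = (q[0] - p[0], q[1] - p[1])
            mtps[v].append(p)
    return dict(mtps)

# A two-note motif repeated four beats later appears as the MTP for the
# vector (4, 0): a pure time shift with no transposition.
notes = [(0, 60), (1, 62), (4, 60), (5, 62)]
mtps = translatable_patterns(notes)
```

Repetitions at any transposition fall out of the same computation, since transposed restatements correspond to vectors with a nonzero pitch component.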
Introduction Grouping and boundary perception are central to the understanding and modelling of core tasks in many areas of cognitive science. They are fundamental processes in, for example, natural language processing (eg speech segmentation and word discovery: Brent 1999b; Jusczyk 1997), motor learning (eg identifying behavioural episodes: Reynolds et al 2007; Newtson 1973), memory storage and retrieval (eg chunking: Kurby and Zacks 2007) and visual perception (eg analysing spatial organisation: Marr 1982). Our focus in this paper is on the perception and cognition of music (Krumhansl 1990; Temperley 2001), where the process by which the human perceptual system groups sequential musical elements together is one of the most fundamental issues. In particular, we examine the grouping of musical elements into contiguous segments that occur sequentially in time or, to put it another way, the identification of boundaries between the final element of one segment and the first element of the subsequent one. This way of structuring a musical surface is usually referred to as grouping (Lerdahl and Jackendoff 1983) or segmentation (Cambouropoulos 2006). We distinguish this kind of perceptual aggregation of auditory elements from the integration, or fusion, of auditory elements that occur simultaneously in time and also from the segregation of parallel auditory streams (Bregman 1990). In musical terms, the kinds of groups we consider correspond with motifs, phrases, sections and other aspects of musical form. We use the term grouping structure to refer to a piece of music structured in this way.
It is generally assumed that, just as speech is perceptually segmented into phonemes and then words, which subsequently provide the building blocks for the perception of phrases and complete utterances (Brent 1999b; Jusczyk 1997), motifs or phrases in music are identified by listeners, stored in memory and made available for inclusion in higher-level structural groups (Lerdahl and Jackendoff 1983; Peretz 1989; Tan et al 1981). The low-level organisation of the musical surface into groups allows the use of these primitive perceptual units in more complex structural processing and may alleviate demands on memory.
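One way probabilistic expectation connects to this kind of segmentation: events that carry unexpectedly high information content tend to fall at group boundaries, so boundaries can be hypothesised where a model's information-content estimates peak. The peak-picking rule and threshold below are a simplified illustration under that assumption, not the boundary model of any particular study.

```python
import statistics

def boundaries_from_ic(ic, k=1.0):
    """Place a boundary before note i when its information content (in bits)
    exceeds the mean of the preceding values by k standard deviations.
    A simple peak-picking rule; the threshold k is an illustrative choice."""
    bounds = []
    for i in range(2, len(ic)):
        prev = ic[:i]
        mu = statistics.mean(prev)
        sd = statistics.pstdev(prev)
        if ic[i] > mu + k * sd:
            bounds.append(i)
    return bounds

# A spike in information content at position 4 suggests a new group there.
segment_starts = boundaries_from_ic([2.0, 2.1, 1.9, 2.0, 6.5, 2.2])
```

Segments recovered this way can then feed the higher-level structural processing described above, with each low-level group treated as a single perceptual unit.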