2018
DOI: 10.3389/fpsyg.2018.00178

Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting

Abstract: Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience…

Cited by 10 publications (10 citation statements); references 43 publications.
“…The observed correspondence effects for SWS stimuli that were not heard as speech can be explained based on associations of visual contours with auditory stimulus attributes, rather than with speech-specific representations of articulatory gestures or abstract phonological units. Consistent with findings on non-verbal sound-shape correspondences (Adeli, Rouat, & Molotchnikoff, 2014; Liew, Lindborg, Rodrigues, & Styles, 2018; Marks, 1987; O'Boyle & Tarte, 1980; Parise & Spence, 2012; Walker et al., 2010), it has been conjectured that differences in the frequency content and waveform envelope are key to the bouba-kiki effect (Fort et al., 2015; Nielsen & Rendall, 2011). Regarding the taketa-maluma pair, the vowel [e] sounds brighter than [u] due to its higher formant frequencies; the second formant frequency has been identified as a major contributor to sound-shape correspondences (Knoeferle et al., 2017). Energy changes associated with the voiceless obstruents [t, k] are sharper than those associated with the sonorants [m, l] (see Fig.…”
Section: Discussion (supporting)
confidence: 85%
“…The current study confirms the results from Liew et al. (2017, 2018), according to which rougher objects are associated with harsher sounds and vice versa. The key difference, however, is that the stimuli presented in those experiments represented either single tones or the element of noise in music, whereas the current study examines this linkage through the perspective of harmony.…”
Section: Discussion (supporting)
confidence: 92%
“…Finally, here, beyond the effect of sonic seasoning on the consumers' tasting experience, there is also some preliminary evidence to suggest that the music playing in the background might also influence the way in which those in the kitchen, or bar, season the food and drink they prepare (Kontukoski, Luomala, Mesz, Sigman, Trevisan, Rotola-Pukkila, & Hopia, 2015; see also Liew, Lindborg, Rodrigues, & Styles, 2018).…”
Section: Crossmodal Correspondences Between Audition and the Chemical… (mentioning)
confidence: 93%