This study investigated the differences in emotional responses to the International Affective Digitized Sounds 2 (IADS-2) between Americans and Koreans. Korean adult subjects rated their emotional responses to a total of 167 sounds on three dimensions: valence, arousal, and dominance. The results show significant differences between Koreans and Americans in two of the emotional dimensions - valence and arousal. In particular, Koreans and Americans differed most in their responses to erotic, rain, belch, and thunderstorm sounds. The analysis of the relationship between basic emotion and dimensional emotion revealed that valence and dominance were positively correlated with happiness, but negatively correlated with sadness, anger, fear, and disgust. In contrast, the opposite pattern was observed for arousal. Our results provide a useful comparative cultural reference for the development of standardized emotional stimuli.
The purpose of this study was to develop an auditory emotion recognition function that could determine the emotion evoked by sounds encountered in daily life. For this purpose, sound stimuli were selected from the International Affective Digitized Sounds (IADS-2), a standardized database of sounds intended to evoke emotion, and four psychoacoustic parameters (i.e., loudness, sharpness, roughness, and fluctuation strength) were extracted from the sounds. In addition, 140 college students rated the sounds on an emotion adjective scale measuring three basic emotions (happiness, sadness, and negativity). A discriminant analysis predicting basic emotions from the psychoacoustic parameters of sound produced a discriminant function with an overall discriminant accuracy of 88.9% on the training data. To validate the discriminant function, the same four psychoacoustic parameters were extracted from 46 sound stimuli collected from another database and substituted into the discriminant function, confirming an overall discriminant accuracy of 63.04%. Our findings suggest that daily-life sounds, beyond voice and music, can be used in a human-machine interface.
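The pipeline described above (four psychoacoustic features per sound, fed into a discriminant function over three emotion classes) can be sketched with scikit-learn's `LinearDiscriminantAnalysis`. This is a minimal illustration, not the authors' actual method: the feature values below are synthetic stand-ins, not IADS-2 data, and the class separation is artificial.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical feature vectors: [loudness, sharpness, roughness, fluctuation strength].
# Synthetic clusters stand in for the real psychoacoustic measurements (illustrative only).
emotions = ["happiness", "sadness", "negativity"]
centers = [[0, 0, 0, 0], [3, 3, 0, 0], [0, 0, 3, 3]]
n_per_class = 40

X_train = np.vstack([
    rng.normal(loc=c, scale=1.0, size=(n_per_class, 4)) for c in centers
])
y_train = np.repeat(emotions, n_per_class)

# Fit the discriminant function on the "training" sounds.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
train_acc = lda.score(X_train, y_train)

# Validation step: classify a held-out sound's features with the same function.
new_sound = np.array([[2.8, 3.1, 0.2, -0.1]])  # hypothetical feature vector
predicted = lda.predict(new_sound)[0]
```

In the study, validation used 46 sounds from a separate database; here a single hypothetical feature vector illustrates the substitution step.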