Basic emotions evoked by odors are predominantly related to happiness and disgust as two opposite sides of a continuum, and few studies have examined a wider spectrum of emotions. The present study aimed to investigate whether exposure to a pleasant and an unpleasant odor—compared with an odor-neutral control condition—elicits a change in emotional state that has a measurable effect on facial expressions in terms of four basic emotions: anger, happiness, sadness, and surprise. A total of 167 participants were randomly divided into two groups, each presented with a set of three odors (fish, rose, and water; or peach, tar, and water), with one odor used per session. Thus, each participant took part in three odor sessions, each consisting of two tasks. The 'passive task' was to passively sniff the odor for 30 seconds; the 'reading task' was to stay exposed to the odor while silently reading a short text. Participants' facial behavior was recorded. Consistently across all time periods and both tasks, the odors of fish/tar and rose/peach evoked more surprise and sadness than the presentation of water. Anger was elicited to a greater extent by the presentation of water than by the odors of fish/tar and rose/peach. None of the investigated odors evoked happiness. Interestingly, the impact of the task on facial expressions of emotions appears to be marginal. We suggest that the way odors elicit emotions might be more complex than typically assumed.
Olfaction, i.e., the sense of smell, is referred to as the 'emotional sense', as it has been shown to elicit affective responses. Yet, its influence on speech production has not been investigated. In this paper, we introduce a novel speech-based smell recognition approach, drawing from the fields of speech emotion recognition and personalised machine learning. In particular, we collected a corpus of 40 female speakers reading 2 short stories while either no scent, an unpleasant odour (fish), or a pleasant odour (peach) was applied through a nose clip. Further, we present a machine learning pipeline for the extraction of data representations, model training, and personalisation of the trained models. In a leave-one-speaker-out cross-validation, our best models trained on state-of-the-art wav2vec features achieve a classification rate of 68% when distinguishing between speech produced under the influence of negative scent and no applied scent. In addition, we highlight the importance of personalisation approaches, showing that a speaker-based feature normalisation substantially improves performance across the evaluated experiments. In summary, the presented results indicate that odours have a weak but measurable effect on the acoustics of speech.
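The abstract names two evaluation ingredients without detailing them: leave-one-speaker-out cross-validation and speaker-based feature normalisation. The sketch below illustrates how these two pieces typically fit together. It is not the authors' pipeline: the random features stand in for wav2vec embeddings, and the logistic-regression classifier, speaker count, and dimensions are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

# Synthetic stand-in for wav2vec utterance embeddings:
# 8 speakers x 20 utterances, 16-dim features, binary label
# (speech under negative scent vs. no applied scent).
rng = np.random.default_rng(0)
n_speakers, n_per_spk, dim = 8, 20, 16
speakers = np.repeat(np.arange(n_speakers), n_per_spk)
y = rng.integers(0, 2, size=n_speakers * n_per_spk)
# Per-speaker offsets mimic speaker-dependent acoustic baselines,
# which is what speaker-level normalisation is meant to remove.
offsets = rng.normal(size=(n_speakers, 1, dim))
X = rng.normal(size=(len(y), dim)) + np.repeat(offsets, n_per_spk, axis=1).reshape(-1, dim)
X[y == 1] += 0.3  # weak class effect, consistent with the paper's "weak but measurable" finding

def speaker_zscore(X, speakers):
    """Normalise each speaker's features to zero mean / unit variance."""
    Xn = np.empty_like(X)
    for s in np.unique(speakers):
        m = speakers == s
        Xn[m] = (X[m] - X[m].mean(axis=0)) / (X[m].std(axis=0) + 1e-8)
    return Xn

Xn = speaker_zscore(X, speakers)

# Leave-one-speaker-out: each fold holds out every utterance of one speaker.
accs = []
for tr, te in LeaveOneGroupOut().split(Xn, y, groups=speakers):
    clf = LogisticRegression(max_iter=1000).fit(Xn[tr], y[tr])
    accs.append(clf.score(Xn[te], y[te]))
print(f"LOSO mean accuracy over {len(accs)} folds: {np.mean(accs):.2f}")
```

Grouping the folds by speaker ensures the classifier is always evaluated on an unseen speaker, which is what makes the reported 68% a speaker-independent figure.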