“…A study on detecting emotions in Filipino laughter found that Multilayer Perceptron (MLP) yielded a higher correct classification rate (at 44%) compared with using SVM (18%) [ 73 ]. MLP considers the weights within a network to select features, and may be better suited for audio datasets, while SVM may perform better for video in cases where multimodal information is available [ 74 ]. SVM has also been used to classify laughter as polite or mirthful for a Japanese, Chinese and English dataset with at least 85% accuracy [ 75 ].…”
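The MLP-versus-SVM comparison above can be illustrated with a minimal sketch. This is an assumed setup using scikit-learn on synthetic data standing in for per-clip acoustic features; the feature layout, class count, and hyperparameters are illustrative only and are not taken from the cited study [73].

```python
# Hypothetical comparison of an MLP and an SVM on synthetic "acoustic
# feature" vectors standing in for laughter clips (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Toy stand-in for per-clip acoustic features (e.g. summary statistics
# of spectral measures), with 4 classes playing the role of emotion labels.
X, y = make_classification(n_samples=400, n_features=26, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
svm = SVC(kernel="rbf", random_state=0)

# Fit both classifiers and report held-out accuracy for each.
results = {}
for name, clf in [("MLP", mlp), ("SVM", svm)]:
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {results[name]:.2f}")
```

On real laughter audio the relative ordering would depend on the features extracted and the amount of training data, which is consistent with the snippet's point that the better-performing model can differ by modality.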
Like most human non-verbal vocalizations, laughter is produced by speakers of all languages, across all known societies. But despite this obvious fact (or perhaps because of it), there is little comparative research examining the structural and functional similarity of laughter across speakers from different cultures. Here, we describe existing research examining (i) the perception of laughter across disparate cultures, (ii) conversation analysis examining how laughter manifests itself during discourse across different languages, and (iii) computational methods developed for automatically detecting laughter in spoken language databases. Together, these three areas of investigation provide clues regarding universals and cultural variations in laughter production and perception, and offer methodological tools that can be useful for future large-scale cross-cultural studies. We conclude by providing suggestions for areas of research and predictions of what we should expect to discover. Overall, we highlight how important questions regarding human vocal communication across cultures can be addressed through the examination of spontaneous and volitional laughter.
This article is part of the theme issue ‘Cracking the laugh code: laughter through the lens of biology, psychology and neuroscience’.
“…Moreover, the study in [24] went further in trying to characterize different types of laughter. They investigated automatic discrimination of five types of acted laughter: happiness, giddiness, excitement, embarrassment and hurtful.…”
“…There has been growing evidence supporting the possibility of automatically discriminating between different emotions from various modalities: acoustics [40], facial expressions [41] and body movements [42], [43], [44], [45], [46], [47]. Galvan et al. [48] investigated automatic discrimination of five types of acted laughter: happiness, giddiness, excitement, embarrassment and hurtful. Actors were asked to enact these five emotions using both vocal and facial expressions while they were video-recorded.…”
Section: Synthesis and Recognition of Laughter
Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users' laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low dimensional space can describe perception of laughter "types". Second, to investigate observers' perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate automatic recognition of laughter to the same level of certainty as observers' perceptions. Results show recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature importance analyses indicate an improvement in recognition of social laughter when localized features and nonlinear models are used.
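The Random Forest recognition and feature-importance analysis described in the abstract can be sketched as follows. This is a hypothetical setup using scikit-learn on synthetic data; the feature layout (per-segment body-movement statistics) and all parameters are assumptions for illustration, not the study's actual pipeline.

```python
# Hypothetical Random Forest over per-segment body-movement statistics,
# standing in for the laughter/non-laughter recognition described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy stand-in: each row summarizes motion for one segment
# (e.g. mean speed, acceleration, and periodicity per joint group),
# with a binary laughter / non-laughter label.
X, y = make_classification(n_samples=300, n_features=12, n_informative=6,
                           random_state=1)

rf = RandomForestClassifier(n_estimators=200, random_state=1)

# Cross-validated accuracy, analogous to comparing recognition rates
# against observer agreement levels.
scores = cross_val_score(rf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")

# Fitting on all data exposes feature importances, the kind of analysis
# used to ask which movement statistics drive the classification.
rf.fit(X, y)
importances = rf.feature_importances_
```

The `feature_importances_` vector is normalized across features, so ranking it indicates which (here, synthetic) movement statistics the nonlinear model relies on most.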