2010 3rd International Conference on Human-Centric Computing 2010
DOI: 10.1109/humancom.2010.5563313
Discovering Emotions in Filipino Laughter Using Audio Features

Cited by 8 publications (4 citation statements). References 8 publications.
“…Different machine learning strategies vary in their success rates for classification of laughter types. A study on detecting emotions in Filipino laughter found that Multilayer Perceptron (MLP) yielded a higher correct classification rate (at 44%) compared with using SVM (18%) [ 73 ]. MLP considers the weights within a network to select features, and may be better suited for audio datasets, while SVM may perform better for video in cases where multimodal information is available [ 74 ].…”
Section: Computational Approaches
confidence: 99%
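The quoted comparison can be sketched with a minimal experiment. This is an illustrative example only, not the cited paper's pipeline: it assumes scikit-learn, and the synthetic 13-dimensional "MFCC-like" feature vectors, the four hypothetical emotion classes, and the hyperparameters are all invented for demonstration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for audio features: 200 samples of 13 MFCC-like
# coefficients, 4 hypothetical emotion classes with shifted means.
y = np.repeat(np.arange(4), 50)
X = rng.normal(size=(200, 13)) + 0.5 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Multilayer Perceptron: learns feature weights through its hidden layer.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

# Support Vector Machine with an RBF kernel for comparison.
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

print(f"MLP accuracy: {mlp.score(X_te, y_te):.2f}")
print(f"SVM accuracy: {svm.score(X_te, y_te):.2f}")
```

On real laughter recordings the relative performance of the two models depends heavily on the extracted features and dataset size, which is the point the citing authors make.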
“…Work on automatic recognition of laughter has also started to emerge but, as with the synthesis of laughter, has mostly focused on the acoustic modality (e.g., [11]-[13]) and more recently on the combination of face and voice cues [14]. Less attention has been given to body laughter expressions.…”
Section: Introduction
confidence: 99%
“…Work on automatic recognition of laughter has also started to emerge but, as with the synthesis of laughter, has mostly focused on the acoustic modality e.g., [29], [30], [31], [32], [33], [34] and more recently on the combination of face and voice cues [35], [36], [37]. Fukushima et al used electromyographic sensors to measure diaphragmatic activity, which drives laughter vocalisations, to detect laughter in people watching television [38].…”
Section: Synthesis and Recognition of Laughter
confidence: 99%