2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)
DOI: 10.1109/iemcon.2018.8614831
Automated Facial Expression and Speech Emotion Recognition App Development on Smart Phones using Cloud Computing

Cited by 21 publications (16 citation statements)
References 38 publications
“…In [39], four pitch and spectral energy features were combined with two prosodic features to distinguish two high activation states of angry and happy plus low activation states of sadness and boredom for speech emotion recognition using SVM with Emo-DB. Alshamsi et al [40] proposed a smart phone method for automated facial expression and speech emotion recognition using SVM with MFCC features extracted from SAVEE database.…”
Section: Related Studies
confidence: 99%
“…We carefully applied the specialized jAudio software [55] to construct MFCC1, MFCC2 and HAF as three sets of features representing several voice aspects. MFCC1 is the set of MFCC features inspired by the applications of MFCC features [24,40,42]. MFCC2 is the set of features based on MFCC, energy, ZCR and fundamental frequency as inspired by the fusion of MFCC with other acoustic features [11,25,26,44,46].…”
Section: Feature Extraction
confidence: 99%
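The statement above describes fusing MFCC with frame-level acoustic features such as short-time energy, zero-crossing rate (ZCR), and fundamental frequency. As a rough illustration of what the simpler of those features look like, here is a minimal numpy-only sketch of framing a signal and computing per-frame energy and ZCR; the frame and hop sizes (25 ms / 10 ms at 16 kHz) are conventional assumptions, not values taken from the cited papers, and the jAudio tool mentioned in the statement computes these (and MFCC) far more completely.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Slice a 1-D signal into overlapping frames.

    Hypothetical defaults: 25 ms frames with a 10 ms hop at 16 kHz.
    """
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def short_time_energy(frames):
    # Mean squared amplitude per frame.
    return np.mean(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    # Fraction of adjacent-sample sign changes per frame.
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

# Toy input: 1 s of a 440 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440.0 * t)

frames = frame_signal(x)
energy = short_time_energy(frames)   # ~0.5 for a unit-amplitude sine
zcr = zero_crossing_rate(frames)     # ~2 * 440 / 16000 crossings per sample
```

In a fused feature set like the MFCC2 described above, vectors such as `[mfcc..., energy, zcr, f0]` per frame (or their utterance-level statistics) would then be fed to a classifier such as an SVM.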
“…In fact, smartphones and smart watches are equipped with a variety of sensors, which include accelerometer, gyroscope, fingerprint sensor, heart rate sensor, and microphone. Alshamsi et al [37] proposed a framework that consisted of smartphone sensor technology supported by cloud computing for the real-time recognition of emotion in speech and facial expression. Hossain et al [38] combined the potential of emotion-aware big data and cloud technology towards 5G.…”
Section: Visual Signal Based Emotion Classification
confidence: 99%
“…To this end, smart mobiles and smart wrist watches are fully equipped with different types of sensors, for instance, accelerometer, gyroscope, fingerprint Sensor, heart rate sensor, and microphone. Alshamsi et al [24] investigated a method driven by sensor technology and cloud computing for identification of emotion in both speech and facial expression. Hossain et al [25] introduced a framework that puts together the strengths of emotion-aware big data and cloud technology towards 5G.…”
Section: Literature Review
confidence: 99%