Proceedings of the 6th International ICST Conference on Body Area Networks 2011
DOI: 10.4108/icst.bodynets.2011.247079
How’s my Mood and Stress? An Efficient Speech Analysis Library for Unobtrusive Monitoring on Mobile Phones

Cited by 41 publications (37 citation statements). References 0 publications.
“…VibeFones [Madan and Pentland 2006] require long-term analysis to derive standard deviations of the features and do not describe how the particular feature set was selected. AMMON [Chang et al. 2011] showed the performance improvements achieved when combining prosodic features with glottal timings. However, the system was evaluated offline on datasets created in constrained environments, and its performance in real-world situations was not reported.…”
Section: Auditory (mentioning, confidence: 99%)
“…As shown, existing literature has focused on inferring stress through auditory, activity and physiological cues. AMMON [Chang et al. 2011] achieved 84.4% accuracy using prosodic features, including glottal features and utterances, given the trade-off of the computational burden introduced by eigenvalue solving and other glottal-feature computations. In StressSense, as expected, the personalised classifier achieved the highest accuracy.…”
Section: Stress (mentioning, confidence: 99%)
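The pipeline the citing papers describe (frame-level prosody summarised into utterance-level statistics and fed to a standard classifier) can be illustrated with a minimal sketch. This is not AMMON's actual implementation (AMMON runs on-phone and also uses glottal timing features, which are omitted here); the sketch assumes librosa and scikit-learn, and a hypothetical list `clips` of labelled recordings.

```python
# Illustrative sketch only, not the AMMON implementation:
# a generic prosodic-feature pipeline (pitch and energy statistics
# summarised per utterance, then classified as stressed vs. neutral).
import numpy as np
import librosa
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def prosodic_features(path):
    """Return simple utterance-level prosody statistics for one clip."""
    y, sr = librosa.load(path, sr=16000)           # mono audio at 16 kHz
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)  # frame-level pitch (Hz)
    rms = librosa.feature.rms(y=y)[0]              # frame-level energy
    return np.array([f0.mean(), f0.std(), rms.mean(), rms.std()])

def stress_cv_accuracy(clips):
    """`clips` is a hypothetical list of (wav_path, label) pairs,
    e.g. label 1 = stressed speech, 0 = neutral speech."""
    X = np.stack([prosodic_features(p) for p, _ in clips])
    y = np.array([label for _, label in clips])
    clf = SVC(kernel="rbf", C=1.0)
    return cross_val_score(clf, X, y, cv=5).mean()
```

As the quoted statements note, a real system additionally pays the computational cost of glottal features and must run efficiently on the phone, which this sketch does not address.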
“…There has been growing interest in inferring stress and emotions from speech recorded using mobile phone-based sensors. The Affective and Mental health MONitor (AMMON) [40,41] is a speech analysis library designed to run on mobile phones and to recognize emotions and analyze the user's mental stress from voice. AMMON was evaluated on the Speech Under Simulated and Actual Stress (SUSAS) dataset [42], the most commonly used dataset for stress detection tasks.…”
Section: Speech-based Analysis (mentioning, confidence: 99%)