2020
DOI: 10.1016/j.chb.2019.106156
Analysis of the efficacy and reliability of the Moodies app for detecting emotions through speech: Does it actually work?

Cited by 9 publications (6 citation statements); references 26 publications.
“…An app that detects the emotion in people's voice.

| Ref | Approach | Emotions recognized | Signal | Techniques |
|---|---|---|---|---|
| [7] | Techniques for recognizing emotions from voice | Anger, happiness, sadness, neutral state | Sound | Deep neural networks; hybrid CNN and SVM model |
| [8] | Prototype system for detecting emotions in text from social media posts | Anger, anticipation, disgust, fear, joy, sadness, surprise, trust | Text | Long Short-Term Memory (LSTM) networks |
| [15] | Model for emotion recognition based on ECG signal analysis | Happiness, sadness, pleasure, anger | ECG | Spiker-Shield Heart and Brain sensor; Extra Tree classification; AdaBoost classification with SVM; Python scikit API |
| [25] | Emotion recognition system based on physiological reactions of the organism | Sadness, fear, pleasure | ECG, GSR, BVP, pulse, respiration | …” |

Section: Sound (mentioning, confidence: 99%)
“…There is an entire class of solutions called multimodal-based affective human–computer interaction that enables computer systems to recognize specific affective states. Emotions in these approaches can be recognized in many ways, including those based on:

- voice parameters (timbre, raised voice, speaking rate, linguistic analysis and errors made) [7, 8, 9, 10];
- characteristics of writing [11, 12, 13, 14, 15];
- changes in facial expressions in specific areas of the face [16, 17, 18, 19, 20];
- gestures and posture analysis [21, 22, 23, 24];
- characterization of biological signals, including but not limited to respiration, skin conductance, blood pressure, brain imaging, and brain bioelectrical signals [25, 26, 27, 28, 29, 30];
- context, i.e., assessing the fit between the emotion and the context of expression [31].…”

Section: Introduction (mentioning, confidence: 99%)
“…Therefore, measurements of the index of DESU on a speech waveform x(t) of finite duration T_x < ∞ are urgent. Studies in this scientific area have been carried out for a number of years, but the measurement precision obtained for the index of DESU is still inadequate for broad practical application in systems for remote servicing of the population under conditions of speech dialogue [5–8]. The reason lies not so much in the complexity of the problem as in the absence, so far, of an effective mathematical apparatus for its solution.…”

(mentioning, confidence: 99%)
“…The authors' own software (Voice Self-Analysis, https://sites.google.com/site/frompidcreators/VoiceSelfAnalysis.zip) was used here; its user interface is described in detail in [24] and provided on the site of the authors of this article. The program operates on the algorithm for automatic speech processing given by Eqs. (3), (8), and (10) and the method of acoustic measurement of the informational index in accordance with Eqs. (5) and (6), using the autoregression model (9) of sufficiently high order p = 20.…”

(mentioning, confidence: 99%)
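The excerpt above mentions fitting an autoregression model of sufficiently high order (p = 20) to a speech waveform. The cited paper's own Eqs. (3)–(10) are not reproduced here, so the following is only a minimal sketch of the generic step involved: estimating AR(20) coefficients from a signal frame's autocorrelation sequence via the Levinson–Durbin recursion. The function name, the synthetic test signal, and the chosen estimator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the AR normal equations for coefficients a[0..order] (a[0] = 1)
    from autocorrelations r[0..order] via the Levinson-Durbin recursion.
    Returns (a, err), where err is the final prediction-error variance."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        # Reflection coefficient for step m.
        acc = r[m]
        for i in range(1, m):
            acc += a[i] * r[m - i]
        k = -acc / err
        # Update coefficients using the time-reversed previous solution.
        a_new = a.copy()
        for i in range(1, m):
            a_new[i] = a[i] + k * a[m - i]
        a_new[m] = k
        a = a_new
        err *= (1.0 - k * k)
    return a, err

# Synthetic stand-in for a speech frame: a known AR(2) process
# x[n] = 1.3*x[n-1] - 0.7*x[n-2] + e[n] driven by unit white noise.
rng = np.random.default_rng(0)
N = 20000
e = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = 1.3 * x[n - 1] - 0.7 * x[n - 2] + e[n]

# Biased autocorrelation estimates for lags 0..order, then the AR(20) fit.
order = 20
r = np.array([x[: N - k] @ x[k:] for k in range(order + 1)]) / N
a, err = levinson_durbin(r, order)
```

With this sign convention the residual is e[n] = sum_i a[i] x[n-i], so for the synthetic frame the fit should recover a[1] ≈ -1.3 and a[2] ≈ 0.7, with the remaining 18 coefficients near zero; a "sufficiently high" order such as p = 20 simply leaves headroom for real speech spectra, whose effective order is unknown.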