2017
DOI: 10.1016/j.apacoust.2016.06.020
A bio-inspired emotion recognition system under real-life conditions

Cited by 15 publications (4 citation statements)
References 17 publications
“…Chenchah, F. argued that, under real-life conditions, coupling the development of emotion recognition with the development of speech features can improve the efficiency of emotion recognition. Chenchah, F. proposed a feature extraction method based on spectral features for speech emotion recognition applications [19]. Spilka, M. J. pointed out that the relationship between endogenous intranasal oxytocin and social cognition in schizophrenia remains poorly understood.…”
Section: Introduction
confidence: 99%
“…We increase the number of levels in each dimension in order to describe more emotions. We use a Long Short-Term Memory (LSTM) network as a classifier, with Mel-Frequency Cepstral Coefficients (MFCCs) as the feature set [25]. Furthermore, we evaluate a Low-Level Descriptors (LLDs) feature set and a basic MFCC feature set to test how feature sets influence the speech emotion recognition system.…”
Section: Introduction
confidence: 99%
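To make the classifier named in the statement above concrete: a minimal NumPy sketch of a single-layer LSTM forward pass over a sequence of MFCC frames. The weight shapes, random initialization, and hidden size are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias. Gate order: input, forget, candidate, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2 * H]))   # forget gate
    g = np.tanh(z[2 * H:3 * H])             # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3 * H:]))    # output gate
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c

def run_lstm(seq, H):
    """Run the LSTM over a (T, D) sequence of feature vectors
    (e.g. MFCC frames) and return the final hidden state."""
    D = seq.shape[1]
    rng = np.random.default_rng(0)
    W = rng.standard_normal((4 * H, D)) * 0.1  # illustrative random init
    U = rng.standard_normal((4 * H, H)) * 0.1
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h  # would feed a softmax layer over emotion classes
```

In a real system the final hidden state would be passed to a dense softmax layer and the weights trained by backpropagation through time; this sketch only shows the recurrence that lets the classifier integrate frame-level MFCCs over a whole utterance.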
“…MFCC is calculated by applying a discrete cosine transform to the log outputs of triangular filter banks evenly spaced along a logarithmic frequency axis; this is referred to as the mel scale, and it approximates the human auditory frequency response. PNCC is a feature developed to improve the robustness of voice recognition systems in noisy environments [11][12][13][14]. Because BS captured using noncontact microphones are generally low in volume and have a degraded SNR, PNCC can be expected to be effective: it refines the MFCC computation to model certain physiological aspects of human hearing more closely.…”
Section: Automatic BS Extraction on the Basis of Acoustic Features
confidence: 99%
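The MFCC computation described in the statement above (triangular filter banks spaced on the mel scale, log energies, then a DCT) can be sketched as follows. The filter count, FFT size, and the textbook mel-scale formulas used here are generic assumptions, not parameters taken from the cited work.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters evenly spaced on the mel scale."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        lo, ctr, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ctr):            # rising edge of triangle
            fb[i - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):            # falling edge of triangle
            fb[i - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb

def mfcc(frame, sr, n_filters=26, n_ceps=13):
    """MFCC of one windowed frame:
    power spectrum -> mel filterbank -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame)) ** 2
    energies = np.log(mel_filterbank(n_filters, n_fft, sr) @ spec + 1e-10)
    # DCT-II decorrelates the log filterbank energies
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps),
                                  2 * n + 1) / (2 * n_filters))
    return dct @ energies
```

PNCC replaces parts of this chain (notably the log compression and the filter shapes) with stages modeled more closely on peripheral auditory processing, which is why it tends to be more noise-robust; that variant is not sketched here.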
“…The proposed method comprises four steps: (1) segment detection using the short-term energy (STE) method; (2) automatic extraction of two acoustic features, mel-frequency cepstral coefficients (MFCC) [9,10] and power-normalized cepstral coefficients (PNCC) [11][12][13][14], from the segments; (3) automatic classification of segments as BS/non-BS using an artificial neural network (ANN); and (4) evaluation of bowel motility on the basis of the acoustic features in the time domain of the automatically extracted BS. On the basis of audio data recorded from 20 human participants before and after they consumed carbonated water, we verified (i) the validity of automatic BS extraction by the proposed method and (ii) the validity of bowel motility evaluation based on acoustic features in the time domain.…”
Section: Introduction
confidence: 99%
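Step (1) of the pipeline above, short-term energy segment detection, can be sketched as follows. The frame length, hop size, and fixed threshold ratio are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def short_term_energy(signal, frame_len, hop):
    """Energy of successive (possibly overlapping) frames."""
    n = 1 + max(0, len(signal) - frame_len) // hop
    return np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n)])

def detect_segments(signal, frame_len, hop, thresh_ratio=0.1):
    """Return (start, end) frame-index pairs for runs of frames whose
    STE exceeds a fraction of the maximum frame energy."""
    ste = short_term_energy(signal, frame_len, hop)
    active = ste > thresh_ratio * ste.max()
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                       # segment begins
        elif not a and start is not None:
            segments.append((start, i))     # segment ends
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments
```

Each detected segment would then be handed to the feature-extraction and ANN classification stages (steps 2 and 3) to decide whether it is a bowel sound.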