Speech is one of the fastest and richest channels through which humans express their emotional state; it also conveys mental and perceptual concepts between people. In this paper we address the recognition of emotional characteristics in the speech signal and propose a method that models the emotional changes of an utterance by means of a statistical learning method. In this recognition procedure, the internal feelings of the individual speaker are processed and classified during speech, and the system assigns each utterance to one of six standard classes: anger, boredom, fear, disgust, neutral, and sadness. For the training phase of the proposed system we use the standard and widely used EmoDB speech database. After pre-processing, speech patterns and features are extracted with the MFCC method, and a classifier based on statistical learning is then applied to model the trend of emotional-state changes. Empirical experiments show that the system achieves an average accuracy of 85.54% with a standard deviation of 2.5 in emotion recognition.
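The abstract names MFCC as the feature-extraction step. As a minimal sketch of how MFCC features can be computed from a raw waveform (this is a generic textbook MFCC pipeline with assumed parameter values, not the paper's exact configuration; the frame lengths, filter counts, and pre-emphasis coefficient below are common defaults chosen for illustration):

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, n_mfcc=13, frame_len=0.025,
         frame_step=0.010, n_filt=26, n_fft=512):
    """Compute MFCC features: pre-emphasis, framing, power spectrum,
    mel filterbank, log compression, and DCT."""
    # Pre-emphasis boosts high frequencies (coefficient 0.97 is a common default)
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])

    # Split into overlapping frames and apply a Hamming window
    flen = int(round(frame_len * sr))
    fstep = int(round(frame_step * sr))
    n_frames = 1 + max(0, (len(emphasized) - flen) // fstep)
    idx = np.arange(flen)[None, :] + fstep * np.arange(n_frames)[:, None]
    frames = emphasized[idx] * np.hamming(flen)

    # Per-frame power spectrum
    pow_spec = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft

    # Triangular mel-scale filterbank
    mel_max = 2595.0 * np.log10(1.0 + (sr / 2) / 700.0)
    mel_pts = np.linspace(0, mel_max, n_filt + 2)
    hz_pts = 700.0 * (10.0 ** (mel_pts / 2595.0) - 1.0)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_filt, n_fft // 2 + 1))
    for i in range(1, n_filt + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log-compress filterbank energies, then decorrelate with a DCT
    feat = np.log(pow_spec @ fbank.T + 1e-10)
    return dct(feat, type=2, axis=1, norm='ortho')[:, :n_mfcc]
```

The resulting per-frame coefficient matrix (one row per frame, `n_mfcc` columns) would then be fed to the statistical learning classifier that assigns utterances to the six emotion classes.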