This paper presents a novel approach to emotional speech recognition. Instead of classifying the full length of a speech signal, the proposed method decomposes speech signals into component words, groups the words into segments, and builds an acoustic model for each segment using features such as audio power, MFCCs, log attack time, spectrum spread, and segment duration. With the proposed segment-based classification, an unknown speech signal can be recognized as a sequence of segment emotions, from which emotion profiles (EPs) are extracted. Finally, the speech emotion is determined using the EP as a feature vector. Experiments are conducted with 6,810 training samples and 722 test samples drawn from the eight emotion classes of the IEMOCAP database. Compared with a conventional method, the proposed method improves the recognition rate from 46.81% to 58.59% for eight-class emotion classification and from 60.18% to 71.25% for four-class classification.
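As a minimal sketch of the EP step described above, assuming the emotion profile is a normalized histogram of segment-level emotion labels (the class names and eight-class set below are placeholders, not taken from the paper):

```python
from collections import Counter

# Assumed eight-class label set; the paper's actual IEMOCAP classes may differ.
EMOTIONS = ["anger", "happiness", "sadness", "neutral",
            "excitement", "frustration", "fear", "surprise"]

def emotion_profile(segment_emotions):
    """Compute an emotion profile (EP): the relative frequency of each
    emotion label over a recognized sequence of segment emotions."""
    counts = Counter(segment_emotions)
    total = len(segment_emotions)
    return [counts[e] / total for e in EMOTIONS]

# An utterance recognized as four segments yields one fixed-length EP vector,
# which can then serve as the feature input to the final emotion classifier.
ep = emotion_profile(["anger", "anger", "neutral", "sadness"])
```

The EP vector has one entry per emotion class and sums to 1, giving a fixed-length representation regardless of how many segments the utterance contains.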