Speech is one of the most promising modalities, apart from facial expressions, through which human emotions such as happiness, anger, sadness, and the neutral state can be determined. Researchers have shown that acoustic parameters of a speech signal, such as energy, pitch, and Mel-Frequency Cepstral Coefficients (MFCCs), are vital in determining a person's emotional state. There is an increasing need for new feature selection methods that improve the processing rate and recognition accuracy of the classifier by retaining only the discriminative features. This study investigates various feature selection algorithms for selecting optimal features from the speech vectors extracted using MFCC; the selected features are then used in the modeling stage.
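The pipeline described above (MFCC feature vectors, then a feature selection stage that keeps only discriminative dimensions) can be sketched with a simple filter-style selector. The sketch below is illustrative only, not the study's actual algorithm: it assumes per-utterance MFCC mean vectors have already been extracted, and ranks features by a Fisher score (between-class variance over within-class variance) before keeping the top-k. The data here is synthetic; all names (`fisher_scores`, `select_top_k`) are hypothetical.

```python
import numpy as np

def fisher_scores(X, y):
    """Score each feature by the ratio of between-class variance to
    within-class variance; higher means more discriminative."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)  # guard against zero variance

def select_top_k(X, y, k):
    """Return the indices of the k highest-scoring features."""
    scores = fisher_scores(X, y)
    return np.argsort(scores)[::-1][:k]

# Toy data: 40 utterances x 13 MFCC-mean features, 2 emotion classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 13))
y = np.repeat([0, 1], 20)
X[y == 1, 3] += 5.0  # make feature 3 strongly class-dependent
print(select_top_k(X, y, k=4))
```

With the discriminative dimensions identified, the reduced vectors would feed the modeling stage (e.g., a conventional classifier), which is the motivation the abstract gives for feature selection: fewer dimensions to process and less noise for the model.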