“…The review of speech emotion recognition showed that there are numerous corpora available for model research, validation, and testing (Haq et al., 2008; Grimm et al., 2008; Canavan et al., 1997; Durston et al., 2001; Engbert and Hansen, 2007; Burkhardt et al., 2005; Hansen and Bou-Gazale, 1997; Martin et al., 2006; Schuller et al., 2007; Steininger et al., 2002). Speech features used as input to emotion recognition models include MFCC (Motamed et al., 2017; Wu et al., 2011), prosody (Dai et al., 2015; Wu et al., 2011; Fernandez and Picard, 2011), and voice-quality (Fernandez and Picard, 2011; Dai et al., 2015) features. The use of a pre-compiled, publicly available speech feature set was investigated as part of the work undertaken by C. K.…”
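To illustrate the MFCC features mentioned above, the following is a minimal NumPy/SciPy sketch of the standard MFCC pipeline (pre-emphasis, framing and windowing, power spectrum, mel filterbank, log, DCT). The parameter values (16 kHz sample rate, 25 ms frames with a 10 ms hop, 26 mel filters, 13 coefficients) are common illustrative defaults and are assumptions, not taken from the cited works.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, n_fft=512, frame_len=400, hop=160,
         n_mels=26, n_ceps=13):
    """Sketch of MFCC extraction: frame -> window -> power spectrum
    -> mel filterbank -> log -> DCT. Parameter defaults are illustrative."""
    # Pre-emphasis boosts high frequencies before spectral analysis
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Slice the signal into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    # Per-frame power spectrum via the real FFT
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters spaced evenly on the mel scale
    mel_max = 2595 * np.log10(1 + (sr / 2) / 700)
    hz_pts = 700 * (10 ** (np.linspace(0, mel_max, n_mels + 2) / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, lo:c] = (np.arange(lo, c) - lo) / max(c - lo, 1)
        fbank[m - 1, c:hi] = (hi - np.arange(c, hi)) / max(hi - c, 1)
    # Log mel energies, then DCT to decorrelate into cepstral coefficients
    mel_energy = np.log(power @ fbank.T + 1e-10)
    return dct(mel_energy, type=2, axis=1, norm='ortho')[:, :n_ceps]
```

Applied to one second of 16 kHz audio, this yields one 13-dimensional coefficient vector per 10 ms hop; such frame-level vectors (often with deltas appended) are the typical model input in the MFCC-based work cited above.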