“…Most state-of-the-art systems in the area of emotive robots do not run in real time, which still makes them inapplicable to real human-robot-interaction scenarios. This paper builds on the work of [19,15], where real-time Bayesian classifiers were presented for the visual and the auditory signal, independently and respectively. Both classifiers output one of the five emotion classes {happy, sad, fear, neutral, anger}.…”
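To make the classification setup concrete, the following is a minimal sketch of a Bayesian classifier over the same five emotion classes. It is not the implementation from [19] or [15] (whose features and models are not specified here); it is a generic Gaussian naive Bayes trained on synthetic feature vectors, purely to illustrate the output space.

```python
import math
import random

# Illustrative sketch only: [19,15] present real-time Bayesian classifiers
# for the visual and auditory signals; this is NOT their implementation,
# just a minimal Gaussian naive Bayes over the same five emotion classes,
# trained here on synthetic 2-D feature vectors.

EMOTIONS = ["happy", "sad", "fear", "neutral", "anger"]

def fit(samples):
    """samples: dict mapping label -> list of feature vectors.
    Returns per-class (prior, per-dimension means, per-dimension variances)."""
    total = sum(len(vecs) for vecs in samples.values())
    model = {}
    for label, vecs in samples.items():
        n, dims = len(vecs), len(vecs[0])
        means = [sum(v[d] for v in vecs) / n for d in range(dims)]
        variances = [sum((v[d] - means[d]) ** 2 for v in vecs) / n + 1e-6
                     for d in range(dims)]  # +1e-6 avoids zero variance
        model[label] = (n / total, means, variances)
    return model

def classify(model, x):
    """Pick the label maximizing the Gaussian naive-Bayes log posterior."""
    def log_posterior(label):
        prior, means, variances = model[label]
        lp = math.log(prior)
        for xi, m, v in zip(x, means, variances):
            lp += -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
        return lp
    return max(model, key=log_posterior)

# Synthetic data: one well-separated 2-D cluster per emotion (hypothetical).
random.seed(0)
train = {emo: [[i + random.gauss(0, 0.3), i + random.gauss(0, 0.3)]
               for _ in range(50)]
         for i, emo in enumerate(EMOTIONS)}
model = fit(train)
print(classify(model, [0.1, 0.0]))  # a point near the "happy" cluster
```

Because both the visual and the auditory classifier emit a label from this shared set, their outputs can later be compared or fused directly.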