“…Emotions are largely expressed by a combination of facial, vocal, and bodily signals, and a well-developed literature documents how these different cues interact [19], [63], [130]. While studies on the perception of artificial agents' emotions have mainly focused on a single channel, there is some indication that recognition accuracy and evaluation improve for a robot that uses multiple channels, for example, facial and bodily expressions [16], [21]. Since human emotions are expressed through such combinations of signals, future studies should likewise combine these channels when studying the expression and perception of emotion in artificial agents.…”