The concept of affordance perception is one of the distinctive traits of human cognition, and its application to robots can dramatically improve the quality of human-robot interaction (HRI). In this paper we explore and discuss the idea of "emotional affordances" by proposing a viable model for implementation into HRI, one that considers allocentric and multimodal perception. We consider two-way affordances: a perceived object triggering an emotion, and a perceived human emotional expression triggering an action. To make the implementation generic, the proposed model includes a library that can be customised depending on the specific robot and application scenario. We present the AAA (Affordance-Appraisal-Arousal) model, which incorporates Plutchik's Wheel of Emotions, and we outline some numerical examples of how it can be used in different scenarios.

Multimodal Technologies and Interact. 2018, 2, 78

emotional mapping based on Ekman's six emotions [17,18], and secondly it defends an egocentric spatial imagery, which involves a self-to-object representational system. Most current studies on robotics and emotions are based on very basic emotional cues, mainly visual ones (and on a one-to-one direct connection with Ekman's thesis). Instead, here we want to consider the temporal nature of emotional modulations as well as the multidimensional elucidators of emotional values. At least two works are worth mentioning in this regard: Breazeal's emotional model for the robot Kismet [19] and the emotional model used for the robot . Both rely on a model based on three dimensions. While those are interesting approaches, we go beyond that architecture, extending it with the concepts of affordances and learning. We have previously discussed some of the issues related to emotions [21,22], and with this paper we extend those discussions and provide a new way to consider emotional affordances in HRI.
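As a minimal sketch of the customisable library mentioned above, the two directions of the affordance mapping could be held in two lookup tables: one from perceived objects to emotions, and one from perceived human emotions to robot actions. All class, method, and entry names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a customisable two-way affordance library.
# Names and example entries are illustrative, not from the paper.

class AffordanceLibrary:
    """Holds both directions of the two-way affordance mapping:
    perceived object -> emotion, and perceived human emotion -> action."""

    def __init__(self):
        self.object_to_emotion = {}   # direction 1: object triggers an emotion
        self.emotion_to_action = {}   # direction 2: human emotion triggers an action

    def register_object(self, obj, emotion):
        self.object_to_emotion[obj] = emotion

    def register_emotion(self, emotion, action):
        self.emotion_to_action[emotion] = action

    def emotion_for(self, obj):
        """Emotion afforded by a perceived object (None if unregistered)."""
        return self.object_to_emotion.get(obj)

    def action_for(self, emotion):
        """Action afforded by a perceived human emotion (None if unregistered)."""
        return self.emotion_to_action.get(emotion)


# Customisation for a specific robot and application scenario
lib = AffordanceLibrary()
lib.register_object("sharp_knife", "fear")                # object -> emotion
lib.register_emotion("sadness", "approach_and_comfort")   # human emotion -> action

print(lib.emotion_for("sharp_knife"))   # fear
print(lib.action_for("sadness"))        # approach_and_comfort
```

Keeping both directions in one library object is what makes the model generic: a different robot or scenario only swaps the registered entries, not the surrounding architecture.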
In our previous research we have studied how to identify a fundamental tree of basic emotions and how this mapping can be transformed into a dynamical semantic model, which includes temporal variations and a "logic of transitions" (that is, modelling mood changes so that they look realistic for average humans). Increasing the complexity of this model, together with the affordance approach, has helped us define the various parameters that must be considered for complex HRI modelling. In the present manuscript, we propose a viable model for implementation into HRI. We implement the case of two-way affordances:
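The "logic of transitions" mentioned above can be illustrated with a small sketch over Plutchik's eight primary emotions: if mood changes are restricted to adjacent positions on the wheel, a jump from one emotion to a distant one happens gradually rather than instantly. This is an illustrative assumption about how such a logic could be encoded, not the paper's actual model.

```python
# Illustrative sketch (not the paper's implementation) of a "logic of
# transitions": mood can only move to a neighbouring position on
# Plutchik's wheel per time step, so changes look gradual.

WHEEL = ["joy", "trust", "fear", "surprise",
         "sadness", "disgust", "anger", "anticipation"]

def allowed_transition(current, target):
    """A direct transition is allowed only to the same or an adjacent emotion."""
    i, j = WHEEL.index(current), WHEEL.index(target)
    n = len(WHEEL)
    dist = min((i - j) % n, (j - i) % n)   # circular distance on the wheel
    return dist <= 1

def step_towards(current, target):
    """Move one wheel position per time step towards the target emotion."""
    if current == target:
        return current
    i, j = WHEEL.index(current), WHEEL.index(target)
    n = len(WHEEL)
    # take the shorter direction around the wheel
    if (j - i) % n <= (i - j) % n:
        return WHEEL[(i + 1) % n]
    return WHEEL[(i - 1) % n]

# A direct jump from joy to sadness is disallowed...
print(allowed_transition("joy", "sadness"))  # False
# ...but the mood drifts there over several steps:
state = "joy"
while state != "sadness":
    state = step_towards(state, "sadness")
    print(state)  # trust, fear, surprise, sadness
```

A real implementation would also weight transitions by appraisal and arousal rather than treating all neighbours as equally reachable; the point here is only the constraint that makes mood trajectories look human-plausible.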