“…We caution that selecting a modeling framework introduces assumptions as well. For example, some frameworks that invoke the concept of reward tend to treat relevance and action as separate (reinforcement learning) (Kaelbling et al., 1996; Wiering and Van Otterlo, 2012; Sequeira et al., 2011; Moerland et al., 2018; Grahek et al., 2020; Levine, 2018; Sutton and Barto, 2018), while others view relevance as probabilistic (Bayesian inference), going so far as to build relevance from action (active inference) (Friston et al., 2017; Attias, 2003; Botvinick and Toussaint, 2012; Knill and Pouget, 2004; Friston and Kiebel, 2009; Allen and Friston, 2018; Friston et al., 2013; Ramstead et al., 2020; Sajid et al., 2021; Clark, 2013, 2015, 1997; Parr et al., 2022; Kiverstein et al., 2022; Smith et al., 2019, 2022; Millidge et al., 2020; Miłkowski and Litwin, 2022; Di Paolo et al., 2022; Nave, 2023). Notwithstanding accounts that mix these methods, none to our knowledge have successfully built algorithms of affective phenomena up from the earliest principles of an organism, although the groundwork is being laid (Prokopenko et al., 2009; Fernández et al., 2014; Ringstrom, 2022; Heylighen and Busseniers, 2023; Hodson et al., 2023; Broekens et al., 2013; Cunningham et al., 2013; Scherer et al., 2010; Marsella and Gratch, 2009; Marsella et al., 2010, 2016; Calvo et al., 2015; Poria et al., 2017; Emanuel and Eldar, 2023).…”
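The contrast drawn above between the two framings can be made concrete with a minimal, hypothetical sketch (not taken from any of the cited works): in a reinforcement-learning update, relevance arrives as an externally supplied reward signal separate from action selection, whereas in an active-inference-style agent there is no reward input at all, and "relevance" is encoded as a prior preference over observations that action selection itself minimizes surprise against. All function names and the toy numbers here are illustrative assumptions.

```python
import math

# (a) Reinforcement learning: reward (relevance) is an external signal,
# passed in separately from the action-selection machinery.
def q_update(q, state, action, reward, next_q_max, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step; note reward is a separate argument."""
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * next_q_max - old)
    return q

# (b) Active-inference-style selection: no reward argument anywhere.
# Relevance lives inside the agent as prior preferences over observations,
# and action is chosen to minimize expected surprise under those preferences
# (a drastically simplified stand-in for expected free energy).
def expected_surprise(predicted_obs_probs, preferred_obs_probs):
    """Expected -log preference-probability of the predicted observations."""
    return -sum(p * math.log(preferred_obs_probs[o] + 1e-12)
                for o, p in predicted_obs_probs.items())

def select_action(actions, predict, preferences):
    """Pick the action whose predicted observations best match preferences."""
    return min(actions, key=lambda a: expected_surprise(predict(a), preferences))

# Toy usage: an agent that "prefers" to observe food chooses the action
# predicted to yield food, with no reward signal ever defined.
preferences = {"food": 0.9, "none": 0.1}
predictions = {"left":  {"food": 0.8, "none": 0.2},
               "right": {"food": 0.1, "none": 0.9}}
chosen = select_action(["left", "right"], predictions.get, preferences)
```

The structural point is visible in the signatures: `q_update` cannot run without a `reward` supplied from outside, while `select_action` derives the same behavioral pull entirely from the agent's internal `preferences`, which is one way of reading the claim that active inference builds relevance from action.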