2023
DOI: 10.1002/aaai.12116

Towards machines that understand people

Andrew Howes,
Jussi P. P. Jokinen,
Antti Oulasvirta

Abstract: The ability to estimate the state of a human partner is an insufficient basis on which to build cooperative agents. Also needed is an ability to predict how people adapt their behavior in response to an agent's actions. We propose a new approach based on computational rationality, which models humans based on the idea that predictions can be derived by calculating policies that are approximately optimal given human‐like bounds. Computational rationality brings together reinforcement learning and cognitive mode…
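As a rough illustration of the idea sketched in the abstract, the snippet below derives behavior in a toy one-dimensional target-selection task as a policy that is approximately optimal given a human-like bound, here motor noise on fast movements. The task, the reward scheme, and the use of tabular Q-learning are assumptions made for this sketch only; they are not the authors' model or tasks.

```python
# A minimal sketch of the idea in the abstract: derive behavior as a policy
# that is approximately optimal *given a human-like bound*, here motor noise
# on fast movements in a toy 1-D target-selection task. The task, rewards,
# and tabular Q-learning below are illustrative assumptions, not the
# authors' model.

import numpy as np

rng = np.random.default_rng(0)

MAX_DIST = 20                          # distance to target, clipped to [0, MAX_DIST]
ACTIONS = ["jump", "step", "click"]    # noisy coarse move, precise fine move, select


def simulate(dist, action, motor_noise):
    """One step of the toy task: returns (new_distance, reward, done)."""
    if action == 2:                                   # click: pays off only on target
        return dist, (20.0 if dist == 0 else 0.0), True
    if action == 0:                                   # coarse move, corrupted by the bound
        move = 5 + int(round(rng.normal(0.0, motor_noise)))
    else:                                             # fine move, exact but slow
        move = 1
    new_dist = int(np.clip(abs(dist - move), 0, MAX_DIST))
    return new_dist, -1.0, False                      # every action costs time


def bounded_optimal_policy(motor_noise, episodes=20000,
                           alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Q-learning: approximate the optimal policy under the given bound."""
    q = np.zeros((MAX_DIST + 1, len(ACTIONS)))
    for _ in range(episodes):
        dist = int(rng.integers(1, MAX_DIST + 1))
        for _ in range(200):                          # safety cap per episode
            a = (int(rng.integers(len(ACTIONS))) if rng.random() < eps
                 else int(np.argmax(q[dist])))
            new_dist, r, done = simulate(dist, a, motor_noise)
            target = r + (0.0 if done else gamma * np.max(q[new_dist]))
            q[dist, a] += alpha * (target - q[dist, a])
            dist = new_dist
            if done:
                break
    return np.argmax(q, axis=1)


# Running the same derivation under different bounds yields different policies,
# which is the kind of adaptation the approach aims to predict (e.g. how much
# the agent relies on slow precise moves near the target may shift with noise).
for noise in (0.5, 3.0):
    policy = bounded_optimal_policy(noise)
    print(f"motor noise {noise}:",
          [ACTIONS[policy[d]] for d in range(9)])
```

The point of the sketch is that behavior is not hand-coded: changing the assumed bound and re-deriving the policy yields a prediction of how behavior should adapt.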

Cited by 5 publications (4 citation statements)
References: 129 publications

“…While systems that comply with this steering may exhibit stronger interaction performance (Colella et al., 2020), a more advanced system could aim to identify and learn from users' mental models of AI, their refinement over the course of the interaction, and the influence of mutable user goals on interaction behavior. Developing such mental models of AI systems is currently a research challenge, even more so in the context of learning these online during interaction (Howes et al., 2023; Steyvers and Kumar, 2022; Bansal et al., 2019). Co-operative multi-agent setups (Çelikok et al., 2019) with the user and the AI system as interacting agents are a promising approach to improve interactive behavior by better anticipating the user and their strategies - doing so with computationally rational user models would be an interesting avenue for further research (Howes et al., 2023); these are bound to be confronted by computational challenges in a real-time and interactive setting.…”
Section: Discussion (mentioning)
confidence: 99%
“…Developing such mental models of AI systems is currently a research challenge, even more so in the context of learning these online during interaction (Howes et al., 2023; Steyvers and Kumar, 2022; Bansal et al., 2019). Co-operative multi-agent setups (Çelikok et al., 2019) with the user and the AI system as interacting agents are a promising approach to improve interactive behavior by better anticipating the user and their strategies - doing so with computationally rational user models would be an interesting avenue for further research (Howes et al., 2023); these are bound to be confronted by computational challenges in a real-time and interactive setting. Computational efficiency and approximation to computationally rational behavior can however be achieved by employing surrogate computational rationality models using methods such as amortized inference (Moon et al., 2023) or likelihood-free inference (Aushev et al., 2023; Palestro et al., 2018; Hartig et al., 2011).…”
Section: Discussion (mentioning)
confidence: 99%
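The last sentence of the statement above points to surrogate and amortized approaches for keeping such models usable in real time. The sketch below illustrates one generic version of that idea under strong simplifying assumptions: it reuses bounded_optimal_policy(), simulate(), and MAX_DIST from the sketch following the abstract, precomputes a behavioral summary (mean actions per selection) for a grid of bound values, and then infers a user's bound by a nearest-neighbour table lookup. The grid, the summary statistic, and the lookup are placeholders, not the amortized or likelihood-free methods cited (Moon et al., 2023; Aushev et al., 2023).

```python
# Illustrative amortization sketch (assumes bounded_optimal_policy(),
# simulate(), and MAX_DIST from the sketch above): precompute
# bound -> predicted behavior offline, then invert cheaply online.
import numpy as np

rng = np.random.default_rng(1)


def mean_trial_length(policy, motor_noise, trials=500):
    """Behavioral summary: average number of actions per target selection."""
    lengths = []
    for _ in range(trials):
        dist, steps = int(rng.integers(1, MAX_DIST + 1)), 0
        for _ in range(200):                      # safety cap per trial
            dist, _, done = simulate(dist, int(policy[dist]), motor_noise)
            steps += 1
            if done:
                break
        lengths.append(steps)
    return float(np.mean(lengths))


# Offline (expensive): derive bounded-optimal behavior for a grid of bounds.
noise_grid = np.linspace(0.5, 4.0, 8)
predicted = np.array([
    mean_trial_length(bounded_optimal_policy(n, episodes=10000), n)
    for n in noise_grid
])

# Online (cheap): map an observed summary back to a bound by table lookup,
# with no optimization in the loop. The observed value here is hypothetical.
observed_mean_length = 6.0
estimate = noise_grid[np.argmin(np.abs(predicted - observed_mean_length))]
print(f"inferred motor-noise bound ~ {estimate:.2f}")
```

In a real system the lookup table would be replaced by a learned regressor or a likelihood-free posterior over the bound, but the division of labour is the same: expensive derivation offline, cheap inference during interaction.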