2019
DOI: 10.3233/aic-190615

Modelling deception using theory of mind in multi-agent systems

Abstract: If citing, it is advised that you check and use the publisher's definitive version for pagination, volume/issue, and date-of-publication details. Where the final published version is provided on the Research Portal, you are again advised to check the publisher's website for any subsequent corrections.

Cited by 28 publications (12 citation statements)
References 20 publications
“…Othello's fallacy was that he took into consideration only the behaviour a guilty person would exhibit, without taking into consideration all the other cues that might have falsified his beliefs, such as the fact that desperation causes individuals to exhibit some of the behaviours a guilty person would exhibit. In the case of complex reasoning artificial agents, Sarkadi et al have shown in [24] that high levels of scepticism in communicative social interactions between artificial agents could lead to deception even when the deceiver agent's communicative skill is low. This type of artificial agent deception, argue the authors in [24], represents the special case of unintended deception where the deceiver does not act deceptively because it wrongly estimates that deception would fail, but the interrogator (the deceiver's target) is so sceptical that it caused it to believe that the deceiver has attempted deception, and thus the interrogator is caused to infer something that is false from a truthful message.…”
Section: Discussion (mentioning)
confidence: 99%
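The citation statement above describes a mechanism from [24] that can be simulated directly: a deceiver with low communicative skill sends a truthful message, yet an overly sceptical interrogator assumes deception was attempted and so infers something false from the truthful message. The following is a minimal, hypothetical sketch of that dynamic; all names, parameters, and the 0.5 threshold are illustrative assumptions, not taken from [24].

```python
# Hypothetical sketch of "unintended deception": a truthful message
# plus an excessively sceptical receiver yields a false belief.
# Names and the scepticism threshold are illustrative, not from [24].

def interpret(message_is_true: bool, scepticism: float,
              threshold: float = 0.5) -> bool:
    """Return the belief the interrogator adopts about the world.

    A highly sceptical interrogator (scepticism > threshold) assumes the
    sender attempted deception and adopts the negation of the message.
    """
    if scepticism > threshold:
        return not message_is_true  # truthful message -> false belief
    return message_is_true

# The deceiver, estimating that deception would fail, sends the truth;
# the interrogator's high scepticism flips it into a false belief.
truthful_message = True
belief = interpret(truthful_message, scepticism=0.9)
print(belief)  # False: a false inference drawn from a truthful message
```

The point of the sketch is that no deceptive intent is needed on the sender's side: the false belief is produced entirely by the receiver's interpretation policy, which is the special case the authors of [24] call unintended deception.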
“…Emotional Well-being Theory. It gives a description in logic of the OCC model of emotion [60,80,81,86], with the objective of achieving and maximizing the happiness of the student [70]. 3.…”
Section: 1 (mentioning)
confidence: 99%
“…The Cecilia architecture has been designed to include a Theory of Mind [80] extended with emotions [60,81] of the User Agent (the student) as a Logic Programming (LP) theory in the User Model. It is through LP knowledge representation that it is possible to reason about and plan a Dialogue Composition (DC) to support the user's human development, taking into account her beliefs, intentions, desires, and emotions.…”
Section: Introduction (mentioning)
confidence: 99%