2015
DOI: 10.1007/978-3-319-28460-6_8
Building Support-Based Opponent Models in Persuasion Dialogues

Abstract: This paper deals with an approach to opponent-modelling in argumentation-based persuasion dialogues. It assumes that dialogue participants (agents) have models of their opponents' knowledge, which can be augmented based on previous dialogues. Specifically, previous dialogues indicate relationships of support, which refer both to arguments as abstract entities and to their logical constituents. The augmentation of an opponent model relies on these relationships. An argument external to an opponent mod…
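The augmentation step described in the abstract can be illustrated with a minimal sketch: given a support relation learned from previous dialogues, an opponent model is closed under the arguments those entries support. All names and the toy support graph here are hypothetical; the paper's formalism, which also covers the logical constituents of arguments, is richer than this.

```python
# Hypothetical support relation observed in previous dialogues:
# support[a] = set of arguments the opponent tended to assert after a.
support = {
    "a1": {"a2", "a3"},
    "a2": {"a4"},
}

def augment(opponent_model, support):
    """Add to the model every argument supported (transitively)
    by an argument already in it."""
    model = set(opponent_model)
    frontier = list(model)
    while frontier:
        arg = frontier.pop()
        for supported in support.get(arg, ()):
            if supported not in model:
                model.add(supported)
                frontier.append(supported)
    return model

print(sorted(augment({"a1"}, support)))  # ['a1', 'a2', 'a3', 'a4']
```

Starting from a model containing only `a1`, the transitive closure under the observed support relation pulls in `a2`, `a3` and `a4` as arguments the opponent is likely to know.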

Cited by 29 publications (32 citation statements)
References 15 publications
“…This can then be used to determine whether there is any chance of the dialogue leading to success, and if not, giving up and unsuccessfully terminating the dialogue [23]. Probabilistic models of the opponent have been used in some strategies allowing the selection of moves for an agent based on what it believes the other agent believes [73], selection of moves based on what it believes the other agent is aware of [126], and based on the history of previous dialogues to predict the arguments that an opponent might put forward [59]. In [20], a planning system is used by the persuader to optimize choice of arguments based on belief in premises, and in [21], an automated planning approach is used for persuasion that accounts for the uncertainty of the proponent's model of the opponent by finding strategies that have a certain probability of guaranteed success no matter which arguments the opponent chooses to assert.…”
Section: Dialogical Level
confidence: 99%
“…There are some promising proposals that could contribute to a solution (e.g. [20,21,59,73,75,127]), and I will discuss the progress we have made on this in Section 7.4. However, if we are to harness some of the other levers of persuasion that I discussed in Section 3, then we will need to broaden the modelling to incorporate aspects of personality and bias.…”
Section: Shortcomings In The State Of The Art
confidence: 99%
“…A probabilistic model of the opponent has been used in a dialogue strategy allowing the selection of moves for an agent based on what it believes the other agent is aware of and the moves it might take (Rienstra, Thimm, & Oren, 2013). In another approach to probabilistic opponent modelling, the history of previous dialogues is used to predict the arguments that an opponent might put forward (Hadjinikolis, Siantos, Modgil, Black, & McBurney, 2013). For modelling the possible dialogues that might be generated by a pair of agents, a probabilistic finite state machine can represent the possible moves that each agent can make in each state of the dialogue assuming a set of arguments that each agent is aware of (Hunter, 2014b).…”
Section: Probabilistic Argumentation
confidence: 99%
“…In their work, Rienstra et al. [6] apply the basic model update mechanism and Black et al. [5] use the smart approach; however, neither explicitly considers the effect the update mechanism has on the outcome of the dialogue. Hadjinikolis et al. [7,8] propose a method an agent can use to augment an opponent model with extra information, based on previous dialogue experience; however, they do not consider how this relates to dialogue outcome.…”
Section: Performance Of Mechanisms For Scenarios That Are Not Accurate
confidence: 99%
“…However, there is a lack of formal investigation into how such a model can be maintained and under what circumstances it can be useful. Rienstra et al propose a mechanism for updating an opponent model with the addition of arguments proposed or received by the opponent [6], Black et al's approach involves also removing from the opponent model anything that is inconsistent with the observed opponent behaviour [5], while Hadjinikolis et al consider how an agent can develop a model of the likelihood that an opponent will know a particular argument if it asserts some other argument [7,8]; however, none of these works formally investigate the impact of the model update mechanism on the dialogue outcome.…”
Section: Introduction
confidence: 99%
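The three update mechanisms contrasted in the citation statements above can be sketched as follows. These are hypothetical minimal implementations for illustration, not the authors' actual definitions: a basic update adds whatever the opponent asserts, a smart update also removes model entries inconsistent with the observed assertion (modelled here as mutual attack), and a history-based update estimates how likely the opponent is to know one argument given it asserted another.

```python
def basic_update(model, asserted):
    """Basic mechanism (in the style of Rienstra et al. [6]):
    add the argument the opponent just asserted."""
    return model | {asserted}

def smart_update(model, asserted, attacks):
    """Smart mechanism (in the style of Black et al. [5]):
    additionally drop entries inconsistent with the assertion,
    modelled here simply as membership in an attack relation."""
    consistent = {a for a in model
                  if (a, asserted) not in attacks
                  and (asserted, a) not in attacks}
    return consistent | {asserted}

def likelihood(history, a, b):
    """History-based estimate (in the style of Hadjinikolis et al. [7,8]):
    P(opponent knows b | opponent asserts a), computed from past
    dialogues, each given as the set of arguments the opponent used."""
    with_a = [d for d in history if a in d]
    if not with_a:
        return 0.0
    return sum(b in d for d in with_a) / len(with_a)
```

For example, with an attack relation `{("a", "b"), ("b", "a")}`, `smart_update({"a", "c"}, "b", ...)` drops `a` and keeps `c`, whereas `basic_update` would retain both; this difference is exactly the kind of effect on dialogue outcome that the quoted passages note has not been formally investigated.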