Abstract: A deductive argument is a pair where the first item is a set of premises, the second item is a claim, and the premises entail the claim. This can be formalised by assuming a logical language for the premises and the claim, and logical entailment (or consequence relation) for showing that the claim follows from the premises. Examples of logics that can be used include classical logic, modal logic, description logic, temporal logic, and conditional logic. A counterargument for an argument A is an argument B wher…
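The abstract above defines a deductive argument as a pair (premises, claim) with the premises classically entailing the claim. As a rough illustration of that definition (not the cited paper's own formalism), the following sketch encodes propositional formulas as Python functions of a truth assignment and checks entailment by exhaustive truth-table search; all names here are illustrative.

```python
from itertools import product

def entails(premises, claim, atoms):
    """Classical entailment via brute-force truth tables.

    premises, claim: callables mapping an assignment dict to bool.
    atoms: list of propositional atom names.
    """
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        # A countermodel: all premises hold but the claim fails.
        if all(p(v) for p in premises) and not claim(v):
            return False
    return True

def is_argument(premises, claim, atoms):
    # A deductive argument is a pair whose premises entail its claim.
    return entails(premises, claim, atoms)

# Example: {a, a -> b} entails b (modus ponens), so the pair is an argument.
a = lambda v: v["a"]
a_implies_b = lambda v: (not v["a"]) or v["b"]
b = lambda v: v["b"]
print(is_argument([a, a_implies_b], b, ["a", "b"]))  # True
```

Truth-table search is exponential in the number of atoms, but it is adequate for a toy illustration of the pair-with-entailment definition.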
“…if B is less preferred than A (in symbols, B ≺ A), then the attack B → A fails; see e.g. [1,6,9,12,21,22,24,27,33]. Some formalisms (e.g.…”
Section: Cakes Example in Argumentation
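The snippet above describes a common preference-handling scheme: an attack from B to A is cancelled whenever B is strictly less preferred than A. A minimal sketch of that filtering step, with illustrative names not tied to any single formalism in the cited list:

```python
def defeats(attacks, prefers):
    """Keep only the attacks that survive the preference check.

    attacks: set of (attacker, target) pairs.
    prefers: set of (x, y) pairs meaning "x is strictly preferred to y".
    An attack (b, a) fails exactly when b is strictly less preferred
    than a, i.e. when (a, b) is in prefers.
    """
    return {(b, a) for (b, a) in attacks if (a, b) not in prefers}

attacks = {("B", "A"), ("C", "A")}
prefers = {("A", "B")}            # A is strictly preferred to B, so B ≺ A
print(defeats(attacks, prefers))  # {('C', 'A')} -- B's attack on A fails
```

Formalisms differ in where this filter is applied (on attacks, on extensions, or inside the construction of arguments), which is precisely the variety the surveyed paper examines.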
“…We begin with a well known formalism called Deductive Argumentation [8,9] in which classical logic is commonly used as a basis. Deductive Argumentation can be seen as a representative of those formalisms that employ forms of propositional, first-order, conditional or temporal logics, e.g.…”
Section: Deductive Argumentation
“…Deductive Argumentation can be seen as a representative of those formalisms that employ forms of propositional, first-order, conditional or temporal logics, e.g. [1,8,9,22,25].…”
Section: Deductive Argumentation
“…Hence the variety of formalisms of argumentation with preferences: e.g. [1,3,5,6,9,12,21,22,24,27,30,32,33,36,37].…”
Abstract. One of the main objectives of AI is modelling human reasoning. Since preference information is an indispensable component of common-sense reasoning, the two should be studied in tandem. Argumentation is an established branch of AI dedicated to this task. In this paper, we study how argumentation with preferences models human intuition behind a particular decision making scenario concerning reasoning with rules and preferences. To this end, we present an example of a common-sense reasoning problem complemented with a survey of decisions made by human respondents. The survey reveals an answer that contrasts with solutions offered by various argumentation formalisms. We argue that our results call for advancements of approaches to argumentation with preferences as well as for examination of the type of problems of reasoning with preferences put forward in this paper. Our work contributes to the line of research on preference handling in argumentation, and it also enriches the discussions on the increasingly important topic of preference treatment in AI at large.
“…negates the support of the argument). A range of options for structured argumentation at the logic level have been investigated (see [17,51,104,140] for tutorial reviews of some of the key proposals). Whilst most proposals for structured argumentation involve simple rule-based reasoning, there is some investigation of the role of classical logic in argumentation (see for example [16]), and of how probabilistic reasoning can be incorporated in structured arguments (see for example [74,139,144]).…”
Abstract. Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing is heavily dependent on persuasion. But many other activities involve persuasion such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers to not text while driving, or an online safety expert persuading users of social media sites to not reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is the study of formal models of dialogues involving arguments and counterarguments, of user models, and strategies, for APSs. A promising application area for computational persuasion is in behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.