In this paper we describe a method for efficient argument-based inquiry. In this method, an agent constructs arguments for and against a particular topic by matching argumentation rules against observations gathered by querying the environment. To avoid superfluous queries, the agent must determine whether the acceptability status of the topic can still change given more information. We define a notion of stability: a structured argumentation setup is stable if no new arguments can be added, or if adding new arguments cannot change the status of the topic. Because determining stability requires hypothesizing over all future argumentation setups, which is computationally expensive, we define a less complex approximation algorithm and show that it is a sound approximation of stability. Finally, we show how stability (or our approximation of it) can be used to determine an optimal inquiry policy, and discuss how this policy can be used, for example, to determine a strategy in an argument-based inquiry dialogue.
We propose an agent architecture for transparent human-in-the-loop classification. By combining dynamic argumentation with legal case-based reasoning, we create an agent that can explain its decisions at various levels of detail and adapt to new situations. It keeps the human analyst in the loop by presenting suggestions for corrections that may change the factors on which the current decision is based, and by enabling the analyst to add new factors. We are currently implementing the agent for the classification of fraudulent web shops at the Dutch Police.
Reasoning under incomplete information is an important research direction in AI argumentation. Most computational advances in this direction have so far focused on abstract argumentation frameworks; developing computational approaches to reasoning under incomplete information in structured formalisms remains largely an open challenge. We address this challenge by studying the so-called stability and relevance problems, with the aim of analyzing the resilience of acceptance statuses in light of new information, in the central structured formalism ASPIC+. Focusing on the grounded semantics and an ASPIC+ fragment motivated by application scenarios, we develop exact ASP-based algorithms for stability and relevance in incomplete ASPIC+ theories, and pinpoint the complexity of reasoning about stability (coNP-complete) and relevance (Sigma_2^P-complete), further justifying our ASP-based approaches. Empirically, the algorithms exhibit promising scalability, outperforming even a recent inexact approach to stability; our ASP-based iterative approach is the first algorithm proposed for reasoning about relevance in ASPIC+.
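To make the shared notion of stability in these abstracts concrete, the following is a minimal sketch, not taken from any of the papers above. It works on abstract argumentation frameworks rather than structured ASPIC+ theories, and checks stability naively by enumerating all completions (all ways the uncertain arguments could be added) and testing whether the topic's acceptance status under grounded semantics is the same in each. All function and variable names are hypothetical.

```python
# Illustrative sketch only: stability of a topic under grounded semantics,
# checked by brute-force enumeration of completions of an incomplete
# abstract argumentation framework. Not the papers' actual algorithms.
from itertools import combinations

def grounded(args, attacks):
    """Grounded extension: least fixed point of the characteristic function."""
    ext = set()
    while True:
        # An argument is defended if every attacker is counter-attacked by ext.
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in ext)
                   for b in args if (b, a) in attacks)
        }
        if defended == ext:
            return ext
        ext = defended

def is_stable(certain, uncertain, attacks, topic):
    """The topic is stable if its grounded acceptance status is identical
    in every completion, i.e. every subset of uncertain arguments added."""
    statuses = set()
    for r in range(len(uncertain) + 1):
        for extra in combinations(sorted(uncertain), r):
            args = certain | set(extra)
            # Restrict the attack relation to the arguments present.
            atts = {(a, b) for (a, b) in attacks if a in args and b in args}
            statuses.add(topic in grounded(args, atts))
    return len(statuses) == 1
```

For example, a topic `t` with a possible attacker `a` is unstable on its own, but becomes stable once a certain defender `d` attacking `a` is present, since `t` is then accepted in every completion. The exponential enumeration over completions is exactly the cost that motivates both the approximation algorithm of the first abstract and the exact ASP-based encodings of the third.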