Abstract: Approaches to the verification of multi-agent systems are typically based on games or transition systems defined in terms of states and actions. However, such approaches often ignore a key aspect of multi-agent systems, namely that the agents' actions require (and sometimes produce) resources. We survey previous work on the verification of multi-agent systems that takes resources into account, substantially extending a survey from 2016 [9].
“…As we are faced with dynamic degrees of autonomy in TAS, we require contextualised methods that are able to ascribe responsibility dynamically. A way forward is to capture resource and cost dynamics (Alechina and Logan 2020). Using such cost-aware methods, one can formulate degrees of responsibility based on agents' control over the resources.…”
Section: Quantified Degrees To Address Responsibility Voids
Ensuring the trustworthiness of autonomous systems and artificial intelligence is an important interdisciplinary endeavour. In this position paper, we argue that this endeavour will benefit from technical advancements in capturing various forms of responsibility, and we present a comprehensive research agenda to achieve this. In particular, we argue that ensuring the reliability of autonomous systems can take advantage of technical approaches for quantifying degrees of responsibility and for coordinating tasks based on them. Moreover, we deem that, in certifying the legality of an AI system, formal and computationally implementable notions of responsibility, blame, accountability, and liability are applicable for addressing potential responsibility gaps (i.e. situations in which a group is responsible, but individuals' responsibility may be unclear). This is a call to enable AI systems themselves, as well as those involved in the design, monitoring, and governance of AI systems, to represent and reason about who can be seen as responsible in prospect (e.g. for completing a task in the future) and who can be seen as responsible retrospectively (e.g. for a failure that has already occurred). To that end, in this work, we show that across all stages of the design, development, and deployment of trustworthy autonomous systems (TAS), responsibility reasoning should play a key role. This position paper is a first step towards establishing a road map and research agenda on how the notion of responsibility can provide novel solution concepts for ensuring the reliability and legality of TAS and, as a result, enable an effective embedding of AI technologies into society.
“…In fact, we can point only to the fundamental monograph [7]. Admittedly, there is a fairly extensive literature on various variants of dynamic epistemic logic for multi-agent systems (e.g., [11,12]), but this literature is devoted to classical logical questions (semantics and axiomatizability, expressive power and decidability), not to the construction of knowledge-based multi-agent algorithms.…”
A multiagent algorithm is a knowledge-based distributed algorithm that solves problems through the cooperative work of agents. From an individual agent's perspective, a multiagent algorithm is a reactive and proactive knowledge/belief-based rational algorithm aimed at achieving the agent's own desires. In this paper we study a couple of knowledge-based multiagent algorithms. One particular algorithm is for a system consisting of agents that arrive one by one (in a nondeterministic order) at a resource center to rent (for a while) one of the available desired resources. The available resources are passive; they form a cloud. Each available resource is lent on demand if there is no race for it, and returns to the cloud after use. Agents also form a cloud, but leave it immediately when they rent a desired resource. The problem is to design a knowledge-based multiagent algorithm that allows each arriving agent eventually to rent one of its desired resources (without any race for these resources).
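The rental setting described above can be sketched in code. The following is a minimal Python illustration, assuming a shared-memory setting rather than the message-passing, knowledge-based protocol the paper actually studies; the `ResourceCenter` class and its method names are illustrative, not taken from the paper. A lock makes the test-and-take step atomic, which is one simple way to rule out races for a resource.

```python
import threading

class ResourceCenter:
    """Illustrative sketch (not the paper's protocol): passive resources
    form a cloud and are lent on demand, one agent at a time."""

    def __init__(self, resources):
        self._available = set(resources)
        self._lock = threading.Lock()
        self._freed = threading.Condition(self._lock)

    def rent(self, desired):
        """Block until one of the desired resources is free, then take it.
        The condition's lock makes the test-and-take atomic, so two
        agents can never race for the same resource."""
        with self._freed:
            while not (self._available & set(desired)):
                self._freed.wait()
            resource = (self._available & set(desired)).pop()
            self._available.discard(resource)
            return resource

    def release(self, resource):
        """Return a resource to the cloud and wake any waiting agents."""
        with self._freed:
            self._available.add(resource)
            self._freed.notify_all()
```

An agent is then simply a thread that calls `rent`, uses the resource, and calls `release`; every waiting agent is eventually woken when a resource it desires returns to the cloud.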
“…However, if logics for strategies are to be applied to concrete AI scenarios, it is key to account for the resources that actions might consume or produce. These considerations have recently prompted investigations into resource-aware logics for strategies [11,12]. The need for managing resources in MAS was identified quite early, and many logical formalisms based on ATL have been introduced to endow actions with the consumption or production of resources [13,14,15,16,17].…”
Section: Introduction
“…Hereafter we focus specifically on the several ATL-like formalisms that have been put forward in recent years, which are characterized by endowing actions with the consumption or production of resources [11,13,14,15,16,17]. In this line of research, a remarkable breakthrough was the design of the logic RB±ATL, with production (+) and consumption (−) of resources, whose model-checking problem was shown to be decidable in [16].…”
Section: Introduction
“…For a thorough discussion of the current state of the art in resource-bounded ATL, we refer to the survey paper [11].…”
The resource-bounded alternating-time temporal logic RB±ATL combines strategic reasoning with reasoning about resources. Its model-checking problem is known to be 2EXPTIME-complete (the same as for its proper extension RB±ATL*), and fragments have been identified that lower the complexity. In this work, we consider the variant RB±ATL+, which allows Boolean combinations of path formulae starting with single temporal operators but is restricted to a single resource, providing an interesting trade-off between temporal expressivity and resource analysis. We show that the model-checking problem for RB±ATL+ restricted to a single agent and a single resource is Δ^P_2-complete, hence the same as for the standard branching-time temporal logic CTL+. In this case, reasoning about resources comes at no extra computational cost. When a fixed finite set of linear-time temporal operators is considered, the model-checking problem drops to PTIME, which includes the special case of RB±ATL restricted to a single agent and a single resource. Furthermore, we show that, with an arbitrary number of agents and a fixed number of resources, the model-checking problem for RB±ATL+ can be solved in EXPTIME using a sophisticated Turing reduction to the parity game problem for alternating vector addition systems with states (AVASS).
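To give a flavour of the single-agent, single-resource setting, here is a naive Python sketch of resource-bounded reachability: can the agent reach a goal state from a starting budget without the budget ever going negative, where actions may consume (positive cost) or produce (negative cost) the resource? The function name, the transition encoding, and the artificial budget cap are all illustrative assumptions; this is a plain search over (state, budget) pairs, not the fixed-point algorithms behind the complexity results above.

```python
from collections import deque

def can_reach(transitions, start, goal, budget, cap=100):
    """Illustrative check for single-agent, single-resource reachability.
    `transitions` maps a state to a list of (cost, next_state) pairs;
    a negative cost means the action *produces* the resource.
    Budgets are capped at `cap` so the search space stays finite,
    an ad hoc device rather than a proved bound."""
    seen = set()
    queue = deque([(start, min(budget, cap))])
    while queue:
        state, b = queue.popleft()
        if state == goal:
            return True
        if (state, b) in seen:
            continue
        seen.add((state, b))
        for cost, nxt in transitions.get(state, []):
            nb = b - cost
            if nb >= 0:  # never let the resource balance go negative
                queue.append((nxt, min(nb, cap)))
    return False
```

For instance, with a producing self-loop on the initial state, an agent can "save up" enough of the resource to afford an otherwise unaffordable path, which is exactly the kind of interaction between production and consumption that makes these logics subtle.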