2020
DOI: 10.1007/978-3-030-51924-7_3
In-Time Explainability in Multi-Agent Systems: Challenges, Opportunities, and Roadmap

Abstract: In the race for automation, distributed systems are required to perform increasingly complex reasoning to deal with dynamic tasks, often not controlled by humans. On the one hand, systems dealing with strict timing constraints in safety-critical applications have mainly focused on predictability, leaving little room for complex planning and decision-making processes. Indeed, real-time techniques are very efficient in predetermined, constrained, and controlled scenarios. Nevertheless, they lack the necessary flexibil…


Cited by 12 publications (9 citation statements)
References 34 publications
“…Some AI and ML application domains can be considered very sensitive and high-risk, closely associated with human lives, their wellness, and safety: industrial control, healthcare, the military domain, or self-driving cars perfectly represent instances of such safety-critical domains. Most authors agree that the demand for XAI in safety-critical domains is inseparable from further developments, as they involve high-consequence decisions, and a failure of the AI system could result in significant harm to people or the environment [7][8][9].…”
Section: The Need for Explainability in the Smart Home Domain
confidence: 99%
“…Experiments have shown that XAI principles implemented through conversational and virtual agents contribute to increasing user trust in the XAI system [17]. The authors in [9] discuss the concept of real-time multi-agent systems and how they are designed to operate in highly dynamic environments, highlighting the importance of meeting both soft and hard deadlines. The paper also explores the use of BDI-based agents, which are suited for unpredictable scenarios requiring dynamic decision-making, and how they can be used to develop explainable and real-time compliant MAS.…”
Section: Explainable Multi-Agent Systems for IoT
confidence: 99%
“…Explaining AI applications, especially those involving Machine Learning (ML) ( Holzinger, 2018 ) and Deep Neural Networks (DNN) ( Angelov and Soares, 2020 ; Booth et al, 2021 ), is however still an ongoing effort, due to the high complexity and sophistication of the processes in place (e.g., data handling, algorithm tuning, etc.) as well as the wide range of AI systems such as recommendation systems ( Zhang and Chen, 2020 ), human-agent systems ( Rosenfeld and Richardson, 2019 ), planning systems ( Chakraborti et al, 2020 ), multi-agent systems ( Alzetta et al, 2020 ), autonomous systems ( Langley et al, 2017 ), or robotic systems ( Anjomshoae et al, 2019 ; Rotsidis et al, 2019 ).…”
Section: Related Work
confidence: 99%
“…In the literature, explainable AI presents several challenges: selective decision-making that focuses on explanations and background knowledge [127], handling a large amount of information [128], and case-specific decision-making [129]. Moreover, we have faced other challenges in this work which might be useful to other researchers working on utilising Explainable AI for developing Smart Cities' solutions.…”
Section: Accuracy of the Hybrid Image Classifier
confidence: 99%