2023 IEEE 20th International Conference on Software Architecture Companion (ICSA-C)
DOI: 10.1109/icsa-c57050.2023.00029
Towards Better Trust in Human-Machine Teaming through Explainable Dependability

Abstract: The human-machine teaming paradigm is increasingly widespread in critical domains, such as healthcare and domestic assistance. The paradigm goes beyond human-on-the-loop and human-in-the-loop systems by promoting tight teamwork between humans and autonomous machines that collaborate in the same physical space. These systems are expected to build a certain level of trust by enforcing dependability and exhibiting interpretable behavior. We present emerging results in this direction, with a novel framework aiming…

Cited by 5 publications (4 citation statements)
References 26 publications
“…In recent years, explainability, seen as the ability to provide a human with understandable explanations of the results produced by AI and ML algorithms, has become an essential aspect of designing tools based on these techniques [1], especially in critical areas such as healthcare [26]. Even if explainability is a term coined in the area of AI, interest in it is also growing in the software engineering and requirement engineering communities [9], [25]; researchers in these communities have proposed, for example, explainable analytical models for predictions and decision-making [25], explainable counterexamples [14], explainable quality attribute trade-offs in software architecture selection [4], the analysis of explainability as a non-functional requirement and its tradeoff with other quality attributes [9], [15] and in relation to human-machine teaming [3]. Work describing the theoretical basis of explainability, exploiting concepts from philosophy, psychology, and sociology can be found, for example, in [8], [21], [22], [24].…”
Section: Related Work (mentioning, confidence: 99%)
“…The framework is complemented by a set of meta-requirements and means to be engineered within a system to make it capable of producing explanations. We illustrate the applicability of the proposed framework by instantiating its main conceptual aspects in a Human-Machine-Teaming (HMT) [3], [19] application scenario where service robots assist patients and hospital staff during daily operations.…”
Section: Introduction (mentioning, confidence: 99%)