2023
DOI: 10.21203/rs.3.rs-2639603/v1
Preprint

Human-centric and Semantics-based Explainable Event Detection: A Survey

Abstract: In recent years, there has been a surge of interest in artificial intelligence systems that can provide human-centric explanations for their decisions or predictions. No matter how good and efficient a model is, users and practitioners find it difficult to trust a model whose behaviour they cannot understand. Incorporating human-centric explainability into event detection systems is significant for building a decision-making process that is more trustworthy and sustainable. Human-centric and se…

Cited by 1 publication (1 citation statement) | References 83 publications (72 reference statements)
“…Kolajo and Daramola (2023) examined human-centric and semantics-based explainable event detection models. Their survey evaluated these techniques by presenting ways in which the semantic, linguistic, and user-centred design principles of an AI system could be improved to enhance the interpretability of event-driven AI systems [23]. Lukomski et al. (2024) studied the effect of oncological data on AI predictions of five-year survival in esophageal cancer, demonstrating the necessity of interpretable AI models for clinical decision support and personalized medicine [24].…”
Section: Related Work
confidence: 99%