Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization 2020
DOI: 10.1145/3386392.3399276
Accessible Cultural Heritage through Explainable Artificial Intelligence

Abstract: The Ethics Guidelines for Trustworthy AI advocate for AI technology that is, among other things, more inclusive. Explainable AI (XAI) aims at making state-of-the-art opaque models more transparent, and advocates AI-based outcomes endorsed with a rationale explanation, i.e., an explanation that targets non-technical users. XAI and Responsible AI principles hold that the audience's expertise should be included in the evaluation of explainable AI systems. However, AI has not yet reached all public an…

Cited by 34 publications (18 citation statements) · References 47 publications
“…Remaining within the artificial intelligence area, the authors in [8] discuss some challenges and research questions to be addressed by the latest explainable artificial intelligence (XAI) models. Fairness, accountability, and transparency in machine learning are the first topics around which the specific challenges defined by the authors revolve.…”
Section: Related Work
confidence: 99%
“…As Deep Learning is a flourishing strategy in image and video captioning [24], we exploited a deep neural network-based Image Captioning framework to label each candidate frame f_C ∈ CF with an appropriate caption. In detail, CulturAI embeds an Image Captioning framework compliant with the architecture proposed by Xu et al. [27]. The framework adopts a convolutional neural network to extract from each candidate frame its visual features, which are subsequently decoded into human-friendly sentences by an LSTM recurrent neural network.…”
Section: E. Image Captioning
confidence: 99%
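The captioning step quoted above follows the standard encoder-decoder pattern of Xu et al. [27]: a convolutional network turns each candidate frame into a grid of visual feature vectors, and an attention-equipped LSTM decodes those features into a caption word by word. The sketch below illustrates that pattern in PyTorch; the class names, the ResNet-18 backbone, the dimensions, and the toy vocabulary are assumptions made for illustration and are not taken from the CulturAI code.

# Minimal sketch of a CNN-encoder / LSTM-decoder captioner in the spirit of
# Xu et al. [27]; names, dimensions, and the toy vocabulary are illustrative,
# not the CulturAI implementation.
import torch
import torch.nn as nn
import torchvision.models as models


class FrameEncoder(nn.Module):
    """Extract a grid of visual feature vectors from a candidate frame with a CNN."""

    def __init__(self):
        super().__init__()
        cnn = models.resnet18(weights=None)                        # pretrained weights optional
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])  # drop avgpool/fc, keep spatial map

    def forward(self, frames):                      # frames: (B, 3, 224, 224)
        fmap = self.backbone(frames)                # (B, 512, 7, 7)
        return fmap.flatten(2).transpose(1, 2)      # (B, 49, 512): one vector per spatial location


class CaptionDecoder(nn.Module):
    """Decode visual features into a word sequence with an attention-equipped LSTM."""

    def __init__(self, vocab_size, feat_dim=512, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim + feat_dim, 1)            # additive attention score per location
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):             # feats: (B, 49, 512), captions: (B, T) token ids
        B, T = captions.shape
        h = feats.new_zeros(B, self.hidden_dim)
        c = feats.new_zeros(B, self.hidden_dim)
        logits = []
        for t in range(T):
            # attention weights over the spatial locations, conditioned on the LSTM state
            scores = self.attn(torch.cat(
                [h.unsqueeze(1).expand(-1, feats.size(1), -1), feats], dim=-1)).squeeze(-1)
            context = (scores.softmax(dim=1).unsqueeze(-1) * feats).sum(dim=1)   # (B, 512)
            x = torch.cat([self.embed(captions[:, t]), context], dim=-1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)           # (B, T, vocab_size)


if __name__ == "__main__":
    encoder, decoder = FrameEncoder(), CaptionDecoder(vocab_size=1000)
    frames = torch.randn(2, 3, 224, 224)            # two candidate frames
    captions = torch.randint(0, 1000, (2, 12))      # reference caption token ids (teacher forcing)
    print(decoder(encoder(frames), captions).shape) # torch.Size([2, 12, 1000])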
“…To our knowledge, there has been no prior research that specifically tests the use of the drama-based approach for older adults in a remote museum visitation. Previous work focused on other types of approaches for remote museum visitations for older adults (Kostoska et al., 2015; Beer and Takayama, 2011; Kostoska et al., 2016; Pisoni et al., 2019; Díaz-Rodríguez and Pisoni, 2020), and our work advances the state of the art by providing an empirical investigation of the appropriateness of the drama-based approach for remote museum visits for older adults. Our work addresses the need of this population to stay in contact with both people and places.…”
Section: Introduction
confidence: 97%
“…In this sense, the exploitation of artificial intelligence techniques leaves ample room for improvement [2], [3]. In particular, the field of Natural Language Processing (NLP) can help to distinguish genuine reviews from fake ones by correctly directing efforts to modernize and improve the services offered for cultural heritage.…”
Section: Introduction
confidence: 99%