27th International Conference on Intelligent User Interfaces 2022
DOI: 10.1145/3490099.3511140
Explaining Recommendations in E-Learning: Effects on Adolescents' Trust

Abstract: Recommender systems are increasingly supporting explanations to increase trust in their recommendations. However, studies on explaining recommendations typically target adults in low-risk e-commerce or media contexts, and using explanations in e-learning has received little research attention. To address these limits, we investigated how explanations affect adolescents' trust in an exercise recommender on a mathematical e-learning platform. In a randomized controlled experiment with 37 adolescents, we compared…

Cited by 20 publications (24 citation statements)
References 65 publications (69 reference statements)
“…In our experiment, neither an accuracy statement nor a full explanation led to higher trust (neither subjective trust nor willingness to change the grade towards the system), compared to the control condition. This contradicts previous literature, which showed that explanations do result in higher trust in an intelligent tutoring system (Conati et al, 2021; Ooge et al, 2022). This could be because the context was different (intelligent tutoring system vs. automated essay scoring), or because the appropriate format, content, and timing of explanations may differ across contexts.…”
Section: Discussion (contrasting)
confidence: 58%
“…In addition, these explanations resulted in a higher intention to use hints, increased perceived helpfulness of the hints, and a higher trust in the system to provide appropriate hints. Ooge et al (2022) studied the effect of explanations on student trust while they interacted with an e-learning platform that recommended mathematics exercises (N = 37). The explanations were created via a user-centred design process, and included a why statement explaining why the student received the recommendation and a justification of the estimated number of tries needed, as well as a histogram showing the number of tries needed by similar students.…”
Section: Explainable Artificial Intelligence In Education (mentioning)
confidence: 99%
“…Alternatively, other researchers consider trust as a multidimensional ensemble of several constructs which they typically measure with multiple Likert-type questions. For example, McKnight et al [62] introduced trusting beliefs as a composition of competence, benevolence, and integrity; and Ooge et al [69] measured trust as the average of trusting beliefs, intention to return, and perceived transparency.…”
Section: Explainable AI and Trust (mentioning)
confidence: 99%
“…In words, whenever someone correctly solves an exercise, their Elo rating increases and the exercise's Elo rating decreases, proportional to how unexpected that correct answer was; vice versa for incorrect answers. Besides its intuitive functioning, the Elo rating system has the advantage that it can be extended to multivariate settings [1], adapted to account for how quickly students solve questions [50], and combined with other techniques such as collaborative filtering [16,69].…”
Section: Estimating Mastery and Exercise Difficulty (mentioning)
confidence: 99%
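The update rule quoted above can be sketched in a few lines. This is a minimal illustration, not the cited papers' implementation: the logistic expectation function, the learning rate `k`, and the function names are assumptions made for the example; the multivariate and response-time extensions mentioned in the quote are omitted.

```python
import math

def expected_correct(student_rating: float, exercise_rating: float) -> float:
    """Probability of a correct answer, as a logistic function of the rating gap."""
    return 1.0 / (1.0 + math.exp(exercise_rating - student_rating))

def elo_update(student_rating: float, exercise_rating: float,
               correct: bool, k: float = 0.4) -> tuple[float, float]:
    """Move the two ratings in opposite directions, proportional to surprise.

    A correct answer that was unexpected (low predicted probability)
    produces a large shift; an expected outcome barely moves the ratings.
    """
    p = expected_correct(student_rating, exercise_rating)
    delta = k * ((1.0 if correct else 0.0) - p)
    return student_rating + delta, exercise_rating - delta
```

For instance, a correct answer on a much harder exercise (large positive rating gap in the exercise's favour) yields a larger rating increase than a correct answer on an evenly matched one, which is exactly the "proportional to how unexpected" behaviour the quote describes.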
“…The term "Explainable Recommendation" was first defined by Zhang et al [341]. As an important sub-field of AI and machine learning research, and because recommendation naturally involves humans in the loop, the recommender system community has been leading research on Explainable AI ever since, triggering a broader scope of explainability research in other AI and machine learning sub-fields [71,340], such as explainability in scientific research [181], computer vision [297], natural language processing [40,106,172,217,229], graph neural networks [265,299], databases [112,291], healthcare systems [121,228,350], online education [9,20,216,264,277], psychological studies [271], and cyber-physical systems [10,12,134,135,241].…”
Section: Overview Of Explainable Recommendation (mentioning)
confidence: 99%