Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems 2021
DOI: 10.1145/3411764.3445497
Adapting User Interfaces with Model-based Reinforcement Learning

Abstract: Adapting an interface requires taking into account both the positive and negative effects that changes may have on the user. A carelessly picked adaptation may impose high costs on the user - for example, due to surprise or relearning effort - or "trap" the process in a suboptimal design prematurely. However, effects on users are hard to predict as they depend on factors that are latent and evolve over the course of interaction. We propose a novel approach for adaptive user interfaces that yields a conservative adap…
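The abstract describes picking adaptations conservatively because their effects depend on latent, evolving user factors. The paper's actual method is not shown here; as a hypothetical illustration only (all candidate names, user types, and numbers below are invented), one simple way to be conservative under latent-user uncertainty is to score each candidate adaptation under several plausible user models and keep the worst-case net utility (predicted benefit minus relearning cost):

```python
# Hypothetical sketch (names and numbers invented): choose a conservative
# UI adaptation by scoring each candidate under several plausible latent
# user states and keeping the worst case, so a pick that badly hurts some
# users is penalised even if it helps others.

CANDIDATES = ["keep_layout", "swap_two_items", "full_reorder"]

# Assumed toy model: (predicted_benefit, relearning_cost) per latent user type.
USER_MODELS = {
    "novice": {"keep_layout": (0.0, 0.0), "swap_two_items": (0.3, 0.2), "full_reorder": (0.9, 1.2)},
    "expert": {"keep_layout": (0.0, 0.0), "swap_two_items": (0.4, 0.1), "full_reorder": (0.8, 0.5)},
}

def conservative_choice(candidates, user_models):
    """Pick the adaptation maximising worst-case (benefit - cost) over user types."""
    def worst_case_utility(a):
        return min(b - c for (b, c) in (m[a] for m in user_models.values()))
    return max(candidates, key=worst_case_utility)

print(conservative_choice(CANDIDATES, USER_MODELS))  # prints: swap_two_items
```

Under these toy numbers the aggressive `full_reorder` is rejected because its relearning cost for novices outweighs its benefit, matching the abstract's point that a carelessly picked adaptation can impose high costs.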


Cited by 52 publications (23 citation statements). References 58 publications.
“…One way of realizing this support could entail using MutationObserver [46] to track DOM changes and applying layout adaptation only after client-side rendering is complete. Also, incorporating the latest advancements in the field of adaptive user interfaces would enhance the adaptation strategy so as to consider both end-user costs and benefits [35].…”
Section: Discussion
confidence: 99%
“…DQNs have been tested in various complicated tasks and were able to outperform all previous RL algorithms [Silver et al 2016, 2017]. DQNs have also enabled breakthroughs such as "AlphaGo" [Chen 2016] and "AlphaStar" [Arulkumaran et al 2019], which have inspired recent work on AUIs in the context of linear menus [Todi et al 2021]. These advancements demonstrate the potential of RL to build intelligent agents by giving them the freedom to learn by exploring their environment and make decisions to take actions that maximise a long-term reward.…”
Section: A Contextual Framework for AUIs
confidence: 99%
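The citation statement above summarises the RL principle behind DQNs: an agent learns by exploring its environment and taking actions that maximise a long-term reward. A DQN replaces the Q table with a neural network but keeps the same temporal-difference update; as a hypothetical illustration (the environment below is a toy chain, not anything from the cited papers), the underlying tabular Q-learning update can be sketched as:

```python
import random

# Minimal tabular Q-learning on a toy 1-D chain: states 0..4, actions -1/+1,
# reward only at the right end. DQNs replace the Q table below with a neural
# network but keep the same update rule.
random.seed(0)

N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy exploration, then a temporal-difference update:
        # move Q(s,a) toward r + gamma * max_a' Q(s',a').
        a = random.choice((-1, 1)) if random.random() < EPS else max((-1, 1), key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every non-goal state.
policy = {s: max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The agent is never told the route; the discounted long-term reward propagates back through the Q values, which is the "freedom to learn by exploring" the statement refers to.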
“…Why not mix the best of both fields by feeding classical or new models with users' data abstracted by means of ML techniques [27], thus obtaining "grey models"? A representative example is a model-based reinforcement learning approach proposed by Todi et al [38], which plans a sequence of adaptation steps (instead of a one-shot adaptation) and exploits a model to assess the cost/benefit ratio.…”
Section: Opportunities for the Modelling Community
confidence: 99%
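The last citation statement highlights the two ingredients it attributes to Todi et al.: planning a sequence of adaptation steps rather than a one-shot change, and using a model to assess each step's cost/benefit ratio. As a hypothetical sketch only (the planner, step names, and model numbers below are invented, not the cited paper's algorithm), a greedy version of that idea applies the single best net-positive step at a time and stops when no remaining step is worth its cost:

```python
# Hypothetical sketch of planning a sequence of small adaptation steps
# instead of one big change: greedily apply the step whose model-predicted
# benefit most exceeds its predicted user cost, and stop as soon as no
# step has positive net value. All numbers are invented.

def plan_sequence(layout, steps, predict):
    """Greedy multi-step planner. `predict(layout, step)` returns a
    (benefit, cost) pair estimated by a (hypothetical) user model."""
    plan = []
    while True:
        scored = [(predict(layout, s)[0] - predict(layout, s)[1], s)
                  for s in steps if s not in plan]
        if not scored:
            break
        net, best = max(scored)
        if net <= 0:
            break                    # conservative stop: no step is worth its cost
        plan.append(best)
        layout = layout + [best]     # apply the step to the simulated layout
    return plan

# Toy model: promoting frequent items helps a lot, reshuffling costs more than it gains.
def toy_predict(layout, step):
    table = {"promote_frequent": (0.6, 0.2), "group_related": (0.4, 0.3), "reshuffle_all": (0.3, 0.7)}
    return table[step]

print(plan_sequence([], ["promote_frequent", "group_related", "reshuffle_all"], toy_predict))
# prints: ['promote_frequent', 'group_related']
```

The stopping rule is what makes this a sequence of vetted steps rather than a one-shot adaptation: the disruptive `reshuffle_all` is never applied because its predicted cost exceeds its benefit.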