Abstract. Model-Driven Engineering (MDE) has been extensively used for generating User Interfaces (UIs) from models. As long as these models are kept alive at runtime, the UIs are capable of adapting to variations in the context of use. This paper investigates a potentially powerful side effect: the possibility of enriching UIs with explanations generated directly from these models. The paper first describes a software infrastructure that supports this generation of explanations. It then reports on a user study that evaluates the added value of such model-based self-explanations.

Keywords: Self-Explanatory User Interfaces, Model-Driven Engineering, Models at runtime, Self-Explanation.
Introduction

Many works ([9, 10, 14]) have reported on the benefits of supporting users through explanations in interactive systems. These explanations address specific questions that users ask about the User Interface (UI): for instance, how a task can be accomplished, why a feature is not enabled, or where an option is located. Classical approaches [7], which are based on predefined information such as static documentation, FAQs, and guides, specify this information at design time. Their scope is therefore limited, because users may have questions about the UI that are not covered by these kinds of support. Moreover, producing this static documentation is not only a time-consuming task but also requires manual updates whenever the program specification changes. To overcome this limitation, some recent research [4] has proposed Model-Driven Engineering (MDE) as a means of supporting users at runtime. Model-driven UIs use the models created at design time as their knowledge base at runtime, exploiting these models and the relationships between them to find answers to users' questions. UIs with support facilities based on their own models are also known as Self-Explanatory UIs. Their main advantages are that answers are generated at runtime and that they evolve automatically with the program specification.
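To make the idea concrete, the following is a minimal sketch (not the paper's actual infrastructure; all names and structures are hypothetical) of how a model kept alive at runtime can answer a question such as "why is this feature not enabled?": the task model records each task's preconditions, and the explanation is derived by checking them against the current state.

```python
# Hypothetical sketch: a task model queried at runtime to explain
# why a given task is currently disabled.
from dataclasses import dataclass, field


@dataclass
class Task:
    """One task in the (hypothetical) runtime task model."""
    name: str
    # Each precondition pairs a human-readable reason with a predicate
    # evaluated against the current runtime state.
    preconditions: list = field(default_factory=list)


def explain_enabling(task, state):
    """Generate an explanation from the model instead of static docs."""
    unmet = [reason for reason, holds in task.preconditions if not holds(state)]
    if not unmet:
        return f"'{task.name}' is enabled."
    return f"'{task.name}' is disabled because: " + "; ".join(unmet)


# Example: a "Checkout" task requiring a non-empty cart and a signed-in user.
checkout = Task("Checkout", preconditions=[
    ("the cart is empty", lambda s: len(s["cart"]) > 0),
    ("you are not signed in", lambda s: s["signed_in"]),
])

state = {"cart": [], "signed_in": False}
print(explain_enabling(checkout, state))
# -> 'Checkout' is disabled because: the cart is empty; you are not signed in
```

Because the explanation is computed from the same model that drives the UI, it stays consistent with the specification: adding or changing a precondition in the model automatically changes the answer, with no documentation to update by hand.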