Two problems may arise when an intelligent (recommender) system elicits users' preferences. First, there may be a mismatch between the quantitative preference representations in most preference models and the users' mental preference models. Giving exact numbers, e.g., "I like 30 days of vacation 2.5 times better than 28 days", is difficult for people. Second, the elicitation process can greatly influence the acquired model (e.g., people may prefer different options depending on whether a choice is framed as a loss or a gain). We explored these issues in three studies. In the first experiment we presented users with different preference elicitation methods and found that cognitively less demanding methods were perceived as requiring little effort and were well liked. However, for methods enabling users to be more expressive, the perceived effort was not an indicator of how much the methods were liked. We thus hypothesized that users are willing to spend more effort if the feedback mechanism enables them to be more expressive. We examined this hypothesis in two follow-up studies. In the second experiment, we explored the trade-off between giving detailed preference feedback and effort. We found that familiarity with and opinion about an item are important factors mediating this trade-off. Additionally, affective feedback was preferred over a finer-grained one-dimensional rating scale for giving additional detail. In the third study, we explored the influence of the interface on the elicitation process in a participatory set-up. People considered it helpful to be able to explore the link between their interests, preferences and the desirability of outcomes. We also confirmed that people do not want to spend additional effort in cases where it seems unnecessary. Based on these findings, we propose four design guidelines to guide the design of preference elicitation interfaces from a user perspective.
Explicitly considering human values in the design process of socio-technical systems has become a responsibility of designers. It is, however, challenging to design for values because (1) relevant values must be identified and communicated between all stakeholders and designers and (2) stakeholders' values differ, so trade-offs must be made. We focus on the first aspect, which requires elicitation of stakeholders' situated values, i.e. values relevant to a specific real-life context. Available techniques to elicit knowledge and requirements from stakeholders lack both the context and the means for reflection needed to elicit situated values, as well as an explicit concept of value. In this paper we present our design of a tool to support active elicitation of stakeholders' values and communication between stakeholders and designers. We conducted an exploratory user study in which we compared the suitability of methods used in the social sciences for (1) eliciting situated values, (2) supporting people's expressions of values and (3) being implemented in a value elicitation tool. Based on the outcomes we propose a design for a value elicitation tool that consists of a mobile application used by stakeholders for data collection and in-situ self-reflection, and a website used collaboratively by designers and stakeholders to analyse and communicate values. The discussion focuses on contributions to value sensitive design.
To test whether synthetic emotions expressed by a virtual human elicit positive or negative emotions in a human conversation partner and affect satisfaction with the conversation, an experiment was conducted in which the emotions of a virtual human were manipulated during both the listening and speaking phases of the dialogue. Twenty-four participants were recruited and asked to have a real conversation with the virtual human on six different topics. For each topic the virtual human's emotions in the listening and speaking phases differed, covering positive, neutral and negative emotions. The results support our hypotheses that (1) negative compared to positive synthetic emotions expressed by a virtual human can elicit a more negative emotional state in a human conversation partner, (2) synthetic emotions expressed in the speaking phase have more impact on a human conversation partner than emotions expressed in the listening phase, (3) humans with less speaking confidence also experience a conversation with a virtual human as less positive, and (4) random positive or negative emotions of a virtual human have a negative effect on satisfaction with the conversation. These findings have practical implications for the treatment of social anxiety, as they allow therapists to control the anxiety-evoking stimuli, i.e., the expressed emotions of a virtual human in a virtual reality exposure environment simulating a conversation. In addition, these findings may be useful to other virtual applications that include conversations with a virtual human.
Abstract. Surveillance systems in shopping malls or supermarkets are usually used for detecting abnormal behavior. We used a distributed video camera system to design digital shopping assistants that assess the behavior of customers while shopping, detect when they need assistance, and offer support when there is a selling opportunity. In this paper we propose a system for analyzing human behavior patterns related to product interaction, such as browsing through a set of products, examining and picking up products, trying them on, interacting with the shopping cart, and looking for support by waving one hand. We used the Kinect sensor to detect the silhouettes of people and extracted discriminative features for basic action detection. Next, we analyzed different classification methods, both statistical ones and spatio-temporal ones that capture relations between frames, features, and basic actions. By employing feature-level fusion of appearance and movement information we obtained an accuracy of 80% for the six basic actions mentioned.
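The feature-level fusion described in this abstract concatenates appearance and movement descriptors into a single vector before classification. A minimal sketch, assuming illustrative feature names and dimensions (the paper's actual features and classifiers are not reproduced here; the nearest-centroid classifier below is a simple stand-in for the statistical methods mentioned):

```python
# Hypothetical sketch of feature-level fusion for basic-action classification.
# Appearance features (e.g., silhouette shape) and movement features
# (e.g., frame-to-frame displacement) are concatenated before classifying.

ACTIONS = ["browse", "examine", "pick", "try_on", "cart", "wave"]

def fuse(appearance, movement):
    """Feature-level fusion: concatenate the two descriptors."""
    return list(appearance) + list(movement)

class NearestCentroid:
    """A minimal stand-in for the statistical classifiers in the paper."""
    def fit(self, X, y):
        sums, counts = {}, {}
        for x, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {
            label: [v / counts[label] for v in acc]
            for label, acc in sums.items()
        }
        return self

    def predict(self, x):
        def sqdist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centroids, key=lambda lb: sqdist(self.centroids[lb]))

# Toy training data: two fused feature vectors per action class.
X = [fuse([0.9, 0.1], [0.0, 0.1]), fuse([0.8, 0.2], [0.1, 0.0]),
     fuse([0.1, 0.9], [0.9, 0.8]), fuse([0.2, 0.8], [0.8, 0.9])]
y = ["browse", "browse", "wave", "wave"]

clf = NearestCentroid().fit(X, y)
print(clf.predict(fuse([0.85, 0.15], [0.05, 0.05])))  # prints "browse"
```

The design point is that fusion happens at the feature level (one combined vector per sample) rather than at the decision level (combining the outputs of separate per-modality classifiers).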
Abstract. We developed a learning-based question classifier for question answering systems. A question classifier tries to predict the entity type of the possible answers to a given question written in natural language. We extracted several lexical, syntactic and semantic features and examined their usefulness for question classification. Furthermore, we developed a weighting approach to combine features based on their importance. Our result on the well-known TREC questions dataset is competitive with the state of the art on this task.
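The weighted feature combination described here can be sketched as features casting weighted votes for answer-entity classes. All feature names, weights, vote tables, and classes below are illustrative assumptions, not the paper's actual values:

```python
# Hypothetical sketch: combining lexical features with per-feature weights
# for question classification (answer-entity-type prediction).

CLASSES = ["PERSON", "LOCATION", "NUMBER"]

# Each feature votes for a class; weights reflect assumed importance,
# standing in for the learned weighting approach in the paper.
FEATURE_WEIGHTS = {"wh_word": 2.0, "head_word": 1.0}

WH_VOTES = {"who": "PERSON", "where": "LOCATION", "how_many": "NUMBER"}
HEAD_VOTES = {"author": "PERSON", "city": "LOCATION", "population": "NUMBER"}

def extract_features(question):
    """Extract two toy lexical features: the wh-word and a crude head word."""
    tokens = question.lower().rstrip("?").split()
    wh = "how_many" if tokens[:2] == ["how", "many"] else tokens[0]
    head = tokens[-1]  # last token as head word: a heuristic for this sketch
    return {"wh_word": wh, "head_word": head}

def classify(question):
    feats = extract_features(question)
    scores = {c: 0.0 for c in CLASSES}
    for votes, name in ((WH_VOTES, "wh_word"), (HEAD_VOTES, "head_word")):
        cls = votes.get(feats[name])
        if cls:
            scores[cls] += FEATURE_WEIGHTS[name]
    return max(scores, key=scores.get)

print(classify("Who is the author?"))          # prints "PERSON"
print(classify("How many people live there?"))  # prints "NUMBER"
```

In a full system the weights would be learned from labeled TREC questions rather than set by hand, and the feature set would include the syntactic and semantic features the abstract mentions.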