Ubiquitous computing systems tend to be complex, seamless, data-driven and interactive. Reacting both to context and to users' implicit actions arising from lived experience, they cast all traces of human life as potential 'data'. To augment users' endeavours, such systems are necessarily embedded below the line of human attention, drawing upon new and highly sensitive types of data. This raises the question: where is the moment of user consent, and how can this moment be truly informed? We would argue that it is time to revisit our design principles with respect to consent and redress the balance of agency towards the user. We draw upon a series of multidisciplinary interviews with experts to (a) reframe consent for ubicomp, and (b) offer three indicative principles, supportive of consent, for designers to 'balance' against system functionality. We hope that this will afford a new prism through which designers might make value judgements.
Notions like 'Big Data' and the 'Internet of Things' turn upon anticipated harvesting of personal data through ubiquitous computing and networked sensing systems. It is largely presumed that understandings of people's everyday interactions will be relatively easy to 'read off' such data and that this, in turn, poses a privacy threat. An ethnographic study of how people account for sensed data to third parties uncovers serious challenges to such ideas. The study reveals that the legibility of sensor data turns upon various orders of situated reasoning involved in articulating the data and making it accountable. Articulation work is indispensable to personal data sharing and raises real requirements for networked sensing systems premised on the harvesting of personal data.
Chatbots are increasingly becoming important gateways to digital services and information—taken up within domains such as customer service, health, education, and work support. However, there is only limited knowledge concerning the impact of chatbots at the individual, group, and societal level. Furthermore, a number of challenges remain to be resolved before the potential of chatbots can be fully realized. In response, chatbots have emerged as a substantial research area in recent years. To help advance knowledge in this emerging research area, we propose a research agenda in the form of future directions and challenges to be addressed by chatbot research. This proposal consolidates years of discussions at the CONVERSATIONS workshop series on chatbot research. Following a deliberative research analysis process among the workshop participants, we explore future directions within six topics of interest: (a) users and implications, (b) user experience and design, (c) frameworks and platforms, (d) chatbots for collaboration, (e) democratizing chatbots, and (f) ethics and privacy. For each of these topics, we provide a brief overview of the state of the art, discuss key research challenges, and suggest promising directions for future research. The six topics are detailed with a 5-year perspective in mind and are to be considered items of an interdisciplinary research agenda produced collaboratively by avid researchers in the field.
Terms and conditions are central to acquiring user consent by service providers. Such documents are frequently highly complex and unreadable, casting doubt on the validity of so-called 'informed consent'. While readability and web accessibility have long been major themes in HCI, the core principles have yet to be applied beyond webpage content and are absent from the underpinning terms and conditions. Our concern is that accessible web pages will encourage consent, masking the complexities of the terms of usage. Using the SMOG readability formula and UK energy services as a case study, we observed that a series of supplier terms and conditions were far beyond what a functionally literate adult could be expected to understand. We also present a browser-based plug-in which compares SMOG readability scores to those of popular books. The intention is to use this plug-in to assist in surfacing the hidden complexities underpinning online consent.
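The SMOG formula mentioned above is a standard readability measure (McLaughlin, 1969) that estimates the years of education needed to comprehend a text from its density of polysyllabic words. A minimal sketch of how such a score could be computed is shown below; the syllable counter here is a crude vowel-group heuristic of our own (a production tool would use a pronunciation dictionary), and the function names are illustrative, not those of the plug-in described in the abstract.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels as syllables.
    # A real implementation would consult a pronunciation dictionary.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def smog_grade(text: str) -> float:
    # SMOG formula (McLaughlin, 1969):
    #   grade = 1.0430 * sqrt(polysyllables * (30 / sentences)) + 3.1291
    # where "polysyllables" counts words of 3+ syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return 1.0430 * math.sqrt(polysyllables * (30 / len(sentences))) + 3.1291
```

With no polysyllabic words the score bottoms out at the formula's constant term (about 3.13), while dense legal prose with many long words pushes the estimated grade level well into the teens, which is the kind of gap the case study surfaces.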