Notifications are a core feature of mobile phones. They inform users about a variety of events. Users may take immediate action or ignore them, depending on the importance of the notification and their current context. The nature of notifications is manifold: applications use them both sparingly and frequently. In this paper we present the first large-scale analysis of mobile notifications with a focus on users' subjective perceptions. We derive a holistic picture of notifications on mobile phones by collecting close to 200 million notifications from more than 40,000 users. Using a data-driven approach, we break down what users like and dislike about notifications. Our results reveal differences in the importance of notifications and show that users particularly value notifications from messaging apps, as well as notifications that include information about people and events. Based on these results, we derive a number of findings about the nature of notifications and guidelines for using them effectively.
Spearcons have broadened the taxonomy of nonspeech auditory cues. Users can benefit from the application of spearcons in real devices.
Boredom is a common human emotion which may lead to an active search for stimulation. People often turn to their mobile phones to seek that stimulation. In this paper, we tackle the challenge of automatically inferring boredom from mobile phone usage. In a two-week in-the-wild study, we collected over 40,000,000 usage logs and 4,398 boredom self-reports from 54 mobile phone users. We show that a user-independent machine-learning model of boredom, leveraging features related to recency of communication, usage intensity, time of day, and demographics, can infer boredom with an accuracy (AUCROC) of up to 82.9%. Results from a second field study with 16 participants suggest that people are more likely to engage with recommended content when they are bored, as inferred by our boredom-detection model. These findings enable boredom-triggered proactive recommender systems that attune to their users' level of attention and need for stimulation.
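The abstract reports model quality as AUCROC, i.e. the probability that the model scores a randomly chosen "bored" report higher than a randomly chosen "not bored" one. As a minimal, self-contained sketch (the labels and scores below are invented for illustration and are not the study's data), the metric can be computed directly from its Mann-Whitney formulation:

```python
def auc_roc(labels, scores):
    """AUC-ROC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive is ranked higher,
    with ties counting half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = self-reported bored, 0 = not bored;
# scores are a hypothetical classifier's boredom probabilities.
print(auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

An AUCROC of 0.829, as reported, therefore means the model ranks a bored moment above a non-bored one about 83% of the time, independent of any decision threshold.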
Current digital systems are largely blind to users' cognitive states. Systems that adapt to users' states show great potential for augmenting cognition and for creating novel user experiences. However, most approaches for sensing cognitive states, and cognitive load specifically, involve obtrusive technologies, such as physiological sensors attached to users' bodies. This paper presents an unobtrusive indicator of users' cognitive load based on thermal imaging that is applicable in real-world settings. We use a commercial thermal camera to monitor a person's forehead and nose temperature changes to estimate their cognitive load. To assess the effect of different levels of cognitive load on facial temperature, we conducted a user study with 12 participants. The study showed that different levels of the Stroop test and the complexity of reading texts affect facial temperature patterns, thereby giving a measure of cognitive load. To validate the feasibility of real-time assessment of cognitive load, we conducted a second study with 24 participants in which we analyzed the temporal latency of temperature changes. Our system detected temperature changes with an average latency of 0.7 seconds after users were exposed to a stimulus, outperforming the latency of related work that used other thermal imaging techniques. We provide empirical evidence showing how to unobtrusively detect changes in cognitive load in real time. Our exploration of exposing users to different content types gives rise to thermal-based activity tracking, which facilitates new applications in the field of cognition-aware computing. CCS Concepts: • Human-centered computing → Human computer interaction (HCI); • Computing methodologies → Cognitive science; • Hardware → Displays and imagers;
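The latency measurement described above boils down to finding the first post-stimulus sample that deviates significantly from a pre-stimulus baseline. The sketch below is an assumed, simplified version of such a detector (the paper does not publish its algorithm): it uses a baseline mean and standard deviation with a deviation threshold of `k` standard deviations, a floor on the standard deviation to handle flat baselines, and synthetic temperature samples.

```python
import statistics

def detection_latency(samples, onset, fs, k=3.0, sd_floor=0.05):
    """Seconds between stimulus onset and the first sample deviating
    from the pre-onset baseline by more than k standard deviations.

    samples: temperature readings (°C), onset: index of the stimulus,
    fs: sampling rate in Hz. Returns None if no change is detected.
    """
    baseline = samples[:onset]
    mu = statistics.mean(baseline)
    sd = max(statistics.pstdev(baseline), sd_floor)  # avoid sd == 0
    for i in range(onset, len(samples)):
        if abs(samples[i] - mu) > k * sd:
            return (i - onset) / fs
    return None

# Synthetic nose-temperature trace sampled at 10 Hz: flat baseline,
# stimulus at index 20, a 1 °C drop starting at index 27.
trace = [34.0] * 27 + [33.0] * 5
print(detection_latency(trace, onset=20, fs=10))  # → 0.7
```

Real skin-temperature signals are noisy and drift slowly, so a deployed detector would typically smooth the trace and use an adaptive baseline; the point here is only the structure of the latency computation.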
In this paper, we demonstrate the existence of a bidirectional causal relationship between smartphone application use and user emotions. In a two-week-long in-the-wild study with 30 participants, we captured 502,851 instances of smartphone application use in tandem with corresponding emotional data from facial expressions. Our analysis shows that while in most cases application use drives user emotions, multiple application categories exist for which the causal effect is in the opposite direction. Our findings shed light on the relationship between smartphone use and emotional states. We furthermore discuss the opportunities for research and practice that arise from our findings and their potential to support emotional well-being.
Background Hand hygiene is a crucial and cost-effective method to prevent health care–associated infections, and in 2009, the World Health Organization (WHO) issued guidelines to encourage and standardize hand hygiene procedures. However, a common challenge in health care settings is low adherence, leading to low handwashing quality. Recent advances in machine learning and wearable sensing have made it possible to accurately measure handwashing quality for the purposes of training, feedback, or accreditation. Objective We measured the accuracy of a sensor armband (Myo armband) in detecting the steps and duration of the WHO procedures for handwashing and handrubbing. Methods We recruited 20 participants (10 females; mean age 26.5 years, SD 3.3). In a semistructured environment, we collected armband data (acceleration, gyroscope, orientation, and surface electromyography data) and video data from each participant during 15 handrub and 15 handwash sessions. We evaluated the detection accuracy for different armband placements, sensor configurations, user-dependent vs user-independent models, and the use of bootstrapping. Results Using a single armband, the accuracy was 96% (SD 0.01) for the user-dependent model and 82% (SD 0.08) for the user-independent model. This increased when using two armbands to 97% (SD 0.01) and 91% (SD 0.04), respectively. Performance increased when the armband was placed on the forearm (user dependent: 97%, SD 0.01; and user independent: 91%, SD 0.04) and decreased when placed on the upper arm (user dependent: 96%, SD 0.01; and user independent: 80%, SD 0.06). In terms of bootstrapping, user-dependent models can achieve more than 80% accuracy after six training sessions and 90% with 16 sessions. Finally, we found that the combination of accelerometer and gyroscope minimizes power consumption and cost while maximizing performance.
Conclusions A sensor armband can be used to measure hand hygiene quality relatively accurately, in terms of both handwashing and handrubbing. The performance is acceptable with a single armband worn on the upper arm but can improve substantially by placing the armband on the forearm or by using two armbands.
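The user-independent results above are conventionally obtained with leave-one-subject-out cross-validation: train on all participants except one, test on the held-out participant, and average. As an illustrative sketch (the classifier, feature vectors, and step labels below are placeholders, not the study's actual pipeline), a nearest-centroid model under that protocol looks like this:

```python
from statistics import mean

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [mean(col) for col in zip(*rows)]

def predict(x, centroids):
    """Label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(x, centroids[lbl])))

def loso_accuracy(data):
    """Leave-one-subject-out accuracy.
    data: {user: [(feature_vector, step_label), ...]}"""
    correct = total = 0
    for held_out in data:
        train = [s for u in data if u != held_out for s in data[u]]
        by_label = {}
        for x, y in train:
            by_label.setdefault(y, []).append(x)
        cents = {y: centroid(xs) for y, xs in by_label.items()}
        for x, y in data[held_out]:
            correct += predict(x, cents) == y
            total += 1
    return correct / total

# Toy data: two users, two hypothetical WHO steps, 2-D motion features.
data = {
    "u1": [([0.0, 0.0], "palm_to_palm"), ([1.0, 1.0], "fingers_interlaced")],
    "u2": [([0.1, 0.0], "palm_to_palm"), ([0.9, 1.0], "fingers_interlaced")],
}
print(loso_accuracy(data))  # → 1.0
```

The gap the paper reports between user-dependent (96–97%) and user-independent (80–91%) accuracy is exactly what this protocol exposes: the held-out user's motion patterns were never seen during training.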
Reviewing lifelogging data has been proposed as a useful tool to support human memory. However, the sheer volume of data (particularly images) that can be captured by modern lifelogging systems makes the selection and presentation of material for review a challenging task. We present the results of a five-week user study involving 16 participants and over 69,000 images that explores both individual requirements for video summaries and the differences in cognitive load, user experience, memory experience, and recall experience between review using video summarisations and non-summary review techniques. Our results can be used to inform the design of future lifelogging data summarisation systems for memory augmentation.