Background
The needs of the growing population of complex patients with multiple chronic conditions call for a different approach to care. Clinical teams need to acknowledge, respect, and support the work that patients do and the capacity they mobilize to enact this work, and to adapt and self-manage. Tools that enable this approach to care are needed.

Methods
Using user-centered design principles, we set out to create a discussion aid for use by patients, clinicians, and other health professionals during clinical encounters. We observed clinical encounters, visited patient homes, and met with patient support groups. We then developed and tested prototypes in routine clinical practice, and refined a final prototype with extensive stakeholder feedback.

Results
This process resulted in the ICAN Discussion Aid, a tool completed by the patient and reviewed during the consultation, in which patients classify the domains that contribute to capacity as sources of burden or satisfaction; clinical demands are likewise classified as sources of help or burden. The clinical review facilitated by ICAN generates hypotheses about why some treatment plans may be problematic and may not be enacted in the patient's situation.

Conclusion
We successfully created a discussion aid that elucidates and shares insights about the capacity patients have to enact the treatment plan, and hypotheses as to why the plan may or may not be enacted. Next steps involve evaluating the impact of the ICAN Discussion Aid on clinical encounters with a variety of health professionals and the impact of ICAN-informed treatment plans on patient-important outcomes.
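The two-sided classification scheme described above (capacity domains rated as burden or satisfaction, clinical demands rated as help or burden) can be sketched as a simple data model. This is a hypothetical illustration, assuming made-up domain and demand names; it is not part of the published instrument.

```python
from dataclasses import dataclass, field

# Allowed ratings, following the two classifications described in the abstract.
RATINGS_CAPACITY = {"burden", "satisfaction"}
RATINGS_DEMANDS = {"help", "burden"}

@dataclass
class ICANReview:
    """Hypothetical record of one ICAN-style review (illustrative only)."""
    capacity: dict = field(default_factory=dict)  # life domains -> rating
    demands: dict = field(default_factory=dict)   # clinical demands -> rating

    def rate_capacity(self, domain: str, rating: str) -> None:
        if rating not in RATINGS_CAPACITY:
            raise ValueError(f"capacity rating must be one of {RATINGS_CAPACITY}")
        self.capacity[domain] = rating

    def rate_demand(self, demand: str, rating: str) -> None:
        if rating not in RATINGS_DEMANDS:
            raise ValueError(f"demand rating must be one of {RATINGS_DEMANDS}")
        self.demands[demand] = rating

    def discussion_flags(self) -> list:
        # Items rated as burdens are candidate explanations for why a
        # treatment plan may not be enacted in the patient's situation.
        return [k for k, v in {**self.capacity, **self.demands}.items()
                if v == "burden"]

# Illustrative usage with made-up entries:
review = ICANReview()
review.rate_capacity("work", "satisfaction")
review.rate_capacity("finances", "burden")
review.rate_demand("medication schedule", "burden")
print(review.discussion_flags())  # → ['finances', 'medication schedule']
```

The point of the sketch is the asymmetry the abstract describes: capacity items and demand items use different rating vocabularies, but burdens from either side surface in the same clinical discussion.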
While many sensors can monitor physical activity, no device can unobtrusively measure eating at the same level of detail. Yet tracking and reacting to food consumption is key to managing many chronic diseases such as obesity and diabetes. Prior eating-recognition work has primarily used a single sensor at a time in a constrained environment, but sensors may fail and each may pick up different types of eating. We present a multi-modality study of eating recognition that combines head and wrist motion (Google Glass, smartwatches on each wrist) with audio (a custom earbud microphone). We collect 72 hours of data from 6 participants wearing all sensors while eating an unrestricted set of foods, and annotate video recordings to obtain ground truth. Using our noise-cancellation method, audio sensing alone achieved 92% precision and 89% recall in finding meals, while motion sensing was needed to find individual intakes.
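The meal-level precision and recall figures above follow the standard information-retrieval definitions. A minimal sketch, assuming made-up detection counts chosen only to land near the reported numbers (they are not the study's actual data):

```python
def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple:
    """Standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Hypothetical counts: 24 detected meals match ground truth, 2 detections
# are spurious, and 3 annotated meals were missed.
p, r = precision_recall(true_positives=24, false_positives=2, false_negatives=3)
print(f"precision={p:.2f} recall={r:.2f}")  # → precision=0.92 recall=0.89
```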
The vast majority of epileptic seizure (ictal) detection studies on electroencephalogram (EEG) data have been retrospective. Therefore, even though some include many patients and extensive evaluation benchmarking, they all share a heavy reliance on labelled data, which is perhaps the most significant obstacle to the utility of seizure detection systems in clinical settings. In this paper, we present a prospective automatic ictal detection and labelling system that performs at the level of a human expert (arbiter) and reduces labelling time by more than an order of magnitude. Accurate seizure detection and labelling remain time-consuming and cumbersome tasks in epilepsy monitoring units (EMUs) and epilepsy centres, particularly in countries with limited facilities and insufficiently trained human resources. This work implements a convolutional long short-term memory (ConvLSTM) network that is pre-trained and tested on the Temple University Hospital (TUH) EEG corpus. It is then deployed prospectively at the Comprehensive Epilepsy Service at the Royal Prince Alfred Hospital (RPAH) in Sydney, Australia, on nearly 14,590 hours of EEG data recorded across nine years. Our system prospectively labelled RPAH epilepsy ward data, which were subsequently reviewed by two neurologists and three certified EEG specialists. Our clinical results show that the proposed method achieves a 92.19% detection rate with an average review time of 7.62 mins per 24 hrs of recorded 18-channel EEG. A human expert usually requires about 2 hrs of reviewing and labelling per 24 hrs of recorded EEG, often assisted by a wide range of auxiliary data such as patient, carer, or nurse inputs. In this prospective analysis, we consider the human's role to be that of an expert arbiter who confirms or rejects each alarm raised by our system. We achieved an average of 56 false alarms per 24 hrs.
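The claimed order-of-magnitude reduction in labelling time follows directly from the figures in the abstract. A back-of-the-envelope sketch (the input figures are from the abstract; the per-alarm estimate is an illustrative derivation, not a reported result):

```python
# Figures reported in the abstract.
manual_review_min_per_24h = 2 * 60    # ~2 hrs of expert review per 24 hrs of EEG
assisted_review_min_per_24h = 7.62    # average arbiter time with the ConvLSTM system
false_alarms_per_24h = 56             # alarms the arbiter must confirm or reject

# Review-time reduction: well over an order of magnitude.
speedup = manual_review_min_per_24h / assisted_review_min_per_24h
print(f"speedup ≈ {speedup:.1f}x")  # → speedup ≈ 15.7x

# Derived estimate: if the 7.62 minutes were spread evenly over the alarms,
# the arbiter would spend only a few seconds per confirm/reject decision.
seconds_per_alarm = assisted_review_min_per_24h * 60 / false_alarms_per_24h
print(f"≈ {seconds_per_alarm:.0f} s per alarm")  # → ≈ 8 s per alarm
```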