“…In the limited data on retention methods for remote and randomized clinical trials, one meta-analysis of digital health studies with large remote samples found that providing a monetary incentive resulted in better overall retention than providing no monetary incentive, with retention rates as low as 10% when no incentive was offered ( 7 ). This meta-analysis confirms previous research into participant preferences and suggestions for encouraging long-term participation ( 22 , 23 ). A recent study of monetary incentives in the Verily Mood Baseline Study, a 12-week remote passive-sensing and daily-survey study, found that large monetary incentives resulted in 83% retention over the course of 12 weeks ( 16 ).…”
Section: Introduction (supporting)
confidence: 87%
“…Prior to being randomized to treatment conditions, participants were randomized to one of two incentive conditions: (1) a high monetary incentive (HMI; $125 USD), or (2) a combined low monetary and alternative incentive (LMAI; $75 USD). The two monetary incentive values were based on a meta-analysis of various incentives ( 7 ) and on user-centered design research asking a representative sample of 20 US-dwelling adults with depression which type of incentive was viewed as fair ( 35 ). Although participants did interact with study therapists as part of the treatment protocol, interaction with the study team was limited to informed consent, technical assistance, reminders, and thanks for participation.…”
Section: Methods (mentioning)
confidence: 99%
“…The LMAI engagement condition included additional in-app messages of encouragement for completing daily assessments, facts about depression, humorous GIFs after completing surveys, and prompts to reflect on responses to the daily activity surveys and how those compared with mood. The participant engagement method used in the LMAI condition was co-designed with representative participants at the University of Washington ALACRITY Center (UWAC), using Human-Centered Design and User Experience Research methods. This process employed A/B testing and interactive design with 20 participants with depression to ensure the strategies were useful, meaningful, understandable, and engaging, and to determine the lowest incentive amount that was still viewed as fair compensation ( 23 ).…”
Section: Methods (mentioning)
confidence: 99%
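As a rough illustration only: the snippet above describes in-app engagement messages delivered after survey completion in the LMAI arm. The Python sketch below shows one way such a post-survey message picker could work; the message text, the reflection-prompt wording, and all function names are hypothetical and are not taken from the study's actual implementation.

```python
import random

# Hypothetical message pools, loosely mirroring the LMAI engagement features
# described above (encouragement, depression facts, GIFs, reflection prompts).
ENCOURAGEMENT = ["Nice work finishing today's check-in!", "Thanks for sticking with it!"]
DEPRESSION_FACTS = ["Depression is one of the most common mental health conditions worldwide."]
GIF_URLS = ["https://example.com/celebration.gif"]  # placeholder URL

def reflection_prompt(activity_score: int, mood_score: int) -> str:
    """Build a simple prompt comparing today's activity and mood responses."""
    direction = "higher" if activity_score > mood_score else "lower or similar"
    return (f"You rated your activity {activity_score}/10 and your mood {mood_score}/10. "
            f"Your activity was {direction} compared with your mood today; what do you make of that?")

def pick_engagement_message(activity_score: int, mood_score: int) -> str:
    """Return one post-survey engagement message, chosen at random."""
    choice = random.choice(["encourage", "fact", "gif", "reflect"])
    if choice == "encourage":
        return random.choice(ENCOURAGEMENT)
    if choice == "fact":
        return random.choice(DEPRESSION_FACTS)
    if choice == "gif":
        return f"[GIF] {random.choice(GIF_URLS)}"
    return reflection_prompt(activity_score, mood_score)

if __name__ == "__main__":
    print(pick_engagement_message(activity_score=7, mood_score=4))
```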
“…Participants were paid every 4 weeks, resulting in a total of 3 payments, which were distributed in the form of Amazon gift codes by email. See the supplemental materials for details on the user-centered design strategies and findings, as well as examples of the feedback and engagement strategies ( 35 ).…”
Numerous studies have found that long-term retention is very low in remote clinical studies (>4 weeks), and to date there is limited information on the best methods to ensure retention. The ability to retain participants through key assessment periods is critical to all clinical research, yet little is known about which methods best encourage participant retention. To study incentive-based retention methods, we randomized 215 US adults (18+ years) who agreed to participate in a sequential, multiple assignment randomized trial to either a high monetary incentive (HMI, $125 USD) or a combined low monetary incentive ($75 USD) plus alternative incentive (LMAI). Participants were asked to complete daily and weekly surveys for a total of 12 weeks, which included a tailoring assessment around week 5 to determine who should be stepped up and re-randomized to one of two augmentation conditions. Key assessment points were weeks 5 and 12. There was no difference in participant retention at week 5 (tailoring event), with approximately 75% of the sample completing the week-5 survey. By week 10, the HMI condition retained approximately 70% of the sample, compared with 60% of the LMAI group. By week 12, all differences were attenuated. Differences in completed measures were not significant between groups. At the end of the study, participants were asked about their impressions of the incentive condition to which they were assigned and for suggestions for improving engagement. There were no significant differences between conditions on ratings of fairness of compensation, study satisfaction, or study burden, but study burden, intrinsic motivation, and incentive fairness did influence participation. Men were also more likely to drop out of the study than women. Qualitative analysis from both groups surfaced the following engagement suggestions: a desire for feedback on survey responses and an interest in automated sharing of individual survey responses with study therapists to assist in treatment. Participants in the LMAI arm indicated that the alternative incentives were engaging and motivating. In sum, while we were able to increase engagement above what is typical for such studies, more research is needed to meaningfully improve long-term retention in remote trials.
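As a rough illustration of the design summarized in the abstract above, the Python sketch below randomizes a simulated sample 1:1 to the HMI and LMAI incentive arms and tallies survey completion at the key assessment weeks. The retention probabilities are assumptions loosely based on the approximate percentages reported above (about 75% at week 5, roughly 70% vs. 60% at week 10, converging by week 12); this is not the study's analysis code or data.

```python
import random

# Assumed per-arm completion probabilities at the key assessment weeks,
# loosely anchored to the approximate figures in the abstract (not study data).
RETENTION_BY_WEEK = {
    "HMI":  {5: 0.75, 10: 0.70, 12: 0.65},
    "LMAI": {5: 0.75, 10: 0.60, 12: 0.62},
}

def simulate_trial(n: int = 215, seed: int = 0) -> dict:
    """Randomize n participants 1:1 to HMI vs. LMAI before treatment and count
    survey completion at weeks 5, 10, and 12, mimicking the design above."""
    rng = random.Random(seed)
    completed = {arm: {week: 0 for week in (5, 10, 12)} for arm in RETENTION_BY_WEEK}
    enrolled = {arm: 0 for arm in RETENTION_BY_WEEK}
    for _ in range(n):
        arm = rng.choice(["HMI", "LMAI"])  # incentive randomization prior to treatment
        enrolled[arm] += 1
        for week, p in RETENTION_BY_WEEK[arm].items():
            if rng.random() < p:
                completed[arm][week] += 1
    return {arm: {week: completed[arm][week] / enrolled[arm] for week in completed[arm]}
            for arm in completed}

if __name__ == "__main__":
    for arm, rates in simulate_trial().items():
        print(arm, {week: f"{rate:.0%}" for week, rate in rates.items()})
```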
“…When innovations are marketed toward older people, they often reflect a pathological view of aging and are limited to support for emergency monitoring (e.g., fall detection). Our call to action is for AI developers to take a user-centered perspective, including diverse older adults with a range of health-related quality of life during design and evaluation ( 44 ), to establish such technology's viability and fitness for purpose in the target population.…”
Artificial intelligence (AI) in healthcare aims to learn patterns in large multimodal datasets within and across individuals. These patterns may either improve understanding of current clinical status or predict a future outcome. AI holds the potential to revolutionize geriatric mental health care and research by supporting diagnosis, treatment, and clinical decision-making. However, much of this momentum is driven by data scientists, computer scientists, and engineers, and it runs the risk of being disconnected from pragmatic issues in clinical practice. This interprofessional perspective bridges the experiences of clinical scientists and data scientists. We provide a brief overview of AI, focusing on possible applications and challenges of AI-based approaches for research and clinical care in geriatric mental health. We suggest that future AI applications in geriatric mental health attend to the pragmatic considerations of clinical practice and the methodological differences between data science and clinical science, and that they address issues of ethics, privacy, and trust.
As self-tracking has evolved from a niche practice to a mass-market phenomenon, it has become possible to track a broad range of activities and vital parameters over years and decades. This creates new opportunities for longitudinal research but also illustrates some of the challenges associated with it. We establish characteristics of very long-term tracking, building on previous work from diverse areas of Ubicomp, HCI, and health informatics. We identify differences between long- and short-term tracking and discuss their consequences for the tracking process. A model of long-term tracking integrates these specific characteristics and facilitates identifying viewpoints on tracking. Finally, a research agenda suggests major topics for future work, including respecting gaps in data and incorporating secondary data sources.