2023
DOI: 10.1038/s41746-023-00789-9

Identifying vulnerable populations in the electronic Framingham Heart Study to improve digital device adherence

Abstract: The use of digital devices in clinical and research settings has rapidly increased. Despite their promise, optimal use of these devices is often hampered by low adherence. The relevant factors predictive of long-term adherence have yet to be fully explored. A recent study investigated device usage over 12 months in a cohort of the electronic Framingham Heart Study. It identified sociodemographic and health-related factors associated with the long-term use of three digital health components: a smartphone app,…

Cited by 6 publications (8 citation statements); references 11 publications.
“…This paper presents OTTEHR, an OT-based unsupervised transfer learning (TL) framework for EHRs. While biased models can lead to incorrect diagnoses, treatments, and healthcare decisions [Chen et al., 2023; Mittermaier et al., 2023], OTTEHR can potentially alleviate these biases by leveraging OT when comparing different population groups. Our study more precisely establishes a theoretical upper bound for the generalization error.…”
Section: Discussion
confidence: 99%
“…91 Upon completion of a final model, initial and then routine reporting of model performance across data sets can be facilitated using tools such as model cards for algorithms and data sheets for data sets, intended to provide optimal transparency and updated guidance on how to use, validate, and interpret results from a given model applied to a given data set. 78–80,92,93 In addition to routinely applied and updated evaluations of overall performance and fairness metrics, these same tools are likely to add value for ongoing assessments of how an algorithm operates in real-world settings. Transparency of differential performance will also allow interdisciplinary teams to iterate on and further optimize models across multiple use scenarios.…”
Section: Bias in Medicine and Mitigation by ML
confidence: 99%
“…To date, AI technologies have gained approval without necessarily having robust external validation, 96 and the only high-level guidance is provided to promote fairness without any specific requirements to ensure mitigation of potential bias. 59 Notwithstanding plans to establish processes for monitoring of AI model performance in real-world settings, 93 there are growing calls for the FDA to establish a more structured process for initial evaluation and approval of AI technologies that is ideally aligned with how drugs and devices are regulated. 97,98 For this reason, we offer a framework for considering approaches to bias mitigation that are based on existing workflows for building new AI technologies while emphasizing transparency and also approximating the conventional 4 phases of drug development and review (Table 4).…”
Section: Bias in Medicine and Mitigation by ML
confidence: 99%
“…Since current AI models are trained on non-psychiatric data sources, all major AI chatbots today clearly state that their products must not be used for clinical purposes. Even with proper training, the risks of AI bias must be carefully explored, given numerous recent examples of clear harm in other medical fields. 6 A rapid glance at images generated by an AI program asked to draw “schizophrenia” 7 visualized the extent to which extreme stigma and harmful bias have informed what current AI models conceptualize as mental illness.…”
confidence: 99%