2021
DOI: 10.1016/j.genhosppsych.2021.02.008
Patient perspectives on acceptability of, and implementation preferences for, use of electronic health records and machine learning to identify suicide risk

Cited by 21 publications (19 citation statements)
References 15 publications
“…Given the increasing momentum of the OpenNotes movement [ 40 ], including within the mental health domain [ 41 ], it seems likely that over the long term, we can expect more EHR information to be available to service users. Thus, partnering with patient stakeholders [ 31 ] to develop protocols for transparent, sensitive communications about suicide risk, including key contextual information (eg, how to interpret being in a high-risk category) and language, may also be strategic.…”
Section: Discussion
confidence: 99%
“…Thus, before widespread clinical deployment, it is critical to partner with stakeholders (eg, frontline clinicians, patients, administrators, and payers) who can help guide such efforts. Recent work in this area has involved collecting self-report survey data from mental health professionals [ 30 ] and patients [ 31 ] on clinical and operational issues pertaining to automated suicide risk models. A recent study used self-report surveys (n=35) and interviews (n=12) to collect qualitative data from Veterans Affairs (VA) clinicians involved in the recently implemented VA program that uses predictive analytics to identify and provide outreach to veterans at high risk for suicide [ 32 ].…”
Section: Introduction
confidence: 99%
“…There is increasing recognition that these confidentiality concerns should be addressed before the implementation of suicide predictive analytic tools by making it clear to patients how their data will be used and by providing them with the choice of opting out (Chan et al, 2016; Tucker et al, 2019; Yarborough & Stumbo, 2021). Specifically, while patients may agree to allowing use of their aggregated data to improve system‐level care, they may object to being identified personally (Riordan et al, 2015).…”
Section: Clinical and Ethical Considerations Within A Risk‐benefit Co...
confidence: 99%
“…However, the specific use of patient electronic health record data for suicide prediction may exceed the scope of existing informed consent (Glenn & Monteith, 2014). The analysis of these data within a healthcare system may require additional informed consent that clearly outlines the benefits and risks of suicide predictive analytic tools (Yarborough & Stumbo, 2021).…”
Section: Introduction
confidence: 99%
“…When patients receiving inpatient psychiatric treatment ( n = 102) completed anonymous questionnaires and reacted to three hypothetical vignettes exploring different approaches to introducing a predictive model-driven suicide prevention program, negative reactions and privacy concerns were rare [ 12 ]. However, focus groups and a survey of 1,357 members of a large integrated health system revealed that although patients hypothetically supported this use of their health data, they had reservations about how risk models might be implemented [ 13 ]. Privacy was a universal concern, there was a preference that only trusted clinicians should have access to suicide risk information derived from risk models, and patients were worried about the potential for negative consequences including strain on the clinician-patient relationship, risk conversations causing anxiety for patients, and stigma [ 13 ].…”
Section: Introduction
confidence: 99%