Background Despite the rapidly growing number of digital assessment tools for screening and diagnosing mental health disorders, little is known about their diagnostic accuracy. Objective The purpose of this systematic review and meta-analysis is to establish the diagnostic accuracy of question- and answer-based digital assessment tools for diagnosing a range of highly prevalent psychiatric conditions in the adult population. Methods The review will follow the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines. The focus of the systematic review is guided by the population, intervention, comparator, and outcome (PICO) framework. We will conduct a comprehensive systematic literature search of MEDLINE, PsycINFO, Embase, Web of Science Core Collection, Cochrane Library, Applied Social Sciences Index and Abstracts (ASSIA), and Cumulative Index to Nursing and Allied Health Literature (CINAHL) for appropriate articles published from January 1, 2005, onward. Two authors will independently screen the titles and abstracts of identified references and select studies according to the eligibility criteria. Any inconsistencies will be discussed and resolved. The two authors will then extract data into a standardized form. Risk of bias will be assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, and a descriptive analysis and meta-analysis will summarize the diagnostic accuracy of the identified digital assessment tools. Results The systematic review and meta-analysis commenced in November 2020, with findings expected by May 2021. Conclusions This systematic review and meta-analysis will summarize the diagnostic accuracy of question- and answer-based digital assessment tools. It will identify implications for clinical practice, areas for improvement, and directions for future research.
Trial Registration PROSPERO International Prospective Register of Systematic Reviews CRD42020214724; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020214724. International Registered Report Identifier (IRRID) DERR1-10.2196/25382
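The planned meta-analysis will pool diagnostic accuracy estimates across studies. As an illustration only (the protocol above does not specify the pooling model, and the study counts below are hypothetical), per-study sensitivities from 2x2 tables might be combined by inverse-variance weighting on the logit scale:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(studies):
    """Fixed-effect inverse-variance pooling of sensitivity on the logit scale.

    Each study is a (true_positives, false_negatives) pair."""
    num, den = 0.0, 0.0
    for tp, fn in studies:
        # A 0.5 continuity correction guards against zero cells.
        sens = (tp + 0.5) / (tp + fn + 1.0)
        var = 1 / (tp + 0.5) + 1 / (fn + 0.5)  # variance of the logit
        w = 1 / var
        num += w * logit(sens)
        den += w
    return inv_logit(num / den)

# Hypothetical per-study counts, not taken from the review
studies = [(45, 5), (30, 10), (80, 20)]
print(round(pooled_sensitivity(studies), 3))
```

A random-effects model (adding a between-study variance component) is the more common choice when accuracy varies across settings, but the fixed-effect version above keeps the weighting idea visible.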
Background Perinatal mental health symptoms commonly remain underdiagnosed and undertreated in maternity care settings in the United Kingdom, with outbreaks of disease, like the COVID-19 pandemic, further disrupting access to adequate mental health support. Digital technologies may offer an innovative way to support the mental health needs of women and their families throughout the perinatal period, as well as assist midwives in the recognition of perinatal mental health concerns. However, little is known about the acceptability and perceived benefits and barriers to using such technologies. Objective The aim of this study was to conduct a mixed methods evaluation of the current state of perinatal mental health care provision in the United Kingdom, as well as users’ (women and partners) and midwives’ interest in using a digital mental health assessment throughout the perinatal period. Methods Women, partners, and midwives were recruited to participate in the study, which entailed completing an online survey. Quantitative data were explored using descriptive statistics. Open-ended response data were first investigated using thematic analysis. Resultant themes were then mapped onto the components of the Capability, Opportunity, Motivation-Behavior (COM-B) model and summarized using descriptive statistics. Results A total of 829 women, 103 partners, and 90 midwives participated in the study. The provision of adequate perinatal mental health care support was limited, with experiences varying significantly across respondents. There was a strong interest in using a digital mental health assessment to screen, diagnose, and triage perinatal mental health concerns, particularly among women and midwives. The majority of respondents (n=781, 76.42%) expressed that they would feel comfortable or very comfortable using or recommending a digital mental health assessment.
The majority of women and partners showed a preference for in-person consultations (n=417, 44.74%), followed by a blended care approach (ie, both in-person and online consultations) (n=362, 38.84%), with fewer participants preferring online-only consultations (n=120, 12.88%). Identified benefits and barriers mainly related to physical opportunity (eg, accessibility), psychological capability (eg, cognitive skills), and automatic motivation (eg, emotions). Conclusions This study provides proof-of-concept support for the development and implementation of a digital mental health assessment to inform clinical decision making in the assessment of perinatal mental health concerns in the United Kingdom.
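The step of mapping coded themes onto the components of the Capability, Opportunity, Motivation-Behavior (COM-B) model and summarizing them with descriptive statistics can be sketched as follows. The theme labels and coding frame here are hypothetical, not the study's actual codebook:

```python
from collections import Counter

# Hypothetical theme-to-component mapping; the study's real coding frame may differ.
THEME_TO_COMB = {
    "accessibility": "physical opportunity",
    "cognitive skills": "psychological capability",
    "emotions": "automatic motivation",
    "social support": "social opportunity",
}

def summarise(coded_themes):
    """Tally how often each COM-B component appears and return percentages."""
    counts = Counter(THEME_TO_COMB[t] for t in coded_themes)
    total = len(coded_themes)
    return {component: round(100 * n / total, 1) for component, n in counts.items()}

# Toy list of coded open-ended responses
themes = ["accessibility", "emotions", "accessibility", "cognitive skills"]
print(summarise(themes))
```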
Background The ever-increasing pressure on health care systems has resulted in the underrecognition of perinatal mental disorders. Digital mental health tools such as apps could provide an option for accessible perinatal mental health screening and assessment. However, there is a lack of information regarding the availability and features of perinatal app options. Objective This study aims to evaluate the current state of diagnostic and screening apps for perinatal mental health available on the Google Play Store (Android) and Apple App Store (iOS) and to review their features following the mHealth Index and Navigation Database (MIND) framework. Methods Following a scoping review approach, the Apple App Store and Google Play Store were systematically searched to identify perinatal mental health assessment apps. A total of 14 apps that met the inclusion criteria were downloaded and reviewed in a standardized manner using the MIND framework. The framework comprised 107 questions, allowing for a comprehensive assessment of app origin, functionality, engagement features, security, and clinical use. Results Most apps were developed by for-profit companies (n=10), followed by private individuals (n=2) and trusted health care companies (n=2). Out of the 14 apps, 3 were available only on Android devices, 4 were available only on iOS devices, and 7 were available on both platforms. Approximately one-third of the apps (n=5) had been updated within the last 180 days. A total of 12 apps offered the Edinburgh Postnatal Depression Scale in its original version or in rephrased versions. Engagement, input, and output features included reminder notifications, connections to therapists, and free writing features. A total of 6 apps offered psychoeducational information and references. Privacy policies were available for 11 of the 14 apps, with a median Flesch-Kincaid reading grade level of 12.3.
One app claimed to be compliant with the Health Insurance Portability and Accountability Act (HIPAA) standards, and 2 apps claimed to be compliant with the General Data Protection Regulation (GDPR). Of the apps that could be accessed in full (n=10), all appeared to fulfill the claims stated in their description. Only 1 app referenced a relevant peer-reviewed study. All the apps provided a warning for use, highlighting that the mental health assessment result should not be interpreted as a diagnosis or as a substitute for medical care. Only 3 apps allowed users to export or email their mental health test results. Conclusions These results indicate that there are opportunities to improve perinatal mental health assessment apps. To this end, we recommend focusing on the development and validation of more comprehensive assessment tools, ensuring data protection and safety features are adequate for the intended app use, and improving data sharing features between users and health care professionals for timely support.
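The Flesch-Kincaid reading grade level reported for the privacy policies is computed from average sentence length and syllables per word. A minimal sketch, using a rough vowel-group syllable heuristic (production readability tools use dictionaries and handle many more edge cases):

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, dropping a silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

print(round(fk_grade("The cat sat on the mat."), 2))
```

A grade of 12.3, as found for the median privacy policy, corresponds roughly to end-of-high-school reading ability, which is well above the level usually recommended for patient-facing material.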
Background Given the role digital technologies are likely to play in the future of mental health care, there is a need for a comprehensive appraisal of the current state and validity (ie, screening or diagnostic accuracy) of digital mental health assessments. Objective The aim of this review is to explore the current state and validity of question-and-answer–based digital tools for diagnosing and screening psychiatric conditions in adults. Methods This systematic review was based on the Population, Intervention, Comparison, and Outcome framework and was carried out in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. MEDLINE, Embase, Cochrane Library, ASSIA, Web of Science Core Collection, CINAHL, and PsycINFO were systematically searched for articles published between 2005 and 2021. A descriptive evaluation of the study characteristics and digital solutions and a quantitative appraisal of the screening or diagnostic accuracy of the included tools were conducted. Risk of bias and applicability were assessed using the revised tool for the Quality Assessment of Diagnostic Accuracy Studies 2. Results A total of 28 studies met the inclusion criteria, with the most frequently evaluated conditions encompassing generalized anxiety disorder, major depressive disorder, and any depressive disorder. Most of the studies used digitized versions of existing pen-and-paper questionnaires, with findings revealing poor to excellent screening or diagnostic accuracy (sensitivity=0.32-1.00, specificity=0.37-1.00, area under the receiver operating characteristic curve=0.57-0.98) and a high risk of bias for most of the included studies. Conclusions The field of digital mental health tools is in its early stages, and high-quality evidence is lacking. International Registered Report Identifier (IRRID) RR2-10.2196/25382
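The sensitivity and specificity ranges reported above are derived from 2x2 tables comparing each digital tool's classification against a reference standard (typically a clinician-administered diagnostic interview). A minimal sketch with hypothetical counts:

```python
def screening_accuracy(tp, fp, tn, fn):
    """Sensitivity, specificity, and Youden's J from a 2x2 screening table.

    tp/fn: people with the condition flagged / missed by the tool;
    tn/fp: people without the condition correctly cleared / wrongly flagged."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "youden_j": sensitivity + specificity - 1,
    }

# Hypothetical counts for one tool vs. a clinician-administered reference standard
print(screening_accuracy(tp=40, fp=15, tn=85, fn=10))
```

The wide ranges in the review (sensitivity 0.32-1.00, specificity 0.37-1.00) mean some tools missed most true cases or flagged most non-cases, which is why the risk-of-bias assessment matters as much as the headline numbers.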
Digital mental health technologies such as mobile health (mHealth) tools can offer innovative ways to help develop and facilitate mental health care provision, with the COVID-19 pandemic acting as a pivot point for digital health implementation. This viewpoint offers an overview of the opportunities and challenges mHealth innovators must navigate to create an integrated digital ecosystem for mental health care moving forward. Opportunities exist for innovators to develop tools that can collect a vast range of active and passive patient and transdiagnostic symptom data. Moving away from a symptom-count approach to a transdiagnostic view of psychopathology has the potential to facilitate early and accurate diagnosis, and can further enable personalized treatment strategies. However, the uptake of these technologies critically depends on the perceived relevance and engagement of end users. To this end, behavior theories and codesigning approaches offer opportunities to identify behavioral drivers and address barriers to uptake, while ensuring that products meet users’ needs and preferences. The agenda for innovators should also include building strong evidence-based cases for digital mental health, moving away from a one-size-fits-all well-being approach to embrace the development of comprehensive digital diagnostics and validated digital tools. In particular, innovators have the opportunity to make their clinical evaluations more insightful by assessing effectiveness and feasibility in the intended context of use. Finally, innovators should adhere to standardized evaluation frameworks introduced by regulators and health care providers, as this can facilitate transparency and guide health care professionals toward clinically safe and effective technologies. By laying these foundations, digital services can become integrated into clinical practice, thus facilitating deeper technology-enabled changes.
Mental health screening and diagnostic apps can provide an opportunity to reduce strain on mental health services, improve patient well-being, and increase access for underrepresented groups. Despite their promising acceptability, many mental health apps on the market suffer from high dropout rates due to a multitude of issues. Understanding user opinions of currently available mental health apps, beyond star ratings, can inform the development of future mental health apps. This study aimed to review current apps that offer screening and/or aid diagnosis of mental health conditions on the Apple app store (iOS) and Google Play app store (Android), and in the m-health Index and Navigation Database (MIND). In addition, the study aimed to evaluate user experiences of the apps, identify common app features, and determine which features are associated with app use discontinuation. The Apple app store, Google Play app store, and MIND were searched. User reviews and associated metadata were then extracted for sentiment and thematic analysis. The final sample included 92 apps. Of these, 45.65% (n = 42) screened for or diagnosed only a single mental health condition, and the most commonly assessed condition was depression (38.04%, n = 35). Additional in-app features beyond the mental health assessment (e.g., mood tracking) were offered by 73.91% (n = 68) of the apps. The average user rating for the included apps was 3.70 (SD = 1.63), and just under two-thirds of user ratings were four stars or above (65.09%, n = 442). Sentiment analysis revealed that 65.24% (n = 441) of the reviews had a positive sentiment. Ten themes were identified in the thematic analysis, the most frequently occurring being performance (41.32%, n = 231) and functionality (39.18%, n = 219). In reviews that commented on app use discontinuation, functionality and accessibility in combination were the most frequent barriers to sustained app use (25.33%, n = 19).
Despite the majority of user reviews demonstrating a positive sentiment, several areas for improvement remain. User reviews can reveal ways to improve performance and functionality, making them a valuable resource for the development and future improvement of apps designed for mental health diagnosis and screening.
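A sentiment analysis like the one described can be illustrated with a toy lexicon-based scorer. Real analyses typically rely on validated tools (e.g., VADER), and the word lists below are invented for illustration only:

```python
# Invented mini-lexicons for illustration; a validated sentiment lexicon
# would contain thousands of weighted entries.
POSITIVE = {"great", "helpful", "easy", "love", "useful"}
NEGATIVE = {"crash", "broken", "confusing", "slow", "useless"}

def review_sentiment(review):
    """Label a review positive/negative/neutral by counting lexicon hits."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(review_sentiment("Easy to use and really helpful!"))      # positive
print(review_sentiment("App keeps freezing, totally useless.")) # negative
```

Aggregating these labels over all extracted reviews yields the kind of sentiment proportions reported above (e.g., the share of reviews with a positive sentiment).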
Tweetable abstract Reflections on challenges and promises of COVID-19 vaccine development show opportunities for innovation and collaboration between stakeholders.
Digital mental health interventions (DMHIs) have the potential to address barriers to face-to-face mental healthcare. In particular, digital mental health assessments offer the opportunity to increase access, reduce strain on services, and improve identification. Despite the potential of DMHIs, drop-out rates remain high. Investigating user feedback may therefore elucidate how best to design and deliver an engaging digital mental health assessment. The current study aimed to understand the perspectives of 1304 users on (1) a newly developed digital mental health assessment, to determine which features users consider positive or negative, and (2) the Composite International Diagnostic Interview (CIDI) employed in a previous large-scale pilot study. A thematic analysis method was employed to identify themes in feedback to three question prompts related to: (1) the questions included in the digital assessment, (2) the homepage design and reminders, and (3) the assessment results report. The largest proportion of the positive and negative feedback received regarding the questions included in the assessment (n = 706) focused on the quality of the assessment (n = 183, 25.92% and n = 284, 40.23%, respectively). Feedback for the homepage and reminders (n = 671) was overwhelmingly positive, with the two largest themes being positive usability (i.e., ease of use; n = 500, 74.52%) and functionality (i.e., reminders; n = 278, 41.43%). The most frequently identified negative theme in the results report feedback (n = 794) related to the report content (n = 309, 38.92%), with users stating it lacked in-depth information. Nevertheless, the most frequent positive theme regarding the results report feedback related to wellbeing outcomes (n = 145, 18.26%), with users stating that the results report, albeit brief, encouraged them to seek professional support.
Interestingly, despite some negative feedback, most users reported that completing the digital mental health assessment had been worthwhile (n = 1,017, 77.99%). Based on these findings, we offer recommendations to address potential barriers to user engagement with a digital mental health assessment. In summary, we recommend undertaking extensive co-design activities during the development of digital assessment tools, offering flexible answering modalities within the digital assessment, providing customizable additional features such as reminders, ensuring transparency of diagnostic decision making, and delivering an actionable results report with personalized mental health resources.