Background Despite the rapidly growing number of digital assessment tools for screening and diagnosing mental health disorders, little is known about their diagnostic accuracy. Objective The purpose of this systematic review and meta-analysis is to establish the diagnostic accuracy of question- and answer-based digital assessment tools for diagnosing a range of highly prevalent psychiatric conditions in the adult population. Methods The Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines will be used. The focus of the systematic review is guided by the population, intervention, comparator, and outcome (PICO) framework. We will conduct a comprehensive systematic literature search of MEDLINE, PsycINFO, Embase, Web of Science Core Collection, Cochrane Library, Applied Social Sciences Index and Abstracts (ASSIA), and Cumulative Index to Nursing and Allied Health Literature (CINAHL) for appropriate articles published from January 1, 2005. Two authors will independently screen the titles and abstracts of identified references and select studies according to the eligibility criteria. Any inconsistencies will be discussed and resolved. The two authors will then extract data into a standardized form. Risk of bias will be assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, and a descriptive analysis and meta-analysis will summarize the diagnostic accuracy of the identified digital assessment tools. Results The systematic review and meta-analysis commenced in November 2020, with findings expected by May 2021. Conclusions This systematic review and meta-analysis will summarize the diagnostic accuracy of question- and answer-based digital assessment tools. It will identify implications for clinical practice, areas for improvement, and directions for future research.
Trial Registration PROSPERO International Prospective Register of Systematic Reviews CRD42020214724; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42020214724. International Registered Report Identifier (IRRID) DERR1-10.2196/25382
Background The ever-increasing pressure on health care systems has resulted in the underrecognition of perinatal mental disorders. Digital mental health tools such as apps could provide an option for accessible perinatal mental health screening and assessment. However, there is a lack of information regarding the availability and features of perinatal app options. Objective This study aims to evaluate the current state of diagnostic and screening apps for perinatal mental health available on the Google Play Store (Android) and Apple App Store (iOS) and to review their features following the mHealth Index and Navigation Database framework. Methods Following a scoping review approach, the Apple App Store and Google Play Store were systematically searched to identify perinatal mental health assessment apps. A total of 14 apps that met the inclusion criteria were downloaded and reviewed in a standardized manner using the mHealth Index and Navigation Database framework. The framework comprised 107 questions, allowing for a comprehensive assessment of app origin, functionality, engagement features, security, and clinical use. Results Most apps were developed by for-profit companies (n=10), followed by private individuals (n=2) and trusted health care companies (n=2). Out of the 14 apps, 3 were available only on Android devices, 4 were available only on iOS devices, and 7 were available on both platforms. Approximately one-third of the apps (n=5) had been updated within the last 180 days. A total of 12 apps offered the Edinburgh Postnatal Depression Scale in its original version or in rephrased versions. Engagement, input, and output features included reminder notifications, connections to therapists, and free writing features. A total of 6 apps offered psychoeducational information and references. Privacy policies were available for 11 of the 14 apps, with a median Flesch-Kincaid reading grade level of 12.3. 
One app claimed to be compliant with the Health Insurance Portability and Accountability Act standards and 2 apps claimed to be compliant with General Data Protection Regulation. Of the apps that could be accessed in full (n=10), all appeared to fulfill the claims stated in their description. Only 1 app referenced a relevant peer-reviewed study. All the apps provided a warning for use, highlighting that the mental health assessment result should not be interpreted as a diagnosis or as a substitute for medical care. Only 3 apps allowed users to export or email their mental health test results. Conclusions These results indicate that there are opportunities to improve perinatal mental health assessment apps. To this end, we recommend focusing on the development and validation of more comprehensive assessment tools, ensuring data protection and safety features are adequate for the intended app use, and improving data sharing features between users and health care professionals for timely support.
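The Flesch-Kincaid reading grade level used above to assess privacy policies follows a fixed formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59). A minimal Python sketch, using a rough vowel-group syllable heuristic rather than the dictionary-based counting a production readability tool would use:

```python
import re

def count_syllables(word):
    # Rough heuristic: each run of consecutive vowels counts as one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```

A grade level of 12.3 corresponds to roughly end-of-high-school reading difficulty, which is the basis for the recommendation that privacy policies be simplified.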
Background Given the role digital technologies are likely to play in the future of mental health care, there is a need for a comprehensive appraisal of the current state and validity (ie, screening or diagnostic accuracy) of digital mental health assessments. Objective The aim of this review is to explore the current state and validity of question-and-answer–based digital tools for diagnosing and screening psychiatric conditions in adults. Methods This systematic review was based on the Population, Intervention, Comparison, and Outcome framework and was carried out in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. MEDLINE, Embase, Cochrane Library, ASSIA, Web of Science Core Collection, CINAHL, and PsycINFO were systematically searched for articles published between 2005 and 2021. A descriptive evaluation of the study characteristics and digital solutions and a quantitative appraisal of the screening or diagnostic accuracy of the included tools were conducted. Risk of bias and applicability were assessed using the revised tool for the Quality Assessment of Diagnostic Accuracy Studies 2. Results A total of 28 studies met the inclusion criteria, with the most frequently evaluated conditions encompassing generalized anxiety disorder, major depressive disorder, and any depressive disorder. Most of the studies used digitized versions of existing pen-and-paper questionnaires, with findings revealing poor to excellent screening or diagnostic accuracy (sensitivity=0.32-1.00, specificity=0.37-1.00, area under the receiver operating characteristic curve=0.57-0.98) and a high risk of bias for most of the included studies. Conclusions The field of digital mental health tools is in its early stages, and high-quality evidence is lacking. International Registered Report Identifier (IRRID) RR2-10.2196/25382
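The sensitivity and specificity ranges summarized above derive from 2×2 confusion matrices comparing each digital tool against a reference diagnosis. A minimal sketch of how these metrics are computed from such counts (the cell values below are illustrative, not taken from any included study):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Compute screening accuracy metrics from a 2x2 confusion matrix.

    tp/fn: reference-positive cases the tool did / did not flag.
    tn/fp: reference-negative cases the tool did / did not clear.
    """
    sensitivity = tp / (tp + fn)  # proportion of true cases detected
    specificity = tn / (tn + fp)  # proportion of non-cases correctly cleared
    return sensitivity, specificity

# Illustrative example: 80 of 100 reference-positive cases flagged,
# 90 of 100 reference-negative cases correctly cleared.
sens, spec = sensitivity_specificity(tp=80, fp=10, fn=20, tn=90)
```

The wide ranges reported (sensitivity 0.32-1.00, specificity 0.37-1.00) reflect how strongly these two quantities depend on the chosen cutoff score and the study population.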
Mental health screening and diagnostic apps can provide an opportunity to reduce strain on mental health services, improve patient well-being, and increase access for underrepresented groups. Despite promising acceptability, many mental health apps on the market suffer from high dropout rates due to a multitude of issues. Understanding user opinions of currently available mental health apps beyond star ratings can provide knowledge that can inform the development of future mental health apps. This study aimed to conduct a review of current apps which offer screening and/or aid diagnosis of mental health conditions on the Apple app store (iOS), Google Play app store (Android), and using the m-health Index and Navigation Database (MIND). In addition, the study aimed to evaluate user experiences of the apps, identify common app features, and determine which features are associated with app use discontinuation. The Apple app store, Google Play app store, and MIND were searched. User reviews and associated metadata were then extracted to perform a sentiment and thematic analysis. The final sample included 92 apps. Of these, 45.65% (n = 42) screened for or diagnosed only a single mental health condition, and the most commonly assessed mental health condition was depression (38.04%, n = 35). Most of the apps (73.91%, n = 68) offered additional in-app features beyond the mental health assessment (e.g., mood tracking). The average user rating for the included apps was 3.70 (SD = 1.63), and just under two-thirds of user reviews gave a rating of four stars or above (65.09%, n = 442). Sentiment analysis revealed that 65.24% (n = 441) of the reviews had a positive sentiment. Ten themes were identified in the thematic analysis, with the most frequently occurring being performance (41.32%, n = 231) and functionality (39.18%, n = 219). In reviews which commented on app use discontinuation, functionality and accessibility in combination were the most frequent barriers to sustained app use (25.33%, n = 19).
Although most user reviews expressed a positive sentiment, there are several areas for improvement to be addressed. User reviews can reveal ways to increase performance and functionality. App user reviews are a valuable resource for the development and future improvement of apps designed for mental health diagnosis and screening.
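The sentiment analysis described above can be approximated with a lexicon-based approach. A deliberately minimal sketch with a toy word list; the study's actual method and lexicon are not specified here, so this is purely illustrative:

```python
# Toy sentiment lexicon for illustration only; a real analysis would use a
# validated lexicon (e.g., VADER) rather than this hand-picked word list.
POSITIVE = {"great", "helpful", "love", "easy", "useful"}
NEGATIVE = {"crash", "terrible", "confusing", "broken", "slow"}

def review_sentiment(review):
    """Classify a review as positive, negative, or neutral by lexicon word counts."""
    words = [w.strip(".,!?") for w in review.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

In practice such scores are aggregated per app and cross-tabulated with the thematic codes (e.g., performance, functionality) to link sentiment to specific features.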
BACKGROUND Currently, the screening of perinatal mental health symptoms is confined to maternity and primary care settings. Critically, the ever-increasing pressure on healthcare systems has resulted in under-recognition of perinatal mental disorders. Digital mental health tools, such as applications (apps), could provide an option for accessible perinatal mental health screening and assessments. However, there is a lack of information regarding the availability and effectiveness of perinatal app options. OBJECTIVE To evaluate the current state of diagnostic and screening apps for perinatal mental health available on the Google Play store (Android) and Apple App store (iOS), and to review their features following the App Evaluation Model framework. METHODS A systematic review approach was used to identify perinatal mental health assessment apps on the Apple App store and Google Play store. A total of 14 apps met the inclusion criteria and were downloaded and reviewed in a standardized manner using the App Evaluation Model framework. The framework comprised 107 questions, allowing for a comprehensive assessment of app origin, functionality, engagement features, security, and clinical use. RESULTS The majority of apps were developed by for-profit companies (n=10), followed by private individuals (n=2) and trusted healthcare companies (n=2). Three apps were available only on Android devices, four were available only on iOS devices, and seven were available on both platforms. Approximately a third of the apps (n=5) had been updated within the last 180 days. Most apps did not have enough reviews to display average ratings. Twelve apps offered the Edinburgh Postnatal Depression Scale (EPDS) in its original version or in rephrased versions. Additionally, one app included screening scales for anxiety, insomnia, and post-traumatic stress disorder. Engagement, input, and output features included reminder notifications, connection to therapists, and free writing features.
Six apps offered psychoeducational information or references. Privacy policies were available for 11 of the 14 apps, with a median Flesch-Kincaid reading grade level of 12.3. One app claimed to be compliant with Health Insurance Portability and Accountability Act standards, and two apps claimed to be General Data Protection Regulation compliant. Of the apps that could be accessed in full (n=10), all appeared to fulfil the claims stated in their description. Only one app referenced a relevant peer-reviewed study. All the apps provided a warning for use, highlighting that the mental health assessment result should not be interpreted as a diagnosis nor as a substitute for medical care; hence, all the apps were regarded as reference apps and not self-help tools. Only three apps allowed users to export or email their mental health test results. CONCLUSIONS These results support the view that there is space for designing and improving perinatal mental health applications. To this end, we recommend three areas of focus for app developers and clinicians looking to design apps for perinatal mental health assessment.
Digital mental health interventions (DMHI) have the potential to address barriers to face-to-face mental healthcare. In particular, digital mental health assessments offer the opportunity to increase access, reduce strain on services, and improve identification. Despite the potential of DMHIs there remains a high drop-out rate. Therefore, investigating user feedback may elucidate how to best design and deliver an engaging digital mental health assessment. The current study aimed to understand 1304 user perspectives of (1) a newly developed digital mental health assessment to determine which features users consider to be positive or negative and (2) the Composite International Diagnostic Interview (CIDI) employed in a previous large-scale pilot study. A thematic analysis method was employed to identify themes in feedback to three question prompts related to: (1) the questions included in the digital assessment, (2) the homepage design and reminders, and (3) the assessment results report. The largest proportion of the positive and negative feedback received regarding the questions included in the assessment (n = 706), focused on the quality of the assessment (n = 183, 25.92% and n = 284, 40.23%, respectively). Feedback for the homepage and reminders (n = 671) was overwhelmingly positive, with the largest two themes identified being positive usability (i.e., ease of use; n = 500, 74.52%) and functionality (i.e., reminders; n = 278, 41.43%). The most frequently identified negative theme in results report feedback (n = 794) was related to the report content (n = 309, 38.92%), with users stating it was lacking in-depth information. Nevertheless, the most frequent positive theme regarding the results report feedback was related to wellbeing outcomes (n = 145, 18.26%), with users stating the results report, albeit brief, encouraged them to seek professional support. 
Interestingly, despite some negative feedback, most users reported that completing the digital mental health assessment has been worthwhile (n = 1,017, 77.99%). Based on these findings, we offer recommendations to address potential barriers to user engagement with a digital mental health assessment. In summary, we recommend undertaking extensive co-design activities during the development of digital assessment tools, flexibility in answering modalities within digital assessment, customizable additional features such as reminders, transparency of diagnostic decision making, and an actionable results report with personalized mental health resources.
Background Every year, one-fourth of the people in the United Kingdom experience diagnosable mental health concerns, yet only a proportion receive a timely diagnosis and treatment. With novel developments in digital technologies, the potential to increase access to mental health assessments and triage is promising. Objective This study aimed to investigate the current state of mental health provision in the United Kingdom and understand the utility of, and interest in, digital mental health technologies. Methods A web-based survey was generated using Qualtrics XM. Participants were recruited via social media. Data were explored using descriptive statistics. Results The majority of the respondents (555/618, 89.8%) had discussed their mental health with a general practitioner. More than three-fourths (503/618, 81.4%) of the respondents had been diagnosed with a mental health disorder, with the most common diagnoses being depression and generalized anxiety disorder. Diagnostic waiting times from first contact with a health care professional varied by diagnosis. Neurodevelopmental disorders (30/56, 54%), bipolar disorder (25/52, 48%), and personality disorders (48/101, 47.5%) had the longest waiting times, with almost half (103/209, 49.3%) of these diagnoses taking >6 months. Participants stated that waiting times resulted in symptoms worsening (262/353, 74.2%), lower quality of life (166/353, 47%), and the necessity to seek emergency care (109/353, 30.9%). Of the 618 participants, 386 (62.5%) stated that they felt that their mental health symptoms were not always taken seriously by their health care provider and 297 (48.1%) were not given any psychoeducational information. The majority of the respondents (416/595, 77.5%) did not have the chance to discuss mental health support and treatment options. Critically, 16.1% (96/595) did not find any treatment or support provided at all helpful, with 63% (48/76) having discontinued treatment with no effective alternatives. 
Furthermore, 88.3% (545/617) of the respondents had sought help on the web regarding mental health symptoms, and 44.4% (272/612) had used a web application or smartphone app for their mental health. Psychoeducation (364/596, 61.1%), referral to a health care professional (332/596, 55.7%), and symptom monitoring (314/596, 52.7%) were the most desired app features. Only 6.8% (40/590) of the participants said that they would not be interested in using a mental health assessment app. Respondents were most interested in receiving an overall severity score of their mental health symptoms (441/546, 80.8%) and an indication of whether they should seek mental health support (454/546, 83.2%). Conclusions Key gaps in current UK mental health care provision are highlighted. Assessment and treatment waiting times, together with a lack of information regarding symptoms and treatment options, translated into poor care experiences. The participants’ responses provide proof-of-concept support for the development of a digital mental health assessment app and valuable recommendations regarding desirable app features.
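The survey results above are reported as raw proportions (e.g., 503/618, 81.4%) without uncertainty. As an illustration of the descriptive statistics involved, a sketch of how a 95% Wilson score interval could be attached to such a proportion (this is an addition for illustration; the study itself does not report confidence intervals):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# E.g., 503 of 618 respondents reported a mental health diagnosis.
lo, hi = wilson_ci(503, 618)  # roughly (0.78, 0.84)
```

The Wilson interval is preferred over the simple normal approximation for survey work because it behaves sensibly near 0% and 100% and for small subgroups (such as the 30/56 neurodevelopmental-disorder respondents).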
BACKGROUND Every year, 1 in 4 people in the UK experience diagnosable mental health concerns, yet only a proportion receive a timely diagnosis and treatment. With novel developments in digital technologies, the potential to increase access to mental health assessments and triage is promising. OBJECTIVE To investigate the current state of mental health provision in the UK as well as the utility of, and interest in, digital mental health technologies, and to investigate attitudes towards using a digital tool (eg, mobile app) to assess mental health symptoms. METHODS An online survey was generated using Qualtrics XM®, and participants were recruited via social media and organic posts on relevant forums. Data were explored using descriptive statistics. RESULTS Data from 618 participants were analyzed. The majority (89.8%, n=555) of respondents had discussed their mental health with a general practitioner. Approximately 80% (n=503) of respondents were diagnosed with a mental health disorder, with the most common diagnoses being depression and generalized anxiety disorder. Diagnostic waiting times varied by diagnosis. Neurodevelopmental disorders, bipolar disorder, and personality disorders had the longest waiting times, with almost half of these diagnoses taking longer than six months (53.6%, n=30; 48.1%, n=25; and 47.5%, n=48, respectively). Respondents expressed that waiting times resulted in symptoms worsening (83.2%, n=262), lower quality of life (52.7%, n=166), and the necessity to seek emergency care (34.6%, n=109). A total of 62.5% of respondents (n=386) expressed that they felt their mental health symptoms were not always taken seriously by their health care provider, 48.1% (n=297) were not given any psychoeducational information, and 77.5% (n=416) did not have the chance to discuss mental health support and treatment options.
Critically, 16.1% (n=96) did not find any treatment or support provided at all helpful, with 63.2% (n=48) having discontinued treatment with no effective alternatives. In terms of digital technology use, 88% (n=545) of respondents had sought help online regarding mental health symptoms, and 44.4% (n=272) had used a web or smartphone app for their mental health. Psychoeducation (61.1%, n=364), followed by signposting and referral to a health care professional (55.7%, n=332), and monitoring symptoms (52.7%, n=314) were the most desired app features. Only 6.8% (n=40) said they would not be interested in using a mental health assessment app. In a hypothetical results report, respondents were most interested in receiving an overall severity score of their mental health symptoms (80.8%, n=441) and an indication of whether they should seek mental health support (83.2%, n=454). CONCLUSIONS Key gaps in current UK mental health care provision were highlighted. Assessment and treatment waiting times, together with a lack of information regarding symptoms and treatment options, translated into poor care experiences. The responses provide proof-of-concept support for the development of a digital mental health assessment app and valuable recommendations regarding desirable app features.