Background: Despite growing interest in mobile mental health and the use of smartphone technology to monitor psychiatric symptoms, there remains a lack of knowledge both about patients' ownership of smartphones and about their interest in using them to monitor their mental health. Objective: To provide data on the prevalence of smartphone ownership among psychiatric outpatients and their interest in using their smartphones to run applications that monitor their mental health. Methods: We surveyed 320 psychiatric outpatients from four clinics around the United States in order to capture a geographically and socioeconomically diverse patient population. These comprised a state clinic in Massachusetts (n=108), a county clinic in California (n=56), a hybrid public and private clinic in Louisiana (n=50), and a private/university clinic in Wisconsin (n=106). Results: Smartphone ownership and interest in using smartphones to monitor mental health varied by both clinic type and age. Overall ownership was 62.5% (200/320), slightly higher than the average United States ownership rate of 58% in January 2014. Overall patient interest in using smartphones to monitor symptoms was 70.6% (226/320). Conclusions: These results suggest that psychiatric outpatients are interested in using their smartphones to monitor their mental health and own smartphones capable of running mental health care-related mobile applications.
Telepsychiatry (TP; video; synchronous) is effective, well received, and a standard way to practice. Best practices in TP education, but not its desired outcomes, have been published. This paper proposes competencies for trainees and clinicians, with TP situated within the broader landscape of e-mental health (e-MH) care. TP competencies are organized using the US Accreditation Council of Graduate Medical Education framework, with input from the CanMEDS framework. Teaching and assessment methods are aligned with target competencies, learning contexts, and evaluation options. Case examples help to apply concepts to clinical and institutional contexts. Competencies can be identified, measured, and evaluated. Novice or advanced beginner, competent/proficient, and expert levels are outlined. Andragogical (adult learning) methods are used in clinical care, seminar, and other educational contexts. Cross-sectional and longitudinal evaluation using quantitative and qualitative measures promotes skills development via iterative feedback from patients, trainees, and faculty. TP and e-MH care overlap significantly, such that institutional leaders may use a common approach for change management and an e-platform to prioritize resources. TP training and assessment methods need to be implemented and evaluated. Institutional approaches to patient care, education, faculty development, and funding also need to be studied.
With thousands of smartphone apps targeting mental health, it is difficult to ignore the rapidly expanding use of apps in the treatment of psychiatric disorders. Patients with psychiatric conditions are interested in mental health apps and have begun to use them. That does not mean that clinicians must support, endorse, or even adopt the use of apps, but they should be prepared to answer patients' questions about apps and facilitate shared decision making around app use. This column describes an evaluation framework designed by the American Psychiatric Association to guide informed decision making around the use of smartphone apps in clinical care.
Background: There are over 165,000 mHealth apps currently available to patients, but few have undergone an external quality review. Furthermore, no standardized review method exists, and little has been done to examine the consistency of the evaluation systems themselves. Objective: We sought to determine which measures for evaluating the quality of mHealth apps have the greatest interrater reliability. Methods: We identified 22 measures for evaluating the quality of apps from the literature. A panel of 6 reviewers reviewed the top 10 depression apps and 10 smoking cessation apps from the Apple iTunes App Store on these measures. Krippendorff's alpha was calculated for each of the measures and reported by app category and in aggregate. Results: The measure for interactiveness and feedback was found to have the greatest overall interrater reliability (alpha=.69). Presence of password protection (alpha=.65), whether the app was uploaded by a health care agency (alpha=.63), the number of consumer ratings (alpha=.59), and several other measures had moderate interrater reliability (alphas>.5). There was the least agreement over whether apps had errors or performance issues (alpha=.15), stated advertising policies (alpha=.16), and were easy to use (alpha=.18). There were substantial differences in the interrater reliabilities of a number of measures when they were applied to depression versus smoking apps. Conclusions: We found wide variation in the interrater reliability of measures used to evaluate apps, and some measures are more robust across categories of apps than others. The measures with the highest degree of interrater reliability tended to be those that involved the least rater discretion. Clinical quality measures such as effectiveness, ease of use, and performance had relatively poor interrater reliability. Subsequent research is needed to determine consistent means for evaluating the performance of apps.
Patients and clinicians should consider conducting their own assessments of apps, in conjunction with evaluating information from reviews.
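The interrater-reliability study above reports Krippendorff's alpha, a chance-corrected agreement statistic that handles multiple raters and missing ratings. As a minimal sketch of how that statistic is computed for nominal data (the function name and per-unit data layout are ours for illustration, not from the study):

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal ratings.

    units: one list per rated item (e.g. per app), each containing the
    values assigned by the raters; use None for a missing rating.
    Returns alpha = 1 - D_observed / D_expected.
    """
    # Build the coincidence matrix: each ordered pair of values within a
    # unit contributes 1/(m - 1), where m is that unit's rating count.
    coincidence = Counter()
    for unit in units:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue  # units with fewer than 2 ratings carry no information
        for c, k in permutations(range(m), 2):
            coincidence[(values[c], values[k])] += 1.0 / (m - 1)

    # Marginal totals per category and grand total.
    n_c = Counter()
    for (c, _k), w in coincidence.items():
        n_c[c] += w
    n = sum(n_c.values())

    # For nominal data the disagreement weight is 1 when categories differ.
    observed = sum(w for (c, k), w in coincidence.items() if c != k)
    expected = sum(n_c[c] * n_c[k]
                   for c in n_c for k in n_c if c != k) / (n - 1)
    if expected == 0:
        return 1.0  # only one category observed: perfect agreement
    return 1.0 - observed / expected

# Two raters over four items: one disagreement on the second item.
print(krippendorff_alpha_nominal([[1, 1], [1, 2], [2, 2], [2, 2]]))
```

For the study's design, each of the 20 apps would be one unit holding the 6 reviewers' scores on a given measure; low alpha on a measure such as "easy to use" reflects how much rater discretion that judgment involves.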
With over 10,000 mental health- and psychiatry-related smartphone apps available today, and that number expanding, there is a need for reliable and valid evaluation of these digital tools. However, the continually updated, nonstatic nature of smartphone apps, expanding privacy concerns, varying degrees of usability, and evolving interoperability standards, among other factors, present serious challenges for app evaluation. In this article, we provide a narrative review of various schemes for app evaluation, including commercial app store metrics, government initiatives, patient-centric approaches, point-based scoring, academic platforms, and expert review systems. We demonstrate that these different approaches to app evaluation each offer unique benefits but often do not agree with one another and produce varied conclusions as to which apps are useful or not. Although there are no simple solutions, we briefly introduce a new initiative that aims to unify the current controversies in app evaluation, called CHART (Collaborative Health App Rating Teams), which will be further discussed in a second article in this series.