Machine learning techniques were used to identify highly informative early psychosis self-report items and to validate an early psychosis screener (EPS) against the Structured Interview for Psychosis-risk Syndromes (SIPS). The Prodromal Questionnaire-Brief Version (PQ-B) and 148 additional items were administered to 229 individuals being screened with the SIPS at 7 North American Prodrome Longitudinal Study sites and at Columbia University. Fifty individuals were found to have SIPS scores of 0, 1, or 2, making them clinically low risk (CLR) controls; 144 were classified as clinically high risk (CHR) (SIPS 3-5) and 35 were found to have first episode psychosis (FEP) (SIPS 6). Spectral clustering analysis, performed on 124 of the items, yielded two cohesive item groups, the first mostly related to psychosis and mania, the second mostly related to depression, anxiety, and social and general work/school functioning. Items within each group were sorted according to their usefulness in distinguishing between CLR and CHR individuals using the Minimum Redundancy Maximum Relevance procedure. A receiver operating characteristic area under the curve (AUC) analysis indicated that maximal differentiation of CLR and CHR participants was achieved with a 26-item solution (AUC=0.899±0.001). The EPS-26 outperformed the PQ-B (AUC=0.834±0.001). For screening purposes, the self-report EPS-26 appeared to differentiate individuals who are either CLR or CHR approximately as well as the clinician-administered SIPS. The EPS-26 may prove useful as a self-report screener and may lead to a decrease in the duration of untreated psychosis. A validation of the EPS-26 against actual conversion is underway.
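The item-ranking and evaluation pipeline described above (relevance-versus-redundancy item selection followed by ROC AUC assessment) can be sketched in miniature. Everything below is illustrative: the data are synthetic, and absolute Pearson correlation is used as a simple proxy for the relevance and redundancy terms, not the study's actual mRMR implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 respondents x 20 self-report items,
# with the first 5 items weakly informative of case status (label y).
n, p = 200, 20
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, p))
X[:, :5] += 0.8 * y[:, None]   # informative items

def mrmr_rank(X, y, k):
    """Greedy minimum-redundancy maximum-relevance item selection,
    scoring each candidate as |corr(item, label)| minus the mean
    |corr(item, already-selected item)|."""
    rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        def score(j):
            if not selected:
                return rel[j]
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            return rel[j] - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

def auc(scores, y):
    """ROC area under the curve via the Mann-Whitney U statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = y.sum()
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

items = mrmr_rank(X, y, k=5)
screen_score = X[:, items].sum(axis=1)   # unweighted sum of selected items
print(sorted(items), auc(screen_score, y))
```

With informative items planted in the data, the selected set should concentrate on them and the resulting screener score should separate the two groups well above chance.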
Background
The computerized administration of self-report psychiatric diagnostic and outcomes assessments has risen in popularity. If results are similar enough across administration modalities, then new administration technologies can be used interchangeably and the choice of technology can be based on other factors, such as convenience in the study design. An assessment based on item response theory (IRT), such as the Patient-Reported Outcomes Measurement Information System (PROMIS) depression item bank, offers new possibilities for assessing the effect of technology choice on results.

Objective
To create equivalent halves of the PROMIS depression item bank and to use them to compare survey responses and user satisfaction across administration modalities (paper, mobile phone, or tablet) in a community mental health care population.

Methods
The 28 PROMIS depression items were divided into 2 halves based on content and on simulations with an established PROMIS response data set. A total of 129 participants were recruited from an outpatient public sector mental health clinic in Memphis. All participants took both nonoverlapping halves of the PROMIS IRT-based depression items (Part A and Part B): once using paper and pencil, and once using either a mobile phone or a tablet. An 8-cell randomization was performed on the technology used, the order of technologies, and the order of PROMIS Parts A and B. Both parts were administered as fixed-length assessments and scored using published PROMIS IRT parameters and algorithms.

Results
All 129 participants completed either Part A or Part B on paper; they completed the opposite part on a device, 63 using a mobile phone and 66 using a tablet. There was no significant difference in item response scores for Part A versus Part B. All 3 technologies yielded essentially identical assessment results and equivalent satisfaction levels.

Conclusions
Our findings show that the PROMIS depression assessment can be divided into 2 equivalent halves, with the potential to simplify future experimental methodologies. Among community mental health care recipients, the PROMIS items function similarly whether administered via paper, tablet, or mobile phone, and user satisfaction across modalities was also similar. Because all three modalities yielded similar results, the choice of technology should be based on factors such as convenience and can even be changed during a study without adversely affecting the comparability of results.
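The IRT scoring that the abstract refers to can be sketched with a graded response model and expected-a-posteriori (EAP) estimation. The discrimination (a) and threshold (b) parameters below are invented for illustration; PROMIS publishes its own calibrated item parameters, which are what the study actually used.

```python
import numpy as np

# Illustrative graded-response-model (GRM) parameters for a 4-item
# short form with 5 response categories (0-4). Values are made up.
a = np.array([2.0, 1.5, 2.5, 1.8])            # discrimination, one per item
b = np.array([[-1.0, 0.0, 1.0, 2.0],          # 4 thresholds per item
              [-1.5, -0.5, 0.5, 1.5],
              [-0.5, 0.5, 1.5, 2.5],
              [-1.0, 0.2, 1.2, 2.2]])

def grm_category_probs(theta, a_j, b_j):
    """P(response = k | theta) for one item, categories 0..len(b_j)."""
    p_ge = 1.0 / (1.0 + np.exp(-a_j * (theta - b_j)))   # P(X >= k), k=1..K
    p_ge = np.concatenate(([1.0], p_ge, [0.0]))
    return p_ge[:-1] - p_ge[1:]

def eap_score(responses, a, b, grid=np.linspace(-4, 4, 81)):
    """Expected-a-posteriori theta under a standard normal prior,
    evaluated on a quadrature grid, then mapped to the T-score
    metric (mean 50, SD 10) that PROMIS reports."""
    prior = np.exp(-grid**2 / 2)
    like = np.ones_like(grid)
    for j, r in enumerate(responses):
        like *= np.array([grm_category_probs(t, a[j], b[j])[r] for t in grid])
    post = prior * like
    theta = (grid * post).sum() / post.sum()
    return 50 + 10 * theta

print(eap_score([0, 0, 0, 0], a, b))  # all-lowest responses -> T below 50
print(eap_score([4, 4, 4, 4], a, b))  # all-highest responses -> T above 50
```

Because scoring depends only on the item parameters and the responses, two nonoverlapping halves calibrated on the same metric can yield comparable T-scores, which is what makes the split-half design in the study workable.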