Background: The African Surgical Outcomes Study (ASOS) showed that surgical patients in Africa have a mortality twice the global average. Existing risk assessment tools are not valid for use in this population because the pattern of risk for poor outcomes differs from high-income countries. The objective of this study was to derive and validate a simple, preoperative risk stratification tool to identify African surgical patients at risk for in-hospital postoperative mortality and severe complications. Methods: ASOS was a 7-day prospective cohort study of adult patients undergoing surgery in Africa. The ASOS Surgical Risk Calculator was constructed with a multivariable logistic regression model for the outcome of in-hospital mortality and severe postoperative complications. The following preoperative risk factors were entered into the model: age, sex, smoking status, ASA physical status, preoperative chronic comorbid conditions, indication for surgery, urgency, severity, and type of surgery. Results: The model was derived from 8799 patients from 168 African hospitals. The composite outcome of severe postoperative complications and death occurred in 423/8799 (4.8%) patients. The ASOS Surgical Risk Calculator includes the following risk factors: age, ASA physical status, indication for surgery, urgency, severity, and type of surgery. The model showed good discrimination, with an area under the receiver operating characteristic curve of 0.805 and a c-statistic corrected for optimism of 0.784. Conclusions: This simple preoperative risk calculator could be used to identify high-risk surgical patients in African hospitals and facilitate increased postoperative surveillance. Clinical trial registration: NCT03044899.
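A preoperative risk calculator of this kind reduces to a logistic regression: each risk factor contributes a weighted term to a linear predictor, which the logistic function maps to a probability of the composite outcome. The sketch below illustrates the mechanics only; the coefficients and the four predictors shown are hypothetical placeholders, not the weights of the published ASOS Surgical Risk Calculator.

```python
import math

# Hypothetical coefficients for illustration only -- the published
# calculator's actual weights are not reproduced here.
COEFS = {
    "intercept": -5.0,
    "age_per_decade": 0.25,
    "asa_per_class": 0.60,
    "urgent_surgery": 0.80,
    "major_surgery": 0.70,
}

def predicted_risk(age, asa_class, urgent, major):
    """Predicted probability of the composite outcome (severe
    complications or in-hospital death) from a logistic model:
    p = 1 / (1 + exp(-x)), where x is the weighted sum of predictors."""
    x = (COEFS["intercept"]
         + COEFS["age_per_decade"] * (age / 10)
         + COEFS["asa_per_class"] * asa_class
         + COEFS["urgent_surgery"] * (1 if urgent else 0)
         + COEFS["major_surgery"] * (1 if major else 0))
    return 1 / (1 + math.exp(-x))
```

Because the model is additive on the log-odds scale, a bedside tool only needs the patient's handful of predictor values to produce a risk estimate, which is what makes the calculator usable preoperatively without laboratory data.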
Speech is a critical biomarker for Huntington's disease (HD), with changes in speech increasing in severity as the disease progresses. Speech analyses are currently conducted using either transcriptions created manually by trained professionals or global rating scales. Manual transcription is both expensive and time-consuming, and global rating scales may lack sufficient sensitivity and fidelity [1]. Ultimately, what is needed is an unobtrusive measure that can cheaply and continuously track disease progression. We present first steps towards the development of such a system, demonstrating the ability to automatically differentiate between healthy controls and individuals with HD using speech cues. The results provide evidence that objective analyses can be used to support clinical diagnoses, moving towards the tracking of symptomatology outside of laboratory and clinical environments.
This paper shows that extraction and analysis of various acoustic features from speech using mobile devices can allow the detection of patterns that could be indicative of neurological trauma. This may pave the way for new types of biomarkers and diagnostic tools. Toward this end, we created a mobile application designed to diagnose mild traumatic brain injuries (mTBI) such as concussions. Using this application, data were collected from youth athletes at 47 high schools and colleges in the Midwestern United States. In this paper, we focus on the design of a methodology to collect speech data, the extraction of various temporal and frequency metrics from that data, and the statistical analysis of these metrics to find patterns that are indicative of a concussion. Our results suggest a strong correlation between certain temporal and frequency features and the likelihood of a concussion.
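Temporal and frequency metrics of the kind described above are typically computed per short analysis frame. The sketch below shows two of the simplest such cues, short-time energy (temporal) and zero-crossing rate (a coarse frequency proxy); it is illustrative only and is not the feature set used in the study.

```python
import numpy as np

def frame_features(signal, sr, frame_ms=25, hop_ms=10):
    """Compute per-frame short-time energy and zero-crossing rate.

    Illustrative temporal/frequency cues only, not the study's
    actual feature set. Returns a list of (energy, zcr) tuples,
    one per 25 ms frame with a 10 ms hop.
    """
    n = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    feats = []
    for start in range(0, len(signal) - n + 1, hop):
        frame = signal[start:start + n]
        energy = float(np.mean(frame ** 2))
        # Each sign change contributes |diff| = 2, so divide by 2
        # to get crossings per sample.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame)))) / 2)
        feats.append((energy, zcr))
    return feats
```

Frame-level features like these can then be aggregated per recording (means, variances, pause statistics) before statistical comparison between concussed and non-concussed groups.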
Robust speech recognition is a key prerequisite for semantic feature extraction in automatic aphasic speech analysis. However, standard one-size-fits-all automatic speech recognition models perform poorly when applied to aphasic speech. One reason for this is the wide range of speech intelligibility due to different levels of severity (i.e., higher severity lends itself to less intelligible speech). To address this, we propose a novel acoustic model based on a mixture of experts (MoE), which handles the varying intelligibility stages present in aphasic speech by explicitly defining severity-based experts. At test time, the contribution of each expert is decided by estimating speech intelligibility with a speech intelligibility detector (SID). We show that our proposed approach significantly reduces phone error rates across all severity stages in aphasic speech compared to a baseline approach that does not incorporate severity information into the modeling process.
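The combination step described above can be sketched as follows: each severity-specific expert produces phone posteriors for the input frames, and the SID's estimated distribution over severity levels weights the experts' outputs into a single mixture. Everything below is a stand-in (random linear "experts", a fixed SID output) meant only to show the weighting logic, not the paper's trained models.

```python
import numpy as np

N_PHONES = 40  # hypothetical phone inventory size

def expert_posteriors(features, severity):
    """Stand-in for one severity-specific expert acoustic model:
    a fixed (seeded) random linear layer plus a softmax over phones.
    Real experts would be neural acoustic models trained on speech
    of the corresponding severity stage."""
    w = np.random.default_rng(severity).normal(
        size=(features.shape[-1], N_PHONES))
    logits = features @ w
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sid_weights(features):
    """Stand-in speech intelligibility detector (SID): returns a
    probability distribution over severity levels, e.g.
    (mild, moderate, severe). Hypothetical fixed output."""
    return np.array([0.2, 0.5, 0.3])

def moe_posteriors(features):
    """Mixture-of-experts phone posteriors: each expert's output is
    weighted by the SID's estimate of the speaker's severity stage."""
    weights = sid_weights(features)
    return sum(w * expert_posteriors(features, s)
               for s, w in enumerate(weights))
```

Because the SID weights sum to one and each expert emits a valid distribution per frame, the mixture is itself a valid phone posterior, which is what lets the MoE drop into a standard decoding pipeline.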