Background: Effectively identifying patients with COVID-19 using non-polymerase chain reaction (non-PCR) biomedical data is critical for achieving optimal clinical outcomes. Currently, there is a lack of comprehensive understanding of the various biomedical features and the analytical approaches appropriate for enabling the early detection and effective diagnosis of patients with COVID-19.

Objective: We aimed to combine low-dimensional clinical and lab testing data with high-dimensional computed tomography (CT) imaging data to accurately differentiate between healthy individuals, patients with COVID-19, and patients with non-COVID viral pneumonia, especially at the early stage of infection.

Methods: In this study, we recruited 214 patients with nonsevere COVID-19, 148 patients with severe COVID-19, 198 noninfected healthy participants, and 129 patients with non-COVID viral pneumonia. The participants' clinical information (ie, 23 features), lab testing results (ie, 10 features), and CT scans upon admission were acquired and used as 3 input feature modalities. To enable the late fusion of multimodal features, we constructed a deep learning model to extract a 10-feature high-level representation of the CT scans. We then developed 3 machine learning models (ie, k-nearest neighbor, random forest, and support vector machine models) based on the combined 43 features from all 3 modalities to differentiate between the following 4 classes: nonsevere, severe, healthy, and viral pneumonia.

Results: Multimodal features provided a substantial performance gain over any single feature modality. All 3 machine learning models had high overall prediction accuracy (95.4%-97.7%) and high class-specific prediction accuracy (90.6%-99.9%).

Conclusions: Compared to existing binary classification benchmarks, which often focus on a single feature modality, this study's hybrid deep learning-machine learning framework provided a novel and effective breakthrough for clinical applications. Our findings, which come from a relatively large sample size, and our analytical workflow can supplement and assist with clinical decision support for current COVID-19 diagnostic methods and for other clinical applications with high-dimensional multimodal biomedical features.
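The late-fusion design described above can be sketched as follows. This is a minimal illustration, not the authors' code: the clinical, lab, and CT-embedding feature matrices are random stand-ins (with the study's dimensions of 23, 10, and 10 features and 689 total participants), concatenated into the combined 43-feature input on which the 3 classifiers are trained; scikit-learn is assumed.

```python
# Late fusion sketch: concatenate per-modality features, then train
# kNN, random forest, and SVM classifiers on the combined matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 689  # 214 nonsevere + 148 severe + 198 healthy + 129 viral pneumonia
clinical = rng.normal(size=(n, 23))  # stand-in for 23 clinical features
lab = rng.normal(size=(n, 10))       # stand-in for 10 lab testing features
ct_repr = rng.normal(size=(n, 10))   # stand-in for the CT deep learning embedding

# Late fusion: 23 + 10 + 10 = 43 combined features per participant
X = np.hstack([clinical, lab, ct_repr])
y = rng.integers(0, 4, size=n)       # 4 classes: NS, S, H, V (random labels here)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (KNeighborsClassifier(),
              RandomForestClassifier(random_state=0),
              SVC()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```

With random labels the accuracies here are near chance (about 25%); the point is only the shape of the pipeline, in which the CT modality enters as a learned low-dimensional representation rather than as raw images.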
Today's digital health revolution aims to improve the efficiency of healthcare delivery and make care more personalized and timely. Sources of data for digital health tools include multiple modalities such as electronic medical records (EMR), radiology images, and genetic repositories, to name a few. While these data have historically been utilized in silos, new machine learning (ML) and deep learning (DL) technologies enable the integration of these data sources to produce multimodal insights. Data fusion, which integrates data from multiple modalities using ML and DL techniques, has attracted growing interest for its applications in medicine. In this paper, we review state-of-the-art research on how the latest techniques in data fusion are providing scientific and clinical insights specific to the field of cardiovascular medicine. With these new data fusion capabilities, clinicians and researchers alike will advance the diagnosis and treatment of cardiovascular diseases (CVD) to deliver more timely, accurate, and precise patient care.
Background: Social media has become a major resource for observing and understanding public opinions using infodemiology and infoveillance methods, especially during emergencies such as disease outbreaks. For public health agencies, understanding the driving forces of web-based discussions will help deliver more effective and efficient information to general users on social media and the web.

Objective: The study aimed to identify the major contributors that drove overall Zika-related tweeting dynamics during the 2016 epidemic. In total, 3 hypothetical drivers were proposed: (1) the underlying Zika epidemic, quantified as a time series of case counts; (2) sporadic but critical real-world events, such as the 2016 Rio Olympics and the World Health Organization's Public Health Emergency of International Concern (PHEIC) announcement; and (3) a few influential users' tweeting activities.

Methods: All tweets and retweets (RTs) containing the keyword Zika posted in 2016 were collected via the Gnip application programming interface (API). We developed an analytical pipeline, EventPeriscope, to identify trending events that co-occurred with Zika and to quantify the strength of these events. We also retrieved Zika case data and identified the top influencers of the Zika discussion on Twitter. The influence of the 3 potential drivers was examined via multivariate time series analysis, signal processing, content analysis, and text mining techniques.

Results: Zika-related tweeting dynamics were not significantly correlated with the underlying Zika epidemic in the United States in any of the 4 quarters of 2016 or in the year as a whole. Instead, peaks of Zika-related tweeting activity were strongly associated with a few critical real-world events, both planned, such as the Rio Olympics, and unplanned, such as the PHEIC announcement. The Rio Olympics was mentioned in >15% of all Zika-related tweets around its peak, and the PHEIC announcement in 27% around its peak. In addition, the tweeting dynamics of the top 100 most actively tweeting users on the Zika topic, the top 100 users receiving the most RTs, and the top 100 users mentioned most often were the most highly correlated with, and preceded, the overall tweeting dynamics, making these groups of users the potential drivers of tweeting dynamics. The top 100 users who retweeted the most were not critical in driving the overall tweeting dynamics. There was very little overlap among these different groups of potentially influential users.

Conclusions: Using our proposed analytical workflow, EventPeriscope, we found that Zika discussion dynamics on Twitter were decoupled from the actual disease epidemic in the United States but were closely related to and highly influenced by certain sporadic real-world events as well as by a few influential users. This study provides a methodological framework and insights to better understand the driving forces of web-based public discourse during health emergencies, so that health agencies can deliver more effective and efficient web-based communications in emerging crises.
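The claim that a driver series is "correlated with and preceded" the overall series rests on lagged correlation between two daily time series. The following is an illustrative sketch of that idea, not the authors' EventPeriscope code; the function name `lagged_corr`, the synthetic series, and the 2-day lag are all invented for the example, and only NumPy is assumed.

```python
# Lagged cross-correlation sketch: test whether a candidate driver series
# (e.g. top users' daily tweet counts) leads the overall tweet volume.
import numpy as np

def lagged_corr(driver, overall, max_lag=7):
    """Pearson correlation of driver[t] with overall[t + lag] for each lag (days).

    A high correlation at a positive lag suggests the driver series precedes
    (and may drive) the overall series.
    """
    out = {}
    for lag in range(max_lag + 1):
        a = driver if lag == 0 else driver[:-lag]
        b = overall if lag == 0 else overall[lag:]
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

rng = np.random.default_rng(1)
# Synthetic data: overall volume follows the driver with a 2-day delay.
driver = rng.poisson(20, size=366).astype(float)          # hypothetical top-user activity
overall = 5 * np.roll(driver, 2) + rng.normal(0, 1, 366)  # overall volume, lagged by 2 days
corrs = lagged_corr(driver, overall)
best_lag = max(corrs, key=corrs.get)
print(best_lag, round(corrs[best_lag], 3))
```

On this synthetic data the correlation peaks at a lag of 2 days, recovering the built-in delay; with real tweet counts, a peak at a positive lag for the top-user series (and no peak for the case-count series) would mirror the study's finding that influential users, not the epidemic itself, led the overall dynamics.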