Recent progress in deep learning is revolutionizing the healthcare domain, including providing solutions for medication recommendation, especially recommending medication combinations for patients with complex health conditions. Existing approaches either do not customize recommendations based on patient health history or ignore existing knowledge of drug-drug interactions (DDI) that may lead to adverse outcomes. To fill this gap, we propose Graph Augmented Memory Networks (GAMENet), which integrates the DDI knowledge graph through a memory module implemented as graph convolutional networks and models longitudinal patient records as the query. The model is trained end-to-end to provide safe and personalized recommendations of medication combinations. We demonstrate the effectiveness and safety of GAMENet by comparing it with several state-of-the-art methods on real EHR data. GAMENet outperformed all baselines on all effectiveness measures and also achieved a 3.60% reduction in DDI rate relative to existing EHR data.
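A minimal PyTorch sketch of the idea described above, not the authors' implementation: a graph convolutional layer embeds the drugs of a DDI graph into a memory bank, a recurrent encoder turns the patient's visit history into a query, and attention over the memory produces per-drug scores. The class names, dimensions, and toy inputs are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNLayer(nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            # symmetric normalization of the adjacency matrix (with self-loops)
            adj = adj + torch.eye(adj.size(0))
            deg_inv_sqrt = adj.sum(-1).clamp(min=1).pow(-0.5)
            norm_adj = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
            return F.relu(self.lin(norm_adj @ x))

    class GraphMemoryRecommender(nn.Module):
        # hypothetical name; stands in for a GAMENet-style model
        def __init__(self, n_drugs, code_dim, hid_dim):
            super().__init__()
            self.drug_emb = nn.Embedding(n_drugs, hid_dim)
            self.visit_rnn = nn.GRU(code_dim, hid_dim, batch_first=True)
            self.gcn = GCNLayer(hid_dim, hid_dim)

        def forward(self, visit_seq, ddi_adj):
            # visit_seq: (1, n_visits, code_dim) multi-hot visit representations
            _, query = self.visit_rnn(visit_seq)               # (1, 1, hid_dim)
            memory = self.gcn(self.drug_emb.weight, ddi_adj)   # (n_drugs, hid_dim)
            scores = memory @ query.squeeze(0).squeeze(0)      # (n_drugs,)
            return torch.sigmoid(scores)                       # per-drug probability

    # toy usage: 3 visits with 100-dim multi-hot codes, 50 candidate drugs
    model = GraphMemoryRecommender(n_drugs=50, code_dim=100, hid_dim=64)
    probs = model(torch.rand(1, 3, 100), torch.eye(50))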
Medication recommendation is an important healthcare application. It is commonly formulated as a temporal prediction task; hence, most existing works utilize only longitudinal electronic health records (EHRs) from a small number of patients with multiple visits, ignoring the large number of patients with a single visit (selection bias). Moreover, important hierarchical knowledge, such as the diagnosis hierarchy, is not leveraged in the representation learning process. To address these challenges, we propose G-BERT, a new model that combines the power of Graph Neural Networks (GNNs) and BERT (Bidirectional Encoder Representations from Transformers) for medical code representation and medication recommendation. We use GNNs to represent the internal hierarchical structures of medical codes, then integrate the GNN representations into a transformer-based visit encoder and pre-train it on EHR data from patients with only a single visit. The pre-trained visit encoder and representations are then fine-tuned for downstream predictive tasks on longitudinal EHRs from patients with multiple visits. G-BERT is the first model to bring the language model pre-training schema into the healthcare domain, and it achieves state-of-the-art performance on the medication recommendation task.
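A minimal sketch of the two ingredients described above, with simplifying assumptions: medical-code embeddings are enriched with their ontology ancestors (a simple mean-pooling stand-in for the paper's GNN), and a Transformer encoder summarizes the codes of one visit. The pre-training objectives are omitted, and all names and sizes are illustrative.

    import torch
    import torch.nn as nn

    class OntologyCodeEmbedding(nn.Module):
        def __init__(self, n_codes, dim, ancestors):
            # ancestors: dict code_id -> list of ancestor ids in the hierarchy
            super().__init__()
            self.emb = nn.Embedding(n_codes, dim)
            self.ancestors = ancestors

        def forward(self, code_ids):
            out = []
            for c in code_ids.tolist():
                ids = torch.tensor([c] + self.ancestors.get(c, []))
                out.append(self.emb(ids).mean(dim=0))   # pool code + ancestors
            return torch.stack(out)                      # (n_codes_in_visit, dim)

    class VisitEncoder(nn.Module):
        def __init__(self, n_codes, dim, ancestors, n_heads=4, n_layers=2):
            super().__init__()
            self.code_emb = OntologyCodeEmbedding(n_codes, dim, ancestors)
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

        def forward(self, code_ids):
            x = self.code_emb(code_ids).unsqueeze(0)     # (1, n_codes, dim)
            h = self.encoder(x)
            return h.mean(dim=1)                         # visit representation

    # toy usage: 3 codes in one visit, a tiny hand-written ancestor map
    ancestors = {0: [3], 1: [3, 4], 2: [4]}
    enc = VisitEncoder(n_codes=5, dim=64, ancestors=ancestors)
    visit_vec = enc(torch.tensor([0, 1, 2]))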
We propose ENCASE, which combines expert features and deep neural networks (DNNs) for ECG classification. We first explore and implement expert features from the statistical, signal processing, and medical domains. Then, we build DNNs to automatically extract deep features. In addition, we propose a new algorithm to find the most representative wave (called the centerwave) in a long ECG record and extract features from the centerwave. Finally, we combine these features and feed them into ensemble classifiers. Experiments on 4-class ECG classification report an F1 score of 0.84, which is much better than any single model.
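A minimal sketch of the combination step, under loud assumptions: the expert features below are a few generic statistical descriptors, and extract_deep_features is a hypothetical placeholder for the paper's DNN-based extractor. It only illustrates concatenating both feature groups and feeding them to an ensemble classifier.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def expert_features(ecg):
        # simple statistical descriptors of the raw signal (illustrative only)
        return np.array([ecg.mean(), ecg.std(), ecg.min(), ecg.max(),
                         np.percentile(ecg, 25), np.percentile(ecg, 75)])

    def extract_deep_features(ecg):
        # placeholder: in the paper this would come from a trained DNN
        return np.histogram(np.diff(ecg), bins=8, density=True)[0]

    def build_feature_matrix(records):
        return np.stack([np.concatenate([expert_features(r),
                                         extract_deep_features(r)])
                         for r in records])

    # toy usage on random signals with 4 classes
    rng = np.random.default_rng(0)
    records = [rng.standard_normal(3000) for _ in range(40)]
    labels = rng.integers(0, 4, size=40)
    clf = GradientBoostingClassifier().fit(build_feature_matrix(records), labels)
    preds = clf.predict(build_feature_matrix(records))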
Objective: We aim to combine deep neural networks and engineered features (hand-crafted features based on medical domain knowledge) for cardiac arrhythmia detection from short single-lead ECG recordings. Approach: We propose a two-stage method for cardiac arrhythmia detection. The first stage is feature extraction and the second stage is classifier building. In the feature extraction stage, we extract both deep features and engineered features. Deep features are obtained by modifying deep neural networks into a deep feature extractor, and engineered features are extracted by summarizing existing approaches into four feature groups. We then propose a feature aggregation approach to combine these features. In the classifier building stage, we build multiple gradient boosting decision trees and combine them to obtain the final detector. Main results: Experiments are performed on the PhysioNet/Computing in Cardiology Challenge 2017 dataset (Clifford et al 2017 Computing in Cardiology vol 44). Using F1 scores reported on the hidden test set as measurements, our method achieved 0.9117 on Normal (F1N), 0.8128 on Atrial Fibrillation (AF) (F1A), 0.7505 on Others (F1O), and 0.5671 on Noise (F1P). It placed 5th in the Challenge and 8th in the follow-up challenge (ranked by the average of Normal, AF, and Others: F1NAO = 0.825). When rounding to two decimal places, we were part of a three-way tie for 1st place and part of a seven-way tie for 2nd place in the follow-up challenge. Further experiments show that combined features perform better than individual features, and deep features receive higher importance scores than other features. Significance: Our method benefits from both feature engineering-based methods and recent deep neural networks. It is flexible and can easily assimilate the abilities of new cardiac arrhythmia detection methods.
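A minimal sketch of the classifier-building stage only, under stated assumptions: the aggregated deep and engineered features are replaced by a synthetic feature matrix, and the "multiple gradient boosting decision trees" are approximated by several scikit-learn GradientBoostingClassifier models whose class probabilities are averaged. It is not the authors' pipeline.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # stand-in for the aggregated (deep + engineered) feature matrix and labels
    X, y = make_classification(n_samples=200, n_features=20, n_classes=4,
                               n_informative=8, random_state=0)

    # train several boosted-tree models with different random seeds
    models = [GradientBoostingClassifier(random_state=seed).fit(X, y)
              for seed in range(3)]

    def ensemble_predict(models, X):
        # average class probabilities over the individual boosted-tree models
        probs = np.mean([m.predict_proba(X) for m in models], axis=0)
        return probs.argmax(axis=1)

    preds = ensemble_predict(models, X)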
Pre-trained language models have achieved state-of-the-art results in various Natural Language Processing (NLP) tasks, and prior work has shown that scaling up pre-trained language models can further exploit their enormous potential. A unified framework named ERNIE 3.0 [2] was recently proposed for pre-training large-scale knowledge-enhanced models and was used to train a model with 10 billion parameters. ERNIE 3.0 outperformed the state-of-the-art models on various NLP tasks. To explore the performance of scaling up ERNIE 3.0, we train a hundred-billion-parameter model called ERNIE 3.0 Titan, with up to 260 billion parameters, on the PaddlePaddle [3] platform. Furthermore, we design a self-supervised adversarial loss and a controllable language modeling loss so that ERNIE 3.0 Titan generates credible and controllable text. To reduce computation overhead and carbon emissions, we propose an online distillation framework for ERNIE 3.0 Titan, in which the teacher model teaches the students and trains itself simultaneously. ERNIE 3.0 Titan is the largest Chinese dense pre-trained model to date. Empirical results show that ERNIE 3.0 Titan outperforms the state-of-the-art models on 68 NLP datasets.
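A minimal sketch of one online distillation step in the spirit described above, not the PaddlePaddle implementation: in the same optimization step, the teacher is trained on the task loss while the student is trained to match the teacher's softened output distribution. The tiny linear models, the temperature, and the equal loss weighting are all illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    teacher = nn.Linear(32, 10)   # stand-in for the large teacher model
    student = nn.Linear(32, 10)   # stand-in for a smaller student model
    opt = torch.optim.Adam(list(teacher.parameters()) + list(student.parameters()))

    def online_distillation_step(x, y, temperature=2.0):
        t_logits = teacher(x)
        s_logits = student(x)
        task_loss = F.cross_entropy(t_logits, y)          # teacher keeps learning
        distill_loss = F.kl_div(                           # student mimics teacher
            F.log_softmax(s_logits / temperature, dim=-1),
            F.softmax(t_logits.detach() / temperature, dim=-1),
            reduction="batchmean") * temperature ** 2
        loss = task_loss + distill_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

    # toy usage
    x = torch.randn(8, 32)
    y = torch.randint(0, 10, (8,))
    online_distillation_step(x, y)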