2022
DOI: 10.1109/access.2022.3211334

A Multi-Headed Transformer Approach for Predicting the Patient’s Clinical Time-Series Variables From Charted Vital Signs

Abstract: Deep learning has progressively been the spotlight of innovations that aim to leverage the clinical time-series data longitudinally recorded in Electronic Health Records (EHRs) to forecast a patient's survival and vital-sign deterioration. However, the velocity at which these data are recorded, as well as their noisiness, hinders the proper adoption of recently proposed benchmarks. Recurrent Neural Networks (RNNs), especially Long Short-Term Memory (LSTM) networks, have achieved better results in recent studies, but t…
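
As a rough illustration of the general technique named in the title, the sketch below shows a multi-head transformer encoder that forecasts the next values of clinical time-series variables from a window of charted vital signs. This is a minimal sketch, not the authors' architecture: the layer sizes, the learned positional embedding, the maximum sequence length, and the next-step forecasting head are all illustrative assumptions.

```python
# Minimal sketch: multi-head transformer encoder forecasting next-step vitals.
# NOT the paper's architecture; all dimensions and the head are assumptions.
import torch
import torch.nn as nn

class VitalSignForecaster(nn.Module):
    def __init__(self, n_vitals=6, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(n_vitals, d_model)       # embed each time step
        self.pos = nn.Parameter(torch.zeros(1, 512, d_model))  # learned positions (max length 512, assumed)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_vitals)             # predict next-step vitals

    def forward(self, x):                  # x: (batch, time, n_vitals)
        h = self.input_proj(x) + self.pos[:, :x.size(1)]
        h = self.encoder(h)                # multi-head self-attention over time
        return self.head(h[:, -1])         # forecast the next observation

# Usage: a batch of 8 patients, 48 hourly observations of 6 vital signs.
model = VitalSignForecaster()
x = torch.randn(8, 48, 6)
print(model(x).shape)                      # torch.Size([8, 6])
```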

Cited by 5 publications (2 citation statements)
References 38 publications
“…Extending from a single prediction point, this work uses LSTM to model irregularly timed time-series data and predict multistep future trajectories on diabetes and mental health cohorts [74]. In recent years, researchers have also turned to transformer models for prediction tasks [75]–[77], as they capture more long-range dependencies and provide more interpretability of model weights than RNNs. In this work, self-attention is used to measure each feature's relevance to the others, as well as each time point's relevance within each feature, to predict delirium in critical care [78]; another paper uses similar mechanisms to predict cardiac patient mortality risk [79].…”
Section: Applications (mentioning)
confidence: 99%
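
The statement above describes self-attention applied along two axes: feature-to-feature relevance at each time step, and time-point relevance within each feature. Below is a minimal sketch of that idea, assuming one embedding per (time, feature) cell and two stacked attention passes; it is an illustrative reading of the description, not the implementation of [78].

```python
# Minimal sketch of two-axis self-attention over clinical time series.
# Assumed reading of the cited design; shapes and sizes are illustrative.
import torch
import torch.nn as nn

class TwoAxisAttention(nn.Module):
    def __init__(self, d_model=32, n_heads=4):
        super().__init__()
        self.feature_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.time_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        # x: (batch, time, features, d_model) -- one embedding per cell
        b, t, f, d = x.shape
        # Feature axis: at each time step, features attend to each other.
        xf = x.reshape(b * t, f, d)
        xf, _ = self.feature_attn(xf, xf, xf)
        x = xf.reshape(b, t, f, d)
        # Time axis: within each feature, time points attend to each other.
        xt = x.transpose(1, 2).reshape(b * f, t, d)
        xt, _ = self.time_attn(xt, xt, xt)
        return xt.reshape(b, f, t, d).transpose(1, 2)

# Usage: 4 patients, 24 time points, 6 vital-sign features, 32-dim embeddings.
attn = TwoAxisAttention()
out = attn(torch.randn(4, 24, 6, 32))
print(out.shape)  # torch.Size([4, 24, 6, 32])
```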
“…These adaptations, as explored by Na et al. (2023), underscore the ongoing efforts to refine ESNs for better performance. The incorporation of attention mechanisms and transformers, as employed in the multi-headed transformer approach by Harerimana et al. (2022), represents another pivotal trend. These methods, rooted in natural language processing, have shown promising results in capturing long-term dependencies and enhancing the interpretability of predictions, marking a significant leap forward in the analysis of clinical and multivariate time series.…”
(mentioning)
confidence: 99%
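
The second statement contrasts Echo State Networks (ESNs) with attention-based models. For reference, here is a minimal NumPy sketch of the leaky ESN reservoir update; the reservoir size, spectral radius, and leak rate are illustrative assumptions, not values from the cited works.

```python
# Minimal sketch of a leaky Echo State Network reservoir update.
# Only the linear readout of an ESN is trained; the reservoir stays fixed.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 6, 200            # 6 input vitals, 200 reservoir units (assumed)
leak = 0.3                      # leaking rate (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / abs(np.linalg.eigvals(W)).max()   # rescale spectral radius below 1

def step(h, u):
    """Leaky-integrator reservoir state update."""
    return (1 - leak) * h + leak * np.tanh(W_in @ u + W @ h)

h = np.zeros(n_res)
for u in rng.standard_normal((48, n_in)):    # drive with 48 observations
    h = step(h, u)
print(h.shape)                               # (200,)
```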