2021
DOI: 10.3390/healthcare9080992

Forecasting Teleconsultation Demand Using an Ensemble CNN Attention-Based BILSTM Model with Additional Variables

Abstract: To enhance the forecasting accuracy of daily teleconsultation demand, this study proposes an ensemble hybrid deep learning model. The proposed ensemble CNN attention-based BILSTM model (ECA-BILSTM) combines shallow convolutional neural networks (CNNs), attention mechanisms, and bidirectional long short-term memory (BILSTM). Moreover, additional variables are selected according to the characteristics of teleconsultation demand and added to the inputs of forecasting models. To verify the superiority of ECA-BILSTM…
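The abstract gives no implementation details, so the following is only a minimal sketch of a single base learner in the spirit of ECA-BILSTM (shallow CNN, then BiLSTM, then attention, then a dense forecast head), assuming a Keras/TensorFlow implementation. The look-back window, layer sizes, and number of additional variables are illustrative assumptions, and the ensemble step (combining the forecasts of several such learners, e.g. by averaging) is omitted.

```python
# Minimal sketch of one ECA-BILSTM-style base learner (assumptions, not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 14        # look-back window of daily demand (assumed)
N_EXTRA = 3        # additional variables per day, e.g. calendar features (assumed)

inputs = layers.Input(shape=(WINDOW, 1 + N_EXTRA))   # demand series plus extra variables

# Shallow CNN: a single Conv1D block to extract local temporal patterns
x = layers.Conv1D(filters=32, kernel_size=3, padding="same", activation="relu")(inputs)

# Bidirectional LSTM over the convolved sequence
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

# Simple additive attention: score each time step, normalize, take a weighted sum
scores = layers.Dense(1, activation="tanh")(x)        # (batch, WINDOW, 1)
weights = layers.Softmax(axis=1)(scores)              # attention weights over time steps
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1)      # attention-weighted sum -> (batch, 128)
)([x, weights])

outputs = layers.Dense(1)(context)                    # next-day demand forecast
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```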

Cited by 5 publications (2 citation statements)
References 30 publications
“…1 b). The BiLSTM layer processes sequence information in both forward and backward directions [46], essential for understanding context and progression in temporal data, like video frames or time-series sensor data. The final output goes through a softmax layer (or another suitable activation function) to categorize the video into one of four classifications: /A/, /B/, /C/, or /D/.…”
Section: Methods
confidence: 99%
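As a concrete illustration of the BiLSTM-plus-softmax classifier head this citing statement describes, here is a minimal Keras sketch. The input shape, LSTM unit count, and the four class slots standing in for /A/–/D/ are placeholder assumptions, not the citing work's configuration.

```python
# Illustrative BiLSTM + softmax sequence classifier (assumed Keras implementation).
from tensorflow.keras import layers, models

T, F = 60, 128          # frames per clip and features per frame (assumed)

clf = models.Sequential([
    layers.Input(shape=(T, F)),
    layers.Bidirectional(layers.LSTM(64)),   # reads the sequence forward and backward
    layers.Dense(4, activation="softmax"),   # one probability per class /A/, /B/, /C/, /D/
])
clf.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
```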
“…The feature outputs obtained by the two deep neural networks are weighted with the attention mechanism, respectively. Within the length of the statement window m, the new text feature vector u can be calculated as follows [51]:…”
Section: Attention Layer
confidence: 99%
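The equation from reference [51] is not reproduced in the snippet above. Purely as a hedged illustration, the sketch below shows one common additive-attention formulation for pooling per-step features h_1..h_m inside a window of length m into a single vector u; the weight matrices, dimensions, and function names are assumptions, not the cited paper's definitions.

```python
# Generic additive-attention pooling over a window of m feature vectors (illustrative only).
import numpy as np

def attention_pool(H, W, b, v):
    """H: (m, d) per-step features; returns (u, alpha) where u is the pooled (d,) vector."""
    scores = np.tanh(H @ W + b) @ v      # (m,) unnormalized attention scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # softmax over the m window positions
    u = alpha @ H                        # attention-weighted sum of the features
    return u, alpha

m, d = 10, 64                            # window length and feature size (assumed)
H = np.random.randn(m, d).astype(np.float32)
W = (np.random.randn(d, d) * 0.1).astype(np.float32)
b = np.zeros(d, dtype=np.float32)
v = (np.random.randn(d) * 0.1).astype(np.float32)

u, alpha = attention_pool(H, W, b, v)
print(u.shape, round(float(alpha.sum()), 3))   # (64,) 1.0
```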