“…We set the learning rate LR = 0.001 and the batch size to 256. As with the Encoder-Decoder model, we vary the encoder length over [2, 4, 6, 8, 10, 12] and select the optimal setting by testing each candidate. For ARIMA, MLP, and SVM, we use the previous 6 days of data as input; for the deep learning methods, including DBN, LSTM, and CA-LSTM, we use the default settings from their original papers [2], [22], [23].…”