Asghar et al. [26] proposed a multi-modal emotion recognition approach that uses AlexNet to extract time- and frequency-domain features and a bag of deep features (BoDF) for feature reduction. Researchers have also proposed many other deep architectures for emotion recognition from EEG signals: an attention-based convolutional recurrent neural network (ACRNN) [27], a channel-fused dense convolutional network (CDCN) [28], ECLGCNN [29], a fusion of long short-term memory (LSTM) networks and a graph convolutional neural network (GCNN), cascaded and parallel hybrid convolutional recurrent neural networks [30], dynamical graph convolutional neural networks (DGCNN) [31], a spatial-temporal recurrent neural network (STRNN) [32], a deep convolutional neural network [33], a four-dimensional convolutional recurrent neural network (4D-CRNN) [34], a graph convolutional broad network (GCB-net) combined with a broad learning system (BLS) [35], a regularized graph neural network (RGNN) [36], a combination of a convolutional neural network (CNN) and a deep neural network (DNN) [37], spiking neural networks (SNNs) [38], optimized residual networks (ResNet) [39], combined deep neural network (DNN) models [40], [41], and a CNN-BiLSTM-MHSA model [42] consisting of a CNN, a bidirectional long short-term memory network (BiLSTM), and multi-head self-attention (MHSA). Identifying EEG features that are effective for emotion recognition is therefore very important.
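To make the multi-head self-attention (MHSA) component named in the CNN-BiLSTM-MHSA model concrete, the following is a minimal NumPy sketch of one MHSA pass over a sequence of per-timestep EEG feature vectors. It is illustrative only, not the implementation from [42]: the random matrices stand in for learned projections, and the sequence length and feature dimension are arbitrary assumptions.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng):
    """Minimal multi-head self-attention over a feature sequence.

    x: (seq_len, d_model) array, e.g. one EEG feature vector per time step.
    Random matrices stand in for learned Q/K/V projections (illustration only).
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                     for _ in range(3))
    # Project, then split the model dimension into independent heads.
    q = (x @ w_q).reshape(seq_len, num_heads, d_head)
    k = (x @ w_k).reshape(seq_len, num_heads, d_head)
    v = (x @ w_v).reshape(seq_len, num_heads, d_head)
    heads = []
    for h in range(num_heads):
        # Scaled dot-product attention within each head.
        scores = q[:, h] @ k[:, h].T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax rows sum to 1
        heads.append(weights @ v[:, h])
    # Concatenate head outputs back to the model dimension.
    return np.concatenate(heads, axis=-1)  # (seq_len, d_model)

rng = np.random.default_rng(0)
feats = rng.standard_normal((128, 64))  # assumed: 128 time steps, 64 features
out = multi_head_self_attention(feats, num_heads=8, rng=rng)
print(out.shape)  # (128, 64)
```

In the full model, such an attention block would sit after the CNN and BiLSTM stages, letting the classifier weight the time steps that matter most for the emotion decision.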