In vocal performance, capturing psychological changes through electroencephalography (EEG) and integrating them with music can improve musical expression. We propose an EEG-based emotion recognition model using a continuous convolutional neural network (CNN). EEG signals in different frequency bands are extracted as features using differential entropy, and an EEG enhancement method reassigns importance across channels by suppressing redundant information. The model is evaluated for emotion recognition on the DEAP dataset, where incorporating the resting-state baseline signal recorded before each trial significantly improves accuracy. In the experiments, the proposed continuous CNN achieves average classification accuracies of 95.36% for arousal and 95.31% for valence on 22 channels, close to the average accuracy obtained with all 32 channels. The model can thus help capture the psychological changes of vocal performers and improve musical expression.
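As a rough illustration of the feature pipeline described above, the sketch below computes per-channel differential entropy in several frequency bands and subtracts the pre-trial baseline features. The band boundaries, the 128 Hz sampling rate (DEAP's preprocessed rate), and all function names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Frequency bands commonly used in DEAP-based studies (assumed here).
BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 45)}


def differential_entropy(x):
    """DE of a signal modeled as Gaussian: 0.5 * log(2*pi*e*var(x))."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))


def extract_de_features(eeg, fs=128, bands=BANDS, order=4):
    """Map an EEG segment of shape (channels, samples) to DE features
    of shape (channels, n_bands), one value per channel and band."""
    feats = np.empty((eeg.shape[0], len(bands)))
    for j, (lo, hi) in enumerate(bands.values()):
        # Band-pass filter each channel, then take DE of the result.
        b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, eeg, axis=1)
        feats[:, j] = [differential_entropy(ch) for ch in filtered]
    return feats


def baseline_corrected_de(trial, baseline, fs=128):
    """Subtract DE of the pre-trial resting segment from the trial's DE,
    one plausible form of the baseline correction the abstract mentions."""
    return extract_de_features(trial, fs) - extract_de_features(baseline, fs)
```

The baseline subtraction here is one straightforward reading of "combining a rest-state baseline signal before each trial"; the paper's exact combination scheme may differ.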