Deep convolutional neural networks (CNNs) have achieved remarkable results in end-to-end learning for computer vision tasks. Here we evaluate the power of a deep CNN to learn robust features from raw electroencephalogram (EEG) data for seizure detection. Seizures are hard to detect because they vary both across and within patients. In this article, we apply a deep CNN model to the seizure detection task on an open-access EEG epilepsy dataset collected at Boston Children's Hospital. Our deep learning model extracts spectral and temporal features from the EEG epilepsy data and uses them to learn a general representation of a seizure that is less sensitive to such variations. For cross-patient EEG data, our method produced an overall sensitivity of 90.00%, specificity of 91.65%, and overall accuracy of 98.05% across the whole dataset of 23 patients. The system can detect seizures with an accuracy of 99.46% and can thus serve as an excellent cross-patient seizure classifier. The results show that our model outperforms previous state-of-the-art models for both patient-specific and cross-patient seizure detection, giving an overall accuracy of 99.65% on patient-specific data. The system can also visualize the spatial orientation of band-power features: we use correlation maps to relate spectral amplitude features to the output in the form of images. Combined with the results from our deep learning model, this visualization method can serve as an effective multimedia tool for producing quick and relevant brain-mapping images for further investigation by medical experts.
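The temporal feature extraction the abstract describes can be sketched as a single 1-D convolution stage over a raw EEG window. This is a minimal illustration, not the authors' architecture; the filter weights are random placeholders where a trained network would have learned values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-channel raw EEG window: 256 samples (e.g. 1 s at 256 Hz).
eeg = rng.standard_normal(256)

# A small bank of 1-D filters (random here; learned during training).
n_filters, kernel = 8, 16
filters = rng.standard_normal((n_filters, kernel))

# One temporal feature-extraction stage: convolution + ReLU + max-pooling.
feature_maps = np.array([np.convolve(eeg, f, mode="valid") for f in filters])
activated = np.maximum(feature_maps, 0.0)                  # ReLU nonlinearity
pool = 4
trimmed = activated[:, : activated.shape[1] // pool * pool]
pooled = trimmed.reshape(n_filters, -1, pool).max(axis=2)  # max-pool, width 4

print(feature_maps.shape)  # (8, 241): one map per filter
print(pooled.shape)        # (8, 60): downsampled temporal features
```

A full model would stack several such stages and end in a classifier that outputs a seizure/non-seizure decision per window.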
Deep learning methods, such as convolutional neural networks (CNNs), have achieved remarkable success in computer vision tasks; hence, an increasing trend toward using deep learning for electroencephalogram (EEG) analysis is evident. Extracting relevant information from CNN features is one of the key reasons behind the success of CNN-based deep learning models. Some CNN models use convolutional features from different layers to good effect; however, the extraction and fusion of multilevel convolutional features remain unexplored for EEG applications. Moreover, cognitive computing and artificial intelligence are finding increasing application in all fields. Cognitive processing rests on understanding human brain cognition through signals such as EEG, so deep learning can aid the development of cognitive systems and related applications by improving EEG decoding. The classification and recognition of EEG have consistently been challenging because of its dynamic time-series nature and low signal-to-noise ratio. However, the information hidden in different convolutional layers can improve feature discrimination capability. In this paper, we use EEG motor imagery data to uncover the benefits of extracting and fusing multilevel convolutional features from different CNN layers, which are abstract representations of the input at various levels. Our proposed CNN model can learn robust spectral and temporal features from raw EEG data. We demonstrate that such multilevel feature fusion outperforms models that use features only from the last layer, and our results improve on the state of the art for EEG decoding and classification.
INDEX TERMS EEG motor imagery classification, deep learning, convolutional neural network, multilevel feature fusion.
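The multilevel fusion idea can be sketched in a few lines: pool the feature maps of several CNN layers and concatenate them into one descriptor, instead of classifying from the last layer alone. The shapes and values below are simulated placeholders, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated feature maps from three CNN layers, shaped (channels, H, W).
layer1 = rng.standard_normal((16, 32, 32))  # early layer: fine-grained detail
layer2 = rng.standard_normal((32, 16, 16))  # middle layer
layer3 = rng.standard_normal((64, 8, 8))    # deep layer: abstract features

def gap(feature_map):
    """Global average pooling: one scalar per channel."""
    return feature_map.mean(axis=(1, 2))

# Multilevel fusion: pool every layer and concatenate into a single vector.
fused = np.concatenate([gap(layer1), gap(layer2), gap(layer3)])

# Baseline the paper compares against: features from the last layer only.
last_only = gap(layer3)

print(fused.shape)      # (112,): 16 + 32 + 64 channels
print(last_only.shape)  # (64,)
```

The fused vector then feeds a classifier; pooling first makes layers with different spatial sizes directly concatenable.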
An accurate vision system to classify and analyze fruits in real time is critical for harvesting robots to be cost-effective and efficient. However, practical success in this area is still limited, and to the best of our knowledge, there is no research in the area of machine vision for date fruits in an orchard environment. In this work, we propose an efficient machine vision framework for date fruit harvesting robots. The framework consists of three classification models used to classify date fruit images in real time according to their type, maturity, and harvesting decision. In the classification models, deep convolutional neural networks are utilized with transfer learning and fine-tuning on pre-trained models. To build a robust vision system, we create a rich image dataset of date fruit bunches in an orchard that consists of more than 8000 images of five date types in different pre-maturity and maturity stages. The dataset has a large degree of variation that reflects the challenges of the date orchard environment, including variations in angle, scale, and illumination conditions, as well as date bunches covered by bags. The proposed date fruit classification models achieve accuracies of 99.01%, 97.25%, and 98.59% with classification times of 20.6, 20.7, and 35.9 msec for the type, maturity, and harvesting decision classification tasks, respectively.
INDEX TERMS Dates classification, maturity analysis, automated harvesting, deep learning, convolutional neural networks.
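The transfer-learning-with-fine-tuning recipe the abstract names can be sketched as: freeze a pre-trained backbone and train only a new classification head on the target classes. Everything below is a stand-in: the "backbone" is a frozen random projection, the data are synthetic, and the three classes merely mimic maturity stages; a real system would use an ImageNet-trained CNN.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_in, d_feat, n_classes = 60, 100, 32, 3

# "Pre-trained backbone": frozen weights standing in for learned conv features.
W_frozen = rng.standard_normal((d_in, d_feat)) / np.sqrt(d_in)

def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)  # fixed projection + ReLU, never updated

# Toy "images" and labels (three hypothetical maturity stages).
X = rng.standard_normal((n, d_in))
y = rng.integers(0, n_classes, size=n)
feats = backbone(X)  # computed once; the backbone is frozen

# New softmax head, trained from scratch on the target task.
W_head = np.zeros((d_feat, n_classes))

def loss_and_grad(W):
    logits = feats @ W
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(n), y]).mean()          # cross-entropy
    p[np.arange(n), y] -= 1.0
    return loss, feats.T @ p / n                       # gradient w.r.t. head only

start, _ = loss_and_grad(W_head)
for _ in range(300):                                   # gradient descent on the head
    _, g = loss_and_grad(W_head)
    W_head -= 0.2 * g
end, _ = loss_and_grad(W_head)
print(end < start)  # head loss decreases while the backbone stays frozen
```

Fine-tuning, as opposed to pure feature extraction, would additionally unfreeze some backbone layers and update them with a small learning rate after the head has converged.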