Robustness and computational load are the key challenges in decoding motor imagery (MI) from electroencephalography (EEG) signals for the development of practical brain-computer interface (BCI) systems. In this study, we propose a robust and simple automated multivariate empirical wavelet transform (MEWT) algorithm for decoding different MI tasks. The main contributions of this study are four-fold. First, the multiscale principal component analysis method is utilized in the preprocessing module to obtain robustness against noise. Second, a novel automated channel selection strategy is proposed and then verified through comprehensive comparisons among three different strategies for selecting decoding channel combinations. Third, a sub-band alignment method utilizing MEWT is adopted to obtain joint instantaneous amplitude and frequency components for the first time in MI applications. Fourth, a robust correlation-based feature selection strategy is applied to greatly reduce system complexity and computational load. Extensive experiments for subject-specific and subject-independent cases are conducted on three benchmark datasets from BCI competition III to evaluate the performance of the proposed method with typical machine-learning classifiers. For the subject-specific case, experimental results show that an average sensitivity, specificity, and classification accuracy of 98% was achieved by employing multilayer perceptron neural network, logistic model tree, and least-squares support vector machine (LS-SVM) classifiers, respectively, for the three datasets, yielding an improvement of up to 23.50% in classification accuracy compared with other existing methods. For the subject-independent case, an average sensitivity, specificity, and classification accuracy of 93%, 92.1%, and 91.4% was achieved by employing the LS-SVM classifier on all datasets, an increase of up to 18.14% relative to other existing methods.
Results also show that the proposed algorithm achieves a classification accuracy of 100% for subjects with a small training size in the subject-specific case, and in the subject-independent case when employing a single source subject. These satisfactory results demonstrate the great potential of the proposed MEWT algorithm for practical MI EEG signal classification.
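The correlation-based feature selection step mentioned above can be sketched as follows. This is a minimal illustration under assumed details (absolute Pearson correlation against the class labels, top-k ranking), not the authors' exact implementation:

```python
import numpy as np

def select_features(X, y, k=10):
    # Standardize each feature column and the labels, then rank features
    # by absolute Pearson correlation with the labels; keep the top-k.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    corr = np.abs(Xs.T @ ys) / len(y)
    return np.argsort(corr)[::-1][:k]
```

Discarding weakly correlated features in this way is what cuts the downstream classifier's input dimensionality, and with it the computational load.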
The incidence of brain tumors (BTs) is rising rapidly across the world, and every year thousands of people die of these deadly tumors. Accurate detection and classification are therefore essential in the treatment of brain tumors. Numerous techniques have been introduced for BT detection and classification based on traditional machine learning (ML) and deep learning (DL). Traditional ML classifiers require hand-crafted features, which is very time-consuming. In contrast, DL is very robust in feature extraction and has recently been widely used for classification and detection purposes. In this work, we therefore propose a hybrid deep learning model called DeepTumorNet for classifying three types of brain tumors (BTs): glioma, meningioma, and pituitary tumor, by adapting a basic convolutional neural network (CNN) architecture. The GoogLeNet CNN architecture was used as the base. While developing the hybrid DeepTumorNet approach, the last 5 layers of GoogLeNet were removed and 15 new layers were added in their place. Furthermore, a leaky ReLU activation function was applied in the feature maps to increase the expressiveness of the model. The proposed model was evaluated on a publicly available research dataset and obtained 99.67% accuracy, 99.6% precision, 100% recall, and a 99.66% F1-score. The proposed methodology obtained the highest accuracy compared with state-of-the-art classification results obtained with AlexNet, ResNet50, DarkNet53, ShuffleNet, GoogLeNet, SqueezeNet, ResNet101, Xception, and MobileNetV2, demonstrating its superiority over existing models for BT classification from MRI images.
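The leaky ReLU activation mentioned above can be illustrated in a few lines; the negative slope of 0.01 is an assumed default, not a value reported in the paper:

```python
import numpy as np

def leaky_relu(x, negative_slope=0.01):
    # Unlike plain ReLU, leaky ReLU keeps a small non-zero output for
    # negative inputs, preserving gradient flow through otherwise "dead" units.
    return np.where(x > 0, x, negative_slope * x)
```

Applied element-wise to a feature map, this lets negative activations carry a small signal instead of being zeroed out.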
Educational data generated through platforms such as e-learning, e-admission, and automated result management systems can be effectively processed with educational data mining techniques to gather highly useful insights into students' performance. Predicting student performance from historical academic data is a highly desirable application of educational data mining, and there is a clear need for automated techniques to do so. Existing studies on student performance prediction primarily rely on conventional feature representation schemes, where extracted features are fed to a classifier. In recent years, deep learning has enabled researchers to extract high-level features from raw data automatically; such advanced feature representations enable superior performance on challenging tasks. In this work, we examine a deep neural network model, the attention-based Bidirectional Long Short-Term Memory (BiLSTM) network, to predict student performance (grades) efficiently from historical data. Early performance prediction of this kind is valuable to academicians, universities, and government departments. The superior sequence learning capability of the BiLSTM combined with an attention mechanism yields better performance than the existing state of the art, achieving a prediction accuracy of 90.16%.
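The attention step over the BiLSTM outputs can be sketched as a softmax-weighted pooling of the hidden states. The single learned scoring vector `w` is an assumption for illustration; the paper's exact attention parameterization is not specified in the abstract:

```python
import numpy as np

def attention_pool(hidden_states, w):
    # hidden_states: (T, H) sequence of BiLSTM outputs; w: (H,) scoring vector.
    scores = hidden_states @ w                    # one score per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over time steps
    return weights @ hidden_states                # (H,) context vector
```

The resulting context vector weights informative time steps (e.g., semesters or courses) more heavily before the final grade classifier.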
Object detection plays a vital role in computer vision, machine learning, and artificial intelligence applications (such as FUSE-AI (e-healthcare MRI scanning), face detection, people counting, and vehicle detection), including identifying good and defective food products. Target detection is a mature area of artificial intelligence, but detecting multiple targets in a single image or video file remains challenging. This article focuses on a modified K-nearest neighbor (MK-NN) algorithm for electronic medical care to realize intelligent medical services and applications. We introduce modifications that improve the efficiency of MK-NN, and a comparative analysis is performed to determine the best fused target detection algorithm in terms of robustness, accuracy, and computational time. The comparison covers four algorithms: MK-NN, traditional K-NN, a convolutional neural network, and backpropagation. Experimental results show that the modified K-NN algorithm is the best model in terms of robustness, accuracy, and computational time.
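As a point of reference for the comparison above, the traditional K-NN classifier can be sketched as follows. Euclidean distance and majority voting are assumed; the abstract does not specify MK-NN's modifications, so they are not reproduced here:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Classic K-NN: compute Euclidean distances to every training point,
    # then take a majority vote among the k nearest neighbors.
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]
```

Because every prediction scans the full training set, computational time is the natural axis on which a modified variant can improve.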
Recent advancements in green building technologies (GBTs) have grown substantially as an outcome of their environmental, economic, and societal benefits, and they have the potential to advance sustainable development, specifically with respect to climate change. The main objective of GBTs is to use energy, water, and other resources in a balanced way rather than extensively, thereby improving environmental conditions. Green buildings (GBs) are beneficial in terms of energy consumption and emissions, low maintenance and operation costs, improved health and productivity, and more. However, there is a lack of critical review of past and present research in the green building technology (GBT) sector that identifies a future roadmap for sustainable green building technologies. This study therefore conducts a critical review, following a proper research methodology. Its scope is to analyze existing work and identify key issues in green building research: buildings that make minimal use of natural resources, are cost-effective, and are designed and constructed to last, with future prospects in mind. The paper examines the current state of green building construction, makes recommendations for the further study and development necessary for a sustainable future, and, to encourage research, identifies several possible future research directions in sustainable development.
Deep neural networks have offered numerous innovative solutions to brain-related diseases, including Alzheimer's. However, there are still aspects of diagnosis and planning that can be transformed via quantum machine learning (QML). In this study, we present a hybrid classical-quantum machine learning model for the detection of Alzheimer's using 6400 labeled MRI scans with two classes. Hybrid classical-quantum transfer learning is used, which makes it possible to optimally pre-process complex and high-dimensional data: a classical neural network extracts high-dimensional features and embeds informative feature vectors into a quantum processor. We use ResNet34 to extract features from the images and feed the resulting 512-dimensional feature vector to our quantum variational circuit (QVC), which produces a four-dimensional feature vector for precise decision boundaries. The Adam optimizer is used to exploit an adaptive learning rate for each parameter based on first- and second-order gradients. Furthermore, to validate the model, different quantum simulators (PennyLane, qiskit.aer, and qiskit.basicaer) are used for the detection of demented and non-demented images. With a learning rate of 10⁻⁴ and an optimized quantum depth of six layers, the model reaches a training accuracy of 99.1% and a classification accuracy of 97.2% over 20 epochs. The hybrid classical-quantum network significantly outperformed the classical network, as the classification accuracy achieved by the classical transfer learning model was 92%. Thus, a hybrid transfer-learning model is used for binary detection, in which a quantum circuit improves the performance of a pre-trained ResNet34 architecture. This work therefore offers a method for selecting an optimal approach for detecting Alzheimer's disease; the proposed model not only allows automated detection of Alzheimer's but would also significantly speed up the process in clinical settings.
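The idea behind a quantum variational circuit can be illustrated with a minimal single-qubit example in plain NumPy: angle-encode a classical feature, apply a trainable rotation, and measure a Pauli-Z expectation value. The paper's actual circuit is deeper and multi-qubit and runs on the simulators named above, so this is only a sketch of the principle:

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation gate as a 2x2 real matrix.
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def qvc_expval(feature, weight):
    # Angle-encode one classical feature into the qubit state, apply a
    # trainable RY rotation, then measure the Pauli-Z expectation value.
    state = ry(weight) @ (ry(feature) @ np.array([1.0, 0.0]))
    pauli_z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ pauli_z @ state)
```

In training, the expectation value plays the role of a feature/logit, and `weight` is updated by the classical optimizer (Adam in the paper) to shape the decision boundary.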
Cyberattacks can trigger power outages, military equipment problems, and breaches of confidential information; medical records, for instance, could be stolen if they fall into the wrong hands. Due to the great monetary worth of the data it holds, the banking industry is particularly at risk, and as the digital footprint of banks grows, so does the attack surface that hackers can exploit. This paper aims to detect distributed denial-of-service (DDoS) attacks on financial organizations using the Banking Dataset. In this research, we use multiple classification models for the prediction of DDoS attacks, adding some complexity to the architecture of generic models to enable them to perform well. We further apply support vector machine (SVM), K-nearest neighbors (KNN), and random forest (RF) algorithms. The SVM achieves an accuracy of 99.5%, while KNN and RF score 97.5% and 98.74%, respectively, for the detection of DDoS attacks. Upon comparison, the SVM proves more robust than KNN, RF, and existing machine learning (ML) and deep learning (DL) approaches.
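The three-model comparison above can be sketched with scikit-learn. Since the Banking Dataset is not reproduced here, a synthetic binary classification problem stands in for it, and the hyperparameters shown are assumptions rather than the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the (non-public) Banking Dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
# Fit each model and record its held-out accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
```

On real attack traffic the same loop would report per-model detection accuracy, which is the basis of the paper's SVM-vs-KNN-vs-RF comparison.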