We present a 4-port Multiple-Input Multiple-Output (MIMO) antenna array operating in the mm-wave band for 5G applications. This paper introduces an identical two-element array excited by a feed network based on a T-junction power combiner/divider. The array elements are rectangular slotted patch antennas, while the ground plane is defected with rectangular, circular, and zigzag-shaped slotted structures to enhance the radiation characteristics of the antenna. To validate the performance, the MIMO structure is fabricated and measured, and the simulated and measured results show good agreement. The proposed structure operates in the 25.5–29.6 GHz frequency band, supporting impending mm-wave 5G applications, and the peak gain attained over the operating band is 8.3 dBi. Additionally, to obtain high isolation between antenna elements, polarization diversity is employed between adjacent radiators, resulting in a low Envelope Correlation Coefficient (ECC). Other MIMO performance metrics, such as the Channel Capacity Loss (CCL), Mean Effective Gain (MEG), and Diversity Gain (DG), are also analyzed, and the results indicate the suitability of the design as a potential contender for imminent mm-wave 5G MIMO applications.
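The ECC and DG metrics mentioned above are conventionally computed from the measured scattering parameters of an antenna pair; a sketch of the standard two-port S-parameter formulation (assuming a lossless antenna pair, which is the usual simplification) is:

```latex
% ECC of a two-port MIMO antenna pair from S-parameters (lossless assumption)
\rho_e = \frac{\left| S_{11}^{*}S_{12} + S_{21}^{*}S_{22} \right|^{2}}
              {\left(1-|S_{11}|^{2}-|S_{21}|^{2}\right)\left(1-|S_{12}|^{2}-|S_{22}|^{2}\right)}

% Diversity gain follows directly from the ECC
\mathrm{DG} = 10\sqrt{1-\rho_e^{2}}
```

Low mutual coupling (small \(|S_{12}|\), \(|S_{21}|\)), as obtained here through polarization diversity, drives \(\rho_e\) toward zero and DG toward its ideal value of 10.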
Much attention has been paid to recognizing human emotions from electroencephalogram (EEG) signals using machine learning. Emotion recognition is a challenging task due to the non-linear nature of the EEG signal. This paper presents an advanced signal processing method using a deep neural network (DNN) for emotion recognition from EEG signals. The spectral and temporal components of the raw EEG signal are first retained in a 2D spectrogram before feature extraction. A pre-trained AlexNet model then extracts raw features from the 2D spectrogram of each channel. To reduce feature dimensionality, a spatial- and temporal-based bag of deep features (BoDF) model is proposed. A vocabulary set consisting of 10 cluster centers per class is computed using the k-means clustering algorithm. Lastly, the emotion of each subject is represented by the histogram of the vocabulary set over the raw features of a single channel, so the features produced by the proposed BoDF model have a considerably smaller dimension. For classification, a support vector machine (SVM) and a k-nearest neighbor (k-NN) classifier assign the extracted features to the different emotional states of the two data sets. When validated on the SJTU SEED and DEAP data sets, the BoDF model achieves 93.8% accuracy on the SEED data set and 77.4% accuracy on the DEAP data set, outperforming other state-of-the-art methods of human emotion recognition.
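The BoDF encoding described above (k-means vocabulary, then a histogram of vocabulary-word occurrences) can be sketched in pure NumPy; this is illustrative only, and the function names and array sizes are our assumptions, not the paper's implementation:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns k cluster centers (the 'vocabulary') for X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign every feature vector to its nearest center
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old center if cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bodf_encode(features, vocabulary):
    """Histogram of nearest-vocabulary-word counts: the fixed-length BoDF descriptor."""
    d = np.linalg.norm(features[:, None] - vocabulary[None], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()                 # normalised descriptor
```

Per the abstract, one vocabulary of 10 centers would be built per class and concatenated, and the resulting low-dimensional histograms fed to the SVM or k-NN classifier.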
Electrocardiogram (ECG) signals play a vital role in diagnosing and monitoring patients suffering from various cardiovascular diseases (CVDs). This research aims to develop a robust algorithm that can accurately classify ECG signals even in the presence of environmental noise. A one-dimensional convolutional neural network (CNN) with two convolutional layers, two down-sampling layers, and a fully connected layer is proposed in this work. The same 1D data are also transformed into two-dimensional (2D) images to improve classification accuracy, and a 2D CNN model consisting of input and output layers, three 2D convolutional layers, three down-sampling layers, and a fully connected layer is then applied. Classification accuracies of 97.38% and 99.02% are achieved with the proposed 1D and 2D models, respectively, when tested on the publicly available Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Both proposed CNN models outperform the corresponding state-of-the-art classification algorithms on the same data, which validates their effectiveness.
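The 1D architecture described above (two convolution stages, each followed by down-sampling, then a fully connected layer) can be sketched as a NumPy forward pass; kernel sizes, the pooling window, and the class count here are our illustrative assumptions, not the paper's hyperparameters:

```python
import numpy as np

def conv1d(x, w, b):
    """'Valid' 1-D convolution: x of length L, kernel w of length K -> length L-K+1."""
    K = len(w)
    return np.array([x[i:i + K] @ w for i in range(len(x) - K + 1)]) + b

def maxpool1d(x, p=2):
    """Non-overlapping max pooling (the down-sampling layer) with window p."""
    n = len(x) // p
    return x[:n * p].reshape(n, p).max(axis=1)

def cnn1d_forward(x, w1, b1, w2, b2, W_fc):
    """conv -> ReLU -> pool, twice, then a fully connected softmax layer."""
    h = maxpool1d(np.maximum(conv1d(x, w1, b1), 0))
    h = maxpool1d(np.maximum(conv1d(h, w2, b2), 0))
    scores = h @ W_fc                       # fully connected layer
    e = np.exp(scores - scores.max())
    return e / e.sum()                      # class probabilities
```

A real implementation would use learned multi-channel filters and a training loop, but the data flow (signal in, class probabilities out) is the same.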
Unmanned aerial vehicles (UAVs) have become popular in surveillance, security, and remote monitoring. However, they also pose serious security threats to public privacy. The timely detection of a malicious drone is currently an open research issue for security provisioning companies. Recently, the problem has been addressed by a plethora of schemes; however, each has limitations, such as susceptibility to extreme weather conditions or the requirement for very large datasets. In this paper, we propose a novel framework that combines handcrafted and deep features to detect and localize malicious drones from their sound and image information. The respective datasets include sounds and occluded images of birds, airplanes, and thunderstorms, with variations in resolution and illumination. Various kernels of the support vector machine (SVM) are applied to classify the features. Experimental results validate the improved performance of the proposed scheme compared to related methods.
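The "various kernels" of the SVM mentioned above are simply different similarity functions applied to feature pairs; a minimal sketch of the three standard choices (function names and parameter defaults are ours) is:

```python
import numpy as np

# Three kernels commonly compared when tuning an SVM classifier:
def linear_kernel(x, y):
    return x @ y

def poly_kernel(x, y, degree=3, c=1.0):
    return (x @ y + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian radial basis function; equals 1 when x == y
    return np.exp(-gamma * np.sum((x - y) ** 2))
```

In practice one would build the full kernel (Gram) matrix over the hybrid feature vectors and select the kernel and its parameters by cross-validation.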
Location-based services have permeated smart academic institutions, enhancing the quality of higher education. Position information of people and objects can predict different potential requirements and provide relevant services to meet those needs. Indoor positioning system (IPS) research has attained robust location-based services in complex indoor structures. However, unforeseeable propagation losses in complex indoor environments degrade the localization accuracy of such systems. Various IPSs have been developed based on fingerprinting to precisely locate an object even in the presence of indoor artifacts such as multipath and unpredictable radio propagation losses; however, such methods are deleteriously affected by the vulnerability of fingerprint matching frameworks. In this paper, we propose a novel machine learning framework consisting of a Bag-of-Features (BoF) model followed by a k-nearest neighbor (k-NN) classifier that maps the final features to their respective geographical coordinates. The BoF model computes a vocabulary set using k-means clustering, and the frequency of each vocabulary word in the raw fingerprint data forms the robust final features that improve localization accuracy. Experimental results from simulation-based indoor scenarios and real-time experiments demonstrate that the proposed framework outperforms previously developed models.
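The final k-NN step described above can be sketched as follows; this simplified version estimates a position as the mean coordinate of the k nearest stored fingerprint descriptors (an assumption on our part, the paper's classifier may instead vote on discrete location labels):

```python
import numpy as np

def knn_locate(query, fingerprints, coords, k=3):
    """Estimate a position from a query descriptor.

    fingerprints : (N, D) stored BoF descriptors from the radio map
    coords       : (N, 2) geographical coordinates of each fingerprint
    """
    d = np.linalg.norm(fingerprints - query, axis=1)  # Euclidean distances
    nearest = np.argsort(d)[:k]                       # indices of k closest
    return coords[nearest].mean(axis=0)               # averaged position
```

Because the BoF descriptors are occurrence histograms rather than raw signal strengths, nearest-neighbor matching is less sensitive to the per-sample propagation noise the abstract highlights.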
Malware’s structural transformations to evade detection frameworks encourage hackers to steal users’ confidential content. Researchers are therefore developing protective shields against the intrusion of malicious malware into mobile devices. Deep learning-based Android malware detection frameworks have helped ensure public safety; however, their dependency on diverse training samples has constrained their utilization. Handcrafted malware detection mechanisms have achieved remarkable performance, but their computational overhead is a major hurdle to their adoption. In this work, a Multifaceted Deep Generative Adversarial Network Model (MDGAN) is developed to detect malware on mobile devices. Hybrid GoogleNet and LSTM features of the grayscale image and API sequence are processed pixel by pixel through a conditional GAN for a robust representation of APK files. The generator produces synthetic malicious features that the discriminator network learns to differentiate. Experimental validation on the combined AndroZoo and Drebin databases shows 96.2% classification accuracy and a 94.7% F-score, which remain superior to recently reported frameworks.
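The generator/discriminator interplay of a conditional GAN, as used above, can be sketched with two tiny label-conditioned networks; the layer sizes, names, and random (untrained) weights below are purely illustrative assumptions and do not reflect the MDGAN architecture:

```python
import numpy as np

def mlp(x, W1, W2):
    """One-hidden-layer network with tanh activation."""
    return np.tanh(x @ W1) @ W2

def generator(z, label_onehot, W1, W2):
    """Noise + class label -> synthetic malware feature vector."""
    return mlp(np.concatenate([z, label_onehot]), W1, W2)

def discriminator(f, label_onehot, W1, W2):
    """Feature vector + class label -> probability the sample is real."""
    logit = mlp(np.concatenate([f, label_onehot]), W1, W2)
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid
```

During adversarial training, the generator's synthetic features push the discriminator to sharpen the boundary between real and fabricated malicious representations, which is what yields the robust APK representation the abstract describes.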