Software fault prediction plays a vital role in software quality assurance: identifying faulty modules allows testing effort to be concentrated on them and helps improve the quality of the software. With the increasing complexity of modern software, feature selection is important for removing redundant, irrelevant, and erroneous data from the dataset. Feature selection methods are generally classified as filter-based or wrapper-based. In this paper, a hybrid feature selection method is proposed that yields better predictions than the traditional methods. NASA's public dataset KC1, available in the PROMISE software engineering repository, is used. Accuracy, mean absolute error (MAE), and root mean squared error (RMSE) are used to evaluate the performance of the software fault prediction models.
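The three evaluation metrics named above are standard and easy to state precisely. The following sketch computes them for a hypothetical set of fault predictions; the module labels and predicted scores are illustrative, not taken from KC1:

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the prediction errors.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean squared error: penalizes large errors more heavily than MAE.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def accuracy(y_true, y_pred, threshold=0.5):
    # Fraction of modules whose faulty/non-faulty label is predicted correctly.
    labels = [1 if p >= threshold else 0 for p in y_pred]
    return sum(1 for t, l in zip(y_true, labels) if t == l) / len(y_true)

# Hypothetical scores for five modules (1 = faulty, 0 = not faulty).
y_true = [1, 0, 1, 0, 0]
y_pred = [0.9, 0.2, 0.4, 0.1, 0.3]

print(accuracy(y_true, y_pred))        # 0.8
print(round(mae(y_true, y_pred), 2))   # 0.26
print(round(rmse(y_true, y_pred), 3))  # 0.319
```

Note that RMSE is never smaller than MAE on the same predictions, so reporting both gives a rough sense of how variable the errors are.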
Machine learning (ML) has been broadly applied to the upper layers of communication systems for various purposes, such as cognitive radio and communication network management. Nevertheless, its application to the physical layer is hindered by complex channel conditions and the limited learning capacity of conventional ML algorithms. Deep learning (DL) has recently been applied in fields such as computer vision and natural language processing, owing to its expressive capacity and convenient optimization ability. This paper describes a novel application of DL to the physical layer. By interpreting a communication system as an autoencoder, we develop a fundamentally new way to view communication system design as an end-to-end reconstruction task that seeks to jointly optimize the transmitter and receiver in a single process. This DL-based technique demonstrates more promising performance than traditional communication systems.
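The autoencoder view can be sketched concretely: a transmitter maps each message to a power-constrained vector of channel symbols, an AWGN channel corrupts it, and a receiver decodes. The minimal sketch below uses an untrained random linear transmitter and nearest-neighbor decoding purely to show the end-to-end structure; in the DL approach described above, the transmitter and receiver would be neural networks trained jointly on the reconstruction loss. All dimensions and the SNR are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

M, n = 4, 2       # M possible messages, n real channel uses (toy sizes)
snr_db = 7.0      # assumed signal-to-noise ratio

# Transmitter: a hypothetical, untrained linear layer mapping a message
# index to n channel symbols, followed by average-power normalization.
W_tx = rng.normal(size=(M, n))

def transmit(msg):
    x = W_tx[msg]
    return x / np.sqrt(np.mean(W_tx ** 2) * n)  # enforce power constraint

# Channel: additive white Gaussian noise at the given SNR.
def channel(x):
    noise_std = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    return x + rng.normal(scale=noise_std, size=x.shape)

# Receiver: nearest-neighbor decoding over the transmitter's constellation.
def receive(y):
    consts = np.array([transmit(m) for m in range(M)])
    return int(np.argmin(np.sum((consts - y) ** 2, axis=1)))

msg = 2
decoded = receive(channel(transmit(msg)))
```

Training would replace `W_tx` and the nearest-neighbor rule with differentiable networks and backpropagate a cross-entropy loss on `decoded` through the (differentiable) channel model, which is what makes the single-process joint optimization possible.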
Emotional AI is the next era of AI and is expected to play a major role in fields such as entertainment, health care, and self-paced online education by considering cues from multiple sources. In this work, we propose a multimodal emotion recognition system that extracts information from speech, motion capture, and text data. The main aim of this research is to improve the unimodal architectures so that they outperform the state of the art, and to combine them into a robust multimodal fusion architecture. We developed 1D and 2D CNN-LSTM time-distributed models for speech, a hybrid CNN-LSTM model for motion capture data, and a BERT-based model for text data to achieve state-of-the-art results, and attempted both concatenation-based decision-level fusion and Deep CCA-based feature-level fusion schemes. The proposed speech and mocap models achieve emotion recognition accuracies of 65.08% and 67.51%, respectively, and the BERT-based text model achieves an accuracy of 72.60%. The decision-level fusion approach significantly improves the accuracy of detecting emotions on the IEMOCAP and MELD datasets: it achieves 80.20% accuracy on IEMOCAP, which is 8.61% higher than the state-of-the-art methods, and 63.52% and 61.65% in 5-class and 7-class classification on the MELD dataset, both higher than the state of the art.
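Decision-level fusion of the three modalities can be illustrated with a small sketch. The per-modality class probabilities and the four-class label set below are invented for illustration, and the equal weighting is an assumption; the paper's concatenation-based fusion would instead feed the stacked outputs through a learned layer:

```python
import numpy as np

EMOTIONS = ["angry", "happy", "sad", "neutral"]  # illustrative 4-class setup

# Hypothetical softmax outputs for one utterance from the speech,
# motion-capture, and text models, respectively.
p_speech = np.array([0.10, 0.55, 0.15, 0.20])
p_mocap  = np.array([0.05, 0.40, 0.30, 0.25])
p_text   = np.array([0.08, 0.62, 0.10, 0.20])

# Decision-level fusion: combine the per-modality decisions. A simple
# variant is a weighted average of the probability vectors; a learned
# fusion network would choose the weights instead.
weights = np.array([1.0, 1.0, 1.0]) / 3.0  # assumed equal weighting
fused = np.stack([p_speech, p_mocap, p_text]).T @ weights
prediction = EMOTIONS[int(np.argmax(fused))]
print(prediction)  # → happy
```

Because fusion operates on each model's final outputs rather than its internal features, the unimodal models can be trained and upgraded independently, which is one practical appeal of the decision-level scheme over feature-level fusion.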