The failure of bearings can have a significant negative impact on the safe operation of equipment. Recently, deep learning has become one of the focuses of remaining useful life (RUL) prediction due to its strong scalability and nonlinear fitting ability. The supervised learning process in deep learning requires a significant quantity of labeled data, but data labeling can be expensive and time-consuming. Cotraining is a semisupervised learning method that reduces the quantity of required labeled data by exploiting available unlabeled data during supervised learning to boost accuracy. This paper proposes a novel cotraining-based approach for RUL prediction. A CNN and an LSTM were cotrained on large amounts of unlabeled data to obtain a health indicator (HI); the monitoring data were then mapped to the HI, from which the RUL prediction was obtained. The effectiveness of the proposed approach was compared and analyzed against individual CNN and LSTM models and the stacking networks SAE+LSTM and CNN+LSTM from the existing literature, using RMSE and MAPE values on the PHM 2012 dataset. The results demonstrate that the RMSE and MAPE values of the proposed approach are superior to those of the individual CNN and LSTM, and that the RMSE of the proposed approach is 54.72, which is significantly lower than that of SAE+LSTM (137.12) and close to that of CNN+LSTM (49.36). The proposed approach has also been tested successfully on a real-world task and thus has strong application value.
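The cotraining idea described above can be sketched in miniature. The toy regressors, agreement threshold, and synthetic degradation data below are all illustrative assumptions, not the paper's CNN/LSTM setup: two models with different inductive biases are fit on a small labeled set, and each is retrained with the other's confident pseudo-labels on unlabeled data, where mutual agreement serves as a confidence proxy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy degradation-like data: y is a smooth trend of x (a stand-in for an HI).
x_lab = rng.uniform(0, 1, 20)
y_lab = x_lab ** 2 + rng.normal(0, 0.01, 20)   # small labeled set
x_unl = rng.uniform(0, 1, 200)                 # large unlabeled set

def fit(deg, x, y):
    return np.polyfit(x, y, deg)

def predict(coef, x):
    return np.polyval(coef, x)

# Two models with different inductive biases (polynomial degrees 2 and 3),
# standing in for the paper's CNN and LSTM.
m1 = fit(2, x_lab, y_lab)
m2 = fit(3, x_lab, y_lab)

for _ in range(5):
    p1 = predict(m1, x_unl)
    p2 = predict(m2, x_unl)
    agree = np.abs(p1 - p2) < 0.05             # mutual agreement as confidence
    # Each model is retrained with the other's confident pseudo-labels.
    m1 = fit(2, np.concatenate([x_lab, x_unl[agree]]),
                np.concatenate([y_lab, p2[agree]]))
    m2 = fit(3, np.concatenate([x_lab, x_unl[agree]]),
                np.concatenate([y_lab, p1[agree]]))

# Final prediction: average of the two cotrained models.
final = 0.5 * (predict(m1, x_unl) + predict(m2, x_unl))
```

The agreement-based pseudo-labeling loop is the essence of cotraining for regression; the paper applies the same principle with deep networks on vibration monitoring data.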
With the development of computer vision technology, more and more enterprises have begun to use computer vision instead of manual inspection for steel surface defect detection. However, classical image processing methods often face great difficulties when dealing with images containing noise and distortions, which leads to low computational efficiency and poor detection accuracy. Given the particularities of hot round steel production, a computational intelligence method is proposed in this paper. On the basis of preliminary image preprocessing, we combine improved PCA with a genetic algorithm for feature selection and then use evolutionary computing and CUDA-based parallel computing to screen out suspected defective images of the round steel surface intelligently, quickly, and accurately. This method can provide decision support for subsequent defect analysis and production process improvement.
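A genetic algorithm for feature selection of the kind mentioned above can be sketched as follows. The toy data, separation-based fitness function, and GA hyperparameters are illustrative assumptions, not the paper's actual pipeline: binary masks over candidate features evolve under selection, one-point crossover, and bit-flip mutation, with fitness rewarding class separation and lightly penalizing subset size.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 60 "defect" / 60 "normal" samples, 12 candidate features;
# only features 0-2 actually separate the classes.
n, d = 120, 12
X = rng.normal(0, 1, (n, d))
y = np.repeat([0, 1], n // 2)
X[y == 1, :3] += 2.0

def fitness(mask):
    if mask.sum() == 0:
        return -1.0
    sel = X[:, mask.astype(bool)]
    mu0, mu1 = sel[y == 0].mean(0), sel[y == 1].mean(0)
    sd = sel.std(0) + 1e-9
    # Mean class separation of selected features, penalized by subset size.
    return float(np.mean(np.abs(mu0 - mu1) / sd)) - 0.01 * mask.sum()

pop = rng.integers(0, 2, (30, d))
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]    # truncation selection (elitist)
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, d)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        flip = rng.random(d) < 0.05                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]    # evolved feature mask
```

Because the ten best masks survive unchanged each generation, the best fitness is nondecreasing; the evolved mask should comfortably beat using all features at once.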
Deep neural networks (DNNs) require large amounts of labeled data for model training. However, label noise is a common problem in datasets due to the difficulty of classification and the high cost of labeling. Introducing the concepts of curriculum learning and progressive learning, this paper presents a novel solution that is able to handle massive noisy labels and improve model generalization ability. It proposes a new network training strategy that considers mislabeled samples directly in the training process. The new learning curriculum is designed to measure the complexity of the data by their distribution density in a feature space. The sample data in each category are then divided into easy-to-classify (clean samples), relatively easy-to-classify, and hard-to-classify (noisy samples) subsets according to the smallest intra-class local density within each cluster. On this basis, DNNs are trained progressively in three stages, from easy to hard, i.e., from clean to noisy samples. The experimental results demonstrate that the accuracy of image classification can be improved through data augmentation, and that the classification accuracy of the proposed method is clearly higher than that of standard Inception_v2 on the NEU dataset after data augmentation, provided the proportion of noisy labels in the training set does not exceed 60%. With 50% noisy labels in the training set, the classification accuracy of the proposed method outperformed recent state-of-the-art label noise learning methods CleanNet and MentorNet. The proposed method also performed well in practical applications, where the number of noisy labels was uncertain and unevenly distributed. In this case, the proposed method not only alleviates the adverse effects of noisy labels but also improves the generalization ability and overall capability of standard deep networks.
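The density-based curriculum split described above can be illustrated with a small sketch. The synthetic features, neighbourhood size, and equal-thirds partition are illustrative assumptions, not the paper's exact procedure: local density is estimated from k-nearest-neighbour distances, high-density samples are treated as clean (easy), and low-density samples as likely noisy (hard), yielding a progressive training schedule.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy features for one class: a dense "clean" cluster plus scattered outliers
# standing in for mislabeled samples.
clean = rng.normal(0, 0.3, (80, 2))
noisy = rng.uniform(-4, 4, (20, 2))
X = np.vstack([clean, noisy])

# Local density: inverse of the mean distance to the k nearest neighbours.
k = 8
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
np.fill_diagonal(d2, np.inf)                   # exclude self-distances
knn = np.sort(np.sqrt(d2), axis=1)[:, :k]
density = 1.0 / (knn.mean(axis=1) + 1e-9)

# Split into easy / medium / hard thirds by density (high density = easy).
order = np.argsort(density)[::-1]
n = len(X)
easy = order[: n // 3]
medium = order[n // 3 : 2 * n // 3]
hard = order[2 * n // 3 :]

# A progressive schedule then trains on easy, easy+medium, and finally all samples.
curriculum = [easy, np.concatenate([easy, medium]), order]
```

In the full method, this partition is computed per class in a learned feature space, and the network is trained in three stages following the curriculum, so confident clean samples shape the model before noisy ones are introduced.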