2023
DOI: 10.1088/2631-8695/acd625
Efficient bearing fault diagnosis with neural network search and parameter quantization based on vibration and temperature

Abstract: In this work, we propose a deep-learning method to diagnose bearing faults of electric motors based on vibration and bearing housing temperature. Our method can accurately diagnose faults related to bearing cracking and lubricant shortages. The proposed method is effective in terms of computational complexity and model capacity thanks to the advantages of neural architecture search (NAS) and parameter quantization in the model establishment. The experimental results show that the information on bearing tempe…
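To make the abstract's pairing of a vibration/temperature classifier with parameter quantization concrete, the following is a minimal, hypothetical sketch (not the authors' code): a small 1-D CNN over a vibration window plus a housing-temperature scalar, compressed with post-training dynamic quantization in PyTorch. All layer sizes, class names, and the number of fault classes are illustrative assumptions.

```python
# Hypothetical sketch, not the published model: vibration + temperature
# fault classifier compressed with post-training dynamic quantization.
import torch
import torch.nn as nn

class BearingFaultNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Convolutional stem over a raw vibration window (1 channel).
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Classifier takes pooled vibration features concatenated with temperature.
        self.classifier = nn.Sequential(
            nn.Linear(32 + 1, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, vibration: torch.Tensor, temperature: torch.Tensor):
        x = self.features(vibration).squeeze(-1)   # (batch, 32)
        x = torch.cat([x, temperature], dim=1)     # append housing temperature
        return self.classifier(x)

model = BearingFaultNet().eval()

# Post-training dynamic quantization: Linear weights stored as int8,
# shrinking the classifier without retraining.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

vib = torch.randn(8, 1, 1024)      # 8 vibration windows of 1024 samples
temp = torch.randn(8, 1)           # matching housing-temperature readings
print(quantized(vib, temp).shape)  # -> torch.Size([8, 4])
```

In this sketch only the fully connected layers are quantized; the paper's actual architecture is produced by NAS, so the compressed network would differ in structure and in which layers are quantized.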

Cited by 3 publications (1 citation statement) · References 48 publications
“…This sophisticated model serves as the foundation for subsequent stages, where the compressed model is meticulously crafted using an array of techniques. These techniques, including knowledge distillation [11], network pruning [12], quantization [13], and low-rank factorization [14] are strategically employed to systematically reduce the size and computational demands of the model. The artful application of these methods ensures that the compressed model maintains a delicate balance, preserving its overall performance even as it undergoes a process of size and computational optimization.…”
Section: Introduction (mentioning)
confidence: 99%
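The citing passage lists compression techniques in general terms. As an illustration of one of them, low-rank factorization, here is a hedged sketch that approximates a trained linear layer's weight matrix by a rank-r product of two thinner layers via truncated SVD. The layer sizes and rank are assumptions for demonstration only, not values from the cited work.

```python
# Illustrative low-rank factorization of a Linear layer via truncated SVD.
import torch
import torch.nn as nn

def factorize_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace W (out x in) with B @ A, where A is (rank x in) and B is (out x rank)."""
    W = layer.weight.data                                # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = torch.diag(S[:rank]).sqrt() @ Vh[:rank]          # (rank, in)
    B = U[:, :rank] @ torch.diag(S[:rank]).sqrt()        # (out, rank)

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = A
    second.weight.data = B
    if layer.bias is not None:
        second.bias.data = layer.bias.data
    return nn.Sequential(first, second)

dense = nn.Linear(512, 256)
compact = factorize_linear(dense, rank=32)

x = torch.randn(4, 512)
# Outputs stay close while parameters drop from 512*256 to 32*(512+256).
print(torch.dist(dense(x), compact(x)))
```

Pruning, knowledge distillation, and quantization (the latter used in the paper under review) pursue the same goal of reducing size and computational demands, but act on weights, training targets, and numeric precision respectively rather than on matrix rank.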