2022
DOI: 10.1016/j.asoc.2022.109331

A neural network compression method based on knowledge-distillation and parameter quantization for the bearing fault diagnosis

Cited by 35 publications (13 citation statements)
References 44 publications
“…The slight difference is only in accuracy, precision and recall due to different ways of working. Specifically, the compression methods examined are Naive, DoReFa, BNN, LSQ, Observer [50], and PTQ [24]. We find that the efficiency of the quantizers does not differ too much, ranging from 92.47% to 96.95% with a margin of only 4.5%.…”
Section: Model Quantization and Classification Results (mentioning)
Confidence: 99%
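The comparison above covers several quantizers, including post-training quantization (PTQ). As a minimal illustration of the idea behind PTQ, the sketch below performs symmetric int8 quantization of a weight vector; the function names and the example weights are hypothetical, and a real PTQ pipeline would also calibrate activation ranges:

```python
def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8.

    Returns the quantized integers and the scale needed to dequantize.
    Illustrative sketch only; real PTQ also calibrates activations.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from int8 values and a scale."""
    return [qi * scale for qi in q]


weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# Round-trip error is bounded by half the quantization step (scale / 2).
```

Because each weight is rounded to the nearest of 255 levels, the reconstruction error per weight never exceeds half the scale step, which is why accuracy differences between well-tuned quantizers tend to be small, as the cited results report.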
“…In addition, one of the new trends in bearing fault detection and diagnosis is optimizing the machine learning model (model capacity, computational resources, inference latency) so that it can be deployed on constrained hardware [17]. Regarding this trend, there are several popular approaches such as neural network architecture search [18][19][20][21] or model compression using pruning [17,[21][22][23], quantization [23][24][25][26], and knowledge distillation [24,[27][28][29]. Each approach will have different advantages and disadvantages, so how to use them should be carefully considered.…”
Section: Introduction (mentioning)
Confidence: 99%
“…MSCF are used to enhance the fault signal, then the multisensor data are fused by MCNN for feature extraction and fault pattern classification. Ji et al [14] proposed a CNN compression method based on knowledge-distillation (K-D) and parameter quantization to facilitate rolling bearing fault diagnosis on an embedded platform. However, the current application of CNN in the field of fault diagnosis has the following problems: (1) the existing CNN has large scale and large parameters, which is difficult to be mounted on a small embedded platform; (2) Current model compression methods, such as network pruning, quantization and knowledge distillation, will inevitably reduce the performance of network recognition.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Shen and Guo 37 developed a modified probabilistic K-D method for the bearing fault diagnosis. Ji et al 38 combined the K-D method and the parameter quantization to compress the bearing fault diagnosis networks. In these works, the K-D process was proven to be able to improve the classification accuracy of the lightweight student networks.…”
Section: Introduction (mentioning)
Confidence: 99%
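The soft-label term that lets a lightweight student benefit from a teacher, as in the K-D works cited above, is commonly the temperature-scaled KL divergence of Hinton et al. A minimal sketch (function names and the temperature value are illustrative, not taken from the cited papers):

```python
import math


def softmax(logits, T=1.0):
    """Temperature-scaled softmax: higher T yields softer targets."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_kl(teacher_logits, student_logits, T=4.0):
    """KL divergence between softened teacher and student distributions,
    i.e. the soft-label term of the knowledge-distillation loss.

    Scaled by T**2, as is conventional, so its gradient magnitude stays
    comparable to the hard-label cross-entropy term.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

Minimizing this term pushes the student's output distribution toward the teacher's softened one, which is the mechanism the cited works credit for the accuracy gains of distilled lightweight diagnosis networks.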