2022
DOI: 10.1016/j.isatra.2021.02.042
Intelligent fault diagnosis of machines with small & imbalanced data: A state-of-the-art review and possible extensions

Cited by 312 publications (117 citation statements)
References: 85 publications
“…In many mainstream methods, weak learners are not combined into the ensemble model directly; instead, the ensemble output is a weighted average of all weak learners. The weight functions are usually derived from the classification results of the base estimators in the preceding training rounds [46]. In RUSBoost [37], a pseudo-loss is calculated to update the weight parameters; this strongly influences the resampling in the next iteration and directly determines the weight of the base estimator trained in that iteration.…”
Section: Contamination Entropy Weight Ensemble
confidence: 99%
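To make the weighting scheme in this excerpt concrete, below is a minimal Python sketch of a RUSBoost-style loop, assuming binary labels in {0, 1}. It substitutes the plain AdaBoost weighted error for the full AdaBoost.M2 pseudo-loss of [37], so the function names and simplifications are illustrative assumptions, not the cited implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def rus_boost_fit(X, y, n_rounds=10):
    # Simplified RUSBoost-style loop: random undersampling (data level)
    # plus AdaBoost-style sample/estimator weighting (algorithm level).
    w = np.full(len(y), 1.0 / len(y))            # uniform initial sample weights
    learners, alphas = [], []
    per_class = np.bincount(y).min()             # undersample to minority-class size
    for _ in range(n_rounds):
        idx = np.concatenate([
            rng.choice(np.flatnonzero(y == c), per_class, replace=False)
            for c in np.unique(y)
        ])
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X[idx], y[idx], sample_weight=w[idx])
        pred = stump.predict(X)
        err = w[pred != y].sum()                 # weighted error (pseudo-loss stand-in)
        if err >= 0.5:                           # learner no better than chance: stop
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        w *= np.exp(alpha * (pred != y))         # boost the misclassified samples
        w /= w.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def rus_boost_predict(learners, alphas, X):
    # Alpha-weighted vote of the weak learners, not a plain average.
    votes = np.zeros(len(X))
    for h, a in zip(learners, alphas):
        votes += a * (2 * h.predict(X) - 1)      # map {0,1} predictions to {-1,+1}
    return (votes > 0).astype(int)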
“…Such issues fall within the scope of zero-shot learning, wherein a model is required to observe and predict samples from a previously unseen class or distribution [50]. For zero-shot learning to work, there must be a distinct characteristic of faults and healthy states that also holds for previously unseen faults or healthy states and that can be leveraged to assign these distributions to the correct class.…”
Section: Unsupervised Learning
confidence: 99%
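A minimal sketch of the attribute-based zero-shot idea the excerpt outlines: every health state, including unseen ones, is given a shared attribute description, and a model trained only on seen classes maps a signal into that attribute space. The attribute names, the class_attributes table, and the attr_model stand-in below are hypothetical, chosen only to make the mechanism concrete.

import numpy as np

# Hypothetical attribute descriptions per health state (columns: periodic
# impulses present, broadband energy, sideband modulation). Unseen faults
# get a description even though no training samples exist for them.
class_attributes = {
    "healthy":    np.array([0.0, 0.0, 0.0]),
    "inner_race": np.array([1.0, 0.0, 1.0]),    # seen during training
    "outer_race": np.array([1.0, 1.0, 0.0]),    # unseen: zero-shot target
}

def predict_zero_shot(attr_model, x):
    # Map the signal to attribute space, then assign the nearest class description.
    a_hat = attr_model(x)                        # regressor learned on seen classes only
    names = list(class_attributes)
    dists = [np.linalg.norm(a_hat - class_attributes[n]) for n in names]
    return names[int(np.argmin(dists))]

# Toy stand-in for a trained signal -> attribute regressor.
attr_model = lambda x: np.clip(x[:3], 0.0, 1.0)
print(predict_zero_shot(attr_model, np.array([0.9, 0.8, 0.1, 0.3])))  # -> outer_race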
“…Few-shot learning provides an interesting opportunity to learn fault features from only a few instances in a training dataset, as is the case for many rare faults or components. Where no labels exist, supervised algorithms may still be applicable through zero-shot learning [50]. In zero-shot learning, the model seeks to generalise knowledge from seen classes to unseen classes with similar behaviour, much as humans who have seen images of house cats and dogs can correctly categorize lions as felines and wolves as canines [101,102].…”
Section: Weak Supervision
confidence: 99%
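One common way to realise few-shot learning with only a handful of labelled fault instances is a prototypical-network-style classifier: average the embeddings of the few support samples per class and assign a query to the nearest prototype. The sketch below is a hypothetical illustration of that scheme, not a method from the review; embed stands in for any learned feature extractor.

import numpy as np

def prototype_classify(embed, support_x, support_y, query_x):
    # Summarise each class by the mean embedding of its few "shots",
    # then label the query by its nearest class prototype.
    z_support = np.stack([embed(x) for x in support_x])
    classes = np.unique(support_y)
    protos = np.stack([z_support[support_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos - embed(query_x), axis=1)
    return classes[int(np.argmin(dists))]

# Toy usage: 2 classes, 3 labelled shots each, identity embedding.
support_x = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.], [6., 5.]])
support_y = np.array([0, 0, 0, 1, 1, 1])
print(prototype_classify(lambda x: x, support_x, support_y, np.array([4.5, 5.5])))  # -> 1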
“…Targeting the unbalanced-data issue in fault diagnosis, researchers have proposed a variety of methods [12–14]. Current solutions to unbalanced classification mainly fall into two categories: data-level methods and algorithm-level methods.…”
Section: Introduction
confidence: 99%
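The two categories named in this excerpt can be illustrated with a short sketch: a data-level method rebalances the training set itself (here, random oversampling of the minority class), while an algorithm-level method keeps the data as-is and changes the learning objective (here, scikit-learn's cost-sensitive class_weight="balanced"). The toy data and the choice of logistic regression are illustrative assumptions, not examples from the review.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 4))
y = np.array([0] * 100 + [1] * 10)               # 10:1 class imbalance, toy data

# Data-level: rebalance the training set, e.g. random oversampling of the
# minority class (SMOTE and undersampling are common alternatives).
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=90, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])
clf_data_level = LogisticRegression().fit(X_bal, y_bal)

# Algorithm-level: keep the data unchanged and reweight the loss, with class
# weights inversely proportional to class frequency.
clf_algo_level = LogisticRegression(class_weight="balanced").fit(X, y)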