2017
DOI: 10.1109/tie.2017.2668986
A Novel Framework for Fault Diagnosis Using Kernel Partial Least Squares Based on an Optimal Preference Matrix

Cited by 56 publications
(17 citation statements)
References 23 publications
“…The most popular are evolutionary algorithms and swarm intelligence algorithms, which have demonstrated their potential for solving important engineering decision-making problems [31]. The basic formulations of four nature-inspired metaheuristic algorithms (GA, PSO, FFA, and GWO) [32]–[34] are described in this section. The derivative-free optimization algorithms used in this work are also briefly presented [11].…”
Section: B. Nature-Inspired Metaheuristic Algorithms
confidence: 99%
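The statement above names PSO among the metaheuristics whose basic formulations are described. As an illustration only (none of this code appears in the cited work), a minimal particle swarm optimization sketch for unconstrained minimization might look like the following; the function name, parameter defaults, and bounds are all assumptions chosen for the example.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal PSO sketch (illustrative, not the cited formulation)."""
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull toward pbest + social pull toward gbest.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

For example, `pso_minimize(lambda x: sum(v * v for v in x), dim=2)` drives the sphere function toward its minimum at the origin. GA, FFA, and GWO follow the same population-based pattern with different update rules.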
“…The application was in the HSMP, wherein both quality-related and non-quality-related faults were investigated. Further developments of kernel PLS can be found in [146,160,163,164,168,173,196,197,199,206,229,231,242,243,268,284]. Concurrent PLS was also proposed to solve some drawbacks of the T-PLS.…”
Section: Quality-Relevant Monitoring
confidence: 99%
“…[17,18] Kernel learning methods map the process data into a high-dimensional feature space and subsequently perform linear transformations in that feature space. [17,19,20] The underlying assumption is that data that cannot be linearly separated become separable in a high-dimensional space. However, this assumption does not always hold, and the representative ability may be limited.…”
Section: Introduction
confidence: 99%
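The statement above summarizes the kernel idea behind kernel PLS: data that are not linearly separable in the input space may become separable after a nonlinear mapping. A minimal sketch (purely illustrative; the data, feature map, and kernel parameters are assumptions, not taken from the cited papers) shows this with XOR-style data, which no line separates in 2-D, yet a degree-2 polynomial feature map makes separable via the single product coordinate x1*x2. An RBF kernel is included to show the implicit inner product that kernel methods use instead of an explicit map.

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: inner product in an implicit feature space."""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq)

def phi(x):
    """Explicit degree-2 polynomial feature map (x1, x2, x1*x2)."""
    return (x[0], x[1], x[0] * x[1])

# XOR-style data: not linearly separable in the 2-D input space.
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
y = [-1, 1, 1, -1]

# In the mapped space, the hyperplane -x1*x2 = 0 separates the classes.
preds = [1 if -phi(x)[2] > 0 else -1 for x in X]
```

Here `preds` matches `y` exactly, while no linear classifier on the raw 2-D inputs can. This is the separability assumption the statement refers to; as it also notes, the assumption does not hold for every data set or kernel choice.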