2019
DOI: 10.3390/app9050843

A Survey of Feature Set Reduction Approaches for Predictive Analytics Models in the Connected Manufacturing Enterprise

Abstract: The broad context of this literature review is the connected manufacturing enterprise, characterized by a data environment such that the size, structure and variety of information strain the capability of traditional software and database tools to effectively capture, store, manage and analyze it. This paper surveys and discusses representative examples of existing research into approaches for feature set reduction in the big data environment, focusing on three contexts: general industrial applications; specif…
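To make the survey's topic concrete, the following is a minimal sketch of one filter-style feature set reduction approach (univariate selection with scikit-learn). It is illustrative only and is not drawn from the survey; the synthetic dataset stands in for a high-dimensional manufacturing dataset.

```python
# Illustrative sketch of filter-style feature set reduction (univariate selection).
# Not taken from the surveyed paper; the data are synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic "wide" dataset standing in for high-dimensional manufacturing data
X, y = make_classification(n_samples=500, n_features=200, n_informative=10, random_state=0)

# Keep the 10 features with the strongest univariate association with the label
selector = SelectKBest(score_func=f_classif, k=10)
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)  # (500, 200) -> (500, 10)
```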

Cited by 18 publications (11 citation statements) · References 80 publications
“…Industrial applications often require the data analysis to be not only accurate but also interpretable, so that it can provide insights for process experts. According to the literature, machine learning pipelines in manufacturing can be largely divided into two schools [41]. An important one is feature engineering, which generates new features by changing the representation of the data [42] to improve modelling performance and to render the machine learning more transparent by making feature evaluation and interpretation possible.…”
Section: Workflow of ML Development
confidence: 99%
“…The ML pipelines adopt the classic ML school of feature engineering and modelling [41]. Feature engineering is the manual design of strategies to extract new features from the raw features (referred to as engineered features) [42].…”
Section: Four ML Pipelines for the Use Case
confidence: 99%
“…Finally, ML models are beneficial as they can potentially perform quality control for every welding spot reliably, ensuring process capability and reducing costs for quality monitoring (Zhou et al 2018). In the ML community there are two large groups of approaches (LaCasse et al 2019): feature engineering with classic machine learning, and feature learning with neural networks. In this work, we focus on the former, feature engineering, which means the manual design of strategies to extract new features from existing features (Bengio et al 2013); examples include the extraction of statistical features such as the maximum and mean, or geometric features such as slopes and drops.…”
Section: Introduction
confidence: 99%
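As a rough illustration of the kind of engineered features the two statements above describe (statistical features such as the maximum and mean, geometric features such as slopes and drops), here is a minimal sketch. The function name, the synthetic signal, and the exact feature choices are assumptions for illustration and are not taken from the cited papers.

```python
# Illustrative feature engineering on a 1-D process signal; feature choices are
# assumptions, not the cited papers' actual strategies.
import numpy as np

def engineer_features(signal: np.ndarray) -> dict:
    """Extract simple statistical and geometric features from a 1-D signal."""
    diffs = np.diff(signal)
    return {
        "max": float(np.max(signal)),    # statistical feature
        "mean": float(np.mean(signal)),  # statistical feature
        "std": float(np.std(signal)),    # statistical feature
        # geometric feature: overall trend (slope of a linear fit)
        "slope": float(np.polyfit(np.arange(len(signal)), signal, 1)[0]),
        # geometric feature: largest single-step decrease in the signal
        "max_drop": float(-np.min(diffs)) if len(diffs) else 0.0,
    }

# Example on a synthetic signal standing in for a welding-process curve
signal = np.sin(np.linspace(0, 3, 200)) + 0.05 * np.random.randn(200)
print(engineer_features(signal))
```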
“…Four settings of engineered features are designed for machine learning modelling to explore and test whether, and to what degree, feature engineering can increase the prediction power. Three ML methods, linear regression (LR), a multi-layer perceptron with one hidden layer (MLP), and support vector regression (SVR), are studied as representative classic machine learning methods (LaCasse et al 2019). The combination of the feature processing settings and ML methods gives 12 ML pipelines.…”
Section: Introduction
confidence: 99%
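A minimal sketch of how the 12 pipelines (four feature settings times three classic ML methods) could be enumerated with scikit-learn. The names of the feature settings are hypothetical placeholders, since the excerpt does not list them, and the hyperparameters are illustrative defaults rather than the cited study's choices.

```python
# Illustrative enumeration of 4 feature settings x 3 methods = 12 ML pipelines.
from itertools import product

from sklearn.base import clone
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical placeholder names for the four engineered-feature settings;
# the citing paper does not list them in this excerpt.
feature_settings = ["raw", "raw+stats", "raw+geom", "raw+stats+geom"]

# The three classic ML methods named in the citation statement.
models = {
    "LR": LinearRegression(),
    "MLP": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000),  # one hidden layer
    "SVR": SVR(kernel="rbf"),
}

# Build one independent pipeline per (feature setting, method) combination.
pipelines = {
    (setting, name): make_pipeline(StandardScaler(), clone(model))
    for setting, (name, model) in product(feature_settings, models.items())
}

print(len(pipelines))  # 12
```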
“…Similarly, the work by LaCasse et al. [12] surveys feature set reduction approaches for data analysis methods in the context of general industrial applications, specific industrial applications, and data reduction. The study highlights prospects for feature-based data prioritization.…”
confidence: 99%