2019
DOI: 10.1016/j.sab.2019.105721

Laser-induced breakdown spectroscopy spectral feature selection to enhance classification capabilities: A t-test filter approach

Cited by 14 publications (4 citation statements)
References 42 publications
“…Feature selection can be performed in a number of ways, ranging from methods such as t tests (Chu, Hsu, Chou, Bandettini, & Lin, 2012; Huffman, Sobral, & Teran‐Hinojosa, 2019; Wang, 2012; Wang, Zhang, Liu, Lv, & Wang, 2014; Zhou & Wang, 2007) to more complex methods utilizing PCA, mutual information, L1 norms, regression [such as LASSO (Zhao & Yu, 2006) and AdaBoost (Wang, 2012)], and so on. Our current choice of using the t test as a way to rank and select the most separable features is a standard method in the field as evidenced by existing literature (Damoulas & Girolami, 2008b; Shawe‐Taylor & Cristianini, 2004; Varol, Gaonkar, Erus, Schultz, & Davatzikos, 2012).…”
Section: Discussion
confidence: 99%
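The quoted passage describes ranking features by a two-sample t test and keeping the most separable ones. As a minimal sketch of that filter idea (not the paper's code), the following ranks synthetic spectral features by a per-feature Welch t statistic; the data, sizes, and cutoff k are all illustrative assumptions.

```python
# Sketch of a t-test filter for feature selection: score each feature by a
# two-sample Welch t statistic and keep the top-|t| subset. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_features = 50

# Two classes of synthetic "spectra"; only features 0-4 carry a class shift.
class_a = rng.normal(0.0, 1.0, size=(30, n_features))
class_b = rng.normal(0.0, 1.0, size=(30, n_features))
class_b[:, :5] += 2.0

def welch_t(x, y):
    """Per-feature Welch t statistic between two sample matrices."""
    nx, ny = len(x), len(y)
    num = x.mean(axis=0) - y.mean(axis=0)
    den = np.sqrt(x.var(axis=0, ddof=1) / nx + y.var(axis=0, ddof=1) / ny)
    return num / den

# Rank features by |t| and keep the top k as the selected subset.
t_vals = welch_t(class_a, class_b)
top_k = np.argsort(-np.abs(t_vals))[:5]
print(sorted(top_k.tolist()))
```

With a clear class shift on the first five features, those features dominate the ranking; in practice k (or a p-value threshold) is tuned, and scipy.stats.ttest_ind gives the same statistic with p-values attached.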
“…Therefore, the characteristics and functions of these features in the classifications were not exactly the same; in particular, the morphological features of the spectrum were no weaker than the traditional spectral intensity features. 29,47 Based on the above results, we have reason to believe that these different kinds of features contain different and valuable information. 48 Fusion and reconstruction using specific methods can positively improve the classification accuracy.…”
Section: Classification Using a Single Type of Feature
confidence: 95%
“…As a result, a prediction model is produced as an ensemble of weak prediction models. Every step evaluates the model values at each training data point, using the residuals of previous steps to minimize the loss function [50]. A GBM utilizes the best practices to avoid overfitting the classification machine.…”
Section: Methods
confidence: 99%
“…The improvements made by each variable are then averaged over every tree that uses that variable. The variables with the greatest average decrease in the split criterion are listed as the most significant [50]. When it comes to GBM modeling, there are a variety of tuning parameters available. The following were used in this study: (boosting_type = ‘gbdt’, num_leaves = 31, max_depth = −1, learning_rate = 0.1, n_estimators = 100).…”
Section: Methods
confidence: 99%
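The two Methods quotes above describe stagewise gradient boosting (each stage fits a weak learner to the residuals of the ensemble so far) and impurity-based feature importance. The parameters listed (boosting_type, num_leaves, etc.) are LightGBM's; as a hedged sketch of the same ideas with scikit-learn's GradientBoostingClassifier on synthetic data — an analogous stand-in, not the cited study's code — one might write:

```python
# Sketch of gradient boosting as described in the quoted passage: a
# stagewise ensemble of weak trees, with impurity-based importances that
# average each feature's split-criterion improvement over the trees using
# it. Synthetic data; the cited study used LightGBM with its parameters
# (num_leaves=31, max_depth=-1, learning_rate=0.1, n_estimators=100).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # label depends on features 0 and 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# learning_rate and n_estimators mirror the study's listed values.
gbm = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 random_state=0)
gbm.fit(X_tr, y_tr)

acc = gbm.score(X_te, y_te)
# Most significant variables by averaged split-criterion decrease.
top = np.argsort(-gbm.feature_importances_)[:2]
print(round(acc, 2), sorted(top.tolist()))
```

On this separable toy problem the model recovers features 0 and 1 as the most important; swapping in lightgbm.LGBMClassifier with the quoted parameters would follow the study more literally.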