1999
DOI: 10.1002/(sici)1099-128x(199903/04)13:2<165::aid-cem535>3.0.co;2-y

Iterative predictor weighting (IPW) PLS: a technique for the elimination of useless predictors in regression problems

Abstract: SUMMARY A new method for the elimination of useless predictors in multivariate regression problems is proposed. The method is based on the cyclic repetition of PLS regression. In each cycle the predictor importance (the product of the absolute value of the regression coefficient and the standard deviation of the predictor) is computed, and in the next cycle the predictors are multiplied by their importance. The algorithm converges after 10-20 cycles. A reduced number of relevant predictors is retained in the final …
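The cycle the abstract describes maps directly onto a few lines of code. The following is a minimal sketch, assuming scikit-learn's PLSRegression as the inner regression step and a fixed cycle count; the function name, the rescaling of the importances, and the stopping rule are illustrative choices, not the authors' published implementation.

```python
# Minimal sketch of iterative predictor weighting (IPW) PLS as described
# in the abstract: each cycle computes predictor importance as
# |coefficient| * predictor standard deviation and multiplies the
# predictors by it. Names and the fixed cycle count are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ipw_pls(X, y, n_components=3, n_cycles=20):
    """Return cumulative predictor weights; near-zero means 'useless'."""
    Xw = X.astype(float).copy()
    cumulative = np.ones(X.shape[1])
    for _ in range(n_cycles):                       # abstract: converges in 10-20 cycles
        # scale=False so the column weighting actually influences the fit
        pls = PLSRegression(n_components=n_components, scale=False).fit(Xw, y)
        b = pls.coef_.ravel()                       # PLS regression coefficients
        importance = np.abs(b) * Xw.std(axis=0)     # |b_j| * s_j
        importance /= max(importance.max(), 1e-12)  # keep weights bounded in [0, 1]
        Xw = Xw * importance                        # reweight predictors for next cycle
        cumulative *= importance
    return cumulative
```

Predictors whose cumulative weight ends up below a small threshold would then be discarded and the final model refitted on the survivors.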

Cited by 134 publications (48 citation statements)
References 12 publications
“…However, this stochastic nature (involving predictable processes as well as random actions) is also a major disadvantage in establishing a universal spectral subset since it is almost impossible to recreate identical GA or SA models. 26 Other methods that have been employed for feature selection include iterative variable selection, 27 iterative predictor weighting, 28 and uninformative variable elimination. 29 In particular, the important work of Centner et al 29 proposes the addition of artificial (noise) variables prior to the development of a closed form PLS or principal component regression (PCR) model for the dataset containing both the experimental and artificial variables.…”
Section: Theoretical Background 2.1 Wavelength Selection (mentioning)
confidence: 99%
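The uninformative variable elimination (UVE) idea attributed to Centner et al above can be sketched in the same spirit. The noise scale, the leave-one-out stability statistic, and the max-of-noise cutoff below follow the usual description of UVE-PLS and are assumptions, not details taken from this excerpt.

```python
# Sketch of uninformative variable elimination (UVE-PLS): append tiny
# artificial noise variables, estimate each coefficient's stability by
# leave-one-out refits, and keep only experimental variables whose
# stability beats the best noise variable. Details are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def uve_pls(X, y, n_components=3, seed=0):
    n, p = X.shape
    rng = np.random.default_rng(seed)
    noise = 1e-10 * rng.normal(size=(n, p))      # artificial (noise) variables
    Xa = np.hstack([X, noise])
    coefs = np.empty((n, 2 * p))
    for i in range(n):                           # leave-one-out coefficient estimates
        keep = np.arange(n) != i
        pls = PLSRegression(n_components=n_components, scale=False)
        coefs[i] = pls.fit(Xa[keep], y[keep]).coef_.ravel()
    stability = np.abs(coefs.mean(axis=0) / coefs.std(axis=0))
    cutoff = stability[p:].max()                 # best-performing noise variable
    return stability[:p] > cutoff                # boolean mask of variables to keep
```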
“…13 Weights of samples are dependent upon the value of the cross-validated regression residuals. The median $\tilde{r}$ of the absolute values of the residuals $|r_i|$ is calculated and the new sample square weights can be obtained by the weight function:…”
Section: Modified IPOW-PLS Methods (mentioning)
confidence: 99%
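The excerpt breaks off before the weight function itself. As an assumed stand-in, the sketch below uses a Tukey-bisquare-style squared weight built from the median of the absolute residuals, with a tuning constant c playing the thresholding role discussed in the next excerpt.

```python
# The quoted passage stops before the weight function; this is an assumed
# example, not the published one: a Tukey-bisquare-style squared weight
# built from the median of the absolute cross-validated residuals.
import numpy as np

def sample_weights(residuals, c=4.0):
    r_med = np.median(np.abs(residuals))     # the median of |r_i|
    u = residuals / (c * r_med)              # residuals scaled by c * median
    return np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)  # zero past the threshold
```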
“…Weights of samples are dependent upon the values of the regression residuals, as done in iterative reweighted PLS (IRPLS), 12 and weights of wavelengths are obtained from the PLS regression coefficients and the standard deviation, as done in iterative predictor weighting PLS (IPW-PLS). 13 During the calculation of the weights of samples, the tuning constant in IPOW-PLS is the key parameter that defines a threshold beyond which a weight of zero is assigned to that sample. A different tuning constant for the same weight function can give different results, 7,8 and this data-dependent feature makes IPOW-PLS inconsistent and computationally complex.…”
Section: Introduction (mentioning)
confidence: 99%
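A hedged sketch of how the two kinds of weights could be combined: robust sample weights on the rows, IPW-style predictor weights on the columns, and an ordinary PLS fit on the result. How the published IPOW-PLS algorithm actually interleaves these steps is not shown in the excerpt.

```python
# Illustrative composition of the double weighting described above:
# sample weights (IRPLS-style) scale the rows, predictor weights
# (IPW-style) scale the columns, then an ordinary PLS fit runs on the
# weighted data. Not the published IPOW-PLS algorithm.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def doubly_weighted_pls(X, y, sample_w, predictor_w, n_components=3):
    sw = np.sqrt(sample_w)
    Xw = sw[:, None] * X * predictor_w[None, :]  # weight rows and columns
    yw = sw * y                                  # weight the response the same way
    return PLSRegression(n_components=n_components, scale=False).fit(Xw, yw)
```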
“…In some cases, conventional methods may not offer a satisfactory solution to a given problem due to the complexity of the data, and it may be necessary to apply some sort of variable selection. Many mathematical methods of variable selection have been proposed; the genetic algorithm is one of them, offering a fast and effective solution for large-scale problems [17][18][19][20]. Inverse least squares (ILS) is based on the inverse of Beer's law, where concentrations of an analyte are modeled as a function of absorbance measurements.…”
Section: Introduction (mentioning)
confidence: 99%
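The inverse-Beer's-law formulation of ILS mentioned at the end of this excerpt is a one-liner in matrix terms: regress concentrations on absorbances, c ≈ Ap, by least squares. The function names below are illustrative.

```python
# Inverse least squares (ILS): concentration is regressed directly on
# absorbance, c = A p + e, the inverse of the Beer's-law direction.
# The number of wavelengths must not exceed the number of samples.
import numpy as np

def ils_calibrate(A, c):
    """A: (samples, wavelengths) absorbances; c: (samples,) concentrations."""
    p, *_ = np.linalg.lstsq(A, c, rcond=None)  # least-squares solution
    return p

def ils_predict(A_new, p):
    return A_new @ p                           # predicted concentrations
```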