2009
DOI: 10.3724/sp.j.1004.2008.01515
Incorporating Prior Knowledge into Kernel Based Regression

Cited by 5 publications (7 citation statements)
References 5 publications
“…Once the coefficients of the prior sample are obtained, the second objective function can be formulated as given in Equation (14); a scalar value determines the significance of the second objective relative to the first, and ξ is the predefined tolerance of the estimated weight, with a value between 0.01 and 0.1. …”
Section: ( ) (mentioning)
confidence: 99%
“…In related works, many forms of prior knowledge have been incorporated into kernel-based regression in various fashions, such as biased data points in equality and inequality constraint form [9][10], derivative points [11][12], polyhedral constraint points [13], and discrete points derived from a prior function, represented as a prior function space [14]. However, all of the existing techniques try to find only one optimal solution for a given problem.…”
Section: Introduction (mentioning)
confidence: 98%
“…A change in the kernel as well as in the regularizer is proposed in [13] to emphasize local features and incorporate invariances. Other approaches include extending the optimization problem with a term penalizing the distance between a prior knowledge function, the data, and the estimate [14]. Note that the incorporation of prior knowledge for conditional density estimation is especially hard as additional constraints have to be asserted to obtain valid conditional densities, i.e., the probability mass of the conditional density estimate has to be non-negative and integrate to one for all fixed input values.…”
Section: Related Work (mentioning)
confidence: 99%
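The penalty-term approach mentioned in the excerpt above ([14]) can be sketched as a kernel ridge objective augmented with a term that pulls the estimate toward a prior-knowledge function evaluated on a grid. This is a minimal illustration, not the cited paper's exact formulation; the kernel choice, function names, and weights (`lam`, `mu`, `gamma`) are all assumptions.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix between row sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_with_prior_penalty(X, y, Xg, prior_vals, lam=1e-2, mu=1.0, gamma=1.0):
    # Minimize ||K a - y||^2 + lam * a'K a + mu * ||Kg a - prior_vals||^2,
    # where Kg evaluates the estimate on grid points Xg; the mu-term
    # penalizes the distance between the estimate and the prior function.
    K = rbf(X, X, gamma)      # n x n training kernel
    Kg = rbf(Xg, X, gamma)    # m x n grid-vs-training kernel
    A = K.T @ K + lam * K + mu * Kg.T @ Kg
    b = K.T @ y + mu * Kg.T @ prior_vals
    alpha = np.linalg.solve(A + 1e-10 * np.eye(len(X)), b)  # jitter for stability
    return lambda Xn: rbf(Xn, X, gamma) @ alpha

# Toy check: noisy sine data, prior = the true sine on a coarse grid.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, (15, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(15)
Xg = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
f = fit_with_prior_penalty(X, y, Xg, np.sin(Xg[:, 0]))
err = np.abs(f(Xg) - np.sin(Xg[:, 0])).max()
```

Setting the first-order condition of the augmented objective to zero gives the linear system solved above; `mu` trades off trust in the prior against fidelity to the noisy samples.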
“…Compared with standard SVR, the incorporation of an initial guess may produce a more reasonable model. The detailed implementation of PKBKR can be found in [25] and is omitted here due to space limitations.…”
Section: An Introduction To Prior Knowledge Based Kernel Regression (mentioning)
confidence: 99%
“…Prior Knowledge Based Kernel Regression (PKBKR) [25] is an extension of SVR [26,27]. Similar to SVR, PKBKR is a sample-based modeling method.…”
Section: An Introduction To Prior Knowledge Based Kernel Regression (mentioning)
confidence: 99%
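As a rough illustration of how an initial guess can enter a kernel-based fit — a simplified stand-in for the idea behind PKBKR, not its actual formulation in [25] — one can fit a kernel ridge model to the residual between the data and the prior model, so the final estimate is the initial guess plus a learned correction. Function names and hyperparameters below are assumptions.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gaussian RBF kernel matrix between row sets X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_with_initial_guess(X, y, prior, lam=1e-2, gamma=1.0):
    # Kernel ridge regression on the residual y - prior(X); the final
    # model returns the initial guess plus the learned correction.
    r = y - prior(X)
    K = rbf(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), r)
    return lambda Xn: prior(Xn) + rbf(Xn, X, gamma) @ alpha

# Toy check: the prior underestimates the true sine by 20%;
# the learned correction closes most of the gap near the samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, (10, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(10)
prior = lambda Xm: 0.8 * np.sin(Xm[:, 0])  # imperfect initial guess
f = fit_with_initial_guess(X, y, prior)
Xt = np.linspace(0.0, 2.0 * np.pi, 40)[:, None]
err = np.abs(f(Xt) - np.sin(Xt[:, 0])).max()
```

Where the correction's kernel weights decay (far from samples), the model falls back to the prior, which is the practical appeal of an initial guess over a zero-mean fit.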