2015
DOI: 10.48550/arxiv.1509.08165
Preprint
A Computational Framework for Multivariate Convex Regression and its Variants

Cited by 2 publications (2 citation statements)
References 26 publications
“…In this last subsection, we return to the least-squares (LS) piecewise affine regression problem that has motivated this study. Being a generalization of the classic LS linear regression problem, the least-squares piecewise affine regression problem is to find, given data points {(x_i, y_i)}_{i=1}^N ⊂ R^{d+1}, a LS estimator of a continuous piecewise affine function [9,23,3,16]. Since every piecewise affine function can be written in the form (1), we may formulate this regression problem as…”
Section: Least-Squares Piecewise Affine Regression
confidence: 99%
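The quote above describes fitting a continuous piecewise affine function by least squares. For the convex case, where the fit is a pointwise maximum of affine pieces, one simple heuristic (in the spirit of Magnani and Boyd's least-squares partition algorithm, not the method of the cited paper) alternates between fitting affine pieces on a partition of the data and repartitioning by the argmax. This is a minimal sketch; the number of pieces K and the synthetic data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, K = 200, 1, 5                               # sample size, dimension, affine pieces (assumed)
x = rng.uniform(-2, 2, size=(N, d))
y = x[:, 0] ** 2 + 0.1 * rng.standard_normal(N)   # noisy convex target

X1 = np.hstack([x, np.ones((N, 1))])              # design matrix for affine fits [x, 1]
labels = rng.integers(0, K, size=N)               # random initial partition
coef = np.zeros((K, d + 1))

for _ in range(30):
    # Fit one affine function per group by ordinary least squares.
    for k in range(K):
        idx = labels == k
        if idx.sum() > d:                         # keep the previous fit for tiny/empty groups
            coef[k], *_ = np.linalg.lstsq(X1[idx], y[idx], rcond=None)
    # Reassign each point to the piece attaining the pointwise maximum;
    # the max over pieces is the convex, continuous piecewise affine estimator.
    vals = X1 @ coef.T                            # (N, K) values of all affine pieces
    labels = vals.argmax(axis=1)

fhat = vals.max(axis=1)                           # max-affine fit at the data points
rmse = float(np.sqrt(np.mean((fhat - y) ** 2)))
```

Unlike the convex-program formulations discussed in the cited works, this alternating scheme is a fast local heuristic and depends on the random initial partition.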
“…As the density of the evaluation points increases, the estimated function potentially has more hyperplane components and is more flexible; however, the computation time typically increases. If a smooth functional estimate is preferred, see Nesterov (2005) and Mazumder et al. (2015), where methods for smoothing are provided. In practice, we propose to select the bandwidth vector h via leave-one-out cross-validation based on the unconstrained estimator.…”
Section: Shape Constrained Kernel-Weighted Least Squares (SCKLS) With...
confidence: 99%
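The bandwidth-selection step in the last quote can be sketched for a one-dimensional unconstrained kernel-weighted estimator. This assumes a Gaussian kernel, a Nadaraya-Watson form, and a hypothetical candidate grid; it is not the SCKLS implementation itself:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(100)

def loo_error(h):
    """Mean leave-one-out squared error of a Gaussian-kernel estimator with bandwidth h."""
    err = 0.0
    for i in range(len(x)):
        w = np.exp(-0.5 * ((x - x[i]) / h) ** 2)  # Gaussian kernel weights
        w[i] = 0.0                                # leave observation i out
        err += (y[i] - w @ y / w.sum()) ** 2      # predict x[i] from the rest
    return err / len(x)

grid = [0.01, 0.02, 0.05, 0.1, 0.2, 0.5]          # candidate bandwidths (assumed)
h_star = min(grid, key=loo_error)
```

For a bandwidth vector h in d dimensions, the same idea applies with a product kernel and a grid (or numerical search) over the vector of bandwidths.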