2020 28th European Signal Processing Conference (EUSIPCO), 2021
DOI: 10.23919/eusipco47968.2020.9287715

Smooth Strongly Convex Regression

Abstract: Convex regression (CR) is the problem of fitting a convex function to a finite number of noisy observations of an underlying convex function. CR is important in many domains, and one of its workhorses is the non-parametric least squares estimator (LSE). Currently, the LSE delivers only non-smooth, non-strongly convex function estimates. In this paper, leveraging recent results in convex interpolation, we generalize the LSE to smooth strongly convex regression problems. The resulting algorithm relies on a convex quadratic…
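The truncated sentence refers to the convex (quadratic) program at the core of the method: the LSE's squared loss is minimized subject to the F_{μ,L} interpolation conditions of Taylor, Hendrickx, and Glineur (the works [38,39] cited below). The following is a minimal sketch of that construction under those assumed conditions; the use of CVXPY, the variable names, and the synthetic data are our own illustrative choices, not the authors' implementation.

```python
# Minimal sketch of smooth strongly convex regression as a convex program.
# Assumes the F_{mu,L} interpolation conditions of Taylor, Hendrickx and
# Glineur; CVXPY, the variable names, and the synthetic data are
# illustrative choices, not the authors' implementation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 15, 2                        # number of samples, input dimension
X = rng.standard_normal((n, d))     # sample points x_i
y = np.sum(X**2, axis=1) + 0.1 * rng.standard_normal(n)  # noisy convex data

mu, L = 0.5, 4.0                    # strong convexity / smoothness (assumed known)
coef = 1.0 / (2.0 * (1.0 - mu / L))

f = cp.Variable(n)                  # fitted function values f_i
G = cp.Variable((n, d))             # fitted gradients g_i

constraints = []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        dx = X[i] - X[j]
        # F_{mu,L} interpolation condition:
        # f_i - f_j - <g_j, x_i - x_j> >= coef * ( (1/L)||g_i - g_j||^2
        #     + mu ||x_i - x_j||^2 - (2 mu / L) <g_j - g_i, x_j - x_i> )
        rhs = coef * (
            (1.0 / L) * cp.sum_squares(G[i] - G[j])
            + mu * float(dx @ dx)
            + (2.0 * mu / L) * ((G[j] - G[i]) @ dx)
        )
        constraints.append(f[i] - f[j] - G[j] @ dx >= rhs)

prob = cp.Problem(cp.Minimize(cp.sum_squares(f - y)), constraints)
prob.solve()
print("first fitted values:", f.value[:5])
```

Each constraint bounds an affine expression from below by a convex quadratic, so the program is convex and second-order-cone representable; standard solvers handle it directly.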

Citations: cited by 4 publications (5 citation statements, all of type "mentioning").
References: 21 publications.
“…has been extremely fruitful for characterizing their convergence properties [29,30,15,31,32]. Convex regression is treated extensively in [33,34,35,36], while recently being generalized to smooth strongly convex functions [37] based on A. Taylor's works [38,39]; an interesting approach using similar techniques for optimal transport is offered in [40]. Operator regression is a recent and at the same time old topic.…”
Section: Related Work
Mentioning confidence: 99%
“…The concept of convex regression is briefly introduced here; we refer the reader to, e.g., [35,37] for the technical details. Suppose one has noisy measurements y_i of a convex function ϕ(x) : R^n → R at points x_i ∈ R^n, i ∈ I, and (optionally) of its gradients ∇_x ϕ(x_i).…”
Section: A. Convex Regression
Mentioning confidence: 99%
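For concreteness, the non-parametric LSE referenced in this snippet fits values ŷ_i and subgradients g_i via a quadratic program. A standard formulation (our rendering of the textbook estimator in the notation of the quote, not a verbatim excerpt from [35,37]) is:

```latex
\min_{\{\hat{y}_i,\, g_i\}_{i \in I}} \; \sum_{i \in I} (\hat{y}_i - y_i)^2
\qquad \text{s.t.} \qquad
\hat{y}_j \;\ge\; \hat{y}_i + g_i^{\top}(x_j - x_i) \quad \forall\, i, j \in I,
```

after which a convex estimate is recovered as the pointwise maximum of the supporting hyperplanes, φ̂(x) = max_{i∈I} { ŷ_i + g_iᵀ(x − x_i) }. The smooth strongly convex generalization of [37] replaces these first-order convexity constraints with the tighter interpolation conditions sketched after the abstract above.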
“…By imposing the smoothness and strong convexity constraints on points d that are dense enough, shape-constrained GPs ensure that the posterior mean function μ_p•(x•) is "practically" (i.e., indistinguishably for all practical purposes) smooth and strongly convex [12]. The choice of shape-constrained GPs over exact methods, such as smooth strongly convex regression [19] (which would enforce the shape properties exactly and everywhere), is motivated by the fact that the latter is more computationally intensive and its learning rate can be significantly slower.…”
Section: B. Shape-Constrained Gaussian Processes
Mentioning confidence: 99%
“…In the proposed personalized gradient tracking strategy, the dynamic gradient tracking update is interlaced with a learning mechanism that lets each node learn the user's cost function U_i(x) by employing noisy user feedback in the form of a scalar quantity y_{i,t} = U_i(x_{i,t}) + ε_{i,t}, where x_{i,t} is the local, tentative solution at time t and ε_{i,t} is a noise term. It is worth pointing out that in this paper we consider convex parametric models instead of more generic non-parametric models, such as Gaussian processes [12,16-21] or convex regression [22,23]. The reasons for this choice stem from the fact that (i) users' functions are, or can often be approximated as, convex (see, e.g., [24,25] and references therein), which makes the overall optimization problem much easier to solve; (ii) convex parametric models have better asymptotic rate bounds than convex non-parametric models [22], which is fundamental when attempting to learn with scarce data; and (iii) a solid online theory already exists in the form of recursive least squares (RLS) [26-31].…”
Section: Introduction
Mentioning confidence: 99%
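Since this quote leans on RLS for learning a convex parametric model from noisy scalar feedback, here is a minimal sketch of that mechanism. The quadratic feature map, the forgetting factor, and the projection enforcing convexity are hypothetical illustrative choices, not the cited paper's design.

```python
# Minimal recursive least squares (RLS) sketch for learning a convex
# parametric model U(x) ~ theta^T phi(x) from noisy scalar feedback
# y_t = U(x_t) + eps_t. The feature map phi and the convexity projection
# are illustrative assumptions, not the paper's design.
import numpy as np

def phi(x):
    # Hypothetical 1-D feature map: U(x) ~ a*x^2 + b*x + c (convex iff a >= 0).
    return np.array([x**2, x, 1.0])

theta = np.zeros(3)          # parameter estimate [a, b, c]
P = 1e3 * np.eye(3)          # "covariance" matrix of the estimate
lam = 0.99                   # forgetting factor

rng = np.random.default_rng(1)
for t in range(200):
    x_t = rng.uniform(-2, 2)                                    # tentative solution at time t
    y_t = 2 * x_t**2 - x_t + 1 + 0.05 * rng.standard_normal()   # noisy user feedback
    p = phi(x_t)
    K = P @ p / (lam + p @ P @ p)            # RLS gain
    theta = theta + K * (y_t - p @ theta)    # parameter update
    P = (P - np.outer(K, p @ P)) / lam       # covariance update
    theta[0] = max(theta[0], 0.0)            # crude projection keeping the model convex

print("estimated [a, b, c]:", theta)         # should approach [2, -1, 1]
```

The projection step is the simplest way to keep this parametric model convex (a ≥ 0 for a 1-D quadratic); richer quadratic models would instead project the Hessian parameter onto the positive semidefinite cone.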
“…This represents a first step towards generic parametric models. Non-parametric approaches in the literature for learning unknown functions include, e.g., (shape-constrained) Gaussian processes [16,21,35] and convex regression [23,36]. As said, we prefer parametric models here for their faster asymptotic rates, cheap online computational load, and the ease of introducing convexity constraints.…”
Section: Introduction
Mentioning confidence: 99%