2015
DOI: 10.1109/jstsp.2015.2400412

Designing Statistical Estimators That Balance Sample Size, Risk, and Computational Cost

Abstract: This paper proposes a tradeoff between computational time, sample complexity, and statistical accuracy that applies to statistical estimators based on convex optimization. When we have a large amount of data, we can exploit excess samples to decrease statistical risk, to decrease computational cost, or to trade off between the two. We propose to achieve this tradeoff by varying the amount of smoothing applied to the optimization problem. This work uses regularized linear regression as a case study to …
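To make the abstract's knob concrete, here is a minimal, hypothetical sketch (my illustration, not the paper's exact algorithm): smooth the l1 penalty of the lasso with the Huber function (its Moreau envelope), so the objective becomes differentiable with a gradient Lipschitz constant that scales like 1/mu, then run plain gradient descent. The names smoothed_lasso and huber, and the parameter mu, are assumptions of this sketch.

```python
import numpy as np

def huber(x, mu):
    """Moreau envelope of |.|: quadratic near zero, linear in the tails."""
    return np.where(np.abs(x) <= mu, x**2 / (2 * mu), np.abs(x) - mu / 2)

def huber_grad(x, mu):
    """Gradient of the Huber function; it is (1/mu)-Lipschitz."""
    return np.clip(x / mu, -1.0, 1.0)

def smoothed_lasso(A, b, lam, mu, n_iter=500):
    """Gradient descent on 0.5*||Ax - b||^2 + lam * sum_i huber(x_i, mu).

    Larger mu -> smaller Lipschitz constant -> larger safe step size and
    faster convergence, but more bias relative to the exact l1 solution:
    this is the time/accuracy knob the abstract describes.
    """
    L = np.linalg.norm(A, 2) ** 2 + lam / mu  # gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam * huber_grad(x, mu)
        x -= grad / L
    return x
```

Under this reading, excess samples let one afford a larger mu: the extra statistical slack absorbs the smoothing bias, while the better-conditioned problem converges in fewer iterations.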

Cited by 17 publications (22 citation statements); references 21 publications.
“…The authors of [39] showed that, by modifying the original iterations, it is possible to achieve faster convergence rates while maintaining estimation accuracy, without considerably increasing the computational cost of each iteration. More generally, smoothing techniques, such as convex relaxation [40] or simply adding a smooth function to the nondifferentiable objective [41], [35], [42], often achieve a faster convergence rate. However, the amount of smoothing should be chosen carefully to guarantee the performance of sporadic device activity detection in IoT networks.…”
Section: Related Work
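As background on why smoothing buys speed (a standard Nesterov-smoothing argument, stated here as context rather than as the cited papers' exact analysis): replacing a nonsmooth, G-Lipschitz convex term f by its Moreau envelope

$$
f_\mu(x) = \min_y \Big\{ f(y) + \tfrac{1}{2\mu}\,\|x - y\|_2^2 \Big\},
\qquad
f_\mu(x) \le f(x) \le f_\mu(x) + \tfrac{\mu G^2}{2},
$$

yields a gradient that is $(1/\mu)$-Lipschitz, so accelerated gradient methods reach accuracy $\varepsilon$ on $f_\mu$ in $O(\sqrt{1/(\mu\varepsilon)})$ iterations. Balancing the smoothing bias $\mu G^2/2$ against $\varepsilon$ (taking $\mu$ on the order of $\varepsilon$) gives an overall $O(1/\varepsilon)$ rate, versus $O(1/\varepsilon^2)$ for the subgradient method; over-smoothing destroys accuracy and under-smoothing destroys speed, which is why the amount of smoothing must be chosen carefully.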
“…2) Computation and Estimation Trade-offs: To address the computational challenges in massive IoT networks with a limited time budget, we adopt a smoothing method for the non-differentiable group-sparsity-inducing regularizer to accelerate convergence. Computational speedups can be achieved by projecting onto simpler sets [40], varying the amount of smoothing [42], or adjusting the step sizes [38] used in the optimization algorithms. However, such speedups normally reduce the estimation accuracy.…”
Section: System Model and Problem Formulation
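Varying the amount of smoothing is exactly the knob exercised by the sketch after the abstract. As a follow-up usage example (same assumed names, illustrative only), sweeping mu on synthetic data exposes the speed/accuracy trade the citing paper describes:

```python
import numpy as np

# Assumes smoothed_lasso() from the sketch after the abstract is in scope.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true + 0.1 * rng.standard_normal(200)

for mu in (1.0, 0.1, 0.01):
    x_hat = smoothed_lasso(A, b, lam=1.0, mu=mu, n_iter=200)
    # Heavier smoothing converges faster for a fixed iteration budget but
    # is more biased; lighter smoothing tracks the l1 estimate more closely.
    print(f"mu={mu:5.2f}  estimation error={np.linalg.norm(x_hat - x_true):.3f}")
```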
“…Wang et al. [4], in a Sparse Principal Component Analysis framework, addressed the question of whether it is possible to find an estimator that is computable in polynomial time, and then analyzed its minimax-optimal rate of convergence. Several other applications can be found in [5]–[9].…”
Section: Introduction
“…Another work [5] examined the time–data trade-off for the image interpolation problem by varying the amount of smoothing applied to the convex optimization problem. However, most of these works have addressed the computational–statistical trade-off by modifying or improving a statistical optimization scheme, whereas in this work we pursue a different approach based on building a more general family of denoisers.…”
Section: Introduction