2019
DOI: 10.1007/s10107-019-01401-3
A filtered bucket-clustering method for projection onto the simplex and the $$\ell _1$$ ball

Abstract: We propose in this paper a new method for computing the projection of an arbitrary-size vector onto the probability simplex or the $$\ell _1$$ ball. Our method merges two principles. The first is an original search for the projection using a bucket algorithm. The second is an on-the-fly filtering of the values that cannot be part of the projection. The combination of these two principles yields a simple and efficient algorithm whose worst-case complexity is linear in the vector size. Furthermore, the…
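The paper's bucket-filtering algorithm is not reproduced on this page, but the classical sort-based simplex projection it improves on can be sketched as follows. This is the standard $O(n \log n)$ baseline, not the paper's $O(n)$ method; the function name `project_simplex` is ours, chosen for illustration:

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Project v onto the probability simplex {x : x >= 0, sum(x) = z}.

    Classic sort-based method: sort in decreasing order, find the
    largest index rho at which the running threshold stays below the
    sorted entries, then shift and clip. O(n log n) due to the sort.
    """
    u = np.sort(v)[::-1]                      # sorted, decreasing
    css = np.cumsum(u)                        # prefix sums of sorted entries
    # rho = last index where u[rho] > (css[rho] - z) / (rho + 1)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1.0)      # optimal shift
    return np.maximum(v - theta, 0.0)         # shift and clip at zero
```

For example, `project_simplex(np.array([2.0, 1.0]))` returns `[1.0, 0.0]`, the closest point of the simplex to `(2, 1)`.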

Cited by 14 publications (27 citation statements)
References 14 publications
“…Where $\varphi(Z, Y)$ is the classification loss in the latent space and $\psi(\widehat{X} - X)$ is the reconstruction loss. We also introduce a constrained regularization $\|W\|_1 \le \eta$ for feature selection [30], [31]. The parameter $\lambda$ weights the classification loss against the reconstruction loss.…”
Section: Method: Non-parametric Supervised Autoencoder Framework (mentioning)
confidence: 99%
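The constraint $\|W\|_1 \le \eta$ quoted above is enforced by projecting the weights onto the $\ell_1$ ball. A minimal sketch of that projection, via the standard reduction to a simplex projection of the magnitudes (sort-based, not the paper's linear-time bucket method; `project_l1_ball` is our illustrative name):

```python
import numpy as np

def project_l1_ball(w, eta=1.0):
    """Project w onto {x : ||x||_1 <= eta}.

    Standard reduction: if w is already inside the ball, return it;
    otherwise project |w| onto the simplex of radius eta and restore
    the signs.
    """
    a = np.abs(w)
    if a.sum() <= eta:                        # already feasible
        return w.copy()
    u = np.sort(a)[::-1]                      # magnitudes, decreasing
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, w.size + 1) > (css - eta))[0][-1]
    theta = (css[rho] - eta) / (rho + 1.0)
    return np.sign(w) * np.maximum(a - theta, 0.0)
```

Note that the projection sparsifies the weights: entries with magnitude below the threshold $\theta$ are set exactly to zero, which is what makes the constraint useful for feature selection.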
“…Classical methods for learning structured sparse DNNs are based on proximal regularization. In this paper, we propose an alternative constrained approach that takes advantage of available efficient projections onto the $\ell_1$ ball [47], [48], [49] and the $\ell_{2,1}$ ball [50], [51]. We further propose a new $\ell_{1,1}$ projection.…”
Section: Goal of the Work (mentioning)
confidence: 99%
“…In this work, we propose a constrained approach in which the constraint is directly related to the number of zero weights. Moreover, it takes advantage of an available efficient projection onto the $\ell_1$ ball [47], [49]. Let $L(W)$ be a gradient-Lipschitz loss, $R(w)$ a convex constraint, and $C$ its convex set.…”
Section: A Projection Gradient Algorithm for Constrained Learning (mentioning)
confidence: 99%
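The projection-gradient scheme described in the snippet above alternates a gradient step on the loss with a projection back onto the constraint set $C$. A minimal self-contained sketch with $C = \{w : \|w\|_1 \le \eta\}$ (the helper names `l1_projection` and `projected_gradient` are ours, and the sort-based projection stands in for the paper's linear-time one):

```python
import numpy as np

def l1_projection(w, eta):
    """Sort-based l1-ball projection (standard simplex reduction)."""
    a = np.abs(w)
    if a.sum() <= eta:
        return w.copy()
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, w.size + 1) > (css - eta))[0][-1]
    theta = (css[rho] - eta) / (rho + 1.0)
    return np.sign(w) * np.maximum(a - theta, 0.0)

def projected_gradient(grad, w0, eta, step=0.5, iters=200):
    """Projected-gradient iteration w <- P_C(w - step * grad(w))
    for a gradient-Lipschitz loss L and C = {w : ||w||_1 <= eta}."""
    w = w0.copy()
    for _ in range(iters):
        w = l1_projection(w - step * grad(w), eta)
    return w
```

For instance, minimizing $\frac{1}{2}\|w - a\|^2$ with $a = (2, 0.5)$ under $\|w\|_1 \le 1$ (gradient $w - a$) converges to $(1, 0)$, the projection of $a$ onto the ball.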
“…Theoretically, $P_{\Delta_n}(x)$ can be computed with worst-case complexity $O(n)$. However, this involves either (i) using the linear-time median-selection algorithm of Blum et al. (1973), known to be slow in practice; or (ii) using the recently proposed algorithm of Perez et al. (2020), which requires a priori knowledge (i.e. a bound on the size of the entries of $x$) as well as some non-standard floating-point operations.…”
Section: Introduction (mentioning)
confidence: 99%
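Between the slow-in-practice deterministic median selection and the sort-based method, a common middle ground is a randomized pivot scheme in the style of Duchi et al. (2008), which is linear in expectation (though not in the worst case, unlike the deterministic alternatives the snippet discusses). A sketch, with our illustrative name `project_simplex_pivot`:

```python
import numpy as np

def project_simplex_pivot(v, z=1.0, seed=0):
    """Expected-O(n) simplex projection via randomized pivoting.

    Maintains the sum s and count rho of entries already confirmed
    to lie above the final threshold, and narrows the candidate
    index set U quickselect-style instead of fully sorting.
    """
    rng = np.random.default_rng(seed)
    U = np.arange(v.size)                     # candidate indices
    s, rho = 0.0, 0
    while U.size > 0:
        k = U[rng.integers(U.size)]           # random pivot index
        G = U[v[U] >= v[k]]                   # candidates >= pivot value
        L = U[v[U] < v[k]]                    # candidates <  pivot value
        ds, drho = v[G].sum(), G.size
        if (s + ds) - (rho + drho) * v[k] < z:
            s, rho = s + ds, rho + drho       # all of G is above threshold
            U = L                             # keep searching below pivot
        else:
            U = G[G != k]                     # threshold is within G
    theta = (s - z) / rho                     # final shift
    return np.maximum(v - theta, 0.0)
```

The recursion touches a geometrically shrinking candidate set on average, giving expected linear time without the median-of-medians machinery or the a priori bounds mentioned above.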