2018
DOI: 10.48550/arxiv.1804.07169
Preprint

Large-scale Nonlinear Variable Selection via Kernel Random Features

Cited by 1 publication (5 citation statements)
References: 0 publications
Citation types: 0 supporting, 5 mentioning, 0 contrasting
“…We then apply STG and evaluate the classification accuracy and the number of selected features. We use the architecture [200, 50, 10] with tanh activations. The experiment was repeated 10 times; the extracted features and accuracies were consistent over 20 trials.…”
Section: Sparse Handwritten Digits Classification (mentioning)
Confidence: 99%
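For orientation, here is a minimal sketch of the setup this statement describes: a feed-forward classifier with the quoted [200, 50, 10] layer sizes and tanh activations, with a simplified per-feature stochastic gate in the spirit of STG. The gate parameterization, initialization, noise scale, and penalty below are illustrative assumptions, not the citing paper's exact code; PyTorch is assumed, and the final 10 is read as the 10-class digit output.

```python
import torch
import torch.nn as nn

class STGDigitsClassifier(nn.Module):
    """Sketch: [200, 50, 10] tanh network with a simplified stochastic
    gate on each input feature (in the spirit of STG)."""

    def __init__(self, in_features: int, sigma: float = 0.5):
        super().__init__()
        self.mu = nn.Parameter(torch.full((in_features,), 0.5))  # gate means (assumed init)
        self.sigma = sigma  # gate noise scale (assumed value)
        self.net = nn.Sequential(
            nn.Linear(in_features, 200), nn.Tanh(),
            nn.Linear(200, 50), nn.Tanh(),
            nn.Linear(50, 10),  # logits for the 10 digit classes
        )

    def gates(self) -> torch.Tensor:
        # z_d = clip(mu_d + eps, 0, 1) with eps ~ N(0, sigma^2) during
        # training; deterministic clip(mu_d, 0, 1) at evaluation time.
        noise = self.sigma * torch.randn_like(self.mu) if self.training else 0.0
        return torch.clamp(self.mu + noise, 0.0, 1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x * self.gates())

    def open_gate_penalty(self) -> torch.Tensor:
        # Expected number of active gates: sum_d P(z_d > 0) = sum_d Phi(mu_d / sigma).
        return torch.distributions.Normal(0.0, 1.0).cdf(self.mu / self.sigma).sum()
```

Training would minimize cross-entropy plus a multiple of open_gate_penalty(); features whose gates settle near zero are discarded, which is how "the number of selected features" in the quote would be counted.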
“…(b) The comparison of accuracy and sparsity-level performance for λ in the range [10^-3, 10^-2] between using our proposed method (STG) and its variant using the Hard-Concrete (HC) distribution [50]. Following the same format as [50], the following functions are used to generate synthetic data: (SE1: 100/5) y = sin(x_1 + x_3)^2 · sin(x_7 · x_8 · x_9) + N(0, 0.1).…”
Section: Regression Using Synthetic and Real Datasets (mentioning)
Confidence: 99%
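The quoted SE1 function is concrete enough to sketch as a data generator. Below is a minimal version, assuming standard-normal inputs and reading N(0, 0.1) as additive Gaussian noise with standard deviation 0.1; the quote does not pin down the input distribution, the total feature count, the exact meaning of the "100/5" label, or whether 0.1 is a variance, so those are placeholders.

```python
import numpy as np

def generate_se1(n_samples: int = 100, n_features: int = 10,
                 noise_std: float = 0.1, seed: int = 0):
    """Data from the quoted SE1 function:
        y = sin(x_1 + x_3)^2 * sin(x_7 * x_8 * x_9) + N(0, 0.1)
    Only features 1, 3, 7, 8, 9 (1-indexed) drive y; the rest are
    irrelevant, which is what a variable-selection method should recover."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, n_features))  # input law is an assumption
    y = (np.sin(X[:, 0] + X[:, 2]) ** 2            # x_1 + x_3 (0-indexed columns)
         * np.sin(X[:, 6] * X[:, 7] * X[:, 8])     # x_7 * x_8 * x_9
         + rng.normal(0.0, noise_std, n_samples))  # additive noise
    return X, y

X, y = generate_se1()
print(X.shape, y.shape)  # (100, 10) (100,)
```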