2022
DOI: 10.48550/arxiv.2206.08918
Preprint
Learning a Single Neuron with Adversarial Label Noise via Gradient Descent

Abstract: We study the fundamental problem of learning a single neuron, i.e., a function of the form x ↦ σ(w · x) for monotone activations σ : R → R, with respect to the L₂²-loss in the presence of adversarial label noise. Specifically, we are given labeled examples from a distribution D on (x, y) ∈ R^d × R such that there exists w* ∈ R^d achieving F(w*) = ε, where F(w) = E_{(x,y)∼D}[(σ(w · x) − y)²]. The goal of the learner is to output a hypothesis vector w̃ such that F(w̃) = Cε with high probability, where C > 1 is a universal constant. As our main …
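To make the setup concrete, here is a minimal illustrative sketch (not the paper's algorithm) of gradient descent on the empirical L₂² loss for a single sigmoid neuron, with a small fraction of labels corrupted to mimic adversarial label noise. All names (`sigmoid`, `squared_loss`, `gradient_step`) and the noise model are assumptions for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squared_loss(w, X, y):
    # Empirical estimate of F(w) = E[(sigma(w . x) - y)^2]
    return np.mean((sigmoid(X @ w) - y) ** 2)

def gradient_step(w, X, y, lr=0.5):
    p = sigmoid(X @ w)
    # Gradient of the empirical L2^2 loss w.r.t. w (chain rule through sigmoid)
    grad = 2.0 * X.T @ ((p - y) * p * (1.0 - p)) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
d, n = 5, 2000
w_star = rng.normal(size=d)          # ground-truth weight vector
X = rng.normal(size=(n, d))          # Gaussian examples
y = sigmoid(X @ w_star)              # clean labels

# Corrupt a small fraction of labels (stand-in for adversarial noise)
mask = rng.random(n) < 0.05
y[mask] = rng.random(mask.sum())

w = np.zeros(d)
for _ in range(500):
    w = gradient_step(w, X, y)
```

Despite the corrupted labels, plain gradient descent on this objective typically drives the empirical loss well below its value at initialization, which is the flavor of guarantee the paper formalizes as F(w̃) = Cε.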

Cited by 0 publications
References 4 publications