2016
DOI: 10.1016/j.spa.2016.04.021

Robust estimation of U-statistics

Cited by 34 publications (18 citation statements)
References 14 publications

“…We also note that, as the name indicates, U-statistics are unbiased estimators of the kernel mean on the population of X and have minimal variance among all unbiased estimators of that mean (see e.g. Joly and Lugosi [2016]). For the above type of kernel concerning arbitrary machine learning algorithms, (6) is known as leave-pair-out cross-validation (LPOCV) (see Airola et al [2011], Montoya Perez et al [2018])…”
Section: Null Hypothesis of Label Independence
confidence: 99%
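
To make the excerpt concrete: a minimal sketch of an order-two U-statistic, assuming a symmetric kernel h and an i.i.d. sample; the kernel and data below are illustrative stand-ins, not taken from the cited paper.

```python
from itertools import combinations

def u_statistic(xs, h):
    """Order-two U-statistic: the average of h over all unordered pairs.

    Unbiased for E[h(X1, X2)] when h is symmetric and the xs are i.i.d.
    """
    pairs = list(combinations(xs, 2))
    return sum(h(x, y) for x, y in pairs) / len(pairs)

# With h(x, y) = (x - y)**2 / 2, the U-statistic reduces to the
# unbiased sample variance.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(u_statistic(xs, lambda x, y: (x - y) ** 2 / 2))  # ~4.5714
```
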
“…As an extreme case of re-sampling, we can consider what we refer to as leave-pair-out cross-validation (LPOCV), in which every differently labeled pair is held out from the sample in turn. LPOCV is an unbiased estimator of the pairwise error probability considered in this paper for any learning algorithm, and it is also the lowest-variance estimator among all unbiased estimators of that quantity (this follows from the theory of U-statistics, for which we refer to Joly and Lugosi [2016])…”
Section: Introduction
confidence: 99%
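
A schematic of the LPOCV loop described above, assuming binary labels and a hypothetical learner interface fit(train) that returns a scoring function; this is an illustrative sketch, not code from the cited works.

```python
from itertools import product

def lpocv_error(data, fit):
    """Leave-pair-out CV estimate of the pairwise error probability.

    data : list of (x, y) pairs with binary labels y in {0, 1}
    fit  : hypothetical learner; fit(train) returns a scoring function

    Every (positive, negative) pair is held out in turn, the model is
    trained on the remaining points, and an error is counted when the
    held-out negative scores at least as high as the held-out positive.
    The average over all such pairs is the LPOCV estimate.
    """
    pos = [i for i, (_, y) in enumerate(data) if y == 1]
    neg = [i for i, (_, y) in enumerate(data) if y == 0]
    errors = 0
    for i, j in product(pos, neg):
        train = [d for k, d in enumerate(data) if k not in (i, j)]
        score = fit(train)
        errors += score(data[i][0]) <= score(data[j][0])
    return errors / (len(pos) * len(neg))
```
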
“…An intermediate step, see W_MoU, is to consider diagonal blocks only, as represented in (c) of Figure 1. This formulation is also used by Lerasle et al (2019) for deriving robust mean embedding and Maximum Mean Discrepancy estimators; it simplifies the theoretical analysis thanks to the independence of the blocks, but at the cost of an increased variance of the estimator (Joly and Lugosi, 2016)…”
Section: MoM and MoU-based Estimators
confidence: 99%
“…The Median-of-Means (MoM for short) is a robust mean estimator first introduced in complexity theory during the 1980s (Nemirovsky and Yudin, 1983; Jerrum et al, 1986; Alon et al, 1999). Following the seminal deviation study by Catoni (2012), MoM has recently witnessed a surge of interest, mainly due to its sub-Gaussian behavior under the sole requirement that the second moment is finite (Devroye et al, 2016). Originally devoted to scalar random variables, MoM has notably been extended to random vectors (Minsker et al, 2015; Hsu and Sabato, 2016; Lugosi and Mendelson, 2017) and U-statistics (Joly and Lugosi, 2016; Laforgue et al, 2019) with minimal loss of performance. As a valuable alternative to the empirical mean in the presence of outliers or heavy-tailed distributions, MoM is now the cornerstone of many robust learning procedures such as bandits (Bubeck et al, 2013), robust mean embedding (Lerasle et al, 2019), and the more general framework of MoM-minimization (Lecué et al, 2018)…”
Section: Introduction
confidence: 99%
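
A minimal sketch of the scalar median-of-means estimator discussed in this excerpt: split the sample into blocks, average within blocks, take the median of the block means. The block count k and the Pareto data below are illustrative choices, not prescriptions from the cited papers.

```python
import random
from statistics import median, mean

def median_of_means(xs, k):
    """Median-of-means: partition xs into k blocks and return the median
    of the k block means. Sub-Gaussian deviation bounds hold with only a
    finite second moment (see Devroye et al, 2016)."""
    xs = list(xs)
    random.shuffle(xs)   # guard against adversarial ordering of the data
    m = len(xs) // k     # block size; the remainder is dropped
    blocks = [xs[i * m:(i + 1) * m] for i in range(k)]
    return median(mean(b) for b in blocks)

# Heavy-tailed example: Pareto-like data, where the empirical mean is
# fragile but the median-of-means remains stable.
data = [random.paretovariate(2.1) for _ in range(10_000)]
print(median_of_means(data, k=15))
```
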
“…, X_{i_m}) we cannot expect to get exponential inequalities anymore. Nevertheless, working with kernels that have a finite p-th moment for some p ∈ (1, 2], Joly and Lugosi (2016) construct an estimator of the mean of the U-process using the median-of-means technique that performs as well as the classical U-statistic with bounded kernels…”
Section: Introduction
confidence: 99%
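
A sketch, in the spirit of Joly and Lugosi (2016), of a diagonal-block median-of-means estimator for an order-two U-statistic: compute a U-statistic within each block and take the median of the block values. Block-count selection and the order-m generalization are omitted; this is an assumption-laden illustration, not the paper's exact construction.

```python
from itertools import combinations
from statistics import median

def mom_u_statistic(xs, h, k):
    """Median of block-wise order-two U-statistics (diagonal blocks only).

    Assumes len(xs) // k >= 2, so every block contains at least one pair.
    Each block yields an independent U-statistic of the kernel h; taking
    the median of these values trades some variance for robustness to
    heavy-tailed kernels.
    """
    m = len(xs) // k  # block size; the remainder is dropped
    blocks = [xs[i * m:(i + 1) * m] for i in range(k)]

    def block_u(b):
        pairs = list(combinations(b, 2))
        return sum(h(x, y) for x, y in pairs) / len(pairs)

    return median(block_u(b) for b in blocks)
```
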