2022
DOI: 10.1007/978-3-031-04137-2_16
On the Choice of the Optimal Tuning Parameter in Robust One-Shot Device Testing Analysis

Cited by 3 publications (4 citation statements)
References 15 publications
“…In practical applications, this tuning parameter is typically chosen from a grid of potential values ranging between 0 and 1, since it is uncommon to work with a tuning parameter λ greater than 1. This aligns with [25], which employed different loss functions relating empirical and theoretical model probabilities to choose the optimal tuning parameter within the family of minimum DPD estimators in this context.…”
Section: Minimum Phi-divergence Estimators (mentioning)
confidence: 64%
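As a rough illustration of the grid-search strategy described in this citation statement (and not the exact procedure of the paper or of [25]), the following Python sketch selects the DPD tuning parameter λ from a grid in [0, 1] by scoring each candidate with a loss between empirical and fitted model probabilities. The one-parameter cell-probability model `model_probs`, the simulated multinomial data, and the squared-error loss are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the cited works' exact procedure):
# choose the DPD tuning parameter lambda by grid search, scoring each
# candidate with a loss between empirical and fitted model probabilities.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def model_probs(theta, K=4):
    """Hypothetical one-parameter cell-probability model (illustrative)."""
    w = np.exp(-theta * np.arange(K))
    return w / w.sum()

# Toy multinomial data generated from the model.
counts = rng.multinomial(200, model_probs(0.7))
p_hat = counts / counts.sum()

def dpd(p, q, lam):
    """Density power divergence between empirical p and model q (lam > 0);
    the limit lam -> 0 recovers the Kullback-Leibler divergence."""
    if lam < 1e-8:
        return np.sum(p * np.log(np.where(p > 0, p / q, 1.0)))
    return np.sum(q**(1 + lam) - (1 + 1/lam) * p * q**lam
                  + (1/lam) * p**(1 + lam))

def mdpd_estimate(lam):
    """Minimum DPD estimate of theta for a fixed tuning parameter."""
    res = minimize_scalar(lambda t: dpd(p_hat, model_probs(t), lam),
                          bounds=(0.01, 5.0), method="bounded")
    return res.x

# Grid of candidate tuning parameters in [0, 1], as is typical in practice.
grid = np.linspace(0.0, 1.0, 21)
# Score each lambda with a simple squared-error loss between empirical and
# fitted probabilities (one of several possible loss choices).
losses = [np.sum((p_hat - model_probs(mdpd_estimate(l)))**2) for l in grid]
best = grid[int(np.argmin(losses))]
print(f"selected lambda = {best:.2f}")
```

In practice the loss function itself is a design choice; the cited approach compares several such losses relating empirical and theoretical model probabilities before fixing λ.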
“…The proof of Proposition 99 works analogously to the proof of Proposition 72, by replacing Theorem 22(a) with (196). Proposition 101: In the above set-up, one has…”
Section: A. Naive Estimators of Min and Argmin - Base-Divergence-Method… (mentioning)
confidence: 95%
“…is the random vector constructed from (i) an auxiliary deterministic probability vector P^aux ∈ S^K_{>0} inducing index blocks I … (cf. (125) respectively (126)). From (196), we obtain for large m the minimal-empirical-risk approximation (cf. (147))…”
Section: A. Naive Estimators of Min and Argmin - Base-Divergence-Method… (mentioning)
confidence: 99%