2020
DOI: 10.3390/risks8030083

Nagging Predictors

Abstract: We define the nagging predictor, which, instead of using bootstrapping to produce a series of i.i.d. predictors, exploits the randomness of neural network calibrations to provide a more stable and accurate predictor than is available from a single neural network run. Convergence results for the family of Tweedie’s compound Poisson models, which are usually used for general insurance pricing, are provided. In the context of a French motor third-party liability insurance example, the nagging predictor achieves s…
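The core idea of the abstract — averaging over the randomness of repeated network calibrations on the same data, rather than over bootstrap resamples — can be sketched as follows. This is a minimal illustration only: scikit-learn's MLPRegressor with squared-error fitting, the synthetic Poisson-style data, and the number of runs M are assumptions made for the sketch, not the Tweedie compound Poisson networks or the French MTPL portfolio used in the paper.

```python
# Minimal sketch of a nagging predictor: train the same architecture on the same
# data M times, letting only the random calibration (weight initialisation,
# mini-batch shuffling) differ, then average the M predictions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))                           # hypothetical tariff features
y = rng.poisson(np.exp(0.2 * X[:, 0] - 0.1 * X[:, 1]))   # hypothetical claim counts

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

M = 10  # number of independent network calibrations
single_preds = []
for seed in range(M):
    net = MLPRegressor(hidden_layer_sizes=(20, 15), max_iter=500, random_state=seed)
    net.fit(X_train, y_train)                 # full training set every time; no resampling
    single_preds.append(net.predict(X_test))

single_preds = np.array(single_preds)         # shape (M, n_test)
nagging_pred = single_preds.mean(axis=0)      # the nagging predictor: average over runs

mse_single = ((single_preds - y_test) ** 2).mean(axis=1)   # one MSE per network run
mse_nagging = ((nagging_pred - y_test) ** 2).mean()
print(f"single runs: mean MSE {mse_single.mean():.3f} (std {mse_single.std():.3f})")
print(f"nagging:     MSE {mse_nagging:.3f}")
```

Averaging the M runs typically narrows the run-to-run spread of the out-of-sample error, which is the stabilisation effect the nagging predictor is built on.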

Cited by 57 publications (47 citation statements). References 22 publications.
“…Lastly, KODE achieved the best performance on the NSL-KDD dataset with a detection rate of 96.64% and a performance accuracy of 99.73%. However, it achieved the highest false alarm rates compared to [31]. Finally, unlike the selected studies and many other approaches.…”
Section: Comparison Of Our Proposed Approach With Other Cutting-Edge IDS Approaches
confidence: 83%
“…Firstly, reference [31] is believed to have first proposed the concept of bagging. The authors intelligently used random draws with replacement to create several samples of the training dataset, trained various models, and averaged their scores with the ensemble technique of voting.…”
Section: Synopsis Of Bagging, Boosting, And Stacking
confidence: 99%
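The bagging recipe described in the passage above (bootstrap samples drawn with replacement, one model per sample, predictions combined by voting) can be sketched as follows. The decision-tree base learner and the synthetic binary-classification data are illustrative assumptions, not the cited authors' setup.

```python
# Minimal sketch of bagging: bootstrap resamples drawn with replacement,
# one base model per resample, predictions combined by majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test = X[:800], X[800:]
y_train, y_test = y[:800], y[800:]

B = 25  # number of bootstrap samples / base models (odd, so votes cannot tie)
votes = []
for b in range(B):
    idx = rng.integers(0, len(X_train), size=len(X_train))   # draw with replacement
    tree = DecisionTreeClassifier(random_state=b)
    tree.fit(X_train[idx], y_train[idx])
    votes.append(tree.predict(X_test))

votes = np.array(votes)                                  # shape (B, n_test)
bagged_pred = (votes.mean(axis=0) >= 0.5).astype(int)    # majority vote
print("bagged accuracy:", (bagged_pred == y_test).mean())
```

Note the contrast with the nagging predictor sketched earlier: bagging randomises the data via resampling, whereas nagging keeps the data fixed and randomises only the network calibration.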
“…Network regression models lack a certain degree of robustness, as gradient descent network fitting explores different (local) minima of the objective function; note that, in general, neural network fitting is not a convex minimization problem. This issue of non-uniqueness of good predictive models has been widely discussed in the literature, and ensembling may be one solution to mitigate this problem; we refer to [Dietterich 2000a, Dietterich 2000b], [Zhou et al. 2002], [Zhou 2012] and [Richman and Wüthrich 2020]. The top row shows the empirical distributions of the canonical parameters (θ(x_i))_{1≤i≤n} for 4 different networks; we observe that there are some differences in these empirical densities.…”
Section: 1st And 2nd Order Contributions
confidence: 99%
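The non-uniqueness issue raised in this passage can be made concrete with a short experiment: fit the same architecture on the same data several times, changing only the random calibration, and compare the spread of the fitted values before and after averaging. MLPRegressor and the synthetic regression data below are assumptions made for the sketch, not the setup of the cited paper.

```python
# Minimal sketch: repeated fits of one architecture land in different local minima,
# so the fitted values vary from run to run; averaging runs reduces that variability.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(1500, 5))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.2, size=1500)

fits = []
for seed in range(8):
    net = MLPRegressor(hidden_layer_sizes=(25,), max_iter=600, random_state=seed)
    fits.append(net.fit(X, y).predict(X))

fits = np.array(fits)                        # shape (runs, n)
spread_single = fits.std(axis=0).mean()      # variability of individual calibrations
ens_a = fits[:4].mean(axis=0)                # ensemble of the first 4 runs
ens_b = fits[4:].mean(axis=0)                # ensemble of the last 4 runs
spread_ens = np.abs(ens_a - ens_b).mean()    # two independent ensembles agree more closely
print(f"mean spread across single runs:      {spread_single:.4f}")
print(f"mean gap between two 4-run ensembles: {spread_ens:.4f}")
```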