Proceedings of the 13th ACM Conference on Recommender Systems 2019
DOI: 10.1145/3298689.3347010
On the discriminative power of hyper-parameters in cross-validation and how to choose them

Abstract: Hyper-parameter tuning is a crucial task for making a model perform at its best. However, despite well-established methodologies, some aspects of tuning remain unexplored. For example, tuning may affect not only accuracy but also novelty, and its effect may depend on the adopted dataset. Moreover, it can sometimes be sufficient to concentrate on a single parameter (or a few of them) instead of the full set. In this paper we report on our investigation of hyper-parameter tuning by performing an e…

Cited by 16 publications (2 citation statements); References 35 publications.
“…Then, to ensure a fair comparison, we have used the same learning rate to train FedeRank. We have set the remaining parameters as follows: the user- and positive-item-regularization parameter is set to 1/20 of the learning rate; conversely, the negative-item-regularization parameter is set to 1/200 of the learning rate, as suggested in the MyMediaLite implementation as well as in [4]. Moreover, for each setting, we have selected the best model within the first 20 epochs.…”
Section: Methods
confidence: 99%
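The regularization scheme quoted in the statement above can be sketched as a small configuration snippet. This is a minimal illustration, not the authors' actual code: the parameter names (`reg_user`, `reg_item_pos`, `reg_item_neg`) and the example learning rate are assumptions; only the 1/20 and 1/200 ratios and the 20-epoch budget come from the text.

```python
# Hypothetical hyper-parameter setup following the ratios quoted above.
learning_rate = 0.05  # assumed example value, not from the paper

params = {
    "learning_rate": learning_rate,
    # user- and positive-item regularization: 1/20 of the learning rate
    "reg_user": learning_rate / 20,
    "reg_item_pos": learning_rate / 20,
    # negative-item regularization: 1/200 of the learning rate
    "reg_item_neg": learning_rate / 200,
    # best model is selected within the first 20 epochs
    "max_epochs": 20,
}

print(params)
```

Tying the regularization strengths to the learning rate by fixed ratios reduces the search to a single free parameter, which matches the paper's point that tuning one parameter (or a few) can be sufficient.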
“…Regression coefficients based on regression analysis of 1 and 2 versus the linear densities, ıU and ıI, in Eq (18)…”
confidence: 99%