2017
DOI: 10.7287/peerj.preprints.3185v1
Preprint
Parameters tuning boosts hyperSMURF predictions of rare deleterious non-coding genetic variants

Abstract: The regulatory code that determines whether and how a given genetic variant affects the function of a regulatory element remains poorly understood for most classes of regulatory variation. Indeed, the large majority of bioinformatics tools have been developed to predict the pathogenicity of genetic variants in coding sequences or conserved splice sites. Computational algorithms for the prediction of non-coding deleterious variants associated with rare genetic diseases are faced with special challenges owing to t…

Cited by 3 publications (2 citation statements)
References 5 publications
“…The speed-up introduced by parSMURF allows the automatic fine tuning of its learning parameters to improve predictions on real genomic data. Indeed, as preliminarily shown in [33], fine tuning of hyperSMURF learning parameters can boost the performance on real data.…”
Section: Auto-tuning Of Learning Parameters Improves Prediction Of Pa…
confidence: 87%
“…The behavior of the algorithm strongly depends on the learning hyper-parameters reported in Table 1, and, as shown in [33], fine tuning of these parameters can dramatically improve prediction performance. Since hyperSMURF is an ensemble of random forests, which are in turn ensembles of decision trees, its sequential implementation incurs long execution times, especially on large datasets, thus limiting a broad exploration of the hyper-parameter space.…”
Section: From hyperSMURF To parSMURF
confidence: 99%
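The scheme the citing papers describe — an ensemble of random forests trained on balanced partitions of the imbalanced data, with hyper-parameters (number of partitions, trees per forest) tuned by grid search — can be sketched as below. This is a minimal simplified stand-in, not the authors' hyperSMURF or parSMURF implementation: it omits SMOTE oversampling, and the helper `balanced_ensemble_auc`, the toy dataset, and the grid values are all illustrative assumptions.

```python
from itertools import product

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def balanced_ensemble_auc(X_tr, y_tr, X_te, y_te, n_parts, n_trees, seed=0):
    """Train one random forest per balanced partition of the majority class
    and average the predicted probabilities (a simplified hyper-ensemble;
    hypothetical helper, no SMOTE step)."""
    rng = np.random.default_rng(seed)
    pos = np.where(y_tr == 1)[0]                      # rare "deleterious" class
    neg = rng.permutation(np.where(y_tr == 0)[0])     # abundant majority class
    probs = np.zeros(len(y_te))
    for i, part in enumerate(np.array_split(neg, n_parts)):
        idx = np.concatenate([pos, part])             # positives + one negative slice
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=i)
        rf.fit(X_tr[idx], y_tr[idx])
        probs += rf.predict_proba(X_te)[:, 1]
    return roc_auc_score(y_te, probs / n_parts)


# Imbalanced toy data standing in for rare-variant annotations (~3% positives).
X, y = make_classification(n_samples=3000, weights=[0.97], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Small illustrative grid over two of the hyper-parameters.
best = max(
    ((p, t, balanced_ensemble_auc(X_tr, y_tr, X_te, y_te, p, t))
     for p, t in product([5, 10], [20, 50])),
    key=lambda r: r[2],
)
print(f"best: n_parts={best[0]}, n_trees={best[1]}, AUROC={best[2]:.3f}")
```

Averaging probabilities across partitions lets every majority-class sample contribute to some forest while keeping each individual training set balanced, which is the property that makes broad hyper-parameter sweeps worthwhile on such skewed data.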