2018
DOI: 10.1007/978-3-319-64410-3

Probability and Statistics for Computer Science

Cited by 34 publications (22 citation statements). References: 0 publications.
“…4.1) with $N = 10{,}000$ and $p_I \approx 30\%$ (the plotted results are the median over 20 runs). Our obtained results conform to statistical theory [24]. Specifically, the inlier rates in the subsets of $N_{\text{sample}}$ points have a mean of $p_I$ and a standard deviation (SD) of $\sqrt{p_I(1 - p_I)/N_{\text{sample}}}$.…”
Section: Ablation Studies (supporting)
confidence: 85%
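The excerpt appeals to the standard binomial-proportion result: the inlier rate of a random subset of $N_{\text{sample}}$ points drawn from a population with inlier rate $p_I$ has mean $p_I$ and SD $\sqrt{p_I(1 - p_I)/N_{\text{sample}}}$. Below is a minimal simulation sketch of that claim; it is not the cited paper's code. $N$ and $p_I$ follow the excerpt, while the subset size and the number of repetitions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_I, N_sample = 10_000, 0.30, 100  # population size, inlier rate, subset size

# Population of N points, roughly a fraction p_I of them labeled inliers.
is_inlier = rng.random(N) < p_I

# Draw many random subsets and record each subset's inlier rate.
rates = np.array([
    is_inlier[rng.choice(N, size=N_sample, replace=False)].mean()
    for _ in range(5_000)
])

# For N_sample << N the finite-population correction is negligible,
# so the binomial-proportion formula should match closely.
print(f"empirical mean: {rates.mean():.4f}  (theory: {p_I:.4f})")
print(f"empirical SD:   {rates.std():.4f}  "
      f"(theory: {np.sqrt(p_I * (1 - p_I) / N_sample):.4f})")
```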
“…Note that we also identified (and subsequently discarded) an additional "Junk" cluster with relatively few elements (9.7% of the total number of connectivity profiles computed for all neurons in all recordings) and small values on all features. We thus removed these cases, as is usual in unsupervised clustering applications [44], to better discriminate the remaining "interesting" classes listed above.…”
Section: (mentioning)
confidence: 99%
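The "discard a junk cluster" step this excerpt describes is a common post-processing move in unsupervised clustering. A minimal sketch follows, assuming k-means over a synthetic feature matrix rather than the study's actual connectivity profiles; the cluster count, threshold, and data are all illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Placeholder "profiles": a main population plus a small low-valued blob
# standing in for the junk cluster.
X = np.vstack([
    rng.normal(loc=0.6, scale=0.1, size=(450, 4)),
    rng.normal(loc=0.05, scale=0.02, size=(50, 4)),
])

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Flag clusters whose centroid is small on *all* features as "junk"
# (0.3 is an illustrative threshold, not the study's criterion).
junk = np.where((km.cluster_centers_ < 0.3).all(axis=1))[0]

# Keep only the points assigned to the remaining "interesting" clusters.
keep = ~np.isin(km.labels_, junk)
X_kept, labels_kept = X[keep], km.labels_[keep]
discarded = np.count_nonzero(~keep)
print(f"discarded {discarded} of {len(X)} profiles ({100 * discarded / len(X):.1f}%)")
```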
“…The MLC trading strategy [35] brings a high level of reliability because it is based on several algorithms that are tested simultaneously: decision trees [50], Fisher's linear discriminant analysis [18], support vector machines [5,19], k-nearest neighbors (k-NN) [1], and ensemble learning methods [45]. Three different variants of decision trees are used in our MLC trading strategy: fine, medium, and coarse, which differ in their complexity, i.e., the number of splits [26]. Several support vector machines (SVMs) are tested: linear, quadratic, and cubic SVM, as well as fine Gaussian, medium Gaussian, and coarse Gaussian.…”
Section: Machine Learning Classifiers (mentioning)
confidence: 99%
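The excerpt's classifier menu (trees at several complexities, LDA, polynomial and Gaussian SVMs, k-NN, and an ensemble) maps naturally onto a side-by-side cross-validation loop. The sketch below is a stand-in for the authors' MLC setup, not their implementation, using scikit-learn on synthetic data: max_leaf_nodes serves as a proxy for the number of splits in the fine/medium/coarse trees, and the gamma values for the fine/medium/coarse Gaussian SVMs are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the trading features and buy/sell labels.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

models = {
    "tree (fine)":             DecisionTreeClassifier(max_leaf_nodes=100, random_state=0),
    "tree (medium)":           DecisionTreeClassifier(max_leaf_nodes=20, random_state=0),
    "tree (coarse)":           DecisionTreeClassifier(max_leaf_nodes=4, random_state=0),
    "LDA":                     LinearDiscriminantAnalysis(),
    "SVM (linear)":            make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "SVM (quadratic)":         make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2)),
    "SVM (cubic)":             make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3)),
    "SVM (fine Gaussian)":     make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=10.0)),
    "SVM (medium Gaussian)":   make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=1.0)),
    "SVM (coarse Gaussian)":   make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=0.1)),
    "k-NN":                    KNeighborsClassifier(n_neighbors=5),
    "ensemble (rand. forest)": RandomForestClassifier(random_state=0),
}

# Evaluate every model the same way, as the excerpt describes:
# all algorithms tested simultaneously on identical folds.
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:24s} 5-fold accuracy: {acc:.3f}")
```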