2018
DOI: 10.1007/978-1-4842-3207-1

Practical Machine Learning with Python

Cited by 78 publications (37 citation statements); references 0 publications.
“…This latter difference is referred to as the bias of the technique [1]. By balancing a model's complexity, one achieves an optimal trade-off between the bias and variance of the model [24,27]. In fact, the authors of [27] recommend k-fold cross-validation for small sample sizes because of its good variance and bias properties at only minor additional computational cost, given the rather small sample sizes.…”
Section: Machine Learning Methods (citation type: mentioning)
Confidence: 99%
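The k-fold procedure this statement recommends is easy to demonstrate. The sketch below uses scikit-learn with a synthetic 100-sample dataset, a logistic regression model, and k = 5; all of these are illustrative assumptions, not choices taken from the cited works.

# Minimal sketch of k-fold cross-validation on a small sample.
# Dataset, model, and k = 5 are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

# Each observation is held out exactly once; the fold scores give a
# generalization estimate with modest variance at the cost of k fits.
scores = cross_val_score(model, X, y, cv=cv)
print(f"Fold accuracies: {scores}")
print(f"Mean: {scores.mean():.3f}, std: {scores.std():.3f}")

For a dataset this small, the k extra model fits add negligible compute, which is the trade-off the quoted statement points to.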
“…Another approach to determining feature importance is a model-agnostic version called model reliance, where importance is indicated by how much the model error, measured for example by the AUC or any other performance measure, increases when the model is evaluated after permuting the features [29,30]. We will use the model-agnostic approach [24] as implemented in Skater [31]. Skater measures the mean absolute value of the change in predictions given a perturbation of a certain feature.…”
Section: Machine Learning Methods (citation type: mentioning)
Confidence: 99%
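Skater's own API is not reproduced here; the sketch below illustrates the underlying permutation idea (model reliance) with plain NumPy and scikit-learn. The synthetic dataset, the random forest, and AUC as the performance measure are all illustrative assumptions.

# Hedged sketch of permutation-style feature importance (model reliance):
# importance of feature j = drop in AUC after permuting column j.
# A generic illustration of the idea, not Skater's actual implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-target link
    perm_auc = roc_auc_score(y_test, model.predict_proba(X_perm)[:, 1])
    print(f"feature {j}: AUC drop = {baseline - perm_auc:.4f}")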
“…CNNs have also been integrated with other algorithms, such as multilayer perceptrons [30] and support vector machines [31]. Many studies have reported that CNNs have contributed to an improvement in the accuracy of land cover classification, with overall accuracy ranging from 81% to 93%, depending on the sensor type, the spatial resolution of input images, and the target classes [18,19,21,27,29-33]. Feature engineering is defined as the process of transforming raw data into features that better represent the given problem, which can result in improved model accuracy on unseen data [34]. Good features are a contributing factor in model performance, since machine learning algorithms are problem-specific and dependent on their domains.…”
Citation type: mentioning
Confidence: 99%
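As a concrete illustration of that definition in the land-cover setting, the sketch below derives a vegetation index (NDVI) from two raw spectral bands. The band values are hypothetical placeholders, and NDVI is a standard remote-sensing transformation rather than one taken from the quoted study.

# Illustrative feature engineering: turning raw red and near-infrared
# reflectance into a single, more informative feature (NDVI).
# Band values are hypothetical placeholders.
import numpy as np

red = np.array([0.20, 0.35, 0.10])  # raw red-band reflectance per pixel
nir = np.array([0.60, 0.40, 0.55])  # raw near-infrared reflectance

ndvi = (nir - red) / (nir + red)  # higher values indicate vegetation
print(ndvi)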