2013
DOI: 10.1111/j.1600-0587.2013.07585.x

How to assess the prediction accuracy of species presence–absence models without absence data?

Abstract: It is very common that only presence data are available in ecological niche modeling. However, most existing methods for evaluating the accuracy of presence-absence (binary) predictions of species require presence-absence data. The aim of this study is to present a new method for accuracy assessment that does not rely on absence data. Two new statistics, F_pb and F_cpb, were derived based on presence-background data. With six generated virtual species, we used DOMAIN, generalized linear modeling (GLM), and maximu…

Cited by 79 publications (83 citation statements); references 52 publications (128 reference statements).
“…AUC provides a single measure of model performance and ranges from 0.5 (randomness) to 1 (perfect discrimination), where a score higher than 0.7 is considered a good model performance (Fielding & Bell, 1997; Rebelo et al., 2010). As AUC is not appropriate to evaluate the accuracy of binary predictions, we also used the true skill statistic (TSS) as suggested by recent studies (Lobo et al., 2008; Li & Guo, 2013) to assess the accuracy of the bamboo species models. The TSS takes into account both omission and commission errors, and success as a result of random guessing, and ranges from −1 to +1, where +1 indicates perfect agreement and values of zero or less indicate a performance no better than random.…”
Section: Species Distribution Modelling and Testing
Citation type: mentioning (confidence: 99%)
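The TSS described in the excerpt above reduces to sensitivity + specificity − 1 computed from a 2×2 confusion matrix. A minimal sketch of that calculation, assuming binary predictions and presence-absence test labels held in arrays; the function name and example data are illustrative, not taken from the cited studies:

```python
import numpy as np

def true_skill_statistic(y_true, y_pred):
    """TSS = sensitivity + specificity - 1, ranging from -1 to +1."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)      # presences correctly predicted
    fn = np.sum(y_true & ~y_pred)     # omission errors
    tn = np.sum(~y_true & ~y_pred)    # absences correctly predicted
    fp = np.sum(~y_true & y_pred)     # commission errors
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1.0

# Example: two omission errors and one commission error over 10 test sites -> TSS = 0.4
print(true_skill_statistic([1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                           [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]))
```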
“…We performed 10 replications for each bamboo species, using cross-validation procedures where we divided our dataset using 75% of the data for model calibration and retaining 25% of the data to evaluate models. In response to criticisms of the use of the area under the receiver operating characteristic (ROC) curve (AUC) in species distribution modelling (Lobo et al., 2008; Li & Guo, 2013), we assessed model performance with both average AUC and the relative importance of commission and omission. AUC provides a single measure of model performance and ranges from 0.5 (randomness) to 1 (perfect discrimination), where a score higher than 0.7 is considered a good model performance (Fielding & Bell, 1997; Rebelo et al., 2010).…”
Section: Species Distribution Modelling and Testing
Citation type: mentioning (confidence: 99%)
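The 75%/25% calibration-evaluation split with AUC averaged over 10 replicates, as described in this excerpt, can be sketched as follows. The simulated data, logistic-regression model, and scikit-learn helpers are stand-ins for whatever data and modelling software the cited study actually used:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder data: X = environmental predictors, y = 1 for presence, 0 for absence/background
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

aucs = []
for rep in range(10):  # 10 replications, as in the quoted procedure
    X_cal, X_eval, y_cal, y_eval = train_test_split(X, y, test_size=0.25, random_state=rep)
    model = LogisticRegression().fit(X_cal, y_cal)  # stand-in for the actual SDM algorithm
    aucs.append(roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1]))

print(f"mean AUC over 10 replicates: {np.mean(aucs):.3f}")
```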
“…The all-points models were used to predict range shifts in the same and opposing time period. We calculated four measures of SDM performance: the Continuous Boyce Index (CBI), which measures the correlation of the model prediction with the actual probability of presence [48,49]; area under the receiver-operator curve (AUC), which indicates the probability that a presence site has a higher predicted value than a background site [44]; maximum F_pb, the mean of precision (proportion of presence predictions that are correct) and sensitivity (proportion of test presences correctly predicted) [50]; and the point-biserial correlation (COR), a measure of model calibration accuracy [50,51].…”
Section: No Shift
Citation type: mentioning (confidence: 99%)
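Taking the verbal description of maximum F_pb in this excerpt at face value (the mean of precision and sensitivity, maximized over candidate thresholds, with background points treated as pseudo-absences), a rough sketch might look like the following. This illustrates that description only and is not a reimplementation of the published F_pb statistic; the function name and inputs are hypothetical:

```python
import numpy as np

def max_fpb(presence_scores, background_scores):
    """Maximum over thresholds of the mean of precision and sensitivity,
    with background points treated as pseudo-absences (illustrative only)."""
    p = np.asarray(presence_scores, dtype=float)
    b = np.asarray(background_scores, dtype=float)
    scores = np.concatenate([p, b])
    is_presence = np.concatenate([np.ones(p.size, bool), np.zeros(b.size, bool)])
    best = 0.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & is_presence)    # presences predicted present
        fp = np.sum(pred & ~is_presence)   # background points predicted present
        fn = np.sum(~pred & is_presence)   # presences predicted absent
        if tp == 0:
            continue
        precision = tp / (tp + fp)         # proportion of presence predictions that are correct
        sensitivity = tp / (tp + fn)       # proportion of test presences correctly predicted
        best = max(best, 0.5 * (precision + sensitivity))
    return best
```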
“…We tested thresholds using the value that maximized sensitivity plus specificity (proportion of test presences and test absences correctly predicted; MSSS), minimized the difference between sensitivity and specificity (MDSS), and maximized F_pb (max-F_pb) [50]. Only MSSS is guaranteed to have the same value if calculated using real absences or background sites [50-52]. Overall we found the MSSS threshold best matched observed range shifts, so for brevity we only present results using this threshold (the others are reported in supplementary material).…”
Section: No Shift
Citation type: mentioning (confidence: 99%)
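The MSSS and MDSS threshold rules quoted above follow directly from per-threshold sensitivity and specificity; a max-F_pb threshold would be chosen analogously by maximizing a statistic like the one sketched earlier. A minimal sketch, with illustrative function names and with background sites assumed to stand in for absences where real absences are unavailable:

```python
import numpy as np

def _sens_spec(labels, scores, t):
    """Sensitivity and specificity of the binary map scores >= t."""
    y = np.asarray(labels, dtype=bool)
    pred = np.asarray(scores, dtype=float) >= t
    sens = np.sum(pred & y) / np.sum(y)      # test presences correctly predicted
    spec = np.sum(~pred & ~y) / np.sum(~y)   # (pseudo-)absences correctly predicted
    return sens, spec

def msss_threshold(labels, scores):
    """Threshold that maximizes sensitivity + specificity (MSSS)."""
    ts = np.unique(scores)
    sums = [sum(_sens_spec(labels, scores, t)) for t in ts]
    return ts[int(np.argmax(sums))]

def mdss_threshold(labels, scores):
    """Threshold that minimizes |sensitivity - specificity| (MDSS)."""
    ts = np.unique(scores)
    gaps = [abs(np.subtract(*_sens_spec(labels, scores, t))) for t in ts]
    return ts[int(np.argmin(gaps))]
```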
“…Subsequently, we applied the average predicted probability as the threshold to define the presence-absence distribution of giant panda habitats, as this method has been found to be a robust approach (Liu et al., 2005). Area under the Operating Characteristic Curve (AUC) is a widely-used approach to evaluate model performance of species distribution models, but it is a threshold-independent measure that should not be applied to binary predictions (Lobo et al., 2008; Li and Guo, 2013). In this study, we adopted AUC to evaluate the model performance of our bamboo species models, whereas the True Skill Statistic (TSS) was used to evaluate the model performance of our giant panda model.…”
Section: Species Distribution Modeling and Testing
Citation type: mentioning (confidence: 99%)
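The thresholding rule quoted here (the average predicted probability, after Liu et al., 2005) amounts to a one-line cutoff on the continuous prediction surface; the array below is a placeholder for an actual habitat-suitability prediction:

```python
import numpy as np

# prob: continuous habitat-suitability predictions over the study area (placeholder values)
prob = np.random.default_rng(1).uniform(size=10_000)

threshold = prob.mean()               # average predicted probability as the cutoff
presence_absence = prob >= threshold  # binary presence-absence surface
print(f"threshold = {threshold:.3f}, predicted presence fraction = {presence_absence.mean():.3f}")
```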