2021
DOI: 10.1007/s10346-021-01693-7
Counteracting flawed landslide data in statistically based landslide susceptibility modelling for very large areas: a national-scale assessment for Austria

Abstract: The reliability of input data to be used within statistically based landslide susceptibility models usually determines the quality of the resulting maps. For very large territories, landslide susceptibility assessments are commonly built upon spatially incomplete and positionally inaccurate landslide information. The unavailability of flawless input data is contrasted by the need to identify landslide-prone terrain at such spatial scales. Instead of simply ignoring errors in the landslide data, we argue that m…

Cited by 43 publications (39 citation statements: 0 supporting, 29 mentioning, 0 contrasting)
References 95 publications
“…Recent advances in computer hardware and storage have made it possible to run DdLSM using very detailed raster resolutions, even for very large areas. The interplay between inventory positional accuracy and the resolution of the input data should be one of the initial concerns (Lima et al. 2021). The role of predictor resolution and its effects on DdLSM were assessed in publications such as Arnone et al. (2016), Claessens (2005), Durić et al. (2019), Lee et al. (2004), Palamakumbure et al. (2015), and Shirzadi et al. (2019).…”
Section: Modelling Unit and Spatial Resolution
Citation type: mentioning · Confidence: 99%
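The interplay this quote points to can be made concrete: if inventory points carry a positional error larger than a raster cell, the predictor grid can be coarsened until one cell is at least as wide as that error, so a mapped point is unlikely to land in the wrong terrain unit. The sketch below is a hypothetical numpy illustration of that idea; the cell size, error value, and block-averaging choice are assumptions, not values from the paper.

```python
# Hypothetical sketch: matching predictor-raster resolution to the
# positional accuracy of a landslide inventory. All numbers are
# illustrative assumptions, not values from the cited studies.
import numpy as np

def coarsen(grid: np.ndarray, factor: int) -> np.ndarray:
    """Block-average a 2-D raster by an integer factor (simple resampling)."""
    rows, cols = grid.shape
    rows -= rows % factor          # trim edges so blocks divide evenly
    cols -= cols % factor
    blocks = grid[:rows, :cols].reshape(rows // factor, factor,
                                        cols // factor, factor)
    return blocks.mean(axis=(1, 3))

cell_size_m = 10.0                 # native resolution of the predictor raster
positional_error_m = 50.0          # assumed mean positional error of inventory points

# Coarsen until one cell is at least as wide as the positional error.
factor = max(1, int(np.ceil(positional_error_m / cell_size_m)))

slope = np.random.rand(300, 300)   # stand-in for a slope raster
slope_coarse = coarsen(slope, factor)
print(slope_coarse.shape, "new cell size:", cell_size_m * factor, "m")
```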
“…For this reason, assuring the best possible positional accuracy of the inventory is extremely important. It is known that historical and accurate landslide inventories are virtually never available, especially when dealing with very large areas (Herrera et al. 2018; Lima et al. 2021; Lin et al. 2021; van den Eeckhaut and Hervas 2014). However, emerging landslide mapping techniques that enable immediate and precise event-based cataloguing, or techniques allowing past landslide extraction from high-resolution digital terrain models, can positively contribute (Guzzetti et al. 2012).…”
Section: Study Site Extent and Landslide Inventory
Citation type: mentioning · Confidence: 99%
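One technique this quote mentions, extracting past landslides from high-resolution digital terrain models, often starts from simple surface-texture measures. The following is a toy illustration only, not the method of Guzzetti et al. (2012) or any cited study; the window size and threshold are assumptions.

```python
# Toy illustration: flag unusually rough terrain in a high-resolution
# DTM as candidate relict-landslide morphology. Window size and
# percentile threshold are assumed, not taken from the literature.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def roughness(dtm: np.ndarray, win: int = 3) -> np.ndarray:
    """Local elevation range (max - min) in a sliding window."""
    return maximum_filter(dtm, size=win) - minimum_filter(dtm, size=win)

dtm = np.random.rand(200, 200) * 100           # stand-in for a 1 m DTM tile
r = roughness(dtm, win=5)
candidates = r > np.percentile(r, 95)          # assumed: top 5% roughest cells
print("candidate cells:", int(candidates.sum()))
```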
“…In this study, we create a statistical LSS model using MELR (Zuur, 2009), as previously also employed by , and at national scale by Lin et al. (2021) and Lima et al. (2021). Logistic regression is the most commonly used approach for statistical LSS mapping (Reichenbach et al., 2018), and is associated with strong generalizing capabilities (Brenning, 2005), a necessity when working at the global scale.…”
Section: Mixed Effects Logistic Regression (MELR) for Model Development
Citation type: mentioning · Confidence: 99%
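For readers unfamiliar with MELR: the model combines fixed effects (the predictors) with random effects, commonly a random intercept per inventory region that absorbs spatially varying data quality. Below is a minimal Python sketch of that structure; the library choice (statsmodels), formula, and variable names are assumptions for illustration, not the cited studies' setup.

```python
# Hedged sketch: mixed-effects logistic regression for a binary landslide
# response, with a random intercept per mapping region. Synthetic data;
# all variable names are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "landslide": rng.integers(0, 2, n),     # 1 = landslide presence
    "slope": rng.uniform(0, 45, n),         # fixed effect: slope angle (deg)
    "lithology": rng.integers(0, 4, n),     # fixed effect: rock-type class
    "region": rng.integers(0, 10, n),       # grouping for the random intercept
})

# Fixed effects go in the formula; the random intercept enters as a
# variance component keyed on the grouping variable.
model = BinomialBayesMixedGLM.from_formula(
    "landslide ~ slope + C(lithology)",
    {"region": "0 + C(region)"},
    df,
)
fit = model.fit_vb()                         # variational Bayes fit
print(fit.summary())
```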
“…and input uncertainty (here: ‘How correct is the input to these equations?’). Model uncertainty stems from heuristic choices that are necessary in the process of model creation, including the choice of the statistical modelling approach, the selection of predictor variables, training data sampling, and training data quality (see, for example, Steger et al. (2015); Pourghasemi and Rossi (2016); Zêzere et al. (2017); Depicker et al. (2020); Lima et al. (2021)). In order to estimate these model-intrinsic errors for a chosen modelling approach, cross-validation (CV) is a widely used method in which the data are divided into a number of subsets that are subsequently used for training and testing of the model.…”
Section: Introduction
Citation type: mentioning · Confidence: 99%
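As a minimal sketch of the CV procedure this quote describes (the model, data, and scoring metric below are assumptions for illustration, not the cited studies' exact protocol):

```python
# Minimal sketch of k-fold cross-validation to estimate model-intrinsic
# error for a chosen modelling approach. Synthetic data; AUROC is an
# assumed choice of performance metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                           # stand-in predictor matrix
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)    # stand-in binary response

# Each fold trains on k-1 subsets and tests on the held-out one; the
# spread of the fold scores reflects sensitivity to the training sample.
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print("AUROC per fold:", np.round(scores, 3), "mean:", scores.mean().round(3))
```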