2021
DOI: 10.3390/s21134489

Earthquake-Induced Building-Damage Mapping Using Explainable AI (XAI)

Abstract: Building-damage mapping using remote sensing images plays a critical role in providing quick and accurate information for first responders after major earthquakes. In recent years, there has been increasing interest in generating post-earthquake building-damage maps automatically using different artificial intelligence (AI)-based frameworks. These frameworks are promising, yet not reliable, for several reasons, including but not limited to the site-specific design of the methods, the lack …

Cited by 38 publications (16 citation statements) · References: 53 publications
“…Frisoni et al. (2020) argued that plain-text explanations have been insufficiently formalized, so they used mapping to visualize the position of given keywords in semantic space. Matin and Pradhan (2021) employed SHAP values to explain, via a beeswarm plot, earthquake features within a DNN. Even as such methods are used in the explanation and interpretation of GeoAI (Ma et al., 2021), most visualization explanations are not georeferenced.…”
Section: Geographic Applications of XAI Methods: State-of-the-Art (mentioning)
confidence: 99%
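As a hedged illustration of the beeswarm-style SHAP explanation cited above, here is a minimal sketch that trains a throwaway classifier on synthetic per-building features and plots per-feature SHAP contributions. The feature names, model, and data are hypothetical placeholders of mine, not the actual pipeline of Matin and Pradhan (2021); it assumes a recent version of the shap package.

    # Minimal sketch: SHAP beeswarm explanation of a tabular classifier.
    # Feature names, model, and data are illustrative placeholders, not
    # the pipeline used by Matin and Pradhan (2021).
    import numpy as np
    import pandas as pd
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    # Hypothetical per-building features derived from pre/post-event imagery.
    X = pd.DataFrame({
        "ndvi_change": rng.normal(0, 1, 500),
        "texture_contrast": rng.normal(0, 1, 500),
        "edge_density_change": rng.normal(0, 1, 500),
        "brightness_change": rng.normal(0, 1, 500),
    })
    # Synthetic binary label: 1 = damaged, 0 = intact.
    y = (X["edge_density_change"] + 0.5 * X["texture_contrast"] > 0).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes per-feature SHAP contributions per building;
    # for an sklearn forest the result is (samples, features, classes).
    explainer = shap.TreeExplainer(model)
    shap_values = explainer(X)

    # Beeswarm: one dot per building-feature pair for the "damaged" class.
    shap.plots.beeswarm(shap_values[:, :, 1])

Each dot is positioned by its SHAP value (its push toward or away from the "damaged" class) and colored by the feature's value, which is what makes the beeswarm useful for spotting which inputs drive the model overall.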
“…Well-cited XAI visualization tools (e.g., https://github.com/yosinski/deep-visualization-toolbox) display images as ideal types but lose geographic context such as georeferences. In Section 2, we briefly described Matin and Pradhan's (2021) research on using XAI to explain earthquake damage. XAI and the DNN both extract common features (here, image patches) of all earthquake-damaged buildings at any given layer.…”
Section: Challenges of Using Geovisualization as an Explanation (mentioning)
confidence: 99%
“…Several studies use multiple XAI methods to interpret trained models, such as using Grad-CAM to obtain the contributions of input pixels as well as visualizing the feature maps of selected layers (Xing et al., 2020). Beyond computer science, XAI methods have been applied in various fields, including medicine (Tjoa and Guan, 2020), geography (Cheng et al., 2021), and disaster assessment (Matin and Pradhan, 2021). However, few studies have attempted to interpret models in the field of forestry (Onishi and Ise, 2021), even though deep learning methods have been widely applied there.…”
Section: Introduction (mentioning)
confidence: 99%
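Since Grad-CAM recurs in these statements, a minimal sketch of the technique may help: pool the gradients of a class score over a convolutional layer's spatial dimensions to weight that layer's activation maps, then ReLU and upsample the weighted sum to image size. The PyTorch model, target layer, and random input below are illustrative choices of mine, not tied to any of the cited studies.

    # Minimal Grad-CAM sketch for a CNN classifier (PyTorch). The backbone,
    # target layer, and input are placeholders; any conv net works similarly.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    model = resnet18(weights=None).eval()   # randomly initialized placeholder
    target_layer = model.layer4             # last convolutional block

    activations, gradients = {}, {}
    target_layer.register_forward_hook(
        lambda m, i, o: activations.update(a=o))
    target_layer.register_full_backward_hook(
        lambda m, gi, go: gradients.update(g=go[0]))

    x = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder image
    score = model(x)[0].max()   # logit of the top predicted class
    score.backward()

    # Channel weights = global-average-pooled gradients; CAM = weighted sum
    # of activation maps, rectified and upsampled to the input resolution.
    w = gradients["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((w * activations["a"]).sum(dim=1))
    cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:],
                        mode="bilinear", align_corners=False)

The resulting cam tensor can be overlaid on the input image as a heatmap; targeting the last convolutional block trades spatial resolution for semantic specificity, which is the usual Grad-CAM design choice.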
“…XAI refers to AI algorithms whose output humans can logically interpret to an acceptable degree [24]. Explainability facilitates understanding the influence and contribution of each input feature to an AI model's outputs [25]. Furthermore, XAI can detect bias in the training dataset, thereby ensuring impartiality in decision-making [26].…”
Section: Introduction (mentioning)
confidence: 99%
“…Furthermore, XAI can detect bias in the training dataset, thereby ensuring impartiality in decision-making [26]. XAI has increasingly been applied to the spatial prediction of droughts [27,28] and landslides [29], the mapping of earthquake-induced building damage [25], and urban vegetation mapping [30]. Most of these studies concluded that XAI can not only provide insight into the output of intelligent models but also change our understanding of how ML-based models are used to make informed decisions [28].…”
Section: Introduction (mentioning)
confidence: 99%