2022
DOI: 10.1016/j.engstruct.2021.112883
Machine-learning interpretability techniques for seismic performance assessment of infrastructure systems

Cited by 79 publications (12 citation statements)
References 34 publications
“…In recent years, with the development of artificial intelligence, some algorithms with data at their core have emerged [19]. Among these algorithms, machine learning has received remarkable attention from researchers, and there have been many successful examples [20][21][22][23][24]. In structural engineering, Hoang et al. [25] constructed machine learning-based alternatives for estimating the punching shear capacity of steel fiber reinforced concrete (SFRC) flat slabs.…”
Section: Introduction
confidence: 99%
“…The formula is as follows: g(z′) = φ₀ + Σᵢ₌₁ᴹ φᵢ zᵢ′, where z′ ∈ {0, 1}ᴹ, M is the number of input features, and φᵢ ∈ ℝ. The variables zᵢ′ typically represent a feature being observed (zᵢ′ = 1) or unknown (zᵢ′ = 0), and the φᵢ are the feature attribution values [43, 44, 45].…”
Section: Methods
confidence: 99%
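The formula in the excerpt is the standard additive feature attribution form used by SHAP-style explanations. A minimal sketch in plain Python of evaluating g(z′) = φ₀ + Σᵢ φᵢ zᵢ′ (the base value and attribution numbers below are illustrative, not taken from the paper):

```python
# Additive feature attribution: g(z') = phi_0 + sum_i(phi_i * z_i').
# phi_0 is the base value (e.g., the mean model prediction);
# phi_i is the attribution assigned to feature i;
# z_i' in {0, 1} marks whether feature i is observed (1) or unknown (0).

def explain(phi0, phis, z):
    """Reconstruct the explained model output from attribution values."""
    assert len(phis) == len(z), "one coalition flag per feature"
    return phi0 + sum(phi_i * z_i for phi_i, z_i in zip(phis, z))

# Illustrative values: base prediction 0.40 and three feature attributions.
phi0 = 0.40
phis = [0.15, -0.05, 0.10]

# All features observed: recovers the full prediction (about 0.60).
print(explain(phi0, phis, [1, 1, 1]))
# No features observed: falls back to the base value (0.40).
print(explain(phi0, phis, [0, 0, 0]))
```

This additivity is what makes the attributions interpretable: each φᵢ is the amount feature i moves the prediction away from the base value, and the contributions sum exactly to the model output when every feature is observed.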
“…In other words, the physical relationship between the input features and the output response is not clear (Mangalathu et al., 2022). XAI tries to enhance the transparency of the DLMs by explaining the relationship between the input features (predictors or input variables) and the response variable (Mangalathu et al., 2022). This explainability approach is highly appreciated in the earthquake engineering field where designers, stakeholders, and decision‐makers are highly interested in having insights into the input–output variables relationship and the interpretability of the predicted response from the DLM.…”
Section: The Proposed Procedures
confidence: 99%
“…Explainability of the DLM using XAI. DLMs (Zou et al., 2022) are considered "black boxes" that cannot provide an understandable justification for the predictions obtained. In other words, the physical relationship between the input features and the output response is not clear (Mangalathu et al., 2022). XAI tries to enhance the transparency of the DLMs by explaining the relationship between the input features (predictors or input variables) and the response variable (Mangalathu et al., 2022).…”
Section: 5
confidence: 99%