XNAP: Making LSTM-Based Next Activity Predictions Explainable by Using LRP
2020
DOI: 10.1007/978-3-030-66498-5_10

Citation Types: 0 supporting, 13 mentioning, 0 contrasting

Cited by 21 publications (13 citation statements)
References 19 publications

“…Recently, awareness has increased among PPM practitioners of the necessity of supporting PPM results with explanations. A few approaches have emerged that equip a proposed PPM approach with a post-hoc explanation technique [27,28,29]. [27] proposes an approach that integrates Layer-wise Relevance Propagation (LRP) to explain the next activity predicted by an LSTM model.…”
Section: Related Work (mentioning)
confidence: 99%
“…A few approaches have emerged that equip a proposed PPM approach with a post-hoc explanation technique [27,28,29]. [27] proposes an approach that integrates Layer-wise Relevance Propagation (LRP) to explain the next activity predicted by an LSTM model. This approach propagates relevance scores backwards through the model to indicate which previous activities were crucial to the resulting prediction.…”
Section: Related Work (mentioning)
confidence: 99%
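The statements above give the gist of the technique. As a rough illustration of that backward pass (not the authors' implementation, which applies LRP rules adapted to LSTM gates), the sketch below redistributes a predicted next-activity score onto a toy activity prefix with the epsilon-rule; the activity names, encoding, and weights are all hypothetical:

```python
# A rough sketch of LRP's epsilon-rule, illustrating how a predicted
# next-activity score can be redistributed backwards onto the events of
# an input prefix. XNAP applies LRP to a trained LSTM (with rules suited
# to gated units); a single linear layer stands in here so the rule stays
# readable. Every name, weight, and the activity alphabet is hypothetical.
import numpy as np

def lrp_epsilon(weights, bias, inputs, relevance_out, eps=1e-6):
    """Redistribute output relevance onto the inputs (epsilon-rule).

    weights: (n_out, n_in), bias: (n_out,), inputs: (n_in,),
    relevance_out: (n_out,) relevance arriving from the layer above.
    """
    z = weights @ inputs + bias                    # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabiliser, avoids /0
    s = relevance_out / z                          # relevance per output unit
    return inputs * (weights.T @ s)                # relevance per input dim

rng = np.random.default_rng(0)
activities = ["register", "check", "approve"]      # hypothetical alphabet
prefix = np.array([1.0, 0.0, 1.0])                 # bag-of-activities prefix

W = rng.normal(size=(3, 3))                        # stand-in trained weights
b = np.zeros(3)
scores = W @ prefix + b
predicted = int(np.argmax(scores))

# LRP starts from the predicted class only: its score is the total relevance.
R_out = np.zeros(3)
R_out[predicted] = scores[predicted]
R_in = lrp_epsilon(W, b, prefix, R_out)

for name, r in zip(activities, R_in):
    print(f"{name:>10}: relevance {r:+.3f}")
```

In the full approach described by the citing papers, this pass runs back through every LSTM time step, so each activity in the prefix receives a relevance score rather than each input dimension of a single layer.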
“…7.1 Leveraging PPM with explanations. [28] proposes an approach that integrates Layer-wise Relevance Propagation (LRP) to explain the next activity predicted by an LSTM model. This approach propagates relevance scores backwards through the model to indicate which previous activities were crucial to the resulting prediction.…”
Section: Related Work (mentioning)
confidence: 99%
“…One direction of work uses attribute importance derived from neural networks as an explanation. Weinzierl et al. use layer-wise relevance propagation to derive attribute importance from LSTMs [34]. Sindhgatta et al. use the attention mechanism of LSTMs to extract attribute importance [25].…”
Section: Related Work (mentioning)
confidence: 99%
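The last statement contrasts two ways of deriving attribute importance. A minimal sketch of the attention-based one, under the assumption of a standard softmax attention layer over per-event hidden states (the cited models' actual architectures are not reproduced here):

```python
# A minimal sketch of the attention-based alternative: the softmax
# attention weights computed over per-event hidden states are read off
# directly as importance scores for the events of the prefix. The hidden
# states and attention vector below are random stand-ins, not the cited
# model's parameters.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
seq_len, hidden = 4, 8
H = rng.normal(size=(seq_len, hidden))   # one hidden state per prefix event
v = rng.normal(size=hidden)              # learned attention vector (stand-in)

alpha = softmax(H @ v)                   # one weight per event in the prefix
context = alpha @ H                      # attended summary fed to a classifier

# The attention weights double as the importance attribution.
for t, a in enumerate(alpha):
    print(f"event {t}: attention weight {a:.3f}")
```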