2022
DOI: 10.1016/j.knosys.2022.108773

Building interpretable models for business process prediction using shared and specialised attention mechanisms

Cited by 24 publications (23 citation statements)
References 15 publications

“…For example, Bukhsh et al. [11] proposed the Process Transformer model, which adapts the Transformer network architecture to specific process prediction tasks in order to achieve strong prediction results. Similarly, Wickramanayake et al. [28] proposed two types of attention for predicting future activities: event-level attention, which captures the impact of specific events on the prediction task, and attribute-level attention, which reveals which attributes of an event affect the prediction task. On this basis they designed two attention-based models, a shared and a specialized one.…”
Section: B. Prediction Based On Deep Learning
confidence: 99%
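As a rough illustration of the event-level and attribute-level attention described in this statement, the following is a minimal sketch (in PyTorch, with illustrative layer choices, dimensions, and names that are not taken from the cited papers): attribute-level attention weights the attributes inside each event, event-level attention then weights the events across the trace prefix, and both sets of weights are returned so they can be inspected as explanations.

```python
# Minimal sketch (not the authors' implementation) of two-level attention for
# next-activity prediction: attribute-level attention weights attributes inside
# each event, event-level attention weights events across the prefix.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelAttentionPredictor(nn.Module):
    def __init__(self, attr_dim, n_activities):
        super().__init__()
        self.attr_score = nn.Linear(attr_dim, 1)      # attribute-level attention scores
        self.event_score = nn.Linear(attr_dim, 1)     # event-level attention scores
        self.out = nn.Linear(attr_dim, n_activities)  # next-activity classifier

    def forward(self, x):
        # x: (batch, n_events, n_attrs, attr_dim) -- embedded event attributes
        attr_w = F.softmax(self.attr_score(x), dim=2)              # which attributes matter per event
        event_repr = (attr_w * x).sum(dim=2)                       # (batch, n_events, attr_dim)
        event_w = F.softmax(self.event_score(event_repr), dim=1)   # which events matter
        case_repr = (event_w * event_repr).sum(dim=1)              # (batch, attr_dim)
        # attention weights are returned so they can be inspected as explanations
        return self.out(case_repr), attr_w.squeeze(-1), event_w.squeeze(-1)

model = TwoLevelAttentionPredictor(attr_dim=16, n_activities=8)
x = torch.randn(4, 6, 3, 16)   # 4 prefixes, 6 events, 3 attributes, 16-dim embeddings
logits, attr_attn, event_attn = model(x)
```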
“…The framework proposed in this paper specializes the use of Shapley values to the problem of providing explanations for predictive analytics. Furthermore, the use of SHAP enabled us to provide explanations independently of the underlying predictive model (model-agnostic), whereas model-specific approaches are designed for particular model types; in particular, some approaches [38,39,40,41,42,43] can provide explanations only for neural network models. Other works [16,40] use attention mechanisms, which have the limitation that there is no consensus that attention weights always correlate with feature importance.…”
Section: Explanations In The BPM Field
confidence: 99%
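The model-agnostic property mentioned in this statement can be sketched with shap's KernelExplainer, which only needs a prediction function and background data and therefore works for any model type; the random-forest model and synthetic features below are placeholders, not the framework of the citing paper.

```python
# Minimal sketch of a model-agnostic SHAP explanation: KernelExplainer wraps an
# arbitrary prediction function, so the same code works for any predictive model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

X_train = np.random.rand(200, 5)          # stand-in for encoded event-log prefixes
y_train = np.random.randint(0, 2, 200)    # stand-in for a binary outcome label
model = RandomForestClassifier().fit(X_train, y_train)

background = shap.sample(X_train, 50)                    # background distribution
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X_train[:5])         # per-feature attributions
print(shap_values)                                       # exact list/array layout depends on the shap version
```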
“…The attention mechanism is a resource allocation scheme that strengthens a model's ability to focus on the most critical information [35][36][37][38][39][40][41][42]. The principle of the attention mechanism used in the paper is shown in Fig.…”
Section: Attention Mechanism In The CLSTM Model
confidence: 99%
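As a concrete illustration of attention as a resource allocation scheme, the following is a minimal sketch of standard scaled dot-product attention; the tensor shapes are illustrative and this is not necessarily the exact variant used in the cited CLSTM model.

```python
# Minimal sketch of scaled dot-product attention: softmax weights decide how much
# of each value vector contributes to the output (the "resource allocation" idea).
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (batch, seq_len, d)
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d ** 0.5   # pairwise relevance scores
    weights = F.softmax(scores, dim=-1)           # normalised allocation over positions
    return weights @ V, weights                   # weighted sum of values + the weights

Q = K = V = torch.randn(1, 6, 16)                 # e.g. self-attention over a 6-step trace prefix
out, attn = scaled_dot_product_attention(Q, K, V)
print(attn.shape)                                 # (1, 6, 6): how much each step attends to each other step
```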