2023
DOI: 10.1021/acs.iecr.3c01481

Distance Matrix Patterns for Visual and Interpretable Process Data Analytics

Afrânio Melo,
Fernando Freitas Fadel,
Maurício Melo Câmara
et al.

Abstract: A novel methodology for visual process data analytics based on distance matrices is proposed. A distance matrix is a two-dimensional representation that reflects intrinsic data patterns and is not constrained by a specific model structure. It is shown for the first time that fundamental patterns of process data, such as steps, drifts, and oscillations, can be clearly identified in distance matrices, allowing their use in process monitoring pipelines and as tools to aid process understanding and interpretation.…
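To make the abstract's idea concrete, the sketch below shows how a pairwise distance matrix can reveal a step pattern in a process variable. It is a minimal illustration, not code from the paper: the simulated signal, the fault location at sample 100, and the absolute-difference distance are all assumptions chosen for clarity.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated process variable: steady operation followed by a step change
# (illustrative signal; not data from the paper).
rng = np.random.default_rng(0)
n = 200
x = rng.normal(0.0, 0.1, n)
x[100:] += 1.0  # hypothetical step fault at sample 100

# Pairwise distance matrix D[i, j] = |x[i] - x[j]|.
# For multivariate data, a Euclidean distance over row vectors could be used instead.
D = np.abs(x[:, None] - x[None, :])

# A step shows up as a block pattern: low distances within each operating regime,
# high distances between samples taken before and after the change point.
plt.imshow(D, cmap="viridis", origin="lower")
plt.colorbar(label="|x_i - x_j|")
plt.xlabel("sample j")
plt.ylabel("sample i")
plt.title("Distance matrix of a signal with a step change")
plt.show()
```

Drifts and oscillations produce analogously distinctive structures (gradients and periodic textures, respectively), which is what allows the visual identification described in the abstract.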

Cited by 1 publication (2 citation statements)
References 52 publications (115 reference statements)
“…Among recent research on model interpretability, it is worth highlighting the works of Sivaram et al. [327] and Das et al. [328], who investigated hidden representations in neural networks applied to classification and regression tasks, respectively; Harinarayan and Shalinie [680], who utilized the SHAP method to explain the results of an XGBoost [681] model applied to the TEP benchmark; Yang et al. [533], who developed an unsupervised Bayesian network model for monitoring processes at both global and local levels; Bhakte et al. [682], who proposed a methodology using alarm limits for explaining process monitoring results from deep learning models; Ye et al. [683], who combined the use of frequency spectra inputs with a layer-wise relevance propagation strategy to explain predictions of convolutional neural networks; and Melo et al. [569], who proposed a visual and interpretable methodology based on distance matrix patterns.…”
Section: Model Interpretability
“…Applications of this methodology in process monitoring have been proposed in combination with various techniques, such as convolutional neural networks [371], recurrence quantification [562][563][564][565], and texture analysis [566][567][568]. Melo et al. [569] have recently proposed a novel methodology for visual process data analytics and process monitoring based on a direct analysis of distance matrix patterns.…”