2022
DOI: 10.3390/app12136390
Design and Implementation of an Explainable Bidirectional LSTM Model Based on Transition System Approach for Cooperative AI-Workers

Abstract: Recently, interest in the Cyber-Physical System (CPS) has been increasing in the manufacturing industry environment. Various manufacturing intelligence studies are being conducted to enable faster decision-making through various reliable indicators collected from the manufacturing process. Artificial intelligence (AI) and Machine Learning (ML) have advanced enough to give various possibilities of predicting manufacturing time, which can help implement CPS in manufacturing environments, but it is difficult to s…

Cited by 15 publications (5 citation statements)
References 32 publications
“…• Model-based: Model interpretation is the core aspect of any AI-based system. It ensures better decision-making policies, with fairness, transparency, and accountability of the results, which can give humans sufficient confidence [59]. With the values and accuracy interpreted from the models, they become easier to debug, so the measures needed to improve their performance can be carried out.…”
Section: Explainable AI (mentioning)
confidence: 99%
“…• Model-based: Model interpretation is the core aspect of any AI-based system. It ensures better decision-making policies, with fairness, transparency, and accountability of the results, which can give humans sufficient confidence [58]. With the values and accuracy interpreted from the models, they become easier to debug, so the measures needed to improve their performance can be carried out.…”
Section: Explainable AI (mentioning)
confidence: 99%
“…Firstly, the state space of the abstraction model is significantly reduced, which can save considerable time and space when querying the original RNN, especially when the dataset is large [11]. Secondly, practical model-analysis and verification techniques can be applied to the abstraction model to explain the original RNN [12]. This directly meets the inherent transparency requirement of explainability principles.…”
Section: Introduction (mentioning)
confidence: 99%
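The excerpt above describes the transition-system idea behind the paper: quantize an RNN's continuous hidden-state trajectories into a small set of abstract states and record the transitions between them. A minimal sketch of that abstraction, with a hypothetical coarse-rounding quantizer standing in for whatever clustering the original work uses, and toy trajectories standing in for real LSTM hidden states:

```python
# Sketch only: the quantizer (coarse rounding) and the toy trajectories are
# illustrative assumptions, not the paper's actual abstraction method.
from collections import defaultdict

def abstract_state(hidden, bins=4):
    # Map a continuous hidden state (list of floats) to a discrete label
    # by quantizing each component to a grid of width 1/bins.
    return tuple(round(h * bins) / bins for h in hidden)

def build_transition_system(trajectories, bins=4):
    # Count transitions between abstract states across all trajectories,
    # yielding a finite transition system that summarizes the RNN's dynamics.
    transitions = defaultdict(int)
    for traj in trajectories:
        labels = [abstract_state(h, bins) for h in traj]
        for src, dst in zip(labels, labels[1:]):
            transitions[(src, dst)] += 1
    return dict(transitions)

# Toy 1-dimensional hidden-state trajectories for two input sequences.
trajs = [
    [[0.10], [0.12], [0.60]],
    [[0.11], [0.61], [0.62]],
]
ts = build_transition_system(trajs, bins=4)
print(len(ts))  # → 3 distinct abstract transitions
```

Because many nearby hidden states collapse into one abstract state, the resulting transition system is far smaller than the original state space, which is exactly what makes the analysis and verification techniques mentioned in the excerpt tractable.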