2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
DOI: 10.1109/smc52423.2021.9659223
Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention

Abstract: Visual Question Answering (VQA) models have achieved significant success in recent times. Despite this success, they are mostly black-box models that provide no reasoning about the predicted answer, raising questions about their applicability in safety-critical domains such as autonomous systems and cyber-security. Current state-of-the-art models handle complex questions poorly and are thus unable to exploit compositionality. To minimize the black-box effect of these models and also to make them better exploit …

Cited by 4 publications (2 citation statements)
References 34 publications
“…Layer-wise relevance propagation decomposes the model's prediction by assigning an importance score to the features in each layer of a neural network, providing information about what each component contributed to the model's final prediction (Wenli et al, 2023). Attention mechanisms apply to models that implement attention: visualising the learned attention weights reveals which aspects of the input sequence the model considered when making its predictions (Bendre et al, 2021; Ntrougkas et al, 2022; Gkartzonika et al, 2023).…”
Section: Techniques and Approaches to Explainable AI
confidence: 99%
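The attention-based explanation described above can be sketched in a few lines. This is an illustration only, not the paper's compositional temporal attention model: it computes scaled dot-product attention weights over a toy question and reads off which token received the most weight. The token names and encodings are hypothetical.

```python
import numpy as np

# Illustrative sketch: inspect softmax attention weights to see which
# input token the model "attended to". Encodings are hypothetical.
rng = np.random.default_rng(0)
tokens = ["what", "color", "is", "the", "car"]
d = 8

# Near-orthogonal token encodings: identity basis plus small noise.
keys = np.eye(len(tokens), d) + 0.1 * rng.normal(size=(len(tokens), d))
query = keys[1]  # a query that resembles the encoding of "color"

scores = keys @ query / np.sqrt(d)   # scaled dot-product scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()             # softmax -> attention distribution

for tok, w in zip(tokens, weights):
    print(f"{tok:>6s}: {w:.3f}")
print("most attended:", tokens[int(weights.argmax())])
```

Plotting `weights` against `tokens` gives exactly the kind of visualisation the cited works use to explain a prediction.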
“…Modular hierarchical or collaborative ANN models can be more explainable due to their organization.22,23 While not completely transparent, hierarchical ensemble ANN models might advance the fields of computational neuroscience and applied AI by enabling the design of AI platforms that are transparent enough for ethical examination and functional tuning without losing the strengths of data-based code generation that characterize ML. BHMA models (figure 2) use context-dependent integration ANNs for executive control of encapsulated, purpose-tailored ANN submodules.…”
Section: ANN Models with Biologically Hierarchical and Modular Architectures
confidence: 99%
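The executive-control idea in this statement can be sketched as a gated mixture of submodules. This is our illustration under stated assumptions, not the cited BHMA implementation: an executive gating network softmax-mixes the outputs of encapsulated submodules, so each submodule's contribution to the final output is directly inspectable. All weights and dimensions are hypothetical.

```python
import numpy as np

# Sketch: an executive gate performs context-dependent integration of
# purpose-tailored submodules; gate weights expose which module drove
# the prediction. All parameters here are randomly initialised stand-ins.
rng = np.random.default_rng(1)
d_in, d_out, n_modules = 4, 3, 2

# Encapsulated submodules: independent small nonlinear maps.
module_weights = [rng.normal(size=(d_out, d_in)) for _ in range(n_modules)]
W_gate = rng.normal(size=(n_modules, d_in))  # executive controller

x = rng.normal(size=d_in)
logits = W_gate @ x
gate = np.exp(logits - logits.max())
gate /= gate.sum()                           # context-dependent mixing weights

outputs = np.stack([np.tanh(W @ x) for W in module_weights])
y = gate @ outputs                           # integrated prediction

print("gate weights:", np.round(gate, 3))    # which module dominated
print("output:", np.round(y, 3))
```

Because the gate is an explicit probability distribution over submodules, reading it out per input is the "transparent enough for ethical examination" hook the statement describes.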