2018
DOI: 10.3233/jifs-169457

Conceptual alignment deep neural networks

Abstract: Deep Neural Networks (DNNs) have powerful recognition abilities for classifying different objects. Although DNN models can reach very high accuracy, even beyond human level, they are regarded as black boxes lacking interpretability. In the training process of DNNs, abstract features can be automatically extracted from high-dimensional data, such as images. However, the extracted features are usually mapped into a representation space that is not aligned with human knowledge. In some cases, the in…

Cited by 27 publications (8 citation statements)
References 17 publications
“…A major problem in building a deep architecture for TCM diagnosis is model interpretability. To address the notoriously poor interpretability of DNN models, we propose to use conceptual alignment deep neural networks (CADNNs), 12 which have been tested on the Fashion-MNIST dataset, to embed prior conceptual knowledge and produce interpretable representations in the deep models. By adding constraints to the objective function and using a hierarchically supervised training method, CADNNs can align the representation space of some hidden neurons with human-formed concepts almost without loss of accuracy.…”
Section: Methods
confidence: 99%
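To make the alignment constraint concrete, here is a minimal sketch in PyTorch, assuming multi-label concept annotations per example; the model structure, names, and loss weight are illustrative assumptions, not the CADNN authors' implementation. A slice of the hidden layer is supervised against human-labeled concepts through an auxiliary loss term added to the classification objective:

```python
# Hypothetical sketch of concept-aligned training (not the CADNN authors' code).
# A subset of hidden units receives auxiliary supervision so that their
# activations align with human-labeled concepts, alongside the usual
# classification loss.
import torch
import torch.nn as nn

class ConceptAlignedNet(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=128, n_concepts=10, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.n_concepts = n_concepts          # first n_concepts units act as "concept neurons"
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        pre = self.fc1(x)                     # pre-activation of the hidden layer
        concept_logits = pre[:, : self.n_concepts]  # aligned slice, read as concept logits
        h = torch.relu(pre)
        return self.head(h), concept_logits

model = ConceptAlignedNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
cls_loss = nn.CrossEntropyLoss()
concept_loss = nn.BCEWithLogitsLoss()         # concepts treated as multi-label targets
lam = 0.5                                     # weight of the alignment constraint (assumed)

x = torch.randn(32, 784)                      # dummy batch (e.g., flattened Fashion-MNIST images)
y = torch.randint(0, 10, (32,))               # class labels
c = torch.randint(0, 2, (32, 10)).float()     # human-annotated concept labels

logits, concept_logits = model(x)
loss = cls_loss(logits, y) + lam * concept_loss(concept_logits, c)
opt.zero_grad()
loss.backward()
opt.step()
```

The auxiliary term plays the role of the added constraint: concept neurons are pushed toward human-formed concepts, while the remaining hidden units stay free to support classification accuracy.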
“…The extensions of this article are mainly twofold. First, we have improved the multimodal deep architecture and increased its interpretability by introducing CADNNs. 12 Second, we have added a text-processing channel and enhanced the experiments with an image-text conjoined dataset.…”
Section: Related Work
confidence: 99%
“…The basic principle is as follows: with transfer learning (TL), the “prior knowledge” obtained from “non-single” datasets is applied to CNN training for domain-specific recognition, alleviating the overfitting caused by insufficient data volume in the specific domain. In this paper, TL is used to classify crop diseases, applying the useful skills learned from one or more auxiliary domain tasks to the new targets and tasks [17, 18, 19, 20, 21, 22, 23]. Most research applying TL to crop-disease recognition is based on parameter fine-tuning, as sketched below.…”
Section: Proposed Algorithm
confidence: 99%
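As an illustration of the fine-tuning flavor of TL described in the statement above, here is a hedged sketch assuming a torchvision backbone and a made-up class count; it is not the cited papers' setup. A CNN pretrained on a large source dataset is adapted to a small crop-disease dataset by freezing the transferred features and training only a new classifier head:

```python
# Illustrative transfer-learning sketch (assumed setup, not from the cited work).
import torch
import torch.nn as nn
from torchvision import models

n_disease_classes = 5  # assumed number of crop-disease classes

# "Prior knowledge": a backbone pretrained on a large source dataset
# (downloads ImageNet weights on first use).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze transferred features

# Replace the classifier head; its fresh parameters remain trainable.
model.fc = nn.Linear(model.fc.in_features, n_disease_classes)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

imgs = torch.randn(8, 3, 224, 224)                # dummy crop-leaf image batch
labels = torch.randint(0, n_disease_classes, (8,))

loss = loss_fn(model(imgs), labels)
opt.zero_grad()
loss.backward()
opt.step()
```

Unfreezing deeper layers instead of only the head is a common variant when the target dataset is larger; the freezing boundary is the main knob this sketch exposes.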
“…In [36], a top-down semi-supervised subspace clustering method is proposed to identify a subset of important attributes based on the known label of each data instance. In [37], the authors propose a training method that employs Deep Neural Networks (DNNs) to map sensory data into a representation space aligned with human concepts. The DNNs become interpretable through human knowledge and, given prior knowledge, can be trained with a small amount of training data.…”
confidence: 99%