2022
DOI: 10.1101/2022.07.09.494042
Preprint
Compression-enabled interpretability of voxelwise encoding models

Abstract: Voxel-wise encoding models based on convolutional neural networks (CNNs) have emerged as state-of-the-art predictive models of brain activity evoked by natural movies. Despite the superior predictive performance of CNN-based models, the huge number of parameters in these models has made them difficult for domain experts to interpret. Here, we investigate the role of model compression in building more interpretable and more stable CNN-based voxel-wise models. We used (1) structural compression techniques to pr…
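The "structural compression" the abstract alludes to is commonly instantiated as filter-level pruning: ranking a convolutional layer's output filters by a norm and discarding the weakest so the layer itself shrinks. A minimal sketch, assuming L1-norm ranking; the function name, keep fraction, and toy layer shape are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def prune_filters(weights, keep_fraction=0.5):
    """Structured pruning sketch (hypothetical helper): rank conv
    filters by L1 norm and keep only the strongest fraction, so the
    pruned layer has fewer output channels rather than sparse weights."""
    n_filters = weights.shape[0]
    n_keep = max(1, int(n_filters * keep_fraction))
    # L1 norm of each output filter across its (in_ch, kh, kw) weights
    scores = np.abs(weights).reshape(n_filters, -1).sum(axis=1)
    # indices of the top-scoring filters, in their original order
    keep_idx = np.sort(np.argsort(scores)[::-1][:n_keep])
    return weights[keep_idx], keep_idx

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))  # toy conv layer: 64 filters of 3x3x3
pruned, kept = prune_filters(w, keep_fraction=0.25)
print(pruned.shape)  # (16, 3, 3, 3)
```

Because whole filters are removed (rather than individual weights zeroed), the surviving model stays dense and its remaining channels can be inspected one by one, which is the interpretability angle the abstract emphasizes.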

Cited by 1 publication (1 citation statement)
References 42 publications
“…Recent work has shown that after training a DNN model to perform object recognition, the DNN’s internal representations are predictive of V4 and IT responses both in human and non-human primates [1, 2, 16]. However, these task-driven DNN models have tens of millions of parameters, making it next to impossible to understand the step-by-step computations between image and response [17, 18]. Are such large DNN models necessary?…”
Section: Main
confidence: 99%