2021
DOI: 10.1101/2021.07.08.451521
Preprint
Improved prediction of behavioral and neural similarity spaces using pruned DNNs

Abstract: Deep Neural Networks (DNNs) have become an important tool for modeling brain and behavior. One key area of interest has been to apply these networks to model human similarity judgments. Several previous works have used the embeddings from the penultimate layer of vision DNNs and shown that a reweighting of these features improves the fit between human similarity judgments and DNNs. These studies support the idea that these embeddings form a good basis set but lack the correct level of salience. Here we re…

Cited by 3 publications (2 citation statements)
References 16 publications
“…First, other penalization regimes could be applied. Instead of using an L2-penalty, one could either use an L1-penalty or a combination of L1- and L2-penalization (or pruning, see Tarigopula et al, 2021). We opted for the L2-penalization since we did not want to select a subset of features (as any penalization regime utilizing the L1-norm would do) and since an L1-penalization would incur a greater computational load.…”
Section: Discussion
confidence: 99%
“…Tarigopula et al [230] note that feature reprojection from a high-dimensional to a low-dimensional space, as proposed by Jha et al [215], may cause information loss, leading [215]. The network takes a pair of images as input, and a bottleneck layer projects the learned features to a low-dimensional space.…”
Section: Improving Interpretability With Biologically-informed Neural...
confidence: 99%