2022
DOI: 10.48550/arxiv.2211.01201
Preprint

Human alignment of neural network representations

Cited by 6 publications (12 citation statements)
References 0 publications
“…There are numerous names, definitions, measures, and uses of this form of alignment across various fields, including cognitive science, neuroscience, and machine learning. Some of the other names include latent space alignment (Tucker et al, 2022), concept(ual) alignment (Stolk et al, 2016;Muttenthaler et al, 2022), system alignment (Goldstone & Rogosky, 2002;Roads & Love, 2020;Aho et al, 2022), representational similarity analysis (RSA) (Kriegeskorte et al, 2008), and model alignment (Marjieh et al, 2022b). Shepard (1980) proposed that human representations can be recovered by using behavioral data to measure the similarity of a set of stimuli and then finding embeddings that satisfy those similarity associations using methods like multidimensional scaling (MDS).…”
Section: Related Work
confidence: 99%
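The statement above describes Shepard's (1980) proposal: collect behavioral similarity judgments over a stimulus set, then find embeddings whose pairwise distances respect those judgments, for example via multidimensional scaling (MDS). A minimal sketch of that pipeline, using a hypothetical hand-made dissimilarity matrix in place of real behavioral data:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity matrix for 4 stimuli
# (0 = identical), e.g. aggregated from human similarity judgments.
dissim = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

# Metric MDS recovers low-dimensional embeddings whose pairwise
# Euclidean distances approximate the behavioral dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embeddings = mds.fit_transform(dissim)
print(embeddings.shape)  # (4, 2)
```

The recovered coordinates are only determined up to rotation and translation; what MDS preserves is the relational structure among the stimuli.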
“…Several recent studies have also attempted to identify what design choices lead to improved representational alignment in models (Kumar et al, 2022;Muttenthaler et al, 2022;Fel et al, 2022), although Moschella et al (2022) found that even with variation in design choices, many models trained on the same dataset end up learning similar 'relative representations' (embeddings projected into a relational form like a similarity matrix), or in other words, converge to the same representational space. Tucker et al (2022) showed that representational alignment emerges not only in static settings like image classification, but also dynamic reinforcement learning tasks involving human-robot interaction.…”
Section: Related Work
confidence: 99%
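The 'relative representations' of Moschella et al. (2022) mentioned above project each embedding into a relational form: instead of raw coordinates, every item is described by its similarity to a shared set of anchor items, which makes representations from differently trained models directly comparable. A minimal sketch, using cosine similarity to hypothetical anchors (the variable names and anchor choice are illustrative, not from the cited paper):

```python
import numpy as np

def relative_representation(embeddings, anchors):
    """Describe each row of `embeddings` by its cosine similarity
    to each row of `anchors` (a shared reference set of items)."""
    def unit(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    return unit(embeddings) @ unit(anchors).T

rng = rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 16))   # hypothetical model embeddings
anchors = emb[:3]                # first 3 items serve as anchors

rel = relative_representation(emb, anchors)
print(rel.shape)  # (5, 3): each item described relative to 3 anchors
```

Because cosine similarity is invariant to rotations and rescalings of the embedding space, two models with different coordinate systems but the same relational structure yield the same relative representation.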