“…When using DNNs to model neural systems, one of the fundamental questions researchers hope to answer is: What core factors explain why some DNNs succeed and others fail? Researchers often attribute the success of DNNs to explicit design choices in a model’s construction, such as its architecture, learning objective, and training data (Cadena et al., 2022; Cao & Yamins, 2021a, 2021b; Conwell, Prince, Alvarez, & Konkle, 2022; Dwivedi, Bonner, Cichy, & Roig, 2021; Khaligh-Razavi & Kriegeskorte, 2014; Konkle & Alvarez, 2022; Kriegeskorte, 2015; Lindsay, 2020; Yamins & DiCarlo, 2016; Yamins et al., 2014; Zhuang et al., 2021). However, an alternative perspective explains DNNs’ successes and failures through the geometry of their latent representational subspaces, which abstracts over the details of training procedures and architectures (Chung & Abbott, 2021; Chung, Lee, & Sompolinsky, 2018; Cohen, Chung, Lee, & Sompolinsky, 2020; Jazayeri & Ostojic, 2021; Sorscher, Ganguli, & Sompolinsky, 2021).…”
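To make the geometric perspective concrete, the minimal sketch below (an illustration only, not a procedure from any of the cited papers) compares two hypothetical models of different widths purely by the geometry of their stimulus representations, in the spirit of representational similarity analysis: each model is reduced to a stimulus-by-stimulus dissimilarity matrix, so architectural details such as unit count drop out of the comparison. The function names and the synthetic activations are assumptions introduced here for illustration.

```python
import numpy as np

def rdm(activations: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns evoked by each pair of stimuli.
    `activations` has shape (n_stimuli, n_units)."""
    return 1.0 - np.corrcoef(activations)

def rdm_similarity(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    """Compare two representational geometries by correlating the upper
    triangles of their RDMs; each model's unit count never enters."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

# Hypothetical activations: two models with different widths (256 vs. 1024
# units) responding to the same 100 stimuli, sharing a low-dimensional
# latent structure. Only the pairwise geometry of the stimulus
# representations is compared, not the architectures themselves.
rng = np.random.default_rng(0)
stimulus_codes = rng.standard_normal((100, 16))
acts_model_a = stimulus_codes @ rng.standard_normal((16, 256))
acts_model_b = stimulus_codes @ rng.standard_normal((16, 1024))

print(rdm_similarity(rdm(acts_model_a), rdm(acts_model_b)))
```

Because the comparison operates on dissimilarity structure rather than on the models' parameters, the same analysis applies unchanged to networks that differ in architecture, objective, or training data, which is the sense in which the geometric account abstracts over those design choices.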