“…As the first step toward automating analogical mapping, we adopt semantic representations of individual words generated by a machine-learning model, Word2vec (Mikolov et al., 2013). Word2vec and similar models based on distributional semantics, such as Global Vectors (GloVe; Pennington et al., 2014) and Bidirectional Encoder Representations from Transformers (BERT; Devlin et al., 2019), have proved successful in predicting behavioral judgments of lexical similarity or association (Hill et al., 2015; Hofmann et al., 2018; Pereira et al., 2016; Richie & Bhatia, 2021), neural responses to word and relation meanings (Huth et al., 2016; Pereira et al., 2018; Zhang et al., 2020), and high-level inferences including assessments of probability (Bhatia, 2017; Bhatia et al., 2019) and semantic verification (Bhatia & Richie, in press). In the simulations reported here, the semantic meaning of each individual concept is represented by a 300-dimensional embedding created by Word2vec after training on a corpus of articles drawn from Google News.…”
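As a minimal sketch of how such embeddings are typically used, the snippet below computes cosine similarity between 300-dimensional word vectors. The toy vectors here are random placeholders; in the actual simulations the vectors would be the pretrained Word2vec embeddings from the Google News corpus (commonly accessed via a library such as gensim), which are not assumed here to keep the example self-contained.

```python
import numpy as np

# Hypothetical stand-in embeddings: each word maps to a 300-dimensional
# vector, mirroring the dimensionality of the Google News Word2vec model.
# Random vectors are used purely to illustrate the computation.
rng = np.random.default_rng(0)
embeddings = {word: rng.normal(size=300) for word in ["king", "queen", "apple"]}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors (range [-1, 1])."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantic similarity between two concepts is then a single scalar.
sim = cosine_similarity(embeddings["king"], embeddings["queen"])
```

With real pretrained vectors, semantically related words (e.g., "king" and "queen") yield substantially higher cosine similarity than unrelated pairs, which is the property the cited behavioral and neural predictions rely on.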