“…Word embeddings, most often the word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) varieties, have been used by social scientists to model cultural meaning, representations of intersectionality, and variation in meaning by author income (Kozlowski et al., 2019; Nelson, 2021; Arthurs & Alvero, 2020). Sociologists have also been active in developing new word embedding methods: introducing concept mover's distance to compare how concepts are engaged across documents (Stoltz & Taylor, 2019), combining topic modeling with word embedding approaches, and showing how cultural associations between words can serve as attractors between words and concepts (Boutyline et al., 2021). Most of these studies leverage large word embedding datasets trained on massive amounts of text, such as all of the text on Wikipedia, the entire Google Books corpus, or every digitized newspaper article ever printed in the US.…”
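To make the cited approach concrete: Kozlowski et al. (2019) model cultural meaning by constructing a "cultural dimension" as the difference between embedding vectors of antonym pairs (e.g., rich minus poor) and projecting other words onto it. The sketch below illustrates that projection with tiny made-up vectors; the specific numbers and the four-dimensional space are assumptions for illustration only, whereas real word2vec or GloVe vectors are hundreds of dimensions and trained on large corpora.

```python
import numpy as np

# Toy embeddings (illustrative values only, not real word2vec/GloVe vectors).
emb = {
    "rich":  np.array([ 0.9, 0.1, 0.3,  0.0]),
    "poor":  np.array([-0.8, 0.0, 0.2,  0.1]),
    "opera": np.array([ 0.6, 0.2, 0.4, -0.1]),
    "jazz":  np.array([-0.2, 0.3, 0.5,  0.0]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A cultural dimension in the style of Kozlowski et al. (2019):
# the difference vector between the poles of an antonym pair.
affluence = emb["rich"] - emb["poor"]

# Projecting cultural objects onto the dimension: a higher cosine means
# the word sits closer to the "rich" pole of the affluence dimension.
for word in ("opera", "jazz"):
    print(word, round(cosine(emb[word], affluence), 3))
```

With these toy vectors, "opera" projects closer to the rich pole than "jazz" does, mirroring the kind of cultural association the cited work recovers from real corpora.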