The verbal fluency task (listing words from a category or words that begin with a specific letter) is a common experimental paradigm used to diagnose memory impairments and to understand how we store and retrieve knowledge. Data from the verbal fluency task are analyzed in many different ways, often requiring manual coding that is time-intensive and error-prone. Researchers have also used fluency data from groups or individuals to estimate semantic networks (latent representations of semantic memory that describe the relations between concepts), which further our understanding of how knowledge is encoded. However, the computational methods used to estimate these networks are not standardized and can be difficult to implement, which has hindered widespread adoption. We present SNAFU: the Semantic Network and Fluency Utility, a tool for estimating networks from fluency data and automating traditional fluency analyses, including counts of cluster switches and cluster sizes, intrusions, perseverations, and word frequencies. In this manuscript, we provide a primer on using the tool, illustrate its application by creating a semantic network for foods, and validate the tool by comparing its results to those of trained human coders across multiple datasets.
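To make two of the traditional fluency measures concrete, here is a minimal sketch of counting perseverations (repeated responses) and intrusions (responses outside the target category) from a single participant's list. The function name, inputs, and example vocabulary are illustrative assumptions; this is not SNAFU's actual API.

```python
def fluency_counts(responses, category_vocab):
    """Count perseverations (repeated words) and intrusions
    (words outside the category) in one fluency response list.

    responses: list of words produced by the participant, in order.
    category_vocab: set of words considered valid category members.
    """
    seen = set()
    perseverations = 0
    intrusions = 0
    for word in responses:
        w = word.lower()
        if w in seen:
            perseverations += 1  # word was already produced earlier
        seen.add(w)
        if w not in category_vocab:
            intrusions += 1      # word is not a category member
    return perseverations, intrusions

# Hypothetical "animals" trial: "dog" repeats once, "table" is an intrusion.
animals = {"dog", "cat", "horse", "lion", "tiger", "zebra"}
print(fluency_counts(["dog", "cat", "dog", "table", "lion"], animals))  # (1, 1)
```

In practice these counts are aggregated across participants and trials, and cluster-switch scoring additionally requires a category-to-cluster mapping, which SNAFU supplies.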
Over the last few decades, psychologists have increasingly found that the mind stores and uses the statistics of its environment. However, less work has examined whether those environmental statistics have themselves changed and what such change would imply for the mind. In this chapter, we consider human memory as a solution to the computational problem of predicting which events will happen next given a history of past events. Prior work examining two years of data (1986-1987) found that the environmental statistics of events occurring in the world are reflected in human memory, such as practice and retention effects. We analyse the last century of event statistics by assuming that each word in a headline of The New York Times is an event. We present our methods in the form of a case study: we discuss general practices for behavioural data science projects, standard issues that arise, and how to resolve such issues in the presented analyses. After replicating prior work analysing event statistics in this manner for 1986-1987, we extend the methodology to the last century (1919-2019). Our analyses suggest that events are occurring in denser bursts: when a new event occurs in the last few years, it reoccurs more often in the short term and less often in the long term, compared with events that first occurred in the early 20th century. This suggests that human memory faces different environmental demands than it has in the past and may be adapting to the changing dynamics of event statistics.

Keywords: behavioural data science, rational analysis, human memory
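The core operationalization above (each headline word is an event, and "burstiness" is how soon an event reoccurs) can be sketched as follows. The function, the example headlines, and the 7-day window are illustrative assumptions, not the chapter's actual corpus or parameters.

```python
from collections import defaultdict

def short_term_recurrences(dated_headlines, window_days=7):
    """Treat each word in a dated headline as an event and count,
    per word, how many occurrences follow a previous occurrence
    within window_days (a crude short-term recurrence measure).

    dated_headlines: iterable of (day_number, headline_text) pairs.
    """
    occurrences = defaultdict(list)  # word -> list of days it appeared
    for day, headline in dated_headlines:
        for word in headline.lower().split():
            occurrences[word].append(day)
    recurrences = {}
    for word, days in occurrences.items():
        days.sort()
        # Count consecutive occurrence pairs that fall within the window.
        recurrences[word] = sum(
            1 for prev, nxt in zip(days, days[1:]) if nxt - prev <= window_days
        )
    return recurrences

# Hypothetical mini-corpus: "markets" recurs quickly once, then much later.
headlines = [(0, "markets rally"), (3, "markets slump"), (40, "markets rally again")]
print(short_term_recurrences(headlines))
# {'markets': 1, 'rally': 0, 'slump': 0, 'again': 0}
```

Comparing such recurrence profiles for words that first appear in different decades is one way to ask whether events have become burstier over the century.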