One popular and classic theory of how the mind encodes knowledge is an associative semantic network, in which concepts and associations between concepts correspond to nodes and edges, respectively. A major issue in semantic network research is that there is no consensus among researchers on the best method for estimating the network of an individual or group. We propose a novel method (U-INVITE) for estimating semantic networks from semantic fluency data (listing items from a category), based on a censored random walk model of memory retrieval. We compare this method to several other methods in the literature for estimating networks from semantic fluency data. In simulations, we find that U-INVITE can recover semantic networks with low error rates given only a moderate amount of data. U-INVITE is the only known method derived from a psychologically plausible process model of memory retrieval, and one of only two known methods that are consistent estimators of this process: if semantic memory retrieval is consistent with the censored random walk, the procedure will eventually recover the true network given enough data. We also conduct the first exploration of which estimation methods yield psychologically valid semantic networks, by collecting people's similarity judgments for the edges produced by each method. To encourage best practices, we discuss the merits of each network estimation technique, provide a flow chart that assists with choosing an appropriate method, and supply code for others to apply these techniques to their own data.
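The censored random walk at the heart of this model can be sketched in a few lines. The walk moves between adjacent nodes of a semantic network at random, but only the *first* visit to each node is emitted, so the observed fluency list never contains repeats even though the underlying walk revisits nodes. The toy network, node names, and uniform transition rule below are illustrative assumptions, not the authors' actual implementation:

```python
import random

def censored_random_walk(graph, start, n_steps, seed=None):
    """Walk `graph` (dict: node -> list of neighbors) uniformly at
    random from `start`, emitting each node only on its first visit.
    Repeat visits are censored, which is why the output resembles a
    non-repeating fluency list."""
    rng = random.Random(seed)
    current = start
    emitted = [current]
    seen = {current}
    for _ in range(n_steps):
        current = rng.choice(graph[current])  # uniform step to a neighbor
        if current not in seen:
            seen.add(current)
            emitted.append(current)
    return emitted

# Hypothetical toy semantic network for the category "animals".
toy_graph = {
    "dog": ["cat", "wolf"],
    "cat": ["dog", "lion"],
    "wolf": ["dog", "lion"],
    "lion": ["cat", "wolf", "tiger"],
    "tiger": ["lion"],
}

# A simulated fluency list: repeat-free, ordered by first visit.
print(censored_random_walk(toy_graph, "dog", 50, seed=1))
```

Estimating the network then amounts to inverting this process: finding the graph that makes observed fluency lists likely under such walks.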
An illusion of explanatory depth (IOED) occurs when people believe they understand a concept more deeply than they actually do. To date, IOEDs have been identified only in mechanical and natural domains, obscuring why they occur and suggesting that their implications are quite limited. Six studies illustrated that IOEDs occur because people adopt an inappropriately abstract construal style when they assess how well they understand concrete concepts. As this mechanism predicts, participants who naturally adopted concrete construal styles (Study 1) or were induced to adopt a concrete construal style (Studies 2-4 and 6) experienced diminished IOEDs. Two additional studies documented a novel IOED in the social psychological domain of electoral voting (Studies 5 and 6), demonstrating the generality of the construal mechanism and extending the presumed boundary conditions of the effect beyond mechanical and natural domains. These findings suggest a novel factor that might contribute to such diverse social-cognitive shortcomings as stereotyping, egocentrism, and the planning fallacy, in which people adopt abstract representations of concepts that should be represented concretely.
People frequently rely on explanations provided by others to understand complex phenomena. A fair amount of attention has been devoted to the study of scientific explanation, but less to understanding how people evaluate naturalistic, everyday explanations. Using a corpus of diverse explanations from Reddit's "Explain Like I'm Five" and other online sources, we assessed how well a variety of explanatory criteria predict judgments of explanation quality. We find that while some criteria previously identified as explanatory virtues do predict explanation quality in naturalistic settings, other criteria such as simplicity do not. Notably, we find that people prefer complex explanations that invoke more causal mechanisms to explain an effect. We propose that this preference for complexity is driven by a desire to identify enough causes to make the effect seem inevitable.

Evaluating Everyday Explanations

People are explanatory creatures. We often seek to generate explanations based on our own knowledge of how the world works. However, our ability to generate complete explanations on our own is frequently inadequate. We may not have all of the evidence or the expertise to form accurate models of complex phenomena. So we use the knowledge of experts, friends, and communities to piece together explanations. Our beliefs about science are not limited to intuitive preconceptions, but are also derived from scientists who inform us of how things work. Our beliefs about the economy are affected not only by our own experiences, but also by what economists and politicians tell us about large-scale financial systems. We rely on the explanations of others to form our own beliefs. How, then, do we evaluate the explanations of others?
Explanatory Criteria

A common view has emerged that the quality or value of an explanation can be determined by how well it satisfies a set of criteria known as explanatory virtues (Lipton, 2004; Thagard, 1978; Harman, 1965; Mackonis, 2003; Glymour, 2014; Lombrozo, 2011). However, there is disagreement about what counts as an explanatory virtue, how these virtues are defined and measured, and how they are weighted when we evaluate an explanation. Two commonly proposed virtues are simplicity and coherence. For example, a good explanation should be simple, requiring the fewest causes to explain a phenomenon (e.g., Lombrozo, 2007). A good explanation should also be coherent; it should be compatible with our existing beliefs and consistent with the evidence and with itself (e.g., Thagard, 1989). We may also evaluate an explanation using other criteria, such as the credibility of the explainer or how well the explanation is articulated, that do not reflect the intrinsic value of an explanation. These criteria are useful in satisfying goals beyond identifying the information inherent to an explanation (Patterson, Operskalski, & Barbey, 2015). For instance, a well-articulated explanation can be useful for ...
The verbal fluency task (listing words from a category or words that begin with a specific letter) is a common experimental paradigm used to diagnose memory impairments and to understand how we store and retrieve knowledge. Data from the verbal fluency task are analyzed in many different ways, often requiring manual coding that is time-intensive and error-prone. Researchers have also used fluency data from groups or individuals to estimate semantic networks, latent representations of semantic memory that describe the relations between concepts, which further our understanding of how knowledge is encoded. However, the computational methods used to estimate networks are not standardized and can be difficult to implement, which has hindered widespread adoption. We present SNAFU: the Semantic Network and Fluency Utility, a tool for estimating networks from fluency data and automating traditional fluency analyses, including counting cluster switches and cluster sizes, intrusions, perseverations, and word frequencies. In this manuscript, we provide a primer on using the tool, illustrate its application by creating a semantic network for foods, and validate the tool by comparing its results to those of trained human coders using multiple datasets.
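The traditional fluency measures mentioned above have simple operational definitions that can be sketched generically. The code below is not SNAFU's actual interface; it is a minimal illustration under simplifying assumptions: a perseveration is a repeat of an earlier response, an intrusion is a response outside the category word list, and a cluster switch occurs between adjacent valid responses that share no cluster label. The cluster scheme and example words are made up:

```python
def fluency_measures(responses, category_words, clusters):
    """Count perseverations, intrusions, and cluster switches for one
    fluency trial. `clusters` maps each word to a set of (hypothetical)
    semantic cluster labels."""
    seen = set()
    perseverations = 0
    intrusions = 0
    valid = []  # first-occurrence, in-category responses, in order
    for word in responses:
        if word in seen:
            perseverations += 1  # repeat of an earlier response
            continue
        seen.add(word)
        if word not in category_words:
            intrusions += 1      # response outside the category
        else:
            valid.append(word)
    # A switch: adjacent valid responses with no cluster label in common.
    switches = sum(
        1 for a, b in zip(valid, valid[1:])
        if clusters.get(a, set()).isdisjoint(clusters.get(b, set()))
    )
    return {"perseverations": perseverations,
            "intrusions": intrusions,
            "cluster_switches": switches}

# Hypothetical animal-fluency trial with a made-up cluster scheme.
category = {"dog", "cat", "lion", "tiger", "goldfish", "shark"}
clusters = {
    "dog": {"pets"}, "cat": {"pets"}, "goldfish": {"pets", "aquatic"},
    "lion": {"wild"}, "tiger": {"wild"}, "shark": {"aquatic"},
}
trial = ["dog", "cat", "lion", "lion", "shark", "table"]
print(fluency_measures(trial, category, clusters))
# {'perseverations': 1, 'intrusions': 1, 'cluster_switches': 2}
```

In practice, cluster assignments themselves require a coding scheme (hand-built or derived from a semantic network), which is where manual coding becomes time-intensive and a tool like SNAFU helps.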