Making accurate judgments is an essential skill in everyday life. Although the relation between memory abilities and the processes underlying categorization and judgment has been hotly debated, the question is far from resolved. We contribute to its resolution by investigating how individual differences in memory abilities affect judgment performance in 2 tasks that induced rule-based or exemplar-based judgment strategies. In a study with 279 participants, we investigated how working memory and episodic memory affect judgment accuracy and strategy use. As predicted, participants switched strategies between tasks. Furthermore, structural equation modeling showed that the ability to solve rule-based tasks was predicted by working memory, whereas episodic memory predicted judgment accuracy in the exemplar-based task. Finally, the probability of choosing an exemplar-based strategy was related to better episodic memory, but strategy selection was unrelated to working memory capacity. In sum, our results suggest that different memory abilities are essential for successfully adopting different judgment strategies.
Multitasking poses a major challenge in modern work environments by putting the worker under cognitive load. Performance decrements often occur when people are under high cognitive load because they switch to less demanding (and often less accurate) cognitive strategies. Although cognitive load impairs performance over a wide range of tasks, it may also carry benefits. In the experiments reported here, we showed that judgment performance can increase under cognitive load. Participants solved a multiple-cue judgment task in which high performance could be achieved by using a similarity-based judgment strategy but not by using a more demanding rule-based judgment strategy. Accordingly, cognitive load induced a shift to a similarity-based judgment strategy, which consequently led to more accurate judgments. By contrast, shifting to a similarity-based strategy harmed judgments in a task best solved by using a rule-based strategy. These results show that understanding how people perform in demanding work environments requires considering the cognitive strategies they rely on.
The distinction between similarity-based and rule-based strategies has instigated a large body of research in categorization and judgment. Within both domains, the task characteristics guiding strategy shifts are increasingly well documented. Across domains, past research has observed shifts from rule-based strategies in judgment to similarity-based strategies in categorization, but limited these comparisons to 1 prototypical environment, a linear task structure, and a restricted set of strategies. To systematically compare the 2 domains, we considered several instantiations of rule-based and similarity-based strategies and examined strategy choice across different types of judgment and categorization tasks. Between participants, we varied task characteristics from a 1-dimensional linear task to a multidimensional linear task and 2 multidimensional nonlinear tasks. Irrespective of domain, strategies considered, or model comparison technique used, we found that more participants relied on similarity-based strategies when the functional relationship between the cues and the criterion was nonlinear. Shifts from rule-based strategies in judgment to similarity-based strategies in categorization, however, were rare and most pronounced in 1-dimensional environments. These results support the hypothesis that the cognitive strategies people select to solve a judgment or categorization task depend less on the domain and more on the complexity of the task.
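The two strategy families contrasted above can be sketched computationally. The following is a minimal, hypothetical illustration (function names, distance metric, and the sensitivity parameter are our assumptions, not the authors' implementation): a rule-based strategy abstracts additive cue weights, whereas a similarity-based strategy judges a new item by its similarity to stored exemplars. On a nonlinear (here multiplicative) cue-criterion structure, only the exemplar strategy can reproduce the training items.

```python
import math

def rule_based_judgment(cues, weights, intercept=0.0):
    """Weighted additive rule: criterion = intercept + sum(w_i * cue_i)."""
    return intercept + sum(w * c for w, c in zip(weights, cues))

def similarity(probe, exemplar, sensitivity=1.0):
    """Exponential similarity decreasing with city-block distance."""
    dist = sum(abs(p - e) for p, e in zip(probe, exemplar))
    return math.exp(-sensitivity * dist)

def exemplar_based_judgment(probe, exemplars, criteria, sensitivity=1.0):
    """Similarity-weighted average of the stored exemplars' criterion values."""
    sims = [similarity(probe, ex, sensitivity) for ex in exemplars]
    return sum(s, y := 0) if not sims else sum(
        s * y for s, y in zip(sims, criteria)) / sum(sims)

# A nonlinear (multiplicative) task: criterion = cue1 * cue2.
exemplars = [(0, 0), (0, 1), (1, 0), (1, 1)]
criteria = [c1 * c2 for c1, c2 in exemplars]

# With high sensitivity, the exemplar model closely reproduces the
# nonlinear training items; no single set of additive weights can,
# because the structure is XOR-like.
estimate = exemplar_based_judgment((1, 1), exemplars, criteria, sensitivity=5.0)
```

The city-block distance and exponential similarity follow common exemplar-model conventions (e.g., generalized context models), but any decreasing similarity function would make the same qualitative point.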
Research on quantitative judgments from multiple cues suggests that judgments are simultaneously influenced by previously abstracted knowledge about cue-criterion relations and memories of past instances (or exemplars). Yet extant judgment theories leave 2 questions unanswered: (a) How are past exemplars and abstracted cue knowledge combined to form a judgment? (b) Are all past exemplars retrieved from memory to form the judgment (integrative retrieval), or is the judgment based on one exemplar (competitive retrieval)? To address these questions, we propose and test a new model, CX-COM (combining Cue abstraction with eXemplar memory assuming COMpetitive memory retrieval). In a first step, CX-COM recalls only a single exemplar from memory. In a second step, the judgment implied by the retrieved exemplar is adjusted based on abstracted cue knowledge. Qualitatively, we show that CX-COM naturally captures judgment patterns that have previously been attributed to multiple strategies. Next, we tested CX-COM quantitatively in 2 experiments and found that it accounts well for people's judgment behavior. In the second experiment we additionally tested 2 qualitative predictions of CX-COM: the existence of multimodal response distributions within participants and systematic variability in judgments depending on the distance between similar exemplars in memory. The empirical results confirm CX-COM's assumptions. In sum, the evidence suggests that CX-COM is a viable new model for quantitative judgments and shows the importance of considering judgment variability in addition to average responses in judgment research.
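The two-step logic described in the abstract can be sketched as follows. This is a simplified illustration under our own assumptions, not the published CX-COM specification: retrieval is reduced to picking the single most similar exemplar deterministically (in the full model, retrieval is competitive and probabilistic, which is what produces multimodal response distributions), and the adjustment is a simple weighted pull toward a rule-based estimate.

```python
def cxcom_judgment(probe, exemplars, criteria, weights, adjustment=0.5):
    """Two-step judgment: retrieve one exemplar, then adjust via cue knowledge.

    All parameter values are illustrative placeholders.
    """
    # Step 1: competitive retrieval -- recall a single exemplar from memory
    # (simplified here to the most similar one by city-block distance).
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    best = min(range(len(exemplars)), key=lambda i: dist(probe, exemplars[i]))
    retrieved = criteria[best]

    # Step 2: adjust the retrieved criterion value toward the estimate
    # implied by the abstracted linear cue-criterion weights.
    rule_estimate = sum(w * c for w, c in zip(weights, probe))
    return retrieved + adjustment * (rule_estimate - retrieved)
```

With probabilistic instead of deterministic retrieval in Step 1, repeated judgments of the same probe would land near different retrieved exemplars, yielding the multimodal response distributions the abstract reports.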
Processing social feedback optimistically helps maintain a positive self-image and stable social relationships. Individuals with depression and social anxiety often lack this optimistic bias. Yet the cognitive routes by which social feedback reinforces a negative self-image have remained largely unclear and may differ between depression and social anxiety. A reanalysis of previous studies (n = 450) and a pre-registered replication (n = 807) demonstrated that self-reported depressive symptoms and social anxiety were associated with better learning of negative social evaluations about the self, relative to positive evaluations, in a computerized social evaluation task. Transdiagnostically, this asymmetry was driven by reduced positive trait-like beliefs. Yet in social anxiety this bias reflected a heightened sensitivity to negative social feedback, whereas in depression it co-existed with a blunted response to social feedback. Recognizing such differences in feedback processing may inform approaches to personalizing treatment.
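Asymmetric learning of this kind is commonly formalized with an error-driven update that uses separate learning rates for positive and negative prediction errors. The sketch below is a generic Rescorla-Wagner-style illustration under our own assumptions, not the authors' task model; the learning-rate values are arbitrary.

```python
def update_belief(belief, feedback, lr_pos=0.2, lr_neg=0.2):
    """Update a self-belief in [0, 1] from one evaluation (1 = positive).

    Separate learning rates for positive vs. negative prediction errors
    let the model express a negativity bias (lr_neg > lr_pos).
    """
    error = feedback - belief
    lr = lr_pos if error > 0 else lr_neg
    return belief + lr * error

# With lr_neg > lr_pos, equal amounts of positive and negative feedback
# still drag the self-belief downward over trials.
b = 0.5
for fb in [0, 1, 0, 1]:
    b = update_belief(b, fb, lr_pos=0.1, lr_neg=0.3)
```

In this framing, the abstract's dissociation could correspond to an elevated `lr_neg` in social anxiety versus globally dampened learning rates (blunted feedback response) in depression, though mapping symptoms to parameters is an empirical question.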
One of the earliest discovered laws in psychology is the law of forgetting. The more time has passed between encoding an item and retrieving this item, that is, the longer the retention interval, the less likely people are to recall the item correctly (Ebbinghaus, 1885; Rubin & Wenzel, 1996). At a class reunion 1 year after high school, for instance, the names of former classmates may easily come to your mind. After 20 years, however, you may struggle even to name your former best friends. The course of time makes remembering facts, such as the names of previous classmates (Bahrick, Bahrick, & Wittlinger, 1975), or past events, such as headlines in newspapers (Meeter, Murre, & Janssen, 2005), more difficult. If people forget information with the passage of time, this should also limit their ability to use this information when making judgements and decisions, affecting judgement quality. Although knowledge about how judgement accuracy varies as time passes is limited (Ashton, 2000), it seems that not all judgements are equally affected by the time that has passed. For instance, meteorological forecasters have been shown to be more consistent than forecasters in the business or medical domain (Ashton, 2000). This domain difference could be due to people retrieving different information from memory depending on the judgement strategy they rely on. Suppose, for instance, a hiker tries to forecast every weekend how much rain will fall on a scale from 0 to 40 mm/hr. To judge the precipitation, the hiker may consider how cloudy it is, which shape those clouds have, and how strongly the wind blows. If the hiker correctly remembers how important each of those predictors is to forecast
Weighing the importance of different pieces of information is a key determinant of making accurate judgments. In social judgment theory, these weighting processes have been successfully described with linear models. How people learn to make judgments has received less attention. Although the hitherto proposed delta learning rule can perfectly learn to solve linear problems, reanalyzing a previous experiment showed that it does not adequately describe human learning. To provide a more accurate description of learning processes, we amended the delta learning rule with three learning mechanisms: a decay mechanism, an attentional learning mechanism, and a capacity limitation. An additional study tested the different learning mechanisms in predicting learning in linear judgment tasks. In this study, participants first learned to predict a continuous criterion based on four cues. To test the three learning mechanisms rigorously against each other, we changed the importance of the cues after 200 trials so that the mechanisms make different predictions with regard to how fast people adapt to the new environment. On average, judgment accuracy improved from Trial 1 to Trial 200, dropped when the task environment changed, but improved again until the end of the task. The capacity-restricted learning model, restricting how much people update the cue weights on a single trial, best described and predicted the learning curve of the majority of participants. Taken together, these results suggest that considering cognitive constraints within learning models may help to understand how humans learn when making inferences.
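The best-fitting mechanism, a delta rule whose per-trial weight change is capped, can be sketched as follows. This is our own minimal illustration: the cap implementation (rescaling the update vector when its total magnitude exceeds a capacity parameter) and all parameter values are assumptions, not the paper's exact formalization.

```python
def delta_update(weights, cues, criterion, lr=0.05, capacity=0.1):
    """One trial of a capacity-restricted delta rule.

    Standard delta rule: w_i += lr * (criterion - prediction) * cue_i,
    with the total update magnitude capped at `capacity` per trial.
    """
    prediction = sum(w * c for w, c in zip(weights, cues))
    error = criterion - prediction
    updates = [lr * error * c for c in cues]
    # Capacity restriction: if the summed absolute weight change exceeds
    # the capacity, rescale it so people update only a limited amount.
    total = sum(abs(u) for u in updates)
    if total > capacity:
        updates = [u * capacity / total for u in updates]
    return [w + u for w, u in zip(weights, updates)]
```

Because the cap bounds how far the weights can move on any single trial, this model predicts slower re-adaptation after the cue importances change at Trial 200 than an uncapped delta rule would, which is the kind of divergent prediction the study exploited.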