[Figure: Is there a reproducibility crisis? Of 1,576 researchers surveyed: 52% yes, a significant crisis; 38% yes, a slight crisis; 3% no, there is no crisis; 7% don't know.]

More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. Those are some of the telling figures that emerged from Nature's survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research.

The data reveal sometimes-contradictory attitudes towards reproducibility. Although 52% of those surveyed agree that there is a significant 'crisis' of reproducibility, less than 31% think that failure to reproduce published results means that the result is probably wrong, and most say that they still trust the published literature.

Data on how much of the scientific literature is reproducible are rare and generally bleak. The best-known analyses, from psychology [1] and cancer biology [2], found rates of around 40% and 10%, respectively. Our survey respondents were more optimistic: 73% said that they think that at least half of the papers in their field can be trusted, with physicists and chemists generally showing the most confidence.

The results capture a confusing snapshot of attitudes around these issues, says Arturo Casadevall, a microbiologist at the Johns Hopkins Bloomberg School of Public Health in Baltimore, Maryland. "At the current time there is no consensus on what reproducibility is or should be." But just recognizing that is a step forward, he says. "The next step may be identifying what is the problem and to get a consensus."
This study investigates the relation between vowel identity and emotional state. In Experiment 1, participants invented and articulated (pseudo)words in a positive or negative mood condition. Subjects in a positive mood produced more words containing /i:/, a vowel whose articulation involves the same muscle that is used in smiling, the zygomaticus major muscle (ZMM). Subjects in a negative mood produced more words containing /o:/, which involves an antagonist of the ZMM, the orbicularis oris muscle (OOM). We argue that the link between mood and vowel identity is related to orofacial muscle activity, which provides articulatory feedback to speakers on their emotional state. Experiment 2 tests this hypothesis more specifically. Participants rated the funniness of cartoons while repeatedly articulating either /i:/ (ZMM) or /o:/ (OOM). In line with our hypothesis, the cartoons were rated as funnier by subjects articulating /i:/ than by those articulating /o:/.
We introduce the CAL model (Category Abstraction Learning), a cognitive framework that formally describes category learning as built on similarity-based generalization, dissimilarity-based abstraction, two attention-learning mechanisms, error-driven knowledge structuring, and stimulus memorization. Our hypotheses draw on an array of empirical and theoretical insights connecting reinforcement learning and category learning. The key novelty of the model is its explanation of how rules are learned from scratch, based on three central assumptions. (1) Category rules emerge from two processes operating on independent dimensions: stimulus generalization (similarity) and its direct inverse (category contrast). (2) Two attention mechanisms guide learning by focusing either on rules or on the contexts in which they produce errors. (3) Knowledge of these contexts inhibits executing the rule, without correcting it, and consequently leads to applying partial rules in different situations. The model is designed to capture both systematic and individual differences in a broad range of learning paradigms. We illustrate the model's explanatory scope by simulating several benchmarks, including the classic Six Problems, the 5-4 problem, and linear separability. Beyond the common approach of predicting average response probabilities, we also propose explanations for more recently studied phenomena that challenge existing learning accounts, including effects of task instructions, individual differences in rule extrapolation in three different tasks, and individual attention shifts to stimulus features during learning, among other phenomena. We discuss CAL's relation to other models and its potential to measure the cognitive processes of attention, abstraction, error detection, and memorization from multiple psychological perspectives.
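The abstract does not give CAL's equations, but its first assumption, that rules emerge from stimulus generalization and its direct inverse operating on independent dimensions, can be sketched with a standard attention-weighted exponential similarity kernel from the exemplar-model tradition. A minimal sketch follows; the function names, the specificity parameter `c`, and the attention weights `w` are illustrative assumptions, not CAL's actual formulation.

```python
import numpy as np

def similarity(x, y, w, c=1.0):
    """Attention-weighted exponential similarity (Shepard-style kernel).

    x, y : 1-D arrays of feature values for two stimuli
    w    : attention weights per dimension (sum to 1)
    c    : specificity; higher c gives a steeper generalization gradient
    """
    distance = np.sum(w * np.abs(x - y))
    return np.exp(-c * distance)

def contrast(x, y, w, c=1.0):
    """Direct inverse of similarity: evidence that y does NOT share x's
    category on the attended dimensions (illustrative, not CAL's form)."""
    return 1.0 - similarity(x, y, w, c)

# Toy example: two three-feature stimuli, attention focused on dimension 0
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.0, 0.0, 1.0])
w = np.array([0.8, 0.1, 0.1])
print(similarity(a, b, w, c=2.0))  # low: the stimuli differ on the attended dim
print(contrast(a, b, w, c=2.0))    # correspondingly high
```

Concentrating `w` on a single dimension makes both quantities behave like a one-dimensional rule, which is one way to read the abstract's claim that rules emerge from generalization and contrast on independent dimensions.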
Reward magnitude is a central concept in most theories of preferential decision making and learning. However, it is unknown whether variable rewards also influence cognitive processes when learning how to make accurate decisions (e.g., sorting healthy and unhealthy foods that differ in appeal). To test this, we conducted three studies. Participants learned to classify objects with three feature dimensions into two categories before solving a transfer task with novel objects. During learning, we rewarded all correct decisions, but specific category exemplars yielded a ten-times-higher reward (high vs. low). Counterintuitively, categorization performance did not increase for high-reward stimuli compared with an equal-reward baseline condition. Instead, performance decreased reliably for low-reward stimuli. To analyze the influence of reward magnitude on category generalization, we implemented an exemplar-categorization model and a cue-weighting model using a Bayesian modeling approach. We tested whether reward magnitude affects (a) the availability of exemplars in memory, (b) their psychological similarity to the stimulus, or (c) attention to stimulus features. In all studies, the evidence favored the hypothesis that reward magnitude affects the similarity gradients of high-reward exemplars compared with the equal-reward baseline. The results from additional reward-judgment tasks (Studies 2 and 3) strongly suggest that the cognitive processes of reward-value generalization parallel those of category generalization. Overall, the studies provide insights highlighting the need to integrate reward- and category-learning theories.
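The winning hypothesis, that reward magnitude alters the similarity gradients of high-reward exemplars, can be sketched with an exemplar classifier in the generalized-context-model style in which high- and low-reward exemplars get separate specificity parameters. This is a minimal illustrative sketch, not the authors' fitted Bayesian model: the function name `exemplar_choice_prob`, the parameters `c_high` and `c_low`, and the city-block/exponential kernel are all assumptions.

```python
import numpy as np

def exemplar_choice_prob(probe, exemplars, labels, high_reward, w,
                         c_high=2.0, c_low=1.0):
    """P(category A | probe) under a GCM-style exemplar model in which the
    similarity gradient (specificity c) depends on an exemplar's reward.

    probe       : (d,) feature array for the stimulus to classify
    exemplars   : (n, d) array of stored training exemplars
    labels      : (n,) array of 0/1 category labels (1 = category A)
    high_reward : (n,) boolean array, True where the exemplar was high-reward
    w           : (d,) attention weights over feature dimensions
    c_high/c_low are hypothetical parameters steepening or flattening the
    generalization gradient for high- vs. low-reward exemplars.
    """
    c = np.where(high_reward, c_high, c_low)               # per-exemplar gradient
    dist = np.sum(w * np.abs(exemplars - probe), axis=1)   # weighted city-block
    sim = np.exp(-c * dist)                                # exponential decay
    evidence_a = sim[labels == 1].sum()
    evidence_b = sim[labels == 0].sum()
    return evidence_a / (evidence_a + evidence_b)

# Toy usage: three stored exemplars, one of them high-reward
ex = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1]], dtype=float)
lab = np.array([1, 1, 0])
rew = np.array([True, False, False])
w = np.ones(3) / 3
print(exemplar_choice_prob(np.array([1.0, 1.0, 1.0]), ex, lab, rew, w))
```

With `c_high > c_low`, similarity to high-reward exemplars falls off more steeply with distance, which is one concrete way a reward-dependent similarity gradient could manifest at transfer.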