Research on category-based induction has documented a consistent typicality effect: Typical exemplars promote stronger inferences about their broader category than atypical exemplars. This work has been largely confined to categories whose central tendencies are also the most typical members of the category. Does the typicality effect apply to the broad set of categories for which the ideal category member is considered most typical? In experiments with natural and artificial categories, typicality and induction-strength ratings were obtained for ideal and central-tendency exemplars. Induction strength was greatest for the central-tendency exemplars, regardless of whether the central tendency or the ideal was rated more typical. These results suggest that the so-called "typicality" effect is a special case of a more universal central-tendency effect in category-based induction.
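The contrast between a category's central tendency and its ideal can be made concrete with a toy sketch. For a goal-derived category such as "foods to eat on a diet," the most typical member may sit at an extreme of a feature dimension rather than at its average; all exemplars and numbers below are invented for illustration.

```python
# Hypothetical calorie-like values for exemplars of a goal-derived category
# (smaller is closer to the ideal). All values are invented.
exemplars = {"celery": 1.0, "apple": 3.0, "chicken": 4.5, "pasta": 7.0}

central_tendency = sum(exemplars.values()) / len(exemplars)  # the average member
ideal = min(exemplars.values())                              # an extreme, not an average

# The exemplar nearest each reference point:
nearest_central = min(exemplars, key=lambda k: abs(exemplars[k] - central_tendency))
nearest_ideal = min(exemplars, key=lambda k: abs(exemplars[k] - ideal))

print(nearest_central)  # chicken (closest to the category average)
print(nearest_ideal)    # celery (closest to the ideal extreme)
```

The finding summarized above is that induction strength tracks `nearest_central` even for categories where participants rate `nearest_ideal` as more typical.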
Federal Aviation Administration. Jonathan Rein (TG O'Brien), Nicole Racine (TG O'Brien). Researchers conducted a human-in-the-loop simulation to determine the minimum information required for a supplementary traffic display to effectively support Detect and Avoid (DAA) procedures. Unmanned Aircraft System (UAS) pilots were asked to detect and avoid surrounding aircraft with the aid of a custom-built traffic information display with four configurations (Position, Direction, Prediction, and Rate). The comparisons between traffic display configurations indicated that the minimum information required to support effective DAA procedures corresponded to the Prediction configuration and included the following seven information types: a) aircraft ID, b) position indicator, c) relative altitude, d) heading indicator, e) climb/descend indicator, f) collision threat status alert, and g) visual projection of future position.
TriControl is a controller working position (CWP) prototype developed by the German Aerospace Center (DLR) to enable more natural, efficient, and faster command inputs. The prototype integrates three input modalities: speech recognition, eye tracking, and multi-touch sensing. Air traffic controllers may use all three modalities simultaneously to build commands that are forwarded to the pilot and to the air traffic management (ATM) system. This paper evaluates possible speed improvements of TriControl over conventional systems involving voice transmission and manual data entry. Twenty-six air traffic controllers participated in one of two air traffic control simulation sub-studies, one with each input system. Results show the potential for a 15% speed gain for multimodal controller command input compared with conventional inputs. Thus, the use and combination of modern human-machine interface (HMI) technologies at the CWP can increase controller productivity.
Cognitive Science research is hard to conduct because researchers must take phenomena from the world and turn them into laboratory tasks for which a reasonable level of experimental control can be achieved. Consequently, research necessarily makes tradeoffs between internal validity (experimental control) and external validity (the degree to which a task represents behavior outside of the lab). Researchers thus seek the best possible tradeoff between these constraints, which we refer to as the optimal level of fuzz. We present two principles for finding the optimal level of fuzz in research and then illustrate these principles using research from motivation, individual differences, and cognitive neuroscience.

A hallmark of cognitive science is the interplay of methods from different disciplines. Despite the importance of this interplay, methodological discussions in psychology under the banner of cognitive science tend to focus on statistical issues, such as the possibility that null hypothesis testing may lead research astray (e.g., Killeen, 2006). Much less discussion has centered on how to use the power of multidisciplinary cognitive science to construct research questions in ways that are likely to provide insight into the difficult questions the field must address. In this paper, we present a principle that we call the optimal level of fuzz that we believe can guide good research. We start by defining the concept of fuzz and then discuss a set of principles that can guide researchers toward finding the optimal level of fuzz for their research. Next, we present three case studies of the optimal level of fuzz in action. Finally, we discuss the implications of this principle for research.

What is Fuzz?

Within cognitive science, experimental research in psychology provides data that can be used to constrain theories in neuroscience and psychology and to inspire new computational methods in artificial intelligence and reinforcement learning.
Experimental research in psychology must typically trade off between internal and external validity. Internal validity is the basic idea that our experiments should be free from confounds and alternative explanations, so that the results of our experiments can be unambiguously attributed to the variables we manipulated in our studies. External validity is the degree to which our studies reflect behavior that might actually occur outside the laboratory. Experimental psychology has developed tasks that can be controlled in the laboratory to bring about desired behaviors and has constructed systems for illuminating internal mental processes. For example, many studies use lexical decision tasks, in which a subject is shown strings of letters that may or may not form a word and is asked to judge as quickly as possible whether the string forms a word. This task is quite useful for measuring the activity of concepts during a cognitive process. Tasks like this have been used in a variety of studies ranging from work on language comprehension to studies of goal activation (e.g., Fishbach, Friedman, & Kruglanski, 2003; McNamara, 2005).
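The logic of the lexical decision task can be sketched in a few lines: concepts that are currently active are judged faster. The stimuli, the prime, and the timing values below are hypothetical, and real experiments require millisecond-accurate presentation software; this is only a simulation of the expected pattern.

```python
# Illustrative sketch of how a lexical decision task indexes concept
# activation: strings related to an active goal yield faster "word"
# judgments. All stimuli and timing values are invented.
WORDS = {"table", "river", "diet", "salad"}
NONWORDS = {"blick", "plive"}
GOAL_RELATED = {"diet", "salad"}   # concepts assumed active for this subject

def simulated_rt(stimulus, primed):
    """Simulated reaction time in ms: primed concepts are recognized faster."""
    base = 600 if stimulus in WORDS else 650   # nonwords take slightly longer
    if primed and stimulus in GOAL_RELATED:
        base -= 50                             # facilitation from goal activation
    return base

rts_primed = {s: simulated_rt(s, primed=True) for s in WORDS}
rts_unprimed = {s: simulated_rt(s, primed=False) for s in WORDS}

print(rts_primed["salad"], rts_unprimed["salad"])  # 550 600
```

The difference between the primed and unprimed reaction times for goal-related words is the dependent measure that studies like Fishbach et al. (2003) analyze.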
Research has shown that people's ability to transfer abstract relational knowledge across situations can be heavily influenced by the concrete objects that fill relational roles. This article provides evidence that the concreteness of the relations themselves also affects performance. In 3 experiments, participants viewed simple relational patterns of visual objects and then identified these same patterns under a variety of physical transformations. Results show that people have difficulty generalizing to novel concrete forms of abstract relations, even when objects are unchanged. This suggests that stimuli are initially represented as concrete relations by default. In the 2nd and 3rd experiments, the number of distinct concrete relations in the training set was increased to promote more abstract representation. Transfer improved for novel concrete relations but not for other transformations such as object substitution. Results indicate that instead of automatically learning abstract relations, people's relational representations preserve all properties that appear consistently in the learning environment, including concrete objects and concrete relations.
The Representational Distortion (RD) approach to similarity (e.g., Hahn, Chater, & Richardson, 2003) proposes that similarity is computed using the transformation distance between two entities. We argue that researchers who adopt this approach need to be concerned with how representational transformations can be determined a priori. We discuss several roadblocks to using this approach. Specifically, we demonstrate the difficulties inherent in determining what transformations are psychologically salient and the importance of considering the directionality of transformations.
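A minimal sketch of the Representational Distortion idea, under the simplifying assumption that transformation distance is operationalized as string edit distance; the theory itself allows any set of psychologically salient transformations, and choosing that set a priori is exactly the difficulty the article raises.

```python
import math
from functools import lru_cache

def edit_distance(a, b):
    """Levenshtein distance: minimal number of insert/delete/substitute steps."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return j
        if j == 0:
            return i
        cost = 0 if a[i - 1] == b[j - 1] else 1
        return min(d(i - 1, j) + 1,        # delete from a
                   d(i, j - 1) + 1,        # insert into a
                   d(i - 1, j - 1) + cost) # substitute (or match)
    return d(len(a), len(b))

def similarity(a, b):
    """Similarity decreases with transformation distance."""
    return math.exp(-edit_distance(a, b))

print(edit_distance("ABAB", "ABAA"))  # 1
```

Note that edit distance is symmetric by construction, whereas the article argues that the directionality of transformations matters psychologically; capturing that would require asymmetric operation costs.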
What performance level must automation reach to be a net benefit to the user? This paper presents a meta-analysis of 34 data points taken from 12 studies in the human factors literature, each representing the effect of an imperfect automation aid on system performance, relative to baseline. Bayesian regression analysis indicated a consistent relationship between automation reliability (i.e., overall percent correct) and performance, with values greater than 67% associated with performance gains. The credible interval for this crossover point ranged from 55 to 75%. There was also a consistent effect of d', with a crossover point of 1.47 and a credible interval from -0.04 to 2.22. However, we urge caution in using these values as a benchmark criterion, due to the sizeable uncertainty in the crossover estimates and the variability in how researchers compute false alarm and reliability rates. The question "How good is good enough?" likely does not have a single domain-general answer, with the automation performance threshold varying across task domains and other variables.
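How a crossover point falls out of a regression of automation benefit on reliability can be shown with a toy linear fit. The intercept and slope below are invented so that the crossover lands near the reported 67%; they are not the study's actual Bayesian estimates.

```python
# Hypothetical linear fit: benefit = intercept + slope * reliability.
# Coefficients are illustrative only.

def predicted_benefit(reliability, intercept=-2.0, slope=3.0):
    """Predicted performance change relative to unaided baseline (arbitrary units)."""
    return intercept + slope * reliability

def crossover(intercept=-2.0, slope=3.0):
    """Reliability at which predicted benefit is zero: solve a + b*r = 0 for r."""
    return -intercept / slope

print(round(crossover(), 3))        # 0.667, i.e., roughly 67% reliability
print(predicted_benefit(0.9) > 0)   # True: above the crossover, the aid helps
```

The paper's credible interval (55 to 75%) reflects uncertainty in those fitted coefficients, which is why a single crossover value should not be treated as a benchmark.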