Abstract: We propose that perspective-taking is an important aspect of human-robot interaction. We show how perspective-taking occurs in a naturalistic environment (astronauts working on a collaborative project) and present a cognitive architecture, Polyscheme, for performing perspective-taking. Finally, we show a fully integrated system that instantiates our theoretical framework within a working robot system. Our system successfully solves a series of perspective-taking problems and uses the same frames of reference that astronauts do to facilitate collaborative problem solving with a person.
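The abstract above describes perspective-taking in terms of shared frames of reference but gives no implementation details. As a heavily hedged illustration of one ingredient such a system might need, the sketch below re-expresses a world-coordinate point in another agent's egocentric frame. The function name, the 2-D world, and the astronaut scenario are all hypothetical, introduced only for this example; they are not the paper's actual method.

```python
import math

def to_frame(point, frame_pos, frame_heading):
    """Re-express a world-coordinate point in an agent's egocentric frame.

    frame_heading is in radians, measured from the world +x axis; in the
    result, +x is the agent's forward direction and +y is its left.
    """
    # Translate so the agent sits at the origin, then rotate the world
    # by -frame_heading so the agent's forward direction becomes +x.
    dx = point[0] - frame_pos[0]
    dy = point[1] - frame_pos[1]
    c, s = math.cos(-frame_heading), math.sin(-frame_heading)
    return (dx * c - dy * s, dx * s + dy * c)

# A hypothetical scenario: an astronaut stands at the origin facing +y,
# and a tool lies at world coordinates (2, 0).
tool_in_astronaut_frame = to_frame((2.0, 0.0), (0.0, 0.0), math.pi / 2)
# ~ (0.0, -2.0): the tool is two units to the astronaut's right.
```

Answering "where is the tool from *your* point of view?" then reduces to evaluating `to_frame` with the other agent's pose, which is one simple way a robot could adopt a collaborator's frame of reference.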
Five studies argue against claims that preschoolers understand a biological germ theory of illness. In Studies 1-3, participants were read stories in which characters develop symptoms (e.g., a bellyache) caused by germs, poisons, or events (e.g., eating too much candy) and were asked whether another character could catch the symptoms from the first. Few children made judgments in terms of germs as part of an underlying causal process linking the origin of a symptom to its subsequent transmission. Some children may have reasoned simply that certain kinds of symptoms are likely to be contagious. Studies 4 and 5 undermined the claim that preschoolers understand germs to be uniquely biological causal agents. Young children did not attribute properties to germs as they did for animate beings or for plants. It is suggested that children undergo conceptual reorganization in constructing a Western adult understanding of germs.
We propose that many problems in robotics arise from the difficulty of integrating multiple representation and inference techniques. These include planning and reasoning with noisy sensor information from a changing world, symbol grounding, and data fusion. We describe an architecture that integrates multiple reasoning, planning, sensation, and mobility techniques by composing them from strategies for managing mental simulations. Since simulations are conducted by modules that include high-level artificial intelligence representation techniques as well as robotic techniques for sensation and reactive mobility, cognition, perception, and action are continually integrated. Our work designing a robot based on this architecture demonstrates that high-level cognition can make robot behavior more intelligent and flexible and improve human-robot interaction.
Computational models will play an important role in our understanding of human higher-order cognition. How can a model's contribution to this goal be evaluated? This article argues that three important aspects of a model of higher-order cognition to evaluate are (a) its ability to reason, solve problems, converse, and learn as well as people do; (b) the breadth of situations in which it can do so; and (c) the parsimony of the mechanisms it posits. This article argues that fits of models to quantitative experimental data, although valuable for other reasons, do not address these criteria. Further, using analogies with other sciences, the history of cognitive science, and examples from modern-day research programs, this article identifies five activities that have been demonstrated to play an important role in our understanding of human higher-order cognition. These include modeling within a cognitive architecture, conducting artificial intelligence research, measuring and expanding a model's ability, finding mappings between the structure of different domains, and attempting to explain multiple phenomena within a single model.

Keywords: Higher-order cognition; Human-level intelligence; Cognitive models

Computational modeling is a particularly important part of understanding higher-order cognition. One reason for this is that precise models can help clarify or obviate often troublesome theoretical constructs, such as "representation" and "concept." A second is that the characteristics of human intelligence appear to be so different from other topics of scientific research as to call into question whether a mechanistic account of human intelligence is possible. Being instantiated in a computational model would resolve doubts about whether a theory … Correspondence should be sent to Nicholas L. Cassimatis.
Cognitive modelers attempting to explain human intelligence share a puzzle with artificial intelligence researchers aiming to create computers that exhibit human-level intelligence: how can a system composed of relatively unintelligent parts (such as neurons or transistors) behave intelligently? I argue that although cognitive science has made significant progress towards many of its goals, solving the puzzle of intelligence requires special standards and methods in addition to those already employed in cognitive science. To promote such research, I suggest creating a subfield within cognitive science called intelligence science and propose some guidelines for research addressing the intelligence puzzle. I will call the problem of understanding how unintelligent components can combine to generate human-level intelligence the intelligence problem; the endeavor to understand how the human brain embodies a solution to this problem, understanding human intelligence; and the project of making computers with human-level intelligence, human-level artificial intelligence.
We describe a cognitive architecture for creating more robust intelligent systems by executing hybrids of algorithms based on different computational formalisms. The architecture is motivated by the belief that (1) most existing computational methods often exhibit some of the characteristics desired of intelligent systems at the cost of other desired characteristics and (2) a system exhibiting robust intelligence can be designed by implementing hybrids of these computational methods. The main obstacle to this approach is that the various relevant computational methods are based on data structures and algorithms that are very difficult to integrate into one system. We describe a new method of executing hybrids of algorithms using the focus of attention of multiple modules. This approach has been embodied in the Polyscheme cognitive architecture. Systems based on Polyscheme can integrate reactive robotic controllers, logical and probabilistic inference algorithms, frame-based formalisms and sensor-processing algorithms into one system. Existing applications involve human-robot interaction, heterogeneous information retrieval and natural language understanding. Systems built using Polyscheme demonstrate that algorithmic hybrids implemented using a focus of attention can (1) exhibit more characteristics of intelligence than individual computational methods alone and (2) deal with problems that have formerly been beyond the reach of synthetic computational intelligence.
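The abstract above describes executing hybrids of algorithms through a shared focus of attention, but the mechanism itself is not spelled out. The following is a minimal, hypothetical sketch of that general idea: several "specialist" modules all react to the same currently focused proposition, so their inferences accumulate in one shared store. All class, function, and proposition names are illustrative assumptions, not Polyscheme's actual interfaces.

```python
class Specialist:
    """A module that reacts to the focused proposition with new propositions."""

    def __init__(self, name, rule):
        self.name = name
        self.rule = rule  # callable: proposition -> iterable of propositions

    def react(self, focus):
        return set(self.rule(focus))

def run(specialists, initial, max_steps=100):
    """Cycle a focus of attention over propositions. Every specialist sees
    the same focus each step, so heterogeneous modules stay integrated."""
    known = set(initial)
    queue = list(initial)
    for _ in range(max_steps):
        if not queue:
            break
        focus = queue.pop(0)          # one proposition is focused at a time
        for s in specialists:
            for p in s.react(focus):  # each module may assert new propositions
                if p not in known:
                    known.add(p)
                    queue.append(p)   # new propositions get focused later
    return known

# Two toy specialists: a rule-based reasoner and an inert "perception" stub.
logic = Specialist(
    "logic",
    lambda p: {("graspable", p[1])} if p[0] == "seen" else set(),
)
percept = Specialist("percept", lambda p: set())

facts = run([logic, percept], [("seen", "wrench")])
# facts now contains both ("seen", "wrench") and ("graspable", "wrench").
```

The design point the sketch tries to convey is that modules never call each other directly; they communicate only through what is currently in focus, which is one way incompatible data structures and algorithms can be made to cooperate.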