Virtual simulated environments provide multiple ways of testing cognitive function and evaluating problem solving in humans (e.g., Woollett et al. 2009). Interactive technology has increasingly become an essential part of modern life (e.g., autonomous vehicles, global positioning systems (GPS), and touchscreen computers; Chinn and Fairlie 2007; Brown 2011). While many nonhuman animals have their own forms of "technology", such as chimpanzees that create and use tools, captive animals are rarely given the opportunity to engage actively with interactive technology. Exceptions can be found in some state-of-the-art zoos and laboratory facilities (e.g., Mallavarapu and Kuhar 2005). When interactive technology is available, captive animals often selectively choose to engage with it, which enhances their sense of control over their immediate surroundings (e.g., Clay et al. 2011; Ackerman 2012). Such self-efficacy may help to fulfill basic requirements of a species' daily activities through problem solving involving foraging and other goal-oriented behaviors, and it helps satisfy the strong underlying motivation for contrafreeloading and exploration expressed behaviorally by many species in captivity (Young 1999). Moreover, presenting virtual reality environments to nonhuman primates under experimental conditions provides the opportunity to gain insight into their navigational abilities and spatial cognition, including the generation and application of internal mental representations of landmarks and environments under multiple conditions (e.g., small- and large-scale space) and the spatial behavior that follows. This paper reviews virtual reality methods developed to investigate the spatial cognitive abilities of nonhuman primates, and great apes in particular, in comparison with those of humans across multiple age groups.
We make recommendations about training and best practices, and describe pitfalls to avoid.
Many claim that social stimuli are rewarding to primates, but few, if any, studies have explicitly demonstrated their reward value. Here, we examined whether chimpanzees would produce overt responses for the opportunity to view conspecific social content compared with dynamic (video; Experiment 1) and static (picture; Experiment 2) control content. We also explored the relationships between variation in social reward and social behavior and cognition. We provided captive chimpanzees with access to a touchscreen during four one-hour sessions (two ‘conspecific social’ and two ‘control’). Each session consisted of ten 15-second videos (or pictures in Experiment 2) of either chimpanzees engaging in a variety of behaviors (social condition) or vehicles, humans, or other animals engaged in some activity (control condition). For each chimpanzee, we recorded the number of responses to the touchscreen and the frequency of watching the stimuli. Independent t-tests revealed no sex or rearing differences in touching or watching the social or control videos (p > 0.05). Repeated-measures ANOVAs showed that chimpanzees touched and watched the screen significantly more often during the social than the control video sessions. Although chimpanzees did not touch the screen more often during social than control picture sessions in Experiment 2, they did watch the screen more often. Additionally, chimpanzees that had previously performed better on a task of social cognition and that engaged in more affiliative behavior watched a higher percentage of the social videos during the touchscreen task. These results are consistent with social motivation theory and indicate that social stimuli are intrinsically rewarding, as chimpanzees made more overt responses for the opportunity to view conspecific social content than control content.
Three student projects involving neural networks are described. The projects include recognizing handwritten letters of the alphabet, determining the orientation of an imaged line, and recognizing particular rooms of a house based on samples of furniture found in the rooms. All projects were run on a backpropagation neural network program implemented in Modula-2. A description of the program is presented, and a sample module simulating the behavior of an OR gate is included as an appendix. The program has been used successfully in several Artificial Intelligence classes for classroom demonstrations and for carrying out various cognitive science experiments.
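The abstract's Modula-2 program is not reproduced here, so as an illustration only, the kind of exercise its appendix describes (a backpropagation network learning the OR function) can be sketched in modern Python. The network shape (two inputs, two hidden units, one output), learning rate, and epoch count are assumptions for this sketch, not details from the original program.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_or_gate(epochs=5000, lr=0.5, seed=0):
    """Train a 2-2-1 backpropagation network on the OR truth table.

    Hyperparameters are illustrative assumptions, not from the original
    Modula-2 program. Returns a predict(x) function.
    """
    random.seed(seed)
    # Randomly initialized weights: input->hidden, hidden->output, plus biases.
    w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b_h = [random.uniform(-1, 1) for _ in range(2)]
    w_ho = [random.uniform(-1, 1) for _ in range(2)]
    b_o = random.uniform(-1, 1)
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

    for _ in range(epochs):
        for x, t in data:
            # Forward pass.
            h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j])
                 for j in range(2)]
            o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)
            # Backward pass: squared-error gradient with sigmoid derivative.
            d_o = (o - t) * o * (1 - o)
            d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
            # Gradient-descent weight updates.
            for j in range(2):
                w_ho[j] -= lr * d_o * h[j]
                for i in range(2):
                    w_ih[j][i] -= lr * d_h[j] * x[i]
                b_h[j] -= lr * d_h[j]
            b_o -= lr * d_o

    def predict(x):
        h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j])
             for j in range(2)]
        return sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)

    return predict

predict = train_or_gate()
print([round(predict(x)) for x in ([0, 0], [0, 1], [1, 0], [1, 1])])
```

Because OR is linearly separable, even this tiny network converges quickly; the handwriting, line-orientation, and room-recognition projects described above would use the same training loop with larger input and output layers.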