The research described in this article used a visual search task and demonstrated that the eye region alone can produce a threat superiority effect. Indeed, the magnitude of the threat superiority effect did not increase with whole-face, relative to eye-region-only, stimuli. The authors conclude that the configuration of the eyes provides a key signal of threat, which can mediate the search advantage for threat-related facial expressions.
The extent to which the phenomenon of categorical perception reflects underlying perceptual asymmetry, or merely the engagement of category labels, has been the subject of much recent debate (Bornstein & Korda, 1984; Calder, Young, Perrett, Etcoff, & Rowland, 1996; Harnad, 1987; Munnich & Landau, 2003; Pilling, Wiggett, Özgen, & Davies, 2003; Young et al., 1997). Categorical perception (CP) is the name given to the increased sensitivity to a physical change found when that change crosses the boundary between two perceptual categories (Harnad, 1987). It has been demonstrated for simple perceptual continua such as phoneme blends (Pastore, 1987) and color categories (Roberson, Davies, & Davidoff, 2000; Pilling et al., 2003), and a number of recent studies have examined the phenomenon with regard to face processing. A steady change between two faces (differing in either identity or facial expression) can be created using morphing software (Beale & Keil, 1995; Calder et al., 1996). However, rather than being perceived as a monotonic linear progression, the continuum of identity or expression is perceived as having an abrupt discontinuity at the boundary between two categories (e.g., happy-sad). Demonstrations of CP for morphed facial expressions (Etcoff & Magee, 1992; Calder et al., 1996; Young et al., 1997), as well as for facial identity (Beale & Keil, 1995; Campanella, Hanoteau, Seron, Joassin, & Bruyer, 2003; Stevenage, 1998), have shown that it is relatively hard to make discriminations near the center of a category and easy close to a boundary (Calder et al., 1996). Young et al. (1997) used an X-AB two-alternative forced-choice (2-AFC) memory paradigm in which a series of morphed facial expressions were created, with an equal amount of physical difference between each successive pair in the continuum (see Figure 1). Participants viewed one of the morphs as a target expression, followed by a test pair of images (target and distractor) differing only in morphed expression.
Their task was to identify the position of the target face in the test pair by keypress. Participants identified expressions more accurately from pairs that crossed a category boundary (e.g., happy-fearful) than from pairs whose members both belonged to the same category (e.g., both happy). The finding is robust, but the locus of the effect remains unclear. Some researchers have suggested that CP arises at an early perceptual level, or even that, for domains such as color, it might be innate and hardwired into the visual system (Bornstein, Kessen, & Weiskopf, 1976; Franklin & Davies, 2004). Young et al. (1997) argued for perceptually (rather than verbally) mediated CP for facial expressions, because labeling emotional facial expressions is actually more difficult at boundaries than at the center of categories, where a "prototypical" emotional face elicits a more consistent and rapid response. Thus it seems unlikely that verbal labeling could underlie the superior discrimination of emotional expressions at category boundaries.
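The X-AB prediction described above can be illustrated with a toy simulation: a continuum with equally spaced physical steps yields unequal discriminability once morphs are coded categorically, so adjacent pairs that straddle the boundary are easier to tell apart than within-category pairs. This is only a minimal sketch; the logistic category function, the boundary position, and the steepness value are illustrative assumptions, not parameters from the cited studies.

```python
import math

def category_response(morph_pos, boundary=0.5, steepness=12.0):
    # Logistic probability that a morph at position morph_pos
    # (0 = one expression, 1 = the other) is assigned to the second
    # category. boundary and steepness are illustrative values only.
    return 1.0 / (1.0 + math.exp(-steepness * (morph_pos - boundary)))

def pair_discriminability(a, b):
    # Under a purely categorical code, discriminability tracks the
    # difference in category response, not raw physical distance.
    return abs(category_response(a) - category_response(b))

# A seven-step morph continuum with equal physical spacing, as in an
# X-AB design: every adjacent pair differs by the same physical amount.
continuum = [i / 6 for i in range(7)]
adjacent_pairs = list(zip(continuum, continuum[1:]))
scores = [pair_discriminability(a, b) for a, b in adjacent_pairs]

for (a, b), s in zip(adjacent_pairs, scores):
    print(f"pair {a:.2f}-{b:.2f}: discriminability {s:.3f}")
```

Running the sketch shows the signature CP pattern: the two pairs adjacent to the 0.5 boundary score highest, while pairs near either end of the continuum, though physically just as far apart, score lowest.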
In this study, we used the distinction between remember and know (R/K) recognition responses to investigate the retrieval of episodic information during familiar face and voice recognition. The results showed that familiar faces presented in standard format were recognized with R responses on approximately 50% of the trials. The corresponding figure for voices was less than 20%. Even when overall levels of recognition were matched between faces and voices by blurring the faces, significantly more R responses were observed for faces than for voices. Voices were significantly more likely to be recognized with K responses than were blurred faces. These findings indicate that episodic information was recalled more often from familiar faces than from familiar voices. The results also showed that episodic information about a familiar person was never recalled unless some semantic information, such as the person's occupation, was also retrieved.
Previous studies demonstrate that lexical coding of colour influences categorical perception of colour, such that participants are more likely to rate two colours as similar if they belong to the same linguistic category (Roberson et al., 2000, 2005). Recent work shows changes in Greek–English bilinguals' perception of within- and cross-category stimulus pairs as a function of the availability of the relevant colour terms in semantic memory, and the amount of time spent in the L2-speaking country (Athanasopoulos, 2009). The present paper extends Athanasopoulos' (2009) investigation by looking at cognitive processing of colour in Japanese–English bilinguals. Like Greek, Japanese contrasts with English in that it has an additional monolexemic term for ‘light blue’ (mizuiro). The aim of the paper is to examine to what degree linguistic and extralinguistic variables modulate Japanese–English bilinguals' sensitivity to the blue/light blue distinction. Results showed that those bilinguals who used English more frequently distinguished blue and light blue stimulus pairs less well than those who used Japanese more frequently. These results suggest that bilingual cognition may be dynamic and flexible, as the degree to which it resembles either monolingual norm is, in this case, fundamentally a matter of frequency of language use.
We make sense of objects and events around us by classifying them into identifiable categories. The extent to which language affects this process has been the focus of a long-standing debate: Do different languages cause their speakers to behave differently? Here, we show that fluent German-English bilinguals categorize motion events according to the grammatical constraints of the language in which they operate. First, as predicted from cross-linguistic differences in motion encoding, participants functioning in a German testing context prefer to match events on the basis of motion completion to a greater extent than participants in an English context. Second, when participants suffer verbal interference in English, their categorization behavior is congruent with that predicted for German and when we switch the language of interference to German, their categorization becomes congruent with that predicted for English. These findings show that language effects on cognition are context-bound and transient, revealing unprecedented levels of malleability in human cognition.