Previous research has shown that two heads working together can outperform one working alone, but whether such benefits result from social interaction or from the statistical facilitation of independent responses is not clear. Here we apply Miller's (Cognitive Psychology, 14, 247-279, 1982; Ulrich, Miller, & Schröter, Behavior Research Methods, 39(2), 291-302, 2007) race model inequality (RMI) to distinguish between these two possibilities. Pairs of participants completed a visual enumeration task, both as independent individuals and as two members of a team. The results showed that team performance exceeded the efficiency of two individuals working independently, indicating that interpersonal interaction underlies the collaborative gains in this task. This interpretation was bolstered by analyses showing that the magnitude of the collaborative benefit was positively mediated by the strength of social affiliation and by the similarity of verbal communication among team members. This research serves as a proof-of-concept that Miller's RMI can differentiate between interactive and independent effects of collaborative cognition. Moreover, the finding that social affiliation and communication similarity each contribute to the collaborative benefit suggests new avenues of research for establishing the mechanisms supporting collaborative cognition.
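The race-model test at the heart of this method can be sketched in a few lines. The inequality states that if the team is merely a "race" between two independent responders, the team's cumulative response-time distribution can never exceed the sum of the two individual distributions, F_team(t) ≤ F_A(t) + F_B(t); a violation implies genuine interaction. The sketch below uses simulated data and illustrative names (`ecdf`, `rmi_violation`), not the authors' actual analysis code.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of response times, evaluated at times t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

def rmi_violation(rt_a, rt_b, rt_team, t):
    """Team CDF minus the race-model bound; positive values violate the RMI."""
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_b, t), 1.0)
    return ecdf(rt_team, t) - bound

# Simulated response times (ms): the team is faster than either individual.
rng = np.random.default_rng(0)
rt_a = rng.normal(600, 80, 200)
rt_b = rng.normal(620, 80, 200)
rt_team = rng.normal(520, 70, 200)
probes = np.linspace(300, 900, 13)
print(np.round(rmi_violation(rt_a, rt_b, rt_team, probes), 3))
```

Positive values at early probe times would indicate responses faster than any race between independent responders can produce, which is the signature of interaction the study relies on.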
Visual search involves the coordination of looking (moving one's gaze to new locations) and seeing (distinguishing targets and nontargets). These two aspects of visual search are distinct from one another because high-acuity vision is possible only in a small region at the center of gaze (the fovea), and only when the eyes are stationary (a fixation). To sample detailed information from an extended scene, the eyes must move abruptly (saccade) from one location to another. In the typical inspection of a scene, this fixation-saccade cycle is repeated 3-4 times/sec.

The efficiency of visual search (how rapidly and accurately the target is found) is typically measured by the time that elapses between the first glimpse of a scene and a response indicating target detection. This entails a direct trading relation between seeing and looking: Longer fixations increase information fidelity from each location at the cost of exploring fewer locations, whereas quickly exploring many locations results in reduced fidelity at each one. Studies comparing human oculomotor behavior with an ideal psychophysical observer have indicated that many participants come close to optimizing this trade-off in search (Najemnik & Geisler, 2005).

In the present study, we explored the consequences of adopting particular cognitive strategies for this trading relationship. Several studies have shown that participants who are instructed to search passively search more efficiently than those who are instructed to search actively (complete instructions are in the Method section) (Smilek, Dixon, & Merikle, 2006; Smilek, Enns, Eastwood, & Merikle, 2006). Smilek, Enns, et al. (2006) hypothesized that the passive strategy gives automatic processes more influence over spatial attention, whereas the active strategy encourages greater reliance on unnecessary executive processes (cf. Wolfe, Alvarez, & Horowitz, 2000). This interpretation was bolstered by a second experiment in Smilek, Enns, et al. (2006) showing that search was improved when participants performed a simultaneous task that occupied executive processes.

In the present study, we asked three broad questions concerning cognitive strategies and eye movements. First, is there any relationship between the two at all? It may be that strategy has no effect on eye movements, in that all participants use their eyes to sample information in essentially the same way. If so, the passive advantage found by Smilek, Dixon, and Merikle (2006) and Smilek, Enns, et al. (2006) may be purely cognitive, reflecting differences in the way scene information is processed after the eyes have sampled it.

Second, if strategies alter eye movements, which oculomotor measures are affected? We hypothesized that passively instructed searchers would shift their emphasis toward looking less and seeing more, spending more time on individual fixations than active searchers do. We also expected differences in other oculomotor behaviors. There are at least two ways one could see more: by expanding the attentional window of each fixation (i...
Working together feels easier with some people than with others. We asked participants to perform a visual search task either alone or with a partner while simultaneously measuring each participant's EEG. Local phase synchronization and inter-brain phase synchronization were generally higher when participants jointly attended to a visual search task than when they attended to the same task individually. Some dyads searched the visual display more efficiently and made faster decisions when working as a team, whereas other dyads did not benefit from working together. These inter-team differences in behavioral performance gain in the visual search task were reliably associated with inter-team differences in local and inter-brain phase synchronization. Our results suggest that phase synchronization constitutes a neural correlate of social facilitation, and may help to explain why some teams perform better than others.
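Phase synchronization of the kind reported here is commonly quantified with a phase-locking value (PLV): the magnitude of the average unit vector of the phase difference between two signals, ranging from 0 (no consistent relationship) to 1 (constant phase lag). Below is a minimal sketch, assuming instantaneous phases have already been extracted (e.g., from band-passed EEG via a Hilbert transform); the function name and simulated phase series are illustrative, not the authors' pipeline.

```python
import numpy as np

def plv(phase_x, phase_y):
    """Phase-locking value: magnitude of the mean phase-difference vector."""
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Simulated 10 Hz phase ramps over 1 s.
t = np.linspace(0.0, 1.0, 500)
theta = 2 * np.pi * 10 * t
print(plv(theta, theta + 0.5))  # constant lag -> PLV = 1.0
rng = np.random.default_rng(1)
print(plv(theta, rng.uniform(0, 2 * np.pi, 500)))  # unrelated phases -> PLV near 0
```

For inter-brain measures, `phase_x` and `phase_y` would come from electrodes on two different participants; "local" synchronization pairs electrodes within one head.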
This study compared visual search under everyday conditions among participants across the life span (healthy participants in 4 groups, with average ages of 6 years, 8 years, 22 years, and 75 years, and 1 group averaging 73 years with a history of falling). The task involved opening a door and stepping into a room to find 1 of 4 everyday objects (apple, golf ball, coffee can, toy penguin) visible on shelves. The background for this study included 2 well-cited laboratory studies that pointed to different cognitive mechanisms underlying each end of the U-shaped pattern of visual search over the life span (Hommel et al., 2004; Trick & Enns, 1998). The results recapitulated some of the main findings of the laboratory studies (e.g., a U-shaped function, dissociable factors for maturation and aging), but there were several unique findings. These included large differences in the baseline salience of common objects at different ages, visual eccentricity effects that were unique to aging, and visual field effects that interacted strongly with age. These findings highlight the importance of studying cognitive processes in more natural settings, where factors such as personal relevance, life history, and bodily contributions to cognition (e.g., limb, head, and body movements) are more readily revealed.
Not all cognitive collaborations are equally effective. We tested whether friendship and communication influenced collaborative efficiency by randomly assigning participants to complete a cognitive task with a friend or non-friend, while visible to their partner or separated by a partition. Collaborative efficiency was indexed by comparing each pair’s performance to an optimal individual performance model of the same two people. The outcome was a strong interaction between friendship and partner visibility. Friends collaborated more efficiently than non-friends when visible to one another, but a partition that prevented pair members from seeing one another reduced the collaborative efficiency of friends and non-friends to a similar lower level. Secondary measures suggested that verbal communication differences, but not psychophysiological arousal, contributed to these effects. Analysis of covariance indicated that females contributed more than males to overall levels of collaboration, but that the interaction of friendship and visibility was independent of that effect. These findings highlight the critical role of partner visibility in the collaborative success of friends.
The exploration of a familiar object by hand can benefit its identification by eye. What is unclear is how much this multisensory cross-talk reflects shared shape representations versus generic semantic associations. Here, we compare several simultaneous priming conditions to isolate the potential contributions of shape and semantics in haptic-to-visual priming. Participants explored a familiar object manually (haptic prime) while trying to name a visual object that was gradually revealed in increments of spatial resolution. Shape priming was isolated in a comparison of identity priming (shared semantic category and shape) with category priming (same category, but different shapes). Semantic priming was indexed by the comparisons of category priming with unrelated haptic primes. The results showed that both factors mediated priming, but that their relative weights depended on the reliability of the visual information. Semantic priming dominated in Experiment 1, when participants were free to use high-resolution visual information, but shape priming played a stronger role in Experiment 2, when participants were forced to respond with less reliable visual information. These results support the structural description hypothesis of haptic-visual priming (Reales and Ballesteros in J Exp Psychol Learn Mem Cogn 25:644-663, 1999) and are also consistent with the optimal integration theory (Ernst and Banks in Nature 415:429-433, 2002), which proposes a close coupling between the reliability of sensory signals and their weight in decision making.
Does person perception (the impressions we form from watching others) hold clues to the mental states of people engaged in cognitive tasks? We investigated this with a two-phase method: In Phase 1, participants searched on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2, other participants rated the searchers' video-recorded behavior. The results showed that blind raters are sensitive to individual differences in search proficiency and search strategy, as well as to environmental factors affecting search difficulty. Also, different behaviors were linked to search success in each setting: Eye movement frequency predicted successful search on a computer screen; head movement frequency predicted search success in an office. In both settings, an active search strategy and positive emotional expressions were linked to search success. These data indicate that person perception informs cognition beyond the scope of performance measures, offering the potential for new measurements of cognition that are both rich and unobtrusive.

Keywords: Visual search · Eye movements and visual attention · Attention

"You can observe a lot by just watching." -Widely attributed to Yogi Berra

A typical experiment in cognitive psychology involves the presentation of a stimulus in a controlled laboratory setting, systematic variation of the conditions under which the stimulus is presented, and measurement of the participant's response with a combination of keypresses, brief vocal responses, eye movements, and limb actions. Cognitive researchers almost never look directly at participants while they perform in an experiment, leaving it an open question whether they are missing key features of visible behavior that are relevant to the mental processes under investigation.
The purpose of the present study is to ask whether researchers can enhance their understanding of cognition by adding measures of person perception to their standard toolbox of performance measurements.

Our motivation is both practical and theoretical. On the practical side, most personal computers today come equipped with a built-in webcam aimed directly at the user. We can think of no reason why this resource should lie dormant without consideration of its research potential. Theoretically, many studies in social psychology over the past decade have demonstrated the surprising reliability and validity of thin-slicing, referring to the ability of persons to make rapid evaluations of the personality, disposition, and intent of others from very small samples of their behavior (Ambady, Bernieri, & Richeson, 2000; Ambady, Hallahan, & Rosenthal, 1995; Borkenau & Liebler, 1995; Borkenau, Mauer, Riemann, Spinath, & Angleitner, 2004; Carney, Colvin, & Hall, 2007; Gladwell, 2007; Naumann, Vazire, Rentfrow, & Gosling, 2009; Rule, Macrae, & Ambady, 2009; Weisbuch & Ambady, 2011). Why should cognitive researchers not also consider this potential signal?

A second theoretical motivation comes from the growing interest in emotional, social, and motivational influence...