In daily life, humans often perform visual tasks, such as solving puzzles or searching for a friend in a crowd. Performing these visual searches jointly with a partner can be beneficial: the two task partners can devise effective division of labour strategies and thereby outperform individuals who search alone. To date, it is unknown whether these group benefits scale up to triads or whether the cost of coordinating with others offsets any potential benefit for group sizes above two. To address this question, we compared participants' performance in a visual search task that they performed either alone, in dyads, or in triads. When the search task was performed jointly, co-actors received information about each other's gaze location. After controlling for speed-accuracy trade-offs, we found that triads searched faster than dyads, suggesting that group benefits do scale up to triads. Moreover, we found that triads divided the search space in accordance with the co-actors' individual search performances but searched less efficiently than dyads. We also present a statistical model to predict group benefits, which accounts for 70% of the variance. The model includes our experimental factors and a set of non-redundant predictors, quantifying the similarities in the individual performances, the collaboration between co-actors, and the estimated benefits that co-actors would attain without collaborating. Overall, the present study demonstrates that group benefits scale up to larger group sizes, but the additional gains are attenuated by the increased costs associated with devising effective division of labour strategies.
Brand names are often considered a special type of word that is particularly relevant for examining the role of visual codes during reading: unlike common words, brand names are typically presented with the same letter-case configuration (e.g., IKEA, adidas). Recently, Pathak et al. (European Journal of Marketing, 2019, 53, 2109) found an effect of visual similarity for misspelled brand names when participants had to decide whether the brand name was spelled correctly (e.g., tacebook [baseword: facebook] was responded to more slowly and less accurately than xacebook). This finding is at odds with both orthographically based visual-word recognition models and prior experiments using misspelled common words (e.g., viotin [baseword: violin] is identified as fast as viocin). To solve this puzzle, we designed two experiments in which participants had to decide whether the presented item was written correctly. In Experiment 1, following a procedure similar to Pathak et al. (European Journal of Marketing, 2019, 53, 2109), we examined the effect of visual similarity on misspelled brand names with/without graphical information (e.g., anazon vs. atazon [baseword: amazon]). Experiment 2 was parallel to Experiment 1, but we focused on misspelled common words (e.g., anarillo vs. atarillo; baseword: amarillo [yellow in Spanish]). Results showed a sizeable effect of visual similarity on misspelled brand names, regardless of their graphical information, but not on misspelled common words. These findings suggest that visual codes play a greater role when identifying brand names than common words. We examined how models of visual-word recognition can account for this dissociation.
Recent research has shown that omitting the accent mark in a word in Spanish, a language in which these diacritics only indicate lexical stress, does not cause a delay in lexical access (e.g., cárcel [prison] ≈ carcel; cárcel-CÁRCEL ≈ carcel-CÁRCEL). This pattern has been interpreted as accented and nonaccented vowels sharing the same abstract letter representations in Spanish. However, adding an accent mark to a nonaccented Spanish word appears to produce a reading cost in masked priming paradigms (e.g., féliz-FELIZ [happy] > feliz-FELIZ). To solve this puzzle, we examined in two semantic categorization experiments whether adding an accent mark to a nonaccented Spanish word slows down lexical access. We added an accent mark either on the stressed syllable (Experiment 1, e.g., cébra for the word cebra [zebra]) or on an unstressed syllable (Experiment 2, e.g., cebrá). While effect sizes were small in magnitude, adding an accent mark produced a cost relative to the intact words, especially when the accent mark was added on an unstressed syllable (cebrá > cebra). These findings favor the view that letter identity and (to a lesser extent) accent mark information are encoded during word recognition in Spanish. We also examined the practical implications of these results.