Eighty-seven years ago, Köhler reported that the majority of students picked the same answer in a quiz: which novel word form (‘maluma’ or ‘takete’) went best with which abstract line drawing (one curved, one angular)? Others have consistently shown the effect in a variety of contexts, with only one reported failure, by Rogers and Ross. In the spirit of transparency, we report our own failure in the same journal. In our study, speakers of Syuba, from the Himalaya in Nepal, show no preference when matching the word forms ‘kiki’ and ‘bubu’ to spiky versus curvy shapes. We conducted a meta-analysis of previous studies to investigate the relationship between pseudoword legality and task effects. Our combined analyses suggest a common source for both failures: ‘wordiness’ – we believe these tests fail when the test words do not behave according to the sound structure of the target language.
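As an illustration of the kind of preference test these two-alternative matching studies rely on, the following is a minimal sketch of an exact two-sided binomial test against chance-level matching. The counts are hypothetical, not taken from any of the studies above, and this is only one of several ways such data can be analysed:

```python
from math import comb

def binom_two_sided(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of outcomes
    at least as unlikely as k successes out of n under chance p."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

# Hypothetical counts of participants matching 'kiki' to the spiky shape:
print(binom_two_sided(24, 30))  # strong preference -> small p
print(binom_two_sided(16, 30))  # near chance -> large p, 'no preference'
```

A meta-analysis would then pool such per-study effect estimates (weighted by study size) rather than the raw p-values; this sketch shows only the single-study step.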
Do infants learn their early words in semantic isolation? Or do they integrate new words into an interconnected semantic system? In an infant-friendly adaptation of the adult lexical priming paradigm, infants at 18 and 24 months of age heard two words in quick succession. The noun pairs were either related or unrelated. Following the onset of the target word, two pictures were presented, one of which depicted the target. Eye movements revealed that both age groups comprehended the target word. In addition, 24-month-olds demonstrated primed picture looking in two measures of comprehension: named target pictures preceded by a related word pair took longer to disengage from and attracted more looking overall. The finding of enhanced target recognition demonstrates the emergence of semantic organisation by the end of the second year.
Models of the lexicon under development

One model of lexical network development comes from the field of computational linguistics. Steyvers and Tenenbaum (2005) tested the
Is parental report of comprehension valid for individual words? If so, how well must an infant know a word before their parents will report it as 'understood'? We report an experiment in which parental report predicts infant performance in a referent identification task at 1;6. Unlike in previous research of this kind (e.g. Houston-Price, Mather & Sakkalou, 2007), infants saw items only once, and image pairs were taxonomic sisters. The match between parental report and infant behaviour provides evidence of the item-level accuracy of both measures of lexical comprehension, and informs our understanding of how British parents interpret standardized Communicative Development Inventories (CDIs).
Nonarbitrary mappings between sound and shape (i.e., the bouba-kiki effect) have been shown across different cultures and early in development; however, the level of processing at which this effect arises remains unclear. Here we show that the mapping occurs prior to conscious awareness of the visual stimuli. Under continuous flash suppression, congruent stimuli (e.g., "kiki" inside an angular shape) broke through to conscious awareness faster than incongruent stimuli. This was true even when we trained people to pair unfamiliar letters with auditory word forms, a result showing that the effect was driven by the phonology, not the visual features, of the letters. Furthermore, visibility thresholds of the shapes decreased when they were preceded by a congruent auditory word form in a masking paradigm. Taken together, our results suggest that sound-shape mapping can occur automatically prior to conscious awareness of visual shapes, and that sensory congruence facilitates conscious awareness of a stimulus being present.
Highlights
► Adults and infants show similar effects generated by unexpected word forms.
► 14-month-olds show an analogue of the adult PMN effect.
► Infants detect mispronounced vowels from 225 ms, and larger changes 75 ms earlier.
► Adult semantic N400 effects are discrete from expectation-related PMN effects.
► Novel data visualisation method integrates spatial and temporal information.
Studies investigating cross-modal correspondences between auditory pitch and visual shapes have shown that children and adults consistently match high pitch to pointy shapes and low pitch to curvy shapes, yet no studies have investigated linguistic uses of pitch. In the present study, we used a bouba/kiki-style task to investigate the sound/shape mappings for the tones of Mandarin Chinese, for three groups of participants with different language backgrounds. We recorded the vowels [i] and [u] articulated in each of the four tones of Mandarin Chinese. In Study 1 a single auditory stimulus was presented with two images (one curvy, one spiky). In Study 2 a single image was presented with two auditory stimuli differing only in tone. Participants were asked to select the best match in an online ‘quiz’. Across both studies, we replicated the previously observed ‘u-curvy, i-pointy’ sound/shape cross-modal correspondence in all groups. However, tones were mapped differently by people with different language backgrounds: speakers of Mandarin Chinese classified as Chinese-dominant systematically matched Tone 1 (high, steady) to the curvy shape and Tone 4 (falling) to the pointy shape, while English speakers with no knowledge of Chinese preferred to match Tone 1 (high, steady) to the pointy shape and Tone 3 (low, dipping) to the curvy shape. These effects were observed most clearly in Study 2, where tone pairs were contrasted explicitly. These findings are in line with the dominant patterns of linguistic pitch perception for speakers of these languages (pitch change and pitch height, respectively). Chinese-English balanced bilinguals showed a bivalent pattern, swapping between the Chinese pitch-change pattern and the English pitch-height pattern depending on the task.
These findings show that the supposedly universal pattern of mapping linguistic sounds to shapes is modulated by the sensory properties of a speaker’s language system, and that people highly proficient in more than one language can dynamically shift between patterns.