People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be “high” or “low” (i.e., a height-pitch association), whereas in other languages, pitches are described as “thin” or “thick” (i.e., a thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings indicates that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
Do people who speak different languages think differently, even when they are not using language? To find out, we used nonlinguistic psychophysical tasks to compare mental representations of musical pitch in native speakers of Dutch and Farsi. Dutch speakers describe pitches as high (hoog) or low (laag), whereas Farsi speakers describe pitches as thin (nazok) or thick (koloft). Differences in language were reflected in differences in performance on two pitch-reproduction tasks, even though the tasks used simple, nonlinguistic stimuli and responses. To test whether experience using language influences mental representations of pitch, we trained native Dutch speakers to describe pitch in terms of thickness, as Farsi speakers do. After the training, Dutch speakers' performance on a nonlinguistic psychophysical task resembled the performance of native Farsi speakers. People who use different linguistic space-pitch metaphors also think about pitch differently. Language can play a causal role in shaping nonlinguistic representations of musical pitch.
How can a visual environment shape our utterances? A variety of visual and conceptual factors appear to affect sentence production, such as the visual cueing of patients or agents, their position relative to one another, and their animacy. These factors have previously been studied in isolation, leaving open the question of how they interact. The present study brings them together to examine systematic variations in eye movements, speech initiation, and voice selection in descriptions of visual scenes. A sample of 44 native speakers of German described depicted event scenes presented on a computer screen while both their utterances and eye movements were recorded. Participants were instructed to produce one-sentence descriptions. The pictures depicted scenes with animate agents and either animate or inanimate patients situated to the right or to the left of the agents. Half of the patients were preceded by a visual cue: a small circle appearing for 60 ms on a blank screen in the place of the patient. The results show that scenes with left- rather than right-positioned patients led to longer speech onset times, a higher probability of passive sentences, and more looks toward the patient. In addition, scenes with animate patients received more looks and elicited more passive utterances than scenes with inanimate patients. Visual cueing did not produce significant changes in speech, even though cued referents attracted more looks than non-cued referents, demonstrating that cueing affected initial scene-scanning patterns but not speech. Our findings demonstrate that when examined together rather than separately, visual and conceptual factors of event scenes influence different aspects of behavior. Whereas cueing affected only eye movements, patient animacy also acted on the syntactic realization of utterances, and patient position additionally altered speech onset. In terms of time course, visual influences are rather short-lived, whereas conceptual factors have long-lasting effects.
To what extent are links between musical pitch and space universal, and to what extent are they shaped by language? At present there is evidence supporting both universality and linguistic relativity, leaving the question open. To address this, speakers of Dutch, who talk about pitch in terms of spatial height, and speakers of Turkish, who use a thickness metaphor, were tested in simple nonlinguistic space-pitch association tasks. Both groups showed evidence of a thickness-pitch association but differed significantly in their height-pitch associations, suggesting the latter may be more susceptible to language. When participants had to match pitches to spatial stimuli in which height and thickness were opposed (i.e., a thick line high in space vs. a thin line low in space), Dutch and Turkish participants differed in their relative preferences. Whereas Turkish participants predominantly opted for a thickness-pitch interpretation, even if this meant a reversal of height-pitch mappings, Dutch participants more often favored a height-pitch interpretation. These findings provide new evidence that speakers of different languages vary in their space-pitch associations, while at the same time showing that such associations are not equally susceptible to linguistic influence: some space-pitch associations (i.e., height-pitch) are more malleable than others (i.e., thickness-pitch).
Amodal (redundant) and arbitrary cross-sensory feature associations involve the context-insensitive mapping of absolute feature values across sensory domains. Cross-sensory associations of a different kind, known as correspondences, involve the context-sensitive mapping of relative feature values. Are such correspondences in place at birth (like amodal associations), or are they learned from subsequently experiencing relevant feature co-occurrences in the world (like arbitrary associations)? To decide between these two possibilities, human newborns (median age = 44 hr) watched animations in which two balls alternately rose and fell together in space. The pitch of an accompanying sound rose and fell either congruently with this visual change (pitch rising and falling as the balls moved up and down), or incongruently (pitch rising and falling as the balls moved down and up). Newborns' looking behavior was sensitive to this congruence, providing the strongest indication to date that cross-sensory correspondences can be in place at birth.
People associate information across different senses, but the mechanism by which this happens is unclear. Such associations are thought to arise from innate structural links in the brain, from statistical co-occurrences in the environment, from shared affective content, or through language. A developmental perspective on crossmodal associations can help determine which explanations are more likely for specific associations. Certain associations with pitch (e.g., pitch–height) have been observed early in infancy, but others may not emerge until late childhood (e.g., pitch–size). In contrast, tactile–chroma associations have been observed in children, but not in adults. One modality that has received little developmental attention is olfaction. In the present investigation, we explored crossmodal associations from sound, tactile stimuli, and odor to a range of stimuli by testing participants across a broad age range. Across the three modalities, we found little evidence for crossmodal associations in young children. This makes an account based on innate structures unlikely. Instead, the number and strength of associations increased over the lifespan. This suggests that experience plays a crucial role in crossmodal associations from sound, touch, and smell to other senses.
How does non-linguistic, visual experience affect language production? A series of experiments addressed this question by examining linguistic and visual preferences for agent positions in transitive action scenarios. In Experiment 1, 30 native German speakers described event scenes in which agents were positioned either to the right or to the left of patients. Utterances had longer speech onset times for scenes with right- rather than left-positioned agents, suggesting that the visual organization of events can affect sentence production. In Experiment 2, another cohort of 36 native German participants indicated their aesthetic preference for left- or right-positioned agents in mirrored scenes and displayed a preference for scenes with left-positioned agents. In Experiment 3, 37 native Arabic speakers performed the same non-verbal task and showed the reverse preference. Our findings demonstrate that non-linguistic visual preferences appear to affect sentence production and may in turn depend on the writing system of a specific language.