The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners made gender-typical inferences about the traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers less suitable for a leadership position, and male (but not female) listeners distanced themselves from gay speakers. Together, this research demonstrates that having a gay/lesbian- rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.
Empirical research initially suggested that English-speaking listeners are able to identify a speaker's sexual orientation from voice cues alone. However, the accuracy of this voice-based categorization, as well as its generalizability to other languages (language dependency) and to non-native speakers (language specificity), has recently been questioned. We address these open issues in five experiments. First, we tested whether Italian and German listeners are able to correctly identify the sexual orientation of same-language male speakers. Then, participants of both nationalities listened to voice samples and rated the sexual orientation of both Italian and German male speakers. Listeners were unable to identify the speakers' sexual orientation correctly; however, speakers were consistently categorized as either heterosexual or gay on the basis of how they sounded. Moreover, a similar pattern of results emerged when listeners judged speakers of their own language and of the foreign language. Overall, this research suggests that voice-based categorization of sexual orientation reflects listeners' expectations of how gay voices sound, rather than accurate detection of the speakers' actual sexual identity. Results are discussed with regard to accuracy, acoustic features of voices, language dependency, and language specificity.
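The central methodological distinction here, between accurate detection and merely consistent categorization, is often framed in signal-detection terms. The following toy sketch (not taken from the studies above; all numbers are hypothetical) illustrates how a listener can categorize speakers very consistently while showing essentially zero sensitivity to their actual orientation:

```python
from statistics import NormalDist

def dprime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate).
    A value near 0 means responses carry no information about the
    actual category, however consistent the labeling is."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical listener who always labels "gay-sounding" voices as gay,
# regardless of the speaker's actual orientation: hits and false alarms
# are nearly equal, so sensitivity is near zero.
print(dprime(0.52, 0.48))  # near 0: consistent but not accurate
print(dprime(0.90, 0.10))  # high sensitivity, shown for comparison
```

The point of the sketch is that consistency of categorization (every listener agreeing on who "sounds gay") is logically independent of accuracy (those judgments matching the speakers' actual orientation), which is exactly the dissociation the experiments report.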
Notwithstanding rising interest, a coherent picture of the brain's representation of two languages has not yet been achieved. In the present meta-analysis we analysed a large number of functional neuroimaging studies focusing on language processing in bilinguals. We used activation likelihood estimation (ALE) to identify activation areas involved in bilingual processing and in the control of different types of linguistic knowledge (lexico-semantics, grammar, phonology) in L1 and L2. Results show that, surprisingly, lexico-semantic processing in L1 involves a more widespread system of cortico-subcortical regions than in L2, especially when L2 is acquired later in life. By contrast, L2 processing recruits regions exceeding the L1 semantic network and related to executive control processes. Only a few regions displayed selective activation for grammar and phonology. Analyses of language switching highlight a functional overlap between domain-general and bilingual language control networks. Collectively, our findings point to a shared neural network for L1 and L2, with few differences depending on the linguistic level. The emerging picture identifies under-investigated issues, offering clear directions for future research.
In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress. Words with penultimate stress, the most common pattern, appeared to be recognized by default. In Experiment 2, listeners had to learn new words from which some stress cues had been removed, and then recognize reduced- and full-cue versions of those words. The acoustic manipulation affected recognition only of newly learnt words with antepenultimate stress: full-cue versions, even though they were never heard during training, were recognized earlier than reduced-cue versions. Newly learnt words with penultimate stress were recognized earlier overall, but recognition of the two versions of these words did not differ. Abstract knowledge (i.e., knowledge generalized over the lexicon) about lexical stress, namely which pattern is the default and which cues signal the non-default pattern, appears to be used during the recognition of both known and newly learnt Italian words.
In four naming experiments we investigated how Italian readers assign stress to pseudowords. We assessed whether participants assign stress following distributional information such as stress neighborhood (the proportion and number of existing words sharing an orthographic ending and stress pattern) and whether such distributional information affects naming speed. Experiments 1 and 2 tested how readers assign stress to pseudowords. The results showed that participants assign stress on the basis of the pseudowords' stress neighborhood, but only when this orthographic/phonological information is widely represented in the lexicon. Experiments 3 and 4 tested the naming speed of pseudowords with different stress patterns. Participants were faster at reading pseudowords with antepenultimate than with penultimate stress. The effect was not driven by distributional information, but was related to the stage of articulation planning. Overall, the experiments showed that, under certain conditions, readers assign stress using orthographic/phonological distributional information. However, this distributional information does not speed up pseudoword naming, which is instead affected by stress computation at the level of articulation planning. It is claimed that models of reading aloud and speech production should be merged at the level of phonological encoding, where segmental and metrical information are assembled and articulation is planned.
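The stress-neighborhood measure described above can be made concrete with a small sketch. This is an illustrative toy implementation under our own assumptions (the lexicon entries and the exact operationalization are invented for the example; the original studies used a real Italian lexical database):

```python
def stress_neighborhood(ending: str, lexicon: dict) -> dict:
    """For all words sharing a given orthographic ending, return the
    count and proportion of each stress pattern among those neighbors.

    `lexicon` maps word -> stress pattern ('penultimate' or
    'antepenultimate'); here it is a toy stand-in for a real lexicon.
    """
    neighbors = {w: s for w, s in lexicon.items() if w.endswith(ending)}
    counts = {}
    for stress in neighbors.values():
        counts[stress] = counts.get(stress, 0) + 1
    total = sum(counts.values())
    return {s: (n, n / total) for s, n in counts.items()}

# Toy lexicon (hypothetical entries, for illustration only)
toy = {
    "tavolo": "antepenultimate",
    "piccolo": "antepenultimate",
    "regalo": "penultimate",
}
print(stress_neighborhood("olo", toy))
```

On this toy lexicon, a pseudoword ending in "-olo" would have an antepenultimate-dominant stress neighborhood, which is the kind of distributional information the abstract reports readers exploiting when it is widely represented in the lexicon.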
In the present study, the role of phonological information in visual word recognition is investigated by adopting a large-scale data-driven approach that exploits a new consistency measure based on distributional semantics methods. A recent study by Marelli, Amenta, and Crepaldi (2015) showed that the consistency between an orthographic string and the meanings with which it is associated in a large corpus is a relevant predictor in lexical decision experiments. Exploiting irregular mappings between orthography and phonology in English, we were able to compute a phonology-to-semantics consistency measure that dissociates from its orthographic counterpart, and we tested both measures on lexical decision data taken from the British Lexicon Project (Keuleers et al., 2012). Results showed that both orthography and phonology are activated during visual word recognition. However, their contribution is determined by the extent to which each is informative about word semantics, and phonology plays a central role in accessing word meaning.
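A form-to-semantics consistency measure of the kind described can be sketched as follows. This is a simplified illustration under our own assumptions (consistency is operationalized here as the mean pairwise cosine similarity among the semantic vectors a form maps to, and the vectors are toy values, not corpus-derived ones):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def consistency(vectors):
    """Mean pairwise cosine similarity among the semantic vectors
    associated with one written (or spoken) form. Higher values mean
    the form maps onto more similar meanings, i.e. it is more
    semantically consistent."""
    pairs = [(i, j) for i in range(len(vectors))
             for j in range(i + 1, len(vectors))]
    return sum(cosine(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)

# Toy semantic vectors: one form whose meanings cluster tightly,
# one form whose meanings point in different directions.
tight = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0]]
ambiguous = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.1]]
print(consistency(tight) > consistency(ambiguous))
```

Because English spelling-to-sound mappings are irregular, the set of meanings associated with a phonological form can differ from the set associated with its spelling, which is what lets the phonology-to-semantics measure dissociate from the orthographic one.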