Understanding how odors are coded within an olfactory system requires knowledge about its input. This input is constituted by the molecular receptive ranges (MRRs) of olfactory sensory neurons that converge in the glomeruli of the olfactory bulb (vertebrates) or the antennal lobe (AL, insects). Aiming at a comprehensive characterization of MRRs in Drosophila melanogaster, we measured odor-evoked calcium responses in olfactory sensory neurons that express the olfactory receptor Or22a. We used an automated stimulus application system to screen [Ca²⁺] responses to 104 odors both in the antenna (sensory transduction) and in the AL (neuronal transmission). At a 10⁻² (vol/vol) dilution, 39 odors elicited at least a half-maximal response. For these odorants we established dose-response relationships over their entire dynamic range. We tested 15 additional chemicals that are structurally related to the most effective odors. Ethyl hexanoate and methyl hexanoate were the best stimuli, eliciting consistent responses at dilutions as low as 10⁻⁹. Two substances led to a decrease in calcium, suggesting that Or22a may be constitutively active and that these substances may act as inverse agonists, reminiscent of G-protein-coupled receptors. There was no difference between the antennal and the AL MRR. Furthermore, we show that Or22a has a broad yet selective MRR and must be functionally described as both a specialist and a generalist; both descriptions are ecologically relevant. Given that adult Drosophila use approximately 43 ORs, a complete description of all MRRs now appears within reach.
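Dose-response relationships of this kind are typically summarized with a sigmoid (Hill) fit. The sketch below is a minimal illustration under assumed data, not the authors' analysis pipeline; the response values and starting parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(log_dilution, bottom, top, log_ec50, n):
    """Hill (sigmoid) dose-response curve on a log10 dilution axis."""
    return bottom + (top - bottom) / (1.0 + 10 ** (n * (log_ec50 - log_dilution)))

# Hypothetical normalized calcium responses at log10 odor dilutions -9 .. -2
log_dil = np.array([-9, -8, -7, -6, -5, -4, -3, -2], dtype=float)
response = np.array([0.02, 0.05, 0.12, 0.35, 0.68, 0.91, 0.97, 1.00])

# Fit the four Hill parameters; p0 gives rough starting values
params, _ = curve_fit(hill, log_dil, response, p0=[0.0, 1.0, -5.5, 1.0])
bottom, top, log_ec50, n = params
print(f"EC50 = 10^{log_ec50:.2f} (vol/vol), Hill coefficient = {n:.2f}")
```

Fitting one such curve per odorant yields comparable EC50 values across the receptor's dynamic range.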
Songbirds spend much of their time learning, producing, and listening to complex vocal sequences we call songs. Songs are learned via cultural transmission, and singing, usually by males, has a strong impact on the behavioral state of listeners, often promoting affiliation, pair bonding, or aggression. What is it in the acoustic structure of birdsong that makes it such a potent stimulus? We suggest that birdsong potency might be driven by principles similar to those that make music so effective in inducing emotional responses in humans: a combination of rhythms and pitches, and the transitions between acoustic states, that affects emotions by creating expectation, anticipation, tension, tension release, or surprise. Here we propose a framework for investigating how birdsong, like human music, employs these “musical” features to affect the emotions of avian listeners. First, we analyze songs of thrush nightingales (Luscinia luscinia) by examining their trajectories in terms of transitions in rhythm and pitch. These transitions show gradual escalations and graceful modifications, which are comparable to some aspects of human musicality. We then explore the feasibility of stripping such putative musical features from the songs and testing how this might affect patterns of auditory responses, focusing on fMRI data in songbirds that demonstrate the feasibility of such approaches. Finally, we explore ideas for investigating whether musical features of birdsong activate avian brains and affect avian behavior in ways comparable to music’s effects on humans. In conclusion, we suggest that birdsong research would benefit from current advances in music theory by attempting to identify structures that are designed to elicit listeners’ emotions and then testing for such effects experimentally. Birdsong research that takes into account the striking complexity of song structure, in light of its more immediate function of affecting the behavioral state of listeners, could provide a useful animal model for studying basic principles of music neuroscience in a system that is highly accessible to investigation and in which developmental auditory and social experience can be tightly controlled.
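One simple way to make such rhythm-and-pitch trajectories concrete (a minimal sketch under assumed note annotations, not the authors' method; the onset and pitch values are hypothetical) is to track inter-onset intervals and pitch steps between consecutive notes:

```python
import numpy as np

# Hypothetical annotated notes: onset time (s) and mean pitch (Hz)
onsets = np.array([0.00, 0.21, 0.40, 0.62, 0.80, 1.05])
pitches = np.array([2100., 2300., 2250., 2600., 2550., 3000.])

# Rhythm trajectory: inter-onset intervals (IOIs) and their ratios
# (ratios near 1 = steady tempo; drifting ratios = accelerando/ritardando)
ioi = np.diff(onsets)
ioi_ratio = ioi[1:] / ioi[:-1]

# Pitch trajectory: step sizes in semitones between consecutive notes
pitch_steps = 12 * np.log2(pitches[1:] / pitches[:-1])

print("IOI ratios:", np.round(ioi_ratio, 2))
print("Pitch steps (semitones):", np.round(pitch_steps, 2))
```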
Music is thought to engage its listeners by driving feelings of surprise, tension, and relief through a dynamic mixture of predictable and unpredictable patterns, a property summarized here as “expressiveness”. Birdsong shares with music the goal of attracting its listeners’ attention and might use similar strategies to achieve this. Here we tested the rhythm of a thrush nightingale (Luscinia luscinia), as represented by the song amplitude envelope (containing information on note timing, duration, and intensity), for evidence of expressiveness. We used multifractal analysis, which is designed to detect, in a signal, dynamic fluctuations between predictable and unpredictable states on multiple timescales (e.g., notes, subphrases, songs). The results show that the rhythm is strongly multifractal, indicating fluctuations between predictable and unpredictable patterns. Moreover, comparing original songs with re-synthesized songs that lack all subtle deviations from the “standard” note envelopes, we find that deviations in note intensity and duration contribute significantly to multifractality. This suggests that subtle note-timing patterns, often resembling musical operations such as accelerando or crescendo, make birdsong more dynamic. While different sources of these dynamics are conceivable, this study shows that multi-timescale rhythm fluctuations can be detected in birdsong, paving the way for studying the mechanisms and function behind such patterns.
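To illustrate the core of the method, here is a minimal multifractal detrended fluctuation analysis (MFDFA) sketch; this is not the authors' implementation, and the envelope below is placeholder noise rather than real song data.

```python
import numpy as np

def mfdfa(signal, scales, qs, order=1):
    """Multifractal detrended fluctuation analysis (MFDFA).

    Returns the generalized Hurst exponent h(q) for each q; a large
    spread of h(q) across q values indicates multifractality.
    """
    profile = np.cumsum(signal - np.mean(signal))  # integrated profile
    hq = []
    for q in qs:
        log_fq = []
        for s in scales:
            n_seg = len(profile) // s
            segments = profile[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # Variance of each segment around its polynomial trend
            var = np.array([
                np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                for seg in segments
            ])
            log_fq.append(np.log(np.mean(var ** (q / 2.0)) ** (1.0 / q)))
        # h(q) is the slope of log F_q(s) versus log s
        hq.append(np.polyfit(np.log(scales), log_fq, 1)[0])
    return np.array(hq)

# Placeholder amplitude envelope: white noise stands in for real song data
rng = np.random.default_rng(0)
envelope = rng.standard_normal(4096)

scales = np.array([16, 32, 64, 128, 256])
qs = np.array([-4.0, -2.0, 2.0, 4.0])  # q = 0 needs a special (log-mean) case
hq = mfdfa(envelope, scales, qs)
print("h(q):", hq.round(2), "spread:", (hq.max() - hq.min()).round(2))
```

For a monofractal signal such as white noise, h(q) stays nearly constant across q; an envelope with expressive multi-timescale deviations would show a wider h(q) spread.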
All-trans retinoic acid (ATRA), the main active metabolite of vitamin A, is a powerful signaling molecule that regulates large-scale morphogenetic processes during vertebrate embryonic development and is also involved postnatally in regulating neural plasticity and cognition. In songbirds, it plays an important role in the maturation of learned song. The distributions of the ATRA-synthesizing enzyme zRalDH and of the ATRA receptors (RARs) have been described, but information on the distribution of other components of the retinoid signaling pathway is still lacking. To address this gap, we determined the expression patterns of two obligatory RAR co-receptors, the retinoid X receptors (RXRs) α and γ, and of the three ATRA-degrading cytochromes CYP26A1, CYP26B1, and CYP26C1. We also studied the distribution of zRalDH protein using immunohistochemistry and generated a refined map of ATRA localization, using a modified reporter cell assay to examine entire brain sections. Our results show that (1) ATRA is more broadly distributed in the brain than previously predicted by the spatially restricted distribution of zRalDH transcripts; this could be due to long-range transport of zRalDH enzyme between different nuclei of the song system, since experimental lesions of putative zRalDH peptide source regions diminish ATRA-induced transcription in target regions. (2) Four telencephalic song nuclei express different and specific subsets of retinoid-related receptors and could be targets of retinoid regulation; in the case of the lateral magnocellular nucleus of the anterior nidopallium (lMAN), receptor expression is dynamically regulated in a circadian and age-dependent manner. (3) High-order auditory areas exhibit a complex distribution of transcripts for ATRA-synthesizing and -degrading enzymes and could also be targets of retinoid signaling. Together, our survey across multiple connected song nuclei and auditory brain regions underscores the prominent role of retinoid signaling in modulating the circuitry that underlies the acquisition and production of learned vocalizations.
It seems trivial to identify sound sequences as music or speech, particularly when the sequences come from different sound sources, such as an orchestra and a human voice. But can we also easily distinguish these categories when the sequences come from the same sound source, and on the basis of which acoustic features? We investigated these questions by examining listeners’ classification of sound sequences performed on an instrument that intertwines speech and music: the dùndún talking drum. The dùndún is commonly used in south-west Nigeria as a musical instrument but is also well suited to linguistic use, in what has been described as a speech surrogate in Africa. One hundred seven participants from diverse geographical locations (15 different mother tongues represented) took part in an online experiment. Fifty-one participants reported being familiar with the dùndún talking drum, 55% of those being speakers of Yorùbá. During the experiment, participants listened to 30 dùndún samples about 7 s long, performed either as music or as a Yorùbá speech surrogate (n = 15 each) by a professional musician, and were asked to classify each sample as music or speech-like. The classification task revealed the listeners’ ability to identify the samples as intended by the performer, particularly when they were familiar with the dùndún, though even unfamiliar participants performed above chance. A logistic regression predicting participants’ classifications from several acoustic features confirmed the perceptual relevance of intensity, pitch, timbre, and timing measures and their interaction with listener familiarity. In all, this study provides empirical evidence for the discriminating role of acoustic features and the modulatory role of familiarity in teasing apart speech and music.
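A regression of this general shape could be sketched as follows; this is an illustrative stand-in, not the study's model, and the feature names, data, and interaction coding are all hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-trial data: feature names are illustrative stand-ins,
# not the study's exact predictors
df = pd.DataFrame({
    "intensity_var":  rng.random(60),
    "pitch_range":    rng.random(60),
    "spectral_flux":  rng.random(60),          # a timbre proxy
    "ioi_regularity": rng.random(60),          # a timing proxy
    "familiar":       rng.integers(0, 2, 60),  # listener knows the dùndún
})
labels = rng.integers(0, 2, 60)  # 1 = sample judged speech-like

# Interaction terms allow each feature's effect to differ with familiarity
for col in ["intensity_var", "pitch_range", "spectral_flux", "ioi_regularity"]:
    df[f"{col}_x_familiar"] = df[col] * df["familiar"]

# Standardize, then fit; a real analysis would likely use mixed-effects
# logistic regression to account for repeated measures per participant
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(df, labels)
print(dict(zip(df.columns, model[-1].coef_[0].round(2))))
```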