Acoustic communication plays a key role in mate attraction in grasshoppers. Males use songs to advertise themselves to females. Females evaluate the song pattern, a repetitive structure of sound syllables separated by short pauses, to recognize a conspecific male and as a proxy for his fitness. In their natural habitat, females often receive songs with degraded temporal structure. Perturbations may, for example, result from overlap with other songs. We studied the response behavior of females to songs with different signal degradations. A perturbation of an otherwise attractive song at later positions in the syllable diminished the behavioral response, whereas the same perturbation at the onset of a syllable did not affect song attractiveness. We applied naïve Bayes classifiers to the spike trains of identified neurons in the auditory pathway to explore how sensory evidence about the acoustic stimulus and its attractiveness is represented in the neuronal responses. We found that populations of three or more neurons were sufficient to reliably decode the acoustic stimulus and to predict its behavioral relevance from the single-trial integrated firing rate. A simple model of decision making simulates the female response behavior. It computes, for each syllable, the likelihood that an attractive song pattern is present, as evidenced by the population firing rate. Integration across syllables allows the likelihood to reach a decision threshold and elicit the behavioral response. The close match between model performance and animal behavior shows that a spike rate code is sufficient to enable song pattern recognition.
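The decision-making model described above can be sketched in a few lines: per syllable, a log-likelihood ratio (attractive vs. unattractive pattern) is computed from the population firing rate and accumulated until a threshold triggers the response. This is a minimal illustration, not the authors' implementation; the Gaussian rate distributions, their parameters, and the threshold value are all assumptions chosen for clarity.

```python
import math

def syllable_log_likelihood_ratio(rate, mu_attractive=40.0,
                                  mu_unattractive=25.0, sigma=8.0):
    """Log-likelihood ratio that a syllable's population firing rate
    arose from an attractive vs. an unattractive song pattern.
    Gaussian rate distributions and all parameters are illustrative
    assumptions, not values from the study."""
    def log_gauss(x, mu, s):
        return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
    return (log_gauss(rate, mu_attractive, sigma)
            - log_gauss(rate, mu_unattractive, sigma))

def respond_to_song(rates, threshold=3.0):
    """Accumulate evidence syllable by syllable; return the syllable
    index at which the accumulated evidence first crosses the decision
    threshold, or None if no response is triggered."""
    evidence = 0.0
    for i, rate in enumerate(rates, start=1):
        evidence += syllable_log_likelihood_ratio(rate)
        if evidence >= threshold:
            return i  # behavioral response elicited here
    return None  # threshold never reached: no response
```

Under these toy parameters, a song whose syllables consistently evoke high population rates triggers a response after a few syllables, while a degraded song accumulates negative evidence and never does, mirroring the integration-to-threshold behavior described in the abstract.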
Many invertebrate and vertebrate species use acoustic communication for pair formation. In the cricket Gryllus bimaculatus, females recognize their species-specific calling song and localize singing males by positive phonotaxis. The male song pattern has a clear structure, consisting of brief, regular pulses that are grouped into repetitive chirps. Information is thus present on a short and a long time scale. Here, we ask which structural features of the song critically determine phonotactic performance. To this end, we employed artificial neural networks to analyze a large body of behavioral data measuring females' phonotactic behavior under systematic variation of artificially generated song patterns. In a first step, we used four non-redundant descriptive temporal features to predict the female response. The model prediction showed a high correlation with the experimental results. We used this behavioral model to explore the integration of the two time scales. Our results suggested that only an attractive pulse structure in combination with an attractive chirp structure reliably elicited phonotactic behavior. In a further step, we investigated all feature sets, each consisting of a different combination of eight proposed temporal features. We identified feature sets of size two, three, and four that achieved the highest predictive power by using the pulse period from the short time scale plus additional information from the long time scale.
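The feature-set analysis in the last step amounts to an exhaustive search over all combinations of the candidate temporal features, scoring each set by how well a model fitted on it predicts the behavioral data. A minimal sketch of that search is shown below; the feature names and the scoring function are hypothetical placeholders (the actual study fitted artificial neural networks and used its own eight features).

```python
from itertools import combinations

# Hypothetical temporal song features (names are illustrative, not
# the eight features proposed in the study).
FEATURES = ["pulse_period", "pulse_duration", "pulse_pause",
            "pulse_number", "chirp_period", "chirp_duration",
            "chirp_pause", "chirp_duty_cycle"]

def best_feature_sets(score_fn, sizes=(2, 3, 4)):
    """Exhaustively evaluate every feature combination of the given
    sizes and return, per size, the set with the highest score.
    `score_fn` stands in for the real evaluation, e.g. the
    cross-validated correlation of a model fitted on exactly those
    features with the measured phonotaxis scores."""
    best = {}
    for k in sizes:
        best[k] = max(combinations(FEATURES, k),
                      key=lambda feature_set: score_fn(feature_set))
    return best

# Toy scorer: rewards sets containing the pulse period plus long
# time-scale (chirp) information, loosely mimicking the reported result.
def toy_score(feature_set):
    has_pulse_period = "pulse_period" in feature_set
    n_chirp_features = sum(f.startswith("chirp") for f in feature_set)
    return has_pulse_period + 0.1 * n_chirp_features
```

With this toy scorer, the winning set at every size contains the pulse period, illustrating (not reproducing) the study's finding that the best small feature sets combine the pulse period with long time-scale features.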