Animal acoustic communication often takes the form of complex sequences, made up of multiple distinct acoustic units. Apart from the well-known example of birdsong, other animals such as insects, amphibians, and mammals (including bats, rodents, primates, and cetaceans) also generate complex acoustic sequences. Occasionally, such as with birdsong, the adaptive role of these sequences seems clear (e.g. mate attraction and territorial defence). More often, however, researchers have only begun to characterise – let alone understand – the significance and meaning of acoustic sequences. Hypotheses abound, but there is little agreement as to how sequences should be defined and analysed. Our review aims to outline suitable methods for testing these hypotheses, and to describe the major limitations to our current and near-future knowledge on questions of acoustic sequences. This review and prospectus is the result of a collaborative effort between 43 scientists from the fields of animal behaviour, ecology and evolution, signal processing, machine learning, quantitative linguistics, and information theory, who gathered for a 2013 workshop entitled “Analysing vocal sequences in animals”. Our goal is to present not just a review of the state of the art, but to propose a methodological framework that summarises what we suggest are the best practices for research in this field, across taxa and across disciplines. We also provide a tutorial-style introduction to some of the most promising algorithmic approaches for analysing sequences. We divide our review into three sections: identifying the distinct units of an acoustic sequence, describing the different ways that information can be contained within a sequence, and analysing the structure of that sequence. Each of these sections is further subdivided to address the key questions and approaches in that area.
We propose a uniform, systematic, and comprehensive approach to studying sequences, with the goal of clarifying the research terms used in different fields and facilitating collaboration and comparative studies. Such interdisciplinary collaboration will enable the investigation of many important questions in the evolution of communication and sociality.
BACKGROUND: Prediction of subsequent school-age asthma during the preschool years has proven challenging.
OBJECTIVE: To confirm, in a post hoc analysis, the predictive ability of the modified Asthma Predictive Index (mAPI) in a high-risk cohort and a theoretical unselected population. We also tested a potential mAPI modification with a 2-wheezing-episode requirement (m2API) in the same populations.
METHODS: Subjects (n = 289) with a family history of allergy and/or asthma were used to predict asthma at ages 6, 8, and 11 years with the use of characteristics collected during the first 3 years of life. The mAPI and the m2API were tested for predictive value.
RESULTS: For the mAPI and m2API, school-age asthma prediction improved from 1 to 3 years of age. The mAPI had high predictive value after a positive test (positive likelihood ratio ranging from 4.9 to 55) for asthma development at years 6, 8, and 11. Lowering the required number of wheezing episodes to 2 (m2API) lowered the predictive value after a positive test (positive likelihood ratio ranging from 1.91 to 13.1) without meaningfully improving the predictive value of a negative test. Posttest probabilities for a positive mAPI reached 72% and 90% in unselected and high-risk populations, respectively.
CONCLUSIONS: In a high-risk cohort, a positive mAPI greatly increased future asthma probability (e.g., from a 30% pretest probability to a 90% posttest probability) and is preferable to the m2API as a predictive test. With its more favorable positive posttest probability, the mAPI can aid clinical decision making in assessing future asthma risk for preschool-age children.
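The pretest-to-posttest conversion used in the conclusions follows Bayes' theorem in odds form: posttest odds = pretest odds × likelihood ratio. A minimal sketch (the function name and the illustrative likelihood ratio of 21, which lies within the reported 4.9–55 range, are ours, not values from the study):

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Convert a pretest probability to a posttest probability
    using Bayes' theorem in odds form."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# A 30% pretest probability combined with a positive likelihood
# ratio of 21 yields the ~90% posttest probability cited for the mAPI.
print(round(posttest_probability(0.30, 21.0), 2))  # → 0.9
```

This odds-form calculation is why a test with a high positive likelihood ratio is clinically useful even at a moderate pretest probability: the same test applied to an unselected (lower-pretest-probability) population yields a lower posttest probability, as the abstract's 72% vs 90% figures illustrate.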
Abstract. Intelligent Environments (IEs) have specific computational properties that generally distinguish them from other computational systems. They have large numbers of hardware and software components that need to be interconnected. Their infrastructures tend to be highly distributed, reflecting both the distributed nature of the real world and the IEs' need for large amounts of computational power. They also tend to be highly dynamic, requiring reconfiguration and resource management on the fly as their components and inhabitants change, and as they adjust their operation to suit the learned preferences of their users. Because IEs generally have multimodal interfaces, they also usually have high degrees of parallelism for resolving multiple, simultaneous events. Finally, debugging IEs presents unique challenges to their creators, not only because of their distributed parallelism, but also because of the difficulty of pinning down their "state" in a formal computational sense. This paper describes Metaglue, an extension to the Java programming language for building software agent systems that control Intelligent Environments, designed specifically to address these needs. Metaglue has been developed as part of the MIT Artificial Intelligence Lab's Intelligent Room Project, which has spent the past four years designing Intelligent Environments for research in Human-Computer Interaction.
STUDY OBJECTIVES: Sleep after learning often benefits memory consolidation, but the underlying mechanisms remain unclear. In previous studies, we found that learning a visuomotor task is followed by an increase in sleep slow wave activity (SWA, the electroencephalographic [EEG] power density between 0.5 and 4.5 Hz during non-rapid eye movement sleep) over the right parietal cortex. The SWA increase correlates with the postsleep improvement in visuomotor performance, suggesting that SWA may be causally responsible for the consolidation of visuomotor learning. Here, we tested this hypothesis by studying the effects of slow wave deprivation (SWD).
DESIGN: After learning the task, subjects went to sleep, and acoustic stimuli were timed either to suppress slow waves (SWD) or to interfere as little as possible with spontaneous slow waves (control acoustic stimulation, CAS).
SETTING: Sound-attenuated research room.
PARTICIPANTS: Healthy subjects (mean age 24.6 ± 1.0 years; n = 9 for EEG analysis, n = 12 for behavior analysis; 3 women).
MEASUREMENTS AND RESULTS: Sleep time and efficiency were not affected, whereas SWA and the number of slow waves decreased in SWD relative to CAS. Relative to the night before, visuomotor performance significantly improved in the CAS condition (+5.93% ± 0.88%) but not in the SWD condition (-0.77% ± 1.16%), and the direct CAS vs SWD comparison showed a significant difference (P = 0.0007, n = 12, paired t test). Changes in visuomotor performance after SWD were correlated with SWA changes over right parietal cortex but not with the number of arousals identified using clinically established criteria, nor with any sign of "EEG lightening" identified using a novel automatic method based on event-related spectral perturbation analysis.
CONCLUSION: These results support a causal role for sleep slow waves in sleep-dependent improvement of visuomotor performance.
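SWA as defined above is simply EEG power density within the 0.5–4.5 Hz band. A hedged sketch of computing band power from a signal's periodogram with NumPy (the synthetic signal, sampling rate, and function names are our illustrative assumptions, not the study's analysis pipeline):

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, f_lo: float, f_hi: float) -> float:
    """Total power of `signal` within [f_lo, f_hi] Hz, from the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[mask].sum())

# Synthetic 30-s "EEG" sampled at 128 Hz: a strong 2 Hz slow wave
# plus a weaker 10 Hz (alpha-band) oscillation.
fs = 128.0
t = np.arange(0, 30, 1.0 / fs)
eeg = 3.0 * np.sin(2 * np.pi * 2.0 * t) + 1.0 * np.sin(2 * np.pi * 10.0 * t)

swa = band_power(eeg, fs, 0.5, 4.5)     # the SWA band used in the study
alpha = band_power(eeg, fs, 8.0, 12.0)  # comparison band
print(swa > alpha)  # the slow-wave band dominates this synthetic signal
```

In practice, SWA is typically averaged over artifact-free non-REM epochs per electrode; the point of the sketch is only the band-limited power computation itself.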