Birdsong provides a unique model for understanding the behavioral and neural bases of complex sequential behaviors. However, birdsong analyses require laborious effort to render the data quantitatively analyzable. Previous attempts have succeeded in partially reducing the human effort involved in classifying birdsong segments. The present study aimed to reduce this effort further while increasing classification performance. A linear-kernel support vector machine was employed to minimize the number of human-generated label samples needed for reliable element classification in birdsong, and to enable the classifier to handle high-dimensional acoustic features while avoiding over-fitting. Songs of the Bengalese finch, in which distinct elements (i.e., syllables) are arranged in complex sequential patterns, served as a representative test case from the neuroscientific research field. Three evaluations tested (1) algorithm validity and accuracy while exploring appropriate classifier settings, (2) the ability to maintain accuracy with a reduced instruction dataset, and (3) the ability to classify a large dataset with minimal manual labeling. Evaluation (1) showed that the algorithm classifies song syllables with 99.5% accuracy. This accuracy was maintained in evaluation (2), even when the human-classified instruction data were reduced to a one-minute excerpt (corresponding to 300–400 syllables) used to classify a two-minute excerpt. Reliability remained comparable (98.7% accuracy) when a large target dataset of whole-day recordings (∼30,000 syllables) was used. A linear-kernel support vector machine thus achieved sufficient accuracy with minimal manually generated instruction data in birdsong element classification.
The proposed methodology would help reduce laborious processes in birdsong analysis without sacrificing reliability, and can therefore help accelerate behavioral and neural studies using songbirds.
Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion for dissociating neural signals from noise in EEG recordings. This study develops a method for evaluating BSS performance. The idea is that the neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of EEG components and the ECoG signals. We used our algorithm to compare the performance of several BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) on the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our website (http://neurotycho.org/) as a common testing platform.
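The evaluation pipeline above has two concrete steps that can be sketched in code: ranking EEG components by the correlation of their ECoG regression, and measuring subset-level shared information with canonical correlation analysis. The sketch below is an assumed reconstruction on synthetic data, not the authors' code; the component/ECoG dimensions and mixing are illustrative.

```python
import numpy as np

def rank_by_ecog_regression(components, ecog):
    """Score each EEG component by the correlation between the component
    and its least-squares prediction from the ECoG channels."""
    A = np.column_stack([ecog, np.ones(len(ecog))])
    scores = []
    for c in components.T:
        coef, *_ = np.linalg.lstsq(A, c, rcond=None)
        scores.append(np.corrcoef(A @ coef, c)[0, 1])
    return np.argsort(scores)[::-1], np.array(scores)

def canonical_correlations(X, Y, reg=1e-6):
    """Canonical correlations between X and Y, computed as singular
    values of the whitened cross-covariance (SVD form of CCA)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    def inv_sqrt(S):
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T
    return np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy),
                         compute_uv=False)

# Synthetic stand-in: 3 "ECoG" channels; 3 EEG components mixed from
# them plus 3 pure-noise components, in an order unknown to the method.
rng = np.random.default_rng(0)
n = 2000
ecog = rng.normal(size=(n, 3))
mixing = np.array([[1.0, 0.5, 0.0],
                   [0.0, 1.0, 0.5],
                   [0.5, 0.0, 1.0]])
signal = ecog @ mixing + 0.1 * rng.normal(size=(n, 3))
noise = rng.normal(size=(n, 3))
components = np.hstack([noise, signal])

order, scores = rank_by_ecog_regression(components, ecog)
top = components[:, order[:3]]     # best-ranked subset
bottom = components[:, order[3:]]  # worst-ranked subset
cc_top = canonical_correlations(top, ecog)
cc_bottom = canonical_correlations(bottom, ecog)
```

In this toy setting the three ECoG-derived components are ranked first, and the top subset shares far more canonical correlation with the ECoG than the noise subset, which is the quantity the evaluation method uses to compare BSS algorithms.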
Electrophysiological Source Imaging (ESI) is hampered by the lack of “gold standards” for model validation. Concurrent electroencephalography (EEG) and electrocorticography (ECoG) experiments (EECoG) are useful for this purpose, especially in primate models, due to their flexibility and translational value for human research. Unfortunately, there is only one EECoG experiment in the public domain that we know of: the Multidimensional Recording (MDR), based on a single monkey ( www.neurotycho.org ). Mining this type of data is hindered by the lack of specialized procedures to deal with: (1) severe EECoG artifacts due to the experimental procedures; (2) sophisticated forward models that account for surgery-induced skull defects and implanted ECoG electrode strips; (3) reliable statistical procedures to estimate and compare source connectivity (partial correlation). We provide solutions to these processing issues with EECoG-Comp: an open source platform ( https://github.com/Vincent-wq/EECoG-Comp ). EECoG lead fields calculated with FEM (Simbio) for the MDR data are also provided and were used in other papers of this special issue. As a use case with the MDR, we show: (1) for real MDR data, four popular ESI methods (MNE, LCMV, eLORETA, and SSBL) showed significant but moderate concordance with a usual standard, the partial correlations of the ECoG Laplacian; (2) in both monkey and human simulations, all ESI methods, as well as the Laplacian, had a significant but poor correspondence with the true source connectivity. These preliminary results may stimulate the development of improved ESI connectivity estimators but require the availability of more EECoG datasets to obtain neurobiologically valid inferences.
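The connectivity measure named above, partial correlation, is standard enough to illustrate briefly. The sketch below, which is not the EECoG-Comp implementation, computes partial correlations from the precision (inverse covariance) matrix and shows on chain-structured toy sources why it is preferred for connectivity: the indirect link disappears once the mediating source is controlled for. All variable names and data here are illustrative assumptions.

```python
import numpy as np

def partial_correlation(X, reg=1e-6):
    """Partial correlation matrix from the inverse sample covariance
    (precision matrix P): pc_ij = -P_ij / sqrt(P_ii * P_jj)."""
    S = np.cov(X, rowvar=False) + reg * np.eye(X.shape[1])
    P = np.linalg.inv(S)
    d = np.sqrt(np.diag(P))
    pc = -P / np.outer(d, d)
    np.fill_diagonal(pc, 1.0)
    return pc

# Chain-structured toy sources: s1 -> s2 -> s3.  Marginal correlation
# links s1 and s3 through s2, but the partial correlation (controlling
# for s2) should vanish, since s1 and s3 are not directly connected.
rng = np.random.default_rng(0)
n = 5000
s1 = rng.normal(size=n)
s2 = s1 + rng.normal(size=n)
s3 = s2 + rng.normal(size=n)
X = np.column_stack([s1, s2, s3])

pc = partial_correlation(X)
marginal = np.corrcoef(X, rowvar=False)
```

Here `marginal[0, 2]` is clearly nonzero (about 1/√3 in expectation) while `pc[0, 2]` is near zero, which is the distinction that makes partial correlation a sensible "true connectivity" target when comparing ESI estimators.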