First insights into the molecular programs orchestrating the progression from neural stem cells to cortical projection neurons are emerging. Loss of the transcriptional regulator Ski has been linked to the human 1p36 deletion syndrome, which includes central nervous system defects. Here, we report critical roles for Ski in the maintenance of the neural stem cell pool and the specification of callosal neurons. Ski-deficient callosal neurons lose their identity and ectopically express the transcription factor Ctip2. The misspecified callosal neurons largely fail to form the corpus callosum and instead redirect their axons toward subcortical targets. We identify the chromatin-remodeling factor Satb2 as a partner of Ski, and show that both proteins are required for transcriptional repression of Ctip2 in callosal neurons. We propose a model in which Satb2 recruits Ski to the Ctip2 locus, and Ski attracts histone deacetylases, thereby enabling the formation of a functional nucleosome remodeling and deacetylase repressor complex. Our findings establish a central role for Ski–Satb2 interactions in regulating transcriptional mechanisms of callosal neuron specification.
miR-128, a brain-enriched microRNA, has been implicated in the control of neurogenesis and synaptogenesis, but its potential roles in intervening processes have not been addressed. We show that post-transcriptional mechanisms restrict miR-128 accumulation to post-mitotic neurons during mouse corticogenesis and in adult stem cell niches. Whereas premature miR-128 expression in progenitors for upper layer neurons leads to impaired neuronal migration and inappropriate branching, sponge-mediated inhibition results in overmigration. Within the upper layers, premature miR-128 expression reduces the complexity of dendritic arborization, associated with altered electrophysiological properties. We show that Phf6, a gene mutated in the cognitive disorder Börjeson-Forssman-Lehmann syndrome, is an important regulatory target for miR-128. Restoring PHF6 expression counteracts the deleterious effect of miR-128 on neuronal migration, outgrowth and intrinsic physiological properties. Our results place miR-128 upstream of PHF6 in a pathway vital for cortical lamination as well as for the development of neuronal morphology and intrinsic excitability. DOI: http://dx.doi.org/10.7554/eLife.04263.001
An appealing representation of emotions is the use of emotional attributes such as arousal (passive versus active), valence (negative versus positive) and dominance (weak versus strong). While previous studies have considered these dimensions as orthogonal descriptors to represent emotions, there is strong theoretical and practical evidence of interrelations between these emotional attributes. This observation suggests that predicting emotional attributes with a unified framework should outperform machine learning algorithms that predict each attribute separately. This study presents methods to jointly learn emotional attributes by exploiting their interdependencies. The framework relies on multi-task learning (MTL) implemented with deep neural networks (DNNs) with shared hidden layers. The framework provides a principled approach to learn shared feature representations that maximize the performance of regression models. The results of within-corpus and cross-corpora evaluations show the benefits of MTL over single-task learning (STL). MTL achieves gains in concordance correlation coefficient (CCC) as high as 4.7% for within-corpus evaluations, and 14.0% for cross-corpora evaluations. Visualization of the activations of the last hidden layers illustrates that MTL creates better feature representations. The best structure has shared layers followed by attribute-dependent layers, better capturing the relation between attributes.
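As an illustration of the ideas in this abstract (not the authors' implementation), the sketch below shows the CCC evaluation metric and the shared-layer MTL structure: one hidden representation shared across tasks, followed by one illustrative linear head per attribute. All function and weight names here are hypothetical.

```python
import numpy as np

def ccc(x, y):
    """Concordance correlation coefficient between predictions x and labels y:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()  # population covariance
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

def mtl_forward(features, w_shared, heads):
    """Minimal sketch of the shared-layer idea: a shared nonlinear layer
    followed by attribute-specific linear heads (arousal/valence/dominance)."""
    h = np.tanh(features @ w_shared)  # shared feature representation
    return {name: h @ w for name, w in heads.items()}

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                 # 4 samples, 8 acoustic features
w_shared = rng.standard_normal((8, 16))         # shared hidden layer
heads = {a: rng.standard_normal(16) for a in ("arousal", "valence", "dominance")}
preds = mtl_forward(x, w_shared, heads)         # one scalar prediction per task
```

A perfect predictor reaches CCC = 1; unlike Pearson correlation, CCC also penalizes any bias or scale mismatch between predictions and labels.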
Integrating external language models (LMs) remains a challenging task for end-to-end (E2E) automatic speech recognition (ASR), which has no clear division between acoustic and language models. In this work, we propose an internal LM estimation (ILME) method to facilitate a more effective integration of the external LM with any pre-existing E2E model, with no additional model training, including the most popular recurrent neural network transducer (RNN-T) and attention-based encoder-decoder (AED) models. Trained on audio-transcript pairs, an E2E model implicitly learns an internal LM that characterizes the training data in the source domain. With ILME, the internal LM scores of an E2E model are estimated and subtracted from the log-linear interpolation between the scores of the E2E model and the external LM. The internal LM scores are approximated as the output of the E2E model when its acoustic components are eliminated. ILME can alleviate the domain mismatch between training and testing, or improve multi-domain E2E ASR. In experiments with RNN-T and AED models trained on 30K hours of data, ILME achieves up to 15.5% and 6.8% relative word error rate reductions over Shallow Fusion on out-of-domain LibriSpeech and in-domain Microsoft production test sets, respectively.
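The inference-time score combination the abstract describes can be sketched as follows. This is a minimal illustration under assumed interpolation weights; the function name and the specific weight values are hypothetical, not from the paper.

```python
def ilme_score(logp_e2e, logp_ext_lm, logp_ilm, lam_ext=0.5, lam_ilm=0.3):
    """ILME-style hypothesis scoring: log-linearly interpolate the E2E model
    with the external LM, then subtract the estimated internal LM score so the
    source-domain internal LM does not double-count with the external LM."""
    return logp_e2e + lam_ext * logp_ext_lm - lam_ilm * logp_ilm

def shallow_fusion_score(logp_e2e, logp_ext_lm, lam_ext=0.5):
    """Baseline Shallow Fusion: interpolation without internal LM subtraction."""
    return logp_e2e + lam_ext * logp_ext_lm
```

With all log-probabilities negative, subtracting the internal LM term raises the combined score for hypotheses the internal LM over-penalizes, which is how ILME counteracts the source-domain bias during out-of-domain decoding.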