Pioneer factors are a special class of transcription factors that can associate with compacted chromatin to facilitate the binding of additional transcription factors. The function of pioneer factors was originally described in development; more recently, they have been implicated in hormone-dependent cancers, such as oestrogen receptor-positive breast cancer and androgen receptor-positive prostate cancer. We discuss the importance of pioneer factors in these cancers, the discovery of new putative pioneer factors, and the interplay between these proteins in mediating nuclear receptor function in cancer.
Recent advances in deep convolutional neural networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorization, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures, and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, the categories, and each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some, but not all, feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations.
Models derived from visual features other than object parts perform relatively poorly, perhaps because DNNs more comprehensively capture the colors, textures and contours which matter to human object perception. However, categorical models outperform DNNs, suggesting that further work may be needed to bring high-level semantic representations in DNNs closer to those extracted by humans. Modern DNNs explain similarity judgments remarkably well considering they were not trained on this task, and are promising models for many aspects of human cognition.
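The representational similarity analysis used above to score each model can be sketched as follows. This is a minimal, hypothetical illustration, not the study's actual pipeline: the function name, the choice of correlation distance, and the assumption that behavioral dissimilarities arrive as a full square matrix are all illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_correlation(model_acts, judged_dissim):
    """Compare a model's representational geometry to behavioral
    dissimilarities via representational similarity analysis (RSA).

    model_acts:    (n_images, n_units) activation matrix, e.g. one DNN layer
    judged_dissim: (n_images, n_images) behavioral dissimilarity matrix
    """
    # Model representational dissimilarity matrix (RDM), condensed form:
    # correlation distance (1 - Pearson r) between image activation patterns
    model_rdm = pdist(model_acts, metric="correlation")
    # Extract the matching off-diagonal entries of the behavioral matrix
    # (pdist's condensed ordering equals the row-major upper triangle)
    iu = np.triu_indices(judged_dissim.shape[0], k=1)
    behav_rdm = judged_dissim[iu]
    # Rank correlation between the two RDM vectors is the model's score
    rho, _ = spearmanr(model_rdm, behav_rdm)
    return rho
```

In this scheme each DNN layer (or feature/category model) yields one RDM, and its Spearman correlation with the behavioral RDM serves as that model's performance score.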
Summary FOXA1 is a pioneer factor that binds to enhancer regions enriched in H3K4 mono- and dimethylation (H3K4me1 and H3K4me2). We performed a FOXA1 rapid immunoprecipitation mass spectrometry of endogenous proteins (RIME) screen in ERα-positive MCF-7 breast cancer cells and identified the histone-lysine N-methyltransferase MLL3 as the top FOXA1-interacting protein. MLL3 is typically thought to induce H3K4me3 at promoter regions, but recent findings suggest that it may contribute to H3K4me1 deposition. MLL3 chromatin immunoprecipitation sequencing (ChIP-seq) in breast cancer cells showed that MLL3 occupies regions marked by FOXA1 occupancy, H3K4me1, and H3K4me2. MLL3 binding was dependent on FOXA1, indicating that FOXA1 recruits MLL3 to chromatin. MLL3 silencing decreased H3K4me1 at enhancer elements but had no appreciable impact on H3K4me3 at these regions. We propose a mechanism whereby the pioneer factor FOXA1 recruits the chromatin modifier MLL3 to facilitate deposition of the H3K4me1 histone mark, thereby demarcating active enhancer elements.
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as “human”, “mammal”, and “animal”). The feature-based model includes both object parts (such as “eye”, “tail”, and “handle”) and other descriptive features (such as “circular”, “green”, and “stubbly”). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation. 
Similarity judgments contain additional categorical variance that is not explained by visual features, reflecting a higher-level more purely semantic representation.
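The non-negative least-squares model fitting described above can be sketched as follows. This is a hedged illustration under stated assumptions: the function name and the condensed-RDM input format are invented for the example, and the study's cross-validation on held-out images is omitted.

```python
import numpy as np
from scipy.optimize import nnls

def fit_model_rdm(predictor_rdms, target_rdm):
    """Fit a non-negatively weighted combination of predictor RDMs
    (e.g., one per category label or feature label) to a target RDM
    (brain representation or similarity judgments).

    predictor_rdms: list of condensed RDM vectors, one per model dimension
    target_rdm:     condensed RDM vector to be explained
    """
    # Stack predictors into a (n_pairs, n_predictors) design matrix
    X = np.column_stack(predictor_rdms)
    # Non-negative least squares: minimize ||X @ w - target||, w >= 0
    weights, residual = nnls(X, target_rdm)
    return weights, residual
```

The fitted non-negative weights indicate how much each dimension contributes to the explained representation; a near-zero residual means the weighted predictors reconstruct the target RDM almost exactly.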
The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI, and MEG. Because different object dimensions co-occur and partly correlate, understanding the relationship between perceived similarity and brain activity requires assessing the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color, and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly, within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations, both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership nor by a deep neural network trained on object categorization. Together, our results comprehensively characterize the relationship between perceived similarity along key object dimensions and neural activity.
Background Autism is a heterogeneous collection of disorders with a complex molecular underpinning. Evidence from postmortem brain studies has indicated that early prenatal development may be altered in autism. Induced pluripotent stem cells (iPSCs) generated from individuals with autism and macrocephaly also point to prenatal development as a critical period for this condition. However, little is known about early altered cellular events during prenatal stages in autism. Methods iPSCs were generated from 9 unrelated individuals with autism without macrocephaly and with heterogeneous genetic backgrounds, and from 6 typically developing control individuals. iPSCs were differentiated toward either cortical or midbrain fates. Gene expression and high-throughput cellular phenotyping were used to characterize iPSCs at different stages of differentiation. Results A subset of autism-iPSC cortical neurons was RNA-sequenced, revealing autism-specific signatures similar to those from postmortem brain studies and indicating a potential common biological mechanism. Autism-iPSCs differentiated toward a cortical fate displayed impairments in the ability to self-organize into neural rosettes. In addition, autism-iPSCs demonstrated significant differences in the rate of cell-type assignment of cortical precursors and of dorsal and ventral forebrain precursors. These cellular phenotypes occurred in the absence of alterations in cell proliferation during cortical differentiation, differing from previous studies. Acquisition of cell fate during midbrain differentiation did not differ between control- and autism-iPSCs. Conclusions Taken together, our data indicate that autism-iPSCs diverge from control-iPSCs at the cellular level during early stages of neurodevelopment. This suggests that unique developmental differences associated with autism may be established at early prenatal stages.