Models of neural architecture and organization are critical for the study of disease, aging, and development. Unfortunately, automating the process of building maps of microarchitectural differences both within and across brains remains a challenge. In this paper, we present a way to build data-driven representations of brain structure using deep learning. With this model we can build meaningful representations of brain structure within an area, learn how different areas are related to one another anatomically, and discover new regions of interest within a sample that share similar anatomical composition. We start by training a deep convolutional neural network to predict which brain area a small image patch comes from, using only the patch's view of its immediate surroundings. By requiring the network to discriminate brain areas from these local views alone, we force it to learn a rich representation of the underlying anatomical features that distinguish different brain areas. Once the network is trained, we open up the black box, extract features from its last hidden layer, and factorize them. After forming a low-dimensional factorization of the network's representations, we find that the learned factors and their embeddings can be used to further resolve biologically meaningful subdivisions within brain regions (e.g., laminar divisions and barrels in somatosensory cortex). These findings speak to the potential of neural networks to learn meaningful features for modeling neural architecture, and to discover new patterns in brain anatomy directly from images.

Much of our study of the brain and its organization, which dates back to Cajal's beautiful renderings of neural architecture [10], still relies on human reasoning to define regions-of-interest (ROIs).
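The second stage of the pipeline described above, factorizing the network's last-hidden-layer features into a low-dimensional embedding, can be illustrated with a nonnegative matrix factorization. The sketch below is minimal and illustrative only: it simulates the (nonnegative, post-ReLU) activation matrix rather than extracting it from a trained network, uses the classic Lee–Seung multiplicative-update rule as a stand-in for whichever factorization method is ultimately chosen, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for last-hidden-layer activations of the trained patch classifier:
# one row per image patch, one column per hidden unit (nonnegative after ReLU).
# Here we simulate a matrix with a known low-rank nonnegative structure.
n_patches, n_units, n_factors = 200, 64, 5
F = rng.random((n_patches, n_factors)) @ rng.random((n_factors, n_units))

# Nonnegative matrix factorization, F ~= W @ H, via Lee-Seung multiplicative
# updates. Each row of W is a low-dimensional embedding of one patch; each
# row of H is a "factor" over hidden units.
W = rng.random((n_patches, n_factors)) + 0.1
H = rng.random((n_factors, n_units)) + 0.1
eps = 1e-9  # guards against division by zero
for _ in range(200):
    H *= (W.T @ F) / (W.T @ W @ H + eps)
    W *= (F @ H.T) / (W @ H @ H.T + eps)

# Relative reconstruction error; low error means the factors capture the
# feature matrix well. Clustering the rows of W is what would then be used
# to look for subdivisions (e.g., laminae) within a labeled area.
err = np.linalg.norm(F - W @ H) / np.linalg.norm(F)
```

The multiplicative updates keep `W` and `H` nonnegative throughout, which is what makes the resulting factors interpretable as additive anatomical components rather than signed ones.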
Experts do this by examining different parts of an image, either in terms of their anatomical composition or patterning, and then building an atlas or model of how the architecture changes across different brain regions. Moving forward, however, given the ever-increasing sizes of new neuroimaging datasets, we need automated solutions that can characterize brain structure, discover architectural patterns or primitives that are characteristic of a brain area, and provide good ways to discover substructures (like cortical laminae) within a known brain area.

Unfortunately, when attempting to translate expert human knowledge into an automated method for region discovery, it is often unclear how to define the necessary image properties to extract. Moreover, extracting these features can be difficult to automate, or they may not be robust to minor changes in the data (e.g., illumination conditions, background intensity, or noise) [11,12]. Features therefore typically need to be hand-crafted for each individual dataset and application, and problems with generalization are only further exacerbated by biological variability, as well as by process variations introduced during sample preparation, imaging, or post...