Cue-induced cocaine seeking intensifies or incubates after withdrawal from extended-access cocaine self-administration, a phenomenon termed incubation of cocaine craving. The expression of incubated craving is mediated by Ca2+-permeable AMPA receptors (CP-AMPARs) in the nucleus accumbens (NAc). Thus, CP-AMPARs are a potential target for therapeutic intervention, making it important to understand mechanisms that govern their accumulation. Here we used subcellular fractionation and biotinylation of NAc tissue to examine the abundance and distribution of AMPAR subunits, and GluA1 phosphorylation, in the incubation model. We also studied two transmembrane AMPA receptor regulatory proteins (TARPs), γ2 and γ4. Our results, together with earlier findings, suggest that some of the new CP-AMPARs are synaptic. These are probably associated with γ2, but they are loosely tethered to the PSD. Levels of GluA1 phosphorylated at serine 845 (pS845 GluA1) were significantly increased in biotinylated tissue and in an extrasynaptic membrane-enriched fraction. These results suggest that increased synaptic levels of CP-AMPARs may result in part from an increase in pS845 GluA1 in extrasynaptic membranes, given that S845 phosphorylation primes GluA1-containing AMPARs for synaptic insertion and extrasynaptic AMPARs supply the synapse. Some of the new extrasynaptic CP-AMPARs are likely associated with γ4, rather than γ2. The maintenance of CP-AMPARs in NAc synapses during withdrawal is accompanied by activation of CaMKII and ERK2 but not CaMKI. Overall, AMPAR plasticity in the incubation model shares some features with better-described forms of synaptic plasticity, although the timing of the phenomenon and the persistence of related neuroadaptations are significantly different.
Network representation learning (NRL) aims to map vertices of a network into a low-dimensional space which preserves the network structure and its inherent properties. Most existing methods for network representation adopt shallow models which have relatively limited capacity to capture highly nonlinear network structures, resulting in sub-optimal network representations. Therefore, it is nontrivial to explore how to effectively capture highly nonlinear network structure and preserve the global and local structure in NRL. To solve this problem, in this paper we propose a new graph convolutional autoencoder architecture based on a depth-based representation of graph structure, referred to as the depth-based subgraph convolutional autoencoder (DS-CAE), which integrates both the global topological and local connectivity structures within a graph. Our idea is to first decompose a graph into a family of K-layer expansion subgraphs rooted at each vertex, aimed at better capturing long-range vertex inter-dependencies. Then a set of convolution filters slide over the entire sets of subgraphs of a vertex to extract the local structural connectivity information. This is analogous to the standard convolution operation on grid data. In contrast to most existing models for unsupervised learning on graph-structured data, our model can capture highly non-linear structure by simultaneously integrating node features and network structure into network representation learning. This significantly …

Many naturally occurring problems can be represented using an underlying graph or network structure, such as natural language processing [1], computer vision [2], or social network analysis [3]. It is therefore crucial to accurately extract useful information from a network. One promising strategy is to map vertices of a network into a low-dimensional space, i.e. 
learn vector representations for each vertex, with the goal of encoding meaningful information conveyed by the graph in the learned embedding space. As a result, the learned representation can be used directly with existing machine learning methods to perform various network analysis tasks, such as vertex classification [3] and clustering [4].

A successful network representation learning model should exhibit two desirable properties: a) high non-linearity: as [5] stated, the structure and inherent properties of a network are highly non-linear; b) structure preservation [6]: the learned representation should preserve both the global topological arrangement information (e.g. long-range dependencies) and the local connectivity structure of a network.

Traditional methods for graph dimensionality reduction such as Local Linear Embedding (LLE) [7], Laplacian eigenmaps [8] and IsoMAP [9,10] are based on a graph affinity matrix decomposition, which treats the leading eigenvectors as a representation. However, the time complexity of these methods is at least quadratic in the number of graph nodes, which makes them inapplicable to large-scale netw...
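The affinity-matrix decomposition underlying these traditional methods can be illustrated with a minimal Laplacian-eigenmap sketch. This is a generic toy example, not code from the paper: the 4-node path graph, the variable names, and the embedding dimension `d` are all assumptions made for illustration. The full eigendecomposition is what drives the (at least) quadratic-to-cubic cost in the number of nodes.

```python
import numpy as np

# Toy adjacency matrix for a 4-node path graph (hypothetical example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # unnormalized graph Laplacian

# Full eigendecomposition: O(n^3) in the number of nodes, which is
# why these spectral methods do not scale to large networks.
vals, vecs = np.linalg.eigh(L)

# The first eigenvector (eigenvalue ~0) is the trivial constant vector;
# the next d eigenvectors give a d-dimensional vertex embedding.
d = 2
embedding = vecs[:, 1:1 + d]   # one d-dimensional row per vertex
```

Each row of `embedding` is the learned low-dimensional representation of one vertex; nearby vertices in the graph receive nearby coordinates.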
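The K-layer expansion subgraphs described in the abstract can be sketched as breadth-first search layers around each root vertex. The helper below is a hypothetical illustration under the assumption that layer i contains the vertices first reached at hop i; the paper's exact depth-based construction may differ, and the function and variable names are not from the paper.

```python
from collections import deque

def k_layer_expansion(adj, root, K):
    """Collect BFS layers around `root` up to depth K.

    `adj` maps each vertex to an iterable of its neighbours.
    Returns a list of layers: layer 0 is [root], and layer i holds the
    vertices first reached at hop i. The union of all layers is the
    vertex set of the K-layer expansion subgraph rooted at `root`.
    """
    seen = {root}
    layers = [[root]]
    frontier = deque([root])
    for _ in range(K):
        next_layer = []
        for _ in range(len(frontier)):
            v = frontier.popleft()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    next_layer.append(u)
                    frontier.append(u)
        if not next_layer:          # graph exhausted before depth K
            break
        layers.append(next_layer)
    return layers

# Usage on a 4-node path graph 0-1-2-3:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(k_layer_expansion(adj, 0, 2))  # → [[0], [1], [2]]
```

A convolution filter can then slide over the (suitably ordered) vertices of each rooted subgraph, analogous to sliding a filter over a patch of grid data.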