Focal adhesions (FAs) are specialized membrane-associated multi-protein complexes that link the cell to the extracellular matrix and play crucial roles in cell-matrix sensing. Considerable information is available on the complex molecular composition of these sites, yet the regulation of FA dynamics is largely unknown. Combining FRAP studies in live cells with in silico simulations and mathematical modeling, we show that the FA plaque proteins paxillin and vinculin exist in four dynamic states: an immobile FA-bound fraction, an FA-associated fraction undergoing exchange, a juxtamembrane fraction experiencing attenuated diffusion, and a fast-diffusing cytoplasmic pool. The juxtamembrane region surrounding FAs displays a gradient of FA plaque proteins with respect to both concentration and dynamics. Based on these findings, we propose a new model for the regulation of FA dynamics in which this juxtamembrane domain acts as an intermediary layer, enabling efficient regulation of FA formation and reorganization.
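The multi-state picture above rests on FRAP recovery curves whose plateau falls short of full recovery (revealing an immobile fraction) and whose rise mixes a fast diffusing component with a slower exchanging one. A minimal numpy sketch of such a two-component recovery model, with illustrative parameter names and values (not the paper's fitted rates):

```python
import numpy as np

def frap_recovery(t, mobile_frac, k_fast, k_slow, w_fast):
    """Two-component FRAP recovery with an immobile fraction.

    Fluorescence recovers toward `mobile_frac` (< 1 when an immobile,
    FA-bound pool exists), mixing a fast (diffusing) and a slow
    (exchanging) component weighted by `w_fast`. All parameters here
    are illustrative, not measured values.
    """
    fast = 1.0 - np.exp(-k_fast * t)
    slow = 1.0 - np.exp(-k_slow * t)
    return mobile_frac * (w_fast * fast + (1.0 - w_fast) * slow)

# Synthetic curve: 70% mobile, fast diffusion plus slow FA exchange.
t = np.linspace(0.0, 60.0, 300)
curve = frap_recovery(t, mobile_frac=0.7, k_fast=1.0, k_slow=0.05, w_fast=0.5)

# The plateau estimates the mobile fraction; the gap to full recovery
# (1.0) estimates the immobile, stably FA-bound pool.
immobile_est = 1.0 - curve[-1]
```

Fitting such a model to measured recovery curves (e.g., by nonlinear least squares) separates the pools by their characteristic rates; distinguishing the juxtamembrane attenuated-diffusion pool additionally requires the spatial simulations described in the abstract.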
Background: DNA chips allow simultaneous measurement of the genome-wide response of thousands of genes, i.e., system-level monitoring of gene-network activity. Advanced analysis methods have been developed to extract meaningful information from the vast amount of raw gene-expression data obtained from microarray measurements. These methods usually aim to distinguish between groups of subjects (e.g., cancer patients vs. healthy subjects) or to identify marker genes that help distinguish between those groups. We assumed that motifs related to the internal structure of operons and to gene-network regulation are also embedded in microarray data and can be deciphered by proper analysis.
Methodology/Principal Findings: The analysis presented here is based on investigating gene-gene correlations. We analyzed a database of gene expression of Bacillus subtilis exposed to sub-lethal levels of 37 different antibiotics. Using unsupervised analysis (a dendrogram) of the matrix of normalized gene-gene correlations, we identified the operons, which form distinct clusters of genes in the sorted correlation matrix. Applying a dimension-reduction algorithm (Principal Component Analysis, PCA) to the matrices of normalized correlations reveals functional motifs. The genes are placed in a reduced three-dimensional space of the three leading PCA eigenvectors, scaled according to their corresponding eigenvalues. We found that the organization of the genes in the reduced PCA space recovers motifs of the operon's internal structure, such as the order of the genes along the genome, gene separation by non-coding segments, and translational start and end regions. In addition to the intra-operon structure, it is also possible to predict inter-operon relationships, operons sharing functional regulation factors, and more. In particular, we demonstrate the above in the context of the competence and sporulation pathways.
Conclusions/Significance: We demonstrated that by analyzing gene-gene correlations from gene-expression data it is possible to identify operons and to predict the unknown internal structure of operons and gene-network regulation.
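The pipeline described above (gene-gene correlation matrix, clustering, then PCA projection of genes onto the leading eigenvectors) can be sketched on toy data. The synthetic "operons," dimensions, and noise levels below are illustrative assumptions, not the actual B. subtilis dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expression matrix: 37 "antibiotic conditions" x 10 genes. Genes
# 0-4 form one synthetic co-regulated "operon", genes 5-9 another;
# all sizes and noise levels here are illustrative.
conditions, n_genes = 37, 10
drivers = rng.normal(size=(conditions, 2))        # two regulatory signals
expr = np.empty((conditions, n_genes))
expr[:, :5] = drivers[:, [0]] + 0.1 * rng.normal(size=(conditions, 5))
expr[:, 5:] = drivers[:, [1]] + 0.1 * rng.normal(size=(conditions, 5))

# Gene-gene correlation matrix: co-operonic genes form a high-correlation
# block, which clustering of the sorted matrix would recover.
corr = np.corrcoef(expr.T)

# PCA on the correlation matrix: place each gene in the 3-D space of the
# three leading eigenvectors, scaled by their eigenvalues.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
coords = eigvecs[:, order[:3]] * eigvals[order[:3]]

# Co-operonic genes land close together in the reduced space, while
# genes from different "operons" are well separated.
d_within = np.linalg.norm(coords[0] - coords[1])
d_between = np.linalg.norm(coords[0] - coords[6])
```

The fine intra-operon motifs reported in the paper (gene order, non-coding gaps, translational start/end regions) would appear as systematic displacements within each operon's cluster in this reduced space.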
Much of what is known about the timing of visual processing in the brain is inferred from intracranial studies in monkeys, with human data limited mainly to non-invasive methods with lower spatial resolution. Here, we estimated visual onset latencies from electrocorticographic (ECoG) recordings in a patient who was implanted with 112 subdural electrodes, distributed across the posterior cortex of the right hemisphere, for pre-surgical evaluation of intractable epilepsy. Functional MRI prior to surgery was used to determine the boundaries of visual areas. The patient was presented with images of objects from several categories. Event-related potentials (ERPs) were calculated across all categories excluding targets, and statistically reliable onset latencies were determined using a bootstrapping procedure over the single-trial baseline activity in individual electrodes. The distribution of onset latencies broadly reflected the known hierarchy of visual areas, with the earliest cortical responses in primary visual cortex and later responses in higher areas. A clear exception to this pattern was a robust, statistically reliable, and spatially localized very early response on the bank of the posterior intraparietal sulcus (IPS). The response in the IPS started nearly simultaneously with responses detected in peristriate visual areas, around 60 milliseconds post-stimulus onset. Our results support the notion of early visual processing in the posterior parietal lobe, not respecting traditional hierarchies, and give direct evidence for onset times of visual responses across the human cortex.
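The onset-latency procedure described above (bootstrap a detection threshold from single-trial baseline activity, then find the first sustained supra-threshold run in the ERP) can be sketched as follows. The threshold quantile, run length, and synthetic data are illustrative choices, not the study's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def onset_latency(trials, baseline, times, alpha=0.05, n_boot=2000, n_consec=5):
    """Estimate response onset as the first run of `n_consec` samples whose
    trial-averaged amplitude exceeds a bootstrap threshold built from
    single-trial baseline activity. A sketch of the general idea only;
    the quantile and run length are illustrative choices.
    """
    # Bootstrap the baseline: resample trials with replacement, average,
    # and record the peak of each resampled average.
    n_trials = baseline.shape[0]
    boot_max = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n_trials, n_trials)
        boot_max[i] = baseline[idx].mean(axis=0).max()
    threshold = np.quantile(boot_max, 1.0 - alpha)

    erp = trials.mean(axis=0)
    above = erp > threshold
    # First index where n_consec consecutive samples exceed the threshold.
    run = np.convolve(above.astype(int), np.ones(n_consec, int), "valid")
    hits = np.flatnonzero(run == n_consec)
    return times[hits[0]] if hits.size else None

# Synthetic electrode: noise before 0.06 s, a step response afterwards.
times = np.linspace(0.0, 0.3, 301)
trials = rng.normal(0, 1, size=(50, 301)) + np.where(times > 0.06, 3.0, 0.0)
baseline = rng.normal(0, 1, size=(50, 301))

latency = onset_latency(trials, baseline, times)
```

Requiring a run of consecutive supra-threshold samples, rather than a single crossing, protects the estimate against isolated noise peaks that survive the bootstrap threshold.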
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within “Broca’s area.” However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions’ responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
A network of left frontal and temporal brain regions supports 'high-level' language processing-including the processing of word meanings, as well as word-combinatorial processing-across presentation modalities. This 'core' language network has been argued to store our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about sounds (phonemes) and how they combine to form clusters, syllables, and words. Is this knowledge of phoneme combinatorics also represented in these language regions? Across five fMRI experiments, we investigated the sensitivity of high-level language processing brain regions to sub-lexical linguistic sound patterns by examining responses to diverse nonwords-sequences of sounds/letters that do not constitute real words (e.g., punes, silory, flope). We establish robust responses in the language network to visually (Experiment 1a, n=605) and auditorily (Experiments 1b, n=12, and 1c, n=13) presented nonwords relative to baseline. In Experiment 2 (n=16), we find stronger responses to nonwords that obey the phoneme-combinatorial constraints of English. Finally, in Experiment 3 (n=14) and a post-hoc analysis of Experiment 2, we provide suggestive evidence that the responses in Experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that knowledge of phoneme combinatorics and representations of sub-lexical linguistic sound patterns are stored within the same fronto-temporal network that stores higher-level linguistic knowledge and supports word and sentence comprehension.
The perceptual organization of pitch is frequently described as helical, with a monotonic dimension of pitch height and a circular dimension of pitch chroma, accounting for the repeating structure of the octave. Although the neural representation of pitch height is widely studied, the way in which pitch chroma representation is manifested in neural activity is currently debated. We tested the automaticity of pitch chroma processing using the MMN—an ERP component indexing automatic detection of deviations from auditory regularity. Musicians trained to classify pure or complex tones across four octaves, based on chroma—C versus G (21 participants, Experiment 1) or C versus F# (27, Experiment 2). Next, they were passively exposed to MMN protocols designed to test automatic detection of height and chroma deviations. Finally, in an “attend chroma” block, participants had to detect the chroma deviants in a sequence similar to the passive MMN sequence. The chroma deviant tones were accurately detected in the training and the attend chroma parts both for pure and complex tones, with a slightly better performance for complex tones. However, in the passive blocks, a significant MMN was found only to height deviations and complex tone chroma deviations, but not to pure tone chroma deviations, even for perfect performers in the active tasks. These results indicate that, although height is represented preattentively, chroma is not. Processing the musical dimension of chroma may require higher cognitive processes, such as attention and working memory.
Everyday auditory stimuli contain structure at multiple time and frequency scales. Using EEG, we demonstrate sensitivity of human auditory cortex to the content of past stimulation in unattended sequences of equiprobable tones. In three experiments with 79 participants overall, we found that at different latencies after stimulus onset, neural responses were sensitive to frequency intervals computed over distinct time scales. To account for these results, we tested a model consisting of neural populations with frequency-specific but broad tuning that undergo adaptation with exponential recovery. We found that the coexistence of neural populations with distinct recovery rates can explain our results. Furthermore, the adaptation bandwidth depends on spectral context: it is wider when the stimulation sequence has a wider frequency range. Our results provide electrophysiological evidence, as well as a possible mechanistic explanation, for dynamic and multi-scale context-dependent auditory processing in the human cortex.
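The model class tested above (frequency-specific but broadly tuned populations that adapt and recover exponentially) can be sketched in a few lines. The Gaussian tuning shape, adaptation increment, and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def adapted_response(tone_freqs, isi, tau, bandwidth, n_channels=40,
                     fmin=200.0, fmax=3200.0):
    """Responses of frequency-tuned populations that adapt when driven
    and recover exponentially with time constant `tau` between tones.
    A minimal sketch; tuning shape and constants are illustrative.
    """
    centers = np.geomspace(fmin, fmax, n_channels)
    adaptation = np.zeros(n_channels)          # 0 = fully recovered
    responses = []
    for f in tone_freqs:
        # Broad Gaussian tuning on a log-frequency (octave) axis.
        drive = np.exp(-0.5 * (np.log2(centers / f) / bandwidth) ** 2)
        responses.append(np.sum(drive * (1.0 - adaptation)))
        # Channels recover exponentially between tones, then each tone
        # adapts the channels it drove.
        adaptation = adaptation * np.exp(-isi / tau) \
            + 0.5 * drive * (1.0 - adaptation)
    return np.array(responses)

# A repeated tone progressively adapts its own channels, so its response
# shrinks; a new frequency recruits fresher channels and responds more.
resp = adapted_response([1000.0] * 5 + [2000.0], isi=0.5, tau=2.0,
                        bandwidth=0.5)
```

Extending this sketch to several populations with distinct `tau` values, and letting `bandwidth` vary with the frequency range of the sequence, reproduces the two context effects the abstract describes: multi-scale sensitivity at different latencies and context-dependent adaptation bandwidth.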