As CMOS scaling reaches its technological limits, a radical departure from traditional von Neumann systems, which involve separate processing and memory units, is needed in order to significantly extend the performance of today's computers. In-memory computing is a promising approach in which nanoscale resistive memory devices, organized in a computational memory unit, are used for both processing and memory. However, to reach the numerical accuracy typically required for data analytics and scientific computing, limitations arising from device variability and non-ideal device characteristics need to be addressed. Here we introduce the concept of mixed-precision in-memory computing, which combines a von Neumann machine with a computational memory unit. In this hybrid system, the computational memory unit performs the bulk of a computational task, while the von Neumann machine implements a backward method to iteratively improve the accuracy of the solution. The system therefore benefits from both the high precision of digital computing and the energy/areal efficiency of in-memory computing. We experimentally demonstrate the efficacy of the approach by accurately solving systems of linear equations, in particular, a system of 5,000 equations using 998,752 phase-change memory devices.
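The mixed-precision scheme described above can be illustrated numerically. The sketch below is an assumption-laden toy, not the authors' PCM hardware: a float16 copy of the matrix stands in for the imprecise computational memory unit, an inner Richardson iteration uses only low-precision matrix-vector products, and an outer float64 loop performs the iterative refinement that restores accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test system
x_true = rng.standard_normal(n)
b = A @ x_true

# Low-precision copy of A: float16 stands in for the limited precision
# and variability of analog memory devices (an illustrative assumption).
A_lo = A.astype(np.float16)

omega = 1.0 / np.linalg.norm(A, ord=np.inf)  # Richardson step size

def inexact_solve(r, n_inner=25):
    """Approximately solve A z = r using only low-precision matvecs."""
    z = np.zeros_like(r)
    for _ in range(n_inner):
        z = z + omega * (r - A_lo @ z)  # matvec through the "memory" unit
    return z

# High-precision outer loop (the von Neumann side): exact residual,
# inexact correction -- classic iterative refinement.
x = np.zeros(n)
for _ in range(30):
    r = b - A @ x
    x = x + inexact_solve(r)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Even though every inner matrix-vector product is rounded to roughly three decimal digits, the high-precision residual loop drives the solution to near float64 accuracy, which is the essence of the hybrid-precision argument.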
Most bacteria live in ever-changing environments where periods of stress are common. One fundamental question is whether individual bacterial cells have an increased tolerance to stress if they have recently been exposed to lower levels of the same stressor. To address this question, we worked with the bacterium Caulobacter crescentus and asked whether exposure to a moderate concentration of sodium chloride would affect survival during later exposure to a higher concentration. We found that the effects measured at the population level depended in a surprising and complex way on the time interval between the two exposure events: The effect of the first exposure on survival of the second exposure was positive for some time intervals but negative for others. We hypothesized that the complex pattern of history dependence at the population level was a consequence of the responses of individual cells to sodium chloride that we observed: (i) exposure to moderate concentrations of sodium chloride caused delays in cell division and led to cell-cycle synchronization, and (ii) whether a bacterium would survive subsequent exposure to higher concentrations was dependent on the cell-cycle state. Using computational modeling, we demonstrated that indeed the combination of these two effects could explain the complex patterns of history dependence observed at the population level. Our insight into how the behavior of single cells scales up to processes at the population level provides a perspective on how organisms operate in dynamic environments with fluctuating stress exposure.

Keywords: bacterial memory | single cell | cell cycle | priming | synchronization

Bacteria are constantly challenged by their environment (1). Are bacterial cells able to respond better to environmental changes if they have experienced similar conditions in the recent past?
It has been demonstrated that bacterial populations respond faster to a change of nutrient source when the forthcoming nutrient source has been presented in the recent past (2, 3). Similarly, bacterial populations that were exposed to sublethal stress levels showed increased survival of a higher stress level of the same type (4-6). Theoretical and experimental studies indicate that basing cellular decisions on environmental cues perceived in the past can be advantageous in dynamic environments (3, 7, 8), suggesting that such history-dependent behavior can be the result of adaptive evolution in dynamic environments. In this study we addressed the question of memory on a single-cell level. We asked whether weak stress events provide individual cells with increased tolerance against future stress. Memory effects have usually been studied on the basis of population measurements (4, 9-12). Using population measurements, it is difficult to determine whether history dependence is a consequence of behavioral changes in individuals or of a shift in the composition of the population as a result of past events. By using single-cell analysis, we investigated how the behavior of individuals scaled up to hi...
We present the Network-based Biased Tree Ensembles (NetBiTE) method for drug sensitivity prediction and drug sensitivity biomarker identification in cancer, using a combination of prior knowledge and gene expression data. Our method consists of a biased tree ensemble that is built according to a probabilistic bias weight distribution. The bias weight distribution is obtained by assigning high weights to the drug targets and propagating the assigned weights over a protein-protein interaction network such as STRING. The propagation of weights defines neighborhoods of influence around the drug targets and as such simulates the spread of perturbations within the cell following drug administration. Using a synthetic dataset, we show how applying biased tree ensembles (BiTE) yields significant accuracy gains at a much lower computational cost compared to the unbiased random forests (RF) algorithm. We then apply NetBiTE to the Genomics of Drug Sensitivity in Cancer (GDSC) dataset and demonstrate that NetBiTE outperforms RF in predicting IC50 drug sensitivity only for drugs that target membrane receptor pathways (MRPs): the RTK, EGFR and IGFR signaling pathways. Based on the NetBiTE results, we propose that for drugs that inhibit MRPs, the expression of target genes prior to drug administration is a biomarker for IC50 drug sensitivity following drug administration. We further verify and reinforce this proposition through control studies on PI3K/MTOR signaling pathway inhibitors, a drug category that does not target MRPs, and through assigning dummy targets to MRP-inhibiting drugs and investigating the resulting variation in NetBiTE accuracy.
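The weight-propagation step can be sketched as a random walk with restart from the drug targets. The six-gene network, the target choice, and the restart parameter below are all hypothetical stand-ins for a real PPI network such as STRING:

```python
import numpy as np

# Toy symmetric interaction network over 6 genes (hypothetical);
# gene 0 plays the role of the drug target.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

W = A / A.sum(axis=0)  # column-stochastic transition matrix

p0 = np.zeros(6)
p0[0] = 1.0            # all initial weight on the drug target

# Random walk with restart: p <- (1 - alpha) * W p + alpha * p0
alpha = 0.3
p = p0.copy()
for _ in range(100):
    p = (1 - alpha) * (W @ p) + alpha * p0

# p is the propagated bias weight distribution: genes closer to the
# target receive higher weight, defining its neighborhood of influence.
```

The stationary vector p concentrates weight on the target and its network neighbors, which is the "neighborhood of influence" used to bias the tree ensemble.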
Reliable identification of molecular biomarkers is essential for accurate patient stratification. While state-of-the-art machine learning approaches for sample classification continue to push boundaries in terms of performance, most of these methods are not able to integrate different data types and lack generalization power, limiting their application in a clinical setting. Furthermore, many methods behave as black boxes, and we have very little understanding of the mechanisms that lead to their predictions. While opaqueness concerning machine behavior might not be a problem in deterministic domains, in health care, providing explanations about the molecular factors and phenotypes that are driving the classification is crucial for building trust in the performance of the predictive system. We propose Pathway-Induced Multiple Kernel Learning (PIMKL), a methodology to reliably classify samples that can also help gain insights into the molecular mechanisms that underlie the classification. PIMKL exploits prior knowledge in the form of a molecular interaction network and annotated gene sets, by optimizing a mixture of pathway-induced kernels using a Multiple Kernel Learning (MKL) algorithm, an approach that has demonstrated excellent performance in different machine learning applications. After optimizing the combination of kernels to predict a specific phenotype, the model provides a stable molecular signature that can be interpreted in light of the ingested prior knowledge and that can be used in transfer learning tasks.
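A minimal sketch of the pathway-kernel idea, under stated assumptions: synthetic data, linear kernels per gene set, and a simple kernel-target-alignment heuristic in place of a full MKL solver. The gene sets and labels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic expression data: 40 samples x 12 genes; the phenotype is
# driven by genes 0 and 1 (an illustrative assumption).
X = rng.standard_normal((40, 12))
y = np.sign(X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(40))

# Hypothetical "pathways" = disjoint gene sets.
pathways = [[0, 1, 2], [3, 4, 5, 6], [7, 8, 9, 10, 11]]

def pathway_kernel(genes):
    K = X[:, genes] @ X[:, genes].T
    return K / np.trace(K)  # trace-normalize so kernels are comparable

kernels = [pathway_kernel(g) for g in pathways]

# Heuristic weighting: kernel-target alignment with the label kernel
# y y^T (a simple stand-in for the MKL optimization used by PIMKL).
Y = np.outer(y, y)
def alignment(K):
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

w = np.array([max(alignment(K), 0.0) for K in kernels])
w = w / w.sum()                  # mixture weights = interpretable signature
K_mix = sum(wi * Ki for wi, Ki in zip(w, kernels))
```

The learned mixture weights concentrate on the pathway containing the informative genes, which is exactly the kind of interpretable molecular signature the method aims to provide.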
Summary

In recent years, SWATH-MS has become the proteomic method of choice for data-independent acquisition, as it enables high proteome coverage, accuracy and reproducibility. However, data analysis is convoluted and requires prior information and expert curation. Furthermore, as quantification is limited to a small set of peptides, potentially important biological information may be discarded. Here we demonstrate that deep learning can be used to learn discriminative features directly from raw MS data, thus eliminating the need for elaborate data-processing pipelines. Using transfer learning to overcome sample sparsity, we exploit a collection of publicly available deep learning models already trained for the task of natural image classification. These models are used to produce feature vectors from each mass spectrometry (MS) raw image, which are then used as input to a classifier trained to distinguish tumor from normal prostate biopsies. Although the deep learning models were originally trained for a completely different classification task and no additional fine-tuning was performed on them, we achieve a remarkable classification performance of 0.876 AUC. We investigate different types of image preprocessing and encoding. We also investigate whether the inclusion of the secondary MS2 spectra improves the classification performance. Throughout all tested models, we use standard protein expression vectors as gold standards. Even with our naïve implementation, our results suggest that the application of deep learning and transfer learning techniques might pave the way to broader use of raw mass spectrometry data in real-time diagnosis.

Availability and implementation

The open source code used to generate the results from MS images is available on GitHub: https://ibm.biz/mstransc. The raw MS data underlying this article cannot be shared publicly to protect the privacy of the individuals who participated in the study.
Processed data, including the MS images, their encodings, classification labels and results, can be accessed at the following link: https://ibm.box.com/v/mstc-supplementary.

Supplementary information

Supplementary data are available at Bioinformatics online.
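The frozen-extractor pattern described above can be caricatured in a few lines. Everything below is synthetic: a fixed random ReLU projection stands in for the pretrained CNN, a nearest-centroid rule stands in for the trained classifier, and Gaussian vectors stand in for rasterized MS images.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for flattened MS "images": 60 normal vs 60 tumor.
n, d = 120, 256
labels = np.array([0] * 60 + [1] * 60)
images = rng.standard_normal((n, d))
images[labels == 1] += 0.4  # injected class signal (assumption)

# Frozen "pretrained" feature extractor: a fixed random ReLU projection
# playing the role of a CNN's penultimate layer; it is never fine-tuned.
W = rng.standard_normal((d, 64)) / np.sqrt(d)
features = np.maximum(images @ W, 0.0)

# Classifier on the frozen features: nearest class centroid, scored by
# the difference of distances to the two centroids.
mu0 = features[labels == 0].mean(axis=0)
mu1 = features[labels == 1].mean(axis=0)
scores = (np.linalg.norm(features - mu0, axis=1)
          - np.linalg.norm(features - mu1, axis=1))

# AUC as the rank statistic: probability that a random tumor sample
# outscores a random normal sample.
pos, neg = scores[labels == 1], scores[labels == 0]
auc = float(np.mean(pos[:, None] > neg[None, :]))
```

The point of the sketch is only the division of labor: the extractor is fixed and never trained on MS data, while all supervision goes into the lightweight classifier on top of its features.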
Boolean models are a powerful abstraction for qualitative modeling of gene regulatory networks. With the recent availability of advanced high-throughput technologies, Boolean models have increasingly grown in size and complexity, posing a challenge for existing software simulation tools that have not scaled at the same speed. Field Programmable Gate Arrays (FPGAs) are powerful reconfigurable integrated circuits that can offer massive performance improvements. Due to their highly parallel nature, FPGAs are well suited to simulate complex molecular networks. We present here a new simulation framework for Boolean models, which first converts the model to Verilog, a standardized hardware description language, and then connects it to an execution core that runs on an FPGA coherently attached to a POWER8 processor. We report an order-of-magnitude speedup over a multi-threaded software simulation tool running on the same processor, on a selection of Boolean models. Analysis of a T-cell large granular lymphocyte leukemia (T-LGL) model demonstrates that our framework achieves consistent performance improvements, resulting in new biological insights. In addition, we show that our solution allows attractor detection to be performed at unprecedented speed, exhibiting a speedup ranging from one to three orders of magnitude compared to alternative software solutions.
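The attractor-detection task that the FPGA accelerates is easy to state in software. Below is a purely illustrative Python sketch on a hypothetical three-gene model with synchronous updates; the paper's framework instead compiles such rules to Verilog and executes them in hardware.

```python
from itertools import product

# Hypothetical 3-gene Boolean model, synchronous update:
#   A* = not C ;  B* = A ;  C* = A and B
def step(state):
    a, b, c = state
    return (not c, a, a and b)

def attractors(n_genes=3):
    """Exhaustively enumerate attractors by following every initial state
    until the trajectory revisits a state, then extracting the cycle."""
    found = set()
    for init in product([False, True], repeat=n_genes):
        trajectory, index = [], {}
        s = init
        while s not in index:
            index[s] = len(trajectory)
            trajectory.append(s)
            s = step(s)
        cycle = trajectory[index[s]:]
        # canonical rotation so the same cycle is only counted once
        canon = min(tuple(cycle[i:] + cycle[:i]) for i in range(len(cycle)))
        found.add(canon)
    return found

atts = attractors()  # this toy model has a single cyclic attractor
```

Exhaustive enumeration is exponential in the number of genes, which is precisely why the hardware-parallel approach pays off on large models.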
Background: The ability to form a cellular memory and use it for cellular decision-making could help bacteria cope with recurrent stress conditions. We analyzed whether bacteria form a cellular memory specifically when past events are predictive of future conditions. We worked with the asymmetrically dividing bacterium Caulobacter crescentus, where past events are expected to be informative for only one of the two cells emerging from division: the sessile cell that remains in the same microenvironment and does not migrate.

Results: Time-resolved analysis of individual cells revealed that past exposure to low levels of antibiotics increases tolerance to future exposure for the sessile but not for the motile cell. Using computer simulations, we found that such an asymmetry in cellular memory could be an evolutionary response to situations where the two cells emerging from division will experience different future conditions.

Conclusions: Our results raise the question of whether bacteria can evolve the ability to form and use cellular memory conditionally in situations where doing so is beneficial.

Electronic supplementary material: The online version of this article (doi:10.1186/s12862-017-0884-4) contains supplementary material, which is available to authorized users.