Fluorescent nanosensors are powerful tools for basic research and bioanalytical applications. Individual nanosensors can detect single molecules, while ensembles of nanosensors can be used to measure the bulk concentration of an analyte. Collective imaging of multiple nanosensors could provide both spatial and temporal chemical information from the nano- to the microscale. This type of chemical imaging with nanosensors would be very attractive for studying processes such as chemical signaling between cells (e.g., neurons). So far, it is not understood which processes are resolvable (in concentration, time, and space) and how optimal sensors should be designed. Here, we develop a theoretical framework to simulate the fluorescence image of arrays of nanosensors in response to a concentration gradient. For that purpose, binding and unbinding of the analyte are simulated for each single nanosensor by using a Monte Carlo simulation and varying rate constants (k_on, k_off). Multiple nanosensors are arranged on a surface and exposed to a concentration pattern c(x,y,t) of an analyte. We account for the resolution limit of light microscopy (Abbe limit) as well as the acquisition speed and resolution of optical setups and determine the resulting response images ΔI(x,y,t). We then introduce measures for the spatial and temporal resolution and simulate phase diagrams for different rate constants, which allow us to predict how a sensor should be designed to provide a desired spatial and temporal resolution. Our results show, for example, that imaging of neurotransmitter release requires rate constants on the order of k_on = 10^6 M^-1 s^-1 and k_off = 10^2 s^-1 in many scenarios, which corresponds to high dissociation constants of K_d > 100 μM. This work predicts whether a given fluorescent nanosensor array (rate constants, size, shape, geometry, density) is able to resolve fast concentration changes such as neurotransmitter release from cells. Additionally, we provide rational design principles to engineer nanosensors for chemical imaging.
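To make the simulation approach concrete, the following is a minimal sketch of a fixed-timestep Monte Carlo for a single two-state sensor, in the spirit of the framework described above. It is not the authors' code; the function name and all parameter values (including the K_d = 100 μM example) are illustrative assumptions.

```python
import numpy as np

def simulate_sensor(c, dt, k_on, k_off, rng):
    """Fixed-timestep Monte Carlo of a single two-state nanosensor.

    c     : analyte concentration at the sensor for each time step (M)
    dt    : time step (s)
    k_on  : association rate constant (1/(M*s))
    k_off : dissociation rate constant (1/s)

    Returns the binary occupancy trace (1 = analyte bound, fluorescence on).
    """
    bound = False
    trace = np.empty(len(c), dtype=int)
    for t, conc in enumerate(c):
        if not bound:
            # probability that a binding event occurs within dt
            bound = rng.random() < 1.0 - np.exp(-k_on * conc * dt)
        else:
            # probability that the bound analyte dissociates within dt
            bound = rng.random() >= 1.0 - np.exp(-k_off * dt)
        trace[t] = bound
    return trace

# illustrative example: 0.5 s pulse of 100 uM analyte, then washout,
# with K_d = k_off / k_on = 100 uM (equilibrium occupancy 0.5 during the pulse)
rng = np.random.default_rng(1)
c = np.where(np.arange(10_000) < 5_000, 100e-6, 0.0)
trace = simulate_sensor(c, dt=1e-4, k_on=1e6, k_off=1e2, rng=rng)
print("occupancy during pulse:", trace[:5_000].mean())
```

Summing such occupancy traces over a grid of sensors and convolving with the microscope's point spread function would then yield a diffraction-limited response image ΔI(x,y,t) of the kind described above.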
Using a molecular-level equilibrium theory in which proteins are described by their crystallographic structure, we have studied protein adsorption from binary and ternary mixtures of myoglobin, lysozyme, and cytochrome c onto poly(methacrylic acid) hydrogel films. The pH gradients these films induce can lead to selective protein adsorption, with the solution pH providing a convenient external dial to control protein separation. Changing the chemical composition of the polymer network, by adding either a second acidic comonomer or a neutral one, allows proteins to be localized to controlled spatial regions of the film with nanometer resolution. As pH-sensitive polymer hydrogels are promising candidates for smart, responsive biomaterials, understanding the complexity of competitive protein adsorption is essential. In this work, we highlight the decisive role of amino acid protonation in selective protein adsorption. We present conditions under which the hydrogel film selectively incorporates the more weakly charged protein, provided that less work is required to protonate its amino acids.
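The role of protonation can be illustrated with ideal, independent-site Henderson-Hasselbalch titration. The sketch below is not the molecular theory used in the work; the residue counts for "protein A" and "protein B" are hypothetical and serve only to show how the solution pH tunes which protein is more weakly charged.

```python
def protonated_fraction(pH, pKa):
    """Ideal Henderson-Hasselbalch: fraction of sites carrying the proton."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

def net_charge(pH, n_acid, n_base, pKa_acid=4.0, pKa_base=10.5):
    """Net charge of an idealized protein with independent ionizable sites:
    acidic sites are neutral when protonated (-1 otherwise), basic sites
    are +1 when protonated (neutral otherwise)."""
    q_acid = -n_acid * (1.0 - protonated_fraction(pH, pKa_acid))
    q_base = +n_base * protonated_fraction(pH, pKa_base)
    return q_acid + q_base

# two hypothetical proteins with different numbers of ionizable residues
for pH in (4.0, 5.0, 6.0, 7.0):
    qa = net_charge(pH, n_acid=20, n_base=15)   # "protein A"
    qb = net_charge(pH, n_acid=10, n_base=18)   # "protein B"
    print(f"pH {pH}: qA = {qa:+.1f}, qB = {qb:+.1f}")
```

Protonating a site against the bulk pH costs roughly k_B T ln(10) (pH − pKa) of free energy per site, which is the "work to protonate" invoked in the abstract; inside the acidic gel the local pH is shifted, so this work differs between proteins.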
Epileptic seizures are characterized by abnormal and excessive neural activity, during which cortical network dynamics appear to become unstable. Most of the time, however, during seizure-free periods, the cortex of epilepsy patients shows perfectly stable dynamics. This raises the question of how recurring instability can arise against this stable default state. In this work, we examine two potential scenarios of seizure generation: (i) epileptic cortical areas might generally operate closer to instability, which would make epilepsy patients generally more susceptible to seizures, or (ii) epileptic cortical areas might drift systematically towards instability before seizure onset. We analyzed single-unit spike recordings from both the epileptogenic (focal) and the nonfocal cortical hemispheres of 20 epilepsy patients. We quantified the distance to instability in the framework of criticality, using a novel estimator that enables unbiased inference from a small set of recorded neurons. Surprisingly, we found no evidence for either scenario: focal areas did not generally operate closer to instability, nor were seizures preceded by a drift towards instability. In fact, our results from both pre-seizure and seizure-free intervals suggest that, despite epilepsy, the human cortex operates in the stable, slightly subcritical regime, just like the cortex of other healthy mammals.
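The subsampling-robust estimation referred to here exploits the fact that, for autoregressive population dynamics, the lag-k regression slope decays as r_k = b * m^k, where observing only a few neurons biases the prefactor b but not the branching parameter m; the distance to instability is then 1 - m. A minimal numpy sketch of this idea (illustrative code and names, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def branching_parameter(activity, kmax=40):
    """Estimate the branching parameter m via multistep regression:
    fit r_k = b * m**k, where subsampling biases only b, not m.
    Distance to instability is 1 - m (m = 1 would be critical)."""
    a = np.asarray(activity, dtype=float)
    a -= a.mean()
    var = a @ a / len(a)
    ks = np.arange(1, kmax + 1)
    # lag-k regression slope = lagged covariance / variance
    rk = np.array([(a[:-k] @ a[k:]) / (len(a) - k) / var for k in ks])
    (b, m), _ = curve_fit(lambda k, b, m: b * m**k, ks, rk,
                          p0=(1.0, 0.9), bounds=([0.0, 0.0], [2.0, 1.0]))
    return m

# toy check: AR(1) dynamics with m = 0.95, observed through heavy noise
rng = np.random.default_rng(0)
x = np.zeros(100_000)
for t in range(1, len(x)):
    x[t] = 0.95 * x[t - 1] + rng.normal()
obs = x + rng.normal(scale=3.0, size=len(x))  # noise biases b, but not m
print(branching_parameter(obs))               # ~0.95 -> distance ~0.05
```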
Here we present our Python toolbox “MR. Estimator” to reliably estimate the intrinsic timescale from electrophysiological recordings of heavily subsampled systems. Originally intended for the analysis of time series of neuronal spiking activity, our toolbox is applicable to a wide range of systems where subsampling (the inability to observe the whole system in full detail) limits our capability to record. Applications range from epidemic spreading to any system that can be represented by an autoregressive process. In the context of neuroscience, the intrinsic timescale can be thought of as the duration over which a perturbation reverberates within the network; it has been used as a key observable to investigate the functional hierarchy across the primate cortex and serves as a measure of working memory. It is also a proxy for the distance to criticality and quantifies a system’s dynamic working point.
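A typical session, based on the quickstart in the toolbox's documentation (the argument and attribute names below follow the documented API as we recall it and may differ between package versions):

```python
import mrestimator as mre

# generate surrogate data: a branching process with m = 0.98, 10 trials
bp = mre.simulate_branching(m=0.98, a=1000, numtrials=10)

# multistep-regression coefficients r_k ('ts' = trial-separated method)
rk = mre.coefficients(bp, method='ts', dtunit='step')

# fit an exponential decay to the coefficients
ft = mre.fit(rk)

print(ft.mre)  # estimated branching parameter m
print(ft.tau)  # intrinsic timescale, tau = -dt / ln(m), in steps
```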
Camera calibration is a prerequisite for many computer vision applications. While a good calibration can turn a camera into a measurement device, a poor one can deteriorate a system's performance. In the recent past, there have been great efforts to simplify the calibration process. Yet, inspection and evaluation of calibration results typically still require expert knowledge. In this work, we introduce two novel methods to capture the fundamental error sources in camera calibration: systematic errors (biases) and remaining uncertainty (variance). Importantly, the proposed methods do not require capturing additional images and are independent of the camera model. We evaluate the methods on simulated and real data and demonstrate how a state-of-the-art system for guided calibration can be improved. In combination, the methods allow novice users to perform camera calibration and to verify both its accuracy and its precision.
Accurate camera calibration is a precondition for many computer vision applications. Calibration errors, such as wrong model assumptions or imprecise parameter estimation, can deteriorate a system's overall performance, making the reliable detection and quantification of these errors critical. In this work, we introduce an evaluation scheme to capture the fundamental error sources in camera calibration: systematic errors (biases) and uncertainty (variance). The proposed bias detection method uncovers even the smallest systematic errors, thereby revealing imperfections of the calibration setup and providing a basis for camera model selection. A novel resampling-based uncertainty estimator enables uncertainty estimation under non-ideal conditions, extending the classical covariance estimator. Furthermore, we derive a simple uncertainty metric that is independent of the camera model. In combination, the proposed methods can be used not only to assess the accuracy of individual calibrations but also to benchmark new calibration algorithms, camera models, or calibration setups. We evaluate the proposed methods with simulations and real cameras.
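As an illustration of the resampling idea, here is a generic bootstrap over calibration images using OpenCV's cv2.calibrateCamera. This is not the specific estimator proposed in the paper; the function name bootstrap_intrinsics is hypothetical, and obj_pts/img_pts are assumed to hold the detected calibration-target points per image.

```python
import numpy as np
import cv2

def bootstrap_intrinsics(obj_pts, img_pts, image_size, n_boot=100, seed=0):
    """Resampling-based uncertainty for camera intrinsics: recalibrate on
    bootstrap samples of the calibration images and report the spread of
    the estimated parameters. obj_pts/img_pts are per-image point lists
    in the format expected by cv2.calibrateCamera."""
    rng = np.random.default_rng(seed)
    n = len(obj_pts)
    params = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # sample images with replacement
        _, K, dist, _, _ = cv2.calibrateCamera(
            [obj_pts[i] for i in idx], [img_pts[i] for i in idx],
            image_size, None, None)
        params.append([K[0, 0], K[1, 1], K[0, 2], K[1, 2]])
    # standard deviation of fx, fy, cx, cy across resamples (in pixels)
    return np.asarray(params).std(axis=0)
```

Unlike a covariance taken from a single optimization, the spread across resamples also reflects how sensitive the calibration is to the particular set of captured views.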
Information processing in the brain requires the integration of information over time. Such integration can be achieved if signals are maintained in the network activity for the required period, as quantified by the intrinsic timescale. While short timescales are considered beneficial for fast responses to stimuli, long timescales facilitate information storage and integration. We quantified intrinsic timescales from spiking activity in the medial temporal lobe of humans. We found extended and highly diverse timescales ranging from tens to hundreds of milliseconds, though with no evidence for differences between subareas. Notably, however, timescales differed between sleep stages and were longest during slow-wave sleep. This supports the hypothesis that intrinsic timescales are a central mechanism for tuning networks to the requirements of different tasks and cognitive states.