Optical microscopy is one of the most widely used diagnostic methods in scientific, industrial, and biomedical applications. However, while useful for detailed examination of a small number (< 10,000) of microscopic entities, conventional optical microscopy is incapable of statistically relevant screening of large populations (> 100,000,000) with high precision due to its low throughput and limited digital memory size. We present an automated flow-through single-particle optical microscope that overcomes this limitation by performing sensitive blur-free image acquisition and nonstop real-time image-recording and classification of microparticles during high-speed flow. This is made possible by integrating ultrafast optical imaging technology, self-focusing microfluidic technology, optoelectronic communication technology, and information technology. To show the system’s utility, we demonstrate high-throughput image-based screening of budding yeast and rare breast cancer cells in blood with an unprecedented throughput of 100,000 particles/s and a record false positive rate of one in a million.
Multi-channel electrical recording of neural activity in the brain is an increasingly powerful method, revealing new aspects of neural communication, computation, and prosthetics. However, while planar silicon-based CMOS devices in conventional electronics scale rapidly, neural interface devices have not kept pace. Here, we present a new strategy to interface silicon-based chips with three-dimensional microwire arrays, providing the link between rapidly developing electronics and high-density neural interfaces. The system consists of a bundle of microwires mated to large-scale microelectrode arrays, such as camera chips. This system has excellent recording performance, demonstrated via single-unit and local field potential recordings in isolated retina and in the motor cortex or striatum of awake moving mice. The modular design enables a variety of microwire types and sizes to be integrated with different types of pixel arrays, connecting the rapid progress of commercial multiplexing, digitisation and data acquisition hardware together with a three-dimensional neural interface.
A central goal of systems neuroscience is to develop accurate quantitative models of how neural circuits process information. Prevalent models of light response in retinal ganglion cells (RGCs) usually begin with linear filtering over space and time, which reduces the high-dimensional visual stimulus to a simpler and more tractable scalar function of time that in turn determines the model output. Although these pseudo-linear models can accurately replicate RGC responses to stochastic stimuli, it is unclear whether the strong linearity assumption captures the function of the retina in the natural environment. This paper tests how accurately one pseudo-linear model, the generalized linear model (GLM), explains the responses of primate RGCs to naturalistic visual stimuli. Light responses from macaque RGCs were obtained using large-scale multi-electrode recordings, and two major cell types, ON and OFF parasol, were examined. Visual stimuli consisted of images of natural environments with simulated saccadic and fixational eye movements. The GLM accurately reproduced RGC responses to white noise stimuli, as observed previously, but did not generalize to predict RGC responses to naturalistic stimuli. It also failed to capture RGC responses when fitted and tested with naturalistic stimuli alone. Fitted scalar nonlinearities before and after the linear filtering stage were insufficient to correct the failures. These findings suggest that retinal signaling under natural conditions cannot be captured by models that begin with linear filtering, and emphasize the importance of additional spatial nonlinearities, gain control, and/or peripheral effects in the first stage of visual processing.
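The pseudo-linear GLM structure described in this abstract (linear spatiotemporal filtering to a scalar drive, a pointwise nonlinearity, then stochastic spiking) can be sketched in a minimal hypothetical form. The filter values, exponential nonlinearity, and bin size below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_glm(stimulus, linear_filter, bias, dt=1e-3):
    """Minimal GLM sketch: project the high-dimensional stimulus through a
    linear filter to a scalar generator signal, apply an exponential
    nonlinearity to get a firing rate, and draw Poisson spike counts."""
    drive = stimulus @ linear_filter + bias   # (T,) scalar function of time
    rate = np.exp(drive)                      # instantaneous rate (spikes/s)
    return rng.poisson(rate * dt)             # spike count per time bin

# Hypothetical white-noise stimulus and receptive-field filter.
T, D = 1000, 50
stim = rng.standard_normal((T, D))
w = 0.1 * rng.standard_normal(D)
spikes = simulate_glm(stim, w, bias=3.0)
```

Fitting the model would maximize the Poisson log-likelihood of observed spikes under this rate; the abstract's point is that filters fit this way fail to generalize from white noise to naturalistic stimuli.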
In this article we show how Ehrenfest mean field theory can be made both more accurate and more efficient for treating nonadiabatic quantum dynamics by combining it with the generalized quantum master equation framework. The resulting mean field generalized quantum master equation (MF-GQME) approach is a non-perturbative and non-Markovian theory to treat open quantum systems without any restrictions on the form of the Hamiltonian that it can be applied to. By studying relaxation dynamics in a wide range of dynamical regimes, typical of charge and energy transfer, we show that MF-GQME provides a much higher accuracy than a direct application of mean field theory. In addition, these increases in accuracy are accompanied by computational speedups of between one and two orders of magnitude that become larger as the system becomes more nonadiabatic. This combination of quantum-classical theory and master equation techniques thus makes it possible to obtain the accuracy of much more computationally expensive approaches at a cost lower than even mean field dynamics, providing the ability to treat the quantum dynamics of atomistic condensed phase systems for long times.
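For orientation, the GQME framework referenced here is built on the Nakajima–Zwanzig equation for the reduced density matrix $\sigma(t)$ of the subsystem; this is the schematic textbook form, not the paper's working expressions, with the memory kernel in the MF-GQME approach constructed from short Ehrenfest mean-field trajectories:

```latex
\frac{d\sigma(t)}{dt}
  = -\frac{i}{\hbar}\,\mathcal{L}_s\,\sigma(t)
    - \int_0^{t} d\tau\,\mathcal{K}(\tau)\,\sigma(t-\tau)
```

Here $\mathcal{L}_s$ is the subsystem Liouvillian and $\mathcal{K}(\tau)$ the memory kernel; because $\mathcal{K}$ typically decays much faster than $\sigma$ relaxes, short trajectory runs suffice to parameterize long-time dynamics, which is the source of the reported speedups.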
Responses of sensory neurons are often modeled using a weighted combination of rectified linear subunits. Since these subunits often cannot be measured directly, a flexible method is needed to infer their properties from the responses of downstream neurons. We present a method for maximum likelihood estimation of subunits by soft-clustering spike-triggered stimuli, and demonstrate its effectiveness in visual neurons. For parasol retinal ganglion cells in macaque retina, estimated subunits partitioned the receptive field into compact regions, likely representing aggregated bipolar cell inputs. Joint clustering revealed shared subunits between neighboring cells, producing a parsimonious population model. Closed-loop validation, using stimuli lying in the null space of the linear receptive field, revealed stronger nonlinearities in OFF cells than ON cells. Responses to natural images, jittered to emulate fixational eye movements, were accurately predicted by the subunit model. Finally, the generality of the approach was demonstrated in macaque V1 neurons.
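The soft-clustering idea in this abstract can be illustrated with a deliberately simplified stand-in: assign each spike-triggered stimulus a soft responsibility over candidate subunits, then update each subunit as the responsibility-weighted mean. The initialization, distance-based responsibilities, and synthetic data below are assumptions for illustration, not the paper's maximum likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_cluster_sta(spike_stimuli, n_subunits, n_iter=50, temp=1.0):
    """Simplified soft-clustering sketch for subunit estimation.
    spike_stimuli: (n_spikes, n_dims) array of spike-triggered stimuli."""
    # Farthest-point initialization of subunit centers.
    centers = [spike_stimuli[0]]
    for _ in range(n_subunits - 1):
        d2 = np.min([((spike_stimuli - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(spike_stimuli[d2.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Soft responsibilities: softmax of negative squared distances.
        d2 = ((spike_stimuli[:, None, :] - centers[None]) ** 2).sum(-1)
        logits = -d2 / temp
        logits -= logits.max(axis=1, keepdims=True)
        resp = np.exp(logits)
        resp /= resp.sum(axis=1, keepdims=True)
        # Subunits = responsibility-weighted means of the stimuli.
        centers = (resp.T @ spike_stimuli) / resp.sum(0)[:, None]
    return centers, resp

# Synthetic spike-triggered stimuli drawn from two well-separated sources.
X = np.concatenate([rng.normal(-2.0, 0.3, (100, 2)),
                    rng.normal(2.0, 0.3, (100, 2))])
subunits, resp = soft_cluster_sta(X, n_subunits=2)
```

In the retinal setting, each recovered center would correspond to a compact receptive-field region, interpreted in the paper as aggregated bipolar cell input.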
Spike sorting is a critical first step in extracting neural signals from large-scale multi-electrode array (MEA) data. This manuscript presents several new techniques that make MEA spike sorting more robust and accurate. Our pipeline is based on an efficient multi-stage "triage-then-cluster-then-pursuit" approach that initially extracts only clean, high-quality waveforms from the electrophysiological time series by temporarily skipping noisy or "collided" events (representing two neurons firing synchronously). This is accomplished by developing a neural network detection and denoising method followed by efficient outlier triaging. The denoised spike waveforms are then used to infer the set of spike templates through nonparametric Bayesian clustering. We use a divide-and-conquer strategy to parallelize this clustering step. Finally, we recover collided waveforms with matching-pursuit deconvolution techniques, and perform further split-and-merge steps to estimate additional templates from the pool of recovered waveforms. We apply the new pipeline to data recorded in the primate retina, where high firing rates and highly-overlapping axonal units provide a challenging testbed for the deconvolution approach; in addition, the well-defined mosaic structure of receptive fields in this preparation provides a useful quality check on any spike sorting pipeline. We show that our pipeline improves on the state-of-the-art in spike sorting (and outperforms manual sorting) on both real and semi-simulated MEA data with > 500 electrodes; open source code can be found at https://github.com/paninski-lab/yass.

* Equal contribution authors. ‡ DARPA Neural Engineering System Design program BAA-16-09.

… datastream as efficiently as possible. Finally, scalability must be a key consideration.
To feasibly process the oncoming data deluge, we use parallel, scalable algorithms based on efficient data summarizations wherever possible and focus computational power on the "hard cases," using cheap fast methods to handle easy cases.

To evaluate the resulting pipeline, we focus here on MEA data collected from the primate retina. This preparation is a useful spike sorting testbed for several important reasons. First, the two-dimensional MEA used here matches the approximately two-dimensional substrate of the retinal ganglion layer. Second, receptive fields of well-characterized retinal ganglion cell (RGC) types (e.g., ON parasols, OFF midgets, etc.) are known to approximately tile the visual field, providing useful side information for scoring different spike sorting pipelines. Third, many RGCs have moderately high firing rates and often have significant axonal projections that overlap with each other spatially on the MEA, making it challenging to demix spikes that overlap spatially and temporally from different RGCs.

We will first outline the methodology that forms the core of our pipeline in Section 2.1, then provide details of each module in the following subsections, and finally demonstrate the improvements in performance on 512-electrode primat...
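The matching-pursuit deconvolution stage mentioned above can be illustrated with a toy one-dimensional version: greedily find the template and time shift that best explain the residual, record a spike, subtract the template, and repeat until no match clears a threshold. The single Gaussian template, unit amplitudes, and threshold choice are illustrative assumptions, not the pipeline's actual implementation:

```python
import numpy as np

def matching_pursuit(trace, templates, threshold, max_iters=100):
    """Greedy matching-pursuit sketch on a single-channel trace.
    templates: (n_units, L) array of spike waveforms, assumed unit amplitude."""
    residual = trace.copy()
    spikes = []
    L = templates.shape[1]
    for _ in range(max_iters):
        # Cross-correlate each template with the residual at every offset.
        scores = np.array([np.correlate(residual, t, mode="valid")
                           for t in templates])
        unit, t0 = np.unravel_index(scores.argmax(), scores.shape)
        if scores[unit, t0] < threshold:
            break                               # nothing left to explain
        residual[t0:t0 + L] -= templates[unit]  # peel off the matched spike
        spikes.append((unit, t0))
    return spikes, residual

# Toy trace: one Gaussian-bump template placed at two known times.
tpl = np.exp(-0.5 * ((np.arange(21) - 10) / 2.0) ** 2)
templates = tpl[None, :]
trace = np.zeros(200)
trace[50:71] += tpl
trace[120:141] += tpl
spikes, res = matching_pursuit(trace, templates, threshold=0.5 * (tpl @ tpl))
```

The real problem is harder: multi-channel templates, variable amplitudes, and temporally overlapping ("collided") spikes from different RGCs, which is exactly what the pursuit stage of the pipeline is designed to demix.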
The visual message conveyed by a retinal ganglion cell (RGC) is often summarized by its spatial receptive field, but in principle should also depend on other cells' responses and natural image statistics. To test this idea, linear reconstruction (decoding) of natural images was performed using combinations of responses of four high-density macaque RGC types, revealing consistent visual representations across retinas. Each cell's visual message, defined by the optimal reconstruction filter, reflected natural image statistics, and resembled the receptive field only when nearby, same-type cells were included. Reconstruction from each cell type revealed different and largely independent visual representations, consistent with their distinct properties. Stimulus-independent correlations primarily affected reconstructions from noisy responses. Nonlinear response transformation slightly improved reconstructions with either ON or OFF parasol cells, but not both. Inclusion of ON-OFF interactions enhanced reconstruction by emphasizing oriented edges, consistent with linear-nonlinear encoding models. Spatiotemporal reconstructions revealed similar spatial visual messages.
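Linear reconstruction as described in this abstract amounts to fitting decoding filters W such that images ≈ responses · W; each cell's row of W is then its "visual message." The ridge penalty and synthetic encoding model below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_reconstruction_filters(responses, images, ridge=1e-2):
    """Ridge-regularized least squares: find W minimizing
    ||responses @ W - images||^2 + ridge * ||W||^2.
    Row j of W is cell j's linear reconstruction filter."""
    R = responses
    n_cells = R.shape[1]
    return np.linalg.solve(R.T @ R + ridge * np.eye(n_cells), R.T @ images)

# Synthetic stand-in: images encoded linearly into noisy "RGC" responses.
n_images, n_pixels, n_cells = 500, 30, 12
images = rng.standard_normal((n_images, n_pixels))
encoding = 0.3 * rng.standard_normal((n_pixels, n_cells))
responses = images @ encoding + 0.1 * rng.standard_normal((n_images, n_cells))

W = fit_reconstruction_filters(responses, images)
recon = responses @ W
```

With fewer cells than pixels, reconstruction recovers only the stimulus subspace the population encodes; the abstract's observation is that each cell's filter comes to resemble its receptive field only once nearby same-type cells are included in the fit.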