Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Modern machine learning tools, which are versatile and easy to use, have the potential to significantly improve decoding performance. This tutorial describes how to effectively apply these algorithms to typical decoding problems. We provide descriptions, best practices, and code for applying common machine learning methods, including neural networks and gradient boosting. We also provide detailed comparisons of the performance of various methods at the task of decoding spiking activity in motor cortex, somatosensory cortex, and hippocampus. Modern methods, particularly neural networks and ensembles, significantly outperform traditional approaches such as Wiener and Kalman filters. Improving the performance of neural decoding algorithms allows neuroscientists to better understand the information contained in a neural population and can help to advance engineering applications such as brain–machine interfaces. Our code package is available at github.com/kordinglab/neural_decoding.
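The contrast the abstract draws between linear filters and modern methods can be sketched in a few lines. This is not the authors' code package; the simulated data, the scikit-learn models, and the lag count are all illustrative stand-ins. A Wiener-filter-style decoder (regularized linear regression over lagged spike counts) is compared with a small feedforward network on the same features:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Simulate binned spike counts from 30 neurons and a 1-D velocity signal,
# both driven by a shared latent variable (a stand-in for real recordings).
T = 2000
latent = np.cumsum(rng.normal(size=T)) * 0.1
velocity = np.tanh(latent)
rates = np.exp(0.5 * latent[:, None] * rng.normal(size=(1, 30)))
spikes = rng.poisson(np.clip(rates, 0, 20))

# Wiener-filter-style decoding: concatenate spike counts from several
# preceding time bins and fit a regularized linear map to velocity.
lags = 5
X = np.hstack([np.roll(spikes, k, axis=0) for k in range(lags)])[lags:]
y = velocity[lags:]
split = int(0.8 * len(y))

linear = Ridge(alpha=1.0).fit(X[:split], y[:split])
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500,
                   random_state=0).fit(X[:split], y[:split])

r2_linear = r2_score(y[split:], linear.predict(X[split:]))
r2_mlp = r2_score(y[split:], mlp.predict(X[split:]))
print(f"Wiener filter R2: {r2_linear:.2f}  MLP R2: {r2_mlp:.2f}")
```

On real data the gap between the two held-out R² values is what the tutorial's comparisons quantify.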
Neuroscience has long focused on finding encoding models that effectively ask "what predicts neural spiking?", and generalized linear models (GLMs) are a typical approach. It is often unknown how much of explainable neural activity is captured, or missed, when fitting a GLM. Here we compared the predictive performance of GLMs to three leading machine learning methods: feedforward neural networks, gradient boosted trees (using XGBoost), and stacked ensembles that combine the predictions of several methods. We predicted spike counts in macaque motor (M1) and somatosensory (S1) cortices from standard representations of reaching kinematics, and in rat hippocampal cells from open-field location and orientation. Of these methods, XGBoost and the ensemble consistently produced more accurate spike rate predictions and were less sensitive to the preprocessing of features. These methods can thus be applied quickly to detect whether feature sets relate to neural activity in a manner not captured by simpler models. Encoding models built with a machine learning approach accurately predict spike rates and can offer meaningful benchmarks for simpler models.
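A minimal sketch of the comparison this abstract describes, with simulated data in place of the M1/S1/hippocampal recordings. The multiplicative interaction in the rate function is an assumed toy nonlinearity, and scikit-learn's `PoissonRegressor` and `GradientBoostingRegressor` stand in for the paper's GLM and XGBoost models:

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Toy kinematic features and a neuron whose firing rate depends on a
# multiplicative interaction that a log-linear GLM cannot represent.
n = 3000
X = rng.normal(size=(n, 2))                    # e.g. position, velocity
rate = np.exp(0.3 + 0.8 * X[:, 0] * X[:, 1])   # interaction term
counts = rng.poisson(np.clip(rate, 0, 50))

split = 2000
glm = PoissonRegressor(alpha=1e-3).fit(X[:split], counts[:split])
gbt = GradientBoostingRegressor(random_state=0).fit(X[:split], counts[:split])

glm_r2 = r2_score(counts[split:], glm.predict(X[split:]))
gbt_r2 = r2_score(counts[split:], gbt.predict(X[split:]))
print(f"GLM R2: {glm_r2:.2f}  gradient boosting R2: {gbt_r2:.2f}")
```

Because the GLM is linear in its inputs (on the log scale), it misses the interaction entirely, while the tree ensemble recovers it; a gap of this kind on real data is the paper's diagnostic for nonlinear structure.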
Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: 1) creating solutions to engineering problems, 2) identifying predictive variables, 3) setting benchmarks for simple models of the brain, and 4) serving itself as a model for the brain. The breadth and ease of its applicability suggest that machine learning should be in the toolbox of most systems neuroscientists.

Figure 1: Growth of Machine Learning in Neuroscience. Here we plot the proportion of neuroscience papers that have used ML over the last two decades; that is, the number of papers involving both neuroscience and machine learning, normalized by the total number of neuroscience papers. Neuroscience papers were identified with a search for "neuroscience" on Semantic Scholar; papers involving both fields were identified with a search for "machine learning" and "neuroscience".

At the highest level, ML is typically divided into the subtypes of supervised, unsupervised, and reinforcement learning. Supervised learning builds a model that predicts outputs from input data. Unsupervised learning is concerned with finding structure in data, e.g. clustering, dimensionality reduction, and compression. Reinforcement learning allows a system to learn the best actions based on the reward that arrives at the end of a sequence of actions. This review focuses on supervised learning.

Why is creating progressively more accurate regression or classification methods (see Box 1) worthy of a title like 'The AI Revolution' (Appenzeller 2017)? It is because countless questions can be framed in this manner. When classifying images, an input picture can be used to predict the object in the picture.
When playing a game, the setup of the board (input) can be used to predict an optimal move (output). When texting on our smartphones, our current text is used to suggest the next word. Similarly, science has many instances where we desire to make predictions from measured data.

Figure 2: Examples of the four roles of supervised machine learning in neuroscience.
1 - ML can solve engineering problems. For example, it can help researchers control a prosthetic limb using brain activity.
2 - ML can identify predictive variables. For example, using MRI data, we can identify which brain regions are most predictive for diagnosing Alzheimer's disease (Lebedev et al. 2014).
3 - ML can benchmark simple models. For example, we can compare the predictive performance of the simple "population vector" model of how neural activity relates to movement (Georgopoulos, Schwartz, and Kettner 1986) to an ML benchmark (e.g. an RNN).
4 - ML can serve as a model of the brain. For example, researchers have studied how neurons in the visual pathway correspond to units in an artificial network trained to classify images...
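Role 3 (benchmarking simple models) can be illustrated with a toy version of the population-vector comparison mentioned in the figure caption. Everything below is simulated and the models are illustrative stand-ins: a neuron is given sharper-than-cosine (von Mises) direction tuning, and the classic cosine-tuning fit is benchmarked against a random forest on the same directional features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)

# Movement directions and a neuron with von Mises (sharper-than-cosine)
# tuning around a preferred direction of 1 radian.
theta = rng.uniform(0, 2 * np.pi, size=4000)
rate = 5 + 20 * np.exp(3 * (np.cos(theta - 1.0) - 1))
counts = rng.poisson(rate)

split = 3000
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Simple model: cosine tuning, i.e. rate ~ b0 + b1*cos(theta) + b2*sin(theta).
cosine = LinearRegression().fit(X[:split], counts[:split])

# ML benchmark: a random forest on the same directional features.
forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X[:split], counts[:split])

r2_cosine = r2_score(counts[split:], cosine.predict(X[split:]))
r2_forest = r2_score(counts[split:], forest.predict(X[split:]))
print(f"cosine model R2: {r2_cosine:.2f}  forest benchmark R2: {r2_forest:.2f}")
```

The gap between the two held-out R² values estimates how much predictable structure the simple model leaves on the table, which is exactly the role a benchmark plays.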
To understand activity in the visual cortex, researchers typically investigate how parametric changes in stimuli affect neural activity. A fundamental tenet of this approach is that the response properties of neurons in one context, e.g. color stimuli, are representative of responses in other contexts, e.g. natural scenes. This assumption is not often tested. Here, for neurons in macaque area V4, we first estimated tuning curves for hue by presenting artificial stimuli of varying hue, and then tested whether these would correlate with hue tuning curves estimated from responses to natural images. We found that neurons' hue tuning on artificial stimuli was not representative of their hue tuning on natural images, even if the neurons were strongly color-responsive. One explanation of this result is that neurons in V4 respond to interactions between hue and other visual features. This finding exemplifies how tuning curves estimated by varying a small number of stimulus features can communicate a small and potentially unrepresentative slice of the neural response function.
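The abstract's proposed explanation can be made concrete with a toy simulation. The `response` function and the "texture" feature below are invented for illustration, not taken from the paper: if a model neuron responds to an interaction between hue and a second feature, the hue tuning curve measured with that feature held fixed (artificial stimuli) need not match the curve measured when it varies (natural images):

```python
import numpy as np

rng = np.random.default_rng(3)

# A model neuron whose response depends on an interaction between hue and
# a second visual feature ("texture"), as hypothesized in the abstract.
def response(hue, texture):
    return np.exp(np.cos(hue) * texture)

hues = np.linspace(0, 2 * np.pi, 16, endpoint=False)

# Artificial stimuli: hue varies while texture is held at a fixed value.
tuning_artificial = response(hues, texture=1.0)

# "Natural images": each hue happens to appear with a different texture,
# so the measured hue tuning mixes the two features.
textures = rng.uniform(-2, 2, size=hues.size)
tuning_natural = response(hues, textures)

r = np.corrcoef(tuning_artificial, tuning_natural)[0, 1]
print(f"correlation between the two hue tuning curves: {r:.2f}")
```

With an interaction present, the two estimated tuning curves can decorrelate even though the neuron's response function never changed, mirroring the paper's interpretation.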
Acoustic properties of the fluorinated copolymer Kel F-800 were determined with Brillouin spectroscopy up to pressures of 85 GPa at 300 K. This research addresses outstanding issues in high-pressure polymer behavior: to date, the acoustic properties and equation of state had not been determined for any polymer above 20 GPa. We observed both longitudinal and transverse modes in all pressure domains, allowing us to calculate the C₁₁ and C₁₂ moduli; the bulk, shear, and Young's moduli; and the density of Kel F-800 as a function of pressure. The behavior of the polymer with respect to all of these parameters changes drastically with pressure, and the data are best understood when split into two pressure regimes. At low pressures (below ∼5 GPa), analysis of the room-temperature isotherm with a semi-empirical equation of state yielded a zero-pressure bulk modulus K₀ of 12.8 ± 0.8 GPa and pressure derivative K₀′ of 9.6 ± 0.7. The same analysis for the higher-pressure data yielded K₀ = 34.9 ± 1.7 GPa and K₀′ = 5.1 ± 0.1. We discuss this significant difference in behavior with reference to the concept of effective free-volume collapse.
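The abstract does not name the semi-empirical equation of state used; the third-order Birch–Murnaghan form is one common choice in high-pressure work, and the sketch below assumes that form purely for illustration, evaluated with the reported low-pressure parameters:

```python
import numpy as np

def birch_murnaghan_3(v_ratio, k0, k0_prime):
    """Third-order Birch-Murnaghan pressure (GPa) at compression V/V0."""
    eta = v_ratio ** (-1.0 / 3.0)
    return (1.5 * k0 * (eta**7 - eta**5)
            * (1 + 0.75 * (k0_prime - 4) * (eta**2 - 1)))

# Low-pressure fit reported in the abstract: K0 = 12.8 GPa, K0' = 9.6.
for v in (1.0, 0.9, 0.8):
    print(f"V/V0 = {v:.1f}: P = {birch_murnaghan_3(v, 12.8, 9.6):.1f} GPa")
```

By construction the pressure vanishes at zero compression (V/V0 = 1) and rises steeply as the sample is compressed, with K₀′ controlling how fast the material stiffens.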