Modern machine learning outperforms GLMs at predicting spikes
Preprint, 2017
DOI: 10.1101/111450

Abstract: Neuroscience has long focused on finding encoding models that effectively ask "what predicts neural spiking?" and generalized linear models (GLMs) are a typical approach. It is often unknown how much of explainable neural activity is captured, or missed, when fitting a GLM. Here we compared the predictive performance of GLMs to three leading machine learning methods: feedforward neural networks, gradient boosted trees (using XGBoost), and stacked ensembles that combine the predictions of several …

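The comparison the abstract describes can be illustrated with a minimal sketch, assuming simulated spike counts and a deviance-based pseudo-R² as the score; the covariates, hyperparameters, and scoring function below are illustrative stand-ins, not the authors' pipeline.

# Illustrative sketch (not the authors' exact pipeline): compare a Poisson GLM
# with gradient boosted trees at predicting binned spike counts.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))          # stand-in covariates (e.g. stimulus or kinematic features)
rate = np.exp(0.3 * X[:, 0] - 0.2 * X[:, 1] ** 2)
y = rng.poisson(rate)                    # simulated spike counts per time bin

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

glm = PoissonRegressor(alpha=1e-3).fit(X_tr, y_tr)
xgb = XGBRegressor(objective="count:poisson", n_estimators=200,
                   max_depth=3, learning_rate=0.1).fit(X_tr, y_tr)

def pseudo_r2(y_true, mu):
    # Deviance-based pseudo-R^2 from Poisson log-likelihoods,
    # comparing the model to a mean-rate null and a saturated model.
    eps = 1e-9
    ll = np.sum(y_true * np.log(mu + eps) - mu)
    mu0 = np.full_like(mu, y_true.mean())
    ll0 = np.sum(y_true * np.log(mu0 + eps) - mu0)
    ll_sat = np.sum(y_true * np.log(y_true + eps) - y_true)
    return (ll - ll0) / (ll_sat - ll0)

print("GLM pseudo-R2:", pseudo_r2(y_te, glm.predict(X_te)))
print("XGB pseudo-R2:", pseudo_r2(y_te, np.clip(xgb.predict(X_te), 1e-9, None)))

In a comparison of this kind, the GLM is restricted to the exponential of a linear combination of the features, while the boosted trees can capture interactions and nonlinearities without hand-designed basis functions.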

Cited by 16 publications (19 citation statements)
References 38 publications
“…Using a supervised classifier, so-called gradient boosting [Benjamin et al, 2017, Truccolo and Donoghue, 2007], we show how this method can determine an encoding model for predicting population spike trains knowing the stimulus input. We also show, in line with recently published work [Benjamin et al, 2017], how gradient boosted trees (XGB) can also be used as a very efficient decoding model that retrieves the stimulus likelihood knowing the spiking activity of a population of neurons. Finally, we demonstrate how it generates a very accurate encoding model for predicting a population spike train conditioned on another, anatomically projected, set of neuronal activity [Harris et al, 2003].…”
Section: Introduction (supporting)
Confidence: 84%
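A brief, hedged sketch of the decoding use described in this excerpt: gradient boosted trees recovering a discrete stimulus label from population spike counts. The simulated tuning curves, trial counts, and hyperparameters are placeholders rather than values from either cited study.

# Hedged sketch: decode a discrete stimulus label from population spike counts
# with gradient boosted trees (simulated data, not from the cited studies).
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
n_trials, n_neurons, n_stimuli = 2000, 50, 4
stim = rng.integers(0, n_stimuli, size=n_trials)             # stimulus label per trial
tuning = rng.uniform(0.5, 3.0, size=(n_stimuli, n_neurons))  # per-stimulus mean firing rates
counts = rng.poisson(tuning[stim])                           # population spike counts per trial

X_tr, X_te, y_tr, y_te = train_test_split(counts, stim, test_size=0.2, random_state=1)

decoder = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
decoder.fit(X_tr, y_tr)
print("decoding accuracy:", (decoder.predict(X_te) == y_te).mean())

For the likelihood-style readout mentioned in the excerpt, decoder.predict_proba(X_te) returns a probability over stimuli for each held-out trial.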
“…where T is the total number of leaves and w_j the score of leaf j. γ and λ are two free parameters weighting the contribution of the two previous items in the objective function. For the sake of comparison with a related study, Benjamin et al [2017], we used the same values: γ = 0.4 and λ = 0.0. However, in the following section detailing the methods, we keep these two parameters as variables.…”
Section: Gradient Boosted Trees (mentioning)
Confidence: 99%
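For context, the quantities named in this excerpt match the regularization term of the standard XGBoost objective; assuming the citing paper uses the stock formulation, the term being weighted is

\Omega(f) = \gamma T + \tfrac{1}{2} \lambda \sum_{j=1}^{T} w_j^{2}

so with the quoted γ = 0.4 and λ = 0.0 only the per-leaf penalty γT is active, and tree complexity is controlled through the number of leaves rather than by shrinking the leaf scores.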
“…This lack of ground-truth data when performing data analysis is particularly unavoidable in neuroscience [25]. It has thus become necessary to establish standard, model-free methods that, even if they do not contribute to our understanding of the data, set levels of performance that may be used to benchmark model-based approaches [26, 27]. Machine learning provides a large array of classification techniques that have demonstrated high levels of performance in fields ranging from image processing to astrophysics [28].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Pyglmnet has already been used in published work (Benjamin et al, 2017; Bertrán et al, 2018; Höfling, Berens, & Zeck, 2019; Rybakken, Baas, & Dunn, 2019). It contains unit tests and includes documentation in the form of tutorials, docstrings and examples that are run through continuous integration.…”
Section: Pyglmnet Is Unit-tested and Documented With Examples (mentioning)
Confidence: 99%
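Since this excerpt concerns pyglmnet's documented GLM interface, a minimal usage sketch follows; the simulated design matrix and regularization settings are illustrative choices, not values taken from any of the cited studies.

# Hedged sketch: fit an elastic-net-regularized Poisson GLM with pyglmnet
# (simulated data; hyperparameters are illustrative).
import numpy as np
from pyglmnet import GLM

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 20))            # design matrix, e.g. stimulus features
beta = np.zeros(20)
beta[:3] = [0.4, -0.3, 0.2]                # a few informative coefficients
y = rng.poisson(np.exp(X @ beta))          # simulated Poisson spike counts

glm = GLM(distr="poisson", alpha=0.5, reg_lambda=0.01)  # elastic-net mix and penalty strength
glm.fit(X, y)
mu = glm.predict(X)                        # predicted mean spike count per bin
print("nonzero coefficients:", np.flatnonzero(glm.beta_))

After fitting, glm.beta0_ and glm.beta_ hold the intercept and coefficients, which are the quantities a GLM-based encoding analysis typically inspects.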