2015
DOI: 10.3389/fncom.2015.00001
State-dependencies of learning across brain scales

Abstract: Learning is a complex brain function operating on different time scales, from milliseconds to years, which induces enduring changes in brain dynamics. The brain also undergoes continuous “spontaneous” shifts in states, which, amongst others, are characterized by rhythmic activity of various frequencies. Besides the most obvious distinct modes of waking and sleep, wake-associated brain states comprise modulations of vigilance and attention. Recent findings show that certain brain states, particularly during sle…

Cited by 50 publications (63 citation statements)
References 285 publications (372 reference statements)
“…One approach to address this problem is to use large numbers of naturalistic images and model multiple dimensions simultaneously. For example, a comprehensive analysis [125; Figure 3B] revealed that the dimensions of spatial frequency, subjective distance and object category all explained variance in scene-selective regions. However, most of the variance explained was shared across the models suggesting that, for example, the apparent sensitivity to scene category could just as easily be interpreted as reflecting differences in spatial frequency.…”
Section: The Neural Mechanisms Of Scene Understanding (mentioning)
confidence: 99%
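To make the shared-variance issue in the quotation above concrete, here is a minimal variance-partitioning sketch in Python. It is not taken from the cited study; the descriptor names and all values are illustrative assumptions. The idea is to fit a regression with all candidate feature models, then drop each one in turn and compare R².

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical, correlated scene descriptors (names are illustrative only).
latent = rng.normal(size=n)
spatial_freq = latent + 0.3 * rng.normal(size=n)
distance     = latent + 0.3 * rng.normal(size=n)
category     = latent + 0.3 * rng.normal(size=n)

# Simulated response driven mostly by the shared latent factor.
response = latent + 0.5 * rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit (intercept included)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

predictors = {"spatial_freq": spatial_freq, "distance": distance, "category": category}
X_full = np.column_stack(list(predictors.values()))
full = r_squared(X_full, response)

# Unique contribution of each predictor = full R^2 minus R^2 without it.
for i, name in enumerate(predictors):
    reduced = r_squared(np.delete(X_full, i, axis=1), response)
    print(f"unique R^2 of {name}: {full - reduced:.3f}")
print(f"full R^2 (mostly shared variance): {full:.3f}")
```

When the descriptors are strongly correlated, the unique R² values come out near zero even though the full model fits well, which is exactly the interpretive ambiguity the quotation describes.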
“…Note that scenes may differ from one another at multiple levels; for example, the beach scene can be distinguished from the park and living room based on virtually all dimensions, whereas the park and living room image share some but not all properties. Due to the inherent correlations between scene features, assessing their individual contributions to scene representations is challenging [125]. …”
Section: Figure (mentioning)
confidence: 99%
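As a small companion sketch, again with purely illustrative, simulated descriptors, one can check how entangled candidate scene features are before attributing variance to any one of them:

```python
import numpy as np

rng = np.random.default_rng(1)
shared = rng.normal(size=200)

# Simulated descriptor vectors for a set of scene images (illustrative only);
# each one mixes a shared component with its own noise.
features = {
    "spatial_freq": shared + 0.4 * rng.normal(size=200),
    "distance":     shared + 0.4 * rng.normal(size=200),
    "category":     shared + 0.4 * rng.normal(size=200),
}

# Large off-diagonal values warn that the descriptors cannot be
# disentangled by simply comparing their individual fits.
corr = np.corrcoef(np.vstack(list(features.values())))
for name, row in zip(features, corr):
    print(name, np.round(row, 2))
```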
“…Recent studies of OU processes driving neural models have investigated the effects of coloured noise on temporal distributions of neuronal spiking (Braun et al, 2015, da Silva and Vilela, 2015) and the generation of multimodal patterns of alpha activity (Freyer et al, 2011). In addition, networks of spiking neurons (Sancristóbal et al, 2013) and of neuronal populations (Jedynak et al, 2015) have been shown to generate realistic 1/f^b-like spectra when driven by OU noise, or more complex dynamics when subjected to driving at specific frequencies (Spiegler et al, 2011, Malagarriga et al, 2015). However, we lack an understanding of the ways in which non-white noise or rhythmic perturbations interact with neuronal populations to produce epileptiform dynamics.…”
Section: Introduction (mentioning)
confidence: 99%
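For readers unfamiliar with the coloured-noise drive mentioned in the quotation above, a minimal Ornstein-Uhlenbeck (OU) simulation is sketched below; the time constants and amplitudes are arbitrary illustrative choices, not parameters from any of the cited models.

```python
import numpy as np

# Minimal Ornstein-Uhlenbeck (coloured-noise) simulation.
dt, T = 1e-3, 100.0          # time step (s), total duration (s)
tau, sigma = 0.02, 1.0       # correlation time (s), stationary std. dev.
n = int(T / dt)

rng = np.random.default_rng(2)
x = np.zeros(n)
for t in range(1, n):
    # Euler-Maruyama update of dx = -(x / tau) dt + sigma * sqrt(2 / tau) dW
    x[t] = x[t - 1] - (x[t - 1] / tau) * dt \
           + sigma * np.sqrt(2.0 * dt / tau) * rng.normal()

# The power spectrum is Lorentzian: flat at low frequencies and ~1/f^2
# above the corner frequency 1 / (2 * pi * tau).
freqs = np.fft.rfftfreq(n, d=dt)
psd = np.abs(np.fft.rfft(x)) ** 2 * dt / n
corner = 1.0 / (2.0 * np.pi * tau)
low = psd[(freqs > 0) & (freqs < corner / 3)].mean()
high = psd[freqs > 3 * corner].mean()
print(f"corner ~ {corner:.1f} Hz, low/high power ratio ~ {low / high:.0f}")
```

The resulting spectrum is flat below roughly 1/(2*pi*tau) and falls off as 1/f² above it, which is why OU input is a convenient stand-in for temporally correlated, non-white background drive.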
“…It should be noted that the various entropy methods that are very popular in EEG time series analysis [1, 6, 7] like Sample Entropy (SampEn) [8], Lempel–Ziv complexity [9], auto-mutual information, Shannon’s entropy, and other approximate entropies are most similar according to Fulcher et al [5] to the approximate entropy algorithm (ApEn) proposed by Pincus [10]. Unfortunately, current entropy measures are mostly unable to quantify the complexity of any underlying structure in the series, as well as determine if the variation arises from a random process [11].…”
Section: Introduction (mentioning)
confidence: 99%
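As an illustration of the entropy measures listed in the quotation, here is a naive Sample Entropy (SampEn) sketch; it follows the usual template-matching definition but is a simplified variant written for clarity, not the reference implementation from any of the cited papers.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Naive O(N^2) Sample Entropy for short EEG-like series;
    r is the tolerance as a fraction of the signal's std. dev."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(length):
        # Build overlapping templates of the given length.
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all templates, self-matches excluded.
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d <= tol) - 1
        return count

    B = count_matches(m)      # matches of length m
    A = count_matches(m + 1)  # matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# A regular sine wave should give a much lower SampEn than white noise.
rng = np.random.default_rng(3)
print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))))
print(sample_entropy(rng.normal(size=1000)))
```

The regular signal scores markedly lower than the noise, which is what these measures capture; and, as the quotation notes, such values alone do not say whether the observed variability comes from a structured or a purely random process.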