2011
DOI: 10.1016/j.neuron.2011.04.030

Contrast Gain Control in Auditory Cortex

Abstract (Summary): The auditory system must represent sounds with a wide range of statistical properties. One important property is the spectrotemporal contrast in the acoustic environment: the variation in sound pressure in each frequency band, relative to the mean pressure. We show that neurons in ferret auditory cortex rescale their gain to partially compensate for the spectrotemporal contrast of recent stimulation. When contrast is low, neurons increase their gain, becoming more sensitive to small changes in the stimu…
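The gain rescaling described in the abstract can be illustrated with a toy model. Everything below (the inverse-contrast gain rule and its parameters `k` and `c_half`) is a hypothetical sketch for intuition, not the paper's fitted model:

```python
import numpy as np

def gain_from_contrast(contrast, k=1.0, c_half=0.2):
    # Illustrative rule (not the paper's fit): gain falls as recent
    # spectrotemporal contrast rises, so low-contrast stimulation
    # leaves the neuron with higher gain.
    return k / (c_half + contrast)

def response(stimulus_increment, recent_contrast):
    # Multiplicative gain applied to a stimulus-driven input.
    return gain_from_contrast(recent_contrast) * stimulus_increment

# The same small stimulus increment produces a larger response change
# after low-contrast stimulation than after high-contrast stimulation.
delta = 0.1
resp_low = response(delta, recent_contrast=0.1)   # high gain
resp_high = response(delta, recent_contrast=0.8)  # low gain
```

With these made-up numbers, `resp_low` exceeds `resp_high`, capturing the abstract's point that sensitivity to small stimulus changes increases when contrast is low.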

Cited by 254 publications (352 citation statements)
References 47 publications
“…Instead, we found that a dynamic nonlinear model is necessary, accounting for both feed-forward, subtractive synaptic depression (21,27) and feedback, multiplicative gain normalization (14,15,28). Although the synaptic depression model alone can account partly for the suppression of additive noise, the combined depression/gain control model is necessary to replicate the neural data in more complex distortions such as reverberation.…”
Section: Discussion (mentioning)
Confidence: 90%
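The combined model this statement describes (feed-forward synaptic depression followed by feedback, multiplicative gain normalization) can be sketched in simplified form. The equations below are illustrative stand-ins, not the fitted model from the cited work; in particular, the depression stage here is written in the standard resource-depletion form rather than the subtractive variant the statement names:

```python
import numpy as np

def depression_gain_model(x, tau_d=0.05, u=0.5, dt=0.001,
                          tau_g=0.1, sigma=0.1):
    """Toy depression + divisive gain cascade (illustrative only).

    A resource variable d depletes with input and recovers with time
    constant tau_d (feed-forward depression); the depressed drive is
    then divided by a running estimate of its own magnitude
    (feedback, multiplicative gain normalization).
    """
    d = 1.0   # available synaptic resource
    g = 0.0   # running magnitude estimate used for normalization
    out = np.zeros_like(x, dtype=float)
    for t, xt in enumerate(x):
        drive = d * xt                              # depressed drive
        d += dt * ((1.0 - d) / tau_d - u * d * max(xt, 0.0))
        d = min(max(d, 0.0), 1.0)                   # keep resource in [0, 1]
        g += dt * (abs(drive) - g) / tau_g          # feedback estimate
        out[t] = drive / (sigma + g)                # divisive normalization
    return out
```

Run on a sustained constant input, this toy cascade produces the expected signature: a large initial response that adapts downward as depression depletes the drive and the normalization signal builds up.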
“…Encoding models derived in clean signal conditions may not be sufficient to account for the encoding of spectrotemporally rich signals in complex noisy environments. A previous study of A1 that focused only on the representation of clean speech did not identify a functional role for divisive normalization (21), but studies with other noise stimuli have suggested that it might play a role (28). More generally, the relevance of environmental noise is well established in studies of neurophysiology (4), psychoacoustics (16), and automatic speech processing (29).…”
Section: Discussion (mentioning)
Confidence: 99%
“…Results are reported as variance explained; this denotes the difference between the variance of the empirical distribution of CSD events and the variance of the residuals obtained after subtracting the predicted distribution. We then improved these linear models by extending them to linear-nonlinear models (Chichilnisky, 2001; Simoncelli et al., 2004; Rabinowitz et al., 2011). This captures additional nonlinearities (such as thresholding) in the relationship between stimulus and CSD events, by passing the output of the STRF through a static nonlinearity.…”
Section: Spectrotemporal Receptive Field Estimation (mentioning)
Confidence: 99%
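The linear-nonlinear pipeline this statement describes (a linear STRF stage whose output is passed through a static nonlinearity) can be sketched as follows. The spectrogram, filter weights, and sigmoid parameters are all made-up toy values, not data or fits from any of the cited studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrogram: frequency channels x time bins (made-up data).
n_freq, n_lag, n_time = 20, 15, 500
spec = rng.normal(size=(n_freq, n_time))

# Hypothetical STRF: a frequency x time-lag linear filter.
strf = rng.normal(scale=0.1, size=(n_freq, n_lag))

# Linear stage: each output bin is the dot product of the STRF with
# the preceding n_lag bins of the spectrogram (lag 0 = current bin).
linear_drive = np.zeros(n_time)
for t in range(n_time):
    lo = max(0, t - n_lag + 1)
    window = spec[:, lo:t + 1]
    lags = window.shape[1]
    # Reverse the lag axis so strf[:, 0] lines up with the current bin.
    linear_drive[t] = np.sum(strf[:, :lags][:, ::-1] * window)

# Static output nonlinearity: a sigmoid mapping linear drive to firing
# rate, capturing thresholding and saturation.
def sigmoid(x, gain=2.0, threshold=0.5, r_max=50.0):
    return r_max / (1.0 + np.exp(-gain * (x - threshold)))

rate = sigmoid(linear_drive)
```

Because the nonlinearity is static (the same pointwise function at every time bin), it can be fit after the linear filter, which is what makes this two-stage decomposition practical.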
“…The presence of multiple auditory cortical areas on the ectosylvian gyrus (EG) of this species was first demonstrated by using 2-deoxyglucose autoradiography (Wallace et al., 1997) and subsequently confirmed by using optical imaging of intrinsic signals (Nelken et al., 2004) and single-unit recording (Kelly et al., 1986; Kelly and Judge, 1994; Kowalski et al., 1995; Bizley et al., 2005). Although most electrophysiological recording studies have focused on the primary auditory cortex (A1) (Phillips et al., 1988; Kowalski et al., 1996; Schnupp et al., 2001; Fritz et al., 2003; Rabinowitz et al., 2011; Keating et al., 2013), the nonprimary auditory fields in this species are now receiving increasing attention (Nelken et al., 2008; Bizley et al., 2009, 2010, 2013; Walker et al., 2011; Atiani et al., 2014).…”
(mentioning)
Confidence: 99%