Memory-based decisions are often accompanied by an assessment of choice certainty, but the mechanisms of such confidence judgments remain unknown. We studied the responses of 1065 individual neurons in the human hippocampus and amygdala while neurosurgical patients made memory retrieval decisions together with a confidence judgment. Combining behavioral, neuronal, and computational analyses, we identified a population of memory-selective (MS) neurons whose activity signaled stimulus familiarity and confidence as assessed by subjective report. In contrast, the activity of visually selective (VS) neurons was not sensitive to memory strength. The two groups further differed in response latency, tuning, and extracellular waveforms. The information provided by MS neurons was sufficient for a race model to decide stimulus familiarity and retrieval confidence. Together, these results demonstrate a trial-by-trial relationship between the activity of a specific group of neurons and declared memory strength in humans. We suggest that VS and MS neurons are a substrate for declarative memories.
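The abstract does not specify the race model's parameters or readout; purely as a minimal sketch of the general idea, two accumulators (one per hypothetical pool of MS neurons signaling "novel" vs. "familiar") can integrate spike counts until one reaches a bound, with the margin between them serving as a simple confidence proxy. All rates, thresholds, and the confidence readout below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def race_model(new_rate, old_rate, threshold=20.0, dt=1e-3, t_max=2.0):
    """Race between two accumulators integrating Poisson spike counts.

    new_rate / old_rate: hypothetical firing rates (Hz) of neuron pools
    signaling 'novel' vs. 'familiar'. The first accumulator to reach
    `threshold` determines the choice; the margin at decision time is a
    simple proxy for confidence. All parameters are illustrative.
    """
    acc = np.zeros(2)  # [novel evidence, familiar evidence]
    t = 0.0
    while t < t_max:
        acc[0] += rng.poisson(new_rate * dt)
        acc[1] += rng.poisson(old_rate * dt)
        if acc.max() >= threshold:
            break
        t += dt
    choice = "familiar" if acc[1] > acc[0] else "novel"
    confidence = abs(acc[1] - acc[0]) / threshold  # margin-based proxy
    return choice, confidence, t
```

With a strongly "familiar"-driven input (e.g. `old_rate=40`, `new_rate=5`), the familiar accumulator wins quickly with a large margin; more balanced rates yield slower, lower-confidence decisions, mirroring the qualitative behavior of race models of choice and confidence.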
Highlights
- A neural decoder tracks speech processing in a cocktail-party paradigm during sleep
- Speech is encoded in cortical activity during rapid eye movement (REM) sleep
- Informative speech is selectively processed over meaningless speech during REM sleep
- Informative speech is, on the contrary, selectively suppressed during eye movements within REM
Online speech processing imposes significant computational demands on the listening brain, the underlying mechanisms of which remain poorly understood. Here, we exploit the perceptual “pop-out” phenomenon (i.e., the dramatic improvement in speech intelligibility after receiving information about speech content) to investigate the neurophysiological effects of prior expectations on degraded speech comprehension. We recorded electroencephalography (EEG) and pupillometry from 21 adults while they rated the clarity of noise-vocoded and sine-wave-synthesized sentences. Pop-out was reliably elicited following visual presentation of the corresponding written sentence, but not following incongruent or neutral text. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that improvements in perceptual clarity were mediated by top-down signals that enhanced the quality of cortical speech representations. Spectral analysis further revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta-band power, alpha-band power, and pupil diameter were all increased following the provision of any written sentence information, irrespective of content. Together, these findings reveal distinct profiles of neurophysiological activity that differentiate the content-specific processes associated with degraded speech comprehension from the context-specific processes invoked under adverse listening conditions.
New information can be learned during sleep, but the extent to which we can access this knowledge after awakening is far less understood. Using a novel Associative Transfer Learning paradigm, we show that, after hearing unknown Japanese words paired with sounds evoking their meaning during sleep, participants could identify the images depicting the meaning of the newly acquired Japanese words after awakening (N = 22). Moreover, we demonstrate that this cross-modal generalization is implicit: participants remained unaware of this knowledge. Using electroencephalography, we further show that frontal slow-wave responses to auditory stimuli during sleep predicted memory performance after awakening. This neural signature of memory formation emerged gradually over the course of the sleep phase, highlighting the dynamics of associative learning during sleep. This study provides novel evidence that the formation of new associative memories can be traced back to the dynamics of slow-wave responses to stimuli during sleep, and that this implicitly acquired knowledge transfers into wakefulness and generalizes across sensory modalities.
Online speech processing imposes significant computational demands on the listening brain. Predictive coding provides an elegant account of how this challenge is met through the exploitation of prior knowledge. While such accounts have accrued considerable evidence at the sublexical and word levels, relatively little is known about the predictive mechanisms that support sentence-level processing. Here, we exploit the 'pop-out' phenomenon (i.e., the dramatic improvement in the intelligibility of degraded speech following prior information) to investigate the psychophysiological correlates of sentence comprehension. We recorded electroencephalography and pupillometry from 21 humans (10 females) while they rated the clarity of full sentences that had been degraded via noise-vocoding or sine-wave synthesis. Sentence pop-out was reliably elicited following visual presentation of the corresponding written sentence, despite participants never hearing the undistorted speech. No such effect was observed following incongruent or no written information. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that pop-out is mediated via top-down signals that enhance the precision of cortical speech representations. Spectral analysis revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta- and alpha-band power, as well as pupil diameter, were increased following the provision of any written information. We interpret these findings as evidence of a transition to a state of active listening, in which participants selectively engage attentional and working-memory processes to evaluate the congruence between expected and actual sensory input.
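Envelope reconstruction from low-frequency EEG is commonly implemented as a backward (decoding) model: the acoustic envelope is regressed onto time-lagged EEG channels, typically with ridge regularization, and reconstruction accuracy is scored as the correlation between the reconstructed and true envelopes. The abstract does not specify the exact method, so the following is a sketch under that assumption; the lag count, regularization value, and data shapes are placeholders:

```python
import numpy as np

def reconstruct_envelope(eeg, envelope, lags=16, alpha=1.0):
    """Backward (decoding) model: ridge-regress the speech envelope
    onto time-lagged EEG.

    eeg:      (n_samples, n_channels) low-frequency EEG
    envelope: (n_samples,) acoustic envelope
    lags:     number of sample lags to include per channel
    alpha:    ridge penalty (illustrative value)
    """
    n, c = eeg.shape
    # Design matrix: one block of columns per lag, each a shifted copy
    # of all channels (zero-padded at the start).
    X = np.zeros((n, c * lags))
    for lag in range(lags):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(c * lags), X.T @ envelope)
    recon = X @ w
    # Reconstruction accuracy: Pearson r between reconstruction and truth
    r = np.corrcoef(recon, envelope)[0, 1]
    return recon, r
```

In practice the decoder would be trained and evaluated on separate data (e.g. cross-validated per condition), with pop-out predicted to yield higher held-out reconstruction accuracy; the in-sample fit above is kept minimal for illustration.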