Speech reorganization. bioRxiv preprint first posted online Jan. 18, 2017; doi: http://dx.doi.org/10.1101/101113. The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. It is made available under a CC-BY-NC-ND 4.0 International license.

How is speech understood despite the lack of a deterministic relationship between the sounds reaching auditory cortex and what we perceive? One possibility is that unheard words, unconsciously activated by the listening context, are used to constrain interpretation. We hypothesized that a mechanism for doing so involves reusing the brain's ability to predict the sensory effects of speaking associated words. These predictions are then compared to signals arriving in auditory cortex, reducing processing demands when they are accurate. Indeed, we show that sensorimotor brain regions are more active prior to words that are predictable from listening context. This activity resembles lexical and speech-production-related processes and, specifically, subsequent but still unpresented words. When those words occur, auditory cortex activity is reduced through feedback connectivity. In less predictive contexts, activity patterns and connectivity for the same words are markedly different. These results suggest that the brain reorganizes to actively use knowledge about context to construct the speech we hear, enabling rapid and accurate comprehension despite acoustic variability.

A long history of lexical priming studies in psychology demonstrates that hearing words activates associated words 1. For example, in a lexical decision experiment, the prime 'pond' results in faster reaction times to the subsequent presentation of 'frogs' than to unrelated words. Whether explained in terms of spreading activation among semantically related words 2 and/or generative prediction 3,4, primes may serve as part of a solution to the problem of how humans so easily perceive speech in the face of acoustic variability. Despite more than 50 years of searching, speech scientists have found no consistent acoustic information that can account for the perceptual constancy of speech sounds 5,6. Primed words might help mitigate this problem by serving as hypotheses with which to test the identity of upcoming speech sounds, thereby constraining the interpretation of indeterminate or ambiguous acoustic patterns as specific categories 6-9. For example, the sentence context 'The pond was full of croaking...' primes 'frogs'. This prime can serve as a hypothesis to test whether there is enough evidence to interpret the following acoustic pattern as an /f/, despite the uniqueness of that particular 'f'.

We propose that the neural implementation of this 'hypothesis-and-test' mechanism involves the 'neural reuse' 10 of processing steps associated with speech production 6,11,12. These steps, 'selection', 'sequencing', 'prediction' and 'comparison', are implemented in a sensorimotor speech production network. When someone wants to speak, a word must first be selected from among competing ...