Understanding spoken language requires transforming ambiguous stimulus streams into a hierarchy of increasingly abstract representations, ranging from speech sounds to meaning. It has been suggested that the brain uses predictive computations to guide the interpretation of incoming information. However, the exact role of prediction in language understanding remains unclear, with widespread disagreement about both the ubiquity of prediction and the level of representation at which predictions unfold. Here, we address both issues by analysing brain recordings of participants listening to audiobooks and using a state-of-the-art deep neural network (GPT-2) to quantify predictions in a fine-grained, contextual fashion. First, we establish clear evidence for predictive processing, confirming that brain responses to words are modulated by probabilistic predictions. Next, we factorise the model-based predictions into distinct linguistic dimensions, revealing dissociable neural signatures of syntactic, phonemic and semantic predictions. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting theories of hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing and demonstrate that linguistic prediction is not implemented by a single system but occurs throughout the language network, forming a hierarchy of linguistic predictions across all levels of analysis.
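To make the prediction measure concrete, the sketch below shows one way fine-grained, contextual predictions can be quantified with GPT-2, assuming the Hugging Face transformers library and PyTorch; the example sentence, the "gpt2" checkpoint and the variable names are illustrative choices, not the authors' exact pipeline. Each token's surprisal (its negative log probability given the preceding context) is the kind of quantity that can then be regressed against brain responses to words.

```python
# Minimal sketch (under stated assumptions, not the paper's exact code):
# quantify per-token predictions from GPT-2 as surprisal values.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The children went outside to play"   # illustrative stimulus sentence
input_ids = tokenizer(text, return_tensors="pt")["input_ids"]

with torch.no_grad():
    logits = model(input_ids).logits          # shape: (1, seq_len, vocab_size)

# Logits at position i predict the token at position i + 1,
# so align predictions with the tokens they target.
log_probs = torch.log_softmax(logits, dim=-1)
target_ids = input_ids[0, 1:]                 # tokens being predicted
token_log_probs = (
    log_probs[0, :-1].gather(1, target_ids.unsqueeze(1)).squeeze(1)
)

# Surprisal = -log p(token | context). Note that GPT-2 operates on
# subword tokens; a word's surprisal is the sum over its subwords.
for tok_id, lp in zip(target_ids, token_log_probs):
    token = tokenizer.decode(tok_id.item())
    print(f"{token!r}: surprisal = {-lp.item():.2f} nats")
```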