The distributional patterns of words in language form the basis of linguistic distributional knowledge and contribute to conceptual processing across cognition. While corpus-based linguistic distributional models (LDMs) can capture human performance in many cognitive tasks, questions remain regarding the nature and role of linguistic distributional knowledge in cognition. We propose that LDMs can be a cognitively plausible approach to modelling linguistic distributional knowledge when they are assumed to represent an essential component of semantics that is grounded in a complementary sensorimotor component, when they are trained on corpora that are representative of human language experience, and when they capture syntagmatic, paradigmatic, and bag-of-words semantic relations that are useful to cognition. Using an extensive set of cognitive tasks that vary in their conceptual processing demands and response measures, we systematically evaluate a wide range of model families (predict vector, count vector, n-gram), corpora varying in size and quality, and parameter settings. Our findings demonstrate that there is no one-size-fits-all approach to how linguistic distributional knowledge is used across cognition: its use depends on the conceptual complexity of the task at hand. Conceptually simple tasks that rely on paradigmatic relations are relatively easy to model even with poor-quality language experience, whereas conceptually complex tasks that involve sophisticated processing of multiple relations (paradigmatic, syntagmatic, and bag-of-words) require a diverse set of task-specific models and high-quality language experience. Linguistic distributional knowledge is a rich source of information about the world that can be accessed flexibly according to cognitive need. Online materials are available at https://osf.io/uj92m/.
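To make the count-vector family concrete, the following is a minimal illustrative sketch, not the implementation evaluated in the paper: it builds word-by-word co-occurrence count vectors from a symmetric context window and compares words by cosine similarity. The toy corpus, window size, and all function names are invented for this example.

```python
# Minimal count-vector LDM sketch (illustration only; hypothetical names,
# toy corpus). Words that occur in similar contexts (a paradigmatic
# relation, e.g. "cat"/"dog") receive similar co-occurrence vectors.
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(tokens, window=2):
    """Map each word to a sparse co-occurrence count vector (a dict)."""
    vectors = defaultdict(lambda: defaultdict(int))
    for i, word in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(count * v.get(word, 0) for word, count in u.items())
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

corpus = "the cat sat on the mat the dog sat on the rug".split()
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts, so their similarity exceeds that of
# "cat" and "on", which merely co-occur:
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["on"]))
```

Predict-vector models (e.g. skip-gram) instead learn dense vectors by predicting context words, and n-gram models score word sequences directly; the paper compares all three families across tasks.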