Accurate crop disease classification is crucial for ensuring food security and enhancing agricultural productivity. However, existing crop disease classification algorithms focus primarily on a single image modality and typically require large numbers of labeled samples. Our research addresses these issues with pre-trained Vision-Language Models (VLMs), exploiting multimodal synergy to classify crop diseases more effectively than traditional unimodal approaches. First, we apply the multimodal model Qwen-VL to generate detailed textual descriptions for representative disease images selected by clustering the training set; these descriptions serve as prompt texts for generating classifier weights. Compared with using a language model alone for prompt generation, this approach better captures fine-grained, image-specific information, thereby improving prompt quality. Second, we integrate cross-attention into the training-free mode VLCD (Vision-Language model for Crop Diseases classification) and SE (Squeeze-and-Excitation) attention into the training-required mode VLCD-T (VLCD-Training) for prompt-text processing, strengthening the classifier weights by emphasizing key text features. Experimental results demonstrate the superior classification effectiveness of our method in few-shot crop disease scenarios, addressing both data scarcity and the difficulty of fine-grained disease recognition. The method offers a practical tool for agricultural pathology and strengthens smart-farming monitoring infrastructure.
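To make the pipeline concrete, the following is a minimal sketch of the idea of deriving classifier weights from prompt-text features and re-weighting those features with an SE-style gate before classification. All names (`se_attention`, `build_classifier_weights`, `classify`) and the tiny fixed-weight gate are illustrative assumptions, not the paper's actual architecture or the Qwen-VL/CLIP encoders themselves; synthetic feature vectors stand in for real VLM embeddings.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Normalize vectors to unit length along the given axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def se_attention(feats, w1, w2):
    """SE-style gating over prompt-text features (hypothetical sketch).

    feats: (P, D) array of P prompt-text features for one class.
    Squeeze: average over prompts; Excitation: two small FC layers
    with ReLU then sigmoid; Scale: channel-wise re-weighting.
    """
    s = feats.mean(axis=0)              # squeeze -> (D,)
    z = np.maximum(0.0, s @ w1)         # excitation, hidden layer
    g = 1.0 / (1.0 + np.exp(-(z @ w2))) # sigmoid gate -> (D,)
    return feats * g                    # emphasize key text channels

def build_classifier_weights(text_feats, w1, w2):
    """text_feats: (C, P, D). Returns (C, D) unit-norm classifier weights."""
    per_class = [se_attention(f, w1, w2).mean(axis=0) for f in text_feats]
    return l2norm(np.stack(per_class))

def classify(img_feat, W):
    """Cosine-similarity classification of one image embedding."""
    return int(np.argmax(l2norm(img_feat) @ W.T))
```

In a real system the `(C, P, D)` text features would come from encoding Qwen-VL-generated disease descriptions with the VLM text encoder, and `img_feat` from the image encoder; the gate weights `w1`, `w2` would be learned in the training-required mode.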