With the rapid development of deep learning, computer vision, natural language processing, and related technologies, Visual Question Answering (VQA) has gradually become an important research direction in the multimodal field. VQA has a wide range of application scenarios, such as multimodal search, medical consultation, and intelligent driving, and has become a research hotspot among scholars. Based on the classical CLEVR dataset, this paper builds different VQA prediction models by combining a BiLSTM question encoder with different deep learning image models, including MobileNet, VGG, and ResNet. The results show that the hybrid ResNet50 and BiLSTM model achieves the highest prediction accuracy, reaching 0.978, which is 1.3%, 2.5%, and 2.7% higher than the MobileNet, VGG19, and VGG16 models, respectively. This indicates that the hybrid ResNet50 and BiLSTM model is more effective for VQA tasks. Finally, some problems and deficiencies of our model are analyzed, and directions for future improvement are given.
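
For concreteness, the following is a minimal sketch of what such a ResNet50 + BiLSTM hybrid could look like in PyTorch. The layer sizes, the concatenation-based fusion scheme, and the answer-vocabulary size are illustrative assumptions, not details specified in this paper.

import torch
import torch.nn as nn
from torchvision import models

class ResNetBiLSTMVQA(nn.Module):
    """Sketch of a hybrid VQA model: ResNet50 image encoder + BiLSTM question encoder."""
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512, num_answers=28):
        # num_answers=28 assumes the standard CLEVR answer vocabulary.
        super().__init__()
        # Image branch: pretrained ResNet50 with its classifier head removed,
        # producing a 2048-d global feature vector per image.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])
        # Question branch: word embeddings fed to a bidirectional LSTM;
        # the final forward and backward hidden states are concatenated.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Fusion: concatenate the two modalities, then classify over answers.
        self.classifier = nn.Sequential(
            nn.Linear(2048 + 2 * hidden_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_answers),
        )

    def forward(self, images, questions):
        img_feat = self.cnn(images).flatten(1)        # (B, 2048)
        emb = self.embed(questions)                   # (B, T, embed_dim)
        _, (h_n, _) = self.bilstm(emb)                # h_n: (2, B, hidden_dim)
        q_feat = torch.cat([h_n[0], h_n[1]], dim=1)   # (B, 2 * hidden_dim)
        fused = torch.cat([img_feat, q_feat], dim=1)
        return self.classifier(fused)                 # answer logits

Swapping the image branch for MobileNet, VGG16, or VGG19 (with the fusion layer resized to match the corresponding feature dimension) would yield the other model variants compared in this paper.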