With the rapid development of deep learning, computer vision, natural language processing, and related technologies, Visual Question Answering (VQA) has gradually become an important research direction in the multimodal field. VQA has a wide range of application scenarios, such as multimodal search, medical consultation, and intelligent driving, and has become a research hotspot for scholars. Based on the classical CLEVR dataset, this paper builds several VQA prediction models by combining a BiLSTM question encoder with different deep learning image encoders, including MobileNet, VGG, and ResNet. The results show that the hybrid model of ResNet and BiLSTM achieves the highest prediction accuracy, reaching 0.978, which is 1.3%, 2.5%, and 2.7% higher than the MobileNet, VGG19, and VGG16 models, respectively. This indicates that the hybrid model of ResNet50 and BiLSTM is more effective for VQA tasks. Finally, remaining problems and deficiencies of the model are analyzed, and directions for future improvement are given.
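The hybrid architecture described above pairs a CNN image encoder with a BiLSTM question encoder and fuses the two representations before answer classification. Below is a minimal, hypothetical PyTorch sketch of this design; the class name, layer sizes, toy stand-in CNN, and element-wise product fusion are all assumptions for illustration, not the paper's actual implementation (which would use a pretrained ResNet50, e.g. from torchvision, in place of the toy CNN).

```python
import torch
import torch.nn as nn

class ResNetBiLSTMVQA(nn.Module):
    """Hypothetical sketch of a CNN + BiLSTM VQA classifier."""

    def __init__(self, vocab_size=100, embed_dim=128, hidden_dim=256, num_answers=28):
        super().__init__()
        # Image encoder: a tiny stand-in CNN. The paper's model would use a
        # pretrained ResNet50 backbone with its final fc layer replaced.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, hidden_dim),
        )
        # Question encoder: word embeddings fed to a bidirectional LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2,
                              batch_first=True, bidirectional=True)
        # Fusion + answer classifier. Element-wise product fusion is one
        # common choice; the abstract does not specify the fusion operator.
        self.classifier = nn.Linear(hidden_dim, num_answers)

    def forward(self, image, question_tokens):
        img_feat = self.cnn(image)                # (B, hidden_dim)
        emb = self.embed(question_tokens)         # (B, T, embed_dim)
        _, (h, _) = self.bilstm(emb)              # h: (2, B, hidden_dim // 2)
        q_feat = torch.cat([h[0], h[1]], dim=1)   # (B, hidden_dim)
        fused = img_feat * q_feat                 # element-wise fusion
        return self.classifier(fused)             # (B, num_answers) logits
```

In a CLEVR setting, `num_answers` would be the size of the fixed answer vocabulary, and the model would be trained with cross-entropy loss over answer classes.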