We present DocFormer, a multi-modal transformer-based architecture for the task of Visual Document Understanding (VDU). VDU is a challenging problem that aims to understand documents in their varied formats (forms, receipts, etc.) and layouts. In addition, DocFormer is pre-trained in an unsupervised fashion using carefully designed tasks that encourage multi-modal interaction. DocFormer uses text, vision, and spatial features and combines them using a novel multi-modal self-attention layer. DocFormer also shares learned spatial embeddings across modalities, which makes it easy for the model to correlate text tokens with visual tokens and vice versa. DocFormer is evaluated on four different datasets, each with strong baselines. It achieves state-of-the-art results on all of them, sometimes beating models 4x its size (in number of parameters).
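To make the idea of spatial embeddings shared across modalities more concrete, here is a minimal PyTorch sketch of a multi-modal self-attention block. All names, dimensions, the quantized position table, and the simple additive fusion of the two streams are illustrative assumptions, not the authors' implementation; the abstract only states that text, vision, and spatial features are combined and that spatial embeddings are shared.

```python
import torch
import torch.nn as nn

class MultiModalSelfAttention(nn.Module):
    """Sketch: text and visual tokens each attend within their own modality,
    but both add the SAME learned spatial embedding for their location, so a
    word and the visual patch at the same position share a positional signal."""

    def __init__(self, dim, num_heads=8, num_positions=1024):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.vis_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # one spatial embedding table shared by both modalities (size is a guess)
        self.spatial_emb = nn.Embedding(num_positions, dim)

    def forward(self, text_feats, vis_feats, box_ids):
        # box_ids: quantized 2-D positions, identical for a text token and the
        # visual features covering the same region (assumed aligned, same length)
        pos = self.spatial_emb(box_ids)          # (B, N, dim)
        t = text_feats + pos                     # text + shared spatial
        v = vis_feats + pos                      # vision + shared spatial
        t_out, _ = self.text_attn(t, t, t)
        v_out, _ = self.vis_attn(v, v, v)
        return t_out + v_out                     # naive fusion of the two streams

# toy usage
block = MultiModalSelfAttention(dim=64)
text = torch.randn(2, 16, 64)                    # 16 OCR word features
vis = torch.randn(2, 16, 64)                     # matching visual patch features
boxes = torch.randint(0, 1024, (2, 16))          # quantized positions
out = block(text, vis, boxes)                    # (2, 16, 64)
```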
[Figure 1 diagram: the question and answer choices are fed to a single-layer perceptron over well-tuned Word2Vec vectors trained on movie plots; no plot, subtitle, or video information is used.]
Figure 1: Answering questions about movies without watching any movies. The MovieQA task is: given a question and multiple answer choices, find the correct answer by using the context provided in the corresponding videos and subtitles. Prior works use deep networks to incorporate information from videos and subtitles to do this task. We show a much simpler model that achieves state-of-the-art performance without using any video or subtitle context. Our model uses a well-tuned word embedding trained in an unsupervised manner on Wikipedia movie plots (movie summaries), and is able to answer about half of the questions in the dataset by looking only at the questions and answer choices.
Abstract
Joint vision and language tasks like visual question answering are fascinating because they explore high-level understanding, but at the same time they can be more prone to language biases. In this paper, we explore the biases in the MovieQA dataset and propose a strikingly simple model which can exploit them. We find that using the right word embedding is of utmost importance. By using an appropriately trained word embedding, about half the Question-Answers (QAs) can be answered by looking at the questions and answers alone, completely ignoring narrative context from video clips, subtitles, and movie scripts. Compared to the best published papers on the leaderboard, our simple question+answer-only model improves accuracy by 5% for the video+subtitle category, 5% for subtitles, 15% for DVS, and 6% for scripts.
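As a rough illustration of what a question+answer-only model can look like, the sketch below scores each answer choice by cosine similarity between averaged word vectors of the question and the choice, then picks the highest-scoring one. This is a simplification for illustration: the paper's figure describes a single-layer perceptron on top of well-tuned Word2Vec vectors trained on Wikipedia movie plots, and the toy embedding dictionary here is only a stand-in for such vectors.

```python
import numpy as np

def sentence_vec(tokens, emb):
    """Average the vectors of the tokens that exist in the embedding."""
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        return np.zeros(next(iter(emb.values())).shape)
    return np.mean(vecs, axis=0)

def answer_question(question, choices, emb):
    """Pick the answer choice most similar to the question (no video/subtitles)."""
    tokenize = lambda s: [w.strip("?.,!") for w in s.lower().split()]
    q = sentence_vec(tokenize(question), emb)
    scores = []
    for choice in choices:
        c = sentence_vec(tokenize(choice), emb)
        denom = (np.linalg.norm(q) * np.linalg.norm(c)) or 1.0
        scores.append(float(q @ c) / denom)
    return int(np.argmax(scores))

# toy embedding dictionary standing in for plot-trained word vectors
rng = np.random.default_rng(0)
emb = {w: rng.standard_normal(50) for w in
       "who killed the butler gardener chef".split()}
print(answer_question("Who killed the butler?",
                      ["the gardener", "the chef"], emb))
```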