The task of referring expression comprehension (REC) is to localise the image region of a specific object described by a natural language expression. All existing REC methods assume that the object described by the referring expression is present in the given image. However, this assumption does not always hold in real applications. For example, a visually impaired user might tell their robot 'please take the laptop on the table to me' when, in fact, the laptop is no longer on the table. To address this problem, the authors propose a novel REC model that handles expression‐image mismatch and explains the mismatch through linguistic feedback. The authors' REC model consists of four modules: the expression parsing module, the entity detection module, the relationship detection module, and the matching detection module. They built a data set called NP‐RefCOCO+ from RefCOCO+ containing both positive and negative samples. The positive samples are the original expression‐image pairs in RefCOCO+; the negative samples are expression‐image pairs from RefCOCO+ whose original expressions have been replaced. They evaluate the model on NP‐RefCOCO+, and the experimental results show the advantages of their method in dealing with expression‐image mismatch.
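The negative-sample construction described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the function name, the `(expression, image_id)` tuple format, and the choice of sampling replacement expressions uniformly from other images are all assumptions made for the example.

```python
import random

def build_np_style_samples(pairs, seed=0):
    """Build positive and negative expression-image pairs.

    `pairs` is a list of (expression, image_id) tuples, as in a
    RefCOCO+-style data set. Positives keep the original pairing
    (label 1); each negative (label 0) keeps the image but replaces
    the expression with one drawn from a different image, so the
    described object is unlikely to appear in the paired image.
    (Hypothetical helper; the paper's construction may differ.)
    """
    rng = random.Random(seed)
    positives = [(expr, img, 1) for expr, img in pairs]
    negatives = []
    for expr, img in pairs:
        # Pick a replacement expression tied to a different image.
        other = rng.choice([e for e, i in pairs if i != img])
        negatives.append((other, img, 0))
    return positives + negatives
```

For example, calling `build_np_style_samples` on three pairs yields three positives and three negatives, doubling the data set size while keeping the label distribution balanced.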