Visual question answering (VQA) requires joint comprehension of images and natural-language questions, and many questions cannot be answered directly or unambiguously from visual content alone but require reasoning over structured human knowledge, confirmed against the visual content. This paper proposes the visual knowledge memory network (VKMN) to address this issue, seamlessly incorporating structured human knowledge and deep visual features into memory networks in an end-to-end learning framework. Compared with existing methods that leverage external knowledge for VQA, this paper emphasizes two missing mechanisms. The first is a mechanism for integrating visual content with knowledge facts; VKMN handles this by embedding knowledge triples (subject, relation, target) and deep visual features jointly into visual knowledge features. The second is a mechanism for handling the multiple knowledge facts expanded from question-answer pairs; VKMN stores the joint embeddings in a key-value pair structure in the memory networks, making multiple facts easy to handle. Experiments show that the proposed method achieves promising results on both the VQA v1.0 and v2.0 benchmarks and outperforms state-of-the-art methods on knowledge-reasoning questions.
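The key-value memory addressing described above can be sketched roughly as follows; this is a minimal illustrative example, not the paper's implementation, and the function name, dimensions, and softmax read are assumptions about how a generic key-value memory lookup works (keys would hold joint triple/visual embeddings, values the corresponding target embeddings):

```python
import numpy as np

def kv_memory_read(query, keys, values):
    """Hypothetical key-value memory read.

    query:  (d,)   question embedding
    keys:   (n, d) one key per stored knowledge fact
    values: (n, d) value embedding per fact
    Returns a (d,) vector: the attention-weighted sum of values.
    """
    scores = keys @ query                   # relevance of each memory slot
    weights = np.exp(scores - scores.max()) # numerically stable softmax
    weights /= weights.sum()
    return weights @ values                 # soft lookup over all facts

# Toy usage: 3 memory slots (facts), embedding dimension 4.
rng = np.random.default_rng(0)
q = rng.standard_normal(4)
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = kv_memory_read(q, K, V)
```

Storing one key-value pair per fact is what lets the memory scale to multiple facts per question: adding a fact only appends a row to `keys` and `values`, and the softmax read aggregates over all of them.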
Highly efficient catalysts for both the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER) are key to the commercialization of rechargeable zinc–air batteries (ZABs). In this work, a catalyst with uniform nanospherical morphology was prepared from cobalt nitrate, acetylacetone, and hydrazine hydrate. The final catalyst exhibits high ORR and OER performance, with a half-wave potential of 0.911 V [vs the reversible hydrogen electrode (RHE)] for the ORR and a low potential of 1.57 V (vs RHE) at 10 mA cm⁻² for the OER in 0.1 M KOH solution. Notably, a ZAB based on the catalyst delivers an ultrahigh power density of 479.1 mW cm⁻², as well as excellent stability and potential for practical applications.