Knowledge graphs, which represent information as a semantic graph, have attracted wide attention in both industry and academia. By providing semantically structured information, they offer promising solutions for many tasks, including question answering, recommendation, and information retrieval, and many researchers consider them a key step toward building more intelligent machines. Although knowledge graphs have supported a variety of “Big Data” applications across commercial and scientific domains since Google coined the term in 2012, no previous study has systematically reviewed their applications. Therefore, unlike related work that focuses on the construction techniques of knowledge graphs, this paper aims to provide a first survey of these applications across different domains. It also points out that, while recent years have seen important advances in applying knowledge graphs’ ability to provide semantically structured information in specific domains, several aspects remain to be explored.
Continual learning aims to alleviate catastrophic forgetting when handling consecutive tasks under non-stationary distributions. Gradient-based meta-learning algorithms have shown the capability to implicitly solve the transfer-interference trade-off between different examples. However, they still suffer from catastrophic forgetting in the continual learning setting, since past data from previous tasks are no longer available. In this work, we propose a novel, efficient meta-learning algorithm for online continual learning, in which the regularization terms and learning rates are adapted to a Taylor approximation of each parameter's importance to mitigate forgetting. The proposed method expresses the gradient of the meta-loss in closed form and thus avoids computing second-order derivatives, which is computationally prohibitive. We also use proximal gradient descent to further improve computational efficiency and accuracy. Experiments on diverse benchmarks show that our method achieves better or on-par performance and much higher efficiency compared to state-of-the-art approaches.