Neural language representation models such as BERT, pre-trained on large-scale corpora, can capture rich semantic patterns from plain text and can be fine-tuned to consistently improve performance on various NLP tasks. However, existing pre-trained language models rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better language understanding. We argue that informative entities in KGs can enhance language representation with external knowledge. In this paper, we utilize both large-scale textual corpora and KGs to train an enhanced language representation model (ERNIE), which can take full advantage of lexical, syntactic, and knowledge information simultaneously. The experimental results demonstrate that ERNIE achieves significant improvements on various knowledge-driven tasks while remaining comparable with the state-of-the-art model BERT on other common NLP tasks. The source code and experiment details of this paper can be obtained from https://github.com/thunlp/ERNIE. * indicates equal contribution. † Corresponding author: Z. Liu (liuzy@tsinghua.edu.cn). [Figure: an example knowledge graph fragment in which Bob Dylan (is_a Songwriter, is_a Writer) is linked as composer of the song "Blowin' in the Wind" and author of the book "Chronicles: Volume One".]
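The core idea of enhancing language representation with KG entities can be sketched as adding an aligned entity embedding to the embedding of a token linked to that entity. The snippet below is a hypothetical, simplified illustration of this fusion idea, not ERNIE's actual architecture; all names and values are made up.

```python
def fuse(token_emb, entity_emb=None):
    """Add the KG entity embedding to the token embedding when the token
    is linked to an entity; otherwise pass the token embedding through."""
    if entity_emb is None:
        return token_emb
    return [t + e for t, e in zip(token_emb, entity_emb)]

tokens = ["Bob", "Dylan", "wrote"]
# Toy 2-d embeddings; real models use learned, high-dimensional vectors.
token_embs = {"Bob": [0.25, 0.5], "Dylan": [0.0, 0.25], "wrote": [0.5, 0.5]}
# "Bob" and "Dylan" both link to the entity Bob Dylan; "wrote" links to nothing.
entity_embs = {"Bob": [0.5, 0.25], "Dylan": [0.5, 0.25]}

fused = [fuse(token_embs[t], entity_embs.get(t)) for t in tokens]
```

In this sketch, linked tokens ("Bob", "Dylan") receive the external entity signal, while unlinked tokens ("wrote") are unchanged.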
Pre-trained language representation models (PLMs) cannot well capture factual knowledge from text. In contrast, knowledge embedding (KE) methods can effectively represent the relational facts in knowledge graphs (KGs) with informative entity embeddings, but conventional KE models cannot take full advantage of the abundant textual information. In this paper, we propose a unified model for Knowledge Embedding and Pre-trained Language Representation (KEPLER), which can not only better integrate factual knowledge into PLMs but also produce effective text-enhanced KE with strong PLMs. In KEPLER, we encode textual entity descriptions with a PLM as their embeddings, and then jointly optimize the KE and language modeling objectives. Experimental results show that KEPLER achieves state-of-the-art performance on various NLP tasks, and also works remarkably well as an inductive KE model on KG link prediction. Furthermore, for pre-training and evaluating KEPLER, we construct Wikidata5M, a large-scale KG dataset with aligned entity descriptions, and benchmark state-of-the-art KE methods on it. It shall serve as a new KE benchmark and facilitate research on large KGs, inductive KE, and KGs with text. The source code can be obtained from https://github.com/THU-KEG/KEPLER.
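The joint-objective idea can be sketched as follows: entity embeddings are produced by encoding their textual descriptions, a translational (TransE-style) score judges triple plausibility, and the KE margin loss is summed with the language modeling loss. This is a minimal toy sketch, not KEPLER's implementation; the encoder, relation vector, and loss values are all illustrative stand-ins.

```python
import math

def encode(text, dim=4):
    """Toy deterministic text encoder standing in for the PLM that embeds
    entity descriptions (illustrative only); returns a unit vector."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += (ord(ch) % 13) / 13.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def transe_score(h, r, t):
    """TransE-style energy ||h + r - t||: lower means more plausible."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def ke_margin_loss(pos, neg, margin=1.0):
    """Margin ranking loss for one positive and one corrupted triple."""
    return max(0.0, margin + pos - neg)

# Entity embeddings come from textual descriptions, so unseen (inductive)
# entities can be embedded at test time from their text alone.
h = encode("Bob Dylan, American singer-songwriter")
t = encode("Blowin' in the Wind, 1962 song")
r = [0.1, 0.0, -0.1, 0.0]  # relation vector (toy values, normally learned)

pos = transe_score(h, r, t)
neg = transe_score(h, r, encode("unrelated entity description"))
mlm_loss = 2.3  # placeholder for the masked language modeling loss
total_loss = ke_margin_loss(pos, neg) + mlm_loss  # joint objective
```

The key design point the sketch captures is that both objectives share the same encoder, so gradients from the KE loss inject factual structure into the language model while the text encoder makes the KE side inductive.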
Low loading is one of the bottlenecks limiting the performance of quantum dot sensitized solar cells (QDSCs). Although previous QD secondary deposition relying on electrostatic interaction can improve QD loading, it cannot enhance the photovoltage and fill factor because it introduces new recombination centers. Herein, a convenient QD secondary deposition approach that introduces no new recombination centers is developed by creating new adsorption sites via the formation of a metal oxyhydroxide layer around QD-presensitized photoanodes. MgCl2-solution-treated Zn–Cu–In–S–Se (ZCISSe) QD sensitized TiO2 film electrodes were chosen as a model device to investigate this secondary deposition approach. The experimental results demonstrate that an additional 38% of QDs are immobilized on the photoanode as a single layer. Owing to the increased QD loading and the concomitant enhanced light-harvesting capacity and reduced charge recombination, not only the photocurrent but also the photovoltage and fill factor are remarkably enhanced. The average PCE of the resulting ZCISSe QDSCs is boosted to 15.31% (Jsc = 26.52 mA cm−2, Voc = 0.802 V, FF = 0.720) from the original 13.54% (Jsc = 24.23 mA cm−2, Voc = 0.789 V, FF = 0.708). Furthermore, a new certified PCE record of 15.20% has been obtained for liquid-junction QDSCs.
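The quoted efficiencies follow from the standard relation PCE = Jsc × Voc × FF / Pin, assuming the usual AM 1.5G incident power of 100 mW cm−2 (not stated in the abstract, but the values are consistent with it):

```python
def pce_percent(jsc_ma_cm2, voc_v, ff, pin_mw_cm2=100.0):
    """Power conversion efficiency in percent: PCE = Jsc * Voc * FF / Pin.
    Jsc in mA/cm^2, Voc in V, FF dimensionless, Pin in mW/cm^2."""
    return jsc_ma_cm2 * voc_v * ff / pin_mw_cm2 * 100.0

before = pce_percent(24.23, 0.789, 0.708)  # original device, ~13.54 %
after = pce_percent(26.52, 0.802, 0.720)   # secondary deposition, ~15.31 %
```

Both computed values reproduce the reported PCEs to within rounding, confirming the internal consistency of the quoted J–V parameters.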