Rumor detection on social media leverages pretrained language models (LMs), such as BERT, and auxiliary features, such as comments. However, on the one hand, rumor detection datasets in Chinese accompanied by comments are rare; on the other hand, the intensive attention interaction in Transformer-based models like BERT may hinder performance improvement. To alleviate these problems, we build a new Chinese microblog dataset named Weibo20 by collecting posts and associated comments from Sina Weibo, and propose a new ensemble named STANKER (Stacking neTwork bAsed-on atteNtion-masKed BERT). STANKER adopts two level-grained attention-masked BERT (LGAM-BERT) models as base encoders. Unlike the original BERT, our new LGAM-BERT model takes comments as important auxiliary features and masks co-attention between posts and comments on lower layers. Experiments on Weibo20 and three existing social media datasets showed that STANKER outperformed all compared models, in particular beating the previous state-of-the-art on the Weibo dataset.
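The abstract's key architectural idea, masking co-attention between post and comment tokens on the lower layers while allowing full attention on upper layers, can be illustrated with a minimal sketch. The function below is a hypothetical reconstruction (the function name, the `mask_layers` threshold, and the token segmentation are assumptions, not taken from the paper): it builds a per-layer attention mask where, below the threshold layer, post tokens and comment tokens cannot attend to each other.

```python
import numpy as np

def lgam_attention_mask(post_len, comment_len, layer, mask_layers=6):
    """Hypothetical sketch of a level-grained attention mask:
    on lower layers (layer < mask_layers), co-attention between
    post tokens and comment tokens is masked out; on upper layers,
    full attention is allowed. 1 = may attend, 0 = masked."""
    n = post_len + comment_len
    mask = np.ones((n, n), dtype=int)
    if layer < mask_layers:
        mask[:post_len, post_len:] = 0  # post tokens cannot attend to comments
        mask[post_len:, :post_len] = 0  # comment tokens cannot attend to the post
    return mask
```

Under this sketch, lower layers encode the post and its comments separately, and only the upper layers fuse the two representations; the exact layer split used by LGAM-BERT is a detail the abstract does not specify.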
Recently, there has been increasing interest in action model learning. However, most previous studies have focused on learning effect-based action models, whereas the latest planning competition introduced a rule-based planning domain description language, the Relational Dynamic Influence Diagram Language (RDDL), which uses rules to describe transitions instead of action models. In this paper, we build a system that learns planning domain descriptions in RDDL. An RDDL domain description has three major parts: constraints, transitions, and rewards. We first apply finite state machine analysis to identify constraints, then employ inductive learning to learn transitions, and finally use regression to fit rewards. Evaluation on benchmarks from planning competitions showed that our system can learn RDDL domain descriptions with low error rates. Moreover, because our system is built on classical approaches, this suggests that RDDL is rooted in earlier planning languages, and that more classical approaches could prove useful in RDDL domains.
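The final step described above, using regression to fit rewards, can be sketched concretely. The snippet below is an illustrative assumption, not the paper's actual implementation: it supposes the reward in an RDDL domain is a linear combination of observed state-action features, and recovers the weights by least squares from logged traces (the feature matrix and true weights here are fabricated for the example).

```python
import numpy as np

def fit_reward_weights(features, rewards):
    """Sketch of the reward-fitting step: given a matrix of
    state-action feature vectors (one row per observed step) and
    the rewards received, recover linear weights for a hypothetical
    RDDL reward expression via least squares."""
    weights, *_ = np.linalg.lstsq(features, rewards, rcond=None)
    return weights

# Hypothetical traces where the true reward is 2*f1 - 1*f2.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([2.0, -1.0])
w = fit_reward_weights(X, y)
```

With noiseless traces the recovered weights match the generating ones exactly; real traces would require enough coverage of the feature space for the system to pin down the reward expression.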