In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, so as to avoid the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, thereby addressing the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD datasets.
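The reattention mechanism described above can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's exact formulation: the way the history term is merged into the current scores (a direct additive bias weighted by `gamma`) and all names and shapes are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reattention(logits, past_attn, gamma=0.5):
    """Refine current attention logits with a memory of past attention,
    discouraging redundant re-attention to already-covered positions
    and correcting neglected ones across alignment rounds.

    logits    : (T, J) current unnormalized similarity scores
    past_attn : (T, J) attention distribution from a previous round
    gamma     : a trainable scalar in the paper; fixed here
    """
    # Simplified merge: bias current scores by the history signal.
    refined_logits = logits + gamma * past_attn
    return softmax(refined_logits, axis=-1)

T, J = 4, 6
rng = np.random.default_rng(0)
logits = rng.normal(size=(T, J))
past = softmax(rng.normal(size=(T, J)))
attn = reattention(logits, past)
print(attn.shape)  # (4, 6); each row remains a valid distribution
```

Because the history term enters before the softmax, each refined row is still a proper attention distribution over the J positions.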
Background: Coronary heart disease is a leading cause of mortality among women. Systematic evaluation of the quality of care and outcomes in women hospitalized for acute coronary syndrome (ACS), an acute manifestation of coronary heart disease, remains lacking in China. Methods: The CCC-ACS project (Improving Care for Cardiovascular Disease in China–Acute Coronary Syndrome) is an ongoing nationwide registry of the American Heart Association and the Chinese Society of Cardiology. Using data from the CCC-ACS project, we evaluated sex differences in acute management, medical therapies for secondary prevention, and in-hospital mortality in 82 196 patients admitted for ACS at 192 hospitals in China from 2014 to 2018. Results: Women with ACS were older than men (69.0 versus 61.1 years, P<0.001) and had more comorbidities. After multivariable adjustment, eligible women were less likely than men to receive evidence-based acute treatments for ACS, including early dual antiplatelet therapy, heparins during hospitalization, and reperfusion therapy for ST-segment–elevation myocardial infarction. With respect to strategies for secondary prevention, eligible women were less likely to receive dual antiplatelet therapy, angiotensin-converting enzyme inhibitors/angiotensin receptor blockers, statins at discharge, and smoking cessation and cardiac rehabilitation counseling during hospitalization. The in-hospital mortality rate was higher in women than in men (2.60% versus 1.50%, P<0.001). The sex difference in in-hospital mortality was no longer observed in patients with ST-segment–elevation myocardial infarction (adjusted odds ratio, 1.18; 95% CI, 1.00 to 1.41; P=0.057) or non-ST-segment–elevation ACS (adjusted odds ratio, 0.84; 95% CI, 0.66 to 1.06; P=0.147) after adjustment for clinical characteristics and acute treatments. Conclusions: Women hospitalized for ACS in China received acute treatments and strategies for secondary prevention less frequently than men.
The observed sex differences in in-hospital mortality were mainly attributable to worse clinical profiles and fewer evidence-based acute treatments provided to women with ACS. Specifically targeted quality improvement programs may be warranted to narrow sex-related disparities in quality of care and outcomes in patients with ACS. Clinical Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT02306616.
Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of this dependency information. We first propose a new structure, termed the augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to that path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-the-art results.
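Constructing the ADP structure described above can be sketched as a shortest-path search over the dependency tree plus collection of the subtrees hanging off that path. This is an illustrative sketch using a toy edge list, not the paper's preprocessing pipeline; the function names and the example sentence are assumptions.

```python
from collections import deque

def shortest_dep_path(edges, e1, e2):
    """BFS shortest path between two entity tokens, viewing the
    dependency tree's (head, dependent) edges as undirected."""
    adj = {}
    for h, d in edges:
        adj.setdefault(h, set()).add(d)
        adj.setdefault(d, set()).add(h)
    prev = {e1: None}
    q = deque([e1])
    while q:
        u = q.popleft()
        if u == e2:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, ()):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def augmented_dep_path(edges, e1, e2):
    """ADP = the shortest path plus the subtree roots attached to it."""
    path = shortest_dep_path(edges, e1, e2)
    on_path = set(path)
    children = {}
    for h, d in edges:
        children.setdefault(h, []).append(d)
    attached = {n: [c for c in children.get(n, []) if c not in on_path]
                for n in path}
    return path, attached

# Toy (head, dependent) edges for illustration only.
edges = [("burst", "caused"), ("caused", "pressure"),
         ("burst", "pipe"), ("pipe", "the"), ("pressure", "high")]
path, attached = augmented_dep_path(edges, "pipe", "pressure")
print(path)      # ['pipe', 'burst', 'caused', 'pressure']
print(attached)  # off-path modifiers like 'the' and 'high' survive
```

In DepNN, the recursive network would embed each `attached` subtree and the convolutional network would slide over the `path` tokens; the sketch only produces the structure they consume.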
Pre-trained models for programming language have achieved dramatic empirical improvements on a variety of code-related tasks such as code search, code completion, and code summarization. However, existing pre-trained models regard a code snippet as a sequence of tokens, ignoring the inherent structure of code, which provides crucial code semantics and would enhance the code understanding process. We present GraphCodeBERT, a pre-trained model for programming language that considers the inherent structure of code. Instead of taking a syntactic-level structure of code such as the abstract syntax tree (AST), we use data flow in the pre-training stage, a semantic-level structure of code that encodes the "where-the-value-comes-from" relation between variables. Such a semantic-level structure is neat and avoids the unnecessarily deep hierarchy of the AST, which makes the model more efficient. We develop GraphCodeBERT based on the Transformer. In addition to the masked language modeling task, we introduce two structure-aware pre-training tasks: one predicts code structure edges, and the other aligns representations between source code and code structure. We implement the model efficiently with a graph-guided masked attention function to incorporate the code structure. We evaluate our model on four tasks, including code search, clone detection, code translation, and code refinement. Results show that code structure and the newly introduced pre-training tasks improve GraphCodeBERT, which achieves state-of-the-art performance on the four downstream tasks. We further show that the model prefers structure-level attentions over token-level attentions in the task of code search.
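The graph-guided masked attention mentioned above can be sketched as a boolean mask over a combined sequence of code tokens and data-flow variable nodes: code tokens attend freely to one another, while variable nodes attend only to themselves, to the code token they were parsed from, and along data-flow edges. This is a simplified sketch under those assumptions; the layout and function name are illustrative, not GraphCodeBERT's exact implementation.

```python
import numpy as np

def graph_guided_mask(n_code, dataflow_edges, var_to_token):
    """Boolean attention mask fusing a data-flow graph into a
    Transformer. Sequence layout: [code tokens | variable nodes].

    n_code         : number of code tokens
    dataflow_edges : (i, j) pairs, "variable j's value comes from i"
    var_to_token   : variable index -> index of its source code token
    """
    n_var = len(var_to_token)
    n = n_code + n_var
    mask = np.zeros((n, n), dtype=bool)
    mask[:n_code, :n_code] = True            # code attends to code
    for v, t in var_to_token.items():
        mask[n_code + v, t] = True           # node <-> its source token
        mask[t, n_code + v] = True
        mask[n_code + v, n_code + v] = True  # node self-attention
    for i, j in dataflow_edges:              # attention along edges
        mask[n_code + j, n_code + i] = True
        mask[n_code + i, n_code + j] = True
    return mask

# Toy example: tokens of "x = a + b" with variables x, a, b.
n_code = 5
var_to_token = {0: 0, 1: 2, 2: 4}   # x->"x", a->"a", b->"b"
edges = [(1, 0), (2, 0)]            # x's value comes from a and b
mask = graph_guided_mask(n_code, edges, var_to_token)
print(mask.shape)  # (8, 8)
```

In attention, disallowed positions would receive a large negative bias before the softmax, so a variable node never peeks at unrelated tokens.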
The study is the first to show that elevated NLR levels significantly correlate with an increased risk of developing hypertension. This result may be useful in elucidating the mechanism underlying the development of hypertension. New therapeutic approaches aimed at inflammation might be proposed to control hypertension and hypertensive damage.
Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm for response generation, prototype-then-edit, which first retrieves a prototype response from a pre-defined index and then edits the prototype response according to the differences between the prototype context and the current context. Our motivation is that the retrieved prototype provides a good starting point for generation because it is grammatical and informative, and the post-editing process further improves the relevance and coherence of the prototype. In practice, we design a context-aware editing model built upon an encoder-decoder framework augmented with an editing vector. We first generate an edit vector by considering lexical differences between the prototype context and the current context. After that, the edit vector and the prototype response representation are fed to a decoder to generate a new response. Experimental results on a large-scale dataset demonstrate that our new paradigm significantly increases the relevance, diversity, and originality of generation results compared to traditional generative models. Furthermore, our model outperforms retrieval-based methods in terms of relevance and originality.
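The edit-vector construction above can be sketched simply: take the words to insert (present in the current context but not the prototype context) and the words to delete (the reverse), average each set in embedding space, and concatenate the two averages. This is a minimal sketch of that idea; the embedding table, dimensionality, and function name are illustrative assumptions, and the paper's model learns these representations jointly rather than using fixed embeddings.

```python
import numpy as np

def edit_vector(proto_ctx, cur_ctx, emb, dim=8):
    """Edit vector from lexical differences between the prototype
    context and the current context: mean embedding of insertion
    words concatenated with mean embedding of deletion words."""
    ins = set(cur_ctx) - set(proto_ctx)    # words to insert
    dele = set(proto_ctx) - set(cur_ctx)   # words to delete
    def avg(words):
        if not words:
            return np.zeros(dim)
        return np.mean([emb[w] for w in words], axis=0)
    return np.concatenate([avg(ins), avg(dele)])

# Toy contexts and a random embedding table for illustration.
rng = np.random.default_rng(1)
proto = "do you like football".split()
cur = "do you like basketball".split()
emb = {w: rng.normal(size=8) for w in set(proto) | set(cur)}
v = edit_vector(proto, cur, emb)
print(v.shape)  # (16,): insertion half + deletion half
```

The decoder would then condition on `v` alongside the prototype response encoding, steering the rewrite toward "basketball" and away from "football".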