The quadratic computational and memory complexities of large Transformers have limited their scalability for long document summarization. In this paper, we propose HEPOS, a novel efficient encoder-decoder attention with head-wise positional strides to effectively pinpoint salient information from the source. We further conduct a systematic study of existing efficient self-attentions. Combined with HEPOS, we are able to process ten times more tokens than existing models that use full attentions. For evaluation, we present a new dataset, GOVREPORT, with significantly longer documents and summaries. Results show that our models produce significantly higher ROUGE scores than competitive comparisons, including new state-of-the-art results on PubMed. Human evaluation also shows that our models generate more informative summaries with fewer unfaithful errors.

1 [...] tokens with a batch size of 1, 70GB of memory is needed for encoder attentions, and 8GB for encoder-decoder attentions.
2 Our code is released at https://github.com/luyang-huang96/LongDocSum.
3 GOVREPORT can be downloaded from https://gov-report-data.github.io.
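The head-wise positional stride idea can be pictured with a short sketch: each decoder attention head attends only to encoder positions at a fixed stride, and different heads use different offsets, so that together the heads still cover the whole source while each head sees only a fraction of the keys. The snippet below is a minimal, hedged illustration of that idea in PyTorch; the function name `hepos_cross_attention`, the tensor shapes, and the omission of masking and dropout are assumptions, not the authors' released implementation.

```python
# Minimal sketch of head-wise positional-stride (HEPOS-style) cross-attention.
# Assumption-heavy illustration, not the paper's released code.
import torch
import torch.nn.functional as F

def hepos_cross_attention(q, k, v, stride):
    """q: [batch, heads, tgt_len, dim]; k, v: [batch, heads, src_len, dim].
    Head h attends only to source positions p with p % stride == h % stride,
    so each head sees ~src_len/stride keys while the heads jointly cover
    every source position."""
    b, h, tgt_len, d = q.shape
    src_len = k.shape[2]
    outputs = []
    for head in range(h):
        offset = head % stride
        idx = torch.arange(offset, src_len, stride, device=q.device)
        k_h, v_h = k[:, head, idx], v[:, head, idx]           # subsampled keys/values
        scores = q[:, head] @ k_h.transpose(-1, -2) / d ** 0.5
        attn = F.softmax(scores, dim=-1)
        outputs.append(attn @ v_h)                            # [batch, tgt_len, dim]
    return torch.stack(outputs, dim=1)                        # [batch, heads, tgt_len, dim]

# Example: 8 heads with stride 4 reduce each head's key set by 4x.
q = torch.randn(2, 8, 16, 64)
k = torch.randn(2, 8, 1024, 64)
v = torch.randn(2, 8, 1024, 64)
print(hepos_cross_attention(q, k, v, stride=4).shape)  # torch.Size([2, 8, 16, 64])
```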
To combat COVID-19, both clinicians and scientists need to digest vast amounts of relevant biomedical knowledge in scientific literature to understand the disease mechanism and related biological functions. We have developed a novel and comprehensive knowledge discovery framework, COVID-KG, to extract fine-grained multimedia knowledge elements (entities and their visual chemical structures, relations, and events) from scientific literature. We then exploit the constructed multimedia knowledge graphs (KGs) for question answering and report generation, using drug repurposing as a case study. Our framework also provides detailed contextual sentences, subfigures, and knowledge subgraphs as evidence. All of the data, KGs, reports, resources, and shared services are publicly available.
Scientific retractions occur for a multitude of reasons. A growing body of research has studied the phenomenon of retraction through systematic analyses of the characteristics of retracted articles and their associated citations. In our study, we focus on the characteristics of articles that cite retracted articles and on how citation dynamics change pre- and post-retraction. We leverage descriptive statistics and ego-network methods to examine 4,871 retracted articles and their citations before and after retraction. Data on retracted articles were obtained from PubMed, Scopus, and Retraction Watch, and their citing articles from Scopus. Our findings indicate a stark decrease in post-retraction citations and show that most of these citations came from countries other than the retracted article's country of publication. Citation context analyses of a subset of retracted articles also reveal that post-retraction citations came from articles whose disciplinary and geographical boundaries differ from those of the retracted article.
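To make the kind of analysis described above concrete, the sketch below shows one way to split citations into pre- and post-retraction groups and to build a citation ego network for a single retracted article. It is a hypothetical illustration: the record fields (`year`, `citing_country`, `retraction_year`) and the toy data are assumptions, not the study's actual schema or code.

```python
# Minimal sketch: pre-/post-retraction citation split and a citation ego network.
import networkx as nx

retracted = {"id": "A1", "country": "US", "retraction_year": 2015}
citations = [
    {"citing_id": "B1", "year": 2014, "citing_country": "US"},
    {"citing_id": "B2", "year": 2016, "citing_country": "CN"},
    {"citing_id": "B3", "year": 2018, "citing_country": "IN"},
]

# Pre- vs post-retraction citation counts.
pre = [c for c in citations if c["year"] < retracted["retraction_year"]]
post = [c for c in citations if c["year"] >= retracted["retraction_year"]]
print(len(pre), "pre-retraction;", len(post), "post-retraction")

# Share of post-retraction citations from a country other than the retracted article's.
cross_country = sum(c["citing_country"] != retracted["country"] for c in post)
print(cross_country / max(len(post), 1))

# Ego network: the retracted article as the ego, citing articles as alters.
g = nx.Graph()
for c in citations:
    g.add_edge(retracted["id"], c["citing_id"], year=c["year"])
ego = nx.ego_graph(g, retracted["id"])
print(ego.number_of_nodes(), "nodes,", ego.number_of_edges(), "edges")
```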