Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture the abilities a dataset is intended to test. We propose a more rigorous annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, and IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets, by up to 25% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes.
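The idea of a contrast set can be made concrete with a small sketch. The snippet below uses a deliberately simplistic toy classifier (a hypothetical stand-in for any trained model, not part of the paper) to show how small perturbations that flip the gold label expose a shortcut decision rule that the original test instance does not.

```python
# Illustrative sketch of contrast-set evaluation for sentiment analysis.
# `predict` is a hypothetical toy classifier that relies on a single
# lexical cue; it stands in for any trained model.

def predict(text: str) -> str:
    """Toy classifier keying on one lexical cue (for illustration only)."""
    return "positive" if "great" in text else "negative"

# An original test instance, plus small but meaningful perturbations
# that flip the gold label: a contrast set.
original = ("The movie was great.", "positive")
contrast_set = [
    ("The movie was not great.", "negative"),
    ("The movie was great until the final act ruined it.", "negative"),
]

def accuracy(instances) -> float:
    """Fraction of (text, gold_label) pairs the model labels correctly."""
    return sum(predict(t) == g for t, g in instances) / len(instances)

print(accuracy([original]))    # 1.0 — the shortcut works in-distribution
print(accuracy(contrast_set))  # 0.0 — perturbations expose the shortcut
```

The shortcut rule scores perfectly on the original instance but fails on every perturbed neighbor, which is exactly the local view of the decision boundary that contrast sets are designed to probe.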
One key consequence of the information revolution is a significant increase in, and contamination of, our information supply. Fact-checking alone will not suffice to eliminate the biases we observe in text data, since factuality by itself does not determine whether the spectrum of opinions visible to us is biased. To better understand controversial issues, one needs to view them from a diverse yet comprehensive set of perspectives.
Graphane is the fully hydrogenated form of graphene. Because every carbon atom is hydrogenated, graphane is expected to have a wide bandgap and is theoretically an electrical insulator. The transition from graphene to graphane is thus a transition from an electrical conductor, through a semiconductor, to an electrical insulator. This unique characteristic of graphane has recently attracted both academic and industrial interest. Toward developing novel applications of this important class of nanoscale materials, a number of theoreticians have carried out computational modeling to predict the structures and electronic properties of graphane. At the same time, experimental evidence has emerged to support the proposed structure of graphane. This review article covers the important aspects of graphane, including its theoretically predicted structures, properties, fabrication methods, and potential applications.
Demand for lightweight and flexible electronics has grown steadily, and flexible, wearable electronic textiles can be realized by coating traditional textiles with conductive materials. Here, conductive silk fabrics are prepared by coating silk fabrics with graphene oxide (GO) followed by thermal reduction. Scanning electron microscopy shows that the GO coating successfully forms a continuous thin film on the silk fabric. Thermal reduction removes the oxygen functional groups, while the main structure (β-sheet) of the silk fabric is not destroyed by the series of treatments, preserving good mechanical properties. Using regenerated silk fibroin as a glue, the resistivity and conductivity of the silk fabrics reach 3.28 kΩ cm and 3.06 × 10−4 S cm−1, respectively, which meets the conductivity requirements of wearable electronics. Thus, the fabrics can be used for sensors, portable devices, and wearable electronic textiles.
Despite significant progress in neural abstractive summarization, recent studies have shown that current models are prone to generating summaries that are unfaithful to the original context. To address this issue, we study contrast candidate generation and selection as a model-agnostic post-processing technique to correct extrinsic hallucinations (i.e., information not present in the source text) in unfaithful summaries. We learn a discriminative correction model by generating alternative candidate summaries in which named entities and quantities in the generated summary are replaced with ones of compatible semantic types from the source document. This model is then used to select the best candidate as the final output summary. Our experiments and analysis across a number of neural summarization systems show that our proposed method is effective in identifying and correcting extrinsic hallucinations. We also analyze the typical hallucination phenomena produced by different types of neural summarization systems, in the hope of providing insights for future work in this direction.
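The generate-then-select pipeline described above can be sketched in a few lines. The entity extraction and faithfulness scoring below are deliberately simplified stand-ins (simple string matching and token overlap), not the paper's actual discriminative model; the example only illustrates the control flow of swapping a hallucinated entity for compatible source entities and picking the most source-supported candidate.

```python
# Sketch of contrast candidate generation and selection for correcting
# extrinsic entity hallucinations. Entity detection and scoring are toy
# stand-ins; a real system would use NER and a learned correction model.

from itertools import product

def generate_candidates(summary: str, hallucinated: list[str],
                        source_entities: list[str]) -> list[str]:
    """Swap each hallucinated entity for every compatible source entity."""
    return [summary.replace(bad, good)
            for bad, good in product(hallucinated, source_entities)]

def select_best(candidates: list[str], source: str) -> str:
    """Toy faithfulness score: prefer candidates whose tokens appear in the source."""
    def score(cand: str) -> float:
        tokens = cand.replace(".", "").split()
        return sum(tok in source for tok in tokens) / len(tokens)
    return max(candidates, key=score)

source = "Apple reported quarterly revenue of 90 billion dollars."
summary = "Google reported quarterly revenue of 90 billion dollars."

# "Google" never appears in the source, so it is flagged as hallucinated
# (here by hand; a real system would detect this automatically).
candidates = generate_candidates(summary, ["Google"], ["Apple"])
print(select_best(candidates, source))
# → Apple reported quarterly revenue of 90 billion dollars.
```

The key design point mirrored here is that correction is post hoc and model-agnostic: candidates are built only from material already present in the source document, so the selected summary cannot introduce new extrinsic content.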
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.