Our work enriches the Stack-LSTM transition-based AMR parser (Ballesteros and Al-Onaizan, 2017) by augmenting training with policy learning, rewarding the Smatch score of sampled graphs. In addition, we combine several AMR-to-text alignments with an attention mechanism and supplement the parser with pre-processed concept identification, named entities, and contextualized embeddings. We achieve highly competitive performance, comparable to the best published results, and present an in-depth study ablating each of the new components of the parser.
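The policy-learning idea above can be illustrated with a minimal REINFORCE-style sketch. Everything here is a hypothetical stand-in: the two-action "policy" and the 0/1 reward replace the actual Stack-LSTM parser and the Smatch scorer, which are far more involved.

```python
import math
import random

# Toy sketch of policy-gradient training that rewards sampled outputs,
# mirroring the idea of rewarding the Smatch score of sampled graphs.
# The "parser" here is just a softmax over two actions (hypothetical).

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample_action(probs, rng):
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def reinforce_step(logits, gold_action, rng, lr=0.5):
    """One policy-gradient step: sample an action, score it against the
    gold action (a stand-in for Smatch), and increase the log-probability
    of the sampled action in proportion to the reward."""
    probs = softmax(logits)
    a = sample_action(probs, rng)
    reward = 1.0 if a == gold_action else 0.0  # stand-in for Smatch
    # Gradient of log p(a) w.r.t. the logits is one_hot(a) - probs.
    return [l + lr * reward * ((1.0 if i == a else 0.0) - p)
            for i, (l, p) in enumerate(zip(logits, probs))]

rng = random.Random(0)
logits = [0.0, 0.0]
for _ in range(200):
    logits = reinforce_step(logits, gold_action=1, rng=rng)
probs = softmax(logits)
```

After training, the policy concentrates its probability mass on the rewarded action, which is the same mechanism that steers the parser toward transition sequences yielding high-Smatch graphs.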
We introduce doc2dial, a new dataset of goal-oriented dialogues that are grounded in associated documents. Inspired by how authors compose documents to guide end users, we first construct dialogue flows based on content elements that correspond to higher-level relations across text sections as well as lower-level relations between discourse units within a section. We then present these dialogue flows to crowd contributors to create conversational utterances. The dataset includes over 4,500 annotated conversations with an average of 14 turns, grounded in over 450 documents from four domains. Compared to prior document-grounded dialogue datasets, this dataset covers a variety of dialogue scenes in information-seeking conversations. To evaluate the versatility of the dataset, we introduce multiple dialogue modeling tasks and present baseline approaches.
We propose MultiDoc2Dial, a new task and dataset for modeling goal-oriented dialogues grounded in multiple documents. Most previous work treats document-grounded dialogue modeling as a machine reading comprehension task based on a single given document or passage. In this work, we aim to address more realistic scenarios in which a goal-oriented information-seeking conversation involves multiple topics and is hence grounded in different documents. To facilitate such a task, we introduce a new dataset that contains dialogues grounded in multiple documents from four different domains. We also explore modeling the dialogue-based and document-based context in the dataset. We present strong baseline approaches and various experimental results, aiming to support further research efforts on this task.
Explaining neural network models is important for increasing their trustworthiness in real-world applications. Most existing methods generate post-hoc explanations for neural network models by identifying individual feature attributions or detecting interactions between adjacent features. However, for models that take text pairs as inputs (e.g., paraphrase identification), existing methods cannot sufficiently capture feature interactions between the two texts, and their simple extension of computing all word-pair interactions between two texts is computationally inefficient. In this work, we propose the Group Mask (GMASK) method, which implicitly detects word correlations by grouping correlated words from the input text pair together and measuring their contribution to the corresponding NLP task as a whole. The proposed method is evaluated with two different model architectures (the decomposable attention model and BERT) across four datasets, including natural language inference and paraphrase identification tasks. Experiments show the effectiveness of GMASK in providing faithful explanations for these models.
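The mask-and-measure intuition behind group-based attribution can be sketched with a toy example. Note the caveats: the scoring function and the brute-force ablation below are hypothetical stand-ins; the actual GMASK learns group masks implicitly rather than ablating hand-picked groups.

```python
# Toy illustration of measuring a cross-text word group's contribution:
# remove the group from both texts and see how much a (hypothetical)
# paraphrase score drops. The real GMASK detects correlated groups
# implicitly; this explicit ablation only conveys the intuition.

def toy_score(words_a, words_b):
    # Hypothetical "paraphrase" score: Jaccard overlap of word sets.
    shared = set(words_a) & set(words_b)
    union = set(words_a) | set(words_b)
    return len(shared) / max(len(union), 1)

def group_contribution(words_a, words_b, group):
    """Score drop when a cross-text word group is masked out of both texts."""
    base = toy_score(words_a, words_b)
    keep_a = [w for w in words_a if w not in group]
    keep_b = [w for w in words_b if w not in group]
    return base - toy_score(keep_a, keep_b)

a = "the cat sat on the mat".split()
b = "a cat rested on a mat".split()
contribution = group_contribution(a, b, {"cat", "mat"})
```

A group of correlated content words shared by both texts ({"cat", "mat"}) yields a positive contribution, i.e., masking it hurts the pair score, which is the signal a group-mask explanation surfaces.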