In dialog systems, dialog act recognition and sentiment classification are two correlated tasks for capturing speakers' intentions, where dialog acts and sentiment indicate explicit and implicit intentions, respectively (Kim and Kim 2018). Most existing systems either treat them as separate tasks or jointly model them only implicitly, by sharing parameters, without explicitly modeling their mutual interaction. To address this problem, we propose a Deep Co-Interactive Relation Network (DCR-Net) that explicitly considers the cross-impact between the two tasks and models their interaction through a co-interactive relation layer. In addition, the proposed relation layer can be stacked to gradually capture mutual knowledge over multiple interaction steps. In particular, we thoroughly study different relation layers and their effects. Experimental results on two public datasets (Mastodon and Dailydialog) show that our model outperforms the state-of-the-art joint model by 4.3% and 3.4% F1 on dialog act recognition, and by 5.7% and 12.4% on sentiment classification, respectively. Comprehensive analysis empirically verifies the effectiveness of explicitly modeling the relation between the two tasks and of the multi-step interaction mechanism. Finally, we employ Bidirectional Encoder Representations from Transformers (BERT) in our framework, which further boosts performance on both tasks.
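The co-interactive layer described in this abstract could be sketched roughly as follows, under simplifying assumptions not stated there (plain dot-product attention, additive fusion, toy list-based vectors; all function names are illustrative, not the paper's):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys):
    """Dot-product attention: pool `keys` weighted by similarity to `query`."""
    weights = softmax([sum(q * k for q, k in zip(query, key)) for key in keys])
    return [sum(w * key[d] for w, key in zip(weights, keys))
            for d in range(len(query))]

def co_interactive_layer(act_reps, sent_reps):
    """One relation layer: each task's utterance representations attend to the
    OTHER task's representations, and the pooled context is fused additively.
    Both updates read the original inputs, so the interaction is symmetric."""
    new_act = [[a + c for a, c in zip(rep, attend(rep, sent_reps))]
               for rep in act_reps]
    new_sent = [[s + c for s, c in zip(rep, attend(rep, act_reps))]
                for rep in sent_reps]
    return new_act, new_sent

def stacked_relation_layers(act_reps, sent_reps, num_layers=3):
    """Stack the relation layer to capture mutual knowledge step by step."""
    for _ in range(num_layers):
        act_reps, sent_reps = co_interactive_layer(act_reps, sent_reps)
    return act_reps, sent_reps
```

In the actual model the fused representations would feed task-specific classifiers; the sketch only shows the cross-task information flow.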
This paper describes our system (HIT-SCIR) for the CoNLL 2019 shared task, Cross-Framework Meaning Representation Parsing. We extended the basic transition-based parser with two improvements: (a) efficient training, by implementing parallel training for the stack LSTM; and (b) effective encoding, by adopting deep contextualized word embeddings from BERT (Devlin et al., 2019). Overall, we propose a unified pipeline for meaning representation parsing, comprising framework-specific transition-based parsers, BERT-enhanced word representations, and post-processing. In the final evaluation, our system ranked first by ALL-F1 (86.2%) and, in particular, first on the UCCA framework (81.67%).
In real-world scenarios, users often express multiple intents in the same utterance. Unfortunately, most spoken language understanding (SLU) models either focus on the single-intent scenario or simply incorporate an overall intent context vector for all tokens, ignoring fine-grained integration of multiple-intent information for token-level slot prediction. In this paper, we propose an Adaptive Graph-Interactive Framework (AGIF) for joint multiple intent detection and slot filling, in which an intent-slot graph interaction layer models the strong correlation between slots and intents. This interaction layer is applied to each token adaptively, automatically extracting the relevant intent information and enabling fine-grained intent integration for token-level slot prediction. Experimental results on three multi-intent datasets show that our framework obtains substantial improvements and achieves state-of-the-art performance. In addition, it achieves new state-of-the-art performance on two single-intent datasets.
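As a rough illustration of the adaptive, token-level interaction idea (not the paper's actual architecture), each token's slot representation can attend over the embeddings of the predicted intents, so different tokens draw on different intents. The additive fusion and all names here are illustrative assumptions:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def intent_slot_interaction(token_reps, intent_embs):
    """For each token, adaptively aggregate the predicted intents' embeddings
    with per-token attention weights, then fuse the pooled intent context
    into the token's slot representation (additive fusion for simplicity)."""
    fused = []
    for tok in token_reps:
        weights = softmax([sum(t * i for t, i in zip(tok, emb))
                           for emb in intent_embs])
        context = [sum(w * emb[d] for w, emb in zip(weights, intent_embs))
                   for d in range(len(tok))]
        fused.append([t + c for t, c in zip(tok, context)])
    return fused
```

Because the attention weights are computed per token, a token aligned with one intent (e.g. a slot value for a flight query) can down-weight the other intents, which is the "adaptive" aspect the abstract highlights.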
Research has indicated that serious clinical illness may lead to posttraumatic growth (PTG). However, little is known about PTG among hemodialysis (HD) patients. This study examined the relationships among resilience, rumination, and PTG in Chinese HD patients. A total of 196 HD patients were recruited from a tertiary hospital in a northern Chinese city between 1 June 2015 and 30 May 2016. Patients were surveyed using the Posttraumatic Growth Inventory-Chinese version, the Connor-Davidson Resilience Scale, and the Chinese Event Related Rumination Inventory. Correlation analyses showed that resilience was strongly positively correlated with PTG (r = .70, p < .001), deliberate rumination was moderately positively correlated with PTG (r = .50, p < .001), and intrusive rumination was weakly negatively correlated with PTG (r = -.26, p < .001). Regression analyses showed that age, gender, duration of dialysis, resilience, and deliberate rumination were significantly associated with PTG (β = -.31, p < .0001; β = -.14, p = .002; β = .10, p = .032; β = .44, p < .001; β = .20, p < .001), together explaining 65% of the total variance in PTG (F(8, 195) = 46.74, p < .001). However, intrusive rumination was not associated with PTG (p > .05). The results suggest that resilience and deliberate rumination may be instrumental for improving PTG.
The purpose of this article is to illustrate and demonstrate the use of the Cultural Genogram (CG) in a graduate-level course in gender and culture for family therapists-in-training at a large Midwestern university's accredited program in family therapy. Although the importance of the CG as a training tool is delineated by Hardy and Laszloffy, very little information exists about the actual implementation and usefulness of this tool within a training program for family therapists. In this article, we present a qualitative research study of the lived experiences of a class of women from diverse cultures as they constructed and presented their CGs. We discuss the basic curriculum and structure of the course in which the CG was used, the process the class members developed to create and present their CGs, the effects of presenting the CGs, and a set of recommendations and ideas for further exploration.
Natural Questions is a challenging new machine reading comprehension benchmark with answers at two granularities: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer). Despite the effectiveness of existing methods on this benchmark, they treat the two sub-tasks separately during training, ignoring their dependencies. To address this issue, we present a novel multi-grained machine reading comprehension framework that models documents at their natural hierarchy of granularities: documents, paragraphs, sentences, and tokens. We use graph attention networks to obtain representations at each level so that they can be learned simultaneously. The long and short answers are then extracted from the paragraph-level and token-level representations, respectively. In this way, the dependencies between the two answer granularities can be modeled so that each provides evidence for the other. We jointly train the two sub-tasks, and our experiments show that our approach significantly outperforms previous systems on both the long- and short-answer criteria.
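A toy bottom-up sketch of the hierarchical idea follows, assuming mean-initialized parent nodes, plain dot-product attention, and a residual mix; this is an illustration of multi-granularity aggregation, not the paper's trained graph attention network:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def graph_attention(node, neighbors):
    """One attention step: update `node` with an attention-weighted sum of
    its child nodes, keeping a residual connection to the node itself."""
    weights = softmax([sum(n * h for n, h in zip(node, nb)) for nb in neighbors])
    pooled = [sum(w * nb[d] for w, nb in zip(weights, neighbors))
              for d in range(len(node))]
    return [0.5 * n + 0.5 * p for n, p in zip(node, pooled)]

def hierarchical_reps(paragraphs):
    """`paragraphs` is a list of paragraphs, each a list of sentences, each a
    list of token vectors. Returns (doc_rep, para_reps, sent_reps), where each
    level attends over the level below it (parents initialized to the mean)."""
    def mean(vs):
        return [sum(v[d] for v in vs) / len(vs) for d in range(len(vs[0]))]
    sent_reps, para_reps = [], []
    for para in paragraphs:
        sents = [graph_attention(mean(sent), sent) for sent in para]
        sent_reps.extend(sents)
        para_reps.append(graph_attention(mean(sents), sents))
    doc_rep = graph_attention(mean(para_reps), para_reps)
    return doc_rep, para_reps, sent_reps
```

Long-answer scoring would read `para_reps` and short-answer extraction would read the token vectors, which is how the two granularities share evidence.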
Table-based fact verification requires both linguistic reasoning and symbolic reasoning, but existing methods pay little attention to combining linguistic and symbolic information. In this work, we propose HeterTFV, a graph-based reasoning approach that learns to combine the two effectively. We first construct a program graph to encode programs, a kind of LISP-like logical form, and learn the semantic compositionality of the programs. We then construct a heterogeneous graph that incorporates both linguistic and symbolic information by introducing program nodes alongside the linguistic nodes. Finally, we reason over the multiple node types to combine the two types of information effectively. Experimental results on TABFACT, a large-scale benchmark dataset, demonstrate the effectiveness of our approach.
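To make the program-graph step concrete, here is a minimal sketch of turning a LISP-like logical form into operator and argument nodes with operator-to-argument edges; the example program and function names are illustrative assumptions, not taken from the paper:

```python
def tokenize(program):
    """Split a LISP-like string into parenthesis and symbol tokens."""
    return program.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    """Recursively parse tokens into nested (operator, args) tuples."""
    tok = tokens.pop(0)
    if tok == '(':
        op = tokens.pop(0)
        args = []
        while tokens[0] != ')':
            args.append(parse(tokens))
        tokens.pop(0)  # consume ')'
        return (op, args)
    return tok  # atom

def program_to_graph(program):
    """Flatten the parse tree into a node list and operator->argument edges.
    These program nodes could then be merged into a heterogeneous graph
    alongside linguistic nodes (statement and table-cell nodes)."""
    nodes, edges = [], []
    def visit(tree):
        if isinstance(tree, tuple):
            op, args = tree
            idx = len(nodes)
            nodes.append(op)
            for arg in args:
                edges.append((idx, visit(arg)))
            return idx
        idx = len(nodes)
        nodes.append(tree)
        return idx
    visit(parse(tokenize(program)))
    return nodes, edges
```

For example, `(and (eq a b) (gt c 1))` yields one node per operator and atom, with edges linking `and` to its two sub-programs and each comparison to its arguments.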