Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.192

TSDG: Content-aware Neural Response Generation with Two-stage Decoding Process

Abstract: Neural response generation models have achieved remarkable progress in recent years but still tend to yield irrelevant and uninformative responses. One reason is that encoder-decoder based models always use a single decoder to generate a complete response in one pass. This tends to produce high-frequency function words, which carry little semantic information, rather than low-frequency content words, which carry more. To address this issue, we propose a content-aware model with a two-stage decoding process n…
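The two-stage idea sketched in the abstract (first decide on content words, then realize the full response around them) can be pictured with the rough Python sketch below. This is a hedged illustration, not the paper's actual architecture: the class name, the GRU encoder and decoders, the mean-pooled content context, and all dimensions are assumptions made purely for exposition.

import torch
import torch.nn as nn

class TwoStageGenerator(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.content_decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.response_decoder = nn.GRU(hidden * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, query_ids, max_content=5, max_len=20, bos_id=1):
        # Encode the dialogue query; the final hidden state seeds both decoders.
        _, h = self.encoder(self.embed(query_ids))

        # Stage 1: greedily decode a short sequence of content words.
        content_ids = []
        tok = torch.full((query_ids.size(0), 1), bos_id, dtype=torch.long)
        h1 = h
        for _ in range(max_content):
            o, h1 = self.content_decoder(self.embed(tok), h1)
            tok = self.out(o).argmax(-1)
            content_ids.append(tok)
        content = torch.cat(content_ids, dim=1)
        # Summarize the predicted content words into a single context vector.
        content_ctx = self.embed(content).mean(dim=1, keepdim=True)

        # Stage 2: decode the full response, feeding the content summary at every step.
        response_ids = []
        tok = torch.full((query_ids.size(0), 1), bos_id, dtype=torch.long)
        h2 = h
        for _ in range(max_len):
            step_in = torch.cat([self.embed(tok), content_ctx], dim=-1)
            o, h2 = self.response_decoder(step_in, h2)
            tok = self.out(o).argmax(-1)
            response_ids.append(tok)
        return content, torch.cat(response_ids, dim=1)

# Toy usage with untrained weights, just to show the two-stage data flow.
model = TwoStageGenerator(vocab_size=1000)
query = torch.randint(0, 1000, (2, 7))      # a batch of 2 queries, 7 token ids each
content_words, response = model(query)
print(content_words.shape, response.shape)  # torch.Size([2, 5]) torch.Size([2, 20])

The only point of the sketch is the control flow: the stage-1 output is produced first and then conditions every stage-2 decoding step, which is what allows content words to be decided before function words are filled in.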

Cited by 4 publications (2 citation statements)
References 15 publications

“…After defining and clarifying the research objectives, the types of objects, and the object templates, we study knowledge extraction techniques based on domain pre-training models to develop a knowledge architecture built primarily on tree structures with auxiliary attribute classification, together with a knowledge association network built primarily on triplets [4]. By researching and applying techniques such as topic modeling, label relation mining, semantic representation, and multi-level knowledge integration, we construct an integrated system for identifying and annotating knowledge objects at different levels of granularity, including the phrase, sentence, and chapter levels, to meet the requirements for annotating and extracting general fine-grained knowledge objects and domain-specific knowledge.…”
Section: Establishing a Standard Knowledge Base Framework Based On Re…
Mentioning confidence: 99%
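The two structures this passage names (a tree of knowledge objects with auxiliary attribute classification, and a triplet-based association network) can be pinned down with the minimal sketch below. All class and field names are illustrative assumptions, not the cited work's actual schema.

from dataclasses import dataclass, field

@dataclass
class KnowledgeObject:
    # One node in the tree-structured knowledge architecture.
    name: str
    granularity: str                                 # "phrase", "sentence", or "chapter"
    attributes: dict = field(default_factory=dict)   # auxiliary attribute classification
    children: list = field(default_factory=list)     # sub-objects at finer granularity

@dataclass
class Triplet:
    # One edge in the triplet-based knowledge association network.
    head: str
    relation: str
    tail: str

# Toy example: a chapter-level object containing a sentence-level object,
# plus one association edge linking two extracted concepts.
chapter = KnowledgeObject("Example chapter", "chapter", attributes={"source": "corpus-A"})
chapter.children.append(KnowledgeObject("An example sentence-level object", "sentence"))
associations = [Triplet("concept A", "related-to", "concept B")]
print(chapter)
print(associations)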
“…We remove stopwords and then obtain the Part-Of-Speech (POS) feature for each word. For a content word, the POS tag should be a noun, verb, adjective, or adverb [23].…”
Section: First Stage: Visual Impression Construction
Mentioning confidence: 99%
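The preprocessing step described in this citing passage (drop stopwords, then keep only words whose POS tag marks them as content words) can be sketched as below. This is a minimal illustration assuming NLTK and standard Penn Treebank tag prefixes; the cited work's exact tokenizer, tagger, and stopword list are not specified here.

import nltk
from nltk.corpus import stopwords

# Fetch the resources quietly; both old and new NLTK resource names are tried.
for res in ("punkt", "punkt_tab", "averaged_perceptron_tagger",
            "averaged_perceptron_tagger_eng", "stopwords"):
    nltk.download(res, quiet=True)

def extract_content_words(sentence):
    stop = set(stopwords.words("english"))
    # Tokenize, drop non-alphabetic tokens and stopwords.
    tokens = [t for t in nltk.word_tokenize(sentence.lower())
              if t.isalpha() and t not in stop]
    # Keep only nouns (NN*), verbs (VB*), adjectives (JJ*), and adverbs (RB*).
    return [w for w, tag in nltk.pos_tag(tokens)
            if tag.startswith(("NN", "VB", "JJ", "RB"))]

print(extract_content_words("I really like the small coffee shop near the station."))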