2021
DOI: 10.1109/access.2021.3133495
SEQ2SEQ++: A Multitasking-Based Seq2seq Model to Generate Meaningful and Relevant Answers


Cited by 9 publications (4 citation statements)
References 42 publications
“…Meanwhile, the GRU model is easy to train and its efficiency can be greatly improved. Hence, the GRU structure is widely used in natural language processing [39] and feature classification [40]. It has also been used in traffic condition estimation [41] and car-following behavior [8].…”
Section: Methodology, A. Preliminaries, 1) Seq2seq Model (mentioning)
confidence: 99%
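The GRU structure referenced above can be made concrete with a minimal numpy sketch of the standard GRU cell equations (update gate, reset gate, candidate state) driving a seq2seq-style encoder. This is a hypothetical illustration of the textbook formulation, not code from the cited paper; all names (`GRUCell`, `encode`) and sizes are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell implementing the standard gate equations."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = hidden_size
        # Stacked weights for the update (z), reset (r), and candidate gates.
        self.W = rng.standard_normal((3, s, input_size)) * 0.1
        self.U = rng.standard_normal((3, s, s)) * 0.1
        self.b = np.zeros((3, s))

    def step(self, x, h):
        z = sigmoid(self.W[0] @ x + self.U[0] @ h + self.b[0])  # update gate
        r = sigmoid(self.W[1] @ x + self.U[1] @ h + self.b[1])  # reset gate
        h_tilde = np.tanh(self.W[2] @ x + self.U[2] @ (r * h) + self.b[2])
        # Interpolate between the old state and the candidate state.
        return (1.0 - z) * h + z * h_tilde

def encode(cell, xs):
    """Run the GRU over an input sequence; return the final hidden state
    (the fixed-length context vector a seq2seq decoder would consume)."""
    h = np.zeros(cell.b.shape[1])
    for x in xs:
        h = cell.step(x, h)
    return h
```

With a single gating mechanism fewer than the LSTM, the GRU has fewer parameters per hidden unit, which is the efficiency advantage the statement alludes to.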
“…Cai, Tian, and Kazemnejad et al. [6], [7], [43] treat retrieval and generation as disjoint components and train them separately, but this means additional data is needed. Multi-task learning that jointly optimizes the retrieval and generation steps has also been explored [44]. Unlike other studies that largely focus on improving the generation component, Wu et al. [45] propose improving the performance of the retrieval component through entity alignment.…”
Section: Related Work (mentioning)
confidence: 99%
“…Some attempts have been made to improve model efficiency and implement multiple processes by building multiple models or deploying multi-task learning (MTL) in neural networks [27], [28]. In recent years, an increasing number of studies have described applications of MTL in various domains, including Natural Language Processing (NLP) [18], [19] and computer vision [23], [29], allowing the model to share low-level features and thereby better achieve its task objectives. However, improvement is needed for their application to parallel processing, which requires more independent analysis rather than correlations for sub-group behavior analysis.…”
Section: Related Work (mentioning)
confidence: 99%
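The low-level feature sharing described in this statement is commonly realized as hard parameter sharing: one shared trunk feeds several task-specific heads. A minimal numpy sketch, with all shapes and names being assumptions rather than anything from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hard parameter sharing: one shared projection learns low-level features
# once, and each task-specific head reuses them.
W_shared = rng.standard_normal((16, 8)) * 0.1  # shared feature extractor
W_task_a = rng.standard_normal((8, 3)) * 0.1   # head for task A (e.g. classification)
W_task_b = rng.standard_normal((8, 1)) * 0.1   # head for task B (e.g. regression)

def forward(x):
    shared = np.tanh(x @ W_shared)  # features computed once, shared by both heads
    return shared @ W_task_a, shared @ W_task_b

x = rng.standard_normal((4, 16))    # batch of 4 inputs
out_a, out_b = forward(x)
```

Gradients from both task losses would flow into `W_shared`, which is what couples the tasks; the statement's caveat is that strongly independent sub-tasks may not benefit from this coupling.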