2020
DOI: 10.1016/j.neucom.2019.07.110
DTC: Transfer learning for commonsense machine comprehension

Cited by 9 publications (5 citation statements)
References 2 publications
“…It should be noted that a relatively small number of dataset samples has a common basis in benchmarking due to expensive annotation or the need for expert competencies. Unlike datasets for machine-reading comprehension, such as MultiRC (Khashabi et al, 2018) and ReCoRD (Zhang et al, 2018), the GLUE-style datasets for learning choice of alternatives, logic, and causal relationships are often represented by a smaller number of manually collected and verified samples. They are by design sufficient for the human type of generalization but often pose a challenge for the tested LMs.…”
Section: Discussion
confidence: 99%
“…Reference [34] used regularized approximations of decision trees to guide deep neural network learning. Reference [35] proposed an interpretable CNN for end-to-end learning, adding prior constraints on the filters so that, after training, each filter automatically regresses to a specific object part (such as a bird's head, beak, or legs) and the parts are separated in the top convolutional layer. The representation of the neural network is then refined into a decision tree structure [45]: each decision mode hidden in the CNN's fully connected layers is encoded from coarse to fine, and the decision tree is used to approximate the final decision result.…”
Section: Methods
confidence: 99%
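The distillation idea in the statement above — fitting a decision tree to a trained network's outputs so the tree approximates its decision function — can be sketched generically. This is only an illustrative surrogate-tree sketch: an MLP on synthetic data stands in for the CNN, and all hyperparameters are arbitrary choices, not the cited papers' actual procedure.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary task: label depends on the sum of the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# "Teacher" network (a small MLP standing in for the CNN in the cited work).
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X, y)

# Surrogate decision tree trained on the *network's* predictions, so it
# approximates the network's decision function rather than the raw labels.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, net.predict(X))

# Fidelity: how often the shallow tree agrees with the network it mimics.
fidelity = (tree.predict(X) == net.predict(X)).mean()
```

A shallow tree cannot match an oblique boundary exactly, but its axis-aligned splits give a coarse-to-fine, human-readable approximation of the network's behavior, which is the trade-off the cited work exploits.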
“…Based on the answer type, they differentiate cloze answer (the question is a sentence with a missing word which has to be inserted, e.g. ReCoRD [28]), selective or multiple choice (a number of options is given, and the correct one(s) should be selected, e.g. MultiRC [9]), boolean (a yes/no answer is expected, e.g.…”
Section: Related Work
confidence: 99%
“…Zhang et al. [28] extracted their examples (more than 120 000 entries) from the CNN/Daily Mail corpus to create the Reading Comprehension with Commonsense Reasoning (ReCoRD) dataset. These news articles were divided into multiple units: passage, cloze-style query (containing the masked entity) and the reference answer.…”
Section: Related Work
confidence: 99%
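The passage / cloze-query / reference-answer structure described in the statement above can be sketched as a minimal container. Field names and the `@placeholder` mask token here are illustrative assumptions, not the dataset's exact schema.

```python
from dataclasses import dataclass, field

@dataclass
class ClozeExample:
    """One ReCoRD-style entry: a news passage, a cloze query with a
    masked entity, and the accepted reference answer(s).
    Field names and the mask token are assumptions for illustration."""
    passage: str
    query: str                       # contains the "@placeholder" mask token
    answers: list = field(default_factory=list)

    def fill(self, candidate: str) -> str:
        # Substitute a candidate entity into the cloze query.
        return self.query.replace("@placeholder", candidate)

    def is_correct(self, candidate: str) -> bool:
        return candidate in self.answers

# Hypothetical entry (invented text, not from the dataset):
ex = ClozeExample(
    passage="The city council of Springfield approved the new budget ...",
    query="@placeholder approved the new budget.",
    answers=["The city council", "city council"],
)
filled = ex.fill("city council")   # "city council approved the new budget."
```

A model then scores each candidate entity from the passage by how well the filled-in query is supported, which is what makes the cloze format a selection task rather than free-form generation.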