2022 26th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr56361.2022.9956441

ISD-QA: Iterative Distillation of Commonsense Knowledge from General Language Models for Unsupervised Question Answering

Cited by 3 publications (1 citation statement)
References 22 publications
“…Some approaches, such as KagNet [41], KTL [37], MHGRN [42], QAGNN [38], OCN [43], KEAR [44], and KnowledgePath [39], to name a few, have integrated commonsense knowledge found in symbolic knowledge bases, such as ConceptNet and ATOMIC, into neural networks using knowledge-injection techniques (such as attention and graph neural networks) to enhance performance on CNLI tasks through supervised learning. Other approaches leverage the knowledge captured in large language models, such as BERT [6], as supervision for CNLI using different mechanisms, such as consistency optimization [45], question rewriting [46], and leveraging the autoregressive pretraining objective to rank answer options [47], [48]. Our approach falls under this category of models that reduce the requirements for annotated training data for commonsense NLI.…”
Section: Related Work
confidence: 99%
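The citation statement above mentions, among other unsupervised mechanisms, "leveraging the autoregressive pretraining objective to rank answer options." Below is a minimal sketch of that general idea, not the cited papers' or ISD-QA's actual method: each answer option is scored by the log-likelihood a pretrained autoregressive language model assigns to it as a continuation of the question, and the highest-scoring option is selected without any fine-tuning. The model choice (gpt2), the helper name option_log_likelihood, and the example question are illustrative assumptions.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def option_log_likelihood(question: str, option: str) -> float:
    """Sum of token log-probabilities the LM assigns to the option, conditioned on the question."""
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    option_ids = tokenizer(" " + option, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, option_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Score only the option tokens: logits at position t predict the token at position t+1.
    option_len = option_ids.shape[1]
    log_probs = torch.log_softmax(logits[0, -option_len - 1:-1], dim=-1)
    token_scores = log_probs.gather(1, option_ids[0].unsqueeze(1)).squeeze(1)
    return token_scores.sum().item()

# Illustrative zero-shot ranking of answer options by LM likelihood.
question = "Where would you most likely find a penguin?"
options = ["Antarctica", "the desert", "a rainforest"]
ranked = sorted(options, key=lambda o: option_log_likelihood(question, o), reverse=True)
print(ranked[0])  # option the pretrained LM considers most plausible

One known caveat of this kind of scoring is a bias toward shorter or higher-frequency options, which is why works in this area often normalize by option length or compare against an unconditional baseline score.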