2022
DOI: 10.48550/arxiv.2210.02498
Preprint

Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model

Abstract: Explainable question answering systems should produce not only accurate answers but also rationales that justify their reasoning and allow humans to check their work. But what sorts of rationales are useful and how can we train systems to produce them? We propose a new style of rationale for open-book question answering, called markup-and-mask, which combines aspects of extractive and free-text explanations. In the markup phase, the passage is augmented with free-text markup that enables each sentence to stand…
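Because the abstract describes the pipeline only at a high level, the following is a minimal sketch of how a markup-and-mask style rationale pipeline could be wired together. The component names (markup_model, mask_model, qa_model) and the per-sentence decontextualization loop are assumptions for illustration, not the authors' actual implementation.

```python
# A minimal sketch of a markup-and-mask style rationale pipeline, based only on
# the abstract above. The three callables are hypothetical stand-ins for
# trained components; their interfaces are assumptions, not the paper's API.
from typing import Callable, List, Tuple


def answer_with_rationale(
    question: str,
    passage_sentences: List[str],
    markup_model: Callable[[str, List[str]], str],       # adds free-text markup so a sentence can stand alone
    mask_model: Callable[[str, List[str]], List[bool]],   # selects which marked-up sentences to keep
    qa_model: Callable[[str, List[str]], str],            # answers from the selected rationale only
) -> Tuple[str, List[str]]:
    # Markup phase: decontextualize each sentence with free-text markup,
    # conditioning on the sentences that precede it in the passage.
    marked_up = [
        markup_model(sent, passage_sentences[:i])
        for i, sent in enumerate(passage_sentences)
    ]

    # Mask phase: keep only the marked-up sentences selected as the rationale.
    keep = mask_model(question, marked_up)
    rationale = [s for s, k in zip(marked_up, keep) if k]

    # Answer from the rationale alone, so a human can check the reasoning by
    # inspecting the selected, self-contained sentences.
    answer = qa_model(question, rationale)
    return answer, rationale
```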

Cited by 3 publications (6 citation statements)
References 26 publications

“…Snell et al (2022) demonstrated the usefulness of providing instruction that can help models achieve better reasoning skills. Similar to our hypothesis, Eisenstein et al (2022) argued that question-answering systems should focus not only on the final answer, but also on the rationale that justifies their reasoning, to help them reason better. We go beyond this; in our work, in addition to the question-answering system, we also focus on what questions need to be asked at each step that can help to learn that reasoning step better.…”
Section: Related Work (supporting)
confidence: 67%
“…Examples of these tasks include math problems [38,88,101,145,158], blurry text recognition [92,93], and matching images with the same content [74,109]. For these tasks, workflows can improve performance on benchmarks [31,42,51,74,87,107,126,145,156], enable new tasks [47,50,58,62,93], classify examples for which machine learning approaches struggle [4,15,25,109], improve instructional clarity [17], and reduce hallucinations [32,33,108,110,111,116,140]. Verifiable creativity.…”
Section: Outcome (mentioning)
confidence: 99%
“…One workflow may employ multiple architectural patterns. Sequential architectures order subtasks in a chain in which the output of one worker passes forward to another worker [42,45,56,57,97,110,122,135,142,143,158].…”
Section: Workflow Architecture (mentioning)
confidence: 99%