Findings of the Association for Computational Linguistics: NAACL 2022
DOI: 10.18653/v1/2022.findings-naacl.10

In-BoXBART: Get Instructions into Biomedical Multi-Task Learning

Abstract: Single-task models have proven pivotal in solving specific tasks; however, they have limitations in real-world applications where multitasking is necessary and domain shifts are exhibited. Recently, instructional prompts have shown significant improvement towards multitask generalization; however, the effect of instructional prompts and Multi-Task Learning (MTL) has not been systematically studied in the biomedical domain. Motivated by this, this paper explores the impact of instructional prompts for biomedical…

Cited by 16 publications (17 citation statements)
References 26 publications
“…Design strategies for improving code generation from LLMs. LLMs are capable of learning and generating instructions, and breaking down or decomposing a large task into smaller subtasks has consistently been found to improve the performance of LLMs [26,37,75,83,84,115,119,121,122]. Jayagopal et al. [42] examine the usability of program synthesizers, including GitHub Copilot, by novices, finding that novices struggle with task decomposition and recommending that designers offer scaffolding for it.…”
Section: Natural Language Programming (mentioning)
confidence: 99%
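
As a rough illustration of the decomposition strategy this statement refers to, the sketch below breaks a large code-generation request into smaller subtask prompts whose outputs are chained. The call_llm helper and the specific prompt wording are hypothetical placeholders, not anything taken from the cited works.

# Sketch of the decomposition strategy discussed above, applied to code
# generation: a large request is split into smaller subtask prompts whose
# outputs are then combined. `call_llm` is a hypothetical placeholder for
# whatever completion API or local model is actually used.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in a real LLM client here")

def generate_program(task: str) -> str:
    # Step 1: ask for a plan, i.e. the decomposition into subtasks.
    plan = call_llm(f"Break this programming task into 3-5 small steps:\n{task}")
    # Step 2: solve each subtask with its own focused prompt.
    snippets = [
        call_llm(f"Write a Python function for this step only:\n{step}")
        for step in plan.splitlines() if step.strip()
    ]
    # Step 3: merge the partial solutions.
    return call_llm(
        "Combine these functions into one coherent module:\n" + "\n\n".join(snippets)
    )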
“…Instruction-based zero-shot learning is an innovative approach that leverages natural language instructions and definitions to enable neural models to solve a variety of tasks [50,6,55,18,89,44,45,49]. By providing a human-readable prompt, this method enables easier and more efficient specification of the learning task, using knowledge about the task without requiring training data.…”
Section: B3 Detailed Related Work (mentioning)
confidence: 99%
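
A minimal sketch of the instruction-based zero-shot setup described above, assuming a Hugging Face seq2seq checkpoint: the task is specified purely through a human-readable instruction plus the input instance, with no task-specific head or training data. The checkpoint name and the example instruction are placeholders; an instruction-tuned model such as In-BoXBART would be substituted to obtain meaningful zero-shot behavior.

# Minimal sketch of instruction-based zero-shot prompting with a seq2seq model.
# The checkpoint name is a placeholder, not the paper's released model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "facebook/bart-large"  # placeholder; swap in an instruction-tuned model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# The task is specified entirely through a human-readable instruction plus the
# input instance.
instruction = (
    "Definition: Given a biomedical sentence, list all disease mentions, "
    "separated by commas, or output 'None' if there are none."
)
instance = "Input: The patient was diagnosed with type 2 diabetes and hypertension."
prompt = f"{instruction}\n{instance}\nOutput:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))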
“…Following this, FLAN [13] was proposed, which uses instructions to achieve generalization across unseen tasks. Recently, Parmar et al. [14] proposed instruction learning for biomedical multi-task learning. In addition, Mishra et al. [25] show that reframing instructional prompts can boost both few-shot and zero-shot model performance.…”
Section: Instruction Learning (mentioning)
confidence: 99%
“…This dataset also serves to evaluate the generalization of a model, a well-known weakness of many language models even though they outperform humans on many popular benchmarks [10,11]. Recently, instruction learning [12,13,14] has improved model performance on unseen tasks. Inspired by this, we leverage instruction-tuning to build a model and verify whether instruction learning also shows stronger generalization on BioTabQA.…”
Section: Introduction (mentioning)
confidence: 99%
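
As a rough sketch of the instruction-tuning setup these statements refer to, the snippet below flattens several biomedical tasks into one instruction-prefixed text-to-text dataset so that a single seq2seq model can be fine-tuned on all of them jointly. The task definitions and field names are illustrative assumptions, not the paper's actual meta-dataset.

# Sketch: flattening several tasks into one instruction-prefixed text-to-text
# dataset for multi-task instruction-tuning. Task definitions and field names
# are illustrative placeholders.
tasks = [
    {
        "definition": "Given a biomedical sentence, list all chemical mentions.",
        "instances": [
            {"input": "Aspirin reduced fever in the cohort.", "output": "Aspirin"},
        ],
    },
    {
        "definition": "Answer the yes/no question using the given abstract.",
        "instances": [
            {"input": "Question: Is metformin used for type 2 diabetes? "
                      "Abstract: Metformin is a first-line therapy for type 2 diabetes.",
             "output": "yes"},
        ],
    },
]

def to_text_pairs(tasks):
    """Yield (source, target) pairs where the source embeds the instruction."""
    for task in tasks:
        for ex in task["instances"]:
            source = f"Definition: {task['definition']}\nInput: {ex['input']}\nOutput:"
            yield source, ex["output"]

# Each pair can then be tokenized and fed to an ordinary seq2seq trainer
# (e.g. BART) exactly like a single-task dataset, which is what keeps the
# multi-task setup simple: the instruction, not the architecture, carries
# the task identity.
for src, tgt in to_text_pairs(tasks):
    print(src, "->", tgt)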