2023
DOI: 10.48550/arxiv.2301.08771
Preprint

Matching Exemplar as Next Sentence Prediction (MeNSP): Zero-shot Prompt Learning for Automatic Scoring in Science Education

Abstract: Developing natural language processing (NLP) models to automatically score students' written responses to science problems is critical for science education. However, collecting sufficient student responses and labeling them for training or fine-tuning NLP models is time-consuming and costly. Recent studies suggest that large-scale pre-trained language models (PLMs) can be adapted to downstream tasks without fine-tuning by using prompts. However, no research has employed such a prompt approach in science education…
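The title suggests the scoring idea: treat matching a student response against a scored exemplar as a next-sentence-prediction (NSP) task, so a pre-trained model can score responses without fine-tuning. Below is a minimal sketch of that idea using a generic Hugging Face Transformers setup; the checkpoint, exemplar, and student response are placeholders, not the authors' data or released code.

import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

# Load a generic pre-trained BERT with its NSP head (placeholder checkpoint).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical exemplar (sentence A) and student response (sentence B).
exemplar = "The ice melts because heat transfers from the warm water to the ice."
response = "Heat moves from the water into the ice, so the ice melts."

encoding = tokenizer(exemplar, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**encoding).logits  # shape [1, 2]; index 0 = "B follows A"

# Read the NSP probability as an exemplar-match signal for scoring.
match_prob = torch.softmax(logits, dim=-1)[0, 0].item()
print(f"Exemplar-match probability: {match_prob:.3f}")

A response could then be assigned the score of the exemplar it matches most strongly; how MeNSP actually selects exemplars and aggregates matches is specified in the paper itself, not in this sketch.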

Cited by 2 publications (2 citation statements)
References 36 publications (51 reference statements)
“…Specifically, these LLMs, such as Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al, 2018) and Generative Pre-trained Transformer (GPT) (Brown et al, 2020), utilise deep learning and self-attention mechanisms (Vaswani et al, 2017) to selectively attend to the different parts of input texts, depending on the focus of the current tasks, allowing the model to learn complex patterns and relationships among textual contents, such as their semantic, contextual, and syntactic relationships (Liu et al, 2023;Min et al, 2021). As several LLMs (eg, GPT-3 and Codex) have been pre-trained on massive amounts of data across multiple disciplines, they are capable of completing natural language processing tasks with little (few-shot learning) or no additional training (zero-shot learning) (Brown et al, 2020;Wu et al, 2023). This could lower the technological barriers to LLMs-based innovations as researchers and practitioners can develop new educational technologies by fine-tuning LLMs on specific educational tasks without starting from scratch (Caines et al, 2023;Sridhar et al, 2023).…”
Section: Introduction (mentioning)
confidence: 99%
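The statement quoted above distinguishes few-shot from zero-shot use of pre-trained LLMs. The following illustration of that difference for a scoring task is hypothetical; the question, answers, and scores are invented and do not come from the cited papers.

# Zero-shot: the model sees only the task instruction and the item to score.
zero_shot_prompt = (
    "Score the following student answer as 0 (incorrect) or 1 (correct).\n"
    "Question: Why does ice melt in warm water?\n"
    "Answer: Heat moves from the water into the ice.\n"
    "Score:"
)

# Few-shot: the same instruction, preceded by a handful of scored exemplars.
few_shot_prompt = (
    "Score each student answer as 0 (incorrect) or 1 (correct).\n"
    "Answer: Ice melts because it is cold.\nScore: 0\n"
    "Answer: Heat transfers from the warm water to the ice.\nScore: 1\n"
    "Answer: Heat moves from the water into the ice.\nScore:"
)

# Either prompt would be sent to a generative LLM (e.g., GPT-3) and the
# completion parsed as the predicted score; no gradient updates are involved.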
“…The AI community has been making progress in these directions. For example, few-shot learning (Wu et al, 2023) and fine-tuned pre-trained large language models (Liu et al, 2023) are promising approaches for learning from limited data. With advances in AI interpretability, we may be able to understand how AI makes decisions and then modify them to better serve our students.…”
Section: AI Can Provide the Tools and Time Urgently Needed by Teachers (mentioning)
confidence: 99%