Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.484
Math Word Problem Generation with Mathematical Consistency and Problem Context Constraints

Abstract: We study the problem of generating arithmetic math word problems (MWPs) given a math equation that specifies the mathematical computation and a context that specifies the problem scenario. Existing approaches are prone to generating MWPs that are either mathematically invalid or have unsatisfactory language quality. They also either ignore the context or require manual specification of a problem template, which compromises the diversity of the generated MWPs. In this paper, we develop a novel MWP generation ap…

Cited by 17 publications (7 citation statements)
References 52 publications (45 reference statements)
“…Beyond the generation of the correct answer, transformer models are also able to create distractor answers, as needed for the generation of multiple-choice questionnaires [42,43]. Bringing language models to mathematics education, several works discuss the automatic generation of math word problems [44,45,46], which combines the challenge of understanding equations and putting them into the appropriate context.…”
Section: Review Of Research Applying Large Language
confidence: 99%
“…Language models can also be used in generating synthetic EHR data. PromptEHR uses prompt learning for conditional generation of synthetic data (Wang and Sun 2022). Different from previous methods, we model the logical associations between various types of medical events with multivisit health state inference.…”
Section: Synthetic EHR Generation
confidence: 99%
“…We compare MSIC to mainstream synthetic EHR generation methods as baselines. For medical event synthesis, the baselines include: (1) LSTM+MLP, a straightforward sequential prediction model that yields event probabilities across multiple visits; (2) MedGAN (Choi et al. 2017), a pioneer in using GANs for synthetic EHR generation; (3) MedBGAN & MedWGAN (Baowaly et al. 2019), which take advantage of Boundary-seeking GAN (Hjelm et al. 2018) and Wasserstein GAN (Adler and Lunz 2018) to improve generation performance; (4) EVA (Biswal et al. 2021), a VAE-based model featuring a BiLSTM encoder and CNN decoder; (5) MTGAN (Lu et al. 2023), a time-series generation model via conditional GAN; and (6) PromptEHR (Wang and Sun 2022), which employs prompt learning for conditional EHR generation.…”
Section: Experiments: Experimental Setup
confidence: 99%
“…MWP Generation: Early MWP generation methods are mostly template-based, including Answer Set Programming (ASP) (Polozov et al., 2015) and schema and frame semantics (Singley and Bennett, 2002; Deane, 2003). With the development of deep learning frameworks, ; Wang et al. (2021) generate problem text given equation templates and keywords, where the keywords are extracted from the gold MWP via heuristic rules. Their model is trained end-to-end in a Seq2seq manner and integrates features of templates and keywords in the decoding phase.…”
Section: Related Work
confidence: 99%