2022
DOI: 10.1145/3563327

Neurosymbolic repair for low-code formula languages

Abstract: Most users of low-code platforms, such as Excel and PowerApps, write programs in domain-specific formula languages to carry out nontrivial tasks. Often users can write most of the program they want, but introduce small mistakes that yield broken formulas. These mistakes, which can be both syntactic and semantic, are hard for low-code users to identify and fix, even though they can be resolved with just a few edits. We formalize the problem of producing such edits as the last-mile repair …

Cited by 5 publications (12 citation statements)

References 30 publications
“…We evaluate FLAME's ability to perform last-mile formula repair (Bavishi et al 2022), completion, and similar formula retrieval. Prior work in the repair domain includes DeepFix (Gupta et al 2017), BIFI (Yasunaga and Liang 2021), DrRepair (Yasunaga and Liang 2020), TFix (Berabi et al 2021), and RING (Joshi et al 2022), which use deep learning to perform syntax, compilation, or diagnostics repair in general-purpose programming languages.…”
Section: Related Work
confidence: 99%
“…Last-mile repair refers to repairs that require few edits and fix syntax and simple semantic errors, such as wrong function call arity (Bavishi et al 2022). In this setting, FLAME is given the buggy formula as the input sequence, and the task is to generate the intended (valid) formula.…”
Section: Last-mile Repair
confidence: 99%
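
The statement above pins down the task shape: buggy formula in, intended valid formula out, within a small edit budget. As a hedged illustration only (this is not FLAME's learned model; the single repair rule is a toy assumption), the Python sketch below shows one last-mile edit class, rebalancing parentheses in an Excel-style formula:

def paren_depth(formula: str) -> int:
    """Net open-parenthesis depth; positive means unclosed '('."""
    depth = 0
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
    return depth

def last_mile_repair(formula: str) -> str:
    """One illustrative last-mile edit: append the missing ')'s."""
    return formula + ")" * max(paren_depth(formula), 0)

print(last_mile_repair("=SUM(A1:A10"))       # -> =SUM(A1:A10)
print(last_mile_repair("=IF(A1>0,SUM(B:B"))  # -> =IF(A1>0,SUM(B:B))

A real repair engine would rank many candidate edit classes (insertions, deletions, argument fixes) and keep candidates that parse and type-check; the hard-coded edit here is purely for exposition.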
“…When querying Codex-Complete, we use three few-shot examples selected from D_shot, an annotated dataset of examples comprising buggy programs and desired feedback obtained by expert annotations (see Section 4.2). These annotated examples essentially provide a context to LLMCs and have been shown to play an important role in optimizing the generated output (e.g., see [1,2,17,18,32]). In our case, D_shot provides contextualized training data, capturing the format of how experts/tutors give explanations.…”
Section: Stage-2: Generating Explanation
confidence: 99%
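
As a rough sketch of the few-shot setup that statement describes (the pool contents, the prompt layout, and names like shot_pool are illustrative assumptions, not the paper's actual D_shot or prompt format):

# Hypothetical pool of (buggy program, expert feedback) annotations.
shot_pool = [
    ("print 'hi'", "Python 3 requires parentheses: print('hi')."),
    ("def f(x) return x", "A ':' is missing after the def header."),
    ("for i in range(3) print(i)", "A ':' is needed after the for header."),
]

def build_prompt(buggy_program: str, k: int = 3) -> str:
    """Prepend k annotated examples, then the new buggy program."""
    parts = []
    for program, feedback in shot_pool[:k]:
        parts.append(f"Buggy program:\n{program}\nFeedback: {feedback}\n")
    parts.append(f"Buggy program:\n{buggy_program}\nFeedback:")
    return "\n".join(parts)

# The resulting string would be sent as the prompt to a code LLM.
print(build_prompt("while x < 10\n    x += 1"))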
“…However, even state-of-the-art models fine-tuned on a specific class of programming tasks still require a costly filtering step where the LLM outputs that do not compile or pass tests are discarded [10]. These outputs tend to be superficially similar to correct solutions [11] despite failing to produce the expected output, a phenomenon known as "near miss syndrome" or "last mile problem" [12].…”
Section: Introduction
confidence: 99%
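
The filtering step mentioned there can be pictured with a small sketch (the candidate list and the test are hypothetical; parsing stands in for compilation):

import ast

def compiles(src: str) -> bool:
    """Cheap stand-in for compilation: does the source parse?"""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

def passes_tests(src: str) -> bool:
    """Execute the candidate and check one expected behavior."""
    scope: dict = {}
    try:
        exec(src, scope)
        return scope["add"](2, 3) == 5
    except Exception:
        return False

candidates = [
    "def add(a, b): return a - b",   # compiles but fails the test
    "def add(a, b) return a + b",    # syntax error: discarded early
    "def add(a, b): return a + b",   # survives both filters
]

kept = [c for c in candidates if compiles(c) and passes_tests(c)]
print(kept)  # -> ['def add(a, b): return a + b']

Near-miss outputs like the first candidate are exactly what makes the filter costly: they parse cleanly and only fail under execution.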