2022
DOI: 10.48550/arxiv.2212.09248
Preprint

Natural Language to Code Generation in Interactive Data Science Notebooks

Cited by 2 publications (1 citation statement)
References 0 publications
“…Many benchmarks have focused on code generation in APIs. Benchmarks like DS-1000 (Lai et al., 2023), ARCADE (Yin et al., 2022), NumpyEval, and PandasEval (Jain et al., 2022) focus on data science APIs. Other benchmarks target broader APIs or general software engineering tasks, such as JuICe (Agashe et al., 2019), APIBench (Patil et al., 2023), RepoBench, ODEX (Wang et al., 2022b), SWE-Bench (Jimenez et al., 2023), GoogleCodeRepo (Shrivastava et al., 2023), RepoEval, and Cocomic-Data.…”
Section: Code Generation
Citation type: mentioning
Confidence: 99%