Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering 2022
DOI: 10.1145/3551349.3559548
Automatic Code Documentation Generation Using GPT-3

Cited by 40 publications (9 citation statements)
References 36 publications
“…This was sufficient for our study: an explanation failure occurred only 13 times out of 159 queries in the grounded condition, on average 1.08 times per user. We do not claim that our algorithm is the most effective, and future work could explore alternative ways of producing grounded utterances, such as leveraging the LLM itself [45,72]. Our system is limited in assuming a single well-defined relational data table.…”
Section: Limitations
Mentioning confidence: 97%
“…Recently, a new generation of LLMs demonstrated the capability to understand and generate both natural languages and computer languages. GPT-3 was examined in writing code explanations [52], documentation [44], and providing feedback for assignments [3]. Soon, educators started to believe that Codex could be used to solve simple programming problems [27,90].…”
Section: Related Work 2.1 LLMs for Computational Programming and Modeling
Mentioning confidence: 99%
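The citation above notes that GPT-3 was examined for writing code explanations and documentation. As a minimal illustrative sketch (the prompt template, the `build_doc_prompt` helper, and the example function are assumptions for illustration, not taken from the cited papers), generating documentation with an LLM typically starts by wrapping the source code in a natural-language prompt:

```python
def build_doc_prompt(source_code: str) -> str:
    """Assemble a prompt asking a GPT-3-class model to write a
    docstring for the given function (illustrative template only)."""
    return (
        "Write a concise docstring for the following Python function.\n\n"
        f"{source_code}\n\n"
        "Docstring:"
    )

# Hypothetical function we want documented.
snippet = "def add(a, b):\n    return a + b"
prompt = build_doc_prompt(snippet)
```

The resulting prompt string would then be sent to a completion endpoint of the chosen model; the network call is deliberately omitted here, since the exact API and parameters vary by provider and model version.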
“…The research community explored GLLMs for coding tasks across various languages, such as Java [251], [252], [255], [260], [263], [264], [266], [267], [269], [270], Python [253], [254], [256]- [258], [260], [262], [263], [265], [267], [268], [271], PHP [260], Go [260], Ruby [260], JavaScript [260], C [261], [268], C++ [259], [268], Julia [268], and MATLAB [268]. Most research works focused on Python and Java, while a few focused on other languages such as PHP, Go, Ruby, JavaScript, C, C++, Julia, and MATLAB.…”
Section: Research Work Exploring GLLMs for Various Coding Tasks
Mentioning confidence: 99%
“…In all the research works discussed above, the performance of GLLMs on various coding tasks is promising but still lags behind SOTA results. Some research works [251], [254], [260], [262], [266] demonstrated that GLLMs can achieve SOTA results in coding tasks. Xia et al [251] proposed ChatRepair, an automatic program repair tool based on ChatGPT.…”
Section: Research Work Exploring GLLMs for Various Coding Tasks
Mentioning confidence: 99%