2019 IEEE/ACM 27th International Conference on Program Comprehension (ICPC)
DOI: 10.1109/icpc.2019.00054
Recommending Comprehensive Solutions for Programming Tasks by Mining Crowd Knowledge

Abstract: Developers often search for relevant code examples on the web for their programming tasks. Unfortunately, they face two major problems. First, the search is impaired due to a lexical gap between their query (task description) and the information associated with the solution. Second, the retrieved solution may not be comprehensive, i.e., the code segment might miss a succinct explanation. These problems make the developers browse dozens of documents in order to synthesize an appropriate solution. To address the…
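The "lexical gap" mentioned in the abstract can be illustrated with a toy example. The query, the answer text, and the word-overlap measure below are illustrative assumptions, not taken from the paper: a query and a relevant answer may share no terms at all, so a purely lexical matcher treats them as unrelated even though they describe the same task.

```python
# Toy illustration of the lexical gap between a task query and a relevant answer.
# The strings and the word-level Jaccard measure are illustrative only.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

query = "how to play audio in java"
answer = "use javax.sound.sampled.Clip and call start()"

# The two share no tokens, so a purely lexical matcher ranks this relevant answer poorly.
print(jaccard(query, answer))  # 0.0
```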


Cited by 36 publications (48 citation statements)
References 41 publications
“…The second experiment relies on post links and is used to provide proof that our system is effective (and minimize possible threats to validity). Finally, for our third experiment, we compare StackSearch to the tool CROKAGE [24], which is quite similar to our system. Comparing StackSearch with other approaches was not possible, since several systems are not maintained and/or they are not publicly available (to facilitate researchers with similar challenges, we uploaded our code at https://github.com/AuthEceSoftEng/StackSearch).…”
Section: Discussion
confidence: 99%
“…This system, however, is also largely based on synthesizing API calls and does not focus on semantic retrieval. Finally, an even more recent system is CROKAGE [24], which employs embeddings and further expands the query with relevant API classes from Stack Overflow. The final results are ranked according to multiple factors, including their lexical and semantic similarity with the query and their similar API usage.…”
Section: Related Work
confidence: 99%
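The ranking this excerpt describes — combining lexical similarity, embedding-based semantic similarity with the query, and similarity of API usage — can be sketched roughly as below. The weights, helper functions, and the assumption of pre-computed embedding vectors are illustrative; this is not CROKAGE's actual scoring formula.

```python
# Rough sketch of a multi-factor ranking in the spirit of the excerpt:
# score = weighted sum of lexical similarity, semantic (embedding) similarity,
# and overlap of referenced API classes. All weights and helpers are assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class Candidate:
    text: str              # answer text (code plus explanation)
    api_classes: set[str]  # API classes referenced in the answer


def lexical_sim(query: str, text: str) -> float:
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0


def semantic_sim(query_vec: np.ndarray, text_vec: np.ndarray) -> float:
    # Cosine similarity between pre-computed embedding vectors
    # (e.g., mean word embeddings); the embedding model itself is assumed.
    denom = np.linalg.norm(query_vec) * np.linalg.norm(text_vec)
    return float(query_vec @ text_vec / denom) if denom else 0.0


def api_sim(query_apis: set[str], cand_apis: set[str]) -> float:
    if not query_apis or not cand_apis:
        return 0.0
    return len(query_apis & cand_apis) / len(query_apis | cand_apis)


def rank(query, query_vec, query_apis, candidates, cand_vecs,
         w_lex=0.3, w_sem=0.5, w_api=0.2):
    """Return candidates sorted by the combined (weighted) score, best first."""
    scored = [
        (w_lex * lexical_sim(query, c.text)
         + w_sem * semantic_sim(query_vec, v)
         + w_api * api_sim(query_apis, c.api_classes), c)
        for c, v in zip(candidates, cand_vecs)
    ]
    return [c for _, c in sorted(scored, key=lambda p: p[0], reverse=True)]
```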
“…Soliman et al's [15] approach on the other hand is more domain-specific, focusing on how architects search for architecturally relevant information on Stack Overflow. CROKAGE by Silva et al [16] takes the description of a programming task and provides a comprehensive solution for this task by searching multiple threads. In contrast, our work focuses on the navigation of a single thread, with the goal of identifying navigational cues.…”
Section: Related Work
confidence: 99%
“…Recall that participants answer this question at the end of the survey, allowing them to reflect on all sentences they evaluated. The set of codes we use, along with the number of instances per code in parentheses is as follows: direct solution (16), explanation (13), relevance of info (11), code/lib (6), well-written (5), SO info (2), alternate solution (1), and warning (1). Our inter-rater agreement kappa score is 0.86.…”
Section: RQ4: Helpful Navigation Information
confidence: 99%