Proceedings of the 2008 International Workshop on Recommendation Systems for Software Engineering
DOI: 10.1145/1454247.1454254

On evaluating recommender systems for API usages

Abstract: To ease framework understanding, tools have been developed that analyze existing framework instantiations to extract API usage patterns and present them to the user. However, detailed quantitative evaluations of such recommender systems are lacking. In this paper we present an automated evaluation process which extracts queries and expected results from existing code bases. This enables the validation of recommendation systems with large test beds in an objective manner by means of precision and recall measure…
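To make the evaluation process concrete, the following is a minimal sketch, assuming a hypothetical Recommender interface and set-valued recommendations, of how queries and expected results could be derived from existing code and scored with precision and recall; it is an illustration of the idea, not the paper's implementation.

```java
// A minimal sketch, not taken from the paper: known API calls are held out of
// an existing usage, the remainder is issued as a query, and the recommendations
// are scored against the held-out calls with precision and recall.
import java.util.HashSet;
import java.util.Set;

public class RecommenderEvaluation {

    /** Hypothetical recommender interface; the evaluated tools' APIs may differ. */
    interface Recommender {
        Set<String> recommend(Set<String> partialUsage);
    }

    /** Precision: fraction of recommended calls that were actually expected. */
    static double precision(Set<String> recommended, Set<String> expected) {
        if (recommended.isEmpty()) return 0.0;
        Set<String> hits = new HashSet<>(recommended);
        hits.retainAll(expected);
        return (double) hits.size() / recommended.size();
    }

    /** Recall: fraction of expected calls that were recommended. */
    static double recall(Set<String> recommended, Set<String> expected) {
        if (expected.isEmpty()) return 0.0;
        Set<String> hits = new HashSet<>(recommended);
        hits.retainAll(expected);
        return (double) hits.size() / expected.size();
    }

    /**
     * Builds one query/expected-result pair from a complete usage found in an
     * existing code base: the held-out calls are the expected answer, the rest
     * is the query handed to the recommender.
     */
    static double[] evaluateOne(Recommender r, Set<String> fullUsage, Set<String> heldOut) {
        Set<String> query = new HashSet<>(fullUsage);
        query.removeAll(heldOut);
        Set<String> recommended = r.recommend(query);
        return new double[] { precision(recommended, heldOut), recall(recommended, heldOut) };
    }
}
```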

Cited by 13 publications (13 citation statements), spanning 2009 to 2021
References 14 publications

Citation statements, ordered by relevance:
“…Similar limitations apply to qualitative approaches that manually evaluate the result of applying the tool to some selected tasks, see Bruch et al. [1] for a discussion.…”
Section: Discussion (mentioning)
Confidence: 96%

“…It has been suggested to evaluate recommendation systems by replaying recorded IDE sessions [11], or by querying the code base for pairs of questions and answers [1]. For example, for each name in the source code we could extract a query with its type and context (but not its name) and then expect the name as the correct answer.…”
Section: Discussion (mentioning)
Confidence: 99%
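A minimal sketch of the query-extraction idea quoted above, assuming that a query pairs a declared type and its enclosing context with the held-back name as the expected answer; the record types and the extraction source are illustrative and not taken from any of the cited tools.

```java
// Hedged sketch of extracting query/answer pairs for name recommendation:
// for each variable declaration found in a code base, the declared type and
// enclosing method form the query, and the original name is the expected answer.
import java.util.List;

public class NameQueryExtraction {

    /** A query/answer pair: type and context are visible, the name is held back. */
    record NameQuery(String declaredType, String enclosingMethod, String expectedName) {}

    /** Hypothetical representation of a variable declaration mined from the code base. */
    record VariableDeclaration(String type, String method, String name) {}

    /** Turns every declaration into a query whose correct answer is the original name. */
    static List<NameQuery> extractQueries(List<VariableDeclaration> declarations) {
        return declarations.stream()
                .map(d -> new NameQuery(d.type(), d.method(), d.name()))
                .toList();
    }

    /** A recommender is counted as correct if its top suggestion matches the held-back name. */
    static boolean isHit(NameQuery query, String topSuggestion) {
        return query.expectedName().equals(topSuggestion);
    }
}
```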
“…Standard evaluation techniques that refer directly to the performance metrics [22] related to time, storage requirements, and computation complexity are used to evaluate the performance of mROSE.…”
Section: mROSE Evaluation (mentioning)
Confidence: 99%

“…Typically, most filtering algorithms can be divided into two separate steps [22]: first, there is a model-building step, usually executed off-line, followed by a second execution step, which is always executed on-line. Preprocessing, representation, calculation, and recommendation/prediction generation, which appear in most of the discussed filtering algorithms, can be seen as part of the off-line step.…”
Section: Computation Complexity (mentioning)
Confidence: 99%
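The off-line/on-line split described in the statement above can be illustrated with a rough sketch of a co-occurrence-based filtering recommender; this is an assumption-laden illustration, not mROSE's actual design, and all class and method names are invented for the example.

```java
// Illustrative two-phase filtering recommender: model building (preprocessing,
// representation, calculation) runs off-line over mined API usages, while
// prediction generation for a concrete query runs on-line.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TwoPhaseRecommender {

    // Model built off-line: co-occurrence counts between API calls.
    private final Map<String, Map<String, Integer>> cooccurrence = new HashMap<>();

    /** Off-line step: build the co-occurrence model from previously observed usages. */
    void buildModel(List<Set<String>> usages) {
        for (Set<String> usage : usages) {
            for (String a : usage) {
                for (String b : usage) {
                    if (!a.equals(b)) {
                        cooccurrence.computeIfAbsent(a, k -> new HashMap<>())
                                    .merge(b, 1, Integer::sum);
                    }
                }
            }
        }
    }

    /** On-line step: generate the top-k predictions for the calls already in the query. */
    List<String> recommend(Set<String> query, int k) {
        Map<String, Integer> scores = new HashMap<>();
        for (String call : query) {
            cooccurrence.getOrDefault(call, Map.of()).forEach((other, count) -> {
                if (!query.contains(other)) {
                    scores.merge(other, count, Integer::sum);
                }
            });
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }
}
```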
“…Several factors, such as the current location in the code, e.g., whether the developer is currently working in the control flow of a framework method [6], or the availability of certain other variables in the current scope, affect the relevance of a method.…”
Section: Introduction (mentioning)
Confidence: 99%