Proceedings of the 13th International Conference on Software Engineering - ICSE '08 2008
DOI: 10.1145/1368088.1368092
Answering conceptual queries with Ferret

Cited by 46 publications (42 citation statements)
References 34 publications
“…We plan to enhance its filters and the visual presentation of changes to run a user study in which we can measure whether and how much Replay can help developers answer program comprehension questions. To accomplish that, we plan to ask subjects to answer some of the questions cataloged in previous work [33,5,23,20,35], and compare their performance using Replay against using commonly adopted differencing algorithms for SCM revisions.…”
Section: Discussion
confidence: 99%
“…Previous studies have provided catalogs of such questions, of which two focus on questions related to source code [33,5], and three explore questions related to development activities [23,20,35]. For our case study, we concentrate on answering a subset of the questions (shown in Table 2) raised by Fritz and Murphy [35], which cover a broad set of development activities.…”
Section: Case Study
confidence: 99%
“…From an abstract IFT perspective, these cues aim to engender highly accurate scent S_ji in the minds of predators. Similar concepts, like Ferret [De Alwis and Murphy 2008] and Information Fragments [Fritz and Murphy 2010], make it easier for the programmer to ask and answer particular questions by using a context-sensitive view that identifies relationships between different artifacts as well as people in a project. In particular, information fragments can display links to artifacts in response to a "What code files is someone working on?"…”
Section: Debugging
confidence: 99%
“…Additionally, queries used in this phase were based on templates created after conducting experiments with professional programmers to identify standard and useful software engineering questions that programmers tend to ask when evolving a code base (de Alwis and Murphy, 2008; Sillito et al., 2006, 2008). The 50 queries were generated to include ones with varying levels of complexity: simple ones such as 'Which methods have the declared return class x?'…”
Section: Semantic Evaluation At Large Scale (SEALS) - Search
confidence: 99%