2007
DOI: 10.1007/978-3-540-73066-8_16
Utilising Code Smells to Detect Quality Problems in TTCN-3 Test Suites

Abstract: Today, test suites of several tens of thousands of lines of code are specified using the Testing and Test Control Notation (TTCN-3). Experience shows that the resulting test suites suffer from quality problems with respect to internal quality aspects like usability, maintainability, or reusability. Therefore, a quality assessment of TTCN-3 test suites is desirable. A powerful approach to detecting quality problems in source code is the identification of code smells. Code smells are patterns of inappropriate lan…

Cited by 27 publications (22 citation statements)
References 7 publications
“…For example, in Figure 1, the petal in the ↑ direction (DATA CLUMPS) shows a strong smell, whereas the next petal to its left (FEATURE ENVY) shows a weaker smell. This is in contrast with smell visualizations that use a threshold, such as TRex [17] and CodeNose [27], which don't report smells at all if their metrics fall below a threshold. We made this design decision because we suspect that code smells are highly subjective; if we had chosen a threshold, it would probably differ from the programmer's preferred threshold, with the consequence that the tool will either miss smells that the programmer might want to see (false-negatives), or over-emphasize smells that the programmer would rather ignore (false-positives).…”
Section: Ambient View
confidence: 90%
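The design trade-off quoted above, thresholded reporting (as in TRex or CodeNose) versus continuous intensity display (as in Stench Blossom's petals), can be illustrated with a small sketch. All names and the metric itself are invented for illustration; neither tool computes smells exactly this way.

```python
# Illustrative sketch (hypothetical): contrasting threshold-based smell
# reporting with continuous intensity reporting. The "data clumps" metric
# here is a crude stand-in: the number of parameters a function takes.

def smell_metrics(functions):
    """Map each function name to its parameter count (the toy metric)."""
    return {name: len(params) for name, params in functions.items()}

def threshold_report(metrics, threshold):
    """Threshold style: smells below the cutoff are not reported at all,
    so weak smells become false negatives."""
    return {name: m for name, m in metrics.items() if m >= threshold}

def continuous_report(metrics, scale=10.0):
    """Continuous style: every smell gets an intensity in [0, 1],
    so weak smells are shown faintly instead of hidden entirely."""
    return {name: min(m / scale, 1.0) for name, m in metrics.items()}

functions = {
    "setup_link": ["addr", "port", "timeout", "retries", "proto"],
    "send_ping":  ["addr"],
    "close_link": ["handle", "flush"],
}

metrics = smell_metrics(functions)
print(threshold_report(metrics, threshold=4))  # weak smells vanish
print(continuous_report(metrics))              # weak smells stay visible
```

With a cutoff of 4, only `setup_link` is reported; the continuous view instead assigns every function a non-zero intensity, leaving the judgment about what matters to the programmer, which is exactly the subjectivity argument the excerpt makes.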
“…Our choice to use progressive disclosure contrasts with other smell detectors, such as Parnin and colleague's Noseprints tool [21], that display a single visualization of code smells. However, many existing smell detectors, especially ones that underline code that contains smells [6,17,27,29], do include a basic form of progressive disclosure: they allow the user to mouse-over an underlined piece of code to see the name of a smell that that code is exhibiting. Stench Blossom takes this technique one step further in Explanation View.…”
Section: Active View
confidence: 99%
“…Later, van Deursen et al introduced the term test smells by applying the smell metaphor to test code [6]. Since then, their initial set of test smells has been extended [2], [9], [10]. In [7], we enhanced these test smells with additional fixture-related smells, derived metrics to aid in their detection, and implemented a technique to automatically detect test fixture smells in a tool called TestHound.…”
Section: Test Smells
confidence: 99%
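The excerpt mentions deriving metrics to detect fixture-related test smells automatically. A minimal sketch of that idea, using an invented simplification of one such smell ("General Fixture": a setup that initialises more than individual tests use) rather than TestHound's actual analysis:

```python
# Illustrative sketch (hypothetical): a metric-driven detector for the
# "General Fixture" test smell. A test is flagged when it reads less than
# a given fraction of the fields the shared fixture sets up. This is an
# invented simplification, not TestHound's real technique.

def general_fixture_smell(fixture_fields, tests, min_usage=0.5):
    """Return tests touching less than `min_usage` of the fixture fields.
    `tests` maps a test name to the set of fixture fields it reads."""
    flagged = {}
    for name, used in tests.items():
        ratio = len(used & fixture_fields) / len(fixture_fields)
        if ratio < min_usage:
            flagged[name] = round(ratio, 2)
    return flagged

fixture = {"connection", "parser", "logger", "cache"}
tests = {
    "test_parse_ok":  {"parser"},
    "test_roundtrip": {"connection", "parser", "cache"},
}
print(general_fixture_smell(fixture, tests))
```

Here `test_parse_ok` uses only one of four fixture fields and is flagged, while `test_roundtrip` uses three and passes; a real detector would of course derive field usage from static analysis of the test code rather than from hand-written sets.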
“…One can learn from Baker et al [36], who provide metrics and refactoring specifically for TTCN-3 test specifications. The test code smells for TTCN-3 by Neukirchen and Bisanz [37] is also highly interesting input. Both papers are targeting existing test artifacts to measure and improve test code.…”
Section: Related Work
confidence: 99%