Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks 2010
DOI: 10.1145/1791212.1791235
KleeNet

Abstract: Complex interactions and the distributed nature of wireless sensor networks make automated testing and debugging before deployment a necessity. A main challenge is to detect bugs that occur due to non-deterministic events, such as node reboots or packet duplicates. Often, these events have the potential to drive a sensor network and its applications into corner-case situations, exhibiting bugs that are hard to detect using existing testing and debugging techniques. In this paper, we present KleeNet, a debugging…

Cited by 98 publications (11 citation statements)
References 43 publications
“…In addition, despite exploring all code paths, symbolic execution does not explore all system execution paths, such as different event interleavings. Techniques exist that can add artificial branching points to a program to inject faults or explore different event orderings [21,25], but at the expense of extra complexity. As such, symbolic execution is insufficient for testing OpenFlow applications.…”
Section: Background on Symbolic Execution (mentioning)
confidence: 99%
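The "artificial branching points" described in this statement can be illustrated with a minimal sketch. The `deliver` and `explore` helpers below are hypothetical, introduced only for illustration and not part of KleeNet's actual interface: each injected fault flag (here, a packet drop) forks execution, so an exhaustive enumeration covers every combination of delivered and dropped messages.

```python
# Minimal sketch of fault injection via artificial branching points.
# Each injected boolean "fault flag" forks execution, so enumerating
# all assignments covers the failure paths a plain symbolic executor
# (which only branches on program input) would miss.
from itertools import product

def deliver(seq, dropped):
    # Hypothetical protocol step: outcome of one send attempt.
    return "lost" if dropped else "acked"

def explore(num_fault_points):
    # Fork at every injected branching point: one path per
    # assignment of the fault flags.
    outcomes = set()
    for faults in product([False, True], repeat=num_fault_points):
        trace = tuple(deliver(i, f) for i, f in enumerate(faults))
        outcomes.add(trace)
    return outcomes

# Two send attempts yield 2^2 = 4 distinct fault scenarios.
print(sorted(explore(2)))
```

The cost noted in the statement is visible here: the explored state space doubles with each added branching point, which is the "extra complexity" the authors trade for coverage of non-deterministic events.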
“…While model checking [18,17,15,16,19] and symbolic execution [28,20,21] are automatic techniques, a drawback is that they typically require a closed system, i.e., a system (model) together with its environment. Typically, the creation of such an environment is a manual process (e.g., [25]). NICE re-uses the idea of model checking (systematic state-space exploration) and combines it with the idea of symbolic execution (exhaustive path coverage) to avoid pushing the burden of modeling the environment onto the user.…”
Section: Related Work (mentioning)
confidence: 99%
“…Simulators, such as TOSSIM, and emulators, such as Avrora [Titzer et al. 2005] or COOJA [Österlind et al. 2006], enable the testing of complete sensornet systems and protocol suites at scale. Similarly, testbeds like Motelab [Werner-Allen et al. 2005], Kansei [Ertin et al. 2006], or our own ones [Iwanicki et al. 2008; Michalowski et al. 2012] are invaluable for assessing the performance of complete systems or self-contained subsystems and protocols on actual sensornet hardware.…”
Section: Related Work (mentioning)
confidence: 99%
“…Conversely, formal methods, like symbolic execution [Sasnauskas et al. 2010] and model checking [Li and Regehr 2010], do enable exhaustively analyzing the control flow paths of a system. In effect, they are indispensable for identifying software flaws due to unpredicted interactions between various modules of the system.…”
Section: Related Work (mentioning)
confidence: 99%