IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
DOI: 10.1109/infcomw.2019.8845160
OpenBenchmark: Repeatable and Reproducible Internet of Things Experimentation on Testbeds

Abstract: Experimentation on testbeds with Internet of Things (IoT) devices is hard. Tedious firmware development, the lack of user interfaces, the stochastic nature of the radio channel, and the testbed learning curve are some of the factors that make the evaluation process error-prone. The impact of such errors on published results can be quite unfortunate, leading to incorrect conclusions and false common wisdom. The selection of experiment conditions or performance metrics to evaluate one's own proposal may not lead to pe…

Cited by 6 publications (6 citation statements) · References 13 publications
“…In this section, we provide a comparison of five testbeds under consideration. Specifically, we consider the following key points: (1) architecture, (2) resources available for wireless experiments, (3) operating system which can be deployed, (4) IoT capabilities, (5) limitations, (6) automatic configuration, (7) SDN/OpenFlow capability, (8) data that can be collected, and (9) machine learning capability. The comparison is shown in Table 1 and is explained in the following subsections.…”
Section: Theoretical Comparison of the Testbeds (mentioning)
confidence: 99%
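The nine comparison dimensions quoted above lend themselves to a simple structured encoding. Below is a minimal Python sketch of such an encoding; the class, its field names, and the sample entry are illustrative assumptions, not taken from the cited survey or its Table 1.

```python
from dataclasses import dataclass

@dataclass
class TestbedProfile:
    """One row of a testbed-comparison table; field names are illustrative."""
    name: str
    architecture: str               # (1) e.g. "federated", "single-site"
    wireless_resources: list[str]   # (2) radios available for experiments
    operating_systems: list[str]    # (3) OSes that can be deployed
    iot_capable: bool               # (4) IoT device support
    limitations: str                # (5) known constraints
    auto_configuration: bool        # (6) automatic node configuration
    sdn_openflow: bool              # (7) SDN/OpenFlow support
    collectable_data: list[str]     # (8) measurements the testbed exports
    machine_learning: bool          # (9) on-platform ML support

# Hypothetical entry, for illustration only -- not taken from the survey's table.
example = TestbedProfile(
    name="example-testbed",
    architecture="multi-site, federated",
    wireless_resources=["IEEE 802.15.4"],
    operating_systems=["Contiki", "RIOT", "OpenWSN"],
    iot_capable=True,
    limitations="fixed node positions",
    auto_configuration=True,
    sdn_openflow=False,
    collectable_data=["serial logs", "power consumption", "radio sniffing"],
    machine_learning=False,
)
```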
“…The goal is to find the bottlenecks of the considered testbeds. Numerous papers in the literature focus on a particular scenario on a particular testbed, including [7][8][9][10][11][12]. The objective of this paper is to report the results of comparing a similar network scenario deployed on different testbeds.…”
Section: Introduction (mentioning)
confidence: 99%
“…Furthermore, most evaluations use different settings for traffic patterns, duration, RPL configuration and convergence, software versions, and so on, making comparison inherently difficult. This challenge has been recognized in the community, spurring initiatives such as the IoT Benchmarking consortium (https://iotbench.ethz.ch/ (accessed on 12 December 2021)), OpenBenchmark [150], and others discussed in Section 5.3; yet several tasks remain unresolved.…”
Section: Future Research Areas and Current Challenges (mentioning)
confidence: 99%
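One way to address the settings divergence the quoted passage describes is to pin every varying parameter in a single, shareable experiment descriptor. The sketch below is a hypothetical example of such a descriptor in Python; the schema and values are assumptions for illustration and do not reflect OpenBenchmark's actual configuration format.

```python
# Hypothetical experiment descriptor pinning the settings the quoted survey
# says usually differ across evaluations. Schema and values are illustrative.
experiment = {
    "traffic_pattern": {"type": "periodic", "period_s": 30, "payload_bytes": 80},
    "duration_s": 3600,
    "rpl": {
        "mode": "non-storing",
        "objective_function": "MRHOF",
        "dio_interval_min": 12,      # Trickle Imin exponent
    },
    "convergence_wait_s": 600,       # settle time before measurements start
    "software": {"os": "Contiki-NG", "version": "4.7"},
    "testbed": {"site": "example-site", "nodes": 40},
}
```

Publishing such a descriptor alongside results would let another group rerun the same scenario, which is precisely the comparability gap the citing authors point out.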
“…1). The platform takes care of testbed… [Footnote 1: The article is an extension of the paper [7] published in the INFOCOM 2019 CNERT workshop. This version complements it with the KPIs produced through an extensive experimentation campaign performed using the OpenBenchmark platform.]…”
Section: Introduction (mentioning)
confidence: 99%
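As a rough illustration of how KPIs like those mentioned in the extended article could be derived from raw experiment logs, here is a small Python sketch; the log format and the KPI definitions (packet delivery ratio, mean end-to-end latency) are assumptions for illustration, not the platform's actual pipeline.

```python
# Minimal sketch: derive two common IoT KPIs from send/receive timestamps.
# The input format and KPI choices are assumptions, not OpenBenchmark's.
def compute_kpis(sent: dict[int, float], received: dict[int, float]) -> dict:
    """sent/received map packet sequence numbers to timestamps (seconds)."""
    delivered = sent.keys() & received.keys()
    latencies = [received[s] - sent[s] for s in delivered]
    return {
        "pdr": len(delivered) / len(sent) if sent else 0.0,  # packet delivery ratio
        "mean_latency_s": sum(latencies) / len(latencies) if latencies else None,
    }

# Example: 3 packets sent, 2 delivered.
print(compute_kpis({1: 0.0, 2: 30.0, 3: 60.0}, {1: 0.4, 3: 60.5}))
# -> pdr ≈ 0.67, mean_latency_s ≈ 0.45
```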