2019
DOI: 10.1007/978-3-030-32813-9_4
AIoT Bench: Towards Comprehensive Benchmarking Mobile and Embedded Device Intelligence

Cited by 49 publications (27 citation statements). References 7 publications.
“…By contrast, macro benchmarks refer to benchmarks that capture application-specific system performance metrics for different application domains. Only two benchmarks capture both micro and macro benchmarks, namely RIoTBench (64) and AIoTBench (65). The majority of benchmarks are macro benchmarks and only two utilize generic workloads, namely EdgeBench (66) and DeFog (10).…”
Section: Edge Performance Benchmarking
confidence: 99%
“…Devices that run machine learning workloads: The benchmarking of machine-learning-specific workloads for various devices was presented in (65). Both micro benchmarks, such as the individual layers of a neural network, and macro benchmarks, such as applications in image classification, speech recognition, and language translation on the TensorFlow and Caffe2 frameworks, were considered.…”
Section: B1 Destination
confidence: 99%
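To make the micro/macro distinction above concrete, here is a minimal sketch of a layer-level micro benchmark, the kind of individual-layer measurement described in that citation. This is not AIoT Bench's actual harness; the layer shape, warm-up count, and run count are illustrative assumptions. It uses TensorFlow, one of the two frameworks named in the statement.

```python
# Minimal sketch of a layer-level micro benchmark (illustrative, not the
# AIoT Bench harness): time the forward pass of one neural-network layer.
import time
import tensorflow as tf

def time_layer(layer, inputs, warmup=5, runs=50):
    """Return mean wall-clock seconds per forward pass of a single layer."""
    for _ in range(warmup):      # warm-up passes so one-time kernel/graph
        _ = layer(inputs)        # setup costs don't skew the measurement
    start = time.perf_counter()
    for _ in range(runs):
        _ = layer(inputs)
    return (time.perf_counter() - start) / runs

# Example: a 3x3 convolution over a batch of 224x224 RGB images, the kind
# of individual layer a micro benchmark isolates (shapes are assumptions).
conv = tf.keras.layers.Conv2D(filters=64, kernel_size=3, padding="same")
x = tf.random.normal([8, 224, 224, 3])
print(f"Conv2D forward pass: {time_layer(conv, x) * 1e3:.2f} ms")
```

A macro benchmark, by contrast, would time a whole application such as end-to-end image classification rather than one layer in isolation.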
“…Many of their benchmark suites, namely AIBench, HPC AI500, AIoT Bench, Edge AIBench and BigDataBench, although not primarily focused on scientific applications, could be a useful complement to the SciML benchmarks proposed here. Each of these benchmark suites targets a different domain of the problem, such as IoT or edge computing devices, and includes many different types of benchmarks covering micro kernels, components and applications [63][64][65][66][67]. The CORAL-2 suite includes an ML/DL micro-benchmark suite that captures operations fundamental to deep learning and machine learning [60].…”
Section: Big Scientific Data and Machine Learning Benchmarks (A) Introduction
confidence: 99%
“…Coordinated by BenchCouncil (http://www.benchcouncil.org), we are also building an edge computing testbed with a federated learning framework to resolve security and privacy issues, which can be accessed from http://www.benchcouncil.org/testbed/index.php. BenchCouncil also releases datacenter AI benchmarks [8], HPC AI benchmarks [9], and IoT AI benchmarks [10], publicly available from http://www.benchcouncil.org/AIBench/index.html.…”
Section: Introduction
confidence: 99%