2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA)
DOI: 10.1109/isca45697.2020.00045
MLPerf Inference Benchmark

Abstract: Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and four orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad…

Cited by 268 publications (96 citation statements); references 37 publications.

Citation statements, ordered by relevance:
“…Regarding the selection of AI-processors and software tools for the optimal deployment of DNNs, we have established criteria to fulfill the system's requirements and analyzed some remarkable systems based on the MLPerf Inference benchmark suite (Reddi et al., 2019).…”
Section: Discussion
confidence: 99%
“…To support the AI-processor scouting process we rely on MLPerf Inference (Reddi et al., 2019), which is a relevant benchmark suite in the machine learning community for measuring how fast machine learning systems can process inputs and produce results using a trained DNN model. It was designed with the involvement of more than 30 organizations as well as more than 200 machine learning engineers and practitioners to overcome the challenge of assessing machine-learning system performance in an architecture-neutral, representative, and reproducible manner, despite the machine learning ecosystem's many possible combinations of hardware, machine-learning tasks, DNN models, data sets, frameworks, toolsets, libraries, architectures, and inference engines.…”
Section: Algorithms Deployment
confidence: 99%
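To make the measurement idea in the statement above concrete, here is a minimal, self-contained sketch of single-stream-style latency measurement in the spirit of MLPerf Inference. It does not use the real LoadGen API; the system_under_test callback and the dummy model are hypothetical stand-ins, and the 90th-percentile-latency metric follows the single-stream scenario described in the paper.

import statistics
import time

def measure_single_stream(system_under_test, samples, percentile=90):
    # Issue one query at a time; the next query is sent only after the
    # previous response returns, mirroring a single-stream workload.
    latencies = []
    for sample in samples:
        start = time.perf_counter()
        system_under_test(sample)  # hypothetical inference callback
        latencies.append(time.perf_counter() - start)
    # statistics.quantiles(..., n=100) returns the 1st..99th percentiles.
    return statistics.quantiles(latencies, n=100)[percentile - 1]

def dummy_model(sample):
    time.sleep(0.005)  # stand-in for ~5 ms of real inference work

p90_seconds = measure_single_stream(dummy_model, samples=range(200))
print(f"90th-percentile latency: {p90_seconds * 1000:.2f} ms")

In the real benchmark, the query generation, timing, and logging are handled by the LoadGen component rather than by the harness itself; this sketch only illustrates the metric.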
“…Thus, our work urges taking the whole software-hardware co-design solution as the benchmark object instead of benchmarking pure hardware designs against a fixed set of applications. The recently released MLPerf inference benchmark [44] includes an Open Division with the same motivation as our study, although it is a preliminary release and the rules of the Open Division are still immature. Compared to the immature rules in this preliminary release, our methodology provides a concrete interface that takes model compression techniques as the input and generates the compressed models as the output.…”
Section: Discussion
confidence: 99%
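The "concrete interface" mentioned in the statement above is not reproduced in this excerpt; the snippet below is only a hypothetical illustration of the idea, with every name invented for this sketch: the benchmark accepts a compression technique as input, produces the compressed model as output, and evaluates that result rather than the original reference model.

from typing import Callable, Dict, TypeVar

Model = TypeVar("Model")

def benchmark_compressed(model: Model,
                         compress: Callable[[Model], Model],
                         evaluate: Callable[[Model], Dict[str, float]]) -> Dict[str, float]:
    # Hypothetical sketch: the compression technique is an input to the
    # benchmark, the compressed model is its output, and measurement is
    # performed on the compressed model (the co-designed artifact),
    # not on the original reference model.
    compressed = compress(model)   # e.g. pruning or quantization supplied by the submitter
    return evaluate(compressed)    # accuracy / latency measured on the compressed model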
“…MLPerf [22] is an attempt by over 30 organizations to create an industry-wide standard benchmark to assess the vast number of machine learning software and hardware combinations, while DAWNBench [23] is led by academia. MLPerf limits the problem space by defining a set of scenarios, datasets, libraries, frameworks, and metrics.…”
Section: Related Work
confidence: 99%
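As a rough illustration of how MLPerf limits the problem space, the plain-data sketch below lists the four load scenarios and the metric each one reports, paraphrased from the MLPerf Inference paper; exact latency bounds and arrival parameters vary by benchmark version and are omitted.

# Four MLPerf Inference load scenarios and the metric each reports,
# paraphrased from the MLPerf Inference paper; exact bounds vary by version.
SCENARIOS = {
    "SingleStream": {"arrival": "one query at a time, back to back",
                     "metric": "90th-percentile latency"},
    "MultiStream":  {"arrival": "batched queries at a fixed interval",
                     "metric": "number of streams sustained within a latency bound"},
    "Server":       {"arrival": "Poisson-distributed query arrivals",
                     "metric": "queries per second under a tail-latency bound"},
    "Offline":      {"arrival": "all samples available at once",
                     "metric": "throughput in samples per second"},
}

for name, spec in SCENARIOS.items():
    print(f"{name}: {spec['arrival']} -> {spec['metric']}")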