2020
DOI: 10.48550/arXiv.2002.12418
Preprint

MNN: A Universal and Efficient Inference Engine

Cited by 15 publications (19 citation statements)
References 0 publications
“…As neural network models become increasingly commonplace in production, there have been a number of different open and closed source inference frameworks from both academia and industry. Some notable examples include OpenVINO for CPUs, TensorRT for Nvidia GPUs and MNN for mobile devices [15,25,31]. Almost all deep learning inference engines consist of two stages: preparation and execution.…”
Section: Deep Learning Inference Engines
Citation type: mentioning (confidence: 99%)
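The preparation/execution split the excerpt describes maps directly onto MNN's C++ API: a session is created once (backend selection, scheduling, memory planning), then reused for every inference. A minimal sketch assuming MNN's public Interpreter API; the model path is illustrative:

```cpp
#include <MNN/Interpreter.hpp>
#include <memory>

int main() {
    // Preparation stage: load the converted model and build a session once.
    // Backend selection and memory planning happen here, not per inference.
    std::shared_ptr<MNN::Interpreter> net(
        MNN::Interpreter::createFromFile("model.mnn"));  // illustrative path
    MNN::ScheduleConfig config;
    config.type = MNN_FORWARD_CPU;  // or MNN_FORWARD_OPENCL, MNN_FORWARD_METAL, ...
    MNN::Session* session = net->createSession(config);

    // Execution stage: the per-inference hot path.
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    // ... copy input data into `input` here ...
    net->runSession(session);
    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    (void)input;
    (void)output;  // read results from `output`
    return 0;
}
```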
“…OpenVINO and TensorRT rely on Intel oneDNN (formerly MKL-DNN) and Nvidia cuDNN respectively [15,31]. MNN relies on a semi-automated search technique to generate the kernels from a pre-defined number of optimization strategies [25]. TVM takes it a step further and performs compilation and autotuning for each kernel [2].…”
Section: Deep Learning Inference Engines
Citation type: mentioning (confidence: 99%)
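To make the contrast concrete: vendor libraries (oneDNN, cuDNN) ship fixed kernels; MNN, per the excerpt, searches over a pre-defined set of optimization strategies at preparation time; TVM tunes each kernel individually. Below is a toy, entirely hypothetical sketch of the middle option; the strategy names and cost model are invented for illustration and are not MNN internals:

```cpp
#include <cstddef>
#include <functional>
#include <limits>
#include <string>
#include <vector>

// Hypothetical convolution descriptor; field names are illustrative.
struct ConvShape { int n, c, h, w, oc, kh, kw; };

// A pre-defined optimization strategy: a name plus a cost estimator.
// Real engines time candidate kernels or use calibrated cost models.
struct Strategy {
    std::string name;
    std::function<double(const ConvShape&)> cost;  // lower is better
};

// Pick the cheapest strategy for a given shape. This mirrors the
// "search over pre-defined strategies" pattern in spirit only: the
// candidate set is fixed ahead of time, and selection happens once
// at preparation time rather than on the execution hot path.
const Strategy& select(const std::vector<Strategy>& cands, const ConvShape& s) {
    std::size_t best = 0;
    double bestCost = std::numeric_limits<double>::infinity();
    for (std::size_t i = 0; i < cands.size(); ++i) {
        double c = cands[i].cost(s);
        if (c < bestCost) { bestCost = c; best = i; }
    }
    return cands[best];
}

int main() {
    std::vector<Strategy> cands = {
        {"sliding-window", [](const ConvShape& s) { return double(s.kh) * s.kw; }},
        {"winograd",       [](const ConvShape& s) { return s.kh == 3 ? 4.0 : 1e9; }},
    };
    ConvShape s{1, 32, 224, 224, 64, 3, 3};
    // With a 3x3 kernel, the toy cost model picks "winograd".
    const Strategy& chosen = select(cands, s);
    (void)chosen;
    return 0;
}
```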