2022
DOI: 10.1109/access.2022.3229767

Efficient Hardware Architectures for Accelerating Deep Neural Networks: Survey

Abstract: In the modern-day era of technology, a paradigm shift has been witnessed in the areas involving applications of Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL). Specifically, Deep Neural Networks (DNNs) have emerged as a popular field of interest in most AI applications such as computer vision, image and video processing, robotics, etc. In the context of developed digital technologies and the availability of authentic data and data handling infrastructure, DNNs have been a credible …

Cited by 36 publications (15 citation statements)
References 193 publications (215 reference statements)
“…However, the policies learned for the decision networks are opaque to users and hence cannot be used for predictable adaptation to meet resource conditions. Various hardware- and software-based inference accelerators have been developed to run trained neural networks efficiently on target devices [43], [44]. While hardware-based accelerators try to maximize the throughput of deep learning operations on specialized hardware [45], software-based accelerators mainly focus on optimizing resource management, pipeline design, model restructuring, and quantization [46]-[50].…”
Section: Related Work
confidence: 99%
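The quantization mentioned in the excerpt above can be illustrated with a minimal sketch. This symmetric 8-bit scheme and the function names are illustrative assumptions for exposition, not taken from the survey or the citing paper:

```python
def quantize_int8(weights):
    """Symmetric linear quantization of float weights to the int8 range.

    The scale maps the largest-magnitude weight to 127, so every
    weight is stored as a small integer plus one shared float scale.
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]  # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
```

Storing `q` as int8 instead of float32 cuts weight memory roughly 4x, which is the kind of resource optimization software-based accelerators target.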
“…FPGA-based DNN accelerators [13] can be broadly categorized into two types: accelerators tailored for specific applications such as speech recognition, object detection, and natural language processing, and accelerators designed for specific algorithms such as CNNs and RNNs. Additionally, there exist accelerator frameworks equipped with hardware templates.…”
Section: B FPGA-Based Accelerators
confidence: 99%
“…Innovations in automated machine learning (AutoML) [25] and continual learning models, model compression for deployment on resource-limited devices, robustness against data distribution shifts, and effective multi-modal data integration are pivotal for maximizing DL's impact. For that reason, optimal DL model performance necessitates co-designing hardware [26] and software, highlighting the intricate balance between technological advancements and practical applications in economic contexts. All the preceding and succeeding data is context based on the following envisioned environment, as illustrated in Table 1.…”
Section: Challenges and Opportunities
confidence: 99%