2020 IEEE 23rd International Multitopic Conference (INMIC)
DOI: 10.1109/inmic50486.2020.9318136

A Survey Comparing Specialized Hardware And Evolution In TPUs For Neural Networks

Abstract: This survey traces the evolution of TPUs, from the first-generation TPU to the Edge TPU, and their architectures. It compares CPUs, GPUs, FPGAs, and TPUs, discussing their hardware architectures, similarities, and differences. Modern neural networks are widely used, but they demand substantial time, computation, and energy. Driven by this demand, and by the attractive design options open to architects, companies are continuously working to reduce training and inference response times. Du…

Cited by 16 publications (9 citation statements) · References 20 publications
“…Producers of TPUs include Google, Coral, and Hailo. Further, the evolution of TPUs can be visualized in four generations [16]. The first-generation TPU is a CISC processor, and the complex instructions are executed by the Matrix Multiplier Unit.…”
Section: Hardware Optimization (mentioning)
confidence: 99%
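To make the role of that matrix unit concrete, the sketch below shows a jitted matrix multiplication in JAX. This is an illustrative example, not code from the surveyed paper or the citing work: it assumes a JAX installation with a TPU runtime attached, and the function name `matmul`, the 1024×1024 shapes, and the bfloat16 dtype are arbitrary choices for the illustration. XLA compiles the dot product and, on a TPU backend, lowers it onto the matrix-multiply hardware; on CPU or GPU the same code runs on those backends instead.

```python
# Minimal JAX sketch (assumes a TPU runtime is available to jax).
import jax
import jax.numpy as jnp

@jax.jit
def matmul(a, b):
    # jnp.dot on 2-D operands is a matrix multiplication; on a TPU,
    # XLA maps it onto the systolic matrix-multiply unit.
    return jnp.dot(a, b)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
a = jax.random.normal(k1, (1024, 1024), dtype=jnp.bfloat16)
b = jax.random.normal(k2, (1024, 1024), dtype=jnp.bfloat16)

print(jax.devices())                      # lists TpuDevice entries when a TPU is attached
c = matmul(a, b).block_until_ready()      # force execution before inspecting the result
print(c.shape, c.dtype)
```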
“…TPUs demonstrate a notable reduction in energy consumption per computation, a critical factor in sustainable and scalable AI development. This efficiency is paramount in scenarios where large-scale computations are routine, making TPUs an indispensable asset in the AI and ML landscape [65], [81]-[83].…”
Section: Tensor Processing Units (TPUs) (mentioning)
confidence: 99%
“…The trade-offs related to the realization of vision at the edge have been addressed from diverse perspectives. Specific and progressively more efficient devices have been developed in the last few years, such as low-power edge GPUs [1]-[3], neural-network accelerators [4], embedded CPUs [5], or tensor processor units [6]. Strategies based on multiple devices have also been proposed, including hybrid cloud-edge solutions [7]-[13] and collaborative systems in the framework of the Internet of Things [14]-[18].…”
Section: Introduction (mentioning)
confidence: 99%