Proceedings of the 49th Annual International Symposium on Computer Architecture 2022
DOI: 10.1145/3470496.3533042
AI accelerator on IBM Telum processor

Abstract: IBM Telum is the next-generation processor chip for IBM Z and LinuxONE systems. The Telum design is focused on enterprise-class workloads, and it achieves over 40% per-socket performance growth compared to IBM z15. IBM Telum is the first server-class chip with a dedicated on-chip AI accelerator that enables clients to gain real-time insights from their data as it is being processed. Seamlessly infusing AI in all enterprise workloads is highly desirable to get real business insight on every transaction as w…

Cited by 6 publications (1 citation statement)
References 89 publications
“…For example, experiments performed at Google (Wang and Kanwar 2019) suggest that such training computations are more sensitive to the exponent range than to the precision. This motivated the introduction of the bfloat16 (or BF16) format (Intel 2018, Henry, Tang and Heinecke 2019, Osorio et al 2022) and of the DLFloat (or DLFLT-16) format (Agrawal et al 2019, Lichtenau et al 2022). Smaller 8-bit or 9-bit floating-point formats (with 2-bit or 3-bit significands) (Chung et al 2018), or even 4-bit formats (Sun et al 2020), have been suggested for AI applications.…”
Section: Formats, Roundings and Operations
Confidence: 99%
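The citation statement above contrasts exponent range with significand precision in formats like bfloat16. As a minimal illustrative sketch (not the actual rounding logic of any hardware, including Telum's accelerator), bfloat16 can be modeled by keeping only the top 16 bits of an IEEE-754 binary32 value, which preserves the full 8-bit exponent range while cutting the significand to 7 bits:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Truncate a float to bfloat16 precision (round-toward-zero).

    Real hardware typically uses round-to-nearest-even; simple
    truncation is used here only to keep the sketch short.
    """
    # Reinterpret the IEEE-754 binary32 encoding as an integer,
    # zero the low 16 bits (the dropped significand bits), and
    # reinterpret the result back as a float.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

# bfloat16 keeps float32's exponent range (values near 1e38 stay
# finite) but only about 2-3 decimal digits of precision:
print(to_bfloat16(3.141592653589793))  # 3.140625
print(to_bfloat16(1e38))               # large but finite, no overflow
```

This is exactly the trade-off the quoted passage describes: training tolerates the coarse 7-bit significand, but shrinking the exponent range (as a same-width IEEE binary16 would) causes overflow and underflow problems.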