2022 IEEE Hot Chips 34 Symposium (HCS)
DOI: 10.1109/hcs55958.2022.9895479
Cerebras Architecture Deep Dive: First Look Inside the HW/SW Co-Design for Deep Learning : Cerebras Systems

Cited by 15 publications (3 citation statements). References 0 publications.
“…Clear examples are mapping and resource allocation techniques, whose success greatly depends on aspects such as the algorithm structure, qubit connectivity, gate fidelities, or the cost of moving qubits within and across chips. Similarly to recent trends in co-design in deep learning [25], one could conceive techniques where the algorithms, the compiler, and the runtime techniques are co-designed with the interconnect.…”
Section: Discussion (mentioning, confidence: 99%)
“…In this section the focus will be on the Cerebras Systems Wafer Scale Engine (WSE) [11,12], as it is the most advanced large-scale silicon die currently made. This is a system where the processor itself is the size of a wafer and is primarily made for ML models and computations.…”
Section: Large Scale Silicon Die (mentioning, confidence: 99%)
“…Companies like Graphcore [50], Cerebras [52], Groq [27], and earlier generations of SambaNova's RDU [63], [64] offer alternate AI accelerators. However, they all lack the three-tier memory system required to execute CoEs efficiently.…”
Section: Related Work (mentioning, confidence: 99%)