2015 Workshop on Exploiting Silicon Photonics for Energy-Efficient High Performance Computing
DOI: 10.1109/siphotonics.2015.10

High-Speed Optical Cache Memory as Single-Level Shared Cache in Chip-Multiprocessor Architectures

Abstract: We present an optical bus-based Chip Multiprocessor architecture where the processing cores share an optical single-level cache unit. Physically, the optical cache is implemented externally in a separate chip located next to the CPU die. The cache interconnection system is realized through WDM optical interfaces that connect the shared cache module with the processing cores and the Main Memory via spatially-multiplexed optical waveguides; hence, the CPU-DRAM communication takes place completely in the optical domain…
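To make the organization described in the abstract concrete, here is a minimal Python sketch of a single shared cache level accessed by all cores, with every miss going straight to main memory over the CPU-DRAM bus. The direct-mapped policy, cache geometry, and access pattern are illustrative assumptions made for this sketch, not parameters reported in the paper.

```python
# Minimal sketch of the organization described in the abstract: all cores share
# one single-level, off-die optical cache, and every miss is served by main
# memory over the optical CPU-DRAM bus. The direct-mapped policy, geometry and
# access pattern below are illustrative assumptions, not values from the paper.

class SharedOpticalCache:
    def __init__(self, num_lines=4096, line_size=64):
        self.num_lines = num_lines
        self.line_size = line_size
        self.tags = [None] * num_lines      # direct-mapped tag store
        self.hits = 0
        self.misses = 0

    def access(self, address):
        line = (address // self.line_size) % self.num_lines
        tag = address // (self.line_size * self.num_lines)
        if self.tags[line] == tag:
            self.hits += 1                  # served by the shared optical cache
            return "hit"
        self.tags[line] = tag               # line filled from main memory
        self.misses += 1                    # request crosses the optical CPU-DRAM bus
        return "miss"


# Every core addresses the same cache instance; there are no private L1/L2
# copies, so no coherence traffic between per-core caches is required.
cache = SharedOpticalCache()
for pass_no in range(2):                    # second pass re-touches warm lines
    for core_id in range(8):
        for i in range(256):
            cache.access(core_id * 1_000_000 + i * 64)

print(f"hits={cache.hits} misses={cache.misses}")
```

The second pass over the same addresses hits in the shared cache, standing in for the locality a core would otherwise get from private on-die caches.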

Cited by 2 publications (4 citation statements)
References 18 publications
“…The conventional approach of adding levels to the cache hierarchy increases not only the total chip area and energy consumption but also the miss rates as the number of cores and the cache size grow. P. Maniotis et al. [3] proposed a solution to this problem by presenting an optical cache memory architecture that uses optical CPU-MM buses, in place of standard electronic buses, for connecting all optical subsystems. In contrast to the standard practice of placing the L1 and L2 caches alongside the core in order to match their speeds, the optical caches are kept on a separate chip, as shown in fig.…”
Section: Solution
Confidence: 99%
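The trade-off described in this statement can be made concrete with a back-of-envelope average memory access time (AMAT) calculation comparing a conventional two-level electronic hierarchy with a single shared cache level fast enough to track the core clock. All latencies and miss rates below are hypothetical placeholders chosen only to illustrate the formula; they are not measurements from [3].

```python
# AMAT = hit_time + miss_rate * miss_penalty, applied per level.
# All parameters are hypothetical placeholders, not results reported in [3].

def amat_two_level(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, mem_latency):
    """Conventional on-die electronic L1 + L2 hierarchy."""
    l2_amat = l2_hit + l2_miss_rate * mem_latency
    return l1_hit + l1_miss_rate * l2_amat

def amat_single_level(cache_hit, miss_rate, mem_latency):
    """Single shared cache level; every miss goes straight to main memory."""
    return cache_hit + miss_rate * mem_latency

# Placeholder latencies in CPU cycles; the lower single-level miss rate stands
# in for the larger shared capacity the citing text argues for.
print("two-level AMAT:   ", amat_two_level(4, 0.10, 12, 0.30, 200))   # 11.2 cycles
print("single-level AMAT:", amat_single_level(2, 0.03, 200))          # 8.0 cycles
```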
“…5. Performance graph for Solution 3 [3] or for bigger workloads which fit in the conventional LLC space.…”
Section: Comparison
Confidence: 99%
“…However, the recent advances in the still-new technology of Optical Static RAMs and the first designs of optical cache memories [154]–[168] might allow an alternative, visionary route towards an expanded disintegrated compute architecture with off-die shared optical caching [74]. This is illustrated in Fig.…”
Section: From C2C and Rack-Scale Disaggregation to Disintegrated …
Confidence: 99%