As diminishing feature sizes drive down the energy for computation, the power budget consumed by on-chip communication is steadily rising. Furthermore, the increasing number of cores is placing a heavy performance burden on the network-on-chip (NoC) infrastructure. While NoCs are designed as regular architectures that allow scaling to hundreds of cores, the lack of a flexible topology gives rise to higher latencies, lower throughput, and increased energy costs. In this paper, we explore MorphoNoCs: scalable, configurable, hybrid NoCs obtained by extending regular electrical networks with configurable nanophotonic links. In order to design MorphoNoCs, we first carry out a detailed study of the design space for Multi-Write Multi-Read (MWMR) nanophotonic links. After identifying optimal design points, we then discuss the router architecture for deploying them in hybrid electronic-photonic NoCs. We then study the design space at the network level by varying the waveguide lengths and the number of hybrid routers, which allows us to explore energy-latency trade-offs. For our evaluations, we adopt traces from synthetic benchmarks as well as the NAS Parallel Benchmark suite. Our results indicate that MorphoNoCs can achieve latency improvements of up to 3.0x or energy improvements of up to 1.37x over the base electronic network.
Optical computing has been an active topic of research for over seven decades, although solutions have been elusive. This special issue explores recent advances in all-optical information processing, including digital and analog, classical and quantum, and those based on Turing, neuromorphic, and metaphoric models of computation. Optical computing can generally be defined as "the use of electromagnetic radiation to process information". The term "optical" is widely understood to mean "electromagnetic radiation", but the term "computing" is frequently assumed to lack a formal definition. In computer science, a hierarchy exists that precisely defines computational complexity, based on a combination of the amount of state and how that state is accessed. This special issue gathers papers from both perspectives, addressing issues of computation class and complexity as well as optical phenomena from a nanophotonic viewpoint. The quest for optical computation is arguably as old as Foucault's knife-edge test in 1858, but the more notable activity has occurred over the past 65 years [1]. The first decades were dominated by the advent of holography in the late 1940s and lasers in the early 1960s, which combined with lens manipulations (Fourier transforms) to enable analog synthetic aperture radar (SAR) processing [1]. Room-temperature liquid crystals drove research in the use of analog spatial light modulators (SLMs), which could be coupled with efficient LEDs for the first time in the 1970s [2]. That decade also brought the first exploration into optical transistors, the first foray into digital optical devices [3]. The 1980s introduced micro-electromechanical mirror (MEMS) technology and micromirrors, which provide a much more compact method for modulating arrays of light than SLMs [2]. This included new approaches to the optical transistor based on interferometers [4].
In the 1990s, vertical-cavity surface-emitting lasers (VCSELs) and self-electro-optic effect devices (SEEDs) became available [5]. Research into ring resonators and more complex nonlinear optical properties became more popular in the 2000s, as did optical methods for processing network data [6]. Except for the optical transistor, much of this research focused on analog methods. The notion of optical computing in this era was limited by the assumption that device fabrication would halt at 100 nm, so nanophotonic devices might not be practical and electronic devices might have scaling limitations [7]. Although some considered the field of optical processing to have passed its peak [1], the 2010s have since seen a resurgence in activity, centered on new approaches in quantum and analog mesh- and phase-based computing [8][9][10]. This new activity was the highlight of the OSA Optical Computing Incubator meeting in late 2015 [11] and the recent IEEE Summer Topical Meeting on Photonic Hardware Accelerators and Neuro-inspired Computing in July 2016 [12], both of which helped shape the content of this special issue. To provide a common reference for comparison,...
Software prefetching and locality optimizations are techniques for overcoming the speed gap between processor and memory. In this paper, we evaluate the impact of memory trends on the effectiveness of software prefetching and locality optimizations for three types of applications: regular scientific codes, irregular scientific codes, and pointer-chasing codes. We find that for many applications, software prefetching outperforms locality optimizations when there is sufficient memory bandwidth, but locality optimizations outperform software prefetching under bandwidth-limited conditions. The break-even point (for 1 GHz processors) occurs at roughly 2.5 GBytes/sec on today's memory systems, and will increase on future memory systems. We also study the interactions between software prefetching and locality optimizations when applied in concert. Naively combining the techniques provides robustness to changes in memory bandwidth and latency, but does not yield additional performance gains. We propose and evaluate several algorithms to better integrate software prefetching and locality optimizations, including a modified tiling algorithm, padding for prefetching, and index prefetching.
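To make the idea concrete for the pointer-chasing case the abstract mentions, the following is a minimal sketch of software prefetching on a linked-list traversal. The node layout, function name, and the use of GCC/Clang's `__builtin_prefetch` are illustrative assumptions, not the paper's own implementation:

```c
#include <stddef.h>

/* Hypothetical node type for illustration; not from the paper. */
typedef struct Node {
    int value;
    struct Node *next;
} Node;

/* Sum a linked list, issuing a non-binding prefetch hint for the
   next node while the current one is being processed.  This overlaps
   the memory latency of the pointer chase with useful work. */
long sum_with_prefetch(const Node *head)
{
    long total = 0;
    for (const Node *p = head; p != NULL; p = p->next) {
        if (p->next != NULL)
            /* args: address, rw=0 (read), locality=1 (low reuse) */
            __builtin_prefetch(p->next, 0, 1);
        total += p->value;
    }
    return total;
}
```

Because the prefetch is only a hint, the function is correct with or without hardware support; the benefit depends on available memory bandwidth, which is exactly the trade-off the paper quantifies.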