This paper presents a new type of wireless networking application for data centers using steered-beam mmWave links. By taking advantage of clean line-of-sight (LOS) channels above server racks, a robust wireless packet-switching network can be built. Transmission latency can be reduced by flexibly bridging adjacent rows of racks wirelessly, without long cables and multiple switches. Eliminating cables and switches also reduces equipment costs as well as server installation and reconfiguration costs. Security can be physically enhanced by controlled directivity and negligible wall penetration. The aggregate data-transmission bandwidth per given volume is expected to scale as the fourth power of the carrier frequency. The paper also describes the architecture of such network configurations and a preliminary demonstration system.
The temperature dependence of the diagonal conductivity, σ_xx(T), at integer and fractional quantum Hall effect (FQHE) minima was measured in a sample at various densities. We find that σ_xx^0, the 1/T → 0 extrapolated value of σ_xx(T) from Arrhenius plots, is different for different densities. While a reasonable (1/q)² scaling of σ_xx^0 at the filling factors ν = p/q is observed at lower densities, the scaling is not seen in the highest-density data. We explain this loss of scaling by a breakdown of the assumption underlying the simple activated formula, caused by a crossover between the extended-state width, Γ, and T for the measurement. For kT > Γ, the scaling of the 1/T intercept is recovered by plotting σ_xx(T) × T vs 1/T and fitting to σ_xx(T) = (σ_xx^0′/T) exp(−ΔE/kT). We attribute the (1/q)² scaling in σ_xx^0 and σ_xx^0′ observed at each density to a (1/q)² scaling in the T = 0 conductivity. This supports the assertion of Clark et al. that the charge e* of the quasiparticle excitation from the FQHE ground state at ν = p/q can be determined from σ_xx(T) and that the charge is e* = e/q.
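The fitting procedure described above can be sketched numerically: multiplying σ_xx(T) by T and taking the logarithm makes the activated form σ_xx(T) = (σ_xx^0′/T) exp(−ΔE/kT) linear in 1/T, so the prefactor and gap follow from a straight-line fit. This is a minimal illustration on synthetic data; all parameter values are assumptions, not from the paper.

```python
import numpy as np

# Synthetic data following the activated form in the abstract:
# sigma_xx(T) = (sigma0 / T) * exp(-dE / (k*T)).
# Parameter values are illustrative assumptions only.
k = 8.617e-5           # Boltzmann constant, eV/K
sigma0_true = 2.0e-5   # prefactor of the 1/T form (arbitrary units x K)
dE_true = 0.05         # activation gap, eV

T = np.linspace(2.0, 10.0, 20)                       # temperatures, K
sigma = (sigma0_true / T) * np.exp(-dE_true / (k * T))

# Plotting sigma_xx(T) * T vs 1/T linearizes the model:
# ln(sigma * T) = ln(sigma0) - (dE/k) * (1/T),
# so a linear fit yields the 1/T intercept and the gap.
slope, intercept = np.polyfit(1.0 / T, np.log(sigma * T), 1)
sigma0_fit = np.exp(intercept)
dE_fit = -slope * k

print(f"fitted prefactor sigma0 = {sigma0_fit:.3e}")
print(f"fitted gap dE = {dE_fit:.4f} eV")
```

On noise-free data the fit recovers the input parameters; with experimental data the same linearization gives the 1/T intercept whose (1/q)² scaling the abstract discusses.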
The development of hardware neural networks, including neuromorphic hardware, has accelerated over the past few years. However, it is challenging to operate very large-scale neural networks with low-power hardware devices, partly due to signal transmission through a massive number of interconnections. Our aim is to address the issue of communication cost from an algorithmic viewpoint and study learning algorithms for energy-efficient information processing. Here, we consider two approaches to finding spatially arranged sparse recurrent neural networks with a high cost-performance ratio for associative memory. In the first approach, following classical methods, we focus on sparse modular network structures inspired by biological brain networks and examine their storage capacity under an iterative learning rule. We show that incorporating long-range intermodule connections into purely modular networks can enhance the cost-performance ratio. In the second approach, we formulate for the first time an optimization problem in which the network sparsity is maximized under the constraints imposed by a pattern embedding condition. We show that there is a tradeoff between the interconnection cost and the computational performance in the optimized networks. We demonstrate that the optimized networks can achieve a better cost-performance ratio than those considered in the first approach. We show the effectiveness of the optimization approach mainly using binary patterns and also apply it to grayscale image restoration. Our results suggest that the presented approaches are useful in seeking sparser and less costly connectivity of neural networks for the enhancement of energy efficiency in hardware neural networks.
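The first approach, a sparse modular associative memory with a few long-range intermodule links, can be sketched as follows. This is a minimal illustration using plain Hebbian learning on a masked Hopfield-style network, not the paper's iterative learning rule; the module sizes, link probability, and pattern counts are illustrative assumptions.

```python
import numpy as np

# Sketch: sparse modular connectivity mask (block-diagonal modules plus
# a few random long-range intermodule links), Hebbian weights restricted
# to the mask, and pattern recall from a corrupted cue.
rng = np.random.default_rng(0)
n_modules, module_size = 4, 25
N = n_modules * module_size

# Connectivity mask: dense within modules, sparse between them.
mask = np.zeros((N, N), dtype=bool)
for m in range(n_modules):
    s = slice(m * module_size, (m + 1) * module_size)
    mask[s, s] = True
long_range = rng.random((N, N)) < 0.02      # sparse intermodule links
mask |= long_range | long_range.T           # keep the mask symmetric
np.fill_diagonal(mask, False)

# Hebbian learning restricted to the allowed connections.
patterns = rng.choice([-1, 1], size=(2, N))
W = (patterns.T @ patterns) / N
W *= mask

# Recall: start from a corrupted stored pattern, iterate sign updates.
state = patterns[0].copy()
flip = rng.choice(N, size=6, replace=False)
state[flip] *= -1
for _ in range(20):
    state = np.sign(W @ state)
    state[state == 0] = 1

overlap = (state @ patterns[0]) / N
print(f"overlap with stored pattern after recall: {overlap:.2f}")
```

Counting the nonzero entries of `mask` gives the interconnection cost; sweeping the long-range link probability against recall overlap traces the cost-performance tradeoff the abstract describes.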
This paper deals with alternative server memory architecture options for multicore CPU generations using optically attached memory systems. Thanks to its large bandwidth-distance product, optical interconnect technology enables CPUs and local memory to be placed meters apart without sacrificing bandwidth. This topologically local but physically remote main memory, attached via an ultra-high-bandwidth parallel optical interconnect, can lead to flexible memory architecture options using low-cost commodity memory technologies.