GaInAs/InAs composite channels in InP-based pHEMTs enable wideband and/or low-noise performance because of their superior carrier transport properties. To date, the influence of InAs inset design details on transistor performance has not been parametrized in the literature. We present a systematic study of the effects of InAs channel inset thickness on transistor characteristics and cutoff frequencies versus temperature, and on noise performance at 300 K. The epitaxial layer structures considered here incorporate 2- to 5-nm InAs insets within a fixed total composite channel thickness. All layers exhibit excellent electron mobilities (from 40 200 to 54 800 cm²/Vs at 77 K). Thicker InAs insets improve both the current-gain cutoff frequency (f_T) and the maximum oscillation frequency (f_MAX). However, they also result in higher gate leakage currents and increased channel impact ionization. 50-nm gate-length pHEMTs with a 5-nm InAs inset feature the highest simultaneous f_T/f_MAX ≥ 390/675 (455/800) GHz at 300 (15) K for a low-noise bias, but exhibit the poorest minimum noise figure NF_MIN. Whereas higher f_T (and/or f_MAX) values have traditionally been associated with improved noise performance, this correlation no longer holds for these devices.
Most existing Spiking Neural Network (SNN) works state that SNNs may utilize the temporal information dynamics of spikes, yet an explicit analysis of these dynamics is still missing. In this paper, we ask several important questions toward a fundamental understanding of SNNs: What are the temporal information dynamics inside SNNs? How can we measure them? How do they affect the overall learning performance? To answer these questions, we empirically estimate the Fisher information of the weights to measure the distribution of temporal information during training. Surprisingly, as training progresses, the Fisher information starts to concentrate in the early timesteps. After training, we observe that information becomes highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration. Through extensive experiments on various configurations of architecture, dataset, optimization strategy, time constant, and number of timesteps, we find that temporal information concentration is a common learning feature of SNNs. Furthermore, to reveal how temporal information concentration affects the performance of SNNs, we design a loss function that changes the trend of temporal information. We find that temporal information concentration is crucial to building a robust SNN but has little effect on classification accuracy. Finally, based on this observation, we propose an efficient iterative pruning method.
Code is available at https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Temporal-Information-Dynamics-in-Spiking-Neural-Networks.
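The per-timestep measurement the abstract describes can be illustrated with a diagonal empirical Fisher estimate, F ≈ E[g²], where g is the per-sample gradient of the loss with respect to the weights at each timestep. The sketch below is an assumption-laden illustration, not the authors' implementation: the function name `empirical_fisher_per_timestep` is hypothetical, and the gradients are synthetic, generated so that early timesteps carry larger magnitudes to mimic the reported concentration trend.

```python
import numpy as np

def empirical_fisher_per_timestep(grads):
    """Diagonal empirical Fisher information, summed over parameters,
    for each timestep.

    grads: (num_samples, num_timesteps, num_params) array of per-sample
           loss gradients w.r.t. the weights at each timestep.
    Returns a (num_timesteps,) array; larger values indicate that more
    information is concentrated in that timestep.
    """
    # F_diag ~= E[g^2]; sum the diagonal over parameters, average over samples
    return (grads ** 2).sum(axis=2).mean(axis=0)

# Synthetic demo: early timesteps get larger gradient magnitudes,
# mimicking the temporal information concentration phenomenon.
rng = np.random.default_rng(0)
num_samples, num_timesteps, num_params = 100, 8, 50
scales = np.linspace(2.0, 0.5, num_timesteps)  # decaying magnitude over time
grads = rng.normal(size=(num_samples, num_timesteps, num_params)) * scales[None, :, None]

fisher = empirical_fisher_per_timestep(grads)
print(fisher)  # Fisher mass is highest at the earliest timesteps
```

In an actual SNN training run, `grads` would be collected from backpropagation-through-time at each timestep rather than sampled; the aggregation step is the same.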