We demonstrate the electrical properties of metal-oxide-semiconductor capacitors on molecular-beam-epitaxial GaAs passivated in situ with an ultrathin amorphous Si (a-Si) layer and finished with an ex situ deposited HfO2 gate oxide and TaN metal gate. A minimum Si interface-passivation-layer thickness of 1.5 nm is needed to prevent Fermi-level pinning and to obtain good capacitance-voltage characteristics, with an equivalent oxide thickness of 2.1 nm and a leakage current of ⩽1.0 mA∕cm2. Transmission electron microscopy showed that the Si layer was oxidized to a depth of up to 1.4 nm during ex situ processing, while the interface between GaAs and a-Si remained atomically sharp without any sign of interfacial reaction.
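For context, the reported EOT can be read through the usual series-capacitance model of the gate stack; a minimal worked form, assuming the oxidized part of the Si layer behaves as SiO2-like SiOx and taking nominal permittivities (ε_SiO2 ≈ 3.9, ε_HfO2 ≈ 20; these are assumptions, not values from the paper):

\mathrm{EOT} = t_{\mathrm{SiO}_x}\,\frac{\varepsilon_{\mathrm{SiO}_2}}{\varepsilon_{\mathrm{SiO}_x}} + t_{\mathrm{HfO}_2}\,\frac{\varepsilon_{\mathrm{SiO}_2}}{\varepsilon_{\mathrm{HfO}_2}}

Under those assumptions, the ~1.4 nm of oxidized Si contributes roughly 1.4 nm of EOT, so an HfO2 thickness near 3.5 nm would account for the remaining ~0.7 nm of the reported 2.1 nm.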
We present a 256 × 256 in-memory compute (IMC) core designed and fabricated in 14-nm CMOS technology with backend-integrated multi-level phase-change memory (PCM). It comprises 256 linearized current-controlled-oscillator (CCO)-based A/D converters (ADCs) at a compact 4-µm pitch and a local digital processing unit (LDPU) that performs affine scaling and ReLU operations. A frequency-linearization technique for the CCO is introduced.
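Where a concrete picture of the LDPU's role helps, here is a minimal sketch, assuming the LDPU applies a per-column affine scale to each raw ADC code and then a ReLU; the function name, data types, and coefficient values are hypothetical, not taken from the paper.

import numpy as np

def ldpu_postprocess(adc_codes, gain, offset):
    # Hypothetical LDPU behavior: affine scaling of raw ADC codes, then ReLU.
    scaled = gain * adc_codes.astype(np.float32) + offset  # affine scaling
    return np.maximum(scaled, 0.0)                         # ReLU

# Example: post-process the 256 ADC outputs of one read-out
codes = np.random.randint(0, 256, size=256)
activations = ldpu_postprocess(codes, gain=0.05, offset=-2.0)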
In this letter, the authors present the capacitance-voltage and current-voltage characteristics of TaN∕HfO2∕n-GaAs metal-oxide-semiconductor capacitors with thin silicon and germanium interfacial passivation layers (IPLs). Physical-vapor-deposited high-k dielectric films and silicon/germanium IPLs were deposited on GaAs substrates that had been cleaned with HCl and (NH4)2S solutions. An equivalent oxide thickness (EOT) of 12.5 Å and a dielectric leakage current density of 2.0×10−4 A∕cm2 at ∣VG−VFB∣=1 V, with low capacitance-voltage frequency dispersion, were obtained. The results indicate that a thin silicon/germanium IPL helps scale the EOT below 13 Å while improving the quality of the interface.
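As a quick sanity check (not stated in the letter), an EOT of 12.5 Å corresponds to an accumulation capacitance density of roughly

C_{\mathrm{ox}} = \frac{\varepsilon_{\mathrm{SiO}_2}\,\varepsilon_0}{\mathrm{EOT}} = \frac{3.9 \times 8.85\times10^{-14}\ \mathrm{F/cm}}{12.5\times10^{-8}\ \mathrm{cm}} \approx 2.8\ \mu\mathrm{F/cm^2},

which is the scale of gate capacitance implied by the reported EOT scaling.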
Hardware acceleration of deep learning using analog non-volatile memory (NVM) requires large arrays with high device yield, high-accuracy multiply-accumulate (MAC) operations, and routing frameworks for implementing arbitrary deep neural network (DNN) topologies. In this article, we present a 14-nm test chip for Analog AI inference. It contains multiple arrays of phase-change memory (PCM) devices, each capable of storing 512 × 512 unique DNN weights and executing massively parallel MAC operations at the location of the data. DNN excitations are transported across the chip using a duration representation on a parallel and reconfigurable 2-D mesh. To accurately transfer inference models to the chip, we describe a closed-loop tuning (CLT) algorithm that programs the four PCM conductances in each weight, achieving <3% average weight error. A row-wise programming scheme and associated circuitry allow us to execute CLT on up to 512 weights concurrently. We show that the test chip can achieve near-software-equivalent accuracy on two different DNNs. We demonstrate tile-to-tile transport with a fully on-chip two-layer network for MNIST (accuracy degradation ∼0.6%).
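To make the four-conductances-per-weight encoding concrete, below is a minimal sketch of an idealized MAC under the common assumption that each weight is stored as a major and a minor differential conductance pair combined with a significance factor; the factor F, the array values, and the function names are illustrative assumptions, not details from the article.

import numpy as np

F = 1.0 / 8.0  # assumed significance factor for the minor conductance pair

def effective_weight(Gp, Gm, gp, gm, F=F):
    # Assumed four-conductance encoding: major pair (Gp, Gm) plus scaled minor pair (gp, gm).
    return (Gp - Gm) + F * (gp - gm)

def analog_mac(x, Gp, Gm, gp, gm):
    # Idealized in-memory MAC: excitations x drive the rows, and each column
    # accumulates the conductance-weighted contributions (Kirchhoff summation).
    W = effective_weight(Gp, Gm, gp, gm)   # 512 x 512 effective weights
    return W.T @ x

rng = np.random.default_rng(0)
Gp, Gm, gp, gm = rng.uniform(0.0, 25e-6, size=(4, 512, 512))  # conductances in siemens
x = rng.uniform(0.0, 1.0, size=512)                            # duration-coded excitations (idealized)
y = analog_mac(x, Gp, Gm, gp, gm)                              # 512 MAC results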