Spiking neural networks (SNNs) are considered among the most promising artificial neural networks due to their energy-efficient computing capability. Recently, converting trained deep neural networks to SNNs has improved the accuracy of deep SNNs. However, most previous studies have not achieved satisfactory results in terms of inference speed and energy efficiency. In this paper, we propose a fast and energy-efficient information transmission method that uses burst spikes and a hybrid neural coding scheme in deep SNNs. Our experimental results show that the proposed methods improve inference energy efficiency and shorten latency.
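The burst-spike idea above can be sketched with a toy integrate-and-fire neuron that may emit several spikes in one timestep, conveying more information per step than a single binary spike. The parameter names and the burst rule here are illustrative assumptions, not the paper's exact formulation.

```python
def burst_if_neuron(inputs, threshold=1.0, max_burst=4):
    """Toy integrate-and-fire neuron with burst spiking (illustrative sketch).

    Instead of at most one spike per timestep, the neuron emits a burst
    whose size grows with how far the membrane potential exceeds the
    threshold, capped at `max_burst` spikes.
    """
    v = 0.0                      # membrane potential
    spikes = []
    for x in inputs:
        v += x                   # integrate the input current
        if v >= threshold:
            # burst size: number of whole thresholds crossed, capped
            n = min(int(v // threshold), max_burst)
            spikes.append(n)
            v -= n * threshold   # soft reset by the transmitted amount
        else:
            spikes.append(0)
    return spikes

# A strong input is conveyed in a single timestep as a burst of 2 spikes,
# where a plain binary neuron would need two timesteps.
print(burst_if_neuron([0.4, 2.3, 0.1, 0.8]))  # → [0, 2, 0, 1]
```

A non-bursting neuron (`max_burst=1`) would need extra timesteps to drain the same membrane potential, which is the latency cost the burst scheme is designed to avoid.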
There has been great interest in developing time-of-flight (TOF) PET to improve the signal-to-noise ratio of PET images relative to that of non-TOF PET. Silicon photomultiplier (SiPM) arrays have attracted attention as fast TOF PET photosensors. Since numerous SiPM arrays are needed to construct a modern human PET scanner, a multiplexing method providing both good timing performance and high channel-reduction capability is required to develop SiPM-based TOF PET. The purpose of this study was to develop a capacitive multiplexing circuit for SiPM-based TOF PET. The proposed multiplexing circuit was evaluated by measuring the coincidence resolving time (CRT) and the energy resolution as a function of the overvoltage using three different capacitor values of 15, 30, and 51 pF. A flood histogram was also obtained and quantitatively assessed. Experiments were performed using a [Formula: see text] array of [Formula: see text] mm SiPMs. Among the tested capacitor values, the smallest capacitor yielded the best timing performance. On the other hand, the energy resolution and flood histogram quality of the multiplexing circuit deteriorated as the capacitor value became smaller. The proposed circuit achieved a CRT of [Formula: see text] ps FWHM and an energy resolution of 17.1[Formula: see text] with a pair of [Formula: see text] mm LYSO crystals using a capacitor value of 30 pF at an overvoltage of 3.0 V. It was also possible to clearly resolve a [Formula: see text] array of LYSO crystals in the flood histogram using the multiplexing circuit. The experimental results indicate that the proposed capacitive multiplexing circuit is useful for obtaining excellent timing performance and crystal-resolving capability in the flood histogram with minimal degradation of the energy resolution, as well as for reducing the number of readout channels of the SiPM-based TOF PET detector.
This study demonstrated that the proposed MTOT method, which consists of an FPGA alone without any ADCs or TDCs, can provide a simple and cost-effective analog and digital signal-processing system for PET.
Deep neural networks continue to awe the world with their remarkable performance. Their predictions, however, are prone to corruption by adversarial examples that are imperceptible to humans. Current efforts to improve the robustness of neural networks against adversarial examples focus on developing robust training methods, which update the weights of a neural network in a more robust direction. In this work, we take a step beyond training of the weight parameters and consider the problem of designing an adversarially robust neural architecture with high intrinsic robustness. We propose AdvRush, a novel adversarial-robustness-aware neural architecture search algorithm, based upon the finding that, independent of the training method, the intrinsic robustness of a neural network can be represented by the smoothness of its input loss landscape. Through a regularizer that favors candidate architectures with smoother input loss landscapes, AdvRush successfully discovers an adversarially robust neural architecture. Along with a comprehensive theoretical motivation for AdvRush, we conduct extensive experiments to demonstrate its efficacy on various benchmark datasets. Notably, on CIFAR-10, AdvRush achieves 55.91% robust accuracy under FGSM attack after standard training and 50.04% robust accuracy under AutoAttack after 7-step PGD adversarial training.
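The notion of input loss landscape smoothness used above can be illustrated with a simple finite-difference estimator: probe the loss along random input directions and average the slope magnitudes. A smaller score means a smoother (flatter) landscape around the input. This estimator is an illustrative stand-in, not the paper's actual smoothness measure or regularizer.

```python
import numpy as np

def input_loss_smoothness(loss_fn, x, eps=1e-3, n_dirs=8, seed=0):
    """Estimate input loss landscape smoothness via finite differences
    along random unit directions (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    base = loss_fn(x)
    slopes = []
    for _ in range(n_dirs):
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)               # unit-norm probe direction
        slopes.append(abs(loss_fn(x + eps * d) - base) / eps)
    return float(np.mean(slopes))

# A gently curved loss scores lower (smoother) than a sharply curved one
x = np.ones(10)
smooth = input_loss_smoothness(lambda z: 0.1 * np.sum(z**2), x)
sharp = input_loss_smoothness(lambda z: 10.0 * np.sum(z**2), x)
assert smooth < sharp
```

A search algorithm could use such a score as a penalty term when ranking candidate architectures, preferring those whose trained loss varies slowly under small input perturbations.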
Despite the increasing interest in neural architecture search (NAS), its significant computational cost is a hindrance to researchers. Hence, we propose to reduce the cost of NAS using proxy data, i.e., a representative subset of the target data, without sacrificing search performance. Even though data selection has been used across various fields, our evaluation of existing selection methods on the NAS algorithms offered by NAS-Bench-1shot1 reveals that they are not always appropriate for NAS and that a new selection method is necessary. By analyzing proxy data constructed using various selection methods through data entropy, we propose a novel proxy data selection method tailored for NAS. To empirically demonstrate its effectiveness, we conduct thorough experiments across diverse datasets, search spaces, and NAS algorithms. Consequently, NAS algorithms with the proposed selection discover architectures that are competitive with those obtained using the entire dataset. The proposed selection also significantly reduces the search cost: executing DARTS with it requires only 40 minutes on CIFAR-10 and 7.5 hours on ImageNet with a single GPU. Additionally, when the architecture searched on ImageNet using the proposed selection is transferred back to CIFAR-10, it yields a state-of-the-art test error of 2.4%. Our code is available at https://github.com/nabk89/NAS-with-Proxy-data.
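The entropy-based selection described above can be sketched as follows: score each example by the predictive entropy of a pre-trained model's softmax output and keep the most uncertain examples up to a budget. The keep-highest-entropy rule here is an illustrative assumption; the paper designs its own entropy-based sampling scheme.

```python
import numpy as np

def select_proxy_data(probs, budget):
    """Select a proxy subset by predictive entropy (illustrative sketch).

    `probs` is an (n_examples, n_classes) array of softmax outputs from a
    pre-trained model. Each row is scored by its Shannon entropy, and the
    indices of the `budget` highest-entropy (hardest) examples are returned.
    """
    eps = 1e-12                                         # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    order = np.argsort(-entropy)                        # most uncertain first
    return np.sort(order[:budget])

# Toy example: confident vs. uncertain predictions over 3 classes
probs = np.array([
    [0.98, 0.01, 0.01],   # very confident -> low entropy
    [0.34, 0.33, 0.33],   # uncertain      -> high entropy
    [0.80, 0.10, 0.10],
    [0.40, 0.40, 0.20],
])
print(select_proxy_data(probs, budget=2))  # → [1 3]
```

In a NAS pipeline, the search algorithm would then run only on the selected indices of the training set, which is where the reported wall-clock savings come from.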