Deep neural networks are widely used in machine learning applications. However, large neural network models can be difficult to deploy on mobile devices with limited power budgets. To solve this problem, we propose Trained Ternary Quantization (TTQ), a method that reduces the precision of weights in neural networks to ternary values. This method causes very little accuracy degradation and can even improve the accuracy of some models (the 32-, 44-, and 56-layer ResNets) on CIFAR-10 and AlexNet on ImageNet. Moreover, our AlexNet model is trained from scratch, so training it is as easy as training a normal full-precision model. We highlight that our trained quantization method learns both the ternary values and the ternary assignments. During inference, only the ternary values (2-bit weights) and scaling factors are needed, so our models are nearly 16× smaller than full-precision models. Our ternary models can also be viewed as sparse binary-weight networks, which can potentially be accelerated with custom circuits. Experiments on CIFAR-10 show that ternary models obtained by our trained quantization method outperform the full-precision ResNet-32, -44, and -56 models by 0.04%, 0.16%, and 0.36%, respectively. On ImageNet, our model outperforms the full-precision AlexNet model by 0.3% Top-1 accuracy and outperforms previous ternary models by 3%.
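The core idea above can be sketched in a few lines: weights whose magnitude falls below a layer-wise threshold are zeroed, and the remaining positive and negative weights are mapped to two per-layer scaling factors. This is an illustrative sketch only; in TTQ the scaling factors are learned by gradient descent, whereas here they are simply initialized from the mean absolute value of the weights they cover, and the `threshold_ratio` heuristic is an assumption for illustration.

```python
import numpy as np

def ternarize(weights, threshold_ratio=0.05):
    """Map full-precision weights to ternary values {-Wn, 0, +Wp}.

    Illustrative sketch of ternary quantization: `threshold_ratio` and
    the mean-based initialization of Wp/Wn are assumptions; TTQ itself
    learns Wp and Wn during training.
    """
    t = threshold_ratio * np.max(np.abs(weights))  # layer-wise threshold
    pos_mask = weights > t
    neg_mask = weights < -t
    # Initialize the two scaling factors from the weights they replace.
    Wp = np.abs(weights[pos_mask]).mean() if pos_mask.any() else 0.0
    Wn = np.abs(weights[neg_mask]).mean() if neg_mask.any() else 0.0
    ternary = np.zeros_like(weights)
    ternary[pos_mask] = Wp
    ternary[neg_mask] = -Wn
    return ternary, Wp, Wn
```

At inference time only the 2-bit assignments and the two scalars per layer need to be stored, which is where the roughly 16× compression over 32-bit weights comes from.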
Rapid development of medical imaging, such as cellular tracking, has increased the demand for “live” contrast agents. This study provides the first experimental evidence demonstrating that transfection of the clMagR/clCry4 gene can impart magnetic resonance imaging (MRI) T2-contrast properties to living prokaryotic Escherichia coli (E. coli) in the presence of Fe3+ through the endogenous formation of iron oxide nanoparticles. The transfected clMagR/clCry4 gene markedly promoted uptake of exogenous iron by E. coli, achieving an intracellular co-precipitation condition and formation of iron oxide nanoparticles. This study will stimulate further exploration of the biological applications of clMagR/clCry4 in imaging studies.
This paper describes SatIn, a hardware accelerator for boolean satisfiability (SAT), an important problem in many domains including verification, security analysis, and planning. SatIn is based on a distributed associative array that performs short, atomic operations which can be composed into high-level operations. To overcome scaling limitations imposed by wire delay, we extended the algorithms used in software solvers to function efficiently on a distributed set of nodes communicating with message passing. A cycle-level simulation on real benchmarks shows that SatIn achieves an average 72× speedup over Glucose [1], the winner of the 2016 SAT competition, with the potential to reach a 113× speedup using two contexts. To quantify SatIn's physical requirements, we placed and routed a single clause using the Synopsys 32 nm educational development kit. We were able to meet a 1 ns cycle constraint, with our target clause fitting in 4867 µm² and consuming 63.8 µW of dynamic power; with a network, this corresponds to 100k clauses consuming 8.3 W of dynamic power (not including leakage or global clock power) in a 500 mm² 32 nm chip.
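To make the clause-level work concrete, the sketch below shows the kind of short, atomic operation a per-clause node in a SAT solver must evaluate: given a partial variable assignment, decide whether the clause is satisfied, in conflict, unit (forcing one literal), or still unresolved. This is a generic software illustration of clause evaluation, not SatIn's actual microarchitecture; the clause/assignment encoding here is an assumption for illustration.

```python
def clause_status(clause, assignment):
    """Evaluate one clause under a partial assignment.

    `clause` is a list of non-zero integer literals (DIMACS-style:
    positive for a variable, negative for its negation); `assignment`
    maps assigned variables to True/False. Returns one of:
      ("satisfied", None)  - some literal is true
      ("conflict", None)   - all literals are assigned and false
      ("unit", lit)        - exactly one literal is unassigned,
                             the rest are false (lit must be made true)
      ("unresolved", None) - otherwise
    """
    unassigned = []
    for lit in clause:
        var = abs(lit)
        if var not in assignment:
            unassigned.append(lit)
        elif assignment[var] == (lit > 0):
            return ("satisfied", None)  # a true literal satisfies the clause
    if not unassigned:
        return ("conflict", None)
    if len(unassigned) == 1:
        return ("unit", unassigned[0])
    return ("unresolved", None)
```

In a software solver this check runs sequentially over watched clauses; the accelerator's premise is that many such independent clause evaluations can proceed in parallel across distributed nodes exchanging assignment updates as messages.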