Model checkers and sequential equivalence checkers have become essential tools for the semiconductor industry in recent years. The Hardware Model Checking Competition (HWMCC) was founded in 2006 with the purpose of intensifying research interest in these technologies and establishing more of a science behind them. For example, the competition provided a standardized benchmark format, a challenging and diverse set of industrially relevant public benchmarks, and, as a consequence, a significant motivation for additional research to advance the state of the art in model checkers for these verification problems. This paper provides a historical perspective and an analysis of the tools and benchmarks submitted to the competition. It also presents a detailed analysis of the results collected in the 2014 edition of the contest, showing relations among tools, and between tools and benchmarks. Finally, it proposes a list of considerations, lessons learned, and hints for both future organizers and competitors.
The realization of a deep neural architecture on a mobile platform is challenging, but it can open up a number of possibilities for visual analysis applications. A neural network can be realized on a mobile platform by exploiting the computational power of the embedded GPU and simplifying the flow of a neural architecture trained on a desktop workstation or a GPU server. This paper presents an embedded-platform-based Italian license plate detection and recognition system using deep neural classifiers. In this work, the trained parameters of a highly precise automatic license plate recognition (ALPR) system are imported and used to replicate the same neural classifiers on an Nvidia Shield K1 tablet. A CUDA-based framework is used to realize these neural networks. The flow of the trained architecture is simplified to perform license plate recognition in real time. Results show that the tasks of plate and character detection and localization can be performed in real time on a mobile platform by simplifying the flow of the trained architecture, although the simplified architecture incurs a corresponding decrease in accuracy.
Deep convolutional neural networks achieve state-of-the-art performance in image classification. The computational and memory requirements of such networks are, however, huge, which is problematic on embedded devices due to their resource constraints. Most of this complexity derives from the convolutional layers and, in particular, from the matrix multiplications they entail. This paper proposes a complete approach to image classification covering the common layers used in neural networks. Namely, the proposed approach relies on a heterogeneous CPU-GPU scheme for performing convolutions in the transform domain. The Compute Unified Device Architecture (CUDA)-based implementation of the proposed approach is evaluated over three different image classification networks on a Tegra K1 CPU-GPU mobile processor. Experiments show that the presented heterogeneous scheme achieves a 50× speedup over the CPU-only reference and outperforms a GPU-based reference by 2×, while reducing power consumption by nearly 30%.
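The key idea behind transform-domain convolution is the convolution theorem: a spatial convolution becomes an element-wise product in the frequency domain, which reduces the large matrix-multiplication workload of convolutional layers. The paper's actual CUDA kernels are not shown here; as a minimal illustrative sketch (function names are hypothetical, and numpy's FFT stands in for the GPU transform), the equivalence can be demonstrated as follows:

```python
import numpy as np

def conv2d_direct(image, kernel):
    """Reference: direct 2D 'valid' convolution (kernel flipped)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel[::-1, ::-1])
    return out

def conv2d_fft(image, kernel):
    """Same convolution computed as an element-wise product in the frequency domain."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    # Zero-pad both operands to the full linear-convolution size to avoid wrap-around.
    fh, fw = ih + kh - 1, iw + kw - 1
    F_img = np.fft.rfft2(image, s=(fh, fw))
    F_ker = np.fft.rfft2(kernel, s=(fh, fw))
    full = np.fft.irfft2(F_img * F_ker, s=(fh, fw))
    # Crop the full result down to the 'valid' region to match the direct version.
    return full[kh - 1:ih, kw - 1:iw]
```

In a heterogeneous scheme of the kind the paper describes, the transform and element-wise product would run on the GPU while the CPU handles the remaining layers; the sketch above only verifies the mathematical equivalence the approach relies on.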
Recent developments in embedded processors have enabled heterogeneous computing on mobile devices using open-access general-purpose computing languages. Following the MPEG CDVS standard, this paper presents an efficient feature computation phase, completely implemented on embedded devices supporting the OpenCL framework. Building on our contribution to the MPEG CDVS standard, we present the newly introduced CDVS detector and its design for parallel multicore GPUs. We show how to adjust algorithmic choices and implementation details to target the intrinsic characteristics of the selected embedded platforms. We compare our GPU implementation of the ALP keypoint detector with the CPU-based implementation of the CDVS standard. We present data on different GPUs showing that our solution is up to 7× faster than the CPU version. In summary, one of the main features of our algorithm is that it is fast enough to open new visual search scenarios relying entirely on real-time on-board computation with no data transfer.