This paper presents an approach for reducing the testbench implementation effort of SystemC designs, thus enabling early verification success. We propose an automatic Universal Verification Methodology (UVM) environment that enables assertion-based, coverage-driven, and functional verification of SystemC models. The aim of this verification environment is to ease and speed up the verification of SystemC IPs by automatically producing a complete and working UVM testbench with all sub-environments constructed and blocks connected. Our experiments show that the proposed environment can rapidly be integrated into a SystemC design while improving its coverage and assertion-based verification.
We present a framework for fast prototyping of embedded video applications. Starting with a high-level executable specification written in OpenCV, we apply semi-automatic refinements of the specification at various levels (TLM and RTL), the lowest of which is a system-on-chip prototype on an FPGA. The refinement leverages the structure of image processing applications to map high-level representations to lower-level implementations with limited user intervention. Our framework integrates the computer vision library OpenCV for software, SystemC/TLM for high-level hardware representation, and UVM and QEMU-OS for virtual prototyping and verification into a single, uniform design and verification flow. With applications in the fields of driving assistance and object recognition, we demonstrate the usability of our framework in producing performant and correct designs.
IP-based design is used to tackle complexity and reduce time-to-market in systems-on-chip with high-performance requirements. Component integration, the main part of this process, is a complicated and time-consuming task, largely due to interfacing issues. Standard interfaces can help reduce the integration effort. However, existing implementations use more resources than necessary and lack a formalism to capture and manipulate resource requirements and design constraints. In this paper, we propose a novel interface, the Component Interconnect and Data Access (CIDA), and its implementation, based on the interface automata formalism. CIDA can be used to capture system-on-chip architectures, with a primary focus on video processing applications, which are mostly based on a data-streaming paradigm with occasional direct memory accesses. We introduce the notion of component-interface clustering for resource reduction and provide a method to automate this process. With real-life video processing applications implemented on FPGAs, we show that our approach can reduce resource usage (#slices) by an average of 20% and power consumption by 5% compared to implementations based on vendor interfaces.
In this paper we present an approach for designing an adaptive video compression system that allows regions of interest to be identified and the picture size and quality to be configured to optimize performance for a system's computation and communication capabilities. We present an FPGA prototype of the complete system, as well as a prototyping environment that allows users to easily explore and evaluate design alternatives. Design exploration can be performed on the Motion JPEG coding standard, with an operating frequency of up to 52 MHz, a frame rate of over 37 fps at a resolution of 720 × 480, and a compression ratio of 47:1 at 0.51 bits per pixel.
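The reported figures hang together on a quick back-of-the-envelope check, assuming a 24-bit color source and one pixel processed per clock cycle (neither assumption is stated in the abstract itself):

```python
# Sanity check of the reported Motion JPEG figures.
# Assumption (not stated in the abstract): 24-bit color source pixels,
# one pixel processed per clock cycle.
SOURCE_BPP = 24          # bits per uncompressed pixel
COMPRESSED_BPP = 0.51    # bits per pixel after compression (reported)
WIDTH, HEIGHT = 720, 480
CLOCK_HZ = 52e6          # reported operating frequency

ratio = SOURCE_BPP / COMPRESSED_BPP       # ~47.1, matching the 47:1 claim
pixels_per_frame = WIDTH * HEIGHT         # 345,600 pixels
# At 52 MHz and one pixel per cycle, the pipeline could sustain
# roughly 150 frames/s, comfortably above the reported 37 fps.
max_fps = CLOCK_HZ / pixels_per_frame

print(f"compression ratio ~ {ratio:.1f}:1")
print(f"upper-bound frame rate ~ {max_fps:.0f} fps")
```

The 37 fps figure thus sits well inside the theoretical pixel-rate budget, leaving headroom for multi-cycle pipeline stages.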
Design verification accounts for up to 80% of the time in the design flow of hardware/software applications. To reduce this duration, successive transformations are performed across different levels of abstraction until the final implementation. We propose a rapid prototyping camera system based on FPGAs, which allows designs to be explored and evaluated in realistic environments. Our focus is on the design of a generic embedded hardware/software architecture with a symbolic representation of the input application to allow programmability at a very high abstraction level. Hardware/software partitioning is facilitated through the integration of OpenCV and SystemC in the same environment for rapid simulation, and of OpenCV and Linux in the run-time environment.
A synthesis approach based on Answer Set Programming (ASP) for heterogeneous system-on-chips to be used in distributed camera networks is presented. In such networks, the tight resource limitations represent a major challenge for application development. Starting with a high-level description of applications, the physical constraints of the target devices, and the specification of the network configuration, our goal is to produce optimal computing infrastructures made of a combination of hardware and software components for each node of the network. Optimization aims at maximizing speed while minimizing chip area and power consumption. Additionally, by performing the architecture synthesis simultaneously for all cameras in the network, we are able to minimize the overall utilization of communication resources and consequently reduce power consumption. Because of its reconfiguration capabilities, a Field Programmable Gate Array (FPGA) has been chosen as the target device, which enhances the exploration of several design alternatives. We present several realistic network scenarios to evaluate and validate the proposed synthesis approach.
ACM Reference Format: Franck Yonga, Michael Mefenza, and Christophe Bobda. 2015. ASP-based encoding model of architecture synthesis for smart cameras in distributed networks.
No abstract
Tracking several objects across multiple cameras is essential for collaborative monitoring in distributed camera networks. The tractability of the related optimization, which aims at tracking a maximal number of important targets, decreases with the growing number of objects moving across cameras. To tackle this issue, a viable model and a sound object representation are required that can leverage the power of existing tools at run-time for fast computation of solutions. In this paper, we provide a formalism for object tracking across multiple cameras. A first assignment of objects to cameras is performed at start-up to initialize a set of distributed trackers in embedded cameras. We model the run-time self-coordination problem with target handover by encoding the problem as a run-time binding of objects to cameras, an approach that has successfully been used in high-level system synthesis. Our model of distributed tracking is based on Answer Set Programming (ASP), a declarative programming paradigm that helps formulate the distribution and target handover problem as a search problem, such that, using existing answer set solvers, we produce stable solutions in real time by incrementally solving time-based encoded ASP problems. The effectiveness of the proposed approach is demonstrated on a 3-node camera network deployment.