Abstract: We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF), which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data pro…
“…To model typical image processing applications, in HPDF the data production and consumption rate is the same along dataflow graph edges. In [14], the vision algorithms of gesture recognition and face detection were modeled using HPDF, and in [15], the HPDF model was used for mapping a gesture recognition algorithm onto an FPGA board.…”
Section: B. Dataflow and Actor Model (mentioning)
confidence: 99%
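The defining property quoted above — equal production and consumption rates on every edge, even when that common rate is a runtime parameter — can be illustrated with a minimal sketch. This is not code from the cited papers; the edge representation and the `n_faces` parameter are hypothetical, chosen to mirror the face-detection example where the token count is unknown until runtime.

```python
# Illustrative HPDF-style consistency check (hypothetical representation):
# each edge carries a (producer_rate, consumer_rate) pair, where a rate is
# either a fixed int or a symbolic runtime parameter (a string).

def is_homogeneous(edges):
    """True if production and consumption rates match on every edge."""
    return all(prod == cons for prod, cons in edges)

# A face-detection-style pipeline: the rate "n_faces" is resolved only at
# runtime, but production and consumption still match edge by edge.
pipeline = [("n_faces", "n_faces"), (1, 1), ("n_faces", "n_faces")]
print(is_homogeneous(pipeline))            # True
print(is_homogeneous([(2, 1), (1, 1)]))    # False: rates differ on an edge
```

The point of the homogeneity restriction is that graph-level analysis stays simple even though the common rate is dynamic.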
“…This is the reason why dataflow research is largely done within the embedded systems community. Though [14] and [15] apply dataflow modeling techniques to computer vision applications, the target use case remains the design of image processing hardware, and therefore one requires strict formal properties, such as bounded memory requirements and efficient synthesis solutions [15], which restrict the expressiveness of the dataflow models. Dynamic dataflow models therefore seem more suitable for describing more complex data-dependent computer vision applications.…”
Section: E. Limitations of the Existing Approaches (mentioning)
Abstract: The need for more flexible manufacturing systems stimulates the adoption of industrial robots in combination with intelligent computing resources and sophisticated sensing technologies. In this context, industrial vision systems play the role of an inherently flexible sensing means that can be used for a variety of tasks within automated inspection, process control, and robot guidance. When vision sensing is used within a large, complex system, it is particularly important to manage the complexity by introducing appropriate formal methods. This paper surveys the challenges arising during the design, implementation, and application of industrial vision systems, and proposes an approach, dubbed Discrete Event Dataflow (DEDF), that allows vision dataflow to be formally specified in the context of larger systems.
“…However, these extensions are based on imperative languages (e.g., C, C++, Fortran) that do not provide mechanisms to model specific signal flow graph topologies. On the contrary, signal-processing-oriented dataflow MoCs are widely used for the specification of data-driven signal flow graphs in a wide range of application areas, including video decoding [13], telecommunication [14], [15], and computer vision [16]. The popularity of dataflow MoCs in the design and implementation of signal processing systems is due largely to their analyzability and their natural expression of the concurrency in signal processing algorithms, which makes them suitable for exploiting the parallelism offered by MPSoCs.…”
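The "analyzability" claimed for dataflow MoCs in the excerpt above can be made concrete with the classic synchronous dataflow (SDF) balance equations: for each edge a→b, prod(a)·q[a] = cons(b)·q[b], and the smallest positive integer solution q (the repetitions vector) yields a bounded-memory periodic schedule. The following sketch is illustrative only; the three-actor graph and its rates are hypothetical.

```python
from fractions import Fraction
from math import lcm

# Solve the SDF balance equations by propagating rational firing ratios
# from an arbitrary seed actor, then scaling to smallest positive integers.
# Assumes the graph is connected and rate-consistent.

def repetitions_vector(actors, edges):
    """edges: list of (src, dst, prod_rate, cons_rate)."""
    q = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for src, dst, prod, cons in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * prod / cons   # balance: prod*q[src] == cons*q[dst]
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * cons / prod
                changed = True
    scale = lcm(*(f.denominator for f in q.values()))
    return {a: int(f * scale) for a, f in q.items()}

# Hypothetical chain: A produces 2 tokens, B consumes 1 and produces 3,
# C consumes 2 per firing.
print(repetitions_vector(["A", "B", "C"],
                         [("A", "B", 2, 1), ("B", "C", 3, 2)]))
# {'A': 1, 'B': 2, 'C': 3}
```

A solvable balance system is exactly the formal property that lets compilers bound buffer sizes and extract parallelism statically, which is what the excerpt credits for the model's popularity on MPSoCs.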
This paper introduces a novel multicore scheduling method that leverages a parameterized dataflow Model of Computation (MoC). This method, named Just-In-Time Multicore Scheduling (JIT-MS), aims to efficiently schedule Parameterized and Interfaced Synchronous DataFlow (PiSDF) graphs on multicore architectures. It exploits features of PiSDF to find locally static regions that exhibit predictable communications. The paper uses a multicore signal processing benchmark to demonstrate that the JIT-MS scheduler can exploit more parallelism than a conventional multicore task scheduler based on task creation and dispatch. Experimental results of JIT-MS on an 8-core Texas Instruments Keystone Digital Signal Processor (DSP) are compared with those obtained from the OpenMP implementation provided by Texas Instruments. Results show latency improvements of up to 26% for multicore signal processing systems.
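The "locally static region" idea in the abstract above can be sketched as follows: a symbolic rate blocks static scheduling, but once the region's parameters are resolved at runtime, the subgraph becomes an ordinary static-rate SDF graph and a schedule can be computed just in time. This is a loose illustration, not the JIT-MS implementation; the actor names and the parameter `N` are hypothetical.

```python
# Illustrative parameter resolution for a PiSDF-style region (hypothetical
# names): rates are ints or symbolic parameter names; substituting runtime
# parameter values turns the region into a static-rate graph.

def resolve(edges, params):
    """edges: list of (src, dst, prod_rate, cons_rate); params maps
    symbolic rate names to runtime integer values."""
    return [(src, dst, params.get(prod, prod), params.get(cons, cons))
            for src, dst, prod, cons in edges]

# Rates depend on the parameter "N" until it is set for this iteration.
symbolic = [("Src", "FFT", "N", "N"), ("FFT", "Sink", 1, 1)]
static = resolve(symbolic, {"N": 128})
print(static)  # [('Src', 'FFT', 128, 128), ('FFT', 'Sink', 1, 1)]
```

After resolution, the static-rate region can be scheduled with standard SDF techniques until its parameters change again.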
“…The advantages and disadvantages of FPGA technology and its suitability for computer vision tasks were discussed in detail in [10], and its optimization in [11]. A design methodology for mapping computer vision algorithms onto an FPGA through the use of a coarse-grain reconfigurable dataflow graph was discussed in detail in [12].…”
This is an accepted version of a paper published in IEEE Transactions on Circuits and Systems for Video Technology (Print). This paper has been peer-reviewed but does not include the final publisher proof corrections or journal pagination.