Abstract. The challenges associated with wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capabilities. Different approaches have been proposed to meet these challenges. Research in wireless vision sensor networks has focused on two assumptions: the first sends all data to the central base station without local processing, while the second performs all processing locally at the sensor node and transmits only the final results. Our research focuses on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we add an exploration dimension by performing some of the vision tasks, such as image capturing, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA, while communication runs on a microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble remover and classification, are processed at the central base station. Our results show that introducing an FPGA for some of the visual tasks results in a longer lifetime for the vision sensor node while the architecture remains programmable.
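The node-side stages named above (background subtraction followed by segmentation) can be illustrated with a minimal sketch. This is not the authors' FPGA implementation; it assumes a simple frame-differencing model with a hypothetical threshold parameter, chosen only for illustration.

```python
import numpy as np

def subtract_background(frame, background, threshold=25):
    """Frame-differencing background subtraction followed by binary
    segmentation. Illustrative sketch only; the paper's actual
    algorithm and threshold are not specified here."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = foreground pixel

# Toy example: a bright 2x2 object appears against a dark background.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200
mask = subtract_background(frame, background)
```

In a hardware realization this per-pixel compare maps naturally onto a streaming FPGA pipeline, which is one reason the partitioning places it on the node.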
I. INTRODUCTION

Typically, vision sensor nodes (VSN) in wireless vision sensor networks (WVSN) consist of a camera for acquiring images, a processor for local image processing and a transceiver for communicating the results to the central base station. Due to technological developments in image sensors, sensor networking, distributed processing, low power processing and embedded systems, smart camera networks can perform complex tasks using limited resources such as batteries, a wireless link and limited storage. Such camera based networks can easily be installed in outdoor areas where the availability of power is limited, access is difficult and it is inconvenient to modify the locations of the nodes or frequently change the batteries. VSN have been designed and implemented on microcontrollers and microprocessors [1,4]. Often these solutions have high power consumption and moderate processing capabilities. Due to rapid development in semiconductor technology, the single chip capacity of the Field Programmable Gate Array (FPGA) has increased greatly while its power consumption has decreased tremendously [15]. Present FPGA chips consist of many cores, which makes them ideal candidates for the design of VSN, since a VSN needs to be capable of performing complex image processing, such as image compression, which is computationally demanding; the processing requirement grows further with increased image size. Attention must be paid to the hardware/software co-design strategy to meet both the processing and power requirements of a VSN [8]. In [9] the authors designed a novel VSN based on a low cost, low power FPGA plus microcontroller System on Programmable Chip (SOPC). The authors in [10] have implemented a computer vision algorithm in hardware and provided a comparison of hardware and software implementations of the same algorithm. It is c...
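Of the base-station tasks mentioned in the abstract, connected-component labeling is the one most easily shown in a short sketch. The following is a plain 4-connected BFS labeling of a binary mask, offered as an illustration of the step, not as the authors' implementation; the function name and connectivity choice are assumptions.

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary mask via BFS.
    Illustrative of the base-station 'labeling' stage; real systems
    typically use an optimized two-pass algorithm."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    count = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and labels[r, c] == 0:
                count += 1                      # start a new component
                labels[r, c] = count
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count
```

Steps like this are data dependent and irregular, which is part of the rationale for running them on the base station rather than in the node's streaming hardware.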
This is an accepted version of a paper published in IEEE Transactions on Circuits and Systems for Video Technology (Print). This paper has been peer-reviewed but does not include the final publisher proof-corrections or journal pagination.
The current trend in embedded vision systems is to propose bespoke solutions for specific problems, as each application has different requirements and constraints. There is no widely used model or benchmark that aims to facilitate generic solutions in embedded vision systems. Providing such a model is a challenging task due to the large number of use cases, environmental factors, and available technologies. However, common characteristics can be identified to propose an abstract model. Indeed, the majority of vision applications focus on the detection, analysis and recognition of objects. These tasks can be reduced to vision functions which can be used to characterize vision systems. In this paper, we present the results of a thorough analysis of a large number of different types of vision systems. This analysis led us to the development of a system taxonomy, in which a number of vision functions, as well as their combinations, characterize embedded vision systems. To illustrate the use of this taxonomy, we have tested it against a real vision system that detects magnetic particles in a flowing liquid to predict and avoid critical machinery failure. The proposed taxonomy is evaluated using a quantitative parameter which shows that it covers 95 percent of the investigated vision systems and that its flow is ordered for 60 percent of the systems. This taxonomy will serve as a tool for the classification and comparison of systems and will enable researchers to propose generic and efficient solutions for the same class of systems.
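The coverage figure quoted above can be understood as a simple set-inclusion check. The sketch below is a hypothetical formulation of such a coverage parameter, assuming each system is described by the list of vision functions it uses; the paper's exact metric may differ.

```python
def coverage(systems, taxonomy_functions):
    """Fraction of systems whose vision functions all appear in the
    taxonomy. Hypothetical formulation for illustration only."""
    taxonomy = set(taxonomy_functions)
    covered = sum(1 for fns in systems if set(fns) <= taxonomy)
    return covered / len(systems)

# Toy example: one of two systems is fully described by the taxonomy.
systems = [["capture", "segment"], ["capture", "spectroscopy"]]
ratio = coverage(systems, ["capture", "segment", "label"])
```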