Abstract. The main challenges in wireless vision sensor networks are low energy consumption, limited bandwidth and limited processing capability, and different approaches have been proposed to meet them. Research in wireless vision sensor networks has followed two main assumptions: the first sends all data to the central base station without local processing; the second conducts all processing locally at the sensor node and transmits only the final results. Our research focuses on partitioning the vision processing tasks between the sensor node and the central base station. In this paper we add an exploration dimension by performing some of the vision tasks, such as image capturing, background subtraction, segmentation and TIFF Group 4 compression, on an FPGA, while communication is handled by a microcontroller. The remaining vision processing tasks, i.e. morphology, labeling, bubble remover and classification, are processed on the central base station. Our results show that introducing an FPGA for some of the visual tasks results in a longer lifetime for the visual sensor node while the architecture remains programmable.

I. INTRODUCTION

Typically, Vision Sensor Nodes (VSNs) in Wireless Vision Sensor Networks (WVSNs) consist of a camera for acquiring images, a processor for local image processing and a transceiver for communicating the results to the central base station. Due to technological developments in image sensors, sensor networking, distributed processing, low-power processing and embedded systems, smart camera networks can perform complex tasks using limited resources such as batteries, a wireless link and limited storage. Such camera-based networks can easily be installed in outdoor areas where the availability of power is limited, access is difficult, and it is inconvenient to relocate the nodes or frequently change the batteries.
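The background subtraction and segmentation steps mentioned above can be illustrated in software. The following is a minimal sketch, not the paper's FPGA implementation: it assumes a static background model and a fixed intensity threshold (both hypothetical parameters chosen for illustration), and produces the kind of bi-level foreground mask that would subsequently be compressed.

```python
import numpy as np

def segment_foreground(frame, background, threshold=30):
    """Subtract a static background model from the current frame and
    binarize the absolute difference into a bi-level foreground mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)  # 1 = foreground pixel

# Toy 8-bit grayscale frames: a flat background and one bright object.
background = np.full((8, 8), 50, dtype=np.uint8)
frame = background.copy()
frame[2:5, 3:6] = 200  # simulated 3x3 object

mask = segment_foreground(frame, background)
print(mask.sum())  # number of foreground pixels
```

In a real node the threshold would be tuned to the scene and the background model updated over time; on the FPGA these per-pixel operations map naturally onto a streaming pipeline.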
VSNs have been designed and implemented on microcontrollers and microprocessors [1,4]. These solutions often have high power consumption and only moderate processing capability. Due to rapid development in semiconductor technology, the single-chip capacity of the Field Programmable Gate Array (FPGA) has increased greatly while its power consumption has decreased tremendously [15]. FPGA chips now contain many cores, which makes them ideal candidates for the design of VSNs, since a VSN must be capable of complex image processing, such as image compression, that demands considerable processing power; this requirement grows further with image size. Attention must be paid to the hardware/software co-design strategy to meet both the processing and power requirements of a VSN [8]. In [9] the authors designed a novel VSN based on a low-cost, low-power FPGA plus microcontroller System on Programmable Chip (SOPC). The authors in [10] implemented a computer vision algorithm in hardware and provided a comparison of hardware and software implementations of the same algorithm. It is c...
This is an accepted version of a paper published in IEEE Transactions on Circuits and Systems for Video Technology (Print). This paper has been peer-reviewed but does not include the final publisher proof-corrections or journal pagination.
A Wireless Visual Sensor Network (WVSN) is an emerging field that combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installing wired solutions is not feasible. Because of the wireless nature of the application, the energy budget in these networks is limited to the batteries. Due to this limited energy availability, processing at the Visual Sensor Nodes (VSNs) and communication from the VSNs to the server should consume as little energy as possible. Transmitting raw images wirelessly consumes a lot of energy and requires high communication bandwidth. Data compression methods reduce the data efficiently and are therefore effective in reducing the communication cost in a WVSN. In this paper, we compare the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images with a computational complexity suitable for the computational platforms used in WVSNs. These results can serve as a road map for selecting compression methods under different sets of constraints in a WVSN.
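The compression-efficiency measurement described above can be sketched as follows. Since CCITT Group 4 and JBIG2 codecs are not in the Python standard library, this illustration uses `zlib` (the DEFLATE algorithm behind Gzip, one general-purpose method of the kind compared here) on a toy bi-level image packed at 1 bit per pixel; the image contents and sizes are assumptions for illustration only.

```python
import zlib

# Build a toy bi-level image: 64x64, mostly white with one black rectangle.
W = H = 64
image = [[0] * W for _ in range(H)]
for r in range(10, 30):
    for c in range(20, 50):
        image[r][c] = 1

# Pack 8 pixels per byte (1 bit per pixel), MSB first, as a codec would see it.
raw = bytearray()
for row in image:
    for i in range(0, W, 8):
        byte = 0
        for bit in row[i:i + 8]:
            byte = (byte << 1) | bit
        raw.append(byte)

compressed = zlib.compress(bytes(raw), level=9)
ratio = len(raw) / len(compressed)
print(f"raw: {len(raw)} B, compressed: {len(compressed)} B, ratio: {ratio:.1f}")
```

Measuring both the compressed size and the time or operation count of each codec on the node's actual platform is what determines suitability for a WVSN.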
Abstract-A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. The VSNs acquire images of the area of interest, perform some local processing on these images and transmit the results using an embedded wireless transceiver. The energy consumed in transmitting the results wirelessly is correlated with the amount of information being transmitted. The images acquired by the VSNs contain a huge amount of data due to the many kinds of redundancy in images. Suitable bi-level image compression standards can efficiently reduce the amount of information in the images and are thus effective in reducing the communication energy consumption in the WVSN. However, the compression capability of a bi-level image compression standard is limited by its underlying compression algorithm. Further data reduction can be achieved by detecting Regions of Interest (ROIs) in the bi-level images and then coding these ROIs with a bi-level image compression method. We explored the compression performance of the lossless ROI detection and coding method for various kinds of changes, such as different shapes, locations and numbers of objects in a continuous set of frames. CCITT Group 4, JBIG2 and Gzip were used for coding the detected ROIs. We concluded that CCITT Group 4 is the better choice for coding the ROIs in bi-level images because of its comparatively good compression performance and lower computational complexity. This paper is intended as a resource for researchers interested in reducing the amount of data in bi-level images for energy-constrained WVSNs.
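The ROI detection and coding idea above can be sketched in a few lines: find the tightest bounding box around the foreground pixels of a bi-level mask, crop to that box, and compress only the crop. This is an illustrative sketch, not the paper's method; `zlib` stands in for CCITT Group 4 (which has no standard-library Python codec), and the single-object mask is a hypothetical input.

```python
import zlib

def roi_bounding_box(mask):
    """Return (top, left, bottom, right) of the tightest box around all
    foreground (1) pixels, or None when the mask is empty."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    if not rows:
        return None
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1

# Toy 32x32 bi-level frame with a single 4x6 object.
mask = [[0] * 32 for _ in range(32)]
for r in range(8, 12):
    for c in range(10, 16):
        mask[r][c] = 1

top, left, bottom, right = roi_bounding_box(mask)
roi = [row[left:right] for row in mask[top:bottom]]

# Code only the cropped ROI; zlib stands in for CCITT Group 4 here.
payload = zlib.compress(bytes(b for row in roi for b in row))
print((bottom - top, right - left), len(payload))
```

The box coordinates must be sent alongside the coded ROI so the receiver can place it back into the full frame; with several objects per frame, one box per connected component is detected and coded in the same way.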