Abstract
In general, visual sensors are applied to build virtual view images. As the number of visual sensors increases, the quantity and quality of the information improve. However, generating view images is a challenging task in a Wireless Visual Sensor Network environment due to energy restrictions, computational complexity, and bandwidth limitations. Hence, this paper presents a new method of generating virtual view images from selected cameras on a Wireless Visual Sensor Network. The aim of the paper is to meet bandwidth and energy limitations without reducing information quality. The experiment results showed that this method could minimize the number of transmitted images while retaining sufficient information.
Keywords: wireless visual sensor network, camera selection method, virtual view
Introduction

Wireless Visual Sensor Network (WVSN) is a system with the capability to communicate, receive, and process signals [1]. A general WVSN architecture, as shown in Figure 1, consists of nodes containing visual sensors that are spread across an area of visual observation. Multiple cameras are used as visual sensors in WVSN to provide multi-view services, multi-resolution imaging, environmental monitoring, and surveillance systems. In a WVSN application, visual sensor nodes send captured images and video to a sink for further processing suited to the application's purposes, which are designed to meet limited resources such as energy and processing capability while still giving users optimal information.

The more sensors are used to reconstruct the scenery, the better the results that will be obtained. However, the limited energy, processing capability, and transmission bandwidth of WVSNs are obstacles to receiving maximum information from a multi-camera network. To solve these problems, we need a camera selection method that gives maximum information with a minimum amount of data. There are two ways to minimize transmitted data: reducing the number of sensors and maximizing image compression. In other words, we need to select a few cameras, from all available ones, that carry maximum information.

Algorithms for automatic visual sensor selection on a network have been designed for various purposes. In studies [2] and [3], cameras communicate with one another to determine the number of active cameras needed to cover the expected scene. Two algorithms were developed in that research: (1) a distributed processing algorithm, in which background and foreground segmentation is followed by human face detection, resulting in less transmitted data; and (2) a centralized algorithm, in which the base station uses information obtained from distributed processing to determine the shape of the scene. In this