Moving object detection in video sequences is a key task for marine scientists in exploration and monitoring applications. Videos acquired in the underwater environment are usually degraded by the physical properties of the water medium, relative to images acquired in air, and this degradation affects the performance of feature descriptors. In this study, a new feature descriptor, the multi-frame triplet pattern (MFTP), is proposed for underwater moving object detection. The MFTP encodes the structure of a local region from three sets of frames, computed from the local intensity differences between the centre pixel and its nine neighbours. The robustness of the proposed method is further increased by integrating it with colour and motion features. The performance of the proposed framework is evaluated in seven experiments on the Fish4Knowledge database for underwater moving object detection. The results show a significant improvement over state-of-the-art techniques on the evaluation measures.
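The abstract does not give the exact MFTP definition, but the core idea it describes, thresholded intensity differences between a centre pixel and its neighbours mapped to a three-valued code, can be sketched as a generic local triplet (ternary) pattern. The function name, the threshold `t`, and the use of the eight spatial neighbours of a 3x3 patch (rather than the paper's nine neighbours across three frame sets) are illustrative assumptions:

```python
import numpy as np

def triplet_code(patch, centre=(1, 1), t=5):
    """Encode a 3x3 patch as a triplet (ternary) pattern.

    Each neighbour maps to +1, 0, or -1 depending on whether its
    intensity lies above, within, or below a threshold band of width
    2*t around the centre pixel. (Illustrative single-frame sketch;
    the MFTP itself operates over three sets of frames.)
    """
    c = int(patch[centre])
    codes = []
    for i in range(patch.shape[0]):
        for j in range(patch.shape[1]):
            if (i, j) == centre:
                continue
            d = int(patch[i, j]) - c
            codes.append(1 if d > t else (-1 if d < -t else 0))
    return codes

patch = np.array([[10, 20, 30],
                  [40, 25, 10],
                  [90, 25, 24]], dtype=np.uint8)
print(triplet_code(patch, t=5))  # [-1, 0, 0, 1, -1, 1, 0, 0]
```

The three-valued code is more tolerant of sensor noise than a binary pattern, since small differences around the centre intensity fall into the neutral band instead of flipping between 0 and 1.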
A Wireless Sensor Network (WSN) is a self-configuring, highly flexible distributed network used for real-time monitoring, comprising a number of small, wireless, battery-operated autonomous sensor nodes connected to a common sink. For energy reasons, the network is generally divided into clusters, and all communication is routed through the cluster heads. Clock mismatch between nodes and clusters often leads to collisions, delay, and data loss. To overcome this problem, a novel RTS/CTS-based Relative Time Synchronization Protocol is proposed for Radio-Frequency Identification (RFID)-based WSNs. It achieves better performance through its energy efficiency and reduced number of service messages. Simulation results show a substantial improvement in the net throughput and reliability of the network compared with the GPS-based synchronization technique.
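The protocol's message format is not given in the abstract, but relative synchronization over an RTS/CTS handshake can be sketched with the standard two-way message-exchange estimate (as used in NTP-style and sender-receiver synchronization schemes); the four timestamp names below are illustrative assumptions:

```python
def estimate_offset(t1, t2, t3, t4):
    """Estimate clock offset and propagation delay from a two-way
    exchange, assuming a symmetric link:

      t1: RTS sent      (sender's clock)
      t2: RTS received  (receiver's clock)
      t3: CTS sent      (receiver's clock)
      t4: CTS received  (sender's clock)

    Returns (offset of receiver's clock relative to sender's,
    one-way propagation delay).
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = ((t4 - t1) - (t3 - t2)) / 2
    return offset, delay

# Receiver clock runs 10 units ahead; one-way delay is 2 units.
print(estimate_offset(0, 12, 15, 7))  # (10.0, 2.0)
```

Because only the relative offset is needed, each node can correct received timestamps without a global reference such as GPS, which is what saves energy and service messages.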
Image matching plays an important role in many fields such as pattern search and recognition [6], image analysis, robotics, and computer vision. It is a method for finding, in an image database, an image that matches or is similar to a given template picture; the template image can be thought of as a subset of the matching image. This paper presents an improved matching algorithm based on image feature points [5]. By searching for correct feature points and setting a bidirectional threshold value, the matching process can be implemented quickly and precisely with promising results. Visual C++ is used for the design and implementation. In future work, the feature-based algorithm can be modified to choose the feature-selection threshold adaptively depending on the image's content.
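The paper's exact bidirectional criterion is not spelled out in the abstract; a common interpretation is mutual (cross-check) nearest-neighbour matching with a distance threshold, sketched below. The function name, the `max_dist` parameter, and the brute-force distance matrix are illustrative assumptions:

```python
import numpy as np

def mutual_matches(desc_a, desc_b, max_dist=0.5):
    """Bidirectional feature matching: keep the pair (i, j) only if
    j is the nearest neighbour of descriptor i in B, i is the nearest
    neighbour of descriptor j in A, and their distance is below a
    threshold. This rejects most one-sided, ambiguous matches."""
    # Pairwise Euclidean distances between all descriptors.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)  # best match in B for each row of A
    b_to_a = d.argmin(axis=0)  # best match in A for each row of B
    pairs = []
    for i, j in enumerate(a_to_b):
        if b_to_a[j] == i and d[i, j] <= max_dist:
            pairs.append((i, int(j)))
    return pairs

a = np.array([[0.0, 0.0], [1.0, 1.0]])
b = np.array([[1.0, 1.1], [0.1, 0.0]])
print(mutual_matches(a, b))  # [(0, 1), (1, 0)]
```

The same cross-check idea appears in standard libraries (e.g. OpenCV's `BFMatcher` with `crossCheck=True`), which is one reason it is a plausible reading of "bidirectional threshold".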
Image de-fogging aims to restore images captured in degraded weather conditions such as fog, rain, marine haze, and airborne dust or pollutants. Various methods have been adopted to remove fog and other pollutants from an image; among the most widely used are the dark channel prior (DCP) and the detection and classification of foggy images. Haze is a combination of two components, air-light and direct attenuation (DA); it lowers image quality and causes problems in video surveillance (VS), navigation, target tracking, and similar applications. To remove it from an image, several de-fogging approaches are discussed in this paper. Image de-fogging can be achieved using single-image or multiple-image haze-removal techniques. The well-known methods reviewed here include the DCP, depth-map estimation, the guided filter, and transmission-based methods. Although these techniques are effective at removing haze from images, they have very high time complexity. The guided filter is an edge-preserving filter that provides region enhancement and smoothing; its output is a local linear transformation of the guidance image. This paper reviews classification and detection techniques for hazy images; the surveyed approaches mitigate the limitations of filtering and the DCP while preserving image quality. The existing image de-fogging methods are then described, covering image restoration, contrast improvement, and fusion-based de-fogging.
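The dark channel prior mentioned above rests on a simple observation: in haze-free outdoor images, most local patches contain at least one pixel that is very dark in at least one colour channel, so large dark-channel values signal haze. A minimal sketch of the dark channel computation (patch size and the naive loop are illustrative choices, not the survey's implementation):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image: for each pixel, the minimum
    intensity over all colour channels within a local square patch.
    In DCP-based de-fogging, this map is used to estimate the
    transmission and the air-light component of the haze model."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)          # per-pixel minimum over channels
    r = patch // 2
    padded = np.pad(min_rgb, r, mode='edge')
    out = np.empty((h, w), dtype=img.dtype)
    for i in range(h):                 # minimum filter over each patch
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

The double loop is O(h * w * patch^2), which illustrates why the abstract notes the high time complexity of DCP-style methods; practical implementations replace it with a separable or streaming minimum filter, and the guided filter is used to refine the resulting transmission map cheaply.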