<p>In computer vision, background extraction is a promising technique. It is applied in many real-time applications, in diverse environments, and under a variety of challenges. Background extraction is the most popular technique for detecting moving foreground objects in footage taken by stationary surveillance cameras. High performance is demanded from several perspectives, and choosing a suitable background extraction model plays a major role in the performance measures of time, memory, and accuracy.</p><p>In this article we present an extensive review of background extraction in which we attempt to cover all related topics. We list the four process stages of background extraction and consider several well-known models, starting with conventional models and ending with state-of-the-art models. The review also examines model environments, whether human activities, nature, or sports, and highlights some of the real-time applications where background extraction is adopted. Many challenges are addressed with respect to the environment, the camera, foreground objects, the background, and computation time.</p><p>In addition, this article provides handy tables of common datasets and libraries used in background extraction experiments. Finally, we illustrate performance evaluation with a table of performance metrics used to measure the robustness of a background extraction model against other models in terms of time, accuracy, and required memory.</p>
Nowadays, electronic applications that use Internet technology as a transmission medium are replacing many traditional data and information management processes. As a result, these data and information suffer from different types of attacks that aim to destroy or steal them, and some of these attacks can halt the whole system. In this paper, a cyber-attack detection method based on deep learning is proposed for Wireless Sensor Networks (WSNs). The method uses the behavior of the WSN's nodes as well as data transmission based on the MQTT protocol. Using a deep learning model improves detection accuracy compared to traditional machine learning methods. The results demonstrate the efficiency of combining CNN and LSTM deep learning techniques, reaching 96.02% training accuracy and 95.08% validation accuracy on the dataset of [1]. The machine learning model in [1] obtains an accuracy between 87% and 91% on the augmented dataset.
In an encryption scheme, the message or information, referred to as plaintext, is encrypted using an encryption algorithm, generating ciphertext that can only be read if decrypted. The proposed algorithm for image protection is based on the Serpent block cipher in a Feistel network structure; because of its number of rounds and its linear transformation function, and because it uses a block size of 512 bits rather than 128 bits, it is more complex for an attacker or unauthorized person to recover the original images. In the modified Serpent, the correlation coefficient decreases below that of the traditional Serpent algorithm: for a 64×64-pixel bitmap image, the gray-level correlation coefficient between the plain image and the cipher image is 0.0023 for the modified Serpent versus 0.0814 for the traditional Serpent.
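The correlation coefficient cited above is the standard Pearson correlation between the pixel values of the plain image and those of the cipher image; a minimal sketch (not the paper's implementation) treating each image as a flat list of gray-level values:

```python
def correlation_coefficient(plain, cipher):
    """Pearson correlation between two equal-length pixel sequences.

    Values near 0 indicate the cipher image is statistically
    unrelated to the plain image, which is the desired property
    of a good image cipher.
    """
    n = len(plain)
    mean_p = sum(plain) / n
    mean_c = sum(cipher) / n
    cov = sum((p - mean_p) * (c - mean_c) for p, c in zip(plain, cipher))
    var_p = sum((p - mean_p) ** 2 for p in plain)
    var_c = sum((c - mean_c) ** 2 for c in cipher)
    return cov / ((var_p * var_c) ** 0.5)
```

A value close to 1 (identical images) would indicate a weak cipher; the reported drop from 0.0814 to 0.0023 means the modified cipher's output is even less correlated with the input.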
Background subtraction is a leading technique for detecting moving objects in video surveillance systems. Various background subtraction models have been applied to tackle different challenges in many surveillance environments. In this paper, we propose a pixel-based model combining color histograms and Fuzzy C-Means (FCM) to build the background model, using cosine similarity (CS) to measure the closeness between the current pixel and the background model and thus classify each pixel as background or foreground according to a tuned threshold. The performance of this model is benchmarked on the CDnet2014 dynamic-scenes dataset using statistical metrics. The results show better performance than state-of-the-art background subtraction models.
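The cosine-similarity decision step described above can be sketched as follows; this is an illustrative outline only (the feature vectors, the threshold value, and the helper names are assumptions, not the paper's actual implementation):

```python
def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors, e.g. a pixel's
    color histogram and the background model's histogram."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def classify_pixel(pixel_hist, background_hist, threshold=0.9):
    """Label a pixel 'background' if its histogram is close enough to
    the background model, otherwise 'foreground'.  The threshold here
    is a placeholder for the tuned threshold mentioned in the abstract."""
    if cosine_similarity(pixel_hist, background_hist) >= threshold:
        return "background"
    return "foreground"
```

Cosine similarity compares the direction of the two histograms rather than their magnitudes, so it tolerates uniform intensity changes better than a raw Euclidean distance would.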
The main goal of image enhancement is to bring out the fine details of low-luminance images for better image quality. In digital image processing, enhancement and noise removal are critical issues; image noise removal is the manipulation of image data to produce a visually high-quality image. Noise degrades the important details and useful information in an image when it is treated as genuine information, so filters are used to remove the unwanted information and improve image quality. This paper proposes an image enhancement system that chooses the appropriate filter and the value of the center pixel depending on how many adjacent neighboring pixels are similar to the center pixel. The performance of this system is evaluated using several quality metrics: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Absolute Mean Brightness Error (AMBE), Measure of Enhancement (EME), Measure of Enhancement by Entropy (EMEE), Entropy, Second-Order Entropy (SOE), and Image Enhancement Metric (IEM). The proposed enhancement system is efficient at removing noise and enhancing image quality. Experiments are applied to a set of images, such as Lena and butterfly, with different image sizes. The results show that the proposed system performs well with minimal unexpected artifacts compared to other techniques; for the 255×255 baboon image, the proposed system's MSE, PSNR, AMBE, Entropy, SOE, EME, EMEE, and IEM values are 2.906, 8.875, 3.92, 5.154, 2.692, 3.915, 0.442, and 3.674, respectively.
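Two of the metrics listed above, MSE and PSNR, have standard closed-form definitions; a minimal sketch, treating each image as a flat list of pixel values (this illustrates the standard formulas, not the paper's evaluation code):

```python
import math

def mse(original, processed):
    """Mean Squared Error between two equal-size images."""
    return sum((o - p) ** 2 for o, p in zip(original, processed)) / len(original)

def psnr(original, processed, max_value=255.0):
    """Peak Signal-to-Noise Ratio in decibels; higher means the
    processed image is closer to the original."""
    error = mse(original, processed)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / error)
```

PSNR is a logarithmic rescaling of MSE against the maximum possible pixel value, which is why the two metrics always move in opposite directions.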
The localization problem in ad hoc networks has drawn wide attention in recent years due to rapid advances in mobile computing and improvements in wireless communication technologies. In this paper, we design a localization model for Ship Ad Hoc Networks (SANETs) that involves three stages: data collection, data clustering, and the localization model itself. In our proposal, a ship can estimate its own position using its traveled distance and its previous position, which is estimated once the ships have reached stability. Our experimental results show the effectiveness of the proposal.
Augmented Reality (AR) combines real and computer-generated scene components. AR systems enhance a user's perception of the real world with information that is not actually part of the scene, and a key challenge in building an augmented reality is maintaining accurate alignment between real and virtual objects. This research describes a technique for establishing the registration of a vision-based AR system and explores a simple method for detecting and tracking natural features in a video stream. In this method, a reference image is used to find the proper position of an object. The method first uses the Harris corner detector to find interest features and establishes correspondences using cross-correlation, then applies the Random Sample Consensus (RANSAC) algorithm to estimate the homography matrix. After acquiring keypoints in the video frame, a Kanade-Lucas-Tomasi (KLT) optical-flow feature tracker follows the motion of these keypoints frame by frame. By maintaining the correspondence between the tracked keypoints and those in the clean marker image, a new homography is computed for every frame, which allows tracking the orientation of the marker as it moves through the video. Experiments assessing the feasibility of the technique illustrate its potential benefits: the target registration error (TRE) reaches 0.0020, the root mean square error (RMSE) is 0.003, and the average time over the whole dataset is 2.5 s.
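The cross-correlation matching step in the pipeline above is typically the normalized form, scored between small patches around each Harris corner; a minimal sketch under that assumption (patch layout and naming are illustrative, not the paper's code):

```python
def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size grayscale
    patches given as flat pixel lists; 1.0 means a perfect match.
    A score like this ranks candidate correspondences between Harris
    corners before RANSAC filters out the outlier matches."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    num = sum((a - mean_a) * (b - mean_b) for a, b in zip(patch_a, patch_b))
    dev_a = sum((a - mean_a) ** 2 for a in patch_a) ** 0.5
    dev_b = sum((b - mean_b) ** 2 for b in patch_b) ** 0.5
    return num / (dev_a * dev_b) if dev_a and dev_b else 0.0
```

Because both patches are mean-centered and normalized, the score is invariant to uniform brightness and contrast changes between the reference image and the video frame, which is why NCC is a common matching score in this setting.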
Content-based image retrieval has been developed in numerous fields. It provides more active management and retrieval of images than keyword-based methods, which has made content-based image retrieval one of the liveliest research areas of the past few years. Given a set of objects, information retrieval seeks those objects that match a particular description; the objects may be documents, images, videos, or sounds. Moments can be viewed as powerful image descriptors that capture global characteristics of an image. The magnitude of the moment coefficients is invariant under geometric transformations such as rotation, which makes them suitable for most recognition applications. This paper presents a method to retrieve a multi-view face from a large face database using face image moments and a genetic algorithm (GA). The GA is preferred for its power and because it can be used without any specific knowledge of the domain. The experimental results show that the GA gives good performance and decreases the average search time to 56.44 milliseconds, compared with 891.6 milliseconds for traditional search.
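The moment descriptors mentioned above are built from image moments; central moments, for instance, are translation-invariant, and further normalization (as in the classic Hu moments) adds scale and rotation invariance. A minimal sketch of a central moment for a grayscale image given as a list of rows (illustrative only; the abstract does not specify which moment family the paper uses):

```python
def central_moment(image, p, q):
    """Central moment mu_pq of a 2D grayscale image (list of rows).

    The centroid (cx, cy) is computed from the raw moments m00, m10,
    m01, so mu_pq is invariant to translating the image content.
    """
    m00 = sum(v for row in image for v in row)
    m10 = sum(x * v for row in image for x, v in enumerate(row))
    m01 = sum(y * v for y, row in enumerate(image) for v in row)
    cx, cy = m10 / m00, m01 / m00
    return sum((x - cx) ** p * (y - cy) ** q * v
               for y, row in enumerate(image) for x, v in enumerate(row))
```

By construction the first-order central moments mu_10 and mu_01 are always zero; the second- and higher-order moments are the values a retrieval system would store as the face descriptor.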