This paper presents a new combined neural-network and chaos-based pseudorandom sequence generator and a DNA-rule-based chaotic encryption algorithm for the secure transmission and storage of images. The proposed scheme uses a new heterogeneous chaotic neural network generator to control the operations of the encryption algorithm: pixel-position permutation, DNA-based bit substitution, and a newly proposed DNA-based bit permutation method. The randomness of the generated chaotic sequence is improved by dynamically updating both the control parameters and the number of iterations of the chaotic functions in the neural network. Several tests, including autocorrelation, 0/1 balance, and the NIST test suite, are performed to show the high degree of randomness of the proposed chaotic generator. Experimental results (pixel correlation coefficients, entropy, NPCR, UACI, etc.) as well as security analyses are given to demonstrate the security and efficiency of the proposed chaos-based genetic encryption method.
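Two of the building blocks named above, chaos-driven pixel permutation and DNA-based encoding, can be illustrated with a minimal sketch. The sketch below substitutes a plain logistic map for the paper's heterogeneous chaotic neural network generator, and uses one fixed DNA rule (00→A, 01→C, 10→G, 11→T); the function names and key values are illustrative assumptions, not the paper's scheme.

```python
# Sketch: chaos-driven pixel permutation + DNA encoding of bytes.
# The logistic map stands in for the paper's chaotic neural network;
# x0 and r play the role of secret key material.

def logistic_sequence(x0, r, n, burn_in=100):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    seq = []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def permute_pixels(pixels, x0=0.3141, r=3.99):
    """Permute a flat pixel list by the sort order of the chaotic sequence."""
    chaos = logistic_sequence(x0, r, len(pixels))
    order = sorted(range(len(pixels)), key=lambda i: chaos[i])
    return [pixels[i] for i in order], order

def unpermute_pixels(shuffled, order):
    """Invert the permutation, restoring the original pixel order."""
    out = [0] * len(shuffled)
    for dst, src in enumerate(order):
        out[src] = shuffled[dst]
    return out

# One DNA rule: each 2-bit pair of a byte becomes one base.
DNA = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
INV = {v: k for k, v in DNA.items()}

def byte_to_dna(b):
    """Encode a byte as a 4-base DNA string."""
    bits = format(b, '08b')
    return ''.join(DNA[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_to_byte(s):
    """Decode a 4-base DNA string back to a byte."""
    return int(''.join(INV[c] for c in s), 2)
```

In the actual scheme the substitution operates on DNA representations with dynamically selected rules, and the permutation is driven by the neural-network generator rather than a single map; this sketch only shows the shape of both operations.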
Computational models of visual attention, originally proposed as cognitive models of human attention, are nowadays used as front-ends to numerous vision systems such as automatic object recognition. These systems are generally evaluated against eye-tracking data or manually segmented salient objects in images. We previously showed that this comparison can lead to different rankings depending on which of the two ground truths is used. These findings suggest that the ranking of saliency models might differ for each application, and that using eye-tracking-based rankings to choose a model for a given application is not optimal. Therefore, in this paper, we propose a new saliency evaluation framework optimized for object recognition. This paper aims to answer two questions: 1) Are application-driven saliency model rankings consistent with classical ground truths such as eye tracking? 2) If not, which saliency models should one use for specific CBIR applications?
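The disagreement between two rankings of the same models can be quantified with a rank correlation. The sketch below uses Spearman's rho on hypothetical ranks under two ground truths; the model names and rank values are invented for illustration and are not results from the paper.

```python
# Sketch: comparing a saliency-model ranking under eye-tracking ground
# truth with a ranking under an object-recognition ground truth.

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation between two rankings (no ties assumed)."""
    n = len(rank_a)
    d2 = sum((rank_a[m] - rank_b[m]) ** 2 for m in rank_a)
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical ranks of four saliency models under the two ground truths.
eye_tracking = {"modelA": 1, "modelB": 2, "modelC": 3, "modelD": 4}
object_recog = {"modelA": 3, "modelB": 1, "modelC": 4, "modelD": 2}

rho = spearman_rho(eye_tracking, object_recog)
# A rho near 1 means the rankings agree; a low or negative rho means the
# eye-tracking ranking is a poor guide for the recognition application.
```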
Content-based image retrieval (CBIR) has been a center of interest for a long time, and a great deal of research has been done to enhance the performance of these systems. Most of the proposed works focus on improving the image representation (bag-of-features) and the classification methods. In this paper, we focus on enhancing the second component of a CBIR system: the region appearance description method. In this context, we propose a new descriptor that captures the spatial-frequency properties of some perceptual features in the image. This descriptor has the advantage of a lower dimensionality than traditional descriptors such as SIFT (60 vs. 128), and is thus computationally more efficient, with only a 5% loss in performance using a typical CBIR algorithm on the VOC 2007 dataset.

The number of digital images continues to increase, especially with the expansion of social networks: according to Time magazine, more than 130,000 images are uploaded to Facebook each minute. It is therefore difficult for a human to use such a vast collection of images, e.g. by manually searching for images containing particular objects or persons; a content-based image retrieval (CBIR) system is necessary for this kind of task. CBIR has been a subject of interest in the computer vision community for a long time, and many algorithms have been proposed over the last decades. Most of these systems are based on local approaches (illustrated in figure 1). According to [6], local approaches consist of 5 steps: region selection, region appearance description, region appearance encoding, derivation of image features from the set of region appearance codes by spatial pooling, and classification. A baseline CBIR method [6] is shown in figure 1. In the following, we present the different methods used in each step.

Region selection. As shown in figure 1, a typical CBIR algorithm first scans the image to select the regions of interest.
In the literature, there are two concepts to accomplish such a task.
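The encoding and pooling steps of the 5-step local pipeline above can be sketched in miniature: each region descriptor is hard-assigned to its nearest codebook entry, and the assignments are pooled into a normalized histogram, the bag-of-features image representation. This is a minimal sketch under stated assumptions (Euclidean hard assignment, a toy 2-D codebook); the function names are illustrative, not from [6].

```python
# Sketch: bag-of-features encoding and pooling for one image.

def nearest_codeword(desc, codebook):
    """Hard-assign a descriptor to the closest codebook entry (squared L2)."""
    best, best_d = 0, float('inf')
    for k, word in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(desc, word))
        if d < best_d:
            best, best_d = k, d
    return best

def bag_of_features(descriptors, codebook):
    """Pool the hard assignments of all region descriptors of an image
    into a normalized histogram over the codebook (the image feature)."""
    hist = [0.0] * len(codebook)
    for desc in descriptors:
        hist[nearest_codeword(desc, codebook)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

In a real system the descriptors would be, e.g., 128-D SIFT vectors (or the 60-D descriptor proposed here), the codebook would be learned by clustering, and the resulting histograms would be fed to a classifier.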