Because deep neural networks (DNNs) are both memory-intensive and computation-intensive, they are difficult to deploy on embedded systems with limited hardware resources, so DNN models need to be compressed and accelerated. By applying depthwise separable convolutions, MobileNet reduces the number of parameters and the computational complexity with little loss of classification precision. Based on MobileNet, three improved models with expanded local receptive fields in the shallow layers, called Dilated-MobileNet (Dilated Convolution MobileNet) models, are proposed, in which dilated convolutions are introduced into a specific convolutional layer of MobileNet. Without increasing the number of parameters, the dilated convolutions enlarge the receptive field of the convolution filters to obtain better classification accuracy. The experiments were performed on the Caltech-101, Caltech-256, and Tübingen Animals with Attributes datasets, respectively. The results show that Dilated-MobileNets can obtain up to 2% higher classification accuracy than MobileNet.
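The two ideas in this abstract, parameter savings from depthwise separable convolutions and receptive-field growth from dilation, can both be checked with simple arithmetic. The sketch below is illustrative only (the function names and the example channel counts are our own, not from the paper) and ignores biases and batch-norm parameters:

```python
def conv_params(c_in, c_out, k):
    # Standard k x k convolution: every output channel mixes all input channels.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel (k*k*c_in weights),
    # then a pointwise 1x1 convolution to mix channels (c_in*c_out weights).
    return k * k * c_in + c_in * c_out

def dilated_kernel_span(k, d):
    # A k x k kernel with dilation d spaces its taps d pixels apart,
    # so its effective span grows to d*(k-1)+1 with no extra weights.
    return d * (k - 1) + 1

# Hypothetical layer sizes for illustration:
print(conv_params(32, 64, 3))                # 18432 weights
print(depthwise_separable_params(32, 64, 3)) # 2336 weights, ~8x fewer
print(dilated_kernel_span(3, 2))             # a 3x3 kernel spans 5x5 at dilation 2
```

This is exactly why dilation suits the paper's goal: the parameter count of `depthwise_separable_params` does not depend on the dilation rate, while the receptive field does.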
In recent years, more and more attention has been paid to single image super-resolution reconstruction (SISR) using deep learning networks. These networks have achieved good reconstruction results, but questions such as how to make better use of the feature information in the image and how to improve the network's convergence speed still need further study. To address these problems, a novel deep residual dense network (DRDN) is proposed in this paper. Specifically, DRDN uses a residual-dense structure for local feature fusion and finally performs global residual fusion for reconstruction. Residual-dense connections make full use of the features of low-resolution images from shallow to deep layers and provide more low-resolution image information for super-resolution reconstruction. Multi-hop connections propagate errors to each layer of the network more quickly, which alleviates, to a certain extent, the training difficulty caused by deepening the network. The experiments show that DRDN not only ensures good training stability and converges successfully but also has lower computing cost and higher reconstruction efficiency.
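The "residual-dense structure for local feature fusion" can be made concrete by tracking channel counts through one block: each convolution sees the concatenation of all earlier outputs, and a 1x1 fusion layer shrinks the result back so a residual addition with the block input is shape-compatible. The sketch below is our own bookkeeping under assumed DenseNet-style conventions (growth rate, 1x1 fusion); the paper may use different sizes:

```python
def residual_dense_block_channels(c_in, growth, n_layers):
    """Track channel widths through one hypothetical residual-dense block.

    Each layer's input is the concatenation of the block input and all
    previous layer outputs; each layer emits `growth` new channels.
    """
    per_layer_inputs = []
    concat = c_in
    for _ in range(n_layers):
        per_layer_inputs.append(concat)  # channels this conv layer receives
        concat += growth                 # its output joins the concatenation
    # Local feature fusion: a 1x1 conv maps `concat` channels back to c_in,
    # so the residual (skip) addition with the block input matches shapes.
    c_out = c_in
    return per_layer_inputs, concat, c_out

# Example with assumed sizes: 64 input channels, growth rate 32, 4 layers.
print(residual_dense_block_channels(64, 32, 4))
# per-layer inputs widen as [64, 96, 128, 160]; the fused output stays at 64
```

The multi-hop (dense plus residual) wiring is what lets gradients reach shallow layers directly, the property the abstract credits for easier training.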
Image steganography is the technique of hiding secret information within images. It is an important research direction in the security field. Benefiting from the rapid development of deep neural networks, many steganographic algorithms based on deep learning have been proposed. However, two problems remain: most existing methods are limited by small image size and low information capacity. In this paper, to address these problems, we propose a high-capacity image steganographic model named HidingGAN. The proposed model uses a new secret-information preprocessing method and an Inception-ResNet block to promote better integration of secret information and image features. Meanwhile, we introduce generative adversarial networks and a perceptual loss to keep the statistical characteristics of cover images and stego images the same in the high-dimensional feature space, thereby improving undetectability. In this way, our model achieves higher imperceptibility, security, and capacity. Experimental results show that HidingGAN achieves a capacity of 4 bits per pixel (bpp) at 256 x 256 pixels, improving over the previous best result of 0.4 bpp at 32 x 32 pixels.
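To put the reported capacity gain in perspective, the total payload is just bits-per-pixel times pixel count; the helper below (our own naming, not from the paper) compares the two figures the abstract cites:

```python
def stego_capacity_bits(bpp, width, height):
    # Total embeddable payload = bits-per-pixel * number of pixels.
    return bpp * width * height

# Figures quoted in the abstract:
hiding_gan = stego_capacity_bits(4, 256, 256)     # 262144 bits (32 KiB)
prior_best = stego_capacity_bits(0.4, 32, 32)     # ~410 bits
print(hiding_gan, prior_best, hiding_gan / prior_best)
```

So the combined image-size and bpp improvements amount to roughly a 640x larger payload per image, which is why the abstract treats size and capacity as a single coupled problem.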
The study is concerned with the representation and aggregation of complex uncertainty information. First, the concept of hesitant Fermatean 2-tuple linguistic sets (HF2TLSs) is introduced for characterizing an individual's imprecise preferences and assessment information by combining 2-tuple linguistic terms and Fermatean fuzzy sets. The advantage of hesitant Fermatean 2-tuple linguistic information is that it can handle higher levels of uncertainty and express decision-makers' hesitancy. Second, we extend Bonferroni mean (BM) operators to the setting of HF2TLSs for application in information fusion and decision making. The Archimedean t-norm and s-norm- (ATS-) based hesitant Fermatean 2-tuple linguistic weighted Bonferroni mean (A-HF2TLWBM) operator and the ATS-based hesitant Fermatean 2-tuple linguistic weighted geometric Bonferroni mean (A-HF2TLWGBM) operator are developed by considering the interrelationship between any two variables. The main benefit of the proposed operators is that they deliver more complete and flexible results than existing methods. Moreover, some fundamental properties and special cases are examined by adjusting parameter values. Finally, an approach is designed to support the handling of decision-making problems, and an example regarding investment selection is provided to demonstrate the practicality of the designed method, with a detailed discussion of parameter influence and comparisons with existing methods.
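The operators above build on the classical Bonferroni mean, which captures "the interrelationship between any two variables" by averaging products of pairs. As background (this is the standard BM, not the paper's A-HF2TLWBM operator, which additionally involves linguistic 2-tuples and Archimedean norms), a minimal implementation:

```python
def bonferroni_mean(values, p, q):
    """Classical Bonferroni mean BM^{p,q}(a_1, ..., a_n):
    ( 1/(n(n-1)) * sum over ordered pairs i != j of a_i^p * a_j^q )^(1/(p+q))
    """
    n = len(values)
    pair_sum = sum(values[i] ** p * values[j] ** q
                   for i in range(n) for j in range(n) if i != j)
    return (pair_sum / (n * (n - 1))) ** (1.0 / (p + q))

# Idempotency: equal inputs return that value.
print(bonferroni_mean([2.0, 2.0, 2.0], 1, 1))  # 2.0
# Special case q = 0 reduces BM to the arithmetic mean.
print(bonferroni_mean([1.0, 2.0, 3.0], 1, 0))  # 2.0
```

The "special cases examined by adjusting parameter values" in the abstract are of this kind: particular choices of p and q collapse the operator to familiar means.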
Frameproof codes are used to fingerprint digital data. They can prevent copyrighted materials from unauthorized use. In this paper, we study upper and lower bounds for w-frameproof codes of length N over an alphabet of size q. The upper bound is based on a combinatorial approach and the lower bound on a probabilistic construction. Both bounds improve previous results when q is small compared to w, say cq ≤ w for some constant c ≤ q. Furthermore, we pay special attention to binary frameproof codes. We show that a binary w-frameproof code of length N cannot have more than N codewords if N < (w+1 choose 2).
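The frameproof property can be checked by brute force on small examples: a code C is w-frameproof if, for every coalition S of at most w codewords, the only codewords producible coordinate-wise from S are the members of S themselves. The checker below is our own illustrative sketch (exponential, only for tiny codes), not the paper's combinatorial argument:

```python
from itertools import combinations, product

def descendants(coalition):
    # Words a coalition can forge: in each position, pick any symbol
    # that some coalition member has there.
    cols = [{word[i] for word in coalition} for i in range(len(coalition[0]))]
    return {"".join(choice) for choice in product(*cols)}

def is_frameproof(code, w):
    # C is w-frameproof iff no coalition of size <= w can forge a
    # codeword outside the coalition (i.e., frame an innocent user).
    code_set = set(code)
    for size in range(1, w + 1):
        for coalition in combinations(code, size):
            if descendants(coalition) & code_set != set(coalition):
                return False
    return True

# The identity code (weight-1 binary words) has N codewords at length N
# and is frameproof; it illustrates why N codewords are attainable.
print(is_frameproof(["100", "010", "001"], 2))  # True
print(is_frameproof(["00", "11", "01"], 2))     # False: {00, 11} frames 01
```

Here `{"00", "11"}` can jointly produce any length-2 word, including the codeword `01`, so that code fails the w = 2 property.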