Satellite remote sensing images contain rich ground-object information, which distinguishes them from natural images. Owing to the constrained hardware capability of satellite remote sensing imaging systems, together with complex electromagnetic noise, harsh natural environments, and other factors, the quality of the acquired images may be too low for subsequent research to draw reliable conclusions. To obtain clearer images, we propose a dual-path generative adversarial network model that markedly improves the accuracy of satellite remote sensing image super-resolution. The generator performs a dual-path convolution operation: a feature-mapping attention mechanism first extracts the important feature information from the low-resolution image, while an enhanced deep convolutional network extracts the image's deep feature information. The deep features and the important features are then fused in the reconstruction layer. Furthermore, we also refine the loss function and the discriminator structure to reach a near-optimal balance between the generator's output and the discriminator, so that the restored super-resolution image is closer to human perception. We validated our algorithm on the public UCAS-AOD dataset, where it significantly outperformed competing methods, demonstrating a real advantage for image-related applications such as navigation monitoring.
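The dual-path generator described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the convolutional layers of both branches are replaced by placeholders, and the channel-gating form of the attention path and the additive fusion are assumptions made for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_path(features):
    # Feature-mapping attention (assumed form): global average pool per
    # channel, then a sigmoid gate that re-weights the important features.
    gate = sigmoid(features.mean(axis=(1, 2), keepdims=True))
    return features * gate

def deep_path(features, depth=4):
    # Stand-in for the enhanced deep convolutional branch: repeated
    # non-linear refinements in place of real conv + ReLU layers.
    out = features
    for _ in range(depth):
        out = np.maximum(out, 0.0)  # placeholder for a conv + ReLU block
    return out

def reconstruct(features):
    # Reconstruction layer: fuse the attention-weighted important features
    # with the deep features (element-wise addition assumed here).
    return attention_path(features) + deep_path(features)

x = np.random.default_rng(0).standard_normal((8, 16, 16))  # (channels, H, W)
y = reconstruct(x)
assert y.shape == x.shape  # fusion preserves the feature-map shape
```

In the full model, this fused feature map would feed an upsampling head, and the generator would be trained adversarially against the improved discriminator.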
In film and television production, clear images give the audience a realistic sensory experience, but high-resolution images demand a massive amount of production time and highly specialized imaging equipment, which is not cost-effective at present. To achieve better cost efficiency during video production, we propose a multichannel-feature super-resolution network model that exploits the characteristics of rendered low-resolution images. The model comprises a feature extraction layer, a series of subnetworks, and a reconstruction module. Within the model, the subnetworks are cascaded to refine the information flow from coarse to fine, which helps to fully extract the depth, normal-vector, edge, and texture features from low-resolution rendered images for reconstructing the high-resolution image. Additionally, residual learning is introduced at each stage to further improve reconstruction performance. We evaluate the model on the classic Disney Monte Carlo datasets and compare it with several related algorithms. The results show that our algorithm reconstructs images with clearer details and textures. Our research therefore not only preserves the audience's sensory experience but also increases the efficiency of film and television production, bringing considerable economic benefits.
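The coarse-to-fine cascade with per-stage residual learning can be sketched as follows. Again this is a hedged NumPy sketch under stated assumptions, not the paper's network: each subnetwork is a placeholder for a small CNN that would consume the depth, normal, edge, and texture channels, nearest-neighbour upsampling stands in for learned upsampling, and the two-stage cascade is illustrative.

```python
import numpy as np

def upsample2x(img):
    # Nearest-neighbour upsampling as a simple stand-in for a learned
    # upsampling layer inside each subnetwork.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def subnetwork(img):
    # Placeholder refinement stage. A real stage would be a small CNN fed
    # with the rendered image's auxiliary feature channels; here the
    # residual is a toy contrast adjustment.
    residual = 0.1 * (img - img.mean())
    return img + residual  # residual learning: output = input + residual

def cascade(low_res, stages=2):
    # Coarse-to-fine cascade: each stage doubles the resolution and then
    # refines it, so later stages recover progressively finer detail.
    out = low_res
    for _ in range(stages):
        out = subnetwork(upsample2x(out))
    return out

lr = np.random.default_rng(1).standard_normal((16, 16))
hr = cascade(lr, stages=2)
assert hr.shape == (64, 64)  # two 2x stages: 16 -> 32 -> 64
```

The residual connection means each subnetwork only has to learn the correction to its upsampled input, which is what makes the stage-wise cascade trainable and is consistent with the per-stage residual learning described above.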