Maps are an essential medium for understanding our changing planet. Generating and updating maps from remote sensing images has recently become an important and challenging task in geographic information science. Traditional methods of map generation are time-consuming and labor-intensive, and most supervised learning methods lack sufficient labeled training samples. It is also difficult to generate maps quickly and efficiently for emergency rescue operations after earthquakes, fires, or tsunamis. In this paper, we propose an unsupervised domain mapping model based on adversarial learning, called MapGen-GAN. MapGen-GAN is a generative adversarial network that performs fast end-to-end translation from remote sensing images to general maps and is trained without human-annotated data. To improve the fidelity and geometric precision of the generated maps, we employ circularity-consistency and geometrical-consistency constraints as part of the model's loss function. In addition, an improved residual-block U-Net is designed and adopted as the generator of MapGen-GAN to capture the geographic structural information of buildings, roads, and topographic outlines at different resolutions. Experiments on two distinct datasets demonstrate that our model generates maps efficiently and quickly and outperforms state-of-the-art approaches.
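To make the role of the two consistency constraints concrete, the following is a minimal sketch of a MapGen-GAN-style composite generator loss. It assumes a cycle (circularity) term via an inverse mapping and a geometrical term enforced under a 90-degree rotation; the names `G`, `F`, `D_map`, `lambda_circ`, and `lambda_geo` are illustrative placeholders, not the authors' implementation.

```python
# Hedged sketch: composite loss combining adversarial, circularity-consistency,
# and geometrical-consistency terms. Names and weights are assumptions.
import torch
import torch.nn.functional as F_nn

def generator_loss(G, F, D_map, rs_img, lambda_circ=10.0, lambda_geo=5.0):
    """G: remote sensing -> map, F: map -> remote sensing (inverse mapping)."""
    fake_map = G(rs_img)

    # Adversarial term: the generated map should fool the map-domain discriminator.
    pred = D_map(fake_map)
    adv = F_nn.mse_loss(pred, torch.ones_like(pred))

    # Circularity-consistency term: translating back should recover the input image.
    circ = F_nn.l1_loss(F(fake_map), rs_img)

    # Geometrical-consistency term: generating from a rotated input should match
    # rotating the generated map (a 90-degree rotation used here as an example).
    rot = lambda t: torch.rot90(t, k=1, dims=(-2, -1))
    geo = F_nn.l1_loss(G(rot(rs_img)), rot(fake_map))

    return adv + lambda_circ * circ + lambda_geo * geo
```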
Maps can help governments with infrastructure development and emergency rescue operations around the world. Using adversarial learning to generate maps from remote sensing images is an emerging field. Urban construction styles vary widely across cities, and current remote sensing image-to-map translation methods work only on regions whose styles and structures resemble the training set, performing poorly on previously unseen areas. We argue that this greatly limits their use. In this work, we seek a remote sensing image-to-map translation model that addresses the challenge of generating maps from remote sensing images of unseen areas. Our remote sensing image-to-map translation model (RSMT) achieves broad applicability, generating maps over multiple regions by combining adversarial deep transfer training schemes with novel attention-based network designs. By extracting content latent features from remote sensing images and style latent features from a series of maps, RSMT learns a pattern that generalizes to remote sensing images of new areas. We also introduce a feature map loss and a map consistency loss to improve the precision and geometric similarity of the generated maps. We critically analyze qualitative and quantitative results using widely adopted evaluation metrics through extensive validation and comparison with previous remote sensing image-to-map approaches. Experimental results indicate that RSMT translates remote sensing images to maps better than several state-of-the-art methods.
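As a rough illustration of the content/style split described above, the sketch below assumes an AdaIN-style fusion in which content features from a remote sensing image are re-normalized toward the statistics of style features pooled over reference maps. The encoder/decoder names are hypothetical placeholders, not RSMT's actual architecture.

```python
# Hedged sketch: combining content features of an unseen-area image with style
# features from a series of maps via adaptive instance normalization (AdaIN).
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Re-normalize content features to match the style features' channel statistics."""
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def translate(content_encoder, style_encoder, decoder, rs_img, ref_maps):
    # Content latent from the remote sensing image of a new area.
    content = content_encoder(rs_img)
    # Style latent averaged over a series of reference maps.
    style = torch.stack([style_encoder(m) for m in ref_maps]).mean(dim=0)
    # Decode the re-styled features into a map-domain image.
    return decoder(adain(content, style))
```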
Image crowdsourcing from mobile devices can be applied to many real-life scenarios. However, such applications often face limited bandwidth, insufficient storage space, and constrained CPU processing capability, so only a small number of photos can be crowdsourced. It is therefore a significant challenge to select photos under limited resources so that they cover the target area as fully as possible. In this paper, the geographic and geometric information of each photo, called a data-unit, is used to cover the target area as much as possible. Compared with traditional content-based image delivery methods, network delay and computational costs are greatly reduced. Under resource constraints, this paper uses photo utility to measure coverage of the target area and improves a data-unit-based photo utility calculation method. It also formulates the minimum image selection problem under coverage requirements and designs a selection algorithm based on greedy strategies. Compared with traditional random selection algorithms, experimental results demonstrate the effectiveness and superiority of the minimum selection algorithm.
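The greedy idea can be sketched as follows, assuming each photo's data-unit has been discretized into a set of covered grid cells. This coverage model and the variable names are illustrative assumptions, not the paper's exact utility definition.

```python
# Hedged sketch: greedily pick the photo with the largest marginal coverage gain
# until a coverage requirement over the target cells is met.
def greedy_select(photos, target_cells, required_coverage):
    """Return indices of selected photos meeting the coverage requirement."""
    covered, selected = set(), []
    remaining = set(range(len(photos)))
    threshold = required_coverage * len(set(target_cells))
    while len(covered) < threshold and remaining:
        # Marginal utility: number of not-yet-covered target cells a photo adds.
        best = max(remaining, key=lambda i: len(photos[i]["cells"] - covered))
        if not photos[best]["cells"] - covered:
            break  # no photo can extend coverage any further
        selected.append(best)
        covered |= photos[best]["cells"]
        remaining.remove(best)
    return selected

# Example with three photos whose data-units cover different grid cells.
photos = [{"cells": {1, 2, 3}}, {"cells": {3, 4}}, {"cells": {5}}]
print(greedy_select(photos, target_cells=range(1, 6), required_coverage=0.8))
```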
Special features and varied applications place complex demands on LED driver selection. The aim of this study is to establish a comprehensive evaluation system for LED driver selection under varied applications. The study first analyzes the importance of key elements of driver evaluation with respect to visual comfort. Research shows that changes in driver current, driver mode, and dimming procedure affect the LED spectral power distribution (SPD), color, and visual comfort. Thus, a visual comfort index, including dimming linearity, dimming stability, and strobe, is added to the evaluation system alongside the traditional circuit evaluation indices. Because these complex evaluation factors cannot be described by a single equation, we construct a comprehensive evaluation system containing three factors, system performance, driver performance, and visual performance, based on the Analytic Hierarchy Process (AHP). Finally, an application case is presented to illustrate the proposed method.
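To illustrate the AHP step of such an evaluation system, the sketch below derives factor weights from a pairwise comparison matrix using the geometric-mean approximation of the priority vector. The example judgments among the three top-level factors are hypothetical, not values from the paper.

```python
# Hedged sketch: AHP priority weights via the geometric-mean method.
import numpy as np

def ahp_weights(pairwise):
    """Approximate the AHP priority vector from a pairwise comparison matrix."""
    pairwise = np.asarray(pairwise, dtype=float)
    geo_means = pairwise.prod(axis=1) ** (1.0 / pairwise.shape[1])
    return geo_means / geo_means.sum()

# Hypothetical pairwise judgments among system, driver, and visual performance.
A = [[1,   3,   2],
     [1/3, 1,   1/2],
     [1/2, 2,   1]]
weights = ahp_weights(A)
print(dict(zip(["system", "driver", "visual"], weights.round(3))))
```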