Effective feature representations play a decisive role in content-based remote sensing image retrieval (CBRSIR). Recently, learning-based features have been widely used in CBRSIR and have shown a powerful ability to represent image content. In addition, significant effort has been made to improve learning-based features from the perspective of network structure. However, these learning-based features are still not sufficiently discriminative for CBRSIR. In this paper, we propose two effective schemes for generating discriminative features for CBRSIR. In the first scheme, the attention mechanism and a new attention module are introduced into the convolutional neural network (CNN) structure, directing more attention toward salient features and suppressing the others. In the second scheme, a multi-task learning network structure is proposed to force learning-based features to be more discriminative, with inter-class dispersion and intra-class compaction, by penalizing the distances between feature representations and their corresponding class centers. Then, a new method for constructing more challenging datasets is applied to remote sensing image retrieval for the first time, to better validate our schemes. Extensive experiments on challenging datasets are conducted to evaluate the effectiveness of our two schemes, and the comparison of the results demonstrates that our proposed schemes, especially the fusion of the two, improve the baseline methods by a significant margin.
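The class-center penalty described in the second scheme can be sketched as a center-loss-style term. This is a minimal, framework-free illustration, not the authors' implementation; the function and variable names are assumptions:

```python
def center_loss(features, labels, centers):
    """Average squared Euclidean distance between each feature vector
    and the center of its class. Penalizing this term pulls samples of
    the same class together (intra-class compaction), which, combined
    with a classification loss, also disperses the class centers."""
    total = 0.0
    for feat, label in zip(features, labels):
        center = centers[label]
        total += sum((f - c) ** 2 for f, c in zip(feat, center))
    return total / len(features)

# Toy usage: two classes in a 2-D feature space.
feats = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
labels = [0, 0, 1]
centers = {0: [1.0, 0.0], 1: [0.0, 1.0]}
loss = center_loss(feats, labels, centers)
```

In practice the centers themselves are learned (updated each mini-batch), and this term is weighted against the main retrieval loss.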
Content-based remote sensing image retrieval (CBRSIR) has recently become a hot topic due to its wide applications in the analysis of remote sensing data. However, since conventional CBRSIR is unsuitable in harsh environments, this paper focuses on cross-modality CBRSIR (CM-CBRSIR) between synthetic aperture radar (SAR) and optical images. Besides the large intra-class and small inter-class variations found in CBRSIR, CM-CBRSIR is limited by the prominent modality discrepancy caused by different imaging mechanisms. To address this limitation, this study proposes a deep cross-modality hashing network (DCMHN). First, we transform three-channel optical images into four different types of single-channel images to increase the diversity of the training modalities. This helps the network focus on extracting shared contour and texture features and makes it less sensitive to colour information across modalities. Second, we combine a randomly selected type of transformed image with its corresponding SAR or optical image to form image pairs that are fed into the network. This training strategy, with paired image data, reduces the large cross-modality variations caused by different modalities. Finally, the triplet loss, in combination with the hash function, helps the model extract discriminative image features and improves retrieval efficiency. To further evaluate the proposed method, we construct a SAR-optical dual-modality remote sensing image dataset (SODMRSID) containing twelve categories. Experimental results demonstrate the superiority of the proposed method with regard to efficiency and generality.
Index Terms: cross-modality content-based remote sensing image retrieval (CM-CBRSIR); modality discrepancy; deep cross-modality hashing network (DCMHN); SAR-optical dual-modality remote sensing image dataset (SODMRSID)
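The triplet objective mentioned above can be illustrated with a standard hinge-style triplet loss over real-valued codes. This is a generic sketch of the technique, not the DCMHN loss itself; the anchor/positive/negative pairing and the margin value are assumptions:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull the anchor code toward the
    positive (same scene, other modality) and push it away from the
    negative, until the gap between the two squared distances exceeds
    the margin."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# Toy usage on 2-D codes: positive is closer than negative,
# but not yet by the full margin, so the loss is positive.
loss = triplet_loss([0.0, 0.0], [0.5, 0.0], [0.5, 0.5], margin=1.0)
```

In a hashing network, the codes would additionally be pushed toward binary values (e.g. via a tanh activation and a quantization penalty) so that retrieval can use fast Hamming-distance comparisons.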
COVID-19 is caused by a new strain of highly contagious coronavirus. This work presents an innovative approach to detecting it by combining DNA/RNA oligomers as aptamers with a graphene oxide (GO)-coated optical microfiber as a sensor system.
Remote sensing image scene classification (RSISC) is an active task in the remote sensing community and has attracted great attention due to its wide applications. Recently, deep convolutional neural network (CNN)-based methods have achieved a remarkable breakthrough in the performance of RSISC. However, the problem that the feature representation is not discriminative enough still exists, which is mainly caused by the characteristic inter-class similarity and intra-class diversity. In this paper, we propose an efficient end-to-end local-global-fusion feature extraction (LGFFE) network for a more discriminative feature representation. Specifically, global and local features are extracted from the channel and spatial dimensions, respectively, based on a high-level feature map from deep CNNs. For the local features, a novel recurrent neural network (RNN)-based attention module is first proposed to capture the spatial layout information and context information across different regions. Gated recurrent units (GRUs) are then exploited to generate the importance weight of each region by taking a sequence of features from image patches as input. A reweighted regional feature representation can be obtained by focusing on the key regions. Then, the final feature representation is acquired by fusing the local and global features. The whole process of feature extraction and feature fusion can be trained in an end-to-end manner. Finally, extensive experiments have been conducted on four public and widely used datasets, and the experimental results show that our method, LGFFE, outperforms baseline methods and achieves state-of-the-art results.
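The region-reweighting step can be illustrated as a softmax over per-region importance scores followed by a weighted sum of region features. This sketch assumes the scores have already been produced (in LGFFE they come from the GRU over the patch sequence); the function name and shapes are illustrative:

```python
import math

def reweight_regions(region_feats, scores):
    """Turn per-region importance scores into softmax weights and
    return the weighted sum of region feature vectors, so that
    high-scoring (key) regions dominate the pooled representation."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(region_feats[0])
    return [sum(w * f[i] for w, f in zip(weights, region_feats))
            for i in range(dim)]

# Toy usage: with equal scores the result is a plain average;
# a much higher score makes that region dominate.
pooled = reweight_regions([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```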
methods is mainly attributed to the stacking of a variety of convolutional layers with non-linearities, which can extract higher-level semantic information and thereby help to alleviate the semantic-gap problem. Deep learning-based methods have exhibited remarkably better performance in computer vision [4,7-9]. However, RSISC remains a big challenge because of the large differences between natural and remote sensing images, as well as the resulting inapplicability of deep learning-based methods to representing remote sensing images. In particular, remote sensing images are more complex than natural ones: they cover a large area from the "view of God", contain many types of contents and objects, and their semantics are often ambiguous [10]. As the RSISC samples in Figure 1 show, some images from different categories share many similar contents and semantics, such as the farmland in Figure 1a,b and the runway in Figure 1d,e, whereas some scene images from the same category may show high diversity in content, such as Figure 1b,c and Figure 1e,f. Furthermore, the semantic category terms (e.g., farmland and airport) only summarily describe the content of scene images at a high level of abstraction [10], and some attributes (e.g., the farmland in Figure 1b and the runway in Figure 1e) in the scene images are not fully described by the category terms.
Nowadays, several remote sensing image capturing technologies are used, ranging from unmanned aerial vehicles to satellites. Powerful learning-based discriminative features play an essential role in content-based remote sensing image retrieval (CBRSIR). Cross-source CBRSIR (CS-CBRSIR) finds relevant remote sensing images across different remote sensing sources (e.g., multispectral and panchromatic images), but is limited by large cross-source and intrasource variations caused by different semantic objects, spatial resolutions, and spectral resolutions. The main limitation of existing CS-CBRSIR methods is that they cannot address the inconsistency between different sources or exploit the intrinsic relation between them. This study proposes a discriminative distillation network for CS-CBRSIR to address this limitation. To enlarge the inter-class variations and reduce the intra-class differences, discriminative features from the first source are first extracted with a well-designed joint optimization configuration (JOC) on the basis of deep neural networks. Thereafter, the features extracted from the first source are used as a supervision signal for the second source, so that the feature distributions of the two sources in the common feature space become significantly similar. Unlike existing methods, the method proposed in this study simultaneously handles the cross-source and intrasource variations. Extensive experiments on the DSRSID dataset with Euclidean distance verify the effectiveness of our proposed method.
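The supervision signal from the first source to the second can be sketched as a feature-matching (distillation) loss: the second-source network is trained to reproduce the frozen first-source features in the common space. This is a generic sketch under that assumption, not the paper's exact objective:

```python
def distill_loss(student_feats, teacher_feats):
    """Mean squared error between second-source (student) features and
    frozen first-source (teacher) features in the shared feature space.
    Minimizing it pulls the two sources' feature distributions together."""
    total, count = 0.0, 0
    for student, teacher in zip(student_feats, teacher_feats):
        for s, t in zip(student, teacher):
            total += (s - t) ** 2
            count += 1
    return total / count

# Toy usage: one sample whose student features differ from the
# teacher's in the second dimension only.
loss = distill_loss([[1.0, 2.0]], [[1.0, 0.0]])
```

This term would be optimized jointly with the discriminative (JOC) loss on the first source, so the common space stays both aligned across sources and class-discriminative.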
Purpose: This study aimed to bibliometrically and visually analyze and review hospitality and tourism marketing studies published from 2000 to 2020.
Design/methodology/approach: A total of 3,942 articles collected from the databases of the Social Science Citation Index (SSCI) and the Science Citation Index Expanded (SCI-E) in the Web of Science (WoS), along with their references, were used for the analyses. The bibliometric software HistCiteTM and the literature measurement visualization tools VOSviewer and CiteSpace were employed to analyze the selected articles.
Findings: The results of the study identified the top influential scholars and institutions, the intellectual structure and emerging trends of the study topics, and future research opportunities in the field of hospitality and tourism marketing.
Research limitations/implications: First, the academic influence of a scholar was evaluated by the citations of his/her publications, which did not take the order of authorship into consideration. Second, this study was restricted to English-language journals. Third, other types of published documents related to the studied field, such as review papers, were not considered by this research.
Originality/value: In comparison to traditional qualitative analysis such as content analysis, bibliometric analysis is a more objective approach that vividly demonstrates the trends and performance of a research field and offers unique insights for its advancement, with wider inclusiveness of a larger amount of data.
Since thermal loads can degrade the normal operation of spacecraft, resulting in unpredictable thermal-dynamic behavior, thermomechanical coupling problems are important and have been investigated extensively. Based on the absolute nodal coordinate formulation (ANCF), a thermally integrated ANCF thin-plate element with a unified description is constructed, which depicts the displacement and temperature fields in an integrated manner. By means of the proposed element, heat transfer and continuum mechanics are integrated in a unified finite element method (FEM) mesh of a revolving paraboloid antenna. Additionally, the ANCF reference node is introduced to describe the rigid central hub on which the antenna is mounted, so that the rigid-flexible-thermal coupled response can be captured in a unified analysis procedure. The solar radiation input and the surface emitting radiation are included in the heat transfer equations. Furthermore, the influence of rigid-body motion and deformation on radiant absorption is also considered, together with self-shadowing. The established rigid-flexible-thermal coupled simulation is performed with a modified generalized-α integrator that solves the set of multidisciplinary governing equations synchronously. To reveal the nonlinear behavior of the rigid-flexible-thermal coupled system, the observed thermally induced vibration and the perturbation of the spacecraft's pointing accuracy are given in the results, and the feasibility of the presented method is demonstrated.