Multi-label learning is often applied to complex decision tasks, and feature selection is an essential part of it. However, the relations among labels are often ignored or insufficiently considered, both in multi-label learning and in its feature selection. To address this problem, F-neighborhood rough sets are employed. Unlike other methods, the original approximation space is left unchanged while the relations among labels are fully considered. Specifically, a multi-label decision system is first decomposed into a family of single-label decision tables using the label set (first-order strategy). Second, attribute significance is calculated over this family of single-label decision tables. Third, an attribute significance matrix and improved attribute significance matrices are constructed to evaluate feature quality, and a parallel reduct is then obtained by information fusion. These steps constitute the F-neighborhood parallel reduction algorithm for multi-label decision systems (FNPRMS). Experimental results on 9 publicly available data sets show that FNPRMS is effective and efficient compared with state-of-the-art methods.
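The first-order strategy described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and data layout are assumptions, not the paper's code): each row pairs a tuple of condition attributes with a tuple of label values, and the decomposition yields one single-label decision table per label.

```python
def decompose_first_order(rows, num_labels):
    """First-order strategy: split a multi-label decision table into one
    single-label decision table per label. The condition (feature) attributes
    are left untouched; only the decision column changes per table."""
    tables = [[] for _ in range(num_labels)]
    for features, labels in rows:
        for k in range(num_labels):
            tables[k].append((features, labels[k]))
    return tables
```

Attribute significance would then be computed independently on each resulting table before the significance matrices are fused into a parallel reduct.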
Siamese network-based trackers cast tracking as feature cross-correlation between the target template and the search region, so feature representation plays an important role in constructing a high-performance tracker. However, existing Siamese networks extract deep but low-resolution features of the entire patch, which are not robust enough to estimate the target bounding box accurately. To address this issue, we propose a novel high-resolution Siamese network that connects high-to-low-resolution convolution streams in parallel and repeatedly exchanges information across resolutions to maintain high-resolution representations. Through a simple yet effective multi-scale feature fusion strategy, the resulting representation is semantically richer and spatially more precise. Moreover, we exploit attention mechanisms to learn object-aware masks for adaptive feature refinement, and use deformable convolution to handle complex geometric transformations, making the target more discriminative against distractors and background clutter. Without bells and whistles, extensive experiments on the popular tracking benchmarks OTB100, UAV123, VOT2018, and LaSOT demonstrate that the proposed tracker achieves state-of-the-art performance and runs in real time, confirming its efficiency and effectiveness.
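The cross-correlation at the heart of Siamese tracking can be illustrated with a toy, single-channel example. This is a pure-Python sketch under simplifying assumptions (real trackers slide a learned template over deep multi-channel feature maps, not raw 2D arrays):

```python
def cross_correlate(search, template):
    """Slide the template over the search map and return a response map
    whose peak indicates the most likely target location."""
    H, W = len(search), len(search[0])
    h, w = len(template), len(template[0])
    resp = []
    for i in range(H - h + 1):
        row = []
        for j in range(W - w + 1):
            # Inner product of the template with one window of the search map.
            s = 0.0
            for di in range(h):
                for dj in range(w):
                    s += search[i + di][j + dj] * template[di][dj]
            row.append(s)
        resp.append(row)
    return resp

def peak(resp):
    """(row, col) of the maximum response, i.e. the predicted target position."""
    best = max((v, i, j) for i, row in enumerate(resp) for j, v in enumerate(row))
    return best[1], best[2]
```

Because the template is matched everywhere at a single (low) resolution, localization precision is bounded by the feature stride, which motivates the high-resolution streams described above.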
Image deraining is a low-level restoration task that has become quite popular over the past decades. Although recent data-driven deraining models exhibit promising results, most are trained on synthetic rain data sets and do not generalize well to real rain images. While recent real-rain data sets achieve favorable generalization performance, generating rain-free ground truths can be tedious and time-consuming. To address this problem, we present rain-to-rain training, an unsupervised training method for single image deraining. Our experiments show that it is possible to train single image deraining models using only rain images, simply by training models to map pairs of rain images to each other. We also introduce the idea of least overlapping training pairs, a method of selecting adequate training pairs that enables rain-to-rain training to match the deraining performance of supervised training.
INDEX TERMS: image restoration, real rain, synthetic rain, single image deraining, unsupervised training.
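The pair-selection idea can be sketched as follows. This is a hypothetical illustration, not the paper's criterion: it assumes binary rain-streak masks are available and simply picks the pair of images whose streaks overlap least, so that the streaks in one image look like independent "noise" from the other's point of view.

```python
def least_overlapping_pair(masks):
    """Return the index pair (a, b) of rain masks with the smallest
    pixel-wise streak overlap. masks: list of binary 2D lists."""
    best, best_overlap = None, float("inf")
    n = len(masks)
    for a in range(n):
        for b in range(a + 1, n):
            overlap = sum(
                ma & mb
                for ra, rb in zip(masks[a], masks[b])
                for ma, mb in zip(ra, rb)
            )
            if overlap < best_overlap:
                best_overlap, best = overlap, (a, b)
    return best
```

A deraining network would then be trained to map one image of each selected pair to the other, analogously to noise-to-noise training, with no rain-free ground truth involved.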
Recently, Siamese network-based tracking algorithms have shown favorable performance. The latest work focuses on better feature embedding and target state estimation, which greatly improves accuracy. Nevertheless, the simple cross-correlation between the features of a fixed template and the search region limits robustness and discriminative capability. In this paper, we focus on learning a strong similarity measure for robust tracking. We propose a novel relation network that can be integrated on top of previous trackers without any further training of the Siamese networks, achieving superior discriminative ability. During online inference, we use feedback from high-confidence tracking results to obtain and update an additional template, which improves robustness and generalization. We implement two versions of the proposed approach, with a SiamFC-based tracker and a SiamRPN-based tracker, to validate the strong compatibility of our algorithm. Extensive experimental results on several tracking benchmarks indicate that the proposed method effectively improves the performance and robustness of the underlying trackers without reducing speed too much, and performs favorably against state-of-the-art trackers.
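One common way to realize high-confidence feedback is a gated running-average template update; the sketch below is an assumption for illustration (the threshold, learning rate, and flat feature vectors are hypothetical, not the paper's exact scheme):

```python
def update_template(template, new_feat, confidence, thresh=0.9, lr=0.1):
    """Blend the stored template toward the newly tracked features, but only
    when the tracker's confidence clears a threshold; low-confidence frames
    leave the template untouched to avoid drift."""
    if confidence < thresh:
        return template
    return [(1.0 - lr) * t + lr * f for t, f in zip(template, new_feat)]
```

Gating on confidence is what keeps an online-updated template from being polluted by occlusions or distractors, which is the failure mode a fixed-template tracker avoids at the cost of adaptability.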
In recent years, single image super-resolution (SISR) has benefited greatly from the rapid development of deep convolutional neural networks (CNNs), and the introduction of attention mechanisms has further improved SISR performance. However, previous methods use one or more types of attention independently in multiple stages and ignore the correlations between different layers in the network. To address these issues, we propose a novel end-to-end architecture named global-context attention network (GCAN) for SISR, which consists of several residual global-context attention blocks (RGCABs) and an inter-group fusion module (IGFM). Specifically, the proposed RGCAB extracts representative features that capture non-local spatial interdependencies and multiple channel relations. The IGFM then aggregates and fuses hierarchical features from multiple layers discriminatively by considering correlations among layers. Extensive experimental results demonstrate that our method achieves superior results against other state-of-the-art methods on publicly available datasets.
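The channel-relation side of such attention blocks can be illustrated with a parameter-free toy: squeeze each channel to a global statistic, turn it into a gate, and rescale the channel. This is only a sketch of the general squeeze-and-rescale pattern, not the learned RGCAB itself, which replaces the fixed sigmoid-of-average with trained layers.

```python
import math

def channel_attention(feature_maps):
    """Toy channel gate: squeeze each channel (a 2D list) to its global
    average, pass it through a sigmoid, and rescale the whole channel.
    Channels with stronger average activation are suppressed less."""
    out = []
    for fmap in feature_maps:
        n = sum(len(row) for row in fmap)
        avg = sum(v for row in fmap for v in row) / n
        gate = 1.0 / (1.0 + math.exp(-avg))
        out.append([[v * gate for v in row] for row in fmap])
    return out
```

The inter-layer fusion in the IGFM extends the same idea across depth: instead of gating channels within one block, it weights the hierarchical features of different layers against each other.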