2018
DOI: 10.1016/j.isprsjprs.2018.01.003
One-two-one networks for compression artifacts reduction in remote sensing

Abstract: Compression artifacts reduction (CAR) is a challenging problem in the field of remote sensing. Most recent deep learning based methods have demonstrated superior performance over the previous hand-crafted methods. In this paper, we propose an end-to-end one-two-one (OTO) network to combine different deep models, i.e., summation and difference models, to solve the CAR problem. Particularly, the difference model, motivated by the Laplacian pyramid, is designed to obtain the high-frequency information, while the su…
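The abstract's difference model draws on the Laplacian pyramid, in which a level's high-frequency detail is the residual between an image and a downsampled-then-upsampled copy of itself. A minimal NumPy sketch of that idea (illustrative only, not the paper's network; the 2x2-averaging and nearest-neighbour-repeat resampling pair is an assumption):

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging 2x2 blocks (a crude low-pass + decimate)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Double resolution by nearest-neighbour repetition."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_level(img):
    """High-frequency residual: the detail lost in one down/up-sampling round trip."""
    low = upsample(downsample(img))
    return img - low

img = np.arange(16, dtype=float).reshape(4, 4)
hf = laplacian_level(img)
# Each 2x2 block of the residual sums to zero, because the
# averaging/repeat pair preserves block means exactly.
```

Summing this residual back onto the low-frequency reconstruction recovers the original image, which is what makes the decomposition attractive for restoration tasks such as CAR.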


Cited by 54 publications (22 citation statements)
References 61 publications
“…The extensive experiments show that GCNs significantly improved baselines, resulting in the state-of-the-art performance over several benchmarks. In the future, more GCNs architectures (larger ones) will be tested on other tasks, such as object tracking, detection and segmentation [41], [42], [43], [44]. …”
Section: E Experiments On Food-101 Dataset
confidence: 99%
“…For the fusion network, we follow the research proposed by Baochang Zhang et al [2]. The difference lies in the object of fusion.…”
Section: Phase III
confidence: 99%
“…The difference lies in the object of fusion. In previous work, sub-networks extract multi-channel features, and the fusion network fuses these features as shown in Fig 3(a) [2]. For example, if there are two 64-channel features, the 64 pairs of channels will be added/subtracted in fusion network, thus they must match each other.…”
Section: Phase III
confidence: 99%
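The excerpt above describes channel-matched fusion: pairs of channels from two equal-shape feature maps are added or subtracted element-wise, which is why the channel counts must agree. A hedged NumPy illustration (the `fuse` helper and its `mode` flag are assumptions made for this sketch, not the cited papers' API):

```python
import numpy as np

def fuse(feat_a, feat_b, mode="sum"):
    """Fuse two multi-channel feature maps channel by channel.

    feat_a, feat_b: arrays of shape (C, H, W). Channel c of one map is
    paired with channel c of the other, so the shapes must match exactly.
    """
    if feat_a.shape != feat_b.shape:
        raise ValueError("features to fuse must match channel-for-channel")
    if mode == "sum":
        return feat_a + feat_b   # summation branch
    return feat_a - feat_b       # difference branch

# Two 64-channel feature maps, as in the example from the citing paper.
a = np.ones((64, 8, 8))
b = np.full((64, 8, 8), 2.0)
summed = fuse(a, b, "sum")   # every element 3.0
diff = fuse(a, b, "diff")    # every element -1.0
```

The shape check makes the constraint from the excerpt explicit: with two 64-channel inputs, exactly 64 channel pairs are combined, so mismatched channel counts cannot be fused this way.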
“…Developing a reliable object tracker is very important for intelligent video analysis, and it plays a key role in motion perception in videos (Chang et al., 2017b,a; Chang and Yang, 2017; Li et al., 2017b; Ma et al., 2018; Wang et al., 2017, 2016b; Luo et al., 2017). While significant progress in object tracking research has been made and many object tracking algorithms have been developed with promising performance (Ye et al., 2015, 2016, 2017, 2018b; Zhou et al., 2018b,a; Ye et al., 2018a; Liu et al., 2018; Lan et al., 2018a; Zhang et al., 2013b, 2017d,c, 2018c; Song et al., 2017, 2018; Zhang et al., 2017b, 2016, 2018a; Hou et al., 2017; Yang et al., 2016; Zhong et al., 2014; Guo et al., 2017; Ding et al., 2018; Shao et al., 2018; Yang et al., 2018b,a; Pang et al., 2017), it is worth noting that most of these trackers are designed for tracking objects in RGB image sequences, in which they model the object's appearance via the visual features extracted from RGB video frames. This may limit their use in real applications, such as tracking objects in a dark environment where the lighting condition is poor and the RGB information is not reliable.…”
Section: Introduction
confidence: 99%