2019
DOI: 10.36227/techrxiv.11459574.v1
Preprint
Convolutional Dictionary Pair Learning Network for Image Representation Learning

Abstract: 10 pages, 6 figures

Cited by 3 publications (3 citation statements)
References 8 publications (14 reference statements)
“…The training datasets include BSD400 [3] and WED [38] for denoising by adding Gaussian noise levels σ ∈ {15, 25, 50}, Rain100L [64] for deraining, and SOTS [28] for dehazing. In addition, we include two challenging tasks of image deblurring and low-light enhancement and use the GoPro [41] and LOL dataset [59] for training, as previous research [8,69,70]. For all the datasets, we follow the standard practices in data splitting and preprocessing in the field.…”
Section: Methods
confidence: 99%
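The noise levels σ ∈ {15, 25, 50} quoted above refer to zero-mean Gaussian noise added on the 0–255 intensity scale, a standard way of synthesizing denoising training pairs. A minimal NumPy sketch of this preprocessing step (the function name and the flat test image are illustrative, not from the cited paper):

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng=None):
    """Return img plus zero-mean Gaussian noise of std sigma (0-255 scale)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    # Clip back to the valid intensity range, as is standard practice.
    return np.clip(noisy, 0.0, 255.0)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)          # flat mid-gray test image
for sigma in (15, 25, 50):
    noisy = add_gaussian_noise(clean, sigma, rng)
```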
“…SDRCF [120]. SDRCF is inspired by sparse representation (SR) [56–66], which simultaneously incorporates the local geometrical structures of both the data and the features into CF and obtains a weight matrix. For a sample x and a matrix D containing the dictionary atoms in its columns, SR represents x using as few entries of D as possible, defined as follows:…”
Section: SCF [127] and RSCF [127]
confidence: 99%
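The sparse-representation idea in the excerpt above — representing a sample x with as few dictionary atoms of D as possible — is commonly formulated as the ℓ1-regularized problem min_a ½‖x − Da‖² + λ‖a‖₁. A minimal sketch using ISTA (iterative soft-thresholding), one standard solver for this objective; the dictionary and signal here are synthetic, not from the cited papers:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.05, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    # Step size 1/L, where L is the Lipschitz constant of the gradient.
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)          # gradient of the quadratic term
        z = a - grad / L                  # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.5, -2.0]             # 2-sparse ground-truth code
x = D @ a_true
a = ista_sparse_code(x, D)
support = np.flatnonzero(np.abs(a) > 0.1)  # indices of the recovered atoms
```

The soft-threshold step is what drives most entries of the code to exactly zero, which is the "as few entries of D as possible" behavior the excerpt describes.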
“…RL methods can effectively simplify complex input data, eliminate invalid information, and extract useful information (or features) from observed inputs [45–55]. Classical RL approaches include feature extraction (FE), sparse dictionary learning (SDL) [56–66], low-rank coding (LRC) [91–104], and matrix factorization (MF) [1–6], [105–111]…”
Section: Introduction
confidence: 99%