2017 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2017.8296571
Subproblem coupling in convolutional dictionary learning

Cited by 23 publications (55 citation statements). References 11 publications.
“…Fig. 2 presents a comparison of the objective value as a function of time for each of the competing algorithms, showing that our method achieves the… (Footnote 6: All our notations are in accordance with those presented in [29].) Fig. 2: Run time comparison between our method and the batch methods: the SBDL algorithm [1], the algorithm of Wohlberg [40], and the algorithm by Garcia et al. [41].…”
Section: A Run Time Comparison (citation type: mentioning)
confidence: 82%
“…First, a synthesis dictionary D is obtained, which satisfies X_{0:N−1} ≈ D Y_{0:N−1}, by using CSC instead of giving an analysis map Ψ as in Sect. 2 [9]–[12]. Fig.…”
Section: Convolutional-Sparse-Coded DMD (citation type: mentioning)
confidence: 98%
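The synthesis relation X_{0:N−1} ≈ D Y_{0:N−1} quoted above can be illustrated with a plain (non-convolutional) least-squares dictionary fit. The sketch below is purely illustrative, with hypothetical variable names; the cited work obtains D via convolutional sparse coding rather than this closed-form fit:

```python
import numpy as np

def ls_dictionary_fit(X, Y):
    """Least-squares fit of a synthesis dictionary:
    D = argmin_D ||X - D Y||_F^2, solved via the normal equations
    D = X Y^T (Y Y^T)^+. A non-convolutional illustration of the
    synthesis relation X ~= D Y."""
    return X @ Y.T @ np.linalg.pinv(Y @ Y.T)
```

When the coefficient matrix Y has full row rank, this recovers the generating dictionary exactly, which makes the synthesis relation concrete.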
“…Atom termination is applied to the boundary [20, 21]. The sparse approximation in (11) and (13) used ISTA [18]. In the dictionary update step (12), a stochastic gradient descent method with AdaGrad was adopted [22]–[24], where 128 patches of size 32 × 128 × 2 voxels were randomly extracted from the ST training data.…”
Section: Experimental River Setup (citation type: mentioning)
confidence: 99%
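The excerpt above uses ISTA for the sparse-approximation step. A minimal sketch of ISTA for the l1-regularized least-squares problem, in matrix form rather than the convolutional form used in the cited work, and with illustrative names (`D`, `x`, `lam`), is:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=100):
    """Minimal ISTA sketch for min_y 0.5*||x - D y||_2^2 + lam*||y||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-term gradient
    y = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ y - x)              # gradient of the data term
        z = y - g / L                      # gradient step
        y = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return y
```

Each iteration is a gradient step on the quadratic term followed by soft thresholding, which shrinks small coefficients to exactly zero and so produces sparse codes.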
“…A final remark is that the reviewed DL techniques are designated for flat-fading channels. Multipath scenarios would require convolutional dictionary learning [38], and would be the topic of a future study.…”
Section: B Channel Estimation (citation type: mentioning)
confidence: 99%