2020
DOI: 10.1137/19m1297026
Nonnegative Tensor Patch Dictionary Approaches for Image Compression and Deblurring Applications

Cited by 11 publications (17 citation statements); references 31 publications.
“…where the first inequality follows from (35), the second inequality follows from |X_{ijk}| ≤ b‖A‖_∞, and the last inequality follows from (36). Therefore, combining (36) with (38), we get that…”
Section: Theorem 5.1 (Suppose that the scalar probability density funct…)
confidence: 89%
“…for a specified β ≥ 2, we construct L to be the set of all tensors A ∈ ℝ₊^{n₁×r×n₃} whose entries are discretized to one of ϑ uniformly sized bins in the range [0, 1], and D to be the set of all tensors B ∈ ℝ₊^{r×n₂×n₃} whose entries either take the value 0 or are discretized to one of ϑ uniformly sized bins in the range [0, b]. Remark 3.1: When all entries of Y are observed and Y is corrupted by additive Gaussian noise, the model (2) reduces to sparse NTF with the tensor-tensor product, which has been applied in patch-based dictionary learning for image data [35, 46]. Remark 3.2: We do not specialize the noise in model (2); we only need the joint probability density function or probability mass function of the observations in (1).…”
Section: Sparse NTF and Completion via Tensor-Tensor Product
confidence: 99%
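The covering sets L and D in the quoted construction are built by snapping tensor entries to a uniform grid of bins on an interval. A minimal sketch of that discretization step, assuming NumPy; the helper name `discretize` and its signature are illustrative, not from the paper:

```python
import numpy as np

def discretize(T, num_bins, b=1.0):
    """Quantize entries of a nonnegative array to the centers of
    `num_bins` uniformly sized bins covering [0, b]."""
    edges = np.linspace(0.0, b, num_bins + 1)        # bin boundaries
    centers = (edges[:-1] + edges[1:]) / 2.0         # one value per bin
    # Map each entry to its bin index, clipping the endpoints into range.
    idx = np.clip(np.digitize(T, edges) - 1, 0, num_bins - 1)
    return centers[idx]
```

Every discretized tensor then takes at most ϑ distinct values per entry, which is what makes the sets L and D finite and amenable to a covering-number argument.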
See 2 more Smart Citations
“…where A ∈ ℝ^{m×n×l}, X ∈ ℝ^{n×p×l}, and B ∈ ℝ^{m×p×l} are third-order tensors, and the operator ∗ denotes the t-product introduced by Kilmer and Martin [1]. Problem (1.1) arises in many applications, including tensor dictionary learning [2–7], tensor neural networks [8], and the boundary finite element method [9–11]. The t-product has the advantage that it preserves the information otherwise lost in flattening a tensor, and with it many properties of numerical linear algebra can be extended to third- and higher-order tensors [12–18].…”
Section: Introduction
confidence: 99%
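The t-product A ∗ X referenced above is typically computed by taking the FFT along the third mode, multiplying corresponding frontal slices, and inverting the FFT, following Kilmer and Martin's circulant-algebra definition. A minimal NumPy sketch under that assumption; `t_product` is an illustrative helper name:

```python
import numpy as np

def t_product(A, X):
    """T-product of third-order tensors A (m x n x l) and X (n x p x l):
    FFT along the third mode, slice-wise matrix products in the
    transform domain, then inverse FFT back."""
    m, n, l = A.shape
    n2, p, l2 = X.shape
    assert n == n2 and l == l2, "inner dimensions and tube length must match"
    Af = np.fft.fft(A, axis=2)
    Xf = np.fft.fft(X, axis=2)
    # Multiply matching frontal slices in the Fourier domain.
    Bf = np.einsum('mnk,npk->mpk', Af, Xf)
    return np.real(np.fft.ifft(Bf, axis=2))  # result is m x p x l
```

For l = 1 this reduces to the ordinary matrix product, and the identity tensor (identity matrix in the first frontal slice, zeros elsewhere) acts as a multiplicative identity, which is a quick way to sanity-check an implementation.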