IEEE 1999 International Geoscience and Remote Sensing Symposium. IGARSS'99 (Cat. No.99CH36293)
DOI: 10.1109/igarss.1999.772016
Texture preserving despeckling of SAR images using GMRFs

Cited by 8 publications (7 citation statements)
References 3 publications
“…In (11), the first expression can be solved analytically. However, the second expression, which represents the autobinomial model, is solved numerically by evaluating the difference log p(x_i + 1 | θ_i) − log p(x_i | θ_i).…”
Section: Maximum a Posteriori Information Extraction Using Auto-binomial Model
confidence: 99%
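The excerpt above describes solving for the autobinomial term by numerically evaluating the difference of two log-probabilities rather than the probabilities themselves, which avoids overflow in the combinatorial factors. The sketch below is illustrative only: it uses a plain binomial pmf as a stand-in, since in the actual autobinomial (auto-logistic) model the success probability depends on the neighborhood configuration and the parameters θ_i; the function names and the parameters `n` and `p` are assumptions, not the paper's notation.

```python
import math

def log_pmf_binomial(x, n, p):
    """Log of the binomial pmf Bin(x; n, p), computed directly in
    log space so large binomial coefficients never overflow."""
    return (math.log(math.comb(n, x))
            + x * math.log(p)
            + (n - x) * math.log(1.0 - p))

def log_pmf_difference(x, n, p):
    """Numerically evaluate log p(x+1) - log p(x), the quantity the
    excerpt says is computed by subtraction rather than analytically."""
    return log_pmf_binomial(x + 1, n, p) - log_pmf_binomial(x, n, p)
```

For the plain binomial this difference collapses to log((n − x) p / ((x + 1)(1 − p))), which is a convenient check that the numerical subtraction is behaving correctly.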
“…However, the resulting images are subject to the degradation of spatial and radiometric resolution, which can result in a loss of image information [23]. Therefore, despeckling techniques should put emphasis on speckle removal (radiometric enhancement) with texture and structure preservation to conserve spatial resolution [11].…”
Section: A Quality Of the Despeckling
confidence: 99%
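The excerpt above frames despeckling quality as a trade-off between speckle removal (radiometric enhancement) and texture/structure preservation (spatial resolution). A standard way to quantify the radiometric side over a homogeneous region is the equivalent number of looks (ENL); the sketch below is a minimal illustration of that common metric, not a measure defined in the cited paper.

```python
def equivalent_number_of_looks(pixels):
    """ENL = mean^2 / variance over a homogeneous image region.
    Higher ENL after filtering indicates stronger speckle
    suppression; texture preservation must be judged separately,
    e.g. on heterogeneous regions."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((v - mean) ** 2 for v in pixels) / n
    return mean * mean / var
```

Because ENL rises with any smoothing, it is typically reported alongside an edge- or texture-preservation index so that oversmoothing is not rewarded.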
“…Therefore, the user should continue the training process of "mountains" on one of these images. In this example, we have defined "mountains" using the signal class indices derived from the estimated parameters of "model based despeckling" [26] at scales of 50 m and 100 m.…”
Section: B Calculation Of Posterior Probability and Separability
confidence: 99%
“…As also mentioned in the main text, the posterior distribution after training is a Dirichlet distribution with the given hyperparameters, as in (25). Therefore, we can obtain the likelihood via integration over all parameters, as in (26), where the integration extends over the allowed parameter space as given by the normalization constraint of the likelihood. Furthermore, we obtain the variance of the likelihood as in (27). Note that, for a strict notation, we always have to specify the state of knowledge on the right side of the conditioned probability.…”
Section: A Mean and Variance Of A Single Likelihood
confidence: 99%
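The excerpt above obtains the mean and variance of a single likelihood by integrating against a Dirichlet posterior. For a Dirichlet(α) posterior these integrals have the well-known closed forms sketched below; this is a generic property of the Dirichlet distribution, offered as an assumed illustration of the kind of result equations (26) and (27) express, not a reproduction of the paper's derivation.

```python
def dirichlet_mean_var(alpha, i):
    """Posterior mean and variance of component i of a
    Dirichlet(alpha) vector, i.e. of a single likelihood value:
        E[p_i]   = alpha_i / alpha_0
        Var[p_i] = alpha_i (alpha_0 - alpha_i)
                   / (alpha_0^2 (alpha_0 + 1))
    where alpha_0 is the sum of all hyperparameters."""
    a0 = sum(alpha)
    mean = alpha[i] / a0
    var = alpha[i] * (a0 - alpha[i]) / (a0 ** 2 * (a0 + 1))
    return mean, var
```

As the hyperparameters grow (more training evidence), alpha_0 increases and the variance shrinks, which matches the intuition that the likelihood estimate sharpens with training.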