Affine parameter estimation from the trace transform
Published in: Object Recognition Supported by User Interaction for Service Robots (ICPR 2002)
DOI: 10.1109/icpr.2002.1048423

Cited by 13 publications (13 citation statements)
References 14 publications
“…Fig.4 illustrates different kinds of image processing attacks, including cropping (60% remains), noise (5%), JPEG (20% quality), Darken (nonlinear) and Darken plus Histogram Equalization. We also test the moment based algorithm [3] and the trace transform based algorithm [5] for comparison. In the trace transform based algorithm, the selected trace function and diagonal function are illustrated in…”
Section: Experiments Results
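The attacks quoted above are standard image-processing operations, so they are easy to reproduce. Below is a minimal Python sketch (using Pillow and NumPy, which the cited papers do not necessarily use) of three of them with the quoted parameters: 60% cropping, 5% noise, and JPEG at 20% quality. The function names and exact implementations are illustrative assumptions, not code from the cited work.

```python
import io
import numpy as np
from PIL import Image

def crop_attack(img, keep=0.60):
    """Keep the central `keep` fraction of the image area (cropping attack)."""
    w, h = img.size
    s = keep ** 0.5                       # scale per side so the area ratio equals `keep`
    nw, nh = int(w * s), int(h * s)
    left, top = (w - nw) // 2, (h - nh) // 2
    return img.crop((left, top, left + nw, top + nh))

def noise_attack(img, strength=0.05):
    """Add zero-mean Gaussian noise with standard deviation strength * 255."""
    arr = np.asarray(img, dtype=np.float64)
    noisy = arr + np.random.normal(0.0, strength * 255.0, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

def jpeg_attack(img, quality=20):
    """Re-encode the image as JPEG at the given quality and decode it again."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    out = Image.open(buf)
    out.load()                            # force decoding while the buffer is alive
    return out

# Example: apply the attacks to a test image before re-estimating parameters.
if __name__ == "__main__":
    img = Image.new("RGB", (256, 256), "gray")   # stand-in for a real test image
    attacked = jpeg_attack(noise_attack(crop_attack(img)))
    attacked.save("attacked.jpg")
```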
“…In [4], Lucchese et al. estimated affine parameters from the relationships between companion stretched slices of the Fourier transform magnitudes of the two images. In [5], Kadyrov et al. proposed a trace transform based algorithm, which calculates affine parameters by trace transforms. However, the above algorithms are not robust to cropping and darkening attacks, and the estimation precision is not high enough for digital watermark applications.…”
Section: Introduction
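For context on the Fourier-magnitude approach of [4] mentioned above, the standard affine theorem of the 2-D Fourier transform (a textbook identity, not a result specific to [4]) explains why the magnitude spectra carry the affine parameters while discarding the translation. If $g(\mathbf{x}) = f(A\mathbf{x} + \mathbf{b})$ with $A$ invertible, then

$$G(\mathbf{u}) = \frac{1}{|\det A|}\, e^{\,j 2\pi\, \mathbf{u}^{\top} A^{-1} \mathbf{b}}\, F\!\left(A^{-\top}\mathbf{u}\right), \qquad |G(\mathbf{u})| = \frac{1}{|\det A|}\, \left|F\!\left(A^{-\top}\mathbf{u}\right)\right|.$$

The magnitude spectra of the two images are therefore related by the linear map $A^{-\top}$ alone, which is what makes comparing stretched slices of the magnitudes informative about $A$.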
“…With the Trace transform, an image is transformed into another "image," which is also a 2-D function, but one that depends on two parameters characterizing those criss-crossing lines. Then, by computing the second and third functionals of this 2-D function, we finally obtain a single number, the triple feature [18], [19]. When the three functionals are invariant or sensitive [16] to displacement, the triple feature would be invariant to image rotation, translation and scaling, which may find direct application in database query, change detection, site monitoring, etc.…”
Section: Introduction
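As an illustration of the pipeline sketched in this quote, the following is a minimal NumPy/SciPy sketch of a trace transform followed by a triple feature. The line-sampling grid and the particular functionals (sum over each line, max over distances, median over angles) are illustrative assumptions, not the choices made in [18], [19].

```python
import numpy as np
from scipy.ndimage import map_coordinates

def trace_transform(img, n_angles=180, n_dists=120, trace_fn=np.sum):
    """Apply the trace functional T along every line (phi, p) crossing the image,
    producing the 2-D trace transform g(phi, p)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radius = 0.5 * np.hypot(h, w)
    phis = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    ps = np.linspace(-radius, radius, n_dists)      # signed distance from the centre
    ts = np.linspace(-radius, radius, n_dists)      # position along each line
    g = np.zeros((n_angles, n_dists))
    for i, phi in enumerate(phis):
        for j, p in enumerate(ps):
            # points of the line whose normal is (cos phi, sin phi) at distance p
            xs = cx + p * np.cos(phi) - ts * np.sin(phi)
            ys = cy + p * np.sin(phi) + ts * np.cos(phi)
            samples = map_coordinates(img, [ys, xs], order=1, mode="constant", cval=0.0)
            g[i, j] = trace_fn(samples)
    return g

def triple_feature(img, trace_fn=np.sum, diametric_fn=np.max, circus_fn=np.median):
    """Reduce an image to a single number by applying three functionals in turn:
    T along each line, P over the distance parameter, Phi over the angle."""
    g = trace_transform(img, trace_fn=trace_fn)
    circus = np.apply_along_axis(diametric_fn, 1, g)   # one value per angle phi
    return circus_fn(circus)

# Example: compare the feature of an image with that of a rotated copy.
if __name__ == "__main__":
    from scipy.ndimage import rotate
    img = np.zeros((64, 64)); img[20:44, 28:36] = 1.0   # simple synthetic shape
    print(triple_feature(img), triple_feature(rotate(img, 30.0, reshape=False)))
```

With suitably chosen functionals, the resulting number changes little under rotation, translation and scaling of the input, which is the property the quoted passage exploits.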
“…We can distinguish mainly two types of approaches: image-based and feature-based. The image-based approaches try to find a transformation that maximizes the overlap between the two images, usually by analyzing them in the frequency domain [2,4,6]. Conversely, feature-based approaches are characterized by two phases: initially, a set of features is extracted from each image; these are then matched to estimate the affine transformation [3,5,12,11].…”
Section: Introduction
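As a concrete, deliberately simplified instance of the frequency-domain, image-based family mentioned in this quote, the sketch below estimates a pure translation by phase correlation; the methods cited go further and recover full affine parameters. The function and variable names are illustrative assumptions, not code from the cited work.

```python
import numpy as np

def estimate_translation(img_a, img_b):
    """Phase correlation: a classic frequency-domain, image-based registration step.
    Returns the integer (dy, dx) displacement of img_b relative to img_a."""
    Fa = np.fft.fft2(img_a)
    Fb = np.fft.fft2(img_b)
    cross_power = Fb * np.conj(Fa)
    cross_power /= np.abs(cross_power) + 1e-12      # keep phase information only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape
    # wrap shifts larger than half the image size to negative offsets
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Example: shift an image circularly and recover the offset.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a = rng.random((128, 128))
    b = np.roll(a, shift=(5, -9), axis=(0, 1))
    print(estimate_translation(a, b))   # prints (5, -9)
```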