1990
DOI: 10.1109/29.56063
Classification of invariant image representations using a neural network

Cited by 267 publications (91 citation statements). References 8 publications. Citing publications span 1991-2023.
“…Therefore, the pattern space with 16 ring features is a 16-dimensional unit cube. The Zernike features up to order five defined in [15] are also extracted and then scaled within the range [0,1] along each dimension. The pattern space with Zernike features is also a 16-dimensional unit cube because the selected features start from the second-order moments.…”
Section: Case 1: Accommodation By Expansion Of HS
Mentioning confidence: 99%
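To make the scaling step in the excerpt above concrete, the following is a minimal sketch (Python, with made-up stand-in data); the ring or Zernike feature extraction itself is assumed to happen elsewhere, and only the per-dimension scaling into [0,1] that places each pattern inside a 16-dimensional unit cube is shown.

```python
import numpy as np

def scale_to_unit_cube(features):
    """Min-max scale each feature dimension to [0, 1].

    features : (n_patterns, n_features) array, e.g. 16 ring features or
               16 Zernike-moment magnitudes per image (hypothetical input).
    Returns an array whose rows lie inside the n_features-dimensional unit cube.
    """
    features = np.asarray(features, dtype=float)
    lo = features.min(axis=0)
    hi = features.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero on constant dimensions
    return (features - lo) / span

# Usage sketch: 100 patterns with 16 features each (random stand-in values).
patterns = np.random.default_rng(0).uniform(2.0, 7.0, size=(100, 16))
unit_cube_patterns = scale_to_unit_cube(patterns)
assert unit_cube_patterns.min() >= 0.0 and unit_cube_patterns.max() <= 1.0
```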
“…This is computationally the fastest deterministic algorithm; however, it is also the most susceptible to local minima problems. Following Khotanzad and Lu (1990), this learning procedure can be briefly described as follows. The connection weights Wi are initialized to small random values.…”
Section: The Back-Error-Propagation Learning Algorithm
Mentioning confidence: 99%
“…Following Rumelhart and McClelland (1986), and Khotanzad and Lu (1990), learning can be accomplished by optimizing a criterion function: constraint satisfaction is achieved by estimating the discrepancy between the desired and actual output values, feeding an error signal back layer by layer toward the inputs, and then adjusting the interconnection weights in proportion to their contribution to the total mean-square error. This is computationally the fastest deterministic algorithm; however, it is also the most susceptible to local minima problems.…”
Section: The Back-Error-Propagation Learning Algorithm
Mentioning confidence: 99%
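The two excerpts above describe the back-error-propagation procedure in words; the following is a minimal sketch of that idea (not the authors' original implementation) for a single hidden layer: connection weights are initialized to small random values, the discrepancy between desired and actual outputs is measured by the mean-square error, the error signal is fed back layer by layer, and each weight is adjusted in proportion to its contribution to that error. The layer sizes, learning rate, epoch count, and sigmoid activation are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bep(X, T, n_hidden=8, lr=0.5, epochs=2000, seed=0):
    """Back-error-propagation for one hidden layer (batch gradient descent on MSE).

    X : (n_patterns, n_inputs) input patterns, e.g. invariant feature vectors
    T : (n_patterns, n_outputs) desired outputs in [0, 1]
    """
    rng = np.random.default_rng(seed)
    # Connection weights initialized to small random values.
    W1 = rng.uniform(-0.1, 0.1, size=(X.shape[1], n_hidden))
    W2 = rng.uniform(-0.1, 0.1, size=(n_hidden, T.shape[1]))
    for _ in range(epochs):
        # Forward pass: actual outputs of the hidden and output layers.
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)
        # Error signal at the output layer: output error times the sigmoid derivative.
        delta_out = (Y - T) * Y * (1.0 - Y)
        # Error signal fed back to the hidden layer.
        delta_hid = (delta_out @ W2.T) * H * (1.0 - H)
        # Adjust each weight in proportion to its contribution to the error.
        W2 -= lr * (H.T @ delta_out)
        W1 -= lr * (X.T @ delta_hid)
    return W1, W2
```

Because this is plain gradient descent on the mean-square error, it is fast but, as the excerpts note, it can become trapped in local minima.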
“…For example, the Fourier-Mellin integral is costly to compute and converges only under certain strong conditions [15]. The geometric moments, on the other hand, suffer from a high degree of information redundancy [16] and are sensitive to noise; such problems have been investigated by many researchers, e.g., [17][18][19].…”
Section: Introduction
Mentioning confidence: 99%