1998
DOI: 10.1117/12.304973

Continuous logic equivalence models of Hamming neural network architectures with adaptive-correlated weighting

Abstract: The continuous logic "equivalental" models (CLEM) of Hamming neural networks (NN) with adaptive-correlated weighting (HNN ACW) and multiport associative memory (MAM), based on the equivalence operation of neural logic, are considered. Models for a simple network with a weighted correlation coefficient, and for networks with adaptive weighting and double weighting, together with their system equivalental functions, are suggested. The models require calculations based on two-step algorithms and vector-matrix procedures with the normali…
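The abstract is cut off before the normalization step, so only the general idea can be illustrated. Below is a minimal sketch in Python, assuming that the elementwise continuous-logic equivalence of two values normalized to [0, 1] is 1 - |a - b| and that recognition is a two-step, winner-take-all procedure over (possibly weighted) normalized equivalence scores; all names are illustrative, and the adaptive-correlated and double weighting of the HNN ACW models are not reproduced here.

import numpy as np

def equivalence(a, b):
    # Elementwise continuous-logic equivalence of values in [0, 1]
    # (assumed form: eq(a, b) = 1 - |a - b|).
    return 1.0 - np.abs(a - b)

def normalized_equivalence(x, reference, weights=None):
    # (Possibly weighted) mean of elementwise equivalences of two vectors.
    e = equivalence(x, reference)
    return float(e.mean() if weights is None else np.average(e, weights=weights))

def recognize(x, references, weights=None):
    # Step 1: score the input against every stored reference pattern.
    # Step 2: pick the winner by maximum normalized equivalence.
    scores = np.array([normalized_equivalence(x, r, weights) for r in references])
    return int(np.argmax(scores)), scores

# Toy usage: three stored binary patterns, probe = pattern 0 with one flipped bit.
refs = np.array([[1, 0, 1, 1, 0],
                 [0, 1, 0, 1, 1],
                 [1, 1, 1, 0, 0]], dtype=float)
probe = np.array([1, 0, 1, 1, 1], dtype=float)
best, scores = recognize(probe, refs)
print(best, scores)  # -> 0 [0.8 0.4 0.4]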

Cited by 13 publications (21 citation statements)
References 7 publications
“…Equivalence models of neural-net autoassociative and heteroassociative memory (HAM) were proposed in papers [5][6][7]. Simulation results for such equivalence models (EM) 8,9 have confirmed that the EM offers substantial advantages: a considerable increase in memory capacity and the ability to store highly correlated patterns of large dimension. These studies of EM HAM have shown that the models can recognize vectors with 1024 components even with a considerable percentage (up to 25-30%) of damaged components, at a network capacity that exceeds the number of neurons by 3-4 times 6,7,9.…”
Section: Introduction (supporting)
confidence: 58%
“…1, is defined by the cluster point to which it has the least distance. As a measure of closeness we used the generalized normalized equivalence (nonequivalence) function 8. A modification of MTNLEMs DAEW for spatially invariant recognition of 2D images is considered in paper 17.…”
Section: Modeling Combined With Self-learning Clustering Methods Of Im… (mentioning)
confidence: 99%
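The citing text does not reproduce the closeness formula it refers to. A hedged reconstruction, assuming the same elementwise equivalence 1 - |a_i - b_i| for components normalized to [0, 1], is the following generalized normalized equivalence of two n-component vectors A and B, with the nonequivalence as its complement:

E(A, B) = \frac{1}{n}\sum_{i=1}^{n}\bigl(1 - \lvert a_i - b_i \rvert\bigr), \qquad \overline{E}(A, B) = 1 - E(A, B)

Under this reading, the nearest cluster point is the one that maximizes E (equivalently, minimizes the nonequivalence); the exact generalized form used in reference 8 may differ.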
“…The strategic direction for solving various scientific problems, including the creation of artificial intelligence (AI) systems, human-brain simulators, robotics, monitoring and control systems, decision-making systems, and systems based on artificial neural networks, is fast, parallel processing of large 2-D data arrays (up to 1024x1024 and higher) using non-conventional computational systems, the corresponding matrix logics (multi-valued, signed-digit, fuzzy, continuous, neuro-fuzzy and others) and the corresponding mathematical apparatus [1][2][3][4][5][6]. Numerous prospective realizations of optical learning neural networks (NN) with a two-dimensional structure 1, of recurrent optical NN 2, and of the continuous logic equivalency models (CLEM) NN [3][4][5] require elements of matrix logic, not only of two-valued, threshold and hybrid logics but also of continuous and neuro-fuzzy logics, together with an adequate structure of vector-matrix computational procedures built on the basic operations of the above-mentioned logics.…”
Section: Introduction (mentioning)
confidence: 99%
“…Besides, the quality of recognition, in particular the number of references that can be stored and correctly recognized in neuro-associative memory, depends greatly on the chosen metric, type and space. In neural-network models and recognition algorithms, minimal distances in hidden layers (during training) 2,10 and criteria of maximum convergence are used as intermediate criteria, including in some new equivalence models for recognition of strongly correlated images 11,12. The interest in these new directions, neuro-fuzzy models, the logic-algebraic apparatus and common neurobionic principles can be explained by the possibility of using them to understand the principles of human brain functioning.…”
Section: Introduction (mentioning)
confidence: 99%