2002
DOI: 10.1080/09540090210162065
Greedy information acquisition algorithm: A new information theoretic approach to dynamic information acquisition in neural networks

Abstract: In this paper, we propose a new information theoretic approach to competitive learning. The new approach is called greedy information acquisition, because networks try to absorb as much information as possible in every stage of learning. In the first phase, with minimum network architecture for realizing competition, information is maximized. In the second phase, a new unit is added, and thereby information is again increased as much as possible. This process continues until no more increase in information is …

Cited by 36 publications (12 citation statements) | References 30 publications
“…However, as already mentioned in the introduction, the underutilized-unit problem has long been an issue, and many methods have been proposed to overcome it [20][21][22][23][24][25]. To overcome this problem and to realize a softer type of competitive learning, we have introduced mutual information between input patterns and competitive units into competitive learning [15][16][17][18][19]. The use of mutual information for competitive learning is motivated by the strong similarity between competitive learning and mutual information maximization.…”
Section: Mutual Information and Competitive Learning
confidence: 96%
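The "softer" competition the quote describes can be sketched as a competitive-learning rule in which every unit, not just the single winner, is moved toward each input in proportion to a soft assignment. This is an illustrative sketch only, not the cited algorithm: the softmax assignment, the sharpness parameter `beta`, and the learning rate `lr` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_competitive_step(W, x, beta=2.0, lr=0.1):
    """One update of a softer competitive-learning rule (illustrative).

    A hard winner-take-all rule moves only the closest unit toward x,
    which can leave other units permanently underutilized. Here every
    unit j shares the update in proportion to a soft assignment p(j|x).
    """
    d2 = np.sum((W - x) ** 2, axis=1)   # squared distance of each unit to x
    p = np.exp(-beta * d2)
    p /= p.sum()                        # soft assignment p(j|x), sums to 1
    W = W + lr * p[:, None] * (x - W)   # all units move, weighted by p
    return W, p

# toy data: two clusters in 2-D, two competitive units
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(1.0, 0.1, (50, 2))])
W = rng.normal(0.5, 0.05, (2, 2))
for x in X:
    W, p = soft_competitive_step(W, x)
```

Because the losing units still receive a fraction of each update, no unit is starved of learning signal, which is the point of the "softer" competition.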
See 2 more Smart Citations
“…However, as already mentioned in the introduction section, we have had the underutilized problem, and many methods have been proposed to overcome this problem [20][21][22][23][24][25]. To overcome this problem and to realize the softer type of competitive learning, we have introduced mutual information between input patterns and competitive units in competitive learning [15][16][17][18][19]. The use of mutual information for competitive learning is due to the fact that there is a strong similarity between competitive learning and mutual information maximization.…”
Section: Mutual Information and Competitive Learningmentioning
confidence: 96%
“…We have also introduced information-theoretic methods in unsupervised competitive learning [15][16][17][18][19]. These methods are based on the discrete case, in which mutual information is easy to define because it can be computed directly from competitive unit activations.…”
Section: Information-theoretic Approach
confidence: 99%
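In the discrete case mentioned above, mutual information between input patterns and competitive units can be computed directly from a normalized table of unit activations. A minimal sketch, assuming the rows of the joint table index input patterns and the columns index competitive units:

```python
import numpy as np

def mutual_information(P):
    """Mutual information I(S; J) in bits from a joint table
    P[s, j] ~ p(s, j), where s indexes input patterns and j indexes
    competitive units. The table is normalized here, so unnormalized
    activation counts can be passed in directly."""
    P = P / P.sum()
    p_s = P.sum(axis=1, keepdims=True)   # marginal over patterns
    p_j = P.sum(axis=0, keepdims=True)   # marginal over units
    nz = P > 0                           # skip zero cells (0 log 0 = 0)
    return float(np.sum(P[nz] * np.log2(P[nz] / (p_s @ p_j)[nz])))

# each pattern activates a distinct unit -> maximal information
one_to_one = np.eye(2) / 2.0
# every pattern activates both units equally -> no information
uniform = np.full((2, 2), 0.25)
print(mutual_information(one_to_one))  # 1.0
print(mutual_information(uniform))     # 0.0
```

The two extremes illustrate why maximizing this quantity sharpens competition: information is largest exactly when each input pattern is captured by a distinct unit.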
“…Much attention has so far been paid to improving classification performance, and, from our point of view, little has been paid to interpreting the configuration of the network. For this interpretation, we have introduced information-theoretic competitive learning [25,[32][33][34], in which competitive processes are supposed to be equivalent to information maximization processes. In our information-theoretic methods, a competitive unit output has been defined by the Gaussian-like function…”
Section: Information-theoretic Competitive Learning
confidence: 99%
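The Gaussian-like competitive unit output mentioned above (the quote is truncated before the formula) is commonly of the form v_j(x) = exp(-||x - w_j||² / (2σ²)). A sketch under that assumption; the width parameter `sigma` and the per-input normalization are illustrative choices, not taken from the cited papers:

```python
import numpy as np

def competitive_outputs(X, W, sigma=1.0):
    """Gaussian-like competitive unit outputs (assumed form):
        v_j(x) = exp(-||x - w_j||^2 / (2 * sigma^2))
    normalized so the outputs for each input sum to one. X has shape
    (n_inputs, dim); W has shape (n_units, dim)."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    v = np.exp(-d2 / (2.0 * sigma ** 2))
    return v / v.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
W = np.array([[0.0, 0.0], [1.0, 1.0]])
p = competitive_outputs(X, W, sigma=0.3)
# each input most strongly activates the unit whose weights match it
```

Smaller `sigma` sharpens the outputs toward winner-take-all behavior, while larger `sigma` softens the competition.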
“…In other words, competitive learning is only one aspect of information maximization in neural networks. We have so far proposed information-theoretic competitive learning [10,16,17,12] in which competitive processes are supposed to be equivalent to information maximization processes. When mutual information between input patterns and connection weights is maximized, only one competitive unit is active, while all the other units are inactive.…”
Section: Competitive Learning and Information Content
confidence: 99%