2022
DOI: 10.1631/fitee.2200297
On the principles of Parsimony and Self-consistency for the emergence of intelligence

Abstract: Ten years into the revival of deep networks and artificial intelligence, we propose a theoretical framework that sheds light on understanding deep networks within a bigger picture of intelligence in general. We introduce two fundamental principles, Parsimony and Self-consistency, which address two fundamental questions regarding intelligence: what to learn and how to learn, respectively. We believe the two principles serve as the cornerstone for the emergence of intelligence, artificial or natural. While they …
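In the authors' related line of work, the Parsimony principle is typically made concrete through a coding-rate-reduction objective. As a hedged sketch (the exact formulation in the full text may differ), for features Z = [z_1, ..., z_n] in R^{d x n} partitioned into k groups by diagonal membership matrices Pi_1, ..., Pi_k, one such objective is:

\[
\Delta R(Z, \Pi) \;=\;
\underbrace{\frac{1}{2}\,\log\det\!\Big(I + \frac{d}{n\varepsilon^{2}}\, Z Z^{\top}\Big)}_{\text{rate of all features together}}
\;-\;
\underbrace{\sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_{j})}{2n}\,\log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_{j})\,\varepsilon^{2}}\, Z \Pi_{j} Z^{\top}\Big)}_{\text{sum of rates of the individual groups}}
\]

Maximizing this rate reduction expands the volume spanned by all features while compressing each group onto a low-dimensional subspace, which is one concrete reading of learning "what" is parsimonious.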

Cited by 30 publications (10 citation statements)
References 84 publications
“…This may be a general strategy for object representations in the primate brain [50]. Further, this particular kind of sparse representation has been explored in machine learning [51-53] and is thought to be essential for flexible and intelligent behavior [54].…”
Section: Discussion (mentioning)
confidence: 99%
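For readers unfamiliar with the "kind of sparse representation" referred to above, the sketch below shows a generic L1-regularized sparse coding step (ISTA). It is illustrative only, not the specific method of references [51-53], and the dictionary D, signal x, and parameter names are hypothetical.

import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, x, lam=0.1, n_iter=200):
    # Encode x as a sparse combination of dictionary columns by minimizing
    # 0.5*||x - D a||^2 + lam*||a||_1 with ISTA.
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Toy usage: most coefficients of the code end up exactly zero.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary atoms
x = D[:, :3] @ np.array([1.0, -0.5, 2.0])    # signal built from 3 atoms
a = sparse_code(D, x)
print(f"nonzero coefficients: {np.count_nonzero(np.abs(a) > 1e-6)} / {a.size}")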
“…In summary, in this study we provide detailed, comprehensive, longitudinal data on the formation of a cognitive map in the hippocampus during learning of a moderately complex task, which progresses in stages over the course of several training sessions. The excellent match between CSCGs, Hebbian-RNNs, and hippocampal data suggests that state machines with sparse, orthogonalized representations are likely to provide a powerful framework for neural computation [104], memory [105,127], and intelligence [128]. These results also underscore the need to understand how different learning rules, such as local Hebbian learning and gradient descent-based learning, might synergistically function to promote a more effective and efficient learning process in both natural and artificial learning systems [71,129,130].…”
Section: Discussion (mentioning)
confidence: 99%
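As a minimal illustration of the "local Hebbian learning" contrasted with gradient descent in the passage above, the sketch below implements Oja's rule, a classic normalized Hebbian update. It is a generic example, not the Hebbian-RNN plasticity rule used in the cited study, and all variable names are illustrative.

import numpy as np

def oja_update(w, x, lr=0.01):
    # One local Hebbian step (Oja's rule): the weight change depends only on
    # the pre-synaptic input x and the post-synaptic output y, with a decay
    # term that keeps ||w|| bounded.
    y = w @ x
    return w + lr * y * (x - y * w)

# Toy usage: with repeated local updates, w drifts toward the top principal
# direction of the input distribution -- no global gradient signal needed.
rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0], [1.0, 1.0]])                 # input covariance
X = rng.multivariate_normal(np.zeros(2), C, size=5000)
w = rng.standard_normal(2)
for x in X:
    w = oja_update(w, x)
print("learned direction:", w / np.linalg.norm(w))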
“…In a broad sense, compositionality can be seen as a particular way of exploiting or imposing structure in the inner representations of a network. It has also been argued that data representations should be concentrated in low-dimensional linear spaces [29,8], or even be "disentangled" with respect to factors of variation in the data [23,7,1]. Our perspective on compositional representations is closely related to the definition of disentanglement given in [23].…”
Section: Related Work (mentioning)
confidence: 98%
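The claim that "data representations should be concentrated in low-dimensional linear spaces" can be made quantitative with a coding-rate measure of the kind used in the rate-reduction literature. The sketch below compares the rate of features confined to a k-dimensional subspace with that of isotropic features; it is an assumption-laden illustration, not necessarily the measure used in references [29,8].

import numpy as np

def coding_rate(Z, eps=0.5):
    # R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z Z^T): a rough count of how many
    # dimensions the feature columns of Z actually occupy.
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)[1]

rng = np.random.default_rng(0)
d, n, k = 128, 1000, 5
# Features confined to a k-dimensional linear subspace ...
basis = np.linalg.qr(rng.standard_normal((d, k)))[0]
Z_low = basis @ rng.standard_normal((k, n))
# ... versus features spread over all d dimensions.
Z_full = rng.standard_normal((d, n))
print(f"low-dim subspace rate: {coding_rate(Z_low):.1f}")
print(f"full-dimensional rate: {coding_rate(Z_full):.1f}")   # much larger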