2016
DOI: 10.1007/978-3-319-41649-6_14
Different Conceptions of Learning: Function Approximation vs. Self-Organization

Cited by 13 publications (11 citation statements)
References 9 publications
“…Even so, NARS is still related to the other AI theories and techniques in various ways, and there are publications comparing NARS with them. For example, NARS has been compared with neural networks (Wang, 2006a; Wang and Li, 2016) and with reinforcement learning (Wang and Hammer, 2015), comparisons that relate to the comment of Crosby and Shevlin (2020) on the relation between Deep Reinforcement Learning and Artificial Intelligence.…”
Section: Fruitfulness
confidence: 99%
“…One example is that in current machine learning studies, "learning" has commonly been specified as the process of using a meta-algorithm (the learning algorithm) to produce an object-level algorithm (a model for a domain problem) according to the training data (Flach, 2012). This working definition is exact and simple, as well as fruitful in many domains, though it is arguably only a restricted version when compared to the learning processes in the human mind (Wang and Li, 2016), or even to the initially diverse approaches within the field (Michalski, Carbonell, and Mitchell, 1984).…”
Section: Function-AI
confidence: 99%
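The working definition quoted above can be made concrete with a minimal sketch (our own hypothetical example, not code from any of the cited works): a meta-algorithm consumes training data and returns an object-level algorithm, i.e., a model that is then applied to the domain problem.

```python
# Hypothetical illustration of "learning algorithm -> model":
# the meta-algorithm below fits a 1-D threshold classifier from labeled data.

def learn_threshold_classifier(training_data):
    """Meta-algorithm: from labeled 1-D points, produce a classifier."""
    positives = [x for x, label in training_data if label == 1]
    negatives = [x for x, label in training_data if label == 0]
    # Place the decision threshold midway between the two class means.
    threshold = (sum(positives) / len(positives) +
                 sum(negatives) / len(negatives)) / 2

    def model(x):
        """Object-level algorithm: classify a new point."""
        return 1 if x >= threshold else 0

    return model

data = [(0.1, 0), (0.3, 0), (0.8, 1), (0.9, 1)]
model = learn_threshold_classifier(data)
print(model(0.2), model(0.85))  # -> 0 1
```

The point of the sketch is the two-level structure itself: `learn_threshold_classifier` is fixed in advance, while `model` is produced from the data, which is exactly the restriction the excerpt contrasts with broader conceptions of learning.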
“…A cumulative learner in our conceptualization is a learning controller [22] that, guided by one or more top-level internalized goals (or drives), implements a cumulative modeling process whereby regularities are recursively extracted from the learner's experience (of self and environment) to construct integrated models useful for achieving goals [3,22]. The collection of models forms a unified body of knowledge that can be used, by a set of additional and appropriate management processes (see Aa-f above), as the basis for making predictions about, and achieving goals with respect to, an environment, and that can be used to improve future learning in speed, quality, efficiency, or all of these [28]. At the risk of oversimplification, a compact definition of cumulative learning might read something like "using a unified body of knowledge to continually and recursively integrate new information from many contexts into that body."…”
Section: Functions of Cumulative Learning
confidence: 99%
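A toy sketch (our own illustration, not code from NARS or from the cited papers) of the compact definition above: information arriving from different contexts is folded into a single body of knowledge, which later queries or tasks can draw on without restarting learning from scratch.

```python
# Toy cumulative learner: a unified knowledge store that integrates
# observations from many contexts and accumulates evidence over time.

class CumulativeLearner:
    def __init__(self):
        # Unified body of knowledge: fact -> accumulated evidence count.
        self.knowledge = {}

    def integrate(self, observations):
        """Fold new observations (from any context) into existing knowledge."""
        for fact in observations:
            self.knowledge[fact] = self.knowledge.get(fact, 0) + 1

    def query(self, fact):
        """Any later task can use the accumulated evidence."""
        return self.knowledge.get(fact, 0)

learner = CumulativeLearner()
learner.integrate(["birds fly", "penguins swim"])  # context A
learner.integrate(["birds fly"])                   # context B reinforces A
print(learner.query("birds fly"))  # -> 2
```

The sketch deliberately omits the management processes, goals, and recursive model construction the excerpt describes; it only shows the minimal structural idea of one knowledge body shared across contexts and tasks.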
“…An extreme case of this is using analogies to deepen or broaden knowledge of a set of phenomena. In NARS, for instance, learning involves not only tasks but also effective (re-)organization of knowledge, without respect to specific problems, so that it may later be used on any relevant task [28] (cf. Ae, Af).…”
Section: Generality
confidence: 99%