2012
DOI: 10.1007/978-3-642-32560-1_4

New Millennium AI and the Convergence of History: Update of 2012

Abstract: Artificial Intelligence (AI) has recently become a real formal science: the new millennium brought the first mathematically sound, asymptotically optimal, universal problem solvers, providing a new, rigorous foundation for the previously largely heuristic field of General AI and embedded agents. There has also been rapid progress in not-quite-universal but still rather general and practical artificial recurrent neural networks for learning sequence-processing programs, now yielding state-of-the-art res…

Cited by 9 publications (9 citation statements), published 2013–2023
References 80 publications (116 reference statements)
“…Indeed, the main reason that the method is written as a generative probabilistic model is precisely so that it can account for the changing noise model (changing noise variances) from object to object and pixel to pixel. Standard supervised methods from the machine-learning literature, such as Random Forest (Breiman 2001), Deep Learning (e.g., LeCun et al. 1989; Bengio 2009; Schmidhuber 2015), and Kernel Support Vector Machines (Smola & Schölkopf 2004), do not have the property that they can account for variable noise models. These traditional machine-learning methods perform very badly as the training data become different from the test data (as they do in our S/N experiments in Section 5.6).…”
Section: Discussion
confidence: 99%
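The point quoted above, that a generative likelihood carries its own per-object, per-pixel noise model, amounts to weighting every residual by its own variance. A minimal sketch of the idea, assuming independent Gaussian noise; the function and data below are illustrative, not taken from the cited paper:

```python
import numpy as np

def gaussian_log_likelihood(data, model, var):
    """Log-likelihood of `data` under `model` with independent Gaussian
    noise whose variance `var` can differ from pixel to pixel."""
    resid = data - model
    return -0.5 * np.sum(resid ** 2 / var + np.log(2.0 * np.pi * var))

# Maximizing this likelihood for the amplitude of a known template gives
# the inverse-variance weighted estimate: noisy pixels count for less,
# which a fixed supervised loss cannot reproduce.
rng = np.random.default_rng(0)
template = rng.normal(size=100)
var = rng.uniform(0.1, 5.0, size=100)            # heteroscedastic noise
data = 2.0 * template + rng.normal(scale=np.sqrt(var))
amp_hat = np.sum(data * template / var) / np.sum(template ** 2 / var)
print(amp_hat)  # close to the true amplitude 2.0
```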
“…In Figure 8, the CIFAR10 data set, which contains 32 × 32 × 3 pixel colour images, is shown; 10,000 training samples and 10,000 testing samples were generated from this data set [28]. Figures 9 and 10 are given below.…”
Section: Problem Formulation
confidence: 99%
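For reference, a sketch of the subsampling the quoted passage describes, assuming torchvision's standard CIFAR-10 loader; the 10,000/10,000 split sizes come from the quote, while the seed and paths are illustrative:

```python
import torch
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # 32x32 RGB image -> 3x32x32 float tensor
train_full = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
test_full = datasets.CIFAR10(root="data", train=False, download=True, transform=to_tensor)

# Draw 10,000 of the 50,000 training images; the CIFAR-10 test split
# already contains exactly 10,000 images.
g = torch.Generator().manual_seed(0)
train_idx = torch.randperm(len(train_full), generator=g)[:10_000].tolist()
train_set = torch.utils.data.Subset(train_full, train_idx)
test_set = test_full
```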
“…Figure 12: The CIFAR10 dataset, which contains color maps with 32 × 32 × 3 pixels. We used 10,000 training samples and 10,000 test samples of this dataset [28]. Figure 13: The CIFAR100 dataset, which contains color maps with 32 × 32 × 3 pixels.…”
Section: Design of the Comparative Experiment
confidence: 99%
“…Figure 13: The CIFAR100 dataset, which contains color maps with 32 × 32 × 3 pixels. The CIFAR100 dataset used in this paper consists of 10,000 training samples and 10,000 test samples [28]. Figure 14: The USPS handwritten dataset, derived from the US Postal Service handwritten numeral recognition library.…”
Section: Design of the Comparative Experiment
confidence: 99%