2008
DOI: 10.1007/s11227-008-0195-z
Scaling analysis of a neocortex inspired cognitive model on the Cray XD1

Abstract: This paper presents the implementation and scaling of a neocortex inspired cognitive model on a Cray XD1. Both software and reconfigurable logic based FPGA implementations of the model are examined. This model belongs to a new class of biologically inspired cognitive models. Large scale versions of these models have the potential for significantly stronger inference capabilities than current conventional computing systems. These models have large amounts of parallelism and simple computations, thus allowing hi…
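The abstract's central point is that the model decomposes into a very large number of simple, mutually independent node computations, which is what lets it scale across the Cray XD1's processors and FPGAs. As a rough illustration of that style of parallelism (a minimal sketch, not the authors' implementation; node_update and its toy inputs are hypothetical), the following Python distributes independent node updates across worker processes:

```python
# Illustrative sketch only: farm out many simple, independent "node"
# computations across processes, mirroring the kind of parallelism the
# abstract attributes to neocortex-inspired models. node_update() and its
# inputs are hypothetical stand-ins, not the paper's model.
import numpy as np
from multiprocessing import Pool

def node_update(args):
    """Toy per-node step: match an input vector against the node's stored
    patterns and return a normalized belief vector over those patterns."""
    input_vec, memory = args                 # memory: (n_patterns, dim)
    scores = memory @ input_vec              # simple dot-product matching
    scores = np.exp(scores - scores.max())   # softmax-style normalization
    return scores / scores.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, n_patterns, n_nodes = 64, 32, 10_000
    # One (input, pattern memory) pair per node; nodes are independent.
    work = [(rng.random(dim), rng.random((n_patterns, dim)))
            for _ in range(n_nodes)]
    with Pool() as pool:                     # scales with available cores
        beliefs = pool.map(node_update, work)
    print(len(beliefs), beliefs[0].shape)
```

On a machine like the XD1 the same decomposition maps naturally onto MPI ranks or FPGA pipelines, since each node touches only its own small block of data.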

Cited by 15 publications (7 citation statements). References 23 publications.
“…These difficulties notwithstanding, there have been several machine learning techniques translated successfully to work in parallel on FPGAs, all reporting significant improvements over standard software implementations on conventional CPU architectures. These include a neurologically inspired hierarchical Bayesian model used for invariant object recognition [42], an implementation of RankBoost for web search relevance ranking [48], and a low-precision implementation of Support Vector Machines using Sequential Minimal Optimisation [8].…”
Section: Single Machine Parallelism (mentioning, confidence: 99%)
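One reason techniques like the low-precision SVM cited above map well to FPGAs is that floating-point arithmetic can be replaced by narrow fixed-point operations. As a rough illustration of that idea (a minimal sketch under arbitrary bit-width and scale choices, not the cited implementation), the snippet below evaluates a linear SVM decision function with 8-bit fixed-point multiply-accumulates:

```python
# Illustrative sketch only: a linear SVM decision value computed with
# 8-bit fixed-point integers, the kind of reduced-precision arithmetic
# that maps well onto FPGA logic. Bit widths and scales are arbitrary
# choices for illustration, not those of the cited implementation.
import numpy as np

def quantize(x, scale, bits=8):
    """Map floats to signed fixed-point integers at the given scale."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def svm_decide_lowprec(x, w, b, scale=16):
    """Decision value using only integer multiply-accumulate operations."""
    xq, wq = quantize(x, scale), quantize(w, scale)
    acc = int(np.dot(xq, wq))              # integer MAC, as in FPGA fabric
    return acc / (scale * scale) + b       # rescale once at the end

rng = np.random.default_rng(1)
x, w, b = rng.standard_normal(16), rng.standard_normal(16), 0.1
print(svm_decide_lowprec(x, w, b), float(np.dot(x, w)) + b)  # close agreement
```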
“…Rice et al. [25] have proposed a neocortex-inspired cognitive model on the Cray XD1 supercomputer. The HTM, based on a hierarchical Bayesian network model proposed in [11], uses advanced software and reconfigurable hardware implementations to scale a model based on the human visual cortex to interesting problems.…”
Section: Related Work (mentioning, confidence: 99%)
“…The HTM, based on a hierarchical Bayesian network model proposed in [11], uses advanced software and reconfigurable hardware implementations to scale a model based on the human visual cortex to interesting problems. Like ourselves, Rice et al. [25] take advantage of a massive amount of inherent parallelism in a model based on the neocortex. However, as described above, our implementation of a neocortex-inspired model does not use Bayesian inference.…”
Section: Related Work (mentioning, confidence: 99%)
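The citing authors contrast their non-Bayesian approach with the Bayesian inference used in the HTM-style model of Rice et al. For readers unfamiliar with what that per-node Bayesian step looks like, here is a minimal sketch (an assumption-laden illustration, not the published implementation): each node fuses bottom-up evidence from its children with a top-down prior from its parent via Bayes' rule over its stored patterns.

```python
# Minimal sketch of the per-node Bayesian update in an HTM-like hierarchy:
# bottom-up evidence from child nodes is combined with a top-down prior
# from the parent via Bayes' rule. Shapes, the stored-pattern matrix, and
# the message layout are hypothetical, not the published model.
import numpy as np

def node_belief(child_messages, pattern_memory, parent_prior):
    """
    child_messages : list of probability vectors, one per child node
    pattern_memory : (n_patterns, total_child_dim) matrix mapping each
                     stored pattern to the child states it expects
    parent_prior   : prior over this node's stored patterns (top-down)
    Returns the posterior belief over the node's stored patterns.
    """
    evidence = np.concatenate(child_messages)   # bottom-up evidence vector
    likelihood = pattern_memory @ evidence      # ~ P(evidence | pattern)
    posterior = likelihood * parent_prior       # Bayes' rule, unnormalized
    return posterior / posterior.sum()

rng = np.random.default_rng(2)
children = [rng.dirichlet(np.ones(8)) for _ in range(4)]  # 4 child messages
memory = rng.random((16, 32))                             # 16 stored patterns
prior = np.full(16, 1.0 / 16)                             # uniform top-down prior
print(node_belief(children, memory, prior))
```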
“…is still struggling to achieve system-level "general-purpose artificial intelligence" [12]. But recently, the computational neuroscience community has begun developing scalable Bayesian models (based on the Bayesian framework [13]-[15]) that have the potential to be applied to large-scale applications such as speech recognition, computer vision, image content recognition, robotic control, and making sense of massive quantities of data [4], [16]. Some of these new algorithms are ideal candidates for large-scale hardware investigation (and future implementation), especially if they can leverage the high-density processing/storage advantages of hybrid nanoelectronics [1], [3].…”
Section: CMOL/CMOS Implementations of Bayesian Polytree (mentioning, confidence: 99%)
“…The recent work by Rice et al. [16], [25] and Vutsinas et al. [26] explores the combined implementations in regions 3 and 4 for the GHM [13]. At this time, we are not aware of any work on a custom hardware implementation of the GHM [13].…”
Section: B. Existing Hardware Implementations of GHM (mentioning, confidence: 99%)