2023
DOI: 10.1088/1742-5468/ad0a86

Parallel learning by multitasking neural networks

Elena Agliari,
Andrea Alessandrelli,
Adriano Barra
et al.

Abstract: Parallel learning, namely the simultaneous learning of multiple patterns, constitutes a modern challenge for neural networks. While this cannot be accomplished by standard Hebbian associative neural networks, in this paper we show how the multitasking Hebbian network (a variation on the theme of the Hopfield model, working on sparse datasets) is naturally able to perform this complex task. We focus on systems processing in parallel a finite (up to logarithmic growth in the size of the network) number of patterns…
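To make the mechanism described in the abstract concrete, here is a minimal Python sketch (not the authors' code) of a multitasking Hebbian network: sparse patterns with entries in {-1, 0, +1} are stored via the standard Hebbian rule, and zero-temperature dynamics started from a mixture of patterns keeps a non-zero Mattis overlap with several patterns simultaneously. All names and parameter values (N, K, the dilution d) are illustrative assumptions, not taken from the paper.

```python
# Sketch of parallel retrieval in a multitasking Hebbian network.
# Assumption: pattern entries are 0 with probability d, else +/-1.
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 2000, 3, 0.7  # neurons, patterns, dilution (illustrative values)

# Sparse patterns xi^mu: entry 0 with prob. d, else +/-1 with equal probability.
xi = rng.choice([-1, 0, 1], size=(K, N), p=[(1 - d) / 2, d, (1 - d) / 2])

# Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, no self-coupling.
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

def mattis_overlaps(sigma):
    """Mattis magnetizations m_mu = (1/N) sum_i xi_i^mu sigma_i."""
    return xi @ sigma / N

# Initial state: a mixture of all K patterns (zeros filled with random spins).
sigma = np.sign(xi.sum(axis=0))
sigma[sigma == 0] = rng.choice([-1, 1], size=(sigma == 0).sum())

# Zero-temperature asynchronous dynamics: sigma_i <- sign(sum_j J_ij sigma_j).
for _ in range(10):
    for i in rng.permutation(N):
        h = J[i] @ sigma
        if h != 0:
            sigma[i] = 1 if h > 0 else -1

print("overlaps with the K patterns:", np.round(mattis_overlaps(sigma), 2))
# With enough dilution the overlaps stay simultaneously non-zero: the network
# retrieves several patterns in parallel rather than collapsing onto one.
```

Because each sparse pattern leaves many neurons free (its zero entries), the free neurons can align with other patterns; this is why dilution, rather than a modified learning rule, is what enables the parallel retrieval the abstract refers to.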

Cited by 2 publications (1 citation statement) · References 34 publications
“…From the mathematical viewpoint, possible future developments would be relaxing the constraints of a self-averaging Mattis magnetization and inspecting how the learning and retrieval properties of these networks change with different kinds of noise: in this work we have focused on multiplicative noise; however, the use of additive noise has lately gained large popularity in generative models for machine learning (see for instance [56]). Furthermore, within the framework of multiplicative noise, an interesting outlook would be considering corruptions of archetypes which also consist of blank (in addition to inverted) entries, as done recently in [57] for networks away from saturation. Their operation in the saturated regime and the effects of replica symmetry breaking have not yet been investigated: we plan to report soon on these topics.…”
Section: Conclusion and Outlooks (mentioning)
confidence: 99%
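The quoted passage contrasts multiplicative noise (possibly with blank entries) against additive noise as ways of corrupting archetypes into training examples. Below is a hedged sketch of these three corruption channels and of the empirical Mattis magnetization one can read off the example average; sizes and noise rates are arbitrary illustrative assumptions, not the cited papers' exact setup.

```python
# Three ways of corrupting an archetype xi into examples eta (illustrative).
import numpy as np

rng = np.random.default_rng(1)
N, M, p_flip, p_blank = 1000, 200, 0.2, 0.1  # sizes/rates are assumptions

archetype = rng.choice([-1, 1], size=N)

# Multiplicative noise: eta_i = chi_i * xi_i, with chi_i = -1 w.p. p_flip.
chi = np.where(rng.random((M, N)) < p_flip, -1, 1)
examples_mult = chi * archetype

# Blank entries on top of the flips: a fraction p_blank of entries is zeroed.
blanks = rng.random((M, N)) < p_blank
examples_blank = np.where(blanks, 0, examples_mult)

# Additive noise instead: eta = xi + z with Gaussian z, as in diffusion-style
# generative models (the setting the quotation alludes to via [56]).
examples_add = archetype + rng.normal(0.0, 0.5, size=(M, N))

# Averaging the examples recovers the archetype: the empirical magnetization
# m = (1/N) sum_i xi_i <eta_i> concentrates near (1 - 2*p_flip)*(1 - p_blank).
m = (archetype * examples_blank.mean(axis=0)).mean()
print(f"empirical m = {m:.3f}, expected ~ {(1 - 2*p_flip) * (1 - p_blank):.3f}")
```

The point of the comparison: under multiplicative noise (flips and blanks) the signal survives in the sign structure of the example average, whereas additive noise blurs the entries continuously, which is why the two regimes call for different analyses of learning and retrieval.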