2022
DOI: 10.48550/arxiv.2204.13361
Preprint
It's DONE: Direct ONE-shot learning without training optimization

Abstract: Learning a new concept from one example is a superior function of the human brain, and it is drawing attention in the field of machine learning as the one-shot learning task. In this paper, we propose the simplest method for this task, named Direct ONE-shot learning (DONE). DONE adds a new class to a pretrained deep neural network (DNN) classifier with neither training optimization nor modification of other classes. DONE is inspired by Hebbian theory and directly uses the neural activity input of the final dense layer obt…
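As a rough illustration of the idea the abstract describes, the following is a minimal sketch of DONE-style one-shot class addition: the embedding of a single example becomes the new class's weight row in the final dense layer, with no gradient updates and no change to existing classes. This is an assumption-laden illustration, not the authors' code; the backbone is stubbed out, and the norm-matching scale is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_classes = 16, 5
W = rng.normal(size=(n_classes, dim))          # stand-in pretrained final-layer weights

def embed(x):
    """Stub for the frozen backbone's penultimate-layer activations."""
    return np.asarray(x, dtype=float)

def add_class(W, x_new):
    """Append a new class whose weight vector is the (scaled) embedding of a
    single training example: no training optimization, and the existing rows
    of W are left untouched."""
    h = embed(x_new)
    h = h / np.linalg.norm(h)                  # unit-normalize the neural activity
    scale = np.mean(np.linalg.norm(W, axis=1)) # match the norm of existing rows
    return np.vstack([W, scale * h])

x_new = rng.normal(size=dim)                   # one example of the new class
W2 = add_class(W, x_new)
logits = W2 @ (x_new / np.linalg.norm(x_new))
# logits[-1] equals the scale factor: the new class responds strongly
# to its own training example.
print(W2.shape)
```

Because the operation is a single deterministic append, adding the class is instantaneous and the outputs for all pre-existing classes are bit-for-bit unchanged.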

Cited by 4 publications (3 citation statements)
References 22 publications
“…This underscores the efficacy of transfer learning, which can yield good performance even with a limited number of training images. Moreover, in a more general case, when adding a new class (in this context, a new species), even simple deterministic operations conducted in a non-parametric manner are known to achieve accuracy levels of approximately 80% for existing classes using only a single training instance (Hosoda et al., 2022). We detected individual microorganisms in images of microcosms or standard monoculture samples and used the detected data with YOLO confidence scores (Redmon et al., 2016) of 0.5 or higher.…”
Section: Machine Learning Methods
“…For an ANN image classifier, we used ViT-B/32 (a Vision Transformer [6]) pretrained on the ImageNet 1000 classes. Some of the 90 images were not covered by those 1000 classes, so we added 15 new classes to the ANN by DONE (Direct ONE-shot learning [7]), using three Creative-Commons-licensed images from the internet as the training data for each of the 15 classes. We determined the "correct class" for each binary image as a coarse-categorically-correct class [8], i.e., a top-1 or top-2 output obtained when inputting the corresponding color image.…”
Section: Dataset and ANN Image Classifier
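The citing paper above uses three images per new class rather than one. A natural extension of the single-example sketch, shown below, is to average the normalized embeddings into a class prototype before appending it; this is an illustrative assumption about how multiple examples could be combined, not a procedure taken from either paper.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 16
W = rng.normal(size=(10, dim))                 # stand-in pretrained final-layer weights

def add_class_from_examples(W, embeddings):
    """Append one new class built from several example embeddings by
    averaging their unit-normalized activity vectors (a prototype)."""
    units = [h / np.linalg.norm(h) for h in embeddings]
    proto = np.mean(units, axis=0)
    proto = proto / np.linalg.norm(proto)      # re-normalize the prototype
    scale = np.mean(np.linalg.norm(W, axis=1)) # match existing weight norms
    return np.vstack([W, scale * proto])

# e.g., embeddings of three training images for one new class
examples = [rng.normal(size=dim) for _ in range(3)]
W_new = add_class_from_examples(W, examples)
print(W_new.shape)
```

Repeating this once per class would add the 15 classes mentioned above as 15 extra rows, again without touching the pretrained weights.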
“…A typical example of the brain's adaptability in responding to unexpected situations is inspiration via the Eureka effect, the ability to arrive at the correct answer to an unlearned task without any learning, by taking a relatively long time to think [39,40]. The brain can also learn a new concept from a single example via Hebbian learning, a simple process of strengthening the synaptic connections that are used [41]. Conversely, it has been theoretically shown that ecosystems can have adaptability akin to Hebbian learning in neural networks [42].…”
Section: Figure 2 Mechanism for Information Increase and Identifying ...