2018
DOI: 10.1007/s42044-018-00027-6
A review of various semi-supervised learning models with a deep learning and memory approach

Cited by 29 publications (11 citation statements)
References 23 publications
“…Labeled data is challenging to access in machine learning, whereas unlabeled data is frequently collected and accessed quickly. In most initiatives, however, most of the data is unlabeled and only some is labeled [80]. So, in machine learning and data mining, the primary assumption is that the training and future data have the same distribution and properties [81].…”
Section: Results
Citation type: mentioning
Confidence: 99%
“…Semi-supervised learning approaches include self-training methods [6], generative models [1,64], and graph- and vector-based methods [5]. The most common objective in semi-supervised learning is to directly improve the performance of the supervised learning part [24,39].…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
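
The statement above names self-training as one family of semi-supervised approaches. The following is a minimal sketch of self-training via pseudo-labeling, assuming scikit-learn; the classifier choice, confidence threshold, and number of rounds are illustrative assumptions, not details from the reviewed paper.

```python
# Minimal self-training sketch: iteratively promote confident predictions on
# unlabeled data to pseudo-labels and retrain the supervised model.
# Assumptions: scikit-learn available, binary/multiclass NumPy arrays as input.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.95, rounds=5):
    """Return a classifier trained on labeled data plus confident pseudo-labels."""
    X, y = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Add confidently predicted points to the training set, remove them from the pool.
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    return clf
```

The aim, as the quoted statement notes, is to use the unlabeled pool to improve the supervised component rather than to model the unlabeled data for its own sake.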
“…Note that a high number of data points is required for the training of the deep models, but a program can be induced from a single demonstration as shown in Section 3. The network included three convolutional layers with 3 kernels each, with sizes of (7, 7), (5, 5), and (3, 3), with ReLU activation. These layers allow the network to extract features from the images.…”
Section: Program Synthesis and Autonomous Control
Citation type: mentioning
Confidence: 99%
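
A minimal sketch of the convolutional feature extractor described in the statement above, assuming PyTorch. The three layers with 3 kernels each and kernel sizes (7, 7), (5, 5), and (3, 3) with ReLU follow the quoted text; the input channel count, image size, and lack of padding are illustrative assumptions.

```python
# Three convolutional layers (3 kernels each, sizes 7/5/3) with ReLU activations,
# used purely as a feature extractor as described in the citing paper.
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv2d(in_channels=1, out_channels=3, kernel_size=7),  # (7, 7) kernels
    nn.ReLU(),
    nn.Conv2d(in_channels=3, out_channels=3, kernel_size=5),  # (5, 5) kernels
    nn.ReLU(),
    nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3),  # (3, 3) kernels
    nn.ReLU(),
)

# Example: extract feature maps from a batch of 64x64 single-channel images.
images = torch.randn(8, 1, 64, 64)
features = feature_extractor(images)
print(features.shape)  # torch.Size([8, 3, 52, 52])
```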
“…All artificial intelligence models will use some training data, such as images from neuroimaging techniques and other electronic healthcare data, to extract full features or direct samples to classify, detect, and recognize ARD. Numerous ML applications involve tasks that can be set up as supervised and semi-supervised learning [15]. ML algorithms have often reached accuracies above 96% in classifying AD [16,17].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%