2016
DOI: 10.48550/arxiv.1610.02373
Preprint

Distributed Averaging CNN-ELM for Big Data

Arif Budiman,
Mohamad Ivan Fanany,
Chan Basaruddin

Abstract: Increasing the scalability of machine learning to handle big volumes of data is a challenging task. The scale-up approach has some limitations. In this paper, we propose a scale-out approach for CNN-ELM based on MapReduce at the classifier level. The map process is the CNN-ELM training for a certain partition of the data; it involves many CNN-ELM models that can be trained asynchronously. The reduce process is the averaging of all CNN-ELM weights as the final training result. This approach can save much more training time than singl…
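The map/reduce scheme the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the CNN feature-extraction stage is omitted, the ELM output weights are solved by a least-squares pseudo-inverse, and all function names and the toy data are hypothetical. Each partition trains its own ELM (the map step, which could run asynchronously), and the final model simply averages the per-partition output weights (the reduce step).

```python
import numpy as np

def train_elm_partition(X, y, W_in):
    """Map step (sketch): train an ELM on one data partition.
    W_in is a fixed random projection shared by all partitions so
    that the learned output weights are directly comparable."""
    H = np.tanh(X @ W_in)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y   # output weights via least squares
    return beta

def reduce_average(betas):
    """Reduce step (sketch): average the output weights of all models."""
    return np.mean(betas, axis=0)

# Toy demo: split one dataset into two partitions.
rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 32))            # shared random input weights
X = rng.normal(size=(200, 4))
y = X @ rng.normal(size=(4, 1))

# Map: one ELM per partition (independently trainable).
betas = [train_elm_partition(Xp, yp, W_in)
         for Xp, yp in zip(np.split(X, 2), np.split(y, 2))]

# Reduce: average the weights into the final classifier.
beta_avg = reduce_average(betas)
pred = np.tanh(X @ W_in) @ beta_avg
print(beta_avg.shape)  # (32, 1)
```

Because each map task touches only its own partition, training parallelizes across machines; the single cheap reduce (a weight average) is what makes the scale-out claim plausible.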

Cited by 1 publication (1 citation statement)
References 17 publications

“…Furthermore, bias in machine learning models can also be exacerbated by the lack of diversity in the training data, where models trained on homogeneous datasets may fail to generalise well to diverse populations, leading to performance disparities across different groups (Shi et al, 2018). Additionally, bias can be introduced through the feature selection process, where certain features may be overemphasised or underrepresented, impacting the model's predictive capabilities (Budiman, 2016).…”
Section: Introduction
confidence: 99%