2019 International Conference on Smart Systems and Inventive Technology (ICSSIT)
DOI: 10.1109/icssit46314.2019.8987823
Optimization of Hadoop MapReduce Model in Cloud Computing Environment

Cited by 9 publications (4 citation statements) | References 21 publications
“…Apache Spark is a fast, general-purpose computational engine designed, like Hadoop, for large-scale data processing, with a computational model based on MapReduce [1,2]. In 2012, UC Berkeley's AMPLab developed and open-sourced a new big-data processing framework, Spark, whose core ideas include the following two aspects: on the one hand, the input, output, and intermediate data of the big-data processing framework are abstractly modeled and represented in a unified data structure named the Resilient Distributed Dataset (RDD).…”
Section: Spark
confidence: 99%
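
As a rough illustration of the RDD idea described in the statement above, here is a minimal PySpark sketch (my own example, not code from the cited paper or the quoting study) in which input, intermediate results, and output are all represented by the same RDD abstraction:

    # Minimal RDD sketch: input, intermediate results, and output all share
    # one abstraction, the Resilient Distributed Dataset (RDD).
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "rdd-sketch")

    # Input modeled as an RDD.
    lines = sc.parallelize(["big data processing", "spark and hadoop"])

    # Each transformation produces a new, immutable intermediate RDD.
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # Output collected from the final RDD, e.g. [('big', 1), ('data', 1), ...]
    print(counts.collect())
    sc.stop()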
“…MapReduce is the main programming model of cloud computing, used to process and generate large datasets for a variety of real-world tasks [2]. Dayanand et al. put forward an optimized HPMR (Hadoop MapReduce) model that balances performance between the I/O system and the CPU [3]. Liu Jun et al. proposed a configuration-parameter tuning method based on a feature-selection algorithm, which improved the working efficiency of MapReduce in Hadoop [4].…”
Section: Introduction
confidence: 99%
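
To make the MapReduce model referenced above concrete, here is a minimal word-count sketch in Python. It is a generic illustration of the map and reduce phases, not the HPMR optimization or the parameter-tuning method from the cited works; in a real Hadoop job the two phases run as separate distributed tasks with a shuffle in between, which local sorting stands in for here:

    # Generic MapReduce sketch (word count). In Hadoop the map and reduce
    # phases execute as separate distributed tasks; here they are chained
    # locally, with sorting standing in for the shuffle step.
    import sys
    from itertools import groupby

    def mapper(lines):
        # Map phase: emit a (word, 1) pair for every word in the input.
        for line in lines:
            for word in line.split():
                yield word, 1

    def reducer(pairs):
        # Reduce phase: group pairs by key and sum the counts per word.
        for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
            yield word, sum(count for _, count in group)

    if __name__ == "__main__":
        for word, total in reducer(mapper(sys.stdin)):
            print(word, total, sep="\t")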
“…Different users can run different queries over the data, extract valuable results from the filtered data, and then rank those results along the dimensions they need. Such analyses let users discover current business trends and adjust their plans accordingly. Complexity: the complexity of a system is measured by the extent of interdependence within enormous data structures; a tiny change in one or a few pieces can lead to extremely large changes across the whole system, a small change may shift only a portion of the system yet have far-reaching or cascading impacts, or there may be no alteration at all (Katal, Wazid, & Goudar, 2013) [3].…”
Section: Introduction
confidence: 99%