2014
DOI: 10.3844/jcssp.2014.2194.2210
Mapreduce Challenges on Pervasive Grids

Abstract: This study presents advances in designing and implementing scalable techniques to support the development and execution of MapReduce applications on pervasive distributed computing infrastructures, in the context of the PER-MARE project. A pervasive framework for MapReduce applications is very useful in practice, especially for scientific, enterprise and educational centers that have many unused or underused computing resources, which can be fully exploited to solve relevant problems that demand larg…

Cited by 7 publications (2 citation statements)
References 23 publications (16 reference statements)
“…We believe that this twofold approach is essential to understand and cover all the problem issues. In the context of the PER-MARE project, this paper focuses on improving Hadoop fault-tolerance and context-awareness [23], enabling its deployment over a pervasive environment [24]. Figure 5.…”
Section: B. The PER-MARE Project
confidence: 99%
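The fault-tolerance concern mentioned above boils down to detecting when volatile nodes in a pervasive grid silently disappear so their tasks can be rescheduled. A minimal sketch of one common approach, heartbeat-based failure detection, is given below; the class and method names (`HeartbeatMonitor`, `failed_nodes`) are illustrative assumptions, not part of PER-MARE or Hadoop.

```python
class HeartbeatMonitor:
    """Minimal heartbeat-based failure detector for volatile worker nodes.

    A sketch only: real MapReduce masters (e.g. Hadoop's) combine heartbeats
    with task-progress reports and speculative re-execution.
    """

    def __init__(self, timeout_s=10.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # node id -> timestamp of last heartbeat

    def heartbeat(self, node, now):
        # Record that `node` reported alive at time `now` (seconds).
        self.last_seen[node] = now

    def failed_nodes(self, now):
        # Nodes whose last heartbeat is older than the timeout are presumed
        # failed; their in-flight map/reduce tasks must be rescheduled.
        return [n for n, t in self.last_seen.items()
                if now - t > self.timeout_s]


monitor = HeartbeatMonitor(timeout_s=10.0)
monitor.heartbeat("node-a", now=0.0)
monitor.heartbeat("node-b", now=5.0)
print(monitor.failed_nodes(now=12.0))  # → ['node-a']
```

In a pervasive setting the timeout must be tuned generously, since desktop-class nodes may pause (user activity, suspend) without having actually failed.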
“…Some studies have handled volatility in open systems by using a small set of dedicated nodes to ensure the minimum amount of resources required to execute MapReduce jobs [16]. In [17] the authors considered the case of pervasive grids and monitored node capacity (i.e., processors and memory) to improve Hadoop scheduler decisions. Unlike those previous works, our contribution targets environments such as private Cloud data centers.…”
Section: Limitations
confidence: 99%
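The capacity-monitoring idea cited above (ranking heterogeneous nodes by spare processors and memory before placing tasks) can be sketched as a simple weighted scoring function. This is an illustrative assumption of how such a ranking might look, not the actual PER-MARE or Hadoop scheduler logic; node fields and weights are hypothetical.

```python
def rank_nodes(nodes, cpu_weight=0.5, mem_weight=0.5):
    """Rank candidate workers by fraction of free capacity, best first.

    Each node is a dict with free/total cores and free/total memory (MB);
    the scheduler would prefer the highest-scoring node for the next task.
    """
    def score(n):
        cpu_frac = n["free_cores"] / n["total_cores"]
        mem_frac = n["free_mem_mb"] / n["total_mem_mb"]
        return cpu_weight * cpu_frac + mem_weight * mem_frac

    return sorted(nodes, key=score, reverse=True)


nodes = [
    {"name": "lab-pc-1", "free_cores": 1, "total_cores": 4,
     "free_mem_mb": 1024, "total_mem_mb": 8192},
    {"name": "lab-pc-2", "free_cores": 3, "total_cores": 4,
     "free_mem_mb": 6144, "total_mem_mb": 8192},
]
print(rank_nodes(nodes)[0]["name"])  # → lab-pc-2 (more spare capacity)
```

Using fractions rather than absolute counts keeps the ranking meaningful across heterogeneous machines, which is the typical situation in a pervasive grid built from desktops of varying size.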