2010
DOI: 10.1007/978-3-642-13067-0_27
Variable-Sized Map and Locality-Aware Reduce on Public-Resource Grids

Cited by 8 publications (6 citation statements)
References 7 publications
“…Large-scale data-intensive applications such as distributed data processing rely on minimal abstractions that hide architectural details: computation is parallelized automatically [17], and transparent fault tolerance is provided through data repetition, replication, and continuous copying of the data used in computation [3]. An important characteristic of MapReduce [4,6,12] is its simplicity, which lets programmers write functional-style code and express job execution in a simple and reliable way.…”
Section: Priority Based Classical Data Encapsulated Scheduling For Ne…
confidence: 99%
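To make the functional-style programming model mentioned in the statement above concrete, the following is a minimal, single-process sketch of a MapReduce-style word count. It is purely illustrative: the names map_fn, reduce_fn, and run_mapreduce are hypothetical and do not come from the cited papers, and a real framework would distribute the map phase and shuffle intermediate pairs across workers rather than grouping them locally.

```python
# Minimal functional-style MapReduce sketch (illustrative only).
from collections import defaultdict
from typing import Iterable, Iterator

def map_fn(record: str) -> Iterator[tuple[str, int]]:
    # Emit (word, 1) for every word in one input record.
    for word in record.split():
        yield word.lower(), 1

def reduce_fn(key: str, values: Iterable[int]) -> tuple[str, int]:
    # Sum all counts emitted for the same word.
    return key, sum(values)

def run_mapreduce(records: Iterable[str]) -> dict[str, int]:
    # A real framework would shard the map phase across workers and
    # shuffle intermediate pairs by key; here both phases run locally.
    grouped: dict[str, list[int]] = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            grouped[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in grouped.items())

if __name__ == "__main__":
    print(run_mapreduce(["to be or not to be", "to map and to reduce"]))
```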
“…One such issue is data reduction, which needs to be implemented in very large-scale cluster data de-duplication systems [13]. The similarity-based deduplication scheme [3,4] optimizes the elimination procedure by taking into account the locality and similarity of data points in both inter-node and intra-node settings. Today's technologies such as Hadoop and MapReduce [1,2,8] face this kind of redundancy elimination during data utilization.…”
Section: Introduction
confidence: 99%
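The locality- and similarity-aware scheme referenced above is more elaborate than what can be shown here; the sketch below only illustrates the basic building block of exact, fingerprint-based block de-duplication with a per-node (intra-node) index and a shared (inter-node) index. The names dedupe_blocks and SHARED_INDEX are hypothetical, not from the cited works.

```python
# Illustrative fingerprint-based de-duplication sketch (not the scheme of [3,4]).
import hashlib
from typing import Iterable

SHARED_INDEX: set[str] = set()   # fingerprints known cluster-wide (inter-node)

def dedupe_blocks(blocks: Iterable[bytes]) -> list[bytes]:
    local_index: set[str] = set()    # fingerprints seen on this node (intra-node)
    unique: list[bytes] = []
    for block in blocks:
        fp = hashlib.sha256(block).hexdigest()
        if fp in local_index or fp in SHARED_INDEX:
            continue                 # duplicate: skip storing the block
        local_index.add(fp)
        SHARED_INDEX.add(fp)         # publish so other nodes can skip it too
        unique.append(block)
    return unique

if __name__ == "__main__":
    data = [b"aaaa", b"bbbb", b"aaaa"]
    print(len(dedupe_blocks(data)))  # 2: the repeated block is dropped
```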
“…Ussop [2] employs MapReduce on public-resource grids and suggests variable-size map tasks. LARTS [9] attempts to collocate reduce tasks with the maximum required data.…”
Section: Related Work
confidence: 99%
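As a rough illustration of the locality-aware reduce placement idea attributed to LARTS [9] in the statement above, the sketch below schedules each reduce task on the node that already stores the largest share of its intermediate partition, so that less shuffle data crosses the network. This is a simplified sketch of the general idea, not the authors' implementation; the data layout and the function name choose_reduce_nodes are hypothetical.

```python
# Sketch: place each reduce task on the node holding most of its partition.
def choose_reduce_nodes(partition_bytes: dict[int, dict[str, int]]) -> dict[int, str]:
    """partition_bytes[r][node] = bytes of reduce partition r stored on node."""
    placement: dict[int, str] = {}
    for r, per_node in partition_bytes.items():
        # Pick the node that already holds the most bytes of partition r.
        placement[r] = max(per_node, key=per_node.get)
    return placement

if __name__ == "__main__":
    sizes = {
        0: {"nodeA": 700, "nodeB": 100, "nodeC": 200},
        1: {"nodeA": 50,  "nodeB": 900, "nodeC": 50},
    }
    print(choose_reduce_nodes(sizes))  # {0: 'nodeA', 1: 'nodeB'}
```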
“…In our experiments, we observed that the total intermediate output (or total reduce input) size is sometimes equal to the total input size of all map tasks (e.g., sort) or even larger (e.g., 44.2% for K-means). Similar observations were reached in [10], [13].…”
Section: Introduction
confidence: 99%