2013
DOI: 10.1186/1869-0238-4-18

Internet-scale support for map-reduce processing

Abstract: Volunteer Computing systems (VC) harness computing resources of machines from around the world to perform distributed independent tasks. Existing infrastructures follow a master/worker model, with a centralized architecture. This limits the scalability of the solution due to its dependence on the server. Our goal is to create a fault-tolerant VC platform that supports complex applications, by using a distributed model which improves performance and reduces the burden on the server. In this paper we present VMR…

Cited by 17 publications (9 citation statements)
References 18 publications
“…Most of the existing volunteer computing systems, as well as the architectures described in Section IV, consider a centralized system, with communication going through a single server or coordinator that fulfills the role of job scheduler and handles the task distribution and result validation. As a result, these systems either do not implement any result aggregation and validation mechanisms or create a considerable overhead on the server and are thus limited to embarrassingly parallel applications [64]. Recent solutions tolerate clients' failures by assigning the same task to multiple devices and by replicating the intermediate and final output of the computations across different devices [64], [65].…”
Section: Results Validation and Aggregation
confidence: 99%
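The replication-and-validation scheme described in the statement above (assigning the same task to multiple devices and accepting the result the majority agrees on) can be sketched as follows. This is a minimal illustration under assumed semantics, not the implementation from the cited systems; the function and worker names are hypothetical:

```python
from collections import Counter

def validate_by_replication(task, workers, replication=3):
    """Run the same task on several volunteer workers and accept the
    majority result, tolerating faulty or malicious clients."""
    results = [worker(task) for worker in workers[:replication]]
    value, votes = Counter(results).most_common(1)[0]
    if votes > replication // 2:
        return value  # a strict majority of replicas agree
    raise ValueError("no majority: task must be rescheduled")

# Example: one faulty worker out of three is outvoted.
workers = [lambda t: t * 2, lambda t: t * 2, lambda t: 0]  # third is faulty
print(validate_by_replication(21, workers))  # -> 42
```

In practice, systems of this kind also replicate intermediate and final outputs across clients so that a failed worker does not force recomputation from scratch.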
“…Nevertheless, as MapReduce's popularity increased, some platforms decided to use available resources over the Internet to run MapReduce jobs. Therefore, solutions such as SCOLARS [5], MOON [11], Tang [15] and Marozzo [7] already support MapReduce applications.…”
Section: Related Work
confidence: 99%
“…With respect to the few solutions [11,15,7,5] that support MapReduce, we are able to point out some issues (more details in Section 4): data distribution could be improved, intermediate data availability is overlooked, and there is a lack of support for multiple-cycle MapReduce applications.…”
confidence: 98%
“…Another similar work is VMR [5], a volunteer computing system able to run MapReduce applications on top of volunteer resources, spread throughout the Internet. VMR leverages users' bandwidth through the use of inter-client communication, and uses a lightweight task validation mechanism.…”
Section: MapReduce on Non-dedicated Computing Resources
confidence: 99%
“…MapReduce is an emerging programming model for large-scale data processing [6]. Recently, several MapReduce implementations have been designed for large-scale parallel data processing on desktop grid or volunteer resources over an intranet or the Internet, such as MOON [11], P2P-MapReduce [13], VMR [5], and HybridMR [18]. In our previous work, we also implemented a MapReduce system called BitDew-MapReduce, specifically for desktop grid environments [19].…”
Section: Introduction
confidence: 99%
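The programming model referenced throughout these statements reduces to two user-supplied functions: a mapper emitting (key, value) pairs and a reducer aggregating the values grouped per key. A minimal sequential sketch (an illustration of the model only, not any of the cited systems) is:

```python
from collections import defaultdict
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Minimal sequential MapReduce: map each record to (key, value)
    pairs, group values by key, then reduce each group."""
    groups = defaultdict(list)
    for key, value in chain.from_iterable(mapper(r) for r in records):
        groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Classic word count.
docs = ["map reduce", "map map"]
mapper = lambda doc: [(word, 1) for word in doc.split()]
reducer = lambda word, counts: sum(counts)
print(map_reduce(docs, mapper, reducer))  # -> {'map': 3, 'reduce': 1}
```

The desktop-grid and volunteer-computing systems listed above distribute exactly these map and reduce tasks across unreliable machines, which is why the replication and result-validation mechanisms discussed earlier become necessary.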