Abstract. A big data benchmark suite is urgently needed by customers, industry, and academia. This paper reviews a number of prominent benchmarking efforts from recent years, introduces their characteristics, and analyzes their shortcomings. The authors also offer suggestions for building the expected benchmark: component-based benchmarks and end-to-end benchmarks should be used together, to test individual tools as well as the system as a whole; workloads should be enriched with complex analytics to cover diverse application scenarios; and metrics beyond performance should also be considered.
MapReduce has shown vigorous vitality and penetrated both academia and industry in recent years. MapReduce is not merely an ETL tool; it can do much more. The technique has been applied to SQL summation, OLAP, data mining, machine learning, information retrieval, multimedia data processing, scientific data processing, and more. Essentially, MapReduce is a general-purpose parallel computing framework for processing large datasets. A big data analytics ecosystem built around MapReduce is emerging alongside the traditional one built around RDBMS. The objectives of RDBMS and MapReduce, and of the ecosystems built around them, overlap considerably; in some sense they do the same work, and MapReduce can additionally handle tasks, such as graph processing, that RDBMS cannot handle well. RDBMS enjoys high performance in relational data processing, on which MapReduce still needs to catch up. The authors envision that the two techniques are fusing into a unified system for big data analytics. In the ongoing endeavor to build such a system, much of the groundwork has been laid, but some critical issues remain unresolved; we try to identify some of them. Two of our works, together with experimental results, are presented: one applies a hierarchical encoding to star schema data in Hadoop for high-performance OLAP processing; the other leverages the three natural replicas of HDFS blocks to maintain different data layouts and thereby speed up queries in an OLAP workload, with a cost model used to route user queries to the appropriate layout.
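To make the map/shuffle/reduce pattern behind this discussion concrete, the following is a minimal single-process sketch of a SQL-style summation (GROUP BY with SUM) expressed as MapReduce. The data and function names are illustrative assumptions, not the authors' implementation or the Hadoop API; a real job would distribute the map and reduce phases across a cluster.

```python
from collections import defaultdict

# Sketch of the MapReduce pattern: map emits (key, value) pairs,
# the shuffle groups pairs by key, and reduce aggregates each group.

def map_phase(records):
    # Emit one (region, amount) pair per input record.
    for region, amount in records:
        yield region, amount

def shuffle(pairs):
    # Group emitted values by key, as the framework's shuffle would.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Equivalent of: SELECT region, SUM(amount) ... GROUP BY region
    return {key: sum(values) for key, values in groups.items()}

sales = [("east", 10), ("west", 5), ("east", 7)]
totals = reduce_phase(shuffle(map_phase(sales)))
print(totals)  # {'east': 17, 'west': 5}
```

The same three-stage skeleton generalizes to the other workloads mentioned above (e.g. counting terms for information retrieval, or aggregating partial gradients for machine learning) by changing only the map and reduce functions.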