2016
DOI: 10.1007/s10586-016-0712-4

SmallClient for big data: an indexing framework towards fast data retrieval

Abstract: Numerous applications continuously generate massive amounts of data, and it has become critical to extract useful information while maintaining acceptable computing performance. The objective of this work is to design an indexing framework that minimizes indexing overhead and improves query execution and data search performance with optimum aggregation of computing performance. We propose SmallClient, an indexing framework to speed up query execution. SmallClient has three modules: block creation, index c…
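The abstract names three SmallClient modules (block creation, index creation, and a third query-related module that is cut off above). The paper's algorithms are not reproduced on this page, so what follows is only a minimal illustrative sketch, in Python, of the general idea such a framework relies on: split the data into blocks, build a small per-attribute index over the blocks, and let a query touch only the blocks the index points to. All names (create_blocks, build_index, lookup) and the toy data are hypothetical and not taken from the paper.

from collections import defaultdict

def create_blocks(records, block_size):
    # Block creation: partition the record list into fixed-size blocks.
    return [records[i:i + block_size] for i in range(0, len(records), block_size)]

def build_index(blocks, attribute):
    # Index creation: map each value of the indexed attribute to the blocks containing it.
    index = defaultdict(set)
    for block_id, block in enumerate(blocks):
        for record in block:
            index[record[attribute]].add(block_id)
    return index

def lookup(blocks, index, attribute, value):
    # Query execution: scan only the blocks that the index lists for this value.
    return [record
            for block_id in sorted(index.get(value, ()))
            for record in blocks[block_id]
            if record[attribute] == value]

records = [{"id": i, "city": c} for i, c in enumerate(["NY", "LA", "NY", "SF", "LA", "NY"])]
blocks = create_blocks(records, block_size=2)
city_index = build_index(blocks, "city")
print(lookup(blocks, city_index, "city", "NY"))  # the three records with city == "NY"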

Cited by 16 publications (9 citation statements)
References 27 publications
Citing publications: 2017–2023
“…In terms of data indexing, SmallClient [11] improves query execution and search performance for big datasets and minimises the overhead of indexing. The framework is implementable on any distributed file system.…”
Section: B. Data Indexing Techniques (mentioning)
confidence: 99%
“…Thus, system overhead occurs with massive data. Siddiqa, A. et al. [5] designed a model named SmallClient to speed up query execution on massive datasets. This method did not use a probabilistic machine learning algorithm with B-tree indexing to achieve adaptive index creation by predicting the query workload and index attributes.…”
Section: Literature Review (mentioning)
confidence: 99%
“…• Indexing methods are effective for current datasets but inefficient with big data. Index size and indexing time are critical for massive datasets, so it is infeasible to tolerate long delays in data upload and data search operations [5].…”
Section: Challenges (mentioning)
confidence: 99%
“…Researchers have used different indexing techniques on big data: for example, indexing to improve query execution and search performance [4], big data coupled with Elasticsearch technology to satisfy daily health-care needs [5], a model with block creation, index creation, and query creation modules [6], R-tree-based indexing to support multi-dimensional data indexing in the cloud, and DataMed for searching biomedical datasets across repositories [7]. Similar to indexing, optimization is also a search technique in big data for finding the best optimum.…”
Section: Introduction (mentioning)
confidence: 99%