Query evaluation techniques for large databases
1993
DOI: 10.1145/152610.152611

Abstract: Database management systems will continue to manage large data volumes. Thus, efficient algorithms for accessing and manipulating large sets and sequences will be required to provide acceptable performance. The advent of object-oriented and extensible database systems will not solve this problem. On the contrary, modern data models exacerbate it: In order to manipulate large sets of complex objects as efficiently as today's database systems manipulate simple records, query processing algorithms and software wi…

Cited by 927 publications (491 citation statements)
References 146 publications (71 reference statements)
“…Assume that R is the left, and S the right, input of a join. Let H, NL, M, IN and SH, respectively, denote the above physical join operators, the algorithms for which are described briefly below: - Hash Join [7]: All R-tuples are read and stored in a hash table, indexed on the join attribute(s). Then, each S-tuple is read in turn and used to probe the hash table.…”
Section: Technical Context
confidence: 99%
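The build/probe structure described in this excerpt can be sketched in a few lines of Python. The relation and attribute names below (R, S, r_key, s_key) are illustrative assumptions for the example, not identifiers taken from the cited paper or from [7].

```python
from collections import defaultdict

def hash_join(R, S, r_key, s_key):
    """Hash join sketch: build a hash table on R, then probe it with each S-tuple."""
    table = defaultdict(list)
    # Build phase: index every R-tuple on its join attribute.
    for r in R:
        table[r[r_key]].append(r)
    # Probe phase: each S-tuple looks up matching R-tuples in the hash table.
    for s in S:
        for r in table.get(s[s_key], []):
            yield {**r, **s}

# Hypothetical usage:
R = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
S = [{"rid": 2, "value": 10}]
print(list(hash_join(R, S, "id", "rid")))
# -> [{'id': 2, 'name': 'b', 'rid': 2, 'value': 10}]
```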
“…In this paper, pipelined evaluation is assumed to be implemented using the iterator model [7], which has three principal functions: Open, Next and Close. The Open function prepares the operator for result production.…”
Section: Technical Context
confidence: 99%
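The Open/Next/Close interface mentioned above is the pull-based iterator model (often called the Volcano model) from [7]. The sketch below is an illustrative assumption of how such operators compose into a pipeline; the class names (Scan, Select) are invented for the example rather than taken from the cited works.

```python
class Scan:
    """Leaf operator producing tuples from an in-memory list."""
    def __init__(self, rows):
        self.rows = rows
    def open(self):                 # prepare the operator for result production
        self.pos = 0
    def next(self):                 # return the next tuple, or None when exhausted
        if self.pos >= len(self.rows):
            return None
        row = self.rows[self.pos]
        self.pos += 1
        return row
    def close(self):                # release any resources held by the operator
        self.pos = None

class Select:
    """Pipelined operator that filters tuples pulled from its child."""
    def __init__(self, child, predicate):
        self.child, self.predicate = child, predicate
    def open(self):
        self.child.open()
    def next(self):
        row = self.child.next()
        while row is not None and not self.predicate(row):
            row = self.child.next()
        return row
    def close(self):
        self.child.close()

# Hypothetical usage: pull all even numbers through the pipeline.
plan = Select(Scan([1, 2, 3, 4]), lambda x: x % 2 == 0)
plan.open()
row = plan.next()
while row is not None:
    print(row)                      # prints 2, then 4
    row = plan.next()
plan.close()
```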
“…liquid operators use the standard iterator model of query processing. Interested readers are referred to [7] for an introduction to these concepts.…”
Section: Liquid
confidence: 99%
“…However, the answers obtained from a reduced data set are only approximate and in most cases the error is large [11], which greatly limits the applicability of data reduction in the data warehouse context. Many parallel database systems have appeared both as research prototypes [12], [13], [14], [15] and as commercial products such as NonStop SQL from Tandem [16] or Oracle. However, even these "brute force" approaches have several difficulties when used in data warehouses, such as the well-known problems of finding effective solutions for parallel data placement and parallel joins.…”
Section: Related Work
confidence: 99%