Automata, Languages and Programming
DOI: 10.1007/bfb0015768
Dynamic interpolation search

Cited by 16 publications (37 citation statements); references 7 publications.

“…Interpolation search faces the limitation that, as the distribution deviates from the assumed linear scale, the algorithm degrades and reaches a worst-case complexity of O(n), the same as linear search. The interpolation search tree [14] data structure was introduced to overcome this drawback, with an average cost of O(log log n) for search operations and a worst-case cost of O(log² n). The augmented sampled forest, or ASF [15], was introduced by Arne Andersson and Christer Mattsson to support a large class of distributions using a dynamic approach.…”
Section: Interpolation Search (mentioning)
confidence: 99%
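
For orientation, a minimal sketch of classic interpolation search over a sorted array follows (the function name and the integer-truncation details are illustrative assumptions, not taken from the cited works). The probe index is computed by linear interpolation between the endpoint keys, which is exactly the step that degrades towards O(n) when the keys are far from uniformly spread.

def interpolation_search(keys, target):
    # keys: sorted sequence of numbers; returns an index of target, or -1 if absent.
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            # All remaining keys are equal; avoid division by zero.
            return lo if keys[lo] == target else -1
        # Linear-interpolation probe: assumes keys grow roughly linearly with index.
        pos = lo + int((hi - lo) * (target - keys[lo]) / (keys[hi] - keys[lo]))
        if keys[pos] == target:
            return pos
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

On nearly uniform keys this makes O(log log n) probes on average; skewed inputs such as exponentially spaced keys push it towards the O(n) worst case mentioned above.
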
“…Given two functions f1 and f2, a density function μ = μ[a,b](x) is (f1, f2)-smooth [30,4] if there exists a constant β, such that for all c1, c2, c3 with a ≤ c1 < c2 < c3 ≤ b, and all integers n, it holds that

∫ from c2 − (c3 − c1)/f1(n) to c2 of μ[c1,c3](x) dx ≤ β·f2(n)/n,

where μ[c1,c3] denotes μ restricted and renormalized to [c1, c3]. Intuitively, f1 partitions an arbitrary subinterval [c1, c3] ⊆ [a, b] into f1 equal parts, each of length (c3 − c1)/f1 = O(1/f1); that is, f1 measures how fine the partitioning of an arbitrary subinterval is. Function f2 guarantees that no part, of the f1 possible, gets more probability mass than β·f2/n; that is, f2 measures the sparseness of any subinterval [c2 − (c3 − c1)/f1, c2] ⊆ [c1, c3].…”
Section: Probability Distributions (mentioning)
confidence: 99%
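
As a standard illustration not drawn from the cited paper, the uniform density on [a, b] satisfies this definition with f1(n) = n, f2(n) = 1 and β = 1: restricted to any [c1, c3] it remains uniform, so a probe interval of length (c3 − c1)/n carries at most a 1/n fraction of the conditional mass,

\int_{c_2 - \frac{c_3 - c_1}{f_1(n)}}^{c_2} \mu_{[c_1, c_3]}(x)\, dx
  \;\le\; \frac{(c_3 - c_1)/n}{c_3 - c_1}
  \;=\; \frac{1}{n}
  \;=\; \frac{\beta \, f_2(n)}{n},

so the uniform density is (n, 1)-smooth; the inequality is an equality whenever the probe interval lies entirely inside [c1, c3].
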
“…However, none of these works consider trees that can have lower than logarithmic height under a wide range of distributions. In this work, we consider the parallel batched version of the Interpolation Search Tree (hereafter IST) introduced in [7]. The main difference from the previously studied data structures is that insertions, deletions and searches with smoothly distributed arguments take O(log log n) time, where n is the size of the tree.…”
(mentioning)
confidence: 99%
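
To see why the height, and hence the search cost, can drop to O(log log n), here is a small sequential sketch of an IST-like layout (the names and the rebuilding-free construction are illustrative assumptions; the parallel batched IST of [7] is not reproduced here). Each node splits its keys among roughly sqrt(n) children, and a search interpolates on the node's key range to guess the right child, so the subtree size shrinks from n to about sqrt(n) at every level.

import math

def build(keys):
    # keys: sorted list. Build a node with ~sqrt(n) children; short lists become leaves.
    n = len(keys)
    if n <= 4:
        return ("leaf", keys)
    step = max(2, math.isqrt(n))  # child size ~ sqrt(n)
    children = [build(keys[i:i + step]) for i in range(0, n, step)]
    maxima = [keys[min(i + step, n) - 1] for i in range(0, n, step)]  # max key per child
    return ("node", maxima, children, keys[0], keys[-1])

def search(node, target):
    if node[0] == "leaf":
        return target in node[1]
    _, maxima, children, lo_key, hi_key = node
    if not (lo_key <= target <= hi_key):
        return False
    # Interpolation step: guess the child from the target's relative position, then
    # correct the guess by walking over the child maxima (short when keys are smooth).
    frac = (target - lo_key) / (hi_key - lo_key) if hi_key > lo_key else 0.0
    i = min(int(frac * len(children)), len(children) - 1)
    while i > 0 and maxima[i - 1] >= target:
        i -= 1
    while i < len(children) - 1 and maxima[i] < target:
        i += 1
    return search(children[i], target)

With sqrt(n)-way branching the depth is Θ(log log n); the cited work additionally keeps this shape under insertions and deletions and processes operations in parallel batches, which this sketch does not attempt.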