2022
DOI: 10.1007/978-3-031-08421-8_32

Learned Sorted Table Search and Static Indexes in Small Model Space

Cited by 6 publications (16 citation statements)
References 17 publications

“…In particular, here we concentrate on the study of how different kinds of Binary and k-ary Searches can affect the performance of learned indexes. Moreover, following Amato et al.,3,4 we use datasets of varying sizes in order to understand how the data structures perform on the different levels of the internal memory hierarchy. In addition to that, we also use datasets generated as in Reference 12, in order to establish that the binary search routines we use behave consistently with the findings in the mentioned paper.…”
Section: Experimental Methodology
confidence: 99%
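
The statement above refers to the "last-mile" search that a learned index performs once a model has predicted an approximate position. As a purely illustrative sketch (not code from the cited works), the following C++ fragment contrasts Binary Search with a k-ary Search over the sub-array delimited by a model's predicted position and error bound; the interval in main() and the choice k = 3 are assumptions made only for the example.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Standard Binary Search restricted to the sub-array [lo, hi) that a
// learned model predicts to contain the query key (lo/hi would come from
// the model's position estimate plus its error bound).
std::size_t binary_search(const std::vector<std::uint64_t>& a,
                          std::size_t lo, std::size_t hi, std::uint64_t key) {
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        if (a[mid] < key) lo = mid + 1;
        else              hi = mid;
    }
    return lo;  // first position with a[lo] >= key
}

// k-ary Search on the same sub-array: each round probes k - 1 evenly
// spaced positions instead of one midpoint, giving a shallower search
// (fewer rounds) at the cost of more comparisons per round.
std::size_t kary_search(const std::vector<std::uint64_t>& a,
                        std::size_t lo, std::size_t hi,
                        std::uint64_t key, std::size_t k) {
    while (hi - lo > k) {
        std::size_t step = (hi - lo) / k;
        std::size_t new_lo = lo, new_hi = hi;
        for (std::size_t i = 1; i < k; ++i) {
            std::size_t probe = lo + i * step;
            if (a[probe] < key) {
                new_lo = probe + 1;   // answer lies strictly after this probe
            } else {
                new_hi = probe;       // answer is at or before this probe
                break;
            }
        }
        lo = new_lo;
        hi = new_hi;
    }
    while (lo < hi && a[lo] < key) ++lo;  // finish the small residual range
    return lo;
}

int main() {
    std::vector<std::uint64_t> a = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37};
    // Pretend a model predicted position 6 with error bound +/- 3 for key 19.
    std::size_t lo = 3, hi = 10;
    std::size_t i1 = binary_search(a, lo, hi, 19);
    std::size_t i2 = kary_search(a, lo, hi, 19, 3);
    assert(i1 == i2 && a[i1] == 19);
    std::cout << "key 19 found at position " << i1 << "\n";
    return 0;
}
```

The k-ary variant trades extra comparisons per round for a shallower search, which is the kind of trade-off the cited study evaluates across the levels of the memory hierarchy.
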
“…The second kind of dataset originates from the carefully chosen ones in Reference 19 (and therein referred to as amzn32, amzn64, face, osm, wiki). They were derived from those in References 3 and 4 so as to fit each level of the main memory hierarchy of the Intel I7 architecture. The essential point of the derivation is that, for each of the generated datasets, the CDF of the corresponding original dataset is well approximated.…”
Section: Experimental Methodology
confidence: 99%
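
The derivation summarized in the statement above is described only at a high level here. A hypothetical C++ sketch of one way to realize it, shrinking a sorted dataset by uniform rank sampling so that the reduced dataset's empirical CDF tracks the original's, is given below; the function name and the target sizes are illustrative and are not taken from References 3 and 4.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical sketch: reduce a sorted dataset to target_size keys by
// sampling at evenly spaced ranks. Because ranks are sampled uniformly,
// the empirical CDF of the reduced dataset approximates that of the
// original one, which is the property the derivation above relies on.
std::vector<std::uint64_t> shrink_preserving_cdf(
        const std::vector<std::uint64_t>& sorted_keys, std::size_t target_size) {
    std::vector<std::uint64_t> out;
    out.reserve(target_size);
    const double stride =
        static_cast<double>(sorted_keys.size()) / static_cast<double>(target_size);
    for (std::size_t i = 0; i < target_size; ++i) {
        out.push_back(sorted_keys[static_cast<std::size_t>(i * stride)]);
    }
    return out;
}

int main() {
    // Toy stand-in for a large sorted dataset.
    std::vector<std::uint64_t> big(1u << 20);
    for (std::size_t i = 0; i < big.size(); ++i) big[i] = 3 * i + 1;

    // Example target sizes, meant only to suggest "fits a given cache level";
    // the sizes actually used in the cited works are not given in this report.
    for (std::size_t n : {4096u, 65536u, 524288u}) {
        auto small = shrink_preserving_cdf(big, n);
        std::cout << n << " keys, first=" << small.front()
                  << " last=" << small.back() << "\n";
    }
    return 0;
}
```

Uniform rank sampling keeps the quantiles of the reduced dataset aligned with those of the original, which is the CDF-approximation property the statement mentions; how the cited works choose target sizes for each memory level is not specified in this report.
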