2012
DOI: 10.1007/978-3-642-30284-8_36

Exchange and Consumption of Huge RDF Data

Abstract: Huge RDF datasets are currently exchanged in textual RDF formats, hence consumers need to post-process them using RDF stores for local consumption, such as indexing and SPARQL querying. This results in a painful task requiring a great effort in terms of time and computational resources. A first approach to lightweight data exchange is a compact (binary) RDF serialization format called HDT. In this paper, we show how to enhance the exchanged HDT with additional structures to support some basic forms of SPARQL quer…
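The abstract describes consuming HDT-serialized data directly, without first importing it into an RDF store. As a rough illustration of that consumption model (not taken from the paper), the sketch below assumes the pyHDT bindings (the `hdt` Python package); the file name and choice of predicate are hypothetical.

```python
# Minimal sketch, assuming the pyHDT bindings ("hdt" package); "dbpedia.hdt"
# is a hypothetical file name used only for illustration.
from hdt import HDTDocument

# Loading maps the compact HDT serialization for querying; no triple-store
# import or re-indexing step is needed beforehand.
doc = HDTDocument("dbpedia.hdt")
print("triples in dataset:", doc.total_triples)

# Resolve a triple pattern (?s rdf:type ?o); empty strings act as wildcards.
# Matches are streamed directly from the HDT structures.
triples, cardinality = doc.search_triples(
    "", "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", ""
)
print("estimated matches:", cardinality)
for s, p, o in triples:
    print(s, p, o)
    break  # show only the first match
```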

Cited by 51 publications (51 citation statements)
References 11 publications
Citation classifications: 2 supporting, 48 mentioning, 0 contrasting
“…and ?P?G). This result is in line with previous HDT-FoQ [16] remarks, which shows that adding the quad information as a triple annotation in HDT-AT keeps the retrieval features of HDT.…”
Section: Performance for Quad Pattern Resolution (supporting)
Confidence: 92%
“…Table 2 lists the space requirements of the uncompressed RDF datasets in N-Quads notation (column "Size"), in gigabytes, the respective gzipped datasets (column "gzip") and the systems under review, as the ratio between the size for the required space and the uncompressed size. The numbers reported for HDT-AG and HDT-AT include the size of HDTQ and the additional HDT indexes (created with HDT-FoQ [16]) needed to resolve all quad patterns. Note that Virtuoso was not capable of importing the BEAR-A dataset due to a persistent error when inserting large quad data.…”
Section: Discussion (mentioning)
Confidence: 99%