Cited by 21 publications (15 citation statements)
References 1 publication
“…The seven best known representatives are XMark [13], XOO7 [17], XMach-1 [10], MBench [16], XBench [24], XPathMark [18] and TPoX [21].…”
Section: Related Work
confidence: 99%
“…At the heart of Tamino is a powerful XML engine providing all functionality necessary to dynamically process, generate and exchange XML documents [18]. Although various benchmarks had been developed for studying the efficiency of XML databases [19,20,21,22,23,24], most of them concentrate on defining a set of queries and specifications for evaluation of XML data management technologies. Other related works consist of evaluation of using various methods for extracting XML data from relational databases [16,25,26] and evaluation of XML query languages [27,28,29].…”
Section: XML Databases
confidence: 99%
“…This contrasts with single scenario benchmarks, such as TPC-C, TPC-H, TPC-R, TPC-W (Transaction Processing Performance Council (http://www.tpc.org)), Wisconsin benchmark (DeWitt, 1993), and XML application benchmarks, such as XMach-1 (Bohme & Rahm, 2002), XMark (Schmidt et al, 2002), XOO7 (Bressan et al, 2002), XPathMark (Franceschet, M. 2005) and TPoX (Nicola et al, 2007). We also rejected micro benchmarks, such as Michigan Benchmark (Runapongsa et al, 2006) and MemBeR (Afanasiev et al, 2005), since they evaluate at too small a level of query granularity.…”
Section: Benchmark
confidence: 99%